% arXiv:1102.2350 -- The best possible upper bound on the probability of undetected error for linear codes of full support
% Abstract: There is a known best possible upper bound on the probability of undetected error for linear codes. The $[n,k;q]$ codes with probability of undetected error meeting the bound have support of size $k$ only. In this note, linear codes of full support ($=n$) are studied. A best possible upper bound on the probability of undetected error for such codes is given, and the codes with probability of undetected error meeting this bound are characterized.
\section*{Upper bounds on $P_{\rm ue}(C,p)$ for linear codes $C$}
Let $n\ge k \ge 1$. An $[n,k;q]$ code is a linear code of length $n$ and dimension $k$ over the field $F_q$ of $q$ elements.
For an $[n,k;q]$ code $C$, the probability of undetected error $P_{\rm ue}(C,p)$ is the probability that a
codeword is changed to another codeword when transmitted over the $q$-ary symmetric channel. It is known,
see \cite[Theorem 2.51]{K}, that
\begin{theorem}
If $C$ is an $[n,k;q]$ code, then
\begin{equation}
\label{gb}
P_{\rm ue}(C,p) \le (1-p)^{n-k}-(1-p)^n
\end{equation}
for all $p\in [0,(q-1)/q]$. Moreover, the bound is best possible since the bound is met with equality for all $p$
for the code $C_{n,k}$ generated by $[I_k|0_{k\times (n-k)}]$.
Here $I_k$ is the $k\times k$ identity matrix, and $0_{k\times (n-k)}$ is the $k\times (n-k)$ matrix with all entries zero.
\end{theorem}
It is known (see e.g.\ \cite[Theorem 2.1]{K}) that
\[P_{\rm ue}(C,p)=(1-p)^n \left\{A_C\left( \frac{p}{(q-1)(1-p)}\right)-1 \right\}\]
where $A_C(z)$ is the weight distribution function of $C$.
In terms of the weight distribution, (\ref{gb}) is equivalent to
\[A_C(z)\le A_{C_{n,k}}(z) \mbox{ for all }z\in [0,1].\]
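These identities lend themselves to a quick numerical sanity check: compute the weight distribution of $C_{n,k}$ by brute force, evaluate $P_{\rm ue}$ from it, and compare with the bound (\ref{gb}). The following Python sketch does this for a small binary code; the helper names are ours, not from the paper.

```python
from itertools import product
from math import isclose

def weight_enumerator(gen, q):
    """Brute-force weight distribution (A_0,...,A_n) of the code generated by `gen` over F_q."""
    k, n = len(gen), len(gen[0])
    A = [0] * (n + 1)
    for x in product(range(q), repeat=k):
        cw = [sum(xi * gi for xi, gi in zip(x, col)) % q for col in zip(*gen)]
        A[sum(c != 0 for c in cw)] += 1
    return A

def p_ue(A, q, p):
    """P_ue(C,p) = (1-p)^n * (A_C(z) - 1) with z = p/((q-1)(1-p))."""
    n = len(A) - 1
    z = p / ((q - 1) * (1 - p))
    return (1 - p) ** n * (sum(a * z ** i for i, a in enumerate(A)) - 1)

# C_{5,3} over F_2: generator matrix [I_3 | 0_{3x2}]
q, n, k = 2, 5, 3
gen = [[1 if i == j else 0 for j in range(n)] for i in range(k)]
A = weight_enumerator(gen, q)
for p in (0.1, 0.3, 0.5):
    bound = (1 - p) ** (n - k) - (1 - p) ** n
    assert isclose(p_ue(A, q, p), bound)  # C_{n,k} attains the bound of the theorem
```

The assertions confirm that $C_{n,k}$ meets the bound with equality, as the theorem states.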
For a code $C$ of length $n$, the support $\chi(C)$ is the set of positions $i$ such that $c_i\ne 0$ for
some codeword $(c_1,c_2,\ldots ,c_n)\in C$. The code has \emph{full support} if $|\chi(C)|=n$, that is,
for any position there is a codeword that is non-zero in this position. For example, the code
$C_{n,k}$ has support of size $k$.
In practical applications, one usually uses codes with full support. We expect to find a sharper upper bound on
$P_{\rm ue}(C,p)$ for codes of full support. In this paper we find the following best possible
upper bound on $P_{\rm ue}(C,p)$ for linear codes of full support.
\begin{theorem}
\label{th2}
If $C$ is an $[n,k;q]$ code of full support, then
\[P_{\rm ue}(C,p) \le (1-p)^{n-k+1}+(q-1)^{k-n}p^{n-k+1}-(1-p)^n\]
for all $p\in [0,(q-1)/q]$. Moreover, the bound is best possible since the bound is met with equality for all $p$
for the code $D_{n,k,{\mathbf{v}}}$
generated by
\[ \left[\, I_k \,\middle|\, \begin{matrix} {\mathbf{v}} \\ 0_{(k-1)\times (n-k)} \end{matrix} \,\right],\]
where ${\mathbf{v}}\in F_q^{n-k} $ is a vector of full support (that is, without zero in any position).
Moreover, any code of full support meeting the bound is equivalent to $D_{n,k, {\mathbf{v}} }$
for some ${\mathbf{v}}$ of full support.
\end{theorem}
This bound is tighter than the bound (\ref{gb}). The improvement for $p\in (0,(q-1)/q)$ is
\[ p(1-p)^{n-k} \left \{1-\left( \frac{p}{(q-1)(1-p)}\right)^{n-k}\right\}. \]
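This improvement term can be verified numerically against the difference of the two bounds. A small Python sketch (the function name `gap` is ours):

```python
from math import isclose

def gap(n, k, q, p):
    """Difference between the general bound and the sharper full-support bound."""
    old = (1 - p) ** (n - k) - (1 - p) ** n
    new = (1 - p) ** (n - k + 1) + (q - 1) ** (k - n) * p ** (n - k + 1) - (1 - p) ** n
    return old - new

for n, k, q in [(7, 3, 2), (6, 4, 3), (10, 5, 4)]:
    for p in (0.05, 0.2, 0.4 * (q - 1) / q):
        # closed form of the improvement stated above
        closed = p * (1 - p) ** (n - k) * (1 - (p / ((q - 1) * (1 - p))) ** (n - k))
        assert isclose(gap(n, k, q, p), closed)
```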
\section*{Proof of Theorem \ref{th2}}
The weight distribution of $D_{n,k, {\mathbf{v}} }$ is
\[A_{D_{n,k,{\mathbf{v}} }}(z)= (1+(q-1)z)^{k-1}(1+(q-1)z^{n-k+1}) .\]
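This enumerator is easy to confirm by brute force for a small instance; the Python sketch below (helper names ours) takes $q=3$, $n=6$, $k=3$, ${\mathbf{v}}=(1,2,1)$:

```python
from itertools import product
from math import comb

def weight_distribution(gen, q):
    """Weight distribution (A_0,...,A_n) of the code spanned by the rows of `gen` over F_q."""
    k, n = len(gen), len(gen[0])
    A = [0] * (n + 1)
    for x in product(range(q), repeat=k):
        cw = [sum(xi * row[j] for xi, row in zip(x, gen)) % q for j in range(n)]
        A[sum(c != 0 for c in cw)] += 1
    return A

# D_{6,3,v} over F_3 with v = (1,2,1) of full support
q, n, k, v = 3, 6, 3, (1, 2, 1)
gen = [[1, 0, 0] + list(v), [0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0]]
A = weight_distribution(gen, q)

# coefficients of (1 + (q-1)z)^{k-1} * (1 + (q-1)z^{n-k+1})
expected = [0] * (n + 1)
for i in range(k):
    c = comb(k - 1, i) * (q - 1) ** i
    expected[i] += c
    expected[i + n - k + 1] += (q - 1) * c
assert A == expected == [1, 4, 4, 0, 2, 8, 8]
```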
Therefore, Theorem \ref{th2} is equivalent to
\begin{theorem}
\label{th3}
If $C$ is an $[n,k;q]$ code of full support, then
\[A_C(z) \le (1+(q-1)z)^{k-1}(1+(q-1)z^{n-k+1}) \]
for all $z\in [0,1]$, with equality if and only if $C$ is equivalent to $ D_{n,k,{\mathbf{v}}} $
for some vector ${\mathbf{v}}$ of full support.
\end{theorem}
\begin{lemma}
\label{full}
An $[n,k;q]$ code $C$ has full support if and only if $C^\perp$ is an $[n,n-k,2;q]$ code, that is, it has minimum distance at least 2.
\end{lemma}
\begin{IEEEproof}
The result follows from the observation that if $i$ is not in the support, then the unit vector ${\mathbf{e}}_i$ is
contained in $C^\perp$ and vice versa.
\end{IEEEproof}
By the MacWilliams theorem, if $C$ is an $[n,k;q]$ code, then
\begin{equation}
\label{mw}
A_{C^\perp}(z)=\frac{1}{q^k} \left(1+(q-1)z \right)^n A_C\left(\frac{1-z}{1+(q-1)z} \right).
\end{equation}
This implies that $A_{C_1}(z)\le A_{C_2}(z)$ for all $z\in [0,1]$ if and only if
$A_{C^\perp_1}(z)\le A_{C^\perp_2}(z)$ for all $z\in [0,1]$.
Let $E_{n,k,{\mathbf{v}}}=D_{n,n-k,{\mathbf{v}}}^\perp$. This code is generated by the matrix $[I_{k}|{\mathbf{v}}^t|0_{k\times (n-k-1)}]$.
Using (\ref{mw}), we see that
\begin{equation}
\label{ew}
A_{E_{n,n-k,{\mathbf{v}}} }(z)= \frac{1}{q} \Bigl\{ \bigl(1+(q-1)z\bigr)^{n-k+1}+(q-1)(1-z)^{n-k+1}\Bigr\}.
\end{equation}
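This formula can likewise be checked by direct enumeration. In the Python sketch below (our choices: $q=3$, $n=6$, $k=3$, ${\mathbf{v}}=(1,2,1)$) we enumerate $E_{6,3,{\mathbf{v}}}$, generated by $[I_3|{\mathbf{v}}^t|0]$, whose enumerator is (\ref{ew}) with $n-k$ replaced by $k$:

```python
from itertools import product
from math import comb

# E_{6,3,v} over F_3, v = (1,2,1): generator [I_3 | v^t | 0_{3x2}]
q, n, k, v = 3, 6, 3, (1, 2, 1)
A = [0] * (n + 1)
for x in product(range(q), repeat=k):
    # codeword (x | x.v | 0, 0) has weight w(x) + [x.v != 0]
    w = sum(xi != 0 for xi in x) + (sum(xi * vi for xi, vi in zip(x, v)) % q != 0)
    A[w] += 1

# coefficients of (1/q) * ((1+(q-1)z)^{k+1} + (q-1)(1-z)^{k+1})
expected = [comb(k + 1, i) * ((q - 1) ** i + (q - 1) * (-1) ** i) // q
            for i in range(k + 2)] + [0] * (n - k - 1)
assert A == expected == [1, 0, 12, 8, 6, 0, 0]
```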
Combining all these facts, we see that Theorem \ref{th3} is equivalent to the following (where we substitute $n-k$ for $k$).
\bigskip
\begin{theorem}
\label{th4}
If $C$ is an $[n,k,2;q]$ code, then
\begin{equation}
\label{gb4}
A_C(z) \le f(z),
\end{equation}
where \[f(z)=\frac{1}{q} \left\{ \left(1+(q-1)z\right)^{k+1}+(q-1)(1-z)^{k+1}\right\},\]
for all $z\in [0,1]$, with equality if and only if $C$ is equivalent to $E_{n,k,{\mathbf{v}}}$
for some vector ${\mathbf{v}}\in F_q^k$ of full support.
\end{theorem}
Before proving this theorem, we give a couple of simple lemmas. For $z\in [0,1]$ we clearly have $z^i\ge z^j$ for $i\le j$.
This implies the following lemma.
\begin{lemma}
\label{max un err}
For $[n,k;q]$ codes $C$ and $C'$, if
\[\sum\limits_{i=1}^jA_i(C)\leq \sum\limits_{i=1}^jA_i(C')\]
for all $1\leq j\leq n$, then for all $z\in [0,1]$, we
have
\[A_C(z)\le A_{C'}(z).\]
Moreover, equality holds for some (equivalently, for all) $z\in (0,1)$ if and only if $A_i(C)=A_i(C')$
for all $i$, $1\leq i\leq n$.
\end{lemma}
\begin{lemma}
\label{ewl}
Let ${\mathbf{v}}$ be a vector of full support. Then
\noindent {\rm a)}
\[ A_i(E_{n,k, {\mathbf{v}} }) = \frac{1}{q}\binom{k+1}{i} \left\{(q-1)^i+(q-1)(-1)^i \right\}.\]
\noindent {\rm b)}
\begin{align}
\sum_{i=2}^{j} A_i(E_{n,k,{\mathbf{v}}}) &= \sum_{i=1}^{j-1}\binom{k}{i}(q-1)^i \nonumber \\
& +\frac{1}{q} \binom{k}{j} \left\{(q-1)^{j} +(-1)^j(q-1)\right\}. \label{sume}
\end{align}
\end{lemma}
\begin{IEEEproof}
We see that a) follows immediately from (\ref{ew}). From a) we get
\begin{align*}
\sum_{i=2}^{j} A_i(E_{n,k,{\mathbf{v}}}) =\, & \frac{1}{q} \sum_{i=2}^{j}\binom{k+1}{i}(q-1)^i\\
& + \frac{q-1}{q} \sum_{i=2}^{j}\binom{k+1}{i}(-1)^i.
\end{align*}
Let
\[F(z)=\sum_{i=2}^{j}\binom{k+1}{i}z^i.\]
Then
\begin{align*}
F(z) =\, & \sum_{i=2}^{j}\binom{k}{i}z^i + \sum_{i=2}^{j}\binom{k}{i-1}z^i \\
=\, & \sum_{i=2}^{j}\binom{k}{i}z^i + \sum_{i=1}^{j-1}\binom{k}{i}z^{i+1} \\
=\, & (z+1)\sum_{i=1}^{j-1}\binom{k}{i}z^i + \binom{k}{j}z^{j}-kz.
\end{align*}
Hence
\begin{eqnarray*}
\lefteqn{q \sum_{i=2}^{j} A_i(E_{n,k,{\mathbf{v}}})} \\
&=& F(q-1)+(q-1)F(-1) \\
&=& q \sum_{i=1}^{j-1}\binom{k}{i}(q-1)^i + \binom{k}{j}(q-1)^{j}-(q-1)k \\
&& + (q-1) \binom{k}{j}(-1)^j+(q-1)k.
\end{eqnarray*}
Hence, b) follows.
\end{IEEEproof}
We now give the proof of Theorem \ref{th4}.
\begin{IEEEproof}
Up to equivalence, $C$ is generated by $G=[I_k|Q]$, where
the rows of $Q$ are
${\mathbf{v}}_1, {\mathbf{v}}_2,\cdots,
{\mathbf{v}}_k$; since $C$ has minimum distance $2$, we have ${\mathbf{v}}_i\neq
{\mathbf{0}}$ for $1\leq i\leq k$. Then for any ${\mathbf{x}}\in F_q^k$, the codeword
$ {\mathbf{x}}G =( {\mathbf{x}} | {\mathbf{x}}Q) $ has
weight
\[w({\mathbf{x}}G ) = w({\mathbf{x}} )+w({\mathbf{x}}Q).\]
Hence
\begin{equation}
\label{weight less j}
\sum_{i=2}^j A_i(C) = S_1+S_2,
\end{equation}
where
\begin{align*}
S_1 &= |\{{\mathbf{x}}\mid {\mathbf{x}}\neq
{\mathbf{0}},w({\mathbf{x}})\leq j-1, w({\mathbf{x}}Q)+w({\mathbf{x}})\leq j\}| \\
&\le |\{{\mathbf{x}}\mid {\mathbf{x}}\neq
{\mathbf{0}},w({\mathbf{x}})\leq j-1\}| \\
&= \sum\limits_{i=1}^{j-1}\binom{k}{i}(q-1)^i,
\end{align*}
and
\[S_2= |\{{\mathbf{x}}\mid w({\mathbf{x}})= j,\, {\mathbf{x}}Q={\mathbf{0}}\}|. \]
To evaluate $S_2$, we first choose
$j$ positions out of $k$, the number of choices is $\binom{k}{j}$.
Without loss of generality we can assume that
${\mathbf{x}}=(x_1,x_2,\cdots, x_k)$, where
$x_1, x_2, \cdots, x_j$ are
nonzero and $x_{j+1}=\cdots =x_k=0$. Then we have
\begin{equation}\label{reduce}
\left\{ \begin{array}{l}
x_1, x_2,\cdots, x_j\neq 0,\\
x_1 {\mathbf{v}}_1+x_2{\mathbf{v}}_2+\cdots+x_j {\mathbf{v}}_j={\mathbf{0}}.
\end{array}
\right.
\end{equation}
Let $r$ be the rank of the matrix with rows
${\mathbf{v}}_1, {\mathbf{v}}_2,\cdots, {\mathbf{v}}_j$.
If $r=1$, then for $1\leq i\leq j$, ${\mathbf{v}}_i=t_i {\mathbf{v}}_j$ for some
$t_i\in F_q^*$. Denote by $n_j$ the number of solutions of
$(\ref{reduce})$. For arbitrary nonzero elements $x_1, x_2,\cdots,
x_{j-1}$,
\begin{itemize}
\item if $x_1 t_1+x_2 t_2+\cdots+x_{j-1} t_{j-1}=0$, then $(x_1, x_2, \cdots,
x_{j-1})$ contributes $1$ to $n_{j-1}$.
\item if $x_1 t_1+x_2 t_2+\cdots+x_{j-1} t_{j-1}\neq 0$, then
\[x_j=-x_1 t_1-x_2 t_2-\cdots-x_{j-1} t_{j-1}\]
and $(x_1, x_2, \cdots, x_{j-1}, x_j)$ contributes $1$ to $n_{j}$.
\end{itemize}
Therefore we have $n_{j-1}+n_{j}=(q-1)^{j-1}$. This recurrence
relation and the first term $n_1=0$ imply that
\begin{equation}\label{num nj}
n_j= \frac{1}{q} \left((q-1)^{j}+(-1)^j(q-1)\right).
\end{equation}
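The recurrence and the closed form (\ref{num nj}) are easy to confirm by exhaustive counting; note that substituting $y_i = x_i t_i$ shows the count does not depend on the particular nonzero coefficients $t_i$. A Python sketch (the function name is ours):

```python
from itertools import product

def count_solutions(ts, q):
    """Number of tuples (x_1,...,x_j), all x_i nonzero in F_q, with sum x_i*t_i = 0."""
    return sum(sum(x * t for x, t in zip(xs, ts)) % q == 0
               for xs in product(range(1, q), repeat=len(ts)))

q, ts = 5, (1, 2, 3, 4, 2, 1)   # arbitrary nonzero coefficients t_i
prev = 0
for j in range(1, len(ts) + 1):
    nj = count_solutions(ts[:j], q)
    assert nj == ((q - 1) ** j + (-1) ** j * (q - 1)) // q   # closed form
    if j > 1:
        assert prev + nj == (q - 1) ** (j - 1)               # recurrence
    prev = nj
```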
If $r\geq 2$, then we may assume that ${\mathbf{v}}_1$ and ${\mathbf{v}}_2$ are
linearly independent. For any fixed nonzero elements $x_3, \cdots,
x_j$, the equation
\[x_1 {\mathbf{v}}_1+ x_2{\mathbf{v}}_2=-x_3
{\mathbf{v}}_3-\cdots-x_j{\mathbf{v}}_j\]
has at most one solution $(x_1,x_2)$. Therefore the
number of solutions of (\ref{reduce}) is at most $(q-1)^{j-2}$, which
is at most (\ref{num nj}) except when $q=2$, $j$ is odd, and
\[ {\mathbf{v}}_1+{\mathbf{v}}_2+\cdots+{\mathbf{v}}_j ={\mathbf{0}}.\]
In this exceptional
case, $n_j=0<1=(q-1)^{j-2}$ and at least one of ${\mathbf{v}}_i$ has
Hamming weight at least $2$ (since an odd number of binary vectors of
weight $1$ cannot have sum ${\mathbf{0}}$). We may assume
$w({\mathbf{v}}_j)\geq 2$. Choose ${\mathbf{x}}=(1,\cdots, 1,0,\cdots,0)$ with ones in the first $j-1$ positions. Then
$w({\mathbf{x}})=j-1$ and
\[{\mathbf{x}}Q = {\mathbf{v}}_1+{\mathbf{v}}_2+\cdots+ {\mathbf{v}}_{j-1} = {\mathbf{v}}_{j}.\]
Hence
\[w({\mathbf{x}}G) = w({\mathbf{x}})+w({\mathbf{v}}_j) \ge j-1+2=j+1. \]
Therefore, in the exceptional case,
\[S_1< \sum_{i=1}^{j-1}\binom{k}{i}(q-1)^i.\]
In total, by (\ref{weight less j}) we obtain
\begin{align}
\sum_{i=2}^j A_i(C) \leq & \sum_{i=1}^{j-1} \binom{k}{i}(q-1)^i \nonumber \\
& +\frac{1}{q} \binom{k}{j} \left((q-1)^{j} +(-1)^j(q-1)\right) \nonumber \\
= & \sum_{i=2}^j A_i(E_{n,k,{\mathbf{v}}}) \label{weight j 2}
\end{align}
for $j\geq 2$ by (\ref{sume}).
By Lemma
\ref{max un err} and (\ref{weight j 2}), $A_C(z)\le A_{E_{n,k,{\mathbf{v}}}}(z)$ for all $z\in[0,1]$,
and equality holds for some $z\in (0,1)$ if and only if $C$ and $E_{n,k,{\mathbf{v}}}$ have the same
weight distribution, which by the analysis above happens if and only if $C$ is equivalent to
$E_{n,k,{\mathbf{v}}}$.
\end{IEEEproof}
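Theorem \ref{th4} can be spot-checked by exhausting all binary $[5,2]$ codes of minimum distance at least $2$; the enumeration strategy in this Python sketch is ours:

```python
from itertools import product, combinations

q, n, k = 2, 5, 2
vectors = [v for v in product(range(q), repeat=n) if any(v)]
codes = set()
for g1, g2 in combinations(vectors, 2):
    span = frozenset(tuple((a * x + b * y) % q for x, y in zip(g1, g2))
                     for a in range(q) for b in range(q))
    if len(span) == q ** k:          # g1, g2 independent
        codes.add(span)

def f(z):
    """The extremal enumerator for these parameters."""
    return ((1 + (q - 1) * z) ** (k + 1) + (q - 1) * (1 - z) ** (k + 1)) / q

checked = 0
for C in codes:
    if min(sum(c) for c in C if any(c)) < 2:
        continue                      # keep only minimum distance >= 2
    checked += 1
    for z in (0.1, 0.5, 0.9, 1.0):
        assert sum(z ** sum(c) for c in C) <= f(z) + 1e-12
assert checked > 0
```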
\section*{On an older bound}
A special case of \cite[Theorem 2.51]{K} is equivalent to the statement that
\begin{equation}
\label{gam}
A_C(z)\le g(z) \stackrel{\rm def}{=} (1+(q-1)z)^k+k(q-1)(z^2-z)
\end{equation}
for all $[n,k,2;q]$ codes and all $z\in [0,1]$.
A simple proof goes as follows: we have
\[ w({\mathbf{x}}G) \ge w( {\mathbf{x}} ) \]
for all ${\mathbf{x}} \in F^k$. Moreover, if $w({\mathbf{x}})=1$,
then $w({\mathbf{x}}G) \ge 2$. Hence
\begin{align*}
A_C(z) \le &\, \sum_ {i=0}^k \binom{k}{i}((q-1)z)^i - k(q-1)z + k(q-1)z^2 \\
= &\, (1+(q-1)z)^k +k(q-1)(z^2-z).
\end{align*}
Since (\ref{gb4}) is best possible for codes with minimum distance 2, it is clearly at least as good as (\ref{gam}).
If $k=0$, then $f(z)=g(z)=1$. If $k=1$, then \[f(z)=g(z)=1+(q-1)z^2.\]
If $k=q=2$, then $f(z)=g(z)=1+3z^2$. We will show that in all other cases, $g(z)>f(z)$.
\begin{theorem}
For $q\ge 2$ and $k\ge 1$ we have
\[g(z)-f(z)= \frac{q-1}{q}(1-z) \Bigl\{ \sum_{j=2}^k \binom{k}{j} \Bigl( (q-1)^j-(-1)^j\Bigr) z^j\Bigr\}.\]
In particular, $g(z)>f(z)$ for all $z\in (0,1)$, except when $q=k=2$ or $k=1$.
\end{theorem}
\medskip
\begin{IEEEproof}
\noindent $g(z)-f(z)$
\begin{align*}
=\, & \Bigl(1+(q-1)z\Bigr)^k + k(q-1)(z^2-z) \\
& -\frac{1}{q} \Bigl(1+(q-1)z\Bigr)^{k+1} - \frac{q-1}{q} (1-z)^{k+1}\\
=\, & \frac{1}{q} \Bigl(1+(q-1)z \Bigr)^k \Bigl\{q-1-(q-1)z \Bigr\} \\
& -k(q-1)z(1-z) - \frac{q-1}{q} (1-z)^{k+1} \\
=\, & \frac{q-1}{q}(1-z) \Bigl\{ \Bigl(1+(q-1)z \Bigr)^k - (1-z)^k-kqz\Bigr\} \\
=\, & \frac{q-1}{q}(1-z) \Bigl\{ \sum_{j=0}^k \binom{k}{j} \Bigl( (q-1)^j -(-1)^j\Bigr) z^j -kqz \Bigr\} \\
=\, & \frac{q-1}{q}(1-z) \Bigl\{ \sum_{j=2}^k \binom{k}{j} \Bigl( (q-1)^j -(-1)^j\Bigr) z^j\Bigr\}.
\end{align*}
In particular, if $q>2$, then $(q-1)^j-(-1)^j>0$ for all $j\ge 2$. If $q=2$, $(q-1)^j-(-1)^j>0$ if $j$ is odd.
Hence, $g(z)>f(z)$, except when $k=q=2$ or $k=1$.
\end{IEEEproof}
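The identity for $g(z)-f(z)$ can also be confirmed numerically; a Python sketch with our function names:

```python
from math import comb, isclose

def g(z, q, k):
    return (1 + (q - 1) * z) ** k + k * (q - 1) * (z * z - z)

def f(z, q, k):
    return ((1 + (q - 1) * z) ** (k + 1) + (q - 1) * (1 - z) ** (k + 1)) / q

for q, k in [(2, 2), (2, 5), (3, 4), (5, 3)]:
    for z in (0.0, 0.25, 0.5, 0.75, 1.0):
        # right-hand side of the theorem
        rhs = (q - 1) / q * (1 - z) * sum(
            comb(k, j) * ((q - 1) ** j - (-1) ** j) * z ** j for j in range(2, k + 1))
        assert isclose(g(z, q, k) - f(z, q, k), rhs, abs_tol=1e-12)
```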
\section*{Acknowledgement}
This work is supported by the Norwegian Research Council under the grant 191104/V30.
The research of Jinquan Luo is also supported by NSF of China under grant 60903036,
NSF of Jiangsu Province under grant 2009182 and the open research fund of National Mobile Communications Research Laboratory,
Southeast University (No. 2010D12).
% arXiv:1703.09595 -- Notes on Pointed Gromov-Hausdorff Convergence
% Abstract: The present article addresses everyone who starts working with (pointed) Gromov-Hausdorff convergence. In the major part, both Gromov-Hausdorff convergence of compact and of pointed metric spaces are introduced and investigated. Moreover, the relation of sublimits occurring with pointed Gromov-Hausdorff convergence and ultralimits is discussed.
\section{The compact case}\label{sec:GH-cpt}
Given a metric space, an interesting question is
whether one can assign a distance to any two subsets
in such a way that this assignment defines a metric.
In \cite[Chapter VIII §6]{hausdorff}, Hausdorff answered this question
by describing what nowadays is called the Hausdorff distance:
for two subsets of a metric space, this is the infimal radius $\eps$
such that each subset is contained in the $\eps$-neighbourhood of the other.
This was extended by Gromov in \cite[section 6]{gromov-groups} to describe
how far two compact metric spaces are from being isometric
by mapping two such spaces isometrically into a third one
and measuring the Hausdorff distance of the images.
(In fact, one can restrict to embedding the two spaces isometrically into their disjoint union.)
This is the so-called Gro\-mov-Haus\-dorff\xspace distance.
\begin{defn}\label{def:dGH-cpt-npt}
For bounded subsets $A$ and $B$ of a metric space $(X,d)$,
the \emph{Hausdorff distance of $A$ and $B$} is defined as
\begin{align*}
d_{\textit{H}}^d(A,B) &:= \inf \{\eps > 0 \mid A \subseteq B^X_{\eps}(B)~\xspace\textrm{and}\xspace~B \subseteq B^X_{\eps}(A)\}
\intertext{where $B^X_{\eps}(B) := \{x \in X \mid \exists b \in B: d(x,b) < \eps\}$.
For two compact metric spaces $(X,d_X)$ and $(Y,d_Y)$,
the \emph{Gro\-mov-Haus\-dorff\xspace distance of $X$ and $Y$} is defined as}
d_{\textit{GH}}(X,Y) &:= \inf \{ d_{\textit{H}}^d(X,Y) \mid d \text{ admissible metric on } X \amalg Y\},
\end{align*}
where a metric $d$ on the disjoint union $X \amalg Y$ is called \textit{admissible}
if it satisfies $d_{|X \times X} = d_X$ and $d_{|Y \times Y} = d_Y$.
\end{defn}
On the space of (non-empty) compact subspaces of $X$, this $d_{\textit{H}}$ defines a metric,
while $d_{\textit{GH}}$ defines a metric on the set of isometry classes of (non-empty) compact metric spaces.
This will be proven below. From now on, all metric spaces are assumed to be non-empty.
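For finite subsets, the infimum in the definition of $d_{\textit{H}}$ is attained and the Hausdorff distance reduces to a max/min formula. A minimal Python sketch (our function name, not from the text):

```python
def hausdorff(A, B, d):
    """Hausdorff distance of finite non-empty subsets A, B under the metric d."""
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

d = lambda x, y: abs(x - y)           # subsets of the real line
assert hausdorff([0, 1], [0, 2], d) == 1
assert hausdorff([0], [5], d) == 5
assert hausdorff([0, 1, 2], [0, 1, 2], d) == 0
```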
In order to compare two metric spaces with respect to some fixed base points,
the pointed Gro\-mov-Haus\-dorff\xspace distance is used.
\begin{defn}\label{def:dGH-cpt-pt}
Let $(X,d)$ be a metric space, $A, B \subseteq X$ bounded subsets
and $a \in A$, $b \in B$ base points.
The \emph{pointed Hausdorff distance of $(A,a)$ and $(B,b)$} is given by
\begin{align*}
d_{\textit{H}}^d((A,a),(B,b)) &:= d_{\textit{H}}^d(A,B) + d(a,b)
\intertext{and the \emph{pointed Gro\-mov-Haus\-dorff\xspace distance} between
two pointed compact metric spaces $(X,x_0)$ and $(Y,y_0)$ is defined as}
d_{\textit{GH}}((X,x_0),(Y,y_0)) &:= \inf \{ d_{\textit{H}}^d((X,x_0),(Y,y_0)) \mid d \text{ adm.~on } X \amalg Y\}.
\end{align*}
\end{defn}
As in the non-pointed case, the pointed Gro\-mov-Haus\-dorff\xspace distance defines a metric
on the set of isometry classes of (non-empty) pointed compact metric spaces.
In order to prove this, a notion strongly related to the one of Gro\-mov-Haus\-dorff\xspace distance is used.
\begin{defn}
Let $(X,d_X)$ and $(Y,d_Y)$ be metric spaces and $\eps > 0$.
A pair of (not necessarily continuous) maps $f: X \to Y$ and $g : Y \to X$
is called \emph{($\eps$-)Gro\-mov-Haus\-dorff\xspace approximations} or \emph{$\eps$-approximations}
if for all $x,x_1,x_2 \in X$ and $y,y_1,y_2 \in Y$,
\begin{align*}
&|d_X(x_1,x_2) - d_Y(f(x_1),f(x_2))| < \eps, &d_X(g \circ f (x), x) < \eps, \\
&|d_Y(y_1,y_2) - d_X(g(y_1),g(y_2))| < \eps, &d_Y(f \circ g (y), y) < \eps.
\end{align*}
The set of all such pairs is denoted by $\Isom{\eps}(X,Y)$.
In the pointed case, one restricts to pointed maps:
For $p \in X$ and $q \in Y$,
\begin{align*}
\Isom{\eps}((X,p), (Y,q)) := \{(f,g) \in \Isom{\eps}(X,Y) \mid f(p) = q~\xspace\textrm{and}\xspace~g(q) = p\}.
\end{align*}
\end{defn}
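As a concrete illustration (our example, not from the text): a coarse grid on $[0,1]$ approximates a finer one, and the four conditions of the definition can be verified directly in Python.

```python
# X = fine grid, Y = coarse grid on [0,1]; f maps to the nearest coarse point,
# g includes Y into X.  We verify that (f,g) is an eps-approximation.
X = [i / 8 for i in range(9)]
Y = [i / 4 for i in range(5)]          # Y is a subset of X
d = lambda a, b: abs(a - b)
f = lambda x: min(Y, key=lambda y: abs(y - x))
g = lambda y: y

eps = 0.13                             # the true distortion here is at most 1/8
assert all(abs(d(x1, x2) - d(f(x1), f(x2))) < eps for x1 in X for x2 in X)
assert all(abs(d(y1, y2) - d(g(y1), g(y2))) < eps for y1 in Y for y2 in Y)
assert all(d(g(f(x)), x) < eps for x in X)
assert all(d(f(g(y)), y) < eps for y in Y)
```

Here $(f,g) \in \Isom{\eps}(X,Y)$ for $\eps = 0.13$, reflecting that the two grids differ by at most $1/8$.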
\begin{rmk}
In the literature, Gro\-mov-Haus\-dorff\xspace approximations often are not defined as pairs of maps
but as one map $f : X \to Y$ where $f$ has \emph{distortion less than $\eps$},
i.e.~for all $x_1,x_2 \in X$, $f$ satisfies
\[|d_Y(f(x_1),f(x_2)) - d_X(x_1,x_2)| < \eps, \]
and $B_{\eps}(f(X)) = Y$.
Observe that $(f,g) \in \Isom{\eps}(X,Y)$ already implies that $f$ has these properties
(for the same $\eps$).
In the following it will be seen that Gro\-mov-Haus\-dorff\xspace distance less than $\eps$
corresponds to $\eps$-approximations (up to a factor).
The next proposition shows that (up to another factor)
the definition of Gro\-mov-Haus\-dorff\xspace approximations used here can be replaced by the one described above.
\end{rmk}
\begin{prop}
Let $f:(X,d_X) \to (Y,d_Y)$ be a map between metric spaces
with distortion smaller than $\eps > 0$.
Then there exists a map $g : f(X) \to X$ satisfying $(f,g) \in \Isom{\eps}(X,f(X))$.
Moreover, if $Y = B_{\eps}(f(X))$,
then there exists a map $h : Y \to X$ such that $(f,h) \in \Isom{3\eps}(X,Y)$.
\end{prop}
\begin{proof}
For each $y \in f(X)$ choose some $g(y) \in f^{-1}(y)$.
In particular, the map $g$ defined in this way satisfies $f \circ g = \id_{|f(X)}$.
For $y_1,y_2 \in f(X)$,
\begin{align*}
&|d_X(g(y_1),g(y_2)) - d_Y(y_1,y_2)|
\\&= |d_X(g(y_1),g(y_2)) - d_Y(f(g(y_1)),f(g(y_2)))|
< \eps,
\end{align*}
and for $x \in X$,
\begin{align*}
d_X(x,g \circ f(x))
= |d_X(x,g \circ f(x)) - d_Y(f(x),f(g \circ f(x)))| < \eps.
\end{align*}
Thus, $(f,g) \in \Isom{\eps}(X,f(X))$.
Now assume $Y = B_{\eps}(f(X))$.
For $y \in f(X)$, define $h(y) := g(y)$, otherwise,
choose $y' \in f(X)$ with $d_Y(y,y') < \eps$ and define $h(y) := y'$.
By construction, $h \circ f = g \circ f$,
i.e.~for all $x \in X$, \[d_X(h \circ f (x),x) < \eps.\]
For arbitrary $y \in Y$, using $f \circ g = \id_{|f(X)}$,
$f \circ h (y) = f \circ g (y') = y'$
for $y' \in f(X) \cap B_{\eps}(y)$ as in the definition of $h$.
Hence, \[d_Y(f \circ h(y),y) = d_Y(y',y) < \eps.\]
Finally, for arbitrary $y_1,y_2 \in Y$,
\begin{align*}
&|d_X(h(y_1),h(y_2)) - d_Y(y_1,y_2)|
\\&\leq |d_X(h(y_1),h(y_2)) - d_Y(f(h(y_1)),f(h(y_2)))|
\\&\quad + |d_Y(f(h(y_1)),f(h(y_2))) - d_Y(y_1,y_2)|
\\&< \eps + d_Y(f \circ h(y_1),y_1) + d_Y(f \circ h(y_2),y_2)
\\&< 3 \eps.\qedhere
\end{align*}
\end{proof}
Next, a strong connection between existence of Gro\-mov-Haus\-dorff\xspace approximations
and the Gro\-mov-Haus\-dorff\xspace distance will be proven.
\begin{prop}\label{dgh_small_iff_eps_approx}
Let $X$ and $Y$ be compact metric spaces with base points $p \in X$ and $q \in Y$, respectively,
and $\eps > 0$.
\begin{enumerate}
\item\label{dgh_small_iff_eps_approx--a} If $d_{\textit{GH}}(X,Y) < \eps$,
then $\Isom{2\!\eps}(X,Y) \ne \emptyset$.
\item\label{dgh_small_iff_eps_approx--b} If $\Isom{\eps}(X,Y) \ne \emptyset$,
then $d_{\textit{GH}}(X,Y) \leq 2\eps$.
\item\label{dgh_small_iff_eps_approx--c} If $d_{\textit{GH}}((X,p),(Y,q)) < \eps$,
then $\Isom{2\!\eps}((X,p),(Y,q)) \ne \emptyset$.
\item\label{dgh_small_iff_eps_approx--d} If $\Isom{\eps}((X,p),(Y,q)) \ne \emptyset$,
then $d_{\textit{GH}}((X,p),(Y,q)) \leq 2\eps$.
\end{enumerate}
\end{prop}
\begin{proof}
As the proofs of \ref{dgh_small_iff_eps_approx--a} and \ref{dgh_small_iff_eps_approx--b},
respectively, are very similar to,
but slightly easier than those
of \ref{dgh_small_iff_eps_approx--c} and \ref{dgh_small_iff_eps_approx--d}, respectively,
only the latter two are proven here.
\par\smallskip\noindent\ref{dgh_small_iff_eps_approx--c}
Let $0 < \delta < \eps - d_{\textit{GH}}((X,p),(Y,q))$
and choose an admissible metric $d$ with
\[d_{\textit{H}}^d((X,p),(Y,q)) < d_{\textit{GH}}((X,p),(Y,q)) + \delta < \eps.\]
Then $d(p,q) < \eps$ on the one hand and $d_{\textit{H}}^d(X,Y) < \eps$ on the other,
i.e.~for all $x \in X$ there exists $y_x \in Y$ that satisfies $d(x,y_x) < \eps$.
Analogously, for each $y \in Y$ there is $x_y \in X$ satisfying $d(y,x_y) < \eps$.
Define $f:X \to Y$ and $g: Y \to X$ by
\[ f(x) := \begin{cases}
q & \text{if } x = p, \\
y_x & \text{otherwise,}
\end{cases}
\quad \quad \quad
g(y) := \begin{cases}
p & \text{if } y = q, \\
x_y & \text{otherwise.}
\end{cases} \]
As seen above, $d(f(x),x) < \eps$ for all $x \in X$.
Thus, for all $x,x' \in X$,
\begin{align*}
|d_Y(f(x),f(x'))- d_X(x,x')|
&\leq d(f(x),x) + d(f(x'),x')
< 2\eps.
\end{align*}
Analogously, $|d_X(g(y),g(y'))- d_Y(y,y')| <2\eps$ for all $y,y' \in Y$.
Similarly, for $x \in X$,
\begin{align*}
d_X(g \circ f (x), x)
&= d(g \circ f (x), x)\\
& \leq d(g (f (x)), f(x)) + d(f(x),x)\\
& < 2 \eps,
\end{align*}
as well as $d_Y(f \circ g(y),y) < 2 \eps$ for all $y \in Y$.
Thus, \[(f,g) \in \Isom{2\!\eps}((X,p),(Y,q)).\]
This proves \ref{dgh_small_iff_eps_approx--c}.
\par\smallskip\noindent\ref{dgh_small_iff_eps_approx--d}
Fix an arbitrary pair $(f,g) \in \Isom{\eps}((X,p),(Y,q))$.
The definition of an admissible metric $d: (X \amalg Y) \times (X \amalg Y) \to \rr$
requires $d_{|X \times X} := d_X$, $d_{|Y \times Y} := d_Y$
and $d(y,x) := d(x,y)$ for $x \in X$ and $y \in Y$.
Hence, it suffices to define $d(x,y)$ for $x \in X$ and $y \in Y$;
any such extension is positive definite and symmetric by construction,
so $d$ is an admissible metric as soon as it satisfies the triangle inequality.
Define $d: (X \amalg Y) \times (X \amalg Y) \to \rr$ via
\[d(x,y) := \frac{\eps}{2} + \inf\{d_X(x,x') + d_Y(f(x'),y) \mid x' \in X\}\]
for $x \in X$ and $y \in Y$.
It remains to check the triangle inequality.
For $x_1,x_2 \in X$ and $y \in Y$,
\begin{align*}
&d(x_1,x_2)+ d(x_2,y)\\
&= d_X(x_1,x_2) + \frac{\eps}{2} + \inf\{d_X(x_2,x') + d_Y(f(x'),y) \mid x' \in X \} \\
&= \frac{\eps}{2} + \inf\{d_X(x_1,x_2) + d_X(x_2,x') + d_Y(f(x'),y) \mid x' \in X \} \\
&\geq \frac{\eps}{2} + \inf\{d_X(x_1,x') + d_Y(f(x'),y) \mid x' \in X \} \\
&=d(x_1,y)
\intertext{and}
&d(x_1,y)+ d(y,x_2)\\
&= \eps + \inf\{d_X(x_1,x') +d_Y(f(x'),y)
\\& {\color{white} = \eps + \inf\{}
+ d_X(x_2,x'') + d_Y(f(x''),y) \mid x',x'' \in X\}
\displaybreak[0]\\
&\geq \eps + \inf\{d_X(x_1,x') + d_Y(f(x'),f(x'')) + d_X(x_2,x'') \mid x',x'' \in X\}
\displaybreak[0]\\
&\geq \eps + \inf\{d_X(x_1,x') + (d_X(x',x'') - \eps) + d_X(x_2,x'') \mid x',x'' \in X\}
\displaybreak[0]\\
&\geq \inf\{d_X(x_1,x_2) \mid x',x'' \in X\} \\
&= d(x_1,x_2).
\end{align*}
The two remaining triangle inequalities
$d(x,y_1) + d(y_1,y_2) \geq d(x,y_2)$ and $d(y_1,x) + d(x,y_2) \geq d(y_1,y_2)$,
where $x \in X$ and $y_1,y_2 \in Y$,
can be proven analogously.
Using this metric $d$,
\begin{align*}
&d(p,q)
= \frac{\eps}{2} + \inf\{d_X(p,x') + d_Y(f(x'),q) \mid x' \in X\} = \frac{\eps}{2}
\intertext{since
$0 \leq \inf\{d_X(p,x') + d_Y(f(x'),q) \mid x' \in X\} \leq d_X(p,p) + d_Y(f(p),q) = 0$.
Furthermore, for $x \in X$,}
&d(x,f(x))
= \frac{\eps}{2} + \inf\{ d_X(x,x') + d_Y(f(x'),f(x)) \mid x' \in X \} = \frac{\eps}{2}
\intertext{using $x' = x$. For $y \in Y$, this implies}
&d(y,g(y))
\leq d(y, f \circ g(y)) + d(f \circ g(y), g(y))
< \eps + \frac{\eps}{2} = \frac{3 \eps}{2}.
\end{align*}
Thus, $X \subseteq B^d_{{\eps}/{2}}(f(X)) \subseteq B^d_{{3\eps}/{2}}(Y)$
and $Y \subseteq B^d_{{3 \eps}/{2}}(X)$,
i.e.~$d_{\textit{H}}^d(X,Y) \leq \frac{3 \eps}{2} $ and
\[d_{\textit{GH}}((X,p),(Y,q)) \leq d_{\textit{H}}^d((X,p),(Y,q)) = d_{\textit{H}}^d(X,Y) + d(p,q) \leq 2 \eps.\]
This proves \ref{dgh_small_iff_eps_approx--d}.
\end{proof}
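For finite spaces, the admissible metric constructed in the proof of \ref{dgh_small_iff_eps_approx--d} can be written down explicitly and its triangle inequality checked exhaustively. A Python sketch (the grid example and all names are ours):

```python
from itertools import product

# Finite sketch of the construction: X, Y grids on [0,1], f an approximation map.
X = [i / 8 for i in range(9)]
Y = [i / 4 for i in range(5)]
dX = dY = lambda a, b: abs(a - b)
f = lambda x: min(Y, key=lambda y: abs(y - x))
eps = 0.26            # comfortably larger than the distortion of f (at most 1/8)

def d(a, b):
    """The metric of the proof on the disjoint union; points are tagged ('X', x) or ('Y', y)."""
    if a[0] == b[0]:
        return dX(a[1], b[1]) if a[0] == 'X' else dY(a[1], b[1])
    x, y = (a[1], b[1]) if a[0] == 'X' else (b[1], a[1])
    return eps / 2 + min(dX(x, xp) + dY(f(xp), y) for xp in X)

pts = [('X', x) for x in X] + [('Y', y) for y in Y]
for p1, p2, p3 in product(pts, repeat=3):
    assert d(p1, p3) <= d(p1, p2) + d(p2, p3) + 1e-12   # triangle inequality
```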
Using these approximations, one can prove that the pointed Gro\-mov-Haus\-dorff\xspace distance defines a metric.
Two pointed metric spaces $(X,p)$ and $(Y,q)$ are called \emph{isometric}
if there exists an isometry $f: X \to Y$ with $f(p) = q$.
\begin{prop}
On the space of isometry classes of (pointed) compact metric spaces, $d_{\textit{GH}}$ defines a metric.
\end{prop}
\begin{proof}
In order to prove that the Gro\-mov-Haus\-dorff\xspace distance indeed defines a metric,
one needs that the Hausdorff distance defines a metric.
Therefore, this proof splits into several steps:
First, the Hausdorff distance will be investigated.
Then it will be proven that the Gro\-mov-Haus\-dorff\xspace distance defines a pseudo-metric
on the class of (pointed) compact metric spaces,
i.e.~it satisfies all properties of a metric except possibly definiteness.
Finally, it will be proven that this already defines a metric up to isometry.
\par\smallskip\noindent\textit{Step 1: $d_{\textit{H}}$ defines a metric in the non-pointed case.}
Let $(X,d)$ be a metric space and $A,B,C \subseteq X$ be compact.
First, prove that $d_{\textit{H}}$ is a metric in the non-pointed case:
By definition, $d_{\textit{H}}^d(B,A) = d_{\textit{H}}^d(A,B)$, $d_{\textit{H}}^d(A,B) \geq 0$ and $d_{\textit{H}}^d(A,A) = 0$.
In order to prove the triangle inequality,
define $r_1 := d_{\textit{H}}^d(A,B) \geq 0$ and $r_2 := d_{\textit{H}}^d(B,C) \geq 0$
and let $\eps > 0$ be arbitrary.
For $a \in A$ there exists $b \in B$ with $d(a,b) < r_1 + \eps$.
Furthermore, there is $c \in C$ with $d(b,c) < r_2 + \eps$.
Hence, $d(a,c) < r_1 + r_2 + 2 \eps$ and this proves $A \subseteq B_{r_1+r_2+2\eps}(C)$.
An analogous argument proves $C \subseteq B_{r_1+r_2+2\eps}(A)$.
Therefore, $d_{\textit{H}}^d(A,C) \leq r_1 + r_2 + 2 \eps$.
Since $\eps > 0$ was arbitrary,
\[d_{\textit{H}}^d(A,C) \leq r_1 + r_2 = d_{\textit{H}}^d(A,B) + d_{\textit{H}}^d(B,C).\]
Assume that $A \ne B$ and $d_{\textit{H}}^d(A,B) = 0$.
Without loss of generality, assume there exists $a \in A$ with $a \notin B$.
In particular, $d(a,b) > 0$ for all $b \in B$.
Because $B$ is compact,
$0 < \inf\{d(a,b) \mid b \in B\} \leq d_{\textit{H}}^d(A,B)$,
and this is a contradiction.
\par\smallskip\noindent\textit{Step 2: $d_{\textit{H}}$ defines a metric in the pointed case.}
Now fix $a \in A$, $b \in B$ and $c \in C$.
Since $d_{\textit{H}}$ is a metric in the non-pointed case,
\[d_{\textit{H}}^d((A,a),(B,b)) = d_{\textit{H}}^d(A,B) + d(a,b) \geq 0\]
and equality holds if and only if $A=B$ and $a=b$.
Obviously, $d_{\textit{H}}$ is symmetric and
\begin{align*}
\lefteqn{d_{\textit{H}}^d((A,a),(B,b)) + d_{\textit{H}}^d((B,b),(C,c))}\\
&= d_{\textit{H}}^d(A,B) + d_{\textit{H}}^d(B,C) + d(a,b) + d(b,c) \\
&\geq d_{\textit{H}}^d(A,C) + d(a,c) \\
&= d_{\textit{H}}^d((A,a),(C,c)).
\end{align*}
Thus, $d_{\textit{H}}$ defines a metric.
\par\smallskip\noindent\textit{Step 3: $d_{\textit{GH}}$ defines a pseudo-metric.}
From now on, the proof restricts to the case of pointed metric spaces
since the other one can be done completely analogously.
Obviously, $d_{\textit{GH}}$ is non-negative and symmetric.
It remains to prove the triangle inequality.
Let $(X,x_0)$, $(Y,y_0)$ and $(Z,z_0)$ be pointed compact metric spaces.
For arbitrary $\eps > 0$,
choose admissible metrics $d_{XY}$ on $X \amalg Y$ and $d_{YZ}$ on $Y \amalg Z$ such that
\begin{align*}
d_{\textit{H}}^{d_{XY}}((X,x_0),(Y,y_0)) &< d_{\textit{GH}}((X,x_0),(Y,y_0)) + \eps \quad\xspace\textrm{and}\xspace\\
d_{\textit{H}}^{d_{YZ}}((Y,y_0),(Z,z_0)) &< d_{\textit{GH}}((Y,y_0),(Z,z_0)) + \eps.
\intertext{Define an admissible metric $d_{XZ}$ on $X \amalg Z$ by}
d_{XZ}(x,z) &= \inf\{d_{XY}(x,y) + d_{YZ}(y,z) \mid y \in Y \}.
\end{align*}
This actually defines a metric:
Since everything else is obvious,
only the triangle inequality needs to be checked.
If all regarded points are contained in $X$ or all in $Z$, there is nothing to prove.
For $x_1,x_2 \in X$ and $z \in Z$,
\begin{align*}
&d_{XZ}(x_1,x_2) + d_{XZ}(x_2,z)\\
&= d_X(x_1,x_2) + \inf\{d_{XY}(x_2,y') + d_{YZ}(y',z) \mid y' \in Y\}
\displaybreak[0]\\
&= \inf\{d_{XY}(x_1,x_2) + d_{XY}(x_2,y') + d_{YZ}(y',z) \mid y' \in Y\}
\displaybreak[0]\\
&\geq \inf\{d_{XY}(x_1,y') + d_{YZ}(y',z) \mid y' \in Y\} \\
&= d_{XZ}(x_1,z)
\intertext{and}
&d_{XZ}(x_1,z) + d_{XZ}(z,x_2)\\
&= \inf\{d_{XY}(x_1,y') + d_{YZ}(y',z) + d_{YZ}(z,y'') + d_{XY}(y'',x_2) \mid y',y'' \in Y\}
\displaybreak[0]\\
&\geq \inf\{d_{XY}(x_1,y') + d_{Y}(y',y'') + d_{XY}(y'',x_2) \mid y',y'' \in Y\}
\displaybreak[0]\\
&\geq \inf\{d_{XY}(x_1,y') + d_{XY}(y',x_2) \mid y' \in Y\}
\displaybreak[0]\\
&\geq d_X(x_1,x_2)\\
&= d_{XZ}(x_1,x_2).
\end{align*}
The remaining triangle inequalities
$d_{XZ}(z_1,z_2) + d_{XZ}(z_2,x) \geq d_{XZ}(z_1,x)$
and $d_{XZ}(z_1,x) + d_{XZ}(x,z_2) \geq d_{XZ}(z_1,z_2)$,
where $x \in X$ and $z_1,z_2 \in Z$,
can be proven analogously.
With similar arguments,
one can prove that $d_{XYZ}$ defines an admissible metric on $X \amalg Y \amalg Z$ where
\[
d_{XYZ}(x,y) :=
\begin{cases}
d_{XY}(x,y) &\text{if } x,y \in X \amalg Y, \\
d_{XZ}(x,y) &\text{if } x,y \in X \amalg Z, \\
d_{YZ}(x,y) &\text{if } x,y \in Y \amalg Z.
\end{cases}
\]
With those admissible metrics,
\begin{align*}
\lefteqn{d_{\textit{GH}}((X,x_0),(Z,z_0))}\\
& \leq d_{\textit{H}}^{d_{XYZ}} (X,Z) + d_{XYZ} (x_0,z_0) \\
&\leq d_{\textit{H}}^{d_{XYZ}} (X,Y) + d_{\textit{H}}^{d_{XYZ}} (Y,Z) + d_{XYZ} (x_0,y_0) + d_{XYZ} (y_0,z_0) \\
&\leq d_{\textit{H}}^{d_{XY}} (X,Y) + d_{\textit{H}}^{d_{YZ}} (Y,Z) + d_{XY} (x_0,y_0) + d_{YZ} (y_0,z_0) \\
&< d_{\textit{GH}}((X,x_0),(Y,y_0)) + d_{\textit{GH}}((Y,y_0),(Z,z_0)) + 2 \eps,
\end{align*}
where in the second last inequality the fact is used
that for every $r > 0$
the inclusion $X \subseteq B^{d_{XY}}_r(Y)$ implies the inclusion $X \subseteq B^{d_{XYZ}}_r(Y)$.
Now letting $\eps \to 0$ proves the triangle inequality for $d_{\textit{GH}}$.
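The glued metric $d_{XZ}$ can be illustrated on a toy example (Python sketch; the spaces and names are ours):

```python
# Gluing admissible metrics through Y: d_XZ(x,z) = inf_y { d_XY(x,y) + d_YZ(y,z) }.
# Toy example: X, Y, Z finite subsets of R, with the ambient distance as the
# admissible metrics d_XY and d_YZ.
X, Y, Z = [0.0, 1.0], [0.0, 0.5, 1.0], [0.2, 0.8]
ab = lambda a, b: abs(a - b)
d_XZ = lambda x, z: min(ab(x, y) + ab(y, z) for y in Y)

for x1 in X:
    for x2 in X:
        for z in Z:
            # the two mixed triangle inequalities checked in the proof
            assert d_XZ(x1, z) <= ab(x1, x2) + d_XZ(x2, z) + 1e-12
            assert ab(x1, x2) <= d_XZ(x1, z) + d_XZ(x2, z) + 1e-12
```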
\par\smallskip\noindent\textit{Step 4: $d_{\textit{GH}}$ defines a metric up to isometry.}
It is easy to see that the distance of isometric pointed compact spaces vanishes:
Let $(X,p)$ and $(Y,q)$ be isometric via an isometry $f : X \to Y$ with $f(p) = q$, and let $g := f^{-1}$.
Then $(f,g) \in \Isom{\eps/2}((X,p),(Y,q))$
for arbitrary $\eps > 0$.
By \autoref{dgh_small_iff_eps_approx}, $d_{\textit{GH}}((X,p),(Y,q)) \leq \eps$ for every $\eps > 0$.
Hence, $d_{\textit{GH}}((X,p),(Y,q))=0$.
Conversely, let $(X,p)$ and $(Y,q)$ be two pointed compact metric spaces
satisfying $d_{\textit{GH}}((X,p),(Y,q)) = 0$.
By definition, for each $n \geq 1$
there is an admissible metric $d_n$ on $X \amalg Y$ with $d_{\textit{H}}^{d_n}(X,Y) + d_n(p,q) < \frac{1}{n}$.
Since $X$ is compact and thus separable,
there exists a countable dense subset $X' = \{x_i \mid i \in \nn\} \subseteq X$ with $x_0 = p$.
Define $y^0_n := q$.
The constant sequence $(y^0_n)_{n \in \nn}$ converges to $q$,
and for each $n$, $d_n(x_0,y^0_n) = d_n(p,q) < \frac{1}{n}$.
Because of $d_{\textit{H}}^{d_n}(X,Y) < \frac{1}{n}$,
there exists some $y^1_n \in Y$ with $d_n(x_1, y^1_n) < \frac{1}{n}$.
Since $Y$ is compact,
$(y^1_n)_n$ has a convergent subsequence $(y^1_{n_{i}})_{i \in \nn}$ with some limit $y_1 \in Y$.
Then
\[
d_{n_{i}} (x_1,y_1)
\leq d_{n_{i}}(x_1, y^1_{n_{i}}) + d_{n_{i}}(y^1_{n_{i}}, y_1)
\to 0 \textrm{ as } i \to \infty.
\]
The same argument for $x_2$ gives a subsequence $d_{n_{i_j}}$ of $d_{n_{i}}$
and a point $y_2 \in Y$ with $d_{n_{i_j}}(x_2,y_2) \to 0$ as $j \to \infty$.
By a diagonal argument,
there is a subsequence $d_l$ of $d_n$ and a sequence $(y_i)_{i \in \nn}$ in $Y$ with $y_0 = q$
such that $d_l(x_i,y_i) \to 0$ as $l \to \infty$ for all $i$.
Define $f: X' \to Y$ by $f(x_i) := y_i$. Since the $d_l$ are admissible metrics, for each $l$,
\begin{align*}
d_Y(f(x_i), f(x_j))
&= d_l(f(x_i), f(x_j))
= d_l(y_i,y_j)
\intertext{and}
d_X(x_i,x_j)
&= d_l(x_i,x_j).
\end{align*}
Therefore,
\begin{align*}
|d_Y(f(x_i), f(x_j)) - d_X(x_i,x_j)|
&= |d_l(y_i,y_j) - d_l(x_i,x_j)| \\
&\leq d_l(y_i,x_i) + d_l(x_j,y_j) \\
&\to 0 \textrm{ as } l \to \infty.
\end{align*}
Hence, $f$ is distance-preserving on $X'$.
Since $X'$ is dense and $Y$ is complete,
$f$ can be extended uniquely to an isometric embedding $f : X \to Y$ with $f(p) = q$.
With a similar construction and using a subsequence of $d_l$,
there is an isometric embedding $g : Y \to X$ with $g(q) = p$.
After passing to this subsequence, for each $x$,
\begin{align*}
d_l(g \circ f (x), x)
\leq d_l(g(f(x)),f(x)) + d_l(f(x), x)
\to 0 \textrm{ as } {l \to \infty}.
\end{align*}
Thus, $d_X(g \circ f(x),x) = 0$ for all $x$, first on a dense subset and hence, by continuity, everywhere, i.e.~$g \circ f = \operatorname{id}_X$.
Since $g$ is injective, this implies that $f$ is surjective.
Hence, $f$ is an isometry with $f(p) = q$, i.e.~$(X,p)$ and $(Y,q)$ are isometric.
\qedhere
\end{proof}
The definitions of pointed and non-pointed Gromov-Haus\-dorff distance
essentially give the same notion of convergence.
This will be proven next.
\begin{prop}\label{prop_GH:comparison_pointed_unpointed_compact}
Let $X$ and $Y$ be compact metric spaces.
\begin{enumerate}
\item\label{prop_GH:comparison_pointed_unpointed_compact-a}
For each $x \in X$ and $y \in Y$,
\[d_{\textit{GH}}(X,Y) \leq d_{\textit{GH}}((X,x),(Y,y)).\]
\item\label{prop_GH:comparison_pointed_unpointed_compact-b}
For any $x \in X$ there exists $y \in Y$ such that
\[d_{\textit{GH}}((X,x),(Y,y)) \leq 2 \cdot d_{\textit{GH}}(X,Y).\]
\end{enumerate}
\end{prop}
\begin{proof}
Both statements follow easily from the definitions:
\par\smallskip\noindent\ref{prop_GH:comparison_pointed_unpointed_compact-a}
First, let $x \in X$ and $y \in Y$ be arbitrary. Then
\begin{align*}
d_{\textit{GH}}(X,Y)
&= \inf \{ d_{\textit{H}}^d(X,Y) \mid d \text{ admissible metric on } X \amalg Y\} \\
&\leq \inf \{ d_{\textit{H}}^d(X,Y) + d(x,y) \mid d \text{ admissible metric on } X \amalg Y\}
\displaybreak[0]\\
&= \inf \{ d_{\textit{H}}^d((X,x),(Y,y)) \mid d \text{ admissible metric on } X \amalg Y\} \\
&= d_{\textit{GH}}((X,x),(Y,y)).
\end{align*}
\par\smallskip\noindent\ref{prop_GH:comparison_pointed_unpointed_compact-b}
Now let $r := d_{\textit{GH}}(X,Y) \geq 0$.
For arbitrary $n \in \nn$, let $d_n$ be an admissible metric on $X \amalg Y$ satisfying
\[d_{\textit{H}}^{d_n}(X,Y) < d_{\textit{GH}}(X,Y) + \frac{1}{n} = r + \frac{1}{n}.\]
Thus, $X \subseteq \B^{d_n}_{r+1/n}(Y)$,
i.e.~for given $x \in X$ there exists $y_n \in Y$ such that $d_n(x,y_n) \leq r+\frac{1}{n}$.
Since $Y$ is compact,
there exists a convergent subsequence $(y_{n_m})_{m \in \nn}$ of $(y_n)_{n \in \nn}$
with limit $y \in Y$.
Then
\begin{align*}
&d_{\textit{H}}^{d_{n_m}}((X,x),(Y,y))\\
&= d_{\textit{H}}^{d_{n_m}}(X,Y) + d_{n_m}(x,y) \\
&\leq r + \frac{1}{n_m} + d_{n_m}(x,y_{n_m}) + d_{n_m}(y_{n_m},y) \\
&\leq 2r + \frac{2}{n_m} + d_Y(y_{n_m},y)
\intertext{and}
&d_{\textit{GH}}((X,x),(Y,y)) \\
&= \inf \{ d_{\textit{H}}^d((X,x),(Y,y)) \mid d \text{ admissible metric on } X \amalg Y\} \\
&\leq \inf \{ d_{\textit{H}}^{d_{n_m}}((X,x),(Y,y)) \mid m \in \nn\} \\
&\leq \inf \{ 2r + \frac{2}{n_m} + d_Y(y_{n_m},y) \mid m \in \nn\} \\
& = 2r.
\qedhere
\end{align*}
\end{proof}
It is not hard to give examples where the inequality in
\autoref{prop_GH:comparison_pointed_unpointed_compact}
\ref{prop_GH:comparison_pointed_unpointed_compact-a}
is strict
or where equality in \ref{prop_GH:comparison_pointed_unpointed_compact-b}
holds for either all or none of the points.
In order to improve readability of the example,
the following two statements are proven first.
\begin{lemma}\label{lem:comparison_hausdorff_distance_pointed_unpointed}
Let $(X,d)$ be a metric space, $a \in A \subseteq X$ and $b \in X$.
Then $d_{\textit{H}}((A,a), (\{b\},b)) \geq d_{\textit{H}}((A,a), (\{a\},a)) = \sup\{d(a,a') \mid a' \in A\}$.
\end{lemma}
\begin{proof}
First,
recall
\begin{align*}
d_{\textit{H}}(A,\{b\})
&= \inf\{r > 0 \mid A \subseteq B_r(b), b \in B_r(A)\}
\\& = \max\{ \inf\{r > 0 \mid A \subseteq B_r(b)\},
\inf\{r > 0 \mid b \in B_r(A)\} \}
\\& = \max\{ \sup\{d(a',b) \mid a' \in A\}, d(A,b)\}.
\intertext{In particular,
\[d_{\textit{H}}((A,a), (\{a\},a))
= d_{\textit{H}}(A,\{a\})
= \sup\{d(a',a) \mid a' \in A\}.\]
Moreover,}
d_{\textit{H}}((A,a), (\{b\},b))
&= d_{\textit{H}}(A,\{b\}) + d(a,b)
\\& = \max\{ \sup\{d(a',b) \mid a' \in A\}, d(A,b)\} + d(a,b)
\\&\geq \sup\{d(a',b) + d(a,b) \mid a' \in A\}
\\&\geq \sup\{d(a',a) \mid a' \in A\}
\\&= d_{\textit{H}}((A,a), (\{a\},a)).
\qedhere
\end{align*}
\end{proof}
\begin{prop}\label{lem:comparison_compact_gromov_hausdorff_distance_pointed_unpointed}
Let $(X,d_X)$ be a compact metric space and $x \in X$.
Then
$d_{\textit{GH}}((X,x),(\{\pt\},\pt)) = \sup\{d_X(x,p) \mid p \in X\}$.
\end{prop}
\begin{proof}
By \autoref{lem:comparison_hausdorff_distance_pointed_unpointed},
\begin{align*}
d_{\textit{GH}}((X,x),(\{\pt\},\pt))
&= \inf \{d_{\textit{H}}^d((X,x),(\{\pt\},\pt)) \mid d \text{ adm.~on } X \amalg \{\pt\}\}
\\&\geq \inf \{\sup\{d(x,p) \mid p \in X\} \mid d \text{ adm.~on } X \amalg \{\pt\}\}
\\&= \sup\{d_X(x,p) \mid p \in X\}
\\&= d_{\textit{H}}^{d_X}((X,x), (\{x\},x)).
\intertext{On the other hand,}
d_{\textit{GH}}((X,x),(\{\pt\},\pt))
&\leq d_{\textit{H}}^{d_X}((X,x), (\{x\},x))
\end{align*}
and this proves the claim.
\end{proof}
The following example shows
that $d_{\textit{GH}}((X,x),(Y,y))$ may attain any value between $d_{\textit{GH}}(X,Y)$ and $2 \cdot d_{\textit{GH}}(X,Y)$.
\begin{exm}
Let $D^2 = \{x \in \rr^2 \mid \norm{x} \leq 1\}$ denote the disk of radius $1$ in $\rr^2$.
By \autoref{lem:comparison_compact_gromov_hausdorff_distance_pointed_unpointed},
for arbitrary $x \in D^2$,
\[ d_{\textit{GH}}((D^2,x),(\{\pt\},\pt)) = \sup\{d_{D^2}(x,p) \mid p \in D^2\} = \norm{x} + 1.\]
Hence, for any $\lambda \in [1,2]$,
every point $x$ with $\norm{x} = \lambda - 1$ satisfies
\[d_{\textit{GH}}((D^2,x), (\{\pt\},\pt)) = \lambda \cdot d_{\textit{GH}}(D^2, \{\pt\}).\]
\end{exm}
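The value computed in the example can be checked numerically. The following Python sketch (an illustration only, not part of the formal development; the function name \texttt{sup\_dist\_to\_disk} is ad hoc) samples the boundary circle of $D^2$, where the supremum is attained, and verifies $\sup\{d_{D^2}(x,p) \mid p \in D^2\} = \norm{x} + 1$ for points with $\norm{x} = \lambda - 1$.

```python
import math

def sup_dist_to_disk(x, n=400):
    """Approximate sup_{p in D^2} |x - p| by sampling the unit circle.

    The supremum over the closed unit disk is attained on the boundary,
    so sampling boundary points suffices.
    """
    best = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        p = (math.cos(t), math.sin(t))
        best = max(best, math.hypot(x[0] - p[0], x[1] - p[1]))
    return best

# For x with norm lambda - 1, the supremum is norm(x) + 1 = lambda.
for lam in (1.0, 1.5, 2.0):
    x = (lam - 1.0, 0.0)
    assert abs(sup_dist_to_disk(x) - lam) < 1e-3
```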
In particular, two extreme cases occur
in the situation of \autoref{prop_GH:comparison_pointed_unpointed_compact}:
For $X = D^2$, $Y = \{\pt\}$ and $x = (0,0) \in \rr^2$,
there is no $y \in Y$ with
\[d_{\textit{GH}}((X,x),(Y,y)) = 2 \cdot d_{\textit{GH}}(X,Y).\]
On the contrary, in this case,
$d_{\textit{GH}}((X,x),(Y,y)) = d_{\textit{GH}}(X,Y)$ for all $y \in Y$.
On the other hand, if $x = (1,0) \in \rr^2$, then
\[d_{\textit{GH}}((X,x),(Y,y)) = 2 \cdot d_{\textit{GH}}(X,Y)\]
for all $y \in Y$.
\begin{defn}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed compact metric spaces.
\begin{enumerate}
\item
If $d_{\textit{GH}}(X_i, X) \to 0$ as $i \to \infty$,
then \emph{$X_i$ converges to $X$}.
\item
If $d_{\textit{GH}}((X_i,p_i), (X,p)) \to 0$ as $i \to \infty$,
then \emph{$(X_i,p_i)$ converges to $(X,p)$}.
\end{enumerate}
If $X_i$ converges to $X$, this is denoted by $X_i \to X$.
If $(X_i,p_i)$ converges to $(X,p)$, this is denoted by $(X_i,p_i) \to (X,p)$.
\end{defn}
\begin{cor}\label{cor:compact_case:pointed=non-pointed_convergence}
Let $(X,d_X)$ and $(X_i,d_{X_i})$, $i \in \nn$, be compact metric spaces.
\begin{enumerate}
\item
If $(X_i,x_i) \to (X,x)$ for some $x_i \in X_i$ and $x \in X$,
then $X_i \to X$ as well.
\item
If $X_i \to X$ and $x \in X$,
then there exist points $x_i \in X_i$ such that $(X_i,x_i) \to (X,x)$.
\end{enumerate}
\end{cor}
\begin{proof}
This is a direct consequence of
\autoref{prop_GH:comparison_pointed_unpointed_compact}.
\end{proof}
Recall that a metric space $(X,d_X)$ is called \emph{length space} if
\[d(x,y) = \inf\{L(c) \mid c \text{ continuous curve from } x \text{ to } y\}\]
for any $x,y \in X$, where $L(c)$ denotes the length of $c$.
\begin{prop}\label{prop:cpt_length_spaces_cvg_to_length_spaces}
A complete compact Gro\-mov-Haus\-dorff\xspace limit of compact length spaces is a length space.
\end{prop}
In the proof, the following statement is used.
\begin{lemma}[{cf.~\cite[Theorem 2.4.16]{burago}}]\label{lem:charact_length_space}
Let $(X,d)$ be a complete metric space.
Then $(X,d)$ is a length space
if and only if for all $x,y \in X$ and $\eps > 0$ there exists an $\eps$-midpoint,
i.e.~a point $z \in X$ with
$|2 d(x,z) - d(x,y)| \leq \eps$ and $|2 d(y,z) - d(x,y)| \leq \eps$.
\end{lemma}
\begin{proof}
First, let $(X,d)$ be a length space and $x,y \in X$ and $\eps > 0$ be arbitrary.
Since $X$ is a length space, there exists a curve $c: [0,L] \to X$ with
$c(0) = x$, $c(L) = y$ and length $L(c) \leq d(x,y) + \eps$.
Without loss of generality, assume $c$ to be parametrised by arc length.
In particular, $L = L(c) \leq d(x,y) + \eps$.
Define $z := c(\frac{L}{2})$.
Clearly,
\begin{align*}
2d(x,z)
\leq 2 \cdot L(c_{|[0,\frac{L}{2}]})
= L \leq d(x,y) + \eps,
\end{align*}
and analogously,
$2d(y,z) \leq d(x,y) + \eps$.
Now assume $d(y,z) - d(x,z) > \eps$.
Then
\begin{align*}
d(x,y)
\leq d(x,z) + d(z,y)
< 2 d(y,z) - \eps
\leq d(x,y),
\end{align*}
and this is a contradiction.
Hence,
\begin{align*}
d(x,y) - 2 d(x,z)
\leq d(y,z) - d(x,z)
\leq \eps.
\end{align*}
Analogously, $|2 d(y,z) - d(x,y)| \leq \eps$.
Now let $X$ be a metric space
such that for all pairs of points and $\eps > 0$ there exists an $\eps$-midpoint,
and let $x,y \in X$ be arbitrary.
If for every $\eps>0$ there is a curve $\gamma$ connecting $x$ and $y$
of length $L(\gamma) \leq d(x,y) + \eps$,
then \[\inf\{L(\gamma) \mid \gamma~\text{connects}~x~\xspace\textrm{and}\xspace~y\} = d(x,y)\] and
this proves that $(X,d)$ is a length space.
So, let $L := d(x,y)$, $\eps > 0$ be arbitrary and define $\gamma$ inductively as follows:
First, let $\gamma(0) = x$ and $\gamma(1) = y$.
Now, assume $\gamma(\frac{k}{2^m})$ to be defined
for some $m \in \nn$ and all $k \in \nn$ with $0 \leq k \leq 2^m$.
For odd $1 \leq k \leq 2^{m+1} - 1$,
let $\gamma(\frac{k}{2^{m+1}})$ be an
$\frac{\eps}{2^{2m+1}}$-midpoint
of $\gamma(\frac{k-1}{2^{m+1}})$ and $\gamma(\frac{k+1}{2^{m+1}})$.
Inductively,
$d(\gamma(\frac{k}{2^m}), \gamma(\frac{k+1}{2^m}))
\leq \frac{L}{2^m} + \frac{\eps}{2^m} \cdot \sum_{i=1}^{m} \frac{1}{2^i}$:
For $m=0$, by definition, $d(\gamma(0), \gamma(1)) = L$.
Let the statement be true for some $m \in \nn$,
and let $0 \leq k \leq 2^{m+1}-1$.
First assume $k = 2l + 1$ to be odd.
Then
\begin{align*}
2 \cdot d\big(\gamma\Big(\frac{k}{2^{m+1}}\Big), \gamma\Big(\frac{k+1}{2^{m+1}}\Big)\big)
& \leq \frac{\eps}{2^{2m+1}}
+ d\big(\gamma\Big(\frac{l}{2^{m}}\Big), \gamma\Big(\frac{l+1}{2^{m}}\Big)\big)
\\& \leq \frac{\eps}{2^{2m+1}}
+ \frac{L}{2^m} + \frac{\eps}{2^m} \cdot \sum_{i=1}^{m} \frac{1}{2^i}
\\& = \frac{L}{2^m} + \frac{\eps}{2^m} \cdot \sum_{i=1}^{m+1} \frac{1}{2^i}.
\end{align*}
Dividing by $2$ yields the claimed estimate for $m+1$; the case of even $k$ can be treated analogously.
Observe
\begin{align*}
d\big(\gamma\Big(\frac{k}{2^m}\Big), \gamma\Big(\frac{k+1}{2^m}\Big)\big)
\leq \frac{L}{2^m} + \frac{\eps}{2^m} \cdot \sum_{i=1}^{m} \frac{1}{2^i}
\leq \frac{L + \eps}{2^m}.
\end{align*}
Hence, for all $m \in \nn$ and $0 \leq k < l \leq 2^m$,
\begin{align*}
d\big(\gamma\Big(\frac{k}{2^m}\Big), \gamma\Big(\frac{l}{2^m}\Big)\big)
&\leq \sum_{j=k}^{l-1} d\big(\gamma\Big(\frac{j}{2^m}\Big), \gamma\Big(\frac{j+1}{2^m}\Big)\big)
\\&\leq \sum_{j=k}^{l-1} \frac{L+\eps}{2^m}
= (L + \eps) \cdot \Big(\frac{l}{2^m} - \frac{k}{2^m}\Big).
\end{align*}
In particular, defined as a function on the dyadic numbers in $[0,1]$, $\gamma$ is Lipschitz.
Thus, it can be extended to a Lipschitz, hence continuous, curve $\gamma: [0,1] \to X$
where $\gamma(t)$ is defined as the limit of $\gamma(t_n)$ for dyadic numbers $t_n \to t$.
For $0 \leq s < t \leq 1$ and dyadic numbers $s_n \to s$ and $t_n \to t$,
\begin{align*}
d(\gamma(s), \gamma(t))
&= \lim_{n \to \infty} d(\gamma(s_n), \gamma(t_n))
\\&\leq \lim_{n \to \infty} (L + \eps) \cdot |t_n-s_n|
= (L + \eps) \cdot (t-s).
\end{align*}
Therefore,
\begin{align*}
L(\gamma)
&= \sup\Big\{ \sum_{i=0}^{N-1} d(\gamma(t_i), \gamma(t_{i+1}))
\mid N \in \nn, 0 = t_0 < t_1 < \ldots < t_N = 1 \Big\}
\\&\leq \sup\Big\{ \sum_{i=0}^{N-1} (L + \eps) \cdot (t_{i+1}-t_i)
\mid N \in \nn,0 = t_0 < t_1 < \ldots < t_N = 1 \Big\}
\\&= L + \eps.
\qedhere
\end{align*}
\end{proof}
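In a space where exact midpoints exist, e.g.~Euclidean $\rr^2$, which corresponds to $\eps = 0$ in the construction above, the dyadic construction can be carried out explicitly. The following Python sketch (an illustration only; all names are ad hoc) builds $\gamma$ on the grid $k/2^m$ and checks the adjacent-point estimate $d(\gamma(\frac{k}{2^m}), \gamma(\frac{k+1}{2^m})) \leq \frac{L}{2^m}$.

```python
import math

def dyadic_curve(x, y, m):
    """Build gamma on the dyadic grid k/2^m by taking exact midpoints.

    In Euclidean space exact midpoints exist (eps = 0 in the construction),
    so gamma traces the straight segment from x to y.
    """
    pts = {0.0: x, 1.0: y}
    for level in range(1, m + 1):
        step = 1.0 / 2 ** level
        for k in range(1, 2 ** level, 2):  # odd k: newly defined points
            a, b = pts[(k - 1) * step], pts[(k + 1) * step]
            pts[k * step] = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return pts

x, y = (0.0, 0.0), (3.0, 4.0)
L = math.hypot(3.0, 4.0)  # d(x, y) = 5
m = 4
pts = dyadic_curve(x, y, m)
step = 1.0 / 2 ** m
for k in range(2 ** m):
    a, b = pts[k * step], pts[(k + 1) * step]
    # adjacent dyadic points lie at distance at most L / 2^m
    assert math.hypot(a[0] - b[0], a[1] - b[1]) <= L / 2 ** m + 1e-9
```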
\begin{proof}[Proof of \autoref{prop:cpt_length_spaces_cvg_to_length_spaces}]
Let $x,y \in X$ and $\eps > 0$ be arbitrary.
Applying \autoref{lem:charact_length_space},
it is enough to find an $\eps$-midpoint $z$ of $x$ and $y$.
Choose $i \in \nn$ such that $d_{\textit{GH}}(X_i,X) < \frac{\eps}{12}$.
By \autoref{dgh_small_iff_eps_approx},
there exist $(f,g) \in \Isom{\eps/6}(X_i,X)$.
Let $z'$ be an $\frac{\eps}{6}$-midpoint of $g(x)$ and $g(y)$,
and define $z := f(z')$.
Then
\begin{align*}
|2 d_X(x,z) - d_X(x,y)|
&\leq |2 d_X(x,z) - 2 d_{X_i}(g(x),g(z))|
\\&\quad + |2 d_{X_i}(g(x),g(z)) - 2 d_{X_i}(g(x),z')|
\\&\quad + |2 d_{X_i}(g(x),z') - d_{X_i}(g(x),g(y))|
\\&\quad + |d_{X_i}(g(x),g(y)) - d_X(x,y)|
\\& < 2 \cdot \frac{\eps}{6}
+ 2 \cdot d_{X_i}(g \circ f(z'),z')
+ \frac{\eps}{6}
+ \frac{\eps}{6}
\\& < \eps.
\end{align*}
Analogously, $|2 d_X(y,z) - d_X(x,y)| < \eps$.
\end{proof}
In general, the Gro\-mov-Haus\-dorff\xspace distance of two subsets of the same metric space,
equipped with the induced metric,
can be estimated by their Hausdorff distance.
If this metric space is a length space and the subsets are balls,
this estimate can be expressed by using the radii and the distance of the base points.
This uses the property of length spaces
that the $r$-ball around a ball of radius $s$
coincides with the $(r+s)$-ball (around the same base point).
\begin{lemma}\label{lem_GH:ball_around_ball_is_ball_of_radii-sum}
Let $(X,d)$ be a length space, $p \in X$ and $r,s > 0$.
Then \[B_r(B_s(p)) = B_{r+s}(p).\]
\end{lemma}
\begin{proof}
Let $q \in B_r(B_s(p))$, i.e.~there exists $x \in B_s(p)$ with $d(x,q) < r$.
Then \[d(q,p) \leq d(q,x) + d(x,p) < r+s\] proves $B_r(B_s(p)) \subseteq B_{r+s}(p)$.
In fact, this inclusion holds in every metric space.
Conversely, let $q \in B_{r+s}(p)$.
Since $B_s(p) \subseteq B_r(B_s(p))$,
without loss of generality,
assume $q \in B_{r+s}(p) \setminus B_s(p)$.
Let $l := d(p,q)$ denote the distance of $p$ and $q$.
In particular, $s \leq l < r+s$.
Fix a shortest geodesic $\gamma: [0,l] \to X$ with $\gamma(0) = p$ and $\gamma(l) = q$.
Define $\eps := \frac{1}{2} \cdot \min\{s,r+s-l\} > 0$ and $t := s-\eps \in (0,s) \subseteq [0,l]$.
Then
\begin{align*}
d(\gamma(t),p) &= t < s
\intertext{and}
d(\gamma(t),q) &= l-t = l-s + \eps < l-s+r+s-l = r.
\end{align*}
Hence, $\gamma(t) \in B_s(p)$ and $q \in B_r(\gamma(t))$.
Thus, $B_{r+s}(p) \subseteq B_r(B_s(p))$.
\end{proof}
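The reverse inclusion genuinely requires the length-space assumption. A minimal numerical illustration (a Python sketch with ad hoc names): in the two-point space $X = \{0,3\} \subseteq \rr$ with the induced metric, $B_2(B_2(0))$ is strictly smaller than $B_4(0)$.

```python
# Two-point space X = {0, 3} with the metric induced from R: not a length space.
X = [0.0, 3.0]
d = lambda a, b: abs(a - b)

def ball(center, radius):
    """Open ball in X around a point of X."""
    return [x for x in X if d(x, center) < radius]

p, r, s = 0.0, 2.0, 2.0
B_s = ball(p, s)                                              # {0}
B_r_of_B_s = [x for x in X if any(d(x, y) < r for y in B_s)]  # still {0}
B_r_plus_s = ball(p, r + s)                                   # {0, 3}

assert B_r_of_B_s == [0.0]
assert B_r_plus_s == [0.0, 3.0]  # strict inclusion B_r(B_s(p)) in B_{r+s}(p)
```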
\begin{lemma}
Let $(X,d)$ be a length space, $p,q \in X$, $r,s > 0$.
Then \[d_{\textit{H}}^d(\B_r(p), \B_s(q)) \leq d(p,q) + |r-s|.\]
\end{lemma}
\begin{proof}
Let $\eps := d(p,q) + |r-s|$.
If $\eps = 0$, the claim holds due to $p=q$ and $r=s$.
Hence, assume $\eps > 0$.
Then, applying \autoref{lem_GH:ball_around_ball_is_ball_of_radii-sum},
\[
B_r(p)
\subseteq B_{d(p,q) + r}(q)
\subseteq B_{d(p,q) + |r-s|+s}(q)
= B_{\eps + s}(q)
= B_{\eps}(B_s(q)).
\]
Analogously, $B_s(q) \subseteq B_{\eps}(B_r(p))$.
Therefore, \[d_{\textit{H}}^d(\B_r(p),\B_s(q)) = d_{\textit{H}}^d(B_r(p),B_s(q)) \leq \eps.\qedhere\]
\end{proof}
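For closed intervals in $\rr$ (a length space) the Hausdorff distance can be computed directly from the endpoints, which allows a quick numerical sanity check of the bound $d_{\textit{H}}^d(\B_r(p), \B_s(q)) \leq d(p,q) + |r-s|$. The Python sketch below is illustrative only; the helper name is ad hoc.

```python
import random

def hausdorff_intervals(p, r, q, s):
    """Hausdorff distance of the closed intervals [p-r, p+r] and [q-s, q+s] in R.

    For intervals on the real line this is the maximum endpoint deviation.
    """
    return max(abs((p - r) - (q - s)), abs((p + r) - (q + s)))

random.seed(0)
for _ in range(1000):
    p, q = random.uniform(-5, 5), random.uniform(-5, 5)
    r, s = random.uniform(0.1, 5), random.uniform(0.1, 5)
    # bound from the lemma: d_H(B_r(p), B_s(q)) <= d(p, q) + |r - s|
    assert hausdorff_intervals(p, r, q, s) <= abs(p - q) + abs(r - s) + 1e-9
```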
\begin{cor}\label{lem_GH:estimate_GH-distance_of_balls_in_same_space}
Let $(X,d)$ be a length space, $p,q \in X$, $r,s > 0$. Then
\begin{enumerate}
\item $d_{\textit{GH}}((\B^X_r(p),p),(\B^X_s(p),p)) \leq |r-s|$,
\item $\dghp{r}{X}{p}{X}{q} \leq 2 d(p,q)$.
\end{enumerate}
\end{cor}
The diameters of metric spaces with small Gro\-mov-Haus\-dorff\xspace distance are almost the same.
In particular, for a convergent sequence of metric spaces,
their diameters converge to the diameter of the limit space.
\begin{prop}\label{prop:GH_small->diff_diam_small}
For compact metric spaces $(X,d_X)$ and $(Y,d_Y)$, \[|\diam(X)-\diam(Y)| \leq 2 d_{\textit{GH}}(X,Y).\]
In particular, if $X_i \to X$ for compact metric spaces $(X_i,d_{X_i})$, $i \in \nn$,
then \[\diam(X_i) \to \diam(X).\]
\end{prop}
\begin{proof}
Let $\eps := d_{\textit{GH}}(X,Y)$, $\delta > 0$ and $d$ be an admissible metric on $X \amalg Y$
such that \[d_{\textit{H}}^d(X,Y) < d_{\textit{GH}}(X,Y) + \delta = \eps + \delta.\]
This implies $Y \subseteq B^d_{\eps + \delta}(X)$.
Thus, for any $y_1, y_2 \in Y$ there are $x_1, x_2 \in X$
with $d(x_i,y_i) < \eps + \delta$ for $1 \leq i \leq 2$.
Hence,
\[
d_Y(y_1,y_2)
\leq d(y_1,x_1) + d_X(x_1,x_2) + d(x_2,y_2)
< 2 \eps + 2 \delta + \diam(X).
\]
Therefore,
\[
\diam(Y)
= \sup\{d_Y(y_1,y_2) \mid y_1,y_2 \in Y\}
\leq \diam(X) + 2 \eps + 2 \delta.
\]
Since $\delta > 0$ was arbitrary, $\diam(Y) \leq \diam(X) + 2 \eps$.
The other inequality can be proven analogously.
\end{proof}
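Since the Gro\-mov-Haus\-dorff\xspace distance of two subsets of a common metric space is bounded by their Hausdorff distance, the estimate above implies, and can be sanity-checked through, the analogous Hausdorff-distance bound $|\diam(A) - \diam(B)| \leq 2 d_{\textit{H}}(A,B)$ for finite point sets. A Python sketch (illustration only, ad hoc names):

```python
import math, random

def euclid(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def diam(A):
    """Diameter of a finite point set in the plane."""
    return max(euclid(a, b) for a in A for b in A)

def hausdorff(A, B):
    """Hausdorff distance of finite point sets in the plane."""
    sup_a = max(min(euclid(a, b) for b in B) for a in A)
    sup_b = max(min(euclid(a, b) for a in A) for b in B)
    return max(sup_a, sup_b)

random.seed(2)
for _ in range(200):
    A = [(random.random(), random.random()) for _ in range(6)]
    B = [(random.random(), random.random()) for _ in range(6)]
    # d_GH(A, B) <= d_H(A, B), so the diameter bound holds a fortiori
    assert abs(diam(A) - diam(B)) <= 2 * hausdorff(A, B) + 1e-9
```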
\begin{cor}
If $(X,d)$ is a compact metric space and $\{\pt\}$ the space consisting of only one point,
then $d_{\textit{GH}}(X,\{\pt\}) = \frac{1}{2} \cdot \diam(X)$.
\end{cor}
\begin{proof}
By \autoref{prop:GH_small->diff_diam_small}, $\diam(X) \leq 2 \cdot d_{\textit{GH}}(X,\{\pt\})$.
Thus, only the other inequality has to be proven.
Let $\delta := \frac{1}{2} \cdot \diam(X)$,
and define an admissible metric $d$ on the disjoint union $X \amalg \{\pt\}$
by $d(x,\pt) := \delta$.
As usual, only the triangle inequality needs to be checked.
For arbitrary $x_1,x_2 \in X$,
\begin{align*}
&d(x_1,x_2) + d(x_2,\pt) = d(x_1,x_2) + \delta \geq \delta = d(x_1,\pt) \quad \xspace\textrm{and}\xspace\\
&d(x_1,\pt) + d(\pt,x_2) = 2 \delta = \diam(X) \geq d(x_1,x_2).
\end{align*}
Using this metric,
\[d_{\textit{GH}}(X,\{\pt\}) \leq d_{\textit{H}}^d(X,\{\pt\}) = \delta.\qedhere\]
\end{proof}
For a metric space $(X,d_X)$,
let $\lambda X$ denote the rescaled metric space $(\lambda X, d_{\lambda X}) := (X, \lambda d_X)$.
Rescaling of compact metric spaces behaves nicely under Gro\-mov-Haus\-dorff\xspace distance.
For any $p \in X$ and $r > 0$, observe
\[B_r^X(p)
= \{q \in X \mid d_X(q,p) < r\}
= \{q \in X \mid \lambda d_X(q,p) < \lambda r\}
= B_{\lambda r}^{\lambda X}(p).
\]
\begin{lemma}
Let $(X,d_X)$ and $(Y, d_Y)$ be compact metric spaces.
For the Hausdorff distance, $d_{\textit{H}}^{\lambda X}= \lambda \cdot d_{\textit{H}}^X$
(both in the standard and in the pointed case).
For the Gro\-mov-Haus\-dorff\xspace distance, both $d_{\textit{GH}}(\lambda X, \lambda Y) = \lambda \cdot d_{\textit{GH}}(X,Y)$
and, for all $x \in X$ and $y \in Y$,
$d_{\textit{GH}}((\lambda X,x), (\lambda Y,y)) = \lambda \cdot d_{\textit{GH}}((X,x),(Y,y))$.
\end{lemma}
\begin{proof}
First, let $A,B \subseteq X$. Then
\begin{align*}
d_{\textit{H}}^{\lambda X}(A,B)
&= \inf\{\eps > 0
\mid A \subseteq B^{\lambda X}_{\eps}(B)~\xspace\textrm{and}\xspace~B \subseteq B^{\lambda X}_{\eps}(A)\} \\
&= \inf\{\lambda \tilde{\eps} > 0
\mid A \subseteq B^X_{\tilde{\eps}}(B)~\xspace\textrm{and}\xspace~B \subseteq B^X_{\tilde{\eps}}(A)\} \\
&= \lambda \cdot \inf\{\tilde{\eps} > 0
\mid A \subseteq B^X_{\tilde{\eps}}(B)~\xspace\textrm{and}\xspace~B \subseteq B^X_{\tilde{\eps}}(A)\} \\
&= \lambda \cdot d_{\textit{H}}^X(A,B).
\intertext{Furthermore, for $a \in A$ and $b \in B$,}
d_{\textit{H}}^{\lambda X}((A,a),(B,b))
&= d_{\textit{H}}^{\lambda X}(A,B) + d_{\lambda X}(a,b) \\
&= \lambda \cdot d_{\textit{H}}^{X}(A,B) + \lambda \cdot d_{X}(a,b) \\
&= \lambda \cdot d_{\textit{H}}^{X}((A,a),(B,b)).
\end{align*}
By definition, an admissible metric $\tilde{d}$ on $\lambda X \amalg \lambda Y$
is a metric on $X \amalg Y$ satisfying
$\tilde{d}_{|X \times X} = d_{\lambda X} = \lambda \cdot d_X$
and $\tilde{d}_{|Y \times Y} = d_{\lambda Y} = \lambda \cdot d_Y$.
Furthermore, $d := \frac{1}{\lambda} \cdot \tilde{d}$ is a metric
if and only if $\tilde{d}$ is a metric.
In addition, this metric $d$ satisfies
$d_{|X \times X} = \frac{1}{\lambda} \cdot \tilde{d}_{|X \times X} = d_X$
and $d_{|Y \times Y} = d_Y$.
Thus, $d$ is an admissible metric on $X \amalg Y$.
On the other hand, using similar arguments,
if $d$ is an admissible metric on $X \amalg Y$,
then $\tilde{d} := \lambda \cdot d$ is an admissible metric on $\lambda X \amalg \lambda Y$.
Hence,
\begin{align*}
d_{\textit{GH}}(\lambda X, \lambda Y)
&= \inf \{ d_{\textit{H}}^{\tilde{d}}(\lambda X,\lambda Y)
\mid \tilde{d} \text{ admissible metric on } \lambda X \amalg \lambda Y\} \\
&= \inf \{ d_{\textit{H}}^{\lambda d}(\lambda X,\lambda Y)
\mid \lambda \cdot d \text{ admissible metric on } \lambda X \amalg \lambda Y\} \\
&= \inf \{ \lambda \cdot d_{\textit{H}}^{d}(X,Y)
\mid d \text{ admissible metric on } X \amalg Y\} \\
&= \lambda \cdot d_{\textit{GH}}(X,Y).
\end{align*}
Analogously, $d_{\textit{GH}}((\lambda X,x), (\lambda Y,y)) = \lambda \cdot d_{\textit{GH}}((X,x),(Y,y))$.
\end{proof}
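The scaling identity for the Hausdorff distance can be checked numerically on finite point sets. The following Python sketch (ad hoc helper names, illustration only) verifies $d_{\textit{H}}^{\lambda X} = \lambda \cdot d_{\textit{H}}^{X}$ for random finite subsets of the plane:

```python
import math, random

def euclid(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def hausdorff(A, B, d):
    """Hausdorff distance of finite point sets A, B under the metric d."""
    sup_a = max(min(d(a, b) for b in B) for a in A)
    sup_b = max(min(d(a, b) for a in A) for b in B)
    return max(sup_a, sup_b)

random.seed(1)
A = [(random.random(), random.random()) for _ in range(8)]
B = [(random.random(), random.random()) for _ in range(8)]
for lam in (0.5, 2.0, 7.3):
    scaled = lambda a, b, l=lam: l * euclid(a, b)
    # rescaling the metric by lambda rescales the Hausdorff distance by lambda
    assert abs(hausdorff(A, B, scaled) - lam * hausdorff(A, B, euclid)) < 1e-9
```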
\section{The non-compact case}\label{sec:GH-ncpt}
For non-compact metric spaces, the above way of defining a metric (up to isometry) does not work:
Using the Hausdorff distance as before on unbounded sets may give distance infinity.
Thus, instead of defining a notion of distance for non-compact metric spaces,
convergence is defined by using compact subspaces of these spaces only.
On these, the previous definitions can be applied.
A metric space is called \emph{proper} if all closed balls are compact.
Throughout the remaining section, all metric spaces will be assumed to be proper.
Notice that proper metric spaces are complete.
For a metric space $(X,d_X)$, $p \in X$ and $r > 0$,
let
\[\B_r(p) := \{q \in X \mid d_X(p,q) \leq r\}\]
denote the closed ball of radius $r$ around $p$.
\begin{defn}\label{def:dGH-noncpt-pt}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed proper metric spaces.
If
\[\dghp{r}{X_i}{p_i}{X}{p} \to 0 \textrm{ as } {i \to \infty}\]
for all $r > 0$,
where the balls are equipped with the restricted metric,
then \emph{$(X_i,p_i)$ converges to $(X,p)$ (in the pointed Gro\-mov-Haus\-dorff\xspace sense)}.
If $(X_i,p_i)$ converges to $(X,p)$,
this is denoted by $(X_i,p_i) \to (X,p)$
and $(X,p)$ is called the \emph{(pointed Gro\-mov-Haus\-dorff\xspace) limit} of $(X_i,p_i)$.
Frequently, a sequence $(X_i,p_i)$ does not converge itself but has a converging subsequence.
The limit of such a subsequence is called \emph{sublimit} of $(X_i,p_i)$,
and $(X_i,p_i)$ is said to \emph{subconverge} to this limit.
\end{defn}
Naturally, the question arises under which conditions a given sequence of metric spaces
converges in the pointed Gro\-mov-Haus\-dorff\xspace sense.
For mani\-folds, the following theorem by Gromov states
that in some cases at least a (Gro\-mov-Haus\-dorff\xspace) sublimit exists.
In \autoref{sec:ultralimits}, another, more general concept
of creating and guaranteeing \myquote{limits} will be introduced.
It will turn out that these limits in fact are Gro\-mov-Haus\-dorff\xspace sublimits as well.
\begin{thm}[\precptnessThm, {\cite[Cor.~1.11]{petersen}}]
For $n \geq 2$, $\kappa \in \rr$ and $D > 0$, the following classes are pre-compact,
i.e.~every sequence in the class has a convergent subsequence
whose limit lies in the closure of this class:
\begin{enumerate}
\item
The collection of $n$-dimensional closed Riemannian manifolds
with $\Ric \geq (n-1) \cdot \kappa$ and $\diam \leq D$.
\item
The collection of $n$-dimensional pointed complete Riemannian manifolds
with $\Ric \geq (n-1) \cdot \kappa$.
\end{enumerate}
\end{thm}
The section is structured as follows:
In \autoref{sec:GH-ncpt--comparison_compact_case},
the compatibility of the definition of pointed Gro\-mov-Haus\-dorff\xspace convergence in \autoref{def:dGH-noncpt-pt}
with the notion of convergence induced by the Gro\-mov-Haus\-dorff\xspace distance of compact metric (length) spaces
(\autoref{def:dGH-cpt-npt} and \autoref{def:dGH-cpt-pt}) is verified.
Subsequently, \autoref{sec:GH-ncpt--properties} deals with
stating and verifying several properties of pointed Gro\-mov-Haus\-dorff\xspace convergence.
In this context, convergence of points and convergence of maps, respectively,
are introduced in \autoref{sec:GH-ncpt--convergence_of_points}
and \autoref{sec:GH-ncpt--convergence_of_maps}, respectively.
\subsection{Comparison with the compact case}\label{sec:GH-ncpt--comparison_compact_case}
Applied to compact length spaces, the convergence in the pointed Gro\-mov-Haus\-dorff\xspace sense
coincides with the convergence of compact metric spaces
in the pointed sense defined in the previous section.
Conversely, given (non-pointed) convergence as defined for compact metric spaces
and a fixed base point in the limit space,
there exist base points such that the spaces converge in the pointed Gro\-mov-Haus\-dorff\xspace sense.
In order to prove this, one uses the fact that approximations can be restricted to smaller balls.
This is shown in the following lemma.
Another statement of the lemma is that base points can be changed in a certain way.
This will be useful later on as well.
\begin{lemma}\label{lem:GHA_restrict_to_smaller_set_and_different_base_point}
Let $(X,d_X)$ and $(Y,d_Y)$ be length spaces.
\begin{enumerate}
\item\label{lem:GHA_restrict_to_smaller_set_and_different_base_point--a}
Let $p, p' \in X$, $q,q' \in Y$ and $R \geq r > 0$ satisfy
$\B^{X}_r(p') \subseteq \B^{X}_{R}(p)$ and $\B^Y_r(q') \subseteq \B^Y_{R}(q)$.
Moreover, let $\eps > 0$,
\[(f,g) \in \Isomp{\eps}{R}{X}{p}{Y}{q}\]
and $\delta := \max\{d(f(p'),q'), d(p',g(q'))\} \geq 0$.
Then \[\Isomp{{4\eps+2\delta}}{r}{X}{p'}{Y}{q'} \ne \emptyset\]
and \[\dghp{r}{X}{p'}{Y}{q'} \leq 8\eps+4\delta.\]
\item\label{lem:GHA_restrict_to_smaller_set_and_different_base_point--b}
For $p \in X$, $q \in Y$ and $R \geq r > 0$,
\[\dghp{r}{X}{p}{Y}{q} \leq 16\cdot\dghp{R}{X}{p}{Y}{q}.\]
\end{enumerate}
\end{lemma}
\begin{proof}
\par\smallskip\noindent\ref{lem:GHA_restrict_to_smaller_set_and_different_base_point--a}
For simplicity, let $\delta_f := d(f(p'),q')$ and $\delta_g := d(p',g(q'))$.
In particular, $\delta = \max\{\delta_f,\delta_g\}$, so $\delta_f + \delta_g \leq 2\delta$.
Let $\tilde{\eps} := 4\eps + 2\delta$.
As $\B^{X}_r(p') \subseteq \B^{X}_{R}(p)$, one can restrict $f$ to $\B^{X}_r(p')$.
For $x \in \B^{X}_r(p')$,
\begin{align*}
d_Y(f(x),q')
&\leq d_Y(f(x), f(p')) + d_Y(f(p'),q') \\
&\leq (d_X(x,p') + \eps) + \delta_f \\
&< r + \eps + \delta_f.
\end{align*}
Hence, $f(\B^X_r(p')) \subseteq \B^Y_{r + \eps + \delta_f}(q')$.
Analogously, one can prove the inclusion $g(\B^Y_r(q')) \subseteq \B^X_{r + \eps + \delta_g}(p')$.
Now modify $f$ and $g$ in order to obtain maps $\tilde{f}$ and $\tilde{g}$, respectively,
whose images are contained in $\B^Y_r(q')$ and $\B^X_r(p')$, respectively,
such that $(\tilde{f},\tilde{g})$ are $\tilde{\eps}$-approximations:
For $y \in \B^Y_{r+\eps+\delta_f}(q') \setminus \B^Y_r(q')$
choose a shortest geodesic $c: [0,l] \to Y$ with $c(0)=q'$ and $c(l)=y$
where $r < l := d_Y(y,q') \leq r+\eps+\delta_f$.
Then $d_Y(c(r),q') = r$,
in particular, $c(r) \in \B^Y_r(q')$,
and for $\hat{y} := c(r)$,
\begin{align*}
d(y,\hat{y})
&= d_Y(y,q') - d_Y(\hat{y},q')\\
&< (r + \eps + \delta_f) - r\\
&= \eps + \delta_f.
\end{align*}
Using this, define $\tilde{f} : \B^X_{r}(p') \to \B^Y_{r}(q')$ by
\[\tilde{f}(x) :=
\begin{cases}
q' &\text{if } x = p', \\
f(x) &\text{if } x \neq p'~\xspace\textrm{and}\xspace~f(x) \in \B^Y_r(q'), \\
\widehat{f(x)} &\text{if } x \neq p'~\xspace\textrm{and}\xspace~f(x) \notin \B^Y_r(q'). \\
\end{cases}
\]
Since $d_Y(\tilde{f}(p'), f(p')) = d_Y(q', f(p')) = \delta_f < \eps + \delta_f$
and by construction,
\[d_Y(\tilde{f}(x),f(x)) < \eps + \delta_f\] for all $x \in \B_r^X(p')$.
Similarly, define $\tilde{g} : \B^Y_r(q') \to \B^X_r(p')$.
Using analogous arguments proves
\[d_X(\tilde{g}(y), g(y)) < \eps + \delta_g\] for all $y \in \B^Y_r(q')$.
By definition, $\tilde{f}(p') = q'$ and $\tilde{g}(q') = p'$,
so it remains to prove that $(\tilde{f},\tilde{g})$ are $\tilde{\eps}$-approximations.
By construction,
\begin{align*}
&|d_X(x_1,x_2) - d_Y(\tilde{f}(x_1),\tilde{f}(x_2))|\\
&\leq |d_X(x_1,x_2) - d_Y(f(x_1),f(x_2))|
+ |d_Y(f(x_1),f(x_2)) - d_Y(\tilde{f}(x_1),\tilde{f}(x_2))|\\
&< \eps + (d_Y(f(x_1),\tilde{f}(x_1)) + d_Y(f(x_2),\tilde{f}(x_2)))\\
&< \eps + 2(\eps + \delta_f)\\
&< \tilde{\eps},
\intertext{where $x_1, x_2 \in \B^X_r(p')$.
Analogously, $|d_Y(y_1,y_2) - d_X(\tilde{g}(y_1),\tilde{g}(y_2))| < \tilde{\eps}$
for arbitrary $y_1,y_2 \in \B^Y_r(q')$.
Furthermore, for $x \in \B_r^X(p')$,}
&d_X(x, \tilde{g} \circ \tilde{f}(x))\\
&\leq d_X(x, g \circ f(x))
+ d_X(g \circ f(x), g \circ \tilde{f}(x))
+ d_X(g \circ \tilde{f}(x), \tilde{g} \circ \tilde{f}(x))\\
&< \eps + (\eps + d_Y(f(x),\tilde{f}(x))) + (\eps + \delta_g)\\
&< 4\eps + \delta_f + \delta_g\\
&\leq 4\eps + 2\delta.
\end{align*}
Analogously, $d_Y(y, \tilde{f} \circ \tilde{g}(y)) < \tilde{\eps}$ for all $y \in \B_r^Y(q')$.
Hence,
\[(\tilde{f},\tilde{g}) \in \Isomp{\tilde{\eps}}{r}{X}{p'}{Y}{q'},\]
and by \autoref{dgh_small_iff_eps_approx},
\[\dghp{r}{X}{p'}{Y}{q'} \leq 2 \tilde{\eps}.\]
\par\smallskip\noindent\ref{lem:GHA_restrict_to_smaller_set_and_different_base_point--b}
Let $\delta > 0$ be arbitrary and $\eps := \dghp{R}{X}{p}{Y}{q} + \delta > 0$.
By \autoref{dgh_small_iff_eps_approx},
\[\Isomp{2\eps}{R}{X}{p}{Y}{q} \ne \emptyset,\]
and by \ref{lem:GHA_restrict_to_smaller_set_and_different_base_point--a},
\begin{align*}
&\dghp{r}{X}{p}{Y}{q}
\\&\leq 16 \eps
\\&= 16\cdot\dghp{R}{X}{p}{Y}{q} + 16\,\delta.
\end{align*}
Since $\delta > 0$ was arbitrary, this implies the claim.
\end{proof}
In order to avoid confusion,
for the next two statements,
let $X_i \stackrel{\textit{\tiny GH}}\to X$ and $(X_i,p_i) \stackrel{\textit{\tiny GH}}\to (X,p)$, respectively,
denote the convergence of compact metric spaces
in the sense of \autoref{def:dGH-cpt-npt} and \autoref{def:dGH-cpt-pt}, respectively.
Further, denote by $(X_i,p_i) \stackrel{\textit{\tiny pGH}}\to (X,p)$
the convergence in the pointed Gro\-mov-Haus\-dorff\xspace sense of \autoref{def:dGH-noncpt-pt}.
\begin{prop}\label{prop:pt-GH_convegence->diam_convergence}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed compact length spaces
with $(X_i,p_i) \stackrel{\textit{\tiny pGH}}\to (X,p)$.
Then $X_i \stackrel{\textit{\tiny GH}}\to X$,
in particular, $\diam(X_i) \to \diam(X)$.
\end{prop}
\begin{proof}
Assume $(\diam(X_i))_{i \in \nn}$ is not bounded.
Let $r > \diam(X)$.
Without loss of generality, assume $\diam(X_i) > r$ for all $i \in \nn$.
Let $0 < \eps < r-\diam(X)$
and choose points $x_i, y_i \in B_r^{X_i}(p_i)$
satisfying $d_{X_i}(x_i,y_i) \geq r - \frac{\eps}{2}$.
Let $\eps_i := 2 \cdot d_{\textit{GH}}((X_i,p_i),(X,p))$
and fix approximations $(f_i,g_i) \in \Isom{\eps_i}((X_i,p_i),(X,p))$.
Then
\[\diam(X) \geq d_{X}(f_i(x_i),f_i(y_i)) \geq r - \frac{\eps}{2} - \eps_i.\]
Since this holds for all $i \in \nn$,
\[\diam(X) \geq r - \frac{\eps}{2} > \diam(X) + \frac{\eps}{2}.\]
This is a contradiction.
Thus, there is an $R > \diam(X)$ with $\diam(X_i) < R$ for all $i \in \nn$.
Then
\begin{align*}
d_{\textit{GH}}(X_i,X)
&= d_{\textit{GH}}(\B_R^{X_i}(p_i),\B_R^X(p)) \\
&\leq \dghp{R}{X_i}{p_i}{X}{p} \\
&\to 0 \textrm{ as } i \to \infty.
\end{align*}
Hence, $X_i \to X$. \autoref{prop:GH_small->diff_diam_small} implies the second part of the claim.
\end{proof}
\begin{cor}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed compact length spaces.
Then $(X_i,p_i) \stackrel{\textit{\tiny GH}}\to (X,p)$ if and only if $(X_i,p_i) \stackrel{\textit{\tiny pGH}}\to (X,p)$.
\end{cor}
\begin{proof}
The proof is done by proving both implications separately.
First, assume $(X_i,p_i) \stackrel{\textit{\tiny GH}}\to (X,p)$.
By \autoref{prop:GH_small->diff_diam_small}, $\diam(X_i) \to \diam(X)$;
hence, without loss of generality, there is a common strict diameter bound $D$ on all spaces $X_i$ and $X$.
In particular, for all $r \geq D$,
$(\B^{X_i}_r(p_i),p_i) = (X_i,p_i)$ converges to $(X,p) = (\B^X_r(p),p)$.
For $0 < r < D$,
\begin{align*}
&\dghp{r}{X_i}{p_i}{X}{p} \\
&\leq 16 \cdot \dghp{D}{X_i}{p_i}{X}{p} \\
&= 16 \cdot d_{\textit{GH}}((X_i,p_i),(X,p)) \\
&\to 0
\end{align*}
by \autoref{lem:GHA_restrict_to_smaller_set_and_different_base_point}.
Hence, $(X_i,p_i) \stackrel{\textit{\tiny pGH}}\to (X,p)$.
Now let $(X_i,p_i) \stackrel{\textit{\tiny pGH}}\to (X,p)$.
By \autoref{prop:pt-GH_convegence->diam_convergence}, $\diam(X_i) \to \diam(X)$.
Without loss of generality, assume $\diam (X_i) \leq 2 \diam(X) =: r$.
Thus, \[d_{\textit{GH}}((X_i,p_i), (X,p)) = \dghp{r}{X_i}{p_i}{X}{p} \to 0.\qedhere\]
\end{proof}
In particular, if $X_i$ and $X$ are compact with $X_i \stackrel{\textit{\tiny GH}}\to X$ and $p \in X$,
then, by \autoref{cor:compact_case:pointed=non-pointed_convergence},
there exist $p_i \in X_i$ such that $(X_i,p_i) \stackrel{\textit{\tiny GH}}\to (X,p)$.
Hence, $(X_i,p_i) \stackrel{\textit{\tiny pGH}}\to (X,p)$.
From now on, let $(X_i,p_i) \to (X,p)$ denote convergence in the pointed Gro\-mov-Haus\-dorff\xspace sense.
\subsection{Properties as in the compact case}\label{sec:GH-ncpt--properties}
This subsection deals with several properties which are familiar from the compact case.
First of all,
the Gro\-mov-Haus\-dorff\xspace distance defines a metric on the set of the isometry classes of compact metric spaces.
In the non-compact case,
the limit of pointed Gro\-mov-Haus\-dorff\xspace convergence still is unique up to isometry.
\begin{prop}\label{lem_GH:GH_limit_unique_up_to_pointed_isometry}
Let $(X,d_X,p)$, $(Y,d_Y,q)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed length spaces.
Assume $(X_i,p_i) \to (X,p)$ and $(X_i,p_i) \to (Y,q)$.
Then $(X,p)$ and $(Y,q)$ are isometric.
\end{prop}
\begin{proof}
For every $r > 0$, both $\B_r^X(p)$ and $\B_r^Y(q)$ are limits of $\B_r^{X_i}(p_i)$,
and thus, there exists a (bijective) isometry $f_r: \B_r^X(p) \to \B_r^Y(q)$ with $f_r(p) = q$.
Choose a countable dense subset $X' := \{x_0, x_1, x_2, \dots\}$ of $X$ with $x_0 = p$,
fix $i \in \nn$
and let $N_i$ be the minimal natural number with $d_X(x_i,p) < N_i$.
Define $y_i^n := q$ if $n < N_i$ and $y_i^n := f_n(x_i)$ otherwise.
For $n \geq N_i$, \[d_Y(y_i^n, q) = d_Y(f_n(x_i), f_n(p)) = d_X(x_i,p),\]
i.e.~$(y_i^n)_{n \in \nn}$ is a sequence in the compact subset $\B^Y_{d_X(x_i,p)}(q)$.
By a diagonal argument, there exists a subsequence $(n_m)_{m \in \nn}$ of the natural numbers
such that for every $i \in \nn$
the sequence $(y_i^{n_m})_{m \in \nn}$ has a limit $y_i \in \B^Y_{d_X(x_i,p)}(q)$.
In particular, $y_0^n = f_n(p) = q$ for all $n \in \nn$ implies $y_0 = q$.
For $i,j \in \nn$, by construction,
\begin{align*}
d_Y(y_i,y_j)
&= \lim_{m \to \infty} d_Y(y_i^{n_m}, y_j^{n_m})
\\&= \lim_{m \to \infty} d_Y(f_{n_m}(x_i), f_{n_m}(x_j))
\\&= d_X(x_i, x_j),
\end{align*}
i.e.~the map $\tilde{f}: X' \to Y$ defined by $\tilde{f}(x_i) := y_i$
is an isometry with $\tilde{f}(p) = q$.
As $Y$ is complete,
there exists an extension of $\tilde{f}$ to an isometry $f: X \to Y$ with $f(p) = q$:
Let $x \in X$ be arbitrary.
Since $X'$ was chosen to be dense,
there exists a sequence $(x_{i_j})_{j \in \nn}$ in $X'$ converging to $x$.
This is a Cauchy sequence,
hence, $(\tilde{f}(x_{i_j}))_{j \in \nn}$ is a Cauchy sequence as well
and has a limit $y =: f(x)$.
This defines indeed an isometry $f: X \to Y$:
Let $x,x' \in X$ be arbitrary and $x_{i_j}$ and $x_{i_l}$, respectively,
be sequences in $X'$ converging to $x$ and $x'$, respectively.
Then
\begin{align*}
d_Y(f(x),f(x'))
&= \lim_{j,l \to \infty} d_Y(\tilde{f}(x_{i_j}),\tilde{f}(x_{i_l}))
\\&= \lim_{j,l \to \infty} d_X(x_{i_j},x_{i_l})
\\&= d_X(x,x').
\end{align*}
Thus, $f$ is an isometry. It remains to prove that $f$ is bijective:
Using a further subsequence $n_{m_a}$ and the inverse maps $f_{n_{m_a}}^{-1}$,
an isometry $g : Y \to X$ can be constructed analogously.
For arbitrary $x \in X$,
let $(y_{k_l})_{l \in \nn}$ be the sequence in the dense subset $Y' \subseteq Y$
used in the construction of $g$ converging to $f(x) \in Y$.
Then
\begin{align*}
d_X(g \circ f (x),x)
&= \lim_{a \to \infty} \lim_{l,j \to \infty} d_X(f_{n_{m_a}}^{-1}(y_{k_l}),x_{i_j}) \\
&= \lim_{a \to \infty} \lim_{l,j \to \infty} d_Y(y_{k_l},f_{n_{m_a}}(x_{i_j}))\\
&= d_Y(f(x),f(x))
= 0.
\end{align*}
Analogously, $f \circ g = \id$.
Thus, $f$ is bijective.
\end{proof}
As in the compact case,
Gromov-Haus\-dorff convergence preserves being a length space.
\begin{prop}
Let $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed length spaces
and $(X,d_X,p)$ be a pointed metric space.
If $(X_i,p_i) \to (X,p)$, then $X$ is a length space.
\end{prop}
\begin{proof}
Let $x,y \in X$ and $\eps > 0$ be arbitrary.
For $r := \max\{d_X(x,p), d_X(y,p)\}$
choose $n \in \nn$ with $\dghp{r}{X_n}{p_n}{X}{p} < \frac{\eps}{12}$.
The rest of the proof can be done completely analogously
to the one of \autoref{prop:cpt_length_spaces_cvg_to_length_spaces}.
\end{proof}
As in the compact case,
in the non-compact case there is a correspondence between
(pointed) Gro\-mov-Haus\-dorff\xspace convergence and approximations.
In order to prove this, the following lemma is needed.
\begin{lemma}\label{prop:eps^(r_n)_n<=h(1/r_n)}
For all $r > 0$,
let $(\eps^r_n)_{n \in \nn}$ be a monotonically decreasing null sequence,
and $h: \rr^{>0} \to \rr^{>0}$ a function with $\lim_{x \to 0} h(x) = 0$.
Then there exists a sequence $(r_n)_{n \in \nn}$
with $\lim_{n \to \infty} r_n = \infty$
and $\eps^{r_n}_n \leq h\big(\frac{1}{r_n}\big)$ for almost all $n \in \nn$.
\end{lemma}
\begin{proof}
Let $A := \{n \in \nn \mid \forall r > 0 : \eps^r_n > h\big(\frac{1}{r}\big)\}$
denote the set of all natural numbers $n$ for which no such \myquote{$r_n$} can exist.
This set is finite:
Fix $r > 0$.
Then $\eps^r_n > h\big(\frac{1}{r}\big)$ for all $n \in A$,
but, since $(\eps^r_n)_{n \in \nn}$ is a null sequence,
this inequality only holds for finitely many $n$.
Hence, $A$ is finite.
Without loss of generality,
assume that for each $n$ there is at least one $r > 0$
such that $\eps^r_n \leq h\big(\frac{1}{r}\big)$.
Let $R_n := \{r > 0 \mid \eps^r_n \leq h\big(\frac{1}{r}\big)\} \ne \emptyset$
denote the set of all radii which are possible candidates for \myquote{$r_n$}.
Then $(R_n)_{n \in \nn}$ is increasing with respect to inclusion:
Fix $r \in R_n$.
Since $(\eps^r_n)_{n \in \nn}$ is monotonically decreasing,
$\eps^r_{n+1} \leq \eps^r_n \leq h\big(\frac{1}{r}\big)$.
Thus, $r \in R_{n+1}$.
Suppose that these sets are uniformly bounded,
i.e.~there exists $C > 0$ such that $\bigcup_{n \in \nn} R_n \subseteq [0,C]$.
Then
$\eps^r_n > h\big(\frac{1}{r}\big)$
for all $n$ and all $r > C$.
Consequently, for all $r > C$
the sequence $(\eps^r_n)_{n \in \nn}$ is bounded below by $h\big(\frac{1}{r}\big)$.
This is a contradiction to $(\eps^r_n)_{n \in \nn}$ being a null sequence.
Therefore, $\bigcup_{n \in \nn} R_n$ is unbounded,
i.e.~for all $C > 0$ there exists some $N \in \nn$
such that $R_j \not \subseteq [0,C]$ for all $j \geq N$.
In particular, for all $k \in \nn$ there is a minimal $N_k \in \nn$
such that for all $j \geq N_k$ there is some $r^k_j \in R_j$ with $r^k_j > k$.
There are two cases:
\begin{enumerate}
\item[1.] Assume $N_k \to \infty$ as $k \to \infty$.
For every $n \in \nn$, $n \geq N_0$,
there is some $k \in \nn$ with $N_k \leq n < N_{k+1}$.
Fix this $k$
and define $r_n := r^k_n$ for some $r^k_n\in R_n$ satisfying $r^k_n> k$.
Then, for arbitrary $k \in \nn$ and all $n \geq N_k$, $r_n> k$.
Thus, $r_n \to \infty$.
Furthermore, by choice, $\eps_n^{r_n} \leq h\big(\frac{1}{r_n}\big)$.
\item[2.] Otherwise, let $k_0 \in \nn$ be such that $N_k = N_{k_0}$ for all $k \geq k_0$.
For $n < N_{k_0}$, define $r_n$ as in the first case.
For $n = N_{k_0}+m \geq N_{k_0} = N_{k_0 + m}$,
choose any $r_n := r^{k_0+m}_n \in R_n \cap (k_0+m, \infty)$.
Then $r_n \to \infty$ and $\eps_n^{r_n} \leq h\big(\frac{1}{r_n}\big)$.
\qedhere
\end{enumerate}
\end{proof}
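The lemma is a purely real-analytic selection statement, so it can be illustrated numerically. The following Python sketch uses the concrete choices $\eps^r_n := \frac{r}{n}$ and $h := \id$ (illustrative assumptions, not taken from the text), for which the candidate sets are $R_n = (0,\sqrt{n}\,]$, and checks the conclusion for the explicit choice $r_n := \frac{\sqrt{n}}{2}$.

```python
import math

# Illustrative instance of the lemma (these concrete choices are
# assumptions for the demo, not from the text): for each fixed r > 0,
# eps(r, n) := r / n is a monotonically decreasing null sequence in n,
# and h := id satisfies h(x) -> 0 as x -> 0.
def eps(r, n):
    return r / n

def h(x):
    return x

# Here R_n = {r > 0 | r/n <= 1/r} = (0, sqrt(n)], so the choice
# r_n := sqrt(n)/2 tends to infinity while staying inside R_n.
def r_seq(n):
    return math.sqrt(n) / 2

for n in [1, 10, 100, 10_000]:
    rn = r_seq(n)
    # the lemma's conclusion, checked numerically:
    assert eps(rn, n) <= h(1 / rn)
```

Here $\eps^{r_n}_n = \frac{1}{2\sqrt{n}}$ and $h\big(\frac{1}{r_n}\big) = \frac{2}{\sqrt{n}}$, so the required inequality holds with room to spare.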
\begin{prop}\label{prop:dgh_small_iff_eps_approx_noncompact}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be length spaces.
Then the following statements are equivalent.
\begin{enumerate}
\item\label{prop:dgh_small_iff_eps_approx_noncompact--a}
$(X_i,p_i) \to (X,p)$.
\item\label{prop:dgh_small_iff_eps_approx_noncompact--b}
For all functions $g: \rr^{>0} \to \rr^{>0}$ with $\lim_{x \to 0} g(x) = 0$
there exists $r_i \to \infty$ with
\[\dghp{r_i}{X_i}{p_i}{X}{p} \leq g\Big(\frac{1}{r_i}\Big).\]
\item\label{prop:dgh_small_iff_eps_approx_noncompact--c}
There exist $r_i \to \infty$ and $\eps_i \to 0$ with
\[\dghp{r_i}{X_i}{p_i}{X}{p} \leq \eps_i.\]
\end{enumerate}
\end{prop}
\begin{proof}
The proof is done by verifying the implications
\ref{prop:dgh_small_iff_eps_approx_noncompact--a}
$\Rightarrow$ \ref{prop:dgh_small_iff_eps_approx_noncompact--b},
\ref{prop:dgh_small_iff_eps_approx_noncompact--b}
$\Rightarrow$ \ref{prop:dgh_small_iff_eps_approx_noncompact--c} and
\ref{prop:dgh_small_iff_eps_approx_noncompact--c}
$\Rightarrow$ \ref{prop:dgh_small_iff_eps_approx_noncompact--a}.
First, let $(X_i,p_i) \to (X,p)$
and $g: \rr^{>0} \to \rr^{>0}$ with $\lim_{x \to 0} g(x) = 0$ be arbitrary.
For fixed $r > 0$, define
\begin{align*}
\tilde{\eps}^r_i &:= \dghp{r}{X_i}{p_i}{X}{p} \to 0 \textrm{ as } i \to \infty
\intertext{and}
\eps^r_i &:= \sup\{ \tilde{\eps}^r_j \mid j \geq i\} \to 0 \textrm{ as } i \to \infty.
\end{align*}
This sequence $(\eps^r_i)_{i \in \nn}$ is monotonically decreasing
and satisfies $\eps^r_i \geq \tilde{\eps}^r_i$.
By \autoref{prop:eps^(r_n)_n<=h(1/r_n)},
there exists $r_i \to \infty$ such that $\eps^{r_i}_i \leq g\big(\frac{1}{r_i}\big)$
for almost all $i \in \nn$; after adjusting finitely many of the $r_i$, assume this holds for all $i \in \nn$.
In particular,
\[
\dghp{r_i}{X_i}{p_i}{X}{p}
= \tilde{\eps}^{r_i}_i
\leq \eps^{r_i}_i
\leq g\Big(\frac{1}{r_i}\Big),
\]
and this proves \ref{prop:dgh_small_iff_eps_approx_noncompact--b}.
Obviously, \ref{prop:dgh_small_iff_eps_approx_noncompact--b}
implies \ref{prop:dgh_small_iff_eps_approx_noncompact--c}
via choosing $g:=\id$ and $\eps_i := \frac{1}{r_i}$.
Finally, let $\dghp{r_i}{X_i}{p_i}{X}{p} \leq \eps_i$ for some $r_i \to \infty$ and $\eps_i \to 0$.
Fix $r > 0$.
Let $i \in \nn$ be large enough such that $r < r_i$.
Then
\[\dghp{r}{X_i}{p_i}{X}{p} \leq 16 \eps_i,\]
by \autoref{lem:GHA_restrict_to_smaller_set_and_different_base_point},
and this implies the claim.
\end{proof}
\begin{cor}\label{cor:dgh_small_iff_eps_approx_noncompact}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed length spaces.
Then the following statements are equivalent.
\begin{enumerate}
\item
$(X_i,p_i) \to (X,p)$.
\item
There is $\eps_i \to 0$ such that
$\Isomp{\eps_i}{1/\eps_i}{X_i}{p_i}{X}{p} \ne \emptyset$ for all $i$.
\item
There is $\eps_i \to 0$ such that
$\dghp{1/\eps_i}{X_i}{p_i}{X}{p} \leq \eps_i$ for all $i$.
\end{enumerate}
\end{cor}
\begin{proof}
This is a direct implication
of \autoref{dgh_small_iff_eps_approx}
and \autoref{prop:dgh_small_iff_eps_approx_noncompact}.
\end{proof}
Similarly to the compact case,
the Gro\-mov-Haus\-dorff\xspace distance and convergence, respectively, are related to the diameters of the spaces:
On the one hand,
the distance between balls in $X$ and $X \times Y$ is bounded from above by the diameter of $Y$.
Recall that in the special case of $X = \{\pt\}$,
the (non-pointed) distance equals $\frac{1}{2} \diam(Y)$.
On the other hand,
in the compact case it was proven that convergence of spaces implies convergence of the diameters.
For length spaces, an analogous statement will be established.
\begin{prop}\label{prop-C}
Let $(X,d_X,x_0)$ and $(Y,d_Y,y_0)$ be pointed metric spaces.
If $Y$ is compact,
then \[\dghp{r}{X}{x_0}{X \times Y}{(x_0,y_0)} \leq \diam(Y)\]
for all $r > 0$.
\end{prop}
\begin{proof}
It suffices to define an admissible metric
and to estimate the Hausdorff distance with respect to this metric.
Let $\delta > 0$ be arbitrary.
Define an admissible metric $d$ on $(X \times Y) \amalg X$ by
\[ d((x,y),x') := \sqrt{d_X(x,x')^2 + d_Y(y,y_0)^2 + \delta^2}.\]
As usual, the only tricky part is to prove the triangle inequality:
By the Minkowski inequality, for $x_1,x_1',x_2,x_2' \in X$ and $y_1,y_2 \in Y$,
\begin{align*}
&d((x_1,y_1),x_1') + d(x_1',x_2')\\
&=\sqrt{d_X(x_1,x_1')^2 + d_Y(y_1,y_0)^2 + \delta^2} + d_X(x_1',x_2')
\displaybreak[0]\\
&\geq \sqrt{\big(d_X(x_1,x_1') + d_X(x_1',x_2')\big)^2 + d_Y(y_1,y_0)^2 + \delta^2}
\displaybreak[0]\\
&\geq \sqrt{d_X(x_1,x_2')^2 + d_Y(y_1,y_0)^2 + \delta^2} \\
&= d((x_1,y_1),x_2').
\end{align*}
With completely analogous argumentation, one can prove the remaining inequalities
\begin{align*}
d(x_1',(x_1,y_1)) + d((x_1,y_1),x_2') &\geq d(x_1',x_2') ,
\\
d((x_1,y_1),(x_2,y_2)) + d((x_2,y_2),x_2') &\geq d((x_1,y_1),x_2')
\quad\xspace\textrm{and}\xspace\displaybreak[0]\\
d((x_1,y_1),x_1') + d(x_1',(x_2,y_2)) &\geq d((x_1,y_1),(x_2,y_2)).
\end{align*}
Now fix $r > 0$ and let $(x,y) \in \B_r^{X \times Y}((x_0,y_0))$ be arbitrary.
In particular, $x \in \B_r^{X}(x_0)$.
Thus,
\begin{align*}
d((x,y), \B_r^{X}(x_0))
&\leq d((x,y),x)
\\&= \sqrt{d_Y(y,y_0)^2 + \delta^2}
\\&\leq \sqrt{\diam(Y)^2 + \delta^2}.
\end{align*}
Hence,
\[\B_r^{X \times Y}((x_0,y_0)) \subseteq \B^d_{\sqrt{\diam(Y)^2 + \delta^2}} (\B_r^{X}(x_0)).\]
For arbitrary $x \in \B_r^{X}(x_0)$,
one has $d((x,y_0),(x_0,y_0)) = d_X(x,x_0) < r$,
and therefore, $(x,y_0) \in \B_r^{X \times Y}((x_0,y_0))$.
Thus,
\[d(x, \B_r^{X \times Y}(x_0,y_0)) \leq d(x,(x,y_0)) = \delta\]
and \[\B_r^{X}(x_0) \subseteq \B^d_{\delta} (\B_r^{X \times Y}(x_0,y_0)).\]
Hence,
\begin{align*}
&\dghp{r}{X}{x_0}{X \times Y}{(x_0,y_0)}\\
&\leq d_{\textit{H}}^d(\B_r^{X}(x_0),\B_r^{X \times Y}(x_0,y_0) ) \\
&\leq \max\{\sqrt{\diam(Y)^2 + \delta^2}, \delta\}\\
&= \sqrt{\diam(Y)^2 + \delta^2}.
\end{align*}
Since $\delta$ was arbitrary, this proves the claim.
\end{proof}
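The only non-trivial step in the proof is the triangle inequality for the admissible metric $d$. As a numerical sanity check (not a replacement for the Minkowski argument), the following Python sketch samples random points in the hypothetical special case $X = Y = \rr$ with the Euclidean metric, base point $y_0 = 0$ and a fixed $\delta$, and verifies the mixed triangle inequalities from the proof; all concrete parameter choices are assumptions for the demo.

```python
import math
import random

# Demo parameters (assumptions): X = Y = R, base point y0 = 0, delta > 0.
delta, y0 = 0.1, 0.0

# the admissible metric between X x Y and X:
#   d((x, y), x') := sqrt(|x - x'|^2 + |y - y0|^2 + delta^2)
def d_mixed(x, y, xp):
    return math.sqrt((x - xp) ** 2 + (y - y0) ** 2 + delta ** 2)

def d_prod(x1, y1, x2, y2):  # product (l^2) metric on X x Y
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)

random.seed(0)
for _ in range(10_000):
    x1, y1, x2, y2, a, b = (random.uniform(-5, 5) for _ in range(6))
    # (x1,y1) -> a -> b  versus  (x1,y1) -> b   (the inequality proven above)
    assert d_mixed(x1, y1, a) + abs(a - b) >= d_mixed(x1, y1, b) - 1e-12
    # a -> (x1,y1) -> b  versus  a -> b
    assert d_mixed(x1, y1, a) + d_mixed(x1, y1, b) >= abs(a - b) - 1e-12
    # (x1,y1) -> (x2,y2) -> a  versus  (x1,y1) -> a
    assert d_prod(x1, y1, x2, y2) + d_mixed(x2, y2, a) >= d_mixed(x1, y1, a) - 1e-12
```

All three checks succeed on every sample, as the Minkowski argument in the proof predicts.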
In order to prove the convergence of diameters,
one needs the following property of length spaces:
any ball of radius $r < \frac{\diam(X)}{2}$ has diameter at least $r$.
Though this is easy to see, for the sake of completeness, the proof is given first.
\begin{lemma}\label{lem_GH:r_ball_has_diam>=r}
Let $(X,d,p)$ be a pointed length space and $0 < r < \frac{\diam(X)}{2}$.
Then $\diam(\B_r^X(p)) \geq r$.
\end{lemma}
\begin{proof}
Assume that $d(q,p) \leq r$ for all $q \in X$.
Hence, $\B_r(p) = X$,
and this implies $\diam(X) \leq 2r < \diam(X)$, which is a contradiction.
Hence, there is $q_r \in X$ such that $l_r := d(q_r,p) > r$.
Fix a minimising geodesic $\gamma: [0,l_r] \to X$ with $\gamma(0) = p$ and $\gamma(l_r) = q_r$.
Then $d(p,\gamma(r)) = r$, hence, $\gamma(r) \in \B_r(p)$.
In particular,
$\diam(\B_r(p)) \geq d(p,\gamma(r)) = r$.
\end{proof}
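For a concrete picture, consider the circle of circumference $2\pi$ with the arc-length metric, a compact length space of diameter $\pi$: a ball of radius $r < \frac{\pi}{2}$ is an arc of diameter $\min\{2r,\pi\} \geq r$. The following Python sketch checks the lemma's bound on a finite grid (the grid resolution is an arbitrary choice for the demo).

```python
import math

# Numerical illustration on the circle of circumference 2*pi with the
# arc-length metric (a compact length space with diam = pi).
def d_circle(s, t):
    a = abs(s - t) % (2 * math.pi)
    return min(a, 2 * math.pi - a)

p = 0.0
for r in [0.3, 1.0, 1.5]:  # all radii below diam/2 = pi/2
    grid = [k * 1e-2 for k in range(629)]           # grid on [0, 2*pi)
    pts = [t for t in grid if d_circle(t, p) <= r]  # sample of the ball B_r(p)
    diam_ball = max(d_circle(s, t) for s in pts for t in pts)
    assert diam_ball >= r - 1e-6  # the lemma's lower bound (here even close to 2r)
```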
\begin{prop}\label{prop:GH-small->diff_diam_small--noncompact}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed length spaces.
If $(X_i,p_i) \to (X,p)$,
then $\diam(X_i) \to \diam(X)$.
(Here $\diam(X) = \infty$ is allowed, in which case the claim reads $\diam(X_i) \to \infty$.)
\end{prop}
\begin{proof}
Let $\eps_i \to 0$ be as in \autoref{cor:dgh_small_iff_eps_approx_noncompact}
with
\[\dghp{1/\eps_i}{X_i}{p_i}{X}{p} \leq \eps_i.\]
By \autoref{prop:GH_small->diff_diam_small},
$|\diam(\B^{X_i}_{1/\eps_i}(p_i)) - \diam(\B^X_{1/\eps_i}(p))| \leq 2 \eps_i \to 0$.
Distinguish the two cases of $X$ being bounded and unbounded, respectively.
\textit{Case 1: $\diam(X) < \infty$.}
Without loss of generality, assume $\diam(X) < \frac{1}{2\eps_i}$ for all $i \in \nn$.
Then $X = B_{1/\eps_i}^X(p)$
and
\[
|\diam(\B^{X_i}_{1/\eps_i}(p_i)) - \diam(X)|
= |\diam(\B^{X_i}_{1/\eps_i}(p_i)) - \diam(\B^X_{1/\eps_i}(p))|
\to 0,
\]
in particular, $\diam(\B^{X_i}_{1/\eps_i}(p_i)) \to \diam(X)$ as $i \to \infty$.
Without loss of generality,
assume $\diam(\B^{X_i}_{1/\eps_i}(p_i)) \leq 2 \cdot \diam(X)$ for all $i \in \nn$.
Let $r_i := \min\!\big\{ \frac{1}{\eps_i}, \frac{1}{3} \cdot \diam(X_i) \big\}
< \frac{1}{2} \cdot \diam(X_i)$.
By \autoref{lem_GH:r_ball_has_diam>=r},
\begin{align*}
r_i
\leq \diam(B_{r_i}^{X_i}(p_i))
\leq \diam(B_{1/\eps_i}^{X_i}(p_i))
\leq 2 \cdot \diam(X)
< \frac{1}{\eps_i}.
\end{align*}
Hence, $r_i < \frac{1}{\eps_i}$, so $r_i = \frac{1}{3}\diam(X_i)$ and $\diam(X_i) = 3 r_i \leq 6 \cdot \diam(X)$.
In particular, the $X_i$ are compact,
and \autoref{prop:GH_small->diff_diam_small} implies the claim.
\textit{Case 2: $\diam(X) = \infty$.}
Assume there is a subsequence $(i_j)_{j \in \nn}$ and $C > 0$
with $\diam(X_{i_j}) < C$ for all $j \in \nn$.
Pass to this subsequence.
After passing to a further subsequence,
$C < \frac{1}{\eps_i}$ for all $i \in \nn$.
Then $X_i = B_{1/\eps_i}^{X_i}(p_i)$
and this implies $\diam(\B^{X_i}_{1/\eps_i}(p_i)) = \diam(X_i) < C$.
Further, by \autoref{lem_GH:r_ball_has_diam>=r},
$\diam(\B_{1/\eps_i}^X(p)) \geq \frac{1}{\eps_i}$
and
\begin{align*}
|\diam(\B^{X_i}_{1/\eps_i}(p_i)) - \diam(\B^X_{1/\eps_i}(p))|
\geq \frac{1}{\eps_i} - C
\to \infty.
\end{align*}
This is a contradiction. Hence, $\diam(X_i) \to \infty$.
\end{proof}
Gro\-mov-Haus\-dorff\xspace convergence is compatible with rescaling:
Given a converging sequence of length spaces and a converging sequence of rescaling factors,
the rescaled sequence converges
and the limit space is the original one rescaled by the limit of the rescaling sequence.
More generally, given a converging sequence of metric spaces
and some bounded sequence of rescaling factors,
the sublimits of the rescaled sequence correspond exactly
to the accumulation points of the rescaling sequence.
For a metric space $(X,d)$,
recall that $\alpha X$ denotes the rescaled metric space $(X,\alpha\,d)$.
\begin{prop}\label{prop-E}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed length spaces
and $r_i, r, \alpha_i, \alpha > 0$.
\begin{enumerate}
\item\label{prop-E--a}
If $(X_i,p_i) \to (X,p)$ and $r_i \to r$,
then $(\B_{r_i}^{X_i}(p_i),p_i) \to (\B_r^{X}(p),p)$.
\item\label{prop-E--b}
If $\alpha_i \to \alpha$,
then $(\alpha_i X, p) \to (\alpha X, p)$.
\item\label{prop-E--c}
If $(X_i,p_i) \to (X,p)$ and $\alpha_i \to \alpha$,
then $(\alpha_i X_i,p_i) \to (\alpha X,p)$.
\item\label{prop-E--d}
If $(X_i,p_i) \to (X,p)$ and $(\alpha_i X_i,p_i) \to (Y,q)$,
then there is $\alpha$ such that $\alpha_i \to \alpha$
and $(Y,q) \cong (\alpha X,p)$.
\end{enumerate}
\end{prop}
\begin{proof}
\ref{prop-E--a}
By \autoref{lem_GH:estimate_GH-distance_of_balls_in_same_space},
\[
d_{\textit{GH}}((\B_{r_i}^{X_i}(p_i),p_i), (\B_r^{X_i}(p_i),p_i))
\leq |r - r_i|
\to 0,
\]
and the triangle inequality implies
\begin{align*}
&d_{\textit{GH}}(\B_{r_i}^{X_i}(p_i), \B_r^{X}(p))
\\&\leq d_{\textit{GH}}(\B_{r_i}^{X_i}(p_i), \B_r^{X_i}(p_i)) + d_{\textit{GH}}(\B_{r}^{X_i}(p_i), \B_r^{X}(p))
\\&\to 0.
\end{align*}
\par\smallskip\noindent\ref{prop-E--b}
Without loss of generality, let $\alpha = 1$.
First, let $X$ be compact.
Define $f_i: X \to \alpha_i X$ and $g_i : \alpha_i X \to X$
by $f_i(x) := x$ and $g_i(x) := x$ for all $x \in X$.
Furthermore, let $0 < \eps_i := 2 \cdot |\alpha_i - 1| \cdot \diam(X) \to 0$.
For any $x,x' \in X$,
\begin{align*}
|d_{\alpha_i X}(f_i(x),f_i(x')) - d_X(x,x')| &= |\alpha_i-1| \cdot d_X(x,x') < \eps_i.
\intertext{Analogously,}
|d_{\alpha_i X}(x,x') - d_X(g_i(x),g_i(x'))| & < \eps_i.
\end{align*}
Furthermore, $d_X(x, g_i \circ f_i(x)) = 0 < \eps_i$
and $d_X(f_i \circ g_i(x),x) = 0 < \eps_i$.
Thus, $(f_i,g_i) \in \Isom{\eps_i}((\alpha_i X,p), (X,p))$
and $(\alpha_i X,p) \to (X,p)$.
Now let $X$ be non-compact and $r > 0$. Then, using \ref{prop-E--a} and the compact case,
\begin{align*}
&d_{\textit{GH}}((\B_r^{\alpha_i X}(p),p), (\B_r^X(p),p))\\
&\leq d_{\textit{GH}}((\B_r^{\alpha_i X}(p),p), (\B_{\alpha_i r}^{\alpha_i X}(p),p))
+ d_{\textit{GH}}((\B_{\alpha_i r}^{\alpha_i X}(p),p), (\B_r^X(p),p)) \\
&= \alpha_i \cdot d_{\textit{GH}}((\B_{r/\alpha_i}^{X}(p),p), (\B_{r}^{X}(p),p))
+ d_{\textit{GH}}((\alpha_i \B_{r}^{X}(p),p), (\B_r^X(p),p)) \\
&\to 0.
\end{align*}
\par\smallskip\noindent\ref{prop-E--c}
By the triangle inequality, for fixed $r > 0$,
\begin{align*}
&d_{\textit{GH}}((\B_{r}^{\alpha_i X_i}(p_i),p_i), (\B_{r}^{\alpha X}(p),p)) \\
&\leq d_{\textit{GH}}((\B_{r}^{\alpha_i X_i}(p_i),p_i), (\B_{\alpha_i r/\alpha}^{\alpha_i X}(p),p)) \\&\quad
+ d_{\textit{GH}}((\B_{\alpha_i r/\alpha}^{\alpha_i X}(p),p), (\B_{r}^{\alpha_i X}(p),p)) \\&\quad
+ d_{\textit{GH}}((\B_{r}^{\alpha_i X}(p),p), (\B_{r}^{\alpha X}(p),p)).
\end{align*}
By \ref{prop-E--a},
\begin{align*}
&d_{\textit{GH}}((\B_{r}^{\alpha_i X_i}(p_i),p_i), (\B_{\alpha_i r/\alpha}^{\alpha_i X}(p),p))
\\&= \alpha_i \cdot d_{\textit{GH}}((\B_{r/\alpha_i}^{X_i}(p_i),p_i), (\B_{r/\alpha}^{X}(p),p))
\to 0,
\end{align*}
by \autoref{lem_GH:estimate_GH-distance_of_balls_in_same_space},
\begin{align*}
d_{\textit{GH}}((\B_{\alpha_i r/\alpha}^{\alpha_i X}(p),p), (\B_{r}^{\alpha_i X}(p),p))
\leq |r - \frac{\alpha_i}{\alpha} \cdot r|
\to 0,
\end{align*}
and by \ref{prop-E--b},
\[d_{\textit{GH}}((\B_{r}^{\alpha_i X}(p),p), (\B_{r}^{\alpha X}(p),p)) \to 0.\]
Hence,
$(\B_{r}^{\alpha_i X_i}(p_i),p_i) \to (\B_{r}^{\alpha X}(p),p)$ for every $r > 0$.
\par\smallskip\noindent\ref{prop-E--d}
Let $\alpha$ be an arbitrary accumulation point of $(\alpha_i)_{i \in \nn}$.
Hence, for a subsequence $(i_j)_{j \in \nn}$,
both $\alpha_{i_j} \to \alpha$, and by \ref{prop-E--c},
$(\alpha_{i_j} X_{i_j}, p_{i_j}) \to (\alpha X, p) \textrm{ as } j \to \infty$.
On the other hand, $(\alpha_{i_j} X_{i_j}, p_{i_j}) \to (Y, q) \textrm{ as } j \to \infty$.
Thus, $(Y,q)$ and $(\alpha X, p)$ are isometric
(cf.~\autoref{lem_GH:GH_limit_unique_up_to_pointed_isometry}).
\end{proof}
\begin{cor}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed length spaces
and $(\alpha_i)_{i \in \nn}$ be a bounded sequence.
If $(X_i,p_i) \to (X,p)$,
then the sublimits of $(\alpha_i X_i,p_i)$
are exactly the spaces $(\alpha X, p)$,
where $\alpha$ ranges over the accumulation points of $(\alpha_i)_{i \in \nn}$.
\end{cor}
\begin{proof}
Let $\alpha$ be an accumulation point of $(\alpha_i)_{i \in \nn}$
and $(\alpha_{i_j})_{j \in \nn}$ be a subsequence converging to $\alpha$.
Then $(X_{i_j},p_{i_j}) \to (X, p)$, and by \autoref{prop-E},
\[(\alpha_{i_j} X_{i_j},p_{i_j}) \to (\alpha X, p).\]
Now let $(Y,y)$ be a sublimit of $(\alpha_i X_i,p_i)$,
i.e.~$(\alpha_{i_j} X_{i_j},p_{i_j}) \to (Y,y)$ for some subsequence $(i_j)_{j \in \nn}$.
Since $(\alpha_{i_j})_{j \in \nn}$ is a bounded sequence,
there exists a convergent subsequence $(\alpha_{i_{j_l}})_{l \in \nn}$ with limit $\alpha$.
For this subsequence, $(\alpha_{i_{j_l}} X_{i_{j_l}},p_{i_{j_l}}) \to (Y,y)$,
and $(\alpha_{i_{j_l}} X_{i_{j_l}},p_{i_{j_l}}) \to (\alpha X, p)$ by the first part.
Thus, $(Y,y)$ is isometric to $(\alpha X, p)$
for an accumulation point $\alpha$ of $(\alpha_i)_{i \in \nn}$.
\end{proof}
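The correspondence in the corollary can be made concrete on intervals. The following Python sketch is a hypothetical instance (all concrete spaces and factors are assumptions for the demo): since $\alpha\,[0,L]$ is isometric to $[0,\alpha L]$, the sublimits of the rescaled sequence can be read off from the rescaled lengths.

```python
# Hypothetical instance of the corollary: X_i = [0, 1 + 1/i] -> X = [0, 1],
# pointed at 0, with bounded rescaling factors alpha_i = 2 + (-1)^i + 1/i,
# whose accumulation points are 1 and 3.
def alpha(i):
    return 2 + (-1) ** i + 1 / i

def length(i):
    return 1 + 1 / i

# rescaled lengths alpha_i * (1 + 1/i) along the even and odd subsequences
evens = [alpha(i) * length(i) for i in range(2, 10_001, 2)]
odds = [alpha(i) * length(i) for i in range(3, 10_001, 2)]

# the two sublimits are alpha * [0, 1] = [0, alpha] for alpha in {1, 3},
# matching the two accumulation points of (alpha_i)
assert abs(evens[-1] - 3) < 1e-2 and abs(odds[-1] - 1) < 1e-2
```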
\subsection{Convergence of points}\label{sec:GH-ncpt--convergence_of_points}
In the previous section, convergent sequences of pointed metric (length) spaces were studied.
Given such a sequence and using the corresponding approximations,
a notion for convergence of points can be introduced.
\begin{defn}\label{dfn:q_i->q}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed length spaces.
Assume $(X_i,p_i) \to (X,p)$ and let
$\eps_i \to 0$ and
\[(f_i,g_i) \in \Isomp{\eps_i}{1/\eps_i}{X_i}{p_i}{X}{p}\]
as in \autoref{cor:dgh_small_iff_eps_approx_noncompact}.
Let $q_i \in \B^{X_i}_{1/\eps_i}(p_i)$ and $q \in X$.
Then \emph{$q_i$ converges to $q$}, denoted by $q_i \to q$,
if $f_i(q_i)$ converges to $q$ (in $X$).
\end{defn}
For $(X_i, p_i) \to (X,p)$ as in the definition, $p_i \to p$ due to $f_i(p_i) = p$.
Moreover, for each $x \in X$ there exists such a sequence $x_i$ satisfying $x_i \to x$,
e.g.~$x_i := g_i(x)$.
Convergence $q_i \to q$ depends on the choice of the underlying Gro\-mov-Haus\-dorff\xspace approximations:
Convergence with respect to one pair of approximations
does not necessarily imply convergence for another,
as the following example shows.
\begin{exm}
For $i \in \nn$, let $X_i = X = \mathbb{S}^2$ be the $2$-dimensional sphere,
$p_i=p=N$ the north pole and $q_i = q$ some fixed point on the equator.
Let $\phi$ denote the rotation of $\mathbb{S}^2$ by $\frac{\pi}{2}$ fixing $p$
and define $f_i = g_i = f_{2i}' = g_{2i}' = \id_{\mathbb{S}^2}$,
$f_{2i+1}'=\phi$ and $g_{2i+1}' = \phi^{-1}$.
Then both $(f_i,g_i)$ and $(f_i',g_i')$ are pointed isometries
between $(X_i,p_i)$ and $(X,p)$
satisfying $f_i(q_i) = q$, but $f_{2i}'(q_{2i}) = q \neq \phi(q) = f_{2i+1}'(q_{2i+1})$.
Hence, $f_i'(q_i)$ does not converge,
but subconverges with the two limits $q$ and $\phi(q)$.
\end{exm}
In this example, after replacing the approximations, two sublimits occur:
One sublimit is the limit corresponding to the original approximations,
the other one is its image under an isometry of the limit space.
Since Gro\-mov-Haus\-dorff\xspace convergence distinguishes spaces only up to isometry,
concretely $(X,p) \cong (h(X),h(p)) = (X,h(p))$ for any isometry $h$,
this can be interpreted as follows:
If $q$ is a sublimit of $q_i$ with respect to one Gro\-mov-Haus\-dorff\xspace approximation,
then it is a sublimit for all Gro\-mov-Haus\-dorff\xspace approximations.
This is a general fact as the subsequent lemma shows.
In order to prove this, the separability of a connected proper metric space is needed.
Though it is easy to see that such a space is separable,
for completeness, the proof is given first.
\begin{lemma}\label{connected&proper=>separable}
A connected proper metric space is separable.
\end{lemma}
\begin{proof}
Let $X$ be a connected proper metric space and let $p \in X$ be arbitrary.
Then \[X = \bigcup_{q \in \qq \cap (0,\infty)} \B_q(p).\]
Since $X$ is proper, every closed ball $\B_q(p)$ with positive $q \in \qq$ is compact, hence separable.
Therefore, there exists a countable dense subset $A_q \subseteq \B_q(p)$.
Let $A := \bigcup_{q \in \qq \cap (0,\infty)} A_q$.
This $A$ is countable, and for arbitrary $x \in X$
there is a positive $q \in \qq$ such that $x \in \B_q(p)$,
i.e.~there exists a sequence $x_n \in A_q \subseteq A$ converging to $x$.
Thus, $A$ is dense in $X$, hence, $X$ is separable.
\end{proof}
\begin{lemma}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed length spaces.
Assume $(X_i, p_i) \to (X,p)$ and let $\eps_i,\eps_i' \to 0$, $r_i,r_i' \to \infty$ and
\begin{align*}
(f_i,g_i) &\in \Isomp{\eps_i}{r_i}{X_i}{p_i}{X}{p},\\
(f_i',g_i') &\in \Isomp{\eps_i'}{r_i'}{X_i}{p_i}{X}{p}.
\end{align*}
Let $q_i \in \B_{\min\{r_i,r_i'\}}^{X_i}(p_i)$ and $q \in X$.
If $f_i(q_i) \to q$ and $q'$ is an accumulation point of $f_i'(q_i)$,
then there exists an isometry $h : X \to X$ such that $h(q)=q'$.
\end{lemma}
\begin{proof}
Without loss of generality, let $r_i = r_i'$:
Otherwise, let $R_i := \min\{r_i,r_i'\}$
and, by \autoref{lem:GHA_restrict_to_smaller_set_and_different_base_point}
and the construction in its proof,
there are
\begin{align*}
(\tilde{f}_i,\tilde{g}_i) &\in \Isomp{\eps_i}{R_i}{X_i}{p_i}{X}{p}\\
(\tilde{f}_i',\tilde{g}_i') &\in \Isomp{\eps_i'}{R_i}{X_i}{p_i}{X}{p}
\end{align*}
with
\begin{align*}
\tilde{f}_i(q_i) \to q &\text{ if and only if } f_i(q_i) \to q,\\
\tilde{f}'_i(q_i) \to q &\text{ if and only if } f_i'(q_i) \to q.
\end{align*}
Define $h_i, \bar{h}_i: \B_{r_i}^X(p) \to \B_{r_i}^X(p)$ by
\[h_i := f_i' \circ g_i \quad\xspace\textrm{and}\xspace\quad \bar{h}_i := f_i \circ g_i'.\]
In particular, $h_i(p) = \bar{h}_i(p) = p$.
For any $x,x' \in \B_{r_i}^X(p)$,
\begin{align*}
&|d_X(h_i(x),h_i(x')) - d_X(x,x')|\\
&\leq |d_X(f_i'(g_i(x)),f_i'(g_i(x'))) - d_{X_i}(g_i(x),g_i(x'))|
\\&\quad+ |d_{X_i}(g_i(x),g_i(x')) - d_X(x,x')| \\
&\leq \eps_i' + \eps_i \to 0.
\intertext{Analogously, $|d_X(\bar{h}_i(x),\bar{h}_i(x')) - d_X(x,x')| \to 0$. Moreover,}
&d_X(\bar{h}_i \circ h_i (x),x)\\
&= d_X(f_i \circ g_i' \circ f_i' \circ g_i (x),x) \\
&\leq d_{X_i}(g_i \circ f_i \circ g_i' \circ f_i' \circ g_i (x),g_i(x)) + \eps_i \\
&\leq d_{X_i}(g_i' \circ f_i' \circ g_i (x),g_i(x)) + 2\eps_i \\
&\leq d_{X_i}(g_i (x),g_i(x)) + 2\eps_i + \eps_i' \to 0,
\end{align*}
and analogously, $d_X(h_i \circ \bar{h}_i (x),x) \to 0$.
Hence, if the $h_i$ and $\bar{h}_i$ (sub)converge (in some sense),
their corresponding (sub)limits are isometries fixing $p$ with $\bar{h} = h^{-1}$.
The idea for proving subconvergence is to choose a countable dense subset $A \subseteq X$,
to define the sublimit of all $h_i(a)$ where $a \in A$
and to extend this limit to a continuous map on $X$.
Doing the same simultaneously for $\bar{h}_i$
gives another sublimit that turns out to be the inverse of the first.
In the end, identifying $X$ with itself using this isometry proves the claim.
Choose a countable dense subset $A = \{a_n \mid n \in \nn\} \subseteq X$
(cf.~\autoref{connected&proper=>separable})
and, for $i$ large enough such that $d_X(a_n,p) \leq r_i$,
define $z_n^i := h_i(a_n)$ and $\bar{z}_n^i := \bar{h}_i(a_n)$.
Since \[d_X(z_n^i, p) = d_X(h_i(a_n),h_i(p)) \to d_X(a_n,p),\]
the sequence $(d(z_n^i, p))_{i \in \nn}$ is bounded from above by some $R > 0$.
Hence, $z_n^i$ is contained in $\B_R^X(p)$,
and therefore, has a convergent subsequence.
An analogous argument proves subconvergence for $(\bar{z}_n^i)_{i \in \nn}$.
Thus, using a diagonal argument,
there is a subsequence $(i_j)_{j \in \nn}$ such that
for any $n \in \nn$ the sequences $(z_n^{i_j})_{j \in \nn}$ and $(\bar{z}_n^{i_j})_{j \in \nn}$,
respectively, converge to some $z_n \in X$ and $\bar{z}_n \in X$, respectively.
Define $h(a_n) := z_n$ and $\bar{h}(a_n) := \bar{z}_n$.
In particular,
\[d_X(h(a_n),h(a_m)) = d_X(a_n,a_m) = d_X(\bar{h}(a_n),\bar{h}(a_m))\]
for all $n,m \in \nn$.
For arbitrary $x \in X$,
choose a Cauchy sequence $(a_{n_k})_{k \in \nn}$ in $A$ converging to $x$
and let
\[
h(x) := \lim_{k \to \infty} h(a_{n_k})
\quad\xspace\textrm{and}\xspace\quad
\bar{h}(x) := \lim_{k \to \infty} \bar{h}(a_{n_k}).
\]
In fact, for any $k \in \nn$,
\begin{align*}
&d_X(h_{i_j}(x),h(x))\\
&\leq d_X(h_{i_j}(x), h_{i_j}(a_{n_k}))
+ d_X(h_{i_j}(a_{n_k}),h(a_{n_k}))
+ d_X(h(a_{n_k}), h(x))\\
&\leq d_X(x, a_{n_k}) + \eps_{i_j} + \eps_{i_j}'
+ d_X(h_{i_j}(a_{n_k}),h(a_{n_k}))
+ d_X(h(a_{n_k}), h(x))\\
&\to d_X(x, a_{n_k}) + d_X(h(a_{n_k}), h(x)) \textrm{ as } {j \to \infty}.
\end{align*}
Since this holds for every $k \in \nn$
and $d_X(x, a_{n_k}) + d_X(h(a_{n_k}), h(x)) \to 0$ as $k \to \infty$,
\begin{align*}
h_{i_j}(x) \to h(x) \textrm{ as } {j \to \infty}.
\end{align*}
Analogously,
$\bar{h}_{i_j}(x) \to \bar{h}(x) \textrm{ as } {j \to \infty}$.
In particular, $\bar{h}_{i_j} \circ h_{i_j} \to \bar{h} \circ h$ and vice versa.
Thus, $h$ is an isometry on $X$ with inverse $\bar{h}$.
Now let $f_i(q_i) \to q$. Then
\begin{align*}
d_X(f_{i_j}'(q_{i_j}),h(q))
&\leq d_{X_{i_j}}(g_{i_j}' \circ f_{i_j}'(q_{i_j}),g_{i_j}' \circ h(q)) + \eps_{i_j}'\\
&\leq d_{X_{i_j}}(q_{i_j},g_{i_j}' \circ h(q)) + 2 \eps_{i_j}'\\
&\leq d_X(f_{i_j}(q_{i_j}),f_{i_j} \circ g_{i_j}' \circ h(q)) + 2 \eps_{i_j}' + \eps_{i_j}\\
&\leq d_X(f_{i_j}(q_{i_j}),q) + d_X(q,\bar{h}_{i_j} \circ h(q)) + 2 \eps_{i_j}' + \eps_{i_j}\\
&\to 0 \textrm{ as } {j \to \infty}.
\end{align*}
This proves
$f_{i_j}'(q_{i_j}) \to h(q) \textrm{ as } {j \to \infty}$.
\end{proof}
The following statements allow one to change the base point of a given convergent sequence.
\begin{prop}\label{(X_i,p_i)->(X,p),q_i->q=>(X_i,q_i)->(X,q)}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed length spaces,
and let $q_i \in X_i$ and $q \in X$.
If $(X_i,p_i) \to (X,p)$ and $q_i \to q$, then $(X_i,q_i) \to (X,q)$.
\end{prop}
\begin{proof}
The proof is an immediate consequence of
\autoref{lem:GHA_restrict_to_smaller_set_and_different_base_point}
and \autoref{prop:dgh_small_iff_eps_approx_noncompact}:
Choose $\eps_i \to 0$ and $(f_i,g_i) \in \Isomp{\eps_i}{1/\eps_i}{X_i}{p_i}{X}{p}$
as in \autoref{dfn:q_i->q} with $f_i(q_i) \to q$.
In particular,
\[
d_{X_i}(q_i,g_i(q))
\leq \eps_i + d_X(f_i(q_i), f_i(g_i(q)))
\leq 2 \eps_i + d_X(f_i(q_i), q)
\to 0.
\]
Hence, $\delta_i := \max\{d_X(f_i(q_i),q),d_{X_i}(q_i,g_i(q))\} \to 0$.
Since $f_i(p_i) = p$,
\[
d_{X_i}(p_i,q_i)
\leq \eps_i + d_X(p,q) + d_X(q,f_i(q_i))
\to d_X(p,q).
\]
Let $r > 0$ be arbitrary.
Fix $i$ large enough such that
$2(r + d_X(p,q)) \leq \frac{1}{\eps_i}$
and, in addition, such that $d_{X_i}(p_i,q_i) \leq 2 d_X(p,q)$ if $p \neq q$,
or $d_{X_i}(p_i,q_i) \leq r$ if $p = q$.
In particular,
\begin{align*}
&\B^{X_i}_r(q_i)
\subseteq \B^{X_i}_{r + d_{X_i}(p_i,q_i)}(p_i)
\subseteq \B^{X_i}_{1/\eps_i}(p_i),
\\
&\B^X_r(q)
\subseteq \B^X_{r + d_X(p,q)}(p)
\subseteq \B^X_{1/\eps_i}(p)
\end{align*}
and $\Isomp{4\eps_i + \delta_i}{r}{X_i}{q_i}{X}{q} \ne \emptyset$
by \autoref{lem:GHA_restrict_to_smaller_set_and_different_base_point}.
By \autoref{dgh_small_iff_eps_approx},
\[
\dghp{r}{X_i}{q_i}{X}{q}
\leq 8\eps_i + 2 \delta_i
\to 0,
\]
and \autoref{prop:dgh_small_iff_eps_approx_noncompact} implies the claim.
\qedhere
\end{proof}
\begin{cor}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed length spaces.
Let $q_i \in X_i$ with $d_{X_i}(p_i,q_i) \to 0$.
Assume $(X_i,p_i) \to (X,p)$.
Then $(X_i,q_i) \to (X,p)$.
\end{cor}
\begin{proof}
Choose $\eps_i \to 0$ and $(f_i,g_i) \in \Isomp{\eps_i}{1/\eps_i}{X_i}{p_i}{X}{p}$
as in \autoref{cor:dgh_small_iff_eps_approx_noncompact}.
Then
\[
d_X(f_i(q_i),p)
= d_X(f_i(q_i),f_i(p))
\leq d_{X_i}(q_i,p_i) + \eps_i
\to 0.
\]
Hence, $q_i \to p$,
and \autoref{(X_i,p_i)->(X,p),q_i->q=>(X_i,q_i)->(X,q)} implies the claim.
\end{proof}
\begin{cor}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed length spaces.
Let $q_i \in X_i$ with $d_{X_i}(p_i,q_i) \leq C$ for some $C>0$.
If $(X_i,p_i) \to (X,p)$,
then there exists $q \in X$ such that $(X_i,q_i)$ subconverges to $(X,q)$.
\end{cor}
\begin{proof}
Let $\eps_i \to 0$ and $(f_i,g_i) \in \Isomp{\eps_i}{1/\eps_i}{X_i}{p_i}{X}{p}$
be as in \autoref{cor:dgh_small_iff_eps_approx_noncompact}.
Fix $R > C$. Then there is $i_0 \in \nn$ such that $C + \eps_i \leq R$ for all $i \geq i_0$.
Therefore, $f_i(q_i) \in \B_R(p)$ for all $i \geq i_0$.
Since this ball is compact,
there exists a convergent subsequence with limit $q \in \B_R(p)$.
After passing to this subsequence, $q_i \to q$,
and \autoref{(X_i,p_i)->(X,p),q_i->q=>(X_i,q_i)->(X,q)} implies the claim.
\end{proof}
\subsection{Convergence of maps}\label{sec:GH-ncpt--convergence_of_maps}
So far, statements about the convergence of metric spaces and of points were made.
But even statements about maps between such convergent spaces are possible:
In fact, Lipschitz maps (sub)converge (in a suitable sense) to Lipschitz maps.
The proof of this may seem rather technical,
but it essentially uses only the methods
by which one can prove convergence of compact subsets (without invoking \precptnessThm).
Therefore, a proof of the latter is given in advance
after establishing the following (technical) lemma.
\begin{lemma}\label{lem_GH:similar_isometries_imply_convergence}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be pointed length spaces.
Assume $(X_i,p_i) \to (X,p)$ and
let $\eps_i \to 0$
and \[(f_i,g_i) \in \Isomp{\eps_i}{1/\eps_i}{X_i}{p_i}{X}{p}\]
be as in \autoref{cor:dgh_small_iff_eps_approx_noncompact}.
Moreover, let $A_i \subseteq B_{1/\eps_i}^{X_i}(p_i)$ and $A \subseteq X$ be compact
and $f_i': A_i \to A$, $g_i' : A \to A_i$ and $\delta_i \to 0$ satisfy
\begin{align*}
d_X(f_i'(x_i),f_i(x_i)) \leq \delta_i \quad\xspace\textrm{and}\xspace\quad
d_{X_i}(g_i'(x),g_i(x)) \leq \delta_i
\end{align*}
for all $x_i \in A_i$ and $x \in A$.
Then $A_i \to A$.
\end{lemma}
\begin{proof}
Prove $(f_i',g_i') \in \Isom{2(\eps_i + \delta_i)}(A_i,A)$:
For $x_i^1, x_i^2 \in A_i$,
\begin{align*}
&|d_{X}(f_i'(x_i^1), f_i'(x_i^2)) - d_{X_i}(x_i^1, x_i^2)|
\\& \leq |d_{X}(f_i'(x_i^1), f_i'(x_i^2)) - d_{X}(f_i(x_i^1), f_i(x_i^2))|
\\&\quad + |d_{X}(f_i(x_i^1), f_i(x_i^2)) - d_{X_i}(x_i^1, x_i^2)|
\\& < d_X(f_i'(x_i^1), f_i(x_i^1)) + d_X(f_i'(x_i^2),f_i(x_i^2)) + \eps_i
\\&\leq \eps_i + 2 \delta_i.
\end{align*}
Analogously, $|d_{X_i}(g_i'(x^1), g_i'(x^2)) - d_{X}(x^1, x^2)| < \eps_i + 2 \delta_i$
for all $x^1,x^2 \in A$.
Moreover, for $x_i \in A_i$,
\begin{align*}
&d_{X_i}(g_i' \circ f_i' (x_i),x_i)
\\& \leq d_{X_i}(g_i' \circ f_i' (x_i),g_i \circ f_i' (x_i))
\\&\quad + d_{X_i}(g_i \circ f_i' (x_i),g_i \circ f_i(x_i)) + d_{X_i}(g_i \circ f_i(x_i),x_i)
\\& < \delta_i + (d_X(f_i' (x_i),f_i(x_i)) + \eps_i) + \eps_i
\\& \leq 2(\eps_i + \delta_i),
\end{align*}
and analogously, $d_{X}(f_i' \circ g_i'(x),x) < 2(\eps_i + \delta_i)$ for all $x \in A$.
\end{proof}
\begin{prop}\label{lem_GH:compact_subets_converge}
Let $(X,d_X,p)$ and $(X_i,d_{X_i},p_i)$, $i \in \nn$, be length spaces
such that $(X_i,p_i) \to (X,p)$
and let $\eps_i \to 0$ and
\[(f_i,g_i) \in \Isomp{\eps_i}{1/\eps_i}{X_i}{p_i}{X}{p}\]
be as in \autoref{cor:dgh_small_iff_eps_approx_noncompact}.
Let $K_i \subseteq X_i$ be compact with $K_i \subseteq \B_R^{X_i}(p_i)$ for some $R > 0$.
Then there exists a compact set $K \subseteq \overline{\B^X_{R+1}(p)}$
such that $K_i$ subconverges to $K$.
\end{prop}
\begin{proof}
Without loss of generality,
assume $R \leq \frac{1}{\eps_i}$ and $\eps_i \leq 1$ for all $i \in \nn$.
Let $x_i \in K_i \subseteq \B_R^{X_i}(p_i)$ be arbitrary.
Then $f_i(x_i) \in B_{R + \eps_i}^X(p) \subseteq \B_{R+1}^X(p)$.
Hence, the sequence $(f_i(x_i))_{i \in \nn}$ is contained in a compact set,
and therefore has a convergent subsequence.
Unfortunately, for different choices of $x_i$ different subsequences might converge.
Therefore, a diagonal argument on countable dense subsets of the $K_i$ will be used.
Let $A_i = \{a_i^n \mid n \in \nn\} \subseteq K_i$ be a countable dense subset.
As seen above, the sequence $(f_i(a_i^n))_{i \in \nn}$, where $n \in \nn$,
has a convergent subsequence with limit $y_n \in \B_{R+1}^X(p)$.
Moreover, this subsequence can be chosen such that,
after passing to this subsequence,
$d_X(f_i(a_i^n),y_n) < \frac{\eps_i}{4}$.
By a diagonal argument,
there exists a common subsequence
such that for every $n \in \nn$ there is $y_n \in \B_{R+1}(p)$
with $d_X(f_i(a_i^n),y_n) < \frac{\eps_i}{4}$ for all $i \in \nn$.
Pass to this subsequence.
Define $A := \{y_n \mid n \in \nn\}$ as the set of all these limits
and let $K := \bar{A}$ denote its closure.
In particular, $K$ is compact.
Define maps $f_i' : K_i \to K$ and $g_i' : K \to K_i$ in the following way:
For $x_i \in A_i$, i.e.~$x_i = a_i^n$ for some $n \in \nn$,
define $f_i' (x_i):= y_n \in A \subseteq K$.
If $x_i \in K_i \setminus A_i$,
choose $a_i^n \in A_i$ with $d_{X_i}(x_i,a_i^n) < \frac{\eps_i}{4}$
and define $f_i'(x_i) := y_n \in A \subseteq K$.
In particular,
\begin{align*}
d_X(f_i'(x_i),f_i(x_i))
&\leq d_X(y_n,f_i(a_i^n)) + d_X(f_i(a_i^n),f_i(x_i))
\\&< \frac{\eps_i}{4} + (\eps_i + d_{X_i}(a_i^n,x_i))
\\&< \frac{\eps_i}{4} + \Big(\eps_i + \frac{\eps_i}{4}\Big)
= \frac{3}{2} \eps_i.
\end{align*}
For $x \in A$, i.e.~$x = y_n$ for some $n \in \nn$,
define $g_i'(y_n) := a_i^n \in A_i \subseteq K_i$.
For $x \in K \setminus A$,
choose $y_n \in A$ with $d_X(x,y_n) < \frac{\eps_i}{4}$
and let $g_i'(x) := a_i^n \in A_i \subseteq K_i$.
Then
\begin{align*}
d_{X_i}(g_i'(x),g_i(x)) = d_{X_i}(a_i^n,g_i(x))
&< 2 \eps_i + d_X(f_i(a_i^n),x)
\\&\leq 2 \eps_i + d_X(f_i(a_i^n),y_n) + d_X(y_n,x)
\\&< \frac{5}{2}\eps_i.
\end{align*}
Now \autoref{lem_GH:similar_isometries_imply_convergence} implies the claim.
\end{proof}
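As a simple illustration of this statement (not needed later), consider the real line with its standard metric:
\begin{rmk}
Let $X_i = X = \rr$ with $p_i = p = 0$ and $K_i = [0,1+\tfrac{1}{i}]$.
Then $K_i \to [0,1]$:
the maps $f_i' : K_i \to [0,1]$, $x \mapsto \min\{x,1\}$,
and $g_i' : [0,1] \to K_i$, $x \mapsto x$,
change distances by at most $\tfrac{1}{i}$
and satisfy $d(g_i' \circ f_i'(x),x) \leq \tfrac{1}{i}$ and $f_i' \circ g_i' = \id_{[0,1]}$,
i.e.~$(f_i',g_i') \in \Isom{2/i}(K_i,[0,1])$.
\end{rmk}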
\begin{lemma}\label{lem:Lipschitz-maps_converge_to_Lipschitz-limit-map}
Let $(X,d_X)$, $(Y,d_Y)$, $(X_i,d_{X_i})$ and $(Y_i,d_{Y_i})$, $i \in \nn$,
be compact length spaces such that $X_i \to X$ and $Y_i \to Y$.
Moreover, let $\alpha > 0$, $K_i \subseteq X_i$ be compact subsets
and $f_i : K_i \to Y_i$ be $\alpha$-bi-Lipschitz.
After passing to a subsequence, the following holds:
\begin{enumerate}
\item\label{lem:Lipschitz-maps_converge_to_Lipschitz-limit-map--a}
There exist compact subsets $K \subseteq X$ and $K' \subseteq Y$
which are Gro\-mov-Haus\-dorff\xspace limits of $K_i$ and $f_i(K_i)$, respectively,
and an $\alpha$-bi-Lipschitz map $f : K \to K'$ with $f(K)=K'$.
\item\label{lem:Lipschitz-maps_converge_to_Lipschitz-limit-map--b}
For any compact subset $L \subseteq K \subseteq X$
there are compact subsets $L_i \subseteq K_i$ such that
$L_i \to L$ and $f_i(L_i) \to f(L)$ in the Gro\-mov-Haus\-dorff\xspace sense.
\end{enumerate}
\end{lemma}
\newcommand{\figureone}{
\begin{center}
\begin{tikzpicture}[auto,>=stealth]
\node (x2)
at (0,3.25)
{$X_i$};
\node[rotate=90] (ss1)
at (0,2.6)
{$\subseteq$};
\node (s2)
at (0,1.75)
{$K_i$};
\node (t2)
at (0,0)
{$f_i(K_i)$};
\node[rotate=90] (ss2)
at (0,-0.75)
{$\supseteq$};
\node (y2)
at (0,-1.5)
{$Y_i$};
%
\node (x3)
at (2.5,3.25)
{$X$};
\node[rotate=90] (ss3)
at (2.5,2.6)
{$\subseteq$};
\node (s3)
at (2.5,1.75)
{$K$};
\node (t3)
at (2.5,0)
{$K'$};
\node[rotate=90] (ss4)
at (2.5,-0.75)
{$\supseteq$};
\node (y3)
at (2.5,-1.5)
{$Y$};
%
%
\node (empty)
at (-3.5,0)
{\quad};
%
\path
(0.0,1.45) edge[->]
node [left] {$f_i$}
(0.0,0.3)
(2.5,1.45) edge[->]
node [right] {$f$}
(2.5,0.3)
(2.75,1.75)edge[->,bend left=80]
node [right] {$h_i= f_i^Y \circ f_i \circ g_i^X$}
(2.75,0.0)
(0.6,3.25) edge[->]
node [left] {}
(2.1,3.25)
(0.6,-1.5) edge[->]
node [left] {}
(2.1,-1.5)
(0.6,1.825)edge[->,bend left =10,dashed]
node [above] {$f_i^X$}
(2.1,1.825)
(0.6,1.675)edge[<-,bend right =10]
node [below] {$g_i^X$}
(2.1,1.675)
(0.6,0.075)edge[->,bend left =10]
node [above] {$f_i^Y$}
(2.1,0.075)
(0.6,-0.075)edge[<-,bend right =10,dashed]
node [below] {$g_i^Y$}
(2.1,-0.075)
;
\end{tikzpicture}
\caption{Sets and maps used to construct $f: K \to K'$.}
\label{pic:Lipschitz-maps_converge_to_Lipschitz-limit-map-a}
\end{center}
}
\newcommand{\figuretwo}{
\begin{center}
\begin{tikzpicture}[auto,>=stealth]
\node (xi)
at (0,4.75)
{$X_i$};
\node[rotate=90] (ss0)
at (0,4.0)
{$\subseteq$};
\node (ki)
at (0,3.25)
{$K_i$};
\node[rotate=90] (ss1)
at (0,2.5)
{$\subseteq$};
\node (li)
at (-0.5,1.75)
{$L_i = \overline{g_i^X(L)}$};
\node (fili)
at (0,0)
{$f_i(L_i)$};
\node[rotate=90] (ss2)
at (0,-0.75)
{$\supseteq$};
\node (fiki)
at (0,-1.5)
{$f_i(K_i)$};
\node[rotate=90] (ss5)
at (0,-2.25)
{$\supseteq$};
\node (yi)
at (0,-3)
{$Y_i$};
%
\node (x)
at (2.5,4.75)
{$X$};
\node[rotate=90] (ss0)
at (2.5,4.0)
{$\subseteq$};
\node (k)
at (2.5,3.25)
{$K$};
\node[rotate=90] (ss3)
at (2.5,2.5)
{$\subseteq$};
\node (l)
at (2.5,1.75)
{$L$};
\node (t3)
at (2.5,0)
{$f(L)$};
\node[rotate=90] (ss4)
at (2.5,-0.75)
{$\supseteq$};
\node (y3)
at (3.0,-1.5)
{$f(K) = K'$};
\node[rotate=90] (ss6)
at (2.5,-2.25)
{$\supseteq$};
\node (z3)
at (2.5,-3)
{$Y$};
%
\path
(0.6,4.75) edge[->]
node [left] {}
(2.1,4.75)
(0.6,3.325) edge[->,bend left =05,dashed]
node [above] {$f_i^X$}
(2.1,3.325)
(0.6,3.175) edge[<-,bend right =05]
node [below] {$g_i^X$}
(2.1,3.175)
(0.6,1.825) edge[->,bend left =05,dashed]
node [above] {$\tilde{f}_i^X$}
(2.1,1.825)
(0.6,1.675) edge[<-,bend right =05]
node [below] {$\tilde{g}_i^X$}
(2.1,1.675)
(0.0,1.45) edge[->]
node [left] {${f_i}_{|L_i}$}
(0.0,0.3)
(2.5,1.45) edge[->]
node [right] {$f_L$}
(2.5,0.3)
(0.6,0.075) edge[->,bend left =05]
node [above] {$\tilde{f}_i^Y$}
(2.1,0.075)
(0.6,-0.075) edge[<-,bend right =05,dashed]
node [below] {$\tilde{g}_i^Y$}
(2.1,-0.075)
(0.6,-1.425) edge[->,bend left =05]
node [above] {$f_i^Y$}
(2.1,-1.425)
(0.6,-1.575) edge[<-,bend right =05,dashed]
node [below] {$g_i^Y$}
(2.1,-1.575)
(0.6,-3) edge[->]
node [left] {}
(2.1,-3)
;
\end{tikzpicture}
\caption{Sets and maps used to construct $L_i \to L$.}
\label{pic:Lipschitz-maps_converge_to_Lipschitz-limit-map-b}
\end{center}
}
\begin{proof}
\par\smallskip\noindent\ref{lem:Lipschitz-maps_converge_to_Lipschitz-limit-map--a}
In order to prove the first part,
pass to the subsequence of \autoref{lem_GH:compact_subets_converge}.
Then
there are compact sets $K \subseteq X$ and $K' \subseteq Y$
such that $K_i \to K$ and $f_i(K_i) \to K'$.
For these, fix $\eps_i \to 0$,
$(f_i^X, g_i^X) \in \Isom{\eps_i}(K_i,K)$
and $(f_i^Y, g_i^Y) \in \Isom{\eps_i}(f_i(K_i),K')$,
cf.~\autoref{pic:Lipschitz-maps_converge_to_Lipschitz-limit-map-a}.
\begin{figure}[t]
\figureone
\end{figure}
The idea is to define $f$ as a limit of $h_i := f_i^Y \circ f_i \circ g_i^X: K \to K'$:
For $x,x' \in K$,
\begin{align*}
d_Y(h_i(x),h_i(x'))
&= d_Y(f_i^Y \circ f_i \circ g_i^X (x), f_i^Y \circ f_i \circ g_i^X(x')) \\
&\leq \eps_i + d_{Y_i}(f_i \circ g_i^X (x), f_i \circ g_i^X(x')) \\
\displaybreak[0]
&\leq \eps_i + (\alpha \cdot d_{X_i}(g_i^X (x), g_i^X(x'))) \\
&\leq \eps_i + (\alpha \cdot (\eps_i + d_X(x,x'))) \\
&= \alpha \cdot d_X(x,x') + (\alpha+1) \cdot \eps_i.
\end{align*}
As in the proof of \autoref{lem_GH:compact_subets_converge},
the $h_i(x)$ do not have to converge.
Therefore, a diagonal argument on a dense subset of $K$ will be used
to construct a limit map which can be extended using the completeness of the limit space.
Let $A = \{x_j \mid j \in \nn\}$ be a countable dense subset of $K$.
Then $h_i(x_j) \in K'$ for all $i,j \in \nn$, and since $K'$ is compact,
by a diagonal argument, there is a subsequence $(i_n)_{n \in \nn}$
such that $(h_{i_n}(x_j))_{n \in \nn}$ converges for every $j \in \nn$.
Define $f: A \to K'$ by $f(x_j) = \lim_{n \to \infty} h_{i_n}(x_j)$.
This map is $\alpha$-bi-Lipschitz:
For arbitrary $j,l \in \nn$, with the above estimate,
\begin{align*}
d_Y(f(x_j), f(x_l))
&= \lim_{n \to \infty} d_Y(h_{i_n}(x_j), h_{i_n}(x_l))\\
&\leq \lim_{n \to \infty} (\alpha+1) \cdot \eps_{i_n} + \alpha \cdot d_X(x_j,x_l) \\
&= \alpha \cdot d_X(x_j,x_l).
\end{align*}
Analogously,
$d_Y(f(x_j), f(x_l)) \geq \frac{1}{\alpha} \cdot d_X(x_j,x_l).$
Since $A$ is a countable dense subset of $K$,
$f$ can be extended to an $\alpha$-bi-Lipschitz map $f: K \to K'$
(cf.~\autoref{lem:extending_Lipschitz_maps})
where $f(x) = \lim_{l \to \infty} f(x_{j_l})$
for $x \in K$ and $x_{j_l} \in A$ with $x_{j_l} \to x$.
In particular, for $n \in \nn$ and $l \in \nn$,
\begin{align*}
&d_Y(f(x), h_{i_n}(x))\\
&\leq d_Y(f(x), f(x_{j_l}))
+ d_Y(f(x_{j_l}),h_{i_n}(x_{j_l}))
+ d_Y(h_{i_n}(x_{j_l}), h_{i_n}(x)) \\
&\leq d_Y(f(x), f(x_{j_l}))
+ d_Y(f(x_{j_l}),h_{i_n}(x_{j_l}))
+ \alpha \cdot d_X(x_{j_l}, x)
+ (\alpha+1) \cdot \eps_{i_n} \\
&\to d_Y(f(x), f(x_{j_l}))
+ \alpha \cdot d_X(x_{j_l}, x) \textrm{ as } n \to \infty \\
&\to 0 \textrm{ as } l \to \infty.
\end{align*}
Hence, $f(x) = \lim_{n \to \infty} h_{i_n}(x)$.
Moreover, observe the following: Since $f_i$ is $\alpha$-bi-Lipschitz, it is injective.
Therefore,
the inverse $f_i^{-1}$ of $f_i$ exists on $f_i(K_i) \supseteq \im(g_i^Y)$
and is $\alpha$-bi-Lipschitz as well.
Hence, for $x \in K$ and $y \in K'$,
\begin{align*}
d_Y(h_i(x),y)
&= d_Y(f_i^Y \circ f_i \circ g_i^X(x),y) \\
\displaybreak[0]
&\leq 2 \eps_i + d_{Y_i}(f_i \circ g_i^X(x),g_i^Y(y)) \\
\displaybreak[0]
&\leq 2 \eps_i + \alpha \cdot d_{X_i}(g_i^X(x),f_i^{-1} \circ g_i^Y(y)) \\
&\leq 2 \eps_i + \alpha \cdot (2 \eps_i + d_X(x,f_i^X \circ f_i^{-1} \circ g_i^Y(y))) \\
&= 2 (\alpha +1) \eps_i + \alpha \cdot d_X(x,h_i'(y))
\end{align*}
where $h'_i := f_i^X \circ f_i^{-1} \circ g_i^Y$.
With analogous arguments
and using a further subsequence $(i_{n_m})_{m \in \nn}$ of $(i_n)_{n \in \nn}$,
there is an $\alpha$-bi-Lipschitz map $g : K' \to K$
with $g(y) = \lim_{m \to \infty} h_{i_{n_m}}'(y)$ for all $y \in K'$.
In particular, for all $y \in K'$,
\begin{align*}
d_Y(f \circ g(y),y)
&= \lim_{m \to \infty} d_Y(h_{i_{n_m}}(g(y)),y)\\
&\leq \lim_{m \to \infty} 2(\alpha+1)\eps_{i_{n_m}}
+ \alpha \cdot d_X(g(y),h_{i_{n_m}}'(y))\\
&=0.
\end{align*}
Thus, $f \circ g = \id_{K'}$.
Hence, $K' \subseteq \im(f)$ which proves $K' = f(K)$.
In fact, an analogous argument shows $g \circ f = \id_K$,
i.e.~$g$ is the inverse of $f$.
This proves the first part.
\par\smallskip\noindent\ref{lem:Lipschitz-maps_converge_to_Lipschitz-limit-map--b}
The proof of the second statement is based on the first part
and is done with very similar methods.
Let $(f_i^X, g_i^X) \in \Isom{\eps_i}(K_i,K)$
and $(f_i^Y, g_i^Y) \in \Isom{\eps_i}(f_i(K_i),K')$ be as before.
Then $L_i := \overline{g_i^X(L)} \subseteq K_i$ is a compact subset of $K_i$.
The proof of the subconvergences will be done in two steps:
First, prove $L_i \to L$, then $f_i(L_i) \to f(L)$.
For the maps defined below,
cf.~\autoref{pic:Lipschitz-maps_converge_to_Lipschitz-limit-map-b}.
First, define $(\tilde{f}_i^X, \tilde{g}_i^X) \in \Isom{2\eps_i}(L_i,L)$ as follows:
For $x_i \in g_i^X(L)$, choose a point $y \in L$ with $x_i = g_i^X(y)$;
for $x_i \in L_i \setminus g_i^X(L)$,
choose $y \in L$ with $d_{X_i}(x_i,g_i^X(y)) < \frac{\eps_i}{2}$.
Then define $\tilde{f}_i^X(x_i) := y$.
Finally, set $\tilde{g}_i^X := g_i^X$.
By definition,
\[
d_{X_i}(\tilde{g}_i^X \circ \tilde{f}_i^X(x_i),x_i)
= d_{X_i}(g_i^X \circ \tilde{f}_i^X(x_i),x_i) < \frac{\eps_i}{2}
\]
for all $x_i \in L_i$.
Conversely, for $x \in L$ and by applying this inequality,
\begin{align*}
d_X(\tilde{f}_i^X \circ \tilde{g}_i^X(x),x)
\displaybreak[0]
&= d_X(\tilde{f}_i^X \circ g_i^X(x),x) \\
&\leq d_{X_i}(g_i^X \circ \tilde{f}_i^X (g_i^X(x)),g_i^X(x)) + \eps_i
\\&\leq \frac{3}{2} \eps_i.
\end{align*}
Now let $x_i,x_i' \in L_i$ be arbitrary. Then
\begin{align*}
&|d_X(\tilde{f}_i^X(x_i),\tilde{f}_i^X(x_i')) - d_{X_i}(x_i,x_i')| \\
&\leq |d_X(\tilde{f}_i^X(x_i),\tilde{f}_i^X(x_i'))
- d_{X_i}(g_i^X(\tilde{f}_i^X(x_i)),g_i^X(\tilde{f}_i^X(x_i')))|
\\&\quad + |d_{X_i}(g_i^X(\tilde{f}_i^X(x_i)),g_i^X(\tilde{f}_i^X(x_i')))
- d_{X_i}(x_i,x_i')|\\
&< \eps_i + d_{X_i}(g_i^X \circ \tilde{f}_i^X(x_i),x_i)
+ d_{X_i}(g_i^X \circ \tilde{f}_i^X(x_i'),x_i') \\
&< 2 \eps_i.
\end{align*}
For $x,x' \in L$,
by definition,
\[
|d_{X_i}(\tilde{g}_i^X(x),\tilde{g}_i^X(x')) - d_X(x,x')|
< \eps_i
< 2 \eps_i,
\]
and this proves
$(\tilde{f}_i^X, \tilde{g}_i^X) \in \Isom{2\eps_i}(L_i,L)$.
\begin{figure}[t]
\figuretwo
\end{figure}
In order to prove the subconvergence of $f_i(L_i)$ to $f(L)$,
observe that $f_i(L_i)$ and $f(L)$ are compact,
being the images of the compact sets $L_i$ and $L$
under the continuous maps $f_i$ and $f$, respectively.
Let
\[\delta_i(x) := d_Y(h_i(x),f(x))\] for $x \in L$ and
\[\delta_i := \sup_{x \in L} \delta_i(x).\]
For the subsequence $(i_n)_{n \in \nn}$ from the first part,
$\delta_{i_n}(x)$ converges to $0$ for every $x \in L$.
In fact, $\delta_{i_n}$ converges to $0$ as well:
Assume this is not the case,
i.e.~there exist $\eps > 0$, a subsequence $(i_{n_l})_{l \in \nn}$
and points $x_{n_l} \in L$
with $\delta_{i_{n_l}}(x_{n_l}) \geq \eps$.
After passing to a further subsequence,
there is $x \in L$ such that $x_{n_l} \to x$ as $l \to \infty$
since $L$ is compact.
Then
\begin{align*}
\eps
&\leq \delta_{i_{n_l}}(x_{n_l}) \\
&= d_Y(h_{i_{n_l}}(x_{n_l}),f(x_{n_l})) \\
&\leq d_Y(h_{i_{n_l}}(x_{n_l}),h_{i_{n_l}}(x))
+ d_Y(h_{i_{n_l}}(x),f(x))
+ d_Y(f(x),f(x_{n_l})) \\
&\leq (\alpha \cdot d_X(x_{n_l},x)
+ (\alpha+1) \cdot \eps_{i_{n_l}})
+ \delta_{i_{n_l}}(x)
+ \alpha \cdot d_X(x,x_{n_l}) \\
&\to 0 \textrm{ as } l \to \infty.
\end{align*}
This is a contradiction.
Construct $(\tilde{f}_i^Y, \tilde{g}_i^Y) \in \Isom{\tilde{\eps_i}}(f_i(L_i),f(L))$
for $\tilde{\eps}_i := (4\alpha+1)\eps_i + 2 \delta_i$ as follows:
Define $\tilde{f}_i^Y := f \circ \tilde{f}_i^X \circ f_i^{-1}$
and $\tilde{g}_i^Y := f_i \circ g_i^X \circ f^{-1}$
(recall that $f_i^{-1}$ exists on $f_i(L_i) \subseteq f_i(K_i)$
and that $f : K \to K'$ is bijective).
First, let $y_i \in f_i(L_i)$ and $y \in f(L)$ be arbitrary.
Then
\begin{align*}
d_{Y_i}(\tilde{g}_i^Y \circ \tilde{f}_i^Y (y_i),y_i)
&= d_{Y_i}(f_i \circ g_i^X \circ \tilde{f}_i^X \circ f_i^{-1} (y_i),y_i) \\
&\leq \alpha \cdot d_{X_i}(g_i^X \circ \tilde{f}_i^X (f_i^{-1} (y_i)),f_i^{-1}(y_i)) \\
&< \alpha \cdot 2\eps_i \leq \tilde{\eps}_i ,
\end{align*}
and completely analogously,
\begin{align*}
d_Y(\tilde{f}_i^Y \circ \tilde{g}_i^Y (y),y)
&= d_Y(f \circ \tilde{f}_i^X \circ g_i^X \circ f^{-1} (y),y)
<2 \alpha \eps_i\leq \tilde{\eps}_i.
\end{align*}
For $y,y' \in f(L)$,
\begin{align*}
&|d_{Y_i}(\tilde{g}_i^Y(y),\tilde{g}_i^Y(y')) - d_Y(y,y')|\\
&\leq |d_{Y_i}(\tilde{g}_i^Y(y),\tilde{g}_i^Y(y'))
- d_Y(f_i^Y \circ \tilde{g}_i^Y(y),f_i^Y \circ \tilde{g}_i^Y(y'))|
\\&\quad
+ |d_Y(f_i^Y \circ f_i \circ g_i^X \circ f^{-1}(y),
f_i^Y \circ f_i \circ g_i^X \circ f^{-1}(y'))
- d_Y(y,y')|\\
&< \eps_i
+ d_Y(h_i \circ f^{-1}(y),f \circ f^{-1}(y))
+ d_Y(h_i \circ f^{-1}(y'),f \circ f^{-1}(y'))\\
&\leq \eps_i + 2 \delta_i \leq \tilde{\eps}_i.
\end{align*}
Finally, let $y_i,y_i' \in f_i(L_i)$. Using the above estimates,
\begin{align*}
&|d_Y(\tilde{f}_i^Y(y_i),\tilde{f}_i^Y(y_i')) - d_{Y_i}(y_i,y_i')|\\
&\leq |d_Y(\tilde{f}_i^Y(y_i),\tilde{f}_i^Y(y_i'))
- d_{Y_i}(\tilde{g}_i^Y(\tilde{f}_i^Y(y_i)),\tilde{g}_i^Y(\tilde{f}_i^Y(y_i')))|
\\&\quad + |d_{Y_i}(\tilde{g}_i^Y(\tilde{f}_i^Y(y_i)),\tilde{g}_i^Y(\tilde{f}_i^Y(y_i')))
- d_{Y_i}(y_i,y_i')|
\displaybreak[0]\\
&< \eps_i + 2 \delta_i
+ d_{Y_i}(\tilde{g}_i^Y(\tilde{f}_i^Y(y_i)),y_i)
+ d_{Y_i}(\tilde{g}_i^Y(\tilde{f}_i^Y(y_i')),y_i')\\
&\leq \eps_i + 2 \delta_i + 2 \cdot 2\alpha\eps_i
= \tilde{\eps}_i.
\end{align*}
Thus, $(\tilde{f}_i^Y, \tilde{g}_i^Y) \in \Isom{\tilde{\eps}_i}(f_i(L_i),f(L))$.
Since $\tilde{\eps}_{i_n} \to 0 \textrm{ as } {n \to\infty}$,
this proves $f_{i_n}(L_{i_n}) \to f(L) \textrm{ as } {n \to\infty}$.
\end{proof}
\begin{lemma}\label{lem:extending_Lipschitz_maps}
Let $(X,d_X)$ and $(Y,d_Y)$ be metric spaces where $Y$ is complete,
let $A \subseteq X$
and $f : A \to Y$ be $\alpha$-(bi)-Lipschitz for some $\alpha > 0$.
Then $f$ can be extended
to an $\alpha$-(bi)-Lipschitz map $\hat{f} : \bar{A} \to Y$.
\end{lemma}
\begin{proof}
Let $a \in \bar{A}\setminus A$ be arbitrary.
Then there exists a (Cauchy) sequence $(a_n)_{n \in \nn}$ in $A$ converging to $a$.
By Lipschitz continuity of $f$,
$(f(a_n))_{n \in \nn}$ is a Cauchy sequence,
and thus has a limit $\hat{a}$ in the complete metric space $Y$.
For any sequence $(\tilde{a}_n)_{n \in \nn}$ in $A$ converging to $a$,
$d_Y(f(a_n), f(\tilde{a}_n)) \leq \alpha \cdot d_X(a_n,\tilde{a}_n) \to 0$,
i.e.~the limit $\hat{a}$ is independent of the choice of $(a_n)_{n \in \nn}$.
Now define $\hat{f}(a) := \hat{a}$ for $a \in \bar{A}\setminus A$
and $\hat{f}(a) := f(a)$ for $a \in A$.
For arbitrary $a,b \in \bar{A}$ and sequences $a_n \to a$, $b_n \to b$ in $A$,
\begin{align*}
d_Y(\hat{f}(a),\hat{f}(b))
&= \lim_{n \to \infty} d_Y(f(a_n),f(b_n))
\\&\leq \lim_{n \to \infty} \alpha \cdot d_X(a_n,b_n)
\\&= \alpha \cdot d_X(a,b).
\end{align*}
Hence, $\hat{f}$ is $\alpha$-Lipschitz.
Analogously, if $f$ is $\alpha$-bi-Lipschitz, $\hat{f}$ is $\alpha$-bi-Lipschitz.
\end{proof}
\section{Ultralimits}\label{sec:ultralimits}
Since sequences of proper spaces do not necessarily converge in the pointed Gro\-mov-Haus\-dorff\xspace sense,
a tool that enforces convergence can be useful. One such tool is the so-called ultralimit:
ultralimits always exist, and they are sublimits in the pointed Gro\-mov-Haus\-dorff\xspace sense.
A basic reference from which the following definitions are taken
is \cite[section I.5]{bridson-haefliger}.
Another, more set-theoretic, reference is \cite[chapter 7]{jech}.
In the following, ultralimits will be introduced and some properties will be investigated.
\begin{defn}[{\cite[Definition I.5.47]{bridson-haefliger}}]
A \emph{non-principal ultrafilter on $\nn$}
is a finitely additive probability measure $\omega$ on $\nn$ such that
all subsets $S \subseteq \nn$ are $\omega$-measurable
with $\omega(S) \in \{0,1\}$ and $\omega(S) = 0$ if $S$ is finite.
\end{defn}
\begin{rmk}
If two sets have $\omega$-measure $1$, their intersection has $\omega$-measure $1$ as well:
Let $\omega(A) = \omega(B) = 1$.
Then
\[
\omega(\nn \setminus (A \cap B))
= \omega(\nn \setminus A \cup \nn \setminus B)
\leq \omega(\nn \setminus A) + \omega (\nn \setminus B)
= 0,
\]
hence, $\omega(A \cap B) = 1$.
\end{rmk}
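The non-principality condition rules out exactly the following trivial examples:
\begin{rmk}
For fixed $n \in \nn$, the Dirac measure $\delta_n$ with $\delta_n(S) = 1$ if $n \in S$
and $\delta_n(S) = 0$ otherwise
is a finitely additive probability measure on $\nn$ taking only the values $0$ and $1$;
it fails to be non-principal only because $\delta_n(\{n\}) = 1$ for the finite set $\{n\}$.
Ultrafilters of this form are called \emph{principal}.
\end{rmk}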
The existence of such a non-principal ultrafilter can be proven using Zorn's Lemma.
But even more is true: given any infinite subset of $\nn$,
there exists a non-principal ultrafilter
assigning measure $1$ to this subset.
\begin{lemma}\label{lem_UL:existence_of_ultrafilter}
Let $A \subseteq \nn$ be an infinite set.
Then there exists a non-principal ultrafilter $\omega$ on $\nn$ such that $\omega(A) = 1$.
\end{lemma}
\begin{proof}
Let
\[
G := \{B \subseteq \nn
\mid B \supseteq A~\text{or}~\nn \setminus B \text{ is finite}\}.
\]
For any $B_1, B_2 \in G$, the intersection $B_1 \cap B_2$ is non-empty:
This is obviously correct if both $B_1, B_2 \supseteq A$
or if both $\nn \setminus B_1$ and $\nn \setminus B_2$ are finite.
Thus, let $B_1 \supseteq A$ and $\nn \setminus B_2$ be finite:
Then $A \setminus B_2$ is finite as well,
hence, $B_1 \cap B_2 \supseteq A \cap B_2 = A \setminus (A \setminus B_2)$ is infinite
since $A$ is infinite.
In particular, the intersection is non-empty.
Using that $G$ contains all sets with finite complement,
it follows from
\cite[Lemma 7.2 (iii)]{jech}, \cite[Theorem 7.5]{jech} and the subsequent remark therein
that there exists a non-principal ultrafilter $\omega$
such that $\omega(X) = 1$ for all $X \in G$.
In particular, $\omega(A) = 1$.
\end{proof}
Given a bounded sequence of real numbers,
a non-principal ultrafilter provides a kind of \myquote{limit}.
In fact, these \myquote{limits} are accumulation points
and non-principal ultrafilters pick out convergent subsequences.
\begin{lemma}[{\cite[Lemma I.5.49]{bridson-haefliger}}]
Let $\omega$ be a non-principal ultrafilter on $\nn$.
For every bounded sequence of real numbers $(a_i)_{i \in \nn}$
there exists a unique real number $l \in \rr$
such that
\[\omega( \{i \in \nn \mid |a_i - l| < \eps \}) = 1\]
for every $\eps > 0$.
Denote this $l$ by $\lim\nolimits_\omega a_i$.
\end{lemma}
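As a first consistency check, which follows directly from the definition:
for convergent sequences, $\lim\nolimits_\omega$ agrees with the usual limit.
\begin{rmk}
If $a_i \to a$ in the usual sense, then $\lim\nolimits_\omega a_i = a$ for every non-principal ultrafilter $\omega$:
for every $\eps > 0$, the set $\{i \in \nn \mid |a_i - a| \geq \eps\}$ is finite,
hence has $\omega$-measure $0$,
and therefore $\omega(\{i \in \nn \mid |a_i - a| < \eps\}) = 1$.
\end{rmk}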
\begin{lemma}\label{lem_UL:ultrafilter_pick_cvgt_subsequence}
If $\omega$ is a non-principal ultrafilter on $\nn$
and $(a_i)_{i \in \nn}$ a bounded sequence of real numbers,
then $\lim\nolimits_\omega a_i$ is an accumulation point of $(a_i)_{i \in \nn}$.
Moreover, there exists a subsequence $(a_{i_j})_{j \in \nn}$ converging to $\lim\nolimits_\omega a_i$
such that $\omega(\{i_j \mid j \in \nn\}) = 1$.
Conversely, if $(a_i)_{i \in \nn}$ is a bounded sequence of real numbers
and $a \in \rr$ any accumulation point,
then there exists a non-principal ultrafilter $\omega$ on $\nn$ such that $a = \lim\nolimits_\omega a_i$.
\end{lemma}
\begin{proof}
Let $(a_i)_{i \in \nn}$ be any bounded sequence of real numbers.
First, fix a non-principal ultrafilter $\omega$,
let $a := \lim\nolimits_\omega a_i$ and
\[A_{\eps} := \{i \in \nn \mid |a_i - a| < \eps\}\]
for $\eps > 0$.
By definition, $\omega(A_{\eps}) = 1$;
in particular, $A_{\eps}$ has infinitely many elements.
Thus, $a$ is an accumulation point.
Next, prove that there exists $I \subseteq \nn$ with $\omega(I) = 1$
such that the subsequence $(a_i)_{i \in I}$ converges to $a$.
Assume this is not the case,
i.e.~every $I \subseteq \nn$ satisfies $\omega(I) = 0$
or $(a_i)_{i \in I}$ does not converge to $a$.
Since $\omega(\nn) = 1$, $(a_i)_{i \in \nn}$ does not converge to $a$.
Hence, there exists $\eps > 0$
such that $A_{\eps}$ is finite.
In particular, $\omega(A_{\eps}) = 0$ and this is a contradiction.
Conversely, let $a$ be an accumulation point of $(a_i)_{i \in \nn}$
and let $J \subseteq \nn$ be a set of indices such that the subsequence $(a_j)_{j \in J}$ converges to $a$.
By \autoref{lem_UL:existence_of_ultrafilter},
there exists a non-principal ultrafilter $\omega$ such that $\omega(J) = 1$.
By the first part, there exists a subsequence of indices $I \subseteq \nn$
with $\omega(I) = 1$ and $a_j \to \lim\nolimits_\omega a_i$ as $j \to \infty$ for $j \in I$.
Now $\omega(I \cap J) = 1$
and both $a_j \to a$ and $a_j \to \lim\nolimits_\omega a_i$ as $j \to \infty$ for $j \in I \cap J$.
This proves $a = \lim\nolimits_\omega a_i$.
\end{proof}
An immediate consequence of the above lemma is the following:
For two bounded sequences of real numbers,
investigating sublimits along a common subsequence
is the same as
investigating the \myquote{limits} with respect to a common non-principal ultrafilter.
\begin{lemma}\label{lem_UL:common-subseq=using-ultrafilter}
Let $(a_i)_{i \in \nn}$ and $(b_i)_{i \in \nn}$ be bounded sequences of real numbers.
\begin{enumerate}
\item\label{lem_UL:common-subseq=using-ultrafilter--a}
If $\omega$ is a non-principal ultrafilter on $\nn$,
then there exists a subsequence $(i_j)_{j \in \nn}$ such that both
$a_{i_j} \to \lim\nolimits_\omega a_i$ and $b_{i_j} \to \lim\nolimits_\omega b_i$ as $j \to \infty$.
\item\label{lem_UL:common-subseq=using-ultrafilter--b}
If there are $a, b \in \rr$ and a subsequence $(i_j)_{j \in \nn}$
such that both $a_{i_j} \to a$ and $b_{i_j} \to b$ as $j \to \infty$,
then there exists a non-principal ultrafilter $\omega$ on $\nn$
such that $a = \lim\nolimits_\omega a_i$ and $b = \lim\nolimits_\omega b_i$.
\end{enumerate}
\end{lemma}
\begin{proof}
\par\smallskip\noindent\ref{lem_UL:common-subseq=using-ultrafilter--a}
By \autoref{lem_UL:ultrafilter_pick_cvgt_subsequence},
there are subsequences of indices $I, J \subseteq \nn$
with measures $\omega(I) = \omega(J) = 1$,
\begin{align*}
&a_j \to \lim\nolimits_\omega a_i \textrm{ as } j \to \infty~\text{for}~j \in I
\quad\xspace\textrm{and}\xspace\\
&b_j \to \lim\nolimits_\omega b_i \textrm{ as } j \to \infty~\text{for}~j \in J.
\end{align*}
In particular, $I \cap J$ has $\omega$-measure $1$.
Hence, it is infinite and provides a common subsequence which satisfies the claim.
\par\smallskip\noindent\ref{lem_UL:common-subseq=using-ultrafilter--b}
This follows directly from the second part
of \autoref{lem_UL:ultrafilter_pick_cvgt_subsequence}
since the non-principal ultrafilter constructed there
depends only on the indices of the convergent subsequence.
\end{proof}
\begin{cor}
Let $(a_i)_{i \in \nn}$ and $(b_i)_{i \in \nn}$ be bounded sequences of real numbers.
\begin{enumerate}
\item If $a_i \leq b_i$ for all $i \in \nn$, then $\lim\nolimits_\omega a_i \leq \lim\nolimits_\omega b_i$.
\item $\lim\nolimits_\omega (a_i + b_i) = \lim\nolimits_\omega a_i + \lim\nolimits_\omega b_i$.
\end{enumerate}
\end{cor}
\begin{proof}
Observe that \autoref{lem_UL:common-subseq=using-ultrafilter} holds not only for two,
but for any finite number of sequences of real numbers.
Applying this together with the corresponding statements
for limits of convergent sequences of real numbers
implies the claim.
\end{proof}
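In contrast to usual limits, $\lim\nolimits_\omega$ depends on the choice of $\omega$ for divergent sequences;
the following standard example illustrates this:
\begin{rmk}
For $a_i = (-1)^i$, the accumulation points are $1$ and $-1$,
so $\lim\nolimits_\omega a_i \in \{-1,1\}$ by \autoref{lem_UL:ultrafilter_pick_cvgt_subsequence}.
More precisely, $\lim\nolimits_\omega a_i = 1$ if the set of even numbers has $\omega$-measure $1$,
and $\lim\nolimits_\omega a_i = -1$ otherwise.
\end{rmk}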
An ultralimit is a \myquote{limit space} assigned to a (pointed) sequence of metric spaces
by using a non-principal ultrafilter.
The construction of this ultralimit is related to Gro\-mov-Haus\-dorff\xspace convergence in the sense that
such a limit space is a sublimit in the pointed Gro\-mov-Haus\-dorff\xspace sense.
On the other hand, given any sublimit in the pointed Gro\-mov-Haus\-dorff\xspace sense,
there exists a non-principal ultrafilter
such that the corresponding ultralimit is exactly this sublimit.
This fact can be extended to a similar statement about finitely many different sequences
and corresponding sublimits coming from a common subsequence.
\begin{defn}[{\cite[Definition I.5.50]{bridson-haefliger}}]
Let $\omega$ be a non-principal ultrafilter on $\nn$,
$(X_i,d_i,p_i)$, $i \in \nn$, be pointed metric spaces and
\[
X_\omega := \{ [(x_i)_{i \in \nn}]
\mid x_i \in X_i~\xspace\textrm{and}\xspace~\sup\nolimits_{i \in \nn} d_i(x_i,p_i) < \infty\}
\]
where
\[(x_i)_{i \in \nn} \sim (y_i)_{i \in \nn}~\text{if and only if}~\lim\nolimits_\omega d_i(x_i,y_i) = 0.\]
Furthermore,
let $d_\omega([(x_i)_{i \in \nn}],[(y_i)_{i \in \nn}]) := \lim\nolimits_\omega d_i(x_i,y_i)$.
Then $(X_\omega, d_\omega)$ is a metric space,
called the \emph{ultralimit} of $(X_i,d_i,p_i)$ and denoted by $\lim\nolimits_\omega (X_i,d_i,p_i)$.
\end{defn}
\begin{rmk}
Let $\omega$ be a non-principal ultrafilter on $\nn$,
$(X_i,d_i,p_i)$, $i \in \nn$, be pointed metric spaces and $Y_i \subseteq X_i$.
The limit $(Y_\omega, d_{Y_\omega}) := \lim\nolimits_\omega (Y_i,d_i,p_i)$
is canonically a subset of $(X_\omega, d_{X_\omega}) := \lim\nolimits_\omega (X_i,d_i,p_i)$:
Obviously,
\begin{align*}
&\{(y_i)_{i \in \nn}
\mid y_i \in Y_i~\xspace\textrm{and}\xspace~\sup\nolimits_i d_i(y_i,p_i) < \infty\} \\
&\subseteq \{ (x_i)_{i \in \nn}
\mid x_i \in X_i~\xspace\textrm{and}\xspace~\sup\nolimits_i d_i(x_i,p_i) < \infty\}.
\end{align*}
Since the metric is the same on both $X_i$ and $Y_i$
and since the equivalence classes are only defined by using the ultrafilter and the metric,
$Y_\omega \subseteq X_\omega$.
By the same argument, the metrics coincide: For $y_i,y_i' \in Y_i$,
\begin{align*}
&d_{Y_\omega}([(y_i)_{i \in \nn}]_{Y_\omega},[(y_i')_{i \in \nn}]_{Y_\omega})
\\&= \lim\nolimits_\omega d_i(y_i,y_i')
\\&= d_{X_\omega}([(y_i)_{i \in \nn}]_{X_\omega},[(y_i')_{i \in \nn}]_{X_\omega}).
\end{align*}
\end{rmk}
\begin{lemma}[{\cite[Lemma I.5.53]{bridson-haefliger}}]
The ultralimit of a sequence of metric spaces is complete.
\end{lemma}
In order to prove the correspondence of sublimits and ultralimits,
first, compact metric spaces are investigated.
\begin{prop}\label{prop_UL:ultralimits_are_sublimits_cpt}
Let $\omega$ be a non-principal ultrafilter on $\nn$ and
$(X_i,d_i,p_i)$, $i \in \nn$, be pointed compact metric spaces
with compact ultralimit $(X_\omega,d_\omega)$
and define $p_\omega := [(p_i)_{i \in \nn}] \in X_\omega$.
Then $\lim\nolimits_\omega d_{\textit{GH}}((X_i,p_i),(X_\omega,p_\omega)) = 0$.
\end{prop}
\begin{proof}
The statement will be proven by using $\eps$-nets:
First, finite $\eps$-nets in $X_i$ will be fixed
and it will be proven that their ultralimit is a finite $\eps$-net in $X_\omega$.
Then the Gro\-mov-Haus\-dorff\xspace distance of these nets will be estimated.
Finally, the claim follows from the triangle inequality by letting $\eps \to 0$.
Fix $\eps > 0$.
For every $i \in \nn$,
fix a finite $\eps$-net $A_i^{\eps} = \{a_i^1,\dots,a_i^{n_i}\}$ in the compact space $X_i$
with $a_i^1 = p_i$,
i.e.~$d_i(a_i^k,a_i^l) \geq \eps$ for all $k \ne l$ and $X_i = \bigcup_{j=1}^{n_i} B_{\eps}(a_i^j)$.
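The finite $\eps$-nets fixed here can be produced by a simple greedy procedure in any finite metric space. The following Python sketch is purely illustrative (the grid in the unit square and the threshold $\eps = 0.3$ are arbitrary choices, not from the text): it builds a net that is $\eps$-separated and $\eps$-covering, the two properties required above.

```python
import itertools
import math

def greedy_eps_net(points, dist, eps):
    """Greedily build an eps-separated net: a point joins the net
    only if it is at distance >= eps from every point already chosen."""
    net = []
    for p in points:
        if all(dist(p, a) >= eps for a in net):
            net.append(p)
    return net

# Sample compact space: a finite grid in the unit square with the Euclidean metric.
points = [(i / 10, j / 10) for i, j in itertools.product(range(11), repeat=2)]
dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
eps = 0.3
net = greedy_eps_net(points, dist, eps)

# Separation: d(a, b) >= eps for distinct net points a, b.
assert all(dist(a, b) >= eps for a, b in itertools.combinations(net, 2))
# Covering: every point lies in the open eps-ball around some net point.
assert all(any(dist(p, a) < eps for a in net) for p in points)
```

Any point rejected by the greedy step is within $\eps$ of an already-chosen net point, which is exactly why the covering property holds automatically.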
Let $A_\omega^{\eps}$ be the ultralimit of these $A_i^{\eps}$,
i.e.
\[
A_\omega^{\eps}
= \{[(a_i)_{i \in \nn}] \mid \forall i \in \nn \, \exists 1 \leq j_i \leq n_i: a_i = a_i^{j_i}\}
\subseteq X_\omega,
\]
and let $p_\omega := [(p_i)_{i \in \nn}] \in A_\omega^{\eps}$.
Then $A_\omega^{\eps}$ is again a finite $\eps$-net in $X_\omega$:
Let $[(a_i^{k_i})_{i \in \nn}], [(a_i^{l_i})_{i \in \nn}] \in A_\omega^{\eps}$.
By definition,
\[[(a_i^{k_i})_{i \in \nn}] = [(a_i^{l_i})_{i \in \nn}]
\text{ if and only if }
\lim\nolimits_\omega d_i(a_i^{k_i},a_i^{l_i}) = 0.\]
Since $d_i(a_i^{k_i},a_i^{l_i}) = 0$ exactly for those $i$ with $k_i = l_i$
and $d_i(a_i^{k_i},a_i^{l_i}) \geq \eps$ otherwise,
this implies
\[[(a_i^{k_i})_{i \in \nn}] = [(a_i^{l_i})_{i \in \nn}]
\text{ if and only if }
\omega(\{i \in \nn \mid k_i = l_i \}) = 1.\]
In particular, for $[(a_i^{k_i})_{i \in \nn}] \ne [(a_i^{l_i})_{i \in \nn}]$,
\[d_{X_\omega}([(a_i^{k_i})_{i \in \nn}] , [(a_i^{l_i})_{i \in \nn}])
=\lim\nolimits_\omega d_i(a_i^{k_i},a_i^{l_i})
\geq \eps.\]
Furthermore, for arbitrary $[(x_i)_{i \in \nn}]$ there are $a_i^{j_i}$
such that $x_i \in B_{\eps}(a_i^{j_i})$.
Thus,
\begin{align*}
d_\omega([(x_i)_{i \in \nn}], [(a_i^{j_i})_{i \in \nn}])
= \lim\nolimits_\omega d_i(x_i, a_i^{j_i})
< \eps.
\end{align*}
This proves that $A_\omega^{\eps}$ is an $\eps$-net in $X_\omega$.
It remains to prove that $A_\omega^{\eps}$ is finite:
Assume it is not.
Then $\{B_{\eps}(p)\}_{p \in A_\omega^{\eps}}$ is an open cover of $X_\omega$,
and thus, has a finite subcover $X_\omega = \bigcup_{j=1}^k B_{\eps}(q_j)$
with $q_j \in A_\omega^{\eps}$.
Hence, for any $q \in A_\omega^{\eps} \setminus \{q_1,\dots,q_k\}$
there exists $q_j$ such that $q \in B_{\eps}(q_j)$.
This is a contradiction to $d_\omega(q,q_j) \ge \eps$.
Let $n_\omega < \infty$ denote the cardinality of $A_\omega^{\eps}$
and $I := \{i \in \nn \mid n_i = n_\omega\}$ be those indices
such that $A_i^{\eps}$ and $A_\omega^{\eps}$ have the same cardinality.
Then $\omega(I) = 1$:
Let $A_\omega^{\eps} = \{z_1,\dots,z_{n_\omega}\}$
and $z_k = [(a_i^{j_i^k})_{i \in \nn}]$
where $1 \leq j_i^k \leq n_i$ for each $1 \leq k \leq n_{\omega}$.
For $k \ne l$,
one has $1 = \omega(\{i \in \nn \mid j_i^k \ne j_i^l\})$.
Thus,
\begin{align*}
1
&= \omega\big(\bigcap\nolimits_{1 \leq k < l \leq n_\omega}
\{i \in \nn \mid j_i^k \ne j_i^l\}\big)\\
&= \omega(\{i \in \nn \mid \forall 1 \leq k < l \leq n_\omega : j_i^k \ne j_i^l\}) \\
&\geq \omega(\{i \in \nn \mid n_\omega \leq n_i\}) \\
&= \omega (I \cup J)
\end{align*}
where $J := \{i \in \nn \mid n_i > n_\omega\}$.
Assume $\omega(J) = 1$.
For all $1 \leq j \leq n_\omega + 1$, let
\[ q_i^j :=
\begin{cases}
a_i^j & \text{if } i \in J, \\
p_i & \text{if } i \notin J \\
\end{cases}
\]
and $\tilde{z}_j := [(q_i^j)_{i \in \nn}] \in A_\omega^{\eps}$.
By definition, for $k \ne l$, one has $q_i^k = q_i^l$ if and only if $i \notin J$.
Hence, if $k \ne l$, then
$\omega(\{i \in \nn \mid q_i^k = q_i^l\})
= \omega(\nn \setminus J)
= 1 - \omega(J)
= 0$.
Thus, $\tilde{z}_k \ne \tilde{z}_l$
and $\{\tilde{z}_1, \dots, \tilde{z}_{n_\omega+1}\} \subseteq A_\omega^{\eps}$,
hence, $n_\omega+1 \leq n_\omega$. This is a contradiction.
Therefore, $\omega (J) = 0$ and $\omega(I) = \omega(I \cup J) = 1$.
Similarly,
for all $1 \leq j \leq n_\omega$, let
\[ p_i^j :=
\begin{cases}
a_i^j & \text{if } i \in I, \\
p_i & \text{if } i \notin I \\
\end{cases}
\]
and $y_j := [(p_i^j)_{i \in \nn}] \in A_\omega^{\eps}$.
Analogously, $y_k = y_l$ if and only if $k = l$.
This implies $A_\omega^{\eps} = \{ y_1, \dots, y_{n_\omega} \} $.
In particular, $y_1 = p_\omega$.
For $1 \leq k < l \leq n_\omega$,
define
\begin{align*}
I^{kl}_\delta
:= &\{ i \in I \mid |d_\omega(y_k,y_l) - d_i(a_i^k, a_i^l)| < \delta \}
\\= &\{ i \in I \mid |d_\omega(y_k,y_l) - d_i(p_i^k, p_i^l)| < \delta \}.
\end{align*}
Since $d_\omega(y_k,y_l) = \lim\nolimits_\omega d_i(p_i^k, p_i^l)$ by definition,
$\omega(I^{kl}_\delta) = 1$ for any $\delta > 0$.
Therefore, $\lim\nolimits_\omega \delta_i^{kl} = 0$
for $\delta_i^{kl} := |d_\omega(y_k,y_l) - d_i(a_i^k, a_i^l)|$.
Thus, $\lim\nolimits_\omega \eps_i = 0$
where $\eps_i := \max\{ \delta_i^{kl} \mid 1 \leq k < l \leq n_\omega\}$ for $i \in I$
and $\eps_i := 0$ for $i \notin I$.
Let $i \in I$ be fixed
and define $f_{i} : A_{i}^{\eps} \to A_\omega^{\eps}$
and $g_{i} : A_\omega^{\eps} \to A_{i}^{\eps}$ by
\[ f_{i}(a_i^j) := y_j~\xspace\textrm{and}\xspace~g_{i}(y_j) = a_i^j\]
for $1 \leq j \leq n_{\omega}$.
In particular,
\[
f_{i}(p_i) = f_{i}(a_i^1) = y_1 = p_\omega
\quad\xspace\textrm{and}\xspace\quad
g_{i}(p_\omega) = g_{i}(y_1) = a_i^1 = p_i.
\]
Obviously, $f_{i} \circ g_{i} = \id_{A_\omega^{\eps}}$ and $g_{i} \circ f_{i} = \id_{A_{i}^{\eps}}$.
Further, for $1 \leq k < l \leq n_{\omega}$,
\begin{align*}
&|d_\omega(f_{i}(a_{i}^k), f_{i}(a_{i}^l)) - d_{i}(a_{i}^k, a_{i}^l)|
= |d_\omega(y_k, y_l) - d_{i}(a_{i}^k, a_{i}^l)|
= \delta_{i}^{kl}
\leq \eps_{i},
\intertext{and analogously,}
&|d_{i}(g_{i}(y_k), g_{i}(y_l)) - d_\omega(y_k,y_l)| \leq \eps_{i},
\end{align*}
i.e.~$(f_{i},g_{i}) \in \Isom{\eps_{i}}((A_{i}^{\eps},p_i),(A_\omega^{\eps},p_\omega))$.
Thus, $d_{\textit{GH}}((A_{i}^{\eps}, p_{i}), (A_\omega^{\eps}, p_\omega)) \leq 2 \eps_i$
for any $i \in I$.
For any compact metric space $(Z,d_Z)$ and any $\eps$-net $A \subseteq Z$, one has $Z = B_{\eps}(A)$, and thus
\begin{align*}
d_{\textit{H}}^{d_Z}(Z,A)
= \inf \{r > 0 \mid B_r(A) \supseteq Z\}
\leq \eps.
\end{align*}
Hence, for any $p \in A$, $d_{\textit{GH}}((A,p),(Z,p)) \leq d_{\textit{H}}^{d_Z}(Z,A) \leq \eps$.
Applying this general statement, for fixed $i \in I$ and $\eps > 0$,
\begin{align*}
&d_{\textit{GH}}((X_{i},p_{i}), (X_\omega, p_\omega)) \\
&\leq d_{\textit{GH}}((X_{i},p_{i}), (A_{i}^{\eps}, p_{i}))
\\&\quad + d_{\textit{GH}}((A_{i}^{\eps}, p_{i}), (A_\omega^{\eps}, p_\omega))
\\&\quad + d_{\textit{GH}}((A_\omega^{\eps}, p_\omega), (X_\omega, p_\omega)) \\
&\leq 2 \eps + 2 \eps_{i}.
\end{align*}
In particular, $\lim\nolimits_\omega d_{\textit{GH}}((X_{i},p_{i}), (X_\omega, p_\omega)) \leq 2 \eps$.
Since this holds for all $\eps > 0$,
\[\lim\nolimits_\omega d_{\textit{GH}}((X_{i},p_{i}), (X_\omega, p_\omega)) = 0.\qedhere\]
\end{proof}
\begin{cor}
Let $\omega$ be a non-principal ultrafilter on $\nn$.
If the ultralimit of compact metric spaces is compact, it is a sublimit in the pointed Gro\-mov-Haus\-dorff\xspace sense
which comes from a subsequence with index set whose $\omega$-measure is $1$.
\end{cor}
\begin{proof}
Let $(X_i,d_i,p_i)$, $i \in \nn$, be pointed compact metric spaces,
$(X_\omega,d_\omega)$ their compact ultralimit and $p_\omega = [(p_i)_{i \in \nn}]$.
By the previous proposition, \[\lim\nolimits_\omega d_{\textit{GH}}((X_i,p_i), (X_\omega, p_\omega)) = 0,\]
and by \autoref{lem_UL:ultrafilter_pick_cvgt_subsequence},
there exists a subsequence $(i_j)_{j \in \nn}$ of natural numbers satisfying
$\omega(\{i_j \mid j \in \nn\}) = 1$ such that
\[d_{\textit{GH}}((X_{i_j},p_{i_j}), (X_\omega, p_\omega)) \to 0 \textrm{ as } {j \to \infty}.\qedhere\]
\end{proof}
This result now gives a corresponding result for non-compact spaces.
\begin{prop}\label{prop_UL:ultralimits_are_sublimits_ncpt}
Let $\omega$ be a non-principal ultrafilter on $\nn$.
The ultralimit of a sequence of pointed proper length spaces
is a sublimit in the pointed Gro\-mov-Haus\-dorff\xspace sense
(which comes from a subsequence with index set of $\omega$-measure $1$).
Conversely,
the sublimit of a sequence of pointed proper length spaces in the pointed Gro\-mov-Haus\-dorff\xspace sense
is the ultralimit with respect to a non-prin\-ci\-pal ultrafilter.
\end{prop}
\begin{proof}
Let $(X_i,d_i,p_i)$, $i \in \nn$, be pointed proper length spaces,
$(X_\omega, d_\omega)$ the corresponding ultralimit
and $p_\omega := [(p_i)_{i \in \nn}] \in X_\omega$.
First it will be shown that an $r$-ball in the ultralimit is the ultralimit of $r$-balls.
Then applying the corresponding statement for compact sets proves the claim.
For $r > 0$,
let $X_\omega^r \subseteq X_\omega$ denote the ultralimit of $(\B_r^{X_i}(p_i), d_i, p_i)$.
This is a closed subset of $X_\omega$:
First, observe
\[X_\omega^r = \{[(q_i)_{i \in \nn}] \mid q_i \in X_i~\xspace\textrm{and}\xspace~d_i(q_i,p_i) \leq r\}.\]
Let $(z_n)_{n \in \nn}$ be a sequence in $X_\omega^r$ which converges to a limit $z \in X_\omega$.
Denote $z_n = [(q_i^n)_{i \in \nn}]$ and $z = [(q_i)_{i \in \nn}]$
where $q_i^n, q_i \in X_i$ with $d_i(q_i^n,p_i) \leq r$ for all $i,n \in \nn$
and $\sup_{i \in \nn} d_i(q_i,p_i) < \infty$.
Moreover, $d_\omega(z_n,z) = \lim\nolimits_\omega d_i(q_i^n,q_i) \to 0$ as $n \to \infty$.
For all $n \in \nn$,
$d_\omega(z_n,p_\omega) = \lim\nolimits_\omega d_i(q_i^n,p_i) \leq r$.
Hence,
\begin{align*}
d_\omega(z,p_\omega)
&\leq \lim_{n \to \infty} \big( d_\omega(z,z_n) + d_\omega(z_n,p_\omega) \big)
\leq r
\end{align*}
and $z \in X_\omega^r$. This proves that $X_\omega^r$ is closed.
In fact, $X_\omega^r = \B_r^{X_\omega}(p_\omega)$:
First, let $[(q_i)_{i\in\nn}] \in X_\omega^r \subseteq X_\omega$ be arbitrary.
Since
\[
d_\omega([(q_i)_{i\in\nn}],[(p_i)_{i\in\nn}])
= \lim\nolimits_\omega d_i(p_i,q_i)
\leq r,
\]
$[(q_i)_{i\in\nn}] \in \B_r^{X_\omega}(p_\omega)$.
Now let $[(q_i)_{i\in\nn}] \in B_r^{X_\omega}(p_\omega)$
and $I := \{i \in \nn \mid d_i(p_i,q_i) < r\}$.
Define
\[ \tilde{q}_i :=
\begin{cases}
q_i & \text{if } i \in I,\\
p_i & \text{if } i \notin I.
\end{cases}
\]
By definition, $[(\tilde{q}_i)_{i\in\nn}] \in X_\omega^r$.
Furthermore, $[(q_i)_{i\in\nn}] = [(\tilde{q}_i)_{i\in\nn}] \in X_\omega^r$:
Since $[(q_i)_{i\in\nn}] \in B_r^{X_\omega}(p_\omega)$, $0 \leq l:= \lim\nolimits_\omega d_i(q_i,p_i) < r$.
For $\delta := r-l > 0$,
\begin{align*}
1
&= \omega(\{ i \in \nn \mid |d_i(q_i,p_i) - l| < \delta \}) \\
&\leq \omega(\{ i \in \nn \mid d_i(q_i,p_i) < l + \delta = r \}) \\
&= \omega(I).
\end{align*}
Thus, for arbitrary $\eps > 0$,
\begin{align*}
\omega(\{ i \in \nn \mid d_i(q_i,\tilde{q}_i) < \eps\})
&\geq \omega(\{i \in \nn \mid q_i = \tilde{q}_i\}) \\
&= \omega(I)
= 1.
\end{align*}
Therefore, $\lim\nolimits_\omega d_i(q_i,\tilde{q}_i) = 0$
and $[(q_i)_{i\in\nn}] = [(\tilde{q}_i)_{i\in\nn}] \in X_\omega^r$.
Consequently,
$B_r^{X_\omega}(p_\omega) \subseteq X_\omega^r$.
Since $X_\omega^r$ is closed,
this proves
$\B_r^{X_\omega}(p_\omega) \subseteq X_\omega^r$, and hence, equality,
i.e.~$\B_r^{X_\omega}(p_\omega) = \lim\nolimits_\omega (\B_r^{X_i}(p_i), d_i, p_i)$.
For any $r > 0$
and $\eps_i^r := d_{\textit{GH}}((\B_r^{X_i}(p_i),p_i),( \B_r^{X_\omega}(p_\omega),p_\omega))$,
$\lim\nolimits_\omega \eps_i^r = 0$ by \autoref{prop_UL:ultralimits_are_sublimits_cpt}.
By \autoref{prop_UL:eps^(r_n)_n<=1/r_n--ultrafilter},
there exists $r_i>0$ with
\[
\lim\nolimits_\omega \frac{1}{r_i} = 0
\quad\xspace\textrm{and}\xspace\quad
\omega\Big(\Big\{i \in \nn \mid \eps_i^{r_i} \leq \frac{1}{r_i}\Big\}\Big) = 1.
\]
By \autoref{lem_UL:ultrafilter_pick_cvgt_subsequence},
there is $J = \{i_1 < i_2 < \dots \} \subseteq \nn$
such that $\omega(J) = 1$ and $r_{i_j} \to \infty$.
Let \[I := J \cap \Big\{i \in \nn \mid \eps_i^{r_i} \leq \frac{1}{r_i}\Big\}.\]
Then $\omega(I) = 1$ and $I = \{i_{j_1} < i_{j_2} < \dots\} \subseteq J$.
Thus, $r_{i_{j_l}} \to \infty$
and
\[
d_{\textit{GH}}((\B_{r_{i_{j_l}}}^{X_{i_{j_l}}}(p_{i_{j_l}}),p_{i_{j_l}}),
(\B_{r_{i_{j_l}}}^{X_\omega}(p_\omega),p_\omega))
= \eps_{i_{j_l}}^{r_{i_{j_l}}}
\leq \frac{1}{r_{i_{j_l}}}
\to 0
\]
as $l \to \infty$.
Now \autoref{cor:dgh_small_iff_eps_approx_noncompact} proves
$(X_{i_{j_l}},p_{i_{j_l}}) \to (X_\omega, p_\omega)$ in the pointed Gro\-mov-Haus\-dorff\xspace sense
where $\omega(\{i_{j_l} \mid l \in \nn \}) = 1$
and this finishes the proof of the first part.
The proof of the second statement
can be done completely analogously
to the one of \autoref{lem_UL:ultrafilter_pick_cvgt_subsequence}.
\end{proof}
\begin{lemma}\label{prop_UL:eps^(r_n)_n<=1/r_n--ultrafilter}
Let $\omega$ be a non-principal ultrafilter on $\nn$
and for every $r > 0$ let $(\eps_i^r)_{i \in \nn}$ be a sequence
such that $\lim\nolimits_\omega \eps_i^r = 0$.
Then there exists a sequence $(r_i)_{i \in \nn}$ of positive real numbers
such that $\lim\nolimits_\omega \frac{1}{r_i} = 0$
and $\omega(\{i \in \nn \mid \eps_i^{r_i} \leq \frac{1}{r_i} \}) = 1$.
\end{lemma}
\begin{proof}
For $i \in \nn$, let $R_i := \{r > 0 \mid \eps_i^r \leq \frac{1}{r}\}$.
The idea of this proof, similar to the one of \autoref{prop:eps^(r_n)_n<=h(1/r_n)},
is to find a sequence $r_i \in R_i$ with $r_i > i$ for a set of indices of $\omega$-measure $1$.
Since the $R_i$ need to be non-empty, let $I := \{i \in \nn \mid R_i \ne \emptyset\}$.
Due to $\lim\nolimits_\omega \eps_i^1 = 0$,
\begin{align*}
\omega(I)
= \omega\big(\big\{i \in \nn \mid \exists\,r > 0 : \eps_i^r \leq \frac{1}{r} \big\}\big)
\geq \omega(\{i \in \nn \mid \eps_i^1 \leq 1\})
= 1,
\end{align*}
i.e.~$\omega(I) = 1$.
Let $J := \{i \in \nn \mid \neg \exists\,C > 0 : R_i \subseteq [0,C]\}$ be
the indices of the unbounded sets.
In particular, $J \subseteq I$.
In the following, the cases of $\omega(J) = 0$ and $\omega(J) = 1$ will be distinguished.
In advance, observe that for sets of indices of $\omega$-measure $1$
the corresponding $R_i$ cannot have a uniform upper bound:
Let $A \subseteq \nn$ be any subset such that there exists $C > 0$
with $\bigcup_{i \in A} R_i \subseteq [0,C]$
and let $r > C$.
Then $i \in A$ implies $r \notin R_i$, i.e.~$\eps_i^r > \frac{1}{r}$.
Thus, $\omega(A) \leq \omega(\{i \in \nn \mid \eps_i^r > \frac{1}{r}\}) = 0$.
First, let $\omega(J) = 1$.
For $i \in J$, choose $r_i \in R_i \cap (i, \infty)$. For $i \in \nn \setminus J$, let $r_i := 1$.
Then
\[\omega\Big(\Big\{i \in \nn \mid \eps_i^{r_i} \leq \frac{1}{r_i}\Big\}\Big)
\geq \omega(\{i \in \nn \mid r_i\in R_i\})
\geq \omega(J) = 1.\]
For arbitrary $\eps > 0$, choose $N \in \nn$ with $\frac{1}{N} \leq \eps$.
For $i \in J$ with $i \geq N$,
\[\frac{1}{r_i} < \frac{1}{i} \leq \frac{1}{N} \leq \eps\]
and
\[\omega\Big(\Big\{i \in \nn \mid \frac{1}{r_i} \leq \eps \Big\}\Big)
\geq \omega(J \cap [N,\infty)) = 1.\]
Thus, $\lim\nolimits_\omega \frac{1}{r_i} = 0$ and $r_i$ has the desired properties.
Now let $\omega(J) = 0$.
For $i \in I \cap J^c$,
let $s_i := \sup R_i$ denote the least upper bound of $R_i$
and choose $r_i \in [\frac{s_i}{2},s_i] \cap R_i$.
For $i \in I^c \cup J$,
let $s_i := r_i := 1$.
Then
\[
\omega\Big(\Big\{i \in \nn \mid \eps_i^{r_i} \leq \frac{1}{r_i}\Big\}\Big)
\geq \omega(\{i \in \nn \mid r_i\in R_i\})
\geq \omega(I \cap J^c)
= 1.
\]
Let $\eps > 0$ and $K_{\eps} := \{i \in I \cap J^c \mid \frac{1}{s_i} > \eps\}$.
Then
\[
\bigcup_{i \in K_{\eps}} R_i
\subseteq \bigcup_{i \in K_{\eps}} [0,s_i]
\subseteq \Big[0,\frac{1}{\eps}\Big],
\]
and thus, by the argument above, $\omega(K_{\eps}) = 0$.
Then, using $\omega(I \cap J^c) = 1$,
\begin{align*}
\omega\big(\big\{i \in \nn \mid \frac{1}{s_i} \leq \eps \big\}\big)
&= 1 - \omega\big(\big\{i \in \nn \mid \frac{1}{s_i} > \eps \big\}\big) \\
&= 1 - \omega\big(\big\{i \in I \cap J^c \mid \frac{1}{s_i} > \eps \big\}\big) \\
&= 1 - \omega(K_{\eps})
= 1.
\end{align*}
Hence, $\lim\nolimits_\omega \frac{1}{s_i} = 0$, and since $\frac{1}{r_i} \leq \frac{2}{s_i}$, also $\lim\nolimits_\omega \frac{1}{r_i} = 0$, which proves the claim.
\end{proof}
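The diagonal choice in this lemma has an ordinary-limit analogue that can be checked concretely. In the sketch below, the family $\eps_i^r = r/i$ is a hypothetical stand-in (not from the text) satisfying $\eps_i^r \to 0$ for every fixed $r > 0$; choosing $r_i \approx \sqrt{i}$ makes $r_i \to \infty$ while keeping $\eps_i^{r_i} \leq 1/r_i$ for every index.

```python
import math

# Hypothetical family: eps(i, r) -> 0 as i -> infinity, for every fixed r > 0.
def eps(i, r):
    return r / i

# Diagonal choice: let r_i grow slowly enough that eps(i, r_i) <= 1/r_i.
def r_of(i):
    # r_i = floor(sqrt(i)) satisfies r_i**2 <= i, i.e. r_i/i <= 1/r_i.
    return max(1, math.isqrt(i))

for i in range(1, 10_000):
    assert eps(i, r_of(i)) <= 1 / r_of(i)

# 1/r_i -> 0 because r_i -> infinity.
assert 1 / r_of(10_000) <= 0.01
```

The ultrafilter version proved above replaces "for all large $i$" by "for a set of indices of $\omega$-measure $1$", which is why the bounded/unbounded case distinction for the sets $R_i$ is needed there.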
As for bounded sequences of real numbers,
investigating sublimits coming from the same subsequence is the same as investigating ultralimits.
\begin{lemma}
Let $(X_i,d_{X_i},p_i)$ and $(Y_i,d_{Y_i},q_i)$, $i \in \nn$, be pointed proper length spaces.
\begin{enumerate}
\item
Let $\omega$ be a non-principal ultrafilter on $\nn$.
Then there exists a subsequence $(i_j)_{j \in \nn}$ such that both
\begin{align*}
&(X_{i_j},p_{i_j}) \to \lim\nolimits_\omega (X_i,d_{X_i},p_i) \quad\xspace\textrm{and}\xspace\\
&(Y_{i_j},q_{i_j}) \to \lim\nolimits_\omega (Y_i,d_{Y_i},q_i)
\end{align*}
in the pointed Gro\-mov-Haus\-dorff\xspace sense
as $j \to \infty$.
\item
Let $(X,d_X,p)$ and $(Y,d_Y,q)$ be pointed length spaces
and $(i_j)_{j \in \nn}$ be a subsequence such that both
\begin{align*}
&(X_{i_j},p_{i_j}) \to (X,p) \quad\xspace\textrm{and}\xspace\\
&(Y_{i_j},q_{i_j}) \to (Y,q)
\end{align*}
in the pointed Gro\-mov-Haus\-dorff\xspace sense
as $j \to \infty$.
Then there exists a non-principal ultrafilter $\omega$ on $\nn$
such that there are isometries
\begin{align*}
&\lim\nolimits_\omega (X_i,d_{X_i},p_i) \cong (X,p)
\quad\xspace\textrm{and}\xspace\\
&\lim\nolimits_\omega (Y_i,d_{Y_i},q_i) \cong (Y,q).
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
Using \autoref{prop_UL:ultralimits_are_sublimits_ncpt},
the proof can be done completely analogously
to the one of \autoref{lem_UL:common-subseq=using-ultrafilter}.
\end{proof}
\bibliographystyle{amsalpha}
% arXiv:1703.09595 -- Notes on Pointed Gromov-Hausdorff Convergence (https://arxiv.org/abs/1703.09595)
% https://arxiv.org/abs/2208.07781
\title{Note on the pinned distance problem over finite fields}
\begin{abstract}
Let $\mathbb F_q$ be a finite field with an odd number $q$ of elements. In this article, we prove that if $E \subseteq \mathbb F_q^d$, $d\ge 2$, and $|E|\ge q$, then there exists a set $Y \subseteq \mathbb F_q^d$ with $|Y|\sim q^d$ such that for all $y\in Y$, the number of distances between the point $y$ and the set $E$ is comparable to the size of the finite field $\mathbb F_q$. As a corollary, we obtain that for each set $E\subseteq \mathbb F_q^d$ with $|E|\ge q$, there exists a set $Y\subseteq \mathbb F_q^d$ with $|Y|\sim q^d$ so that any set $E\cup \{y\}$ with $y\in Y$ determines a positive proportion of all possible distances. An averaging argument and the pigeonhole principle play a crucial role in proving our results.
\end{abstract}
\section{Introduction}
Let $\mathbb F_q^d$ be the $d$-dimensional vector space over the finite field $\mathbb F_q$ with $q$ elements.
In 2005, Iosevich and Rudnev \cite{IR07} initially posed and studied an analogue of the Falconer distance problem over finite fields.
They asked for the minimal exponent $\alpha>0$ such that if $E\subseteq \mathbb F_q^d$ and $|E|\ge C q^\alpha$ for a sufficiently large constant $C>0$, then
$$ |\Delta(E) | \ge c q$$
for some constant $0 < c \le 1,$ where $|\Delta(E)|$ denotes the cardinality of the distance set $\Delta(E)$, defined by
$$ \Delta(E)=\{||x-y||: x, y\in E\}.$$
Here we recall that $||\alpha||:=\sum\limits_{j=1}^d \alpha_j^2$ for $\alpha=(\alpha_1, \ldots, \alpha_d) \in \mathbb F_q^d.$\\
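For small parameters, the distance set $\Delta(E)$ can simply be enumerated. The Python sketch below is illustrative only (the choices $q=5$, $d=2$ and the random set $E$ are arbitrary, not from the text); it computes $||\alpha|| = \sum_j \alpha_j^2$ reduced mod $q$ and collects all pairwise distances.

```python
import random
from itertools import product

def norm(v, q):
    # ||v|| = sum of squares of the coordinates, computed in F_q = Z/qZ
    return sum(c * c for c in v) % q

def distance_set(E, q):
    # Delta(E) = { ||x - y|| : x, y in E }
    return {norm(tuple(a - b for a, b in zip(x, y)), q) for x in E for y in E}

q, d = 5, 2
random.seed(1)
E = random.sample(list(product(range(q), repeat=d)), 8)
delta = distance_set(E, q)
assert 0 in delta              # the pair x = y always contributes distance 0
assert delta <= set(range(q))  # distances are elements of F_q
```

The question below is then how large $|E|$ must be to force $|\Delta(E)|$ to be a positive proportion of $q$.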
By developing the discrete Fourier machinery, Iosevich and Rudnev \cite{IR07} proved that $|\Delta(E)|\sim q$ whenever $|E|\ge C q^{(d+1)/2}.$ We recall that $A \ll B$ means that $A\le CB$ for some constant $C>0$, which is independent of $q$, and we use $A\sim B$ if $A\ll B $ and $B\ll A.$ The authors in \cite{HIKR10} showed that the exponent $(d+1)/2$ is optimal for
all odd dimensions $d\ge 3$ except for the cases when $-1$ is not a square and $d=4k-1$ for $k\in \mathbb N.$
However, in any other cases including even dimensions $d\ge 2,$ it has been conjectured by Iosevich and Rudnev \cite{IR07} that in order to have a positive proportion of all distances, the exponent $(d+1)/2$ can be improved to $d/2.$
\begin{conjecture} [Iosevich-Rudnev's Conjecture] Let $E\subseteq \mathbb F_q^d.$
Suppose that $d\ge 2$ is even or $d, q \equiv 3 \mod{4}.$ Then if $|E|\ge C q^{d/2}$ for a sufficiently large constant $C>0$, we have
$|\Delta(E)|\sim q.$
\end{conjecture}
Iosevich-Rudnev's Conjecture is still open and even the threshold $(d+1)/2$ has not been improved except for two dimensions.
In the case of $d=2$ over general finite fields, the authors in \cite{CEHIK09} obtained the exponent $4/3$, which was the first result improving on the exponent $(d+1)/2.$ This result was obtained by applying restriction estimates for circles in the plane.
More precisely, they proved the following result with an explicit constant.
\begin{theorem} [\cite{CEHIK09}]\label{wolffin2d} Let $E$ be a subset of ${\mathbb F}_q^2$ with $|E|\ge q^{4/3}.$ Then the following statements hold:
\begin{enumerate}
\item If $q\equiv 3 \mod{4},$ then $|\Delta(E)|\ge \frac{q}{1+\sqrt{3}}.$
\item If $q \equiv 1 \mod{4},$ then $|\Delta(E)|\ge C_{q} q,$ where the constant $C_q$ is defined by
$$C_q:= \frac{\left(1-2q^{-1}\right)^2}{1+\sqrt{3}-\sqrt{3}q^{-2/3}}.$$
\end{enumerate}
\end{theorem}
Notice that $C_q >0$ for all $q\ge 3,$ and $C_q$ converges to $\frac{1}{1+\sqrt{3}}$ as $q\to \infty.$ Since $C_q$ is positive for each $q\ge 3$ and converges to a positive limit, we may choose a constant $c>0,$ independent of $q,$ such that $C_q \ge c >0.$ From this observation, the following corollary is a direct consequence of Theorem \ref{wolffin2d}.
\begin{corollary}[\cite{CEHIK09}] Suppose that $E \subseteq \mathbb F_q^2$ with $|E|\ge q^{4/3}.$ Then we have
$$ |\Delta(E)|\sim q.$$
\end{corollary}
Using a group action approach, Bennett, Hart, Iosevich, Pakianathan, and Rudnev \cite{BHIPR} provided an alternative proof for the exponent $4/3$ in the above corollary.\\
As a stronger version of the Falconer distance problem, the pinned distance problem over finite fields has also been studied.
Given $E \subseteq \mathbb F_q^d, d\ge 2,$ and $y\in \mathbb F_q^d,$ the pinned distance set with pin $y$, denoted by $\Delta_y(E),$ is defined by
$$ \Delta_y(E)=\{||x-y||: x\in E\}.$$
Chapman, Erdo\~{g}an, Hart, Iosevich, and Koh \cite{CEHIK09} showed that the exponent $(d+1)/2$ due to Iosevich and Rudnev holds true for pinned distance sets.
More precisely they proved the following.
\begin{theorem}
[\cite{CEHIK09}] \label{CEHIK} Let $E\subseteq \mathbb F_q^d, d\ge 2.$ If $|E|\ge q^{\frac{d+1}{2}},$ then there exists a subset $E'$ of $E$ with $|E'|\sim |E|$ so that for every $y\in E'$, we have
$$ |\Delta_y(E)|\sim q.$$
\end{theorem}
As seen in the conjecture for the Falconer distance problem, the exponent $(d+1)/2$ cannot be improved except for the cases when $d, q\equiv 3 \mod{4}$ or $d\ge 2$ is even.
However, in those cases it has been believed that $d/2$ is the best possible exponent for pinned distance sets.
As partial evidence for this prediction, the $4/3$ exponent result was extended to the pinned distance sets in $\mathbb F_q^2$ by Hanson, Lund, and Roche-Newton \cite{HLR16}, who successfully performed the bisector energy estimate.
\begin{theorem}[\cite{HLR16}] Let $E\subseteq \mathbb F_q^2.$ If $|E|\ge q^{4/3}$, then the conclusion of Theorem \ref{CEHIK} holds.
\end{theorem}
When $q$ is prime, the exponent $4/3$ has been improved to $5/4$ by Murphy, Petridis, Pham, Rudnev, and Stevenson \cite{MPPRS}.
\begin{theorem} [\cite{MPPRS}] Let $q$ be prime. Then if $E \subseteq \mathbb F_q^2$ with $|E|\ge q^{\frac{5}{4}}$, we have
$$\max\limits_{y\in E} |\Delta_y(E)|\sim q.$$
\end{theorem}
Despite researchers' efforts, the conjectured exponent $d/2$ has not been proven. It is unlikely that one can establish the conjecture by using the known techniques.
Moreover, there is very little evidence to support that the conjecture is true.\\
The main purpose of this paper is not to derive an improved exponent for the distance problem, but to show that the distance conjecture is satisfied with very high probability when the pin is chosen at random.
\subsection{The statement of main results}
Our main theorem is as follows.
\begin{theorem} \label{main} Let $E\subseteq \mathbb F_q^d.$ Then given $a>1,$ there exists $Y\subseteq \mathbb F_q^d$ with $|Y|\ge \frac{a-1}{a} q^d$ such that for all $y\in Y$,
$$ |\Delta_y(E)|\ge \min\left\{\frac{q}{2a},~\frac{|E|}{2a}\right\}.$$
\end{theorem}
The following result is a direct consequence of Theorem \ref{main}.
\begin{corollary}
Suppose that $E\subseteq \mathbb F_q^d, d\ge 2,$ with $|E|\ge q.$ Then for any $a>1$, there exists $Y\subseteq \mathbb F_q^d$ with $|Y|\ge \frac{a-1}{a} q^d$ so that for all $y\in Y,$ we have
$$ |\Delta(E\cup \{y\})|\ge \frac{q}{2a}.$$
\end{corollary}
\begin{proof} Since $|E|\ge q,$ we have
$$ \min\left\{ \frac{q}{2a}, ~ \frac{|E|}{2a}\right\}=\frac{q}{2a}.$$
In addition, note that for all $y\in \mathbb F_q^d,$ we have $ |\Delta(E\cup \{y\})|\ge |\Delta_y(E)|.$
Hence, the statement of the corollary follows immediately from Theorem \ref{main}.
\end{proof}
\bibliographystyle{amsplain}
\section{Proof of main result (Theorem \ref{main})}
We begin with the standard counting argument as in \cite{CEHIK09}.
To find a lower bound of the cardinality of the $y$-pinned distance set $\Delta_y(E)$, we consider the $y$-pinned counting function $\nu_y: \mathbb F_q \to \mathbb N \cup \{0\},$ which maps an element $t$ in $\mathbb F_q$ to the number of elements $x$ in $E$ such that $||x-y||=t.$ In other words, for $y\in \mathbb F_q^d, t\in \mathbb F_q,$ we have
$$\nu_y(t)=\sum_{x\in E: ||x-y||=t} 1.$$
Since $|E|^2=\left(\sum_{t\in \Delta_y(E)} \nu_y(t) \right)^2, $ it follows from the Cauchy-Schwarz inequality that
\begin{equation}\label{PinForm} |\Delta_y(E)|\ge \frac{|E|^2} {\sum_{t\in \mathbb F_q} \nu_y^2(t)}.\end{equation}
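The Cauchy--Schwarz bound \eqref{PinForm} can be verified numerically for every pin at once. In the sketch below, the parameters $q=5$, $d=2$ and the random set $E$ are arbitrary illustrative choices; the code computes the counting function $\nu_y$ for each $y \in \mathbb F_q^d$ and checks both $\sum_t \nu_y(t) = |E|$ and $|\Delta_y(E)| \cdot \sum_t \nu_y^2(t) \ge |E|^2$.

```python
import random
from itertools import product

q, d = 5, 2
random.seed(2)
points = list(product(range(q), repeat=d))
E = random.sample(points, 8)

def norm(v):
    return sum(c * c for c in v) % q

def nu(y):
    # counts[t] = nu_y(t) = #{x in E : ||x - y|| = t}
    counts = [0] * q
    for x in E:
        counts[norm(tuple(a - b for a, b in zip(x, y)))] += 1
    return counts

for y in points:
    counts = nu(y)
    pinned = sum(1 for c in counts if c > 0)   # |Delta_y(E)|
    assert sum(counts) == len(E)               # sum_t nu_y(t) = |E|
    # Cauchy-Schwarz in integer form: |Delta_y(E)| * sum nu^2 >= |E|^2
    assert pinned * sum(c * c for c in counts) >= len(E) ** 2
```

The strategy of the paper is to control $\sum_t \nu_y^2(t)$ on average over $y$, so that \eqref{PinForm} yields a lower bound on $|\Delta_y(E)|$ for most pins.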
\subsection{Key lemmas}
The average of $\sum_{t\in \mathbb F_q} \nu_y^2(t)$ over $y$ in $\mathbb F_q^d$ is explicitly given as follows:
\begin{lemma}\label{AVPin} Let $E\subseteq \mathbb F_q^d.$ Then we have
$$ \frac{1}{q^d} \sum_{y\in \mathbb F_q^d} \sum_{t\in \mathbb F_q} \nu_y^2(t) = \frac{|E|^2}{q} + \frac{q-1}{q} |E|.$$
\end{lemma}
\begin{proof}
By the definition of the $y$-pinned counting function $\nu_y(t)$, we have for each $y\in \mathbb F_q^d,$
$$ \sum_{t\in \mathbb F_q} \nu_y^2(t) = \sum_{x,z\in E: ||x-y||=||z-y||} 1.$$
Hence, the average of it over $y\in \mathbb F_q^d$ is given as follows:
\begin{align}\label{Sib} \frac{1}{q^d} \sum_{y\in \mathbb F_q^d} \sum_{t\in \mathbb F_q} \nu_y^2(t)&=\frac{1}{q^d} \sum_{x, z\in E: x=z} \sum_{y\in \mathbb F_q^d: ||x-y||=||z-y||} 1+\frac{1}{q^d} \sum_{x, z\in E: x\ne z} \sum_{y\in \mathbb F_q^d: ||x-y||=||z-y||} 1\\ \nonumber
&=|E| +\frac{1}{q^d} \sum_{x, z\in E: x\ne z} \sum_{y\in \mathbb F_q^d: ||x-y||=||z-y||} 1.\end{align}
Now, we notice that for $x,z\in E$ with $x\ne z$, we have
\begin{equation}\label{HPEq}\sum_{y\in \mathbb F_q^d: ||x-y||=||z-y||} 1 = q^{d-1}.\end{equation}
In fact, since $x\ne z$, the quantity $\sum_{y\in \mathbb F_q^d: ||x-y||=||z-y||} 1$ is the number of elements of the hyperplane which bisects the line segment joining $x$ and $z$. Alternatively, we can prove this rigorously by using finite field Fourier analysis. To see this, let $\chi$ denote a nontrivial additive character of $\mathbb F_q.$ Then by the orthogonality of $\chi$, we see that if $x\ne z,$ then
\begin{align*} \sum_{y\in \mathbb F_q^d: ||x-y||=||z-y||} 1
&= q^{-1} \sum_{y\in \mathbb F_q^d} \sum_{s\in \mathbb F_q} \chi(s (||x-y||-||z-y||))\\
&=q^{d-1} + q^{-1} \sum_{y\in \mathbb F_q^d} \sum_{s\ne 0} \chi(s (||x-y||-||z-y||)).\end{align*}
Applying the orthogonality of $\chi$ to the sum over $y$, we see that the second term above is zero since $\chi(s (||x-y||-||z-y||))= \chi(-2s (x-z)\cdot y) \chi(s(||x||-||z||))$ and $-2s(x-z)$ is not the zero vector, as $q$ is odd. Hence, the equation \eqref{HPEq} holds.
Finally, combining the two estimates \eqref{Sib} and \eqref{HPEq}, we obtain the desired estimate.
\end{proof}
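Since the identity in Lemma \ref{AVPin} is exact for odd $q$, it can be confirmed by brute force. In the sketch below, the parameters $q=5$, $d=2$ and the random set $E$ are arbitrary illustrative choices; the identity is checked in exact integer arithmetic by multiplying both sides by $q^d$.

```python
import random
from itertools import product

q, d = 5, 2                      # q an odd prime, so F_q = Z/qZ
random.seed(0)
points = list(product(range(q), repeat=d))
E = random.sample(points, 9)

def norm(v):
    return sum(c * c for c in v) % q

def nu(y, t):
    # nu_y(t) = #{x in E : ||x - y|| = t}
    return sum(1 for x in E if norm(tuple(a - b for a, b in zip(x, y))) == t)

total = sum(nu(y, t) ** 2 for y in points for t in range(q))
# (1/q^d) * total == |E|^2/q + (q-1)/q * |E|, i.e. in integers:
assert total == q ** (d - 1) * (len(E) ** 2 + (q - 1) * len(E))
```

Note that the identity holds for every set $E$, not just random ones: the diagonal pairs $x=z$ contribute $|E| \cdot q^d$ and each off-diagonal pair contributes the hyperplane count $q^{d-1}$ from \eqref{HPEq}.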
The following result can be obtained by the pigeonhole principle together with Lemma \ref{AVPin}.
\begin{lemma}\label{CorPinForm} Let $E\subseteq \mathbb F_q^d.$ Then for any $a>1,$ there exists $Y\subseteq \mathbb F_q^d$ with $|Y|\ge \frac{a-1}{a} q^d$ such that for every $y\in Y$,
$$ \sum_{t\in \mathbb F_q} \nu_y^2(t) \le \frac{a}{q} |E|^2 + \frac{a(q-1)}{q} |E|.$$
\end{lemma}
\begin{proof}
Let us fix $a>1.$ Define
$$Y=\left\{y\in \mathbb F_q^d: \sum_{t\in \mathbb F_q} \nu_y^2(t) \le \frac{a}{q} |E|^2 + \frac{a(q-1)}{q} |E|\right\}.$$
To complete the proof, it remains to show that
$$|Y|\ge \frac{a-1}{a} q^d.$$
By contradiction, let us assume that
\begin{equation}\label{False} |Y| < \frac{a-1}{a} q^d.\end{equation}
It is clear that
\begin{equation}\label{Pige1} \mathbb F_q^d \setminus Y=\left\{y\in \mathbb F_q^d: \sum_{t\in \mathbb F_q} \nu_y^2(t) > \frac{a}{q} |E|^2 + \frac{a(q-1)}{q} |E|\right\}.\end{equation}
We also notice that for all $y\in \mathbb F_q^d,$
\begin{equation} \label{Pige2} \sum_{t\in \mathbb F_q} \nu_y^2(t) \ge \sum_{t\in \mathbb F_q} \nu_y(t) = |E|.\end{equation}
Now by Lemma \ref{AVPin}, it follows that
\begin{equation}\label{LemA}\frac{1}{q^d} \sum_{y\in \mathbb F_q^d} \sum_{t\in \mathbb F_q} \nu_y^2(t) = \frac{|E|^2}{q} + \frac{q-1}{q} |E|.\end{equation}
However, we can also estimate it as follows. Using \eqref{Pige1} and \eqref{Pige2}, we have
\begin{align*}\frac{1}{q^d} \sum_{y\in \mathbb F_q^d} \sum_{t\in \mathbb F_q} \nu_y^2(t)
&= \frac{1}{q^d} \sum_{y\in Y} \sum_{t\in \mathbb F_q} \nu_y^2(t) + \frac{1}{q^d} \sum_{y\in \mathbb F_q^d\setminus Y} \sum_{t\in \mathbb F_q} \nu_y^2(t)\\
& > \frac{1}{q^d} |Y| |E| + \frac{1}{q^d}(q^d-|Y|) \left(\frac{a}{q} |E|^2 + \frac{a(q-1)}{q} |E|\right)\\
& = \frac{a|E|^2}{q} + \frac{a(q-1)|E|}{q} + \left(\frac{|E|}{q^d}- \frac{a|E|^2}{q^{d+1}} -\frac{a(q-1)|E|}{q^{d+1}}\right) |Y|. \end{align*}
Since $a>1$, in the third term above, the coefficient of $|Y|$ is negative. Hence, we can combine the above estimate with \eqref{False} to deduce that
$$ \frac{1}{q^d} \sum_{y\in \mathbb F_q^d} \sum_{t\in \mathbb F_q} \nu_y^2(t)
> \frac{a|E|^2}{q} + \frac{a(q-1)|E|}{q} + \left(\frac{|E|}{q^d}- \frac{a|E|^2}{q^{d+1}} -\frac{a(q-1)|E|}{q^{d+1}}\right) \left(\frac{a-1}{a} q^d \right).$$
Simplifying the right-hand side of this inequality, we get
$$ \frac{1}{q^d} \sum_{y\in \mathbb F_q^d} \sum_{t\in \mathbb F_q} \nu_y^2(t) >\frac{|E|^2}{q} + \frac{q-1}{q} |E| + \frac{a-1}{a} |E|,$$
which contradicts \eqref{LemA}, since the extra term $\frac{a-1}{a}|E|$ is strictly positive for $a>1$ and $E\neq\emptyset$.
\end{proof}
\subsection{Proof of Theorem \ref{main}}
Combining \eqref{PinForm} and Lemma \ref{CorPinForm}, for every $y$ in a set $Y\subseteq \mathbb F_q^d$ with $|Y|\ge \frac{a-1}{a}q^d$ we get the required result:
$$ |\Delta_y(E)|\ge \frac{|E|^2}{\frac{a}{q} |E|^2 + \frac{a(q-1)}{q} |E|} \ge \min\left\{ \frac{q}{2a}, ~ \frac{q|E|}{2a(q-1)} \right\}\ge \min\left\{ \frac{q}{2a}, ~ \frac{|E|}{2a}\right\}, $$
where the middle inequality follows by comparing $|E|$ with $q-1$ in the denominator $\frac{a}{q}|E|\left(|E|+q-1\right)$.
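As a sanity check, the identity \eqref{LemA} and both bounds above can be verified by brute force on a small example. The Python sketch below assumes the standard distance $\|x-y\|=\sum_{i=1}^d (x_i-y_i)^2$ and counting function $\nu_y(t)=|\{x\in E:\|x-y\|=t\}|$ (defined in earlier sections not shown in this excerpt), and checks the averaging identity, the pigeonhole bound with $a=2$, and the pinned-distance bound over $\mathbb F_3^2$:

```python
import itertools
import random
from fractions import Fraction

q, d = 3, 2                                   # small enough for an exhaustive check
points = list(itertools.product(range(q), repeat=d))
random.seed(0)
E = random.sample(points, 5)                  # a random 5-point set in F_3^2

def dist(x, y):
    # the finite-field "distance" ||x - y|| = sum_i (x_i - y_i)^2 in F_q
    return sum((a - b) ** 2 for a, b in zip(x, y)) % q

def second_moment(y):
    # sum_t nu_y(t)^2, where nu_y(t) = #{x in E : ||x - y|| = t}
    nu = [0] * q
    for x in E:
        nu[dist(x, y)] += 1
    return sum(v * v for v in nu)

# The averaging identity (LemA), which is exact for odd q:
avg = Fraction(sum(second_moment(y) for y in points), q ** d)
assert avg == Fraction(len(E) ** 2, q) + Fraction((q - 1) * len(E), q)

# Lemma CorPinForm with a = 2: at least (a-1)/a of all y satisfy the bound
a = 2
bound = Fraction(a * len(E) ** 2, q) + Fraction(a * (q - 1) * len(E), q)
Y = [y for y in points if second_moment(y) <= bound]
assert len(Y) >= Fraction(a - 1, a) * q ** d

# The pinned-distance bound: every y in Y determines many distinct distances
for y in Y:
    assert len({dist(x, y) for x in E}) >= min(Fraction(q, 2 * a), Fraction(len(E), 2 * a))
```

Since the averaging identity is exact for odd $q$, the check uses exact rational arithmetic rather than floating point.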
| {
"timestamp": "2022-08-17T02:15:19",
"yymm": "2208",
"arxiv_id": "2208.07781",
"language": "en",
"url": "https://arxiv.org/abs/2208.07781",
"abstract": "Let F_q be a finite field with odd q elements. In this article, we prove that if E \\subseteq \\mathbb F_q^d, d\\ge 2, and |E|\\ge q, then there exists a set Y \\subseteq \\mathbb F_q^d with |Y|\\sim q^d$ such that for all y\\in Y, the number of distances between the point y and the set E is similar to the size of the finite field \\mathbb F_q. As a corollary, we obtain that for each set E\\subseteq \\mathbb F_q^d with |E|\\ge q, there exists a set Y\\subseteq \\mathbb F_q^d with |Y|\\sim q^d so that any set E\\cup \\{y\\} with y\\in Y determines a positive proportion of all possible distances. An averaging argument and the pigeonhole principle play a crucial role in proving our results.",
"subjects": "Number Theory (math.NT)",
"title": "Note on the pinned distance problem over finite fields",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9875683513421315,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7096610736032993
} |
https://arxiv.org/abs/0812.0456 | The existence of thick triangulations -- an "elementary" proof | We provide an alternative, simpler proof of the existence of thick triangulations for noncompact $\mathcal{C}^1$ manifolds. Moreover, this proof is simpler than the original one given in \cite{pe}, since it mainly uses tools of elementary differential topology. The role played by curvatures in this construction is also emphasized. | \section{Introduction}
The existence of so-called ``thick'' (or ``fat'') triangulations
is important both in Differential Geometry, where it plays a
crucial role in the computation of curvatures for piecewise-flat
approximations of smooth Riemannian manifolds, with applications
to Regge Calculus (see \cite{reg:61}), and in Geometric Function
Theory, mainly in the construction of {\it quasimeromorphic
mappings} (see \cite{ms2}, \cite{tu}, \cite{pe}, \cite{s1},
\cite{s4}, \cite{s3}).
Thick triangulations also play an important role in many
applications, mainly in Computational Geometry, Computer Graphics,
Image Processing and related fields (see \cite{ab}, \cite{bcer}, \cite{cdr}, \cite{e}, \cite{pa}, \cite{saz}, to name just a few).
Recall that ``thick'' or ``fat'' triangulations are defined as
follows:
\begin{defn} Let $\tau \subset \mathbb{R}^n$ ; $0 \leq k \leq n$ be a $k$-dimensional simplex.
The {\it thickness} $\varphi$ of $\tau$ is defined as being:
\begin{equation}
\varphi = \varphi(\tau) = \inf_{\substack{\sigma < \tau \\ \dim\sigma = j}} \frac{Vol_j(\sigma)}{diam^{j}\,\sigma}
\end{equation}
The infimum is taken over all the faces of $\tau$, $\sigma < \tau$,
and $Vol_{j}(\sigma)$ and $diam\,\sigma$ stand for the Euclidean
$j$-volume and the diameter of $\sigma$ respectively. (If
$dim\,\sigma = 0$, then $Vol_{j}(\sigma) = 1$, by convention.)
\\ A simplex $\tau$ is $\varphi_0${\it-thick}, for some $\varphi_0 > 0$, if $\varphi(\tau) \geq \varphi_0$. A triangulation (of a submanifold of $\mathbb{R}^n$) $\mathcal{T} = \{ \sigma_i \}_{i\in \bf I}$ is
$\varphi_0${\it-thick} if all its simplices are $\varphi_0$-thick. A
triangulation $\mathcal{T} = \{ \sigma_i \}_{i\in \bf I }$ is {\it
thick} if there exists $\varphi_0 > 0$ such that all its
simplices are $\varphi_0${\it-thick}.
\end{defn}
The definition above is the one introduced in \cite{cms}. For some
different, yet equivalent definitions of thickness, see \cite{ca1},
\cite{ca2}, \cite{mun}, \cite{pe}, \cite{tu}.
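To make the definition concrete, the thickness of a simplex given by vertex coordinates can be computed directly: the $j$-volume of a face is $\sqrt{\det G}/j!$, where $G$ is the Gram matrix of the face's edge vectors. The following Python sketch is only an illustration of the definition above (it is not taken from any of the cited sources):

```python
import numpy as np
from itertools import combinations
from math import factorial

def face_volume(pts):
    # j-dimensional volume of the simplex with vertex rows pts, via the Gram determinant
    B = (pts[1:] - pts[0]).astype(float)      # j x n matrix of edge vectors
    j = B.shape[0]
    G = B @ B.T                               # Gram matrix of the edges
    return np.sqrt(max(np.linalg.det(G), 0.0)) / factorial(j)

def diameter(pts):
    return max(np.linalg.norm(p - q) for p, q in combinations(pts, 2))

def thickness(vertices):
    # infimum of Vol_j(sigma) / diam(sigma)^j over the faces sigma of the simplex;
    # 0-dimensional faces contribute 1 by the convention above, so start from phi = 1
    verts = [np.asarray(v, dtype=float) for v in vertices]
    phi = 1.0
    for j in range(1, len(verts)):
        for face in combinations(verts, j + 1):
            f = np.array(face)
            phi = min(phi, face_volume(f) / diameter(f) ** j)
    return phi
```

For the right triangle with vertices $(0,0),(1,0),(0,1)$ this evaluates to $1/4$ (up to floating point): the binding face is the triangle itself, with area $1/2$ and squared diameter $2$. A degenerate (collinear) simplex gets thickness $0$.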
For $\mathcal{C}^\infty$ Riemannian manifolds without boundary,
the existence of thick triangulations has been proved by Peltonen \cite{pe}. This result was
extended by the first author to include manifolds of lower
differentiability class with boundary \cite{s2} and to a large class
of orbifolds \cite{s3}.
Peltonen's proof is based on the construction of an exhaustion of
the manifold using a delicate curvature-based argument, both to
decide the ``size'' of the compact ``pieces'' (i.e. of the elements
of the exhaustion) and to choose the mesh of the triangulation (see
\cite{pe} and also Section 2). The technique used in \cite{s2} is
much more elementary, using only the Differential Topology apparatus
(and results) of \cite{mun}. This discrepancy between methods
creates a kind of ``esthetic asymmetry'' that naturally gives rise
to the following question:
\begin{quest}
Is it possible to prove the existence of thick triangulations for
non-compact manifolds using only techniques of Elementary
Differential Topology?
\end{quest}
We shall prove that the answer to the question above is positive by
showing that the ``meshing'' technique of thick triangulations
developed in \cite{s2} allows us to discard the curvature
considerations of the original proof.
However, in discarding the curvature-related information, one also
loses geometric intuition and, with it, any possibility of applying
the technique in any non-trivial, concrete case. Hence, the next
question ensues immediately:
\begin{quest}
Can one recover the Differential Geometric information (i.e.
curvatures) from the constructed $PL$ triangulation?
\end{quest}
We show that, again, the answer is affirmative, and it follows from
the results of \cite{cms}. Moreover, we indicate how this approach
can also be simplified using tools that may be considered more
``elementary''.
The remainder of the paper is organized as follows: In the
next Section we briefly sketch, for the benefit of the reader and
for the sake of the paper's self-containment, the main steps in the
proofs of the main results in \cite{pe} and \cite{s2}. In Section 3,
we show how our result in \cite{s2} allows us to give a simpler,
``elementary'' proof of Peltonen's theorem. Finally, in the last
Section, we discuss the role played by curvature in our simplified
proof and indicate how, using our method, one still can recapture
the Differential Geometric information encoded in Peltonen's proof.
\section{Background}
We bring below a very brief sketch of the methods used in proving
the existence of thick triangulations. We concentrate mainly on
those aspects that are pertinent to our present study.
\subsection{Open Riemannian Manifolds}
First, a number of necessary definitions:
Let $M^n$ denote an $n$-dimensional complete Riemannian manifold,
and let $M^n$ be isometrically embedded into $\mathbb{R}^\nu$
(the existence of such a $\nu$ is guaranteed by Nash's Embedding Theorem -- see, e.g.
\cite{pe}).
Let $\mathbb{B}^\nu(x,r) = \{y \in \mathbb{R}^\nu\,|\, d_{eucl}(x,y) <
r\}$; $\partial\mathbb{B}^\nu(x,r) = \mathbb{S}^{\nu-1}(x,r)$. If $x
\in M^n$, let $\sigma^n(x,r) = M^n \cap \mathbb{B}^\nu(x,r)$,
$\beta^n(x,r) = exp_x\big(\mathbb{B}^n(0,r)\big)$, where: $exp_x$
denotes the exponential map: $exp_x:T_x(M^n) \rightarrow M^n$, and
where $\mathbb{B}^n(0,r) \subset T_x\big(M^n\big)$,
$\mathbb{B}^n(0,r) = \{y \in \mathbb{R}^n\,|\,d_{eucl}(y,0) < r\}$.
The following definitions generalize in a straightforward manner classical ones used for surfaces in
$\mathbb{R}^3$:
\begin{defn}
\begin{enumerate}
\item $\mathbb{S}^{\nu-1}(x,r)$ is {\em tangent} to $M^n$ at $x\in M^n$ iff there exists $\mathbb{S}^n(x,r) \subset
\mathbb{S}^{\nu-1}(x,r)$, such that $T_x(\mathbb{S}^n(x,r)) \equiv
T_x(M^n)$.
\item Let $l \subset \mathbb{R}^\nu$ be a line, then $l$ is {\em secant} to $X \subset M^n$ iff $|\,l \cap X| \geq 2$.
\end{enumerate}
\end{defn}
\begin{defn}
\begin{enumerate}
\item $\mathbb{S}^{\nu-1}(x,\rho)$ is an {\em osculatory sphere} at $x \in M^n$ iff:
\begin{enumerate}
\item $\mathbb{S}^{\nu-1}(x,\rho)$ is tangent at x;
\\ and
\item $\mathbb{B}^n(x,\rho) \cap M^n = \emptyset$.
\end{enumerate}
\item Let $X \subset M^n$. The number $\omega = \omega_X = \sup\{\rho > 0\,|\, \mathbb{S}^{\nu-1}(x,\rho) \; {\rm osculatory} \\{\rm at\; any}\; x \in
X\}$ is called the {\em maximal osculatory} ({\em tubular}) {\em radius} at $X$.
\end{enumerate}
\end{defn}
\begin{rem}
There exists an osculatory sphere at any point of $M^n$ (see
\cite{ca3}\,).
\end{rem}
\begin{defn} Let $U \subset M^n, U \neq \emptyset$, be a relatively compact set, and let $T = \bigcup_{x \in
\bar{U}}\sigma(x,\omega_U)$. The number $\kappa_U =
\max\{r\,|\,\sigma^n(x,s)\; {\rm is\; connected \; for \; all}\; s
\leq r,\, x \in \bar{T}\}$ is called the {\em maximal
connectivity radius} at $U$.
\end{defn}
Note that the maximal connectivity radius and the maximal osculatory
radius are interconnected by the following inequality (\cite{pe},
Lemma 3.1):
\begin{equation}\label{ec:1}
\omega_U \leq \frac{\sqrt{3}}{3}\kappa_U\,.
\end{equation}
We are now able to present the main steps of Peltonen's proof, which
generalizes both the result and method of proof of Cairns
\cite{ca3}:
\begin{enumerate}
\item
\begin{enumerate}
\item Construct an exhaustive set $\{E_i\}$ of $M^n$,
generated by the pair $(U_i,\eta_i)$, where:
\begin{enumerate}
\item $U_i$ is the relatively compact set $E_i \setminus \bar{E}_{i-1}$ and
\item $\eta_i$ is a number that controls the fatness of the simplices of the triangulation of $E_i$\,, constructed in Part 2, such that it will not differ too much
on adjacent simplices, i.e.:
\\ (${\rm ii_1}$) The sequence $(\eta_i)_{i\geq1}$ descends to $0$\,;
\\ (${\rm ii_2}$) $2\eta_i \geq \eta_{i-1} \,.$
\end{enumerate}
The numbers $\eta_i$ are chosen such that they satisfy the following bounds:
\[\eta_i \leq \frac{1}{4}\min\{\omega_{\bar{U}_{i-1}},\omega_{\bar{U}_i},\omega_{\bar{U}_{i+1}}\}\,.\]
\item
\begin{enumerate}
\item Produce a maximal set $A$, $|A| \leq \aleph_0$, s.t. $A \cap U_i$ satisfies:
\\ (${\rm i_1}$) a density condition, namely:
\[d(a,b) \geq \eta_i/2\,, {\rm for\; all}\; i \geq 1\,;\]
(${\rm i_2}$) a ``gluing'' condition for $U_i, U_{i+1}$\,, i.e.
their intersection is large enough.
Note that according to the density condition (${\rm i_1}$), the
following holds: For any $i$ and for any $x \in \bar{U}_i$, there
exists $a \in A$ such that $d(x,a) \leq \eta_i/2$\,.
\item Prove that the Dirichlet complex $\{\bar{\gamma}_i\}$ defined by the sets $A_i$ is a cell complex and
every cell has a finite number of faces (so that it can be triangulated in a standard manner).
\end{enumerate}
\end{enumerate}
\item
Consider first the dual complex $\Gamma$, and prove that it is a Euclidean
simplicial complex with a ``good'' density. Project then $\Gamma$ on
$M^n$ (using the normal map). Finally, prove that the resulting
complex $\widetilde{\Gamma}$ can be triangulated by fat simplices.
\end{enumerate}
\subsection{Manifolds With Boundary}
The idea of the proof in this case is to build first two fat
triangulations: $\mathcal{T}_{1}$ of a product neighbourhood $N$ of
$\partial M^n$ in $M^n$ and $\mathcal{T}_{2}$ of $int\, M^n$ (its
existence follows from Peltonen's result), and then to ``mash'' the
two triangulations into a new triangulation $\mathcal{T}$, while
retaining their thickness (see \cite{s2}). While the mashing
procedure of the two triangulations is basically the classical one
of \cite{mun}, the triangulation of $\mathcal{T}_{1}$ has been
modified, in order to ensure the thickness of the simplices of
$\mathcal{T}_{1}$ (see \cite{s2}, Theorem 2.9). To thicken
triangulations one can use either the method used in \cite{cms}, or,
alternatively, the one developed in \cite{s1}. For the technical
details, see \cite{s2}.
\section{Main Result}
The idea of the proof
is to use the basic fact that $M^n$ is $\sigma$-compact, i.e. it
admits an exhaustion by compact submanifolds $\{K_j\}_j$ (see, e.g.
\cite{spi}). This is a standard fact for metrizable manifolds.
However, it is conceivable that the ``cutting surfaces'' $N_{ij}$\,,
$\bigcup_{\scriptscriptstyle{i=1,...k_j}}N_{ij} =
\partial K_j$\,, are merely $\mathcal{C}^0$, so even the existence of a
triangulation for these hypersurfaces is not always assured,
hence a fortiori that of smooth triangulations.
(See, e.g., \cite{th} for a brief review of the results regarding the
existence of triangulations).
To show that one can obtain (by ``cutting along'') smooth
hypersurfaces, we briefly review the main idea of the proof of the
$\sigma$-compactness of $M^n$ (for the full details, see, for
example \cite{spi}): Starting from an arbitrary base point $x_0 \in
M^n$, one considers the interval $I = I(x_0) = \{r > 0\,|\,
\beta^n(x_0,r)\; {\rm is \; compact}\}$, where $\beta^n(x_0,r)$ is
as in Section 2. If $I = (0,\infty)$, then $M^n =
\bigcup_{\scriptscriptstyle
j=1}^{\scriptscriptstyle\infty}{\beta^n(x_0,j)}$, hence
$\sigma$-compact. If $I \neq (0,\infty)$, one constructs the
compact sets $K_j$, $K_0 = \{x_0\}$, $K_{j+1} =
\bigcup_{\scriptscriptstyle y \in K_j}\beta^n(y,r(y))$, where $r(y)
= \frac{1}{2}\sup I(y)$. Then it can be shown that $M^n =
\bigcup_{\scriptscriptstyle j \geq 0}K_j$, i.e. $M^n$ is
$\sigma$-compact.
The smoothness of the surfaces $N_{ij}$ now follows from Wolter's
result \cite{wo} regarding the $2$-differentiability of the cut
locus of the exponential map.
By \cite{ca1} (see also \cite{ca2}) the sets $K_j$ and $N_{ij}$
admit thick triangulations and, moreover, these triangulations have
thickness $\varphi_1 = \varphi_1(n)$ and $\varphi_2 =
\varphi_2(n-1)$, respectively. One can thus apply repeatedly the
``meshing'' technique developed in \cite{s2}, for collars of
$N_{ij}$ in $K_j$ and $K_{j + 1}$, $j \geq 0$, rendering a
triangulation of $M^n$, of uniform thickness $\varphi = \varphi(n)$
(see \cite{s2}, \cite{cms}).
\begin{rem}
Instead of the less known and more complicated method of \cite{ca1},
we could have employed the simpler and more intuitive one in
\cite{ca2}. However, the latter still makes use of the principal
curvatures (at each point) of the manifold; thus, using this method,
the ``Differential Geometric content'', so to speak, of our proof
would still be rather high. Since we strive to obtain a proof that
is as elementary as possible, i.e. using only (or mainly) the tools
of elementary differential topology, we prefer to adopt the methods
of Cairns' earlier work.
\end{rem}
\begin{rem}
In some special cases a ``purely'' Euclidean construction can be
achieved, without resorting to embeddings and piecewise-flat (or
just piecewise-linear) approximations. Such a construction is
provided in \cite{ep}, where a method of dividing a non-compact
hyperbolic manifold into (canonical) Euclidean pieces is described.
Note that, moreover, these pieces can be easily subdivided into
thick simplices using a number of the methods mentioned above (and
in the bibliography).
\end{rem}
\begin{rem}
The opposite problem, that of extending a thick triangulation from
the interior of a manifold to its boundary, is also worth
considering, in itself and for its importance in Geometric Function
Theory (see \cite{s3}).
\end{rem}
\section{Curvatures}
We conclude with a few brief remarks regarding the role of
curvature, its possible ``recovery'' in the $PL$ context, and
further directions of study.
First, let us note that, manifestly enough, our simplified proof is
not ``Differential Geometry free'', since we used both the
exponential map and Wolter's result. Regarding an ``elementary''
proof, this is evidently a weakness, as is the appeal we have made
to Nash's Embedding Theorem (with its complicated Differential
Geometric apparatus).
One would be tempted to substitute for the balls $\beta^n(x,j)$,
Euclidean balls $\mathbb{B}^\nu(x,j)$, and replace, if necessary,
the surfaces $L_{j} = \partial\mathbb{B}^\nu(x,j) \cap M^n$ by
$\mathcal{C}^1$ surfaces $L^\ast_j$, that are $\varepsilon$-close to
$L_j$\,. Such an approach would allow us to consider only
$\mathcal{C}^1$ embeddings, thus permitting us to dispense with the
use of Nash's Theorem. Unfortunately, in general, one cannot discard
the use of
the exponential map (and, consequently, neither can one ignore Wolter's work). %
Indeed, for wildly embedded manifolds, the simple method above is
not applicable.
However, if only tame embeddings are considered, then the method
above can be employed.
Secondly, let us note that Wolter's method is -- obviously -- not
independent of curvature. Therefore, the curvature considerations do
play a decisive role, even if in a more ``soft'' manner. Moreover,
even if applying the very simple and direct method described in the
previous paragraph, the curvature plays an intrinsic role in
determining the meshes of the triangulations of two adjacent
``pieces'' $K_j$ and $K_{j+1}$, ($K_j = \mathbb{B}^\nu(x,j) \cap
M^n$), and their common boundary $L^\ast_{j}$. For this reason, a
``naive'' exhaustion of the manifold using Euclidean balls, hence
without control of curvature, may prove, in general, to be
counterproductive, yielding a non-monotone (and even highly
oscillating) sequence of meshes for the members of the exhaustion.
Finally, as noted in the Introduction, the (Lipschitz-Killing)
curvatures have ``good'' convergence properties, under
piecewise-flat (secant) approximations of $M^n$. This result of
Cheeger et al. (see \cite{cms}) is the goal for which they developed
the ``meshing of triangulations'' technique mentioned above.
However, a simpler and more direct approach to curvatures
computation in $PL$ approximation, along the lines indicated in
\cite{s4} and based on metric curvatures, is currently in
preparation.
\subsection*{Acknowledgment}
The first author wishes to express his gratitude to Professor Shahar
Mendelson -- his warm support is gratefully acknowledged. He would also like to thank
Professor Klaus-Dieter Semmler for bringing to his attention
Wolter's work.
| {
"timestamp": "2008-12-02T10:41:26",
"yymm": "0812",
"arxiv_id": "0812.0456",
"language": "en",
"url": "https://arxiv.org/abs/0812.0456",
"abstract": "We provide an alternative, simpler proof of the existence of thick triangulations for noncompact $\\mathcal{C}^1$ manifolds. Moreover, this proof is simpler than the original one given in \\cite{pe}, since it mainly uses tools of elementary differential topology. The role played by curvatures in this construction is also emphasized.",
"subjects": "Geometric Topology (math.GT); Differential Geometry (math.DG)",
"title": "The existence of thick triangulations -- an \"elementary\" proof",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9875683506103591,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7096610730774517
} |
https://arxiv.org/abs/1807.06170 | Learning Convex Partitions and Computing Game-theoretic Equilibria from Best Response Queries | Suppose that an $m$-simplex is partitioned into $n$ convex regions having disjoint interiors and distinct labels, and we may learn the label of any point by querying it. The learning objective is to know, for any point in the simplex, a label that occurs within some distance $\epsilon$ from that point. We present two algorithms for this task: Constant-Dimension Generalised Binary Search (CD-GBS), which for constant $m$ uses $poly(n, \log \left( \frac{1}{\epsilon} \right))$ queries, and Constant-Region Generalised Binary Search (CR-GBS), which uses CD-GBS as a subroutine and for constant $n$ uses $poly(m, \log \left( \frac{1}{\epsilon} \right))$ queries.We show via Kakutani's fixed-point theorem that these algorithms provide bounds on the best-response query complexity of computing approximate well-supported equilibria of bimatrix games in which one of the players has a constant number of pure strategies. We also partially extend our results to games with multiple players, establishing further query complexity bounds for computing approximate well-supported equilibria in this setting. | \section{Introduction}\label{sec:introduction}
The computation of game-theoretic equilibria is a topic of long-standing interest in the algorithmic and AI communities. This includes computation in the ``classical'' setting of complete information about a game, as well as settings of partial information, communication-bounded settings, and distributed algorithms (for example, best-response dynamics). A recent line of research has studied computation of equilibria based on query access to players' payoff functions. That work, along with the notion of revealed preferences in economics, inspires the new setting we study here.
We study algorithms that have query access to the players' best-response behaviour: an algorithm may query a mixed-strategy profile (i.e. probability distributions constructed by the algorithm, over each player's pure strategies) and learn the players' best responses. Our focus is on standard bimatrix games, which is arguably the most natural starting-point for an investigation of this new query model. The solution concept of interest is $\varepsilon$-approximate Nash equilibria (exact equilibria are typically impossible to find using finitely many such queries, see Corollary \ref{cor:exact-nash-inft}). A basic challenge is to identify algorithms that achieve this goal with good bounds on their query complexity (and also, ideally, their runtime complexity).
In more detail, we assume an $m\times n$ game $G$: a row player has $m$ pure strategies and a column player has $n$ pure strategies. $G$ has two unknown $m\times n$ payoff matrices that represent payoffs to the players for all combinations of pure strategy choices they may make. A query consists of a probability distribution over the pure strategies of one of the players, and elicits an answer consisting of a best response for the other player (i.e. a pure strategy that maximises that player's expected payoff). We seek an $\varepsilon$-well-supported Nash equilibrium ($\varepsilon$-WSNE): a pair of probability distributions over their pure strategies with the property that any strategy of player $p$ whose expected payoff is more than $\varepsilon$ below the value of $p$'s best response, gets probability zero. The general question of interest is: how many queries are needed, as a function of $m,n,\varepsilon$.
Using Kakutani's fixed point theorem, we reduce this question to a novel and more geometrical challenge in the design of query protocols. Suppose that the $m$-simplex $\Delta^m$ is partitioned into $n$ convex regions having labels in $[n]=\{1,\ldots,n\}$. When we query a point $x\in\Delta^m$ we are told the label of $x$. How many queries (in terms of $m,n,\varepsilon$) are needed in order to ensure that all points in $\Delta^m$ are within $\varepsilon$ of a point whose label we know? We show how to achieve this using time and queries polynomial in $\log \varepsilon$ and $\max(m,n)$ provided that $\min(m,n)$ is constant. This leads to a polynomial query complexity algorithm for 2-player games, provided that one of the players has a constant number of strategies.
\subsection{Further details}
In essence, we consider partitions of the unit $m$-simplex $\Delta^m$ into $n$ convex polytopes, $P_1,...,P_n$, with disjoint interiors, and aim to approximately learn the partition with access to a membership oracle that for a given $x \in \Delta^m$, returns a polytope to which $x$ belongs. The notion of approximation we study is that of {\em $\varepsilon$-close labellings}, a collection of empirical polytopes, $\{\hat{P}_i\}_{i=1}^n$, such that $\hat{P}_i \subseteq P_i$ for $i=1,...,n$ and $\cup_{i=1}^n \hat{P}_i$ is an $\varepsilon$-net of $\Delta^m \subset \mathbb{R}^m$ in the $\ell_2$ norm.
Note that in one dimension ($m=1$) we can use binary search to solve this problem using $O(n\log(1/\varepsilon))$ queries.
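The one-dimensional case can be sketched as follows. Since the regions are intervals, two queried points with the same label bracket a segment entirely carrying that label, so bisection is needed only between points with different labels. This is an illustrative Python reconstruction of that argument (the interval oracle is a made-up example, not from the paper):

```python
def learn_labels_1d(oracle, eps):
    """Return labelled points such that every x in [0,1] is within eps of a
    queried point whose label is known; O(n log(1/eps)) oracle calls."""
    labelled = {0.0: oracle(0.0), 1.0: oracle(1.0)}

    def refine(a, b):
        # equal labels at both ends: by convexity the label fills all of [a, b]
        if labelled[a] == labelled[b] or b - a <= eps:
            return
        m = (a + b) / 2.0
        labelled[m] = oracle(m)
        refine(a, m)
        refine(m, b)

    refine(0.0, 1.0)
    return sorted(labelled.items())

# Example: three intervals with labels 0, 1, 2
oracle = lambda x: 0 if x < 0.3 else (1 if x < 0.55 else 2)
pts = learn_labels_1d(oracle, 1e-3)
```

Each of the at most $n-1$ region boundaries is localised to within $\varepsilon$ using $O(\log(1/\varepsilon))$ bisection queries, giving the total query count mentioned above.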
We generalise to higher dimension, exploiting convexity of the regions to reduce query usage in computing $\varepsilon$-close labellings. We present two algorithms for this task: Constant-Dimension Generalised Binary Search (CD-GBS), which for constant $m$ uses $poly(n, \log \left( \frac{1}{\varepsilon} \right))$ queries, and Constant-Region Generalised Binary Search (CR-GBS), which uses CD-GBS as a subroutine and for constant $n$ uses $poly(m, \log \left( \frac{1}{\varepsilon} \right))$ queries.
This problem derives from the question of how to compute approximate (well-supported) Nash equilibra ($\varepsilon$-WSNE) using only best response information, obtained via queries in which the algorithm selects a mixed strategy profile and a player, and receives a best response for that player to the mixed profile. Via Kakutani's fixed-point theorem \cite{kakutani1941} we reduce this variant of equilibrium computation to finding $\varepsilon$-close labellings of polytope partitions. For $m \times n$ games where $m$ is constant (or $n$ equivalently, by symmetry), we show that an $\varepsilon$-WSNE can be computed using $poly(n, \log \left( \frac{1}{\varepsilon} \right) )$ best response queries.
In addition, we briefly delve into the problem of computing $\varepsilon$-WSNE in multiplayer games with best response queries. Unfortunately, as soon as there are more than two players, the geometric connection between computing $\varepsilon$-WSNE and learning polytope partitions of the simplex breaks down. Nonetheless, fixed-point techniques from Section \ref{sec:discrete-nash} can still be applied in this setting, and we present a simple algorithm that computes an $\varepsilon$-WSNE with a finite query complexity. To be more specific, in a game with $n$ players each having $k$ actions, our algorithm computes an $\varepsilon$-WSNE using $O\left( n \left( \frac{nk}{\varepsilon} \right)^{nk} \right)$ best response queries.
\subsection{Related Work}
Earlier work in computational learning theory has studied exact learning of geometrical regions over a discretised domain, where algorithms are sought with query complexity logarithmic in a ``resolution'' parameter and binary search is repeatedly applied in a systematic way \cite{BGGM98}. Goldberg and Kwek~\cite{GK00} specifically study the learnability of polytopes in this context, deriving query efficient algorithms, and precisely classifying polytopes learnable in this setting. These algorithms can be adapted to approximately learn a single polytope with membership queries, but the obtained notion of approximation is not directly applicable to computing $\varepsilon$-close labellings.
The Nash equilibrium (NE) is a fundamental concept in game theory \cite{Nash}. Nash equilibria are guaranteed to exist in finite games, yet computational challenges in finding one abound, most notably, the PPAD-completeness of computing an exact equilibrium even for two-player normal form games \cite{DGP, CDT}. For this reason, query complexity has been extensively used as a tool to differentiate hardness of equilibrium concepts in games. For payoff queries, some notable examples include: exponential lower bounds for randomised computation of exact Nash equilibria and exact correlated equilibria via communication complexity lower bounds in multiplayer games \cite{HM10, HN13}; exponential lower bounds for randomised computation of approximate well-supported equilibria and general approximate equilibria for a small enough approximation factor in multiplayer games \cite{B13}; upper and lower bounds for equilibrium computation in bimatrix games, congestion games \cite{FGGS} and anonymous games \cite{GT14}; upper and lower bounds for randomised algorithms computing approximate correlated equilibria \cite{GR14}. Babichenko et al. have also proved lower bounds in communication complexity for computing $\varepsilon$-WSNE for small enough $\varepsilon$ in both bimatrix and multiplayer games \cite{BabR}.
Best response queries are a weaker but natural query model which is powerful enough to implement fictitious play, a dynamic first proposed by Brown \cite{Brown}, and proven to converge by Robinson \cite{Robinson} in two-player zero-sum games to an approximate NE. Fictitious play does not always converge for general games where both players have more than two strategies \cite{FL98}. Furthermore, Daskalakis and Pan have proven that the rate of convergence of the dynamic is quite slow in the worst case (with arbitrary tie-breaking) \cite{DFictitious}. Also, beyond non-convergence, the dynamic can have a poor approximation value for general games \cite{GSSV11}. In addition, the relationship between best responses and convex partitions of simplices has been studied by Von Stengel \cite{VS04} in the context of sequential games where one player has to commit to and announce a strategy before playing.
For a bimatrix game, simple $\varepsilon$-close labellings can be constructed by querying best responses at mixed strategies arising as uniform distributions over sufficiently large multisets of pure strategies. As a consequence of our main theorem, best responses to these multiset distributions contain enough information to compute approximate WSNE. This result is in the spirit of \cite{BabBarman} and \cite{LMM03}, who aim to quantify specific $k$ such that some approximate equilibrium arises as a uniform mixture over multisets of size $k$. We note in our scenario that there is also a guaranteed existence of an approximate equilibrium using sufficiently large multisets, however {\em verifying} that a {\em specific} pair of mixed strategies is an approximate WSNE is not straightforward using only best response queries. This is in contrast to the verification of approximate equilibria via utility queries as studied in \cite{BabBarman}.
Separately, we note that the present paper is possibly relevant to the search for a price equilibrium in certain markets. Baldwin and Klemperer study markets consisting of {\em strong-substitutes} demand functions for $N$ different goods available in multiple discrete units \cite{BK16}. These markets are a generalisation of the {\em product-mix auction} of \cite{Kle10}; a basic task is to identify prices at which some desired bundle of the goods is demanded. Consider the space $(\mathbb{R}^+)^N$ of all price vectors. As analysed in \cite{BK16}, a strong-substitutes demand function partitions this price space into convex polytopes, each of which comprises the prices at which some particular bundle of goods is demanded. So, the present paper relates to a setting where price vectors may be queried, and responses consist of demand bundles. The connection is imperfect, since the main objective in the context of \cite{BK16} would be to learn a price at which some target bundle is demanded, rather than the entire demand function. The ideas here may be useful for learning the values that the market has for various bundles.
\section{Preliminaries and Notation}\label{sec:preliminaries}
Our main object of study will be families of polytopes that precisely cover the unit simplex, with the property that any two distinct polytopes from the family are either disjoint, meet at their boundary, or entirely coincide. Throughout, the polytopes we work with are convex.
\begin{definition}[$(m,n)$-Polytope Partition]\label{def:polytope-partition}
An {\em $(m,n)$-polytope partition} consists of a set of $n$ convex polytopes in $\mathbb{R}^m$, $\mathscr{P} = \{P_1,...,P_n\}$, with the following properties:
\begin{itemize}
\item $\bigcup P_i = \Delta^m = \{x \in \mathbb{R}^m \ | \ \forall i, \ x_i\geq 0, \ \sum_i x_i \leq 1\}$.
\item For each $i \neq j$, either $relint(P_i) \cap relint(P_j) = \emptyset$ or $P_i = P_j$, where $relint(H)$ means the relative interior of $H$.
\end{itemize}
\end{definition}
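The definition can be made concrete with a small computational sketch. The following Python snippet is purely illustrative (the H-representation encoding and all names are ours, not part of the formal development): it stores each polytope as $\{x : Ax \le b\}$ and tests membership for a $(1,2)$-polytope partition of $\Delta^1 = [0,1]$.

```python
# A minimal sketch (our own representation, not from the paper): polytopes
# stored in H-representation {x : A x <= b}, so that membership and the
# partition conditions of the definition are easy to test.

def member(polytope, x, tol=1e-9):
    """Check whether the point x lies in the polytope {y : A y <= b}."""
    A, b = polytope
    return all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i + tol
               for row, b_i in zip(A, b))

# A (1,2)-polytope partition of the 1-simplex Delta^1 = [0,1]:
# P1 = [0, 1/2] and P2 = [1/2, 1], written as one-dimensional H-polytopes.
P1 = ([[-1.0], [1.0]], [0.0, 0.5])   # -x <= 0    and x <= 1/2
P2 = ([[-1.0], [1.0]], [-0.5, 1.0])  # -x <= -1/2 and x <= 1

# The union covers [0,1]; the relative interiors are disjoint, and the two
# polytopes meet only at the shared boundary point 1/2.
assert member(P1, [0.25]) and not member(P2, [0.25])
assert member(P1, [0.5]) and member(P2, [0.5])
```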
\begin{figure}
\center{
\begin{tikzpicture}[scale=0.5]
\tikzstyle{xxx}=[dashed,thick]
\fill[red!20](-8,6)--(-7,6)--(-5.5,7.5)--(-8,10)--cycle;
\fill[blue!20](-8,6)--(-7,6)--(-5,2)--(-8,2)--cycle;
\fill[orange!20](-6,4)--(-3,5)--(-5.5,7.5)--(-7,6)--cycle;
\fill[magenta!20](-5,2)--(-6,4)--(-4,4.65)--(-4,2)--cycle;
\fill[green!20](-4,2)--(-4,4.65)--(-3,5)--(0,2)--cycle;
\fill[white!20](-8,2)--(0,2)--(0,-0.2)--(-8,-0.2)--cycle;
\draw[thick, -](-8,6)--(-7,6)--(-5.5,7.5);
\draw[thick, -](-7,6)--(-5,2);
\draw[thick, -](-6,4)--(-3,5);
\draw[thick, -](-4,2)--(-4,4.65);
\draw[thick, -](-8,2)--(-8,10);
\draw[thick, -](-8,2)--(0,2);
\draw[thick, -](-8,10)--(0,2);
\node at (-7,4){$P_1$};
\node at (-7,7.5){$P_2$};
\node at (-5.25,5.5){$P_3$};
\node at (-2.5,3){$P_4$};
\node at (-4.75,3.5){$P_5$};
\end{tikzpicture}
\hspace{1cm}
\begin{tikzpicture}[scale=0.5]
\tikzstyle{xxx}=[dashed,thick]
\fill[red!20](-7.6,6)--(-7.4,6)--(-7.4,9.4)--(-7.6,9.6)--cycle;
\fill[blue!20](-7.6,2)--(-7.4,2)--(-7.4,6)--(-7.6,6)--cycle;
\fill[orange!20](-4.5,4.5)--(-3,5)--(-4.5,6.5)--cycle;
\fill[magenta!20](-4.5,2)--(-4.5,4.5)--(-4,4.65)--(-4,2)--cycle;
\fill[green!20](-4,2)--(-4,4.65)--(-3,5)--(-2.5,4.5)--(-2.5,2)--cycle;
\draw[thick, -](-8,6)--(-7,6)--(-5.5,7.5);
\draw[thick, -](-7,6)--(-5,2);
\draw[thick, -](-6,4)--(-3,5);
\draw[thick, -](-4,2)--(-4,4.65);
\draw[thick, -](-8,2)--(-8,10);
\draw[thick, -](-8,2)--(0,2);
\draw[thick, -](-8,10)--(0,2);
\node at (-7,4){$P_1$};
\node at (-7,7.5){$P_2$};
\node at (-5.25,5.5){$P_3$};
\node at (-2.5,3){$P_4$};
\node at (-4.75,3.5){$P_5$};
\draw[thick, <->](-8,1)--(0,1);
\draw[thick, -](-7.5,0.8)--(-7.5,1.2);
\draw[thick, -](-4.5,0.8)--(-4.5,1.2);
\draw[thick, -](-2.5,0.8)--(-2.5,1.2);
\node at (-7.5,1.5){$x$};
\node at (-4.5,1.5){$y$};
\node at (-2.5,1.5){$z$};
\node at (-7.5,0.3){$\mathscr{P}^x$};
\node at (-3.5,0.3){$\mathscr{P}^{y,z}$};
\end{tikzpicture}
\caption{Polytope partition, cross-section and slices.}\label{fig:poly-part}
}
\end{figure}
\begin{definition}[Cross-sections and Slices]\label{def:cross-section}
Let $P \subset \mathbb{R}^m$ be a polytope and $\pi: \mathbb{R}^m \rightarrow \mathbb{R}$ the projection function into the first coordinate. For $x \in \mathbb{R}$, we define the {\em $x$-cross-section} of $P$ as $P^x = \pi^{-1}(x)\cap P$. For any $I = [x,y] \subset \mathbb{R}$ we define the {\em $[x,y]$-slice} of $P$ as $P^I = P^{x,y} = \pi^{-1}([x,y])\cap P$. Suppose that $\mathscr{P} = \{P_i\}_i$ is an $(m,n)$-polytope partition. The definitions of cross-sections and slices extend to $\mathscr{P}^x = \{P_i^x\}_i$ and $\mathscr{P}^I = \mathscr{P}^{x,y} = \{P_i^{x,y}\}_i$.
\end{definition}
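To make cross-sections concrete, the following hedged Python sketch (the function name and polygon example are ours) computes the $x$-cross-section of a convex polygon in $\mathbb{R}^2$ given by its vertices, by intersecting each edge with the vertical line $\pi^{-1}(x)$; for a two-dimensional simplex the cross-section is the expected scaled copy of $\Delta^1$.

```python
# A hedged illustration of cross-sections: for a convex polygon P in R^2
# given by its vertices in order, the x-cross-section P^x is a vertical
# segment, recovered by intersecting each edge with the line pi^{-1}(x).

def cross_section(vertices, x):
    """Return the x-cross-section of a convex polygon as (y_min, y_max),
    or None if the line {first coordinate = x} misses the polygon."""
    ys = []
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        if min(x1, x2) <= x <= max(x1, x2):
            if x1 == x2:          # vertical edge lying on the line
                ys += [y1, y2]
            else:                 # interpolate along the edge
                t = (x - x1) / (x2 - x1)
                ys.append(y1 + t * (y2 - y1))
    return (min(ys), max(ys)) if ys else None

# Delta^2 in R^2: the triangle with vertices (0,0), (1,0), (0,1).
simplex = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
assert cross_section(simplex, 0.25) == (0.0, 0.75)   # a copy of 0.75*Delta^1
assert cross_section(simplex, 2.0) is None
```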
Figure \ref{fig:poly-part} gives a visualisation of these two definitions. Notice that in the same figure, $\mathscr{P}^x$ is essentially a lower-dimensional polytope partition linearly scaled by a factor of $(1-x)$. This, however, is not the case in general, as visible in Figure \ref{fig:degenerate-cross}, where $\mathscr{P}^x$ fails the second condition of Definition \ref{def:polytope-partition}. We distinguish between these two scenarios with the following formal definition:
\begin{definition}[Non-Degenerate and Degenerate cross-sections]\label{def:degenerate-cross-section}
Let $\mathscr{P}$ be an $(m,n)$-polytope partition. For $x \in [0,1)$ let $f_x: \mathscr{P}^x \rightarrow \Delta^{m-1}$ be defined as $f_x(v_1,...,v_m) = \frac{1}{1-x}(v_2,...,v_m)$. If $f_x(\mathscr{P}^x)$ is an $(m-1,n)$-polytope partition, we say that $\mathscr{P}^x$ is a non-degenerate cross-section. Otherwise we say that $\mathscr{P}^x$ is a degenerate cross-section.
\end{definition}
The recursive structure of polytope partitions on non-degenerate cross-sections will be crucial to our constructions. Luckily, for any polytope partition, only finitely many points $x \in [0,1)$ give rise to degenerate cross-sections. Before showing this, we define an important discrete subset of $[0,1]$ given by the projections of vertices of polytopes under $\pi$.
\begin{definition}[Vertex Critical Coordinates]\label{def:vertex-critical-pts}
For a given polytope $P \subset \mathbb{R}^m$ let $V_P \subset \mathbb{R}^m$ be the vertex set of $P$. Define the set of {\em vertex critical coordinates} as $C_P = \pi(V_P) \subset \mathbb{R}$. If $\mathscr{P} = \{P_i\}_{i=1}^n$ is an $(m,n)$-polytope partition, then we extend our definition to define $C_{\mathscr{P}} = \bigcup_{i=1}^n C_{P_i} \subset[0,1]$ as the vertex critical coordinates of $\mathscr{P}$.
\end{definition}
\begin{lemma}\label{lemma:degenerate-cross-section}
Suppose that $\mathscr{P}$ is an $(m,n)$-polytope partition and that $x \in [0,1) \setminus C_{\mathscr{P}}$. Then $\mathscr{P}^x$ is non-degenerate.
\end{lemma}
\begin{proof}
First we show that if $P \subset \mathbb{R}^m$ is an arbitrary polytope and $x \in \mathbb{R} \setminus C_P $ then $relint(P^x) = relint(P)\cap \pi^{-1}(x)$.
First of all, we notice that $P \neq P^x$, since we have assumed that $x$ is not the projection of a vertex of $P$. Suppose that the affine dimension of $P$ is $k \leq m$, so that $P$ is full-dimensional in an affine subspace $H$ of dimension $k$. Let $z \in relint(P^x) \subset P^x$. Clearly $\pi(z) = x$, hence we simply need to show that $z \in relint(P)$. Suppose that this is not the case; then $z$ lies on some boundary hyperplane of $P$ in $H$, which we call $D$. The hyperplane $D$ cannot lie entirely in $\pi^{-1}(x)$, since $x$ is not a critical coordinate. It follows that $D \cap \pi^{-1}(x)$ is a boundary hyperplane of $P^x$. This contradicts the fact that $z \in relint(P^x)$, establishing that $z \in relint(P) \cap \pi^{-1}(x)$.
Conversely, suppose that $z \in relint(P) \cap \pi^{-1}(x)$. Since $relint(P) \subset P$, we know that $z \in P^x$. Furthermore, $z \in relint(P)$ means that for some $\varepsilon > 0$, the $k$-dimensional ball $B_\varepsilon(z) \cap H$ is entirely contained in $P$. Clearly this also holds for the $(k-1)$-dimensional ball $B_\varepsilon(z) \cap H \cap \pi^{-1}(x)$, establishing that $z \in relint(P^x)$.
Let us return to the lemma statement, which involves a polytope partition $\mathscr{P}$ instead of a single polytope $P$. If $x \in [0,1) \setminus C_{\mathscr{P}}$, then we have shown $relint(P_i^x) = relint(P_i)\cap \pi^{-1}(x)$ for all $P_i \in \mathscr{P}$. Since $\mathscr{P}$ satisfies the second condition of Definition \ref{def:polytope-partition}, it follows that $\mathscr{P}^x$ satisfies this second condition as well. That $\mathscr{P}^x$ satisfies the first condition of Definition \ref{def:polytope-partition} follows trivially from the fact that $\mathscr{P}$ covers $\Delta^m$.
\end{proof}
\begin{figure}
\center{
\begin{tikzpicture}[scale=0.8]
\tikzstyle{xxx}=[dashed,thick]
\fill[red!20](2,2)--(2,4)--(4,4)--(4,2)--cycle;
\fill[green!20](2,4)--(4,4)--(4,6)--(2,8)--cycle;
\fill[blue!20](4,2)--(8,2)--(7,3)--(4,3)--cycle;
\fill[pink!20](4,3)--(7,3)--(5.5,4.5)--(4,4.5)--cycle;
\fill[orange!20](4,4.5)--(5.5,4.5)--(4,6)--cycle;
\draw[thick, -](2,2)--(8,2);
\draw[thick, -](2,2)--(2,8);
\draw[thick, -](2,8)--(8,2);
\draw[thick, -](4,4)--(4,6);
\draw[thick, -](4,4)--(4,2);
\draw[thick, -](2,4)--(4,4);
\draw[thick, -](4,4.5)--(5.5,4.5);
\draw[thick, -](4,3)--(7,3);
\node at (3,3){$P_1$};
\node at (3,5.5){$P_2$};
\node at (4.5,5){$P_3$};
\node at (5,3.75){$P_4$};
\node at (5,2.5){$P_5$};
\draw[thick, <->](2,1)--(8,1);
\draw[thick, -](4,0.8)--(4,1.2);
\node at (4,1.5){$x$};
\node at (4,0.3){$\mathscr{P}^x$};
\end{tikzpicture}
\caption{Degenerate cross-section at $x$}\label{fig:degenerate-cross}
}
\end{figure}
\subsection{Query Oracle Models}
We study two natural query oracle models for polytope membership in any $\mathscr{P}$.
\begin{definition}[Membership Query Oracles for Polytope Partitions]\label{def:query-oracle}
Any $(m,n)$-polytope partition $\mathscr{P}$ admits the following membership query oracles:
\begin{itemize}
\item Lexicographic query oracle: $Q_\ell: \Delta^m \rightarrow [n]$, which for a given $y$ returns the smallest index of a polytope containing $y$, namely $Q_\ell(y) = \min \{i \in [n] \ | \ y \in P_i\}$.
\item Adversarial query oracle(s): $Q_A: \Delta^m \rightarrow [n]$, which may return the index of any polytope containing $y$. Namely, $Q_A$ is any function such that $Q_A(y) \in \{i \in [n] \ | \ y \in P_i\}$ for all $y \in \Delta^m$.
\end{itemize}
\end{definition}
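The two oracle models can be contrasted with a hedged one-dimensional sketch (function names and the particular tie-breaking choice below are ours, for illustration only): on a $(1,3)$-polytope partition of $[0,1]$ into intervals, the oracles agree at interior points but may differ at shared boundary points.

```python
# A hedged sketch of the two oracle models on a (1,3)-polytope partition of
# [0,1] into intervals; names and the adversarial tie-break are ours.

intervals = [(0.0, 0.3), (0.3, 0.7), (0.7, 1.0)]

def q_lex(y):
    """Lexicographic oracle Q_l: smallest (1-based) index i with y in P_i."""
    return min(i + 1 for i, (lo, hi) in enumerate(intervals) if lo <= y <= hi)

def q_adversarial(y):
    """One admissible adversarial oracle Q_A: the largest valid index."""
    return max(i + 1 for i, (lo, hi) in enumerate(intervals) if lo <= y <= hi)

assert q_lex(0.3) == 1            # boundary point: lexicographic tie-break
assert q_adversarial(0.3) == 2    # same point, a different admissible answer
assert q_lex(0.5) == q_adversarial(0.5) == 2   # interior points agree
```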
When we wish to refer to an arbitrary oracle from the above models, we use the notation $Q$. Before continuing, we also clarify that for $A,B \subseteq \mathbb{R}^m$, we write $Conv(A,B) \subseteq \mathbb{R}^m$ for the convex hull of $A \cup B$. In addition, if $A_i \subseteq \mathbb{R}^m$, $i = 1,...,r$, is an indexed family of sets, we write $Conv(A_i \ | \ i =1,...,r) \subseteq \mathbb{R}^m$ for the convex hull of $\bigcup_{i=1}^r A_i$.
\subsection{\texorpdfstring{$\varepsilon$}{}-close Labellings}
Upon making queries to $Q$, we can infer labels of $x \in \Delta^m$ by taking convex combinations. We abstract this notion in the following definition.
\begin{definition}[Empirical Polytopes and Labellings]\label{def:implicit-labelling}
Suppose that $\mathscr{P}$ is an $(m,n)$-polytope partition
and $S \subset \Delta^m$ is a finite set of points for which queries to $Q$ have been made. Let $\hat{P}_i = Conv(\{x \in S \ | \ Q(x) = i\}) \subset P_i$. We say each $\hat{P}_i$ is an empirical polytope of $P_i$ and that $\hat{\mathscr{P}} = \{\hat{P}_i\}$ is an empirical labelling of $\mathscr{P}$. Furthermore, we use the notation $\hat{P}_\bot = \Delta^m \setminus \cup_{i=1}^n \hat{P}_i$ to refer to the set of points in $\Delta^m$ unlabelled under $\hat{\mathscr{P}}$.
\end{definition}
An $\varepsilon$-net in the $\ell_2$ norm for $ \Delta^m \subset \mathbb{R}^m$ is a set $N^m_\varepsilon \subseteq \Delta^m$ with the property that for all $x \in \Delta^m$, there exists a $y \in N^m_\varepsilon$ such that $\|x - y\|_2 \leq \varepsilon$. Our learning goal is to use query access to an oracle, $Q$, to compute an empirical labelling $\hat{\mathscr{P}}$ such that $\cup_{i=1}^n \hat{P}_i$ is an $\varepsilon$-net of $\Delta^m$.
\begin{definition}[$\varepsilon$-close Labelling]\label{def:eps-close-label-thin}
Suppose that $\varepsilon \geq 0$ and that $\hat{\mathscr{P}}$ is an empirical labelling for $\mathscr{P}$. If $\cup_{i=1}^n \hat{P}_i$ is an $\varepsilon$-net of $\Delta^m \subset \mathbb{R}^m$ in the $\ell_2$ norm, we say that $\hat{\mathscr{P}}$ is an {\em $\varepsilon$-close labelling} of $\mathscr{P}$.
\end{definition}
Although $\varepsilon$-close labellings are defined for polytope partitions, we extend our terminology to also encompass slices of polytope partitions. As such, when we mention computing an $\varepsilon$-close labelling of $\mathscr{P}^{x,y}$, we mean an empirical labelling of $\mathscr{P}^{x,y}$ (in the same vein as Definition \ref{def:implicit-labelling}) with the property that the union of its empirical polytopes forms an $\varepsilon$-net of $(\Delta^m)^{x,y}$.
\subsection{Learning in Thickness to Learning in Distance}
\begin{definition}[Thickness of Sets]\label{def:thickness}
Suppose that $Z \subseteq \mathbb{R}^m$ is a set. We define the {\em thickness} of $Z$ as the radius of the largest $\ell_2$ ball fully contained in $Z$ and we denote it by $\tau(Z) = \sup \{\delta \geq 0 \ | \ \exists x \in Z \text{ with } B_\delta(x) \subseteq Z\}$ where $B_\delta(x) = \{ y \in \mathbb{R}^m \ | \ \|x - y\|_2 \leq \delta\}$. In the language of convex geometry, $\tau(Z)$ is the depth of the Chebyshev centre of $Z$.
\end{definition}
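As a hedged illustration of thickness (the brute-force approach and names are ours, not an algorithm from the paper), one can approximate $\tau(Z)$ for a simple planar set by scanning candidate Chebyshev centres on a grid; a long thin rectangle has large diameter but small thickness.

```python
# A hedged brute-force sketch of thickness tau(Z): approximate the thickness
# of an axis-aligned rectangle by maximising, over grid points, the distance
# to the four sides; the exact value is min(w, h)/2.

def rect_thickness(w, h, steps=200):
    """Largest radius of an l2 ball fitting inside [0,w] x [0,h], estimated
    by scanning candidate centres on a steps x steps grid."""
    best = 0.0
    for i in range(1, steps):
        for j in range(1, steps):
            x, y = w * i / steps, h * j / steps
            # For a rectangle, the largest ball centred at (x, y) that fits
            # has radius equal to the distance to the nearest side.
            best = max(best, min(x, w - x, y, h - y))
    return best

# A long thin rectangle: diameter is large, but thickness is only 0.1.
assert abs(rect_thickness(1.0, 0.2) - 0.1) < 1e-9
```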
For a polytope partition $\mathscr{P}$, if $\hat{\mathscr{P}}$ is an $\varepsilon$-close labelling, then $\tau(\hat{P}_\bot) \leq \varepsilon$, but the converse does not hold in general.
Even though $\hat{P}_\bot$ may be of small thickness, if it contains vertices of $\Delta^m$, these vertices may be far from labelled points. The following results lead up to Lemma \ref{lemma:thickness-to-distance}, a slightly weaker version of the converse. Lemma \ref{lemma:thickness-to-distance} shows that if we are able to learn an empirical labelling where the set of unlabelled points is of small enough thickness, then we will in fact have succeeded in learning an $\varepsilon$-close labelling, where any unlabelled point is close in distance to a labelled point.
\begin{lemma}\label{lemma:cones-nets}
Let $P \subset \mathbb{R}^m$ be a full-dimensional polytope with $Diam(P) = \sup_{x,y \in P} \|x - y\|_2$.
\begin{itemize}
\item If $A \subsetneq P$ and $\gamma > \left(\frac{Diam(P)}{\tau(P)} \right) \tau(A)$, then $B_\gamma(x) \cap \left( P \setminus A \right) \neq \emptyset$ for all $x \in A$.
\item If $A \subseteq P$ is such that $int(P) \setminus A \neq \emptyset$ ($int(P)$ refers to the interior of $P$) and $\gamma > \left(\frac{Diam(P)}{\tau(P)} \right) \tau(A)$, then $B_\gamma(x) \cap \left( int(P) \setminus A \right) \neq \emptyset$ for all $x \in A$.
\end{itemize}
\end{lemma}
\begin{proof}
The proof of the first claim follows by considering the picture given in Figure \ref{fig:thickness-to-distance}. Pick an arbitrary $x \in A$. Due to the definition of thickness, there exists some $v \in P$ such that $B_{\tau(P)}(v) \subset P$. Consider the convex combination, $Conv(x, B_{\tau(P)}(v)) \subset P$. The furthest point in this convex combination from $x$ is at the other extreme of $B_{\tau(P)}(v)$ from $x$, and we denote the distance between these two points by $z = \sup_{a \in B_{\tau(P)}(v)} \|x - a\|_2 \leq Diam(P)$.
By similarity, since $\gamma > \left(\frac{Diam(P)}{\tau(P)} \right) \tau(A)$ and $Diam(P) \geq z$, the set $F = B_{\gamma}(x) \cap Conv(x, B_{\tau(P)}(v)) \subset B_{\gamma}(x) \cap P$ contains an inscribed sphere of radius strictly greater than $\tau(A)$. In other words, $\tau(F) > \tau(A)$. It follows that $F \not\subset A$, which proves the claim since $F \subset B_{\gamma}(x) \cap P$.
The second claim follows by considering the same picture and noticing that $int(Conv(x, B_{\tau(P)}(v))) \subset int(P)$, where the former set is non-empty since $P$ is of full dimension.
\end{proof}
\begin{figure}[h]
\center{
\begin{tikzpicture}[scale=0.8]
\tikzstyle{xxx}=[dashed,thick]
\fill[blue!20](-3,-0)--(4.5,2.5)--(4.5,-2.5)--cycle;
\draw[red,fill=blue!20] (5,0) circle (2.5cm);
\filldraw (-3,0) circle (2pt);
\draw (5,0) circle (2.5cm);
\draw[thick, -](-3,0)--(4.5,2.5);
\draw[thick, -](-3,0)--(4.5,-2.5);
\draw (-3,0) circle (3cm);
\draw (-0.69,0) circle (0.7cm);
\draw[thick, <->](-3,-3.5)--(7.5,-3.5);
\draw[thick, <->](-3,3.5)--(0,3.5);
\draw[thick, <->] (5,0)--(5,2.5);
\draw[thick, <->] (-0.69,0)--(-0.69,0.7);
\node at (-1.4,0.95) {$\tau(A)$};
\node at (5.6,1.2) {$\tau(P)$};
\node at (2,-4) {$z$};
\node at (-1.5,4) {$\gamma = \frac{z\tau(A)}{\tau(P)}$};
\node at (8.5,0) {$\subset P$};
\node at (-3.5,0) {$x$};
\node at (5,-0.25) {$v$};
\end{tikzpicture}
\caption{Proof of Lemma \ref{lemma:cones-nets}
\label{fig:thickness-to-distance}
}
}
\end{figure}
\begin{lemma}\label{lemma:thick-diam-simplex}
$Diam(\Delta^m) = \sqrt{2}$ and $\tau(\Delta^m) \geq \frac{1}{m + \sqrt{m}}$.
\end{lemma}
\begin{proof}
For the first part of the statement, fix $x \in \Delta^m$ and consider the function $f_x(z) = \|x - z\|_2^2$. This function is convex, and so attains its maximum over $\Delta^m$ at a vertex. It thus follows that the distance between two points in $\Delta^m$ is maximised when both are vertices. This distance is in turn maximal when both points are vertices other than the zero vector, in which case they are at distance $\sqrt{2}$ from each other.
As for the second part of the claim, we explicitly construct an inscribed sphere of the desired radius. Let $\lambda = \frac{1}{m + \sqrt{m}}$, and define $x = \lambda \vec{1} \in \Delta^m$. Clearly $B_\lambda(x)$ is tangent to $\Delta^m$ on the axis-aligned faces (those on which the $i$-th coordinate vanishes). The remaining face is given by the set of $z$ in the positive orthant such that $\|z\|_1 = 1$. The most extremal point of $B_\lambda(x)$ in the direction of this face is $\frac{1}{m}\vec{1}$, which lies on the face; hence the sphere is inscribed in $\Delta^m$.
\end{proof}
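The inscribed ball constructed in the proof can be checked numerically; the following sketch (illustrative only, not part of the proof) verifies that the distance from $\lambda\vec{1}$ to every facet of $\Delta^m$ equals $\lambda = \frac{1}{m+\sqrt{m}}$ for a range of dimensions.

```python
# A numerical check (illustrative, not part of the proof) of the inscribed
# ball constructed above: for x = lambda * (1,...,1) with
# lambda = 1/(m + sqrt(m)), the distance from x to every facet of Delta^m
# equals lambda, so B_lambda(x) is tangent to all facets, hence inscribed.
import math

for m in range(1, 8):
    lam = 1.0 / (m + math.sqrt(m))
    x = [lam] * m
    # Distance to each coordinate facet {z_i = 0} is the i-th coordinate.
    facet_dists = list(x)
    # Distance to the facet {sum z_i = 1}, with unit normal (1/sqrt(m))*(1,...,1).
    facet_dists.append((1.0 - sum(x)) / math.sqrt(m))
    assert all(abs(d - lam) < 1e-12 for d in facet_dists)
```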
\begin{figure}[h]
\center{
\begin{tikzpicture}[scale=1]
\tikzstyle{xxx}=[dashed,thick]
\draw[thick, -] (0,0)--(0,5)--(5,0)--(0,0);
\draw[dashed] (0,0)--(2.5,2.5);
\filldraw (1.464,1.464) circle (2pt);
\draw (1.464,1.464) circle (1.464cm);
\draw[thick, <->] (1.464,1.464)--(1.464,0);
\draw[thick, <->] (0,1.464)--(1.464,1.464);
\node at (-1,1.464) {$\frac{1}{m + \sqrt{m}}$};
\node at (1.464,-0.5) {$\frac{1}{m + \sqrt{m}}$};
\node at (2.85,2.85) {$\frac{1}{m}\vec{1}$};
\end{tikzpicture}
\caption{Proof of Lemma \ref{lemma:thick-diam-simplex}
\label{fig:simplex-thickness}
}
}
\end{figure}
\begin{lemma}\label{lemma:thickness-to-distance}
Suppose that $\mathscr{P}$ is an $(m,n)$-polytope partition. Furthermore suppose that $\hat{\mathscr{P}}$ is an empirical labelling with $\tau(\hat{P}_\bot) < \varepsilon$. For any $\gamma > \sqrt{2}(m + \sqrt{m})\varepsilon$, it follows that $\hat{\mathscr{P}}$ is a $\gamma$-close labelling. In particular, if $\gamma > 4m\varepsilon$, the claim also holds.\footnote{An identical result which may be of separate interest holds if we consider partitions of arbitrary $m$-dimensional convex polytopes (not just $\Delta^m$ as per the definition of polytope partitions). As long as we can bound the thickness and diameter of the ambient convex polytope, learning in thickness translates to learning in distance.}
\end{lemma}
\begin{proof}
From Lemma \ref{lemma:thick-diam-simplex}, we know that $\tau(\Delta^m) \geq \frac{1}{m + \sqrt{m}}$ and $Diam(\Delta^m) = \sqrt{2}$. Suppose that $x \in \hat{P}_\bot$. From Lemma \ref{lemma:cones-nets}, our choice of $\gamma$ implies $B_\gamma(x) \cap (\Delta^m \setminus \hat{P}_\bot) \neq \emptyset$. This in turn means that $\hat{\mathscr{P}}$ is a $\gamma$-close labelling. The final claim holds since $m \geq 1$.
\end{proof}
\section{Constant-Dimension Generalised Binary Search for \texorpdfstring{$Q_\ell$}{}}
Let us first build some intuition for why generalisations of binary search lead to query-efficient algorithms for computing $\varepsilon$-close labellings of $(m,n)$-polytope partitions.
Finding an $\varepsilon$-close labelling of a $(1,n)$-polytope partition using a lexicographic oracle is the same as approximately learning $n$ sub-intervals of $[0,1]$. Using binary search, we can compute an $\varepsilon$-close labelling with an optimal $O(n \log (\frac{1}{\varepsilon}))$ queries.
Query efficiency comes from the fact that if $x,y$ have the same label, it becomes unnecessary to further query any point in $[x,y]$. To be more specific, unless $[x,y]$ contains the boundary of a sub-interval, all labels can be inferred within $[x,y]$. Boundary points of intervals thus serve as ``critical points'' with respect to the query oracle $Q_\ell$, where the information it provides changes.
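The one-dimensional mechanism can be sketched in a few lines of Python (the function and variable names are ours, and the snippet re-queries endpoints rather than caching answers, purely for brevity): binary search is performed only between learned points whose labels disagree, isolating each boundary to within the tolerance.

```python
# A hedged one-dimensional sketch: learn the boundaries of sub-intervals of
# [0,1] to tolerance eps using a lexicographic-style oracle; names ours.

def learn_intervals(oracle, eps):
    """Return sorted labelled points [(x, label), ...]; between two learned
    points with equal labels, all labels are inferred without queries."""
    learned = [(0.0, oracle(0.0)), (1.0, oracle(1.0))]
    stack = [(0.0, 1.0)]
    while stack:
        a, b = stack.pop()
        # Only sub-intervals whose endpoints disagree can hide a boundary.
        if oracle(a) != oracle(b) and b - a > eps:
            mid = (a + b) / 2
            learned.append((mid, oracle(mid)))
            stack += [(a, mid), (mid, b)]
    return sorted(learned)

# Ground truth: three intervals with boundaries at 0.3 and 0.7; the label of
# y is the number of boundaries strictly below it.
def oracle(y):
    return sum(1 for c in [0.3, 0.7] if y > c)

pts = learn_intervals(oracle, 1e-3)
# Each true boundary ends up bracketed within eps by learned points.
assert any(abs(x - 0.3) <= 1e-3 for x, _ in pts)
assert any(abs(x - 0.7) <= 1e-3 for x, _ in pts)
```

A real implementation would cache oracle answers, in which case the query count matches the $O(n \log(\frac{1}{\varepsilon}))$ bound mentioned above.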
We will use a higher-dimensional analogue of this property at the core of CD-GBS. At a high level, suppose that we have an $(m,n)$-polytope partition that we want to learn via queries and an algorithm for computing arbitrary $\varepsilon$-close labellings of $(m-1,n)$-polytope partitions. We can use this algorithm as a subroutine on the cross-sections of two coordinates $x \neq y$ and ask whether the convex combination of these two $\varepsilon$-close labellings will itself result in a $g(\varepsilon)$-close labelling of $\mathscr{P}^{x,y}$ (recall Definition \ref{def:cross-section}) for a reasonable $g$.
Suppose that we could compute $0$-close labellings (i.e.\ perfectly recover a polytope partition). If we let $\mathscr{P}_V$ be the set of all vertices of all $P_i$ in the polytope partition, then $\pi(\mathscr{P}_V)$ is a suitable set of critical points (though not necessarily the smallest one), in the sense that if $[x,y] \cap \pi(\mathscr{P}_V) = \emptyset$, then the convex combination of the lower-dimensional $0$-close labellings for $\mathscr{P}^x$ and $\mathscr{P}^y$ results in a $0$-close labelling for $\mathscr{P}^{x,y}$. Taking the contrapositive, if the convex combination does not result in a $0$-close labelling ---a condition which can be verified--- then we know there is a critical point in $[x,y]$. We thus recover a binary search mechanism, whereby we can isolate critical points up to a desired tolerance $\varepsilon$.
\subsection{Warm-up: Learning Slices of Single Polytopes}
We set up important groundwork by focusing on arbitrary polytopes $P \subset \mathbb{R}^m$. We let $\pi:\mathbb{R}^m \rightarrow \mathbb{R}$ be the projection function introduced in Definition \ref{def:cross-section}, and we recall Definition \ref{def:vertex-critical-pts} regarding the vertex critical coordinates of $P$ denoted by $C_P$.
\begin{lemma} \label{lemma:perfect-fleshing}
Suppose that $x,y \in \mathbb{R}$ are such that $[x,y] \bigcap C_P = \emptyset$. Then taking convex hulls of cross-sections we get $Conv(P^x, P^y) = P^{x,y}$.
\end{lemma}
\begin{proof}
$[x,y] \cap C_{P} = \emptyset$ implies that the vertices of the polytope $P^{x,y}$ lie in $P^x$ and $P^y$. Since the convex hull of the set of all vertices of a bounded polytope is the polytope itself, the claim follows.
\end{proof}
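A small numeric illustration (our own example, echoing Figure \ref{fig:perfect-fleshing-2}): for the triangle with vertices $(0,0)$, $(2,0)$, $(1,1)$, vertical cross-sections are segments $[0,h(x)]$, and taking convex hulls of two cross-sections recovers the slice exactly when no vertex coordinate lies between them, but can fail otherwise.

```python
# A hedged check of the lemma on the triangle with vertices (0,0), (2,0),
# (1,1): the x-cross-section is the segment [0, h(x)], and the hull of two
# cross-sections has linearly interpolated heights. Names are ours.

def h(x):
    return x if x <= 1 else 2 - x   # height of the cross-section at x

def hull_height(a, b, x):
    """Height at x of Conv(P^a, P^b): linear interpolation of h(a), h(b)."""
    t = (x - a) / (b - a)
    return (1 - t) * h(a) + t * h(b)

# No vertex x-coordinate lies in [0.2, 0.8], so the hull of the two
# cross-sections recovers the slice exactly.
assert all(abs(hull_height(0.2, 0.8, x) - h(x)) < 1e-12
           for x in [0.3, 0.5, 0.7])

# The vertex at x = 1 lies in (0.5, 1.5): the hull now undershoots the slice.
assert hull_height(0.5, 1.5, 1.0) == 0.5 < h(1.0) == 1.0
```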
\begin{figure}[h]
\center{
\begin{tikzpicture}[scale=0.6]
\tikzstyle{xxx}=[dashed,thick]
\fill[blue!20](-4,2.1)--(-2,2.3)--(-2,6.1)--(-4,6.3)--cycle;
\fill[blue!20](-9,3)--(-5,2.1)--(-5,6.35)--(-9,5.5)--cycle;
\draw[thick, -](-6,2)--(0,2.5);
\draw[thick, -](0,2.5)--(4,4);
\draw[thick, -](4,4)--(-1,6);
\draw[thick, -](-1,6)--(-7,6.5);
\draw[thick, -](-7,6.5)--(-12,4);
\draw[thick, -](-12,4)--(-6,2);
\draw[dashed](-12,1)--(-12,4);
\draw[dashed](-7,1)--(-7,6.5);
\draw[dashed](-6,1)--(-6,2);
\draw[dashed](-1,1)--(-1,6);
\draw[dashed](0,1)--(0,2.5);
\draw[dashed](4,1)--(4,4);
\draw[dashed](-10.6,1)--(-10.6,3.6);
\draw[dashed](2.5,1)--(2.5,3.5);
\draw[thick, <->](-12,1)--(4,1);
\node at (-12,0.5){$v_1$};
\node at (-7,0.5){$v_2$};
\node at (-6,0.5){$v_3$};
\node at (-1,0.5){$v_4$};
\node at (0,0.5){$v_5$};
\node at (4,0.5){$v_6$};
\node at (-10.5,0.5){$l_\alpha(P)$};
\node at (2.5,0.5){$r_\alpha(P)$};
\draw[thick, -](-4,0.8)--(-4,1.2);
\draw[thick, -](-2,0.8)--(-2,1.2);
\node at (-4,1.5){$c$};
\node at (-2,1.5){$d$};
\node at (-3,4.5){$P^{c,d}$};
\draw[thick, -](-9,0.8)--(-9,1.2);
\draw[thick, -](-5,0.8)--(-5,1.2);
\node at (-9,1.5){$a$};
\node at (-5,1.5){$b$};
\node at (-6.9,4.5){$P^{a,b}$};
\draw[thick, <->](-10.6,3.6)--(-10.6,4.6);
\node at (-10,4.1){$\alpha$};
\draw[thick, <->](2.5,3.5)--(2.5,4.5);
\node at (2,4.1){$\alpha$};
\end{tikzpicture}
\caption{$Conv(P^a, P^b) \neq P^{a,b}$ and $Conv(P^c,P^d) = P^{c,d}$}
\label{fig:perfect-fleshing-2}
}
\end{figure}
This property of polytopes, whereby convex combinations give rise to complete information except when traversing a discrete set of critical points (visualised in Figure
\ref{fig:perfect-fleshing-2}), is central to CD-GBS. With query access to polytopes, however, we no longer recover $P^x$ perfectly, but instead obtain an approximation given by an $\varepsilon$-close labelling, $\hat{P}^x$. It becomes more subtle to show that by taking convex hulls of $\hat{P}^x$ and $\hat{P}^y$, we recover the desired information along $[x,y]$.
\subsection{Necessary Machinery}\label{sec:machinery}
We delve into the specifics of CD-GBS by defining some important machinery. We recall our notion of thickness in Definition \ref{def:thickness}, and see that it satisfies a sub-additivity property when the sets being considered are convex polytopes:
\begin{lemma}\label{lemma:subadd}
Let $P_1,..,P_k \subseteq \mathbb{R}^m$ be convex polytopes. Then $\tau \left( \cup_i P_i \right) < \frac{10}{3}(\sum_i \tau(P_i)) (m+1)^{3/2}$.
\end{lemma}
\begin{proof}
Let $R = \frac{10}{3}(\sum_i \tau(P_i)) (m+1)^{3/2}$. Suppose that $x \in \cup_i P_i$. We will show that $B_R(x)$ cannot be a subset of $\cup_i P_i$ via a volume argument. For this proof, we let $V(A)$ denote the volume of the set $A \subset \mathbb{R}^m$, and $S(m,R)$ denote the volume of the $m$-dimensional ball of radius $R$.
First of all, we need to show that for a given $P_i$, we have the following volume bound:
$$
V(P_i \cap B_R(x)) \leq 2m\,\tau(P_i) S(m-1,R).
$$
This follows from Fritz John's theorem \cite{Fritz}, as presented in \cite{BallConvex}.
The theorem says that if $K\subseteq \mathbb{R}^m$ is a convex body, then there exists a unique ellipsoid of maximal volume $\mathcal{E}\subseteq K$, with the property that $\mathcal{E} \subseteq K \subseteq m \mathcal{E}$. An ellipsoid in $\mathbb{R}^m$ has $m$ principal axes, and for $\mathcal{E}$, the smallest semi-axis must be at most the thickness $\tau(K)$ of the convex body (otherwise there would be a sphere of radius larger than $\tau(K)$ inscribed in $\mathcal{E}$, contradicting the definition of thickness). Furthermore, since $K \subseteq m \mathcal{E}$, the projection of $K$ onto this axis must be contained in a segment of length at most $2m \tau(K)$. This means that if we take an arbitrary polytope $P_i$, there exist two parallel supporting hyperplanes to $P_i$, call them $H_1$ and $H_2$, that are at most $2m\tau(P_i)$ apart. Call the slab between these hyperplanes $H$. The volume of $H \cap B_R(x)$ is bounded by the distance between $H_1$ and $H_2$ multiplied by $S(m-1,R)$, which is at most $2m\tau(P_i) S(m-1,R)$. Since $V(P_i \cap B_R(x)) \leq V(H \cap B_R(x))$, the claim holds.
Now, taking a union bound over all $i$, we get $V((\cup_i P_i) \cap B_R(x)) \leq 2m\sum_i\tau(P_i) S(m-1,R)$. If the right hand side is strictly less than $S(m,R)$, then $B_R(x)$ contains points not contained in any $P_i$, as desired. To this end, we use the following ratio:
$$
\frac{S(m,R)}{S(m-1,R)} = R \sqrt{\pi} \frac{\Gamma(\frac{m+1}{2})}{\Gamma(\frac{m+2}{2})} \geq 0.6 R (m+1)^{-1/2}.
$$
The inequality uses Stirling's formula for the gamma function. The desired volume bound $2m\sum_i \tau(P_i)\, S(m-1,R) < S(m,R)$ therefore holds whenever $R > \frac{10}{3} m (m+1)^{1/2} \sum_i \tau(P_i)$, which our choice of $R$ satisfies since $(m+1)^{3/2} > m(m+1)^{1/2}$.
\end{proof}
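The ball-volume ratio used in the proof can be sanity-checked numerically. The following snippet (illustrative only, not part of the proof) evaluates the ratio with $R = 1$ and confirms the stated lower bound over a range of dimensions.

```python
# A quick numerical sanity check (illustrative only) of the ball-volume
# ratio used in the proof: with R = 1,
#   S(m,1)/S(m-1,1) = sqrt(pi) * Gamma((m+1)/2) / Gamma((m+2)/2),
# which is at least 0.6 * (m+1)^(-1/2) for every m checked below.
import math

for m in range(1, 200):
    ratio = math.sqrt(math.pi) * math.gamma((m + 1) / 2) / math.gamma((m + 2) / 2)
    assert ratio >= 0.6 / math.sqrt(m + 1)
```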
For a given polytope partition $\mathscr{P} = \{P_i\}_i$, it will be important to establish thickness bounds on $P_i$ at specific cross-sections.
\begin{definition}[$\alpha$-Critical Coordinates]\label{def:critical-thick}
Let $P \subset \mathbb{R}^m$ be a polytope. For $\alpha > 0$, we define $l_\alpha(P) = \inf \{ x\in \mathbb{R} \ | \ \tau(P^x) \geq \alpha\}$ and $r_\alpha(P) = \sup \{ x\in \mathbb{R} \ | \ \tau(P^x) \geq \alpha\}$ so that $\forall z \in \mathbb{R}$, $\tau(P^z) \geq \alpha$ if and only if $z \in [l_\alpha(P), r_\alpha(P)]$ (Here thickness is with respect to the natural embedding of $P^x$ in $\mathbb{R}^{m-1}$). These are called {\em $\alpha$-critical coordinates} for $P$.
\end{definition}
The previous definition allows us to associate to each polytope $P_i$ a segment of $[0,1]$ within which cross-sections of $P_i$ are thick above a threshold. By combining this with Definition \ref{def:vertex-critical-pts} we get the correct notion of critical coordinates mentioned at the beginning of Section \ref{sec:discrete-nash}.
\begin{definition}[Critical Coordinates of a $(m,n)$-Polytope Partition]\label{def:critical-partition}
Suppose that $\mathscr{P} = \{P_1,...,P_n\}$ is an $(m,n)$-polytope partition. For $\alpha > 0$, we let $C_\mathscr{P}^\alpha$ be the union of the sets of all vertex critical coordinates of all $P_i$ as defined in Definition \ref{def:vertex-critical-pts}, and the set of all $\alpha$-critical coordinates for all $P_i$ as in Definition \ref{def:critical-thick}. Specifically, $C_\mathscr{P}^\alpha = \left( \cup_i C_{P_i} \right) \bigcup \left( \cup_i \{l_\alpha(P_i), r_\alpha(P_i) \} \right)$.
\end{definition}
As mentioned in the beginning of this section, CD-GBS clusters queries around critical coordinates (up to a desired tolerance). For this reason it is important to bound the number of critical coordinates in a given $(m,n)$-Polytope partition.
\begin{lemma}\label{lemma:bound-critical-cardinality}
If $\mathscr{P}$ is an $(m,n)$-polytope partition, then $|C_\mathscr{P}^\alpha| \leq \binom{n + m} {m} + 2n$.
\end{lemma}
\begin{proof}
For any given $(m,n)$-polytope partition $\mathscr{P}$, if a vertex occurs, it must be the case that $m$ out of the $n$ polytopes in $\mathscr{P}$ and the boundary halfspaces of $\Delta^m$ meet there. Furthermore, each collection of $m$ polytopes and boundary halfspaces can give rise to only one vertex (a consequence of the fact that vertices are points in $\Delta^m$). It follows that the number of vertex critical coordinates is at most $\binom {n + m} {m}$, and the first part of the bound holds. As for the second half, there are at most two $\alpha$-critical coordinates per $P_i$, which completes the expression above.
\end{proof}
With this machinery in hand, we are in a position to prove the main result necessary to demonstrate correctness of CD-GBS. We show that if $x,y \in [0,1]$ are such that $[x,y]$ contains no critical coordinates, then computing sufficiently fine empirical labellings of $\mathscr{P}^x$ and $\mathscr{P}^y$ with $Q_\ell$ will contain enough information to compute an $\varepsilon$-close labelling of $\mathscr{P}^{x,y}$ by simply taking convex combinations of the empirical labellings at both cross-sections.
\begin{lemma}\label{lemma:crux-for-gbs}
Given $m,n,\varepsilon>0$ let $\alpha = \frac{\varepsilon}{20 n m^{5/2}} $ and $\beta = \frac{\varepsilon^2}{85 n m^{5/2}} $.
Suppose that $\mathscr{P}$ is an $(m,n)$-polytope partition and that the following hold:
\begin{itemize}
\item $x,y \in [0,1]$ are such that $x < y \leq 1 - \frac{\varepsilon}{3}$.
\item $[x,y] \cap C_{\mathscr{P}}^\alpha = \emptyset$.
\item $\hat{\mathscr{P}}^x$ and $\hat{\mathscr{P}}^y$ are empirical labellings of $\mathscr{P}^x$ and $\mathscr{P}^y$ computed via $Q_\ell$, such that $\cup_i \hat{P}_i^x$ and $\cup_j \hat{P}_j^y$ are $\beta$-nets for $(\Delta^m)^x$ and $(\Delta^m)^y$ respectively.
\end{itemize}
Then $\bigcup_i Conv(\hat{P}^x_i, \hat{P}^y_i)$ is an $\varepsilon$-net of $(\Delta^m)^{x,y}$.
\end{lemma}
\begin{proof}
Let us define the following:
$$
U = \{i \in [n] \ | \ [l_\alpha(P_i), r_\alpha(P_i)] \cap [x,y] = \emptyset \}
$$
$$
V = \{i \in [n] \ | \ [x,y] \subsetneq [l_\alpha(P_i), r_\alpha(P_i)]\}
$$
We call $U$ the set of $\alpha$-insignificant polytopes and $V$ the set of $\alpha$-significant polytopes. From the fact that $[x,y]$ contains no critical coordinates, we know that $U \cup V = [n]$ and from Lemma \ref{lemma:degenerate-cross-section}, we also know that for all $z \in [x,y]$, $\mathscr{P}^z$ is non-degenerate. We proceed by proving the following claims:
\begin{enumerate}
\item $V \neq \emptyset$.
\item Any point in the cross-section of an $\alpha$-insignificant polytope is $\frac{2\varepsilon}{3}$ close to an $\alpha$-significant polytope (within that same cross-section).
\item If $e \in P_j^x \setminus \left( \bigcup_{i=1}^n \hat{P}_i^x \right)$, and $j \in V$, then there exists a $e' \in \hat{P}_j^x$ such that $\|e-e'\|_2 < \frac{\varepsilon}{3}$.
\item If $w \in P_j^z$ for some $j \in V$ and $z \in [x,y]$, then there exists a $w' \in Conv(\hat{P}_j^x, \hat{P}_j^y) \cap \pi^{-1}(z)$ such that $\|w-w'\|_2 < \frac{\varepsilon}{3}$.
\end{enumerate}
Claims (2) and (4) suffice to prove the lemma. To see this, suppose that $w \in (\Delta^m)^{x,y}$. This means that $w \in P_i^z$ for some $i \in [n]$ and $z \in [x,y]$. If $i \in V$, then from (4) there exists $w' \in Conv(\hat{P}_i^x, \hat{P}_i^y) \subset \bigcup_i Conv(\hat{P}^x_i, \hat{P}^y_i)$ such that $\|w-w'\|_2 < \frac{\varepsilon}{3}$. On the other hand, if $i \in U$, then by (2) there exists $w' \in P_j^z$ for some $j \in V$ such that $\|w-w'\|_2 < \frac{2\varepsilon}{3}$. In turn, by (4), there exists $w'' \in Conv(\hat{P}_j^x, \hat{P}_j^y) \subset \bigcup_i Conv(\hat{P}^x_i, \hat{P}^y_i)$ such that $\|w'-w''\|_2 < \frac{\varepsilon}{3}$. Using the triangle inequality, $\|w-w''\|_2 < \varepsilon$, and hence $\bigcup_i Conv(\hat{P}^x_i, \hat{P}^y_i)$ is an $\varepsilon$-net of $(\Delta^m)^{x,y}$ in the $\ell_2$ norm, as desired.
Let us prove statement (1). We know that if $i \in U$, then for all $z\in [x,y]$ it holds that $\tau(P_i^z) \leq \alpha$. Using the subadditivity bound from Lemma \ref{lemma:subadd}, we see $\tau( \cup_{i \in U} P_i^z ) \leq \frac{10n \alpha m^{3/2}}{3} = \frac{\varepsilon}{6m}$.
On the other hand, we also know that $\cup_{i \in U} P_i^z \subset (\Delta^m)^z \cong (1-z) \Delta^{m-1}$. From Lemma \ref{lemma:thick-diam-simplex}, we know $\tau((\Delta^m)^z) \geq \frac{1-z}{((m-1) + \sqrt{m-1})}$. It follows that if $\frac{\varepsilon}{6m} < \frac{1-z}{((m-1) + \sqrt{m-1})}$, then $\cup_{i \in U} P_i^z \neq (\Delta^m)^z$. The condition $y \leq 1 - \frac{\varepsilon}{3}$ ensures that this happens for all $z \in [x,y]$. This in turn implies $V \neq \emptyset$.
Let us prove statement (2). From Lemma \ref{lemma:thick-diam-simplex}, we know $\frac{Diam((\Delta^m)^z)}{\tau((\Delta^m)^z)} \leq \sqrt{2}((m-1) + \sqrt{m-1})$. We can apply Lemma \ref{lemma:cones-nets} in exactly the same fashion as Lemma \ref{lemma:thickness-to-distance} to get $\gamma_1 = \frac{2\varepsilon}{3} > \left(\frac{10n \alpha m^{3/2}}{3} \right) 4(m-1)$. It follows that if $w \in \cup_{i \in U} P_i^z$, then $\exists w' \in \cup_{i \in V} P_i^z$ such that $\|w - w'\|_2 < \gamma_1 = \frac{2\varepsilon}{3}$, which is what we wanted to show.
Let us prove statement (3). Let us define $(\hat{P}_j^x)_\bot = P_j^x \setminus \left( \bigcup_{i \in [n]} \hat{P}_i^x \right)$. Note that these are the points in $P_j^x$ that do not have any label whatsoever under the empirical labelling at $x$. Importantly, some points could have a label other than $j$ if these points are on the boundary of another polytope with a label that has higher priority in the lexicographic ordering. By the fact that we have a $\beta$-close labelling of $\mathscr{P}^x$, it must hold that $\tau((\hat{P}_j^x)_\bot) \leq \beta$. Also, since $P_j^x \subset (\Delta^m)^x$, we know $Diam(P_j^x) \leq \sqrt{2}$ from Lemma \ref{lemma:thick-diam-simplex}. Since $j \in V$, we also know that $\tau(P_j^x) \geq \alpha$, hence $\tau((\hat{P}_j^x)_\bot ) \leq \beta < \alpha \leq \tau(P_j^x)$ which in turn implies that $int(P_j^x) \setminus ( \hat{P}_j^x )_\bot \neq \emptyset$. Let $\eta^* = \frac{1}{2} \left( \frac{\varepsilon}{3} - \left( \frac{\sqrt{2}}{\alpha} \right) \beta \right) > 0$ and let $\gamma_2 = \frac{\varepsilon}{3} - \eta^* > \left(\frac{\sqrt{2}}{\alpha}\right) \beta$ (the addition of the $\eta^*$ gap is to help with the proof of statement (4)). We can use the second part of Lemma \ref{lemma:cones-nets} to see $B_{\gamma_2}(e) \cap \left( int(P_j^x) \setminus (\hat{P}_j^x)_\bot \right) \neq \emptyset$. Since all points in $int(P_j^x)$ only belong to $P_j$, it follows that under the lexicographic oracle one only sees the label $j$ for those points. This implies that $B_{\gamma_2}(e) \cap \hat{P}_j^x \neq \emptyset$, which in turn implies $\exists e' \in \hat{P}_j^x$ such that $\|e-e'\|_2 < \gamma_2 = \frac{\varepsilon}{3} - \eta^* < \frac{\varepsilon}{3}$ as desired.
Finally, we prove statement (4). Since $[x,y]$ contains no critical coordinates, from Lemma \ref{lemma:perfect-fleshing} we know that $Conv(P_j^x,P_j^y) = P_j^{x,y}$, which means that there exist $a \in P_j^x$ and $b \in P_j^y$ such that $w \in Conv(a,b)$; to be precise, $w = Conv(a,b) \cap \pi^{-1}(z)$. If $a \in \hat{P}_j^x$ and $b \in \hat{P}_j^y$, then we are done, so suppose this is not the case and focus on $a$. If $a \in (\hat{P}_j^x)_\bot$, statement (3) gives some $a' \in \hat{P}_j^x$ such that $\|a-a'\|_2<\frac{\varepsilon}{3}$. If $a \notin (\hat{P}_j^x)_\bot \cup \hat{P}_j^x$, then it must be the case that $a \in \hat{P}_k^x \cap P_j^x$ for some other $k \in [n]$. This can only happen if $a \in P_j\cap P_k$ for some $P_k \neq P_j$, by the second property of polytope partitions from Definition \ref{def:polytope-partition} and the fact that under the lexicographic query oracle, if $P_j = P_k$ and $j<k$, then $\hat{P}_k = \emptyset$ always. Invoking the second property of Definition \ref{def:polytope-partition} again, we see that $a$ lies on a bounding hyperplane of $P_j$, which in turn means that for every $\delta > 0$, $B_\delta(a) \cap int(P_j^x) \neq \emptyset$. Let us thus consider $\delta^* = \min\{ \frac{\varepsilon}{3}, \frac{\eta^*}{2} \}$, where $\eta^*$ is defined as in the previous paragraph, and let $x^*$ be a point in $B_{\delta^*}(a) \cap int(P_j^x)$. Either $x^* \in \hat{P}_j^x$ or $x^* \in (\hat{P}_j^x)_{\bot}$. In the former case, since $\delta^* \leq \frac{\varepsilon}{3}$, we have found $a' = x^* \in \hat{P}_j^x$ such that $\|a-a'\|_2 < \frac{\varepsilon}{3}$. In the latter case, since $x^* \in (\hat{P}_j^x)_{\bot}$, the proof of statement (3) yields $a' \in \hat{P}_j^x$ such that $\|x^*-a'\|_2 < \frac{\varepsilon}{3} - \eta^*$; since $\delta^* \leq \frac{\eta^*}{2}$, the triangle inequality gives $\|a-a'\|_2 < \frac{\varepsilon}{3}$. In either case, we have found the required $a'$.
The same argument shows that there exists $b' \in \hat{P}_j^y$ such that $\|b-b'\|_2 < \frac{\varepsilon}{3}$. If we let $w' = Conv(a',b') \cap \pi^{-1}(z)$, then since $a,a'$ lie on the cross-section at $x$ and $b,b'$ on the cross-section at $y$, the point $w'$ uses the same convex weight as $w$, so $\|w-w'\|_2 \leq \max\{\|a-a'\|_2, \|b-b'\|_2\} < \frac{\varepsilon}{3}$. Hence $w'$ satisfies the requirements of statement (4), and we have finished our proof.
\end{proof}
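The interpolation step in statement (4) admits a simple closed form: since $\pi$ is projection onto the first coordinate, the point of $Conv(a,b)$ lying over a given cross-section coordinate $z$ is determined by a single convex weight. A minimal Python sketch (the function name and sample points are ours, purely illustrative):

```python
def segment_at_cross_section(a, b, z):
    """Return the point of the segment Conv(a, b) whose first
    coordinate (the pi-projection) equals z.

    a, b: points with a[0] <= z <= b[0] and a[0] < b[0].
    """
    t = (z - a[0]) / (b[0] - a[0])  # convex weight on b
    return tuple((1 - t) * ai + t * bi for ai, bi in zip(a, b))

# a lies on the cross-section at x = 0.2, b on the one at y = 0.6
a = (0.2, 0.5, 0.1)
b = (0.6, 0.1, 0.2)
w = segment_at_cross_section(a, b, 0.3)
assert abs(w[0] - 0.3) < 1e-12  # w sits on the cross-section pi^{-1}(0.3)
```

Because both endpoints of $Conv(a',b')$ sit at the same cross-sections as those of $Conv(a,b)$, the two segments share the same convex weight at every $z \in [x,y]$, which is what the triangle-inequality step in the proof uses.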
For the following corollary, suppose that $\mathscr{P}$ is an $(m,n)$-polytope partition and that $0 = t_0 < t_1 < \cdots < t_k = 1$ are points in $[0,1]$. Furthermore, suppose that $\beta = \frac{\varepsilon^2}{85 n m^{5/2}}$ as in Lemma \ref{lemma:crux-for-gbs}. For each $t_j$, if $t_j \notin C^\alpha_{\mathscr{P}}$, let $\hat{\mathscr{P}}^{t_j}$ be a $\beta$-close labelling of $\mathscr{P}^{t_j}$; otherwise let $\hat{\mathscr{P}}^{t_j} = \emptyset$. Let $\hat{\mathscr{P}} = Conv_j(\hat{\mathscr{P}}^{t_j})$ and, for $j = 1,\ldots,k$, let $I_j = [t_{j-1}, t_j]$. If $\hat{\mathscr{P}}^{t_{j-1},t_j}$ is an $\varepsilon$-close labelling of $\mathscr{P}^{t_{j-1},t_j}$, we say that $I_j$ is covered; otherwise we say $I_j$ is uncovered.
\begin{corollary}\label{cor:width-bound}
For any collection of points $\{t_j\}_{j=0}^k$, there are no more than $2|C^\alpha_{\mathscr{P}}|$ intervals $I_j$ that are uncovered.
\end{corollary}
\begin{proof}
Suppose that $I_j$ is uncovered, then one of the following holds:
\begin{itemize}
\item Either $t_{j-1}$ or $t_j$ are in $C^\alpha_{\mathscr{P}}$
\item $t_{j-1}, t_j \notin C^\alpha_{\mathscr{P}}$ yet $Conv(\hat{P}^{t_{j-1}}, \hat{P}^{t_j})$ is not an $\varepsilon$-close labelling of $\mathscr{P}^{t_{j-1}, t_j}$.
\end{itemize}
From the contrapositive of Lemma \ref{lemma:crux-for-gbs}, the latter case implies $I_j \cap C^\alpha_{\mathscr{P}} \neq \emptyset$; hence in either case there is a critical coordinate in $I_j$. In the worst case, each $x \in C^\alpha_{\mathscr{P}}$ lies exactly on some grid point $t_j$, causing both $I_j$ and $I_{j+1}$ to be uncovered. This implies that there are at most $2|C^\alpha_{\mathscr{P}}|$ intervals $I_j$ that are uncovered.
\end{proof}
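The counting in the proof can be sketched directly: an uncovered interval must contain a critical coordinate, and a coordinate landing exactly on a grid point spoils at most the two intervals adjacent to it. A toy illustration (names and values are ours, not from the paper):

```python
def potentially_uncovered(ts, critical):
    """1-based indices j of intervals I_j = [t_{j-1}, t_j] that contain a
    critical coordinate; only such intervals can fail to be covered by
    convex combinations of the endpoint labellings."""
    return [j for j in range(1, len(ts))
            if any(ts[j - 1] <= c <= ts[j] for c in critical)]

ts = [0.0, 0.25, 0.5, 0.75, 1.0]
critical = [0.5]          # worst case: the critical coordinate hits t_2
bad = potentially_uncovered(ts, critical)
assert bad == [2, 3]      # both adjacent intervals are spoiled
assert len(bad) <= 2 * len(critical)
```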
\subsection{Specification of CD-GBS and Query Usage}\label{sec:SBS}
\paragraph{Terms and Notation:} The details of CD-GBS are presented in Algorithm \ref{alg:GBS}. We recall our notation from Definition \ref{def:degenerate-cross-section}, where for $x \in [0,1)$ we defined $f_x: (\Delta^m)^x \rightarrow \Delta^{m-1}$ given by $f_x(x,v_2,...,v_m) = \frac{1}{1-x}(v_2,...,v_m)$. We note that this is a bijection between the two polytopes, hence it is well-defined to use $f_x^{-1}$. In addition, we let $\mathcal{D}^k = \{\frac{i}{2^k} \ | \ 1 \leq i \leq 2^k \}$ be the dyadic fractions of order $k$ in the unit interval (excluding 0). With every $x \in \mathcal{D}^k$ we associate the interval $I_x^k = [x - \frac{1}{2^k}, x]$, and $midpoint(I^k_x)$ denotes its midpoint. We also use the same language as Corollary \ref{cor:width-bound} when we talk about whether $I^k_x$ is covered or not (with respect to the current empirical labelling, $\hat{\mathscr{P}}$, obtained from taking convex hulls of labels in $\Delta^m$). In order to have a well-defined base case of CD-GBS (which is simply binary search), we let $\Delta^0 = \mathbb{R}^0 = \{0\}$. Finally, we say that a point $x \in [0,1]$ is an uncovered critical coordinate if $\hat{\mathscr{P}}^x$ is computed via a recursive call to CD-GBS and, for $(a,b) = B_{\varepsilon/2}(x) \cap [0,1]$, it holds that $\hat{\mathscr{P}}^{a,b}$ is not an $\varepsilon$-close labelling of $\mathscr{P}^{a,b}$.
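The grid bookkeeping above is mechanical; the following sketch (names ours, using exact rational arithmetic to avoid rounding on the dyadic grid) illustrates $\mathcal{D}^k$, the intervals $I_x^k$, and the maps $f_t, f_t^{-1}$:

```python
from fractions import Fraction

def dyadic(k):
    """D^k: dyadic fractions i / 2^k for 1 <= i <= 2^k (0 excluded)."""
    return [Fraction(i, 2**k) for i in range(1, 2**k + 1)]

def interval(x, k):
    """I_x^k = [x - 2^{-k}, x] together with its midpoint."""
    lo = x - Fraction(1, 2**k)
    return (lo, x), (lo + x) / 2

def f(t, v):
    """f_t: drop the fixed first coordinate t and rescale by 1/(1-t)."""
    return tuple(vi / (1 - t) for vi in v[1:])

def f_inv(t, u):
    """f_t^{-1}: prepend t and scale back by (1-t)."""
    return (t,) + tuple((1 - t) * ui for ui in u)

assert dyadic(2) == [Fraction(1, 4), Fraction(1, 2), Fraction(3, 4), Fraction(1, 1)]
_, mid = interval(Fraction(1, 2), 2)
assert mid == Fraction(3, 8)
v = (0.5, 0.1, 0.2)
assert all(abs(a - b) < 1e-12 for a, b in zip(f_inv(0.5, f(0.5, v)), v))
```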
\begin{theorem}\label{thm:genbinsrch-correct}
If CD-GBS is given access to $Q_\ell$ for an $(m,n)$-polytope partition, it computes an $\varepsilon$-close labelling of $\mathscr{P}$ using at most $\left( \prod_{i=1}^m \left( \binom{n+i}{i} + 2n \right) \right) 2^{2m^2}\log^m \left( \frac{170 n m^{5/2}}{\varepsilon} \right)$ membership queries. For constant $m$ this constitutes $O(n^{m^2}\log^m \left( \frac{n}{\varepsilon} \right) ) = poly(n,\log \left( \frac{1}{\varepsilon} \right))$ queries.\footnote{CD-GBS runs in polynomial time for constant $m$. The time-intensive operation is identifying uncovered intervals, but since the dimension of the ambient simplex is constant, each empirical polytope $\hat{P}_i$ has at most a constant number of bounding hyperplanes. These hyperplanes can each be extruded by $\varepsilon$, and checking whether there exists a point outside all these extrusions can be done in time polynomial in $n$ via brute force. In fact, all other algorithms in this paper have efficient runtimes (in their relevant parameters) due to similar reasoning.}
\end{theorem}
\begin{proof}
We first prove that CD-GBS indeed computes an $\varepsilon$-close labelling when given access to a valid $Q_\ell$ by inducting on $m$. It is straightforward to see that in the case $m=1$, if CD-GBS is given access to a valid $Q_\ell$ for a $(1,n)$-polytope partition (a partition of the unit interval into connected subintervals), then it simply performs binary search on the interval $[0,1] \cong \Delta^1$.
As for the inductive step, for $k = \lceil \log (2/\varepsilon) \rceil$, any two contiguous points of $\mathcal{D}^k$ are less than $\varepsilon/2$ away from each other. For now, suppose that every recursive call to CD-GBS was along a non-degenerate cross-section $\mathscr{P}^t$. From the inductive assumption, this means that CD-GBS computes $\varepsilon/2$-close labellings of those cross-sections; using the triangle inequality, we know that $\hat{\mathscr{P}}$ is an $\varepsilon$-close labelling of $\mathscr{P}$.
We note, however, that there is no guarantee for what a recursive call to CD-GBS does on a degenerate cross-section $\mathscr{P}^t$. For this reason, it could be the case that at the end of the loop over $\mathcal{D}^k$, $\hat{\mathscr{P}}$ is not an $\varepsilon$-close labelling. This can only happen if there is some $t \in C^\alpha_{\mathscr{P}} \cap \mathcal{D}^k$ which is an uncovered critical coordinate.
If $t$ is an uncovered critical coordinate, we can rectify the situation: if we find a $z \in B_{\varepsilon/2}(t)$ that is not a critical coordinate, then $\mathscr{P}^z$ is non-degenerate, and running CD-GBS along this cross-section gives us an $\frac{\varepsilon}{2}$-close labelling of $\mathscr{P}^z$. Using the triangle inequality, we see that this removes $t$ from the set of uncovered critical coordinates, and we say that $t$ is ``fixed''. Thus the final while loop of the algorithm eliminates the set of uncovered critical coordinates, so that $\hat{\mathscr{P}}$ is indeed an $\varepsilon$-close labelling.
It thus remains to show that the final while loop terminates. There are at most $|C^\alpha_\mathscr{P}|$ uncovered critical coordinates, and over the course of fixing all of them, there are at most $|C^\alpha_\mathscr{P}|$ bad guesses for $z \in B_{\varepsilon/2}(t)$ where $\mathscr{P}^z$ is degenerate. Therefore the final while loop makes at most $2|C^\alpha_\mathscr{P}|$ invocations of CD-GBS along cross-sections. This concludes the proof of correctness for CD-GBS.
Let us bound the total query usage of CD-GBS. For all values of $k$ in the first for loop, we know from Corollary \ref{cor:width-bound} that, since $Q_\ell$ is a valid lexicographic oracle for $\mathscr{P}$, the number of uncovered intervals $I_x^k$ will not exceed $2 \left( \binom {n+m}{m} + 2n \right)$; since CD-GBS is called once per uncovered interval, it follows that for each $k$ there are at most $2 \left( \binom {n+m}{m} + 2n \right)$ recursive calls to CD-GBS. Furthermore, since $Q_\ell$ is a valid lexicographic oracle for $\mathscr{P}$, it will never be the case that there exist distinct $i,j \in [n]$ and $z \in \Delta^m$ such that $z \in int(\hat{P}_i) \cap \hat{P}_j$, so the corresponding halting condition of the algorithm is never triggered.
In the worst case, $k$ ranges from 1 to $\lceil \log(2/\varepsilon) \rceil$, and the algorithm makes an extra $2|C^\alpha_{\mathscr{P}}|$ recursive calls to CD-GBS to fix all uncovered critical coordinates. In total, if we let $T(m,n,\varepsilon)$ denote the query cost of running CD-GBS with a valid lexicographic oracle, we get the following recursion:
$$
T(m,n,\varepsilon) \leq \left( 2 |C^\alpha_{\mathscr{P}}| \log \left( \frac{2}{\varepsilon} \right) + 2|C^\alpha_{\mathscr{P}}| \right) T \left(m-1, n, \frac{\varepsilon^2}{85 n m^{5/2}} \right)
$$
In order to make this more tractable, we define $f(m) = \binom{n+m}{m} + 2n$ and use Lemma \ref{lemma:bound-critical-cardinality}, which bounds $|C^\alpha_{\mathscr{P}}|$ by $f(m)$, together with $\log \left( \frac{2}{\varepsilon} \right) \geq 2$ for $\varepsilon \leq 1/2$, to bound this expression as follows:
$$
T(m,n,\varepsilon) \leq 3 f(m) \log \left( \frac{2}{\varepsilon} \right) T \left(m-1, n, \frac{\varepsilon^2}{85 n m^{5/2}} \right)
$$
Furthermore, from the fact that the base case is binary search, we know $T(1,n,\varepsilon) \leq n \log \left( \frac{2}{\varepsilon} \right)$.
To unpack the recursion, let us define $\varepsilon_0 = \varepsilon$ and $\varepsilon_{k+1} = \frac{\varepsilon_k^2}{85n(m-k)^{5/2}}$ for $k = 0,\ldots,m-2$. With this in hand, we can unroll the recursion (absorbing the base case into the products, since $n \leq f(1)$) to obtain:
$$
T(m,n,\varepsilon) \leq \left( 3^{m-1} \prod_{i=1}^{m} f(i) \right)
\left( \prod_{k=0}^{m-1} \log \left( \frac{2}{\varepsilon_k} \right) \right)
$$
Since $\varepsilon_{k+1} < \varepsilon_k$ for each $k$, we can upper bound the right-hand product by replacing each $\varepsilon_k$ with $\varepsilon_{m-1}$. Solving for this value, we obtain:
$$
\varepsilon_{m-1} \geq \frac{\varepsilon^{2^{m-1}}}{\prod_{j=1}^{m-1}(85nj^{5/2})^{2^j}}
\geq \frac{\varepsilon^{2^{m-1}}}{\prod_{j=1}^{m-1}(85nm^{5/2})^{2^j}}
\geq \left( \frac{\varepsilon}{85nm^{5/2}} \right)^{2^m}.
$$
Here we bounded each factor $j^{5/2}$ in the denominator by $m^{5/2}$, and then used $\varepsilon \leq 1$ together with the geometric series bound $\sum_{j=1}^{m-1} 2^j \leq 2^m$ on the exponent. With this in hand we obtain the desired bounds:
$$
T(m,n,\varepsilon) \leq 3^{m} 2^{m^2} \prod_{i=1}^{m} f(i)
\log^m \left( \frac{170nm^{5/2}}{\varepsilon} \right)
\leq \left( \prod_{i=1}^m \left( \binom{n+i}{i} + 2n \right) \right) 2^{2m^2}\log^m \left( \frac{170 n m^{5/2}}{\varepsilon} \right)
$$
Finally, for large enough $n$, every factor in $\prod_{i=1}^m \left( \binom{n+i}{i} + 2n \right)$ is bounded by $(n+m)^m +2n$. It follows that this product is $O(n^{m^2})$, and thus for constant $m$ this constitutes $O(n^{m^2}\log^m \left( \frac{n}{\varepsilon} \right) ) = poly(n,\log \left( \frac{1}{\varepsilon} \right))$ queries.
\end{proof}
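As a numeric sanity check of the bookkeeping in the proof, one can evaluate the recursion directly and compare it against the closed-form bound. The sketch below takes all logarithms base 2 (the paper leaves the base implicit); the function names are ours:

```python
from math import comb, log2

def f(n, i):
    """f(i) = C(n+i, i) + 2n, the per-level bound on recursive calls."""
    return comb(n + i, i) + 2 * n

def T(m, n, eps):
    """The query-cost recursion T(m,n,eps) <= 3 f(m) log(2/eps) T(m-1, ...)."""
    if m == 1:
        return n * log2(2 / eps)
    return 3 * f(n, m) * log2(2 / eps) * T(m - 1, n, eps**2 / (85 * n * m**2.5))

def closed_form(m, n, eps):
    """The stated bound: (prod_i f(i)) * 2^{2 m^2} * log^m(170 n m^{5/2} / eps)."""
    prod = 1
    for i in range(1, m + 1):
        prod *= f(n, i)
    return prod * 2**(2 * m**2) * log2(170 * n * m**2.5 / eps)**m

for (m, n, eps) in [(2, 3, 0.1), (3, 4, 0.01)]:
    assert T(m, n, eps) <= closed_form(m, n, eps)
```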
The previous results show that for constant dimension $m$, CD-GBS is query-efficient in $n$ and $\frac{1}{\varepsilon}$. In the following section we use this algorithm as a building block to construct a method for computing $\varepsilon$-close labellings efficiently when the number of regions, $n$, is held constant instead.
\begin{algorithm}[h]
\caption{CD-GBS$(m,n,\varepsilon, Q)$}
\label{alg:GBS}
\begin{algorithmic}
\item[\algorithmicinput] $m \geq 0, \ n,\varepsilon > 0$, query access to function $Q: \Delta^m \rightarrow [n]$.
\item[\algorithmicoutput] $\hat{\mathscr{P}}$: an $\varepsilon$-close labelling of $\mathscr{P}$.
\IF{$m=0$}
\STATE Query $Q(0)$
\ELSE
\STATE $\hat{\mathscr{P}}^0 \leftarrow f_0^{-1} \left( \text{CD-GBS} \left( m-1,n,\frac{\varepsilon^2}{85 n m^{5/2}}, Q \circ f_0^{-1} \right) \right)$, $\hat{\mathscr{P}}^1 \leftarrow Q(\vec{e}_1)$.
\FOR{$k = 1$ to $\lceil \log(2/\varepsilon) \rceil$}
\IF{Number of uncovered $I_x^k$ exceeds $2 \left( \binom{n + m} {m} + 2n \right)$}
\STATE Halt
\ENDIF
\FOR{$x \in \mathcal{D}^k$}
\IF{$I_x^k$ is uncovered}
\STATE $t \leftarrow midpoint(I_x^k)$
\STATE $\hat{\mathscr{P}}^t \leftarrow f_t^{-1} \left( \text{CD-GBS} \left( m-1,n,\frac{\varepsilon^2}{85(1-t) n m^{5/2}}, Q \circ f_t^{-1} \right) \right)$
\ENDIF
\STATE Recompute $\hat{\mathscr{P}}$ by taking convex hulls of labels
\IF{$\exists i,j \in [n]$ such that $int(\hat{P}_i) \cap \hat{P}_j \neq \emptyset$ or $\hat{\mathscr{P}}$ is an $\varepsilon$-close labelling}
\STATE Halt
\ENDIF
\ENDFOR
\ENDFOR
\WHILE{$\exists x \in [0,1]$ an uncovered critical coordinate }
\STATE $t \leftarrow z$ for arbitrary $z \in B_{\varepsilon/2}(x)$
\STATE $\hat{\mathscr{P}}^t \leftarrow f_t^{-1} \left( \text{CD-GBS} \left( m-1,n,\frac{\varepsilon^2}{85(1-t) n m^{5/2}}, Q \circ f_t^{-1} \right) \right)$
\STATE Recompute $\hat{\mathscr{P}}$ by taking convex hulls of labels
\ENDWHILE
\ENDIF
\RETURN $\hat{\mathscr{P}}$
\end{algorithmic}
\end{algorithm}
\section{Constant-Region Generalised Binary Search for \texorpdfstring{$Q_\ell$}{}}\label{sec:GBS}
In this section we introduce Constant-Region Generalised Binary Search (CR-GBS), which, as the name suggests, is a query-efficient algorithm for computing $\varepsilon$-close labellings of $(m,n)$-polytope partitions when $n$ is constant and $m$ and $\varepsilon$ are allowed to vary.
The intuition behind the algorithm lies in the fact that if $m$ is much greater than $n$ (it suffices that $m > \binom {n} {2}$), then no vertex of a given $P_i$ can lie in the interior of the ambient simplex $\Delta^m$. This is because a vertex in $\Delta^m$ must arise as the intersection of at least $m$ bounding hyperplanes, and adjacencies between different $P_i$ can supply at most $\binom{n}{2} < m$ of them.
Not only do all vertices lie on the boundary of $\Delta^m$; one can also show that they are all contained in faces of the boundary of $\Delta^m$ of dimension $O(n^2)$, which is constant by assumption. The number of such faces in the boundary of $\Delta^m$ is thus polynomial in $m$, and moreover, if we could compute $0$-close labellings of these faces, we could take convex combinations and recover a $0$-close labelling of the entire polytope partition.
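To make the face count concrete: $\Delta^m$ has $m+1$ vertices, so by the standard count for simplices it has $\binom{m+1}{k+1}$ faces of dimension $k$, which is $O(m^{k+1})$ and hence polynomial in $m$ for constant $k$. A small sketch (names ours):

```python
from math import comb

def num_k_faces(m, k):
    """Number of k-dimensional faces of an m-simplex: choose k+1 of its
    m+1 vertices (standard simplex face count)."""
    return comb(m + 1, k + 1)

n = 3                # number of regions, held constant
k = comb(n, 2)       # k = C(n,2) = 3: the dimension of faces to label
for m in (10, 20, 40):
    assert num_k_faces(m, k) <= (m + 1)**(k + 1)   # polynomial in m

assert num_k_faces(10, 3) == 330   # C(11, 4) faces of dimension 3
```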
We will demonstrate that for an appropriate value of $\varepsilon'$, if we compute $\varepsilon'$-close labellings of such faces in the boundary, we can recover an $\varepsilon$-close labelling of the entire polytope partition over all of $\Delta^m$ by taking convex combinations. CR-GBS computes the necessary $\varepsilon'$-close labellings of lower dimensional faces by using CD-GBS as a subroutine, which as we shall see results in our desired query efficiency for $n$ constant.
We note, however, that not all faces in the boundary of $\Delta^m$ are axis-aligned, which poses a problem if we are to use CD-GBS as a subroutine. As we show in the following subsection, this is not an issue, since we can translate such simplices into axis-aligned ones via a simple transformation.
\subsection{Non-axis-aligned Simplices} \label{subsec:face-to-simplex}
So far we have focused on the case where $\Delta^m = \{x \in \mathbb{R}^m \ | \ \|x\|_1 \leq 1,$ $x_i \geq 0\}$. In a straightforward fashion we transfer our results to the equivalent simplex $\Lambda^{m+1} = \{x \in \mathbb{R}^{m+1} \ | \ \|x\|_1 = 1$, $x_i \geq 0\}$. To do so, we define the invertible affine map $\phi_m: \Delta^m \rightarrow \Lambda^{m+1}$ given by $\phi_m(x_1,...,x_m) = \left( \left( 1-\sum_{i=1}^m{x_i} \right), x_1,...,x_m \right)$. It is straightforward to see that $\phi_m$ is $\sqrt{m+1}$-Lipschitz continuous. Via standard Lipschitz continuity arguments we get the following:
\begin{lemma}
Suppose that $\mathscr{P}$ is an $(m,n)$-polytope partition of $\Lambda^{m+1}$. If $\hat{\mathscr{P}}$ is an $\frac{\varepsilon}{\sqrt{m+1}}$-close labelling of $\phi^{-1}_m(\mathscr{P})$, then $\phi_m (\hat{\mathscr{P}})$ is an $\varepsilon$-close labelling of $\mathscr{P}$.
\end{lemma}
\begin{proof}
Suppose that $x\in \Lambda^{m+1}$ has no label under $\phi_m(\hat{\mathscr{P}})$; then $\phi_m^{-1}(x)$ has no label under $\hat{\mathscr{P}}$. Since $\hat{\mathscr{P}}$ is an $\frac{\varepsilon}{\sqrt{m+1}}$-close labelling, there must be some $y$ in an empirical polytope $\hat{P}_i \subset \Delta^m$ with the property that $\|\phi_m^{-1}(x) - y\|_2 < \frac{\varepsilon}{\sqrt{m+1}}$. It follows that $\phi_m(y) \in \phi_m(\hat{\mathscr{P}})$ and, by Lipschitz continuity of $\phi_m$, $\|x - \phi_m(y)\|_2 < \varepsilon$ as desired.
\end{proof}
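The Lipschitz constant can also be checked numerically. The sketch below (sampling routine and names are ours) verifies $\|\phi_m(u)-\phi_m(v)\|_2 \leq \sqrt{m+1}\,\|u-v\|_2$ on random pairs in $\Delta^m$:

```python
import math, random

def phi(x):
    """phi_m: Delta^m -> Lambda^{m+1}, prepending the slack coordinate."""
    return (1 - sum(x),) + tuple(x)

def sample_simplex(m, rng):
    """A point of Delta^m: normalised exponential samples, first m coords."""
    w = [rng.expovariate(1.0) for _ in range(m + 1)]
    s = sum(w)
    return tuple(wi / s for wi in w[:m])

rng = random.Random(0)
m = 4
for _ in range(1000):
    u, v = sample_simplex(m, rng), sample_simplex(m, rng)
    assert math.dist(phi(u), phi(v)) <= math.sqrt(m + 1) * math.dist(u, v) + 1e-12
```

The bound follows from Cauchy-Schwarz: the extra slack coordinate contributes $(\sum_i (u_i - v_i))^2 \leq m \|u-v\|_2^2$, giving the factor $\sqrt{m+1}$ overall.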
\subsection{Necessary Machinery for CR-GBS}
\label{subsec:overview-CR-GBS}
Suppose that $\mathscr{P}$ is an $(m,n)$-polytope partition with the property that $m > \binom{n}{2}$. Furthermore, let $k = \binom{n}{2}$ and let $\partial_k(\Delta^m)$ denote all $k$-dimensional faces of $\Delta^m$. For each face $F$, let $\mathscr{P}_F$ be the restriction of $\mathscr{P}$ to $F$. If $F$ is axis-aligned (equivalently, if $F$ contains the origin), then it is an isometric embedding of $\Delta^k$ in $\Delta^m$, so we let $\phi_F$ be a canonical isomorphism from $F$ to $\Delta^k$. If $F$ is not axis-aligned, we let $\phi_F$ be any canonical isomorphism from $F$ to $\Delta^k$ as per Section \ref{subsec:face-to-simplex}.
As mentioned previously, computing empirical labellings of every face in $\partial_k(\Delta^m)$ via CD-GBS will be enough to compute an empirical labelling for $\mathscr{P}$. The only issue with this strategy, however, is that CD-GBS is only guaranteed to return an $\varepsilon$-close labelling if it is given access to a valid lexicographic membership oracle for a polytope partition, and for an arbitrary polytope partition it is not always the case that $\phi_F(\mathscr{P}_F)$ is a $(k,n)$-polytope partition for all $F \in \partial_k(\Delta^m)$. As an example, consider a polytope partition with an arbitrary $(m-1)$-dimensional polytope $P_i$ contained in $F = \mathscr{P}^0$ (the $0$-cross-section of $\mathscr{P}$). Any full-dimensional $P_j \in \mathscr{P}$ must have the property that $0 \notin \pi(relint(P_j))$, hence it still holds that $relint(P_i) \cap relint(P_j) = \emptyset$. However, when restricted to $\mathscr{P}_F$, relative interiors are taken with respect to $\mathscr{P}^0$, and it can be the case that $relint((P_i)_F) \cap relint((P_j)_F) \neq \emptyset$. For this reason, we slightly refine our notion of polytope partition.
\begin{definition}\label{def:proper-polytope-partition}
Suppose that $\mathscr{P}$ is an $(m,n)$-polytope partition such that for all $0 \leq k \leq m$ and $F \in \partial_k(\Delta^m)$, $\phi_F(\mathscr{P}_F)$ is a $(k,n)$-polytope partition. Then we say that $\mathscr{P}$ is a {\em proper polytope partition}.
\end{definition}
For the remainder of this section, we focus on proper polytope partitions. In addition, in order to prove correctness of CR-GBS we define a robust approximation of any $P_i \in \mathscr{P}$.
\begin{definition}\label{def:robust-clipping}
Suppose that $P \subset \Delta^m$ is a polytope. We define $int_\gamma(P)$ as
$$
int_\gamma(P) = \{x \in P \ | \ B_\gamma(x) \cap \Delta^m \subset P \}
$$
We call this the {\em $\gamma$-interior} of $P$.
\end{definition}
Intuitively, the $\gamma$-interior of $P$ consists of points that are ``robustly'' within $P$ by a margin of $\gamma$ relative to the interior of $\Delta^m$, as visualised in Figure \ref{fig:gamma-interior}. In Lemma~\ref{lemma:robust-vertex-faces} we show that $int_\gamma(P)$ is a sub-polytope of $P$ with certain supporting hyperplanes translated towards the interior of $P$ by a margin of $\gamma$. We also show that if $P_i$ is an element of an $(m,n)$-polytope partition where $m > k$, then the vertices of $int_\gamma(P_i)$ also lie in some $F \in \partial_k(\Delta^m)$.
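In half-space representation, the $\gamma$-interior is obtained by shifting only the constraints arising from adjacencies with other polytopes, as in the proof of Lemma \ref{lemma:robust-vertex-faces}. A minimal sketch (the concrete polygon, tolerances and names are ours, purely illustrative):

```python
import math

def in_halfspaces(x, halfspaces):
    """halfspaces: list of (a, b) encoding a . x >= b, with ||a||_2 = 1."""
    return all(sum(ai * xi for ai, xi in zip(a, x)) >= b - 1e-12
               for a, b in halfspaces)

def gamma_interior(boundary_hs, interior_hs, gamma):
    """Shift the half-spaces arising from adjacencies with other polytopes
    (interior_hs) inward by gamma; those arising from the ambient simplex
    (boundary_hs) are left untouched."""
    return boundary_hs + [(a, b + gamma) for a, b in interior_hs]

# P = {x in Delta^2 : x1 <= 1/2}; the constraint x1 <= 1/2 is the
# adjacency with a neighbouring polytope, the rest bound Delta^2.
boundary = [((1.0, 0.0), 0.0), ((0.0, 1.0), 0.0),
            ((-1 / math.sqrt(2), -1 / math.sqrt(2)), -1 / math.sqrt(2))]
interior = [((-1.0, 0.0), -0.5)]        # -x1 >= -1/2, i.e. x1 <= 1/2

P = boundary + interior
P_gamma = gamma_interior(boundary, interior, 0.1)

assert in_halfspaces((0.45, 0.2), P)
assert not in_halfspaces((0.45, 0.2), P_gamma)  # within 0.1 of the cut
assert in_halfspaces((0.30, 0.2), P_gamma)
```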
\begin{figure}
\center{
\begin{tikzpicture}[scale=0.7, every node/.style={draw,shape=circle,fill=blue}]
\tikzstyle{xxx}=[dashed,thick]
\fill[red!20](2,2)--(2,3.8)--(3.8,3.8)--(3.8,2)--cycle;
\fill[blue!20](4.2,2)--(8,2)--(5.2,4.8)--(4.2,3.85)--cycle;
\fill[green!20](2,4.2)--(3.8,4.2)--(4.8,5.2)--(2,8)--cycle;
\draw[thick, -](2,2)--(8,2);
\draw[thick, -](2,2)--(2,8);
\draw[thick, -](2,8)--(8,2);
\draw[thick, -](4,4)--(5,5);
\draw[thick, -](4,4)--(4,2);
\draw[thick, -](2,4)--(4,4);
\filldraw (2,2) circle (2pt);
\filldraw (2,3.8) circle (2pt);
\filldraw (3.8,3.8) circle (2pt);
\filldraw (3.8,2) circle (2pt);
\filldraw (4.2,2) circle (2pt);
\filldraw (8,2) circle (2pt);
\filldraw (5.2,4.8) circle (2pt);
\filldraw (4.2,3.85) circle (2pt);
\filldraw (2,4.2) circle (2pt);
\filldraw (3.8,4.2) circle (2pt);
\filldraw (4.8,5.2) circle (2pt);
\filldraw (2,8) circle (2pt);
\end{tikzpicture}
\caption{$\gamma$-Interiors of Polytopes in a Partition }\label{fig:gamma-interior}
}
\end{figure}
\begin{lemma}\label{lemma:robust-vertex-faces}
Suppose that $\mathscr{P}$ is an $(m,n)$-polytope partition with $m > k = \binom{n}{2}$. For each $P_i \in \mathscr{P}$, and any $\gamma > 0$, $int_\gamma(P_i)$ is a sub-polytope of $P_i$. Furthermore, each vertex of $int_\gamma(P_i)$ lies in some $F \in \partial_k(\Delta^m)$.
\end{lemma}
\begin{proof}
Since $P_i \subset \mathbb{R}^m$ is a polytope, it can be expressed as the intersection of finitely many half-spaces: $P_i = \bigcap_{j=1}^q H_j$, where $H_j = \{x \in \mathbb{R}^m \ | \ a_j \cdot x \geq b_j\}$ with $a_j \in \mathbb{R}^m$, $\|a_j\|_2 = 1$ and $b_j \in \mathbb{R}$. As mentioned before, each half-space $H_j$ can either arise from an adjacency of $P_i$ with the boundary of $\Delta^m$, or from an adjacency of $P_i$ with some other $P_r \in \mathscr{P}$. Let us call the former set of half-spaces $A$ and the latter $B$. We abuse notation slightly and also let $A$ refer to the set of indices $j \in [q]$ such that $H_j \in A$ (similarly for $B$).
For each $H_j \in B$, let $H_j' = \{x \in \mathbb{R}^m \ | \ a_j \cdot x \geq b_j + \gamma\}$. Clearly $H_j' \subset H_j$, and the boundary hyperplane of $H_j'$ is parallel to that of $H_j$, translated by a margin of $\gamma$ towards the interior of $H_j$. We now define $C = \left( \bigcap_{j \in A} H_j \right) \cap \left( \bigcap_{j \in B} H_j' \right)$ and show that $int_\gamma(P_i) = C$, which proves the first part of the lemma.
Suppose that $x \in C$. By virtue of the construction of all $H_j'$, it must be the case that $B_\gamma(x)$ does not intersect the boundary of any $H_j \in B$. Since all $H_j \in A$ are unchanged in $C$, we obtain $B_\gamma(x) \cap \Delta^m \subset P_i$, therefore $x \in int_\gamma(P_i)$.
Now suppose that $x \in int_\gamma(P_i)$. Since $int_\gamma(P_i) \subset P_i$, it is clear that $x \in H_j$ for all $j \in [q]$. For $H_j \in B$, if $x \notin H_j'$, then $B_\gamma(x) \not \subset H_j$, which in turn implies $B_\gamma(x) \not \subset P_i$, contradicting our assumption that $x \in int_\gamma(P_i)$. This proves the claim that $C = int_\gamma(P_i)$.
As for the final claim of the lemma, note that since each $H_j \in B$ arises from an adjacency of two polytopes in $\mathscr{P}$, it follows that $|B| \leq \binom {n} {2} = k < m$. A vertex of $C = int_\gamma(P_i)$ must lie on at least $m$ of its bounding hyperplanes, hence on at least $m-k$ hyperplanes coming from $A$. Each hyperplane in $A$ supports a facet of $\Delta^m$, so such a vertex lies on at least $m-k$ facets of $\Delta^m$, i.e. on a face of $\Delta^m$ of dimension at most $k$. In other words, every vertex of $int_\gamma(P_i)$ lies on some $F \in \partial_k(\Delta^m)$.
\end{proof}
Suppose that $\mathscr{P}$ is a proper $(m,n)$-polytope partition with $m > k = \binom{n}{2}$. Furthermore, suppose that $P_i \in \mathscr{P}$ is of full affine dimension, and consider a vertex $v$ of $int_\gamma(P_i)$, which by definition lies ``robustly'' within $P_i$. From the previous lemma we know that $v$ lies in some $F \in \partial_k(\Delta^m)$. We now show that, due to the margin $\gamma$ with which $v$ lies within $P_i$, we can recover a label of $v$ by computing a suitable empirical labelling of $F$.
\begin{lemma} \label{lemma:face-to-thickness}
Suppose that $\mathscr{P}$ is a proper $(m,n)$-polytope partition with $m > k = \binom{n}{2}$. Furthermore, suppose that $P_i \in \mathscr{P}$ is of full affine dimension and that $v$ is a vertex of $int_\gamma(P_i)$ that lies on some face $F \in \partial_k (\Delta^m)$. It follows that any $\frac{2\gamma}{5}$-close labelling of $F$ that correctly labels the vertices of $F$ gives $v$ the label $i$. Furthermore, suppose that for all $F \in \partial_k(\Delta^m)$ we compute a $\frac{2\gamma}{5}$-close labelling. By taking convex combinations of these empirical labellings, we get $\tau(\hat{P}_\bot) \leq \frac{10}{3} n^2 \gamma (m+1)^{3/2}$.
\end{lemma}
\begin{proof}
If $v$ is a vertex as in the statement of the lemma, it must either be a vertex of the original simplex, or $B_\gamma(v) \cap P_i\cap F$ must contain an $r$-dimensional $\ell_2$ ball of radius $\gamma$, which we call $A_2$ (where $r$ is the dimension of the sub-face of $F$ that $v$ lies on, so $1 \leq r \leq k$). If $v$ is a vertex of the original simplex, then it is correctly labelled by assumption, so we focus on the latter case.
Let $A_1$ be any $r$-dimensional $\ell_1$ ball of radius $\frac{3\gamma}{5}$ centred at $v$ such that $A_1 \subsetneq A_2$, and denote the corners of $A_1$ by $x_1,...,x_s$. For $t = 1,...,s$, let $V_t = B_{2\gamma/5}(x_t) \cap A_2 \subset F$. We note that $V_t \cap V_{t'} = \emptyset$ for all $t \neq t'$.
By the conditions of empirical labellings and the fact that $P_i$ is of full affine dimension, there must exist $z_1,...,z_s$ such that $z_t \in V_t$ and each $z_t$ gets its correct label, $i$, under $Q_\ell$. Furthermore, it is straightforward to see that $v \in Conv(z_1,...,z_s)$, hence $v$ gets its correct label, $i$, as visualised in Figure \ref{fig:face-thickness} for $r = 2$.
Along with Lemma \ref{lemma:robust-vertex-faces}, this shows that if for all $F \in \partial_k(\Delta^m)$ we compute $\frac{2\gamma}{5}$-close labellings that correctly label the vertices of $F$, then we will have correctly labelled all vertices of the polytope $int_\gamma(P_i)$. Consequently, by taking convex combinations of these labellings, the entirety of $int_\gamma(P_i)$ will be labelled correctly for an arbitrary full-dimensional $P_i$.
For a given full-dimensional $P_i \in \mathscr{P}$, the set $P_i \setminus int_\gamma(P_i)$ can be expressed as a union of at most $k \leq n^2$ polytopes of thickness bounded by $\gamma$ (using the notation from the proof of Lemma \ref{lemma:robust-vertex-faces}, these polytopes are all of the form $(H_j \setminus H_j') \cap P_i$, of which there are at most $|B| \leq k$). For a given $P_j$ that is not full-dimensional, it trivially holds that $\tau(P_j) = 0$. Thus we can use Lemma \ref{lemma:subadd} to see that $\tau(\hat{P}_\bot) \leq \tau(\cup_i \left(P_i \setminus int_\gamma(P_i)\right) ) \leq \frac{10}{3} n^2 \gamma (m+1)^{3/2} $.
\end{proof}
\begin{figure}
\center{
\begin{tikzpicture}[scale=0.7]
\tikzstyle{xxx}=[dashed,thick]
\fill[blue!20](-3,-0.5)--(0.5,-2)--(2.5,0.5)--(1,4)--cycle;
\filldraw (0,0) circle (2pt);
\draw (0,0) circle (5cm);
\filldraw (1,4) circle (2pt);
\filldraw (0.5,-2) circle (2pt);
\filldraw (2.5,0.5) circle (2pt);
\filldraw (-3,-0.5) circle (2pt);
\draw (0,3) circle (2cm);
\draw (0,-3) circle (2cm);
\draw (3,0) circle (2cm);
\draw (-3,0) circle (2cm);
\draw[thick, <->](3,0)--(5,0);
\draw[thick, <->](0,6)--(5,6);
\node at (2.5,6.5) {$\gamma$};
\node at (4,0.5) {$\frac{2\gamma}{5}$};
\node at (0,0.4) {$v$};
\node at (0,-4.5) {$V_1$};
\node at (-3,-1.5) {$V_2$};
\node at (0,1.5) {$V_3$};
\node at (3,-1.5) {$V_4$};
\node at (0.5,-2.5) {$z_1$};
\node at (-3,0) {$z_2$};
\node at (1,3.5) {$z_3$};
\node at (2.5,1) {$z_4$};
\end{tikzpicture}
\caption{Illustration of the proof of Lemma \ref{lemma:face-to-thickness}: the point $v$ lies in the convex hull of the correctly labelled points $z_1,\ldots,z_4$.}
\label{fig:face-thickness}
}
\end{figure}
\begin{corollary}\label{ref:corollary-for-gbs-correct}
Suppose that $\mathscr{P}$ is a proper $(m,n)$-polytope partition. Let $\gamma = \frac{3\varepsilon}{40n^2(m+1)^{5/2}}$, and suppose that for all $F \in \partial_k(\Delta^m)$, a $\frac{2\gamma}{5}$-close labelling that correctly labels the vertices of $F$ is computed with $Q_\ell$. Taking a convex combination of these empirical labellings results in an $\varepsilon$-close labelling of $\mathscr{P}$.
\end{corollary}
\begin{proof}
This follows from the fact that $\tau(\hat{P}_\bot) \leq \frac{10}{3} n^2 \gamma (m+1)^{3/2}$, as shown in the previous lemma. We can therefore apply Lemma \ref{lemma:thickness-to-distance} to obtain the desired result.
\end{proof}
The previous result gives us precisely what we need to prove the correctness of CR-GBS. In fact, it shows that CR-GBS can use any algorithm as a sub-routine (not just CD-GBS), as long as that algorithm computes empirical labellings of polytope partitions along all faces $F \in \partial_k(\Delta^m)$ while correctly labelling the vertices of $\Delta^m$.
\subsection{Specification of CR-GBS and Query Usage}
\paragraph{Terms and Notation:} For $F \in \partial_k(\Delta^m)$, we let $\phi_F$ denote a canonical isomorphism from $F$ to $\Delta^k$ as per Section \ref{subsec:overview-CR-GBS}. Furthermore, for each such $F$, we let $\hat{\mathscr{P}}_F$ denote the empirical labelling returned by CD-GBS on a given face $F$.
\begin{algorithm}
\caption{CR-GBS$(m,n,\varepsilon, Q)$}
\label{alg:FGBS}
\begin{algorithmic}
\item[\algorithmicinput] $m,n,\varepsilon > 0$, query access to membership oracle $Q$ for $(m,n)$-polytope partition $\mathscr{P}$.
\item[\algorithmicoutput] $\varepsilon$-close labelling of $\mathscr{P}$.
\STATE $k \leftarrow \binom{n}{2}$
\FOR{$F \in \partial_k(\Delta^m)$}
\STATE $\hat{\mathscr{P}}_F \leftarrow \phi_F^{-1} \left( \text{CD-GBS}\left(k, n, \frac{3\varepsilon}{100n^2\sqrt{k+1}(m+1)^{5/2}}, Q \circ \phi^{-1}_F \right) \right)$.
\ENDFOR
\STATE $\hat{\mathscr{P}} \leftarrow Conv_F(\hat{\mathscr{P}}_F)$
\RETURN $\hat{\mathscr{P}}$
\end{algorithmic}
\end{algorithm}
\begin{theorem}\label{thm:full-gbs-guarantee}
Let $\mathscr{P}$ be a proper $(m,n)$-polytope partition where $n$ is constant and $m > k = \binom{n}{2}$. CR-GBS computes an $\varepsilon$-close labelling of $\mathscr{P}$ and uses $O \left( m^k \log^k \left( \frac{m}{\varepsilon} \right) \right) = poly(m,\log \left( \frac{1}{\varepsilon} \right) )$ queries.
\end{theorem}
\begin{proof}
The correctness follows from Corollary \ref{ref:corollary-for-gbs-correct}. In the worst case, faces are of the form $\Lambda^k$, which incurs an extra factor of $\sqrt{k+1}$ in the approximation guarantee of empirical labellings; we use this as a worst-case bound.
For simplicity in notation, we define $m_0 = k$, $\varepsilon_0 = \frac{3\varepsilon}{100n^2\sqrt{k+1}(m+1)^{5/2}}$. From Theorem \ref{thm:genbinsrch-correct}, the CD-GBS subroutine uses at most $\left( \prod_{i=1}^{m_0} \left( \binom{n+i}{i} + 2n \right) \right) 2^{2m_0^2}\log^{m_0} \left( \frac{170 n {m_0}^{5/2}}{\varepsilon_0} \right)$ queries. Since $k = \binom{n}{2}$ is constant, this expression can be written as $O \left( \log^k \left( \frac{m^{5/2}}{\varepsilon} \right) \right) = O \left( \log^k \left( \frac{m}{\varepsilon} \right) \right)$. Finally, there are $\binom{m}{k}$ possible faces upon which CD-GBS can be called as a subroutine, hence the total query usage is indeed $O \left( m^k \log^k \left( \frac{m}{\varepsilon} \right) \right) = poly(m,\log \left( \frac{1}{\varepsilon} \right) )$.
\end{proof}
\section{Upper Envelope Polytope Partitions}\label{sec:uepp}
Up until now we have focused exclusively on the lexicographic query oracle $Q_\ell$, creating algorithms CD-GBS and CR-GBS that compute $\varepsilon$-close labellings of $(m,n)$-polytope partitions when given access to $Q_\ell$. If these algorithms are given access to an adversarial oracle $Q_A$, however, they may fail. It suffices to see this for CD-GBS, since CR-GBS uses it as a subroutine.
To see why CD-GBS may fail under $Q_A$, we recall that the algorithm recursively computes $\varepsilon$-close labellings of cross-sections $\mathscr{P}^t$ for different values of $t \in [0,1]$. If CD-GBS is ever called on a degenerate cross-section $\mathscr{P}^t$, it has safeguards that either detect the degeneracy (when it notices that there exist $i,j \in [n]$ and $z \in \Delta^m$ such that $z \in int(\hat{P}_i) \cap \hat{P}_j$), or, in the worst case, prevent it from exceeding its query budget. In both cases, however, the algorithm returns a valid empirical labelling, i.e., $\hat{\mathscr{P}} = \{ \hat{P}_i\}_{i=1}^n$ such that $\hat{P}_i \subseteq P_i$.
When an adversarial oracle is used, however, we may observe $i,j \in [n]$ and $z \in \Delta^m$ such that $z \in int(\hat{P}_i) \cap \hat{P}_j$ even on a non-degenerate cross-section. Indeed, this can occur if $P_i = P_j$ and both are full-dimensional. The natural solution seems to be to merge $P_i$ and $P_j$ (since the second condition in the definition of polytope partitions tells us that $P_i = P_j$ in this case). The main problem, however, is that there is no way of telling whether the condition above is an artefact of the adversarial oracle or simply due to the fact that $\mathscr{P}^t$ is degenerate. If we blindly merge labels, we may in fact perform an incorrect merge on a degenerate cross-section, which may yield inconsistent polytope partitions.
Since the key problem is the existence of degenerate cross-sections, we consider a slightly stronger variant of polytope partitions with the key property that cross-sections are never degenerate. Furthermore, this special type of polytope partition is expressive enough for our game theoretic applications, and best of all, it allows us to prove results in the adversarial query oracle model.
\begin{definition}[Upper Envelope Polytope Partition]\label{def:UE-polytope-partition}
Suppose that $A \in \mathbb{R}^{n \times m}$ is an $n \times m$ real-valued matrix and that $b \in \mathbb{R}^n$. Let $P_i = \{ y \in \Delta^m \ | \ (Ay + b)_i \geq (Ay + b)_j \text{ for all } j \neq i \}$. We denote the collection $\mathscr{P}(A,b) = \{P_1,\ldots,P_n\}$ as the upper envelope polytope partition (UEPP) arising from $(A,b)$.
\end{definition}
It is straightforward to see that for any $(A,b)$, $\mathscr{P}(A,b)$ is itself an $(m,n)$-polytope partition. Crucially however, it satisfies more properties than the previous definition of polytope partitions.
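To make the definition concrete, the following minimal sketch (the function name and the tie-breaking convention are ours, chosen to mimic the lexicographic oracle $Q_\ell$; they are not part of the formal development) implements a membership oracle for $\mathscr{P}(A,b)$ by returning the smallest label attaining the upper envelope:

```python
def uepp_oracle(A, b, y):
    """Membership oracle for the UEPP P(A, b): returns the smallest label i
    (1-indexed) maximising (A y + b)_i, mimicking a lexicographic oracle."""
    scores = [sum(a_ij * y_j for a_ij, y_j in zip(row, y)) + b_i
              for row, b_i in zip(A, b)]
    best = max(scores)
    return scores.index(best) + 1  # list.index returns the first (smallest) maximiser

# Two labels on simplex coordinates: label 1 wins exactly where y_1 >= y_2.
A = [[1.0, 0.0], [0.0, 1.0]]
b = [0.0, 0.0]
print(uepp_oracle(A, b, [0.8, 0.1]))  # 1
```

On the boundary $y_1 = y_2$ both labels attain the envelope and the smallest index is returned, matching the behaviour of $Q_\ell$.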
\begin{lemma}\label{lemma:structure-uepp}
Suppose that $A$ is an $n \times m$ real valued matrix and that $b \in \mathbb{R}^n$. Then $\mathscr{P}(A,b) = \{P_1,\ldots,P_n\}$ has the following properties:
\begin{itemize}
\item For any $x \in [0,1)$ let $f_x$ be the canonical affine transformation that maps $(\Delta^m)^x$ to $\Delta^{m-1}$. There exists an $n \times (m-1)$ real matrix $A^x$ and $b^x \in \mathbb{R}^n$ such that $\mathscr{P}(A^x,b^x) = f_x(\mathscr{P}(A,b)^x)$.
\item $\mathscr{P}(A,b)$ is a proper polytope partition (Definition~\ref{def:proper-polytope-partition}).
\item If $A_{i,\bullet} = A_{j,\bullet}$ and $b_i = b_j$ then $P_i = P_j$. Conversely if $P_i$ is of full affine dimension and $relint(P_i) \cap P_j \neq \emptyset$, then $A_{i,\bullet} = A_{j,\bullet}$ and $b_i = b_j$; consequently, $P_i = P_j$.
\item Suppose that $a_1,\ldots,a_k \in \mathbb{R}$ are such that $\sum_{i=1}^k a_i < 1$ with $k < m$. Let $H = \{ (z_1,\ldots,z_m) \in \Delta^m \ | \ z_i = a_i, i=1,\ldots,k\}$ where $H$ has affine codimension $k$. If $x_1,\ldots,x_{m-k} \in \Delta^m$ are affinely independent points of $P_i\cap H$ and $y \in Conv(x_1,\ldots,x_{m-k})$ belongs to $P_j$, then $P_i$ and $P_j$ coincide in $H$.
\end{itemize}
\end{lemma}
\begin{proof}
The first bullet point follows from two facts: affine transformations restricted to affine subspaces are themselves affine transformations, and compositions of affine transformations are themselves affine transformations.
To be rigorous, define the affine transformation $g:\mathbb{R}^m \rightarrow \mathbb{R}^m$ to be $g(x) = Ax + b$. Let $g' = g \restriction_{(\Delta^m)^x}$ be the restriction of $g$ to the affine subspace $(\Delta^m)^x \subset \mathbb{R}^m$ of codimension 1. As we mentioned before, $g'$ is itself an affine transformation.
Now let us recall that $f_x$ is the canonical affine transformation that maps $(\Delta^m)^x$ to $\Delta^{m-1}$. It is straightforward to see that $f_x^{-1}$ exists ($\Delta^{m-1}$ and $(\Delta^{m})^x$ are clearly isomorphic) and is itself an affine transformation. Consequently $g' \circ f_x^{-1} : \Delta^{m-1} \rightarrow \mathbb{R}^n$ is itself an affine transformation, which can be identified with a matrix $A^x$ and vector $b^x$ such that $\left( g' \circ f_x^{-1} \right) z = A^x z + b^x$. It is straightforward to see that $(A^x,b^x)$ are such that $\mathscr{P}(A^x,b^x) = f_x(\mathscr{P}(A,b)^x)$ as desired.
As for the second bullet point, let $F \in \partial_{m-1}(\Delta^m)$ be an arbitrary face of $\Delta^m$ of codimension 1. In addition, we use the notation $\phi_F$ as before to denote the canonical isomorphism from $F$ to $\Delta^{m-1}$. $\phi_F$ is itself an affine transformation, hence we can use identical argumentation from before to show that by restricting the original affine functions arising from $(A,b)$ to $F \in \partial_{m-1}(\Delta^m)$, we can find equivalent affine functions that render $\phi_F(\mathscr{P}_F)$ an upper-envelope polytope partition. For arbitrary $0 \leq k \leq m-1$, we can use the previous statement inductively to show that for any $F \in \partial_k(\Delta^m)$, $\mathscr{P}_F$ is a $(k,n)$-polytope partition. This concludes the proof that $\mathscr{P}$ is a proper polytope partition.
As for the third bullet point, the fact that $A_{i,\bullet} = A_{j,\bullet}$ and $b_i = b_j$ implies $P_i = P_j$ is trivial. Let us focus on the case when $P_i$ is of full affine dimension and $relint(P_i) \cap P_j \neq \emptyset$. For the sake of contradiction, first suppose that $A_{i,\bullet} = A_{j,\bullet}$ and $b_i \neq b_j$. Then $(Ay + b)_i \neq (Ay + b)_j$ for all $y$, whereas any $z \in P_i \cap P_j$ must satisfy $(Az + b)_i = (Az + b)_j$; this contradicts our assumption that $relint(P_i) \cap P_j \neq \emptyset$. Let us therefore suppose that $A_{i,\bullet} \neq A_{j,\bullet}$. Let $H$ be the set of $y$ such that $(Ay + b)_i = (Ay + b)_j$. Since $A_{i,\bullet} \neq A_{j,\bullet}$, $H$ has codimension at least 1. By assumption, there exists a $z \in relint(P_i) \cap P_j$, and it must be the case that $z \in H$ as well. However, using the fact that $z \in relint(P_i)$ and that $P_i$ is of full affine dimension, $B_\varepsilon(z) \subseteq P_i$ for some $\varepsilon > 0$. But since $z \in H$, half of $B_\varepsilon(z)$ must not belong to $P_i$, which is a contradiction.
The final bullet point follows from putting the first and third bullet points together and inducting on $k$. The base case follows from the fact that for $w \in [0,1)$, we know that $\mathscr{P}^w$ is itself a scaled upper envelope polytope partition (from the first bullet point). Now suppose that $x_1,...,x_{m-1} \in P_i$ are affinely independent in $\mathscr{P}(A,b)^w$. Furthermore suppose that $Conv(x_1,...,x_{m-1})$ contains a point $y \in P_j$. Since the $x_i$ are affinely independent, it follows that $P_i^w$ is full-dimensional in $\mathscr{P}(A,b)^w$, hence we can apply the third bullet point to show that $P_i^w$ and $P_j^w$ coincide in $\mathscr{P}(A,b)^w$ (which is in fact what we desired).
Let us suppose that the claim holds for a given $k - 1 < m-1$ and that we are given $a_1,...,a_k$. From the first bullet point, $\mathscr{P}(A,b)^{a_1}$ is a scaled lower-dimensional upper envelope polytope partition. Let us define $H = \{ (z_1,...,z_m) \in \Delta^m \ | \ z_i = a_i, i=1,\ldots,k\}$ and $H_2 = \{ (z_1,...,z_m) \in \Delta^m \ | \ z_i = a_i, i=2,\ldots,k\}$. It follows that $\mathscr{P}(A,b) \cap H = \mathscr{P}(A,b)^{a_1}\cap H_2$, and in the latter we can use the inductive assumption (since $(m-1) - (k-1) = m-k$) to show that if $x_1,...,x_{m-k}$ are affinely independent points in $P_i^{a_1} \cap H_2 = P_i \cap H$, and $y \in Conv(x_1,...,x_{m-k})$ belongs to $P_j$, then $P_i^{a_1}$ and $P_j^{a_1}$ coincide in $H_2$, which is the same as saying that $P_i$ and $P_j$ coincide in $H$, as desired.
\end{proof}
\subsection{Adversarial CD-GBS}
Suppose that $\mathscr{P}$ is a UEPP. Since it is also a proper $(m,n)$-polytope partition, it inherits all the properties from before. Along with Lemma \ref{lemma:structure-uepp}, we have the necessary tools to show that Algorithm \ref{alg:GBS-adversarial} is a query-efficient way of computing $\varepsilon$-close labellings of $\mathscr{P}$ with an adversarial query oracle. In the specification of CD-GBS, we use identical terms and notation from Algorithm \ref{alg:GBS}.
\begin{algorithm}[h]
\caption{Adversarial CD-GBS$(m,n,\varepsilon, Q_A)$}
\label{alg:GBS-adversarial}
\begin{algorithmic}
\item[\algorithmicinput] $m \geq 0, \ n,\varepsilon > 0$, query access to oracle $Q_A: \Delta^m \rightarrow [n]$.
\REQUIRE Recursive calls to CD-GBS$ \left( m-1,n,\frac{\varepsilon^2}{85(1-x) n m^{5/2}}, Q_A \circ f_x^{-1} \right)$.
\item[\algorithmicoutput] $\varepsilon$-close labelling of $\mathscr{P}$.
\IF{$m=0$}
\STATE Query $Q_A(0)$
\ELSE
\STATE $\hat{\mathscr{P}}^0 \leftarrow f_0^{-1} \left( \text{CD-GBS} \left( m-1,n,\frac{\varepsilon^2}{85 n m^{5/2}}, Q_A \circ f_0^{-1} \right) \right)$
\STATE $\hat{\mathscr{P}}^1 \leftarrow Q_A(\vec{e}_1)$.
\FOR{$k = 1$ to $\lceil \log(2/\varepsilon) \rceil$}
\FOR{$x \in \mathscr{D}^k$}
\IF{$I_x^k$ is uncovered}
\STATE $t \leftarrow midpoint(I_x)$
\STATE $\hat{\mathscr{P}}^t \leftarrow f_t^{-1} \left( \text{CD-GBS} \left( m-1,n,\frac{\varepsilon^2}{85(1-t) n m^{5/2}}, Q_A \circ f_t^{-1} \right) \right)$
\ENDIF
\STATE Recompute $\hat{\mathscr{P}}$ by taking convex hulls of labels
\WHILE{$\exists i,j \in [n], \ z \in \Delta^m$ such that $dim(\hat{P}_i) = m$ and $z \in int(\hat{P}_i) \cap \hat{P}_j$}
\STATE Merge label $i$ with label $j$
\STATE Recompute $\hat{\mathscr{P}}$ by taking convex hulls of labels
\ENDWHILE
\IF{$\hat{\mathscr{P}}$ is an $\varepsilon$-close labelling}
\STATE Break
\ENDIF
\ENDFOR
\ENDFOR
\ENDIF
\RETURN $\hat{\mathscr{P}}$
\end{algorithmic}
\end{algorithm}
\begin{theorem}\label{thm:adv-CDGBS}
If Adversarial CD-GBS is given access to an adversarial query oracle $Q_A$ for an upper envelope $(m,n)$-polytope partition $\mathscr{P}$, it computes an $\varepsilon$-close labelling of $\mathscr{P}$ using at most\newline
$\left( \prod_{i=1}^m \left( \binom{n+i}{i} + 2n \right) \right) 2^{2m^2}\log^m \left( \frac{170 n m^{5/2}}{\varepsilon} \right)$ membership queries. For constant $m$ this constitutes $O(n^{m^2}\log^m \left( \frac{n}{\varepsilon} \right) ) = poly(n,\log \left( \frac{1}{\varepsilon} \right))$ queries.
\end{theorem}
\begin{proof}
As in the proof of correctness of CD-GBS, we begin by noting that when $m=1$ the algorithm runs identically to binary search. We thus focus on the case where $m > 1$.
The key observation of the proof of correctness is the following: at any given $k$ in the first for loop there are at most $2|C^\alpha_{\mathscr{P}}|$ values of $x$ such that $I^k_x$ is uncovered. Let us consider the empirical labelling $\hat{\mathscr{P}}$ that has been constructed at the time of the execution of the $k$-th loop. Since we have merged any overlapping labels during the execution of the loop at value $k-1$, it follows that for all $i,j$, $\hat{P}_i \cap \hat{P}_j$ is not of full affine dimension. In turn this means that there exists a hyperplane $H_{i,j}$ that separates the interiors of $\hat{P}_i$ and $\hat{P}_j$. Denote by $H_{i,j}^+$ the halfspace defined by $H_{i,j}$ in which $\hat{P}_i$ is contained. We can then define $\bar{P}_i = \cap_{j} H_{i,j}^+$ so that $\hat{P}_i \subset \bar{P}_i$, and it is straightforward to see that $\bar{\mathscr{P}} = \{ \bar{P}_i\}$ is a valid $(m,n)$-polytope partition that is consistent with $\hat{\mathscr{P}}$. Since $\bar{\mathscr{P}}$ is consistent with our current observations from $Q_A$, we can simulate CD-GBS on $\bar{\mathscr{P}}$ for the first $k-1$ iterations of the algorithm (ordering polytopes accordingly to simulate a lexicographic query oracle). The empirical labelling returned when doing so will in fact be $\hat{\mathscr{P}}$, and thus we can apply Corollary \ref{cor:width-bound} to conclude that the number of uncovered $I^k_x$ is indeed bounded by $2|C^\alpha_{\mathscr{P}}|$.
Returning to the proof of correctness of the algorithm: upon termination, the output is correct, assuming that calling CD-GBS as a subroutine works correctly as per the inductive assumption and, crucially, using the fact from Lemma \ref{lemma:structure-uepp} that each cross-section of $\mathscr{P}$ is non-degenerate. We therefore focus on the query cost of the algorithm. At each $k = 1$ to $\lceil \log(2/\varepsilon) \rceil$, from our previous result there can only be at most $2|C^\alpha_{\mathscr{P}}|$ uncovered $I^k_x$, which are precisely the $I^k_x$ that result in queries. By simple multiplication we thus get that the number of cross-section queries is at most $2 \lceil \log(2/\varepsilon) \rceil |C_{\mathscr{P}}^\alpha|$, hence we obtain the following recursion bounding the query cost of adversarial CD-GBS:
$$
T(m,n,\varepsilon) \leq 2\left(\binom{n+m}{m} + 2n \right) \log \left( \frac{2}{\varepsilon} \right) T \left(m-1, n, \frac{\varepsilon^2}{85 n m^{5/2}} \right)
$$
with base case $T(1,n,\varepsilon) \leq n \log \left( \frac{2}{\varepsilon} \right)$. If we unpack the recursion in the same way as Theorem \ref{thm:genbinsrch-correct}, we get the desired result.
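For intuition only, a rough sketch of the unrolling (with $\varepsilon_m = \varepsilon$ and $\varepsilon_{i-1} = \frac{\varepsilon_i^2}{85 n i^{5/2}}$ denoting the accuracy passed down to dimension $i-1$; the exact bookkeeping follows Theorem \ref{thm:genbinsrch-correct}):

```latex
T(m,n,\varepsilon)
  \leq \prod_{i=2}^{m} 2\left(\binom{n+i}{i} + 2n\right)
       \log\left(\frac{2}{\varepsilon_i}\right) \cdot T(1,n,\varepsilon_1),
\qquad
\log\left(\frac{2}{\varepsilon_i}\right)
  \leq 2^{m-i}\log\left(\frac{170\, n\, m^{5/2}}{\varepsilon}\right),
```

since squaring the accuracy at each level at most doubles the logarithm; collecting the $2^{m-i}$ factors together with the per-level constants yields the $2^{2m^2}\log^m(\cdot)$ term in the bound.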
\end{proof}
\subsection{Adversarial CR-GBS}
In this section we formalize an adversarial variant of CR-GBS. We note that most of the notation is identical to lexicographic CR-GBS.
\begin{algorithm}
\caption{Adversarial CR-GBS$(m,n,\varepsilon, Q_A)$}
\label{alg:FGBS-adversarial}
\begin{algorithmic}
\item[\algorithmicinput] $m,n,\varepsilon > 0$, query access to $Q_A$ for $(m,n)$-polytope partition $\mathscr{P}$.
\item[\algorithmicoutput] $\varepsilon$-close labelling of $\mathscr{P}$.
\STATE $k \leftarrow \binom{n}{2}$
\FOR{$F \in \partial_k(\Delta^m)$}
\STATE $\hat{\mathscr{P}}_F \leftarrow \phi_F^{-1} \left( \text{Adversarial CD-GBS}\left(k, n, \frac{3\varepsilon}{100n^2\sqrt{k+1}(m+1)^{5/2}}, Q_A \circ \phi^{-1}_F \right) \right)$.
\ENDFOR
\STATE $\hat{\mathscr{P}} \leftarrow Conv_F(\hat{\mathscr{P}}_F)$
\WHILE{$\exists i,j \in [n], \ z \in \Delta^m$ such that $dim(\hat{P}_i) = m$ and $z \in int(\hat{P}_i) \cap \hat{P}_j$}
\STATE Merge label $i$ with label $j$
\STATE Recompute convex hulls of labels
\ENDWHILE
\RETURN $\hat{\mathscr{P}}$
\end{algorithmic}
\end{algorithm}
\begin{theorem}\label{thm:full-gbs-guarantee-adversarial}
Let $\mathscr{P}$ be an upper envelope $(m,n)$-polytope partition where $n$ is constant, and let $k = \binom{n}{2}$. Adversarial CR-GBS computes an $\varepsilon$-close labelling of $\mathscr{P}$ and uses $O \left( m^k \log^k \left( \frac{m}{\varepsilon} \right) \right) = poly(m,\log \left( \frac{1}{\varepsilon} \right) )$ queries.
\end{theorem}
\begin{proof}
As in the proof of Theorem \ref{thm:adv-CDGBS}, we know that there exists a polytope partition $\bar{\mathscr{P}}$ that is consistent with $\hat{\mathscr{P}}$. Once again, we notice that this invocation of adversarial CR-GBS is identical to running lexicographic CR-GBS on $\bar{\mathscr{P}}$, which would in turn return $\hat{\mathscr{P}}$ an $\varepsilon$-close labelling of $\bar{\mathscr{P}}$. However, it is straightforward to see from the definition of $\varepsilon$-close labellings that this also makes $\hat{\mathscr{P}}$ an $\varepsilon$-close labelling of $\mathscr{P}$ as desired. Finally, the query usage of adversarial CR-GBS is identical to lexicographic CR-GBS, hence the rest of the theorem follows.
\end{proof}
\section{Games and Best Responses}\label{sec:game-theory-background}
Now that we have established query-efficient algorithms for learning $\varepsilon$-close labellings of polytope partitions, we turn our attention to game theory to prove the connection between learning these labellings and computing approximate well-supported Nash equilibria.
Suppose that $G = (A,B)$ is an $m \times n$ bi-matrix game where $A,B \in [0,1]^{m \times n}$ are the row player and column player payoff matrices respectively with payoffs normalised to $[0,1]$. We wish to identify an $\varepsilon$-well-supported Nash equilibrium ($\varepsilon$-WSNE) using only limited information on $G$. The set of row player pure strategies is $[m] = \{1,\ldots,m\}$ and similarly that of the column player pure strategies is $[n] = \{1,\ldots,n\}$. Furthermore, the set of all row player mixed strategies can be associated with the axis-aligned $(m-1)$-simplex: $\Delta^{m-1}= \{ \vec{x} \in \mathbb{R}^{m-1} | \sum_{i=1}^{m-1} x_i \leq 1 \text{ and } x_i \geq 0\}$. Similarly, column player mixed strategies are identified with $\Delta^{n-1}$.
\begin{definition}[Utility Functions]
Suppose that $u \in \Delta^{m-1}$ and $v\in \Delta^{n-1}$ are row and column player mixed strategies. Let $u' = (1 - \sum_{i=1}^{m-1} u_i, u_1,...,u_{m-1})$ and $v' = (1 - \sum_{i=1}^{n-1} v_i, v_1,...,v_{n-1})$. Then for strategy profile $(u,v)$, the row player utility is $U_r(u,v) = u'^TAv'$ and the column player utility is $U_c(u,v) = u'^TBv'$.
\end{definition}
It will also be useful to have shorthand for the following functions: $U_r^i(y) = U_r(e_i,y)$, the row player utility for playing pure strategy $i$ against $y$, and $E_R(y) = \max_{i \in [m]} U_r^i(y)$, the maximal utility the row player can achieve against mixed strategy $y$. In an identical fashion we define $U_c^j$ and $E_C$ as the column player utility for playing strategy $j$ and the maximal column player utility. With this notation in hand, we can define the best response oracles that algorithms will have access to when computing approximate Nash equilibria.
\begin{definition}[Best Response Query Oracles]
Any bimatrix game has the following best response query oracles:
\begin{itemize}
\item Strong query oracles: for the column player, $BR^C_s(u) = \{ j \in [n] \ | \ U^j_c(u) = E_C(u) \}$ and for the row player, $BR^R_s(v) = \{ i \in [m] \ | \ U^i_r(v) = E_R(v) \}$
\item Lexicographic query oracles: for the column player, $BR^C_\ell(u) = \min \{ j \in [n] \ | \ j \in BR^C_s(u) \}$ and for the row player, $BR^R_\ell(v) = \min \{ i \in [m] \ | \ i \in BR^R_s(v) \}$
\item Adversarial query oracles: for the column player, any function $BR^C_A$ such that $BR^C_A(u) \in BR^C_s(u)$ and for the row player, any function $BR^R_A$ such that $BR^R_A(v) \in BR^R_s(v)$
\end{itemize}
\end{definition}
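As a concrete illustration (the function names and the list-based representation are ours, and for simplicity $u$ is given directly as a full probability vector over $[m]$ rather than in simplex coordinates), the strong and lexicographic column-player oracles can be sketched as follows:

```python
def br_strong(B, u, tol=1e-9):
    """Strong oracle BR^C_s(u): all columns (1-indexed) attaining the
    maximal column-player payoff against the row mixture u."""
    m, n = len(B), len(B[0])
    payoffs = [sum(u[i] * B[i][j] for i in range(m)) for j in range(n)]
    best = max(payoffs)
    return [j + 1 for j, p in enumerate(payoffs) if p >= best - tol]

def br_lex(B, u):
    """Lexicographic oracle BR^C_ell(u): the smallest index in BR^C_s(u)."""
    return br_strong(B, u)[0]

# Matching-pennies column payoffs: against a pure first row, column 2 is best;
# against the uniform mixture both columns tie and the lexicographic oracle picks 1.
B = [[0.0, 1.0], [1.0, 0.0]]
print(br_strong(B, [1.0, 0.0]), br_lex(B, [0.5, 0.5]))  # [2] 1
```

The adversarial oracle may return any element of `br_strong(B, u)`, which is what makes it strictly harder to learn from than the lexicographic one.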
For a given mixed strategy, $u \in \Delta^{m-1}$, we say the support of $u$ is the set of pure strategies that are played in $u$ with non-zero probability. It will be useful to formulate this as a function in order to define Nash equilibria combinatorially in the following section.
\begin{definition}[Support Functions]
Let $S^R : \Delta^{m-1} \rightarrow \mathcal{P}([m])$ be the function which returns the support of a row player mixed strategy. Similarly, let $S^C : \Delta^{n-1} \rightarrow \mathcal{P}([n])$ return the support of a column player mixed strategy.
\end{definition}
\section{Nash Equilibria and Lipschitz Continuity of Utility Functions}\label{sec:nash-lipschitz}
\begin{definition}[Nash Equilibrium]\label{def:nash}
Suppose that $u$ and $v$ are row and column player strategies respectively. We say that the pair $(u,v)$ is a {\em Nash Equilibrium (NE)} if for all $u' \in \Delta^{m-1}$ and $v' \in \Delta^{n-1}$: $U_r(u,v) \geq U_r(u',v)$ and $U_c(u,v) \geq U_c(u,v')$.
\end{definition}
Though the definition of a Nash equilibrium involves utility values of both players at their mixed strategy profiles, there is an equivalent combinatorial formulation of the above definition:
\begin{proposition}\label{prop:comb-NE}
$(u,v)$ is a NE if and only if $S^R(u) \subseteq BR^R_s(v)$ and $S^C(v) \subseteq BR^C_s(u)$. In other words $u$ is supported by best responses to $v$ and vice versa.
\end{proposition}
When using best response queries only, one does not have access to utility values (as emphasised in the first definition); however, this second, equivalent definition of Nash equilibria can be verified using best response oracles and support functions alone.\footnote{In general one is unable to recover utility values from best responses, even up to affine transformations.}
We also note that for utility queries, the query complexity of an exact NE is finite: we can exhaustively query the game. On the other hand, Corollary \ref{cor:exact-nash-inft} shows that this is not the case for best response queries. As a consequence, we relax the notion of a NE when using best response queries. The relaxation of NE which we study is that of approximate well-supported equilibria. Before proceeding with the formal definition, we say that a row player mixed strategy $u \in \Delta^{m-1}$ is an $\varepsilon$-best response against a column player mixed strategy $v \in \Delta^{n-1}$ if $U_r(u,v) \geq U_r(u',v) - \varepsilon$ for all $u' \in \Delta^{m-1}$. An identical notion holds for when a column player mixed strategy $v \in \Delta^{n-1}$ is an $\varepsilon$-best response against a row player mixed strategy $u \in \Delta^{m-1}$. Intuitively, an $\varepsilon$-best response is a mixed strategy from which a player has at most an $\varepsilon$ incentive to deviate.
\begin{definition}[$\varepsilon$-Well-Supported Nash Equilibrium]\label{def:eps-WSNE}
Suppose that $u$ and $v$ are row and column player strategies respectively. We say that the pair $(u,v)$ is an $\varepsilon$-well-supported Nash equilibrium ($\varepsilon$-WSNE) if and only if $u$ is supported by $\varepsilon$-best responses to $v$ and vice versa.\footnote{Note that the conditions for an $\varepsilon$-WSNE imply that no player has more than an $\varepsilon$ incentive to deviate from the approximate equilibrium.}
\end{definition}
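This definition can be checked mechanically from per-pure-strategy payoffs. The following minimal sketch of a verifier (the function name and the full-probability-vector representation of strategies are ours) makes the condition explicit:

```python
def is_eps_wsne(A, B, u, v, eps, tol=1e-12):
    """Check the eps-WSNE condition: every pure strategy played with positive
    probability must be an eps-best response to the opponent's mixture.
    u, v are full probability vectors over the row/column pure strategies."""
    m, n = len(A), len(A[0])
    row_pay = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    col_pay = [sum(u[i] * B[i][j] for i in range(m)) for j in range(n)]
    row_ok = all(row_pay[i] >= max(row_pay) - eps - tol
                 for i in range(m) if u[i] > 0)
    col_ok = all(col_pay[j] >= max(col_pay) - eps - tol
                 for j in range(n) if v[j] > 0)
    return row_ok and col_ok

# Matching pennies: the uniform profile is an exact (hence 0-well-supported) NE.
A = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.0, 1.0], [1.0, 0.0]]
print(is_eps_wsne(A, B, [0.5, 0.5], [0.5, 0.5], 0.0))  # True
```

Note that the check uses only supports and payoff comparisons, in line with the combinatorial characterisation of Proposition \ref{prop:comb-NE}.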
\begin{theorem}\label{thm:WSNE-LB}
The query complexity of computing an $\varepsilon$-WSNE with best response queries is $\Omega( \log \left(\frac{1}{\varepsilon} \right) )$, even when given access to strong query oracles.
\end{theorem}
\begin{proof}
Suppose that $x,y \in (0,1)$ are arbitrary, and let us consider a two-player binary action game, $G_{x,y}$, with the following row and column player payoff matrices:
\[A_x = \left( \begin{array}{rr}
x & x \\
0 & 1
\end{array} \right)
\ \
B_y = \left( \begin{array}{rr}
0 & y \\
1 & y
\end{array} \right)
\]
Since $G_{x,y}$ is a binary action game, we can express any mixed strategy profile of both players by a tuple $(p_r,p_c) \in [0,1]^2$. $p_r$ represents the probability the row player plays the second row and $p_c$ represents the probability that the column player plays the second column.
It is clear that this game has no pure equilibria; its unique NE is the mixed profile $(p_r,p_c) = (y,x) \in [0,1]^2$. Upon close inspection, one can also see that the set of $\varepsilon$-WSNE of $G_{x,y}$ lies in $\left( B_\varepsilon(y) \times B_\varepsilon(x) \right) \cap [0,1]^2$, where $B_\varepsilon(z)$ is the set of points at distance at most $\varepsilon$ from $z$ in $\mathbb{R}$. Suppose that an algorithm, $\mathcal{A}$, is given a game $G_{x,y}$ from the family above and access to the game's strong best response query oracles. In order for $\mathcal{A}$ to compute an $\varepsilon$-WSNE, the previous observation tells us that it has to at least find a point $z \in B_\varepsilon(x)$ by querying the row player's best response oracle. From the structure of $A_x$, however, we know that for $p_c \in [0,x]$ the first row is a best response for the row player, and for $p_c \in [x,1]$ the second row is a best response for the row player. This problem formulation is identical to binary search, hence finding a $z \in B_\varepsilon(x)$ takes at least $\Omega( \log \left(\frac{1}{\varepsilon} \right) )$ queries.
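The reduction in the last paragraph can be made explicit. In the following sketch (all names are ours), the oracle answers exactly as $BR^R_s$ does on $A_x$, and halving continues until the search interval has width $\varepsilon$, after which the midpoint lies in $B_\varepsilon(x)$:

```python
def locate_x(br_row, eps):
    """Binary search over p_c using only row-player best responses:
    row 1 is a best response iff p_c <= x, so each query halves [lo, hi]."""
    lo, hi, queries = 0.0, 1.0, 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        queries += 1
        if br_row(mid) == 1:   # row 1 best response => x >= mid
            lo = mid
        else:                  # row 2 best response => x <= mid
            hi = mid
    return (lo + hi) / 2, queries

x = 0.3141
br_row = lambda p_c: 1 if p_c <= x else 2  # best-response structure of A_x
z, q = locate_x(br_row, 1e-4)              # q grows like log(1/eps)
```

The invariant $x \in [lo, hi]$ holds throughout, so the returned midpoint is within $\varepsilon$ of $x$ after roughly $\log(1/\varepsilon)$ queries, matching the lower bound.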
\end{proof}
\begin{corollary}\label{cor:exact-nash-inft}
The query complexity of computing a NE with best response queries is infinite, even when given access to strong query oracles.
\end{corollary}
\subsection{Algebraic Properties of Utility Functions}
Definition \ref{def:eps-WSNE} mentions approximate best responses, yet we only have access to the best response oracle in our model. In order to resolve this, we delve into the algebraic properties of utility functions of both the column and row player.
\begin{lemma}\label{lemma:utilities-lipschitz}
If the domains of $U^i_r$ and $U^j_c$ are endowed with the $\ell_2$ norm, then the functions are $\lambda_R$ and $\lambda_C$ Lipschitz continuous respectively, for some $0\leq \lambda_R \leq \sqrt{n-1}$ and $0
\leq \lambda_C \leq \sqrt{m-1}$. If the domains are endowed with the $\ell_1$ norm, then both functions are $1$-Lipschitz continuous.
\end{lemma}
\begin{proof}
We focus on $U^i_r$; the case for the column player is identical. Let $c = (a_0,a_1,...,a_{n-1})$ be the $i$-th row vector of the row player's payoff matrix, and suppose that $v = (v_1,...,v_{n-1}) \in \Delta^{n-1}$ is a column player mixed strategy, where $v_0 = 1 - \sum_{i=1}^{n-1} v_i$ is implicit. Let $z = (z_i)_{i=1}^{n-1}$ with $z_i = a_i - a_0$ for $i= 1,...,n-1$. Then it is clear that $U^i_r(v) = a_0 + \sum_{i=1}^{n-1}z_i v_i$. This function is affine, and trivially $\|z\|_2$-Lipschitz continuous. Since the game is normalised, $\|z\|_2 \leq \|\vec{1}\|_2 = \sqrt{n-1}$.
As for the second part of the claim, the domain of $U^i_r$ can be equivalently represented as $\Lambda^{n} = \{x \in \mathbb{R}^{n} \ | \ \|x\|_1 = 1, \ x_i \geq 0\}$ by using the invertible affine map $\phi_{n-1}: \Delta^{n-1} \rightarrow \Lambda^{n}$ given by $\phi_{n-1}(x_1,...,x_{n-1}) = \left( x_1,...,x_{n-1}, 1-\sum_{i=1}^{n-1}{x_i} \right)$. This space can be endowed with the total variation distance as a metric, which for two distributions $x,y \in \Lambda^n$ is defined as $TV(x,y) = \max_{S \subseteq [n]} | \mathbb{P}_x(S) - \mathbb{P}_y(S)| = \frac{1}{2} \|x - y\|_1$. Since utilities are bounded to the interval $[0,1]$, it follows that $U^i_r$ is 1-Lipschitz as a function with domain $\Lambda^n$ under the total variation metric.
Now suppose that $x,y \in \Delta^{n-1}$. We wish to show that $TV(\phi_{n-1}(x), \phi_{n-1}(y)) \leq \|x - y\|_1$. To see this, let $x_n = 1 - \sum_{i=1}^{n-1} x_i$ and $y_n = 1 - \sum_{i=1}^{n-1} y_i$. Then $\|\phi_{n-1}(x) - \phi_{n-1}(y)\|_1 = \|x - y\|_1 + |x_n - y_n|$. Moreover, $|x_n - y_n| = |\sum_{i=1}^{n-1}(y_i - x_i)| \leq \|x-y\|_1$, which in turn implies $\|\phi_{n-1}(x) - \phi_{n-1}(y)\|_1 \leq 2\|x- y\|_1$. Dividing the expression by 2 and applying the fact that $TV(x,y) = \frac{1}{2}\|x-y\|_1$ proves the desired inequality. 1-Lipschitz continuity in the $\ell_1$ norm for the domain $\Delta^{n-1}$ follows immediately.
\end{proof}
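Both bounds of Lemma \ref{lemma:utilities-lipschitz} can be spot-checked numerically. The sketch below is illustrative only (the payoff row \texttt{a} and all names are assumptions, not part of our construction): it samples pairs of column mixed strategies and confirms that the utility gap never exceeds $\sqrt{n-1}\,\|v-w\|_2$ or $\|v-w\|_1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                        # number of column pure strategies
a = rng.random(n)            # one normalised payoff row: entries in [0, 1]

def utility(v):
    # expected payoff of this row against the column mixed strategy v
    return float(a @ v)

def random_simplex_point(k):
    # uniform sample from the probability simplex via exponentials
    e = rng.exponential(size=k)
    return e / e.sum()

violations = 0
for _ in range(5000):
    v, w = random_simplex_point(n), random_simplex_point(n)
    gap = abs(utility(v) - utility(w))
    if gap > np.sqrt(n - 1) * np.linalg.norm(v - w) + 1e-12:
        violations += 1      # would contradict the l2 bound
    if gap > np.linalg.norm(v - w, 1) + 1e-12:
        violations += 1      # would contradict the l1 bound
```

Here the full $n$-coordinate representation is used; its $\ell_2$ norm dominates the parametrised one, so the lemma's bound still applies.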
\begin{corollary}
Since $E_C$ and $E_R$ are defined as pointwise maxima of the utility functions, they are also Lipschitz continuous in the $\ell_2$ norm, with constants $\lambda_C \leq \sqrt{m-1}$ and $\lambda_R \leq \sqrt{n-1}$ respectively.
\end{corollary}
With bounded Lipschitz continuity, we have guarantees on how much utilities can deviate between ``close'' mixed strategy profiles. This has interesting implications even for the best response query oracle, for this means that if $u$ and $u'$ are close in the $\ell_2$ norm with $c_i \in BR^C_s(u)$, $ c_j \in BR^C_s(u')$ and $c_i \neq c_j$, then we can say that $c_i$ and $c_j$ are both approximate best responses in the vicinity of $u$ and $u'$. We formalise this as follows.
\begin{lemma} \label{lemma:close-epsBR}
Fix $\varepsilon >0$ and let $\delta_C = \frac{\varepsilon}{2\sqrt{m-1}}$. Suppose that $u \in \Delta^{m-1}$ is a row player mixed strategy with $c_j \in BR^C_s(u)$. For any $u'$ such that $\|u - u'\|_2 \leq \delta_C$, if $c_i \in BR^C_s(u')$, then $|U^i_c(u) - U^j_c(u)| \leq \varepsilon$. In other words, $c_i$ is an $\varepsilon$-best response to $u$. Similarly, let $\delta_R = \frac{\varepsilon}{2\sqrt{n-1}}$. Suppose that $v \in \Delta^{n-1}$ is a column player mixed strategy with $r_j \in BR^R_s(v)$. For any $v'$ such that $\|v - v'\|_2 \leq \delta_R$, if $r_i \in BR^R_s(v')$, then $|U^i_r(v) - U^j_r(v)| \leq \varepsilon$. In other words, $r_i$ is an $\varepsilon$-best response to $v$.
\end{lemma}
\begin{proof}
Suppose that $u'$ is such that $\|u - u'\|_2 \leq \delta_C$ and $c_i \in BR^C_s(u')$. By definition, $E_C(u') = U^i_c(u') \geq U^j_c(u')$, and by Lemma \ref{lemma:utilities-lipschitz}, $|U^j_c(u) - U^j_c(u')| \leq \lambda_C \|u - u'\|_2$ and $|U^i_c(u) - U^i_c(u')| \leq \lambda_C \|u - u'\|_2$. Since $c_j \in BR^C_s(u)$, we also have $E_C(u) = U^j_c(u)$, and chaining these inequalities yields
$$
|U^j_c(u) - U^i_c(u)| \leq 2\lambda_C\|u - u'\|_2 \leq 2\sqrt{m-1} \cdot \frac{\varepsilon}{2\sqrt{m-1}} = \varepsilon.
$$
The proof of the second half of the lemma is identical.
\end{proof}
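Lemma \ref{lemma:close-epsBR} can likewise be exercised against a simulated oracle. In the sketch below the payoff matrix \texttt{B}, the argmax tie-breaking rule, and the perturbation scheme are all illustrative assumptions: a row strategy is perturbed by at most $\delta_C$ in the $\ell_2$ norm, and the best response returned at the perturbed point is checked to be an $\varepsilon$-best response at the original point.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 4
B = rng.random((m, n))       # column player's payoff matrix, entries in [0, 1]

def col_utilities(u):
    # expected utility of each column against the row mixed strategy u
    return u @ B

def best_response(u):
    # an argmax oracle: one valid answer of an adversarial oracle
    return int(np.argmax(col_utilities(u)))

eps = 0.1
delta_C = eps / (2 * np.sqrt(m - 1))

failures = 0
for _ in range(2000):
    e = rng.exponential(size=m)
    u = e / e.sum()
    p = rng.normal(size=m)
    p -= p.mean()                            # keep the coordinate sum at zero
    p *= delta_C / (2 * np.linalg.norm(p))   # step of length delta_C / 2
    u2 = np.clip(u + p, 0.0, None)
    u2 /= u2.sum()
    if np.linalg.norm(u - u2) > delta_C:
        continue                             # projection overshot; skip
    i = best_response(u2)                    # best response at the nearby point
    util = col_utilities(u)
    if util.max() - util[i] > eps + 1e-12:
        failures += 1                        # would contradict the lemma
```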
The previous Lemma establishes the important idea that we can obtain some information regarding approximate best responses using only the best response oracle and ``nearby'' queries. With some thought one can see that this in general does not reveal {\em all} approximate best response information. For example, if a strategy were strictly dominated, a best response oracle would never see it, and hence never be able to tell if it was an approximate best response.
\section{Nash's Theorem with Discrete Approximations}\label{sec:discrete-nash}
We are now in a position to prove the intimate connection between computing $\varepsilon$-close labellings of upper envelope polytope partitions and computing $\varepsilon$-WSNE for bimatrix games using best response queries.
\begin{definition}[Best Response Sets]\label{BR-sets}
Let $G = (A,B)$ be a bimatrix game. We define the column player best response sets as $C_i = \{ x \in \Delta^{m-1} \ | \ c_i \in BR^C_s(x) \}$. Similarly, we define the row player best response sets as $R_j = \{ y \in \Delta^{n-1} \ | \ r_j \in BR^R_s(y) \}$. We denote the collections by $\mathscr{C} =\{C_i\}_{i=1}^n$ and $\mathscr{R} = \{R_j\}_{j=1}^m$.
\end{definition}
Since utilities are affine functions, it is immediately clear that $\mathscr{C}$ and $\mathscr{R}$ are upper envelope polytope partitions. Now the best response oracles play the same role as the membership oracles, $Q$, from before. Since adversarial oracles are the weakest of the three membership oracles (in the sense that a lexicographic oracle is a valid adversarial oracle, and an adversarial oracle can be simulated with access to a strong oracle), we focus on using adversarial best response oracles. Furthermore, with our language of empirical labellings we can now define a key object used in the computation of approximate equilibria. Before doing so, we clarify some notation: $d(x,S)$ denotes the infimum distance of a point $x$ to a set $S$.
\begin{definition}[Voronoi Best Response Sets]\label{def:VorBR}
Suppose that $\hat{\mathscr{C}} = \{\hat{C}_i\}$ and $\hat{\mathscr{R}} = \{\hat{R}_j\}$ are empirical labellings of $\mathscr{C}$ and $\mathscr{R}$ as in Definition \ref{def:implicit-labelling}. The {\em Voronoi Best Response Sets} of the row and column player are $VR_j = \{ y \in \Delta^{n-1} \ | \ j \in \argmin_{j'} d(y,\hat{R}_{j'}) \}$ and $VC_i = \{ x \in \Delta^{m-1} \ | \ i \in \argmin_{i'} d(x,\hat{C}_{i'}) \}$, defined for any $j \in [m]$ and $i \in [n]$. Furthermore, we let $V^R(v) = \{j \ | \ v \in VR_j\}$ and $V^C(u) = \{i \ | \ u \in VC_i\}$ be the row and column player {\em Voronoi Best Responses}.
\end{definition}
\begin{lemma}\label{lemma:voronoi-closed}
Voronoi best response sets partition $\Delta^{m-1}$ and $\Delta^{n-1}$ into closed connected regions with non-empty interior and piecewise linear boundaries.
\end{lemma}
\begin{proof}
Without loss of generality, let us focus on a given column player Voronoi best response set; i.e. some non-empty $VC_i \subseteq \Delta^{m-1}$ arising from the empirical labelling $\hat{\mathscr{C}} = \{\hat{C}_i\}$ of column player best responses. First we note that $\hat{C}_i \subseteq VC_i$ by definition, and the former is a closed, convex, and hence connected polytope contained in $\Delta^{m-1}$. Therefore it remains to show that an arbitrary $x \in VC_i \setminus \hat{C}_i$ is connected to $\hat{C}_i$ within $VC_i$. To do so, let $p \subseteq \Delta^{m-1}$ be the straight-line segment from $x$ to its nearest point in $\hat{C}_i$, which is the unique shortest path between them. Moving along $p$ decreases the distance to $\hat{C}_i$ at unit rate, while the distance to any other $\hat{C}_j$ decreases at most at unit rate, so every point along $p$ also lies in $VC_i$, and the claim holds.
As for closedness, note that if $\{x_n\}$ is a sequence in $VC_i$ that converges to some $x \in \Delta^{m-1}$, then $x$ must also be in $VC_i$: each function $y \mapsto d(y,\hat{C}_j)$ is continuous on $\Delta^{m-1} \subseteq \mathbb{R}^{m}$, so the inequalities $d(x_n,\hat{C}_i) \leq d(x_n,\hat{C}_j)$ defining $VC_i$ are preserved in the limit. Hence every limit point of $VC_i$ belongs to $VC_i$, rendering the set closed.
Finally, the piecewise linear boundary arises from the fact that $\hat{C}_i$ is itself a closed convex polytope which has a piecewise linear boundary. Decision boundaries between different $\hat{C}_i$ and $\hat{C}_j$ are composed of nearest neighbour decision boundaries between piecewise linear boundaries, which in turn results in piecewise linear decision boundaries between $VC_i$ and $VC_j$.
\end{proof}
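To make Definition \ref{def:VorBR} concrete, the following sketch computes the Voronoi best responses of a point as the labels whose empirical set is nearest, with ties producing several labels. The sample sets are hypothetical stand-ins for an empirical labelling; each empirical set is represented by a finite list of queried points.

```python
import numpy as np

def voronoi_best_responses(u, hat_C, tol=1e-9):
    # labels whose empirical sample set is nearest to u (ties allowed)
    dists = [min(np.linalg.norm(u - x) for x in pts) for pts in hat_C]
    d = min(dists)
    return {i for i, di in enumerate(dists) if di <= d + tol}

# hat_C[i] is a finite sample of row strategies at which column i was
# observed as a best response (purely illustrative data)
hat_C = [
    [np.array([1.0, 0.0, 0.0])],
    [np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.5, 0.5])],
    [np.array([0.0, 0.0, 1.0])],
]
u = np.array([0.5, 0.5, 0.0])
print(voronoi_best_responses(u, hat_C))  # {0, 1}: a tie on the decision boundary
```

Points equidistant from two empirical sets receive both labels, mirroring the fact that the sets $VC_i$ overlap on their piecewise linear boundaries.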
Although the previous lemma proves that Voronoi best response sets partition $\Delta^{m-1}$ and $\Delta^{n-1}$ into closed connected regions with non-empty interior and piecewise linear boundaries, they need not be convex. This ends up not being an issue for our subsequent results. The reason we deal with these objects, however, is the following lemma. We recall that $\lambda_R \leq \sqrt{n-1}$ and $\lambda_C \leq \sqrt{m-1}$ are the relevant Lipschitz continuity constants for the row player and column player expected utility functions. The following is a straightforward consequence of Lemma \ref{lemma:close-epsBR}.
\begin{lemma}\label{lemma:Vor-to-epsBR}
Suppose that $\hat{\mathscr{C}}$ is a $\frac{\varepsilon}{2\sqrt{m-1}}$-close labelling and $\hat{\mathscr{R}}$ is a $\frac{\varepsilon}{2\sqrt{n-1}}$-close labelling. Then Voronoi best responses are $\varepsilon$-best responses in $G$.
\end{lemma}
We recall that the combinatorial formulation of Nash's theorem implies that with full information of best response sets in all of $\Delta^{m-1}$ and $\Delta^{n-1}$, one is able to compute and verify an exact Nash equilibrium. Best response queries only partially recover this information in convex patches of $\Delta^{m-1}$ and $\Delta^{n-1}$. Furthermore, it is not clear how a game $G'$ whose best response sets are consistent with the empirical best response sets of $G$ can be used to compute an approximate equilibrium of $G$. Voronoi best response sets, however, allow us to take the partial information provided by empirical best response sets and extend it to approximate best response information {\em across the entire domains $\Delta^{m-1}$ and $\Delta^{n-1}$} (Voronoi best response sets cover $\Delta^{m-1}$ and $\Delta^{n-1}$ after all). This hints at the fact that Voronoi best response sets hold enough information to compute an $\varepsilon$-WSNE. In fact we can prove this in the same way as Nash's theorem: via Kakutani's fixed point theorem. In order to do so, we define a Voronoi best response correspondence (which, as we have shown, is an approximate best response correspondence), and show that it satisfies the conditions of Kakutani's fixed point theorem. The guaranteed fixed point of this correspondence will in turn be an $\varepsilon$-WSNE.
\begin{definition}[Voronoi Approximate Best Response Correspondence]\label{def:VorBR-correspondance}
For a given mixed strategy profile of both the row and column player, $(u,v) \in \Delta^{m-1} \times \Delta^{n-1}$, we define $B^*(u,v)$ to be the set of all possible mixtures over Voronoi best response profiles both players may have to the other player's strategy. $B^*: \Delta^{m-1} \times \Delta^{n-1} \rightarrow \mathcal{P}(\Delta^{m-1} \times \Delta^{n-1})$ is defined as follows:
$$
B^*(u,v) = \left( conv(V^R(v)), conv(V^C(u)) \right) \subseteq \Delta^{m-1} \times \Delta^{n-1}.
$$
\end{definition}
\begin{theorem}[Kakutani's Fixed Point Theorem \cite{kakutani1941}]\label{thm:kakutani}
Let $A$ be a non-empty subset of a finite-dimensional Euclidean space and $f:A \rightarrow \mathcal{P}(A)$ be a set-valued function satisfying the following conditions:
\begin{itemize}
\item $A$ is a compact and convex set.
\item $f(x)$ is non-empty for all $x \in A$.
\item $f(x)$ is a convex-valued correspondence: for all $x \in A$, $f(x)$ is a convex set.
\item $f(x)$ has a closed graph: that is, if $(x_n,y_n) \rightarrow (x,y)$ with $y_n \in f(x_n)$ for all $n$, then $y \in f(x)$.
\end{itemize}
Then $f$ has a fixed point; that is, there exists some $x \in A$ such that $x \in f(x)$.
\end{theorem}
\begin{theorem}\label{thm:discrete-Nash}
$B^*$ satisfies all the conditions of Kakutani's fixed point theorem, and hence there exists a strategy profile $(u^*,v^*)$ such that $(u^*,v^*)
\in B^*(u^*,v^*)$. In particular, if the Voronoi best responses for $B^*$ arise from an $\frac{\varepsilon}{2\sqrt{m-1}}$-close labelling $\hat{\mathscr{C}}$ and an $\frac{\varepsilon}{2\sqrt{n-1}}$-close labelling $\hat{\mathscr{R}}$, then this in turn implies that $(u^*,v^*)$ is an $\varepsilon$-WSNE of $G$.
\end{theorem}
\begin{proof}
We need to prove the following conditions for Kakutani's fixed point Theorem:
\begin{itemize}
\item $B^*$ has a compact and convex domain.
\item $B^*(u,v)$ is non-empty and convex for all $(u,v) \in \Delta^{m-1} \times \Delta^{n-1}$.
\item (Graph Closedness) Suppose that $\{\sigma_n\}$ and $\{\sigma_n'\}$ are sequences in $\Delta^{m-1} \times \Delta^{n-1}$ that converge to $\sigma$ and $\sigma'$ respectively. Furthermore, suppose that $\sigma'_n \in B^*(\sigma_n)$ for all $n$. Then $\sigma' \in B^*(\sigma)$.
\end{itemize}
For the first item, the domain of $B^*$ is $\Delta^{m-1} \times \Delta^{n-1}$, which clearly satisfies the desired condition.
As for the second item, from the definition of $B^*$ the image of any $(u,v)$ is a product of convex hulls of Voronoi best responses, which are defined for all $(u,v)$ (thus satisfying non-emptiness); since convex hulls are convex and products of convex sets are convex, the image is a convex subset of $\Delta^{m-1} \times \Delta^{n-1}$.
Finally, for the third item, let us consider such a sequence where $\sigma_n = (u_n,v_n)$, $\sigma = (u,v)$, $\sigma'_n = (u_n',v_n')$, and $\sigma' = (u',v')$. To show the claim, it suffices to consider the sequences $\{u_n\}$ and $\{v_n'\}$ with respective limits $u$ and $v'$, and show that $v' \in conv(V^C(u))$.
To show this however, it suffices to use the fact that Voronoi best response sets are closed. Suppose that $u$ has a certain set $S \subset [n]$ of Voronoi best responses. Then there exists a constant $\mu > 0$ such that $B_\mu(u) \cap VC_i \neq \emptyset$ if and only if $i \in S$; namely the $\mu$ neighbourhood around $u$ only intersects Voronoi best response sets from $u$'s Voronoi best responses.
To explicitly construct such a $\mu$, let $D_i = d(u,\hat{C}_i)$ be the distance between $u$ and the empirical best response set $\hat{C}_i$. This means that $S = \argmin_i D_i$, so let us define $D = \min_i D_i$ and $\mu = \frac{\min_{j \notin S} D_j - D}{3}$, which is positive because there are finitely many empirical best response sets (if $S = [n]$, any $\mu > 0$ works). Now suppose that $x \in B_\mu(u)$. For any $j \notin S$ we have $d(u,\hat{C}_j) \leq d(u,x) + d(x,\hat{C}_j)$ by the triangle inequality, which after rearranging gives us $d(x,\hat{C}_j) \geq d(u,\hat{C}_j) - d(u,x) \geq (D + 3\mu) - \mu = D +2\mu$. On the other hand, for any $ i \in S$, $d(x,\hat{C}_i) \leq d(x,u) + d(u,\hat{C}_i) \leq D + \mu$. It thus follows that $x$ can only have Voronoi best responses from $S$.
Now from the fact that $u_n \rightarrow u$, there is some $N> 0$ such that if $n >N$ then $u_n \in B_\mu(u)$. This in turn means that $v'_n \in conv(\{c_i \ | \ i \in S\})$ by assumption, and since the convex hull of a finite set is closed, $v' \in conv(\{c_i \ | \ i \in S\}) = conv(V^C(u))$, which is what we wanted to show. To extend this to $\sigma$ and $\sigma'$, it suffices to repeat the previous argument in each component of the correspondence.
Now that the conditions of Kakutani's fixed point theorem are satisfied, we obtain the existence of a strategy profile $(u^*,v^*)$ such that $(u^*,v^*) \in B^*(u^*,v^*)$. As in the statement of the theorem, suppose that the Voronoi best responses for $B^*$ arise from an $\frac{\varepsilon}{2\sqrt{m-1}}$-close labelling of $\Delta^{m-1}$ and an $\frac{\varepsilon}{2\sqrt{n-1}}$-close labelling of $\Delta^{n-1}$; then we know that all Voronoi best responses are $\varepsilon$-best responses for both players. The conditions of the fixed point amount to saying that both players are playing convex combinations of their Voronoi best responses, therefore $(u^*,v^*)$ is an $\varepsilon$-WSNE.
\end{proof}
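The fixed point guaranteed above is useful because its well-supported condition can be verified directly. The following sketch is a generic checker for Definition \ref{def:eps-WSNE} (the function name and matrices are illustrative, not part of our construction): it tests that every pure strategy in each player's support is an $\varepsilon$-best response to the opponent's mixed strategy.

```python
import numpy as np

def is_eps_wsne(A, B, u, v, eps, tol=1e-9):
    # every pure strategy in a player's support must be an eps-best
    # response to the opponent's mixed strategy
    row_utils = A @ v        # utility of each row against v
    col_utils = u @ B        # utility of each column against u
    row_ok = all(row_utils[i] >= row_utils.max() - eps - tol
                 for i in range(len(u)) if u[i] > 0)
    col_ok = all(col_utils[j] >= col_utils.max() - eps - tol
                 for j in range(len(v)) if v[j] > 0)
    return row_ok and col_ok

# matching pennies: the uniform profile is an exact (hence 0-WSNE) equilibrium
A = np.array([[1.0, 0.0], [0.0, 1.0]])
B = 1.0 - A
u = v = np.array([0.5, 0.5])
print(is_eps_wsne(A, B, u, v, eps=0.0))  # True
```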
With Theorem \ref{thm:discrete-Nash} in hand and our algorithms for constructing $\varepsilon$-close labellings, we can put everything together and prove our desired results regarding the query complexity of computing an $\varepsilon$-WSNE in general bimatrix games.
\begin{theorem}\label{thm:final-nash-BR}
Suppose that $G$ is an $m \times n$ bimatrix game and let $n$ be constant. We can compute an $\varepsilon$-WSNE using $O(m^{n^2}\log^{n^2} \left( \frac{m}{\varepsilon} \right) ) = poly(m, \log \left( \frac{1}{\varepsilon} \right) )$ adversarial best response queries.
\end{theorem}
\begin{proof}
Suppose that $\mathscr{C}$ and $\mathscr{R}$ are the polytope partitions arising from best-response sets in $G$. This means that $\mathscr{C}$ is a $(m-1,n)$-polytope partition and $\mathscr{R}$ is a $(n-1,m)$-polytope partition. Let $\varepsilon_C = \frac{\varepsilon}{2\sqrt{m-1}}$ and $\varepsilon_R = \frac{\varepsilon}{2\sqrt{n-1}}$. From Theorem \ref{thm:discrete-Nash}, we know that computing an $\varepsilon_C$-close labelling of $\mathscr{C}$ and an $\varepsilon_R$-close labelling of $\mathscr{R}$ suffices to compute an $\varepsilon$-WSNE of $G$. We use adversarial CR-GBS on $\mathscr{C}$ and adversarial CD-GBS on $\mathscr{R}$.
$n$ is the number of polytopes in the partition $\mathscr{C}$, which is assumed to be constant. Consequently, Theorem \ref{thm:full-gbs-guarantee-adversarial} states that computing an $\varepsilon_C$-close labelling of $\mathscr{C}$ using CR-GBS uses $O((m-1)^k\log^k \left( \frac{m-1}{\varepsilon_C} \right))$ adversarial queries, where $k = \binom {n} {2}$. Since $k \leq n^2$, we can upper bound the number of queries by $O(m^{n^2}\log^{n^2} \left( \frac{m}{\varepsilon} \right) )$.
$n-1$ is the dimension of the ambient simplex in the partition $\mathscr{R}$, which is assumed to be constant. Consequently, Theorem \ref{thm:adv-CDGBS} states that computing an $\varepsilon_R$-close labelling of $\mathscr{R}$ using CD-GBS uses $O(m^{(n-1)^2}\log^{(n-1)^2} \left( \frac{1}{\varepsilon} \right) )$ queries. We trivially upper bound this quantity by $O(m^{n^2}\log^{n^2} \left( \frac{m}{\varepsilon} \right) )$.
Putting everything together, the total query usage is thus $O(m^{n^2}\log^{n^2} \left( \frac{m}{\varepsilon} \right) ) = poly(m, \log \left( \frac{1}{\varepsilon} \right) )$ as desired.
\end{proof}
\section{A Brief Foray into Multiplayer Games}
In this section we partially extend our results from Sections \ref{sec:nash-lipschitz} and \ref{sec:discrete-nash}. In particular, we show that in multiplayer games utility functions are Lipschitz continuous as well, which allows us to uncover approximate best-response information when observing different best responses at ``nearby'' mixed strategy profiles. In addition, we generalise our definitions of best response sets and of $\varepsilon$-close labellings to obtain a result similar to Theorem \ref{thm:discrete-Nash}, where we showed that obtaining a precise enough empirical labelling provides enough information to compute a well-supported approximate Nash equilibrium.
The main difference in the multiplayer setting however is that best response sets are no longer polytopes nor convex, which means that our algorithms for computing empirical labellings via generalisations of binary search no longer apply. This does not preclude us however from simply querying an $\varepsilon$-net, which as we will show will suffice for computing an $\varepsilon$-WSNE.
\subsection{Notation for Multiplayer Games}
For simplicity we will focus on games with $n$ players where each player has a strategy set $A_i$ consisting of $|A_i| = k$ pure strategies. It is straightforward to extend our results to more general games where different players have action sets of different cardinalities.
In general, we let $A = \prod_{i=1}^n A_i$ be the space of all pure strategy profiles of all players. For the $i$-th player, we also denote $A_{-i} = \prod_{j \neq i} A_j$ as the space of all pure strategy profiles of players other than $i$. Since every player has $k$ actions, it is straightforward to see that all $A_{-i}$ are isomorphic, hence without loss of generality we can assume that all $A_{-i}$ share a canonical representation. For a given pure strategy profile $a \in A$, we may wish to distinguish the pure strategy taken by the $i$-th player, and this is done by writing $a = (a_i, a_{-i})$ with $a_i \in A_i$ and $a_{-i} \in A_{-i}$.
We denote the $i$-th player's mixed strategy space by $\Delta(A)_i$, and we note it is equivalent to $\Delta^{k-1}$. This means that the space of mixed strategy profiles of all players is $\Delta(A) = \prod_{i=1}^n \Delta^{k-1} = (\Delta^{k-1})^n$. In addition, for the $i$-th player, we also denote $\Delta(A)_{-i} = \prod_{j \neq i} \Delta^{k-1} = (\Delta^{k-1})^{n-1}$ as the space of all mixed strategy profiles of players other than $i$. Once again, since all players have $k$ actions, we can assume without loss of generality that all $\Delta(A)_{-i}$ have the same canonical representation. For a mixed strategy profile $x \in \Delta(A)$, we may wish to distinguish the mixed strategy of the $i$-th player by writing $x = (x_i, x_{-i})$ with $x_i \in \Delta(A)_i$ and $x_{-i} \in \Delta(A)_{-i}$.
\begin{definition}[Multiplayer Utility Functions] \label{def:multi-utility}
For any player $i$ and action $r \in A_i$, we denote by $U_i^r:A_{-i} \rightarrow [0,1]$ the $i$-th player's utility for playing $r$. If $a = (a_i,a_{-i}) \in A$ is a pure strategy profile, the utility player $i$ receives is $U_i^{a_i}(a_{-i})$, which we denote by $U_i(a)$.
\end{definition}
Utility functions are defined for pure strategy profiles, but with a slight abuse of notation we extend the domain to include mixed strategy profiles. In particular, if $x = (x_i,x_{-i})$ is a mixed strategy profile, we let $U_i^r(x_{-i}) = \mathbb{E}_{a_{-i} \sim x_{-i}}(U_i^r(a_{-i}))$ and by extension $U_i(x) = \mathbb{E}_{a \sim x} (U_i(a))$.
As in bimatrix games, for a given player $i$ and $x_{-i} \in \Delta(A)_{-i}$, we let $E_i(x_{-i}) = \max_{r \in A_i} U_i^r(x_{-i})$ be the maximal expected utility player $i$ can obtain against the mixed strategy profile $x_{-i}$ of all other players.
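The expectation $U_i^r(x_{-i})$ over the product distribution $x_{-i}$ can be computed either by direct enumeration of pure profiles or as a sequence of tensor contractions, one per opponent. The sketch below shows both (all names are hypothetical; the payoff tensor is illustrative data):

```python
import itertools
import numpy as np

def expected_utility(U, x_minus_i):
    # U[a_1, ..., a_{n-1}] is player i's payoff for a fixed pure strategy r;
    # x_minus_i lists the other players' mixed strategies
    total = 0.0
    for profile in itertools.product(*(range(len(x)) for x in x_minus_i)):
        prob = np.prod([x[a] for x, a in zip(x_minus_i, profile)])
        total += prob * U[profile]
    return float(total)

def expected_utility_tensordot(U, x_minus_i):
    # the same expectation as a sequence of tensor contractions
    out = U
    for x in x_minus_i:
        out = np.tensordot(x, out, axes=([0], [0]))
    return float(out)

U = np.arange(6.0).reshape(2, 3) / 5.0   # payoffs in [0, 1], two opponents
x1 = np.array([0.5, 0.5])
x2 = np.array([1.0, 2.0, 3.0]) / 6.0
print(expected_utility(U, [x1, x2]))          # matches the contraction below
print(expected_utility_tensordot(U, [x1, x2]))
```

The contraction form makes the multilinearity of $U_i^r$ in the components of $x_{-i}$ explicit, which is exactly the structure used in the Lipschitz argument below.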
\begin{definition}[Best Response Query Oracles]\label{def:BR-oracle-multiplayer}
Let $G$ be an $n$-player game where each player has $k$ pure strategies:
\begin{itemize}
\item There are $n$ strong best response oracles denoted $BR^i_s: \Delta(A)_{-i} \rightarrow \mathcal{P}(A_i)$ for $i=1,...,n$. Each of these is defined by $BR^i_s(x) = \{r \in A_i \ | \ U_i^r(x) = E_i(x)\}$.
\item There are $n$ lexicographic best response oracles denoted $BR^i_\ell: \Delta(A)_{-i} \rightarrow A_i$ for $i=1,...,n$. Each of these is defined by $BR^i_\ell(x) = \min BR^i_s(x)$, the minimum being taken with respect to some consistent order on each $A_i$.
\item An adversarial best response oracle collection is a collection of $n$ functions denoted $BR^i_A: \Delta(A)_{-i} \rightarrow A_i$ for $i=1,...,n$. Each of these satisfies $BR^i_A(x) \in BR^i_s(x)$.
\end{itemize}
\end{definition}
For completeness we have defined all best response oracles, but we focus on adversarial best response oracles. As mentioned before, they are the weakest: a lexicographic oracle is a valid adversarial oracle, and adversarial oracles can be simulated with strong best response oracles. This implies that our results for adversarial oracles carry over to the other oracle models.
For a given mixed strategy, $x \in \Delta(A)_i$, we say the support of $x$ is the set of pure strategies that are played in $x$ with non-zero probability. It will be useful to formulate this as a function in order to define Nash equilibria combinatorially again.
\begin{definition}[Support Functions]
Let $S^i : \Delta(A)_i \rightarrow \mathcal{P}(A_i)$ be the function which returns the support of the $i$-th player's mixed strategy.
\end{definition}
We are now in a position to define what a Nash equilibrium is in multiplayer games.
\begin{definition}[Nash Equilibrium]\label{def:nash-multiplayer}
We say $x \in \Delta(A)$ is a Nash Equilibrium (NE) if for any player $i$ and $x_i' \in \Delta(A)_i$ it holds that $U_i(x) \geq U_i(x'_i, x_{-i})$.
\end{definition}
As before, this definition involves utility values of players at their mixed strategy profiles. Once again, there is an equivalent combinatorial formulation.
\begin{proposition}\label{prop:comb-NE-multiplayer}
$x \in \Delta(A)$ is a NE if and only if for all $i$, when $x = (x_i, x_{-i})$, $S^i(x_i) \subseteq BR^i_s(x_{-i})$.
\end{proposition}
Finally, we define what it means for a mixed strategy profile to be an $\varepsilon$-WSNE in the multiplayer setting. Before proceeding, we say that for a given $x_{-i} \in \Delta(A)_{-i}$, strategy $x \in \Delta(A)_i$ is an $\varepsilon$ best response if $U_i(x,x_{-i}) \geq U_i(x',x_{-i}) - \varepsilon$ for all $x'\in \Delta(A)_i$. Intuitively, an $\varepsilon$ best response is a mixed strategy where a player has only an $\varepsilon$ incentive to deviate.
\begin{definition}[$\varepsilon$-Well-Supported Nash Equilibrium]\label{def:eps-WSNE-multiplayer}
Suppose that $x \in \Delta(A)$ is a mixed strategy profile of all players. We say that $x$ is an $\varepsilon$-well-supported Nash equilibrium ($\varepsilon$-WSNE) if and only if for every player $i$, all pure strategies in $S^i(x_i)$ are $\varepsilon$-best responses to $x_{-i}$.
\end{definition}
\subsection{Lipschitz Continuity of Utility Functions}
As in Section \ref{sec:nash-lipschitz}, we will show that for each player $i$ and each $r \in A_i$, $U_i^r$ is a Lipschitz continuous function. In order to do so, we must regard the domain of $U_i^r$, which is $\Delta(A)_{-i} = (\Delta^{k-1})^{n-1}$, as a subset of Euclidean space endowed with the $\ell_1$ norm.
\begin{lemma} \label{lemma:utility-lipschitz-multiplayer}
For any player $i$ and action $r \in A_i$, $U_i^r$ is 1-Lipschitz when the domain is endowed with the $\ell_1$ norm.
\end{lemma}
\begin{proof}
Consider an arbitrary player $j$ with $j \neq i$. If we endow $\Delta(A)_j$ with the $\ell_1$ norm, then by Lemma \ref{lemma:utilities-lipschitz}, $U_i^r$ as a function of $x_j \in \Delta(A)_j$ alone (which is a component of $\Delta(A)_{-i}$) is 1-Lipschitz. This is because if all mixed strategies other than those of players $i$ and $j$ are fixed, we obtain a $k \times k$ bimatrix game between players $i$ and $j$. Now let $x,y \in \Delta(A)_{-i}$, with components indexed $1,\dots,n-1$, and telescope over one component at a time (consecutive terms below differ only in their $(j+1)$-th component):
\begin{equation}
\begin{split}
|U_i^r(x) - U_i^r(y)| & = \left| \sum_{j=0}^{n-2} \left( U_i^r(y_1,\dots,y_{j},x_{j+1},\dots,x_{n-1}) - U_i^r(y_1,\dots,y_{j+1},x_{j+2},\dots,x_{n-1}) \right) \right| \\
& \leq \sum_{j=0}^{n-2} \left| U_i^r(y_1,\dots,y_{j},x_{j+1},\dots,x_{n-1}) - U_i^r(y_1,\dots,y_{j+1},x_{j+2},\dots,x_{n-1}) \right| \\
& \leq \sum_{j=1}^{n-1} \|x_j - y_j\|_1 \\
& = \|x - y\|_1
\end{split}
\end{equation}
\end{proof}
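The telescoping bound can be spot-checked numerically: for a random payoff tensor with entries in $[0,1]$, the multilinear expected utility never moves by more than the $\ell_1$ distance between two opposing profiles. A minimal sketch under these assumptions (the tensor and sampling scheme are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
k, n_others = 3, 3
U = rng.random((k,) * n_others)          # payoff tensor, entries in [0, 1]

def multilinear(U, xs):
    # expected utility under a product of mixed strategies
    out = U
    for x in xs:
        out = np.tensordot(x, out, axes=([0], [0]))
    return float(out)

def simplex(k):
    e = rng.exponential(size=k)
    return e / e.sum()

violations = 0
for _ in range(2000):
    xs = [simplex(k) for _ in range(n_others)]
    ys = [simplex(k) for _ in range(n_others)]
    gap = abs(multilinear(U, xs) - multilinear(U, ys))
    l1 = sum(np.linalg.norm(x - y, 1) for x, y in zip(xs, ys))
    if gap > l1 + 1e-12:
        violations += 1                  # would contradict 1-Lipschitzness
```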
Since Lipschitz continuity is maintained over maxima, with the same Lipschitz constant, we get the following result that says that best responses to nearby mixed strategy profiles are in fact approximate best responses.
\begin{lemma} \label{lemma:multiplayer-lipschitz}
$E_i : \Delta(A)_{-i} \rightarrow [0,1]$ is 1-Lipschitz when its domain is endowed with the $\ell_1$ norm. In particular, if $x,y \in \Delta(A)_{-i}$ and $\|x - y\|_1 \leq \frac{\varepsilon}{2}$, then any $r \in BR^i_s(y)$ is an $\varepsilon$ best response to $x$ for player $i$.
\end{lemma}
\subsection{Nash's Theorem with Discrete Approximations in Multiplayer Games}
Just as in bimatrix games, we have a notion of best response sets.
\begin{definition}[Multiplayer Best Response Sets] \label{def:multiplayer-BR-sets}
Let $G$ be a game with $n$ players, each with $k$ strategies. For a player $i$ and pure strategy $r \in A_i$, we define $P_i^r = \{x \in \Delta(A)_{-i} \ | \ r \in BR^i_s(x)\}$. We say that $P_i^r$ is the best response set corresponding to strategy $r$ for player $i$, and note that $\{P_i^r\}_{r \in A_i}$ cover $\Delta(A)_{-i}$.
\end{definition}
In bimatrix games our goal was to learn $\varepsilon$-close labellings of the polytope partitions induced by best response sets. In the multiplayer setting that goal can be generalised.
\begin{definition}[Multiplayer $\varepsilon$-close labellings]
Let $G$ be a game with $n$ players, each with $k$ actions, and let $i$ be a specific player in the game. Suppose that for each action $r \in A_i$, $\hat{P}_i^r \subseteq P_i^r$ is a closed set. We say the collection $\{\hat{P}_i^r\}_{r \in A_i}$ is an empirical labelling of $\Delta(A)_{-i}$. If in addition $\bigcup_{r \in A_i} \hat{P}_i^r$ is an $\varepsilon$-net of $\Delta(A)_{-i}$ in the $\ell_1$ norm, we say that the collection $\{\hat{P}_i^r\}_{r \in A_i}$ is an $\varepsilon$-close labelling of $\Delta(A)_{-i}$.
\end{definition}
As we will see shortly, if we manage to compute an $\frac{\varepsilon}{2}$-close labelling for all $\Delta(A)_{-i}$, we have enough information to compute an approximate equilibrium.
In bimatrix games, each $P_i^r$ is a polytope, but in multiplayer games, expected utilities are no longer linear in $\Delta(A)_{-i}$. Consequently, best response sets are semi-algebraic sets instead. This means that in general best response sets need not be connected, let alone convex. Without convexity and polytope structure we can no longer use binary search methods to learn $\varepsilon$-close labellings. As mentioned before, this does not preclude us from computing an $\varepsilon$-close labelling via the brute force method of querying an entire $\varepsilon$-net of $\Delta(A)_{-i}$. Before we show this suffices, however, we show the key result of this section: computing $\frac{\varepsilon}{2}$-close labellings for all $\Delta(A)_{-i}$ suffices to compute an $\varepsilon$-WSNE. To do so we revisit Voronoi best response sets. In what follows we let $d(x,S)$ denote the infimum distance of a point $x$ to a set $S$ in the $\ell_1$ norm.
\begin{definition}[Multiplayer Voronoi Best Response Functions and Best Response Sets]
Let $G$ be a game with $n$ players, each with $k$ actions, and let $i$ be a specific player in the game. Suppose that $\{\hat{P}_i^r\}_{r \in A_i}$ is an empirical labelling of $\Delta(A)_{-i}$. Player $i$'s Voronoi Best Response function is the set-valued function $V^i: \Delta(A)_{-i} \rightarrow \mathcal{P}(A_i)$ defined by $V^i(x) = \argmin_{r \in A_i} d(x,\hat{P}_i^r)$, the set of all minimisers. We also define player $i$'s Voronoi Best Response Sets as $V_i^r = \{x \in \Delta(A)_{-i} \ | \ r \in V^i(x) \}$.
\end{definition}
If we invoke Lemma \ref{lemma:multiplayer-lipschitz}, we obtain the same result as in bimatrix games whereby Voronoi best responses are actually approximate best responses.
\begin{lemma}\label{lemma:multiplayer-voronoi-approximateBR}
Suppose that $\{\hat{P}_i^r\}_{r \in A_i}$ is an $\frac{\varepsilon}{2}$-close labelling of $\Delta(A)_{-i}$. Then Voronoi best responses for player $i$ are $\varepsilon$-best responses in $G$.
\end{lemma}
In addition, our definition of empirical labellings stipulated that $\hat{P}_i^r$ are all closed sets. This is a property which is inherited by Voronoi Best Response Sets.
\begin{lemma}
Voronoi best response sets are closed.
\end{lemma}
\begin{proof}
The proof is identical to that of Lemma \ref{lemma:voronoi-closed}, since $\ell_1$ distance is still a continuous function on the relevant domain $\Delta(A)_{-i}$.
\end{proof}
We now define the generalisation to the Voronoi Best Response Correspondence from before.
\begin{definition}[Multiplayer Voronoi Best Response Correspondence]
Suppose that $x \in \Delta(A)$ is a mixed strategy profile. We define the Voronoi Best Response Correspondence $B^*: \Delta(A) \rightarrow \mathcal{P}( \Delta(A))$ as follows:
$$
B^*(x) = \prod_{i=1}^n conv(V^i(x_{-i})) \subseteq \Delta(A).
$$
\end{definition}
\begin{theorem}\label{thm:discrete-nash-multiplayer}
$B^*$ satisfies all the conditions of Kakutani's fixed point theorem, and hence there exists a mixed strategy profile $x^* \in \Delta(A)$ such that $x^* \in B^*(x^*)$. In particular, if the Voronoi best responses for $B^*$ arise from $\frac{\varepsilon}{2}$-close labellings for all $\Delta(A)_{-i}$, then this in turn implies that $x^*$ is an $\varepsilon$-WSNE.
\end{theorem}
\begin{proof}
As in the proof of Theorem \ref{thm:discrete-Nash}, the first three conditions of Kakutani's fixed point theorem are trivial. We focus on proving that $B^*$ has a closed graph. It suffices to show the following for any player $i$: if $\{x_n\}$ is a sequence in $\Delta(A)_{-i}$ that converges to $x$, and $\{y_n\}$ is a sequence in $\Delta(A)_i$ converging to $y$ with the property that $y_n \in conv(V^i(x_n))$ for all $n$, then $y \in conv(V^i(x))$.
To show this, we prove that there exists a constant $\delta > 0$ with the property that $B^1_\delta(x) \cap V_i^r \neq \emptyset$ if and only if $r \in V^i(x)$. Here $B^1_\delta(x)$ denotes the $\ell_1$ ball of radius $\delta$ around $x$. We prove this claim by contradiction.
Suppose instead that for every $\delta > 0$, $B^1_\delta(x)$ contains some point of a $V_i^r$ with $r \notin V^i(x)$. Then for each $k \in \mathbb{N}$ we may pick $z_k \in B^1_{1/k}(x)$ lying in some $V_i^r$ with $r \notin V^i(x)$; the resulting sequence $\{z_k\}$ of elements of $\Delta(A)_{-i}$ converges to $x$. There are only finitely many Voronoi best response sets, hence this sequence must contain an infinite subsequence of points belonging to the same Voronoi best response set, say $V_i^{r'}$, where $r'$ does not belong to $V^i(x)$. This implies that $x$ is a limit point of $V_i^{r'}$, and since this set is closed, $x \in V_i^{r'}$, which contradicts the fact that $r' \notin V^i(x)$, thus proving our desired claim.
Returning to the sequences $\{x_n\}$ and $\{y_n\}$, the existence of a fixed $\delta > 0$ such that $B^1_\delta(x) \cap V_i^r \neq \emptyset$ if and only if $r \in V^i(x)$ means that there is some $N > 0$ such that for all $n > N$ we have $x_n \in B^1_\delta(x)$, which in turn means that $y_n \in conv(V^i(x))$. This in turn means that $y \in conv(V^i(x))$, as desired. Applying this result to each $i$ yields the graph-closedness of $B^*$, thus establishing that $B^*$ satisfies all the conditions of Kakutani's fixed point theorem.
As for the second part of the theorem, let $x^*$ be the fixed point of $B^*$ guaranteed by Kakutani's fixed point theorem. From Lemma \ref{lemma:multiplayer-voronoi-approximateBR}, we know that Voronoi best responses are $\varepsilon$-best responses. By the definition of $B^*$, the support of each $x^*_i \in \Delta(A)_i$ consists of $\varepsilon$-best responses to $x^*$, thus establishing that $x^*$ is an $\varepsilon$-WSNE.
\end{proof}
With the previous theorem in hand, we have established that computing $\frac{\varepsilon}{2}$-close labellings of all $\Delta(A)_{-i}$ suffices to compute an $\varepsilon$-WSNE. Although the lack of convexity in best response sets prevents us from using binary search techniques, we can always query an $\varepsilon$-net of $\Delta(A)_{-i}$ in the $\ell_1$ norm. In this vein, we construct an explicit $\varepsilon$-net for $\Delta(A)_{-i}$.
\begin{lemma}
$M^n_\varepsilon = \left( \frac{2\varepsilon}{n}\mathbb{Z} \right)^{n} \bigcap \Delta^n$ is an $\varepsilon$-net in the $\ell_1$ norm for $\Delta^n$. Furthermore, $|M^n_\varepsilon| = O\left( (\frac{n}{2\varepsilon} + n)^n \right)$.
\end{lemma}
\begin{proof}
The first claim follows from noting that the lattice points are the vertices of axis-aligned hypercubes of side length $\frac{2\varepsilon}{n}$. These cubes have a diagonal of length $2\varepsilon$ in the $\ell_1$ norm, so every point of $\Delta^n$ is within $\varepsilon$ of some lattice point.
As for the cardinality, using a stars and bars argument, one can see that if $S = \frac{1}{\kappa} \mathbb{Z}^{n} \cap \Delta^n$ for some $\kappa \in \mathbb{N}$, then $|S| = \binom {\kappa + n}{n} = O\left( (\kappa + n)^{n} \right)$.
\end{proof}
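As a concrete illustration of the lemma (not part of the paper), the sketch below reads $\Delta^n$ as the probability simplex $\{x\in\bR^n_{\ge 0}:\sum_i x_i=1\}$, under which the exact lattice count is $\binom{\kappa+n-1}{n-1}$ with $\kappa = n/(2\varepsilon)$, matching the stated $O\left((\frac{n}{2\varepsilon}+n)^n\right)$ bound. The function and variable names are ours.

```python
import itertools, math, random

def simplex_lattice(n, kappa):
    """All points of (1/kappa)Z^n lying on the probability simplex."""
    pts = []
    for ks in itertools.product(range(kappa + 1), repeat=n - 1):
        if sum(ks) <= kappa:
            last = kappa - sum(ks)          # remaining mass in the last coordinate
            pts.append(tuple(k / kappa for k in ks + (last,)))
    return pts

n, kappa = 3, 6                  # kappa = n/(2*eps)  =>  eps = n/(2*kappa) = 0.25
eps = n / (2 * kappa)
net = simplex_lattice(n, kappa)
assert len(net) == math.comb(kappa + n - 1, n - 1)   # stars and bars

# Empirical check of the eps-net property in the l1 norm.
random.seed(0)
for _ in range(200):
    e = [-math.log(random.random()) for _ in range(n)]
    x = [v / sum(e) for v in e]                        # ~ uniform on the simplex
    d = min(sum(abs(a - b) for a, b in zip(x, p)) for p in net)
    assert d <= eps + 1e-12
```

The enumeration is exponential in $n$, which is consistent with the cardinality bound; the point of the lemma is only that the net is finite and explicit.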
Since $\Delta(A)_{-i}$ is a product of simplices, we can take products of the above $\varepsilon$-net constructions to in turn obtain an $\varepsilon$-net for $\Delta(A)_{-i}$.
\begin{lemma}\label{lemma-lattice-multiplayer}
Suppose that $\varepsilon > 0$ and let $\varepsilon' = \frac{\varepsilon}{n-1}$. Furthermore, let us define the set $H_\varepsilon^{n,k} = (M_{\varepsilon'}^{k-1})^{n-1} \subset \Delta(A)_{-i} \cong (\Delta^{k-1})^{n-1}$. Then $H_\varepsilon^{n,k}$ is an $\varepsilon$-net for $\Delta(A)_{-i}$ in the $\ell_1$ norm. Furthermore, $|H_\varepsilon^{n,k}| = O\left( (\frac{nk}{2\varepsilon})^{nk} \right)$.
\end{lemma}
Querying all points in an $\varepsilon$-net trivially gives rise to an $\varepsilon$-close labelling. Consequently, with Lemma \ref{lemma-lattice-multiplayer} and Theorem \ref{thm:discrete-nash-multiplayer} in hand, we have proven our main result regarding the query complexity of computing an $\varepsilon$-WSNE using adversarial Best Response Queries.
\begin{theorem}
Suppose that $G$ is a game with $n$ players with $k$ pure strategies each. One can compute an $\varepsilon$-WSNE of $G$ using $O\left( n(\frac{nk}{\varepsilon})^{nk} \right)$ adversarial Best Response Queries.
\end{theorem}
\section{Conclusion and Future Directions}\label{sec:conclusion}
In this paper we introduced the concept of learning $\varepsilon$-close labellings of $(m,n)$-polytope partitions with membership queries, and derived query efficient algorithms for when either the dimension of the ambient simplex in the polytope partition, $m$, is held constant, or when the number of polytopes in the partition, $n$, is held constant.
Most importantly, we introduced a novel reduction from computing $\varepsilon$-WSNE with best response queries to this geometric problem, thus allowing us to show that in the best response query model, computing $\varepsilon$-WSNE of a bimatrix game has a finite query complexity. More specifically, for $m \times n$ games with $\min(m,n)$ constant, the query complexity is polynomial in $\max(m,n)$ and $\log \left( \frac{1}{\varepsilon} \right)$. Furthermore, we partially extended our results from bimatrix games to multi-player games. Although the underlying geometry in multi-player games prevents us from using our results from learning polytope partitions, we were still able to show that querying a fine enough $\varepsilon$-net of the mixed strategy space of all players suffices to compute an $\varepsilon$-WSNE.
As mentioned in the introduction, this geometric framework could be of use in other areas where Lipschitz continuous structures appear over domains with convex partitions. Upon further inspection, it is not difficult to see that polytope partitions do not need to be contained in $\Delta^m$, and in fact our algorithms extend to arbitrary ambient polytopes.
Furthermore, it would be of great interest to create algorithms with a better query cost, prove lower bounds with regards to computing $\varepsilon$-close labellings, or simply explore weaker query paradigms, such as noisy membership oracles. Finally, we have mentioned that in the multiplayer setting, best response sets are no longer polytopes, but rather semi-algebraic sets. It would be of interest to create learning algorithms for $\varepsilon$-close labellings of these more complicated geometric objects, since doing so suffices to compute $\varepsilon$-WSNE.
\bibliographystyle{plainnat}
\medskip
\noindent (arXiv:1807.06170, \emph{Learning Convex Partitions and Computing Game-theoretic Equilibria from Best Response Queries}; Computer Science and Game Theory (cs.GT), Machine Learning (cs.LG).)
\bigskip
\noindent\textbf{Asymptotic profiles of zero points of solutions to the heat equation} (arXiv:2109.14559)

\noindent\textbf{Abstract.} In this paper, we consider the asymptotic profiles of zero points for the spatial variable of the solutions to the heat equation. By giving suitable conditions for the initial data, we prove the existence of zero points by extending the high-order asymptotic expansion theory for the heat equation. This reveals a previously unknown asymptotic profile of zero points diverging at $O(t)$. In a one-dimensional spatial case, we show the zero point's second and third-order asymptotic profiles in a general situation. We also analyze a zero-level set in high-dimensional spaces and obtain results that extend the results for the one-dimensional spatial case.

\section{Introduction}
We consider a Cauchy problem
\be\label{eq:main}
\begin{cases}
\dfrac{\partial u}{\partial t}=\dfrac{\partial^2 u}{\partial x^2} \quad (t>0,\ x\in\bR), \vspace{3mm}\\
u(0,x)=f(x) \quad (x\in\bR),
\end{cases}
\ee
where $u=u(t,x)\in\bR\ (t>0,x\in\bR)$ and $f\in L^1(\bR)\cap L^{\infty}(\bR)$.
Throughout this paper, we assume that $\| f \|_{L^1}\neq 0$.
The aim of this paper is the analysis of the movement of the zero level set $Z(t):=\{{x}\in\bR\mid u(t,{x})=0 \}$,
where $u(t,{x})$ is the explicitly represented solution
\be\label{basic sol.}
u(t,{x})=(G(t)*f)(x)=\int_{\bR} G(t,x-y) f(y)dy\quad (t>0,\ x\in\bR)
\ee
of \eqref{eq:main}.
Here, $G(t,{x})$ is the heat kernel defined as
\be \label{heat-ker}
G(t, {x}) := \dfrac{1}{\sqrt{4\pi t}} e^{-x^2/4t}.
\ee
The analysis of the zero level set is important for understanding the behavior of sign-changing solutions
of parabolic equations.
To analyze the behavior of sign-changing solutions, many researchers have focused on the zero level set or the sign-changing number of the initial data \cite{Angenent, DGM, Matano, Mizoguchi, Chung}.
In the case of parabolic equations,
Angenent \cite{Angenent} proved that $Z(t)$ is a discrete subset of $\bR$.
Furthermore, it is known that the number of elements of $Z(t)$
does not increase as time passes \cite{DGM, Matano}.
In investigating the dynamics of sign-changing solutions in detail,
the asymptotic behavior of $Z(t)$ has been analyzed for the heat equation.
Mizoguchi \cite{Mizoguchi} estimated the upper bound of $Z(t)$ in the case that $f(x)$ changes sign finitely many times
and proved $Z(t)\subset [-C_0 t, C_0 t]$ for sufficiently large $t>0$ with some $C_0>0$.
Moreover, it has been reported that there exist $f(x)$ and $x^{*}(t)\in Z(t)$ such that
$x^{*}(t)>C_1 t$ holds for sufficiently large $t>0$ with some $C_1>0$.
Chung \cite{Chung} analyzed the asymptotic behavior of zero points when $f(x)$ belongs to $L^1(\bR, 1+ |x|^{k+1})$ for some $k\in\bN$ and satisfies
\be\label{Assu}
\int_{\bR} y^{j} f(y)dy =0\ (j=0,1,\ldots,k-1),\quad \int_{\bR} y^{k} f(y)dy \neq 0.
\ee
By the results of \cite{Chung}, for sufficiently large $t>0$, there exist $x^{*}_{j}(t)\in Z(t)\ (j=1,2,\ldots,k)$
such that
\beaa
\displaystyle \lim_{t\to+\infty} \dfrac{x^{*}_{j}(t)}{2\sqrt{t}} =h_j,
\eeaa
where $h_j\ (j=1,2,\ldots,k)$ are mutually different zero points of the Hermite polynomial $H_k(x)$ with the order $k$ defined as
\beaa
H_k(x):= (-1)^{k} e^{x^2} \dfrac{d^k}{dx^k}[e^{-x^2}].
\eeaa
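For reference (an illustration, not part of \cite{Chung}), the zeros $h_j$ of $H_k$ can be computed numerically. NumPy's \texttt{numpy.polynomial.hermite} module uses exactly the physicists' convention defined above, so passing the coefficient vector that selects $H_k$ yields its roots:

```python
import numpy as np

def hermite_zeros(k):
    # coefficients [0, ..., 0, 1] select H_k in the physicists' Hermite basis
    return np.sort(np.polynomial.hermite.hermroots([0] * k + [1]))

print(hermite_zeros(1))  # H_1(x) = 2x, single zero at 0
print(hermite_zeros(2))  # H_2(x) = 4x^2 - 2, zeros at ±1/sqrt(2)
```

These are the values $h_j$ appearing in the $\sqrt{t}$-scale profiles of the zero points.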
The proof of these results is based on the Hermite polynomial approximation deduced using the asymptotic expansion for the heat equation.
In particular, when $f(x)$ belongs to $L^1(\bR, 1+ |x|^{k+1})$ for some $k\in\bN$ and satisfies \eqref{Assu},
it is well-known that the solution \eqref{basic sol.} of \eqref{eq:main} converges uniformly to
\beaa
\dfrac{1}{k!} \left(\int_{\bR} y^{k} f(y)dy\right) (4t)^{-k/2} G(t,x) H_{k}\left( \dfrac{x}{2\sqrt{t}} \right)
\eeaa
as $t\to+\infty$ (see \cite{Chung} and the references therein).
We expect from these results that the elements of $Z(t)$ have various asymptotic profiles.
However, there is no result for asymptotic profiles without the condition \eqref{Assu}.
Furthermore, although there are cases in which an element of $Z(t)$ behaves like $O(t)$ as $t\to+\infty$,
the coefficient of this leading term has not been characterized.
The purpose of this paper is to give asymptotic profiles of the elements of $Z(t)$ in detail.
The paper is organized as follows:
In Section \ref{sec:main}, we state the main results of the asymptotic profiles of elements of $Z(t)$.
Examples of the results are given in Section \ref{sec:exam}.
Section \ref{sec:proof} provides proofs of the main results.
Finally, we mention the case of the heat equation in high dimensional space in Section \ref{sec:d} and related problems in Section \ref{sec:dis}.
\section{Main results}\label{sec:main}
\subsection{General cases}
Suppose that $f(x)$ satisfies
\be\label{condi1}\tag{H1}
\forall\lambda\in\bR, \quad \int_{\bR} e^{\lambda y} |f(y)| dy<\infty.
\ee
Define the bilateral Laplace transform of $f(x)$ and the zero level set on $\bR$ as
\beaa
F(\eta):=\displaystyle \int_{\bR} e^{\eta y} f(y) dy,\quad \cN(F):=\{ \eta\in\bR\mid F(\eta)=0 \}.
\eeaa
When $\eta_0\in\cN(F)$ satisfies
\beaa
F^{(j)}(\eta_0) &=& \displaystyle \int_{\bR} y^{j} e^{\eta y} f(y)dy = 0\quad (j=0,\ldots,k-1), \\
F^{(k)}(\eta_0) &=&\displaystyle \int_{\bR} y^{k} e^{\eta y} f(y)dy\neq 0
\eeaa
for some $k\in\bN$,
we say that $\eta_0\in\cN(F)$ has multiplicity $k$.
Since $F(\eta)$ extends to an entire function on $\bC$ and $F\not\equiv 0$,
the set $\cN(F)$ is discrete.
Moreover, for any $\eta_0\in\cN(F)$, there exists $k\in\bN$ such that $\eta_0$ has multiplicity $k$.
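For concreteness, $F(\eta)$ and its zero set can be approximated by quadrature. The sketch below (an illustration; function names are ours) uses the initial data $f_a$ of Example \ref{exam1} below with $a=4$, for which the closed form is $F(\eta)=2e^{\eta^2}-4e^{\eta^2/4}$, and checks that $F$ changes sign at the closed-form zero:

```python
import math

def F(eta, a=4.0, L=25.0, m=50001):
    # trapezoidal approximation of \int e^{eta*y} f_a(y) dy on [-L, L],
    # with f_a(y) = (e^{-y^2/4} - a e^{-y^2}) / sqrt(pi)
    h = 2 * L / (m - 1)
    s = 0.0
    for i in range(m):
        y = -L + i * h
        w = 0.5 if i in (0, m - 1) else 1.0
        fa = (math.exp(-y * y / 4) - a * math.exp(-y * y)) / math.sqrt(math.pi)
        s += w * math.exp(eta * y) * fa
    return s * h

r = math.sqrt(4.0 / 3.0 * math.log(2.0))   # closed-form positive zero for a = 4
assert F(r - 0.1) < 0 < F(r + 0.1)          # F changes sign at eta = r
```

The truncation at $|y|\le L$ is harmless here because the integrand decays like a Gaussian.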
We then obtain the following results:
\begin{theorem}\label{thm:mainA}
Suppose that $f\in L^1(\bR)\cap L^{\infty}(\bR)$ satisfies \eqref{condi1}.
Assume that $\cN(F)\neq \emptyset$.
Then, for all $\eta_0\in\cN(F)$ with multiplicity $k\in\bN$,
there exist $T>0$ and $x^{*}_j(t)\in Z(t)\ (j=1,\ldots,k)$ for $t>T$ satisfying
\beaa
x^{*}_j(t)= 2t \eta_{0}+ 2\sqrt{t}h_j + \dfrac{F^{(k+1)}(\eta_0)}{(k+1) F^{(k)}(\eta_0)}+o(1) \quad (t\to +\infty)
\eeaa
for $j=1,\ldots,k$, where $h_j\ (j=1,\ldots,k)$ are mutually different zero points of the Hermite polynomial $H_k(x)$
with the order $k$.
\end{theorem}
\begin{remark}
The zero points $x^{*}_j(t)\in Z(t)\ (j=1,\ldots,k)$ obtained in Theorem \ref{thm:mainA} can be chosen so that $x^{*}_{j}\in C(T,\infty)\ (j=1,\ldots,k)$, by applying the argument and results of \cite{Angenent}.
\end{remark}
From Theorem \ref{thm:mainA}, for any $\eta_0\in \cN(F)$ and sufficiently large $t>0$,
there exists $x^{*}(t)\in Z(t)$ such that
\beaa
\displaystyle \lim_{t\to+\infty} \dfrac{x^{*}(t)}{2t} = \eta_0.
\eeaa
Thus, if $\eta_0\neq 0$, then $x^{*}(t)$ diverges with order $t$ as time passes.
In the case that $0\in \cN(F)$ with multiplicity $k>1$,
for sufficiently large $t>0$, there exists $x^{*}(t)\in Z(t)$ such that
\beaa
\displaystyle \lim_{t\to+\infty} \dfrac{x^{*}(t)}{2\sqrt{t}} = h,
\eeaa
where $h$ is a zero point of $H_k(x)$.
When $0\in \cN(F)$ with multiplicity $k$ and $k$ is odd,
for sufficiently large $t>0$, there is $x^{*}(t)\in Z(t)$ satisfying
\beaa
\displaystyle \lim_{t\to+\infty} x^{*}(t) = \dfrac{F^{(k+1)}(0)}{(k+1) F^{(k)}(0)}
= \dfrac{\displaystyle \int_{\bR} y^{k+1}f(y)dy}{(k+1)\displaystyle \int_{\bR} y^{k} f(y)dy}.
\eeaa
Hence, by analyzing the zero points of $F(\eta)$ and their multiplicities,
we can understand the long-time behavior of the elements of $Z(t)$.
We next give a result on the asymptotic behavior of an element of $Z(t)$.
\begin{theorem}\label{thm:mainC}
Suppose that $f\in L^1(\bR)\cap L^{\infty}(\bR)$ satisfies \eqref{condi1}.
Assume that there exist $T>0$ and $x^{*}(t)\in Z(t)$ for $t>T$ satisfying $x^{*}\in C(T,\infty)$ and
\be\label{limsup}
\limsup_{t\to +\infty} \left| \dfrac{x^{*}(t)}{2t} \right| <\infty.
\ee
Then, $\theta:= \displaystyle \lim_{t\to+\infty} \dfrac{x^{*}(t)}{2t}$ exists and belongs to $\cN(F)$.
\end{theorem}
Finally, we mention the case that $\cN(F)=\emptyset$.
Before stating the result, we introduce a notation.
For a function $f$ on $\bR$ with $f(x)\not\equiv 0$, let $z(f)$ be the number of sign changes; i.e. the supremum of $j$ such that
\beaa
f(x_{i})f(x_{i+1})<0,\quad i=1,2,\ldots,j
\eeaa
for some $-\infty<x_1<x_2<\ldots<x_{j+1}<+\infty$.
If $f(x)$ satisfies $z(f)<+\infty$,
then \eqref{limsup} in the statement of Theorem \ref{thm:mainC} always holds by Theorem 1.1 in \cite{Mizoguchi}.
We thus deduce the following result from a proof by contradiction.
\begin{corollary}\label{thm:mainB}
Suppose that $f\in L^1(\bR)\cap L^{\infty}(\bR)$ satisfies \eqref{condi1}.
Assume that $\cN(F)= \emptyset$ and $z(f)<\infty$.
There then exists $T>0$ such that $Z(t)=\emptyset $ for any $t>T$.
Furthermore, for all $x\in\bR$ and $t>T$, $u(t,x)$ has the same sign as $\int_{\bR}f(y)dy$.
\end{corollary}
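The sign-change count $z(f)$ defined above admits a direct computation for sampled data. The following helper (an illustration, not part of the paper; names are ours) skips zero samples and counts adjacent sign flips:

```python
def sign_changes(values, tol=0.0):
    # drop (near-)zero samples, then count adjacent sign flips
    signs = [1 if v > tol else -1 for v in values if abs(v) > tol]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# f(x) = x(x-1)(x+1) is negative on (-inf,-1), positive on (-1,0),
# negative on (0,1), positive on (1,inf): three sign changes.
f = lambda x: x * (x - 1) * (x + 1)
xs = [-2.0, -0.5, 0.5, 2.0]      # one sample point per sign interval
assert sign_changes([f(x) for x in xs]) == 3
```

Of course, for a genuine $f$ this only bounds $z(f)$ from below by the count observed on the chosen samples.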
\subsection{The case that $f(x)$ satisfies $z(f)=1$}
To analyze $Z(t)$ for all $t>0$, we introduce the assumption that
\be\label{condi2}\tag{H2}
\begin{cases}
f\in C^{3}(\bR),\ f(-x)\ge 0\ge f(x)\ (x>0), \\
\exists {x^{-}},{x^{+}}>0\quad s.t.\quad f(-x^{-})f(x^{+})<0,\\
f'(0)\neq 0,\quad \displaystyle \sup_{x\in\bR} |f'''(x)|<\infty.
\end{cases}
\ee
We obtain the following result:
\begin{theorem}\label{thm:main1}
Suppose that $f\in L^1(\bR)\cap L^{\infty}(\bR)$ satisfies \eqref{condi1} and \eqref{condi2}.
Let $u(t,x)$ be the solution \eqref{basic sol.} of \eqref{eq:main}.
Then, for any $t>0$, there exists $x^{*}(t)\in\bR$ satisfying $Z(t)=\{ x^{*}(t) \}$.
Moreover, $x^{*}(t)$ satisfies
\beaa
x^{*}(t)=
\begin{cases}
-\dfrac{t f''(0)}{f'(0)} + o(t) &(t\to +0),\\
2t \eta_{0}+ \dfrac{F''(\eta_0)}{2 F'(\eta_0)} + o(1)
&(t\to \infty),
\end{cases}
\eeaa
where $\eta_{0}$ is a unique constant satisfying $\cN(F)=\{\eta_0\}$
and has the same sign as $\displaystyle \int_{\bR}f(y)dy$.
In particular, $\eta_{0}=0$ if $\displaystyle \int_{\bR}f(y)dy=0$.
\end{theorem}
\section{Example}\label{sec:exam}
\begin{example}\label{exam1}
We consider the case that $f(x)=f_{a}(x)=\dfrac{1}{\sqrt{\pi}} (e^{-x^2/4}-a e^{-x^2})$,
where $a>1$.
Then,
\beaa
F_{a}(\eta)= \int_{\bR} e^{\eta y} f_{a}(y)dy = 2 e^{\eta^2} -a e^{\eta^2/4}.
\eeaa
From the direct computation, we have
\beaa
\cN(F) =
\begin{cases}
\{ -r, r \} &(a>2), \\
\{ 0 \} & (a=2), \\
\emptyset & (a<2),
\end{cases}
\eeaa
where $r = \sqrt{\dfrac{4}{3} \log\dfrac{a}{2}}$.
By applying Theorem \ref{thm:mainA}, when $a>2$, there exist $x^{*}_j(t)\in Z(t)\ (j=1,2)$ for sufficiently large $t>0$ satisfying
$|x^{*}_j(t)| = 2rt + O(1) \ (j=1,2)$ as $t\to+\infty$.
Additionally, if $a=2$, then there exist $x^{*}_j(t)\in Z(t)\ (j=1,2)$ for sufficiently large $t>0$ such that
$|x^{*}_j(t)| = \sqrt{2t}+O(1)\ (j=1,2)$ as $t\to+\infty$,
because we know that $H_2(x)=4x^2-2$.
In the case that $a<2$, the solution $u(t,x)$ becomes positive after sufficient time has passed, by Corollary \ref{thm:mainB}.
\end{example}
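Example \ref{exam1} can be verified numerically. Since $f_a = 2G(1,\cdot) - aG(1/4,\cdot)$ is a combination of Gaussians, the heat semigroup acts exactly, giving $u(t,x) = 2G(t+1,x) - aG(t+\tfrac14,x)$, and the positive zero solves a closed-form equation obtained by taking logarithms of $2G(t+1,x) = aG(t+\tfrac14,x)$. The sketch below (an illustration; names are ours) confirms that this zero approaches $2rt$:

```python
import math

def positive_zero(a, t):
    # solve 2*G(t+1, x) = a*G(t+1/4, x) for x > 0 via logarithms:
    # x^2 * (1/(4(t+1/4)) - 1/(4(t+1))) = log(a/2) + (1/2) log((t+1)/(t+1/4))
    num = math.log(a / 2) + 0.5 * math.log((t + 1) / (t + 0.25))
    den = 0.25 / (t + 0.25) - 0.25 / (t + 1)
    return math.sqrt(num / den)

a = 4.0
r = math.sqrt(4.0 / 3.0 * math.log(a / 2))
ratios = [positive_zero(a, t) / (2 * r * t) for t in (1e2, 1e4, 1e6)]
print(ratios)   # the ratios approach 1 as t grows
```

This matches Theorem \ref{thm:mainA}: for $a>2$ the zeros of $F_a$ at $\pm r$ are simple, so the positive zero point is $2rt + O(1)$.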
\begin{example}\label{exam2}
We consider the case that $f(x)=f_{a,b}(x)= -x e^{(x-a)^2+bx^3-x^4}$,
where $a,b$ are constants (Figure \ref{fig:exam}).
Then, $f_{a,b}(x)$ satisfies \eqref{condi1} and \eqref{condi2}.
Thus, we can apply Theorem \ref{thm:main1}.
We have
\beaa
-\dfrac{f''_{a,b}(0)}{f'_{a,b}(0)}= 4a,
\eeaa
and the zero point $x^{*}(t)\in Z(t)$ thus satisfies
\beaa
x^{*}(t)= 4at +o(t),\quad (t\to+0).
\eeaa
Next, to check the sign of $\eta_0$ in Theorem \ref{thm:main1},
we analyze the integral value of $f_{a,b}(x)$.
Then, the partial derivative of the integral value of $f_{a,b}(x)$ with respect to $b$ satisfies
\beaa
\dfrac{\partial }{\partial b} \int_{\bR} f_{a,b} (y)dy &=& - \int_{\bR} y^4 e^{(y-a)^2+by^3-y^4} dy<0.
\eeaa
Furthermore, we can deduce the limits
\beaa
\int_{\bR} f_{a,b} (y)dy \to \mp\infty
\eeaa
as $b\to\pm\infty$, respectively.
Thus, for any $a\in\bR$, there exists $b_0\in\bR$ such that
\be\label{ex:int}
\int_{\bR} f_{a,b} (y)dy
\begin{cases}
>0 &(b<b_0) \\
=0 &(b=b_0) \\
<0 &(b>b_0).
\end{cases}
\ee
Here, we fix $a>0$ arbitrarily.
There then exists $b_0\in\bR$ satisfying \eqref{ex:int}.
If $b>b_0$, then $\eta_0$ in Theorem \ref{thm:main1} is negative.
Therefore, in the case that $a>0$ and $b>b_0$,
the zero point $x^{*}(t)$ moves in the positive direction from $0$ when $t$ is sufficiently small.
Thereafter, $x^{*}(t)$ tends to shift in the negative direction as sufficient time passes.
We understand from this example that the zero point $x^{*}(t)$ does not always move monotonically.
\end{example}
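The small-time drift coefficient $-f''_{a,b}(0)/f'_{a,b}(0)=4a$ computed above can be sanity-checked with central finite differences (an illustrative sketch, not part of the paper; names are ours):

```python
import math

def f(x, a, b):
    return -x * math.exp((x - a) ** 2 + b * x ** 3 - x ** 4)

def drift_coefficient(a, b, h=1e-4):
    # central differences for f'(0) and f''(0); returns -f''(0)/f'(0)
    d1 = (f(h, a, b) - f(-h, a, b)) / (2 * h)
    d2 = (f(h, a, b) - 2 * f(0.0, a, b) + f(-h, a, b)) / h ** 2
    return -d2 / d1

assert abs(drift_coefficient(1.0, 1.4) - 4.0) < 1e-3   # 4a with a = 1
assert abs(drift_coefficient(0.5, 1.8) - 2.0) < 1e-3   # 4a with a = 0.5
```

Note that the coefficient is independent of $b$, consistent with $f'(0)=-e^{a^2}$ and $f''(0)=4a\,e^{a^2}$.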
\begin{figure}[tb]
\centering
\label{fig:a}\includegraphics[width =10cm, bb=0 0 849 422 ]{fig_init.png}
\caption{The graphs of $f_{a,b}(x)= -x e^{(x-a)^2+bx^3-x^4}$ in Example \ref{exam2}.
$(a)$ $a=1.0,\ b=1.4$. $(b)$ $a=1.0,\ b=1.8$.}
\label{fig:exam}
\end{figure}
\section{Proofs of main results}\label{sec:proof}
Let $u(t,x)$ be the solution \eqref{basic sol.} to \eqref{eq:main}.
Throughout this section, suppose that $f(x)$ satisfies \eqref{condi1}.
\subsection{Proof of Theorem \ref{thm:mainA}}
Let us explain the proof of Theorem \ref{thm:mainA}.
In characterizing the coefficient of $O(t)$,
we consider the behavior of $u(t,x+2t\eta_0 )$ for all $\eta_0\in\cN(F)$ and sufficiently large $t>0$.
This argument is essential for obtaining the asymptotic profiles
and is an important difference from previous studies.
Suppose that $\cN(F)\neq\emptyset$.
Let $\eta_0\in\cN(F)$ with multiplicity $k\in\bN$.
By setting
\be\label{moving}
g(x):= e^{\eta_0 x} f(x),\quad v(t,x):=(G(t)*g)(x),
\ee
we obtain $u(t,x+2t\eta_0)=e^{-\eta_0 x -t \eta^2_0} v(t,x)$.
We apply the following result to prove Theorem $\ref{thm:mainA}$.
\begin{theorem}\cite[Theorem 2.2]{Chung}\label{Her-app}
Let $u$ be the solution \eqref{basic sol.} to the heat equation \eqref{eq:main} with initial data $f\in L^1(\bR; 1+|x|^{n+1})$
for some nonnegative integer $n$.
There is then a positive constant $C=C(n,f)$ such that
\beaa
\left|u(t,x)- G(t,x) \displaystyle \sum^{n}_{j=0} \dfrac{\int_{\bR} y^j f(y)dy}{(4t)^{j/2} j!} H_j\left(\dfrac{x}{2\sqrt{t}} \right) \right| \le C t^{- n/2 -1}
\eeaa
for all $x\in\bR$.
\end{theorem}
Since $v(t,x)$ is a solution to the heat equation
and its initial data $g(x)$ satisfies \eqref{condi1},
we can apply Theorem \ref{Her-app} to $v(t,x)$ for any $n\in\bN$.
Using $F^{(j)}(\eta_0)=\displaystyle \int_{\bR} y^{j} e^{\eta_0 y} f(y) dy$ and applying Theorem \ref{Her-app} with $u$ and $f$ replaced by $v$ and $g$, respectively,
we obtain the following lemma.
\begin{lemma}\label{lem:set1}
Let $u$ be the solution \eqref{basic sol.} to the heat equation \eqref{eq:main} with initial data $f\in L^1(\bR)\cap L^{\infty}(\bR)$ satisfying \eqref{condi1}.
Suppose that $\eta_0\in\cN(F)$ has multiplicity $k\in\bN$.
Then, for any integer $n\ge k$, there is a positive constant $C=C(n,f)$ such that
\beaa
\left|v(t,x)- G(t,x) \displaystyle \sum^{n}_{j=k} \dfrac{F^{(j)}(\eta_0)}{(4t)^{j/2} j!} H_j\left(\dfrac{x}{2\sqrt{t}} \right) \right| \le C t^{- n/2 -1}
\eeaa
for any $x\in\bR$, where $v(t,x)$ is defined by \eqref{moving}.
\end{lemma}
We deduce from Lemma \ref{lem:set1} with $n=k$ that $(4t)^{(k+1)/2} v(t, 2\sqrt{t} x)$ converges uniformly to
\beaa
\dfrac{1}{\sqrt{\pi}} e^{-x^2} \dfrac{ F^{(k)}(\eta_0)}{k!} H_k(x)
\eeaa
as $t\to+\infty$.
Thus, for each zero point $h$ of the Hermite polynomial $H_k(x)$,
there exist $T>0$ and $z(t)$ for $t>T$ such that $v(t,z(t))=0$ for any $t>T$ and
\beaa
\displaystyle \lim_{t\to+\infty} \dfrac{z(t)}{2\sqrt{t}} = h.
\eeaa
We next deduce the coefficient of $O(1)$.
Let $h$ be a zero point of the Hermite polynomial $H_k(x)$.
We consider $w(t,x)=(4t)^{k/2+1}v(t,x+2\sqrt{t} h)$.
Using Lemma \ref{lem:set1} with $n=k+1$,
we obtain
\beaa
\left|v(t,x)- G(t,x) \displaystyle \sum^{k+1}_{j=k} \dfrac{F^{(j)}(\eta_0)}{(4t)^{j/2} j!} H_j\left(\dfrac{x}{2\sqrt{t}} \right) \right| \le C t^{- (k+1)/2 -1}
\eeaa
for all $x\in\bR$.
Multiplying $(4t)^{k/2+1}$ and replacing $x$ by $x+2\sqrt{t} h$,
the above inequality is rewritten as
\beaa
\left|w(t,x) - K(t,x+2 \sqrt{t} h) \displaystyle \sum^{1}_{j=0} \dfrac{F^{(j+k)}(\eta_0)}{(4t)^{(j-1)/2} (j+k)!} H_{j+k}\left(\dfrac{x}{2\sqrt{t}}+h \right) \right|
\le C t^{- 1/2 },
\eeaa
where $K(t,x):= \dfrac{1}{\sqrt{\pi}} e^{-x^2/4t}$.
We remark that $K(t,x+2\sqrt{t} h)$ converges pointwise to $ \dfrac{1}{\sqrt{\pi}} e^{-h^2}$ as $t\to+\infty$.
We have
\beaa
&&\displaystyle \lim_{t\to+\infty} \displaystyle \sum^{1}_{j=0} (4t)^{(1-j)/2} \dfrac{F^{(j+k)}(\eta_0)}{(j+k)!} H_{j+k}\left(\dfrac{x}{2\sqrt{t}}+h \right) \\
&&\hspace{100pt} = \dfrac{H_{k+1}(h)}{(k+1)!} \{ -(k+1) F^{(k)}(\eta_0)x + F^{(k+1)}(\eta_0) \}
\eeaa
for any $x\in\bR$ from the fact that $H'_{k}(h)=2h H_{k}(h)-H_{k+1}(h)=-H_{k+1}(h)\neq 0$,
and $w(t,x)$ converges thus pointwise to
\beaa
\dfrac{ e^{-h^2} H_{k+1}(h)}{\sqrt{\pi}\,(k+1)!} \{ -(k+1) F^{(k)}(\eta_0)x + F^{(k+1)}(\eta_0) \}
\eeaa
as $t\to+\infty$.
On this basis, there exist $\tilde{T}>0$ and $\tilde{z}(t)$ for $t>\tilde{T}$ such that $w(t,\tilde{z}(t))=0$ for any $t>\tilde{T}$ and
\beaa
\displaystyle \lim_{t\to+\infty} \tilde{z}(t) = \dfrac{F^{(k+1)}(\eta_0)}{(k+1)F^{(k)}(\eta_0)}.
\eeaa
Therefore, Theorem \ref{thm:mainA} is proved
because
\beaa
w(t,x)=(4t)^{k/2+1}v(t,x+2\sqrt{t} h)= (4t)^{k/2+1} e^{\eta_0 (x+2\sqrt{t} h) +t \eta^2_0} u(t,x+2t\eta_0+2\sqrt{t} h).
\eeaa
\subsection{Proof of Theorem \ref{thm:mainC}}
To prove Theorem \ref{thm:mainC}, we prepare a notation and lemma.
$u(t,x)$ is now expressed as
\beaa
u(t,x)= G(t,x) P\left(t,\dfrac{x}{2t}\right),
\eeaa
where $P(t,\eta)$ is defined by
\beaa
P(t,\eta):= \int_{\bR} e^{\eta y-y^2/4t} f(y)dy.
\eeaa
We remark that $P(t,\eta)$ is well-defined on $(0,\infty)\times \bR$
and belongs to $C^{\infty}((0,\infty)\times \bR)$,
because we know that $u(t,x)$ belongs to $C^{\infty}((0,\infty)\times \bR)$ \cite{GGS}.
$u(t,x)$ has the same sign as $P\left(t,\dfrac{x}{2t}\right)$,
and we thus analyze $P(t,\eta)$ in the following.
We deduce the following lemma from the dominated convergence theorem and \eqref{condi1}.
\begin{lemma}\label{lem:unif}
For any $M>0$, $P(t,\eta)$ converges uniformly to $F(\eta)$
on $[-M,M]$ as $t\to+\infty$.
\end{lemma}
Let us explain the proof of Theorem \ref{thm:mainC}.
Define
\beaa
\overline{\theta}:= \limsup_{t\to+\infty} \dfrac{x^{*}(t)}{2t},\quad \underline{\theta}:= \liminf_{t\to+\infty} \dfrac{x^{*}(t)}{2t}.
\eeaa
Fix $\theta_0\in [\underline{\theta},\overline{\theta}]$.
Because $x^{*}(t)\in C(T,\infty)$, there exists $\{t_j\}_{j\in\bN}\subset (T,\infty)$ such that $t_j\to+\infty$ and
$\eta_j=x^{*}(t_j)/2t_{j}\to \theta_0$ hold as $j\to+\infty$.
From the assumption that $x^{*}(t)\in Z(t)$ for $t>T$,
we have $P(t_j,\eta_j)=0$ for all $j\in\bN$.
Using Lemma \ref{lem:unif} and taking the limit as $j\to+\infty$, we obtain $F(\theta_0)=0$.
This means that $\theta=\overline{\theta}=\underline{\theta}$ and $\theta\in \cN(F)$, because $\cN(F)$ is discrete.
\subsection{Proof of Theorem \ref{thm:main1}}
In this subsection, we first show the existence of $x^{*}(t)\in Z(t)$ for any $t>0$.
Throughout this subsection, assume that $f\in L^1(\bR)\cap L^{\infty}(\bR)$ satisfies \eqref{condi1} and \eqref{condi2}.
\begin{lemma}\label{prop:exist}
For any $t>0$, there exists a unique $\eta^{*}(t)\in\bR$ satisfying $P\left(t,\eta^{*}(t) \right)=0$.
Moreover, $\eta^{*}(t)$ has the same sign as $P(t,0)$ if $P(t,0)\neq 0$.
\end{lemma}
\begin{proof}
We fix an arbitrary $t>0$.
We first prove that $P(t,\eta)$ is strictly decreasing with respect to $\eta$.
Take constants $\eta_1, \eta_2$ satisfying $\eta_1<\eta_2$.
We know that
\beaa
e^{\eta_2 y}-e^{\eta_1 y}
\begin{cases}
>0 &(y>0),\\
<0 &(y<0),
\end{cases}
\eeaa
and we thus obtain
\beaa
P(t,\eta_2)-P(t,\eta_1) &=& \int_{\bR} (e^{\eta_2 y}-e^{\eta_1 y}) e^{-y^2/4t} f(y)dy \\
&=& \int^{\infty}_{0} (e^{\eta_2 y}-e^{\eta_1 y}) e^{-y^2/4t} f(y)dy \\
&& \quad\quad + \int^{0}_{-\infty} (e^{\eta_2 y}-e^{\eta_1 y}) e^{-y^2/4t} f(y)dy <0
\eeaa
from \eqref{condi2}.
Thus, $P(t,\eta)$ is strictly decreasing.
Next, $P(t,\eta)$ is expressed as
\beaa
P(t,\eta) = \int_{\bR} e^{\eta y-y^2/4t} f(y)dy
= \int^{\infty}_{0} e^{\eta y-y^2/4t} f(y)dy
+ \int^{0}_{-\infty} e^{\eta y-y^2/4t} f(y)dy.
\eeaa
Furthermore, each term satisfies
\beaa
\int^{\infty}_{0} e^{\eta y-y^2/4t} f(y)dy \to -\infty\ (\eta \to+\infty),\quad
\int^{0}_{-\infty} e^{\eta y-y^2/4t} f(y)dy \to 0\ (\eta \to+\infty).
\eeaa
We thus obtain $\displaystyle \lim_{\eta\to+\infty} P(t,\eta)=-\infty$.
We obtain $\displaystyle \lim_{\eta\to-\infty} P(t,\eta)=+\infty$ in the same way.
From the above argument,
there exists $\eta^{*}(t)\in\bR$ satisfying $P(t,\eta^{*}(t))=0$ according to the intermediate value theorem.
$P(t,\eta)$ is strictly decreasing with respect to $\eta$,
and $\eta^{*}(t)$ is thus unique and has the same sign as $P(t,0)$.
\end{proof}
The following lemma is obtained according to the argument of Lemma \ref{prop:exist}.
\begin{lemma}\label{lem:zero}
There exists a unique $\eta_{0}\in\bR$ satisfying $F(\eta_0)=0$.
Moreover, the sign of $\eta_{0}$ is equal to the sign of $\displaystyle \int_{\bR}f(y)dy$.
In particular, $\eta_{0}=0$ holds if $\displaystyle \int_{\bR}f(y)dy=0$.
\end{lemma}
From Lemma \ref{prop:exist}, for any $t>0$,
there exists a unique $x^{*}(t)=2t\eta^{*}(t)$ satisfying $u(t,x^{*}(t))=0$.
This means that $Z(t)=\{x^{*}(t)\}$.
Furthermore, $x^{*}(t)$ satisfies
\beaa
x^{*}(t)= 2t \eta_{0}+ \dfrac{F''(\eta_0)}{2 F'(\eta_0)} + o(1)\quad (t\to \infty),
\eeaa
according to Theorem \ref{thm:mainA}.
We next analyze the behavior of $x^{*}(t)$ as $t\to+0$.
Let us define $Q(x)= f'(0)x + f''(0)$.
\begin{lemma}\label{lem:unif3}
For any $M>0$, $\dfrac{1}{t} u(t,tx) $ converges uniformly to $Q(x)$
on $[-M,M]$ as $t\to+0$.
\end{lemma}
\begin{proof}
We know that
\beaa
\int_{\bR}e^{-y^2}dy=\sqrt{\pi}, \quad \int_{\bR} ye^{-y^2}dy=0,\quad \int_{\bR}y^2e^{-y^2}dy=\dfrac{\sqrt{\pi}}{2}
\eeaa
and $f(0)=0$,
and we thus have
\beaa
Q(x)= \dfrac{1}{t\sqrt{\pi}} \displaystyle \int_{\bR}e^{-y^2} \left\{ f(0) + f'(0)(tx- 2 \sqrt{t}y)+\dfrac{f''(0)}{2}(tx- 2 \sqrt{t} y)^2 \right\}dy - \dfrac{f''(0)}{2}t x^2
\eeaa
We set $\zeta(x):= f(x)-f(0)-f'(0)x -\dfrac{f''(0)}{2}x^2$ and fix an arbitrary $M>0$.
We then deduce
\beaa
\left| \dfrac{u(t,tx)}{t}-Q(x) \right| &\le& \dfrac{1}{t\sqrt{\pi}} \int_{\bR} e^{-y^2} |\zeta(tx- 2 \sqrt{t} y)| dy+|f''(0)|t M^2
\eeaa
for any $x\in[-M,M]$.
By setting $C_{f}:=\dfrac{1}{6} \displaystyle \sup_{x\in\bR}| f'''(x)|$, we deduce that
\beaa
\left| \zeta(tx- 2 \sqrt{t}y) \right| \le C_{f} |tx-2 \sqrt{t}y|^3
\eeaa
for any $x\in[-M,M]$ and $y\in\bR$.
Thus, for any $x\in[-M,M]$, we obtain
\beaa
\left| \dfrac{u(t,tx)}{t}-Q(x) \right|
&\le& \dfrac{C_{f}}{t\sqrt{\pi}} \int_{\bR} e^{-y^2} |tx-\sqrt{4t}y|^3 dy +|f''(0)|t M^2 \\
&\le& C_{f} \left( t^2 M^3 +tM^2 \sqrt{\dfrac{4t}{\pi}} + 2tM + \sqrt{\dfrac{4t}{\pi}} \right)+|f''(0)|t M^2.
\eeaa
The lemma is proved from this inequality.
\end{proof}
This lemma means that
$\displaystyle \lim_{t\to+0}\dfrac{x^{*}(t)}{t}= -\dfrac{f''(0)}{f'(0)}$.
Therefore, all the statements of Theorem \ref{thm:main1} are proved.
\section{The case of $\bR^{d}$}\label{sec:d}
In this section, we consider the heat equation in $\bR^{d}\ (d\ge 2)$:
\be\label{eq:main-d}
\begin{cases}
\dfrac{\partial u}{\partial t}=\Delta u\quad (t>0,\ x\in\bR^{d}), \vspace{3mm}\\
u(0,x)=f(x) \quad (x\in\bR^{d}),
\end{cases}
\ee
where $\Delta := \displaystyle \sum^{d}_{j=1} \dfrac{\partial^2}{\partial x^2_j}$.
Denote the Euclidean norm and inner product on $\bR^{d}$ by $|\cdot|_d$ and $\left\langle \right.\cdot,\cdot\left. \right\rangle_{d}$, respectively.
Suppose that $f\in L^{1}(\bR^{d})\cap L^{\infty}(\bR^{d})$ satisfies
\be\label{condi1-d}
\forall \lambda\in\bR^{d},\quad \int_{\bR^{d}} e^{\left\langle \right. \lambda,y\left. \right\rangle_d} |f(y)| dy<\infty. \tag{H1d}
\ee
In the remainder of this subsection, we explain the method of analyzing the asymptotic behavior of $Z_d(t):=\{{x}\in\bR^{d}\mid u(t,{x})=0 \}$,
where $u(t,{x})$ is the explicitly represented solution
\be\label{basic sol.-d}
u(t,{x})=(G_{d}(t)*_{d}f)({x})=\int_{\bR^{d}} G_{d}(t,{x}-{y}) f({y})d{y}\quad (t>0,\ {x}\in\bR^{d})
\ee
of \eqref{eq:main-d}.
Here, $G_{d}(t,{x})$ is the heat kernel on $\bR^{d}$ defined as
\be \label{heat-ker-d}
G_{d} (t, {x}) := (4\pi t)^{-d/2} e^{-|x|^2_{d}/4t}.
\ee
Define the function
\beaa
F_{d}(\eta):= \int_{\bR^{d}} e^{\left\langle \right. \eta , y \left. \right\rangle_d} f(y) dy,\quad \eta=(\eta_1,\ldots,\eta_d)\in\bR^{d}
\eeaa
and the set
\beaa
\cN(F_{d}):=\{ \eta\in\bR^{d} \mid F_{d}(\eta)=0 \}.
\eeaa
Assume that $\cN(F_d)\neq \emptyset$.
Fix $\tilde{\eta}\in\cN(F_d)$.
Suppose that there exists $k\in\bN$ such that
for all $\alpha=(\alpha_1,\ldots,\alpha_d)\in (\bN\cup \{0\})^{d}$ with $|\alpha|:=\displaystyle \sum^{d}_{j=1}\alpha_j <k$,
\beaa
\partial^{\alpha} F_d (\tilde{\eta}):=
\dfrac{\partial^{|\alpha|} F_d }{\partial\eta^{\alpha_1}_1 \cdots \partial\eta^{\alpha_d}_{d}}(\tilde{\eta})=0
\eeaa
and there exists $\tilde{\alpha}\in (\bN\cup \{0\})^{d}$ satisfying $|\tilde{\alpha}|=k$ and
\beaa
\partial^{\tilde{\alpha}} F_d (\tilde{\eta}) \neq 0.
\eeaa
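To illustrate these hypotheses, consider the one-dimensional datum $f(y)=(y-1/2)e^{-y^{2}}$ (an assumption of this sketch, not an example from the paper). Completing the square gives $F(\eta)=\sqrt{\pi}\,e^{\eta^{2}/4}(\eta/2-1/2)$, so $\cN(F)=\{1\}$, and the first derivative of $F$ is nonzero there (i.e., $k=1$). A bisection on a Riemann-sum approximation of $F$ recovers this zero:

```python
import math

def f(y):
    # illustrative sign-changing initial data (assumption of this sketch)
    return (y - 0.5) * math.exp(-y * y)

def F(eta, h=0.005, R=12.0):
    # Riemann-sum approximation of F(eta) = int e^{eta y} f(y) dy
    n = int(R / h)
    return sum(math.exp(eta * i * h) * f(i * h) for i in range(-n, n + 1)) * h

# bisection for the zero of F on [0, 2]; the exact zero is eta = 1
a, b = 0.0, 2.0
fa = F(a)
for _ in range(50):
    m = 0.5 * (a + b)
    fm = F(m)
    if fa * fm <= 0:
        b = m
    else:
        a, fa = m, fm
root = 0.5 * (a + b)
assert abs(root - 1.0) < 1e-6
```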
Let $u(t,x)$ be the solution \eqref{basic sol.-d} to the equation \eqref{eq:main-d}.
By setting $g(x):= e^{\left\langle \right. \tilde{\eta},x \left. \right\rangle_d}f(x)$ and $v(t,x):= (G_d(t)*_{d}g)(x)$,
we obtain
\beaa
u(t,x+2t\tilde{\eta})= e^{-\left\langle \right. \tilde{\eta}, x\left. \right\rangle_d -t | \tilde{\eta} |^2_{d} } v(t,x).
\eeaa
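The displayed identity follows from completing the square in the Gaussian kernel, $G_d(t,x+2t\tilde{\eta}-y)=G_d(t,x-y)\,e^{-\langle\tilde{\eta},x-y\rangle_d-t|\tilde{\eta}|_d^2}$, and can be checked numerically. The following sketch does so in $d=1$ with an illustrative $f$ and $\tilde{\eta}$ (both assumptions of the sketch, not values from the paper):

```python
import math

def G(t, x):
    # one-dimensional heat kernel
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def f(y):
    # illustrative sign-changing initial data (assumption of this sketch)
    return (y - 0.5) * math.exp(-y * y)

eta, t, x = 0.4, 0.8, 0.3          # illustrative values of tilde{eta}, t, x
h, n = 0.005, 4000                 # quadrature grid on [-20, 20]
ys = [i * h for i in range(-n, n + 1)]

# left-hand side u(t, x + 2 t eta) and the factor v(t, x) with g = e^{eta y} f
u_shift = sum(G(t, x + 2 * t * eta - y) * f(y) for y in ys) * h
v = sum(G(t, x - y) * math.exp(eta * y) * f(y) for y in ys) * h

assert abs(u_shift - math.exp(-eta * x - t * eta * eta) * v) < 1e-10
```

Because the identity holds pointwise in the integrand, the two quadrature sums agree up to floating-point roundoff.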
By applying Theorem 2.2 in \cite{Chung}, we obtain the following lemma.
\begin{lemma}\label{lem:high}
There is a positive constant $C=C(k,d,f)$ such that
\beaa
\left|v(t,x)- G_d(t,x) \displaystyle \sum_{|\alpha|=k} \dfrac{\partial^{\alpha} F_{d}(\tilde{\eta}) }{(4t)^{k/2} \alpha !} \prod^{d}_{j=1} H_{\alpha _j}\left(\dfrac{x_j}{2\sqrt{t}} \right) \right| \le C t^{- (k+d+1)/2}
\eeaa
for all $x\in\bR^{d}$.
Furthermore, $(4t)^{(k+d)/2} v(t, 2\sqrt{t} x)$ converges uniformly to
\beaa
\tilde{H} (x):= (\pi)^{-d/2} e^{-|x|^2_d} \displaystyle \sum_{|\alpha|=k} \dfrac{\partial^{\alpha} F_{d}(\tilde{\eta}) }{\alpha !} \prod^{d}_{j=1} H_{\alpha _j}\left(x_j\right)
\eeaa
as $t\to+\infty$.
\end{lemma}
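Lemma \ref{lem:high} is built on products of Hermite polynomials $H_n$. The sketch below (assuming the physicists' convention $H_0=1$, $H_1=2x$, $H_{n+1}=2xH_n-2nH_{n-1}$) checks the recurrence and the orthogonality relation $\int_{\bR}H_mH_ne^{-x^2}\,dx=2^nn!\sqrt{\pi}\,\delta_{mn}$, which is what isolates the $|\alpha|=k$ coefficients in such expansions:

```python
import math

def hermite(n, x):
    # physicists' Hermite polynomials via the three-term recurrence
    # H_0 = 1, H_1 = 2x, H_{n+1} = 2x H_n - 2n H_{n-1}
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

# orthogonality: int H_m H_n e^{-x^2} dx = 2^n n! sqrt(pi) delta_{mn}
h = 0.001
xs = [i * h for i in range(-8000, 8001)]
def ip(m, n):
    return sum(hermite(m, x) * hermite(n, x) * math.exp(-x * x) for x in xs) * h

assert abs(ip(2, 3)) < 1e-8
assert abs(ip(3, 3) - 2 ** 3 * math.factorial(3) * math.sqrt(math.pi)) < 1e-6
```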
Thus, by analyzing the zero level set of $\tilde{H}(x)$, we can obtain the existence of elements of $Z_d(t)$ and their profiles.
\begin{remark}
For radially symmetric initial data $f(x)$,
Chung \cite{Chung} considered the case $\tilde{\eta}=0\in\cN(F_{d})$ and proved that
$\tilde{H}(x)$ can be represented using generalized Laguerre polynomials.
However, when $\tilde{\eta}\neq 0$ or the initial data $f(x)$ is not radially symmetric,
the properties of $\tilde{H}(x)$ are not known.
We leave this as an open problem.
\end{remark}
\section{Discussion}\label{sec:dis}
We finally mention some related problems.
First, there is a problem for the heat equation called the hot spot problem,
in which the asymptotic behavior of the set of maximum points of the solution with respect to the spatial variables (called the ``hot spot'') is analyzed \cite{CK, JS, Ishige1, Ishige2}.
In Euclidean space $\bR^{d}$, it is known that when the initial data is non-zero and non-negative, the hot spot converges to a point determined by the initial data.
When the initial data is sign-changing,
the results of this paper show that there are cases in which all elements of the hot spot diverge.
For example, when the initial data satisfies \eqref{condi2} and the integral of the initial data is negative,
Theorem \ref{thm:main1} implies that the zero point diverges in the negative direction and that all elements of the hot spot also diverge in the negative direction.
Thus, when the initial data is sign-changing, the asymptotic behavior of the hot spot differs from that for non-negative initial data.
Since $u_{x}(t,x)$ also satisfies the heat equation, the asymptotic behavior of the critical points of the solution can be analyzed from the results of this paper.
However, a more detailed estimate is needed to determine which critical points become hot spots.
The hot spot problem for sign-changing initial data is an interesting problem for future work.
Another extension of the problem is the asymptotic behavior of the zero level set for other diffusion equations.
As an example, we consider the space fractional diffusion equation:
\beaa
\begin{cases}
\dfrac{\partial u}{\partial t}+(-\Delta)^{s}u =0 \quad (t>0,\ x\in\bR), \vspace{3mm}\\
u(0,x)=f(x) \quad (x\in\bR),
\end{cases}
\eeaa
where $(-\Delta)^{s}$ is the fractional power of the Laplace operator with $0<s<1$.
In this case, it can be deduced that the zeros disappear in finite time when $f\in L^1(\bR)$ is compactly supported and its integral is non-zero,
because an estimate of the relative error has been derived in a previous study \cite{Luis}.
In addition, when the integral of the initial data is zero, the asymptotic behavior of the zero point can be considered;
in this case, by assuming that the initial data satisfies \eqref{Assu},
the specific behavior of the zero level set can be analyzed using the asymptotic expansion of the solution obtained in a previous study \cite{IKM}.
Furthermore, the asymptotic behavior of the zero point of the solutions to the time-fractional diffusion equation (the diffusion-wave equation) \cite{EK} and
the nonlocal diffusion equation \cite{AMRT} may also be considered, but we leave this as an open problem.
\section*{Acknowledgments}
The author expresses his sincere gratitude to Prof. Shin-Ichiro Ei (Hokkaido University),
Prof. Eiji Yanagida (Tokyo Institute of Technology) and Prof. Kazuhiro Ishige (The University of Tokyo) for their useful suggestions and valuable advice, and thanks Edanz (https://jp.edanz.com/ac) for editing a draft of this manuscript.
{The author is partially supported by Grant-in-Aid for JSPS Research Fellow No. 21J10036.}
\bigskip
\noindent\texttt{https://arxiv.org/abs/2201.08062}
\medskip

\noindent\textbf{A stabilizer-free $C^0$ weak Galerkin method for the biharmonic equations}
\medskip

\noindent\emph{Abstract.} In this article, we present and analyze a stabilizer-free $C^0$ weak Galerkin (SF-C0WG) method for solving the biharmonic problem. The SF-C0WG method is formulated in terms of cell unknowns, which are $C^0$ continuous piecewise polynomials of degree $k+2$ with $k\geq 0$, and in terms of face unknowns, which are discontinuous piecewise polynomials of degree $k+1$. The formulation of this SF-C0WG method is without the stabilizer or penalty term and is as simple as the $C^1$ conforming finite element scheme for the biharmonic problem. Optimal order error estimates in a discrete $H^2$-like norm and in the $H^1$ norm for $k\geq 0$ are established for the corresponding WG finite element solutions. Error estimates in the $L^2$ norm are also derived, with an optimal order of convergence for $k>0$ and a sub-optimal order of convergence for $k=0$. Numerical experiments are shown to confirm the theoretical results.

\section{Introduction}
We consider the biharmonic equation of the form
\begin{subequations}
\begin{eqnarray}
\Delta^2 u&=&f,\quad \mbox{in}\;\Omega,\label{pde}\\
u&=&g_D,\quad\mbox{on}\;\Gamma,\label{pde-bc1}\\
\frac{\partial u}{\partial
\bm{n}}&=&g_N, \quad\mbox{on}\;\Gamma,\label{pde-bc2}
\end{eqnarray}
\end{subequations}
where $\Omega$ is a bounded polytopal domain in $\mathbb{R}^2$ and $\Gamma=\partial\Omega$.
In the case of homogeneous boundary conditions $g_D=g_N=0$, the variational form of problem (\ref{pde})-(\ref{pde-bc2}) reads as: find $u\in H_0^2(\Omega)$ such that
\begin{equation}\label{wf}
(\Delta u, \Delta v) = (f, v),\qquad \forall v\in H_0^2(\Omega),
\end{equation}
where $H_0^2(\Omega)$ is the subspace of $H^2(\Omega)$ consisting of
functions with vanishing value and normal derivative on
$\partial\Omega$.
For the case of nonhomogeneous boundary conditions, assume that $g_D$ and $g_N$ are the Dirichlet boundary data of some function in $H^2(\Omega)$, that is, there exists $\psi\in H^2(\Omega)$ such that
\begin{align*}
\Delta^2 \psi&=0,\quad \mbox{in}\;\Omega,\\
\psi&=g_D,\quad\mbox{on}\;\Gamma,\\
\frac{\partial \psi}{\partial
\bm{n}}&=g_N, \quad\mbox{on}\;\Gamma.
\end{align*}
Then by setting $\widetilde{u}=u-\psi$, we arrive at the weak form (\ref{wf}) for $\widetilde{u}$. Therefore for brevity, but without loss of generality, we will assume homogeneous boundary conditions in the remainder of this paper.
It is well known that $H^2$-conforming finite element methods for problem (\ref{pde})-(\ref{pde-bc2}) require $C^1$ finite elements, which are complicated to implement and involve high-order polynomials even in two dimensions. For example, the Argyris and Bell finite elements have 21 and 18 degrees of freedom per triangle, respectively.
In order to avoid the use of such $C^1$ elements, nonconforming finite elements have been used to solve biharmonic problems.
The Morley element \cite{morley} is one of the most popular nonconforming finite elements for the biharmonic equations; it uses only quadratic piecewise polynomials on triangular elements in two-dimensional domains and does not need any stabilization along mesh interfaces. However, it cannot be generalized to arbitrarily high-order polynomials.
Discontinuous Galerkin (DG) approaches can also be applied to the biharmonic problems.
The first discontinuous Galerkin method, the interior penalty method for the fourth order PDE, was presented in \cite{baker}; it uses fully discontinuous piecewise polynomials as basis functions. A nonsymmetric version of the interior penalty method was proposed and analyzed in \cite{ms}. Although DG methods have the advantage of using arbitrarily high-order elements, they also have some disadvantages. The weak forms are more complicated than those used for conforming and nonconforming finite element methods, and the discrete linear system is large because of the large number of degrees of freedom.
To reduce the number of degrees of freedom of DG methods, $C^0$ interior penalty (C0IP) methods were proposed for fourth order PDEs, first in \cite{engel} and then analyzed in \cite{bs}; they use simple Lagrange elements, and the continuity of the derivatives is weakly enforced by stabilization terms on interior edges. However, the C0IP methods still have the disadvantages of a complicated weak form and the need for penalty parameters.
Another approach to avoiding the use of $C^1$ elements is the mixed method \cite{ab,falk,monk}, which reduces the biharmonic problem to a system of two second order elliptic problems. One of the main drawbacks of the mixed formulation is that it leads to a saddle-point linear system, which is difficult to solve efficiently.
The weak Galerkin (WG) finite element method was first introduced for second order elliptic problems in \cite{wy}. One of its main characteristics is the use of weak functions and their weak derivatives. The classical differential operators, such as the gradient and the Laplacian, are approximated by weak differential operators defined as distributions, which are in turn approximated by piecewise polynomials. These weakly defined functions and differential operators make the WG methods highly flexible in choosing finite element spaces and in using polytopal meshes. In recent years, the WG method has attracted great interest in the scientific community. Several WG methods have been developed to solve a wide variety of partial differential equations, e.g., \cite{wy-mixed,lyzz,gm,ggz,mwy-helm,mwyz-maxwell,mwyz-interface}. In particular, there are some works \cite{wg-bi1,wg-bi2,wg-bi3,wg-bi4,wg-bi5,wg-bi6,wg-bi7} for the biharmonic equations. Compared with DG methods, no penalty parameters need to be tuned in the formulation of WG methods. Similar to DG methods, however, the WG methods also involve stabilization along the mesh skeleton, which makes the implementation of DG and WG methods more complex than that of conforming and nonconforming finite element methods.
Most recently, a new WG method without a stabilizer term was presented for second order elliptic problems in \cite{yz}, where the stabilization can be removed at the price of using polynomials of sufficiently high degree in the definition of the weak gradient. The resulting numerical scheme is as simple as a conforming finite element scheme and is easy to implement. The idea has been extended to the biharmonic equations in \cite{sf-wg}, where a stabilizer-free WG (SFWG) method was proposed that uses fully discontinuous piecewise polynomials of degrees $k+2$, $k+2$ and $k+1$ with $k\geq 0$, respectively, for the discretization of the unknown solution $u$, the trace of $u$ and the trace of the normal derivative $\frac{\partial u}{\partial n}$ on the skeleton of the mesh.
For triangular meshes, the minimum degree of polynomials used for the computation of the weak Laplacian is $k+7$ in theory and $k+4$ in practical computation. As pointed out in \cite{sf-wg}, it is a challenging task to compute the weak Laplacian and its numerical integration when the degree of polynomials used in this computation is very high.
In this paper, we present and analyze a stabilizer-free $C^0$ weak Galerkin method to approximate the solutions of the biharmonic problem (\ref{pde})-(\ref{pde-bc2}). The method is formulated in terms of face unknowns, which are discontinuous piecewise polynomials of degree $k+1$ with $k\geq 0$, and in terms of cell unknowns, which are $C^0$ continuous piecewise polynomials of degree $k+2$. We prove that, for triangular meshes, it is enough to take $k+3$ as the degree of polynomials used in the computation of the weak Laplacian.
In comparison with the SFWG method of \cite{sf-wg}, the SF-C0WG method in this paper involves fewer degrees of freedom because nodal values are shared on inter-element boundaries.
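The reduction in degrees of freedom can be made concrete with a rough count on a structured mesh (a sketch under assumed entity counts for an $N\times N$ triangulated square and standard Lagrange node counts; these are illustrative numbers, not figures from the paper):

```python
def dof_counts(N, k):
    # structured triangulation of the unit square: each of the N*N cells
    # is split into two triangles (assumed mesh for this rough comparison)
    V = (N + 1) ** 2           # vertices
    E = 3 * N * N + 2 * N      # edges
    T = 2 * N * N              # triangles
    assert V - E + T == 1      # Euler's formula for a disk

    # SF-C0WG: continuous Lagrange P_{k+2} cell unknowns (1 per vertex,
    # k+1 per edge, (k+1)k/2 interior) plus P_{k+1} unknowns on each edge
    m = k + 2
    c0wg = V + (m - 1) * E + (m - 1) * (m - 2) // 2 * T + (k + 2) * E

    # SFWG of [sf-wg]: fully discontinuous P_{k+2} cell unknowns plus
    # P_{k+2} and P_{k+1} trace unknowns on every edge
    sfwg = (k + 3) * (k + 4) // 2 * T + ((k + 3) + (k + 2)) * E
    return c0wg, sfwg

print(dof_counts(4, 0))   # k = 0 on a 4 x 4 mesh -> (193, 472)
```

Even on this small mesh the shared nodal values cut the unknown count by more than half for $k=0$.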
The outline of this paper is as follows. In Section \ref{Sec:wg-fem}, we introduce some notation, the formulation of our SF-C0WG method, and the related methods. Two energy-like norms, their equivalence, and the well-posedness of the SF-C0WG method are discussed in Section \ref{Sec:well-pose}. Then, in Section \ref{Sec:err-eqn}, we derive an error equation which plays an important role in our error estimates. The error analysis of our SF-C0WG method in the $H^2$-like norm and in the $L^2$ and $H^1$ norms is established in Sections \ref{Sec:H2-err} and \ref{Sec:L2-err}, respectively. Finally, in Section \ref{Sec:numeric}, we report some numerical experiments to confirm the theoretical analysis.
\section{Weak Galerkin Finite Element Methods}\label{Sec:wg-fem}
Let ${\mathcal T}_h$ be a quasi-uniform triangulation of the domain $\Omega$.
Denote by ${\cal E}_h$ the set of all edges in ${\cal
T}_h$, and let ${\cal E}_h^0={\cal E}_h\backslash\Gamma$ be
the set of all interior edges.
For convenience, we adopt the following notations,
\begin{eqnarray*}
(v,w)_{{\mathcal T}_h} &=& \sum_{K\in{\mathcal T}_h}(v,w)_K=\sum_{K\in{\mathcal T}_h}\int_K vw d{\bm x},\\
{\langle} v,w{\rangle}_{\partial{\mathcal T}_h}&=&\sum_{K\in{\mathcal T}_h} {\langle} v,w{\rangle}_{\partial K}=\sum_{K\in{\mathcal T}_h} \int_{\partial K} vw ds.
\end{eqnarray*}
For any nonnegative integer $m$, let $\mathbb{P}_m(D)$ denote the set of polynomials defined on $D$ with degree no more than $m$, where $D$ may be an element $K$ of ${\mathcal T}_h$ or an edge $e$ of ${\mathcal E}_h$. In what follows, we often consider the broken polynomial spaces
\[\mathbb{P}_m({\mathcal T}_h):=\{v\in L^2(\Omega): v|_K\in \mathbb{P}_m(K), \,\,\forall K\in{\mathcal T}_h\},\]
and
\[\mathbb{P}_m({\mathcal E}_h):=\{v\in L^2({\mathcal E}_h): v|_e\in \mathbb{P}_m(e), \,\,\forall e\in{\mathcal E}_h\}.\]
First of all, we introduce a set of normal directions on ${\cal E}_h$ as follows
\begin{equation}\label{thanks.101}
{\cal D}_h = \{\bm{n}_e: \mbox{ ${\bm n}_e$ is unit and normal to $e$},\
e\in {\cal E}_h \}.
\end{equation}
Then, a weak Galerkin finite element space $V_h$ for $k\geq 0$ is defined by
\begin{equation}
V_h=\{v=\{v_0, v_{n}{\bm n}_e\}:\ v_0\in S_h,
v_{n}\in \mathbb{P}_{k+1}({\mathcal E}_h)\},
\end{equation}
with
\begin{align}\label{Sh}
S_h=\{w\in H_0^1(\Omega): w|_K\in \mathbb{P}_{k+2}(K), \,\, \forall K\in\mathcal{T}_h\},
\end{align}
where $v_n$ can be viewed as an approximation of $\frac{\partial v_0}{\partial {\bm n}_e}:=\nabla v_0\cdot{\bm n}_e$.
Denote by $V_h^0$ a subspace of $V_h$ with vanishing traces,
\begin{align}\label{Vh0}
V_h^0=\{v=\{v_0,v_{n}{\bm n}_e\}\in V_h, \ v_{n}|_e=0,\
e\subset\partial K\cap\Gamma\}.
\end{align}
\begin{definition}[Weak Laplacian] For any function $v=\{v_0, v_n{\bm n}_e\}\in V_h$, its weak Laplacian $\Delta_{w,m}v$
is defined piecewise as the unique polynomial $(\Delta_{w,m}v)|_K \in \mathbb{P}_{m}(K)$ such that
\begin{equation}\label{wl}
(\Delta_{w,m} v, \ \varphi)_K = -( \nabla v_0, \ \nabla\varphi)_K+{\langle} v_n{\bm n}_e\cdot{\bm n}, \ \varphi{\rangle}_{\partial K},\quad
\forall \varphi\in \mathbb{P}_{m}(K),
\end{equation}
for any $K\in\mathcal{T}_h$.
\end{definition}
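For intuition, the defining system (\ref{wl}) can be solved explicitly in a one-dimensional analogue on $K=[0,1]$, where the boundary term reduces to endpoint evaluations. The sketch below (an illustration of the mechanism, not the paper's 2D implementation) uses exact rational arithmetic and confirms that the weak Laplacian reproduces $v_0''$ whenever the boundary data $v_n$ matches $v_0'$:

```python
from fractions import Fraction as Fr

def deriv(p):
    # coefficient list of p'(x) for p given as [p_0, p_1, ...]
    return [Fr(i) * p[i] for i in range(1, len(p))] or [Fr(0)]

def weak_laplacian_1d(v0, vn0, vn1, m):
    # 1D analogue of (wl) on K = [0,1]: find p in P_m with
    #   (p, x^j) = -(v0', (x^j)') + vn1 * x^j|_{x=1} - vn0 * x^j|_{x=0}
    # for j = 0..m, where vn0, vn1 approximate v0' at the endpoints
    d = deriv([Fr(c) for c in v0])
    A = [[Fr(1, i + j + 1) for j in range(m + 1)] for i in range(m + 1)]
    b = []
    for j in range(m + 1):
        # -int_0^1 v0'(x) * j x^{j-1} dx
        s = -sum(Fr(j) * d[i] / Fr(i + j) for i in range(len(d))) if j > 0 else Fr(0)
        s += vn1                  # x^j = 1 at x = 1 for every j
        if j == 0:
            s -= vn0              # x^j = 1 at x = 0 only for j = 0
        b.append(s)
    # exact Gauss-Jordan elimination over the rationals
    for c in range(m + 1):
        piv = next(r for r in range(c, m + 1) if A[r][c] != 0)
        A[c], A[piv], b[c], b[piv] = A[piv], A[c], b[piv], b[c]
        for r in range(m + 1):
            if r != c and A[r][c] != 0:
                fct = A[r][c] / A[c][c]
                A[r] = [a - fct * ac for a, ac in zip(A[r], A[c])]
                b[r] = b[r] - fct * b[c]
    return [b[r] / A[r][r] for r in range(m + 1)]

# v0 = x^3, v0' = 3x^2 (so vn0 = 0, vn1 = 3); the result is v0'' = 6x
coeffs = weak_laplacian_1d([0, 0, 0, 1], Fr(0), Fr(3), 2)
assert coeffs == [0, 6, 0]
```

When the normal data matches the true derivative, integration by parts shows that the weak Laplacian is the $L^2$ projection of $\Delta v_0$, which the exact-arithmetic check above confirms in one dimension.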
Now, we are ready to present our stabilizer-free $C^0$ weak Galerkin finite element method for the biharmonic problem (\ref{pde})-(\ref{pde-bc2}).
\begin{algorithm}[SF-C0WG Method]
The stabilizer-free $C^0$ weak Galerkin finite element scheme for solving problem (\ref{pde})-(\ref{pde-bc2}) is defined as follows: find $u_h=\{u_0,u_{n}{\bm n}_e\}\in V_h^0$ such that
\begin{equation}\label{wg}
\mathcal{A}_h(u_h, v_h)=(f,\;v_0), \quad\forall\
v_h=\{v_0, v_{n}{\bm n}_e\}\in V_h^0,
\end{equation}
where the bilinear form $\mathcal{A}_h(\cdot,\cdot)$ is defined by
\[
\mathcal{A}_h(v, w):=(\Delta_{w,k+3} v,\ \Delta_{w,k+3} w)_{{\mathcal T}_h}, \quad \forall v, w\in V_h.
\]
\end{algorithm}
\begin{remark}
Using the same WG finite element space $V_h^0$ defined by (\ref{Vh0}), a $C^0$ weak Galerkin finite element method has been presented in \cite{wg-bi2}, which is stated as follows:
\begin{algorithm}[C0WG Method]
The $C^0$ weak Galerkin finite element scheme for solving problem (\ref{pde})-(\ref{pde-bc2}) is defined as follows: find $u_h=\{u_0,u_{n}{\bm n}_e\}\in V_h^0$ such that
\begin{equation}\label{c0wg}
\mathcal{A}_{wg}(u_h, v_h)=(f,\;v_0), \quad\forall\
v_h=\{v_0, v_{n}{\bm n}_e\}\in V_h^0,
\end{equation}
where the bilinear form $\mathcal{A}_{wg}(\cdot,\cdot)$ is defined by
\[
\mathcal{A}_{wg}(v, w):=(\Delta_{w,k} v,\ \Delta_{w,k} w)_{{\mathcal T}_h}+s_h(v, w), \quad \forall v, w\in V_h,
\]
with the stabilizer term
\[
s_h(v, w)=\sum_{K\in{\mathcal T}_h}h_K^{-1}\langle \frac{\partial v_0}{\partial\bm{n}_e}-v_n, \frac{\partial w_0}{\partial\bm{n}_e}-w_n\rangle_{\partial K}, \quad \forall v, w\in V_h.
\]
\end{algorithm}
From the formulations of the SF-C0WG method (\ref{wg}) and the C0WG method (\ref{c0wg}), we can see that the SF-C0WG method is obtained by removing the stabilizer $s_h(\cdot,\cdot)$ in the C0WG method and raising the degree of polynomials used in the definition of the weak Laplacian from $k$ to $k+3$.
A comparison of numerical performance of both WG methods is discussed in Section \ref{Sec:numeric}.
\end{remark}
\begin{remark}
Using the $C^0$ conforming finite element space $S_h$ defined by (\ref{Sh}), a $C^0$ interior penalty method has been presented in \cite{engel, bs}, which is stated as follows:
\begin{algorithm}[C0IP Method]
The $C^0$ interior penalty method for solving problem (\ref{pde})-(\ref{pde-bc2}) is defined as follows: find $u_h\in S_h$ such that
\begin{equation}\label{c0ipdg}
\mathcal{A}_{dg}(u_h, v_h)=(f,\;v_h), \quad\forall\
v_h\in S_h,
\end{equation}
where the bilinear form $\mathcal{A}_{dg}(\cdot,\cdot)$ is defined as follows: for any $v, w\in S_h$,
\[
\mathcal{A}_{dg}(v, w):=(D^2 v,\ D^2 w)_{{\mathcal T}_h}
-\langle {[\hspace{-0.02in}[}\nabla v{]\hspace{-0.02in}]}, {\{\hspace{-0.045in}\{}\frac{\partial^2 w}{\partial \bm{n}_e^2}{\}\hspace{-0.045in}\}} \rangle_{{\mathcal E}_h}
-\langle {[\hspace{-0.02in}[}\nabla w{]\hspace{-0.02in}]}, {\{\hspace{-0.045in}\{}\frac{\partial^2 v}{\partial \bm{n}_e^2}{\}\hspace{-0.045in}\}} \rangle_{{\mathcal E}_h}
+j_h(v, w),
\]
with the stabilizer term
\[
j_h(v, w)=\sum_{e\in{\mathcal E}_h}\eta h_e^{-1}\langle {[\hspace{-0.02in}[}\nabla v{]\hspace{-0.02in}]}, {[\hspace{-0.02in}[}\nabla w{]\hspace{-0.02in}]}\rangle_{e}, \quad \forall v, w\in S_h.
\]
Here the penalty parameter $\eta$ is a positive constant.
\end{algorithm}
For any $v\in H^2({\mathcal T}_h)$, the jump ${[\hspace{-0.02in}[} \nabla v{]\hspace{-0.02in}]}$ and the average ${\{\hspace{-0.045in}\{} \frac{\partial^2 v}{\partial \bm{n}_e^2}{\}\hspace{-0.045in}\}}$ are defined as follows.
Let $e\in {\mathcal E}_h^0$ be the common edge of two elements $K_1$ and $K_2$ of ${\mathcal T}_h$, and let $\bm{n}_i$, $i=1,2$, denote the outward unit normal vector of the boundary $\partial K_i$. We define on the edge $e$
\begin{align*}
{\{\hspace{-0.045in}\{} \frac{\partial^2 v}{\partial \bm{n}_e^2}{\}\hspace{-0.045in}\}}=\frac{1}{2}(\frac{\partial^2 v_1}{\partial \bm{n}_e^2}+\frac{\partial^2 v_2}{\partial \bm{n}_e^2})\quad
\mbox{and}\quad
{[\hspace{-0.02in}[} \nabla v{]\hspace{-0.02in}]} = \nabla v_1\cdot \bm{n}_1+\nabla v_2\cdot\bm{n}_2,
\end{align*}
where $v_i=v|_{K_i}, i= 1,2$. On a boundary edge $e\subset\partial\Omega$, we simply take ${\{\hspace{-0.045in}\{} \frac{\partial^2 v}{\partial \bm{n}_e^2}{\}\hspace{-0.045in}\}}=\frac{\partial^2 v}{\partial \bm{n}_e^2}$ and ${[\hspace{-0.02in}[} \nabla v{]\hspace{-0.02in}]} = \nabla v\cdot \bm{n}$.
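The jump and average can be sanity-checked on a toy configuration. The sketch below is a one-dimensional analogue (an illustration: the ``edge'' is the point $x=0$, the outward normals are $\pm 1$, and the piecewise polynomials are assumed for this example); it evaluates ${[\hspace{-0.02in}[}\nabla v{]\hspace{-0.02in}]}$ and ${\{\hspace{-0.045in}\{}\partial^2 v/\partial \bm{n}_e^2{\}\hspace{-0.045in}\}}$ by finite differences:

```python
# piecewise function on K1 = [-1, 0], K2 = [0, 1], interface e = {0}
def v1(x): return x * x + 1.0        # restriction of v to K1
def v2(x): return 2.0 * x + 1.0      # restriction of v to K2

def d(f, x, h=1e-6):                 # central difference for f'
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):                # central difference for f''
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

n1, n2 = 1.0, -1.0                   # outward normals of K1, K2 at x = 0
jump_grad = d(v1, 0.0) * n1 + d(v2, 0.0) * n2      # [[grad v]]
avg_d2 = 0.5 * (d2(v1, 0.0) + d2(v2, 0.0))         # {{d^2 v / dn^2}}
assert abs(jump_grad - (-2.0)) < 1e-6              # v1'(0) - v2'(0) = 0 - 2
assert abs(avg_d2 - 1.0) < 1e-3                    # (2 + 0) / 2
```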
Compared with the C0IP method (\ref{c0ipdg}), our SF-C0WG method (\ref{wg}) has a simple formulation without any integration terms on the edges of ${\mathcal E}_h$, which simplifies the implementation of the corresponding numerical scheme and reduces the assembly time of the stiffness matrix. Although the SF-C0WG method (\ref{wg}) has more degrees of freedom than the C0IP method (\ref{c0ipdg}), numerical experiments in Section \ref{Sec:numeric} indicate that its total computational time is less than that of the C0IP method (\ref{c0ipdg}).
\end{remark}
\section{Well Posedness}\label{Sec:well-pose}
For simplicity of notation, from now on we shall drop the subscript $k+3$ in the notation $\Delta_{w,k+3}$ for the discrete weak Laplacian.
In order to analyze the SF-C0WG method (\ref{wg}), we introduce two $H^2$-like norms ${|\hspace{-.02in}|\hspace{-.02in}|}\cdot{|\hspace{-.02in}|\hspace{-.02in}|}$ and $\|\cdot\|_{2,h}$ over $V_h^0$ by
\begin{equation}\label{3barnorm}
{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}=\left[ \sum_{K\in{\mathcal T}_h}\|\Delta_wv\|_{L^2(K)}^2\right]^{1/2},
\end{equation}
and
\begin{equation}\label{norm}
\|v\|_{2,h} = \left[ \sum_{K\in{\mathcal T}_h}\left(\|\Delta v_0\|_{L^2(K)}^2+h_K^{-1}\|\frac{\partial v_0}{\partial {\bm n}_e}-v_n\|^2_{L^2({\partial K})}\right) \right]^{1/2},
\end{equation}
for all $v\in V_h^0$. Obviously, $\|\cdot\|_{2,h}$ is indeed a norm on $V_h^0$. We will show ${|\hspace{-.02in}|\hspace{-.02in}|}\cdot{|\hspace{-.02in}|\hspace{-.02in}|}$ is also a norm by proving that the norms $\|\cdot\|_{2,h}$ and ${|\hspace{-.02in}|\hspace{-.02in}|}\cdot{|\hspace{-.02in}|\hspace{-.02in}|}$ are equivalent on the finite element space $V_h^0$ in Lemma \ref{lem:happy}.
In what follows, the trace inequality is a frequently used analysis tool; it states \cite{wy-mixed} that for any function $\phi\in H^1(K)$, there holds
\begin{equation}\label{trace}
\|\phi\|_{L^2(\partial K)}^2 \leq C \left( h_K^{-1} \|\phi\|_{L^2(K)}^2 + h_K
\|\nabla \phi\|_{L^2(K)}^2\right).
\end{equation}
The following lemma plays a key role in the proof of Lemma \ref{lem:happy}.
\begin{lemma} \label{l-m2}
For any $v=\{v_0, v_n\bm{n}_e\}\in V_h$ and $K\in{\mathcal T}_h$, there exists a polynomial $\varphi\in \mathbb{P}_{k+3}(K)$ such that
\begin{align*}
(\Delta v_0, \varphi)_K=0,\quad
\langle (\nabla v_0-v_n\bm{n}_e)\cdot\bm{n}, \varphi\rangle_{\partial K}=\|(\nabla v_0-v_n\bm{n}_e)\cdot\bm{n}\|_{L^2(\partial K)}^2,
\end{align*}
and
\begin{align}\label{q-est}
\|\varphi\|_{L^2(K)}\leq Ch_K^{1/2}\|(\nabla v_0-v_n\bm{n}_e)\cdot\bm{n}\|_{L^2(\partial K)}.
\end{align}
\end{lemma}
\begin{proof}
For any $K\in{\mathcal T}_h$, let $e_i$, $i=1,2,3$, be the three edges of $K$ and let $\lambda_i$, $i=1,2,3$, be the barycentric coordinates of $K$. Then, we define a polynomial $\varphi_i\in \mathbb{P}_{k+3}(K)$ for each $i=1,2,3$ by requiring that
\begin{align}\label{q0}
\varphi_i = \prod_{j=1, j\neq i}^3\lambda_j q,
\end{align}
with $q\in \mathbb{P}_{k+1}(K)$ and such that
\begin{subequations}
\begin{align}
\label{qa}
\langle \varphi_i, \tau\rangle_{e_i}&=\langle (\nabla v_0-v_n\bm{n}_e)\cdot\bm{n}, \tau\rangle_{e_i}, \quad\forall \tau\in \mathbb{P}_{k+1}(e_i),\\
\label{qb}
(\varphi_i, \tau)_K &=0,\quad\forall \tau\in \mathbb{P}_k(K).
\end{align}
\end{subequations}
Since there are $$(k+2)+\frac{1}{2}(k+1)(k+2)=\frac{1}{2}(k+2)(k+3)$$ equations and the same number of unknowns in the linear system (\ref{qa})-(\ref{qb}), the existence and uniqueness of $\varphi_i$ are equivalent.
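The dimension count above can be verified programmatically (a trivial check, using $\dim\mathbb{P}_m$ on a triangle $=(m+1)(m+2)/2$):

```python
# unknowns: the coefficients of q in P_{k+1} on a 2D triangle;
# equations: the P_{k+1}(e_i) tests in (qa) plus the P_k(K) tests in (qb)
def dim_P(m):
    # dimension of polynomials of total degree <= m in two variables
    return (m + 1) * (m + 2) // 2

for k in range(20):
    unknowns = dim_P(k + 1)
    equations = (k + 2) + dim_P(k)   # dim P_{k+1}(e) = k+2 on an edge
    assert unknowns == equations == (k + 2) * (k + 3) // 2
```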
Assume that both $\varphi_i$ and $\widehat{\varphi}_i$ satisfy the linear system (\ref{qa})-(\ref{qb}); we will prove that their difference $d_i=\varphi_i-\widehat{\varphi}_i$ vanishes on $K$. From (\ref{q0})-(\ref{qb}), we know that $d_i$ can be expressed as
\begin{align}\label{d0}
d_i = \prod_{j=1, j\neq i}^3\lambda_j \widetilde{q},
\end{align}
with $\widetilde{q}\in \mathbb{P}_{k+1}(K)$, and $d_i$ satisfies the following conditions,
\begin{subequations}
\begin{align}
\label{da}
\langle d_i, \tau\rangle_{e_i}&=0, \quad\forall \tau\in \mathbb{P}_{k+1}(e_i),\\
\label{db}
(d_i, \tau)_K &=0,\quad\forall \tau\in \mathbb{P}_k(K).
\end{align}
\end{subequations}
It follows from (\ref{da}) that $d_i=0$ on $e_i$, which together with (\ref{d0}) implies that $\widetilde{q}$ in (\ref{d0}) can be written as $\widetilde{q}=\lambda_i \omega$ with $\omega\in \mathbb{P}_{k}(K)$. Therefore, we have
\begin{align*}
d_i = \prod_{j=1}^3\lambda_j \omega, \,\, \mbox{ with } \omega\in \mathbb{P}_{k}(K),
\end{align*}
which, combined with (\ref{db}), implies $d_i=0$ on $K$.
Hence, the linear system (\ref{qa})-(\ref{qb}) has a unique solution $\varphi_i$ in the form of (\ref{q0}), which belongs to $\mathbb{P}_{k+3}(K)$.
Then, by a scaling argument, we have
\begin{align*}
\|\varphi_i\|_{L^2(K)} \leq Ch_K^{1/2}\|\varphi_i\|_{L^2(\partial K)}.
\end{align*}
Thanks to (\ref{q0}), we have $\varphi_i = 0$ on $e_j$ for $j\neq i$, so that
$\|\varphi_i\|_{L^2(\partial K)}=\|\varphi_i\|_{L^2(e_i)}$. Therefore,
\begin{align}\label{s1}
\|\varphi_i\|_{L^2(K)} \leq Ch_K^{1/2}\|\varphi_i\|_{L^2(e_i)}.
\end{align}
Let $\theta_i(x) = \prod_{j=1,j\neq i}^{3}\lambda_j(x)$. Using (\ref{q0}) and (\ref{qa}), we have
\begin{align*}
\langle \theta_i q, \tau\rangle_{e_i}
=\langle \varphi_i, \tau\rangle_{e_i}
\leq \|(\nabla v_0-v_n\bm{n}_e)\cdot\bm{n}\|_{L^2(e_i)}\|\tau\|_{L^2(e_i)}.
\end{align*}
Taking $\tau = q$ in the above inequality, and by the second mean value theorem of integrals, there exists a point $\varepsilon_1\in e_i$ such that
\begin{align*}
\theta_i(\varepsilon_1)\|q\|_{L^2(e_i)}^2=\langle \theta_i q, q\rangle_{e_i}
\leq \|(\nabla v_0-v_n\bm{n}_e)\cdot\bm{n}\|_{L^2(e_i)}\|q\|_{L^2(e_i)}.
\end{align*}
Then, after cancelling $\|q\|_{L^2(e_i)}$, we obtain
\begin{align}\label{s2}
\|q\|_{L^2(e_i)}
\leq \theta_i^{-1}(\varepsilon_1)\|(\nabla v_0-v_n\bm{n}_e)\cdot\bm{n}\|_{L^2(e_i)}.
\end{align}
Therefore, using (\ref{q0}) and the second mean value theorem of integrals again, there exists a point $\varepsilon_2\in e_i$ such that
\begin{align*}
\|\varphi_i\|_{L^2(e_i)}=\sqrt{\langle \theta_i^2, q^2\rangle_{e_i}}
=\theta_i(\varepsilon_2)\|q\|_{L^2(e_i)},
\end{align*}
which together with (\ref{s2}) leads to
\begin{align*}
\|\varphi_i\|_{L^2(e_i)} \leq \theta_i(\varepsilon_2)\theta_i^{-1}(\varepsilon_1)
\|(\nabla v_0-v_n\bm{n}_e)\cdot\bm{n}\|_{L^2(e_i)}.
\end{align*}
Thus, from (\ref{s1}) and the above inequality, we obtain
\begin{align*}
\|\varphi_i\|_{L^2(K)} \leq Ch_K^{1/2}\|(\nabla v_0-v_n\bm{n}_e)\cdot\bm{n}\|_{L^2(e_i)}.
\end{align*}
Finally, choosing $\varphi = \sum_{i=1}^3\varphi_i$ ends the proof.
\end{proof}
\begin{lemma}\label{lem:happy} There exist two positive constants $C_1$ and $C_2$ such
that for any $v=\{v_0,v_n{\bm n}_e\}\in V_h$, we have
\begin{equation*}
C_1 \|v\|_{2,h}\le {|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|} \leq C_2 \|v\|_{2,h}.
\end{equation*}
\end{lemma}
\begin{proof}
For any $v=\{v_0,v_n{\bm n}_e\}\in V_h$ and $\varphi\in \mathbb{P}_{k+3}(K)$, it follows from the definition of
weak Laplacian (\ref{wl}) and integration by parts that
\begin{eqnarray}
(\Delta_{w} v, \varphi)_K
&=&-(\nabla v_0, \nabla\varphi)_K+{\langle} v_n{\bm n}_e\cdot{\bm n}, \varphi{\rangle}_{\partial K}\nonumber\\
&=&(\Delta v_0, \varphi)_K+{\langle} (v_n{\bm n}_e-\nabla v_0)\cdot{\bm n}, \varphi{\rangle}_{\partial K}.\label{n-1}
\end{eqnarray}
By letting $\varphi=\Delta_w v$ in (\ref{n-1}) we arrive at
\begin{eqnarray*}
\|\Delta_{w} v\|^2_{L^2(K)} &=&(\Delta v_0, \Delta_w v)_K+{\langle} (v_n{\bm n}_e-\nabla v_0)\cdot{\bm n}, \Delta_w v{\rangle}_{\partial K}.
\end{eqnarray*}
From the trace inequality (\ref{trace}) and the inverse inequality,
we have
\begin{eqnarray*}
\|\Delta_wv\|^2_{L^2(K)} &\le& \|\Delta v_0\|_{L^2(K)} \|\Delta_w v\|_{L^2(K)}
+\|(v_n{\bm n}_e-\nabla v_0)\cdot{\bm n}\|_{L^2({\partial K})} \|\Delta_w v\|_{L^2({\partial K})}\\
&\le& C(\|\Delta v_0\|_{L^2(K)}+h_K^{-1/2}\|(v_n{\bm n}_e-\nabla v_0)\cdot{\bm n}\|_{L^2({\partial K})}) \|\Delta_w v\|_{L^2(K)},
\end{eqnarray*}
which implies
$$
\|\Delta_w v\|_{L^2(K)} \le C \left(\|\Delta v_0\|_{L^2(K)}+h_K^{-1/2}\|(v_n{\bm n}_e-\nabla v_0)\cdot{\bm n}\|_{L^2({\partial K})}\right),
$$
and consequently
$${|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|} \leq C_2 \|v\|_{2,h}.$$
Next we will prove
\begin{equation}\label{n-33}
\sum_{K\in{\mathcal T}_h} h_K^{-1}\|(\nabla v_0-v_n{\bm n}_e)\cdot{\bm n}\|^2_{L^2({\partial K})} \leq C{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}^2.
\end{equation}
Let $\varphi_0$ be the polynomial obtained from Lemma \ref{l-m2}. Taking $\varphi=\varphi_0$ in (\ref{n-1}) yields
\begin{align}
\|(v_n{\bm n}_e-\nabla v_0)\cdot{\bm n}\|_{L^2(\partial K)}^2
&=(\Delta_{w} v, \varphi_0)_K\le \|\Delta_{w} v\|_{L^2(K)}\|\varphi_0\|_{L^2(K)}\nonumber\\
&\leq Ch_K^{1/2} \|\Delta_{w} v\|_{L^2(K)}\|(v_n{\bm n}_e-\nabla v_0)\cdot{\bm n}\|_{L^2(\partial K)},
\end{align}
which implies (\ref{n-33}).
Finally, by letting $\varphi=\Delta v_0$ in (\ref{n-1}) we arrive at
\begin{eqnarray*}
\|\Delta v_0\|^2_{L^2(K)} &=&(\Delta v_0, \ \Delta_w v)_K
- {\langle} (v_n{\bm n}_e-\nabla v_0)\cdot{\bm n}, \ \Delta_w v{\rangle}_{\partial K}
\end{eqnarray*}
Using the trace inequality (\ref{trace}), the inverse inequality, and (\ref{n-33}), one has
\begin{eqnarray*}
\|\Delta v_0\|^2_{L^2(K)} &\le& C \|\Delta_w v\|_{L^2(K)}\|\Delta v_0\|_{L^2(K)},
\end{eqnarray*}
which gives
\begin{eqnarray}
\sum_{K\in{\mathcal T}_h}\|\Delta v_0\|^2_{L^2(K)} &\le& C {|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}^2,
\end{eqnarray}
which together with (\ref{n-33}) yields
\[ {|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|} \geq C_1\|v\|_{2,h}.\]
The proof is completed.
\end{proof}
\smallskip
In the following lemma we will prove the well-posedness of the SF-C0WG method (\ref{wg}).
\begin{lemma}
The SF-C0WG finite element scheme (\ref{wg}) has a unique
solution.
\end{lemma}
\begin{proof}
To show the well-posedness of (\ref{wg}), assume that $f=g_D=g_N=0$. We will show that $u_h$ vanishes. Taking $v_h=u_h$ in (\ref{wg}), it follows that
\[
(\Delta_w u_h,\Delta_w u_h)_{{\mathcal T}_h}=0.
\]
Then Lemma \ref{lem:happy} implies $\|u_h\|_{2,h}=0$. Consequently, we have $\Delta u_0=0$,
$\nabla u_0\cdot{\bm n}_e=u_{n}$ on ${\partial K}$.
Thus $u_0$ solves (\ref{pde})-(\ref{pde-bc2}) with $f=g_D=g_N=0$, so $u_0=0$ and consequently $u_n=0$, which ends the proof.
\end{proof}
\section{An Error Equation}\label{Sec:err-eqn}
Let $Q_0:H^2(\Omega)\rightarrow S_h$ be the Scott-Zhang interpolation operator introduced in \cite{wg-bi2},
which has the following properties:
\begin{itemize}
\item [a.] \citep[Page 493]{wg-bi2} $Q_0$ preserves polynomials of degree up to $k+2$, i.e., $Q_0v=v$ for all $v\in \mathbb{P}_{k+2}({\mathcal T}_h)$.\\
\item [b.] \citep[Lemma 8.2]{wg-bi2} $Q_0$ preserves the face mass of order $k$, i.e.,
\begin{align}\label{p-fm}
\langle v-Q_0v, p\rangle_{e}=0,\quad \forall p\in \mathbb{P}_k(e), \, e\in {\mathcal E}_h.
\end{align}
\item [c.] \citep[Theorem 8.1]{wg-bi2} For any $v\in H^{\gamma}(\Omega)$ with $\gamma\geq 2$, there holds
\begin{align}\label{Q0-approxi}
\left[\sum_{K\in{\mathcal T}_h}h^{2s}|v-Q_0v|_{H^s(K)}^2\right]^{1/2}\leq Ch^{\min\{k+3,\gamma\}}|v|_{H^{\gamma}(\Omega)},\quad 0\leq s\leq 2.
\end{align}
\end{itemize}
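The approximation property (\ref{Q0-approxi}) predicts the rate $h^{\min\{k+3,\gamma\}}$. As a hedged sanity check, the following one-dimensional analogue (an element-wise least-squares fit standing in for the Scott-Zhang operator $Q_0$; the function name is ours) measures the $L^2$ error of a piecewise degree-$(k+2)$ fit of a smooth function under uniform refinement. With $k=0$ the observed rate should be close to $3$:

```python
import numpy as np

def proj_error(n, deg, f=np.sin, a=0.0, b=np.pi):
    """L2 error of an element-wise degree-`deg` least-squares fit of f
    on a uniform mesh of n elements (a discrete stand-in for the
    element-wise L2 projection)."""
    err2 = 0.0
    edges = np.linspace(a, b, n + 1)
    for xl, xr in zip(edges[:-1], edges[1:]):
        x = np.linspace(xl, xr, 50)
        c = np.polyfit(x, f(x), deg)       # local polynomial fit
        e = f(x) - np.polyval(c, x)        # pointwise residual
        err2 += np.mean(e**2) * (xr - xl)  # crude quadrature of |e|^2
    return np.sqrt(err2)

# halving h should reduce the error by roughly 2**(deg+1)
e1, e2 = proj_error(8, 2), proj_error(16, 2)
print(np.log2(e1 / e2))  # observed rate, close to deg + 1 = 3
```

This is only an illustration of the rate $h^{\min\{k+3,\gamma\}}$ for smooth data; it does not reproduce the boundary-moment property (\ref{p-fm}) of the actual operator.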
Now for the true solution $u$ of (\ref{pde})-(\ref{pde-bc2}), we introduce an interpolation operator $Q_h: H^2(\Omega)\rightarrow V_h$ such that on each element $K\in\mathcal{T}_h$,
\[
Q_h u = \{Q_0 u, Q_n(\frac{\partial u}{\partial {\bm n}_e}){\bm n}_e\},
\]
where $Q_n$ denotes the element-wise defined $L^2$ projection from $L^2(e)$ onto $\mathbb{P}_{k+1}(e)$ for each $e\subset {\partial K}$.
Define the error between the WG solution $u_h=\{u_0, u_n\bm{n}_e\}$ and the projection $Q_hu=\{Q_0u, Q_n(\frac{\partial u}{\partial\bm{n}_e})\bm{n}_e\}$ of the exact solution $u$ as
\[
e_h=Q_hu-u_h:=\{e_0, e_n\bm{n}_e\},
\]
with
\[
e_0=Q_0u-u_0, \quad e_n=Q_n(\frac{\partial u}{\partial\bm{n}_e})-u_n.
\]
The aim of this section is to derive an error equation that $e_h$ satisfies.
\begin{lemma}
Let $\pi_h$ be the element-wise defined $L^2$ projection onto $\mathbb{P}_{k+3}(K)$ on each element $K\in{\mathcal T}_h$.
For any $K\in{\mathcal T}_h$ and $w\in H^2(\Omega)$, we have
\begin{align}\label{key}
(\Delta_{w}(Q_hw), v)_K = (\Delta Q_0w,\ v)_K
+\langle Q_n(\frac{\partial w}{\partial \bm{n}})-\frac{\partial }{\partial{\bm n}}(Q_0w), v \rangle_{{\partial K}},
\end{align}
for any $v\in \mathbb{P}_{k+3}(K)$.
\end{lemma}
\begin{proof}
From the definition (\ref{wl}) of weak Laplacian it follows that
\begin{align}\label{q1}
(\Delta_{w}(Q_hw), v)_K &= -(\nabla Q_0w,\ \nabla v)_K +\langle Q_n(\frac{\partial w}{\partial \bm{n}_e})\bm{n}_e\cdot\bm{n}, v \rangle_{{\partial K}},
\end{align}
for any $v\in \mathbb{P}_{k+3}(K)$.
Using integration by parts, we get
\begin{align}\label{q2}
-(\nabla Q_0w,\ \nabla v)_K = (\Delta Q_0w, \ v)_K -\langle \nabla Q_0w\cdot\bm{n}, v\rangle_{\partial K}.
\end{align}
Plugging (\ref{q2}) into (\ref{q1}), and recalling that
\[Q_n(\frac{\partial w}{\partial \bm{n}_e})\bm{n}_e\cdot\bm{n}=Q_n(\frac{\partial w}{\partial \bm{n}})\]
yields (\ref{key}).
The proof is completed.
\end{proof}
\begin{lemma}[Error Equation]
Let $u$ and $u_h$ be the solutions of the problem (\ref{pde})-(\ref{pde-bc2}) and the SF-C0WG scheme (\ref{wg}), respectively.
For any $v\in V_h^0$, we have
\begin{eqnarray}
\mathcal{A}_h(e_h, v)=\ell(u,v),\label{ee}
\end{eqnarray}
where $\ell(u,v):=\sum\nolimits_{i=1}^2\ell_i(u,v)$, with
\begin{subequations}
\begin{align}
\label{el1}
\ell_1(u,v)&:=(\Delta_w (Q_hu)-\pi_h\Delta u, \Delta_w v)_{{\mathcal T}_h},\\
\label{el2}
\ell_2(u,v)&:= \langle \Delta u-\pi_h\Delta u, (\nabla v_0-v_n{\bm n}_e)\cdot{\bm n}\rangle_{\partial{\mathcal T}_h}.
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
For $v=\{v_0,v_n{\bm n}_e\}\in V_h^0$, testing (\ref{pde}) with $v_0$, integrating by parts, and using the fact that
\[\sum_{K\in{\mathcal T}_h}\langle \Delta u, v_n{\bm n}_e\cdot{\bm n}\rangle_{\partial K}=0,\]
we arrive at
\begin{eqnarray}
(f,v_0)&=&(\Delta^2u, v_0)_{{\mathcal T}_h}\nonumber\\
&=&(\Delta u,\Delta v_0)_{{\mathcal T}_h} -\langle \Delta u,
\nabla v_0\cdot{\bm n}\rangle_{\partial{\mathcal T}_h} + \langle\nabla(\Delta u)\cdot{\bm n},
v_0\rangle_{\partial{\mathcal T}_h}\label{m1}\\
&=&(\Delta u,\Delta v_0)_{{\mathcal T}_h} -\langle \Delta u,
(\nabla v_0-v_n{\bm n}_e)\cdot{\bm n}\rangle_{\partial{\mathcal T}_h}.\nonumber
\end{eqnarray}
Next we investigate the term $(\Delta u,\Delta v_0)_{{\mathcal T}_h}$ in the above equation. Using (\ref{key}), integration by parts, and the definition of the weak Laplacian, we have
\begin{eqnarray*}
(\Delta u, \Delta v_0)_{{\mathcal T}_h}&=&(\pi_h\Delta u, \Delta v_0)_{{\mathcal T}_h} \\
&=& -(\nabla v_0, \nabla (\pi_h\Delta u))_{{\mathcal T}_h}
+\langle \nabla v_0\cdot{\bm n}, \pi_h\Delta u \rangle_{\partial{\mathcal T}_h}\nonumber\\
&=&(\Delta_w v,\ \pi_h\Delta u)_{{\mathcal T}_h}
+\langle (\nabla v_0-v_n{\bm n}_e)\cdot{\bm n}, \pi_h\Delta
u\rangle_{\partial{\mathcal T}_h}\nonumber\\
&=&(\Delta_w (Q_hu),\ \Delta_w v)_{{\mathcal T}_h}-\ell_1(u,v)
+\langle (\nabla v_0-v_n{\bm n}_e)\cdot{\bm n}, \pi_h\Delta
u\rangle_{\partial{\mathcal T}_h},
\end{eqnarray*}
which together with (\ref{m1}) yields
\begin{eqnarray}
(f,v_0)&=&\mathcal{A}_h(Q_hu, v)-\ell_1(u,v)
-\langle (\nabla v_0-v_n{\bm n}_e)\cdot{\bm n}, \Delta u-\pi_h\Delta u \rangle_{\partial{\mathcal T}_h},\label{mmmm}
\end{eqnarray}
which implies that
\begin{eqnarray*}
\mathcal{A}_h(Q_hu, v)=(f,v_0)+\sum_{i=1}^2\ell_i(u,v).
\end{eqnarray*}
Subtracting (\ref{wg}) from the above equation ends the proof.
\end{proof}
\section{An Error Estimate in the $H^2$-like Norm}\label{Sec:H2-err}
We will obtain the optimal convergence rate for the solution $u_h$ of the SF-C0WG method (\ref{wg}) in a discrete $H^2$ norm.
\begin{lemma}\label{l2}
Assume $w\in H^{\gamma+2}(\Omega)$ with $\gamma>0$. There exists a constant $C$ such that the following estimates hold true:
\begin{align}
\label{mmm1}
&\left(\sum_{K\in{\mathcal T}_h} h_K\|\Delta w-\pi_h\Delta w\|_{L^2(\partial
K)}^2\right)^{1/2}
\leq C h^{\min\{k+4,\gamma\}}|w|_{H^{\gamma+2}(\Omega)},\\
\label{z2}
&\left(\sum_{K\in{\mathcal T}_h} h^{-1}_K\|\frac{\partial }{\partial{\bm n}}(Q_0w)-Q_n(\frac{\partial w}{\partial{\bm n}})\|_{L^2({\partial K})}^2\right)^{1/2}
\leq Ch^{\min\{k+1,\gamma\}}|w|_{H^{\gamma+2}(\Omega)},\\
\label{z3}
&\|\Delta_w(Q_hw)-\pi_h\Delta w\|_{L^2({\mathcal T}_h)}\leq Ch^{\min\{k+1,\gamma\}}|w|_{H^{\gamma+2}(\Omega)}.
\end{align}
\end{lemma}
\begin{proof}
By the trace inequality (\ref{trace}) and the approximation property of the $L^2$ orthogonal projection $\pi_h$, we have
\begin{align*}
&h_K\|\Delta w - \pi_h \Delta w\|_{L^2(\partial K)}^2\\
&\leq C(\|\Delta w - \pi_h \Delta w\|_{L^2(K)}^2+
h_K^2\|\nabla(\Delta w - \pi_h \Delta w)\|_{L^2(K)}^2)\\
&\leq Ch_K^{2\min\{k+4,\gamma\}}|\Delta w|_{H^{\gamma}(K)}^2\\
&\leq Ch_K^{2\min\{k+4,\gamma\}}|w|_{H^{\gamma+2}(K)}^2.
\end{align*}
Summing the above inequalities over all $K\in{\mathcal T}_h$ completes the proof of (\ref{mmm1}).
Next, we turn to the estimate (\ref{z2}).
It follows from the definition of $Q_0$ and $Q_n$ that
\begin{align}\label{z1}
&\|\frac{\partial}{\partial{\bm n}}(Q_0w)-Q_n(\frac{\partial w}{\partial\bm{n}})\|_{L^2({\partial K})}\nonumber\\
&\leq \|\frac{\partial}{\partial\bm{n}} (Q_0w-w)\|_{L^2({\partial K})}
+\|\frac{\partial w}{\partial\bm{n}}-Q_n(\frac{\partial w}{\partial\bm{n}})\|_{L^2({\partial K})}\nonumber\\
&\leq 2\|\frac{\partial}{\partial\bm{n}} (Q_0w-w)\|_{L^2({\partial K})}.
\end{align}
Furthermore, using the trace inequality (\ref{trace}) and the approximation property (\ref{Q0-approxi}) of $Q_0$, we obtain
\begin{align*}
&\|\frac{\partial}{\partial\bm{n}} (Q_0w-w)\|_{L^2({\partial K})}^2\\
&\leq C(h_K^{-1}\|\nabla (Q_0w-w)\|_{L^2(K)}^2+h_K\|\nabla^2 (Q_0w-w)\|_{L^2(K)}^2)\\
&\leq Ch_K^{\min\{2k+3, 2\gamma+1\}}|w|_{H^{\gamma+2}(K)}^2,
\end{align*}
which together with (\ref{z1}) yields
\begin{align*}
\sum_{K\in{\mathcal T}_h} h^{-1}_K\|\frac{\partial }{\partial{\bm n}}(Q_0w)-Q_n(\frac{\partial w}{\partial{\bm n}})\|_{L^2({\partial K})}^2
\leq Ch^{\min\{2k+2,2\gamma\}}|w|_{H^{\gamma+2}(\Omega)}^2,
\end{align*}
which ends the proof of (\ref{z2}).
Now we consider the estimate (\ref{z3}). For any $v\in \mathbb{P}_{k+3}({\mathcal T}_h)$,
from (\ref{key}) and the orthogonality of the $L^2$ projection $\pi_h$, it follows that
\begin{align}\label{c1}
&(\Delta_w(Q_hw)-\pi_h\Delta w, v)_{{\mathcal T}_h}\nonumber\\
&=(\Delta (Q_0w-w), v)_{{\mathcal T}_h}
+\langle Q_n(\frac{\partial w}{\partial\bm{n}})-\frac{\partial}{\partial{\bm n}} (Q_0w), v \rangle_{\partial {\mathcal T}_h}\nonumber\\
&=I_1+I_2.
\end{align}
From the Cauchy-Schwarz inequality and the approximation property (\ref{Q0-approxi}) of $Q_0$, one has
\begin{align}\label{c2}
|I_1|
&\leq \sum_{K\in{\mathcal T}_h} \|\Delta (Q_0w-w)\|_{L^2(K)}\| v\|_{L^2(K)}\nonumber\\
&\leq (\sum_{K\in{\mathcal T}_h} |Q_0w-w|_{H^2(K)}^2)^{1/2}\| v\|_{L^2({\mathcal T}_h)}\nonumber\\
&\leq Ch^{\min\{k+1,\gamma\}}|w|_{H^{\gamma+2}(\Omega)}\| v\|_{L^2({\mathcal T}_h)}.
\end{align}
Using the Cauchy-Schwarz inequality, (\ref{z2}) and the inverse inequality, we arrive at
\begin{align*}
|I_2|
&\leq (\sum_{K\in{\mathcal T}_h}h_K^{-1}\|Q_n(\frac{\partial w}{\partial\bm{n}})-\frac{\partial}{\partial {\bm n}} (Q_0w)\|_{L^2(\partial K)}^2)^{1/2}
(\sum_{K\in{\mathcal T}_h}h_K\| v\|_{L^2(\partial K)}^2)^{1/2}\\
&\leq Ch^{\min\{k+1,\gamma\}}|w|_{H^{\gamma+2}(\Omega)}\| v\|_{L^2({\mathcal T}_h)},
\end{align*}
which together with (\ref{c1}) and (\ref{c2}) yields
\begin{align*}
|(\Delta_w(Q_hw)-\pi_h\Delta w, v)_{{\mathcal T}_h}|\leq Ch^{\min\{k+1,\gamma\}}|w|_{H^{\gamma+2}(\Omega)}\| v\|_{L^2({\mathcal T}_h)}.
\end{align*}
Taking $v=\Delta_w(Q_hw)-\pi_h\Delta w$ in the above inequality ends the proof of (\ref{z3}).
\end{proof}
\begin{lemma}\label{l3}
Assume $w\in H^{\gamma+2}(\Omega)$ with $\gamma>0$. There exists a constant $C$ such that the following estimates hold true:
\begin{align}
\label{mm1}
|\ell_1(w, v)|&\leq Ch^{\min\{k+1,\gamma\}}|w|_{H^{\gamma+2}(\Omega)}{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|},\\
\label{mm2}
|\ell_2(w, v)|&\leq Ch^{\min\{k+4,\gamma\}}|w|_{H^{\gamma+2}(\Omega)}{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|},
\end{align}
for any $v\in V_h^0$.
\end{lemma}
\begin{proof}
Using the Cauchy-Schwarz inequality and (\ref{z3}) of Lemma \ref{l2}, we have
\begin{align*}
|\ell_1(w, v)|&=\left| (\Delta_w(Q_hw)-\pi_h\Delta w, \Delta_wv)_{{\mathcal T}_h}\right|\\
&\leq \|\Delta_w(Q_hw)-\pi_h\Delta w\|_{L^2({\mathcal T}_h)} \|\Delta_wv\|_{L^2({\mathcal T}_h)}\\
&\leq Ch^{\min\{k+1,\gamma\}}|w|_{H^{\gamma+2}(\Omega)}{|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{align*}
It follows from the Cauchy-Schwarz inequality, (\ref{mmm1}), and Lemma \ref{lem:happy} that
\begin{align*}
|\ell_2(w,v)|&=\left|\sum_{K\in{\mathcal T}_h} \langle \Delta w-\pi_h\Delta w, (\nabla
v_0-v_n{\bm n}_e)\cdot{\bm n}\rangle_{\partial K}\right|\nonumber\\
&\leq \left(\sum_{K\in{\mathcal T}_h} h_K\|\Delta w-\pi_h\Delta
w\|_{L^2({\partial K})}^2\right)^{1/2} \nonumber\\
&\quad\times\left(\sum_{K\in{\mathcal T}_h} h_K^{-1}
\|(\nabla v_0-v_n{\bm n}_e)\cdot{\bm n}\|_{L^2({\partial K})}^2\right)^{1/2}\nonumber\\
&\leq C h^{\min\{k+4,\gamma\}}|w|_{H^{\gamma+2}(\Omega)} \|v\|_{2,h}\nonumber\\
&\leq C h^{\min\{k+4,\gamma\}}|w|_{H^{\gamma+2}(\Omega)} {|\hspace{-.02in}|\hspace{-.02in}|} v{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{align*}
We have completed the proof.
\end{proof}
\begin{theorem}\label{thm1}
Let $u_h\in V_h$ be the solution arising from the SF-C0WG scheme
(\ref{wg}). Assume that the exact solution $u\in H^{k+3}(\Omega)$. Then, there
exists a constant $C$ such that
\begin{equation}\label{err1}
{|\hspace{-.02in}|\hspace{-.02in}|} Q_hu-u_h{|\hspace{-.02in}|\hspace{-.02in}|} \leq Ch^{k+1}|u|_{H^{k+3}(\Omega)}.
\end{equation}
\end{theorem}
\begin{proof}
Taking $v=e_h$ in the error equation (\ref{ee}) and using Lemma \ref{l3} with $\gamma=k+1$, we arrive at
\begin{align*}
{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}^2&=\ell(u, e_h)\leq Ch^{k+1}|u|_{H^{k+3}(\Omega)}{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|},
\end{align*}
which completes the proof.
\end{proof}
\section{Error Estimates in the $L^2$ Norm and $H^1$ Norm}\label{Sec:L2-err}
In this section, we will provide estimates for the
$L^2$ norm and $H^1$ norm of the error between the exact solution $u$ and its corresponding WG finite element solution $u_h$.
Firstly, let us introduce the following dual problem
\begin{eqnarray}
\Delta^2\phi&=& \chi\quad
\mbox{in}\;\Omega,\label{dual}\\
\phi&=&0\quad\mbox{on}\;\Gamma,\label{dual1}\\
\nabla \phi\cdot{\bm n}&=&0\quad\mbox{on}\;\Gamma.\label{dual2}
\end{eqnarray}
Assume that the dual problem has the $H^{\alpha+2}$-regularity in the sense that there exists a constant $C$ such that
\begin{equation}\label{reg}
\|\phi\|_{H^{\alpha+2}(\Omega)}\le C\|\chi\|_{H^{\alpha-2}(\Omega)},\quad \mbox{ for } \alpha=1,2.
\end{equation}
For $\chi\in H^{\alpha-2}(\Omega)$ with $\alpha>0$, the $H^{\alpha+2}$-regularity has been proved for smooth domains in any dimension \cite{dauge}. The $H^4$-regularity has been proved by Blum and Rannacher \cite{br} for two-dimensional convex polygonal domains with inner angles less than $126.28\ldots^{\circ}$.
\begin{lemma}\label{lem:p3}
Let $\phi\in H^{\alpha+2}(\Omega)$ with $\alpha=1,2$. Then, there holds
\begin{align}\label{p3}
|\Delta_w(Q_h\phi)|_{H^{\alpha}({\mathcal T}_h)}\leq Ch^{\min\{k+1-\alpha,0\}}|\phi|_{H^{\alpha+2}(\Omega)}.
\end{align}
\end{lemma}
\begin{proof}
The proof is given in Appendix.
\end{proof}
\begin{lemma}\label{l4}
Assume $u\in H^{k+3}(\Omega)$ and $\phi\in H^{\alpha+2}(\Omega)$ with $\alpha=1,2$. Then for $k\geq 0$, there holds
\begin{align}
\label{a1}
|\ell_1(u, Q_h\phi)|&\leq Ch^{\min\{2k+2,k+1+\alpha\}}|u|_{H^{k+3}(\Omega)}|\phi|_{H^{\alpha+2}(\Omega)},\\
\label{a2}
|\ell_2(u, Q_h\phi)|&\leq Ch^{\min\{2k+2,k+1+\alpha\}}|u|_{H^{k+3}(\Omega)}|\phi|_{H^{\alpha+2}(\Omega)}.
\end{align}
\end{lemma}
\begin{proof}
Let $\mathcal{P}_h^{\alpha-1}$ be the $L^2$ orthogonal projection onto the piecewise polynomial space $\mathbb{P}_{\alpha-1}({\mathcal T}_h)$. For simplicity, write $\phi_h=\Delta_w(Q_h\phi)$ and $\widehat{\phi}_h=\mathcal{P}_h^{\alpha-1}(\phi_h)$. Then,
\begin{align}\label{l1-est}
\ell_1(u, Q_h\phi)
&= (\Delta_w(Q_hu)-\pi_h\Delta u, \phi_h )_{{\mathcal T}_h}\nonumber\\
&= (\Delta_w(Q_hu)-\pi_h\Delta u, \phi_h-\widehat{\phi}_h )_{{\mathcal T}_h}
+(\Delta_w(Q_hu)-\pi_h\Delta u, \widehat{\phi}_h )_{{\mathcal T}_h}\nonumber\\
&=T_1+T_2.
\end{align}
Using the Cauchy-Schwarz inequality, (\ref{z3}) of Lemma \ref{l2} and (\ref{p3}), one has
\begin{align}\label{T1-est}
|T_1|&=|(\Delta_w(Q_hu)-\pi_h\Delta u, \phi_h-\widehat{\phi}_h )_{{\mathcal T}_h}|\nonumber\\
&\leq \|\Delta_w(Q_hu)-\pi_h\Delta u\|_{L^2({\mathcal T}_h)}\|\phi_h-\widehat{\phi}_h\|_{L^2({\mathcal T}_h)}\nonumber\\
&\leq Ch^{k+1}|u|_{H^{k+3}(\Omega)}\cdot h^{\alpha} |\phi_h|_{H^{\alpha}({\mathcal T}_h)}\nonumber\\
&\leq Ch^{k+1+\alpha}|u|_{H^{k+3}(\Omega)}\cdot h^{\min\{k+1-\alpha,0\}}|\phi|_{H^{\alpha+2}(\Omega)}\nonumber\\
&\leq Ch^{\min\{2k+2,k+1+\alpha\}}|u|_{H^{k+3}(\Omega)} |\phi|_{H^{\alpha+2}(\Omega)}.
\end{align}
Now we turn to the estimate of the term $T_2$. Firstly, we rewrite $T_2$ as follows:
\begin{align}\label{T2}
T_2&=(\Delta_w(Q_hu)-\pi_h\Delta u, \widehat{\phi}_h )_{{\mathcal T}_h}\nonumber\\
&=(\Delta (Q_0u-u), \widehat{\phi}_h)_{{\mathcal T}_h}
+\langle Q_n(\frac{\partial u}{\partial {\bm n}})-\frac{\partial}{\partial{\bm n}}(Q_0u), \widehat{\phi}_h\rangle_{\partial{\mathcal T}_h}\nonumber\\
&=-(\nabla (Q_0u-u), \nabla\widehat{\phi}_h)_{{\mathcal T}_h}
+\langle Q_n(\frac{\partial u}{\partial {\bm n}})-\frac{\partial u}{\partial{\bm n}}, \widehat{\phi}_h\rangle_{\partial{\mathcal T}_h}\nonumber\\
&=J_1+J_2.
\end{align}
For the first term $J_1$, we discuss it in the following two cases:
\begin{itemize}
\item In the case of $\alpha=1$, $\nabla\widehat{\phi}_h=0$ since $\widehat{\phi}_h=\mathcal{P}_h^0(\phi_h)\in \mathbb{P}_0({\mathcal T}_h)$. Therefore, $J_1=0$.
\item In the case of $\alpha=2$, $\nabla\widehat{\phi}_h$ is a piecewise constant vector due to $\widehat{\phi}_h=\mathcal{P}_h^1(\phi_h)\in \mathbb{P}_1({\mathcal T}_h)$. Then, by Green's formula and (\ref{p-fm}), we get
\begin{align*}
J_1 = \sum_{K\in{\mathcal T}_h}-\langle Q_0u-u, \nabla\widehat{\phi}_h\cdot{\bm n}\rangle_{\partial K}=0.
\end{align*}
\end{itemize}
Thus, in both cases $\alpha=1$ and $\alpha=2$, we have
\begin{equation}\label{J1-est}
J_1=0.
\end{equation}
As to the second term $J_2$, recalling the fact
\begin{align*}
\langle Q_n(\frac{\partial u}{\partial {\bm n}})-\frac{\partial u}{\partial{\bm n}}, \Delta \phi\rangle_{\partial{\mathcal T}_h}=0,
\end{align*}
we split $J_2$ into the following two terms:
\begin{align*}
J_2&=\langle Q_n(\frac{\partial u}{\partial {\bm n}})-\frac{\partial u}{\partial{\bm n}}, \widehat{\phi}_h\rangle_{\partial{\mathcal T}_h}\\
&=\langle Q_n(\frac{\partial u}{\partial {\bm n}})-\frac{\partial u}{\partial{\bm n}}, \mathcal{P}_h^{\alpha-1}(\Delta_w(Q_h\phi)-\Delta \phi)\rangle_{\partial{\mathcal T}_h}\nonumber\\
&\quad+\langle Q_n(\frac{\partial u}{\partial {\bm n}})-\frac{\partial u}{\partial{\bm n}}, \mathcal{P}_h^{\alpha-1}(\Delta \phi)-\Delta \phi\rangle_{\partial{\mathcal T}_h}.
\end{align*}
Then, by the Cauchy-Schwarz inequality and (\ref{z2}) of Lemma \ref{l2} with $\gamma=k+1$, we get
\begin{align}\label{J2}
|J_2|&\leq (\sum_{K\in{\mathcal T}_h}h_K^{-1}\|Q_n(\frac{\partial u}{\partial {\bm n}})-\frac{\partial u}{\partial{\bm n}}\|_{L^2(\partial K)}^2)^{1/2}(\Theta_1^{1/2}+\Theta_2^{1/2})\nonumber\\
&\leq Ch^{k+1}|u|_{H^{k+3}(\Omega)}(\Theta_1^{1/2}+\Theta_2^{1/2}),
\end{align}
where
\begin{align*}
\Theta_1&:=\sum_{K\in{\mathcal T}_h}h_K\|\mathcal{P}_h^{\alpha-1}(\Delta_w(Q_h\phi)-\Delta \phi)\|_{L^2(\partial K)}^2,\\
\Theta_2&:=\sum_{K\in{\mathcal T}_h}h_K\|\mathcal{P}_h^{\alpha-1}(\Delta \phi)-\Delta \phi\|_{L^2(\partial K)}^2.
\end{align*}
From the trace inequality and the stability of $L^2$ projection $\mathcal{P}_h^{\alpha-1}$, it follows that
\begin{align*}
\Theta_1\leq C\sum_{K\in{\mathcal T}_h}\|\mathcal{P}_h^{\alpha-1}(\Delta_w(Q_h\phi)-\Delta \phi)\|_{L^2(K)}^2
\leq C\|\Delta_w(Q_h\phi)-\Delta \phi\|_{L^2({\mathcal T}_h)}^2.
\end{align*}
Then, by the triangle inequality and (\ref{z3}) of Lemma \ref{l2}, we arrive at
\begin{align}\label{theta1}
\Theta_1&\leq C(\|\Delta_w(Q_h\phi)-\pi_h\Delta \phi\|_{L^2({\mathcal T}_h)}^2+\|\pi_h\Delta\phi-\Delta \phi\|_{L^2({\mathcal T}_h)}^2)\nonumber\\
&\leq Ch^{2\min\{k+1,\alpha\}}|\phi|_{H^{\alpha+2}(\Omega)}^2.
\end{align}
It follows from the trace inequality and the approximation property of $L^2$ projection $\mathcal{P}_h^{\alpha-1}$ that
\begin{align*}
\Theta_2&\leq \sum_{K\in{\mathcal T}_h}(h_K\|\mathcal{P}_h^{\alpha-1}(\Delta \phi)-\Delta \phi\|_{L^2(K)}^2+h_K |\mathcal{P}_h^{\alpha-1}(\Delta \phi)-\Delta \phi|_{H^1( K)}^2)\nonumber\\
&\leq Ch^{2\alpha}|\phi|_{H^{\alpha+2}(\Omega)}^2,
\end{align*}
which together with (\ref{theta1}) and (\ref{J2}) leads to
\begin{align}\label{J2-est}
|J_2|\leq Ch^{\min\{2k+2,k+1+\alpha\}}|u|_{H^{k+3}(\Omega)}|\phi|_{H^{\alpha+2}(\Omega)}.
\end{align}
Collecting (\ref{T2}), (\ref{J1-est}) and (\ref{J2-est}) yields
\begin{align*}
|T_2|\leq Ch^{\min\{2k+2,k+1+\alpha\}}|u|_{H^{k+3}(\Omega)}|\phi|_{H^{\alpha+2}(\Omega)},
\end{align*}
which, combined with (\ref{T1-est}) and (\ref{l1-est}), completes the proof of (\ref{a1}).
As to the proof of (\ref{a2}), from the Cauchy-Schwarz inequality and Lemma \ref{l2} with $\gamma=k+1$ it follows that
\begin{align*}
|\ell_2(u, Q_h\phi)|
&=\left|\sum_{K\in{\mathcal T}_h} \langle \Delta u-\pi_h\Delta u, \frac{\partial}{\partial {\bm n}}
(Q_0\phi)-Q_n(\frac{\partial \phi}{\partial{\bm n}_e}){\bm n}_e\cdot{\bm n}\rangle_{\partial K}\right|\nonumber\\
&\leq\left(\sum_{K\in{\mathcal T}_h} h_K\|\Delta u-\pi_h\Delta u\|_{L^2({\partial K})}^2\right)^{1/2}\times\nonumber\\
&\quad \left(\sum_{K\in{\mathcal T}_h} h^{-1}_K
\|\frac{\partial}{\partial{\bm n}}
(Q_0\phi)-Q_n(\frac{\partial \phi}{\partial{\bm n}})\|_{L^2({\partial K})}^2\right)^{1/2}\nonumber\\
&\leq Ch^{k+1}|u|_{H^{k+3}(\Omega)}\cdot h^{\min\{k+1,\alpha\}}|\phi|_{H^{\alpha+2}(\Omega)}\nonumber\\
&\leq C h^{\min\{2k+2,k+1+\alpha\}}|u|_{H^{k+3}(\Omega)} |\phi|_{H^{\alpha+2}(\Omega)}.
\end{align*}
The proof is completed.
\end{proof}
\begin{theorem}\label{thm2}
Let $u_h=\{u_0, u_n\bm{n}_e\}\in V_h$ be the solution
of the SF-C0WG scheme (\ref{wg}). Assume that the exact solution $u\in H^{k+3}(\Omega)$ and the regularity assumption (\ref{reg}) holds true.
Then, there exists a constant $C$ such that
\begin{equation}\label{L2err}
\|Q_0u-u_0\|_{L^2(\Omega)} \leq Ch^{k+3-\delta_{k,0}}|u|_{H^{k+3}(\Omega)}
\end{equation}
and
\begin{equation}\label{H1err}
\|\nabla(Q_0u-u_0)\|_{L^2(\Omega)} \leq Ch^{k+2}|u|_{H^{k+3}(\Omega)}.
\end{equation}
Here $\delta_{i,j}$ is the usual Kronecker's delta with value $1$
when $i=j$ and value $0$ otherwise.
\end{theorem}
\begin{proof}
Testing (\ref{dual}) by the error function $e_0$ and then proceeding as in the proof of (\ref{mmmm}), we obtain
\begin{align}\label{tt}
(\chi, e_0)=(\Delta^2 \phi, e_0)_{{\mathcal T}_h}
=\mathcal{A}_h(e_h, Q_h\phi)-\ell(\phi,e_h).
\end{align}
The error equation (\ref{ee}) gives
\begin{align*}
\mathcal{A}_h(e_h, Q_h\phi) = \ell(u,Q_h\phi),
\end{align*}
which combining with (\ref{tt}) leads to
\begin{align}\label{z0}
(\chi, e_0)=\ell(u,Q_h\phi)-\ell(\phi,e_h).
\end{align}
In view of Lemma \ref{l4}, we infer that
\begin{align}\label{z5}
|\ell(u, Q_h\phi)| \leq Ch^{\min\{2k+2,k+1+\alpha\}}|u|_{H^{k+3}(\Omega)}|\phi|_{H^{\alpha+2}(\Omega)}.
\end{align}
Using Lemma \ref{l3} with $\gamma = \alpha$ and Theorem \ref{thm1}, we have
\begin{align*}
|\ell(\phi,e_h)|&\leq C h^{\min\{k+1,\alpha\}} |\phi|_{H^{\alpha+2}(\Omega)}{|\hspace{-.02in}|\hspace{-.02in}|} e_h{|\hspace{-.02in}|\hspace{-.02in}|}\\
&\leq Ch^{\min\{2k+2,k+1+\alpha\}}|u|_{H^{k+3}(\Omega)}|\phi|_{H^{\alpha+2}(\Omega)},
\end{align*}
which, together with (\ref{z0}) and (\ref{z5}), leads to
\begin{align}\label{z4}
|(\chi, e_0)| \leq C h^{\min\{2k+2,k+1+\alpha\}}|u|_{H^{k+3}(\Omega)}|\phi|_{H^{\alpha+2}(\Omega)}.
\end{align}
For the $L^2$-norm estimate of $e_0$, taking $\chi=e_0$ in the dual problem (\ref{dual})-(\ref{dual2}) and using the estimate (\ref{z4}) with $\alpha=2$ (the $H^4$-regularity), we find
\begin{align*}
\|e_0\|_{L^2(\Omega)}^2 \leq C h^{\min\{2k+2,k+3\}}|u|_{H^{k+3}(\Omega)}|\phi|_{H^{4}(\Omega)},
\end{align*}
which together with the assumption (\ref{reg}) with $\alpha=2$:
\[\|\phi\|_{H^4(\Omega)}\leq C\|e_0\|_{L^2(\Omega)}\]
completes the proof of (\ref{L2err}).
Then, using the estimate (\ref{z4}) with $\alpha=1$ (the $H^3$-regularity) yields
\begin{align*}
|(\chi, e_0)| \leq C h^{k+2}|u|_{H^{k+3}(\Omega)}|\phi|_{H^3(\Omega)},
\end{align*}
which together with the assumption (\ref{reg}) with $\alpha=1$:
\[\|\phi\|_{H^3(\Omega)}\leq C\|\chi\|_{H^{-1}(\Omega)}\]
leads to
\begin{align}
\|\nabla e_0\|_{L^2(\Omega)} = \sup_{\chi\in H^{-1}(\Omega)}\frac{(\chi, e_0)}{\|\chi\|_{H^{-1}(\Omega)}}
\leq C h^{k+2}|u|_{H^{k+3}(\Omega)},
\end{align}
which ends the proof of (\ref{H1err}).
\end{proof}
By the triangle inequality, from Theorem \ref{thm2} and (\ref{Q0-approxi}), we immediately obtain the $L^2$ norm and $H^1$ norm error estimates between the exact solution $u$ and its WG finite element approximation $u_0$ as follows:
\begin{corollary}
Let $u_h=\{u_0, u_n\bm{n}_e\}\in V_h$ be the solution
of the SF-C0WG scheme (\ref{wg}). Assume that the exact solution $u\in H^{k+3}(\Omega)$ and the regularity assumption (\ref{reg}) holds true.
Then, there exists a constant $C$ such that
\begin{equation}
\|u-u_0\|_{L^2(\Omega)} \leq Ch^{k+3-\delta_{k,0}}|u|_{H^{k+3}(\Omega)}
\end{equation}
and
\begin{equation}
\|\nabla(u-u_0)\|_{L^2(\Omega)} \leq Ch^{k+2}|u|_{H^{k+3}(\Omega)}.
\end{equation}
Here $\delta_{i,j}$ is the usual Kronecker's delta with value $1$
when $i=j$ and value $0$ otherwise.
\end{corollary}
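The orders in the corollary come from the duality exponent $\min\{2k+2,\,k+1+\alpha\}$ of Lemma \ref{l4}, with $\alpha=2$ for the $L^2$ estimate and $\alpha=1$ for the $H^1$ estimate. The following minimal sketch (the function name is ours) tabulates these exponents and shows why the $L^2$ order drops by one only in the lowest-order case $k=0$:

```python
def predicted_order(k, alpha):
    """Duality exponent min(2k+2, k+1+alpha): alpha=2 gives the L2
    order, alpha=1 the H1 order."""
    return min(2 * k + 2, k + 1 + alpha)

for k in range(4):
    l2 = predicted_order(k, 2)   # equals k + 3 - delta_{k,0}
    h1 = predicted_order(k, 1)   # equals k + 2
    print(f"k={k}: L2 order {l2}, H1 order {h1}")
```

For $k\geq 1$ one has $2k+2\geq k+3$, so the $L^2$ order is the full $k+3$; only for $k=0$ does the term $2k+2=2$ dominate.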
\section{Numerical Experiments} \label{Sec:numeric}
In this section, we conduct some numerical experiments to verify the theoretical predictions for the SF-C0WG method (\ref{wg}) and to compare its numerical performance with the C0WG method (\ref{c0wg}) and the C0IP method (\ref{c0ipdg}).
\begin{example}
Consider the model problem (\ref{pde})-(\ref{pde-bc2}) with $\Omega=(0,1)^2$.
The source data $f$ and boundary data $g_D$ and $g_N$ are chosen so that the exact solution is
\[u=\sin(\pi x)\sin(\pi y).\]
\end{example}
\begin{figure}[!h]
\centering
{\includegraphics[width=0.7\textwidth]{./mesh_level_1.jpg}}
\caption{\label{grid1} The initial mesh. }
\label{fig:ex1-fig1}
\end{figure}
The initial mesh in our computation is shown in Figure \ref{fig:ex1-fig1}; it is generated by the MATLAB function \texttt{initmesh}.
Each subsequent mesh level is obtained by uniformly refining the previous one.
The errors and orders of convergence for the SF-C0WG method (\ref{wg}) with $k=0$ and $k=1$ are reported in Table \ref{t1}, which confirms the theoretical predictions of Theorems \ref{thm1} and \ref{thm2}.
Table \ref{t2} lists the errors and convergence rates for the C0WG method (\ref{c0wg}). The results in Tables \ref{t1} and \ref{t2} show that both the SF-C0WG method and the C0WG method converge at the same rates, but the accuracy reached on a given mesh with a given polynomial degree is significantly different: the SF-C0WG method is more accurate than the C0WG method.
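The rate columns in the tables are computed from consecutive errors on meshes whose size is halved at each level, i.e., $\mathrm{rate}=\log_2(e_{\ell-1}/e_{\ell})$. A small script illustrates this, recomputing from the rounded $\|u-u_0\|$ entries of Table \ref{t1} for $k=1$ (so the values differ slightly from the tabulated rates, which use unrounded errors):

```python
import math

def rates(errors, ratio=2.0):
    """Observed convergence rates log(e_{i-1}/e_i)/log(ratio) for a
    sequence of errors on uniformly refined meshes."""
    return [math.log(errors[i - 1] / errors[i]) / math.log(ratio)
            for i in range(1, len(errors))]

# ||u - u_0|| errors for k = 1, levels 1-5 (rounded entries of Table 1)
e_l2 = [2.97e-4, 2.16e-5, 1.41e-6, 8.91e-8, 6.29e-9]
print([round(r, 2) for r in rates(e_l2)])  # approaches the predicted order 4
```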
\begin{table}[h!]
\centering \renewcommand{\arraystretch}{1.1}
\caption{Error profiles and convergence rates of the SF-C0WG method.}\label{t1}
\begin{tabular}{c|c|cc|cc|cc}
\toprule[1.5pt]
$k$&level & ${|\hspace{-.02in}|\hspace{-.02in}|} Q_hu-u_h{|\hspace{-.02in}|\hspace{-.02in}|}$&Rate & $\|\nabla(u-u_0)\|$ &Rate &$\|u-u_0\|$ &Rate \\
\hline
\multirow{5}*{0}
&1&3.34E+00 & --&8.06E-02 & --&8.15E-03 & --\\
&2&1.66E+00 & 1.0076&2.03E-02 & 1.9899&2.00E-03 & 2.0305\\
&3&8.25E-01 & 1.0095&5.20E-03 & 1.9653&5.09E-04 & 1.9710\\
&4&4.11E-01 & 1.0059&1.32E-03 & 1.9787&1.29E-04 & 1.9760\\
&5&2.05E-01 & 1.0031&3.32E-04 & 1.9915&3.26E-05 & 1.9893\\
\hline
\multirow{5}*{1}
&1&3.61E-01 & --&5.79E-03 & --&2.97E-04 & --\\
&2&9.12E-02 & 1.9853&7.26E-04 & 2.9952&2.16E-05 & 3.7835\\
&3&2.28E-02 & 1.9975&8.99E-05 & 3.0144&1.41E-06 & 3.9374\\
&4&5.71E-03 & 2.0001&1.12E-05 & 3.0096&8.91E-08 & 3.9820\\
&5&1.43E-03 & 2.0004&1.39E-06 & 3.0048&6.29E-09 & 3.8238\\
\bottomrule[1.5pt]
\end{tabular}%
\end{table}%
\begin{table}[h!]
\centering \renewcommand{\arraystretch}{1.1}
\caption{Error profiles and convergence rates of the C0WG method.}\label{t2}
\begin{tabular}{c|c|cc|cc|cc}
\toprule[1.5pt]
$k$&level & ${|\hspace{-.02in}|\hspace{-.02in}|} Q_hu-u_h{|\hspace{-.02in}|\hspace{-.02in}|}$&Rate & $\|\nabla(u-u_0)\|$ &Rate &$\|u-u_0\|$ &Rate \\
\hline
\multirow{5}*{0}
&1&4.73E+00 & --&6.35E-01 & --&1.36E-01 & --\\
&2&2.33E+00 & 1.0189&1.52E-01 & 2.0620&3.35E-02 & 2.0236\\
&3&1.16E+00 & 1.0046&3.77E-02 & 2.0109&8.35E-03 & 2.0033\\
&4&5.81E-01 & 1.0018&9.41E-03 & 2.0047&2.08E-03 & 2.0019\\
&5&2.90E-01 & 1.0008&2.35E-03 & 2.0022&5.21E-04 & 2.0011\\
\hline
\multirow{5}*{1}
&1&7.14E-01 & --&7.91E-02 & --&4.46E-03 & --\\
&2&1.91E-01 & 1.9043&1.04E-02 & 2.9287&3.04E-04 & 3.8743\\
&3&4.89E-02 & 1.9629&1.33E-03 & 2.9627&1.96E-05 & 3.9522\\
&4&1.24E-02 & 1.9825&1.70E-04 & 2.9743&1.25E-06 & 3.9737\\
&5&3.11E-03 & 1.9914&2.14E-05 & 2.9849&7.89E-08 & 3.9852\\
\bottomrule[1.5pt]
\end{tabular}%
\end{table}%
\begin{table}[h!]
\centering \renewcommand{\arraystretch}{1.1}
\caption{Error profiles and convergence rates of the C0IP method.}\label{t2-2}
\begin{tabular}{c|c|cc|cc|cc}
\toprule[1.5pt]
$k$&level & $\| u-u_h\|_{dg}$&Rate & $\|\nabla(u-u_h)\|$ &Rate &$\|u-u_h\|$ &Rate \\
\hline
\multirow{5}*{0}
&1&1.33E+00 & -- &8.47E-02 & -- &1.02E-02 & --\\
&2&6.40E-01 & 1.0595&2.24E-02 & 1.9150&2.91E-03 & 1.8035\\
&3&3.20E-01 & 1.0009&5.92E-03 & 1.9222&7.99E-04 & 1.8635\\
&4&1.60E-01 & 0.9955&1.52E-03 & 1.9584&2.10E-04 & 1.9310\\
&5&8.03E-02 & 0.9983&3.86E-04 & 1.9824&5.35E-05 & 1.9689\\
\hline
\multirow{5}*{1}
&1&2.00E-01 & -- &6.24E-03 & -- &3.47E-04 & --\\
&2&5.15E-02 & 1.9533&7.83E-04 & 2.9950&2.58E-05 & 3.7489\\
&3&1.31E-02 & 1.9780&9.62E-05 & 3.0238&1.70E-06 & 3.9194\\
&4&3.30E-03 & 1.9870&1.19E-05 & 3.0157&1.09E-07 & 3.9712\\
&5&8.29E-04 & 1.9930&1.48E-06 & 3.0076&6.53E-09 & 4.0564\\
\bottomrule[1.5pt]
\end{tabular}%
\end{table}%
Table \ref{t2-2} shows the errors and the rates of convergence for the C0IP method (\ref{c0ipdg}). The errors in the first column of Table \ref{t2-2} are measured in the following $H^2$-like norm tailored to the C0IP method:
\[
\|v\|_{dg}:=\left[\sum_{K\in{\mathcal T}_h}|v|_{H^2(K)}^2+\sum_{e\in{\mathcal E}_h}h_e^{-1}\|{[\hspace{-0.02in}[}\nabla v{]\hspace{-0.02in}]}\|_{L^2(e)}^2 \right]^{1/2}.
\]
The results in Tables \ref{t1} and \ref{t2-2} show that both the SF-C0WG method and the C0IP method converge at the same rates, and the accuracies are also similar when the errors are measured in the $H^1$ semi-norm and the $L^2$ norm.
\begin{table}[h!]
\centering \renewcommand{\arraystretch}{1.1}
\caption{Comparison of assembling time and solving time for the C0WG method and the SF-C0WG method.}\label{t3}
\begin{tabular}{c|c|cc|cc}
\toprule[1.5pt]
\multirow{2}*{$k$}&\multirow{2}*{level}&\multicolumn{2}{c|}{C0WG method }&\multicolumn{2}{c}{SF-C0WG method}\\
\cline{3-6}
& & Assembling Time &Solving Time & Assembling Time &Solving Time \\
\hline
\multirow{5}*{0}
&1&0.052233 & 0.001690&0.065710 & 0.003116\\
&2&0.175486 & 0.007218&0.157510 & 0.006127\\
&3&0.734831 & 0.039118&0.546378 & 0.030866\\
&4&2.602549 & 0.171534&2.240196 & 0.133192\\
&5&10.67890 & 0.874419&8.766250 & 0.628217\\
\hline
\multirow{5}*{1}
&1&0.347160 & 0.027630&0.057160 & 0.016028\\
&2&0.184210 & 0.020082&0.201938 & 0.017132\\
&3&0.864096 & 0.072827&0.793079 & 0.061735\\
&4&3.537430 & 0.458761&2.800155 & 0.305041\\
&5&23.91767& 2.630822&12.14147 & 1.752295\\
\bottomrule[1.5pt]
\end{tabular}%
\end{table}
A comparison of the assembling time and solving time for the C0WG method and the SF-C0WG method is displayed in Table \ref{t3}. It can be observed that, except on the coarsest mesh, the assembling and solving times for the SF-C0WG method are smaller than those for the C0WG method.
\begin{table}[h!]
\centering \renewcommand{\arraystretch}{1.1}
\caption{Comparison of assembling, solving and total time for the C0IP method and the SF-C0WG method.}\label{t4}
\begin{tabular}{c|c|cc|cc}
\toprule[1.5pt]
\multirow{2}*{Time (sec.)}&\multirow{2}*{level}&\multicolumn{2}{c|}{$k=0$ }&\multicolumn{2}{c}{$k=1$}\\
\cline{3-6}
& & C0IP &SF-C0WG & C0IP &SF-C0WG \\
\hline
\multirow{5}*{Assemble}
&1&0.073452 & 0.065710&0.091383 & 0.057160\\
&2&0.163071 & 0.157510&0.252789 & 0.201938\\
&3&0.711536 & 0.546378&1.004696 & 0.793079\\
&4&2.308666 & 2.240196&3.720932 & 2.800155\\
&5&9.169572 & 8.766250&14.90928 & 12.14147\\
\hline
\multirow{5}*{Solve}
&1&0.007031 & 0.003116&0.009475 & 0.016028\\
&2&0.004608 & 0.006127&0.015663 & 0.017132\\
&3&0.021312 & 0.030866&0.060428 & 0.061735\\
&4&0.079957 & 0.133192&0.252089 & 0.305041\\
&5&0.385537 & 0.628217&1.610523 & 1.752295\\
\hline
\multirow{5}*{Total}
&1&0.080483 & 0.068825&0.100858 & 0.073188\\
&2&0.167680 & 0.163636&0.268452 & 0.219071\\
&3&0.732848 & 0.577244&1.065125 & 0.854814\\
&4&2.388623 & 2.373388&3.973021 & 3.105196\\
&5&9.555110 & 9.394467&16.51981 & 13.89377\\
\bottomrule[1.5pt]
\end{tabular}%
\end{table}
The assembling time, solving time, and total time (the sum of the assembling and solving times) for the C0IP method and the SF-C0WG method are listed in Table \ref{t4}. As can be seen, although the solving time of the C0IP method is less than that of the SF-C0WG method, the assembling time and total time of the SF-C0WG method are always smaller than those of the C0IP method.
\section{Appendix} In this section, we introduce some technical tools which are useful in the $L^2$ and $H^1$ norm error analysis.
In order to prove Lemma \ref{lem:p3}, we introduce the following two lemmas.
\begin{lemma}\label{lem:p1}
For any $K\in{\mathcal T}_h$, there holds
\begin{align}\label{pp}
\Delta_w(Q_hw) = \Delta w,\quad \forall w\in \mathbb{P}_{k+2}(K).
\end{align}
\end{lemma}
\begin{proof}
For any $w\in \mathbb{P}_{k+2}(K)$, from the definitions of $Q_0$ and $Q_n$, we have $Q_0w =w$ and $Q_n(\frac{\partial w}{\partial{\bm n}_e})=\frac{\partial w}{\partial{\bm n}_e}$. Then, for any $K\in{\mathcal T}_h$ and $v\in\mathbb{P}_{k+3}(K)$, from the definition (\ref{wl}) of the weak Laplacian, it follows that
\begin{align*}
(\Delta_wQ_hw, v)_K &= -(\nabla Q_0w, \nabla v)_K + \langle Q_n(\frac{\partial w}{\partial{\bm n}_e}){\bm n}_e\cdot{\bm n}, v\rangle_{\partial K}\\
&= -(\nabla w, \nabla v)_K + \langle \frac{\partial w}{\partial{\bm n}}, v\rangle_{\partial K}\\
&=(\Delta w, v)_K,
\end{align*}
which completes the proof.
\end{proof}
Let $\mathcal{P}_h^{k+2}: L^2({\mathcal T}_h)\rightarrow \mathbb{P}_{k+2}({\mathcal T}_h)$ be the element-wise defined $L^2$ orthogonal projection.
\begin{lemma}\label{lem:p2}
Assume $\phi\in H^{\alpha+2}(\Omega)$ with $\alpha=1,2$. Then, there holds
\begin{align}\label{p2}
\|\Delta_wQ_h(\phi-\mathcal{P}_h^{k+2}\phi)\|_{L^2({\mathcal T}_h)}
\leq Ch^{\min\{k+1,\alpha\}}|\phi|_{H^{\alpha+2}(\Omega)}.
\end{align}
\end{lemma}
\begin{proof}
For simplicity, write $w=\phi-\mathcal{P}_h^{k+2}\phi$. It follows from (\ref{key}) and the Cauchy-Schwarz inequality that
\begin{align}\label{aa2}
(\Delta_wQ_hw, v)_{{\mathcal T}_h} &= (\Delta Q_0w, v)_{{\mathcal T}_h}
+\langle Q_n(\frac{\partial w}{\partial{\bm n}})-\frac{\partial }{\partial{\bm n}}(Q_0w), v\rangle_{\partial {{\mathcal T}_h}}\nonumber\\
&\leq \|\Delta Q_0w\|_{L^2({\mathcal T}_h)}\|v\|_{L^2({\mathcal T}_h)}\nonumber\\
&\quad+(\sum_{K\in{\mathcal T}_h}h_K^{-1}\|Q_n(\frac{\partial w}{\partial{\bm n}})-\frac{\partial }{\partial{\bm n}}(Q_0w)\|_{\partial K}^2)^{1/2}
(\sum_{K\in{\mathcal T}_h}h_K\|v\|_{\partial K}^2)^{1/2}\nonumber\\
&\leq C(\|\Delta Q_0w\|_{L^2({\mathcal T}_h)}+(\sum_{K\in{\mathcal T}_h}h_K^{-1}\|Q_n(\frac{\partial w}{\partial{\bm n}})-\frac{\partial }{\partial{\bm n}}(Q_0w)\|_{\partial K}^2)^{1/2})\|v\|_{L^2({\mathcal T}_h)},
\end{align}
for any $v\in\mathbb{P}_{k+3}({\mathcal T}_h)$.
Letting $v=\Delta_wQ_hw$ in (\ref{aa2}), and then cancelling out $\|\Delta_wQ_hw\|_{L^2({\mathcal T}_h)}$ from both sides yields
\begin{align}\label{aa3}
\|\Delta_wQ_hw\|_{L^2({\mathcal T}_h)}\leq C[\|\Delta Q_0w\|_{L^2({\mathcal T}_h)}+(\sum_{K\in{\mathcal T}_h}h_K^{-1}\|Q_n(\frac{\partial w}{\partial{\bm n}})-\frac{\partial }{\partial{\bm n}}(Q_0w)\|_{\partial K}^2)^{1/2}].
\end{align}
Since the interpolant $Q_0$ preserves polynomials of degree up to $k+2$, we have
\[
Q_0(\mathcal{P}_h^{k+2}\phi)=\mathcal{P}_h^{k+2}\phi.
\]
Then, by the triangle inequality, we have
\begin{align}\label{aa4}
\|\Delta Q_0w\|_{L^2({\mathcal T}_h)}
&=\|\Delta Q_0(\phi-\mathcal{P}_h^{k+2}\phi)\|_{L^2({\mathcal T}_h)}\nonumber\\
&\leq \|\Delta (Q_0\phi-\phi)\|_{L^2({\mathcal T}_h)}+\|\Delta(\phi-\mathcal{P}_h^{k+2}\phi)\|_{L^2({\mathcal T}_h)}\nonumber\\
&\leq Ch^{\min\{k+1,\alpha\}}|\phi|_{H^{\alpha+2}(\Omega)}.
\end{align}
Since $Q_n$ and $Q_0$ preserve polynomials of degree $k+1$ and $k+2$, respectively, there holds
\begin{align*}
Q_n(\frac{\partial w}{\partial{\bm n}})-\frac{\partial }{\partial{\bm n}}(Q_0w)
&=Q_n(\frac{\partial }{\partial{\bm n}}(\phi-\mathcal{P}_h^{k+2}\phi))-
\frac{\partial }{\partial{\bm n}}(Q_0(\phi-\mathcal{P}_h^{k+2}\phi))\\
&=Q_n(\frac{\partial \phi}{\partial{\bm n}})-
\frac{\partial }{\partial{\bm n}}(Q_0\phi),
\end{align*}
which together with (\ref{z2}) of Lemma \ref{l2} leads to
\begin{align}\label{aa5}
\sum_{K\in{\mathcal T}_h}h_K^{-1}\|Q_n(\frac{\partial w}{\partial{\bm n}})-\frac{\partial }{\partial{\bm n}}(Q_0w)\|_{\partial K}^2
&=\sum_{K\in{\mathcal T}_h}h_K^{-1}\|Q_n(\frac{\partial \phi}{\partial{\bm n}})-
\frac{\partial }{\partial{\bm n}}(Q_0\phi)\|_{\partial K}^2\nonumber\\
&\leq Ch^{2\min\{k+1,\alpha\}}|\phi|_{H^{\alpha+2}(\Omega)}^2.
\end{align}
Combining the estimates of (\ref{aa3}), (\ref{aa4}) and (\ref{aa5}) completes the proof of (\ref{p2}).
\end{proof}
Now, we are ready to give the proof of Lemma \ref{lem:p3} below.
\begin{proof}
In view of (\ref{pp}) of Lemma \ref{lem:p1}, we have
\[\Delta_w Q_h(\mathcal{P}_h^{k+2}\phi)=\Delta(\mathcal{P}_h^{k+2}\phi)\]
on each element $K$ of ${\mathcal T}_h$.
If $\alpha>k$, we have $|\Delta(\mathcal{P}_h^{k+2}\phi)|_{H^{\alpha}({\mathcal T}_h)}=0$ since $\Delta(\mathcal{P}_h^{k+2}\phi)\in\mathbb{P}_k({\mathcal T}_h)$. Therefore, by the triangle inequality, we have
\begin{align}\label{aa1}
|\Delta_w(Q_h\phi)|_{H^{\alpha}({\mathcal T}_h)}
&=|\Delta_wQ_h(\phi-\mathcal{P}_h^{k+2}\phi)+\Delta(\mathcal{P}_h^{k+2}\phi)|_{H^{\alpha}({\mathcal T}_h)}\nonumber\\
&\leq|\Delta_wQ_h(\phi-\mathcal{P}_h^{k+2}\phi)|_{H^{\alpha}({\mathcal T}_h)}
+|\Delta(\mathcal{P}_h^{k+2}\phi)|_{H^{\alpha}({\mathcal T}_h)}\nonumber\\
&=|\Delta_wQ_h(\phi-\mathcal{P}_h^{k+2}\phi)|_{H^{\alpha}({\mathcal T}_h)}.
\end{align}
Then, from the inverse inequality, (\ref{aa1}) and (\ref{p2}) of Lemma \ref{lem:p2}, it follows that
\begin{align*}
|\Delta_w(Q_h\phi)|_{H^{\alpha}({\mathcal T}_h)}
&\leq Ch^{-\alpha}\|\Delta_wQ_h(\phi-\mathcal{P}_h^{k+2}\phi)\|_{L^2({\mathcal T}_h)}\nonumber\\
&\leq Ch^{\min\{k+1-\alpha,0\}}|\phi|_{H^{\alpha+2}(\Omega)}.
\end{align*}
If $\alpha\leq k$, from the triangle inequality, the inverse inequality and (\ref{p2}) of Lemma \ref{lem:p2}, we can infer that
\begin{align*}
&|\Delta_w(Q_h\phi)|_{H^{\alpha}({\mathcal T}_h)}\nonumber\\
&\leq |\Delta_wQ_h(\phi-\mathcal{P}_h^{k+2}\phi)|_{H^{\alpha}({\mathcal T}_h)}
+|\Delta(\phi-\mathcal{P}_h^{k+2}\phi)|_{H^{\alpha}({\mathcal T}_h)}
+|\Delta\phi|_{H^{\alpha}({\mathcal T}_h)}
\nonumber\\
&\leq Ch^{-\alpha}\|\Delta_wQ_h(\phi-\mathcal{P}_h^{k+2}\phi)\|_{L^2({\mathcal T}_h)}
+Ch^{\min\{k+1-\alpha,0\}}|\phi|_{H^{\alpha+2}(\Omega)}\nonumber\\
&\leq Ch^{\min\{k+1-\alpha,0\}}|\phi|_{H^{\alpha+2}(\Omega)}.
\end{align*}
Therefore, in all cases, we have
\begin{align*}
|\Delta_w(Q_h\phi)|_{H^{\alpha}({\mathcal T}_h)}
\leq Ch^{\min\{k+1-\alpha,0\}}|\phi|_{H^{\alpha+2}(\Omega)},
\end{align*}
as desired.
\end{proof}
| {
"timestamp": "2022-01-21T02:12:25",
"yymm": "2201",
"arxiv_id": "2201.08062",
"language": "en",
"url": "https://arxiv.org/abs/2201.08062",
"abstract": "In this article, we present and analyze a stabilizer-free $C^0$ weak Galerkin (SF-C0WG) method for solving the biharmonic problem. The SF-C0WG method is formulated in terms of cell unknowns which are $C^0$ continuous piecewise polynomials of degree $k+2$ with $k\\geq 0$ and in terms of face unknowns which are discontinuous piecewise polynomials of degree $k+1$. The formulation of this SF-C0WG method is without the stabilized or penalty term and is as simple as the $C^1$ conforming finite element scheme of the biharmonic problem. Optimal order error estimates in a discrete $H^2$-like norm and the $H^1$ norm for $k\\geq 0$ are established for the corresponding WG finite element solutions. Error estimates in the $L^2$ norm are also derived with an optimal order of convergence for $k>0$ and sub-optimal order of convergence for $k=0$. Numerical experiments are shown to confirm the theoretical results.",
"subjects": "Numerical Analysis (math.NA)",
"title": "A stabilizer-free $C^0$ weak Galerkin method for the biharmonic equations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9875683465856102,
"lm_q2_score": 0.7185943865443349,
"lm_q1q2_score": 0.7096610701852897
} |
https://arxiv.org/abs/1507.02654 | Ranges of Unitary Divisor Functions | For any real $t$, the unitary divisor function $\sigma_t^*$ is the multiplicative arithmetic function defined by $\sigma_t^*(p^{\alpha})=1+p^{\alpha t}$ for all primes $p$ and positive integers $\alpha$. Let $\overline{\sigma_t^*(\mathbb N)}$ denote the topological closure of the range $\sigma_t^*$. We calculate an explicit constant $\eta^*\approx 1.9742550$ and show that $\overline{\sigma_{-r}^*(\mathbb N)}$ is connected if and only if $r\in(0,\eta^*]$. We end with an open problem. | \section{Introduction}
For any $c\in\mathbb C$, the divisor function $\sigma_c$ is defined by $\sigma_c(n)=\sum_{d\mid n}d^c$. Divisor functions, especially $\sigma_1,\sigma_0$, and $\sigma_{-1}$, are among the most extensively-studied arithmetic functions \cite{Apostol, Hardy, Mitrinovic}. For example, two very classical number-theoretic topics are the study of perfect numbers and the study of friendly numbers. A positive integer $n$ is said to be \emph{perfect} if $\sigma_{-1}(n)=2$, and $n$ is said to be \emph{friendly} if there exists $m\neq n$ with $\sigma_{-1}(m)=\sigma_{-1}(n)$ \cite{Pollack}. Motivated by the very difficult problems related to perfect and friendly numbers, Laatsch studied $\sigma_{-1}(\mathbb N)$, the range of $\sigma_{-1}$. He showed that $\sigma_{-1}(\mathbb N)$ is a dense subset of the interval $[1,\infty)$ and asked if $\sigma_{-1}(\mathbb N)$ is in fact equal to the set $\mathbb Q\cap[1,\infty)$ \cite{Laatsch86}. Weiner answered this question in the negative, showing that $(\mathbb Q\cap[1,\infty))\setminus\sigma_{-1}(\mathbb N)$ is also dense in $[1,\infty)$ \cite{Weiner}.
The author has studied ranges of divisor functions in a variety of contexts \cite{Defant,Defant2,Defant3,Defant4}.
In this paper, we study the close relatives of the divisor functions known as unitary divisor functions. A \emph{unitary divisor} of an integer $n$ is a divisor $d$ of $n$ such that $\gcd(d,n/d)=1$. The unitary divisor function $\sigma_c^*$ is defined by \cite{Alladi, Cohen, Guy} \[\sigma_c^*(n)=\sum_{\substack{d\mid n \\ \gcd(d,n/d)=1}}d^c.\] The function $\sigma_c^*$ is multiplicative and satisfies $\sigma_c^*(p^{\alpha})=1+p^{\alpha c}$ for all primes $p$ and positive integers $\alpha$.
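For concreteness, here is a short Python sketch of $\sigma_t^*$ for integer $t$, using exact rational arithmetic (real exponents $t$ would require floating point); the function name and interface are our own choices.

```python
from fractions import Fraction

def unitary_sigma(t, n):
    """sigma_t^*(n) = product over maximal prime powers p^a || n of (1 + p^(a*t))."""
    result = Fraction(1)
    p = 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            result *= 1 + Fraction(p) ** (a * t)
        p += 1
    if n > 1:  # at most one prime factor above sqrt(original n) remains
        result *= 1 + Fraction(n) ** t
    return result

# sigma_{-1}^*(12): the unitary divisors of 12 are 1, 3, 4, 12,
# so the value is 1 + 1/3 + 1/4 + 1/12 = 5/3.
print(unitary_sigma(-1, 12))  # → 5/3
```

For example, $\sigma_0^*$ counts unitary divisors, so `unitary_sigma(0, 60)` returns $2^3=8$ since $60=2^2\cdot 3\cdot 5$ has three maximal prime-power factors.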
If $t\in[-1,0)$, then one may use the same argument that Laatsch employed in \cite{Laatsch86} in order to show that $\overline{\sigma_{t}^*(\mathbb N)}=[1,\infty)$. Here, the overline denotes the topological closure. In particular, $\overline{\sigma_{t}^*(\mathbb N)}$ is connected if $t\in [-1,0)$. On the other hand, $\overline{\sigma_{t}^*(\mathbb N)}$ is a discrete disconnected set if $t\geq 0$ (this follows from a simple modification of the proof of Theorem 2.2 in \cite{Defant}). The purpose of this paper is to prove the following theorem. Let $\zeta$ denote the Riemann zeta function.
\begin{theorem} \label{Thm2.3}
Let $\eta^*$ be the unique number in the interval $(1,2]$ that satisfies the equation
\begin{equation}\label{Eq1}
\frac{2^{\eta^*}+1}{2^{\eta^*}}\cdot\frac{(3^{\eta^*}+1)^2}{3^{2\eta^*}+1}=\frac{\zeta(\eta^*)}{\zeta(2\eta^*)}.
\end{equation}
If $r\in\mathbb R$, then $\overline{\sigma_{-r}^*(\mathbb N)}$ is connected if and only if $r\in(0,\eta^*]$.
\end{theorem}
\begin{remark}
In the process of proving Theorem \ref{Thm2.3}, we will show that there is indeed a unique solution to the equation \eqref{Eq1} in the interval $(1,2]$.
\end{remark}
In all that follows, we assume $r>1$ and study $\sigma_{-r}^*(\mathbb N)$. We first observe that $\sigma_{-r}^*(\mathbb N)\subseteq\displaystyle{\left[1,\zeta(r)/\zeta(2r)\right)}$. This is because if $q_1^{\beta_1}\cdots q_v^{\beta_v}$ is the prime factorization of some positive integer, then
\[\sigma_{-r}^*(q_1^{\beta_1}\cdots q_v^{\beta_v})=\prod_{i=1}^v\sigma_{-r}^*(q_i^{\beta_i})=\prod_{i=1}^v\left(1+q_i^{-\beta_i r}\right)\leq\prod_{i=1}^v\left(1+q_i^{-r}\right)<\prod_{p}\left(1+p^{-r}\right)\] \[=\prod_{p}\left(\frac{1-p^{-2r}}{1-p^{-r}}\right)=\frac{\zeta(r)}{\zeta(2r)}.\]
It is straightforward to show that $1$ and $\zeta(r)/\zeta(2r)$ are elements of $\overline{\sigma_{-r}^*(\mathbb N)}$. Therefore, Theorem \ref{Thm2.3} tells us that $\overline{\sigma_{-r}^*(\mathbb N)}=\left[1,\zeta(r)/\zeta(2r)\right]$ if and only if $r\in(0,\eta^*]$.
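The strict upper bound $\zeta(r)/\zeta(2r)$ is easy to check numerically; the floating-point sketch below (our own code) verifies it for $r=2$, where $\zeta(2)/\zeta(4)=(\pi^2/6)/(\pi^4/90)=15/\pi^2$, over all $n\leq 10^4$.

```python
import math

def usigma_neg(r, n):
    # sigma_{-r}^*(n) = product over maximal prime powers p^a || n of (1 + p^(-a*r))
    value, p = 1.0, 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            value *= 1 + p ** (-a * r)
        p += 1
    if n > 1:
        value *= 1 + n ** (-r)
    return value

bound = 15 / math.pi ** 2  # zeta(2)/zeta(4)
assert all(usigma_neg(2, n) < bound for n in range(1, 10001))
print(f"max value found: {max(usigma_neg(2, n) for n in range(1, 10001)):.6f} < {bound:.6f}")
```

The maximum over this range occurs at squarefree products of small primes (here $n=2310=2\cdot3\cdot5\cdot7\cdot11$), in line with the chain of inequalities above.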
\section{Proofs}
In what follows, let $p_i$ denote the $i^\text{th}$ prime number. Let $\nu_p(x)$ denote the exponent of the prime $p$ appearing in the prime factorization
of the integer $x$.
To start, we need the following technical yet simple lemma.
\begin{lemma} \label{Lem2.1}
If $s,m\in\mathbb{N}$ and $s\leq m$, then $\displaystyle{\frac{p_s^{2r}+1}{p_s^{2r}+p_s^r}\leq\frac{p_m^{2r}+1}{p_m^{2r}+p_m^r}}$ for all $r>1$.
\end{lemma}
\begin{proof}
Fix some $r>1$, and write $\displaystyle{h(x)=\frac{x^{2r}+1}{x^{2r}+x^r}}$. Then
\[h'(x)=\frac{r}{x(x^r+1)^2}\left(x^r-2-\frac{1}{x^r}\right).\] We see that $h(x)$ is increasing when $x\geq 3$. Hence, in order to complete the proof, it suffices to show that $h(2)\leq h(3)$. For $r>1$, we have $2^{2r}3^r+3^{2r}+3^r<2^r3^{2r}+2^{2r}+2^r$ (in fact, equality occurs if we set $r=1$), so $(2^{2r}+1)(3^{2r}+3^r)<(2^{2r}+2^r)(3^{2r}+1)$. This shows that $\displaystyle{\frac{2^{2r}+1}{2^{2r}+2^r}<\frac{3^{2r}+1}{3^{2r}+3^r}}$, which completes the proof.
\end{proof}
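The monotonicity claim of Lemma \ref{Lem2.1} admits a quick numerical spot-check; this small script (ours) evaluates $h$ at the first few primes for several exponents $r>1$ and confirms that the sequence $h(p_1),h(p_2),\ldots$ is nondecreasing.

```python
def h(x, r):
    # h(x) = (x^(2r) + 1) / (x^(2r) + x^r), as in the proof of Lemma 2.1
    return (x ** (2 * r) + 1) / (x ** (2 * r) + x ** r)

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
for r in (1.01, 1.5, 2.0, 3.0):
    values = [h(p, r) for p in primes]
    assert all(a <= b for a, b in zip(values, values[1:])), r
print("h is nondecreasing along the primes for the sampled r > 1")
```

Note that at $r=1$ one gets $h(2)=h(3)=5/6$, matching the equality case mentioned in the proof.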
The following theorem replaces the question of whether or not $\overline{\sigma_{-r}^*(\mathbb N)}$ is connected with a question concerning infinitely many inequalities. The advantage in doing this is that we will further reduce this problem to the consideration of a finite list of inequalities in Theorem \ref{Thm2.2}. Recall from the introduction that $\overline{\sigma_{-r}^*(\mathbb N)}$ is connected if and only if it is equal to the interval $[1,\zeta(r)/\zeta(2r)]$.
\begin{theorem} \label{Thm2.1}
If $r>1$, then $\overline{\sigma_{-r}^*(\mathbb N)}=\displaystyle{\left[1,\zeta(r)/\zeta(2r)\right]}$ if and only if \[\frac{p_m^{2r}+p_m^r}{p_m^{2r}+1}\leq\prod_{i=m+1}^{\infty}\left(1+\frac{1}{p_i^r}\right)\] for all positive integers $m$.
\end{theorem}
\begin{proof}
First, suppose that $\displaystyle{\frac{p_m^{2r}+p_m^r}{p_m^{2r}+1}\leq\prod_{i=m+1}^{\infty}\left(1+\frac{1}{p_i^r}\right)}$ for all positive integers $m$. We will show that the range of $\log\sigma_{-r}^*$ is dense in $\displaystyle{\left[0,\log\left(\zeta(r)/\zeta(2r)\right)\right)}$, which will then imply that the range of $\sigma_{-r}^*$ is dense in $\displaystyle{\left[1,\zeta(r)/\zeta(2r)\right)}$. Fix some $\displaystyle{x\in\left(0,\log\left(\zeta(r)/\zeta(2r)\right)\right)}$. We will construct a sequence $(C_i)_{i=1}^{\infty}$ of elements of the range of $\log\sigma_{-r}^*$ that converges to $x$. First, let $C_0=0$. For each positive integer $n$, if $C_{n-1}<x$, let
$\displaystyle{C_n=C_{n-1}+\log\left(1+p_n^{-\alpha_n r}\right)}$, where $\alpha_n$ is the smallest positive integer that satisfies $\displaystyle{C_{n-1}+\log\left(1+p_n^{-\alpha_n r}\right)\leq x}$. If $C_{n-1}=x$, simply set $C_n=C_{n-1}=x$. For each $n\in\mathbb{N}$, $C_n\in\log\sigma_{-r}^*(\mathbb N)$. Indeed, if $C_n\neq C_{n-1}$, then
\[C_n=\sum_{i=1}^n \log\left(1+p_i^{-\alpha_i r}\right)=\log\left(\prod_{i=1}^n\left(1+p_i^{-\alpha_i r}\right)\right)=\log\sigma_{-r}^*\left(\prod_{i=1}^np_i^{\alpha_i}\right).\] If, however, $C_n=C_{n-1}=x$, then we may let $l$ be the smallest positive integer such that $C_l=x$ and show, in the same manner as above, that $\displaystyle{C_n=C_l=\log\sigma_{-r}^*\left(\prod_{i=1}^lp_i^{\alpha_i}\right)}$. Let us write $\displaystyle{\gamma=\lim_{n\rightarrow\infty}C_n}$. Note that $\gamma$ exists and that $\gamma\leq x$ because the sequence $(C_i)_{i=1}^\infty$ is nondecreasing and bounded above by $x$. If we can show that $\gamma=x$, then we will be done. Therefore, let us assume instead that $\gamma<x$.
We have $C_n=C_{n-1}+\log(1+p_n^{-\alpha_n r})$ for all positive integers $n$. Write
$D_n=\log(1+p_n^{-r})-\log(1+p_n^{-\alpha_n r})$ and $\displaystyle{E_n=\sum_{i=1}^n D_i}$. As
\[x+\lim_{n\to\infty}E_n>\gamma+\lim_{n\to\infty}E_n=\lim_{n\rightarrow\infty}(C_n+E_n)=\lim_{n\rightarrow\infty}\left(\sum_{i=1}^n\log\left(1+p_i^{-\alpha_i r}\right)+\sum_{i=1}^n D_i\right)\] \[=\lim_{n\rightarrow\infty}\sum_{i=1}^n\log\left(1+p_i^{-r}\right)=\log\left(\zeta(r)/\zeta(2r)\right),\] we have $\displaystyle{\lim_{n\rightarrow\infty}E_n>\log\left(\zeta(r)/\zeta(2r)\right)-x}$. Therefore, we may let $m$ be the smallest positive integer such that $\displaystyle{E_m>\log\left(\zeta(r)/\zeta(2r)\right)-x}$. If $\alpha_m=1$ and $m>1$, then $D_m=0$. This forces $\displaystyle{E_{m-1}=E_m>\log\left(\zeta(r)/\zeta(2r)\right)-x}$, contradicting the minimality of $m$. If $\alpha_m=1$ and $m=1$, then $\displaystyle{0=E_m>\log\left(\zeta(r)/\zeta(2r)\right)-x}$, which is also a contradiction since we originally chose $x<\log(\zeta(r)/\zeta(2r))$. Therefore, $\alpha_m>1$. Due to the way we defined $C_m$ and $\alpha_m$, we have
$\displaystyle{C_{m-1}+\log\left(1+p_m^{-(\alpha_{m}-1)r}\right)>x}$. Hence,
\[\log\left(1+p_m^{-(\alpha_{m}-1)r}\right)-\log\left(1+p_m^{-\alpha_m r}\right)>x-C_m.\] Using our original assumption that
$\displaystyle{\frac{p_m^{2r}+p_m^r}{p_m^{2r}+1}\leq\prod_{i=m+1}^{\infty}\left(1+\frac{1}{p_i^r}\right)}$, we have
\[\log\left(\frac{p_m^{2r}+p_m^r}{p_m^{2r}+1}\right)\leq\sum_{i=m+1}^{\infty}\log\left(1+\frac{1}{p_i^r}\right)=\log\left(\frac{\zeta(r)}{\zeta(2r)}\right)-E_m-C_m\]
\[<x-C_m<\log\left(1+p_m^{-(\alpha_{m}-1)r}\right)-\log\left(1+p_m^{-\alpha_m r}\right)=\log\left(\frac{p_m^{\alpha_m r}+p_m^r}{p_m^{\alpha_m r}+1}\right).\]
Thus,
\[\frac{p_m^{2r}+p_m^r}{p_m^{2r}+1}<\frac{p_m^{\alpha_m r}+p_m^r}{p_m^{\alpha_m r}+1}.\]
Rewriting this inequality, we get $\displaystyle{p_m^{2r}+p_m^{(\alpha_m+1)r}<p_m^{3r}+p_m^{\alpha_m r}}$.
Now, dividing through by $p_m^{\alpha_m r}$ yields $\displaystyle{p_m^{(2-\alpha_m)r}+p_m^r<1+p_m^{(3-\alpha_m)r}}$, which is impossible since
$\alpha_m\geq 2$. This contradiction proves that $\gamma=x$, so $\overline{\sigma_{-r}^*(\mathbb N)}=\left[1,\zeta(r)/\zeta(2r)\right]$.
To prove the converse, suppose there exists some positive integer $m$ such that \[\frac{p_m^{2r}+p_m^r}{p_m^{2r}+1}>\prod_{i=m+1}^{\infty}\left(1+\frac{1}{p_i^r}\right).\] We may write this inequality as
\begin{equation} \label{EqEdit1}
\frac{p_m^{2r}+1}{p_m^{2r}+p_m^r}<\prod_{i=m+1}^{\infty}\left(1+\frac{1}{p_i^r}\right)^{-1}.
\end{equation}
Fix a positive integer $N$. If
$\nu_{p_s}(N)=1$ for all $s\in\{1,2,\ldots,m\}$, then \[\sigma_{-r}^*(N)\geq\prod_{s=1}^m\left(1+\frac{1}{p_s^r}\right)=\frac{\zeta(r)}{\zeta(2r)}\prod_{i=m+1}^{\infty}\left(1+\frac{1}{p_i^r}\right)^{-1}.\]
On the other hand, if $\nu_{p_s}(N)\neq 1$ for some $s\in\{1,2,\ldots,m\}$, then
$\displaystyle{\sigma_{-r}^*\left(p_s^{\nu_{p_s}(N)}\right)\leq}$ $\displaystyle{1+\frac{1}{p_s^{2r}}}$. This implies that
\[\sigma_{-r}^*(N)\leq\left(1+\frac{1}{p_s^{2r}}\right)\prod_{\substack{i=1 \\ i\neq s}}^{\infty}\left(1+\frac{1}{p_i^r}\right)=\frac{\zeta(r)}{\zeta(2r)}\frac{1+p_s^{-2r}}{1+p_s^{-r}}=\frac{\zeta(r)}{\zeta(2r)}\frac{p_s^{2r}+1}{p_s^{2r}+p_s^r}\] in this case.
Using Lemma \ref{Lem2.1}, we have
\[\sigma_{-r}^*(N)\leq\frac{\zeta(r)}{\zeta(2r)}\frac{p_m^{2r}+1}{p_m^{2r}+p_m^r}.\]
As $N$ was arbitrary, we have shown that there is no element of the range of $\sigma_{-r}^*$ in the interval \[\left(\frac{\zeta(r)}{\zeta(2r)}\frac{p_m^{2r}+1}{p_m^{2r}+p_m^r},\frac{\zeta(r)}{\zeta(2r)}\prod_{i=m+1}^{\infty}\left(1+\frac{1}{p_i^r}\right)^{-1}\right).\] This interval is a gap in the range of $\sigma_{-r}^*$ because of the inequality \eqref{EqEdit1}.
\end{proof}
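The greedy construction of the sequence $(C_i)$ in the first half of the proof is completely explicit, and it is instructive to run it; the following sketch (our own code, with a truncated prime list standing in for the infinite one) approximates a target $x\in(0,\log(\zeta(r)/\zeta(2r)))$ for $r=1.5<\eta^*$.

```python
import math

def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, flag in enumerate(sieve) if flag]

def greedy_log_approx(x, r, prime_limit=10 ** 4):
    """Greedily build C_n = C_{n-1} + log(1 + p_n^(-alpha_n r)), with alpha_n
    the least positive integer keeping C_n <= x, as in the proof of Theorem 2.1."""
    c = 0.0
    for p in primes_up_to(prime_limit):
        alpha = 1
        term = math.log1p(p ** (-alpha * r))
        while c + term > x:
            alpha += 1
            term = math.log1p(p ** (-alpha * r))
        c += term
    return c

x = 0.3  # any target below log(zeta(1.5)/zeta(3)) ~ 0.776 works
c = greedy_log_approx(x, 1.5)
print(f"target {x}, greedy value {c:.10f}, gap {x - c:.2e}")
```

With $1229$ primes available, the remaining gap $x-C_n$ is already far below the resolution visible in a plot, illustrating why the limit $\gamma$ equals $x$ when $r\leq\eta^*$.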
As mentioned above, we wish to reduce the task of checking the infinite collection of inequalities given in Theorem \ref{Thm2.1} to that of checking finitely many inequalities. We do so in Theorem \ref{Thm2.2}, the proof of which requires the following lemma.
\begin{lemma} \label{Lem2.2}
If $j\in\mathbb{N}\backslash\{1,2,3,4,6,9\}$, then $\displaystyle{\frac{p_{j+1}}{p_j}<\sqrt[3]{2}}$.
\end{lemma}
\begin{proof}
A simple manipulation of the corollary to Theorem 3 in \cite{Rosser61} shows that \[\frac{p_{j+1}}{p_j}<\frac{(j+1)(\log(j+1)+\log\log(j+1))}{j\log j}\] for all integers $j\geq 6$. It is easy to verify that $\displaystyle{\frac{(j+1)(\log(j+1)+\log\log(j+1))}{j\log j}}<\sqrt[3]{2}$ for all $j\geq 3100$. Therefore, the desired result holds for $j\geq 3100$. A quick search through the values of $\displaystyle{\frac{p_{j+1}}{p_j}}$ for $j<3100$ yields the desired result.
\end{proof}
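Lemma \ref{Lem2.2} is exactly the kind of statement one can re-verify by machine; the sketch below (ours) recomputes the exceptional set of indices $j<3100$ with $p_{j+1}/p_j\geq\sqrt[3]{2}$.

```python
def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, flag in enumerate(sieve) if flag]

ps = primes_up_to(32000)  # more than the first 3100 primes
assert len(ps) > 3100
cbrt2 = 2.0 ** (1.0 / 3.0)
# ps[j-1] is the j-th prime p_j (1-indexed, as in the paper)
exceptions = [j for j in range(1, 3100) if ps[j] / ps[j - 1] >= cbrt2]
print(exceptions)
```

The tightest case is $j=9$, where $p_{10}/p_9=29/23\approx 1.26087$ only just exceeds $\sqrt[3]{2}\approx 1.25992$.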
\begin{theorem} \label{Thm2.2}
If $r\in(1,3]$, then $\overline{\sigma_{-r}^*(\mathbb N)}=\left[1,\zeta(r)/\zeta(2r)\right]$ if and only if \[\frac{p_m^{2r}+p_m^r}{p_m^{2r}+1}\leq\prod_{i=m+1}^{\infty}\left(1+\frac{1}{p_i^r}\right)\] for all $m\in\{1,2,3,4,6,9\}$.
\end{theorem}
\begin{proof}
Let \[F(m,r)=\frac{p_m^{2r}+p_m^r}{p_m^{2r}+1}\prod_{i=1}^m\left(1+\frac{1}{p_i^r}\right)\] so that the inequality $\displaystyle{\frac{p_m^{2r}+p_m^r}{p_m^{2r}+1}\leq\prod_{i=m+1}^{\infty}\left(1+\frac{1}{p_i^r}\right)}$ is equivalent to $\displaystyle{F(m,r)\leq\frac{\zeta(r)}{\zeta(2r)}}$. Let $r\in(1,3]$. By Theorem \ref{Thm2.1}, it suffices to show that if $\displaystyle{F(m,r)\leq\frac{\zeta(r)}{\zeta(2r)}}$ for all $m\in\{1,2,3,4,6,9\}$, then $\displaystyle{F(m,r)\leq\frac{\zeta(r)}{\zeta(2r)}}$ for all $m\in\mathbb{N}$. Therefore, assume that $r$ is such that $F(m,r)\leq\dfrac{\zeta(r)}{\zeta(2r)}$ for all $m\in\{1,2,3,4,6,9\}$.
We will show that $F(m+1,r)>F(m,r)$ for all $m\in\mathbb N\setminus\{1,2,3,4,6,9\}$. This will show that $(F(m,r))_{m=10}^{\infty}$ is an increasing sequence. As
$\displaystyle{\lim_{m\rightarrow\infty}F(m,r)=}$ $\displaystyle{\frac{\zeta(r)}{\zeta(2r)}}$, it will then follow that $\displaystyle{F(m,r)<\frac{\zeta(r)}{\zeta(2r)}}$ for all integers $m\geq 10$. Furthermore, we will see that
$F(5,r)<F(6,r)\leq\dfrac{\zeta(r)}{\zeta(2r)}$ and $F(7,r)<F(8,r)<\displaystyle{F(9,r)\leq\frac{\zeta(r)}{\zeta(2r)}}$, which will complete the proof.
Let $m\in\mathbb{N}\backslash\{1,2,3,4,6,9\}$. By Lemma \ref{Lem2.2},
$\dfrac{p_{m+1}}{p_m}<\sqrt[3]{2}\leq \sqrt[r]{2}$. This shows that $p_{m+1}^r<2p_m^r$, implying that $2p_m^{2r}>p_m^r p_{m+1}^r$. Therefore,
\[2p_m^{2r}+2>p_m^r p_{m+1}^r+\frac{p_m^r}{p_{m+1}^r}-p_{m+1}^r-\frac{1}{p_{m+1}^r}=\frac{(p_m^r-1)(p_{m+1}^{2r}+1)}{p_{m+1}^r}.\]
Multiplying each side of this inequality by $\displaystyle{\frac{p_{m+1}^r}{(p_{m+1}^{2r}+1)(p_m^{2r}+1)}}$ and adding $1$ to each side, we get
\[1+\frac{2p_{m+1}^r}{p_{m+1}^{2r}+1}>1+\frac{p_m^r-1}{p_m^{2r}+1},\]
which we may write as
\[\frac{(p_{m+1}^r+1)^2}{p_{m+1}^{2r}+1}>\frac{p_m^{2r}+p_m^r}{p_m^{2r}+1}.\] Finally, we get
\[F(m+1,r)=\frac{p_{m+1}^{2r}+p_{m+1}^r}{p_{m+1}^{2r}+1}\prod_{i=1}^{m+1}\left(1+\frac{1}{p_i^r}\right)=\frac{(p_{m+1}^r+1)^2}{p_{m+1}^{2r}+1}\prod_{i=1}^m\left(1+\frac{1}{p_i^r}\right)\]
\[>\frac{p_m^{2r}+p_m^r}{p_m^{2r}+1}\prod_{i=1}^m\left(1+\frac{1}{p_i^r}\right)=F(m,r).\qedhere\]
\end{proof}
Now, let \[V_m(r)=\log\left(\frac{p_m^{2r}+p_m^r}{p_m^{2r}+1}\right)-\sum_{i=m+1}^\infty\log\left(1+\frac{1}{p_i^r}\right).\] Equivalently, $\displaystyle{V_m(r)=\log(F(m,r))-\log\left(\frac{\zeta(r)}{\zeta(2r)}\right)}$, where $F$ is the function defined in the proof of Theorem \ref{Thm2.2}. Observe that \[\frac{p_m^{2r}+p_m^r}{p_m^{2r}+1}\leq\prod_{i=m+1}^{\infty}\left(1+\frac{1}{p_i^r}\right)\] if and only if $V_m(r)\leq 0$. If we let $J_m(r)=\displaystyle{\sum_{i=m+1}^{m+6}\frac{1}{p_i^r+1}-\frac{p_m^{2r}-2p_m^r-1}{(p_m^r+1)(p_m^{2r}+1)}}$, then we have \[\frac{\partial}{\partial r}J_m(r)=\frac{p_m^r((p_m^r-1)^4-12p_m^{2r})\log p_m}{(p_m^{r}+1)^2(p_m^{2r}+1)^2}-\sum_{i=m+1}^{m+6}\frac{p_i^r\log p_i}{(p_i^r+1)^2}.\] It is not difficult to verify that $\displaystyle{\frac{p_m^r((p_m^r-1)^4-12p_m^{2r})\log p_m}{(p_m^{r}+1)^2(p_m^{2r}+1)^2}}\geq -1$ for all $r\in[1,2]$ and $m\in\{1,2,3,4,6,9\}$. Therefore, when $r\in[1,2]$ and $m\in\{1,2,3,4,6,9\}$, we have \[\frac{\partial}{\partial r}J_m(r)\geq -1-\sum_{i=m+1}^{m+6}\frac{p_i^r\log p_i}{(p_i^r+1)^2}\geq-1-\sum_{i=m+1}^{m+6}\frac{\log p_i}{p_i^r}> -7.\] Numerical calculations show that $\displaystyle{J_m(r)>\frac{1}{400}}$ for all $m\in\{1,2,3,4,6,9\}$ and \[r\in\left\{1+\frac{n}{2800}\colon n\in\{0,1,2,\ldots,2800\}\right\}.\] Because each function $J_m$ is continuous in $r$ for $r\in[1,2]$, we see that \[J_m(r)>\frac{1}{400}-7\left(\frac{1}{2800}\right)=0\] for all $r\in[1,2]$ and $m\in\{1,2,3,4,6,9\}$.
We introduced the functions $J_m$ so that we could write
\[\frac{\partial}{\partial r}V_m(r)=\sum_{i=m+1}^{\infty}\frac{\log p_i}{p_i^r+1}-\frac{(p_m^{2r}-2p_m^r-1)\log p_m}{(p_m^r+1)(p_m^{2r}+1)}>(\log p_m)J_m(r)>0\] for all $m\in\{1,2,3,4,6,9\}$ and $r\in[1,2]$.
A quick numerical calculation shows that $V_2(1.5)<0<V_2(2)$, so the function $V_2$ has exactly one root, which we will call $\eta^*$, in the interval $(1,2]$. Further calculations show that $V_m(2)<0$ for all $m\in\{1,3,4,6,9\}$. Hence, $V_m(r)\leq 0$ for all $m\in\{1,2,3,4,6,9\}$ and $r\in(1,\eta^*]$. By Theorem \ref{Thm2.2}, this means that if $r\in(1,2]$, then $\overline{\sigma_{-r}^*(\mathbb N)}=\left[1,\zeta(r)/\zeta(2r)\right]$ if and only if $r\leq\eta^*$.
Next, note that \[\frac{\partial}{\partial r}V_2(r)=\sum_{i=3}^{\infty}\frac{\log p_i}{p_i^r+1}-\frac{(3^{2r}-2\cdot 3^r-1)\log 3}{(3^{2r}+1)(3^r+1)}>-\frac{(3^{2r}-2\cdot 3^r-1)\log 3}{(3^{2r}+1)(3^r+1)}\]
\[>-\frac{(3^{2r}+1)\log 3}{(3^{2r}+1)(3^r+1)}\geq -\frac{\log 3}{3^2+1}>-1.1\]
for all $r\in[2,3]$. Let $\displaystyle{A=\left\{2+\frac{n}{400}\colon n\in\{0,1,2,\ldots,400\}\right\}}$. With a computer program, one may verify that $V_2(r)>0.003$ for all $r\in A$. Because $V_2$ is continuous, this shows that $V_2(r)>0.003-1.1\displaystyle{\left(\frac{1}{400}\right)}>0$ for all $r\in [2,3]$. Consequently, $\overline{\sigma_{-r}^*(\mathbb N)}\neq\displaystyle{\left[1,\zeta(r)/\zeta(2r)\right]}$ if $r\in[2,3]$.
We are now in a position to prove Theorem \ref{Thm2.3}. Note that the equation defining $\eta^*$ in the statement of this theorem is simply a rearrangement of the equation $V_2(\eta^*)=0$. Therefore, we have shown that the theorem is true for $r\in(1,3]$. In order to prove the theorem for $r>3$, it suffices (by Theorem \ref{Thm2.1}) to show that $\displaystyle{F(1,r)>\frac{\zeta(r)}{\zeta(2r)}}$ for all $r>3$. If $r>3$, then
\[F(1,r)=\frac{(2^r+1)^2}{2^{2r}+1}=\frac{2^{2r}+2^{r+1}+1}{2^{2r}+1}>\frac{2^{2r}+2^r+\frac{2^{r+1}}{r-1}}{2^{2r}+1}=\frac{1+\frac{1}{2^r}+\frac{1}{(r-1)2^{r-1}}}{1+\frac{1}{2^{2r}}}\]
\[>\frac{1+\frac{1}{2^r}+{\frac{1}{(r-1)2^{r-1}}}}{\zeta(2r)}=\frac{1+\frac{1}{2^r}+\int_2^{\infty}x^{-r}dx}{\zeta(2r)}>\frac{\zeta(r)}{\zeta(2r)}.\]
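Equation \eqref{Eq1} pins down $\eta^*$ numerically; the sketch below (our code; the Euler-Maclaurin tail correction for $\zeta$ is an implementation choice, not taken from the paper) solves it by bisection and recovers $\eta^*\approx 1.9742550$.

```python
def zeta(s, n=2000):
    # truncated Dirichlet series plus an Euler-Maclaurin tail correction;
    # the truncation error is far below the accuracy needed here
    return (sum(k ** -s for k in range(1, n)) + n ** -s / 2
            + n ** (1 - s) / (s - 1) + s * n ** (-s - 1) / 12)

def f(e):
    # LHS minus RHS of equation (1); negative below eta*, positive above it
    lhs = (2 ** e + 1) / 2 ** e * (3 ** e + 1) ** 2 / (3 ** (2 * e) + 1)
    return lhs - zeta(e) / zeta(2 * e)

lo, hi = 1.5, 2.0  # f(1.5) < 0 < f(2.0)
for _ in range(50):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
print(f"eta* = {(lo + hi) / 2:.7f}")
```

The sign change on $[1.5,2]$ mirrors the inequality $V_2(1.5)<0<V_2(2)$ used in the proof.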
\section{An Open Problem}
Let $\mathcal N(S)$ denote the number of connected components of a set $S\subseteq \mathbb R$, and let $\mathcal E_k^*=\{t\in\mathbb R\colon\mathcal N(\overline{\sigma_t^*(\mathbb N)})=k\}$. Theorem \ref{Thm2.3} tells us that $\mathcal E_1^*=[-\eta^*,0)$. What can be said about $\mathcal E_k^*$ for $k\geq 2$? Are these sets all half-open intervals? What is the growth rate of the sequence $(-\inf\mathcal E_k^*)_{k=1}^\infty$?
\section{Acknowledgements}
This work was supported by National Science Foundation grant no. 1262930. | {
"timestamp": "2017-06-26T02:03:17",
"yymm": "1507",
"arxiv_id": "1507.02654",
"language": "en",
"url": "https://arxiv.org/abs/1507.02654",
"abstract": "For any real $t$, the unitary divisor function $\\sigma_t^*$ is the multiplicative arithmetic function defined by $\\sigma_t^*(p^{\\alpha})=1+p^{\\alpha t}$ for all primes $p$ and positive integers $\\alpha$. Let $\\overline{\\sigma_t^*(\\mathbb N)}$ denote the topological closure of the range $\\sigma_t^*$. We calculate an explicit constant $\\eta^*\\approx 1.9742550$ and show that $\\overline{\\sigma_{-r}^*(\\mathbb N)}$ is connected if and only if $r\\in(0,\\eta^*]$. We end with an open problem.",
"subjects": "Number Theory (math.NT)",
"title": "Ranges of Unitary Divisor Functions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9875683506103591,
"lm_q2_score": 0.7185943805178139,
"lm_q1q2_score": 0.7096610671258502
} |
https://arxiv.org/abs/2209.03722 | Percolation on High-dimensional Product Graphs | We consider percolation on high-dimensional product graphs, where the base graphs are regular and of bounded order. In the subcritical regime, we show that typically the largest component is of order logarithmic in the number of vertices. In the supercritical regime, our main result recovers the sharp asymptotic of the order of the largest component, and shows that all the other components are typically of order logarithmic in the number of vertices. In particular, we show that this phase transition is quantitatively similar to the one of the binomial random graph.This generalises the results of Ajtai, Komlós, and Szemerédi and of Bollobás, Kohayakawa, and Łuczak who showed that the $d$-dimensional hypercube, which is the $d$-fold Cartesian product of an edge, undergoes a phase transition quantitatively similar to the one of the binomial random graph. | \section{Introduction}
\subsection{Background and motivation}
In 1960, Erd\H{o}s and R\'enyi \cite{ER60} discovered the following fundamental phenomenon: the component structure of the binomial random graph $G(d+1,p)$\footnote{As we mainly consider $d$-regular graphs, we use the slightly unusual notation of $G(d+1,p)$ instead of $G(n,p)$, to make the comparison of the results simpler.} undergoes a remarkable \emph{phase transition} around the probability $p=\frac{1}{d}$. More precisely, if we let $y=y(\epsilon)$ be the unique solution in $(0,1)$ of the equation
\begin{align}\label{survival prob}
y=1-\exp\left(-(1+\epsilon)y\right),
\end{align}
then Erd\H{o}s and R\'enyi's work \cite{ER60} implies the following\footnote{In fact, Erd\H{o}s and R\'enyi worked in the closely related \emph{uniform} random graph model $G(d+1,m)$.}:
\begin{thm}[\cite{ER60}]\label{ER thm}
Let $\epsilon>0$ be a small enough constant. Then, with probability tending to one as $d$ tends to infinity,
\begin{itemize}
\item [(a)] if $p=\frac{1-\epsilon}{d}$, then all components of $G(d+1,p)$ are of order $O\left(\frac{\log d}{\epsilon^2}\right)$; and,
\item [(b)] if $p=\frac{1+\epsilon}{d}$, then $G(d+1,p)$ contains a unique giant component of order $(1+o(1))yd$, where $y$ is defined according to (\ref{survival prob}). Furthermore, all the other components of $G(d+1,p)$ are of order $O\left(\frac{\log d}{\epsilon^2}\right)$.
\end{itemize}
\end{thm}
We note that $y$ is the survival probability of a Galton-Watson tree with offspring distribution $Bin\left(d, \frac{1+\epsilon}{d}\right)$, and we have that $y=2\epsilon-O(\epsilon^2)$. The regime where $p=\frac{1-\epsilon}{d}$ is often referred to as the \textit{subcritical regime}, while the regime where $p=\frac{1+\epsilon}{d}$ is often called the \textit{supercritical regime}. We refer the reader to \cite{B01, FK16, JLr00} for a systematic coverage of random graphs.
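Since $y(\epsilon)$ in (\ref{survival prob}) has no closed form, it is convenient to solve the fixed-point equation numerically; the sketch below (our code) does so by direct iteration, which converges for small $\epsilon>0$ because the map $y\mapsto 1-e^{-(1+\epsilon)y}$ is a contraction near the positive root.

```python
import math

def survival_y(eps, tol=1e-12, max_iter=100000):
    """Positive solution of y = 1 - exp(-(1+eps) y), for eps > 0."""
    y = 2 * eps  # first-order guess, since y = 2*eps - O(eps^2)
    for _ in range(max_iter):
        y_next = 1 - math.exp(-(1 + eps) * y)
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    return y

for eps in (0.01, 0.1, 0.3):
    print(f"eps = {eps}: y = {survival_y(eps):.6f}")
```

Starting at $2\epsilon$ rather than at $0$ matters: $y=0$ is also a fixed point, and iteration from $0$ would simply stay there.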
We can think of the binomial random graph as perhaps the simplest example of a \emph{percolation} model. Percolation is a mathematical process, initially studied by Broadbent and Hammersley \cite{BH57} to model the flow of a fluid through a porous medium whose channels may be randomly blocked. The underlying mathematical model is simple: given a fixed \emph{host} graph $G$ and some probability $p \in (0,1)$, in (bond) percolation we consider the random subgraph $G_p$ of $G$ obtained by retaining every edge independently with probability $p$. The component structure of the percolated subgraph $G_p$ is of particular interest, and in this broader setting the phase transition described in Theorem \ref{ER thm} can be viewed as an example of a \emph{percolation threshold}. See \cite{BR06, G99, K82} for a comprehensive introduction to percolation theory.
Percolation is often studied on lattice-like graphs, where in contrast to $G(d+1,p)$ there is some non-trivial underlying geometry controlling the potential adjacencies in the random subgraph. One particular percolation model that has received considerable interest is that of percolation on the $d$\textit{-dimensional hypercube} $Q^d$. Here, $Q^d$ is a graph with the vertex set $V(Q^d)=\{0,1\}^d$, where two vertices are adjacent if they differ in exactly one coordinate.
It was conjectured by Erd\H{o}s and Spencer \cite{ES79} that $Q^d_p$ undergoes a similar phase transition to $G(d+1,p)$ when $p$ is around $\frac{1}{d}$. Ajtai, Koml\'os, and Szemer\'edi \cite{AKS81} confirmed this conjecture, and their work was extended by Bollob\'as, Kohayakawa, and \L{}uczak \cite{BKL92}.
\begin{thm}[\cite{AKS81, BKL92}] \label{hypercube thm}
Let $\epsilon>0$ be a small enough constant. Then, with probability tending to one as $d$ tends to infinity,
\begin{itemize}
\item [(a)] if $p=\frac{1-\epsilon}{d}$, then all components of $Q^d_p$ are of order $O_{\epsilon}(d)$; and,
\item [(b)] if $p=\frac{1+\epsilon}{d}$, then $Q^d_p$ contains a unique giant component of order $(1+o(1))y2^d$, where $y$ is defined according to (\ref{survival prob}). Furthermore, all the other components of $Q^d_p$ are of order $O_{\epsilon}(d)$.
\end{itemize}
\end{thm}
Observe that in the setting of the hypercube, $|V(Q^d)| = 2^d$, and so $d=\log_2 |V(Q^d)|$. Thus, one can see that there is a striking similarity between the behaviour of $G(d+1,p)$ shown in Theorem \ref{ER thm}, and the behaviour of $Q^d_p$ shown in Theorem \ref{hypercube thm}. Denoting the number of vertices in the host graph by $n$, the components in the subcritical regime are typically of order at most logarithmic in $n$, whereas in the supercritical regime there is a unique giant component of asymptotic order $yn$, and all other components are of order logarithmic in $n$. In other words, with the right scaling, the phase transitions which occur around the percolation threshold in $G(d+1,p)$ and $Q^d_p$ display quantitatively similar behaviour. It is known that a similar phenomenon occurs in other random graph models, such as graphs with a fixed degree sequence \cite{MR95} or percolation on pseudo-random graphs \cite{FKM04}. We will informally refer to this as the \textit{Erd\H{o}s-R\'enyi component phenomenon}.
Part of the Erd\H{o}s-R\'enyi component phenomenon is related to the so-called \emph{discrete duality principle}, which roughly says that if we remove the giant component from a supercritical random graph, then the distribution of what remains in some way resembles the distribution of a subcritical random graph, for an appropriate choice of parameters. See, for example, \cite{HH17} for a discussion of the discrete duality principle in hypercube and lattice percolation. There is also perhaps some similarity to the concept of \emph{mean-field behaviour}, a phenomenon observed in many models from statistical physics, whereby above a critical dimension certain quantitative properties of the model at the critical probability no longer depend on the dimension, and so are in some sense independent of the host graph. For example, Nachmias \cite{N09} showed that percolated transitive expander graphs demonstrate mean-field behaviour in terms of the width of the scaling window around the critical probability, and more recently this behaviour has been shown to hold in the percolated hypercube as well \cite{BCVSS06,HN17}.
It is thus a natural question to ask whether such a phenomenon holds for percolation in a wider family of graphs. The $d$-dimensional hypercube embeds naturally into $d$-dimensional space, but we can also view $Q^d$ as a high-dimensional object in terms of its product structure, since $Q^d$ can be obtained as the \textit{Cartesian product} of $d$-copies of a single edge. It is perhaps not unreasonable to expect that percolation on other \emph{high-dimensional graphs} might display similar behaviour.
Given an integer $t>0$ and a sequence of graphs $G^{(1)},G^{(2)},\ldots, G^{(t)}$, the Cartesian product of $G^{(1)},G^{(2)},\ldots, G^{(t)}$, denoted by $G=G^{(1)}\square \cdots \square G^{(t)}$ or $G=\square_{i=1}^{t}G^{(i)}$, is the graph with the vertex set
\begin{align*}
V(G)=\left\{v=(v_1,v_2,\ldots,v_t) \colon v_i\in V\left(G^{(i)}\right) \text{ for all } i \in [t]\right\},
\end{align*}
and the edge set
\begin{align*}
E(G)=\left\{uv \colon \begin{array}{l} \text{there is some } i\in [t] \text{ such that } u_j=v_j\\
\text{for all } i \neq j \text{ and } u_iv_i\in E\left(G^{(i)}\right) \end{array}\right\}.
\end{align*}
We call $G^{(1)},G^{(2)},\ldots, G^{(t)}$ the \textit{base graphs of} $G$.
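To make the definition concrete, here is a short sketch (our own illustrative code and data representation, not from the paper) that constructs the Cartesian product of a list of base graphs, each given as a pair of a vertex list and a set of edges:

```python
from itertools import product

def cartesian_product(base_graphs):
    """base_graphs: list of (vertices, edges) pairs, where edges is a set of
    two-element frozensets.  Returns (vertices, edges) of the product."""
    vertices = list(product(*(V for V, _ in base_graphs)))
    edges = set()
    for u in vertices:
        for i, (_, E) in enumerate(base_graphs):
            for e in E:
                if u[i] in e:               # replace coordinate i by the
                    (w,) = e - {u[i]}       # other endpoint of the edge e
                    v = u[:i] + (w,) + u[i + 1:]
                    edges.add(frozenset({u, v}))
    return vertices, edges

# The product of three copies of a single edge K2 is the hypercube Q^3:
K2 = ([0, 1], {frozenset({0, 1})})
V, E = cartesian_product([K2, K2, K2])
# Q^3 has 2^3 = 8 vertices, 12 edges, and is 3-regular.
```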
Percolation on such product graphs has been quite well studied. Apart from the hypercube, perhaps the two most well-known examples are the $d$-dimensional torus $T_{n,d}$, which is the Cartesian product of $d$ cycles of length $n$, and the Hamming graph $K_n^d$, which is the Cartesian product of $d$ complete graphs of order $n$ (see Chapter 13 of \cite{HH17} for a survey of many important results on these models). However, in both of these models, most research has focused on the case where the dimension $d$ is fixed, with the asymptotics studied as the number of vertices $n$ in the base graphs tends to infinity. Interestingly, the percolated torus in fixed dimension is known not to exhibit the Erd\H{o}s-R\'enyi component phenomenon exactly. Indeed, in the supercritical regime, the second-largest component is known to be of asymptotic order $\Theta\left(d^\frac{d-1}{d}\log^{\frac{d}{d-1}} n\right)$, and not $O(\log |T_{n,d}|) = O(d \log n)$ \cite{HR06}, where the asymptotics here are in terms of $n$, with $d$ treated as a constant.
In this paper, we will focus on percolation on Cartesian products of \textit{many} graphs, that is, like in the hypercube, where the base graphs are of bounded order and we are interested in the asymptotics as the \textit{dimension} of the product, that is, the number of non-trivial base graphs, tends to infinity.
Recently, Lichev \cite{L22} considered percolation on high-dimensional product graphs, under the assumption that the \emph{isoperimetric constants} of the base graphs were not shrinking too quickly. The \textit{isoperimetric constant} $i(H)$, also known as the Cheeger constant, of a graph $H$ is given by \[
i(H)=\inf_{\begin{subarray}{c}S\subseteq V(H),\\
|S|\le |V(H)|/2\end{subarray}}\frac{e(S, S^C)}{|S|}.
\]
The isoperimetric constant broadly measures the global connectivity of a graph, by measuring how easy it is to separate the graph into two large parts.
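As a concrete, deliberately naive illustration of this definition, the following sketch (names are ours) computes $i(H)$ for a tiny graph by enumerating every candidate set $S$; the run time is exponential in $|V(H)|$, so this is only a check of the definition, not a practical algorithm:

```python
from itertools import combinations

def isoperimetric_constant(vertices, edges):
    # i(H) = min over nonempty S with |S| <= |V|/2 of e(S, S^c) / |S|
    best = float("inf")
    for k in range(1, len(vertices) // 2 + 1):
        for S in combinations(vertices, k):
            S = set(S)
            boundary = sum(1 for e in edges if len(e & S) == 1)
            best = min(best, boundary / len(S))
    return best

# For the 4-cycle C4, a single vertex has edge boundary 2, and a pair of two
# adjacent vertices also has edge boundary 2, so i(C4) = 1.
C4 = ([0, 1, 2, 3], [frozenset({i, (i + 1) % 4}) for i in range(4)])
# isoperimetric_constant(*C4) == 1.0
```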
Lichev \cite{L22} showed that the component structure of such graphs undergoes a phase transition when $p$ is around $\frac{1}{d}$, where $d\coloneqq d(G)$ is the average degree of the host graph $G$.
\begin{thm}[Theorem 1.1 of \cite{L22}]\label{product graphs}
Let $C, \gamma>0$ be constants. Let $G^{(1)},\ldots, G^{(t)}$ be connected graphs such that for all $j\in[t]$, $\Delta\left(G^{(j)}\right)\le C$ and $i\left(G^{(j)}\right)\ge t^{-\gamma}$. Let $G=\square_{j=1}^{t}G^{(j)}$. Form $G_p$ by retaining every edge of $G$ independently with probability $p$. Then, with probability tending to one as $d\coloneqq d(G)$ tends to infinity:
\begin{itemize}
\item[(a)] if $p=\frac{1-\epsilon}{d}$, then all components of $G_p$ are of order at most $\exp\left(-\frac{\epsilon^2t}{9C^2}\right)n$; and,
\item[(b)] if $p=\frac{1+\epsilon}{d}$, then there exists a positive constant $c=c(\epsilon,C,\gamma)$ such that the largest component of $G_p$ is of order at least $c n$.
\end{itemize}
\end{thm}
Observe that, in comparison to Theorems \ref{ER thm} and \ref{hypercube thm}, Theorem \ref{product graphs} only gives a qualitative description of the phase transition, in the sense that the largest component in the supercritical regime is shown to be linear in order, but neither its uniqueness nor the leading constant are determined, and the largest component in the subcritical regime is only shown to have sublinear order.
This is not quite as strong as the Erd\H{o}s-R\'enyi component phenomenon, which determines the asymptotic order and uniqueness of the giant component in the supercritical regime, as well as bounds the order of the largest and second-largest component in the subcritical and supercritical regime, respectively, as logarithmic in the order of the host graph.
On the other hand, the assumptions on the base graphs in Theorem \ref{product graphs} are quite mild, requiring only that the isoperimetric constants in the base graphs are not tending to $0$ too quickly (in fact, as will be elaborated in Section \ref{discussion}, in a forthcoming paper \cite{DEKK23} we show that the isoperimetric constants can tend to $0$ even faster). It is thus an interesting question as to what additional assumptions, if any, on the structure of the base graphs would be sufficient to ensure that the product graph displays the Erd\H{o}s-R\'enyi component phenomenon after percolation.
\subsection{Main results}
We will see that one natural assumption to make is that our host graph $G$ is regular, which in particular will be the case for a product graph if and only if the base graphs are all regular.
Our first result concerns the component structure in the subcritical regime, and in fact will hold for any $d$-regular graph, not necessarily a product graph.
\begin{theorem}\label{subcritical regime}
Let $G$ be a $d$-regular graph on $n$ vertices, let $\epsilon>0$ be a small enough constant and let $p=\frac{1-\epsilon}{d}$. Then, with probability tending to one as $n$ tends to infinity, all the components of $G_p$ are of order at most $\frac{9\log n}{\epsilon^2}$.
\end{theorem}
We note that the proof of Theorem \ref{subcritical regime} is relatively short, utilising the Breadth First Search (BFS) algorithm. In particular, it recovers known results in more specific models, such as $G(d+1,p)$ \cite{ER60}, $Q^d_p$ \cite{AKS81, BKL92}, and the torus and Hamming graphs \cite{HH17}.
A second natural assumption will be that the base graphs all have bounded order. Roughly, this will guarantee that the asymptotic behaviour is coming from the product structure.
Our second result then concerns the component structure in the supercritical regime for the product of many regular graphs of bounded order.
\begin{theorem}\label{supercritical regime}
Let $C>1$ be a constant and let $\epsilon>0$ be sufficiently small. For all $i\in [t]$, let $G^{(i)}$ be a non-trivial connected regular graph of degree $d(G^{(i)})$ such that $\big|V\left(G^{(i)}\right)\big|\le C$. Let $G=\square_{i=1}^{t}G^{(i)}$ and let $p=\frac{1+\epsilon}{d}$, where $d\coloneqq d(G) =\sum_{i=1}^{t}d\left(G^{(i)}\right)$ is the degree of $G$. Let $n\coloneqq |V(G)|$. Then, \textbf{whp}\footnote{With high probability, that is, with probability tending to $1$ as $t$ tends to infinity.},
there exists a unique giant component of order $\left(1+o(1)\right)yn$ in $G_p$, where $y=y(\epsilon)$ is defined as in (\ref{survival prob}). Furthermore, \textbf{whp}, all the remaining components of $G_p$ are of order $O_{\epsilon}(d)$.
\end{theorem}
We note that, in terms of the asymptotic order of the second largest component, in our setting $d=\Theta(\log |G|)$, and the dependency of the leading constant on $\epsilon$ arising from our proof is inverse polynomial, of order $\frac{1}{\epsilon^3}$ (compare this with Theorem \ref{subcritical regime}, and the well-known result that the second-largest component of $G(d+1,p)$ has order $\Theta(\log d/\epsilon^2)$ \cite{ER60}).
Theorems \ref{subcritical regime} and \ref{supercritical regime} together show that when the underlying asymptotic geometry of the graph arises from a product structure, where the base graphs are bounded and regular, the percolated graph exhibits the Erd\H{o}s-R\'enyi component phenomenon. We note that, in particular, these results generalise the known results for $Q^d_p$ \cite{AKS81, BKL92}, while providing shorter and simpler proofs. Furthermore, it can be shown that the assumption of regularity is necessary. We will discuss this, and the extent to which the other assumptions in both Theorems \ref{product graphs} and \ref{supercritical regime} can be weakened, in more detail in Section \ref{discussion}.
Finally, we note that unlike the previous proofs in the case of the hypercube \cite{AKS81, BKL92}, our proof does not rely on any strong isoperimetric inequality. Indeed, we only require relatively weak assumptions on the expansion of the host graph $G$ (see Theorem \ref{productiso}), and instead use the product structure of $G$ to describe more precisely the structure of the percolated subgraph.
Similar arguments have proven to be useful in other contexts, for example in the setting of site percolation on the hypercube, where the lack of a strong enough vertex isoperimetric inequality complicates the analysis of the component structure. Recently, the first and fourth authors \cite{DK22} used similar ideas to verify a longstanding conjecture of Bollob\'as, Kohayakawa, and \L{}uczak \mbox{\cite[Conjecture 11]{BKL94}} on the size of the second-largest component in the supercritical regime in this model. We expect these methods to be useful in other contexts where the analysis is constrained by the lack of a strong enough isoperimetric inequality. In particular, the proofs in this paper should be relatively easy to modify to the setting of site percolation on high-dimensional product graphs.
The structure of the paper is as follows. In Section \ref{S:prelim}, we introduce some notation, terminology and preliminary lemmas which will serve us throughout the rest of the paper. In Sections \ref{S: subcritical} and \ref{S: supercritical} we prove Theorems \ref{subcritical regime} and \ref{supercritical regime}, respectively. Finally, in Section \ref{discussion} we discuss our results and avenues for future research.
\section{Preliminaries}\label{S:prelim}
\subsection{Notation and terminology}
Let us introduce some notation and terminology, which we will use throughout the rest of the paper.
Recall that given a product graph $G=\square_{i=1}^tG^{(i)}$, we call the $G^{(i)}$ the \textit{base graphs} of $G$. Given a vertex $u = (u_1,u_2, \ldots, u_t)$ in $V(G)$ and $i \in [t]$, we call $u_i\in V(G^{(i)})$ the \textit{$i$-th coordinate} of $u$. As is standard, we may also enumerate the vertices of a given set $M$, writing $M=\left\{v_1,\ldots, v_m\right\}$ with $v_i\in V(G)$. Whenever confusion may arise, we will clarify whether the subscript stands for the enumeration of the vertices of a set, or for the coordinates of a vertex.
When $G^{(i)}$ is a graph on a single vertex, that is, $G^{(i)}=\left(\{u\},\varnothing\right)$, we call it \textit{trivial} (and \textit{non-trivial}, otherwise). We define the \textit{dimension} of $G=\square_{i=1}^tG^{(i)}$ to be the number of base graphs $G^{(i)}$ of $G$ which are non-trivial (we note that the dimension of $G$ is not an invariant of $G$, and in fact depends on the choice of the base graphs). Note that in Theorem \ref{supercritical regime}, we assumed that the base graphs have more than one vertex, implying that they are non-trivial. Given $H\subseteq G=\square_{i=1}^tG^{(i)}$, we call $H$ a \textit{projection of} $G$ if $H$ can be written as $H=\square_{i=1}^tH^{(i)}$ where for every $1\le i\le t$, $H^{(i)}=G^{(i)}$ or $H^{(i)}=\{v_i\}\subseteq V(G^{(i)})$; that is, $H$ is a projection of $G$ if it is the Cartesian product graph of base graphs $G^{(i)}$ and their trivial subgraphs. In that case, we further say that $H$ is the projection of $G$ onto the coordinates corresponding to the trivial subgraphs. For example, let $u_i\in V(G^{(i)})$ for $1\le i\le k$, and let $H=\{u_1\}\square\cdots\square\{u_k\}\square G^{(k+1)}\square\cdots\square G^{(t)}$. In this case we say that $H$ is a projection of $G$ onto the first $k$ coordinates.
We assume throughout the paper that $t\to\infty$, and our asymptotic notation will be with respect to $t$. Given a graph $H$ and a vertex $v \in V(H)$, we denote by $C_v(H)$ the component of $v$ in $H$. All logarithms are with the natural base, unless we explicitly state otherwise. We omit rounding signs for the sake of clarity of presentation.
\subsection{The BFS algorithm} \label{BFS description}
For the proofs of our main results, we will use the Breadth First Search (BFS) algorithm. This algorithm explores the components of a graph $G$ by building a maximal spanning forest.
The algorithm maintains three sets of vertices:
\begin{itemize}
\item $S$, the set of vertices whose exploration is complete;
\item $Q$, the set of vertices currently being explored, kept in a queue; and
\item $T$, the set of vertices that have not been explored yet.
\end{itemize}
The algorithm receives as input a graph $G$ and a linear ordering $\sigma$ on its vertices. It starts with $S=Q=\emptyset$ and $T=V(G)$, and ends when $Q\cup T=\emptyset$. At each step, if $Q$ is non-empty, the algorithm queries the vertices in $T$, in the order $\sigma$, to ascertain if they are neighbours in $G$ of the first vertex $v$ in $Q$. Each neighbour which is discovered is added to the back of the queue $Q$. Once all neighbours of $v$ have been discovered, we move $v$ from $Q$ to $S$. If $Q=\emptyset$, we move the next vertex from $T$ (according to $\sigma$) into $Q$. Note that the set of edges queried during the algorithm forms a maximal spanning forest of $G$.
In order to analyse the BFS algorithm on a random subgraph $G_p$ of a $d$-regular graph $G$ with $n$ vertices, we will utilise the \emph{principle of deferred decisions}. That is, we will take a sequence $(X_i \colon 1 \leq i \leq \frac{nd}{2})$ of i.i.d.\ Bernoulli$(p)$ random variables, which we will think of as representing a positive or negative answer to a query in the algorithm. When the $i$-th edge of $G$ is queried during the BFS algorithm we will include it in $G_p$ if and only if $X_i=1$. It is clear that the forest obtained in this way has the same distribution as a forest obtained by running the BFS algorithm on $G_p$.
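The exploration just described can be sketched in code as follows (an illustrative sketch under our own representation choices: the graph is an adjacency dictionary, and Python's random module plays the role of the i.i.d.\ Bernoulli$(p)$ sequence). Each edge of $G$ is decided at most once, and the retained positive answers form a spanning forest of the percolated graph:

```python
import random
from collections import deque

def bfs_percolation(adj, sigma, p, rng=None):
    """Explore G_p by BFS, deciding each edge of G by a deferred
    Bernoulli(p) coin the first time it is queried."""
    rng = rng or random.Random(0)
    S, Q, T = set(), deque(), list(sigma)   # explored / queue / unexplored
    forest = []                             # edges with positive answers
    while Q or T:
        if not Q:
            Q.append(T.pop(0))              # new root, next in the order sigma
        v = Q[0]
        for u in list(T):                   # query T-vertices in order sigma
            if u in adj[v] and rng.random() < p:
                forest.append(frozenset({v, u}))
                T.remove(u)
                Q.append(u)
        S.add(Q.popleft())
    return forest

# With p = 1 every queried edge is retained and we recover a spanning tree;
# e.g. on the 4-cycle the forest has exactly 3 edges.
```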
\subsection{Preliminary Lemmas}
We will make use of two standard probabilistic bounds. The first one is a typical Chernoff-type tail bound on the binomial distribution (see, for example, Appendix A in \cite{AS16}).
\begin{lemma}\label{chernoff}
Let $n\in \mathbb{N}$, let $p\in [0,1]$, and let $X\sim Bin(n,p)$. Then for any $t\ge 0$,
\begin{align*}
&\mathbb{P}\left[|X-np|\ge t\right]\le 2\exp\left(-\frac{t^2}{3np}\right).
\end{align*}
\end{lemma}
The second one is the well-known Azuma-Hoeffding inequality (see, for example, Chapter 7 in \cite{AS16}).
\begin{lemma}\label{azuma}
Let $X = (X_1,X_2,\ldots, X_m)$ be a random vector with range $\Lambda = \prod_{i \in [m]} \Lambda_i$ and let $f:\Lambda\to\mathbb{R}$ be such that there exists $C \in \mathbb{R}^m$ such that for every $x,x' \in \Lambda$ which differ only in the $j$-th coordinate,
\begin{align*}
|f(x)-f(x')|\le C_j.
\end{align*}
Then, for every $t\ge 0$,
\begin{align*}
\mathbb{P}\left[\big|f(X)-\mathbb{E}\left[f(X)\right]\big|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^mC_i^2}\right).
\end{align*}
\end{lemma}
We will use the following result of Chung and Tetali \cite[Theorem 2]{CT98}, which bounds the isoperimetric constant of product graphs.
\begin{thm}[\cite{CT98}]\label{productiso}
Let $G^{(1)},\ldots, G^{(t)}$ be graphs and let $G = \square_{i=1}^{t}G^{(i)}$. Then
\[
\min_j \{i( G^{(j)}) \} \geq i(G) \geq \frac{1}{2}\min_j \left\{i\left( G^{(j)}\right) \right\} .
\]
\end{thm}
Finally, we will use the following bound on the number of $m$-vertex trees in a bounded-degree graph.
\begin{lemma}[{\cite[Lemma 2]{BFM98}}]\label{BFM98}
Let $G$ be a graph on $n$ vertices with maximum degree $d$. Let $t_m(G)$ be the number of $m$-vertex trees in $G$. Then,
\begin{align*}
t_m(G)\le n(ed)^{m-1}.
\end{align*}
\end{lemma}
\section{Subcritical regime}\label{S: subcritical}
The proof of Theorem \ref{subcritical regime} is inspired by Krivelevich and Sudakov's \cite{KS13} simple proof of the emergence of a giant component in $G(d+1,p)$. However, here we analyse the BFS algorithm instead of the Depth First Search algorithm.
\begin{proof}[Proof of Theorem \ref{subcritical regime}]
Suppose we run the BFS algorithm on $G_p$, as described in Section \ref{BFS description}, and assume to the contrary that $G_p$ contains a component $K$ of order larger than $k=\frac{9\log n}{\epsilon^2}$.
Let us consider the period of the algorithm from the moment when the first vertex of $K$ enters $Q$ to the moment when the $(k+1)$-st vertex of $K$ enters $Q$. During this period we only query edges incident to the first $k$ vertices of $K$, and so, since $G$ is $d$-regular, we query at most $kd$ edges. On the other hand, the positive answers received during this period correspond to edges of a tree on these $k+1$ vertices, and hence in this period we receive at least $k$ positive answers. In particular, there is some interval $I \subseteq \left[\frac{nd}{2} \right]$ of length $kd$ such that at least $k$ of the $X_i$ with $i \in I$ are equal to $1$.
However, by Lemma \ref{chernoff}, the probability that this occurs for any fixed interval $I$ is at most
\begin{align*}
\mathbb{P}\left[Bin\left(kd,\frac{1-\epsilon}{d}\right)\ge k\right]\le \exp\left(-\frac{\epsilon^2k}{4}\right)=o\left(\frac{1}{n^2}\right),
\end{align*}
where the last equality holds since $k=\frac{9\log n}{\epsilon^2}$. There are at most $\frac{nd}{2} \leq n^2$ intervals $I \subseteq \left[\frac{nd}{2} \right]$ of length $kd$, and so by a union bound \textbf{whp} there is no such interval, contradicting our assumption.
\end{proof}
\section{Supercritical regime}\label{S: supercritical}
Let us start by giving a brief sketch of the proof in this section.
We start in Section \ref{s:proj} by giving a useful technical lemma, which we call a \emph{projection lemma} (Lemma \ref{seperation lemma}), which allows us to cover a small set of points $M$ in the product graph with a set of pairwise disjoint projections of large dimension, each of which contains exactly one point of $M$. This allows us to explore the graph `locally' around each of these points in an independent manner.
Using this, in Section \ref{medium}, we show that \textbf{whp} a fixed proportion of the vertices in $G_p$ will be contained in \emph{big} components, of order at least $d^k$ for some fixed integer $k>0$ and, in Section \ref{gap}, we show that these big components are `dense' in the host graph, in the sense that every vertex in $G$ is close to some big component of $G_p$. Finally, in Section \ref{proof}, we use a sprinkling argument to show that these big components are in fact all contained in a unique giant component and determine its asymptotic order. This approach is relatively standard, and is roughly the method used by Ajtai, Koml\'os, and Szemer\'edi \cite{AKS81} and Bollob\'as, Kohayakawa, and \L{}uczak \cite{BKL92}, but the arguments are simplified substantially by our projection lemma.
However, these methods inherently can only establish a superlinear polynomial bound (in $d$) on the order of the second-largest component. In order to overcome that, the standard approach requires a strong isoperimetric inequality to demonstrate a gap in the size of the components. Nevertheless, utilising the projection lemma in an inductive manner, together with a carefully chosen multi-round exposure, allows us to keep track more precisely of the distribution of the vertices in big components, and in particular to show that any large enough connected set in $G_p$, where we only require these sets to be linearly large in $d$, must be adjacent to many vertices in big components. A further sprinkling argument then allows us to give an improved bound on the size of the second-largest component, completing the proof of Theorem \ref{supercritical regime}, without the need for a strong isoperimetric inequality.
\subsection{Projection lemma}\label{s:proj}
We begin by establishing the following projection lemma, which will be useful throughout all of the subsequent sections.
\begin{lemma}\label{seperation lemma}
Let $G=\square_{i=1}^{t}G^{(i)}$ be a product graph with dimension $t$. Let $M\subseteq V(G)$ be such that $|M|=m\le t$. Then, there exist pairwise disjoint projections $H_1, \ldots, H_m$ of $G$, each having dimension at least $t-m+1$, such that every $v\in M$ is in exactly one of these projections.
\end{lemma}
\begin{proof}
We argue by induction on pairs $(m,t)$, where $t$ is the dimension of the graph, with $m \leq t$ under the lexicographical ordering. When $m=1$, we simply take $H_1=G$.
Let $M \subseteq V(G)$ have size $m\geq 2$ and let us assume the statement holds for all $(m',t')<(m,t)$. Let us write $M=\left\{v_1,\ldots, v_m\right\}$, where we stress that the subscript here is an enumeration of the vertices of $M$ and not the coordinates of a fixed vertex.
There is some coordinate $i$ on which at least two of the vertices of $M$ do not agree. Let us denote by $M_i$ the set of the $i$-th coordinates of the vertices of $M$, that is, $V(G^{(i)})\supseteq M_i=\left\{v_{1,i},\ldots, v_{\ell,i}\right\}$, where we may re-order the vertices so that $v_{j,i}$ is the $i$-th coordinate of the vertex $v_j\in M$, and $2\le \ell \le m$ (note that it is possible that $\ell<m$, since there could be vertices in $M$ which agree on their $i$-th coordinates).
Let us now consider the pairwise disjoint projections $H_1, \ldots, H_{\ell}$ of $G$ defined by
\begin{align*}
H_{j}=G^{(1)}\square\cdots\square G^{(i-1)}\square \{v_{j,i}\}\square G^{(i+1)}\square \cdots \square G^{(t)}.
\end{align*}
That is, in the $j$-th projection, we take the $i$-th coordinate of $H_{j}$ to be the trivial graph $\{v_{j,i}\}$ (which is the $i$-th coordinate of the $j$-th vertex in $M_i$).
Note that each of these projections has dimension $t-1$ and contains at least one vertex of $M$, and that each of the vertices of $M$ is in exactly one of these projections. Hence, each such projection contains at most $m-1$ vertices from $M$.
We can thus apply the induction hypothesis to each of these projections, giving rise to $m$ pairwise disjoint projections of $G$, each of dimension at least $(t-1)-(m-1)+1=t-m+1$ and containing
exactly one vertex from $M$.
\end{proof}
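The inductive proof above is constructive, and can be turned into a short recursive sketch (our own illustrative code; we assume the vertices of $M$ are distinct tuples of length $t$). Each vertex is mapped to a pattern describing its projection: entry None in coordinate $i$ means the projection keeps the whole base graph $G^{(i)}$, while a fixed value freezes that coordinate.

```python
def separate(M):
    """Return, for each vertex in M, the pattern of a projection containing
    it, such that the projections are pairwise disjoint and each freezes at
    most len(M) - 1 coordinates."""
    M = list(M)
    if len(M) == 1:
        return {M[0]: tuple(None for _ in M[0])}
    t = len(M[0])
    # a coordinate on which at least two vertices of M disagree
    i = next(j for j in range(t) if len({v[j] for v in M}) >= 2)
    result = {}
    for value in {v[i] for v in M}:
        group = [v for v in M if v[i] == value]
        for v, pattern in separate(group).items():
            result[v] = pattern[:i] + (value,) + pattern[i + 1:]
    return result
```

Each level of the recursion freezes one further coordinate and strictly decreases the group size, so every returned pattern has at least $t-|M|+1$ free coordinates, matching the lemma.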
\subsection{Vertices in large components}\label{medium}
We first estimate from below the probability that a vertex of $G$ lies in a polynomially sized (in $d$) component of $G_p$, with the proof inspired by \cite{BKL94}.
\begin{lemma}\label{medium comp}
Let $C>1$ and $\epsilon>0$ be constants, let $G^{(1)},\ldots, G^{(t)}$ be connected regular graphs such that $1<\big|V\left(G^{(i)}\right)\big|\le C$ for all $i\in[t]$, let $G=\square_{i=1}^{t}G^{(i)}$ and let $p\geq\frac{1+\epsilon}{d}$,
where $d=\sum_{i=1}^{t}d\left(G^{(i)}\right)$ is the degree of $G$. Let $r >0$ be an integer and let $m_r=d^{\frac{r}{4}}$. Then, there exists a constant $c=c(\epsilon,r)>0$ such that for any $v\in V(G)$,
\begin{align*}
\mathbb{P}\left[\big|C_v(G_p)\big|\ge cm_r\right]\ge y-o_t\left(1\right),
\end{align*}
where $y=y(\epsilon)$ is as defined in (\ref{survival prob}).
\end{lemma}
\begin{proof}
We will prove the slightly more explicit statement that the result holds with ${c(\epsilon,r)= \left(\frac{y(\epsilon)}{5}\right)^{r}}$ by induction on $r$, over all possible values of $C$ and $\epsilon$, and all choices of $G^{(1)},\ldots, G^{(t)}$.
For $r=1$, we run the BFS algorithm (as described in Section \ref{BFS description}) on $G_p$ starting from $v$ with a slight alteration: we terminate the algorithm once $\min\left(\big|C_v(G_p)\big|,d^{\frac{1}{2}}\right)$ vertices are in $S\cup Q$. Note that at every point in the algorithm we have $|S\cup Q|\le d^{\frac{1}{2}}$, and therefore at each point in the algorithm the first vertex $u$ in the queue has at least $d-d^{\frac{1}{2}}$ neighbours (in $G$) in $T$. Hence, we can couple the forest $F$ built by this truncated BFS process with a Galton-Watson tree $B$ rooted at $v$ with offspring distribution $Bin\left(d-d^{\frac{1}{2}},p\right)$ such that $B \subseteq F$ as long as $|B| \leq d^{\frac{1}{2}}$.
Since $\left(d-d^{\frac{1}{2}}\right)\cdot p \ge 1+\epsilon-o(1)$, standard results imply that $B$ grows infinitely large with probability $y-o(1)$ (see, for example, \cite[Theorem 4.3.12]{D19}). Thus, with probability at least $y-o(1)$, we have that
\[
\big|C_v(G_p)\big|\ge d^{\frac{1}{2}} \geq \left(\frac{y}{5}\right)d^{\frac{1}{4}} = c(\epsilon,1)m_1.
\]
Let $r\geq 2$ and let us assume that the statement holds with $c(\epsilon',r-1) = \left(\frac{y(\epsilon')}{5}\right)^{r-1}$ for all $C$, $\epsilon'$ and all choices of $G^{(1)},\ldots, G^{(t)}$. We will argue via a two-round exposure. Set $p_2=d^{-\frac{5}{4}}$ and $p_1=\frac{p-p_2}{1-p_2}$, so that $(1-p_1)(1-p_2)=1-p$. Note that $G_p$ has the same distribution as $G_{p_1}\cup G_{p_2}$, and that $p_1=\frac{1+\epsilon'}{d}$ where $\epsilon' = \epsilon - o(1)$. In fact, we will not expose either $G_{p_1}$ or $G_{p_2}$ all at once, but in several stages, each time considering only some subset of the edges.
We begin in a manner similar to the case of $r=1$. We run the BFS algorithm on $G_{p_1}$ starting from $v$, and we terminate the exploration once $\min\left(\big|C_v(G_{p_1})\big|,d^{\frac{1}{2}}\right)$ vertices are in $S\cup Q$. Once again, by standard arguments, we have that $\big|C_v(G_{p_1})\big|\ge d^{\frac{1}{2}}$ with probability at least $y\left(\epsilon'\right)-o(1)=y-o(1)$. Let us write $W_0\subseteq C_v(G_{p_1})$ for the set of vertices explored in this process, and assume in what follows that $W_0$ has order $d^{\frac{1}{2}}$. Using Lemma \ref{seperation lemma}, we can find pairwise disjoint projections $H_1,\ldots, H_{d^{\frac{1}{2}}}$ of $G$, each having dimension at least $t-d^{\frac{1}{2}}$, such that each $w\in W_0$ is in exactly one of the $H_i$. We denote the vertex of $W_0$ in $H_i$ by $v_i$.
Now, by our assumptions on $G$, we have that $|V\left(G^{(j)}\right)|\le C$ for every $j\in[t]$ and hence clearly $d\left(G^{(j)}\right)\le C$. Thus, each of the $H_i$ is $d_i$-regular with $d_i\ge d-Cd^{\frac{1}{2}} = (1-o(1))d$. In particular, each $v_i$ has $(1-o(1))d$ neighbours in $H_i$. Let us define the following set of vertices:
\begin{align*}
W=\bigcup_{i\in [d^{\frac{1}{2}}]} N_{H_i}(v_i).
\end{align*}
Then, $W\subseteq N_G(W_0)$ and since the $H_i$ are pairwise disjoint we have $|W|\ge (1-o(1))d^{\frac{3}{2}}\geq \frac{9d^{\frac{3}{2}}}{10}$. We now consider the edges of $G_{p_2}$ between $W_0$ and $W$, and denote by $W'$ the set of vertices in $W$ that are joined to $W_0$ by an edge of $G_{p_2}$. Since $|W|\ge \frac{9d^{\frac{3}{2}}}{10}$ and $p_2=d^{-\frac{5}{4}}$, $|W'|$ stochastically dominates
\begin{align*}
Bin\left(\frac{9d^{\frac{3}{2}}}{10}, d^{-\frac{5}{4}}\right).
\end{align*}
Thus, by Lemma \ref{chernoff}, we have that $|W'|\ge \frac{d^{\frac{1}{4}}}{3}$ and $|W'|\le d^{\frac{1}{2}}$ with probability at least $1-\exp\left(-\frac{d^{\frac{1}{4}}}{15}\right)=1-o(1)$. In what follows we will assume that these two inequalities hold.
Let $W'_i=W'\cap V(H_i)$. Now, for each $i$, we apply Lemma \ref{seperation lemma} to find a family of $\ell_i\le d^{\frac{1}{2}}$ pairwise disjoint projections of $H_i$, which we denote by $H_{i,1}, \ldots, H_{i,\ell_i}$, such that every vertex of $W'_i$ is in exactly one of the $H_{i,j}$, and each of the $H_{i,j}$ has dimension at least $t-2d^{\frac{1}{2}}$. Finally, we denote by $v_{i,j}$ the unique vertex of $W'_i$ that is in $H_{i,j}$. Note that each $H_{i,j}$ is $d_{i,j}$-regular with $d_{i,j} \geq d - 2Cd^{\frac{1}{2}}$.
Crucially, observe that when we ran the BFS algorithm on $G_{p_1}$, we did not query any of the edges in any of the $H_{i,j}$: we only queried edges within $W_0$ and between $W_0$ and its neighbourhood, and by construction $E(H_{i,j})\cap E\left(W_0\cup N_G(W_0)\right)=\varnothing$. Note that, since $y=y(\epsilon)$, defined in (\ref{survival prob}), is a continuous increasing function of $\epsilon$ on $(0,\infty)$ and $p_1\cdot d_{i,j} =1 + \epsilon_{i,j} \geq 1 + \epsilon - o(1)$ for all $i,j$, we may apply the induction hypothesis to $v_{i,j}$ in $G_{p_1}\cap H_{i,j}$ and conclude that
\[
|C_{v_{i,j}}\left(H_{i,j}\cap G_{p_1}\right)| \geq c(\epsilon_{i,j},r-1) d^{\frac{r-1}{4}} \geq (1-o(1))c(\epsilon,r-1) d^{\frac{r-1}{4}}
\]
with probability at least $y\left(\epsilon_{i,j}\right)-o(1)\geq y-o(1)$. Note that these events are independent for each $H_{i,j}$. Hence, since by assumption $|W'| \geq \frac{d^{\frac{1}{4}}}{3}$, it follows from Lemma \ref{chernoff} that \textbf{whp} at least $\frac{y d^{\frac{1}{4}}}{4}$ of these $v_{i,j}$ have that $|C_{v_{i,j}}\left(H_{i,j}\cap G_{p_1}\right)|\ge (1-o(1))c(\epsilon,r-1) d^{\frac{r-1}{4}}$.
As such, we may conclude that with probability at least $y-o(1)$, we have that
\begin{align*}
|C_v(G_p)|\ge (1-o(1))\frac{yd^{\frac{1}{4}}}{4}c(\epsilon,r-1) d^{\frac{r-1}{4}} \geq \left( \frac{y}{5}\right)^{r} d^\frac{r}{4} = c(\epsilon,r)m_r,
\end{align*}
completing the induction step.
\end{proof}
From now on let us fix a constant $C>1$, and connected regular graphs $G^{(1)},\ldots, G^{(t)}$ such that $1<\big|V\left(G^{(i)}\right)\big|\le C$ for all $i\in[t]$ and let $G=\square_{i=1}^{t}G^{(i)}$, where we write $d\coloneqq \sum_{i=1}^{t}d\left(G^{(i)}\right)$ for the degree of $G$ and $n\coloneqq |V(G)|$.
We can now estimate the number of vertices in components of order at least $d^k$, where we will choose $k$ later to be some large, fixed, integer.
\begin{lemma}\label{big comp}
Let $\epsilon >0$ and let $p= \frac{1+\epsilon}{d}$. Let $k>0$ be an integer and let $W\subseteq V(G)$ be the set of vertices belonging to components of order at least $d^k$ in $G_p$. Then, \textbf{whp},
\begin{align*}
|W|=\left(1+o(1)\right)yn,
\end{align*}
where $y=y(\epsilon)$ is as defined in (\ref{survival prob}).
\end{lemma}
\begin{proof}
By Lemma \ref{medium comp}, applied with $r=4k+1$, every $v\in V(G)$ is contained in a component of order at least $d^k$ in $G_p$ with probability at least $y-o(1)$. Thus,
\begin{align*}
\mathbb{E}\left[|W|\right]\ge \left(1-o(1)\right)yn.
\end{align*}
Furthermore, since $G$ is $d$-regular, for every $v\in V(G)$ an easy coupling implies that $|C_v(G_p)|$ is stochastically dominated by the number of vertices in a Galton-Watson tree with offspring distribution $Bin(d,p)$. Thus, by standard arguments (see, for example, \cite[Theorem 4.3.12]{D19}), we have that for every $v\in V(G)$,
\begin{align*}
\mathbb{P}\left[|C_v(G_p)|\ge d^k\right]\le y+o_d(1).
\end{align*}
Since $d$ tends to infinity with $t$, we obtain that
\begin{align*}
\mathbb{E}\left[|W|\right]\le \left(1+o(1)\right)yn,
\end{align*}
and we conclude that $\mathbb{E}\left[|W|\right]=\left(1+o(1)\right)yn.$
It remains to show that $|W|$ is concentrated about its mean. To this end, let us consider the edge-exposure martingale on $G_p$, whose length is $|E(G)| = \frac{nd}{2}$. Adding or deleting an edge can change $W$ by at most $2d^k$ vertices. Thus, a standard application of Lemma \ref{azuma} implies that
\begin{align*}
\mathbb{P}\left[\big||W|- \mathbb{E}\left[|W|\right]\big|\ge n^{\frac{2}{3}}\right]&\le 2\exp\left(-\frac{n^{\frac{4}{3}}}{2\sum_{i=1}^{\frac{nd}{2}}4d^{2k}}\right) \\
&\le2\exp\left(-\frac{n^{\frac{1}{3}}}{5d^{2k+1}}\right)=o(1),
\end{align*}
since $d=\Theta(\log n)$.
\end{proof}
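As an aside, $y(\epsilon)$ is easy to evaluate numerically. The definition in (\ref{survival prob}) does not appear in this excerpt, so the sketch below assumes (as is standard for this model) that $y(\epsilon)$ is the survival probability of a Galton--Watson process with Poisson$(1+\epsilon)$ offspring, i.e.\ the unique root in $(0,1]$ of $1-y=e^{-(1+\epsilon)y}$; the function name is ours.

```python
import math

def survival_prob(eps, tol=1e-12, max_iter=100000):
    """Solve 1 - y = exp(-(1 + eps) * y) for the unique root y in (0, 1]
    by fixed-point iteration started from y = 1; for eps > 0 the iterates
    decrease monotonically to the positive fixed point."""
    y = 1.0
    for _ in range(max_iter):
        y_next = 1.0 - math.exp(-(1.0 + eps) * y)
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    return y
```

For instance, this gives $y(1)\approx 0.797$, and $y(\epsilon)\to 0$ as $\epsilon\to 0^{+}$, consistent with the giant component occupying a $y(\epsilon)$-fraction of the vertices.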
\subsection{Big components are everywhere-dense}\label{gap}
In this section, we establish several lemmas showing that, typically, big components in $G_p$ are \emph{everywhere-dense} in $G$. While the statements seem similar, the subtle differences in the type of density will be of importance in the proof of Theorem \ref{supercritical regime}.
We begin with the first type of density lemma. Note that, in particular, the following implies that for any fixed vertex $v \in G$, \textbf{whp} $v$ is adjacent to many vertices which lie in big components in $G_p$.
\begin{lemma} \label{dense1}
Let $\epsilon>0$ be a small enough constant, let $p=\frac{1+\epsilon}{d}$, let $k>0$ be an integer and let $M\subseteq V(G)$ be such that $|M|=m\le \frac{\epsilon d}{10C}$. Then, the probability that every $v\in M$ has fewer than $\frac{\epsilon^2d}{40C}$ neighbours (in $G$) which lie in components of $G_p$ whose order is at least $d^k$ is at most $\exp\left(-\frac{\epsilon^2dm}{40C}\right)$.
\end{lemma}
\begin{proof}
Let $M=\{u_1,\ldots,u_m\}$, where we stress here that the subscript denotes an enumeration of the vertices of $M$, that is, $u_i\in V(G)$. By Lemma \ref{seperation lemma}, we can find pairwise disjoint projections $H_1, \ldots, H_m$ of $G$ such that each $H_i$ is of dimension at least $t-m+1\ge t-\frac{\epsilon d}{10C}$ and $u_i\in V(H_i)$ for all $i\in[m]$. Note that since $d\le Ct$, $t-\frac{\epsilon d}{10C}>0$.
Let us fix some $i$. Without loss of generality, we may assume that $H_i$ is the projection of $G$ onto the last $m_i\le m-1$ coordinates, that is,
\begin{align*}
H_i=G^{(1)}\square\cdots\square G^{(t-m)}\square\{u_{i,t-m_i+1}\}\square\cdots\square\{u_{i,t}\},
\end{align*}
where $u_{i,\ell}\in V(G^{(\ell)})$ is the $\ell$-th coordinate of $u_i$, that is, the $\ell$-th coordinate of the $i$-th vertex of $M$. By our assumption, each $G^{(\ell)}$ has at least $2$ vertices and is connected. Thus, for each $\ell$, we can choose arbitrarily one of the neighbours of $u_{i,\ell}$ in $G^{(\ell)}$ and denote it by $v_{i,\ell}$, where the subscript in $v_{i,\ell}$ is to stress that it is a neighbour of $u_{i,\ell}$ in $G^{(\ell)}$. We now define the following $\frac{\epsilon d}{10C}$ pairwise disjoint projections of $H_i$, denoted by $H_i(1),\ldots , H_i\left(\frac{\epsilon d}{10C}\right)$, each having dimension at least $t-\frac{\epsilon d}{5C}$. We set $H_i(j)$ to be the projection of $H_i$ on the first $\frac{\epsilon d}{10C}$ coordinates, such that the $j$-th coordinate (where $1\le j\le \frac{\epsilon d}{10C}$) is the trivial subgraph $\{v_{i,j}\}\subseteq G^{(j)}$, and the coordinates $1\le \ell\le \frac{\epsilon d}{10C}$ (where $\ell\neq j$) are the trivial subgraphs $\{u_{i,\ell}\}\subseteq G^{(\ell)}$. Note that each $H_i(j)$ is at distance $1$ from $u_i$, since it contains the vertex $$v_i(j)\coloneqq \left(u_{i,1},\cdots,u_{i,j-1},v_{i,j},u_{i,j+1},\cdots,u_{i,t}\right).$$
Observe that $p=\frac{1+\epsilon}{d}$ is supercritical for every $H_i(j)$, since $d(H_i(j))\ge d-C\cdot \frac{\epsilon d}{5C}=\left(1-\frac{\epsilon}{5}\right)d$, and so $p \cdot d(H_i(j))\ge 1+\frac{3\epsilon}{5}$, for small enough $\epsilon$. Then, by Lemma \ref{medium comp}, with probability at least $y\left(\frac{3\epsilon}{5}\right)-o(1)$ we have that $v_i(j)$ belongs to a component of order at least $d^k$ in $G_p\cap H_i(j)$, and we note that by $(\ref{survival prob})$, we have that $y\left(\frac{3\epsilon}{5}\right)-o(1)>\epsilon$ for small enough $\epsilon$. These events are independent for different $j$, and thus by Lemma \ref{chernoff}, with probability at least $1-\exp\left(-\frac{\epsilon^2d}{40C}\right)$, at least $\frac{\epsilon^2d}{40C}$ of the $v_i(j)$ belong to a component of order at least $d^k$.
These events are independent for different $i$, and thus the probability that none of the $u_i$ have at least $\frac{\epsilon^2d}{40C}$ neighbours (in $G$) in components of $G_p$ whose order is at least $d^k$ is at most $\exp\left(-\frac{\epsilon^2dm}{40C}\right)$, as required.
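To illustrate the Chernoff step numerically, the following Monte Carlo sketch (with arbitrary illustrative parameters $\epsilon=0.4$, $d=5000$ and $C=2$, and helper names of our choosing) simulates $\frac{\epsilon d}{10C}=100$ independent events of probability $\epsilon$ each and records how many occur in each trial:

```python
import random

def count_successes(num_events, prob, rng):
    """Simulate num_events independent events, each of which succeeds
    with probability prob, and return the number of successes."""
    return sum(rng.random() < prob for _ in range(num_events))

# Illustrative (not from the paper) parameter choices.
eps, d, C = 0.4, 5000, 2
num_events = round(eps * d / (10 * C))   # eps*d/(10C) = 100 independent events
threshold = eps**2 * d / (40 * C)        # eps^2*d/(40C) = 10 required successes

rng = random.Random(0)                   # fixed seed for reproducibility
trials = [count_successes(num_events, eps, rng) for _ in range(1000)]
```

Each trial has $100$ events of probability $0.4$, so the mean count is $40$; falling below the threshold of $10$ is exponentially unlikely in $d$, matching the shape of the bound in the lemma.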
\end{proof}
Hence, we expect almost all vertices in $G$ to be adjacent to a vertex in a big component in $G_p$. However, even the vertices not adjacent to big components, whose proportion is likely to be small, will typically not be too far from a big component.
\begin{lemma}\label{dense3}
Let $\epsilon>0$ be a small enough constant, let $p=\frac{1+\epsilon}{d}$ and let $k>0$ be an integer. Then, \textbf{whp}, every $v\in V(G)$ is at distance (in $G$) at most $2$ from a component of order at least $d^k$ in $G_p$.
\end{lemma}
\begin{proof}
Fix $u\in V(G)$, and let $u_i$ be the $i$-th coordinate of $u$ for each $i \in [t]$. For each $i \in [t]$ let us choose a neighbour $v_i$ of $u_i$ in $G^{(i)}$. For each $1 \leq \ell \neq m \leq \frac{\epsilon d}{10C}$ let $H_{\ell,m}$ be the projection of $G$ onto the first $\frac{\epsilon d}{10C}$ coordinates, such that the $\ell$-th and $m$-th coordinates of $H_{\ell,m}$ are $\{v_{\ell}\}$ and $\{v_m\}$, respectively, and the $j$-th coordinate is $\{u_j\}$ for all other $j\in \left[ \frac{\epsilon d}{10C}\right]$.
We note that the $H_{\ell,m}$ then form a set of $\binom{\frac{\epsilon d}{10C}}{2}\ge \frac{\epsilon^2d^2}{300C^2}$ pairwise disjoint projections of $G$. Furthermore, every $H_{\ell,m}$ contains a vertex at distance $2$ (in $G$) from $u$, which we denote by $v_{\ell,m}$. Moreover, since every $H_{\ell,m}$ has dimension $t-\frac{\epsilon d}{10C}$, it follows that every $H_{\ell,m}$ is regular with $d\left(H_{\ell,m}\right) \geq d-C\cdot\frac{\epsilon d}{10C}=\left(1-\frac{\epsilon}{10}\right)d$. In particular, $p \cdot d\left(H_{\ell,m}\right) \geq 1+ \frac{4}{5} \epsilon$ for small enough $\epsilon$, and so $p$ is supercritical for every $H_{\ell,m}$.
Hence, by Lemma \ref{medium comp} and (\ref{survival prob}), $v_{\ell,m}$ belongs to a component of $G_p \cap H_{\ell,m}$ of order at least $d^k$ with probability at least $y\left(\frac{4\epsilon}{5}\right)-o(1)\ge \epsilon$, for small enough $\epsilon$. Since the $H_{\ell,m}$ are pairwise disjoint, these events are independent for different $v_{\ell,m}$. Thus, with probability at least $1 - (1-\epsilon)^{\frac{\epsilon^2d^2}{300C^2}} \geq 1-\exp\left(-\frac{\epsilon^3d^2}{10^3C^2}\right)$ we have that at least one of the $v_{\ell,m}$ belongs to a component whose order is at least $d^k$. Since $d=\Theta(\log n)$, we have $\exp\left(-\frac{\epsilon^3d^2}{10^3C^2}\right)=o\left(\frac{1}{n}\right)$ and thus we can complete the proof with a union bound over the $n$ choices for $u$.
\end{proof}
Finally, we can use Lemma \ref{dense1} and Lemma \ref{BFM98} to show that, typically, big components in $G_p$ are dense with respect to connected sets, in the following sense.
\begin{lemma}\label{dense2}
Let $\epsilon>0$ be a small enough constant, let $p=\frac{1+\epsilon}{d}$ and let $k>0$ be an integer. Let $W$ be the set of vertices in $G_p$ belonging to components of order at least $d^k$ and let $C_1>0$ be a constant. Then, \textbf{whp} every connected subset $M \subseteq V(G)$ of size $C_1d$ contains at most $\frac{\epsilon d}{10C}$ vertices $v \in M$ such that $|N_G(v)\cap W|<\frac{\epsilon^2d}{40C}$.
\end{lemma}
\begin{proof}
By Lemma \ref{BFM98}, there are at most $n(ed)^{C_1d}$ connected subsets $M \subseteq V(G)$, and there are at most $\binom{C_1d}{\frac{\epsilon d}{10C}} \leq 2^{C_1d}$ ways to choose a subset $X\subset M$ of size $\frac{\epsilon d}{10C}$. By Lemma \ref{dense1}, the probability that no vertex in $X$ has at least $\frac{\epsilon^2d}{40C}$ neighbours in $W$ is at most $\exp\left(-\frac{\epsilon^3d^2}{400C^2}\right)$.
Hence, by a union bound, the probability that the conclusion of the lemma does not hold is at most
\begin{align*}
n(ed)^{C_1d}2^{C_1d}\exp\left(-\frac{\epsilon^3d^2}{400C^2}\right) =o(1),
\end{align*}
where we used that $d = \Theta(t) = \Theta(\log n)$.
\end{proof}
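To spell out the final estimate in the proof above: writing $d=\alpha\log n$ for a suitable constant $\alpha>0$, the logarithm of the bound is
\begin{align*}
\log n + C_1d\log(ed)+C_1d\log 2 - \frac{\epsilon^3d^2}{400C^2}=O\left(\log n\log\log n\right)-\Theta\left(\log^2 n\right)\longrightarrow -\infty,
\end{align*}
so the bound is indeed $o(1)$.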
\subsection{Proof of Theorem \ref{supercritical regime}}\label{proof}
Throughout this section, we take $C$, $G^{(1)},\ldots, G^{(t)}$, $G$, $\epsilon$, $d$ and $p$ to be as in the statement of Theorem \ref{supercritical regime}, and let $y=y(\epsilon)$ be as defined in (\ref{survival prob}). In order to prove Theorem \ref{supercritical regime}, we will use a multi-round exposure argument.
More precisely, let $p_2=\frac{\epsilon}{2d}$, $p_3=\frac{1}{d^2}$, and let $p_1$ be such that $(1-p_1)(1-p_2)(1-p_3)=1-p$. Note that $p_1=\frac{1+\frac{\epsilon}{2}-o(1)}{d}$. Let $G_{p_1},G_{p_2}$ and $G_{p_3}$ be independent random subgraphs of $G$ and let us write $G_1=G_{p_1}$, $G_2=G_1\cup G_{p_2}$ and $G_3=G_2\cup G_{p_3}$, where we note that $G_3$ has the same distribution as $G_p$. Let $k>0$ be an integer, which we will choose later. We define $W_i=W_i(k)$ (for $i=1,2,3$) to be the set of vertices in components of order at least $d^k$ in $G_i$. We note that $W_1\subseteq W_2\subseteq W_3$.
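As an aside, the algebra of this split is easy to sanity-check numerically. The sketch below (the function name and parameter values are ours, purely for illustration) solves for $p_1$ and confirms $p_1=\frac{1+\frac{\epsilon}{2}-o(1)}{d}$:

```python
def sprinkling_rounds(eps, d):
    """Given p = (1 + eps)/d, fix p2 = eps/(2d) and p3 = 1/d^2 and solve
    (1 - p1)(1 - p2)(1 - p3) = 1 - p for the first-round probability p1."""
    p = (1.0 + eps) / d
    p2 = eps / (2.0 * d)
    p3 = 1.0 / d**2
    p1 = 1.0 - (1.0 - p) / ((1.0 - p2) * (1.0 - p3))
    return p1, p2, p3
```

For example, with the (arbitrary) values $\epsilon=0.2$ and $d=10^4$ one gets $p_1\cdot d\approx 1.0999$, in line with $1+\frac{\epsilon}{2}=1.1$ up to an $o(1)$ term.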
We begin by showing that \textbf{whp}, there are no components of order larger than $O_{\epsilon}(d)$ in $G_p$, which do not meet $W_1$.
\begin{lemma}\label{the gap}
There exists a constant $C_1 = C_1(\epsilon)$ such that \textbf{whp} every component $M$ of $G_p$ with $|M| \ge C_1 d$ intersects with $W_1$.
\end{lemma}
\begin{proof}
We start by exposing $G_{p_1} = G_1$. Let $V_1\subseteq V(G)\setminus W_1$ be the set of all vertices of $G$ that have at least $\frac{\epsilon^2d}{161C}$ neighbours (in $G$) in $W_1$, and let $V_0=V(G)\setminus(V_1\cup W_1)$.
We first note that, by Lemma \ref{dense2} applied to $G_{p_1}$, \textbf{whp} every connected subset of $G$ with at least $C_1d$ vertices contains fewer than $\frac{\epsilon d}{21C}$ vertices in $V_0$. We assume henceforth that this property holds.
We then expose the rest of $G_p$ on $V_0 \cup V_1$. There are at most $n$ components of $G_p[V_0 \cup V_1]$. Given a connected component $M$ of $G_p[V_0\cup V_1]$ of order at least $C_1d$, since $M$ spans a connected subset of $G$, by the above
\[
|M \cap V_1| \geq C_1d - \frac{\epsilon d}{21C} \geq \frac{C_1 d}{2},
\]
if $C_1$ is sufficiently large.
Each vertex in $M \cap V_1$ has at least $\frac{\epsilon^2d}{161C}$ neighbours in $W_1$ and so there is a set $X$ of at least $\frac{\epsilon^2C_1d^2}{330C}$ edges between $M$ and $W_1$. We now expose the edges in $X \cap G_{p_2}$. By Lemma \ref{chernoff}, $X \cap G_{p_2} \neq \emptyset$ with probability at least $1-\exp\left(-\frac{\epsilon^3C_1d}{400C}\right)$. Recalling that $d=\Theta(\log n)$, we let $\alpha$ be the constant such that $d=\alpha\log n$.
Then, for $C_1 \geq \frac{800\cdot C\cdot \alpha}{\epsilon^3}$, with probability $1-o\left(\frac{1}{n}\right)$, $M$ is adjacent to a vertex in $W_1$ in $G_{p_2} \subseteq G_p$. Hence, by a union bound, \textbf{whp} every component of $G_p[V_0 \cup V_1]$ of order at least $C_1d$ is adjacent in $G_p$ to a vertex in $W_1$, from which the statement follows.
\end{proof}
We now fix $k=16$.
\begin{lemma}\label{all merge}
\textbf{Whp}, all the components of $G_2[W_2]$ are contained in a single component of $G_p$.
\end{lemma}
\begin{proof}
We first note that every edge of $G$ belongs to $G_2$ independently with probability $1-(1-p_1)(1-p_2)=\frac{1+\epsilon-o(1)}{d}$. Thus, by Lemma \ref{dense3}, \textbf{whp} every $v\in V(G)$ is at distance at most $2$ from a vertex of $W_2$. We assume henceforth that this property holds.
Suppose that the statement of the lemma does not hold. Then we can partition the components of $G_2[W_2]$ into two families, which we denote by $\mathcal{A}$ and $\mathcal{B}$, such that there are no paths in $G_p$ (and hence in $G_{p_3}$) between $\mathcal{A}$ and $\mathcal{B}$. Denote by $s$ the number of components in $\mathcal{A}\cup \mathcal{B}$, and let $\ell\le \frac{s}{2}$ be the number of components in the smaller family. Observe that both $A=\cup_{C\in \mathcal{A}}V(C)$ and $B=\cup_{C\in \mathcal{B}}V(C)$ contain at least $\ell d^{16}$ vertices.
Now, since every vertex in $G$ is at distance at most $2$ from either $A$ or $B$, we can partition $V(G)$ into two sets $A'$ and $B'$, such that $A'$ contains $A$ and all $v\in V(G)\setminus B$ at distance at most $2$ from $A$, and similarly $B'$ contains $B$ and all $v\in V(G)\setminus A'$ at distance at most $2$ from $B$.
Now, by Theorem \ref{productiso}, the isoperimetric constant $i(G)$ of $G$ is at least $\frac{1}{2}\min_{j\in [t]}i\left(G^{(j)}\right)$. Since for all $j\in[t]$, $1<|V\left(G^{(j)}\right)|\le C$ and $G^{(j)}$ is connected, we have that $i\left(G^{(j)}\right)\ge\frac{1}{C}$ and thus $i(G)\ge \frac{1}{2C}$. It follows that there are at least $\frac{\ell d^{16}}{2C}$ edges between $A'$ and $B'$. Now, since every vertex in $A'$ is at distance at most $2$ from $A$, and similarly every vertex in $B'$ is at distance at most $2$ from $B$, we can extend these edges to a family of $\frac{\ell d^{16}}{2C}$ paths of length at most $5$ between $A$ and $B$. Since $G$ is $d$-regular, every edge participates in at most $5d^4$ paths of length at most $5$.
Thus, we can greedily thin this family to a set of $\frac{\ell d^{16}}{2C\cdot 5\cdot 5d^4}=\frac{\ell d^{12}}{50C}$ edge-disjoint paths of length at most $5$ between $A$ and $B$. The probability that none of these paths is fully contained in $G_{p_3}$ is thus at most
\begin{align*}
\left(1-p_3^5\right)^{\frac{\ell d^{12}}{50C}}\le \exp\left(-\frac{\ell d^{2}}{50C}\right).
\end{align*}
On the other hand, there are at most $n$ components in $G_2[W_2]$ and hence the number of ways to partition these components into two families with one containing $\ell$ components is at most $\binom{n}{\ell}$. Thus, by the union bound, the probability that the statement does not hold is at most
\begin{align*}
\sum_{\ell=1}^{\frac{s}{2}}\binom{n}{\ell}\exp\left(-\frac{\ell d^{2}}{50C}\right)\le \sum_{\ell=1}^{\frac{s}{2}}\left[n\exp\left(-\frac{d^{2}}{50C}\right)\right]^{\ell}=o(1),
\end{align*}
since $d=\Theta(\log n)$, completing the proof.
\end{proof}
We are now ready to prove Theorem \ref{supercritical regime}:
\begin{proof}[Proof of Theorem \ref{supercritical regime}]
By Lemma \ref{big comp}, we have that \textbf{whp} $|W_3|=\left(1+o(1)\right)yn$.
By Lemma \ref{the gap}, \textbf{whp} there are no components in $G_p$ of order larger than $O_{\epsilon}(d)$ which do not intersect with $W_1$. Thus, since every component of $G_p[W_3]$ has size at least $d^{16}$, \textbf{whp} every component of $G_p[W_3]$ meets a component of $G_1[W_1]$, and so (by inclusion) meets a component of $G_2[W_2]$. Hence, by Lemma \ref{all merge}, \textbf{whp} all these components are contained in a single component of $G_p$, which we denote by $L_1$, with $V(L_1)=W_3$.
Finally, once again by Lemma \ref{the gap}, \textbf{whp} there are no components larger than $O_{\epsilon}(d)$ which do not intersect with $W_1 \subset W_3$, and thus by the above \textbf{whp} there are no components besides $L_1$ of order larger than $O_{\epsilon}(d)$.
All in all, \textbf{whp} there are $\left(1+o(1)\right)yn$ vertices in the largest component $L_1$, and all the other vertices are in components whose order is at most $O_{\epsilon}(d)$, as required.
\end{proof}
\section{Discussion and possible future research directions} \label{discussion}
Already in \cite{L22}, Lichev asked whether all of the assumptions in Theorem \ref{product graphs} were necessary to ensure a threshold for the appearance of a linear-sized component. In particular, he asked whether the bound on the maximum degree of the host graphs could be removed, and whether his (relatively mild) isoperimetric requirement on the base graphs could be further relaxed. In a forthcoming paper \cite{DEKK23} we provide a construction answering the former question in the negative, showing that with irregular base graphs, even ones satisfying a relatively strong isoperimetric inequality, the largest component in the supercritical regime can be sublinear in size. However, using some of the tools developed in this paper, we are also able to significantly relax the isoperimetric requirements on the base graphs from a polynomial dependence on the dimension to a superexponential one.
In terms of our Theorem \ref{supercritical regime}, it is also interesting to consider which assumptions are necessary. As mentioned in the introduction, there are examples of product graphs of bounded dimension, for example the $d$-dimensional torus \cite{HH17}, which do not undergo a phase transition quantitatively similar to that of $G(d+1,p)$.
In terms of our other two assumptions, that the base graphs are regular and of bounded size, in \cite{DEKK23} we also give an example to show that the regularity assumption is necessary to a certain extent --- if we take our base graphs to be stars, and so quite irregular, then the component behaviour can be quite different, with polynomially sized components appearing already in the subcritical regime. It remains an interesting open question as to whether we can relax the assumption that the base graphs have bounded size to that of just bounded regularity, or even weaker to some assumption of \emph{almost-regularity} of the product graph, under some mild isoperimetric assumptions.
\begin{question}
Let $G$ be a high-dimensional product graph all of whose base graphs are connected and let $d$ be the average degree of $G$. Let $\epsilon > 0$ and let $p=\frac{1+\epsilon}{d}$. What assumptions on the degree distributions and the isoperimetric constants of the base graphs are sufficient to guarantee that $G_p$ exhibits the Erd\H{o}s-R\'{e}nyi component phenomenon?
\end{question}
More generally, it would be interesting to investigate further which other classes of graphs exhibit the Erd\H{o}s-R\'enyi component phenomenon under percolation, and to find the limits of the universality of this phenomenon.
We note that the isoperimetric inequality in Theorem \ref{productiso} is far from optimal in the case of the hypercube. Indeed, whilst Theorem \ref{productiso} is asymptotically tight for linear sized sets, which can have constant edge-expansion, a well-known result of Harper \cite{H64} implies that smaller sets in $Q^d$ have almost optimal edge-expansion, expanding by a factor of almost $d$. In the presence of such a strong isoperimetric inequality, it is relatively simple to show the existence of a gap in the sizes of the components in $Q^d_p$ using a first moment argument, which implies any component of polynomial (in $d$) order must in fact have order $O(d)$ (see \cite{BKL92}).
We believe that a qualitatively similar isoperimetric inequality should hold for high-dimensional product graphs (with regular base graphs of bounded order), which would lead to a shorter proof of Theorem \ref{supercritical regime}, but one that is less adaptable to other models, such as site percolation.
\begin{conjecture}
Let $G$ be a high-dimensional product graph all of whose base graphs are connected, regular and of bounded size and let $d$ be the degree of $G$. If $X \subseteq V(G)$ is such that $\log |X| \ll d$, then $e(X,X^c) = (1-o(1))d|X|$.
\end{conjecture}
Since we have seen that percolated high-dimensional product graphs exhibit the Erd\H{o}s-R\'enyi phenomenon, a natural question is to ask what other typical properties of the supercritical binomial random graph are also present in these percolated product graphs, and in particular in their giant components. For many natural properties, tight bounds are not yet known even in the case of the hypercube, see \cite{EKK22} for a recent work by the last three authors on the expansion properties of the giant component in the percolated hypercube, and its consequences for the structure of the giant component. We note that a strong isoperimetric inequality for high-dimensional product graphs as discussed in the previous paragraph would likely be a key tool in future research into the combinatorial structure of the giant component in these models. It would also be interesting to study the behaviour of percolation on such graphs close to the critical point, in particular whether they exhibit mean-field behaviour in terms of the width of the critical window, a result which was only recently shown to hold in the hypercube in a breakthrough result of van der Hofstad and Nachmias \cite{HN14}.
Finally, whilst the Cartesian product is perhaps a natural product in the context of percolation, there are many other types of graph products, such as strong products and tensor products, and it would be interesting to know if `high-dimensional' graphs with respect to these types of products also exhibit similar behaviour under percolation. As a concrete example, since the $d$-fold tensor product of a single edge is disconnected and the $d$-fold strong product of a single edge is complete, it is perhaps interesting to consider percolation in the tensor and strong products of many small cycles. Let us write $T(k,d)$ for the $d$-fold tensor product of the cycle $C_k$ and similarly $S(k,d)$ for the $d$-fold strong product. Note that $T(k,d)$ is $2^d$-regular and $S(k,d)$ is $(2^d + 2d)$-regular.
\begin{question}
Do the percolated subgraphs $T(3,d)_p$ and $S(3,d)_p$ exhibit the Erd\H{o}s-R\'{e}nyi component phenomenon at the critical point $p=2^{-d}$?
\end{question}
\bibliographystyle{plain}
% Source metadata: arXiv:2209.03722 [math.CO, math.PR], ``Percolation on High-dimensional Product Graphs'', September 2022.
% https://arxiv.org/abs/1602.03865
\title{Higher signature Delaunay decompositions}
\begin{abstract}
A Delaunay decomposition is a cell decomposition in $\mathbb R^d$ for which each cell is inscribed in a Euclidean ball which is empty of all other vertices. This article introduces a generalization of the Delaunay decomposition in which the Euclidean balls in the empty ball condition are replaced by other families of regions bounded by certain quadratic hypersurfaces. This generalized notion is adaptable to geometric contexts in which the natural space from which the point set is sampled is not Euclidean, but rather some other flat semi-Riemannian geometry, possibly with degenerate directions. We prove the existence and uniqueness of the decomposition and discuss some of its basic properties. In the case of dimension $d = 2$, we study the extent to which some of the well-known optimality properties of the Euclidean Delaunay triangulation generalize to the higher signature setting. In particular, we describe a higher signature generalization of a well-known description of Delaunay decompositions in terms of the intersection angles between the circumscribed circles.
\end{abstract}
\section{Introduction}
Let $X$ be a finite set of points in Euclidean space $\mathbb R^d$ of dimension $d \geq 2$. A cell decomposition of the convex hull $\mathrm{CH}(X)$ of $X$ is called a \emph{Delaunay decomposition} for $X$ if it satisfies the \emph{empty ball condition:} each cell of the decomposition is inscribed in a unique round ball which does not contain any points of $X$ in its interior. A classical theorem of Delaunay \cite{del_sur} states that if $X$ is in generic position (meaning that no $d+1$ points lie in an affine hyperplane and no $d+2$ points lie on any sphere), then there exists a unique Delaunay decomposition for $X$, all of the cells of which are simplices.
This article introduces a generalization of the Delaunay decomposition in which the balls in the empty ball condition above are replaced by other families of regions bounded by certain quadratic hypersurfaces. This generalized notion of Delaunay decomposition is adaptable to geometric contexts in which the natural space from which the point set $X$ is sampled is not Euclidean, but rather some other flat semi-Riemannian geometry, possibly with degenerate directions. For example, in the Minkowski plane $\mathbb R^{1,1}$, we replace Euclidean circles with the hyperbolas of constant Minkowski distance from a point. In the degenerate plane $\mathbb R^{1,0,1}$, thought of as a rescaled limit of the Euclidean plane in which one direction has become infinitesimal, we replace Euclidean circles with their rescaled limits, namely parabolas.
A quadratic form $Q$ on $\mathbb R^d$ defines a natural geometric structure on $\mathbb R^d$. In the case that $Q$ is non-degenerate,
we simply use $Q$ in place of the Euclidean norm to measure distances (possibly imaginary), angles, and other geometric quantities. The points within a constant $Q$-distance of a fixed point are called $Q$-balls and we substitute these in place of Euclidean balls to define a natural notion of a $Q$--Delaunay triangulation. In the case that $Q$ has some degenerate directions, we study a natural geometry which treats the degenerate directions as infinitesimal and define an appropriate notion of $Q$-ball which is slightly more subtle than in the non-degenerate case. Again we define a Delaunay decomposition in terms of an appropriate empty ball condition. We stress that in the case that $Q$ is not positive (or negative) definite, $Q$-Delaunay decompositions are not in general obtained via projective transformations from Euclidean ones.
In Section~\ref{classic} we extend a classical proof of the existence and uniqueness of Delaunay decompositions in the Euclidean case to the current setting under the necessary assumption that the point set is in {\em spacelike position} with respect to $Q$, meaning that any two points are positive real distance apart.
\begin{introthm}\label{thm:Del-gen}
Let $Q$ be a quadratic form on $\mathbb R^d$ and let $X\subset {\mathbb R}^d$ be a finite set in spacelike position with respect to $Q$ and in generic position (meaning that no $d+1$ points lie in an affine hyperplane and no $d+2$ points lie on any $Q$--sphere). Then there exists a unique Delaunay decomposition for $X$. We will denote this decomposition by $\mathrm{Del}_{Q}(X)$.
\end{introthm}
In Section \ref{quad} we will review some details on quadratic forms on ${\mathbb R}^d$ and their associated geometry. In the case that the quadratic form $Q$ is not degenerate, $\mathrm{Del}_{Q}(X)$ is naturally dual to a generalized Voronoi decomposition, the vertices of which are the centers of the empty $Q$--balls of $\mathrm{Del}_Q(X)$ (Proposition \ref{Q-Voronoi}).
The case that $Q$ is degenerate is more delicate. In this case it is natural to split $\mathbb R^d = \mathbb R^m \oplus \mathbb R^{d-m}$ as the sum of a non-degenerate subspace $\mathbb R^m$ complementary to the degenerate subspace $\mathbb R^{d-m}$. The associated geometry that we study treats the degenerate directions as infinitesimal, so that a point $(p_0,v)$ in this geometry is thought of as a point $p_0$ in the non-degenerate subspace $\mathbb R^m$ plus infinitesimal information $v \in \mathbb R^{d-m}$ describing the normal derivative of a path $p_t$ leaving the subspace $\mathbb R^m$ going into $\mathbb R^d$. We show in Theorem \ref{thm:rescaled} that the Delaunay decomposition $\mathrm{Del}_Q(X)$ in this case predicts, for short time $t > 0$, the combinatorics of the path of Delaunay decompositions $\mathrm{Del}_{Q'}(X_t)$ for points $X_t$ whose first order data agrees with that determined by $X$ and with respect to a non-degenerate quadratic form $Q'$ obtained by making the infinitesimal directions finite.
In Section \ref{d2}, we study the extent to which some of the well-known optimality properties of the Euclidean Delaunay triangulation generalize to the non-Euclidean setting in the case of dimension $d = 2$. There are essentially two non-Euclidean quadratic forms $Q$, one Lorentzian (signature $(1,1)$) and one degenerate. In the first case, the $Q$-circles are hyperbolas and in the second case they are parabolas.
As in the Euclidean case, in either of these geometries weights may be assigned to the edges of a triangulation by summing the angles at opposite vertices, or alternatively by measuring the angle of intersection of the circumscribed $Q$--circles containing a given edge. Using recent results~\cite{dan_pol} about three-dimensional ideal polyhedra in constant negative curvature geometries, in Theorem \ref{thm:prescribe-angles} we characterize the possible edge weights that may occur for a $Q$--Delaunay triangulation in either case. In addition, using a generalized Thales' Theorem, Theorem \ref{thm:angle-optimization} proves that $Q$--Delaunay triangulations are optimally fat: they maximize the angle sequence (ordered lexicographically) over all triangulations as in the Euclidean case.
In Section \ref{sc:5}, we briefly outline some possible applications of the higher signature
Delaunay decompositions developed here, and mention a few open questions.
\section{Classical Delaunay decomposition and its generalizations}\label{classic}
Recall that a ball is defined in terms of the Euclidean norm, $\|x\|^2 = x^Tx$. Specifically, the ball of radius-squared $D\in \mathbb R$ and center $p \in \mathbb R^d$ is the set of points $x \in \mathbb R^d$ satisfying the inequality:
\begin{align}
\|x - p\|^2 &\leq D \label{eqn:ball1}.
\end{align}
This may be expressed as the equivalent condition that:
\begin{align}
\|x\|^2 &\leq \varphi(x) + D', \label{eqn:ball2}
\end{align}
where $\varphi: \mathbb R^d \to \mathbb R$ is the linear functional defined by $\varphi(x) = 2p^Tx$ and the constant $D' = D - \|p\|^2$. In other words, a ball is exactly the set of points in $\mathbb R^d$ whose image in $\mathbb R^{d+1}$ on the graph of the function $\|\cdot \|^2$ lies below some affine hyperplane.
The norm-squared function $x \mapsto \|x\|^2 = x^Tx$ is one example of a \emph{quadratic form}, which we define here as any function $Q: \mathbb R^d \to \mathbb R$ of the form $$Q(x) = x^T A x,$$ where $A$ is some symmetric $d \times d$ matrix. If the matrix $A$ is non-singular, then we call $Q$ {\it non-degenerate}, and if the matrix $A$ is positive definite (respectively indefinite), then we call $Q$ {\it positive
definite} (respectively {\it indefinite}). Up to transforming $\mathbb R^d$ by a linear isomorphism, a quadratic form $Q$ is determined entirely by the number of positive, negative, and zero eigenvalues of~$A$. This data is referred to as the signature of $Q$.
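In computational terms, evaluating a quadratic form and reading off its signature is elementary; the following Python sketch (our own illustration using NumPy; the helper names are ours, not from the text) computes $Q(x) = x^TAx$ and counts positive, negative, and zero eigenvalues of $A$.

```python
import numpy as np

def Q(A, x):
    """Evaluate the quadratic form x^T A x for a symmetric matrix A."""
    x = np.asarray(x, dtype=float)
    return float(x @ A @ x)

def signature(A, tol=1e-12):
    """Return (#positive, #negative, #zero) eigenvalues of the symmetric matrix A."""
    w = np.linalg.eigvalsh(A)
    return (int(np.sum(w > tol)), int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

# The standard Minkowski form x1^2 - x2^2 has signature (1, 1, 0):
A = np.diag([1.0, -1.0])
assert Q(A, [3.0, 2.0]) == 5.0            # 9 - 4
assert signature(A) == (1, 1, 0)

# A degenerate form x1^2 on R^2 has signature (1, 0, 1):
assert signature(np.diag([1.0, 0.0])) == (1, 0, 1)
```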
Let $Q$ be a non-degenerate quadratic form, possibly indefinite, on $\mathbb R^d$. The term \emph{$Q$--ball} refers to any region defined by the inequality
\begin{align}\label{eqn:Qball1}
Q(x-p) \leq D,
\end{align}
where $p \in \mathbb R^d$ and $D \in \mathbb R$ (possibly negative). Equivalently, a $Q$--ball is any region defined by the inequality
\begin{align}\label{eqn:Qball2}
Q(x) \leq \varphi(x) + D',
\end{align}
where $\varphi: \mathbb R^d \to \mathbb R$ is any linear functional and $D' \in \mathbb R$ is any constant.
Of course, Inequality~\eqref{eqn:Qball1} may be described in the form~\eqref{eqn:Qball2} by simply taking $\varphi(x) = 2p^TAx$ and $D' = D - Q(p)$. However, solving for $p, D$ in terms of $\varphi, D'$ in order to deduce inequality~\eqref{eqn:Qball1} from~\eqref{eqn:Qball2}
requires the quadratic form $Q$ (that is, the matrix $A$) to be non-degenerate. Hence, Inequalities~\eqref{eqn:Qball1} and~\eqref{eqn:Qball2} define different families of regions in the case that $Q$ is degenerate. It turns out that the regions defined by Inequality~\eqref{eqn:Qball2} define a much more interesting theory of Delaunay decomposition in this case.
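The translation between the two descriptions can be checked numerically for a non-degenerate form. The minimal Python sketch below (our own illustration, using the Minkowski form on $\mathbb R^2$) verifies at random sample points that $Q(x-p) \leq D$ holds exactly when $Q(x) \leq \varphi(x) + D'$, with $\varphi(x) = 2p^TAx$ and $D' = D - Q(p)$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([1.0, -1.0])            # Minkowski form on R^2 (non-degenerate)
Q = lambda x: float(x @ A @ x)

p = np.array([0.5, -0.3])           # center
D = 2.0                             # "radius squared" (may be any real)
phi = lambda x: float(2.0 * p @ A @ x)
Dp = D - Q(p)                       # D' = D - Q(p)

# Membership in the two descriptions agrees at random sample points:
for _ in range(1000):
    x = rng.uniform(-5, 5, size=2)
    assert (Q(x - p) <= D) == (Q(x) <= phi(x) + Dp)
```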
\begin{Definition}[$Q$--ball]\label{def:Qball}
Let $Q$ be any quadratic form (possibly degenerate) on $\mathbb R^d$. The term \emph{$Q$--ball} refers to the region defined by Inequality~\eqref{eqn:Qball2}, where $\varphi \co \mathbb R^d \to \mathbb R$ is a linear functional and $D' \in \mathbb R$ is a constant. The boundary of a $Q$--ball is called a \emph{$Q$--sphere}.
\end{Definition}
\noindent We will see in Section~\ref{sec:degen} that a degenerate form $Q$ defines a special geometry in which the degenerate directions of $Q$ are treated as infinitesimal and we think of $Q$ as a degeneration of a family of non-degenerate quadratic forms $Q_t'$ as $t \to 0$. Under that interpretation, $Q$--balls may be thought of as limits of $Q'_t$-balls with center escaping to infinity as $t \to 0$.
In order to obtain a sensible theory of Delaunay decompositions for an indefinite or degenerate quadratic form $Q$, we must restrict the point sets under consideration. We make the following definition:
\begin{Definition}
Let $Q$ be a quadratic form on $\mathbb R^d$ (possibly indefinite and/or degenerate). A finite set of points $X$ in $\mathbb R^d$ is said to be in \emph{spacelike position} with respect to $Q$ if, for any distinct points $x,y \in X$, the positivity condition $Q(x - y) > 0$ holds.
\end{Definition}
Replacing the notion of ball with that of $Q$--ball in the definition of the classical Delaunay decomposition yields the following generalization.
\begin{Definition}[$Q$--Delaunay decomposition]\label{def:Del-gen}
Let $Q$ be any quadratic form on $\mathbb R^d$ and let $X$ be a finite set of points in $\mathbb R^d$ in spacelike position with respect to $Q$. A cell decomposition of the convex hull $\mathrm{CH}(X)$ of $X$ with $0$-skeleton $X$ is called a \emph{$Q$--Delaunay decomposition} if it satisfies the \emph{empty $Q$--ball condition:} each cell of the decomposition is inscribed in a unique $Q$--ball which does not contain any points of $X$ in its interior. We will call the boundary of such a $Q$--ball an {\em empty $Q$--sphere}.
\end{Definition}
\begin{figure}
\includegraphics[width = 5.0in]{Delaunay-3types.pdf}
\captionsetup{singlelinecheck=off}\caption[An example of the $Q$--Delaunay decomposition of five points in spacelike general position in the case that:]{An example of the $Q$--Delaunay decomposition of five points in spacelike general position in the case that:
\begin{itemize}
\item (left) $Q(x_1,x_2) = x_1^2 + x_2^2$ is the standard Euclidean norm, and so the $Q$--balls are bounded by circles;
\item (middle) $Q(x_1, x_2) = x_1^2 -x_2^2$ is the standard Minkowski norm, and so the $Q$--balls are bounded by hyperbolas;
\item (right) $Q(x_1,x_2) = x_1^2$ is a degenerate norm, in which case the $Q$--balls are bounded by parabolas.
\end{itemize}
}
\end{figure}
\begin{figure}
\centering
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width = 7cm]{SDelaunay.pdf}
\end{minipage}\hfill
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width = 7cm]{HDelaunay.pdf}
\end{minipage}
\caption{The $Q$-Delaunay decompositions of the same 7 points when $Q$ is the Euclidean
(left) and the Minkowski (right) bilinear form on ${\mathbb R}^3$. Only some of the $Q$-balls
have been drawn.}
\end{figure}
We now prove Theorem \ref{thm:Del-gen}. The proof is adapted from a well-known proof in the case that $Q$ is the standard Euclidean norm, see \cite{bro_vor}.
\begin{proof}[Proof of Theorem \ref{thm:Del-gen}]
Given the quadratic form $Q$ on ${\mathbb R}^d$, we consider the quadratic form $\widehat Q$ on $\mathbb R^{d+2}$ defined by $$\widehat Q(x_1, \ldots, x_d, x_{d+1}, x_{d+2}) := Q(x_1, \ldots, x_d) -x_{d+1}x_{d+2}.$$
Associated to $\widehat Q$, we define the open subset $\mathbb X \subset \mathbb{RP}^{d+1}$ as follows:
$$\mathbb X = \{x \in \mathbb{R}^{d+2}\setminus\{0\} \mid \widehat Q(x) < 0\}/{\mathbb R}^*,$$
that is, the set of negative lines in $\mathbb R^{d+2}$ with respect to $\widehat Q$. Then $\mathbb X$ is a model for semi-Riemannian geometry, possibly with degenerate directions, associated with $\widehat Q$, with constant negative sectional curvature. In the affine chart $x_{d+2} =1$, the ideal boundary of $\mathbb X$ is described by the equation $$x_{d+1} = Q(x_1, \ldots, x_d),$$ i.e. the graph $\mathscr P$ in $\mathbb R^{d+1}$ of $Q$. The projection map $\varpi: \mathbb R^{d+1} \to \mathbb R^d$ onto the first $d$ coordinates, defined by $$\varpi(x, x_{d+1}) = x,$$ restricts to a homeomorphism $\mathscr P \to \mathbb R^d$ which, in the case that $Q$ is the usual Euclidean norm-squared, is usually called \emph{stereographic projection}.
Now, consider the convex hull $\mathscr C$ of the inverse image $X' = \varpi^{-1}(X)$ of $X$ on~$\mathscr P$. $\mathscr C$ may be regarded as a convex polyhedron in $\mathbb X$ whose vertices lie on the ideal boundary $\partial \mathbb X$. By the genericity assumption, $\mathscr C$ is a $(d+1)$-dimensional polyhedron whose boundary is a union of $d$-dimensional simplices which come in two types, called \emph{bottom} and \emph{top}. A face $f$ on the boundary $\partial \mathscr C$ of $\mathscr C$ is a bottom face if its unit normal, directed toward the interior of $\mathscr C$, has positive $x_{d+1}$ component. A face $f$ is a top face if its inward directed unit normal has negative $x_{d+1}$ component. The unit normal to a face never has zero $x_{d+1}$ component by the genericity assumption.
\begin{figure}[h]
{
\centering
\def6.0cm{6.0cm}
\input{convex-hull.pdf_tex}
\def6.0cm{6.0cm}
\input{projection-to-DelQ.pdf_tex}
}
\caption{To find the $Q$--Delaunay decomposition $\mathrm{Del}_Q(X)$, first lift $X$ to its image $X'$ on the graph of $Q$ and take the convex hull $\mathscr C$ of $X'$ (left panel). Then project the bottom faces $\partial \mathscr C_-$ down to $\mathbb R^d$ (right panel).}
\end{figure}
We claim that $\varpi$ restricts to a homeomorphism from the union $\partial \mathscr C_-$ of the bottom faces of $\partial \mathscr C$ to the convex hull $\mathrm{CH}(X)$ of $X$ in $\mathbb R^d$ and that $\varpi$ takes the faces of $\partial \mathscr C_-$ to the cells of the unique $Q$--Delaunay decomposition for $X$. To see this, observe that the condition that all points $(x, Q(x))$ of $X'$ lie (weakly) above the affine hyperplane spanned by some face $f$ of $\partial \mathscr C_-$ is described by an inequality of the form
$$Q(x) \geq \varphi(x) + D'$$
for some linear functional $\varphi$ and constant $D'$, with equality exactly when $(x, Q(x))$ is a vertex of $f$. Hence, the cell decomposition of $\mathbb R^d$ defined by projection of the cell structure of $\partial \mathscr C_-$ satisfies that the vertices of any face $\varpi(f)$ lie on a $Q$--sphere $S_f$ which bounds a $Q$--ball $B_f$ whose interior does not contain points of $X$.
In the case that $Q$ is positive definite the proof, which up to this point is well-known, is complete.
There is, however, one subtle point to check if $Q$ is not positive definite, which is the case of interest in this article. Definition~\ref{def:Del-gen} requires that the simplex $\varpi(f)$ be \emph{inscribed} in $B_f$: so we must check that $\varpi(f)$ is actually contained in~$B_f$. This will follow from the assumption that $X$ is in spacelike position. Let $\langle \cdot, \cdot \rangle_Q $ denote the inner product associated to $Q$, defined by $\langle x, y\rangle_Q = x^T Ay$ for all $x,y \in \mathbb R^d$.
Then
\begin{equation}\label{eqn:useful}
Q(x - y) > 0 \iff 2\langle x, y\rangle_Q < Q(x) +Q(y).
\end{equation}
Let $x_1',\ldots,x_m'$ be the vertices of the face $f$, where $x_k' = (x_k, Q(x_k))$ for each $k \in \{1, \ldots, m\}$. A typical point of $\varpi(f)$ is a convex combination of the form $y = t_1x_1 + \cdots + t_mx_m$, where $t_1, \ldots, t_m$ are positive coefficients such that $t_1 + \cdots + t_m = 1$.
Then
\begin{align*}
Q(y) &= \left\langle \sum_{i=1}^m t_i x_i, \sum_{j=1}^m t_jx_j\right\rangle_Q \\
&= \sum_{i=1}^m t_i^2 Q(x_i) + 2\sum_{j < k} t_jt_k\langle x_j, x_k\rangle_Q\\
&\leq \sum_{i=1}^m t_i^2 Q(x_i) + \sum_{j < k} t_jt_k(Q(x_j) + Q(x_k))\\
& = \sum_{i=1}^m t_i(t_1 + \cdots +t_m) Q(x_i) = \sum_{i=1}^m t_i Q(x_i)\\
&= \sum_{i=1}^m t_i (\varphi(x_i) + D') = \varphi(y) + D'.
\end{align*}
The hypothesis that the points $x_1, \ldots, x_m$ are in spacelike position was applied (via~\eqref{eqn:useful}) in the third line. Note that equality holds if and only if $y = x_k$ for some $k \in \{1, \ldots, m\}$. It follows that $\varpi(f) \subset B_f$ with $\varpi(f) \cap S_f = \{x_1, \ldots, x_m\}$, that is, $\varpi(f)$ is inscribed in $B_f$. The same ideas can be used to prove uniqueness.
\end{proof}
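The lifting construction in this proof translates directly into a naive algorithm: lift each point $x$ to $(x, Q(x))$ and keep exactly those simplices over whose supporting plane no other lifted point lies strictly below. The following Python sketch (our own illustration, not from the text; brute force, $O(n^4)$ in the plane, assuming spacelike generic position) implements the empty-$Q$-ball test via the $3\times 3$ determinant obtained by subtracting the lifted test point from the lifted vertices. For the Euclidean form this is the classical in-circle predicate.

```python
import numpy as np
from itertools import combinations

def q_delaunay_triangles(pts, Q):
    """Brute-force Q-Delaunay triangulation of planar points in spacelike,
    generic position: keep a triple iff no other lifted point (x, Q(x))
    lies strictly below the plane through its three lifted points."""
    pts = [np.asarray(p, dtype=float) for p in pts]
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    tris = []
    for i, j, k in combinations(range(len(pts)), 3):
        a, b, c = pts[i], pts[j], pts[k]
        if orient(a, b, c) < 0:      # orient (a, b, c) counter-clockwise
            b, c = c, b
        def below(p):
            # det > 0 iff the lift of p lies strictly below the plane
            # through the lifts of the CCW triangle (a, b, c).
            M = np.array([[*(a - p), Q(a) - Q(p)],
                          [*(b - p), Q(b) - Q(p)],
                          [*(c - p), Q(c) - Q(p)]])
            return np.linalg.det(M) > 1e-12
        if not any(below(pts[m]) for m in range(len(pts)) if m not in (i, j, k)):
            tris.append((i, j, k))
    return tris

# Euclidean form: one point inside the triangle of the other three -> 3 cells.
Qe = lambda x: float(x[0]**2 + x[1]**2)
assert len(q_delaunay_triangles([(0, 0), (2, 0), (1, 2), (1, 0.5)], Qe)) == 3

# Minkowski form: 4 spacelike points in convex position -> 2 cells.
Qm = lambda x: float(x[0]**2 - x[1]**2)
assert len(q_delaunay_triangles([(0, 0), (1, 0.2), (2, -0.1), (3, 0.1)], Qm)) == 2

# Degenerate form (parabolic Q-spheres): again 4 points in convex position.
Qd = lambda x: float(x[0]**2)
assert len(q_delaunay_triangles([(0, 0), (1, 5), (2, -3), (3, 1)], Qd)) == 2
```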
\begin{remark}\label{full}
Note that $\varpi$ also restricts to a homeomorphism from the union $\partial \mathscr C_+$ of the top faces of $\partial \mathscr C$ to the convex hull $\mathrm{CH}(X)$ of $X$ in $\mathbb R^d$, and that $\varpi$ takes the faces of $\partial \mathscr C_+$ to the cells of the unique decomposition of $X$ satisfying the {\it full $Q$--ball condition}: each cell of the decomposition is inscribed in a unique $Q$--ball which contains all the points of $X$ in its closure. We will call the boundary of such a $Q$--ball a {\em full $Q$--sphere}.
\end{remark}
\begin{remark}
In the familiar case that $Q$ is the Euclidean norm, the surface $\mathscr P$ in the proof above may be naturally interpreted as the ideal boundary of the hyperbolic space ${\mathbb H}^{d+1}$ with one point removed. The convex hull $\mathscr C$ of $X'$ is an ideal polyhedron in ${\mathbb H}^{d+1}$, the geometry of which gives information about the geometry of the Delaunay decomposition of $X$. In the general case a similar interpretation is possible. If $Q$ is non-degenerate of signature $(p,q)$, then $\mathscr P$ identifies with (a dense subset of) the ideal boundary of the negatively curved semi-Riemannian space ${\mathbb H}^{p+1,q}$ of signature $(p+1,q)$ and constant negative curvature. If $Q$ is degenerate, then $\mathscr P$ may be interpreted as (a dense subset of) the boundary of a degenerate geometry obtained from some ${\mathbb H}^{p+1,q}$ as a limit. See Section~\ref{sec:Hpq}.
\end{remark}
\section{The geometry associated to a quadratic form}\label{quad}
\subsection{Non-degenerate quadratic forms}\label{sec:non-degen}
\subsubsection{Flat semi-Riemannian geometry}
Throughout this section, we assume $Q$ is a non-degenerate quadratic form on $\mathbb R^d$. The numbers $p$ and $q$ of positive and negative eigenvalues of the matrix $A$ of $Q$
satisfy that $p+q =d$. The pair $(p,q)$ is called the signature of $Q$.
After changing coordinates by a linear transformation, we may assume that $Q$ is
the standard quadratic form of signature $(p,q)$, meaning its matrix $A = I_p \oplus -I_q$
is simply the diagonal matrix with $p$ $(+1)$'s followed by $q$ $(-1)$'s on the diagonal.
The inner product $\langle \cdot, \cdot \rangle_Q$ induced by $Q$ is given by the simple
expression
$$\langle x , y \rangle_Q = x_1y_1 + \cdots + x_p y_p - x_{p+1} y_{p+1} - \cdots -x_{d} y_{d}~.$$
We assume that $p,q>0$, since the case $q = 0$ is the classical Euclidean case and the case $p = 0$ is uninteresting (there are no point sets $X$ in spacelike position when $|X| > 1$).
The inner product $\langle \cdot, \cdot \rangle_Q$ defines, by parallel translation, a flat semi-Riemannian metric on the affine space of $\mathbb R^d$. This space, together with this metric, is denoted by $\mathbb R^{p,q}$. It is the simply connected model for flat semi-Riemannian geometry of signature $(p,q)$. The isometry group of $\mathbb R^{p,q}$ is precisely $$\operatorname{Isom}(\mathbb R^{p,q}) = \operatorname{Aff}(\operatorname{O}(Q)) = \operatorname{O}(p,q) \ltimes \mathbb R^d,$$
where $\operatorname{O}(Q) = \operatorname{O}(p,q)$ denotes the linear transformations of $\mathbb R^d$ preserving $Q$.
Of course, any natural construction in flat semi-Riemannian geometry of signature $(p,q)$ should be invariant under this group.
\begin{Proposition}\label{invariance1}
The set of $Q$--balls is invariant under $\operatorname{Isom}(\mathbb R^{p,q})$, hence so is the notion of $Q$--Delaunay decomposition: if $f \in \operatorname{Isom}(\mathbb R^{p,q})$ and $X \subset \mathbb R^{p,q}$ is in spacelike and generic position, then $\mathrm{Del}_Q(f(X))=f(\mathrm{Del}_Q(X))$.
\end{Proposition}
\noindent In Section \ref{mobius}, we will show that in fact the $Q$--Delaunay decomposition is invariant under a larger set of transformations, specifically a natural subset (depending on $X$) of M\"obius transformations (Theorem~\ref{Mob}).
The quadratic form $Q$ may be used to measure lengths, just as in the Euclidean setting, with some minor differences.
First, note that the $Q$--distance-squared $Q(x-y)= \langle x-y, x-y \rangle_Q$ between two distinct points $x, y \in \mathbb R^d$ may be positive, negative, or zero. Borrowing terminology from the Lorentzian setting ($p = d-1$, $q=1$), we call the displacement between $x$ and $y$ \emph{spacelike} if $Q(x - y) > 0$, \emph{timelike} if $Q(x-y) < 0$, or \emph{lightlike} if $Q(x-y) = 0$. When $Q(x-y) < 0$, one may think of the distance between $x$ and $y$ as an imaginary number; however, it is more convenient to simply work with $Q(x-y)$ rather than its square root.
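This trichotomy is immediate to compute; the short sketch below (our own illustration, for the Lorentzian form $Q(x_1,x_2) = x_1^2 - x_2^2$; the function name is ours) classifies displacements accordingly.

```python
import numpy as np

A = np.diag([1.0, -1.0])              # signature (1,1): p = q = 1
Q = lambda v: float(v @ A @ v)

def displacement_type(x, y):
    """Classify x - y as 'spacelike', 'timelike' or 'lightlike' w.r.t. Q."""
    q = Q(np.asarray(x, float) - np.asarray(y, float))
    if q > 0:
        return "spacelike"
    return "timelike" if q < 0 else "lightlike"

assert displacement_type((2.0, 0.0), (0.0, 1.0)) == "spacelike"   # Q = 4 - 1 = 3
assert displacement_type((0.0, 2.0), (1.0, 0.0)) == "timelike"    # Q = 1 - 4 = -3
assert displacement_type((1.0, 1.0), (0.0, 0.0)) == "lightlike"   # Q = 1 - 1 = 0
```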
\subsubsection{$Q$--Voronoi decompositions}
As in the classical setting, the $Q$--Delaunay decomposition is dual to an analogue of the Voronoi decomposition, which we call the $Q$--Voronoi decomposition.
\begin{Definition}
Let $X \subset \mathbb R^d$ be finite and let $Q$ be a non-degenerate quadratic form. Then for each $x \in X$, we define the Voronoi region of $x$ to be $$\mathrm{Vor}_Q(x) := \{y \in \mathbb R^d : Q(x-y) \leq Q(x'-y) \text{ for all } x' \in X\}.$$
\end{Definition}
\begin{Proposition}\label{Q-Voronoi}
Suppose $X \subset \mathbb R^d$ is a finite set in spacelike and generic position with respect to $Q$. Then
\begin{enumerate}
\item For each $x \in X$, $\mathrm{Vor}_Q(x)$ is a (possibly unbounded) convex polyhedron,
with timelike codimension 1 faces, containing $x$ in its interior.
\item The collection of $\mathrm{Vor}_Q(x)$ for all $x \in X$ gives a polyhedral decomposition of $\mathbb R^d$ which we call the \emph{$Q$--Voronoi decomposition} with respect to $X$. It will be denoted by $\mathrm{Vor}_Q(X)$.
\item For each facet $f$ of codimension $k$ of $\mathrm{Vor}_Q(X)$, there exist $k+1$ points $p_0, p_1, \ldots, p_k \in X$, distinct from one another and unique up to reordering, such that $$f = \mathrm{Vor}_Q(p_0) \cap \mathrm{Vor}_Q(p_1) \cap \cdots \cap \mathrm{Vor}_Q(p_k).$$
\item The vertices of $\mathrm{Vor}_Q(X)$ are the centers of the empty $Q$--balls of $\mathrm{Del}_Q(X)$.
\end{enumerate}
\end{Proposition}
\begin{remark}
It is also natural to define the {\em inverse Voronoi region} as follows: for each $x \in X$, consider the set $$\mathrm{Vor}^{-1}_Q(x) = \{y \in \mathbb R^d : Q(x-y) \geq Q(x'-y) \text{ for all } x' \in X\}.$$ Then, denoting by $\mathrm{Vor}^{-1}_Q(X)$ the collection of $\mathrm{Vor}^{-1}_Q(x)$ for all $x \in X$, the vertices of $\mathrm{Vor}^{-1}_Q(X)$ are the centers of the full $Q$--balls, as defined in Remark \ref{full}.
\end{remark}
\begin{proof}
Since $X$ is in spacelike position, for all distinct $x, p \in X$,
the set of points $y\in \mathbb R^d$ such that $Q(x-y) \leq Q(p-y)$ is a half-space, bounded by a
timelike hyperplane, containing $x$ in its interior. Therefore, $\mathrm{Vor}_Q(x)$ is the intersection of finitely many
half-spaces (each bounded by a timelike hyperplane), and hence a convex polyhedron
(which might however be unbounded). Moreover $\mathrm{Vor}_Q(x)$ contains $x$ in its interior, and
the first point is proved.
The second point is clear since, by definition, each point $y\in \mathbb R^d$
is contained in at least one of the $\mathrm{Vor}_Q(x)$ for some $x \in X$.
The third point follows from the first point and from the hypothesis that
$X$ is in generic position: given a codimension $k$ facet $f$ of $\mathrm{Vor}_Q(X)$ in the
boundary of $\mathrm{Vor}_Q(p_0)$, for some $p_0 \in X$, there are $k$
other pairwise distinct points $p_1, \cdots, p_k\in X\setminus\{p_0\}$ such that
for all $y\in f$ and for all $j\in \{ 1,\cdots, k\}$, we have $Q(y-p_0)=Q(y-p_j)$.
Moreover there cannot be more than $k$ such points, by the genericity
hypothesis.
The last point follows from the third point because a vertex $v$ is a codimension $d$ facet. Hence, $v$ is $Q$-equidistant to $d+1$ points of $X$ and is further away from the other points.
\end{proof}
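The defining inequality also yields a direct membership test for $Q$--Voronoi regions: a point $y$ lies in $\mathrm{Vor}_Q(x)$ exactly when $x$ minimizes $Q(x'-y)$ over the sites. The sketch below (our own illustration, with the Lorentzian form on the plane and a hypothetical set of sites) checks the spacelike condition and that each site lies in its own region, as in the first point of the proposition.

```python
import numpy as np

A = np.diag([1.0, -1.0])                    # Lorentzian form on R^2
Q = lambda v: float(v @ A @ v)

# Sites in spacelike position: pairwise differences have Q > 0.
sites = [np.array(s) for s in [(0.0, 0.0), (1.0, 0.2), (2.0, -0.1), (3.0, 0.1)]]
for i in range(len(sites)):
    for j in range(i + 1, len(sites)):
        assert Q(sites[i] - sites[j]) > 0

def vor_index(y):
    """Index of the site x minimizing Q(x - y) (membership test for Vor_Q)."""
    return int(np.argmin([Q(x - y) for x in sites]))

# Each site belongs to its own Voronoi region: Q(x - x) = 0 < Q(x' - x).
for i, x in enumerate(sites):
    assert vor_index(x) == i
```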
\begin{Remark}
Note that it is necessary to suppose that $X$ is in spacelike position in order to have $x \in \mathrm{Vor}_Q(x)$.
The hypothesis that $X$ is in generic position, however, is not as essential.
If it is not satisfied, the same convex hull construction naturally defines a $Q$--Delaunay decomposition some of whose cells may not be simplices. The codimension $0$
non-simplicial cells of the decomposition then correspond to empty spheres containing
more than $d+1$ points of $X$. The same phenomenon appears for the usual
Delaunay decomposition in Euclidean space.
\end{Remark}
\subsection{Degenerate quadratic forms}\label{sec:degen}
Consider a time-dependent finite point set $X_t$ in $\mathbb R^d$. That is, for each $t \geq 0$, $X_t = \{p_1(t), \ldots, p_n(t)\}$, where
for each $k \in \{1, \ldots, n\}$, $p_k(t)$ is a smooth path. We assume that at time $t = 0$, all of the points lie in a proper subspace $V \subset \mathbb R^d$ and are in generic position within that subspace. We are interested in the time-dependent Delaunay decomposition of $X_t$, with respect to some non-degenerate quadratic form $Q$ on $\mathbb R^d$ (perhaps the standard Euclidean norm, or perhaps an indefinite form defining a flat semi-Riemannian structure as in the previous section). Of course, at time $t = 0$, the points $X_0$ are not in generic position and the $Q$--Delaunay decomposition is not defined. However, we wish to use infinitesimal data at time $t = 0$ to predict the combinatorics of the Delaunay decomposition for $t > 0$.
Assume henceforth that the subspace $V$ which contains the points at time $t=0$ is non-degenerate with respect to $Q$, and let $U$ denote its $Q$-orthogonal complement. Then, after changing coordinates appropriately, we may assume that $V = \mathbb R^m$ is the coordinate subspace corresponding to the first $m$ coordinates and that $U = \mathbb R^{d-m}$ is the coordinate subspace corresponding to the remaining $d-m$ coordinates, so that the direct sum decomposition $\mathbb R^d = \mathbb R^m \oplus \mathbb R^{d-m}$ is $Q$--orthogonal. In these coordinates we write $p_k(t) = (y_k(t), z_k(t))$ for all $1 \leq k \leq n$.
Consider the set $$X = \left\{ q_k = (y_k(0), z_k'(0)) \mid 1 \leq k \leq n \right\},$$ where $z_k'(0)$ denotes the derivative of $z_k(t)$ at $t = 0$. Let $\mathscr Q$ denote the degenerate quadratic form defined by
$$\mathscr Q (y,z) = Q(y,0)~. $$
\begin{theorem}\label{thm:rescaled}
With notation as above, assume that $ X$ is in spacelike, generic position for~$ \mathscr Q$. Then for all $t> 0$ sufficiently small, the natural map $X_t \to X$ induces a cell-wise homeomorphism taking $\mathrm{Del}_{Q}(X_t)$ to $\mathrm{Del}_{\mathscr Q}( X)$. In other words, for short time, the combinatorics of the $Q$--Delaunay decomposition of $X_t$ agrees with the combinatorics of the $\mathscr Q$-Delaunay decomposition of $X$.
\end{theorem}
The theorem will follow from the next lemma which gives a natural interpretation of $\mathscr Q$-balls as rescaled limits of $Q$--balls under the rescaling maps $L_t: \mathbb R^d \to \mathbb R^d$ defined by $$L_t(y,z) = \left(y, \frac{z}{t}\right).$$ The proof, omitted, is a simple computation.
\begin{lemma}\label{lem:converging-balls}
Let $B$ denote the $ \mathscr Q$-ball defined by the equation $ \mathscr Q(y,z) \leq \varphi(y,z) + D$ for some linear functional $\varphi$ and constant $D$. Let $B_t$ be a family of $Q$--balls, defined for $t > 0$, by the equations $Q(y,z) \leq \varphi_t(y,z) + D_t$ where
\begin{align*}
\lim_{t\to 0} \ \varphi_t\circ L_t^{-1} &= \varphi,\\ \lim_{t \to 0} D_t &= D.
\end{align*}
Then $L_t B_t \to B$ as $t \to 0$.
\end{lemma}
And now the proof of Theorem~\ref{thm:rescaled}.
\begin{proof}[Proof of Theorem~\ref{thm:rescaled}]
Let $I$ denote the set of $d+1$ indices of the points of $X$ lying on the boundary of an empty $\mathscr Q$-ball $$B^I = \{(y,z) \mid \mathscr Q(y,z) \leq \varphi^I(y,z) + D^I\}$$
of $\mathrm{Del}_{{\mathscr Q}}(X)$, where the (unique) linear functional $\varphi^I$ and constant $D^I$ are defined by $$\mathscr Q(y_i,z_i') = \varphi^I(y_i,z_i') + D^I \;\; \forall i \in I.$$ For every sufficiently small $t > 0$, let $\varphi^I_t$ and $D^I_t$ be the unique linear functional and constant defining the $Q$--ball $B^I_t$ containing $(y_i(t), z_i(t))$ in its boundary for all $i \in I$. Then
$$ Q(y_i(t),z_i(t)) = \varphi^I_t(y_i(t), z_i(t)) + D^I_t $$
and therefore
\begin{align*}
Q\left(y_i(t), t \frac{z_i(t)}{t}\right) & = \varphi^I_t\left(y_i(t), t\frac{z_i(t)}{t}\right) + D^I_t \\ &= \varphi^I_t \circ L_t^{-1}\left(y_i(t), \frac{z_i(t)}{t} \right) + D^I_t~.
\end{align*}
Taking the limit as $t \to 0$ we find that
$$ Q(y_i(0), 0) = \lim_{t \to 0} \left(\varphi^I_t \circ L_t^{-1}\right)(y_i(0), z'_i(0)) + \lim_{t \to 0} D^I_t~, $$
so that
$$ \mathscr Q(y_i(0), z_i'(0)) = \lim_{t \to 0} \left(\varphi^I_t \circ L_t^{-1}\right)(y_i(0), z'_i(0)) + \lim_{t \to 0} D^I_t
$$
for all $i \in I$. It now follows that $\varphi^I_t \circ L_t^{-1} \to \varphi^I$ and $D^I_t \to D^I$. We now apply Lemma \ref{lem:converging-balls} to obtain that $L_t B^I_t \to B^I$ as $t \to 0$. Observing that $L_t X_t \to X$, it now follows that for sufficiently small $t > 0$, the rescaled balls $L_t B^I_t$ are empty of the other points of the rescaled point set $L_t X_t$. Hence, for sufficiently small $t > 0$, the $Q$--balls $B^I_t$ are the empty balls defining the $Q$--Delaunay triangulation of $X_t$.
\end{proof}
\subsubsection{The geometry of a degenerate quadratic form.}\label{sec:degen-geom}
Let $\mathscr Q$ be a degenerate quadratic form as in the previous subsection. Parallel translation of the inner product $\langle \cdot, \cdot \rangle_{\mathscr Q}$ determines a flat semi-Riemannian metric on $\mathbb R^d$, which in this case is degenerate. The isometries of such a metric form an infinite-dimensional group, indicating that this degenerate geometry does not have very much structure. We may impose more structure by thinking of the degenerate directions as having infinitesimal length rather than zero length. Such structure is best described by a group $G$ acting on $\mathbb R^d$ (the ``symmetries of the structure") rather than a metric. We define $G$ as a limit of the isometry groups of non-degenerate quadratic forms $Q_t$ which are degenerating to $ \mathscr Q$.
Specifically, for each $t > 0$, define $Q_t$ by $Q_t(y,z) = Q(y,0) + tQ(0,z)$,
and let $G_t = \operatorname{Isom}(Q_t)$. Define $G$ to be the limit of $G_t$ as $t \to 0$ in the Chabauty topology on the space of subgroups of the Lie group $\mathrm{Aff}(\mathbb R^d)$ of affine transformations of $\mathbb R^d$: an element $g \in \mathrm{Aff}(\mathbb R^d)$ is in $G$ if and only if there exists $g_t \in G_t$ for each $t > 0$ such that $g_t \to g$ as $t \to 0$.
\begin{Proposition} \label{pr:group}
The Lie subgroup $G \subset \mathrm{Aff}(\mathbb R^d)$ has the following properties.
\begin{enumerate}
\item $\dim G = \dim G_t$ for all $t > 0$.
\item Each $g \in G$ preserves the flat degenerate metric determined by $ \mathscr Q$.
\item Each $g \in G$ preserves the fibration determined by the projection $\pi : \mathbb R^d = \mathbb R^m \oplus \mathbb R^{d-m} \to \mathbb R^m$. If $Q_V$ denotes the restriction of $Q$ to $\mathbb R^m$, a non-degenerate form, then $g$ acts as a $Q_V$-isometry, denoted $\varpi(g)$, on $\mathbb R^m$. The map $\varpi : G \to \operatorname{Isom}(Q_V)$ is a surjective homomorphism satisfying the equivariance property: $\pi(gx) = \varpi(g) \pi(x)$.\label{item:base-metric}
\item Let $Q_U$ denote the restriction of $Q$ to the $\mathbb R^{d-m}$ factor, a non-degenerate quadratic form. Then $Q_U$ determines, by parallel translation, a flat semi-Riemannian metric on each fiber $\pi^{-1}(y) = \{(y, z): z \in \mathbb R^{d-m}\}$. These semi-Riemannian metrics are also preserved by $G$, meaning that $g \in G$ takes the metric on $\pi^{-1}(y)$ to the metric on $g \pi^{-1}(y) = \pi^{-1}(\varpi(g)y)$.\label{item:fiber-metric}
\item $G$ is the unique subgroup of $\mathrm{Aff}(\mathbb R^d)$ satisfying \eqref{item:base-metric} and \eqref{item:fiber-metric}.
\end{enumerate}
\end{Proposition}
In coordinates respecting the splitting $\mathbb R^d = \mathbb R^m \oplus \mathbb R^{d-m}$, we may describe the group $G$ as the collection of all affine transformations with any translational part and whose linear part has the form
$$\begin{pmatrix} A & 0 \\ B & C \end{pmatrix}$$
where $A \in \operatorname{O}(Q_V), C \in \operatorname{O}(Q_U)$ and $B$ is any $(d-m) \times m$ matrix.
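It is straightforward to check in these coordinates that such block-triangular transformations preserve $\mathscr Q$ and are closed under composition. The following sketch (our own illustration, with $m = 2$, $d = 3$, and $Q_V$, $Q_U$ the standard Euclidean forms; the helper names are ours) verifies both claims numerically.

```python
import numpy as np

rng = np.random.default_rng(1)

def g_element(theta, c, B):
    """Linear part [[A, 0], [B, C]] with A a rotation in O(Q_V) = O(2),
    C = +/-1 in O(Q_U) = O(1), and B an arbitrary 1x2 block."""
    M = np.zeros((3, 3))
    M[:2, :2] = np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
    M[2, :2] = B
    M[2, 2] = c
    return M

Qdeg = lambda x: float(x[0]**2 + x[1]**2)   # degenerate form Q(y, z) = |y|^2

g = g_element(0.7, -1.0, rng.normal(size=2))
h = g_element(-1.3, 1.0, rng.normal(size=2))

# Each element preserves the degenerate form Qdeg:
for _ in range(100):
    x = rng.normal(size=3)
    assert abs(Qdeg(g @ x) - Qdeg(x)) < 1e-9

# The composition g h has the same block-triangular shape (closure):
gh = g @ h
assert abs(gh[0, 2]) < 1e-12 and abs(gh[1, 2]) < 1e-12
assert abs(abs(gh[2, 2]) - 1.0) < 1e-12
```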
The geometry defined by the group $G$ is much more rigid than the geometry defined by just the degenerate flat metric associated to $\mathscr Q$. This more rigid geometry appears naturally in the study of geometric structures transitioning from flat semi-Riemannian geometry of one signature to another, see~\cite{coo_lim}. While the notion of Delaunay decomposition with respect to $\mathscr Q$ is \emph{not} preserved by $\operatorname{Isom}(\mathscr Q)$, it is preserved by $G$. As in the non-degenerate case, see Section~\ref{mobius} for a stronger result about the invariance of the $\mathscr Q$--Delaunay decompositions.
\begin{Proposition}\label{invariance2}
The set of $\mathscr Q$-balls is invariant under $G$. Hence so is the notion of $\mathscr Q$-Delaunay decomposition: if $g \in G$ and $X \subset \mathbb R^{d}$ is in spacelike and generic position with respect to $\mathscr Q$, then
$\mathrm{Del}_{{\mathscr Q}}(g(X)) = g(\mathrm{Del}_{{\mathscr Q}}(X))$.
\end{Proposition}
\begin{proof}
We use the notations introduced in Lemma \ref{lem:converging-balls} and Proposition \ref{pr:group} above.
For any $t > 0$, a simple calculation shows that the map $L_t$ takes $Q$-balls bijectively to $Q_{t^2}$-balls.
Hence Lemma \ref{lem:converging-balls} implies that every $\mathscr Q$-ball is a limit of $Q_{t}$-balls. The result follows because $G_t$ preserves the set of all $Q_t$-balls, so its limit group $G$ must preserve the set of limits of $Q_t$-balls, which includes the $\mathscr Q$-balls.
\end{proof}
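The first step of this proof can be verified numerically: a minimal sketch (our own illustration, with $d = 2$, $m = 1$, and $Q$ the Euclidean form; the variable names are ours) checking that membership in a $Q$-ball is equivalent, after applying $L_t$, to membership in the corresponding $Q_{t^2}$-ball, since $Q(y, tz) = Q(y,0) + t^2 Q(0,z)$.

```python
import numpy as np

rng = np.random.default_rng(2)

Q  = lambda y, z: y**2 + z**2           # non-degenerate form on R^2 (m = 1)
Qs = lambda y, z, s: y**2 + s * z**2    # Q_s(y, z) = Q(y, 0) + s Q(0, z)

t = 0.1
phi = lambda y, z: 2.0 * y - 3.0 * z    # an arbitrary linear functional
D = 1.5

# (y, z) lies in the Q-ball  Q <= phi + D  iff  L_t(y, z) = (y, z / t)
# lies in the Q_{t^2}-ball with functional phi o L_t^{-1} and the same D:
for _ in range(200):
    y, z = rng.uniform(-4, 4, size=2)
    in_ball = Q(y, z) <= phi(y, z) + D
    w = z / t                            # apply L_t to the z-coordinate
    in_image = Qs(y, w, t**2) <= phi(y, t * w) + D
    assert in_ball == in_image
```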
\begin{Remark}
There does not seem to be any reasonable `geometric' notion of Voronoi decomposition with respect to a degenerate quadratic form $\mathscr Q$, because the $\mathscr Q$-balls do not have a center: one should think of the center as being at infinity. However, we note that by Theorem \ref{thm:rescaled}, the combinatorics of $\mathrm{Vor}_{Q}(X_t)$ is constant, so it may be natural to define a combinatorial $\mathscr Q$--Voronoi decomposition to agree with the combinatorics of $\mathrm{Vor}_{Q}(X_t)$.
\end{Remark}
\subsection{The convex hull construction revisited}\label{sec:Hpq}
Let us now return to the convex hull construction used in the proof of Theorem~\ref{thm:Del-gen}.
First, let $Q$ be a non-degenerate quadratic form on $\mathbb R^{d}$ of signature $(p,q)$. Consider the quadratic form $\widehat Q$ on $\mathbb R^{d+2}$ defined by $$\widehat Q(x_1, \ldots, x_d, x_{d+1}, x_{d+2}) := Q(x_1, \ldots, x_d) -x_{d+1}x_{d+2}.$$
Then $\widehat Q$ has signature $(p+1, q+1)$, and the open subset $\mathbb X \subset \mathbb{RP}^{d+1}$ consisting of negative lines in $\mathbb R^{d+2}$ with respect to $\widehat Q$, that is
$$\mathbb X = \{x \in \mathbb{R}^{d+2}\setminus\{0\} \mid \widehat Q(x) < 0\}/{\mathbb R}^*,$$
is a model for semi-Riemannian geometry of signature $(p+1, q)$ with constant negative curvature. The group $\operatorname{PO}(\widehat Q)$ of linear transformations of $\mathbb{RP}^{d+1}$ preserving $\widehat Q$ is the isometry group of a homogeneous semi-Riemannian metric of signature $(p+1,q)$ and constant negative sectional curvature.
Now, consider the affine chart $x_{d+2} =1$. In this chart, the ideal boundary of $\mathbb X$ is described by the equation $$x_{d+1} = Q(x_1, \ldots, x_d),$$ i.e. the graph in $\mathbb R^{d+1}$ of $Q$. Hence, via the map $x \mapsto (x, Q(x))$, $\mathbb R^{d}$ may be regarded as an open (and dense) set in the ideal boundary $\partial \mathbb X \subset \mathbb{RP}^{d+1}$. Indeed, $\partial \mathbb X$ is the natural conformal compactification of the flat semi-Riemannian metric on $\mathbb R^d$ defined by $Q$, and the affine isometries $\operatorname{O}(Q) \ltimes \mathbb R^d$ of that flat metric on $\mathbb R^d$ are naturally a subgroup of the full conformal group $\widehat G = \operatorname{PO}(\widehat Q)$ of $\partial \mathbb X$. Hence in the proof of Theorem~\ref{thm:Del-gen}, the convex hull $\mathscr C$ of the lifts $X'$ of $X$ to the graph of $Q$ may be regarded as a convex polyhedron in $\mathbb X$ whose vertices lie on the ideal boundary $\partial \mathbb X$. Such a polyhedron $\mathscr C$ is called an \emph{ideal polyhedron}.
Next, suppose $\mathbb R^d = \mathbb R^m \oplus \mathbb R^{d-m}$ is a $Q$--orthogonal decomposition, with $Q = Q_V \oplus Q_U$ and, as in Section~\ref{sec:degen}, consider the degenerate quadratic form $\mathscr{Q}$ on $\mathbb R^d$ defined by $\mathscr{Q}(y,z) = Q_V(y)$ for all $y \in \mathbb R^m, z \in \mathbb R^{d-m}$. In Section~\ref{sec:degen}, we defined a subgroup $G$ of the group of affine transformations of $\mathbb R^d$ which captures the geometry of a quadratic form thought of as having finite part the degenerate quadratic form $\mathscr{Q}$ and infinitesimal part described by $Q_U$ in the degenerate directions $\mathbb R^{d-m}$. The group $G$ was defined to be the limit as $t \to 0$ of the isometry groups of the flat metrics defined by quadratic forms $Q_t = Q_V \oplus t Q_U$. We may similarly consider the quadratic forms $$\widehat Q_t(x_1, \ldots, x_{d+2}) = Q_V(x_1, \ldots, x_m) + t Q_U(x_{m+1}, \ldots, x_{d}) - x_{d+1}x_{d+2}.$$
We denote the limit as $t \to 0$ of the projective orthogonal groups $\operatorname{PO}(\widehat Q_t)$ by $\widehat G$. Then $\widehat G$ acts on the space $\mathbb X \subset \mathbb{RP}^{d+1}$ of lines of negative signature with respect to $\widehat{\mathscr{Q}} = \mathscr{Q} - x_{d+1}x_{d+2}$ preserving a degenerate semi-Riemannian metric defined by $\mathscr{Q}$ and also preserving a quadratic form naturally isomorphic to $Q_U$ on the degenerate subspace, a copy of $\mathbb R^{d-m}$, of each tangent space. Note that $G$ is naturally the subgroup of $\widehat G$ that preserves the affine chart $x_{d+2} = 1$. As in the non-degenerate case above, the subset of the ideal boundary $\partial \mathbb X$ that lies in the affine chart $x_{d+2} = 1$ naturally identifies with $\mathbb R^d$ via the map $x \mapsto (x, \mathscr{Q}(x))$, and the full ideal boundary $\partial \mathbb X$ is thought of as a conformal compactification of the degenerate geometry of $\mathbb R^d$ described by $G$. Again, the points of $X$ in Theorem~\ref{thm:Del-gen} are thought of as points on $\partial \mathbb X$ and the convex hull $\mathscr C$ constructed in the proof of the theorem is an ideal polyhedron in $\mathbb X$. The space $\mathbb X$ and its symmetry group $\widehat G$ describe constant curvature semi-Riemannian geometry which is infinitesimal in some directions. See~\cite{coo_lim} for more on constant curvature semi-Riemannian geometries and their limits.
In dimension $d = 2$, consider the Euclidean quadratic form $Q(x_1, x_2) = x_1^2 + x_2^2$. Then $\widehat Q(x_1, x_2, x_3, x_4) = x_1^2 + x_2^2 - x_3x_4$ defines a copy $\mathbb X$ of the three-dimensional hyperbolic space $\mathbb H^3$. The Euclidean plane $\mathbb R^2$ naturally identifies with the subset of the ideal boundary $\partial \mathbb X$ lying in the affine patch $x_{4} =1$. A set of points $X$ in spacelike and general position with respect to $Q$ determines a convex ideal polyhedron $\mathscr C$ in ${\mathbb H}^3$ and conversely.
Similarly, consider the standard Lorentzian quadratic form $Q(x_1, x_2) = x_1^2 - x_2^2$. Then $\widehat Q(x_1, x_2, x_3, x_4) = x_1^2 - x_2^2 - x_3x_4$ defines a copy $\mathbb X$ of the $2+1$ dimensional \emph{anti de Sitter space} $\mathrm{AdS}^3$, a model for constant negative curvature Lorentzian geometry in dimension $2+1$. The Minkowski space $\mathbb R^{1,1}$ naturally identifies with the subset of the ideal boundary $\partial \mathbb X$ lying in the affine patch $x_{4} =1$. Again, a set of points $X$ in spacelike and general position with respect to $Q$ determines a convex ideal polyhedron $\mathscr C$ in $\mathrm{AdS}^3$ and conversely.
Next, define the degenerate quadratic form $\mathscr{Q}$ from $Q$, taken to be either the Euclidean or the Lorentzian quadratic form above, using the coordinate splitting $\mathbb R^2 = \mathbb R^1 \oplus \mathbb R^1$, and let $\widehat G$ and $\mathbb X$ be defined as above. Then the degenerate plane $\mathbb R^{1,0,1}$ naturally identifies with the subset of the ideal boundary $\partial \mathbb X$ lying in the affine chart $x_{4} =1$. Here $\mathbb X$ and its symmetry group $\widehat G$ give what is known as three-dimensional \emph{half-pipe geometry} ${\mathbb{HP}}^3$, defined in~\cite{dan_age}. It may be regarded as the geometry of an infinitesimal neighborhood of a hyperbolic $2$-plane in either the three-dimensional hyperbolic space ${\mathbb H}^3$ or the three-dimensional anti de Sitter space $\mathrm{AdS}^3$. Again, a set of points $X$ in spacelike and general position with respect to $\mathscr{Q}$ determines a convex ideal polyhedron $\mathscr C$ in ${\mathbb{HP}}^3$ and conversely.
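In the notation above, the half-pipe model arises explicitly from the degenerate lift
$$\widehat{\mathscr Q}(x_1, x_2, x_3, x_4) = x_1^2 - x_3x_4,$$
so that ${\mathbb{HP}}^3$ is realized as the region $\{[x] \in \mathbb{RP}^3 \mid x_1^2 < x_3 x_4\}$, regardless of whether $\mathscr{Q}$ was obtained from the Euclidean or the Lorentzian form.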
In Section~\ref{sec:dihedral}, we will relate the geometry of the Delaunay triangulation of $X$ with the geometry of the corresponding ideal polyhedron $\mathscr C$ in each of the three geometric contexts above. We then characterize Delaunay triangulations in $\mathbb R^{1,1}$ and $\mathbb R^{1,0,1}$ by applying a recent characterization of ideal polyhedra that we obtained in~\cite{dan_pol}. The following theorem extends to $\mathrm{AdS}^3$ and ${\mathbb{HP}}^3$ a famous theorem of Rivin~\cite{riv_ach} about ideal polyhedra in hyperbolic three-space.
\begin{theorem}[\cite{dan_pol}, Theorems 1.3 and 1.7] \label{thm:DMS}
Let $\mathbb X$ be $\mathrm{AdS}^3$ or ${\mathbb{HP}}^3$. Let $\Gamma'$ be a triangulated graph on the sphere and let $w: E(\Gamma') \to \mathbb R \setminus\{0\}$ be a weight function on the edges $E(\Gamma')$ of $\Gamma'$.
Then there exists a convex ideal polyhedron $\mathscr C$ in $\mathbb X$ and an isomorphism of $\Gamma'$ with the one-skeleton of $\mathscr C$ which takes the weights $w$ to the dihedral angles of $\mathscr C$ if and only if the following conditions hold:
\begin{enumerate}
\item[(i)] For each vertex $v$ of $\Gamma'$, the vertex sum $\sum_{e \sim v} w(e) = 0$.
\item[(ii)] The edges $e$ for which $w(e) < 0$ form a Hamiltonian cycle in $\Gamma'$.
\item[(iii)] If $c$ is a Jordan curve on the sphere transverse to $\Gamma'$ which crosses exactly two edges of negative weight, then $$\sum_{e \in E_c} w(e) \geq 0$$
where $E_c$ are the edges of $\Gamma'$ crossed by $c$. Further, equality occurs if and only if $c$ encircles a single vertex.
\end{enumerate}
\end{theorem}
\subsection{M\"obius invariance of $Q$--Delaunay decompositions}\label{mobius}
Propositions~\ref{invariance1} and~\ref{invariance2} state that $Q$--Delaunay decompositions are invariant under the action of the isometry group of the flat geometry associated to $Q$. In this section, we search for a larger collection of transformations leaving the $Q$--Delaunay decomposition invariant. To this end, recall that the construction of the $Q$--Delaunay decomposition for $X$ in the proof of Theorem~\ref{thm:Del-gen}, interpreted according to the observations of Section~\ref{sec:Hpq}, involves forming the convex ideal polyhedron $\mathscr C$ in the negatively curved geometry $\mathbb X$ associated to $\widehat Q$ whose vertices are the points $X$, thought of as lying in the ideal boundary $\partial \mathbb X$ using the standard chart $\mathbb R^d \hookrightarrow \partial \mathbb X$.
The convex hull is a construction invariant under the full group of $Q$-M\"obius transformations $\widehat G = \operatorname{PO}(\widehat Q)$. However, the second half of the construction of $\mathrm{Del}_Q(X)$ involves projecting the \emph{bottom faces} $\partial \mathscr C_-$ back down to $\mathbb R^d$, and the notion of bottom and top faces of $\mathscr C$ depends on a choice of a point $\infty \in \partial \mathbb X$ to project from. So, the $Q$--Delaunay decomposition of $X$ should be invariant under any M\"obius transformation $f \in \widehat G$ for which the bottom faces of $f(\mathscr C)$ relative to $\infty$ are precisely the image of the bottom faces of $\mathscr C$ relative to $\infty$. Let us now make this more precise.
First, we note that strictly speaking, even for small $f \in \widehat G$, $f(\mathrm{Del}_Q(X))$ is rarely a \emph{linear} cellulation; indeed, the image of an affine plane under $f$ is typically some $Q$-sphere (of the same dimension). However, as long as the image of a cell of $\mathrm{Del}_Q(X)$ is contained (and compact) in $\mathbb R^d$, we may \emph{isotope} it relative to its vertices so that it becomes a linear cell (i.e. pull it tight). Note that even if the image of each cell of $\mathrm{Del}_Q(X)$ lies in $\mathbb R^d$, the straightening of $f(\mathrm{Del}_Q(X))$ may fail to be a cellulation of the convex hull of $f(X)$ (and even if it is a cellulation of the convex hull, it might not be $\mathrm{Del}_Q(f(X))$). Henceforth we will write $f(\mathrm{Del}_Q(X))\sim \mathrm{Del}_Q(f(X))$ to mean that $\mathrm{Del}_Q(f(X))$ is the straightening of $f(\mathrm{Del}_Q(X))$.
As a warm-up to the general case, we first discuss the Euclidean case, where the precise statement of the M\"obius
invariance (below) is presumably well-known to specialists.
Given $X$, let $\mathcal{C}(X)$ denote the finite collection of all full and empty spheres
(see Definition \ref{def:Del-gen} and Remark \ref{full}).
\begin{Lemma}\label{Mob_Euclid}
Let $X$ be a finite set of points in $\mathbb R^d$, let $Q$ be the standard Euclidean quadratic form
and let $f \in \widehat G = \operatorname{PO}(d,1)$. Then $f(\mathrm{Del}_Q(X))$ is isotopic to $\mathrm{Del}_Q(f(X))$ if and only if
$f^{-1}(\infty)$ lies outside each sphere of the collection $\mathcal C(X)$ of empty and full spheres associated to $X$.
\end{Lemma}
\begin{Remark}
Note that the conclusion of Lemma~\ref{Mob_Euclid} may fail if the condition
is only required to hold for the empty spheres. There are easy examples of this even with $|X| = 4$.
\end{Remark}
\begin{proof}[Outline of the proof]
Let $f \in \widehat G$ and assume that $f(X)$ lies in $\mathbb R^d$ (no point of $X$ is sent to $\infty$). Consider a sphere $S$ in $\mathbb R^d$.
The condition that $S$ be empty of points of $X$ means that in the conformal compactification $\partial \mathbb X= \mathbb R^d \cup \{\infty\}$, $\infty$ lies on one side of $S$ while $X$ lies on the other (allowing for some points of $X$ to be on $S$). Hence if $S$ is an empty sphere, then $f(S)$ is empty of points of $f(X)$ if and only if $f^{-1} (\infty)$ lies in the component of the complement of $S$ containing $\infty$, i.e. $f^{-1} (\infty)$ is outside of $S$. Hence, if $f^{-1} (\infty)$ lies outside all of the empty spheres associated to $X$, then the cells of $f(\mathrm{Del}(X))$ straighten to cells of $\mathrm{Del}(f(X))$. However, $\mathrm{Del}(f(X))$ could have additional top-dimensional cells, if there are additional empty spheres. Since the top-dimensional cells of $\mathrm{Del}(f(X))$ are projections of faces of the convex hull $f(\mathscr C)$ of the points $f(X) \subset \partial \mathbb X$ (using here that the convex hull construction in $\mathbb X$ is invariant under $\widehat G$), we need only check now that the images of the full spheres associated to $X$ are not empty for $f(X)$. This is precisely the condition that $f^{-1} (\infty)$ lies outside of every full sphere associated to $X$.
\end{proof}
The condition in Lemma \ref{Mob_Euclid} does not generalize well to the case of
non-positive definite quadratic forms, since in general a $Q$--sphere in $\partial\mathbb{X}$ does not separate
$\partial\mathbb{X}$, making it meaningless to ask whether a point of $\partial \mathbb X$ lies inside or outside of a $Q$-sphere. For the sake of generalizing, let us reformulate Lemma \ref{Mob_Euclid} as follows.
\begin{Lemma}\label{Mob_Euclid2}
Let $X$ be a finite set of points in $\mathbb R^d$, let $Q$ be the standard Euclidean quadratic form,
and let $f \in \widehat G = \operatorname{PO}(d,1)$. Then $f(\mathrm{Del}_Q(X))$ is isotopic to $\mathrm{Del}_Q(f(X))$ if and only if
$f^{-1}(\infty)$ is in the same connected component as $\infty$ in the complement of the union of the spheres in $\mathcal C(X)$ in the conformal compactification $\partial \mathbb X$ of $(\mathbb R^d,Q)$.
\end{Lemma}
In the general setting, the condition above needs to be slightly expanded as follows.
Recall from Section~\ref{sec:Hpq} that, for a general quadratic form $Q$, the conformal compactification $\partial \mathbb X$ contains many points at infinity; indeed $\partial \mathbb X = \mathbb R^d \cup \mathscr L(\infty)$ is the disjoint union of $\mathbb R^d$ with the light cone of a point $\infty$ at infinity. Here the light cone $\mathscr L(x)$ of a point $x \in \partial \mathbb X$ is the union of all $y \in \partial \mathbb X$ such that $\langle x, y\rangle_{\widehat Q} = 0$. If $x \in \mathbb R^d$, then $\mathscr L(x) \cap \mathbb R^d$ is precisely the points of $\mathbb R^d$ of null displacement from $x$ in the $Q$ norm (this only looks like a standard cone if $Q$ is Lorentzian). The stereographic
projection used in the proof of Theorem~\ref{thm:Del-gen} then conformally identifies the complement of $\mathscr L(\infty)$ in $\partial \mathbb X$ with $(\mathbb R^d,Q)$.
\begin{theorem}\label{Mob}
Let $Q$ be any quadratic form on $\mathbb R^d$ and let $X$ be a finite set of points in $\mathbb R^d$
in space-like position with respect to $Q$. Let $f \in \widehat G$. Then $f(\mathrm{Del}_Q(X))$
is isotopic to $\mathrm{Del}_Q(f(X))$ if
$f^{-1}(\infty)$ lies in the same connected component as $\infty$ in the complement in
the conformal compactification $\partial \mathbb X$ of $(\mathbb R^d, Q)$ of the union of
\begin{enumerate}
\item the collection ${\mathcal C}$ of all the empty and full $Q$-spheres,
\item the light cones $\mathscr L(x)$ of the points of $X$.
\end{enumerate}
\end{theorem}
The proof of Theorem \ref{Mob} uses a simple statement in projective geometry.
We use the notation of Section \ref{sec:Hpq}.
\begin{lemma} \label{lem:Mob}
Let $S$ be a $Q$-sphere in $\partial \mathbb X$ realized as the (transverse) intersection of a hyperplane $P\subset \mathbb{RP}^{d+1}$ with $\partial {\mathbb X}$. Let $P_0=P\cap {\mathbb X}$. Then a point $x \in \mathbb R^d \setminus S$ lies inside the $Q$-sphere $S$ if and only if the line in $\mathbb{RP}^{d+1}$ passing through $x$ and $\infty$ crosses the hyperplane $P$ inside of $P_0$.
\end{lemma}
\begin{proof}
Let $S$ be defined by the equation:
$$Q(x) = \varphi(x) + D$$
for $x \in \mathbb R^d$, where $\varphi: \mathbb R^d \to \mathbb R$ is a linear functional and $D \in \mathbb R$ is a constant. Recall the explicit embedding $x \mapsto [x: Q(x): 1] \in \mathbb{RP}^{d+1}$.
Then $P$ is the set of $[y: a: b] \in \mathbb{RP}^{d+1}$ such that $\varphi(y) = a - Db,$ where $y \in \mathbb R^d$ and $a,b \in \mathbb R$.
Note that $\infty = [0_d:1:0]$, where $0_d$ is the zero vector in $\mathbb R^d$, and let $x \in \mathbb R^d \setminus S$. Then the line $\ell$ in $\mathbb{RP}^{d+1}$ passing through $\infty$ and $x$ is given by $[tx: tQ(x) + s: t]$, parametrized by $[s:t]$. Hence $\ell$ meets $P$ at the point $p$ where $s = t(\varphi(x) - Q(x) + D)$. Let us evaluate (the sign of) $\widehat Q$ at this point:
\begin{align*}
\widehat Q(tx, tQ(x) +s, t) &= \widehat Q(tx, tQ(x) + t(\varphi(x) - Q(x) + D), t)\\ &= \widehat Q(tx, t\varphi(x) + tD, t) \\ &= Q(tx) - (t\varphi(x) + tD)t = t^2(Q(x) - (\varphi(x) + D)).
\end{align*}
Hence $p \in \mathbb X$ if and only if $Q(x) < \varphi(x) + D$ if and only if $x$ is inside the $Q$-sphere~$S$.
\end{proof}
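As a sanity check in the Euclidean plane ($d = 2$), let $S$ be the unit circle, so $\varphi = 0$ and $D = 1$ in the equation $Q(x) = \varphi(x) + D$, and let $x = 0_2$ be its center. Then $s = t(\varphi(x) - Q(x) + D) = t$, so the line through $x$ and $\infty$ meets $P$ at $p = [0_2 : 1 : 1]$, and
$$\widehat Q(0, 0, 1, 1) = Q(0_2) - 1 \cdot 1 = -1 < 0,$$
so $p \in P_0 = P \cap \mathbb X$, consistent with the center lying inside the circle.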
\begin{proof}[Proof of Theorem~\ref{Mob}]
Let $f_t \in \widehat G$ be a family of M\"obius transformations with $f_0 = I$ such that for all $t$, $f_t^{-1} (\infty)$ lies neither on the light cone of any point of $X$ nor on any empty or full $Q$-sphere in $\mathcal C$. We must first show that $f_t(X)$ is in spacelike position, so that $\mathrm{Del}_Q(f_t(X))$ is defined. Consider the spacelike segment $\sigma_{ij}$ connecting $x_i$ to $x_j$ in $\mathbb R^d$. Then, since $f_t$ is a conformal deformation, $f_t(\sigma_{ij})$ is a spacelike segment in the conformal metric on $\partial \mathbb X$ for all $t$. The only way $f_t(x_i)$ and $f_t(x_j)$ could fail to have spacelike relative position is if $f_t(\sigma_{ij})$ is not contained in $\mathbb R^d$, equivalently if $f_t(\sigma_{ij})$ crosses $\mathscr L(\infty)$, equivalently if $f_t^{-1}\mathscr L(\infty) = \mathscr L(f_t^{-1} (\infty))$ crosses $\sigma_{ij}$. If this happens, then for some (possibly smaller) value of $t$, $\mathscr L(f_t^{-1} (\infty))$ contains the point $x_i$ or $x_j$, equivalently $f_t^{-1}(\infty)$ lies in $\mathscr L(x_i)$ or $\mathscr L(x_j)$. This does not happen by assumption.
Now, to show that $f_t(\mathrm{Del}_Q(X)) \sim \mathrm{Del}_Q(f_t(X))$, we argue as in the Euclidean case. Let $\Delta$ be a cell of $\mathrm{Del}_Q(f_t(X))$. Then $\Delta$ is the projection to $\mathbb R^d$ of a face of the convex hull in $\mathbb X$ of the points $f_t(X) \subset \partial \mathbb X$. This convex hull is precisely $f_t(\mathscr C)$, hence the vertices of $\Delta$ are the images under $f_t$ of a subset of $X$ which span a top or bottom face of $\mathscr C$, and therefore define one of the empty or full spheres with respect to $X$. So it remains to show that for each empty (respectively full) sphere $S$ in $\mathcal C$, $f_t(S)$ remains empty (respectively full) with respect to $f_t(X)$.
Consider an empty or full $Q$-sphere $S$ in $\mathcal C(X)$. We now show that $f_t(S)$ is an empty (resp. full) sphere for $f_t(X)$
for all $t$. Let $P\subset \mathbb{RP}^{d+1}$ be the hyperplane as in Lemma~\ref{lem:Mob} such that $P \cap \partial \mathbb X = S$. Let $x_i \in X$ be a point not on $S$. By the Lemma, the point $f_t(x_i)$ lies strictly inside the $Q$-sphere $f_t(S)$ if and only if the line $\ell_t$ passing through $f_t(x_i)$ and $\infty$ intersects $f_t(P)$ in the interior of $\mathbb X$. If $f_{t'}(x_i)$ is at some time $t'$ on the wrong side of $f_{t'}(S)$, then there is a time $t$ for which $\ell_t \cap f_t(P) \in \partial \mathbb X \cap f_t(P) = f_t(S)$, or equivalently the line $f_t^{-1} \ell_t$ in $\mathbb{RP}^{d+1}$ passing through $x_i$ and $f_t^{-1} (\infty)$ intersects $S$. Hence, since $x_i \notin S$, then either $f_t^{-1} (\infty) \in S$, which we assume does not happen, or the entire line $f_t^{-1} \ell_t$ lies in $\partial \mathbb X$ which implies that $f_t^{-1} (\infty) \in \mathscr L(x_i)$, which we also assume does not happen. Thus for any $t$, $f_t(x_i)$ lies strictly inside $f_t(S)$ if and only if $x_i$ lies strictly inside $S$.
\end{proof}
\section{Higher signature Delaunay decompositions in dimension $d = 2$}\label{d2}
In dimension $d = 2$, there are two interesting geometric settings to consider beyond the classical case of the Euclidean plane (i.e. $Q = Q_{2,0}$ a positive definite form).
The first is the \emph{Minkowski plane}, $\mathbb R^{1,1}$, equipped with quadratic form $Q_{1,1}(x_1, x_2) = x_1^2 - x_2^2$. See Section~\ref{sec:non-degen}.
The second is the degenerate geometry $\mathbb R^{1,0,1}$ defined by the degenerate quadratic form $Q_{1,0,1}(x_1, x_2) = x_1^2$ as in Sections~\ref{sec:degen} and Section~\ref{sec:Hpq}.
There are two collections of angles associated to a $Q$--Delaunay triangulation $\mathrm{Del}_Q(X)$, the collection of \emph{interior angles} (Definition~\ref{def:interior-angles}) of the triangles and the collection of \emph{edge angles}, which are angles formed by the $Q$-circles circumscribing adjacent triangles (Definition~\ref{def:edge-angles}), assigned to the edges of $\mathrm{Del}_Q(X)$. We investigate the behavior of the interior angles and edge angles generalizing several important results from the Euclidean setting.
\subsection{Interior angles and edge angles of higher signature Delaunay triangulations} First, we define a notion of angle in each of the two geometries of interest.
\subsubsection{Angles in the Minkowski plane $\mathbb R^{1,1}$}
\label{ssc:angles11}
In the Minkowski plane $\mathbb R^{1,1}$, there are two connected components of spacelike directions. The angle formed by two spacelike unit vectors $v, w$ in the same component is defined to be the positive number $\varphi > 0$ satisfying the equation $$\cosh \varphi = \langle v, w \rangle_{Q_{1,1}}.$$
The angle formed by two spacelike unit vectors $v, w$ in opposite components is defined to be the negative number $\varphi < 0$ satisfying $$\cosh \varphi = - \langle v, w \rangle_{Q_{1,1}}.$$
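For example, the unit spacelike vectors in the component containing $(1,0)$ are exactly $v_a = (\cosh a, \sinh a)$ for $a \in \mathbb R$, and
$$\langle v_a, v_b \rangle_{Q_{1,1}} = \cosh a \cosh b - \sinh a \sinh b = \cosh(a - b),$$
so the angle between $v_a$ and $v_b$ is $|a - b|$. In particular, the Minkowski angle is additive along this hyperbola of directions, just as the Euclidean angle is additive along the circle of directions.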
\subsubsection{Angles in the degenerate plane $\mathbb R^{1,0,1}$}
Following the notation and ideas of Section~\ref{sec:degen},
let $Q_{1,0,1} = Q_V$ and $Q_{2,0} = Q_V + Q_U$ and $Q_{1,1} = Q_V - Q_U$,
where $Q_V(x_1, x_2) = x_1^2$ and $Q_U(x_1, x_2) = x_2^2$.
Here we focus on the case of $Q_{1,0,1}$.
We say that a non-zero vector $v$ is {\em non-degenerate}
if $Q_{1,0,1}(v)>0$.
Let $v = (v_1, v_2), w = (w_1, w_2)$ be two unit vectors with respect to $Q_{1,0,1}$, i.e. $v_1^2 = w_1^2 = 1$. Recall the map $L_t: \mathbb R^2 \to \mathbb R^2$ defined by $L_t(x_1, x_2) = (x_1, \frac{x_2}{t})$ and consider any two paths $v_t = (a_t, b_t), w_t = (c_t, d_t)$ so that $L_tv_t \to v$ and $L_t w_t \to w$ as $t \to 0$, i.e. $a_t \to v_1, b_t \to 0$, $c_t \to w_1, d_t \to 0$ and $\dot b := \frac{\mathrm{d}}{\mathrm{d} t}\big|_{t=0} b_t= v_2$ and $\dot d := \frac{\mathrm{d}}{\mathrm{d} t}\big|_{t=0} d_t= w_2$.
Then, we define the angle $\dot \theta_{vw}$ between $v$ and $w$ in $\mathbb R^{1,0,1}$ to be the derivative
$$\dot \theta_{vw} := \frac{\mathrm{d}}{\mathrm{d} t}\big|_{t= 0} \theta_{v_t w_t},$$
where $\theta_{v_t w_t}$ denotes the $\mathbb R^{2,0}$ or $\mathbb R^{1,1}$ angle between $v_t$ and $w_t$.
It is easy to check that this definition does not depend on the paths of vectors $v_t, w_t$ chosen, nor does it depend on whether $\theta_{v_t w_t}$ is measured in $\mathbb R^{2,0}$ or $\mathbb R^{1,1}$ because in all cases
\begin{equation}\label{eqn:dot-angle}
\dot \theta_{vw} = \begin{cases} \phantom{-}|v_2 - w_2| & \text{ if } v_1w_1 = 1, \\ -|v_2 - w_2| & \text{ if } v_1w_1 = -1. \end{cases}
\end{equation}
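To illustrate the first case of~(\ref{eqn:dot-angle}), measure in $\mathbb R^{2,0}$ and suppose $v_1 = w_1 = 1$. One may take the paths $v_t = (1, tv_2)$ and $w_t = (1, tw_2)$, for which $L_t v_t = v$ and $L_t w_t = w$ exactly. Then $\theta_{v_t w_t} = |\arctan(tv_2) - \arctan(tw_2)|$, and hence
$$\dot \theta_{vw} = \frac{\mathrm{d}}{\mathrm{d} t}\Big|_{t= 0} \left|\arctan(tv_2) - \arctan(tw_2)\right| = |v_2 - w_2|.$$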
\begin{Proposition}
The angle $\dot \theta_{vw}$ between two directions $v,w$ is invariant under the symmetry group $G$ of $\mathbb R^{1,0,1}$ defined in Section~\ref{sec:degen-geom}.
\end{Proposition}
\begin{proof}
Let $v,w\in \mathbb R^{1,0,1}$, let $g\in G$, and let $v'=gv, w'=gw$. Let $(v_t)_{t\in (0,1)}$ and
$(w_t)_{t\in (0,1)}$ be chosen such that $L_tv_t\to v$ and $L_t w_t\to w$ as $t\to 0$, as above.
Let $(g_t)_{t\in (0,1)}$, $g_t\in G_{t^2}$, be such that $g_t\to g$, where $G_{t^2}$ was defined in Section \ref{sec:degen-geom}. Then $g_tL_tv_t\to v', g_tL_tw_t\to w'$.
Hence, since $\mathcal{L} = L_{1/t}g_tL_t$ preserves the non-degenerate quadratic form in which the angles $\theta$ are measured, we have:
$$ \dot\theta_{v'w'} = \frac{\mathrm{d}}{\mathrm{d} t}\Big|_{t=0} \theta_{\mathcal{L}v_t,\mathcal{L}w_t} = \frac{\mathrm{d}}{\mathrm{d} t}\Big|_{t=0} \theta_{v_t w_t}
=\dot\theta_{vw}~. $$
\end{proof}
\begin{Proposition}
Consider a non-degenerate triangle in $\mathbb R^{1,0,1}$. Then the sum of the $Q_{1,0,1}$ interior angles of the triangle is zero.
\end{Proposition}
\begin{proof}
Take the derivative at $t = 0$ of the analogous angle-sum formula for the approximating triangles: the Euclidean interior angles sum to the constant $\pi$, while the Minkowski interior angles sum to the constant $0$, so in either case the derivative of the sum vanishes.
\end{proof}
\begin{Proposition}\label{prop:strict-angles}
Let $p_1, p_2$ be two points with non-degenerate displacement and let $p_3, p_4$ be points in $\mathbb R^{1,0,1}$ lying in the same connected component of the set of points which are non-degenerate with respect to both $p_1$ and $p_2$. Suppose that $p_4$ is in the interior of the triangle $\Delta p_1 p_2 p_3$.
Then the $\mathbb R^{1,0,1}$ angles $\measuredangle p_1 p_3 p_2$ and $\measuredangle p_1 p_4 p_2$ satisfy
$$\measuredangle p_1 p_3 p_2 < \measuredangle p_1 p_4 p_2.$$
\end{Proposition}
\begin{proof}
This follows from the definition of the $\mathbb R^{1,0,1}$ angles in terms of limits of angles
between triples of points in $\mathbb R^2$ (or $\mathbb R^{1,1}$), and from the corresponding inequality in
$\mathbb R^2$.
\end{proof}
\subsection{Interior angles and edge angles}
Let $Q = Q_{1,1}$ or $Q_{1,0,1}$.
There are two collections of angles naturally assigned to a $Q$-Delaunay triangulation $\mathrm{Del}_{Q}(X)$. Each triangle has three \emph{interior angles}, two positive and one negative, which sum to zero.
\begin{Definition}\label{def:interior-angles}
Let $Q = Q_{1,1}$ or $Q = Q_{1,0,1}$ and let $X$ be a finite set in $\mathbb R^2$ in spacelike and generic position with respect to $Q$. The \emph{interior angles} of $\mathrm{Del}_Q(X)$ are the interior angles of all triangles of $\mathrm{Del}_Q(X)$, listed in increasing order.
\end{Definition}
Each edge $e$ of $\mathrm{Del}_{Q}(X)$ is assigned an \emph{edge angle} defined in terms of the intersection between the empty $Q$-circles circumscribing the triangles adjacent to that edge. To make a precise definition, consider two
regions $R_1, R_2$ in the plane, each with smooth boundary $S_1 = \partial R_1$, $S_2 = \partial R_2$.
The angle formed by $R_1$ and $R_2$ at a point $p$ in the intersection $S_1 \cap S_2$ of their boundaries is defined to be the angle $\varphi$ formed by the two unit vectors $v_1, v_2$ tangent to $S_1$ and $S_2$ at $p$ and in the direction of traversal placing $R_1$ and $R_2$ on the left with respect to some (any) fixed ambient orientation. In the case that $R_1 = B_1$, $R_2 = B_2$ are $Q$--balls, the corresponding $Q$--circles $S_1$ and $S_2$ will typically intersect at two points. It is easy to check that the angle formed by $B_1$ and $B_2$ at both points is the same, making the notion of intersection angle between $Q$--circles well defined.
\begin{figure}[h]
{
\centering
\def6.0cm{5.0cm}
\input{Delaunay-edge-angles.pdf_tex}
\def6.0cm{5.0cm}
\input{Delaunay-edge-angles-2.pdf_tex}
}
\caption{The edge angle $\theta$ associated to $e$ is the angle between the unit vectors $v$ and $w$ tangent to the two empty $Q$-circles circumscribing the triangles adjacent to $e$. Alternatively, $\theta$ is the sum of the interior angles $\alpha, \beta$ opposite to $e$.}
\end{figure}
\begin{Definition}\label{def:dihedral-angles-triangulation}\label{def:edge-angles}
Let $X$ be a finite set of points in $\mathbb R^2$ in spacelike and general position with respect to a quadratic form $Q$ and let $\mathcal T$ be any triangulation of the convex hull of $X$. Consider an edge $e$ of $\mathcal T$. If $e$ is an interior edge, then it is adjacent to two triangles $t_1, t_2$. In this case, the \emph{edge angle} at $e$ of $\mathcal T$ is the angle $\varphi$ formed by the $Q$ balls $B_1$ and $B_2$ circumscribing $t_1$ and $t_2$ respectively.
If $e$ is an exterior edge, then $e$ is adjacent to one triangle $t_1$. In this case, the \emph{edge angle} at $e$ is the angle between the ball $B_1$ circumscribing $t_1$ and the half-plane whose intersection with the convex hull of $X$ is precisely $e$. Typically we will denote by $\Phi: E(\mathcal T) \to \mathbb R$ the function which assigns to each edge $e$ in the set of edges $E(\mathcal T)$ the edge angle $\Phi(e)$ of $\mathcal T$ at $e$.
\end{Definition}
As for Euclidean Delaunay decompositions, there is a simple relation between the interior angles
and the edge angles:
\begin{Proposition}
Let $e$ be an edge of $\mathrm{Del}_Q(X)$. Then the edge angle at $e$ is the sum of the interior angles opposite to $e$ in the triangles adjacent to $e$. If $e$ is an interior edge, then there are two such triangles, whereas if $e$ is an exterior edge, then there is one such triangle.
\end{Proposition}
In the case $Q = Q_{1,1}$, the proof of this property follows from an elementary geometric remark: given three points
$a,b,c$ at time-distance $r>0$ from $0$ in $\mathbb R^{1,1}$, then the Minkowski angles satisfy
$\measuredangle(bac)=\measuredangle(b0c)/2$. In the case $Q = Q_{1,0,1}$, the proof proceeds by taking limits of the Minkowski or Euclidean case as usual.
\subsection{Prescribing the edge angles}\label{sec:dihedral}
The edge angles of any triangulation $\mathcal T$ as in Definition~\ref{def:edge-angles} determine a weighted planar triangulated graph. In the case that the quadratic form $Q$ is not positive definite, the following theorem characterizes exactly which weighted graphs occur as the edge angles of a $Q$--Delaunay triangulation.
\begin{theorem}\label{thm:prescribe-angles}
Let $\Gamma$ be a triangulated graph in the plane and let $w: E(\Gamma) \to \mathbb R \setminus\{0\}$ be a weight function on the set $E(\Gamma)$ of edges of $\Gamma$. Let $Q$ be either $Q_{1,1}$ or $Q_{1,0,1}$. Then there exists a finite set $X$ in $\mathbb R^2$ in spacelike and general position with respect to $Q$ and an isomorphism of $\Gamma$ with the one-skeleton of the Delaunay triangulation $\mathrm{Del}_{Q}(X)$ which takes the weights $w$ to the edge angles of $\mathrm{Del}_Q(X)$ if and only if the following conditions hold:
\begin{enumerate}
\item For each interior vertex $v$ of $\Gamma$, the vertex sum $\sum_{e \sim v} w(e) = 0$.
\item The edges $e$ for which $w(e) < 0$ form a Hamiltonian path whose endpoints $v_1, v_2$ are exterior vertices.
\item The vertex sum $\sum_{e \sim v} w(e)$ associated to an exterior vertex $v$ is positive if $v = v_1$ or $v= v_2$ and is negative otherwise.
\item If $c$ is a Jordan curve in $\mathbb R^2$ transverse to $\Gamma$ which either crosses two edges of negative weight but does not enclose $v_1$ or $v_2$, encloses $v_1$ and $v_2$ but does not cross any edge of negative weight, or crosses exactly one edge of negative weight and encloses exactly one of $v_1 $ and $v_2$, then $$\sum_{e \in E_c} w(e) - \sum_{v \in V_c } \sum_{e \sim v} w(e) \geq 0$$
where $E_c$ are the edges of $\Gamma$ crossed by $c$ and $V_c$ are the exterior vertices of $\Gamma$ enclosed inside of $c$. Equality occurs if and only if $c$ encircles a single vertex or $c$ encircles all vertices.
\end{enumerate}
\end{theorem}
\begin{proof}
First, consider a set $X$ of finitely many points in spacelike and generic position with respect to $Q$. As in the proof of Theorem~\ref{thm:Del-gen}, we lift $X$ to a point set $X'$ lying on the graph of $Q$ in $\mathbb R^{3}$. The Delaunay triangulation $\mathrm{Del}_Q(X)$ is seen on the bottom of the convex hull $\mathscr C$ of $X'$. As described in Section~\ref{sec:Hpq}, the graph of $Q$ in $\mathbb R^{3}$ is naturally an open dense subset of the ideal boundary $\partial \mathbb X$ of either $\mathbb X = \mathrm{AdS}^3 \subset \mathbb{RP}^3$ if $Q = Q_{1,1}$ or $\mathbb X = {\mathbb{HP}}^3 \subset \mathbb{RP}^3$ if $Q = Q_{1,0,1}$.
Let us now add an extra vertex $v_\infty = [0:0:1:0]$ and denote the convex hull of $X'$ and $v_\infty$ by $\mathscr C_2$. Then the faces of $\mathscr C_2$ consist of the bottom faces of $\mathscr C$ (corresponding to the triangles of $\mathrm{Del}_Q(X)$) as well as infinite vertical walls, as viewed in $\mathbb R^3 \subset \mathbb{RP}^3$, along the edges corresponding to exterior edges of $\mathrm{Del}_Q(X)$. It is straightforward to show that the edge angle at an edge $e$ of $\mathrm{Del}_Q(X)$ (Definition~\ref{def:dihedral-angles-triangulation}) is exactly the same as the dihedral angle of the corresponding edge of $\mathscr C_2$ as measured in $\mathbb X$. Indeed, the conformal semi-Riemannian structure on $\partial \mathbb X$ is compatible with the semi-Riemannian structure on $\mathbb X$. Further, since the sum of the dihedral angles around an ideal vertex of $\mathscr C_2$ is equal to zero (see \cite{dan_pol}), the dihedral angle along one of the new vertical edges $e$ of $\mathscr C_2$ is exactly minus the sum of the edge angles of the edges of $\mathrm{Del}_Q(X)$ incident to the vertex $v$ to which $e$ projects.
\begin{figure}[h]
{
\centering
\def6.0cm{7.0cm}
\input{convex-hull-with-point-at-infinity.pdf_tex}
}
\caption{The convex hull $\mathscr C_2$ of the lifted points $X'$ together with the extra ideal vertex $v_\infty$.}
\end{figure}
Hence, Theorem~\ref{thm:prescribe-angles} follows directly from Theorem~\ref{thm:DMS}: Let $\Gamma'$ denote the triangulated graph on the sphere obtained from $\Gamma$ by first adding a single vertex $v_\infty$ at infinity and then connecting all exterior vertices of $\Gamma$ to $v_\infty$ with an edge. There is a one-to-one correspondence between weight functions on $\Gamma$ satisfying the conditions of Theorem~\ref{thm:prescribe-angles} and weight functions on $\Gamma'$ satisfying the conditions of Theorem~\ref{thm:DMS}. The weights on the new edges of $\Gamma'$ are determined by Condition~(i). Condition (i) for the new vertex $v_\infty$ corresponds to the case that $c$ encircles the entire polygon in Condition (4). Condition (ii) corresponds to Conditions (2) and (3). Finally, in Condition (iii), that $c$ crosses a new edge of $\Gamma'$ is equivalent to $c$ encircling the corresponding exterior vertex of $\Gamma$.
\end{proof}
\subsection{Interior angle optimization}
Consider a triangulation $\mathcal T$ of a finite point set $X$ in spacelike and general position with respect to $Q$, where $Q = Q_{1,1}, Q_{1,0,1},$ or $Q = Q_{2,0}$. The \emph{angle sequence} of $\mathcal T$ is the ordered list, sorted from smallest to largest with repetition, of the interior angles of all the triangles in $\mathcal T$. Given two triangulations $\mathcal T_1, \mathcal T_2$, we say that $\mathcal T_1$ is \emph{fatter} than $\mathcal T_2$ if the angle sequence of $\mathcal T_1$ is greater than the angle sequence of $\mathcal T_2$ with respect to the lexicographic ordering, i.e. if the first angle for which the angles sequences of $\mathcal T_1$ and $\mathcal T_2$ disagree is larger for $\mathcal T_1$.
\begin{theorem}\label{thm:angle-optimization}
Let $X$ be a finite set of points in $\mathbb R^2$ in spacelike and general position with respect to $Q$,
where $Q = Q_{1,1}, Q_{1,0,1},$ or $Q_{2,0}$.
Then the $Q$--Delaunay triangulation $\mathrm{Del}_Q(X)$ is the unique fattest triangulation of $X$: the angle sequence of $\mathrm{Del}_Q(X)$ is larger, with respect to the lexicographic ordering, than the angle sequence of any other triangulation of $X$.
\end{theorem}
Of course, this theorem is well-known in the case $Q = Q_{2,0}$ of Euclidean geometry. As in that setting, to prove the theorem, we must first consider the simple case of a quadrilateral. To analyze that case, we need a generalization of the classical Thales' Theorem. The proof in this general setting is essentially the same.
\begin{Proposition}[Thales' Theorem]\label{prop:Thales}
Let $p_1, p_2, p_3$ be three distinct points in spacelike position lying on a $Q$--circle $S$ and let $p_4$ and $p_5$ be points on the same side of the line $\overline{p_1 p_2}$ as $p_3$ and which lie inside and outside of $S$ respectively. Further assume that $p_3, p_4, p_5$ all lie in the same connected component of the set of points that are in spacelike position with respect to $p_1$ and $p_2$. Then $\measuredangle p_1 p_4 p_2 > \measuredangle p_1 p_3 p_2 > \measuredangle p_1 p_5 p_2$.
\end{Proposition}
\begin{figure}[h]
{
\centering
\def6.0cm{6.0cm}
\input{Thales_tex}
}
\caption{In the proof of Proposition~\ref{prop:Thales}, the angle $\measuredangle p_1 p_3 p_2$ is locally constant as $p_3$ is varied, so WLOG we may assume $\Delta p_1 p_3 p_2$ contains $\Delta p_1 p_4 p_2$.}
\end{figure}
\begin{proof}
In the case that $Q$ is non-degenerate (so $Q = Q_{2,0}$ or $Q = Q_{1,1}$), the $Q$--circle has a center $O$. Observe that the triangles $\Delta O p_1 p_3$ and $\Delta O p_2 p_3$ are isosceles, from which it easily follows that the angle $\measuredangle p_1 p_3 p_2$ is locally constant as $p_3$ is varied along $S$. So without loss in generality we may assume that $\Delta p_1 p_2 p_4 \subset \Delta p_1 p_2 p_3 \subset \Delta p_1 p_2 p_5$ and the result follows.
In the case $Q = Q_{1,0,1}$, the argument is almost the same. However, in this case a $Q$--circle $S$ does not have a center (the center is at $\infty$).
Nonetheless, the claim that the angle $\measuredangle p_1 p_3 p_2$ is locally constant as $p_3$ is varied along $S$ still holds by taking the derivatives of the same fact for $Q = Q_{2,0}$ as rescaled $Q_{2,0}$-circles converge to $S$ as in Lemma~\ref{lem:converging-balls}. So again without loss in generality we may assume that $\Delta p_1 p_2 p_4 \subset \Delta p_1 p_2 p_3 \subset \Delta p_1 p_2 p_5$, and the result follows using the strict inequality of Proposition~\ref{prop:strict-angles}.
\end{proof}
\begin{Lemma}[Angle optimization for quadrilaterals]\label{lem:quads}
Consider a convex quadrilateral $R$ whose vertices $X$ are in spacelike general position with respect to $Q$. Then, of the two triangulations of $X$, the $Q$--Delaunay triangulation $\mathrm{Del}_Q(X)$ is the fatter triangulation.
\end{Lemma}
\begin{proof}
The proof is nearly identical to the proof in the classical Euclidean case $Q = Q_{2,0}$.
Let $p_1, p_2, p_3, p_4$ denote the points of $X$, cyclically ordered around the boundary of $R$. Without loss in generality, the minimum (which a priori might not be unique) of the twelve angles formed by any three of the four points is $\measuredangle p_1 p_2 p_4$. Note that if $Q = Q_{1,1}$ or $Q = Q_{1,0,1}$, then $\measuredangle p_1 p_2 p_4$ will be negative. In that case, $p_3$ and $p_4$ must lie in the same connected component of the set of points in spacelike position with respect to $p_1$ and $p_2$. Since $p_3$ does not lie on the $Q$--circle $S$ passing through $p_1, p_2, p_4$, it lies either inside or outside $S$. By Proposition~\ref{prop:Thales}, $p_3$ must lie inside $S$. Hence, of the two triangulations of $R$, the one containing $\Delta p_1 p_2 p_4$ both fails to be the fatter of the two and fails to be the $Q$--Delaunay triangulation.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:angle-optimization}]
Let $\mathcal T$ be a triangulation of $X$ with optimal angle sequence. In other words $\mathcal T$ is the fattest triangulation or possibly tied for fattest.
Consider any quadrilateral $R$ in $\mathcal T$. We will show that the empty $Q$--circle condition holds for~$R$, meaning that each of the two triangles of $R$ has the property that its circumscribed $Q$--circle does not contain the fourth point of $R$. This is true automatically if $R$ is not convex. So assume $R$ is convex. Then the optimality of $\mathcal T$ implies that the triangulation of $R$ also optimizes the angle sequence (six angles) for $R$. Hence, by Lemma~\ref{lem:quads}, the empty $Q$--circle condition holds for $R$.
Since each quadrilateral satisfies the empty circle condition, it follows that the polygonal surface obtained by lifting the triangles of $\mathcal T$ up to the graph of $Q$ in $\mathbb R^3$ is locally convex along all edges.
It is therefore a convex surface. It now follows that $\mathcal T$ must be the $Q$--Delaunay triangulation.
\end{proof}
\section{Outline of applications}
\label{sc:5}
We now briefly outline some possible applications of higher signature Delaunay triangulations.
\subsection{Triangulations in Minkowski space}
The most direct application is to define triangulations with a given vertex
sets in spacelike position in Minkowski space, or in other constant curvature
Lorentz spaces.
Suppose for instance that $\Sigma\subset {\mathbb R}^{d-1,1}$ is a spacelike
hypersurface in the $d$-dimensional Minkowski space, and let $X$ be a
finite set of points of $\Sigma$. The $Q$-Delaunay decomposition with
vertex set $X$ --- where $Q$ denotes the Minkowski scalar product ---
provides a triangulation of the convex hull of $X$ which we expect to
have good properties with respect to the ambient geometric structure.
\subsection{Analysis of discrete functions}
There are possible applications of the higher signature Delaunay decomposition not only
for sets of points in Minkoski space, but also in other situations where one
dimension (or more) plays a role fundamentally different from the others.
Consider for instance a finite set $X_0\in {\mathbb R}^{d-1}$ and a $k$-Lipschitz function
$u:X_0\to {\mathbb R}$ obtained by sampling a function $v:{\mathbb R}^{d-1}\to {\mathbb R}$
at the points of $X_0$. After multiplying $u$ by a constant, we can suppose
that $u$ is $k$-Lipschitz, for some $k<1$. Let $X\subset {\mathbb R}^d$ be the graph of
$u$. We can then consider the $Q$-Delaunay decomposition with vertex set $X$,
where
$$ Q = dx_1^2 + \cdots + dx_{d-1}^2- dx_d^2~. $$
We expect that the combinatorics of $\mathrm{Del}_Q(X)$ can be useful in analysing
the points of $X_0$ where $u$ takes particularly interesting or relevant
values. Typically, points of $X$ which are vertices of ``large'' simplices
of $\mathrm{Del}_Q(X)$ could be particularly relevant.
For functions which are not {\em a priori} Lipschitz, it might be more
relevant to consider the $Q'$-Delaunay decomposition with vertex set $X$,
where $Q'$ is the degenerate quadric
$$ Q' = dx_1^2 + \cdots + dx_{d-1}^2~. $$
\subsection{Proximity Graphs}
Proximity graphs are undirected graphs in which the points spanning an edge are close in some defined sense.
\subsubsection{Minimum Spanning Tree and Traveling Salesperson Cycle}
Given a finite set $X \subset {\mathbb R}^d$, a {\em Euclidean Minimum Spanning Tree} $\mathrm{MST}(X)$ of $X$ is an (undirected) graph with
minimum summed Euclidean edge lengths that connects all points in $X$ and which has only
the points of $X$ as vertices. It is easy to see that it has no cycle
(that is, that it is really a tree).
We generalize this notion to the setting of points $X$ in spacelike and general position with respect to a quadratic form $Q$ on $\mathbb R^d$. We may measure distance between two points $x_1, x_2 \in X$ by the function $d_Q(x_1, x_2) = \sqrt{Q(x_1-x_2)}$. As in the Euclidean case, we define a \emph{$Q$-minimum spanning
tree} $\mathrm{MST}_Q(X)$ of $X$ as a tree with spacelike edges of minimum total length
having as vertices exactly the points of $X$.
Any such tree satisfies the following remarkable property, well-known in the case $Q$ is the Euclidean norm.
\begin{Proposition}\label{prop:mst}
Let $X$ be in spacelike and general position with respect to the quadratic form $Q$. Then a $Q$-minimum spanning tree
$\mathrm{MST}_Q(X)$ is a sub-graph of the one-skeleton of the $Q$-Delaunay triangulation $\mathrm{Del}_Q(X)$.
\end{Proposition}
\begin{proof}
We need only to treat the case that $Q$ is non-degenerate. The case of $Q$ degenerate follows from Theorem \ref{thm:rescaled} by approximating $Q$ by a one-parameter family of non-degenerate quadratic forms.
Consider an edge $[x, y]$ of $\mathrm{MST}_Q(X)$ spanned by two vertices $x, y \in X$. Let $c \in [x, y]$ be the center point of the interval and consider the $Q$-ball $B$ centered at $c$ and of radius $d_Q(x, y)/2$. Specifically, $B$ is defined by the inequality $Q(z-c) < Q(y-c) = Q(x-c)$.
We show that all other points $z \in X$ lie outside of $B$. Suppose by contradiction that there is $z \in X$ such that $Q(z-c) < Q(y-c) = Q(x-c)$. We first show that $Q(z-x), Q(z-y) < Q(x-y)$. Indeed, if $Q$ is Euclidean, then this follows easily from the triangle inequality. However, we note that if $Q$ is not positive definite, the triangle inequality fails even for points in spacelike position. Set $D^2 = Q(x-y)$. Then
\begin{align*}
D^2 &= \langle x-y, x-y \rangle_Q \\ &= Q(x) + Q(y) +2\langle x,y\rangle_Q~,
\end{align*}
and therefore
$$ 2 \langle x, y \rangle_Q = - D^2 + Q(x) + Q(y)~. $$
Next, using that $Q\left(z-\frac{x}{2} - \frac{y}{2}\right) < \left(\frac{D}{2}\right)^2$ and the above expression for $\langle x, y\rangle_Q$, we have:
$$ Q(z) +\frac{1}{4}Q(x)+ \frac{1}{4}Q(y)-\langle z,x\rangle_Q -\langle z, y\rangle_Q +
\frac{1}{2}\langle x, y\rangle_Q < \frac{D^2}{4} $$
so that
$$ Q(z) + \frac{1}{2}Q(x) + \frac{1}{2}Q(y) - \langle z,x\rangle_Q - \langle z,y\rangle_Q <
\frac{D^2}{2} $$
and thus
$$ Q(z-x) + Q(z-y) < D^2~.$$
Hence, since both $Q(z-x), Q(z-y)$ are positive, we have that $Q(z-x), Q(z-y) < D^2 $ and so $d_Q(x,z), d_Q(y,z) < D= d_Q(x,y)$.
Next, there is some path in $\mathrm{MST}_Q(X)$ that connects $z$ to either $x$ or $y$ not passing through $[x,y]$. Without loss in generality, $z$ is connected to $y$ by such a path. Replacing the segment $[x,y]$ with $[z,x]$ yields a spanning tree with smaller total $Q$-length which contradicts that $\mathrm{MST}_Q(X)$ was minimal.
Hence there are no points of $X$ inside the ball $B$. Hence if $X'$ denotes the lift of $X$ to the graph of $Q$, as in the proof of Theorem~\ref{thm:Del-gen}, then the plane whose intersection with the graph of $Q$ projects to $\partial B$ is a support plane to the bottom of the convex hull $\mathscr C$ of $X'$ which contains the lift $[x',y']$ of the edge $[x,y]$. Hence $[x',y']$ is an edge of the convex hull of $X'$ and so $[x,y]$ is an edge of $\mathrm{Del}_Q(X)$.
\end{proof}
In the Euclidean plane, Proposition~\ref{prop:mst} has been used to provide algorithms
calculating $\mathrm{MST}(X)$ in time $O(n\log n)$, where $n = |X|$,
while algorithms not using the Delaunay decomposition of $X$ run in $O(n^2)$. In higher dimension, that is, if $d\geq 3$, finding an optimal algorithm remains an open problem.
For higher signatures, finding an optimal algorithm to construct a minimum spanning tree for a set
$X$ of points in spacelike position appears to be an open question.
We did not investigate whether
algorithms that apply in the Euclidean case extend to higher signatures. It might be relatively easy in the Minkowski plane, since the spacelike condition is then
quite strong, but it is conceivable that Proposition~\ref{prop:mst} can be used for this question
in the 3-dimensional Minkowski space.
\subsubsection{Relative neighborhood and Gabriel graph}
There are also natural higher signature analogues of the relative neighborhood graph, introduced and studied by Toussaint~\cite{tou_the} in 1980 in the setting of the Euclidean plane, and the Gabriel graph, introduced by Gabriel--Sokal~\cite{gab_ane} in 1969 in the setting of Euclidean space of any dimension.
Let $X$ be a finite set in $\mathbb R^d$ which is in spacelike position. Then
the {\em relative neighborhood graph} $\mathrm{RNG}_Q(X)$ of $X$ is the
(undirected) graph which has an edge connecting $x$ to $y$ if $d_Q(x,y) < \max\{d_Q(z,x), d_Q(z,y)\}$ for all $z \in X \setminus\{x,y\}$.
Next assume $Q$ is non-degenerate. The \emph{Gabriel graph} $\mathrm{GG}_Q(X)$ of $X$ has an edge $[x,y]$ for $x,y \in X$ if the open $Q$-ball whose diameter is the segment $[x,y]$ is empty of all other points of $X$. This notion does not seem to have an analogue for $Q$ degenerate. It is straightforward from the proof of Proposition~\ref{prop:mst} to show:
\begin{Proposition} In the case $Q$ non-degenerate,
$$\mathrm{MST}_Q(X) \subset \mathrm{RNG}_Q(X) \subset \mathrm{GG}_Q(X) \subset \mathrm{Del}_Q(X).$$
In the case $Q$ degenerate,
$$\mathrm{MST}_Q(X) \subset \mathrm{RNG}_Q(X) \subset \mathrm{Del}_Q(X).$$
\end{Proposition}
\subsection{Minimizing interpolation error}
The Delaunay triangulation $\mathrm{Del}_{Q}(X)$ can also be characterized from a functional approximation point of view. For example, in the Euclidean setting, Lambert \cite{lam_the} showed that the classical Delaunay triangulation maximizes the arithmetic mean of the radius of inscribed circles of the triangles, while Rippa \cite{rip_min} proved that it minimizes the integral of the squared gradient.
Let $\Omega \subset {\mathbb R}^d$ be a bounded domain, let $T$ be a triangulation of $\Omega$, and let $f$ be a function defined on $\Omega$. We define
$$\mathcal{Q}(T, f, p) = \|f-\hat{f}_{T}\|_{Q, L^p(\Omega)}~,$$
where $\hat{f}_{T}$ is the linear interpolation of $f$ based on the triangulation $T$ of $\Omega$, and where we use the quadratic form $Q$.
The following result, due to Chen--Xu \cite{che_opt} in the Euclidean setting, characterizes the $Q$-Delaunay triangulation as the optimal triangulation for the piecewise linear interpolation over a given point set $X$ of the function $Q(x)$. The proof of Chen--Xu, which uses the convex hull interpretation of the Delaunay triangulation, is easily generalized to the higher signature setting under the additional hypothesis that the points are in spacelike position.
\begin{Theorem}
Let $X\subset {\mathbb R}^d$ be a finite set in spacelike position with respect to $Q$, then
$$\mathcal{Q}(\mathrm{Del}_{Q}(X), Q(\cdot), p) = \min_{T\in \mathcal{P}(X)}\mathcal{Q}(T, Q(\cdot), p),\text{ for } 1\leq p\leq \infty~,$$
where $\mathcal{P}(X)$ is the set of all triangulations that have $X$ as vertices and $\Omega$ is the convex hull of $X$.
\end{Theorem}
\subsection{Other applications}
There are many more other well-known and important applications/generalizations of the classical
Delaunay triangulation, for example the notion of $\alpha$--shapes \cite{ede_ont} or
$\beta$--skeletons \cite{kir_afr}, which have important applications in pattern recognition,
digital shape sampling and processing, and structural molecular biology, among others.
We encourage the reader to investigate generalizations of such constructions to the
higher signature setting as needed.
\subsection{Implementation}
There are several efficient algorithms that can be used to compute the Delaunay triangulation
of a set of points in the Euclidean plane or in higher-dimensional space, for instance
an incremental flipping algorithm \cite{ede_sha} or a divide-and-conquer algorithm
\cite{cig_mon}, both
running in $O(n\log(n))$ in the plane (see \cite{boi_dev} for efficient implementations).
We have not investigated the extent to which these algorithms generalize readily to the setting of $Q$--Delaunay triangulations, nor have we investigated whether similar efficiency can be achieved. Of course any convex hull algorithm may applied to directly calculate $Q$--Delaunay decompositions as in the proof of Theorem~\ref{thm:Del-gen}.
In an online appendix to this
paper\footnote{\texttt{http://math.uni.lu/schlenker/programs/highersign/highersign.html}},
we provide a crude implementation of the computation of Delaunay decomposition for the
Euclidean, Minkowski and degenerate bilinear forms in ${\mathbb R}^3$. This implementation is
provided for illustration only, and is not intended to be computationally efficient.
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
| {
"timestamp": "2016-02-12T02:13:13",
"yymm": "1602",
"arxiv_id": "1602.03865",
"language": "en",
"url": "https://arxiv.org/abs/1602.03865",
"abstract": "A Delaunay decomposition is a cell decomposition in R^d for which each cell is inscribed in a Euclidean ball which is empty of all other vertices. This article introduces a generalization of the Delaunay decomposition in which the Euclidean balls in the empty ball condition are replaced by other families of regions bounded by certain quadratic hypersurfaces. This generalized notion is adaptable to geometric contexts in which the natural space from which the point set is sampled is not Euclidean, but rather some other flat semi-Riemannian geometry, possibly with degenerate directions. We prove the existence and uniqueness of the decomposition and discuss some of its basic properties. In the case of dimension d = 2, we study the extent to which some of the well-known optimality properties of the Euclidean Delaunay triangulation generalize to the higher signature setting. In particular, we describe a higher signature generalization of a well-known description of Delaunay decompositions in terms of the intersection angles between the circumscribed circles.",
"subjects": "Computational Geometry (cs.CG); Discrete Mathematics (cs.DM); Differential Geometry (math.DG); Geometric Topology (math.GT)",
"title": "Higher signature Delaunay decompositions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9875683498785867,
"lm_q2_score": 0.7185943805178139,
"lm_q1q2_score": 0.7096610666000026
} |
https://arxiv.org/abs/2011.01291 | Singularity of sparse random matrices: simple proofs | Consider a random $n\times n$ zero-one matrix with "density" $p$, sampled according to one of the following two models: either every entry is independently taken to be one with probability $p$ (the "Bernoulli" model), or each row is independently uniformly sampled from the set of all length-$n$ zero-one vectors with exactly $pn$ ones (the "combinatorial" model). We give simple proofs of the (essentially best-possible) fact that in both models, if $\min(p,1-p)\geq (1+\varepsilon)\log n/n$ for any constant $\varepsilon>0$, then our random matrix is nonsingular with probability $1-o(1)$. In the Bernoulli model this fact was already well-known, but in the combinatorial model this resolves a conjecture of Aigner-Horev and Person. | \section{Introduction}
Let $M$ be an $n\times n$ random matrix with i.i.d.\ $\Ber(p)$
entries (meaning that each entry $M_{ij}$ satisfies $\Pr(M_{ij}=1)=p$
and $\Pr(M_{ij}=0)=1-p$). It is a famous theorem of Koml\'os\ \cite{Kom67,Kom68}
that for $p=1/2$ a random Bernoulli matrix is \emph{asymptotically
almost surely} nonsingular: that is, $\lim_{n\to\infty}\Pr(M\text{ is singular})=0$.
Koml\'os' theorem can be generalised to sparse random Bernoulli matrices
as follows.
\begin{thm}
\label{thm:ber}Fix $\varepsilon>0$, and let $p=p(n)$
be any function of $n$ satisfying $\min(p,1-p)\geq (1+\varepsilon)\log n/n$.
Then for a random $n\times n$ random matrix $M$ with i.i.d.\ $\Ber(p)$
entries, we have
\[\lim_{n\to\infty}\Pr(M\text{ is singular})=0.\]
\end{thm}
\cref{thm:ber} is best-possible, in the sense that if $\min(p,1-p) \le(1-\varepsilon)\log n/n$,
then we actually have $\lim_{n\to\infty}\Pr(M\text{ is singular})=1$
(because, for instance, $M$ is likely to have two identical columns). That
is to say, $\log n/n$ is a \emph{sharp threshold} for singularity.
It is not clear when \cref{thm:ber} first appeared in print, but strengthenings
and variations on \cref{thm:ber} have been proved by several different
authors (see for example \cite{AE14,BR18,CV08,CV10}).
Next, let $Q$ be an $n\times n$ random matrix with independent rows,
where each row is sampled uniformly from the subset of vectors in
$\{ 0,1\} ^{n}$ having exactly $d$ ones ($Q$ is said
to be a random \emph{combinatorial} matrix). The study of such matrices was initiated by Nguyen\ \cite{Ngu13}, who proved that if $d=n/2$ then $Q$ is asymptotically almost surely
nonsingular (where $n\to\infty$ along the even integers). Strengthenings of Nguyen's theorem have been proved by several authors; see for example \cite{AP20,FJLS,Jai19,JSS20,Tra20}. Recently, Aigner-Horev and Person\ \cite{AP20} conjectured an analogue of \cref{thm:ber} for sparse random combinatorial matrices, which we prove in this note.
\begin{thm}
\label{thm:comb}Fix $\varepsilon>0$, and let $d=d(n)$
be any function of $n$ satisfying $\min(d,n-d)\geq (1+\varepsilon)\log n$.
Then for a $n\times n$ random zero-one matrix $Q$ with independent rows,
where each row is chosen uniformly among the vectors with $d$ ones, we have
\[
\lim_{n\to\infty}\Pr(Q\text{ is singular})\to0.
\]
\end{thm}
Just like \cref{thm:ber}, \cref{thm:comb} is best-possible in the sense that if $\min(d,n-d) \le(1-\varepsilon)\log n$,
then $\lim_{n\to\infty}\Pr(M\text{ is singular})=1$. \cref{thm:comb} improves on a result of Aigner-Horev and Person: they proved the same fact under the assumption that $\lim_{n\to \infty} d/(n^{1/2}\log^{3/2}n)=\infty$ (assuming that $d\le n/2$).
The structure of this note is as follows. First, in \cref{sec:general} we prove a simple and general lemma (\cref{lem:general}) which applies to any random matrix with i.i.d.\ rows. This lemma distills the essence of (a special case of) an argument due to Rudelson and Vershinyn~\cite{RV08}. Essentially, it shows that in order to prove \cref{thm:ber} and \cref{thm:comb}, one just needs to prove some relatively crude estimates about the typical structure of the vectors in the left and right kernels of our random matrices.
Then, in \cref{sec:ber} and \cref{sec:comb} we show how to use \cref{lem:general} to give simple proofs of \cref{thm:ber} and \cref{thm:comb}. Of course, \cref{thm:ber} is not new, but its proof is extremely simple and it serves as a warm-up for \cref{thm:comb}. It turns out that in order to analyse the typical structure of the vectors in the left and right kernel, we can work over $\ZZ_q$ for some small integer $q$ (in fact, we can mostly work over $\ZZ_2$). This idea is not new (see for example, \cite{AP20,CMMM19,Fer20,FJ19,FJLS,Hua18,Mes20,NW18a,NW18b}), but the details here are much simpler.
We remark that with a bit more work, the methods in our proofs can also likely be used to prove the conclusions of \cref{thm:ber} and \cref{thm:comb} under the weaker (and strictly best-possible) assumptions that $\lim_{n\to \infty}(\min(pn,n-pn)-\log n)=\infty$ and $\lim_{n\to \infty}(\min(d,n-d)-\log n)=\infty$. However, in this note we wish to emphasise the simple ideas in our proofs and do not pursue this direction.
\textit{Notation.} All logarithms are to base $e$. We use common asymptotic notation, as follows. For real-valued functions $f(n)$ and $g(n)$, we write $f=O(g)$ to mean that there is some constant $C>0$ such that $\vert f\vert \leq Cg$. If $g$ is nonnegative, we write $f=\Omega(g)$ to mean that there is $c>0$ such that $f \geq cg$ for sufficiently large $n$. We write $f=o(g)$ to mean that $f(n)/g(n)\to 0$ as $n\to\infty$.
\section{A general lemma}
\label{sec:general}
In this section we prove a (very simple) lemma which will give us
a proof scheme for both \cref{thm:ber} and \cref{thm:comb}. For a
vector $x$, let $\supp(x)$ (the \emph{support} of $x$)
be the set of indices $i$ such that $x_{i}\ne0$.
\begin{lem}
\label{lem:general}Let $\FF$ be a field, and let $A\in\FF^{n\times n}$
be a random matrix with i.i.d.\ rows $R_{1},\dots,R_{n}$. Let $\mathcal{P}\subseteq\FF^{n}$
be any property of vectors in $\FF^{n}$. Then for any $t\in\RR$,
the probability that $A$ is singular is upper-bounded by
\begin{align}
& \Pr(x^{T}A=0\text{ for some nonzero }x\in\FF^{n}\text{ with }|\supp(x)|<t)\label{eq:small-supp}\\
& \qquad+\frac{n}{t}\Pr(\text{there is nonzero }x\notin\mathcal{P}\text{ such that }x\cdot R_{i}=0\text{ for all }i=1,\dots,n-1)\label{eq:P}\\
& \qquad+\frac{n}{t}\max_{x\in\mathcal{P}}\Pr(x\cdot R_{n}=0)\label{eq:LO}
\end{align}
\end{lem}
\begin{proof}
Note that $A$ is singular if and only if there is a nonzero $x\in\FF^{n}$
satisfying $x^{T}A=0$. Let $\mathcal{E}_{i}$ be the event that $R_{i}\in\spn\{ R_{1},\dots,R_{i-1},R_{i+1},\dots,R_{n}\} $,
and let $X$ be the number of $i$ for which $\mathcal{E}_{i}$ holds.
Then by Markov's inequality and the assumption that the rows $R_{1},\dots,R_{n}$ are i.i.d., we have
\[
\Pr\left(x^{T}M=0\text{ for some }x\text{ with }|\supp\left(x\right)|\ge t\right)\le\Pr(X\ge t)\le\frac{\E X}{t}=\frac{n}{t}\Pr(\mathcal{E}_{n}).
\]
It now suffices to show that $\frac{n}{t}\Pr(\mathcal{E}_{n})$ is upper-bounded by the sum of the terms \cref{eq:P} and \cref{eq:LO}. Note that after sampling $R_1,\dots, R_{n-1}$, we can always choose a nonzero vector $x\in \FF^n$ with $x\cdot R_{i}=0$ for $i=1,\dots,n-1$. If the event $\mathcal{E}_{n}$ occurs, we must have $x\cdot R_n=0$. Distinguishing the cases $x\not\in\mathcal{P}$ and $x\in\mathcal{P}$ now gives the desired bound.
\end{proof}
\section{Singularity of sparse Bernoulli matrices: a simple
proof}
\label{sec:ber}
Let us fix $0<\eps<1$. We will take $t=cn$ for some small constant $c$ (depending on $\varepsilon$), and let $\mathcal{P}$
be the property $\{ x\in\QQ^{n}:|\supp (x)|\geq t\} $.
All we need to do is to show that the three terms \cref{eq:small-supp},
\cref{eq:P} and \cref{eq:LO} in \cref{lem:general} are each of the form $o(1)$. The following
lemma is the main part of the proof.
\begin{lem}
\label{lem:ber-normal}Let $R_{1},\dots,R_{n-1}$ be the first $n-1$
rows of a random $\Ber(p)$ matrix, with $\min(p,1-p)\geq (1+\varepsilon)\log n/n$.
There is $c>0$ (depending only on $\varepsilon$) such that with probability
$1-o(1)$, no nonzero vector $x\in\QQ^{n}$ with $|\supp(x)|<cn$
satisfies $R_{i}\cdot x=0$ for all $i=1,\dots,n-1$.
\end{lem}
\begin{proof}
If such a vector $x$ were to exist, we would be able to multiply by an integer and then
divide by a power of two to obtain a vector $v\in\ZZ^{n}$ with
at least one odd entry also satisfying $|\supp(v)|<cn$
and $R_{i}\cdot v=0$ for $i=1,\dots,n-1$. Interpreting $v$ as a vector in $\ZZ_{2}^n$, we would have $R_{i}\cdot v\equiv 0 \pmod{2}$ for $i=1,\dots,n-1$ and furthermore $v\in \ZZ_{2}^n$ would be a nonzero vector consisting of less than $cn$ ones. We show that such a vector $v$ is unlikely to exist (working over $\ZZ_{2}$ discretises the problem, so that we may use a union bound).
Let $p^*=\min(p,1-p)\geq (1+\varepsilon)\log n/n$. Consider any $v\in\{ 0,1\} ^{n}$ with $|\supp(v)|=s$.
Then $R_{i}\cdot v$ for $i=1,\dots,n-1$ are i.i.d.\ $\Bin(s,p)$ random variables. Let $P_{s,p}$ be the probability that a $\Bin(s,p)$ random variable is even, so
\begin{align*}
P_{s,p} & =\frac{1}{2}\left(\sum_{i=0}^{s}\binom{s}{i}p^{i}(1-p)^{s-i}+\sum_{i=0}^{s}\binom{s}{i}(-1)^{i}p^{i}(1-p)^{s-i}\right)\\
& =\frac12+\frac{(1-2p)^{s}}2\leq \begin{cases}
e^{-\left(1+o(1)\right)sp^*} & \text{if }sp^*=o(1),\\
e^{-\Omega(1)} & \text{if }sp^*=\Omega(1).
\end{cases}
\end{align*}
Taking $r=\delta/p^*$ for sufficiently small $\delta$ (relative to
$\varepsilon$), the probability that there exists nonzero $v\in\ZZ_{2}^n$
with $|\supp(v)|<cn$ and $R_{i}\cdot v\equiv 0 \pmod{2}$
for all $i=1,\dots,n-1$ is at most (recalling that $p^*\geq (1+\varepsilon)\log n/n$)
\begin{align*}
\sum_{s=1}^{cn}\binom{n}{s}P_{s,p}^{n-1} & \le\sum_{s=1}^{r}e^{s\log n-(1-\varepsilon/3)snp^*}+\sum_{s=r+1}^{cn}e^{s(\log(n/s)+1)-\Omega(n)}\\
& \le\sum_{s=1}^{\infty}n^{-s\varepsilon/3}+\sum_{s=1}^{cn}e^{n\left((s/n)(\log(n/s)+1)-\Omega(1)\right)}=o(1),
\end{align*}
provided $c$ is sufficiently small (relative to $\delta$).
\end{proof}
Taking $c$ as in \cref{lem:ber-normal}, we immediately see that the
term \cref{eq:P} is of the form $o\left(1\right)$. Observing that
the rows and columns of $M$ have the same distribution, and that
the event $x^{T}M=0$ is simply the event that $x\cdot C_{i}=0$ for
each column $C_{i}$ of $M$, it also follows from \cref{lem:ber-normal}
that the term \cref{eq:small-supp} is of the form $o\left(1\right)$.
Finally, the following straightforward generalisation of the well-known Erd\H os--Littlewood--Offord
theorem shows that the
term \cref{eq:LO} is of the form $o\left(1\right)$, which completes the proof of \cref{thm:ber}. This
lemma is the only nontrivial ingredient in the proof of \cref{thm:ber}. This lemma is a special case of \cite[Lemma~8.2]{CV08}, but it can also be quite straightforwardly deduced from the Erd\H os--Littlewood--Offord theorem itself.
\begin{lem}
\label{lem:L-O}Consider a (non-random) vector $x=(x_{1},\dots,x_{n})\in\RR^{n}$,
and let $\xi_{1},\dots,\xi_{n}$ be i.i.d.\ $\Ber(p)$
random variables, and let $p^*=\min(p,1-p)$. Then
\[
\max_{a\in \RR}\Pr(x_{1}\xi_{1}+\dots+x_{n}\xi_{n}=a)=O\left(\frac{1}{\sqrt{|\supp(x)|p^*}}\right).
\]
\end{lem}
\section{Singularity of sparse combinatorial matrices}
\label{sec:comb}
Let us again fix $0<\eps<1$. The proof of \cref{thm:comb} proceeds in almost exactly the same way
as the proof of \cref{thm:ber}, but there are three significant complications.
First, since the entries are no longer independent, the calculations
become somewhat more technical. Second, the rows and columns of $Q$ have different distributions,
so we need two versions of \cref{lem:ber-normal}: one for vectors in the left kernel and one for vectors in the right kernel. Third, the fact that
each row has exactly $d$ ones means that we are not quite as free
to do computations over $\ZZ_{2}$ (for example, if $d$ is even and
$v$ is the all-ones vector then we always have $Qv=0$ over $\ZZ_{2}$). For certain parts of the argument we will instead work over $\ZZ_{d-1}$.
Before we start the proof, the following lemma will allow us to restrict our attention to the case where $d\le n/2$, which will be convenient.
\begin{lem}
Let $Q\in \RR^{n\times n}$ be a matrix whose every row has sum $d$, for some $d\notin\{0,n\}$. Let $J$ be the $n\times n$ all-ones matrix. Then $Q$ is singular if and only if $J-Q$ is singular.
\end{lem}
\begin{proof}
Note that the all-ones vector $\one$ is in the column space of $Q$ (since the sum of all columns of $Q$ equals $d\one$). Hence every column of $J-Q$ is in the column space of $Q$. Therefore, if $Q$ is singular, then $J-Q$ is singular as well. The opposite implication can be proved the same way.
\end{proof}
In the rest of the section we prove \cref{thm:comb} uner the assumption that $(1+\eps)\log n\le d\le n/2$ (note that if $Q$ is a uniformly random zero-one matrix with every row having exactly $d$ ones, then $J-Q$ is a uniformly random zero-one matrix with every row having exactly $n-d$ ones).
The first ingredient we will need is an analogue of \cref{lem:L-O} for ``combinatorial''
random vectors. In addition to the notion of the support
of a vector, we define a \emph{fibre} of a vector to be the set of all
indices whose entries are equal to a particular value.
\begin{lem}
\label{lem:comb-LO}Let $0\le d\le n/2$, and consider a (non-random) vector $x\in\RR^{n}$ whose largest fibre
has size $n-s$, and let $\gamma\in\{ 0,1\} ^{n}$ be a random
zero-one vector with exactly $d$ ones. Then
\[
\max_{a\in \RR}\Pr(x\cdot\gamma=a)=O\left(\sqrt{n/(sd)}\right).
\]
\end{lem}
We deduce \cref{lem:comb-LO} from
the $p=1/2$ case of \cref{lem:L-O} (that is, from the Erd\H os--Littlewood--Offord
theorem~\cite{Erd45}).
\begin{proof}
The case $p=1/2$ is treated in \cite[Proposition~4.10]{LLTTY17}; this proof proceeds along similar lines. Let $p=d/n\leq 1/2$. We realise the distribution
of $\gamma$ as follows. First choose $d=pn$ random disjoint pairs
$\left(i_{1},j_{1}\right),\dots,\left(i_{pn},j_{pn}\right)\in\left\{ 1,\dots,n\right\} ^{2}$ (each having distinct entries),
and then determine the 1-entries in $\gamma$ by randomly choosing one
element from each pair.
We first claim that with probability $1-e^{-\Omega\left(sp\right)}$,
at least $\Omega\left(sp\right)$ of our pairs $\left(i,j\right)$
have $x_{i}\ne x_{j}$ (we say such a pair is \emph{good}). To see
this, let $I$ be a union of fibres of $x$, chosen such that $\left|I\right|\ge n/3$
and $n-\left|I\right|\geq s/3$ (if $s\le 2n/3$ we can
simply take $I$ to be the largest fibre of $x$, and otherwise we
can greedily add fibres to $I$ until $\left|I\right|\ge n/3$). To prove our claim, we will prove that in fact with the desired probability there are $\Omega(sp)$ different $\ell$ for which $i_\ell\notin I$ and $j_\ell\in I$.
Let $f=\ceil{pn/6}$ and let $S$ be the set of $\ell\le f$ for which
$i_{\ell}\notin I$. So, $\left|S\right|$ has a hypergeometric distribution
with mean $(n-|I|)f/n=\Omega\left(sp\right)$, and by a Chernoff bound (see for
example \cite[Theorem~2.10]{JLR00}), we have $\left|S\right|=\Omega\left(sp\right)$ with
probability $1-e^{-\Omega\left(sp\right)}$. Condition on such an outcome
of $i_{1},\dots,i_{f}$. Next, let $T$ be the set of $\ell\in S$ for which $j_{\ell}\in I$.
Then, conditionally, $\left|T\right|$ has a hypergeometric distribution
with mean at least $(|I|-f)|S|/n=\Omega\left(sp\right)$, so again using a Chernoff
bound we have $\left|T\right|=\Omega\left(sp\right)$ with probability
$1-e^{-\Omega\left(sp\right)}$, as claimed.
Now, condition on an outcome of our random pairs such that at least
$\Omega(sp)$ of them are good. Let $\xi_{\ell}$ be the indicator random variable for the event that
$i_{\ell}$ is chosen from the pair $\left(i_{\ell},j_{\ell}\right)$,
so $\xi_{1},\dots,\xi_{pn}$ are i.i.d.\ $\Ber\left(1/2\right)$
random variables, and $x\cdot\gamma=a$ if and only if
\[
(x_{i_{1}}-x_{j_{1}})\xi_{1}+\dots+(x_{i_{pn}}-x_{j_{pn}})\xi_{pn}=a-x_{j_{1}}-\dots-x_{j_{pn}}.
\]
Under our conditioning, $\Omega\left(sp\right)$ of the $x_{i_{\ell}}-x_{j_{\ell}}$
are nonzero, so by \cref{lem:L-O} with $p=1/2$, conditionally we have $\Pr\left(x\cdot\gamma=a\right)\le O\left(1/\sqrt{sp}\right)$.
We deduce that unconditionally
\[
\Pr(x\cdot\gamma=a)\le e^{-\Omega(sp)}+O(1/\sqrt{sp})=O(1/\sqrt{sp})=O(\sqrt{n/(sd)}),
\]
as desired.
\end{proof}
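Since the lemma concerns exact point probabilities, it can be checked by brute force for small parameters by enumerating all $\binom{n}{d}$ supports of $\gamma$ (an added illustration; the helper name is ours):

```python
import itertools, math
from fractions import Fraction

def max_atom_comb(x, d):
    """Exact max_a Pr(x . gamma = a) for gamma uniform over length-n 0/1
    vectors with exactly d ones, by enumerating all C(n, d) supports."""
    n = len(x)
    counts = {}
    for S in itertools.combinations(range(n), d):
        s = sum(x[i] for i in S)
        counts[s] = counts.get(s, 0) + 1
    return Fraction(max(counts.values()), math.comb(n, d))

n, d = 12, 4
x = [0] * 6 + [1, 2, 3, 4, 5, 6]   # largest fibre (the zeros) has size 6, so s = 6
s = 6
m = max_atom_comb(x, d)
# The bound sqrt(n/(s*d)) here equals sqrt(12/24) ~ 0.707 (constant taken as 1).
assert float(m) <= math.sqrt(n / (s * d))
```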
The proof of \cref{thm:comb} then reduces to the following two lemmas. Indeed, for a constant $c>0$ (depending on $\varepsilon$) satisfying the statements in \cref{lem:comb-normal-hard,lem:comb-normal}, we can take $t=cn/\log d$, and
\[\mathcal P=\{x\in \QQ^n:x\text{ has largest fibre of size at most }(1-c/\log d)n\}.\]
We can then apply \cref{lem:general}. By \cref{lem:comb-normal-hard}, the term \cref{eq:small-supp} is bounded by $o(1)$, by \cref{lem:comb-normal} the term \cref{eq:P} is bounded by $(n/t)\cdot n^{-\Omega(1)}=(\log d/c)\cdot n^{-\Omega(1)}=o(1)$, and by \cref{lem:comb-LO} the term \cref{eq:LO} is bounded by $(n/t)\cdot O\left(\sqrt{n\log d/(cnd)}\right)= O(\log^{3/2}d/\sqrt d)=o(1)$.
\begin{lem}
\label{lem:comb-normal-hard}Let $Q$ be a random combinatorial matrix (with $d$ ones in each row),
with $(1+\varepsilon)\log n\le d\le n/2$. There
is $c>0$ (depending only on $\varepsilon$) such that with probability
$1-o(1)$, there is no nonzero vector $x\in\QQ^{n}$ with
$|\supp(x)|<cn/\log d$ and $x^{T}Q=0$.
\end{lem}
\begin{lem}
\label{lem:comb-normal}Let $R_{1},\dots,R_{n-1}$ be the first $n-1$ rows of a random combinatorial matrix (with $d$ ones in each row), with $(1+\varepsilon)\log n\le d\le n/2$. There is $c>0$ (depending only on $\varepsilon$)
such that with probability $1-n^{-\Omega(1)}$, every nonzero $x\in \QQ^n$ satisfying $R_{i}\cdot x=0$
for all $i=1,\dots,n-1$ has largest fibre of size at most $(1-c/\log d)n$.
\end{lem}
\begin{proof}[Proof of \cref{lem:comb-normal-hard}]
As in \cref{lem:ber-normal}, it suffices to work over $\ZZ_{2}$.
Let $C_{1},\dots,C_{n}$ be the columns of $Q$, consider any $v\in\ZZ_{2}^{n}$
with $|\supp(v)|=s$, and let $\mathcal{E}_v$
be the event that $C_{i}\cdot v\equiv 0\pmod{2}$ for $i=1,\dots,n$. Note that $\mathcal{E}_v$
only depends on the submatrix $Q_v$ of $Q$ containing only those
rows $j$ with $v_{j}=1$ (and $\mathcal{E}_v$ is precisely the event that every column of $Q_v$ has an even sum).
Let $p=d/n\le 1/2$, let $M_v$ be a random $s\times n$ matrix with i.i.d.\ $\Ber(p)$
entries, and let $\mathcal{E}_v'$
be the event that every column in $M_v$ has an even sum. Note that $M_v$
is very similar to $Q_v$, so the probability of $\mathcal E_v$ is very similar to the probability of $\mathcal E_v'$. Indeed, writing $R_{1},\dots,R_{s}$ and $R_{1}',\dots,R_{s}'$
for the rows of $Q_v$ and $M_v$ respectively, and writing $s_j=|\supp(R_j')|$, for each $j$ we have $s_j\sim \Bin(n,p)$, so an elementary computation using Stirling's formula shows that $\Pr(s_j=d)=\Omega(1/\sqrt{d})=e^{-O(\log d)}$. Hence
\[
\Pr(\mathcal{E}_v)=\Pr(\mathcal{E}_v'\,|\,s_j=d\text{ for all }j)\le\Pr(\mathcal{E}_v')/\Pr(s_j=d\text{ for all }j)=e^{O(s\log d )}\Pr(\mathcal{E}_v')=e^{O(s\log (pn))}\Pr(\mathcal{E}_v').
\]
Recalling the quantity $P_{s,p}$ from the proof of \cref{lem:ber-normal},
we have
\[
\Pr(\mathcal{E}_v')=P_{s,p}^{n}=\begin{cases}
e^{-\left(1+o(1)\right)spn} & \text{if }sp=o(1),\\
e^{-\Omega(n)} & \text{if }sp=\Omega(1),
\end{cases}
\]
so if $s\le cn/\log d=cn/\log(pn)$ for small $c>0$, then we also have
\[
\Pr(\mathcal{E}_v)\leq \begin{cases}
e^{-\left(1+o(1)\right)spn} & \text{if }sp=o(1),\\
e^{-\Omega(n)} & \text{if }sp=\Omega(1).
\end{cases}
\]
Let $P_s=\Pr(\mathcal{E}_v)$ (which only depends on $s$). We can now conclude the proof in exactly the same way as in \cref{lem:ber-normal}. Taking $r=\delta/p$ for sufficiently small $\delta$ (relative to
$\varepsilon$), the probability that there exists nonzero $v\in\ZZ_{2}^n$
with $|\supp(v)|<cn/\log d$ and $C_{i}\cdot v\equiv 0\pmod{2}$
for all $i=1,\dots,n$ is at most
\begin{align*}
\sum_{s=1}^{cn/\log d}\binom{n}{s}P_{s} & \le\sum_{s=1}^{r}e^{s\log n-(1-\varepsilon/3)snp}+\sum_{s=r+1}^{cn/\log d}e^{s(\log(n/s)+1)-\Omega(n)}\\
& \le\sum_{s=1}^{\infty}n^{-s\varepsilon/3}+\sum_{s=1}^{cn/\log d}e^{n\left((s/n)(\log(n/s)+1)-\Omega(1)\right)}=o(1),
\end{align*}
provided $c$ is sufficiently small (relative to $\delta$).
\end{proof}
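The Stirling estimate $\Pr(s_j=d)=\Omega(1/\sqrt d)$ used in the proof above is straightforward to check numerically (an added sketch; the function name is ours):

```python
import math
from fractions import Fraction

def pmf_binomial_at_mean(n, p):
    """Exact Pr(Bin(n, p) = np) for rational p with np an integer."""
    d = int(n * p)
    assert n * p == d
    return math.comb(n, d) * p**d * (1 - p)**(n - d)

n, p = 2000, Fraction(1, 20)
d = int(n * p)                          # d = 100
pmf = pmf_binomial_at_mean(n, p)
# The local central limit theorem gives pmf ~ 1/sqrt(2*pi*p*(1-p)*n),
# which is Theta(1/sqrt(d)); the window below is a generous numeric check.
assert 0.2 / math.sqrt(d) < float(pmf) < 0.6 / math.sqrt(d)
```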
We will deduce \cref{lem:comb-normal} from the following lemma.
\begin{lem}
\label{lem:hypergeometric}Suppose $p\le1/2$ and $pn\to\infty$, and let $\gamma\in\{ 0,1\} ^{n}$ be
a random vector with exactly $pn$ ones. Let $q\geq 2$ be an integer and consider a (non-random) vector $v\in\ZZ_{q}^{n}$
whose largest fibre has size $n-s$. Then for any $a\in \ZZ_q$ we have $\Pr(v\cdot\gamma\equiv a \pmod{q})\le P_{p,n,s}$
for some $P_{p,n,s}$ (only depending on $p$, $n$ and $s$) satisfying
\[
P_{p,n,s}=\begin{cases}
e^{-\Omega(1)} & \text{when }sp=\Omega(1),\\
e^{-\left(1-o(1)\right)sp} & \text{when }sp=o(1).
\end{cases}
\]
\end{lem}
\begin{proof}
As in the proof of \cref{lem:comb-LO}, we realise the distribution of $\gamma$ by first choosing $pn$ random disjoint pairs $(i_{1},j_{1}),\dots,(i_{pn},j_{pn})\in\{ 1,\dots,n\} ^{2}$,
and then randomly choosing one element from each pair to comprise the 1-entries of $\gamma$.
Let $\mathcal{E}$ be the event that $v_{i}\ne v_{j}$ for at least
one of our random pairs $(i,j)$. Then $\Pr(v\cdot\gamma\equiv a \pmod {q}\,|\,\mathcal{E})\le1/2$.
So, it actually suffices to prove that
\[
\Pr(\mathcal{E})\ge\begin{cases}
\Omega(1) & \text{when }sp=\Omega(1),\\
\left(2-o(1)\right)sp & \text{when }sp=o(1).
\end{cases}
\]
If $s\ge n/3$ (this can only occur if $sp=\Omega(1)$),
then we can choose $J\su \{1,\dots,n\}$ to be a union of fibres of the vector $v\in \ZZ_q^n$ such that $n/3\le|J|\le2n/3$.
In this case,
\[
\Pr(\mathcal{E})\ge\Pr(i_{1}\in J,\,j_{1}\notin J)=\Omega(1),
\]
as desired. So, we assume $s<n/3$, and let $I\su \{1,\dots,n\}$ be the set of indices in the largest fibre of $v$ (so $|I|=n-s$).
Note that $\mathcal{E}$ occurs whenever there is a pair $\{ i_{k},j_{k}\} $
with exactly one element in $I$.
Let $\mathcal{F}$ be the event that $i_{k}\in I$ for all $k=1,\dots,pn$. We have
\[
\Pr(\mathcal{E}\,|\,\mathcal{F})\ge1-(1-s/n)^{pn}=\begin{cases}
\Omega(1) & \text{when }sp=\Omega(1),\\
\left(1-o(1)\right)sp & \text{when }sp=o(1),
\end{cases}
\]
and
\[
\Pr(\mathcal{E}\,|\,\overline{\mathcal{F}})\ge(n-s-pn)/(n-pn)=\begin{cases}
\Omega(1) & \text{when }sp=\Omega(1),\\
1-o(1) & \text{when }sp=o(1).
\end{cases}
\]
This already implies that if $sp=\Omega(1)$, then $\Pr(\mathcal{E})=\Omega(1)$
as desired. If $sp=o(1)$ then $\Pr(\mathcal{F})\le(1-s/n)^{pn}=1-\left(1+o(1)\right)sp$,
so
\[
\Pr(\mathcal{E})=\Pr(\mathcal{F})\Pr(\mathcal{E}\,|\,\mathcal{F})+\Pr(\overline{\mathcal{F}})\Pr(\mathcal{E}\,|\,\overline{\mathcal{F}})\geq \left(2-o(1)\right)sp,
\]
as desired.
\end{proof}
\begin{proof}[Proof of \cref{lem:comb-normal}]
Let $q=d-1$. It suffices to prove that with probability $1-o(1)$ there is no nonconstant ``bad'' vector
$v\in\ZZ_{q}^n$ whose largest fibre has size at least $(1-c/\log q)n$
and which satisfies $R_{i}\cdot v\equiv 0\pmod{q}$ for all $i=1,\dots,n-1$. (Note that by the choice of $q$, if $v\in \ZZ_{q}^n$ is constant and nonzero, then $R_{1}\cdot v\equiv 0\pmod{q}$ is impossible, since each row sums to $d=q+1\equiv 1\pmod{q}$.)
Let $p=d/n$, consider any $v\in\ZZ_{q}^n$ whose largest fibre has size $n-s$, and
consider any $i\in \{1,\dots,n-1\}$. Then $R_{i}\cdot v$ is of the form in \cref{lem:hypergeometric},
so taking $r=\delta/p$ for sufficiently small $\delta$ (relative to $\varepsilon$), the probability
that such a bad vector exists is at most
\begin{align*}
\sum_{s=1}^{c'n/\log q}\binom{n}{s}q^{s+1}P_{p,n,s}^{n-1} & \le\sum_{s=1}^{r}e^{s\log n+(s+1)2\sqrt{pn}-(1-\varepsilon/3)spn}+\sum_{s=r+1}^{c'n/\log q}e^{s(\log(n/s)+1)+cn+2\sqrt{pn}-\Omega(n)}\\
& \le\sum_{s=1}^{\infty}n^{-s\varepsilon/3}+\sum_{s=1}^{c'n/\log q}e^{n\left((s/n)(\log(n/s)+1)-\Omega(1)\right)}=n^{-\Omega(1)},
\end{align*}
provided $c'>0$ is sufficiently small (relative to $\delta$) and $n$ is sufficiently large.
\end{proof}
| {
"timestamp": "2020-11-04T02:02:26",
"yymm": "2011",
"arxiv_id": "2011.01291",
"language": "en",
"url": "https://arxiv.org/abs/2011.01291",
"abstract": "Consider a random $n\\times n$ zero-one matrix with \"density\" $p$, sampled according to one of the following two models: either every entry is independently taken to be one with probability $p$ (the \"Bernoulli\" model), or each row is independently uniformly sampled from the set of all length-$n$ zero-one vectors with exactly $pn$ ones (the \"combinatorial\" model). We give simple proofs of the (essentially best-possible) fact that in both models, if $\\min(p,1-p)\\geq (1+\\varepsilon)\\log n/n$ for any constant $\\varepsilon>0$, then our random matrix is nonsingular with probability $1-o(1)$. In the Bernoulli model this fact was already well-known, but in the combinatorial model this resolves a conjecture of Aigner-Horev and Person.",
"subjects": "Combinatorics (math.CO); Probability (math.PR)",
"title": "Singularity of sparse random matrices: simple proofs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9875683465856103,
"lm_q2_score": 0.7185943805178139,
"lm_q1q2_score": 0.7096610642336884
} |
https://arxiv.org/abs/1408.0602 | Extremal problems on shadows and hypercuts in simplicial complexes | Let $F$ be an $n$-vertex forest. We say that an edge $e\notin F$ is in the shadow of $F$ if $F\cup\{e\}$ contains a cycle. It is easy to see that if $F$ is "almost a tree", that is, it has $n-2$ edges, then at least $\lfloor\frac{n^2}{4}\rfloor$ edges are in its shadow and this is tight. Equivalently, the largest number of edges an $n$-vertex cut can have is $\lfloor\frac{n^2}{4}\rfloor$. These notions have natural analogs in higher $d$-dimensional simplicial complexes, graphs being the case $d=1$. The results in dimension $d>1$ turn out to be remarkably different from the case in graphs. In particular the corresponding bounds depend on the underlying field of coefficients. We find the (tight) analogous theorems for $d=2$. We construct $2$-dimensional "$\mathbb Q$-almost-hypertrees" (defined below) with an empty shadow. We also show that the shadow of an "$\mathbb F_2$-almost-hypertree" cannot be empty, and its least possible density is $\Theta(\frac{1}{n})$. In addition we construct very large hyperforests with a shadow that is empty over every field.For $d\ge 4$ even, we construct $d$-dimensional $\mathbb{F} _2$-almost-hypertree whose shadow has density $o_n(1)$.Finally, we mention several intriguing open questions. | \section{Introduction}
This article is part of an ongoing research effort to bridge
between graph theory and topology (see, e.g.~\cite{kalai, sum_complex,DNRR,lin_mesh,bab_kah,farber,gromov,lubotzky}).
This research program starts from the observation that a graph can be viewed as a $1$-dimensional simplicial complex, and that many basic concepts of graph theory such as connectivity, forests, cuts, cycles, etc., have natural counterparts in the realm of higher-dimensional simplicial complexes.
As may be expected, higher dimensional objects tend to be more complicated
than their $1$-dimensional counterparts, and many fascinating phenomena reveal themselves from the present vantage point. This paper is dedicated to
the study of several extremal problems in this domain. We start by introducing some of the
necessary basic notions and definitions.
\vspace{0.7cm}
\noindent
{\bf Simplices, Complexes, and the Boundary Operator:} \\
All simplicial complexes considered here have $[n]=\{1,\ldots,n\}$ or $\ensuremath{\mathbb Z}_n$ as their {\em vertex set} $V$. A simplicial complex $X$ is a collection of subsets of $V$ that is closed under taking subsets. Namely, if $A\in X$ and $B\subseteq A$, then $B\in X$ as well. Members of $X$ are called {\em faces} or {\em simplices}. The {\em dimension} of the simplex $A\in X$ is defined as $|A|-1$.
A $d$-dimensional simplex is also called a $d$-simplex or a $d$-face for short. The dimension ${\rm dim}(X)$ is defined as $\max{\rm dim}(A)$ over all faces $A\in X$, and we also refer to a $d$-dimensional simplicial complex as a $d$-complex. The {\em size} $|X|$ of a $d$-complex $X$ is the number of $d$-faces in $X$.
The complete $d$-dimensional complex $K_n^d = \{\sigma \subset [n] ~|~
|\sigma| \leq d+1 \}$ contains all simplices of dimension $\leq d$. If $X$ is a $d$-complex and $t < d$, the collection of all faces of dimension $\le t$ in $X$ is a simplicial complex that we call the $t$-{\em skeleton} of $X$. If this $t$-skeleton coincides with $K_n^t$ we say that $X$ has a {\em full} $t$-dimensional skeleton. If $X$ has a full $(d-1)$-dimensional skeleton (as we usually assume), then its {\em complement} $\bar X$ is defined by taking a full $(d-1)$-dimensional skeleton and those $d$-faces that are not in $X$.
The permutations on the vertices of a face $\sigma$ are split in two
{\em orientations} of $\sigma$, according to the permutation's
sign. The {\em boundary operator} $\partial=\partial_d$ maps an oriented
$d$-simplex $\sigma = (v_0,...,v_d)$ to the formal sum
$\sum_{i=0}^{d}(-1)^i(\sigma\setminus v_i)$, where $\sigma\setminus
v_i=(v_0,...v_{i-1},v_{i+1},...,v_d)$ is an oriented
$(d-1)$-simplex. We fix some field $\ensuremath{\mathbb F}$ and linearly extend
the boundary operator to free $\ensuremath{\mathbb F}$-sums of
simplices. We consider the ${n \choose d}\times{n \choose d+1}$
matrix form of $\partial_d$ by choosing arbitrary orientations for
$(d-1)$-simplices and $d$-simplices in $K_n^d$. Note that changing the
orientation of a $d$-simplex (resp.\ $(d-1)$-simplex) results in
multiplying the corresponding column (resp.\ row) by $-1$. Thus the
$d$-boundary of a weighted sum of $d$-simplices, viewed as a vector $z$
(of weights) of dimension ${n \choose d+1}$ is just the matrix-vector
product $\partial_d z$. We denote by $M_X$ the submatrix of $\partial_d$ restricted to the columns associated with $d$-faces of a $d$-complex $X$.
The specific underlying fields that we consider in this paper are $\ensuremath{\mathbb Q}$ or $\ensuremath{\mathbb F}_2$. (In the latter case orientation is redundant). It is very interesting to extend the discussion to the case where everything is done over a commutative ring, and especially over $\ensuremath{\mathbb Z}$, but we do not do this here. We associate each column in the matrix form of $\partial_d$
with the corresponding $d$-simplex $\sigma$. This is an ${n \choose d}$-dimensional vector of $0,\pm 1$ whose support corresponds to the boundary of $\sigma$.
It is standard and not hard to see that for every choice of ground field, the matrix
$\partial_d$ has rank ${n-1 \choose d}$. A fundamental (easy) fact is that
$\partial_{d-1} \cdot \partial_{d} = 0$ for any $d$.
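Both facts are easy to verify computationally for small $n$. The following sketch (added here; helper names are ours) builds the matrix of $\partial_d$ on $K_n$ with the increasing-vertex-order orientation, and checks that $\partial_1\partial_2=0$ and that ${\rm rank}_{\ensuremath{\mathbb Q}}\,\partial_2={n-1\choose 2}$:

```python
import itertools, math
from fractions import Fraction

def boundary_matrix(n, d):
    """Matrix of the boundary operator on K_n: columns indexed by d-simplices,
    rows by (d-1)-simplices, orienting every simplex by increasing vertex order."""
    rows = list(itertools.combinations(range(n), d))
    cols = list(itertools.combinations(range(n), d + 1))
    row_idx = {r: i for i, r in enumerate(rows)}
    M = [[0] * len(cols) for _ in rows]
    for j, sigma in enumerate(cols):
        for i in range(d + 1):
            face = sigma[:i] + sigma[i + 1:]   # drop the i-th vertex, sign (-1)^i
            M[row_idx[face]][j] = (-1) ** i
    return M

def rank_over_Q(M):
    """Rank via exact Gaussian elimination with rationals."""
    A = [[Fraction(v) for v in row] for row in M]
    rank = 0
    for c in range(len(A[0])):
        piv = next((r for r in range(rank, len(A)) if A[r][c] != 0), None)
        if piv is None:
            continue
        A[rank], A[piv] = A[piv], A[rank]
        for r in range(len(A)):
            if r != rank and A[r][c] != 0:
                f = A[r][c] / A[rank][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[rank])]
        rank += 1
    return rank

n = 6
d1, d2 = boundary_matrix(n, 1), boundary_matrix(n, 2)
prod = [[sum(a * b for a, b in zip(row, col)) for col in zip(*d2)] for row in d1]
assert all(v == 0 for row in prod for v in row)    # boundary of a boundary vanishes
assert rank_over_Q(d2) == math.comb(n - 1, 2)      # rank of the d-boundary operator
```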
\vspace{0.7cm}
\noindent
{\bf Rank Function and other notions:} \\
If $S$ is a collection of $n$-vertex $d$-simplices, then we define\footnote{In the language of simplicial homology, consider the $d$-complex $K(S)$ whose set of $d$-faces is $S$. Then, ${\rm rank}(S)$ is
just the dimension of $B_{d-1}$, the linear space of $(d-1)$-boundaries of $K(S)$,
and $|S| - {\rm rank}(S)$ is the dimension of the homology group $H_d(K(S))$.}
its {\em rank} as the $\ensuremath{\mathbb F}$-rank of the set of the corresponding columns of $\partial_d$.
(It clearly does not depend on the choice of orientations). The set of all $d$-faces of $K_n^d$ has rank ${{n-1}\choose d}$,
with basis being, e.g., the collection of all $d$-simplices that contain the
vertex $1$.
If ${\rm rank}(S)=|S|$, we say that $S$ is {\em acyclic} over $\ensuremath{\mathbb F}$.
A maximal acyclic set of $d$-faces is called a {\em $d$-hypertree},
and an acyclic set of size ${{n-1}\choose d}-1$ is called an {\em almost-hypertree}.
Hypertrees over $\ensuremath{\mathbb Q}$ were studied e.g., by Kalai~\cite{kalai} and others~\cite{adin,duval}
in the search for high-dimensional analogs of Cayley's formula for the number of labeled trees. A {\em $d$-dimensional hypercut} (or {\em $d$-hypercut} in short) is an inclusion-minimal set of $d$-faces that intersects every hypertree. It is a standard fact in matroid theory that for every hypercut $C$, there is a hypertree $T$ such that $|C \cap T|=1$.
The {\em shadow} $SH(S)$ of a set $S$ of $d$-simplices consists of all $d$-simplices $\sigma\not\in S$ which are in the $\ensuremath{\mathbb F}$-linear span of $S$, i.e., such that ${\rm rank} (S\cup\sigma) = {\rm rank}(S)$. A set of $d$-faces is a hypercut iff its complement is the union of an almost-hypertree and its shadow. If $SH(S) = \emptyset$, we say that $S$ is {\em closed} or {\em shadowless}. For instance, a set of edges in a graph is closed if it is a disjoint union of cliques.
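For intuition in the graph case $d=1$ (a sketch we add here; function names are ours), the span of an edge set's boundary columns is field-independent, and an edge outside $S$ is spanned precisely when its endpoints lie in one connected component of $(V,S)$; in particular, closed edge sets are exactly disjoint unions of cliques:

```python
def shadow_1dim(n, S):
    """Shadow of an edge set S on vertex set {0, ..., n-1}: the non-edges of S
    whose endpoints are joined by a path in (V, S), found via union-find."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in S:
        parent[find(u)] = find(v)
    in_S = {frozenset(e) for e in S}
    return {frozenset((u, v)) for u in range(n) for v in range(u + 1, n)
            if frozenset((u, v)) not in in_S and find(u) == find(v)}

# The path 0-1-2 spans its chord {0,2}; a triangle, being a clique, is closed.
assert shadow_1dim(4, [(0, 1), (1, 2)]) == {frozenset((0, 2))}
assert shadow_1dim(4, [(0, 1), (1, 2), (0, 2)]) == set()
assert shadow_1dim(5, [(0, 1), (2, 3)]) == set()   # disjoint cliques are closed
```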
We turn to define $d$-{\em collapsibility}. A $(d-1)$-face $\tau$ in a
$d$-complex $K$ is called {\em exposed} if it is contained in exactly one
$d$-face $\sigma$ of $K$. An elementary $d$-collapse on $\tau$ consists of
the removal of $\tau$ and $\sigma$ from $K$. We say that $K$ is $d$-collapsible
if it is possible
to eliminate all the $d$-faces of $K$ by a series of elementary $d$-collapses.
It is an easy observation that the set of $d$-faces in a $d$-collapsible $d$-complex
is acyclic over every field.
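The collapsing procedure for $d=2$ can be sketched directly (added here; names are ours). We greedily remove a $2$-face together with one of its exposed edges; since the order of collapses can matter in general, an empty output certifies $2$-collapsibility, while a nonempty one is only evidence against it:

```python
from collections import Counter
from itertools import combinations

def greedy_2_collapse(faces):
    """Repeatedly pick an exposed edge (contained in exactly one remaining
    2-face) and remove it with that face; return the 2-faces that survive."""
    faces = {frozenset(f) for f in faces}
    while True:
        edge_count = Counter(frozenset(e) for f in faces
                             for e in combinations(sorted(f), 2))
        target = next((f for f in faces
                       if any(edge_count[frozenset(e)] == 1
                              for e in combinations(sorted(f), 2))), None)
        if target is None:
            return faces
        faces.remove(target)

# A cone (every 2-face through vertex 0) collapses completely ...
cone = [(0, 1, 2), (0, 2, 3), (0, 3, 4)]
assert greedy_2_collapse(cone) == set()
# ... while the boundary of a tetrahedron has no exposed edge at all.
sphere = list(combinations(range(4), 3))
assert len(greedy_2_collapse(sphere)) == 4
```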
We refer the reader to Section~\ref{section:combin} for some more background material.
\vspace{1.6cm}
\noindent
{\bf Results:}
In this paper we study extremal problems concerning the possible sizes
of hypercuts and shadows in simplicial complexes.
We begin with some trivial observations on $n$-vertex graphs, starting with the (non-tight) claim that no cut can have more than ${n\choose 2} - n+2$ edges. This follows, since for every cut there is a tree that meets it in exactly one edge. Actually the largest number of edges of a cut is $\lfloor \frac{n^2}{4} \rfloor$. We investigate here the $2$-dimensional situation and discover that it is completely different from the graphical case. When we discuss $2$-complexes, we refer to $2$-faces as faces (and keep the terms vertex and edge for $0$ and $1$ dimensional faces).
A $2$-dimensional hypertree has ${n-1 \choose 2}$ faces. So, by the same reasoning, every hypercut has at most ${n \choose 3} - \left({n-1 \choose 2}-1\right)$ faces. A hypercut of this size (if one exists) is called {\em perfect}. We show that $\ensuremath{\mathbb Q}$-perfect hypercuts exist for certain integers $n$, and if a well-known conjecture by Artin\footnote{There is strong evidence for this conjecture. In particular it follows from the generalized Riemann Hypothesis.} in number theory is true, there are {\em infinitely many} such $n$. The construction is based on the $2$-complex of length-$3$ arithmetic progressions in $\ensuremath{\mathbb Z}_n$, and is of an
independent interest.
Over the field $\ensuremath{\mathbb F}_2$, surprisingly, the situation changes. There are no perfect hypercuts for $n>6$,
and the largest possible hypercut has ${n \choose 3} - \frac{3}{4} n^2 - \Theta(n)$ faces. We completely describe all the extremal hypercuts.
Staying with $\ensuremath{\mathbb F}=\ensuremath{\mathbb F}_2$ and with $d>2$ the situation depends on the {\em parity} of $d$. As we show, for $d$ even the largest $d$-hypercuts have ${n \choose d+1}\cdot\left(1-o_n(1)\right)$ $d$-faces. When $d$ is odd, all $d$-hypercuts have density that is bounded away from 1.
Equivalently, this subject can be viewed from the perspective of shadows of acyclic complexes. Thus the complement of a perfect $2$-hypercut over $\ensuremath{\mathbb Q}$ is an almost-hypertree (i.e., an acyclic complex with ${n-1 \choose 2}-1$ $2$-faces) with an empty shadow. Our results over $\ensuremath{\mathbb F}_2$ can be restated as saying that the least possible size of the shadow of a $2$-dimensional $\ensuremath{\mathbb F}_2$-almost-hypertree is $\frac{n^2}{4}+\Theta(n)$.
Many questions suggest themselves: Let $X$ be an $\ensuremath{\mathbb F}$-acyclic $d$-dimensional $n$-vertex simplicial complex with a full skeleton and a given size. What is the smallest possible shadow of such $X$? More specifically, what is the largest possible size of $X$ if it is shadowless?
One construction that we present here applies to all fields at once, since it is based on the combinatorial notion of collapsibility. This is a collapsible $2$-complex with $f={n-1 \choose 2} - (n+1)$ $2$-faces which remains collapsible after the addition of any other $2$-face. This yields, for every field $\ensuremath{\mathbb F}$, a shadowless $\ensuremath{\mathbb F}$-acyclic $2$-complex with $f$ $2$-faces.
We note that Bj\"orner and Kalai's work \cite{BK} determines the {\em largest} possible shadow of an acyclic $d$-complex with $f$ faces. The extremal examples are maximal acyclic subcomplexes of a shifted complex with $f$ faces.
The rest of the paper is organized as follows. In Section~\ref{section:combin} we introduce some additional notions in the combinatorics of simplicial complexes. Section~\ref{sec:Q} deals with the problem of the largest $2$-hypercuts over $\ensuremath{\mathbb Q}$.
In Section~\ref{sec:f2} we study the same problem over $\ensuremath{\mathbb F}_2$.
In Section~\ref{sec:f2_even_dim} we construct large $d$-hypercuts over $\ensuremath{\mathbb F}_2$ for even $d \ge 4$.
In Section~\ref{sec:cl_ac} we deal with large acyclic shadowless $2$-dimensional sets.
Lastly, in Section~\ref{section:open} we present some of the many open questions in this area.
\section{Additional Notions and Facts from Simplicial Combinatorics}
\label{section:combin}
Recall that we view the $d$-boundary operator as a linear map over
$\ensuremath{\mathbb F}$, that maps
vectors supported on oriented $d$-simplices to vectors supported on
$(d-1)$-simplices, given explicitly by the matrix $\partial_d$, as defined in the Introduction.
The right kernel of $\partial_d$ is the linear space of {\em $d$-cycles}.
The left image of $\partial_d$ is the linear space $B^d(X)$ of $d$-coboundaries of $X$.
With some abuse of notation we occasionally call a set of $d$-simplices
a cycle or a coboundary if it is the {\em support} of a cycle or a coboundary.
Clearly over $\ensuremath{\mathbb F}_2$, this makes no difference. In this case each $d$-coboundary is
associated with a set $A$ of $(d-1)$-faces, and consists of those $d$-faces whose boundary has an odd intersection with $A$.
A $d$-coboundary is called {\em simple} if its support does
not properly contain the support of any other non-empty $d$-coboundary. As observed e.g., in \cite{V-con}, a coboundary is simple if and only if its support is
a hypercut.
If $\sigma$ is a face in a complex $X$, we define its {\em link} via
${\rm link}_\sigma(X)=\{\tau\in X : \tau \cap \sigma= \emptyset, ~ \tau\cup\sigma\in X\}$. This is clearly a simplicial complex. For instance,
the link of a vertex $v$ in a graph $G$ is $v$'s neighbour set which we
also denote by $N_G(v)$ or $N(v)$. For a $2$-coboundary $C$ over $\ensuremath{\mathbb F}_2$ and a vertex
$v \in [n]$, it is easy to see that the graph ${\rm link}_v(C)$ generates $C$, i.e. $C = {\rm link}_v(C) \cdot \partial_2$. Namely, the characteristic vector of the $2$-faces of $C$ equals the vector-matrix left product of the characteristic vector of the edges of ${\rm link}_v(C)$ with the boundary matrix $\partial_2$. We recall a necessary and
sufficient condition that $G = {\rm link}_v(C)$ generates a $2$-hypercut $C$
rather than a general coboundary.
Two incident edges $uv$,$uw$ in a graph $G=(V,E)$ are said to be
$\Lambda$-{\em adjacent} if $vw\notin E$. We say that $G$ is
$\Lambda$-{\em connected} if the transitive closure of the
$\Lambda$-adjacency relation has exactly one class.
\begin{proposition}\label{prop:2cut}\cite{V-con}
A $2$-dimensional coboundary $B$ is a hypercut if and only if the
graph ${\rm link}_v(B)$ is $\Lambda$-connected for every $v$.
\end{proposition}
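The $\Lambda$-connectivity condition is simple to test mechanically (an added sketch; the function name is ours). Incident edges are merged whenever the edge closing their triangle is absent, and we ask whether a single class remains:

```python
from itertools import combinations

def lambda_connected(edges):
    """Check whether the transitive closure of Lambda-adjacency (incident edges
    uv, uw with vw not an edge) has a single class on the edge set."""
    E = [frozenset(e) for e in edges]
    Eset = set(E)
    parent = {e: e for e in E}
    def find(e):
        while parent[e] != e:
            parent[e] = parent[parent[e]]
            e = parent[e]
        return e
    for e1, e2 in combinations(E, 2):
        if len(e1 & e2) == 1 and (e1 ^ e2) not in Eset:  # vw not in E => merge
            parent[find(e1)] = find(e2)
    return len({find(e) for e in E}) == 1

# A 4-cycle is Lambda-connected; a triangle is not (every pair of incident
# edges closes into an existing edge, so no two edges are ever merged).
assert lambda_connected([(0, 1), (1, 2), (2, 3), (3, 0)])
assert not lambda_connected([(0, 1), (1, 2), (2, 0)])
```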
\section{Shadowless Almost-Hypertrees Over $\ensuremath{\mathbb Q}$}
\label{sec:Q}
The main result of this section is a construction of $2$-dimensional {\em shadowless} $\ensuremath{\mathbb Q}$-almost-hypertrees. As mentioned above, the complement of such a complex is a {\em perfect} hypercut having ${n\choose 3}-{n-1\choose 2} + 1$ faces which is the most possible.
\begin{theorem}\label{thm:noShadow}
Let $n\ge 5$ be a prime for which $\ensuremath{\mathbb Z}_n^*$ is generated by $\{- 1,
2\}$. Let $X=X_n$ be a 2-dimensional simplicial complex on
vertex set $\ensuremath{\mathbb Z}_n$ whose $2$-faces are arithmetic progressions of
length $3$ in $\ensuremath{\mathbb Z}_n$ with difference not in $\{0,\pm 1\}$. Then,
\begin{itemize}
\item $X_n$ is $2$-collapsible, and hence it is an
almost-hypertree over every field.
\item $SH(X_n)=\emptyset$ over $\ensuremath{\mathbb Q}$. Consequently, the complement of $X_n$ is a
perfect hypercut over $\ensuremath{\mathbb Q}$.
\end{itemize}
\end{theorem}
The entire construction and much of the discussion of $X_n$ is carried out over $\ensuremath{\mathbb Z}_n$.
However, in the following discussion, the boundary operator $M_X$ of
$X_n$ is considered over the rationals.
We start with two simple observations. First, note
that $X_n$ has a full $1$-skeleton, i.e., every edge is contained in some $2$-face of $X$.
Also, we note that the choice of omitting the arithmetic triples with difference $\pm 1$ is completely arbitrary. For every $a \in \ensuremath{\mathbb Z}_n^*$, the automorphism $r \mapsto ar$ of $\ensuremath{\mathbb Z}_n$ maps $X_n$
to a combinatorially isomorphic complex of arithmetic triples over $\ensuremath{\mathbb Z}_n$, with
omitted difference $\pm a$. Consequently, Theorem~\ref{thm:noShadow} holds equivalently for any difference that we omit. In what follows we indeed assume for
convenience that the missing difference is not $\pm 1$, but rather $\pm 2^{-1} \in \ensuremath{\mathbb Z}_n$.
For $d \in \ensuremath{\mathbb Z}_n^*$, define $E_d \;=\; E_{d, n} \;=\; \left(\,
(0,d),(1,d+1),\ldots,(n-1,d+n-1)\, \right),$ where all additions are $\bmod ~n$.
This is an ordered subset of directed edges in $X_n$.
Similarly, we consider the collection of arithmetic triples of difference $d$,
$$F_d \;=\; F_{d, n} \;=\; \left( \, (0,d,2d),(1,d+1,2d+1),\ldots,(n-1,d+n-1,2d+n-1) \,\right) .$$
Clearly every directed edge appears in exactly one $E_{d}$ and then its reversal is in
$E_{-d}$. Likewise for arithmetic triples and the $F_d$'s.
Since we assume that
$\ensuremath{\mathbb Z}_n^*$ is generated by $\{-1, 2\}$, it follows that
the powers $\left\{ 2^i \right\} \subset \ensuremath{\mathbb Z}_n^*$, ${i=0,\ldots,
{\frac{n-1}{2} - 1}}$, are all distinct, and, moreover, no power is
an additive inverse of the other. Therefore, the sets $\left\{
E_{2^i} \right\}$, ${i=0,\ldots, {\frac{n-1}{2} - 1}}$, constitute a
partition of the $1$-faces of $X_n$. Similarly, the sets $\left\{ F_{2^j}
\right\}$, ${j=0,\ldots, {\frac{n-1}{2} - 2}}$, constitute a partition
of the $2$-faces of $X_n$. The omitted difference is $2^{\frac{n-1}{2} -
1} \in \{ \pm 2^{-1} \} $, as assumed (the sign is determined according to whether
$2^{\frac{n-1}{2}} =1$ or $-1$).
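The claimed partitions are easy to confirm for a concrete admissible prime, say $n=11$ (where $2$ is a primitive root). The following added script (variable names are ours) checks both partitions and the identification of the omitted difference:

```python
import itertools, math

n = 11                     # prime; 2 is a primitive root mod 11
m = (n - 1) // 2

# Edge classes E_{2^i} (as undirected edges) and face classes F_{2^j}.
E = [{frozenset({b, (b + pow(2, i, n)) % n}) for b in range(n)} for i in range(m)]
F = [{frozenset({b, (b + pow(2, j, n)) % n, (b + pow(2, j + 1, n)) % n})
      for b in range(n)} for j in range(m - 1)]

all_edges = {frozenset(e) for e in itertools.combinations(range(n), 2)}
assert all(len(Ei) == n for Ei in E)
assert set().union(*E) == all_edges                  # the E's cover every edge
assert sum(map(len, E)) == len(all_edges)            # ... with no overlaps
assert sum(map(len, F)) == len(set().union(*F)) == math.comb(n - 1, 2) - 1
# The omitted difference is 2^(m-1), which lies in {+-2^{-1}} mod 11:
assert pow(2, m - 1, n) in {pow(2, -1, n), (-pow(2, -1, n)) % n}
```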
\begin{lemma}\label{lem:boundaryOfArithComplex}
Ordering the rows of the boundary matrix $M_{X}$ by the $E_{2^i}$'s, and ordering the columns by the $F_{2^i}$'s, the matrix $M_{X}$ takes the following form:
\begin{equation}
\label{eqn:matrix_collapse_form}
M_{X} ~=~
\left(\begin{matrix}
I+Q&0&0&...\\
-I&I+Q^2&0&...\\
0&-I&\ddots&...\\
0&0&\ddots&I+Q^{2^{\frac{n-1}{2} - 2}}\\
0&0&...&-I
\end{matrix}\right)
\end{equation}
where each entry is an $n\times n$ matrix (block) indexed by $\ensuremath{\mathbb Z}_n$, and $Q$ is
a permutation matrix corresponding to the linear map $b\mapsto b+1$ in $\ensuremath{\mathbb Z}_n$.
\end{lemma}
\begin{proof}
Consider an oriented face $\sigma \in F_{2^i} \subset X_n$. Then,
$\sigma=(b,b+2^i,b+2^{i+1})$ for some $b\in\ensuremath{\mathbb Z}_n$ and $0\le i\le \frac{n-1}{2} - 2$, i.e., $\sigma$ is the $b$-th element in $F_{2^i}$. By definition, $\partial \sigma=(b,b+2^i)+(b+2^i,b+2^{i+1})-(b,b+2^{i+1})$.
The first two terms in $\partial\sigma$ are the $b$-th and $(b+2^i)$-th elements in $E_{2^{i}}$ respectively;
the third term corresponds to the $b$-th element in $E_{2^{i+1}}$. Thus, the blocks indexed by $E_{2^{i}} \times F_{2^{i}}$ are of the form $I+Q^{2^i}$, the blocks indexed by $E_{2^{i+1}} \times F_{2^{i}}$ are $-I$, and the remaining blocks are $0$.
\end{proof}
We may now establish the main result of this section.
\begin{proof}{\bf (of Theorem~\ref{thm:noShadow})}~
We start with the first statement of the theorem. Let $m= \frac{n-1}{2}$.
Lemma~\ref{lem:boundaryOfArithComplex} implies that the edges
in $E_{2^{m - 1}}$ are exposed. Collapsing on these edges leads to elimination of $E_{2^{m - 1}}$ and the faces in $F_{2^{m - 2}}$. In terms of the
matrix $M_X$, this corresponds to removing the rightmost ``supercolumn''. Now
the edges in $E_{2^{m - 2}}$ become exposed, and collapsing them
leads to elimination of $E_{2^{m - 2}}$, and $F_{2^{m - 3}}$. This results in exposure of
$E_{2^{m - 3}}$, etc. Repeating the argument to the end, all the faces of $X_n$ get eliminated, as claimed.
To show that $X_n$ is an almost-hypertree, it remains to check that the number of its $2$-faces is $\binom{n-1}{2} - 1$. Indeed,
\[
|X_n| ~=~ \sum_{j=0}^{\frac{n-1}{2} - 2} |F_{2^j}| ~=~ \left( \frac{n-1}{2} - 1 \right) \cdot n ~=~
\binom{n-1}{2} - 1\;.
\]
We turn to show the second statement of the theorem, i.e., that $SH(X_n)=\emptyset$. Let $u\in \ensuremath{\mathbb Q}^{\binom{n}{2}}$ be a vector indexed by the edges of $X_n$, where $u_e=2^i$ when $e\in E_{2^i}$. Here we think of $2^i$ as an integer (and not as an element of $\ensuremath{\mathbb Z}_n$). We claim that for every $\sigma\in K_n^{(2)}$,
\[
\langle u,\partial\sigma \rangle\,=\,0 ~~\iff~~ \sigma\in X_n\,.
\]
Indeed, for every face $\sigma\in K_n^{(2)}$, exactly three coordinates in the vector $\partial\sigma$ are non-zero, and they are $\pm 1$. Since the entries of $u$ are successive powers of $2$, the condition $\langle u,\partial\sigma \rangle\,=\,0$ holds iff $\partial\sigma$ (or $-\partial\sigma$) has two $1$'s in $E_{2^i}$ and one $-1$ in $E_{2^{i+1}}$ for some
$0\le i\le \frac{n-1}{2} - 2$. This happens if and only if $\sigma$
is of the form $(b, b+2^i, b+2^{i+1})$, i.e., precisely when $\sigma \in X_n$.
This implies that $X_n$ is closed, i.e., $SH(X_n)=\emptyset$: any 2-face $\sigma \in K_n^{(2)}$ spanned by $X_n$ must satisfy $\langle u,\partial\sigma \rangle =0$, and this is precisely the characterization of $X_n$.
Thus, $X_n$ is a closed set of co-rank 1. Therefore, its complement is a hypercut. Moreover, since
$X_n$ is an almost-hypertree, this hypercut is perfect.
\end{proof}
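For a concrete prime to which the theorem applies, both the collapsing argument and the shadow criterion can be verified exhaustively. The sketch below (illustrative only; $n=11$ is our choice, and $2$ does generate $\ensuremath{\mathbb Z}_{11}^*/\{\pm 1\}$) greedily $2$-collapses $X_{11}$ and checks that $\langle u,\partial\sigma\rangle=0$ exactly on the faces of $X_{11}$.

```python
from itertools import combinations
from math import comb

n = 11                      # prime for which 2 generates Z_n^*/{+-1} (our choice)
m = (n - 1) // 2

faces = {frozenset({b, (b + 2**i) % n, (b + 2**(i + 1)) % n})
         for i in range(m - 1) for b in range(n)}
assert len(faces) == comb(n - 1, 2) - 1     # the almost-hypertree face count

# Greedy 2-collapse: repeatedly delete a face that contains an exposed edge.
remaining = set(faces)
changed = True
while changed:
    changed = False
    incident = {}
    for f in remaining:
        for e in combinations(sorted(f), 2):
            incident.setdefault(e, []).append(f)
    for e, lst in incident.items():
        if len(lst) == 1 and lst[0] in remaining:
            remaining.remove(lst[0])
            changed = True
assert not remaining                         # X_n is 2-collapsible

# Shadowlessness: weight the edge class E_{2^i} by the integer 2^i, with each
# edge oriented as (b, b + 2^i); then <u, boundary(sigma)> = 0 iff sigma in X_n.
cls = {pow(2, i, n): i for i in range(m)}    # directed length -> class i
def val(x, y):                               # signed weight of the ordered edge [x, y]
    d = (y - x) % n
    return 2 ** cls[d] if d in cls else -2 ** cls[(x - y) % n]
for T in combinations(range(n), 3):
    a, b, c = T
    inner = val(a, b) + val(b, c) - val(a, c)
    assert (inner == 0) == (frozenset(T) in faces)
```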
When the prime $n$ does not satisfy the assumption of Theorem~\ref{thm:noShadow} we can still say something about the structure of $X_n$. Let $G_n=\ensuremath{\mathbb Z}_n^*\slash\{\pm 1\}$, and let $H_n$ be the subgroup of
$G_n$ generated by $2$. Then,
\begin{theorem}
For every prime number $n$, ${\rm rank}_\ensuremath{\mathbb Q} (X_n) = |X_n| -(n-1)\cdot ([G_n:H_n] - 1)$.~ In particular,
$X_n$ is acyclic if and only if $\ensuremath{\mathbb Z}_n^*$ is generated by $\{\pm 1, 2\}$.
\end{theorem}
We only sketch the proof. We have seen the partition of $X_n$'s edges and faces into the sets $E_i$ and $F_i$. We consider also a coarser partition, joining together all the $E_i$'s and $F_i$'s for which $i$ belongs to the same coset of $H_n$. This induces a block structure on $M_X$ with $[G_n:H_n]$ blocks. An argument as in the proof of Lemma \ref{lem:boundaryOfArithComplex} yields the structure of these blocks. Finally, an easy computation shows that one of these blocks is $2$-collapsible, and each of the others contributes precisely $n-1$ vectors to the right kernel.
We conclude this section by recalling the following well-known conjecture of Artin, which is implied by the generalized Riemann hypothesis~\cite{artin}.
\begin{conjecture}[Artin's Primitive Root Conjecture]
Every integer other than -1 that is not a perfect square is a primitive root modulo infinitely many primes.
\end{conjecture}
This conjecture clearly yields infinitely many primes $n$ for which $\ensuremath{\mathbb Z}_n^*$ is generated by $2$. (It is even conjectured that the set of such primes has positive
density). Clearly this implies that the assumptions of
Theorem~\ref{thm:noShadow} hold for infinitely many primes $n$.
\section{Largest Hypercuts over $\ensuremath{\mathbb F}_2$}
\label{sec:f2}
In this section we turn to discuss our main questions over the field $\ensuremath{\mathbb F}_2$. The main result of this section is:
\begin{theorem}
\label{thm:main}
For large enough $n$, the largest size of a $2$-dimensional hypercut over $\ensuremath{\mathbb F}_2$ is ${n \choose 3}
- \left( \frac{3}{4} n^2 - \frac{7}{2}n + 4 \right)$ for even $n$ and ${n \choose 3}
- \left(\frac 34n^2-4n+\frac{25}{4} \right)$ for odd $n$.
\end{theorem}
\begin{remark}
The proof also provides a characterization of all the extremal cases of this theorem.
\end{remark}
Since no confusion is possible, in this section we use the shorthand term {\em cut} for a $2$-dimensional hypercut.
The first step in proving Theorem \ref{thm:main} is
the slightly weaker Theorem \ref{thm:1}. A further refinement yields the
tight upper bound on the size of cuts.
Note that since ${[n] \choose 3}$ is a coboundary, the complement
$\bar{C}= {[n] \choose 3} \setminus C$ of any cut $C$ is a
coboundary. Moreover, ${\rm link}_v(\bar{C})$ is the complement of the
$(n-1)$-vertex graph ${\rm link}_v(C)$. In what follows, ${\rm link}_v(C)$
is always considered as an $(n-1)$-vertex graph with vertex set $[n] \setminus
\{v\}$. Occasionally, we will consider the graph ${\rm link}_v(C) \cup \{v\}$ which has $v$ as an isolated vertex.
\begin{theorem}
\label{thm:1}
The size of every $n$-vertex cut is at most ${n \choose 3} - \frac{3}{4}\cdot n^2 + o(n^2)$. In every cut $C$ that attains this bound there is a vertex $v$ for which the graph $G = {\rm link}_v(C)$ satisfies either
\begin{enumerate}
\item $\bar{G}$ has one
vertex of degree $\frac n2 \pm o(n)$ and all other vertices have
degree $o(n)$. Moreover, $|E(G)| = n-1 + o(n)$.
\item
$\bar{G}$ has one vertex of degree $n - o(n)$, one vertex of degree $\frac n2 \pm o(n)$, and all other vertices have
degree $o(n)$. Moreover $|E(G)| = 2n \pm o(n)$.
\end{enumerate}
\end{theorem}
We need to make some preliminary observations.
\begin{observation}
\label{obs:zero}
Let $G= (V,E)$ be a graph with $n$ vertices, $m$ edges and $t$ triangles and let $C$ be the coboundary generated by $G$. Then
$|C| = nm - \sum_{v \in V} d_v^2 + 4t$.
\end{observation}
\begin{proof}
Let $e=(u,v) \in E(G)$. Then ${\rm link}_e(C)$ consists of those vertices $x\neq u, v$ that are adjacent to both or none of $u, v$. Namely, $|{\rm link}_e(C)| = n - d_u - d_v +2|N(v) \cap N(u)|$. Clearly $|N(v) \cap N(u)|$ is the number of triangles in $G$ that contain $e$. But $\sum_{e \in E(G)}|{\rm link}_e(C)|$ counts every two-face in $C$ three times or once, depending on whether or not it is a triangle in $G$. Therefore $$|C| +2t =\sum_{(u,v) \in E} \left(n - d_u - d_v +2|N(v) \cap
N(u)|\right).$$
The claim follows.
\end{proof}
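Observation~\ref{obs:zero} can be sanity-checked by brute force; the following sketch (illustrative only, on an arbitrary random graph) enumerates the coboundary over $\ensuremath{\mathbb F}_2$ directly.

```python
import random
from itertools import combinations

random.seed(1)
n = 12                      # sample size (our choice)
edges = {frozenset(e) for e in combinations(range(n), 2) if random.random() < 0.3}

# The coboundary C generated by G over F_2 consists of the triples that span
# an odd number (1 or 3) of edges of G.
C = [T for T in combinations(range(n), 3)
     if sum(frozenset(e) in edges for e in combinations(T, 2)) % 2 == 1]

deg = {v: sum(v in e for e in edges) for v in range(n)}
t = sum(all(frozenset(e) in edges for e in combinations(T, 2))
        for T in combinations(range(n), 3))       # triangles in G
m = len(edges)
assert len(C) == n * m - sum(d * d for d in deg.values()) + 4 * t
```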
Two vertices in a graph are called {\em clones} if they have the same set of neighbours (in particular they must be nonadjacent).
\begin{observation}
\label{obs:1991}
For every nonempty cut $C$ and $x\in V$, the graph
${\rm link}_x(\bar C)$ either is connected and has no clones, or consists of an
isolated vertex together with a complete graph on the remaining vertices.
\end{observation}
\begin{proof}
This follows directly from the fact that ${\rm link}_x(C)$ is $\Lambda$-connected (Proposition \ref{prop:2cut}).
\end{proof}
The size of a cut $C$ for which ${\rm link}_x(\bar{C})$ is the union of a complete graph on $n-2$ vertices and an isolated vertex equals $n-2$, which is much smaller than the bound in Theorem \ref{thm:1}. We restrict the following discussion to cuts $C$ for which
$\bar{G}={\rm link}_x(\bar{C})$ is connected and has no clones. Let $V=V(\bar G)$, and write $N(v):=N_{\bar G}(v)$. For every $S\subseteq V$, an {\em $S$-atom} is a subset $A\subseteq V$ which satisfies: $(u,v)\in E(\bar G) \iff (u',v)\in E(\bar G)$ for every $u,u'\in A$ and $v\in S$.
The next claim generalizes Observation \ref{obs:1991}.
\begin{claim}
\label{cl:167}
Suppose $C$ is a cut and $G=(V,E) = {\rm link}_x(C)$ for some vertex $x\notin V$. Let
$S\subseteq V$, and $G' = \bar{G} \setminus S$. Then, for every non-empty $S$-atom $A$, at least $|A|-2$ of the edges in $G'$ meet $A$.
\end{claim}
\begin{proof}
Let $H$ be the subgraph of $G'$ induced by an atom $A$. If $H$ has at most two
connected components, the claim is clear, since a connected graph on
$r$ vertices has at least $r-1$ edges. We next consider what happens if $H$ has
three or more connected components. We show that every component except
possibly one has an edge in $\bar E$ that connects it to $V\setminus
(S\cup A)$. This clearly proves the claim.
So let $C_1, C_2, C_3$ be connected components of $H$, and suppose
that neither $C_1$ nor $C_2$ is connected in $\bar G$ to $V\setminus
(S\cup A)$. Let $F:= \cup_{1\le i< j\le 3} C_i\times C_j\subseteq
E$. Since $G$ is $\Lambda$-connected, there must be a $\Lambda$-path
connecting every edge in $C_1\times C_2$ to every edge in $C_2\times
C_3$. However, a $\Lambda$-path that starts in $C_1\times C_2$ can never
leave it. Indeed, let us consider the first time this $\Lambda$-path
exits $C_1\times C_2$, say at an edge $xy$ that is followed by $yw$, where $x\in
C_1, y\in C_2$, $w \notin C_1\cup C_2$ and $yw \notin E$. By the atom condition, a vertex in $S$ does not distinguish between vertices $x, y\in A$, whence $w\notin S$. Finally $w$ cannot be in $A$, for $xw\notin E$ would imply that $w\in C_1$. Hence, $C_1$ is connected in $\bar G$ to $V\setminus (S\cup A)$, a contradiction.
\end{proof}
In the following claims, let $G=(V,E) = {\rm link}_x(C)$, for a cut $C$, and $x \notin V$, and let $\bar{G} = (V,\bar{E}) = {\rm link}_x(\bar{C})$.
Denote by $d=(d_1 \geq d_2 \geq \ldots \geq d_{n-1} \geq 1)$ the sorted degree sequence of $\bar G$. We label the vertices $v_1, \ldots, v_{n-1}$ so that $d(v_i) = d_i$ for all $i$.
\begin{claim}
\label{cl:185}
$d_1 \leq m/2 +1$.
\end{claim}
\begin{proof}
Apply Claim \ref{cl:167} with $S=\{v_1\}$ and $A=N(v_1)$. It yields
the existence of at least $|A|-2$ edges in $\bar{G}$ that meet $A$ but
not $v_1$. Since $|A|=d_1$, $m\ge d_1+(d_1-2)$, implying the claim.
\end{proof}
\begin{claim}
\label{cl:12}
$d_1 + d_2 \leq \frac{m+n}{2}$.
\end{claim}
\begin{proof}
Apply Claim \ref{cl:167} with $S=\{v_1,v_2\}$ and $A=N(v_1) \cap
N(v_2)$ to conclude that $m\ge d_1+d_2+|A|-3$ (as $(v_1,v_2)$ might be
an edge). By inclusion-exclusion, $n-3\ge d_1+d_2-|A|$. These two inequalities imply the claim.
\end{proof}
\begin{claim}
\label{cl:171}
For every $k$, $\sum_{i=1}^k d_i \leq m- \frac{n}{2} + \frac{k^2}{2}+ 2^{k}$.
\end{claim}
\begin{proof}
There are at most $2^k$ atoms of $S=\{v_1,...,v_k\}$, and we apply
Claim \ref{cl:167} to each of them. There are at least $|A|-2$ edges
with one vertex in atom $A$ and the other vertex not in
$S$. Consequently, there are at least $\frac12(n-k-2\cdot 2^k)$
edges in $\bar{G} \setminus S$ (as each edge may be counted twice).
In addition, there may be at most ${k \choose 2}$ edges induced by $S$,
hence the total number of edges is bounded by
$$
m \ge \sum_{i=1}^{k} d_i-{k \choose 2} + \frac12(n-k-2\cdot 2^k)\,,
$$
which rearranges to the claim.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:1}]
Let $C$ be a cut, and assume by contradiction that $|\bar{C}| \leq
\frac{\gamma}{3} n^2$ for some $\gamma < \frac 94$. By averaging, there is a
link, say $\bar{G} = (V,\bar E) = {\rm link}_v(\bar{C})$ with $V = [n]\setminus \{v\}$,
with $m := |\bar E| \leq \frac{3|\bar C|}{n}\leq \gamma n < \frac 94 n$ edges.
We will show that $|\bar{C}| \geq 3n^2/4 - o(n^2)$ contradicting the
assumption.
Indeed, Observation \ref{obs:zero} implies that $|\bar{C}| = mn - \sum_{v \in \bar{G}} d_v^2 +
4t$ where $t$ is the number of triangles in $\bar{G}$. Hence it
suffices to show that $s(\bar{C}):= mn - \sum_v d_v^2 \geq
\frac{3}{4} \cdot n^2 - o(n^2)$.
Given a sequence of reals $d=(d_1 \geq d_2 \geq \ldots \geq d_{n-1}
\geq 1)$, we denote $f(d) := mn - \sum_i d_i^2$ where $m
=\frac{1}{2} \sum_i d_i$. With this notation $s(\bar{C}) = f(d)$
where $d$ is the sorted degree sequence of $\bar{G}$ and $v_1,
\ldots, v_{n-1}$ is the corresponding ordering of the vertices,
i.e., $d(v_i) = d_i \geq d_{i+1}$.
We want to reduce the problem of proving a lower bound on $f(d)$ to
showing a lower bound on $f_k(d) =mn -\sum_1^k d_i^2$, where $k =
k(n)$ is an appropriately chosen slowly growing function. Clearly
$f_k(d)-f(d)=\sum_{j=k+1}^{n-1} d_j^2$. But $d_j\le \frac{2m}{j}$ for
all $j$ whence $\sum_{j=k+1}^{n-1} d_j^2 \le 4m^2
\sum_{j=k+1}^{\infty}\frac{1}{j^2}<\frac{4m^2}{k}$, i.e., $f_k(d) \leq
f(d) +\frac{4m^2}{k}$. Since $m<\frac 94 n$, it suffices to show that
$f_k(d) \geq \frac{3}{4} n^2 - o(n^2)$ for an arbitrary
$k=\omega_n(1)$, which is our next goal.
We first note the following.
\begin{claim}\label{cl:1993}
\label{cor:21} For every $k =o(\log n)$,
$f_k(d) \geq mn - d_1^2 - (m - n/2 - d_1)\cdot d_2 - o(n^2)$.
\end{claim}
\begin{proof}
$f_k(d) =mn -\sum_1^k d_i^2\ge mn - d_1^2-(\sum_2^k d_i)\cdot d_2
\geq mn - d_1^2 - (m - n/2 - d_1)\cdot d_2 - o(n^2)$, where the
second step is by convexity, and the last
step uses Claim~\ref{cl:171}.
\end{proof}
We now normalize everything in terms of $n$, namely, write
$$m = \gamma \cdot n, ~~~d_1 = x \cdot n, ~~~d_2 = y \cdot n.$$
$$
g(\gamma,x,y) :=\frac{1}{n^2} f_k - o(1) = \gamma - x^2 - \gamma \cdot y + \frac{y}{2} + xy.
$$
The problem of minimizing $f_k$, subject to our assumptions on $\gamma$, the constraint $d_2 \leq d_1$, and Claims \ref{cl:185}, \ref{cl:12} and \ref{cl:171}, becomes:
\vspace{0.2in}
{\bf Optimization problem A}
Minimize $g(\gamma,x,y)$, subject to:
\begin{enumerate}
\item $1\le \gamma \le \frac 94.$
\item $0\le y\le x \le \min\left(\frac \gamma 2,1\right).$
\item $x+ y \leq \gamma - \frac{1}{2}$.
\item $x+y\le\frac{1+\gamma}{2}.$
\end{enumerate}
This problem is answered in the following theorem, whose proof appears in the appendix.
\begin{theorem}\label{thm:5}
The answer to Optimization problem A is $\min g(\gamma,x,y) = \frac{3}{4}$. The optimum
is attained in exactly two points $(\gamma=1, x=\frac{1}{2}, y = 0)$ and $(\gamma
= 2, x=1, y=\frac{1}{2})$.
\end{theorem}
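Theorem~\ref{thm:5} can be corroborated numerically. The grid search below (an illustration; the resolution $0.01$ is our choice) scans the feasible region of Optimization problem A.

```python
# g(gamma, x, y) as defined above.
def g(ga, x, y):
    return ga - x * x - ga * y + y / 2 + x * y

step = 100                                   # grid resolution 1/step = 0.01
best = None
for i in range(step, 9 * step // 4 + 1):     # constraint 1: 1 <= gamma <= 9/4
    ga = i / step
    for j in range(step + 1):                # 0 <= x <= 1
        x = j / step
        if x > min(ga / 2, 1):               # constraint 2 (upper part)
            continue
        for k in range(j + 1):               # constraint 2: 0 <= y <= x
            y = k / step
            if x + y > ga - 0.5 or x + y > (1 + ga) / 2:   # constraints 3, 4
                continue
            v = g(ga, x, y)
            if best is None or v < best[0]:
                best = (v, ga, x, y)
print(best)   # -> (0.75, 1.0, 0.5, 0.0)
```

The second optimum $(\gamma=2, x=1, y=\frac12)$ also evaluates to $\frac34$ on this grid; the search reports the first point attaining the minimum.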
Plugging the optimal values of $\gamma, x, y$ back into Claim
\ref{cl:1993} completes the proof of the theorem.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main}]
Let us recall some of the facts proved so far concerning the largest $n$-vertex $2$-hypercut $C$. Pick an arbitrary vertex $v$. Since $C$ is a coboundary, it can be generated by an $n$-vertex graph which consists of the isolated vertex $v$ and $G={\rm link}_v(C)$, an $(n-1)$-vertex $\Lambda$-connected graph. Similarly, $\bar C$ can be generated by the disjoint union of $\{v\}$ and $\bar G$. As we saw, there exists some $v$ for which the corresponding $\bar{G}$ satisfies either
$$ {\bf CASE ~(I)}:~~~~~~~m = n-1+o(n), ~d_1 =\frac n2 \pm o(n)\text{~and~} d_2 = o(n),$$
or
$$ {\bf CASE ~(II)}:~~~~~~m = 2n \pm o(n), ~d_1 =n - o(n), ~d_2 =\frac n2 \pm o(n) \text{~and~}d_3 = o(n).$$
where, as before, $m=|E(\bar G)|$, $d_1\ge d_2 \ge \dots \ge d_{n-1}$ is the degree sequence of $\bar G$, with $d_i=d(v_i)$. We denote by $t$ the number of triangles in $\bar G$. Since $C$ is the largest cut, the graph $\bar G$ attains the minimum of $f(\bar G)=nm-\sum d_i^2+4t$ among all graphs whose complement is $\Lambda$-connected.
We now turn to further analyse the structure of $\bar G$, in {\bf CASE (I)}.
\begin{lemma}
\label{lm:str_barG_caseI}
Suppose that $\bar G$ satisfies~ {\bf CASE (I)} and let $H=\bar G\setminus v_1$. Then $H$ is either (i) A perfect matching, or (ii) A perfect matching plus an isolated vertex, or (iii) A perfect matching plus an isolated vertex and a 3-vertex path.
\end{lemma}
\begin{proof}
The proof proceeds as follows: for every $H$ other than the above, we find a local modification $\bar G_1$ of $\bar G$ with $f(\bar G_1)< f(\bar G)$. We then likewise modify $\bar G_1$ to $\bar G_2$, etc., until for some $k\ge 1$ the graph $G_k$ is $\Lambda$-connected. The modification steps are as follows.
For every connected component $U$ of $H$ of even size $|U|\ge 4$, we replace $H|_U$ with a perfect matching on $U$, and connect $v_1$ to one vertex in each of these $\frac{|U|}{2}$ edges. {\em Now all connected components of $H$ are either an edge or have an odd size}.
Consider now odd-size components. Note that $H$ can have at most one isolated vertex. Otherwise $\bar G$ is disconnected or it has clones, so that $G$ is not $\Lambda$-connected.
As long as $H$ has two odd connected components which together have $6$ vertices or more, we replace this subgraph with a perfect matching on the same vertex set, and connect $v_1$ to one vertex in each of these edges. If the remaining odd connected components are a triangle and an isolated vertex, remove one edge from the triangle, and connect $v_1$ only to one endpoint of the obtained $3$-vertex path. In the last remaining case $H$ has at most one odd connected component $U$.
If no odd connected components remain or if $|U|=1$, we are done.
In the last remaining case $H$ has a single odd connected component of order $|U|\ge 3$. We replace $H|_U$ with a matching of $(|U|-1)/2$ edges, connect $v_1$ to one vertex in each edge of the matching and to the isolated vertex. If, in addition, there is a connected component of order $2$ with both vertices adjacent to $v_1$ (by the proof of Claim \ref{cl:167} there is at most one such component), we also remove one edge between $v_1$ and this component.
All these steps strictly decrease $f$. We show this for the first kind of steps. The other cases are nearly identical.
Recall that $|E(H)|=\frac n2\pm o(n)$ and that $H$ has at most one isolated vertex. Therefore every connected component in $H$ has only $o(n)$ vertices. Let $U$ be a connected component with $2u\ge 4$ vertices of which $0<r\le 2u$ are neighbours of $v_1$, and let $\beta = |E(H|_U)| - (2u-1)\ge 0$. Let $\bar G'$ be the graph after the aforementioned modification w.r.t. $U$. We denote its number of edges and triangles by $m'$ and $t'$ resp., and its degree sequence by $d_i'$. Then,
\begin{eqnarray*}
f(\bar G)-f(\bar G') = n(m-m')-\sum_i(d_i^2-d_i'^2)+4(t-t') \ge \\
n(\beta+r-1)-\left(d_1^2-(d_1-r+u)^2\right)-\sum_{i\in U}d_i^2 \ge \\
n(\beta+r-1)+\left(u-r\right)(2d_1+u-r)-2u(4u-2+2\beta+r).
\end{eqnarray*}
In the second row we use $t\ge t'$, which is true since the modification on $U$ creates no new triangles. In the third row we use $\sum_{i\in U}d_i^2\le(\max_{i\in U}d_i)\left(\sum_{i\in U}d_i\right).$
Let us express $d_1=\frac{n-w}{2}$ where $w=o(n)$. What remains to prove is that
\begin{eqnarray*}
n(\beta+u-1)+(u-r)(u-r-w)\ge 2u(4u-2+2\beta+r).
\end{eqnarray*}
Or, after some simple manipulation, and using the fact $r \leq 2u$, that
\[
\beta n+(u-1)n\ge 4\beta u+u(7u+w+3r-4).
\]
This is indeed so since $u=o(n)$ implies that $\beta n\gg\beta u$ and $2\le u\le o(n)$ implies $(u-1)n\gg u(7u+w+3r-4)$.
The other cases are done very similarly, with only minor changes in the parameters. In the case of two odd connected components which together have $2u\ge 6$ vertices, in the final step the main term is $(\beta + u -2)n \ge n+\beta u$ since $u>2$. In the case of changing a triangle to a 3-vertex path the main term in the final inequality is $(\beta+u-1)n=n$.
\end{proof}
The structure of $\bar G$ for {\bf CASE (I)} is almost completely determined by Lemma \ref{lm:str_barG_caseI}. Since $G$ is $\Lambda$-connected, in $\bar G$ the vertex $v_1$ must have a neighbour in each component of $H$, and can be fully connected to at most one component. In addition, if $P$ is a 3-vertex path in $H$, then $v_1$ has exactly one neighbour in $P$, which is an endpoint; otherwise we get clones. Therefore the only possible graphs are those that appear in Figure 1. The first row of the figure applies to odd $n$, where the optimal $\bar G$ satisfies $f(\bar G)=\frac 34n^2-4n+\frac{25}{4}$. The other rows correspond to even $n$, with four optimal graphs that satisfy $f(\bar G)=\frac 34n^2-\frac 72n+4$.
\begin{figure}[h!]
\label{fig:caseI}
\centering
\includegraphics[width=1\textwidth]{caseIdraw.png}
\caption{The graphs $\bar G$ that are considered in the final stage of the proof of {\bf CASE (I)}. The first row refers to the only possibility for odd $n$. The second row refers to even $n$, where $H$ is a perfect matching. The third row refers to even $n$, where $H$ is the disjoint union of an isolated vertex, a 3-vertex path and a matching.}
\end{figure}
This concludes {\bf CASE (I)}, and we now turn to {\bf CASE (II)}. Our goal here is to reduce this case back to {\bf CASE (I)}, and this is done as follows.
\begin{claim}
\label{clm::redII2I}
Let $\bar G = {\rm link}_v(\bar{C})$ be a graph on $n-1$ vertices with parameters as in {\bf CASE (II)}. If $H=\bar G\setminus\{v_1,v_2\}$ has an isolated vertex ${\rm z}$ that is adjacent in $\bar G$ to $v_1$, then $f(\bar G)$ is bounded by the extremal examples found in {\bf CASE (I)}.
\end{claim}
\begin{proof}
Let $S$ be the star graph on vertex set $V\cup\{v\}$ with vertex $v_1$ in the center and $n-1$ leaves. Consider the graph $F := \bar{G} \oplus S$ on the same vertex set, whose edge set is the symmetric difference of $E(S)$ and $E(\bar G)$. Since every triplet meets $S$ in an even number of edges, the coboundary that $F$ generates equals the coboundary that $\bar G$ generates, which is $\bar C$.
In addition, ${\rm z}$ is an isolated vertex in $F$ since its only neighbour in $\bar G$ is $v_1$. Consequently, $F = {\rm link}_{\rm z}(\bar{C})$, and the claim will follow by showing that $F$ satisfies the conditions of {\bf CASE (I)}.
Indeed, $\mbox{deg}_F(v_1) =
n-1 - \mbox{deg}_{\bar{G}}(v_1) = o(n)$, and $|\mbox{deg}_F(u)-\mbox{deg}_{\bar G}(u)|\le 1$ for every other vertex $u$. Hence, $\mbox{deg}_F(v_2) =
\frac{n}{2} \pm o(n)$, and $\mbox{deg}_F(u) =o(n)$ for every other vertex $u$.
\end{proof}
If $H$ has no isolated vertex that is adjacent in $\bar G$ to $v_1$, we show how to modify $\bar G$ into a graph $\bar G_1$ such that (i) $G_1$ is $\Lambda$-connected, (ii) $\bar G_1\setminus\{v_1,v_2\}$ has an isolated vertex which is adjacent to $v_1$ in $\bar G_1$, and (iii) $f(\bar G_1)<f(\bar G)$.
Since $G$ is $\Lambda$-connected, and using the proof of Claim \ref{cl:167}, $H$ has at most one connected component $U_1$ in which all vertices are adjacent to $v_1$ and not to $v_2$ in $\bar G$. Similarly, it has at most one connected component $U_2$ in which all vertices are adjacent to both $v_1$ and $v_2$. Also, since $d_1=n-o(n)$, $d_2=\frac n2 \pm o(n)$ and $H$ has at most 3 isolated vertices, there exists an edge $xy \in E(H)$ such that $xv_1, xv_2, yv_1\in E(\bar{G})$, but $yv_2\not\in E(\bar{G})$.
$\bar G_1$ is constructed as follows:
\begin{enumerate}
\item If neither component $U_1$ nor $U_2$ exists, remove the edge $xy$ and the edge $v_1v_2$, if it exists. Otherwise, let $r:=|U_1\cup U_2|$.
\item If $r$ is even, replace $H|_{U_1\cup U_2}$ with a perfect matching on $r-2$ vertices and two isolated vertices. Connect $v_1$ to every vertex in $U_1\cup U_2$. Make $v_2$ a neighbour of one of the isolated vertices, and of one vertex in each of the edges of the matching. Additionally, remove the edge $v_1v_2$ if it exists.
\item If $r$ is odd, replace $H|_{U_1\cup U_2}$ with a perfect matching on $r-1$ vertices and one isolated vertex. Connect $v_1$ to every vertex in $U_1\cup U_2$, and $v_2$ to one vertex in each edge of the matching.
\end{enumerate}
The fact that the value of $f$ decreases is shown similarly to the
calculation in {\bf CASE (I)}.
\end{proof}
\section{Large $d$-hypercuts over $\ensuremath{\mathbb F}_2$ in even dimensions}
\label{sec:f2_even_dim}
In this section we consider large $d$-dimensional hypercuts over $\ensuremath{\mathbb F}_2$ for $d>2$. We show that for $d$ even the largest $d$-hypercuts have ${n\choose d+1}\left(1-o_n(1)\right)$ $d$-faces. In contrast, for odd $d$ we observe that the density of every $d$-hypercut is bounded away from $1$.
\begin{theorem}\label{thm:f2_even_dim}
For every $d\ge 2$ even there exists an $n$-vertex $d$-hypercut over $\ensuremath{\mathbb F}_2$ with ${n\choose d+1}\left(1-o_n(1)\right)$ $d$-faces.
\end{theorem}
Before we prove the theorem, let us explain why the situation is different in odd dimensions. Recall that Turán's problem (see, e.g.,~\cite{keevash_survey}) asks for the largest density $ex(n,K_{d+2}^{d+1})$ of a $(d+1)$-uniform hypergraph that does not contain all the hyperedges on any set of $d+2$ vertices. For $d$ odd a hypercut has this property, because (see Section \ref{section:combin}) the characteristic vector $C\in\ensuremath{\mathbb F}_2^{n\choose d+1}$ of a $d$-hypercut is a coboundary, i.e., $C\cdot\partial_{d+1}=0$. A simple double-counting argument shows that the density of $C$ cannot exceed $1-\frac{1}{d+2}$, and in fact a better upper bound of $1-\frac{1}{d+1}$ is known~\cite{sidorenko}. One of the known constructions for the Turán problem yields $d$-coboundaries with density $\frac 34 - \frac {1}{2^{d+1}} - o(1)$ for $d$ odd~\cite{de_caen}. In particular, for $d=3$ this gives a lower bound of $\frac {11}{16}=0.6875$. In YP's MSc thesis~\cite{yuval_msc} an upper bound of $0.6917$ was found using flag algebras.
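The parity argument in the preceding paragraph is easy to check for small parameters. The sketch below (illustrative only; $n=8$ and $d=3$ are our sample choices) draws a random $3$-coboundary over $\ensuremath{\mathbb F}_2$ and confirms that no $5$-set carries all of its $4$-subsets.

```python
import random
from itertools import combinations

random.seed(0)
n, d = 8, 3                    # d odd; small sample values (our choice)

# A random d-coboundary over F_2: C = delta(g) for a random d-set system g,
# i.e. C is the set of (d+1)-sets containing an odd number of members of g.
g = {S for S in combinations(range(n), d) if random.random() < 0.5}
C = {S for S in combinations(range(n), d + 1)
     if sum(T in g for T in combinations(S, d)) % 2 == 1}

# delta(delta(g)) = 0, so every (d+2)-set contains an even number of faces
# of C; since d+2 is odd, no (d+2)-set has all of its (d+1)-subsets in C.
for S in combinations(range(n), d + 2):
    k = sum(T in C for T in combinations(S, d + 1))
    assert k % 2 == 0 and k < d + 2
```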
We now turn to prove the theorem for even $d\ge 4$. As before, a $d$-hypercut $C$ is an inclusion-minimal set of $d$-faces whose characteristic vector is a coboundary. In addition, every $d$-coboundary $C$ and every vertex $v$ satisfy $C = {\rm link}_v(C)\cdot\partial_d$. Recall that $C$ is a $2$-hypercut iff ${\rm link}_v(C)$ is $\Lambda$-connected for some vertex $v$. In dimension $>2$ we do not have such a characterization, but as we show below, an appropriate variant of the {\em sufficient} condition for being a hypercut does apply in all dimensions.
Let $\tau,\tau'$ be two $(d-1)$-faces in a $(d-1)$-complex $K$. We say that they are $\Lambda$-{\em adjacent} if their union $\sigma=\tau\cup\tau'$ has cardinality $d+1$, and $\tau,\tau'$ are the only $(d-1)$-dimensional subfaces of $\sigma$ in $K$. We say that $K$ is $\Lambda$-connected if the transitive closure of the $\Lambda$-adjacency relation has exactly one equivalence class.
\begin{claim}
Let $C$ be a $d$-dimensional coboundary such that the $(d-1)$-complex $K={\rm link}_v(C)$ is $\Lambda$-connected for some vertex $v$. Then $C$ is a $d$-hypercut.
\end{claim}
\begin{proof}
Suppose that $\emptyset\neq C' \subsetneq C$ is a $d$-coboundary and let $K'={\rm link}_v(C')$. Note that $\emptyset\neq K' \subsetneq K$ and therefore there are $(d-1)$-faces $\tau,\tau'$ which are $\Lambda$-adjacent in $K$ such that $\tau'\in K'$ and $\tau \notin K'$. Consider the $d$-dimensional simplex $\sigma=\tau\cup\tau'$. On the one hand, since exactly two of the facets of $\sigma$ are in $K$, it does not belong to $C=K\cdot\partial_d$. On the other hand, it does belong to $C'$ since exactly one of its facets ($\tau'$) is in $K'$. This contradicts the assumption that $C'\subset C$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:f2_even_dim}]
We start by constructing a random $(n-1)$-vertex $(d-1)$-dimensional complex $K$ that has a full skeleton, where each $(d-1)$-face is placed in $K$ independently with probability $p:=1-n^{-\frac{1}{3d-3}}$. We show that with probability $1-o_n(1)$ the complex $K$ is $\Lambda$-connected, whence $C:=K\cdot\partial_d$ is almost surely a $d$-hypercut of the desired density.
We actually show that $K$ satisfies a condition that is stronger than $\Lambda$-connectivity. Namely, let $\tau,\tau' \in K$ be two distinct $(d-1)$-faces. We find $\pi, \pi'$, where $\pi$ is $\Lambda$-adjacent to $\tau$ and $\pi'$ is $\Lambda$-adjacent to $\tau'$, and in addition the symmetric difference decreases: $|\pi\oplus\pi'|<|\tau\oplus\tau'|$. To this end we pick some vertices $u\in\tau\setminus\tau'$, $u'\in\tau'\setminus\tau$ and aim to show that with high probability there is some $x\notin\tau\cup\tau'$ for which the following event $P_x$ holds:
\[
\pi_x:=(\tau\setminus\{u\})\cup\{x\}\ \text{and}\ \pi'_x:=(\tau'\setminus\{u'\})\cup\{x\}\ \text{are in}\ K,\ \text{and}
\]
\[
\tau\ \text{is $\Lambda$-adjacent to}\ \pi_x,\ \text{and}\ \tau'\ \text{is $\Lambda$-adjacent to}\ \pi'_x.
\]
In other words, it is required that $\pi_x \in K$ and $(\tau\setminus\{w\})\cup\{x\}\notin K$ for every $w\in\tau\setminus\{u\}$, and similarly for $\tau',\pi_x'$. Therefore $\Pr(P_x)=p^2\cdot(1-p)^{2d-2}$. Moreover, the events $\{P_x~|~x\notin\tau\cup\tau'\}$ are independent. Hence, for a given pair $\tau,\tau'$, no such $x$ exists with probability at most $\left(1-p^2\cdot(1-p)^{2d-2}\right)^{n-2d}=\exp{[-\Theta(n^{1/3})]}$. The proof is concluded by taking the union bound over all pairs $\tau,\tau'$.
\end{proof}
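The quantitative part of this argument can be checked numerically. The sketch below (illustrative only; $d=4$ and $n=10^9$ are our sample values) evaluates $\Pr(P_x)$ and the union bound.

```python
from math import comb, log

d, n = 4, 10**9                          # sample values (our choice)
p = 1 - n ** (-1 / (3 * d - 3))
pr = p**2 * (1 - p) ** (2 * d - 2)       # Pr(P_x); note (1-p)^{2d-2} = n^{-2/3}

# log-probability that no x among the n - 2d candidates works for a fixed pair
log_fail_pair = (n - 2 * d) * log(1 - pr)
assert log_fail_pair < -0.5 * n ** (1 / 3)          # exp(-Theta(n^{1/3}))

# the union bound over all pairs of (d-1)-faces still vanishes
log_union = log_fail_pair + 2 * log(comb(n - 1, d))
assert log_union < 0
```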
\section{Large Collapsible $2$-dimensional Hyperforests with no Shadow}
\label{sec:cl_ac}
The main result of this section is a construction of a shadowless $\ensuremath{\mathbb F}$-acyclic $2$-complex over every field $\ensuremath{\mathbb F}$. Recall that assuming Artin's conjecture there are infinitely many shadowless $2$-dimensional $\ensuremath{\mathbb Q}$-almost hypertrees. We saw in Section~\ref{sec:f2} that every $\ensuremath{\mathbb F}_2$-almost hypertree has a shadow, and there we discussed its minimal cardinality. We now complement this by seeking the largest number of $2$-faces in shadowless hyperforests. Our construction works at once {\em for all fields} since it is based on the combinatorial property of $2$-collapsibility.
\begin{theorem}
\label{thm:cls_acyc}
For every odd integer $n$, there exists a $2$-collapsible $2$-complex $A=A_n$ with ${{n-1}\choose {2}} - (n+1)$ faces that remains $2$-collapsible after the addition of any new face. In particular, this complex is acyclic and shadowless over every field.
\end{theorem}
\begin{proof}
The vertex set $V=V(A)$ is the additive group $\ensuremath{\mathbb Z}_n$. All additions here are done $\bmod ~n$. Edges in $A$ are denoted $(x,x+a)$ with $a<n/2$, and such an edge is said to have {\em length} $a$. Also, for $a>1$ let $b=\floor{\frac a2}$; note that $1\le b <a < n/2$. For every $x\in \ensuremath{\mathbb Z}_n$ and $n/2>a>1$ we say that the edge $(x, x+a)$ {\em yields} the face $\rho_{x,a}:=\{x,x+a,x+\floor{\frac a2}\}$ of {\em length} $a$. The $2$-faces of $A$ are:
\[\{\rho_{x,a} ~|~ n/2>a\ne 1,3,~x\in \ensuremath{\mathbb Z}_n\}\]
It is easy to $2$-collapse $A$ by collapsing $A$'s faces in decreasing order of their lengths. In each phase of the collapsing process, the longest edges in the remaining complex are exposed and can be collapsed.
It remains to show that the complex $A\cup\{\sigma\}$ is $2$-collapsible for every face $\sigma=\{x,y,z\}\notin A$. To this end, let us carry out as much as we can of the ``top-down'' collapsing process described above. Clearly some of the steps of this process become impossible due to the addition of $\sigma$, and we now turn to describe the complex that remains after all the possible steps of the previous collapsing process are carried out. Subsequently we show how to $2$-collapse this remaining complex and conclude that $A\cup\{\sigma\}$ is $2$-collapsible, as claimed.
For every $n/2>a\ge 1,~x\in \ensuremath{\mathbb Z}_n$ we define a subcomplex $C_{x,x+a} \subset A$. If $a=1$ or $3$, this is just the edge $(x, x+a)$. For all $n/2>a\ne 1, 3$ it is defined recursively as $C_{x,~x+\floor{\frac a2}}\cup C_{x+\floor{\frac a2},~x+a}\cup \{\rho_{x,a}\}$.
Note that $C_{x,y}$ is a triangulation of the polygon that is made up of the edge $(x,y)$ and $C_{x,y}$'s edges of lengths $1$ and $3$.
Our proof will be completed once we (i) observe that this remaining complex is $\Delta_\sigma:=\{\sigma\}\cup C_{x,y}\cup C_{x,z}\cup C_{y,z}$, and (ii) show that $\Delta_\sigma$ is $2$-collapsible.
Indeed, toward (i), just follow the original collapsing process and notice that $\Delta_\sigma$ consists of exactly those faces in $A$ that are affected by the introduction of $\sigma$ into the complex.
We will show (ii) by proving that the face $\sigma$ can be collapsed out of $\Delta_\sigma$. Consequently, $\Delta_\sigma$ is $2$-collapsible to a subcomplex of the $2$-collapsible complex $A$.
As we show below, the following holds.
\begin{claim}\label{clm:only_leaf}
There exists a vertex in $\Delta_\sigma$ which belongs to exactly one of the complexes $C_{x,y}, C_{x,z}$ or $C_{y,z}$.
\end{claim}
This allows us to conclude that the face $\sigma$ can be collapsed out of $\Delta_\sigma$.
Say that the vertex $v$ is in $C_{x,y}$ and only there, and let $e$ be some edge of length $1$ or $3$ in $C_{x,y}$ that contains $v$. Follow the recursive construction of $C_{x,y}$ as it leads from $(x,y)$ to $e$. Every edge that is encountered there appears only in the polygon $C_{x,y}$. By traversing this sequence in reverse, we collapse $\sigma$ out of $\Delta_\sigma$.
\end{proof}
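Both assertions of Theorem~\ref{thm:cls_acyc} can be verified exhaustively for a small odd $n$. The sketch below (illustrative only; $n=13$ is our choice) greedily collapses $A_{13}$ and each $A_{13}\cup\{\sigma\}$; this greedy peeling is order-independent, so it is a sound test of $2$-collapsibility.

```python
from itertools import combinations
from math import comb

n = 13                                  # a small odd n (our choice)
faces = {frozenset({x, (x + a) % n, (x + a // 2) % n})
         for a in range(2, (n + 1) // 2) if a != 3 for x in range(n)}
assert len(faces) == comb(n - 1, 2) - (n + 1)

def collapses(fs):
    """Greedily remove faces containing an exposed edge (an edge lying in
    exactly one remaining face); return True if all faces are eliminated."""
    remaining = set(fs)
    changed = True
    while changed:
        changed = False
        incident = {}
        for f in remaining:
            for e in combinations(sorted(f), 2):
                incident.setdefault(e, []).append(f)
        for e, lst in incident.items():
            if len(lst) == 1 and lst[0] in remaining:
                remaining.remove(lst[0])
                changed = True
    return not remaining

assert collapses(faces)                              # A_n is 2-collapsible
for T in combinations(range(n), 3):                  # and so is A_n + {sigma}
    if frozenset(T) not in faces:
        assert collapses(faces | {frozenset(T)})
```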
\begin{proof}[Proof of Claim \ref{clm:only_leaf}]
By translating modulo $n$ if necessary, we may assume that $x=0$ and $0<y,z-y<\frac n2$. If $z>\frac n2$, then $V(C_{0,y})\subseteq\{0,...,y\}$, $V(C_{y,z})\subseteq\{y,...,z\}$ and $V(C_{z,0})\subseteq\{z,...,n-1,0\}$, so these vertex sets pairwise intersect only in the endpoints $0,y,z$, and the claim follows easily.
We now consider the case $z<\frac n2$ and assume, for contradiction, that the claim fails for $\sigma=\{0,y,z\}$. We will conclude that $\sigma \in A$, and in fact $\sigma \in C_{0,z}$. By the recursive construction of $C_{0,z}$, this amounts to showing that both edges $(0,y)$ and $(y,z)$ are in $C_{0,z}$. We only prove that $(0,y)\in C_{0,z}$; the claim $(y,z)\in C_{0,z}$ follows by an essentially identical argument.
So we fix $0<y<z<\frac n2$ and we want to show that
\begin{equation}\label{eqn:unique_vertex}
\mbox{If $(0,y)\notin C_{0,z}$ then $V(C_{0,z})\cap [0,y] \neq V(C_{0,y})$.}
\end{equation}
Consequently, there is a vertex $v$ in $[0,y]$ which belongs to exactly one of the complexes $C_{0,z}$ and $C_{0,y}$. If such a $v<y$ exists, we are done, since $C_{y,z}$ has no vertices in $[0,y-1]$. Otherwise,
\[
V(C_{0,z})\cap[0,y-1] = V(C_{0,y})\setminus \{y\}~~~\mbox{and}~~~y\notin C_{0,z}.
\]
But the vertices of $C_{0,z}$ form an increasing sequence from $0$ to $z$ with differences $1$ or $3$, so either $(y-2,y+1)\in C_{0,z}$ or $(y-1,y+2)\in C_{0,z}$. In the former case, both $y-2$ and $y$ are vertices of $C_{0,y}$, and therefore $y-1 \in C_{0,y}$ and consequently $y-1 \in C_{0,z}$, contrary to the assumption that the edge $(y-2,y+1)$ is in $C_{0,z}$. In the latter case, $y+2 \in C_{0,z}$ and $y+1 \notin C_{0,z}$. Which vertex succeeds $y$ in $C_{y,z}$? If $(y,y+1)\in C_{y,z}$, then $y+1$ belongs only to $C_{y,z}$. If $(y,y+3)\in C_{y,z}$, then $y+2$ belongs only to $C_{0,z}$. In either case we have found a vertex belonging to exactly one of the three complexes, a contradiction.
We prove the implication~(\ref{eqn:unique_vertex}) by induction on $y$. The base cases, where $y=1$ or $y=3$, are straightforward. If $(0,\floor{\frac y2})\notin C_{0,z}$, then by induction $V(C_{0,z})\cap [0,\floor{\frac y2}] \neq V(C_{0,\floor{\frac y2}})$. But $V(C_{0,y})\cap [0,\floor{\frac y2}] = V(C_{0,\floor{\frac y2}})$, and the conclusion that $V(C_{0,z})\cap [0,y] \neq V(C_{0,y})$ follows. We now consider what happens if $(0,\floor{\frac y2})\in C_{0,z}$. Which edge yielded the $2$-face of $C_{0,z}$ that contains the edge $(0,\floor{\frac y2})$? It can be either $(0, 2\cdot\floor{\frac y2})$ or $(0, 2\cdot\floor{\frac y2}+1)$. But one of these two edges is $(0,y)$ which, by assumption, is not in $C_{0,z}$, so it must be the other one. Namely, either $y=2r$ and $(0,2r+1)\in C_{0,z}$, or $y=2r+1$ and $(0,2r)\in C_{0,z}$.
Let us deal first with the case $y=2r$.
Assume, in contradiction to ~(\ref{eqn:unique_vertex}), that $V(C_{0,z})\cap [0,2r] = V(C_{0,2r})$. In particular $V(C_{0,z})\cap [r,2r] = V(C_{0,2r})\cap [r,2r]$. But since $(0,2r+1)$ is an edge of $C_{0,z}$ it also follows that $V(C_{0,2r+1})\cap [r,2r] = V(C_{0,z})\cap [r,2r]$. Therefore,
\begin{equation*}\label{eqn:vertex_segment_equality}
V(C_{0,2r+1})\cap [r,2r] = V(C_{0,2r})\cap [r,2r].
\end{equation*}
By the recursive construction of $C_{0,2r+1}$ and $C_{0,2r}$ we obtain that $
V(C_{r,2r+1})\cap [r,2r] = V(C_{r,2r})$. By using the rotational symmetry of $A$ we can translate this equation by $r$ to conclude that
$V(C_{0,r+1})\cap [0,r] = V(C_{0,r})$. By induction, using the contrapositive of Equation~(\ref{eqn:unique_vertex}), this implies that $(0,r)\in C_{0,r+1}$, hence $r=1$. However, $C_{0,z}$ cannot contain both $(0,3)$ and $(0,1)$, so we are done.
The argument for $y=2r+1$ is essentially the same and is omitted.
\end{proof}
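The key implication~(\ref{eqn:unique_vertex}) can also be checked mechanically for small parameters. The following standalone Python sketch (illustrative only; it rebuilds the edge sets of the complexes $C_{x,y}$ from the recursion and ignores the ambient $n$, which is harmless as long as all lengths stay below $n/2$) verifies the implication for all $0<y<z<50$:

```python
def c_edges(x, a):
    """Edge set of C_{x,x+a} (as ordered pairs), following the recursion;
    the ambient n is ignored, harmless while all lengths stay below n/2."""
    if a in (1, 3):
        return {(x, x + a)}
    h = a // 2
    return {(x, x + a)} | c_edges(x, h) | c_edges(x + h, a - h)

def c_vertices(x, a):
    return {v for e in c_edges(x, a) for v in e}

def implication_holds(y, z):
    """Implication (eqn:unique_vertex): if (0,y) is not an edge of C_{0,z},
    then V(C_{0,z}) restricted to [0,y] differs from V(C_{0,y})."""
    if (0, y) in c_edges(0, z):
        return True  # hypothesis fails, so the implication is vacuously true
    return {v for v in c_vertices(0, z) if v <= y} != c_vertices(0, y)
```

For example, $(0,2)$ is an edge of $C_{0,5}$ while $(0,4)$ is not, and indeed $V(C_{0,5})\cap[0,4]=\{0,1,2\}\neq V(C_{0,4})$.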
\begin{comment}
\begin{theorem}
\label{thm:cls_acyc_old}
For every integer $n$, there exists a $2$-collapsible $2$-complex $A_n$ with ${{n-1}\choose {2}} - O(n\log n)$ faces that remains $2$-collapsible after the addition of any new face. Thus, this complex is acyclic and closed over every field, and perfect $(2,O(n\log n))$-hypercuts exist.
\end{theorem}
\begin{proof}
For simplicity of presentation, we restrict the discussion to $n$'s of the form $n=2^k-1$. The vertex set $V=V(A_n)$ is the set of all binary strings
of length $\le k-1$, including the empty string. Hence $|V|=2^{k}-1$, as needed. We denote strings by lowercase letters. The faces of $A_n$ are defined as follows: Let $x, y \in V$ be such that neither one is a prefix of the other. Associated with every such pair is the face $\sigma_{xy}=(x, y, z)$ where $z$ is the longest common prefix of $x, y$.
Clearly every edge $(x,y)$ as above is exposed, so that all the faces of $A_n$ can be
simultaneously collapsed.
It is easy to verify that $|\{(x,y)|~ \mbox{x is a prefix of y } \}|=\Theta(n\log n)$.
This yields the claim on the number of faces in $A_n$.
We still need to show that $A_n$ remains $2$-collapsible when a new face $(a, b, c)$ is added.
Almost all faces of $A_n\cup \{(a,b,c)\}$ get collapsed right away, as before. The only faces that survive the first round
of collapse are $S=\{ (a, b, c), \sigma_{ab}, \sigma_{ac}, \sigma_{bc} \}$, where $\sigma_{xy}$ is null if there is a prefix relation between $x,y$. But a $2$-complex with $4$ faces or less that is non-2-collapsible must be the boundary of a tetrahedron. So suppose that $S$ is the boundary of the tetrahedron $(a,b,c,z)$. For this to happen, no two strings among $a, b, c$ can have a prefix relation. In addition, the string $z$ is the longest common prefix of each pair among $a, b, c$. Consider the first position $i$ for which $a_i, b_i, c_i$ are not all equal to see that this is impossible.
\end{proof}
\section{Largest Shadows and Smallest $(d,k)$-Hypercuts}
\label{section:maxShadow}
In this section we restrict neither the underlying field $\ensuremath{\mathbb F}$ nor the dimension $d$. We ask how small ${\rm rank}(Y)$ can be given its size $|Y|$. Equivalently, what is the largest possible size of a $d$-complex of rank $r$?
Clearly, the extremal $Y$ is closed, and the largest acyclic subset in $Y$ has the maximum possible shadow for any acyclic set of size $r$.
A-priori the answers to these questions may depend on the number of vertices $n$. However, as we shall see, this is
not so, and if $r \leq {{n-1}\choose {d}}={\rm rank}(K_n^d)$,\, there exists an optimal $n$-vertex $d$-complex of rank $r$.
Since a $(d,r)$-hypercut is the complement of a closed set of co-rank $r$,
it follows that a $(d,r)$-hypercut in $K_n^d$ has the least possible size if and only if it is the complement
of an extremal $Y$ as above of co-rank $r$. That is, ${\rm rank}(Y)={{n-1}\choose{d}} - r$.
As it turns out, minimum $r$-cuts in $K_n^d$ are much simpler objects than maximum cuts, let
alone maximum $r$-cuts.
We need to recall some notions related to the Kruskal-Katona Theorem (e.g.,~\cite{Frankl}).
For every two integers $m$ and $d$, there is a {\em unique} representation
\[
m ~=~ \sum_{i=s}^{d+1} {{m_i} \choose {i}},~~~\mbox{where}~~~
m_{d+1} > m_d > \ldots > m_s \geq s \geq 1~.
\]
It is called the {\em cascade} form. Let
\[
m_{\{d\}} ~=~ \sum_{i=s}^{d+1} {{m_i} \choose {i - 1}}~.
\]
{\bf Kruskal-Katona Theorem:}~{\em The number of $(d-1)$-faces in a $d$-complex of size $m$ is at least $m_{\{d\}}$. Equality holds e.g., for the $d$-complex generated by the first $m$ members in the co-lexicographic order on $(d+1)$-sets over $\ensuremath{\mathbb N}$.
} \\
\\
This clearly implies monotonicity.
\begin{fact}
\label{fat:1}
If $m > m'$ then $m_{\{d\}} \geq m'_{\{d\}}$ for any $d$.
\end{fact}
Our result can be stated as follows:
\begin{theorem}
\label{th:dual_max}
Every $d$-complex of size $m = \sum_{i=s}^{d+1} {{m_i}\choose {i}}$ (in cascade form) has rank at least
\[
\sum_{i=s}^{d+1} {{m_i -1}\choose {i-1}}~,
\]
where ${0 \choose 0} = 1$. The bound is attained e.g., for a co-lexicographic initial segment as above.
\end{theorem}
The proof is based on a shifting argument.
\begin{definition}
Let $Y\subseteq K_n^d $ and $u<v\in[n]$. For every $\sigma\in Y$, define
$$S_{uv}(\sigma)=\left\{\begin{matrix}
\sigma\setminus\{v\}\cup\{u\}&v\in\sigma,\;u\notin\sigma,\;\sigma\setminus\{v\}\cup\{u\}\notin Y\\
\sigma& \mbox{otherwise}
\end{matrix}\right. $$
In addition, $S_{uv}(Y)=\{S_{uv}(\sigma)\mid\sigma\in Y\}\subseteq K_n^d$.
A complex $Y$ is called {\em shifted} if $S_{uv}(Y)=Y$ for every $u<v$.
\end{definition}
It is well known and easy to prove that every complex can be made shifted by repeatedly
applying shifting operations $S_{uv}$ with $u < v$ (See ~\cite{shifting}). We prove Theorem~\ref{th:dual_max} by
showing that it holds for shifted complexes, and that shifting does not increase the rank.
\begin{claim}
Every shifted complex satisfies Theorem~\ref{th:dual_max}.
\end{claim}
\begin{proof}
Let $Y$ be a shifted complex with $m = \sum_{i=s}^{d+1} {{m_i}\choose {i}}$ $d$-faces, (in cascade form).
Let $\mbox{st}_1(Y)$ be the {\em star} of vertex $1$, namely, the set of all faces of $Y$ containing it. The $d$-complex $Z$ generated by $\mbox{st}_1(Y)$ is clearly $d$-collapsible, since every $d$-face $\sigma$ in $Z$ contains the vertex $1$, and hence $\sigma\setminus\{1\}$ is exposed. Therefore $F$, the set of $Z$'s $d$-faces, is acyclic.
In fact, $F$ is {\em maximal} acyclic, i.e., it spans every $d$-face $\sigma \in Y$. Indeed, if $\sigma \not\in F$, let $\sigma^+ = \{1\} \cup \sigma\,$ be the $(d+1)$-simplex obtained
by augmenting $\sigma$ by vertex $1$. It is easy to verify that
\[
\partial \sigma^+ ~=~ \sigma \;+\; \sum_{v\in \sigma} \epsilon_v \cdot
(\sigma\setminus\{v\}\cup\{1\})\;.
\]
where $\epsilon_v \in \{-1,1\}$ are the corresponding signs.
Since $\partial_{d}\partial_{d+1} = 0$, we conclude that\\
$\,\partial_d \sigma\;=\; -\sum_{v\in \sigma} \epsilon_v \cdot \partial_d (\sigma\setminus\{v\}\cup\{1\})\,$,
and since $Y$ is shifted, all $(\sigma\setminus\{v\}\cup\{1\})$ are in $F$. Hence, $F$ spans $\sigma$. We showed that $\;{\rm rank} (Y) \;=\; |F|$, but what is the size of $F$?
Following~\cite{Frankl}, we show that $~|F| \,\geq\, \sum_{i=s}^{d+1} {{m_i -1}\choose {i-1}}$.
Indeed, assume to the contrary that $\,|F| \,<\, \sum_{i=s}^{d+1} {{m_i -1}\choose {i-1}}$.
Let $\mbox{ast}_1(Y)$ be the {\em antistar} of vertex $1$, namely the $d$-complex of all faces in $Y$ that do not contain the vertex $1$. Then,
\[
|\mbox{ast}_1(Y)| \;=\; m - |F| ~>~ \sum_{i=s}^{d+1} {m_i\choose i} - \sum_{i=s}^{d+1} {{m_i - 1} \choose {i-1}} ~=~ \sum_{i=s}^{d+1} {{m_i-1}\choose i}\,.
\]
Let $q$ be the largest $i$ such that $m_i=i$, or $q=0$ if no such $i$ exists, and let $a:=|\mbox{ast}_1(Y)|$. By the above,
\[
a ~\geq~ 1 + \sum_{i=s}^{d+1} {{m_i-1}\choose i} ~\geq~
{q \choose q} + \sum_{i=q+1}^{d+1} {{m_i-1}\choose i}\,,
\]
By monotonicity (Fact \ref{fat:1})
\[
a_{\{d\}} ~\ge~ {q \choose {q-1}} + \sum_{i=q+1}^{d+1} {{m_i-1}\choose i-1}
\]
Also
\[
{q \choose {q-1}} + \sum_{i=q+1}^{d+1} {{m_i-1}\choose i-1} ~\geq~ \sum_{i=s}^{d+1} {{m_i-1}\choose i-1}\;
\]
since each term for $i \leq q$ contributes $1$ to the right hand side
and there are at most $q$ such terms.
By applying the Kruskal-Katona Theorem to $\mbox{ast}_1(Y)$, the number of its $(d-1)$-faces is at least $a_{\{d\}}$. But $Y$ is shifted, so for every
$(d-1)$-face $\tau$ of $\mbox{ast}_1(Y)$, the $d$-simplex
$\tau \cup \{1\} \in F$, whence $|F| \geq \sum_{i=s}^{d+1} {{m_i-1}\choose {i-1}}\,$, contrary to our assumption.
The initial co-lexicographic segment of length $m$ clearly satisfies $|F| \,=\, \sum_{i=s}^{d+1} {{m_i-1}\choose {i-1}}.$
\end{proof}
We show next that shifting does not increase rank.
\begin{lemma}
${\rm rank} (Y)\;\ge\; {\rm rank} (S_{uv}(Y))$ for every $d$-complex $Y$
and every two vertices $v,u \in V(Y)$.
\end{lemma}
\begin{proof}
Let $Z_d$ be the vector space of $d$-cycles spanned by $Y$, namely,
the right kernel of the matrix $\partial_d$ restricted to the columns of
$Y$.
Since ${\rm rank}(Y) + {\rm dim} (Z_d(Y)) = |Y|$, and shifting preserves the size $|Y|$, it suffices to show that~
${\rm dim} Z_d(Y)\,\leq\,{\rm dim} Z_d(S_{uv}(Y))$. This will be achieved by means of a mapping that may be viewed as ``forced shifting".
Let $C_i(Y)$ be the free vector space of $i$-chains of $Y$ over $\ensuremath{\mathbb F}$. Namely the collection of all formal linear combinations with coefficients from $\ensuremath{\mathbb F}$ of oriented
$i$-faces of $Y$. For $0\le j\le d$, define the map $\Phi = \Phi_{v\rightarrow u}$ from $C_j(K_n^d)$ to $C_j(K_n^d)$ as the linear extension of
\[
\Phi(\sigma)=
\left\{\begin{matrix}
0& u,v\in\sigma\\
\sigma\setminus\{v\}\cup\{u\}& u\notin\sigma,\;v\in\sigma\\
\sigma& v\notin\sigma
\end{matrix}\right.
\]
In the second line we need to specify an orientation of $\Phi(\sigma)$. The orientation of $\sigma$ is specified by an ordered list of its vertices. The orientation of $\Phi(\sigma)$ is determined by replacing $v$ by $u$ in that list. Note that this is a proper definition of $\Phi(\sigma)$ which does not depend on the choice of the permutation used to specify $\sigma$'s orientation.
\ignore{
The map $\Phi = \Phi_{v\rightarrow u}$ has a concrete geometrical meaning.
Since, by definition, it applies to all the dimensions simultaneously in a coherent manner (a face $\tau$
of $\sigma$ is mapped to a face $\Phi(\tau)$ of $\Phi(\sigma)$), $\Phi$ is in fact a map, or, rather, a transformation,
of one weighted oriented $d$-complex into another. If the original $d$-complex $X$ is not supported on $v$, it is left
as is. Otherwise, $\Phi(X)$ is obtained from $X$ by gluing $v$ to $u$, followed up by "cleaning out", which
includes the removal of thus created defective faces (self-loops in
case 1 of the definition of $\Phi$), and addition (or subtraction, depending on the orientation) of the weights on thus created parallel faces.
As one would expect given the geometrical interpretation,}
The main point is that $\Phi$ maps $d$-cycles to $d$-cycles, i.e., $\partial Z = 0$
implies that $\partial \Phi(Z) = 0$. In fact, as we now show, something stronger is true and $\Phi$ is a {\em chain map}. In words, $\Phi$ commutes with the boundary
operator $\partial$, i.e., $\Phi\partial=\partial\Phi$.
By linearity, it suffices to verify it for $d$-simplices. Let $\sigma=(\sigma_0,...,\sigma_d)$ be an oriented $d$-simplex. Recall that $\partial\sigma\;=\; \sum_{i=0}^{d}(-1)^i \sigma^i$,
where $\sigma^i$ is the oriented $(d-1)$-face of $\sigma$ obtained by omitting the $i$-th entry from $\sigma$.
\begin{enumerate}
\item We start with the case where both $u,v\in\sigma$, say $u=\sigma_j,\;v=\sigma_k$. We need to show that $\Phi(\partial\sigma)=0$. Clearly, $\Phi(\sigma^i) = 0$ for every $i \neq j,k$, and thus
$$
\Phi(\partial\sigma) \;=\; \sum_{i=0}^{d}(-1)^i\Phi(\sigma^i) \;=\;
(-1)^j \Phi(\sigma^j)\,+\,(-1)^k\Phi(\sigma^k)~.
$$
However, by definition, $\Phi(\sigma^j)$ and $\Phi(\sigma^k)$ are both
the same $(d-1)$-simplex obtained from $\sigma$ by removing $v$. Moreover, it is easily
verified that $~{\rm sign}(\Phi(\sigma^j)) = (-1)^{k-j-1} {\rm sign}(\Phi(\sigma^k))$. Therefore,
\[
(-1)^j \Phi(\sigma^j)\,+\,(-1)^k\Phi(\sigma^k) \;=\; (-1)^j (-1)^{k-j-1} \Phi(\sigma^k) \,+\,(-1)^k\Phi(\sigma^k)
\;=\; 0~.
\]
\item\label{case_2} If $u\notin\sigma, v\in\sigma$, then $\Phi(\sigma)^i = \Phi(\sigma^i)$ for every $0\le i\le d$,
and thus
\[
\Phi(\partial\sigma) \;=\; \Phi\left( \sum_{i=0}^{d}(-1)^i \sigma^i \right) \;=\; \sum_{i=0}^{d}(-1)^i \Phi(\sigma^i) \;=\;
\sum_{i=0}^{d}(-1)^i \Phi(\sigma)^i \;=\; \partial\Phi(\sigma)~.
\]
\item Finally, if $v\notin\sigma$, then $\Phi(\sigma) = \sigma$. Since $v\not\in\sigma^i$, $\Phi(\sigma)^i = \Phi(\sigma^i)=\sigma^i$ for every $0\le i\le d$,
and the argument of case~\ref{case_2} applies.
\end{enumerate}
Let us consider next the restriction of $\Phi$ to ${Z_d(Y)}$, the vector space of $d$-cycles of $Y$. This restriction is a linear map that we call $\phi$. Clearly $~{\rm dim}({\rm ker}\phi) + {\rm dim}({\rm Im}\phi) = {\rm dim}(Z_d(Y))\,$ and the proof of the lemma will be completed by showing that
\[
{\rm ker} \phi ~\oplus~ {\rm Im} \phi ~\subseteq~ Z_d(S_{uv}(Y))\,.
\]
\begin{enumerate}
\item ${\rm Im} \phi$:~ For every $\sigma\in Y$, either $\Phi(\sigma)=0$ or it is in $S_{uv}(Y)$. But $\phi$ maps cycles to cycles, so $~ {\rm Im} \phi \;\subseteq\; Z_d(S_{uv}(Y))\,$.
\item ${\rm ker}\phi$:~ Let $Z$ be a $d$-cycle of the form $Z=\sum_{\sigma\in T}\alpha_{\sigma}\cdot\sigma$ with $T\subseteq Y$, where $\alpha_{\sigma}\neq 0$ for $\sigma\in T$ and $\phi(Z)=0$. We need to show that $\sigma\in S_{uv}(Y)$ for every $\sigma\in T$. This is clearly the case if $v\notin\sigma$ or if $u \in\sigma$. We only need to show that $\sigma'=\sigma\setminus\{v\}\cup\{u\}\in T$ when $v\in\sigma, u\notin\sigma$. But the coefficient of $\sigma'$ in the expansion of $\phi(Z)$ is $\alpha_{\sigma}\pm\alpha_{\sigma'}$ which is zero, since $\phi(Z)=0$. Consequently $\alpha_{\sigma'}\neq 0$ and $\sigma'\in T$, as claimed.
\item Finally, since $\Phi^2=\Phi$,~ ${\rm ker}\phi\,\cap\, {\rm Im}\phi \;=\;\{0\}$, and therefore the sum is direct.
\end{enumerate}
This concludes the proof of Theorem~\ref{th:dual_max}. \end{proof}
Lov\'{a}sz's version of the Kruskal-Katona Theorem yields the following
appealing corollary:
\begin{corollary}
\label{cor::lovasz}
Let $|Y| = {x \choose d+1}$, where $x \geq d+1\,$ is real. Then, ${\rm rank} (Y) \ge {{x-1} \choose d}$.
For $x \in \ensuremath{\mathbb N}$, equality is attained if and only if $Y$ is the $d$-skeleton of an $(x-1)$-dimensional simplex.
\end{corollary}
Another simple consequence of Theorem~\ref{th:dual_max} is an upper bound on $|Y|$ in terms of its rank:
\begin{theorem}
\label{th:dual_dual_max}
The largest size of a $d$-complex with ${\rm rank}(Y) = r$ is
\[
m(d,r) \;=\; \sum_{i=s}^{d} {{r_i +1}\choose {i+1}}\;
\text{~~where~~}
r \;=\; \sum_{i=s}^{d} {{r_i}\choose {i}}
\]
in cascade form. The bound is attained e.g., at the $d$-complex $K(d,r)$ whose $d$-faces are
the first $m(d,r)$ $(d+1)$-tuples over $\ensuremath{\mathbb N}$ in the co-lexicographic order.
\end{theorem}
This is a good place to note that $K(d,r)$ is a subcomplex of $K_n^d$, provided that
${\rm rank}(K_n^d) = {{n-1}\choose {d}} \geq r$. This follows from the fact that the size of the underlying set of $K(d,r)$ is $r_d + 1$ if $s=d$, and $r_d + 2$ otherwise, where $r_d$ is the parameter from the cascade form of $r$. Observe also that $K(d,r)$ is necessarily closed, since otherwise it could be augmented without increasing its rank, contrary to its optimality.
An alternative description of the $d$-faces of $K(d,r)$ is the largest prefix of $(d+1)$-tuples in the co-lexicographic order in which the number $1$ is contained in $r$ tuples.
Keeping in mind that a $(d,k)$-hypercut in $K_n^d$ is a complement of a closed set of rank ${{n-1}\choose{d}} - k$, Theorem~\ref{th:dual_dual_max} can be rephrased as a lower bound on the size of a $(d,k)$-hypercut.
\begin{theorem}
\label{th:min-r-cut}
Let $r= {{n-1}\choose d} - k$. Then, the complement of $K(d,r)$ in $K_n^d$ is a $(d,k)$-hypercut of minimal size.
\end{theorem}
We turn to derive concrete bounds on the size of the extremal $(d,k)$-hypercut.
Since $K(d,r)$ is a prefix in the co-lexicographic order, its complement $H(d,k)$ is a suffix in this order. But a suffix in the co-lexicographic order is a prefix in the lexicographic order on a reversed alphabet. Consequently, by renaming the vertex set, $H(d,k)$ is the smallest prefix of the lexicographic order on ${[n] \choose {d+1}}$ in which the number $n$ is contained in $k$ tuples.
Equivalently, let $\tau_1,...,\tau_k$ be the prefix of length $k$ of the lexicographic order on ${[n-1]\choose d}$. For every $1\le i \le k$, let $H_i$ be the $d$-hypercut consisting of the $d$-faces $\sigma \supset \tau_i$. Then,
$H(d,k) = \bigcup_{i=1}^{k} H_i$.
\begin{claim}
$\mbox{}~~~ k(n-d) \;\geq \;|H(d,k)| \;\geq\; {1\over {d+1}} \cdot k(n-d) \,.$
\end{claim}
\begin{proof}
On one hand,
\[
|H(d,k)|~=~ \left|\bigcup_{i=1}^{k} H_i\right| ~\leq~ k\cdot |H_i| ~=~ k(n-d).
\]
On the other hand, any $d$-simplex in $H(d,k)$
may belong to at most $d+1$ different $H_i$'s, hence~~~
$|H(d,k)| \geq {1\over {d+1}} \cdot k(n-d)$.
\end{proof}
\end{comment}
\section{Open Problems}
\label{section:open}
\begin{itemize}
\item
There are several problems that we solved here for $2$-dimensional complexes. It is clear that some completely new ideas will be required in order to answer these questions in higher dimensions. In particular it would be interesting to extend the construction based on arithmetic triples for $d>2$.
\item
An interesting aspect of the present work is that the behavior over $\ensuremath{\mathbb F}_2$ and over $\ensuremath{\mathbb Q}$ differs, sometimes substantially. It would be of interest to investigate the situation over other coefficient rings.
\item
How large can an acyclic closed set over $\ensuremath{\mathbb F}_2$ be?
Theorem \ref{thm:cls_acyc} gives a bound, but we do not know the exact answer yet.
\item
We still do not even know how large a $d$-cycle can be. In particular, for which integers $n, d$ and a field $\ensuremath{\mathbb F}$ does there exist a set of ${n-1 \choose d}+1$ $d$-faces on $n$ vertices such that removing any face yields a $d$-hypertree over $\ensuremath{\mathbb F}$?
\item
Many basic (approximate) enumeration problems remain wide open.
How many $n$-vertex $d$-hypertrees are there? What about $d$-collapsible complexes? A fundamental work of Kalai~\cite{kalai} provides some estimates for the former problem, but these bounds are not sharp. In one dimension there are exactly $\frac{(n-1)!}{2}$ inclusion-minimal $n$-vertex cycles. We know very little about the higher-dimensional counterparts of this fact.
\end{itemize}
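The one-dimensional count quoted in the last item, $\frac{(n-1)!}{2}$ inclusion-minimal $n$-vertex cycles, is simply the number of Hamiltonian cycles of $K_n$. A brute-force Python check (illustrative only, not part of the text) confirms it for small $n$:

```python
from itertools import permutations

def minimal_cycles(n):
    """Count inclusion-minimal cycles on the full vertex set {0,...,n-1}:
    these are exactly the Hamiltonian cycles of K_n.  A cycle is recorded
    by its edge set, which identifies it up to rotation and reflection."""
    seen = set()
    for p in permutations(range(1, n)):
        cyc = (0,) + p
        seen.add(frozenset(frozenset((cyc[i], cyc[(i + 1) % n]))
                           for i in range(n)))
    return len(seen)
```

The values $1, 3, 12, 60$ for $n = 3, 4, 5, 6$ agree with $\frac{(n-1)!}{2}$.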
% Source: https://arxiv.org/abs/1708.05041
\title{Total Forcing and Zero Forcing in Claw-Free Cubic Graphs}
\begin{abstract}
A dynamic coloring of the vertices of a graph $G$ starts with an initial subset $S$ of colored vertices, with all remaining vertices being non-colored. At each discrete time interval, a colored vertex with exactly one non-colored neighbor forces this non-colored neighbor to be colored. The initial set $S$ is called a forcing set (zero forcing set) of $G$ if, by iteratively applying the forcing process, every vertex in $G$ becomes colored. If the initial set $S$ has the added property that it induces a subgraph of $G$ without isolated vertices, then $S$ is called a total forcing set in $G$. The total forcing number of $G$, denoted $F_t(G)$, is the minimum cardinality of a total forcing set in $G$. We prove that if $G$ is a connected cubic graph of order~$n$ that has a spanning $2$-factor consisting of triangles, then $F_t(G) \le \frac{1}{2}n$. More generally, we prove that if $G$ is a connected, claw-free, cubic graph of order~$n \ge 6$, then $F_t(G) \le \frac{1}{2}n$, where a claw-free graph is a graph that does not contain $K_{1,3}$ as an induced subgraph. The graphs achieving equality in these bounds are characterized.
\end{abstract}
\section{Introduction}
A dynamic coloring of the vertices in a graph is a coloring of the vertex set which may change, or propagate, throughout the vertices during discrete time intervals. Of the dynamic colorings, the notion of \emph{forcing sets} (\emph{zero forcing sets}), and the associated graph invariant known as the \emph{forcing number} (\emph{zero forcing number}), are arguably the most prominent, see for example \cite{AIM-Workshop, Barioli13, zf_np, DaHe17+, Davila Kenter, girthTheorem, Genter1, Genter2, Edholm, Hogben16, LuTang16, zf_np2}.
In this paper, we continue the study of total forcing sets in graphs, where a \emph{total forcing set} is a forcing set that induces a subgraph without isolated vertices.
More formally, let $G$ be a graph with vertex set $V = V(G)$ and edge set $E = E(G)$. The \emph{forcing process} is defined in~\cite{DaHe17a,DaHe17b} as follows: Let $S \subseteq V$ be a set of initially ``colored'' vertices; all other vertices are said to be ``non-colored''. A vertex contained in $S$ is said to be $S$-colored, while a vertex not in $S$ is said to be $S$-uncolored. At each time step, if a colored vertex has exactly one non-colored neighbor, then this colored vertex \emph{forces} its non-colored neighbor to become colored; such a colored vertex is called a \emph{forcing vertex}. We say that $S$ is a \emph{forcing set} if, by iteratively applying the forcing process, all of $V$ becomes colored. In addition, if $S$ is a forcing set in $G$ and $v$ is an $S$-colored vertex that forces a new vertex to be colored, then $v$ is an \emph{$S$-forcing vertex}. The cardinality of a minimum forcing set in $G$ is the \emph{forcing number} of $G$, denoted $F(G)$.
If $S$ is a forcing set in a graph $G$ and the subgraph $G[S]$ induced by $S$ contains no isolated vertex, then $S$ is a \emph{total forcing set}, abbreviated as a TF-\emph{set} of $G$. The \emph{total forcing number} of $G$, written $F_t(G)$, is the cardinality of a minimum TF-set in $G$. The concept of a total forcing set was first introduced and studied by Davila in~\cite{Davila} as a strengthening of the concept of zero forcing originally introduced by the AIM-minimum rank group in~\cite{AIM-Workshop}. In~\cite{DaHe17a}, the authors show that if $G$ is a connected graph of order~$n \ge 3$ with maximum degree~$\Delta$, then $F_t(G) \le ( \frac{\Delta}{\Delta +1} ) n$. If we restrict $G$ to be a tree, then this bound can be improved to $F_t(G) \le \frac{1}{\Delta}((\Delta - 1)n + 1)$, as shown in~\cite{DaHe17b}.
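The forcing process described above is straightforward to simulate. The following Python sketch (an illustrative implementation of the definitions, not taken from the cited references) tests whether an initial set is a forcing set, and whether it is additionally a total forcing set:

```python
def is_forcing_set(adj, S):
    """Simulate the forcing process on a graph given as an adjacency
    dict {vertex: set of neighbors}.  Return True if iterating the
    color-change rule from S eventually colors every vertex."""
    colored = set(S)
    progress = True
    while progress:
        progress = False
        for v in list(colored):
            uncolored = adj[v] - colored
            if len(uncolored) == 1:  # v forces its unique non-colored neighbor
                colored |= uncolored
                progress = True
    return colored == set(adj)

def is_total_forcing_set(adj, S):
    """A TF-set is a forcing set whose induced subgraph has no isolated vertex."""
    return all(adj[v] & set(S) for v in S) and is_forcing_set(adj, S)
```

On the path $P_4$ with vertices $0,1,2,3$, for instance, the endpoint $\{0\}$ alone is a forcing set but not a TF-set, while $\{0,1\}$ is a TF-set, so $F(P_4)=1$ and $F_t(P_4)=2$.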
A graph is \emph{$F$-free} if it does not contain $F$ as an induced subgraph. In particular, if $F = K_{1,3}$, then the graph is \emph{claw-free}. An excellent survey of claw-free graphs has been written by Flandrin, Faudree, and Ryj\'{a}\v{c}ek~\cite{ffr}. Claw-free graphs have recently attracted considerable interest, due in part to the excellent series of papers by Chudnovsky and Seymour in the \textit{Journal of Combinatorial Theory} on this topic (see, for example, their paper~\cite{ChSeV}). Domination and total domination in claw-free, cubic graphs have been extensively studied (see, for example,~\cite{DeHaHe17,fh04,FaHe08b,HeLo12,HeMa16,HeYe_book,Li11,SoHe10} and elsewhere).
In this paper, we study total forcing sets in cubic, claw-free graphs.
\subsection{Notation}
For notation and graph theory terminology, we in general follow~\cite{HeYe_book}. Specifically, let $G$ be a graph with vertex set $V(G)$ and edge set $E(G)$, and of order~$n(G) = |V(G)|$ and size $m(G) = |E(G)|$. A \emph{neighbor} of a vertex $v$ in $G$ is a vertex $u$ that is adjacent to $v$, that is, $uv\in E(G)$. The \emph{open neighborhood} of a vertex $v$ in $G$ is the set of neighbors of $v$, denoted $N_G(v)$.
The \emph{neighborhood} of a set $S$ of vertices of $G$ is the set $N_G(S) = \cup_{v \in S} N_G(v)$. We denote the \emph{degree} of $v$ in $G$ by $d_G(v)$. A \emph{cubic graph} (also called a $3$-\emph{regular graph}) is a graph in which every vertex has degree~$3$, while a \emph{cubic multigraph} is a multigraph in which every vertex has degree~$3$.
For a set of vertices $S \subseteq V(G)$, the subgraph induced by $S$ is denoted by $G[S]$. The subgraph obtained from $G$ by deleting all vertices in $S$ and all edges incident with vertices in $S$ is denoted by $G - S$.
We denote the path and cycle on $n$ vertices by $P_n$ and $C_n$, respectively. We denote by $K_n$ the \emph{complete graph} on $n$ vertices. A complete graph $K_3$ is called a \emph{triangle}. The complete graph on four vertices minus one edge is called a \emph{diamond}. A vertex is \emph{covered} by a cycle if it belongs to that cycle. A \emph{cycle cover} of a multigraph is a collection of vertex-disjoint cycles in the multigraph that cover all the vertices. A \emph{$2$-factor} of a graph $G$ is a spanning $2$-regular subgraph of $G$; that is, a $2$-factor of $G$ is a collection of cycles that contain all vertices of $G$. We use the standard notation $[k] = \{1,2,\ldots,k\}$.
\subsection{Diamond-Necklace}
\label{S:neck}
In this paper, we shall adopt the terminology and notation of a diamond-necklace, coined by Henning and L\"{o}wenstein in~\cite{HeLo12}. For completeness, we repeat their definition here. For $k \ge 2$ an integer, let $N_k$ be the connected cubic graph constructed as follows. Take $k$ disjoint copies $D_1, D_2, \ldots, D_{k}$ of a diamond, where $V(D_i) = \{a_i,b_i,c_i,d_i\}$ and where $a_ib_i$ is the missing edge in $D_i$. Let $N_k$ be obtained from the disjoint union of these $k$ diamonds by adding the edges $\{a_ib_{i+1} \mid i \in [k-1] \}$ and adding the edge $a_{k}b_1$. We call $N_k$ a \emph{diamond-necklace with $k$ diamonds}. Let ${\cal N}_{\rm cubic} = \{ N_k \mid k \ge 2\}$. A diamond-necklace, $N_6$, with six diamonds is illustrated in Figure~\ref{f:N6}, where the darkened vertices form a TF-set in the graph.
\begin{figure}[htb]
\tikzstyle{every node}=[circle, draw,fill=black!0, inner sep=0pt,minimum width=.175cm]
\begin{center}
\begin{tikzpicture}[thick,scale=.8]
\draw(0,0) {
+(0.00,3.33) -- +(0.48,3.06)
+(0.00,3.33) -- +(0.01,2.56)
+(0.01,2.56) -- +(0.48,3.06)
+(0.48,3.06) -- +(0.67,3.71)
+(0.67,3.71) -- +(0.00,3.33)
+(0.01,2.56) -- +(0.01,1.88)
+(0.01,1.88) -- +(0.00,1.11)
+(0.00,1.11) -- +(0.48,1.39)
+(0.01,1.88) -- +(0.48,1.39)
+(0.48,1.39) -- +(0.67,0.73)
+(0.67,0.73) -- +(0.00,1.11)
+(0.67,0.73) -- +(1.26,0.40)
+(1.26,0.40) -- +(1.92,0.56)
+(1.92,0.56) -- +(1.92,0.00)
+(1.92,0.00) -- +(1.26,0.40)
+(1.92,0.56) -- +(2.59,0.40)
+(2.59,0.40) -- +(1.92,0.00)
+(2.59,4.05) -- +(1.92,3.89)
+(1.92,3.89) -- +(1.92,4.44)
+(1.92,4.44) -- +(2.59,4.05)
+(1.92,3.89) -- +(1.26,4.05)
+(1.26,4.05) -- +(1.92,4.44)
+(1.26,4.05) -- +(0.67,3.71)
+(2.59,4.05) -- +(3.17,3.71)
+(3.17,3.71) -- +(3.37,3.06)
+(3.37,3.06) -- +(3.85,3.33)
+(3.85,3.33) -- +(3.17,3.71)
+(3.37,3.06) -- +(3.84,2.56)
+(3.85,3.33) -- +(3.84,2.56)
+(3.84,2.56) -- +(3.84,1.88)
+(3.84,1.88) -- +(3.37,1.39)
+(3.37,1.39) -- +(3.85,1.11)
+(3.85,1.11) -- +(3.84,1.88)
+(3.37,1.39) -- +(3.17,0.73)
+(3.17,0.73) -- +(3.85,1.11)
+(3.17,0.73) -- +(2.59,0.40)
+(0.48,1.39) node[fill=black]{}
+(0.00,1.11) node{}
+(0.01,1.88) node[fill=black]{}
+(0.48,3.06) node{}
+(0.67,0.73) node{}
+(0.01,2.56) node[fill=black]{}
+(0.00,3.33) node[fill=black]{}
+(0.67,3.71) node{}
+(1.26,4.05) node{}
+(1.92,3.89) node{}
+(1.92,4.44) node[fill=black]{}
+(2.59,4.05) node[fill=black]{}
+(2.59,0.40) node{}
+(1.26,0.40) node[fill=black]{}
+(1.92,0.00) node[fill=black]{}
+(1.92,0.56) node{}
+(3.17,3.71) node{}
+(3.37,3.06) node{}
+(3.85,3.33) node[fill=black]{}
+(3.84,2.56) node[fill=black]{}
+(3.84,1.88) node{}
+(3.17,0.73) node[fill=black]{}
+(3.85,1.11) node[fill=black]{}
+(3.37,1.39) node{}
};
\end{tikzpicture}
\end{center}
\vskip -0.6 cm \caption{A diamond-necklace $N_6$.} \label{f:N6}
\end{figure}
\section{Main Results}
In this paper, we prove the following two results. Our first result shows that the total forcing number of a connected cubic graph that has a spanning $2$-factor consisting of triangles is at most one-half its order. A proof of Theorem~\ref{thm1} is presented in Section~\ref{S:proof1}.
\begin{thm}
\label{thm1}
If $G$ is a connected cubic graph of order~$n$ that has a spanning $2$-factor consisting of triangles, then $F_t(G) \le \frac{n}{2}$, with equality if and only if $G$ is the prism $C_3 \, \Box \, K_2$.
\end{thm}
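The equality case of Theorem~\ref{thm1} can be checked directly on small graphs. The sketch below is illustrative only; it assumes the standard forcing rule (a colored vertex with exactly one uncolored neighbor colors that neighbor) and computes the total forcing number of the prism $C_3 \, \Box \, K_2$ by exhaustive search, confirming that it equals one-half the order.

```python
from itertools import combinations

def prism():
    # C_3 box K_2: two triangles joined by a perfect matching
    adj = {}
    def add(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for j in (0, 1):
        for i in range(3):
            add((i, j), ((i + 1) % 3, j))
    for i in range(3):
        add((i, 0), (i, 1))
    return adj

def closure(adj, start):
    # repeatedly apply the forcing rule: a colored vertex with exactly one
    # uncolored neighbor forces (colors) that neighbor
    colored, changed = set(start), True
    while changed:
        changed = False
        for v in list(colored):
            uncolored = [u for u in adj[v] if u not in colored]
            if len(uncolored) == 1:
                colored.add(uncolored[0])
                changed = True
    return colored

def total_forcing_number(adj):
    V = sorted(adj)
    for r in range(1, len(V) + 1):
        for S in combinations(V, r):
            if any(not (adj[v] & set(S)) for v in S):
                continue  # G[S] has an isolated vertex: not a TF-set
            if closure(adj, S) == set(V):
                return r

G = prism()
assert total_forcing_number(G) == len(G) // 2  # F_t(C_3 box K_2) = 3 = n/2
```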
More generally, we show that if we exclude the exceptional graph $K_4$, then the total forcing number of a connected, claw-free, cubic graph is always at most one-half its order, and we characterize the extremal graphs achieving equality in this bound. A proof of Theorem~\ref{thm2} is presented in Section~\ref{S:proof2}.
\begin{thm}
\label{thm2}
If $G \ne K_4$ is a connected, claw-free, cubic graph of order $n$, then $F_t(G) \le \frac{1}{2}n$ with equality if and only if $G \in {\cal N}_{\rm cubic}$ or $G$ is the prism $C_3 \, \Box \, K_2$.
\end{thm}
As a consequence of Theorem~\ref{thm2}, we have the following upper bound on the forcing number of a connected, claw-free, cubic graph. A proof of Theorem~\ref{thm3} is presented in Section~\ref{S:proof3}.
\begin{thm}
\label{thm3}
If $G \ne K_4$ is a connected, claw-free, cubic graph of order $n$, then $F(G) \le \frac{1}{2}n$ with equality if and only if $G$ is the diamond-necklace $N_2$ or the prism $C_3 \, \Box \, K_2$.
\end{thm}
As an immediate consequence of Theorem~\ref{thm3}, the forcing number of a connected, claw-free, cubic graph of order at least~$10$ is strictly less than one-half its order.
\begin{cor}
\label{cor1}
If $G$ is a connected, claw-free, cubic graph of order~$n \ge 10$, then $F(G) < \frac{1}{2}n$.
\end{cor}
We close with the following relationship between the total forcing and forcing numbers of a connected, claw-free, cubic graph.
\begin{thm}
\label{thm4}
If $G$ is a connected, claw-free, cubic graph of order $n$, then
\[
\frac{F_t(G)}{F(G)} \le 2,
\]
and this bound is asymptotically best possible.
\end{thm}
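For the family ${\cal N}_{\rm cubic}$, Lemma~\ref{lem1} below gives $F_t(N_k) = 2k$ and $F(N_k) = k+2$, so on diamond-necklaces the ratio in Theorem~\ref{thm4} equals $2k/(k+2)$, which tends to $2$ from below. The snippet below (illustrative; it simply evaluates this formula rather than recomputing the forcing numbers) shows the ratio approaching the bound.

```python
from fractions import Fraction

# Using F_t(N_k) = 2k and F(N_k) = k + 2 from Lemma lem1, the ratio
# F_t/F on diamond-necklaces increases toward, but never reaches, 2.
ratios = [Fraction(2 * k, k + 2) for k in (2, 10, 100, 1000)]
assert ratios[0] == 1                      # k = 2: ratio 4/4 = 1
assert all(r < 2 for r in ratios)          # always strictly below 2
assert ratios == sorted(ratios)            # increasing in k
assert float(ratios[-1]) > 1.99            # k = 1000: close to 2
```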
\section{Known Results and Preliminary Lemma}
We shall need the following result observed, for example, in~\cite{Davila,DaHe17a} and elsewhere.
\begin{ob}
\label{ob1}
If $G$ is an isolate-free graph, then $F(G) \le F_t(G) \le 2F(G)$.
\end{ob}
The following property of connected, claw-free, cubic graphs is established in~\cite{HeLo12}.
\begin{lem}{\rm (\cite{HeLo12})}
If $G \ne K_4$ is a connected, claw-free, cubic graph of order $n$, then the vertex set $V(G)$ can be uniquely partitioned into sets each of which induces a triangle or a diamond in $G$.
\label{l:known}
\end{lem}
By Lemma~\ref{l:known}, the vertex set $V(G)$ of a connected, claw-free, cubic graph $G \ne K_4$ can be uniquely partitioned into sets each of which induces a triangle or a diamond in $G$. Following the notation introduced in~\cite{HeLo12}, we refer to such a partition as a \emph{triangle}-\emph{diamond partition} of $G$, abbreviated $\Delta$-D-partition. We call every triangle and diamond induced by a set in our $\Delta$-D-partition a \emph{unit} of the partition. A unit that is a triangle is called a \emph{triangle-unit} and a unit that is a diamond is called a \emph{diamond-unit}. (We note that a triangle-unit is a triangle that does not belong to a diamond.) We say that two units in the $\Delta$-D-partition are \emph{adjacent} if there is an edge joining a vertex in one unit to a vertex in the other unit.
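In practice, the $\Delta$-D-partition is easy to compute: an edge whose endpoints have two common neighbors spans the central edge of a diamond-unit (the four vertices induce a diamond, and $G \ne K_4$ rules out a $K_4$), and the remaining vertices group into triangle-units. The sketch below is illustrative and assumes the input is a connected, claw-free, cubic graph other than $K_4$; it recovers the partition of $N_2$ and of the prism.

```python
# Illustrative sketch: recover the triangle-diamond partition of a
# connected, claw-free, cubic graph G != K_4 (an assumption, not checked).
def delta_d_partition(adj):
    units, in_diamond = set(), set()
    for u in adj:
        for v in adj[u]:
            common = adj[u] & adj[v]
            if len(common) == 2:
                # u, v and their two common neighbors induce a diamond-unit
                D = frozenset({u, v} | common)
                units.add(D)
                in_diamond |= D
    rest = set(adj) - in_diamond
    for v in rest:
        nbrs = [u for u in adj[v] if u in rest]
        # claw-freeness guarantees an edge inside the neighborhood of v
        units.add(next(frozenset({v, x, y})
                       for x in nbrs for y in nbrs if y in adj[x]))
    return units

def graph(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

# N_2: two diamonds joined by the edges a_1 b_2 and a_2 b_1
n2 = graph([('a1','c1'), ('a1','d1'), ('b1','c1'), ('b1','d1'), ('c1','d1'),
            ('a2','c2'), ('a2','d2'), ('b2','c2'), ('b2','d2'), ('c2','d2'),
            ('a1','b2'), ('a2','b1')])
diamonds = delta_d_partition(n2)
assert len(diamonds) == 2 and all(len(U) == 4 for U in diamonds)

# the prism: two triangle-units joined by a perfect matching
pr = graph([('u1','u2'), ('u2','u3'), ('u3','u1'),
            ('v1','v2'), ('v2','v3'), ('v3','v1'),
            ('u1','v1'), ('u2','v2'), ('u3','v3')])
triangles = delta_d_partition(pr)
assert len(triangles) == 2 and all(len(U) == 3 for U in triangles)
```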
Before presenting proofs of our main results, we prove first that every graph in the family ${\cal N}_{\rm cubic}$ has total forcing number exactly one-half the order of the graph and has forcing number exactly one-fourth the order plus two. We remark that the following lemma may be of interest to those who study forcing as it relates to the linear algebraic minimum rank problem.
\begin{lem}
\label{lem1}
If $G \in {\cal N}_{\rm cubic}$ has order~$n$, then $F_t(G) = \frac{1}{2}n$ and $F(G) = \frac{1}{4}n + 2$.
\end{lem}
\noindent\textbf{Proof. } Let $G \in {\cal N}_{\rm cubic}$ have order~$n$. Thus, $G$ is a diamond-necklace with $k$ diamonds for some $k \ge 2$, where $n = 4k$. We follow the notation introduced in Section~\ref{S:neck}. Hence, $G = N_k$ consists of $k$ vertex disjoint diamonds $D_1, D_2, \ldots, D_{k}$, where $V(D_i) = \{a_i,b_i,c_i,d_i\}$ and where $a_ib_i$ is the missing edge in $D_i$. Further, $G$ is obtained from these $k$ diamonds by adding the edges $\{a_ib_{i+1} \mid i \in [k-1] \}$ and adding the edge $a_{k}b_1$. Let $A = \{a_1,a_2,\ldots,a_k\}$ and $C = \{c_1,c_2,\ldots,c_k\}$.
(a) We first prove that $F_t(G) = 2k$. Let $S = (A \setminus \{a_1\}) \cup C \cup \{b_1\}$. (In the special case when $k = 6$, the set $S$ is illustrated by the darkened vertices in Figure~\ref{f:N6}, where here $G = N_6$ and $n = 24$.) The set $S$ is a TF-set of $G$ since the sequence $x_1,x_2,\ldots,x_{2k}$ of played vertices in the forcing process results in all vertices of $G$ colored, where $x_i$ denotes the forcing vertex played in the $i$th step of the process and where $x_1 = b_1$, $x_2 = d_1$, and $x_{2i+1} = a_{i}$ and $x_{2i+2} = b_{i+1}$ for $i \in [k-1]$; that is, the sequence of played vertices is given by $b_1, d_1, a_1, b_2, a_2, b_3, \ldots, a_{k-1}, b_k$. Thus, $F_t(G) \le |S| = 2k$.
We show next that $F_t(G) \ge 2k$. Let $S'$ be an arbitrary TF-set of $G$. We show that every diamond in the diamond-necklace contains at least two vertices of $S'$. If neither $c_i$ nor $d_i$ belong to the set $S'$ for some $i \in [k]$, then neither $c_i$ nor $d_i$ can be colored in the forcing process. Hence, at least one of $c_i$ and $d_i$ belongs to $S'$. Renaming vertices if necessary, we may assume that $c_i \in S'$. Since $G[S']$ is isolate-free, at least one neighbor of $c_i$ belongs to $S'$, implying that $|S' \cap V(D_i)| \ge 2$. Thus,
\[
|S'| = \sum_{i=1}^k |S' \cap V(D_i)| \ge 2k.
\]
\indent
Since $S'$ is an arbitrary TF-set of $G$, this implies that $F_t(G) \ge 2k$. As observed earlier, $F_t(G) \le 2k$. Consequently, $F_t(G) = 2k = \frac{1}{2}n$.
(b) We show next that $F(G) = k + 2$. In this case, we consider the set $S = C \cup \{a_1,d_1\}$. The set $S$ is a forcing set of $G$ since the sequence $x_1,x_2,\ldots,x_{3k-2}$ of played vertices in the forcing process results in all vertices of $G$ colored, where $x_i$ denotes the forcing vertex played in the $i$th step of the process and where $x_{3i-2} = a_{i}$, $x_{3i-1} = b_{i+1}$, and $x_{3i} = c_{i+1}$ for $i \in [k-1]$, and where $x_{3k-2} = c_1$; that is, the sequence of played vertices is given by $a_1, b_2, c_2, a_2, b_3, c_3, \ldots, a_{k-1}, b_k, c_k, c_1$. Thus, $F(G) \le |S| = k + 2$.
We show next that $F(G) \ge k + 2$. Let $S'$ be an arbitrary forcing set of $G$. At least one of $c_i$ and $d_i$ belongs to $S'$ for every $i \in [k]$, and so every diamond in the diamond-necklace contains at least one vertex of $S'$. Renaming vertices if necessary, we may assume that $C \subseteq S'$. Since $S'$ is a forcing set, the first vertex $x_1$ played in the forcing process belongs to $S'$ and at least two neighbors of $x_1$ belong to $S'$. This implies that if $x_1 \in \{c_i,d_i\}$ for some $i \in [k]$, then $S'$ contains at least three vertices from the diamond $D_i$. Moreover, if $x_1 \in \{a_i,b_i\}$ for some $i \in [k]$, say $x_1 = b_i$ by symmetry, then $S'$ contains at least three vertices from the diamond $D_i$ or $S'$ contains at least two vertices from both diamonds $D_i$ and $D_{i+1}$ (where addition is taken modulo~$k$). Thus, one diamond in the diamond-necklace contains at least three vertices of $S'$ or two diamonds in the diamond-necklace both contain at least two vertices of $S'$, implying that $|S'| \ge k + 2$. Since $S'$ is an arbitrary forcing set of $G$, this implies that $F(G) \ge k+2$. As observed earlier, $F(G) \le k+2$. Consequently, $F(G) = k+2 = \frac{1}{4}n + 2$.~$\Box$
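Lemma~\ref{lem1} can also be confirmed by brute force for the first few necklaces. The sketch below (illustrative only; standard forcing rule, exhaustive search over vertex subsets in increasing size) verifies $F_t(N_k) = 2k$ and $F(N_k) = k+2$ for $k \in \{2,3\}$.

```python
from itertools import combinations

def diamond_necklace(k):
    # 0-indexed diamonds; (i, 'a') and (i, 'b') miss their joining edge
    adj = {}
    def add(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for i in range(k):
        a, b, c, d = (i, 'a'), (i, 'b'), (i, 'c'), (i, 'd')
        for u, v in [(a, c), (a, d), (b, c), (b, d), (c, d)]:
            add(u, v)
        add(a, ((i + 1) % k, 'b'))   # necklace edge a_i b_{i+1}
    return adj

def closure(adj, start):
    # repeatedly: a colored vertex with exactly one uncolored neighbor
    # forces that neighbor to become colored
    colored, changed = set(start), True
    while changed:
        changed = False
        for v in list(colored):
            unc = [u for u in adj[v] if u not in colored]
            if len(unc) == 1:
                colored.add(unc[0])
                changed = True
    return colored

def forcing_number(adj, total=False):
    V = sorted(adj)
    for r in range(1, len(V) + 1):
        for S in combinations(V, r):
            if total and any(not (adj[v] & set(S)) for v in S):
                continue  # G[S] would have an isolated vertex
            if closure(adj, S) == set(V):
                return r

for k in (2, 3):
    G, n = diamond_necklace(k), 4 * k
    assert forcing_number(G, total=True) == n // 2   # F_t(N_k) = 2k
    assert forcing_number(G) == n // 4 + 2           # F(N_k)  = k + 2
```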
\section{Proof of Main Results}
\subsection{Proof of Theorem~\ref{thm1}}
\label{S:proof1}
In this section, we present a proof of Theorem~\ref{thm1}. Recall its statement.
\noindent \textbf{Theorem~\ref{thm1}}. \emph{If $G$ is a connected cubic graph of order~$n$ that has a spanning $2$-factor consisting of triangles, then $F_t(G) \le \frac{n}{2}$, with equality if and only if $G$ is the prism $C_3 \, \Box \, K_2$.
}
\medskip
\noindent\textbf{Proof. } Let $G$ be a connected cubic graph of order~$n$ that has a spanning $2$-factor consisting of triangles. We note that $G$ is a claw-free, cubic graph in which every unit is a triangle-unit. In particular, $G$ contains no diamond as a subgraph. We define the contraction multigraph of $G$, denoted $M_G$, to be the multigraph whose vertices correspond to the triangle-units in $G$ and in which two vertices of $M_G$ are joined by as many edges as there are edges of $G$ joining the corresponding triangle-units. By construction, $M_G$ has no loops but possibly contains multiple edges. We note that the order of $M_G$ is precisely the number of triangle-units in $G$. Since $G$ contains at least two triangle-units, $M_G$ has order at least~$2$. Every vertex in $M_G$ has degree~$3$, and so $M_G$ is a cubic multigraph. For each vertex $v$ in $M_G$, let $T_v$ denote the triangle-unit in $G$ associated with the vertex $v$.
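Computationally, $M_G$ is obtained by counting, for each pair of triangle-units, the edges of $G$ between them. The sketch below (illustrative; the unit membership is supplied by hand rather than computed) builds $M_G$ for the prism $C_3 \, \Box \, K_2$, whose two triangle-units are joined by three edges, so that $M_G$ is the cubic multigraph on two vertices with a triple edge.

```python
from collections import Counter

def contraction_multigraph(adj, units):
    # units: list of disjoint vertex sets, each inducing a triangle-unit.
    # Returns edge multiplicities between units, keyed by unit index pairs.
    where = {v: i for i, U in enumerate(units) for v in U}
    mult, seen = Counter(), set()
    for u in adj:
        for v in adj[u]:
            if (v, u) in seen:
                continue          # count each undirected edge once
            seen.add((u, v))
            i, j = where[u], where[v]
            if i != j:
                mult[frozenset((i, j))] += 1
    return mult

# the prism C_3 box K_2: two triangle-units joined by a perfect matching
adj = {}
def add(u, v):
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)
for j in (0, 1):
    for i in range(3):
        add((i, j), ((i + 1) % 3, j))
for i in range(3):
    add((i, 0), (i, 1))
units = [{(i, 0) for i in range(3)}, {(i, 1) for i in range(3)}]
M = contraction_multigraph(adj, units)
assert M[frozenset((0, 1))] == 3   # M_G: two vertices, a triple edge
```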
Among all collections of vertex-disjoint cycles in $M_G$ (where we allow $2$-cycles), let ${\cal C}$ be chosen so that the following holds. \vspace{0.2cm}
\\
\indent (1) The number of vertices of $M_G$ covered by cycles that belong to ${\cal C}$ is maximized. \\
\indent (2) Subject to (1), the number of cycles in ${\cal C}$ is maximized.
\noindent
We now define a sequence of sets $S_1,S_2, \ldots$ as follows. Let $S_1$ be the set of all vertices covered by cycles in ${\cal C}$; that is,
\[
S_1 = \bigcup_{C \in {\cal C}} V(C).
\]
Since every vertex in the multigraph $M_G$ has degree~$3$, there is at least one cycle in $M_G$, implying that $S_1 \ne \emptyset$. Let $S_2$ be the set of all vertices in $M_G$ that do not belong to the set $S_1$, and are joined to $S_1$ by at least two edges (more precisely, by two or three edges). Let $S_3$ be the set of all vertices of $M_G$ not in $S_1 \cup S_2$ that are joined to $S_1 \cup S_2$ by at least two edges. More generally, if the set $S_i$ is defined for some $i \ge 1$ and
\[
S_{\le i} = \bigcup_{j=1}^i S_j,
\]
then let $S_{i+1}$ be the set of all vertices of $M_G$ not in $S_{\le i}$ that are joined to $S_{\le i}$ by at least two edges. Suppose that $V(M_G) \setminus S_{\le i} \ne \emptyset$ and $S_{i+1} = \emptyset$ for some $i \ge 0$; that is, suppose the set of vertices of $M_G$ that do not belong to the set $S_{\le i}$ is nonempty, but each such vertex is joined to $S_{\le i}$ by at most one edge. Let $M_{i+1}$ be the multigraph induced by the set of vertices of $M_G$ not in $S_{\le i}$. Since there is at most one edge of $M_G$ that joins each vertex in $M_{i+1}$ to $S_{\le i}$, every vertex in the multigraph $M_{i+1}$ has degree at least~$2$, implying that there is a cycle in $M_{i+1}$. Adding this cycle to the collection of cycles in ${\cal C}$ contradicts the maximality of ${\cal C}$ (see condition~(1) of our choice of ${\cal C}$). Hence, if $V(M_G) \setminus S_{\le i} \ne \emptyset$ for some $i \ge 0$, then $S_{i+1} \ne \emptyset$. Since the graph $G$ is finite, and so $M_G$ is also finite, we note that for some integer $k \ge 1$, the following holds: $V(M_G) = S_{\le k}$ and $S_k \ne \emptyset$.
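The layering $S_1, S_2, \ldots$ is a fixed-point computation: starting from the cycle-covered vertices, repeatedly absorb every uncovered vertex joined to the current set by at least two edges. The sketch below runs this absorption on a small hypothetical cubic graph (not from the paper) in which all three edges at one vertex $z$ are bridges, so every cycle collection misses $z$ and the absorption step is genuinely needed.

```python
# Illustrative sketch of the layering S_1, S_2, ...: mult[v][u] counts the
# parallel edges between v and u (all multiplicities are 1 in this example).
def absorption_layers(mult, S1):
    layers, covered = [set(S1)], set(S1)
    while True:
        nxt = {v for v in mult if v not in covered and
               sum(m for u, m in mult[v].items() if u in covered) >= 2}
        if not nxt:
            return layers   # for a maximal cycle collection, covered == V
        layers.append(nxt)
        covered |= nxt

def add(mult, u, v):
    mult.setdefault(u, {})[v] = mult.setdefault(u, {}).get(v, 0) + 1
    mult.setdefault(v, {})[u] = mult.setdefault(v, {}).get(u, 0) + 1

# Hypothetical cubic graph: three blocks, each a K_4 minus an edge bc with
# an extra vertex a joined to b and c, all attached to a central vertex z
# by bridges.  Each block is covered by the 5-cycle a-b-d-e-c-a, but z lies
# on no cycle, so the best cycle collection covers everything except z.
M = {}
for t in range(3):
    a, b, c, d, e = ((t, x) for x in 'abcde')
    for u, v in [(a, b), (a, c), (b, d), (b, e), (c, d), (c, e), (d, e)]:
        add(M, u, v)
    add(M, 'z', (t, 'a'))

S1 = set(M) - {'z'}
L = absorption_layers(M, S1)
assert L == [S1, {'z'}]   # z joined to S_1 by three edges: absorbed into S_2
```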
We now construct a TF-set of $G$ as follows. Let $C \colon v_1v_2\ldots v_\ell v_1$ be an arbitrary cycle in ${\cal C}$, where $\ell \ge 2$. Let $T_1, T_2, \ldots, T_\ell$ be the triangle-units in $G$ associated with the vertices $v_1, v_2, \ldots, v_\ell$, respectively. Further, let $V(T_i) = \{v_{i1},v_{i2},v_{i3}\}$, where $v_{i2}v_{i+1,1}$ is an edge joining the triangle-units $T_i$ and $T_{i+1}$ in $G$ for $i \in [\ell]$, where addition is taken modulo~$\ell$. Thus, $C' \colon v_{11}v_{12}v_{21}v_{22} \ldots v_{\ell 1}v_{\ell 2} v_{11}$ is a cycle in $G$ that gives rise to the cycle $C$ in $M_G$. We remark that the cycle $C'$ may not be the unique such cycle. However, we select one such cycle $C'$ and fix this cycle. Relative to our choice of the cycle $C'$, we call the vertex $v_{i3}$ the \emph{free vertex} of the triangle-unit $T_i$. Let $n_C$ be the number of vertices that belong to triangle-units of $G$ that are associated with the cycle $C$ of $M_G$, and so, $n_C = 3\ell$. Next we color a subset $S$ of the vertices that belong to triangle-units of $G$ associated with the cycle $C$ of $M_G$ as follows.
If $\ell \ge 2$ is even, then we color all vertices that belong to the triangle-units $T_{2i-1}$ where $i \in [\ell/2]$. Let $S_C$ denote the resulting set of colored vertices of $G$ associated with the cycle~$C$. (We illustrate the case when $\ell = 6$ in Figure~\ref{f:Ceven}, where the vertices in $S_C$ are darkened.) We note that exactly one-half the vertices in all triangle-units of $G$ associated with the cycle $C$ of $M_G$ are colored; that is, $|S_C| = n_C/2$.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[scale=.8,style=thick,x=1cm,y=1cm]
\def\vr{2.5pt}
\path (-0.1,0.5) coordinate (v1);
\path (-0.1,-0.2) coordinate (u1);
\path (0,0.5) coordinate (v11);
\path (1.5,0.5) coordinate (v12);
\path (0.75,1.5) coordinate (v13);
\path (0.75,2.1) coordinate (v1p);
\path (3,0.5) coordinate (v21);
\path (4.5,0.5) coordinate (v22);
\path (3.75,1.5) coordinate (v23);
\path (3.75,2.1) coordinate (v2p);
\path (6,0.5) coordinate (v31);
\path (7.5,0.5) coordinate (v32);
\path (6.75,1.5) coordinate (v33);
\path (6.75,2.1) coordinate (v3p);
\path (9,0.5) coordinate (v41);
\path (10.5,0.5) coordinate (v42);
\path (9.75,1.5) coordinate (v43);
\path (9.75,2.1) coordinate (v4p);
\path (12,0.5) coordinate (v51);
\path (13.5,0.5) coordinate (v52);
\path (12.75,1.5) coordinate (v53);
\path (12.75,2.1) coordinate (v5p);
\path (15,0.5) coordinate (v61);
\path (16.5,0.5) coordinate (v62);
\path (15.75,1.5) coordinate (v63);
\path (15.75,2.1) coordinate (v6p);
\path (16.6,0.5) coordinate (v6);
\path (16.6,-0.2) coordinate (u6);
\draw (v11) -- (v12);
\draw (v11) -- (v13);
\draw (v12) -- (v13);
\draw (v21) -- (v22);
\draw (v21) -- (v23);
\draw (v22) -- (v23);
\draw (v31) -- (v32);
\draw (v31) -- (v33);
\draw (v32) -- (v33);
\draw (v41) -- (v42);
\draw (v41) -- (v43);
\draw (v42) -- (v43);
\draw (v51) -- (v52);
\draw (v51) -- (v53);
\draw (v52) -- (v53);
\draw (v61) -- (v62);
\draw (v61) -- (v63);
\draw (v62) -- (v63);
\draw (v12) -- (v21);
\draw (v22) -- (v31);
\draw (v32) -- (v41);
\draw (v42) -- (v51);
\draw (v52) -- (v61);
\draw (v62) -- (v11);
\draw (v13) -- (v1p);
\draw (v23) -- (v2p);
\draw (v33) -- (v3p);
\draw (v43) -- (v4p);
\draw (v53) -- (v5p);
\draw (v63) -- (v6p);
\draw (u1) -- (u6);
\draw (v11) [fill=black] circle (\vr);
\draw (v12) [fill=black] circle (\vr);
\draw (v13) [fill=black] circle (\vr);
\draw (v21) [fill=white] circle (\vr);
\draw (v22) [fill=white] circle (\vr);
\draw (v23) [fill=white] circle (\vr);
\draw (v31) [fill=black] circle (\vr);
\draw (v32) [fill=black] circle (\vr);
\draw (v33) [fill=black] circle (\vr);
\draw (v41) [fill=white] circle (\vr);
\draw (v42) [fill=white] circle (\vr);
\draw (v43) [fill=white] circle (\vr);
\draw (v51) [fill=black] circle (\vr);
\draw (v52) [fill=black] circle (\vr);
\draw (v53) [fill=black] circle (\vr);
\draw (v61) [fill=white] circle (\vr);
\draw (v62) [fill=white] circle (\vr);
\draw (v63) [fill=white] circle (\vr);
\draw (v1) to[out=180,in=180, distance=1cm] (u1);
\draw (v6) to[out=0,in=0, distance=1cm] (u6);
\draw (0,0.25) node {{\small $v_{11}$}};
\draw (1.5,0.25) node {{\small $v_{12}$}};
\draw (1.2,1.5) node {{\small $v_{13}$}};
\draw (3,0.25) node {{\small $v_{21}$}};
\draw (4.5,0.25) node {{\small $v_{22}$}};
\draw (4.2,1.5) node {{\small $v_{23}$}};
\draw (6,0.25) node {{\small $v_{31}$}};
\draw (7.5,0.25) node {{\small $v_{32}$}};
\draw (7.2,1.5) node {{\small $v_{33}$}};
\draw (9,0.25) node {{\small $v_{41}$}};
\draw (10.5,0.25) node {{\small $v_{42}$}};
\draw (10.2,1.5) node {{\small $v_{43}$}};
\draw (12,0.25) node {{\small $v_{51}$}};
\draw (13.5,0.25) node {{\small $v_{52}$}};
\draw (13.2,1.5) node {{\small $v_{53}$}};
\draw (15,0.25) node {{\small $v_{61}$}};
\draw (16.5,0.25) node {{\small $v_{62}$}};
\draw (16.2,1.5) node {{\small $v_{63}$}};
\draw (0.75,0.9) node {{\small $T_1$}};
\draw (3.75,0.9) node {{\small $T_2$}};
\draw (6.75,0.9) node {{\small $T_3$}};
\draw (9.75,0.9) node {{\small $T_4$}};
\draw (12.75,0.9) node {{\small $T_5$}};
\draw (15.75,0.9) node {{\small $T_6$}};
\end{tikzpicture}
\end{center}
\vskip -0.25cm
\caption{The coloring of the triangle-units when $\ell = 6$.} \label{f:Ceven}
\end{figure}
If $\ell \ge 3$ is odd, then we color the four vertices $v_{12},v_{13},v_{21},v_{23}$ and if $\ell \ge 5$ we further color all vertices that belong to the triangle-units $T_{2i}$ where $i \in [(\ell-1)/2] \setminus \{1\}$. Let $S_C$ denote the resulting set of colored vertices of $G$ associated with the cycle~$C$. (We illustrate the case when $\ell = 5$ in Figure~\ref{f:Codd}, where the vertices in $S_C$ are darkened.) We note that $(n_C - 1)/2$ of the vertices in all triangle-units of $G$ associated with the cycle $C$ of $M_G$ are colored; that is, $|S_C| = (n_C - 1)/2$.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[scale=.8,style=thick,x=1cm,y=1cm]
\def\vr{2.5pt}
\path (-0.1,0.5) coordinate (v1);
\path (-0.1,-0.2) coordinate (u1);
\path (0,0.5) coordinate (v11);
\path (1.5,0.5) coordinate (v12);
\path (0.75,1.5) coordinate (v13);
\path (0.75,2.1) coordinate (v1p);
\path (3,0.5) coordinate (v21);
\path (4.5,0.5) coordinate (v22);
\path (3.75,1.5) coordinate (v23);
\path (3.75,2.1) coordinate (v2p);
\path (6,0.5) coordinate (v31);
\path (7.5,0.5) coordinate (v32);
\path (6.75,1.5) coordinate (v33);
\path (6.75,2.1) coordinate (v3p);
\path (9,0.5) coordinate (v41);
\path (10.5,0.5) coordinate (v42);
\path (9.75,1.5) coordinate (v43);
\path (9.75,2.1) coordinate (v4p);
\path (12,0.5) coordinate (v51);
\path (13.5,0.5) coordinate (v52);
\path (12.75,1.5) coordinate (v53);
\path (12.75,2.1) coordinate (v5p);
\path (13.6,0.5) coordinate (v5);
\path (13.6,-0.2) coordinate (u5);
\draw (v11) -- (v12);
\draw (v11) -- (v13);
\draw (v12) -- (v13);
\draw (v21) -- (v22);
\draw (v21) -- (v23);
\draw (v22) -- (v23);
\draw (v31) -- (v32);
\draw (v31) -- (v33);
\draw (v32) -- (v33);
\draw (v41) -- (v42);
\draw (v41) -- (v43);
\draw (v42) -- (v43);
\draw (v51) -- (v52);
\draw (v51) -- (v53);
\draw (v52) -- (v53);
\draw (v12) -- (v21);
\draw (v22) -- (v31);
\draw (v32) -- (v41);
\draw (v42) -- (v51);
\draw (v52) -- (v11);
\draw (v13) -- (v1p);
\draw (v23) -- (v2p);
\draw (v33) -- (v3p);
\draw (v43) -- (v4p);
\draw (v53) -- (v5p);
\draw (u1) -- (u5);
\draw (v11) [fill=white] circle (\vr);
\draw (v12) [fill=black] circle (\vr);
\draw (v13) [fill=black] circle (\vr);
\draw (v21) [fill=black] circle (\vr);
\draw (v22) [fill=white] circle (\vr);
\draw (v23) [fill=black] circle (\vr);
\draw (v31) [fill=white] circle (\vr);
\draw (v32) [fill=white] circle (\vr);
\draw (v33) [fill=white] circle (\vr);
\draw (v41) [fill=black] circle (\vr);
\draw (v42) [fill=black] circle (\vr);
\draw (v43) [fill=black] circle (\vr);
\draw (v51) [fill=white] circle (\vr);
\draw (v52) [fill=white] circle (\vr);
\draw (v53) [fill=white] circle (\vr);
\draw (v1) to[out=180,in=180, distance=1cm] (u1);
\draw (v5) to[out=0,in=0, distance=1cm] (u5);
\draw (0,0.25) node {{\small $v_{11}$}};
\draw (1.5,0.25) node {{\small $v_{12}$}};
\draw (1.2,1.5) node {{\small $v_{13}$}};
\draw (3,0.25) node {{\small $v_{21}$}};
\draw (4.5,0.25) node {{\small $v_{22}$}};
\draw (4.2,1.5) node {{\small $v_{23}$}};
\draw (6,0.25) node {{\small $v_{31}$}};
\draw (7.5,0.25) node {{\small $v_{32}$}};
\draw (7.2,1.5) node {{\small $v_{33}$}};
\draw (9,0.25) node {{\small $v_{41}$}};
\draw (10.5,0.25) node {{\small $v_{42}$}};
\draw (10.2,1.5) node {{\small $v_{43}$}};
\draw (12,0.25) node {{\small $v_{51}$}};
\draw (13.5,0.25) node {{\small $v_{52}$}};
\draw (13.2,1.5) node {{\small $v_{53}$}};
\draw (0.75,0.9) node {{\small $T_1$}};
\draw (3.75,0.9) node {{\small $T_2$}};
\draw (6.75,0.9) node {{\small $T_3$}};
\draw (9.75,0.9) node {{\small $T_4$}};
\draw (12.75,0.9) node {{\small $T_5$}};
\end{tikzpicture}
\end{center}
\vskip -0.25cm
\caption{The coloring of the triangle-units when $\ell = 5$.} \label{f:Codd}
\end{figure}
Let $S$ be the resulting set of all colored vertices associated with the cycles $C$ in ${\cal C}$; that is,
\[
S = \bigcup_{C\in {\cal C}} S_C
\hspace*{0.75cm} \mbox{and} \hspace*{0.75cm}
|S| = \sum_{C\in {\cal C}} |S_C| \le \frac{1}{2}\sum_{C\in {\cal C}} n_C.
\]
We claim that $S$ is a TF-set in $G$. By construction, $G[S]$ contains no isolated vertex. Hence it suffices for us to prove that $S$ is a forcing set of $G$. For this purpose we define $k$ stages in the forcing process, where we recall that $k$ is the nonnegative integer such that $V(M_G) = S_{\le k}$ and $S_k \ne \emptyset$. For $i \in [k]$, let $V_i$ be the set of all vertices that belong to a triangle-unit of $G$ associated with a vertex of $M_G$ that belongs to the set $S_i$, and let \[
V_{\le i} = \bigcup_{j=1}^i V_j.
\]
We note that $V(G) = V_{\le k}$ and
\[
|V_1| = 3|S_1| = \sum_{C \in {\cal C}} n_C \ge 2|S|,
\]
and so
\[
|S| \le \frac{1}{2}|V_1| \le \frac{1}{2}n.
\]
We first define Stage~1 of the forcing process. In this first stage we color all vertices in $V_1$, starting with the initial set $S$ of colored vertices, in the following way. We consider the cycles in ${\cal C}$ sequentially (in any order) and color the vertices that belong to a triangle-unit of $G$ associated with such a cycle as follows. Let $C \colon v_1v_2\ldots v_\ell v_1$, where $\ell \ge 2$, be an arbitrary cycle in ${\cal C}$.
Suppose that $\ell \ge 2$ is even. Using our earlier notation, we note that the $S$-colored vertex $v_{2i-1,2}$ forces the vertex $v_{2i,1}$ to be colored for all $i \in [\ell/2]$. Further, the $S$-colored vertex $v_{11}$ forces the vertex $v_{\ell,2}$ to be colored, while the $S$-colored vertex $v_{2i-1,1}$ forces the vertex $v_{2i-2,2}$ to be colored for all $i \in [\ell/2] \setminus \{1\}$. In this way, all vertices $v_{ij}$, where $j \in [2]$, are colored in the forcing process. The newly colored vertex $v_{2i,1}$ now forces the vertex $v_{2i,3}$ to be colored for all $i \in [\ell/2]$. Thus if $\ell$ is even, then all vertices that belong to a triangle-unit of $G$ associated with the cycle $C$ of $M_G$ are colored starting with the initial set $S$ of colored vertices.
Suppose next that $\ell \ge 3$ is odd. Using our earlier notation, we note that the $S$-colored vertex $v_{12}$ forces the vertex $v_{11}$ to be colored, and the $S$-colored vertex $v_{21}$ forces the vertex $v_{22}$ to be colored. In this way, all six vertices in $T_1$ and $T_2$ are colored. The colored vertex $v_{2i,2}$ now forces the vertex $v_{2i+1,1}$ to be colored for all $i \in [(\ell - 1)/2]$. Further, the now-colored vertex $v_{11}$ forces the vertex $v_{\ell,2}$ to be colored, while the $S$-colored vertex $v_{2i,1}$ forces the vertex $v_{2i-1,2}$ to be colored for all $i \in [(\ell-1)/2] \setminus \{1\}$. In this way, all vertices $v_{ij}$, where $j \in [2]$, are colored in the forcing process. The newly colored vertex $v_{2i+1,1}$ now forces the vertex $v_{2i+1,3}$ to be colored for all $i \in [(\ell-1)/2]$. Thus if $\ell$ is odd, then all vertices that belong to a triangle-unit of $G$ associated with the cycle $C$ of $M_G$ are colored starting with the initial set $S$ of colored vertices. Thus, after Stage~1 of the forcing process all vertices in $V_1$ are colored.
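The Stage~1 colorings of both parities can be verified mechanically. The sketch below (illustrative only) builds the ring of $\ell$ triangle-units from Figures~\ref{f:Ceven} and~\ref{f:Codd}, with a pendant stub standing in for the unknown external neighbor of each free vertex, and checks that the colorings described above force every ring vertex.

```python
def triangle_ring(l):
    # ring of l triangle-units T_1..T_l as in the figures, with vertices
    # (i, 1), (i, 2), (i, 3), cycle edges v_{i2} v_{i+1,1}, and a pendant
    # stub ('p', i) on each free vertex v_{i3} standing in for the rest of G
    adj = {}
    def add(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for i in range(1, l + 1):
        for x, y in [((i, 1), (i, 2)), ((i, 1), (i, 3)), ((i, 2), (i, 3))]:
            add(x, y)
        add((i, 2), (i % l + 1, 1))
        add((i, 3), ('p', i))
    return adj

def closure(adj, start):
    # standard forcing rule, iterated to a fixed point
    colored, changed = set(start), True
    while changed:
        changed = False
        for v in list(colored):
            unc = [u for u in adj[v] if u not in colored]
            if len(unc) == 1:
                colored.add(unc[0])
                changed = True
    return colored

# even case (l = 6): color the triangle-units T_1, T_3, T_5 completely
ring = {v for v in triangle_ring(6) if v[0] != 'p'}
S_even = {(i, j) for i in (1, 3, 5) for j in (1, 2, 3)}
assert ring <= closure(triangle_ring(6), S_even)

# odd case (l = 5): color v_12, v_13, v_21, v_23 and all of T_4
ring5 = {v for v in triangle_ring(5) if v[0] != 'p'}
S_odd = {(1, 2), (1, 3), (2, 1), (2, 3)} | {(4, j) for j in (1, 2, 3)}
assert ring5 <= closure(triangle_ring(5), S_odd)
```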
We next define Stage~2 of the forcing process. In this second stage we color all vertices in $V_2$ as follows. Let $v$ be an arbitrary vertex in $V_2$. Thus, the vertex $v$ belongs to a triangle-unit, $T_v$ say, in $G$ that is associated with a vertex $v' \in S_2$ in $M_G$. By definition, the vertex $v'$ is joined to $S_1$ in $M_G$ by two or three edges. This implies that there are two or three edges joining $T_v$ to vertices in $V_1$. Let $a_2, b_2, c_2$ be the three vertices in the triangle-unit $T_v$ in $G$. We note that $v \in \{a_2,b_2,c_2\}$. Renaming vertices if necessary, we may assume that $a_2$ has a neighbor, $a_1$ say, in $V_1$ and $b_2$ has a neighbor, $b_1$ say, in $V_1$. Since every vertex of $V_1$ belongs to a triangle-unit in $G[V_1]$, we note that every vertex of $V_1$ has at most one neighbor not in $V_1$. In particular, $a_2$ is the only uncolored neighbor of $a_1$ and $b_2$ is the only uncolored neighbor of $b_1$ upon completion of Stage~1. Further, $a_1 \ne b_1$. The vertex $a_1$ now forces the vertex $a_2$ to be colored, and the vertex $b_1$ now forces the vertex $b_2$ to be colored. The newly colored vertex $a_2$ now forces the third vertex of $T_v$, namely $c_2$, to be colored. Thus, all three vertices in the triangle-unit $T_v$ in $G$ become colored in Stage~2. In particular, $v$ is colored in this stage of the forcing process. Since $v$ is an arbitrary vertex in $V_2$, after Stage~2 of the forcing process all vertices in $V_{\le 2} = V_1 \cup V_2$ are colored.
In general, suppose that Stage~$i$ in the forcing process has been defined for some $i$ where $1 \le i < k$, and that after Stage~$i$ all vertices in $V_{\le i}$ are colored. We now define the $(i+1)$th stage in the forcing process. In this stage we color all vertices in $V_{i+1}$ as follows. Let $v$ be an arbitrary vertex in $V_{i+1}$. Thus, the vertex $v$ belongs to a triangle-unit, $T_v$ say, in $G$ that is associated with a vertex $v' \in S_{i+1}$ in $M_G$. By definition, the vertex $v'$ is joined to $S_{\le i}$ in $M_G$ by two or three edges. This implies that there are two or three edges joining $T_v$ to vertices in $V_{\le i}$. Let $a_{i+1}, b_{i+1}, c_{i+1}$ be the three vertices in the triangle-unit $T_v$ in $G$. We note that $v \in \{a_{i+1},b_{i+1},c_{i+1}\}$. Renaming vertices if necessary, we may assume that $a_{i+1}$ has a neighbor, $a$ say, in $V_{\le i}$ and $b_{i+1}$ has a neighbor, $b$ say, in $V_{\le i}$.
Since every vertex of $V_{\le i}$ belongs to a triangle-unit in $G[V_{\le i}]$, we note that every vertex of $V_{\le i}$ has at most one neighbor not in $V_{\le i}$. In particular, $a_{i+1}$ is the only uncolored neighbor of $a$ and $b_{i+1}$ is the only uncolored neighbor of $b$ upon completion of Stage~$i$. Further, $a \ne b$. The vertex $a$ now forces the vertex $a_{i+1}$ to be colored, and the vertex $b$ now forces the vertex $b_{i+1}$ to be colored. The newly colored vertex $a_{i+1}$ now forces the third vertex of $T_v$, namely $c_{i+1}$, to be colored. Thus, all three vertices in the triangle-unit $T_v$ in $G$ become colored in Stage~$i+1$. In particular, $v$ is colored in this stage of the forcing process. Since $v$ is an arbitrary vertex in $V_{i+1}$, after Stage~$i+1$ of the forcing process all vertices in $V_{\le i+1}$ are colored. In particular, after Stage~$k$ of the forcing process all vertices in $V(G) = V_{\le k}$ are colored. Thus, the set $S$ is a TF-set of $G$, implying by our earlier observations that
\[
F_t(G) \le |S| \le \frac{1}{2}|V_1| \le \frac{1}{2}n.
\]
Suppose, next, that $F_t(G) = n/2$. Thus, we must have equality throughout the above inequality chain, implying in particular that $F_t(G) = |S| = |V_1|/2$ and $k = 1$. This in turn implies that ${\cal C}$ is a cycle cover in $M_G$, and so every vertex of $M_G$ belongs to a cycle in ${\cal C}$. Further, for every cycle $C$ in ${\cal C}$, we have $|S_C| = n_C/2$, and so every cycle in ${\cal C}$ has even length. Thus, the coloring of the triangle-units of the cycle $C$ is as illustrated in Figure~\ref{f:Ceven}.
Let $C \colon v_1v_2\ldots v_\ell v_1$ be an arbitrary cycle in ${\cal C}$, where $\ell \ge 2$ is even. As before, let $T_1, T_2, \ldots, T_\ell$ be the triangle-units in $G$ associated with the vertices $v_1, v_2, \ldots, v_\ell$, respectively, and let $V(T_i) = \{v_{i1},v_{i2},v_{i3}\}$, where $C' \colon v_{11}v_{12}v_{21}v_{22} \ldots v_{\ell 1}v_{\ell 2} v_{11}$ is the chosen cycle in $G$ that gives rise to the cycle $C$ in $M_G$. Recall that the vertex $v_{i3}$ is called the free vertex of the triangle-unit $T_i$ for $i \in [\ell]$ relative to $C'$. We now consider the free vertex $v_{13}$. Let $v$ be the neighbor of $v_{13}$ that does not belong to the triangle-unit $T_1$. We note that $v$ is a free vertex associated with some cycle $C^*$ in $G$, where possibly $C' = C^*$. Let $T_v$ be the triangle-unit in $G$ that contains~$v$. We note that either all three vertices of $T_v$ are colored (that is, belong to $S$) or none are colored.
If $v$ is an $S$-colored vertex, then the set $S' = S \setminus \{v\}$ is a TF-set of $G$, since as the first vertex played in the forcing process starting with the set $S'$ we play the vertex $v_{13}$ which forces $v$ to be colored, and then proceed exactly with the forcing process starting with the set $S$ to color all remaining vertices, implying that $F_t(G) \le |S'| < |S| = n/2$, a contradiction. Therefore, $v$ is not an $S$-colored vertex, and so no vertex in $T_v$ is $S$-colored.
Suppose that $T_v$ is neither the triangle-unit $T_2$ nor the triangle-unit $T_\ell$ (where possibly $T_2 = T_\ell$). Let $x$ and $y$ be the two vertices of $T_v$ different from $v$, and let $x^*$ and $y^*$ be the neighbors of $x$ and $y$, respectively, that belong to the cycle $C^*$. Since $T_v \ne T_2$ and $T_v \ne T_\ell$, we note that neither $x^*$ nor $y^*$ belong to the triangle-units $T_2$ or $T_\ell$. We note, however, that all three vertices in the triangle-unit containing $x^*$ (respectively, $y^*$) are colored (that is, belong to $S$). In this case, the set $S' = S \setminus \{v_{13}\}$ is a TF-set of $G$, since as the first four vertices played in the forcing process starting with the set $S'$ we play first the vertex $x^*$ which forces $x$ to be colored, second we play the vertex $y^*$ which forces $y$ to be colored, third we play the vertex $x$ which forces $v$ to be colored, and fourth we play the vertex $v$ which forces $v_{13}$ to be colored, and then proceed with the forcing process starting with the set $S$ to color all remaining vertices, implying once again that $F_t(G) \le |S'| < |S| \le n/2$, a contradiction.
Therefore, $T_v$ is the triangle-unit $T_2$ or the triangle-unit $T_\ell$. Since $G$ is connected and the vertex $v_{13}$ is an arbitrary free vertex in $G$, this implies that ${\cal C}$ consists of exactly one cycle, that is, ${\cal C} = \{C\}$ and $|{\cal C}| = 1$. By symmetry, we may assume that $T_v = T_2$. Suppose that $C$ has length $\ell \ge 4$. We now consider the free vertex $v_{33}$ that belongs to the triangle-unit $T_3$. Arguing analogously as for the vertex $v_{13}$, we show that the free vertex $v_{33}$ is adjacent to the free vertex $v_{43}$ that belongs to the triangle-unit $T_4$. More generally, we show that the free vertex $v_{2i+1,3}$ that belongs to the triangle-unit $T_{2i+1}$ is adjacent to the free vertex $v_{2i+2,3}$ that belongs to the triangle-unit $T_{2i+2}$ for all $i \in [\ell/2 - 1]$. Thus the graph $G$ is determined. We illustrate the graph $G$ in the special case when $\ell = 6$ in Figure~\ref{f:G}, where the vertices in $S$ are darkened.
\medskip
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[scale=.8,style=thick,x=1cm,y=1cm]
\def\vr{2.5pt}
\path (-0.1,0.5) coordinate (v1);
\path (-0.1,-0.2) coordinate (u1);
\path (0,0.5) coordinate (v11);
\path (1.5,0.5) coordinate (v12);
\path (0.75,1.5) coordinate (v13);
\path (3,0.5) coordinate (v21);
\path (4.5,0.5) coordinate (v22);
\path (3.75,1.5) coordinate (v23);
\path (6,0.5) coordinate (v31);
\path (7.5,0.5) coordinate (v32);
\path (6.75,1.5) coordinate (v33);
\path (9,0.5) coordinate (v41);
\path (10.5,0.5) coordinate (v42);
\path (9.75,1.5) coordinate (v43);
\path (12,0.5) coordinate (v51);
\path (13.5,0.5) coordinate (v52);
\path (12.75,1.5) coordinate (v53);
\path (15,0.5) coordinate (v61);
\path (16.5,0.5) coordinate (v62);
\path (15.75,1.5) coordinate (v63);
\path (16.6,0.5) coordinate (v6);
\path (16.6,-0.2) coordinate (u6);
\draw (v11) -- (v12);
\draw (v11) -- (v13);
\draw (v12) -- (v13);
\draw (v21) -- (v22);
\draw (v21) -- (v23);
\draw (v22) -- (v23);
\draw (v31) -- (v32);
\draw (v31) -- (v33);
\draw (v32) -- (v33);
\draw (v41) -- (v42);
\draw (v41) -- (v43);
\draw (v42) -- (v43);
\draw (v51) -- (v52);
\draw (v51) -- (v53);
\draw (v52) -- (v53);
\draw (v61) -- (v62);
\draw (v61) -- (v63);
\draw (v62) -- (v63);
\draw (v12) -- (v21);
\draw (v22) -- (v31);
\draw (v32) -- (v41);
\draw (v42) -- (v51);
\draw (v52) -- (v61);
\draw (v62) -- (v11);
\draw (v13) -- (v23);
\draw (v33) -- (v43);
\draw (v53) -- (v63);
\draw (u1) -- (u6);
\draw (v11) [fill=black] circle (\vr);
\draw (v12) [fill=black] circle (\vr);
\draw (v13) [fill=black] circle (\vr);
\draw (v21) [fill=white] circle (\vr);
\draw (v22) [fill=white] circle (\vr);
\draw (v23) [fill=white] circle (\vr);
\draw (v31) [fill=black] circle (\vr);
\draw (v32) [fill=black] circle (\vr);
\draw (v33) [fill=black] circle (\vr);
\draw (v41) [fill=white] circle (\vr);
\draw (v42) [fill=white] circle (\vr);
\draw (v43) [fill=white] circle (\vr);
\draw (v51) [fill=black] circle (\vr);
\draw (v52) [fill=black] circle (\vr);
\draw (v53) [fill=black] circle (\vr);
\draw (v61) [fill=white] circle (\vr);
\draw (v62) [fill=white] circle (\vr);
\draw (v63) [fill=white] circle (\vr);
\draw (v1) to[out=180,in=180, distance=1cm] (u1);
\draw (v6) to[out=0,in=0, distance=1cm] (u6);
\draw (0,0.25) node {{\small $v_{11}$}};
\draw (1.5,0.25) node {{\small $v_{12}$}};
\draw (0.25,1.5) node {{\small $v_{13}$}};
\draw (3,0.25) node {{\small $v_{21}$}};
\draw (4.5,0.25) node {{\small $v_{22}$}};
\draw (4.2,1.5) node {{\small $v_{23}$}};
\draw (6,0.25) node {{\small $v_{31}$}};
\draw (7.5,0.25) node {{\small $v_{32}$}};
\draw (6.25,1.5) node {{\small $v_{33}$}};
\draw (9,0.25) node {{\small $v_{41}$}};
\draw (10.5,0.25) node {{\small $v_{42}$}};
\draw (10.2,1.5) node {{\small $v_{43}$}};
\draw (12,0.25) node {{\small $v_{51}$}};
\draw (13.5,0.25) node {{\small $v_{52}$}};
\draw (12.25,1.5) node {{\small $v_{53}$}};
\draw (15,0.25) node {{\small $v_{61}$}};
\draw (16.5,0.25) node {{\small $v_{62}$}};
\draw (16.2,1.5) node {{\small $v_{63}$}};
\draw (0.75,0.9) node {{\small $T_1$}};
\draw (3.75,0.9) node {{\small $T_2$}};
\draw (6.75,0.9) node {{\small $T_3$}};
\draw (9.75,0.9) node {{\small $T_4$}};
\draw (12.75,0.9) node {{\small $T_5$}};
\draw (15.75,0.9) node {{\small $T_6$}};
\end{tikzpicture}
\end{center}
\vskip -0.25cm
\caption{The graph $G$ when $\ell = 6$.} \label{f:G}
\end{figure}
However, we can now replace the cycle $C \colon v_1v_2\ldots v_\ell v_1$ in ${\cal C}$ that belongs to the multigraph $M_G$ with the $2$-cycles $v_1v_2v_1$, $v_3v_4v_3, \ldots, v_{\ell - 1} v_\ell v_{\ell - 1}$ of $M_G$. Thus we could have chosen the cycle cover ${\cal C}$ to consist of $\ell/2 \ge 2$ vertex-disjoint cycles, contradicting condition~(2) of our choice of ${\cal C}$. Therefore, $\ell = 2$. Thus, $|{\cal C}| = 1$ and the cycle in ${\cal C}$ is a $2$-cycle, implying that $G$ contains exactly two triangle-units and is therefore the prism $C_3 \, \Box \, K_2$. Therefore if $F_t(G) = n/2$, then we have shown that $G = C_3 \, \Box \, K_2$.~$\Box$
\subsection{Proof of Theorem~\ref{thm2}}
\label{S:proof2}
In this section, we present a proof of Theorem~\ref{thm2}. Recall its statement.
\noindent \textbf{Theorem~\ref{thm2}}. \emph{If $G \ne K_4$ is a connected, claw-free, cubic graph of order $n$, then $F_t(G) \le \frac{1}{2}n$ with equality if and only if $G \in {\cal N}_{\rm cubic}$ or $G$ is the prism $C_3 \, \Box \, K_2$.
}
\medskip
\noindent\textbf{Proof. } We proceed by induction on the order~$n \ge 6$ of a connected, claw-free, cubic graph $G$. If $n = 6$, then $G$ is the prism $C_3 \, \Box \, K_2$ and $F_t(G) = 3 = n/2$ (the three darkened vertices shown in Figure~\ref{f:prism}(a) form a minimum TF-set in $C_3 \, \Box \, K_2$). If $n = 8$, then $G$ is the diamond-necklace $N_2$ and $F_t(G) = 4 = n/2$ (the four darkened vertices shown in Figure~\ref{f:prism}(b) form a minimum TF-set in $N_2$). This establishes the base cases. Let $n \ge 10$ and assume that if $G'$ is a connected, claw-free, cubic graph of order~$n'$, where $6 \le n' < n$, then $F_t(G') \le n'/2$ with equality if and only if $G' \in {\cal N}_{\rm cubic}$ or $G'$ is the prism $C_3 \, \Box \, K_2$. Let $G$ be a connected, claw-free, cubic graph of order~$n$.
\medskip
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[scale=.8,style=thick,x=1cm,y=1cm]
\def\vr{2.75pt}
\path (0,0) coordinate (b);
\path (0,1.5) coordinate (a);
\path (1,0.75) coordinate (c);
\path (2,0.75) coordinate (d);
\path (3,0) coordinate (f);
\path (3,1.5) coordinate (e);
\path (5.25,0.75) coordinate (g);
\path (6.75,0.75) coordinate (j);
\path (8.25,0.75) coordinate (k);
\path (9.75,0.75) coordinate (n);
\path (6,0) coordinate (i);
\path (9,0) coordinate (m);
\path (6,1.5) coordinate (h);
\path (9,1.5) coordinate (l);
\draw (a) -- (b);
\draw (a) -- (c);
\draw (a) -- (e);
\draw (b) -- (c);
\draw (b) -- (f);
\draw (c) -- (d);
\draw (d) -- (e);
\draw (d) -- (f);
\draw (e) -- (f);
\draw (g) -- (h);
\draw (g) -- (i);
\draw (g) -- (j);
\draw (h) -- (j);
\draw (h) -- (l);
\draw (i) -- (j);
\draw (i) -- (m);
\draw (k) -- (l);
\draw (k) -- (m);
\draw (k) -- (n);
\draw (l) -- (n);
\draw (m) -- (n);
\draw (a) [fill=black] circle (\vr);
\draw (b) [fill=black] circle (\vr);
\draw (c) [fill=black] circle (\vr);
\draw (d) [fill=white] circle (\vr);
\draw (e) [fill=white] circle (\vr);
\draw (f) [fill=white] circle (\vr);
\draw (g) [fill=black] circle (\vr);
\draw (h) [fill=black] circle (\vr);
\draw (i) [fill=white] circle (\vr);
\draw (j) [fill=white] circle (\vr);
\draw (k) [fill=white] circle (\vr);
\draw (l) [fill=black] circle (\vr);
\draw (m) [fill=white] circle (\vr);
\draw (n) [fill=black] circle (\vr);
\draw (1.5,-0.75) node {(a) {\small $C_3 \, \Box \, K_2$}};
\draw (7.5,-0.75) node {(b) {\small $N_2$}};
\end{tikzpicture}
\end{center}
\vskip -0.65cm
\caption{The prism $C_3 \, \Box \, K_2$ and the diamond-necklace $N_2$.} \label{f:prism}
\end{figure}
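The base cases can also be checked mechanically by simulating the forcing process. The following Python sketch (our own illustration; the vertex names and the helper `forces_all` are not from the paper) verifies that one triangle of the prism $C_3 \, \Box \, K_2$ is a TF-set, and that no two vertices suffice, so that $F_t(C_3 \, \Box \, K_2) = 3 = n/2$:

```python
from itertools import combinations

def forces_all(adj, start):
    """Iteratively apply the forcing rule: a colored vertex with exactly
    one non-colored neighbor forces that neighbor to be colored.
    Returns True if every vertex of the graph ends up colored."""
    colored = set(start)
    changed = True
    while changed:
        changed = False
        for v in list(colored):
            uncolored = [u for u in adj[v] if u not in colored]
            if len(uncolored) == 1:
                colored.add(uncolored[0])
                changed = True
    return colored == set(adj)

# Prism C3 box K2: triangles abc and def joined by the matching ad, be, cf.
prism = {
    'a': ['b', 'c', 'd'], 'b': ['a', 'c', 'e'], 'c': ['a', 'b', 'f'],
    'd': ['e', 'f', 'a'], 'e': ['d', 'f', 'b'], 'f': ['d', 'e', 'c'],
}

S = {'a', 'b', 'c'}          # one full triangle: induces no isolated vertex
assert forces_all(prism, S)  # S forces every vertex, so F_t <= 3 = n/2

# No set of two vertices forces the prism, so F_t = 3 exactly.
assert not any(forces_all(prism, set(p)) for p in combinations(prism, 2))
```

In fact, in any cubic graph a colored vertex needs two colored neighbors before it can force, so no two-vertex set can ever start the process; the last assertion confirms this for the prism.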
If $G \in {\cal N}_{\rm cubic}$, then by Lemma~\ref{lem1}, $F_t(G) = n/2$. Hence, we may assume that $G \notin {\cal N}_{\rm cubic}$, for otherwise the desired result follows. Therefore at least one unit in the $\Delta$-D-partition of $G$ is a triangle-unit. Since every triangle-unit is joined by three edges to vertices from other units, while every diamond-unit is joined by two such edges, a parity count of the edges between units shows that there are at least two triangle-units in our $\Delta$-D-partition. If $G$ has no diamond-unit, then the graph $G$ has a spanning $2$-factor consisting of triangles, and so by Theorem~\ref{thm1}, $F_t(G) \le n/2$, with equality if and only if $G$ is the prism $C_3 \, \Box \, K_2$. Hence, we may assume that $G$ contains at least one diamond-unit, for otherwise the desired result follows.
Let $D$ be an arbitrary diamond-unit in the $\Delta$-D-partition of $G$, where $V(D) = \{a,b,c,d\}$ and where $ab$ is the missing edge in $D$. Let $e$ be the neighbor of $a$ not in $D$, and let $f$ be the neighbor of $b$ not in $D$. Since $G$ is claw-free, we note that $e \ne f$. We proceed further with the following claim.
\begin{unnumbered}{Claim~A}
If $e$ and $f$ are not adjacent, then $F_t(G) < \frac{1}{2}n$.
\end{unnumbered}
\noindent\textbf{Proof. } Suppose that $e$ and $f$ are not adjacent. Let $e_1$ and $e_2$ be the two neighbors of $e$ different from $a$, and let $f_1$ and $f_2$ be the two neighbors of $f$ different from $b$. By the claw-freeness of $G$, we note that the three vertices $e, e_1, e_2$ induce a triangle, say $T_e$, in $G$ and the three vertices $f, f_1, f_2$ induce a triangle, say $T_f$, in $G$. We note that $T_e$ and $T_f$ have no vertex in common. Further, $T_e$ (respectively, $T_f$) is either a triangle-unit or forms part of a diamond-unit. The resulting
subgraph of $G$ is illustrated in Figure~\ref{f:ClaimA}.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[scale=.8,style=thick,x=1cm,y=1cm]
\def\vr{2.75pt}
\path (-0.5,0) coordinate (e2p);
\path (0,0) coordinate (e2);
\path (-0.5,1.5) coordinate (e1p);
\path (0,1.5) coordinate (e1);
\path (1,0.75) coordinate (e);
\path (3,0.75) coordinate (a);
\path (4,0) coordinate (d);
\path (4,1.5) coordinate (c);
\path (5,0.75) coordinate (b);
\path (7,0.75) coordinate (f);
\path (8.5,1.5) coordinate (f1p);
\path (8,1.5) coordinate (f1);
\path (8.5,0) coordinate (f2p);
\path (8,0) coordinate (f2);
\draw (e1) -- (e1p);
\draw (e2) -- (e2p);
\draw (f1) -- (f1p);
\draw (f2) -- (f2p);
\draw (e) -- (e1);
\draw (e) -- (e2);
\draw (e1) -- (e2);
\draw (f1) -- (f2);
\draw (e) -- (a);
\draw (f) -- (f1);
\draw (f) -- (f2);
\draw (f) -- (b);
\draw (a) -- (c);
\draw (a) -- (d);
\draw (b) -- (c);
\draw (b) -- (d);
\draw (c) -- (d);
\draw (a) [fill=white] circle (\vr);
\draw (b) [fill=white] circle (\vr);
\draw (c) [fill=white] circle (\vr);
\draw (d) [fill=white] circle (\vr);
\draw (e) [fill=white] circle (\vr);
\draw (e1) [fill=white] circle (\vr);
\draw (e2) [fill=white] circle (\vr);
\draw (f) [fill=white] circle (\vr);
\draw (f1) [fill=white] circle (\vr);
\draw (f2) [fill=white] circle (\vr);
\draw (0,-0.3) node {{\small $e_2$}};
\draw (0,1.8) node {{\small $e_1$}};
\draw (1,1.1) node {{\small $e$}};
\draw (3,1.1) node {{\small $a$}};
\draw (4,1.8) node {{\small $c$}};
\draw (4,-0.3) node {{\small $d$}};
\draw (5,1.1) node {{\small $b$}};
\draw (7,1.2) node {{\small $f$}};
\draw (8,1.85) node {{\small $f_1$}};
\draw (8,-0.35) node {{\small $f_2$}};
\end{tikzpicture}
\end{center}
\vskip -0.55cm
\caption{A subgraph of $G$.} \label{f:ClaimA}
\end{figure}
Let $G'$ be the graph obtained from $G$ by deleting the vertices in $V(D)$ and their incident edges, and then adding the edge $ef$; that is, $G' = (G - V(D)) \cup \{ef\}$. We note that $G'$ is a connected, claw-free, cubic graph of order~$n'$, where $6 \le n' < n$. If $G' \in {\cal N}_{\rm cubic}$, then $G \in {\cal N}_{\rm cubic}$, contradicting our earlier assumption that $G \notin {\cal N}_{\rm cubic}$. Thus, $G' \notin {\cal N}_{\rm cubic}$. If $G'$ is the prism $C_3 \, \Box \, K_2$, then renaming vertices if necessary, we may assume that $e_i$ and $f_i$ are adjacent for $i \in [2]$, implying that $G$ is the graph of order~$n = 10$ shown in Figure~\ref{f:ClaimAa}. The set $\{a,c,e,e_1\}$, for example, illustrated in Figure~\ref{f:ClaimAa} is a TF-set in $G$, implying that $F_t(G) \le 4 < n/2$. Hence, we may assume that $G'$ is not the prism $C_3 \, \Box \, K_2$, for otherwise the desired result follows.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[scale=.8,style=thick,x=1cm,y=1cm]
\def\vr{2.75pt}
\path (0,-0.5) coordinate (e2);
\path (0.1,-0.55) coordinate (e2p);
\path (0,2) coordinate (e1);
\path (0.1,2.05) coordinate (e1p);
\path (1,0.75) coordinate (e);
\path (3,0.75) coordinate (a);
\path (4,0) coordinate (d);
\path (4,1.5) coordinate (c);
\path (5,0.75) coordinate (b);
\path (7,0.75) coordinate (f);
\path (7.9,2.05) coordinate (f1p);
\path (8,2) coordinate (f1);
\path (7.9,-0.55) coordinate (f2p);
\path (8,-0.5) coordinate (f2);
\draw (e) -- (e1);
\draw (e) -- (e2);
\draw (e1) -- (e2);
\draw (f1) -- (f2);
\draw (e) -- (a);
\draw (f) -- (f1);
\draw (f) -- (f2);
\draw (f) -- (b);
\draw (a) -- (c);
\draw (a) -- (d);
\draw (b) -- (c);
\draw (b) -- (d);
\draw (c) -- (d);
\draw (a) [fill=black] circle (\vr);
\draw (b) [fill=white] circle (\vr);
\draw (c) [fill=black] circle (\vr);
\draw (d) [fill=white] circle (\vr);
\draw (e) [fill=black] circle (\vr);
\draw (e1) [fill=black] circle (\vr);
\draw (e2) [fill=white] circle (\vr);
\draw (f) [fill=white] circle (\vr);
\draw (f1) [fill=white] circle (\vr);
\draw (f2) [fill=white] circle (\vr);
\draw (-0.3,-0.5) node {{\small $e_2$}};
\draw (-0.3,2) node {{\small $e_1$}};
\draw (1,1.1) node {{\small $e$}};
\draw (3,1.1) node {{\small $a$}};
\draw (4.3,1.5) node {{\small $c$}};
\draw (4.3,0) node {{\small $d$}};
\draw (5,1.1) node {{\small $b$}};
\draw (7,1.2) node {{\small $f$}};
\draw (8.3,2) node {{\small $f_1$}};
\draw (8.3,-0.5) node {{\small $f_2$}};
\draw (e1p) to[out=45,in=135, distance=0.5cm] (f1p);
\draw (e2p) to[out=-45,in=-135, distance=0.5cm] (f2p);
\end{tikzpicture}
\end{center}
\vskip -0.55cm
\caption{The graph $G$.} \label{f:ClaimAa}
\end{figure}
Applying the inductive hypothesis to the graph $G'$, we have $F_t(G') < n'/2$, noting that $G' \notin {\cal N}_{\rm cubic}$ and $G'$ is not the prism $C_3 \, \Box \, K_2$. We show next that $F_t(G) \le F_t(G') + 2$. Let $S'$ be a minimum TF-set in $G'$, and so $F_t(G') = |S'|$.
Suppose that $f \in S'$ but neither $f_1$ nor $f_2$ belongs to $S'$. Since $G'[S']$ contains no isolated vertex, we note that in this case $e \in S'$. Thus, $S = (S' \setminus \{f\}) \cup \{a,c\}$ is a TF-set of $G$: as the first three vertices played in the forcing process in $G$ starting with the set $S$, we play first the vertex $a$, which forces $d$ to be colored; second the vertex $d$, which forces $b$ to be colored; and third the vertex $b$, which forces $f$ to be colored. We then proceed with the forcing process in $G'$ starting with the set $S'$ to color all remaining vertices of $G$, implying that $F_t(G) \le |S| = |S'| + 1 = F_t(G') + 1$. Hence, we may assume that if $f \in S'$, then at least one of $f_1$ and $f_2$ belongs to $S'$. Analogously, we may assume that if $e \in S'$, then at least one of $e_1$ and $e_2$ belongs to $S'$.
Suppose that $e \in S'$. By our earlier assumptions, if $f \in S'$, then at least one of $f_1$ and $f_2$ belongs to $S'$. Thus, the set $S = S' \cup \{a,c\}$ is a TF-set of $G$: as the first two vertices played in the forcing process in $G$ starting with the set $S$, we play first the vertex $a$, which forces $d$ to be colored, and second the vertex $d$, which forces $b$ to be colored. We then proceed with the forcing process in $G'$ starting with the set $S'$ to color all remaining vertices of $G$, implying that $F_t(G) \le |S| = |S'| + 2 = F_t(G') + 2$. Analogously, if $f \in S'$, then $F_t(G) \le F_t(G') + 2$.
Suppose that neither $e$ nor $f$ belongs to $S'$. In the forcing process in $G'$ starting with the set $S'$, we may assume, renaming vertices if necessary, that the vertex $e$ is colored before the vertex $f$. With this assumption, the set $S = S' \cup \{a,c\}$ is once again a TF-set of $G$: we proceed with the forcing process in $G'$ starting with the set $S'$ until the vertex $e$ is colored, then play the vertex $a$, which forces $d$ to be colored, followed by the vertex $d$, which forces $b$ to be colored, and thereafter continue with the original forcing process in $G'$, except that if the vertex $e$ is played in the original forcing process we instead play the vertex $b$. This implies that $F_t(G) \le |S| = |S'| + 2 = F_t(G') + 2$.
In all the above cases, we have shown that $F_t(G) \le F_t(G') + 2$. Hence, since $F_t(G') < n'/2$ and $n' = n - 4$, this implies that $F_t(G) < n/2$. This completes the proof of Claim~A.~{\tiny ($\Box$)}
\medskip
By Claim~A, we may assume that $e$ and $f$ are adjacent, for otherwise $F_t(G) < n/2$, as desired. Thus, $e$ and $f$ belong to a common triangle-unit, say $T$. Let $g$ be the third vertex in the triangle-unit $T$, and let $h$ be the neighbor of $g$ not in $T$. If $h$ belongs to a diamond-unit, then choosing this diamond-unit as our initial diamond-unit $D$, we are back in Claim~A, implying that $F_t(G) < n/2$. Hence, we may assume that $h$ belongs to a triangle-unit in the $\Delta$-D-partition. Let $i$ and $j$ be the two vertices in this triangle-unit different from $h$, and let $k$ and $\ell$ denote the neighbors of $i$ and $j$, respectively, not in this triangle-unit. Since, by assumption, $h$ does not belong to a diamond-unit, we have $k \ne \ell$. The resulting subgraph of $G$ is illustrated in Figure~\ref{f:figC}, where possibly $k$ and $\ell$ are adjacent vertices.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[scale=.8,style=thick,x=1cm,y=1cm]
\def\vr{2.75pt}
\path (0.25,0.75) coordinate (c);
\path (1,1.5) coordinate (a);
\path (1,0) coordinate (b);
\path (1.75,0.75) coordinate (d);
\path (3,1.5) coordinate (e);
\path (3,0) coordinate (f);
\path (4,0.75) coordinate (g);
\path (5,0.75) coordinate (h);
\path (6,1.5) coordinate (i);
\path (6,0) coordinate (j);
\path (7,1.5) coordinate (k);
\path (7,0) coordinate (l);
\path (7.5,1.75) coordinate (k1);
\path (7.5,1.25) coordinate (k2);
\path (7.5,0.25) coordinate (l1);
\path (7.5,-0.25) coordinate (l2);
\draw (a) -- (e);
\draw (a) -- (c);
\draw (a) -- (d);
\draw (b) -- (c);
\draw (b) -- (d);
\draw (c) -- (d);
\draw (b) -- (f);
\draw (e) -- (f);
\draw (g) -- (e);
\draw (g) -- (f);
\draw (g) -- (h);
\draw (h) -- (i);
\draw (h) -- (j);
\draw (i) -- (k);
\draw (i) -- (j);
\draw (j) -- (l);
\draw (k) -- (k1);
\draw (k) -- (k2);
\draw (l) -- (l1);
\draw (l) -- (l2);
\draw (a) [fill=white] circle (\vr);
\draw (b) [fill=white] circle (\vr);
\draw (c) [fill=white] circle (\vr);
\draw (d) [fill=white] circle (\vr);
\draw (e) [fill=white] circle (\vr);
\draw (f) [fill=white] circle (\vr);
\draw (g) [fill=white] circle (\vr);
\draw (h) [fill=white] circle (\vr);
\draw (i) [fill=white] circle (\vr);
\draw (j) [fill=white] circle (\vr);
\draw (k) [fill=white] circle (\vr);
\draw (l) [fill=white] circle (\vr);
\draw (-0.05,0.75) node {{\small $c$}};
\draw (1,-0.35) node {{\small $b$}};
\draw (1,1.8) node {{\small $a$}};
\draw (2.05,0.75) node {{\small $d$}};
\draw (3,-0.35) node {{\small $f$}};
\draw (3,1.8) node {{\small $e$}};
\draw (4,0.4) node {{\small $g$}};
\draw (5,0.4) node {{\small $h$}};
\draw (6,-0.35) node {{\small $j$}};
\draw (6,1.85) node {{\small $i$}};
\draw (7,-0.35) node {{\small $\ell$}};
\draw (7,1.85) node {{\small $k$}};
\end{tikzpicture}
\end{center}
\vskip -0.55cm
\caption{A subgraph of $G$.} \label{f:figC}
\end{figure}
We now consider the connected, claw-free, cubic graph $G'$ obtained from $G$ by removing the vertices of the set $\{e,f,g,h,i,j\}$ and adding the edges $ak$ and $b\ell$. Let $n'$ be the order of $G'$, and so $n' = n - 6$. Suppose that the vertex $k$ belongs to a diamond-unit $D^*$ in $G$ (and therefore in $G'$). If this diamond-unit does not contain the vertex~$\ell$, then choosing this diamond-unit as our initial diamond-unit $D$, we are back in Claim~A, implying that $F_t(G) < n/2$. Hence, we may assume that the diamond-unit $D^*$ contains the vertex $\ell$, implying that the graph $G$ is the graph of order~$n = 14$ shown in Figure~\ref{f:Gdeter3}, where $V(D^*) = \{k,\ell,m,p\}$. The set $\{a,c,e,i,k,m\}$, for example, illustrated in Figure~\ref{f:Gdeter3} is a TF-set in $G$, implying that $F_t(G) \le 6 < n/2$. Hence, we may assume that the vertex $k$ belongs to a triangle-unit in $G$ (and therefore in $G'$). Thus, $G' \notin {\cal N}_{\rm cubic}$. Since $G'$ contains the diamond-unit $D$, we note that $G'$ is not the prism $C_3 \, \Box \, K_2$. Applying the inductive hypothesis to the graph $G'$, we therefore have $F_t(G') < n'/2$.
\begin{figure}[htb]
\begin{center}
\begin{tikzpicture}[scale=.8,style=thick,x=1cm,y=1cm]
\def\vr{2.75pt}
\path (0.25,0.75) coordinate (c);
\path (1,1.5) coordinate (a);
\path (1,0) coordinate (b);
\path (1.75,0.75) coordinate (d);
\path (3,1.5) coordinate (e);
\path (3,0) coordinate (f);
\path (4,0.75) coordinate (g);
\path (5,0.75) coordinate (h);
\path (6,1.5) coordinate (i);
\path (6,0) coordinate (j);
\path (8,1.5) coordinate (k);
\path (8,0) coordinate (l);
\path (7.25,0.75) coordinate (p);
\path (8.75,0.75) coordinate (m);
\draw (a) -- (e);
\draw (a) -- (c);
\draw (a) -- (d);
\draw (b) -- (c);
\draw (b) -- (d);
\draw (c) -- (d);
\draw (b) -- (f);
\draw (e) -- (f);
\draw (g) -- (e);
\draw (g) -- (f);
\draw (g) -- (h);
\draw (h) -- (i);
\draw (h) -- (j);
\draw (i) -- (k);
\draw (i) -- (j);
\draw (j) -- (l);
\draw (k) -- (m);
\draw (k) -- (p);
\draw (l) -- (m);
\draw (l) -- (p);
\draw (m) -- (p);
\draw (a) [fill=black] circle (\vr);
\draw (b) [fill=white] circle (\vr);
\draw (c) [fill=black] circle (\vr);
\draw (d) [fill=white] circle (\vr);
\draw (e) [fill=black] circle (\vr);
\draw (f) [fill=white] circle (\vr);
\draw (g) [fill=white] circle (\vr);
\draw (h) [fill=white] circle (\vr);
\draw (i) [fill=black] circle (\vr);
\draw (j) [fill=white] circle (\vr);
\draw (k) [fill=black] circle (\vr);
\draw (l) [fill=white] circle (\vr);
\draw (m) [fill=black] circle (\vr);
\draw (p) [fill=white] circle (\vr);
\draw (-0.05,0.75) node {{\small $c$}};
\draw (1,-0.35) node {{\small $b$}};
\draw (1,1.8) node {{\small $a$}};
\draw (2.05,0.75) node {{\small $d$}};
\draw (3,-0.35) node {{\small $f$}};
\draw (3,1.8) node {{\small $e$}};
\draw (4,0.4) node {{\small $g$}};
\draw (5,0.4) node {{\small $h$}};
\draw (6,-0.35) node {{\small $j$}};
\draw (6,1.85) node {{\small $i$}};
\draw (8,-0.35) node {{\small $\ell$}};
\draw (8,1.85) node {{\small $k$}};
\draw (6.95,0.7) node {{\small $p$}};
\draw (9.1,0.75) node {{\small $m$}};
\end{tikzpicture}
\end{center}
\vskip -0.55cm
\caption{The graph $G$.} \label{f:Gdeter3}
\end{figure}
We show next that $F_t(G) \le F_t(G') + 3$. Let $S'$ be a minimum TF-set in $G'$, and so $F_t(G') = |S'|$. We note that every diamond-unit in $G'$ contains at least two vertices of $S'$. In particular, the diamond-unit $D$ contains at least two vertices of $S'$. We now consider the set $S$ obtained from $S'$ by removing the vertices of $D$ in $S'$ and adding the vertices in the set $\{a,c,e,i,j\}$; that is, $S = (S' \setminus V(D)) \cup \{a,c,e,i,j\}$, and so $|S| \le |S'| + 3$. The set $S$ is a TF-set of $G$: as the first five vertices played in the forcing process starting with the set $S$, we play first the vertex $a$, which forces $d$ to be colored; second the vertex $d$, which forces $b$ to be colored; third the vertex $b$, which forces $f$ to be colored; fourth the vertex $f$, which forces $g$ to be colored; and fifth the vertex $g$, which forces $h$ to be colored. Thereafter we follow the forcing process starting with the set $S'$ in $G'$ to color all remaining vertices (where we replace the vertex $a$ with the vertex $i$ if $a$ is played to color $k$ in the original forcing process in $G'$, and we replace the vertex $b$ with the vertex $j$ if $b$ is played to color $\ell$ in the original forcing process in $G'$). Hence, $F_t(G) \le |S| \le |S'| + 3 = F_t(G') + 3 < n'/2 + 3 = n/2$. This completes the proof of Theorem~\ref{thm2}.~$\Box$
\subsection{Proof of Theorem~\ref{thm3}}
\label{S:proof3}
In this section, we present a proof of Theorem~\ref{thm3}. Recall its statement.
\noindent \textbf{Theorem~\ref{thm3}}. \emph{If $G \ne K_4$ is a connected, claw-free, cubic graph of order $n$, then $F(G) \le \frac{1}{2}n$ with equality if and only if $G$ is the diamond-necklace $N_2$ or the prism $C_3 \, \Box \, K_2$.
}
\medskip
\noindent\textbf{Proof. } Let $G \ne K_4$ be a connected, claw-free, cubic graph of order $n$. By Theorem~\ref{thm2} and Observation~\ref{ob1}, $F(G) \le F_t(G) \le n/2$. Further if $F(G) = n/2$, then $F_t(G) = n/2$, implying by Theorem~\ref{thm2} that $G \in {\cal N}_{\rm cubic}$ or $G$ is the prism $C_3 \, \Box \, K_2$. Suppose that $G \in {\cal N}_{\rm cubic}$ has order~$n$, and so $n = 4k$ for some $k \ge 2$. If $n \ge 12$, then $F(G) = n/4 + 2 < n/2$, a contradiction. Hence, $n = 8$, and so, $G = N_2$. If $G = C_3 \, \Box \, K_2$, then $n = 6$ and $F(G) = 3 = n/2$.~$\Box$
\subsection{Proof of Theorem~\ref{thm4}}
\label{S:proof4}
In this section, we present a proof of Theorem~\ref{thm4}. Recall its statement.
\noindent \textbf{Theorem~\ref{thm4}}. \emph{If $G$ is a connected, claw-free, cubic graph of order $n$, then
\[
\frac{F_t(G)}{F(G)} \le 2,
\]
and this bound is asymptotically best possible.
}
\medskip
\noindent\textbf{Proof. } By Observation~\ref{ob1}, $F_t(G)/F(G) \le 2$. To show that this bound is asymptotically best possible, let $\epsilon > 0$ be an arbitrary real number and consider the graph $G = N_k \in {\cal N}_{\rm cubic}$ where $k > \frac{4}{\epsilon} - 2$. By Lemma~\ref{lem1},
\[
\frac{F_t(G)}{F(G)} = \frac{2k}{k+2} = 2 - \frac{4}{k+2} > 2 - \epsilon,
\]
implying that the ratio $F_t(G)/F(G)$ can be arbitrarily close to~$2$.~$\Box$
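The final step of the proof is a simple numerical fact that is easy to confirm; the following sketch (our own, and it assumes the values $F_t(N_k) = 2k$ and $F(N_k) = k + 2$ given by Lemma~\ref{lem1}) checks that any $k > 4/\epsilon - 2$ pushes the ratio within $\epsilon$ of $2$:

```python
eps = 0.01
k = 400                    # any integer k > 4/eps - 2 = 398 works
ratio = (2 * k) / (k + 2)  # F_t(N_k) / F(N_k), assuming Lemma 1's values
assert 2 - eps < ratio < 2  # ratio = 800/402 ~ 1.99005
```

Increasing $k$ drives the ratio arbitrarily close to (but never equal to) $2$, which is why the bound is asymptotically best possible rather than attained.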
\medskip
\title{Permutation graphs and the Abelian sandpile model, tiered trees and non-ambiguous binary trees}
% https://arxiv.org/abs/1810.02437
\begin{abstract}
A permutation graph is a graph whose edges are given by inversions of a permutation. We study the Abelian sandpile model (ASM) on such graphs. We exhibit a bijection between recurrent configurations of the ASM on permutation graphs and the tiered trees introduced by Dugan et al.~[10]. This bijection allows certain parameters of the recurrent configurations to be read on the corresponding tree. In particular, we show that the level of a recurrent configuration can be interpreted as the external activity of the corresponding tree, so that the bijection exhibited provides a new proof of a famous result linking the level polynomial of the ASM to the ubiquitous Tutte polynomial. We show that the set of minimal recurrent configurations is in bijection with the set of complete non-ambiguous binary trees introduced by Aval et al.~[2], and introduce a multi-rooted generalization of these that we show to correspond to all recurrent configurations. In the case of permutations with a single descent, we recover some results from the case of Ferrers graphs presented in~[11], while we also recover results of Perkinson et al.~[16] in the case of threshold graphs.
\end{abstract}

\section{Introduction}\label{sec:intro}
In the Abelian sandpile model (ASM) on a graph, each vertex has a number of ``grains''. If a vertex has at least as many grains as its degree, then it can be toppled, donating one grain to each of its neighbors. If a (nonempty) sequence of topplings from a configuration $c$ of grains leads to $c$ again, then $c$ is said to be recurrent.
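For concreteness, here is a minimal Python sketch of the toppling rule and of the recurrence criterion just stated (the toy graph and configuration are our own, not taken from the paper): on a triangle, the configuration giving every vertex $2$ grains returns to itself after each vertex is toppled once, so it is recurrent.

```python
def topple(grains, adj, v):
    """Topple vertex v: it loses deg(v) grains and each neighbor gains one."""
    assert grains[v] >= len(adj[v]), "v needs at least deg(v) grains to topple"
    new = dict(grains)
    new[v] -= len(adj[v])
    for u in adj[v]:
        new[u] += 1
    return new

triangle = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b']}
c = {'a': 2, 'b': 2, 'c': 2}
for v in 'abc':                       # a nonempty sequence of topplings
    c = topple(c, triangle, v)
assert c == {'a': 2, 'b': 2, 'c': 2}  # back to the start, so c is recurrent
```

Each toppling conserves the total number of grains, which is why toppling every vertex exactly once returns any configuration (on a sinkless graph) to itself whenever each intermediate step is legal.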
In this paper we study the ASM on permutation graphs. For a permutation $\pi=\pi_1\pi_2\ldots \pi_n$ this is the graph whose vertices are the integers $1,2,\ldots,n$ with an edge between $i$ and $j$ if and only if $i<j$ and $\pi_i>\pi_j$, that is, if $\pi_i$ and $\pi_j$ form an inversion in $\pi$.
This paper generalizes the results in \cite{DSSS}, where the recurrent configurations on Ferrers graphs were classified in terms of decorated EW-tableaux; indeed, Ferrers graphs are isomorphic to permutation graphs of permutations with a single descent.
We extend the bijection in \cite{DSSS} between recurrent configurations on Ferrers graphs and the intransitive trees of Postnikov~\cite{Post}, to bijectively connect recurrent configurations of permutation graphs and the tiered trees introduced by Dugan et al.~\cite{DGGS}, of which the intransitive trees are a special case.
In~\cite{ABBS}, Aval et al. introduced the so-called complete non-ambiguous binary trees (\cnabs), which arise from certain 0/1 fillings of square Ferrers diagrams. We show that the set of minimal recurrent configurations on permutation graphs is in bijection with \cnabs. We then generalize the \cnabs, which have a canonical root vertex, to a multirooted version, which we show to be in bijection with all recurrent configurations on the corresponding permutation graphs.
We also show that our results extend those of Perkinson et al. \cite{PYY}, connecting
parking functions and labeled spanning trees of threshold graphs, which are a subset of permutation graphs.
The paper is organized as follows. In Section~\ref{sec:defs} we recall necessary
definitions and provide a link between tiered trees and spanning trees of permutation graphs.
In Section~\ref{sec:main_bij} we exhibit a bijection between tiered trees and recurrent configurations of the ASM on permutation graphs. We show how the level statistic and canonical toppling of a recurrent configuration can be read from the corresponding tree, and interpret the level statistic as the external activity of the tree. This provides a new proof, in the case of permutation graphs, of the famous result linking the level polynomial of the ASM to the ubiquitous Tutte polynomial (see Proposition~\ref{pro:Tutte_level}).
In Section~\ref{sec:minrec_CNAB} we recall the definition of complete non-ambiguous binary trees introduced by Aval et al.~\cite{ABBS}, show that these are in bijection with the set of minimal recurrent configurations of the ASM and introduce a generalization that we show to correspond to all recurrent configurations.
Finally, in Section~\ref{sec:specialisations} we study two special cases of permutation graphs, namely Ferrers graphs (corresponding to permutations with a single descent) and threshold graphs, and recover results from~\cite{DSSS} and ~\cite{PYY} respectively.
\section{Definitions and Preliminaries}\label{sec:defs}
For any positive integer $n$, we let $[n]:=\{1,\ldots,n\}$ and $\clsn$ be the set of permutations of~$[n]$.
\subsection{Permutation graphs}\label{sec:perm_graphs}
To a permutation $\pi = \pi_1 \cdots \pi_n \in \clsn$, we associate a graph $G_{\pi}$ as follows. The vertex set of $G_{\pi}$ is $[n]$ and the edges are the pairs $(\pi_i,\pi_j)$ such that $i<j$ and $\pi_i>\pi_j$, that is, such that $(i,j)$ is an inversion of $\pi$. Such a graph is called a \emph{permutation graph}.
A permutation $\pi \in \clsn$ is said to be \emph{indecomposable} if there exists no positive integer $k<n$ such that $\{\pi_1,\ldots,\pi_k\} = [k]$. The following is well known, see for example \cite[Lemma~3.2]{koh-reh-conn-perm-graphs}.
\begin{fact}
A permutation graph $G_{\pi}$ is connected if and only if $\pi$ is indecomposable.
\end{fact}
Figure~\ref{fig:example_permgraph} shows the graphs associated with the permutations $\pi = 23541$ and $\pi' = 23154$. Note that $\pi'$ can be decomposed as 231--54, while $\pi$ is indecomposable. Thus the graph $G_{\pi}$ is connected, while $G_{\pi'}$ is not.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.5]
\draw (0,0)--(2,0)--(0,0)--(-1,2)--(2,0)--(1,3.2)--(2,0)--(3,2)--(2,0);
\draw [fill=white] (0,0) circle [radius=0.5];
\draw [fill=white] (-1,2) circle [radius=0.5];
\draw [fill=white] (1,3.2) circle [radius=0.5];
\draw [fill=white] (3,2) circle [radius=0.5];
\draw [fill=white] (2,0) circle [radius=0.5];
\node at (0,0) {$5$};
\node at (-1,2) {$4$};
\node at (1,3.2) {$3$};
\node at (3,2) {$2$};
\node at (2,0) {$1$};
\node at (1,-1.3) {$G_{23541}$};
\begin{scope}[shift={(8,0)}]
\draw (0,0)--(-1,2);
\draw (1,3.2)--(2,0)--(3,2)--(2,0);
\draw [fill=white] (0,0) circle [radius=0.5];
\draw [fill=white] (-1,2) circle [radius=0.5];
\draw [fill=white] (1,3.2) circle [radius=0.5];
\draw [fill=white] (3,2) circle [radius=0.5];
\draw [fill=white] (2,0) circle [radius=0.5];
\node at (0,0) {$5$};
\node at (-1,2) {$4$};
\node at (1,3.2) {$3$};
\node at (3,2) {$2$};
\node at (2,0) {$1$};
\node at (1,-1.3) {$G_{23154}$};
\end{scope}
\end{tikzpicture}
\caption{The graphs associated with the permutations $\pi = 23541$ (left) and $\pi' = 23154$ (right).\label{fig:example_permgraph}}
\end{figure}
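As a quick mechanical check of these definitions, the following Python sketch (our own illustrative code, not part of the paper) builds the edge set of $G_{\pi}$ from the inversions of $\pi$ and compares connectivity with indecomposability for the two permutations of Figure~\ref{fig:example_permgraph}:

```python
def perm_graph_edges(pi):
    """Edges of G_pi: pairs {pi[i], pi[j]} with i < j and pi[i] > pi[j]."""
    n = len(pi)
    return {frozenset((pi[i], pi[j]))
            for i in range(n) for j in range(i + 1, n) if pi[i] > pi[j]}

def is_indecomposable(pi):
    """No proper prefix of pi is a permutation of {1, ..., k}."""
    return all(set(pi[:k]) != set(range(1, k + 1)) for k in range(1, len(pi)))

def is_connected(n, edges):
    """Depth-first search from vertex 1 over the given edge set."""
    adj = {v: set() for v in range(1, n + 1)}
    for e in edges:
        u, w = tuple(e)
        adj[u].add(w)
        adj[w].add(u)
    seen, stack = {1}, [1]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n
```

For $\pi = 23541$ one obtains five edges and a connected graph, while $\pi' = 23154$ yields a disconnected graph, in accordance with the Fact above.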
Since we will be analyzing the ASM on permutation graphs, and the ASM is only defined on connected graphs, we will from now on only deal with permutation graphs of indecomposable permutations unless otherwise specified.
\subsection{Tiered trees}\label{sec:tiered_trees}
Tiered trees were introduced in~\cite{DGGS} as a generalization of the intransitive trees introduced by Postnikov~\cite{Post}, the latter of which have exactly two tiers.
\begin{definition}
A \emph{tiered tree} of size $n$ is a pair $(T,t)$ where:
\begin{itemize}
\item $T$ is a labeled tree on $[n]$.
\item $t$ is a surjective mapping from $[n]$ to $[k]$ for some $k$, such that for any edge $(i,j)$ of $T$ with $i>j$ we have $t(i) < t(j)$.
\end{itemize}
The function $t$ is called the \emph{tiering} function of the tiered tree $(T,t)$, and the integer $k$ is its number of tiers.
A tiered tree is said to be \emph{fully tiered} if its number of tiers equals its number of vertices, that is, $k=n$, or equivalently, if its tiering function is a bijection.
\end{definition}
\begin{remark}
The condition $t(i)<t(j)$ is reversed in~\cite{DGGS}. This corresponds to replacing the function $t$ with $k+1-t$. The reason we reverse this condition is to make the link between tiered trees and permutation graphs simpler.
\end{remark}
\subsection{Fully tiered trees and permutation graphs}\label{sec:tieredtrees_permgraphs}
\begin{lemma}\label{lem:tiered_trees_fully_tiered}
Let $\mathcal{T} = (T,t)$ be a tiered tree. Then there exists a fully tiered tree $\mathcal{T}' = (T,t')$.
\end{lemma}
\begin{proof}
Let $\mathcal{T}=(T,t)$ be a tiered tree. For $\ell \in [k]$, we
let $P_\ell := t^{-1}(\ell)$ be the set of vertices at tier $\ell$ in $\mathcal{T}$. By definition, the $P_\ell$ form a partition of $[n]$. We define $t':[n] \rightarrow [n]$ by
\begin{equation}\label{eqn:def_t'}
t'(i):=\left(\sum_{m=1}^{\ell-1} \vert P_m \vert\right) + \vert \{j \in P_\ell : \, j < i\} \vert+1,
\end{equation}
where $\ell$ is such that $i \in P_\ell$.
In words, the function $t'$ keeps the relative ordering of tiers, and orders vertices inside each tier in increasing order, as illustrated in Figure~\ref{fig:fully_tiered} below. We claim that $t'$ is a tiering function for the tree $T$, and that $(T,t')$ is fully tiered.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.7]
\draw (-2,2)--(0,0)--(0,2)--(0,0)--(2,4)--(2,0);
\draw[dotted] (-3,0)--(3,0);
\draw[dotted] (-3,2)--(3,2);
\draw[dotted] (-3,4)--(3,4);
\draw [fill=white] (0,0) circle [radius=0.3];
\draw [fill=white] (-2,2) circle [radius=0.3];
\draw [fill=white] (0,2) circle [radius=0.3];
\draw [fill=white] (2,4) circle [radius=0.3];
\draw [fill=white] (2,0) circle [radius=0.3];
\node at (0,0) {$5$};
\node at (-2,2) {$4$};
\node at (0,2) {$1$};
\node at (2,4) {$2$};
\node at (2,0) {$3$};
\node [left] at (-3,0) {$t(\cdot)=1$};
\node [left] at (-3,2) {$t(\cdot)=2$};
\node [left] at (-3,4) {$t(\cdot)=3$};
\node at (0,-1.3) {$\mathcal{T}$};
\begin{scope}[shift={(6,0)}]
\draw (0,1)--(0,2);
\draw [out=135,in=225] (0,0) to (0,4);
\draw [out=30,in=-30] (0,1) to (0,4);
\draw [out=45,in=-45] (0,1) to (0,3);
\draw[dotted] (-1,0)--(1,0);
\draw[dotted] (-1,1)--(1,1);
\draw[dotted] (-1,2)--(1,2);
\draw[dotted] (-1,3)--(1,3);
\draw[dotted] (-1,4)--(1,4);
\draw [fill=white] (0,0) circle [radius=0.3];
\draw [fill=white] (0,1) circle [radius=0.3];
\draw [fill=white] (0,2) circle [radius=0.3];
\draw [fill=white] (0,3) circle [radius=0.3];
\draw [fill=white] (0,4) circle [radius=0.3];
\node at (0,0) {$3$};
\node at (0,1) {$5$};
\node at (0,2) {$1$};
\node at (0,3) {$4$};
\node at (0,4) {$2$};
\node [right] at (1,0) {$t(\cdot)=1$};
\node [right] at (1,1) {$t(\cdot)=2$};
\node [right] at (1,2) {$t(\cdot)=3$};
\node [right] at (1,3) {$t(\cdot)=4$};
\node [right] at (1,4) {$t(\cdot)=5$};
\node at (0,-1.3) {$\mathcal{T}'$};
\end{scope}
\end{tikzpicture}
\caption{A tiered tree $\mathcal{T}$ (left) and a fully tiered tree $\mathcal{T}'$ (right) with the same underlying tree. The tiers are represented as levels.\label{fig:fully_tiered}}
\end{figure}
Let $(i,j)$ be an edge of $T$ with $i<j$, and let $\ell,m$ be such that $i \in P_\ell$ and $j \in P_m$. Since $t$ is a tiering function, this implies that $t(i) = \ell > m = t(j)$. Now by construction, Equation~\eqref{eqn:def_t'} implies that $t'(i) > t'(j)$, as desired.
It is clear from Equation~\eqref{eqn:def_t'} that $t'$ assigns a unique positive
number no greater than $n$ to each $i$, which implies that $t'$ is a bijection, so $(T,t')$ is fully tiered.
\end{proof}
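The construction of $t'$ in Equation~\eqref{eqn:def_t'} amounts to sorting the vertices lexicographically by (tier, label). A minimal Python sketch (our own code, not from the paper), checked against the tree of Figure~\ref{fig:fully_tiered}:

```python
def fully_tier(t):
    """Given a tiering t as a dict {vertex: tier}, return the bijective
    tiering t' of Equation (def of t'): vertices are ranked first by tier,
    then by label within each tier."""
    order = sorted(t, key=lambda i: (t[i], i))
    return {v: pos + 1 for pos, v in enumerate(order)}
```

On the left tree of Figure~\ref{fig:fully_tiered}, where $t$ sends $3,5 \mapsto 1$; $1,4 \mapsto 2$; $2 \mapsto 3$, this returns the tiering of the right tree, and one can verify that $t'$ is still a valid tiering function for the tree edges.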
Lemma~\ref{lem:tiered_trees_fully_tiered} states that every tiered tree has the same underlying tree as some fully tiered tree. From now on, we therefore only consider fully tiered trees,
and call these simply tiered trees. The following proposition establishes a link between tiered trees and permutation graphs.
\begin{prop}\label{pro:tieredtrees_permgraphs}
Let $T$ be a labeled tree on $[n]$ and $\pi \in \clsn$. Then $T$ is a spanning tree of~$G_{\pi}$ if and only if $(T,\pi^{-1})$ is a tiered tree.
\end{prop}
\begin{proof}
Suppose that $T$ is a spanning tree of $G_{\pi}$. This means that if $(i,j)$ is an edge of $T$ with $i>j$, then $i$ appears before $j$ in $\pi$, which implies $\pi^{-1}(i)<\pi^{-1}(j)$. This is exactly the condition that $(T,\pi^{-1})$ is a tiered tree. The converse follows in the same way.
\end{proof}
\subsection{The Abelian sandpile model}\label{sec:ASM}
The ASM is a dynamic process on a graph which has attracted considerable attention over the years, and remains a constant source of new and interesting research topics.
Let $G=(V,E)$ be a finite, connected, loop-free, undirected graph with vertex set $V=[n]$ for some $n$. Let $d_i = d_i(G)$ be the degree of the vertex $i$ in $G$. We will consider the sandpile model on the graph $G$ with a distinguished vertex $s \in [n]$, called the \emph{sink}, and denote this by the pair $\gas{G}{s}$.
A \emph{configuration} on $\gas{G}{s}$ is a vector $c=(c_1,\ldots ,c_n) \in \mathbb{Z}_+^n$ that assigns the number $c_i$ to vertex $i$.
We think of $c_i$ as the number of `grains of sand' at the vertex $i$.
$\Config{G}$ is the set of all configurations on $\gas{G}{s}$.
Let $\alpha_i \in \mathbb{Z}^n$ be the vector with $1$ in the $i$-th position and~$0$ elsewhere.
We say that a vertex $i$ is \emph{stable} in a configuration $c=(c_1,\ldots ,c_n)\in \Config{G}$ if $c_i < d_i$. Otherwise it is \emph{unstable}. A configuration is stable if all its non-sink vertices are stable.
Unstable vertices may \emph{topple}. We define the toppling operator $T_i$ corresponding to the toppling of an unstable vertex $i \in [n]$ in a configuration $c \in \Config{G}$ by
$$
T_i(c) := c - d_i \alpha_i + \sum_{j: \{i,j\} \in E} \alpha_j,
$$
where the sum is over all vertices adjacent to $i$. In words, when a vertex $i$ topples, it sends one grain of sand along each incident edge to its neighbors. We write $c \ra{i} c'$ to indicate that the vertex $i$ is unstable in $c$ and that $T_i(c)=c'$.
It is possible to show (see for instance \cite[Section 5.2]{Dhar}) that starting from any configuration~$c$ and toppling unstable vertices, one eventually reaches a stable
configuration $c'$. Moreover, $c'$ does not depend on the order in which unstable vertices are toppled in this sequence.
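The toppling operator $T_i$ and the stabilization process can be sketched in a few lines of Python (our own illustrative code; the graph in the test is the triangle $K_3$, which does not appear in the paper):

```python
def topple(adj, c, i):
    """Topple the unstable vertex i: it loses deg(i) grains and sends one
    grain to each of its neighbours."""
    d = len(adj[i])
    assert c[i] >= d, "only unstable vertices may topple"
    c = dict(c)
    c[i] -= d
    for j in adj[i]:
        c[j] += 1
    return c

def stabilize(adj, c, sink):
    """Repeatedly topple non-sink unstable vertices until the configuration
    is stable.  By the abelian property, the result does not depend on the
    order in which unstable vertices are chosen."""
    c = dict(c)
    while True:
        unstable = [i for i in adj if i != sink and c[i] >= len(adj[i])]
        if not unstable:
            return c
        c = topple(adj, c, unstable[0])
```

Since the sink never topples, stabilization always terminates: every toppling pushes grains towards the sink, where they accumulate.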
\begin{definition}\label{def:rec_states}
A configuration $c \in \Config{G}$ is \emph{recurrent} on $\gas{G}{s}$ if it satisfies the following three conditions:
\begin{enumerate}
\item We have $c_s=d_s$.
\item The configuration $c$ is stable, that is, $c_i < d_i$ for $i \neq s$.
\item\label{defrec3} There exists a sequence $v_1,\ldots,v_n$ with $v_1=s$ and $\{v_1,\ldots,v_n\} = [n]$ such that
$$c^0 = c \ra{v_1} c^1 \ra{v_2} \cdots \ra{v_n} c^n=c.$$
\end{enumerate}
\end{definition}
In words, the third condition states that there is an ordering of the vertices such that starting from $c$, every vertex can be toppled (exactly) once in this order. The fact that after making these topplings one returns to the configuration $c$ is guaranteed by the following argument: On every edge $(i,j)$ of $G$, toppling $i$ sends one grain from $i$ to $j$ while toppling~$j$ sends one grain from $j$ to $i$. Thus, toppling every vertex exactly once leaves the initial configuration unchanged.
Let $\Rec{s}{G}$ be the set of recurrent configurations on a graph $G$ with sink $s$.
Given $c\in\Rec{s}{G}$, define the \emph{level} of $c$ to be
$$
\level{c} := \sum\limits_{i \in [n]} c_i - |E|,
$$
where $|E|$ denotes the number of edges of $G$. From \cite[Theorem~3.5]{Lop} we have that if $G=(V,E)$ is a graph and $c \in \Rec{s}{G}$, then $0\leq\level{c}\leq |E|-|V|+1$.
The level of a recurrent configuration is thus always a non-negative integer. The \emph{level polynomial} of a graph $\gas{G}{s}$ is the generating function of the level statistic over the set of recurrent configurations on that graph:
$$
\Levelpoly{G}{s}{x} := \sum\limits_{c \in \Rec{s}{G}} x^{\level{c}}.
$$
Finally, we define the notion of \emph{canonical toppling}. Given a recurrent configuration $c \in \Rec{s}{G}$, the canonical toppling of $c$ is the ordered set partition $P=P_0,\ldots,P_k$ of $[n]$
where $P_0=\{s\}$ and for $i \geq 1$, $P_i$ is the set of (non-sink) unstable vertices resulting from the toppling of all vertices in $P_0,\ldots,P_{i-1}$. The fact that this gives a partition of $[n]$ is guaranteed by Condition~\eqref{defrec3} of Definition~\ref{def:rec_states}. For $c \in \Rec{s}{G}$, we denote by $\mathsf{CanonTop}(c)$ the canonical toppling of $c$.
\begin{example}\label{ex:rec_canontop}
Let $\pi=3421$ and $G_{3421}$ be the corresponding permutation graph, as illustrated in Figure~\ref{fig:ex_canontop}.
Fix $s=3$ to be the sink vertex (represented as a square), and consider the configuration $c=(1,2,2,1)$ (grains are represented as red dots next to their vertex).
We have $c_3=2=d_3$ and $c_i < d_i$ for $i \neq 3$ so the first two conditions of Definition~\ref{def:rec_states} are satisfied. We show that the third condition is also satisfied, and simultaneously determine the canonical toppling.
We initially topple vertex $3$ in $c^0=c$. This yields the configuration $c^1 = (2,3,0,1)$. In~$c^1$, only vertex $2$ is unstable, so we topple this, reaching $c^2=(3,0,1,2)$. Now both $1$ and $4$ are unstable.
In this case, we may topple for instance $1$ then $4$, and this will yield the initial configuration $c$. Thus, $c$ is recurrent and $\mathsf{CanonTop}(c) = \{3\},\{2\},\{1,4\}$. Finally, we can compute the level of
$c$: $\level{c} = 1+2+2+1-5 = 1$.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.5]
\def\graph{
\draw (0,0)--(2,0)--(2,2)--(0,2)--(0,0)--(2,2);
\draw [fill=white] (0,0) circle [radius=0.5];
\draw [fill=white] (0,2) circle [radius=0.5];
\draw [fill=white] (2,2) circle [radius=0.5];
\draw [thick, fill=white] (1.57,-0.43) rectangle (2.43,0.43);
\node at (0,0) {$2$};
\node at (0,2) {$4$};
\node at (2,0) {$3$};
\node at (2,2) {$1$};
}
\newcommand\rdot{\setlength\unitlength{1mm}\red{\circle*{1.5}}}
\newcommand\rdotsb{\makebox(0,0){\rdot\,\rdot}}
\newcommand\rdotsc{\makebox(0,0){\rdot\,\rdot\,\rdot}}
\newcommand{\grainsa}[2]{\node at (#1,#2) {\rdot};}
\newcommand{\grainsb}[2]{\node at (#1,#2) {\rdotsb};}
\newcommand{\grainsc}[2]{\node at (#1,#2) {\rdotsc};}
\newcommand\arrow{\node at (-1.5,1.1) {$\rightarrow$};}
\graph
\grainsa{2.13}{2.8}
\grainsb{0.13}{-0.78}
\grainsb{2.13}{-0.78}
\grainsa{0.13}{2.8}
\begin{scope}[shift={(5,0)}]{
\graph
\arrow
\grainsb{2.13}{2.8}
\grainsc{0.13}{-0.78}
\grainsa{0.13}{2.8}
};
\end{scope}
\begin{scope}[shift={(10,0)}]{
\arrow
\graph
\grainsc{2.13}{2.8}
\grainsa{2.13}{-0.78}
\grainsb{0.13}{2.8}
};
\end{scope}
\begin{scope}[shift={(15,0)}]{
\arrow
\graph
\grainsa{0.13}{-0.78}
\grainsb{2.13}{-0.78}
\grainsc{0.13}{2.8}
};
\end{scope}
\begin{scope}[shift={(20,0)}]{
\arrow
\graph
\grainsa{2.13}{2.8}
\grainsb{0.13}{-0.78}
\grainsb{2.13}{-0.78}
\grainsa{0.13}{2.8}
};
\end{scope}
\end{tikzpicture}
\caption{The permutation graph $G_{3421}$ with sink $s=3$ and the configuration $c=(1,2,2,1)$, which is shown to be recurrent by the toppling sequence 3,2,1,4. \label{fig:ex_canontop}}
\end{figure}
\end{example}
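Definition~\ref{def:rec_states} and the canonical toppling translate directly into code. The following Python sketch (our own, not from the paper) checks conditions (1)--(2), then topples level by level, returning $\mathsf{CanonTop}(c)$ exactly when $c$ is recurrent; it reproduces Example~\ref{ex:rec_canontop}.

```python
def canonical_toppling(adj, c0, s):
    """Return the ordered set partition CanonTop(c0) = P_0, P_1, ... if c0
    is recurrent on (G, s), and None otherwise."""
    deg = {i: len(adj[i]) for i in adj}
    if c0[s] != deg[s] or any(c0[i] >= deg[i] for i in adj if i != s):
        return None                      # conditions (1)-(2) fail
    c, parts, toppled, frontier = dict(c0), [], set(), {s}
    while frontier:
        parts.append(frontier)
        for i in frontier:               # topple every vertex of the level
            c[i] -= deg[i]
            for j in adj[i]:
                c[j] += 1
        toppled |= frontier
        frontier = {i for i in adj if i not in toppled and c[i] >= deg[i]}
    # condition (3): every vertex topples once, returning to c0
    return parts if toppled == set(adj) and c == c0 else None

def level(adj, c):
    """level(c) = total number of grains minus number of edges."""
    n_edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    return sum(c.values()) - n_edges
```

On $G_{3421}$ with sink $3$ and $c=(1,2,2,1)$ this returns the partition $\{3\},\{2\},\{1,4\}$ and level $1$, as computed in the example.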
\section{A bijection from trees to recurrent configurations of the ASM}\label{sec:main_bij}
\subsection{The bijection}\label{sec:main_thm}
Let $\pi \in \clsn$ and let $T$ be a spanning tree of $G=G_{\pi}$, so that, by Proposition~\ref{pro:tieredtrees_permgraphs}, $(T,\pi^{-1})$ is a tiered tree.
Let $s \in [n]$ be a distinguished vertex of $G$. We view the tree $T$ as being rooted at $s$. Given $i \in [n]$, we define the \emph{height} of $i$ in $T$ to be its distance to the root $s$, and denote it $h(i)$. If $i \neq s$, the \emph{parent} of $i$ is the next vertex encountered on the unique path from $i$ to $s$, and we denote this $p(i)$. For $k \geq 1$, we define $T^{(k)} := \{ i \in [n]: \, h(i)=k\}$ to be the set of vertices at height $k$
in $T$, with analogous definitions for $T^{(>k)}$, $T^{(\geq k)}$, etc. Finally, we let $N_G(i)$ be the set of neighbors of $i$ in the graph $G$.
For $i \in [n]$, we let:
\begin{eqnarray}
\lambda_i = \lambda_i(T) & := & \left\vert N_G(i) \cap T^{\left(>h(i)\right)} \right\vert \label{eq:def_lambda} ,\\
\mu_i = \mu_i(T) & := & \left\vert N_G(i) \cap T^{\left(h(i)\right)} \right\vert \label{eq:def_mu} ,\\
\nu_i = \nu_i(T) & := & \left\vert N_G(i) \cap T^{\left(h(i) - 1\right)} \cap [0,p(i)-1] \right\vert \label{eq:def_nu} .
\end{eqnarray}
In words, $\lambda_i$ is the number of neighbors of $i$ in $G$ at height strictly greater than that of $i$ in $T$, $\mu_i$ is the number of neighbors of $i$ in $G$ at the same height as $i$ in $T$, and $\nu_i$ is the number of neighbors of~$i$ in $G$ at height one less than that of $i$ whose labels are strictly smaller than the parent of $i$. Although it would be natural to combine $\lambda_i$ and $\mu_i$ into one number, this definition facilitates our proof of the following theorem. Note that these definitions all depend on the choice of a distinguished vertex $s$, though for lightness of notation we do not make this explicit.
\begin{theorem}\label{thm:main_result}
Let $\pi \in \clsn$ be a permutation and $s \in [n]$ a distinguished vertex of $G=G_{\pi}$. Given a spanning tree $T$ of $G$ we define a configuration $c(T)=(c_1(T),\ldots,c_n(T)) \in \Config{G}$ by
$$
c_i(T) := \lambda_i(T) + \mu_i(T) + \nu_i(T).
$$
Then the map $\phi_{TC} : T \mapsto c(T)$ is a bijection from the set of spanning trees of $G$ to $\Rec{s}{G}$.
Moreover, for any spanning tree $T$, we have $ \level{c(T)} = \sum\limits_{i=1}^n \left( \frac12 \mu_i(T) + \nu_i(T) \right) $, and
$$
\mathsf{CanonTop} \left( c(T) \right) = T^{(0)},T^{(1)},\ldots.
$$
That is, the canonical toppling of $c(T)$ is given by the breadth-first search of $T$.
\end{theorem}
Before we prove this result, let us examine one example in depth.
\begin{example}
Let $\pi = 514362$. The associated permutation graph $G=G_{\pi}$ is represented on the left of Figure~\ref{fig:ex_bij}. We take $s=3$ to be the sink. Let $T$ be the spanning tree of $G$ on the right of Figure~\ref{fig:ex_bij}. We represent $T$ as a tree rooted at the distinguished vertex $3$, and compute the corresponding configuration $c(T)$:
For $i=1$, there are no vertices at height greater than $h(1)=2$ in $T$, and neither of the other two vertices at height $2$ is a neighbor of $1$ in $G$, so that $\lambda_1=\mu_1=0$. In fact, the parent of $1$ in $T$ is its only neighbor in $G$, so that we also have $\nu_1=0$, and thus $c_1=0$. Now consider the vertex $i=2$. In $T$
there are three vertices at height greater than $h(2)=1$, which are $1,4,6$. Of these, $4$ and~$6$ are neighbors of $2$ in $G$, so that $\lambda_2=2$. Similarly, $5$ is the other vertex at height $1$ in $T$, and is a neighbor of $2$ in $G$, so that $\mu_2=1$. Finally, the parent of $2$ in $T$ is the only vertex at height $1-1=0$, so $\nu_2=0$. Thus, $c_2=2+1+0=3$.
Similarly, $c_3=3+0+0=3$. Now for $i=4$, we have $\lambda_4=0$ (there are no vertices at
a greater height in $T$) and $\mu_4=0$ (neither $1$ nor $6$ is a neighbor of $4$ in $G$). But both $2$ and $5$ are neighbors of $4$ in $G$ with height equal to $h(4)-1$ in $T$, and the parent of $4$ in $T$ is $5$, so that $\nu_4=1$, and thus $c_4=0+0+1=1$. Finally, we can see that $c_5=2+1+0=3$, and $c_6=0+0+0=0$. Thus, we have $c(T)=(0,3,3,1,3,0)$.
We check that $c(T)$ is recurrent using Definition~\ref{def:rec_states}, and also establish that the canonical toppling of $c(T)$ is given by the breadth-first search (BFS) of $T$. The vertex degree sequence of $G$ is given by $(1,4,3,3,4,1)$, and the BFS of $T$ is $3-25-146$ with dashes separating the sets of vertices at different heights. Start from the configuration $c=c(T)=(0,3,3,1,3,0)$. We have $c_3=d_3$ and $c_j<d_j$ for $j \neq 3$, as desired. Therefore we initially topple vertex $3$. This leads to the configuration $(0,4,0,2,4,0)$. In this configuration, vertices $2$ and $5$ are unstable. We topple these, which leads to the configuration $(1,1,2,4,1,1)$. In this configuration, vertices $1$, $4$ and~$6$ are unstable. We topple these, which leads back to the initial configuration $(0,3,3,1,3,0)$. Thus, by Definition~\ref{def:rec_states} the configuration $c(T)$ is recurrent, and we have moreover shown that $\mathsf{CanonTop}\left(c(T) \right) = 3-25-146$, which is exactly the BFS of $T$.
Finally, the graph $G$ has $8$ edges, so that on the one hand
$$\level{c(T)} = (0+3+3+1+3+0) - 8 = 10 - 8 =2.$$
On the other hand, we have
$$\sum\limits_{i=1}^6 \left( \frac12 \mu_i + \nu_i \right) = \frac12 (0+1+0+0+1+0) + (0+0+0+1+0+0) = 1+1 = 2,$$
which gives the desired result.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.5]
\newcommand\vertex[2]{\draw [fill=white] (#1,#2) circle [radius=0.5];}
\newcommand\rdot{\setlength\unitlength{1mm}\red{\circle*{1.5}}}
\newcommand\rdotsb{\makebox(0,0){\rdot\,\rdot}}
\newcommand\rdotsc{\makebox(0,0){\rdot\,\rdot\,\rdot}}
\newcommand{\grainsa}[2]{\node at (#1,#2) {\rdot};}
\newcommand{\grainsb}[2]{\node at (#1,#2) {\rdotsb};}
\newcommand{\grainsc}[2]{\node at (#1,#2) {\rdotsc};}
\draw (0,0)--(4,2);
\draw (-1,2)--(3,0);
\draw (-1,2)--(4,2)--(3,4)--(0,4)--(-1,2)--(3,4);
\draw (0,4)--(4,2);
\vertex{0}{0};
\vertex{-1}{2};
\vertex{0}{4};
\vertex{4}{2};
\draw [fill=white] (2.58,3.58) rectangle ++(0.84,0.84);
\vertex{3}{0};
\node at (0,0) {$6$};
\node at (-1,2) {$5$}; \grainsc{-2.2}{2}
\node at (0,4) {$4$}; \grainsa{-0.8}{4}
\node at (3,4) {$3$}; \grainsc{4.5}{4}
\node at (4,2) {$2$}; \grainsc{5.5}{2}
\node at (3,0) {$1$};
\node at (1.5,-1.8) {$G_{514362}$};
\begin{scope}[shift={(13,0)}]
\draw (0,0)--(-2,2)--(-2,4);
\draw (0,0)--(1,2)--(0,4)--(1,2)--(2,4);
\draw [fill=white] (-0.42,-0.42) rectangle (0.42,0.42);
\draw [fill=white] (-2,2) circle [radius=0.5];
\draw [fill=white] (-2,4) circle [radius=0.5];
\draw [fill=white] (1,2) circle [radius=0.5];
\draw [fill=white] (0,4) circle [radius=0.5];
\draw [fill=white] (2,4) circle [radius=0.5];
\node at (0,0) {$3$};
\node at (-2,2) {$2$};
\node at (-2,4) {$6$};
\node at (1,2) {$5$};
\node at (0,4) {$1$};
\node at (2,4) {$4$};
\node at (0,-1.3) {$T$};
\end{scope}
\end{tikzpicture}
\caption{The graph $G$ associated with the permutation $\pi = 514362$ (left) and a spanning tree $T$ of $G$ represented as rooted at the distinguished vertex~$3$ (right). The configuration on $G$ corresponding to $T$ is shown with red dots.\label{fig:ex_bij}}
\end{figure}
\end{example}
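The computation of $c(T)$ in the example above can be checked mechanically. The following Python sketch (our own code; the function and variable names are not from the paper) evaluates $\lambda_i$, $\mu_i$ and $\nu_i$ from the graph adjacency and the parent map of the rooted tree $T$:

```python
def tree_to_config(adj, parent, s):
    """phi_TC: compute c_i = lambda_i + mu_i + nu_i for a spanning tree
    given by its parent map (the sink s has no parent)."""
    def h(i):                      # height = distance to the root s
        return 0 if i == s else 1 + h(parent[i])
    c = {}
    for i in adj:
        lam = sum(1 for j in adj[i] if h(j) > h(i))
        mu = sum(1 for j in adj[i] if h(j) == h(i))
        nu = 0 if i == s else sum(1 for j in adj[i]
                                  if h(j) == h(i) - 1 and j < parent[i])
        c[i] = lam + mu + nu
    return c
```

For $\pi=514362$, sink $3$ and the tree of Figure~\ref{fig:ex_bij}, this recovers $c(T)=(0,3,3,1,3,0)$, of level $10-8=2$.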
\begin{proof}[Proof of Theorem \ref{thm:main_result}]
Let $T$ be a spanning tree of $G$, and $c:=c(T)$ the corresponding configuration. We first show that $c$ is recurrent, and that $\mathsf{CanonTop}(c) = T^{(0)},T^{(1)},\ldots$, using Definition~\ref{def:rec_states}.
\begin{enumerate}
\item The sink $s$ is the unique vertex at height $0$ in $T$, so that $\lambda_s = \vert N_G(s) \vert = d_s$, and $\mu_s = \nu_s = 0$. Thus $c_s = \lambda_s + \mu_s + \nu_s = d_s$ as desired.
\item For $i \neq s$, we see that $\lambda_i$, $\mu_i$ and $\nu_i$ all count distinct subsets of $N_G(i)$. Moreover, $p(i)$ is a neighbor of $i$ in $G$ which is counted in none of these three subsets. Thus, $c_i < \vert N_G(i) \vert = d_i$, and so the configuration $c$ is stable.
\item We now show that, starting from the configuration $c$, for any $k \geq 1$, if we topple the vertices of $T^{(0)},\ldots,T^{(k-1)}$, then the set of non-sink unstable vertices is exactly $T^{(k)}$. Combined with the above, this shows that $c$ is recurrent,
and that $\mathsf{CanonTop}(c) = T^{(0)},T^{(1)},\ldots$. Let $k \geq 1$ and let $c'$ be the configuration reached from the initial configuration $c$ after toppling the vertices of $T^{(0)},\ldots,T^{(k-1)}$. We need to show that $c'_i \geq d_i$ if $i \in T^{(k)}$, and that $c'_i < d_i$ if $i \notin T^{(k)} \cup \{s\}$.
\begin{itemize}
\item Let $i \in T^{(k)}$. We have $c'_i = c_i + \left\vert N_G(i) \cap T^{(<k)} \right\vert$, since the second term of the sum is the number of grains vertex $i$ receives through toppling $T^{(0)},\ldots,T^{(k-1)}$. Thus
\begin{align*}
c'_i & = \lambda_i + \mu_i + \nu_i + \left\vert N_G(i) \cap T^{(<k)} \right\vert \\
& = \left\vert N_G(i) \cap T^{(>k)} \right\vert + \left\vert N_G(i) \cap T^{(k)} \right\vert + \nu_i + \left\vert N_G(i) \cap T^{(<k)} \right\vert \\
& = d_i + \nu_i \geq d_i,
\end{align*}
as desired.
\item Let $i \in T^{(>k)}$. Write $ \ell = h(i) > k$. As above, we have
\begin{align*}
c'_i & = c_i + \left\vert N_G(i) \cap T^{(<k)} \right\vert \\
& = \left\vert N_G(i) \cap T^{(> \ell)} \right\vert + \left\vert N_G(i) \cap T^{(\ell)} \right\vert + \left\vert N_G(i) \cap T^{(<k)} \right\vert + \nu_i.
\end{align*}
Now $\nu_i$ counts a subset of neighbors of $i$ in $G$ which are at height $\ell-1$ in $T$, and since $p(i)$ is not counted in $\nu_i$, this is a strict subset.
Thus $c'_i < \left\vert N_G(i) \cap T^{(> \ell)} \right\vert + \left\vert N_G(i) \cap T^{(\ell)} \right\vert + \left\vert N_G(i) \cap T^{(<k)} \right\vert + \left\vert N_G(i) \cap T^{(\ell -1)} \right\vert$, and since $\ell-1 \geq k$, these four sets are disjoint subsets of $N_G(i)$, so it follows that $c'_i <d_i$ as desired.
\item Finally, let $i \in T^{(<k)}$, with $i \neq s$. The vertex $i$ has been toppled in $T^{(0)},\ldots,T^{(k-1)}$, so that $c'_i = c_i + \left\vert N_G(i) \cap T^{(<k)} \right\vert - d_i$. But we have already shown that $c$ is stable, so $c_i <d_i$, and thus $c'_i < \left\vert N_G(i) \cap T^{(<k)} \right\vert \leq \vert N_G(i) \vert = d_i$, as desired.
\end{itemize}
\end{enumerate}
This completes the first part of the proof, namely that $c$ is recurrent, and that $\mathsf{CanonTop}(c) = T^{(0)},T^{(1)},\ldots$.
We now show that $ \level{c} = \sum\limits_{i=1}^n \left( \frac12 \mu_i + \nu_i \right)$. We have
\begin{align*}
\level{c} & = \sum\limits_{i=1}^n \left( \lambda_i + \mu_i + \nu_i \right) - \vert E \vert \\
& = \sum\limits_{i=1}^n \left( \lambda_i + \frac12 \mu_i \right) - \vert E \vert + \sum\limits_{i=1}^n \left( \frac12 \mu_i + \nu_i \right).
\end{align*}
Now, the sum $\sum_{i=1}^n \lambda_i$ counts all pairs of vertices $(i,j)$ such that $j \in N_G(i)$ and $h(i) < h(j)$. Thus every edge $(i,j)$ of $G$ with $h(i) \neq h(j)$ is counted exactly once in that sum. Moreover, the sum $\sum_{i=1}^n \mu_i$ counts all pairs of vertices $(i,j)$ such that $j \in N_G(i)$ and $h(i) = h(j)$. Thus, in the sum $\sum_{i=1}^n \mu_i$, every edge $(i,j)$ of $G$ with $h(i) = h(j)$ is counted twice. Therefore we have $\sum_{i=1}^n \left( \lambda_i + \frac12 \mu_i \right) = \vert E \vert $, and thus $ \level{c} = \sum_{i=1}^n \left( \frac12 \mu_i + \nu_i \right)$, as desired.
It remains to show that $\phi_{TC}$ is a bijection. To do this, we exhibit its inverse. Let $c \in \Rec{s}{G}$, and write $\mathsf{CanonTop}(c) = P_0,P_1,\ldots$ for the canonical toppling of $c$, with $P_0 = \{s\}$. We construct a spanning tree $T = T(P)$ of $G$ from this as follows. The levels of $T$ are such that for all $j \geq 0$ we have $T^{(j)} = P_j$. To define $T$ it is then sufficient to define a parent map $p:[n] \setminus \{s\} \rightarrow [n]$ such that for any $j \geq 1$ and $i \in P_j$, we have $p(i) \in N_G(i) \cap P_{j-1}$. That this intersection is nonempty follows from the definition of the canonical toppling, since for $i$ to topple in $P_j$ it must have received some grains through the toppling of $P_{j-1}$.
Fix some $j \geq 1$ and $i \in P_j$. The definition of the canonical toppling implies the following property: Starting from $c$, the vertex $i$ is stable after toppling the vertices from $P_0,\ldots,P_{j-2}$, and becomes unstable after toppling those of $P_{j-1}$. For $k \geq 0$, let $N_G^{(<k)}(i)$ be the set of neighbors of $i$ in $G$ which are in $P_0 \cup \cdots \cup P_{k-1}$. The previous property can then be summarized in the following two inequalities:
$$ c_i + \left\vert N_G^{(<j-1)}(i) \right\vert < d_i, $$
$$ c_i + \left\vert N_G^{(<j)}(i) \right\vert \geq d_i. $$
Letting $r_i := c_i + \left\vert N_G^{(<j)}(i) \right\vert - d_i$, this is equivalent to
$$ 0 \leq r_i < \left\vert N_G(i) \cap P_{j-1} \right\vert.$$
We then define $p(i)$ to be the $(r_i+1)$-th smallest element of $ N_G(i) \cap P_{j-1} $,
and let $T=\phi_{CT}(c)$ be the spanning tree of $G$ resulting from this construction. We now show that $\phi_{CT}$ is the inverse of $\phi_{TC}$.
First, let $T$ be a spanning tree of $G$ and set $T':= \phi_{CT}(\phi_{TC}(T))$. By construction,
we have $T^{(k)} = T'^{(k)}$ for all $k \geq 0$, so we only need to show that for any $i \in [n] \setminus \{s\}$, we have $p^T(i) = p^{T'}(i)$. Set $c:=\phi_{TC}(T)$ and let $i \in T^{(j)} \left( = T'^{(j)} \right) $ for some $ j \geq 1$. By definition, $p^{T'}(i)$ is the $(r_i+1)$-th smallest element of $ N_G(i) \cap T^{(j-1)} $, where $r_i := c_i + \left\vert N_G^{(<j)}(i) \right\vert - d_i = c_i - \left\vert N_G(i) \cap T^{( \geq j)} \right\vert$. But by definition of $\phi_{TC}$, this means that $r_i = \nu_i(T) = \left\vert N_G(i) \cap T^{(j-1)} \cap [0,p(i) - 1] \right\vert$, which is exactly the number of elements of $N_G(i) \cap T^{(j-1)}$ that are smaller than $p^T(i)$. Thus $p^T(i)$ is also the $(r_i+1)$-th smallest element of $ N_G(i) \cap T^{(j-1)} $, so that $p^T(i) = p^{T'}(i)$ as desired. This shows that $\phi_{CT}(\phi_{TC}(T)) = T$ for any spanning tree $T$. Since it is well known that the number of recurrent configurations for the ASM on a graph $G$ is equal to the number of spanning trees of $G$ (see for instance~\cite[Section 3.2]{Red}), this is sufficient to conclude that $\phi_{TC}$ is a bijection, with $\phi_{CT}$ its inverse.
\end{proof}
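The inverse map $\phi_{CT}$ is equally easy to implement. The sketch below (our own code) computes the canonical toppling and then selects each parent as the $(r_i+1)$-th smallest element of $N_G(i) \cap P_{j-1}$, the choice consistent with $\nu_i$ counting neighbors smaller than $p(i)$; on the running example $\pi = 514362$ with $c=(0,3,3,1,3,0)$ it recovers the tree $T$ of Figure~\ref{fig:ex_bij}.

```python
def config_to_parents(adj, c, s):
    """phi_CT: recover the parent map of the spanning tree from a recurrent
    configuration c on (G, s)."""
    deg = {i: len(adj[i]) for i in adj}
    # canonical toppling: topple level by level starting from the sink
    cc, parts, toppled = dict(c), [{s}], {s}
    while True:
        for i in parts[-1]:
            cc[i] -= deg[i]
            for j in adj[i]:
                cc[j] += 1
        nxt = {i for i in adj if i not in toppled and cc[i] >= deg[i]}
        if not nxt:
            break
        parts.append(nxt)
        toppled |= nxt
    parent, earlier = {}, set(parts[0])   # earlier = P_0 u ... u P_{j-1}
    for j in range(1, len(parts)):
        for i in parts[j]:
            r = c[i] + sum(1 for w in adj[i] if w in earlier) - deg[i]
            candidates = sorted(adj[i] & parts[j - 1])
            parent[i] = candidates[r]     # the (r+1)-th smallest element
        earlier |= parts[j]
    return parent
```
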
\begin{remark}\label{rem:bij_tiered_trees_rec_configs}
Theorem~\ref{thm:main_result}, combined with Proposition~\ref{pro:tieredtrees_permgraphs}, provides a bijection between the set of (fully) tiered trees on $[n]$ and the (disjoint) union of the sets of recurrent configurations for the ASM over all (connected) permutation graphs on $n$ vertices. In particular, we have that the number of (fully) tiered trees on $[n]$ is given by the sum $ \sum_{\pi} \vert \Rec{s}{G_{\pi}} \vert$, where the sum is over all indecomposable permutations of length $n$, and $s$ is some fixed (but arbitrary) sink in $[n]$.
\end{remark}
\subsection{A Tutte-descriptive activity}\label{sec:Tutte}
Let $\pi \in \clsn$ be a permutation, $G=G_{\pi}$ its permutation graph, and $s \in [n]$ a distinguished vertex of $G$. Given a spanning tree $T$, we interpret the level statistic of the corresponding recurrent configuration $c(T) \in \Rec{s}{G}$ as the external
activity of the spanning tree $T$.
\begin{definition}\label{def:activity}
Let $G=(V,E)$ be a graph, $T$ a spanning tree of $G$, and $\prec$ a total order on the edges $E$ of $G$. An edge $e \notin T$ is said to be \emph{externally active} if it is the maximal edge for $\prec$ in the unique cycle contained in $T \cup \{e\}$. An edge $e \in T$ is said to be \emph{internally active} if it is the maximal edge for $\prec$ in the unique \emph{cocycle} contained in $T \setminus \{e\}$, that is, in the set of edges connecting the two connected components of $T \setminus \{e\}$. The external, resp. internal, activity of $T$ is its number of externally, resp. internally, active edges, and is denoted by $\mathsf{ext}(T)$, resp. $\mathsf{int}(T)$.
\end{definition}
In light of this, Theorem~\ref{thm:main_result} can be interpreted as a bijection between recurrent configurations and spanning trees of a graph, mapping the level of a configuration to the external activity of the corresponding tree. This bijection is different from those already existing in the literature, such as~\cite{Ber,CLB}.
Recall that the Tutte polynomial of a (connected) graph $G=(V,E)$ is defined by
$$
\tut_G(x,y) := \sum\limits_{S \subseteq E} (x-1)^{\mathrm{cc}(S) -1} (y-1)^{\mathrm{cc}(S) + \vert S \vert - \vert V \vert},
$$
where for $S \subseteq E$, $\mathrm{cc}(S)$ denotes the number of connected components of the subgraph $(V,S)$. The level and Tutte polynomial of a graph are related by the following well-known result.
\begin{prop}\label{pro:Tutte_level}
Let $\gas{G}{s}$ be a graph. Then we have
$\Levelpoly{G}{s}{x} = \tut_G(1,x).$ In particular, the level polynomial is independent of the choice of sink.
\end{prop}
This result was initially proved by L\'opez \cite{Lop}, following a conjecture by Biggs.
Subsequent combinatorial (bijective) proofs have been given, for instance, by Cori and Le Borgne~\cite{CLB}, and Bernardi~\cite{Ber}. The aim of the remainder of this section is to show that Theorem~\ref{thm:main_result} gives a new bijective proof in the case of permutation graphs.
Let $G=G_{\pi}$ be a permutation graph with sink $s$. We first show that for a spanning tree $T$ of $G$, we can construct an order $\prec_T$ on the edges of $G$ such that $\level{\phi_{TC}(T)} = \mathsf{ext}(T)$. We then show that the order map $T \mapsto \prec_T$ is \emph{Tutte-descriptive} in the sense introduced by Courtiel in \cite{Cour}.
Let $T$ be a spanning tree of $G$. As usual, we root $T$ at $s$. The following algorithm defines a total order $\prec_T$ on $E$.
\begin{algorithm}\label{algo:<_T}
\begin{enumerate}
\item Initially, set $k=0$ and all vertices as unvisited.
\item Let $v$ be the largest unvisited vertex at height $k$ in $T$. If no such vertex exists, increase $k$ by $1$ and repeat this step.
\item Let $S$ be the set of edges $(v,w)$ of $G$ such that $w$ is unvisited. Order elements of $S$ by $(v,w) \prec_T (v,w')$ if $w>w'$, and such that all edges in $S$ are greater (in $\prec_T$) than all previously ordered edges.
\item Mark $v$ as visited. If all edges of $G$ have been ordered then terminate, otherwise return to Step (2).
\end{enumerate}
\end{algorithm}
This order $\prec_T$ is similar to that introduced by Gessel and Sagan in \cite{GS}, though where theirs is based on a depth-first search of $T$, ours is based on a breadth-first search, since Algorithm~\ref{algo:<_T} visits vertices level by level.
\begin{example}\label{ex:<_T}
Let $\pi = 514362$ so that $G_{\pi}$ is the graph on the left in Figure~\ref{fig:ex_bij}, and consider the spanning tree $T$ on the right in that figure.
We initially set $k=0$ and $v=3$ which is the only vertex at height $0$ in $T$. Proceeding to Step~(3), we have $S = \{(3,2),(3,4),(3,5)\}$. We order these $(3,5) \prec (3,4) \prec (3,2)$. We then mark $3$ as visited, and return to Step~(2). Since there are no unvisited vertices left at height $0$, we move to height $1$.
We set $v=5$, which is the largest vertex at height $1$ (neither vertex has been visited yet). Now $S=\{(5,1),(5,2),(5,4)\}$ since $3$ has already been visited, and we order these $(5,4) \prec (5,2) \prec (5,1)$, with $(3,2) \prec (5,4)$. We then mark $5$ as visited, return to Step~(2), and set $v=2$. We have $S=\{ (2,4),(2,6) \}$, which we order $(2,6) \prec (2,4)$, with $(5,1) \prec (2,6)$. We then mark $2$ as visited, and we now see that all edges have been ordered, so the algorithm terminates, and yields the order $(3,5) \prec (3,4) \prec (3,2) \prec (5,4) \prec (5,2) \prec (5,1) \prec (2,6) \prec (2,4)$.
\end{example}
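The run of Algorithm~\ref{algo:<_T} in Example~\ref{ex:<_T} can be reproduced mechanically: only the edge set of $G$ and the heights of the vertices in $T$ are needed. The following sketch (our illustration; the function name is ours) visits vertices by increasing height, largest label first, and at each vertex orders the edges to unvisited neighbours by decreasing label.

```python
def edge_order(n, edges, height):
    """Return the order prec_T on the edges of G, given the heights of the
    vertices in the spanning tree T (a sketch of Algorithm algo:<_T)."""
    adj = {v: set() for v in range(1, n + 1)}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    order, visited = [], set()
    # Step (2): visit by increasing height, decreasing label within a height
    for v in sorted(range(1, n + 1), key=lambda v: (height[v], -v)):
        # Step (3): edges to unvisited neighbours, larger neighbour first
        order.extend((v, w) for w in sorted(adj[v] - visited, reverse=True))
        visited.add(v)
        if len(order) == len(edges):  # Step (4): stop once all edges ordered
            break
    return order

# pi = 514362: the edges of G_pi are its inversions; heights as in the example
edges = [(1, 5), (2, 3), (2, 4), (2, 5), (2, 6), (3, 4), (3, 5), (4, 5)]
height = {3: 0, 5: 1, 2: 1, 4: 2, 1: 2, 6: 2}
```

Running `edge_order(6, edges, height)` recovers the order $(3,5) \prec (3,4) \prec (3,2) \prec (5,4) \prec (5,2) \prec (5,1) \prec (2,6) \prec (2,4)$ of Example~\ref{ex:<_T}.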
\begin{theorem}\label{thm:ext_activity_level}
Let $G=G_{\pi}$ be a permutation graph with sink $s$. Then for any spanning tree $T$ of $G$, we have
$$ \mathsf{ext}(T) = \level{\phi_{TC}(T)}, $$
where $\mathsf{ext}(T)$ is the number of externally active edges for the order $\prec_T$ defined by Algorithm~\ref{algo:<_T}.
\end{theorem}
To prove this result, we need two lemmas.
\begin{lemma}\label{lem:level}
Let $G=G_{\pi}$ be a permutation graph with sink $s$, and $T$ a spanning tree of $G$. Then
\begin{align*}
\level{\phi_{TC}(T)} & = \left\vert \{ (i,j) \in E(G) : \, h(i) = h(j) \} \right\vert \\
& + \left\vert \{ (i,j) \in E(G) : \, h(i) = h(j) -1 \text{ and } i<p(j) \} \right\vert.
\end{align*}
\end{lemma}
\begin{proof}
From Theorem~\ref{thm:main_result}, we have that $ \level{\phi_{TC}(T)} = \sum\limits_{i=1}^n \left( \frac12 \mu_i + \nu_i \right)$. Moreover, we saw in the proof of that result that $\sum\limits_{i=1}^n \frac12 \mu_i$ counts edges $(i,j)$ of $G$ such that $h(i) = h(j)$, that is, the first term of the right-hand side of Lemma~\ref{lem:level}, while it is clear that $\sum\limits_{i=1}^n \nu_i$ counts the second term, so the result follows immediately.
\end{proof}
\begin{lemma}\label{lem:ext_active_edges}
Let $G=G_{\pi}$ be a permutation graph with sink $s$, $T$ a spanning tree of $G$, and~$\prec_T$ the order on the edges of $G$ given by Algorithm~\ref{algo:<_T}. Suppose that an edge $e=(i,j)$ is externally active for $\prec_T$. Then we have $ \vert h(i) - h(j) \vert \leq 1$.
\end{lemma}
\begin{proof}
Suppose, without loss of generality, that $e=(i,j)$ with $h(i) \geq h(j) + 2$. In particular, we have $h(i) > h(p(i)) > h(j)$. By the construction in Algorithm~\ref{algo:<_T}, we therefore have $(i,j) \prec_T (i,p(i))$, and since the unique cycle of $T \cup \{e\}$ contains the edge $(i,p(i))$ this implies that $e$ is not externally active, which completes the proof.
\end{proof}
We now prove Theorem~\ref{thm:ext_activity_level}.
\begin{proof}[Proof of Theorem~\ref{thm:ext_activity_level}]
Let $e=(i,j)$ be an edge of $G \setminus T$, with $h(i) \leq h(j)$. By Lemma~\ref{lem:level}, it is sufficient to show that $e$ is externally active if and only if $h(i) = h(j)$ or $h(i) = h(j) - 1$ and $i < p(j)$.
First suppose that $e$ is externally active. Lemma~\ref{lem:ext_active_edges} implies that we have $h(i) = h(j)$ or $h(i) = h(j) - 1$. If $h(i) = h(j)$ there is nothing to do. If $h(i) = h(j) - 1$, we need to show that $i < p(j)$. But if $i > p(j)$ (we cannot have $i = p(j)$ since $(i,j)$ is not an edge of $T$) the construction in Algorithm~\ref{algo:<_T} implies that $(i,j) \prec_T (p(j),j)$. Since the edge $(p(j),j)$ is contained in the unique cycle of $T \cup \{(i,j)\}$, this means that $e$ is not externally active, which is a contradiction. Hence we must have $i < p(j)$, as desired.
Conversely, suppose that $h(i) = h(j)$, or that $h(i) = h(j) - 1$ and $i < p(j)$. Note that the unique cycle of $T \cup \{(i,j)\}$ is formed of the union of the paths $i\leftrightarrow i \wedge j$ and $j \leftrightarrow i \wedge j$ and of the edge $e$, where $i \wedge j$ is the greatest common ancestor of $i$ and $j$ in the tree $T$. In either case, all vertices of those paths other than $i$ and $j$ are visited before $i$ and $j$ in the construction of Algorithm~\ref{algo:<_T}: this is immediate for vertices of smaller height, and when $h(i) = h(j) - 1$ the condition $i < p(j)$ ensures that $p(j)$, which has the same height as $i$, is visited before $i$. This implies that all edges of the paths $i\leftrightarrow i \wedge j$ and $j \leftrightarrow i \wedge j$ are ordered in $\prec_T$ before $(i,j)$, and thus that edge is externally active by definition. This completes the proof.
\end{proof}
Theorem~\ref{thm:ext_activity_level} states that the level of the configuration corresponding to a spanning tree $T$ via Theorem~\ref{thm:main_result} can be interpreted as the external activity of $T$ for a specific order $\prec_T$ of the edges of $G$. We now show that this order is Tutte-descriptive in the sense introduced by Courtiel \cite{Cour}.
\begin{definition}\label{def:Tutte_descriptive}
Let $G = (V,E)$ be a graph, and suppose we have a mapping $ \Psi: T \mapsto \prec_T$ from the set of spanning trees of $G$ to the set of total orders on $E$. We say that the mapping~$\Psi$ is \emph{Tutte-descriptive} if
$$
\tut_G(x,y) = \sum\limits_T x^{\mathsf{int}(T)} y^{\mathsf{ext}(T)},
$$
where the sum is over all spanning trees of $G$, and $\mathsf{int}(T)$, resp. $\mathsf{ext}(T)$, is the number of internally, resp. externally, active edges for the order $\prec_T$.
\end{definition}
\begin{remark}\label{rem:Tutte_descriptive}
In fact, Courtiel in \cite{Cour} introduces a more general notion of Tutte-descriptive activity. Our notion above corresponds to what he calls \emph{tree-compatible} order maps.
\end{remark}
\begin{theorem}\label{thm:Tutte_descriptive}
Let $G = G_{\pi}$ be a permutation graph, with sink $s$. Then the mapping $T \mapsto \prec_T$, where $\prec_T$ is the order defined by Algorithm~\ref{algo:<_T}, is Tutte-descriptive.
\end{theorem}
\begin{proof}
This follows from~\cite[Theorem 5.3]{Cour} in analogous fashion to the proof of~\cite[Proposition 7.9]{Cour}, with the slight adjustments necessary to take into account that Algorithm~\ref{algo:<_T} provides an order map based on a breadth-first, rather than depth-first, search.
\end{proof}
Combining Theorems~\ref{thm:ext_activity_level} and \ref{thm:Tutte_descriptive} gives a new combinatorial proof of the link between the level polynomial and the Tutte polynomial in Proposition~\ref{pro:Tutte_level} in the case of permutation graphs.
\section{Minimal recurrent configurations and complete non-ambiguous binary trees}\label{sec:minrec_CNAB}
\subsection{Minimal recurrent configurations}\label{sec:minrec}
Given a graph $G$ and a distinguished vertex $s$ of $G$, a configuration $c\in\Config{G}$ is minimal recurrent if it is recurrent and $\level{c}=0$. We denote by $\MinRec{s}{G}$ the set of minimal recurrent configurations for the ASM on $G$. We show that on permutation graphs, minimal recurrent configurations are uniquely determined by their canonical toppling.
\begin{definition}\label{def:comp_partition_permutation}
Given a permutation $\pi \in \clsn$ and a distinguished vertex $s \in [n]$, we say that an ordered set partition $P=P_0,\ldots,P_k$ of $[n]$ is \emph{$(\pi,s)$-compatible} if it satisfies the following three conditions:
\begin{enumerate}
\item\label{defcpp1} $P_0 = \{s\}$.
\item\label{defcpp2} For any $j \geq 0$, the elements of $P_j$ appear in increasing order in $\pi$ (that is, there is no inversion in $\pi$ between two elements of $P_j$).
\item\label{defcpp3} For any $j \geq 1$ and $i \in P_j$, there exists $i' \in P_{j-1}$ such that $(i,i')$ or $(i',i)$ is an inversion of $\pi$.
\end{enumerate}
\end{definition}
\begin{example}\label{ex:comp_partition_permutation}
Let $\pi = 25341$ and $s=3$. We wish to compute the set of $(\pi,s)$-compatible ordered partitions of $[5]$. We always have $P_0 = \{3\}$. From Condition~\eqref{defcpp3}, $P_1$ must be formed of elements $i$ such that $(i,3)$ or $(3,i)$ is an inversion of $\pi$. There are two such elements: $5$ and $1$. However, $(5,1)$ is an inversion of $\pi$, so $P_1$ cannot contain both these elements by Condition~\eqref{defcpp2}. Thus we must have $P_1 = \{1\}$ or $P_1 = \{5\}$. We now remark that the only element which forms an inversion with $2$ is $1$, so that, by Condition~\eqref{defcpp3}, $2$ must be in the part immediately after that containing $1$. Moreover, the part containing $2$ must either contain another element, or be the final part of $P$.
Suppose that $P_1=\{1\}$. By the preceding argument, $P_2$ must contain $2$ and at least one other element which forms an inversion with $1$. There are two remaining elements which do this: $4$ and $5$. Since $(5,4)$ is an inversion, $P_2$ cannot contain both of these, so we must have $P_2 = \{2,4\}$ and $P_3 = \{5\}$, or $P_2 = \{2,5\}$ and $P_3 = \{4\}$. Suppose now that $P_1=\{5\}$. By similar arguments, we must have $P_2 = \{1\}$ or $P_2 = \{4\}$. Using the argument from the previous paragraph, if $P_2 = \{1\}$, then we must have $P_3=\{2,4\}$, and if $P_2 = \{4\}$, then we must have $P_3=\{1\}$ and $P_4=\{2\}$. Finally, we see that there are four $(\pi,s)$-compatible ordered partitions, which we write as blocks separated by dashes, for clarity:
\def\bb{\raise.6ex\hbox{\rule{.6em}{.1ex}}}
$$
3\bb1\bb24\bb5,\;\;\;\; 3\bb1\bb25\bb4,\;\;\;\; 3\bb5\bb1\bb24,\;\;\;\; 3\bb5\bb4\bb1\bb2.
$$
\end{example}
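Definition~\ref{def:comp_partition_permutation} is easy to check mechanically. The sketch below (our illustration; the function names and encoding are ours) enumerates all ordered set partitions of $[n]$ and filters by Conditions~\eqref{defcpp1}--\eqref{defcpp3}; for $\pi = 25341$ and $s = 3$ it recovers exactly the four partitions of Example~\ref{ex:comp_partition_permutation}.

```python
from itertools import combinations

def inversions(pi):
    """Edges of the permutation graph G_pi: pairs (a, b) with a < b
    and b appearing before a in pi."""
    pos = {v: k for k, v in enumerate(pi)}
    n = len(pi)
    return {(a, b) for a in range(1, n + 1) for b in range(a + 1, n + 1)
            if pos[b] < pos[a]}

def ordered_partitions(elems):
    """All ordered set partitions of elems, as tuples of frozensets."""
    elems = frozenset(elems)
    if not elems:
        yield ()
        return
    items = sorted(elems)
    for r in range(1, len(items) + 1):          # size of the first part
        for first in combinations(items, r):
            part = frozenset(first)
            for tail in ordered_partitions(elems - part):
                yield (part,) + tail

def is_compatible(pi, s, parts):
    """Check Conditions (1)-(3) of a (pi, s)-compatible ordered partition."""
    edges = inversions(pi)
    adjacent = lambda a, b: (min(a, b), max(a, b)) in edges
    if parts[0] != frozenset({s}):              # Condition (1)
        return False
    for j, part in enumerate(parts):
        if any(adjacent(a, b) for a in part for b in part if a < b):
            return False                        # Condition (2): no inversion inside a part
        if j >= 1 and not all(any(adjacent(i, w) for w in parts[j - 1])
                              for i in part):
            return False                        # Condition (3): neighbour in previous part
    return True

pi, s = (2, 5, 3, 4, 1), 3
compatible = {p for p in ordered_partitions(range(1, 6)) if is_compatible(pi, s, p)}
```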
\begin{prop}\label{pro:minrec_orderedpartitions}
Let $\pi \in \clsn$ and $s \in [n]$. The map $\phi_{CP} : c \mapsto \mathsf{CanonTop}(c)$ is a bijection from the set $\MinRec{s}{G_{\pi}}$ of minimal recurrent configurations on the permutation graph $G_{\pi}$ to the set of $(\pi,s)$-compatible ordered partitions of $[n]$.
\end{prop}
\begin{proof}
Let $c \in \MinRec{s}{G_{\pi}}$, and define $P := \mathsf{CanonTop}(c) = P_0,\ldots,P_k$. By definition, $P$ is an ordered partition of $[n]$ and $P_0 = \{s\}$.
Let $T:=\phi_{TC}^{-1}(c)$ be the spanning tree of $G=G_{\pi}$ corresponding to $c$ via the inverse of the bijection in Theorem~\ref{thm:main_result}, and for any $i \in [n]$, let $\lambda_i(T),\mu_i(T),\nu_i(T)$ be defined as in Equations~\eqref{eq:def_lambda},~\eqref{eq:def_mu},~\eqref{eq:def_nu} in Section~\ref{sec:main_thm}.
By Theorem~\ref{thm:main_result}, we have $\mathsf{CanonTop}(c) = T^{(1)},\ldots,T^{(k)}$, that is, $P_j = T^{(j)}$ for all $j \in [k]$.
Moreover, since $c$ is minimal, we have $\level{c} = 0$, which in particular implies $\mu_i(T) = 0$ for all $i \in [n]$. Thus for any $j \in [k]$ there are no edges in $G$ between any two vertices of $P_j$. This implies that the elements of $P_j$ appear in increasing order in $\pi$, so Condition~\eqref{defcpp2} of Definition~\ref{def:comp_partition_permutation} is satisfied.
Now let $j \geq 1$ and $i \in T^{(j)}=P_j$. Let $i' = p(i)$ be the parent of $i$ in $T$, so that $i' \in T^{(j-1)} = P_{j-1}$. Since $(i',i)$ is an edge of $T$ it is also an edge of $G$, which means that $(i',i)$ or $(i,i')$ is an inversion of $\pi$, as desired. We have thus shown that if $c \in \MinRec{s}{G_{\pi}}$, then $\mathsf{CanonTop}(c)$ is a $(\pi,s)$-compatible ordered partition of $[n]$. This shows that the map of Proposition~\ref{pro:minrec_orderedpartitions} is well defined.
To show that it is a bijection, we define its inverse. Suppose that $P = P_0,\ldots,P_k$ is a $(\pi,s)$-compatible ordered partition of $[n]$. We first construct a spanning tree $T=T(P)$ of $G$. The tree $T$ will be rooted at $s$ so that for all $j \in [k]$ we have $T^{(j)} = P_j$. To define $T$ it is thus sufficient to define the parent map $p$. For $j \geq 1$, and $i \in P_j$, we define
\begin{equation}\label{eq:def_parent}
p(i) := \min \left( N^G(i) \cap P_{j-1} \right).
\end{equation}
By Condition~\eqref{defcpp3} of Definition~\ref{def:comp_partition_permutation}, $p(i)$ is well defined, that is, $N^G(i) \cap P_{j-1} \neq \emptyset$.
We now define $c=\phi_{PC} (P) := \phi_{TC}(T(P))$, where $\phi_{TC}$ is the bijection of Theorem~\ref{thm:main_result}, and $T(P)$ is the tree defined above. We show that $c$ is minimal. Let $i \in [n]$. By Condition~\eqref{defcpp2} of Definition~\ref{def:comp_partition_permutation}, we have $\mu_i(T) = 0$ since there are no edges in $G$ between any two vertices of $T^{\left( h(i) \right)} = P_{h(i)}$. Moreover, Equation~\eqref{eq:def_parent} implies that $\nu_i(T)=0$. Since these are true for any $i \in [n]$ it follows from Theorem~\ref{thm:main_result} that $\level{c}=0$, as desired.
Finally, it is straightforward to show that the maps $\phi_{PC}$ and $\phi_{CP}$ are inverses of each other, which completes the proof.
\end{proof}
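Equation~\eqref{eq:def_parent} makes the inverse map concrete. As an illustration (ours, with hypothetical function names), take $\pi = 25341$, $s = 3$ and the compatible partition $3\,|\,1\,|\,24\,|\,5$ from Example~\ref{ex:comp_partition_permutation}; the parent map of the tree $T(P)$ is computed as follows.

```python
def parent_map(pi, parts):
    """Parent map of the tree T(P): p(i) = min(N(i) ∩ P_{j-1}) for i in P_j."""
    pos = {v: k for k, v in enumerate(pi)}
    def adjacent(a, b):
        lo, hi = min(a, b), max(a, b)
        return pos[hi] < pos[lo]  # (lo, hi) is an inversion of pi
    p = {}
    for j in range(1, len(parts)):
        for i in parts[j]:
            # Equation (eq:def_parent): smallest neighbour in the previous part
            p[i] = min(w for w in parts[j - 1] if adjacent(i, w))
    return p

pi = (2, 5, 3, 4, 1)
parts = [{3}, {1}, {2, 4}, {5}]
print(parent_map(pi, parts))
```

Here $p(1)=3$, $p(2)=p(4)=1$, and $p(5)=4$ (since $N^G(5)=\{1,3,4\}$ meets $P_2=\{2,4\}$ only in $4$).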
\subsection{Complete non-ambiguous binary trees}\label{sec:CNAB}
Non-ambiguous binary trees were introduced and studied in~\cite{ABBS} as a special case of the tree-like tableaux from~\cite{ABN}.
\begin{definition}\label{def:NAB}
A non-ambiguous binary tree (\nab) is a filling of a rectangular Ferrers diagram $F$ where every cell is either empty or dotted such that:
\begin{enumerate}
\item\label{nab:1} Every row and every column has a dotted cell.
\item\label{nab:2} Except for the top left cell, every dotted cell has either a dotted
cell above it in its column or to its left in its row, but not both.
\end{enumerate}
The dot in the top left cell (implied by \eqref{nab:1} and \eqref{nab:2}) is called the \emph{{root dot}}, or simply the \emph{root}.
\end{definition}
Through the remainder of this section, when we talk about a dot to the left/right of
another dot we mean in the \emph{same row}, and similarly in the \emph{same column} for above/below.
The name \emph{non-ambiguous binary tree} comes from the fact that by drawing an edge between a dotted cell and the dotted cell immediately above it or to its left, for all dotted cells, one creates a binary tree, embedded in the grid $\mathbb{Z}^2$. Regarding the dot in the top left cell as a root of the tree, Condition~\eqref{nab:2} of Definition~\ref{def:NAB} ensures that every other dot has a unique parent.
A \nab\ is \emph{complete} if the associated binary tree is complete, that is, every dotted cell has either a dotted cell below it and to its right, or neither of these, and we refer to such complete \nabs\ as \cnabs. Figure~\ref{fig:ex_NAB} shows two examples of \nabs, with the edges of the associated binary tree drawn in. The left-hand one is complete, while the right-hand one is not.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.35]
\newcommand\dott[2]{\draw [fill=black] (#1,#2) circle [radius=0.25];}
\begin{scope}[shift={(-1,-4)}]
\draw[step=2cm,thick] (0,0) grid (10,10);
\end{scope}
\begin{scope}[shift={(-2,-7)}]
\draw (2,6)--(2,12);
\draw (2,12)--(10,12);
\draw (2,10)--(8,10);
\draw (4,8)--(4,12);
\draw (6,4)--(6,12);
\dott{2}{12}
\dott{4}{12}
\dott{6}{12}
\dott{10}{12}
\dott{2}{10}
\dott{8}{10}
\dott{4}{8}
\dott{6}{4}
\dott{2}{6}
\end{scope}
\begin{scope}[shift={(13,-3)}]
\draw[step=2cm,thick] (0,0) grid (10,8);
\draw (3,1)--(3,5)--(7,5)--(1,5)--(1,7)--(9,7)--(5,7)--(5,3);
\draw [fill=black] (3,1) circle [radius=0.25];
\draw [red,fill=red] (1,5) circle [radius=0.25];
\draw [fill=black] (3,5) circle [radius=0.25];
\draw [fill=black] (1,7) circle [radius=0.25];
\draw [fill=black] (5,3) circle [radius=0.25];
\draw [fill=black] (5,7) circle [radius=0.25];
\draw [fill=black] (7,5) circle [radius=0.25];
\draw [fill=black] (9,7) circle [radius=0.25];
\end{scope}
\end{tikzpicture}
\caption{Two examples of \nabs. The left-hand one is complete, and thus a \cnab, while the right-hand one is not, since the red vertex in column 1, row 3, has only one child. \label{fig:ex_NAB}}
\end{figure}
\begin{lemma}
A \nab\ on a Ferrers diagram $F$ has exactly $n$ dots, where $n$ is one less
than the semi-perimeter of $F$. If a \nab\ is complete then $F$ has the same number of rows as columns.
\end{lemma}
\begin{proof}
Each non-root dot has a dot above it or to its left, but not both. If such a dot has no dot above it, move it to the top row. Otherwise it has no dot to its left, in which case move it to the leftmost column. This moves every dot either to the top row or leftmost column. (We regard dots in the top row and leftmost column as being moved, although they stay put.) Every column has a dot that will be moved up, namely the column's topmost dot, and every row's leftmost dot moves to the leftmost column. This process will therefore leave dots in the entire top row and leftmost column, but nowhere else, which proves the first part.
Given a \cnab, we trace the above process of moving dots to the top row or leftmost column. For each non-root dot that gets moved to the top row, the dot to its left (its parent) has a dot below it, which therefore gets moved into the leftmost column, and conversely. Thus, there must be as many rows as columns.
\end{proof}
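The conditions of Definition~\ref{def:NAB}, completeness, and the dot count of the lemma can all be verified mechanically. The sketch below (our illustration; the encoding of a \nab\ as a set of $(row, col)$ cells and the function names are ours) checks these properties for the left-hand \cnab\ of Figure~\ref{fig:ex_NAB}.

```python
def is_nab(dots, nrows, ncols):
    """Check Conditions (1) and (2) of a non-ambiguous binary tree on an
    nrows x ncols rectangle; dots is a set of (row, col) cells, 1-indexed."""
    if {r for r, _ in dots} != set(range(1, nrows + 1)):
        return False                      # Condition (1): every row dotted
    if {c for _, c in dots} != set(range(1, ncols + 1)):
        return False                      # Condition (1): every column dotted
    for (r, c) in dots - {(1, 1)}:
        above = any(r2 < r for (r2, c2) in dots if c2 == c)
        left = any(c2 < c for (r2, c2) in dots if r2 == r)
        if above == left:                 # Condition (2): exactly one of the two
            return False
    return (1, 1) in dots                 # the root dot

def is_complete(dots):
    """Completeness: every dot has a dot below *and* to its right, or neither."""
    for (r, c) in dots:
        below = any(r2 > r for (r2, c2) in dots if c2 == c)
        right = any(c2 > c for (r2, c2) in dots if r2 == r)
        if below != right:
            return False
    return True

# the left-hand NAB of Figure fig:ex_NAB, on a 5 x 5 rectangle
dots = {(1, 1), (1, 2), (1, 3), (1, 5), (2, 1), (2, 4), (3, 2), (4, 1), (5, 3)}
```

Its $9$ dots match the lemma: one less than the semi-perimeter $5+5$ of the square diagram.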
\subsection{Complete non-ambiguous binary multitrees}\label{sec:CNABM}
Given a permutation $\pi=\pi_1\pi_2\ldots\pi_n$ let $\tpi$ be the $n\times n$ grid with dots in cells $(\pi_i,i)$, where cell $(i,j)$ is the cell in row $i$ and column $j$,
the northwest corner cell being $(1,1)$.
\begin{definition}\label{def:cmNAB}
A \emph{complete multirooted non-ambiguous binary tree (CMNAB)} is obtained from $\tpi$, for a permutation $\pi$, by adding a further $n-1$
dots with the following conditions:
\begin{enumerate}
\item\label{mnab-cond1} Every added dot has a dotted cell below it in its column and to its right in its row.
\item\label{mnab-cond2} The graph obtained as in the case of a \cnab\ is a tree.
\end{enumerate}
The added dots are called \emph{internal dots}. The set of \cmnabs\ arising from~$\tpi$ is denoted~$\mpi$, and the tree obtained from $M\in\mpi$ is denoted $T(M)$.
\end{definition}
A \cmnab\ gives a multirooted tree where the roots are the dots
with no dots to the left or above; see Figure~\ref{fig:map_zeta}. We will show that \cnabs\ are precisely the \cmnabs\ with a single root.
\begin{lemma}\label{lem:cell2edge}
A cell $(i,j)$ in $\tpi$ has a dot to the right and below if and only if
there is an edge between $i$ and $\pi_j$ in $G_\pi$.
\begin{proof}
If a cell $(i,j)$ has dots both to the right and below then there is a leaf dot below it in row $r>i$, so $\pi_j=r$, and to its right in column $c>j$, so $\pi_c=i<r$. Since $j<c$ and $\pi_j>\pi_c=i$, $\pi_j$ and $i$ form an inversion in $\pi$, so $(\pi_j,i)$ is an edge in $G_\pi$.
Conversely, an edge in $G_\pi$ corresponds to an inversion in $\pi$, which in turn corresponds to a pair of leaf dots in $\tpi$, the leftmost of which is lower than the other, and thus there is a cell above the leftmost one and to the left of the other.
\end{proof}
\end{lemma}
By Lemma~\ref{lem:cell2edge} every cell in $\tpi$ with an internal dot corresponds to an
edge in $G_\pi$. So we can map the elements of $\mpi$ to subgraphs of $G_\pi$, by the map $\zeta$ which maps $M\in\mpi$ to the subgraph $\zeta(M)$ of $G_\pi$ with edge set
$$
E(\zeta(M))=\{(i,\pi_j) : (i,j)\text{ contains an internal dot in }M\}.
$$
Note that the non-internal dots in $M$ are the leaves of $T(M)$, and they correspond precisely to the cells $(i,j)$
where $\pi_j=i$.
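The map $\zeta$ is simple to compute: each internal dot in row $i$ and column $j$ contributes the edge $(i,\pi_j)$. As an illustration (ours; we read the internal dots of the \cmnab\ $M \in \mathcal{M}_{465213}$ of Figure~\ref{fig:map_zeta} as the cells $(1,2),(1,3),(2,1),(2,2),(3,2)$), the following sketch recovers the red spanning tree of that figure.

```python
def zeta(pi, internal_dots):
    """Edge set of zeta(M): an internal dot in cell (i, j) gives the edge
    {i, pi_j}; edges are stored as frozensets since they are unordered."""
    return {frozenset({i, pi[j - 1]}) for (i, j) in internal_dots}

pi = (4, 6, 5, 2, 1, 3)  # pi = 465213
internal = {(1, 2), (1, 3), (2, 1), (2, 2), (3, 2)}
tree = zeta(pi, internal)
```

This yields the $n-1 = 5$ edges $(1,6),(1,5),(2,4),(2,6),(3,6)$, which are indeed the thick red edges of $G_{465213}$ in the figure.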
The following lemma is straightforward to prove.
\begin{lemma}\label{lem:paths}
Let $S=\zeta(M)$ for a \cmnab\ $M\in \mpi$, so $S$ is a subgraph of $G_\pi$. The sequence
$$
v_1,e_1,v_2,e_2,\ldots,e_{k-1},v_k,
$$
alternating between vertices and edges in $S$, is a path in $S$ if and only if
$$
\ell_1,i_1,\ell_2,i_2,\ldots,i_{k-1},\ell_k
$$
is an alternating sequence of leaf and internal dots in $M$, with every pair of consecutive dots in the same row or column, where $\ell_t$ and $i_t$ are the dots corresponding to the vertex $v_t$ and edge $e_t$, respectively. In particular, $T(M)$ being connected is equivalent to $S$ being connected. Moreover, adding an edge to $S$ corresponds to closing such a sequence through $M$ to a cycle.
\end{lemma}
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.35]
\newcommand\dott[2]{\draw [fill=black] (#1,#2) circle [radius=0.25];}
\draw[step=2cm,thick] (0,0) grid (12,12);
\begin{scope}[shift={(-1,-1)}]
\draw (2,6)--(2,10);
\draw (4,12)--(10,12);
\draw (2,10)--(8,10);
\draw (4,2)--(4,12);
\draw (6,4)--(6,12);
\draw (4,8)--(12,8);
\dott{4}{12}
\dott{6}{12}
\dott{10}{12}
\dott{2}{10}
\dott{4}{10}
\dott{8}{10}
\dott{4}{8}
\dott{12}{8}
\dott{6}{4}
\dott{4}{2}
\dott{2}{6}
\node at(2,14.3){$1$};
\node at(4,14.3){$2$};
\node at(6,14.3){$3$};
\node at(8,14.3){$4$};
\node at(10,14.3){$5$};
\node at(12,14.3){$6$};
\node at(-0.3,12){$1$};
\node at(-0.3,10){$2$};
\node at(-0.3,8){$3$};
\node at(-0.3,6){$4$};
\node at(-0.3,4){$5$};
\node at(-0.3,2){$6$};
\end{scope}
\begin{scope}[shift={(18,0)}]
\newcommand\nod[3]{\draw[fill=white](#1,#2)circle[radius=.75];\node at(#1,#2){$#3$};}
\begin{scope}[shift={(-1,-1)}]
\draw[thin] (4,12)--(10,12)--(12,7)--(10,2)--(4,2)--(2,7)--(4,12);
\draw[thin] (4,2)--(4,12)--(12,7)--(2,7)--(10,12)--(10,2)--(4,2);
\draw[red, ultra thick] (2,7)--(10,12)--(4,12)--(12,7)--(10,2);
\draw[red, ultra thick] (4,2)--(4,12);
\nod{4}{12}{6}
\nod{10}{12}{1}
\nod{2}{7}{5}
\nod{12}{7}{2}
\nod{4}{2}{3}
\nod{10}{2}{4}
\end{scope}
\end{scope}
\end{tikzpicture}
\caption{An element $M\in\mathcal{M}_{465213}$, and the graph $G_{465213}$, with the spanning tree corresponding to $M$ marked with thick red lines. Moving the dot at $(2,2)$ to $(1,1)$, thus creating a \cnab, would correspond to replacing the edge $(2,6)$ by $(1,4)$ in the spanning tree, whereas moving the dot at $(2,1)$ to $(3,1)$ corresponds to replacing the edge $(2,4)$ with $(3,4)$.\label{fig:map_zeta}}
\end{figure}
\begin{prop}
The map $\zeta$ is a bijection from $\mpi$ to the spanning trees of $G_\pi$.
\end{prop}
\begin{proof}
Let $\pi$ be an $n$-permutation. By Lemma~\ref{lem:cell2edge} and the fact that every element of $\mpi$ has $n-1$ internal dots, $\zeta$ is a map from $\mpi$ to the set of subgraphs of $G_\pi$ with $n-1$ edges. To show that those edges form a spanning tree it therefore suffices to show that they form a connected graph, which follows from Lemma~\ref{lem:paths}.
Conversely, if $S$ is a spanning tree of $G_\pi$, place a dot in cell $(i,j)$ of $\tpi$ for each edge $(i,\pi_j)$ of $S$, where $i<\pi_j$. By Lemma~\ref{lem:cell2edge}, this places $n-1$ internal dots in $\tpi$ and each of those dots contributes two edges to the graph in $\tpi$, connecting to a dot below and to the right, a total of $2n-2$ edges, in a graph with $2n-1$ vertices. To show that this graph is a tree it again suffices to show that it is connected, which again follows from Lemma~\ref{lem:paths}.
\end{proof}
If a \cmnab\ has a unique root then the definition is equivalent to that of a \cnab, as we will now show.
\begin{lemma}\label{lem:1root}
A \cmnab\ $M$ has a dot with a dot to the left and above if and only if it has more than one root.
\end{lemma}
\begin{proof}
Suppose $M$ has a dot $d$ with a dot $a$ above and a dot $\ell$ to the left. Tracing a zig-zag path from $d$ through $a$ to the topmost dot in their column, then to the leftmost dot in that row, and so on, we must end up at a dot with no dot above or to the left, which is a root dot~$d_1$. Tracing analogously through $\ell$ we will end up at a root dot $d_2$. These two root dots must be distinct, for otherwise we would have traced out a cycle in the tree $T(M)$.
Suppose then that $M$ has (at least) two distinct root dots $\ell$ and $h$, which then must be in different rows, say $h$ in a higher row. The unique path from $\ell$ to $h$
in the tree $T(M)$ must contain an up-step, but start with a right or a down step. Consider the maximal sequence of right and down steps in the beginning of that path. If the last step in that sequence was a right step the next step must be up, if it was a down step the next step must be left. In either case we have found a dot with a dot to the left and above.
\end{proof}
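Lemma~\ref{lem:1root} is easy to check on examples: the roots of a \cmnab\ are the dots with no dot above and no dot to the left, and the lemma asserts that there are several of them exactly when some dot has both. A small sketch (ours; function names are ours), run on the full dot set of the \cmnab\ of Figure~\ref{fig:map_zeta}:

```python
def roots(dots):
    """Dots of a CMNAB with no dot above in their column or to their left."""
    return {(r, c) for (r, c) in dots
            if not any(r2 < r for (r2, c2) in dots if c2 == c)
            and not any(c2 < c for (r2, c2) in dots if r2 == r)}

def has_left_and_above(dots):
    """Is there a dot with a dot to its left AND a dot above it?"""
    return any(any(r2 < r for (r2, c2) in dots if c2 == c)
               and any(c2 < c for (r2, c2) in dots if r2 == r)
               for (r, c) in dots)

# all dots (leaf and internal) of the CMNAB M of the figure, as (row, col)
dots = {(1, 2), (1, 3), (1, 5), (2, 1), (2, 2), (2, 4),
        (3, 2), (3, 6), (4, 1), (5, 3), (6, 2)}
```

Here the roots are the dots in cells $(1,2)$ and $(2,1)$, and the dot in cell $(2,2)$ has a dot to its left and a dot above, in agreement with the lemma.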
\begin{prop}\label{prop:cnabs-cmnabs}
The \cmnabs\ with a single root are precisely the \cnabs.
\end{prop}
\begin{proof}
Every row in a \cnab\ has a unique leaf dot, which is also the unique leaf dot in its column, and this accounts for $n$ dots. The remaining $n-1$ dots are internal and satisfy Condition~\eqref{mnab-cond1} in Definition~\ref{def:cmNAB}, and \cnabs\ satisfy Condition~\eqref{mnab-cond2} in Definition~\ref{def:cmNAB}, so every \cnab\ is a \cmnab. Conversely, if a \cmnab\ $M$ has a single root, then by Lemma~\ref{lem:1root} no dot has a dot to the left and above, and so $M$ satisfies Condition~\eqref{nab:2} in Definition~\ref{def:NAB} (and Condition~\eqref{nab:1} by definition) and thus is a \cnab.
\end{proof}
We can use the map $\zeta$ to map the \cnabs\ on $\tpi$ to the minimal recurrent configurations on~$G_\pi$. For the following lemma we order the edges of $G_\pi$ reverse lexicographically by coordinates of the corresponding cells in $\tpi$ (see Definition~\ref{def:activity} of external activity).
\begin{prop}\label{prop:root-ext}
Let $M$ be a \cmnab. There is a unique root in $M$ if and only if $\zeta(M)$ has no external activity.
\begin{proof}
If $M$ has more than one root, then Lemma~\ref{lem:1root} implies there exists a dot $d$ in $M$ with a dot $a$ above and dot $\ell$ to the left. Let $c$ be the cell that completes the rectangle of $a,\ell$ and~$d$, and let $S := \zeta(M)$ be the corresponding spanning tree. Then $c$ corresponds to an edge external to $S$, and adding it to $S$ creates a cycle with edges corresponding to $a,\ell,c$ and $d$, by Lemma~\ref{lem:paths}. Since the edge corresponding to $c$ is ordered last of these edges, it is externally active.
If $e$ is an externally active edge, then adding it to the spanning tree $S$ creates a cycle, which corresponds to a cycle of internal dots in $M$. Such a cycle must contain a dot with a dot to the left and a dot above. However, as $e$ is externally active, it must be ordered last in its cycle in~$S$, so the corresponding dot in $M$ is weakly northwest of all other dots in the cycle. Thus the addition of the dot corresponding to $e$ cannot cause one of the pre-existing dots to have a dot to the left and above. Therefore, such a dot must already exist in the cycle, so $M$ has a dot with a dot to the left and a dot above, which implies $M$ is multirooted, by Lemma~\ref{lem:1root}.
\end{proof}
\end{prop}
Note that in this case, unlike in Section~\ref{sec:Tutte}, the order of the edges of
$G=G_{\pi}$ is fixed \textit{a priori} (it does not depend on the tree $T$). It is known (see~\cite{Tutte}) that in this case we have
$$
\tut_G(x,y) = \sum\limits_T x^{\mathsf{int}(T)} y^{\mathsf{ext}(T)},
$$
where the sum is over all spanning trees of $G$, and thus by Proposition~\ref{pro:Tutte_level} the spanning trees with no external activity are in bijection with the minimal recurrent configurations. Therefore, Proposition~\ref{prop:cnabs-cmnabs} and Proposition~\ref{prop:root-ext} imply the following.
\begin{corollary}\label{cor:1root-minrec}
The elements of $\mpi$ with a single root are in bijection with
the minimal recurrent configurations of $G_\pi$.
\end{corollary}
\begin{problem}
Find a nice bijective proof of Corollary~\ref{cor:1root-minrec}.
\end{problem}
\begin{remark}
In~\cite{DGGS}, the authors provided a new interpretation of the sequence $A002190 = 1,1,4,33,456,9460,\ldots$
in~\cite{oeis} counting complete non-ambiguous binary trees, in terms of fully tiered trees with weight $0$. Section~\ref{sec:minrec} and Corollary~\ref{cor:1root-minrec} provide two further combinatorial interpretations of this sequence:
\begin{itemize}
\item as the sum $\sum\limits_{ \pi \in \bar{\cls}_n} \vert \MinRec{s}{G_{\pi}}\vert$, where $\bar{\cls}_n$ is the set of indecomposable permutations of length $n$.
\item as the number of pairs $(\pi,P)$ where $\pi \in \bar{\cls}_n$ and $P$ is a $(\pi,s)$-compatible ordered partition of $[n]$.
\end{itemize}
\end{remark}
\section{Specialisations}\label{sec:specialisations}
\subsection{The Ferrers case}\label{sec:Ferrers}
In this section, we are interested in the case where the permutation~$\pi$ has a single descent, in which case the permutation graph $G_{\pi}$ is a Ferrers graph. In this case the spanning trees of the permutation graph are the intransitive trees introduced by Postnikov~\cite{Post}. As such, we recover results from~\cite[Section 5.3]{DSSS}.
A \emph{Ferrers graph} (see~\cite{ew-ferrers}) is a bipartite graph whose vertices of
each part are labeled $t_1,t_2,\ldots,t_k$ and $b_1,b_2,\ldots,b_m$, respectively, satisfying the following conditions:
\begin{enumerate}
\item\label{fg1} If $(t_i,b_j)$ is an edge with $r\le i$ and $s\le j$, then $(t_r,b_s)$ is also an edge.
\item\label{fg2} Both $(t_1,b_m)$ and $(t_k,b_1)$ are edges.
\end{enumerate}
As illustrated in the example in Figure~\ref{fig:Ferrers_permgraph}, we think of the
vertices $t_i$ as ``top'' vertices, and the~$b_i$ as ``bottom'' vertices. Note that when read from left to right, the labels on the top vertices are increasing, but decreasing for the bottom vertices. Thus, condition~\eqref{fg1} above says that a top vertex must have edges to all bottom vertices that any top vertex to its right does, and likewise for bottom vertices.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.35]
\begin{scope}[shift={(-2,0)}]
\draw[step=2cm,thick] (0,2) grid (8,8);
\draw[step=2cm,thick] (6,6) grid (10,8);
\draw[step=2cm,thick] (0,0) grid (2,2);
\node at (10.5,7) {$1$};
\node at (9,5.5) {$2$};
\node at (8.5,5) {$3$};
\node at (8.5,3) {$4$};
\node at (7,1.5) {$5$};
\node at (5.1,1.5) {$6$};
\node at (3.1,1.5) {$7$};
\node at (2.5,.7) {$8$};
\node at (1,-.5) {$9$};
\node at (-0.5,7) {$t_1$};
\node at (-0.5,5) {$t_2$};
\node at (-0.5,3) {$t_3$};
\node at (-0.5,1) {$t_4$};
\node at (1,8.7) {$b_1$};
\node at (3,8.7) {$b_2$};
\node at (5,8.7) {$b_3$};
\node at (7,8.7) {$b_4$};
\node at (9,8.7) {$b_5$};
\end{scope}
\begin{scope}[shift={(13,2)}]
\draw (1.5,4)--(0,0)--(1.5,4)--(3,0)--(1.5,4)--(6,0)--(1.5,4)--(9,0)--(1.5,4)--(12,0);
\draw (4.5,4)--(0,0)--(4.5,4)--(3,0)--(4.5,4)--(6,0)--(4.5,4)--(9,0);
\draw (7.5,4)--(0,0)--(7.5,4)--(3,0)--(7.5,4)--(6,0)--(7.5,4)--(9,0);
\draw (10.5,4)--(0,0);
\draw [fill=white] (1.5,4) circle [radius=0.8];
\draw [fill=white] (4.5,4) circle [radius=0.8];
\draw [fill=white] (7.5,4) circle [radius=0.8];
\draw [fill=white] (10.5,4) circle [radius=0.8];
\draw [fill=white] (0,0) circle [radius=0.8];
\draw [fill=white] (3,0) circle [radius=0.8];
\draw [fill=white] (6,0) circle [radius=0.8];
\draw [fill=white] (9,0) circle [radius=0.8];
\draw [fill=white] (12,0) circle [radius=0.8];
\node at (1.5,4) {$1$};
\node at (4.5,4) {$3$};
\node at (7.5,4) {$4$};
\node at (10.5,4) {$8$};
\node at (0,0) {$9$};
\node at (3,0) {$7$};
\node at (6,0) {$6$};
\node at (9,0) {$5$};
\node at (12,0) {$2$};
\end{scope}
\end{tikzpicture}
\caption{Example of a Ferrers diagram, the labeling of its South-East border, and the corresponding labeled Ferrers graph, which is exactly the permutation graph $G_{256791348}$.\label{fig:Ferrers_permgraph}}
\end{figure}
Given a Ferrers diagram with rows labeled from top to bottom with $t_1,t_2,\ldots,t_k$ and
columns labeled with $b_1,b_2,\ldots,b_m$ from left to right,
there is a unique Ferrers graph whose vertices are labeled with the $t_i$ and $b_j$ and where $(t_i,b_j)$ is an edge if and only if the diagram has a cell in row $t_i$ and column $b_j$. This correspondence is clearly one-to-one.
Given a permutation $\pi \in \clsn$, we say that the pair $\pi_i,\pi_{i+1}$ is a \emph{descent} of $\pi$ if $\pi_i > \pi_{i+1}$.
\begin{prop}\label{pro:Ferrers_single_descent}
Let $G$ be a graph on $n$ vertices. Then $G$ is a Ferrers graph if and only if there exists an indecomposable permutation $\pi$ with exactly one descent such that $G \simeq G_{\pi}$.
\end{prop}
\begin{proof}
Suppose that $\pi$ is indecomposable and has a single descent. Then we can decompose~$\pi$ in a unique way as $\pi= \pi_1 \pi_2$, where $\pi_1$ and $\pi_2$ are increasing subsequences of $[n]$ and the last letter of $\pi_1$ is strictly greater than the first letter of $\pi_2$. Since $\pi$ is indecomposable, this implies that the last letter of $\pi_1$ is $n$, and the first letter of $\pi_2$ is $1$. Now we let $F=F(\pi)$ be the Ferrers diagram
defined as follows. Label the edges on the South-East border of $F$ from North-East to South-West in the order $1,2,\ldots,n$, and let the step labeled $k$ be a South step if $k \in \pi_2$ and a West step if $k \in \pi_1$. This defines a Ferrers diagram since $1 \in \pi_2$. Then the edges of the corresponding Ferrers graph $G(F)$ are the pairs $(i,j)$ where $i$ is a column label (a West step), $j$ a row label (a South step), and $i>j$, that is, exactly the inversions of~$\pi$. Thus $G_{\pi} \simeq G(F)$.
For the converse, suppose $G=G(F)$ is the Ferrers graph corresponding to the Ferrers diagram $F$, whose South-East border is labeled as before. Let $\pi_1$ and $\pi_2$ be the words formed of the West steps and the South steps, respectively, of that border, each in increasing order. Then $\pi = \pi_1 \pi_2$ is a permutation of length $n$ with exactly one descent, and as above, we have $G \simeq G_{\pi}$, as desired (that $\pi$ is indecomposable follows from the fact that a Ferrers graph is connected).
\end{proof}
Figure~\ref{fig:Ferrers_permgraph} illustrates the construction in the proof of Proposition~\ref{pro:Ferrers_single_descent}. Thus, Ferrers graphs can be viewed as permutation graphs where the permutation has a single descent.
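The correspondence above can be checked mechanically on the example of Figure~\ref{fig:Ferrers_permgraph}. The Python sketch below (ours, purely illustrative, not part of the paper) computes the edges of $G_\pi$ as the inversions of $\pi$ and, independently, the Ferrers-graph edges from the single-descent decomposition $\pi = \pi_1\pi_2$; the two edge sets coincide for $\pi = 256791348$.

```python
from itertools import combinations

def inversion_graph(pi):
    """Edges of the permutation graph G_pi: values {pi_a, pi_b} with a < b and pi_a > pi_b."""
    return {frozenset((pi[a], pi[b]))
            for a, b in combinations(range(len(pi)), 2) if pi[a] > pi[b]}

def ferrers_edges(pi):
    """For single-descent pi = pi1 pi2: pairs (i, j) with i a column label
    (i in pi1, a West step), j a row label (j in pi2, a South step), and i > j."""
    d = next(t for t in range(len(pi) - 1) if pi[t] > pi[t + 1])  # the unique descent
    pi1, pi2 = pi[:d + 1], pi[d + 1:]
    return {frozenset((i, j)) for i in pi1 for j in pi2 if i > j}
```

For $\pi = 256791348$ both functions return the same 14 edges; for instance, vertex $9$ is adjacent exactly to $1, 3, 4, 8$, as in the figure.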
\begin{remark}
It is possible for a permutation with more than one descent to yield a Ferrers graph. For instance, the graph corresponding to the permutation $3142$ is isomorphic to $P_4$, the path graph on $4$ vertices, which is a Ferrers graph. Indeed, $P_4$ is the Ferrers graph corresponding to the diagram whose row lengths are $(2,1)$, or equivalently, it is isomorphic to the permutation graph $G_{2413}$.
\end{remark}
We revisit Theorem~\ref{thm:main_result} in the context of Ferrers graphs. Let $\pi \in \clsn$ be an indecomposable permutation with a single descent, and $G=G_{\pi}$ the corresponding Ferrers graph. We decompose $\pi$ into $\pi_1 \pi_2$ as before, where $\pi_1$ and $\pi_2$ are two increasing sequences such that the last letter of $\pi_1$ is $n$ and the first letter of $\pi_2$ is $1$. We write $A_1$ and $A_2$ for the unordered set of labels appearing in $\pi_1$ and $\pi_2$, respectively. For $j\in \{1,2\}$, we set $\bar{j} := 3-j$, so that if $j=1$, $\bar{j} = 2$ and vice versa.
\begin{lemma}\label{lemma:tree_levels}
Let $s \in [n] $ be a distinguished vertex of $G=G_\pi$ where $\pi$ has a single descent, and let $j \in \{1,2\}$ be such that $s \in A_j$. Let $T$ be a spanning tree of $G$, rooted at $s$. Then for any $k \geq 0$, we have:
\begin{itemize}
\item $T^{(2k)} \subseteq A_j$.
\item $T^{(2k+1)} \subseteq A_{\bar{j}}$.
\end{itemize}
\end{lemma}
\begin{proof}
Any edge $e$ of $G$ is a pair $e=(x,y) \in A_1 \times A_2$. Thus if $e = (x,y) \in T^{(k)} \times T^{(k+1)}$ is an edge of $T$, we have that if $x \in A_j$ then $y \in A_{\bar{j}}$ and vice versa. Since $T^{(0)} = \{s\} \subseteq A_j$, the claim follows immediately by induction.
\end{proof}
An immediate consequence of Lemma~\ref{lemma:tree_levels} is the following.
\begin{prop}\label{pro:mu=0}
Let $s \in [n] $ be a distinguished vertex of $G=G_\pi$ where $\pi$ has a single descent, and $T$ be a spanning tree of $G$, rooted at $s$. Then, for all $i\in[n]$, we have $\mu_i(T)=0$, where the $\mu_i(T)$ are defined as in Equation~\eqref{eq:def_mu} in Section~\ref{sec:main_thm}.
\end{prop}
\begin{proof}
Given $k \geq 0$, Lemma~\ref{lemma:tree_levels} shows that $T^{(k)}$ is contained in $A_1$ or in $A_2$. Since every edge of $G$ joins a vertex of $A_1$ to a vertex of $A_2$, the graph $G$ has no edges between two elements of $T^{(k)}$, and the claim follows.
\end{proof}
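Lemma~\ref{lemma:tree_levels} and Proposition~\ref{pro:mu=0} are easy to check by machine. The Python sketch below is our illustration (it takes the BFS tree of $G_\pi$ as the spanning tree $T$); for $\pi = 256791348$ rooted at $s = 9 \in A_1$, the levels alternate between $A_1$ and $A_2$, and no edge of $G$ joins two vertices of the same level.

```python
from collections import deque
from itertools import combinations

def permgraph_adj(pi):
    """Adjacency lists of the permutation graph G_pi (edges = inversions of pi)."""
    adj = {v: set() for v in pi}
    for a, b in combinations(range(len(pi)), 2):
        if pi[a] > pi[b]:
            adj[pi[a]].add(pi[b])
            adj[pi[b]].add(pi[a])
    return adj

def bfs_levels(adj, s):
    """Vertices of the BFS spanning tree rooted at s, grouped by depth."""
    depth, queue = {s: 0}, deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in depth:
                depth[w] = depth[u] + 1
                queue.append(w)
    levels = {}
    for v, d in depth.items():
        levels.setdefault(d, set()).add(v)
    return levels
```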
We now show that in this case, there is a one-to-one correspondence between spanning
trees of permutation graphs $G_\pi$ where $\pi$ has a single descent, and the intransitive trees first introduced by Postnikov~\cite{Post}.
Let~$T$ be a labeled tree on the vertex set $[n]$. We say that $T$ is \emph{intransitive} if all its vertices are either local minima or local maxima.
Given a tree $T$, we denote by $\mathsf{LocMin}(T)$ and $\mathsf{LocMax}(T)$ its sets of local minima and local maxima, respectively. Thus~$T$ is an intransitive tree if and only if $\mathsf{LocMin}(T)$ and $\mathsf{LocMax}(T)$ form a partition of $[n]$.
The following proposition is essentially a rewriting of Proposition~\ref{pro:tieredtrees_permgraphs} in the case of permutations with a single descent, but we give a proof in the current context.
\begin{prop}
Let $\pi$ be an indecomposable permutation with a single descent. Write $\pi=\pi_1 \pi_2$ with $\pi_1$ and $\pi_2$ being, respectively, the increasing ordering of a set $A_1$ containing $n$ and of a set $A_2$ containing $1$. Let $T$ be a labeled tree on the vertex set $[n]$. Then $T$ is a spanning tree of $G_{\pi}$ if and only if $T$ is an intransitive tree with $\mathsf{LocMax}(T)=A_1$ and $\mathsf{LocMin}(T) = A_2$.
\end{prop}
\begin{proof}
Suppose that $T$ is a spanning tree of $G=G_{\pi}$, and let $i \in A_1$. By construction, if $(i,j)$ is an edge of $G$, then $j \in A_2$ and $i>j$. In particular, all neighbors of $i$ in $T$ have labels strictly less than $i$, so that $A_1 \subseteq \mathsf{LocMax}(T)$. Similarly, we have $A_2 \subseteq \mathsf{LocMin}(T)$, and since $A_1,A_2$ forms a partition of $[n]$, this implies that $T$ is an intransitive tree with $\mathsf{LocMax}(T)=A_1$ and $\mathsf{LocMin}(T) = A_2$.
The converse follows from the fact that if $T$ is an intransitive tree, all its edges connect a local maximum $i$ to a local minimum $j$ with $i>j$.
\end{proof}
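The proposition can be tested on explicit spanning trees. In the Python sketch below (ours), the tree on $[9]$ rooted at $9$, given by a child-to-parent map, is a spanning tree of $G_{256791348}$ (the test checks that each of its edges is an inversion of the permutation), and it is intransitive with $\mathsf{LocMax}(T) = A_1 = \{2,5,6,7,9\}$ and $\mathsf{LocMin}(T) = A_2 = \{1,3,4,8\}$.

```python
def local_extrema(parent):
    """LocMax and LocMin of a labeled tree given as a child -> parent map
    (the root maps to None)."""
    nbrs = {v: set() for v in parent}
    for c, p in parent.items():
        if p is not None:
            nbrs[c].add(p)
            nbrs[p].add(c)
    locmax = {v for v, ns in nbrs.items() if all(w < v for w in ns)}
    locmin = {v for v, ns in nbrs.items() if all(w > v for w in ns)}
    return locmax, locmin
```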
We now restate Theorem~\ref{thm:main_result} in this specialized context.
Let $\pi$ be an indecomposable permutation with a single descent. Write $\pi=\pi_1 \pi_2$ with $\pi_1$ and $\pi_2$ being, respectively, the increasing ordering of a set $A_1$ containing $n$ and of a set $A_2$ containing $1$. Let $G=G_{\pi}$ be the corresponding permutation graph (which is a Ferrers graph by Proposition~\ref{pro:Ferrers_single_descent}). Let $s \in [n]$ be a distinguished vertex of $G$, and $T$ a labeled tree on $[n]$, rooted at $s$. For $i \in [n]$, we define:
\begin{eqnarray}
\tilde{\lambda}_i(T) & := & \begin{cases}
\left\vert \{ j \in T^{\left(>h(i)\right)} \cap A_2 : \, j<i \} \right\vert, \quad \text{if } i \in A_1, \\
\left\vert \{ j \in T^{\left(>h(i)\right)} \cap A_1 : \, j>i \} \right\vert, \quad \text{if } i \in A_2.
\end{cases}
\label{eq:def_lambda_fer}\\
\tilde{\nu}_i(T) & := &
\left\vert T^{\left(h(i) -1\right)} \cap (p(i),i) \right\vert
\label{eq:def_nu_fer},
\end{eqnarray}
where $p(i)$ is the parent of $i$ in the rooted tree $T$, and $(p(i),i)$ is the interval
$[p(i)+1,i-1]$ if $p(i)<i$, and $(p(i),i)=[i+1,p(i)-1]$ if $i<p(i)$.
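The statistics $\tilde{\lambda}_i$ and $\tilde{\nu}_i$ are straightforward to compute from a rooted tree. The Python sketch below is ours; the convention that the root (which has no parent) gets $\tilde{\nu} = 0$ is our assumption, not stated above. The test checks the values by hand on the star $G_{312}$ rooted at $3$, where $A_1 = \{3\}$ and $A_2 = \{1,2\}$.

```python
def depths(parent):
    """Depth h(v) of every vertex in a tree given as a child -> parent map."""
    d = {}
    def dep(v):
        if v not in d:
            d[v] = 0 if parent[v] is None else dep(parent[v]) + 1
        return d[v]
    for v in parent:
        dep(v)
    return d

def statistics(parent, A1, A2):
    """lambda~ and nu~ as in Eqs. (def_lambda_fer)/(def_nu_fer);
    nu~ of the root is set to 0 (our convention)."""
    h = depths(parent)
    lam, nu = {}, {}
    for i in parent:
        if i in A1:
            lam[i] = sum(1 for j in parent if j in A2 and h[j] > h[i] and j < i)
        else:
            lam[i] = sum(1 for j in parent if j in A1 and h[j] > h[i] and j > i)
        p = parent[i]
        if p is None:
            nu[i] = 0
        else:
            lo, hi = min(p, i), max(p, i)
            nu[i] = sum(1 for v in parent if h[v] == h[i] - 1 and lo < v < hi)
    return lam, nu
```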
\begin{theorem}\label{thm:recconfig_intrantrees}
The map $T \mapsto c(T)$, with $c(T) \in \Config{G}$ defined by $c_i(T) := \tilde{\lambda}_i(T) + \tilde{\nu}_i(T)$, is a bijection from the set of intransitive trees $T$ such that $\mathsf{LocMax}(T)=A_1$ and $\mathsf{LocMin}(T)=A_2$, to the set $\Rec{s}{G}$ of recurrent configurations on $G$.
Moreover, we have $ \level{c(T)} = \sum\limits_{i=1}^n \left( \frac12 \mu_i(T) + \tilde{\nu}_i(T) \right) = \sum\limits_{i=1}^n \tilde{\nu}_i(T) $, where the second equality holds by Proposition~\ref{pro:mu=0}, and $\mathsf{CanonTop} \left( c(T) \right) = T^{(0)},T^{(1)},\ldots$.
That is, the canonical toppling of $c(T)$ is given by the breadth-first search of $T$.
\end{theorem}
In particular, we recover the bijection between the set of intransitive trees on $n$ vertices and the set of recurrent configurations on Ferrers graphs on $n$ vertices from~\cite{DSSS}.
Note that the definition of $\tilde{\nu}$ in Equation~\eqref{eq:def_nu_fer} differs slightly from that of $\nu$ in Equation~\eqref{eq:def_nu} in Section~\ref{sec:main_thm} (the definition of $\tilde{\lambda}$ is the same as that of $\lambda$, though written slightly differently). This is due to the extra structure of intransitive trees, namely that every vertex is either a local minimum or a local maximum, which allows this simpler formula to be given. There is no additional difficulty in the proof; it merely requires a slight tweaking of the inverse map introduced in the proof of Theorem~\ref{thm:main_result}.
\subsection{Threshold graphs}\label{sec:thresholdgraphs}
Threshold graphs were introduced by Chv\'atal and Hammer~\cite{CH} and are defined as
those graphs that can be constructed from a graph with one vertex by repeatedly adding an isolated vertex or a vertex that is connected to every already existing vertex. It is easy to see that a threshold graph is the permutation graph of a permutation obtained from the permutation 1 by repeatedly appending or prepending a new largest letter. One example of such a permutation is $86521347$; these are exactly the permutations that first
decrease and then increase. Note, however, that a threshold graph may be disconnected and thus correspond to a decomposable permutation (which will have its largest letter last).
In~\cite{PYY} the authors present a general bijection between the parking functions of a graph and labeled spanning trees. In the case where $G$ is a threshold graph, this bijection maps the degree of the parking function to the number of inversions of the
spanning tree (an inversion of a tree $T$ with vertex set $[n]$ is a pair $(i,j)$ such that $i>j$ and $j$ is an ancestor of $i$ in~$T$, relative to a designated root). Parking functions of a graph are essentially the same as recurrent configurations for the ASM, via a simple linear translation, with the degree of a parking function corresponding to the level of a recurrent configuration. Thus, the bijection in~\cite{PYY} can be viewed as a bijection between recurrent configurations on a threshold graph~$G$ and spanning trees of $G$, mapping the level statistic of the configuration to the number of inversions of the trees.
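The inversion statistic on trees can be computed in a few lines. The following Python sketch (ours) counts, for a tree given as a child-to-parent map, the pairs $(i,j)$ with $i > j$ and $j$ a proper ancestor of $i$ relative to the root.

```python
def tree_inversions(parent):
    """Number of pairs (i, j) with i > j and j a proper ancestor of i
    in a rooted tree given as a child -> parent map (root maps to None)."""
    count = 0
    for i in parent:
        j = parent[i]
        while j is not None:
            if i > j:
                count += 1
            j = parent[j]
    return count
```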
In Section~\ref{sec:Tutte}, we showed that our bijection in Theorem~\ref{thm:main_result} can be seen as a bijection between recurrent configurations on a permutation graph $G$ and spanning trees of $G$, mapping the level of the configuration to the external activity of the tree.
It is known that the inversion and external activity statistics are equidistributed over labeled trees on $n$ vertices, and~\cite{Beis} provides a bijective proof of this fact.
Since threshold graphs are a special case of permutation graphs, it follows that Theorem~\ref{thm:main_result} can be viewed as an extension of the work in~\cite{PYY}.
\bibliographystyle{abbrv}
| {
"timestamp": "2018-10-08T02:03:25",
"yymm": "1810",
"arxiv_id": "1810.02437",
"language": "en",
"url": "https://arxiv.org/abs/1810.02437",
"abstract": "A permutation graph is a graph whose edges are given by inversions of a permutation. We study the Abelian sandpile model (ASM) on such graphs. We exhibit a bijection between recurrent configurations of the ASM on permutation graphs and the tiered trees introduced by Dugan et al. [10]. This bijection allows certain parameters of the recurrent configurations to be read on the corresponding tree. In particular, we show that the level of a recurrent configuration can be interpreted as the external activity of the corresponding tree, so that the bijection exhibited provides a new proof of a famous result linking the level polynomial of the ASM to the ubiquitous Tutte polynomial. We show that the set of minimal recurrent configurations is in bijection with the set of complete non-ambiguous binary trees introduced by Aval et al. [2], and introduce a multi-rooted generalization of these that we show to correspond to all recurrent configurations. In the case of permutations with a single descent, we recover some results from the case of Ferrers graphs presented in [11], while we also recover results of Perkinson et al. [16] in the case of threshold graphs.",
"subjects": "Combinatorics (math.CO)",
"title": "Permutation graphs and the Abelian sandpile model, tiered trees and non-ambiguous binary trees",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9919380096633735,
"lm_q2_score": 0.7154239897159439,
"lm_q1q2_score": 0.7096562484242632
} |
https://arxiv.org/abs/2203.07518 | Erdős--Szekeres-type problems in the real projective plane | We consider point sets in the real projective plane $\mathbb{R}P^2$ and explore variants of classical extremal problems about planar point sets in this setting, with a main focus on Erdős--Szekeres-type problems.We provide asymptotically tight bounds for a variant of the Erdős--Szekeres theorem about point sets in convex position in $\mathbb{R}P^2$, which was initiated by Harborth and Möller in 1994. The notion of convex position in $\mathbb{R}P^2$ agrees with the definition of convex sets introduced by Steinitz in 1913.For $k \geq 3$, an (\affine) $k$-hole in a finite set $S \subseteq \mathbb{R}^2$ is a set of $k$ points from $S$ in convex position with no point of $S$ in the interior of their convex hull. After introducing a new notion of $k$-holes for points sets from $\mathbb{R}P^2$, called projective $k$-holes, we find arbitrarily large finite sets of points from $\mathbb{R}P^2$ with no \projective 8-holes, providing an analogue of a classical planar construction by Horton from 1983. We also prove that they contain only quadratically many \projective $k$-holes for $k \leq 7$. On the other hand, we show that the number of $k$-holes can be substantially larger in~$\mathbb{R}P^2$ than in $\mathbb{R}^2$ by constructing, for every $k \in \{3,\dots,6\}$, sets of $n$ points from $\mathbb{R}^2 \subset \mathbb{R}P^2$ with $\Omega(n^{3-3/5k})$ \projective $k$-holes and only $O(n^2)$ \affine $k$-holes. Last but not least, we prove several other results, for example about projective holes in random point sets in $\mathbb{R}P^2$ and about some algorithmic aspects.The study of extremal problems about point sets in $\mathbb{R}P^2$ opens a new area of research, which we support by posing several open problems. | \section{Introduction}
\label{sec:introduction}
\subsection{Erd\H{o}s-Szekeres-type results in the Euclidean plane}
Throughout the whole paper, we consider each set $S$ of points from the Euclidean plane $\mathbb{R}^2$ to be finite and in \emph{general position}, that is, no three points of $S$ lie on a common line.
We say that a set $S$ of $k$ points in the Euclidean plane is in \emph{convex position} if $S$ forms the vertex set of a convex polygon, which we call a \emph{$k$-gon} or an \emph{\affine $k$-gon}.
In 1935, Erd\H{o}s and Szekeres~\cite{ErdosSzekeres1935} showed that, for every integer $k\ge3$, there is a smallest positive integer $ES(k)$ such that every finite set of at least $ES(k)$ points in the plane in general position contains a subset of $k$ points in convex position.
This result, known as the \emph{Erd\H{o}s--Szekeres theorem}, was one of the starting points of both discrete geometry and Ramsey theory.
It motivated various lines of research that led to several important results as well as to many difficult open problems.
For example, there were many efforts to determine the growth rate of the function $ES(k)$.
Erd\H{o}s and Szekeres~\cite{ErdosSzekeres1935} showed $ES(k) \leq \binom{2k-4}{k-2}+1$ and conjectured that $ES(k)=2^{k-2}+1$ for every $k \geq 2$.
This conjecture, known as the \emph{Erd\H{o}s--Szekeres conjecture}, was later supported by Erd\H{o}s and Szekeres~\cite{ErdosSzekeres1960}, who proved the matching lower bound $ES(k) \geq 2^{k-2}+1$.
The Erd\H{o}s--Szekeres conjecture was verified for $k \leq 6$~\cite{SzekeresPeters2006} (see also \cite{Maric2019,Scheucher2020}), but is still open for $k \ge 7$.
In fact, Erd\H{o}s even offered a \$500 reward for its solution.
The currently best upper bound $ES(k) \le 2^{k+O(\sqrt{k\log{k}})}$ is due to Holmsen, Mojarrad, Pach, and Tardos~\cite{HolmsenMPT2020}, who improved an earlier breakthrough by Suk~\cite{Suk2017} who showed $ES(k) \le 2^{k+O(k^{2/3}\log{k})}$.
Altogether, these estimates give, for every $k \geq 2$,
\begin{equation}
\label{eq-ESbounds}
2^{k-2} +1 \le ES(k) \le 2^{k+O(\sqrt{k\log{k}})}.
\end{equation}
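For concreteness, the bounds just discussed are easy to evaluate; the short Python snippet below (ours, purely illustrative) tabulates the original Erd\H{o}s--Szekeres upper bound $\binom{2k-4}{k-2}+1$ against the conjectured value $2^{k-2}+1$, which has been verified for $k \leq 6$.

```python
from math import comb

def es_upper_1935(k):
    """Erdos--Szekeres (1935) upper bound on ES(k)."""
    return comb(2 * k - 4, k - 2) + 1

def es_conjectured(k):
    """Conjectured exact value ES(k) = 2^(k-2) + 1, proved for k <= 6."""
    return 2 ** (k - 2) + 1
```

For $k = 6$ these give $71$ and $17$, respectively; the known value is $ES(6) = 17$.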
Several variations of the Erd\H{o}s--Szekeres theorem have been studied in the literature.
In the 1970s, Erd\H{o}s~\cite{Erdos1978} asked whether there is a smallest positive integer $h(k)$ such that every set $S$ of at least $h(k)$ points in the plane in general position contains an \emph{(\affine) $k$-hole}, which is a
convex polygon spanned by a subset of $k$ points from $S$
that does not contain any point from $S$ in its interior.
In other words, a $k$-hole in a finite point set $S$ in the plane in general position is a $k$-gon which is \emph{empty} in~$S$,
that is, its interior does not contain any point from~$S$.
After Horton~\cite{Horton1983} constructed arbitrarily large point sets with no 7-hole, it took more than 20 years until Gerken~\cite{Gerken2008} and Nicolas~\cite{Nicolas2007} independently showed that every sufficiently large set of points contains a 6-hole.
Therefore, $h(k)$ is finite if and only if $k \leq 6$.
Estimating the minimum number of $k$-holes is another example of a classical Erd\H{o}s--Szekeres-type problem.
For a fixed integer $k \geq 3$ and a positive integer $n$, let $h_k(n)$ be the minimum number of $k$-holes in any finite set of $n$ points in the plane.
The growth rate of the function $h_k(n)$ was also studied extensively.
Horton's result implies $h_k(n) = 0$ for $k \geq 7$.
The minimum numbers of 3- and 4-holes are known to be quadratic in $n$, but we only have the bounds $\Omega(n \log^{4/5}n) \leq h_5(n) \leq O(n^2)$ and $\Omega(n) \leq h_6(n) \leq O(n^2)$~\cite{ABHKPSVV2020_JCTA,BaranyValtr2004} for 5- and 6-holes, respectively.
However, it is widely conjectured that $h_5$ and $h_6$ are also both quadratic in $n$.
In this paper, we consider analogous Erd\H{o}s--Szekeres-type problems in the real projective plane $\RPP$.
We define notions of convex position, $k$-gons, and $k$-holes in $\RPP$ and study the corresponding extremal problems, providing several new results as well as numerous open problems in this new line of research.
\subsection{Convex sets in the real projective plane}
\label{sec:prelim}
As in the planar case, we consider only sets $P$ of points from the real projective plane $\RPP$ that are
finite and in \emph{general position}, that is, no three points from $P$ lie on a common projective line.
We say that $P$ is in \emph{\projective convex position} if it is a set in convex position in some Euclidean plane $\rho \subset \RPP$.
Recall that by removing a projective line from $\RPP$
one obtains a Euclidean plane.
Following the notation introduced by Steinitz~\cite{Steinitz1913}, we say that a subset $X$ of $\RPP$ is \emph{semiconvex} if any two points of $X$ can be joined by a line segment fully contained in $X$.
The set $X$ is \emph{convex} if it is semiconvex and disjoint from some projective line, that is, $X$ is contained in a plane $\rho \subset \RPP$; see also~\cite{deGrootdeVries1958}.
A \emph{\projective convex hull} of a set $Y \subset \RPP$ is an inclusion-wise minimal convex subset of $\RPP$ containing~$Y$.
We note that, unlike the situation in the plane, a \projective convex hull of $Y$ does not have to be determined uniquely; see Figure~\ref{fig:gonsAndHoles}.
\begin{figure}[htb]
\centering
\includegraphics{figs/figGonsHoles.pdf}
\caption{An example of three \projective 4-gons determined by the same subset of four points from a set $P$ of six points in $\RPP$. The \projective 4-gons in (a) and (b) are not \projective 4-holes in $P$, but the \projective 4-gon in~(c) is a \projective 4-hole in~$P$.
}
\label{fig:gonsAndHoles}
\end{figure}
\begin{definition}[A \projective $k$-gon]\label{def:projective-k-gon}
For a positive integer $k$ and a finite set $P$ of points from~$\RPP$ in general position, a \emph{\projective $k$-gon} determined by $P$
is a \projective convex hull of a set $I$ of $k$ points from $P$ which contains all points of $I$ on its boundary; see Figure~\ref{fig:gonsAndHoles}.
\end{definition}
The notion ``\projective $k$-gon'' in~$\RPP$ is a natural analogue of the notion ``\affine $k$-gon'' in~$\mathbb{R}^2$,
since \projective $k$-gons in~$\RPP$ are exactly
those subsets of~$\RPP$ which are convex $k$-gons in some of the planes contained in~$\RPP$.
Since a \projective convex hull is not determined uniquely, a set of $k$ points in $\RPP$ can determine several \projective $k$-gons. In particular,
it is not difficult to verify that
\begin{romanenumerate}
\item\label{item-3gon}
any three points in general position in $\RPP$ determine four \projective 3-gons,
\item\label{item-4gon}
any four points in general position in $\RPP$ determine three \projective 4-gons,
\item\label{item-5gon}
any five points in general position in $\RPP$ determine exactly one \projective 5-gon, and
\item
any $k \ge 6$ points in general position in $\RPP$ determine at most one \projective $k$-gon.
\end{romanenumerate}
We also introduce the following natural analogue of holes in the real projective plane.
\begin{definition}[A \projective $k$-hole]
For an integer $k \geq 3$ and a finite set $P$ of points from $\RPP$ in general position, a \emph{\projective $k$-hole} in $P$ is a \projective $k$-gon determined by points from $P$ that does not contain any point from $P$ in its interior; see Figure~\ref{fig:gonsAndHoles}.
\end{definition}
The notion of a ``\projective $k$-hole'' in~$\RPP$ is a natural analogue of the notion of an ``(\affine) $k$-hole'' in~$\mathbb{R}^2$, since \projective $k$-holes in~$\RPP$ are exactly
those subsets of~$\RPP$ which are (\affine) $k$-holes in some of the planes contained in~$\RPP$.
We note that, again, a single set of $k\in\{3,4\}$ points in general position in $\RPP$ can determine several different \projective $k$-holes.
Also note that, if $H$ is a \projective $k$-hole in a finite set $P$ of points from $\RPP$ in general position,
then in every affine plane $\rho\subset \RPP$ containing~$H$, the set $H$~is an \affine $k$-hole.
A subset of $\RPP$ is a \emph{\projective hole} in $P$ if it is a \projective $k$-hole in $P$ for some integer $k \geq 3$.
We also describe the following alternative view on \projective $k$-gons and $k$-holes via planar point sets.
A \emph{double chain}~\cite{hurNoyUrru99}
is a set $S=A \cup B$ of $k$ points from $\mathbb{R}^2$ with $A=\{s_1,\dots,s_m\}$ and $B=\{s_{m+1},\dots,s_{k}\}$
for some $m$ with $1 \leq m \leq k-1$
such that,
for every $i=1,\ldots,k$,
the line $\overline{s_is_{i+1}}$ separates
$A\setminus\{s_i,s_{i+1}\}$ from $B\setminus\{s_i,s_{i+1}\}$
(indices modulo $k$); see Figure~\ref{fig:doubleChain}.
The sets $A$ and $B$ are the \emph{chains} of the double chain.
For a line~$\ell$ not separating~$A$,
let $H_{\ell}^A$ be the closed half-plane bounded by~$\ell$
that contains~$A$ and we similarly define $H_{\ell}^B$.
The \emph{double chain $k$-wedge} of $S$ is the union $W_A \cup W_B$
where
$
W_A =
\bigcap_{i=0}^{m} H^A_{\overline{s_is_{i+1}}}
$
and
$
W_B =
\bigcap_{i=m}^{k} H^B_{\overline{s_is_{i+1}}}
$.
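The double-chain condition amounts to a finite family of strict side-of-line tests, so it can be checked with standard orientation predicates. The Python sketch below is ours; the 6-point coordinates are an illustrative double chain (two chains of size 3), not taken from the paper, and the labeled square is a negative example.

```python
def orient(p, q, r):
    """Sign of the orientation of (p, q, r): +1 left turn, -1 right turn, 0 collinear."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def separates(p, q, X, Y):
    """The line pq strictly separates X from Y (vacuously true for an empty side)."""
    sx = {orient(p, q, x) for x in X}
    sy = {orient(p, q, y) for y in Y}
    return len(sx) <= 1 and len(sy) <= 1 and 0 not in sx | sy and sx.isdisjoint(sy)

def is_double_chain(s, m):
    """Check the double-chain condition for s = (s_1, ..., s_k) with chains
    A = s[:m] and B = s[m:]: every line s_i s_{i+1} (indices mod k) must
    separate the rest of A from the rest of B."""
    k = len(s)
    A, B = s[:m], s[m:]
    for i in range(k):
        p, q = s[i], s[(i + 1) % k]
        if not separates(p, q,
                         [a for a in A if a not in (p, q)],
                         [b for b in B if b not in (p, q)]):
            return False
    return True
```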
\begin{observation}
\label{obs-gons}
Let $P$ be a set of $k$ points from $\RPP$ in general position
and let $\rho\subset \RPP$ be an affine plane containing~$P$.
A convex set $G$ in $\RPP$ is a \projective $k$-gon determined by~$P$
if and only if, in $\rho$,
$G$ is either
a convex polygon with $k$ vertices (that is, an \affine $k$-gon)
or a double chain $k$-wedge.\qed
\end{observation}
\begin{figure}[htb]
\centering
\includegraphics{figs/figDoubleChain.pdf}
\caption{A double chain $S$ on 9 points and the corresponding double chain 9-wedge.
}
\label{fig:doubleChain}
\end{figure}
\begin{observation}
\label{obs-holes}
Let $P$ be a set of $k$ points from $\RPP$ in general position
and let $\rho\subset \RPP$ be an affine plane containing~$P$.
A convex set $H$ in $\RPP$ is a \projective $k$-hole in~$P$
if and only if, in $\rho$,
$H$ is either
a convex polygon with $k$ vertices that is empty in~$P$ (that is, an \affine $k$-hole)
or
a double chain $k$-wedge that is empty in~$P$.\qed
\end{observation}
Convex sets in the real projective plane were considered by many authors~\cite{BrachoCalvillo1991,deGrootdeVries1958,Dekker1955,Haalmeyer1917,Kneser1921} and their study goes back more than 100 years to Steinitz~\cite{Steinitz1913}.
Besides the article of Harborth and M\"oller \cite{HarborthMoeller1993}, which introduced the notion of \projective $k$-gons,
we are not aware of any further literature on \projective $k$-gons or \projective $k$-holes.
Thus, our goal is to conduct a first extensive study of extremal properties of point sets in~$\RPP$.
\section{Our results}
\label{sec:results}
First, we consider an analogue of the Erd\H{o}s--Szekeres theorem in the real projective plane.
For an integer $k \geq 2$, let $ES^p(k)$ be the minimum positive integer $N$ such that every set of at least $N$ points in $\RPP$ in general position contains $k$ points in \projective convex position.
Interestingly, due to Observation~\ref{obs-gons}, $ES^p(k)$ equals the minimum positive integer $N$ such that every set of at least $N$ points in $\mathbb{R}^2$ in general position contains either $k$ points in convex position or a double chain of size $k$.
As already noted in \cite{HarborthMoeller1993},
one immediately gets $ES^p(k) \leq ES(k)$.
On the other hand,
$ES^p(k) \geq ES(\lceil k/2\rceil)$, since the largest chain of a double chain of size $k$ has at least $\lceil k/2\rceil$ points.
Thus, by~\eqref{eq-ESbounds}, we have $2^{\lceil k/2\rceil-2}+1\leq ES^p(k) \leq 2^{k + O(\sqrt{k\log{k}})}$ for every $k \geq 2$ and, in particular, the numbers $ES^p(k)$ are finite.
As our first result, we prove an almost matching lower bound on $ES^p(k)$.
\begin{theorem}
\label{thm:projective_k_gon_theorem}
There are constants $c,c'>0$ such that, for every integer $k \geq 2$,
\[
2^{k-c\log{k}} \leq ES^p(k) \leq 2^{k+c'\sqrt{k\log{k}}}.
\]
\end{theorem}
The precise value of $ES^p(k)$ is known for small values of $k$.
For $k \leq 5$,
all sets of $k$ points from $\RPP$ determine a \projective $k$-gon by properties~(\ref{item-3gon})--(\ref{item-5gon}) below Definition~\ref{def:projective-k-gon} and thus $ES^p(k)=k$.
Using SAT-solver-based computations, we have also verified the value $ES^p(6) = 9$,
which was determined by Harborth and M\"oller \cite{HarborthMoeller1993}.
This value can also be verified with an
exhaustive search, or by using
the database of order types of planar point sets~\cite{AichholzerKrasserAurenhammerOTDB,AichholzerAurenhammerKrasser2001}
or the database of (acyclic) oriented matroids \cite{FinschiFukuda2002,FinschiDBOM}.
We also found sets of 17 points from $\RPP$ with no \projective 7-gon, witnessing $ES^p(7) \geq 18$.
\iffalse
In fact, among the 135 equivalence classes of combinatorially different sets of 8 points in $\RPP$ (a.k.a.\ projective order types or acyclic oriented matroids of rank~3),
there are only two which do not contain \projective 6-gons; see Figure~\ref{fig:n8_no_pc6gon}.
\begin{figure}[htb]
\centering
\begin{subfigure}[t]{.47\textwidth}
\centering
\includegraphics[page=1]{figs/n8_no_pc6gon}
\label{fig:n8_no_pc6gon_1}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.47\textwidth}
\centering
\includegraphics[page=2]{figs/n8_no_pc6gon}
\label{fig:n8_no_pc6gon_2}
\end{subfigure}
\caption{Two combinatorially different sets of $8$ points from $\RPP$ with no \projective 6-gon.}
\label{fig:n8_no_pc6gon}
\end{figure}
\fi
Now, we focus on extremal problems about holes in the real projective plane.
As our first result, we show that the existence of \projective 8-holes is not guaranteed in large point sets in~$\RPP$, proving an analogue of the result by Horton~\cite{Horton1983}.
\begin{theorem}
\label{thm:holesExistence}
For every $n \in \mathbb{N}$, there exist sets of $n$ points from $\RPP$ in general position with no \projective 8-hole.
\end{theorem}
We recall that Theorem~\ref{thm:holesExistence} implies that there are arbitrarily large finite sets of points from~$\RPP$ in general position with no \projective $k$-holes for any $k \geq 8$.
The proof of Theorem~\ref{thm:holesExistence} uses
\emph{Horton sets} defined by Valtr~\cite{Valtr1992a} as a generalization
of a construction of Horton~\cite{Horton1983} of an arbitrarily large planar point set in general position (so-called \emph{perfect Horton set}) with no $7$-hole; see Section~\ref{sec:horton_sets_sketch} for the definition of Horton sets.
Horton sets contain no \affine 7-holes in~$\mathbb{R}^2$ and we actually show that, if they are embedded in~$\RPP$, they contain no \projective 8-holes.
Moreover, we show quadratic bounds on the number of \projective $k$-holes in Horton sets
for $k\le 7$.
\begin{theorem}
\label{theorem:horton_sets}
Let $H$ be a Horton set
of size $n$ in $\mathbb{R}^2 \subset \RPP$.
Then $H$ has $\Theta(n^2)$ \projective $k$-holes for every $k\le 7$.
Moreover, if $H$ is the perfect Horton set of size $n=2^z$, then the number of \projective 3-holes in $H$ equals
\[
4.25 \cdot 2^{2z} + 2^z(-3z^2/2-z/2-5.5) -4z + 2
= 4.25n^2 -1.5n \log^2 n-\Theta(n \log n)
.
\]
\end{theorem}
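The algebraic simplification in Theorem~\ref{theorem:horton_sets} can be verified in exact rational arithmetic. In the Python sketch below (ours), the remainder term $\frac12 nz + \frac{11}{2}n + 4z - 2$ is our explicit expansion of the $\Theta(n \log n)$ term; the test confirms the identity and that this remainder is bounded between multiples of $n \log n$.

```python
from fractions import Fraction as Fr

def horton_3holes_closed(z):
    """The closed form 4.25*2^(2z) + 2^z*(-3z^2/2 - z/2 - 5.5) - 4z + 2."""
    return (Fr(17, 4) * 4**z
            + 2**z * (Fr(-3, 2) * z * z - Fr(1, 2) * z - Fr(11, 2))
            - 4 * z + 2)

def horton_3holes_expanded(z):
    """4.25 n^2 - 1.5 n (log n)^2 minus an explicit Theta(n log n) remainder,
    for n = 2^z (our expansion of the theorem's asymptotic form)."""
    n = 2**z
    remainder = Fr(1, 2) * n * z + Fr(11, 2) * n + 4 * z - 2
    return Fr(17, 4) * n * n - Fr(3, 2) * n * z * z - remainder
```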
For positive integers $k \geq 3$ and $n$, let $h_k^p(n)$ be the minimum number of \projective $k$-holes in any set of $n$ points in $\RPP$ in general position.
Theorem~\ref{theorem:horton_sets} gives
$h^p_k(n) \leq O(n^2)$ for every $k \leq 7$ and Theorem~\ref{thm:holesExistence} gives
$h^p_k(n) = 0$ for every $k > 7$.
In contrast to the planar case, each sufficiently large Horton set in $\RPP$ contains a \projective $7$-hole.
We do not have examples of large point sets in $\RPP$ without \projective 7-holes, and thus it is natural to ask whether there are \projective 7-holes in every sufficiently large point set in $\RPP$.
We believe this to be the case; see Subsection~\ref{subsec:openProblems} for more open problems.
We also prove that every set of at least 7 points in $\RPP$ contains a \projective 5-hole while there are sets of 6 points in $\RPP$ with no \projective 5-hole.
Interestingly, every set of 5 points in $\RPP$ contains a \projective 5-hole.
This is in contrast with the situation in the plane, where we have $h_k(n) \leq h_k(n+1)$ for every $k$ and $n$, which can be seen by removing a vertex of the convex hull of a set $S$ of $n+1$ points from $\mathbb{R}^2$ with $h_k(n+1)$ \affine $k$-holes.
\begin{proposition}
\label{proposition:projective5holes}
Every set of at least 7 points in general position in $\RPP$ contains a \projective 5-hole.
Also,
$h_5^{p}(5)=1$ and $h_5^{p}(6)=0$.
\end{proposition}
The proof of Proposition~\ref{proposition:projective5holes} can be found
\ifappendix
in~Appendix~\ref{sec:projective5holes_proof}.
\else
in~\cite{arxiv_version}.
\fi
The following theorem shows that for some point sets the number of holes is substantially larger in~$\RPP$ than in $\mathbb{R}^2$.
\begin{theorem}\label{thm:construction}
For every $k \in \{3,\dots,6\}$ and every positive integer $n$,
there is a set $S_k(n)$ of $n$ points in general position in $\mathbb{R}^2\subset \RPP$ such that $S_k(n)$ has
$O(n^2)$ \affine $k$-holes in $\mathbb{R}^2$ and
$\Omega(n^{3-\frac{5}{3k}})$ \projective $k$-holes.
More generally,
for every $k \in \{3,\dots,6\}$, every real number $\alpha\in[0,k-2]$, and each positive integer $n$,
there is a set $S^\alpha_k(n)$ of $n$ points in general position in $\mathbb{R}^2\subset\RPP$ such that $S^\alpha_k(n)$ has
$O(n^{2+\alpha})$ \affine $k$-holes in $\mathbb{R}^2$ and
$\Omega(n^{2+\beta})$ \projective $k$-holes,
where
\[
\beta:=\begin{cases}
1-\frac{5}{3k} + \alpha\cdot\frac{k-1}k & \ \text{if }\ 0\le \alpha \le \frac{2k-5}3,\\
(1+\alpha)\frac{k-2}{k-1} & \ \text{if }\ \frac{2k-5}3 <\alpha \le k-2.
\end{cases}
\]
\end{theorem}
The following result shows a significant difference between the number of holes of all sizes in the plane and in the real projective plane.
\begin{theorem}\label{thm:construction2}
For any two positive integers $n$ and $x$ with $x\le 2^{n/2}$, there is a set $S(n,x)$ of $n$ points in general position in $\mathbb{R}^2\subset\RPP$ containing at most $O(x+n^2)$ \affine holes in $\mathbb{R}^2$ and at least $\Omega(x^2)$ \projective holes.
\end{theorem}
In general, we can show that every set $P$ of $n$ points from $\mathbb{R}^2 \subset \RPP$ contains at least quadratically many
\projective holes which are not \affine holes in~$\mathbb{R}^2$.
\begin{proposition}
\label{proposition:manyprojective34holes}
Let $P$ be a set of $n$ points in
$\mathbb{R}^2 \subset \RPP$ in general position,
and let $h_k^p(P)$
be the number of \projective $k$-holes in~$P$.
Then,
\[
h_3^p(P)
\ge h_3(P) + \frac{1}{3} \binom{n}{2} \;\;\;\;\;\text{ and }\;\;\;\;\;
h_4^p(P)
\ge h_4(P) + \frac{1}{2} \left(\binom{n}{2}-3n+3\right),
\]
where
$h_k(P)$
is the number of \affine $k$-holes in $P$ in the plane $\mathbb{R}^2$.
\end{proposition}
The proof of Proposition~\ref{proposition:manyprojective34holes}
\ifappendix
is in Appendix~\ref{sec:more_3holes_and_4holes_proof}.
\else
can be found in~\cite{arxiv_version}.
\fi
Together with the best known lower bounds on $h_3(n)$ and $h_4(n)$ by Aichholzer et al.~\cite{ABHKPSVV2020_JCTA}, the estimates from Proposition~\ref{proposition:manyprojective34holes} give
\[
h_3^p(n)
\ge \frac{7}{6}n^2 + \Omega(n \log^{2/3} n) \;\;\;\;\;\text{ and }\;\;\;\;\;
h_4^p(n)
\ge \frac{3}{2}n^2 + \Omega(n \log^{3/4} n).
\]
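As a quick arithmetic check (ours, not from the cited sources), the displayed leading coefficients follow by combining the planar leading constants $c_3$ and $c_4$ (inferred here by reading the display backwards, so $c_3=1$ and $c_4=5/4$ are assumptions) with the quadratic terms $\frac{1}{3}\binom{n}{2}\sim n^2/6$ and $\frac{1}{2}\left(\binom{n}{2}-3n+3\right)\sim n^2/4$ from Proposition~\ref{proposition:manyprojective34holes}:

```python
# Leading-coefficient arithmetic for the displayed bounds.  The planar
# constants c3 = 1 and c4 = 5/4 are inferred from the display (assumption);
# (1/3)*C(n,2) ~ n^2/6 and (1/2)*(C(n,2)-3n+3) ~ n^2/4 as n grows.
from fractions import Fraction as F

c3, c4 = F(1), F(5, 4)                    # assumed planar leading constants
assert c3 + F(1, 3) * F(1, 2) == F(7, 6)  # coefficient in the h_3^p bound
assert c4 + F(1, 2) * F(1, 2) == F(3, 2)  # coefficient in the h_4^p bound
```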
We also discuss random point sets in the real projective plane and provide the following analogue to results for random point sets in the plane~\cite{BaranyFueredi1987,Valtr1995a}.
This gives an alternative proof of the upper bound $h_3^p(n) \leq O(n^2)$.
The proof of Theorem~\ref{theorem:random_sets} can be found
\ifappendix
in Appendix~\ref{sec:random_sets}.
\else
in~\cite{arxiv_version}.
\fi
\begin{theorem}
\label{theorem:random_sets}
Let $K$ be a compact convex subset in $\mathbb{R}^2$ of unit area.
If $P$ is a set of $n$ points chosen uniformly and independently at random from $K \subset \mathbb{R}^2 \subset \RPP$,
then the expected number of \projective 3-holes in $P$ is in $\Theta(n^2)$.
Moreover, the expected number of \projective holes in~$P$, which are not \affine holes in $\mathbb{R}^2$, is in $\Theta(n^2)$.
\end{theorem}
Last but not least, we discuss the computational complexity
of determining the number of $k$-gons and $k$-holes in a given point set.
Mitchell et al.~\cite{MitchellRSW1995} gave an $O(m n^3)$ time algorithm to compute, for all $k=3,\ldots,m$, the number of $k$-gons and $k$-holes in a given set $S$ of $n$ points in the Euclidean plane.
Their algorithm also counts $k$-islands in $O(m^2 n^4)$ time.
Here, an \emph{(\affine) $k$-island}
in a finite point set $S$ in the plane in general position is
the convex hull of a $k$-tuple $I$ of points from $S$
that does not contain any point from $S \setminus I$.
Note that
a convex set in~$\mathbb{R}^2$
is a $k$-hole in~$S$ if and only if it is a $k$-gon and a $k$-island in~$S$.
Here, we consider the algorithmic aspects of the analogous problems in the real projective plane.
By modifying the algorithm by Mitchell et al.~\cite{MitchellRSW1995}, we can efficiently
compute the number of \projective $k$-gons, $k$-holes, and $k$-islands of a finite set in the real projective plane.
Here, a \emph{\projective $k$-island} in a finite set $P$ of points from $\RPP$ in general position is
a \projective convex hull of a $k$-tuple $I$ of points from $P$
that does not contain any point from $P \setminus I$.
Note that,
similarly as in the affine case,
a convex set in~$\RPP$
is a \projective $k$-hole in~$P$ if and only if it is a \projective $k$-gon and a \projective $k$-island in~$P$.
\begin{theorem}
\label{thm:efficient_counting}
Let $P$ be a set of $n$ points in $\mathbb{R}^2 \subset \RPP$ in general position.
Assuming a RAM model of computation
which can perform arithmetic operations
on integers in constant time,
we can compute the total number of \projective $k$-gons and $k$-holes in~$P$
for $k=3,\ldots,m$ in $O(m n^4)$ time and $O(m n^2)$ space.
The number of \projective $k$-islands in~$P$ for $k=3,\ldots,m$ can be computed
in $O(m^2 n^5)$ time and $O(m^2 n^3)$ space.
\end{theorem}
\section{Discussion}
\label{subsec:openProblems}
The study of extremal questions about finite point sets in $\RPP$ suggests a wealth of interesting open problems and topics one can consider.
Here, we draw attention to some of them.
By Theorem~\ref{thm:holesExistence}, there are arbitrarily large finite point sets in $\RPP$ that avoid $k$-holes for any $k \geq 8$.
On the other hand, the result by Gerken~\cite{Gerken2008} and Nicolas~\cite{Nicolas2007} implies that every sufficiently large finite subset of $\RPP$ contains a \projective $k$-hole for any $k \leq 6$, as an analogous statement is true already in the affine setting.
The existence of \projective 7-holes in sufficiently large finite subsets of $\RPP$ remains an intriguing open problem, and we believe that \projective 7-holes can always be found in large point sets in $\RPP$.
\begin{conjecture}
\label{problem:7holes}
Every sufficiently large point set in $\RPP$ contains a \projective $7$-hole.
\end{conjecture}
As we already mentioned, point sets in the plane satisfy $h_k(n) \leq h_k(n+1)$ for all $k$ and~$n$.
By Proposition~\ref{proposition:projective5holes}, this is no longer true in the real projective plane.
However, we do not know of any other example violating this inequality except for the single case of 5-holes in $\RPP$.
Thus, it is natural to ask the following question.
\begin{problem}
\label{problem:subsetproperty}
Is it true that for every integer $k \geq 3$ there is $n_0=n_0(k)$ such that $h^p_k(n+1) \geq h^p_k(n)$ for every $n \geq n_0$?
\end{problem}
We have shown in Theorem~\ref{theorem:horton_sets}
that Horton sets contain only $\Theta(n^2)$ \projective $k$-holes.
Since Horton sets also contain only $\Theta(n^2)$ \affine $k$-islands \cite{FabilaMonroyHuemer2012}, which is asymptotically minimal,
we wonder whether the same bound applies to \projective $k$-islands.
\begin{problem}
\label{problem:islands}
For every fixed integer $k \geq 3$, is the minimum number of \projective $k$-islands among all sets of $n$ points from $\RPP$ in general position in $\Theta(n^2)$?
\end{problem}
We have shown in Theorem~\ref{theorem:random_sets} that the expected number of 3-holes in random sets of $n$ points from $\RPP$ is in $\Theta(n^2)$.
In the plane, we know that the expected number of $k$-holes and $k$-islands is in $\Theta(n^2)$ for any fixed $k$~\cite{bsv2021_partI,bsv2021_partII}.
Can analogous estimates be obtained also in the real projective plane?
We note that the lower bound $\Omega(n^2)$ follows from the planar case.
\begin{problem}
\label{problem:quadraticrandom}
Let $K$ be a compact convex subset in $\mathbb{R}^2$ of unit area and let $k \ge 3$.
Is the expected number of \projective $k$-holes and $k$-islands in a set of $n$ points, which is chosen uniformly and independently at random from $K \subset \mathbb{R}^2 \subset \RPP$, in $\Theta(n^2)$?
\end{problem}
Besides all these Erd\H{o}s--Szekeres-type problems related to $k$-gons, $k$-holes and $k$-islands,
many other classical problems
have natural analogues in the projective plane.
In the following, we discuss the problem of \emph{crossing families}.
Let $P$ be a finite set of points in the plane.
For a positive integer $n$, let $T(n)$ be the largest number such that any set of $n$ points in general position in the plane determines at least $T(n)$ pairwise crossing segments.
The problem of estimating $T(n)$ was introduced in the 1990s by Erd\H{o}s et al.~\cite{crossingFamilies} who proved $T(n) \geq \Omega(\sqrt{n})$.
Since then it was widely conjectured that $T(n) \in \Theta(n)$.
However, nobody has been able to improve the lower bound from \cite{crossingFamilies} until a recent breakthrough by Pach, Rubin, and Tardos~\cite{pachRubTar21} who showed $T(n) \geq n^{1-o(1)}$.
In $\RPP$, every pair of points determines a projective line that can be divided into two projective line segments.
Given $2k$ points $p_1,\dots,p_k,q_1,\dots,q_k$ from $\RPP$, we say that they form a \emph{\projective crossing family} of size $k$ if, for each $i$, we can choose a projective line segment $s_i$ between $p_i$ and $q_i$ such that for any pair $i,j$ with $1 \leq i < j \leq k$ the projective line segments $s_i$ and $s_j$ intersect.
We can then ask about the maximum size $T^p(n)$ of a \projective crossing family in a set $P$ of $n$ points from $\RPP$.
Note that any set of $k$ pairwise crossing segments spanned by~$P$
that lie in a plane $\rho \subset \RPP$ gives a \projective crossing family of size $k$ in~$P$.
Thus, proving a linear lower bound might be simpler for $T^p(n)$ than for~$T(n)$.
\begin{problem}
\label{problem:crossingFamily}
Is the maximum size $T^p(n)$ of a \projective crossing family in a set of $n$ points from $\RPP$ in general position in $\Theta(n)$?
\end{problem}
All the notions we discussed (general position, convex position, $k$-gons, $k$-holes, $k$-islands, crossing families, and various others)
naturally extend
to higher dimensional Euclidean spaces and
also to higher dimensional projective spaces.
In fact, $k$-gons and $k$-holes in higher dimensional Euclidean spaces
are currently quite actively studied:
\begin{itemize}
\item
One central open problem in higher
dimensions is to determine the largest value $H(d)$ such that every
sufficiently large set in $\mathbb{R}^d$ contains an $H(d)$-hole.
While $H(2)=6$ is known,
the gap between the upper
and the lower bound for $H(d)$ remains huge for $d \geq 3$.
\cite{bch20,ConlonLim2021,Scheucher2021_SAT_highdim,VALTR1992b}
\item
For sets of $n$ points sampled independently and uniformly at random from a unit-volume convex body in $\mathbb{R}^d$,
the expected number of $k$-holes and $k$-islands is in $\Theta(n^d)$. \cite{bsv2021_partI,bsv2021_partII}
\item
While the $k$-gons and $k$-holes can be counted efficiently in the Euclidean plane,
determining the size of the largest gon or hole is NP-hard
already in~$\mathbb{R}^3$.
\cite{GiannopoulosKnauerWerner2013}
\end{itemize}
\medskip
These analogues
in $\RPP$ and in high dimensional projective spaces
are interesting by themselves, but they might also shed new light on the original problems.
We plan to address further such analogues and we hope to also motivate some readers for this line of research.
\section{Proof of Theorem~\ref{thm:projective_k_gon_theorem}}
\label{sec:kgons_proof}
Here, we show, for every integer $k \geq 2$, almost matching bounds on the minimum size $ES^p(k)$ that guarantees the existence of a \projective $k$-gon in every set of at least $ES^p(k)$ points from~$\RPP$.
More precisely, we prove that there are constants $c,c'>0$ such that
\[
2^{k-c\log{k}} \leq ES^p(k) \leq 2^{k+c'\sqrt{k\log{k}}}.
\]
The upper bound follows from~\eqref{eq-ESbounds}, thus it remains to prove the lower bound on $ES^p(k)$.
To do so, we construct a set $S$ of $2^{k-c\log{k}}$ points in $\RPP$ with no \projective $k$-gon.
By Observation~\ref{obs-gons}, it suffices to
show that $S$ contains
no $k$ points in convex position and no double chain of size~$k$.
To obtain such sets, we employ a recursive construction by Erd\H{o}s and Szekeres~\cite{ErdosSzekeres1935}.
By choosing $c$ sufficiently large, we can assume $k \geq 7$.
Let $X$ and $Y$ be finite sets of points in the Euclidean plane.
We say that \emph{$X$ lies deep below $Y$} and \emph{$Y$ lies high above $X$} if each point of $X$ lies below every line through two points of $Y$, and
each point of $Y$ lies above every line through two points of $X$.
For $k \geq 2$, we say that a set $C$ of $k$ points in the plane is a \emph{$k$-cup} if its points lie on the graph of a convex function and we call $C$ a \emph{$k$-cap} if its points lie on the graph of a concave function.
We now construct the set $S$ inductively as follows.
For $a \leq 2$ or $u \leq 2$, let $S_{a,u}$ be a set consisting of a single point from~$\mathbb{R}^2$ and note that $S_{a,u}$ then contains neither a 2-cap nor a 2-cup.
For integers $a,u \geq 3$, we let $S_{a,u}$ be a set obtained by placing a copy of $S_{a,u-1}$ to the left and deep below a copy of $S_{a-1,u}$.
It follows by induction that $|S_{a,u}| = \binom{a+u-4}{a-2} = \binom{a+u-4}{u-2}$ and that $S_{a,u}$ contains neither an $a$-cap nor a $u$-cup; see~\cite{ErdosSzekeres1935}.
Finally, we let $S = S_{\lfloor k/2\rfloor-1,\lfloor k/2\rfloor-1}$.
Since $k \geq 7$, we have $\lfloor k/2\rfloor-1 \geq 2$ and thus the set $S$ is well-defined.
Note that $|S| = \binom{\lfloor k/2\rfloor+\lfloor k/2\rfloor-6}{\lfloor k/2\rfloor-3} \geq 2^{k-c\log{k}}$ for some constant $c>0$.
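Both the Pascal-type recursion for $|S_{a,u}|$ and the exponential lower bound on the size of $S = S_{\lfloor k/2\rfloor-1,\lfloor k/2\rfloor-1}$ can be checked mechanically. The sketch below is ours, not part of the proof, and the constant $c=6$ is one admissible choice for the range tested, not claimed optimal:

```python
# Sanity check of the recursive construction: |S_{a,u}| satisfies
# |S_{a,u}| = |S_{a,u-1}| + |S_{a-1,u}| with |S_{a,u}| = 1 for a <= 2 or
# u <= 2, which is Pascal's rule for C(a+u-4, a-2); the final set
# S = S_{m,m} with m = floor(k/2) - 1 then has at least 2**(k - c*log2(k))
# points (c = 6 below is a concrete admissible constant, not optimal).
from functools import lru_cache
from math import comb, log2

@lru_cache(maxsize=None)
def size(a, u):
    if a <= 2 or u <= 2:
        return 1
    return size(a, u - 1) + size(a - 1, u)

for a in range(2, 15):
    for u in range(2, 15):
        assert size(a, u) == comb(a + u - 4, a - 2)

for k in range(7, 200):
    m = k // 2 - 1
    assert size(m, m) >= 2 ** (k - 6 * log2(k))
```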
The set $S$ does not contain $k$ points in convex position, as such a $k$-tuple contains either a $(\lfloor k/2\rfloor-1)$-cap or a $(\lfloor k/2\rfloor-1)$-cup.
Thus, it remains to show that $S$ does not contain a double chain of size $k$.
Suppose for contradiction that $W$ is
a double chain $k$-wedge with vertex set $A \cup B \subseteq S$, where $A=\{s_1,\dots,s_m\}$ and $B=\{r_1,\dots,r_{k-m}\}$ for some $m$ with $1 \leq m \leq k-1$, using the notation from Subsection~\ref{sec:prelim}.
We let $\ell_1$ be the line $\overline{s_1r_{k-m}}$ and $\ell_2$ be the line $\overline{s_m r_1}$.
Let $a \leq \lfloor k/2\rfloor-1$ and $u \leq \lfloor k/2\rfloor-1$ be two numbers such that $W$ has all vertices in $S_{a,u}$ but it does not have all vertices in $S_{a-1,u}$ nor in $S_{a,u-1}$.
Let $D$ and $U$ be the copies of $S_{a,u-1}$ and $S_{a-1,u}$, respectively, forming $S_{a,u}$.
We can assume without loss of generality that $|\{s_1,s_m,r_1,r_{k-m}\} \cap D| \geq 2$, as the other case $|\{s_1,s_m,r_1,r_{k-m}\} \cap U| \geq 2$ is treated analogously.
We distinguish the following two cases.
\textbf{Case 1:}
Assume $|\{s_1,s_m,r_1,r_{k-m}\} \cap D| = 2$.
Then two points from $\{s_1,s_m,r_1,r_{k-m}\}$ are in $D$ and the other two points are in $U$.
By symmetry, we can assume $s_1 \in U$.
We distinguish the following two subcases, which are shown in
Figure~\ref{fig:projective_gons_2}.
Note that, since the line segments $s_1r_{k-m}$ and $s_mr_1$ cross, the cases $s_1,r_{k-m} \in U$ and $r_1,s_m \in D$ cannot occur.
\begin{figure}[htb]
\centering
\includegraphics[page=4,width=\textwidth]{figs/projective_gons}
\caption{The cases in the proof of Theorem~\ref{thm:projective_k_gon_theorem}.}
\label{fig:projective_gons_2}
\end{figure}
\textbf{Case 1a:}
Assume $s_1,s_m \in U$ and $r_1,r_{k-m} \in D$.
We assume that $s_1$ is to the left of $s_m$, otherwise we
reverse the order of the elements in $A$ and~$B$
which, in particular, exchanges the roles of $s_1$ and~$s_m$.
Since $U$ is high above $D$, the line $\overline{s_1r_{k-m}}$ is almost vertical and separates $s_m$ from~$r_1$, where
$s_1$ is to the left of~$s_m$ and
$r_1$ is to the left of~$r_{k-m}$.
All points of $A \setminus\{s_1\}$ lie to the right of $\overline{s_1r_{k-m}}$ and to the left of $\overline{s_mr_{k-m}}$.
Since $D$ is deep below $U$, no point of $D$ satisfies these two conditions.
Hence all points of $A$ lie in~$U$.
An analogous argument shows that
all points of $B$ lie in~$D$.
Since $A$ forms an $m$-cup in $U$ and $B$ forms a $(k-m)$-cap in~$D$,
we have $m \leq u - 1$ and $k-m \leq a-1$.
Consequently, $k = m + (k-m) \leq (u-1) + (a-1) = a + u - 2 \leq \lfloor k/2\rfloor + \lfloor k/2\rfloor -4 < k$, which is impossible.
\textbf{Case 1b:}
Assume $s_1,r_1 \in U$ and $s_m,r_{k-m} \in D$.
We assume that $s_1$ is to the left of $r_1$, as otherwise we exchange the roles of $A$ and $B$
which, in particular, exchanges the roles of $s_1$ and~$r_1$.
Since $U$ is high above $D$, the line $\overline{s_1r_{k-m}}$ is almost vertical and separates $s_m$ from $r_1$ and
$s_m$ is to the left of $r_{k-m}$.
All points of $A \setminus \{s_1\}$ lie to the left of the almost vertical line $\overline{s_1r_{k-m}}$
and to the right of the almost vertical line $\overline{s_1s_m}$.
Hence, $A \cap U = \{s_1\}$ and all points from $A \setminus \{s_1\}$ lie in $D$.
The set $A \setminus \{s_1\}$ forms an $(m-1)$-cup in $D$ and thus $m-1 \leq u-1$.
An analogous argument shows that $B \setminus \{r_1\}$ forms a $(k-m-1)$-cap in $D$ and thus $(k-m)-1 \leq a -1$.
In total, we obtain $k= (m-1) + (k-m-1) + 2 \leq (u-1) + (a-1) +2 \leq \lfloor k/2 \rfloor + \lfloor k/2 \rfloor -2 < k$, which is again impossible.
\textbf{Case 2:} Assume $|\{s_1,s_m,r_1,r_{k-m}\} \cap D| = 3$.
We can assume that either $s_1$ or $s_m$ lies in~$U$,
as otherwise we exchange the roles of $A$ and $B$.
Furthermore, we can assume that $s_1 \in U$, as otherwise we reverse the order of the elements in $A$ and~$B$.
Since $U$ is high above $D$, the line $\overline{s_1r_{k-m}}$ is almost vertical and separates $r_1$ and $s_m$.
Since all vertices of $W$ lie either to the left of the almost vertical line $\overline{s_1s_m}$ and to the right of the almost vertical line $\overline{s_1r_1}$ or to the right of $\overline{s_1s_m}$ and to the left of $\overline{s_1r_1}$,
the point $s_1$ is the only vertex of $W$ in $U$.
Hence, the points of $A \setminus \{s_1\}$ lie in $D$ and form an $(m-1)$-cup in $D$.
Thus, $m-1 \leq u$.
The points of $B$ all lie in $D$ and form a $(k-m)$-cap in $D$.
Thus, $k-m \leq a-1$.
Altogether, we have $k = (m-1)+1+(k-m) \leq u+1+a-1 \leq \lfloor k/2\rfloor + \lfloor k/2\rfloor -2<k$, which is impossible.
\medskip
Since there is no case left, we have a contradiction with the assumption that $W$ is a double chain $k$-wedge with vertices in $S$.
This completes the proof of Theorem~\ref{thm:projective_k_gon_theorem}.
\section{Sketch of the proofs of Theorem~\ref{thm:holesExistence} and~Theorem~\ref{theorem:horton_sets}}
\label{sec:horton_sets_sketch}
Here, we sketch the proof of the fact that there are arbitrarily large finite sets of points from $\RPP$ in general position with no \projective 8-hole and with only quadratically many \projective $k$-holes for every $k \leq 7$. For the full proof
\ifappendix
see Appendix~\ref{sec:horton_sets_proof}.
\else
see~\cite{arxiv_version}.
\fi
The construction uses so-called \emph{Horton sets} defined by Valtr~\cite{Valtr1992a}.
Let $H$ be a set of $n$ points $p_1,\ldots,p_n$ from $\mathbb{R}^2$, sorted according to increasing $x$-coordinates.
Let $H_0$ be the set of points $p_i$ with odd $i$ and let $H_1$ be the set of points $p_i$ with even $i$.
The set $H$ is \emph{Horton} if either $|H| \le 1$, or $|H| \ge 2$, both $H_0$ and $H_1$ are Horton, and $H_0$ lies deep below or high above $H_1$. In the latter case, we call $H_0$ and $H_1$ the \emph{layers} of~$H$.
As in Section~\ref{sec:kgons_proof}, we say that \emph{$H_0$ lies deep below $H_1$} and \emph{$H_1$ lies high above $H_0$} if each point of $H_0$ lies below every line spanned by two points of $H_1$, and
each point of $H_1$ lies above every line spanned by two points of $H_0$.
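For concreteness, here is one standard way to realize a Horton set with integer coordinates, together with a brute-force check of the recursive definition. This sketch is ours; the vertical offset $n^2\cdot(\text{range}+1)$ is a generous choice that makes the deep-below condition easy to verify, not a value taken from the literature:

```python
# Build integer coordinates of a Horton set on x = 0, ..., n-1 (n a power
# of two) and verify the deep below / high above condition by brute force.
from itertools import combinations

def horton_y(n):
    """y-coordinates of a Horton set; H_0 on even, H_1 on odd positions."""
    if n == 1:
        return [0]
    half = horton_y(n // 2)
    off = n * n * (max(half) - min(half) + 1)  # generous vertical offset
    ys = [0] * n
    ys[0::2] = half                            # H_0: kept low
    ys[1::2] = [y + off for y in half]         # H_1: lifted high above
    return ys

def deep_below(low, high):
    """Each point of `low` below every line through two points of `high`,
    and each point of `high` above every line through two points of `low`."""
    def side(p, a, b):                         # >0 iff p above line ab
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])
    return (all(side(p, a, b) < 0 for p in low for a, b in combinations(high, 2))
            and all(side(p, a, b) > 0 for p in high for a, b in combinations(low, 2)))

def is_horton(pts):
    """Brute-force check of the recursive definition of a Horton set."""
    if len(pts) <= 1:
        return True
    pts = sorted(pts)
    h0, h1 = pts[0::2], pts[1::2]
    return ((deep_below(h0, h1) or deep_below(h1, h0))
            and is_horton(h0) and is_horton(h1))
```

For example, `is_horton(list(enumerate(horton_y(16))))` holds, while four collinear points fail the check.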
For a nonempty subset $A$ of $H$,
we define the \emph{base} of $A$ in $H$ as the smallest recursive layer of $H$ containing $A$.
As in Section~\ref{sec:kgons_proof},
we use the terms \emph{$k$-cup} and \emph{$k$-cap}.
A \emph{cap} is a set that is a $k$-cap for some integer $k$ and, analogously, a \emph{cup} is a set that is a $k$-cup for some $k$.
A cap $C$ is \emph{open} in a set $S \subseteq \mathbb{R}^2$ if there is no point of $S$ below $C$, that is,
for each pair of points $c_1,c_2$ from $C$,
no point of $S$ has its $x$-coordinate between $x(c_1)$ and $x(c_2)$
and lies below the line $\overline{c_1c_2}$.
Analogously, a cup in $S$ is \emph{open} in $S$ if there is no point of $S$ above it.
\subsection{\texorpdfstring{Quadratic upper bounds on the number of $k$-holes}{Quadratic upper bounds on the number of k-holes}}\label{subsection:qudratic-upper-bound}
We show that any Horton set $H$ on $n$ points embedded in the real projective plane contains no 8-holes and at most $O(n^2)$ $k$-holes for every $k \in \{3,\dots,7\}$.
By Observation~\ref{obs-holes}, it suffices to show that any Horton set $H$ on $n$ points in the plane contains neither an 8-hole nor an empty double chain 8-wedge
and that, for every $k \in \{3,\dots,7\}$, $H$ contains only at most $O(n^2)$ $k$-holes and empty double chain $k$-wedges.
Valtr~\cite{Valtr1992a} showed that any Horton set in the plane does not contain 7-holes and that it does not contain any open 4-cap nor an open 4-cup.
B\'ar\'any and Valtr~\cite{BaranyValtr2004} showed that the number of $k$-holes in any Horton set of size $n$ is at most $O(n^2)$ for every $k \in \{3,\dots,6\}$.
Thus, it suffices to estimate the number of double chain $k$-wedges in Horton sets.
Let $H$ be a Horton set with $n$ points in the plane.
We first show that the number of open caps in $H$ is at most $O(n)$ and that an analogous statement holds for open cups.
To prove this claim, it suffices to consider only open 2-caps and 3-caps, as $H$ does not contain open 4-caps.
Assuming first that $n$ is a power of 2, we proceed by induction and show that the number $t_2(H)$ of open 2-caps in $H$ equals $2n-\log_2{(n)}-2$ and that the number $t_3(H)$ of open 3-caps in $H$ equals $n-\log_2{(n)}-1$.
Both expressions hold for $n=1$ and thus we assume $n \geq 2$.
Let $p_1,\dots,p_n$ be the points of $H$ ordered according to increasing $x$-coordinates and let $H_0=L(H)$ and $H_1=U(H)$ be the layers of $H$, labeled so that $H_0$ lies deep below $H_1$.
Every line segment $p_ip_{i+1}$ forms an open 2-cap in $H$ and there is no other open 2-cap in $H$ with points in $H_0$ and $H_1$, as there is a point of $H_1$ above any such line segment $p_ip_j$ with $j>i+1$.
Since no two points from $H_1$ form an open 2-cap in $H$, we have $t_2(H_0) + n-1$ open 2-caps in $H$.
By the induction hypothesis, it follows $t_2(H)=2n-\log_2{(n)}-2$.
To determine the number of open 3-caps in $H$,
note that every triple $p_ip_{i+1}p_{i+2}$ with odd $i$ forms an open 3-cap in $H$.
In fact, there is no other open 3-cap in~$H$ with a point in $H_0$ and also in~$H_1$, as there is a point of $H_1$ above any such line segment $p_ip_j$ with $j>i+1$.
Since no three points in $H_1$ form an open 3-cap in $H$, we obtain $t_3(H_0) + n/2-1$ open 3-caps in $H$.
The induction hypothesis then gives $t_3(H)=n-\log_2{(n)}-1$.
If $n$ is not a power of two, we consider a Horton set $H'$ of size $m$ instead, where $m$ is the smallest power of 2 larger than~$n$, and denote its leftmost $n$ points by~$H''$.
Since $H''$ is also a Horton set of $n$ points and
contains the same open caps as~$H$,
we obtain $t_2(H) \le t_2(H') < 4n$ and $t_3(H) \le t_3(H') < 2n$.
Overall, the number of open caps in $H$ is at most~$O(n)$.
With an analogous argument we obtain the same upper bound on the number of open cups~in~$H$.
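As a quick sanity check on the closed forms (a numerical check we add, not part of the proof), the recurrences $t_2(n)=t_2(n/2)+n-1$ and $t_3(n)=t_3(n/2)+n/2-1$ from the argument above can be unrolled for powers of two:

```python
# Verify the closed forms for the numbers of open 2-caps and 3-caps in a
# Horton set of size n (n a power of 2), using the recurrences from the
# text: t2(n) = t2(n/2) + n - 1 and t3(n) = t3(n/2) + n/2 - 1.
from math import log2

def t2(n):
    return 0 if n == 1 else t2(n // 2) + n - 1

def t3(n):
    return 0 if n == 1 else t3(n // 2) + n // 2 - 1

for e in range(0, 11):
    n = 2 ** e
    assert t2(n) == 2 * n - int(log2(n)) - 2
    assert t3(n) == n - int(log2(n)) - 1
```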
We now proceed with the proof by induction on $n$.
Clearly, the claims about the double chain $k$-wedges are true in any Horton set with one or two points, so we assume $n \geq 3$.
For some integer $k \geq 3$, let $W \subseteq H$ be a double chain $k$-wedge that is empty in $H$.
We will show that $k \leq 7$ and estimate the number of such double chain $k$-wedges for each $k \in \{3,\dots,7\}$.
If $W$ is contained in $H_0$ or in $H_1$, then $k \leq 7$ by the induction hypothesis.
Thus, we assume that $W$ contains a point from $H_0$ and also from $H_1$.
An elaborate case analysis
shows that $H$ contains no double chain 8-wedge that is empty in $H$ and that has points in $H_0$ and~$H_1$;
\ifappendix
see Appendix~\ref{sec:horton_sets_proof}.
\else
see~\cite{arxiv_version}.
\fi
By the induction hypothesis, the sets $H_0$ and $H_1$ do not contain any double chain 8-wedge that is empty in~$H_0$ and in $H_1$, respectively.
Since every double chain 8-wedge that is contained in $H_i$ and is empty in $H$ is also empty in $H_i$ for every $i \in \{0,1\}$, we see that there is no double chain 8-wedge in~$H$ that is empty in~$H$.
This completes the proof of Theorem~\ref{thm:holesExistence}.
Let $k \in \{3,\dots,7\}$.
For the quadratic upper bounds, it can be shown that there is a constant $c$ such that $H$ contains at most $cn^2$ double chain $k$-wedges that are empty in $H$ and that have points in $H_0$ and~$H_1$
\ifappendix
(again, see Appendix~\ref{sec:horton_sets_proof}).
\else
(again, see~\cite{arxiv_version}).
\fi
Altogether, the number $w_k(H)$ of empty double chain $k$-wedges in $H$ satisfies $w_k(H) \leq w_k(H_0) + w_k(H_1) + cn^2$.
Solving this linear recurrence with the initial condition $w_k(H')=0$ for any set $H'$ with $|H'| =1$ gives $w_k(H) \leq O(n^2)$.
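For completeness, unrolling the recurrence (assuming for simplicity that both layers have exactly $n/2$ points at every level) confirms the quadratic bound:
\[
w_k(H) \le \sum_{i=0}^{\log_2 n} 2^i \cdot c\left(\frac{n}{2^i}\right)^2 = cn^2 \sum_{i=0}^{\log_2 n} 2^{-i} \le 2cn^2 = O(n^2).
\]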
This completes the proof of the first part of Theorem~\ref{theorem:horton_sets}.
\section{Outline of the construction giving Theorems~\ref{thm:construction} and \ref{thm:construction2}}
\label{sec:two_constructions_outline}
Here we outline the construction giving Theorems~\ref{thm:construction} and \ref{thm:construction2}.
For the full proof,
\ifappendix
see Appendix~\ref{sec:two_constructions_proofs}.
\else
see~\cite{arxiv_version}.
\fi
We are given a $k\in\{3,\dots,6\}$ and a positive integer~$n$.
Our construction uses two integer parameters $a,b\ge2$ satisfying $a\le n^{1/3}$ and $ab\le n$.
In the proof of Theorem~\ref{thm:construction}, these parameters depend on the value of the parameter $\alpha$ in the theorem. For the proof of Theorem~\ref{thm:construction2},
where we are given an integer parameter~$x$,
we choose
$a:=2$ and $b\approx \log_2 (x)$.
Assuming $\sqrt n$ is an integer, we start the construction with the $\sqrt n\times\sqrt n$ integer lattice in the plane, denoted by $L(\sqrt{n}\times \sqrt{n})$,
and we fix a subset $C_3$ of $\Theta(n^{1/3})$ points in convex position in $L(\sqrt{n}\times \sqrt{n})$.
We then perturb the lattice to get a so-called \emph{random squared Horton set},
denoted by $H(\sqrt{n}\times \sqrt{n})$, which is a randomized version~\cite{BaranyValtr2004}
of the lattice version of so-called
Horton sets~\cite{Valtr1992a}, which generalize
the famous construction of Horton~\cite{Horton1983} of planar point sets in general position with no $7$-holes. The random squared Horton set is described in~\cite[Section~2]{BaranyValtr2004} and denoted by $\Lambda^*$ there.
We consider the $|C_3|$-element subset $C_3^H$ of $H(\sqrt{n}\times \sqrt{n})$
corresponding to $C_3$. Since $C_3$ is in convex position, the set $C_3^H$
is also in convex position. We fix an $a$-element subset $C$ of~$C_3^H$, where $a$ is the above mentioned parameter.
For each $c\in C$, we take a set $S_c$ of $b$ points lying in a very small neighborhood of $c$ and on a unit circle touching the polygon $\conv C_3^H$ in the point $c$. Since the points of $S_c$ are placed very close together on a unit circle, they are almost collinear.
We consider the set $H(\sqrt{n}\times \sqrt{n})\cap\conv C_3^H$,
and denote its union with the sets $S_c,c\in C$, by $T=T(a,b)$; see Figure~\ref{fig:square_construction}.
The set $T$ has at most $n+ab\le 2n$ points,
and it is just a little technicality to adjust its size to~$n$ at the right place in the proof.
\begin{figure}[htb]
\centering
\includegraphics{figs/square_construction}
\caption{An illustration of the set~$T(a,b)$ for $a=3$ and $b=5$ (we assume each $c$ lies in $S_c$).}
\label{fig:square_construction}
\end{figure}
We now sketch a proof that the set $T$ satisfies Theorems~\ref{thm:construction} and \ref{thm:construction2} for properly chosen parameters $a$ and $b$.
The random squared Horton set of size $n$ has $O(n^2)$ \affine holes~\cite{BaranyValtr2004,Valtr1992a}.
Likewise, using the condition $ab\le n$ and two additional facts, it can be argued that the set $T$ has at most $O(n^2)$ \affine holes that do not lie completely in some $S_c$. The two additional facts are that
(i)~the expected number of \affine holes containing a fixed point of $C$ is at most $O(n)$ and
(ii)~the expected number of \affine holes containing a fixed pair of points of $C$ is at most $O(n)$.
The number of \affine $k$-holes that lie completely in one of the sets
$S_c$ is clearly $a\binom{b}{k}<ab^k$. Thus, the total number of \affine $k$-holes in $T=T(a,b)$ is at most $O(n^2+ab^k)$.
Due to the construction, any $(k-1)$-element subset of any set $S_c$, together with any point of $T\setminus S_c$, forms a \projective $k$-hole. There are $a$ sets $S_c$ and each of them has size $b$. Thus, there are
at least $a\cdot\binom{b}{k-1}\cdot (|T|-b)=\Theta(ab^{k-1}n)$ \projective $k$-holes in $T$.
Now, Theorem~\ref{thm:construction} is obtained from the above construction
by setting the parameters $a,b$ carefully with respect to $\alpha$. Namely, for $\alpha\in[0,\frac{2k-5}3]$ we set
$a\approx n^{1/3}$ and $b:=n^{(5/3+\alpha)/k}$, and
for $\alpha\in(\frac{2k-5}3,k-2]$ we set $a\approx n^{1-(1+\alpha)/(k-1)}$ and $ b:=n^{(1+\alpha)/(k-1)}$.
We remark that in the range $\alpha\in[0,\frac{2k-5}3]$, the parameter $a$ corresponds to its maximum possible size which is the maximum size of a subset in the lattice $L(\sqrt{n}\times \sqrt{n})$ in convex position, and the parameter $b$ grows with~$\alpha$, since increased $\alpha$ allows bigger \affine holes.
In the range $\alpha\in(\frac{2k-5}3,k-2]$, the parameter $b$ continues to grow with $\alpha$ while $a$ decreases to keep the total number $ab$ of points in the sets $S_c$ below~$n$.
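The parameter choices above can be checked against the exponents in Theorem~\ref{thm:construction} by exact rational arithmetic. This is our sanity check of the exponent bookkeeping only; we write $a=n^x$ and $b=n^y$, so the construction gives $O(n^{x+ky})$ \affine $k$-holes (from $ab^k$) and $\Omega(n^{x+(k-1)y+1})$ \projective $k$-holes (from $ab^{k-1}n$):

```python
# Exponent arithmetic behind Theorem thm:construction, with a = n^x, b = n^y.
from fractions import Fraction as F

def beta(k, alpha):
    """The exponent beta from the statement of the theorem."""
    if alpha <= F(2 * k - 5, 3):
        return 1 - F(5, 3 * k) + alpha * F(k - 1, k)
    return (1 + alpha) * F(k - 2, k - 1)

for k in range(3, 7):
    for num in range(0, 3 * (k - 2) + 1):
        alpha = F(num, 3)                        # sample alpha in [0, k-2]
        if alpha <= F(2 * k - 5, 3):
            x, y = F(1, 3), (F(5, 3) + alpha) / k
        else:
            x, y = 1 - F(1 + alpha, k - 1), F(1 + alpha, k - 1)
        # affine k-holes: O(n^2 + a*b^k) = O(n^{2+alpha})
        assert x + k * y == 2 + alpha
        # projective k-holes: Omega(a*b^{k-1}*n) = Omega(n^{2+beta})
        assert x + (k - 1) * y + 1 == 2 + beta(k, alpha)
```

Note that the two branches of $\beta$ agree at the breakpoint $\alpha=\frac{2k-5}{3}$, so $\beta$ is continuous in $\alpha$.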
To obtain Theorem~\ref{thm:construction2} from the above construction, we set $a:=2$ and $b\approx\log_2x$.
Then the number of \affine holes contained in one of the two sets $S_c$ is $\approx a2^b=\Theta(x)$ and the number of other \affine holes in $T$ is again in $O(n^2)$. Any subset of the $2b$-element union of the two sets $S_c$ (note that $ab=2b$) is in convex position or forms a double chain, and thus determines a \projective hole.
Thus, $T=T(2,b)$ has at least $\Theta(2^{2b})=\Theta(x^2)$ \projective holes.
Theorem~\ref{thm:construction2} follows.
\section{Proof of Theorem~\ref{thm:efficient_counting}}
\label{sec:counting}
Let $S$ be a set of $n$ points in the Euclidean plane in general position.
Mitchell et al.~\cite{MitchellRSW1995} use a dynamic programming approach to determine, for every point $p \in S$, the number of $k$-gons and $k$-holes for $k=3,\ldots,m$, which have $p$ as the bottom-most point.
The algorithm performs in
$O(m n^2)$ time and space.
They also determine the number of $k$-islands in $S$, which have $p$ as the bottom-most point,
in $O(m^2 n^3)$ time and space.
Note that the bottom-most point is unique without loss of generality, as otherwise we perform an affine transformation which does not affect the number of $k$-gons, $k$-holes, and $k$-islands.
Here, we introduce an algorithm that efficiently computes the number of \projective $k$-gons, $k$-holes, and $k$-islands of a finite set $P$ of $n$ points from $\mathbb{R}^2 \subset \RPP$.
First, we discuss how to determine the number of \projective $k$-gons in~$P$.
Let $G$ be a \projective $k$-gon with $k\ge 3$
and let $p_1,p_2$ be two vertices that are consecutive on the boundary of~$G$.
If we start at $p_1$
and trace the boundary of~$G$ in the direction of~$p_2$,
we obtain a unique cyclic permutation $p_1,\ldots,p_k$ of the vertices of~$G$.
By starting at $p_2$ and tracing in the direction of $p_1$, we obtain the reversed cyclic permutation.
It is crucial that, independently from the starting point and the direction, only the $k$ pairs $\{p_i,p_{i+1}\}$ for $i=1,\ldots,k$ (indices modulo~$k$) appear as consecutive vertices along the boundary of~$G$.
For every pair $\{s,t\}$ of distinct points of~$P$, the algorithm will
count (with multiplicities) the number of \projective $k$-gons in~$P$ that have $s$ and $t$ as consecutive vertices on the boundary.
Since each \projective $k$-gon is counted exactly $k$ times,
we can then derive the number of \projective $k$-gons in~$P$ by a simple division by~$k$.
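The $k$-fold overcounting can be illustrated by a tiny brute force in the affine setting. This is our illustration of the double-counting step only (using plain convex position in $\mathbb{R}^2$, not projective convexity, and not the $O(mn^2)$ algorithm of Mitchell et al.); the point set below is an arbitrary small generic example:

```python
# Every convex k-gon has exactly k pairs of boundary-consecutive vertices,
# so summing, over all point pairs, the number of k-gons in which the pair
# is consecutive on the boundary and dividing by k recovers the k-gon count.
from itertools import combinations

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def hull_edges(pts):
    """Boundary edges of conv(pts) if pts are in convex position, else None."""
    pts = sorted(pts)
    def half(seq):                      # one chain of Andrew's monotone chain
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = half(pts)[:-1] + half(reversed(pts))[:-1]
    if len(hull) != len(pts):
        return None                     # some point lies inside the hull
    return {frozenset(e) for e in zip(hull, hull[1:] + hull[:1])}

P = [(0, 0), (4, 1), (2, 5), (6, 4), (1, 3), (5, 1), (3, 0)]  # generic set
k = 4
pair_count = {}                         # pair {s,t} -> #k-gons with s,t consecutive
direct = 0                              # direct count of convex k-gons
for sub in combinations(P, k):
    edges = hull_edges(list(sub))
    if edges is not None:
        direct += 1
        for e in edges:
            pair_count[e] = pair_count.get(e, 0) + 1
assert sum(pair_count.values()) == k * direct
```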
For a pair $\{s,t \}$ of distinct points from~$P$,
we choose a line $\ell_{s,t}^+$ (respectively $\ell_{s,t}^-$) that is parallel to the line $\overline{st}$ and lies very close to it on the left (right).
By removing $\ell_{s,t}^+$ and $\ell_{s,t}^-$, respectively, from $\RPP$, we obtain
two planes $\rho_{s,t}^+ \subset \RPP$ and $\rho_{s,t}^- \subset \RPP$.
Now,
every \projective $k$-gon~$G$ of~$P$ that has $s$ and $t$ as consecutive vertices on its boundary is a convex $k$-gon either in $\rho_{s,t}^+$ or in $\rho_{s,t}^-$, but not in both.
Note that in both planes~$\rho_{s,t}^+$ and $\rho_{s,t}^-$,
$s$ and $t$ lie on the boundary of the convex hull of~$P$.
Moreover, we can assume that $s$ is the bottom-most point in both planes $\rho_{s,t}^+$ and~$\rho_{s,t}^-$,
as otherwise we apply a suitable rotation.
For each of the $\binom{n}{2}$ pairs $\{s,t \}$ of distinct points from~$P$,
we now count the number of convex $k$-gons in the planes $\rho_{s,t}^+$ and $\rho_{s,t}^-$
that have $s$ and $t$ as consecutive vertices on the boundary.
This counting can be done in $O(m n^2)$ time and space by using the algorithm of Mitchell et al.\ \cite{MitchellRSW1995} with the slight modification that, in the initial phase, we only count $3$-gons of the form $p_1=s,p_2=t,p_3$; see equation~(3) in~\cite{MitchellRSW1995}.
Since each \projective $k$-gon $G$ is counted precisely $k$ times, once for each pair of consecutive vertices along the boundary of~$G$, summing the counts over all pairs $\{s,t\}$ and dividing by~$k$ completes the argument for \projective $k$-gons.
Similarly, we count \projective $k$-holes and $k$-islands.
The time and space requirements of the algorithm from~\cite{MitchellRSW1995}
for counting \projective $k$-holes that are incident to the bottom-most point are the same as for \projective $k$-gons.
For counting \projective $k$-islands that are incident to the bottom-most point, the algorithm from \cite{MitchellRSW1995} uses $O(m^2 n^3)$ time and space.
| {
"timestamp": "2022-03-16T01:07:29",
"yymm": "2203",
"arxiv_id": "2203.07518",
"language": "en",
"url": "https://arxiv.org/abs/2203.07518",
"abstract": "We consider point sets in the real projective plane $\\mathbb{R}P^2$ and explore variants of classical extremal problems about planar point sets in this setting, with a main focus on Erdős--Szekeres-type problems. We provide asymptotically tight bounds for a variant of the Erdős--Szekeres theorem about point sets in convex position in $\\mathbb{R}P^2$, which was initiated by Harborth and Möller in 1994. The notion of convex position in $\\mathbb{R}P^2$ agrees with the definition of convex sets introduced by Steinitz in 1913. For $k \\geq 3$, an (\\affine) $k$-hole in a finite set $S \\subseteq \\mathbb{R}^2$ is a set of $k$ points from $S$ in convex position with no point of $S$ in the interior of their convex hull. After introducing a new notion of $k$-holes for point sets from $\\mathbb{R}P^2$, called projective $k$-holes, we find arbitrarily large finite sets of points from $\\mathbb{R}P^2$ with no \\projective 8-holes, providing an analogue of a classical planar construction by Horton from 1983. We also prove that they contain only quadratically many \\projective $k$-holes for $k \\leq 7$. On the other hand, we show that the number of $k$-holes can be substantially larger in~$\\mathbb{R}P^2$ than in $\\mathbb{R}^2$ by constructing, for every $k \\in \\{3,\\dots,6\\}$, sets of $n$ points from $\\mathbb{R}^2 \\subset \\mathbb{R}P^2$ with $\\Omega(n^{3-3/5k})$ \\projective $k$-holes and only $O(n^2)$ \\affine $k$-holes. Last but not least, we prove several other results, for example about projective holes in random point sets in $\\mathbb{R}P^2$ and about some algorithmic aspects. The study of extremal problems about point sets in $\\mathbb{R}P^2$ opens a new area of research, which we support by posing several open problems.",
"subjects": "Combinatorics (math.CO); Computational Geometry (cs.CG)",
"title": "Erdős--Szekeres-type problems in the real projective plane"
} |
https://arxiv.org/abs/1802.02381 | The $b$-branching problem in digraphs | In this paper, we introduce the concept of $b$-branchings in digraphs, which is a generalization of branchings serving as a counterpart of $b$-matchings. Here $b$ is a positive integer vector on the vertex set of a digraph, and a $b$-branching is defined as a common independent set of two matroids defined by $b$: an arc set is a $b$-branching if it has at most $b(v)$ arcs sharing the terminal vertex $v$, and it is an independent set of a certain sparsity matroid defined by $b$. We demonstrate that $b$-branchings yield an appropriate generalization of branchings by extending several classical results on branchings. We first present a multi-phase greedy algorithm for finding a maximum-weight $b$-branching. We then prove a packing theorem extending Edmonds' disjoint branchings theorem, and provide a strongly polynomial algorithm for finding optimal disjoint $b$-branchings. As a consequence of the packing theorem, we prove the integer decomposition property of the $b$-branching polytope. Finally, we deal with a further generalization in which a matroid constraint is imposed on the $b(v)$ arcs sharing the terminal vertex $v$. | \section{Introduction}
\label{SECintro}
Since the pioneering work of Edmonds \cite{Edm70,Edm79},
the importance of
{\em matroid intersection} has been well appreciated.
A special class of matroid intersection is
\emph{branchings} (or \emph{arborescences}) in digraphs.
Branchings have several good properties which do not hold for general matroid intersection.
The objective of this paper is to propose a class of matroid intersection
which generalizes branchings and inherits those good properties of branchings.
One of the good properties of branchings is that
a maximum-weight branching can be found by a simple combinatorial algorithm \cite{Boc71,CL65,Edm67,Ful74}.
This algorithm is
much simpler than general weighted matroid intersection algorithms,
and is
referred to
as a ``multi-phase greedy algorithm'' in the textbook by Kleinberg and Tardos \cite{KT05}.
Another good property is the elegant theorem for packing disjoint branchings \cite{Edm73}.
In terms of matroid intersection,
this theorem says that,
if there exist $k$ disjoint bases in each of the two matroids,
then there exist $k$ disjoint common bases.
This packing theorem leads to a proof that
the branching polytope has the \emph{integer decomposition property} (defined in Section \ref{SECpre}).
In this paper,
we propose \emph{$b$-branchings},
a class of matroid intersection
generalizing branchings,
while maintaining the above two good properties.
This offers a new direction of fundamental extensions of the classical theorems on branchings.
Let $D=(V,A)$ be a digraph
and let $b\in \ZZ_{++}^V$ be a positive integer vector on $V$.
For $v \in V$ and $F \subseteq A$,
let $\delta^-_F(v)$ denote the set of arcs in $F$ entering $v$,
and
let $d\sp{-}_F(v) = |\delta^-_F(v)|$.
One matroid $\Mdeg$ on $A$ has its independent set family $\Ideg$ defined by
\begin{align}
\label{EQpartition}
&{}\Ideg = \{F \subseteq A \colon \mbox{$d_F^-(v) \le b(v)$ for each $v \in V$}\}.
\end{align}
That is,
$\Mdeg$ is
the direct sum of a uniform matroid on $\delta_A^-(v)$ of rank $b(v)$ for every $v \in V$.
Hence,
each vertex $v$ can have indegree at most $b(v)$,
which can be more than one.
Indeed,
this is the reason why we refer to it as a $b$-branching,
as a counterpart of a $b$-matching.
In order to make $b$-branchings a satisfying generalization of branchings,
the other matroid should be defined appropriately.
Our answer is a \emph{sparsity matroid}
determined by $D$ and $b$,
which is defined as follows.
For
$F \subseteq A$ and
$X\subseteq V$,
let $F[X]$ denote the set of arcs in $F$ induced by $X$.
Also,
denote
$\sum_{v \in X}b(v)$ by $b(X)$.
Now define a matroid $\Msp$ on $A$ with independent set family $\Isp$ by
\begin{align}
\label{EQsparsity}
&{}\Isp = \{F \subseteq A \colon \mbox{$|F[X]| \le b(X) - 1$ ($\emptyset \neq X \subseteq V$)}\}.
\end{align}
It is known that $\Msp$ is a matroid \cite[Theorem 13.5.1]{Fra11},
referred to as a \emph{count matroid} or a \emph{sparsity matroid}.
Now we refer to an arc set $F \subseteq A$ as a \emph{$b$-branching} if $F \in \Ideg \cap \Isp$.
It is clear that
a branching is a special case of a $b$-branching where $b(v)=1$ for each $v\in V$.
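For small instances, the definition can be checked directly: membership in $\Ideg$ is a per-vertex indegree count, and membership in $\Isp$ can be tested by enumerating all nonempty $X \subseteq V$. The following Python sketch (our own naming; exponential in $|V|$, for illustration only) does exactly this, mirroring \eqref{EQpartition} and \eqref{EQsparsity}.

```python
from itertools import combinations

def is_b_branching(V, arcs, F, b):
    """Brute-force b-branching test. arcs is a list of (tail, head)
    pairs, F a list of arc indices (parallel arcs allowed), b a dict."""
    # Degree condition: d^-_F(v) <= b(v) for each vertex v.
    indeg = {v: 0 for v in V}
    for i in F:
        indeg[arcs[i][1]] += 1
    if any(indeg[v] > b[v] for v in V):
        return False
    # Sparsity condition: |F[X]| <= b(X) - 1 for every nonempty X.
    for r in range(1, len(V) + 1):
        for X in combinations(V, r):
            Xs = set(X)
            induced = sum(1 for i in F
                          if arcs[i][0] in Xs and arcs[i][1] in Xs)
            if induced > sum(b[v] for v in X) - 1:
                return False
    return True
```

On the digraph with arcs $(0,1),(0,2),(1,2),(2,1)$, the arc set $\{(0,2),(1,2),(2,1)\}$ is a $b$-branching for $b=(1,1,2)$ even though it contains the $2$-cycle on $\{1,2\}$, but not for $b=(1,1,1)$, where that cycle violates the sparsity bound.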
We demonstrate that
$b$-branchings
yield a reasonable generalization
of branchings
by proving that the two fundamental results on branchings can be extended.
That is,
we present
a multi-phase greedy algorithm for finding a maximum-weight $b$-branching,
and
a theorem for packing disjoint $b$-branchings.
Our multi-phase greedy algorithm is an extension of the maximum-weight branching algorithm \cite{Boc71,CL65,Edm67,Ful74},
and it has the following features.
First,
its running time is $\mathrm{O}(|V||A|)$,
which is as fast as a simple implementation of the maximum-weight branching algorithm \cite{Boc71,CL65,Edm67,Ful74},
and
faster than the current best general weighted matroid intersection algorithm.
Second,
our algorithm also finds an optimal dual solution,
which is integer if the arc weights are integer.
Thus,
the algorithm constructively proves the total dual integrality of the associated linear inequality system.
Finally,
the algorithm leads to a characterization of the existence of a $b$-branching with prescribed indegree,
which is a generalization of
that for an arborescence \cite{Boc71,Edm67,Ful74}.
This characterization theorem is extended to
a theorem on packing disjoint $b$-branchings.
Let
$k$ be a positive integer,
and
$b_1,\ldots, b_k$ be nonnegative integer vectors on $V$ such that $b_i(v)\le b(v)$ for each $v \in V$ and $b_i\neq b$ ($i =1,\ldots, k$).
We provide a necessary and sufficient condition for $D$ to contain $k$ disjoint $b$-branchings $B_1,\ldots, B_k$
such that
$d^-_{B_i}(v)=b_i(v)$ for every $v \in V$ and $i=1,\ldots, k$,
which extends Edmonds' disjoint branching theorem \cite{Edm73}.
We then show that such disjoint $b$-branchings $B_1,\ldots, B_k$ can be found in strongly polynomial time.
This strongly polynomial solvability is extended to finding disjoint $b$-branchings $B_1,\ldots, B_k$
that minimize $w(B_1)+ \cdots + w(B_k)$, when the arc-weight vector $w \in \RR_+^A$ is given.
By utilizing our disjoint $b$-branchings theorem,
we also prove the integer decomposition property of the $b$-branching polytope.
We further deal with a generalized class of \emph{matroid-restricted $b$-branchings}.
This is a class of matroid intersection
in which $\Mdeg$ is the direct sum of an arbitrary matroid on $\delta_A^-(v)$ of rank $b(v)$ for all $v \in V$.
Note that,
in the class of $b$-branchings,
the matroid $\Mdeg$ is the direct sum of a uniform matroid on $\delta_A^-(v)$ of rank $b(v)$.
We show that our multi-phase greedy algorithm can be extended to this generalized class.
\medskip
Let us conclude this section by describing related work.
The weighted matroid intersection problem is a common generalization of various combinatorial optimization problems such as bipartite matchings, packing spanning trees, and branchings~(or arborescences) in a digraph.
The problem has also been applied to various engineering problems, e.g., in electric circuit theory~\cite{M00,R89}, rigidity theory~\cite{R89}, and
network coding~\cite{DFZ11,HKM05}.
Since the 1970s, quite a few algorithms have been proposed for matroid intersection problems, e.g., \cite{BCG86,Fra81,IT76,LSW2015,L70,L75}~(see \cite{HKK16} for further references).
However, none of the known algorithms is greedy; they are based on augmentation, i.e., repeatedly improving a current solution by exchanging some elements.
The two matroids defining branchings are a partition matroid and a graphic matroid, which are interconnected by a given digraph.
This interconnection gives branchings a richer structure:
as mentioned before, branchings have properties that the intersection of an arbitrary pair of a partition matroid and a graphic matroid does not have.
In particular, extending the packing theorem of branchings \cite{Edm73} is indeed a recent active topic.
Kamiyama, Katoh, and Takizawa \cite{KKT09} presented a fundamental extension
based on reachability in digraphs,
which is followed by a further extension based on
convexity in digraphs due to Fujishige \cite{Fuj10}.
Durand de Gevigney, Nguyen, and Szigeti \cite{DNS13} proved a theorem for
packing arborescences with matroid constraints.
Kir{\' a}ly \cite{Kir16} generalized the result of \cite{DNS13} in the
same direction of \cite{KKT09}.
A matroid-restricted packing of arborescences \cite{BK16,Fra09} is another generalization concerning a matroid constraint.
We remark that our packing and matroid restriction for $b$-branchings differ from
the above matroidal extensions of packing of arborescences.
\medskip
The organization of this paper is as follows.
In Section \ref{SECpre},
we review the literature of branchings and matroid intersection,
including
algorithmic, polyhedral, and packing results.
In Section \ref{SECalgo},
we present a multi-phase greedy algorithm for finding a
maximum-weight $b$-branching.
Section \ref{SECpacking} is devoted to
proving a theorem on packing disjoint $b$-branchings.
In Section \ref{SECmatroid},
we extend the multi-phase greedy algorithm to matroid-restricted $b$-branchings.
In Section \ref{SECconcl},
we conclude this paper with a couple of remarks.
\section{Preliminaries}
\label{SECpre}
In this section,
we review fundamental results on branchings and related theory of matroid intersection and polyhedral combinatorics.
For more details,
the readers are referred to \cite{Kam14,KV12,Sch03}.
In a digraph $D=(V,A)$,
an arc subset $B \subseteq A$ is a \emph{branching}
if,
in the subgraph $(V,B)$,
the indegree of every vertex is at most one
and
there does not exist a cycle.
In terms of matroid intersection,
a branching is a common independent set
of a partition matroid and a graphic matroid,
i.e.,\
intersection of
\begin{align}
\label{EQpartition1}
&{}\{F \subseteq A \colon \mbox{$d^-_F(v) \le 1$ for each $v \in V$}\}, \\
\label{EQgraphic}
&{}\{F \subseteq A \colon \mbox{$|F[X]| \le |X| - 1$ ($\emptyset \neq X \subseteq V$)}\}.
\end{align}
Recall that a branching is a special case of a $b$-branching where $b(v)=1$ for each $v\in V$.
Indeed,
by putting $b(v)=1$ for each $v\in V$ in \eqref{EQpartition} and \eqref{EQsparsity},
we obtain \eqref{EQpartition1} and \eqref{EQgraphic},
respectively.
As stated in Section \ref{SECintro},
a maximum-weight branching can be found by a multi-phase greedy algorithm \cite{Boc71,CL65,Edm67,Ful74},
which appears in standard textbooks such as \cite{KT05,KV12,Sch03}.
To the best of our knowledge,
there is no other nontrivial special case of matroid intersection
which can be solved greedily.
For example,
intersection of two partition matroids is equivalent to bipartite matching.
This seems to be the simplest nontrivial example of matroid intersection,
but
we do not know a greedy algorithm for finding a maximum bipartite matching.
Another important result on branchings is the disjoint branchings theorem by Edmonds \cite{Edm73},
described as follows.
For a positive integer $k$,
the set of integers
$\{1,\ldots, k\}$ is denoted by $[k]$.
For $F \subseteq A$ and $X \subseteq V$,
let $\delta^-_F(X) \subseteq A$ denote the set of arcs in $F$ from $V \setminus X$ to $X$,
and let $d^-_F(X) = |\delta^-_F(X)|$.
\begin{theorem}[Edmonds \cite{Edm73}]
\label{THMbpacking}
Let $D=(V,A)$ be a digraph and $k$ be a positive integer,
and $U_1,\ldots, U_k$ be subsets of $V$.
Then,
there exist disjoint branchings $B_1,\ldots, B_k$ such that
$U_i = \{v \in V \colon d^-_{B_i}(v)=1\}$ for each $i \in [k]$ if and only if
\begin{align}
\notag
d^-_A(X) \ge |\{ i \in [k] \colon X \subseteq U_i\}| \quad (\emptyset \neq X \subseteq V).
\end{align}
\end{theorem}
From Theorem \ref{THMbpacking},
we obtain a theorem on covering a digraph by branchings \cite{Fra79,MG86}.
\begin{theorem}[\cite{Fra79,MG86}]
\label{THMbcovering}
Let $D=(V,A)$ be a digraph and let $k$ be a nonnegative integer.
Then,
the arc set $A$ can be covered by $k$ branchings if and only if
\begin{align*}
&{}d^-_A(v) \le k \quad (v \in V), \\
&{}|A[X]| \le k(|X|-1) \quad (\emptyset \neq X \subseteq V).
\end{align*}
\end{theorem}
Theorem \ref{THMbcovering} leads to the \emph{integer decomposition property} of the branching polytope.
The \emph{branching polytope} is the convex hull of the characteristic vectors of all branchings.
It follows from the total dual integrality of matroid intersection \cite{Edm70} that
the branching polytope is determined by the following linear system:
\begin{alignat}{2}
\label{EQmi1}
&{}x(\delta^-(v)) \le 1 \quad {}&&{}(v \in V), \\
\label{EQmi2}
&{}x(A[X]) \le |X|-1 \quad {}&&{}(\emptyset \neq X \subseteq V), \\
\label{EQmi3}
&{} x(a) \ge 0 {}&&{}(a\in A).
\end{alignat}
\begin{theorem}[see \cite{Sch03}]
The linear system \eqref{EQmi1}--\eqref{EQmi3} is totally dual integral.
\end{theorem}
\begin{corollary}[see \cite{Sch03}]
\label{CORbTDI}
The linear system \eqref{EQmi1}--\eqref{EQmi3} determines the branching polytope.
\end{corollary}
For a polytope $P$ and a positive integer $k$,
define $kP=\{ x \colon \exists x' \in P, x=kx'\}$.
A polytope $P$ has the \emph{integer decomposition property}
if,
for each positive integer $k$,
any integer vector $x \in kP$ can be represented as the sum of $k$ integer vectors in $P$.
The integer decomposition property of the branching polytope is a direct consequence of Theorem \ref{THMbcovering} and
Corollary \ref{CORbTDI}.
\begin{corollary}[\cite{BT81}]
The branching polytope has the integer decomposition property.
\end{corollary}
We remark that
the integer decomposition property does not hold for an arbitrary matroid intersection polytope.
Schrijver \cite{Sch03} presents an example of matroid intersection defined on the edge set of $K_4$ without the integer decomposition property.
Indeed,
finding classes of polyhedra with the integer decomposition property is a
classical topic in combinatorics.
Typical examples of polyhedra with the integer decomposition property include
polymatroids \cite{BT81,Gil75},
the branching polytope \cite{BT81},
and
intersection of two strongly base orderable matroids \cite{DM76,McD76}.
While there is some recent progress \cite{Ben17},
the integer decomposition property of polyhedra is far from being well understood.
In Section \ref{SECpacking},
we will prove that the $b$-branching polytope is
a new example of a polytope with
the integer decomposition property.
\section{Multi-phase greedy algorithm}
\label{SECalgo}
In this section,
we present a multi-phase greedy algorithm
for finding a maximum-weight $b$-branching
by extending that for branchings \cite{Boc71,CL65,Edm67,Ful74}.
\subsection{Key lemma}
\label{SECkeylemma}
Let $D=(V,A)$ be a digraph and $b \in \ZZ_{++}^V$ be a positive integer vector on $V$.
Recall that an arc set $F \subseteq A$ is a $b$-branching
if $F \in \Ideg \cap \Isp$,
where $\Ideg$ and $\Isp$ are defined by \eqref{EQpartition}
and \eqref{EQsparsity},
respectively.
We first show a key property of $\Mdeg$ and $\Msp$,
which plays an important role in our algorithm.
\begin{lemma}
\label{LEMcircuit}
An independent set $F$ in $\Mdeg$ is not independent in $\Msp$
if and only if $(V,F)$ has a strong component $X$ such that
\begin{align}
\label{EQstrong}
|F[X]| = b(X).
\end{align}
Moreover,
such $F[X]$ is a circuit of $\Msp$.
\end{lemma}
\begin{proof}
Sufficiency is obvious:
$|F[X]| = b(X)$ implies that $F$ is not independent in $\Msp$.
We now prove necessity.
Suppose that $F \in \Ideg$ is not independent in $\Msp$.
Then,
there exists $X$ ($\emptyset \neq X \subseteq V$) such that
\begin{align}
\label{EQdep}
|F[X]| \ge b(X).
\end{align}
Let $X$ be an inclusionwise minimal set satisfying \eqref{EQdep}.
That is,
\begin{align}
\label{EQminimal}
|F[X']| \le b(X') - 1 \quad (\emptyset \neq X' \subsetneqq X).
\end{align}
We first show that $X$ satisfies \eqref{EQstrong}.
Since $F$ is independent in $\Mdeg$,
it holds that
\begin{align}
\label{EQYtight}
\mbox{$|F[Y]| \le \displaystyle\sum_{v\in Y}d_F^-(v) \le b(Y)$ for each $Y\subseteq V$}.
\end{align}
By \eqref{EQdep} and \eqref{EQYtight},
it follows that
\begin{align}
\label{EQXtight}
|F[X]| = \sum_{v \in X}d^-_F(v) = b(X),
\end{align}
and thus
\eqref{EQstrong} holds.
We then prove that $X$ is a strong component in $(V,F)$.
The former equality in \eqref{EQXtight} implies that
\begin{align}
\label{EQXin}
d^-_F(X)=\sum_{v \in X} d^-_F(v) - |F[X]| = 0.
\end{align}
The latter equality in \eqref{EQXtight} implies that
\begin{align}
\label{EQvsatur}
\mbox{$d^-_F(v) = b(v)$ for every $v \in X$}.
\end{align}
Then,
it follows from \eqref{EQminimal} and \eqref{EQvsatur} that
\begin{align}
\notag
d^-_F(X')
{}&{}= \sum_{v \in X'}d^-_F(v) - |F[X']| \\
{}&{}\ge b(X') - (b(X')-1)= 1 \quad
\quad (\emptyset \neq X' \subsetneqq X).
\label{EQXin2}
\end{align}
By
\eqref{EQXin} and \eqref{EQXin2},
we have shown that $X$ is a strong component in $(V,F)$.
We complete the proof by showing
that $F[X]$ is a circuit in $\Msp$.
The fact that $F[X] \notin \Isp$ directly follows from $|F[X]|=b(X)$.
Thus,
it is sufficient to prove that $F'=F[X] \setminus \{a\} \in \Isp$ for each arc $a \in F[X]$.
It follows from \eqref{EQXtight} that
\begin{align}
\label{EQX}
|F'[X]| = |F[X]| - 1 = b(X) -1.
\end{align}
For a proper subset $X'$ of $X$,
by \eqref{EQminimal},
\begin{align}
\label{EQXprime2}
|F'[X']| \le |F[X']| \le b(X')-1.
\end{align}
By \eqref{EQX} and \eqref{EQXprime2},
we conclude that $F' \in \Isp$.
\end{proof}
Lemma \ref{LEMcircuit} enables us to design the following multi-phase greedy algorithm for finding a maximum-weight $b$-branching:
\begin{itemize}
\item
Find a maximum-weight independent set $F$ in $\Mdeg$.
\item
If $(V,F)$ has a strong component $X$ satisfying \eqref{EQstrong},
then contract $X$,
reset $b$ and the weights of the remaining arcs appropriately,
and recurse.
\end{itemize}
A formal description of the algorithm appears in Section \ref{SECdescription}.
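The test in the second bullet can be implemented by computing the strong components of $(V,F)$ and comparing $|F[X]|$ with $b(X)$ for each component $X$, as justified by Lemma \ref{LEMcircuit}. A Python sketch follows (the naive reachability computation and all names are ours; a linear-time strong-component algorithm would be used in an actual implementation).

```python
def strong_components(V, F):
    # Reachability by repeated relaxation; fine for small digraphs.
    reach = {v: {v} for v in V}
    changed = True
    while changed:
        changed = False
        for (u, w) in F:
            for s in V:
                if u in reach[s] and w not in reach[s]:
                    reach[s].add(w)
                    changed = True
    comps, seen = [], set()
    for v in V:
        if v in seen:
            continue
        comp = {u for u in V if u in reach[v] and v in reach[u]}
        comps.append(comp)
        seen |= comp
    return comps

def tight_components(V, F, b):
    # Strong components X of (V, F) with |F[X]| = b(X), i.e., the
    # components witnessing dependence in M_sp (Lemma condition).
    out = []
    for X in strong_components(V, F):
        induced = sum(1 for (u, w) in F if u in X and w in X)
        if induced == sum(b[v] for v in X):
            out.append(X)
    return out
```

For $F = \{(1,2),(2,1)\}$ on vertices $\{0,1,2\}$ with $b \equiv 1$, the $2$-cycle $\{1,2\}$ is returned as a tight component (its arc set is a circuit of $\Msp$); raising $b(2)$ to $2$ removes the violation.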
\subsection{Algorithm description}
\label{SECdescription}
We denote an arc $a \in A$ with initial vertex $u$ and terminal vertex $v$ by $(u,v)$.
We assume that the arc weights are nonnegative and represented by a vector $w \in \RR_+^A$.
For $F \subseteq A$,
we denote $w(F)=\sum_{a \in F}w(a)$.
Our multi-phase greedy algorithm for finding a maximum-weight $b$-branching is described as follows.
\begin{description}
\item[\textsc{Algorithm \bb{}}.]
\item[Input.]
A digraph $D=(V,A)$,
and
vectors $b \in \ZZ_{++}^V$
and
$w \in \RR_+^A$.
\item[Output.]
A $b$-branching $F \subseteq A$ maximizing $w(F)$.
\item[Step 1.]
Set $i:=0$,
$\Di{0} := D$,
$\bi{0} := b$,
and
$\wi{0} := w$.
\item[Step 2.]
Define a matroid $\Mdegi{i}=(\Ai{i}, \Idegi{i})$
according to $\Di{i}$ and $\bi{i}$ by \eqref{EQpartition}.
Then,
find $\Fi{i} \in \Idegi{i}$ maximizing $\wi{i}(\Fi{i})$.
\item[Step 3.]
If $(\Vi{i},\Fi{i})$ has a strong component $X$ such that
\begin{align}
\label{EQdepend}
|\Fi{i}[X]| = \bi{i}(X),
\end{align}
then
go to Step 4.
Otherwise,
let $F := \Fi{i}$ and
go to Step 5.
\item[Step 4.]
Denote by $\X \subseteq 2\sp{\Vi{i}}$ the family of strong components $X$ in $(\Vi{i},\Fi{i})$ satisfying \eqref{EQdepend}.
Execute the following updates to construct $\Di{i+1}=(\Vi{i+1}, \Ai{i+1})$,
$\bi{i+1}\in \ZZ_{++}\sp{\Vi{i+1}}$,
and $\wi{i+1} \in \RR_+\sp{\Ai{i+1}}$.
\begin{itemize}
\item
For each $X \in \X$,
execute the following updates.
First,
contract $X$ to obtain a new vertex $v_X$.
Then,
for every arc $a=(z,y) \in \Ai{i}$ with $z \in \Vi{i} \setminus X$ and $y \in X$,
\begin{align*}
&{}z' := \begin{cases}
v_{X'} & (\mbox{$z \in X'$ for some $X' \in \X$}), \\
z & (\mbox{otherwise}),
\end{cases}
\\
&{}a' := (z',v_X), \\
&{}\Psi(a') := a, \\
&{}\wi{i+1}(a') := \wi{i}(a) - \wi{i}(\alpha(a,\Fi{i})) + \wi{i}(a_X),
\end{align*}
where
$\alpha(a,\Fi{i})$ is an arc in $\delta_{\Fi{i}}^- (y)$ minimizing $\wi{i}$,
and
$a_X$ is an arc in $\Fi{i}[X]$ minimizing $\wi{i}$.
\item
Define $\bi{i+1}\in \ZZ_{++}\sp{\Vi{i+1}}$ by
\begin{align*}
\bi{i+1}(v) := \begin{cases}
1 &(\mbox{$v = v_X$ for some $X \in \X$}),\\
\bi{i}(v) &(\mbox{otherwise}).
\end{cases}
\end{align*}
\end{itemize}
Let $i := i+1$ and go back to Step 2.
\item[Step 5.]
If $i=0$, then return $F$.
\item[Step 6.]
For every strong component $X$ in $(\Vi{i-1}, \Fi{i-1})$ with \eqref{EQdepend},
apply the following update:
if there exists $a' = (z,v_X) \in F$,
then
\begin{align*}
F:= ((F \setminus \{a'\}) \cup \{\Psi(a')\}) \cup (\Fi{i-1}[X] \setminus \{\alpha(\Psi(a'),\Fi{i-1})\});
\end{align*}
otherwise,
\begin{align*}
F:= F \cup (\Fi{i-1}[X] \setminus \{a_X\}).
\end{align*}
Let $i:= i-1$ and go back to Step 5.
\end{description}
The complexity of Algorithm \bb{} is analyzed as follows.
It is clear that there are at most $|V|$ iterations.
It is also straightforward to see that the $i$-th iteration requires $\order{|\Ai{i}|}$ time:
Steps 2, 3, and 4 respectively require $\order{|\Ai{i}|}$ time.
Thus,
the total time complexity of the algorithm is $\order{|V||A|}$.
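For concreteness, Step 2 itself is an ordinary matroid greedy step: since $\Mdegi{i}$ is a direct sum of uniform matroids and the weights are nonnegative, a maximum-weight independent set is obtained by keeping, at each vertex $v$, at most $\bi{i}(v)$ heaviest incoming arcs. A Python sketch of this single step (our naming):

```python
def max_weight_indep_deg(arcs, w, b):
    """Greedy Step 2: at each vertex v keep the (at most) b[v] heaviest
    incoming arcs; valid because M_deg is a direct sum of uniform
    matroids and the arc weights are nonnegative."""
    by_head = {}
    for i, (u, v) in enumerate(arcs):
        by_head.setdefault(v, []).append(i)
    F = []
    for v, ids in by_head.items():
        ids.sort(key=lambda i: w[i], reverse=True)
        F.extend(ids[:b[v]])
    return sorted(F)
```

A selection algorithm instead of sorting brings this to linear time per iteration, matching the $\order{|\Ai{i}|}$ bound above.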
\subsection{Optimality of the algorithm and totally dual integral system}
In this subsection,
we prove that the output of \textsc{Algorithm \bb{}} is a maximum-weight
$b$-branching by the following primal-dual argument.
We first present a linear program describing the maximum-weight $b$-branching problem.
It is a special case of the linear program for weighted matroid intersection,
and hence
we already know that
the linear system is endowed with total dual integrality.
Here we show an algorithmic proof for the total dual integrality.
That is,
we show that,
when $w$ is an integer vector,
integral optimal primal and dual solutions can be computed via
\textsc{Algorithm \bb}.
Consider the following linear program,
in variable $x \in \RR\sp{A}$,
associated with the maximum-weight $b$-branching problem:
\begin{alignat}{3}
\label{EQlp0}
&{}\mbox{maximize} \quad {}&&{}\sum_{a \in A}w(a)x(a) &&\\
\label{EQlp1}
&{}\mbox{subject to}\quad{}&&{}x(\delta_A^-(v)) \le b(v) \quad {}&&{}(v \in V), \\
\label{EQlp2}
&&&{}x(A[X]) \le b(X) - 1 \quad {}&&{}(\emptyset \neq X \subseteq V), \\
\label{EQlp3}
&&&{}0\le x(a) \le 1 {}&&{}(a\in A).
\end{alignat}
The constraints \eqref{EQlp1}--\eqref{EQlp3} are indeed a special case of
a linear system describing the common independent sets in two matroids,
which is totally dual integral (see \cite{Sch03}).
Thus,
we obtain the following theorem.
\begin{theorem}
\label{THMpolytope}
The linear system \eqref{EQlp1}--\eqref{EQlp3} is totally dual integral.
In particular,
the linear system \eqref{EQlp1}--\eqref{EQlp3} determines the $b$-branching polytope.
\end{theorem}
The dual problem of \eqref{EQlp0}--\eqref{EQlp3},
in variable $p \in \RR\sp{2^V}$ and $q \in \RR\sp{A}$,
is described as follows.
\begin{alignat}{2}
\label{EQdual0}
&{}\mbox{minimize} \quad {}&&{}\sum_{v \in V}b(v)p(v) + \sum_{X \colon \emptyset \neq X \subseteq V}(b(X) - 1)p(X) + \sum_{a\in A}q(a)\\
&{}\mbox{subject to} \quad {}&&{} p(v) + \sum_{X\colon a \in A[X]}p(X) + q(a) \ge w(a) \quad (a=(u,v) \in A), \\
&&&{}p(X) \ge 0 \quad (X \subseteq V), \\
&&&{}q(a) \ge 0 \quad (a \in A).
\label{EQdual2}
\end{alignat}
An optimal solution $(p^*,q^*)$ is computed via \textsc{Algorithm \bb} in the following manner.
At the beginning of \textsc{Algorithm \bb},
set
$w^\circ=w$.
In Step 4 of \textsc{Algorithm \bb},
for each strong component $X \in \X$,
define $p^*(X) \in \RR$ by
\begin{align*}
\notag
p^*(X) = \min
\{
{}&{}\min\{w^\circ(\alpha^\circ(a)) - w^\circ(a) \colon a \in \delta_{\Ai{i}}^-(X)\} ,
\min\{w^{\circ}(a') \colon a' \in \Fi{i}[X] \}
\},
\end{align*}
where
$\alpha^\circ(a)$ is the $b(y)$-th optimal arc
with respect to $w^\circ$
among the arcs sharing the terminal vertex $y \in V$ with $a$ in the original digraph $D$.
Then
for each arc $a \in A$ such that
$a \in \Ai{i}[X]$ or $a$ is deleted in the contraction of $X'$ with $v_{X'}$ included in $X$,
set
$w^\circ(a) :=w^\circ(a) - p^*(X)$.
After the termination of \textsc{Algorithm \bb},
let
the value $p^*(v)$ be equal to the $b(v)$-th maximum value among $\{w^\circ(a)\colon a \in \delta_A^-(v)\}$
for each vertex $v \in V$.
Finally,
let $q^*(a) = \max \{w(a) - p^*(v) - \sum_{X \colon a \in A[X]}p^*(X),0\}$.
It is straightforward to see that
the characteristic vector of the output $F$ and $(p^*,q^*)$
satisfy the complementary slackness condition.
Thus
they are
optimal solutions for the linear programs \eqref{EQlp0}--\eqref{EQlp3} and \eqref{EQdual0}--\eqref{EQdual2},
respectively.
Moreover,
$(p^*,q^*)$ is integer if $w$ is integer,
which implies that
\eqref{EQlp1}--\eqref{EQlp3} is totally dual integral.
\subsection{Existence of a $b$-branching with prescribed indegree}
Our algorithm leads to the following theorem characterizing the existence of
a $b$-branching with prescribed indegree,
which is an extension of
that for arborescences.
\begin{theorem}
\label{THMarb}
Let $D=(V,A)$ be a digraph
and
$b\in \ZZ_{++}\sp{V}$ be a positive integer vector on $V$.
Let $b' \in \ZZ_{+}^V$ be a nonnegative integer vector such that
$b'(v) \le b(v)$ for every $v \in V$ and
$b' \neq b$.
Then,
$D$ has a $b$-branching $B$ such that
$d^-_{B}(v) = b'(v)$ for each $v \in V$
if and only if
\begin{alignat}{2}
\label{EQexistdeg}
{}&{}d^-_A(v) \ge b'(v)\quad {}&&{}(v \in V),\\
\label{EQexistcut}
{}&{}d^-_A(X) \ge 1 {}&&{}(\mbox{$\emptyset \neq X \subsetneq V$, $b'(X) = b(X) \neq 0$}).
\end{alignat}
\end{theorem}
Let $r \in V$ be a specified vertex.
A characterization of the existence of an $r$-arborescence \cite{Boc71,Edm67,Ful74} is obtained
as a special case of Theorem \ref{THMarb},
by putting
$b(v)=1$ for every $v \in V$,
$b'(v)=1$ for every $v \in V \setminus \{r\}$,
and $b'(r)=0$.
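On small instances, the conditions \eqref{EQexistdeg} and \eqref{EQexistcut} can be verified by brute force over all proper nonempty subsets $X \subseteq V$. The following Python sketch (our naming; exponential in $|V|$) does this.

```python
from itertools import combinations

def has_prescribed_b_branching(V, arcs, b, bp):
    """Check the characterization: d^-_A(v) >= b'(v) for every vertex,
    and d^-_A(X) >= 1 for every proper nonempty X with b'(X) = b(X) != 0."""
    indeg = {v: sum(1 for (u, w) in arcs if w == v) for v in V}
    if any(indeg[v] < bp[v] for v in V):
        return False
    for r in range(1, len(V)):          # proper nonempty subsets only
        for X in combinations(V, r):
            Xs = set(X)
            if sum(bp[v] for v in X) == sum(b[v] for v in X) != 0:
                # some arc must enter X from outside
                if not any(u not in Xs and w in Xs for (u, w) in arcs):
                    return False
    return True
```

In the arborescence special case, the subset condition amounts to every nonempty set $X$ not containing $r$ having an entering arc, i.e., reachability of all vertices from $r$.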
Theorem \ref{THMarb} can be proved in two ways.
The necessity of \eqref{EQexistdeg} and \eqref{EQexistcut} is clear.
One way to derive the sufficiency of \eqref{EQexistdeg} and \eqref{EQexistcut} is
Algorithm \bb.
Apply Algorithm \bb{} to the case where $b=b'$ and $w(a)=1$ for each $a \in A$.
Then,
\eqref{EQexistdeg} and \eqref{EQexistcut} certify that
$F^{(i)}$ found in Step 2 of Algorithm \bb{} is always a base of $\Mdegi{i}$.
It thus follows that
the output $F$ of Algorithm \bb{} is a $b$-branching with $d_F^-=b'$.
An alternative proof for the sufficiency of \eqref{EQexistdeg} and \eqref{EQexistcut}
is implied by
the proof for Theorem \ref{THMbbpacking} in Section \ref{SECpacking},
which extends Theorem \ref{THMarb} to
a characterization of the existence of disjoint $b$-branchings with prescribed indegree.
\section{Packing disjoint $b$-branchings}
\label{SECpacking}
In this section,
we present a theorem on packing disjoint $b$-branchings $B_1,\ldots, B_k$ with
prescribed indegree,
which extends Theorem \ref{THMbpacking}, as well as Theorem \ref{THMarb}.
Our proof is an extension of the proof for Theorem \ref{THMbpacking} by Lov\'{a}sz \cite{Lov76}.
We then show that
such disjoint $b$-branchings can be found in strongly polynomial time.
We further show that disjoint $b$-branchings $B_1, \ldots, B_k$ minimizing the weight $w(B_1)+ \cdots + w(B_k)$
can be found in strongly polynomial time.
Finally,
as a consequence of our packing theorem,
we prove the integer decomposition property of the $b$-branching polytope.
\subsection{Characterizing theorem for disjoint $b$-branchings}
Let $D=(V,A)$ be a digraph,
$b\in \ZZ_{++}^V$ be a positive integer vector on $V$,
and
$k$ be a positive integer.
For $i\in [k]$,
let $b_i \in \ZZ_{+}^V$ be a nonnegative integer vector such that
$b_i(v) \le b(v)$ for every $v \in V$ and
$b_i \neq b$.
We present a theorem characterizing whether $D$ contains disjoint $b$-branchings
$B_1,\ldots, B_k$ such that
$d^-_{B_i} = b_i$
for each $i \in [k]$.
We begin by introducing
a function which plays a key role in the sequel.
Define a function $g: 2^V \to \ZZ_+$ by
\begin{align}
\label{EQg}
g(X) = |\{ i \in [k] \colon b_i(X) = b(X) \neq 0\}| \quad (X \subseteq V).
\end{align}
The following lemma is straightforward to observe.
\begin{lemma}
\label{LEMgsup}
The function $g$ is intersecting supermodular, i.e.,
$g(X) + g(Y) \le g(X \cup Y) + g(X \cap Y)$
for all $X, Y \subseteq V$ with $X \cap Y \neq \emptyset$.
\end{lemma}
\begin{proof}
For $X \subseteq V$,
define $I_X \subseteq [k]$ by
$I_X = \{ i \in [k] \colon b_i(X) = b(X) \neq 0\}$.
Since $b_i \le b$ and $b$ is positive,
$i \in I_X$ holds exactly when $X \neq \emptyset$ and $b_i(v) = b(v)$ for every $v \in X$.
By the definition \eqref{EQg} of $g$,
for $X,Y \subseteq V$ with $X \cap Y \neq \emptyset$,
it holds that
\begin{align*}
g(X) + g(Y) = |I_X| + |I_Y| = |I_X \setminus I_Y| + |I_Y \setminus I_X| + 2|I_X \cap I_Y|.
\end{align*}
Moreover,
it is straightforward to see that
\begin{align*}
&{}g(X \cup Y) = |I_X \cap I_Y|, &
&{}g(X \cap Y) \ge
|I_X \cup I_Y| = |I_X \setminus I_Y| + |I_Y \setminus I_X| + |I_X \cap I_Y|.
\end{align*}
It therefore holds that
$g(X)+g(Y) \le g(X \cup Y) + g(X \cap Y)$,
and hence $g$ is intersecting supermodular.
\end{proof}
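As a quick numerical sanity check, the following sketch (a toy instance of our own; the names \texttt{g}, \texttt{bs} are ours) verifies the supermodular inequality for $g$ on all intersecting pairs of subsets, which is the case invoked later in the proof of Theorem \ref{THMbbpacking}:

```python
from itertools import combinations

def g(X, b, bs):
    # g(X) = |{ i : b_i(X) = b(X) != 0 }|, following the definition of g
    bX = sum(b[v] for v in X)
    return sum(1 for bi in bs if sum(bi[v] for v in X) == bX != 0)

def subsets(V):
    return [frozenset(S) for r in range(len(V) + 1)
            for S in combinations(V, r)]

# toy instance (ours): b positive, each b_i <= b and b_i != b
V = [0, 1, 2]
b = {0: 2, 1: 1, 2: 1}
bs = [{0: 2, 1: 0, 2: 1}, {0: 1, 1: 1, 2: 1}]

# supermodular inequality on all intersecting pairs of subsets
ok = all(g(X, b, bs) + g(Y, b, bs) <= g(X | Y, b, bs) + g(X & Y, b, bs)
         for X in subsets(V) for Y in subsets(V) if X & Y)
assert ok
```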
Our characterization theorem is described as follows.
\begin{theorem}
\label{THMbbpacking}
Let $D=(V,A)$ be a digraph,
$b\in \ZZ_{++}^V$ be a positive integer vector on $V$,
and
$k$ be a positive integer.
For $i\in [k]$,
let $b_i \in \ZZ_{+}^V$ be a nonnegative integer vector such that
$b_i(v) \le b(v)$ for every $v \in V$ and
$b_i \neq b$.
Then,
$D$ has disjoint $b$-branchings $B_1,\ldots, B_k$ such that
$d^-_{B_i} = b_i$
for each $i \in [k]$
if and only if
the following two conditions are satisfied:
\begin{alignat}{2}
\label{EQpackingdeg}
{}&{}d^-_A(v) \ge \sum_{i=1}^k b_i(v)\quad {}&&{}(v \in V),\\
\label{EQpackingcut}
{}&{}d^-_A(X) \ge g(X) \quad{}&&{} (X \subseteq V).
\end{alignat}
\end{theorem}
\begin{proof}
Necessity is clear.
We prove sufficiency by induction on $\sum_{i=1}^k b_i(V)$.
The case $\sum_{i=1}^k b_i(V) = 0$ is trivial:
take $B_i = \emptyset$ for each $i \in [k]$.
Now suppose that $\sum_{i=1}^k b_i(V) > 0$;
without loss of generality, assume that $b_1(V) > 0$.
Define a partition $\{V_0, V_1, V_2\}$ of $V$ by
\begin{align*}
&{}V_0 = \{ v \in V \colon b_1(v) = 0\}, \\
&{}V_1 = \{ v \in V \colon 0<b_1(v) < b(v)\}, \\
&{}V_2 = \{ v \in V \colon b_1(v) = b(v)\}.
\end{align*}
Then,
it holds that
\begin{align}
\label{EQpartB}
&{}V_0 \cup V_1 \neq \emptyset, \\
\label{EQpartR}
&{}V_0 \neq V,
\end{align}
which follow from $b_1 \neq b$ and $b_1(V)>0$, respectively.
Let $W \subseteq V$ be an inclusionwise minimal vertex subset satisfying
\begin{align}
\label{EQw1}
&{}W \cap (V_0\cup V_1) \neq \emptyset, \\
\label{EQw2}
&{}W \setminus V_0 \neq \emptyset, \\
\label{EQw3}
&{}d^-_A(W)=g(W).
\end{align}
Such $W \subseteq V$ always exists,
because $W=V$ satisfies \eqref{EQw1}--\eqref{EQw3}:
\eqref{EQw1} follows from \eqref{EQpartB};
\eqref{EQw2} follows from \eqref{EQpartR};
and
\eqref{EQw3} follows from $b_i\neq b$ ($i \in [k]$) and hence $g(V)=0$.
Let $W_j = W \cap V_j$ ($j=0,1,2$).
\begin{claim}
\label{CLa}
There exists an arc $(u,v) \in A$ such that $u \in W_0\cup W_1$ and $v \in W_1 \cup W_2$.
\end{claim}
\begin{proof}
First,
suppose that $W_2 \neq \emptyset$.
Then,
it holds that $g(W_2)>g(W)$,
since
every $i \in [k]$ contributing to $g(W)$ also contributes to $g(W_2)$,
and
$i=1$ contributes to $g(W_2)$ but not to $g(W)$.
Hence we obtain
that
\begin{align}
d^-_A(W_2) \ge g(W_2)
> g(W) = d^-_A(W).
\label{EQw22}
\end{align}
Now \eqref{EQw22}
implies that
there exists an arc $(u,v) \in A$ such that $u \in W_0\cup W_1$ and $v \in W_2$.
Next,
suppose that $W_2 = \emptyset$.
By \eqref{EQw2},
we have that $W_1 \neq \emptyset$.
Then,
it holds that
\begin{align*}
\sum_{v \in W_1}d^-_A(v)
&{}\ge \sum_{i=1}^k b_i(W_1) \quad (\because \mbox{\eqref{EQpackingdeg}}) \\
&{}> \sum_{i=2}^k b_i(W_1) \quad (\because b_1(W_1) >0) \\
&{}\ge |\{i \in [k] \colon b_i(W) = b(W) \neq 0\}| \quad (\because b_1(W) \neq b(W))\\
&{}= g(W) \\
&{}= d^-_A(W),
\end{align*}
implying that
there exists an arc $(u,v) \in A$ such that $u \in W=W_0 \cup W_1$ and $v \in W_1$.
\end{proof}
Let $a = (u,v) \in A$ be an arc in Claim \ref{CLa}.
We then show that
resetting
\begin{align}
\label{EQresetA}
&{}A := A \setminus \{a\}, \\
\label{EQresetB}
&{}b_1(v) := b_1(v) - 1
\end{align}
maintains \eqref{EQpackingdeg} and \eqref{EQpackingcut}.
(This resetting amounts to augmenting $B_1$ by adding $a$.)
It is straightforward to see that \eqref{EQresetA} and \eqref{EQresetB} maintain
\eqref{EQpackingdeg}.
To prove that \eqref{EQresetA} and \eqref{EQresetB} maintain
\eqref{EQpackingcut},
suppose to the contrary that
$X \subseteq V$ comes to violate \eqref{EQpackingcut} after the resetting \eqref{EQresetA} and \eqref{EQresetB}.
This violation implies that $d^-_A (X) = g(X)$ before the resetting, and
$d^-_A(X)$ has decreased by one
while $g(X)$ has remained unchanged by the resetting.
It then follows that
\begin{align}
\label{EQenter}
u \in V \setminus X \quad \mbox{and} \quad v \in X.
\end{align}
It also follows that $i=1$ does not contribute to $g(X)$,
and hence before the resetting,
it holds that
\begin{align}
\label{EQU}
X \cap (V_0 \cup V_1) \neq \emptyset.
\end{align}
By \eqref{EQenter},
we have that
$u \in W\setminus X$ and
$v\in X \cap W$,
and hence $\emptyset \neq X \cap W \subsetneqq W$.
Here
we show that
$X \cap W$ satisfies
\eqref{EQw1}--\eqref{EQw3},
which contradicts the minimality of $W$.
Before the resetting,
it holds that
\begin{align}
\label{EQsubsup1}
d^-_A(X \cap W)
&{}\le
d^-_A(X) + d^-_A(W) - d^-_A(X \cup W)
\\
\label{EQsubsup2}
&{} \le g(X) + g(W) - g(X \cup W)
\quad \\
\label{EQsubsup3}
&{} \le g(X \cap W).
\end{align}
Indeed,
\eqref{EQsubsup1} follows from
submodularity of $d^-_A$.
The inequality \eqref{EQsubsup2} follows from $d^-_A(X)=g(X)$, $d^-_A(W)=g(W)$,
and $d^-_A(X \cup W) \ge g(X \cup W)$.
Finally,
\eqref{EQsubsup3} follows from Lemma \ref{LEMgsup}.
Since
$d^-_A(X \cap W) \ge g(X\cap W)$ by \eqref{EQpackingcut},
all inequalities \eqref{EQsubsup1}--\eqref{EQsubsup3}
hold with equality,
and hence
$d^-_A(X \cap W)
=
g(X \cap W)
$
holds before the resetting.
Equality in \eqref{EQsubsup3} implies that
$(X \cap W) \cap (V_0 \cup V_1) \neq \emptyset$.
Indeed,
we have that
$W \cap (V_0 \cup V_1) \neq \emptyset$ because $u \in W \cap (V_0 \cup V_1)$,
and hence $i=1$ does not contribute to $g(W)$.
Combined with \eqref{EQU},
$i=1$ contributes to none of $g(X)$, $g(W)$, and $g(X \cup W)$.
Thus,
by the equality in \eqref{EQsubsup3},
$i=1$ does not contribute to $g(X \cap W)$ as well,
and hence $(X \cap W) \cap (V_0 \cup V_1) \neq \emptyset$ must hold.
We also have $(X \cap W)\setminus V_0 \neq \emptyset$,
because $v \in (X \cap W)\setminus V_0$.
Therefore,
$X \cap W$ satisfies \eqref{EQw1}--\eqref{EQw3},
contradicting the minimality of $W$.
Thus,
we have finished proving that
resetting of \eqref{EQresetA} and \eqref{EQresetB} maintains
\eqref{EQpackingcut}.
Now we can apply induction to obtain
disjoint $b$-branchings $B_1,\ldots, B_k$ in the digraph $(V, A \setminus\{a\})$
such that
$d^-_{B_1} = b_1 - \chi_v$ and $d^-_{B_i} = b_i$ for $i=2,\ldots, k$,
where $\chi_v \in \ZZ\sp{V}$ is a vector defined by $\chi_v(v) = 1$ and $\chi_v(u) = 0$ for every $u \in V \setminus \{v\}$.
We complete the proof by showing that $B_1 \cup \{a\}$ is a $b$-branching.
In every resetting step,
we have $u \in W_0 \cup W_1$,
which implies that
the construction of $B_1$ begins at a vertex $r$ with $b_1(r) < b(r)$
and
that the component of $(V, B_1)$ containing $a$ includes $r$.
Thus,
no $X \subseteq V$ comes to satisfy $|B_1[X]| = b(X)$.
\end{proof}
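To make the statement concrete, here is a small brute-force check on a two-vertex digraph (our own toy instance and helper names; the $b$-branching test combines the indegree bound with the sparsity condition $|F[X]| \le b(X) - 1$) that the packing guaranteed by Theorem \ref{THMbbpacking} indeed exists:

```python
from itertools import chain, combinations

def is_b_branching(F, V, b):
    # F is a b-branching if every v has indegree at most b(v) in F and
    # |F[X]| <= b(X) - 1 for every nonempty X (the sparsity condition)
    if any(sum(1 for (u, w) in F if w == v) > b[v] for v in V):
        return False
    for r in range(1, len(V) + 1):
        for X in combinations(V, r):
            Xs = set(X)
            inside = sum(1 for (u, w) in F if u in Xs and w in Xs)
            if inside > sum(b[v] for v in Xs) - 1:
                return False
    return True

def indeg_vec(F, V):
    return {v: sum(1 for (u, w) in F if w == v) for v in V}

# toy instance: two disjoint b-branchings with prescribed indegrees exist
V = [0, 1]
A = [(0, 1), (1, 0)]
b = {0: 1, 1: 1}
b1 = {0: 0, 1: 1}
b2 = {0: 1, 1: 0}

index_sets = list(chain.from_iterable(combinations(range(len(A)), r)
                                      for r in range(len(A) + 1)))
found = any(
    indeg_vec([A[i] for i in I], V) == b1 and
    indeg_vec([A[j] for j in J], V) == b2 and
    is_b_branching([A[i] for i in I], V, b) and
    is_b_branching([A[j] for j in J], V, b)
    for I in index_sets for J in index_sets if not set(I) & set(J))
assert found
```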
\subsection{Algorithm for finding disjoint $b$-branchings}
Let us discuss the algorithmic aspect of Theorem \ref{THMbbpacking}.
First,
we can determine whether \eqref{EQpackingdeg} and \eqref{EQpackingcut} hold in strongly polynomial time.
Checking condition \eqref{EQpackingdeg} is straightforward.
For \eqref{EQpackingcut},
we have that $d^-_A(X)$ is submodular and
$g(X)$
is supermodular (Lemma \ref{LEMgsup}),
and hence
$d^-_A(X) - g(X)$ is submodular.
Thus,
we can determine whether there exists $X$ with $d^-_A(X) - g(X) < 0$ by
submodular function minimization,
which can be done in strongly polynomial time \cite{IFF01,LSW2015,Sch00}.
Finding $b$-branchings $B_1,\ldots, B_k$ can also be done in strongly polynomial time.
By the proof for Theorem \ref{THMbbpacking},
it suffices to find an arc $a \in A$ such that
resetting \eqref{EQresetA} and \eqref{EQresetB} maintains \eqref{EQpackingcut}.
This can be done by determining, for each $a \in A$, whether there exists $X$ with $d^-_A(X) - g(X) < 0$
after resetting \eqref{EQresetA} and \eqref{EQresetB},
i.e.,
by at most $|A|$ runs of submodular function minimization \cite{IFF01,LSW2015,Sch00}.
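For small instances one can also check \eqref{EQpackingdeg} and \eqref{EQpackingcut} by direct enumeration of all $X \subseteq V$ (exponential in $|V|$, in contrast to the strongly polynomial route above); a minimal sketch with hypothetical names:

```python
from itertools import combinations

def indeg(A, X):
    # number of arcs of A entering the vertex subset X
    Xs = set(X)
    return sum(1 for (u, v) in A if u not in Xs and v in Xs)

def g(X, b, bs):
    # g(X) = |{ i : b_i(X) = b(X) != 0 }|
    bX = sum(b[v] for v in X)
    return sum(1 for bi in bs if sum(bi[v] for v in X) == bX != 0)

def packing_conditions_hold(V, A, b, bs):
    if any(indeg(A, {v}) < sum(bi[v] for bi in bs) for v in V):
        return False                       # degree condition fails
    return all(indeg(A, X) >= g(X, b, bs)  # cut condition on every subset
               for r in range(len(V) + 1)
               for X in combinations(V, r))

# toy instance: k = 1, with b_1 <= b and b_1 != b
V = [0, 1]
b = {0: 1, 1: 1}
bs = [{0: 0, 1: 1}]
assert packing_conditions_hold(V, [(0, 1)], b, bs)   # feasible
assert not packing_conditions_hold(V, [], b, bs)     # degree condition fails
```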
\begin{theorem}
Conditions \eqref{EQpackingdeg} and \eqref{EQpackingcut} can be checked in strongly polynomial time.
Moreover,
if \eqref{EQpackingdeg} and \eqref{EQpackingcut} hold,
then disjoint $b$-branchings $B_1,\ldots, B_k$ such that $d_{B_i}^- = b_i$ for each $i \in [k]$ can be found in strongly polynomial time.
\end{theorem}
Furthermore,
if an arc-weight vector $w \in \RR_+^A$ is given,
we can find disjoint $b$-branchings $B_1,\ldots, B_k$ minimizing
$w(B_1)+ \cdots +w(B_k)$ in strongly polynomial time.
Indeed,
conditions \eqref{EQpackingdeg} and \eqref{EQpackingcut} yield a totally dual integral system
which determines a \emph{submodular flow polyhedron}.
A set family $\C \subseteq 2^V$ is called a
\emph{crossing family} if,
for each $X,Y \in \C$ with
$X \cup Y \neq V$
and $X \cap Y \neq \emptyset$,
it holds that
$X \cup Y, X\cap Y \in \C$.
A function $f: \C \to \RR$ defined on a
crossing family $\C \subseteq 2^V$ is called
\emph{crossing submodular} if,
for each
$X,Y \in \C$ with
$X \cup Y \neq V$
and
$X \cap Y \neq \emptyset$,
it holds that
$f(X) + f(Y) \ge f(X\cup Y) + f(X\cap Y)$.
A function $f$ is \emph{crossing supermodular} if $-f$ is crossing submodular.
A \emph{submodular flow polyhedron} is a polyhedron described as
\begin{alignat*}{2}
&{}x(\delta_A^-(X))-x(\delta_A^+(X)) \le f(X) {}&\quad&{}
(X \in \C)
, \\
&{}l(a) \le x(a) \le u(a) {}&{}
\quad &{}(a \in A)
\end{alignat*}
by some digraph $(V,A)$,
crossing
submodular function $f$
on a crossing family $\C \subseteq 2^V$,
and
vectors $l,u \in \RR^A$,
where $\delta_A^+(X)$ denotes the set of arcs
in $A$ from $X$ to $V \setminus X$.
\begin{lemma}[\cite{Sch84}]
\label{LEMsf}
For a digraph $D=(V,A)$,
let
$f\colon \C \to \RR$ be a
crossing
supermodular function
on a crossing family $\C \subseteq 2^V$,
and let $u \in \RR^A$.
Then,
a polyhedron determined by
\begin{alignat*}{2}
&x(\delta^-_A (X)) \ge f(X) \quad &&{}
(X \in \C),\\
& 0 \le x(a) \le u(a) \quad &&{}(a \in A)
\end{alignat*}
is a submodular flow polyhedron.
\end{lemma}
By Lemma \ref{LEMsf}, the linear inequality system \eqref{EQpackingdeg} and \eqref{EQpackingcut} determines a submodular flow polyhedron.
Indeed, we can define a crossing supermodular function $f\colon 2^V \to \RR$ by
\[
f(X) =
\begin{cases}
\displaystyle \sum_{i=1}^k b_i (v) & \mbox{($X = \{v\}$ for some $v\in V$)},\\
g(X) & (\mbox{otherwise}).
\end{cases}
\]
Since the linear system describing a submodular flow polyhedron is totally dual integral \cite{EG77},
an arc subset $B \subseteq A$ with \eqref{EQpackingdeg} and \eqref{EQpackingcut}
minimizing $w(B)$ can be found by optimization over a
submodular flow polyhedron,
which can be done in strongly polynomial time \cite{FIM02,FT87,IMS00,IMS03}.
After that,
we can partition $B$ into $b$-branchings $B_1,\ldots, B_k$ with $d^-_{B_i}=b_i$ ($i \in [k]$)
in the same manner as above.
\begin{theorem}
If \eqref{EQpackingdeg} and \eqref{EQpackingcut} hold,
then disjoint $b$-branchings $B_1,\ldots, B_k$ such that $d_{B_i}^- = b_i$ for each $i \in [k]$
minimizing $w(B_1)+ \cdots +w(B_k)$ can be found in strongly polynomial time.
\end{theorem}
\subsection{Integer decomposition property of the $b$-branching polytope}
In this subsection
we show another consequence of
Theorem \ref{THMbbpacking}:
the integer decomposition property of the $b$-branching polytope.
First,
Theorem \ref{THMbbpacking}
leads to the following min-max relation on covering by $b$-branchings.
This is an extension of Theorem \ref{THMbcovering},
the theorem on covering by branchings \cite{Fra79,MG86}.
\begin{corollary}
\label{CORbbcover}
Let $D=(V,A)$ be a digraph,
$b\in \ZZ_{++}^V$ be a positive integer vector on $V$,
and
$k$ be a positive integer.
Then,
the arc set $A$ can be covered by $k$ $b$-branchings if and only if
\begin{align}
\label{EQcover1}
&{}d^-_A(v) \le k \cdot b(v) \quad (v \in V), \\
\label{EQcover2}
&{}|A[X]| \le k (b(X)-1) \quad (\emptyset \neq X \subseteq V).
\end{align}
\end{corollary}
\begin{proof}
Necessity is obvious.
To prove sufficiency,
construct a new digraph $D'=(V',A')$ in the following manner.
The vertex set $V'$ is obtained from $V$ by adding a new vertex $r$.
The arc set $A'$ is obtained from $A$ by adding $k\cdot b(v) - d^-_A(v)$ parallel arcs
from $r$ to $v$ for each $v \in V$.
Note that $k\cdot b(v) - d^-_A(v)$ is nonnegative by \eqref{EQcover1}.
Then,
in the digraph $D' = (V',A')$,
it holds that
\begin{align}
\label{EQcover1p}
d^-_{A'}(v) {}&{}= k \cdot b(v) \quad (v \in V), \\
d^-_{A'}(X)
{}&{}= \sum_{v \in X}d^-_{A'}(v) - |A[X]| \notag\\
\label{EQcover2p}
{}&{}\ge \sum_{v \in X}k \cdot b(v) - k(b(X)-1) = k \quad (\emptyset \neq X \subseteq V).
\end{align}
Now define vectors $b', b_0' \in \ZZ_+^{V'}$ by
\begin{align*}
&
b'(v) =
\begin{cases}
b(v) & (v \in V), \\
1 & (v =r),
\end{cases}
&&
b'_0(v) =
\begin{cases}
b(v) & (v \in V), \\
0 & (v =r).
\end{cases}
\end{align*}
By \eqref{EQcover1p} and \eqref{EQcover2p},
we can apply Theorem \ref{THMbbpacking} to $D'$ and obtain
$k$ disjoint $b'$-branchings $B_1',\ldots, B_k'$ in $D'$
satisfying $d^-_{B_i'} = b_0'$ for each $i \in [k]$.
It then follows that $|B_i'| = b_0'(V)=b(V)$ for each $i \in [k]$.
Since $$|A'|= |A| + \sum_{v \in V} (k \cdot b(v)-d^-_A(v)) = |A| + (k\cdot b(V) - |A|)=k \cdot b(V),$$
$\{B_1',\ldots , B_k'\}$ is a partition of $A'$.
Thus,
by
restricting $B_1',\ldots, B_k'$ to $A$,
we obtain
$b$-branchings $B_1,\ldots, B_k$ partitioning $A$.
\end{proof}
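The root-augmentation in the proof above is easy to implement; the following sketch (helper names ours) builds $D'$ and checks the arc count $|A'| = k \cdot b(V)$ computed in the proof:

```python
def augment_with_root(V, A, b, k):
    # Construct D' = (V', A') from the proof: add a new root r together with
    # k*b(v) - indeg(v) parallel arcs (r, v) for each v, a nonnegative number
    # of arcs by condition (EQcover1)
    indeg = {v: sum(1 for (u, w) in A if w == v) for v in V}
    A_new = list(A)
    for v in V:
        extra = k * b[v] - indeg[v]
        assert extra >= 0, "condition (EQcover1) is violated"
        A_new += [("r", v)] * extra
    return ["r"] + list(V), A_new

V = [0, 1]
A = [(0, 1), (1, 0)]
b = {0: 1, 1: 1}
k = 2
V2, A2 = augment_with_root(V, A, b, k)
assert len(A2) == k * sum(b.values())   # |A'| = k * b(V), as in the proof
```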
The integer decomposition property of the $b$-branching polytope
is a direct consequence of
Corollary \ref{CORbbcover}.
\begin{corollary}
The $b$-branching polytope has the integer decomposition property.
\end{corollary}
\begin{proof}
Denote the $b$-branching polytope by $P$.
Recall that $P$ is determined by
\eqref{EQlp1}--\eqref{EQlp3} (Theorem \ref{THMpolytope}).
Let $k$ be a positive integer and
$x \in \ZZ\sp{A}$ be an integer vector in $kP$.
It follows from $x \in kP$ that
\begin{alignat*}{2}
{}&{}x (\delta^-(v)) \le k\cdot b(v) \quad {}&&{}(v \in V), \\
{}&{}x(A[X]) \le k(b(X) - 1) \quad {}&&{}(\emptyset \neq X \subseteq V), \\
{}&{}0 \le x(a) \le k \quad {}&&{}(a \in A).
\end{alignat*}
Now consider an arc set $A_x$ consisting of
$x(a)$ arcs parallel to $a$ for each $a \in A$.
It is straightforward to see that
\eqref{EQcover1} and \eqref{EQcover2} hold with $A$ replaced by $A_x$.
Thus,
by Corollary \ref{CORbbcover},
$A_x$ can be covered by $k$ $b$-branchings.
In other words,
$x$ is the sum of $k$ integer vectors in $P$,
implying the integer decomposition property of $P$.
\end{proof}
\section{Matroid-restricted $b$-branchings}
\label{SECmatroid}
In this section,
we deal with \emph{matroid-restricted $b$-branching},
which further generalizes $b$-branchings.
Let $D=(V,A)$ be a digraph and $b \in \ZZ_{++}^V$ be a positive integer vector on $V$.
For each vertex $v \in V$,
a matroid $\Minv=(\delta^-(v), \Iinv)$ of rank $b(v)$ is attached.
We denote by $\MinV = (A, \IinV)$ the direct sum of the matroids $\Minv$ over all $v \in V$.
Now an arc set $F \subseteq A$ is an \emph{$\MinV$-restricted $b$-branching}
if $F \in \IinV \cap \Isp$.
Note that a $b$-branching is a special case where
$\Minv$ is a uniform matroid for each $v\in V$.
Here we provide a multi-phase greedy algorithm for finding a maximum-weight $\MinV$-restricted $b$-branching
by extending \textsc{Algorithm \bb}.
\begin{description}
\item[Algorithm \mrbb{}.]
\item[Input.]
A digraph $D=(V,A)$,
vectors $b \in \ZZ_{++}^V$ and
$w\in \RR_+^A$,
and
matroids $\Minv=(\delta^-(v), \Iinv)$ of rank $b(v)$ for each $v\in V$.
\item[Output.]
An $\MinV$-restricted $b$-branching $F \subseteq A$ maximizing $w(F)$.
\item[Step 1.]
Set $i:=0$,
$\Di{0} := D$,
$\bi{0} := b$,
and
$\wi{0} := w$.
\item[Step 2.]
For each $v \in \Vi{i}$,
define a matroid $\Mvi{i}=(\delta_{\Ai{i}}^-(v), \Ivi{i})$ as
the restriction of $\Minv$ to $\delta_{\Ai{i}}^-(v)$ if $v \in V$,
and
as a uniform matroid of rank $1$ otherwise.
Let $\MVi{i} = (\Ai{i}, \IVi{i})$ be the direct sum of $\Mvi{i}$ for every $v \in \Vi{i}$.
Then,
find $\Fi{i} \in \IVi{i}$ maximizing $\wi{i}(\Fi{i})$.
\item[Step 3.]
If $(\Vi{i},\Fi{i})$ has a strong component $X$ such that
\begin{align}
\label{EQdependM}
|\Fi{i}[X]| = \bi{i}(X),
\end{align}
then
go to Step 4.
Otherwise,
let $F := \Fi{i}$ and
go to Step 5.
\item[Step 4.]
Denote the family of strong components $X$ in $(\Vi{i},\Fi{i})$ satisfying \eqref{EQdependM} by $\X \subseteq 2\sp{\Vi{i}}$.
Execute the following updates to construct $\Di{i+1}=(\Vi{i+1}, \Ai{i+1})$,
$\bi{i+1}\in \ZZ_{++}\sp{\Vi{i+1}}$,
and $\wi{i+1}\in\RR_+\sp{\Ai{i+1}}$.
\begin{itemize}
\item
For each $X \in \X$,
execute the following updates.
First,
contract $X$ to obtain a new vertex $v_X$.
Then,
for every arc $a=(z,y) \in \Ai{i}$ with $z \in \Vi{i} \setminus X$ and $y \in X$,
\begin{align*}
&{}z' := \begin{cases}
v_{X'} & (\mbox{$z \in X'$ for some $X' \in \X$}), \\
z & (\mbox{otherwise}),
\end{cases}
\\
&{}a' := (z',v_X), \\
&{}\Psi(a') := a, \\
&{}\wi{i+1}(a') := \wi{i}(a) - \wi{i}(\alpha(a,\Fi{i})) + \wi{i}(a_X),
\end{align*}
where
$\alpha(a,\Fi{i})$ is an arc in
the fundamental circuit of $a$ with respect to $\Fi{i}$ in $\mathbf{M}_y^{(i)}$
minimizing $\wi{i}$,
and
$a_X$ is an arc in $\Fi{i}[X]$ minimizing $\wi{i}$.
\item
Define $\bi{i+1} \in \ZZ_{++}\sp{\Vi{i+1}}$ by
\begin{align*}
\bi{i+1}(v) := \begin{cases}
1 &(\mbox{$v = v_X$ for some $X \in \X$}),\\
\bi{i}(v) &(\mbox{otherwise}).
\end{cases}
\end{align*}
\end{itemize}
Let $i := i+1$ and go back to Step 2.
\item[Step 5.]
If $i=0$, then return $F$.
\item[Step 6.]
For every strong component $X$ in $(\Vi{i-1}, \Fi{i-1})$ with \eqref{EQdependM},
apply the following update:
if there exists $a' = (z,v_X) \in F$,
then
\begin{align*}
F:= ((F \setminus \{a'\}) \cup \{\Psi(a')\}) \cup (\Fi{i-1}[X] \setminus \{\alpha(\Psi(a'),X')\});
\end{align*}
otherwise,
\begin{align*}
F:= F \cup (\Fi{i-1}[X] \setminus \{a_X\}).
\end{align*}
Let $i:= i-1$ and go back to Step 5.
\end{description}
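For intuition on Step 2: when every $\Minv$ is a uniform matroid (the plain $b$-branching case), maximizing the weight of an independent set of the direct sum decomposes by vertex into keeping the $b(v)$ heaviest arcs entering $v$. A minimal sketch (names and the toy instance are hypothetical):

```python
def step2_uniform(V, A, b, w):
    # Step 2 when every matroid M_v is uniform of rank b(v): a maximum-weight
    # independent set of the direct sum keeps, for each vertex v, the b(v)
    # heaviest arcs of A entering v
    F = []
    for v in V:
        incoming = sorted((a for a in A if a[1] == v), key=w, reverse=True)
        F += incoming[: b[v]]
    return F

V = [0, 1]
A = [(0, 1, "x"), (0, 1, "y"), (1, 0, "z")]   # labels distinguish parallel arcs
b = {0: 1, 1: 1}
weights = {"x": 3.0, "y": 1.0, "z": 2.0}
F = step2_uniform(V, A, b, lambda a: weights[a[2]])
assert len(F) == 2 and (0, 1, "x") in F and (1, 0, "z") in F
```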
\section{Concluding remarks}
\label{SECconcl}
In this paper,
we have proposed $b$-branchings,
a generalization of branchings.
In a $b$-branching, a vertex $v$ can have indegree at most $b(v)$,
and thus $b$-branchings relate to branchings just as $b$-matchings relate to matchings.
It is somewhat surprising that,
to the best of our knowledge,
such a fundamental generalization of branchings has never
appeared in the literature.
The reason might be that,
in order to obtain a reasonable generalization,
it is far from being trivial how the other matroid (graphic matroid) in branchings is generalized.
We have succeeded in obtaining a generalization
inheriting the multi-phase greedy algorithm \cite{Boc71,CL65,Edm67,Ful74}
and the packing theorem \cite{Edm73} for branchings
by setting
a sparsity matroid defined by \eqref{EQsparsity} as
the other matroid.
An important property of the two matroids is
Lemma \ref{LEMcircuit},
which says that
an independent set of one matroid is decomposed into
an independent set and
some circuits in the other matroid.
This plays an important role in the design of a multi-phase greedy algorithm:
find an optimal independent set $F$ in one matroid;
contract the circuits in $F$ with respect to the other matroid;
and
find the optimal common independent set recursively.
We remark that the definitions \eqref{EQpartition} and \eqref{EQsparsity} are essential
to attain this property.
For example,
the property fails if
the vector $b$ is not identical in \eqref{EQpartition} and \eqref{EQsparsity}.
It also fails if
the sparsity matroid is defined by $|F[X]| \le b(X) - k$ for $k \neq 1$.
Another remark is on
the similarity of our algorithm and the blossom algorithm for nonbipartite matchings \cite{Edm65},
where a factor-critical component can be contracted and expanded.
In our $b$-branching algorithm,
for each strong component $X \in \X$ and each $v^*\in X$,
there exists an arc set $F_X \subseteq A[X]$ such that
$d^-_{F_X} (v^*) = b(v^*)-1$ and $d^-_{F_X} (v) = b(v)$ for each $v \in X \setminus \{v^*\}$.
In the blossom algorithm for nonbipartite matchings,
for each factor-critical component $X$ and each vertex $v^* \in X$,
there exists a matching exactly covering $X \setminus \{v^*\}$.
We finally remark that
the problem of finding a maximum-weight $b$-branching is a special case of a
modest generalization of the framework of the $\mathcal{U}$-feasible $t$-matching problem in bipartite graphs~\cite{Tak17ipco}.
In \cite{Tak17ipco},
it is proved that
the $\mathcal{U}$-feasible $t$-matching problem
in bipartite graphs is efficiently tractable under certain assumptions on the family of excluded structures $\mathcal{U}$.
The $b$-branching problem can be regarded as a new problem
which falls in this tractable class of the (generalized) $\mathcal{U}$-feasible $t$-matching problem.
\section*{Acknowledgements}
This work is partially supported by
JST ERATO Grant Number JPMJER1201,
JST CREST Grant Number JPMJCR1402,
JST PRESTO Grant Number JPMJPR14E1,
JSPS KAKENHI Grant Numbers
JP16K16012,
JP17K00028,
JP25280004,
JP26280001,
Japan.
% Source: https://arxiv.org/abs/1906.00105
\title{A Lipschitz Matrix for Parameter Reduction in Computational Science}
\begin{abstract}
We introduce the Lipschitz matrix: a generalization of the scalar Lipschitz constant for functions with many inputs. Among the Lipschitz matrices compatible with a particular function, we choose the smallest such matrix in the Frobenius norm to encode the structure of this function. The Lipschitz matrix then provides a function-dependent metric on the input space. Altering this metric to reflect a particular function improves the performance of many tasks in computational science. Compared to the Lipschitz constant, the Lipschitz matrix reduces the worst-case cost of approximation, integration, and optimization; if the Lipschitz matrix is low-rank, this cost no longer depends on the dimension of the input, but instead on the rank of the Lipschitz matrix, defeating the curse of dimensionality. Both the Lipschitz constant and matrix define uncertainty away from point queries of the function, and by using the Lipschitz matrix we can reduce this uncertainty. If we build a minimax space-filling design of experiments in the Lipschitz matrix metric, we can further reduce this uncertainty. When the Lipschitz matrix is approximately low-rank, we can perform parameter reduction by constructing a ridge approximation whose active subspace is the span of the dominant eigenvectors of the Lipschitz matrix. In summary, the Lipschitz matrix provides a new tool for analyzing and performing parameter reduction in complex models arising in computational science.
\end{abstract}
\section*{Acknowledgements}
The authors would like to thank Akil Narayan for suggesting the
counter example to maximum uncertainty designs in~\cref{sec:design:uncertainty}
and Drew Kouri for pointing us to the randomized algorithms for Voronoi vertex sampling.
\section{Background\label{sec:background}}
Here we briefly compare the dimension reduction provided by the Lipschitz matrix
with several existing dimension reduction techniques.
\subsection{Active Subspace}
Given some measure $\mu$ on the domain $\set D$,
the active subspace is defined in terms of the dominant eigenvectors
of the average outer product of gradients
\begin{equation}
\ma C = \int_{\ve x\in \set D} \nabla f(\ve x) \nabla f(\ve x)^\trans \D \mu(\ve x).
\end{equation}
In contrast, when computing the Lipschitz matrix from gradient samples,
we do not need a measure associated with $\set D$
as the Lipschitz matrix is an upper bound on the outer product of gradients for all points in $\set D$:
\begin{equation}
\nabla f(\ve x) \nabla f(\ve x)^\trans \preceq \ma W = \ma L^\trans \ma L, \quad \forall \ve x \in \set D,
\end{equation}
where $\preceq$ denotes the ordering of symmetric matrices
($\ma A \preceq \ma B$ if $\ma B - \ma A$ is positive semidefinite).
Hence the active subspace matrix $\ma C$ and the Lipschitz matrix $\ma W$ provide
similar, but distinct, matrices from which we can determine the importance of a particular direction.
Exactly determining both of these matrices is difficult,
but both can be estimated using a collection of gradient samples.
For computing the active subspace matrix $\ma C$,
a typical approach is to randomly sample $\cve x_i\in \set D$ according to the measure $\mu$
and use a Monte Carlo estimate of the integral:
\begin{equation}
\ma C \approx \tma C := \frac{1}{N}\sum_{i=1}^N \nabla f(\cve x_i) \nabla f(\cve x_i)^\trans.
\end{equation}
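A minimal sketch of this Monte Carlo estimator on a toy ridge function (our own example, with assumed names; here every gradient is parallel to a single vector $a$, so the estimate of $\ma C$ is rank one):

```python
import math, random

def active_subspace_matrix(grad, samples):
    # Monte Carlo estimate of C = E[grad f(x) grad f(x)^T] (m x m, here m = 2)
    m = len(samples[0])
    C = [[0.0] * m for _ in range(m)]
    for x in samples:
        gvec = grad(x)
        for i in range(m):
            for j in range(m):
                C[i][j] += gvec[i] * gvec[j] / len(samples)
    return C

# toy ridge function f(x) = sin(a . x): every gradient is parallel to a,
# so the estimated C is symmetric and (numerically) rank one
a = (1.0, 2.0)
grad = lambda x: tuple(math.cos(a[0]*x[0] + a[1]*x[1]) * ai for ai in a)
random.seed(0)
samples = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
C = active_subspace_matrix(grad, samples)
assert abs(C[0][1] - C[1][0]) < 1e-12           # symmetry
assert abs(C[0][0] * C[1][1] - C[0][1] ** 2) < 1e-9   # zero determinant (rank one)
```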
In the case of the Lipschitz matrix as discussed in~\cref{sec:estimate},
we can estimate $\ma W$ by solving a semidefinite program:
\begin{equation}
\ma W \approx \tma W := \argmin_{\ma W' \in \mathbb{S}^{m\times m}} \Trace \ma W'
\text{ such that }
\nabla f(\cve x_i) \nabla f(\cve x_i)^\trans \preceq \ma W' \ \forall \ i=1,\ldots,M,
\end{equation}
where $\mathbb{S}^{m\times m}$ denotes the cone of $m\times m$ symmetric positive semidefinite matrices.
\subsection{Sloppy Models}
Sloppy models take a contrapositive perspective for dimension reduction in comparison to the active subspace;
this approach defines the directions along which $f$ does not vary~\cite{TQ14}.
\section{Lipschitz Uncertainty}
One of the appealing aspects of Gaussian processes is they provide
an approach for estimating the \emph{uncertainty} of the approximation $g$
constructed from samples of $f$.
This approach specifies a family of functions $g\in \mathcal{GP}(\set X, f)$,
where each $g$ interpolates $f$, i.e., $g(\hve x_i) = f(\hve x_i)$ for all $\hve x_i \in \hset X$,
as well as a probability associated with each function $g$
based on an assumed prior distribution.
Then we can define a set valued function $\set G_\delta$
that specifies the range of possible interpolating functions $g$ with probability greater than $\delta$
\begin{equation}
\set G_\delta(\ve x;\set X, f) = \lbrace g(\ve x) : \mathbb{P}[g \in \mathcal{GP}(\set X, f)]> \delta \rbrace.
\end{equation}
Using the Lipschitz matrix, we can provide a similar set-valued function $\set F_\ma L$
that defines the possible values of $f$, but under a very different interpretation.
From the Lipschitz matrix definition~\cref{eq:matrix_lipschitz},
we note that for a particular $\ve x\in \set D$,
$f(\ve x)$ must be contained within
\begin{equation}\label{eq:set_F}
\set F_\ma L(\ve x; \hset X, f) :=
\bigcap_{\hve x_i \in \hset X} \left[ f(\hve x_i) - \|\ma L(\ve x - \hve x_i)\|_2,
\ f(\hve x_i) + \|\ma L(\ve x - \hve x_i)\|_2 \right].
\end{equation}
Unlike the Gaussian process uncertainty,
this uncertainty of $f$ does not depend on a prior distribution for $f$ or a probability threshold $\delta$,
only the Lipschitz matrix $\ma L$.
\Cref{fig:gp} provides a comparison of these two notions of uncertainty
based on samples of a sine function.
Which notion of uncertainty is preferable depends on the setting.
In the remainder of this section we first describe how to evaluate
these bounds at a particular point
and then show how to compute these bounds over a set to
provide a visual representation of uncertainty in shadow plots.
\subsection{Point-wise Lipschitz Uncertainty}
As our definition of the uncertainty in $f$ given by $\set F_\ma L$ in \cref{eq:set_F}
is the intersection of intervals, we can restate this as a single interval
given in terms of the minimum and maximum of a finite set:
\begin{equation}
\set F_\ma L(\ve x; \hset X, f) =
\left[ \max_{\hve x_i \in \hset X} f(\hve x_i) - \|\ma L(\ve x - \hve x_i)\|_2,
\min_{\hve x_i \in \hset X} f(\hve x_i) + \|\ma L(\ve x - \hve x_i)\|_2
\right].
\end{equation}
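This interval is cheap to evaluate; the following sketch (our own toy example, with $f(\ve x) = x_1$ and the valid Lipschitz matrix $\ma L = \mathrm{diag}(1,0)$) computes it directly from the formula above:

```python
import math

def lipschitz_interval(x, X_hat, f_hat, L):
    # pointwise uncertainty interval from the displayed formula:
    # [ max_i f(x_i) - ||L(x - x_i)||,  min_i f(x_i) + ||L(x - x_i)|| ]
    def Lnorm(d):
        # ||L d||_2 for a 2x2 matrix L and vector d
        return math.hypot(L[0][0]*d[0] + L[0][1]*d[1],
                          L[1][0]*d[0] + L[1][1]*d[1])
    dists = [Lnorm((x[0]-xi[0], x[1]-xi[1])) for xi in X_hat]
    lower = max(fi - di for fi, di in zip(f_hat, dists))
    upper = min(fi + di for fi, di in zip(f_hat, dists))
    return lower, upper

# toy: f(x) = x_1, for which L = diag(1, 0) is a valid Lipschitz matrix
L = [[1.0, 0.0], [0.0, 0.0]]
X_hat = [(0.0, 0.0), (1.0, 3.0)]
f_hat = [0.0, 1.0]
lo, up = lipschitz_interval((0.5, -2.0), X_hat, f_hat, L)
assert lo <= 0.5 <= up   # the true value f(x) = 0.5 lies in the interval
```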
Visualizing this uncertainty is challenging,
as its difference from the scalar Lipschitz case only appears for dimensions two and greater.
In~\cref{fig:gap}, we compare the gap between these lower and upper bounds given by $\set F_\ma L$.
This example illustrates that the uncertainty given by the Lipschitz matrix is smaller
than that of the scalar Lipschitz constant and hence provides tighter bounds on $f$.
\input{fig_gap}
\subsection{Set-wise Lipschitz Bounds}
Shadow plots are an important tool of \emph{exploratory data analysis}
for visualizing the behavior of high-dimensional functions.
These plots, as exemplified in \cref{fig:shadow},
display one linear combination of the inputs along the horizontal axis
and the output along the vertical axis.
In addition, we seek to plot the shadow of the uncertainty in $f$.
To do so, we define the Lipschitz uncertainty interval for the set $\set S \subset \set D$
as the union of the pointwise intervals $\set F_\ma L(\ve x)$:
\begin{equation}\label{eq:L_set_bounds}
\set F_\ma L(\set S; \hset X, f) := \bigcup_{\ve x\in \set S} \set F_{\ma L}(\ve x; \hset X, f).
\end{equation}
As before, as $\set F_\ma L(\ve x)$ is an interval,
the union of these intervals in~\cref{eq:L_set_bounds} is also an interval:
\begin{equation}\label{eq:L_bounds_minimax}
\set F_\ma L(\set S; \hset X, f) =
\left[
\min_{\ve x \in \set S\vphantom{\hset X}}
\max_{\hve x_i \in \hset X} f(\hve x_i) - \|\ma L(\ve x - \hve x_i)\|_2, \
\max_{\ve x \in \set S\vphantom{\hset X}}
\min_{\hve x_i \in \hset X} f(\hve x_i) + \|\ma L(\ve x - \hve x_i)\|_2
\right].
\end{equation}
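On a finite grid standing in for $\set S$, the set-wise interval can be evaluated by brute force (a stand-in for the minimax optimization discussed below; the example and names are ours):

```python
import math

def pointwise_interval(x, X_hat, f_hat, L):
    # pointwise interval [max_i f_i - ||L(x - x_i)||, min_i f_i + ||L(x - x_i)||]
    def Lnorm(d):
        return math.hypot(L[0][0]*d[0] + L[0][1]*d[1],
                          L[1][0]*d[0] + L[1][1]*d[1])
    dists = [Lnorm((x[0]-xi[0], x[1]-xi[1])) for xi in X_hat]
    return (max(f - d for f, d in zip(f_hat, dists)),
            min(f + d for f, d in zip(f_hat, dists)))

def setwise_interval(S_points, X_hat, f_hat, L):
    # union of the pointwise intervals over a finite stand-in for S:
    # [ min over S of the lower bounds,  max over S of the upper bounds ]
    bounds = [pointwise_interval(x, X_hat, f_hat, L) for x in S_points]
    return min(b[0] for b in bounds), max(b[1] for b in bounds)

L = [[1.0, 0.0], [0.0, 0.0]]            # valid Lipschitz matrix for f(x) = x_1
X_hat = [(0.0, 0.0), (1.0, 3.0)]
f_hat = [0.0, 1.0]
S = [(t / 4.0, 0.0) for t in range(5)]   # grid standing in for the set S
lo, up = setwise_interval(S, X_hat, f_hat, L)
assert abs(lo - 0.0) < 1e-12 and abs(up - 1.0) < 1e-12
```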
For generating the shadow plots in \cref{fig:shadow},
we generate the uncertainty intervals for each value $y$ along the horizontal axis
by introducing the equality constraint $\set S_y = \lbrace \ve x \in \set D: \ma U^\trans \ve x = y \rbrace$.
\input{fig_shadow}
There are a variety of algorithms for solving the two minimax optimization problems in~\cref{eq:L_bounds_minimax};
e.g.,~\cite{CC78,MO80,RN98,CL92}.
Here we adopt a simple approach following Osborne and Watson~\cite{OW69}
which computes a search direction via a linear program
and selects the next step via a line search.
The advantage of this approach is that we can leverage existing,
high-quality LP solvers to compute this step.
In the following subsection we first describe this algorithm
with a few modifications for our application.
Then as this optimization problem has many local minima as evidenced in~\cref{fig:gp},
we describe a heuristic for initializing this algorithm.
\subsubsection{Optimization}
Here we consider the optimization problem for the lower bound in~\cref{eq:L_bounds_minimax},
\begin{equation}
\minimize_{\ve x \in \set S} \max_{i=1,\ldots,M} \phi_i^-(\ve x),
\quad \phi_i^-(\ve x) := f(\hve x_i) - \|\ma L(\ve x - \hve x_i)\|_2.
\end{equation}
Following Osborne and Watson~\cite{OW69}
we introduce an auxiliary variable $t$ to act as an upper bound on $\phi_i^-(\ve x)$
and solve the constrained optimization problem over $(\ve x,t)$:
\begin{equation}
\begin{split}
\minimize_{\ve x \in \set S, t\in \R} & \ t \\
\text{such that} & \ \phi_i^-(\ve x) \le t,
\quad i=1,\ldots,M.
\end{split}
\end{equation}
As the set $\set S$ can often involve a mixture of linear equality and inequality constraints,
we prefer the sequential linearization approach of Osborne and Watson,
where the constraints $\phi_i^-(\ve x) \le t$ are
linearized about the current iterate $\ve x^{(k)}$
and we solve a linear program for the search direction $\ve p$:
\begin{equation}\label{eq:bound_LP}
\begin{split}
\ve p, t^{(k+1)} = \argmin_{\ve p \in \R^m, t \in \R} & \ t\\
\text{such that} & \ \phi_i^-(\ve x^{(k)}) +
\ve p^\trans \nabla_{\ve x} \phi_i^-(\ve x^{(k)}) \le t \quad i=1,\ldots,M\\
&\ \ve x^{(k)} + \ve p \in \set S.
\end{split}
\end{equation}
Each step of our optimization algorithm then solves the (potentially large) linear program~\cref{eq:bound_LP}.
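One such linearized step can be sketched with \texttt{scipy.optimize.linprog}; the names here are hypothetical, and a box domain stands in for the general constraint set $\set S$, so this is a sketch of the idea rather than the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def ow_lp_step(xk, X_hat, fX, L, lb, ub):
    """One linearized Osborne-Watson step for the lower bound: solve the LP
    for a search direction p over variables (p, t) with x^(k) + p in [lb, ub]."""
    m = xk.size
    r = (xk - X_hat) @ L.T                      # rows: (L(xk - x_i))^T
    norms = np.linalg.norm(r, axis=1) + 1e-12   # guard against xk hitting a sample
    phi = fX - norms                            # phi_i^-(xk)
    grads = -(r / norms[:, None]) @ L           # rows: grad of phi_i^- at xk
    # variables z = [p, t]: minimize t subject to phi_i + g_i^T p <= t
    c = np.r_[np.zeros(m), 1.0]
    A_ub = np.c_[grads, -np.ones(len(fX))]
    res = linprog(
        c, A_ub=A_ub, b_ub=-phi,
        bounds=[(lb[j] - xk[j], ub[j] - xk[j]) for j in range(m)] + [(None, None)],
    )
    return res.x[:m]                            # search direction p

rng = np.random.default_rng(1)
X_hat = rng.uniform(-1, 1, (8, 2))
fX = X_hat @ np.array([1.0, 0.0])
p = ow_lp_step(np.zeros(2), X_hat, fX, np.eye(2), -np.ones(2), np.ones(2))
```

The returned direction $\ve p$ is then scaled by the line search of \cref{alg:extrema}.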
For the upper bound, we use the same algorithm with
\begin{equation}
\maximize_{\ve x \in \set S} \!\! \min_{i=1,\ldots,M} \! \! \phi_i^+(\ve x) =
-\minimize_{\ve x \in \set S}\! \! \max_{i=1,\ldots,M} \!\! -\phi_i^+(\ve x),
\ \phi_i^+(\ve x) := f(\hve x_i) + \|\ma L(\ve x - \hve x_i)\|_2.
\end{equation}
\input{alg_extrema}
\Cref{alg:extrema} provides a summary of this algorithm,
where we make a few modifications to the original algorithm.
First, we introduce additional constraints on the domain of $\ve x$.
Next, rather than performing an exact line search for the step length $\alpha$,
we use a backtracking line search with a standard Armijo condition for termination.
However, we find it is important to use a relatively large constant
for determining if we should accept the step (here we use $1/2$).
\subsubsection{Initialization}
Examining~\cref{fig:gp} we notice that for both the lower and upper bounds
there are many local minimizers on $\set S = [-1,1]$
and so the extrema returned by \cref{alg:extrema}
will depend strongly on the starting point $\ve x^{(0)}$.
Hence it is important to find a good set of initializations for this minimax optimization problem
to ensure we approximately obtain the true lower and upper bounds over $\set S$.
Fortunately, there is a simple heuristic for choosing a set of initializations
that leverages tools from computational geometry.
Consider the special case of finding the lower and upper bounds
given a Lipschitz matrix $\ma L$ when the value of each sample is zero,
$f(\hve x_i) = 0$ for all $\hve x_i \in \hset X$.
Then the lower and upper bounds have the same magnitude (but opposite signs)
and are given by the optimization problem
\begin{equation}\label{eq:empty_circle}
\maximize_{\ve x \in \set S} \min_{i=1,\ldots,M} \|\ma L(\ve x - \hve x_i)\|_2.
\end{equation}
If $\set S$ is a convex polytope,
then this is an example of the \emph{largest empty circle} problem
which is well known to be solved using the Voronoi diagram; see, e.g.,~\cite[sec.~5.1.3]{AK00}.
Namely, its solution comes from the finite set of Voronoi vertices $\ve v_j$,
which are equidistant from their $m+1$ nearest samples in $\hset X$,
\begin{equation}
\min_{i=1,\ldots,M} \|\ma L(\ve v_j - \hve x_i)\|_2 =
\|\ma L(\ve v_j - \hve x_i)\|_2 \quad \forall i \in \set I_j,
\quad |\set I_j| = m+1,
\end{equation}
and from the intersection of the Voronoi ridges,
which are hyperplanes satisfying fewer than $m+1$ of the constraints above,
with the boundary of the set $\ma L\set S$, which provides the remaining constraints
for a total of $m+1$ linearly independent equality constraints.
Then, equipped with this finite set, we simply evaluate the objective~\cref{eq:empty_circle}
for each and return the largest.
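One way to generate the interior candidates is \texttt{scipy.spatial.Voronoi} applied to the transformed samples; the sketch below uses hypothetical names. Note the diagram is built in the transformed coordinates, so mapping candidates back to $\set S$ requires inverting $\ma L$ (trivial here, since $\ma L = \ma I$).

```python
import numpy as np
from scipy.spatial import Voronoi

def empty_circle_candidates(X_hat, L):
    """Interior candidates for the largest empty circle problem: Voronoi
    vertices of the transformed samples L x_i (boundary candidates are
    handled separately)."""
    vor = Voronoi(X_hat @ L.T)      # diagram in the metric ||L(.)||_2
    return vor.vertices

rng = np.random.default_rng(2)
L = np.eye(2)
X_hat = rng.uniform(-1, 1, (30, 2))
V = empty_circle_candidates(X_hat, L)
# keep candidates inside L*D = [-1,1]^2 and score by min distance to the samples
inside = V[np.all(np.abs(V) <= 1, axis=1)]
scores = [np.linalg.norm(X_hat @ L.T - v, axis=1).min() for v in inside]
best = inside[int(np.argmax(scores))]
```

The winning vertex \texttt{best} is the center of the largest sample-free ball among the interior candidates.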
One challenge with this approach is that the samples $\hset X$ may not be elements of $\set S$,
for example when generating shadow plots where $\set S$ is $\set D$ intersected with an equality constraint.
Here we propose simply to project the candidates generated for the largest empty circle problem on $\set D$
onto $\set S$ and use those as the initializers to~\cref{alg:extrema}.
Specifically, given a candidate $\ve c$ from the largest empty circle problem,
we compute the projected candidate $\hve c$ via:
\begin{equation}
\hve c = \set P_{\set S}(\ve c) := \argmin_{\ve c'\in \set S} \|\ma L(\ve c - \ve c')\|_2.
\end{equation}
\input{alg_extrema_init}
The computational complexity of computing the Voronoi vertices grows exponentially with the dimension $m$,
but thanks to efficient computational geometry algorithms, such as Quickhull~\cite{BDH96},
we can compute all the Voronoi vertices for moderate dimensional problems $m\le 10$ with a few hundred samples
within a few minutes of computation time.
Unfortunately, computing the intersection of the Voronoi ridges with the boundary is significantly harder.
Rather than explicitly computing all these candidates,
we instead randomly sample the boundary of the domain $\set S$
along random directions from the Chebyshev center.
This procedure is summarized in \cref{alg:extrema_init}.
The approach we use here does not scale well to high dimensions ($m>10$).
Rather than deterministically computing all the Voronoi vertices,
we can use a randomized algorithm such as~\cite{LC05} that will randomly sample
both the Voronoi vertices as well as the points on the boundary for the largest empty circle problem.
However, as all our examples are moderate dimensional, we use the approach outlined in \cref{alg:extrema_init}.
\section{Mitigating the Curse of Dimensionality\label{sec:curse}}
In some instances, the Lipschitz matrix can
remove the curse of dimensionality.
As we will show, if $\ma L$ is rank-$n$ with $n < m$,
then we have reduced the intrinsic dimension of $f$ and
the covering number grows asymptotically like $\order(\epsilon^{-n})$, not $\order(\epsilon^{-m})$.
However, even if $\ma L$ is full rank,
we will show that using the Lipschitz matrix can still provide
effective dimension reduction, slowing the growth of $N_\epsilon(\ma L\set D)$.
To motivate this analysis, we consider a few illustrative examples.
Consider the linear function $f_1: \set D \to \R$
with scalar Lipschitz constant $L_1$ and Lipschitz matrix $\ma L_1$:
\begin{equation}
\set D_1 = [-1,1]^m, \quad
f_1(\ve x) = \ve a^\trans \ve x, \quad
L_1 = \|\ve a\|_2, \quad
\ma L_1 = \begin{bmatrix}
\ma 0 \\
\ve a^\trans
\end{bmatrix}.
\end{equation}
In the scalar Lipschitz case the curse of dimensionality is still present,
but in the matrix Lipschitz case $\ma L_1\set D_1$ lives on a one-dimensional subspace of $\R^m$
and hence $N_\epsilon(\ma L_1\set D_1)$ grows only like $\order(\epsilon^{-1})$.
\emph{Ridge functions}~\cite{Pin15} allow us to generalize this dimension reduction result.
A ridge function is a function of a few linear combinations of the input $\ve x$;
for example consider $f_2:\set D_2 = [-1,1]^m\to \R$
\begin{equation}
f_2(\ve x) = g_2(\ma A^\trans \ve x)
\quad \ma A\in \R^{m\times n},
\quad g_2: \R^n\to \R.
\end{equation}
For ridge functions, the gradients of $f_2$ lie in the range of $\ma A$;
hence both the weight matrix $\ma W_2$ and the Lipschitz matrix $\ma L_2$ are rank-$n$:
\begin{equation}
\nabla f_2(\ve x) \in \Range(\ma A)
\quad \Rightarrow \quad
\Rank(\ma W_2) = \Rank(\ma L_2) = n.
\end{equation}
Again we have achieved dimension reduction as $\ma L_2 \set D_2$ lives on an $n$-dimensional subspace,
yielding substantial savings if $n\ll m$.
However, there are situations in which the Lipschitz matrix does not provide
a benefit over the standard Lipschitz constant.
Consider the quadratic function $f_3$ which has Lipschitz constant $L_3$ and Lipschitz matrix $\ma L_3$
\begin{equation}
\set D_3 = [-1,1]^m, \quad
f_3(\ve x) = \frac{1}{2\sqrt{m}} \ve x^\trans \ve x, \quad
L_3 = 1, \quad
\ma L_3 = \ma I.
\end{equation}
In this case, the scalar Lipschitz constant and the Lipschitz matrix are equivalent
and hence $N_\epsilon(L_3\set D_3) = N_\epsilon(\ma L_3\set D_3)$
so the Lipschitz matrix yields no additional savings.
\subsection{Intrinsic Dimension Reduction}
If the Lipschitz matrix $\ma L$ is rank-$n$ and $n< m$,
then we have effectively reduced the dimension.
Consider any point $\ve x \in \set D$.
Then for any $\ve z$ in the nullspace of $\ma L$,
the value of $f\in \set L(\set D, \ma L)$
is constant:
\begin{equation}
f(\ve x) = f(\ve x + \ve z) \quad \forall \ve x + \ve z \in \set D, \
\ma L \ve z = \ve 0.
\end{equation}
Hence when performing any task on $f$,
it is sufficient to consider the coordinates defined by the range of $\ma L$,
which define an $n$-dimensional set.
Thus, the covering number grows at most like $\order(\epsilon^{-n})$---%
slower than the scalar Lipschitz rate $\order(\epsilon^{-m})$.
\subsection{Effective Intrinsic Dimension Reduction\label{sec:curse:effective}}
Even if the Lipschitz matrix is not low rank,
the Lipschitz matrix can still provide \emph{effective intrinsic dimension reduction}
as the covering number grows slower than $\order(\epsilon^{-m})$ in the pre-asymptotic regime.
\Cref{fig:cover} illustrates that this is an effect of the Lipschitz matrix
shrinking the width of $\ma L\set D$ along particular dimensions.
These dimensions with small width can, for large $\epsilon$,
be covered by a single $\epsilon$-ball and hence the covering number grows
like the number of dimensions wider than $\epsilon$.
We can see this effect in \cref{fig:volume} for the OTL Circuit test problem.
\input{fig_volume}
As computing the optimal covering is NP-hard~\cite[Chap.~2]{CL98},
we construct an upper bound on the covering number by counting the number
of $\epsilon$-balls placed on a grid that intersect the domain.
Here we choose a grid aligned to the right singular vectors of $\ma L$
with spacing $\epsilon/\sqrt{m}$ so that the grid of balls covers $\ma L \set D$ with no holes.
Then we determine if a particular $\epsilon$-ball centered at grid point $\hve y$
intersects the set $\ma L\set D$ by finding the closest point $\ve z$ in $\ma L\set D$
by solving the quadratic program:
\begin{equation}
\begin{split}
\minimize_{\ve z \in \R^m} & \ \| \hve y - \ve z\|_2^2 \\
\text{such that} & \ \ve z = \sum_{j} \alpha_j \ve c_j,
\quad \sum_{j} \alpha_j = 1, \quad \quad \alpha_j \ge 0.
\end{split}
\end{equation}
Here we have described the point $\ve z$ as a convex combination of the
corners $\ve c_j$ of $\ma L\set D$.
Then if $\| \hve y - \ve z\|_2 \le \epsilon$, $\set B_\epsilon(\hve y)$ intersects $\ma L\set D$.
As the number of these grid points $\hve y$ grows exponentially with dimension,
when there are more than $10^5$ we randomly sample $10^5$ of them and estimate the covering number
as the total number of grid points times the fraction of this sample that intersects the transformed domain.
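The quadratic program above can be handed to an off-the-shelf NLP solver; the following sketch uses \texttt{scipy.optimize.minimize} (SLSQP) with hypothetical names, taking the square $\ma L\set D = [-1,1]^2$ as the polytope. It is an illustration under these assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def dist_to_hull(y, corners):
    """Distance from y to conv(corners): the QP in the text,
    min ||y - sum_j a_j c_j||_2  s.t.  sum_j a_j = 1, a_j >= 0."""
    K = len(corners)
    res = minimize(
        lambda a: np.sum((y - a @ corners) ** 2),   # squared distance to hull point
        np.full(K, 1.0 / K),                        # start at the centroid
        method="SLSQP",
        bounds=[(0.0, None)] * K,
        constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
    )
    return np.sqrt(res.fun)

corners = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])  # L*D for L = I
d_in = dist_to_hull(np.array([0.2, -0.3]), corners)   # interior point: distance ~ 0
d_out = dist_to_hull(np.array([2.0, 0.0]), corners)   # nearest hull point is (1, 0)
```

A ball $\set B_\epsilon(\hve y)$ then intersects $\ma L\set D$ exactly when the returned distance is at most $\epsilon$.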
\subsection{Reduced Volume}
Asymptotically as the $\epsilon$-balls in the covering number become small,
the covering number is proportional to the volume of the domain.
One of the advantages of the Lipschitz matrix is that it can yield a set with a smaller volume.
As $\ma L$ and $L$ are linear transformations, the volume of $\ma L\set D$ and $L \set D$
are simply related to the volume of $\set D$:
\begin{equation}
\vol(L\set D) = |L|^m \cdot \vol(\set D), \qquad
\vol(\ma L\set D) = | \! \det \ma L| \cdot \vol(\set D).
\end{equation}
Hence, if $\vol(\ma L\set D) \ll \vol(L \set D)$,
we can obtain a substantial reduction in the covering number,
and hence in the cost of optimization.
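These volume formulas reduce to simple arithmetic; a small numeric illustration with a hypothetical diagonal Lipschitz matrix:

```python
import numpy as np

m = 3
L_scalar = 2.0
L_mat = np.diag([2.0, 0.1, 0.05])    # hypothetical Lipschitz matrix with small trailing entries
vol_D = 2.0 ** m                     # vol([-1,1]^3) = 8
vol_scalar = L_scalar ** m * vol_D               # vol(L D)  = |L|^m vol(D)
vol_matrix = abs(np.linalg.det(L_mat)) * vol_D   # vol(L D)  = |det L| vol(D)
ratio = vol_scalar / vol_matrix      # asymptotic ratio of the covering numbers
```

Here the ratio is $2^3/(2\cdot 0.1\cdot 0.05) = 800$, even though the largest entries of $\ma L$ and $L$ agree.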
\Cref{tab:scaling} illustrates that the volume of the set transformed by the Lipschitz matrix
can be several orders of magnitude smaller than that of the set scaled by the Lipschitz constant.
Moreover, the ratio of these volumes is asymptotically the ratio of $N_\epsilon(L\set D)$
to $N_\epsilon(\ma L \set D)$.
We can see this in \cref{fig:volume}, where this ratio is $1.1\cdot 10^5$
and the separation between the covering numbers is indeed of this magnitude.
\section{Design of Computer Experiments\label{sec:design}}
Given the uncertainty derived from the Lipschitz matrix in the previous section,
we might ask:
how can we choose samples to minimize the size of the uncertainty interval~\cref{eq:interval}?
This fundamentally is a question of the \emph{design of computer experiments}~\cite{SWN03},
a subfield of the \emph{design of experiments} (see, e.g.,~\cite{Fed72})
with the primary distinction that observations are deterministic; i.e., $f(\ve x)$ returns only one value.
Although there are a variety of motivations for constructing a design of experiments,
here we show that the Lipschitz matrix motivates a space-filling
design of experiments in the metric it induces.
We briefly describe how a design can be approximated using the vertices of the Voronoi diagram
and discuss the pitfalls of a greedy, maximum uncertainty approach when
the Lipschitz matrix is unknown.
\subsection{Space Filling Design}
In a space filling design (see, e.g.,~\cite[sec.~5.3]{SWMW89})
the goal is to distribute samples $\lbrace \hve x_j \rbrace_{j=1}^M$
evenly with respect to a particular distance metric on the domain of $f$.
Here we use a metric derived from the Lipschitz matrix:
\begin{equation}
d(\ve x_1, \ve x_2) = \|\ma L(\ve x_1 - \ve x_2)\|_2.
\end{equation}
For example, a \emph{minimax distance design}
would choose samples $\lbrace \hve x_j \rbrace_{j=1}^M$ to minimize the fill distance
in this metric:
\begin{equation}\label{eq:designL}
\minimize_{\hve x_1,\ldots, \hve x_M \subset \set D} \max_{\ve x \in \set D} \min_{j=1,\ldots, M}
\|\ma L(\hve x_j - \ve x)\|_2.
\end{equation}
This design is motivated by the fact that it
minimizes an error bound holding for all interpolatory approximations of $f$
with the same Lipschitz matrix.
\begin{corollary}\label{cor:LipMatBound}
Suppose $f, \widetilde{f}\in \set L(\set D, \ma L)$ with $f(\hve x_j) = \widetilde{f}(\hve x_j)$
for $j=1,\ldots, M$, then
\begin{equation}\label{eq:Ldist}
\max_{\ve x\in \set D} |f(\ve x) - \widetilde{f}(\ve x)|
\le \max_{\ve x\in \set D}\min_{j=1,\ldots, M} 2\|\ma L(\hve x_j - \ve x)\|_2.
\end{equation}
\end{corollary}
\begin{proof}
Set $\ma U = \ma I$ and $\epsilon=0$ in \cref{thm:error_bound}.
\end{proof}
Note that others have justified minimax designs using Bayesian arguments~\cite{JMY90}.
Unfortunately, constructing a minimax design
is challenging due to the nested optimization problem.
Instead we solve a simpler optimization problem as a proxy for this minimax design
that allows us to exploit tools from computational geometry to aid in its solution.
Specifically, we construct a sequential \emph{maximin distance design}
by picking $\hve x_{M+1}$ while holding $\hve x_{1},\ldots, \hve x_{M}$ fixed:
\begin{equation}\label{eq:empty}
\hve x_{M+1} = \argmax_{\ve x\in \set D} \min_{j=1,\ldots,M} \|\ma L(\hve x_j - \ve x)\|_2.
\end{equation}
When $\set D$ is a convex polytope this is an example of the
largest empty circle problem whose solution is given by the
\emph{bounded Voronoi vertices}~\cite[subsec.~5.1.3]{AK00}.
The Voronoi vertices are those points that are equidistant
from their $m$ closest points in $\lbrace \hve x_j\rbrace_{j=1}^M$:
\begin{equation}
\| \ma L (\ve v - \hve x_{\set I[1]})\|_2 =
\cdots =\| \ma L (\ve v - \hve x_{\set I[m]})\|_2 \le \| \ma L(\ve v - \hve x_j)\|_2 \quad \forall j=1,\ldots, M,
\end{equation}
and the bounded Voronoi vertices are those vertices inside $\set D$ along with the
intersections of equidistance hyperplanes with the boundary of $\set D$.
If the domain $\set D$ is described by linear equality and inequality constraints
$\set D = \lbrace \ve x \in \R^m: \ma A \ve x \le \ve b, \ma A_{\text{eq}} \ve x = \ve b_{\text{eq}}\rbrace$
then these bounded Voronoi vertices are those points that satisfy a total of $m$ equality constraints:
\begin{equation}
\begin{split}
\| \ma L (\ve v - \hve x_{\set I[1]})\|_2 &=
\cdots =\| \ma L (\ve v - \hve x_{\set I[\ell]})\|_2 \le \| \ma L(\ve v - \hve x_j)\|_2
\quad \forall j=1,\ldots, M; \\
\ma A_{\text{eq}} \ve v &= \ve b_{\text{eq}} \qquad
\ma A_{\text{eq}} \in \R^{p\times m}, \ve b_{\text{eq}}\in \R^p; \\
\ma A_{\set J, \cdot} \ve v &= \ve b_{\set J} \qquad |\set J| = m - \ell - p.
\end{split}
\end{equation}
By construction these bounded Voronoi vertices satisfy the Karush--Kuhn--Tucker conditions
for~\cref{eq:empty} and hence are its local maximizers.
Thus, one approach to solving~\cref{eq:empty} is to enumerate these bounded Voronoi vertices $\ve v$
and simply return the vertex maximizing $\min_{j=1,\ldots, M} \|\ma L(\hve x_j - \ve v)\|_2$.
In low dimensional spaces (e.g., $m\le 3$) this is feasible using specialized solvers like Qhull~\cite{BDH96}.
However, the number of bounded Voronoi vertices grows exponentially in the dimension of the domain,
so for higher dimensional spaces we
resort to a randomized vertex sampling algorithm to find a subset of these~\cite{LC05}.
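When even sampled vertices are unavailable, the greedy step~\cref{eq:empty} can be crudely approximated by scoring random candidates; a minimal sketch, assuming the box domain $\set D = [-1,1]^m$ and hypothetical names:

```python
import numpy as np

def next_sample(X_hat, L, n_cand=2000, rng=None):
    """Approximate the greedy maximin step by scoring random candidates in
    D = [-1,1]^m rather than enumerating bounded Voronoi vertices."""
    rng = rng or np.random.default_rng()
    m = X_hat.shape[1]
    cand = rng.uniform(-1, 1, (n_cand, m))
    # distances ||L(x_j - c)||_2 for every (candidate, sample) pair
    d = np.linalg.norm((cand[:, None, :] - X_hat[None, :, :]) @ L.T, axis=2)
    return cand[np.argmax(d.min(axis=1))]   # candidate farthest from its nearest sample

X_hat = np.array([[-1.0, -1.0], [1.0, 1.0]])
x_new = next_sample(X_hat, np.eye(2), rng=np.random.default_rng(3))
```

With the two diagonal samples above, the selected point lands near one of the opposite corners, the true maximizers of~\cref{eq:empty}.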
\subsection{Numerical Examples}
Here we provide two examples to illustrate the effectiveness of using a Lipschitz matrix
based design of experiments over one based on the Lipschitz constant.
\Cref{tab:sample} reports an estimate of the maximum pointwise uncertainty for these two designs
and shows that the Lipschitz matrix design reduces this uncertainty by a factor of three to seven.
In \cref{fig:shadow} we see that the uncertainty projected onto the shadow plot
is significantly reduced when using a Lipschitz matrix based design compared to a Lipschitz constant design.
Importantly, the projected uncertainty is only a factor of two larger
than the enclosing envelope based on hundreds of thousands of samples;
this means the Lipschitz matrix coupled with a Lipschitz design of experiments
can provide an informative projected uncertainty.
\input{tab_sample}
\subsection{Maximum Uncertainty Design\label{sec:design:uncertainty}}
A tempting heuristic for choosing samples sequentially
is to pick the next sample $\hve x_{M+1}$ where the uncertainty interval~\cref{eq:Lmat_uncertain}
is largest; i.e.,
\begin{equation}\label{eq:max_uncertainty}
\hve x_{M+1} = \argmax_{\ve x \in \set D} | \set U(\ve x; \set L(\set D, \ma L), \lbrace (\hve x_j, y_j) \rbrace_{j=1}^M) |.
\end{equation}
Unfortunately, this heuristic will fail if we are simultaneously trying
to estimate the Lipschitz matrix from these samples.
Namely, there can be regions of the domain where the Lipschitz bounds
predict an uncertainty interval of size zero, yet which contain points that,
if sampled, would increase the Lipschitz matrix estimate.
This can be seen in one dimension in \cref{fig:gp}.
Near $\frac13$ the bounds are tight, yet this region contains the maximum slope
which, if sampled, would increase the Lipschitz constant.
\section{Discussion}
Here we have generalized the scalar Lipschitz constant
to the \emph{Lipschitz matrix}.
This Lipschitz matrix provides improved results over those for scalar Lipschitz functions:
it yields more efficient designs for computer experiments,
it decreases uncertainty,
and it reduces the computational complexity of approximation, optimization, and integration.
An appealing aspect of the Lipschitz matrix lies in its computation.
Although the semidefinite program is expensive, it is a convex problem with a unique minimizer.
Moreover, no restrictions are placed on how samples or gradients are chosen
and either samples or gradients can be used.
These are features not shared by any existing subspace based dimension reduction technique.
\section{Bounding the Lipschitz Matrix using Samples\label{sec:samples}}
How can we compute the Lipschitz matrix $\ma L\in \R^{m\times m}$ satisfying the constraint
\begin{equation}\label{eq:constraint}
| f(\ve x_1) - f(\ve x_2)| \le \| \ma L(\ve x_1 - \ve x_2)\|_2 \qquad \forall \ve x_1, \ve x_2\in \set D?
\end{equation}
Ideally, we would infer $\ma L$ directly from $f$,
but in many applications this is not possible.
Instead, we propose computing a Lipschitz matrix $\hma L \in \R^{m\times m}$
that satisfies this inequality~\cref{eq:constraint}
on only a finite set of samples $\hset D := \lbrace \ve x_i\rbrace_{i=1}^M \subseteq \set D$;
namely, $\hma L$ satisfies
\begin{equation}\label{eq:constraint_finite}
| f(\ve x_i) - f(\ve x_j)| \le \|\hma L(\ve x_i - \ve x_j)\|_2 \qquad \forall i,j \in 1,\ldots,M.
\end{equation}
Unlike the constraint over the entire set~\cref{eq:constraint},
this finite-sample estimate $\hma L$ is not well posed.
Suppose there is a vector $\ve a \in \R^m$ such that the values $\lbrace \ve a^\trans \ve x_i\rbrace_{i=1}^M$
are distinct;
then there is a rank-1 Lipschitz matrix satisfying the finite sample constraint~\cref{eq:constraint_finite}:
\begin{equation}\label{eq:rank1}
\hma L = \begin{bmatrix} \ma 0 \\ L\ve a^\trans \end{bmatrix} \in \R^{m\times m},
\qquad L = \max_{i,j} \frac{ |f(\ve x_i) - f(\ve x_j)|}{| \ve a^\trans \ve x_i - \ve a^\trans \ve x_j|}.
\end{equation}
Although this Lipschitz matrix satisfies the finite sample constraint,
since the choice of $\ve a$ only depends on the samples $\hset D$,
it bears no resemblance to the true Lipschitz matrix $\ma L$
and may be arbitrarily large if $\ve a^\trans \ve x_i$ approaches $\ve a^\trans \ve x_j$.
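This degeneracy is easy to reproduce numerically; the sketch below (with hypothetical data) builds the rank-1 matrix of~\cref{eq:rank1} from a random direction and confirms that it satisfies every finite-sample constraint despite ignoring the structure of $f$.

```python
import numpy as np

rng = np.random.default_rng(4)
m, M = 3, 15
X = rng.uniform(-1, 1, (M, m))
fX = np.sin(X).sum(axis=1)            # arbitrary sampled function values
a = rng.standard_normal(m)            # generic direction: projections a^T x_i are distinct
proj = X @ a
num = np.abs(fX[:, None] - fX[None, :])
den = np.abs(proj[:, None] - proj[None, :])
mask = den > 1e-12                    # exclude the zero diagonal
Lc = (num[mask] / den[mask]).max()    # the constant L in the rank-1 construction
L_hat = np.outer(np.r_[np.zeros(m - 1), 1.0], Lc * a)  # zero rows, last row L a^T
```

Rerunning with a direction $\ve a$ nearly orthogonal to the data's spread makes \texttt{Lc}, and hence $\hma L$, arbitrarily large.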
This presents a fundamental challenge when trying to recover the finite-sample Lipschitz matrix $\hma L$.
Additionally, our goal is not simply to recover a matrix $\hma L$ satisfying the finite sample
constraint, but to find the \emph{smallest} such matrix.
There is a natural ordering for these Lipschitz matrices.
Considering the 2-norm in~\cref{eq:constraint}
\begin{equation}
\| \hma L(\ve x_i - \ve x_j)\|_2^2 = (\ve x_i - \ve x_j)^\trans \hma L^\trans \hma L(\ve x_i - \ve x_j),
\end{equation}
we say that $\hma L_1$ is smaller than $\hma L_2$ if
\begin{equation}
\hma L_1^\trans \hma L_1 \preceq \hma L_2^\trans \hma L_2.
\end{equation}
However this is only a partial ordering,
and so to pose an optimization problem for $\hma L$,
we need a function that converts this partial ordering into a total ordering.
A natural choice might be the determinant $|\det \hma L|$, as this is the factor by which $\hma L$
shrinks the domain $\set D$;
however, this objective function is undesirable because it pushes $\hma L$ towards a rank-deficient matrix,
even when one is not called for.
Instead, we optimize with respect to the trace of $\hma L^\trans \hma L$, or equivalently,
the Frobenius norm of $\hma L$:
\begin{equation}
\Trace (\hma L^\trans \hma L) = \sum_{i=1}^m [\hma L]_{\cdot,i}^\trans [\hma L]_{\cdot,i}
= \|\hma L\|_\fro^2.
\end{equation}
The hope is that this objective function discourages spurious rank-deficient solutions
like~\cref{eq:rank1}, since it penalizes large entries in $\hma L$.
Thus our goal is to solve the following quadratically constrained quadratic program (QCQP):
\begin{equation}\label{eq:Lmin1}
\begin{split}
\minimize_{\hma L\in \R^{m\times m}} & \ \frac12 \| \hma L\|_\fro^2 \\
\text{such that} &\
\frac12 [f(\ve x_i) - f(\ve x_j)]^2 \le \frac12 \|\hma L(\ve x_i - \ve x_j)\|_2^2
\quad \forall \ve x_i, \ve x_j \in \hset D.
\end{split}
\end{equation}
The optimal solution of this problem is a lower bound on the Lipschitz matrix $\ma L$ over the entire set,
i.e., the matrix generated by solving~\cref{eq:Lmin1} with $\hset D$ replaced by $\set D$:
\begin{equation}
\hma L^\trans \hma L \preceq \ma L^\trans \ma L.
\end{equation}
Unfortunately~\cref{eq:Lmin1} is not a convex problem
(it would be if the inequalities were reversed)
and as such we cannot use powerful algorithms for solving convex QCQPs.
This motivates our development of an interior point algorithm in the remainder of this section.
In the first subsection we show how to inexpensively generate a finite sample Lipschitz matrix $\tma L$
that satisfies the constraints.
Then, in the subsequent subsection we describe an interior point algorithm for solving~\cref{eq:Lmin1}
and conclude with a discussion on how to recursively deflate the rank of $\hma L$ if possible.
\subsection{Initialization}
When using interior point methods it is helpful to initialize them with a feasible point
such that succeeding iterations of the algorithm preserve feasibility.
Here we describe a simple procedure for constructing a finite-sample Lipschitz matrix $\tma L$
satisfying the constraints~\cref{eq:constraint_finite}.
Suppose we have an initial estimate $\tma L$;
then for a given $i$ and $j$ we require
\begin{equation}
[f(\ve x_i) - f(\ve x_j)]^2 \le (\ve x_i - \ve x_j)^\trans \tma L^\trans \tma L (\ve x_i - \ve x_j)
= \ve h^\trans \ma M \ve h
\end{equation}
where for convenience we have defined $\ve h=\ve x_i - \ve x_j$ and $\ma M = \tma L^\trans \tma L$.
If this constraint is not satisfied,
we then add the rank-1 update $\alpha \ve h \ve h^\trans$ to $\ma M$
such that $\alpha > 0$ is as small as possible.
Applying this update
\begin{align}
[f(\ve x_i) - f(\ve x_j)]^2 &\le
\ve h^\trans \ma M \ve h + \ve h^\trans(\alpha \ve h \ve h^\trans) \ve h
= \ve h^\trans \ma M \ve h + \alpha \|\ve h\|_2^4 \\
\intertext{yields the choice}
\alpha &= \frac{ [f(\ve x_i) - f(\ve x_j)]^2 - \ve h^\trans \ma M \ve h}{\|\ve h\|_2^4}.
\end{align}
\Cref{alg:L_init} uses this update over all unique pairs of $i$ and $j$
to construct an initial estimate of $\tma L$.
\input{alg_L_init}
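A minimal sketch of this initialization in Python (hypothetical names): since each update adds a positive semidefinite matrix, previously satisfied constraints remain satisfied, so a single pass over the pairs yields a feasible point, and a symmetric square root recovers $\tma L$ from $\ma M$.

```python
import numpy as np

def init_lipschitz(X, fX):
    """Feasible starting point: rank-1 updates M += alpha h h^T applied to each
    violated pair, then a symmetric square root gives L~ with L~^T L~ = M."""
    m = X.shape[1]
    M_mat = np.zeros((m, m))
    for i in range(len(X)):
        for j in range(i):
            h = X[i] - X[j]
            gap = (fX[i] - fX[j]) ** 2 - h @ M_mat @ h
            if gap > 0:  # constraint violated: apply the smallest PSD correction
                M_mat += (gap / np.linalg.norm(h) ** 4) * np.outer(h, h)
    w, V = np.linalg.eigh(M_mat)                 # M is PSD by construction
    return (V * np.sqrt(np.maximum(w, 0.0))) @ V.T

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (12, 3))
fX = X[:, 0] ** 2 + X[:, 1]
L_tilde = init_lipschitz(X, fX)
```

The resulting $\tma L$ satisfies every pairwise constraint and can seed the interior point iteration.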
\subsection{Optimization}
Here we describe an interior point method to solve~\cref{eq:Lmin1},
largely following the outline given by Nocedal and Wright~\cite[Chap.~19]{NW06}.
Before discussing the details of our implementation,
we first discuss how we simplify the optimization problem.
As written,~\cref{eq:Lmin1} is posed as an optimization problem over the $m^2$ entries of $\hma L$;
here we show how to restate it as a problem posed over a vector $\ve \ell$ of the entries of $\hma L$.
First note that $\|\hma L \|_\fro = \|\! \vectorize \hma L\|_2$
where $\vectorize \hma L$ converts $\hma L \in \R^{m\times m}$ into a vector in $\R^{m^2}$ by stacking columns.
Next, we rewrite the product $\hma L (\ve x_i - \ve x_j)$ as:
\begin{equation}
\hma L (\ve x_i - \ve x_j) = ( (\ve x_i - \ve x_j)^\trans \otimes \ma I_m) \vectorize \hma L
\end{equation}
where $\otimes$ is the Kronecker product and $\ma I_m\in \R^{m \times m}$ is the $m$-dimensional identity matrix.
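This vectorization identity is quickly verified numerically; a NumPy sanity check with hypothetical names (recall that $\vectorize$ stacks columns, i.e., Fortran order):

```python
import numpy as np

rng = np.random.default_rng(6)
m = 4
Lm = rng.standard_normal((m, m))    # stand-in for the Lipschitz matrix
h = rng.standard_normal(m)          # stand-in for x_i - x_j
vecL = Lm.flatten(order="F")        # vec(L): stack columns
lhs = Lm @ h                        # L (x_i - x_j)
rhs = np.kron(h, np.eye(m)) @ vecL  # (h^T kron I_m) vec(L)
```

Both sides agree to machine precision, confirming $\hma L \ve h = (\ve h^\trans \otimes \ma I_m) \vectorize \hma L$.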
Hence our constraints are quadratic in $\vectorize \hma L$:
\begin{equation}
[f(\ve x_i) - f(\ve x_j)]^2 \le
\|\hma L (\ve x_i - \ve x_j)\|_2^2 =
(\vectorize \hma L)^\trans ( (\ve x_i - \ve x_j) \otimes \ma I_m) ( (\ve x_i - \ve x_j)^{\!\trans} \! \otimes \ma I_m) (\vectorize \hma L).
\end{equation}
However this parameterization in terms of $\vectorize \hma L \in \R^{m^2}$ is larger than necessary,
since, without loss of generality, we can assume $\hma L$ is lower triangular.
Hence, we instead work with $\ve \ell \in \R^{m(m+1)/2}$ describing the lower triangular portion of $\hma L$
where the matrix $\ma T\in \R^{m^2\times(m(m+1)/2)}$ maps $\ve \ell$ to $\vectorize \hma L$: $\vectorize \hma L =\ma T \ve \ell$.
This leads to the constraint functions $c_{i,j}(\ve\ell)$
\begin{equation}
c_{i,j}(\ve \ell) :=
\frac12 \ve \ell^\trans \ma T^\trans ( (\ve x_i - \ve x_j) \otimes \ma I_m) ( (\ve x_i - \ve x_j)^{\!\trans} \! \otimes \ma I_m) \ma T \ve \ell
- \frac12 [f(\ve x_i) - f(\ve x_j)]^2.
\end{equation}
This interior matrix, $\ma Q_{i,j} := \ma T^\trans ( (\ve x_i - \ve x_j) \otimes \ma I_m) ( (\ve x_i - \ve x_j)^{\!\trans} \! \otimes \ma I_m) \ma T$,
is a block-diagonal matrix with blocks of increasing dimension:
\begin{multline}
\ma Q_{i,j} =
\begin{bmatrix}
y_1^2 & \\
& y_1^2 & y_1y_2 & &\\
& y_1 y_2 & y_2^2 & &\\
& & & \ddots \\
& & & & y_1^2 & y_1y_2 & \cdots & y_1 y_m \\
& & & & y_1 y_2 & y_2^2 & \cdots & y_2 y_m \\
& & & & \vdots & \vdots & \ddots & \vdots \\
& & & & y_1 y_m & y_2 y_m & \cdots & y_m^2
\end{bmatrix}
\quad \text{with} \quad \ve y = \ve x_i - \ve x_j.
\end{multline}
Then using these simplifications, our optimization problem~\cref{eq:Lmin1} can be restated as:
\begin{equation}\label{eq:Lmin2}
\begin{split}
\minimize_{\ve \ell \in \R^{m(m+1)/2}} & \ \frac12 \| \ve \ell\|_2^2 \\
\text{such that} & \ c_{i,j}(\ve\ell) = \frac12 \ve \ell^\trans \ma Q_{i,j} \ve \ell - \frac12 [f(\ve x_i) - f(\ve x_j)]^2 \ge 0.
\end{split}
\end{equation}
\input{alg_ip}
\Cref{alg:ip} gives our implementation of an interior point method for solving~\cref{eq:Lmin2}.
This implementation uses a line search method to maintain a feasible solution
(so that early termination still yields a Lipschitz matrix satisfying the constraints),
together with the merit function
\begin{equation}
\phi(\ve \ell, \ve s) := \|\ve \ell\|_2^2 - \mu\sum_{k=1}^{M(M-1)/2} \log s_k + \nu \left[\sum_{k=1}^{M(M-1)/2} (c_{i_k,j_k}(\ve \ell) - s_k)^2\right]^{1/2}
\end{equation}
to ensure sufficient decrease of the objective at each step.
Due to the large number of constraints, we eliminate many of the equations from the primal-dual system
yielding the condensed matrix $\ma A\in \R^{(m(m+1)/2) \times (m(m+1)/2)}$~\cite[eq.~(19.16)]{NW06}.
In practice, this matrix can become indefinite, leading us to add a regularization penalty in line~\ref{alg:ip:pen}.
We also use the Fiacco--McCormick approach to reduce the barrier penalty in line~\ref{alg:ip:mu3},
where we set this penalty parameter to zero if it becomes too small in line~\ref{alg:ip:mu2}.
In our implementation we are careful to avoid ever explicitly storing all the gradients and Hessians associated with the constraints,
instead accumulating these into $\ma A$ and $\ve b$ directly as suggested in lines \ref{alg:ip:A} and \ref{alg:ip:b}.
\subsection{Rank Deflation}
In cases where the optimal $\hma L$ will be low rank,
our solution to~\cref{eq:Lmin2} may not be low rank due to numerical artifacts and stopping criteria.
To force $\hma L$ to rank-$r$, we can pose the optimization problem only over the bottom $m-r$ rows of $\hma L$
by truncating the corresponding columns of $\ma T$ yielding a problem with fewer degrees of freedom,
namely, $m(m+1)/2 - (m-r)(m-r+1)/2$.
Here we do so in a recursive manner as outlined in \cref{alg:recurse},
using the SVD to decrease the rank of $\hma L$ in line~\ref{alg:recurse:rank}
and allowing $\hma L$ to grow slightly in line~\ref{alg:recurse:fat} to ensure the lower-rank $\hma L$
satisfies the constraints.
\input{alg_recurse}
\section{Estimating the Lipschitz Matrix From Data\label{sec:estimate}}
Ideally, given a function $f$ we could identify its Lipschitz matrix analytically.
Unfortunately, for many functions this is impractical.
Instead we seek to estimate the Lipschitz matrix using a
finite number of observations of the function's value $f(\hve x_j)$
and its gradient $\nabla f(\cve x_j)$.
In this section we show that the finite data analogue to~\cref{eq:Lopt}
can be solved using a semidefinite program for both Lipschitz and $\epsilon$-Lipschitz functions.
We then discuss the possibility of low-rank solutions
and using the determinant rather than the Frobenius norm to impose an ordering on Lipschitz matrices.
Finally, we provide a numerical example illustrating the computational cost
associated with finding the Lipschitz matrix and the convergence of these finite-data
Lipschitz matrix estimates.
\subsection{Semidefinite Program}
After limiting our knowledge of $f$ to a finite number of samples $\lbrace f(\hve x_j)\rbrace_{j=1}^M$,
the optimization problem for $\ma L$~\cref{eq:Lopt} becomes
\begin{equation}\label{eq:Lopt_finite}
\begin{split}
\minimize_{\ma L\in \R^{m\times m}} & \ \|\ma L\|_\fro^2 \\
\text{such that} & \
|f(\hve x_i) - f(\hve x_j)|^2 \le \|\ma L(\hve x_i - \hve x_j)\|_2^2
\quad \forall \ i,j \in \lbrace 1,\ldots,M\rbrace
\end{split}
\end{equation}
By squaring the constraint,
we note this is a non-convex quadratically constrained quadratic program.
If instead we work with the squared Lipschitz matrix $\ma H \coloneqq \ma L^\trans \ma L$,
we can identify the Lipschitz matrix by solving a convex program.
Working over the cone of positive semidefinite matrices $\mathbb{S}^m_+\subset \R^{m\times m}$
and noting $\| \ma L\|_\fro^2 = \Trace \ma H$, we solve
\begin{equation}\label{eq:Hopt_sdp}
\begin{split}
\minimize_{\ma H \in \mathbb{S}^{m}_+} & \ \Trace \ma H \\
\text{such that} & \
|f(\hve x_i) - f(\hve x_j)|^2 \le (\hve x_i - \hve x_j)^\trans \ma H(\hve x_i - \hve x_j)
\quad \forall \ i,j \in \lbrace 1,\ldots,M \rbrace
\end{split}
\end{equation}
and recover $\ma L$ as the unique symmetric positive semidefinite square root of $\ma H$.
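As a concrete sketch of these two ingredients (the sampled constraints and the square-root recovery of $\ma L$ from $\ma H$), the following NumPy check uses our own helper names, not the paper's CVXOPT implementation:

```python
import numpy as np

def pairwise_feasible(H, X, fX, tol=1e-9):
    """Check |f(x_i)-f(x_j)|^2 <= (x_i-x_j)^T H (x_i-x_j) for all sample pairs."""
    M = len(fX)
    return all((fX[i] - fX[j]) ** 2 <= (X[i] - X[j]) @ H @ (X[i] - X[j]) + tol
               for i in range(M) for j in range(i + 1, M))

def sqrt_psd(H):
    """Unique symmetric positive semidefinite square root of H."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

# For a linear function f(x) = a^T x, the choice H = a a^T is feasible
# (every pairwise constraint holds with equality).
rng = np.random.default_rng(0)
a = np.array([3.0, 0.0, 1.0])
X = rng.uniform(-1, 1, (20, 3))
fX = X @ a
H = np.outer(a, a)
assert pairwise_feasible(H, X, fX)
L = sqrt_psd(H)
assert np.allclose(L @ L, H) and np.allclose(L, L.T)
```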
If gradient information is available, we can include this data to estimate the Lipschitz matrix.
Recalling the Lipschitz constraint in~\cref{eq:Lopt}
for points $\ve x\in \set D$ and $\ve x+ \delta \ve p\in \set D$ where $\ve p\in \R^m$,
the Lipschitz matrix $\ma L$, and thus $\ma H$, must satisfy
\begin{align}
| f(\ve x+ \delta \ve p) - f(\ve x) |^2 \le \|\ma L(\ve x + \delta \ve p - \ve x)\|_2^2
= \delta^2 \ve p^\trans \ma L^\trans \ma L \ve p = \delta^2 \ve p^\trans \ma H \ve p.
\end{align}
Then dividing by $\delta^2$ and taking the limit as $\delta \to 0$,
we have
\begin{align}
(\ve p^\trans \nabla f(\ve x))^2 =
\ve p^\trans \nabla f(\ve x) \nabla f(\ve x)^\trans \ve p \le \ve p^\trans \ma H \ve p.
\end{align}
Since this must hold for any $\ve p \in \R^m$,
we conclude
\begin{equation}
\nabla f(\ve x) \nabla f(\ve x)^\trans \preceq \ma H.
\end{equation}
Thus, incorporating the gradient at $\lbrace \cve x_j \rbrace_{j=1}^N$
yields the semidefinite program:
\begin{equation}\label{eq:Hsdp}
\begin{split}
\minimize_{\ma H \in \mathbb{S}^{m}_+} & \ \Trace \ma H \\
\text{such that} & \ |f(\hve x_i) - f(\hve x_j)|^2 \le
(\hve x_i - \hve x_j)^\trans \ma H (\hve x_i - \hve x_j) \quad \forall \ 1\le i < j\le M, \\
& \ \nabla f(\cve x_k) \nabla f(\cve x_k)^\trans \preceq \ma H \qquad \forall \ k = 1,\ldots, N.
\end{split}
\end{equation}
In our numerical examples, we use CVXOPT~\cite{cvxopt} to solve~\cref{eq:Hopt_sdp} and~\cref{eq:Hsdp}.
Note that the Lipschitz class of functions~\cref{eq:matrix_lipschitz}
does not constrain second and higher derivatives;
hence these derivatives cannot be added to~\cref{eq:Hsdp} to constrain the Lipschitz matrix.
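Each gradient constraint in~\cref{eq:Hsdp} is a linear matrix inequality, which can be verified through the smallest eigenvalue of $\ma H - \nabla f(\cve x_k)\nabla f(\cve x_k)^\trans$. The following NumPy sketch (our naming, not the CVXOPT formulation) illustrates this for a ridge function:

```python
import numpy as np

def gradient_feasible(H, grads, tol=1e-9):
    """Check grad grad^T <= H in the positive semidefinite order for each gradient."""
    return all(np.linalg.eigvalsh(H - np.outer(g, g)).min() >= -tol for g in grads)

# Ridge function f(x) = sin(a^T x): every gradient is cos(a^T x) * a,
# so H = a a^T dominates each gradient outer product.
rng = np.random.default_rng(1)
a = np.array([2.0, 1.0])
X = rng.uniform(-1, 1, (50, 2))
grads = [np.cos(x @ a) * a for x in X]
grads.append(1.0 * a)        # gradient at a point where a^T x = 0
H = np.outer(a, a)
assert gradient_feasible(H, grads)
# shrinking H breaks feasibility
assert not gradient_feasible(0.5 * H, grads)
```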
\subsection{$\epsilon$-Lipschitz\label{sec:estimate:epsilon}}
We follow a similar approach to identify the $\epsilon$-Lipschitz matrix associated with a function $f$.
From the definition of this function class in~\cref{eq:Lmat_epsilon},
we note that for any two points $\ve x_1,\ve x_2 \in \set D$, $f$ must satisfy
\begin{equation}\label{eq:epsilon_Lip_constraint}
|f(\ve x_1) - f(\ve x_2)| \le \epsilon + \|\ma L(\ve x_1 - \ve x_2)\|_2.
\end{equation}
Then given samples $\lbrace \hve x_j \rbrace_{j=1}^M$
we find the squared Lipschitz matrix $\ma H$ as the solution of a semidefinite program
\begin{equation}\label{eq:Hopt_epsilon}
\begin{split}
\minimize_{\ma H \in \mathbb{S}_+^m} & \ \Trace \ma H \\
\text{such that} & \ \max\lbrace |f(\hve x_i) - f(\hve x_j)| - \epsilon, 0\rbrace^2 \le
(\hve x_i - \hve x_j)^\trans \ma H (\hve x_i - \hve x_j) \quad
\forall \ 1\le i < j \le M.
\end{split}
\end{equation}
Note that due to the inclusion of $\epsilon$ in~\cref{eq:epsilon_Lip_constraint},
the class of $\epsilon$-Lipschitz functions does not constrain derivatives.
Hence we cannot use derivative information in~\cref{eq:Hopt_epsilon}
unlike the standard Lipschitz case~\cref{eq:Hsdp}.
\subsection{Low Rank Solutions}
It is tempting to impose a rank constraint on
$\ma H$ to yield low-rank Lipschitz matrices satisfying the data.
Unfortunately this can yield uninformative estimates of low-rank structure.
For example, given only samples of $f$ at
points $\lbrace \hve x_j \rbrace_{j=1}^M$ in general position,
almost every vector $\ve a\in \R^m$ yields distinct projections
$\lbrace \ve a^\trans \hve x_j\rbrace_{j=1}^M$
and hence we can find a rank-1 Lipschitz matrix:
\begin{equation}
\ma L = \begin{bmatrix}
\ma 0 \\
\alpha \ve a^\trans
\end{bmatrix}\in \R^{m \times m},
\quad
\alpha = \max_{i,j} \frac{ |f(\hve x_i) - f(\hve x_j)|}{|\ve a^\trans \hve x_i - \ve a^\trans \hve x_j|}.
\end{equation}
Gradient information improves this situation;
if $\Span \lbrace \nabla f(\cve x_j)\rbrace_{j=1}^N$ is an $r$-dimensional subspace of $\R^m$,
then $\ma H$ and $\ma L$ must be at least rank $r$.
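A NumPy sketch of this degenerate construction (function names ours) confirms that a rank-1 matrix built from almost any direction $\ve a$ satisfies every sampled constraint:

```python
import numpy as np

def rank1_lipschitz(X, fX, a):
    """Uninformative rank-1 Lipschitz matrix: alpha * a^T in the last row, zeros above."""
    proj = X @ a
    M = len(fX)
    alpha = max(abs(fX[i] - fX[j]) / abs(proj[i] - proj[j])
                for i in range(M) for j in range(i + 1, M))
    L = np.zeros((X.shape[1], X.shape[1]))
    L[-1] = alpha * a
    return L

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (15, 3))          # samples in general position
fX = np.sin(X[:, 0]) + X[:, 1] ** 2
a = rng.standard_normal(3)               # almost any direction works
L = rank1_lipschitz(X, fX, a)
assert np.linalg.matrix_rank(L) == 1
assert all(abs(fX[i] - fX[j]) <= np.linalg.norm(L @ (X[i] - X[j])) + 1e-9
           for i in range(15) for j in range(15))
```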
\subsection{Using the Determinant}
Later, in \cref{sec:reducing} we show the computational complexity of several
tasks is proportional to the determinant of the Lipschitz matrix.
So, we might ask: why not use $\det \ma L$ in the optimization for the Lipschitz matrix~\cref{eq:Hsdp}?
There are two important reasons.
First, we lose convexity:
$\|\ma L\|_\fro^2 = \Trace \ma H$ is convex in $\ma H$, whereas $|\det \ma L|^2 = \det \ma H$ is not convex (only its $m$th root $(\det \ma H)^{1/m}$ is concave on $\mathbb{S}^m_+$).
Second, as illustrated in the previous subsection,
given only finite samples we can always find a low-rank Lipschitz matrix;
hence minimizing the determinant would always terminate with a zero objective value
but a likely spurious solution.
Even though we are not minimizing the determinant,
minimizing the Frobenius norm minimizes a bound on the determinant.
Using Jensen's inequality,
\begin{equation}
|\det \ma L|^2 = \det \ma H
\le \left( \frac{\Trace \ma H}{m} \right)^{\! m} = m^{-m} \|\ma L\|_\fro^{2m}.
\end{equation}
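This bound, which is the arithmetic--geometric mean inequality applied to the eigenvalues of $\ma H$, is easy to confirm numerically:

```python
import numpy as np

rng = np.random.default_rng(3)
m = 4
A = rng.standard_normal((m, m))
H = A.T @ A                          # a random squared Lipschitz matrix
det_H = np.linalg.det(H)             # = |det A|^2
bound = (np.trace(H) / m) ** m       # = m^{-m} * ||A||_F^{2m}
assert det_H <= bound + 1e-9
assert np.isclose(np.trace(H), np.linalg.norm(A, 'fro') ** 2)
```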
\subsection{Convergence}
\Cref{fig:convergence} provides a numerical example
illustrating the convergence of Lipschitz matrix estimates with increasing data
using the six-dimensional OTL Circuit test function~\cite{BS07a}.
Here we have used the Frobenius norm to measure the convergence of $\ma H$
to a `true' estimate $\hma H$ based on a large number of gradient samples.
As an alternative to the Frobenius norm,
we could have used the geodesic distance
on the space of positive definite matrices~\cite[eq.~(2.5)]{BS09}:
\begin{equation}
d_{\mathbb{S}_+^m}(\ma H_1, \ma H_2) = \sqrt{ \sum_{k} \log^2(\lambda_k(\ma H_1, \ma H_2))}
\end{equation}
where $\lambda_k(\ma H_1, \ma H_2)$ denotes the $k$th
generalized eigenvalue of the pencil $\lambda \ma H_1 - \ma H_2$.
However in this example the Lipschitz matrix was low-rank when using a small number of samples
and hence the distance was undefined.
\input{fig_convergence}
Two features are immediately apparent in this example.
As expected,
as the quantity of data increases
our estimates of the squared Lipschitz matrix converge to their nominal value.
Further, we note that the computation time scales roughly linearly in the number of gradients,
but quadratically in the number of samples.
Both cases result from growth in the number of constraints imposed.
Given the slower convergence of the sample-based estimate of the Lipschitz matrix
and its greater computational expense, we might ask:
is it better to construct finite-difference based gradients
than use the samples directly?
In this example with random samples, the answer is yes.
However, this only works because our evaluations of $f$ are noise-free
and hence we can accurately compute derivatives from finite differences.
Finally we note that computing the Lipschitz matrix is far more computationally expensive than
a Monte Carlo estimate of the average outer product of gradients $\ma C$~\cref{eq:C_to_H}
using the same number of gradients;
here constructing $\ma C$ always took less than $10^{-2}$ seconds even with $N=10^4$
gradient samples.
\section{Finding Extrema of Lipschitz Bounds}
In this section our goal is to find extrema over the set of values $f$
can take on given the input-output pairs and the Lipschitz matrix $\ma L$.
The end goal of this process will be to enable us to visualize
possible values of $f$ in shadow plots (\cref{sec:shadow})
and provide potential optimizers of $f$ for derivative-free optimization (\cref{sec:opt}).
Recall that given samples $\lbrace \ve x_i, f(\ve x_i)\rbrace_{i=1}^M$
and the Lipschitz matrix $\ma L$
the interval of possible values of $f$ at $\ve x$
given by the Lipschitz bounds is $\set F(\ve x)$ as defined in~\cref{eq:set_F}:
\begin{equation}
\set F(\ve x; \lbrace \ve x_i\rbrace_{i=1}^{M}, f, \ma L)
\!=\! \left[ \max_{i=1,\ldots,M} f(\ve x_i) \!-\! \|\ma L(\ve x - \ve x_i)\|_2,
\min_{i=1,\ldots,M} f(\ve x_i) \!+\! \|\ma L(\ve x - \ve x_i)\|_2 \! \right].
\end{equation}
Hence to find extrema of $\set F$ on some set $\set S\subseteq \set D$,
we need to solve a \emph{nonlinear minimax} optimization problem
\begin{align}
\label{eq:lip_lb}
\minimize_{\ve x \in \set S} \ \min \set F(\ve x; \lbrace \ve x_i \rbrace_{i=1}^M, f, \ma L)
&= \minimize_{\ve x\in \set S} \max_{i=1,\ldots,M} f(\ve x_i) - \|\ma L(\ve x - \ve x_i)\|_2; \\
\label{eq:lip_ub}
\maximize_{\ve x \in \set S} \ \max \set F(\ve x; \lbrace \ve x_i \rbrace_{i=1}^M, f, \ma L)
&= \maximize_{\ve x\in \set S} \min_{i=1,\ldots,M} f(\ve x_i) + \|\ma L(\ve x - \ve x_i)\|_2.
\end{align}
There are a variety of algorithms for solving this class of problems;
see, e.g.,~\cite{CC78,MO80,RN98,CL92}.
Here we adopt a simple approach following Osborne and Watson~\cite{OW69}
which computes a search direction via a linear program
and selects the next step via a line search.
The advantage of this approach is we can leverage existing,
high quality LP solvers to compute this step.
In the following subsection we first describe this algorithm
with a few modifications for our application.
Then as this optimization problem has many local minima as evidenced in~\cref{fig:gp},
we describe a heuristic for initializing this algorithm.
\subsection{Optimization}
\hrule
\subsection{Initialization}
\section{Introduction}
With the increasing sophistication of computer models,
practitioners in science and engineering are often confronted with the \emph{curse of dimensionality}:
the phenomenon that, for many tasks,
the computational burden grows exponentially in the number of parameters~\cite{TW98}.
A common approach to confront this difficulty is to employ \emph{dimension reduction}
to identify a few parameters that are sufficient to (approximately) explain the behavior of the model.
One important class of dimension reduction approaches are
\emph{subspace-based dimension reduction} techniques that
identify a low-dimensional \emph{active subspace} of the input parameters
that approximately captures the variation in the function;
e.g., given the function $f: \set D \subset \R^m \to \R$
the active subspace spanned by the columns of $\ma U\in \R^{m\times n}$
allows $f$ to be approximated by a \emph{ridge function} of fewer parameters:
\begin{equation}
f(\ve x) \approx g(\ma U^\trans \ve x); \qquad
g: \R^n \to \R, \quad \ma U^\trans \ma U = \ma I.
\end{equation}
There are a variety of approaches to identify this active subspace;
see, e.g., the average outer-product of gradients~\cite{Con15}
and ridge approximation~\cite{CEHW17, HC18}.
Here we introduce a new approach for parameter space dimension reduction
based on a generalization of the scalar Lipschitz constant.
This \emph{Lipschitz matrix} not only allows us to identify the active subspace,
but also provides improvements over the scalar Lipschitz constant:
tightening the bounds on the uncertainty in the function away from samples
and reducing the complexity of approximation, optimization, and integration.
Given a positive scalar Lipschitz constant $L \in \R_+$,
we can define the class of Lipschitz functions on a domain $\set D \subset \R^m$
with respect to the $2$-norm
\begin{equation}\label{eq:scalar_lipschitz}
\set L(\set D, L) \coloneqq
\lbrace f:\set D\to \R \ : \ |f(\ve x_1) - f(\ve x_2)| \le L \|\ve x_1 - \ve x_2\|_2,
\ \forall \ve x_1, \ve x_2 \in \set D\rbrace.
\end{equation}
To obtain the \emph{Lipschitz matrix},
we move the scalar Lipschitz constant $L$ inside the norm, promoting it to a matrix $\ma L \in \R^{m\times m}$.
This yields an analogous class of Lipschitz functions with the Lipschitz matrix $\ma L$:
\begin{equation}\label{eq:matrix_lipschitz}
\set L(\set D, \ma L) \coloneqq
\lbrace f:\set D\to \R \ : \ |f(\ve x_1) - f(\ve x_2)| \le \|\ma L(\ve x_1 - \ve x_2)\|_2,
\ \forall \ve x_1, \ve x_2 \in \set D\rbrace.
\end{equation}
We refer to both of these classes as \emph{Lipschitz functions}
as with the appropriate choice of scalar Lipschitz constant and Lipschitz matrix
these two classes are nested:
\begin{equation}
\set L(\set D, \ma L) \subset \set L(\set D, \|\ma L\|_2)
\quad \text{and} \quad
\set L(\set D, L) = \set L(\set D, L\ma I).
\end{equation}
The advantage of the Lipschitz matrix over the Lipschitz constant
is it provides additional information about the function.
As an example of how this information can be exploited, consider a one-dimensional ridge function $f_1$
with scalar Lipschitz constant $L_1$ and Lipschitz matrix $\ma L_1$:
\begin{equation}
f_1 \! : \! \set D_1 \! = \! [-1,1]^m \to \R , \ f_1(\ve x)\! \coloneqq g_1(\ve a^\trans \! \ve x), \
g_1 \in \set L(\R, 1),
\
L_1 \! = \! \| \ve a \|_2, \
\ma L_1 \!=\! \begin{bmatrix} \ma 0 \\ \ve a^\trans \end{bmatrix}\! .
\end{equation}
Suppose we seek to approximate $f_1$ with error at most $\epsilon$ over $\set D_1$.
Treating $f_1$ as a scalar Lipschitz function $f_1\in \set L(\set D_1, L_1)$
would require us to sample at $\order(\epsilon^{-m})$ points in $\set D_1$
to minimize the worst-case error (see, e.g.,~\cite[sec.~7]{Don00})---%
this exponentially increasing cost with dimension $m$ is the \emph{curse of dimensionality}.
However if we knew $f_1$ had Lipschitz matrix $\ma L_1$, i.e., $f_1 \in \set L(\set D_1, \ma L_1)$,
we could exploit the rank-1 nature of $\ma L_1$
and achieve the same worst case accuracy using only $\order(\epsilon^{-1})$ samples.
Even if the Lipschitz matrix is not low rank,
it can reduce the costs associated with approximation
as discussed in \cref{sec:reducing}.
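These claims about $f_1$ are easy to check numerically; here $g_1 = \tanh$ stands in for an arbitrary 1-Lipschitz profile (our choice, for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
m = 5
a = rng.standard_normal(m)
f1 = lambda x: np.tanh(a @ x)        # g_1 = tanh is 1-Lipschitz
L1 = np.zeros((m, m))
L1[-1] = a                           # rank-1 Lipschitz matrix
L_scalar = np.linalg.norm(a)         # scalar Lipschitz constant ||a||_2

X = rng.uniform(-1, 1, (30, m))
for i in range(30):
    for j in range(30):
        gap = abs(f1(X[i]) - f1(X[j]))
        # matrix bound: |f1(x) - f1(y)| <= ||L1 (x - y)||_2 = |a^T (x - y)|
        assert gap <= np.linalg.norm(L1 @ (X[i] - X[j])) + 1e-9
        # scalar bound: |f1(x) - f1(y)| <= ||a||_2 ||x - y||_2
        assert gap <= L_scalar * np.linalg.norm(X[i] - X[j]) + 1e-9
```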
One challenge when working with the Lipschitz matrix,
compared to the Lipschitz constant, is that there is no unique smallest Lipschitz matrix.
For the scalar Lipschitz constant,
the smallest Lipschitz constant is given by
\begin{equation}
\begin{split}
\minimize_{L \in \R_+} & \ L \\
\text{such that} & \ |f(\ve x_1) - f(\ve x_2)| \le L\| \ve x_1 - \ve x_2 \|_2
\quad \forall \ve x_1,\ve x_2 \in \set D.
\end{split}
\end{equation}
To pose an analogous optimization problem for the Lipschitz matrix,
a total ordering of matrices $\ma L\in \R^{m\times m}$ must be provided.
A partial ordering is provided by the
symmetric positive semidefinite
\emph{squared Lipschitz matrix} $\ma H \coloneqq \ma L^\trans \ma L$
appearing in the constraint for Lipschitz matrix functions~\cref{eq:matrix_lipschitz}
\begin{equation}
\|\ma L(\ve x_1 - \ve x_2)\|_2^2 = (\ve x_1 - \ve x_2)^\trans \ma L^\trans \ma L (\ve x_1 - \ve x_2)
= (\ve x_1- \ve x_2)^\trans \ma H (\ve x_1 - \ve x_2).
\end{equation}
Here we choose to introduce a total ordering compatible with this partial ordering of $\ma H$
via the Frobenius norm
and define the Lipschitz matrix $\ma L$ to be the solution of
\begin{equation}\label{eq:Lopt}
\begin{split}
\minimize_{\ma L \in \R^{m\times m}} & \ \|\ma L\|_\fro^2
\\
\text{such that} & \
|f(\ve x_1) - f(\ve x_2)| \le \|\ma L(\ve x_1 - \ve x_2)\|_2
\quad \forall \ve x_1, \ve x_2 \in \set D.
\end{split}
\end{equation}
In many cases we will not be able to solve this problem exactly,
but instead will approximate its solution given a finite number of
evaluations of $f$ and its gradient $\nabla f$.
In \cref{sec:estimate}, we show that the finite-data analog to~\cref{eq:Lopt}
can be solved as a semidefinite program in $\ma H$.
Unfortunately many functions that appear in computational science and engineering
which are ideally Lipschitz functions,
are not in practice due to \emph{computational noise}~\cite{MW11}---%
a phenomenon emerging from many factors, including convergence tolerances and mesh discretizations.
If we model the numerical function $f$ as the sum of
a true function $\widehat{f}$ and a small perturbation $\widetilde{f}$,
see, e.g.,~\cite[eq.~(1.3)]{MW11},
\begin{equation}
f(\ve x) = \widehat{f}(\ve x) + \widetilde{f}(\ve x)
\end{equation}
we can extend our notion of Lipschitz to functions like $f$ provided we can bound
the perturbation from noise $\widetilde{f}(\ve x)$.
If $|\widetilde{f}(\ve x)| \le \frac{\epsilon}{2}$ for all $\ve x\in \set D$,
then $f$ belongs to the class of $\epsilon$-Lipschitz functions
\begin{equation}\label{eq:Lmat_epsilon}
\set L_\epsilon(\set D, \ma L) \coloneqq
\lbrace g + \eta \ : \ g\in \set L(\set D, \ma L), \ \eta:\set D \to [-\epsilon/2, \epsilon/2]\rbrace.
\end{equation}
In \cref{sec:estimate:epsilon} we show how to extend our estimation of the Lipschitz matrix from data to
the $\epsilon$-Lipschitz case.
Equipped with the Lipschitz matrix for a function $f$,
we can use the knowledge that $f\in \set L(\set D, \ma L)$ to aid in several applications.
One way is to use the Lipschitz matrix to estimate the active subspace
for a subspace-based dimension reduction.
Analogously to the average outer product of gradients approach advocated by Constantine~\cite{Con15},
choosing the active subspace of $f$ to be the dominant eigenvectors of
the squared Lipschitz matrix $\ma H$ minimizes an error bound given in \cref{thm:error_bound}.
Note that in terms of gradients,
the squared Lipschitz matrix is an upper bound on the gradients in the
ordering of positive semidefinite matrices,
whereas the average outer product of gradients $\ma C$ is a mean with respect to a measure $\rho$:
\begin{equation}\label{eq:C_to_H}
\ma C \coloneqq \! \int_{\set D} \! \nabla f(\ve x) \nabla f(\ve x)^\trans \rho(\ve x) \D \ve x
\ \ \text{versus} \ \
\nabla f(\ve x) \nabla f(\ve x)^\trans \! \preceq \ma L^\trans \ma L = \ma H
\quad \forall \ve x \in \set D,
\end{equation}
where $\ma A \preceq \ma B$ denotes $\ma B - \ma A$ is positive semidefinite.
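For a ridge function this relationship between $\ma C$ and $\ma H$ can be verified directly; with the illustrative choice $f(\ve x) = \sin(\ve a^\trans \ve x)$, every gradient outer product, and hence the Monte Carlo average, is dominated by $\ma H = \ve a \ve a^\trans$:

```python
import numpy as np

rng = np.random.default_rng(6)
a = np.array([1.5, 0.5, 0.0])
X = rng.uniform(-1, 1, (5000, 3))        # samples from a uniform measure rho on D
grads = np.cos(X @ a)[:, None] * a       # gradients of f(x) = sin(a^T x)
C = grads.T @ grads / len(X)             # Monte Carlo average outer product
H = np.outer(a, a)                       # squared Lipschitz matrix
assert np.linalg.eigvalsh(H - C).min() >= -1e-9   # C <= H in the PSD order
assert np.trace(C) < np.trace(H)
```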
As illustrated in~\cref{sec:subspace} the Lipschitz matrix yields similar active subspaces
to the average outer product of gradients and has the advantage of not requiring gradient information.
Additionally, the $\epsilon$-Lipschitz matrix can avoid identifying an undesirable active subspace
in the presence of high-frequency, low amplitude terms in $f$.
The Lipschitz matrix can also be used to define \emph{uncertainty}---%
the range of possible values $f$ might take away from the samples where its value is known.
Unlike the uncertainty associated with the scalar Lipschitz constant,
the Lipschitz matrix can provide informative uncertainty intervals as shown in \cref{sec:uncertainty}.
This uncertainty estimate then motivates a space-filling \emph{design of experiments}
to minimize the Lipschitz matrix uncertainty.
As illustrated in \cref{sec:design},
using this sampling scheme further reduces
the uncertainty given the same number of samples.
As a final application of the Lipschitz matrix,
we consider the worst case \emph{information based complexity}
of approximation, integration, and optimization for Lipschitz-matrix functions.
In \cref{sec:reducing}
we show that the complexity of these tasks is proportional to
the determinant of the Lipschitz matrix.
As such, when the Lipschitz matrix has decaying singular values
the computational cost can be substantially reduced compared to results using the Lipschitz constant.
Moreover, when the Lipschitz matrix is rank-$r$
then $f$ is an $r$-dimensional ridge function $f(\ve x) = g(\ma U^\trans \ve x)$
with $\ma U\in \R^{m\times r}$
and computational complexity scales with $r$ instead of the intrinsic dimension of $\set D$.
Following the principles of reproducible research,
we provide code implementing the algorithms described in this paper
and scripts generating the data appearing in the figures and tables
available at {\tt \url{http://github.com/jeffrey-hokanson/PSDR/}}.
\section{Noisy Functions\label{sec:noise}}
For many complex simulations, such as those involving partial differential equation models,
although the true function $\widehat{f}$ may be Lipschitz continuous
its numerical approximation $f$ may not be.
This phenomenon is commonly called \emph{computational noise}~\cite{MW11}
and is the result of many factors including convergence tolerances and mesh discretizations.
A common model is to assume that this noise is the combination of the true function $\widehat{f}$
and some small perturbation $\widetilde{f}$; see, e.g.,~\cite[eq.~(1.3)]{MW11}:
\begin{equation}
f(\ve x) = \widehat{f}(\ve x) + \widetilde{f}(\ve x).
\end{equation}
Even though $f$ may no longer be Lipschitz,
we may still seek to identify low-dimensional structure using the Lipschitz matrix.
Here we introduce the class of \emph{$\epsilon$-Lipschitz functions}:
functions satisfying a Lipschitz-type bound with an additive slack of $\epsilon$, independent of the distance between points:
\begin{equation}
\set L_\epsilon(\set D, \ma L) :=
\lbrace
f:\set D\to \R :
| f(\ve x_1) - f(\ve x_2) | \le \epsilon + \|\ma L (\ve x_1 - \ve x_2)\|_2 \quad
\forall \ve x_1, \ve x_2 \in \set D
\rbrace.
\end{equation}
As before, by working with $\ma H = \ma L^\trans \ma L$
we can introduce a semidefinite program to identify $\ma L$ from data:
\begin{equation}
\begin{split}
\minimize_{\ma H \in \mathbb{S}_+^m} & \ \Trace \ma H \\
\text{such that} & \ \max\lbrace |f(\ve x_i) - f(\ve x_j)| - \epsilon, 0\rbrace^2 \le
(\ve x_i - \ve x_j)^\trans \ma H (\ve x_i - \ve x_j) \quad
\forall 1\le i < j \le M.
\end{split}
\end{equation}
In this case we cannot incorporate gradient information as $f$ is no longer differentiable.
However, in the absence of gradient information
and with sufficiently large $\epsilon$, $\ma H$ will be low rank.
\subsection{Toy Example}
\input{fig_roof}
As an example, consider the ``corrugated roof'' function~\cite[eq.~(26)]{CEHW17}
\begin{equation}\label{eq:roof}
f(\ve x) = 5 x_1 + \sin (10 \pi x_2) \quad \ve x \in [-1,1]^2.
\end{equation}
This function is linear in the first coordinate and highly oscillatory in the second coordinate.
In terms of computational noise,
we can consider this a model where the first term is the true function $\widehat{f}$
and the second term is the computational noise $\widetilde{f}$.
The challenge with this example is that our attempt to identify an active subspace
either via the Lipschitz matrix or the average outer product of gradients
will identify the second coordinate as the most important.
Namely, for this function the Lipschitz matrix is
\begin{equation}
\ma L = \begin{bmatrix} 5 & 0 \\ 0 & 10 \pi \end{bmatrix}
\qquad
\ma H = \begin{bmatrix} 25 & 0 \\ 0 & 100\pi^2 \end{bmatrix}
\end{equation}
and hence we have identified the second coordinate as the most significant.
However, examining the shadow plot in \cref{fig:roof} this is undesirable.
If instead we considered the $\epsilon$-Lipschitz matrix with $\epsilon=2$,
the variation in $f$ due to the sine-term is ignored, leaving the rank-1 Lipschitz matrix $\ma L_2$:
\begin{equation}
\ma L_2 = \begin{bmatrix} 5 & 0 \\ 0 & 0 \end{bmatrix}
\qquad
\ma H_2 = \begin{bmatrix} 25 & 0 \\ 0 & 0 \end{bmatrix}.
\end{equation}
Then taking the active subspace to be the span of the dominant eigenvector of $\ma H_2$
identifies the first coordinate as the most important,
which matches our intuition.
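The eigenvector computation behind this choice is immediate; for the corrugated roof, the two squared Lipschitz matrices above select opposite coordinates:

```python
import numpy as np

H  = np.diag([25.0, 100.0 * np.pi ** 2])   # standard Lipschitz: oscillation dominates
H2 = np.diag([25.0, 0.0])                  # eps-Lipschitz with eps = 2

def dominant_direction(M):
    """Eigenvector of the largest eigenvalue (eigh returns ascending order)."""
    w, V = np.linalg.eigh(M)
    return V[:, np.argmax(w)]

assert np.isclose(abs(dominant_direction(H)[1]), 1.0)    # picks x_2
assert np.isclose(abs(dominant_direction(H2)[0]), 1.0)   # picks x_1
```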
In this example we have exploited the fact that we knew the size of the noise contribution.
However, even with an inaccurate estimate we
\section{Using the Lipschitz Matrix in Optimization\label{sec:opt}}
A Lipschitz based approach is one technique for solving global optimization problems~\cite{MV17}.
Here we show that by invoking the Lipschitz matrix approach we can accelerate
the convergence of this algorithm.
\section{Information Based Complexity\label{sec:reducing}}
Whereas the last section sought samples minimizing uncertainty given a particular function,
now we ask a related question:
what is the minimum number of samples required to approximate \emph{any}
$f \in \set L(\set D, \ma L)$ to within $\epsilon$ throughout $\set D$?
This is a question of \emph{information based complexity}~\cite{TW98}:
a subfield that seeks to understand the fundamental complexity of tasks
such as approximation, integration, and optimization.
For standard scalar Lipschitz functions $f\in \set L(\set D, L)$
these results are well known; see, e.g.,~\cite[chap.~2]{TW98}.
In this section we generalize these complexity results for Lipschitz matrix functions.
Our approach will be to connect the computational complexity to the
$\epsilon$ \emph{internal covering number} illustrated in \cref{fig:cover}---%
given a domain $\set D$, the number of $\epsilon$ balls
centered at points $\hve x_j\in \set D$ covering the domain $\set D$:
\begin{equation}\label{eq:cover}
N_\epsilon(\set D) \coloneqq \! \! \! \min_{M, \lbrace \hve x_j\rbrace_{j=1}^M \subset \set D} \! \! \! M
\ \ \text{such that} \ \
\set D \! \subseteq \! \bigcup_{j=1}^M \set B_\epsilon(\hve x_j),
\ \set B_\epsilon(\hve x_j) \!=\! \lbrace \ve x : \| \ve x - \hve x_j\|_2 \le\! \epsilon\rbrace.
\end{equation}
Specifically, we will show that the worst-case computational complexity
to obtain $\order(\epsilon)$ accuracy for approximation, integration, and optimization
requires $N_\epsilon(\ma L \set D)$ evaluations of $f\in \set L(\set D, \ma L)$
where $\ma L\set D = \lbrace \ma L \ve x : \ve x \in \set D\rbrace$.
\input{fig_cover}
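A greedy construction (a standard heuristic, not from the paper) upper-bounds $N_\epsilon$ on a discretized domain and illustrates why covering $\ma L\set D$ is cheaper than covering the isotropically scaled $\|\ma L\|_2 \set D$:

```python
import numpy as np

def greedy_cover(Z, eps):
    """Greedily select centers until every point of Z lies in some eps-ball."""
    centers = []
    uncovered = np.ones(len(Z), dtype=bool)
    while uncovered.any():
        c = Z[np.argmax(uncovered)]        # first uncovered point
        centers.append(c)
        uncovered &= np.linalg.norm(Z - c, axis=1) > eps
    return np.array(centers)

# discretize D = [-1,1]^2 and map it through L with decaying singular values
L = np.diag([4.0, 0.25])
g = np.linspace(-1, 1, 81)
D = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
C_mat = greedy_cover(D @ L.T, eps=0.5)                    # cover L * D
C_iso = greedy_cover(D * np.linalg.norm(L, 2), eps=0.5)   # cover ||L||_2 * D
assert len(C_mat) < len(C_iso)        # anisotropy reduces the cover size
# verify that C_mat actually covers L * D
dist = np.linalg.norm((D @ L.T)[:, None, :] - C_mat[None, :, :], axis=2)
assert dist.min(axis=1).max() <= 0.5 + 1e-12
```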
The reason that the covering number plays a critical role in each of these tasks
is that $N_\epsilon(\ma L\set D)$ is the minimum number of samples required to
construct a minimax distance design~\cref{eq:designL} where every point in the domain is
at most a distance $\epsilon$ away from a sample $\hve x_j$.
To see this, let $\hve z_j$ be points in $\ma L\set D$
forming an $\epsilon$-covering:
\begin{equation}\label{eq:cover_z}
\ma L \set D \subseteq \bigcup_{j=1}^{N_\epsilon(\ma L\set D)} \set B_\epsilon(\hve z_j),
\qquad \hve z_j \in \ma L \set D.
\end{equation}
We can then find points $\hve x_j \in \set D$ such that
$\ma L\hve x_j = \hve z_j$ (these are non-unique if $\ma L$ is low-rank).
After changing coordinates the maximum distance is at most $\epsilon$:
\begin{equation}\label{eq:cover_eq}
\max_{\ve x\in \set D}\min_{j=1,\ldots, N_\epsilon(\ma L\set D)}
\|\ma L(\ve x- \hve x_j)\|_2
= \max_{\ve z \in \ma L\set D} \min_{j=1,\ldots, N_\epsilon(\ma L\set D)}
\|\ve z - \hve z_j\|_2 \le \epsilon.
\end{equation}
This connection to the covering number exposes
a geometric origin for the curse of dimensionality.
If $\set D\subset \R^m$ is convex and contains at least one $\epsilon$-ball
then the covering number grows exponentially in the dimension $m$:
\begin{align}\label{eq:cover_bound}
\left(\frac{1}{\epsilon}\right)^{m} \frac{\vol(\set D)}{\vol(\set B_1)} \le
N_\epsilon(\set D)
\le
\left(\frac{3}{\epsilon}\right)^{m} \frac{\vol(\set D)}{\vol(\set B_1)},
\end{align}
where $\vol(\set D)$ refers to the Lebesgue measure of the set in $\R^m$~\cite[Thm~14.2]{Wu17}.
Note that if $\set D$ has an intrinsic dimension $r$ less than $m$,
then there is an $r$-dimensional domain $\tset D\subset \R^r$
such that each point in $\tset D$ is associated with a unique point in $\set D$.
Hence to apply these bounds on the covering number~\cref{eq:cover_bound}
we must work on the transformed domain $\tset D$ rather than $\set D$.
\input{fig_covering}
The bounds on the covering number in~\cref{eq:cover_bound}
suggest two ways that the Lipschitz matrix can reduce complexity
that the scalar Lipschitz constant cannot.
The more significant of these occurs when the Lipschitz matrix is low-rank.
If $\ma L$ is rank-$r$ and $\set D$ has intrinsic dimension $m$
then $\ma L\set D$ has intrinsic dimension $r$ whose coordinates
are given by the leading $r$ right singular vectors of $\ma L$.
In this case the growth of the covering number no longer depends
exponentially on parameter space dimension $m$, but on the rank of $\ma L$.
Even when $\ma L$ is not exactly low rank,
this same effect can slow the asymptotic growth in the covering number.
When $\epsilon$ is sufficiently large, as illustrated in \cref{fig:cover},
there are dimensions of $\ma L\set D$ that require no more than one $\epsilon$-ball
to cover.
This temporarily slows the growth of the covering number as shown in \cref{fig:covering}.
The second way the Lipschitz matrix reduces complexity emerges
in the constants of the bounds~\cref{eq:cover_bound} when $\ma L$ is full rank.
Both the lower and upper bounds depend on the volume of the transformed domain:
$\vol(L \set D)$ in the scalar Lipschitz case and $\vol(\ma L\set D)$ in the Lipschitz matrix case.
These two volumes are proportional to $L^m$ and to the determinant of $\ma L$, respectively:
\begin{equation}
\vol(\ma L\set D) = | \det(\ma L) | \cdot \vol(\set D),
\qquad
\vol(L\set D) = L^m \cdot \vol(\set D).
\end{equation}
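A toy matrix with decaying singular values shows the scale of this gap (the numbers are illustrative, not from \cref{tab:scaling}):

```python
import numpy as np

L = np.diag([10.0, 1.0, 0.1, 0.01])      # decaying singular values
m = L.shape[0]
vol_factor_matrix = abs(np.linalg.det(L))        # |det L| = 0.01
vol_factor_scalar = np.linalg.norm(L, 2) ** m    # ||L||_2^m = 10^4
# the matrix-based volume factor is about six orders of magnitude smaller
assert vol_factor_scalar / vol_factor_matrix > 1e5
```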
As illustrated in \cref{tab:scaling},
the volume associated with the Lipschitz matrix is often orders of magnitude smaller
than that associated with the scalar Lipschitz constant.
This implies a substantial reduction in the number of function evaluations
required to reach a specified accuracy
when using the Lipschitz matrix as compared to the Lipschitz constant.
\input{tab_scaling}
In the remainder of this section we connect the complexity
of approximation, integration, and optimization of Lipschitz functions to the covering number.
\subsection{Approximation Complexity}
A key tool in establishing complexity results for Lipschitz functions
is the \emph{central approximation} $\overline{f}$:
the mean of the lower and upper bounds of the uncertainty interval
$\set U(\ve x; \set L(\set D, \ma L), \lbrace \hve x_j, y_j \rbrace_{j=1}^M )$
given in~\cref{eq:interval},
\begin{align}
\label{eq:interval2}
\set U(\ve x)
&\phantom{:}= \left[
\max_{j=1,\ldots, M} y_j - \|\ma L(\ve x - \hve x_j)\|_2 \ , \
\min_{j=1,\ldots, M} y_j + \|\ma L(\ve x - \hve x_j)\|_2
\right] \\
%
\label{eq:central_approx}
\overline{f}(\ve x) & \coloneqq
\frac12 \left(\max_{j=1,\ldots, M} \! y_j - \|\ma L(\ve x - \hve x_j)\|_2 \right)
\!+\!
\frac12 \left(\min_{j=1,\ldots, M} \! y_j + \|\ma L(\ve x - \hve x_j)\|_2 \right).
\end{align}
The central approximation is the best worst-case approximation in the sup-norm.
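A NumPy sketch of the central approximation (helper names ours; the test function $f(\ve x) = \sin(\ve a^\trans\ve x)$ with $\ma L = \ve a\ve a^\trans/\|\ve a\|_2$ is an illustrative member of $\set L(\set D,\ma L)$) confirms the error never exceeds the interval half-width:

```python
import numpy as np

def central_approx(x, Xs, fXs, L):
    """Midpoint of the uncertainty interval and its half-width (worst-case error)."""
    d = np.array([np.linalg.norm(L @ (x - xi)) for xi in Xs])
    lo, hi = (fXs - d).max(), (fXs + d).min()
    return 0.5 * (lo + hi), 0.5 * (hi - lo)

rng = np.random.default_rng(7)
a = np.array([1.0, 2.0])
L = np.outer(a, a) / np.linalg.norm(a)   # then ||L d||_2 = |a^T d|
f = lambda x: np.sin(a @ x)              # 1-Lipschitz profile along a
Xs = rng.uniform(-1, 1, (25, 2))
fXs = np.array([f(x) for x in Xs])
for x in rng.uniform(-1, 1, (200, 2)):
    val, err = central_approx(x, Xs, fXs, L)
    assert abs(f(x) - val) <= err + 1e-9
```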
\begin{lemma}\label{lem:central}
Given a Lipschitz matrix $\ma L$
and data $\lbrace \hve x_j, y_j\rbrace_{j=1}^M$ the central approximation $\overline{f}$~\cref{eq:central_approx}
attains a worst-case error no larger than that of any other approximation $\widetilde{f}\in \set L(\set D, \ma L)$
with $\widetilde{f}(\hve x_j) = y_j$:
\begin{equation}
\sup_{\substack{f \in \set L(\set D, \ma L) \\ f(\hve x_j) = y_j \ j=1,\ldots, M}}
|f(\ve x) - \overline{f}(\ve x)|
\le
\sup_{\substack{f \in \set L(\set D, \ma L) \\ f(\hve x_j) = y_j \ j=1,\ldots, M}}
|f(\ve x) - \widetilde{f}(\ve x)|.
\end{equation}
\end{lemma}
This result allows us to show the worst case complexity of approximation
of Lipschitz functions.
\begin{theorem}\label{thm:approx}
The minimum number of samples
required to construct an approximation $\widetilde{f}\in \set L(\set D, \ma L)$
of any Lipschitz function $f\in \set L(\set D, \ma L)$
with maximum pointwise error $\epsilon$,
is the $\epsilon$ internal covering number of $\ma L\set D$, $N_\epsilon(\ma L\set D)$.
\end{theorem}
\begin{proof}
Consider the case where all observations of $f$ are zero, i.e., $f(\hve x_j) = 0$ for $j=1,\ldots, M$.
From \cref{lem:central} the best approximation $\widetilde{f}$ is the central approximation $\overline{f}$
whose worst-case error is, from~\cref{eq:interval2},
\begin{equation}
\sup_{\substack{f \in \set L(\set D, \ma L) \\ f(\hve x_j) = y_j \ j=1,\ldots, M}}
| f(\ve x) - \overline{f}(\ve x) | = \min_{j=1,\ldots, M} \|\ma L(\ve x - \hve x_j)\|_2.
\end{equation}
The smallest number of samples
such that this error is less than $\epsilon$ is $N_\epsilon(\ma L\set D)$ by \cref{eq:cover_eq}.
Any other data $f(\hve x_j)$ requires as many or fewer samples to obtain the same accuracy.
\end{proof}
\subsection{Optimization Complexity}
The complexity of optimization follows a similar argument as that for approximation.
\begin{theorem}\label{thm:opt}
The minimum number of samples to find the maximum of any Lipschitz function
$f\in \set L(\set D, \ma L)$ to within $\epsilon$ is $N_\epsilon(\ma L\set D)$.
\end{theorem}
\begin{proof}
First we provide an upper bound on the minimum number of samples.
By \cref{thm:approx}, the central approximation constructed using $N_\epsilon(\ma L\set D)$ samples
is within $\epsilon$ of the Lipschitz function $f$; i.e., $|f(\ve x) - \overline{f}(\ve x)|\le \epsilon$
for all $\ve x\in \set D$.
Hence at most $N_\epsilon(\ma L\set D)$ samples are required to find the optimum
of any $f\in \set L(\set D,\ma L)$ within $\epsilon$.
To show this bound is attained,
consider the function $f \in \set L(\set D, \ma L)$
where $f(\ve x)\ge 0$ for all $\ve x\in \set D$,
$f(\ve x^\star) = 2\epsilon$,
and $f$ has minimum integrand.
Suppose we choose samples $\hve x_j$ corresponding to an $\epsilon$ covering of $\ma L\set D$
via~\cref{eq:cover_eq};
the location of $\ve x^\star$ can be chosen adversarially such that only
the last sample has a value greater than $\epsilon$, i.e., $f(\hve x_j) < \epsilon$
for $j=1,\ldots, M-1$ and $f(\hve x_M) \ge \epsilon$ where $M = N_\epsilon(\ma L\set D)$.
Hence $N_\epsilon(\ma L\set D)$ samples are required to find the optimum
of this function, showing the upper bound is attained.
\end{proof}
\subsection{Integration Complexity}
The quadrature rule with minimum worst case error is built using the same tools:
the central approximation based on samples from a minimal $\epsilon$-covering of $\ma L\set D$.
This proof parallels the one-dimensional Lipschitz case
presented by Traub and Werschulz~\cite[chap.~2]{TW98}.
\begin{theorem}
Let $\phi^\star$ be the quadrature rule for functions $f\in \set L(\set D,\ma L)$
resulting from integrating the central approximation
$\overline{f}$ constructed from $N_\epsilon(\ma L \set D)$ samples $\hve x_j^\star$ solving~\cref{eq:designL}
\begin{equation}
\int_\set{D} f(\ve x) \D \ve x \approx
\phi^\star(f) \coloneqq \int_{\set D} \overline{f}(\ve x) \D \ve x
\text{ where } \overline{f}\in \set L(\set D, \ma L) \text{ and }
\overline{f}(\hve x_j^\star) = f(\hve x_j^\star)
\end{equation}
and let $\phi$ be any other quadrature rule based on integrating an approximation $\widetilde{f}$
interpolating samples $\lbrace \hve x_j \rbrace_{j=1}^{N_\epsilon(\ma L\set D)}$
\begin{equation}
\int_{\set D} f(\ve x) \D \ve x \approx \phi(f) \coloneqq
\int_\set D \widetilde{f}(\ve x) \D \ve x
\text{ where } \widetilde{f}\in \set L(\set D, \ma L) \text{ and }
\widetilde{f}(\hve x_j) = f(\hve x_j).
\end{equation}
Then
\begin{equation}
\sup_{f\in \set L(\set D, \ma L)}
\left| \phi^\star (f)
- \int_{\set D} f(\ve x) \D \ve x
\right|
\le
\sup_{f\in \set L(\set D, \ma L)}
\left| \phi(f) - \int_{\set D} f(\ve x) \D \ve x
\right|
\end{equation}
and the error of $\phi^\star$ is bounded above by $\epsilon \cdot \vol(\set D)$.
\end{theorem}
\begin{proof}
First note that, given a set of points $\lbrace \hve x_j\rbrace_{j=1}^{N_\epsilon(\ma L\set D)}$,
the central approximation has the smallest worst-case error by \cref{lem:central};
since $\overline{f}, \widetilde{f}\in \set L(\set D, \ma L)$ are Lipschitz continuous,
the integral of the central approximation has smaller worst-case error than that of any other approximation
interpolating the data.
Then, using points $\hve x_j^\star$ corresponding to the optimal covering of $\ma L\set D$
via~\cref{eq:cover_eq}, the maximum pointwise error is $\epsilon$.
Integrating this over the domain yields the upper bound
\begin{equation}
\sup_{f\in \set L(\set D, \ma L)}
\left| \phi^\star (f)
- \int_{\set D} f(\ve x) \D \ve x
\right|
\le \epsilon \cdot \vol(\set D).
\end{equation}
\end{proof}
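The quadrature $\phi^\star$ can be sketched numerically by integrating the central approximation; here the integral over $\set D$ is replaced by an average over a point grid (a simplification of ours; the theorem integrates $\overline{f}$ exactly). For a constant function the central approximation is exact, so the rule recovers the integral exactly, which the example below checks.

```python
import numpy as np

def quadrature_central(f, X_hat, L, D_pts, vol=1.0):
    # phi(f): integrate the central approximation built from samples f(xhat_j),
    # with the integral over D approximated by an average over points D_pts.
    y = np.array([f(xh) for xh in X_hat])
    vals = []
    for x in D_pts:
        d = np.linalg.norm((x - X_hat) @ L.T, axis=1)
        vals.append(0.5 * (np.min(y + d) + np.max(y - d)))  # central value at x
    return vol * np.mean(vals)

# Sanity check on D = [0,1]^2 (vol = 1) with a constant integrand.
X_hat = np.array([[0.25, 0.25], [0.75, 0.75]])
L = np.eye(2)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 11),
                            np.linspace(0, 1, 11)), axis=-1).reshape(-1, 2)
phi = quadrature_central(lambda x: 3.0, X_hat, L, grid)
```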
\section{Lipschitz-Based Design of Computer Experiments\label{sec:sample}}
There are a wide variety of approaches for picking inputs to a computer simulation,
a process that goes under the name the \emph{design of computer experiments}~\cite{SWN03}.
Here we ask, how might we use our Lipschitz based approach for this task?
An intuitive choice for a Lipschitz-based sampling scheme might be to greedily sample
points in the domain where the gap between the Lipschitz bounds is greatest.
Assuming we have estimated $\hma L_k$ from data $\lbrace \hve x_i, f(\hve x_i)\rbrace_{i=1}^k$,
this would choose the next sample $\hve x_{k+1}$ to solve
\begin{align}
\label{eq:greedy_F}
&\max_{\ve x \in \set D} |\set F(\ve x; \lbrace \hve x_i\rbrace_{i=1}^{k}, f, \hma L_{k})| \\
\label{eq:greedy_F2}
=& \max_{\ve x\in \set D} \left[
\left(\min_{i=1,\ldots,k} \! f(\hve x_i) \!+\! \|\hma L_k(\ve x - \hve x_i)\|_2\! \right)
\!-\! \left(\max_{i=1,\ldots,k} \! f(\hve x_i) \!-\! \|\hma L_k(\ve x - \hve x_i)\|_2 \! \right)
\right].
\end{align}
Although intuitively appealing, this approach is fundamentally flawed.
As illustrated in \cref{fig:bad_greedy} using a toy example from Regier and Stark~\cite[Fig.~1]{RS15},
if the initial Lipschitz constant is underestimated, large regions of the domain will never be sampled.
This suggests we employ an alternative strategy.
If we assume for the sake of selecting a new sample that $f(\ve x) = 0$ in~\cref{eq:greedy_F2},
then this reduces to minimizing the maximum value of $\|\ma L(\ve x - \ve x_i)\|_2$:
\begin{equation}\label{eq:minimax}
\minimize_{\lbrace \ve x_i\rbrace_{i=1}^M} \max_{\ve x\in \set D} \min_{i=1,\ldots, M} \|\ma L (\ve x - \ve x_i)\|_2.
\end{equation}
In computer experiments this is called a \emph{minimax distance design}~\cite[Sec.~5.2]{SWN03}
and can be formulated with an arbitrary metric $d$.
However, this is a challenging nested optimization problem,
so rather than solving~\cref{eq:minimax}, we propose constructing a \emph{maximin distance design}:
\begin{equation}\label{eq:maximin}
\maximize_{\lbrace \ve x_i\rbrace_{i=1}^M} \min_{\substack{i,j \in 1,\ldots, M\\ i\ne j}} \|\ma L(\ve x_i - \ve x_j)\|_2.
\end{equation}
Unlike a fixed metric $d$, our Lipschitz matrix distance metric can change as we obtain more samples of $f$.
Hence, rather than constructing a single maximin distance design for a single $\ma L$,
we construct a sequential maximin distance design:
\begin{equation}\label{eq:seq_maximin}
\ve x_{k+1} = \argmax_{\ve x \in \set D} \min_{i=1,\ldots, k} \|\hma L_{k}(\ve x - \ve x_i)\|_2.
\end{equation}
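A minimal sketch of one greedy step of~\cref{eq:seq_maximin}, with the maximization over $\set D$ replaced by a maximization over a finite candidate set (the enumeration of Voronoi vertices and boundary points described below would supply these candidates in practice; the function name is ours):

```python
import numpy as np

def next_sample(X, L_hat, candidates):
    # Greedy sequential maximin step: among the candidates, pick the point
    # farthest (in the L_hat-weighted 2-norm) from its nearest existing sample.
    d = np.array([np.min(np.linalg.norm((c - X) @ L_hat.T, axis=1))
                  for c in candidates])
    return candidates[np.argmax(d)]

# With one sample at the origin and an isotropic metric, the far corner wins.
X = np.array([[0.0, 0.0]])
candidates = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_next = next_sample(X, np.eye(2), candidates)
```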
\input{fig_bad_greedy}
The main advantage of constructing a sequential maximin design~\cref{eq:seq_maximin}
is that we can use powerful tools from computational geometry to simplify this optimization problem.
Consider the mapping of $\ve x_i$ by $\hma L_k$: $\ve y_i = \hma L_k \ve x_i$.
Any solution to~\cref{eq:seq_maximin} that is in the interior of $\set D$ must be a Voronoi vertex of
$\lbrace \ve y_i \rbrace_{i=1}^M$---a point equidistant from $m+1$ points $\ve y_i$.
When these points $\ve y_i$ live on a low-dimensional subspace,
such as when $\hma L_k$ is low rank, these Voronoi vertices can be enumerated inexpensively;
otherwise, in high dimensions, the Voronoi vertices can be sampled efficiently~\cite{LC05}.
Then by constructing an additional set of points on the boundary of $\set D$,
we can approximately solve~\cref{eq:seq_maximin} by a simple maximum over a finite set of candidates
as described in \cref{alg:sample}.
The largest circle problem, equivalent to finding this greedy point,
is well known to be solvable using the Voronoi diagram~\cite[Thm.~10]{SH75}.
\input{alg_sample}
\input{fig_sample_fill}
\input{fig_sample}
\section{Dimension Reduction\label{sec:subspace}}
Broadly speaking, parameter space dimension reduction consists of finding a low-dimensional
set of coordinates resulting from a map $\ve h$ where
\begin{equation}
f(\ve x) \approx g(\ve h(\ve x)),
\qquad
g:\R^n \to \R,
\quad
\ve h: \set D \to \R^n.
\end{equation}
There are three main classes of parameter space dimension reduction,
distinguished by their class of $\ve h$.
\emph{Coordinate-based} dimension reduction identifies a set of
active parameters denoted by the index set $\set I$
along which $f$ varies and chooses $\ve h$ to select these parameters
\begin{equation}
[\ve h(\ve x)]_i = [\ve x]_{\set I[i]}.
\end{equation}
There are a variety of ways to identify these active variables;
see, e.g.,~\cite{WLS15} for a review.
\emph{Subspace-based} dimension reduction
identifies an \emph{active subspace} spanned by the columns of $\ma U \in \R^{m \times n}$
along which $f$ varies and chooses $\ve h$ to be a linear function
\begin{equation}
\ve h(\ve x) = \ma U^\trans \ve x.
\end{equation}
As with coordinate-based approaches,
there are a variety of methods for identifying the active subspace
such as the average outer product of gradients~\cite{Con15},
polynomial ridge approximation~\cite{HC18}, and many others~\cite{Li18}.
\emph{Nonlinear} dimension reduction permits $\ve h$ to be any nonlinear function
and includes a variety of approaches; see, e.g.,~\cite{LV07}.
Note these classes are nested:
coordinate-based approaches are a subset of subspace-based approaches
where $\ma U$ is restricted to be columns of the identity matrix;
subspace-based approaches are a subset of nonlinear approaches
where $\ve h$ is restricted to be linear.
In this section we show that the Lipschitz matrix can be used
to identify an active subspace via the dominant eigenvectors of the squared Lipschitz matrix.
Much like the average outer product of gradients,
we can motivate our choice of subspace via an approximation error bound
given in \cref{thm:error_bound} involving the eigenvalues of $\ma H$.
We show in \cref{sec:subspace:decay} that rapid decay of eigenvalues
indicates low-dimensional structure and
show in \cref{sec:subspace:one} that projecting onto the dominant eigenvectors of $\ma H$
yields similar subspaces to existing methods
when applied to a common test problem.
However, there are examples where existing approaches
identify an active subspace that is not useful.
Although the standard Lipschitz approach fails in a similar way,
we show that $\epsilon$-Lipschitz can avoid this pitfall
in \cref{sec:subspace:epsilon}.
\subsection{Error Bound\label{sec:subspace:bound}}
When using the Lipschitz matrix in the context of subspace-based dimension reduction,
our goal will be to find a ridge approximation $\widetilde{f}$ of $f$ where
\begin{equation}
f(\ve x) \approx \widetilde{f}(\ve x) \coloneqq g(\ma U^\trans \ve x) , \qquad \ma U^\trans \ma U = \ma I,
\end{equation}
where $g$ is called the \emph{ridge profile}.
Note that for vectors $\ve w$ in the nullspace of $\ma U^\trans$,
the ridge function $\widetilde{f}$ has the same value for $\ve x$ and $\ve x + \ve w$:
\begin{equation}
|\widetilde{f}(\ve x + \ve w) - \widetilde{f}(\ve x)|
= | g(\ma U^\trans(\ve x+\ve w)) - g(\ma U^\trans \ve x)|
= | g(\ma U^\trans \ve x) - g(\ma U^\trans \ve x)|
= 0.
\end{equation}
Hence the Lipschitz matrix of $\widetilde{f}$ has the same nullspace as $\ma U^\trans$.
Thus when seeking a ridge approximation of the Lipschitz function $f\in \set L(\set D, \ma L)$
with an active subspace spanned by the columns of $\ma U$,
we will construct our approximation from
$\set L(\set D, \ma L \ma U\ma U^\trans)$.
The following theorem provides a bound on
the accuracy of a ridge approximation.
\begin{theorem}\label{thm:error_bound}
Suppose $f\in \set L(\set D, \ma L)$
and $\widetilde{f} \in \set L(\set D, \ma L\ma U\ma U^\trans)$
where $\ma U\in \R^{m\times n}$ and $\ma U^\trans \ma U = \ma I$.
If $|f(\hve x_j) - \widetilde{f}(\hve x_j)|\le \epsilon$ and
$ \max_{\ve x\in \set D} \min_{j} \| \ma L \ma U \ma U^\trans (\hve x_j - \ve x)\|_2 \le \delta $
then
\begin{equation}
\max_{\ve x \in \set D} | f(\ve x) - \widetilde{f}(\ve x)| \le \epsilon +
2\delta + \sigma_{\max}(\ma L(\ma I - \ma U\ma U^\trans)) \cdot \diam(\set D)
\end{equation}
where $\sigma_{\max}(\ma A)$ denotes the largest singular value of $\ma A$ and
$\diam(\set D)$ denotes the diameter of the set $\set D$.
\end{theorem}
\begin{proof}
Suppose $\ve x \in \set D$ is fixed
and let $j = \argmin_k\|\ma L \ma U \ma U^\trans (\hve x_k - \ve x)\|_2$.
Then as $|f(\hve x_j) - \widetilde{f}(\hve x_j)|\le \epsilon$ we have
\begin{align}
|f(\ve x) \!-\! \widetilde{f}(\ve x)| &=
| f(\ve x) - \widetilde{f}(\ve x)
+ [f(\hve x_j) - \widetilde{f}(\hve x_j)]
- [f(\hve x_j) - \widetilde{f}(\hve x_j)]
| \\
&\le |f(\ve x) - f(\hve x_j)| + | \widetilde{f}(\ve x) - \widetilde{f}(\hve x_j)| + \epsilon.\\
\intertext{After invoking each function's Lipschitz matrix}
&\le \| \ma L(\ve x - \hve x_j)\|_2 + \| \ma L\ma U\ma U^\trans (\ve x - \hve x_j)\|_2 + \epsilon\\
&\le \|\ma L( \ma U\ma U^\trans \! + \ma I - \ma U\ma U^\trans)(\ve x - \hve x_j)\|_2
+ \|\ma L(\ma U\ma U^\trans)(\ve x - \hve x_j)\|_2 +\epsilon,\\
\intertext{and using the triangle inequality in the first term,}
&\le 2\|\ma L(\ma U\ma U^\trans)(\ve x - \hve x_j)\|_2
+ \|\ma L(\ma I - \ma U\ma U^\trans)(\ve x - \hve x_j)\|_2 + \epsilon \\
&\le \epsilon + 2\delta + \sigma_{\max}(\ma L(\ma I - \ma U\ma U^\trans))\cdot\diam(\set D).
\end{align}
\end{proof}
Hence by choosing $\ma U$ to be the right singular vectors of $\ma L$ associated with the largest singular values,
we can minimize this bound.
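This subspace choice amounts to a truncated SVD of the Lipschitz matrix; a brief NumPy sketch (function name ours) is:

```python
import numpy as np

def active_subspace(L, n):
    # Dominant right singular vectors of the Lipschitz matrix L:
    # the rank-n subspace minimizing the sigma_max(L(I - U U^T)) term
    # in the error bound of the preceding theorem.
    _, _, Vt = np.linalg.svd(L)
    return Vt[:n].T   # U in R^{m x n} with U^T U = I

# For a diagonal L the subspace picks the largest-slope coordinate,
# and the residual term equals the discarded singular value.
L = np.diag([5.0, 1.0])
U = active_subspace(L, 1)
residual = np.linalg.svd(L @ (np.eye(2) - U @ U.T), compute_uv=False)[0]
```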
If we slightly rewrite this result in terms of the eigenvalues of
the squared Lipschitz matrix,
\begin{equation}\label{eq:H_eig}
\sigma_{\max}(\ma L(\ma I - \ma U \ma U^\trans))^2 =
\lambda_{\max}( (\ma I - \ma U\ma U^\trans) \ma H (\ma I - \ma U \ma U^\trans)),
\end{equation}
we can make a connection to the error bound associated with the
average outer product of gradients.
\begin{theorem}[Constantine~{\cite[Thm~4.3]{Con15}}]
If $\set D \subset \R^m$ is convex with Poincar\'e constant $C$,
$\rho:\set D\to \R_+$ is a probability density function with $\rho(\ve x) \le R$ $\forall \ve x \in \set D$,
and $\ma C \coloneqq \int_{\ve x\in \set D} \nabla f(\ve x) \nabla f(\ve x)^\trans \D \rho(\ve x)$,
then the optimal ridge approximation onto the subspace spanned by $\ma U$ is given by
\begin{equation}
g(\ve y) = \int f(\ma U\ve y + \ma W\ve z) \pi_{Z|Y}(\ve z) \D \, \ve z,
\quad \text{with} \quad
\begin{bmatrix} \ma U & \ma W \end{bmatrix}^\trans
\begin{bmatrix} \ma U & \ma W\end{bmatrix} = \ma I
\end{equation}
with an associated error bound
\begin{equation}\label{eq:C_eig}
\int (f(\ve x) - g(\ma U^\trans \ve x))^2 \D \rho(\ve x) \le
(RC)^2 \cdot \Trace( (\ma I - \ma U\ma U^\trans) \ma C (\ma I -\ma U \ma U^\trans)).
\end{equation}
\end{theorem}
\subsection{Eigenvalue Decay\label{sec:subspace:decay}}
When using the average outer product of gradients,
the presence of low-dimensional structure is assessed by noting a strong decay in the eigenvalues of $\ma C$;
the same argument can be made for the squared Lipschitz matrix $\ma H$.
\Cref{tab:eig} shows the eigenvalues of both matrices estimated using random gradient samples
corresponding to a Monte Carlo estimate of $\ma C$.
Three features are revealed in this example.
First is that the eigenvalues of the squared Lipschitz matrix are always greater than those of the
average outer product of gradients;
this is unsurprising as $\ma H$ is an upper bound on the outer product of gradients
$\nabla f(\ve x)\nabla f(\ve x)^\trans$
whereas $\ma C$ is an average; cf.~\cref{eq:C_to_H}.
Second, the decay of eigenvalues for $\ma H$ is slower than that of $\ma C$ for the same reason.
Third, we note that it is harder to identify low-rank Lipschitz matrices
than low-rank structure in the average outer product of gradients.
The borehole test function is a seven-dimensional ridge function
and, as such, both $\ma C$ and $\ma H$ should be rank-$7$.
This is clearly evident in the eigenvalues of $\ma C$,
but less so in the eigenvalues of $\ma H$
due to the increased difficulty of solving the semidefinite program for $\ma H$
along with the associated solver tolerances
(although there still is a nine order of magnitude decrease in the eigenvalues of $\ma H$).
\input{tab_eig}
\subsection{Shadow Plots\label{sec:subspace:one}}
One advantage of subspace-based dimension reduction
is the ability to visualize high-dimensional functions via shadow plots.
These plots, illustrated in \cref{fig:active_subspace} for one-dimensional approximations,
show the projection of the input $\ve x$ onto a one-dimensional subspace defined by $\ve u\in \R^m$
along the horizontal axis and the value of the function along the vertical axis,
collapsing the $m-1$ orthogonal dimensions into the page.
This example shows three different methods for computing the active subspace:
the Lipschitz matrix, the average outer-product of gradients~\cite{Con15},
and a polynomial ridge approximation~\cite{HC18}.
Given either function samples or gradient samples,
each method yields a similar estimate of the active subspace.
\input{fig_active_subspace}
\subsection{$\epsilon$-Lipschitz\label{sec:subspace:epsilon}}
\input{fig_roof}
Although the dominant eigenvectors of the squared Lipschitz matrix
often identify a good subspace, this approach can be fooled by highly oscillatory
but small amplitude components of a function.
As an example, consider the ``corrugated roof'' function~\cite[eq.~(26)]{CEHW17}:
\begin{equation}\label{eq:roof}
f:[-1,1]^2 \to \R, \qquad f(\ve x) = 5 x_1 + \sin (10 \pi x_2).
\end{equation}
In this case we can compute the squared Lipschitz matrix analytically: $\ma H = \begin{bsmallmatrix} 25 & 0 \\ 0 & 100\pi^2\end{bsmallmatrix}$.
Taking as the active subspace the span of the dominant eigenvector $[0,1]^\trans$
yields a projection that is not useful for predicting the value of $f$, as shown in \cref{fig:roof}.
This direction is identified because, although the contribution of the sine term is small,
its gradients are large.
The $\epsilon$-Lipschitz approach offers a way around this.
Choosing $\epsilon=2$,
the contribution of the sine term is ignored, leaving the rank-1 squared Lipschitz matrix
$\ma H_2 = \begin{bsmallmatrix} 25 & 0 \\ 0 & 0 \end{bsmallmatrix}$.
Then taking the active subspace to be the span of the dominant eigenvector of this matrix, $[1,0]^\trans$,
yields a substantially more predictive projection, as seen in \cref{fig:roof}.
Although this is a toy example,
it illustrates how the $\epsilon$-Lipschitz matrix can avoid being influenced by spurious computational noise,
which the sine term emulates.
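The two subspace choices in this example can be checked numerically from the analytic matrices above (a small NumPy sketch of ours):

```python
import numpy as np

# Squared Lipschitz matrices for f(x) = 5 x_1 + sin(10 pi x_2)
H = np.diag([25.0, 100.0 * np.pi**2])   # standard: dominated by the oscillation
H_eps = np.diag([25.0, 0.0])            # epsilon-Lipschitz with epsilon = 2

def dominant_eigvec(A):
    # Eigenvector of a symmetric matrix with the largest eigenvalue.
    w, V = np.linalg.eigh(A)
    return V[:, np.argmax(w)]

u_std = dominant_eigvec(H)      # ~ [0, 1]^T: the uninformative sine direction
u_eps = dominant_eigvec(H_eps)  # ~ [1, 0]^T: the direction f truly varies along
```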
\section*{Acknowledgements}
Thanks to Akil Narayan, Stephen Becker, and Richard Byrd
for conversations that helped inform the development of this manuscript.
\section{Uncertainty\label{sec:uncertainty}}
When working with expensive deterministic computer simulations
it is often necessary to employ an approximation of certain quantities of interest,
called a \emph{response surface} or a \emph{surrogate}.
Supposing we have constructed this approximation using samples of $f$,
it is natural to ask: what is the range of possible values our approximation could take away from these samples?
This range is often called \emph{uncertainty} in this setting.
Gaussian processes provide one approach to define an uncertainty~\cite[sec.~2.2]{RW06};
the Lipschitz constant provides another~\cite{RS15}.
Here we show that scalar Lipschitz uncertainty can be generalized to the Lipschitz matrix setting,
and that the Lipschitz matrix provides a much tighter estimate of uncertainty
than the corresponding scalar Lipschitz constant.
Before continuing, it is important to note that the Gaussian process and Lipschitz
notions of uncertainty are based on very different assumptions.
The Gaussian process perspective views the approximation $\widetilde{f}$ as a random process
with prior
\begin{equation}
\widetilde{f} \sim \mathcal{GP}(m(\ve x), k(\ve x, \ve x'))
\end{equation}
where $m:\set D\to \R$ is the mean and $k: \set D \times \set D \to \R$ is a symmetric covariance kernel;
see, e.g.,~\cite[eq.~(2.14)]{RW06}.
Information about $f$,
the inputs $\hve x_j$ and outputs $y_j \coloneqq f(\hve x_j)$,
serve to condition this prior such that $\widetilde{f}(\ve x)$
is a normal random variable with mean and covariance given by~\cite[eq.~(2.19)]{RW06}:
\begin{multline}
\setlength{\arraycolsep}{2pt}
\widetilde{f}(\ve x) \sim \set N\left(
\begin{bmatrix} k(\ve x, \hve x_1) \\ \vdots \\ k(\ve x, \hve x_M)\end{bmatrix}^\trans
\begin{bmatrix} k(\hve x_1, \hve x_1) & \cdots & k(\hve x_M, \hve x_1) \\
\vdots & & \vdots \\
k(\hve x_1, \hve x_M) & \cdots & k(\hve x_M, \hve x_M)
\end{bmatrix}^{-1}
\begin{bmatrix}
y_1- m(\hve x_1) \\
\vdots \\
y_M - m(\hve x_M)
\end{bmatrix}, \right. \\
\left.
k(\ve x, \ve x) -
\begin{bmatrix} k(\ve x, \hve x_1) \\ \vdots \\ k(\ve x, \hve x_M)\end{bmatrix}^\trans
\begin{bmatrix} k(\hve x_1, \hve x_1) & \cdots & k(\hve x_M, \hve x_1) \\
\vdots & & \vdots \\
k(\hve x_1, \hve x_M) & \cdots & k(\hve x_M, \hve x_M)
\end{bmatrix}^{-1}
\begin{bmatrix} k(\ve x, \hve x_1) \\ \vdots \\ k(\ve x, \hve x_M)\end{bmatrix}
\right).
\end{multline}
Uncertainty in the approximation $\widetilde{f}$ for a given probability threshold $\delta \in (0,1)$
is then defined as all those values of $\widetilde{f}(\ve x)$ with probability greater than $\delta$;
this is illustrated in \cref{fig:gp}.
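A direct transcription of this conditioning formula is sketched below (our naming; no noise term is included, so the kernel matrix must be well conditioned, and we add the prior mean $m(\ve x)$ to the posterior mean, which vanishes in the zero-mean case). Without noise, the posterior mean interpolates the data and the posterior variance vanishes at the samples.

```python
import numpy as np

def gp_posterior(x, X_hat, y, k, m=lambda x: 0.0):
    # Posterior mean and variance of a GP at x, conditioned on (xhat_j, y_j).
    K = np.array([[k(xi, xj) for xj in X_hat] for xi in X_hat])
    kx = np.array([k(x, xj) for xj in X_hat])
    r = np.array([yj - m(xj) for xj, yj in zip(X_hat, y)])
    mean = m(x) + kx @ np.linalg.solve(K, r)
    var = k(x, x) - kx @ np.linalg.solve(K, kx)
    return mean, var

# Squared-exponential kernel, two scalar samples: exact interpolation.
k = lambda a, b: np.exp(-0.5 * (a - b) ** 2)
X_hat = np.array([0.0, 1.0])
y = np.array([1.0, 2.0])
mean, var = gp_posterior(0.0, X_hat, y, k)
```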
\input{fig_gp}
An alternative view of uncertainty posits that $f$ belongs to some function class
and that samples constrain the values $f$ can take.
Denoting this function class as $\set F$,
the set of values $f$ could take at $\ve x$
given $y_j \coloneqq f(\hve x_j)$ is the uncertainty set $\set U$:
\begin{equation}
\set U(\ve x; \set F, \lbrace (\hve x_j, y_j) \rbrace_{j=1}^M)
\coloneqq \lbrace \widetilde{f}(\ve x): \widetilde{f} \in \set F,
\widetilde{f}(\hve x_j) = y_j \ \forall j =1,\ldots, M\rbrace
\subset \R.
\end{equation}
Here we choose the function class based on the Lipschitz matrix associated with $f$;
i.e., $\set F = \set L(\set D, \ma L)$.
\Cref{fig:gp} shows this uncertainty set is substantially different from the Gaussian process uncertainty.
In this section we provide a formula for the interval $\set U(\ve x)$ in \cref{sec:uncertainty:interval}
and discuss how to project this interval onto a shadow plot in \cref{sec:uncertainty:shadow}.
Combined with the Lipschitz-matrix based space filling design discussed in \cref{sec:design}
this allows us to provide informative bounds on shadow plots.
\subsection{Uncertainty Interval\label{sec:uncertainty:interval}}
For Lipschitz functions $f\in \set L(\set D,\ma L)$
the uncertainty set $\set U$ is an interval:
\begin{align}
\label{eq:Lmat_uncertain}
\set U(\ve x; \set L(\set D, \ma L), \lbrace (\hve x_j, y_j) \rbrace_{j=1}^M )
= \lbrace \widetilde{y} : |\widetilde{y}- y_j| \le \| \ma L(\ve x - \hve x_j)\|_2,
\forall j=1,\ldots,M\rbrace \\
= \left[
\max_{j=1,\ldots, M} y_j - \|\ma L(\ve x - \hve x_j)\|_2 \ , \
\min_{j=1,\ldots, M} y_j + \|\ma L(\ve x - \hve x_j)\|_2
\right].
\label{eq:interval}
\end{align}
\Cref{fig:gap} shows the gap between the upper and lower limits of this interval
when using both the scalar Lipschitz constant and the Lipschitz matrix;
note the Lipschitz matrix yields far smaller uncertainty intervals.
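The comparison can be sketched numerically. Here we take the scalar Lipschitz constant to be $\sigma_{\max}(\ma L)$ (an assumption of ours: it is the smallest scalar constant consistent with $\ma L$, since $\|\ma L\ve v\|_2 \le \sigma_{\max}(\ma L)\|\ve v\|_2$), and observe that the matrix interval is nested inside the scalar one:

```python
import numpy as np

def interval(x, X_hat, y, dist):
    # Uncertainty interval [max_j y_j - d_j, min_j y_j + d_j] for a generic
    # sample-to-point distance function.
    d = np.array([dist(x, xh) for xh in X_hat])
    return np.max(y - d), np.min(y + d)

L = np.diag([5.0, 0.5])
sigma = np.linalg.svd(L, compute_uv=False)[0]  # scalar constant from L
X_hat = np.array([[0.0, 0.0], [0.0, 1.0]])
y = np.array([0.0, 0.0])
x = np.array([0.0, 0.5])

lo_m, hi_m = interval(x, X_hat, y, lambda a, b: np.linalg.norm(L @ (a - b)))
lo_s, hi_s = interval(x, X_hat, y, lambda a, b: sigma * np.linalg.norm(a - b))
```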
\input{fig_gap}
\subsection{Uncertainty on Shadow Plots\label{sec:uncertainty:shadow}}
Given the utility of shadow plots in understanding the behaviour of high-dimensional functions,
we seek to include a projection of uncertainty onto these plots.
Each position $\alpha$ on the horizontal axis consists of all the points in $\set D$
where $\ve u^\trans \ve x = \alpha$;
i.e., the set $\set S_\alpha \coloneqq \lbrace \ve x \in \set D: \ve u^\trans \ve x = \alpha\rbrace$.
Then the uncertainty associated with this position on the horizontal axis
is the union of all the pointwise uncertainties in this set $\set S_\alpha$;
this motivates extending the definition of uncertainty to set-valued inputs:
\begin{equation}
\set U(\set S; \set F, \lbrace (\hve x_j, y_j) \rbrace_{j=1}^M)
\coloneqq
\bigcup_{\ve x\in \set S}
\set U(\ve x; \set F, \lbrace (\hve x_j, y_j) \rbrace_{j=1}^M).
\end{equation}
For closed sets $\set S$, the Lipschitz uncertainty is an interval
whose boundaries are given by minimax optimization problems:
\begin{multline}\label{eq:Uset}
\set U(\set S; \set L(\set D, \ma L), \lbrace (\hve x_j, y_j) \rbrace_{j=1}^M)
=\\ \left[
\min_{\ve x \in \set S} \max_{j=1,\ldots, M} y_j - \|\ma L(\ve x - \hve x_j)\|_2 \ , \
\max_{\ve x \in \set S} \min_{j=1,\ldots, M} y_j + \|\ma L(\ve x - \hve x_j)\|_2 \
\right].
\end{multline}
\Cref{fig:shadow} shows an application of this set-based definition
using both scalar and matrix Lipschitz on points placed at the corners of the domain
and on points selected based on a space filling design as described in the next section.
Unlike the estimate of the projection of the function in~\cref{fig:active_subspace},
no new samples of $f$ have been taken to construct the uncertainty in this figure.
\input{fig_shadow}
Before concluding, we briefly discuss solving the two
minimax optimization problems in~\cref{eq:Uset}.
Although there are sophisticated algorithms for problems of this type,
we use a simple sequential linear program approach due to
Osborne and Watson~\cite{OW69}.
As these two optimization problems are similar, we only discuss the lower bound.
Introducing a slack variable $t$ to represent this lower bound,
we seek to minimize
\begin{equation}\label{eq:minimax}
\minimize_{\ve x \in \set S, t \in \R } t \quad
\text{such that} \quad f(\hve x_j) - \|\ma L(\ve x - \hve x_j)\|_2 \le t, \quad \forall j=1,\ldots M.
\end{equation}
Linearizing the constraints yields a sequence of linear programs
for the update $\ve p$ such that $\ve x^{(k+1)} = \ve x^{(k)} + \ve p$:
\begin{equation}\label{eq:minimax_lin}
\minimize_{\ve x^{(k)} + \ve p \in \set S, t \in \R } t \ \
\text{such that} \ \ f(\hve x_j) -
\|\ma L(\ve x^{(k)} \!- \hve x_j)\|_2 -
\frac{\ve p^\trans\ma L^\trans \ma L(\ve x^{(k)} \!- \hve x_j) }{
\|\ma L(\ve x^{(k)} \!- \hve x_j )\|_2} \le t,
\ \forall j.
\end{equation}
As the 2-norm is convex, its first-order Taylor expansion provides a lower bound on the norm,
so each step of this iteration remains feasible, removing the need for a line search.
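A toy sketch of one such linearized step for the lower endpoint follows; as a simplification of ours, the linear program over the update $\ve p$ is replaced by enumeration over a finite candidate set containing $\ve p = \ve 0$ (so the step can never increase the objective), the domain constraint $\set S$ is ignored, and $\ve x^{(k)}$ is assumed to coincide with no sample so the gradients are defined:

```python
import numpy as np

def lower_env(x, X_hat, y, L):
    # Objective of the lower-endpoint problem: max_j y_j - ||L(x - xhat_j)||_2.
    return np.max(y - np.linalg.norm((x - X_hat) @ L.T, axis=1))

def slp_step(x_k, X_hat, y, L, P):
    # One linearized descent step: minimize over candidate updates p the
    # linearized objective max_j [y_j - r_j - g_j . p].
    diffs = x_k - X_hat
    r = np.linalg.norm(diffs @ L.T, axis=1)
    G = diffs @ (L.T @ L) / r[:, None]   # gradients of the norm terms
    t = [np.max(y - r - G @ p) for p in P]
    return x_k + P[np.argmin(t)]

X_hat = np.array([[0.0, 0.0]])
y = np.array([1.0])
L = np.eye(2)
x0 = np.array([0.5, 0.0])
P = np.array([[0.0, 0.0], [0.2, 0.0], [-0.2, 0.0], [0.0, 0.2]])
x1 = slp_step(x0, X_hat, y, L, P)  # moves away from the sample
```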
One challenge in solving this optimization problem is the large number of spurious local minimizers
as evidenced in~\cref{fig:gp}.
To ensure we obtain a nearly optimal objective value of~\cref{eq:minimax}
we try multiple initializations of the iteration~\cref{eq:minimax_lin}
starting from random samples of the Voronoi vertices of $\lbrace \hve x_j \rbrace_{j=1}^M$
in the $\ma L$ weighted 2-norm.
These Voronoi vertices are discussed in more detail in the next section
and play a central role there.
https://arxiv.org/abs/2207.13638

\title{A Simple and Elegant Mathematical Formulation for the Acyclic DAG Partitioning Problem}

\begin{abstract}
This work addresses the NP-Hard acyclic directed acyclic graph (DAG) partitioning problem, defined as partitioning the vertex set of a given DAG into disjoint and collectively exhaustive subsets (parts). Parts are to be assigned such that the total sum of the vertex weights within each part satisfies a common upper bound and the total sum of the costs of edges that connect nodes across different parts is minimized. Additionally, the quotient graph, i.e., the induced graph in which all nodes assigned to the same part are contracted to a single node and their edges are replaced with cumulative edges towards other nodes, must itself be a DAG; that is, it must contain no cycles. Many computational and real-life applications, such as computational task scheduling, RTL simulations, scheduling of rail-rail transshipment tasks, and Very Large Scale Integration (VLSI) design, make use of acyclic DAG partitioning. We address the need for a simple and elegant mathematical formulation for the acyclic DAG partitioning problem that enables easier understanding, communication, implementation, and experimentation.
\end{abstract}

\section{Introduction}
\label{sec.intro}
Graph partitioning has been an active area of research for several decades, and is an essential
technique for
data and computation distribution for efficient computation~\cite{kaku:98:metis,Catalyurek99,HENDRICKSON20001519}.
A graph partitioning problem is, in general, defined as the task of dividing the vertex set of a
directed or undirected graph into roughly balanced disjoint subsets (also called as parts,
partitions, and clusters)
while minimizing the total weight of edges that connect nodes from different parts~\cite{pell:08:scotch,kaku:98:metis,nossack2014mathematical}.
The research community has developed various types of graphs for many specific real-world scenarios and applications.
One type of graph that is perhaps the de facto standard for task scheduling and task/workflow management is
the directed acyclic graph (DAG). A directed acyclic graph is a graph that contains
no cycles, i.e., if there is a directed path from a node $A$ to a node $B$,
there is no path from node $B$ to node $A$. Hence, many efforts have focused on partitioning
this specific subclass of graphs.
In this work, we focus on a variant of DAG partitioning with one additional constraint:
the acyclic DAG partitioning problem.
The acyclic DAG partitioning problem introduces an additional constraint to the regular graph partitioning
problem: the quotient graph of the partitioning, i.e., the graph that results when all nodes
assigned to the same part are contracted to a single vertex and the edges represent the
cumulative edges between those nodes, must also be a directed acyclic
graph~\cite{mops:17b,Herrmann19-SISC,nossack2014mathematical,albareda2019reformulated}.
The acyclic DAG partitioning problem has been studied in many domains for many years,
including partitioning and computation of boolean networks~\cite{colb:94}, VLSI design~\cite{kocan2005},
rail-rail transshipment~\cite{nope:14,albareda2019reformulated},
RTL simulations~\cite{beamer2020efficiently}, task scheduling problems~\cite{Ozkaya19-PPAM,Ozkaya19-IPDPS},
and quantum circuit simulation~\cite{Fang22-ARXIV}.
Similar to graph partitioning, acyclic DAG partitioning is an NP-Hard problem~\cite{Herrmann19-SISC}.
Thus, many solutions are heuristic algorithms~\cite{mops:17b,Herrmann19-SISC}.
Most of the recent work on acyclic partitioning has either used very restrictive approaches to avoid
cycles, or expensive algorithms to detect and eliminate possible emergence of cycles~\cite{mops:17b,Herrmann19-SISC}.
Although many researchers tend towards fast heuristics for computations on larger data,
there are efforts from many domains to formulate and solve this problem to optimality using integer
linear programming (ILP) models~\cite{nope:14,nossack2014mathematical,albareda2019reformulated}.
Previous work on the mathematical formulation of the acyclic partitioning problem uses subtour
elimination constraints derived from the Traveling Salesman Problem (TSP)~\cite{miller1960integer} or
complex formulations with possibly expensive pre-processing phases,
which may be hard to follow and discouraging for readers less familiar with the topic.
Furthermore, simpler model formulations tend to be easier to approach and implement and thus,
help bridge the gap between theory, the understanding and communication of it,
and the practice in different domains.
In this work, we present a simple and elegant formulation for
optimally solving the acyclic graph partitioning problem.
Our main focus is elegantly modelling the acyclicity constraint, since it is the
major source of complexity in the formulation.
Even so, we present a concise but complete formulation for the problem.
In~\cref{sec:preliminaries}, we define the preliminaries and formally introduce the graph
partitioning and acyclic DAG partitioning problems. In~\cref{sec.related}, we describe the previous
mathematical formulations for the acyclic DAG partitioning problem and in~\cref{sec.algo}, we
present our concise and straightforward model.
In~\cref{sec:real_world_impact}, we give two example application areas where the proposed ILP
formulation can be useful.
Finally, in~\cref{sec.conc}, we summarize and conclude the article.
\section{Preliminaries}
\label{sec:preliminaries}
A {\em simple directed graph} $G^s = (V,E)$ contains a set of vertices $V$ and a set of directed
edges $E = \{e=(u,v) \mid u,v \in V \}$ for distinct $u,v \in V$, where $e$ is directed from $u$ to $v$.
That is, every edge $e = (u,v)$ connects a distinct pair of vertices. The set of edges is defined
as a subset of the Cartesian product of the vertex set with itself, $E \subset V \times V$.
In a {\em directed graph} $G = (V,E,w,c)$, in addition to the above,
every vertex $u$ has an associated weight denoted by $w_u$,
and every edge $(u,v) \in E$ has a cost denoted by $c_{uv}$.
A {\em path} is a sequence of edges $(u_1,v_1) \cdot (u_2,v_2) \cdots$ with $v_i=u_{i+1}$.
A path $((u_1,v_1) \cdot (u_2,v_2) \cdot (u_3,v_3) \cdots (u_\ell,v_\ell))$ is of length $\ell$,
which connects a sequence of $\ell+1$ vertices
$(u_1, v_1 = u_2, \ldots, v_{\ell-1} = u_\ell, v_\ell)$.
A path is called {\em simple} if the connected vertices are distinct.
Let $u\leadsto v$ denote a simple path that starts from $u$ and ends at $v$.
A path $((u_1,v_1) \cdot (u_2,v_2) \cdots (u_\ell,v_\ell))$ forms a (simple) {\em cycle} if all
$v_i$ for $1 \leq i \leq \ell$ are distinct and $u_1 = v_\ell$.
A {\em directed acyclic graph (DAG)}, is a directed graph with no cycles.
$\Pred{u} = \{ v\ |\ (v,u) \in E\}$ represents
the immediate predecessors of a vertex $u$, and $\Succ{u} = \{ v\ |\ (u,v) \in E\}$ represents the immediate successors of $u$.
The union of the immediate predecessors and successors of a vertex $u$ is called the
neighbor set of $u$: $\Neigh{u} = \Pred{u} \cup \Succ{u}$.
For a vertex $u$, the set of vertices $v$ such that there exists a path $u \leadsto v$ is called the {\em descendants} of $u$.
Similarly, the set of vertices $v$ such that $v \leadsto u$ exists is called the {\em ancestors} of the vertex $u$.
We define $N_u^v$ as the set of all distinct vertices that lie on any simple path between $u$ and
$v$, which is equivalent to the intersection of the descendant set of $u$ and the ancestor set of $v$.
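As a concrete illustration, the reachability sets above can be computed with simple graph traversals. The following plain-Python sketch (the dictionary-of-successor-lists representation and the function names are our own, illustrative choices, not from the paper) derives descendants, ancestors, and $N_u^v$ exactly as defined:

```python
def descendants(adj, u):
    """All vertices reachable from u via a directed path (excluding u itself)."""
    seen, stack = set(), [u]
    while stack:
        for y in adj.get(stack.pop(), []):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def ancestors(adj, v):
    """All vertices from which v is reachable: descendants in the reversed graph."""
    radj = {}
    for x, ys in adj.items():
        for y in ys:
            radj.setdefault(y, []).append(x)
    return descendants(radj, v)

def n_between(adj, u, v):
    """N_u^v: vertices lying on some path u ~> v, i.e., descendants(u) & ancestors(v)."""
    return descendants(adj, u) & ancestors(adj, v)

# Hypothetical diamond DAG: a -> b -> d and a -> c -> d.
adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
print(sorted(n_between(adj, "a", "d")))  # ['b', 'c']
```

Note that for a DAG, $u \notin \text{descendants}(u)$ and $v \notin \text{ancestors}(v)$, so $N_u^v$ excludes the endpoints themselves, matching the definition above.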
\begin{definition}{A $k$-way partitioning.}
\rm A $k$-way partitioning of a graph $G$ is a partition of its vertex set into $k$ parts (subsets)
$P~=~\{V_1, V_2, \dots, V_k\}$ such that all subsets $V_i$ are disjoint and collectively exhaustive,
$V~=~\bigcup\limits_{i=1}^{k} V_{i}$.
\end{definition}
An edge $e = (u,v)$ where $u \in V_i$, $v \in V_j$, and $i \neq j$ is called a \emph{cut edge}. The
sum of the cost of all cut edges is called \emph{edge cut}, which is a typical objective function to
minimize in graph partitioning problems.
\begin{definition}{A balanced $k$-way partitioning.}
\rm A $k$-way partitioning of a graph $G$ in which the total weight of vertices in each
subset $V_i$,
$w(V_i) = \sum_{u \in V_i} w(u)$, is bounded, via a small imbalance parameter $\epsilon$, by the
inequality
$w(V_i) \leq (1+\epsilon)\left\lceil \frac{w(V)}{k} \right\rceil = B$.
\end{definition}
\begin{definition}{The balanced acyclic $k$-way partitioning problem.}
\rm A balanced acyclic $k$-way partition $P = \{V_1,\ldots,V_k\}$ of a directed graph $G = (V, E)$ is a
balanced $k$-way partition in which no two paths $u \leadsto v$ and $v' \leadsto u'$
co-exist for any $\{u, u'\} \subset V_i$, $\{v, v'\} \subset V_j$, $i\neq j$. The balanced acyclic
$k$-way partitioning problem is to find a valid balanced acyclic $k$-way partitioning of $G$ where
the objective function (i.e., edge cut) is minimized.
\end{definition}
A $k$-way partition induces a contracted graph $\mathcal{G'} = (\mathcal{V'}, \mathcal{E'})$, i.e., a
part (quotient) graph, in which each node $u' \in \mathcal{V'}$ represents one of the $k$ parts of $G$, with
$w(u') = w(V_{u'})$, and each edge $(u', v') \in \mathcal{E'}$, $u' \neq v'$, represents the cumulative
cost of the directed edges between the two parts, $c_{u'v'} = \sum\limits_{u \in u', v \in v'} {c_{uv}}$.
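The quotient-graph construction and its acyclicity test can be sketched in a few lines. The example below (a hypothetical four-node graph with unit edge costs of our own choosing, not the paper's toy instance) contracts a partition and then checks topological orderability with Kahn's algorithm:

```python
from collections import defaultdict

def quotient_graph(edges, part):
    """Contract each part into one node; accumulate costs of crossing edges."""
    q = defaultdict(int)
    for (u, v), cost in edges.items():
        if part[u] != part[v]:
            q[(part[u], part[v])] += cost
    return dict(q)

def is_acyclic(nodes, arcs):
    """Kahn's algorithm: a graph is topologically orderable iff it is a DAG."""
    indeg = {n: 0 for n in nodes}
    for (_, t) in arcs:
        indeg[t] += 1
    ready = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while ready:
        n = ready.pop()
        seen += 1
        for (s, t) in arcs:
            if s == n:
                indeg[t] -= 1
                if indeg[t] == 0:
                    ready.append(t)
    return seen == len(nodes)

# Hypothetical 4-node DAG: a->b, a->d, b->c, c->d, unit edge costs.
edges = {("a", "b"): 1, ("a", "d"): 1, ("b", "c"): 1, ("c", "d"): 1}
cyclic = quotient_graph(edges, {"a": 0, "b": 1, "c": 1, "d": 0})
acyclic = quotient_graph(edges, {"a": 0, "b": 0, "c": 1, "d": 1})
print(is_acyclic({0, 1}, cyclic))   # False: parts 0 and 1 point at each other
print(is_acyclic({0, 1}, acyclic))  # True: all crossing edges go 0 -> 1
```

The first partition induces quotient arcs in both directions between the two parts and is therefore cyclic; the second induces arcs in only one direction and is acyclic.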
\Cref{fig:convex} illustrates the difference between cyclic and acyclic partitions.
For the toy graph shown in \cref{fig:conv:1}, there are three possible balanced partitions,
shown in \cref{fig:conv:2}, \cref{fig:conv:3}, and~\cref{fig:conv:4}. The quotient graphs of
both~\cref{fig:conv:2,fig:conv:3} contain two nodes, red and blue, and two edges with cost 1.
In~\cref{fig:conv:2}, the edge from $a$ to $d$ creates an edge from blue to red, and the edge between
$b$ and $c$ creates an edge from red to blue, creating a cycle in the quotient graph.
\begin{figure}
\centering
\begin{subfigure}[t]{.22\linewidth} \includegraphics[width=0.95\textwidth]{fig/abcg-graph}\caption{A toy graph}\label{fig:conv:1}\end{subfigure}
\begin{subfigure}[t]{.24\linewidth} \includegraphics[width=\textwidth]{fig/abcg-graph-cc-1}
\caption{A cyclic partition}\label{fig:conv:2}\end{subfigure}
\begin{subfigure}[t]{.24\linewidth} \includegraphics[width=0.9\textwidth]{fig/abcg-graph-cc-2}
\caption{A cyclic partition}\label{fig:conv:3}\end{subfigure}
\begin{subfigure}[t]{.25\linewidth} \includegraphics[width=\textwidth]{fig/abcg-graph-ac}
\caption{An acyclic partition}\label{fig:conv:4}\end{subfigure}
\caption{A toy graph (left), two cyclic and convex partitions (middle two),
and an acyclic and convex partition (right).}
\label{fig:convex}
\end{figure}
\section{Related work}
\label{sec.related}
The recent work of Henzinger et al.~\cite{henzinger2020ilp} presents a formulation for the
undirected balanced graph partitioning problem. This model includes the essential definition and
constraints, and can be used as a basis for our problem variant.
The recent notable efforts on mathematical formulation of acyclic partitioning problem are due to
Nossack et al.~\cite{nope:14,nossack2014mathematical} and Albareda-Sambola et al.~\cite{albareda2019reformulated}.
Nossack et al.~\cite{nope:14} introduce a model with a high memory complexity. Their follow-up work~
\cite{nossack2014mathematical} improves this model and introduces another novel model from a
different perspective, an augmented set partitioning formulation.
Both formulations by Nossack et al.~\cite{nossack2014mathematical,nope:14}
rely on the Miller-Tucker-Zemlin (MTZ) subtour elimination
constraints from the Traveling Salesperson Problem~\cite{miller1960integer}
for the acyclicity constraint.
As the problem divides the set of vertices into subsets, the resulting assignment may admit
equivalent (e.g., symmetric) optimal solutions within the solution space.
Thus, they introduce additional constraints to reduce the symmetry.
We further discuss one of the models they presented in~\cref{sub.nossackmodel1}.
Albareda-Sambola et al.~\cite{albareda2019reformulated} reformulate the model presented by Nossack
et al.~\cite{nossack2014mathematical}, introduce a rather comprehensive preprocessing of the
ancestor and descendant sets of each node to forbid beforehand node pairs that cannot lead to a valid
partitioning, and add valid inequalities as constraints in order to limit the
search space and speed up the execution time of ILP solvers for their formulation.
Their formulation starts with the same base constraints and
then moves towards a topological-order-based formulation approach,
which is a huge step towards a simpler formulation.
On the other hand, the inequalities presented as constraints include many complex components and the
preprocessing-related variables, which makes the final set of constraints hard to
understand (e.g., pairwise connectivity of all nodes, $O(|V| \cdot |E|)$ pre-computed values, many constraints, etc.). We discuss their formulation further in~\cref{sub.albaredamodel}.
Before delving into acyclic partitioning formulations,
we first briefly introduce the mathematical formulation of the undirected balanced graph
partitioning from Henzinger et al.~\cite{henzinger2020ilp} in~\cref{sub:undirectedmodel},
since it introduces the common components of all formulations.
Then, in~\cref{sub.nossackmodel1}, we present the improved formulation model by Nossack et
al.~\cite{nossack2014mathematical}, and in~\cref{sub.albaredamodel}, we briefly describe the reformulation by
Albareda-Sambola et al.~\cite{albareda2019reformulated}.
\subsection{Formulation for Undirected Graph Partitioning}
\label{sub:undirectedmodel}
The first step is to introduce a variable for the computation of the objective function, i.e., the edge cut:
$z_{ij}$ is defined for each edge as a binary variable that is set to one if the edge $(i,j)$ is cut,
and zero otherwise. Then, the objective function is defined as the minimization of the sum of the
costs of the cut edges (\ref{eq:und:obj}).
An equivalent definition sets $z_{ij}$ as zero for cut edges and one for internal (uncut) edges and
defines the objective function as maximizing the sum of the cost of internal edges (alternatively
phrased as $z_{ij}$ is set to 1 if $i$ and $j$ belong to the same part, and zero otherwise),
which is the preferred presentation by both Nossack et al.~\cite{nossack2014mathematical}
and Albareda-Sambola et al.~\cite{albareda2019reformulated}.
The constraints (\ref{eq:und:constr_onepart}) enforce
that each node is assigned to exactly one part, and (\ref{eq:und:constr_partbal}) enforce the total
weight balance constraint for each part. The constraints (\ref{eq:und:constr_samepart1})
make sure that the $z_{ij}$ variables are set to 1 if the vertices $i$ and $j$ are not assigned to the same
part. $B$ denotes the balance weight limit including the imbalance ratio factored in as defined in~\cref{sec:preliminaries}.
$w_i$ denotes the weight of vertex $i$, and $c_{ij}$ denotes the cost of edge $\{i,j\}$, which is defined
only over the set of existing edges.
The decision variable $x_{is}$ is a binary variable set to 1 if the vertex $i$ is assigned to the
part $s$.
Then, the formulation is as follows. \\
Objective: Minimize the edge cut.
\begin{equation}
\label{eq:und:obj}
\min \sum_{(i,j) \in E} z_{ij} \cdot c_{ij}
\end{equation}
Subject to:
\begin{align}
\shortintertext{Constraint that all nodes belong to exactly one part:}
&\sum_{s = 0}^{k-1} x_{is} = 1 &\forall i \in V, \quad 0 \leq i < N = |V| \label{eq:und:constr_onepart}\\
\shortintertext{Constraint for part weight balance:}
&\sum_{i \in V} w_i \cdot x_{is} \leq B &\forall 0 \leq s < k \label{eq:und:constr_partbal}\\
\shortintertext{Constraints for marking (i.e., setting the $z_{ij}$ variable to one) the cut edges if
they are in different parts; the absolute value $|x_{is} - x_{js}|$ is linearized as a pair of inequalities:}
&z_{ij} \geq x_{is} - x_{js}, \quad z_{ij} \geq x_{js} - x_{is} &\forall \{i, j\} \in E, 0 \leq s < k \label{eq:und:constr_samepart1}\\
\shortintertext{Domains of decision variables:}
& x_{is} \in \{0,1\} &\forall i \in V, 0 \leq s < k \label{eq:und:dom_x}\\
& z_{ij} \in \{0,1\} &\forall \{i,j\} \in E \label{eq:und:dom_z}
\end{align}
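To see how constraint (\ref{eq:und:constr_samepart1}) drives the $z_{ij}$ variables, note that for one-hot part-assignment rows the smallest feasible $z_{ij}$ equals the cut indicator. A minimal sketch (the assignment vectors are hypothetical):

```python
def min_z(x_row_i, x_row_j):
    """Smallest binary z_ij with z_ij >= |x_is - x_js| for every part s:
    it is 1 exactly when i and j are assigned to different parts."""
    return max(abs(a - b) for a, b in zip(x_row_i, x_row_j))

# One-hot part-assignment rows for k = 3 parts.
print(min_z([1, 0, 0], [1, 0, 0]))  # 0: same part, edge not cut
print(min_z([1, 0, 0], [0, 1, 0]))  # 1: different parts, edge cut
```

Since the objective minimizes the sum of the $z_{ij}$, an optimal solution always takes this smallest feasible value.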
In all following formulations with the same objective (i.e., minimizing the edge cut/maximizing the weight of
internal edges), the $z_{ij}$ variables can be relaxed to continuous variables.
\subsection{Nossack et al.'s Model}
\label{sub.nossackmodel1}
Nossack et al.~\cite{nope:14}'s initial formulation deviates from the formulation
defined in \cref{sub:undirectedmodel} by defining the $z$ variable as
three-dimensional, in which the first two dimensions index the pairs of nodes, and the third indexes a specific part.
Their next work improves this initial formulation and uses a two-dimensional $z$ variable. Hence, we
ignore their earlier formulation and focus on the improved formulation presented in~\cite{nossack2014mathematical}.
The main components of the formulation do not deviate drastically from undirected partitioning to acyclic DAG partitioning:
the acyclic DAG partitioning formulation merely includes additional constraints to address the acyclicity, plus
several constraints to further decrease the search space and improve the runtime performance of ILP solvers.
The mathematical formulation of Nossack et al.'s approach assumes the number of parts can be as
high as the number of nodes. To unify and simplify the presentation, we present the limit of number of parts
as $k$, as opposed to $N = |V|$.
As noted, the definition of $z_{ij}$ is the reverse of Henzinger et al.~\cite{henzinger2020ilp}'s,
i.e., the cut edges are marked as zero and internal (uncut) edges are marked as one.
Their formulation is as follows:\\
Objective: Maximize the total cost of internal edges, i.e., edges whose end points $i$ and $j$ belong to the same part $s$.
\begin{equation}
\label{eq:obj1}
\max \sum_{(i,j) \in E} z_{ij} \cdot c_{ij}
\end{equation}
Subject to:
\begin{align}
\shortintertext{Constraint for all nodes belonging to exactly one part:}
&\sum_{s = 0}^{k-1} x_{is} = 1 &\forall i \in V \\
\shortintertext{Constraint for part weight balance:}
&\sum_{i \in V} w_i \cdot x_{is} \leq B &\forall 0 \leq s < k\\
\shortintertext{Constraint for marking the internal (uncut) edges if they are in the same part:}
&z_{ij} + x_{is} - x_{js} \leq 1 &\forall i < j, \{i,j\} \subset V, 0 \leq s < k \label{eq:nos:constr_samepart1}
\end{align}
Constraints for applying triangular inequality for all nodes $j$ that lie in any path between $i$
and $h$. If $i$ and $h$ are in the same part, $j$ must be in the same part as well. If $i$ and $j$
are not in the same part (i.e., $z_{ij}$ is cut), $i$ and $h$ cannot be in the same part (i.e.,
$z_{ih}$ must also be cut), and if $j$ and $h$ are not in the
same part, $i$ and $h$ cannot be in the same part:
\begin{align}
& \begin{rcases}
&z_{ij} + z_{jh} - z_{ih} \leq 1 \\
&z_{ij} - z_{jh} + z_{ih} \leq 1 \\
&-z_{ij} + z_{jh} + z_{ih} \leq 1 \\
&z_{ih} \leq z_{ij} \\
&2z_{ih} \leq z_{ij} + z_{jh}
\end{rcases} & &\forall i < j < h, \{ i,j,h\} \subset V, i \leadsto j, j \leadsto h \label{eq:nos:constr_tri1}
\end{align}
Constraints for the variables of the $y$ matrix, i.e., the induced edges represented as an adjacency matrix of
parts: every cell $y_{st}$ of the $y$ matrix has value 1 if there exists an edge from any node
in $V_s$ to any node in $V_t$:
\begin{align}
&x_{is} + x_{jt} - 1 \leq y_{st} &\forall (i,j) \in E, i \in V_s, j \in V_t, 0 \leq s \neq t < k
\label{eq:nos:constr_induced_edge}\\
\shortintertext{Constraints for $\pi$ values from the MTZ subtour (cycle) elimination constraints (see~\cite{miller1960integer}):}
&|V| \cdot (y_{st} - 1) \leq \pi_t - \pi_s - 1 &\forall 0 \leq s \neq t < k \label{eq:nos:constr_pi}\\
\shortintertext{Constraint to decrease symmetry by assigning the part indices sorted by total
vertex weight within the part:}
& \sum_{i \in V} x_{is} \leq \sum_{i \in V} x_{i,s-1} &\forall 1 \leq s < k
\label{eq:nos:constr_symdec}\\
\shortintertext{Domains of decision variables:}
& x_{is} \in \{0,1\} &\forall i \in V, 0 \leq s < k \label{eq:nos:dom_x}\\
& z_{ij} \in \{0,1\} &\forall i < j, \{i, j\} \subset V \label{eq:nos:dom_z}\\
& y_{st} \in \{0,1\} &\forall 0 \leq s \neq t < k \label{eq:nos:dom_y}\\
& \pi_{s} \in \mathbb{Z} &\forall 0 \leq s < k \label{eq:nos:dom_pi}
\end{align}
Although an essential part of the formulation, the constraints that address the acyclicity are
adapted from the Miller-Tucker-Zemlin (MTZ) subtour elimination formulation for the Traveling Salesperson
Problem~\cite{miller1960integer}.
The essence of the approach is to define an integer variable $\pi_s$ for each part, taking values in the range
$[0,k)$, where each part is assigned a unique value. For a valid acyclic partitioning there
exists a one-to-one mapping between the integers in the range $[0,k)$ and the $\pi$ values. The $y$
matrix is essentially the adjacency matrix of the induced quotient graph.
Together with the constraints in (\ref{eq:nos:constr_tri1}), the $\pi$ and $y$ variables enforce
that the parts have unique topological order indices and that all nodes lying
between two nodes from the same part on any path are assigned to the same part.
Since the part assignments can lead to many symmetrical solutions (e.g., the possible reorderings of
part indices alone lead to $k!$ identical partitions), the constraint (\ref{eq:nos:constr_symdec}) is added to force
the part indices to be ordered from the largest part to the smallest in total vertex weight. Although this
does not prevent all symmetrical solutions, it reduces the optimal solution count significantly.
\subsection{Albareda-Sambola et al.'s Model}
\label{sub.albaredamodel}
Albareda-Sambola et al.~\cite{albareda2019reformulated}'s main idea is to improve upon the previous
formulations by filtering out some of the impossible cases with the help of a pre-processing phase and
introducing additional valid inequalities. They experiment with many valid inequalities as
sets of constraint combinations.
The new formulations they present have closer ties to the topological orderability of DAGs.
The formulation enforces that the part assignment follows a topological order, i.e., there
exists an edge $(i,j) \in E, i \in V_s, j \in V_t$ if and only if $s < t$.
That is, the indices of the parts should be sorted in a topological order.
This is a huge step towards a simpler model. However, combined with the
complexity of the variables introduced in pre-processing and with constraints defined as sums
over the part assignment vectors for each pair of connected nodes,
the formulation becomes more complex and harder to follow.
Their pre-processing step defines the new variables $\alpha_{ij} = 1$ if $i \leadsto j$ and
$A_{ij}$ for each $(i,j)$ pair where $i < j$, $i \leadsto j$, as
$$A_{ij} = w_i + w_j + \sum \limits_{h \in N_i^j} w_h.$$
Last,
$A'_{ijl}$ is defined as the weight sum of all distinct nodes on all paths $i \leadsto j$, $j
\leadsto l$, and $i \leadsto l$, i.e., $$A'_{ijl} = w_i + w_j + w_l + \sum \limits_{h \in (N_i^j \cup N_j^l \cup N_i^l)
\setminus \{j\}} w_h.$$
Here, the $\alpha_{ij}$ variable is equal to one if $j$ is a descendant of $i$,
i.e., there exists a path from $i$ to $j$. The $A_{ij}$ variables store the weight sum of all nodes
that lie on any path between the nodes $i$ and $j$, including $i$ and $j$.
The intuition behind the $A_{ij}$ variables is that for any acyclic partitioning
in which nodes $i$ and $j$ are assigned to the same part, all nodes that are on any path between the
two nodes must be in the same part as well. Thus, if $A_{ij} > B$, there is no feasible solution
that assigns $i$ and $j$ to the same part.
Computation of total node weight $A_{ij}$ for each pair of ancestor-descendant nodes helps
eliminate part assignments where the total part weight constraint is violated.
The downside, on the other hand, is that computing all distinct vertices on any path between all $(i,j)$
connected pairs of nodes and introducing $\mathcal{O}(k \cdot |V|^2)$ constraints as in
(\ref{eq:alba:weight_res_topo1}) and (\ref{eq:alba:weight_res_topo2}) can be computationally expensive and
challenging. For small graphs, this can be a trivial
computation, but as the scale and density of graphs increase, the amount of data to store and compute
increases drastically, leading to the need for more complex algorithms that can perform those
computations efficiently.
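For intuition, the $A_{ij}$ values can be pre-computed directly from the descendant/ancestor definition. The sketch below (a hypothetical diamond-shaped DAG with unit weights; the representation and names are ours) also illustrates why this step grows expensive: it touches every ancestor-descendant pair.

```python
def reachable(succ, u):
    """Vertices reachable from u (excluding u) given successor lists."""
    seen, stack = set(), [u]
    while stack:
        for y in succ.get(stack.pop(), []):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def precompute_a(vertices, succ, w):
    """A[i, j] = w_i + w_j + total weight of vertices on any path i ~> j."""
    pred = {v: [] for v in vertices}
    for x, ys in succ.items():
        for y in ys:
            pred[y].append(x)
    desc = {v: reachable(succ, v) for v in vertices}
    anc = {v: reachable(pred, v) for v in vertices}
    A = {}
    for i in vertices:
        for j in desc[i]:              # alpha_ij = 1 exactly when j in desc[i]
            between = desc[i] & anc[j]  # = N_i^j, excludes i and j themselves
            A[i, j] = w[i] + w[j] + sum(w[h] for h in between)
    return A

# Hypothetical diamond DAG: a -> b -> d, a -> c -> d, unit weights.
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
w = {"a": 1, "b": 1, "c": 1, "d": 1}
A = precompute_a(set(w), succ, w)
print(A["a", "d"])  # 4: a and d, plus b and c lying between them
```

If the balance bound were $B = 3$ here, $A_{ad} = 4 > B$ would already certify that $a$ and $d$ can never share a part.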
The formulation of Albareda-Sambola et al. is as follows.\\
Objective: Maximize the sum of the cost of internal edges.
\begin{equation}
\label{eq:obj2}
\max \sum_{(i,j) \in E} c_{ij} \cdot z_{ij}
\end{equation}
Subject to:
\begin{align}
\shortintertext{Constraint for all nodes belonging to exactly one part:}
&\sum_{s = 0}^{k-1} x_{is} = 1 &\forall i \in V \label{eq:alba:constr_onepart}\\
\shortintertext{Constraint for part weight balance:}
&\sum_{i \in V} w_i \cdot x_{is} \leq B &\forall 0 \leq s < k \label{eq:alba:constr_partbal}
\end{align}
The following constraints are to preserve the acyclicity of partitioning and the topological order
of part indices.
These constraints for part assignment prevent creation of edges $(V_s,V_t)$ where $s>t$ in the quotient graph adjacency matrix:
\begin{align}
& \sum \limits_{t \geq s} x_{it} + \sum \limits_{t < s} x_{jt} \leq 1 & \alpha_{ij} = 1, A_{ij} \leq B, \forall 0 \leq s < k \label{eq:alba:weight_res_topo1}\\
& \sum \limits_{t \geq s} x_{it} + \sum \limits_{t \leq s} x_{jt} \leq 1 & \alpha_{ij} = 1, A_{ij} > B, \forall 0 \leq s < k \label{eq:alba:weight_res_topo2}\\
& z_{ij} + \sum \limits_{t < s} x_{it} + \sum \limits_{t \geq s} x_{jt} \leq 2 &
\begin{array}{r}
\alpha_{ij} = 1, A_{ij} > B,\\
\forall (i,j) \in A, \forall 0 \leq s < k
\end{array}\label{eq:alba:weight_res_topo3}\\
\shortintertext{Domains of decision variables:}
& z_{ij} \in \{0,1\} &\forall (i,j) \in E, \{i,j\} \subset V \label{eq:alba:dom_z}\\
& x_{is} \in \{0,1\} &\forall i \in V, 0 \leq s < k \label{eq:alba:dom_x}
\end{align}
The acyclicity constraints in (\ref{eq:alba:weight_res_topo1} - \ref{eq:alba:weight_res_topo3})
above restrict the part assignments of $i$ and $j$ to parts $s$ and $t$ with $s \leq t$ if $A_{ij} \leq B$ and
$s < t$ if $A_{ij} > B$, respectively. The constraint (\ref{eq:alba:weight_res_topo3}) ensures the
correct assignment of the $z_{ij}$ variables when $i$ and $j$ are assigned to different parts.
Note that although the set of decision variables is smaller, the preprocessed $A_{ij}$ values may be
as many as $\mathcal{O}(|V|^2)$.
The above formulation is complete by itself; however,
to improve it, the authors add further filtering constraints using the $A_{ij}$ and $A'_{ijl}$
values computed in pre-processing and add the following constraint to form their
final, best performing formulation:
\begin{align}
& z_{ij} = 0 & \forall (i,j) \in A, A_{ij} > B.
\end{align}
In addition, they modify the triangular inequality constraints to utilize the $A_{ij}$ and $B$ values as follows:
\begin{align}
& \begin{rcases}
& z_{il} \geq z_{ij} + z_{jl} - 1 \\
& z_{ij} \geq z_{il} + z_{jl} - 1 \\
& z_{jl} \geq z_{ij} + z_{il} - 1 \\
\end{rcases} & \forall (i,j) \in A, A_{ij} > B
\end{align}
\begin{align}
& z_{il} \leq z_{ij} & \forall i<j<l \in V, \quad A_{ij}, A_{il} \leq B, j \in N_l^i \\
& z_{il} \leq z_{jl} & \forall i<j<l \in V, \quad A_{ij}, A_{jl} \leq B, j \in N_l^i
\end{align}
\begin{align}
& z_{ij} + z_{jl} + z_{il} \leq 1 & \forall i<j<l \in V, \quad A_{ij}, A_{il}, A_{jl} \leq B, A'_{ijl} > B \\
& z_{ij} + z_{il} \leq 1 & \forall i<j<l \in V, \quad A_{ij}, A_{il} \leq B, A_{jl} > B \\
& z_{il} + z_{jl} \leq 1 & \forall i<j<l \in V, \quad A_{il}, A_{jl} \leq B, A_{ij} > B \\
& z_{ij} + z_{jl} \leq 1 & \forall i<j<l \in V, \quad A_{ij}, A_{jl} \leq B, A_{il} > B
\end{align}
Finally, as a final extension and simplification of their formulation, they replace the constraints
(\ref{eq:alba:weight_res_topo1}-\ref{eq:alba:weight_res_topo3}) with the following:
\begin{align}
& w_i + \sum \limits_{j=1}^{i-1} w_jz_{ji} + \sum \limits_{j=i+1}^{|V|-1} w_jz_{ij} \leq B &
\forall 0 < i < N\label{eq:alba:monster_weight_const} \\
& z_{ij} + z_{il} + \sum \limits_{t \geq s} (x_{jt} + x_{lt}) + \sum \limits_{t<s} x_{it} \leq 3 &
\begin{array}{r}
\forall i < j < l \in V, \alpha_{jl} = 0,\\
A_{ij},A_{il} \leq B, A'_{ijl} > B, \forall 0 \leq s < k
\end{array}
\label{eq:alba:monster_acyc1}\\
& \sum \limits_{t \geq s} x_{it} + \sum \limits_{t \leq s} x_{jt} \leq 1 + z_{ij} &
\begin{array}{r}
\forall 0 < i < j \in V,\\
A_{ij} \leq B, A'_{ijl} > B, \forall 0 \leq s < k
\end{array}\label{eq:alba:monster_acyc2}
\end{align}
\section{A simple and elegant formulation}
\label{sec.algo}
Our formulation builds on the simple idea of the topological orderability of DAGs.
It is known that for any given DAG there exists at least one topological ordering of the nodes in
which all edges go from lower-order nodes towards higher-order nodes,
and a graph is topologically orderable if and only if it is a directed acyclic graph.
This property has been used in many different
domains that make use of DAGs, such as dynamic topological order maintenance algorithms for
streaming and evolving DAGs~\cite{pearce2007dynamic}; indeed, Albareda-Sambola et al.~
\cite{albareda2019reformulated} make use of this property in their formulation as well.
In our formulation, we return to the roots of the problem, and build our solution with the minimal
and simple constraint inequalities: Given a DAG $G$, we want to partition the vertex set into
disjoint subsets where the quotient graph is also a directed acyclic graph. Therefore, the resulting quotient graph should also have at least one topological ordering.
Then, we can ignore the many symmetrical solutions by focusing on one specific solution where the part indices
are assigned in one arbitrary valid topological ordering for the quotient graph, and enforce this information at the
\emph{adjacency matrix} of the quotient graph.
Mathematically, the formulation amounts to enforcing that the adjacency matrix of the quotient graph is an upper triangular matrix.
Our proposed formulation is as follows.\\
Objective: Minimize the edge cut where $z_{ij}$ is one for a cut edge and zero otherwise.
\begin{equation}
\label{eq:obj3}
\min \sum_{(i,j) \in E} c_{ij} \cdot z_{ij}
\end{equation}
Subject to:
\begin{align}
\shortintertext{Constraint for all nodes belonging to exactly one part:}
&\sum_{s = 0}^{k-1} x_{is} = 1 &\forall i \in V \label{eq:my:constr_onepart}\\
\shortintertext{Constraint for part weight balance:}
&\sum_{i \in V} w_i \cdot x_{is} \leq B &\forall 0 \leq s < k \label{eq:my:constr_partbal}\\
\shortintertext{Constraint for marking the cut edges as 1 if they are in different parts:}
&x_{js} - x_{is} \leq z_{ij} &\forall (i,j) \in E, 0 \leq s < k \label{eq:my:constr_samepart} \\
\shortintertext{Constraints for the decision variables in $y$ matrix, i.e., induced edges represented as an adjacency matrix of parts:}
&x_{is} + x_{jt} -1 \leq y_{st}
& \begin{array}{r}
\forall (i,j) \in E, i \in V_s, j \in V_t,\\
0 \leq s \neq t < k
\end{array}
\label{eq:my:constr_induced_edge}\\
\shortintertext{Constraint for topologically ordering the part indices and decreasing symmetry by
restricting the strictly lower triangle of the $y$ matrix:}
& y_{st} = 0 &\forall 0 \leq t < s < k
\label{eq:my:constr_triangular}\\
\shortintertext{Domains of decision variables:}
& x_{is} \in \{0,1\} &\forall i \in V, 0 \leq s < k \label{eq:my:dom_x}\\
& z_{ij} \in \{0,1\} &\forall (i,j) \in E, \{i,j\} \subset V \label{eq:my:dom_z}\\
& y_{st} \in \{0,1\} &\forall 0 \leq s,t < k \label{eq:my:dom_y}
\end{align}
Here, the value assignments of the $x_{is}$ and $z_{ij}$ variables shape the nonzero values of the $y$ matrix.
The acyclicity constraint is enforced by the restriction on the strictly lower triangle of the $y$
matrix. This restriction indirectly prevents any assignment of parts where $(i,j) \in E$, $i \in
V_s$, $j \in V_t$, and $t < s$, since $y_{st} \geq 1$ contradicts $y_{st} = 0$ in this case.
It is important to note that the $y$ variable in this formulation need not store the exact adjacency
matrix since there is no tight-bound constraint nor an objective function defined on it. Formally,
we can define the $Adj$, adjacency matrix for the part graph where:
\begin{align}
Adj_{st} = & \begin{cases}
1 & \exists (i,j) \in E, i \in V_s, j \in V_t \\
0 & otherwise
\end{cases}
\end{align}
Then, $y_{st} \geq Adj_{st}$.
Thus, nothing forces the upper-triangle entries of $y$ to zero when there is no corresponding edge
in the quotient graph.
However, the constraints enforce that all nonzero cells of the actual adjacency matrix of the quotient
graph are correctly assigned nonzeros in the $y$ matrix as well, and the critical section, i.e.,
the strictly lower triangle, is always set to zero.
This formulation saves us from the need for the additional variables of the MTZ formulation (e.g.,
$\pi$) and the constraints to eliminate symmetrical solutions used in Nossack et al.~
\cite{nossack2014mathematical}'s formulations, as well as from the complex formulations with
significant pre-processing steps used in Albareda-Sambola et al.~
\cite{albareda2019reformulated}'s formulations.
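The acyclicity criterion behind constraint (\ref{eq:my:constr_triangular}), namely that every edge must go from a part index to an equal or strictly higher one, can be validated on tiny instances by brute force. The sketch below is not the ILP itself but an exhaustive reference solver under that criterion, on a hypothetical unit-weight diamond DAG of our own choosing; restricting the search to upper-triangular part indexings loses no acyclic partitions, only duplicate labelings of the same partition:

```python
from itertools import product

def best_acyclic_partition(vertices, edges, w, k, B):
    """Exhaustive reference solver: minimize the edge cut over all balanced
    assignments whose quotient adjacency is upper triangular (hence acyclic)."""
    verts = sorted(vertices)
    best = None
    for assign in product(range(k), repeat=len(verts)):
        part = dict(zip(verts, assign))
        loads = [sum(w[v] for v in verts if part[v] == s) for s in range(k)]
        if max(loads) > B:
            continue  # violates the balance constraint
        if not all(part[u] <= part[v] for (u, v) in edges):
            continue  # an edge points from a higher part index to a lower one
        cut = sum(c for (u, v), c in edges.items() if part[u] != part[v])
        if best is None or cut < best[0]:
            best = (cut, part)
    return best

# Hypothetical unit-weight diamond DAG: a->b, a->c, b->d, c->d; k = 2, B = 2.
edges = {("a", "b"): 1, ("a", "c"): 1, ("b", "d"): 1, ("c", "d"): 1}
w = {v: 1 for v in "abcd"}
cut, part = best_acyclic_partition(set(w), edges, w, 2, 2)
print(cut)  # 2
```

For this instance, the balanced cyclic partition $\{a,d\}$/$\{b,c\}$ is rejected by the upper-triangular test, and the solver returns a minimum cut of 2.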
\section{Real-World Impact}
\label{sec:real_world_impact}
Since ILP formulations of the acyclic $k$-way partitioning problem are not feasible for large inputs (e.g., number of nodes $>$ 1000),
we focus on the scenarios where this deficiency can be mitigated.
Here, we give two examples of where this new formulation is useful. First, we design heuristic
algorithms that utilize the ILP formulation within the popular multilevel acyclic partitioning
paradigm. Then, we present a real-world example use case, i.e., partitioning for hierarchical
state-vector-based
quantum circuit simulations, where the ILP result serves as the lower bound/optimal baseline
for heuristic algorithms.
\subsection{Application within Multilevel Partitioning Paradigm}
\label{sub:application_within_multilevel_partitioning_paradigm}
Multilevel partitioning was first introduced in the 1990s~\cite{bui1993heuristic}, and is the de facto approach
for many graph partitioning problems~\cite{kaku:98:metis,Catalyurek13-UMPa,patohmanual,pell:08:scotch,Herrmann19-SISC,mops:17b}.
At a high level, a multilevel partitioning algorithm consists of three phases: coarsening, initial
partitioning, and uncoarsening/refinement. Coarsening is the application of a series of contractions
of the nodes of an input graph to create smaller but similar instances of the input.
The goal is to reduce the size of the problem to a more manageable size
while preserving the main features of the input.
The initial partitioning phase is where the coarsest graph is partitioned into the desired
number of parts. The initial partitioning can afford expensive algorithms
that would not be feasible for the original input
since the coarsening phase ideally reduces the problem size significantly.
The uncoarsening phase is essentially the reverse of the coarsening phase:
the small, coarse graph is uncontracted back to the original layer by layer, and at each step of
the uncoarsening, a refinement algorithm is applied in order to improve the objective function.
There are multiple ways to apply the presented simplified perspective on the acyclic
partitioning problem within the multilevel paradigm. We briefly mention three opportunities: We can
\begin{enumerate}
\item design coarsening and refinement algorithms that
maintain acyclicity and the node indexing which conforms to the topological ordering. This idea is
closely related to the application of dynamic topological order maintenance
algorithms~\cite{pearce2007dynamic} to the partitioning result; however,
it is not implemented in the recent acyclic graph partitioning algorithms~\cite{Herrmann19-SISC,mops:17b}.
Maintaining a topological order of partitions/contracted nodes, or, maintaining an
upper-triangular matrix of adjacency relation, during coarsening and refinement
allows us to reduce the calls to cycle detection procedures.
We would need to run it only for the cases where
the change in the graph creates an edge from a higher indexed node (or part) to a lower indexed node (or part).
Thus, effectively reduces the potential necessary cycle detection calls by half
since creation of any edge from a lower indexed entity to a higher indexed entity
as well as removal of any edge cannot create a cycle.
\item define an ILP-based initial partitioning
(either as an optimal partitioning or as a time-restricted heuristic),
which may be feasible only because coarsening can bring the graph down to a manageable size.
Normally, ILP solutions are not feasible for larger instances, because the number of
variables and constraints grows quickly with the instance size, making the model much harder to store and solve.
Since the coarsening phase is typically used to produce small representatives of the input (e.g., fewer than 500 nodes),
it may bring the execution time within a tolerable range.
\item use the ILP formulation as a refinement algorithm,
using the result of the initial partitioning as a warm start.
Many recent ILP solvers allow starting from a user-defined valid or partial solution,
and users have the option either to search for the optimal solution or to search for an improvement within a time limit.
As with any other refinement algorithm, one can project the partition of the coarser graph back to
the current, finer graph and use this projection as the initial solution for the ILP solver.
Although ILP solvers make no promises about the use of, or improvement upon, a given initial solution,
the result would be at least as good as that initial solution.
\end{enumerate}
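As a concrete illustration of the first opportunity, the sketch below guards a quotient-graph edge insertion with the topological-index filter: only a backward edge triggers a DFS cycle check, while forward edges and removals are provably safe. All names are illustrative; a full implementation would also update the maintained order after a safe backward insertion, e.g., with the dynamic algorithms of~\cite{pearce2007dynamic}.

```python
def has_cycle(adj):
    """Plain DFS cycle detection on an adjacency dict {node: set(successors)}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in adj}

    def dfs(u):
        color[u] = GRAY
        for v in adj.get(u, ()):
            if color.get(v, WHITE) == GRAY:   # back edge found
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[u] == WHITE and dfs(u) for u in adj)

def add_edge_keeps_acyclic(adj, topo, s, t):
    """Tentatively add quotient-graph edge (s, t); roll back and return
    False if it would create a cycle. With `topo` a valid topological
    index of the current parts, only backward edges need the DFS."""
    adj.setdefault(s, set()).add(t)
    adj.setdefault(t, set())
    if topo[s] <= topo[t]:      # forward edge: cannot create a cycle
        return True
    if has_cycle(adj):          # backward edge: must verify
        adj[s].discard(t)       # roll back
        return False
    return True
```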
Exploring the best ways to design coarsening and refinement phases with ILP solvers for acyclic partitioning is an interesting, ongoing research effort.
Algorithms toward similar goals are developed for the general undirected graph partitioning problem in~\cite{henzinger2020ilp}.
\subsection{Application in Quantum Circuit Simulation Problem}
\label{sub:application_in_quantum_circuit_simulation_problem}
Using ILP for partitioning is generally not efficient, since the problem sizes can get quite large
and ILP solvers then become unfavorable in terms of runtime.
Quantum circuit simulation is a recent problem area
where a quantum computation is represented as
a directed acyclic graph and simulated using classical computers.
Currently, the largest quantum computers can run circuits with up to 127 qubits (quantum
bits); however, many simulators cannot handle even half of this number~\cite{Fang22-ARXIV}.
As the number of qubits is still low, the simulated circuits are also quite small compared to
the inputs of other use cases (e.g., simulations and scheduling for classical
computation~\cite{Ozkaya19-IPDPS,Ozkaya19-PPAM}). This makes ILP-solver-based
partitioning a feasible approach for the partitioning problems arising in quantum circuit simulation.
We develop an ILP formulation for the specific partitioning problem defined in HiSVSIM~\cite{Fang22-ARXIV}.
Here, the problem is to partition a quantum circuit DAG acyclically into the
minimum possible number of parts,
such that each resulting part contains no more than a given number of unique qubits.
The added constraints count and limit the unique qubits per part, given a
maximum allowed number of qubits ($L_m$).
Finally, the objective is to minimize not only the edge cut but also the number of parts.
The input dataset for HiSVSIM~\cite{Fang22-ARXIV} consists of 13 quantum circuits (9 unique quantum circuits)
whose numbers of qubits are between 30 and 37.
Out of the 13 circuits,
10 contain fewer than 500 quantum gates,
and 6 contain fewer than 200 gates.
Thus, the problem sizes are small enough to try ILP-solver-based approaches.
The input graphs for this problem have the following structure.
All quantum gates are represented as nodes,
and all qubits (operands of quantum computation gates) are represented as edges.
The graph contains \emph{entry} and \emph{exit} gates for each qubit.
Each qubit is an in-edge to and an out-edge from a single node at any time,
so qubits can be traced as line subgraphs from their respective entry nodes to their respective exit nodes.
Thus, given a quantum circuit DAG, the qubits involved in each node are known,
and given a subset of nodes, it is trivial to identify the unique qubits involved in that subset.
We define $Q$ as the set of qubits and a binary matrix $NQ$ that stores the node (quantum gate)--qubit dependence;
the values of $NQ$ are pre-computed in linear time with respect to the number of edges in the DAG.
The cell $NQ_{iq}$ stores 1 if qubit $q$ is required for the computation of node $i$.
Then, we define a binary decision variable matrix $PQ$ that stores the qubit dependence of parts, i.e.,
the cell $PQ_{sq}$ stores 1 if qubit $q$ is required by part $s$.
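The linear-time precomputation of $NQ$ can be sketched as a single pass over the DAG edges. The edge format `(u, v, q)`, meaning qubit `q` flows from gate `u` to gate `v`, and the function names are assumptions for illustration; sets stand in for the binary matrix rows.

```python
def build_nq(num_nodes, edges):
    """One pass over the edges marks every (node, qubit) dependence.
    edges: iterable of (u, v, q). Returns nq with nq[i] = qubits of node i."""
    nq = [set() for _ in range(num_nodes)]
    for u, v, q in edges:        # O(|E|): each edge touches two gates
        nq[u].add(q)
        nq[v].add(q)
    return nq

def part_qubits(nq, parts, k):
    """Given a node -> part map, the unique qubits of each part (the PQ
    rows) follow by a union over member nodes."""
    pq = [set() for _ in range(k)]
    for i, s in parts.items():
        pq[s] |= nq[i]
    return pq
```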
Then, the ILP formulation for this problem variant becomes:
Objective: Minimize the edge cut where $z_{ij}$ is one for a cut edge and zero otherwise.
\begin{equation}
\label{eq:obj4}
\min \sum_{(i,j) \in E} c_{ij} \cdot z_{ij}
\end{equation}
Subject to:
\begin{align}
\shortintertext{Constraint for all nodes belonging to exactly one part:}
&\sum_{s = 0}^{k-1} x_{is} = 1 &\forall i \in V \label{eq:my:q:constr_onepart}\\
\shortintertext{Constraint for part weight balance:}
&\sum_{i \in V} w_i \cdot x_{is} \leq B &\forall 0 \leq s < k \label{eq:my:q:constr_partbal}\\
\shortintertext{Constraint for marking the cut edges as 1 if they are in different parts:}
&x_{js} - x_{is} \leq z_{ij} &\forall (i,j) \in E, 0 \leq s < k \label{eq:my:q:constr_samepart} \\
\shortintertext{Constraints for the decision variables in $y$ matrix, i.e., induced edges represented as an adjacency matrix of parts:}
& x_{is} + x_{jt} -1 \leq y_{st}
& \begin{array}{r}
\forall (i,j) \in E, i \in V_s, j \in V_t,\\
0 \leq s \neq t < k
\end{array}
\label{eq:my:q:constr_induced_edge}\\
\shortintertext{Constraint for topologically ordering the part indices and reducing symmetry by
restricting the strictly lower triangle of the $y$ matrix:}
& y_{st} = 0 &\forall 0 \leq t < s < k
\label{eq:my:q:constr_triangular}\\
\shortintertext{Constraints for limiting the number of qubits per part by $L_m$:}
& PQ_{sq} \geq x_{is} \cdot NQ_{iq} &\forall i \in V, 0 \leq s < k, q \in Q \label{eq:my:q:quantum1}\\
&\sum_{q \in Q} PQ_{sq} \leq L_m &\forall 0 \leq s < k \label{eq:my:q:quantum2}\\
\shortintertext{Domains of decision variables:}
& x_{is} \in \{0,1\} &\forall i \in V, 0 \leq s < k \label{eq:my:q:dom_x}\\
& z_{ij} \in \{0,1\} &\forall (i,j) \in E, \{i,j\} \subset V \label{eq:my:q:dom_z}\\
& y_{st} \in \{0,1\} &\forall 0 \leq s,t < k \label{eq:my:q:dom_y} \\
& PQ_{sq} \in \{0,1\} &\forall 0 \leq s < k, q \in Q \label{eq:my:q:dom_quantum}
\end{align}
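On tiny instances, the formulation above can be sanity-checked against brute force: enumerate every assignment, keep those satisfying the balance, acyclicity (edges only toward equal or higher part indices, which matches the upper-triangular $y$ constraint since every acyclic quotient graph admits such a relabeling), and per-part qubit-limit constraints, and take the minimum cut. This is a verification aid written for this note, not part of the proposed method; names mirror the ILP symbols.

```python
from itertools import product

def best_partition(n, k, edges, weights, B, nq, L_m):
    """Brute-force reference. edges: list of (i, j, c_ij);
    nq[i]: set of qubits of node i. Returns (min_cut, assignment) or None."""
    best = None
    for assign in product(range(k), repeat=n):
        # part weight balance (bound B per part)
        loads = [0] * k
        for i, s in enumerate(assign):
            loads[s] += weights[i]
        if any(l > B for l in loads):
            continue
        # acyclicity: every edge goes to an equal or higher part index
        if any(assign[i] > assign[j] for i, j, _ in edges):
            continue
        # qubit limit per part (the PQ rows bounded by L_m)
        pq = [set() for _ in range(k)]
        for i, s in enumerate(assign):
            pq[s] |= nq[i]
        if any(len(q) > L_m for q in pq):
            continue
        cut = sum(c for i, j, c in edges if assign[i] != assign[j])
        if best is None or cut < best[0]:
            best = (cut, assign)
    return best
```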
It is important to note that the objective in this problem is to minimize first the number of parts and
then the edge cut. There are two ways to achieve this. First, we can start from a small $k$
value and increase it one by one until a feasible solution is found
(or binary search for the smallest $k$ value that admits a feasible solution).
Second, we can define an additional binary variable for each part that is 1 if
the part contains at least one node, multiply this variable by a sufficiently large number, and
use it as an additive component in the objective function.
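The first strategy can be sketched as follows, with `solve` standing in for a call to the ILP model at a fixed $k$ (returning a solution or `None`); the name and bounds are placeholders, not a real solver API. Feasibility is monotone in $k$ here because the balance constraint permits empty parts, so binary search is valid once any feasible $k$ is known.

```python
def min_parts(solve, k_lo=1, k_hi=64):
    """Find the smallest k in [k_lo, k_hi] for which solve(k) is feasible,
    by doubling to find some feasible k, then binary searching below it."""
    k = k_lo
    while k <= k_hi and solve(k) is None:   # doubling phase
        k *= 2
    if k > k_hi:
        return None                          # nothing feasible in range
    lo, hi = max(k_lo, k // 2), k            # smallest feasible k is in (k/2, k]
    while lo < hi:
        mid = (lo + hi) // 2
        if solve(mid) is not None:
            hi = mid
        else:
            lo = mid + 1
    return hi
```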
The proposed ILP formulation was successfully used to find the optimal partitioning solution for
the acyclic quantum circuit partitioning problem.
\section{Discussion and Conclusion}
\label{sec.conc}
Nossack et al.~\cite{nope:14} present one of the few analyses of
mathematical formulations for the acyclic partitioning problem.
Albareda-Sambola et al.~\cite{albareda2019reformulated} introduce
a pre-processing phase and many carefully deduced valid inequality constraints
to speed up the computation of an optimal solution when using linear solvers.
In this work, we present a formulation that can be used together with/in addition to the previous
formulations.
We would like to note that, although the main goal of this work is
presenting a simple and elegant formulation,
our experiments on a set of DAGs using the latest version of the
Gurobi Optimizer available today (v9.5.0)~\cite{gurobi} showed similar runtime
performance for our acyclicity constraints compared to the formulations of Albareda-Sambola et al.~\cite{albareda2019reformulated}
in the ILP solver phase; moreover, the Gurobi Optimizer eliminates many
variables and constraints (rows and columns) as redundant during its presolve phase for all
three formulations.
This indicates that recent advances in ILP solver software can make up for the need to
introduce additional valid inequalities as constraints to limit the search space.
For years, it has been a significant tradeoff decision whether, and how many, new decision
variables and constraints to include (which increases the model complexity but may reduce
the solution search space) to strike the right balance for ILP runtime performance.
To conclude, we present a simple and elegant formulation for the acyclic DAG partitioning problem
that eliminates many additional variables, redundant constraints, and symmetrical solutions, and
we show two real-world scenarios where an elegant ILP-based formulation can be utilized.
Our formulation performs similarly to more complex formulations.
Finally, the simplicity of the formulation may help many others to easily define, model,
implement, and experiment with the balanced acyclic DAG partitioning problem and its formulations.
% Source of the preceding section: ``A Simple and Elegant Mathematical Formulation for the
% Acyclic DAG Partitioning Problem,'' arXiv:2207.13638, https://arxiv.org/abs/2207.13638,
% 2022 (cs.DS; cs.DM).
% Source of the following section: ``Signal Recovery in Uncorrelated and Correlated
% Dictionaries Using Orthogonal Least Squares,'' arXiv:1607.08712,
% https://arxiv.org/abs/1607.08712.
\begin{abstract}
Though the method of least squares has long been used in solving signal processing problems, in the recent field of sparse recovery from compressed measurements this method has not been given much attention. In this paper we show that a method in the least squares family, known in the literature as Orthogonal Least Squares (OLS), adapted for compressed recovery problems, has competitive recovery performance and computational complexity, which makes it a suitable alternative to popular greedy methods like Orthogonal Matching Pursuit (OMP). We show that, with a slight modification, OLS can exactly recover a $K$-sparse signal embedded in an $N$-dimensional space ($K\ll N$) from $M=\mathcal{O}(K\log (N/K))$ measurements with Gaussian dictionaries. We also show that OLS can be implemented so that it requires $\mathcal{O}(KMN)$ floating point operations, similar to OMP. The performance of OLS is also studied with correlated sensing matrices, in which algorithms like OMP do not exhibit good recovery performance. We study the recovery performance of OLS in a specific dictionary called the \emph{generalized hybrid dictionary}, which is shown to be a correlated dictionary, and show numerically that OLS is far superior to OMP in these kinds of dictionaries in terms of recovery performance. Finally, we provide analytical justifications that corroborate the findings in the numerical illustrations.
\end{abstract}
\section{Introduction}
\lettrine[findent=2pt]{\textbf{C}}{}OMPRESSED SENSING (CS) \cite{eldar2012compressed} has led to a new paradigm in signal processing. Compressed sensing provides a novel way of acquiring a sparse signal $\bm{x} \in \mathbb{R}^{N}$ with $\|\bm{x}\|_{0} \le K$ from a number of linear measurements $M$ that is much smaller than the original signal length $N$. The linear measurements $\bm{y} \in \mathbb{R}^{M}$ are acquired using a measurement matrix $\mathbf{\Phi}$ as
\begin{equation}
\bvec{y}=\mathbf{\Phi x}
\end{equation}
The original signal $\bm{x}$ is recovered using a reconstruction algorithm. Compressed sensing research is mainly concentrated around the following questions:
\begin{itemize}
\item What class of measurement matrices $\mathbf{\Phi}$ can be used to acquire a compressed signal?
\item Given the measurements $\bm{y}$ and $\mathbf{\Phi}$, which algorithms can be used to recover the original signal $\bm{x}\in \mathbb{R}^{N}$?
\item How many measurements are required to reliably recover the signal?
\end{itemize}
From the earlier works of Candes and Tao~\cite{candes2006robust} and Rudelson and Vershynin~\cite{rudelson2008sparse}, it has been established that it is possible to reconstruct every $K$-sparse signal from $M$ Gaussian measurements with probability exceeding $1-e^{-cM}$, provided $M \ge CK\ln(\frac{N}{K})$. The recovery is possible through the following convex minimization program:
\begin{equation}
\label{convex_program}
\min \|\bm{x}\|_{1} \hspace{0.2cm} \textnormal{s.t.} \hspace{0.2cm}
\bm{y}=\mathbf{\Phi}\bm{x}
\end{equation}
There are mainly two broad classes of algorithms discussed in the literature: convex relaxation algorithms such as Basis Pursuit \cite{mallat_matching_1993}, and iterative greedy algorithms~\cite{tropp2004greed}. Basis Pursuit~\eqref{convex_program} is an example of a convex programming approach.
However, convex programming algorithms are computationally very expensive; for instance, Basis Pursuit (BP) requires a running time of the order of $\mathcal{O}(N^2 M^{3/2})$. As a consequence, much study has been devoted to alternative algorithms based on greedy approaches. Mallat~\cite{mallat_matching_1993} was the first to propose Matching Pursuit, and Pati et al.~\cite{pati-etal-1993-omp} proposed an extension known as Orthogonal Matching Pursuit (OMP)~\cite{davis1997adaptive}, one of the first of these greedy algorithms. Another early greedy algorithm is Orthogonal Least Squares (OLS)~\cite{chen1989orthogonal},\cite{natarajan1995sparse}. These algorithms greedily select indices at each step and append them to the already constructed support, creating a successively increasing support onto which they project to obtain the reconstructed signal. The only difference between the two algorithms is the method of selecting an index in the identification step. Tropp~\cite{tropp2007signal} was the first to show that OMP requires $\mathcal{O}(K\ln(N))$ measurements, which is competitive with, though still higher than, the number of measurements required by BP. However, similar effort does not seem to have been spent on the analysis of OLS. Also, the structure of OLS apparently makes it computationally more expensive, which is why OLS has not gained as much popularity as OMP in the literature.
Quite recently, Soussen et al.~\cite{soussen2013joint} have demonstrated the superior recovery performance of OLS in coherent dictionaries through numerical simulations. Coherent dictionaries are dictionaries that have high mutual coherence~\cite{tropp2004greed}; as a result, their columns are highly correlated. Soussen et al. studied OMP and OLS in these kinds of dictionaries and gave theoretical conditions for their success. However, those conditions do not seem to explain the observed superiority of OLS in recovery performance.
\subsection{Main Objectives} The goal of this paper is to analyze and discuss properties of OLS and to justify the superiority it shows relative to OMP.
\begin{itemize}
\item In the first part of the paper, we establish recovery guarantees for OLS when the signal is measured through a measurement matrix whose entries are i.i.d. Gaussian. Specifically, we show that a slight modification to OLS allows it to recover a $K$-sparse unknown vector with $\mathcal{O}(K\ln(N/K))$ measurements.
\item Though OLS seems to be a computationally heavy algorithm, we show that its running time complexity can be made comparable to that of OMP, i.e., $\mathcal{O}(KMN)$.
\item We empirically show that, with a correlated measurement matrix, OLS is able to successfully recover the true support set while OMP is not.
\item Apart from empirical results, we provide analytical arguments that explain why OLS can outperform OMP in correlated dictionaries.
\end{itemize}
\section{Description of OLS for CS}
\subsection{Notation}
\label{sec:notation}
The following notation will be used throughout the paper.
\begin{description}
\item[$\opnorm{\cdot}_p$:] The $l^p$ norm of a vector $\bvec{v}$, i.e., $\opnorm{\bvec{v}}_p=\left(\sum_{i=1}^n |v_i|^p\right)^{1/p}$.
\item[$\inprod{\cdot}{\cdot}$:] The inner product function defined on $\mathbb{R}^N$, defined as $\inprod{\bvec{u}}{\bvec{v}}=\sum_{i=1}^N u_iv_i,\ \forall \bvec{u,\ v}\in \mathbb{R}^N$.
\item[$\bvec{\Phi}$:] The real measurement matrix with $M$ rows and $N$ columns with $M<N$.
\item[$\bm{\phi}_i$:] The $i$th column of $\bvec{\Phi}$, for $i=1,2,\cdots,\ N$. We assume that $\norm{\bm{\phi}_i}=1,\ \forall i$.
\item[$K$:] The sparsity of unknown signal. It is assumed to be \emph{exactly} known, i.e. $\opnorm{\bvec{x}}_0=K$.
\item[$\bvec{x}_S$:] The vector $\bvec{x}$ restricted to the subset of indices $S$, i.e., ${x}_{S,i}=x_i\cdot I(i\in S)$, where $I(\cdot)$ is the set indicator function.
\item[$\mathcal{H}$:] The set of all the indices $\{1,2,\cdots,\ N\}$.
\item[$T$:] The unknown support set of the unknown vector $\bvec{x}$.
\item[$\bvec{\Phi}_S$:] The submatrix of $\bvec{\Phi}$ formed with the columns restricted to index set $S$.
\item[$\bvec{\Phi}_{S}^\dagger:$] The Moore-Penrose pseudo-inverse of $\bvec{\Phi}_{S}$, which exists when $\bvec{\Phi}_S$ has column rank $|S|$. It is defined as $(\bvec{\Phi}_{S}^T\bvec{\Phi}_{S})^{-1}\bvec{\Phi}_{S}^T$.
\item[$\proj{S}$:] The projection operator on $span(\bvec{\Phi}_{S})$. It is defined as $\bvec{\Phi}_{S}\bvec{\Phi}_{S}^\dagger$.
\item[$\dualproj{S}$:] The projection operator on the orthogonal complement of $span(\bvec{\Phi}_{S})$. It is defined as $\bvec{I}-\proj{S}$.
\end{description}
\subsection{OLS algorithm}
\label{sec:ols-algo}
\begin{table}[ht!]
\begin{subfigure}{0.5\textwidth}
\caption{OLS \textsc{Algorithm}}
\centering
\begin{tabular}{p{8.5cm}}
\hrulefill
\begin{description}
\item[\textbf{Input:}]\ measurement vector $\bvec{y}\in \mathbb{R}^M$, sensing matrix $\bvec{\Phi}\in \mathbb{R}^{M\times N}$, sparsity level $K$
\item[\textbf{Initialize:}]$\quad$ counter $k=0$, residue $\bvec{r}^0=\bvec{y}$, estimated support set, $T^0=\emptyset$, total set $\mathcal{H}=\{1,2,\cdots,\ N\}$, tolerance $\epsilon>0$
\item[\textbf{While}]($\norm{\bvec{r}^k}\ge \epsilon\ \mbox{and}\ \ k<K$)
\begin{description}
\item[]\ $k=k+1$
\item[]\ {\emph{Identify:}} $\displaystyle h^k=\argmin_{i\in {\mathcal{H}}}\|\dualproj{T^{k-1}\cup \{i\}}\bvec{y}\|_2^2$
\item[]\ {\emph{Augment:}} $T^k=T^{k-1}\cup h^k$
\item[]\ {\emph{Estimate:}} $\displaystyle \bvec{x}^k=\argmin_{\bvec{u}:\bvec{u}\in \mathbb{R}^n,\ supp(\bvec{u})= T^k}\|\bvec{y}-\bvec{\Phi}\bvec{u}\|_2$
\item[]\ \emph{Update:} $\bvec{r}^k=\bvec{y}-\bvec{\Phi}\bvec{x}^k$
\end{description}
\item[\textbf{End While}]
\end{description}
\hrulefill
\begin{description}
\item[\textbf{Output:}]$\quad$ estimated support set $\displaystyle \hat{T}=\argmin_{S:|S|=K}\|\bvec{x}^k-\bvec{x}^k_S\|_2$ and $K$-sparse signal $\hat{\bvec{x}}$ satisfying $\hat{\bvec{x}}_{\hat{T}}=\bvec{x}^k_{\hat{T}},\ \hat{\bvec{x}}_{\mathcal{H}\setminus\hat{T}}=\mathbf{0}$
\end{description}
\hrulefill
\label{tab:OLS}
\end{tabular}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\caption{OMP \textsc{Algorithm}}
\centering
\begin{tabular}{p{8.5cm}}
\hrulefill
\begin{description}
\item[\textbf{Input:}]\ measurement vector $\bvec{y}\in \mathbb{R}^M$, sensing matrix $\bvec{\Phi}\in \mathbb{R}^{M\times N}$, sparsity level $K$
\item[\textbf{Initialize:}]$\quad$ counter $k=0$, residue $\bvec{r}^0=\bvec{y}$, estimated support set, $T^0=\emptyset$, total set $\mathcal{H}=\{1,2,\cdots,\ N\}$, tolerance $\epsilon>0$
\item[\textbf{While}]($\norm{\bvec{r}^k}\ge \epsilon\ \mbox{and}\ \ k<K$)
\begin{description}
\item[]\ $k=k+1$
\item[]\ {\emph{Identify:}} $\displaystyle h^k=\argmax_{i\in {\mathcal{H}}}\abs{\inprod{\bm{\phi}_i}{\bvec{r}^{k-1}}}$
\item[]\ {\emph{Augment:}} $T^k=T^{k-1}\cup h^k$
\item[]\ {\emph{Estimate:}} $\displaystyle \bvec{x}^k=\argmin_{\bvec{u}:\bvec{u}\in \mathbb{R}^n,\ supp(\bvec{u})= T^k}\|\bvec{y}-\bvec{\Phi}\bvec{u}\|_2$
\item[]\ \emph{Update:} $\bvec{r}^k=\bvec{y}-\bvec{\Phi}\bvec{x}^k$
\end{description}
\item[\textbf{End While}]
\end{description}
\hrulefill
\begin{description}
\item[\textbf{Output:}]$\quad$ estimated support set $\displaystyle \hat{T}=\argmin_{S:|S|=K}\|\bvec{x}^k-\bvec{x}^k_S\|_2$ and $K$-sparse signal $\hat{\bvec{x}}$ satisfying $\hat{\bvec{x}}_{\hat{T}}=\bvec{x}^k_{\hat{T}},\ \hat{\bvec{x}}_{\mathcal{H}\setminus\hat{T}}=\mathbf{0}$
\end{description}
\hrulefill
\label{tab:OMP}
\end{tabular}
\end{subfigure}
\end{table}
In Table~\ref{tab:OLS}, descriptions of the OLS and OMP algorithms are presented. It can be observed from these descriptions that OLS functionally differs from OMP only in the atom identification step. At this step, OMP chooses a new atom by evaluating the absolute correlations of the atoms of the dictionary with the residual vector from the last step, and finding the index corresponding to the maximum. OLS, on the other hand, creates a list of the residual norms that would be obtained if each candidate atom of the dictionary were added to the support, and selects the index whose inclusion results in the least residual norm. This procedure, however, seems formidable to work with, because it requires evaluating the orthogonal projection error with respect to a different subspace for each candidate, as indicated by the term $\norm{\dualproj{T^{k-1}\cup \{i\}}\bvec{y}}$. Fortunately, the following lemma shows that this selection procedure has an equivalent form that is much easier to work with.
\begin{lemma}
\label{lem:OLS-selection-step}
Let $k\ge 1$. Let $T^{k-1}$ be the support set constructed by OLS after the $(k-1)^{\mathrm{th}}$ iteration, and let $\bvec{r}^{k-1}$ be the corresponding residual. Then, the index $h^k$ chosen at the $k^{\mathrm{th}}$ iteration satisfies $$h^k=\argmax_{i\in \mathcal{H}\setminus T^{k-1}}\frac{\abs{\inprod{\bm{\phi}_i}{\bvec{r}^{k-1}}}}{\norm{\dualproj{T^{k-1}}\bm{\phi}_i}}.$$
\end{lemma}
\begin{proof}
First, observe that, for any $i\in T^{k-1}$, $T^{k-1}\cup \{i\}=T^{k-1}$, so that $\norm{\dualproj{T^{k-1}\cup \{i\}}\bvec{y}}=\norm{\bvec{r}^{k-1}}$; hence the minimization in the identification step is effectively over $i\in \mathcal{H}\setminus T^{k-1}$. Now, note that, for any $i\in \mathcal{H}\setminus T^{k-1}$,
\begin{align}
{\proj{T^{k-1}\cup \{i\}}\bvec{y}} & = \proj{T^{k-1}}\bvec{y}+\frac{\inprod{\bm{\phi}_i}{\bvec{r}^{k-1}}}{\norm{\dualproj{T^{k-1}}\bvec{\phi}_i}^2}{\dualproj{T^{k-1}}\bvec{\phi}_i}\nonumber\\
\implies {\dualproj{T^{k-1}\cup \{i\}}\bvec{y}} & =\bvec{r}^{k-1}-\frac{\inprod{\bm{\phi}_i}{\bvec{r}^{k-1}}}{\norm{\dualproj{T^{k-1}}\bvec{\phi}_i}^2}{\dualproj{T^{k-1}}\bvec{\phi}_i}\nonumber\\
\implies \norm{\dualproj{T^{k-1}\cup \{i\}}\bvec{y}}^2 & =\norm{\bvec{r}^{k-1}}^2-\frac{\abs{\inprod{\bm{\phi}_i}{\bvec{r}^{k-1}}}^2}{\norm{\dualproj{T^{k-1}}\bvec{\phi}_i}^2}
\label{eq:OLS-selection-step-equivalence}
\end{align}
Since $i\in \mathcal{H}\setminus T^{k-1}$ is chosen to minimize the L.H.S. of Eq.~\eqref{eq:OLS-selection-step-equivalence}, the R.H.S. of Eq.~\eqref{eq:OLS-selection-step-equivalence} shows that this is equivalent to maximizing $\frac{\abs{\inprod{\bm{\phi}_i}{\bvec{r}^{k-1}}}}{\norm{\dualproj{T^{k-1}}\bm{\phi}_i}}$, which implies the desired result.
\end{proof}
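The equivalence in Lemma~\ref{lem:OLS-selection-step} is easy to check numerically. The following NumPy sketch (all sizes and the current support chosen arbitrarily, and NumPy availability assumed) evaluates both selection criteria on random data; they should pick the same index.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 20, 40
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # uncorrelated dictionary
y = rng.standard_normal(M)

T = [0, 1, 2]                                    # current support T^{k-1}
A = Phi[:, T]
P = A @ np.linalg.pinv(A)                        # projector onto span(Phi_T)
r = y - P @ y                                    # residual r^{k-1}

rest = [i for i in range(N) if i not in T]
res_norms, ratios = [], []
for i in rest:
    Ai = Phi[:, T + [i]]
    Pi = Ai @ np.linalg.pinv(Ai)
    res_norms.append(np.linalg.norm(y - Pi @ y))           # original OLS criterion
    q = Phi[:, i] - P @ Phi[:, i]                          # P^perp phi_i
    ratios.append(abs(Phi[:, i] @ r) / np.linalg.norm(q))  # Lemma 1 criterion

i_min = rest[int(np.argmin(res_norms))]   # argmin of projected residual norm
i_max = rest[int(np.argmax(ratios))]      # argmax of the correlation ratio
```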
\subsection{Implementation and Time Complexity of the Algorithm}
\label{sec:OLS-runtime-complexity}
Though the atom selection criterion leveraging the result of Lemma~\ref{lem:OLS-selection-step} is simpler than the original atom selection criterion of OLS described in Table~\ref{tab:OLS}, it still seems too expensive to implement efficiently because of the orthogonal projection operators involved. However, by exploiting a QR decomposition of the projected matrices, the OLS algorithm can be implemented with a time complexity of $\mathcal{O}(KMN)$, the same as that of OMP. This can be done by allowing twice as much space as OMP, maintaining the space complexity of $\mathcal{O}(MN)$. The algorithm mainly consists of two steps:
\begin{itemize}
\item Identification of the columns of the support set.
\item Computation of the signal vector $\bm{x}$
\end{itemize}
Throughout the algorithm, at any iteration $k$, we maintain a matrix consisting of the columns $\{\dualproj{T^k}\bm{\phi_{i}}\}_{i=1,2,\ldots,N}$. If a column $\bm{\phi_{j}}$ is chosen at iteration $k+1$, we can update the columns as
\begin{equation}
\dualproj{T^{k+1}}\bm{\phi}_{i}=\dualproj{T^{k}}\bm{\phi}_{i}-\frac{\inprod{\dualproj{T^{k}}\bm{\phi}_i}{\dualproj{T^{k}}\bm{\phi}_j}}{\norm{\dualproj{T^k}\bm{\phi}_j}^2}\dualproj{T^{k}}\bm{\phi}_j
\end{equation}
Overall, for $N$ columns, the above step does not take more than $\mathcal{O}(MN)$ floating point operations at any iteration. The rest of the steps are similar to those of OMP. Since the algorithm runs for $K$ iterations, the time complexity of the identification step is $\mathcal{O}(KMN)$.\\
For the second step, we maintain a $QR$ decomposition of the selected columns. The signal vector $\bm{x}$ can then be found in no more than $\mathcal{O}(K^{2}M)$ operations.\\
Thus, the overall time complexity is $\mathcal{O}(KMN)$.
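A minimal NumPy sketch of this implementation maintains the projected columns and applies the one-step update above at each selection; it is an illustrative rendering under the stated complexity argument, not the authors' code (the final coefficient solve uses a library least-squares call in place of the maintained QR factors).

```python
import numpy as np

def ols(Phi, y, K):
    """OLS with the column-update recurrence; O(K M N) total work."""
    M, N = Phi.shape
    C = Phi.astype(float).copy()   # C[:, i] holds P^perp_{T^k} phi_i
    r = y.astype(float).copy()     # current residual
    T = []
    for _ in range(K):
        norms = np.linalg.norm(C, axis=0)
        norms[T] = np.inf                     # never re-select chosen atoms
        scores = np.abs(C.T @ r) / norms      # Lemma 1 selection criterion
        j = int(np.argmax(scores))
        T.append(j)
        cj = C[:, j] / np.linalg.norm(C[:, j])
        C -= np.outer(cj, cj @ C)             # one-step update of all columns
        r -= cj * (cj @ r)                    # residual update
    x = np.zeros(N)
    x[T] = np.linalg.lstsq(Phi[:, T], y, rcond=None)[0]
    return x, sorted(T)
```

With an orthonormal dictionary the selection reduces to picking the largest coefficients, which makes the sketch easy to check end to end.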
\section{Random Measurement Ensembles}
\label{sec:random-measurement-ensemble}
In this section, we describe the types of matrix ensembles that will be used throughout the paper to carry out the analysis of OLS. Two types of matrix ensembles are considered, referred to as ``uncorrelated'' and ``correlated'' dictionaries. These dictionaries have the following key properties:
\paragraph{\textbf{Uncorrelated dictionary}}
This kind of matrix ensemble represents incoherent dictionaries (referred to as ``uncorrelated'' dictionaries in the sequel), i.e. dictionaries which constitute ``almost'' orthonormal matrix ensembles. Matrices of this type are assumed to have the following properties:
\begin{itemize}
\item The entries of the matrix are i.i.d. Gaussian, i.e., $\sim\mathcal{N}(0,1/M)$.
\item The columns of the matrix are stochastically independent.
\item The columns are normalized in expectation, i.e., $\mathbb{E}\| \bm{\phi}_j \|_{2}^{2}=1,\ j=1,2,\ldots,N$.
\end{itemize}
\paragraph{\textbf{Correlated dictionary:}}
The main property that distinguishes a correlated dictionary from an uncorrelated one is the relatively higher mutual inner product between the columns of the matrix. One of the key measures of the correlatedness of a matrix is the \emph{worst-case coherence}, defined as \begin{align*}
\mu:=\max_{i\ne j}\left\{\frac{\abs{\inprod{\bm{\phi}_i}{\bm{\phi}_j}}}{\norm{\bm{\phi}_i}\norm{\bm{\phi}_j}}\bigg|1\le i,j\le N\right\}
\end{align*}
We define correlated dictionaries as those with high worst-case coherence. For the purpose of demonstration, a particular type of correlated dictionary is considered in this paper, which, in the sequel, will be referred to as the \emph{generalized hybrid dictionary}.
We define the generalized hybrid dictionary model as below:
\begin{definition}
\label{defn:hybrid}
A generalized hybrid dictionary of order $r$, is defined as the collection of normalized vectors $\{\bm{\phi}_{i}/\norm{\bm{\phi}_i}\}_{i=1}^N,\ \bm{\phi}_i\in\mathbb{R}^M $ such that \begin{align*}
\bm{\phi}_i=\bvec{n}_i+\sum_{j=1}^r u_{ij} \bvec{a}_j
\end{align*} where $\{\bvec{n}_i\}_{i=1}^N$ are i.i.d.\ $\sim\mathcal{N}_{M}(\bvec{0},M^{-1}I_M)$, $\{u_{ij}\}_{\scriptsize{1\le i\le N,\ 1\le j\le r}}$ are i.i.d.\ $\sim \mathcal{U}[0,T)$ with $T>0$, $u_{ij}\independent \bvec{n}_k\ \forall i,j,k$, and $\{\bvec{a}_i\}_{i=1}^r,\ \bvec{a}_i\in\mathbb{R}^M$, is an orthonormal set spanning an $r$-dimensional subspace of $\mathbb{R}^M$.
\end{definition}
Note that this model describes an approximately rank-$r$ matrix when $MT\gg 1$. Denote $W_r=\spn{\bvec{a}_1,\cdots,\ \bvec{a}_r}$. Then, the model can be seen to describe each column $\bm{\phi}_i$ as a random vector concentrated around a vector that lies inside the positive orthant of the space $W_r$, with random components, lying within the $r$-hypercube of edge length $T$, embedded in $W_r$. Specific properties of this dictionary are discussed in greater detail in Section~\ref{sec:ols-hybrid-dictionary}.
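A quick NumPy illustration of the definition (all sizes chosen arbitrarily, NumPy availability assumed): columns of a generalized hybrid dictionary with $r=2$ concentrate around the positive orthant of $W_r$, so its worst-case coherence $\mu$ comes out far higher than that of an i.i.d. Gaussian dictionary of the same dimensions.

```python
import numpy as np

def coherence(Phi):
    """Worst-case coherence mu: max absolute inner product of distinct
    normalized columns."""
    G = Phi / np.linalg.norm(Phi, axis=0)
    C = np.abs(G.T @ G)
    np.fill_diagonal(C, 0.0)
    return C.max()

rng = np.random.default_rng(0)
M, N, r, T = 64, 128, 2, 10.0
A = np.linalg.qr(rng.standard_normal((M, r)))[0]   # orthonormal a_1, ..., a_r
U = rng.uniform(0.0, T, size=(N, r))               # u_ij ~ U[0, T)
noise = rng.standard_normal((M, N)) / np.sqrt(M)   # n_i ~ N(0, I/M)
hybrid = noise + A @ U.T                           # phi_i = n_i + sum_j u_ij a_j

gaussian = rng.standard_normal((M, N)) / np.sqrt(M)
mu_hybrid, mu_gaussian = coherence(hybrid), coherence(gaussian)
```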
\section{Properties of Orthonormally Projected Uncorrelated Dictionaries}
\label{sec:projected-uncorrelated-dictionary-properties}
A crucial property of OLS is that it can be thought of as acting like OMP at each step, but with a matrix whose columns are projected onto a subspace. This property was exhibited by Lemma~\ref{lem:OLS-selection-step}, where it was found that the $(k+1)^{\mathrm{th}}$ step of OLS can be seen as the $k^{\mathrm{th}}$ step of OMP applied to the matrix ensemble consisting of the columns $\{\bvec{c}_i^k\}_{i}$, where \begin{align*}
\bvec{c}_i^k=\left\{\begin{array}{ll}
\frac{\dualproj{T^k}\bvec{\phi}_i}{\norm{\dualproj{T^k}\bvec{\phi}_i}}, & i\notin T^k\\
\bvec{0}, & i\in T^k
\end{array}\right.
\end{align*}
So, it is important to study the properties of this kind of matrix ensemble before attempting an analysis of OLS.
\subsection{Joint Correlation}
The following lemma shows that, with high probability, a projected column of the form $\frac{\dualproj{T^k}\bvec{\phi}_i}{\norm{\dualproj{T^k}\bvec{\phi}_i}}$ is ``almost'' orthogonal to any fixed vector of at most unit norm. For the uncorrelated dictionaries considered here, with i.i.d. Gaussian entries, one can use standard concentration inequalities, along with some results from matrix theory, to establish this result as shown below.
\begin{lem}
\label{lem:projected-joint-correlation}
Consider the uncorrelated dictionary $\bm{\Phi}$ assumed in Section~\ref{sec:random-measurement-ensemble}. Let ${\bm{u}}\in \mathbb{R}^{M}$ be a vector whose $l_{2}$ norm does not exceed one. Given a set $T^k$ of column indices of $\bm{\Phi}$, let $\bm{z}\in \mathbb{R}^{M}$ be a Gaussian random vector with i.i.d. entries, independent of ${\bm{u}}$ and of the columns of $\bm{\Phi}_{T^k}$. Then,
\begin{align}
\mathbb{P}\left\{\abs{\inprod{\frac{\dualproj{T^k}\bvec{z}}{\norm{\dualproj{T^k}\bvec{z}}}}{\bvec{u}} }\le \epsilon\right\} & \ge 1-e^{-\frac{m(M_1-1)\epsilon^2}{2}}
\end{align}
where $M_{1}=M-k-1$, and $m(n)=(\sqrt{n-1}+1)^2,\ \forall n\in \mathbb{N}$.
\end{lem}
\begin{proof}
Since $\bvec{z},\bvec{u}\independent \bvec{\Phi}_{T^k}$, we first lower bound the probability of the desired event conditioned on the columns of $\bvec{\Phi}_{T^k}$ and the vector $\bvec{u}$.
Now, given $\bvec{\Phi}_{T^k}$, note that $\dualproj{T^k}$ is an orthogonal projection operator and hence can be decomposed as \begin{align*}
\dualproj{T^k}=\bvec{U\Sigma U}^T
\end{align*} where $\bvec{U}\in \mathbb{R}^{M\times M}$ is an orthogonal matrix, and $\bvec{\Sigma}$ is a diagonal matrix with, w.l.o.g., its first $M-d$ diagonal elements equal to $1$ and the rest $0$, where $d=\dim R(\bm{\Phi}_{T^k})\le k$. Then, one can write, \begin{align*}
\lefteqn{\inprod{\frac{\dualproj{T^k}\bvec{z}}{\norm{\dualproj{T^k}\bvec{z}}}}{\bvec{u}}} & &\\
\ & =\inprod{\frac{\bvec{U\Sigma U}^T\bvec{z}}{\norm{\bvec{U\Sigma U}^T\bvec{z}}}}{\bvec{u}} \\
\ & =\inprod{\frac{\bvec{\Sigma U}^T\bvec{z}}{\norm{\bvec{\Sigma U}^T\bvec{z}}}}{\bvec{\Sigma U}^T\bvec{u}}\\
\ & =\inprod{\frac{\bvec{v}}{\norm{\bvec{v}}}}{\bvec{\tilde{u}}}
\end{align*}
where \begin{itemize}
\item $\bvec{v}\in \mathbb{R}^{M-d}$, with its components as the first $M-d$ components of $\bvec{U}^T\bvec{z}$
\item $\bvec{\tilde{u}}\in \mathbb{R}^{M-d}$, with its components as the first $M-d$ components of $\bvec{U}^T\bvec{u}$
\end{itemize}
Two further observations are in order:
\begin{itemize}
\item Given $\bvec{\Phi}_{T^k}$, $\bvec{U}^T\bvec{z}\sim \mathcal{N}(\bvec{0},M^{-1}\bvec{I}_M)$\footnote{Since given $\bvec{U}$, by independence of $\bvec{\Phi}_{T^k}$ and $\bvec{z},\ \expect {(\bvec{U}^T\bvec{z})(\bvec{U}^T\bvec{z})^T\mid \bvec{U}}=\bvec{U}^T\expect{\bvec{z z}^T}\bvec{U}= \bvec{U}^T M^{-1}\bvec{I}_M\bvec{U}=M^{-1}\bvec{I}_M$}, which implies that $\bvec{v}\sim \mathcal{N}(\bvec{0},M^{-1}\bvec{I}_{M-d})$.
\item $\norm{\bvec{\tilde{u}}}=\norm{\Sigma\bvec{U}^T\bvec{u}}\le\norm{\bvec{u}}\le 1$.
\end{itemize}
A further simplification can be furnished by finding an orthogonal matrix $\bvec{U}_1\in \mathbb{R}^{(M-d)\times (M-d)}$, i.e. a rotation in the $(M-d)$ dimensional space that transforms $\bvec{\tilde{u}}$ to a vector lying on one of the coordinate axes; specifically, $\bvec{\hat{u}}:=\bvec{U}_1\bvec{\tilde{u}}$ has its first component nonzero and all the other components $0$. This task can be executed by constructing $\bvec{U}_1$ by putting in the first row the vector $\bvec{\tilde{u}}/\norm{\bvec{\tilde{u}}}$, and putting in the rest of the rows an orthonormal basis of the orthogonal complement of $\spn{\bvec{\tilde{u}}}$ in $\mathbb{R}^{M-d}$. Then, one can write, \begin{align*}
\lefteqn{\inprod{\frac{\bvec{v}}{\norm{\bvec{v}}}}{\bvec{\tilde{u}}}} & & \\
\ =& \inprod{\frac{\bvec{v}_1}{\norm{\bvec{v}_1}}}{\bvec{\hat{u}}}
\end{align*}
where $\bvec{v}_1=\bvec{U}_1\bvec{v}$. Note that, since $\bvec{U}_1$ is constructed from $\bvec{\tilde{u}}$ which is independent of $\bvec{v}$, conditioned on $\{\bvec{u}\}$ and $\bvec{\Phi}_{T^k}$, $\bvec{v}_1\sim \mathcal{N}(\bvec{0},M^{-1}\bvec{I}_{M-d})$.
At this point, it is useful to make a change of coordinates from Cartesian to polar, to represent $\frac{\bvec{v}_1}{\norm{\bvec{v}_1}}$ as \begin{align*}
\begin{bmatrix}
\cos\Theta_1\\
\sin \Theta_1\cos \Theta_2\\
\vdots\\
\sin\Theta_1\sin\Theta_2\cdots\sin\Theta_{M_1-1}\cos\Theta_{M_1}\\
\sin\Theta_1\sin\Theta_2\cdots\sin\Theta_{M_1-1}\sin\Theta_{M_1}
\end{bmatrix}
\end{align*}
where $M_1=M-d-1$. Here, given $\bvec{\Phi}_{T^k}$, $\Theta_1, \Theta_2,\ \cdots,\ \Theta_{M_1}$ are independent, but not identically distributed, continuous random variables, with $\Theta_1,\ \cdots,\ \Theta_{M_1-1}\in [0,\pi)$ and $\Theta_{M_1}\in [0,2\pi)$. Conditioned on $\bvec{\Phi}_{T^k}$, their probability density functions are given by \begin{align*}
p_{\Theta_i}(\theta_i)=\left\{\begin{array}{ll}
\frac{(\sin\theta_i)^{M_1-i}}{\beta\left(\frac{M_1-i+1}{2},\frac{1}{2}\right)}, & 1\le i\le M_1-1\\
\frac{1}{2\pi}, & i=M_1
\end{array}\right.
\end{align*}
Therefore, we have \begin{align*}
\lefteqn{\inprod{\frac{\dualproj{T^k}\bvec{z}}{\norm{\dualproj{T^k}\bvec{z}}}}{\bvec{u}}} & & \\
\ =& \cos\Theta_1
\end{align*}
Since $\Theta_1$ has density $p_{\Theta_1}$ when $\bvec{\Phi}_{T^k}$ is given, we find \begin{align*}
\lefteqn{\mathbb{P}\left\{\abs{\inprod{\frac{\dualproj{T^k}\bvec{z}}{\norm{\dualproj{T^k}\bvec{z}}}}{\bvec{u}}}\ge \epsilon\mid \bvec{\Phi}_{T^k},\bvec{u}\right\}} & & \\
\ &=\mathbb{P}(\abs{\cos \Theta_1}\ge \epsilon\mid \bvec{\Phi}_{T^k},\bvec{u})\\
\ &=2\int_{0}^{\cos^{-1}\epsilon}p_{\Theta_1}(\theta_1)d\theta_1\\
\ &=\frac{2}{\beta\left(\frac{M_1}{2},\frac{1}{2}\right)}\int_{0}^{\cos^{-1}\epsilon}\sin^{M_1-1}\theta_1 d\theta_1\\
\ &=f_{M_1-1}(\epsilon^2)
\end{align*}
where, for a given $n\in \mathbb{N}$, $f_n:[0,1]\to [0,1]$ is defined as \begin{align*}
f_{n}(x)=\frac{1}{A_n}\int_{0}^{\sqrt{1-x}}\frac{u^n}{\sqrt{1-u^2}}du,\quad \forall x\in [0,1]
\end{align*}
where $A_n=\frac{1}{2}\beta\left(\frac{n+1}{2},\frac{1}{2}\right)$. The following lemma is invoked to find an upper bound on the desired probability.
\begin{lem}
\label{lem:theta_1-prob-upper-bound}
$\forall n\in \mathbb{N},\ \forall x\in [0,1]$, \begin{align}
\label{eq:theta_1-prob-upper-bound}
f_n(x)\le e^{-\frac{m(n) x}{2}}
\end{align}
where $m(n):=(\sqrt{n-1}+1)^2$.
\end{lem}
\begin{proof}
The proof is postponed to Appendix~\ref{sec:proof-lemma-theta_1-prob-upper-bound}.
\end{proof}
Invoking Lemma~\ref{lem:theta_1-prob-upper-bound}, it is immediate that \begin{align*}
\lefteqn{\mathbb{P}\left\{\abs{\inprod{\frac{\dualproj{T^k}\bvec{z}}{\norm{\dualproj{T^k}\bvec{z}}}}{\bvec{u}}}\ge \epsilon\mid \bvec{\Phi}_{T^k},\bvec{u}\right\}} & & \\
\le & e^{-\frac{m(M_1-1)\epsilon^2}{2}}
\end{align*}
Thus, the desired probability can be upper bounded as \begin{align*}
\lefteqn{\mathbb{P}\left\{\abs{\inprod{\frac{\dualproj{T^k}\bvec{z}}{\norm{\dualproj{T^k}\bvec{z}}}}{\bvec{u}}}\ge \epsilon\right\}} & & \\
\ =& \int_{\bvec{\Phi}_{T^k},\ \bvec{u}}\mathbb{P}\left\{\abs{\inprod{\frac{\dualproj{T^k}\bvec{z}}{\norm{\dualproj{T^k}\bvec{z}}}}{\bvec{u}}}\ge \epsilon\mid \bvec{\Phi}_{T^k},\bvec{u}\right\}d\mathbb{P}(\bvec{\Phi}_{T^k})d\mathbb{P}(\bvec{u})\\
\ \le & e^{-\frac{m(M-d-2)\epsilon^2}{2}}\\
\ \le & e^{-\frac{m(M-k-2)\epsilon^2}{2}}=e^{-\frac{m(M_1-1)\epsilon^2}{2}}
\end{align*}
where the last inequality uses $d\le k$ together with the monotonicity of $m$, and $M_1$, with a slight abuse of notation, is now defined as $M-k-1$.
\end{proof}
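The tail bound in the lemma can be checked by a quick Monte-Carlo simulation. In the sketch below (the sizes, $\epsilon$, and the trial count are illustrative assumptions), $\bvec{u}$ is a fixed unit vector, $\bvec{z}$ is resampled, and the empirical frequency of the event is compared with the stated lower bound:

```python
import numpy as np

rng = np.random.default_rng(1)
M, k = 128, 4                  # illustrative sizes (assumptions)
eps, trials = 0.3, 2000

Phi_Tk = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, k))
Q, _ = np.linalg.qr(Phi_Tk)
P_perp = np.eye(M) - Q @ Q.T   # projector onto span(Phi_{T^k})^perp
u = rng.normal(size=M)
u /= np.linalg.norm(u)         # fixed unit-norm vector, independent of z

hits = 0
for _ in range(trials):
    z = rng.normal(0.0, 1.0 / np.sqrt(M), size=M)
    w = P_perp @ z
    hits += abs(np.dot(w / np.linalg.norm(w), u)) <= eps

M1 = M - k - 1
m_val = (np.sqrt(M1 - 2) + 1) ** 2       # m(M1 - 1), with m(n) = (sqrt(n-1)+1)^2
bound = 1 - np.exp(-m_val * eps**2 / 2)  # lower bound from the lemma
assert hits / trials >= bound - 0.05     # empirical rate respects the bound
```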
\subsection{Smallest Singular Value}
Inspired by the analysis technique introduced by Tropp and Gilbert in~\cite{tropp2007signal}, we need a tail bound on the smallest singular value of the projected uncorrelated matrix defined earlier in this section. Before deriving this tail bound, we recall an important result on tail bounds for ``unprojected'' uncorrelated matrices, i.e., matrices drawn from the uncorrelated dictionary without projection onto any subspace. To do so, the unprojected matrix is assumed to satisfy the following type of concentration inequality:
\begin{align}
\label{eq:concentration_inequality}
\mathbb{P}\left\{ \abs{ \norm{\bvec{\Phi x}}^2-\norm{\bvec{x}}^2 } \le \epsilon \norm{\bvec{x}}^2\right\} \ge 1-2e^{-M c_{0}(\epsilon)},\quad 0 < \epsilon \le 1
\end{align}
where $c_0(\epsilon)$ is a constant that depends only on $\epsilon$, such that $c_0(\epsilon)>0$ for all $\epsilon\in (0,1)$. Then the singular values can be bounded with high probability thanks to the following lemma due to Baraniuk et al.:
\begin{lem}
\label{lem:Baranuik_singular_value_bound}
Suppose that $\bm{Z}\in \mathbb{R}^{M\times K}$ is a sub-matrix of an $M\times N$ matrix $\bvec{\Phi}$ such that $\bvec{Z}$ satisfies the concentration inequality \eqref{eq:concentration_inequality}. Then, for any $\epsilon\in (0,1)$, one has, for all $\bvec{x}\in \mathbb{R}^K$,
\begin{align}
(1-\epsilon)\|\bm{x}\|_{2}\le\|\bm{Z x}\|_2\le (1+\epsilon)\|\bm{x}\|_{2} \hspace{1cm}
\end{align}
with probability exceeding $1-2(12/\epsilon)^Ke^{-c_0(\epsilon/2)M}$.
\end{lem}
It should be emphasized that this result holds for a \emph{given} $M\times K$ sub-matrix of $\bvec{\Phi}$, as opposed to the stronger result, generally referred to as the restricted isometry property (RIP), which ensures that the above kind of bound holds for \emph{all} $M\times K$ sub-matrices of $\bvec{\Phi}$.
Another important result, specific to the case of Gaussian matrices, is due to Davidson and Szarek; it gives a much tighter lower bound on the smallest singular value. This result will be more useful in our analysis, since we consider matrices with Gaussian entries.
\begin{align}
\label{eq:zarek-singular-bound}
\mathbb{P} \left\lbrace \sigma_{min}({\bm{Z}}) \ge 1-\sqrt{\frac{K}{M}} - \epsilon \right\rbrace \ge 1-e^{-\frac{\epsilon^{2}M}{2}}
\end{align}
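The estimate \eqref{eq:zarek-singular-bound} is easy to probe numerically. The sketch below (the sizes and $\epsilon$ are illustrative assumptions) draws matrices with i.i.d. $\mathcal{N}(0,1/M)$ entries and counts how often the smallest singular value clears the threshold:

```python
import numpy as np

rng = np.random.default_rng(2)
M, K = 256, 32                 # illustrative sizes (assumptions)
eps, trials = 0.1, 500

threshold = 1 - np.sqrt(K / M) - eps
good = 0
for _ in range(trials):
    Z = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, K))
    good += np.linalg.svd(Z, compute_uv=False).min() >= threshold

bound = 1 - np.exp(-eps**2 * M / 2)   # stated lower bound on the probability
assert good / trials >= bound - 0.05
```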
In the following lemmas, we derive similar lower-bound estimates on the smallest singular value of the ``projected'' uncorrelated matrix defined at the beginning of Section~\ref{sec:projected-uncorrelated-dictionary-properties}.
\begin{lem}
\label{lem:least-singular-value-lem1}
Let $\bm{\Phi} \in \mathbb{R}^{M \times K}$ and let $T^{k}$ be a set of $k$ arbitrarily chosen indices from $\{1,2,\cdots,\ K\}$. Then, \begin{align*}
\sigma_{\mathrm{min}}(\mathbf{P_{T^{k}}^{\perp}}\bm{\Phi}_{(T^{k})^{c}})\ge \sigma_{\mathrm{min}}(\mathbf{\Phi})
\end{align*}
\end{lem}
\begin{proof}
Since the singular values of a matrix are invariant under permutation of its columns, we can, without loss of generality, partition the matrix $\bm{\Phi}$ as $$\begin{bmatrix}
\bvec{\Phi}_{T^k} & \bvec{\Phi}_{(T^k)^c}
\end{bmatrix}$$ Then we have
\begin{align*}
\bvec{\Phi}^T\bvec{\Phi}=\begin{bmatrix}
\bm{\Phi}_{T^k}^T\bm{\Phi}_{T^k} & \bm{\Phi}_{T^k}^T\bm{\Phi}_{(T^k)^c}\\
\bm{\Phi}_{(T^k)^c}^T\bm{\Phi}_{T^k} & \bm{\Phi}_{(T^k)^c}^T\bm{\Phi}_{(T^k)^c}
\end{bmatrix}
\end{align*}
Now, the lower right block of $(\bm{\Phi}^T\bm{\Phi})^{-1}$ is the inverse of the Schur complement of the block $\bm{\Phi}_{T^k}^T\bm{\Phi}_{T^k}$ in $\bm{\Phi}^{T}\bm{\Phi}$, which is $(\bm{\Phi}_{(T^{k})^{c}}^{T}\mathbf{P_{T^{k}}^{\perp}}\bm{\Phi}_{(T^{k})^c})^{-1}$. By Cauchy's interlacing theorem for eigenvalues of Hermitian matrices~\cite{horn2012matrix}, applied to this principal submatrix of $(\bm{\Phi}^T\bm{\Phi})^{-1}$, we have $\lambda_{\mathrm{max}}\left((\bm{\Phi}_{(T^{k})^{c}}^{T}\mathbf{P_{T^{k}}^{\perp}}\bm{\Phi}_{(T^{k})^c})^{-1}\right) \le \lambda_{\mathrm{max}}\left((\bm{\Phi}^T\bm{\Phi})^{-1}\right)$ and thus $\lambda_{\mathrm{min}}(\bm{\Phi}_{(T^{k})^{c}}^{T}\mathbf{P_{T^{k}}^{\perp}}\bm{\Phi}_{(T^{k})^c}) \ge \lambda_{\mathrm{min}}(\bm{\Phi}^T\bm{\Phi})$. Now recall that, for any matrix $\bvec{A}$, $\sigma_{\mathrm{min}}(\bvec{A})=\sqrt{\lambda_{\min}(\bvec{A}^T\bvec{A})}$, from which the desired result follows directly.
\end{proof}
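Lemma~\ref{lem:least-singular-value-lem1} can be verified numerically on a random instance (the sizes and the index set are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
M, K, k = 64, 40, 6            # illustrative sizes (assumptions)
Phi = rng.normal(size=(M, K))

T_k = list(range(k))            # any k indices; column permutation is irrelevant
Q, _ = np.linalg.qr(Phi[:, T_k])
P_perp = np.eye(M) - Q @ Q.T    # projector onto span(Phi_{T^k})^perp

s_full = np.linalg.svd(Phi, compute_uv=False).min()
s_proj = np.linalg.svd(P_perp @ Phi[:, k:], compute_uv=False).min()
# Projecting out the selected columns cannot decrease the smallest singular value.
assert s_proj >= s_full - 1e-10
```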
\begin{lem}
\label{lem:least-singular-value-lem2}
Let $\bvec{\Phi}\in \mathbb{R}^{M\times K}$ and let $T^k$ be a set of $k$ arbitrary indices from $\{1,2,\cdots,\ K\}$. Let $\bm{z}_{1}, \bm{z}_{2},\cdots,\ \bm{z}_{L} \in \mathbb{R}^{M}$ be i.i.d. $\mathcal{N}(\bvec{0},M^{-1}\bvec{I})$ random vectors, independent of the columns of $\bvec{\Phi}_{T^{k}}$. Then the following event holds with probability exceeding $1-2Le^{{-\frac{\delta^{2}(M-d)}{8}}}$:
\begin{align}
\ & \left\lbrace\left(1-\frac{d}{M}\right) \left(1-\delta \right) \le \| \mathbf{P_{T^{k}}^{\perp}}\bm{z}_i \|_{2}^{2} \le \left(1-\frac{d}{M}\right) \left(1+\delta \right),\ \forall\ 1\le i\le L\right\rbrace\nonumber
\end{align}
where $d=\dim R(\bm{\Phi}_{T^k})\le k$.
\end{lem}
\begin{proof}
Recalling the notation introduced in Section~\ref{sec:notation}, it is easy to see that $\mathbf{P_{T^{k}}^{\perp}}$ is an idempotent matrix with $M-d$ eigenvalues equal to $1$ and $d$ eigenvalues equal to $0$, where $d=\dim{{R}}(\bvec{\Phi}_{T^k})$. Therefore, we can decompose $\mathbf{P_{T^{k}}^{\perp}}$ as $\mathbf{U \Sigma U^{T}}$, where $\mathbf{\Sigma}$ is a diagonal matrix containing $M-d$ ones and $d$ zeros, and $\mathbf{U}$ is an orthogonal matrix. \\
Now, for any $\bm{z}\in\mathbb{R}^{M}$, distributed as $\mathcal{N}(\bvec{0},M^{-1}\bvec{I})$, and independent of $\bvec{\Phi}_{T^k}$, the following observation can be made
\begin{align}
{\norm{\dualproj{T^k}\bvec{z}}^2} & =\bvec{z}^T\dualproj{T^k}\bvec{z}\nonumber\\
\ &=\bvec{z}^T\bvec{U}\bvec{\Sigma}\bvec{U}^T\bvec{z}\nonumber\\
\ &=(\bvec{\Sigma U}^T \bvec{z})^T(\bvec{\Sigma U^T z})
\end{align}
Since $\mathbf{U}$ is an orthogonal matrix independent of $\bm{z}$, given $\bvec{\Phi}_{T^k}$, $\mathbf{U^{T}}\bm{z}$ is a random vector with i.i.d. Gaussian entries. Now, let $\bm{v}$ be the vector formed by the first $M-d$ components of $\bm{\Sigma U}^{T}\bm{z}$. Then $\bm{v} \sim \mathcal{N}(\bvec{0},M^{-1}\bvec{I}_{M-d})$ (given $\bvec{\Phi}_{T^k}$), with ${\mathbb{E}(\|\bm{v}\|_2^{2})}=1-\frac{d}{M}$. Now, a standard exercise in concentration inequalities~\cite{foucart2013mathematical} shows that $\norm{\bvec{v}}^2$ is concentrated about its mean, i.e., the following holds:
\begin{align}
\mathbb{P}\left\lbrace\abs{\norm{\dualproj{T^k}\bvec{z}}^{2}-{\left(1-\frac{d}{M}\right)}}\ge {\delta\left(1-\frac{d}{M}\right)}\right\rbrace \le 2e^{-\frac{\delta^{2}(M-d)}{8}}
\end{align}
Then, considering the probability for the complementary events and taking union bound over $L$ vectors, we arrive at the desired result.
\end{proof}
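The concentration claim of Lemma~\ref{lem:least-singular-value-lem2} can also be checked empirically. The sketch below (the sizes, $\delta$, and the trial count are illustrative assumptions) uses a fixed rank-$(M-d)$ projector in place of $\mathbf{P_{T^{k}}^{\perp}}$:

```python
import numpy as np

rng = np.random.default_rng(4)
M, d = 200, 10                 # illustrative sizes (assumptions)
delta, trials = 0.5, 1000

# A fixed orthogonal projector of rank M - d stands in for P_{T^k}^perp.
Q, _ = np.linalg.qr(rng.normal(size=(M, d)))
P_perp = np.eye(M) - Q @ Q.T

mean = 1 - d / M               # E ||P_perp z||^2 for z ~ N(0, I/M)
inside = 0
for _ in range(trials):
    z = rng.normal(0.0, 1.0 / np.sqrt(M), size=M)
    Pz = P_perp @ z
    inside += abs(np.dot(Pz, Pz) - mean) <= delta * mean

bound = 1 - 2 * np.exp(-delta**2 * (M - d) / 8)   # per-vector bound (L = 1)
assert inside / trials >= bound - 0.05
```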
\begin{lem}
\label{lem:least-singular-value-lem3}
Let $\bm{\Phi}\in \mathbb{R}^{M\times K}$ satisfy the smallest singular value property of Lemma~\ref{lem:Baranuik_singular_value_bound} with $\sigma_{\min}\ge \sigma$, let $T^k$ be a set of $k$ indices, $k=1,2,\cdots,\ K-1$, and let $$\mathbf{D}=\mathrm{diag}\left(\frac{1}{\norm{\dualproj{T^k}\bvec{\phi}_{i}}}\right)_{i\in (T^{k})^{c}}$$ Then, for all $\bvec{x} \in \mathbb{R}^{K-k}$ with $\norm{\bvec{x}}=1$, $\|(\mathbf{P_{T^{k}}^{\perp}}\bm{\Phi}_{(T^{k})^{c}} \bvec{D})\bvec{x}\|_{2} \ge \sigma\sqrt{1-\delta}$ with probability exceeding \begin{align*}
\ & 1-2(12/(1-\sigma))^Ke^{-c_0((1-\sigma)/2)M}-2(K-k)e^{-(M-k)\delta^2/8} \\
\ & >1-e^{-cM}
\end{align*} for some constant $c>0$, provided $M>CK$ for a suitably large constant $C$.
\end{lem}
\begin{proof}
First of all note that, for any $\bvec{x}\in \mathbb{R}^{K-k}$ such that $\norm{\bvec{x}}=1$, \begin{align*}
\norm{\dualproj{T^k}\bvec{\Phi}_{(T^k)^c}\bvec{D x}}^2 & =\bvec{x}^T\bvec{D}^T\bvec{\Phi}_{(T^k)^c}^T\dualproj{T^k}\bvec{\Phi}_{(T^k)^c}\bvec{D}\bvec{x}\\
\ &\ge \sigma_{\min}^2(\dualproj{T^k}\bvec{\Phi}_{(T^k)^c})\sigma_{\min}^2(\bvec{D})\\
\ &\ge \sigma_{\min}^2(\bvec{\Phi})\frac{1}{\max_{i\in (T^k)^c}\norm{\dualproj{T^k}\bvec{\phi}_i}^2}
\end{align*}
where the last inequality uses Lemma~\ref{lem:least-singular-value-lem1}. Thus, since $\frac{1}{\sqrt{1+\delta}}\ge \sqrt{1-\delta}$, \begin{align*}
\mathbb{P}\left(\norm{\dualproj{T^k}\bvec{\Phi}_{(T^k)^c}\bvec{D x}}\ge \sigma\sqrt{1-\delta}\right) & \ge \mathbb{P}(E_1\cap E_2) \\
\ & \ge 1-\mathbb{P}(E_1^c)-\mathbb{P}(E_2^c)
\end{align*}
where \begin{align*}
E_1:= & \left\{\sigma_{\min}(\bvec{\Phi})\ge \sigma\right\}\\
E_2:= & \left\{{\max_{i\in (T^k)^c}\norm{\dualproj{T^k}\bvec{\phi}_i}}\le \sqrt{1+\delta}\right\}
\end{align*}
Now, since $\bm{\Phi}$ satisfies the smallest singular value property of Lemma~\ref{lem:Baranuik_singular_value_bound} with $\sigma_{\min}\ge\sigma$, we have $\mathbb{P}(E_1^c)\le 2(12/(1-\sigma))^Ke^{-c_0((1-\sigma)/2)M}$. On the other hand, since $\left(1-\frac{d}{M}\right)(1+\delta)\le 1+\delta$, Lemma~\ref{lem:least-singular-value-lem2}, applied with $L=K-k$, dictates that \begin{align*}
\mathbb{P}(E_2)\ge & \mathbb{P}\left\{{\max_{i\in (T^k)^c}\norm{\dualproj{T^k}\bvec{\phi}_i}}^2\le \left(1-\frac{d}{M}\right)(1+\delta)\right\}\\
\ \ge & 1-2(K-k)e^{-(M-d)\delta^2/8}\\
\ \ge & 1-2(K-k)e^{-(M-k)\delta^2/8}
\end{align*}
Thus, \begin{align*}
\lefteqn{\mathbb{P}\left(\norm{\dualproj{T^k}\bvec{\Phi}_{(T^k)^c}\bvec{D x}}\ge \sigma\sqrt{1-\delta}\right)} & & \\
\ & \ge 1-2(12/(1-\sigma))^Ke^{-c_0((1-\sigma)/2)M}-2(K-k)e^{-(M-k)\delta^2/8}
\end{align*}
For Gaussian matrices, one can use $c_0(\epsilon)=\epsilon^2/8$ to further simplify the bound.
\end{proof}
\section{Analysis of OLS in Uncorrelated dictionaries}
\label{sec:ols-uncorrelated-dictionaries}
The following theorem is one of the main results of the paper. In this theorem, we argue that a slightly modified OLS algorithm requires $\mathcal{O}(K\log(N/K))$ measurements for perfect recovery, which is asymptotically the same as that required by Basis Pursuit. Our modification concerns the first iteration of OLS, and we discuss it within the proof of the theorem.
\begin{thm}
\label{thm:uncorrelated-dcitionary-recovery-probability}
Let $\bm{\Phi} \in \mathbb{R}^{M \times N}$ be a measurement matrix, let $\bvec{x} \in \mathbb{R}^{N}$ be an arbitrary $K$-sparse signal, and let $\bvec{y} = \bm{\Phi x}$ be the measurement vector. Then the Orthogonal Least Squares algorithm can reconstruct the signal $\bvec{x}$ with probability exceeding $1-\delta$, $\delta \in (0,1)$, provided the number of measurements satisfies $M>CK\ln\left(\frac{N}{Kc(\delta)}\right)+K+1$ for some suitably chosen constant $C>0$ and some constant $c(\delta)$ that depends only on $\delta$.
\end{thm}
\begin{proof}
Our proof of this theorem is inspired by the approach adopted by Tropp~\cite{tropp2007signal}. The main observation is that the first iteration of OLS is exactly the same as that of OMP: the column with maximum absolute correlation with the measurement $\bvec{y}$ is chosen. From the second iteration onwards, the selection criterion of OLS is unique to itself. Let us consider the greedy selection rule for OLS from the $2^{\mathrm{nd}}$ iteration onwards:
\begin{equation}
\begin{split}
\rho(\bm{r}_{k}) &= \frac{\| (\mathbf{P_{T^{k}}^{\perp}} \bm{\Psi} \bvec{D}_{1})^{T}\bm{r}_{k} \|_{\infty}}{\| (\mathbf{P_{T^{k}}^{\perp}} \bm{\Phi}_{S}\bvec{D}_{2})^{T}\bm{r}_{k} \|_{\infty}} = \frac{\max_{\bm{\psi}} \left| \left\langle \frac{\mathbf{P_{T^{k}}^{\perp}} \bm{\psi}}{\|\mathbf{P_{T^{k}}^{\perp}} \bm{\psi}\|_{2}},\bm{r}_{k} \right\rangle \right|}{\| (\mathbf{P_{T^{k}}^{\perp}}\bm{\Phi}_{S}\bvec{D}_{2})^{T}\bm{r}_{k} \|_{\infty}} \\
D_{1}(i) &= \frac{1}{\| \mathbf{P_{T^{k}}^{\perp}} \bm{\psi}_{i} \|_{2}} \\
D_{2}(j) &= \begin{cases}\frac{1}{\| \mathbf{P_{T^{k}}^{\perp}} \bm{\phi}_{j} \|_{2}}, & j \notin T^{k}\\ 0, & \text{otherwise}\end{cases}
\end{split}
\end{equation}
where $D_{l}(i)$ denotes the $i^{\mathrm{th}}$ diagonal entry of the diagonal matrix $\bvec{D}_{l}$, $l=1,2$, the maximum runs over the columns $\bm{\psi}$ of $\bm{\Psi}$, the matrix of columns of $\bm{\Phi}$ outside the true support set $S$, $\bm{\Phi}_{S}$ is the matrix formed by stacking the columns indexed by $S$, and the initial residual is $\bm{r}_{0}=\bvec{y}$. At the $k^{\mathrm{th}}$ iteration, the OLS algorithm chooses a column from the true support set $S$ if and only if $\rho(\bm{r}_{k}) < 1$. This can be shown along the same lines as the argument Tropp~\cite{tropp2004greed} used to show how the greedy selection rule $\rho(\mathbf{r}) < 1$ ensures reconstruction of the signal $\bvec{x}$ by the OMP algorithm in $K$ iterations under noiseless conditions. The gist of the idea for OMP is to consider an imaginary and a real execution of the OMP algorithm. Starting with the initial residual $\bm{r}_{0}=\bvec{y}$ and posing an induction argument, Tropp shows that the greedy selection rule ensures that the OMP algorithm identifies only indices from the true support set $S$ of the signal $\bvec{x}$. In our case we pose the same argument for the OLS algorithm, but from the $2^{\mathrm{nd}}$ iteration onwards and with a different greedy selection rule.
Before proceeding with the proof, we propose a slight modification to the OLS algorithm; its purpose will become clear shortly. We choose a dummy column, say $\bm{\phi}_{d}$, whose distribution is the same as that of the other columns, and a signal value, say $\alpha$. We then modify the measurement vector to $\bvec{y}_{1}=\bvec{y}+\alpha\bm{\phi}_{d}$. Consequently, we can warm start the OLS algorithm with $\bm{\phi}_{d}$ as the column preselected in the first iteration, and proceed with the remaining iterations in the usual way. The effect of this modification is that we bypass the first iteration of OLS, whose selection criterion is the same as that of OMP, and instead use the greedy selection criterion unique to OLS from the second iteration onwards. We call this preselection the $0^{\mathrm{th}}$ iteration.
The rest of the iterations continue until $K$ further columns have been picked.
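The dummy-column warm start described above can be sketched in code. This is an illustrative implementation, not the paper's pseudocode; the helper name `ols_warm`, the sizes, and the choice $\alpha=1$ are assumptions. At these sizes exact support recovery is expected with high probability, so the assertions check only properties that hold deterministically:

```python
import numpy as np

def ols_warm(Phi, y, K, pre=()):
    """OLS that runs K selection steps after a warm-started support `pre`."""
    M, _ = Phi.shape
    T = list(pre)
    r = y.copy()
    while len(T) < K + len(pre):
        P_perp = np.eye(M)
        if T:
            Q, _ = np.linalg.qr(Phi[:, T])
            P_perp -= Q @ Q.T
        C = P_perp @ Phi                      # projected columns
        norms = np.linalg.norm(C, axis=0)
        norms[T] = np.inf                     # never re-select a chosen column
        scores = np.abs(C.T @ r) / norms      # OLS selection rule
        T.append(int(np.argmax(scores)))
        x_T, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
        r = y - Phi[:, T] @ x_T               # residual after least squares
    return T, r

rng = np.random.default_rng(5)
M, N, K = 150, 200, 5
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
support = rng.choice(N, size=K, replace=False)
x = np.zeros(N)
x[support] = rng.normal(size=K)
y = Phi @ x

# Modification from the text: append a dummy column to the dictionary,
# augment the measurement, and warm-start with the dummy preselected.
phi_d = rng.normal(0.0, 1.0 / np.sqrt(M), size=M)
alpha = 1.0
Phi_aug = np.column_stack([Phi, phi_d])
y1 = y + alpha * phi_d
T, r = ols_warm(Phi_aug, y1, K, pre=(N,))       # column index N is the dummy
assert T[0] == N and len(T) == K + 1             # dummy stays selected
assert np.linalg.norm(r) <= np.linalg.norm(y1)   # least squares never increases it
```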
The two conditional events we consider are,
\begin{itemize}
\item $\Sigma =\left\{\sigma_{\min}(\bm{\Phi}_S) > \sigma\right\}$
\item $E_{1}$ = success at the first iteration
\end{itemize}
Conditioned on the above two events, we denote by $\mathbb{P}(E_{S})$ the overall probability of success of OLS, i.e., the probability that OLS recovers the $K$-sparse signal, where
\begin{equation}
E_{S} = \left\{\rho (\bm{r}_{k}) < 1 , \hspace{0.2cm} \textnormal{for} \hspace{0.2cm} k=1,2,\cdots,\ K\right\}
\end{equation}
Recall Lemma~\ref{lem:least-singular-value-lem3} to appreciate that, for any iteration $k=1,2,\cdots,\ K$, the event $\Sigma$ implies that $\sigma_{\min} (\mathbf{P_{T^{k}}^{\perp}}\bm{\Phi}_{(T^{k})^{c}}\bvec{D}) > \sigma\sqrt{1-\delta}$ with high probability. We denote $\sigma_{\min}:=\sigma \sqrt{1-\delta}$. Also, clearly,
\begin{equation}
\mathbf{P_{T^{k}}^{\perp}}\bm{\Phi}_{(T^k)^{c}}\bvec{D} = \mathbf{P_{T^{k}}^{\perp}}\bm{\Phi}_{S}\bvec{D}_{2}
\end{equation}
To ensure success for iterations $k = 1,2,\cdots,\ K $ with residual $\bm{r_{k}}$, we require,
\begin{equation}
\rho(\bm{r}_{k}) < 1 \implies \frac{\max_{\bm{\psi}} \left| \left\langle \frac{\mathbf{P_{T^k}^{\perp}} \bm{\psi}}{\|\mathbf{P_{T^k}^{\perp}} \bm{\psi}\|_{2}},\bm{r}_{k} \right\rangle \right|}{\| (\mathbf{P_{T^k}^{\perp}}\bm{\Phi}_{S}\bvec{D}_{2})^{T}\bm{r}_{k} \|_{\infty}} < 1
\end{equation}
At any iteration $k$, assuming all previous iterations were successful, $\bm{r}_{k}$ lies in the column span of $\mathbf{P_{T^k}^{\perp}} \bm{\Phi}_{S}\bvec{D}_{2}$, so that $\| (\mathbf{P_{T^k}^{\perp}} \bm{\Phi}_{S}\bvec{D}_{2})^{T}\bm{r}_{k} \|_{2} \ge \sigma_{\min} \| \bm{r}_{k}\|_{2}$.
Thus,
\begin{equation}
\| (\mathbf{P_{T^k}^{\perp}} \bm{\Phi}_{S} \bvec{D}_{2})^{T}\bm{r}_{k} \|_{\infty} \ge \frac{\| (\mathbf{P_{T^k}^{\perp}} \bm{\Phi}_{S} \bvec{D}_{2} )^{T} \bm{r}_{k} \|_{2} }{\sqrt{K}} \ge \frac{ \sigma_{\min} \| \bm{r}_{k}\|_{2} }{\sqrt{K}}
\end{equation}
\begin{equation}
\begin{split}
&\mathbb{P}(\rho(\bm{r}_{k}) < 1 \hspace{0.1cm} \forall \hspace{0.1cm} k=1,2,\cdots,\ K)\\
&\ge \mathbb{P}\left( \max_{k} \frac{\sqrt{K} \max_{\bm{\psi}} \left| \left\langle \frac{\mathbf{P_{T^k}^{\perp}} \bm{\psi}}{\|\mathbf{P_{T^k}^{\perp}} \bm{\psi}\|_{2}},\bm{r}_{k} \right\rangle \right| }{ \| (\mathbf{P_{T^k}^{\perp}} \bm{\Phi}_{S} \bvec{D}_{2} )^{T} \bm{r}_{k} \|_{2}} < 1 \right) \\
&\ge \mathbb{P}\left( \max_{k} \max_{\bm{\psi}} \left| \left\langle \frac{\mathbf{P_{T^k}^{\perp}} \bm{\psi}}{\|\mathbf{P_{T^k}^{\perp}} \bm{\psi}\|_{2}},\frac{\bm{r}_{k}}{\| \bm{r}_{k} \|_{2}} \right\rangle \right| < \frac{\sigma_{\min}}{\sqrt{K}} \right)\\
\end{split}
\end{equation}
Using the stochastic independence of the columns of $\bm{\Phi}$, and defining $\bm{u}_k=\frac{\bm{r}_{k}}{\| \bm{r}_{k}\|_{2} }$ and $\epsilon=\frac{\sigma_{\min}}{\sqrt{K}}$, we can lower bound this probability by the product
\begin{align}
\label{eq:joint-rpoduct}
\ & \prod_{\mathbf{\bm{\psi}}}\mathbb{P}\left( \max_{k} \left| \left\langle \frac{\mathbf{P_{T^k}^{\perp} \bm{\psi}}}{\|\mathbf{P_{T^k}^{\perp} \bm{\psi}}\|_{2}},\mathbf{\frac{r_{k}}{\| r_{k} \|_{2}}} \right\rangle \right| < \epsilon \right)
\end{align}
Now, recall Lemma~\ref{lem:projected-joint-correlation} to find that \begin{align}
\ & \mathbb{P}\left( \max_{k} \left| \left\langle \frac{\mathbf{P_{T^k}^{\perp}} \bm{\psi}}{\|\mathbf{P_{T^k}^{\perp}} \bm{\psi}\|_{2}},\frac{\bm{r}_{k}}{\| \bm{r}_{k} \|_{2}} \right\rangle \right| < \epsilon \right)\nonumber\\
\ & \ge 1-\sum_{k=1}^K e^{-\frac{m(M-k-1)\epsilon^2}{2}}\nonumber\\
\ & \ge 1-K e^{-\frac{m(M_1)\epsilon^2}{2}}=1-K\exp\left(-\frac{\sigma_{\min}^2\sqrt{M_1-1}}{K}\right) e^{-\frac{M_1\sigma_{\min}^2}{2K}}
\end{align}
where $M_{1}=M-K-1$ (again slightly abusing the notation used earlier); the second inequality bounds the sum by $K$ times its largest term ($k=K$), and the last equality expands $m(M_1)=M_1+2\sqrt{M_1-1}$ with $\epsilon^2=\sigma_{\min}^2/K$.
Thus, we have,
\begin{align}
\label{eq:prob-succ-uncorrelated-dcitionary}
\mathbb{P}(E_{s}) \ge \left(1-K \exp\left(-\frac{\sigma_{\min}^2\sqrt{M_1-1}}{K}\right)e^{-\frac{M_1\sigma_{\min}^2}{2K}}\right)^{N-K}(1-e^{-cM})
\end{align}
To complete the proof, we need to find bounds on the number of measurements $M$, that will allow recovery with high probability.
To that end, first note that $(1-x)^k\ge 1-kx$ for all $x\in[0,1]$ and $k\in \mathbb{N}$. Thus, we can write, \begin{align*}
\mathbb{P}(E_S)\ge 1-K(N-K)\exp\left(-\frac{\sigma_{\min}^2\sqrt{M_1-1}}{K}\right)e^{-\frac{M_1\sigma_{\min}^2}{2K}}-e^{-cM}
\end{align*}
Further, observe that $e^{-\frac{\sigma_{\min}^2\sqrt{M_1-1}}{K}}\le \frac{K}{\sigma_{\min}^2\sqrt{M_1-1}}$ (using $e^{-x}\le 1/x$), which allows us to further simplify the probability expression as \begin{align*}
\mathbb{P}(E_S)\ge 1-\frac{K^2(N-K)}{\sigma_{\min}^2\sqrt{M_1-1}}e^{-\frac{M_1\sigma_{\min}^2}{2K}}-e^{-cM}
\end{align*}
Now, using the assumptions $M\ge 2K+2$ and $N>K\sqrt{K}$, so that $\sqrt{M_1-1}\ge \sqrt{K}$, together with the simple fact that $K^2(N-K)<N^3$, we get \begin{align*}
\mathbb{P}(E_S) & \ge 1-\frac{N^3}{\sigma_{\min}^2\sqrt{K}}e^{-\frac{M_1\sigma_{\min}^2}{2K}}-e^{-cM}\\
\ & \ge 1-\frac{1}{\sigma_{\min}^2}\left(\frac{N}{K}\right)^8e^{-\frac{M_1\sigma_{\min}^2}{2K}}-e^{-cM}
\end{align*}
Now, the third term can be absorbed into the second, possibly after a change of constants, to produce the following simplified expression, \begin{align*}
\mathbb{P}(E_S)\ge 1-c_1\left(\frac{N}{K}\right)^8e^{-\frac{M_1\sigma_{\min}^2}{2K}}
\end{align*}
It now takes little effort to see that the failure probability can be upper bounded by any given constant $\delta\in (0,1)$ provided $M_1>CK\ln (N/({c_2(\delta) K}))$, where $C$ is a suitably chosen constant and $c_2(\delta)$ is a constant that depends only on $\delta$.
\end{proof}
The process of choosing the constant $C$ in the proof of Lemma~\ref{lem:least-singular-value-lem3} can be made more rigorous to obtain rough estimates of $C$. See Appendix~\ref{sec:constant-C-lem:uncorrelated-prob-bound} for details.
\section{Experiments}
Several sets of numerical experiments are carried out to verify the claims presented in this paper.
In the first experiment we verify the result of Theorem~\ref{thm:uncorrelated-dcitionary-recovery-probability} by empirically calculating the number of measurements required by the OLS algorithm to recover a sparse signal of dimension $N$ with a probability of, say, $0.95$. The experiment was performed for $N=1024$ and $N=3000$. Figure~\ref{fig:measurement_vs_sparsity} shows the accuracy of the estimates. The solid line in each figure is drawn after estimating the constant $C$. We can see that the solid line matches the actual data points reasonably well. However, it is worth stating that our calculated value of the associated constant $C$ (see Appendix~\ref{sec:constant-C-lem:uncorrelated-prob-bound}) is much higher than the constant observed empirically. Nevertheless, we are able to show that the number of measurements required by OLS is of the order of $K\log(N/K)$, which improves upon previous results~\cite{tropp2007signal} for greedy algorithms like OMP, where the required number of measurements is of the order of $K\log N$.
In the second experiment we compare numerical simulation results for the percentage of signals recovered vs the number of measurements $M$, for $N=1024$, against the theoretical lower bounds on the probability of success found in the proof of Theorem~\ref{thm:uncorrelated-dcitionary-recovery-probability}. From the plots in Figure~\ref{fig:prob-recovery-vs-measurements} we see that, although the shape of the theoretical lower bound on the recovery probability matches the simulations well, there is a large gap between the actual values. We attribute this gap to the analysis technique and the constants it produces. We admit that this is an intrinsic limitation of our analysis approach, which was also the case for Tropp's result on OMP~\cite{tropp2007signal}, where a few such limitations arising from the analysis technique were listed.
Finally, the last experiment verifies the runtime complexity of OLS, claimed in Section~\ref{sec:OLS-runtime-complexity} to be $\mathcal{O}(KMN)$. A modification to the implementation of the OLS algorithm, as suggested previously, makes the algorithm run in time linear in each of $K$, $M$ and $N$. The experimental results in Fig.~\ref{fig:running_complexity_ols} show the correspondence between theory and experiment.
\label{sec:experiments-uncorrelated-dictionary}
\begin{figure}[t!]
\centering
\includegraphics[height=2.5in,width=3.5in]{recovery_prob_95}
\caption{Measurements for 95 percent recovery probability with $N=3000$}
\label{fig:measurement_vs_sparsity}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[height=2.5in,width=3.5in]{probability_recovery_ols_sim_vs_theory}
\caption{Probability of recovery vs no. of measurements}
\label{fig:prob-recovery-vs-measurements}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[height=2.5in,width=3.5in]{Execution_time_OLS_small}
\caption{Execution time in seconds for OLS}
\label{fig:running_complexity_ols}
\end{figure}
\section{Signal recovery using OLS with Hybrid Dictionaries}
In the preceding sections it was shown, both theoretically and empirically, that the Orthogonal Least Squares algorithm can efficiently recover a sparse signal when the measurement matrix consists of i.i.d. Gaussian entries. A large body of research has been devoted to sparse signal representation/recovery when the associated measurement matrix/dictionary satisfies strong RIP bounds. If a matrix satisfies the RIP of order $K$, every $K$ columns of the matrix are almost orthogonal to each other. However, the RIP is a tool for worst case analysis of recovery algorithms. Average case analysis, on the other hand, studies the performance of recovery algorithms in a Monte-Carlo setup. It has been noted in the literature, from such average case performance plots, that recovery algorithms can often perform quite well even with sensing matrices that do not satisfy the RIP bounds established by worst case analysis. Soussen et al.~\cite{soussen2013joint} discussed this kind of matrix and empirically showed that OLS is a better recovery algorithm than OMP for such matrices. The measurement matrix has columns $\bm{\phi}_{i} = u_{i}\bm{1}+\bm{n}$, where $\bm{\phi}_{i}$ is the $i^{\mathrm{th}}$ column of the matrix, $u_{i}$ is a random variable uniformly distributed on $[0,T]$, and $\bm{n}$ is a vector whose entries are i.i.d. Gaussian. Thus, $\bm{\Phi} \in \mathbb{R}^{M \times K}$ with
\begin{equation}
\mathbf{\Phi}=\mathbf{1}\mathbf{u}^{T}+\mathbf{N}
\end{equation}
where $\mathbf{u}=\lbrace u_{i} \rbrace_{i=1}^{K}$ is a vector of i.i.d. uniformly distributed random variables, while $\mathbf{N}$ is a matrix containing i.i.d. Gaussian entries. However, there is nothing special about the vector $\mathbf{1}$; any other vector could be chosen in its place.\\
It is important to note that before using a greedy algorithm like OLS or OMP to recover a sparse signal from a hybrid dictionary, the columns must be normalized, since the column norms vary widely, unlike the case of Gaussian dictionaries, where the norm of a column is highly concentrated around its mean. Since OMP/OLS selects columns via an inner product criterion, a column with a large norm is highly likely to be selected even if it is not part of the support set. In a normalized hybrid matrix, every column is $\bm{\phi}_{i}=\alpha_{i}(u_i\bm{1}+\bvec{n})$, where $\alpha_{i}=1/\|u_i\bm{1}+\bvec{n} \|_{2}$. For signal recovery, the normalized hybrid matrix is used to identify the columns in the support set, and the original hybrid matrix is then used to compute the signal $\bvec{x}$.
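As a concrete illustration, the construction and normalization above can be sketched in NumPy (a minimal sketch; the noise variance $M^{-1}$ matches the distribution used in the lemmas later, and the function names are ours, not from any library):

```python
import numpy as np

def hybrid_dictionary(M, N, T, seed=None):
    """Sample Phi = 1 u^T + N as in the equation above: each column is
    u_i * ones(M) plus a vector of i.i.d. Gaussian entries."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, T, size=N)                          # u_i ~ U[0, T]
    noise = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))   # n_i ~ N(0, I/M)
    return np.ones((M, 1)) * u[None, :] + noise

def normalize_columns(Phi):
    """Scale every column to unit l2 norm before running OMP/OLS."""
    return Phi / np.linalg.norm(Phi, axis=0, keepdims=True)

Phi = hybrid_dictionary(M=64, N=256, T=100, seed=0)
Phi_n = normalize_columns(Phi)
```

The normalized matrix `Phi_n` would be used in the identification step, while the original `Phi` is kept for the final coefficient computation, as described above.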
\subsection{Motivation for studying signal recovery with Hybrid dictionaries}
Soussen et al.~\cite{soussen2013joint} extended Tropp's~\cite{tropp2004greed} Exact Recovery Condition (ERC) for OMP to an ERC for OLS, deriving the ERC at any step for both OMP and OLS. If the ERC holds at a certain iteration, it can be shown that the algorithm will successfully recover the rest of the columns without any uncertainty, i.e., with probability 1.\\
For Gaussian dictionaries, Soussen et al.\ empirically showed that the probability that the ERC is met at any iteration is the same for both OMP and OLS, but for hybrid dictionaries, the ERC can be guaranteed to hold with high probability at a much earlier iteration for OLS than for OMP. In other words, the likelihood that OLS identifies a correct column from the support set increases with every iteration.
Although the normalized hybrid measurement matrix defined above consists of independently generated random columns, these columns are structured: each column is distributed around a single vector, which is $\bm{1}$ in our case. Moreover, it can be shown that the smallest singular value of this matrix is very close to zero with high probability, i.e., we can find a vector $\bvec{u}$ such that $\mathbf{\Phi}\bvec{u} \approx \bvec{0}$.
Since the probability of identifying a correct column from the support set increases with every iteration of OLS, later in this paper we will show how OLS can be used to recover the correct support set by simply running the algorithm for more than $K$ iterations.
\subsection{A note on Soussen et al.~\cite{soussen2013joint}}
Soussen et al.~\cite{soussen2013joint} inspired us to study OLS and its recovery performance with hybrid measurement matrices in depth. Some of the chief contributions of their paper are:
\begin{itemize}
\item
Extension of Tropp's ERC-OMP to any arbitrary iteration for both OMP and OLS
\begin{equation}
\max_{j\notin Q^{*}}F^{Oxx}_{Q^{*},Q}(\mathbf{a_{j}}) < 1
\end{equation}
with $\mathrm{Card}(Q)=q\ (<k)$, where $k$ is the actual sparsity of the signal to be recovered.
\item
With hybrid dictionaries, the phase transition curve for OLS was empirically shown to be significantly higher than that of OMP.
\end{itemize}
This establishes that once OLS reaches a given iteration and has recovered a proper subset of the support set, it is guaranteed to recover the complete support set.
This inspired us to study what is special about the OLS algorithm that makes it better than OMP at recovering sparse signals, especially when the columns of the dictionary are highly coherent, as in the hybrid matrix. Also, as stated earlier, the smallest singular value of the hybrid matrix is very close to zero with high probability.
In the following section, we recreate the experiments to analyze the behavior of OLS and OMP for hybrid dictionaries. In the subsequent sections we also provide a mathematical explanation of the experiments.
\subsection{Experiments}
Here, we discuss the experiments performed to study the performance of Orthogonal Least Squares and Orthogonal Matching Pursuit with hybrid as well as Gaussian dictionaries. We consider an $M \times N$ hybrid dictionary with $N=256$ and a fixed sparsity $K=12$. For every $M$, we perform 1000 trials: in each trial we generate a random measurement matrix and use it to measure a randomly generated $K(=12)$-sparse signal under noiseless conditions. We then use both the OLS and the OMP algorithm to recover the signal. The probability of recovery is calculated as
\begin{equation}
\begin{split}
&\textnormal{Probability of recovery}\ = \\
&\frac{\textnormal{Number of signals recovered successfully}}{\textnormal{Number of trials}}
\end{split}
\end{equation}
In the above experiments, we calculated the conditional success probability for both OLS and OMP with Gaussian and hybrid dictionaries. Both the OLS and the OMP algorithm go through $K(=12)$ iterations, and at every iteration the algorithm selects a particular column of the measurement matrix. The algorithm is said to be successful if it chooses a correct column at every iteration. We empirically calculated $P(S_{i})$ for every iteration $i=1,2,\cdots,\ K$, where $S_{i}$ denotes success at all iterations $j=1,2,\cdots,\ i$. Therefore, $P(S_{i}|S_{i-1})=\frac{P(S_{i} \cap S_{i-1})}{P(S_{i-1})}=\frac{P(S_{i})}{P(S_{i-1})}$. Effectively, $P(S_{i}|S_{i-1})$ denotes the conditional probability that the $i^{\mathrm{th}}$ iteration is successful given that all previous iterations were successful.
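The empirical estimates $P(S_i)$ and the ratios $P(S_i|S_{i-1})=P(S_i)/P(S_{i-1})$ can be computed from per-trial, per-iteration success flags as in the following sketch (the `success` array would in practice be filled by running OMP/OLS on each trial; the toy array below is illustrative only):

```python
import numpy as np

def conditional_success(success):
    """success: boolean array of shape (trials, K); success[t, i] is True
    iff a correct column was picked at iteration i of trial t.
    Returns empirical P(S_i) and P(S_i | S_{i-1}) for i = 1..K."""
    all_correct = np.cumprod(success, axis=1)   # S_i: success at iterations 1..i
    p_s = all_correct.mean(axis=0)              # empirical P(S_i)
    with np.errstate(divide="ignore", invalid="ignore"):
        p_cond = p_s / np.concatenate(([1.0], p_s[:-1]))  # P(S_i)/P(S_{i-1})
    return p_s, p_cond

# toy example: iteration 1 succeeds in 3/4 trials, iteration 2 always succeeds
success = np.array([[1, 1], [1, 1], [1, 1], [0, 1]], dtype=bool)
p_s, p_cond = conditional_success(success)
```

Repeating this for each value of $M$ yields the curves of $P(S_i|S_{i-1})$ versus $M$ discussed below.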
One can see from Fig.~\ref{fig:OMP_OLS_with_Gaussian} that for Gaussian dictionaries, for both OLS and OMP, the curve of $P(S_{i}|S_{i-1})$ vs.\ $M$ increases with the iteration index and reaches the value 1 for numbers of measurements close to $N$ (the dimension of the signal). The final figure shows the overall recovery performance of both OMP and OLS: OLS is superior to OMP in terms of recovery probability, but the difference is not appreciable.\\
In Fig.~\ref{fig:OMP_OLS_with_Hybrid}, which shows the same experiment for hybrid dictionaries, the curve of $P(S_{i}|S_{i-1})$ vs.\ $M$ shows an increasing trend with increasing $i$ only for the OLS algorithm, not for OMP. Going by this trend, one can anticipate that there should exist an iteration after which OLS recovers all the remaining columns of the true support set with probability 1. We found this to be true for OLS, while we were unable to find any such step/iteration after which OMP succeeds with probability 1. In simple words, there exists an iteration (say $j$) such that once OLS is successful at all iterations prior to $j$, it is guaranteed to be successful at all subsequent iterations $j,j+1,\cdots,\ K$ (with probability 1), while no such iteration exists for OMP. In fact, one can show analytically that if OLS is successful up to $K-1$ iterations, it will definitely be successful at the last, i.e., the $K^{\mathrm{th}}$, iteration.
\begin{figure}[t!]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[height=2.5in,width=3.5in]{Gaussian_OMP_steps.eps}
\caption{OMP with Gaussian Dictionary}
\label{fig:Gaussian_OMP}
\end{subfigure}%
\hfill
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[height=2.5in,width=3.5in]{Gaussian_OLS_steps.eps}
\caption{OLS with Gaussian Dictionary}
\label{fig:Gaussian_OLS}
\end{subfigure}
\caption{Conditional success Probability for OMP and OLS with Gaussian Dictionary}
\label{fig:OMP_OLS_with_Gaussian}
\end{figure}
\begin{figure}[t!]
\centering
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[height=2.5in,width=3.5in]{OMP_step_hybrid_T_100_smaller.eps}
\caption{OMP with Hybrid Dictionary, T=100}
\label{fig:Hybrid_OMP}
\end{subfigure}%
\hfill
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[height=2.5in,width=3.5in]{OLS_step_hybrid_T_100_smaller.eps}
\caption{OLS with Hybrid Dictionary, T=100}
\label{fig:Hybrid_OLS}
\end{subfigure}
\caption{Conditional success Probability for OMP and OLS with Hybrid Dictionary}
\label{fig:OMP_OLS_with_Hybrid}
\end{figure}
\subsection{Analytical justification for the empirically observed phenomena}
\label{sec:ols-hybrid-dictionary}
In this section, we explain the phenomena observed in the above set of experiments. We observed that, with the hybrid dictionary, the probability of success at the $2^{\mathrm{nd}}$ iteration and beyond (in OLS), conditioned on the event that the previous iterations were successful, continuously increases, which is not the case with OMP. Also, there exists an iteration such that, if OLS is successful up to that iteration, it is guaranteed to be successful in the subsequent iterations, thus recovering the entire support set.
We justify these empirically observed phenomena for the generalized hybrid dictionary defined in Section~\ref{sec:random-measurement-ensemble}. This is achieved by first stating a series of lemmas that are needed to prove our main theorem on the performance of OLS with generalized hybrid dictionaries.
Recall that $W_r=\spn{\bvec{a}_1,\cdots,\ \bvec{a}_r}$. A natural question is whether there is a collection of $r$ columns of the generalized hybrid dictionary that approximately spans $W_r$ with high probability. The following lemma shows that this is indeed the case, though the probability depends strongly on the level of correlation, which is a function of $T$:
\begin{lem}
\label{lem:prob_hybrid_vector_align}
Let $\{\bm{\phi}_i/\norm{\bm{\phi}_i}\}_{i=1}^N$ be a collection of $N$ normalized columns forming a generalized hybrid dictionary, with orthonormal basis $\{\bvec{a}_i\}_{i=1}^r$, and parameter $T>0$ (see Definition.~\ref{defn:hybrid}). Define the events $$A_{ij}:=\left\{\left|\inprod{\frac{\boldsymbol{\phi}_i}{\norm{\boldsymbol{\phi}_i}}}{\bvec{a}_j}\right|\ge 1-\delta\right\}$$ $\forall\ 1\le i\le N,\ 1\le j\le r$. Then,
\begin{align}
\mathbb{P}\left(A_{ij}\right)\ge p(\delta)\quad \forall i,j
\end{align}
where \begin{align}
\lefteqn{p(\delta)}& &\\
\ & = \sup_{\tiny{\sigma>M-1+(r-1)T^2} }2\left\{\left(1-e^{g(\sigma)/2}\right)\mathbb{E}_u\left(Q\left(\sqrt{M}(\sqrt{\sigma}\delta_1 -u)\right)\right)\right\}
\end{align}
where $u\sim\mathcal{U}[0,T),\ \delta_1^2=\frac{(1-\delta)^2}{1-(1-\delta)^2}$,
\begin{align*}
g(\sigma)&=-\sigma-(r-1)T^2+\left(h(\sigma)-(M-1)\ln\left(\frac{M-1+h(\sigma)}{2\sigma}\right)\right)
\end{align*}
and \begin{align*}
h(\sigma)=\sqrt{(M-1)^2+4\sigma(r-1)T^2}
\end{align*}
\end{lem}
\begin{proof}
The proof is postponed to Appendix.~\ref{sec:proof_lem_hybrid_vector_align}
\end{proof}
\begin{lem}
\label{lem:hybrid_subspace_align}
Let $L$ be a positive integer such that $r\le L\le N$. Then it follows that, whenever $\delta \in (0,0.293)$, and $p(\delta)\in [0,1/r]$,\begin{align}
&\mathbb{P}\left(\mbox{\emph{in every collection of }}L \ \mbox{\emph{columns, }}\right.\nonumber\\
&\left.\exists \mbox{\emph{at least one set of $r$ columns, indexed by}}\right. \\
&\left.\ i_1,i_2,\cdots,\ i_r, \ \mbox{\emph{such that the event}} \right.\nonumber\\
& \left.\ A_{i_11}\cap A_{i_22}\cap\cdots\cap A_{i_rr}\ \mbox{\emph{takes place}}\right)\nonumber\\
& \ge \sum_{j=0}^r (-1)^j \binom{r}{j}(1-jp(\delta))^L
\end{align}
\end{lem}
\begin{proof}
The proof is postponed to Appendix.~\ref{sec:proof_lem_hybrid_subspace_align}.
\end{proof}
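For intuition, the lower bound of Lemma~\ref{lem:hybrid_subspace_align} is easy to evaluate numerically. The sketch below uses placeholder values of $r$, $L$, and $p(\delta)$ rather than values derived from Lemma~\ref{lem:prob_hybrid_vector_align}; it only illustrates that the bound approaches 1 as the number of available columns $L$ grows:

```python
from math import comb

def subspace_align_lower_bound(r, L, p):
    """Inclusion-exclusion style lower bound of the lemma:
    sum_{j=0}^{r} (-1)^j C(r, j) (1 - j p)^L, valid for p in [0, 1/r]."""
    return sum((-1) ** j * comb(r, j) * (1 - j * p) ** L for j in range(r + 1))

# placeholder parameters: r = 2 aligned directions, per-event probability p = 0.3
b_small = subspace_align_lower_bound(r=2, L=20, p=0.3)
b_large = subspace_align_lower_bound(r=2, L=200, p=0.3)
```

With $r=2$ and $p=0.3\le 1/r$, the bound is already close to 1 at $L=20$ and essentially 1 at $L=200$.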
\begin{lem}
\label{lem:proj_error_bound}
Let, w.l.o.g., the support selected up to the $r^{\mathrm{th}}$ step of OLS be $T_r=\{1,\ 2,\cdots,\ r \}$, such that $\min_{1\le k\le r}\abs{\inprod{\bm{\phi}_{k}/\norm{\bm{\phi}_k}}{\bvec{a}_k}}\ge 1-\delta$. Then,
\begin{align}
\norm{\dualproj{T_r}\bvec{a}_i}\le \sqrt{2\delta-\delta^2}
\end{align}
for all $1\le i\le r$.
\end{lem}
\begin{proof}
The proof follows from the simple observation that \begin{align*}
\norm{\dualproj{T_r}\bvec{a}_i} & \le\norm{\dualproj{\{i\}}\bvec{a}_i} \\
\ &=\sqrt{1-\frac{\abs{\inprod{\bm{\phi}_i}{\bvec{a}_i}}^2}{\norm{\bm{\phi}_i}^2}}\\
\ &\le \sqrt{1-(1-\delta)^2}=\sqrt{2\delta-\delta^2}.
\end{align*}
\end{proof}
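The bound of Lemma~\ref{lem:proj_error_bound} can be checked numerically with a synthetic column built to have inner product exactly $1-\delta$ with $\bvec{a}_i$ (a sketch, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
M, delta = 50, 0.1
a = np.zeros(M); a[0] = 1.0                       # the unit vector a_i

# build a unit-norm column phi with <phi, a> = 1 - delta exactly:
# mix a with a random unit vector w orthogonal to a
w = rng.normal(size=M); w -= (w @ a) * a; w /= np.linalg.norm(w)
phi = (1 - delta) * a + np.sqrt(1 - (1 - delta) ** 2) * w

# residual of a after orthogonal projection onto span{phi}
resid = a - (phi @ a) * phi
assert np.linalg.norm(resid) <= np.sqrt(2 * delta - delta ** 2) + 1e-12
```

Here the bound is met with equality, since $\norm{\dualproj{\{i\}}\bvec{a}_i}^2 = 1-\inprod{\bm{\phi}_i}{\bvec{a}_i}^2$ for a unit-norm column; projecting onto a larger selected set can only shrink the residual further.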
The following lemma shows that the generalized hybrid dictionary defined above is indeed correlated in the sense that it has a high worst case coherence.
\begin{lem}
\label{lem:coherence-generalized-hybrid-dictionary}
For a sensing matrix $\bm{\Phi}\in \mathbb{R}^{M\times N}$, with columns belonging to a generalized hybrid dictionary as defined in Definition~\ref{defn:hybrid}, for some $\delta\in (0,0.293)$, \begin{align*}
\lefteqn{\mathbb{P}\left(\mu(\Phi)\ge 1-4\delta+2\delta^2\right)} & & \\
\ & \ge \sum_{k=0}^r (-1)^{k}\binom{r}{k}\sum_{j=0}^{k} \binom{k}{j} \frac{(2r)!}{(2r-j)!}p(\delta)^j(1-kp(\delta))^{2r-j}
\end{align*}
where $p(\delta)$ was defined in Lemma.~\ref{lem:prob_hybrid_vector_align}.
\end{lem}
\begin{proof}
The proof is postponed to Appendix.~\ref{sec:proof-lem-coherence-generalized-hybrid-dictionary}.
\end{proof}
It is evident that the poor performance of OLS/OMP at the very first iteration is due to the presence of a constant bias term, residing in the space $W_r$, that is added to each column of the hybrid measurement matrix. This makes the columns of the matrix correlated, or packed closely together, which in turn leads to incorrect column identification.\\
The above lemma will be helpful in proving our claim that, from the $2^{\mathrm{nd}}$ iteration of OLS onwards, the strength of the bias term contributed by the vectors in $W_r$ decreases, so that the correlation among the columns decreases. This helps the algorithm identify the correct column from the support set.
Our aim is to explain the improved conditional recovery performance of OLS from the second iteration onwards. The following theorem and the discussion that follows provide an explanation for the phenomena observed in the experiments.
\begin{thm}
\label{thm:ols-acting-like-omp}
Let $T^k$ be the set of indices selected in the first $k(r\le k<K)$ iterations of OLS, and, w.l.o.g., assume that $T^k=\{1,2,\cdots,\ k\}$. Also, assume that the first $k$ iterations of OLS are successful, that is $T^k\subset T $, where $T$ is the actual (unknown) support of the unknown vector $\bvec{x}$. Then, at the $(k+1)^{\mathrm{th}}$ iteration, the probability that OLS chooses another correct index, is at least as large as the probability of OMP choosing a correct index in its first iteration, with a $(K-k)$-sparse unknown vector, and with a hybrid sensing matrix, with non-orthonormal bias, such that the Frobenius norm of the bias matrix is upper bounded by $\sqrt{r}\kappa(\delta)$, where $\kappa(\delta):=\sqrt{2\delta-\delta^2}$.
\end{thm}
\begin{proof}
Since the unknown vector $\bvec{x}$ is $K$-sparse with support set $T$, (w.l.o.g. assumed to be $\{1,2,\cdots,\ K\}$) the measurement vector $\bvec{y}$ can be represented as \begin{align*}
\bvec{y}=x_1\bm{\phi}_1/\norm{\bm{\phi}_1}+\cdots+x_K\bm{\phi}_K/\norm{\bm{\phi}_K}
\end{align*}
Now, note that the residual after $k^{\mathrm{th}}$ iteration becomes, \begin{align*}
\bvec{r}^k=\dualproj{T^k}\bm{y}=x_{k+1}\dualproj{T^k}\frac{\bm{\phi}_{k+1}}{\norm{\bm{\phi}_{k+1}}}+\cdots+x_K\dualproj{T^k}\frac{\bm{\phi}_K}{\norm{\bm{\phi}_K}}
\end{align*}
Observe that the operator $\dualproj{T^k}$ has two distinct eigenvalues, $0$ and $1$. Let the corresponding eigenspaces have dimensions $d_0$ and $d_1$ respectively. Then, it follows that $\dim(R(\bvec{\Phi}_{T^k}))=d_0$ and $d_0+d_1=M$. One can construct a unitary matrix $\bvec{U}$ with the columns as the orthonormal eigenvectors corresponding to the eigenvalues of $\dualproj{T^k}$, so as to get, \begin{align*}
\bvec{U}=[\bvec{c}_1\ \cdots\ \bvec{c}_{d_1}\ \bvec{b}_1\ \cdots\ \bvec{b}_{d_0}]
\end{align*}
where $\{\bvec{b}_1,\cdots,\ \bvec{b}_{d_0}\}$ and $\{\bvec{c}_1,\ \cdots,\ \bvec{c}_{d_1}\}$ are a set of orthonormal bases for the eigenspaces corresponding to the $0$ and $1$ eigenvalues, respectively, of $\dualproj{T^k}$. Denote $\bvec{U}_0=[\bvec{b}_1\ \bvec{b}_{2}\ \cdots\ \bvec{b}_{d_0}]$, and $\bvec{U}_1=[\bvec{c}_1\ \cdots\ \bvec{c}_{d_1}]$. Then, note that $\bvec{U}_0,\ \bvec{U}_1$ are not square, but they satisfy $\bvec{U}^T_0\bvec{U}_0=\bvec{I}_{d_0}$, $\bvec{U}_1^T\bvec{U}_1=\bvec{I}_{d_1}$.
Recall that $\bm{\phi}_i:={\sum_{j=1}^r u_{ij}\bvec{a}_j+\bvec{n}_i}$
Take some $k+1\le i\le K$. Then, \begin{align*}
\dualproj{T^k}\frac{\bm{\phi}_i}{\norm{\bm{\phi}_i}}=&\frac{\sum_{j=1}^r u_{ij}\dualproj{T^k}\bvec{a}_j+\dualproj{T^k}\bvec{n}_i}{\norm{\sum_{j=1}^r u_{ij}\bvec{a}_j+\bvec{n}_i}}
\end{align*}
Now, since $r\le k<K<N$, Lemma.~\ref{lem:hybrid_subspace_align} dictates that, by appropriately choosing some $\delta\in [0,0.293]$ such that $p(\delta)<1/r$ ($p(\delta)$ was defined in Lemma.~\ref{lem:prob_hybrid_vector_align}), w.h.p.\ one can find $r$ indices, w.l.o.g.\ taken as $\{1,2,\cdots,\ r\}$, from the collection $\{1,2,\cdots,\ k\}$ such that $\abs{\inprod{\bm{\phi}_j/\norm{\bm{\phi}_j}}{\bvec{a}_j}}\ge 1-\delta$ for $1\le j\le r$, where $\{\bvec{a}_j\}_{j=1}^r$ are defined in Definition~\ref{defn:hybrid} of the generalized hybrid dictionary.
Then, from Lemma.~\ref{lem:proj_error_bound}, we have that for any $j,\ 1\le j\le r$, \begin{align*}
\norm{\dualproj{T^k}\bvec{a}_j}\le & \norm{\dualproj{T_r}\bvec{a}_j} \le \kappa(\delta)
\end{align*}
where $T_r:=\{1,2,\cdots,\ r\}$.
On the other hand, note that $\dualproj{T^k}=\bvec{U}\bvec{\Sigma}{\bvec{U}}^T$, where $\bvec{\Sigma}=\mathrm{diag}\left(1,\ 1,\ \cdots\ ,0,\ 0,\ \cdots,\ 0\right)$, with $d_0$ number of $0$'s and $d_1$ number of $1$'s. Thus, $\dualproj{T^k}=\bvec{U}_1\bvec{U}_1^T$. Consequently, for any $1\le j\le r$, $\dualproj{T^k}\bvec{a}_j=\bvec{U}_1\bm{\epsilon}_j$, where $\bm{\epsilon}_j=\bvec{U}_1^T\bvec{a}_j$. Observe that $\norm{\bm{\epsilon}_j}=\norm{\bvec{U}_1\bm{\epsilon}_j}\le \kappa(\delta)$.
Also note that, $\dualproj{T^k}\bvec{n}_i=\bvec{U}_1\tilde{\bvec{n}_i}$, where $\tilde{\bvec{n}_i}=\bvec{U}_1^T\bvec{n}_i$. Now, \emph{given} the columns $\{\bm{\phi}_i/\norm{\bm{\phi}_i}\}_{i\in T^k}$, $\tilde{\bvec{n}_i}\sim\mathcal{N}(\bvec{0},M^{-1}\bvec{I}_{d_1})$.
Putting everything together, we get \begin{align*}
\dualproj{T^k}\frac{\bm{\phi}_i}{\norm{\bm{\phi}_i}}&=\frac{\bvec{U}_1\left(\sum_{j=1}^r u_{ij}\bm{\epsilon}_j+{\tilde{\bvec{n}}_i}\right)}{\norm{\sum_{j=1}^r u_{ij}\bm{\epsilon}_j+\tilde{\bvec{n}}_i}}
\end{align*}
where $\tilde{\bm{\phi}}_i:=\sum_{j=1}^r u_{ij} \bm{\epsilon}_j+\tilde{\bvec{n}}_i$. So, in the $(k+1)^{\mathrm{th}}$ step of OLS, the new index is chosen by maximizing
$\abs{\inprod{\tilde{\bm{\phi}}_i}{\bvec{r}^k}}/\norm{\tilde{\bm{\phi}}_i}$, where $\bvec{r}^k$ is obtained from measuring a $(K-k)$-sparse vector, and $\left\{\tilde{\bm{\phi}}_i/\norm{\tilde{\bm{\phi}}_i}\right\}$ form a dictionary with $\tilde{\bm{\phi}}_i=\sum_{j=1}^r {u}_{ij}{\bm{\epsilon}}_j+\tilde{\bvec{n}}_i$. The bias matrix here is $\bvec{E}:=[\bm{\epsilon}_1\ \bm{\epsilon}_2\ \cdots\ \bm{\epsilon}_r]$; since $\norm{\bm{\epsilon}_j}\le \kappa(\delta)$, it is easy to check that its Frobenius norm is bounded by $\sqrt{r}\kappa(\delta)$.
\end{proof}
\begin{remark}
This is analogous to the generalized hybrid dictionary defined before, with the major difference that the columns $\{\bm{\epsilon}_{j}\}$ are \emph{not} orthonormal. Nevertheless, the Frobenius norm of the bias can decrease with a suitable choice of $\delta$. This is further clarified in the corollary below, which stresses the specific case $r=1$.
\end{remark}
\begin{cor}
\label{cor:ols_hybrid_dictionary_r=1_case}
For the case $r=1$, after picking $k(<K)$ correct indices in the first $k$ iterations, OLS acts, in the $(k+1)^{\mathrm{th}}$ iteration, as OMP with a generalized hybrid dictionary with parameters $r=1$ and $T\sqrt{2\delta-\delta^2}$.
\end{cor}
\begin{proof}
This is a straightforward implication of Theorem.~\ref{thm:ols-acting-like-omp}, for $r=1$.
\end{proof}
\begin{remark}
The importance of this corollary's implication is far greater than the corollary itself. It says that after OLS captures $k$ correct indices in $k$ iterations, it acts like OMP in the $(k+1)^{\mathrm{th}}$ iteration with a hybrid dictionary that is less coherent. This, in turn, acts as a warm start for a new OLS run with a less coherent dictionary; after a few more iterations, if a few more correct indices are chosen, it again acts as OMP with a hybrid dictionary whose coherence is further reduced, as the parameter decreases to $T(2\delta-\delta^2)$. This ``decorrelation'' effect is the key to the success of OLS with coherent dictionaries. As can be seen from the proof of Theorem.~\ref{thm:ols-acting-like-omp}, this feature of OLS is attributed to its selection criterion, which searches for indices based on the angles between the residual error vector and the \emph{normalized} orthogonal projection errors of the columns of the sensing matrix, after projection onto the span of the columns already selected. In OMP, this normalization of the orthogonal projection error vector is absent, and that is the reason for the failure of OMP with hybrid dictionaries.
\end{remark}
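The selection rules contrasted in the remark above can be made concrete as follows (a minimal sketch assuming unit-norm input columns; the function names are ours):

```python
import numpy as np

def select_omp(Phi, r, S):
    """OMP identification: pick the unselected column most correlated
    with the current residual r."""
    scores = np.abs(Phi.T @ r)
    scores[list(S)] = -np.inf
    return int(np.argmax(scores))

def select_ols(Phi, r, S):
    """OLS identification: project every column away from span{Phi_S},
    then NORMALIZE the projection error before correlating with r --
    the normalization step that OMP omits."""
    M = Phi.shape[0]
    if S:
        Phi_S = Phi[:, list(S)]
        P = Phi_S @ np.linalg.pinv(Phi_S)     # projector onto span{Phi_S}
    else:
        P = np.zeros((M, M))
    proj = Phi - P @ Phi                      # orthogonal projection errors
    norms = np.linalg.norm(proj, axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        scores = np.where(norms > 1e-12, np.abs(proj.T @ r) / norms, -np.inf)
    scores[list(S)] = -np.inf
    return int(np.argmax(scores))

Phi = np.eye(4)
r = np.array([0.1, 0.9, 0.3, 0.0])
i_omp, i_ols = select_omp(Phi, r, []), select_ols(Phi, r, [])
```

With highly coherent columns, the division by `norms` in `select_ols` is what rescales nearly collinear candidates, producing the decorrelation effect discussed above.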
\section{OLS support recovery with Hybrid dictionary}
\begin{figure}[t!]
\centering
\centering
\includegraphics[height=2.5in, width=3.5in]{Prob_correct_index_iteration.eps}
\caption{Probability of choosing a correct index at the start of various iterations}
\label{fig:Prob_correct_index_iteration}
\end{figure}%
\begin{figure}[t!]
\centering
\centering
\includegraphics[height=2.5in,width=3.5in]{T_delta_iterations.eps}
\caption{$T\sqrt{2\delta-\delta^2}$ for different iterations}
\label{fig:T_delta_iterations}
\end{figure}%
\begin{figure}[t!]
\centering
\includegraphics[height=2.5in,width=3.5in]{Ps.eps}
\caption{Probability of success for different ranks (2K iterations)}
\label{fig:Different_ranks}
\end{figure}%
Corollary.~\ref{cor:ols_hybrid_dictionary_r=1_case} states that, in a generalized hybrid dictionary with $r=1$, after $k(>1)$ successful iterations of OLS, the probability of success at the next iteration equals the probability of success of the first iteration of OMP used to recover a $(K-k)$-sparse signal with a modified hybrid dictionary having parameter $T\sqrt{2\delta-\delta^2}$. We now perform an experiment with $T=100$ and $K=12$. For every iteration $k$, we experimentally find the value of $\delta$, using the equations in Lemmas~\ref{lem:prob_hybrid_vector_align} and \ref{lem:hybrid_subspace_align} for the case $r=1$, such that the probability $p(\delta)$ defined in Lemma.~\ref{lem:prob_hybrid_vector_align} evaluates to more than $0.99$. We plot the resulting values of $T\sqrt{2\delta-\delta^2}$ in Fig.~\ref{fig:T_delta_iterations}; they decrease rapidly with the iteration number, as estimated in Corollary.~\ref{cor:ols_hybrid_dictionary_r=1_case}.\\
Using the values of $T\sqrt{2\delta-\delta^2}$ from the previous experiment, we perform an experiment with OMP and plot the probability of success at various iterations w.r.t.\ $M$, as shown in Fig.~\ref{fig:Prob_correct_index_iteration}. As the iterations proceed, the probability of success increases, which justifies the phenomenon observed empirically before.
Since, with hybrid dictionaries, a high probability of success for OLS is assured only in the last iteration, the overall probability of success of OLS is quite low. However, if we run OLS for more than $K$ iterations, say $2K$ iterations, the probability that the $2K$ columns obtained by OLS contain the true support set increases considerably. We show this empirically in Fig.~\ref{fig:Different_ranks}. If we assume $M>2K$, then by projecting $\bvec{y}$ onto the $2K$ selected columns we can find the true $K$ columns of the support set, since, when $M>2K$ and the column entries are generated from a continuous probability distribution, every set of $2K$ columns is linearly independent almost surely.
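The pruning step described above can be sketched as follows, assuming some routine has already returned $2K$ candidate indices containing the true support (the helper name `prune_support` is ours):

```python
import numpy as np

def prune_support(Phi, y, idx_2K, K):
    """Solve least squares on the 2K candidate columns (assumed to
    contain the true support; linearly independent a.s. when M > 2K)
    and keep the K largest-magnitude coefficients."""
    coef, *_ = np.linalg.lstsq(Phi[:, list(idx_2K)], y, rcond=None)
    keep = np.argsort(np.abs(coef))[-K:]
    return sorted(int(i) for i in np.asarray(idx_2K)[keep])

# toy check: true support {0, 3} hidden among four candidate columns
rng = np.random.default_rng(2)
Phi = rng.normal(size=(10, 8))
x = np.zeros(8); x[0], x[3] = 1.5, -2.0
y = Phi @ x
recovered = prune_support(Phi, y, idx_2K=[0, 1, 3, 5], K=2)
```

In the noiseless setting the least-squares coefficients on the candidate set are exact, so the coefficients of the spurious columns vanish and the true support survives the pruning.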
\section{Conclusion}
In this paper we have established probabilistic recovery guarantees for the Orthogonal Least Squares algorithm with compressive measurements through Gaussian dictionaries under noiseless conditions. Specifically, we found lower bounds on the probability of success of the OLS algorithm in uncorrelated Gaussian dictionaries with normalized columns. We showed that OLS can be implemented so that it has the same computational complexity as the popular OMP algorithm. We defined a certain type of correlated dictionary that we call the generalized hybrid dictionary, and numerically demonstrated the competitive edge offered by OLS over OMP in these dictionaries in terms of recovery performance. We gave theoretical justifications for this numerical evidence and found that the core reason behind the success of OLS in correlated dictionaries is a ``decorrelation'' phenomenon of the sensing matrix, which is unique to OLS because of the rule it uses in its identification step. Our future aim is to provide a more rigorous explanation of the improved performance of the OLS algorithm at any general iteration, and possibly to use it to design algorithms with superior performance.
\appendices
\section{Proof of Lemma.~\ref{lem:theta_1-prob-upper-bound}}
\label{sec:proof-lemma-theta_1-prob-upper-bound}
\begin{proof}
For each $n\in \mathbb{N}$, define the function $g_n:[0,1)\to(-\infty,0]$ by $g_n(x)=\ln f_n(x)$. It is easy to verify that $f_n$ is continuous in $[0,1]$ and differentiable in $(0,1)$, which makes $g_n$ continuous in $[0,1)$ and differentiable in $(0,1)$. Note that $f_n(0)=1,\ f_n(1)=0$, and $f_n$ is decreasing in $[0,1]$. Using the mean value theorem, we get, for any $x\in (0,1)$, \begin{align*}
g_n(x)= & g_n(0)+xg_n'(\theta x)\\
\ =& xg_n'(\theta x)
\end{align*} for some $\theta\in [0,1]$. Now, for any $x\in (0,1)$, $g_n'(x)=f_n'(x)/f_n(x)$. Since $x$ is positive, $g_n$ can be upper bounded by a linear function if a constant upper bound on $g_n'$ can be found. Let $h_n=-g_n'$, with domain $(0,1)$. We want to estimate $\inf_{x\in (0,1)}h_n(x)$. Note that $h_n$ is continuous in $(0,1)$, with $h_n(x)<\infty\ \forall x\in (0,1)$. Furthermore, observe that \begin{align*}
h_n(x)=&\frac{1}{2}\frac{(1-x)^{(n-1)/2}}{\sqrt{x}\int_{0}^{\sqrt{1-x}}\frac{u^n}{\sqrt{1-u^2}}du}\\
\implies \lim_{x\to 0+}h_n(x)=&\infty\\
\lim_{x\to 1-}h_n(x)=&\infty
\end{align*} i.e., $h_n$ diverges at both endpoints. Since $h_n$ is continuous on $(0,1)$ and $h_n(0+)=h_n(1-)=\infty$, $h_n$ attains its infimum at an interior point $x^*:=\argmin_{x\in (0,1)}h_n(x)\in (0,1)$. $x^*$ satisfies \begin{align*}
h_n'(x^*)=&0\\
\ \implies f_n''(x^*)f_n(x^*)=(f_n'(x^*))^2
\end{align*}
Then, \begin{align*}
h_n(x^*)=& -\frac{f_n'(x^*)}{f_n(x^*)}=-\frac{f_n''(x^*)}{f_n'(x^*)}
\end{align*}
Recalling the definition of $f_n$, we find, $\forall x\in (0,1)$, \begin{align*}
f_n'(x)=& -\frac{1}{2A_n}\frac{(1-x)^{(n-1)/2}}{\sqrt{x}}\\
f_n''(x)=& \frac{(1-x)^{(n-3)/2}}{4A_nx^{3/2}}(1+(n-2)x)
\end{align*}
Thus \begin{align*}
h_n(x^*)=\frac{1+(n-2)x^*}{2x^*(1-x^*)}
\end{align*}
Now, let us look at the real valued function $\phi_n$, with domain $[0,1]$, defined as \begin{align*}
\phi_n(x)=\frac{1+(n-2)x}{x(1-x)}
\end{align*}
It is straightforward to see that $\phi_n$ attains its minimum at $a=\frac{1}{\sqrt{n-1}+1}$. Thus $h_n(x^*)\ge \phi_n(a)=(\sqrt{n-1}+1)^2$. Hence, \begin{align*}
g_n(x)=-xh_n(\theta x)\le -m(n)x
\end{align*}
where $m(n)=(\sqrt{n-1}+1)^2$. The desired result follows immediately.
\end{proof}
\section{Rough estimate of constant $C$ in Lemma.~\ref{lem:least-singular-value-lem3}}
\label{sec:constant-C-lem:uncorrelated-prob-bound}
We had shown in the proof of Theorem~\ref{thm:uncorrelated-dcitionary-recovery-probability}
\begin{equation}
\begin{split}
\mathbb{P}(E_{s})&=1-\frac{c_{1}N^{2.5}}{\sigma \sqrt{K}}e^{\frac{-\sigma^2 M_{1}}{2K}}-e^{\frac{-\epsilon_{1}^2K}{2}}-2Ke^{\frac{-\epsilon_{2}^2K}{8}}\\
\textnormal{where} \hspace{0.1cm} \sigma &\ge\left(1-\sqrt{\frac{K}{M_{1}}}-\epsilon_{1}\sqrt{\frac{K}{M_{1}}}\right)\left(\sqrt{1-\epsilon_{2}\sqrt{\frac{K}{M_{1}}}}\right)\\
M_{1}&=M-K-1
\end{split}
\end{equation}
We claim that the number of measurements required to reduce the probability of failure below $3\delta$, for some $\delta \in (0,1)$, is $CK \ln(\frac{N}{\delta K})+K+1$ for some positive constant $C$. Choosing $\epsilon_{1}=\left(\frac{2\ln(N/K\delta)}{K}\right)^{\frac{1}{2}}$ and $\epsilon_{2}=\left(\frac{8\ln(N/\delta)}{K}\right)^{\frac{1}{2}}$, we get \\
$e^{\frac{-\epsilon_{1}^2K}{2}}=\frac{K\delta}{N} \le \delta$ and $2Ke^{\frac{-\epsilon_{2}^2K}{8}}=\frac{2K\delta}{N} \le \delta$\\
Consider $\epsilon_{2}\sqrt{K/M_{1}}=\left(\frac{8\ln(N/\delta)}{M_{1}} \right)^{\frac{1}{2}} \le\left(\frac{8}{C} \right)^{\frac{1}{2}}$, using the assumption on $M_{1}$ and $K>1$.\\
$\frac{\sigma^{2}M_{1}}{K}\ge\left( \sqrt{\frac{M_{1}}{K}}-1-\epsilon_{1} \right)^{2}\left(1-(\frac{8}{C})^{\frac{1}{2}} \right)$\\
$e^{-\frac{\sigma^{2}M_{1}}{K}}=\left( \frac{\delta K}{N} \right)^ {\left[ \sqrt{C} -\frac{1}{\ln(N/\delta K)} - \sqrt{\frac{2}{K}} \right]^2 \left( 1- (\frac{8}{C})^{\frac{1}{2}} \right)/2}$\\
$\frac{c_{1}N^{2.5}}{\sigma \sqrt{K}}e^{\frac{-\sigma^2 M_{1}}{2K}}= \frac{c_{1}N^{2.5}}{\sigma \sqrt{K}} \left( \frac{\delta K}{N} \right)^ {\left[ \sqrt{C} -\frac{1}{\ln(N/\delta K)} - \sqrt{\frac{2}{K}} \right]^2/2 }$ \\
Let $\sqrt{C}=\left( \sqrt{S} + \frac{1}{\ln(N/\delta K)} + \sqrt{\frac{2}{K}} \right)$, where $S$ is some sufficiently large number such that \\
$\frac{c_{1}N^{2.5}}{\sigma \sqrt{K}}e^{\frac{-\sigma^2 M_{1}}{2K}} \le \delta $. \\
One good choice of $S$ could be 32.
So we have $C=\left( \sqrt{32}+\frac{1}{\ln(N/\delta K)} +\sqrt{\frac{2}{K}} \right)^{2}$
\section{Proof of Lemma.~\ref{lem:prob_hybrid_vector_align}}
\label{sec:proof_lem_hybrid_vector_align}
The following simple observation will be useful for the proof of Lemma.~\ref{lem:prob_hybrid_vector_align}.
\begin{lem}
\label{lem:increasing_g}
Let $g_{a,b}$ be a real valued function, parameterized by $a,b\in \mathbb{R}^+$, defined as \begin{align*}
g_{a,b}(x)&=\sqrt{a^2+4bx}-x\\
\ &-a\ln\left(\frac{\sqrt{a^2+4bx}+a}{2b}\right)
\end{align*}
Then, $g_{a,b}$ is a monotonically increasing function.
\end{lem}
\begin{proof}[Proof of Lemma.~\ref{lem:prob_hybrid_vector_align}]
We first note that, given the $\{u_{ij}\}$s, the random vector $\bm{\phi}_i$ is distributed as $\mathcal{N}_{M}\left(\sum_{j=1}^r u_{ij} \bvec{a}_j,\ M^{-1}\bvec{I}_M\right)$. Let us define the matrix $\bvec{U}$ such that \begin{align*}
\bvec{U}=\begin{bmatrix}
\bvec{a}_1^T\\
\bvec{a}_2^T\\
\vdots\\
\bvec{a}_r^T\\
\bvec{a}_{r+1}^T\\
\vdots\\
\bvec{a}_M^T
\end{bmatrix}
\end{align*} where the collection $\{\bvec{a}_i\}_{i=1}^M$ forms an orthogonal basis for $\mathbb{R}^M$. By construction $\bvec{U}$ is unitary which implies, for any $1\le i,j\le M$, \begin{align*}
\frac{\inprod{\bm{\phi}_i}{\bvec{a}_j}}{\norm{\bm{\phi}_i}}&=\frac{\inprod{\bm{U\phi}_i}{\bvec{Ua}_j}}{\norm{\bm{U\phi}_i}}\\
\ &=\frac{\psi_{ij}}{\norm{\bm{\psi}_i}}
\end{align*}
where $\bm{\psi}_i=\bm{U\phi_i}=[{\psi}_{i1}\ {\psi}_{i2}\cdots\ {\psi}_{iM}]^T$. Note that, by construction, $\psi_{ij}\sim\mathcal{N}(u_{ij},1/M),\ j=1,2,\cdots,\ r$ and $\psi_{ij}\sim\mathcal{N}(0,1/M),\ j=r+1,\cdots,\ M$. Then, \begin{align*}
\lefteqn{\mathbb{P}\left(\left|\frac{\inprod{\bm{\phi}_i}{\bvec{a}_j}}{\norm{\bm{\phi}_i}}\right|\ge 1-\delta\mid\{u_{ik}\}_{k=1}^r\right)}& &\\
\ &=\mathbb{P}\left(\frac{\psi_{ij}^2}{\sum_{k=1}^M \psi_{ik}^2}\ge (1-\delta)^2\right)\\
\ &=\mathbb{P}\left(\sum_{k\ne j}\psi_{ik}^2\le \frac{\psi_{ij}^2}{\delta_1^2}\right)\\
\ &\ge \sup_{\sigma>0}\mathbb{P}\left(\sum_{k\ne j}\psi_{ik}^2\le \sigma\right)\mathbb{P}(\psi_{ij}^2\ge \delta_1^2\sigma)
\end{align*} where $\delta_1$ is defined as in Lemma.~\ref{lem:prob_hybrid_vector_align}.
We are interested in the case where $1\le j\le r$. To bound the whole probability, we bound the two factors of the product separately, as follows.
\begin{itemize}
\item Note that $\psi_{ij}\sim \mathcal{N}(u_{ij},M^{-1})$ for $1\le j\le r\implies$ $$\mathbb{P}(\psi_{ij}^2\ge \delta_1^2\sigma)=2Q\left(\sqrt{M}(\delta_1\sqrt{\sigma}-u_{ij})\right)$$
\item Using Chernoff's bound, \begin{align*}
\lefteqn{\mathbb{P}\left(\sum_{k\ne j}\psi_{ik}^2\le \sigma\right)}& &\\
\ & \ge 1-\inf_{\theta>0}e^{-\theta \sigma}\mathbb{E}\left(\exp\left[\theta\sum_{k\ne j}\psi_{ik}^2\right]\right)\\
\ &=1-\inf_{0<\theta<1/2}e^{-\theta\sigma}\frac{\exp\left(\frac{\sum_{k\ne j}{u_{ik}^2}\theta}{1-2\theta}\right)}{(1-2\theta)^{(M-1)/2}}\\
\ &=1-\inf_{0<\theta<1/2}\exp(f(\theta,\sigma))
\end{align*}
where $f(\theta,\sigma)=\frac{\sum_{k\ne j}{u_{ik}^2}\theta}{1-2\theta}-\theta\sigma-\frac{M-1}{2}\ln(1-2\theta)$
\end{itemize}
Then, defining $t=1-2\theta$, it is easy to observe that, for a fixed $\sigma$, $f(\theta,\sigma)$ is minimized at $$t=t_{\mathrm{min}}=\frac{M-1+\sqrt{(M-1)^2+4\lambda_{ij}\sigma}}{2\sigma}$$ where $\lambda_{ij}:=\sum_{k\ne j}u_{ik}^2$. Note that the condition $\theta\in [0,1/2)$ constrains the range of $\sigma$ to be $[M-1+\lambda_{ij},\infty)$. Now, it is a simple matter of computation to show that $f(\theta_{\mathrm{min}},\sigma)=\left(-\sigma+g_{M-1,\sigma}(\lambda_{ij})\right)/2$, where $\theta_{\mathrm{min}}=(1-t_{\mathrm{min}})/2$. Using Lemma~\ref{lem:increasing_g}, we get $f(\theta_{\mathrm{min}},\sigma)\le\left(-\sigma+g_{M-1,\sigma}((r-1)T^2)\right)/2$, since $\lambda_{ij}=\sum_{k\ne j}u_{ik}^2\le (r-1)T^2$. It is important to note that we can upper bound $g_{M-1,\sigma}$ in this way only when $\sigma>M-1+(r-1)T^2$.
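The ``simple matter of computation'' above can be checked numerically. The following sketch is ours; the parameter values $M=10$, $\lambda_{ij}=2.5$, $\sigma=15$ are arbitrary, chosen so that $\sigma>M-1+\lambda_{ij}$. It verifies the closed form of the stationary point of $f$ in $t=1-2\theta$ and the identity $f=(-\sigma+g_{M-1,\sigma}(\lambda_{ij}))/2$ at that point:

```python
import math

# Numerical check (ours; M=10, lam=2.5, sigma=15 arbitrary, sigma > M-1+lam) of
# the computation above: with t = 1-2*theta,
#   f(t) = lam*(1-t)/(2t) - (1-t)*sigma/2 - (M-1)/2*log(t),
# the stationary point is t* = (M-1 + sqrt((M-1)^2 + 4*lam*sigma))/(2*sigma),
# it is a minimum of f in t, and f(t*) = (-sigma + g_{M-1,sigma}(lam))/2.

M, lam, sigma = 10, 2.5, 15.0
a = M - 1

def f(t):
    return lam * (1 - t) / (2 * t) - (1 - t) * sigma / 2 - a / 2 * math.log(t)

def g(a_, b_, x):   # g_{a,b}(x) from the auxiliary lemma
    s = math.sqrt(a_**2 + 4 * b_ * x)
    return s - x - a_ * math.log((s + a_) / (2 * b_))

t_star = (a + math.sqrt(a**2 + 4 * lam * sigma)) / (2 * sigma)
assert 0 < t_star <= 1                  # i.e. theta = (1-t*)/2 lies in [0, 1/2)
deriv = (f(t_star + 1e-6) - f(t_star - 1e-6)) / 2e-6
assert abs(deriv) < 1e-6                                        # stationary point
assert f(t_star) <= min(f(t_star - 1e-3), f(t_star + 1e-3))     # a local minimum
assert abs(f(t_star) - (-sigma + g(a, sigma, lam)) / 2) < 1e-9  # the identity
```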
Then, we get \begin{align*}
\lefteqn{\mathbb{P}\left(\left|\frac{\inprod{\bm{\phi}_i}{\bvec{a}_j}}{\norm{\bm{\phi}_i}}\right|\ge 1-\delta\mid\{u_{ik}\}_{k=1}^r\right)}& &\\
\ &\ge \sup_{\sigma>M-1+(r-1)T^2}2\left[(1-e^{g(\sigma)/2})Q\left(\sqrt{M}(\delta_1\sqrt{\sigma}-u_{ij})\right)\right]
\end{align*}
where the function $g$ is defined as in Lemma~\ref{lem:prob_hybrid_vector_align}. Finally, taking expectation with respect to the random variables $\{u_{ik}\}_{k=1}^r$ yields the desired expression for $\mathbb{P}(A_{ij})$ in Lemma~\ref{lem:prob_hybrid_vector_align}.
\end{proof}
\section{Proof of Lemma~\ref{lem:hybrid_subspace_align}}
\label{sec:proof_lem_hybrid_subspace_align}
The following lemmas will be essential for the proof of Lemma~\ref{lem:hybrid_subspace_align}:
\begin{lem}
\label{lem:condition_mutual_independence_Aij}
Let the events $A_{ij}$ be defined as in Lemma~\ref{lem:prob_hybrid_vector_align} for $1\le i\le N,\ 1\le j\le r$. Then, if $\delta\in (0,1-1/\sqrt{2})$, the events $A_{ij_1}$ and $A_{ij_2}$ are mutually exclusive for all $1\le i\le N$ and $1\le j_1,j_2\le r,\ j_1\ne j_2$.
\end{lem}
\emph{\remark{Lemma~\ref{lem:condition_mutual_independence_Aij} demonstrates that the condition $\delta\in (0,1-1/\sqrt{2})$ ensures that a column of a generalized hybrid dictionary cannot be ``too close'', simultaneously, to more than one of the basis vectors $\{\bvec{a}_{i}\}$}.}
\begin{proof}
Let $\{\bvec{a}_i\}_{i=1}^M$ form an orthonormal basis for $\mathbb{R}^M$. Then, one can uniquely express a column $\bm{\tilde{\phi}}_i:=\bm{\phi}_i/\norm{\bm{\phi}_i}$ as \begin{align*}
\bm{\tilde{\phi}}_i=\sum_{k=1}^M \epsilon_{ik}\bvec{a}_k
\end{align*}
where $\epsilon_{ik}=\inprod{\bm{\tilde{\phi}_i}}{\bvec{a}_k}$. Since the columns are normalized, we have $\sum_{k=1}^M \epsilon_{ik}^2=1$. Hence, if $|\epsilon_{ij_1}|\ge 1-\delta$ for some $1\le j_1\le M$, then for any other index $1\le j_2\le M$ with $j_2\ne j_1$ we have $|\epsilon_{ij_2}|\le \sqrt{1-(1-\delta)^2}<1-\delta$ under the condition $\delta\in (0,1-1/\sqrt{2})$. This concludes the proof.
\end{proof}
\begin{lem}
\label{lem:increasing_fn_p}
The real valued polynomial $P(L,p,r)$ defined as \begin{align*}
P(L,p,r)=\sum_{j=0}^r (-1)^j \binom{r}{j}(1-jp)^L
\end{align*}
is a positive valued, monotonically increasing function of $p$ for $p\in [0,1/r]$, where $r\ge 1,\ L\ge r$.
\end{lem}
\begin{proof}
The fact that $P(L,p,r)$ is positive valued for $p\in [0,1/r]$ and $L\ge r$ will be clear from the proof of Lemma~\ref{lem:hybrid_subspace_align}: it will be shown there that this polynomial is in fact a probability, hence always nonnegative, and strictly positive for $p\in (0,1/r)$.
To show that this polynomial is increasing in $p$, note that for $r=1$, $P(L,p,1)=1-(1-p)^L$ which is clearly increasing when $p\in [0,1]$.
For $r\ge 2$, note that, from the definition of $P$, $P(L,p,r)$ is continuous and differentiable everywhere, w.r.t. $p$. Then, \begin{align*}
\lefteqn{\frac{\partial P(L,p,r)}{\partial p}}& &\\
\ &=-L\sum_{j=0}^r j(-1)^j\binom{r}{j}(1-jp)^{L-1}\\
\ &=Lr\sum_{j=0}^{r-1}(-1)^{j}\binom{r-1}{j}(1-(j+1)p)^{L-1}\\
\ &=Lr(1-p)^{L-1}P\left(L-1,\frac{p}{1-p},r-1\right)
\end{align*}
Note that $p\in [0,1/r]\implies \frac{p}{1-p}\in[0, \frac{1}{r-1}]$, which implies, from the positivity of the polynomial $P(L,p,r)$ for $p\in [0,1/r],\ L\ge r\ge 1$, that $P(L-1,p/(1-p),r-1)$ is also positive valued for $r\ge 2,\ p\in [0,1/r]$. Thus, the polynomial $P(L,p,r)$ is monotonically increasing in $p$ for $r\ge 2$.
\end{proof}
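A quick numerical illustration (ours; the values $L=12$, $r=3$ are arbitrary) of the lemma and of the derivative identity derived in its proof:

```python
from math import comb

# Numerical illustration (ours; L=12, r=3 arbitrary) of the lemma:
# P(L,p,r) is positive and increasing on (0, 1/r), and its derivative satisfies
#   dP/dp = L*r*(1-p)**(L-1) * P(L-1, p/(1-p), r-1).

def P(L, p, r):
    return sum((-1)**j * comb(r, j) * (1 - j*p)**L for j in range(r + 1))

L, r = 12, 3
grid = [i / 1000 * (1 / r) for i in range(1, 1000)]
vals = [P(L, p, r) for p in grid]
assert all(v > 0 for v in vals)                      # positivity on (0, 1/r)
assert all(b > a for a, b in zip(vals, vals[1:]))    # monotone increasing

# the derivative identity, checked by a central difference at one interior point
p, eps = 0.2, 1e-6
lhs = (P(L, p + eps, r) - P(L, p - eps, r)) / (2 * eps)
rhs = L * r * (1 - p)**(L - 1) * P(L - 1, p / (1 - p), r - 1)
assert abs(lhs - rhs) < 1e-6
```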
\begin{proof}[Proof of Lemma~\ref{lem:hybrid_subspace_align}]
Let us first discuss the model of the random experiment that generates the events $A_{ij}$. To describe the experiment, we symbolize the different vectors involved according to the following notation:
\begin{description}
\item[$c_i$] denotes the symbol for the $i^{\mathrm{th}}$ normalized random column vector $\bm{\tilde{\phi}_i}$, for $1\le i\le L$
\item[$b_j$] denotes the symbol for the $j^{\mathrm{th}}$ vector $\bvec{a}_j$ in the orthonormal basis, for $1\le j\le r$
\end{description}
We say that an ``assignment'' of $c_i$ to $b_j$ has taken place if the event $A_{ij}$ occurs. Now, by virtue of Lemma~\ref{lem:condition_mutual_independence_Aij}, whenever $\delta\in (0,1-1/\sqrt{2})$, the events $A_{ij}$ are pairwise mutually exclusive for any fixed $i$. Also, observe that, due to the independence of the random vectors $\bm{\tilde{\phi}_i}$, any pair of events $A_{ij}$ and $A_{kl}$ are stochastically independent, as long as $i\ne k$.
Thus the model for the random experiment can be described as below:
\begin{itemize}
\item An assignment of $c_i$ to $b_j$ occurs independently of an assignment of $c_k$ to $b_l$ whenever $i\ne k$
\item For a given $i$, $c_i$ can be assigned to at most one of the $b_j$, $1\le j\le r$.
\end{itemize}
Each of these assignment events occurs with the same probability, say $p$, which is at least $p(\delta)$, as shown in Lemma~\ref{lem:prob_hybrid_vector_align}.
Having set the stage for the experiment, let us now define, for a fixed $j$, $X_j$ as the random variable denoting the number of $c_i$'s assigned to $b_j$. Then, note that $0\le X_j\le L$ for any $1\le j\le r$, and $0\le \sum_{j=1}^r X_j\le L$. The objective of Lemma~\ref{lem:hybrid_subspace_align} then boils down to calculating the quantity $\mathbb{P}(X_1\ge 1,X_2\ge 1,\cdots,\ X_r\ge 1)$. We proceed by finding the joint distribution of the random variables $X_1,X_2,\cdots,\ X_r$.
Let $k_1,k_2,\cdots,\ k_r$ be natural numbers such that $0\le k_1,k_2,\cdots, k_r\le L $ and $\sum_{j=1}^r k_j\le L $. Then
\begin{align*}
\lefteqn{\mathbb{P}(X_1=k_1,X_2=k_2,\cdots,\ X_r=k_r)} & &\\
\ &=\binom{L}{k_1}p^{k_1}\cdot\binom{L-k_1}{k_2}p^{k_2}\\
\ &\cdots\binom{L-(k_1+k_2+\cdots+k_{r-1})}{k_{r}}p^{k_r}\\
\ &\cdot(1-rp)^{L-(k_1+k_2+\cdots+k_{r})}\\
\ &=\binom{L}{k_1\ k_2\ \cdots\ k_r\ L-\sum_{j=1}^r k_j}p^{\sum_{j=1}^r k_j}(1-rp)^{L-\sum_{j=1}^r k_j}
\end{align*}
That is, the random variables $X_1,X_2,\cdots,\ X_r,\ (L-\sum_{j=1}^r X_j)$ are multinomially distributed with parameters $p_1=p_2=\cdots=p_{r}=p,\ p_{r+1}=1-rp$. Obviously, the condition $p\in [0,1/r]$ is necessary here. Finally, we can find the desired probability as \begin{align*}
\lefteqn{\mathbb{P}(X_1\ge 1,X_2\ge 1,\cdots,\ X_r\ge 1)} & & \\
\ &=1-\mathbb{P}\left(\{X_1=0\}\cup \{X_2=0\}\cup\cdots\cup \{X_r=0\}\right)\\
\ &=1-\sum_{k=1}^r(-1)^{k-1}\\
\ & \sum_{1\le i_1<i_2<\cdots<i_k\le r}\mathbb{P}\left(X_{i_1}=0,X_{i_2}=0, \cdots, X_{i_k}=0\right)\\
\ &=1-\sum_{k=1}^r (-1)^{k-1}\binom{r}{k}(1-kp)^L\\
\ &=\sum_{k=0}^r (-1)^{k}\binom{r}{k}(1-kp)^L
\end{align*}
Now, since $p\ge p(\delta)$ from Lemma~\ref{lem:prob_hybrid_vector_align}, an application of Lemma~\ref{lem:increasing_fn_p} concludes the proof.
\end{proof}
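The inclusion--exclusion identity at the heart of the proof can be confirmed exactly on a small instance (our illustration; $L=6$, $r=3$, $p=0.2$ are arbitrary, with $rp<1$):

```python
from itertools import product
from math import comb, factorial, prod

# Exact check (our illustration; L=6, r=3, p=0.2 arbitrary with r*p < 1) that
# for (X_1,...,X_r, L - sum_j X_j) multinomial with cell probabilities
# (p,...,p, 1-r*p), the occupancy probability has the closed form derived above:
#   P(X_1>=1,...,X_r>=1) = sum_{k=0}^r (-1)^k C(r,k) (1-k*p)^L.

def occupancy_exact(L, p, r):
    """Brute-force sum of the multinomial pmf over all (k_1,...,k_r), k_j >= 1."""
    total = 0.0
    for ks in product(range(1, L + 1), repeat=r):
        s = sum(ks)
        if s > L:
            continue
        m = factorial(L) // (prod(factorial(k) for k in ks) * factorial(L - s))
        total += m * p**s * (1 - r*p)**(L - s)
    return total

def occupancy_closed(L, p, r):
    return sum((-1)**k * comb(r, k) * (1 - k*p)**L for k in range(r + 1))

L, p, r = 6, 0.2, 3
assert abs(occupancy_exact(L, p, r) - occupancy_closed(L, p, r)) < 1e-12
```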
\section{Proof of Lemma~\ref{lem:coherence-generalized-hybrid-dictionary}}
\label{sec:proof-lem-coherence-generalized-hybrid-dictionary}
\begin{lem}
\label{lem:increasing_fn_q}
The real valued polynomial $Q(L,p,r)$ defined as \begin{align*}
Q(L,p,r)=\sum_{k=0}^r (-1)^{k}\binom{r}{k}\sum_{j=0}^{k} \binom{k}{j} \frac{L!}{(L-j)!}p^j(1-kp)^{L-j}
\end{align*}
is a positive valued, monotonically increasing function of $p$ for $p\in [0,1/r]$, where $r\ge 1,\ L\ge r$.
\end{lem}
\begin{proof}
From the proof of Lemma~\ref{lem:coherence-generalized-hybrid-dictionary}, $Q(L,p,r)$ will be understood as a probability expression, which implies the non-negativity of $Q(L,p,r)$.
Now observe, \begin{align*}
\lefteqn{\frac{\partial Q(L,p,r)}{\partial p}} & & \\
\ =& \sum_{k=0}^r (-1)^k \binom{r}{k}\\
\ & \sum_{j=0}^k \binom{k}{j}\frac{L!}{(L-j)!}\left(jp^{j-1}(1-kp)^{L-j}\right.\\
\ & \left.-k(L-j)p^j(1-kp)^{L-j-1}\right)
\end{align*}
After a bit of manipulation, and using the identity $\binom{k}{j}-\binom{k-1}{j}=\binom{k-1}{j-1}$, it follows that \begin{align*}
\lefteqn{\frac{\partial Q(L,p,r)}{\partial p}} & & \\
\ &= \sum_{k=1}^r (-1)^{k-1} k \binom{r}{k}\\
\ & \left(\sum_{j=1}^{k}\binom{k-1}{j-1}\frac{L!}{(L-j-1)!}p^j(1-kp)^{L-j-1}\right)\\
\ &= r\sum_{k=1}^{r}(-1)^{k-1}\binom{r-1}{k-1}\\
\ &\left(\sum_{j=0}^{k-1}\binom{k-1}{j}\frac{L!}{(L-2-j)!}p^{j+1}(1-kp)^{L-2-j}\right)\\
\ &= rL(L-1)p(1-p)^{L-2}\\
\ &\left[\sum_{k=0}^{r-1}(-1)^k\binom{r-1}{k}\sum_{j=0}^k\binom{k}{j}\frac{(L-2)!}{(L-2-j)!}q^{j}(1-kq)^{L-2-j}\right]\\
\ &=rL(L-1)p(1-p)^{L-2}Q(L-2,q,r-1)>0
\end{align*}
where $q=\frac{p}{1-p}$. This proves that $Q(L,p,r)$ is increasing in $p$, as long as $p\in (0,1/r)$.
\end{proof}
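As with $P(L,p,r)$, both the probabilistic interpretation of $Q(L,p,r)$ (made precise in the next proof) and its monotonicity can be checked numerically on a small instance (our illustration; $L=9$, $r=2$ are arbitrary):

```python
from itertools import product
from math import comb, factorial, prod

# Exact check (our illustration; L=9, r=2 arbitrary) that Q(L,p,r) equals
# P(X_1>=2,...,X_r>=2) for the multinomial model of the preceding proofs,
# and that it is increasing in p on (0, 1/r).

def Q(L, p, r):
    return sum((-1)**k * comb(r, k)
               * sum(comb(k, j) * (factorial(L) // factorial(L - j))
                     * p**j * (1 - k*p)**(L - j) for j in range(k + 1))
               for k in range(r + 1))

def exact_all_at_least_two(L, p, r):
    """Brute-force sum of the multinomial pmf over all (k_1,...,k_r), k_j >= 2."""
    total = 0.0
    for ks in product(range(2, L + 1), repeat=r):
        s = sum(ks)
        if s > L:
            continue
        m = factorial(L) // (prod(factorial(k) for k in ks) * factorial(L - s))
        total += m * p**s * (1 - r*p)**(L - s)
    return total

L, r = 9, 2
assert abs(Q(L, 0.15, r) - exact_all_at_least_two(L, 0.15, r)) < 1e-12
vals = [Q(L, i * 0.0005, r) for i in range(1, 1000)]
assert all(b > a for a, b in zip(vals, vals[1:]))   # monotone on (0, 1/r)
```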
\begin{proof}[Proof of Lemma~\ref{lem:coherence-generalized-hybrid-dictionary}]
The columns of $\bm{\Phi}$ are of the form $\frac{\bm{\phi}_i}{\norm{\bm{\phi}_i}}$, where $\bm{\phi}_i=\sum_{j=1}^r u_{ij}\bvec{a}_j+\bvec{n}_i$; the properties of these variables can be recalled from Definition~\ref{defn:hybrid} of the generalized hybrid dictionary. For the sake of simplicity we will denote $\frac{\bm{\phi}_i}{\norm{\bm{\phi}_i}}$ by $\bm{\psi}_i$. Now, let us assume that, in any collection of $K$ columns, there exist $2r$ columns, w.l.o.g. $\{\bm{\psi}_i\}_{i=1}^{2r}$, such that $\min_{1\le i\le r}\{\abs{\inprod{\bm{\psi}_i}{\bvec{a}_i}}\}\ge 1-\delta$ and $\min_{r+1\le i\le 2r}\{\abs{\inprod{\bm{\psi}_i}{\bvec{a}_{i-r}}}\}\ge 1-\delta$. Let $\{\bvec{a}_i\}_{i=1}^M$ form an orthonormal basis for $\mathbb{R}^M$, so that we can uniquely expand any column $\bm{\psi}_i$ as $\bm{\psi}_i=\sum_{j=1}^M \epsilon_{ij}\bvec{a}_j$. The existence of the $2r$ columns with the aforementioned property then forces the constraints $\abs{\epsilon_{ii}}\ge 1-\delta$ and $\abs{\epsilon_{i+r,i}}\ge 1-\delta$ for $i=1,2,\cdots,\ r$, while $\sum_{j=1}^M{\epsilon_{ij}^2}=1$ for all $i$, so that $\sum_{j\ne i}\epsilon_{ij}^2\le 1-(1-\delta)^2=2\delta-\delta^2$ and, similarly, $\sum_{j\ne i}\epsilon_{i+r,j}^2\le 2\delta-\delta^2$. Thus, for any $1\le i\le r$, we have \begin{align*}
\lefteqn{\abs{\inprod{\bm{\psi}_i}{\bm{\psi}_{i+r}}}} & & \\
\ &= \abs{\sum_{j=1}^M\epsilon_{ij}\epsilon_{i+r,j}}\\
\ &\ge \abs{\epsilon_{ii}\epsilon_{i+r,i}}-\abs{\sum_{j\ne i}\epsilon_{ij}\epsilon_{i+r,j}}\\
\ &\ge (1-\delta)^2-(1-(1-\delta)^2)\quad (\mbox{using the Cauchy--Schwarz inequality})\\
\ &=1-4\delta+2\delta^2
\end{align*}
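The elementary $\delta$-arithmetic used in the display above (and in the proof of Lemma~\ref{lem:condition_mutual_independence_Aij}) is easy to check numerically; this sketch is ours:

```python
import math

# Check (ours) of the elementary delta-arithmetic used in the last display and
# in the mutual-exclusivity lemma, for delta ranging over (0, 1 - 1/sqrt(2)):
for i in range(1, 293):
    d = i / 1000    # 0.001 <= d <= 0.292 < 1 - 1/sqrt(2) ≈ 0.2929
    # a unit vector cannot have two coordinates of absolute value >= 1-d:
    assert math.sqrt(1 - (1 - d)**2) < 1 - d
    # the Cauchy-Schwarz lower bound simplifies to 1 - 4d + 2d^2:
    assert abs((1 - d)**2 - (1 - (1 - d)**2) - (1 - 4*d + 2*d**2)) < 1e-12
```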
To find the probability that there are $2r$ columns $\bm{\psi}_i$ satisfying the aforementioned condition, we recall the proof of Lemma~\ref{lem:hybrid_subspace_align}, and recognize the desired probability as $\mathbb{P}\left(X_1\ge 2,\ X_2\ge 2,\ \cdots,\ X_r\ge 2\right)$ where $X_1,\cdots,X_r,\ (K-\sum_{i=1}^r X_i)$ are multinomially distributed random variables with parameters $(p_1=p,\ p_2=p,\cdots,\ p_r=p,\ p_{r+1}=1-rp)$, where $p$ is defined as in Lemma~\ref{lem:hybrid_subspace_align}, with $p\ge p(\delta)$. Consequently, the desired probability is \begin{align*}
\lefteqn{\mathbb{P}\left(X_1\ge 2,\ X_2\ge 2,\ \cdots,\ X_r\ge 2\right)} & &\\
\ &=1-\mathbb{P}\left(\{X_1\in \{0,1\}\}\cup \{X_2\in \{0,1\}\}\cup\cdots\cup \{X_r\in\{0,1\}\}\right)\\
\ &=1-\sum_{k=1}^r(-1)^{k-1}\\
\ & \sum_{1\le i_1<i_2<\cdots<i_k\le r}\mathbb{P}\left(X_{i_1}\in\{0,1\},X_{i_2}\in\{0,1\}, \cdots, X_{i_k}\in\{0,1\}\right)
\end{align*}
With a little effort, it can be shown that \begin{align*}
\lefteqn{\mathbb{P}\left(X_{i_1}\in\{0,1\},X_{i_2}\in\{0,1\}, \cdots, X_{i_k}\in\{0,1\}\right)} & & \\
\ &=\sum_{j=0}^{k} \binom{k}{j} \frac{K!}{(K-j)!}p^j(1-kp)^{K-j}
\end{align*}
Thus, the desired probability is \begin{align*}
\ & 1-\sum_{k=1}^r (-1)^{k-1}\binom{r}{k}\sum_{j=0}^{k} \binom{k}{j} \frac{K!}{(K-j)!}p^j(1-kp)^{K-j} \\
\ &=\sum_{k=0}^r (-1)^{k}\binom{r}{k}\sum_{j=0}^{k} \binom{k}{j} \frac{K!}{(K-j)!}p^j(1-kp)^{K-j}
\end{align*}
From Lemma~\ref{lem:increasing_fn_q}, the preceding term can be recognized as $Q(K,p,r)$. Then, an application of Lemma~\ref{lem:increasing_fn_q} along with the fact that $p\ge p(\delta)$ establishes that \begin{align*}
\ & \mathbb{P}\left(\exists 2r\ \mbox{columns $\{\bm{\psi}_i\}_{i=1}^{2r}$ in any collection of $K$ columns}\right.\\
\ & \left.\mbox{ such that} \min_{1\le i\le r}\min\left\{\abs{\inprod{\bm{\psi}_i}{\bvec{a}_i}},\abs{\inprod{\bm{\psi}_{i+r}}{\bvec{a}_i}}\right\}\ge 1-\delta\right)\\
\ &\ge \sum_{k=0}^r (-1)^{k}\binom{r}{k}\sum_{j=0}^{k} \binom{k}{j} \frac{K!}{(K-j)!}p(\delta)^j(1-kp(\delta))^{K-j}
\end{align*}
Thus, with at least the probability displayed above, any collection of $K$ columns of $\bm{\Phi}$ contains a pair of columns $\bm{\psi}_i,\ \bm{\psi}_{i+r}$ with $\abs{\inprod{\bm{\psi}_i}{\bm{\psi}_{i+r}}}\ge 1-4\delta+2\delta^2$, which establishes the claimed lower bound on the coherence.
\end{proof}
| {
"timestamp": "2016-08-01T02:05:48",
"yymm": "1607",
"arxiv_id": "1607.08712",
"language": "en",
"url": "https://arxiv.org/abs/1607.08712",
"abstract": "Though the method of least squares has been used for a long time in solving signal processing problems, in the recent field of sparse recovery from compressed measurements, this method has not been given much attention. In this paper we show that a method in the least squares family, known in the literature as Orthogonal Least Squares (OLS), adapted for compressed recovery problems, has competitive recovery performance and computation complexity, that makes it a suitable alternative to popular greedy methods like Orthogonal Matching Pursuit (OMP). We show that with a slight modification, OLS can exactly recover a $K$-sparse signal, embedded in an $N$ dimensional space ($K<<N$) in $M=\\mathcal{O}(K\\log (N/K))$ no of measurements with Gaussian dictionaries. We also show that OLS can be easily implemented in such a way that it requires $\\mathcal{O}(KMN)$ no of floating point operations similar to that of OMP. In this paper performance of OLS is also studied with sensing matrices with correlated dictionary, in which algorithms like OMP does not exhibit good recovery performance. We study the recovery performance of OLS in a specific dictionary called \\emph{generalized hybrid dictionary}, which is shown to be a correlated dictionary, and show numerically that OLS has is far superior to OMP in these kind of dictionaries in terms of recovery performance. Finally we provide analytical justifications that corroborate the findings in the numerical illustrations.",
"subjects": "Information Theory (cs.IT); Numerical Analysis (math.NA); Methodology (stat.ME)",
"title": "Signal Recovery in Uncorrelated and Correlated Dictionaries Using Orthogonal Least Squares",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9748211597623863,
"lm_q2_score": 0.7279754548076478,
"lm_q1q2_score": 0.7096458771341418
} |
https://arxiv.org/abs/2211.07559 | Lagrangian intersections and cuplength in generalised cohomology | We find lower bounds on the number of intersection points between two relatively exact Hamiltonian isotopic Lagrangians. The bounds are given in terms of the cuplength of the Lagrangian in various multiplicative generalised cohomology theories. The intersection of the Lagrangians need not be transverse, however, we require certain orientation assumptions. This gives stronger bounds than previous estimates on the number of self-intersection points of a suitable closed, relatively exact Lagrangian diffeomorphic to Sp$(2)$ or Sp$(3)$. Our proof uses Lusternik-Schnirelmann theory, following and extending work by Hofer. | \section{Two examples}\label{Examples}
Let
$$\Sp(n) := \normalfont\text{Sp}(2n;\mathbb{C})\cap U(2n)$$ be the \emph{compact symplectic group}. It is a compact simply-connected Lie group of dimension $n(2n+1)$. The zero section defines a Lagrangian embedding $\Sp(n)\hookrightarrow T^*\Sp(n)$, where we endow $T^*\Sp(n)$ with the canonical symplectic structure. As this embedding is a homotopy equivalence, $\Sp(n)$ is relatively exact.
We will consider $\Sp(2)$ and $\Sp(3)$ since their cuplength with respect to a certain generalised cohomology theory was computed in \cite{IM04} (see also \cite{Ki07}) and is strictly greater than their cuplength with respect to integral cohomology.
\begin{prop}[\cite{IMN03, IM04}] \label{computations}
The mod-2 and integral cuplengths of $\Sp(2)$ are
$$c_{\mathbb{Z}/2}(\Sp(2)) = c_\mathbb{Z}(\Sp(2)) = 3,$$
while
$$c_{h^*}(\Sp(2)) = 4$$
where $h^*$ is the cohomology theory associated to the truncated sphere spectrum $\mathbb{S}[0, 2]$. In particular, \cite{IM04} shows that its cuplength in real $K$-theory is
$$c_{KO}(\Sp(2)) = 4.$$
Similarly the cuplengths of $\Sp(3)$ in the same cohomology theories are given by
$$c_{\mathbb{Z}/2}(\Sp(3)) = c_\mathbb{Z}(\Sp(3)) = 4$$
and
$$c_{h^*}(\Sp(3)) = c_{KO}(\Sp(3)) = 5.$$
\end{prop}
\begin{rem}
We use Hofer's convention in \cite{Ho88} for cuplengths which differs by one compared to that of \cite{IM04}.
\end{rem}
Since $\Sp(2)$ and $\Sp(3)$ are Lie groups, they are parallelisable. By Proposition \ref{Orientability Holds Often} we can therefore apply Theorem \ref{Intersection Points and Cuplength} with real $K$-theory to either one as the zero section lying inside its cotangent bundle. This gives a (strictly) stronger bound on the Arnol'd number than Hofer's cup length estimate, though this estimate was already known due to work of Laudenbach and Sikorav \cite{LS85}, using finite-dimensional approximations.
\begin{cor}\label{corollary}
The minimum number of intersection points between a relatively exact Lagrangian embedding of $\Sp(2)$ (satisfying Proposition \ref{Orientability Holds Often}.\ref{sphere}) and its image under any Hamiltonian diffeomorphism is at least $4$. The same is true for $\Sp(3)$ with $5$ instead of $4$.
\end{cor}
\begin{prop}
The critical number of $\Sp(2)$ is 4.
\end{prop}
\begin{proof}
The critical number of $\Sp(2)$ is bounded below by 4 by Proposition \ref{computations}. On the other hand, \cite[Theorem 6.1]{Sm62} combined with the computation of its integral homology in Lemma \ref{Z_2 cuplength of Sp(2)} implies that the Morse number of $\Sp(2)$ is $4$, which is an upper bound for the critical number.
\end{proof}
The bound for $\Sp(2)$, and a stronger bound for $\Sp(3)$, were shown in \cite{LS85} for the respective zero section in the cotangent bundle. However, their methods are specifically geared towards cotangent bundles, while our bounds persist under Weinstein handle attachments. As an example, one can plumb a copy of $T^*\Sp(2)$, containing $\Sp(2)$ as the zero section, with the cotangent bundle of any other $2$-connected manifold of the same dimension to obtain a new Weinstein manifold $X$. Since $\Sp(2)$ is $2$-connected, the resulting manifold has the same $2$-skeleton (up to homotopy) as $T^*\Sp(2)$, which is trivial. Therefore Proposition \ref{Orientability Holds Often}\ref{sphere} still holds for the embedding $\Sp(2)\hookrightarrow X$. This gives a stronger bound than Hofer's estimate, in a case for which the estimate of \cite{LS85} does not apply.\\
We recap the computation of the integral cohomology rings of $\Sp(2)$ and $\Sp(3)$ here.
\begin{lem}[\cite{IM04}]\label{Z_2 cuplength of Sp(2)} The integral cuplengths of $\Sp(2)$ and $\Sp(3)$ are given by
$$c_{\mathbb{Z}}(\Sp(2)) = 3\quad\text{and}\quad c_{\mathbb{Z}}(\Sp(3)) = 4.$$ Furthermore, $H^*(\Sp(2))$ and $H^*(\Sp(3))$ are free of rank 4 and 8 respectively, over both $\mathbb{Z}$ and $\mathbb{Z}/2$.
\end{lem}
\begin{proof} Given $n$, we can identify $\mathbb{H}^n$ with $\mathbb{R}^{4n}$ to obtain a principal $\Sp(n-1)$-bundle $\Sp(n)\to S^{4n-1}$, induced by the canonical action of $\Sp(n)$ on the unit sphere in $\mathbb{H}^n$. Thus we can apply the Leray-Serre spectral sequence to compute the cohomology of $\Sp(2)$ with coefficients in $A = \mathbb{Z}$ or $A = \mathbb{Z}/2$. The $E_2$-page is given by
$$E^{p,q}_2 = H^p(S^7;H^q(\Sp(1);A))$$
which vanishes unless $p \in\{0,7\}$ and $q \in\{0, 3\}$. Hence the spectral sequence collapses at the second page for degree reasons. As $\Sp(1) \cong S^3$, we obtain
\begin{equation}\label{iso-cohom}H^*(\Sp(2); A) \cong H^*(S^7; A) \otimes_A H^*(S^3; A).\end{equation}
By the multiplicativity of the spectral sequence, \eqref{iso-cohom} is an isomorphism of graded rings. Therefore,
$$H^n(\Sp(2);A) = \begin{cases}
A\quad & n \in \{0,3,7,10\},\\
0 \quad &\normalfont\text{otherwise}
\end{cases}$$
and we can deduce the values of $c_{\mathbb{Z}/2}(\Sp(2))$ and $c_\mathbb{Z}(\Sp(2))$.\par
The same argument gives an isomorphism of graded rings
$$H^*(\Sp(3);A) \cong H^*(S^{11};A) \otimes_A H^*(\Sp(2);A).$$
Therefore,
$$H^n(\Sp(3);A) = \begin{cases}
A\quad & n \in \{0,3,7,10, 11, 14, 18, 21\},\\
0 \quad &\normalfont\text{otherwise}
\end{cases}$$
from which we deduce the corresponding statements for $\Sp(3)$.
\end{proof}
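The degree bookkeeping in the two tensor-product isomorphisms can be double-checked mechanically (our illustration): the degrees supporting $H^*(\Sp(2))$ and $H^*(\Sp(3))$ are exactly the pairwise sums below.

```python
# Degrees (ours, mechanical bookkeeping): H*(Sp(2)) ≅ H*(S^7) ⊗ H*(S^3) is free
# with one generator in each degree-sum, and H*(Sp(3)) ≅ H*(S^11) ⊗ H*(Sp(2)).
sp2 = sorted(a + b for a in (0, 7) for b in (0, 3))   # degrees of H*(Sp(2))
assert sp2 == [0, 3, 7, 10]
sp3 = sorted(a + b for a in (0, 11) for b in sp2)     # degrees of H*(Sp(3))
assert sp3 == [0, 3, 7, 10, 11, 14, 18, 21]
```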
\section{Working some things out}
What we want to prove:
\begin{thm}\label{Possible Main Theorem Two}Let $(X,\omega)$ be a closed symplectic manifold with $J\in \mathcal{J}(X,\omega)$, $L_0 \in \LCr(X)$ symplectically aspherical and $\psi \in \Ham(X,\omega)$. Let $(\bm{E},\mu,\iota)$ be a ring spectrum and suppose $L_0$ and $\normalfont\text{Ind}(\normalfont\text{ev}^*TX,\normalfont\text{ev}^*TL_0)$ are $\bm{E}$-oriented. Then $$\pi^*\colon E^{*}(L_0)\rightarrow E^{*}(\mathcal{M}_J(L_0,\psi(L_0)))$$
is injective.
\end{thm}
Following \cite{AMS21} we denote for a rank-$k$ vector bundle $\pi$ over $X$ by $X^{\pi}$ the Thom space of $\pi$ and by $X^{-\pi}$ the Thom space of its virtual bundle. If $(\pi,\eta)$ is a virtual vector bundle and $\bm{E}$ a spectrum, we set
$$E^{*}(X^{\pi-\eta}) := E^{N+*}(X^{\pi\oplus\eta^\perp})$$
where $\eta^\perp$ is any vector bundle over $X$ such that $\eta\oplus\eta^\perp = \theta^N_X$, the trivial bundle over $X$ of rank $N$.
\begin{rem}Clearly, $E^{*}(X^{\pi-\eta})$ is only defined up to natural isomorphism. We will ignore this in our notation.\end{rem}
\begin{rem}\label{Cohomology of Virtual Bundle Shift 2} We have for any $k \in \mathbb{N}$ that $E^{*}(X^{-\eta-\theta^k_X}) = E^{k+*}(X^{-\eta})$.
\end{rem}
\begin{proof}Let $\eta^\perp$ be a vector bundle over $X$ such that $\eta\oplus\eta^\perp \cong \theta^N_X$. Then $\eta\oplus\theta^k_X\oplus \eta^\perp \cong \theta^{N+k}_X$. Thus
$$E^{*}(X^{-\eta-\theta^k_X}) = E^{N+k+*}(X^{\eta^\perp}) = E^{k+*}(X^{-\eta}).$$
\end{proof}
\begin{lem}\label{Finding Operator To Stabilise}Let $\psi \colon V\times\Lambda\rightarrow \mathcal{H}$ be a smooth Fredholm map from a separable Banach manifold to a separable Banach space, where $\Lambda$ is a finite-dimensional manifold, possibly with boundary. Let $$V^{\reg} := \{(x,\lambda)\in V\times\Lambda: d_1\psi(x,\lambda) \text{ is surjective}\}$$
For any closed subset $A\sub V^{\reg}$ and any compact $K \sub V\times\Lambda$ there exists a neighbourhood $U \sub V\times\Lambda$ of $K$ and a smooth map $T\colon V\times \Lambda\times \mathbb{R}^k\rightarrow \mathcal{H}$ such that
\begin{enumerate}
\item $T(z,\cdot)$ is linear for each $z\in V\times\Lambda$;
\item $T(z,\cdot) = 0$ for $z \in A$;
\item $d_1\psi(x,\lambda) + T(x,\lambda,\cdot)\colon T_xV \oplus\mathbb{R}^k \rightarrow \mathcal{H}$ is surjective for $(x,\lambda) \in U$
\item $T(z,\cdot) = 0$ for $z\in V\times\Lambda\sm U$.
\end{enumerate}
We may choose $U$ arbitrarily small. Moreover, if $G$ is a compact Lie group acting on $V\times\Lambda$ and $\mathcal{H}$ such that $\psi$ is $G$-equivariant and $K$ and $A$ are $G$-invariant, we may choose $U$ to be $G$-invariant and $T$ to be $G$-equivariant, where $G$ acts trivially on the last factor in $V\times\Lambda\times\mathbb{R}^k$.
\end{lem}
\begin{nota}We set $|T|:=k$ for smooth maps $T$ as above.\end{nota}
\begin{proof}For each $z=(x,\lambda) \in K$ there exists a neighbourhood $U_z \sub V\times\Lambda$ of $z$ and $k_z \in \mathbb{N}_0$ such that we can find a map $T_z \colon \mathbb{R}^{k_z}\rightarrow \mathcal{H}$ with $d_1\psi(y,\mu) + T_{z} \colon T_yV \oplus\mathbb{R}^{k_z} \rightarrow \mathcal{H}$ surjective for $(y,\mu) \in U_z$. Let $Z \sub K$ be a finite subset such that $K \sub U:= \union{z\in Z}{U_z}$
and let $\{\rho_z : z\in Z\}\cup \{\rho'\}$ be a smooth partition of unity subordinate to $\{U_z\}_{z\in Z}\cup \{V\times\Lambda\sm K\}$. Let $\chi\in C^\infty(V\times\Lambda,[0,1])$ be a smooth map\footnote{Can we expect smoothness? Perhaps we have to require compactness of $A$ as well.} with $\chi|_A \equiv 0$ and $\chi|_{(V\times\Lambda)\sm V^{\reg}} \equiv 1$. Let $k := \s{z\in Z}{k_z}$ and let $\tilde{T}_z$ be the composite $\mathbb{R}^k \rightarrow \mathbb{R}^{k_z}\xrightarrow{T_z} \mathcal{H}$, where the first map is the projection (choosing some enumeration of $Z$). Then define $T\colon V\times\Lambda\times\mathbb{R}^k\rightarrow \mathcal{H}$ by
$$T(x,\lambda,v) = \chi(x,\lambda)\s{z\in Z}{\rho_z(x,\lambda)\tilde{T}_{z}(v)}$$
This map clearly has the desired properties. The last assertion is immediate from the construction of $T$. In the case of a $G$-action, define $\tilde{U} := \union{g \in G}{g\cdot U}$, where $U$ is as above and define
$$T(x,\lambda,v) = \s{z\in Z}{\int_G\chi(g\cdot(x,\lambda))\rho_z(g\cdot(x,\lambda))d\mu_G(g)\;\tilde{T}_{z}(v)}$$
where $\mu_G$ is the right normalised Haar measure on $G$. As $\psi$ is $G$-equivariant, this still satisfies 3.
\end{proof}
\begin{rem}Unless $V$ is finite-dimensional (and consequently so is $\mathcal{H}$) we cannot expect $T$ to have compact support in the first variable.\end{rem}
\begin{rem} We don't know whether the assumption of separability is necessary.\end{rem}
\begin{de} Let $\psi \colon V\times\Lambda\rightarrow Y$ be a smooth Fredholm map with constant index, where $V$ is a separable Banach manifold, $\Lambda$ is a smooth manifold with boundary, and $Y$ is a separable Banach space. Suppose $K\sub V$ is compact. The \emph{pre-index bundle (with respect to $(T,U)$ along $K$)} of $\psi$ is the stabilisation of $\ker(d\psi + T)|_{U}\oplus \theta_{U}^{-|T|}$, denoted $\normalfont\text{Ind}_K(\psi;(T,U))$. Here $T$ and $U$ are as in Lemma \ref{Finding Operator To Stabilise}.
\end{de}
\begin{rem} If $\psi$ is $G$-equivariant with respect to actions of a compact Lie group $G$ on its domain and target and $(T,U)$ was chosen to be $G$-equivariant as well, then $\normalfont\text{Ind}_K(\psi;(T,U))$ admits a canonical $G$-action. By restricting to such pairs, everything below can also be defined in the equivariant setting.
\end{rem}
\begin{lem}\label{Dependent Index Bundles Equivalent over Intersection}Let $\psi \colon V\times\Lambda\rightarrow Y$ be a smooth Fredholm map with constant index, where $V$ is a separable Banach manifold, $\Lambda$ is a smooth manifold with boundary, and $Y$ is a separable Banach space. Suppose $K\sub V$ is compact and that we have two pre-index bundles $\normalfont\text{Ind}_K(\psi;(T,U))$ and $\normalfont\text{Ind}_K(\psi;(S,W))$. Then $\normalfont\text{Ind}_K(\psi;(T,U))$ and $\normalfont\text{Ind}_K(\psi;(S,W))$ are equivalent stable bundles when pulled back to $U\cap W$.
\end{lem}
\begin{proof} Let us show first that $\normalfont\text{Ind}_K(\psi;(T,U))$ and $\normalfont\text{Ind}_K(\psi;(\tilde{T},U))$ are equivalent where $\tilde{T}$ is the composite
$$V\times \Lambda\times\mathbb{R}^{|T|+|S|}\xrightarrow{\ide_V\times\ide_\Lambda\times\pr_1} V\times \Lambda\times \mathbb{R}^{|T|}\xrightarrow{T} Y.$$
Indeed, as $T(x,\lambda,\cdot)$ is zero on $\{0\}\times\mathbb{R}^{|S|}$, we have
\begin{equation}\label{dibei 1}\ker(d\psi + \tilde{T})|_U = (\ker(d\psi + T)\oplus \mathbb{R}^{|S|})|_{U} \end{equation}
Let $\tilde{S}$ be defined analogously, using the projection onto the last $|S|$ coordinates, and let $\Phi_t := (1-t)\tilde{T} + t\tilde{S}$ be a homotopy from $\tilde{T}$ to $\tilde{S}$. Then each $\Phi_t$ satisfies the conditions of Lemma \ref{Finding Operator To Stabilise} for $U \cap W$ and we have the bundle
$$\ker(d\psi + \Phi)|_{(U\cap W)\times I}\rightarrow (U\cap W)\times I$$
Over $(U\cap W)\times\{0\}$ this is $\ker(d\psi + \tilde{T})|_{U\cap W}$ and over $(U\cap W)\times\{1\}$ it is given by $\ker(d\psi + \tilde{S})|_{U\cap W}$. Applying \cite[Theorem 14.3.2]{tD08} and combining this with Equation \eqref{dibei 1}, we may conclude.
\end{proof}
\begin{de}Let $\psi\colon V\times\Lambda\rightarrow \mathcal{H}$ be a smooth Fredholm map from the product of a separable Banach manifold with a finite-dimensional manifold to a separable Banach space. Given a compact subset $K \sub V\times\Lambda$, we denote by $\normalfont\text{Ind}_K(\psi)$ any stable bundle $\normalfont\text{Ind}_K(\psi;(T,U))$.
\end{de}
As we are interested in homotopical properties of $\normalfont\text{Ind}_K(\psi)$, the exact choice of representative will be irrelevant, so this definition is well-defined for our purposes.
\begin{rem}Suppose now that $\psi \colon V\times \Lambda\rightarrow E$ is a family of sections of a separable Banach bundle over $V$. Since any infinite-dimensional separable Hilbert space bundle is trivialisable by Kuiper's theorem, we can define $\normalfont\text{Ind}_K(\psi)$ in the obvious way for compact $K \sub V\times\Lambda$. If the vector bundle has finite rank, then $V$ must be a finite-dimensional manifold, so we can construct $T$ via a system of local trivialisations. Thus $\normalfont\text{Ind}_K(\psi)$ is defined for a large class of sections as well.\end{rem}
\begin{de}Let $\psi\colon V\times\Lambda\rightarrow \mathcal{H}$ be a smooth Fredholm map from the product of a separable Banach manifold with a compact finite-dimensional manifold to a separable Banach space. Let $(\bm{E},\mu,\iota)$ be a ring spectrum. We say that $\psi$ is \emph{$\bm{E}$-orientable along a compact subset} $K \sub V\times\Lambda$ if $\normalfont\text{Ind}_K(\psi)$ is $\bm{E}$-orientable. We say that $\psi$ is \emph{$\bm{E}$-orientable} if it is $\bm{E}$-orientable along any compact subset.
\end{de}
Consider the following variant of \cite[Theorem 5]{Ho88}.
\begin{prop}\label{Injectivity of Evaluation Map with Spectra} Let $V$ be a smooth separable Hilbert manifold and $\mathcal{H}$ be a separable Hilbert space. Let $(\bm{E},\mu,\iota)$ be a ring spectrum. Suppose the following conditions hold
\begin{enumerate}
\item $\psi \colon V\times I\rightarrow \mathcal{H}$ is a smooth Fredholm map of index $n+1$, proper with respect to a neighbourhood of $0$ in $\mathcal{H}$;
\item $\psi_0$ is submersive near $\psi_0^{-1}(0)$ and $\bm{E}$-orientable along $\psi^{-1}(\{0\})$;
\item there exists a smooth map $\pi \colon V\rightarrow N$ to a connected $\bm{E}$-oriented closed manifold $N$ such that
$$\pi|_{\psi_0^{-1}(\{0\})}\colon \psi_0^{-1}(\{0\})\rightarrow N$$ is a diffeomorphism.
\end{enumerate}
Then $\pi^*\colon E^{*}(N)\rightarrow E^{*}(\psi_1^{-1}(\{0\}))$ is injective.
\end{prop}
\begin{proof} Set $K := \psi^{-1}(\{0\})$. Given any set $W\sub V\times I$ we denote $W_t :=\{(x,s)\in W: s = t\}$. Let $(T,U)$ be as in Lemma \ref{Finding Operator To Stabilise} with $A = K_0\times\{0\}$ and set $S := (\psi + T)^{-1}(\{0\})\sub V\times I\times\mathbb{R}^k$. Then $S$ is a smooth cobordism from $S_0$ to $S_1$. Moreover, $TS = \ker(d\psi + T)$. By assumption on $\normalfont\text{Ind}(\psi)$, $S$ is thus $\bm{E}$-oriented. We have $K \cong \{(x,t,z)\in S : z= 0\}$ and $S_0 = K_0\times\mathbb{R}^k$. By the compactness of $T(v,t,\cdot)$ for $(v,t)\in V\times I$ and \cite[Theorem A.1.5.i]{MS04} we see that $\dim(S) = n+k+1$. Let $\tilde{\pi} := \pi\times\ide_I\times\ide_{\mathbb{R}^k}$ with $\tilde{\pi}_t$ the restriction to $V\times\{t\}\times\mathbb{R}^k$. This fits into a commutative diagram of pairs
\begin{center}\begin{tikzcd}
(S_0,S_0\sm K_0)\arrow[r,"\tilde{\pi}_0"] \arrow[d,hook,"i_0"] & (N\times \mathbb{R}^k,N\times (\mathbb{R}^k\sm\{0\}))\arrow[d,hook,"\iota_0"]\\
(S,S\sm K)\arrow[r,"\tilde{\pi}"] & (N\times I\times \mathbb{R}^k,N\times I\times (\mathbb{R}^k\sm\{0\}))\\
(S_1,S_1\sm K_1)\arrow[u,hook,"i_1"] \arrow[r,"\tilde{\pi}_1"] & (N\times \mathbb{R}^k,N\times(\mathbb{R}^k\sm\{0\}))\arrow[u,hook,"\iota_1"]\end{tikzcd} \end{center}
As $\tilde{\pi}_0$ is a diffeomorphism and $N$ is connected, we have $(\tilde{\pi}_0)_*[S_0]^{K_0}_{E} = \pm \fcl{N\times \mathbb{R}^k}^{N}_E$.
It follows that
\begin{align}\label{inevsp 1}\notag(\iota_1)_*(\tilde{\pi}_1)_*\fcl{S_1}_E^{K_1}&\notag = \tilde{\pi}_*(i_1)_*\fcl{S_1}_E^{K_1} \\\notag &= \tilde{\pi}_*(i_0)_*\fcl{S_0}_E^{K_0}\\ &= (\iota_0)_*(\tilde{\pi}_0)_*\fcl{S_0}_E^{K_0} \\\notag&= \pm (\iota_0)_*\fcl{N\times \mathbb{R}^k}^{N}_E \\\notag &= \pm(\iota_1)_*\fcl{N\times \mathbb{R}^k}^{N}_E. \end{align}
Finally, consider the diagram
\begin{center}\begin{tikzcd}
E^{*}((N\times \{0\})^{-T(N\times\mathbb{R}^k)}) \arrow[r,"\tilde{\pi}_1^*"] \arrow[d,"\vartheta"]&E^{*}(K_1^{-\tilde{\pi}_1^*T(N\times\mathbb{R}^k)})\arrow[r,"\normalfont\text{Th}(d\tilde{\pi}_1)^*"]&E^{*}(K_1^{-TS_1|_{K_1}}) \arrow[d,"\simeq"] \\
E^{*}((N\times \{0\})^{-T(N\times\mathbb{R}^k)})& E_{n+k-\bullet}(N\times \mathbb{R}^k,N\times (\mathbb{R}^k\sm\{0\}))\arrow[l,"\simeq"] &E_{n+k-\bullet}(S_1,S_1\sm K_1) \arrow[l,"(\tilde{\pi}_1)_*"] \end{tikzcd} \end{center}
where we define $\vartheta$ such that the diagram commutes and the isomorphisms are given by Atiyah duality \cite[Theorem 5.2]{AMS21}. By \eqref{inevsp 1} and the naturality of the cap product, we see that $\vartheta = \pm\ide$. Hence $\tilde{\pi}_1^*$ is injective. Now the claim follows by applying the Thom isomorphism \cite[Theorem V.1.3]{Rud98} to the commutative square
\begin{center}\begin{tikzcd}
E^{*}(N^{-TN}) \arrow[r,"\simeq"] \arrow[d,"\hat{\pi}^*"]&E^{k+\bullet}((N\times \{0\})^{-T(N\times\mathbb{R}^k)}) \arrow[d,"\tilde{\pi}_1^*"]\\
E^{*}(K_1^{-\pi^*TN|_{K_1}}) \arrow[r,"\simeq"] & E^{k+\bullet}(K_1^{-\tilde{\pi}_1^*T(N\times\mathbb{R}^k)}) \end{tikzcd} \end{center}
where the horizontal isomorphism are given by Remark \ref{Cohomology of Virtual Bundle Shift} and we let $\hat{\pi} \colon (K_1)^{-\pi^*TN|_{K_1}}\rightarrow N^{-TN}$ be the canonical morphism induced by the pullback.
\end{proof}
Let us see how this bears on our situation. Let $(M,\omega)$ be a closed symplectic manifold of dimension $2m$ and $L \in \LCr(M)$ with $\omega|_{\pi_2(M,L)} = 0$. Let $G\sub \mathbb{C}$ be a compact convex submanifold with boundary and let $\hat{L}$ be an exact family of Lagrangians over $\partial G$ with base $L$. We note that $\partial G$ is connected. Fix $j \geq 6$ and denote $\mathcal{D}:= \set{u\in W^{j,2}(G,M) : u(\partial G)\sub L}$.\\
By Lemma \ref{Partition of Unity for Sobolev Functions} we can fix a smooth isometric trivialisation $\Theta^{j,\ell} \colon \tilde{E}^\ell\rightarrow W^{j,2}(G,M) \times \mathcal{H}^{j,\ell}$ for any $j \geq 2$ and $\ell \in \{2,\dots,j\}$. We now set $\Theta:= \Theta^{j,j-1}$ and $\mathcal{H}:= \mathcal{H}^{j,j-1}$ with induced norm $\norm{\cdot}$. Then we can define $\check{\partial}\colon [0,1]\times \mathcal{D} \rightarrow \mathcal{H}$ by
\begin{equation}\check{\partial}_a u := \check{\partial}(a,u) := \pr_2\Theta(\cc{\partial} \tau_a(u))\end{equation}
Fix $x_0 \in \partial G$ and let $\pi := \normalfont\text{ev}_{x_0}\colon \mathcal{D} \rightarrow L$ be the evaluation map. We obtain a bundle pair
$$\rho \colon (E,F) := \normalfont\text{ev}^*(TM,TL) \rightarrow \mathcal{D}\times (G,\partial G)$$
\begin{cor}Let $(\bm{E},\mu,\iota)$ be a ring spectrum. If $\normalfont\text{Ind}(E,F)$ and $TL$ are $\bm{E}$-oriented, then
$$\pi^*\colon E^{*}(L)\rightarrow E^{*}(\Omega_{G,\hat{L},J})$$ is injective.
\end{cor}
\begin{proof}As $\check{\partial}_0^{-1}(\{0\}) = \{u\in \mathcal{D}: \cc{\partial}_J u = 0\}$, we see immediately that $\pi$ defines a diffeomorphism $\check{\partial}_0^{-1}(\{0\})\rightarrow L$. Moreover, $\check{\partial}_0$ is submersive by the proof of \cite[Lemma 5]{Ho88}. If $L$ is connected, the claim follows immediately. If $L$ is not connected, write $L = \djun{j\in J}L_j$ and $U = \djun{b \in B}U_b$ with $L_j$ and $U_b$ path-connected. Then $\pi$ maps $U_b$ to a unique $L_j$ and we can apply the argument to each restriction $\pi\colon \djun{\substack{b \in B\\ \pi(U_b)\sub L_j}}U_b \rightarrow L_j$ separately, using the additivity of $E^{*}$.
\end{proof}
\section{Homotopy and Fredholm theory}\label{Section 2}
\subsection{Generalised cohomology and Thom spectra}\label{2.1}
For our purposes, it suffices to work with classical spectra as defined in \cite{Rud98} or \cite{Ad95}. However, our definitions require some care, as we will take the generalised (co)homology of spaces which are not necessarily homotopy equivalent to a CW complex. A generalised cohomology theory defined on CW complexes can always be extended to all spaces, but this extension may not be unique. We need an extension that satisfies a certain continuity property, namely the statement of \cite[Lemma 5.2]{AMS21}.\par
Unless otherwise specified, we work with spectra whose level spaces are homotopy equivalent to CW complexes. We denote by $\mathbb{S}$ the sphere spectrum. All our spaces are assumed to be compactly generated and Hausdorff.\\
Given an $\Omega$-spectrum $R$, we define, for a pointed space $X$, the \emph{$n^{\mathrm{th}}$ $R$-homology and $R$-cohomology groups} to be
$$R_n(X) := \normalfont\text{colim}_k\, \pi_{n+k}(X \wedge R_k) \quad \quad \quad \quad R^n(X) := [X,R_n]_*$$
respectively, for $n$ in $\mathbb{Z}$, where $[\cdot, \cdot]_*$ denotes the set of pointed homotopy classes of maps and the colimit is taken along the structure maps. As $R_n\simeq \Omega^2 R_{n + 2}$, the sets $R_n(X)$ and $R^n(X)$ carry a natural abelian group structure for all $n$.\\
We recall the definition of relative $R$-(co)homology.\\
Given an inclusion $j\colon A\hookrightarrow X$ of pointed spaces, we denote by
$$C_X A := C_A\cup_j X$$
the \emph{(reduced) mapping cone} of $j$, where $C_A$ is the cone over $A$. We view this as a pointed space, with basepoint the vertex of $C_A$ (or equivalently the basepoint of $X$).
The inclusion $X\hookrightarrow C_XA$ is a cofibration and collapsing $X$ induces a natural map $C_XA\rightarrow \Sigma A$. By \cite[Theorem 4.6.4]{tD08} these maps fit into an $h$-coexact sequence
$$A\rightarrow X\rightarrow C_XA \rightarrow \Sigma A \rightarrow \Sigma X \rightarrow \dots$$
In particular, there exists for any pointed space $W$ a long exact sequence of pointed sets
\begin{equation}\label{dold-puppe 1}\dots\rightarrow [\Sigma X,W]_* \rightarrow [\Sigma A,W]_*\rightarrow [C_XA,W]_*\rightarrow [X,W]_* \rightarrow [A,W]_* \end{equation}
The \emph{relative $R$-(co)homology} of $(X,A)$ is defined by
$$R_{*}(X,A) := R_{*}(C_{X}A) \qquad \qquad R^*(X,A) := R^*(C_{X}A).$$
We recover the usual long exact sequence of a pair in $R$-cohomology due to \eqref{dold-puppe 1}. Similarly, there is a long exact sequence of a pair in $R$-homology.\par
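Explicitly, taking $W = R_n$ in \eqref{dold-puppe 1} for varying $n$ and splicing the resulting sequences, one obtains
$$\dots \rightarrow R^{n-1}(A)\rightarrow R^{n}(X,A)\rightarrow R^{n}(X)\rightarrow R^{n}(A)\rightarrow R^{n+1}(X,A)\rightarrow \dots$$
where we use $[\Sigma A,R_n]_* = R^{n-1}(A)$ and the connecting map is induced by the collapse map $C_XA\rightarrow \Sigma A$.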
In the case of a pair of unpointed spaces $(X, A)$, one simply considers the pair $(X_+,A_+)$, where $\cdot_+$ denotes the addition of a disjoint basepoint.\\
A \emph{ring spectrum} consists of an $\Omega$-spectrum $R$ endowed with both a multiplication map $\mu \colon R\wedge R\to R$ and a unit $\iota \colon \mathbb{S}\to R$ such that the usual associativity and unit diagrams commute up to homotopy. In this case we can define a cup product
$$\cdot\; \colon R^*(X)\otimes R^*(X)\rightarrow R^*(X)$$
by letting $\alpha \cdot \beta$ be the composite
\begin{equation}\label{cup product de}X\xrightarrow{\Delta} X\wedge X \xrightarrow{\alpha\wedge \beta} R_n\wedge R_k \rightarrow (R\wedge R)_{n+k} \xrightarrow{\mu}R_{n+k}\end{equation}
for $\alpha\in R^n(X)$ and $\beta\in R^k(X)$, where the map $R_n\wedge R_k \rightarrow (R\wedge R)_{n+k}$ comes from the construction of the smash product.
When one works outside the setting of CW complexes, the cup product does not necessarily descend to a map
$$R^{*}(X,A)\otimes R^{*}(X,B) \rightarrow R^{*}(X,A\cup B).$$
However, we can make the following two observations, which will be useful in Section \ref{Section 4}.
\begin{rem}\label{Cup Product and Relative Cohomology}
Suppose $\alpha \in R^n(X)$ and $\beta \in R^m(X)$ admit representatives $\tilde{\alpha}: X \rightarrow R_n$ and
$\tilde{\beta}: X \rightarrow R_m$
which send subspaces $A, B \subseteq X$ to the basepoints $*$ respectively. Then, by construction, $\alpha \cdot \beta$ admits a representative $X \rightarrow R_{n+m}$ sending $A \cup B$ to $*$. Induction shows the same result for classes $\alpha_1, \ldots, \alpha_k\in R^*(X)$ that admit representatives $\tilde{\alpha}_i$ mapping subspaces $A_i \subseteq X$ to $*$ respectively. In particular,
$$\alpha_1\cdot \ldots\cdot \alpha_k = 0\quad \text{in}\quad R^*(A_1\cup \ldots \cup A_k).$$
\end{rem}
\begin{lem}\label{rel cup}
Let $(X, d)$ be a compact metric space, with an open cover $U_1, \ldots, U_k$. Let $\alpha_i \in R^{n_i}(X)$ be cohomology classes such that $\alpha_i|_{U_i} = 0$ in $R^{n_i}(U_i)$ for all $i$. Then the product $\alpha_1 \cdot \ldots \cdot \alpha_k$ vanishes in $R^*(X)$.
\end{lem}
\begin{proof} Let $V_1,\dots,V_k$ be an open cover of $X$ so that $\cc{V_i}\sub U_i$ for each $i$. Set $d_i := d(\cdot,\cc{V_i})$. As $A_i:= X\sm U_i$ is disjoint from $\cc{V_i}$ and compact, there exists $\varepsilon_i > 0$ so that $d_i^{-1}([0,\varepsilon_i])\sub U_i$. \par
Pick maps $\tilde{\alpha}_i: X \rightarrow R_{n_i}$ representing each $\alpha_i \in R^{n_i}(X)$. Since $\alpha_i|_{U_i} = 0$, we can choose nullhomotopies
$$H_i: U_i \times [0, 1] \rightarrow R_{n_i}$$
such that $H_i(\cdot, 0) \equiv *$ and $H_i(\cdot, 1) = \tilde{\alpha}_i|_{U_i}$.\par
Define maps $\beta_i: X \rightarrow R_{n_i}$ by
$$\beta_i(x) := \begin{cases}
\tilde{\alpha}_i(x) & \mathrm{ if }\; \varepsilon_i \leq d_i(x)\\
H_i(x, \varepsilon_i^{-1} d_i(x)) & \mathrm{ if }\; 0 \leq d_i(x) \leq \varepsilon_i.
\end{cases}$$
Then $\beta_i \simeq \tilde{\alpha}_i$, via the homotopy $G_i: X \times [0, 1] \rightarrow R_{n_i}$ given by
$$G_i(x, s) := \begin{cases}
\tilde{\alpha}_i(x) & \mathrm{ if }\; \varepsilon_i \leq d_i(x)\\
H_i(x, s + (1-s) \varepsilon_i^{-1} d_i(x)) & \mathrm{ if }\; 0\leq d_i(x) \leq \varepsilon_i.
\end{cases}$$
Hence $\beta_i$ is a representative of $\alpha_i \in R^{n_i}(X)$ which sends $V_i$ to the basepoint $*$. As the $V_i$ cover $X$, Remark \ref{Cup Product and Relative Cohomology} yields $\alpha_1 \cdot \ldots \cdot \alpha_k = 0$ in $R^*(X)$.
\end{proof}
\medskip
Suppose now that $X$ is a compact space and that $\xi \colon F\rightarrow X$ is a vector bundle of rank $k$. Let $F_\infty$ be its fibrewise one-point compactification. This is a sphere bundle over $X$ with a canonical section $s_\infty$ given by the point at infinity in every fibre. The \emph{Thom space} of $\xi $ is defined to be the pointed space
$$\normalfont\text{Th}(\xi) := C_{F_\infty} \im(s_\infty)$$
and its \emph{Thom spectrum}, written $X^\xi$ or $X^F$, to be the spectrum $\Sigma^{\infty}\normalfont\text{Th}(\xi)$. In particular, if $\xi$ is the trivial bundle of rank $k$, then $\normalfont\text{Th}(\xi) = \Sigma^kX_+$ and $X^\xi = \Sigma^{\infty + k} X_+$. Note that if $X$ is not a CW complex, $\normalfont\text{Th}(\xi)$ might not be either. These are the only spectra whose level spaces are not CW complexes that we will encounter.\par
For a virtual vector bundle of the form $\eta - \mathbb{R}^N$, we let $\rank(\eta) - N$ be its \emph{rank} and define its \emph{Thom spectrum} to be
$$X^{\eta - \mathbb{R}^N} := \Sigma^{-N} X^\eta.$$
All our virtual bundles will be of this form, so this definition is sufficient for our purposes.\footnote{The homotopy type of the Thom spectrum only depends on the stable isomorphism class of the virtual vector bundle. Due to compactness, any virtual bundle on $X$ is stably isomorphic to one of the considered form, so it suffices for our applications.}\\
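For example, if $\eta = \mathbb{R}^m$ is a trivial bundle, then
$$X^{\mathbb{R}^m - \mathbb{R}^N} = \Sigma^{-N}\Sigma^{\infty+m}X_+ = \Sigma^{\infty+m-N}X_+,$$
a (de)suspension of $\Sigma^\infty X_+$ determined by the rank $m-N$.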
Let $R$ be a ring spectrum and $\xi$ a virtual vector bundle over $X$ of rank $k$. An \emph{$R$-orientation} of $\xi$ is an element $u \in R^k(X^\xi)$ such that for any map $j: \Sigma^k \mathbb{S} \rightarrow X^\xi$ which is a (stable) homotopy equivalence to a fibre, we have
$$j^*u = \pm [\iota] \in R^k(\Sigma^k\mathbb{S}) \cong R^0(\mathbb{S}),$$
where $[\iota]$ is the homotopy class of the unit map. Any trivial bundle is $R$-orientable, and if two out of $\xi$, $\eta$ and $\xi \oplus \eta$ are $R$-oriented, then so is the third.\par
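For instance, for the trivial bundle $\mathbb{R}^k$ over $X$ we have $X^{\mathbb{R}^k} = \Sigma^{\infty+k}X_+$, and the $k$-fold suspension of the unit $1 \in R^0(X)$ defines a class in $R^k(X^{\mathbb{R}^k})$ whose restriction along any fibre inclusion $j\colon \Sigma^k\mathbb{S}\rightarrow X^{\mathbb{R}^k}$ is $[\iota]$; this gives an explicit $R$-orientation of $\mathbb{R}^k$.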
By the Thom isomorphism theorem \cite[Theorem V.1.3]{Rud98}, any $R$-orientation on a virtual vector bundle $\xi$ over a compact CW complex $X$ induces a natural isomorphism
$$R^{* + k}(X^{\xi})\cong R^*(X).$$
By \cite[Lemma 5.2]{AMS21}, this also holds when $X$ is a compact subset of a manifold $M$, and both $\xi$ and its $R$-orientation are pulled back from $M$.\par
We will need the following form of Atiyah duality, which can be viewed as a form of Poincar\'e duality for generalised cohomology theories.
\begin{thm}[{\cite[Theorem 5.2]{AMS21}}] Let $M$ be a smooth (not necessarily compact) manifold, possibly with boundary, and suppose $Z \sub M$ is any compact subset. Then there is a canonical isomorphism
$$R_{-*}(M,M\sm Z) \;\cong\; R^*\lbr{Z^{-TM|_Z}}$$
compatible with restriction to smaller closed subsets $Z' \subseteq Z$.\par
\end{thm}
If $M$ is $R$-oriented on a neighbourhood of $Z$ and of dimension $n$, the Thom isomorphism theorem then gives an isomorphism
$$R^{* + n}(Z) \; \cong \; R_{-*}(M, M \sm Z)$$
for compact subsets $Z \sub M$. In this case, we define the \emph{fundamental class of $M$ along $Z$}, denoted $[M]_Z \in R_n(M, M \sm Z)$, to be the image of the unit in $R^0(Z)$ under this isomorphism. The class $[M]_Z$ depends only on the choice of $R$-orientation, of which there may be more than two.
We will need the following version of the fact that, given an (oriented) compact manifold with boundary $M$, the fundamental class of $\partial M$ has vanishing image in the homology of $M$.
\begin{lem}\label{Fundamental Class of Boundary}Let $W^{n+1}$ be an $R$-oriented smooth manifold with boundary and $K\sub W$ a compact subset. Then the image of $[\partial W]_{K\cap \partial W}$ in $R_n(W,W\sm K)$ is $0$.\end{lem}
\begin{proof} Suppose first that $W$ is compact. The map $\partial$ in the exact sequence of a pair
$$R_{n+1}(W, \partial W) \xrightarrow{\partial} R_n(\partial W) \rightarrow R_n(W)$$
sends $[W]$ to $[\partial W]$; see \cite[Remark V.2.14.a)]{Rud98}. The claim with $K = W$ then follows from the exactness of this sequence.\par
Now assume $W$ is non-compact and set $C = K \cap \partial W$.
By excision, we may modify $W$ away from $K$, and replace $W$ with a compact smooth neighbourhood of $K$.
Then $[\partial W]_C$ is the restriction of the fundamental class $[\partial W] \in R_n(\partial W)$. We may deduce the claim now from the first step and the commutativity of the following diagram.
$$\begin{tikzcd}
R_{n+1}(W, \partial W) \arrow[r, "\partial"] & R_n(\partial W) \arrow[r] \arrow[d] & R_n(W) \arrow[d] \\
& R_n(\partial W, \partial W \setminus C) \arrow[r] & R_n(W, W \setminus K)
\end{tikzcd}$$
\end{proof}
\subsection{Index bundles}
Let $s \colon \mathcal{B}\rightarrow \mathcal{E}$ be a smooth Fredholm section of a Banach bundle over a Banach manifold intersecting the zero section of $\mathcal{E}$ transversely. By the infinite-dimensional implicit function theorem \cite[Theorem A.3.3]{MS04}, the zero locus $s^{-1}(0)$ is a smooth manifold of dimension $\indo(Ds) = \dim(\ker(Ds))$ with tangent bundle $\ker(Ds) \rightarrow s^{-1}(0)$. If $Ds$ is not fibrewise surjective, the zero locus is not necessarily smooth or may have excess dimension. The natural replacement of $\ker(Ds)$ in this case is the \emph{index bundle}, a virtual vector bundle constructed below. It relies on the notion of the stabilisation of a Fredholm operator.
\begin{de} Let $D\colon X\rightarrow Y$ be a Fredholm operator between two Banach spaces. We call an operator $T\colon \mathbb{R}^N \rightarrow Y$ a \emph{stabilisation of $D$} if $D+ T\colon X\oplus \mathbb{R}^N \rightarrow Y$ is surjective.
\end{de}
As $T$ is compact, $D+T$ is still a Fredholm operator and
$$\indo(D+T) = \indo(D)+N$$
by \cite[Theorem A.1.5(i)]{MS04}. Given a smoothly varying family of Fredholm operators we will show the existence of a smoothly varying family of stabilisations near a compact subset in Lemma \ref{Stabilisation over Compact Subset}.\par
Let us fix our setting for the rest of this subsection. Let $Y$ be a separable Hilbert manifold, $\mathcal{H}$ a separable Hilbert space and $\Lambda$ a compact finite-dimensional manifold with boundary. We assume that $\psi \colon V \rightarrow \mathcal{H}$ is a smooth Fredholm map with $V\sub Y\times\Lambda$ an open subset. Define the open subset $$V^{\reg} := \{(x,\lambda)\in V: d_1\psi(x,\lambda) \text{ is surjective}\}$$
where $d_1$ is the derivative with respect to the first argument.
\begin{lem}\label{Stabilisation over Compact Subset} For any closed subset $A\sub V^{\reg}$ and any neighbourhood $U \sub V$ of a compact subset $K\sub V$, there exists a smooth map $T\colon V\times \mathbb{R}^k\rightarrow \mathcal{H}$ such that
\begin{enumerate}
\item $T(z,\cdot)$ is linear for each $z\in V$;
\item $T(z,\cdot) = 0$ for $z \in A$;
\item $T(z,\cdot) = 0$ for $z \in V\sm U$;
\item $d_1\psi(x,\lambda) + T(x,\lambda,\cdot)\colon T_{(x, \lambda)}V \oplus\mathbb{R}^k \rightarrow \mathcal{H}$ is surjective for $(x,\lambda) \in U$.
\end{enumerate}
\end{lem}
\begin{proof} For each $z \in K$ there exists an open neighbourhood $U_z \sub V$ of $z$, an integer $k_z \geq 0$, and an operator $T_z \colon \mathbb{R}^{k_z}\rightarrow \mathcal{H}$ such that
$$d_1\psi(y,\mu) + T_{z} \colon T_yY \oplus\mathbb{R}^{k_z} \rightarrow \mathcal{H}$$
is surjective for $(y,\mu) \in U_z$. Let $Z \sub K$ be a finite subset such that $U:= \union{z\in Z}{U_z}$ contains $K$ and set $k := \s{z\in Z}{k_z}$. Using a smooth partition of unity subordinate to $\{U_z\}_{z\in Z}\cup \{V\sm K\}$, we obtain an operator $T' \colon V\times \mathbb{R}^k\rightarrow\mathcal{H}$ satisfying all conditions save for the second one. Multiplying $T'$ with a smooth bump function which is identically one on $V\sm V^{\reg}$ and vanishes on $A$, we obtain the desired map.
\end{proof}
\begin{de} A family of operators $T$ as in Lemma \ref{Stabilisation over Compact Subset} is said to be a \emph{stabilisation of $\psi$ along $K$ relative to $A$, of rank $k$}. We call $$\normalfont\text{Ind}_K(\psi;T) := \ker(d_1\psi + T)|_U - \mathbb{R}_U^k$$
the \emph{index bundle of $\psi$ along $K$ (with respect to $T$)}, defined over a neighbourhood $U$ of $K$.
\end{de}
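Note that the virtual rank of the index bundle recovers the Fredholm index: since $d_1\psi + T$ is surjective over $U$, at each $z \in U$ we have
$$\dim\ker(d_1\psi(z) + T(z,\cdot)) - k = \indo(d_1\psi(z) + T(z,\cdot)) - k = \indo(d_1\psi(z)),$$
so $\normalfont\text{Ind}_K(\psi;T)$ has rank $\indo(d_1\psi)$, independently of the choice of stabilisation.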
\begin{lem}\label{Dependent Index Bundles Equivalent over Intersection} Any two index bundles of $\psi$ along $K$ are stably equivalent as germs near $K$.
\end{lem}
\begin{proof} Suppose $T$ and $S$ are two stabilisations along $K$. We may assume without loss of generality that $d_1\psi +T$ and $d_1\psi +S$ are surjective over the same subset. As we may add factors of $\mathbb{R}$ to the domain of $T$, respectively $S$, without changing the index bundle, we may assume that $T$ and $S$ are smooth maps $V\times \mathbb{R}^{k+\ell}\rightarrow \mathcal{H}$ with $T$ vanishing on $V\times \mathbb{R}^k\times\{0\}$ and $S$ vanishing on $V\times\{0\} \times\mathbb{R}^{\ell}$. Now we can linearly interpolate between them and apply \cite[Theorem 14.3.2]{tD08}.
\end{proof}
\begin{de} We let $\normalfont\text{Ind}_K(\psi)$ denote the stable equivalence class of any $\normalfont\text{Ind}_K(\psi;T)$ and call it the \emph{index bundle of $\psi$ along $K$}.
\end{de}
\begin{de}\label{Orientability of Fredholm Map} Given a ring spectrum $R$, the map $\psi$ is \emph{$R$-orientable along $K$} if $\normalfont\text{Ind}_K(\psi)$ is $R$-orientable on a neighbourhood of $K$. We say that $\psi$ is \emph{$R$-orientable} if it is $R$-orientable along any compact subset.
\end{de}
\subsection{Proof of Theorem \ref{Main Theorem 1}}
The following result generalises \cite[Theorem 5]{Ho88} to arbitrary ring spectra.
\begin{prop}\label{Injectivity of Evaluation Map with Spectra} Let $\mathcal{Y}$ be a smooth separable Hilbert manifold and $\mathcal{H}$ be a separable Hilbert space. Let $\psi: \mathcal{Y} \times [0, 1] \rightarrow \mathcal{H}$ be a smooth Fredholm map of index $n + 1$, and write $\psi_t$ for its restriction to $\mathcal{Y} \times \{t\}$. Given a ring spectrum $R$, assume
\begin{enumerate}[\normalfont 1.]
\item\label{proper} $\psi$ is proper with respect to a neighbourhood of $0$ in $\mathcal{H}$ and $R$-orientable along $\psi^{-1}(0)$,
\item\label{submersive} $\psi_0$ is submersive near $\psi_0^{-1}(0)$,
\item\label{other-manifold} there exists a smooth map $\pi \colon \mathcal{Y}\rightarrow N$ to a connected closed manifold $N$ such that
$$\pi|_{\psi_0^{-1}(0)}\colon \psi_0^{-1}(0)\rightarrow N$$ is a diffeomorphism.
\end{enumerate}
If $N$ is $R$-oriented, then $\pi^*\colon R^*(N)\rightarrow R^*(\psi_1^{-1}(0))$ is injective.
\end{prop}
\begin{proof} Set $K := \psi^{-1}(0)$ and $I := [0,1]$. By \eqref{proper}, $K$ is compact. Given any subset $W\sub \mathcal{Y}\times I$ we denote by $W_t$ its fibre over $t \in I$. Let $T$ be a stabilisation of $\psi$ along $K$ relative to $K_0\times\{0\}$ of rank $k$. Set $$S := (\psi + T)^{-1}(0)\sub \mathcal{Y}\times I\times\mathbb{R}^k.$$
Then $S$ is a smooth (non-compact) cobordism from $S_0$ to $S_1$ with $TS = \ker(d\psi + T)$. Assumption \eqref{proper} on $\psi$ implies that $S$ is $R$-oriented on a neighbourhood of $K$. By the compactness of $T(v,t,\cdot)$ for $(v,t)\in \mathcal{Y}\times I$ and \cite[Theorem A.1.5.i]{MS04},
$$\dim(S) = n+k+1.$$
Note that $K = \{(x,t,z)\in S : z= 0\}$ and $S_0 = K_0\times\mathbb{R}^k$. Set $$\tilde{\pi} := \pi\times\ide_I\times\ide_{\mathbb{R}^k}: S \rightarrow N \times I \times \mathbb{R}^k$$ and let $\tilde{\pi}_t$ be the restriction to $\mathcal{Y}\times\{t\}\times\mathbb{R}^k$. This fits into a commutative diagram of pairs
\begin{center}\begin{tikzcd}
(S_0,S_0\sm K_0)\arrow[r,"\tilde{\pi}_0"] \arrow[d,hook,"i_0"] & (N\times \mathbb{R}^k,N\times (\mathbb{R}^k\sm0))\arrow[d,hook,"\iota_0"]\\
(S,S\sm K)\arrow[r,"\tilde{\pi}"] & (N\times I\times \mathbb{R}^k,N\times I\times (\mathbb{R}^k\sm0))\\
(S_1,S_1\sm K_1)\arrow[u,hook,"i_1"] \arrow[r,"\tilde{\pi}_1"] & (N\times \mathbb{R}^k,N\times(\mathbb{R}^k\sm 0))\arrow[u,hook,"\iota_1"]\end{tikzcd}
\end{center}
Consider the composition
\begin{equation}\label{iems 1}R^*(N) \xrightarrow{\pi^*}
R^*(K_1) \xrightarrow[\cong]{\mathrm{AD}}
R_{n + k - *}(S_1, S_1 \setminus K_1) \xrightarrow{(\tilde{\pi}_1)_*}
R_{n + k - *}(N \times \mathbb{R}^k, N \times (\mathbb{R}^k \setminus 0)) \xrightarrow[\cong]{\mathrm{AD}}
R^*(N)\end{equation}
where $\mathrm{AD}$ denotes the Atiyah duality isomorphism. Note that in the second map we use the $R$-orientability assumption in (1).\par
By construction, this map is given by multiplication by $\mathrm{AD}((\tilde{\pi}_1)_* [S_1]_{K_1}) \in R^0(N)$,
which is equal to $\mathrm{AD}((\tilde{\pi}_0)_* [S_0]_{K_0})$ by Lemma \ref{Fundamental Class of Boundary}.
As $\tilde{\pi}_0$ is a diffeomorphism, this cohomology class is a unit. Hence \eqref{iems 1} is an isomorphism. Because it factors through the pullback $\pi^*: R^*(N) \rightarrow R^*(K_1)$, the latter must be injective.\end{proof}
\smallskip
We apply this to our situation.
Let $G\sub \mathbb{C}$ be a simply-connected compact submanifold with smooth boundary, and suppose $\{L_z\}_{z \in \partial G}$ is a \emph{Hamiltonian family} of Lagrangians in $X$. That is, there exists a (relatively exact) Lagrangian $L \sub X$ and a smooth family $\{\phi^t_z\}_{z\in \partial G,t\in [0,1]}$ of Hamiltonian isotopies (which we can assume to be compactly supported) of $X$ such that $L_z = \phi_z^1(L)$ for all $z$. We can assume that $L = L_{z_0}$ for some $z_0 \in \partial G$.\\
Consider the following moduli space of pseudoholomorphic discs with moving Lagrangian boundary conditions:
\begin{equation}\label{changing-boundary}\mathcal{P} := \set{u \in C^\infty(G,X): \cc{\partial}_J u = 0, \;E(u) < \infty,\; \forall z \in\partial G: u(z)\in L_z}\end{equation}
where $\cc{\partial}_J$ is the Cauchy-Riemann operator associated to $J$ and $E$ is the symplectic energy.
Let $\pi: \mathcal{P} \rightarrow L$ be evaluation at $z_0$.
\begin{thm}\label{Main Theorem 1}
The pullback $\pi^*: R^*(L) \rightarrow R^*(\mathcal{P})$ is injective.
\end{thm}
\begin{rem}
If the moduli space $\mathcal{P}$ were cut out transversely, this could be proved using a cobordism argument as in \cite{Por22}. On the other hand, following \cite[Remark 4.6]{Por22}, Theorem \ref{Main Theorem 1} can be used to give a slightly different proof of \cite[Corollary 1.9]{Por22}, without using any transversality results.
\end{rem}
\begin{rem}
Hofer \cite{Ho88} proves Theorem \ref{Main Theorem 1} as well as Theorems \ref{Main Theorem 2} and \ref{Intersection Points and Cuplength} and Corollary \ref{corolary} in the case where $R^*$ is \v{C}ech cohomology with coefficients in $\mathbb{Z} / 2$.
\end{rem}
By extending an associated family of Hamiltonian functions, we may extend the family of Hamiltonian isotopies $\{\phi_z\}_{z\in \partial G}$ to a smooth family $\{\phi_z\}_{z\in G}$ of Hamiltonian isotopies parametrised by $G$. \par
Fix $k \geq 3$. Given $t \in [0,1]$, we define $\psi_t \colon W^{k,2}(G,X) \rightarrow W^{k,2}(G,X)$ by setting
$$\psi_t(u)(z) := \phi^t_z(u(z))$$
for $z \in G$. As $\phi^0_z = \ide$ for all $z$, the map $\psi_0$ is the identity. Let
$$\mathcal{A}:= \set{u\in W^{k,2}(G,X) : u(\partial G)\sub L}.$$
The smooth Banach bundle $\mathcal{E} \rightarrow \mathcal{A}$ with fibre
$$\mathcal{E}_u := W^{k-1,2}(G,u^*TX)$$
admits a smooth Fredholm section $\cc{\partial}_J: \mathcal{A} \rightarrow \mathcal{E}$ given by
$$\cc{\partial}_J u = \partial_s u + J(u)\partial_t u.$$
The canonical evaluation map defines a map of pairs $\normalfont\text{ev} \colon \mathcal{A}\times (G,\partial G)\rightarrow (X,L)$. By pulling back, this defines a bundle pair
$$(F,F') := \normalfont\text{ev}^*(TX,TL) \rightarrow \mathcal{A}\times (G,\partial G).$$
Using a connection on $TX$, the linearisation of $\cc{\partial}_J$ defines a real Cauchy-Riemann operator on $(F,F')$ by \cite[Proposition 3.1.1]{MS04}. Then Assumption \ref{Assumption} states exactly that its index bundle
is $R$-oriented. For $u \in \cc{\partial}_J^{-1}(0)$ we have, by the Riemann-Roch theorem \cite[Theorem C.1.10]{MS04}, that
\begin{equation}\label{index 1}\indo(D_u\cc{\partial}_J ) = \dim(L)+\mu(F|_u,F'|_u), \end{equation}
where $\mu(F|_u,F'|_u)$ is the boundary Maslov index of the pullback of $(F,F')$ to $\{u\}\times G$. \par
\begin{rem}\label{r}
If $u \in \mathcal{A}$ is pseudoholomorphic, then $\mu(F|_u,F'|_u) = 0$ as $u$ must be constant due to relative exactness. However, if $u$ instead satisfies that $\psi_t(u)$ is pseudoholomorphic for some $t > 0$, $u$ need not be constant and may lie in a non-trivial relative homotopy class of discs. In this case, $\mu(F|_u, F'|_u) $ might not vanish.
\end{rem}
By \cite[Theorem (3)]{Kui65} we can fix a smooth isometric trivialisation $\Psi \colon \mathcal{E}\rightarrow \mathcal{A}\times \mathcal{H}$, where $\mathcal{H}$ is some separable Hilbert space.
Define $\mathcal{F}\colon \mathcal{A}\times[0,1] \rightarrow \mathcal{H}$ by
\begin{equation}\mathcal{F}_t(u) := \pr_2\Psi(\cc{\partial}_J \psi_t(u))\end{equation}
letting $\pr_2$ denote the projection to the second factor. Note that $\mathcal{F}_1^{-1}(0)$ is diffeomorphic via $\psi_1$ to the space $\mathcal{P}$ of pseudoholomorphic maps from $G$ to $X$ which have finite energy and map $z\in \partial G$ to $L_z$.
\begin{proof}[Proof of Theorem \ref{Main Theorem 1}]
Let
$$\mathcal{W} := \set{(u,t)\in \mathcal{A}\times[0,1] : \mu(F|_{\psi_t(u)},F'|_{\psi_t(u)})= 0}.$$
This is an open subset of $\mathcal{A}\times [0,1]$. We restrict to the subset where the Maslov index is 0 in order to have control over the index of $\mathcal{F}$, due to Remark \ref{r}. However, with a little more care the entire argument could also be applied without this restriction. Let $\pi: \mathcal{W} \rightarrow L$ be the evaluation map at $z_0$.\par
By \cite[Proposition 6]{Ho88} there exists a neighbourhood $U \sub\mathcal{W}$ of $\mathcal{F}^{-1}(0)$ such that $\mathcal{F}|_U: U \rightarrow \mathcal{H}$ is a Fredholm map of index $\dim(L)+1$ and such that $\mathcal{F}|_U$ is proper with respect to a neighbourhood of $0 \in \mathcal{H}$. We note that $U$ and $\mathcal{H}$ are separable Hilbert manifolds. Since pseudoholomorphic discs with boundary on $L$ are constant, $\pi$ defines a diffeomorphism $\mathcal{F}_0^{-1}(0)\rightarrow L$. Moreover, $\mathcal{F}_0$ is submersive by the proof of \cite[Lemma 5]{Ho88}, and $\mathcal{F}$ is $R$-orientable by Assumption \ref{Assumption}. Thus the claim follows from Proposition \ref{Injectivity of Evaluation Map with Spectra}.
\end{proof}
\section{Approximating pseudoholomorphic strips}\label{Section 3}
The key idea in the proof of Theorem \ref{Main Theorem 2} is to study a one-parameter family of moduli spaces of pseudoholomorphic discs $\mathcal{P}_\ell$ with moving boundary conditions. They approximate the moduli space of pseudoholomorphic strips $\mathcal{M}_{L, L'}$ as the parameter $\ell$ tends to $\infty$. Together with \cite[Lemma 5.2]{AMS21}, this allows us to infer Theorem \ref{Main Theorem 2} from Theorem \ref{Main Theorem 1}.\par
Throughout this section we fix a convex domain $G$ in $\mathbb{C}$ with smooth boundary, such that both $(-\eta, \eta)$ and $(-\eta, \eta) + i$ are contained in $\partial G$ for some $\eta > 0$. For $\ell > 0$, define $Z_\ell := [-\ell, \ell] + [0, 1]i$, and let
$$G_\ell := {Z_\ell \cup (G + \ell) \cup (G - \ell)}$$
be a smoothing as shown below. \par
{\centering\input{picture1.tikz}\par }
\begin{de}
For a domain $W$ in $\mathbb{C}$ and a smooth map $u: W \rightarrow X$, we define the \emph{symplectic energy} to be
$$E(u) := \frac12\int_W u^* \omega$$
whenever this integral is defined.
\end{de}
When $u$ is pseudoholomorphic, the symplectic energy of $u$ is defined and non-negative, although not necessarily finite.\par
We consider the following moduli spaces. Recall from the introduction that $L$ is a closed, relatively exact Lagrangian in $X$ and $L'$ is Hamiltonian isotopic to $L$. We denote by $Z := \mathbb{R}+[0, 1]i $ the infinite strip.
\begin{de} We define
$$\mathcal{D}_{L, L'} := \set{u \in C^\infty(Z,X) : |E(u)|< \infty,\; u(\mathbb{R})\sub L,\; u(\mathbb{R}+i)\sub L'}.$$
It contains $\mathcal{M}_{L, L'} := \{u \in \mathcal{D}_{L, L'}: \cc{\partial}_J u = 0\}$ as the subspace of pseudoholomorphic maps.\par
Given $\ell > 0$ and $A \geq 0$, we set
$$\mathcal{F}_{\ell, A} := \set{u \in C^\infty(Z_\ell,X): E(u) \leq A,\; \cc{\partial}_Ju = 0,\; u([-\ell,\ell])\sub L,\; u([-\ell,\ell]+i)\sub L'}.$$
Given $\ell \geq 0$ and a Hamiltonian family $\{L_t\}_{t \in [0, 1]}$ of Lagrangians in $X$ with $L_0 = L$ and $L_1 = L'$, we define the moduli space
$$\mathcal{P}_\ell := \set{u\in C^\infty(G_\ell,X) : \cc{\partial}_J u = 0,\; u(s+i t)\in L_t \normalfont\text{ for } s +i t \in \partial G_\ell}.$$
\end{de}
All of these spaces are endowed with the weak $C^\infty$ Whitney topology. By the Nash Embedding Theorem applied to the metric $g_J = \omega(\cdot,J\cdot)$, this topology is metrisable. Hofer showed in \cite[Theorems 1 and 2]{Ho88} that the moduli spaces $\mathcal{P}_\ell$ and $\mathcal{M}_{L, L'}$ are compact.\par
Evaluation at $0 \in \mathbb{C}$ defines a continuous map, denoted by $\pi$, from each of these spaces to $L$.
\begin{rem} Pick some Hamiltonian isotopy $\{\psi^t\}_{t \in [0, 1]}$ such that $\psi^t(L) = L_t$ for all $t$. Setting
$$L_{x + i y} := L_y\qquad \quad\text{and}\qquad \quad\psi^t_{x + i y} := \psi^{ty}$$
for $x + i y$ in $\partial G$ shows that $\mathcal{P}_\ell$ is the space of pseudoholomorphic maps $G_\ell \to X$ of finite energy which map $z \in \partial G_\ell$ to $L_z$, i.e., of the form \eqref{changing-boundary}. This allows us to apply Theorem \ref{Main Theorem 1} with $\mathcal{P} = \mathcal{P}_\ell$.
\end{rem}
We require the following uniform energy bound.
\begin{lem}[{\cite[Lemma 2]{Ho88}}]\label{Energy Uniformly Bounded}
The symplectic energy is uniformly bounded on all $\mathcal{P}_\ell$. More precisely, there exists a constant $C \geq 0$ such that for all $\ell > 0$ and all $u$ in $\mathcal{P}_\ell$, we have $E(u) \leq C$.
\end{lem}
Fix a smooth cutoff function $\rho: \mathbb{R} \rightarrow [0, 1]$ with
\begin{equation*}\rho(t) =
\begin{cases}
1 \quad& t \leq \frac12\\
0 \quad& t \geq \frac32\end{cases}\end{equation*}
and define for $\ell > 0$ the function $r_\ell: \mathcal{F}_{\ell, A} \rightarrow \mathcal{D}_{L, L'}$ by
$$r_\ell(u)(x + iy) := u(\rho(\ell^{-1}|x|)x+ i y).$$
By construction, $r_\ell(u)$ agrees with $u$ on $Z_{\frac\ell2}$.\\
We require the following result which one should consider as a variant of Gromov compactness.
\begin{prop}[{\cite[Proposition 3]{Ho88}}]\label{P3}
For any neighbourhood $U$ of $\mathcal{M}_{L, L'}$ in $\mathcal{D}_{L, L'}$ and any $A \geq 0$, there exists $\ell_0 > 0$ such that $r_\ell(\mathcal{F}_{\ell, A}) \subseteq U$ for all $\ell \geq \ell_0$.
\end{prop}
\smallskip
\begin{proof}[Proof of Proposition \ref{Main Theorem 2}]
Let $C$ be the uniform energy bound from Lemma \ref{Energy Uniformly Bounded}. Any $u$ in any $\mathcal{P}_\ell$ clearly satisfies $E(u|_{Z_\ell}) \leq C$. Pick an open neighbourhood $U$ of $\mathcal{M}_{L, L'}$ in $\mathcal{D}_{L, L'}$ and take $\ell_0$ as in Proposition \ref{P3} with $A = C$; we obtain a commutative diagram
\[\begin{tikzcd}
\mathcal{P}_{\ell_0} \arrow[r, "\cdot\vert_{Z_{\ell_0}}"] \arrow[drr, "\pi"]&
\mathcal{F}_{\ell_0, C} \arrow[r, "r_{\ell_0}"] &
U \arrow[d, "\pi"] \\
& & L\\
\end{tikzcd}\]
By Theorem \ref{Main Theorem 1} and the commutativity of the diagram, the pullback $\pi^*: R^*(L) \rightarrow R^*(U)$ is injective. Using the isomorphism $$R^*(\mathcal{M}_{L, L'}) \cong \varinjlim R^*(U)$$
(taking a direct limit over open neighbourhoods of $\mathcal{M}_{L, L'}$ in $\mathcal{D}_{L, L'}$) given by \cite[Lemma 5.2]{AMS21} and the exactness of the direct limit functor, we may conclude.
\end{proof}
\begin{rem} \cite[Lemma 5.2]{AMS21} states that generalised cohomology theories as defined in Section \ref{2.1} satisfy the continuity axiom. This is one key ingredient for our generalisation of the results in \cite{Ho88}.
\end{rem}
\section{Lusternik-Schnirelmann theory}\label{Section 4}
Our goal in this section is the proof of Theorem \ref{Intersection Points and Cuplength}.
Observe that there is a natural $\mathbb{R}$-action on $\mathcal{M}_{L, L'}$, by setting $t \cdot u := u(\cdot - t)$. The fixed points of this action are exactly the constant maps to points in $L \cap L'$. Hence there is a bijection between these fixed points and $L \cap L'$.
\begin{lem}[{\cite[Lemma 4]{Ho88}}]
There exists a continuous map $\sigma: \mathcal{M}_{L, L'} \rightarrow \mathbb{R}$ such that for any $u$ which is not a fixed point of the $\mathbb{R}$-action, the function $t\mapsto \sigma(t \cdot u)$ is strictly decreasing.
\end{lem}
\begin{proof}[Sketch of the construction]
One should think of $\sigma$ as something akin to the Floer action functional. Indeed, if $X$ is Liouville and $L$ is an exact Lagrangian, we can take $\sigma$ to be the usual Floer action functional.\par
If not, for each path component $Q$ in $\mathcal{M}_{L, L'}$, we fix a basepoint $u_0$ in $Q$, and define $\sigma(u_0) := 0$. Then for some other $u_1$ in $Q$, we pick a path $\{u_t\}_{t \in [0, 1]}$ from $u_0$ to $u_1$, and define
$$\sigma(u_1) = \int_{[0, 1]^2} v^* \omega$$
where $v: [0, 1]^2 \rightarrow X$ is a smoothing (rel endpoints) of the map sending $(s, t)$ to $u_s(t i)$. This is well-defined due to relative exactness.
\end{proof}
Fix some basepoint $x_0\in L$. For any subset $S$ of $\mathcal{M}_{L, L'}$ or $\mathcal{D}_{L, L'}$, we consider the map of pairs
$$\pi_S: (S, \emptyset) \rightarrow (L,x_0) : u \mapsto u(0),$$
as well as the pullback
$$\pi_S^*: R^*(L,x_0) \rightarrow R^*(S).$$
\begin{de}
To each subset $S$ of $\mathcal{M}_{L, L'}$, we assign the positive integer
$$I(S) := \min\set{k \geq 1: \exists\; U_1, \ldots, U_k\sub \mathcal{M}_{L, L'}\normalfont\text{ open } : S\sub U_1\cup\dots \cup U_k\normalfont\text{ and } \pi^*_{U_i} = 0 \normalfont\text{ for all } i}.$$
\end{de}
Note $I$ has a uniform upper bound. Indeed, let $N$ be the minimal number of contractible open subsets of $L$ required to cover $L$. Then $I(S) \leq N$ for any $S\sub \mathcal{M}_{L, L'}$.\par
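For completeness, here is a short justification of the bound $I(S) \leq N$ (this argument is our addition, not taken from \cite{Ho88}). Let $V \sub L$ be contractible and open, and let $\alpha \in R^*(L, x_0)$. Since $L$ is path-connected, the inclusion of any point of $V$ into $L$ is homotopic to the inclusion of $x_0$, so the restriction of $\alpha$ to $V \simeq \mathrm{pt}$ vanishes. As $\pi_{\pi^{-1}(V)}$ factors through $V$, it follows that
$$\pi^*_{\pi^{-1}(V)}\,\alpha = 0,$$
so the preimages of the $N$ contractible open sets covering $L$ form an admissible cover of any $S\sub \mathcal{M}_{L, L'}$.\par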
\begin{lem}\label{Index Function}
Fix subsets $S$ and $T$ of $\mathcal{M}_{L, L'}$.
\begin{enumerate}
\item If $S \subseteq T$, then $I(S) \leq I(T)$.
\item There is some open neighbourhood $U$ of $S$ such that $I(S) = I(U)$.
\item $I(S \cup T) \leq I(S) + I(T)$.
\item If $\{\varphi_t\}_{t \in\mathbb{R}}$ is a flow on $\mathcal{M}_{L,L'}$, then $I(S) = I(\varphi_t(S))$ for all $t\in\mathbb{R}$.
\item\label{finite} $I(\{u_1,\dots,u_n\}) = 1$ for any $u_1,\dots,u_n\in \mathcal{M}_{L,L'}.$
\end{enumerate}
\end{lem}
Thus $I$ is an \emph{index function} in the sense of \cite[Definition 4.2]{Rud99}.
\begin{proof} If $S\subseteq T$, we take the minimum over a larger set, so the inequality is immediate. If $U_1, \ldots, U_{I(S)}$ are open subsets of $\mathcal{M}_{L, L'}$ covering $S$ with $\pi^*_{U_i} = 0$ for all $i$, set
$$U = U_1 \cup \ldots \cup U_{I(S)}.$$
Then $I(U)\leq I(S)$, and equality holds by (1).
For (3), the union of suitable open covers for $S$ and for $T$ is a suitable open cover for $S\cup T$ of cardinality $I(S)+I(T)$, so $I(S\cup T)\leq I(S)+I(T)$. As $\varphi_t$ is homotopic to the identity, it takes a suitable cover for $S$ to a suitable cover for $\varphi_t(S)$. Thus $I(\varphi_t(S)) \leq I(S)$, and equality follows by applying the same argument to $\varphi_{-t}(\varphi_t(S))$.
For the last claim, write $\{p_1,\dots,p_k\} = \{u_1(0),\dotsc,u_n(0)\}$. For each $j \leq k$ choose a contractible open neighbourhood $V_j$ of $p_j$ in $L$ such that $\cc{V_i}\cap \cc{V_j} =\emptyset$ for $i \neq j$. Then the preimage $U = \pi^{-1}(V_1 \cup \dots \cup V_k)$ is a suitable cover (by a single open set) for $\{u_1,\dots,u_n\}$.
\end{proof}
\begin{lem}\label{Index Function and Cuplength}
$I(\mathcal{M}_{L, L'}) \geq c_R(L)$.
\end{lem}
\begin{proof}
Fix an open cover $U_1, \ldots, U_k$ of $\mathcal{M}_{L, L'}$ such that $\pi^*_{U_i} = 0$ for $i\leq k$ and let $\alpha_1,\dots,\alpha_k\in R^*(L,x_0)$ be arbitrary. By Lemma \ref{rel cup} the product $\pi_{\mathcal{M}_{L, L'}}^* \alpha_1 \cdot \ldots \cdot \pi_{\mathcal{M}_{L, L'}}^* \alpha_k$ vanishes in $R^*(\mathcal{M}_{L, L'})$. By Theorem \ref{Main Theorem 1}, $\pi_{\mathcal{M}_{L, L'}}^*$ is injective, so $\alpha_1 \cdot \ldots \cdot \alpha_k = 0$ in $R^*(L, x_0)$ and $c_R(L) \leq k$. Taking the infimum over all such open covers completes the proof.
\end{proof}
Given Lemma \ref{Index Function and Cuplength}, the proof of Theorem \ref{Intersection Points and Cuplength} reduces to showing that
\begin{equation}\label{Star} \# L \cap L' \geq I(\mathcal{M}_{L, L'})\end{equation}
Since $I$ is an index function, this follows from \cite[Theorem 4.2]{Rud99}. For the sake of exposition, we give a proof here, using standard Lusternik-Schnirelmann theory as in \cite[Section V]{Ho88}.
\begin{de}
For $1 \leq i \leq I(\mathcal{M}_{L, L'})$, we define
$$d_i := \inf_{I(S) \geq i} \sup \sigma(S)$$
where the infimum is taken over subsets of $\mathcal{M}_{L, L'}$.\par
For any $d\in \mathbb{R}$, we denote
$$Cr(d) := \set{u \in \mathcal{M}_{L, L'} : \sigma(u) = d,\; \mathbb{R} \cdot u = \{u\} }.$$
\end{de}
Since the fixed points of the $\mathbb{R}$-action are in bijection with $L \cap L'$, it follows that
$$\sum_d \# Cr(d) = \# L \cap L'.$$
\begin{lem}
$$-\infty < d_1 \leq \ldots \leq d_{I(\mathcal{M}_{L, L'})} < \infty.$$
\end{lem}
\begin{proof} First observe that
$d_j \leq d_{j + 1}$ for all $j$ since we take the infimum over a smaller set. The compactness of $\mathcal{M}_{L, L'}$ implies that
$-\infty < d_1$ and $d_{I(\mathcal{M}_{L, L'})} < \infty$.
\end{proof}
\begin{lem}\label{ccc}
For any neighbourhood $U$ of $Cr(d)$, there exists some $\varepsilon > 0$ such that
$$u \in \sigma^{-1}((-\infty, d + \varepsilon])\setminus U\quad \Rightarrow\quad 1\cdot u\in \sigma^{-1}((-\infty, d - \varepsilon]).$$
\end{lem}
\begin{proof}
This follows from the compactness of $\sigma^{-1}((-\infty, d]) \setminus U$ along with the continuity of the $\mathbb{R}$-action.
\end{proof}
\begin{lem}\label{aaa}
$Cr(d_j)$ is non-empty for all $j$.
\end{lem}
\begin{proof}
Suppose $Cr(d_j)$ is empty. Applying Lemma \ref{ccc} to $U = \emptyset$ we obtain some $\varepsilon > 0$ such that
$$1 \cdot\sigma^{-1}((-\infty, d_j + \varepsilon]) \subseteq \sigma^{-1}((-\infty, d_j - \varepsilon]).$$
By definition of $d_j$, there exists $S \subseteq \mathcal{M}_{L, L'}$ such that $I(S) \geq j$ and
$$ d_j \leq \sup \sigma(S) \leq d_j + \varepsilon.$$
But then $I(1 \cdot S) \geq j$ and $\sup \sigma(1 \cdot S) < d_j$, which is a contradiction.
\end{proof}
\begin{lem}\label{bbb}
If $d_j = d_{j + 1}$ for any $j$, then $Cr(d_j)$ is infinite.
\end{lem}
\begin{proof}
If $Cr(d_j)$ is finite, then $I(Cr(d_j)) = 1$ by part \eqref{finite} of Lemma \ref{Index Function}. So it suffices to show that $I(Cr(d_j)) \geq 2$.
Suppose by contradiction $I(Cr(d_j)) \leq 1$. Since $Cr(d_j)$ is non-empty, we must have equality. Then there is some open neighbourhood $U$ of $Cr(d_j)$ such that $\pi^*_{U} = 0$. Given this $U$, fix $\varepsilon > 0$ as in the statement of Lemma \ref{ccc}.\par
Choose $S \subseteq \mathcal{M}_{L, L'}$ such that $I(S) \geq j + 1$ and
$$d_j \leq \sup \sigma(S) \leq d_j + \varepsilon.$$
Then $I(S \setminus U) \geq I(S) - I(S \cap U) \geq j$ by parts (1) and (3) of Lemma \ref{Index Function}, since $U$ itself witnesses $I(S \cap U) \leq 1$. Hence $I(1 \cdot (S \setminus U)) \geq j$ by flow invariance, but $\sup \sigma(1 \cdot (S \setminus U)) \leq d_j - \varepsilon$, a contradiction.
\end{proof}
The inequality in \eqref{Star}, and hence Theorem \ref{Intersection Points and Cuplength}, follows from Lemmas \ref{aaa} and \ref{bbb}.
\section{Introduction}
\subsection{Background}
Let $(X,\omega)$ be a symplectic manifold and let $L$ and $L'$ be two Lagrangian submanifolds in $X$. A classical problem in symplectic topology is to find lower bounds on the number of intersection points between $L$ and $L'$ when they are related by a Hamiltonian diffeomorphism.
This has been studied by many authors, under various different assumptions. When the Lagrangians are assumed to be transverse, there are lower bounds obtained from Floer homology, such as in \cite{Fl88}, \cite{Oh93}, and \cite{FOOO11} among many other references.\par
The classical Arnol'd conjecture concerns a special case of this question, where $(X, \omega)$ is the product symplectic manifold $(Y \times Y, \sigma \oplus -\sigma)$ for some compact symplectic manifold $(Y, \sigma)$, $L$ is the diagonal and $L'$ is the graph of a Hamiltonian diffeomorphism of $Y$. This case has been the subject of much study, both with and without the additional assumption of transverse intersection. See \cite{FO99}, \cite{Rud99}, \cite{P16} and \cite{AB21} and the references therein.\par
We will not assume that $L$ and $L'$ are transverse, but instead make the following assumption. It excludes disc and sphere bubbling, guaranteeing the compactness of certain moduli spaces of pseudoholomorphic curves.
\begin{assumption}\label{relativeexactness}Throughout this paper, we will assume that
\begin{enumerate}
\item $X$ is either closed or a Liouville manifold.
\item $L$ is connected, closed and \emph{relatively exact}, i.e.,
$$\omega \cdot \pi_2(X, L) = 0.$$
\end{enumerate}
\end{assumption}
Under these assumptions, Floer proved
\begin{thm}[\cite{Fl88}]
If $L$ and $L'$ intersect transversely, there is a lower bound
$$\#L \cap L' \geq \sum\limits_i \mathrm{Rank}(H_i(L; \mathbb{Z}/2)).$$
\end{thm}
Without the transversality assumption, a version of the Arnol'd conjecture states that
\begin{conjecture}\label{c}
Given Assumption \ref{relativeexactness}, there is a lower bound
$$ \#L \cap L' \geq \mathrm{Crit}(L) $$
where $\mathrm{Crit}(L)$ is the minimal number of critical points of any smooth map $L \rightarrow \mathbb{R}$.
\end{conjecture}
A standard application of the Weinstein neighbourhood theorem implies that this bound, if true, is sharp.\\
Lusternik-Schnirelmann theory is a powerful tool for studying (numbers of) critical points or intersection points without any transversality assumptions. It has been used in many fields other than symplectic geometry. For example, Klingenberg proved in \cite[Theorem 5.1.1]{Kl78} that any metric on $S^2$ admits at least three closed geodesics. Another application is to show that any (not necessarily Morse) function on a closed smooth manifold $M$ has at least $c_{\mathbb{Z}}(M)$ critical points, where $c_{\mathbb{Z}}(M)$ is the cuplength of $M$ in singular cohomology with integer coefficients.
Lusternik-Schnirelmann theory has also been used in contact topology, e.g. by Ginzburg and G\"urel in \cite{GiGu20} to find lower bounds for numbers of Reeb orbits. For a more comprehensive discussion and further applications we refer to \cite{CLOT03} or Chapter 11 in \cite{MS17}.\\
In particular, this technique can be applied to study Conjecture \ref{c}.
\begin{thm}[{\cite[Theorem 3]{Ho88}, \cite[Theorem 1]{Fl89}, \cite[Theorem 1]{Ho85}}]\label{hof}
Under Assumption \ref{relativeexactness}, there is a lower bound
\begin{equation}\label{int-mod-2}\#L \cap L' \geq c_{\mathbb{Z}/2}(L).\end{equation}
If $X = T^* L$ is a cotangent bundle with $L$ the zero section, there is a lower bound
\begin{equation}\label{int-mod-0}\#L \cap L' \geq c_{\mathbb{Z}}(L).\end{equation}
\end{thm}
Here $c_{R}(L)$ is the cuplength of $L$ in cohomology with coefficients in $R$ for a ring $R$, defined below.
This result was proved by Hofer using Lusternik-Schnirelmann theory; independently, Floer gave a proof via Conley indices. In general, \eqref{int-mod-2} and \eqref{int-mod-0} are weaker than the bound given by Conjecture \ref{c}, and there are examples in which they are strictly weaker, such as \cite[Example 3.7]{Rud99}.\par
Similar results have been obtained in the monotone setting (rather than the relatively exact setting, in which we work) by L\^{e}-Ono \cite{VaOn96} and Gong \cite{Go21b}.
\subsection{Main results}
We will extend Theorem \ref{hof}, as well as other results in \cite{Ho88}, to generalised cohomology theories. The strategy is to prove the injectivity of a certain pullback map relating the generalised cohomology of $L$ with the generalised cohomology of a certain moduli space of pseudoholomorphic curves.\par
Fix a ring spectrum $R$, representing a multiplicative generalised cohomology theory $R^*$. The main invariant we will use is the cuplength.
\begin{de} Let $Y$ be a path-connected topological space. The \emph{$R$-cuplength} of $Y$ is the natural number (or $\infty$)
\begin{equation}\label{cuplength de} c_{R}(Y) := 1 +\sup\{k \in \mathbb{N}: \exists \alpha_1,\dots,\alpha_k \in \tilde{R}^*(Y) : \alpha_1 \cdot \dots\cdot \alpha_k \neq 0\}.\end{equation}
\end{de}
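To make the definition concrete, here is a toy computation (our illustration, not part of the paper; the function names are ours). For ordinary integral cohomology of the $n$-torus, $H^*(T^n; \mathbb{Z}) = \Lambda(a_1, \dots, a_n)$ is an exterior algebra on $n$ degree-one generators, so the longest nonzero product of reduced classes has length $n$ and the cuplength is $n + 1$. A minimal sketch, modelling square-free monomials as sets of generators:

```python
from itertools import combinations

def wedge(m1, m2):
    """Multiply two square-free monomials of an exterior algebra, ignoring
    signs; None encodes the zero class, which occurs exactly when a
    generator repeats (since a_i * a_i = 0)."""
    if m1 is None or m2 is None or (m1 & m2):
        return None
    return m1 | m2

def cuplength_exterior(n):
    """Cuplength of an exterior algebra on n degree-one generators, e.g.
    H^*(T^n; Z): one plus the length of the longest nonzero product of
    reduced classes.  Monomials in the generators span the reduced
    cohomology, so it suffices to test products of distinct generators."""
    gens = [frozenset([i]) for i in range(n)]
    best = 0
    for k in range(1, n + 1):
        for combo in combinations(gens, k):
            prod = frozenset()  # the empty monomial represents 1
            for g in combo:
                prod = wedge(prod, g)
            if prod is not None:
                best = k
                break
    return 1 + best
```

For genuinely generalised theories (e.g. $K$-theory of $\Sp(2)$, as used later) the ring structure is of course harder to compute; the sketch only illustrates the combinatorial shape of Definition \eqref{cuplength de}.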
The proof of Theorem \ref{hof} relies on various moduli spaces of pseudoholomorphic curves, which might not be cut out transversely. Thus they do not admit a tangent bundle; instead one has to work with a virtual vector bundle, induced by the linearisation of the Cauchy-Riemann operator. We make the following assumption throughout the paper, which will allow us to apply a version of Atiyah duality later on.
\begin{assumption}\label{Assumption}
This virtual vector bundle is $R$-orientable.
\end{assumption}
From \cite{Por22} we have the following criteria for Assumption \ref{Assumption} to be satisfied for some generalised cohomology theories.
\begin{prop}[{\cite[Proposition 1.13]{Por22}}]\label{Orientability Holds Often}
Assumption \ref{Assumption} holds when
\begin{enumerate}
\item $R$ is the Eilenberg-MacLane spectrum $H\mathbb{Z}/2$.
\item $R$ is the Eilenberg-MacLane spectrum $H\mathbb{Z}$, and $L$ is (relatively) spin.
\item $R^*$ is complex $K$-theory, and $L$ is spin.
\item\label{sphere} $R^*$ is real $K$-theory, and $TL$ admits a stable trivialisation over a 3-skeleton of $L$ which extends (after complexification) to a stable trivialisation of $TX$ over a 2-skeleton of $X$ (as a complex vector bundle).
\end{enumerate}
\end{prop}
Fix an $\omega$-compatible almost complex structure $J$. If $X$ is Liouville, we assume $J$ to be convex at infinity. Let $Z$ be the infinite strip $\mathbb{R} + [0, 1]i$ in the complex plane. By pseudoholomorphic, we will always mean with respect to $J$. We are interested in the following standard moduli space of pseudoholomorphic strips with Lagrangian boundary conditions
$$\mathcal{M}_{L, L'} := \set{u \in C^\infty(Z,X) : \cc{\partial}_J u = 0,\;E(u)< \infty,\; u(\mathbb{R})\sub L,\; u(\mathbb{R}+i)\sub L'}.$$
Evaluation at 0 defines a map $\pi: \mathcal{M}_{L, L'} \rightarrow L$. Following Hofer's strategy, we can approximate $\mathcal{M}_{L, L'}$ with spaces of pseudoholomorphic discs and use Theorem \ref{Main Theorem 1} to prove the following.
\begin{prop}\label{Main Theorem 2}The map $\pi^*\colon R^*(L)\rightarrow R^*(\mathcal{M}_{L,L'})$ is injective.
\end{prop}
The main work of the paper, done in Section \ref{Section 2}, lies in the proof of the auxiliary result Theorem \ref{Main Theorem 1}. In Section \ref{Section 3} we approximate $\mathcal{M}_{L,L'}$ by moduli spaces of pseudoholomorphic discs. The continuity axiom, which is satisfied by generalised cohomology theories as defined in Section \ref{2.1}, allows us to deduce Proposition \ref{Main Theorem 2}.
As in \cite{Ho88}, we combine this result in Section \ref{Section 4} with standard Lusternik-Schnirelmann theory to obtain the following lower bound.
\begin{thm}\label{Intersection Points and Cuplength} Suppose $L$ satisfies Assumption \ref{relativeexactness} and Assumption \ref{Assumption} for a ring spectrum $R$. Then the number of intersection points between $L$ and $L'$ satisfies
$$\#L \cap L' \geq c_{R}(L).$$
\end{thm}
\smallskip
\begin{rem} If $L = L_1 \sqcup \dots \sqcup L_k$ is a disjoint union of Lagrangians satisfying Assumption \ref{relativeexactness}, we may write $L' = L'_1 \sqcup \dots \sqcup L'_k$ with $L'_i$ Hamiltonian isotopic to $L_i$. Theorem \ref{Intersection Points and Cuplength} applied to each pair $L_i$ and $L'_i$ shows that
\begin{equation*}\#L \cap L' \geq \sum_{i=1}^kc_{R}(L_i).\end{equation*}
\end{rem}
In Section \ref{Examples} we give two examples where Theorem \ref{Intersection Points and Cuplength} represents a stronger bound than what was previously known. This uses computations in \cite{IM04} of the cuplengths of the compact symplectic groups $\Sp(2)$ and $\Sp(3)$ with respect to certain generalised cohomology theories. \\
As we do not assume the transversality of $L$ and $L'$, to our knowledge there is no analogue of our strategy of proof using the setup in \cite{CoJoSe95,CJS09}. However, it may be possible to use their setup to prove Theorem \ref{Intersection Points and Cuplength} using the strategy of \cite{Go21a} instead.\\
Suppose $X$ is compact and symplectically aspherical. If $\psi$ is a (possibly degenerate) Hamiltonian diffeomorphism of $X$, we can apply Theorem \ref{Intersection Points and Cuplength} to the graph of $\psi$ in $X\times X$ to deduce the Hamiltonian version of this inequality.
\begin{cor}\label{corolary}
The number of fixed points of $\psi$ satisfies
$$\#\fixp(\psi) \geq c_{R}(X).$$
\end{cor}
In this setting, Conjecture \ref{c} (which implies Corollary \ref{corolary}) has already been verified; see \cite[Theorem A]{Rud99}, \cite[Corollary 4.2]{OR99} and \cite[Theorem 8.28]{CLOT03}.
\subsection*{Acknowledgements}
Both authors are grateful to Ailsa Keating and Ivan Smith for valuable discussions, and to the latter for suggesting this project. Oscar Randal-Williams pointed out the example of $\Sp(2)$ and gave helpful feedback, as did Jonny Evans. The first author thanks Matija Sreckovic for comments on an earlier draft. The second author would also like to thank Jack Smith, Nick Nho and Sam Frengley for helpful discussions.\\ Both authors were funded by the EPSRC during their graduate studies. The second author is supported by the Engineering and Physical Sciences Research Council [EP/W015889/1]. This work was finished while both authors were resident at the Simons Laufer Mathematical Sciences Institute, supported by the NSF grant No. DMS-1928930.
% --- metadata for the preceding article ---
% arXiv:2211.07559, "Lagrangian intersections and cuplength in generalised cohomology"
% Subjects: Symplectic Geometry (math.SG); posted November 2022; https://arxiv.org/abs/2211.07559
% Abstract: We find lower bounds on the number of intersection points between two relatively exact Hamiltonian isotopic Lagrangians. The bounds are given in terms of the cuplength of the Lagrangian in various multiplicative generalised cohomology theories. The intersection of the Lagrangians need not be transverse, however, we require certain orientation assumptions. This gives stronger bounds than previous estimates on the number of self-intersection points of a suitable closed, relatively exact Lagrangian diffeomorphic to Sp$(2)$ or Sp$(3)$. Our proof uses Lusternik-Schnirelmann theory, following and extending work by Hofer.
% --- next article ---
% arXiv:1802.09039, "Flag bundles, Segre polynomials and push-forwards"; https://arxiv.org/abs/1802.09039
% Abstract: In this note, we give Gysin formulas for partial flag bundles for the classical groups. We then give Gysin formulas for Schubert varieties in Grassmann bundles, including isotropic ones. All these formulas are proved in a rather uniform way by using the step-by-step construction of flag bundles and the Gysin formula for a projective bundle. In this way we obtain a comprehensive list of new universal formulas.
\section{Introduction}
\label{se:intro}
Let \(E\to X\) be a vector bundle of rank \(n\) on a variety \(X\) over an algebraically closed field.
Let \(\pi\colon\mathbf{F}(E)\to X\) be the bundle of flags of subspaces of dimensions \(1,2,\dots, n-1\) in the fibers of \(E\to X\).
The flag bundle \(\mathbf{F}(E)\) is used, \textit{e.g.}, in the splitting principle, a standard technique which reduces questions about vector bundles to the case of line bundles; namely, the pullback bundle \(\pi^{\ast}E\) decomposes as a direct sum of line bundles.
One can construct \(\mathbf{F}}\def\P{\mathbf{P}}\def\G{\mathbf{G}(E)\) inductively as a sequence of projective bundles, using the following iterative step, that decreases the rank by \(1\).
Let \(p_{1}\colon\P(E)\to X\) denote the projective bundle of lines in the fibers of \(E\), and let \(U_{1}:=\mathcal{O}_{\P(E)}(-1)\) denote the universal subbundle on \(\P(E)\), then one has a universal exact sequence
of vector bundles on \(\P(E)\)
\[
0\to U_{1}\longrightarrow p_{1}^{\ast}E \longrightarrow Q_{n-1}\to 0,
\]
where \(Q_{n-1}\) (the universal quotient bundle on \(\P(E)\)) is a rank \(n-1\) vector bundle.
Replacing \(E\) by \(Q_{n-1}\), one obtains a universal subbundle on \(\P(Q_{n-1})\), which it is convenient to denote \(U_{2}/U_{1}\), together with a universal quotient bundle \(Q_{n-2}\). Iterating this process until a quotient bundle \(Q_{1}\) of rank one is obtained, one gets a sequence of projective bundles
\begin{equation}
\label{eq:full}
\mathbf{F}(E)
:=
\P(Q_{2})
\stackrel{p_{n-1}}\longrightarrow
\cdots
\to
\P(Q_{n-1})
\stackrel{p_{2}}\longrightarrow
\P(E)
\stackrel{p_{1}}\longrightarrow
X,
\end{equation}
and universal exact sequences of vector bundles on \(\P(Q_{n-i+1})\):
\begin{equation}
\label{eq:taut}
0
\to
U_{i}/U_{i-1}
\to
p_{i}^{\ast}Q_{n-i+1}
\to
Q_{n-i}
\to
0
\end{equation}
such that
\(
\pi^{\ast}E
=
U_{1}\oplus U_{2}/U_{1}\oplus\cdots\oplus U_{n-1}/U_{n-2}\oplus Q_{1}
\).
Here \(U_{i}/U_{i-1}\) stands for the universal subbundle on \(\P(Q_{n-i+1})\) (with \(U_{0}:=0\)), and also for its pullback to \(\mathbf{F}(E)\).
Now we would like to outline how to obtain a Gysin formula for the flag bundle \(\pi\colon\mathbf{F}(E)\to X\) (\textit{cf.} Example \ref{ex1}),
and introduce some notation.
We shall work in the framework of intersection theory of \cite{Fulton}.
Recall that a proper morphism \(g\colon Y\to X\) of nonsingular algebraic varieties over an algebraically closed field yields an additive map \(g_{\ast}\colon A^{\bullet}Y\to A^{\bullet}X\) of Chow groups, induced by the push-forward of cycles, called the Gysin map.
The theory developed in \cite{Fulton} also allows one to work with singular varieties, or with cohomology.
In this note, \(X\) will always be nonsingular.
For \(E\to X\) a vector bundle, let \(s(E)\) be the Segre class of \(E\), that is the formal inverse of the Chern class \(c(E)\) in the Chow ring of \(X\).
Let \(\xi=c_{1}(\mathcal{O}_{\P(E)}(1))\); then \(A^{\bullet}(\P(E))\) is generated algebraically by \(\xi\) over \(A^{\bullet}X\)---here we identify \(A^{\bullet}X\) with a subring of \(A^{\bullet}(\P(E))\)---and
\begin{equation}
\label{eq:segre}
(p_{1})_{\ast} \xi^{i}
=
s_{i-(n-1)}(E),
\end{equation}
\textit{cf.} \cite{Fulton}.
To obtain a Gysin formula for the sequence of projective bundles (\ref{eq:full}), it suffices to appropriately iterate formula (\ref{eq:segre}).
The intermediate formulas involve the individual Segre classes of the universal quotient bundles, that can be eliminated using (\ref{eq:taut}) and the Whitney sum formula.
However, it seems rather difficult to obtain a universal formula in this way.
A universal formula should hold for \emph{any} polynomial in characteristic classes of universal vector bundles and depend explicitly on the Segre classes of the original bundle \(E\).
To obtain such a formula, we use the generating series of the Segre classes of the universal quotient bundles.
A prototype is the reformulation of (\ref{eq:segre}) in
\begin{equation}
\label{eq:fund}
(p_{1})_{\ast} \xi^{i}
=
[t^{n-1}]
\big(t^{i}s_{1/t}(E)\big),
\end{equation}
where we consider the specialization in \(x=1/t\) of the Segre polynomial \(s_{x}(E)=\sum_{i}s_{i}(E)x^{i}\) and where for a monomial \(m\) and a Laurent polynomial \(P\), \([m](P)\) denotes the coefficient of \(m\) in \(P\).
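As a quick sanity check (ours, not part of the note), the coefficient extraction in (\ref{eq:fund}) can be verified symbolically against (\ref{eq:segre}): treating the Segre classes \(s_j(E)\) as commuting symbols with \(s_j = 0\) for \(j < 0\), the coefficient of \(t^{n-1}\) in \(t^{i}s_{1/t}(E)\) is exactly \(s_{i-(n-1)}(E)\). A small sympy sketch (the function name is ours):

```python
import sympy as sp

def check_segre_coefficient_formula(n, i, num_terms=12):
    """Verify [t^{n-1}](t^i s_{1/t}(E)) = s_{i-(n-1)}(E), with the Segre
    classes s_j(E) modelled as commuting symbols s0, s1, ... and the
    series truncated after num_terms terms."""
    t = sp.symbols('t')
    s = sp.symbols(f's0:{num_terms}')              # s[j] stands for s_j(E)
    series = sum(s[j] * t**(-j) for j in range(num_terms))  # s_{1/t}(E)
    coeff = sp.expand(t**i * series).coeff(t, n - 1)
    j = i - (n - 1)
    expected = s[j] if 0 <= j < num_terms else 0   # s_j = 0 for j < 0
    return sp.simplify(coeff - expected) == 0
```

The check is purely formal: it confirms that the reformulation (\ref{eq:fund}) encodes the same push-forward data as (\ref{eq:segre}), term by term.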
Formula (\ref{eq:fund}) and the projection formula imply that for any polynomial \(f\) in one variable with coefficients in \(A^{\bullet}X\)
\begin{equation}
\label{eq:fund_f}
(p_{1})_{\ast} f(\xi)
=
[t^{n-1}]
\big(f(t)s_{1/t}(E)\big).
\end{equation}
In this formula, (i) one does not need to expand \(f\) into a combination of monomials; (ii) one uses the Segre polynomial, which, like the total Segre class, is a group homomorphism from the Grothendieck group of \(X\) to the multiplicative group of units of \(A^{\bullet}X\) with degree-zero term equal to \(1\).
Iterating the Gysin formula (\ref{eq:fund_f}) yields a closed universal Gysin formula for the flag bundle \(\mathbf{F}(E)\to X\), as announced in Example \ref{ex1}.
It is clear that the outlined strategy of proof applies to more general step-by-step constructions than the construction (\ref{eq:full}) of the flag bundle \(\mathbf{F}(E)\to X\).
Considering the truncated composition \(p_{k}\circ\cdots \circ p_{1}\) in (\ref{eq:full}) yields formulas for
bundles of flags of subspaces of dimensions \(1,2,\dots,k\) in the fibers, for \(k=1,\dots,n-1\).
Then, using certain commutative diagrams (see~\cite[(5)]{DP1}), one extends these formulas to arbitrary partial flag bundles.
Another interesting generalization is to restrict, at each step of the sequence of projective bundles, to the zero locus of a section of some vector bundle.
In other words, one can impose geometric conditions that the subspaces of the flag have to satisfy.
An illustrative example is Theorem~\ref{thm:gysin-BD}, in the orthogonal setting, obtained by considering at each step quadric bundles of isotropic lines in projective bundles of lines.
This method of step-by-step construction of generalized flag bundles leads to uniform short proofs of the different results announced in this note.
This note is organized as follows.
In Section \ref{se:flags}, we shall announce universal Gysin formulas for partial flag bundles for general linear groups, symplectic groups and orthogonal groups.
The proofs of the results announced there can be found in \cite{DP1}.
In Section~\ref{se:schubert} we give Gysin formulas for Kempf-Laksov flag bundles.
These generalized flag bundles are used to desingularize Schubert varieties in Grassmann bundles.
Theorem~\ref{thm:gysin-KL-A} is established in~\cite{DP2}.
Theorem~\ref{thm:gysin-KL-C} is announced for the first time in the present note.
\section{Universal Gysin formulas for flag bundles}
\label{se:flags}
In this section, the letter \(f\) denotes a polynomial in the indicated number of variables with coefficients in \(A^{\bullet}X\).
The appropriate symmetries that \(f\) has to satisfy to be in the Chow ring of the flag bundle under consideration are always implied.
We shall discuss separately the cases of general linear groups, symplectic groups and orthogonal groups.
\subsection{General linear groups}
Let \(E\to X\) be a rank \(n\) vector bundle.
Let \(1\leq d_{1}<\cdots< d_{m}=d\leq n-1\) be a sequence of integers.
We denote by \(\pi\colon\mathbf{F}(d_{1},\dots,d_{m})(E)\to X\)
the bundle of flags of subspaces of dimensions \(d_{1},\dots,d_{m}\) in the fibres of \(E\).
On \(\mathbf{F}(d_{1},\dots,d_{m})(E)\), there is a universal flag
\(U_{d_{1}}\subsetneq\cdots\subsetneq U_{d_{m}}\)
of subbundles of \(\pi^{\ast}E\), where \(\mathrm{rk}(U_{d_{k}})=d_{k}\)
(the fiber of \(U_{d_{k}}\) over the point
\((V_{d_{1}}\subsetneq\cdots\subsetneq V_{d_{m}}\subset E_{x})\),
where \(x\in X\), is equal to \(V_{d_{k}}\)).
For a foundational account on flag bundles, see~\cite{Groth2}.
For \(i=1,\dots,d\), set \(\xi_{i}=-c_{1}(U_{d+1-i}/U_{d-i})\).
\medskip
\begin{theorem}
\label{thm:gysin-A}
With the above notation,
for
\(
f(\xi_{1},\dots,\xi_{d})
\in
A^{\bullet}(\mathbf{F}(d_{1},\dots,d_{m})(E)),
\)
one has
\[
\pi_{\ast}f(\xi_{1},\dots,\xi_{d})
=
\Big[
{t_{1}}^{e_{1}}\dots{t_{d}}^{e_{d}}
\Big]
\bigg(
{
\textstyle
f(t_{1},\dots,t_{d})\,
\prod\limits_{1\leq i<j\leq d}
(t_{i}-t_{j})
\prod\limits_{1\leq i \leq d}
s_{1/t_{i}}(E)
}
\bigg),
\]
where for \(j=d-d_{k}+i\) with \(i=1,\dots,d_{k}-d_{k-1}\),
we denote
\(
e_{j}
=
n-i
\).
\end{theorem}
\medskip
\begin{example}
\label{ex1}
For the complete flag bundle \(\pi\colon\F(E)\to X\), one has
\[
\pi_{\ast}f(\xi_{1},\dots,\xi_{n-1})
=
\Big[{\textstyle\prod\limits_{i=1}^{n-1} t_{i}^{n-1}}\Big]
\bigg(
{\textstyle
f(t_{1},\dots,t_{n-1})
\prod\limits_{1\leq i< j\leq n-1}
(t_{i}-t_{j})
\prod\limits_{i=1}^{n-1}
s_{1/t_{i}}(E)
}
\bigg);
\]
and for the Grassmann bundle \(\pi\colon\F(d)(E)\to X\), one has
\[
\pi_{\ast}f(\xi_{1},\dots,\xi_{d})
=
\Big[{\textstyle\prod\limits_{i=1}^{d}t_{i}^{n-i}}\Big]
\bigg(
{\textstyle
f(t_{1},\dots,t_{d})
\prod\limits_{1\leq i< j\leq d}
(t_{i}-t_{j})
\prod\limits_{i=1}^{d}
s_{1/t_{i}}(E)
}
\bigg).
\]
\end{example}
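As a quick plausibility check of the Grassmann-bundle formula (an illustration of ours, not part of the paper), take \(X\) a point and \(E\) the trivial bundle of rank \(n=4\), so that \(s_{1/t_{i}}(E)=1\). For \(d=2\) and \(f=(\xi_{1}+\xi_{2})^{4}\), with \(\sigma_{1}=\xi_{1}+\xi_{2}\) the hyperplane class, the formula should return the classical Schubert-calculus number \(\int_{\G(2,4)}\sigma_{1}^{4}=2\). A short script with a hand-rolled polynomial type (all names are our own):

```python
# Polynomials in t1, t2 represented as {(deg_t1, deg_t2): coefficient} dicts.
def poly_mul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = (e1[0] + e2[0], e1[1] + e2[1])
            r[e] = r.get(e, 0) + c1 * c2
    return r

def poly_pow(p, n):
    r = {(0, 0): 1}
    for _ in range(n):
        r = poly_mul(r, p)
    return r

s = {(1, 0): 1, (0, 1): 1}     # t1 + t2 (plays the role of sigma_1)
v = {(1, 0): 1, (0, 1): -1}    # t1 - t2 (the Vandermonde factor)

# Right-hand side of the theorem with n = 4, d = 2 and trivial E:
# the coefficient of t1^(n-1) * t2^(n-2) = t1^3 * t2^2 in f * (t1 - t2).
integrand = poly_mul(poly_pow(s, 4), v)
deg = integrand.get((3, 2), 0)
print(deg)  # expected: 2
```

The value \(2\) is the number of lines meeting four general lines in \(\mathbb P^{3}\), i.e.\ the degree of \(\G(2,4)\) in its Pl\"ucker embedding, so the coefficient extraction is consistent with classical Schubert calculus.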
\subsection{Symplectic groups}
Let \(E\to X\) be a rank \(2n\) vector bundle equipped with a non-degenerate symplectic form \(\omega\colon E\otimes E\to L\) (with values in a certain line bundle \(L\to X\)). We say that a subbundle \(S\) of \(E\) is isotropic if \(S\) is a subbundle of its symplectic complement \(S^{\omega}\), where
\[
S^{\omega}
:=
\{w\in E\mid \forall v\in S\colon \omega(w,v)=0\}.
\]
Let \(1\leq d_{1}<\cdots<d_{m}=d\leq n\) be a sequence of integers.
We denote by \(\pi\colon\F^{\omega}(d_{1},\dots,d_{m})(E)\to X\) the bundle of flags of isotropic subspaces of dimensions \(d_{1},\dots,d_{m}\) in the fibers of \(E\).
On \(\F^{\omega}(d_{1},\dots,d_{m})(E)\), there is a universal flag \(U_{d_{1}}\subsetneq\cdots\subsetneq U_{d_{m}}\) of subbundles of \(\pi^{\ast}E\), where \(\mathrm{rk}(U_{d_{k}})=d_{k}\).
For \(i=1,\dots,d\), set \(\xi_{i}=-c_{1}(U_{d+1-i}/U_{d-i})\).
\medskip
\begin{theorem}
\label{thm:gysin-C}
With the above notation,
for
\(
f(\xi_{1},\dots,\xi_{d})
\in
A^{\bullet}(\F^{\omega}(d_{1},\dots,d_{m})(E)),
\)
one has
\[
\pi_{\ast}f(\xi_{1},\dots,\xi_{d})
=
\Big[
{t_{1}}^{e_{1}}\cdots{t_{d}}^{e_{d}}
\Big]
\bigg(
{
\textstyle
f(t_{1},\dots,t_{d})\,
\prod\limits_{1\leq i<j\leq d}
(c_{1}(L)+t_{i}+t_{j})
(t_{i}-t_{j})
\prod\limits_{1\leq i \leq d}
s_{1/t_{i}}(E)
}
\bigg),
\]
where for \(j=d-d_{k}+i\) with \(i=1,\dots,d_{k}-d_{k-1}\) (setting \(d_{0}:=0\)),
we denote
\(
e_{j}
=
2n-i
\).
\end{theorem}
\medskip
\begin{example}
For the symplectic Grassmann bundle \(\pi\colon\F^{\omega}(d)(E)\to X\), where \(\omega\) has values in a trivial line bundle, one has
\[
\pi_{\ast}f(\xi_{1},\dots,\xi_{d})
=
\Big[{\textstyle\prod\limits_{i=1}^{d}t_{i}^{2n-i}}\Big]
\bigg(
{\textstyle
f(t_{1},\dots,t_{d})
\prod\limits_{1\leq i< j\leq d}
(t_{i}^{2}-t_{j}^{2})
\prod\limits_{i=1}^{d}
s_{1/t_{i}}(E)
}
\bigg).
\]
\end{example}
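For a plausibility check of this example (ours, not from the paper), take \(X\) a point and \(E\) trivial of rank \(2n=4\) with \(d=n=2\): then \(\F^{\omega}(2)(E)\) is the Lagrangian Grassmannian \(LG(2,4)\), a quadric threefold of degree \(2\), and with \(f=(\xi_{1}+\xi_{2})^{3}\) the formula should return \(\int\sigma_{1}^{3}=2\):

```python
# Polynomials in t1, t2 represented as {(deg_t1, deg_t2): coefficient} dicts.
def poly_mul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = (e1[0] + e2[0], e1[1] + e2[1])
            r[e] = r.get(e, 0) + c1 * c2
    return r

def poly_pow(p, n):
    r = {(0, 0): 1}
    for _ in range(n):
        r = poly_mul(r, p)
    return r

s = {(1, 0): 1, (0, 1): 1}      # t1 + t2
q = {(2, 0): 1, (0, 2): -1}     # t1^2 - t2^2, the symplectic factor

# Right-hand side with 2n = 4, d = 2 and trivial E (Segre series = 1):
# the coefficient of t1^(2n-1) * t2^(2n-2) = t1^3 * t2^2.
integrand = poly_mul(poly_pow(s, 3), q)
deg = integrand.get((3, 2), 0)
print(deg)  # expected: 2
```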
\subsection{Orthogonal groups}
Let \(E\to X\) be a vector bundle of rank \(2n\) or \(2n+1\) equipped with a non-degenerate orthogonal form \(Q\colon E\otimes E\to L\) (with values in a certain line bundle \(L\to X\)). We say that a subbundle \(S\) of \(E\) is isotropic if \(S\) is a subbundle of its orthogonal complement \(S^{\perp}\), where
\[
S^{\perp}
:=
\{w\in E\mid \forall v\in S\colon Q(w,v)=0\}.
\]
Let \(1\leq d_{1}<\cdots<d_{m}=d\leq n\) be a sequence of integers. We denote by \(\pi\colon \F^{Q}(d_{1},\dots,d_{m})(E)\to X\) the bundle of flags of isotropic subspaces of dimensions \(d_{1},\dots,d_{m}\) in the fibers of \(E\). On \(\F^{Q}(d_{1},\dots,d_{m})(E)\), there is a universal flag \(U_{d_{1}}\subsetneq\cdots\subsetneq U_{d_{m}}\) of subbundles of \(\pi^{\ast}E\), where \(\mathrm{rk}(U_{d_{k}})=d_{k}\).
For \(i=1,\dots,d\), set \(\xi_{i}=-c_{1}(U_{d+1-i}/U_{d-i})\).
\medskip
\begin{theorem}
\label{thm:gysin-BD}
With the above notation,
for
\(
f(\xi_{1},\dots,\xi_{d})
\in
A^{\bullet}(\F^{Q}(d_{1},\dots,d_{m})(E)),
\)
one has
\begin{eqnarray*}
&&\pi_{\ast}f(\xi_{1},\dots,\xi_{d})
= \\ &&\qquad
\Big[
{t_{1}}^{e_{1}}\cdots{t_{d}}^{e_{d}}
\Big]
\bigg(
{
\textstyle
f(t_{1},\dots,t_{d})\,
\prod\limits_{1\leq i \leq d}
(2t_{i}+c_{1}(L))
\prod\limits_{1\leq i<j\leq d}
(c_{1}(L)+t_{i}+t_{j})
(t_{i}-t_{j})
\prod\limits_{1\leq i \leq d}
s_{1/t_{i}}(E)
}
\bigg),
\end{eqnarray*}
where for \(j=d-d_{k}+i\) with \(i=1,\dots,d_{k}-d_{k-1}\) (setting \(d_{0}:=0\)),
we denote
\(
e_{j}
=
\mathrm{rk}(E)-i
\).
\end{theorem}
\medskip
Note that, if the rank is \(2n\) and \(d=n\), we consider \emph{both} of the two isomorphic connected components of the flag bundle. Thus, if one is interested in only one of the two components, the result should be divided by \(2\). When \(c_{1}(L)=0\), this produces the usual coefficient \(2^{n-1}\).
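A minimal sanity check of the odd orthogonal case (our illustration, not part of the paper): take \(X\) a point and \(E\) trivial of rank \(3\) with a split quadratic form and \(c_{1}(L)=0\), \(d=n=1\). Then \(\F^{Q}(1)(E)\) is a plane conic \(Q\subset\P(E)\cong\mathbb P^{2}\), and with \(f=\xi\) the formula should give \(\int_{Q}\xi=\deg Q=2\):

```python
# One-variable polynomials as {degree: coefficient} dicts.
def poly_mul(p, q):
    r = {}
    for d1, c1 in p.items():
        for d2, c2 in q.items():
            r[d1 + d2] = r.get(d1 + d2, 0) + c1 * c2
    return r

f = {1: 1}        # f(t) = t
factor = {1: 2}   # the (2t + c1(L)) factor, with c1(L) = 0

# Segre series of the trivial bundle is 1, so the integrand is 2t^2;
# we need the coefficient of t^(rk E - 1) = t^2.
integrand = poly_mul(f, factor)
deg = integrand.get(2, 0)
print(deg)  # expected: 2
```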
\section{Universal Gysin formulas for Kempf--Laksov flag bundles}
\label{se:schubert}
In this section, we give Gysin formulas for Kempf--Laksov flag bundles, which are desingularizations of Schubert bundles in Grassmann bundles. We also extend the results to the symplectic setting. The orthogonal cases will be treated elsewhere.
\subsection{General linear groups}
Let \(E\to X\) be a rank \(n\) vector bundle on a variety \(X\) with a reference flag of bundles
\(E_{1}\subsetneq\cdots\subsetneq E_{n}=E\) on it, where \(\mathrm{rk}(E_{i})=i\).
Let \(\pi\colon\G_{d}(E)=\F(d)(E)\to X\) be the Grassmann bundle of subspaces of dimension \(d\) in the fibers of \(E\).
For any partition \(\lambda\subseteq(n-d)^{d}\), one defines the \textsl{Schubert bundle} \(\varpi_{\lambda}\colon\Omega_{\lambda}(E_{\bullet})\to X\) in \(\G_{d}(E)\) over the point \(x\in X\)
by
\begin{equation}
\label{eq:def_omega}
\Omega_{\lambda}(E_{\bullet})(x)
:=
\{
V\in \G_{d}(E)(x)
\colon
\dim(V\cap E_{n-d-\lambda_{i}+i}(x))\geq i,
\mbox{ for }i=1,\dots,d
\}.
\end{equation}
We denote by
\[
(\nu_{1},\dots,\nu_{d})
:=
(n-d-\lambda_{d}+d,\dots,n-d-\lambda_{1}+1)
\]
the dimensions, in reverse order, of the spaces of the reference flag involved in the definition of \(\Omega_{\lambda}(E_{\bullet})\).
The partition \(\nu\) is a strict partition, and furthermore \(d+1-i\leq\nu_{i}\leq\nu_{1}=n-\lambda_{d}\leq n\) for any \(i\).
Note that the above definition of \(\Omega_{\lambda}(E_{\bullet})\) can be restated using \(\nu\) with the conditions
\begin{equation}
\label{eq:condition_nu}
\dim(V\cap E_{\nu_{i}}(x))\geq d+1-i,\mbox{ for }i=1,\dots,d.
\end{equation}
For a strict partition \(\mu\subseteq(n)^d\) with \(d\) parts,
consider the flag bundle \(\vartheta_{\mu}\colon F_{\mu}(E_{\bullet})\to X\) defined over the point \(x\in X\) by
\begin{equation}
\label{eq:def_f}
F_{\mu}(E_{\bullet})(x)
:=
\Big\{
0\subsetneq V_{1}\subsetneq\cdots\subsetneq V_{d} \in \F(1,\dots,d)(E)(x)
\colon
V_{d+1-i}\subseteq E_{\mu_{i}}(x),
\mbox{ for }i=1,\dots,d
\Big\}.
\end{equation}
Such bundles \(\vartheta_{\mu}\), introduced in \cite{KL}, will be called \textsl{Kempf--Laksov flag bundles}.
These appear naturally as desingularizations of Schubert bundles.
For a partition \(\lambda\subseteq(n-d)^{d}\), defining \(\nu\) as above, by (\ref{eq:condition_nu}) the forgetful map \(\F(1,\dots,d)(E)\to\G_{d}(E)\) induces a birational morphism \(F_{\nu}(E_{\bullet})\to\Omega_{\lambda}(E_{\bullet})\). On the \textsl{Schubert cell} defined over the point \(x\in X\) by
\[
\mathring\Omega_{\lambda}(E_{\bullet})(x)
:=
\Big\{
V\in \G_{d}(E)(x)
\colon
\dim(V\cap E_{\nu_{i}}(x))= d+1-i,\mbox{ for } i=1,\dots,d
\Big\},
\]
which is open and dense in \(\Omega_{\lambda}(E_{\bullet})\), the inverse map is
\(
V\mapsto (V\cap E_{\nu_{d}}(x),\dots, V\cap E_{\nu_{1}}(x))
\).
It establishes a desingularization of \(\Omega_{\lambda}(E_{\bullet})\) (see~\cite{KL}).
Consider the sequence of projective bundles
\begin{equation}\label{sequence}
F_\mu(E_\bullet)=\P(E_{\mu_1}/U_{d-1}) \to \P(E_{\mu_2}/U_{d-2}) \to \cdots \to \P(E_{\mu_{d-1}}/U_{1}) \to \P(E_{\mu_d})\,,
\end{equation}
where \(U_{d-i+1}/U_{d-i}\) is the universal line bundle on \(\P(E_{\mu_i}/U_{d-i})\). Set \(\xi_i=-c_1(U_{d-i+1}/U_{d-i})\), \(i=1,\dots,d\).
\smallskip
Let \(f\) be a polynomial in \(d\) variables with coefficients in \(A^{\bullet}(X)\).
\medskip
\begin{theorem}
\label{thm:gysin-KL-A}
With the above notation, one has
\[
(\vartheta_{\mu})_{\ast} f(\xi_1,\ldots,\xi_d)
=
\Big[t_{1}^{\mu_{1}-1}\cdots t_{d}^{\mu_{d}-1}\Big]
\left(
{\textstyle
f(t_{1},\dots,t_{d})
\prod\limits_{1\leq i<j\leq d}(t_{i}-t_{j})
\prod\limits_{1\leq i\leq d}s_{1/t_{i}}(E_{\mu_{i}})
}
\right).
\]
\end{theorem}
\medskip
A proof of this theorem is based on (\ref{sequence}) and (\ref{eq:fund_f}).
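As a small consistency check (ours, not from \cite{DP2}): take \(X\) a point, \(E=\mathbb C^{4}\) with the coordinate flag \(E_{i}=\mathbb C^{i}\), \(d=2\) and \(\lambda=(1,0)\), so \(\mu=\nu=(4,2)\). The Schubert variety \(\Omega_{\lambda}(E_{\bullet})\subseteq\G_{2}(E)\) is then the Schubert divisor, of dimension \(3\) and Pl\"ucker degree \(2\), and \(\vartheta_{\mu}\) maps \(F_{\mu}(E_{\bullet})\) birationally onto it; with \(f=(\xi_{1}+\xi_{2})^{3}\) and trivial Segre series the formula should return that degree:

```python
# Polynomials in t1, t2 represented as {(deg_t1, deg_t2): coefficient} dicts.
def poly_mul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = (e1[0] + e2[0], e1[1] + e2[1])
            r[e] = r.get(e, 0) + c1 * c2
    return r

def poly_pow(p, n):
    r = {(0, 0): 1}
    for _ in range(n):
        r = poly_mul(r, p)
    return r

s = {(1, 0): 1, (0, 1): 1}     # t1 + t2, playing the role of sigma_1
v = {(1, 0): 1, (0, 1): -1}    # t1 - t2

# Right-hand side with mu = (4, 2) and trivial Segre series:
# coefficient of t1^(mu_1 - 1) * t2^(mu_2 - 1) = t1^3 * t2 in f * (t1 - t2).
integrand = poly_mul(poly_pow(s, 3), v)
deg = integrand.get((3, 1), 0)
print(deg)  # expected: 2
```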
\subsection{Symplectic groups}
Let \(E\to X\) be a rank \(2n\) vector bundle over a variety \(X\), equipped with a non-degenerate symplectic form \(\omega\colon E\otimes E\to L\) with values in a line bundle \(L\to X\).
For \(d\in\{1,\dots,n\}\), let \(\G_{d}^{\omega}(E)=\F^{\omega}(d)(E)\) be the Grassmann bundle of isotropic \(d\)-planes in the fibers of \(E\).
Let
\[
0=E_{0}\subsetneq E_{1}\subsetneq\cdots\subsetneq E_{n}=E_{n}^{\omega}\subsetneq\cdots\subsetneq E_{0}^{\omega}=E
\]
be a reference flag of isotropic subbundles of \(E\) and their symplectic complements, where \(\mathrm{rk}(E_{i})=i\). For \(i=1,\dots,n\), we set \(E_{n+i}:= E_{n-i}^{\omega}\).
For a partition \(\lambda\subseteq(2n-d)^{d}\), one defines the Schubert cell \(\mathring{\Omega}_{\lambda}(E_{\bullet})\) in \(\G_{d}^{\omega}(E)\) over the point \(x\in X\) by the conditions
\[
\mathring{\Omega}_{\lambda}(E_{\bullet})(x)
:=
\big\{
V\in\G_{d}^{\omega}(E)(x)
\colon
\dim\big(V\cap E_{2n-d+i-\lambda_{i}}(x)\big)=i,
\mbox{ for }i=1,\dots,d
\big\}.
\]
Denote by \(\nu_{d+1-i}:=2n-d+i-\lambda_{i}\) the dimension of the reference space appearing in the \(i\)th condition.
A partition indexing the Schubert cell \(\mathring\Omega_{\lambda}\) must satisfy the conditions \(\nu_{i}+\nu_{j}\neq 2n+1\) for all \(i\neq j\)
(see \cite[p. 174]{P}, where this is shown for \(d=n\), and for arbitrary \(d\) the argument is the same).
For such partitions one defines the Schubert bundle \(\varpi_{\lambda}\colon \Omega_{\lambda}\to X\) as the Zariski-closure of \(\mathring{\Omega}_{\lambda}\), given over a point \(x\in X\) by the conditions
\[
\Omega_{\lambda}(E_{\bullet})(x)
:=
\big\{
V\in\G_{d}^{\omega}(E)(x)
\colon
\dim\big(V\cap E_{2n-d+i-\lambda_{i}}(x)\big)\geq i,
\mbox{ for }i=1,\dots,d
\big\}.
\]
For a strict partition \(\mu\subseteq(2n)^{d}\) with \(d\) parts, such that \(\mu_{i}+\mu_{j}\neq 2n+1\) for all \(i,j\), we define the \textsl{isotropic Kempf--Laksov bundle} \(\vartheta_{\mu}\colon F_{\mu}(E_{\bullet})\to X\) over the point \(x\in X\) by
\[
F_{\mu}(E_{\bullet})(x)
:=
\big\{
0\subsetneq V_{1}\subsetneq\dots\subsetneq V_{d}\in \F^{\omega}(1,\dots,d)(E)(x)
\colon
V_{d+1-i}\subseteq E_{\mu_{i}}(x)
\big\}.
\]
Note that as in the previous section, \(F_{\nu}(E_{\bullet})\) is birational to \(\Omega_{\lambda}(E_{\bullet})\), but here it is not smooth in general.
Let \(U_{i}\) stand for the restriction to \(F_{\mu}(E_{\bullet})\) of the rank \(i\) universal bundle on \(\F(1,\ldots,d)(E)\).
Set \(\xi_{i}=-c_{1}(U_{d-i+1}/U_{d-i})\), for \(i=1,\ldots,d\).
\smallskip
Let \(f\) be a polynomial in \(d\) variables with coefficients in \(A^{\bullet}(X)\).
\medskip
\begin{theorem}
\label{thm:gysin-KL-C}
With the above notation, one has
\begin{eqnarray*}
&&
(\vartheta_{\mu})_{\ast} f(\xi_1,\dots,\xi_d)
=
\\
&&
\hskip15mm
\big[t_{1}^{\mu_{1}-1}\cdots t_{d}^{\mu_{d}-1}\big]
\Big(
{\textstyle
f(t_{1},\dots,t_{d})
\prod\limits_{1\leq i<j\leq d}\!
(t_{i}-t_{j})
\prod\limits_{{\scriptstyle 1\leq i<j\leq d\atop\scriptstyle\mu_{i}+\mu_{j}>2n+1}}\hskip-3.5mm
(c_{1}(L)+t_{i}+t_{j})
\prod\limits_{1\leq j\leq d}\!
s_{1/t_{j}}(E_{\mu_{j}})
}
\Big).
\end{eqnarray*}
\end{theorem}
\medskip
A proof of this theorem will appear in a separate publication.
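As a small consistency check of Theorem~\ref{thm:gysin-KL-C} (our illustration, not part of the announced proof): take \(X\) a point, \(E=\mathbb C^{4}\) a trivial symplectic bundle (\(c_{1}(L)=0\)) with the coordinate isotropic flag, \(d=2\) and \(\mu=(4,2)\). Here \(\mu_{1}+\mu_{2}=6>2n+1=5\), so the pair \((1,2)\) contributes the factor \(t_{1}+t_{2}\). The image Schubert variety is the hyperplane section of the quadric threefold \(\G_{2}^{\omega}(E)\), of dimension \(2\) and degree \(2\); with \(f=(\xi_{1}+\xi_{2})^{2}\) the formula should return \(2\):

```python
# Polynomials in t1, t2 represented as {(deg_t1, deg_t2): coefficient} dicts.
def poly_mul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = (e1[0] + e2[0], e1[1] + e2[1])
            r[e] = r.get(e, 0) + c1 * c2
    return r

def poly_pow(p, n):
    r = {(0, 0): 1}
    for _ in range(n):
        r = poly_mul(r, p)
    return r

s = {(1, 0): 1, (0, 1): 1}     # t1 + t2
v = {(1, 0): 1, (0, 1): -1}    # t1 - t2

# f * (t1 - t2) * (t1 + t2), with trivial Segre series; we extract the
# coefficient of t1^(mu_1 - 1) * t2^(mu_2 - 1) = t1^3 * t2.
integrand = poly_mul(poly_mul(poly_pow(s, 2), v), s)
deg = integrand.get((3, 1), 0)
print(deg)  # expected: 2
```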
% Source: https://arxiv.org/abs/1807.06283
% Title: Tropical Fano Schemes
\begin{abstract}
We define a tropical version $\F_d(\trop X)$ of the Fano scheme $\F_d(X)$ of a projective variety $X\subseteq \mathbb P^n$ and prove that $\F_d(\trop X)$ is the support of a polyhedral complex contained in $\trop \Grp(d,n)$. In general $\trop \F_d(X)\subseteq \F_d(\trop X)$, but we construct linear spaces $L$ such that $\trop \F_1(X)\subsetneq \F_1(\trop X)$ and show that for a toric variety $\trop \F_d(X)=\F_d(\trop X)$.
\end{abstract}
\section{Introduction}
The classical Fano scheme of a projective variety $X\subseteq \mathbb P^n$ is the fine moduli space parametrising linear spaces contained in $X$. It is denoted by $\F_d(X)$, with $d$ the dimension of the linear spaces, and is a subscheme of the Grassmannian
$\Grp(d,n)$ of $d$-dimensional subspaces of $\mathbb P^n$. Fano schemes have been intensively studied because of
their geometric properties. Gino Fano \cite{Fano} first introduced these schemes and mostly considered the case of hypersurfaces. In the 1970s
these schemes were used to prove results on the irrationality of cubic threefolds \cite{CG,MURRE}. Recently there has been renewed interest in Fano schemes, not only in algebraic geometry
\cite{I-C,I-Z,I-S,I} but also in machine learning \cite{L-K} and geometric complexity theory \cite{Mulm}.
In this paper we study a tropical version of the Fano scheme. We investigate the structure of this tropical object and its relations with the classical $\F_d(X)$.
The first way of obtaining a tropical version of $\F_d(X)$ is to consider its tropicalization inside $\operatorname{trop^{ext}} \Grp(d,n)$. The points of $\operatorname{trop^{ext}} \F_d(X)$ are in correspondence with the tropicalizations of the classical linear spaces contained in $X$. However it is not true in general that a tropicalized linear space that lies in $\operatorname{trop^{ext}} X$ is the tropicalization of a classical linear space in $X$. A famous example of this is in \cite{Vig}, where Vigeland proves that there are smooth surfaces in $\mathbb P^3$ of degree $3$ whose tropicalization contains infinitely many lines. Since there are only $27$ lines on each classical surface, we deduce that these infinitely many tropical lines cannot all come from tropicalizations of classical lines.
This leads us to define the second tropical version of $\F_d(X)$ to be the set of tropicalized linear spaces of dimension $d$ contained in $\operatorname{trop^{ext}} X$. We call this the \textit{ tropical Fano scheme} and we denote it by $\F_d(\operatorname{trop^{ext}} X)$. We take the first steps in studying the structure and the properties of this object that can also be used to investigate the classical Fano scheme.
\begin{introteo}\label{theorem1}
Let $X$ be a projective variety in ${\mathbb P}^n$. Then the tropical Fano scheme $\F_d(\operatorname{trop^{ext}} X)$ is a polyhedral complex whose support is contained in $\operatorname{trop^{ext}} \Grp(d,n)$. Moreover if $\operatorname{trop^{ext}} X$ is a fan then $\F_d(\operatorname{trop^{ext}} X)$ is a fan.
\end{introteo}
The two tropical versions of the Fano scheme come from two different constructions. The first is strictly linked to the algebraic variety and to its classical Fano scheme while the other only depends on the tropical variety $\operatorname{trop^{ext}} X$. However we immediately observe that
\begin{equation}\label{cont}
\operatorname{trop^{ext}} \F_d(X)\subseteq\F_d(\operatorname{trop^{ext}} X)
\end{equation}
and since Theorem \ref{theorem1} allows us to define a dimension for $\F_d(\operatorname{trop^{ext}} X)$, we obtain a bound for the dimension of $\F_d(X)$.
A natural question arises:
\begin{quest}\label{question}
For which varieties $X$ do we have $\operatorname{trop^{ext}} \F_d(X)=\F_d(\operatorname{trop^{ext}} X)$?
\end{quest}
We start by looking at the simplest algebraic varieties: linear subspaces of $\mathbb P^n$. We then analyse the case of toric varieties embedded in $\mathbb P^n$ via monomial maps. These are two examples where the tropicalization can be easily described. For a linear space $L$ the tropicalization is computed from the matroid associated to $L$. On the other hand a monomial map can be tropicalized to a linear map from $\mathbb R^r$ to $\mathbb R^n$ and its image is the tropicalization of the toric variety associated to the monomial map (\cite[Corollary 3.2.13]{M-S}).
\begin{introteo}\label{introte}
\begin{enumerate}
\item Let $n\ge 5$. If $L$ is a generic $2$-dimensional plane in $\mathbb P^{n}$ then $$\operatorname{trop^{ext}} \F_1(L)\subsetneq \F_1(\operatorname{trop^{ext}} L).$$
\item If $X$ is a toric variety in $\mathbb P^n$ then $\F_d(\operatorname{trop^{ext}} X)=\operatorname{trop^{ext}} \F_d(X)$.
\end{enumerate}
\end{introteo}
The paper is structured as follows. In Section \ref{sec1} we define the tropical Fano scheme and we give a rigorous statement of Theorem \ref{theorem1} (Theorem \ref{Fanstructure} and Corollary \ref{fanProperty}). We study the case of linear spaces in Section \ref{counter}. We prove the first part of Theorem \ref{introte} in Theorem \ref{counterexample} and then use it to prove the strict containment in (\ref{cont}) for a generic hypersurface.
In Section \ref{toricv} we analyse the case of toric varieties and we prove the second part of Theorem \ref{introte} (Theorem \ref{equality}).
Finally in Section \ref{proofThm} we study the structure of $\F_d(\operatorname{trop^{ext}} X)$.
\subsection*{Acknowledgements}
The author would like to thank Diane Maclagan for useful suggestions and a close reading,
Nathan Ilten, Paolo Tripoli,
Mar\'ia Ang\'elica Cueto and Annette Werner for helpful discussions and Melody Chan for her valuable comments that lead to the improvement of the final version of this paper.
The author was supported by EPSRC grant 1499803 and partially by LOEWE research unit USAG.
\section{Definitions of $\F_d(\operatorname{trop^{ext}} X)$}\label{sec1}
In this section we set notation and define the tropical Fano Scheme $\F_d(\operatorname{trop^{ext}} X)$ of the tropicalization of a projective variety $ X\subseteq \mathbb P^n$.\\
Let $\Bbbk $ be a field with a surjective valuation $\mathfrak{v}:\Bbbk^* \to \mathbb R$ (cf. Remark \ref{nonsurval}) and let $T^m$ be the torus $(\Bbbk^*)^{m+1}/\Bbbk^*$ contained in $\mathbb P^m$.
The tropical projective space $\operatorname{trop^{ext}} \mathbb P^m$ is $(\overline{\mathbb R}^{m+1}\setminus \{(\infty,\ldots,\infty )\})/\mathbb R \textbf 1$ where $\overline{\mathbb R}$ denotes $\mathbb R\cup\{\infty\}$ and $\mathbb R\textbf 1$ is the linear space spanned by the vector $(1,\ldots ,1)$. Let $O$ be a $T^m$-orbit of $\mathbb P^m$. This is the locus of points in $\mathbb P^m$ where $x_i=0$ for every $i$ in some subset $I$ of the coordinates and $x_i\neq 0$ for every $i\notin I$.
Its tropicalization $\mathcal O :=\operatorname{trop^{ext}} O$ is the locus of points $(x_0,\ldots,x_m)$ in
$\operatorname{trop^{ext}} \mathbb P^m$ where $x_i=\infty$ if and only if $i\in I$. We refer to $ \mathcal O$ as an orbit of $\operatorname{trop^{ext}} \mathbb P^m$.
For any projective variety $Y\subseteq \mathbb P^m$ the tropicalization $\operatorname{trop^{ext}} Y$ is given by the union of $\operatorname{trop^{ext}} Y\cap \mathcal O:=\operatorname{trop^{ext}} (Y\cap O)$ where $O$ is the unique orbit of $\mathbb P^m$ such that $\operatorname{trop^{ext}} O=\mathcal O$ (see Section 6 in \cite{M-S}). If $Y$ is irreducible and $O$ is such that $\dim \overline{Y\cap O}=\dim Y$ then $ Y\subseteq \overline O$ and $\operatorname{trop^{ext}} Y=\overline{\operatorname{trop^{ext}} Y\cap \mathcal O}$ in $\operatorname{trop^{ext}} \mathbb P^m$ \cite[Theorem 6.2.18]{M-S}.\\
Let $\Grp(d,n)$ be the Grassmannian parametrising $d$-dimensional projective subspaces in $\mathbb P^n$. We consider it embedded via the Pl\"ucker map into $\mathbb{P}^{\binom{n+1}{d+1}-1}$. Its tropicalization $\operatorname{trop^{ext}} \Grp(d,n)\subseteq \operatorname{trop^{ext}} \mathbb{P}^{\binom{n+1}{d+1}-1} $ parametrises tropicalized linear spaces of dimension $d$ in $\operatorname{trop^{ext}} \mathbb P^n$ (\cite[Theorem 3.8]{Speyer}, \cite[Theorem 4.3.17 and Remark 4.4.2]{M-S},\cite{CC}). Hence it is possible to associate to each point $p$ of $\operatorname{trop^{ext}} \Grp(d,n)$ a unique tropicalized linear space which we denote by $\Gamma_p$.
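For a concrete feel for membership in $\operatorname{trop^{ext}} \Grp(d,n)$ (an illustration of ours, not part of the paper): a point of $\operatorname{trop^{ext}} \Grp(1,3)$ with finite coordinates must satisfy the tropical three-term Pl\"ucker relation, i.e.\ the minimum of $p_{01}+p_{23}$, $p_{02}+p_{13}$, $p_{03}+p_{12}$ is attained at least twice. The sample valuation vectors below are illustrative assumptions:

```python
def is_trop_pluecker_g13(p):
    """Check the three-term tropical Pluecker relation for trop G(1,3).

    p: dict mapping index pairs (i, j), i < j, to real valuations p_{ij}.
    The minimum of the three terms must be attained at least twice.
    """
    terms = [p[(0, 1)] + p[(2, 3)],
             p[(0, 2)] + p[(1, 3)],
             p[(0, 3)] + p[(1, 2)]]
    m = min(terms)
    return sum(1 for t in terms if t == m) >= 2

# Illustrative valuation vectors (our own choices, not from the paper).
p_ok  = {(0, 1): 0, (0, 2): 1, (0, 3): 1, (1, 2): 1, (1, 3): 1, (2, 3): 2}
p_bad = {(0, 1): 0, (0, 2): 1, (0, 3): 2, (1, 2): 2, (1, 3): 1, (2, 3): 0}

print(is_trop_pluecker_g13(p_ok))   # expected: True (min 2, attained thrice)
print(is_trop_pluecker_g13(p_bad))  # expected: False (min 0, attained once)
```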
\begin{n}
Given two tropical varieties $\operatorname{trop^{ext}} X,\operatorname{trop^{ext}} Y$ we write
$\operatorname{trop^{ext}} X\subseteq \operatorname{trop^{ext}} Y$ for the containment of the support of $\operatorname{trop^{ext}} X$ in the support of $\operatorname{trop^{ext}} Y$.
\end{n}
\begin{deff}
The \textit{tropical Fano scheme} is the set $\F_d(\operatorname{trop^{ext}} X)\subseteq \operatorname{trop^{ext}} \Grp(d,n) $ defined by $$ \F_d(\operatorname{trop^{ext}} X ):=\{p\in \operatorname{trop^{ext}} \Grp(d,n) : \Gamma_p\subseteq \operatorname{trop^{ext}} X \}. $$
\end{deff}
In Section \ref{proofThm} we prove the following results:
\begin{thm}\label{Fanstructure}
Let $X$ be a projective variety in ${\mathbb P}^n$ and $\mathcal O$ be an orbit of $\operatorname{trop^{ext}} \mathbb P^{\binom{n+1}{d+1}-1}$. Then $\F_d(\operatorname{trop^{ext}} X)\cap \mathcal O$ is a polyhedral complex whose support is contained in the intersection $\operatorname{trop^{ext}} \Grp(d,n)\cap \mathcal O$.
\end{thm}
\begin{cor}\label{fanProperty}
Consider a non-empty intersection $\F_d(\operatorname{trop^{ext}} X)\cap \mathcal O$
and let $\mathcal O'$ be the unique orbit of $\operatorname{trop^{ext}} \mathbb P^n$ such that $\overline{\Gamma_p\cap \mathcal O'}=\Gamma_p$ for all $p\in \F_d(\operatorname{trop^{ext}} X)\cap \mathcal O$. Then $\F_d(\operatorname{trop^{ext}} X)\cap \mathcal O$ is a fan if $\operatorname{trop^{ext}} X\cap \mathcal O' $ is a fan.
\end{cor}
\begin{remark}
Note that $\operatorname{trop^{ext}} \F_d(X)$ does not have the same property described in Corollary \ref{fanProperty}. There are varieties $X\subseteq \mathbb P^n$ such that $\operatorname{trop^{ext}} (X\cap T^n)$ is a fan but $\operatorname{trop^{ext}} \F_d(X)\cap \operatorname{trop^{ext}} T^{\binom{n+1}{d+1}-1}$ is not. In the next section we give an explicit example of this (Example \ref{nonfan}).
\end{remark}
\section{Linear spaces and generic hypersurfaces}\label{counter}
In this section we show that there exist linear spaces and hypersurfaces for which the containment $\operatorname{trop^{ext}} \F_1(X)\subseteq \F_1(\operatorname{trop^{ext}} X)$ is strict.
In Theorem \ref{counterexample} we prove that if $n\ge 5$ and $L$ is a generic plane in $\mathbb P^n$ then there exists a tropical line in $\operatorname{trop^{ext}} L$ that is not realizable in $L$. We then compute an explicit example of a plane $L\subseteq \mathbb P^5$ with this property and we show that $\dim \operatorname{trop^{ext}} \F_1(L)<\dim \F_1(\operatorname{trop^{ext}} L)$.
Finally in Proposition \ref{LinearObstruction}
we prove that the containment is strict for a \textit{general} hypersurface $X$ whose tropicalization has the same support as a tropical hyperplane.
\begin{thm}\label{counterexample}
Let $n\ge 5$. There exists a semi-algebraic set in $\Grp(2,n)$ whose points are planes $L\subseteq \mathbb P^{n}$ such that $\operatorname{trop^{ext}} \F_1(L)\subsetneq \F_1(\operatorname{trop^{ext}} L)$.
\end{thm}
A semi-algebraic subset of an algebraic variety $X$ is a subset of $X$ that can locally be defined by finitely many Boolean operators and inequalities of the form $\mathfrak{v} (f) \le \mathfrak{v} (g)$, where $f,g$ are algebraic functions on $X$ \cite{Nic}.
For example, every Zariski open subset of $X$ is also semi-algebraic.
\begin{proof}[Proof of Theorem \ref{counterexample}]
Let $\mathcal L$ be the standard tropical plane in
$\operatorname{trop^{ext}} \mathbb P^{n}$.
This is the closure in $\operatorname{trop^{ext}} \mathbb P^{n}$ of the tropicalization of the uniform matroid of rank $3$ in $\{0,1,\ldots,n\}$, which is the fan in $\operatorname{trop^{ext}} T^{n}\cong \mathbb R^{n+1}/\mathbb R \textbf 1$ given by the $2$-dimensional cones $\pos(\textbf e_i,\textbf e_j)$ for $0\le i<j\le n$ where $\textbf e_0,\ldots,\textbf e_{n}$ is the standard basis of $\mathbb R^{n+1}$.
Let $\Gamma^{\circ}\subseteq \operatorname{trop^{ext}} (T^{n})$ be the $1$-dimensional fan whose rays are $\pos(\textbf e_{i}+\textbf e_{j})$ where $0\le i\neq j\le n$.
The closure of $\Gamma^{\circ}$ in $\operatorname{trop^{ext}} \mathbb P^{n}$ is a tropical line $\Gamma$ and since $\Gamma^{\circ}\subseteq \mathcal L\cap \operatorname{trop^{ext}} T^{n}$ then $\Gamma$ is contained in $\mathcal L$.
Given $p\in\Grp(2,n)$ we denote by $L_p$ the associated plane in $\mathbb P^{n}$.
We show that we can find an open semi-algebraic set $\mathcal U$ in $\Grp(2,n)$ such that for every $p\in\mathcal U$ we have $\operatorname{trop^{ext}} L_p=\mathcal L$ and there does not exist $\ell\subseteq L_p$ such that $\operatorname{trop^{ext}} \ell=\Gamma$.
Firstly we have that $\operatorname{trop^{ext}} L_p=\mathcal L$ if and only if $p\in \mathcal U_1$ where $$\mathcal U_1=\{q\in \Grp(2,n): \mathfrak{v}(q)=(0,\ldots,0)\}.$$
The plane $L_p$ induces a line arrangement $\mathcal A=\{\ell_0,\ldots,\ell_{n}\}\subseteq \mathbb P^{n}$ given by the lines $\ell_i=L_p\cap \{x_i=0\}$, with $x_0,\ldots,x_{n}$ coordinates of $\mathbb P^{n}$. For two distinct indices $i,j$ we denote by $w_{i,j}$ the point of intersection of $\ell_i$ and $\ell_j$.
There exists a Zariski open set $\mathcal U_2$ of $\Grp(2,n)$ such that for every $p\in \mathcal U_2$ the line arrangement induced by $L_p$ satisfies the following conditions
\begin{itemize}
\item[(I)] $\ell_i\cap \ell_j\cap \ell_k=\emptyset$ for any three distinct indices $i,j,k$;
\item[(II)] $w_{i_0,i_1},w_{i_2,i_3},w_{i_4,i_5}$ are not collinear unless $\{i_0,i_1\}\cap \{i_2,i_3\}\cap \{i_4,i_5\}\neq \emptyset$.
\end{itemize}
Let $\mathcal U$ be the set $\mathcal U_1\cap\mathcal U_2$. We prove that if $p\in \mathcal U$ then $\Gamma$ is not realisable in $L_p$.
Suppose there exists a line $\ell\subseteq L_p$ such that $\operatorname{trop^{ext}} \ell=\Gamma$. Let
$O_{i,j}$ be the orbit of $\mathbb P^n$ where $x_i=x_j=0$; then by Theorem 6.3.4 in \cite{M-S} we have that $\ell\cap O_{k,k+1}\neq \emptyset$ for $k=0,\ldots,n-1$ if $n$ is odd and for $k=0,\ldots,n-2$ if $n$ is even. In fact we have that $\operatorname{trop^{ext}} \ell\cap \pos(\textbf e_{k},\textbf e_{k+1}) =\Gamma\cap \pos(\textbf e_{k},\textbf e_{k+1})=\pos(\textbf e_{k}+\textbf e_{k+1})$. Moreover $\ell\cap O_{k,k+1}\subseteq L_p\cap O_{k,k+1}=\ell_{k}\cap \ell_{k+1}=\{w_{k,k+1}\}$, hence $\ell\cap O_{k,k+1}=\{w_{k,k+1}\}$. This implies that $w_{0,1},\ldots,w_{n-1,n}$ (\emph{resp.} $w_{0,1},\ldots,w_{n-2,n-1}$) are collinear, and if $n\ge 5$ this is a contradiction since $L_p$ satisfies condition (II).
\end{proof}
\begin{remark}
Note that condition (I) is satisfied by all linear spaces $L_p$ with $p\in\mathcal U$. In fact
$\ell_i\cap\ell_j\cap\ell_k=\emptyset$ if and only if $\operatorname{trop^{ext}} \ell_i\cap\operatorname{trop^{ext}} \ell_j\cap\operatorname{trop^{ext}} \ell_k=\emptyset$. Since $\operatorname{trop^{ext}} L_p=\mathcal L$ we have that $\operatorname{trop^{ext}}\ell_i=\operatorname{trop^{ext}} L_p\cap \{x_i=\infty\}=\mathcal L\cap \{x_i=\infty\}$ and by definition of $\mathcal L$ the intersection $\operatorname{trop^{ext}} \ell_i\cap\operatorname{trop^{ext}} \ell_j\cap\operatorname{trop^{ext}} \ell_k$ is empty for every triple of distinct indices $i,j,k$.
\end{remark}
In the following examples we will always assume $\Bbbk$ to be the field of generalised Puiseux series $\mathbb C((\mathbb R))$ with the natural valuation associated to it (see \cite[Example 2.17]{M-S}). The explicit computations for the tropical varieties and prevarieties
are done with Tropical.m2 \cite{Trop2}, while we use \verb|Polymake| \cite{GJ00} and the \textit{Polyhedra} package in \verb|Macaulay2| \cite{M2}
to get the tree associated to the tropical lines in a cone of $\F_1(\operatorname{trop^{ext}} L)$.
\begin{es}\label{saras}
Let $L$ be the plane spanned by the rows of the following matrix
$$
\begin{pmatrix}
0& -271& -92& 0& -13& -54\\
0& -18& -7& -1& 0& -4\\
-1& 12293 & 4173 & 0 & 588& 2450
\end{pmatrix}.
$$
The line arrangement $\mathcal A=\{\ell_i=L\cap \{x_i=0\} :i=0,\ldots,5\}$ satisfies conditions (I) and (II) in the proof of Theorem \ref{counterexample}.
The coordinates of the point $p\in \Grp(2,5)$ associated to $L$ are nonzero complex numbers, hence $\mathfrak{v}(p)=(0,\ldots,0)$. This implies that $\operatorname{trop^{ext}} L=\mathcal L$, hence $p\in \mathcal U$.
The Fano scheme $\F_1(L)$ is defined by the ideal
\begin{eqnarray*}
&&(49p_{25}-37p_{35}-29p_{45},49p_{15}+40p_{35}-64p_{45},49
p_{05}-26p_{35}-27p_{45},\\
&&98p_{24}-74p_{34}+153p_{45},98p_{14}+80p_{34}+461p_{45},\\
&&98p_{04}-52p_{34}-13p_{45},98p_{23}+58p
_{34}+153p_{35},\\
&&98p_{13}+128p_{34}+461p_{35},98p_{03}+54p_{34}-13p_{35},\\
&&98p_{12}+144p_{34}+473p_{35}+73p_{45},98p_{02}+10p
_{34}-91p_{35}-92p_{45},\\
&&98p_{01}-112p_{34}-234p_{35}-271p_{45})
\end{eqnarray*}
The tropicalization $\operatorname{trop^{ext}} \F_1(L)$ is a $2$-dimensional fan in $\operatorname{trop^{ext}} \mathbb P^{9}$.
The tropical Fano scheme $\F_1(\operatorname{trop^{ext}} L)$ is the tropical prevariety defined by the tropical incidence relations associated to $\operatorname{trop^{ext}} L$ (\cite[Theorem 1]{Haque}).
These are given by the Pl\"ucker relations generating $\Grp(1,5)$ and by all tropical polynomials of the form $$\bigoplus_{i\in T\setminus S} p_{S\cup i}p_{T\setminus i} $$ where $S\subseteq \{0,1,2,3\}=T$, $|S|=1$ and $p_{T\setminus i}$ are the valuations of coordinates of $p$. In this case $p_{T\setminus i}=0$ for all $0\le i\le3$.
Computations show that while $\operatorname{trop^{ext}} \F_1(L)\cap \operatorname{trop^{ext}} T^{9}$ is a $2$-dimensional fan, the tropical Fano scheme $\F_1(\operatorname{trop^{ext}} L)\cap \operatorname{trop^{ext}} T^{9}$ is a fan with $15$ maximal cones of dimension $3$ and $30$ maximal cones of dimension $2$. The rays of $\F_1(\operatorname{trop^{ext}} L) \cap \operatorname{trop^{ext}} T^{9}$ are the same as the rays of $\operatorname{trop^{ext}} \F_1(L) \cap \operatorname{trop^{ext}} T^{9}$ and the dimension $2$ maximal cones are also cones of $\operatorname{trop^{ext}} \F_1(L) \cap \operatorname{trop^{ext}} T^{9}$.
The dimension $3$ cones of $\F_1(\operatorname{trop^{ext}} L)\cap \operatorname{trop^{ext}} T^{9}$ are the ones parametrising tropical lines whose \textit{combinatorial type} (see Section \ref{proofThm} for a definition) is a snow-flake tree. This is the graph in Figure \ref{snowflakeType} whose leaves are labelled by numbers from $0$ to $5$. The $2$-dimensional faces of these cones are contained in $\operatorname{trop^{ext}} \F_1(L)$. Their relative interiors parametrise all tropical lines not realisable in $L$. In Figure \ref{snowflake} we have an example of one of these tropical lines.
\end{es}
\begin{figure}
\definecolor{rvwvcq}{rgb}{0.30196078431372547,0.30196078431372547,2}\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.5cm,y=0.5cm]\clip(-15.64,-6.91) rectangle (12.04,7.31);\draw [line width=2pt] (-4,3)-- (-2,1);\draw [line width=2pt] (-2,1)-- (0,3);\draw [line width=2pt] (0,3)-- (0,5);\draw [line width=2pt] (0,3)-- (2,3);\draw [line width=2pt] (-4,3)-- (-4,5);\draw [line width=2pt] (-4,3)-- (-6,3);\draw [line width=2pt] (-2,1)-- (-2,-2);\draw [line width=2pt] (-2,-2)-- (-4,-4);\draw [line width=2pt] (-2,-2)-- (0,-4);\begin{scriptsize}
\draw [fill=rvwvcq] (0,5) circle (2.5pt);\draw[color=rvwvcq] (0.16,5.44) node {};
\draw [fill=rvwvcq] (2,3) circle (2.5pt);\draw[color=rvwvcq] (2.16,3.44) node {};\draw [fill=rvwvcq] (-4,5) circle (2.5pt);\draw[color=rvwvcq] (-3.84,5.44) node {};\draw [fill=rvwvcq] (-6,3) circle (2.5pt);\draw[color=rvwvcq] (-6,3.44) node {};
\draw [fill=rvwvcq] (-4,-4) circle (2.5pt);\draw[color=rvwvcq] (-4.2,-3.58) node {};\draw [fill=rvwvcq] (0,-4) circle (2.5pt);\draw[color=rvwvcq] (0.16,-3.58) node {};\end{scriptsize}
\end{tikzpicture}
\caption{A snow-flake tree in $\Grp(1,5)$. }
\label{snowflakeType}
\end{figure}
In the next example we show that it is possible to realise the line $\Gamma$ in the proof of Theorem \ref{counterexample} by choosing a particular $L'$ with $\operatorname{trop^{ext}} L'=\mathcal L$.
\begin{es}\label{L'}
Let $L'\subseteq \mathbb P^5$ be the plane spanned by the rows of the following matrix:
$$
\begin{pmatrix}
1& 3& 0& 1& 5& 7\\
0& 0 & 1 & 3& -1&-1\\
1 & 4 &-1 &-3 & 0 & 0
\end{pmatrix}
$$
The line arrangement $\mathcal A'$ associated to $L'$ satisfies condition (I) of the proof of Theorem \ref{counterexample} and we have $\operatorname{trop^{ext}} L'=\mathcal L=\operatorname{trop^{ext}} L$. However $\mathcal A'$ does not satisfy condition (II).
Let $p'_{i,j}$ be the point $L'\cap O_{i,j}$. The points $p'_{0,1},p'_{2,3}$ and $p'_{4,5}$ are collinear and the line $\ell$ passing through them is defined by the following equations
\begin{eqnarray*}
x_4-x_5=0,\quad 3x_2-x_3=0,\quad 3x_1+4x_3+12x_5=0,\quad 3x_0+x_3+3x_5=0.
\end{eqnarray*}
The tropical line $\operatorname{trop^{ext}} \ell $ is the closure in $\operatorname{trop^{ext}} \mathbb P^5$ of the fan in $\operatorname{trop^{ext}} T^5$ whose rays are $\pos(\textbf e_0+\textbf e_1),\pos(\textbf e_2+\textbf e_3),\pos(\textbf e_4+\textbf e_5)$. Hence this is the tropical line $\Gamma$ of the proof of Theorem \ref{counterexample}. We now compare $\operatorname{trop^{ext}} \F_1(L')$ with $\operatorname{trop^{ext}} \F_1(L)$.
The ideal associated to the Fano scheme $F_1(L')$ is
\begin{eqnarray*}
&&(6p_{25}-2p_{35}-p_{45},6p_{15}+8p_{35}+97p_{45},6p_{05}+2p_{35}+25p_{45},\\
&&6p_{24}-2p_{34}-p_{45},6p_{14}+8p_{34}+73p_{45},6p_{04}+2p_{34}+19p_{45},\\
&&6p_{23}+p_{34}-p_{35},6p_{13}-97p_{34}+73p_{35},\\
&&6p_{03}-25p_{34}+19p_{35},6p_{12}-31p_{34}+23p_{35}-4p_{45},\\
&&6p_{02}-8p_{34}+6p_{35}-p_{45},6p_{01}+p_{34}-p_{35}-3p_{45})
\end{eqnarray*}
and $\operatorname{trop^{ext}} \F_1(L')$ is a $2$-dimensional fan in $\operatorname{trop^{ext}} \mathbb P^5$.
Let $L$ be the plane of Example \ref{saras}. Since $\operatorname{trop^{ext}} L=\operatorname{trop^{ext}} L'$ we have $\F_1(\operatorname{trop^{ext}} L)=\F_1(\operatorname{trop^{ext}} L')$ and both $\operatorname{trop^{ext}} \F_1(L')$ and $\operatorname{trop^{ext}} \F_1(L)$ are contained in $\F_1(\operatorname{trop^{ext}} L)$. All rays of $\operatorname{trop^{ext}} \F_1(L)$ are also rays of $\operatorname{trop^{ext}} \F_1(L')$, but $\operatorname{trop^{ext}} \F_1(L')$ also has an extra ray $r$ that is not contained in $\operatorname{trop^{ext}} \F_1(L)$.
The combinatorial type of the tropical lines associated to points in $r$ is the snowflake in Figure \ref{snowflakeType}.
Moreover $r$ is the ray through the barycentre of the $3$-dimensional cone $C$ of $\F_1(\operatorname{trop^{ext}} L)$ containing $r$ in its relative interior: if $C=\pos(r_1,r_2,r_3)$ then $r=\pos(r_1+r_2+r_3)$. We have that $C\cap \operatorname{trop^{ext}} F_1(L)$ is given by the two-dimensional faces of $C$. On the other hand $C\cap \operatorname{trop^{ext}} F_1(L') $ is the union of the three cones $\pos(r_1,r_1+r_2+r_3),\pos(r_2,r_1+r_2+r_3),\pos(r_3,r_1+r_2+r_3)$ (see Figure \ref{triang}).
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=1]
\begin{scope}
\draw(-1,0)--(7.3,0);
\draw (3,0)--(0.5,4.8);
\draw(3,0)--(5.3,4.8);
\draw (3,0)--(7.3,-4);
\draw (-1,-4)--(3,0);
\node [right] at (5,4){$\textbf e_1$};
\node [right] at (0.2,4){$\textbf e_0$};
\node [right] at (6.7,0.2){$\textbf e_2$};
\node [right] at (6.8,-3.5){$\textbf e_3$};
\node [right] at (-1.2,-3.5){$\textbf e_4$};
\node [right] at (-0.9,0.2){$\textbf e_5$};
\draw[-][ultra thick ,red] (3,0)--(5.5,-1);
\draw[-][ultra thick ,red] (5.5,-1)--(6.8,-1);
\draw[-][ultra thick ,red] (5.5,-1)--(6.8,-2);
\draw [-][ultra thick ,red] (3,0)--(3,2.5);
\draw[-][ultra thick ,red] (3,2.5)--(4,4.6);
\draw[-][ultra thick ,red] (3,2.5)--(2,4.6);
\draw [-][ultra thick ,red] (3,0)--(0.5,-1);
\draw[-][ultra thick ,red] (0.5,-1)--(-0.8,-1);
\draw[-][ultra thick ,red] (0.5,-1)--(-0.8,-2);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{A tropical line contained in the three cones $\pos(\textbf e_0,\textbf e_3),\pos(\textbf e_2,\textbf e_5),\pos(\textbf e_1,\textbf e_4)$ of $\operatorname{trop^{ext}} L\subseteq \mathbb R^5\cong \mathbb R^6/\mathbb R \textbf 1 $ as in Example \ref{saras}.}
\label{snowflake}
\end{figure}
\end{es}
\begin{figure}
\begin{center}
\definecolor{ccqqqq}{rgb}{0.8,0,0}\definecolor{uuuuuu}{rgb}{0.26666666666666666,0.26666666666666666,0.26666666666666666}\definecolor{ududff}{rgb}{0.30196078431372547,0.30196078431372547,1}\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.6cm,y=0.6cm]\clip(-13,0) rectangle (4,6);\draw [line width=2pt] (-5.3,5.08)-- (-8.12,0.18);\draw [line width=2pt] (-8.12,0.18)-- (-2.4664755214562497,0.18780836132788226);\draw [line width=2pt] (-2.4664755214562497,0.18780836132788226)-- (-5.3,5.08);
\draw [line width=1pt,dashed] (-5.28,2.08)-- (-8.12,0.18);
\draw [line width=1pt,dashed] (-5.28,2.08)-- (-5.3,5.08);
\draw [line width=1pt,dashed] (-5.28,2.08)-- (-2.4664755214562497,0.18780836132788226);\draw (-5.14,5.51) node {$r_1$};\draw (-8.7,0.35) node {$r_2$};\draw (-1.8,0.35) node {$r_3$};
\draw (0,4.5) node {$C\cap \F_1(\operatorname{trop^{ext}} L) $};\draw [line width=2pt](2.5,4.5)--(3.7,4.5);
\draw (0,2.7) node {$C\cap\F_1(\operatorname{trop^{ext}} L')$};
\draw [line width=1pt,dashed](2.5,2.7)--(3.7,2.7);
\end{tikzpicture}
\end{center}
\caption{ A section of the cone $C\subseteq \F_1(\operatorname{trop^{ext}} L)$ as in Example \ref{L'}. }
\label{triang}
\end{figure}
In Example \ref{nonfan} we exhibit a plane $L''$ such that $\operatorname{trop^{ext}} (L''\cap T^n)$ is a fan but $\operatorname{trop^{ext}} (\F_1(L'')\cap T^{\binom{n+1}{2}-1})$ is not. This shows that Proposition \ref{fanProperty} does not hold if we replace $\F_1(\operatorname{trop^{ext}} X)$ with $\operatorname{trop^{ext}}\F_1(X)$.
\begin{es}\label{nonfan}
Let $L''$ be the plane in $ \mathbb P^5$ spanned by the rows of the following matrix
$$
M=\begin{pmatrix}
1&1&0&t&1&1\\
1&t+1&1&2&t&0\\
5&8&6&9&7&10
\end{pmatrix}.
$$
We have that $\operatorname{trop^{ext}} L''=\operatorname{trop^{ext}} L$, with $L$ the plane in Example \ref{saras}, and
the line arrangement $\mathcal A''=\{ L''\cap O_{i,j}:0\le i<j\le 5\}$ satisfies condition (I) of the proof of Theorem \ref{counterexample}. Moreover the points $p''_{01}=L''\cap O_{0,1},p''_{23}=L''\cap O_{2,3}$ and $p''_{45}=L''\cap O_{4,5}$ are not collinear.
The line spanned by the first two rows of $M$ tropicalizes to a tropical line whose combinatorial type is a snowflake tree whose pairs of leaves are labelled by $i$ and $i+1$ for $i=0,2,4$. The corresponding point in $\operatorname{trop^{ext}} F_1( L'')$ is $\textbf e_{01}+\textbf e_{23}+\textbf e_{45}$ in $\mathcal O=\operatorname{trop^{ext}} (\Grp(1,5)\cap T^{\binom{6}{2}-1})\subseteq \mathbb R^{\binom{6}{2}}/\mathbb R \textbf 1$, where the $\textbf e_{ij}$'s denote the standard basis vectors of $\mathbb R^{\binom{6}{2}}$.
We want to show that $\operatorname{trop^{ext}}\F_1(L'')$ is not a fan by proving that the ray $\pos(\textbf e_{01}+\textbf e_{23}+\textbf e_{45})$ is not contained in $\operatorname{trop^{ext}} (\F_1( L'')\cap T^{\binom{6}{2}-1})$.
Suppose by contradiction that $\pos(\textbf e_{01}+\textbf e_{23}+\textbf e_{45})\subseteq \operatorname{trop^{ext}} (\F_1( L'')\cap T^{\binom{6}{2}-1})$; then the closure of this ray in $\operatorname{trop^{ext}} \mathbb P^{\binom{6}{2}-1}$ contains a point $Q$ at infinity, and $Q$ is contained in $\operatorname{trop^{ext}} \F_1(L'')$.
The point $Q$ is in the orbit $\mathcal O=\{[p_{ij}]\in \operatorname{trop^{ext}} \mathbb P^{\binom{6}{2}-1}: p_{01}=p_{23}=p_{45}=\infty \}$ and $Q_{ij}=0$ for $ij\neq 01,23,45$. The tropical line $\Gamma_Q$ is given by the fan in $\operatorname{trop^{ext}}\mathbb P^5$ with rays $\pos(\textbf e_0+\textbf e_1),\pos(\textbf e_2+\textbf e_3)$ and $\pos(\textbf e_4+\textbf e_5)$. Moreover $\Gamma_Q$ is not realizable in $L''$, since otherwise the points $p''_{01},p''_{23}$ and $p''_{45}$ would be collinear.
\end{es}
Another instance where the containment $\operatorname{trop^{ext}} \F_1(X)\subseteq \F_1(\operatorname{trop^{ext}} X)$ is strict is the case of \textit{general} hypersurfaces whose tropicalization has the same support as a tropical linear space. A hypersurface of degree $d$ in $\mathbb P^n$ is \textit{general} if its Fano scheme of lines has dimension $2n-d-3$ (see \cite[Theorem 8]{barth}).
\begin{prop}\label{LinearObstruction}
If $X$ is a general hypersurface of degree $d>1$ and the tropicalization $\operatorname{trop^{ext}} X$ has the same support as a tropical linear space then $\operatorname{trop^{ext}}(\F_1(X))\subsetneq \F_1(\operatorname{trop^{ext}} X).$
\end{prop}
\begin{proof}
If $L$ is an $(n-1)$-dimensional linear space then the dimension of $\F_1(L)$ is $\dim \Grp(2,n)=2n-4$. By hypothesis we have that $\F_1(\operatorname{trop^{ext}} X)=\F_1(\operatorname{trop^{ext}} L)$ and $\dim \F_1(\operatorname{trop^{ext}} L)\ge \dim \operatorname{trop^{ext}} F_1(L)=2n-4$.
On the other hand the dimension of $\operatorname{trop^{ext}} F_1(X)$ is equal to the dimension of $\F_1(X)$, which is $2n-d-3$. If $\operatorname{trop^{ext}} F_1(X)=\F_1(\operatorname{trop^{ext}} X)$ then we would have $2n-d-3 \ge 2n-4$, which fails for $d>1$.
\end{proof}
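To see the inequality fail concretely, here is a worked instance of the dimension count (our own illustration, not from the text): take $n=3$ and $d=2$, so that $X$ is a general quadric surface in $\mathbb P^3$.

```latex
% n = 3, d = 2: X a general quadric surface in P^3.
\dim \F_1(X) = 2n-d-3 = 2\cdot 3-2-3 = 1
% (the lines on a smooth quadric form two rulings, each a copy of P^1),
% while for the tropical side
\dim \F_1(\operatorname{trop^{ext}} L) \ge 2n-4 = 2,
% so trop F_1(X) = F_1(trop X) would force 1 >= 2, a contradiction.
```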
\section{Toric varieties }\label{toricv}
In this section we look at Fano schemes of toric varieties. We prove that for these varieties the tropical Fano scheme is equal to the tropicalization of the classical Fano scheme.\\
Consider a toric variety $X$ associated to a set of lattice points $\mathcal A=\{\textbf a_0,\ldots,\textbf a_n\}$ with $\mathcal A \subseteq \mathbb{Z}^m \times \{1\} $ and denote by $A$ the matrix whose columns are the points in $\mathcal A$.
The variety $X$ has a natural embedding in $\mathbb P^n$ given by a monomial map
$\phi_{\mathcal A}:(\Bbbk^*)^m\times \Bbbk^*\to \mathbb P^n$ (see \cite[Section 2.1]{Cox2}).
We denote the closure of the image of this map by $X_{\mathcal A}$. The matrix $A$ also defines a map $\operatorname{trop^{ext}} (\phi_{\mathcal A}):\mathbb {R}^{m+1}\to \mathbb {R}^{n+1}$. By \cite[Theorem 3.2.13]{M-S} we have that $\operatorname{trop^{ext}}(X_{\mathcal A}\cap T^n)\subseteq \mathbb R^{n+1}/\mathbb R \textbf 1$ is the quotient by $\mathbb R \textbf 1 $ of the image of $\operatorname{trop^{ext}} (\phi_{\mathcal A})$ which is the classical linear space spanned by the rows of $A$. Since the embedding of the toric variety only depends on the row span of $A$ (\cite[Proposition 1.1.9]{Cox2}) it is possible to recover the ideal defining $X_{\mathcal A}$ from $\operatorname{trop^{ext}} (X_{\mathcal A}\cap T^n)$.
\begin{es}\label{firstEs}
Let $X_{\mathcal A}\subseteq \mathbb P^3$ be the toric variety associated to the set of lattice points $\mathcal A=\{(1,1,1),(0,0,1),(0,-1,1),(1,0,1)\}$. The matrix $A$ is $$\begin{pmatrix}
1&0&0&1\\
1&0&-1&0\\
1&1&1&1
\end{pmatrix}
$$ and the ideal defining $X_{\mathcal A}$ is $(xz-yw)$. The tropicalization $\operatorname{trop^{ext}} (X_{\mathcal A} \cap T^3)$ is the quotient by $\mathbb R \textbf 1$ of $\{(x,y,z,w):x+z=y+w\}$ and this is equal to the quotient by $\mathbb R \textbf 1$ of the linear span of the rows of $A$.
\end{es}
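The kernel computation behind this example can be verified with a short script. The following sketch (ours, not part of the paper; it assumes \texttt{sympy} is available) checks that $l=(1,-1,1,-1)$ lies in $\ker A$ and that the binomial $xz-yw$ vanishes on the image of the monomial map $\phi_{\mathcal A}$.

```python
# Our own verification sketch for this example (not part of the paper).
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3', positive=True)
t = (t1, t2, t3)

# Columns of A are the lattice points (1,1,1), (0,0,1), (0,-1,1), (1,0,1).
A = sp.Matrix([[1, 0, 0, 1],
               [1, 0, -1, 0],
               [1, 1, 1, 1]])

# l = (1,-1,1,-1) is a relation among the columns, i.e. an element of ker A.
l = sp.Matrix([1, -1, 1, -1])
assert A * l == sp.zeros(3, 1)

# Monomial map phi_A: the j-th coordinate is t^(j-th column of A).
x, y, z, w = [sp.Mul(*[t[i]**A[i, j] for i in range(3)]) for j in range(4)]

print(sp.simplify(x*z - y*w))  # prints 0: the binomial vanishes on the image
```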
By contrast with the case of linear spaces we show that for toric varieties the tropical Fano scheme is the same as the tropicalization of the classical Fano scheme.
\begin{thm}\label{equality}
Let $X=X_{\mathcal A}$ be a toric variety. Then $\F_d(\operatorname{trop^{ext}} X)=\operatorname{trop^{ext}} \F_d(X)$.
\end{thm}
We prove this result by showing that for each tropicalized linear space $\Gamma\subseteq \operatorname{trop^{ext}} X$ there exists a linear space $\ell\subseteq X$ that tropicalizes to it.
We explicitly construct $\ell$ using \textit{Cayley structures} on $\mathcal A$. We use results from \cite[Section 3]{I-Z}, where the authors prove that for each $s$-Cayley structure $\pi$
there exists a subvariety $Z_{\pi}$ of $ \F_{s}(X_{\mathcal A})$, and from $\pi$ it is also possible to deduce equations for the linear spaces parametrised by $Z_{\pi}$. \\
Given a set of $n+1$ lattice points $\mathcal A$ in $\mathbb Z^m\times \{1\}$, let $L$ be the kernel of the map defined by the matrix $A$ and let $\textbf e_i$ be the standard basis vectors of $\mathbb R^{n+1}$. If $\textbf l\in L$ we can write $\textbf l=l^+-l^-$ where $l^+=\sum_{l_i>0} l_i\textbf e_i$ and $l^-=\sum_{l_i<0} -l_i\textbf e_i$. We have that $\textbf l\in L$ if and only if $ \sum_i l_i \textbf a_i=0$.
The ideal of the toric variety $X_{\mathcal A}\subseteq \mathbb P^n$ is generated by binomials of the form $\textbf x^{l^+}-\textbf x^{l^-}=\prod_{l_i>0} x_i^{l_i} -\prod_{l_i<0} x_i^{-l_i} $ with $\textbf l\in L$ (\cite[Proposition 1.1.9]{M-S}).
A face $\tau $ of $\mathcal A$ is the intersection of a face of $\conv (\mathcal A)$ with $\mathcal A$. Denote by $\Delta_s$ the standard basis $\{\textbf e_0,\ldots,\textbf e_s\}$ of $\mathbb Z^{s+1}$.
\begin{deff}
An $s$-Cayley structure on $\tau$ is a surjective map $\pi : \tau \to \Delta_s$
such that whenever
$\textbf l\in L$ satisfies $l_i=0$ for all $i$ with $\textbf a_i\notin\tau$ (so that the relation $\sum_{l_i\neq 0} l_i\textbf a_i=0$ is supported on $\tau$), we have
$\sum_{ l_i\neq 0} l_i\pi(\textbf a_i)=0$, or equivalently
$\sum_{l_i>0}l_i \pi(\textbf a_i)=\sum_{l_i<0}-l_i \pi(\textbf a_i)$.
\end{deff}
\begin{es}
Consider the set of lattice points $\mathcal A$ as in Example \ref{firstEs}.
A $1-$Cayley structure is given by $\pi:\mathcal A\to \mathbb Z^2$ with $\pi((0,0,1))=\pi((0,-1,1))=\textbf e_0$ and $\pi((1,0,1))=\pi((1,1,1))=\textbf e_1.$
An example of a surjective map $\pi:\mathcal A\to \Delta_1$ that is not a Cayley structure is given by $\pi:\mathcal A\to \mathbb Z^2$ with $\pi((1,1,1))=\pi((0,-1,1))=\textbf e_0$ and $\pi((0,0,1))=\pi((1,0,1))=\textbf e_1$. We can see that $\textbf l=(1,-1,1,-1)$ is in $L$, hence $(1,1,1)-(0,0,1)+(0,-1,1)-(1,0,1)=0$, but if we apply $\pi$ we get $2\textbf e_0-2\textbf e_1=0$, which is a contradiction.
\end{es}
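The failure of the second map in this example can be checked mechanically. The sketch below (ours, not part of the paper; plain Python, with the relation $\textbf l=(1,-1,1,-1)$ taken from the text) evaluates $\sum_i l_i\pi(\textbf a_i)$ for both maps.

```python
# Our own verification sketch (not part of the paper).
points = [(1, 1, 1), (0, 0, 1), (0, -1, 1), (1, 0, 1)]
l = [1, -1, 1, -1]          # a relation: sum_i l_i * a_i = 0
assert all(sum(l[i] * p[k] for i, p in enumerate(points)) == 0 for k in range(3))

e0, e1 = (1, 0), (0, 1)

# The valid 1-Cayley structure: (0,0,1), (0,-1,1) -> e0 and (1,0,1), (1,1,1) -> e1.
pi_good = {points[0]: e1, points[1]: e0, points[2]: e0, points[3]: e1}
# The map that fails: (1,1,1), (0,-1,1) -> e0 and (0,0,1), (1,0,1) -> e1.
pi_bad = {points[0]: e0, points[1]: e1, points[2]: e0, points[3]: e1}

def image_of_relation(pi):
    # Compute sum_i l_i * pi(a_i) as a vector in Z^2.
    return tuple(sum(l[i] * pi[p][k] for i, p in enumerate(points)) for k in range(2))

print(image_of_relation(pi_good))  # (0, 0): the Cayley condition holds
print(image_of_relation(pi_bad))   # (2, -2): the condition fails
```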
We now prove that given a tropicalized linear space in $\operatorname{trop^{ext}} (X\cap T^n)$ we can associate a Cayley structure on $\mathcal A$ to it.
Let $\Gamma$ be a $d$-dimensional tropicalized linear space in $\operatorname{trop^{ext}} T^n$ and let $M_{\Gamma}$ be the matroid associated to it. This is the matroid on $\{0,1,\ldots, n\}$ whose bases are the sets $\{i_0,\ldots,i_d\}$ such that the corresponding Pl\"ucker coordinate $p_{i_0,\ldots,i_d}$ is not zero. Note that this matroid does not have loops, that is circuits of one element.\\
The recession fan of $\Gamma$ is the fan whose cones are $\pos(\textbf {e}_{F_1},\ldots, \textbf {e}_{F_{d+1}})+\mathbb R \textbf {1}$ where $\emptyset\neq F_1\subsetneq \ldots\subsetneq F_{d+1}$ is a
maximal chain of flats of $M_{\Gamma}$, $\textbf e_{F_i}=\sum_{j\in F_i}\textbf {e}_j$,
and $(\textbf{e}_j)_k=1 $ for $k=j$ and $(\textbf{e}_j)_k=0$ otherwise.
\begin{prop}\label{Cay}
Let $X_{\mathcal A}\subseteq \mathbb P^n$ be a toric variety and let $\Gamma$ be a tropicalized linear space contained in $\operatorname{trop^{ext}} (X_{\mathcal A}\cap T^n)$.
If $M_{\Gamma}$ has $m+1$ non-empty minimal flats then there exists an $m$-Cayley structure on $\mathcal A$.
\end{prop}
The following is a technical lemma which will be used for the proof of Proposition \ref{Cay}.
\begin{lemma}\label{lines}
Let $\Gamma \subseteq \operatorname{trop^{ext}} T^n$ be a tropicalized linear space and $\{F^0_1,\ldots, F^m_1\}$ the set of non-empty minimal flats of
$M_{\Gamma}$. Then
\begin{itemize}
\item [(i)] for every $i\in\{0,\ldots,n\}$ there exists a unique $F_1^j$ such that $i\in F_1^j$;
\item [(ii)] $\bigcup _{j=0}^m F^j_1=\{0,\ldots,n\}$.
\end{itemize}
\end{lemma}
\begin{proof}
For (i) we observe that if $i\in F_1^j\cap F_1^k$ with $j\neq k$ then $F_1^j\cap F_1^k$ is a non-empty flat contained in both $F_1^j$ and $F_1^k$, contradicting their minimality.
For (ii), if there existed $i\in \{0,\ldots,n\}$ not in $\bigcup _{j=0}^m F^j_1$ then $i$ could not lie in any rank-$1$ flat; since the closure of $\{i\}$ is such a flat whenever $i$ is not a loop, $i$ would be a loop, a contradiction since $M_{\Gamma}$ has no loops.
\end{proof}
\begin{proof}[Proof of Proposition \ref{Cay}]
Let $\Gamma$ be a tropicalized linear space contained in $\operatorname{trop^{ext}} (X\cap T^n)$ and $F^0_1,\ldots, F^m_1$ the non-empty minimal flats of $M_{\Gamma}.$
The ray $\pos(\textbf {e}_{F^i_1}) $ of $\Gamma$ is contained in $\operatorname{trop^{ext}} ( X\cap T^n)$ for all $i$ hence the vectors $\textbf {e}_{F^0_1},\ldots,\textbf {e}_{F^m_1}$ are part of a set of generators for the linear space $\operatorname{trop^{ext}}( X\cap T^n)$. Lemma \ref{lines} implies that they are linearly independent vectors in $\mathbb R^{n+1}$.
The linear span in $\mathbb R^{n+1}/\mathbb R\textbf 1$ of $\textbf {e}_{F^0_1},\ldots,\textbf {e}_{F^m_1}$ is equal to the linear span of $\textbf {e}_{F^1_1},\ldots,\textbf {e}_{F^m_1}$ and $(1,\ldots,1)$. Hence we can assume that $\textbf {e}_{F^1_1},\ldots,\textbf {e }_{F^m_1}$ are the first $m$ rows of $A$ and $\textbf {e}_{F^0_1}$ is the unique among
$\textbf {e}_{F^0_1},\ldots,\textbf {e}_{F^m_1}$
with last coordinate equal to $1$.
The columns of $A$ are the points of $\mathcal A$ and by Lemma \ref{lines} they can be partitioned into $m+1$ sets $A_0,\ldots,A_{m}$. The set $A_i$, for $i=0,\ldots,m-1$, is given by all points whose coordinates $(p_0,\ldots,p_n)$ satisfy $p_i=1$ and $p_j=0$ for all $0\le j\neq i\le m$. The set $A_{m}$ is given by the points whose first $m$ coordinates are zero. We have that $A_0\cup \ldots\cup A_m=\mathcal A$. In fact
by Lemma \ref{lines}, for any $i$ there exists a unique $\textbf e_{F_1^j}$ such that $(\textbf e_{F_1^j})_i=1$. This implies that for each point $(p_0,\ldots,p_n)$ in $\mathcal A$ (equivalently, each column of $A$) there exists a unique $0\le i\le m$ such that $p_i=1$. Since each $\textbf {e}_{F_1^j}$ has at least one coordinate equal to $1$ we have that $A_0\cup\ldots\cup A_{m-1}\subseteq \mathcal A$. Moreover since the first $m$ rows of $A$ are $\textbf {e}_{F^1_1},\ldots,\textbf {e}_{F^m_1}$ we have that the last column of $A$ has its first $m$ entries equal to zero. Hence $A_{m}\neq \emptyset$ and $\mathcal A=\bigcup _{i=0}^m A_i$.
We define $\pi:\mathcal A \to \Delta_m$ to be the map that sends the points in $A_r$ to $\textbf e_{r}\in\mathbb Z^{m+1}$. This map is an $m$-Cayley structure on $\mathcal A$.
In fact let $\textbf l\in L$ and write $\textbf l=l^{+}-l^{-}$ with $l^+=\sum_{l_i>0}l_i\textbf e_i$ and $l^-=\sum_{l_i<0}-l_i\textbf e_i$; then we have
$$
\sum_{l_i>0,\textbf a_i\in A_0} l_i \textbf a_i+\ldots+\sum_{l_i>0,\textbf a_i\in A_m} l_i \textbf a_i=\sum_{l_i<0,\textbf a_i\in A_0} -l_i \textbf a_i+\ldots+\sum_{l_i<0,\textbf a_i\in A_m} -l_i \textbf a_i.
$$
We need to prove that
\begin{equation*}
\sum_{l_i>0,\textbf a_i\in A_0} l_i \pi(\textbf a_i)+\ldots+\sum_{l_i>0,\textbf a_i\in A_m} l_i \pi(\textbf a_i)=\sum_{l_i<0,\textbf a_i\in A_0} -l_i \pi(\textbf a_i)+\ldots+\sum_{l_i<0,\textbf a_i\in A_m} -l_i \pi(\textbf a_i).
\label{eq1}
\end{equation*}
By definition of $\pi$ we have that
$$
\sum_{l_i>0,\textbf a_i\in A_0} l_i \pi(\textbf a_i)+\ldots+\sum_{l_i>0,\textbf a_i\in A_m} l_i \pi(\textbf a_i)=(\sum_{l_i>0,\textbf a_i\in A_0} l_i ,\ldots,\sum_{l_i>0,\textbf a_i\in A_m} l_i )
$$
and
$$
\sum_{l_i<0,\textbf a_i\in A_0} -l_i \pi(\textbf a_i)+\ldots+\sum_{l_i<0,\textbf a_i\in A_m} -l_i \pi(\textbf a_i)=(\sum_{l_i<0,\textbf a_i\in A_0} -l_i ,\ldots,\sum_{l_i<0,\textbf a_i\in A_m}- l_i ).
$$
Consider $(p_0,\ldots,p_n)=\sum_{l_i>0,\textbf a_i\in A_0} l_i \textbf a_i+\ldots+\sum_{l_i>0,\textbf a_i\in A_m} l_i \textbf a_i=\sum_{l_i<0,\textbf a_i\in A_0} -l_i \textbf a_i+\ldots+\sum_{l_i<0,\textbf a_i\in A_m} -l_i \textbf a_i$.
The first coordinate $p_0$ is given by the first coordinate of $\sum_{l_i>0,\textbf a_i\in A_0} l_i \textbf a_i$ that is $\sum_{l_i>0,\textbf a_i\in A_0} l_i $ or equivalently by the first coordinate of $\sum_{l_i<0,\textbf a_i\in A_0} -l_i \textbf a_i$ that is $\sum_{l_i<0,\textbf a_i\in A_0} -l_i $.
From this we obtain $\sum_{l_i>0,\textbf a_i\in A_0} l_i= \sum_{l_i<0,\textbf a_i\in A_0} -l_i $. In the same way we have $\sum_{l_i>0,\textbf a_i\in A_1} l_i= \sum_{l_i<0,\textbf a_i\in A_1} -l_i ,\;\ldots\;,\sum_{l_i>0,\textbf a_i\in A_{m-1}} l_i= \sum_{l_i<0,\textbf a_i\in A_{m-1}} -l_i$.
Since $p_n=\sum _{l_i>0}l_i=\sum _{l_i<0}-l_i$ we can also deduce that $$\sum_{l_i>0,\textbf a_i\in A_m} l_i= \sum_{l_i<0,\textbf a_i\in A_m} -l_i $$ therefore
$$
(\sum_{l_i>0,\textbf a_i\in A_0} l_i ,\ldots,\sum_{l_i>0,\textbf a_i\in A_m} l_i )=(\sum_{l_i<0,\textbf a_i\in A_0} -l_i ,\ldots,\sum_{l_i<0,\textbf a_i\in A_m} -l_i ).
$$
\end{proof}
\begin{es}\label{excay}
Let $\mathcal A$ be the set given by the columns of the matrix $A$ where
$$A=\begin{pmatrix}
0&1&0&0&0\\
1&0&0&0&0\\
2&1&7&3&5\\
1&1&1&1&1\\
\end{pmatrix}.$$
The toric variety $X_{\mathcal A}$ is defined by the ideal $(x_2x_3-x_4^2)\subseteq \mathbb C[x_0,x_1,x_2,x_3,x_4]$.
The tropical line $\Gamma_1$ spanned by $(0,1,0,0,0)$ is contained in $\operatorname{trop^{ext}} (X_{\mathcal A}\cap T^4)$. In the case of tropical lines the cones $\pos(\textbf {e}_{F^0_1})+\mathbb R\textbf 1,\ldots,\pos(\textbf {e}_{F^m_1})+\mathbb R\textbf 1 $ are exactly the rays of the line. We can define a $1$-Cayley structure associated to $\Gamma_1$ by sending the set $$A_0=\{(0,1,2,1),(0,0,7,1),(0,0,3,1),(0,0,5,1)\}$$
to $\textbf e_0$ and $A_1=\{(1,0,1,1)\}$ to $\textbf e_1$.
We also notice that the tropical line $\Gamma_2$ whose rays are $\pos(1,0,0,0,0),\pos (0,1,0,0,0)$ and $\pos(-1,-1,0,0,0)$ is contained in $\operatorname{trop^{ext}} (X_{\mathcal A}\cap T^4)$. The $2$-Cayley structure associated to $\Gamma_2$ is the map sending $A_0=\{(0,1,2,1)\}$ to $\textbf e_0$, $A_1=\{(1,0,1,1)\}$ to $\textbf e_1$, and $A_2=\{(0,0,7,1),(0,0,3,1),(0,0,5,1)\}$ to $\textbf e_2$.
\end{es}
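One can check that the rays of $\Gamma_1$ and $\Gamma_2$ lie in the tropicalization of the torus part of $X_{\mathcal A}$, that is in the row span of $A$, by a rank computation. A sketch (ours, not part of the paper; it assumes \texttt{sympy}):

```python
# Our own verification sketch (not part of the paper): each ray of Gamma_1 and
# Gamma_2 lies in the row span of A.
import sympy as sp

A = sp.Matrix([[0, 1, 0, 0, 0],
               [1, 0, 0, 0, 0],
               [2, 1, 7, 3, 5],
               [1, 1, 1, 1, 1]])

rays = [(0, 1, 0, 0, 0), (0, -1, 0, 0, 0),      # the two rays of Gamma_1
        (1, 0, 0, 0, 0), (0, 1, 0, 0, 0),       # rays of Gamma_2
        (-1, -1, 0, 0, 0)]

# A vector lies in the row span of A iff appending it does not raise the rank.
assert all(A.col_join(sp.Matrix([list(r)])).rank() == A.rank() for r in rays)
print("all rays lie in the row span of A")
```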
\begin{proof}[Proof of Theorem \ref{equality}]
We will prove that given a tropicalized linear space $\Gamma\subseteq \operatorname{trop^{ext}} X$ there exists a linear space $\ell '$ in $X$ such that $\operatorname{trop^{ext}} \ell'=\Gamma$.
Assume that $\Gamma$ is in $\operatorname{trop^{ext}} (X\cap O)$ with $O$ an orbit of $\mathbb P^n$. We can consider $Y=\overline{X\cap O}$ as a subvariety of $\overline { O}\cong \mathbb P^s$ with $s=\dim \overline{O}$. The variety $Y$ is also a toric variety and we denote by $\mathcal A'$ the set of lattice points associated to it.
Suppose $M_{\Gamma}$ has $l+1$ non-empty minimal flats. By Proposition \ref{Cay} there exists an $l$-Cayley structure $\pi$ on $\mathcal A'$.
Let $Z_{\pi}$ be the subvariety of $\F_l(Y)$ associated to $\pi$ (see \cite[Section 3, Section 4]{I-Z}). This is the closed torus orbit of the linear space $L$ generated by $\textbf v_0,\ldots,\textbf v_l\in \mathbb R^{s+1}$ where
\begin{equation*}
(\textbf v_j)_i=\left \{\begin{matrix} 1\text{ if } \pi(\textbf a_i)=\textbf e_j\\
0 \text{ else }
\end{matrix}\right. .
\end{equation*}
Let $\Gamma'$ be the translation of $\Gamma$ to the origin. There exists a point $p$ in $\operatorname{trop^{ext}} Y$ such that $\Gamma=\Gamma'+p$.
The vectors $\textbf {e}_{F^0_1},\ldots,\textbf {e}_{F^l_1}$ generate a linear space $\mathcal L$ and $\Gamma\subseteq \mathcal L+p$. We have that $\mathcal L=L$. In fact by definition of the $(\textbf v_j)_i$ and by the construction of $\pi$ in Proposition \ref{Cay} the matrix
\begin{displaymath} \begin{pmatrix}
\textbf v_1\\
\vdots\\
\textbf v_{l}
\end{pmatrix}
\end{displaymath}
is equal to the submatrix of $A$ given by the first $l$ rows.
The equations of $L$ are $\codim L $ binomials of type $x_i-x_j$ for pairs $(i,j)$ with $0\le i\neq j\le m$,
hence $\operatorname{trop^{ext}} L=L$. Moreover there exists $t\in T^{\dim Y} $ such that $\operatorname{trop^{ext}} (t\cdot L)=\mathcal L+p$.
We show that $\Gamma$ is the tropicalization of a linear space in $t\cdot L$, hence in $Y$. Using the equations of $L$ we can choose coordinates $x_{i_0},\ldots,x_{i_l} \in\{x_0,\ldots,x_s\}$, one for each class of coordinates that agree on $L$, such that the projection
$\phi=\phi_{x_{i_0},\ldots,x_{i_l} }: \mathbb P^s \to \mathbb P^l$ induces an isomorphism between $L$ and $\mathbb P^l$. Let $\phi^{-1} $ be its inverse. Since $\phi$ and $\phi^{-1}$ are linear monomial maps we have $\operatorname{trop^{ext}} (\phi)=\phi$ and $\operatorname{trop^{ext}} (\phi^{-1})=\phi^{-1}$. Consider the linear space $\phi(\Gamma')\subseteq \operatorname{trop^{ext}} \mathbb P^l$. This linear space is realizable in $\mathbb P^l$, that is, there exists a linear space $\ell'\subseteq\mathbb P^l$ such that $\operatorname{trop^{ext}} \ell'=\phi(\Gamma')$.
Now $\phi^{-1}(\ell')\subseteq L\subseteq Y$ and $\operatorname{trop^{ext}}(\phi^{-1}(\ell'))=\operatorname{trop^{ext}} (\phi^{-1})(\operatorname{trop^{ext}} (\ell'))=\Gamma'$. If we consider $\ell=t\cdot \phi^{-1}(\ell') $ then $\operatorname{trop^{ext}}(\ell )=\Gamma$.
\end{proof}
\begin{es}
Consider the toric variety $X_{\mathcal A}$ of Example \ref{excay}. We use the proof of Theorem \ref{equality} to compute the lines $\ell_1,\ell_2$ in $X_{\mathcal A}$ that tropicalize to $\Gamma_1$ and $\Gamma_2$ respectively.
The line $\ell_1$ is the line $L$ associated to the $1-$Cayley structure $\pi_1$.
Its defining equations are $x_0-x_2=0,x_2-x_3=0,x_3-x_4=0$.
The tropical line $\Gamma_2$ is contained in the linear space $L$ defined by
$x_2-x_3=0,x_3-x_4=0$. Consider the projection $\phi=\phi_{x_0,x_1,x_2}:\mathbb P^4\to \mathbb P^2$; then $\phi (\Gamma_2) $ is the tropical line in $\mathbb R^3/\mathbb R\textbf 1$ with rays $\pos(1,0,0),\pos(0,1,0),\pos(0,0,1)$ and it is the tropicalization of the line $V(x_0+x_1+x_2)$. Applying $\phi^{-1}$ we get that $\ell_2$ is defined by $(x_0+x_1+x_2,x_2x_3-x_4^2,x_3-x_4)$.
\end{es}
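The claim that $\ell_1$ and $\ell_2$ lie on $X_{\mathcal A}=V(x_2x_3-x_4^2)$ can be verified on parametrizations. The sketch below is ours, not part of the paper; the parametrizations by $a,b$ are our own choice, read off from the stated equations, and \texttt{sympy} is assumed.

```python
# Our own verification sketch (not part of the paper).  The parametrizations of
# ell_1 and ell_2 are our own choice, read off from the stated equations.
import sympy as sp

a, b = sp.symbols('a b')

# ell_1: x0 = x2 = x3 = x4 (= a), with x1 (= b) free.
p1 = {'x0': a, 'x1': b, 'x2': a, 'x3': a, 'x4': a}
# ell_2: x2 = x3 = x4 (= b) and x0 + x1 + x2 = 0, so x1 = -x0 - x2.
p2 = {'x0': a, 'x1': -a - b, 'x2': b, 'x3': b, 'x4': b}
assert sp.simplify(p2['x0'] + p2['x1'] + p2['x2']) == 0

def binomial(p):
    # the defining equation x2*x3 - x4^2 of X_A, evaluated at the point p
    return sp.expand(p['x2'] * p['x3'] - p['x4']**2)

print(binomial(p1), binomial(p2))  # prints: 0 0
```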
\section{Proof of Theorem \ref{Fanstructure} and Proposition \ref{fanProperty}}\label{proofThm}
In this section we prove Theorem \ref{Fanstructure} by showing that there exists a polyhedral structure on
each $\F_d(\operatorname{trop^{ext}} X)\cap \mathcal O$.
The key point in the proof of Theorem \ref{Fanstructure} is the identification of $\operatorname{trop^{ext}} \Grp(d,n)\cap \mathcal O$ with the subfan of the secondary fan $\Sigma$ of the matroid polytope $P_M$ (\cite[Definition 4.2.9 ]{M-S}). We see in the following paragraph that $M$ is the uniform matroid associated to $\operatorname{trop^{ext}} \Grp(d,n)\cap \mathcal O$. The cones of this subfan are the intersection of $\operatorname{trop^{ext}} \Grp(d,n)\cap \mathcal O$ with the cones of $\Sigma$ and the subdivisions associated to these cones are the \textit{matroid subdivisions} (see \cite[\S 4.4 ]{M-S} for a definition).
The space $\operatorname{trop^{ext}} \Grp(d,n)\cap \operatorname{trop^{ext}} T^{\binom{n+1}{d+1}-1}$ was first studied by Speyer and Sturmfels in \cite{Speyer} and can be identified with a subfan of the secondary fan of the matroid polytope of the uniform matroid of rank $d+1$ on $\{0,1,\ldots,n\}$ \cite[\S4.4]{M-S}. The same interpretation of $\operatorname{trop^{ext}} \Grp(d,n)\cap \mathcal O$ can be extended to the case where $\mathcal O$ is any orbit of
$ \operatorname{trop^{ext}} \mathbb P^{\binom{n+1}{d+1}-1}$. This is done in
the forthcoming paper of Cueto and Corey \cite{CC}.
In particular they show that
$$\Grp(d,n)\cap \overline O\cong\Grp(d,n')\times \prod_{j\in J} T^j$$ where $n'<n$ and $J\subseteq \mathbb N$ with $|J|<\infty$. The isomorphism between them is a map $\psi=\pi\times f$ where $\pi$ is a projection and $f$ is a monomial map. Hence it is possible to consider the tropicalization of this map to get
$$\operatorname{trop^{ext}} \Grp(d,n)\cap \overline{ \mathcal O}\cong\operatorname{trop^{ext}} \Grp(d,n')\times \prod_{j\in J} \operatorname{trop^{ext}} T^j.$$
Let $M'$ be the uniform matroid of rank $d+1$ on $\{0,1,\ldots,n'\}$. We can identify $\operatorname{trop^{ext}} \Grp(d,n)\cap \mathcal O$ with a product of a subfan of the secondary fan of $P_{M'}$ with $ \prod_{j\in J} \operatorname{trop^{ext}} T^j=\mathbb R^{\sum_{j\in J} j}/\mathbb R\textbf 1$.
This identification induces a polyhedral structure on
$\operatorname{trop^{ext}} \Grp (d,n)$ given by the union of cones $C_{T}$ where each $T$ is a different matroid subdivision.
Consider $p$ in the relative interior $C_T^{\circ}$ of $ C_T$ and the corresponding tropical linear space $\Gamma_p$. We say that the \textit{combinatorial type} of $\Gamma_p$ is $T$. If $p$ is contained in $C_T\setminus C_T^{\circ}$ then
the combinatorial type of $\Gamma_p$ is $T'$, where $T'$ is the matroid subdivision associated to a cone $C_{T'}$ in the boundary of $C_T$ such that $p\in C_{T'}^{\circ}$. Note that if $C_{T'}$ is in the boundary of $C_T$ then a cell $\sigma$ of $T$ is either equal to a cell of $T'$ or it is obtained by subdividing a cell $\sigma'$ of $T'$. In the second case all the cells in the subdivision of $\sigma'$ are cells of $T$.
For the case of $\operatorname{trop^{ext}} \Grp(1,n)$ instead of $T$ one considers the corresponding tree with $n'\le n$ labelled leaves. In fact in this case the polyhedral complex dual to the subdivision has the coarsest polyhedral structure.
In what follows we call an \textit{open polyhedron} a set of the form $P\setminus \partial P$ where $P$ is a polyhedron and $\partial P$ is its boundary. For example the open square with vertices $(0,0,1),(1,0,1),(1,0,0),$ and $(0,0,0)$ in $\mathbb R^3$ is an open polyhedron.
\begin{proof}[Proof of Theorem \ref{Fanstructure}]
We prove that $\F_d(\operatorname{trop^{ext}} X)\cap \mathcal O$ can be written as the union of finitely many polyhedra, denoted by $F_T$; the common refinement of these polyhedra then gives the polyhedral complex structure on $\F_d(\operatorname{trop^{ext}} X)\cap \mathcal O$.
There are two key points in the proof. The first is that the complement of a polyhedron is a union of open polyhedra and that the projection of an open polyhedron is an open polyhedron. The second is the description of the polyhedral structure of a tropical linear space in terms of its Pl\"ucker coordinates. We start by showing this last point.
Let $T$ be a combinatorial type of tropical linear spaces associated to the relative interior of a cone $C_T\subseteq \operatorname{trop^{ext}} \Grp(d,n)\cap \mathcal O$. Consider $p\in C_T$ then the tropical linear space $\Gamma_p$ is a subcomplex of the dual complex to a subdivision $T'$ of $P_M$, where $M$ is the uniform matroid associated to $\operatorname{trop^{ext}} \Grp(d,n)\cap \mathcal O$ and $C_{T'}$ is a face of $C_T$. This implies that
$\Gamma_p=\coprod_i C_i(p) $ and each cell $C_i(p)$
in $ \operatorname{trop^{ext}}\mathbb P^n$ has the following form
$$\{x\in \operatorname{trop^{ext}}\mathbb P^n: A(i,T) x^t\le \textbf {f}(p) \text{ and } B(i,T) x^t= \textbf {g}(p) \}$$
where $A(i,T) $ and $B(i,T)$ are matrices with entries in $ \mathbb R$ and $\textbf{f}(p),\textbf{g}(p)$ are vectors whose entries are linear forms in the coordinates of $p$, that depend only on $T$ and not on $p$. Note that if $p\in C_T\setminus C_T^{\circ}$ then $p\in C_{T'}\subset C_T$ hence some of the $C_i(p)$ might be the same. These are dual to the cell of $T'$ that is subdivided in $T$.
We are now ready to define $F_T$. This is the set
$$
F_T=\{p\in C_T : \Gamma_p\subseteq \operatorname{trop^{ext}} X \}
$$
hence
$$\F_d(\operatorname{trop^{ext}} X)\cap \mathcal O=\bigcup_{T}F_T
$$
where the union is over all combinatorial types $T$ associated to the relative interior of the maximal cones of $\operatorname{trop^{ext}} \Grp(d,n)\cap \mathcal O$.
The tropical linear space $\Gamma_p$ is contained in $\operatorname{trop^{ext}} X$ if and only if for every $i$ we have $C^{\circ}_i(p)\subseteq \operatorname{trop^{ext}} X$, that is
$$
F_T=\bigcap_i \{p\in C_T: C^{\circ}_i(p)\subseteq \operatorname{trop^{ext}} X\}
$$
where $\Gamma_p=\coprod_i C^{\circ}_i(p)$ and $C^{\circ}_i(p)$ is the relative interior of a cell $C_i(p)$ of $\Gamma_p$.
Denote by $F_{C_i}$ the set $\{p\in C_T: C^{\circ}_i(p)\subseteq \operatorname{trop^{ext}} X\}$.
We show that its complement in $C_T$ is a finite union of open polyhedra, and hence that $F_{C_i}$ is closed and a finite union of polyhedra.
Consider the set
$$
\tilde{F}_{C_i}:=\{(p,x)\in C_T\times \operatorname{trop^{ext}} \mathbb P^{n} : x\in C^{\circ}_i(p)\text{ and } x\notin \operatorname{trop^{ext}} X \}\subseteq (\operatorname{trop^{ext}} \Grp(d,n)\cap \mathcal O)\times \operatorname{trop^{ext}} \mathbb P^{n}.
$$
Firstly we observe that $x\notin\operatorname{trop^{ext}} X$ if and only if $x$ lies in the complement of every cell of $\operatorname{trop^{ext}} X$, that is
\begin{equation}\label{intsigma}
x\notin \operatorname{trop^{ext}} X \Leftrightarrow x\in \bigcap_{\sigma \text{ cell of } \operatorname{trop^{ext}} X} \sigma^{c}.
\end{equation}
The complement of a polyhedron is a finite union of open polyhedra, hence
the right-hand side of (\ref{intsigma}) is a finite union of open polyhedra.
Since $$\tilde{F}_{C_i}=\{(p,x)\in C_T\times \operatorname{trop^{ext}} \mathbb P^n :x\in C_i^{\circ}(p)\} \cap \{(p,x)\in C_T\times \operatorname{trop^{ext}} \mathbb P^n:
x\notin \operatorname{trop^{ext}} X \}$$ we obtain that $\tilde{F}_{C_i}$ is a finite union of open polyhedra. Moreover, the same holds for $\pi(\tilde{F}_{C_i})$, where $\pi$ is the projection $$\pi:(\operatorname{trop^{ext}}\Grp(d,n)\cap \mathcal O)\times \operatorname{trop^{ext}} \mathbb P^n\to \operatorname{trop^{ext}}\Grp(d,n)\cap \mathcal O.$$ We can describe $\pi(\tilde{F}_{C_i})$
in the following way
$$
\pi(\tilde{F}_{C_i})=\{p\in C_T : \exists x\in C_i^{\circ}(p) \text{ such that } x\notin \operatorname{trop^{ext}} X \}.
$$
The set $F_{C_i}$ is the complement of $\pi(\tilde{F}_{C_i})$ in $C_T$, hence it is closed and a finite union of polyhedra. This proves that $F_T$ is a finite union of polyhedra, and hence the same holds for $\F_d(\operatorname{trop^{ext}} X)\cap \mathcal O$.
\end{proof}
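The first polyhedral fact used above has a concrete combinatorial form: the complement of $\{x : Ax\le b\}$ is the disjoint union of the pieces $P_i=\{a_j\cdot x\le b_j \text{ for } j<i,\ a_i\cdot x> b_i\}$, one open piece per inequality. A small Python sketch (ours, not from the paper) illustrates this decomposition:

```python
def dot(a, x):
    return sum(ai * xi for ai, xi in zip(a, x))

def in_open_piece(halfspaces, i, x):
    """Membership in P_i = {a_j.x <= b_j for j < i} and {a_i.x > b_i}."""
    a_i, b_i = halfspaces[i]
    return (all(dot(a, x) <= b for a, b in halfspaces[:i])
            and dot(a_i, x) > b_i)

# The unit square {0 <= x <= 1, 0 <= y <= 1} as rows (a, b) of A x <= b:
square = [((-1, 0), 0), ((1, 0), 1), ((0, -1), 0), ((0, 1), 1)]

def piece_indices(x):
    return [i for i in range(len(square)) if in_open_piece(square, i, x)]

# The pieces P_i cover the complement disjointly: a point outside the square
# lies in exactly one of them, while a point inside lies in none.
assert len(piece_indices((2.0, 0.5))) == 1
assert piece_indices((0.5, 0.5)) == []
```
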
\begin{remark}\label{nonsurval}
It is not necessary for the valuation $\mathfrak{v}$ to be surjective. Let $G=\mathfrak{v}(\Bbbk)$ be the value group of $\mathfrak{v}$ and assume $G\subsetneq \mathbb R$. Then for any variety $X\subseteq \mathbb P^n$ each face of $\operatorname{trop^{ext}} X$ is a $\mathfrak{v}(\Bbbk)$-polyhedron, that is, it is defined by linear equalities and inequalities with coefficients in $\mathfrak{v}(\Bbbk)$. In particular, if $\Gamma$ is a tropical linear space then the inequalities defining its cells have coefficients in $\mathfrak{v}(\Bbbk)$. This implies that the set $F_T$ is no longer a union of polyhedra; rather, it is the intersection of such a union with $\mathfrak{v}(\Bbbk^*)^m$. Let $\mathcal O$ be an orbit of $\operatorname{trop^{ext}} \mathbb P^{\binom{n+1}{d+1}-1}$; then we define $\F_d(\operatorname{trop^{ext}} X)\cap \mathcal O$ to be the Euclidean closure of $\bigcup_{T} F_T$.
\end{remark}
The structure of the tropical Fano scheme is closely connected to the structure of the tropical variety $\operatorname{trop^{ext}} X$.
\begin{proof}[Proof of Corollary \ref{fanProperty}]
The polyhedral structure on $\F_d(\operatorname{trop^{ext}} X)\cap \mathcal O$ is the common refinement of the $F_T$. In the case in which $\operatorname{trop^{ext}} X\cap \mathcal O'$ is a fan, each $F_T$ is a finite union of cones; this can be seen from the construction of the $F_T$ in the proof of Theorem \ref{Fanstructure}. The common refinement of these cones over all $T$ then gives a fan structure on $\F_d(\operatorname{trop^{ext}} X)\cap \mathcal O$.
\end{proof}
\input{FinalArxivFanoTex.bbl}
\end{document}
\section{Appendix}
Firstly we show how to explicitly determine the tropical linear space $\Gamma_p$ from the point $p\in \operatorname{trop^{ext}}\Grp(d,n)$. This is described in Lemma \ref{coord} for $p\in\operatorname{trop^{ext}}\Grp_0(d,n)$ and then generalised to $p\in \operatorname{trop^{ext}}\Grp(d,n)\cap \mathcal O$ in Lemma \ref{gencoord}. The generalisation is based on the description in \cite{CC} of the structure of $\operatorname{trop^{ext}}\Grp(d,n)\cap \mathcal O$.
\begin{lemma}\label{coord}
Let $p$ be a point in a cone $C_T\subseteq \operatorname{trop^{ext}} \Grp_0(1,n)$. Then for every vertex $\overline V\in T$ the coordinates of the corresponding vertex $V\in \Gamma_p\subseteq \mathbb R^{n+1}/\mathbb R\textbf 1$ are given by $(f^0_{\overline V}(p_{ij}),\ldots,f^n_{\overline V}(p_{ij}))$, where the $f^i_{\overline V}$'s are linear forms. Moreover, the $f^i_{\overline V}$'s depend only on $T$.
\end{lemma}
We recall here the definition of \textit{regular subdivision} which we will use in the proof of Lemma \ref{coord}.
\begin{deff}
Let $P$ be a polytope in $\mathbb R^{s}$ with vertices $\textbf u_1,\ldots,\textbf u_r$ and let $\textbf{w}=(w_1,\ldots,w_r)\in \mathbb R^r$ be a weight vector. Consider the polytope $$P_{\textbf{w}}=\conv \{(\textbf{u}_i,w_i) :1\le i\le r \}$$ and its lower faces (those admitting an inner normal vector with positive last coordinate).
The \textit{regular subdivision} $\Delta_{\textbf w}$ of $P$ induced by $\textbf{w}$ is the polyhedral complex given by the projection of the lower faces of $P_{\textbf w}$ onto $P$.
\end{deff}
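This construction is easy to carry out computationally. The following Python sketch (ours; it assumes NumPy and SciPy, and the function name is our own) lifts the vertices, takes the convex hull, and keeps the lower facets:

```python
import numpy as np
from scipy.spatial import ConvexHull

def regular_subdivision(points, weights):
    """Cells of the regular subdivision of conv(points) induced by weights.

    Lift each vertex u_i to (u_i, w_i), take the convex hull of the lifted
    points, and keep the lower facets (outward normal with negative last
    coordinate); projecting them back down gives the cells.
    """
    lifted = np.column_stack([points, weights])
    hull = ConvexHull(lifted)
    return [frozenset(simplex.tolist())
            for eq, simplex in zip(hull.equations, hull.simplices)
            if eq[-2] < -1e-9]   # eq[-2] is the last coordinate of the normal

square = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
cells = regular_subdivision(square, weights=[0, 0, 0, 1])
# Lifting the vertex (1,1) subdivides the square into the two triangles
# {(0,0),(1,0),(0,1)} and {(1,0),(0,1),(1,1)}.
assert sorted(map(sorted, cells)) == [[0, 1, 2], [1, 2, 3]]
```
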
\begin{deff}
The \textit{dual complex} to the regular subdivision $\Delta_{\textbf w}$ of a polytope $P$ is the polyhedral complex whose faces are $\tilde{\pi}(\mathcal N (F))$, where $\mathcal N (F)=\mathcal N _{P_{\textbf w}}(F)$ is the normal cone of a lower face $F$ of $P_{\textbf w}$ and $\tilde{\pi}$ is the projection restricted to the vectors $(\textbf c,1)\in \mathcal N (F)$. Each such $\textbf c$ is dual to a face of $P_{\textbf w}$.
\end{deff}
\begin{remark}\label{compdualcom}
We have that $\textbf c$ is in $\tilde{\pi}(\mathcal N (F))$ if and only if there exists $c_0>0$ such that $(\textbf c ,1)\cdot (\textbf u_i,w_i)=c_0$ for $i\in F$ and $(\textbf c,1)\cdot (\textbf u_i,w_i)>c_0$ for $i\notin F$. This is equivalent to $(-\textbf c ,c_0)\cdot (\textbf u_i,1)=w_i$ for $i\in F$ and $(-\textbf c ,c_0)\cdot (\textbf u_i,1)<w_i$ for $i\notin F$. Hence, to obtain the dual complex to the regular subdivision of $P$, we can solve the system of equations $(-\textbf c ,c_0)\cdot (\textbf u_i,1)=w_i$, $i\in F$, for every lower face $F$ of $P_{\textbf w}$.
\end{remark}
\begin{proof}
Let $P_{U_{2,n+1}}$ be the matroid polytope associated to $\Grp_0(1,n)$, that is, the convex hull of the points $\textbf e_{i,j}=\textbf e_i+\textbf e_j$ in $\mathbb R^{n+1}$ for $0\le i<j\le n$. Then every point $p$ in $\operatorname{trop^{ext}}\Grp_0(1,n)$ induces a regular subdivision $\Delta_p$ of $P_{U_{2,n+1}}$.
The tropical line $\Gamma_p$ is the subcomplex of the dual complex $\Sigma$ to $\Delta_p$ given by the faces of $\Sigma$ whose dual faces in $\Delta_p$ are matroid polytopes of loop-free matroids on $\{0,1,\ldots,n\}$ (\cite[Lemma 4.4.7]{M-S}). Moreover, we will prove in Lemma \ref{coarsest} that the
polyhedral structure on $\Gamma_p$ induced by the dual complex is the coarsest one. Hence we have a correspondence between vertices of $\Sigma$ and non-leaf vertices of $T$. The vertices of $\Sigma$ are dual to maximal cells of $\Delta_p$, so by the definition of regular subdivision their coordinates can be computed as in Remark \ref{compdualcom}. In fact, if $F$ is the face dual to $V$ then there exists $c_{0}>0$ such that
\begin{equation}\label{sistema}
(V ,c_{0}) \cdot ( \textbf e_{ij},1)=p_{ij}\quad \quad \text{ for }\textbf e_{ij}\in F.
\end{equation}
It follows that the coordinates of $V$ are linear combinations of the $p_{ij}$; in other words they are $(f^0(p_{ij}),\ldots,f^n(p_{ij}))$ where the $f^i$'s are linear forms.
Let $q\in C_T$; then $\Delta_q=\Delta_p$ by \cite[Proposition 5.5]{M-G}, so (\ref{sistema}) remains solvable when we regard the $p_{ij}$ as parameters, and the solution is $(f^0(q_{ij}),\ldots,f^{n}(q_{ij}))$.
\end{proof}
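As a minimal instance of Lemma \ref{coord}, take $n=2$: the matroid polytope of $U_{2,3}$ admits only the trivial subdivision, so $\Gamma_p$ has a single vertex. The Python sketch below (ours, assuming NumPy) solves the system (\ref{sistema}) and confirms that the vertex coordinates are linear forms in the $p_{ij}$:

```python
import numpy as np

# Vertex of the tropical line Gamma_p in trop P^2, dual to the (trivial)
# regular subdivision of the matroid polytope of U_{2,3}.  We solve the
# system (V, c0) . (e_ij, 1) = p_ij, i.e. V_i + V_j + c0 = p_ij, over the
# three vertices e_01, e_02, e_12 of the matroid polytope.
p01, p02, p12 = 3.0, 1.0, 4.0                 # a tropical Pluecker vector

A = np.array([[1, 1, 0, 1],                   # unknowns: V0, V1, V2, c0
              [1, 0, 1, 1],
              [0, 1, 1, 1]], dtype=float)
b = np.array([p01, p02, p12])
sol, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimum-norm exact solution
V = sol[:3]

# In R^3 / R*1 the vertex is (-p12, -p02, -p01): each coordinate is a
# linear form in the p_ij, as the lemma asserts.
expected = np.array([-p12, -p02, -p01])
assert np.allclose(V - V[0], expected - expected[0])
```

The system is underdetermined only up to the lineality $\mathbb R\textbf 1$, so comparing after normalising the first coordinate is the natural check.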
\begin{thm}\label{Boundary}\cite{CC}
\begin{enumerate}
\item There exist $n'<n$ and a finite set $J\subseteq \mathbb N$ such that $$\Grp(1,n)\cap \overline O\cong\Grp(1,n')\times \prod_{j\in J} T^j$$ and the isomorphism between them is a map $\psi=\pi\times f$, where $\pi$ is a projection and $f$ is a monomial map.
\item The isomorphism $\operatorname{trop^{ext}} (\psi)$ induces a polyhedral structure $\Sigma$ on $\operatorname{trop^{ext}}\Grp(1,n)\cap \mathcal O$. Each cone of $\Sigma$ is $C_{T}:=\operatorname{trop^{ext}}(\psi)^{-1}(C_{\tilde T}\times \prod_{j\in J} \mathbb R^{j+1}/\mathbb R\textbf 1)$ where $C_{\tilde{T}}$ is a cone of $\operatorname{trop^{ext}}_0\Grp(1,n')$. The tree $T$ is obtained by relabelling the leaves of $\tilde T$ with subsets of $\{0,\ldots,n\}$.
Moreover for any $p\in C_{T}^{\circ}$ the combinatorial type of $\Gamma_p$ is $T$ and the direction of an edge of $\Gamma_p$ adjacent to the leaf labelled by $I$ is $\sum_{i\in I}\tilde {\textbf e}_i$.
\item For every point $p\in C_{T}$ with $\operatorname{trop^{ext}}(\psi)(p)=(q,(\textbf t_j)_{j\in J})$ there exists a linear map $l_{\psi}:\mathbb R^{n'+1}/\mathbb R \textbf 1\to \mathcal O'$ such that $\Gamma_p=l_{\psi}(\Gamma_q)$. In particular $l_{\psi}(y_0,\ldots,y_{n'})=(x_0,\ldots,x_n)$, where $x_l=\infty $ if $l\in L$ and otherwise $x_l=y_i+g_l(p)$ for some $i=0,\ldots,n'$, where
$g_0,\ldots,g_{n'}$ are linear forms that depend only on $\mathcal O$.
\end{enumerate}
\end{thm}
Note that for $\operatorname{trop^{ext}}_0\Grp(1,n)$ we have $O= T^{\binom{n+1}{2}-1}$, hence $ \overline O=\mathbb P^{\binom{n+1}{2}-1}$ and $\Grp(1,n)\cap \overline O\cong \Grp(1,n)$. In this case the set $J$ is empty, the matroid is $M=U_{2,n+1}$, the uniform matroid of rank $2$ on $\{0,\ldots,n\}$, the orbit $O'$ is $T^n$, and the map $l_{\psi}$ is the identity.
We give an application of Theorem \ref{Boundary} in Example \ref{rettefuori}.
\begin{es}\label{rettefuori}
Consider $\Grp(1,4)$ and the orbit of $\mathbb P^9$
$$O=\{ p_{ij}=0 \text{ if and only if } \{ij\}\in
\{ \{01\},\{02\},\{03\},\{04\},\{34\} \}\}.$$
The matroid $M$ associated to $\Grp(1,4)\cap O$ has bases $\{1,2\},\{1,3\},\{1,4\},\{2,3\},$
$\{2,4\}$. The element $0$ is a loop of $M$, hence $O'=\{(0,x_1,\ldots,x_4)\in \mathbb P^4: x_i\neq 0 \text{ for }i=1,\ldots,4\}$.
In this case $n'=2$ and $\Grp(1,4)\cap \overline O\cong \Grp(1,2)\times T^1$. The isomorphism $\psi $ is
given by $\pi\times f$ where $\pi $ is the projection onto $p_{12},p_{13},p_{23}$ and
$f(p)=(1,\frac{p_{14}}{p_{13}})$. The map $\psi$ comes from a map of matrices. In fact,
a line $l_p$ associated to $p\in \Grp(1,4)\cap \overline O$ can be identified with the span of the rows of a $2\times 5$ matrix $A_p$ whose first column is $(0,0)^t$ and whose last two columns are linearly dependent. More precisely, let $A_p^3$ and $A^4_p$ be the last two columns of $A_p$; then $A_p^4=\frac{p_{14}}{p_{13}}A^3_p$. The projection $\pi$ maps $l_p$ to the line in $\mathbb P^2$ spanned by the rows of the submatrix of $A_p$ given by the three middle columns.
Since $\operatorname{trop^{ext}}_0 \Grp(1,2)$ is a linear space, we deduce that $\operatorname{trop^{ext}} \Grp(1,4)\cap \mathcal O$ is a linear space too, and the tropical lines associated to the points of $\operatorname{trop^{ext}} \Grp(1,4)\cap \mathcal O$ are all of the same combinatorial type. This is given by a tree $T$ with three edges and one non-leaf vertex. The leaves are labelled by $1,2,\{34\}$; these are the sets of linearly dependent columns of the matrix $A_p$ associated to any $p$ in $\operatorname{trop^{ext}} \Grp(1,4)\cap \mathcal O$.
Let $p$ be a point in $\operatorname{trop^{ext}} \Grp(1,4)\cap \mathcal O$ then the map $l_{\psi}:\mathbb R^3/\mathbb R\textbf 1\to \operatorname{trop^{ext}} O'$ sends $(x_1,x_2,x_3)$ to $(\infty,x_1,x_2,x_3,x_3+p_{14}-p_{13})$. For example let $p$ be the point $(0,0,\ldots,0)$ in $\operatorname{trop^{ext}} \Grp(1,4)\cap \mathcal O$ and $\operatorname{trop^{ext}}(\psi)(p)=(q,\textbf t_1)=((0,0,0),(0,0))$. The tropical line $\Gamma_q$ is the fan in $\mathbb R^3/\mathbb R\textbf 1$ whose rays are $\pos(\textbf e_0),\pos(\textbf e_1),\pos(\textbf e_2)$. The line $\Gamma_p$ is the fan in $\operatorname{trop^{ext}} O'$ whose rays are $\pos((\infty,1,0,0,0)),\pos((\infty,0,1,0,0)),\pos((\infty,0,0,1,1))$.
If we identify $\operatorname{trop^{ext}} O'$ with $\mathbb R^4/\mathbb R\textbf 1$ we get that the rays of $\Gamma_p$ are $\pos(\textbf e_1),\pos(\textbf e_2),\pos(\textbf e_3+\textbf e_4)$.
\end{es}
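The map $l_{\psi}$ of this example is simple enough to transcribe directly; a small Python sketch (ours) checks that it sends the rays of $\Gamma_q$ to the rays of $\Gamma_p$ listed above:

```python
import math

def l_psi(y, p13=0.0, p14=0.0):
    """The map (x1, x2, x3) -> (inf, x1, x2, x3, x3 + p14 - p13)."""
    y0, y1, y2 = y
    return (math.inf, y0, y1, y2, y2 + p14 - p13)

# With p = (0, ..., 0) the rays pos(e0), pos(e1), pos(e2) of Gamma_q map to
# the rays of Gamma_p computed in the example:
assert l_psi((1, 0, 0)) == (math.inf, 1, 0, 0, 0)
assert l_psi((0, 1, 0)) == (math.inf, 0, 1, 0, 0)
assert l_psi((0, 0, 1)) == (math.inf, 0, 0, 1, 1)
```
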
% https://arxiv.org/abs/2009.09440
% The Significance Filter, the Winner's Curse and the Need to Shrink
\section{Introduction}
The long-standing debate about the role of statistical significance in research \cite{rozeboom1960fallacy,meehl1978theoretical} has recently intensified \cite{wasserstein2016asa,benjamin2018redefine,wasserstein2019moving,mcshane2019abandon,amrhein2018remove,ioannidis2019importance}. Looking back to the beginning, we find that Ronald Fisher wrote in 1926 \cite{fisher1992arrangement}:
\begin{quote}
``Personally, the writer prefers to set a low standard of significance at the 5 per cent point, and ignore entirely all results which fail to reach this level.''
\end{quote}
In other words, Fisher considered the familiar 5\% level to be quite liberal and recommended that results that fail to reach even that level can be safely ignored. Now, more than 90 years later, Fisher's advice to apply the ``significance filter'' is widely followed. Recently, Barnett and Wren \cite{barnett2019examination} collected over 968,000 confidence intervals extracted from abstracts and over 350,000 intervals extracted from the full-text of papers published in Medline (PubMed) from 1976 to 2019. We converted these to $z$-values and their distribution is shown in Figure \ref{fig:z}. The under-representation of $z$-values between -2 and 2 is striking.
\begin{figure}[htp] \centering{
\includegraphics[scale=0.8]{barnett_z.pdf}
\caption{The distribution of more than one million $z$-values from Medline (1976--2019).}\label{fig:z}
}
\end{figure}
As time and resources are always limited, it certainly makes sense to focus on significant results to avoid chasing noise. However, there is a problematic side-effect: considering only results that have reached statistical significance leads to overestimation \cite{ioannidis2008most}. This is sometimes called the ``winner's curse''. Moreover, it has been demonstrated informally, i.e.\ by simulation, that the winner's curse is especially severe when the power is low \cite{ioannidis2008most,gelman2014beyond}. Here, we provide the first formal proof of this important fact.
As it turns out, low power is very common in the biomedical sciences \cite{button2013power},\cite{dumas2017low}. In particular, so-called pilot studies often have extremely low power. When such a study yields a significant result, the effect is likely grossly overestimated. Unfortunately, effect estimates from significant pilot studies are often used to inform the sample size calculation of a larger trial \cite{leon2011role},\cite{gelman2014beyond}. Low power also occurs when some correction is used to adjust for multiple comparisons. Such corrections are especially severe in genomics research, and the resulting overestimation of effects is well known \cite{goring2001large}. The recent suggestion to lower the significance level to improve reproducibility \cite{benjamin2018redefine} also reduces power, and therefore may backfire by aggravating the winner's curse \cite{mcshane2019abandon}.
While the winner's curse is a relatively well known phenomenon, mathematical results are lacking. In the first part of this paper, we study the winner's curse from the frequentist point of view. We prove that the relative bias of the magnitude is a decreasing function of the power. We also examine the effect of the significance filter on the coverage of confidence intervals and find that it results in undercoverage when the power is less than 50\%.
In the second part of the paper, we study the significance filter from the Bayesian perspective. We conclude that it is necessary to apply shrinkage. We end the paper with a short discussion.
\section{The frequentist perspective}
Suppose that $b$ is a normally distributed, unbiased estimator of $\beta$ with standard error $\se>0$. We have in mind that $\beta$ is some regression coefficient such as a difference of means, a slope, a log odds ratio or log hazard ratio, and we shall sometimes refer to $\beta$ as the ``effect''.
\subsection{Bias of the magnitude}
By Jensen's inequality, $|b|$ is positively biased for $|\beta|$. Indeed, given $\beta$, $|b|$ has the folded normal distribution with mean
\begin{equation}
\E( |b| \mid \se, \beta)= |\beta| + \sqrt{\frac{2}{\pi}} \se\ e^{-\beta^2/2\se^2} - 2 |\beta| \Phi\left( - \frac{|\beta|} {\se} \right).
\end{equation}
\begin{proposition}\label{prop:bias}
The bias $\E( |b| \mid \se, \beta) - |\beta|$ is positive for all $\se$ and $\beta$. Moreover, it is decreasing in $|\beta|$ and increasing in $\se$.
\end{proposition}
\noindent
The proposition asserts that in low powered studies (small effects and large standard errors), the magnitude of the effect tends to be overestimated. For fixed $\se$, the bias $\E( |b| \mid \se, \beta) - |\beta|$ is maximal at $\beta=0$ where it is equal to $\sqrt{2/\pi}\, \se \approx 0.8\, \se$.
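These closed-form quantities are easy to check numerically; the sketch below (ours, Python with SciPy) compares the folded-normal mean above with \texttt{scipy.stats.foldnorm} and verifies the value of the maximal bias:

```python
import numpy as np
from scipy.stats import foldnorm, norm

def folded_mean(beta, se):
    """E(|b| | se, beta) for b ~ N(beta, se^2): the folded-normal mean."""
    return (abs(beta)
            + np.sqrt(2 / np.pi) * se * np.exp(-beta**2 / (2 * se**2))
            - 2 * abs(beta) * norm.cdf(-abs(beta) / se))

beta, se = 0.5, 1.0
# Cross-check against SciPy's folded normal (shape parameter c = |beta|/se):
assert np.isclose(folded_mean(beta, se), foldnorm.mean(abs(beta) / se, scale=se))
# The bias is maximal at beta = 0, where it equals sqrt(2/pi) * se:
assert np.isclose(folded_mean(0.0, se), np.sqrt(2 / np.pi) * se)
```
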
Importantly, the bias in the magnitude becomes even larger if we condition on $|b|$ exceeding some threshold. This ``significance filter'' happens when journals preferentially accept results that are statistically significant (i.e.\ $|b|>1.96 \se$) but {\em also} when authors or readers choose to focus on such promising results as per Fisher's advice. We have the following extension of Proposition 1.
\begin{theorem}
The conditional bias $\E( |b| \mid \se, \beta, |b|/\se>c) - |\beta|$ is positive for all $\se$ and $\beta$. Moreover, it is decreasing in $|\beta|$ and increasing in $\se$ and $c$.
\end{theorem}
\noindent
We define the {\it relative} conditional bias as
$$\frac{\E( |b| \mid \se, \beta, |b|/\se>c) - |\beta|}{|\beta|}$$
and the exaggeration ratio or type M error \cite{gelman2014beyond} as $\E( |b| \mid \se, \beta, |b|/\se>c) /|\beta|$.
\begin{corollary}
The relative conditional bias is positive and the exaggeration ratio is greater than 1. Both depend on $\beta$ and $\se$ only through the signal-to-noise ratio (SNR) $|\beta|/\se$. Both quantities are decreasing in the SNR and increasing in $c$.
\end{corollary}
We illustrate this result in Figure \ref{fig:m}. Now the power for two-sided testing of $H_0:\beta=0$ at
level 5\% is
$$P(|b|>1.96\, \se \mid \beta,\se) = \Phi(\SNR-1.96) + 1 - \Phi(\SNR+1.96),$$
which is a strictly increasing function of the SNR. Hence, the relative conditional bias and the exaggeration ratio are decreasing functions of the power, as was already noted on the basis of simulation in \cite{ioannidis2008most} and \cite{gelman2014beyond}.
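A numerical sketch of the corollary (ours, Python with SciPy): computing the exaggeration ratio by direct integration shows that it exceeds 1 and decreases as the SNR, and hence the power, increases.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def exaggeration_ratio(snr, c=1.96):
    """E(|b| | |b|/se > c) / |beta| for b ~ N(beta, se^2), taking se = 1."""
    beta = snr                              # with se = 1 the SNR equals beta
    pdf = lambda x: norm.pdf(x, loc=beta)
    def tail(f):                            # integral of f over {|x| > c}
        return (integrate.quad(f, -np.inf, -c)[0]
                + integrate.quad(f, c, np.inf)[0])
    return tail(lambda x: abs(x) * pdf(x)) / (tail(pdf) * beta)

def power(snr, z=1.96):
    return norm.cdf(snr - z) + 1 - norm.cdf(snr + z)

ratios = [exaggeration_ratio(s) for s in (0.5, 1.0, 2.0, 3.0)]
# The ratio exceeds 1 and decreases as the SNR (hence the power) increases:
assert all(r > 1 for r in ratios)
assert ratios == sorted(ratios, reverse=True)
assert power(0.5) < power(1.0) < power(2.0) < power(3.0)
```
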
\begin{figure}[htp] \centering{
\includegraphics[scale=0.8]{exaggeration.pdf}
\caption{The exaggeration ratio as a function of the SNR and the power, when conditioning on significance at the 5\% level ($c=1.96$).}\label{fig:m}
}
\end{figure}
\subsection{Coverage}
The significance filter also has consequences for the coverage of confidence intervals. We start by recalling their definition. Suppose a random variable $X$ is distributed according to some distribution $f_\theta$. A $(1-\alpha) \times 100\%$ confidence set $S(X)$ is a random subset of the parameter space such that
$$P(\theta \in S(X) \mid \theta)=1-\alpha,$$
for all $\theta$ \cite{lehmann2006testing}. A negatively biased semi-relevant (or recognizable) set $R$ is a subset of the sample space such that
$$P(\theta \in S(X) \mid \theta, X \in R)<1-\alpha,$$
for all $\theta$. It is quite problematic if such a set $R$ exists, for is it still reasonable to report $S(X)$ with $(1-\alpha) \times 100\%$ confidence after the event $X \in R$ has been observed?
Semi-relevant sets have been constructed in various situations, most notably in case of the standard one-sample $t$-interval \cite{lehmann2006testing}. Lehmann \cite{lehmann2006testing} called the existence of certain relevant sets ``an embarrassment to confidence theory''. Now suppose $b$ is normally distributed with mean $\beta$ and known standard deviation $\se$. If we define
$$S(b)=\{ \beta : |b-\beta|/\se < z_{1-\alpha/2} \}$$
where $z_{1-\alpha/2}$ is the $1-\alpha/2$ quantile of the standard normal distribution, then we have the following confidence statement
$$P(\beta \in S(b) \mid \beta,\se) = P(|b-\beta|/\se < z_{1-\alpha/2} \mid \beta,\se)=1-\alpha,$$
for all $0 < \alpha < 1$, $\beta$ and $\se>0$. Lehmann \cite{lehmann2006testing} shows that in this particular setting, there do not exist any negatively biased semi-relevant sets. This is certainly reassuring. However, if $c>0$, then the conditional coverage
$$P(|b-\beta|/ \se < z_{1-\alpha/2} \mid \beta, \se, |b|/\se>c)$$
depends on $\beta$ and $\se$. This dependence is not simple; for instance, it is {\em not} monotone in $\beta$. We do have the following theorem.
\begin{theorem}
Suppose $b$ is normally distributed with mean $\beta$ and standard deviation $\se$. If the SNR $|\beta|/\se$ is less than $z=z_{1-\alpha/2}$ then
\begin{equation}
P(|b-\beta| / \se< z \mid \beta, \se, |b|/\se>z) < P(|b-\beta| / \se< z \mid \beta, \se) =1-\alpha.
\end{equation}
\end{theorem}
Note that if the SNR is equal to $z_{1-\alpha/2}$, then the power for testing $H_0 : \beta = 0$ at level $\alpha$ is slightly more than 50\%. So the Theorem implies that if we have a significant result while the power is 50\% or less, then the confidence interval will not reach its nominal coverage.
The result is quite sharp. By inspecting the proof, we can see that if the SNR is slightly larger than $z_{1-\alpha/2}$ then the conditional coverage {\em exceeds} the nominal (unconditional) coverage.
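Because everything is Gaussian, the conditional coverage can be computed exactly from normal CDFs. The following Python sketch (ours, using SciPy) reproduces both the undercoverage below the SNR threshold and the overcoverage just above it:

```python
import numpy as np
from scipy.stats import norm

def conditional_coverage(snr, z=1.96):
    """P(|b - beta| < z | |b| > z) for b ~ N(beta, 1) with beta = snr >= 0."""
    def prob(lo, hi):                       # P(lo < b < hi) for b ~ N(snr, 1)
        return max(norm.cdf(hi - snr) - norm.cdf(lo - snr), 0.0)

    # Intersect {|b - beta| < z} with the two significance tails:
    num = (prob(max(snr - z, z), snr + z)          # right tail, b > z
           + prob(snr - z, min(snr + z, -z)))      # left tail,  b < -z
    den = prob(-np.inf, -z) + prob(z, np.inf)
    return num / den

# Undercoverage whenever the SNR is below z (power below roughly 50%):
assert conditional_coverage(1.0) < 0.95
# ...while slightly above the threshold the conditional coverage exceeds
# the nominal level, illustrating the sharpness of the theorem:
assert conditional_coverage(2.5) > 0.95
```
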
\section{The Bayesian Perspective}
Bayesian inference is valid conditionally on the data, and so the significance filter should not pose any difficulties. On the other hand, Bayesian estimators are naturally biased. In this section we compare the performance of the unbiased estimator $b$ and the Bayes estimator.
Let us assume that $\beta$ has a normal prior distribution with mean 0 and known standard deviation $\tau >0$. The conditional distribution of $\beta$ given $b$ is normal with mean $b^*=\tau^2 b/(\se^2 + \tau^2)$ and variance $v=\se^2\tau^2/(\se^2 + \tau^2)$. We will write $s=\sqrt{v}$. Note that $b^*$ is the Bayes estimator (under squared error loss) of $\beta$. Clearly, $|b^*|<|b|$ and for that reason $b^*$ is called a shrinkage estimator.
We can evaluate $b$ and $b^*$ as estimators of $\beta$ conditionally on the parameter and averaged over the distribution of the data, which is the frequentist point of view. Alternatively, we can condition on the data and average over the distribution of the parameter, which is the Bayesian point of view. We have the following nicely symmetric situation, where we consider $\se$ and $\tau$ to be fixed and known.
\begin{align}
\E(b - \beta \mid \beta) &= 0 &\text{and}\quad \E(b^* - \beta \mid \beta) &= -\frac{\se^2}{\se^2 + \tau^2} \beta \\
\E(b - \beta \mid b) &= \frac{\se^2}{\se^2 + \tau^2}b &\text{and} \quad \E(b^* - \beta \mid b) &= 0
\end{align}
So, from the frequentist point of view, $b$ is unbiased for $\beta$ and $b^*$ is biased. However, from the Bayesian point of view, it is the other way around!
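A quick Monte Carlo check of the frequentist column of this display (a Python sketch, ours):

```python
import numpy as np

rng = np.random.default_rng(2)
se, tau, beta0, n = 1.0, 2.0, 1.5, 2_000_000
shrink = tau**2 / (se**2 + tau**2)

# Frequentist view: fix beta = beta0 and average over b ~ N(beta0, se^2).
b = rng.normal(beta0, se, n)
assert np.isclose((b - beta0).mean(), 0.0, atol=5e-3)        # b is unbiased
assert np.isclose((shrink * b - beta0).mean(),               # b* is biased
                  -se**2 / (se**2 + tau**2) * beta0, atol=5e-3)
```
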
\subsection{Bias of the magnitude}
Now, if we are interested in the magnitude of $\beta$, then we could take the posterior mean of $|\beta|$ as an estimator. However, it is still relevant to evaluate the performance of $|b^*|$ as an estimator of $|\beta|$ from the Bayesian point of view. Conditionally on $s$ and $b^*$, $\beta$ has the normal distribution with mean $b^*$ and standard deviation $s$ and hence $|\beta|$ has the folded normal distribution. Similarly to Proposition 1, we have the following.
\begin{proposition}
The difference $\E( |\beta| \mid s, b^*) - |b^*|$ is positive. It is decreasing in $|b^*|$ and increasing in $s$. Moreover, the difference vanishes as $|b^*|$ tends to infinity.
\end{proposition}
\noindent
So, conditionally on the data, $|b^*|$ underestimates $|\beta|$ on average, but the difference disappears if we focus on large or significant effects. So now the significance filter actually {\em reduces} the bias in the magnitude! In other words, shrinkage lifts the winner's curse.
\bigskip \noindent
So far, we have conditioned either on the parameter or the data, and averaged over the other. However, in practice we do not keep the parameter fixed and repeat the experiment many times. We also do not keep the data fixed and vary the parameter. So, it is also relevant to consider the performance of $b$ and $b^*$ on average over the distribution of {\em both} the parameter {\em and} the data. If the distribution of the parameter represents some field of research, then this averaging will provide insight into how our statistical procedures perform when used repeatedly in that field.
Under our simple model, the marginal distribution of $b$ is normal with mean zero and variance $\se^2+\tau^2$ and the marginal distribution of $b^*$ is normal with mean zero and variance $\tau^4/(\se^2+\tau^2)$. So, trivially, $\E(b)=\E(b^*)=\E(\beta)=0$. Moreover, it is easy to see that the variance of $b^*$ is less than the variance of $b$. Marginally, $|\beta|$, $|b|$ and $|b^*|$ have half-normal distributions with means
\begin{equation}
\E \left| b^* \right| =\frac{\tau^2}{\sqrt{\se^2 + \tau^2}} \sqrt{\frac{2}{\pi}}, \quad \E|\beta| = \tau \sqrt{\frac{2}{\pi}}, \quad \E|b| = \sqrt{\se^2 + \tau^2} \sqrt{\frac{2}{\pi}}
\end{equation}
It is easy to see that
\begin{equation}
\E \left| b^* \right| < \E |\beta| < \E|b|.
\end{equation}
Negative bias is more conservative than positive bias, and that may be preferable in many situations. It is interesting to note that the factor by which $|b|$ {\em over}estimates $|\beta|$ is the same as the factor by which $|b^*|$ {\em under}estimates it. That is,
\begin{equation}
\frac{\E|b|}{ \E|\beta|} = \frac{ \E|\beta|}{\E|b^*|} = \frac{\sqrt{\se^2 + \tau^2}}{\tau} .
\end{equation}
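These marginal identities can be verified by simulation; the sketch below (ours, Python with NumPy) draws from the joint model and checks the ordering and the common factor:

```python
import numpy as np

rng = np.random.default_rng(0)
se, tau, n = 1.0, 2.0, 1_000_000

beta = rng.normal(0.0, tau, n)                 # draws from the prior
b = rng.normal(beta, se)                       # unbiased estimates
b_star = tau**2 / (se**2 + tau**2) * b         # posterior means (shrinkage)

factor = np.sqrt(se**2 + tau**2) / tau
assert np.abs(b_star).mean() < np.abs(beta).mean() < np.abs(b).mean()
# |b| overestimates |beta| by the same factor by which |b*| underestimates it:
assert np.isclose(np.abs(b).mean() / np.abs(beta).mean(), factor, rtol=1e-2)
assert np.isclose(np.abs(beta).mean() / np.abs(b_star).mean(), factor, rtol=1e-2)
```
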
Moreover, the following proposition says that the bias of $|b^*|$ is smaller (on average) than the bias of $|b|$.
\begin{proposition}
Suppose $\beta$ has a normal prior distribution with mean 0 and standard deviation $\tau >0$. Suppose that conditionally on $\beta$, $b$ is normally distributed with mean $\beta$ and standard error $\se>0$. Let $b^*=\E(\beta \mid b)$; then
\begin{equation}
\E( |b| - |\beta|) > \E(|\beta| - |b^*|).
\end{equation}
\end{proposition}
\noindent
Most importantly, however, while the bias of $|b|$ increases as we condition on $|b|$ exceeding some threshold, the bias of $|b^*|$ vanishes!
\begin{theorem}
As $c$ goes to infinity, $\E(|b^*| -|\beta| \mid |b|>c)$ vanishes.
\end{theorem}
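A simulation sketch (ours) contrasting this with Theorem 1: under the significance filter the bias of $|b|$ is inflated, while $|b^*|$ remains nearly unbiased.

```python
import numpy as np

rng = np.random.default_rng(1)
se, tau, c, n = 1.0, 2.0, 3.0, 2_000_000

beta = rng.normal(0.0, tau, n)
b = rng.normal(beta, se)
b_star = tau**2 / (se**2 + tau**2) * b

sig = np.abs(b) > c                                  # the significance filter
bias_b = (np.abs(b[sig]) - np.abs(beta[sig])).mean()
bias_star = (np.abs(b_star[sig]) - np.abs(beta[sig])).mean()

# Conditioning inflates the bias of |b| but leaves |b*| nearly unbiased:
assert bias_b > 0.3
assert abs(bias_star) < 0.05
```
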
\subsection{Coverage}
We now return to the coverage issue we discussed in section 2.2. It might seem that Theorem 2 is not much of a problem in practice because conditional on a significant result, the power is unlikely to be small. But such an argument would depend on the (prior) distribution of the signal-to-noise ratio $|\beta|/\se$. We have the following result.
\begin{theorem}
Suppose $\beta$ and $\se$ are distributed such that the SNR $|\beta|/\se$ has a decreasing density and $|\beta|/\se$ and $\se$ are independent. Also suppose that conditionally on $\beta$ and $\se$, $b$ is normally distributed with mean $\beta$ and standard deviation $\se$. For every $0<\alpha < 1$
\begin{equation}
P(|b-\beta| < z_{1-\alpha/2}\se \mid |b|/\se>c) < 1-\alpha.
\end{equation}
\end{theorem}
This result suggests that across research fields where the SNR has a decreasing density, confidence intervals undercover on average. But how realistic is it to assume such a decreasing density? Clearly, it would imply a decreasing density of the absolute $z$-value, and this is certainly not the case in Figure \ref{fig:z}. However, we believe that this is due to selective reporting.
We have made an effort to collect an unselected sample of $z$-values as follows. It is a fairly common practice in the life sciences to build multivariate regression models by ``univariable screening''. First, the researchers run a number of univariable regressions for all predictors that they believe could have an important effect. Next, those predictors with a $p$-value below some threshold are selected for the multivariate model. While this approach is statistically unsound, we believe that the univariable regressions should be largely unaffected by selection on significance, simply because that selection is still to be done. For further details, we refer to \cite{van2019default}. We do note that in that article, we discarded $p$-values below 0.001, but these are included here.
We have collected 732 absolute $z$-values from 51 recent articles from Medline. Their distribution, shown in Figure \ref{fig:z2}, suggests a decreasing density of the absolute $z$-values, which implies that the distribution of the SNR is decreasing as well.
\begin{figure}[htp] \centering{
\includegraphics[scale=0.8]{medline_z.pdf}
\caption{The distribution of 732 absolute $z$-values from Medline. Here we made an effort to avoid the significance filter.}\label{fig:z2}
}
\end{figure}
\section{Discussion}
In this paper we have considered the generic situation where we have an unbiased, normally distributed estimator $b$ of a parameter $\beta$, with known standard error $\se$. Frequentist properties, such as the unbiasedness of $b$ and the coverage of the confidence interval are only meaningful before the data have been observed. Once the data are in, they become meaningless since $b$ is just some fixed number and the confidence interval either covers $\beta$ or it does not. Nothing more can be said without specifying a (prior) distribution for $\beta$.
However, suppose we condition not on $(b,\se)$ but only on the event $|b|>1.96\, \se$. That is, we condition on statistical significance at the 5\% level. Now $b$ is still random and we can talk about bias and coverage. Conditionally on significance, $b$ is biased away from zero. This tendency to overestimate the magnitude of significant effects is sometimes called the ``winner's curse''. It is especially severe when the signal-to-noise ratio $|\beta|/\se$ is low. Also, if the SNR is low, then conditionally on significance the confidence interval will undercover. By providing mathematical proofs of these facts, we hope to contribute to the awareness of these very serious problems.
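Both conditional phenomena are easy to reproduce in a small simulation. The following sketch is illustrative only: the effect size, standard error, and number of replications are hypothetical choices, not taken from our data. It draws an unbiased normal estimator and then conditions on significance at the 5\% level:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers for illustration: true effect beta and standard error se
# chosen so that the SNR |beta|/se equals 1, i.e. the power is low (about 17%).
beta, se, n = 1.0, 1.0, 200_000
b = rng.normal(beta, se, size=n)          # unbiased estimates b ~ N(beta, se^2)

sig = np.abs(b) > 1.96 * se               # the "significance filter"
covered = np.abs(b - beta) < 1.96 * se    # does the 95% CI cover beta?

print(round(np.mean(covered), 3))         # unconditional coverage: ~0.95
print(round(np.mean(covered[sig]), 3))    # conditional on significance: ~0.84
print(round(np.mean(np.abs(b[sig])) / abs(beta), 2))  # winner's curse: ~2.5x
```

Conditioning on significance keeps mostly the draws in the upper tail, so the magnitude is inflated and the usual interval undercovers, exactly as the theory above predicts for power below 50\%.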
The goal of hypothesis testing is to try to avoid chasing noise, which is perfectly reasonable. However, the consequence of focusing on significant results is that all the nice frequentist properties no longer hold. Many proposals have been made to address this issue. From a frequentist point of view, one could condition throughout on statistical significance. See, for example, \cite{ghosh2008estimating} and references therein. Alternatively, one can take a Bayesian approach, such as proposed by \cite{xu2011bayesian} and ourselves \cite{van2019default}. Of course, the Bayesian approach relies on correct specification of the prior.
Shrinkage is often viewed as a method to achieve a lower mean squared error by reducing the variance at the expense of increasing the bias. Our most important point is that it is necessary to apply shrinkage to {\em reduce} the bias that results from focusing on interesting results.
\bigskip
\noindent\textit{arXiv:2009.09440 \textup{(}stat.ME\textup{)}, https://arxiv.org/abs/2009.09440}

\medskip
\noindent\textbf{The Significance Filter, the Winner's Curse and the Need to Shrink}

\medskip
\noindent\textbf{Abstract.} The ``significance filter'' refers to focusing exclusively on statistically significant results. Since frequentist properties such as unbiasedness and coverage are valid only before the data have been observed, there are no guarantees if we condition on significance. In fact, the significance filter leads to overestimation of the magnitude of the parameter, which has been called the ``winner's curse''. It can also lead to undercoverage of the confidence interval. Moreover, these problems become more severe if the power is low. While these issues clearly deserve our attention, they have been studied only informally and mathematical results are lacking. Here we study them from the frequentist and the Bayesian perspective. We prove that the relative bias of the magnitude is a decreasing function of the power and that the usual confidence interval undercovers when the power is less than 50\%. We conclude that failure to apply the appropriate amount of shrinkage can lead to misleading inferences.
\bigskip
\noindent\textit{arXiv:0809.4561, https://arxiv.org/abs/0809.4561}

\medskip
\noindent\textbf{Stable string operations are trivial}

\medskip
\noindent\textbf{Abstract.} We show that in closed string topology and in open-closed string topology with one $D$-brane, higher genus stable string operations are trivial. This is a consequence of Harer's stability theorem and related stability results on the homology of mapping class groups of surfaces with boundaries. In fact, this vanishing result is a special case of a general result which applies to all homological conformal field theories with a property that in the associated topological quantum field theories, the string operations associated to genus one cobordisms with one or two boundaries vanish. In closed string topology, the base manifold can be either finite dimensional, or infinite dimensional with finite dimensional cohomology for its based loop space. The above vanishing result is based on the triviality of string operations associated to homology classes of mapping class groups which are in the image of stabilizing maps.

\section{Introduction}
Let $M$ be a closed oriented smooth $d$ dimensional manifold and let $LM$ be the loop space consisting of continuous maps from $S^1$ into $M$. In this paper, we use (co)homology with integral coefficients unless otherwise stated, except in section 4. Chas and Sullivan \cite{CS} showed that the homology of the loop space with a degree shift $\mathbb H_*(LM)=H_{*+d}(LM)$ has the structure of a Batalin-Vilkovisky algebra. Cohen and Godin \cite{CG}, Godin \cite{Go2}, and Cohen and Schwarz \cite{CoSch} showed that $\mathbb H_*(LM)$ even carries the structure of a homological conformal field theory (HCFT).
Namely, let $F_{g,p+q}$ be a connected smooth oriented genus $g$ surface with $p+q$ parametrized boundaries, of which $p$ are designated as incoming and the other $q$ as outgoing. The mapping class group $\Gamma_{g,p+q}$ is the group of isotopy classes of orientation preserving diffeomorphisms of $F_{g,p+q}$ fixing the boundaries pointwise. We assume $p\ge0$ and $q\ge1$; this condition is called the positive boundary condition. Then to each homology class of the mapping class group, the HCFT structure assigns the following string operation acting on $H_*(LM)$ (modulo the K\"unneth theorem):
\begin{equation}\label{string operation}
\mu_{g,p+q}: H_*(B\Gamma_{g,p+q})\otimes H_*(LM)^{\otimes p} \longrightarrow H_*(LM)^{\otimes q},
\end{equation}
lowering degree by $-d\cdot \chi(F_{g,p+q})=d(2g+p+q-2)$, where $B\Gamma_{g,p+q}$ is the classifying space of the mapping class group $\Gamma_{g,p+q}$. Since we assume $q\ge1$, $B\Gamma_{g,p+q}$ is homotopy equivalent to the moduli space $\mathfrak M_{g,p+q}$ of connected Riemann surfaces of genus $g$ with $p+q$ disjoint holomorphically embedded discs. A representation theory of these moduli spaces $\mathfrak M_{g,p+q}$ is a conformal field theory. Hence the name homological conformal field theory is a suitable one in our framework.
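As a quick check of this degree count, note that $\chi(F_{g,p+q})=2-2g-(p+q)$, so
\begin{equation*}
-d\cdot\chi(F_{g,p+q}) = -d\bigl(2-2g-(p+q)\bigr) = d(2g+p+q-2).
\end{equation*}
For example, the pair of pants ($g=0$, $p=2$, $q=1$) gives an operation lowering degree by $d$, matching the Chas-Sullivan loop product on $\mathbb H_*(LM)=H_{*+d}(LM)$.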
When the manifold $M$ is infinite dimensional, Cohen-Godin's construction of string operations does not work. However, if $M$ is simply connected and the cohomology of its based loop space $\Omega M$ is finite dimensional over a field $k$, then Chataur and Menichi \cite{CM} constructed a homological conformal field theory structure on the cohomology $H^*(LM;k)$ under a different condition $p,q\ge1$ (noncompact HCFT). String operations in this case are of the following form:
\begin{equation}\label{cohomology string operation}
\mu_{g,p+q}: H_*(B\Gamma_{g,p+q};k)\otimes H^*(LM;k)^{\otimes q} \longrightarrow H^*(LM;k)^{\otimes p}.
\end{equation}
In section \ref{vanishing theorem (II)}, we show that the coproduct for $H^*(LM;k)$ is trivial (i) when $H^*(\Omega M;\mathbb Z)$ is torsion free and $k$ is any field, and (ii) when $H^*(\Omega M;\mathbb Z)$ is $p$-torsion free where $p$ is the characteristic of the field $k$. See Corollary \ref{trivial coproduct}. The homology $H_*(LM;k)$ has a dual HCFT structure with trivial product under the same condition on $\Omega M$. Their construction applies in particular to classifying spaces $BG$ of finite dimensional Lie groups $G$.
The main result of this paper is that all ``stable" string operations vanish both in closed string topology and in open-closed string topology. To describe this stability condition, we recall Harer's stability theorem \cite{H}. Let $T$ be a torus with two boundaries. By sewing one boundary of $T$ to any boundary of $F_{g,p+q}$, we obtain a surface $F_{g+1,p+q}$ of genus $g+1$. Extending a self-diffeomorphism of $F_{g,p+q}$ to one of $F_{g+1,p+q}$ by the identity on $T$, we get an inclusion of groups $\varphi:\Gamma_{g,p+q} \rightarrow \Gamma_{g+1,p+q}$. The induced map in homology is an isomorphism in low degrees, increasing with genus. Thus, the homology of mapping class groups stabilizes with increasing genus. More precisely,
\begin{Harer's Stability Theorem}[Harer \cite{H} and Ivanov \cite{I1}, \cite{I2}] Assume $p+q\ge1$. The stabilizing homomorphism
\begin{equation*}
\varphi_*: H_k(B\Gamma_{g,p+q}) \longrightarrow H_k(B\Gamma_{g+1,p+q})
\end{equation*}
is an isomorphism and independent of $p+q$ for $g\ge2k+1$. It is onto and independent of $p+q$ for $g\ge2k$.
\end{Harer's Stability Theorem}
Thus for sufficiently large $g$, the homology of mapping class groups is independent of genus and the number of boundaries $p+q\ge1$. We will show that there are actually $p+q$ possibly different choices of stabilizing maps related by $\Sigma_p\times\Sigma_q$ equivariance, corresponding to different choices of boundary circles of $F_{g,p+q}$ used for sewing with $T$. The above stability theorem is valid for each of these maps. In fact, it turns out that in stable range, all of these $p+q$ choices give the same stabilizing map. This is a consequence of Harer's stability theorem. See Remark \ref{trivial S_n action}.
For the statement of Ivanov's reformulation of homology stability in \cite{I2}, see Theorem \ref{Ivanov stability}. This reformulation is more convenient for us when we discuss open-closed string topology operations. The above stability range was proved by Ivanov, improving on Harer's original range. Note that $\varphi_*$ is always an isomorphism for $k=0$ because all of the spaces $B\Gamma_{g,p+q}$ are connected.
We say that the homology group $H_k(B\Gamma_{g,p+q})$ is in stable range if the stabilizing map $\varphi_*$ into $H_k(B\Gamma_{g,p+q})$ is surjective, and all the subsequent $\varphi_*$ are isomorphisms, as in the following sequence:
\begin{equation*}
H_k(B\Gamma_{g-1,p+q}) \xrightarrow[\text{onto}]{\varphi_*}
H_k(B\Gamma_{g,p+q}) \xrightarrow[\cong]{\varphi_*} H_k(B\Gamma_{g+1,p+q}) \xrightarrow[\cong]{\varphi_*} \dots.
\end{equation*}
By Ivanov's result, $H_k(B\Gamma_{g,p+q})$ is in stable range when $g\ge 2k+1$ for all $k\ge0$.
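For instance, taking $k=1$, Ivanov's range says that
\begin{equation*}
H_1(B\Gamma_{2,p+q}) \xrightarrow[\text{onto}]{\varphi_*}
H_1(B\Gamma_{3,p+q}) \xrightarrow[\cong]{\varphi_*} H_1(B\Gamma_{4,p+q}) \xrightarrow[\cong]{\varphi_*} \dots,
\end{equation*}
so $H_1(B\Gamma_{g,p+q})$ is in stable range for all $g\ge3$.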
\begin{Vanishing Theorem}[\textbf{Closed String Topology Case}]
\textup{(I)} Let $M$ be a finite dimensional closed smooth oriented manifold. Consider the closed string topology for $M$.
\textup{(i)} String operations \eqref{string operation} on $H_*(LM)$ associated to elements in the image $\textup{Im}\,\varphi_*\subset H_k(B\Gamma_{g,p+q})$ of any stabilizing map $\varphi_*$ are trivial.
\textup{(ii)} String operations associated to any elements in the homology $H_k(B\Gamma_{g,p+q})$ in stable range are all trivial.
\noindent\textup{(II)} Let $M$ be a simply connected infinite dimensional space such that $H^*(\Omega M;k)$ is finite dimensional for some field $k$. Then with respect to \textup{(}co\textup{)}homology with coefficients in $k$, the above statements \textup{(i)} and \textup{(ii)} for string operations associated to elements in $H_k(B\Gamma_{g,p+q})$ acting on $H^*(LM;k)$ are valid.
\end{Vanishing Theorem}
Parts (ii) explain the title of this paper. They are immediate consequences of parts (i) in view of Harer's Stability Theorem. Parts (i) apply to all stable operations as well as to the majority of higher genus unstable operations; consequently, most higher genus string operations vanish. In the last section of this paper, we compute some genus one unstable string operations and show them to be trivial.
In \cite{Go2}, higher string operations are also constructed in open-closed string topology for a finite dimensional manifold $M$ in which the set of $D$-branes (a collection of submanifolds of $M$ in which open strings can end) consists of just $M$. To describe these operations, let $S$ be a connected open-closed cobordism of genus $g(S)$ with $p$ incoming closed strings, $q$ outgoing closed strings, $r$ incoming open strings, $s$ outgoing open strings, and $m$ completely free boundaries. Let $\Gamma(S)$ be the mapping class group of $S$, where diffeomorphisms are allowed to permute completely free boundaries carrying the same label. Let $\sigma_m\cong\mathbb Z$ be the sign representation of the symmetric group $\Sigma_m$ on $m$ letters. Since there is a canonical surjective map $\Gamma(S)\to \Sigma_m$, the module $\sigma_m$ is also a module over $\Gamma(S)$. See section \ref{open-closed string topology} for details. Then the open-closed string operation is of the following form (modulo K\"unneth theorem):
\begin{equation}\label{open-closed operation}
\mu: H_*(\Gamma(S);\sigma_m^d)\otimes H_*(LM)^{\otimes p}\otimes H_*(M)^{\otimes r} \longrightarrow
H_*(LM)^{\otimes q}\otimes H_*(M)^{\otimes s},
\end{equation}
where $d=\dim M$ and $\sigma_m^d=(\sigma_m)^{\otimes d}$. We prove in Proposition \ref{chi_S} that $(\det\chi_S)^d$ appearing in \cite{Go2} is the same as $\sigma_m^d$ as $\Gamma(S)$-modules. We will formulate and prove a stability property of the homology of the mapping class group $H_*(\Gamma(S);\sigma_m^r)$ with coefficients in $\sigma_m^r$ for $r\ge0$. In particular, we show that the group $H_k(\Gamma(S);\sigma_m^d)$ is in stable range when $g(S)\ge2k+1$. See Remark \ref{related stability} for related works.
\begin{Vanishing Theorem}[\textbf{Open-Closed String Topology Case}] Let $M$ be a $d$-dimensional closed oriented smooth manifold. Consider the open-closed string topology for $M$ with $D$-brane set consisting only of $M$.
\textup{(i)} Open-closed string operations \eqref{open-closed operation} associated to elements in $H_k(\Gamma(S);\sigma_m^d)$ in the image of any stabilizing map $\varphi_*$ are trivial.
\textup{(ii)} Open-closed string operations associated to any elements in the homology group $H_k(\Gamma(S);\sigma_m^d)$ in stable range are trivial.
\end{Vanishing Theorem}
At the end of section \ref{open-closed string topology}, we will comment on the general open-closed string topology case with an arbitrary collection of $D$-brane submanifolds.
Let $\Gamma_{\infty,r}=\lim_{g\to\infty}\Gamma_{g,r}$ be the stable mapping class group for $r\ge1$. In view of Harer's stability theorem, the homology of $\Gamma_{\infty,r}$ is independent of $r\ge1$. The stable homology of the mapping class groups is given by the homology $H_*(B\Gamma_{\infty,r})$ of the stable mapping class group. The homotopy type of $B\Gamma_{\infty}=B\Gamma_{\infty,r}$ was identified by Madsen and Weiss \cite{MW}: they showed that $H_*(\mathbb Z\times B\Gamma_{\infty};\mathbb Z)\cong H_*(\Omega^{\infty}\mathbb CP^{\infty}_{-1};\mathbb Z)$, and as a consequence, its rational cohomology is given by
\begin{equation*}
H^*(B\Gamma_{\infty};\mathbb Q)\cong\mathbb Q[\kappa_1,\kappa_2,\dots,\kappa_n,\dots],
\end{equation*}
solving the Mumford conjecture, where the classes $\kappa_n$ of degree $2n$ are the Miller-Morita-Mumford classes. The mod $p$ homology of $B\Gamma_{\infty}$ was computed by Galatius \cite{Ga}. In contrast, the homology of mapping class groups in the unstable range is not well understood, and only a few groups have been calculated. Although the statements in parts (ii) of the vanishing theorems are not quite the same as saying that string operations associated to $H_*(B\Gamma_{\infty})$ are trivial, the latter is a concise summary.
The organization of this paper is as follows. In section 2, we recall homological conformal field theory and discuss some of its properties. In section 3, we analyze the meaning of Harer's Stability Theorem in a general homological conformal field theoretic context. From this, the vanishing theorem of string operations for finite dimensional manifolds follows as a special case of a general fact. In section \ref{vanishing theorem (II)}, we prove the vanishing theorem for infinite dimensional manifolds $M$ with finite dimensional $H^*(\Omega M;k)$. This is done by showing that the genus $1$ TQFT operator is trivial (Theorem \ref{vanishing of genus one map}). We also show that the coproduct in the loop cohomology $H^*(LM;k)$ is trivial and the Serre spectral sequence for the fibration $p:LM \to M$ collapses when $H^*(\Omega M;k)$ is an exterior algebra (Theorem \ref{cohomology of LM}). In particular, the coproduct in $H^*(LM;k)$ is trivial if $H^*(\Omega M;\mathbb Z)$ is $p$ torsion free, where $p$ is the characteristic of the coefficient field $k$ (Corollary \ref{trivial coproduct}). In section \ref{open-closed string topology}, we prove the corresponding vanishing theorem in open-closed string topology with a single $D$-brane consisting of $M$ itself, which follows from a stability result of certain mapping class groups with nontrivial module coefficients obtained by a spectral sequence comparison argument. In the last section, we compute some genus one unstable string operations, and show them to be trivial.
\bigskip
\section{Homological conformal field theory}
We briefly recall basics of homological conformal field theory. Let $\mathcal M_{g,p+q}$ be the moduli space of (not necessarily connected) Riemann surfaces of genus $g$ with a holomorphic map from a disjoint union of $p+q$ discs onto their disjoint images on the surface. Here the first $p$ discs are designated as incoming and the remaining $q$ discs as outgoing. There is a natural action of the product of symmetric groups $\Sigma_p\times \Sigma_q$ on $\mathcal M_{g,p+q}$ by relabeling incoming and outgoing holomorphic discs. Let $\mathcal C$ be a category such that the set of objects is the set of nonnegative integers $Ob(\mathcal C)=\mathbb N\cup\{0\}$, and for $p,q\in Ob(\mathcal C)$ the morphism set from $p$ to $q$ is $\mathcal C(p,q)=\coprod_{g\ge0}\mathcal M_{g,p+q}$. Disjoint union and sewing of Riemann surfaces give rise to operations:
\begin{align*}
\otimes&: \mathcal C(p,q)\times\mathcal C(p',q') \longrightarrow \mathcal C(p+p',q+q'),\\
\circ &: \mathcal C(q,r)\times \mathcal C(p,q) \longrightarrow \mathcal C(p,r).
\end{align*}
With respect to the tensor law on objects given by $p\otimes q=p+q$, the category $\mathcal C$ has the structure of a strict symmetric monoidal category. A symmetric monoidal functor from the category $\mathcal C$ to the category of complex vector spaces is a conformal field theory \cite{Se}. Thus a conformal field theory is a representation theory of moduli spaces of Riemann surfaces.
Let $\mathcal H_*\mathcal C$ be a strict symmetric monoidal category whose set of objects is $\mathbb N\cup\{0\}$, and whose set of morphisms from $p$ to $q$ for $p,q\in Ob(\mathcal H_*\mathcal C)$ is given by
\begin{equation*}
\mathcal H_*\mathcal C(p,q)=H_*(\mathcal C(p,q))=\bigoplus_{g\ge0} H_*(\mathcal M_{g,p+q}).
\end{equation*}
The composition of morphisms comes from the gluing operation of Riemann surfaces
\begin{equation*}
\circ: H_*\bigl(\mathcal C(q,r)\bigr)\otimes H_*\bigl(\mathcal C(p,q)\bigr) \longrightarrow H_*\bigl(\mathcal C(p,r)\bigr).
\end{equation*}
A homological conformal field theory (HCFT) is a symmetric monoidal functor $\mathcal F$ from the category $\mathcal H_*\mathcal C$ to the category $\mathcal Gr_*$ of graded groups. For such a functor $\mathcal F$, the graded group $\mathcal F(1)=A_*$ comes equipped with linear maps
\begin{equation*}
\mathcal F: H_*\bigl(\mathcal C(p,q)\bigr) \longrightarrow \text{Hom}(A_*^{\otimes p},A_*^{\otimes q})
\end{equation*}
for $p,q\ge0$, which are compatible with gluing of Riemann surfaces so that for homology classes $x\in H_*\bigl(\mathcal C(q,r)\bigr)$ and $y\in H_*\bigl(\mathcal C(p,q)\bigr)$, their composition $x\circ y\in H_*\bigl(\mathcal C(p,r)\bigr)$ satisfies the relation
\begin{equation*}
\mathcal F(x\circ y)=\mathcal F(x)\circ \mathcal F(y): A_*^{\otimes p} \longrightarrow A_*^{\otimes q} \longrightarrow A_*^{\otimes r}.
\end{equation*}
Let $C$ be a Riemann sphere with two disjoint holomorphic discs, one incoming and one outgoing, and let $P$ be a Riemann sphere with two incoming and one outgoing holomorphic discs (a pair of pants). Their homology classes $c=[C]\in H_0(\mathcal M_{0,1+1})$ and $m=[P]\in H_0(\mathcal M_{0,2+1})$ are independent of the conformal structures on $C$ and on $P$. The associated morphism $\mathcal F(c)=1 : A_* \rightarrow A_*$ is the identity on $A_*$, and $\mathcal F(m): A_*\otimes A_* \rightarrow A_*$ gives an associative product with unit on $A_*$.
Let $\mathcal H_0\mathcal C$ be a strict symmetric monoidal category with the object set $\mathbb N\cup\{0\}$, and with the morphism set from $p$ to $q$ given by $\mathcal H_0\mathcal C(p,q)=H_0\bigl(\mathcal C(p,q)\bigr)$, which depends only on the topological type of surfaces. A functor $\mathcal F_0$ from the category $\mathcal H_0\mathcal C$ to the category $\mathcal Gr_*$ of graded groups is a topological quantum field theory (TQFT) \cite{At}. If $B_*=\mathcal F_0(1)$ is the graded group associated to the object $1\in Ob(\mathcal H_0\mathcal C)$, then $B_*$ has the structure of a Frobenius algebra. Note that every homological conformal field theory $\mathcal F$ restricts to a topological quantum field theory $\mathcal F_0$ by restricting the morphism set from $H_*\bigl(\mathcal C(p,q)\bigr)$ to $H_0\bigl(\mathcal C(p,q)\bigr)$.
If the object sets of the categories $\mathcal H_*\mathcal C$ and $\mathcal H_0\mathcal C$ are taken to be the set $\mathbb N$ of positive integers, then the corresponding homological conformal field theories and topological quantum field theories do not necessarily have units and counits, and these theories are called noncompact HCFT and noncompact TQFT, respectively. In the context of string topology $\mathbb H_*(LM)$ for a finite dimensional manifold $M$, we only require $q$ to be positive. Such a theory is called a theory with positive boundary. In the context of string topology on the cohomology $H^*(LM)$ of a simply connected infinite dimensional manifold $M$ whose based loop space $\Omega M$ has finite dimensional cohomology over a field $k$, we require both $p$ and $q$ to be positive. Thus we have a noncompact HCFT in this case.
As mentioned in the introduction, the moduli space $\mathfrak M_{g,p+q}$ of connected genus $g$ Riemann surfaces with $p+q$ embedded holomorphic discs is homotopy equivalent to the classifying space $B\Gamma_{g,p+q}$ of the mapping class group $\Gamma_{g,p+q}$ when $p+q\ge1$. In this paper, we will mostly be working with mapping class groups rather than moduli spaces of Riemann surfaces. Thus, we briefly describe some structures of the category $\mathcal H_*\mathcal C$, including compositions of morphisms and actions of symmetric groups, in terms of surface diffeomorphisms and mapping class groups.
For $i=1,2$, let $S_i$ be a smooth oriented (not necessarily connected) surface of genus $g_i$ with $p_i$ incoming and $q_i$ outgoing parametrized boundaries. Let $\text{Diff}^+(S_i,\partial)$ be the topological group of orientation preserving diffeomorphisms of $S_i$ fixing boundaries pointwise, and let $\Gamma(S_i)=\pi_0\bigl(\text{Diff}^+(S_i,\partial)\bigr)$ be the mapping class group of $S_i$. Suppose the number $q_1$ of outgoing boundaries of $S_1$ is the same as the number $p_2$ of incoming boundaries of $S_2$. Then we can sew two surfaces together to obtain a surface $S_2\#S_1$. Since self-diffeomorphisms of $S_1$ and $S_2$ can be combined together to a self-diffeomorphism of $S_2\#S_1$, we get a homomorphism $\circ: \Gamma(S_2)\times\Gamma(S_1) \rightarrow \Gamma(S_2\#S_1)$, which induces a map of their classifying spaces $\circ: B\Gamma(S_2)\times B\Gamma(S_1) \rightarrow B\Gamma(S_2\#S_1)$. The induced homology homomorphism is the composition of morphisms in the category $\mathcal H_*\mathcal C$.
Next we explain that $H_*(B\Gamma_{g,n})$ has a natural right $\Sigma_n$ action. In particular, $H_*(B\Gamma_{g,p+q})$ has a natural $\Sigma_p\times\Sigma_q$ action. To see this, let $F_{g,n}$ be a connected oriented smooth surface of genus $g$ with $n$ boundaries with parametrization given by $\phi=(\phi_1,\phi_2,\dots,\phi_n):\coprod^n S^1 \xrightarrow{\cong}\partial F_{g,n}$. Let $D_{\partial}$ be the topological group of orientation preserving diffeomorphisms fixing boundaries pointwise, and let $D_{\Sigma_n}$ be the topological group of orientation preserving diffeomorphisms $f$ permuting parametrized boundaries in the sense that $f\circ \phi_i=\phi_{\tau(i)}$ for all $1\le i\le n$ and for some permutation $\tau\in \Sigma_n$.
The group $D_{\partial}$ is a normal subgroup of $D_{\Sigma_n}$ with the quotient $\Sigma_n$. Let $\widetilde{\Gamma}_{g,n}=\pi_0(D_{\Sigma_n})$. Then we have exact sequences of groups:
\begin{align*}
1 \longrightarrow D_{\partial} \longrightarrow & D_{\Sigma_n} \longrightarrow \Sigma_n \longrightarrow 1, \\
1 \longrightarrow \Gamma_{g,n} \longrightarrow & \widetilde{\Gamma}_{g,n}
\longrightarrow \Sigma_n \longrightarrow 1.
\end{align*}
Let $ED_{\Sigma_n} \rightarrow BD_{\Sigma_n}$ be the universal bundle where the group $D_{\Sigma_n}$ acts freely on $ED_{\Sigma_n}$ from the right. Since $n\ge1$, the natural projection $D_{\partial} \rightarrow \pi_0(D_{\partial})=\Gamma_{g,n}$ is a homotopy equivalence \cite{ES}, and hence we have a homotopy equivalence $BD_{\partial}=ED_{\Sigma_n}/D_{\partial}\simeq B\Gamma_{g,n}$. Since $\Sigma_n$ acts freely on $BD_{\partial}$ from the right, it acts on its homology $H_*(BD_{\partial})\cong H_*(B\Gamma_{g,n})$ from the right.
There is another way to view the $\Sigma_n$ action on the homology $H_*(B\Gamma_{g,n})=H_*(\Gamma_{g,n})$. This point of view is more relevant for the next section. For each $\tau\in\Sigma_n$, choose a diffeomorphism $f_{\tau}\in D_{\Sigma_n}$ whose restriction to the boundaries gives the permutation $\tau$. Such an $f_{\tau}$ is unique only up to $D_{\partial}$. Conjugation by $f_{\tau}$ from the right induces an automorphism of $D_{\partial}$, hence of its group of connected components $\Gamma_{g,n}$. This automorphism of $\Gamma_{g,n}$ can depend on the choice of $f_{\tau}$, but only up to inner automorphisms. Since an inner automorphism of the group $\Gamma_{g,n}$ induces the identity on the homology of $\Gamma_{g,n}$ (see for example \cite{B}, page 48), the conjugation action of $f_{\tau}$ on the homology $H_*(\Gamma_{g,n})$ depends only on $\tau\in\Sigma_n$.
These two actions of $\Sigma_n$ on $H_*(B\Gamma_{g,n})$ are in fact the same.
By regarding a $\mathbb Z[\widetilde{\Gamma}_{g,n}]$ free resolution of $\mathbb Z$ as a free resolution of $\mathbb Z[\Gamma_{g,n}]$ and considering its geometric realization, we can easily see that this conjugation action of $\Sigma_n$ on $H_*(\Gamma_{g,n})$ from the right coincides with the one induced by the $\Sigma_n$ free action above on the classifying space $BD_{\partial}$ from the right.
\begin{remark}\label{trivial S_n action}
It turns out that the $\Sigma_n$ action on $H_k(\Gamma_{g,n})$ is trivial for $g\ge 2k$ (see Lemma 3.3 in \cite{BT}). This is a consequence of the Harer-Ivanov stability theorem. Since this fact is relevant in the next section, we explain the reason. For $g\ge 2k$ we have a surjective map $H_k(\Gamma_{g,1}) \to H_k(\Gamma_{g,n})$ by Ivanov's reformulation of the stability theorem stated in Theorem \ref{Ivanov stability}. For any element $x\in H_k(\Gamma_{g,n})$, let $y$ be a cycle in the bar complex of $\text{Diff}^+(F_{g,1})$ representing $x$. The surface $F_{g,n}$ can be decomposed as $F_{g,n}=F_{g,1}\#F_{0,n+1}$. For any $\tau\in\Sigma_n$, let $f_{\tau}$ be a diffeomorphism of $F_{0,n+1}$ which induces the permutation $\tau$ on the $n$ boundaries not used for sewing with $F_{g,1}$ and which is the identity on the boundary used for sewing. Since the diffeomorphisms appearing in the expression of the cycle $y$ and $f_{\tau}$ have disjoint supports on $F_{g,n}$, the conjugation action of $f_{\tau}$ on $y$ is trivial. Hence the action of $\Sigma_n$ on $H_k(\Gamma_{g,n})$ is trivial. This $\Sigma_n$-invariance in the stable range has an interesting consequence in terms of HCFT. Namely, in the stable range $g\ge 2k$, any element $x\in H_k(\Gamma_{g,p+q})$ defines a $\Sigma_p\times \Sigma_q$-invariant operation $\mathcal F(x): A_*^{\otimes p} \longrightarrow A_*^{\otimes q}$. Thus we have an operation between symmetric powers of $A_*$:
\begin{equation*}
\mathcal F(x):S^p(A_*) \longrightarrow S^q(A_*),
\end{equation*}
where the first $S^p(A_*)$ is the symmetric quotient of $A_*^{\otimes p}$, and the second $S^q(A_*)$ is the $\Sigma_q$-invariants in $A_*^{\otimes q}$.
\end{remark}
Our final remark in this section is to point out that $\Sigma_p\times\Sigma_q$-equivariance of the closed string topology operation \eqref{string operation} and \eqref{cohomology string operation} essentially comes from the following strictly commutative diagram, where $(\sigma,\tau)\in\Sigma_p\times\Sigma_q$, and $F$ is a surface with $p+q$ parametrized boundaries. Here, the left and the middle horizontal maps are induced by restriction to incoming or outgoing boundaries of $F$.
\begin{equation*}
\begin{CD}
BD_{\partial}\!\times \!(LM)^p @<<< ED_{\Sigma_n}\!\!\underset{D_{\partial}}\times\!\!\text{Map}(F,M) @>>> BD_{\partial}\!\times \!(LM)^q @>>> (LM)^q \\
@V{\cong}V{(\sigma, \tau)\times \sigma}V @V{\cong}V{(\sigma,\tau)}V @V{\cong}V{(\sigma,\tau)\times\tau}V @V{\cong}V{\tau}V \\
BD_{\partial}\!\times \!(LM)^p @<<< ED_{\Sigma_n}\!\!\underset{D_{\partial}}\times\!\!\text{Map}(F,M) @>>> BD_{\partial}\!\times \!(LM)^q @>>> (LM)^q
\end{CD}
\end{equation*}
Note that $BD_{\partial}$ as well as the second space from the left admit free actions of the entire symmetric group $\Sigma_{p+q}$.
\bigskip
\section{Vanishing theorem for closed string topology (I)}\label{proof of theorem}
We carefully examine Harer's Stability Theorem from the HCFT point of view. For this purpose, we fix a connected smooth oriented surface $F_{g,p+q}$ for each $g\ge0$ and for each $p,q$ with $p+q\ge1$. Let $T=F_{1,1+1}$ be a torus with one incoming and one outgoing parametrized boundary. The surface resulting from sewing $T$ to the $i$-th incoming boundary of $F_{g,p+q}$ is denoted by $F_{g,p+q}\#_iT$ for $1\le i\le p$. Similarly, when we sew $T$ to the $j$-th outgoing boundary of $F_{g,p+q}$, the resulting surface is denoted by $T\#_jF_{g,p+q}$ for $1\le j\le q$. These are genus $g+1$ surfaces with $p+q$ boundaries, and there exist orientation preserving diffeomorphisms $h_i$ and $h_j$ from these surfaces to $F_{g+1,p+q}$:
\begin{equation*}
F_{g,p+q}\#_iT \xrightarrow[\cong]{h_i} F_{g+1,p+q} \xleftarrow[\cong]{h_j} T\#_jF_{g,p+q}.
\end{equation*}
Both $h_i$ and $h_j$ are determined up to post-compositions with elements in $D_{g+1,\partial}=\text{Diff}^+(F_{g+1,p+q},\partial)$.
Since both diffeomorphisms $f\in\text{Diff}^+(F_{g,p+q},\partial)$ and $g\in\text{Diff}^+(T,\partial)$ fix boundaries, they can be glued along a boundary to obtain a diffeomorphism $f\# g\in\text{Diff}^+(F_{g+1,p+q},\partial)$. By taking their isotopy classes, we obtain a homomorphism of groups and an associated homomorphism in homology:
\begin{align*}
\Phi_i &:\Gamma_{g,p+q}\times\Gamma_{1,1+1} \longrightarrow \Gamma_{g+1,p+q}, \\
(\Phi_i)_* &: H_*(\Gamma_{g,p+q})\otimes H_*(\Gamma_{1,1+1}) \longrightarrow H_*(\Gamma_{g+1,p+q}),
\end{align*}
given by $\Phi_i([f],[g])=[h_i\circ(f\#g)\circ h_i^{-1}]$. Since different choices of $h_i$ differ by elements of $D_{g+1,\partial}$, and since every inner automorphism induces identity on homology, the homology homomorphism $(\Phi_i)_*$ depends only on $i$. Similarly, if we glue the torus $T$ to $j$-th outgoing boundary of $F_{g,p+q}$, we obtain homomorphisms
\begin{align*}
\Psi_j&: \Gamma_{1,1+1}\times \Gamma_{g,p+q} \longrightarrow \Gamma_{g+1,p+q},\\
(\Psi_j)_* &: H_*(\Gamma_{1,1+1})\otimes H_*(\Gamma_{g,p+q}) \longrightarrow H_*(\Gamma_{g+1,p+q}).
\end{align*}
Let $\varphi_i: \Gamma_{g,p+q} \rightarrow \Gamma_{g+1,p+q}$ be given by $\varphi_i(z)=\Phi_i(z,1)$ for $z\in \Gamma_{g,p+q}$, where $1\in\Gamma_{1,1+1}$ is the unit. Similarly we let $\psi_j: \Gamma_{g,p+q} \rightarrow \Gamma_{g+1,p+q}$ be defined by $\psi_j(z)=\Psi_j(1,z)$. The induced homology maps $(\varphi_i)_*$ and $(\psi_j)_*$ for $1\le i\le p$ and $1\le j\le q$ are Harer's stabilizing maps. These stabilizing maps depend on $i,j$, but only up to $\Sigma_{p+q}$-equivariance, as we show next.
First, note that the mapping class group $\Gamma_{g,p+q}$ does not distinguish between incoming and outgoing boundaries. Thus any statement on homology of mapping class groups before applying HCFT functor must be independent of the distinction between incoming and outgoing boundaries. So for convenience, for $1\le j\le q$ let $\varphi_{p+j}=\psi_j$, and we write $T\#_jF_{g,p+q}$ as $F_{g,p+q}\#_jT$ for uniformity of notation.
\begin{proposition}\label{transposition}
For $1\le i, j\le p+q$, the homomorphisms $(\varphi_i)_*$ and $(\varphi_j)_*$ are related by the right action of the transposition $\tau_{ij}\in\Sigma_{p+q}$ as in the following diagram\textup{:}
\begin{equation}\label{transposition diagram}
\begin{CD}
H_k(B\Gamma_{g,p+q}) @>{(\varphi_i)_*}>> H_k(B\Gamma_{g+1,p+q}) \\
@V{\cong}V{\cdot\tau_{ij}}V @V{\cong}V{\cdot\tau_{ij}}V \\
H_k(B\Gamma_{g,p+q}) @>{(\varphi_j)_*}>> H_k(B\Gamma_{g+1,p+q}).
\end{CD}
\end{equation}
In the stable range $g\ge 2k$, the action of $\Sigma_{p+q}$ is trivial and all stabilizing homomorphisms agree\textup{:} $(\varphi_i)_*=(\varphi_j)_*$ for $1\le i,j\le p+q$.
\end{proposition}
\begin{proof} As before, we choose diffeomorphisms $h_i$ and $h_j$ to the surface $F_{g+1,p+q}$ as in the following diagram:
\begin{equation*}
F_{g,p+q}\#_iT \xrightarrow[\cong]{h_i} F_{g+1,p+q} \xleftarrow[\cong]{h_j} F_{g,p+q}\#_jT.
\end{equation*}
Choose a diffeomorphism $u_{ij}$ of $F_{g,p+q}$ which switches the $i$-th and the $j$-th boundaries, and which fixes the other boundaries pointwise. The map $u_{ij}$ is unique up to pre- and post-composition with elements of $\text{Diff}^+(F_{g,p+q},\partial)$. The map $u_{ij}$ induces a diffeomorphism $u_{ij}\#1: F_{g,p+q}\#_iT \xrightarrow{\cong} F_{g,p+q}\#_jT$ which is the identity on $T$ and switches the $i$-th and the $j$-th boundaries. Now the two diffeomorphisms $h_j\circ(u_{ij}\#1)$ and $h_i$ differ by a self-diffeomorphism $v_{ij}$ of $F_{g+1,p+q}$ switching the $i$-th and the $j$-th boundaries. Thus we have $h_j\circ(u_{ij}\#1)=v_{ij}\circ h_i$. Since $h_i$ and $h_j$ are unique up to post-composition by elements of $\text{Diff}^+(F_{g+1,p+q},\partial)$, the map $v_{ij}$ is unique up to pre- and post-composition with elements of $\text{Diff}^+(F_{g+1,p+q},\partial)$. Since $\varphi_i([f])=[h_i\circ(f\#1)\circ h_i^{-1}]$ for $[f]\in\Gamma_{g,p+q}$ for all $i$, it is straightforward to check the commutativity of the following diagram:
\begin{equation*}
\begin{CD}
\Gamma_{g,p+q} @>{\varphi_i}>> \Gamma_{g+1,p+q} \\
@V{[u_{ij}]\circ(\ )\circ [u_{ij}]^{-1}}V{\cong}V
@V{\cong}V{[v_{ij}]\circ(\ )\circ [v_{ij}]^{-1}}V \\
\Gamma_{g,p+q} @>{\varphi_j}>> \Gamma_{g+1,p+q}.
\end{CD}
\end{equation*}
As observed earlier, the elements $[u_{ij}]$ and $[v_{ij}]$ are unique up to pre- and post-composition with elements of $\Gamma_{g,p+q}$ and of $\Gamma_{g+1,p+q}$, respectively. Thus, at the homology level, they induce well-defined maps, namely the action by the transposition $\tau_{ij}\in\Sigma_{p+q}$, and the commutative diagram \eqref{transposition diagram} in homology follows.
When we are in the stable range $g\ge 2k$, by Remark \ref{trivial S_n action} the action of the symmetric group $\Sigma_{p+q}$ is trivial. Thus, all the stabilizing maps $(\varphi_i)_*$ for $1\le i\le p+q$ are the same. This completes the proof.
\end{proof}
Let $\mathcal F:\mathcal H_*\mathcal C \rightarrow \mathcal Gr_*$ be a HCFT with $\mathcal F(1)=A_*$, and let $\mathcal F_0:\mathcal H_0\mathcal C \rightarrow \mathcal Gr_*$ be the associated TQFT obtained by restriction from $\mathcal F$. For $x\in H_*(B\Gamma_{g,p+q})$, we compare HCFT operations $\mathcal F(x)$ and $\mathcal F\bigl((\varphi_i)_*(x)\bigr):A_*^{\otimes p} \rightarrow A_*^{\otimes q}$ associated to $x$ and $(\varphi_i)_*(x)$, where the latter belongs to $H_*(B\Gamma_{g+1,p+q})$. Let $T=F_{1,1+1}$ be as before, and let $t=[T]\in H_0(B\Gamma_{1,1+1})\cong\mathbb Z$ be the generator. We also consider similar questions for $\psi_j$'s for $1\le j\le q$.
\begin{proposition}\label{stabilizing map}
For $1\le i\le p$ and $x\in H_k(B\Gamma_{g,p+q})$, we have
\begin{equation*}
\mathcal F\bigl((\varphi_i)_*(x)\bigr)=\mathcal F(x)\circ(1\otimes\cdots\otimes 1\otimes \mathcal F_0(t)\otimes 1\otimes \cdots\otimes 1): A_*^{\otimes p} \longrightarrow A_*^{\otimes q}.
\end{equation*}
Here $\mathcal F_0(t): A_* \rightarrow A_*$ is the TQFT operator associated to the torus $T$, inserted at the $i$-th position. For $\psi_j$ with $1\le j\le q$, the corresponding formula is
\begin{equation*}
\mathcal F\bigl((\psi_j)_*(x)\bigr)=(1\otimes\cdots\otimes 1\otimes \mathcal F_0(t)\otimes 1\otimes \cdots\otimes 1)\circ\mathcal F(x): A_*^{\otimes p} \longrightarrow A_*^{\otimes q},
\end{equation*}
where $\mathcal F_0(t)$ is inserted at the $j$-th position.
In the stable range $g\ge 2k$, the above two operations agree and define a map between symmetric powers\textup{:}
\begin{equation*}
\mathcal F\bigl((\varphi_i)_*(x)\bigr)=\mathcal F\bigl((\psi_j)_*(x)\bigr)
:S^p(A_*) \longrightarrow S^q(A_*),
\end{equation*}
for $1\le i\le p$ and $1\le j\le q$.
\end{proposition}
\begin{proof} The homology homomorphism $(\Phi_i)_*=\circ_i: H_*(B\Gamma_{g,p+q})\otimes H_*(B\Gamma_{1,1+1}) \rightarrow H_*(B\Gamma_{g+1,p+q})$ induced from the group homomorphism $\Phi_i$ is part of the composition of morphisms $\mathcal H_*\mathcal C(p,q)\otimes \mathcal H_*\mathcal C(p,p) \rightarrow \mathcal H_*\mathcal C(p,q)$ in the category $\mathcal H_*\mathcal C$. Thus, by the gluing property of the HCFT $\mathcal F$, for $x\in H_*(B\Gamma_{g,p+q})$ and $y\in H_*(B\Gamma_{1,1+1})$ we have
\begin{equation*}
\mathcal F(x\circ_i y)=\mathcal F(x)\circ(1\otimes\cdots\otimes\mathcal F(y)\otimes\cdots\otimes 1): A_*^{\otimes p} \longrightarrow A_*^{\otimes q},
\end{equation*}
where $\mathcal F(y)$ is at the $i$-th position. Since $\varphi_i:\Gamma_{g,p+q} \rightarrow \Gamma_{g+1,p+q}$ is given by $\varphi_i(z)=\Phi_i(z,1)$, the induced map on classifying spaces is given by
\begin{equation*}
B\varphi_i: B\Gamma_{g,p+q} \overset{\cong}\longrightarrow B\Gamma_{g,p+q}\times \{*\}
\longrightarrow B\Gamma_{g,p+q} \times B\Gamma_{1,1+1} \xrightarrow{B\Phi_i} B\Gamma_{g+1,p+q},
\end{equation*}
where $*\in B\Gamma_{1,1+1}$ is any point. Since $[*]=t=[T]\in H_0(B\Gamma_{1,1+1})$, for any element $x\in H_*(B\Gamma_{g,p+q})$, we have $(\varphi_i)_*(x)=(\Phi_i)_*(x\otimes t)=x\circ_i t$. Since $\mathcal F(t)=\mathcal F_0(t)$ by definition, we have the formula in the proposition for $\mathcal F\bigl((\varphi_i)_*(x)\bigr)$.
The proof for $\psi_j$'s is similar.
In the stable range, the action of $\Sigma_{p+q}$ is trivial by Remark \ref{trivial S_n action}, and we have operations on symmetric powers. By Proposition \ref{transposition}, elements $(\varphi_i)_*(x)$ and $(\psi_j)_*(x)$ for $x\in H_k(B\Gamma_{g,p+q})$ are the same in $H_k(B\Gamma_{g+1,p+q})$, and hence give rise to the same HCFT operation. This completes the proof.
\end{proof}
The commutative diagram in Proposition \ref{transposition} implies that for $x\in H_k(B\Gamma_{g,p+q})$, we have $\mathcal F\bigl((\varphi_i)_*(x)\cdot\tau_{ij}\bigr)=\mathcal F\bigl((\varphi_j)_*(x\cdot\tau_{ij})\bigr)$. When $1\le i,j\le p$, this formula can be verified directly from Proposition \ref{stabilizing map}, using the $\Sigma_p$-equivariance of string operations, as follows. For $a_1,a_2,\dots,a_p\in A_*$, we have
\begin{multline*}
\mathcal F\bigl((\varphi_i)_*(x)\cdot\tau_{ij}\bigr)(a_1\otimes a_2\otimes \cdots\otimes a_p)
=\mathcal F\bigl((\varphi_i)_*(x)\bigr)
\bigl(\tau_{ij}(a_1\otimes\cdots\otimes a_p)\bigr)\\
=(-1)^{\varepsilon}\mathcal F(x)(a_1\otimes\cdots\otimes \mathcal F_0(t)a_j\otimes\cdots\otimes a_i\otimes\cdots\otimes a_p),
\end{multline*}
where $\mathcal F_0(t)a_j$ is at the $i$-th position and $a_i$ is at the $j$-th position, and the sign $(-1)^{\varepsilon}$ is given by
\begin{equation*}
\varepsilon=|a_i||a_j|+(|a_i|+|a_j|)(|a_{i+1}|+\cdots+|a_{j-1}|)+|\mathcal F_0(t)|(|a_1|+\cdots+|a_{i-1}|).
\end{equation*}
On the other hand,
\begin{multline*}
\mathcal F\bigl((\varphi_j)_*(x\cdot\tau_{ij})\bigr) (a_1\otimes\cdots\otimes a_p)=\mathcal F(x\cdot\tau_{ij})(a_1\otimes\cdots \otimes a_i\otimes\cdots\otimes\mathcal F_0(t)a_j\otimes\cdots\otimes a_p)\\
=(-1)^{\varepsilon}\mathcal F(x)(a_1\otimes\cdots\otimes \mathcal F_0(t)a_j\otimes\cdots\otimes a_i\otimes\cdots\otimes a_p),
\end{multline*}
with the same $\varepsilon$ as above, where in the first line, $\mathcal F_0(t)a_j$ is at the $j$-th position, and in the second line $\mathcal F_0(t)a_j$ is at the $i$-th position.
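As a sanity check in the smallest case ($p=2$, $i=1$, $j=2$; our own illustration, not from the references): the middle and last sums in $\varepsilon$ are then empty, and both computations reduce to
\begin{equation*}
\mathcal F\bigl((\varphi_1)_*(x)\cdot\tau_{12}\bigr)(a_1\otimes a_2)
=(-1)^{|a_1||a_2|}\,\mathcal F(x)\bigl(\mathcal F_0(t)a_2\otimes a_1\bigr)
=\mathcal F\bigl((\varphi_2)_*(x\cdot\tau_{12})\bigr)(a_1\otimes a_2),
\end{equation*}
the usual graded transposition sign followed by insertion of $\mathcal F_0(t)$ in the first slot.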
The corresponding formulas involving $\psi_j$'s can be checked similarly, using the $\Sigma_q$-equivariance of the HCFT operations. In the mixed case, for $1\le i\le p$ and $1\le j\le q$, by Proposition \ref{transposition} we have $\mathcal F\bigl((\varphi_i)_*(x)\cdot\tau_{ij}\bigr)=\mathcal F\bigl((\psi_j)_*(x\cdot\tau_{ij})\bigr)$ for $\tau_{ij}\in\Sigma_{p+q}$. Since HCFT operations are only $\Sigma_p\times \Sigma_q$-equivariant, we cannot go any further in this case, although in the stable range we can of course eliminate $\tau_{ij}$ from this formula.
If the HCFT $\mathcal F$ is defined for the object $p=0$, and a unit $1\in A_*$ with respect to the product structure $\mathcal F_0(m):
A_*^{\otimes 2} \rightarrow A_*$ exists, then the homomorphism $\mathcal F_0(t):A_* \rightarrow A_*$ is simply given by multiplication by the element $\xi=\mathcal F_0(t)(1)\in A_*$. This is because capping one of the incoming boundaries of $F_{1,2+1}$ gives the surface $F_{1,1+1}=F_{1,0+1}\#F_{0,2+1}$. In the following commutative diagram, in which the vertical homomorphisms are induced by capping $p$ incoming boundaries with discs,
\begin{equation*}
\begin{CD}
H_*(\Gamma_{0,p+1}) @>>> H_*(\Gamma_{g,p+1}) \\
@VVV @VVV \\
H_*(\Gamma_{0,1})=0 @>>> H_*(\Gamma_{g,1}),
\end{CD}
\end{equation*}
the right vertical homomorphism is an isomorphism for sufficiently large genus $g$ by Harer's stability theorem. Since composing the top map with this isomorphism factors through the trivial group $H_*(\Gamma_{0,1})=0$ (as $\Gamma_{0,1}=1$), the top map is the zero homomorphism. Using this observation, Tillmann \cite{Til} showed that if $A_*$ supports a HCFT with unit and $\xi=\mathcal F_0(t)(1)\in A_*$ satisfies $\xi^n\not=0$ for all $n\ge1$, then the Batalin-Vilkovisky algebra structure on $A_*$ becomes trivial after localizing to $A_*[\xi^{-1}]$. Note that if $\xi$ is nilpotent, with $\xi^m=0$ for some $m\ge1$, then $A_*[\xi^{-1}]=0$, and the situation is trivial.
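Indeed, the last claim is immediate: if $\xi^m=0$ for some $m\ge1$, then in the localization
\begin{equation*}
1=\xi^{-m}\xi^m=\xi^{-m}\cdot 0=0 \quad\text{in } A_*[\xi^{-1}],
\end{equation*}
so every element of $A_*[\xi^{-1}]$ vanishes.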
The situation we deal with is complementary to the above situation. Recall that $T$ denotes a Riemann surface with one incoming and one outgoing embedded discs, and $t=[T]\in H_0(B\Gamma_{1,1+1})\cong\mathbb Z$ is a generator.
\begin{proposition} \label{trivial HCFT operations}
Let $\mathcal F:\mathcal H_*\mathcal C \rightarrow \mathcal Gr_*$ be a homological conformal field theory with $\mathcal F(1)=A_*$, and let $\mathcal F_0=\mathcal F|_{H_0}$ be the associated topological quantum field theory. If $\mathcal F_0(t)=0: A_* \rightarrow A_*$, then operations $\mathcal F(x)$ associated to elements $x\in H_*(B\Gamma_{g,p+q})$ in the image of any stabilizing maps
\begin{equation*}
(\varphi_i)_*, \ (\psi_k)_*: H_*(B\Gamma_{g-1,p+q}) \longrightarrow H_*(B\Gamma_{g,p+q}),\qquad 1\le i\le p, \quad 1\le k\le q,
\end{equation*}
are trivial.
\end{proposition}
\begin{proof}
This is a direct consequence of the formula in Proposition \ref{stabilizing map}.
\end{proof}
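Explicitly, unwinding Proposition \ref{stabilizing map} with $g$ replaced by $g-1$: if $\mathcal F_0(t)=0$, then for $x\in H_*(B\Gamma_{g-1,p+q})$ and $1\le i\le p$,
\begin{equation*}
\mathcal F\bigl((\varphi_i)_*(x)\bigr)=\mathcal F(x)\circ(1\otimes\cdots\otimes \mathcal F_0(t)\otimes\cdots\otimes 1)=0,
\end{equation*}
since a tensor product of operators with one zero factor is zero; the case of $(\psi_k)_*$ is the same, with post-composition instead.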
\begin{proof}[Proof of Vanishing Theorem \textup{(I)}] If $M$ is a finite dimensional smooth oriented closed manifold, then Cohen-Godin \cite{CG} and Godin \cite{Go2} showed that $\mathbb H_*(LM)$ supports a structure of HCFT with positive boundary ($q\ge1$). Previously we showed that all higher genus topological quantum field theory operations in closed string topology are trivial \cite{T3}, \cite{T5}. In particular $\mathcal F_0(t)=0$ for the genus one case as an operator on $\mathbb H_*(LM)$. Hence by Proposition \ref{trivial HCFT operations}, all string operations associated to images of stabilizing maps are trivial.
A proof for part (II) is given in the next section.
\end{proof}
\bigskip
\section{Vanishing theorem for closed string topology (II)} \label{vanishing theorem (II)}
If $M$ is a simply connected infinite dimensional manifold with finite dimensional cohomology $H^*(\Omega M;k)$ for its based loop space with coefficients in a field $k$ of an arbitrary characteristic, then Chataur and Menichi \cite{CM} showed that $H^*(LM;k)$ carries the structure of a noncompact HCFT requiring $p,q\ge1$. We will show that the genus $1$ topological quantum field theory operator vanishes, $\mathcal F_0(t)=0$, in this noncompact HCFT. We also show that the Serre spectral sequence for the fibration $p:LM\to M$ collapses and that the coproduct map in $H^*(LM;k)$ vanishes if $H^*(\Omega M;k)$ is an exterior algebra on odd degree generators, which is the case when $H^*(\Omega M;\mathbb Z)$ has no torsion elements of order divisible by the characteristic of the field $k$ (Corollary \ref{trivial coproduct}). Again, by Proposition \ref{trivial HCFT operations}, all string operations associated to elements in the images of stabilizing maps are trivial.
The main point here is that the relevant transfer maps, integrations along the fiber, can be defined because the fiber $\Omega M$ of the fibrations $f_{\text{out}}$, $g_{\text{out}}$, and $\overline{g}$ in \eqref{fibration diagram} and \eqref{fibration diagram 2} below behaves like a finite dimensional oriented manifold. In particular, applying the transfer map $f_{\text{out}}^!$ is essentially equivalent to taking the Poincar\'e dual of the Pontrjagin product map in $\Omega M$. Thus if the cohomology of $LM$ can be written as a tensor product $H^*(LM;k)=H^*(M;k)\otimes H^*(\Omega M;k)$ (which is the case when $H^*(\Omega M;k)$ is an exterior algebra, by Theorem \ref{cohomology of LM} below), then applying partial Poincar\'e duality along the cohomologically finite dimensional fiber gives $H^*(M;k)\otimes H_*(\Omega M;k)$, in which the cohomology loop product is given by the cup product in $H^*(M;k)$ and the Pontrjagin product in $H^*(\Omega M;k)$. In the context of the previous section, the transfer maps used in the construction of string operations can be defined because the manifold $M$ itself is finite dimensional. If the homology of $LM$ can be written as a tensor product $H_*(LM)=H_*(M)\otimes H_*(\Omega M)$, then applying partial Poincar\'e duality along the finite dimensional base $M$, we get $H^*(M)\otimes H_*(\Omega M)$, which is the loop homology algebra $\mathbb H_*(LM)$ for the finite dimensional $M$. Thus, in this sense, the Chataur-Menichi construction of a noncompact HCFT on the cohomology $H^*(LM;k)$ produces none other than loop homology in the case of an infinite dimensional simply connected space $M$.
First we briefly recall the TQFT product $\mu$ and coproduct $\Phi$ in $H^*(LM;k)$ defined in \cite{CM}. Suppose the finite dimensional connected commutative Hopf algebra $H^*(\Omega M;k)$ is concentrated in degrees $0\le *\le d$. In this case, by the Hopf-Borel theorem on the structure of Hopf algebras (\cite{MT}, Theorem 1.3 and Corollary 1.4 in Chapter 7), it must have one of the following forms as an algebra, where $p$ is the characteristic of the field $k$:
\begin{enumerate}
\item[(i)] When $p=0$, $H^*(\Omega M;k)\cong \Lambda_k(x_1,x_2,\dots,x_{\ell})$, where $|x_i|$ is odd.
\item[(ii)] When $p=2$, $H^*(\Omega M;k)\cong \bigotimes_{i=1}^rk[y_i]/(y_i^{2^{f_i}})$.
\item[(iii)] When $p\not=0,2$, $H^*(\Omega M;k)\cong \Lambda_k(x_1,x_2,\dots,x_{\ell})\otimes \bigotimes_{j=1}^m\bigl(k[y_j]/(y_j^{p^{f_j}})\bigr)$, where $|x_i|$ is odd and $|y_j|$ is even.
\end{enumerate}
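A minimal example to keep in mind (our illustration, not from \cite{CM}): for the simply connected infinite dimensional space $M=\mathbb{CP}^{\infty}=K(\mathbb Z,2)$ we have $\Omega M\simeq S^1$, so
\begin{equation*}
H^*(\Omega M;k)\cong \Lambda_k(x_1), \qquad |x_1|=1, \qquad d=1,
\end{equation*}
which is of form (i) (or (iii) with no truncated polynomial part) when $p\ne2$, and of form (ii) with $r=1$, $f_1=1$ when $p=2$.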
In particular, $H^*(\Omega M;k)$ is a Poincar\'e duality algebra with an orientation class $[\Omega M]\in H^d(\Omega M;k)$ in the top degree. The cohomology loop product and loop coproduct maps
\begin{align*}
\Phi&: H^*(LM;k) \longrightarrow H^*(LM;k)\otimes H^*(LM;k),\\
\mu&: H^*(LM;k)\otimes H^*(LM;k) \longrightarrow H^*(LM;k),
\end{align*}
are homomorphisms of degree $-d$ defined as follows. Let $F=F_{0,2+1}$ be a pair of pants with two incoming boundaries and one outgoing boundary. Restriction to the boundaries gives rise to two fibrations:
\begin{equation}\label{two fibrations}
\begin{CD}
LM\times LM @<{g_{\text{out}}}<< \text{Map}\,(F,M) @>{g_{\text{in}}}>> LM.
\end{CD}
\end{equation}
Here we have switched the words ``in'' and ``out'', since in the cohomology formulation the arrows are reversed, as in \eqref{cohomology string operation}. Since the surface $F$ is homotopy equivalent to a graph \begin{tikzpicture}\draw (0,0) arc (0:180:0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035);\end{tikzpicture} with an appropriate orientation, by replacing $\text{Map}(F,M)$ with the homotopy equivalent space $\text{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035);\end{tikzpicture},M)$, we obtain the following commutative diagram, where the square is a pull-back diagram of the fibrations $g_{\text{out}}$ and $\overline{g}$ with fiber $\Omega M$:
\begin{equation}\label{fibration diagram}
\begin{CD}
LM\times LM @<{g_{\text{out}}}<< \text{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180:0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035);\end{tikzpicture},M) @>{g_{\text{in}}}>> LM \\
@V{p\times p}VV @V{q}VV @. \\
M\times M @<{\overline{g}}<< \text{Map}(I,M). @.
\end{CD}
\end{equation}
The map $q$ above is the restriction to the interval between the two circles, and the bottom map $\overline{g}$ is the evaluation map at the end points of the unit interval $I=[0,1]$. The map $g_{\text{in}}$ is defined by an onto map $S^1 \to \begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035);\end{tikzpicture}$ which maps the base point of $S^1$ to one of the vertices of the graph, traces each circle of the graph once, and traces the middle interval twice in opposite directions. See the description of the map $f_1$ in \eqref{correspondence maps} for details. Since $M\times M$ is simply connected ($M$ being simply connected by hypothesis), the map $\overline{g}: \text{Map}(I,M) \longrightarrow M\times M$ is an oriented fibration with fiber $\Omega M$. Namely, $\pi_1(M\times M)$ acts trivially on the orientation class $[\Omega M]\in H^d(\Omega M;k)$. Consequently, the pull-back fibration $g_{\text{out}}: \text{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035);\end{tikzpicture},M) \longrightarrow LM\times LM$ is also an oriented fibration with fiber $\Omega M$.
Then we can consider the following transfer maps $g_{\text{out}}^!$ and $\overline{g}^!$ of degree $-d$, both integrations along the fiber:
\begin{align*}
g_{\text{out}}^!&: H^*\bigl(\text{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035);\end{tikzpicture},M);k\bigr) \longrightarrow H^*(LM;k)\otimes H^*(LM;k),\\
\overline{g}^!&: H^*\bigl(\text{Map}(I,M);k\bigr)=H^*(M;k) \longrightarrow H^*(M;k)\otimes H^*(M;k).
\end{align*}
The coproduct map $\Phi$ in $H^*(LM;k)$ is defined in terms of the transfer map by
\begin{equation*}
\Phi: H^*(LM;k) \xrightarrow{g_{\text{in}}^*} H^*\bigl(\text{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035);\end{tikzpicture},M);k\bigr) \xrightarrow{g_{\text{out}}^!} H^*(LM;k)\otimes H^*(LM;k).
\end{equation*}
To understand these transfer maps, we recall the Serre spectral sequence description of the integration along the fiber in a general form. Let $p:E \longrightarrow B$ be an oriented fibration with connected fiber $F$ such that the cohomology $H^*(F;k)$ with coefficients in a field $k$ is finite dimensional with a top degree orientation class $[F]\in H^d(F;k)\cong k$. Then the integration along the fiber $p^!$ is given by the following composition of maps, and lowers the cohomological degree by $d$:
\begin{equation}\label{integration along fiber}
p^!: H^{n+d}(E;k)=D^{n,d} \xrightarrow{\text{onto}} E^{n,d}_{\infty}\subset E^{n,d}_2=H^n\bigl(B;H^d(F;k)\bigr) \xleftarrow[\cong]{[F]} H^n(B;k),
\end{equation}
where $H^{n+d}(E)=D^{0,n+d}\supset\cdots\supset D^{n,d}\supset D^{n+1,d-1}\supset\cdots\supset D^{n+d,0}\supset 0$ is a filtration on $H^{n+d}(E;k)$. Since $H^q(F;k)=0$ for $q>d$, we have $E_{\infty}^{n+d-q,q}=0$ for $q>d$. Thus, we have $H^{n+d}(E;k)=D^{0,n+d}=\cdots=D^{n,d}$, as in \eqref{integration along fiber}.
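For instance, for a trivial fibration $p:E=B\times F\rightarrow B$ (our illustration), the spectral sequence collapses, and under the K\"unneth identification $H^*(E;k)\cong H^*(B;k)\otimes H^*(F;k)$ the composition \eqref{integration along fiber} becomes
\begin{equation*}
p^!(b\otimes f)=
\begin{cases}
b & \text{if } f=[F],\\
0 & \text{if } \deg f<d,
\end{cases}
\end{equation*}
extended linearly: $p^!$ picks out the coefficient of the orientation class $[F]$ and kills the lower fiber degrees.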
Here is a simple and useful criterion for the vanishing of the transfer map $p^!$. This lemma is used in the proof of Theorem \ref{cohomology of LM} below.
\begin{lemma}\label{vanishing transfer} Let $p:E \longrightarrow B$ be an oriented fibration with connected fiber $F$ with an orientation class $[F]\in H^d(F;k)$ in the top degree. If $p^*:H^*(B;k) \longrightarrow H^*(E;k)$ is onto, then $p^!=0$.
\end{lemma}
\begin{proof} In terms of the Serre spectral sequence, the map $p^*$ is given by the following composition for an arbitrary $n$:
\begin{equation*}
p^*: H^n(B;k)\cong H^n\bigl(B;H^0(F;k)\bigr)=E_2^{n,0} \xrightarrow{\text{onto}} E_{\infty}^{n,0}=D^{n,0}\subset D^{0,n}=H^n(E;k).
\end{equation*}
Thus, if $p^*$ is onto, then we have $D^{n,0}=D^{0,n}$, which is equivalent to $E_{\infty}^{*,q}=D^{*,q}/D^{*+1,q-1}=0$ for all $q\ge1$. In particular, $E_{\infty}^{*,d}=0$. Thus, $p^!=0$. This completes the proof.
\end{proof}
Similarly, the integration along the fiber in homology can be defined by the following composition of maps:
\begin{equation}\label{homology transfer}
p_!:H_n(B;k)\cong H_n\bigl(B;H_d(F;k)\bigr)=E^2_{n,d} \xrightarrow{\text{onto}} E^{\infty}_{n,d}=D_{n,d}\subset D_{n+d,0}=H_{n+d}(E;k).
\end{equation}
We naturally expect that, with coefficients in a field $k$, the homology transfer $p_!$ and the cohomology transfer $p^!$ are dual to each other. Since we use this fact later, we quickly verify it.
\begin{lemma}\label{dual transfers} With the same hypothesis on the fibration $p:E\longrightarrow B$ as in Lemma \ref{vanishing transfer}, homology and cohomology transfer maps over a field $k$ are dual to each other, namely $(p_!)^*=p^!$.
\end{lemma}
\begin{proof} By comparing the cohomology and homology transfers given in \eqref{integration along fiber} and \eqref{homology transfer}, all we have to show is that the duals of the homology $E^{\infty}$-terms are isomorphic to the cohomology $E_{\infty}$-terms: $(E^{\infty}_{p,q})^*=E_{\infty}^{p,q}$. To see this, we recall the definition of the homology and cohomology filtrations $\{D_{*,*}\}$ and $\{D^{*,*}\}$. These filtrations are defined in terms of an increasing family of subchain complexes $\{A_p\}$ of $C_*(E)$ by
\begin{align*}
D_{p,q}&=\text{Im}\,[\iota_*:H_{p+q}(A_p;k) \longrightarrow H_{p+q}(E;k)]\\
D^{p,q}&=\text{Ker}\,[\iota^*:H^{p+q}(E;k) \longrightarrow H^{p+q}(A_{p-1};k)].
\end{align*}From this description, it is easy to see that homology and cohomology filtrations are related by
\begin{equation}\label{cohomology filtration}
D^{p,q}\cong\bigl(H_{p+q}(E;k)/D_{p-1,q+1}\bigr)^*.
\end{equation}
By taking the dual of the following exact sequence,
\begin{equation*}
0 \longrightarrow E^{\infty}_{p,q} \longrightarrow H_{p+q}(E;k)/D_{p-1,q+1} \longrightarrow H_{p+q}(E;k)/D_{p,q} \longrightarrow 0,
\end{equation*}
and using \eqref{cohomology filtration}, we see that the above sequence becomes $0 \gets (E^{\infty}_{p,q})^* \gets D^{p,q} \gets D^{p+1,q-1} \gets 0$. Hence $(E^{\infty}_{p,q})^*\cong E_{\infty}^{p,q}$. This completes the proof.
\end{proof}
\begin{remark} Using the descriptions of $p^!$ and $p^*$ given above in terms of spectral sequences, it is easy to see that $p^!\circ p^*=0$. Similarly, we can show that $p_*\circ p_!=0$.
\end{remark}
Next, we describe the cohomology loop product. By replacing a pair of pants $F_{0,1+2}$, with one incoming and two outgoing boundaries, by a homotopy equivalent graph \begin{tikzpicture} \draw (0,0) arc (0:360: 0.1) -- ++(- 0.2,0); \fill (0,0) circle ( 0.035); \fill (- 0.2,0) circle ( 0.035);\end{tikzpicture} with an appropriate orientation, we can replace the diagram \eqref{two fibrations} with the following one, where the square is a pull-back diagram of the fibrations $f_{\text{out}}$ and $\overline{g}$ with fiber $\Omega M$:
\begin{equation}\label{fibration diagram 2}
\begin{CD}
LM @<{f_{\text{out}}}<< \text{Map}(\begin{tikzpicture} \draw (0,0) arc (0:360: 0.1) -- ++(- 0.2,0); \fill (0,0) circle ( 0.035); \fill (- 0.2,0) circle ( 0.035);\end{tikzpicture},M) @>{f_{\text{in}}}>> LM\times LM \\
@V{(p_0,p_{\frac12})}VV @V{q}VV @. \\
M\times M @<{\overline{g}}<< \text{Map}(I,M). @.
\end{CD}
\end{equation}
Here $q$ is the restriction to the middle interval of the graph \begin{tikzpicture} \draw (0,0) arc (0:360: 0.1) -- ++(- 0.2,0); \fill (0,0) circle ( 0.035); \fill (- 0.2,0) circle ( 0.035);\end{tikzpicture}, $f_{\text{out}}$ is induced by the restriction to the outer circle, and $f_{\text{in}}$ is the restriction to boundaries of upper and lower half discs of the graph. See the description of maps $f_3$ and $f_4$ in \eqref{correspondence maps} for details.
The cohomology loop product map $\mu$ in $H^*(LM;k)$ of degree $-d$ is then defined by
\begin{equation*}
\mu: H^*(LM;k)\otimes H^*(LM;k) \xrightarrow{f_{\text{in}}^*} H^*\bigl(\text{Map}(\begin{tikzpicture} \draw (0,0) arc (0:360: 0.1) -- ++(- 0.2,0); \fill (0,0) circle ( 0.035); \fill (- 0.2,0) circle ( 0.035);\end{tikzpicture},M);k\bigr) \xrightarrow{f_{\text{out}}^!} H^{*-d}(LM;k).
\end{equation*}
The product map $\mu$ is in general nontrivial, but the coproduct map $\Phi$ is often trivial. We will discuss two cases in which $\Phi$ is trivial. Before that, we prove a general fact: the composition $\mu\circ\Phi$, the genus $1$ TQFT operator, is always trivial over any coefficient field $k$. This is exactly what is needed for the Vanishing Theorem in the introduction.
\begin{theorem}\label{vanishing of genus one map} Let $M$ be simply connected with finite dimensional $H^*(\Omega M;k)$. Then the genus $1$ TQFT operator associated to $F_{1,1+1}$ is trivial. Namely,
\begin{equation*}
\mu\circ\Phi=0: H^{*+2d}(LM;k) \xrightarrow{\Phi} H^{*+d}(LM;k)\otimes H^*(LM;k) \xrightarrow{\mu} H^*(LM;k).
\end{equation*}
\end{theorem}
\begin{proof} We consider the following composition diagram of correspondences for the product $\mu$ and the coproduct $\Phi$, with renamed maps for convenience:
\begin{equation}\label{composition}
\xymatrix{
LM & \text{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035); \end{tikzpicture}, M) \ar[l]_{f_1\ \ \ \ \ } \ar[r]^{\ \ f_2} & LM\times LM \\
& \text{Map}(\begin{tikzpicture} \draw (0,0) ellipse ( 0.1 and 0.035) ellipse ( 0.1 and 0.1); \fill ( 0.1,0) circle ( 0.035); \fill (- 0.1,0) circle ( 0.035);\end{tikzpicture}, M) \ar[u]_{f_3'} \ar[r]^{f_2'}
\ar[ul]^{f_5=f_1\circ f_3'\ \ } \ar[dr]_{f_6=f_4\circ f_2'\ \ }
& \text{Map}(\begin{tikzpicture} \draw (0,0) arc (0:360: 0.1) -- ++(- 0.2,0); \fill (0,0) circle ( 0.035); \fill (- 0.2,0) circle ( 0.035);\end{tikzpicture}, M) \ar[u]_{f_3} \ar[d]^{f_4} \\
& & LM
}
\end{equation}
The coproduct map and the product map are given by $\Phi=f_2^!\circ f_1^*$ and $\mu=f_4^!\circ f_3^*$, respectively.
We label elements in the mapping spaces $A\in \text{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035); \end{tikzpicture} ,M)$, $B\in \text{Map}(\begin{tikzpicture} \draw (0,0) arc (0:360: 0.1) -- ++(- 0.2,0); \fill (0,0) circle ( 0.035); \fill (- 0.2,0) circle ( 0.035);\end{tikzpicture} ,M)$, and $C\in \text{Map}(\begin{tikzpicture} \draw (0,0) ellipse ( 0.1 and 0.035) ellipse ( 0.1 and 0.1); \fill ( 0.1,0) circle ( 0.035); \fill (- 0.1,0) circle ( 0.035);\end{tikzpicture},M)$ by labeling arcs and vertices of the above graphs using image arcs and image vertices in $M$ as follows, where $x,y$ are points in $M$ and $\alpha,\beta,\gamma,\dots$ denote arcs in $M$. For example, $A$ below represents a map $A:\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035); \end{tikzpicture} \to M$ such that two circles of the graph are mapped to loops $\rho$ and $\sigma$ in $M$, and the middle interval is mapped to an arc $\eta$ from a point $x$ to $y$ in $M$. Here, the point $x$ plays the role of the base point of the images in all three cases.
\begin{center}
\begin{tikzpicture}[>=stealth]
\draw (0,0) arc (0:180:0.3) -- ++(-0.6,0) arc (0:360:0.3) -- ++(0.6,0) arc (180:360:0.3);
\fill (-1.2,0) circle ( 0.035);
\fill (-0.6,0) circle ( 0.035);
\path (-0.3,0.3) node[above] {$\sigma$};
\path (-1.5,0.3) node[above] {$\rho$};
\path (-0.9,0) node[above] {$\eta$};
\path (-0.6,0) node[right] {$y$};
\path (-1.2,0) node[left] {$x$};
\draw[->] (-0.85,0) -- ++(0.01,0);
\draw[->] (-0.35,0.3) -- ++(-0.01,0);
\draw[->] (-1.55,0.3) -- ++(-0.01,0);
\path (-1.9,0) node[left] {$A$:};
\path (2.5,0) coordinate (P1);
\draw (P1)++(0,0) arc (0:360:0.4) -- ++(-0.8,0);
\fill (P1)++(0,0) circle ( 0.035) ;
\fill (P1)++(-0.8,0) circle ( 0.035);
\path (P1)++(0,0) node[right] {$x$};
\path (P1)++(-0.8,0) node[left] {$y$};
\path (P1)++(-0.4,0.4) node[above] {$\alpha$};
\path (P1)++(-0.4,-0.53) node[above] {$\beta$};
\path (P1)++(-0.35,-0.08) node[above] {$\gamma$};
\draw [->] (P1)++(-0.45,0.4) -- ++(-0.01,0);
\draw [->] (P1)++(-0.35,-0.4) -- ++(0.01,0);
\draw[->] (P1)++(-0.35,0) -- ++(0.01,0);
\path (P1)++(-1.1,0) node[left] {$B$:};
\path (5,0) coordinate (P2);
\draw (P2)++(0,0) ellipse (0.5 and 0.3)
ellipse (0.5 and 0.7);
\fill (P2)++(0.5,0) circle ( 0.035);
\fill (P2)++(-0.5,0) circle ( 0.035);
\path (P2)++(0,0.7) node[above] {$\alpha$};
\path (P2)++(0,0.25) node[above] {$\eta$};
\path (P2)++(0,-0.32) node[above] {$\gamma$};
\path (P2)++(0,-0.82) node[above] {$\beta$};
\path (P2)++(0.5,0) node[right] {$x$};
\path (P2)++(-0.5,0) node[left] {$y$};
\draw[->] (P2)++(-0.1,0.7) -- ++(-0.01,0);
\draw[->] (P2)++(-0.1,0.3) -- ++(-0.01,0);
\draw[->] (P2)++(0.1,-0.3) -- ++(0.01,0);
\draw[->] (P2)++(0.1,-0.7) -- ++(0.01,0);
\path (P2)++(-0.8,0) node[left] {$C$:};
\path (2,-1) node[text width=11cm] {\textsc{Figure 1.} Descriptions of elements in three mapping spaces};
\end{tikzpicture}
\end{center}
In terms of this description, the top horizontal maps and the right vertical maps in the diagram \eqref{composition} are given in the following way:
\begin{equation}\label{correspondence maps}
f_1(A)=\rho\eta\sigma\eta^{-1}, \quad f_2(A)=(\rho, \sigma), \quad f_3(B)=(\alpha\gamma, \beta\gamma^{-1}), \quad f_4(B)=\alpha\beta.
\end{equation}
The map $f_2'$ forgets the arc $\eta$, and the map $f_3'$ maps $C$ to $A$ with $\rho=\alpha\gamma$ and $\sigma=\beta\gamma^{-1}$. Finally, the diagonal maps in \eqref{composition} are given by
\begin{equation*}
f_5(C)=\alpha\gamma\eta\beta\gamma^{-1}\eta^{-1}, \qquad f_6(C)=\alpha\beta.
\end{equation*}
In the diagram \eqref{composition}, the maps $f_2, f_2', f_4$ are fibrations with fiber $\Omega M$ induced from $\overline{g}:\text{Map}(I,M) \longrightarrow M\times M$. Since the square in the diagram is a pull-back square of fibrations, base change gives $f_3^*\circ (f_2)^!=(f_2')^!\circ(f_3')^*$. Hence
\begin{equation*}
\mu\circ\Phi=(f_4^!\circ f_3^*)\circ (f_2^!\circ f_1^*)=(f_4\circ f_2')^!\circ(f_1\circ f_3')^*=f_6^!\circ f_5^*.
\end{equation*}
To understand the diagonal arrows of the diagram, consider the following pull-back diagram of fibrations:
\begin{equation*}
\begin{CD}
LM\!\!\!\!\underset{M\times M}\times\!\!\!\! LM @>{\pi_1}>> LM \\
@V{\pi_2}VV @V{(p_0, p_{\frac12})}VV \\
LM @>{(p_0, p_{\frac12})}>> M\times M,
\end{CD}
\end{equation*}
where for a loop $\gamma:[0,1] \to M$ in $LM$, $p_0(\gamma)=\gamma(0)$ and $p_{\frac12}(\gamma)=\gamma(\frac12)$.
The space $LM\times_{M\times M}LM$ consists of pairs of loops $(\ell_1,\ell_2)$ with the same base points and the same mid points, and maps $\pi_1,\pi_2$ are projections onto the first and the second components. In other words, any element in $LM\times_{M\times M}LM$ with base point $x$ and mid point $y$ is of the form $(\alpha\beta,\eta\gamma)$ where $\alpha,\eta$ are arcs from $x$ to $y$ and $\beta,\gamma$ are arcs from $y$ to $x$. Thus this space is exactly the same as the space $\text{Map}(\begin{tikzpicture} \draw (0,0) ellipse ( 0.1 and 0.035) ellipse ( 0.1 and 0.1); \fill ( 0.1,0) circle ( 0.035); \fill (- 0.1,0) circle ( 0.035);\end{tikzpicture},M)$. By homotopy equivalence, we can replace the space $LM\times_{M\times M}LM$ by a more familiar space. Let $LM\times_M LM\times_M LM$ be the space of triples of loops sharing the same base points. Consider maps
\begin{align*}
h&: LM\!\!\!\!\underset{M\times M}\times\!\!\!\! LM \longrightarrow LM\underset{M}\times LM\underset{M}\times LM \\
\overline{h}&: LM\underset{M}\times LM\underset{M}\times LM \longrightarrow LM\!\!\!\!\underset{M\times M}\times\!\!\!\! LM,
\end{align*}
given by $h(\alpha\beta,\eta\gamma)=(\alpha\beta, \alpha\gamma, \eta\beta)$ and $\overline{h}(\alpha\beta,\xi_1,\xi_2)=\bigl(\alpha\beta,(\xi_2\beta^{-1})\cdot(\alpha^{-1}\xi_1)\bigr)$. See the diagram for $C\in\text{Map}(\begin{tikzpicture} \draw (0,0) ellipse ( 0.1 and 0.035) ellipse ( 0.1 and 0.1); \fill ( 0.1,0) circle ( 0.035); \fill (- 0.1,0) circle ( 0.035);\end{tikzpicture},M)\cong LM\times_{M\times M}LM$ in Figure 1 above. We can easily check that these two maps are homotopy inverses to each other. Hence the diagonal part of the diagram \eqref{composition} can be replaced by the bottom line of the following diagram:
\begin{equation*}
\begin{CD}
LM @<{f_5}<< LM\!\!\!\!\underset{M\times M}\times\!\!\!\! LM @>{f_6}>> LM \\
@| @V{h}V{\simeq}V @| \\
LM @<{f_7}<< LM\underset{M}\times LM\underset{M}\times LM @>{p_1}>> LM,
\end{CD}
\end{equation*}
where $p_1(\delta,\xi_1,\xi_2)=\delta$, and $f_7(\delta,\xi_1,\xi_2)=\xi_1\xi_2\xi_1^{-1}\delta\xi_2^{-1}$. The right square of the above diagram strictly commutes and the left square commutes up to homotopy. Thus,
\begin{equation*}
f_6^!\circ f_5^*=p_1^!\circ f_7^*: H^*(LM;k) \longrightarrow H^*(LM\underset{M}\times LM\underset{M}\times LM;k) \longrightarrow H^{*-2d}(LM;k).
\end{equation*}
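The claim above that $h$ and $\overline{h}$ are homotopy inverse to each other is the following elementary check, valid up to the canonical reparametrization homotopies for path compositions:
\begin{align*}
\overline{h}\circ h(\alpha\beta,\eta\gamma)
&=\overline{h}(\alpha\beta,\alpha\gamma,\eta\beta)
=\bigl(\alpha\beta,(\eta\beta\cdot\beta^{-1})\cdot(\alpha^{-1}\cdot\alpha\gamma)\bigr)
\simeq(\alpha\beta,\eta\gamma), \\
h\circ\overline{h}(\alpha\beta,\xi_1,\xi_2)
&=h\bigl(\alpha\beta,(\xi_2\beta^{-1})\cdot(\alpha^{-1}\xi_1)\bigr)
=\bigl(\alpha\beta,\alpha\cdot(\alpha^{-1}\xi_1),(\xi_2\beta^{-1})\cdot\beta\bigr)
\simeq(\alpha\beta,\xi_1,\xi_2).
\end{align*}
We now return to the composition $f_6^!\circ f_5^*=p_1^!\circ f_7^*$.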
To understand this map, we consider the dual homology map using Lemma \ref{dual transfers}:
\begin{equation*}
(f_7)_*\circ(p_1)_!: H_*(LM;k) \longrightarrow H_{*+2d}(LM\underset{M}\times LM\underset{M}\times LM;k) \longrightarrow H_{*+2d}(LM;k).
\end{equation*}
We show that this composition is a zero map. To see this, let $w\in H_r(LM;k)$ be an arbitrary element, and let $W\subset M$ be a subspace of dimension at most $r$ such that an $r$-dimensional cycle representing $w$ is contained in $p^{-1}(W)\subset LM$ where $p:LM \rightarrow M$ is the base point projection. Let $L_WM=p^{-1}(W)$. Since $f_7$ preserves fibers over $M$, we have the following commutative diagram, where $\iota$'s are inclusion maps:
\begin{equation*}
\begin{CD}
LM @<{f_7}<< LM\underset{M}\times LM\underset{M}\times LM @>{p_1}>> LM \\
@A{\iota}AA @A{\iota}AA @A{\iota}AA \\
L_WM @<{f_7^W}<< L_WM\underset{W}\times L_WM\underset{W}\times L_WM @>{p_1^W}>> L_WM
\end{CD}
\end{equation*}
Let $w'\in H_r(L_WM;k)$ be an element such that $\iota_*(w')=w$. By the commutativity of the diagram, we have $(f_7)_*(p_1)_!(w)=\iota_*(f_7^W)_*(p_1^W)_!(w')$ where the element $(f_7^W)_*(p_1^W)_!(w')$ is in the group $H_{r+2d}(L_WM;k)$.
Since $\dim W\le r$ and fiber $\Omega M$ has $k$-cohomological dimension $d$, we have $H_*(L_WM;k)=0$ in degrees $*>r+d$. Hence $H_{r+2d}(L_WM;k)=0$. Thus $(f_7)_*(p_1)_!(w)=0$. Since $w$ is arbitrary, it follows that the homology map $(f_7)_*(p_1)_!$ is a trivial map. Taking its dual, we see that the cohomology map $p_1^!f_7^*: H^{*+2d}(LM;k) \rightarrow H^*(LM;k)$ is also trivial. Consequently, we finally get $\mu\circ\Phi=f_6^!f_5^*=p_1^!f_7^*=0$.
\end{proof}
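The vanishing $H_*(L_WM;k)=0$ in degrees $*>r+d$, used at the end of the above proof, can be justified by the following standard estimate, sketched here with the homology Serre spectral sequence of the fibration $\Omega M \to L_WM \to W$ (with possibly twisted coefficients, since $W$ need not be simply connected):
\begin{equation*}
E^2_{s,t}=H_s\bigl(W;H_t(\Omega M;k)\bigr)=0 \quad\text{for } s>r \text{ or } t>d,
\end{equation*}
since $\dim W\le r$ and the fiber has $k$-cohomological dimension $d$. Hence $E^2_{s,t}=0$ whenever $s+t>r+d$, so $H_n(L_WM;k)=0$ for all $n>r+d$; in particular $H_{r+2d}(L_WM;k)=0$ because $r+2d>r+d$ when $d\ge1$.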
\begin{proof}[Proof of Vanishing Theorem \textup{(II)}] Theorem \ref{vanishing of genus one map} proves the vanishing of the TQFT operator $\mathcal F_0(t)=\mu\circ\Phi$ associated to the genus one surface $T_{\text{closed}}=F_{1,1+1}$ with one incoming and one outgoing boundary. As in case (I), Proposition \ref{trivial HCFT operations} proves the assertions in part (II) of the Vanishing Theorem in the introduction.
\end{proof}
Next, we consider two cases in which the coproduct map $\Phi$ in the loop cohomology $H^*(LM;k)$ vanishes, which implies by dualizing that the product map in the loop homology $H_*(LM;k)$ vanishes, although the coproduct in $H_*(LM;k)$ is in general nontrivial. In the first case, we show that the existence of the unit with respect to the cohomology loop product implies the vanishing of the coproduct $\Phi$. Let $\iota:\Omega M \longrightarrow LM$ be the inclusion map of a fiber.
\begin{proposition} Let $M$ be simply connected with finite dimensional $H^*(\Omega M;k)$ concentrated in degrees $0\le *\le d$. Suppose there exists a unit $u\in H^d(LM;k)$ with respect to the cohomology loop product $\mu$. Then the following hold\textup{:}
\textup{(i)} The coproduct map $\Phi$ is trivial.
\textup{(ii)} The restriction of the unit to the fiber is the orientation class of the fiber. Namely, $\iota^*(u)=\{\Omega M\}\in H^d(\Omega M;k)$.
\end{proposition}
\begin{proof} (i) In the diagram \eqref{composition}, the coproduct map $\Phi$ is given by $\Phi=f_2^!\circ f_1^*$. We consider the dual homology maps from a degree $0$ homology group:
\begin{equation*}
(f_1)_*\circ (f_2)_!: H_0(LM\times LM;k) \longrightarrow H_d\bigl(\text{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035); \end{tikzpicture}, M);k\bigr) \longrightarrow H_d(LM;k).
\end{equation*}
A generator of $H_0(LM\times LM;k)$ can be chosen to be $[(c_x,c_x)]$ for the constant loop $c_x$ at $x\in M$. Note that
\begin{equation*}
f_1\circ f_2^{-1}\bigl((c_x,c_x)\bigr)=\{c_x\gamma c_x\gamma^{-1}\mid \gamma\in\Omega_xM\}\subset LM.
\end{equation*}
Let $\kappa:\Omega_xM \rightarrow LM$ be given by $\kappa(\gamma)=c_x\gamma c_x\gamma^{-1}$. Then $(f_1)_*(f_2)_!([(c_x,c_x)])=\kappa_*([\Omega_xM])$ in $H_d(LM;k)$, where $[\Omega_xM]$ is the orientation class in $H_d(\Omega_x M;k)$. Since the loop $c_x\gamma c_x\gamma^{-1}$ contracts through $\gamma\gamma^{-1}\simeq c_x$, naturally in $\gamma$, the map $\kappa$ is null-homotopic, and hence $\kappa_*([\Omega_xM])=0$. Thus, the composition $(f_1)_*\circ (f_2)_!$ of the above maps from degree $0$ homology is trivial.
Let $z\in H^d(LM;k)$ be an arbitrary degree $d$ element. Since the dual of the coproduct map $\Phi$ is $(f_1)_*\circ (f_2)_!$, we have
\begin{equation*}
\langle \Phi(z),[(c_x,c_x)]\rangle=\langle z,(f_1)_*(f_2)_!\bigl([(c_x,c_x)]\bigr)\rangle=\langle z,0\rangle=0.
\end{equation*}
Hence it follows that $\Phi(z)=0\in H^0(LM\times LM;k)$.
Now let $u\in H^d(LM;k)$ be the multiplicative unit in the loop cohomology $H^*(LM;k)$ so that for any $x\in H^*(LM;k)$, we have $u\cdot x=\mu(u\otimes x)=x$. Then the Frobenius relation implies that
$\Phi(x)=\Phi(u\cdot x)=\Phi(u)\cdot x=0$, since $\Phi(u)=0$ because the degree of the unit $u$ is $d$. Hence the coproduct map $\Phi$ identically vanishes on the loop cohomology.
(ii) We recall that the cohomology loop product map $\mu$ in $H^*(LM;k)$ is given by $\mu=(f_4)^!(f_3)^*$, where
\begin{equation*}
\begin{CD}
LM\times LM @<{f_3}<< \text{Map}(\begin{tikzpicture} \draw (0,0) arc (0:360: 0.1) -- ++(- 0.2,0); \fill (0,0) circle ( 0.035); \fill (- 0.2,0) circle ( 0.035);\end{tikzpicture},M) @>{f_4}>> LM.
\end{CD}
\end{equation*}
We examine the following dual homology map from a degree $0$ homology group:
\begin{equation*}
\begin{CD}
H_0(LM;k) @>{(f_4)_!}>> H_d\bigl(\text{Map}(\begin{tikzpicture} \draw (0,0) arc (0:360: 0.1) -- ++(- 0.2,0); \fill (0,0) circle ( 0.035); \fill (- 0.2,0) circle ( 0.035);\end{tikzpicture},M);k\bigr) @>{(f_3)_*}>> H_d(LM\times LM;k).
\end{CD}
\end{equation*}
Since $M$ is simply connected, the free loop space $LM$ is connected. So a generator of $H_0(LM;k)$ can be chosen to be the class of the constant loop $[c_x]$ at $x\in M$. Then $(f_4)_!([c_x])=[\Omega_xM]\in H_d\bigl(\text{Map}(\begin{tikzpicture} \draw (0,0) arc (0:360: 0.1) -- ++(- 0.2,0); \fill (0,0) circle ( 0.035); \fill (- 0.2,0) circle ( 0.035);\end{tikzpicture},M);k \bigr)$ corresponding to the orientation class of the space of based loops obtained by mapping the middle interval of the graph \begin{tikzpicture} \draw (0,0) arc (0:360: 0.1) -- ++(- 0.2,0); \fill (0,0) circle ( 0.035); \fill (- 0.2,0) circle ( 0.035);\end{tikzpicture} into $M$, where the outer circle is mapped to a point $x\in M$ by $c_x$. Thus using the description of $f_3$ given in \eqref{correspondence maps}, the element $(f_3)_*(f_4)_!([c_x])$ is given by the homology class of the set $\{(\gamma,\gamma^{-1})\mid \gamma\in\Omega_xM\}\subset LM\times LM$. This is the image of the following composition:
\begin{equation*}
\Omega_xM \xrightarrow{\phi} \Omega_x M\times \Omega_x M \xrightarrow{1\times S} \Omega_x M\times \Omega_x M \xrightarrow{\iota\times \iota} LM\times LM.
\end{equation*}
Thus $(f_3)_*(f_4)_!([c_x])=\iota_*[\Omega M]\otimes 1 + 1\otimes \iota_*S_*([\Omega M]) + \text{(other terms)}$. For $1\in H^0(LM;k)$, we have $\mu(u\otimes 1)=1$ because $u$ is the unit with respect to $\mu$. Since the dual of $\mu$ is $\mu^*=(f_3)_*(f_4)_!$ by Lemma \ref{dual transfers},
\begin{align*}
1&=\langle 1,[c_x]\rangle=\langle \mu(u\otimes 1),[c_x]\rangle=\langle u\otimes 1, (f_3)_*(f_4)_!([c_x])\rangle \\
&=\langle u\otimes 1, \iota_*[\Omega M]\otimes 1 + 1\otimes \iota_*S_*([\Omega M]) + \text{(other terms)}\rangle=\langle u, \iota_*([\Omega M])\rangle.
\end{align*}
Hence $\langle \iota^*(u),[\Omega M]\rangle=1$. Since $H^d(\Omega M;k)\cong k$ is generated by the orientation class $\{\Omega M\}$, which is dual to $[\Omega M]$, this means $\iota^*(u)=\{\Omega M\}$. This completes the proof of (ii).
\end{proof}
In the second case in which $\Phi$ is trivial, we show that if $H^*(\Omega M;k)$ is an exterior algebra generated by finitely many odd degree elements, then the transfer map
\begin{equation*}
f_2^!=g_{\text{out}}^!: H^*\bigl(\textup{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035); \end{tikzpicture},M);k\bigr) \longrightarrow H^*(LM;k)\otimes H^*(LM;k)
\end{equation*}
is already trivial. Consequently, the coproduct map $\Phi=f_2^!f_1^*$ is also trivial. To show this, we combine the Eilenberg-Moore spectral sequences and the Serre spectral sequences (see \cite{McC} for example).
Let $p:E \to B$ be a fibration. In general, consider a diagram of pull-back fibrations and their induced cohomology maps:
\begin{equation*}
\begin{CD}
E' @>{f}>> E \\
@V{p'}VV @V{p}VV \\
B' @>{\overline{f}}>> B
\end{CD}
\qquad \qquad
\begin{CD}
H^*(E';k) @<{f^*}<< H^*(E;k) \\
@A{(p')^*}AA @A{p^*}AA \\
H^*(B';k) @<{\overline{f}^*}<< H^*(B;k)
\end{CD}
\end{equation*}
If $p^*$ is onto, then under an appropriate condition on the cohomology $H^*(F;k)$ of the fiber $F$, Lemma \ref{vanishing transfer} implies $p^!=0$. What can we say about the other transfer $(p')^!$? We can use the Eilenberg-Moore spectral sequence, which is a second quadrant spectral sequence, to analyze the situation and to compute the cohomology $H^*(E';k)$. Its $E_2$-term is given by
\begin{equation}\label{Tor group}
E_2^{*,*}=\text{Tor}^{*,*}_{H^*(B)}\bigl(H^*(B'),H^*(E)\bigr).
\end{equation}
We consider a special case in which the Eilenberg-Moore spectral sequence collapses.
\begin{lemma}\label{Eilenberg-Moore} Suppose $H^*(B';k)$ is a free $H^*(B;k)$-module. Then
\begin{equation*}
H^*(E';k)\cong H^*(B';k)\!\!\!\!\underset{H^*(B)}\otimes\!\!\!\! H^*(E;k).
\end{equation*}
If furthermore, $p^*$ is onto and $I=\textup{Ker}\,p^*$, then
\begin{equation*}
H^*(E';k)\cong H^*(B';k)/I',
\end{equation*}
where $I'=I\cdot H^*(B';k)$ is the extension of the ideal $I$ to an ideal in $H^*(B';k)$.
\end{lemma}
\begin{proof} Since $H^*(B';k)$ is a free module over $H^*(B;k)$, $\text{Tor}^{-p,q}=0$ for $p>0$ in \eqref{Tor group}. Thus the Eilenberg-Moore spectral sequence collapses and
\begin{equation*}
H^*(E')\cong\text{Tor}^{0,*}_{H^*(B)}\bigl(H^*(B'),H^*(E)\bigr)=H^*(B')\!\!\!\!\underset{H^*(B)}\otimes\!\!\!\! H^*(E).
\end{equation*}
The second part follows from this.
\end{proof}
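The deduction of the second isomorphism in Lemma \ref{Eilenberg-Moore} from the first is the following routine computation with module structures, spelled out for convenience: since $p^*$ is onto with kernel $I$, we have $H^*(E;k)\cong H^*(B;k)/I$ as $H^*(B;k)$-modules, and therefore
\begin{equation*}
H^*(E';k)\cong H^*(B';k)\underset{H^*(B)}\otimes H^*(B;k)/I
\cong H^*(B';k)\big/\bigl(I\cdot H^*(B';k)\bigr)=H^*(B';k)/I'.
\end{equation*}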
The pull-back diagram of fibrations relevant to us is the diagram \eqref{fibration diagram} which we reproduce here for convenience.
\begin{equation*}
\begin{CD}
\text{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035); \end{tikzpicture},M) @>{q}>> \text{Map}(I,M)\simeq M \\
@VV{f_2=g_{\text{out}}}V @V{\overline{g}=p_0\times p_1}V{\simeq \phi}V \\
LM\times LM @>{p\times p}>> M\times M
\end{CD}
\end{equation*}
Here, $\phi:M \to M\times M$ is the diagonal map.
Since the fibrations $q$ and $p\times p$ have sections, the induced cohomology maps $q^*$ and $(p\times p)^*$ are injective. Since the induced cohomology map $\overline{g}^*=\phi^*:H^*(M;k)\otimes H^*(M;k) \longrightarrow H^*(M;k)$ is nothing but the cup product map, it is onto. Let $J=\text{Ker}\,\overline{g}^*$ be the kernel of the cup product map.
If $H^*(LM;k)$ is a free $H^*(M;k)$-module, then Lemma \ref{Eilenberg-Moore} implies
\begin{equation}\label{cohomology of mapping space}
H^*\bigl(\text{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035); \end{tikzpicture},M);k\bigr)\cong H^*(LM\times LM;k)/J'
\end{equation}
and $g_{\text{out}}^*$ is onto, where $J'=J\cdot H^*(LM\times LM;k)$ is the extension of $J$ to an ideal in $H^*(LM\times LM;k)$. By Lemma \ref{vanishing transfer}, we see that the transfer map $g_{\text{out}}^!$ is trivial. This proves the second part of the next theorem.
\begin{theorem}\label{cohomology of LM} Let $k$ be a field of any characteristic. Let $M$ be simply connected such that
\begin{equation}\label{cohomology of Omega M}
H^*(\Omega M;k)\cong\Lambda_k(x_1,x_2,\dots,x_{\ell}),
\end{equation}
an exterior algebra on odd degree generators. Then the following statements hold.
\textup{(1)} The cohomology of $M$ is given by $H^*(M;k)\cong k[y_1,y_2,\dots,y_{\ell}]$, a polynomial algebra on even degree generators with $|y_i|=|x_i|+1$ for $1\le i\le \ell$, and the cohomology of $LM$ and $\textup{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035); \end{tikzpicture},M)$ are given by
\begin{align*}
H^*(LM;k)&\cong H^*(M;k)\otimes H^*(\Omega M;k) \\
&\cong k[y_1,y_2,\dots,y_{\ell}]\otimes \Lambda_k(x_1,x_2,\dots,x_{\ell}), \\
H^*\bigl(\textup{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035); \end{tikzpicture},M);k\bigr) &\cong H^*(M;k)\otimes H^*(\Omega M;k)\otimes H^*(\Omega M;k).
\end{align*}
\textup{(2)} The transfer map for the fibration $g_{\textup{out}}$ vanishes. Namely,
\begin{equation*}
g_{\text{out}}^!=0: H^*\bigl(\textup{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035); \end{tikzpicture},M);k\bigr) \longrightarrow H^*(LM;k)\otimes H^*(LM;k).
\end{equation*}
Consequently, the coproduct map $\Phi=g_{\textup{out}}^!g_{\textup{in}}^*$ in $H^*(LM;k)$ also vanishes. Here the transfer $g_{\textup{out}}^!$ can be identified with
\begin{multline*}
g_{\textup{out}}^!=\overline{g}^!\otimes 1\otimes 1: H^*(M;k)\otimes H^*(\Omega M;k)\otimes H^*(\Omega M;k) \\
\longrightarrow H^*(M;k)\otimes H^*(M;k)\otimes H^*(\Omega M;k)\otimes H^*(\Omega M;k).
\end{multline*}
\end{theorem}
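As a simple illustration of part (1), not taken from the argument above but verifiable independently, consider $M=\mathbb CP^{\infty}=K(\mathbb Z,2)$, for which $\Omega M\simeq S^1$ and hypothesis \eqref{cohomology of Omega M} holds with $\ell=1$ and $|x_1|=1$. Then part (1) gives
\begin{equation*}
H^*(L\mathbb CP^{\infty};k)\cong k[y_1]\otimes\Lambda_k(x_1),\qquad |y_1|=2,
\end{equation*}
which agrees with the homotopy equivalence $LK(\mathbb Z,2)\simeq K(\mathbb Z,2)\times K(\mathbb Z,1)$ available because $K(\mathbb Z,2)$ is itself a loop space. (Of course $\mathbb CP^{\infty}$ is not a closed manifold; the point is only that part (1) uses nothing beyond the stated hypotheses.)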
Over the rationals $k=\mathbb Q$, if $H^*(\Omega M;\mathbb Q)$ is finite dimensional, then it must be an exterior algebra on finitely many odd degree generators by the classical Hopf theorem. Thus hypothesis \eqref{cohomology of Omega M} is a natural one. See also Remark \ref{torsion elements and exterior algebra}. In this case, the rational cohomology $H^*(LM;\mathbb Q)$ can also be computed using minimal models. We emphasize that in Theorem \ref{cohomology of LM}, the characteristic of the field $k$ is arbitrary.
The cohomology $H^*(LM;k)$ in part (1) of Theorem \ref{cohomology of LM} can be computed directly using the Eilenberg-Moore spectral sequence. See for example \cite{Sm}, where the characteristic zero case is discussed; the general characteristic $p$ case can be handled in a similar way. However, we found it more interesting and instructive (at least to the author) to show that the Serre spectral sequence for the fibration $\Omega M \to LM \xrightarrow{p} M$ collapses under our hypothesis on $H^*(\Omega M;k)$ given in \eqref{cohomology of Omega M}. See Remark \ref{differentials} at the end of this section for a motivation.
We first discuss the structure of Serre spectral sequences of related fibrations. Let $x_0\in M$ be a base point, and let $\Omega M \to PM \xrightarrow{p} M$ be the path fibration, where $PM=\{\gamma:[0,1] \to M\mid \gamma(0)=x_0\}$ is the path space starting at $x_0$, and $p(\gamma)=\gamma(1)$. When the cohomology of the based loop space is given as in \eqref{cohomology of Omega M}, the Borel transgression theorem (\cite{MT}, Chapter 7, Theorem 2.9) tells us that the exterior algebra generators can be chosen to be transgressive so that
\begin{equation*}
H^*(M;k)\cong k[y_1,y_2,\dots,y_{\ell}],
\end{equation*}
where $y_i$ is the image of $x_i$ under the transgression so that $|y_i|=|x_i|+1$ for $1\le i\le \ell$. In algebras $H^*(M;k)$ and $H^*(\Omega M;k)$, let $I(r)$ be the ideal generated by generators of degree less than $r$. We order generators $x_i$'s so that $|x_1|\le|x_2|\le\cdots\le|x_{\ell}|$, and let $|x_i|=r_i-1$ with $r_i=|y_i|$ even for $1\le i\le \ell$. The Serre spectral sequence $\{E_r^{*,*}\}$ for the path fibration $p:PM \to M$ is of the form
\begin{equation}\label{spectral sequence for path fibration}
E_r^{*,*}=H^*(M;k)/I(r)\otimes H^*(\Omega M;k)/I(r-1),
\end{equation}
and all nonzero differentials in the spectral sequence are consequences of $d_{r_i}(x_i)=y_i$ for $1\le i\le \ell$, where $d_{r_i}: E_{r_i}^{0,r_i-1} \longrightarrow E_{r_i}^{r_i,0}$.
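For instance, when $\ell=1$, formula \eqref{spectral sequence for path fibration} reads $E_r^{*,*}=k[y_1]\otimes\Lambda_k(x_1)$ for $2\le r\le r_1$, with the single differential $d_{r_1}(x_1)=y_1$, so that
\begin{equation*}
E_{r_1+1}^{*,*}=k[y_1]/(y_1)\otimes\Lambda_k(x_1)/(x_1)=k
\end{equation*}
is concentrated in bidegree $(0,0)$, consistent with the contractibility of $PM$.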
Next we examine the Serre spectral sequence for the diagonal map $\phi: M\longrightarrow M\times M$ regarded as a fibration. Let $J$ be the kernel of the cup product map $\phi^*:H^*(M)\otimes H^*(M) \longrightarrow H^*(M)$. It can be easily checked that
\begin{equation}\label{ideal J}
J=(y_1\otimes 1-1\otimes y_1, \ y_2\otimes 1-1\otimes y_2,\ \dots,\ y_{\ell}\otimes 1-1\otimes y_{\ell})\subset H^*(M\times M;k).
\end{equation}
Let $J(r)$ be the subideal of $J$ generated by generators of $J$ given in \eqref{ideal J} of degree less than $r$.
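The description \eqref{ideal J} of $J$ is an instance of a general algebraic fact: for a commutative $k$-algebra $A$ generated by elements $a_1,\dots,a_{\ell}$, the kernel of the multiplication map $A\otimes_k A \to A$ is the ideal generated by the elements $a_i\otimes 1-1\otimes a_i$. Indeed,
\begin{equation*}
\phi^*(a_i\otimes 1-1\otimes a_i)=a_i-a_i=0,
\qquad
a\otimes b\equiv 1\otimes ab \pmod{(a_i\otimes 1-1\otimes a_i)},
\end{equation*}
so the quotient of $A\otimes_k A$ by this ideal maps isomorphically onto $A$. Here $A=H^*(M;k)=k[y_1,\dots,y_{\ell}]$.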
\begin{lemma}\label{spectral sequence for g-bar} Let $M$ be simply connected with $H^*(\Omega M;k)$ given as in \eqref{cohomology of Omega M}. Let $\{E_r^{*,*}\}$ be the Serre spectral sequence for the fibration $\textup{Map}(I,M) \xrightarrow{\overline{g}=p_0\times p_1} M\times M$ with fiber $\Omega M$. Then its $E_r$-page is given by
\begin{equation}\label{E_r}
E_r^{*,*}\cong H^*(M\times M;k)/J(r)\otimes H^*(\Omega M;k)/I(r-1),
\end{equation}
and all differentials are consequences of
\begin{equation*}
d_{r_i}(x_i)=1\otimes y_i-y_i\otimes 1\in H^*(M\times M;k)/J(r_i),
\end{equation*}
where $d_{r_i}:E_{r_i}^{0,r_i-1} \longrightarrow E_{r_i}^{r_i,0}$ for $1\le i\le \ell$.
\end{lemma}
\begin{proof} To examine the spectral sequence $\{E_r^{*,*}\}$, we compare it with related spectral sequences for path fibrations $PM \to M$ and $\overline{P}M \to M$, where $\overline{P}M=\{\gamma:[0,1] \to M\mid \gamma(1)=x_0\}$ is the space of paths ending at $x_0$. We have the following diagram
\begin{equation*}
\begin{CD}
PM @>{h_1}>> \text{Map}(I,M) @<{h_2}<< \overline{P}M \\
@V{p_1}VV @V{\overline{g}=p_0\times p_1}VV @V{p_0}VV \\
\{x_0\}\times M @>{\bar{h}_1}>> M\times M @<{\bar{h}_2}<< M\times \{x_0\},
\end{CD}
\end{equation*}
where $h_1$ and $h_2$ are inclusions.
Let $S:PM \xrightarrow{\cong} \overline{P}M$ be the inverse path homeomorphism given by $S(\gamma)(t)=\gamma(1-t)$ for $t\in[0,1]$. Since the composite $\Omega M \xrightarrow{\phi} \Omega M\times \Omega M \xrightarrow{1\times S} \Omega M\times \Omega M \xrightarrow{m} \Omega M$, where $m$ is the loop multiplication map, is homotopic to the constant map, for any primitive element $z\in H^*(\Omega M;k)$ we have $S^*(z)=-z$. Since $H^*(\Omega M;k)$ is an exterior algebra generated by odd degree elements, by the Hopf-Samelson theorem (\cite{MT}, Chapter 7, Corollary 1.13) $H^*(\Omega M;k)$ is primitively generated. Hence $S^*$ acts as $-1$ on odd degree elements; in particular, $S^*(x_i)=-x_i$ for $1\le i\le \ell$. (This argument is needed since in \eqref{cohomology of Omega M} we did not assume that the $x_i$'s are primitive.) Let $\{{}'E_r^{*,*}\}$ and $\{{}''E_r^{*,*}\}$ be the Serre spectral sequences for the fibrations $PM\to M$ and $\overline{P}M \to M$. We discussed the spectral sequence $\{{}'E_r^{*,*}\}$ earlier in \eqref{spectral sequence for path fibration}. Using the isomorphism of spectral sequences $S^*:{}''E_r^{*,*} \xrightarrow{\cong} {}'E_r^{*,*}$, we see that the differential $d_{r_i}'(x_i)=y_i$ in ${}'E_r^{*,*}$ translates to the differential $d_{r_i}''(x_i)=d_{r_i}'\bigl(S^*(x_i)\bigr)=-y_i$ in $\{{}''E_r^{*,*}\}$ for $1\le i\le \ell$, and all differentials in $\{{}''E_r^{*,*}\}$ are consequences of these.
We prove Lemma \ref{spectral sequence for g-bar} by induction on $r\ge2$. When $r=2$, we have
\begin{equation*}
E_2^{*,*}=H^*\bigl(M\times M;H^*(\Omega M;k)\bigr)=H^*(M\times M;k)\otimes H^*(\Omega M;k),
\end{equation*}
since the local system is trivial because $M$ is simply connected. Assume that the $E_r$-page is given by \eqref{E_r}. Suppose $H^*(\Omega M;k)/I(r-1)\cong\Lambda_k(x_i,x_{i+1},\dots,x_{\ell})$ with $|x_i|\ge r-1>|x_{i-1}|$. We consider two cases. If $|x_i|>r-1$, then any two consecutive degrees in $H^*(\Omega M)/I(r-1)$ with nontrivial groups are at least $|x_i|>r-1$ apart. Hence the differential $d_r:E_r^{*,*} \longrightarrow E_r^{*+r,*-r+1}$ must be trivial, and we have $E_{r+1}^{*,*}=E_r^{*,*}=H^*(M\times M)/J(r+1)\otimes H^*(\Omega M)/I(r)$ since $J(r+1)=J(r)$ and $I(r)=I(r-1)$ in this case.
Suppose $|x_i|=r-1$ or $r=r_i$. We consider maps between spectral sequences induced by $h_1$ and $h_2$: $h_1^*: E_r^{*,*} \rightarrow {}'E_r^{*,*}$ and $h_2^*: E_r^{*,*} \rightarrow {}''E_r^{*,*}$.
We recall that ${}'E_r^{*,*}$ and ${}''E_r^{*,*}$ are given by
\begin{equation*}
{}'E_r^{*,*}\cong H^*(M)/I(r)\otimes H^*(\Omega M)/I(r-1)\cong {}''E_r^{*,*}.
\end{equation*}
See \eqref{spectral sequence for path fibration}. Applying $h_1^*$ to the differential $d_r(x_i)$, we get $\bar{h}_1^*\bigl(d_r(x_i)\bigr)=d_r'(x_i)=y_i$. Similarly, applying $h_2^*$, we get $\bar{h}_2^*\bigl(d_r(x_i)\bigr)=d_r''(x_i)=-y_i$. Hence $d_r(x_i)$ is of the form $d_r(x_i)=1\otimes y_i-y_i\otimes 1+(\text{decomposables})$ in $H^*(M\times M)/J(r_i)$. Since the spectral sequence $\{E_r^{*,*}\}$ converges to $H^*\bigl(\text{Map}(I,M)\bigr)\cong H^*(M\times M)/J$, the decomposable elements in the above must lie in $J(r_i)$. Thus
\begin{equation*}
d_{r_i}(x_i)=1\otimes y_i-y_i\otimes 1 \in H^*(M\times M)/J(r_i).
\end{equation*}
If there are other generators of $H^*(\Omega M)/I(r_i-1)$ of degree $r_i-1$, we can apply the same argument and we get corresponding results for their differentials. Hence
\begin{equation*}
E_{r_i+1}^{*,*}=H^*(M\times M)/J(r_i+1)\otimes H^*(\Omega M)/I(r_i).
\end{equation*}
This completes the inductive step and the proof is complete.
\end{proof}
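In the simplest case $\ell=1$, Lemma \ref{spectral sequence for g-bar} asserts that the only nontrivial differential is $d_{r_1}(x_1)=1\otimes y_1-y_1\otimes 1$, so that
\begin{equation*}
E_{r_1+1}^{*,*}\cong k[y_1\otimes 1,\,1\otimes y_1]/(1\otimes y_1-y_1\otimes 1)\cong k[y_1],
\end{equation*}
concentrated on the base row. The spectral sequence therefore collapses at this stage and recovers $H^*\bigl(\text{Map}(I,M);k\bigr)\cong H^*(M;k)$, as it must since $\text{Map}(I,M)\simeq M$.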
\begin{proof}[Proof of Theorem \ref{cohomology of LM}] We show that the Serre spectral sequence $\{E_r^{*,*}\}$ for the fibration $\Omega M \to LM \xrightarrow{p} M$ collapses at the $E_2$-term. Suppose $E_r^{*,*}=E_2^{*,*}$ for some $r\ge2$. We show that $d_r=0$ on $E_r^{*,*}$, implying that $E_{r+1}^{*,*}=E_r^{*,*}$. Since $E_r^{*,*}=E_2^{*,*}=E_2^{*,0}\otimes E_2^{0,*}$, by the derivation property of the differential, we only have to show that $d_r=0$ on $E_r^{0,*}=H^*(\Omega M;k)$. To see this, we consider the following pull-back diagram of fibrations:
\begin{equation}\label{pull-back for LM}
\begin{CD}
LM @>{h}>> \text{Map}(I,M) \\
@V{p}VV @V{\overline{g}=p_0\times p_1}VV \\
M @>{\phi}>> M\times M.
\end{CD}
\end{equation}
Let $\{{}'E_r^{*,*}\}$ be the Serre spectral sequence for the fibration $\overline{g}:\text{Map}(I,M) \longrightarrow M\times M$ described in Lemma \ref{spectral sequence for g-bar}. The map $h$ induces a map of spectral sequences $h^*: {}'E_r^{*,*} \longrightarrow E_r^{*,*}$.
Suppose ${}'E_r^{0,*}=H^*(\Omega M;k)/I(r-1)=\Lambda_k(x_i,x_{i+1},\dots,x_{\ell})$, where $|x_i|\ge r-1>|x_{i-1}|$.
We consider two cases. Suppose $|x_i|>r-1$. Since $d_r'$ is trivial on $x_i, x_{i+1}, \dots, x_{\ell}$ in the spectral sequence ${}'E_r^{0,*}$ in view of Lemma \ref{spectral sequence for g-bar}, mapping this relation to $E_r^{0,*}$ via $h^*$, we get $d_r(x_j)=0$ for $i\le j\le\ell$. For degree reasons, $d_r$ is trivial on $x_1,x_2,\dots, x_{i-1}$, where $|x_1|\le\cdots\le|x_{i-1}|<r-1$. Hence $d_r=0$ on $E_r^{0,*}$. As remarked above, the derivation property of $d_r$ implies that $d_r=0$ on the entire $E_r^{*,*}$.
Next, suppose $|x_i|=\cdots=|x_{k}|=r-1$ and $|x_{k+1}|>r-1$ for some $i\le k\le \ell$. In this case, $r=r_i$. Since $d_{r_i}'(x_j)=1\otimes y_j-y_j\otimes 1\in {}'E_{r_i}^{r_i,0}$ for $i\le j\le k$, mapping this relation by $h^*$, we get $d_{r_i}(x_j)=0\in E_{r_i}^{r_i,0}$ for $i\le j\le k$, since $h^*$ on the base spaces is simply the cup product $\phi^*$ (see diagram \eqref{pull-back for LM}). By Lemma \ref{spectral sequence for g-bar}, for $x_j$ with $j>k$, we have $d_{r_i}'(x_j)=0$, which in turn implies that $d_{r_i}(x_j)=0$ for $k< j\le \ell$. For degree reasons, $d_{r_i}$ is trivial on $x_1,x_2,\dots,x_{i-1}$. Hence again $d_{r_i}$ is trivial on $E_{r_i}^{0,*}$, and the derivation property of the differential implies that $d_{r_i}=0$ on the entire $E_{r_i}^{*,*}$. This completes the inductive step, and we have proved the formula for the cohomology of $LM$.
The cohomology of $\textup{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035); \end{tikzpicture},M)$ then follows from \eqref{cohomology of mapping space}. This completes the proof of part (1).
(2) Although the proof of part (2) was discussed right before the statement of Theorem \ref{cohomology of LM}, we give an alternate proof from a different point of view.
The square diagram in \eqref{fibration diagram} can be thought of as a pull-back diagram of fibrations $p\times p$ and $q$ with fiber $\Omega M\times \Omega M$.
\begin{equation*}
\begin{CD}
\Omega M\times \Omega M @= \Omega M\times \Omega M \\
@V{\iota}VV @V{\iota'}VV \\
LM\times LM @<{g_{\text{out}}}<< \textup{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035); \end{tikzpicture},M) \\
@V{p\times p}VV @V{q}VV \\
M\times M @<{\overline{g}}<< \text{Map}(I,M)
\end{CD}
\end{equation*}
Since $H^*(LM\times LM;k)\cong H^*(M\times M;k)\otimes H^*(\Omega M\times \Omega M;k)$ by what we proved in part (1), the inclusion map of the fiber $\iota:\Omega M\times \Omega M \longrightarrow LM\times LM$ is totally nonhomologous to zero and $\iota^*$ is onto. This implies that the fiber inclusion map $\iota':\Omega M\times \Omega M \longrightarrow \textup{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035); \end{tikzpicture},M)$ is such that $(\iota')^*$ is onto and $\iota'$ is totally nonhomologous to zero. Hence the spectral sequence for the fibration $q$ collapses and we get $H^*\bigl(\textup{Map}(\begin{tikzpicture}\draw (0,0) arc (0:180: 0.1) -- ++(- 0.2,0) arc (0:360: 0.1) -- ++( 0.2,0) arc (180:360: 0.1);\fill (- 0.2,0) circle ( 0.035); \fill (- 0.4,0) circle ( 0.035); \end{tikzpicture},M);k\bigr)\cong H^*(M;k)\otimes H^*(\Omega M\times \Omega M;k)$ and $g_{\text{out}}^*=\overline{g}^*\otimes 1$ is onto. By Lemma \ref{vanishing transfer}, we see that the transfer map $g_{\text{out}}^!$ vanishes. This completes the proof of Theorem \ref{cohomology of LM}.
\end{proof}
\begin{remark}\label{torsion elements and exterior algebra} If the characteristic $p$ of the field $k$ is positive, then the hypothesis \eqref{cohomology of Omega M} follows from the $p$-torsion freeness of the integral cohomology of $\Omega M$. More precisely, by Theorem 2.12 in Chapter 7 of \cite{MT}, if $H^*(\Omega M;\mathbb Z)$ is torsion free and finitely generated, then for some $\ell$,
\begin{equation*}
H^*(\Omega M;\mathbb Z)\cong\Lambda_{\mathbb Z}(x_1,x_2,\dots,x_{\ell}), \quad |x_i|=\text{odd}.
\end{equation*}
If $H^*(\Omega M;\mathbb Z)$ is $p$-torsion free and finitely generated, and if the characteristic of a field $k$ is $p>0$, then for some $\ell$
\begin{equation*}
H^*(\Omega M;k)\cong \Lambda_k(x_1,x_2,\dots,x_{\ell}), \quad |x_i|=\text{odd}.
\end{equation*}
\end{remark}
Combining the above remark and Theorem \ref{cohomology of LM}, we obtain the following corollary.
\begin{corollary} \label{trivial coproduct} \textup{(1)} Suppose $H^*(\Omega M;\mathbb Z)$ is torsion free and finitely generated. Then the cohomology loop coproduct map in $H^*(LM;\mathbb Z)$ is trivial.
\textup{(2)} Suppose $H^*(\Omega M;\mathbb Z)$ has no $p$-torsion for a prime $p$. Then the cohomology loop coproduct in $H^*(LM;k)$ is trivial, where $k$ is any field of characteristic $p$.
\end{corollary}
\begin{proof} We only have to observe that in the torsion free case (1), all the arguments in Theorem \ref{cohomology of LM} are valid with integral coefficients.
\end{proof}
\begin{remark}\label{differentials}
In some cases, it is not difficult to determine algebraically all the differentials in the Serre spectral sequence for the fibration $\overline{g}:\text{Map}(I,M) \to M\times M$, given the cohomology of $M$. Thus the method we employ here can also be used to determine, purely algebraically, all the nontrivial differentials in the Serre spectral sequence for some fibrations $p:LM \to M$ without using any external geometric information. For example, all the nontrivial differentials for the fibration $p:L\mathbb CP^n \to \mathbb CP^n$ can be determined this way. In \cite{CJY}, Cohen, Jones and Yan compute the loop homology algebra structure of $H_*(LM;\mathbb Z)$ using a Serre type spectral sequence in which the differentials were determined from a geometric result of Ziller \cite{Zil}, who computed the cohomology of $L\mathbb CP^n$ by a Morse theoretic method.
\end{remark}
\bigskip
\section{Vanishing theorem for open-closed string topology} \label{open-closed string topology}
We extend our results to open-closed string topology on an oriented closed smooth manifold $M$ of dimension $d$. First we briefly recall the general framework of open-closed string topology. An open-closed cobordism is an oriented cobordism between two compact ordered parametrized 1-dimensional manifolds, which are ordered finite unions of unit intervals and unit circles. We simply refer to intervals as open strings and to circles as closed strings. Thus, an open-closed cobordism is an oriented surface $S$ whose boundary consists of three parts: (i) incoming open or closed strings $\partial_{in}S$, (ii) outgoing open or closed strings $\partial_{out}S$, and (iii) the remaining boundary part, called the free boundary $\partial_{free}S$. The manifold $\partial_{free}S$ is a cobordism between the boundaries of $\partial_{in}S$ and $\partial_{out}S$. We assume that each connected component of an open-closed cobordism has at least one outgoing open or closed string. This is the positive boundary condition. End points of open strings are only allowed to move along submanifolds belonging to a specified family of submanifolds of $M$ called $D$-branes. Let $\mathcal D=\{I,J,\dots\}$ be such a collection. Thus connected components of the free boundary $\partial_{free}S$ are labeled with $D$-branes. Note that a boundary component of an open-closed cobordism can have both incoming and outgoing open strings, and it can also be entirely free boundary. See \cite{Ra} and \cite{Su} for details.
The mapping class group $\Gamma(S)$ for such an open-closed cobordism $S$ is the group of isotopy classes of orientation preserving diffeomorphisms of $S$ which fix incoming and outgoing strings pointwise and which may permute completely free boundaries as long as they carry the same $D$-brane labels. Using isotopy, we may assume that such diffeomorphisms fix boundary components containing open or closed strings pointwise.
For simplicity, suppose the set of $D$-branes consists only of $M$. For the general case, see the remark at the end of this section. If a connected open-closed cobordism $S$ of genus $g$ has $n$ boundaries containing open or closed strings and $m$ completely free boundaries carrying the same label $M$, then the mapping class group $\Gamma(S)$ is isomorphic to $\Gamma_{g,n}^{(m)}=\pi_0(\Lambda)$, where $\Lambda$ is the topological group of orientation preserving diffeomorphisms of a genus $g$ connected oriented surface $F_{g,n}^m$ with $n$ boundaries and $m$ marked points, where the diffeomorphisms fix the $n$ boundaries pointwise and possibly permute the $m$ marked points. Let $\Gamma_{g,n}^m$ be the mapping class group of $F_{g,n}^m$ in which diffeomorphisms fix not only the $n$ boundaries pointwise but also the $m$ marked points. The group $\Gamma_{g,n}^m$ is a normal subgroup of $\Gamma_{g,n}^{(m)}$ and we have the following exact sequence:
\begin{equation}\label{group extension}
1\longrightarrow \Gamma_{g,n}^m \longrightarrow \Gamma_{g,n}^{(m)} \longrightarrow \Sigma_m \longrightarrow 1.
\end{equation}
Let $\sigma_m\cong\mathbb Z$ be the sign representation of $\Sigma_m$, regarded as a $\Gamma_{g,n}^{(m)}$-module through the projection to $\Sigma_m$. In particular, $\sigma_m$ is a trivial $\Gamma_{g,n}^m$-module. Note that $\sigma_m^2$ is a trivial $\Sigma_m$-module.
In \cite{Go2}, Godin constructs string operations in open-closed string topology in which the set of $D$-branes consists only of $M$ itself. Suppose a connected open-closed cobordism $S$ has $p$ incoming closed strings, $q$ outgoing closed strings, $r$ incoming open strings, and $s$ outgoing open strings, where we assume $q+s\ge1$ by the positive boundary condition. Suppose the open-closed cobordism surface $S$ has genus $g$ with $n$ boundaries containing open or closed strings, and with $m$ completely free boundaries. Let $\partial_{\textup{in}}S$ be the collection of incoming open and closed strings in $S$, and let $\chi_S$ be the $\mathbb Z_2$ graded $\Gamma(S)$-module $\bigl(H_1(S,\partial_{\textup{in}}S), H_0(S,\partial_{\textup{in}}S)\bigr)$. Consider the $\Gamma(S)$-module
\begin{equation*}
\det\chi_S=\bigl(\det H_1(S,\partial_{\textup{in}}S)\bigr)\otimes \bigl(\det H_0(S,\partial_{\textup{in}}S)\bigr)^{-1},
\end{equation*}
where $\det$ denotes the highest exterior power. The associated string operations constructed in \cite{Go2} then have the following form (modulo the K\"unneth theorem):
\begin{equation}\label{open-closed string operation}
\mu: H_*(\Gamma_{g,n}^{(m)};(\det\chi_S)^d)\otimes
H_*(LM)^{\otimes p}\otimes H_*(M)^{\otimes r} \longrightarrow
H_*(LM)^{\otimes q}\otimes H_*(M)^{\otimes s},
\end{equation}
where $\Gamma_{g,n}^{(m)}\cong\Gamma(S)$.
We show that the representation $\det\chi_S$ of $\Gamma(S)$ is isomorphic, as a $\Gamma(S)$-module, to the module $\sigma_m$ described above.
\begin{proposition} \label{chi_S} Suppose an open-closed cobordism $S$ has $m$ completely free boundaries. Then, as $\Gamma(S)$-modules, $\det\chi_S\cong \sigma_m$.
\end{proposition}
\begin{proof} First suppose the open-closed cobordism $S$ is connected. Then $H_0(S)\cong\mathbb Z$ is a trivial $\Gamma(S)$-module. To understand $H_1(S)$ as a $\Gamma(S)$-module, suppose $S$ has genus $g$ and has $p$ boundaries $c_1,\dots, c_p$ containing incoming open or closed strings, $m$ completely free boundaries $d_1,\dots,d_m$, and $q$ boundaries $e_1,\dots,e_q$ containing outgoing open or closed strings, where $p,m\ge0$ and $q\ge1$ by the positive boundary condition. Let $\hat{S}$ be the closed surface obtained by capping the $p+m+q$ boundaries of $S$, and let $a_i,b_i$ with $1\le i\le g$ be a symplectic basis of $H_1(\hat{S})$. Then a homology basis of $H_1(S)$ consists of $a_i,b_i,[c_j],[d_k],[e_{\ell}]$, where $1\le i\le g$, $1\le j\le p$, $1\le k\le m$ and $1\le \ell\le q-1$; the remaining class $[e_q]$ is a linear combination of the other basis elements. Since $\Gamma(S)$ acts trivially on the basis elements $[c_j]$ and $[e_{\ell}]$, and permutes the $[d_k]$, we have $\det H_1(S)\cong\bigl(\det H_1(\hat{S})\bigr)\otimes\sigma_m$ as $\Gamma(S)$-modules. Since the action of $\Gamma(S)$ on $H_1(\hat{S})$ preserves the intersection pairing of the $a_i$'s and $b_i$'s, the action factors through a symplectic group, and consequently $\det H_1(\hat{S})$ is a trivial $\Gamma(S)$-module. Hence $\det H_1(S)\cong \sigma_m$ as $\Gamma(S)$-modules.
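The linear combination expressing $[e_q]$ can be made explicit; it is a standard fact for compact oriented surfaces that, with the boundary orientations induced from an orientation of $S$, the sum of all boundary classes is null-homologous. Up to these orientation conventions:

```latex
\begin{equation*}
[e_q] \;=\; -\sum_{j=1}^{p}[c_j] \;-\; \sum_{k=1}^{m}[d_k] \;-\; \sum_{\ell=1}^{q-1}[e_\ell]
\qquad\text{in } H_1(S).
\end{equation*}
```

In particular, a permutation of the $[d_k]$'s fixes this relation, so the action on $H_1(S)$ is well defined on the stated basis.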
In the general case, consider the following homology exact sequence of pairs:
\begin{equation*}
0\to H_1(\partial_{\textup{in}}S) \to H_1(S) \to H_1(S,\partial_{\textup{in}}S)
\to H_0(\partial_{\textup{in}}S) \to H_0(S) \to H_0(S,\partial_{\textup{in}}S) \to 0.
\end{equation*}
Since $\Gamma(S)$ acts trivially on $H_*(\partial_{\textup{in}}S)$, the above exact sequence gives
\begin{equation*}
\det H_1(S,\partial_{\textup{in}}S)\otimes \det H_0(S)\cong \det H_1(S)\otimes \det H_0(S,\partial_{\textup{in}}S).
\end{equation*}
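The isomorphism above is an instance of the standard multiplicativity of determinants over exact sequences of $G$-modules whose underlying abelian groups are finitely generated free: for such an exact sequence, the determinants of the even-indexed and odd-indexed terms agree as $G$-modules.

```latex
\begin{equation*}
0\to V_0\to V_1\to\cdots\to V_n\to 0
\quad\Longrightarrow\quad
\bigotimes_{i\ \text{even}}\det V_i \;\cong\; \bigotimes_{i\ \text{odd}}\det V_i .
\end{equation*}
```

Applied to the six-term sequence, the contributions of $\det H_1(\partial_{\textup{in}}S)$ and $\det H_0(\partial_{\textup{in}}S)$ are trivial $\Gamma(S)$-modules and cancel, leaving the displayed isomorphism.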
Let $S=\coprod_iS_i$ be the decomposition into connected components. Then $\Gamma(S)\cong\prod_i\Gamma(S_i)$ and
\begin{equation*}
\det\chi_S\cong\det H_1(S)\otimes \bigl(\det H_0(S)\bigr)^{-1} \cong \textstyle{\bigotimes}_i\bigl(\det H_1(S_i)\otimes \det^{-1} H_0(S_i)\bigr)
\cong\textstyle{\bigotimes}_i\det H_1(S_i).
\end{equation*}
If $S_i$ has $m_i$ completely free boundaries with $m=\sum_im_i$, then by what we have proved earlier, we have $\det\chi_S\cong\bigotimes_i\sigma_{m_i}\cong\sigma_m$, as $\Gamma(S)$-modules. This completes the proof.
\end{proof}
If we can prove a stability property for the homology $H_*(\Gamma_{g,n}^{(m)};\sigma_m^d)$ for large genus $g$, then the same argument as in the previous section applies, and we obtain a similar vanishing theorem for open-closed string operations. This is what we do next.
\begin{remark}\label{related stability} In \cite{CoMa} and \cite{I2}, stability properties of the homology of the mapping class group $\Gamma_{g,n}$ with twisted coefficients are studied. Note, however, that $\Gamma_{g,n}$ fixes the boundaries of the surface $F_{g,n}$, whereas the above mapping class group $\Gamma_{g,n}^{(m)}$ can permute $m$ of the $m+n$ boundaries of a genus $g$ surface. So our context is somewhat different from that of the above papers.
\end{remark}
A stabilizing map for an open-closed cobordism surface $S$ can be constructed in two ways. Let $T_{\text{closed}}$ be a torus with one incoming and one outgoing closed string, and let $T_{\text{open}}$ be a torus with one boundary containing one incoming and one outgoing open string. See Figures 2 and 3. If $S$ has an incoming or outgoing closed string, we can sew the torus $T_{\text{closed}}$ to a closed string of $S$. If $S$ has an incoming or outgoing open string, then we can sew $T_{\text{open}}$ to an open string of $S$. Either type of sewing increases the genus of $S$ by one without changing the numbers $p,q,r,s$ of incoming/outgoing open/closed strings or the numbers $n,m$ of boundaries with or without open/closed strings. Because of the positive boundary condition $q+s\ge1$ in \eqref{open-closed string operation}, we can always apply at least one of the above two sewing procedures to increase the genus of any open-closed cobordism surface $S$.
\begin{figure}
\begin{tikzpicture}
\draw (0,0) ellipse (1.5 and 1);
\draw (0,2) node[] (1) {} arc (90:150:3 and 2 )
arc (330:270:1 and 0.66 ) --++(-0.5,0) node[] (2) {};
\draw (1) arc (90:30:3 and 2 ) arc (210:270:1 and 0.66 )
-- ++(0.5,0) node[] (3) {};
\draw (0,-2) node[] (4) {} arc (270:210:3 and 2 )
arc (30:90:1 and 0.66 ) -- ++(-0.5,0) node[] (5) {};
\draw (4) arc (270:330:3 and 2 ) arc (150:90:1 and 0.66 )
-- ++(0.5,0) node[] (6) {};
\draw[densely dashed] (0,2) arc (90:270:0.2 and 0.5 );
\draw (0,1) arc (270:360:0.2 and 0.5 ) arc (0:90:0.2 and 0.5 );
\draw[densely dashed] (0,-1) arc (90:270:0.2 and 0.5 );
\draw (0,-2) arc (270:360:0.2 and 0.5 ) arc (0:90:0.2 and 0.5 );
\draw[ultra thick] (-3.95,0) ellipse (0.25 and 0.65 );
\draw[ultra thick] (3.95,0) ellipse (0.25 and 0.65 );
\path (0,-3) node[text width=10cm]
{\textsc{Figure 2.} The torus $T_{\textup{closed}}$ has one incoming boundary and one outgoing boundary.};
\end{tikzpicture}
\qquad
\begin{tikzpicture}[>=stealth]
\draw (2.5,0) arc (0:60: 2.5 and 1 );
\draw (-2.5,0) arc (180:120:2.5 and 1 );
\draw (-2.5,0) arc (180:360:2.5 and 1 );
\draw[ultra thick] (2.5,0) arc (0:30:2.5 and 1 );
\draw[ultra thick] (-2.5,0) arc (180:210: 2.5 and 1 );
\draw[ultra thick] (-2.5,0) arc (180:150: 2.5 and 1 );
\draw[ultra thick] (2.5,0) arc (360:330:2.5 and 1 );
\draw (-1.5,0) arc (180:360:0.5 and 0.2 )
arc (180:0:0.5 ) arc (180:360:0.5 and 0.2 )
arc (0:180:1.5 );
\draw[->, ultra thick] (2.5,0) -- ++(0,0.1);
\draw[->, ultra thick] (-2.5,0) -- ++(0,0.1);
\path (1.8,-0.7) node[below] {$I$};
\path (1.8,0.7) node[above] {$J$};
\draw[dashed] (1.5,0) arc(0:180:0.5 and 0.2 );
\draw[dashed] (-0.5,0) arc (0:180:0.5 and 0.2 );
\node[text width=10cm] at (0,-1.7)
{\textsc{Figure 3}. The torus $T_{\textup{open}}$ has one boundary with one incoming open string and one outgoing open string.};
\end{tikzpicture}
\end{figure}
By extending diffeomorphisms on $S$ by the identity on $T_{\text{closed}}$ or on $T_{\text{open}}$, we obtain a homomorphism $\varphi: \Gamma(S)\rightarrow \Gamma(S\#T)$, that is, a homomorphism $\varphi:\Gamma_{g,n}^{(m)} \to \Gamma_{g+1,n}^{(m)}$. By restricting to those diffeomorphisms of $S$ preserving completely free boundaries componentwise, we obtain a homomorphism $\varphi: \Gamma_{g,n}^m \to \Gamma_{g+1,n}^m$.
We show that the Harer-Ivanov stability result in the introduction extends to the above mapping class groups with the same stability range and with coefficients in the module $(\sigma_m)^{\otimes r}$.
For this, we first recall Ivanov's formulation of the stability theorem. Let $X,Y$ be compact connected orientable surfaces with nonempty boundary such that $X\subset Y$. Let $\mathcal M_X$ and $\mathcal M_Y$ be their mapping class groups, consisting of isotopy classes of orientation preserving diffeomorphisms fixing the boundaries pointwise. Thus, if $X$ is a surface of genus $g$ with $n$ boundaries, then $\mathcal M_X\cong\Gamma_{g,n}$. Let $g(X)$ be the genus of the surface $X$.
\begin{theorem}[Ivanov \cite{I2}]\label{Ivanov stability} With the above notation, the homomorphism of mapping class groups induced by the inclusion $X\longrightarrow Y$
\begin{equation*}
H_k\bigl(\mathcal M_X\bigr) \longrightarrow H_k\bigl(\mathcal M_Y\bigr)
\end{equation*}
is an isomorphism when $g(X)\ge 2k+1$, and onto when $g(X)\ge 2k$.
\end{theorem}
Harer's original formulation of the stability theorem uses sewing of the torus $T_{\text{closed}}$ along a boundary of a surface and does not cover the case of sewing $T_{\text{open}}$ to a surface. The latter case is covered by Ivanov's formulation of the stability theorem. When the surface $Y$ is obtained by sewing $T_{\text{closed}}$ or $T_{\text{open}}$ to $X$, the actual homomorphism may depend on the choice of (part of) the boundary of $X$ used for sewing, as we saw in the previous section.
\begin{theorem} \label{homology stability} Let $n\ge1$ and $r\ge0$. Let $\varphi$ be a stabilizing map obtained by sewing $T_{\textup{closed}}$ or $T_{\textup{open}}$ to the open-closed cobordism $S$. Then, both of the following homology stabilizing maps are isomorphisms for $g\ge 2k+1$ and are onto for $g\ge2k$\textup{:}
\begin{align}
\varphi_*&: H_k(\Gamma_{g,n}^m) \longrightarrow H_k(\Gamma_{g+1,n}^m),
\label{stability with marked points}\\
\varphi_*&: H_k(\Gamma_{g,n}^{(m)};\sigma_m^r) \longrightarrow H_k(\Gamma_{g+1,n}^{(m)};\sigma_m^r).\label{stability with twisted module}
\end{align}
Here $\sigma_m^r=(\sigma_m)^{\otimes r}$ is a trivial $\Gamma_{g,n}^{(m)}$-module for even $r$.
When $g\ge 2k+1$, the homology groups $H_k(\Gamma_{g,n}^m)$ and $H_k(\Gamma_{g,n}^{(m)})$ are independent of $n\ge1$. Furthermore, when $g\ge2k$, the action of $\Sigma_n$ on both homology groups $H_k(\Gamma_{g,n}^m)$ and $H_k(\Gamma_{g,n}^{(m)})$ is trivial and consequently, the stabilizing map $\varphi_*$ in \eqref{stability with marked points} and \eqref{stability with twisted module} is independent of the choice of boundaries of $S$ used for sewing with $T_{\textup{closed}}$ or $T_{\textup{open}}$.
\end{theorem}
\begin{proof} For the first case, we have the following exact sequence (see for example \cite{ABE}):
\begin{equation}\label{group extension: capping by discs}
1 \longrightarrow \mathbb Z^m \longrightarrow \Gamma_{g,n+m} \longrightarrow \Gamma_{g,n}^m \longrightarrow 1,
\end{equation}
where $\Gamma_{g,n+m}$ is the mapping class group of $S$ consisting of isotopy classes of orientation preserving diffeomorphisms of $S$ fixing all $n+m$ boundaries pointwise. This is a central extension and the kernel $\mathbb Z^m$ is generated by Dehn twists along simple closed curves parallel to $m$ completely free boundaries. We consider the associated Hochschild-Serre spectral sequence
\begin{equation*}
E^2_{p,q}=H_p\bigl(\Gamma_{g,n}^m;H_q(\mathbb Z^m)\bigr) \Longrightarrow
H_{p+q}(\Gamma_{g,n+m}).
\end{equation*}
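The coefficient groups here are explicit. Since the classifying space $B\mathbb Z^m$ is homotopy equivalent to the $m$-torus, the group homology of $\mathbb Z^m$ is the standard exterior algebra, and in particular torsion free:

```latex
\begin{equation*}
H_q(\mathbb Z^m;\mathbb Z)\;\cong\;H_q\bigl((S^1)^m;\mathbb Z\bigr)
\;\cong\;\Lambda^q(\mathbb Z^m)\;\cong\;\mathbb Z^{\binom{m}{q}}.
\end{equation*}
```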
Since the above extension is a central extension, the action of $\Gamma_{g,n}^m$ on $\mathbb Z^m$ is trivial, and thus we have a trivial local system in the above $E^2$-terms. Since the homology $H_*(\mathbb Z^m)$ is torsion free, the above $E^2$-term can be written as $E^2_{p,q}=H_p(\Gamma_{g,n}^m)\otimes H_q(\mathbb Z^m)$. Now sewing $T_{\text{closed}}$ or $T_{\text{open}}$ to $S$ induces the following homomorphism of group extensions.
\begin{equation*}
\begin{CD}
1 @>>> \mathbb Z^m @>>> \Gamma_{g,n+m} @>>> \Gamma_{g,n}^m @>>> 1 \\
@. @| @V{\varphi}VV @V{\varphi}VV @. \\
1 @>>> \mathbb Z^m @>>> \Gamma_{g+1,n+m} @>>> \Gamma_{g+1,n}^m @>>> 1
\end{CD}
\end{equation*}
This diagram induces a homomorphism of spectral sequences
\begin{equation*}
E^2_{p,q}=H_p(\Gamma_{g,n}^m)\otimes H_q(\mathbb Z^m)
\xrightarrow{\varphi_*\otimes 1} {}'E^2_{p,q}=H_p(\Gamma_{g+1,n}^m)\otimes
H_q(\mathbb Z^m)
\end{equation*}
converging to the homomorphism $\varphi_*: H_{p+q}(\Gamma_{g,n+m}) \longrightarrow H_{p+q}(\Gamma_{g+1,n+m})$, which we know to be an isomorphism for $g\ge 2(p+q)+1$ and onto for $g\ge2(p+q)$ by the Harer-Ivanov stability theorem. Using one version of Zeeman's comparison theorem for spectral sequences (see Theorem 1.3 in \cite{I2}), the stability property of the group $\Gamma_{g,n+m}$ (via sewing of $T_{\text{closed}}$ or $T_{\text{open}}$ to open-closed cobordisms $S$) implies the stability property of $\Gamma_{g,n}^m$ in the same range. Namely, $\varphi_*:H_k(\Gamma_{g,n}^m) \rightarrow H_k(\Gamma_{g+1,n}^m)$ is an isomorphism for $g\ge 2k+1$ and onto for $g\ge 2k$. This proves the first part.
For the second stabilizing map, we again consider the Hochschild-Serre spectral sequence associated to the group extension \eqref{group extension} with coefficients in the $\Gamma_{g,n}^{(m)}$-module $\sigma_m^r$:
\begin{equation*}
E^2_{p,q}=H_p\bigl(\Sigma_m; H_q(\Gamma_{g,n}^m;\sigma_m^r)\bigr) \Longrightarrow H_{p+q}(\Gamma_{g,n}^{(m)};\sigma_m^r),
\end{equation*}
where $H_*(\Gamma_{g,n}^m;\sigma_m^r)=H_*(\Gamma_{g,n}^m;\mathbb Z)\otimes\sigma_m^r$ since $\Gamma_{g,n}^m$ acts trivially on the module $\sigma_m^r$. Now, sewing $T_{\text{closed}}$ or $T_{\text{open}}$ to the open-closed cobordism $S$ induces the following homomorphism between group extensions:
\begin{equation}\label{extension diagram}
\begin{CD}
1 @>>> \Gamma_{g,n}^m @>>> \Gamma_{g,n}^{(m)} @>>> \Sigma_m @>>> 1 \\
@. @V{\varphi}VV @V{\varphi}VV @| @.\\
1 @>>> \Gamma_{g+1,n}^m @>>> \Gamma_{g+1,n}^{(m)} @>>> \Sigma_m @>>> 1,
\end{CD}
\end{equation}
which induces a homomorphism of spectral sequences
\begin{equation*}
E^2_{p,q}=H_p\bigl(\Sigma_m; H_q(\Gamma_{g,n}^m)\otimes\sigma_m^r\bigr) \longrightarrow
{}'E^2_{p,q}=H_p\bigl(\Sigma_m; H_q(\Gamma_{g+1,n}^m)\otimes\sigma_m^r\bigr)
\end{equation*}
converging to the homomorphism $\varphi_*:
H _{*}(\Gamma_{g,n}^{(m)};\sigma_m^r) \longrightarrow
H_{*}(\Gamma_{g+1,n}^{(m)};\sigma_m^r)$. Applying a standard version of Zeeman's comparison theorem for spectral sequences (see Theorem 1.2 in \cite{I2}), together with the stability property for $\Gamma_{g,n}^m$ just proved, we conclude that the group $\Gamma_{g,n}^{(m)}$ also enjoys the stability property in the stated range.
To see that the homology group $H_k(\Gamma_{g,n}^m)$ is independent of $n\ge1$ when $g\ge 2k+1$, we consider the following diagram:
\begin{equation*}
\begin{CD}
1 @>>> \mathbb Z^m @>>> \Gamma_{g,n+m} @>>> \Gamma_{g,n}^m @>>> 1 \\
@. @| @VVV @VVV @. \\
1 @>>> \mathbb Z^m @>>> \Gamma_{g,1+m} @>>> \Gamma_{g,1}^m @>>> 1,
\end{CD}
\end{equation*}
where the vertical maps are induced by capping $n-1$ incoming boundaries with discs. By Theorem \ref{Ivanov stability}, the middle vertical map induces a homomorphism in homology which is onto when $g\ge 2k$ and an isomorphism when $g\ge 2k+1$. Thus, using Zeeman's spectral sequence comparison theorem as above, we see that the same holds for the induced homology map on the right. Thus, for $g\ge 2k+1$, we have $H_k(\Gamma_{g,1}^m)\cong H_k(\Gamma_{g,n}^m)$ for any $n\ge1$.
In the above context, if we sew a surface $F_{0,1+n}$ to $F_{g,1+m}$ or to $F_{g,1}^m$, then we obtain homomorphisms going from the bottom row to the top row. Arguing as before with this new diagram with reversed vertical arrows, we deduce the same conclusion.
The proof that the homology group $H_k(\Gamma_{g,n}^{(m)})$ is independent of $n\ge1$ when $g\ge2k+1$ is the same as above using analogous diagrams induced by either capping $n-1$ incoming boundaries or sewing $F_{0,1+n}$ to $F_{g,1+m}$ and to $F_{g,1}^{(m)}$.
Finally, triviality of the action of $\Sigma_n$ on homology groups $H_k(\Gamma_{g,n}^m)$ and $H_k(\Gamma_{g,n}^{(m)})$ can be shown in exactly the same way as in Remark \ref{trivial S_n action} in the stable range $g\ge2k$. Then arguing as in Proposition \ref{transposition}, we can see that the stabilizing map $\varphi_*$ in \eqref{stability with marked points} and \eqref{stability with twisted module} is independent of choices involved in sewing the open-closed cobordism $S$ and $T_{\text{closed}}$ or $T_{\text{open}}$ in the same stable range $g\ge2k$. This completes the proof.
\end{proof}
In Harer's original paper \cite{H}, the stability property of the group $\Gamma_{g,n}^m$ is proved, but the slope of the stability range is $3$ instead of $2$ as in Theorem \ref{homology stability}.
The proof of the stability property of the stabilizing map \eqref{stability with twisted module} above took two steps, using a spectral sequence comparison theorem each time. Alternatively, we can prove the stability of \eqref{stability with twisted module} using a single spectral sequence. Let $F_{g,n}^{(m)}$ be a smooth oriented surface of genus $g$ with $n$ boundaries, $m$ marked points $\{x_1,x_2,\dots,x_m\}$, and a choice of an oriented frame $(u_i,v_i)$ at each marked point $x_i$. Let $\text{Diff}^+\bigl(F_{g,n}^{(m)}\bigr)$ be the topological group of orientation preserving diffeomorphisms which fix the $n$ boundaries pointwise and which possibly permute the $m$ marked points. For each diffeomorphism $f\in \text{Diff}^+\bigl(F_{g,n}^{(m)}\bigr)$, let $f(x_i)=x_{\tau(i)}$, $1\le i\le m$, for some permutation $\tau\in\Sigma_m$, and let $\bigl(f_*(u_i),f_*(v_i)\bigr)=(u_{\tau(i)},v_{\tau(i)})A_i$ for some $A_i\in\text{GL}^+_2(\mathbb R)$, $1\le i\le m$. The correspondence $f\mapsto(\tau;A_1,A_2,\dots,A_m)$ defines a surjective homomorphism onto a wreath product, $\text{Diff}^+\bigl(F_{g,n}^{(m)}\bigr) \longrightarrow \Sigma_m\wr \text{GL}^+_2(\mathbb R)$, whose kernel consists of those diffeomorphisms which fix the $m$ marked points and whose differentials at the $m$ marked points are the identity. Thus the kernel is homotopy equivalent to $\text{Diff}^+(F_{g,n+m})$. Since $n\ge1$, connected components of these diffeomorphism groups are contractible \cite{ES}. Thus, passing to classifying spaces and then replacing diffeomorphism groups by mapping class groups, we obtain a homotopy fibration given in the top row of the next diagram, and a map of fibrations induced by a stabilizing map, as in the diagram below:
\begin{equation*}
\begin{CD}
B\Gamma_{g,n+m} @>>> B\Gamma_{g,n}^{(m)} @>>> B\bigl(\Sigma_m\wr\text{GL}^+_2(\mathbb R)\bigr) \\
@V{B\varphi}VV @V{B\varphi}VV @| \\
B\Gamma_{g+1,n+m} @>>> B\Gamma_{g+1,n}^{(m)} @>>> B\bigl(\Sigma_m\wr\text{GL}^+_2(\mathbb R)\bigr).
\end{CD}
\end{equation*}
Here, since $\text{GL}^+_2(\mathbb R)$ is homotopy equivalent to the circle $S^1$, the classifying space $B\bigl(\Sigma_m\wr\text{GL}^+_2(\mathbb R)\bigr)$ is homotopy equivalent to $E\Sigma_m\times_{\Sigma_m}(\mathbb CP^{\infty})^m$. Now we consider maps between two Serre spectral sequences associated to the above two fibrations in the top and the bottom rows, and by applying Zeeman's comparison theorem, we get the same result as in Theorem \ref{homology stability}. See \cite{BT} for related results on stable mapping class groups.
As before, we say that the group $H_k(\Gamma_{g,n}^{(m)};\sigma_m^d)$ is in the stable range if the stabilizing map $\varphi_*$ mapping to this group is onto and all the subsequent stabilizing maps are isomorphisms:
\begin{equation*}
H_k(\Gamma_{g-1,n}^{(m)};\sigma_m^d) \xrightarrow[\text{onto}]{\varphi_*}
H_k(\Gamma_{g,n}^{(m)};\sigma_m^d)
\xrightarrow[\cong]{\varphi_*}
H_k(\Gamma_{g+1,n}^{(m)};\sigma_m^d)
\xrightarrow[\cong]{\varphi_*} \cdots
\end{equation*}
\begin{Vanishing Theorem}[\textbf{Open-Closed String Topology Case}] Let $M$ be a smooth oriented closed manifold of dimension $d$. Consider open-closed string topology on $M$ in which the set of $D$-brane submanifolds consists only of $M$. Let
\begin{equation*}
\varphi_*:
H_k(\Gamma_{g,n}^{(m)};\sigma_m^d) \longrightarrow
H_k(\Gamma_{g+1,n}^{(m)};\sigma_m^d)
\end{equation*}
be a stabilizing map of mapping class groups obtained by sewing either $T_{\textup{closed}}$ or $T_{\textup{open}}$ to open-closed cobordisms.
\textup{(i)} Open-closed string operations \eqref{open-closed string operation} associated to elements in the image $\textup{Im}\,\varphi_*$ of any stabilizing map $\varphi_*$ are trivial.
\textup{(ii)} Open-closed string operations \eqref{open-closed string operation} associated to any element of a homology group $H_k(\Gamma_{g,n}^{(m)};\sigma_m^d)$ in the stable range are trivial.
\end{Vanishing Theorem}
\begin{proof} As before, using the gluing property of the homological conformal field theory, we only have to observe that the topological quantum field theory operations associated to the surfaces $T_{\textup{closed}}$ and $T_{\textup{open}}$ are trivial. We have already seen that the TQFT operation associated to $T_{\textup{closed}}$ is trivial. The TQFT operation associated to $T_{\textup{open}}$ describes a process in which a closed string splits off from an open string, and later they join together to form an open string. The general form of such a string operation, in which open strings carry submanifold labels $I,J$ at their end points, is given by
\begin{equation*}
\mu_{T_{\textup{open}}}: H_*(P_{IJ}) \longrightarrow H_*(P_{IJ})\otimes H_*(LM) \longrightarrow H_*(P_{IJ}),
\end{equation*}
where $P_{IJ}$ is the space of open strings $\gamma:[0,1] \rightarrow M$ such that $\gamma(0)\in I$ and $\gamma(1)\in J$. In Proposition 3.4 of \cite{T5}, we called such an operation a handle attaching operation, and we showed that handle attaching operations are always trivial for any labels $I,J$. In our present case, we have $I=J=M$. By an argument similar to the one in the closed string topology case, the open-closed string operation associated to any element in the image of any stabilizing map $\varphi_*$ can be factored into a composition of operations, one of whose factors is the TQFT operation associated to $T_{\textup{closed}}$ or $T_{\textup{open}}$, which is trivial. This proves part (i). Part (ii) is a consequence of part (i) and the definition of the stable range.
\end{proof}
As a final remark in this section, we consider a general case in which the set $\mathcal D$ of $D$-brane submanifolds has more than one element. Let $\mathcal D=\{K_1,K_2,\dots, K_h,\dots\}$ be a set of $D$-brane labels. Let $S_{g,n}^m$ be an oriented open-closed cobordism of genus $g$ with $n$ boundaries containing open or closed strings, and with $m$ completely free boundaries. In this case, we consider orientation preserving diffeomorphisms of $S_{g,n}^m$ which fix $n$ boundaries containing open or closed strings pointwise, and which may permute completely free boundaries provided they carry the same label. Suppose there are $m_i$ completely free boundaries carrying the same label $K_i$ for $1\le i\le h$ so that $\sum_im_i=m$. Let $H(\vec{m})=\Sigma_{m_1}\times \Sigma_{m_2}\times\cdots\times \Sigma_{m_h}\subset\Sigma_m$ be the subgroup of the symmetric group corresponding to $\vec{m}=(m_1,m_2,\dots,m_h)$. The corresponding mapping class group is denoted by $\Gamma_{g,n}^{H(\vec{m})}$. Then as before, there exists a homotopy fibration
\begin{equation*}
B\Gamma_{g,n+m} \longrightarrow B\Gamma_{g,n}^{H(\vec{m})} \longrightarrow B\bigl(H(\vec{m})\wr\text{GL}^+_2(\mathbb R)\bigr).
\end{equation*}
Then arguing as before, we can show that the homology group $H_k\bigl(\Gamma_{g,n}^{H(\vec{m})};\sigma_m^r\bigr)$ for $r\ge0$ has a stability property with respect to the genus $g$ as in Theorem \ref{homology stability} with the same range of stability. This stability property would be relevant to a vanishing property of open-closed string operations with a general set of $D$-branes.
\bigskip
\section{Some computations of unstable closed string operations}
We have shown that stable string operations vanish. We can ask: what about unstable string operations? Part (i) of the vanishing theorems applies to stable as well as unstable string operations, and it shows that most unstable string operations vanish. The unstable operations not covered by part (i) are those associated to homology classes of mapping class groups not in the image of any stabilizing map. Can they be nontrivial? We can try to compute some unstable string operations. Unfortunately, not many homology groups of mapping class groups in the unstable range have been calculated (see for example \cite{ABE}), although the situation for stable mapping class groups is much better (\cite{Ga} and \cite{MW}).
In this final section, we compute some genus one unstable string operations in closed string topology for finite dimensional $M$. First we examine string operations associated to the degree $1$ homology group $H_1(\Gamma_{1,p+q})$ of genus $1$ mapping class groups with $p+q\ge1$. This homology group is well known to be $H_1(B\Gamma_{1,p+q})\cong\mathbb Z^{p+q}$, generated by a Dehn twist along a nonseparating simple closed curve on the surface $F_{1,p+q}$ (we can use the meridian of the torus), together with Dehn twists along simple closed curves parallel to $p+q-1$ boundary circles (for an explanation, see for example \cite{K}, Theorem 5.1). It is well known that Dehn twists along nonseparating simple closed curves on any connected surface are conjugate to each other in its mapping class group, and hence they represent the same first homology class. The string operation corresponding to the Dehn twist on a cylinder generating $H_1(\Gamma_{0,1+1})\cong\mathbb Z$ is the BV operator $\Delta:H_*(LM) \to H_{*+1}(LM)$ coming from the homological circle action on the free loop space $LM$. By inserting the BV operator appropriately in a decomposition of the genus 1 surface $F_{1,p+q}$, and using the gluing property of HCFT, we see that all string operations associated to $H_1(\Gamma_{1,p+q})$ vanish. For example, consider the Dehn twist along the meridian of $F_{1,p+q}$. Let $\Psi$ and $\mu$ be the loop coproduct and the loop product maps, both of degree $-d$:
\begin{align*}
\Psi&: H_*(LM) \longrightarrow H_*(LM)\otimes H_*(LM), \\
\mu &: H_*(LM)\otimes H_*(LM) \longrightarrow H_*(LM).
\end{align*}
In Theorem 2.5 of \cite{T3}, we showed that the coproduct is nontrivial only on $H_d(LM)$ and that, for degree reasons, its image under $\Psi$ consists of integral multiples of the generator $[c_0]\otimes [c_0]\in H_0(LM)\otimes H_0(LM)$, where $c_0$ is a constant loop. The string operation associated to the Dehn twist along the meridian is given by
\begin{equation*}
\mu\circ(\Delta\otimes 1)\circ\Psi:H_*(LM) \longrightarrow H_{*-2d+1}(LM),
\end{equation*}
and since $\Delta([c_0])=0$, this string operation is trivial. For Dehn twists along curves parallel to the boundaries, the situation is even simpler: the associated operations are obtained by pre- or post-composing the genus one operator $\mu\circ\Psi$ with the BV operator $\Delta$, and hence they are trivial, too.
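The vanishing $\Delta([c_0])=0$ used here is a standard fact about the BV operator; writing $\rho:S^1\times LM\to LM$ for the loop rotation action, the operator is given by

```latex
\begin{equation*}
\Delta(a)\;=\;\rho_*\bigl([S^1]\times a\bigr),
\end{equation*}
```

and on the subspace $M\subset LM$ of constant loops the action $\rho$ factors through the projection $S^1\times M\to M$, so $\Delta$ vanishes on the image of $H_*(M)\to H_*(LM)$; in particular $\Delta([c_0])=0$.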
Next we compute a string operation associated to a degree $2$ homology class. Since the torus $T_{\text{closed}}$ is used in the stabilization map in Harer's theorem, we examine string operations associated to the homology group $H_*(\Gamma_{1,1+1})$. This homology group is computed in \cite{Go1}: $H_1\cong\mathbb Z\oplus\mathbb Z$, $H_2\cong\mathbb Z\oplus\mathbb Z_2$, $H_3\cong\mathbb Z_2$, and $H_k=0$ for $k\ge4$. As before, the string operations associated to $H_1$ vanish. We examine the string operation associated to a generator of the infinite cyclic summand of $H_2$. To understand this generator, we compute the homology of $\Gamma_{1,2}$ in a different way. We have the following group extension
\begin{equation}\label{group extension: one cap}
1\longrightarrow \mathbb Z \longrightarrow \Gamma_{1,2} \longrightarrow \Gamma_{1,1}^1 \longrightarrow 1,
\end{equation}
where the homomorphism $\Gamma_{1,2} \rightarrow \Gamma_{1,1}^1$ is induced by capping one boundary of the torus with a disc, keeping the center point of the disc as a marked point. The kernel $\mathbb Z$ is generated by the Dehn twist along a simple closed curve parallel to the capped boundary, and it is in the center of $\Gamma_{1,2}$. This group extension is a special case of \eqref{group extension: capping by discs}. Since the action of $\Gamma_{1,1}^1$ on $\mathbb Z$ is trivial, the local system in the following Hochschild-Serre spectral sequence is trivial:
\begin{equation*}
E^2_{p,q}=H_p\bigl(\Gamma_{1,1}^1; H_q(\mathbb Z)\bigr) \Longrightarrow H_{p+q}(\Gamma_{1,2}).
\end{equation*}
The homology of $\Gamma_{1,1}^1$ is listed in the table in \cite{ABE}: $H_1(\Gamma_{1,1}^1)\cong\mathbb Z$, generated by a Dehn twist along any nonseparating simple closed curve (we can take this to be a meridian) on the genus $1$ surface with one boundary and one puncture, and $H_2(\Gamma_{1,1}^1)\cong\mathbb Z_2$. Inspecting the $E^2$-terms of the above spectral sequence, we see that there cannot be any nontrivial differentials for degree reasons, and the spectral sequence collapses. For the extension problem, the group $H_2(\Gamma_{1,2})$ fits into the following exact sequence:
\begin{equation*}
0\longrightarrow E^{\infty}_{1,1} \longrightarrow H_2(\Gamma_{1,2}) \longrightarrow E^{\infty}_{2,0} \longrightarrow 0,
\end{equation*}
where $E^{\infty}_{1,1}\cong\mathbb Z$ and $E^{\infty}_{2,0}\cong\mathbb Z_2$. From this exact sequence, we see that a generator of the infinite cyclic summand of $H_2(\Gamma_{1,2})\cong\mathbb Z\oplus\mathbb Z_2$ comes from a generator of $E^2_{1,1}=H_1(\Gamma_{1,1}^1)\otimes H_1(\mathbb Z)\cong\mathbb Z$. Thus in the fibration $S^1\to B\Gamma_{1,2} \to B\Gamma_{1,1}^1$ associated to \eqref{group extension: one cap}, an infinite cyclic generator of $H_2(B\Gamma_{1,2})$ is given by a cycle $S^1\times S^1 \to B\Gamma_{1,2}$, where the first $S^1$ corresponds to the Dehn twist along the meridian of the torus $T_{\text{closed}}$, representing a generator of $H_1(B\Gamma_{1,1}^1)$, and the second $S^1$ corresponds to the Dehn twist along a simple closed curve parallel to one of the boundaries of $T_{\text{closed}}$, representing a generator of $H_1(B\mathbb Z)=H_1(S^1)$. Thus the corresponding string operation is given by
\begin{equation*}
\Delta\circ\mu\circ(\Delta\otimes 1)\circ\Psi :H_*(LM) \longrightarrow H_{*-d+2}(LM).
\end{equation*}
Since, as before, the coproduct $\Psi$ in $H_*(LM)$ followed by the BV operator $\Delta\otimes 1$ vanishes, the string operation associated to the generator of $\mathbb Z\subset H_2(\Gamma_{1,2})\cong\mathbb Z\oplus\mathbb Z_2$ is trivial.
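The degree reasoning above can be recorded in a small bookkeeping sketch (our own throwaway helper, not part of any library): we tabulate the $E^2$-term $E^2_{p,q}=H_p(\Gamma_{1,1}^1)\otimes H_q(\mathbb Z)$ in low degrees, using only the groups quoted from \cite{ABE}, and read off the graded pieces that contribute to $H_2(\Gamma_{1,2})$ after the collapse argued above.

```python
# Finitely generated abelian groups encoded as (free_rank, torsion_orders).
Z = (1, ())
Z2 = (0, (2,))
ZERO = (0, ())

def tensor(a, b):
    """A (x) B over the integers when B is free, so no Tor terms arise."""
    ra, ta = a
    rb, tb = b
    assert tb == (), "this sketch only handles a free second factor"
    return (ra * rb, ta * rb)

# Low-degree homology quoted from the table in [ABE]:
H_base = {0: Z, 1: Z, 2: Z2}    # H_p(Gamma_{1,1}^1)
H_fiber = {0: Z, 1: Z}          # H_q(Z) = H_q(S^1)

def E2(p, q):
    # E^2_{p,q} = H_p(base) (x) H_q(fiber); the local system is trivial.
    return tensor(H_base.get(p, ZERO), H_fiber.get(q, ZERO))

# Graded pieces of H_2(Gamma_{1,2}), assuming the collapse at E^2:
pieces = [E2(p, 2 - p) for p in range(3)]
print(pieces)  # total: Z (from E_{1,1}) plus Z/2 (from E_{2,0})
```

The output lists $E^2_{0,2}=0$, $E^2_{1,1}\cong\mathbb Z$ and $E^2_{2,0}\cong\mathbb Z_2$, matching the extension $0\to\mathbb Z\to H_2(\Gamma_{1,2})\to\mathbb Z_2\to 0$ in the text.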
For the other genus $1$ surface $T_{\text{open}}=F_{1,1}$ we used for stabilizing maps, the homology of the corresponding mapping class group $\Gamma_{1,1}$ is given by $H_1(\Gamma_{1,1})\cong\mathbb Z$ and $H_k(\Gamma_{1,1})=0$ for $k\ge2$ \cite{ABE}. Thus, there are no interesting homology classes in this case.
For closed string topology of a simply connected infinite dimensional manifold $M$ with finite dimensional $H^*(\Omega M;k)$, the discussion goes through in parallel, and we obtain the same result.
We record our results of the above discussion on unstable genus one closed string operations in the next proposition. Of course similar results can be obtained in the context of unstable genus one open-closed string operations using similar decompositions of open-closed cobordisms.
\begin{proposition} Let $\Gamma_{1,r}$ be the mapping class group of a genus $1$ surface with $r$ boundary components. Then in closed string topology for finite or infinite dimensional $M$, the following hold\textup{:}
\textup{(1)} The closed string operation associated to an arbitrary element of the first homology group $H_1(B\Gamma_{1,r})\cong\mathbb Z^r$ for $r\ge1$ vanishes.
\textup{(2)} The closed string operation associated to a generator of the free summand of the second homology group $H_2(B\Gamma_{1,2})\cong\mathbb Z\oplus\mathbb Z_2$ vanishes.
\end{proposition}
| {
"timestamp": "2008-09-26T10:42:43",
"yymm": "0809",
"arxiv_id": "0809.4561",
"language": "en",
"url": "https://arxiv.org/abs/0809.4561",
"abstract": "We show that in closed string topology and in open-closed string topology with one $D$-brane, higher genus stable string operations are trivial. This is a consequence of Harer's stability theorem and related stability results on the homology of mapping class groups of surfaces with boundaries. In fact, this vanishing result is a special case of a general result which applies to all homological conformal field theories with a property that in the associated topological quantum field theories, the string operations associated to genus one cobordisms with one or two boundaries vanish. In closed string topology, the base manifold can be either finite dimensional, or infinite dimensional with finite dimensional cohomology for its based loop space. The above vanishing result is based on the triviality of string operations associated to homology classes of mapping class groups which are in the image of stabilizing maps.",
"subjects": "Algebraic Topology (math.AT)",
"title": "Stable string operations are trivial",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9748211582993982,
"lm_q2_score": 0.7279754489059775,
"lm_q1q2_score": 0.7096458703160494
} |
https://arxiv.org/abs/1502.04200 | Gaps in the Milnor-Moore spectral sequence and the Hilali conjecture | In his study of Halperin's toral-rank conjecture, M. R. Hilali conjectured that for any simply connected rationally elliptic space $X$, one must have $dim\pi_*(X)\otimes \mathbb{Q} \leq dimH^*(X,\mathbb{Q})$. Let $(\Lambda V, d)$ denote a Sullivan minimal model of $X$ and $d_k$ the first non-zero homogeneous part of the differential $d$. In this paper, we use spectral sequence arguments to prove that if $(\Lambda V, d_k)$ is elliptic, then, there is no gaps in the $E_{\infty}$ term of the Milnor-Moore spectral sequence of $X$. Consequently, we confirm the Hilali conjecture when $V = V^{odd}$ or else when $k\geq 3$ and $(\Lambda V, d_k)$ is elliptic. |
\newcommand{\mathop{.}}{\mathop{.}}
\newcommand{\expo}[2]{#1^{#2}}
\author{Youssef Rami}
\address{D\'epartement de Math\'ematiques \& Informatique,\\
Universit\'e My Ismail, B. P. 11 201 Zitoune, Mekn\`es, Morocco,}
\email{yousfoumadan@gmail.com}
\title
{Gaps in the Milnor-Moore spectral sequence and the Hilali conjecture}
\date{13 March 2015}
\subjclass{Primary 55P62; Secondary 55M30 }
\keywords{Elliptic spaces, Milnor-Moore spectral sequence, Toral-rank, $e_0$-gaps.}
\begin{document}
\maketitle
\selectlanguage{english}
\begin{abstract} In his study of Halperin's toral-rank conjecture, M. R. Hilali conjectured that for any simply connected rationally elliptic space $X$, one must have $dim\pi _*(X)\otimes \mathbb{Q} \leq dimH^*(X,\mathbb{Q})$. Let $(\Lambda V, d)$ denote a Sullivan minimal model of $X$ and $d_k$ the first non-zero homogeneous part of the differential $d$. In this paper, we use spectral sequence arguments to prove that if $(\Lambda V, d_k)$ is elliptic, then there are no gaps in the $E_{\infty}$ term of the Milnor-Moore spectral sequence of $X$. Consequently, we confirm the Hilali conjecture when $V = V^{odd}$ or else when $k\geq 3$ and $(\Lambda V, d_k)$ is elliptic.
\end{abstract}
\section{Introduction}
The rational dichotomy theorem (\cite{F89}) states that a $1$-connected finite CW-complex $X$ is either $\mathbb{Q}$-elliptic or $\mathbb{Q}$-hyperbolic. The former are characterized by the inequalities $dim(\pi _*(X)\otimes \mathbb{Q})< \infty$ and $dim(H^*(X,\mathbb{Q}))< \infty$. Although they are not generic, elliptic spaces are the subject of much work in rational homotopy theory, and several conjectures have been posed about them (refer to \cite{FHT01}, Part VI for details).
Among these we cite the {\it toral-rank conjecture} due to S. Halperin. Recall first that an action of an $n$-dimensional torus on $X$ is said to be {\it almost-free} if all its isotropy groups are finite.
The largest integer $n\geq 1$, denoted $rk(X)$, for which $X$ admits an almost-free $n$-torus action is called the toral-rank of $X$. In \cite{Hal85}, S. Halperin conjectured the following relation between this rank and the rational cohomology of $X$:
\begin{conj}\label{Tc} (The toral-rank conjecture): If $X$ is a simply-connected finite type CW-complex, then $dim(H^*(X,\mathbb{Q}))\geq 2^{rk(X)}$.
\end{conj}
In \cite{Hil90}, M. R. Hilali studied the latter and was led to pose the following conjecture:
\begin{conj}\label{Hc} (The H-conjecture): If $X$ is an elliptic simply-connected space, then $dim(H^*(X,\mathbb{Q})) \geq dim(\pi _*(X)\otimes \mathbb{Q})$.
\end{conj}
This conjecture was resolved in several cases, such as pure elliptic spaces (\cite{Hil90}), formal spaces (\cite{HM08}), hyperelliptic spaces (\cite{BFMM12}), elliptic spaces with formal dimension $\leq 16$ (\cite{OT11}) and a large family of elliptic spaces whose Sullivan minimal model has a homogeneous differential (\cite{HM08}). These results are obtained after translating \ref{Hc} into terms of the Sullivan minimal model of $X$. We state it in a general form as follows:
\begin{conj}\label{AvHC} (The algebraic version H-conjecture): If $(\Lambda V,d)$ is an elliptic Sullivan minimal algebra, then $dimH(\Lambda V,d) \geq dim(V)$.
\end{conj}
The starting point in our treatment of \ref{AvHC} relies on its validity for any model with a homogeneous differential $d$ of length $k\geq 3$, and for $k=2$ with some restrictions (\cite{HM08}).
\\
The main tool we will use to do this is the following spectral sequence, introduced by the author in \cite{Ram12}:
\begin{equation}\label{1} E_k^{p,q} = H^{p,q}(\Lambda V, d_k)\Longrightarrow
H^{p+q}(\Lambda V, d)
\end{equation}
When $k=2$ this is identified with the Milnor-Moore spectral sequence of $X$ (\cite{FH82}, Prop. 9.1.):
$$ Ext_{H_*(\Omega X, \mathbb{Q})}^{p,q}(\mathbb{Q},\mathbb{Q})\Rightarrow H^{p+q}(X,\mathbb{Q}).$$
Further, remark that the $\infty$-term of the latter coincides with that of (\ref{1}). \\
Our first purpose in this paper is to look for possible gaps in $E_{\infty}^{*,*}$, assuming that the spaces under consideration are rationally elliptic.
Indeed, notice that in \cite{KV}, T. Kahl and L. Vandembroucq proved that the first term of the Milnor-Moore spectral sequence
has no gaps, and provided a non-rationally-elliptic finite CW-complex which presents gaps in the term $E_{\infty}^{*,*}$. \\
Henceforth, $(\Lambda V, d)$ is a Sullivan minimal algebra, that is, its differential $d$ has the form $d=\sum
_{i\geq k}d_i$ with $d_i(V)\subseteq \Lambda ^iV$ and $k\geq 2$. For degree reasons, $d_k$ is also a differential.\\
In addition, $H(\Lambda V,d_k)$ admits a second grading given by the lengths of representative cocycles; so $H^+(\Lambda V,d_k) = \oplus _{p\geq 1}H^+_p(\Lambda V, d_k)$ (the elements of $H^+_p(\Lambda V,d_k)$ are represented by homogeneous cocycles of length $p$). Assume that $(\Lambda V,d_k)$ is elliptic. Part $(B)$ of Theorem 2.2 in \cite{Lup02} states that the subspace $H^+_p(\Lambda V,d_k)$ ($p = 1, \ldots , e$) does not reduce to zero. Hence, denoting $[\omega _0] = 1_{\mathbb{Q}}$, we deduce that for any $p=0, \ldots , e$, $E_k^{p,q} \not = 0$ for some $q$. This is expressed by saying that {\it the spectral sequence (\ref{1}) has no gaps in its first term $E_k^{*,*}$}.
Recall that, $(\Lambda V,d_k)$ being elliptic (cf.\S 2), its cohomology $H(\Lambda V,d_k)$ satisfies Poincar\'e-duality (\cite{FH82}) and all the representing cocycles of its fundamental class $\omega$ are homogeneous of length $e = dimV^{odd} + (k-2)dimV^{even}$ (\cite{LM02}). A particular cocycle representing $\omega $ is given by the evaluation map:\\
\centerline{ $ev_{(\Lambda V, d)} : \mathcal{E}xt_{(\Lambda V, d)}(\mathbb {Q}, (\Lambda V, d)) \rightarrow H(\Lambda V, d)$}.\\
introduced by Y. F\'elix et al. in \cite{FHT88} (see \S 3.1 for more details).
It will serve us to obtain a cocycle that survives to the $\infty$-term of (\ref{1}) (see Remark 3.5. (2) for another eventual use of this spectral sequence). \\
In fact this map connects the spectral sequence (\ref{1}) to the following one
(see also \cite{Ram12}):
\begin{equation}\label{2} \mathcal{E}_k^{p,q} = \mathcal{E}xt_{(\Lambda V, d_k)}^{p,q}(\mathbb Q, (\Lambda V,
d_k))\Longrightarrow \mathcal{E}xt_{(\Lambda V, d)}^{p+q}(\mathbb Q,
(\Lambda V, d)).\end{equation}
In the sequel, we shall call (\ref{1}) and (\ref{2}), respectively, the {\it generalized} and the {\it $\mathcal{E}$xt-version generalized Milnor-Moore spectral sequences}.
\\
Our main theorem in this paper gives a partial response to the above purpose.
\begin{thm}
If $X$ is a simply connected finite type space whose Sullivan minimal model $(\Lambda V,d)$ is such that $(\Lambda V,d_k)$ is elliptic, then the $E_{\infty}$ term of the generalized Milnor-Moore spectral sequence (\ref{1}) of $X$ has no gaps.
\end{thm}
As a consequence, we have:
\begin{thm}
For any space $X$ with Sullivan minimal model $(\Lambda V,d)$, the H-conjecture holds if:
\begin{enumerate}
\item $V = V^{odd}$, or else
\item $(\Lambda V,d_k)$ is elliptic and $k\geq 3$.
\end{enumerate}
\end{thm}
We note that the first case in the last theorem gives an improvement of the corollary of Theorem A in \cite{AM}.\\
Finally, as G. Lupton mentioned in his paper, we find it interesting to recall that his main motivation was the following question asked by Y. F\'elix:
\begin{que}
Can an elliptic space have $e_0$-gaps in its cohomology?
\end{que}
Here the cohomology $H^*(X, \mathbb{Q})$ has an $e_0$-gap if it has an element $x$ whose Toomer invariant $e_0(x) = k$ (see \S 2 for the definition of $e_0(x)$), but does not have any element whose Toomer invariant is $k-1$. From Theorem 1.0.4 we deduce the following.
\begin{thm} If $X$ is a simply connected finite type space whose Sullivan minimal model $(\Lambda V,d)$ is such that $(\Lambda V,d_k)$ is elliptic, then $H^*(X, \mathbb{Q})$ has no $e_0$-gaps.
\end{thm}
The rest of the paper is organized as follows: In \S 2, we give a brief summary of the ingredients of rational homotopy theory we will use in the sequel. We also recall the filtrations inducing the spectral sequences (\ref{1}) and (\ref{2}). \S 3 is devoted to the proofs of our results and to some remarks.\\
{\it Acknowledgements:} My interest in the Hilali conjecture comes from several discussions exchanged during the monthly seminar of the Moroccan Research Group in Rational Homotopy theory. I would like to thank all of its members for their perseverance. I am also indebted to Y. F\'elix for his helpful comments on the first version of this work, which allowed me to improve the statements of my results.
\section{Preliminary}
Let $\mathbb K$ be a field of characteristic zero, $V= \oplus _{i=0}^{i=\infty}V^i$ a
graded $\mathbb K$-vector space and $\Lambda V =
Exterior(V^{odd}) \otimes Symmetric(V^{even})$ ($V^i$ is the subspace of elements of degree $i$).
A {\it Sullivan algebra} is a free commutative differential graded algebra $(\Lambda V, d)$ ({\it
cdga} for short) such that $V$ admits a basis $\{x_{\alpha}\}$ indexed by a well-ordered set such that
$dx_{\alpha }\in \Lambda V_{<\alpha }$, where $V_{<\alpha} = \{v_{\beta} \mid \beta < \alpha \}$. Such an algebra is said to be {\it
minimal} if $deg(x_{\alpha })< deg(x_{\beta })$ implies $\alpha
<\beta $. If $V^0= V^1=0$, this is equivalent to saying that $
d(V)\subseteq \oplus _{i=2}^{i=\infty} \Lambda ^iV$.
A {\it Sullivan model} for a commutative differential graded algebra $(A,d)$
is a quasi-isomorphism $(\Lambda V, d)\stackrel{\simeq } \rightarrow (A,d)$ (a morphism inducing an isomorphism in cohomology) whose source is a Sullivan
algebra.
When $\mathbb{K} = \mathbb{Q}$ and $X$ is any simply connected space, let $A(X)$ denote the algebra of polynomial
differential forms associated to it (\cite{Sul78}). The minimal model $(\Lambda V,
d)$ of the latter is called the {\it Sullivan minimal model} of $X$. It is related to the rational homotopy groups of $X$ by
$V^i\cong Hom_{\mathbb Z}(\pi _i(X), \mathbb Q) ;\;\;\; \forall
i\geq 2,$ that is, in the case where $X$ is a finite type
CW-complex, the generators of $V$ correspond to those of $\pi
_*(X)\otimes \mathbb Q$.
Recall in passing that $(\Lambda V,d)$ is an elliptic Sullivan algebra if and only if $V$ and $H(\Lambda V,d)$ are both finite dimensional. In this case, $X$ is said to be rationally elliptic. In addition, its cohomology is a Poincar\'e-duality algebra (\cite{FHT01} Prop. 38.3) with the formal dimension $N = sup\{ p\mid H^p(\Lambda V,d)\not =0\} $ given by the formula (\cite{FHT01}):
$N = dim V^{even} - \sum _{i =1}^{dim V}(-1)^{|x_i|}|x_i|,$ where $\{x_1, x_2, \ldots , x_n\}$ designates a basis of $V$. A complete reference about such algebras is (\cite{FHT01} \S 32). \\
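As a quick sanity check on this formula, here is a small illustrative sketch (the helper name \verb|formal_dimension| is ours, not from the paper): it evaluates $N = \dim V^{even} - \sum_i (-1)^{|x_i|}|x_i|$ from a list of generator degrees, and recovers the expected formal dimensions for the standard minimal models of $S^2$ (generators in degrees $2,3$), $\mathbb{C}P^2$ (degrees $2,5$) and $S^3\times S^5$ (degrees $3,5$).

```python
def formal_dimension(degrees):
    """N = dim V^even - sum_i (-1)^{|x_i|} |x_i|, for generator degrees |x_i|."""
    dim_v_even = sum(1 for d in degrees if d % 2 == 0)
    return dim_v_even - sum((-1) ** d * d for d in degrees)

print(formal_dimension([2, 3]))  # minimal model of S^2: N = 2
print(formal_dimension([2, 5]))  # minimal model of CP^2: N = 4
print(formal_dimension([3, 5]))  # minimal model of S^3 x S^5: N = 8
```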
Now, let $(A,d)$ be an
augmented $\mathbb K$-differential graded algebra and choose an $(A,d)$-semifree
resolution
(\cite{FHT88}) $\rho : (P,d) \stackrel{\simeq}\rightarrow (\mathbb
K,0)$ of $\mathbb K$.
Providing $\mathbb K$ with the $(A,d)$-module structure induced by
the augmentation, we define a chain
map:\\
$ev : Hom_{(A,d)}((P,d), (A,d)) \longrightarrow (A,d)$ by
$f\mapsto f(z)$, where $z\in P$ is a cycle representing $1_{\mathbb{K}}$. Passing to homology we obtain the {\it evaluation
map} of $(A,d)$ :
$$ ev_{(A,d)}: \mathcal{E}xt_{(A, d)}(\mathbb K,
(A, d)) \longrightarrow H(A, d),$$
where $\mathcal{E}xt$ is the differential $Ext$ of Eilenberg
and Moore (\cite{Moo59}). Note that this definition is independent of the choice of $P$ and
$z$ and it is natural with respect to $(A,d)$. \\
The authors of
\cite{FHT88} also defined the concept of a {\it Gorenstein algebra} over any
field $\mathbb K$. If $(A,d)$ is as above, it is Gorenstein provided
$dim(\mathcal{E}xt_{(A,d)}(\mathbb K, (A,d)))=1$.
In the particular case where $(A,d) = (\Lambda V,d)$ is elliptic,
its cohomology $H(\Lambda V,d)$ satisfies the Poincar\'e-duality property over $\mathbb K$, with the formal dimension (\cite{FHT88}, Prop. 5.1): \begin{equation}\label{3}
N = sup\{ p\mid H^p(\Lambda V,d)\not =0\}.
\end{equation}
If $\mathbb{K} = \mathbb{Q}$ and $\{x_1, x_2, \ldots , x_n\}$ designates a basis of $V$, this one has an explicit expression (\cite{FHT88}, Prop. 5.2):
\begin{equation}\label{4}
N = dim V^{even} - \sum _{i =1}^{dim V}(-1)^{|x_i|}|x_i|
\end{equation}
(where the elliptic nature of $(\Lambda V,d)$ is implicit). \\
In addition, if $[h]$ is a generator of $\mathcal{E}xt_{(\Lambda V,d)}(\mathbb K, (\Lambda V,d))$, the
fundamental class of $(\Lambda V,d)$ is precisely $ev_{(\Lambda V,d)}([h]) = [h(1)]$ (\cite{Mur94}).\\
Another invariant connected with the preceding ones is the Toomer invariant. It can be defined in more than one way.
Here we recall its definition in the context of minimal models.
Let
$(\Lambda V, d)$ any Sullivan minimal algebra and denote
$$p_n: \; \Lambda V \rightarrow {\Lambda V}/ {\Lambda ^{\geq
n+1}V},$$ the projection onto the quotient differential graded
algebra obtained by factoring out the differential graded ideal
generated by monomials of length at least $n+1$. {\it The
Toomer invariant } $e_{ \mathbb K}(\Lambda V,d)$ of
$(\Lambda V, d)$ is the smallest integer $n$ such that $p_n$ induces
an injection in cohomology or $\infty $ if there is no such
integer.
For $\mathbb{K}=\mathbb{Q}$, we shall denote $e(\Lambda V,d)$ instead of $e_{ \mathbb
Q}(\Lambda V,d)$.
In fact (cf. \cite{FH82}), $e(\Lambda V,d)$ is also expressed in terms of the Milnor-Moore spectral sequence (which coincides with (\ref{1}) for $k=2$) by: $e(\Lambda V, d)= sup\{p \mid E_{\infty}^{p,q}\not = 0\}$,
or $\infty$ if no such maximum exists.\\
More explicitly, whenever $H(\Lambda V,d)$ has Poincar\'e-duality, we have $$e( \Lambda V,d)=sup\{k \; \mid \; \omega \; \hbox{ can be
represented by } \hbox{a cocycle in}\; \Lambda ^{\geq k}V\},$$
where $\omega$ represents the fundamental class of $(\Lambda V,d)$.
Consider now an arbitrary nonzero cohomology class $x \in H(\Lambda V,d)$. G. Lupton defines (\cite{Lup02}) its Toomer invariant $e_0(x)$ to be the smallest integer $n$ for which $p_n^*(x)\not = 0$. If the set $\{ e_0(x) \mid 0\not = x\in H(\Lambda V,d)\}$ has a maximum, then $e(\Lambda V,d)$ is this maximum. In fact, in this case, we have $e(\Lambda V,d) = e_0(\omega )$.\\
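In these terms, the $e_0$-gap condition of Question 1.0.6 is purely combinatorial, as the following tiny sketch records (the helper \verb|has_e0_gap| is a hypothetical name of ours): given the set of values $e_0(x)$ attained by nonzero classes, a gap means some value $k\geq 2$ is attained while $k-1$ is not (the value $0$ is always attained by the unit).

```python
def has_e0_gap(attained):
    """True if some Toomer value k >= 2 is attained but k - 1 is not."""
    s = set(attained)
    return any(k >= 2 and (k - 1) not in s for k in s)

print(has_e0_gap({0, 1, 2, 3}))  # consecutive values, no gap: False
print(has_e0_gap({0, 1, 3}))     # value 2 missing below 3: True
```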
To finish this preliminary section, we recall the filtrations that induce the spectral sequences (\ref{1})
and (\ref{2}) mentioned in the introduction. This proceeds by introducing a semifree resolution of $\mathbb K$ endowed with a $(\Lambda V,d)$-module structure.
Recall that the suspension $sV$ of $V$ is given by the identification $(sV)^i = V^{i+1}, \forall i\geq 0$. On the graded algebra $\Lambda V\otimes \Lambda sV$, let $S$ be the derivation specified by $S(v)=sv$ and $S(sv)=0$, for all $v\in V$. Define $(\Lambda V\otimes \Lambda (sV), D)$ by putting $D(sv) = -S(dv)$ and $D_{\mid V} =d$. It is an acyclic commutative differential graded algebra called an
{\it acyclic closure of} $(\Lambda V,d)$. Thus it is a $(\Lambda
V,d)$-semifree module, and therefore the projection $(\Lambda V\otimes \Lambda
(sV), D) \stackrel {\simeq } {\longrightarrow} \mathbb K$ is a
semifree resolution of $\mathbb K$. Hence on $Hom_{\Lambda
V}(\Lambda V\otimes \Lambda sV,\Lambda V)$ a differential $\mathcal{D}$ is defined
by $$ \mathcal{D}(f)=d\circ f+ (-1)^{|f|+1}f\circ D.$$
The filtrations in question are defined respectively as
follows: \begin{equation} \label{5} F^p(\Lambda V)= \Lambda ^{\geq
p}V=\bigoplus_{i=p}^{\infty }\Lambda ^{i}V\end{equation}
\begin{equation} \label{6} \mathcal{F}^p=\{f\in Hom_{\Lambda V}(\Lambda V\otimes \Lambda
(sV),\Lambda V)\; \mid \; f(\Lambda (sV))\subseteq \Lambda ^{\geq p}V
\}\end{equation}
\begin{rem} Let $(\Lambda V,d)$ be an elliptic Sullivan minimal algebra.
The chain map
$ ev : (Hom_{\Lambda
V}(\Lambda V\otimes \Lambda sV,\Lambda V),\mathcal{D}) \longrightarrow (\Lambda V,d)$ inducing the evaluation map is clearly filtration preserving. Remark also that (\ref{5}) is a multiplicative filtration of the graded algebra $\Lambda V$; hence (\ref{1}) is a spectral sequence of graded algebras.
\end{rem}
\section{Proofs of our results} Denote by $(\Lambda V,d)$ a Sullivan minimal model of $X$.
In what follows we recall some facts about this model, to clarify our proofs, and some facts about the spectral sequence of a filtered complex, essentially to fix notation and terminology.\\
\subsection{Some facts about $(\Lambda V,d)$}
With notations as in the introduction, assume that $(\Lambda V,d_k)$ is elliptic. Then, on the one hand, by the convergence of (\ref{1}), $(\Lambda V,d)$ is elliptic as well, and the two have the same formal dimension $N$ given by (\ref{4}).
On the other hand, $(\Lambda V,d_k)$ is a Gorenstein algebra,
implying that the spectral sequence (\ref{2}) collapses at its first term $\mathcal{E}xt_{(\Lambda V, d_k)}^{*,*}(\mathbb{Q}, (\Lambda V, d_k))$. Let $[h_k]$ denote a generator of the latter.
Hence $ev_{(\Lambda V, d_k)}([h_k]) = [h_k(1)]\in H^N(\Lambda V,d_k)$ is the fundamental class of $(\Lambda V,d_k)$. It follows that $h_k(1)$ is a cocycle that survives to the $\infty$-term of (\ref{1}). As mentioned in the introduction, since the differential $d_k$ is homogeneous of length $k$, all cocycles representing this fundamental class have the same word length. This is exactly the Toomer invariant $e(\Lambda V,d_k)$.
Following the notation of \cite{Lup02}, we will denote $ h_k(1) =: \omega _e$ and $N =: N_e$.
\subsection{Spectral sequence terminology of a filtered complex} (see for instance \cite{Mer} \S 6):
\\
It should be noted that the first term $H(\Lambda V,d_k)$ of (\ref{1}) corresponds effectively to the $k$-th term $E_{k}^{p,q}$ arising in the construction process of the spectral sequence of the filtered complex $(\Lambda V,d)$. It is given by the formula:
$$E_{k}^{p,q} = Z_{k}^{p,q}/(Z_{k-1}^{p+1,q-1} + B_{k-1}^{p,q}),$$
where
$$Z_{k}^{p,q} = \{ x\in [F^p(\Lambda V)]^{p+q} \mid dx\in [F^{p+k}(\Lambda V)]^{p+q+1}\}$$
$$\hbox{and}\;\;\;\; B_k^{p,q} = d([F^{p-k+1}(\Lambda V)]^{p+q-1})\cap F^p(\Lambda V) = d(Z_{k-1}^{p-k+1,q+k-2}).$$
Recall also that the differential $\delta _k : E_k^{p,q}\rightarrow E_k^{p+k,q-k+1}$ in $E_k^{*,*}$ is induced from the differential $d$ in $(\Lambda V,d)$ by the formula $\delta _k[v]_k = [dv]_k$,
$v$ being any representative in $Z^{p,q}_k$ of the class $[v]_k$ in $E^{p,q}_k$. \\
Put $Z(E_k^{p,q}) := Ker(\delta _k)$ and $B(E_k^{p,q}) := Im(\delta _k)$.
A straightforward argument permits the construction of a natural monomorphism $$I^{p,q}_k : {Z_{k+1}^{p,q} + Z_{k-1}^{p+1,q-1}}/{Z_{k-1}^{p+1,q-1} + dZ_{k-1}^{p-k+1,q+k-2}} \rightarrow E_k^{p,q}$$
It is given by $I^{p,q}_k(\bar{v}_1 + \bar{v}_2) = \overline{v_1 + v_2}$ and it satisfies the relation $\delta _k \circ I^{p,q}_k =0$.
Thus, $Im(I^{p,q}_k) \subseteq Z(E_k^{p,q})$. With a little more analysis, one shows that $I^{p,q}_k$ is surjective, and we then have the isomorphism:
\begin{equation}\label{7}
I^{p,q}_k: {Z_{k+1}^{p,q} + Z_{k-1}^{p+1,q-1}}/{Z_{k-1}^{p+1,q-1} + dZ_{k-1}^{p-k+1,q+k-2}} \stackrel{\cong}{\rightarrow} Z(E_k^{p,q})
\end{equation}
Analogous arguments give the proof of the isomorphism:
\begin{equation}\label{8}
J_k^{p,q}: {dZ_{k}^{p-k,q+k-1} + Z_{k-1}^{p+1,q-1} }/{Z_{k-1}^{p+1,q-1} + dZ_{k-1}^{p-k+1,q+k-2}} \stackrel{\cong}{\rightarrow} B(E_k^{p,q}).
\end{equation}
\subsection{Proof of Theorem 1.0.4.}
\begin{proof}
As mentioned in the introduction, there are no gaps in the first term of the generalized Milnor-Moore spectral sequence (\ref{1}); that is, referring again to the notation of \cite{Lup02}, for any $p=1, \ldots , e$, there exists a nonzero class $[\omega _p]\in H_p^*(\Lambda V,d_k)$. Let $[\omega _0] = 1_{\mathbb{Q}}$ and $H^{*}_{0}(\Lambda V,d_k) = \mathbb{Q}$. So, using the Poincar\'e-duality property, to each $[\omega _p]$ there is associated another nonzero class $[\omega _{e-p}]\in H^{*}_{e-p}(\Lambda V,d_k)$ such that $[\omega _e] = [\omega _p]\otimes [\omega _{e-p}]$ (if $p = e-p$ one can denote $\omega _{e-p} =: \tilde{\omega}_p$).
Recall also that $[\omega _p]$ (resp. $[\omega _{e-p}]$) has degree $n_p = min\{ i \mid H^i_p(\Lambda V,d_k) \not = 0\}$ (resp. $N_{e-p} = max\{ i \mid H^i_{e-p}(\Lambda V,d_k) \not = 0\}$) so that $n_0 =N_0 = 0$, $n_e = N_e = N$ and $n_p + N_{e-p} = N_e$ ($1\leq p \leq e-1$).
\\
In the remainder, $H^{p,q}(\Lambda V, d_k)$ will be identified with $$E_{k}^{p,q} = \bigl(Z_{k}^{p,q}/(Z_{k-1}^{p+1,q-1} + B_{k-1}^{p,q})\bigr)^{p+q}.$$
With this identification, we have $[\omega _p]\in E^{p,n_p-p}_k$ and $[\omega _{e-p}]\in E^{e-p,N_{e-p}-e+p}_k$. We can then take $\omega _p\in Z_{k}^{p,n_p-p}$ and $\omega _{e-p}\in Z_{k}^{e-p,N_{e-p}-e + p}$.
Thereafter, we denote $[\omega _p] =: \bar{\omega} _p$ and $[\omega _{e-p}] =: \bar{\omega} _{e-p}$. By Theorem 2.2. (C) and Lemma 2.1. in \cite{Lup02}, the integers $n_p$ and $N_p$ ($0\leq p\leq e$) satisfy the two equivalent relations:
$$n_2\geq 2n_1,\; n_3\geq n_2 + n_1,\; \ldots , n_{p+1}\geq n_p + n_1,\; \ldots , n_e\geq n_{e-1} + n_1$$
and
$$N_e = N_{e-1} + n_1,\; N_{e-1}\geq N_{e-2} + n_1,\; \ldots \; N_{p+1}\geq N_{p} + n_1,\; \ldots , N_1\geq n_1.$$
Now since $H_1(\Lambda V,d_k)\not = 0$, we have $n_1>0$. Therefore, $\forall 1\leq p\leq e-1$, $$n_p - p\geq n_{p-1} - {(p-1)},\;\; N_p - p\geq N_{p-1} - {(p-1)}\;\; \hbox{and}\;\; N_e = n_e.$$ Regarding the bidegrees $(p, n_p-p)$ and $(e-p, N_{e-p} - (e-p))$ of $\omega _p$ and $\omega _{e-p}$ respectively, it follows that $\forall p = 1, \ldots , e-1$, we necessarily have:
\begin{enumerate}
\item[a)] $\delta _k (\bar{\omega} _p) = \bar{0}$, so that $\bar{\omega} _p \in Z(E_k^{p,n_p-p})$, but we do not know whether it is a $\delta _k$-coboundary. So by the isomorphism (\ref{7}), and due to its homogeneity of length $p$, it is necessarily an element of $Z_{k+1}^{p,n_p-p}$. Hence $d(\omega _p) \in F^{p+k+1}(\Lambda V)$.
\item[b)] $\bar{\omega} _{e-p}$ cannot be a $\delta _k$-coboundary, i.e. $\bar{\omega} _{e-p}\notin B(E_k^{e-p,N_{e-p}-e+p})$.
\item[c)] $\bar{\omega} _{e}$ is a $\delta _k$-cocycle that survives to the $\infty $-term $E_{\infty}^{e, N_e-e}$. In particular we have $\omega _e \in Z_{k+1}^{e,N_e-e}$.
\end{enumerate}
Using Remark 2.0.8. and the identification made above, we have $\bar{\omega} _e = \overline{{\omega} _p \otimes {\omega} _{e-p}}$. Since $\bar{\omega} _e$ survives to the term $E_{k+1}^{*,*}$, we have $\delta _k (\bar{\omega} _e) = \delta _k(\bar{\omega} _p \otimes \bar{\omega} _{e-p}) = \bar{0}$, and then, by homogeneity and the isomorphism (\ref{7}), $\omega _p \otimes \omega _{e-p}\in Z_{k+1}^{e,N_e-e}$. Whence $ d(\omega _e) = d(\omega _p)\otimes \omega _{e-p} \pm \omega _p \otimes d(\omega _{e-p})\in F^{e+k+1}$. Thus $d(\omega _p)\otimes \omega _{e-p}\in F^{e+k+1}$ implies that $\omega _p \otimes d(\omega _{e-p})\in F^{e+k+1}$ as well. It follows that $d(\omega _{e-p})\in F^{e-p+k+1}$ and then $\omega _{e-p}\in Z_{k+1}^{e-p,N_{e-p}-e+p}$. Equivalently, by (\ref{7}), we obtain $\delta _k (\bar{\omega} _{e-p}) = \bar{0}$. That is, $\bar{\omega} _{e-p}$ is a $\delta _k$-cocycle. Finally, using $a)$, $b)$ and $c)$ above, we conclude that the formula $[\omega _e] = [\omega _p]\otimes [\omega _{e-p}]$ is still valid in $E_{k+1}^{e, N_e-e}$. The proof is completed by induction.
\end{proof}
\subsection{Proof of Theorem 1.0.5.}
\begin{proof}
By Theorem 1.0.4. we have $E_{\infty}^{p,n_p -p}\not = 0$ for $p = 1, \ldots ,e$. Hence, using the convergence of (\ref{1}), it follows that $dimH^{n_p}_p(\Lambda V,d)\geq 1$ for $p = 1, \ldots ,e$, where the lower grading in $H^{n_p}_p(\Lambda V,d)$ comes from that of $H^{n_p}_p(\Lambda V,d_k)$. As $dimH^{0}(\Lambda V,d) = 1$, it follows that $dimH(\Lambda V,d)\geq e = dimV^{odd} + (k-2)dimV^{even}$. Hence, if $V = V^{odd}$, $(\Lambda V,d_k)$ is always elliptic and $e = dim(V^{odd})$, so the inequality holds. Now, if $k \geq 3$, we have $e \geq dimV^{odd} + dimV^{even} = dim V$, and immediately $dimH(\Lambda V,d)\geq dim V$.
\end{proof}
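The arithmetic at the end of this proof can be checked mechanically; the following throwaway sketch (helper name ours) simply confirms that $e = \dim V^{odd} + (k-2)\dim V^{even} \geq \dim V^{odd} + \dim V^{even} = \dim V$ whenever $k \geq 3$, over a range of small dimensions.

```python
def e_invariant(dim_odd, dim_even, k):
    """e = dim V^odd + (k - 2) * dim V^even, for a homogeneous differential of length k."""
    return dim_odd + (k - 2) * dim_even

# For k >= 3 the word length e dominates dim V = dim V^odd + dim V^even,
# since the coefficient k - 2 is then at least 1.
ok = all(
    e_invariant(o, v, k) >= o + v
    for o in range(10) for v in range(10) for k in range(3, 8)
)
print(ok)  # True
```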
\subsection{Remark}
\begin{enumerate}
\item Let $X$ be as in Theorem 1.0.4. and $(\Lambda V,d)$ its Sullivan minimal model, where $d=\sum_{i\geq k}d_i$ and $d_2\not =0$.
We would like to use Theorem 1.0.4. to answer the $H$-conjecture for $(\Lambda V,d)$, assuming that it is valid for $(\Lambda V,d_2)$. Unfortunately, in this case, we can only deduce from Theorem 1.0.4. that $dimH(\Lambda V,d)\geq e = dimV^{odd}$. Indeed, even if we assume that $dimH^{*}_p(\Lambda V,d_2)\geq 2$ $(\forall 1\leq p \leq e-1)$, we cannot be sure that more than one basis element in $H^{*}_p(\Lambda V,d_2)$ survives to the $E_{\infty}$-term. This fact is illustrated under the following hypotheses, for which the $H$-conjecture for $(\Lambda V,d_2)$ is resolved (see \cite{EHM} and \cite{HM08} respectively):
\begin{enumerate}
\item The Quillen model of $(\Lambda V,d_2)$ is nilpotent of degree one or two.
\item $ker(d_2 : V^{odd}\rightarrow \Lambda V)$ is nonzero; that is, the rational Hurewicz homomorphism is non-zero in some odd degree.
\end{enumerate}
Obviously, in both cases, by Theorem 1.0.4. (when $k=2$), their cohomologies cannot have $e_0$-gaps. Hence one can ask for a possible relation between the $H$-conjecture, Conjecture 3.4 posed by G. Lupton in \cite{Lup02} and Question 1.0.6 asked by Y. F\'elix.
\item The spectral sequence (\ref{2}) collapses at its first term whenever $dim(V)<\infty$, due to the fact that in this case $(\Lambda V,d_k)$ is a Gorenstein graded algebra. Nevertheless, if $dimH(\Lambda V,d_k) = \infty$, then $[h_k(1)]= 0$ (\cite{Mur94}). But assuming that $(\Lambda V,d)$ is elliptic, the cocycle $h_k(1)$ gives a nonzero class in the $\infty$-term of (\ref{1}). Thus, a possible method for verifying the H-conjecture in the general case is to look for a certain term in the spectral sequence (\ref{1}) that is without gaps. This is a project for future work.
\end{enumerate}
| {
"timestamp": "2015-03-31T02:12:14",
"yymm": "1502",
"arxiv_id": "1502.04200",
"language": "en",
"url": "https://arxiv.org/abs/1502.04200",
"abstract": "In his study of Halperin's toral-rank conjecture, M. R. Hilali conjectured that for any simply connected rationally elliptic space $X$, one must have $dim\\pi_*(X)\\otimes \\mathbb{Q} \\leq dimH^*(X,\\mathbb{Q})$. Let $(\\Lambda V, d)$ denote a Sullivan minimal model of $X$ and $d_k$ the first non-zero homogeneous part of the differential $d$. In this paper, we use spectral sequence arguments to prove that if $(\\Lambda V, d_k)$ is elliptic, then, there is no gaps in the $E_{\\infty}$ term of the Milnor-Moore spectral sequence of $X$. Consequently, we confirm the Hilali conjecture when $V = V^{odd}$ or else when $k\\geq 3$ and $(\\Lambda V, d_k)$ is elliptic.",
"subjects": "Algebraic Topology (math.AT); Commutative Algebra (math.AC)",
"title": "Gaps in the Milnor-Moore spectral sequence and the Hilali conjecture",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.974821157567904,
"lm_q2_score": 0.7279754489059774,
"lm_q1q2_score": 0.7096458697835395
} |
https://arxiv.org/abs/1907.12312 | Unimodular covers of 3-dimensional parallelepipeds and Cayley sums | We show that the following classes of lattice polytopes have unimodular covers, in dimension three: the class of parallelepipeds, the class of centrally symmetric polytopes, and the class of Cayley sums $\text{Cay}(P,Q)$ where the normal fan of $Q$ refines that of $P$. This improves results of Beck et al.~(2018) and Haase et al.~(2008) where the last two classes were shown to be IDP. | \section{Introduction}
A lattice polytope $P\subset \ensuremath{\mathbb{R}}^d$ has the \emph{integer decomposition property} if for every positive integer $n$, every lattice point $p \in nP\cap \ensuremath{\mathbb{Z}}^d$ can be written as a sum of $n$ lattice points in $P$. We abbreviate this by saying that ``$P$ is IDP''. Being IDP is interesting in the context of both enumerative combinatorics (Ehrhart theory) and algebraic geometry (normality of toric varieties). It falls into a hierarchy of several properties each stronger than the previous one; see, e.g., \cite[Section 2.D]{BGbook}, \cite[Sect. 1.2.5]{HPPS-survey}, \cite[p. 2097]{mfo2004}, \cite[p. 2313]{mfo2007}.
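As a concrete illustration of this definition (our own toy check, not from the paper), one can verify the IDP condition by brute force for the unimodular triangle $\Delta=\mathrm{conv}\{(0,0),(1,0),(0,1)\}$, whose dilations $n\Delta$ have lattice points $\{(a,b): a,b\geq 0,\ a+b\leq n\}$:

```python
from itertools import combinations_with_replacement

def simplex_points(n):
    """Lattice points of n * conv{(0,0), (1,0), (0,1)}."""
    return [(a, b) for a in range(n + 1) for b in range(n + 1 - a)]

def is_idp_up_to(nmax):
    """Check that every lattice point of n*Delta is a sum of n lattice points of Delta."""
    P = simplex_points(1)
    return all(
        any(
            (sum(x for x, _ in combo), sum(y for _, y in combo)) == pt
            for combo in combinations_with_replacement(P, n)
        )
        for n in range(1, nmax + 1)
        for pt in simplex_points(n)
    )

print(is_idp_up_to(4))  # True: the decomposition exists in these small dilations
```

Of course a finite check like this only inspects finitely many dilations; for the triangle the full IDP property follows from its unimodular triangulation, as the hierarchy below records.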
Let us here only mention that
\[
P \text{ has a unimodular triangulation}\Rightarrow
P \text{ has a unimodular cover}\Rightarrow
P \text{ is IDP.}
\]
Remember that a \emph{unimodular triangulation} is a triangulation of $P$ into unimodular simplices, and a \emph{unimodular cover} is a collection of unimodular simplices whose union equals $P$.
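In practice, unimodularity of a lattice tetrahedron is a single determinant test: the edge vectors from any vertex must form a basis of $\mathbb{Z}^3$, i.e. have determinant $\pm 1$. A minimal Python sketch (the helper names are ours, not from any library):

```python
def det3(rows):
    # 3x3 determinant by cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = rows
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def is_unimodular(simplex):
    """A lattice tetrahedron is unimodular iff the edge vectors from
    one vertex span the lattice, i.e. their determinant is +-1."""
    v0 = simplex[0]
    edges = [[v[k] - v0[k] for k in range(3)] for v in simplex[1:]]
    return abs(det3(edges)) == 1
```

For instance, the standard simplex is unimodular, while the tetrahedron with fourth vertex $(1,1,2)$ has determinant $2$ and is not.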
Oda (\cite{Oda1997}) posed several questions regarding smoothness and the IDP property for lattice polytopes.
Following \cite{HaaseHof, Tsuchiya}, we say that a pair $(P, Q)$ of lattice polytopes has the integer decomposition property, or that \emph{the pair $(P,Q)$ is IDP}, if
\begin{align*}
\label{eq:mixedIDP}
(P+Q) \cap \ensuremath{\mathbb{Z}}^d = P \cap \ensuremath{\mathbb{Z}}^d + Q \cap \ensuremath{\mathbb{Z}}^d.
\end{align*}
A lattice polytope $Q$ is called \emph{smooth} if it is simple and the primitive edge directions at every vertex form a linear basis for the lattice; equivalently, if the projective toric variety defined by the normal fan of $Q$ is smooth.
The following versions of Oda's questions are now considered conjectures~\cite{HNPS2008,mfo2007}, and they are open even in dimension three:
\begin{conjecture}
\label{conj:Oda}
\begin{enumerate}
\item
\label{itm:smoothIDP}
(Related to problems 2 and 5 in \cite{Oda1997})
Every smooth lattice polytope is IDP.
\item
\label{itm:mixedIDP}
(Related to problems 1, 3, 4, 6 in \cite{Oda1997}) Every pair $(P,Q)$ of lattice polytopes with $Q$ smooth and the normal fan of $Q$ refining that of $P$ is IDP.
\end{enumerate}
\end{conjecture}
When the normal fan of a polytope $Q$ refines that of another polytope $P$, as in the second conjecture, we say that $P$ \emph{is a weak Minkowski summand of $Q$}, since this is easily seen to be equivalent to the existence of a polytope $P'$ such that $P+P' = k Q$ for some dilation constant $k>0$.
This property has the following algebraic implication for the projective toric variety $X_Q$: $P$ is a weak Minkowski summand of $Q$ if and only if the Cartier divisor defined by $P$ on $X_Q$ is \emph{numerically effective}, or ``nef'' (see~\cite[Cor.~6.2.15, Prop.~6.3.12]{CLS}, but observe that what we here call ``weak Minkowski summand'' is simply called ``Minkowski summand'' there).
\medskip
Motivated by these and other questions, several authors have studied the IDP property for different classes of lattice polytopes.
For example, very recently
Beck et al.~\cite{BHHHJKM2019} proved that all smooth centrally symmetric $3$-polytopes are IDP.
More precisely, they show that any such polytope can be covered by lattice
parallelepipeds and unimodular simplices, both of which are trivially IDP.
In \Cref{sec:parallelepipeds} we show:
\begin{theorem}
\label{thm:parallelepipeds}
Every $3$-dimensional lattice parallelepiped has a unimodular cover.
\end{theorem}
This, together with the mentioned result from~\cite{BHHHJKM2019}, gives:
\begin{corollary}
\label{coro:3cs}
Every smooth centrally symmetric lattice $3$-polytope has a unimodular cover.
\qed
\end{corollary}
These results leave open the following important questions:
\begin{question}
Do $3$-dimensional parallelepipeds have unimodular triangulations?
\end{question}
\begin{question}
Higher dimensional parallelotopes (affine images of cubes) are IDP. Do they have unimodular covers?
\end{question}
The two-dimensional case of \Cref{conj:Oda}\eqref{itm:mixedIDP} is known to hold, with three different proofs by Fakhruddin~\cite{Fakhruddin}, Ogata~\cite{Ogata} and Haase et al.~\cite{HNPS2008}. This last one actually shows that smoothness of $Q$ is not needed. In dimension three, however, the conjecture fails without the smoothness assumption. Indeed, if we let $P=Q$ be any non-unimodular \emph{empty tetrahedron}, then $P$ is obviously a weak Minkowski summand of $Q$ but the pair $(P,Q)$ is not IDP. By an empty tetrahedron we mean a lattice tetrahedron containing no lattice points other than its vertices (see the proof of \Cref{lemma:corner} for a classification of them).
An alternative approach to \Cref{conj:Oda}\eqref{itm:mixedIDP} is via Cayley sums, which we discuss in \Cref{sec:cayley}.
Recall that the \emph{Cayley sum} of two lattice polytopes $P,Q\subset \ensuremath{\mathbb{R}}^d$ is the lattice polytope
\[
\operatorname{Cay}(P,Q) := \ensuremath{\mathrm{conv}}\hspace{1pt}(P\times\{0\} \cup Q \times \{1\}) \subset \ensuremath{\mathbb{R}}^{d+1}.
\]
We normally require $\operatorname{Cay}(P,Q)$ to be full-dimensional (otherwise we can delete coordinates), but this does not require $P$ and $Q$ themselves to be full-dimensional; it only requires the linear subspaces parallel to them to jointly span $ \ensuremath{\mathbb{R}}^d$.
As we note in \Cref{prop:mixedIDP}, if the Cayley sum of $P$ and $Q$ is IDP then the pair $(P,Q)$ is IDP.
In particular, the following statement from \Cref{sec:cayley} is stronger than the afore-mentioned result of \cite{Fakhruddin,HNPS2008,Ogata}:
\begin{theorem}
\label{thm:cayley}
Let $Q$ be a lattice polygon, and $P$ a weak Minkowski summand of $Q$. Then the Cayley sum $\operatorname{Cay}(P,Q)$ has a unimodular cover.
\end{theorem}
This has the following two corollaries, also proved in \Cref{sec:cayley}.
A \emph{prismatoid} is a polytope whose vertices all lie in two parallel facets.
A polytope has width $1$ if its vertices lie in two \emph{consecutive} parallel lattice hyperplanes. Observe that this is the same as being ($SL( \ensuremath{\mathbb{Z}},d)$-equivalent to) a Cayley sum.
\begin{corollary}
\label{coro:prismatoid}
Every smooth $3$-dimensional lattice prismatoid has a unimodular cover.
\end{corollary}
\begin{corollary}
\label{coro:width1}
Every integer dilation $kP$, $k\ge 2$, of a lattice $3$-polytope $P$ of width $1$ has a unimodular cover.
\end{corollary}
A special case of the latter is the integer dilations of empty tetrahedra; that these dilations have unimodular covers is \cite[Cor.~4.2]{SantosZiegler} (and is also implicit in \cite{KantorSarkaria}).
\medskip
We believe that the $3$-polytopes in all these statements have unimodular triangulations, but this remains an open question.
\subsection*{Acknowledgements:} We would like to thank Akiyoshi Tsuchiya, Spencer Backman, and Johannes Hofscheier for posing these questions to us and
Christian Haase for helpful discussions.
\section{Parallelepipeds}
\label{sec:parallelepipeds}
The main tool for the proof of \Cref{thm:parallelepipeds} is what we call the parallelepiped circumscribed to a given tetrahedron, defined as follows:
\begin{definition}
\label{def:circunpara}
Let $T$ be a tetrahedron with vertices $p_1$, $p_2$, $p_3$, and $p_4$. Consider the points $q_i= \frac12 (p_1+p_2+p_3+p_4) - p_i$, $i\in [4]$, and let
\[
C(T)=\ensuremath{\mathrm{conv}}\hspace{1pt}(p_i,q_i: i\in[4]).
\]
$C(T)$ is a parallelepiped with facets $\ensuremath{\mathrm{conv}}\hspace{1pt}(p_i, p_j, q_k, q_l)$ for all choices of $\{i,j,k,l\}=[4]$. We call it the \emph{parallelepiped circumscribed} to $T$.
For each $i \in [4]$, let $T_i=\ensuremath{\mathrm{conv}}\hspace{1pt}(q_i, p_j, p_k, p_l)$, with $\{i,j,k,l\}=[4]$; we call these $T_i$ the \emph{corner tetrahedra} of $C(T)$. Together with $T$ they triangulate $C(T)$.
\end{definition}
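To make \Cref{def:circunpara} concrete, the opposite vertices $q_i$ can be computed directly from the $p_i$. A small sketch using exact rational arithmetic (the function name is ours):

```python
from fractions import Fraction

def circumscribed_vertices(p):
    """Return q_i = (p_0 + p_1 + p_2 + p_3)/2 - p_i, i.e. the four
    vertices completing the tetrahedron p to its circumscribed
    parallelepiped C(T); computed exactly with Fractions."""
    half = [sum(Fraction(v[k]) for v in p) / 2 for k in range(3)]
    return [tuple(half[k] - v[k] for k in range(3)) for v in p]
```

For the regular tetrahedron inscribed in the unit cube, with vertices $(0,0,0)$, $(1,1,0)$, $(1,0,1)$, $(0,1,1)$, the $q_i$ are exactly the four remaining cube vertices, matching the picture in \Cref{fig:circumscribed_parall}.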
Modulo an affine transformation, the situation of $T$ and $C(T)$ is exactly that of the regular tetrahedron inscribed in a cube; see \Cref{fig:circumscribed_parall}.
\begin{figure}[htb]
\includegraphics[scale=.25]{circumscribed_parall}
\caption{In red we have a tetrahedron $T$, in black its circumscribed parallelepiped $C(T)$, and in blue the corner simplex $T_4$.}
\label{fig:circumscribed_parall}
\end{figure}
\begin{lemma}
\label{lemma:corner}
Let $T=\ensuremath{\mathrm{conv}}\hspace{1pt}\{p_1,p_2,p_3,p_4\}$ be an empty lattice tetrahedron that is not unimodular. Let $C(T)$ be the parallelepiped circumscribed to $T$ and let $T_1, T_2,T_3$ and $T_4$ be the corresponding corner tetrahedra in $C(T)$. Then, every $T_i$ contains at least one lattice point different from $\{p_1,\dots,p_4\}$.
\end{lemma}
\begin{proof}
By White's classification of empty tetrahedra (\cite{White1964}, see also, e.~g.~\cite[Sect.~4.1]{HPPS-survey}), there is no loss of generality in assuming $T=\ensuremath{\mathrm{conv}}\hspace{1pt}(p_1,p_2,p_3,p_4)$ with
\[
p_1=(0,0,0), \quad
p_2=(1,0,0), \quad
p_3=(0,0,1), \quad
p_4=(a,b,1).
\]
where $b\ge 2$ is the (normalized) volume of $T$, and $a\in \{1,\dots,b-1\}$ satisfies $\gcd(a,b)=1$. This gives
\begin{align*}
q_1=\left(\frac{1+a}2,\frac{b}2,1\right), &&
q_2=\left(\frac{a-1}2,\frac{b}2,1\right), \\
q_3=\left(\frac{1+a}2,\frac{b}2,0\right), &&
q_4=\left(\frac{1-a}2,-\frac{b}2,0\right).
\end{align*}
Then, the inequalities $b\ge 1+a \ge 2$ imply:
\[
u:=(1,1,0)\in \ensuremath{\mathrm{conv}}\hspace{1pt}(p_1p_2q_3) \subset T_4, \quad
v:=(0,-1,0)\in \ensuremath{\mathrm{conv}}\hspace{1pt}(p_1p_2q_4) \subset T_3.
\]
Observe that $u+v=p_1+p_2=q_3+q_4$.
Now, this implies that the quadrilateral $\ensuremath{\mathrm{conv}}\hspace{1pt}(p_1q_4p_2q_3)$ contains a fundamental domain for the lattice $ \ensuremath{\mathbb{Z}}^2\times\{0\}$. Hence, its translate $\ensuremath{\mathrm{conv}}\hspace{1pt}(q_2p_3q_1p_4)$ contains a fundamental domain for $ \ensuremath{\mathbb{Z}}^2\times\{1\}$ and, in particular, it contains at least one lattice point other than $p_3$ and $p_4$. By central symmetry around its center $\left(\frac{a}2,\frac{b}2,1\right)$, $\ensuremath{\mathrm{conv}}\hspace{1pt}(q_2p_3q_1p_4)$ must contain lattice points in both triangles $\ensuremath{\mathrm{conv}}\hspace{1pt}(q_2p_3p_4)\subset T_1$ and $\ensuremath{\mathrm{conv}}\hspace{1pt}(q_1p_3p_4)\subset T_2$.
\end{proof}
\begin{lemma}
\label{lemma:3<4}
Let $P$ be a lattice parallelepiped and let $T\subset P$ be a tetrahedron. Then, at least one of the four corner tetrahedra $T_i$ of the circumscribed parallelepiped $C(T)$ is fully contained in $P$.
\end{lemma}
\begin{proof}
Let us denote the vertices of $T$ by $p_1, p_2, p_3, p_4$ and the vertices of $C(T)$ not in $T$ by $q_1, q_2, q_3, q_4$, with the conventions of \Cref{def:circunpara}.
We call \emph{band} any region of the form $f^{-1}([\alpha,\beta])$ for some functional $f\in ( \ensuremath{\mathbb{R}}^3)^*$ and closed interval $[\alpha,\beta]\subset \ensuremath{\mathbb{R}}$.
We claim that any band containing $T$ must contain at least three of the $q_i$s.
This claim implies that the parallelepiped $P$, which is the intersection of three bands, contains at least one of the $q_i$s and hence it fully contains the corresponding $T_i$.
To prove the claim, suppose that $q_1\not\in B:= f^{-1}([\alpha,\beta])$ for a certain band $B \supset T$.
Without loss of generality, say $f(q_1)<\alpha$. Then the equalities $q_1+q_2=p_3+p_4$ and $q_1+p_1=q_2+p_2$ respectively give:
\begin{gather}
\label{eq:first}
f(q_2) = f(p_3+p_4-q_1) = f(p_3)+f(p_4)-f(q_1) > 2\alpha-\alpha=\alpha,\\
\label{eq:second}
f(q_2) = f(q_1+p_1-p_2) = f(q_1)+(f(p_1)-f(p_2)) < \alpha + (\beta-\alpha) = \beta,
\end{gather}
so that $q_2 \in B$.
Inequality \eqref{eq:first} also implies
\begin{equation}
\label{eq:third}
f(q_1) < f(p_i) < f(q_2), \quad\text{ for $i=3,4$}.
\end{equation}
The translation of vector $\frac12 (p_1+p_2-p_3-p_4)$ sends $q_1,q_2,p_3,p_4$ to $p_2,p_1,q_4,q_3$ (in this order). By applying this to inequality \eqref{eq:third}, we obtain
\[
\alpha \le f(p_2) < f(q_i) < f(p_1) \le \beta, \quad\text{ for $i=3,4$},
\]
so that $q_3,q_4 \in B$.
This finishes the proof of the claim, and of the lemma.
\end{proof}
\begin{corollary}
\label{coro:coverpara}
Let $T$ be an empty lattice tetrahedron contained in a lattice parallelepiped $P$. Then, $T$ can be covered by unimodular tetrahedra contained in $P$.
\end{corollary}
\begin{proof}
We proceed by induction on the (normalized) volume of $T$, which is a positive integer. If this volume equals $1$ then $T$ is unimodular and there is nothing to prove, so we assume $T$ is not unimodular. Let $p_1, p_2, p_3, p_4$ denote the vertices of $T$.
\Cref{lemma:3<4} guarantees that one of the corner tetrahedra $T_i$ of the parallelepiped $C(T)$ is contained in $P$. Without loss of generality, suppose $T_4 = \ensuremath{\mathrm{conv}}\hspace{1pt}(p_1, p_2, p_3,q_4)$ is in $P$. By \Cref{lemma:corner}, we know that $T_4$ contains a lattice point other than the $p_i$s, which we denote by $u$.
Then $S=\ensuremath{\mathrm{conv}}\hspace{1pt}(T\cup \{u\})$ can be triangulated in two different ways: $S=T \cup T'_4$, where $T'_4 = \ensuremath{\mathrm{conv}}\hspace{1pt}(p_1, p_2, p_3, u) \subseteq T_4$ and $S= S_1 \cup S_2 \cup S_3$, with
\[
S_1= \ensuremath{\mathrm{conv}}\hspace{1pt}(p_2,p_3,p_4, u),
S_2=\ensuremath{\mathrm{conv}}\hspace{1pt}(p_1,p_3,p_4, u),
S_3=\ensuremath{\mathrm{conv}}\hspace{1pt}(p_1,p_2,p_4, u).
\]
Each of the tetrahedra $S_i$ has lattice volume strictly smaller than that of $T$ because, for each $i$, $p_i$ is the unique point of $C(T)$ maximizing the distance to the opposite facet $\ensuremath{\mathrm{conv}}\hspace{1pt}(p_j,p_k,p_l)$ of $T$. Thus, $S_1$, $S_2$ and $S_3$ cover $T$ and have volume strictly smaller than $T$. The $S_i$ may not be empty, but we can triangulate them into empty tetrahedra, which by the inductive hypothesis can be covered unimodularly.
\end{proof}
\begin{proof}[Proof of \Cref{thm:parallelepipeds}]
Arbitrarily triangulate the parallelepiped into empty lattice tetrahedra and apply \Cref{coro:coverpara} to these tetrahedra.
\end{proof}
Let us say that a lattice $3$-polytope $P$ \emph{has the circumscribed parallelepiped property} if it satisfies the conclusion of \Cref{lemma:3<4}: ``for every empty tetrahedron $T$ contained in $P$ at least one of the four corner tetrahedra in $C(T)$ is
contained in $P$''.
If this holds then $P$ has a unimodular cover, since then the proofs of \Cref{coro:coverpara} and \Cref{thm:parallelepipeds} work for $P$.
Hence, a positive answer to the following question would imply that every smooth $3$-polytope has a unimodular cover, which in turn implies \Cref{conj:Oda}\eqref{itm:smoothIDP} in dimension three.
\begin{question}
Does every smooth 3-polytope have the circumscribed parallelepiped property?
\end{question}
Our proof that parallelepipeds have the property (\Cref{lemma:3<4}) is based on the fact that they have only three (pairs of) normal vectors. The proof, and the property of being IDP, fail if there are four of them:
\begin{example}[Non-IDP octahedron and triangular prism]
\label{ex:non-IDP}
The following lattice octahedron $Q$ and triangular prism $P$ are not IDP:
\begin{gather}
Q= \ensuremath{\mathrm{conv}}\hspace{1pt}((0,1,1),(1,0,1),(1,1,0),(0,-1,-1),(-1,0,-1),(-1,-1,0)),\\
P=\ensuremath{\mathrm{conv}}\hspace{1pt}((0,1,1),(1,0,1),(1,1,0),(-1,0,0),(0,-1,0),(0,0,-1)).
\end{gather}
Indeed, in both polytopes the only lattice points are the six vertices and the origin. The point $(1,1,1)$ lies in the second dilation but is not the sum of two lattice points in the polytope. Hence, they are not IDP, which implies they do not admit unimodular covers.
\end{example}
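Both failures are easy to confirm by brute force. Taking as given that the only lattice points of $Q$ and $P$ are their six vertices and the origin, the sketch below (the helper name is ours) checks that $(1,1,1)$ is not a sum of two of them. That $(1,1,1)$ lies in the second dilation can be seen from $(1,1,1)=\tfrac14\big(2(0,1,1)+2(1,0,1)+2(1,1,0)+2\cdot\mathbf{0}\big)$, a convex combination of points of $2Q$ (and likewise of $2P$):

```python
from itertools import product

def is_sum_of_two(point, pts):
    """Can `point` be written as u + v with u, v drawn from pts?"""
    return any(tuple(a + b for a, b in zip(u, v)) == tuple(point)
               for u, v in product(pts, repeat=2))

# Lattice points of the octahedron Q and the prism P of this example:
# the six listed vertices plus the origin.
Q_pts = [(0,1,1), (1,0,1), (1,1,0), (0,-1,-1), (-1,0,-1), (-1,-1,0), (0,0,0)]
P_pts = [(0,1,1), (1,0,1), (1,1,0), (-1,0,0), (0,-1,0), (0,0,-1), (0,0,0)]
```

Running `is_sum_of_two((1,1,1), Q_pts)` and `is_sum_of_two((1,1,1), P_pts)` returns `False` for both, confirming the example.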
\section{Cayley sums}
\label{sec:cayley}
Let $P$ and $Q$ be two lattice polytopes in $ \ensuremath{\mathbb{R}}^d$. We do not require them to be full-dimensional, but we assume their Minkowski sum is. Remember that the \emph{Minkowski sum} $P+Q$ and the \emph{Cayley sum} of $P$ and $Q$ are defined as:
\begin{gather*}
P + Q := \{ p+q \in \ensuremath{\mathbb{R}}^d: p\in P, q \in Q\} \subset \ensuremath{\mathbb{R}}^d,\\
\operatorname{Cay} (P,Q) = \ensuremath{\mathrm{conv}}\hspace{1pt}( P\times\{0\} \cup Q\times \{1\}) \subset \ensuremath{\mathbb{R}}^{d+1}.
\end{gather*}
The so-called \emph{Cayley Trick} is the isomorphism
\[
2\operatorname{Cay}(P, Q) \cap ( \ensuremath{\mathbb{R}}^d\times \{1\}) \cong P+Q,
\]
which easily implies:
\begin{proposition}[see, e.g.~\protect{\cite[Thm.~0.4]{Tsuchiya}}]
\label{prop:mixedIDP}
If $\operatorname{Cay}(P,Q)$ is IDP then the pair $(P,Q)$ is IDP.
\end{proposition}
The Cayley Trick also provides the following canonical bijections:
\[
\begin{array}{ccc}
\text{polyhedral subdivisions of $\operatorname{Cay}(P,Q)$} &\leftrightarrow& \text{mixed subdivisions of $P + Q$}\\
\text{triangulations of $\operatorname{Cay}(P,Q)$} &\leftrightarrow& \text{fine mixed subdivisions of $P + Q$}\\
\text{unimodular simplices in $\operatorname{Cay}(P,Q)$} &\leftrightarrow& \text{unimodular prod-simplices in $P + Q$}.
\end{array}
\]
See \cite{DLRS2010} for more details on the Cayley Trick and on triangulations and polyhedral subdivisions of polytopes.
In fact these bijections can be taken as definitions of the objects in the right-hand sides. In particular,
we call \emph{prod-simplices} in $P+Q$ the Minkowski sums $T_1+T_2$ where $T_1\subset P$ and $T_2\subset Q$ are simplices with complementary affine spans. A prod-simplex is \emph{unimodular} if the edge vectors from a vertex of $T_1$ and from a vertex of $T_2$ form a unimodular basis.
\medskip
We now turn our attention to $d=2$, in order to prove \Cref{thm:cayley}. A triangulation of $\operatorname{Cay}(P,Q)\subset \ensuremath{\mathbb{R}}^3$ consists of tetrahedra of types $(1,3)$, $(2,2)$ and $(3,1)$, where the type denotes how many vertices they have in $P$ and in $Q$. Empty tetrahedra of types $(1,3)$ or $(3,1)$, which are Cayley sums of a triangle in $P$ and a point in $Q$, or vice versa, are automatically unimodular. The cases that we need to study are therefore the tetrahedra of type $(2,2)$, which are Cayley sums of a segment $p\subset P$ and a segment $q\subset Q$.
The following lemma, whose proof we postpone to \Cref{sec:the_lemma}, is crucial to understand how to unimodularly cover these tetrahedra.
We use the following conventions: if $a, b$ are points, we denote by $[a,b]$ and $(a,b)$ respectively the closed and open segments with endpoints $a,b$. Given a segment $s=[a,b]$, we denote the vector $\vec s:= b-a$ and the line spanned by $\vec s$ by $\vecline s$.
\begin{lemma}
\label{lemma:cayley}
Let $Q$ be a two-dimensional lattice polytope and $P$ a weak Minkowski summand of it.
Let $p=[p_1,p_2] \subset P$ and $q=[q_1,q_2]\subset Q$ be two primitive and non-parallel lattice segments, and let $\vecline p$ and $ \vecline q$ be the lines spanned by them. If the parallelogram $p + q$ is not unimodular, then at least one of the regions
\[
((p_1, p_2) + \vecline q ) \cap P,
\qquad \text{and} \qquad
((q_1, q_2) + \vecline p ) \cap Q
\]
contains a lattice point. See \Cref{fig:strips}.
\end{lemma}
\begin{figure}[htb]
\scalebox{.75}{\input{strips.pdf_t}}
\caption{The strips of Lemma \ref{lemma:cayley}}
\label{fig:strips}
\end{figure}
\begin{corollary}
\label{coro:covercayley}
Let $T$ be an empty lattice tetrahedron contained in the Cayley sum $\operatorname{Cay}(P,Q)$, where $Q$ is a lattice polygon and $P$ is a weak Minkowski summand of $Q$. Then, $T$ can be covered by unimodular tetrahedra contained in $\operatorname{Cay}(P,Q)$.
\end{corollary}
\begin{proof}
The proof is by induction on the normalized volume of $T$. If this volume is $1$ then $T$ is unimodular and there is nothing to prove, so we assume it is at least $2$. This implies that $T$ is of type $(2,2)$, since empty tetrahedra of types $(1,3)$ and $(3,1)$ are unimodular. Thus, $T$ is the Cayley sum of primitive segments $p=[p_1,p_2]\subset P$ and $q=[q_1,q_2]\subset Q$.
Let $u$ be the lattice point whose existence is guaranteed by \Cref{lemma:cayley}. Assume (the other case is similar) that
\[
u \in ((p_1, p_2) + \vecline q ) \cap P,
\]
and call $t$ the triangle $t=\ensuremath{\mathrm{conv}}\hspace{1pt}( u, p_1, p_2)\subset P$.
Let us denote by $\tilde u$, $\tilde p_1$, $\tilde p_2$, $\tilde q_1$, $\tilde q_2$ the points corresponding to $u, p_1, p_2, q_1, q_2$ in $\operatorname{Cay}(P,Q)$.
That is, $\tilde p_i = p_i\times\{0\}$, $\tilde q_i = q_i\times\{1\}$, and $\tilde u = u\times\{0\}$.
Observe that the assumption $u\in(p_1, p_2) + \vecline q$ implies that one of the segments $[u,q_i]$ crosses the triangle $\ensuremath{\mathrm{conv}}\hspace{1pt}(p_1,p_2,q_j)$, where $\{i,j\}=\{1,2\}$, see \Cref{fig:flip}.
\begin{figure}[htb]
\includegraphics[scale=.3]{flip}
\caption{$[u,q_2]$ intersects $\ensuremath{\mathrm{conv}}\hspace{1pt}(p_1,p_2,q_1)$}
\label{fig:flip}
\end{figure}
In turn, this means that the polytope $\ensuremath{\mathrm{conv}}\hspace{1pt}(\tilde u, \tilde p_1, \tilde p_2, \tilde q_1, \tilde q_2) = \operatorname{Cay}(t,q)$
has the following two triangulations:
\begin{gather*}
\mathcal T^+:= \left\{ \operatorname{Cay}(p,q), \operatorname{Cay}(t, \{q_i\}) \right\},
\\
\mathcal T^-:= \{ \operatorname{Cay}([p_1,u],q), \operatorname{Cay}([p_2,u],q), \operatorname{Cay}(t, \{q_j\}) \}.
\end{gather*}
The tetrahedra $\operatorname{Cay}(t, \{q_j\})$ and $\operatorname{Cay}(t, \{q_i\})$ are unimodular, which implies that $T=\operatorname{Cay}(p,q)$ has volume equal to the sum of the volumes of $\operatorname{Cay}([p_1,u],q)$ and $\operatorname{Cay}([p_2,u],q)$. In particular, we have covered $T$ by the three tetrahedra in $\mathcal T^-$, which are of smaller volume and hence have unimodular covers by inductive assumption.
\end{proof}
\begin{proof}[Proof of \Cref{thm:cayley}]
Arbitrarily triangulate $\operatorname{Cay}(P,Q)$ into empty lattice tetrahedra and apply \Cref{coro:covercayley} to these tetrahedra.
\end{proof}
Let us now show how to derive \Cref{coro:prismatoid,coro:width1} from this theorem.
\emph{Prismatoids} were defined in~\cite{Santos-hirsch} as polytopes whose vertices all lie in two parallel facets. In particular, a \emph{lattice prismatoid} is any $d$-polytope $SL( \ensuremath{\mathbb{Z}},d)$-equivalent to one of the form
\[
\ensuremath{\mathrm{conv}}\hspace{1pt}(Q_1\times\{0\} \cup Q_2 \times \{k\}),
\]
where $Q_1,Q_2$ are lattice $(d-1)$-polytopes and $k\in \ensuremath{\mathbb{Z}}_{>0}$. This is almost a generalization of Cayley sums, which would be the case $k=1$, except the definition of prismatoid requires $Q_1$ and $Q_2$ to be full-dimensional, while the Cayley sum only requires this for $Q_1+Q_2$.
\begin{proposition}
\label{prop:prismatoid}
Let $Q_1$, $Q_2$ be two lattice polygons and consider the prismatoid
\[
P:= \ensuremath{\mathrm{conv}}\hspace{1pt}(Q_1\times\{0\} \cup Q_2 \times \{k\}),
\]
with $k\ge 2$.
If $P\cap( \ensuremath{\mathbb{R}}^2\times\{1\})$ is a lattice polygon then $P$ has a unimodular cover.
\end{proposition}
\begin{proof}
The condition that $P\cap( \ensuremath{\mathbb{R}}^2\times\{1\})$ is a lattice polygon implies the same for $P\cap( \ensuremath{\mathbb{R}}^2\times\{i\})$, for every $i$.
Indeed, the condition implies that every edge of $P$ of the form $[u\times \{0\}, v\times \{k\}]$ has a lattice point at height $1$; since its bottom endpoint is a lattice point as well, the edge then has a lattice point in $ \ensuremath{\mathbb{R}}^2\times\{i\}$ for every $i\in\{0,\dots,k\}$.
Observe that for every $i\in \{1,\dots,k-1\}$ the intersection $P\cap( \ensuremath{\mathbb{R}}^2\times\{i\})$ has the same normal fan as $Q_1+Q_2$. Thus, each slice
\[
P \cap ( \ensuremath{\mathbb{R}}^2\times[i-1,i])
\]
is a Cayley polytope. For $i\in\{2,\dots,k-1\}$, both bases have the same normal fan (and therefore each is a weak Minkowski summand of the other); for $i\in \{1,k\}$ one base is a weak Minkowski summand of the other. We can therefore apply \Cref{thm:cayley} to each slice and combine the covers thus obtained to get a unimodular cover of $P$.
\end{proof}
\begin{proof}[Proof of \Cref{coro:prismatoid,coro:width1}]
In both cases the polytope under study satisfies the hypotheses of \Cref{prop:prismatoid}: in \Cref{coro:prismatoid}, the smoothness of the prismatoid implies that every edge of the form $[u\times \{0\}, v\times \{k\}]$ has lattice points in all slices. In \Cref{coro:width1}, since $P$ has width one, $P\cong \operatorname{Cay}(Q_1, Q_2)$ for some $Q_1$ and $Q_2$. Hence,
\[
kP \cap( \ensuremath{\mathbb{R}}^2\times\{1\}) = (k-1)Q_1 + Q_2.
\qedhere
\]
\end{proof}
\section{Proof of \Cref{lemma:cayley}}
\label{sec:the_lemma}
Let $f_q$ be the primitive lattice functional constant on $q$ and $f_p$ the one constant on $p$. We assume that $f_q(p_1) < f_q(p_2)$ and $f_p(q_1) < f_p(q_2)$.
Observe that in the strip $q +\vecline p$, there is a unique lattice point on the line $f_q(x)=-1$; indeed, since $q$ is primitive, the only way that in the strip there could be two lattice points on $f_q(x)=-1$ is if they were on the boundary of the strip, which would however imply that $p+q$ is a unimodular parallelogram, against our assumptions.
Since translating the polytopes by lattice vectors will not result in any loss of generality, we can assume that $p_1$ is that unique lattice point. That is, $f_q(p_1)=-1$, or equivalently, the triangle $\ensuremath{\mathrm{conv}}\hspace{1pt}(q_1, q_2, p_1)$ is unimodular. Similarly, the unique lattice point in the strip on the line $f_q(x)=1$ is then $q_1+q_2 -p_1$.
We let $H_1=\{f_q(x) \leq 0\}$ and $H_2=\{f_q(x) \geq 0\}$; similarly let $V_1=\{f_p(x) \leq 0\}$ and $V_2=\{f_p(x) \geq 0\}$.
In the figures, we draw $p$ as a vertical segment and $q$ as a horizontal one, so that $H_i \cap V_j$ are the four quadrants.
See \Cref{fig:setup}.
\begin{figure}[htb]
\includegraphics[scale=.3]{setup.png}
\caption{Setup for the proof of \Cref{lemma:cayley}}
\label{fig:setup}
\end{figure}
Let $w=\operatorname{area}(p+q) \geq 2$, where $\operatorname{area}$ denotes the area normalized to a fundamental domain. Then:
\[
w=\operatorname{width}_{f_q}(p + \vecline q )=\operatorname{width}_{f_q}(p)=\operatorname{width}_{f_p}(q)=\operatorname{width}_{f_p}(q +\vecline p ).
\]
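These identities are elementary to check for concrete primitive segments. In the sketch below (helper names are ours) the primitive functional vanishing on a direction $(x,y)$ is represented by the vector $(-y,x)$ divided by the gcd, and the normalized area is the absolute determinant:

```python
from math import gcd

def primitive_normal(v):
    """A primitive lattice functional (as a vector) vanishing on v."""
    g = gcd(abs(v[0]), abs(v[1]))
    return (-v[1] // g, v[0] // g)

def evaluate(f, v):
    # value of the functional f on the vector v
    return f[0]*v[0] + f[1]*v[1]

def normalized_area(u, v):
    """Normalized area |det(u, v)| of the parallelogram spanned by u, v."""
    return abs(u[0]*v[1] - u[1]*v[0])
```

For example, for $\vec p=(1,3)$ and $\vec q=(2,1)$ one gets $w=5$, both as the area of $p+q$ and as the width of each segment with respect to the functional of the other.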
\begin{proof}[Proof of \Cref{lemma:cayley}]
Suppose by contradiction that there is no lattice point as described in the lemma. In particular, no lattice point on the boundary of $Q$ can be in the interior of the strip $q + \vecline p$. Thus the boundary of $Q$ contains two primitive segments which each have one vertex on each side of the strip $q + \vecline p$; we will call these $b=[b_1, b_2], t=[t_1, t_2]$, with $b$ and $t$ crossing the strip in $H_1$ and $H_2$ respectively and the convention that $f_p(b_2) >f_p(b_1)$ and $f_p(t_2) >f_p(t_1)$. This readily implies
\begin{gather}
\label{eq:widthq}
\begin{array}{cc}
f_p(t_1) \leq f_p(q_1), &
f_p(t_2) \geq f_p(q_2), \\
f_p(b_1) \leq f_p(q_1), &
f_p(b_2) \geq f_p(q_2).
\end{array}
\end{gather}
The same holds for $P$ and the strip $p+\vecline q$, and we call the segments $\ell=[l_1, l_2]$ and $r=[r_1, r_2]$, with $\ell$ and $r$ crossing the strip $p + \vecline q$ in $V_1$ and $ V_2$ respectively. The only difference is that in the case that $P$ is one dimensional we have $\ell=r=p$. Again we have
\begin{gather}
\label{eq:widthp}
\begin{array}{cc}
f_q(l_1) \leq f_q(p_1), &
f_q(l_2) \geq f_q(p_2), \\
f_q(r_1) \leq f_q(p_1),&
f_q(r_2) \geq f_q(p_2).
\end{array}
\end{gather}
Observe that a priori one of $\ell$ and $r$ can coincide with $p$, if this is on the boundary of $P$, and similarly one of $t,b$ might be $q$, if this is on the boundary of $Q$.
\begin{claim}
The following inequalities hold,
\begin{align*}
\operatorname{width}_{f_q}(\ell) ,
\operatorname{width}_{f_q}(r) ,
\operatorname{width}_{f_p}(t) ,
\operatorname{width}_{f_p}(b) \geq w.
\end{align*}
Each inequality is strict, unless the segment in question coincides with $p$ or $q$.
\end{claim}
\begin{proof}
The inequality $\geq w$ follows in each case from \eqref{eq:widthp} and \eqref{eq:widthq}.
If one of the inequalities, say the one for $\ell$, is not strict, then $\ell$ has one endpoint on each of the boundary lines of $(p + \vecline q)$. Unless $\ell = p$, one of the endpoints of $\ell$ is not an endpoint of $p$, say $l_1 \neq p_1$. Thus the triangle $T=\ensuremath{\mathrm{conv}}\hspace{1pt}(p_2, p_1, l_1)$ is contained in $P$ and its edge $[p_1, l_1]$ is an integer dilation of $q$. Since $\operatorname{width}_{f_q}(T) =w \geq 2$, $T$ must contain a lattice point in the interior of the strip, contradicting our assumption.
\end{proof}
\begin{claim}
\label{claim:b_and_t}
$f_q(b_2-b_1)$ and $f_q(t_2 - t_1)$ are non-zero and have the same sign. That is, $f_q$ achieves its maximum over $b$ and over $t$ on the same halfplane $V_1$ or $V_2$.
\end{claim}
\begin{proof}
Suppose by contradiction that the maximum of $f_q$ on $t$ lies in $V_1$ and that the maximum on $b$ lies in $V_2$.
Then $Q \cap V_2$ is contained in the open strip $\{-1<f_q(x)<w-1\}$, of width $w$. This cannot contain a translated copy of $r$, since $\operatorname{width}_{f_q}(r) \geq w$, see \Cref{fig:claim2}. This is a contradiction, since $P$ is a weak Minkowski summand of $Q$ and therefore $Q$ must have an edge parallel to $r$.
\end{proof}
\begin{figure}[htb]
\scalebox{.75}{\input{claim2.pdf_t}}
\caption{Illustration of the proof of \Cref{claim:b_and_t}}
\label{fig:claim2}
\end{figure}
We assume w.l.o.g. that the maximum on $t$ (and hence on $b$) is achieved in $V_2$, that is to say, $f_p$ and $f_q$ increase in the same direction along $t$ (and hence along $b$).
\begin{claim}
\label{claim:r}
Assume w.l.o.g.~that $b$ and $t$ either are parallel or their affine spans cross in $V_2$. Then,
\begin{enumerate}
\item The intersection of $Q$ with any line parallel to $p$ in $V_2$ has width w.r.t. $f_q$ strictly smaller than $w$.
\item $f_p(r_2) > f_p(r_1)$, that is, $f_p$ achieves its maximum over $r$ in $H_2$.
\end{enumerate}
\end{claim}
\begin{proof}
For part (1): both $t$ and $b$ must intersect $p$, since otherwise $p_1$ or $p_2$ would be the lattice point we are looking for in $Q$. Their intersections with $p$ are thus the endpoints of a segment of width w.r.t.\ $f_q$ less than $w$, the width of $p$. Since $t$ and $b$ cross in $V_2$, the same is true for any segment parallel to $p$ contained in $Q \cap V_2$.
\begin{figure}[htb]
\scalebox{.75}{\input{claim3.pdf_t}}
\caption{Illustration of the proof of \Cref{claim:r}}
\label{fig:claim3}
\end{figure}
For part (2), if $f_p(r_2) \leq f_p(r_1)$, it would be impossible to fit a translated copy $r'$ of $r$ in the correct side of $Q$: $r$ would need to lie inside the triangle delimited by the affine line $\langle t \rangle$ and the inequalities $f_q(x) \geq f_q(r_1)$, $f_p(x) \leq f_p(r_1)$. However, this triangular region has width less than $w$ w.r.t. $f_q$, by combining part (1) with the fact that $f_p$ and $f_q$ increase in the same direction along $t$, see \Cref{fig:claim3}.
\end{proof}
The last two claims can be summarized as saying that in the pictures $b$, $t$ and $r$ have positive slope. Observe that this implies that $q$ is not in the boundary of $Q$ and $p \neq r$, so both $P$ and $Q$ are full dimensional.
Let $g$ be the primitive lattice functional constant on $[p_1, q_2]$ (and therefore constant also on $[q_1, q_1+q_2-p_1]$). By the assumption on $p_1$, the values of $g$ on these segments differ by $1$. We choose the sign of $g$ so that
\[
g([p_1, q_2])= g( [q_1, q_1+q_2-p_1]) +1.
\]
\begin{claim}
\label{claim:g}
$g(t_1) > g(t_2)$, $g(b_1) > g(b_2)$, and $g(r_1) < g(r_2)$.
\end{claim}
\begin{proof}
Since $b$ and $t$ must respectively separate $p_1$ and $q_1+q_2-p_1$ from the other two vertices of the parallelogram $\ensuremath{\mathrm{conv}}\hspace{1pt}(q_1, p_1, q_2, q_1+q_2-p_1)$, they must respectively intersect its (parallel) edges $[p_1, q_2]$ and $[q_1, q_1+q_2-p_1]$, which implies the stated inequalities for $b$ and $t$.
The same argument applied to the parallelogram $\ensuremath{\mathrm{conv}}\hspace{1pt}(p_1, q_2, p_2, p_1+p_2-q_2)$, yields the inequalities for $\ell$ and $r$.
\end{proof}
\begin{figure}[htb]
\scalebox{.75}{\input{claim4.pdf_t}}
\caption{Illustration of the proof of \Cref{claim:g}}
\label{fig:claim4}
\end{figure}
We are now ready to show a contradiction. Since the normal fan of $Q$ refines that of $P$, $Q$ must have an edge $r'$ which is a translated copy of $r$. Let $r_1'$ and $r_2'$ be its endpoints. Now consider the lattice line $d$ through $r_1'$ parallel to $[p_1, q_2]$, that is, $g$ is constant on $d$. Let $d'$ be the parallel line defined by $g(d')=g(d)+1$.
Consider the segment $s$ contained in $r_1' + \vecline p$ with endpoints $s_1=r_1'$ on $d$ and $s_2$ on $d'$.
Since $t$ separates $q_1$ and $q_1+q_2-p_1$ and $g$ decreases from $t_1$ to $t_2$ (by \Cref{claim:g}), the inequality $g(x)< g(d')$ holds on $Q\cap V_2$,
and in particular for $r_2'$. Since $r_2'$ is a lattice point, $g(r_2')\leq g(d)= g(r_1')$, which contradicts \Cref{claim:g}.
\end{proof}
| {
"timestamp": "2019-07-30T02:24:19",
"yymm": "1907",
"arxiv_id": "1907.12312",
"language": "en",
"url": "https://arxiv.org/abs/1907.12312",
"abstract": "We show that the following classes of lattice polytopes have unimodular covers, in dimension three: the class of parallelepipeds, the class of centrally symmetric polytopes, and the class of Cayley sums $\\text{Cay}(P,Q)$ where the normal fan of $Q$ refines that of $P$. This improves results of Beck et al.~(2018) and Haase et al.~(2008) where the last two classes were shown to be IDP.",
"subjects": "Combinatorics (math.CO)",
"title": "Unimodular covers of 3-dimensional parallelepipeds and Cayley sums",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.97482115683641,
"lm_q2_score": 0.7279754489059775,
"lm_q1q2_score": 0.7096458692510299
} |
https://arxiv.org/abs/1210.2953 | Characterization of Differentiable Copulas | This paper proposes a new class of copulas which characterize the set of all twice continuously differentiable copulas. We show that our proposed new class of copulas is a new generalized copula family that include not only asymmetric copulas but also all smooth copula families available in the current literature. Spearman's rho and Kendall's tau for our new Fourier copulas which are asymmetric are introduced. Furthermore, an approximation method is discussed in order to optimize Spearman's rho and the corresponding Kendall's tau. | \section{Introduction}
Recently, the study of dependence via copulas has attracted increasing attention in finance, actuarial science, biomedical studies, and engineering, because a copula model requires neither normality nor independent and identically distributed observations. Furthermore, the invariance property of copulas has made them attractive in finance. However, most copulas, including the Archimedean family, are symmetric functions, so fitting a copula model to asymmetric data is not appropriate. Liebscher (\cite{Li08}) introduced two methods for the construction of asymmetric multivariate copulas. The first is based on products of copulas; the second generalizes the Archimedean copulas. The resulting copulas are asymmetric but extend the parametric families of copulas only slightly. This paper proposes a new generalized copula family which includes asymmetric copulas in addition to all copula families available in the current literature.
We will characterize the set of all twice continuously differentiable copulas that may arise. We will start with some basic concepts of copulas in the next section. In Section 3 we characterize a class of differentiable copulas and state our main theorem. In Section 4 we will discuss some well-known copulas and introduce a new class of copulas (asymmetric in general) in support of our main result. Finally, in Section 5, we will study the dependence structure by calculating Spearman's rho and Kendall's tau and describe a method to maximize and minimize Spearman's rho using an approximation technique for a special class of copulas that arises from our construction.
\section{Definitions and Preliminaries}
In this section we recall some concepts and results that are necessary to understand a (bivariate) copula. Throughout this paper $\mathbb{I}$ denotes the unit interval $[0, 1]$. A copula is a multivariate distribution function defined on $\mathbb{I}^n$, with uniformly distributed marginals. In this paper, we focus on bivariate (two-dimensional, $n=2$) copulas.
\begin{defn}A bivariate copula is a function $C: \mathbb{I}^2\rightarrow \mathbb{I}$, which satisfies the following properties
\begin{enumerate}
\item[(P1)] $C(0,v)\;=\;C(u,0)\;=\;0,\qquad \forall u,v\in \mathbb{I}$
\item[(P2)] $C(1,u)\;=\;C(u,1)\;=\;u,\qquad \forall u\in \mathbb{I}$
\item[(P3)] $C(u_2,v_2)\;+\;C(u_1,v_1)\;-\;C(u_1,v_2)\;-\;C(u_2,v_1)\;\geq 0,\qquad \forall u_1, u_2, v_1, v_2\in \mathbb{I}$ with $u_1\leq u_2, v_1\leq v_2$.
\end{enumerate}
\end{defn}
The importance of copulas has been growing because of their applications in several fields of research. Their relevance primarily comes from the following theorem of Sklar (see \cite{Sk59}):
\begin{thm}\label{Sklar}
Let $X$ and $Y$ be two random variables with joint distribution function $H$ and marginal distribution functions $F$ and $G$, respectively. Then there exists a copula $C$ such that
$$H(x,y) = C(F(x), G(y))$$
for all $x, y \in \mathbb{R}$. If $F$ and $G$ are continuous, then $C$ is unique. Otherwise, the copula $C$ is uniquely determined on $Ran(F)\times Ran(G)$. Conversely, if $C$ is a copula and $F$ and $G$ are distribution functions, then the function $H$ defined above is a joint distribution function with margins $F$ and $G$.
\end{thm}
Sklar's theorem clarifies the role that copulas play in the relationship between multivariate distribution functions and their univariate margins. A proof of this theorem can be found in \cite{ScSk83}.
\begin{defn}
Fr\'{e}chet lower and upper bounds for copulas are denoted by $C_L$ and $C_U$, respectively, and defined by
$$C_L(u, v) = \max\{u+v-1, 0\},$$
$$C_U(u, v) = \min\{u, v\},$$
for all $(u, v)\in \mathbb{I}^2.$
\end{defn}
It is then well known that for any copula $C(u, v)$,
$$C_L(u, v) \leq C(u, v) \leq C_U(u,v),\qquad \forall (u, v)\in \mathbb{I}^2.$$
In this paper we will mainly concentrate on copulas which are twice continuously differentiable, i.e., $C(\cdot, \cdot) \in \mathcal{C}^2(\mathbb{I}^2)$. With this assumption and using property 3 (P3) of copulas, for all $u_1, u_2, v_1, v_2 \in \mathbb{I}$ with $u_1\leq u_2, v_1\leq v_2$, we have $C(u_2, v_2) - C(u_1, v_2) \geq C(u_2, v_1) - C(u_1, v_1)$. This implies $C(u_2, \cdot)-C(u_1, \cdot)$ is monotonically increasing in the second variable. Hence $\frac{\partial}{\partial v} \left[C(u_2, v)- C(u_1, v)\right] \geq 0$. Therefore $\frac{\partial}{\partial v} C(\cdot, v)$ is increasing in its first variable. Hence we deduce the following lemma:
\begin{lem}\label{copula_pde}
The following two statements are equivalent:
\begin{enumerate}
\item[i.] $C$ is a twice continuously differentiable copula.
\item[ii.] $C$ satisfies the following Dirichlet inequality problem:
\begin{equation}\label{co_pd_in}
\frac{\partial^2}{\partial u\partial v}C(u, v) \geq 0
\end{equation}
with boundary conditions:
$$C(u, 0) \;=\; C(0,v) \;=\; 0,$$
$$C(u, 1) \;=\; C(1,u) \;=\; u.$$
\end{enumerate}
\end{lem}
\section{Characterization of $\mathcal{C}^2$ copulas}
In this section we will start by solving the above problem stated in Lemma \ref{copula_pde} and then we will characterize all twice differentiable copulas.
Suppose $\gamma: \mathbb{I}^2\rightarrow \mathbb{R}$ is a continuous real-valued function. Then inequality (\ref{co_pd_in}) can be reformulated as follows,
$$\frac{\partial^2}{\partial u\partial v}C(u, v) = \gamma^2(u, v).$$
Integrating twice we get
$$C(u, v) = \displaystyle\int_0^v\int_0^u \gamma^2(s, t)\; ds \;dt + H(u) + G(v),$$
where $H$ and $G$ are two arbitrarily real-valued functions of $u$ and $v$, respectively. Using boundary conditions $C(u, 0) = C(0, v) = 0$, we have $H(u) = -G(v) =\; $constant. Hence $C$ has the following form
\begin{equation}\label{copula1}
C(u, v) = \displaystyle\int_0^v\int_0^u \gamma^2(s, t)\; ds \;dt.
\end{equation}
Now using the boundary condition $C(1, v) = v$, we have
$$\displaystyle\int_0^v\int_0^1 \gamma^2(s, t)\; ds \;dt \;=\; v.$$
Differentiating both sides with respect to $v$, we have
\begin{equation}\label{copula2}
\int_0^1 \gamma^2(u, v)\; du \;=\; 1, \;\;\;\;\;\;\; \forall v\in \mathbb{I}.
\end{equation}
Similarly using the fourth boundary condition $C(u, 1) = u$, we have
\begin{equation}\label{copula3}
\int_0^1 \gamma^2(u, v)\; dv \;=\; 1, \;\;\;\;\;\;\; \forall u\in \mathbb{I}.
\end{equation}
This leads to the following theorem,
\begin{thm}\label{MJK}
Suppose $h: \mathbb{I}^2\rightarrow [-1, \infty)$ is a continuous real-valued function such that
\begin{eqnarray}
\int_0^1 h(u, v)\; dv \;=\; 0\;\;\;\;\;\; \forall u\in \mathbb{I},\label{copula_bd1}\\
\int_0^1 h(u, v)\; du \;=\; 0\;\;\;\;\;\; \forall v\in \mathbb{I},\label{copula_bd2}
\end{eqnarray}
then
\begin{equation}\label{copula}
C(u, v) = \displaystyle\int_0^v\int_0^u 1 + h(s, t)\; ds \;dt.
\end{equation}
is a copula. Furthermore, every twice continuously differentiable copula is of the form given in (\ref{copula}).
\end{thm}
\begin{proof}
If we substitute $1 + h(u, v)$ for $\gamma^2(u, v)$ in Equations (\ref{copula1}), (\ref{copula2}) and (\ref{copula3}), it is easy to verify that every twice continuously differentiable copula is given by (\ref{copula}).
To prove the other direction, we have from (\ref{copula}), $\displaystyle\frac{\partial^2C}{\partial u\partial v}\;=\; 1 + h(u, v)$, which is continuous and non-negative on $\mathbb{I}^2$. It is easy to check $C(u, 0) = C(0, v) = 0$ for all $(u, v)\in \mathbb{I}^2$. We also have,
\begin{eqnarray}
C(1, v) &=& \displaystyle\int_0^v\int_0^1 1 + h(s, t)\; ds \;dt\nonumber\\
&=& \displaystyle\int_0^v 1 \; dt\; \nonumber\\
&=& v. \nonumber
\end{eqnarray}
Similarly, we can show that $C(u, 1) = u$. Hence by Lemma \ref{copula_pde} $C$ is a copula.
\end{proof}
\begin{remark}
One important feature of Theorem \ref{MJK} is that in general no symmetry is assumed on $h(u, v)$, and hence none on $C(u, v)$. With the help of the above theorem, in the next section we will construct a class of examples of non-symmetric copulas.
\end{remark}
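Theorem \ref{MJK} lends itself to a direct numerical sanity check. The following sketch (not part of the paper; the perturbation $h$ is our own illustrative choice) integrates $h(s,t)=\cos(2\pi s)\cos(2\pi t)$ in closed form and verifies properties (P1)--(P3) on a grid:

```python
import math

# Hypothetical perturbation h(s, t) = cos(2*pi*s) * cos(2*pi*t):
# continuous, bounded below by -1, and both one-dimensional
# integrals over [0, 1] vanish, so Theorem (MJK) applies.
def h(s, t):
    return math.cos(2 * math.pi * s) * math.cos(2 * math.pi * t)

# Closed form of C(u, v) = int_0^v int_0^u (1 + h) ds dt:
# C(u, v) = u*v + sin(2*pi*u) * sin(2*pi*v) / (4*pi^2)
def C(u, v):
    return u * v + math.sin(2 * math.pi * u) * math.sin(2 * math.pi * v) / (4 * math.pi ** 2)

# Boundary conditions (P1) and (P2)
assert abs(C(0.0, 0.7)) < 1e-12 and abs(C(0.3, 0.0)) < 1e-12
assert abs(C(1.0, 0.7) - 0.7) < 1e-12 and abs(C(0.3, 1.0) - 0.3) < 1e-12

# Rectangle inequality (P3) on a grid: holds because 1 + h >= 0
N = 20
pts = [i / N for i in range(N + 1)]
for i in range(N):
    for j in range(N):
        u1, u2, v1, v2 = pts[i], pts[i + 1], pts[j], pts[j + 1]
        assert C(u2, v2) + C(u1, v1) - C(u1, v2) - C(u2, v1) >= -1e-12
print("C is a copula on the test grid")
```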
\section{Examples}
It is quite easy to verify Theorem \ref{MJK} for well-known $\mathcal{C}^2$ copulas by constructing the corresponding $h$. In this section we begin with these well-known examples and later show how to construct other copulas.
\subsection{Archimedean Copulas}
The Archimedean copulas form a very interesting class, whose investigation arose in the context of associative functions and probabilistic metric spaces (see \cite{ScSk83}) and which today also has many applications in the statistical context (see \cite{Ne99}). Moreover, Archimedean copulas are widely used in applications, especially in finance, insurance and actuarial science, due to their simple form and nice properties.
\begin{defn}
Let $\varphi: \mathbb{I}\rightarrow [0, \infty]$ be a continuous, decreasing function such that $\varphi(1) = 0$. The \textit{pseudo-inverse} of $\varphi$ is the function denoted $\varphi^{[-1]}$ with domain $[0, \infty]$, range $\mathbb{I}$ and defined by
\begin{equation}
\varphi^{[-1]}(t) = \displaystyle \left\{\begin{array}{lcr}
\varphi^{-1}(t) &\text{if}\quad 0\leq t \leq \varphi(0),\\
0 & \text{if}\quad \varphi(0)\leq t \leq \infty\\
\end{array}\right.\nonumber\end{equation}
\end{defn}
\begin{defn}
Let $\varphi: \mathbb{I}\rightarrow [0, \infty]$ be a continuous, convex, strictly decreasing function such that $\varphi(1) = 0$. Then a copula $C$ is called \textit{Archimedean} if it can be written as
$$C(u, v) = \varphi^{[-1]}\left(\varphi(u) + \varphi(v)\right), \;\;\;\;\forall (u, v)\in \mathbb{I}^2.$$
$\varphi$ is called an additive generator of $C$.
\end{defn}
Now assuming $C\in \mathcal{C}^2(\mathbb{I}^2)$, let us define $$h(u, v) = {\varphi^{[-1]}}^{''}\left(\varphi(u) + \varphi(v)\right) \varphi'(u) \varphi'(v) - 1.$$ Then it is evident that $h$ is continuous. Also one can easily show that $C(u, v) = \displaystyle\int_0^u\int_0^v 1 + h(s, t)\;dt\;ds$ and $h$ satisfies both (\ref{copula_bd1}) and (\ref{copula_bd2}).
\begin{ex}[Frank Copulas]
The Frank copula is an Archimedean copula given by
$$C(u, v) = -\displaystyle \frac{1}{\theta} \ln{\left\{1+\frac{(e^{-\theta u} -1)(e^{-\theta v} -1)}{e^{-\theta} -1}\right\}},$$
where $\theta \in \mathbb{R}\setminus \{0\}$ is a parameter. Its generator is given by
$$\varphi_{\theta}(x) = \displaystyle -\ln{\left\{\frac{e^{-\theta x} -1}{e^{-\theta} -1}\right\}}.$$
Then $$\displaystyle {\varphi_{\theta}^{[-1]}}^{''}(x) = \frac{1}{\theta}\frac{e^{\theta + x} (e^\theta-1)}{(1 - e^\theta + e^{\theta + x})^2}, \qquad \varphi_\theta '(x)=\frac{\theta e^{-\theta x}}{e^{-\theta x}-1}.$$
Therefore we have
$$h(u, v) = \displaystyle \frac{\theta e^{\theta (1 + u + v)} (e^\theta -1) }{(e^\theta-e^{\theta(1+u)}-e^{\theta(1+v)}+e^{\theta(u+v)})^2} - 1.$$
\end{ex}
One can easily verify that $h$ satisfies Theorem \ref{MJK}.
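As a numerical cross-check (our own sketch, not from the paper), one can recover $1+h$ for the Frank copula as the finite-difference mixed partial $\partial^2 C/\partial u\,\partial v$ and confirm that each section of $h = \partial^2 C/\partial u\partial v - 1$ integrates to zero, as required by Theorem \ref{MJK}; the parameter value $\theta=2$ is an arbitrary choice:

```python
import math

theta = 2.0  # sample parameter value (our choice)

def frank_C(u, v, th=theta):
    # Frank copula; expm1(x) = e^x - 1 avoids cancellation for small arguments
    return -(1.0 / th) * math.log(
        1.0 + (math.expm1(-th * u) * math.expm1(-th * v)) / math.expm1(-th))

# Mixed partial d^2 C / du dv via central finite differences
def density(u, v, eps=1e-5):
    return (frank_C(u + eps, v + eps) - frank_C(u + eps, v - eps)
            - frank_C(u - eps, v + eps) + frank_C(u - eps, v - eps)) / (4 * eps * eps)

# h = density - 1 should integrate to ~0 in u for each fixed v
for v in (0.2, 0.5, 0.8):
    n = 400
    integral = sum(density((i + 0.5) / n, v) - 1.0 for i in range(n)) / n
    assert abs(integral) < 1e-3, integral
print("each section of h integrates to ~0")
```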
\subsection{FGM Copulas}
The Farlie-Gumbel-Morgenstern (FGM) copula takes the form
$$C(u, v) = uv(1 + \theta (1-u)(1-v)),$$
where $\theta \in [-1, 1]$ is a parameter. The FGM copula was first proposed by Morgenstern (1956). The FGM copula is a perturbation of the product copula; if the dependence parameter $\theta$ equals zero, then the FGM copula collapses to independence. This is attractive primarily because of its simple analytical form. FGM distributions have been widely used in modeling, for tests of association, and in studying the efficiency of nonparametric procedures. However, it is restrictive because this copula is only useful when dependence between the two marginals is modest in magnitude.
Let $h(u, v) = \theta (1-2u)(1-2v)$. It can be easily verified that $h$ agrees with Theorem \ref{MJK}, and it generates the FGM copulas.
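The FGM case can be verified in a few lines. The following sketch (ours, with an arbitrary $\theta = 0.7$) checks that integrating $1+h$ with $h(u,v)=\theta(1-2u)(1-2v)$ reproduces the FGM form, using $\int_0^u (1-2s)\,ds = u(1-u)$:

```python
# FGM check (illustrative): h(u, v) = theta*(1-2u)*(1-2v) integrates (twice)
# to the FGM copula C(u, v) = u*v*(1 + theta*(1-u)*(1-v)).
theta = 0.7

def C_from_h(u, v, th=theta):
    # int_0^u (1-2s) ds = u*(1-u), and likewise in v, so
    # C(u, v) = u*v + th * u*(1-u) * v*(1-v)
    return u * v + th * u * (1 - u) * v * (1 - v)

def C_fgm(u, v, th=theta):
    return u * v * (1 + th * (1 - u) * (1 - v))

for u in (0.0, 0.25, 0.6, 1.0):
    for v in (0.0, 0.3, 0.9, 1.0):
        assert abs(C_from_h(u, v) - C_fgm(u, v)) < 1e-12
print("h generates the FGM copula")
```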
\subsection{Fourier Copulas}
The following lemma will introduce a new class of $\mathcal{C}^2$ copulas.
\begin{lem}\label{MJK_lem1}
Suppose $(a_n), (b_n), (c_n), (d_n) \in \ell^1$, the space of absolutely summable real sequences. Also suppose $h$ is a real-valued function on $\mathbb{I}^2$ defined by
$$h(u, v) = \displaystyle\sum_{n=1}^\infty \left(a_n\cos{(2\pi nu)} + b_n\sin{(2\pi nu)}\right) \sum_{m=1}^\infty \left(c_m\cos{(2\pi mv)} + d_m\sin{(2\pi mv)}\right),$$
then
$$\displaystyle\int_0^1h(u,v)\; du= 0,\;\;\;\;\;\; \forall v\in{\mathbb{I}},$$
$$\displaystyle\int_0^1h(u,v)\; dv= 0,\;\;\;\;\;\; \forall u\in{\mathbb{I}}.$$
Furthermore, if $\displaystyle\sum_{n, m=1} ^\infty \sqrt{a_n^2 + b_n^2} \sqrt{c_m^2+d_m^2} \leq 1$, then $h(u, v)\geq -1$ for all $(u, v) \in \mathbb{I}^2$.
\end{lem}
\begin{proof}
Notice that we can rewrite $h(u, v)$ as,
\begin{eqnarray}
h(u, v) &=& \displaystyle\sum_{n, m} a_n c_m\cos{(2\pi nu)}\cos{(2\pi mv)} + \sum_{n, m} a_n d_m\cos{(2\pi nu)}\sin{(2\pi mv)}\nonumber\\
& & + \sum_{n, m} b_n c_m\sin{(2\pi nu)}\cos{(2\pi mv)} + \sum_{n, m} b_n d_m\sin{(2\pi nu)}\sin{(2\pi mv)}.\nonumber
\end{eqnarray}
The first conclusion follows from the fact that the sequences $(a_n), (b_n), (c_n), (d_n) \in \ell^1$ and $\displaystyle\int_0^1 \sin{(2\pi nx)}\;dx = \displaystyle\int_0^1 \cos{(2\pi nx)}\;dx = 0$, $\forall n\in \mathbb{N}$.
Now to prove $h(u, v) \geq -1$ for all $(u, v) \in \mathbb{I}^2$, first notice that $\displaystyle \sum_n \sqrt{a_n^2+b_n^2} \leq \sum_n \left(|a_n| + |b_n|\right) < \infty$. Also,
$$-\sqrt{a_n^2+b_n^2} \leq a_n\cos{(2\pi n u)} + b_n \sin{(2\pi n u)} \leq \sqrt{a_n^2+b_n^2},$$
for all $u\in \mathbb{I},\; n\in \mathbb{N}$. Hence we have
$$\displaystyle -\sum_{n, m}\sqrt{a_n^2+b_n^2}\sqrt{c_m^2+d_m^2} \leq h(u, v) \leq \sum_{n, m}\sqrt{a_n^2+b_n^2}\sqrt{c_m^2+d_m^2}.$$
Therefore the additional hypothesis on $h$ guarantees that $h(u, v) \geq -1$.
\end{proof}
It is evident that $h(\cdot, \cdot)$ is continuous and therefore by Theorem \ref{MJK}, $C$, defined by
$$C(u, v) = \displaystyle\int_0^v\int_0^u 1 + h(s, t)\; ds \;dt,$$
is a copula, where $h$ is of the form given in Lemma \ref{MJK_lem1}.
Noting that
$$ \sum_{n=1}^\infty a_n \cos (2\pi n u) + b_n \sin(2 \pi nu) = \sum_{n \in \mathbb{Z}\setminus \{0\}} \gamma_n e^{2\pi i nu}, $$
where $ a_n = \gamma_n + \gamma_{-n} $ and $ b_n = i (\gamma_n - \gamma_{-n}) $, or equivalently,
$$ \gamma_n = \overline{\gamma_{-n} } = \frac{a_n - i b_n}{2}, $$ it follows that $ |\gamma_n| = \frac{\sqrt{a_n^2 + b_n^2}}{2} $. Similarly, if $ \delta_m = \overline{\delta_{-m}} = \frac{c_m - i d_m}{2} $,
$$ h(u,v) = \sum_{n,m \in \mathbb{Z}\setminus\{0\}} \gamma_n \delta_m \exp(2\pi i (nu + mv)), $$ with sequences $ (\gamma_n), (\delta_m) \in \ell^1 $ satisfying $ \|(\gamma_n)\|_{\ell^1}\,\|(\delta_m)\|_{\ell^1} \leq \frac{1}{4} $, then $ h $ will satisfy the conclusions of
Lemma \ref{MJK_lem1}, and will generate a copula by eq. (\ref{copula}). This restatement of Lemma \ref{MJK_lem1} clearly indicates that $ h $ is obtained from products of functions in the disc algebra with vanishing zero-th moment and whose coefficient sequences have $ \ell^1 $ norms with product $ \leq \frac{1}{4} $. More precisely,
\begin{thm}\label{generator}
If $ h $ is a function on the $ 2 $-torus arising from the product of functions on the unit circle, each section of $ h $ has a vanishing zero-th moment, and the product of $ \ell^1 $ norms of Fourier coefficients of components of $ h $ is $ \leq \frac{1}{4} $, then $ h$ generates a $ C^2 $-copula.
\end{thm}
This theorem provides a large class of examples from which one may construct copulas with optimal properties ($\rho $ and $ \tau $).
\begin{ex}\label{four_cop_ex}
Let $b_1 = c_1 = 1, b_n = c_n = 0,\; \forall n\neq 1; a_n= d_n = 0\; \forall n$, then
$$h(u,v) = \sin{(2\pi u)}\cos{(2\pi v)}.$$
Define $C$ as follows
\begin{eqnarray}
C(u, v) &=& \displaystyle\int_0^v\int_0^u 1 + \sin{(2\pi s)}\cos{(2\pi t)}\; ds \;dt\nonumber\\
&=& \displaystyle\int_0^v u + \frac{1}{2\pi}\{1-\cos{(2\pi u)}\}\cos{(2\pi t)}\; dt\nonumber\\
&=& \displaystyle uv + \frac{1}{4\pi^2}\{1-\cos{(2\pi u)}\}\sin{(2\pi v)}\nonumber
\end{eqnarray}
Then it is easy to verify that $C$ forms a copula and more importantly it is not symmetric. A contour plot of $C$ is given in Figure \ref{fig:fourier-cop}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{fig_fourier-cop.pdf}
\caption{Contour plot of a Fourier copula in Ex. \ref{four_cop_ex}}
\label{fig:fourier-cop}
\end{figure}
\end{ex}
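A quick script (ours, not part of the paper) confirms that the copula in Example \ref{four_cop_ex} satisfies the boundary conditions and is genuinely asymmetric:

```python
import math

# Fourier copula from the example:
# C(u, v) = u*v + (1 - cos(2*pi*u)) * sin(2*pi*v) / (4*pi^2)
def C(u, v):
    return u * v + (1 - math.cos(2 * math.pi * u)) * math.sin(2 * math.pi * v) / (4 * math.pi ** 2)

# Boundary conditions (P1), (P2)
assert abs(C(0.0, 0.5)) < 1e-12 and abs(C(0.5, 0.0)) < 1e-12
assert abs(C(1.0, 0.37) - 0.37) < 1e-12
assert abs(C(0.42, 1.0) - 0.42) < 1e-12

# Asymmetry: C(u, v) != C(v, u) in general
u, v = 0.2, 0.6
assert abs(C(u, v) - C(v, u)) > 1e-4
print("asymmetry gap at (0.2, 0.6):", abs(C(u, v) - C(v, u)))
```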
\section{Spearman rho and Kendall tau}
The two most commonly used nonparametric measures of association for two random variables are Spearman rho ($\rho$) and Kendall tau ($\tau$). In general they measure different aspects of the dependence structures and hence for many joint distributions these two measures have different values.
\begin{defn}
Suppose $X$ and $Y$ are two random variables with marginal distribution functions $F$ and $G$, respectively. Then \textit{Spearman} $\rho$ is the ordinary (Pearson) correlation coefficient of the transformed random variables $F(X)$ and $G(Y)$, while \textit{Kendall} $\tau$ is the difference between the probability of concordance $\Pr[(X_1 - X_2)(Y_1 - Y_2)>0]$ and the probability of discordance $\Pr[(X_1 - X_2)(Y_1 - Y_2)<0]$ for two independent pairs $(X_1, Y_1)$ and $(X_2, Y_2)$ of observations drawn from the distribution.
\end{defn}
In terms of dependence properties, Spearman $\rho$ is a measure of average quadrant dependence, while Kendall $\tau$ is a measure of average likelihood ratio dependence (see \cite{Ne99} for details). If $X$ and $Y$ are two continuous random variables with copula $C$, then Kendall $\tau$ and Spearman $\rho$ of $X$ and $Y$ are given by,
\begin{equation}\label{tau}
\tau = \displaystyle 4\int\int_{\mathbb{I}^2} C(u, v) \;dC(u, v) - 1
\end{equation}
\begin{equation}\label{rho}
\rho = \displaystyle 12\int\int_{\mathbb{I}^2} C(u, v) \;du\; dv - 3
\end{equation}
\subsection{$\rho$, $\tau$ for Fourier copulas}
From Lemma (\ref{MJK_lem1}), if we define
$$h(u, v) = \displaystyle\sum_n \left(a_n\cos{(2\pi nu)} + b_n\sin{(2\pi nu)}\right) \sum_m\left(c_m\cos{(2\pi mv)} + d_m\sin{(2\pi mv)}\right),$$
where $\displaystyle\sum_{n, m} \sqrt{a_n^2 + b_n^2} \sqrt{c_m^2+d_m^2} \leq 1$, then the Fourier copulas are given by
\begin{eqnarray}
C(u, v) &=& \int_0^v\int_0^u 1 + h(s, t)\; ds \;dt \nonumber\\
&=& uv + \sum_{n, m} \frac{1}{4\pi^2nm}\left[a_n\sin{(2\pi nu)} + b_n\{1-\cos{(2\pi nu)}\}\right] \nonumber\\
& & \qquad \left[c_m\sin{(2\pi mv)} + d_m\{1-\cos{(2\pi mv)}\}\right]\nonumber
\end{eqnarray}
Using (\ref{tau}) and (\ref{rho}), we have
$$\rho = \frac{3}{\pi^2} \sum_{n, m}\frac{b_nd_m}{nm},$$
$$\tau = \frac{2}{\pi^2} \sum_{n, m}\frac{b_nd_m}{nm}.$$
Now since $\displaystyle -\sum_{n, m} \sqrt{a_n^2 + b_n^2} \sqrt{c_m^2+d_m^2} \leq \sum_{n, m}\frac{b_nd_m}{nm} \leq \sum_{n, m} \sqrt{a_n^2 + b_n^2} \sqrt{c_m^2+d_m^2}$, we can conclude that
$$-0.304 \approx -\frac{3}{\pi^2} \leq \rho \leq \frac{3}{\pi^2} \approx 0.304,$$
$$-0.203 \approx -\frac{2}{\pi^2} \leq \tau \leq \frac{2}{\pi^2} \approx 0.203.$$
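These expressions are easy to test numerically. In the sketch below (our illustration, not part of the paper) we take $b_1=d_1=1$ and all other coefficients zero, so the formula predicts $\rho = 3/\pi^2$, and compare this with $12\iint C\,du\,dv-3$ evaluated by a midpoint rule:

```python
import math

# b_1 = d_1 = 1, all other coefficients zero (our choice), so
# h(u, v) = sin(2*pi*u) * sin(2*pi*v) and the formula predicts rho = 3/pi^2.
# Integrating 1 + h twice gives the closed form below.
def C(u, v):
    return u * v + (1 - math.cos(2 * math.pi * u)) * (1 - math.cos(2 * math.pi * v)) / (4 * math.pi ** 2)

# rho = 12 * int int C du dv - 3, midpoint rule on an n x n grid
n = 200
s = 0.0
for i in range(n):
    for j in range(n):
        s += C((i + 0.5) / n, (j + 0.5) / n)
rho = 12 * s / (n * n) - 3

assert abs(rho - 3 / math.pi ** 2) < 1e-3
print("rho ~", rho, ", 3/pi^2 =", 3 / math.pi ** 2)
```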
\begin{remark}
\emph{Fourier copulas can be generalized by writing $h$ as follows,
$$h(s, t) = \sum_{n ,m \in \mathbb{Z}\setminus \{0\}} \alpha_{n, m}\; \exp(2\pi i (ns + mt)),$$
where $\displaystyle \alpha_{n, m} = \overline{\alpha_{-n, -m}} \; \forall n, m \in \mathbb{Z}\setminus \{0\}$ and $\displaystyle \sum_{n, m \in \mathbb{N}} |\alpha_{n, m}| + |\alpha_{-n, m}|$$ \leq \frac{1}{2}$. The latter condition here is to ensure that $ h $ will have range in $ [-1, \infty) $. Notice that for all $n, m \neq 0$, $\alpha_{-n,m}$ is equal to $\overline{\alpha_{n,-m}}$ and hence no additional conditions are necessary to assure $h$ to be real-valued.\\ This yields $\displaystyle \rho = -\frac{3}{\pi^2} \sum_{n, m\in\mathbb{Z}\setminus \{0\}}\frac{\alpha_{n, m}}{nm},$ and $\displaystyle \tau = -\frac{2}{\pi^2} \sum_{n, m\in\mathbb{Z}\setminus \{0\}}\frac{\alpha_{n, m}}{nm},$ and it can be shown again that $ -\frac{3}{\pi^2} \leq \rho \leq \frac{3}{\pi^2}$, and $ -\frac{2}{\pi^2} \leq \tau \leq \frac{2}{\pi^2}.$}
\end{remark}
\subsection{Optimizing $\rho$ for $\mathcal{C}^2$-copulas}
In this section we will optimize Spearman's rho using an approximation method for $\mathcal{C}^2$-copulas with $h$ of the form, $h(x, y) = \varphi(x)\psi(y)$, where $\varphi$ and $\psi$ both are continuous real-valued functions on $\mathbb{I}$. Notice that in this special case, $\rho$ can be simplified into the following form,
\begin{eqnarray}
\rho &=& 12\int\int_{\mathbb{I}^2} \left[\int_{t=0}^v \int_{s=0}^u 1 + \varphi(s) \psi(t) ds\;dt\right] du\;dv - 3\nonumber\\
&=& 12\int_0^1 \int_0^u \varphi(s) ds\;du\int_0^1 \int_0^v \psi(t) dt\;dv.\nonumber
\end{eqnarray}
This suggests that optimizing $\rho$ is equivalent to optimizing both $\int_0^1 \int_0^u \varphi(s) ds\;du$ and $\int_0^1 \int_0^v \psi(t) dt\;dv$.
Define $G(u):=\int_0^u \varphi(s) ds$ and $H(v):=\int_0^v \psi(t) dt$. Then for some positive $\alpha_1, \alpha_2, \beta_1, \beta_2$, the optimization problems become,
\noindent\begin{minipage}{.5\linewidth}
\begin{equation*}
\begin{aligned}
& {\text{max/min}}
& & I_1 = \int_0^1 G(u)\; du \\
& \text{subject to}
& & G(0) = G(1) = 0 \\
&&& -\alpha_1\leq G'(u) \leq \beta_1,
\end{aligned}
\end{equation*}
\end{minipage}%
\begin{minipage}{.5\linewidth}
\begin{equation*}
\begin{aligned}
& {\text{max/min}}
& & I_2 = \int_0^1 H(v)\; dv \\
& \text{subject to}
& & H(0) = H(1) = 0 \\
&&& -\alpha_2\leq H'(v) \leq \beta_2.
\end{aligned}
\end{equation*}
\end{minipage}\\
Although it apparently looks like these optimization problems can be solved independently, they are related by the fact that $G'(u) H'(v) = \varphi(u) \psi(v) \geq -1$. This implies $\min\{-\alpha_1\beta_2, -\alpha_2\beta_1\} \geq -1$. To allow the largest feasible slopes, we choose $\beta_2=(\alpha_1)^{-1}$ and $\alpha_2=(\beta_1)^{-1}$. Since both $I_1$ and $I_2$ can be positive or negative, $\rho_{\text{max}}$ will occur either when both $I_1$ and $I_2$ are maximal or when both are minimal, and $\rho_{\text{min}}$ will occur when one of $I_1$ and $I_2$ is maximal and the other is minimal.
Geometrically, $I_1$ will be maximum if $G$ has the form as in Figure \ref{fig:Gmax} and will be minimum if $G$ has the form as in Figure \ref{fig:Gmin}.
\begin{figure}[htbp]
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=0.8\textwidth]{fig_Gmax.pdf}
\caption{$G$, Maximizing $I_1$}
\label{fig:Gmax}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=0.8\textwidth]{fig_Gmin.pdf}
\caption{$G$, Minimizing $I_2$}
\label{fig:Gmin}
\end{minipage}
\end{figure}
One can easily prove that, in order to optimize $I_1$, $\beta_1$ must be equal to $\alpha_1$. For convenience, from now on we write $\alpha$ for $\alpha_1$. This suggests that if $G(x) = GM(x) = -\alpha |x-0.5|+0.5\alpha$, or $G(x) = Gm(x) = \alpha |x-0.5|-0.5\alpha$, then $I_1$ will be maximized or minimized, respectively. But in either case $G$ is not differentiable at $x=0.5$, and hence $\varphi$ is not continuous. To avoid this, we approximate $G$ by a smooth function as follows: for arbitrarily small $\varepsilon >0$, define $$\widetilde{GM}(x) = -\widetilde{Gm}(x) = \frac{\alpha}{2}\left(\sqrt{1+4\varepsilon^2}-\sqrt{(1-2x)^2+4\varepsilon^2}\right).$$ It is worth noting that $\displaystyle\sup_{x\in \mathbb{I}} \Big\{|\widetilde{GM}(x)-GM(x)|, |\widetilde{Gm}(x)-Gm(x)|\Big\} \rightarrow 0$ as $\varepsilon \rightarrow 0$ and $-\alpha \leq \widetilde{GM}'(x), \widetilde{Gm}'(x) \leq \alpha$.
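The quality of this smooth approximation is simple to check numerically. The sketch below (ours, not part of the paper) verifies on a grid that the uniform error is of order $\alpha\varepsilon$ and that the derivative bound $|\widetilde{GM}'|\leq\alpha$ holds:

```python
import math

alpha = 5.0  # same alpha as in the figures

def GM(x):
    return -alpha * abs(x - 0.5) + 0.5 * alpha

def GM_tilde(x, eps):
    return (alpha / 2) * (math.sqrt(1 + 4 * eps ** 2)
                          - math.sqrt((1 - 2 * x) ** 2 + 4 * eps ** 2))

def GM_tilde_prime(x, eps):
    return alpha * (1 - 2 * x) / math.sqrt((1 - 2 * x) ** 2 + 4 * eps ** 2)

grid = [i / 1000 for i in range(1001)]
sup_prev = float("inf")
for eps in (0.1, 0.03, 0.01):
    sup = max(abs(GM_tilde(x, eps) - GM(x)) for x in grid)
    assert sup < sup_prev                # uniform error decreases with eps
    assert sup <= alpha * eps + 1e-9     # and is of order alpha * eps
    assert all(abs(GM_tilde_prime(x, eps)) <= alpha + 1e-12 for x in grid)
    sup_prev = sup
print("approximation error shrinks as eps -> 0")
```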
\begin{figure}[htbp]
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=0.8\textwidth]{fig_Gapprox1.pdf}
\caption{$GM$, $\widetilde{GM}$ for $\alpha=5$, $\varepsilon = 0.1$}
\label{fig:Gapprox1}
\end{minipage}%
\begin{minipage}{0.5\linewidth}
\centering
\includegraphics[width=0.8\textwidth]{fig_Gapprox2.pdf}
\caption{$GM$, $\widetilde{GM}$ for $\alpha=5$, $\varepsilon = 0.03$}
\label{fig:Gapprox2}
\end{minipage}
\end{figure}
We can similarly optimize $I_2$ by approximating maximum and minimum of $H$ by the following functions $$\widetilde{HM}(x) = -\widetilde{Hm}(x) = \frac{1}{2\alpha}\left(\sqrt{1+4\varepsilon^2}-\sqrt{(1-2x)^2+4\varepsilon^2}\right).$$
Hence optimum values of $\rho$ will be obtained by approximating $h$ by the following functions,
$$h^\varepsilon_{\text{max}}(x, y) = \widetilde{GM}'(x)\widetilde{HM}'(y) = \widetilde{Gm}'(x)\widetilde{Hm}'(y) = \frac{(1-2x)(1-2y)}{\sqrt{(1-2x)^2+4\varepsilon^2}\sqrt{(1-2y)^2+4\varepsilon^2}},$$
$$h^\varepsilon_{\text{min}}(x, y) = \widetilde{GM}'(x)\widetilde{Hm}'(y) = \widetilde{Gm}'(x)\widetilde{HM}'(y) = -\frac{(1-2x)(1-2y)}{\sqrt{(1-2x)^2+4\varepsilon^2}\sqrt{(1-2y)^2+4\varepsilon^2}}.$$
Notice that each of $h^\varepsilon_{\text{max}}$ and $h^\varepsilon_{\text{min}}$ generates a copula, as it satisfies all the hypotheses of Theorem \ref{MJK}. Then the corresponding Spearman's rho and Kendall's tau are given by,
$$\rho^\varepsilon_{\text{max}} = \frac{3}{4} \left(\sqrt{1 + 4 \varepsilon^2} - 4 \varepsilon^2 \coth^{-1}(\sqrt{1 + 4 \varepsilon^2})\right)^2$$
$$\rho^\varepsilon_{\text{min}} = -\frac{3}{4} \left(\sqrt{1 + 4 \varepsilon^2} - 4 \varepsilon^2 \coth^{-1}(\sqrt{1 + 4 \varepsilon^2})\right)^2$$
$$\tau^\varepsilon_{\text{max}} = \frac{1}{2} \left[1 + 4 \varepsilon^2 + 4 \varepsilon^2 \left(\sqrt{1 + 4 \varepsilon^2} - 2 \varepsilon^2 \coth^{-1}(\sqrt{1 + 4 \varepsilon^2})\right) \ln\left(\frac{
1 + 2 \varepsilon^2 - \sqrt{1 + 4 \varepsilon^2}}{2 \varepsilon^2}\right)\right]$$
$$\tau^\varepsilon_{\text{min}} = -\frac{1}{2} \left[1 + 4 \varepsilon^2 + 4 \varepsilon^2 \left(\sqrt{1 + 4 \varepsilon^2} - 2 \varepsilon^2 \coth^{-1}(\sqrt{1 + 4 \varepsilon^2})\right) \ln\left(\frac{
1 + 2 \varepsilon^2 - \sqrt{1 + 4 \varepsilon^2}}{2 \varepsilon^2}\right)\right]$$
The optimal values of $\rho$ and corresponding $\tau$ will be obtained by letting $\varepsilon \rightarrow 0$. Table \ref{table:optization} shows how the values of $\rho$ approach the optimal values as $\varepsilon \rightarrow 0$ and it is clear that $-0.75 \leq \rho \leq 0.75$ and $-0.5 \leq \tau \leq 0.5$.
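The table values can be reproduced from the closed-form expression for $\rho^\varepsilon_{\text{max}}$, using $\coth^{-1}(x) = \operatorname{artanh}(1/x)$ for $x>1$ (our sketch, not part of the paper):

```python
import math

# Closed-form rho_max^eps from the text; coth^{-1}(x) = atanh(1/x) for x > 1.
def rho_max(eps):
    r = math.sqrt(1 + 4 * eps ** 2)
    return 0.75 * (r - 4 * eps ** 2 * math.atanh(1 / r)) ** 2

# Reproduce two rows of the table and the eps -> 0 limit
assert abs(rho_max(0.1) - 0.644923) < 1e-6
assert abs(rho_max(0.01) - 0.747539) < 1e-6
assert abs(rho_max(1e-6) - 0.75) < 1e-9
print(rho_max(0.1), rho_max(0.01))
```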
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
$\varepsilon $ & $\rho^\varepsilon_{\text{max}}$ &$\rho^\varepsilon_{\text{min}}$ & $\tau^\varepsilon_{\text{max}}$ &$\tau^\varepsilon_{\text{min}}$\\
\hline
\hline
1 & 0.0726437 & -0.0726437 & 0.0484292 & -0.0484292\\
\hline
0.1 & 0.644923 & -0.644923 & 0.429949 & -0.429949\\
\hline
0.01 & 0.747539 & -0.747539 & 0.498359 & -0.498359\\
\hline
0.001 & 0.749962 & -0.749962 & 0.499974 & -0.499974\\
\hline
0.0001 & 0.749999 & -0.749999 & 0.5 & -0.5\\
\hline
\end{tabular}
\caption{$\rho$ and $\tau$ values as $\varepsilon$ changes.}
\label{table:optization}
\end{table}
\section{Conclusion}
We proposed a new generalized copula family which includes not only asymmetric copulas but also all copula families available in the current literature. In particular, the family of Fourier copulas we proposed is very useful for analyzing asymmetric data such as financial return data or cancer data in bioinformatics. In future work, we will extend our copula method to the multivariate case and incorporate a time-varying component into our proposed method.
\bibliographystyle{amsalpha}
| {
"timestamp": "2012-10-11T02:07:28",
"yymm": "1210",
"arxiv_id": "1210.2953",
"language": "en",
"url": "https://arxiv.org/abs/1210.2953",
"abstract": "This paper proposes a new class of copulas which characterize the set of all twice continuously differentiable copulas. We show that our proposed new class of copulas is a new generalized copula family that include not only asymmetric copulas but also all smooth copula families available in the current literature. Spearman's rho and Kendall's tau for our new Fourier copulas which are asymmetric are introduced. Furthermore, an approximation method is discussed in order to optimize Spearman's rho and the corresponding Kendall's tau.",
"subjects": "Methodology (stat.ME); Computational Finance (q-fin.CP)",
"title": "Characterization of Differentiable Copulas",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9748211561049159,
"lm_q2_score": 0.7279754489059775,
"lm_q1q2_score": 0.7096458687185202
} |
https://arxiv.org/abs/1409.8229 | Geometry and stability of tautological bundles on Hilbert schemes of points | The purpose of this paper is to explore the geometry and establish the slope stability of tautological vector bundles on Hilbert schemes of points on smooth surfaces. By establishing stability in general we complete a series of results of Schlickewei and Wandel who proved the slope stability of these vector bundles for Hilbert schemes of 2 points or 3 points on K3 or abelian surfaces with Picard group restrictions. In exploring the geometry we show that every sufficiently positive semistable vector bundle on a smooth curve arises as the restriction of a tautological vector bundle on the Hilbert scheme of points on the projective plane. Moreover we show the tautological bundle of the tangent bundle is naturally isomorphic to the sheaf of vector fields tangent to the divisor which consists of nonreduced subschemes. | \section*{Introduction}
The purpose of this paper is to explore the geometry of tautological bundles on Hilbert schemes of smooth surfaces and to establish the slope stability of these bundles.
Let $S$ be a smooth complex projective surface, and denote by $\hns{n}{S}$ the Hilbert scheme parametrizing length $n$ subschemes of $S$. This parameter space carries some natural tautological vector bundles: if $\mathcal{L}$ is a line bundle on $S$ then $\enl{n}{\mathcal{L}}$ is the rank $n$ vector bundle whose fiber at the point corresponding to a length $n$ subscheme $\xi \subset S$ is the vector space $H^0(S,\mathcal{L} \otimes \mathcal{O}_\xi)$. These tautological vector bundles have attracted a great deal of interest. Danila ~\cite{Danila} and Scala ~\cite{Scala} computed their cohomology. Ellingsrud and Str\o mme ~\cite{EStromme} showed the Chern classes of the bundles $\enl{n}{\mathcal{O}_{\mathbb{P}^2}}$, $\enl{n}{\mathcal{O}_{\mathbb{P}^2}(1)}$, and $\enl{n}{\mathcal{O}_{\mathbb{P}^2}(2)}$ generate the cohomology of $\hns{n}{\mathbb{P}^2}$. Nakajima gave an interpretation of the McKay correspondence by restricting the tautological bundles to the G-Hilbert scheme which is nicely exposited in ~\cite[$\S$4.3]{NakHilb}. Recently Okounkov ~\cite{Okounkov} formulated a conjecture about special generating functions associated to the tautological bundles.
Given the importance of the tautological bundles it is natural to ask whether they are stable. In ~\cite{Schl}, ~\cite{Wandel1}, and ~\cite{Wandel2} this question has been answered positively for Hilbert schemes of 2 points or 3 points on a K3 or abelian surface with Picard group restrictions. Our first result establishes the stability of these bundles for arbitrary $n$ and any surface.
\begin{theoremalpha}\label{A} If $\mathcal{L}$ is a nontrivial line bundle on $S$, then $\enl{n}{\mathcal{L}}$ is slope stable with respect to natural Chow divisors on $\hns{n}{S}$.
\end{theoremalpha}
\noindent More precisely, an ample divisor on $S$ determines a natural ample divisor on $\sym{n}{S}$, and the pullback via the Hilbert-Chow morphism gives one such natural Chow divisor on $\hns{n}{S}$, which is not ample but is big and semiample. More generally, we prove that if $\mathcal{E} \not\cong \mathcal{O}_S$ is any slope stable vector bundle on $S$ with respect to some ample divisor then $\enl{n}{\mathcal{E}}$ is slope stable with respect to the corresponding Chow divisor. Although Theorem A gives stability only with respect to a divisor that is big and nef but not ample, we are able to deduce stability with respect to nearby ample divisors via a perturbation argument on the nef cone.
If $S$ is any smooth surface, there is a divisor $B_n$ in $\hns{n}{S}$ which consists of nonreduced subschemes. The pair $(\hns{n}{S},B_n)$ gives a natural closure of the space of $n$ distinct points in $S$. The vector fields on $\hns{n}{S}$ tangent to $B_n$ form the sheaf of logarithmic vector fields $\mathrm{Der}_{\cc}(\mathrm{-log}B_n)$. Our second result says the sheaf $\mathrm{Der}_{\cc}(\mathrm{-log}B_n)$ is naturally isomorphic to the tautological bundle associated to the tangent bundle on $S$.
\begin{theoremalpha}\label{B} For any smooth surface $S$ there exists a natural injection:
\begin{center}
$\displaystyle \alpha_n : \enl{n}{(T_S)} \rightarrow T_{\hns{n}{S}}$,
\end{center}
\noindent and $\alpha_n$ induces an isomorphism between $\enl{n}{(T_S)}$ and $\mathrm{Der}_{\cc}(\mathrm{-log}B_n)$.
\end{theoremalpha}
\noindent The analogous statement also holds for smooth curves. In general the sheaves $\mathrm{Der}_{\cc}(\mathrm{-log}B_n)$ are only guaranteed to be reflexive, as $B_n$ is not simple normal crossing. However, \hr{B}{Theorem B} shows that $\mathrm{Der}_{\cc}(\mathrm{-log}B_n)$ is locally free; that is, $B_n$ is a \textit{free divisor}. Buchweitz and Mond were already aware that $B_n$ is a free divisor as is indicated in the introduction of \cite{Buch}.
Finally, we explore the geometry of the tautological bundles when the surface is the projective plane. We prove the tautological bundles on $\hns{n}{\pt}$ are rich enough to capture all semistable rank $n$ bundles on curves.
\begin{theoremalpha}\label{C} If $C$ is a smooth projective curve and $\mathcal{E}$ is a semistable rank $n$ vector bundle on $C$ with sufficiently positive degree, then there exists an embedding $C \rightarrow \hns{n}{\mathbb{P}^2}$ such that $\enl{n}{\mathcal{O}_{\mathbb{P}^2}(1)}|_C \cong \mathcal{E}$.
\end{theoremalpha}
The proof of \hr{A}{Theorem A} follows the approach taken by Mistretta ~\cite{Mistretta} who studies the stability of tautological bundles on the symmetric powers of a curve. The idea is to examine the tautological vector bundles on the cartesian power $S^n$ and show there are no $\text{$\mathfrak{S}_n$}$-equivariant destabilizing subsheaves. This strategy is more effective for surfaces because the diagonals in $S^n$ have codimension 2. The map in \hr{B}{Theorem B} arises from pushing forward the normal sequence of the universal family. The image of the restriction of the map to a point $\br{\xi} \in \hns{n}{S}$ can be thought of as deformations of $\xi$ coming from vector fields on $S$. The proof of \hr{C}{Theorem C} is constructive, using the spectral curves of \cite{BNR}.
In \hr{Sec1}{Section 1} we give the proof of \hr{A}{Theorem A}. In \hr{Sec2}{Section 2} we explore the geometry of the tautological bundles. We start by proving \hr{C}{Theorem C}. We proceed by showing that for Hilbert schemes of 2 points in the plane the tautological bundles are relative analogues of the Steiner bundles in the plane. Next we prove \hr{B}{Theorem B}. Finally, in \hr{Sec3}{Section 3} we give the perturbation argument, deducing the tautological bundles are stable with respect to ample divisors.
Throughout we work over the complex numbers. Given a torsion-free coherent sheaf $\mathcal{E}$ on a smooth projective variety $X$ of dimension $d$ and a divisor class $H \in N^1(X)$, we define the \textit{slope of $\mathcal{E}$ with respect to $H$} to be the rational number:
\begin{center}
$\displaystyle \slhf{H}{\mathcal{E}} := \dfrac{c_1(\mathcal{E}) \cdot H^{d-1}}{ \rk{\mathcal{E}}}$.
\end{center}
\noindent We say $\mathcal{E}$ is \textit{slope (semi)stable with respect to H} if for all subsheaves $\mathcal{F} \subset \mathcal{E}$ of intermediate rank:
\begin{center}
$\slhf{H}{\mathcal{F}} \underset{(\le)}{<} \slhf{H}{\mathcal{E}}$.
\end{center}
I am grateful to my advisor Robert Lazarsfeld who suggested the project and directed me in productive lines of thought. I am also thankful for conversations and correspondences with Lawrence Ein, Roman Gayduk, Daniel Greb, Giulia Sacc\`a, Ian Shipman, Brooke Ullery, Dingxin Zhang, and Xin Zhang. This paper is a substantial revision of a previous preprint. I would finally like to thank the referee of the previous paper for a thorough review and helpful suggestions.
\section{Stability of Tautological Bundles}\label{Sec1}
In this section we prove that the tautological bundle of a stable vector bundle $\mathcal{E}$ is stable with respect to natural Chow divisors on $\hns{n}{S}$. Thus we deduce \hr{A}{Theorem A} when $\mathcal{E}$ is a nontrivial line bundle. We start by defining the essential objects in the study of Hilbert schemes of points on surfaces.
Let $S$ be a smooth complex projective surface. We write $\hns{n}{S}$ for the Hilbert scheme of length $n$ subschemes of $S$. We denote by $\mathcal{Z}_n$ the universal family of $\hns{n}{S}$ with projections:
\begin{center}
\begin{tikzpicture}
\node (Zn) {$S \times \hns{n}{S} \supset \mathcal{Z}_n$};
\node (S) [right] at (3,0) {$S$.};
\node (HnS) [below] at (.95,-1) {$\hns{n}{S}$};
\node (p1) at (2.1,.2) {$p_1$};
\node (p2) at (.75,-.65) {$p_2$};
\draw[->] (.95,-.3) -- (.95,-1.05);
\draw[->] (1.2,0) -- (3,0);
\end{tikzpicture}
\end{center}
\noindent For a fixed vector bundle $\mathcal{E}$ on $S$ of rank $r$ we define
\begin{center}
$\enl{n}{\mathcal{E}} := {p_2}_*({p_1}^*\mathcal{E})$.
\end{center}
\noindent It is the \textit{tautological vector bundle associated to $\mathcal{E}$} and has rank $rn$. The fiber of $\enl{n}{\mathcal{E}}$ at a point $\br{\xi} \in \hns{n}{S}$ can be naturally identified with the vector space $H^0(S,\mathcal{E}|_\xi)$.
The symmetric group on $n$ elements $\text{$\mathfrak{S}_n$}$ naturally acts on the cartesian product $S^n$, and we write $\sigma_n$ for the quotient map:
\begin{center}
$\sigma_n:S^n \rightarrow S^n / \text{$\mathfrak{S}_n$} =: \sym{n}{S}$.
\end{center}
\noindent There is also a Hilbert-Chow morphism:
\begin{center}
$h_n : \hns{n}{S} \rightarrow \sym{n}{S}$
\end{center}
\noindent which is a semismall map ~\cite[Definition 2.1.1]{dCM1}.
We wish to view $\enl{n}{\mathcal{E}}$ as an $\text{$\mathfrak{S}_n$}$-equivariant sheaf on $S^n$. Recall that if $G$ is a finite group that acts on a scheme $X$, and if $\mathcal{F}$ is a coherent sheaf on $X$ then a \textit{$G$-equivariant structure on $\mathcal{F}$} is given by a choice of isomorphisms:
\begin{center}
$\phi_g : \mathcal{F} \rightarrow g^* \mathcal{F}$
\end{center}
\noindent for all $g \in G$ satisfying the compatibility condition $h^* (\phi_g) \circ \phi_h = \phi_{gh}$. Following Danila ~\cite{Danila} and Scala ~\cite{Scala} we study the tautological bundles on $\hns{n}{S}$ by working with $\text{$\mathfrak{S}_n$}$-equivariant sheaves on $S^n$. For our purposes it is enough to study $\enl{n}{\mathcal{E}}$ equivariantly on the open subset of distinct points in $\hns{n}{S}$.
We write $\sym{n}{S}_{\circ}$ for the open subset of $\sym{n}{S}$ of distinct points. Likewise given a map $f : X \rightarrow \sym{n}{S}$ we write $X_{\circ}$ for $f^{-1}(\sym{n}{S}_{\circ})$. By abuse of notation given another map $g: X \rightarrow Y$ with domain $X$ we define $g_{\circ} := g|_{X_{\circ}}$ and given a coherent sheaf $\mathcal{F}$ on $X$ we define $\mathcal{F}_{\circ} := \mathcal{F}|_{X_{\circ}}$. The map $h_{n,\circ} : \hns{n}{S}_{\circ} \rightarrow \sym{n}{S}_{\circ}$ is an isomorphism. We define
\begin{center}
$\overline{\sigma}_{n,\circ} := h_{n,\circ}^{-1} \circ \sigma_{n,\circ}:S^n_{\circ} \rightarrow \hns{n}{S}_{\circ}$.
\end{center}
Given a torsion-free coherent sheaf $\mathcal{F}$ on $\hns{n}{S}$ we define a torsion-free coherent sheaf on $S^n$ by
\begin{center}
$(\mathcal{F})_{S^n} := j_*(\overline{\sigma}_{n,\circ}^*(\mathcal{F}_{\circ}))$
\end{center}
\noindent where $j$ is the inclusion $j:S^n_{\circ} \rightarrow S^n$. The sheaf $(\mathcal{F})_{S^n}$ can be thought of as a modification of $\mathcal{F}$ along the exceptional divisor of $h_n$.
The pullback $\overline{\sigma}_{n,\circ}^* (-)$ is left exact as the map $\overline{\sigma}_{n,\circ}$ is \'etale; thus the functor $(-)_{S^n}$ is left exact. If $\mathcal{F}$ is reflexive, the normality of $S^n$ implies the natural $\text{$\mathfrak{S}_n$}$-equivariant structure on the reflexive sheaf $\overline{\sigma}_{n,\circ}^*(\mathcal{F}_{\circ})$ pushes forward uniquely to an $\text{$\mathfrak{S}_n$}$-equivariant structure on $(\mathcal{F})_{S^n}$.
Let $q_i$ denote the projection from $S^n$ onto the $i$th factor. Given a vector bundle $\mathcal{E}$ on $S$ there is an $\text{$\mathfrak{S}_n$}$-equivariant vector bundle on $S^n$ defined by
\begin{center}
$\displaystyle \mathcal{E}^{\boxplus n} := \overset{n}{\underset{i=1}\bigoplus} q_i^*(\mathcal{E})$.
\end{center}
\noindent We have given two natural $\text{$\mathfrak{S}_n$}$-equivariant sheaves on $S^n$ associated to $\mathcal{E}$. In fact they are equivalent.
\begin{lemma}\label{1.1} Given a vector bundle $\mathcal{E}$ on $S$ there is an isomorphism:
\begin{center}
$(\enl{n}{\mathcal{E}})_{S^n} \cong \mathcal{E}^{\boxplus n}$
\end{center}
\noindent of $\text{$\mathfrak{S}_n$}$-equivariant vector bundles on $S^n$.
\end{lemma}
\begin{proof} Consider the fiber square:
\begin{center}
\begin{tikzpicture}
\node (Fib) at (2,0) {$\displaystyle \mathcal{Z}_{n,\circ} \underset{\text{\tiny ${\hns{n}{S}_{\circ}}$}}{\times} S^n_{\circ}$};
\node (F) at (.45,0) {$F :=$};
\node (Sn) at (5,0) {$S^n_{\circ}$};
\node (Zn) at (2,-2) {$\displaystyle \mathcal{Z}_{n,\circ}$};
\node (Hn) at (5,-2) {$\displaystyle \hns{n}{S}_{\circ}$};
\draw[->] (Fib) to node[above] {$p'_{2,\circ}$} (Sn);
\draw[->] (Fib) to node[left] {$\overline{\sigma}'_{n,\circ}$} (Zn);
\draw[->] (Zn) to node[above] {$p_{2,\circ}$} (Hn);
\draw[->] (Sn) to node[right] {$\overline{\sigma}_{n,\circ}$} (Hn);
\end{tikzpicture}.
\end{center}
\noindent Every map in the fiber square is an \'etale map between $\text{$\mathfrak{S}_n$}$-schemes (the $\text{$\mathfrak{S}_n$}$-action on $\mathcal{Z}_{n,\circ}$ and $\hns{n}{S}_{\circ}$ is trivial). We write $\Gamma_i$ for the subscheme of $S^n_{\circ} \times S$ that is the graph of the map $q_{i,\circ} : S^n_{\circ} \rightarrow S$. The scheme $F$ is equal to the disjoint union $\coprod \Gamma_i$ and is a subscheme of $S^n_{\circ} \times S$. The restriction $p_{1,\circ} \circ \overline{\sigma}'_{n,\circ}|_{\Gamma_i}$ is the projection $\Gamma_i \rightarrow S$. So there is an equivariant isomorphism ${p'_{2,\circ}}_*({\overline{\sigma}'_{n,\circ}}^*({p_{1,\circ}}^*(\mathcal{E}))) \cong \mathcal{E}^{\boxplus n}_{\circ}$.
As the fiber square is made of flat proper $\text{$\mathfrak{S}_n$}$-maps there is a natural $\text{$\mathfrak{S}_n$}$-equivariant isomorphism:
\begin{center}
${p'_{2,\circ}}_*({\overline{\sigma}'_{n,\circ}}^*({p_{1,\circ}}^*(\mathcal{E}))) \cong {\overline{\sigma}_{n,\circ}}^*({p_{2,\circ}}_*({p_{1,\circ}}^*(\mathcal{E})))$.
\end{center}
\noindent The latter sheaf is $(\enl{n}{\mathcal{E}})_{S^n,\circ}$. Finally, any isomorphism between vector bundles on $S^n_{\circ}$ uniquely extends to an isomorphism between their pushforwards along $j$. Therefore there is a natural $\text{$\mathfrak{S}_n$}$-equivariant isomorphism $(\enl{n}{\mathcal{E}})_{S^n} \cong \mathcal{E}^{\boxplus n}$.
\end{proof}
Given an ample divisor $H$ on $S$ there is a natural $\text{$\mathfrak{S}_n$}$-invariant ample divisor on $S^n$ defined as:
\begin{center}
$H_{S^n}:=\overset{n}{\underset{i=1}\sum} q_i^*(H)$.
\end{center}
\noindent Fogarty ~\cite[Lemma 6.1]{Fogarty} shows every divisor $H_{S^n}$ descends to an ample Cartier divisor on $\sym{n}{S}$. Pulling back this Cartier divisor along the Hilbert-Chow morphism gives a big and nef divisor on $\hns{n}{S}$ which we denote by $H_n$. If $H$ is effective then $H_n$ can be realized set-theoretically as
\begin{center}
$H_n = \{ \xi \in \hns{n}{S} \text{ }|\text{ } \xi \cap \mathrm{Supp}(H) \ne \emptyset \}$.
\end{center}
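We record an elementary slope computation on $S^n$ that underlies the symmetry arguments in this section. If $\mathcal{E}$ is a vector bundle on $S$ then, expanding $(H_{S^n})^{2n-1} = \big( \sum_j q_j^*(H) \big)^{2n-1}$ and noting that a nonzero top intersection on $S^n$ must place exactly two divisor classes on each surface factor, one finds
\begin{center}
$\displaystyle c_1(q_i^*\mathcal{E}) \cdot (H_{S^n})^{2n-1} = \frac{(2n-1)!}{2^{n-1}} \, (c_1(\mathcal{E}) \cdot H) \, (H^2)^{n-1}$,
\end{center}
\noindent which is independent of $i$. In particular all the summands $q_i^*(\mathcal{E})$ of $\mathcal{E}^{\boxplus n}$ have the same slope with respect to $H_{S^n}$, and this common slope equals $\slhf{H_{S^n}}{\mathcal{E}^{\boxplus n}}$.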
\begin{lemma}\label{1.2} If $\mathcal{F}$ is a torsion-free sheaf on $\hns{n}{S}$ then
\begin{center}
$\displaystyle (n!)\int\limits_{\hns{n}{S}} c_1(\mathcal{F}) \cdot (H_n)^{2n-1} = \int\limits_{S^n}c_1((\mathcal{F})_{S^n}) \cdot (H_{S^n})^{2n-1}$.
\end{center}
\end{lemma}
\begin{proof} This is a straightforward calculation using $\hns{n}{S}_{\circ}$, $\sym{n}{S}_{\circ}$, and $S^n_{\circ}$.
\end{proof}
In the following lemma we assume \hr{3.7}{Proposition 3.7} which says the pullback of a stable bundle to a product is stable with respect to a product polarization. For the sake of the exposition we give the proof of \hr{3.7}{Proposition 3.7} in \hr{Sec3}{Section 3}.
\begin{lemma}\label{1.3} If $\mathcal{E} \not\cong \mathcal{O}_S$ is slope stable on $S$ with respect to an ample divisor $H$ then there are no $\text{$\mathfrak{S}_n$}$-equivariant subsheaves of $\mathcal{E}^{\boxplus n}$ that are slope destabilizing with respect to
$H_{S^n}$.
\end{lemma}
\begin{proof} Let $0 \ne \mathcal{F} \subset \mathcal{E}^{\boxplus n}$ be an $\text{$\mathfrak{S}_n$}$-equivariant subsheaf. We can find a (not necessarily equivariant) slope stable subsheaf $0 \ne \mathcal{F}' \subset \mathcal{F}$ which has maximal slope with respect to $H_{S^n}$. Fix $i$ so that the composition:
\begin{center}
$\mathcal{F}' \rightarrow \mathcal{E}^{\boxplus n} \rightarrow q_i^* \mathcal{E}$
\end{center}
\noindent is nonzero. By \hr{3.7}{Proposition 3.7} we know that each $q_i^* \mathcal{E}$ is slope stable with respect to $H_{S^n}$. A nonzero map between slope stable sheaves can only exist if
\begin{enumerate}
\item the slope of $\mathcal{F}'$ is less than the slope of $q_i^* \mathcal{E}$, or
\item $\mathcal{F}' \rightarrow q_i^* \mathcal{E}$ is an isomorphism.
\end{enumerate}
In case (1), $\displaystyle \slhf{H_{S^n}}{\mathcal{F}} \le \slhf{H_{S^n}}{\mathcal{F}'} < \slhf{H_{S^n}}{q_i^* \mathcal{E}}$. By symmetry, $\slhf{H_{S^n}}{q_i^* \mathcal{E}} = \slhf{H_{S^n}}{q_j^* \mathcal{E}}$ for all $i$ and $j$. Thus $\slhf{H_{S^n}}{q_i^* \mathcal{E}} = \slhf{H_{S^n}}{\mathcal{E}^{\boxplus n}}$ and $\mathcal{F}$ does not destabilize $\mathcal{E}^{\boxplus n}$.
In case (2), we know $\mathcal{F}' \cong q_i^* \mathcal{E}$. Because $\mathcal{E} \not\cong \mathcal{O}_S$, the pullbacks $q_i^* \mathcal{E}$ and $q_j^* \mathcal{E}$ are not isomorphic unless $i = j$. As all the $q_j^* \mathcal{E}$ have the same slope and are stable with respect to $H_{S^n}$, $\mathrm{Hom}(\mathcal{F}',q_j^* \mathcal{E})=0$ for $j \ne i$. In particular all the compositions
\begin{center}
$\mathcal{F}' \rightarrow \mathcal{E}^{\boxplus n} \rightarrow q_j^* \mathcal{E}$
\end{center}
\noindent are zero for $j \ne i$. Thus $\mathcal{F}'$ is a summand of $\mathcal{E}^{\boxplus n}$. So $\mathcal{F}$ is an $\text{$\mathfrak{S}_n$}$-equivariant subsheaf of $\mathcal{E}^{\boxplus n}$ which contains one of the summands. But $\text{$\mathfrak{S}_n$}$ acts transitively on the summands so $\mathcal{F}$ contains all the summands, hence $\mathcal{F}$ does not destabilize $\mathcal{E}^{\boxplus n}$.
\end{proof}
Now we prove \hr{A}{Theorem A} in full generality.
\begin{theorem}\label{Agen} If $\mathcal{E} \not\cong \mathcal{O}_S$ is a vector bundle on $S$ which is slope stable with respect to an ample divisor $H$, then $\enl{n}{\mathcal{E}}$ is slope stable with respect to $H_n$.
\end{theorem}
\begin{proof}\label{pA} Let $\mathcal{F} \subset \enl{n}{\mathcal{E}}$ be a reflexive subsheaf of intermediate rank. It is enough to consider reflexive sheaves because the saturation of a torsion free subsheaf of $\enl{n}{\mathcal{E}}$ is reflexive of the same rank and its slope cannot decrease. By \hr{1.2}{Lemma 1.2}, the slope of a torsion-free sheaf $\mathcal{F}$ with respect to $H_n$ is up to a fixed positive multiple the same as the slope of $(\mathcal{F})_{S^n}$ with respect to $H_{S^n}$. In particular
\begin{center}
$\slhf{H_n}{\mathcal{F}}<\slhf{H_n}{\enl{n}{\mathcal{E}}} \iff \slhf{H_{S^n}}{(\mathcal{F})_{S^n}} < \slhf{H_{S^n}}{\mathcal{E}^{\boxplus n}}$.
\end{center}
\noindent Now $(\mathcal{F})_{S^n}$ is naturally an $\text{$\mathfrak{S}_n$}$-equivariant subsheaf of $\mathcal{E}^{\boxplus n}$. Thus by \hr{1.3}{Lemma 1.3}
\begin{center}
$\slhf{H_{S^n}}{(\mathcal{F})_{S^n}} < \slhf{H_{S^n}}{\mathcal{E}^{\boxplus n}}$.
\end{center}
\noindent Therefore, $\slhf{H_n}{\mathcal{F}}<\slhf{H_n}{\enl{n}{\mathcal{E}}}$ for all torsion-free subsheaves of intermediate rank, and $\enl{n}{\mathcal{E}}$ is stable with respect to $H_n$.
\end{proof}
\begin{remark}[On the Bogomolov inequalities]
If $\mathcal{E}$ is a vector bundle on $S$ stable with respect to $H$, then stability of $\enl{n}{\mathcal{E}}$ with respect to $H_n$ along with the perturbation argument from Section 3 implies the Bogomolov-type topological inequality:
\begin{center}
$(r-1)c_1(\enl{n}{\mathcal{E}})^2 \cdot H_n^{2n-2} \le 2r c_2(\enl{n}{\mathcal{E}})\cdot H_n^{2n-2}$.
\end{center}
\noindent These intersection numbers can be rewritten in terms of the intersection theory of $S$ and these inequalities reduce to the regular Bogomolov inequality on $S$ and the inequality coming from the Hodge index theorem.
\end{remark}
\section{Geometry of tautological bundles}\label{Sec2}
In this section we give examples of the geometry inherent to the tautological vector bundles. For each curve we construct an embedding in the Hilbert scheme of $n$ points in the plane such that the restriction of $\enl{n}{\Oc_{\pt}(1)}$ to the curve is some prescribed vector bundle. Next, we show that the tautological bundles on the space of 2 points in the plane have explicit resolutions, making them a relative analogue of Steiner bundles in the plane. Finally, for any smooth surface we construct a map from the tautological bundle of the tangent bundle of the surface to the tangent bundle of the Hilbert scheme of points on the surface which realizes the first bundle as the sheaf of logarithmic vector fields.
\subsection{Restrictions to curves}
In this section we prove that every sufficiently positive, rank $n$, semistable vector bundle on a smooth projective curve arises as the pullback of $\enl{n}{\Oc_{\pt}(1)}$ along an embedding of the curve in $\hns{n}{\pt}$. To prove the theorem we need the spectral curves of \cite{BNR}. For completeness we recall the construction.
Let $\pi : D \rightarrow C$ be an $n:1$ map between smooth irreducible projective curves and let $\mathcal{E}$ be an $\mathcal{O}_C$-module. If $D$ can be embedded into the total space of a line bundle $\mathcal{L}$ on $C$:
\begin{center}
$\mathbb{L} := \mathcal{S}pec_{\mathcal{O}_C} (\sym{\bullet}{\mathcal{L}^{\vee}}) \xrightarrow{\pi_{\mathbb{L}}} C$
\end{center}
\noindent with $\pi = \pi_{\mathbb{L}}|_D$ then this gives a presentation:
\begin{center}
$\pi_* \mathcal{O}_D \cong \sym{\bullet}{\mathcal{L}^{\vee}} \Big/ (x^n + s_1 x^{n-1} + ... + s_n)$
\end{center}
\noindent for $x^n + s_1 x^{n-1} + ... + s_n \in H^0(\mathbb{L} , ({\pi_{\mathbb{L}}}^* \mathcal{L})^{\otimes n})$. Here we write $x \in H^0(\mathbb{L},{\pi_{\mathbb{L}}}^*(\mathcal{L}))$ for the \textit{coordinate section} of ${\pi_{\mathbb{L}}}^*(\mathcal{L})$. To give $\mathcal{E}$ the structure of a $\pi_*\mathcal{O}_D$-module we need to specify a multiplication map $m: \mathcal{E} \otimes \mathcal{L}^{-1} \rightarrow \mathcal{E}$ (equivalently $\mathcal{E} \rightarrow \mathcal{E} \otimes \mathcal{L}$) which satisfies the relation
$m^n + s_1 m^{n-1} + ... + s_n = 0$.
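For instance, when $\mathcal{L} = \mathcal{O}_C$ and $n = 2$ this relation is the classical Cayley-Hamilton identity: an endomorphism $m$ of a rank 2 bundle satisfies
\begin{center}
$m^2 - \mathrm{tr}(m)\, m + \mathrm{det}(m) = 0$,
\end{center}
\noindent so in the notation above $s_1 = -\mathrm{tr}(m)$ and $s_2 = \mathrm{det}(m)$.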
Every $\mathcal{L}$-twisted endomorphism $m : \mathcal{E} \rightarrow \mathcal{E} \otimes \mathcal{L}$ has an associated $\mathcal{L}$-twisted characteristic polynomial, which is a global section $p_m(x) \in H^0(\mathbb{L},({\pi_{\mathbb{L}}}^* \mathcal{L})^{\otimes n})$. A global version of the Cayley-Hamilton theorem says that $m$ automatically satisfies its $\mathcal{L}$-twisted characteristic polynomial. In particular, if the zero set of $p_m(x)$ is $D$ then $\mathcal{E}$ can naturally be thought of as a $\pi_*\mathcal{O}_D$-module. Fixing $s \in H^0(\mathbb{L},({\pi_{\mathbb{L}}}^* \mathcal{L})^{\otimes n})$ which cuts out the integral curve $D$, \cite[Proposition 3.6]{BNR} gives the beautiful correspondence:
\begin{equation}\label{eqn:diamond}
\left\{ \mathcal{E} \xrightarrow{m} \mathcal{E} \otimes \mathcal{L} \text{ } \Big| \mathcal{E} \text{ a vector bundle and } p_m(x) = s \right\} \stackrel{1:1}{\longleftrightarrow} \{ \text{invertible sheaves } \mathcal{M} \text{ on } D \}.\tag{$\diamond$}
\end{equation}
The correspondence going from right to left is given by taking the coordinate section of ${\pi_{\mathbb{L}}}^*(\mathcal{L})$, restricting to $D$, twisting by $\mathcal{M}$, and pushing forward along $\pi$.
\begin{proof}[Proof of \hr{C}{Theorem C}] Let $C$ be a smooth projective genus $g$ curve and $\mathcal{E}$ a rank $n$ semistable vector bundle on $C$. Let $\mathcal{L}$ be a line bundle on $C$. As $\mathcal{E}$ is semistable, $\mathcal{E} \otimes \mathcal{E}^{\vee}$ is also semistable and has slope 0, hence $\mathcal{E} \otimes \mathcal{E}^{\vee} \otimes \mathcal{L}$ is globally generated when $\deg{\mathcal{L}} \ge 2g$. Therefore, if
\begin{center}
$m: \mathcal{E} \rightarrow \mathcal{E} \otimes \mathcal{L}$
\end{center}
\noindent is a general $\mathcal{L}$-twisted endomorphism then the spectral curve cut out by the resulting $\mathcal{L}$-twisted characteristic polynomial is smooth, with simple branching over $C$. In fact, if $V \subset \mathbb{E} \otimes \mathbb{E}^{\vee} \otimes \mathbb{L}$ is the locus of $\mathcal{L}$-twisted endomorphisms whose characteristic polynomial has a repeated root, and if $m : C \rightarrow \mathbb{E} \otimes \mathbb{E}^{\vee} \otimes \mathbb{L}$ meets $V$ transversely and avoids the smaller locus in $V$ where the $\mathcal{L}$-twisted characteristic polynomial has a root repeated to higher multiplicity, then the resulting spectral curve $D$ is smooth and connected.
Thus, by the correspondence \eqref{eqn:diamond} there is a line bundle $\mathcal{M}$ on $D$ such that $\pi_*\mathcal{M} \cong \mathcal{E}$. The genus of $D$ is $g_D = {n \choose 2} \mathrm{deg}(\mathcal{L}) + n(g-1) + 1$ and is independent of $\mathcal{E}$. However, the degree of $\mathcal{M}$ is $ \mathrm{deg}(\mathcal{E}) + {n \choose 2}\mathrm{deg}(\mathcal{L})$ and does depend on the degree of $\mathcal{E}$. In particular, if
\begin{center}
$\mathrm{deg}(\mathcal{E}) \ge {n \choose 2}\mathrm{deg}(\mathcal{L}) + n(2g-2) + 3$
\end{center}
\noindent then $\mathcal{M}$ is very ample and 3 general sections of $\mathcal{M}$ give a map $\phi : D \rightarrow \mathbb{P}^2$ such that the induced maps $\pi \times \phi : D \rightarrow C \times \mathbb{P}^2$ and $\psi_{\pi,\phi} : C \rightarrow \hns{n}{\pt}$ are embeddings. Under the embedding $\psi_{\pi,\phi}$ the restriction of $\enl{n}{\Oc_{\pt}(1)}$ to $C$ is precisely $\mathcal{E}$, proving Theorem C.
\end{proof}
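We remark that the numerical bound on $\mathrm{deg}(\mathcal{E})$ in the proof is exactly the classical very ampleness criterion for $\mathcal{M}$: since $\pi$ has degree $n$, combining $\mathrm{deg}(\mathcal{M}) = \mathrm{deg}(\mathcal{E}) + {n \choose 2}\mathrm{deg}(\mathcal{L})$ with $g_D = {n \choose 2}\mathrm{deg}(\mathcal{L}) + n(g-1) + 1$ shows that the condition $\mathrm{deg}(\mathcal{M}) \ge 2g_D + 1$ unwinds to
\begin{center}
$\mathrm{deg}(\mathcal{E}) \ge {n \choose 2}\mathrm{deg}(\mathcal{L}) + n(2g-2) + 3$.
\end{center}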
\subsection{Two points in the projective plane}
Now we restrict our attention to the Hilbert scheme of 2 points in the plane. As a reminder, if we identify $\mathbb{P}^2 = \sym{2}{\mathbb{P}^1}$ (where $\mathbb{P}^1 = \mathbb{P}(W)$), then for $k\ge 2$ the tautological bundle $\enl{2}{\mathcal{O}_{\mathbb{P}^1}(k)}$ has a natural 2-term resolution:
\begin{center}
$\displaystyle 0 \rightarrow \sym{k-2}{W} \otimes_\mathbb{C} \mathcal{O}_{\mathbb{P}^2}(-1) \xrightarrow{m} \sym{k}{W} \otimes_\mathbb{C} \mathcal{O}_{\mathbb{P}^2} \rightarrow \enl{2}{\mathcal{O}_{\mathbb{P}^1}(k)} \rightarrow 0$.
\end{center}
\noindent Here
\begin{center}
$\displaystyle m \in \mathrm{Hom}(\sym{k-2}{W} \otimes_\mathbb{C} \mathcal{O}_{\mathbb{P}^2}(-1),\sym{k}{W} \otimes_\mathbb{C} \mathcal{O}_{\mathbb{P}^2})$\\
\hspace{4cm}$\cong \mathrm{Hom}(\sym{k-2}{W} \otimes \sym{2}{W},\sym{k}{W})$
\end{center}
\noindent is the multiplication in the symmetric algebra. In general, any bundle on $\mathbb{P}^N$ with a similar 2-term linear resolution is called a Steiner bundle.
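As a quick check of the resolution above, comparing ranks and first Chern classes recovers the basic invariants of the tautological bundle: since $\mathrm{dim}\, \sym{k}{W} = k+1$ and $\mathrm{dim}\, \sym{k-2}{W} = k-1$, the quotient $\enl{2}{\mathcal{O}_{\mathbb{P}^1}(k)}$ has rank 2, and
\begin{center}
$c_1\big(\enl{2}{\mathcal{O}_{\mathbb{P}^1}(k)}\big) = (k-1)H$
\end{center}
\noindent where $H$ is the hyperplane class on $\mathbb{P}^2 = \sym{2}{\mathbb{P}^1}$.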
The Hilbert scheme of two points on the projective plane $\pv{V}$ has a natural map to $\pv{V^\vee}$. The map
\begin{center}
$\psi : \hns{2}{\pv{V}} \rightarrow \pv{V^\vee}$
\end{center}
\noindent exhibits $\hns{2}{\pv{V}}$ as a two dimensional projective bundle over $\pv{V^\vee}$. Geometrically, $\psi$ is given by sending a subscheme $\br{\xi}$ to the line that is spanned by $\xi$. A fiber of $\psi$ is the second symmetric power of the corresponding line. Viewing $\hns{2}{\pv{V}}$ as a $\mathbb{P}^2$-bundle over $\pv{V^\vee}$ the tautological bundles come with a two term \textit{relative linear} resolution, making them a relative version of the Steiner bundles in the plane.
\begin{subproposition}
For $k \ge 2$ there is a 2 term relatively linear resolution of $\enl{2}{\mathcal{O}_{\pv{V}}(k)}$ given by
\begin{center}
$\displaystyle 0 \rightarrow \psi^*(\mathrm{Sym}^{k-2} K_{(V^\vee)}^\vee)(-1) \rightarrow \psi^*(\mathrm{Sym}^{k} K_{(V^\vee)}^\vee) \rightarrow \enl{2}{\mathcal{O}_{\pv{V}}(k)} \rightarrow 0 $
\end{center}
\noindent where $K_{(V^\vee)}$ is the kernel bundle in the tautological sequence on $\pv{V^\vee}$:
\begin{center}
$0 \rightarrow K_{(V^\vee)} \rightarrow V^\vee \otimes \mathcal{O}_{\pv{V^\vee}} \rightarrow \mathcal{O}_{\pv{V^\vee}}(1) \rightarrow 0$.
\end{center}
\end{subproposition}
\begin{proof}[Sketch of proof] There is an isomorphism $\hns{2}{\pv{V}} \cong \mathbb{P}(\mathrm{Sym}^2(K_{(V^\vee)}))$. From this perspective, the universal family $\mathcal{Z}_2$ is a divisor in the fiber product $X := \hns{2}{\pv{V}} \times_{\pv{V^\vee}} \mathbb{P}(K_{(V^\vee)}^\vee)$; specifically, $\mathcal{O}_X(\mathcal{Z}_2) \cong \mathcal{O}_X(1,2)$. If $p:X \rightarrow \hns{2}{\pv{V}}$ is the projection map then
\begin{center}
$p_* \big( \mathcal{O}_{\mathcal{Z}_2}(0,k)\big) \cong \enl{2}{\mathcal{O}_{\pv{V}}(k)}$.
\end{center}
\noindent Therefore, we can twist the ideal sequence of $\mathcal{Z}_2$ and take direct images of the sequence with respect to $p$ to obtain resolutions of the tautological bundles. When $k \ge 2$ there are no higher direct images, so the pushforward of the twisted ideal sequence is exact, giving the desired resolution.
\end{proof}
\begin{subremark} We can modify the proof to obtain resolutions of $\enl{2}{\mathcal{O}_{\pv{V}}(k)}$ for all $k$.
\begin{center}
\begin{tabular}{r|l}
$k=1$ & $\displaystyle \enl{2}{\mathcal{O}_{\pv{V}}(1)} \cong \psi^* (K_{(V^\vee)}^\vee) $\\
$k=0$ & $\displaystyle 0 \rightarrow \mathcal{O}_{\hns{2}{\pv{V}}} \rightarrow \enl{2}{\big( \mathcal{O}_{\pv{V}} \big)} \rightarrow \psi^* \big(\mathrm{det} (K_{(V^\vee)}^\vee)\big)(-1) \rightarrow 0$\\
$k=-1$ & $\displaystyle \enl{2}{\mathcal{O}_{\pv{V}}(-1)} \cong \psi^* (K_{(V^\vee)}) \otimes \psi^*\big(\mathcal{O}_{\pv{V^\vee}}(1) \big)(-1) $\\
$k\le -2$ & $\displaystyle 0 \rightarrow \enl{2}{\mathcal{O}_{\pv{V}}(k)} \rightarrow \psi^*\Big( \big(\mathrm{Sym}^{-k} K_{(V^\vee)}\big)(1)\Big)(-1) \rightarrow \psi^*\Big( \big(\mathrm{Sym}^{-k-2} K_{(V^\vee)}\big)(1) \Big) \rightarrow 0$.\\
\end{tabular}
\end{center}
\end{subremark}
\begin{subremark}
For $N > 2$, the Hilbert scheme of 2 points on $\mathbb{P}^N$ is smooth. There is an analogous map:
\begin{center}$\psi : \hns{2}{(\mathbb{P}^N)} \rightarrow \mathrm{Gr}(2,N+1)$,
\end{center}
\noindent and the tautological bundles have 2 term relative linear resolutions as in the case $N=2$.
\end{subremark}
\subsection{The tautological tangent map} For any smooth surface $S$ (not necessarily projective), the Hilbert scheme $\hns{n}{S}$ is a smooth closure of the space of $n$ distinct points in $S$. The boundary $B_n$ is the locus of nonreduced length $n$ subschemes of $S$. We are interested in vector fields which are tangent to the boundary $B_n$.
\begin{subdefinition}
If $D$ is a codimension 1 subvariety of a smooth variety $X$, then the sheaf of logarithmic vector fields, denoted $\mathrm{Der}_\mathbb{C}(\mathrm{-log}D)$, is the subsheaf of $T_X$ consisting of vector fields which along the regular locus of $D$ are tangent to $D$.
\end{subdefinition}
\noindent When $D$ is smooth, $\mathrm{Der}_\mathbb{C}(\mathrm{-log}D)$ is just the elementary transformation of the tangent bundle along the normal bundle of $D$ in $X$; in particular it is a vector bundle. Even when $D$ is singular, $\mathrm{Der}_\mathbb{C}(\mathrm{-log}D)$ is reflexive, so it is enough to define $\mathrm{Der}_\mathbb{C}(\mathrm{-log}D)$ away from the singular locus of $D$ (or any other codimension 2 subset of $X$) and then push forward.
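Concretely, when $D$ is smooth the sheaf $\mathrm{Der}_\mathbb{C}(\mathrm{-log}D)$ sits in the short exact sequence
\begin{center}
$0 \rightarrow \mathrm{Der}_\mathbb{C}(\mathrm{-log}D) \rightarrow T_X \rightarrow N_{D/X} \rightarrow 0$
\end{center}
\noindent where $N_{D/X} \cong \mathcal{O}_D(D)$ is the normal bundle of $D$ in $X$.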
For Hilbert schemes of points on surfaces we can naturally understand $\mathrm{Der}_\mathbb{C}(\mathrm{-log}B_n)$ as the tautological bundles of the tangent bundle on the surface.
\begin{Theorem B} For any smooth connected surface $S$ there exists a natural injection:
\begin{center}
$\displaystyle \alpha_n : \enl{n}{(T_S)} \rightarrow T_{\hns{n}{S}}$,
\end{center}
\noindent and $\alpha_n$ induces an isomorphism between $\enl{n}{(T_S)}$ and $\mathrm{Der}_{\cc}(\mathrm{-log}B_n)$.
\end{Theorem B}
\noindent At a point $\br{\xi} \in \hns{n}{S}$ the map $\alpha_n|_{\br{\xi}}$ can be interpreted as deformations of $\xi$ coming from tangent vectors of $S$. We expect that the degeneracy loci of $\alpha_n$ give an interesting stratification of $\hns{n}{S}$.
Before proving \hr{B}{Theorem B} we prove a general lemma.
\begin{sublemma}
Let $X$ and $Y$ be smooth varieties and $f: X \rightarrow Y$ a branched covering with reduced branch locus $B \subset Y$. If $\delta \in H^0(Y,TY)$ is a vector field on $Y$ whose pullback $f^* \delta \in H^0(X,f^*TY)$ is in the image of
\begin{center}
$df: H^0(X,TX) \rightarrow H^0(X,f^*TY)$,
\end{center}
\noindent then $\delta \in H^0(Y,\mathrm{Der}_\mathbb{C}(\mathrm{-log}B))$.
\end{sublemma}
\begin{proof} It is enough to check $\delta$ is tangent to $B$ for points $p \in B$ outside of a codimension 2 subset in $Y$. Let $p \in B$ be a general point and $q$ a ramified point in the fiber of $f$ over $p$. We can choose local analytic coordinates $y_1 , ... , y_n$ centered at $p$ and coordinates $x_1 , ... , x_n$ centered at $q$ such that
\begin{center}
$f^*(y_1)=x_1^m$\\ \hspace{1.3cm}$f^*(y_i)=x_i$ $(i>1)$.
\end{center}
\noindent That is, $y_1$ is a local equation for $B$, and $x_1$ is a local equation for the reduced component of the ramification locus containing $q$. Then the derivative $df$ maps
\begin{center}
$\frac{\partial}{\partial x_1} \mapsto m x_1^{m-1} f^* \big( \frac{\partial}{\partial y_1} \big)$\\
\hspace{.2cm}$\frac{\partial}{\partial x_i} \mapsto f^* \big( \frac{\partial}{\partial y_i} \big)$ $(i >1)$.
\end{center}
\noindent Now $f^* \delta$ is in the image of $df$. Expanding locally, $f^*\delta = f^*(g_1) f^* \big( \frac{\partial}{\partial y_1} \big) + ... + f^*(g_n) f^* \big( \frac{\partial}{\partial y_n} \big)$, so $x_1^{m-1}$ divides $f^* (g_1)$. But $f^*(g_1)$ is a power series in $x_1^m, x_2, ... , x_n$, so in fact $x_1^m$ divides $f^*(g_1)$. Hence $y_1$ divides $g_1$ and $\delta$ is in $H^0(Y,\mathrm{Der}_\mathbb{C}(\mathrm{-log}B))$.
\end{proof}
\begin{proof}[Proof of \hr{B}{Theorem B}] As in \S1 we use ${\mathcal{Z}_n} \subset S \times \hns{n}{S}$ to denote the universal family of the Hilbert scheme of points. Applying relative Serre duality to the main result of \cite{Lehn} shows the tangent bundle of $\hns{n}{S}$ is given by $T_{\hns{n}{S}} = p_{2*} \mathcal{H}\mathrm{om}(\mathcal{I}_{{\mathcal{Z}_n}},\mathcal{O}_{{\mathcal{Z}_n}})$. The normal sequence for ${\mathcal{Z}_n}$ gives a map:
\begin{center}
$\displaystyle p_1^* T_S \oplus p_2^* T_{\hns{n}{S}} \cong T_{S \times \hns{n}{S}}|_{{\mathcal{Z}_n}} \xrightarrow{\beta} \big(\mathcal{I}_{{\mathcal{Z}_n}}/\mathcal{I}_{{\mathcal{Z}_n}}^2 \big)^\vee \cong \mathcal{H}\mathrm{om}(\mathcal{I}_{{\mathcal{Z}_n}},\mathcal{O}_{{\mathcal{Z}_n}})$.
\end{center}
\noindent Thus after pushing forward the first summand we get a map:
\begin{center}
$\displaystyle \alpha_n :\enl{n}{(T_S)} := p_{2*}(p_1^* T_S) \rightarrow p_{2*}\mathcal{H}\mathrm{om}(\mathcal{I}_{{\mathcal{Z}_n}},\mathcal{O}_{{\mathcal{Z}_n}}) =T_{\hns{n}{S}}.$
\end{center}
To prove that $\alpha_n$ maps $\enl{n}{(T_S)}$ isomorphically onto $\mathrm{Der}_{\cc}(\mathrm{-log}B_n)$ we first restrict to the open set $U \subset \hns{n}{S}$ parametrizing subschemes $\xi \subset S$ where $\xi$ contains at least $n-1$ distinct points. The complement of $U$ has codimension 2 so by reflexivity it is enough to prove the theorem on $U$. Moreover the open set
\begin{center}
$V := p_2^{-1}U \subset {\mathcal{Z}_n}$
\end{center}
\noindent is smooth so we are in a situation where we can apply Lemma 2.3.2. There is a map:
\begin{center}
\begin{tikzpicture}
\node (tnl) {$p_2^*\enl{n}{(T_S)}|_V$};
\node (seq) [below] at (0,-1.2) {\hspace{1.17cm}$0 \rightarrow T_{{\mathcal{Z}_n}}|_V \rightarrow p_2^*T_{\hns{n}{S}}|_V \oplus p_1^*T_S|_V \xrightarrow{\beta} \mathcal{H}\mathrm{om}(\mathcal{I}_{{\mathcal{Z}_n}},\mathcal{O}_{{\mathcal{Z}_n}})|_V$,};
\draw[->] (tnl) to node[right] {$p_2^* \alpha_n|_V \oplus -\phi|_V$} (0,-1.4);
\end{tikzpicture}
\end{center}
\noindent where $\phi$ is the natural map coming from pulling back a pushforward. The composition:
\begin{center}
$\beta \circ (p_2^* \alpha_n|_V \oplus -\phi|_V)$
\end{center}
\noindent is identically zero. Therefore, the pullback of each local section of $\enl{n}{(T_S)}|_U$ lies in $T_{{\mathcal{Z}_n}}|_V$. It follows from Lemma 2.3.2 that $\enl{n}{(T_S)}$ is contained in $\mathrm{Der}_{\cc}(\mathrm{-log}B_n)$. Now we can think of $\alpha_n$ as having codomain $\mathrm{Der}_{\cc}(\mathrm{-log}B_n)$. The map is an isomorphism of $\enl{n}{(T_S)}$ and $\mathrm{Der}_{\cc}(\mathrm{-log}B_n)$ away from $B_n$, and both sheaves have the same first Chern class. Therefore, $\alpha_n$ can only fail to be an isomorphism in codimension at least $2$. But both sheaves are reflexive, and any isomorphism between reflexive sheaves defined away from a closed subset of codimension at least $2$ on a normal variety extends uniquely to an isomorphism on the whole variety.
\end{proof}
At a point $\br{\xi} \in \hns{n}{S}$ we have the isomorphisms $\enl{n}{(T_S)}|_{\br{\xi}} \cong H^0(S,{T_S}|_{\xi})$ and $T_{\hns{n}{S}}|_{\br{\xi}} \cong \mathrm{Hom}(I_{\xi},\mathcal{O}_{\xi})$. From this perspective we describe the map $\alpha_n|_{\br{\xi}}$. If $\delta \in H^0(S,{T_S}|_{\xi})$ is a derivation, then $\alpha_n|_{\br{\xi}}$ maps
\begin{center}
$\displaystyle \alpha_n|_{\br{\xi}}: \delta \mapsto \Big( \begin{array}{c}
I_{\xi} \xrightarrow{\alpha_n|_{\br{\xi}}(\delta)} \mathcal{O}_{\xi} \\
f \mapsto \delta(f)|_{\xi}
\end{array} \Big)$.
\end{center}
From this description of $\alpha_n|_{\br{\xi}}$ we can easily compute the rank at any explicit point. For example, $\alpha_n$ is an isomorphism on the locus of $n$ distinct points, and the generic rank along $B_n$ is $2n-1$. The degeneracy loci of $\alpha_n$ are defined to be:
\begin{center}
$\displaystyle \Omega_{r}(\alpha_n) := \big\{ \br{\xi} \in \hns{n}{S} | \rk{\alpha_n|_{\br{\xi}}} \le r \big\}$.
\end{center}
\noindent We expect the irreducible components of $\Omega_{r}(\alpha_n)$ to give an interesting stratification of the Hilbert scheme of points, and we hope to return to study this stratification in future work.
\begin{subremark}
When $C$ is a smooth curve, the locus of nonreduced subschemes $B_n \subset \hns{n}{C}$ forms a divisor. When $C \cong \mathbb{A}^1$, this is the discriminant divisor in the space parametrizing degree $n$ polynomials in 1 variable. A similar proof shows the tautological bundle of the tangent bundle is the sheaf $\mathrm{Der}_{\cc}(\mathrm{-log}B_n)$, again showing $B_n$ is a free divisor.
\end{subremark}
\section{Perturbation of Polarization and Stability}
The goal of this section is to sketch a proof that stability of the tautological bundles with respect to the natural Chow divisors implies stability with respect to nearby ample divisors. We also prove in Proposition 3.7 that the pullback of a stable bundle to a product is stable with respect to a product polarization, a fact that we used in the proof of \hr{A}{Theorem A}. In proving stability with respect to nearby ample divisors we follow ideas appearing in~\cite{Greb1}, where compact moduli spaces of vector bundles are constructed using multipolarizations, and more recently in~\cite{Greb2}, where foundational results for studying stability with respect to movable curves are established.
Throughout this section denote by $X$ a normal complex projective variety of dimension $d$. Let $\gamma \in N_1(X)_{\mathbb{R}}$ be a real curve class and $\mathcal{E}$ be a torsion-free sheaf on $X$. For any sheaf $\mathcal{Q}$ on $X$, we denote by $\sing{\mathcal{Q}}$ the closed locus where $\mathcal{Q}$ is not locally free.
\begin{definition}
The \textit{slope of $\mathcal{E}$ with respect to $\gamma$}, denoted by $\slgf{\gamma}{\mathcal{E}}$, is the real number:
\begin{center}
$\slgf{\gamma}{\mathcal{E}} := \dfrac{c_1(\mathcal{E}) \cdot \gamma}{ \rk{\mathcal{E}}}$.
\end{center}
\end{definition}
\begin{remark}
Fixing an ample class $H \in N^1(X)_{\mathbb{R}}$ it is true that $\slhf{H}{\mathcal{E}} = \slgf{H^{d-1}}{\mathcal{E}}$. Nonetheless, to distinguish the concepts we use subscripts to denote slope with respect to an ample divisor and superscripts to denote slope with respect to a curve class.
\end{remark}
\begin{definition} We say $\mathcal{E}$ is \textit{slope (semi)stable with respect to $\gamma$} if for all torsion-free quotients of intermediate rank $\mathcal{E} \rightarrow \mathcal{Q} \rightarrow 0$:
\begin{center}
$\slgf{\gamma}{\mathcal{E}} \underset{(\le)}{<} \slgf{\gamma}{\mathcal{Q}}$.
\end{center}
\end{definition}
\noindent
A benefit of working with slope (semi)stability with respect to curves rather than divisors is that we can apply ideas of convexity.\label{cone}
\begin{lemma}\label{sum} If $\gamma, \delta$ are classes in $N_1(X)_{\mathbb{R}}$ such that $\mathcal{E}$ is semistable with respect to $\gamma$ and $\mathcal{E}$ is stable with respect to $\delta$ then $\mathcal{E}$ is stable with respect to $a \gamma + b \delta$ for $a,b > 0$. \qed
\end{lemma}
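The lemma is immediate from the linearity of slopes in the curve class: for any torsion-free quotient $\mathcal{E} \rightarrow \mathcal{Q} \rightarrow 0$ of intermediate rank,
\begin{center}
$\slgf{a\gamma + b\delta}{\mathcal{Q}}-\slgf{a\gamma + b\delta}{\mathcal{E}} = a\left(\slgf{\gamma}{\mathcal{Q}}-\slgf{\gamma}{\mathcal{E}}\right)+b\left(\slgf{\delta}{\mathcal{Q}}-\slgf{\delta}{\mathcal{E}}\right) > 0$,
\end{center}
\noindent since the first difference is nonnegative by semistability with respect to $\gamma$ and the second is positive by stability with respect to $\delta$.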
If $C \subset X$ is an irreducible curve we would like to relate the stability of $\mathcal{E}|_C $ and the stability of $\mathcal{E}$ with respect to the class of $C$. However if $\mathcal{Q}$ is a coherent sheaf and $C$ meets $\sing{\mathcal{Q}}$ it is possible that $c_1(\mathcal{Q}|_C) \ne c_1(\mathcal{Q})|_C$. Thankfully we can say something if $C$ is not entirely contained in $\sing{\mathcal{Q}}$.
\begin{proposition}\label{3.5} Let $\mathcal{E} \rightarrow \mathcal{Q} \rightarrow 0$ be a torsion-free quotient which destabilizes $\mathcal{E}$ with respect to the curve class $\gamma$. Suppose $C \subset X$ is a smooth irreducible closed curve which represents $\gamma$, avoids $\sing{\mathcal{E}}$, and avoids the singularities of $X$. If $C$ is not contained in $\sing{\mathcal{Q}}$ then $\mathcal{E}|_C$ is not stable on $C$.
\end{proposition}
\begin{proof} First, we can reduce to the surface case by choosing a normal surface $S \subset X$ containing $C$ such that $S$ is smooth along $C$, $S$ meets $\sing{\mathcal{Q}}$ properly, and $S$ meets $\sing{\mathcal{E}}$ properly. This is possible because when the dimension of $X$ is at least $3$, a general high-degree hyperplane section containing $C$ is normal, smooth along $C$, and meets both $\sing{\mathcal{Q}}$ and $\sing{\mathcal{E}}$ properly; repeating this process yields the desired surface. Once such a surface is chosen
\begin{center}
$c_1(\mathcal{Q})|_S = c_1(\mathcal{Q}|_S)= c_1(\mathcal{Q}|_S / \mathrm{Tors}(\mathcal{Q}|_S))$
$c_1(\mathcal{E})|_S = c_1(\mathcal{E}|_S)= c_1(\mathcal{E}|_S / \mathrm{Tors}(\mathcal{E}|_S))$
\end{center}
\noindent because both $\sing{\mathcal{Q}}\cap S$ and $\sing{\mathcal{E}}\cap S$ are zero-dimensional. Thus
\begin{center}
$\mathcal{E}|_S / \mathrm{Tors}(\mathcal{E}|_S) \rightarrow \mathcal{Q}|_S/\mathrm{Tors}(\mathcal{Q}|_S) \rightarrow 0$
\end{center}
is a torsion-free quotient on $S$ which destabilizes $\mathcal{E}|_S / \mathrm{Tors}(\mathcal{E}|_S)$ with respect to the class of $C$. So we have reduced the Proposition to the case where $X$ is a surface.
Let $X$ be a surface. It is enough to show $c_1(\mathcal{Q}|_C) = c_1(\mathcal{Q})|_C$. The restriction $c_1(\mathcal{Q})|_C$ is computed via the derived pullback:
\begin{center}
$\displaystyle c_1(\mathcal{Q})|_C = \overset{\infty}{\underset{i = 0}\sum} (-1)^i c_1({\mathrm{Tor}_i}^{\mathcal{O}_X}(\mathcal{Q},\mathcal{O}_C))$,
\end{center}
\noindent where the ${\mathrm{Tor}_i}^{\mathcal{O}_X}(\mathcal{Q},\mathcal{O}_C)$ are thought of as modules on $C$. Further, $C$ is a Cartier divisor on $X$, so $\mathcal{O}_C$ has a two-term locally free resolution. So the $\mathrm{Tor}_i^{\mathcal{O}_{X}}(\mathcal{Q},\mathcal{O}_C)$ vanish for $i \ge 2$, and $\mathrm{Tor}_1^{\mathcal{O}_{X}}(\mathcal{Q},\mathcal{O}_C) = 0$ because $\mathcal{Q}$ is torsion-free. Therefore
\begin{center}
$c_1(\mathcal{Q})|_C = c_1({\mathrm{Tor}_0}^{\mathcal{O}_X}(\mathcal{Q},\mathcal{O}_C)) = c_1(\mathcal{Q}|_C)$.
\end{center}
\noindent So $\mathcal{E}|_C$ is not slope stable.
\end{proof}
An immediate corollary is the following coarse criterion for checking slope stability with respect to $\gamma$.
\begin{corollary}\label{3.6} Let $\pi: C_T \rightarrow T$ be a family of smooth irreducible closed curves in $X$ with class $\gamma$. For $t \in T$ we write $C_t$ to denote $\pi^{-1}(t)$. Suppose $\mathcal{E}$ is a vector bundle on $X$ such that $\mathcal{E}|_{C_t}$ is stable for all $t \in T$. If the curves in $C_T$ are dense in $X$ then $\mathcal{E}$ is stable with respect to the curve class $\gamma$.
\end{corollary}
\begin{proof} Suppose for contradiction that $\mathcal{E}$ is unstable with respect to $\gamma$. Then there exists a torsion-free quotient $\mathcal{E} \rightarrow \mathcal{Q} \rightarrow 0$ with $\slgf{\gamma}{\mathcal{Q}} \le \slgf{\gamma}{\mathcal{E}}$. As $\mathcal{Q}$ is torsion-free, $\sing{\mathcal{Q}}$ has codimension $\ge 2$. The curves in $C_T$ are dense in $X$ so there is a $t\in T$ such that $C_t$ is not contained in $\sing{\mathcal{Q}}$. Then \hr{3.5}{Proposition 3.5} guarantees that $\mathcal{E}|_{C_t}$ is not stable, which contradicts our hypothesis.
\end{proof}
\hr{3.5}{Proposition 3.5} can be adjusted so that \hr{3.6}{Corollary 3.6} also holds if stability is replaced by semistability. As a consequence we prove the following basic result about slope stable vector bundles, which we used in the proof of \hr{A}{Theorem A}.
\begin{proposition}\label{3.7} Let $X$ and $Y$ be smooth projective varieties of dimension $d$ and $e$ respectively. Let $H_X$ be an ample divisor on $X$ (resp. $H_Y$ ample on $Y$) and let $p_1$ (resp. $p_2$) denote the projection from $X \times Y$ to $X$ (resp. $Y$). If $\mathcal{E}$ is a vector bundle on X which is slope stable with respect to $H_X$ then $p_1^*(\mathcal{E})$ is slope stable on $X \times Y$ with respect to the ample divisor $p_1^*(H_X) + p_2^*(H_Y)$.
\end{proposition}
\begin{proof} By ~\cite[Theorem 4.3]{MehtaR} if $k \gg 0$ and $C$ is a general curve which is a complete intersection of divisors linearly equivalent to $kH_X$ then $\mathcal{E}|_C$ is stable. Let $F \subset |kH_X|^{d-1}$ be the open subset of the cartesian power of the complete linear series of $kH_X$ defined as
\[F := \left\{ (H_1 , ... , H_{d-1}) \in |kH_X|^{d-1} \Big\vert
\begin{array}{c}
C = H_1 \cap ... \cap H_{d-1} \text{ is a smooth complete}\\
\text{intersection curve and }\mathcal{E}|_C\text{ is stable}
\end{array}
\right\} \subset |kH_X|^{d-1}.
\]
We write $C_F$ for the natural family of smooth curves in $X$ parametrized by $F$. Likewise the fiber product $C_F \times_F (F \times Y)$ is naturally a family of smooth curves in $X \times Y$ parametrized by $F \times Y$. The image of $C_F \times_F (F \times Y)$ in $X \times Y$ is dense, and for any $(f,y) \in F \times Y$ the restriction of $p_1^*(\mathcal{E})$ to $C_{(f,y)}$ is stable. Therefore by \hr{3.6}{Corollary 3.6}, $p_1^*(\mathcal{E})$ is stable with respect to the numerical class of $C_{(f,y)}$, which we denote by $\gamma$.
For $l\gg0$ the divisor $l H_Y$ is very ample on $Y$ and a general complete intersection of divisors linearly equivalent to $l H_Y$ is smooth. Let $G \subset |lH_Y|^{e-1}$ be the open subset of the cartesian power of the complete linear series of $lH_Y$ defined as
\[G := \left\{ (H_1 , ... , H_{e-1}) \in |lH_Y|^{e-1} \Big\vert
\begin{array}{c}
H_1 \cap ... \cap H_{e-1} \text{ is a smooth complete}\\
\text{intersection curve}
\end{array}
\right\} \subset |lH_Y|^{e-1}.
\]
As before there is a natural family $D_G$ of smooth curves in $Y$ parametrized by $G$. The fiber product $D_G \times_G (X \times G)$ is a family of smooth curves in $X \times Y$ parametrized by $X \times G$. For $(x,g) \in X \times G$ the restriction of $p_1^*(\mathcal{E})$ to $D_{(x,g)}$ is a direct sum of trivial bundles, so the restriction is semistable. Therefore by applying \hr{3.6}{Corollary 3.6} in the semistable case, $p_1^*(\mathcal{E})$ is semistable with respect to the curve class of $D_{(x,g)}$, which we write as $\delta$.
Finally,
\begin{center}
$\displaystyle (p_1^*H_X + p_2^*H_Y)^{d+e-1} = {d+e-1 \choose e}\frac{(H_Y)^e}{k^{d-1}}\cdot\gamma + {d+e-1 \choose d}\frac{(H_X)^d}{l^{e-1}}\cdot\delta$.
\end{center}
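\noindent To see where the two terms come from (a sketch of the computation): any monomial in the binomial expansion with more than $d$ factors of $p_1^*H_X$ or more than $e$ factors of $p_2^*H_Y$ vanishes, so only two terms survive:
\begin{center}
$\displaystyle (p_1^*H_X + p_2^*H_Y)^{d+e-1} = {d+e-1 \choose e}\, p_1^*H_X^{d-1}\cdot p_2^*H_Y^{e} + {d+e-1 \choose d}\, p_1^*H_X^{d}\cdot p_2^*H_Y^{e-1}$.
\end{center}
\noindent As numerical curve classes, $p_1^*H_X^{d-1}\cdot p_2^*H_Y^{e} = (H_Y)^e\,[H_X^{d-1}\times \mathrm{pt}] = \frac{(H_Y)^e}{k^{d-1}}\,\gamma$, since $\gamma$ is the class of a complete intersection of $d-1$ divisors in $|kH_X|$ lying in a fiber of $p_2$; similarly $p_1^*H_X^{d}\cdot p_2^*H_Y^{e-1} = \frac{(H_X)^d}{l^{e-1}}\,\delta$.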
\noindent Therefore by \hr{sum}{Lemma 3.4}, $p_1^*(\mathcal{E})$ is slope stable with respect to $p_1^*(H_X) + p_2^*(H_Y)$.
\end{proof}
This completes the proof of \hr{A}{Theorem A}. We now sketch a proof of the perturbation argument. The basic idea is an extension of the wall and chamber construction of \cite[Theorem 6.6]{Greb1} to the boundary of the positive cone of curves. This is possible when considering nef divisors which are also lef in the sense of \cite[Definition 2.1.3]{dCM1}.
\begin{proposition}\label{1.9} Let $H$ be a nef divisor and $A$ an ample $\mathbb{Q}$-divisor on $X$ a normal complex projective variety. Suppose $\mathcal{E}$ is a rank $r$ torsion-free sheaf on $X$ which is slope stable with respect to the class of $H^{d-1}$. Assume
\begin{center}
$- \cap H^{d-2}:N^1(X)_\mathbb{R} \rightarrow N_1(X)_{\mathbb{R}}$
\hspace{2.5mm} $ \xi \mapsto \xi \cdot H^{d-2}$
\end{center}
\noindent is an isomorphism. Then there is a nonempty open set in the ample cone abutting $H$ where $\mathcal{E}$ is stable.
\end{proposition}
This implies that, for Hilbert schemes of points on surfaces, we can perturb the Chow polarization.
\begin{corollary}
Let $S$ be a smooth projective surface, $H$ an ample divisor on $S$, and $\mathcal{E}$ a vector bundle on $S$ which is stable with respect to $H$. Then $\enl{n}{\mathcal{E}}$ is stable with respect to some ample divisor near the Chow divisor $H_n$.
\end{corollary}
\begin{proof}[Proof of Corollary.]
By \cite[Theorem 2.3.1]{dCM1} we know $H_n$ is lef, so $\enl{n}{\mathcal{E}}$ and $H_n$ satisfy the conditions of Proposition 3.8. Therefore $\enl{n}{\mathcal{E}}$ is stable with respect to ample divisors close to $H_n$.
\end{proof}
\begin{proof}[Sketch of Proof of Proposition 3.8]
For $A$ any ample $\mathbb{Q}$-divisor on $X$ we have a maximally slope-destabilizing quotient $\mathcal{E} \rightarrow \mathcal{Q}_A$. Thus the difference $\slgf{A^{d-1}}{\mathcal{Q}}-\slgf{A^{d-1}}{\mathcal{E}}$ is bounded below, uniformly over all torsion-free quotients $\mathcal{E} \rightarrow \mathcal{Q}$ of intermediate rank. On the other hand, we can give the bound:
\begin{center}
$\slgf{H^{d-1}}{\mathcal{Q}}-\slgf{H^{d-1}}{\mathcal{E}} \ge \frac{1}{r(r-1)}$
\end{center}
\noindent because $H$ is a $\mathbb{Z}$-divisor: both slopes are rational numbers with denominators $\rk{\mathcal{Q}} \le r-1$ and $r$ respectively, so a positive difference is at least $\frac{1}{r(r-1)}$. Combining these bounds with the linearity of slopes in the curve class, we see that for small $\epsilon >0$, $\mathcal{E}$ is stable with respect to the curve class $H^{d-1}+\epsilon A^{d-1}$.
Here we have two closed convex cones, the set of nef $\mathbb{R}$-divisors and the set of curve classes $\gamma \in N_1(X)_{\mathbb{R}}$ such that $\mathcal{E}$ is semistable with respect to $\gamma$, which we call the \textit{semistable cone}. By considering all ample $\mathbb{Q}$-divisors of the type $tH + (1-t)A$ for rational $t \in (0,1\rbrack$ and taking the limit as $t$ goes to $0$, one can show the derivative of the $(d-1)$st power map sends the tangent vectors pointing into the nef cone to the tangent vectors pointing into the semistable cone.
The derivative at $H$ of the $(d-1)$st power map from $N^1(X)_{\mathbb{R}}$ to $N_1(X)_{\mathbb{R}}$ is $(d-1)H^{d-2}$, which by assumption is nondegenerate. Therefore, it maps tangent vectors pointing into the interior of the ample cone to tangent vectors pointing into the interior of the semistable cone. As $\mathcal{E}$ is stable with respect to $H^{d-1}$, by Lemma 3.4 it is stable with respect to any class in the interior of the semistable cone. Therefore, there is a nonempty open set in the ample cone abutting $H$ where $\mathcal{E}$ is stable.
\end{proof}
| {
"timestamp": "2015-06-30T02:11:29",
"yymm": "1409",
"arxiv_id": "1409.8229",
"language": "en",
"url": "https://arxiv.org/abs/1409.8229",
"abstract": "The purpose of this paper is to explore the geometry and establish the slope stability of tautological vector bundles on Hilbert schemes of points on smooth surfaces. By establishing stability in general we complete a series of results of Schlickewei and Wandel who proved the slope stability of these vector bundles for Hilbert schemes of 2 points or 3 points on K3 or abelian surfaces with Picard group restrictions. In exploring the geometry we show that every sufficiently positive semistable vector bundle on a smooth curve arises as the restriction of a tautological vector bundle on the Hilbert scheme of points on the projective plane. Moreover we show the tautological bundle of the tangent bundle is naturally isomorphic to the sheaf of vector fields tangent to the divisor which consists of nonreduced subschemes.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "Geometry and stability of tautological bundles on Hilbert schemes of points",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9748211612253742,
"lm_q2_score": 0.7279754430043072,
"lm_q1q2_score": 0.7096458666930149
} |
https://arxiv.org/abs/1304.0428 | Convex and subharmonic functions on graphs | We explore the relationship between convex and subharmonic functions on discrete sets. Our principal concern is to determine the setting in which a convex function is necessarily subharmonic. We initially consider the primary notions of convexity on graphs and show that more structure is needed to establish the desired result. To that end, we consider a notion of convexity defined on lattice-like graphs generated by normed abelian groups. For this class of graphs, we are able to prove that all convex functions are subharmonic. | \section{Introduction}
Classical analysis provides several equivalent definitions of a convex function, which have led to several non-equivalent concepts of a convex function on a graph. As an interesting alternative, there appears to be a consensus on how to define subharmonic functions on graphs. In the real variable counterpart, all convex functions are subharmonic. It is the aim of this paper to investigate this relationship in the discrete setting.
We show that in the setting of weighted graphs over a normed abelian group one can prove analogs of some classical analysis theorems relating convexity to subharmonic functions. In particular, (Theorem \ref{T:cvx=>sub}) all convex functions are subharmonic, (Lemma \ref{L:dist-to-a-point=cvx}) for a fixed point $a\in X$ the distance function $d(x,a)$ is convex, and (Propositions \ref{P:dist-cvx=>cvx} and \ref{P:cvx=>dist-cvx}) a set $F$ is convex if and only if the distance function $d(x,F) = \inf_{y\in F}d(x,y)$ is convex.
For a discrete set with a metric, there is generally one straightforward way to define convex sets and convex functions on them. For completeness and ease of reference we present these in Section \ref{S:fund-conp}. The definitions we give (or something equivalent to them) can be traced back at least to $d$-convexity \cite{GSS73, S72} and $d$-convex functions \cite{SS79}, and possibly much earlier. Graphs admit a natural metric, i.e. the length of the shortest path between two vertices, which leads to one notion of convexity on graphs studied in \cite{S83,S91}. The notion of $d$-convexity on graphs when $d$ is the standard graph metric is equivalent to the more common notion of geodesic convexity \cite{CMOP05, FJ86}.
Common to \cite{CMOP05, FJ86, S83, S91}, one starts with a graph and then puts a convexity theory on it by using the graph metric. However, in Section \ref{S:graph-metric} we show that convex sets and functions defined on graphs with respect to the graph metric have a few pleasant but mostly a large number of undesirable properties, thereby breaking the analogy with their classical analysis counterparts.
Another approach, taken here in Section \ref{S:background_metric}, is to allow the vertices themselves to have some underlying structure, e.g. a normed abelian group, and force the edges to be compatible with this metric. (As opposed to making a metric compatible with the edges.) In the setting of a normed abelian group there are many notions of a convex function, see \cite{K04} and references therein. One introduced in \cite{K04} provides a natural extension of geodesic convexity that makes use of the additional abelian group structure. Convex and subharmonic functions in this setting are of particular interest to image analysis, e.g. \cite{K04, K05}, and here we are able to prove theorems analogous to several standard results from classical analysis.
In particular, (Theorem \ref{T:cvx=>sub}) all convex functions are subharmonic, (Lemma \ref{L:dist-to-a-point=cvx}) for a fixed point $a\in X$ the distance function $d(x,a)$ is convex, and (Propositions \ref{P:dist-cvx=>cvx} and \ref{P:cvx=>dist-cvx}) a set $F$ is convex if and only if the distance function $d(x,F) = \inf_{y\in F}d(x,y)$ is convex.
\section{Fundamental concepts}\label{S:fund-conp}
We will always assume that a graph is locally finite.
\subsection{Convexity}
Let $X$ be an at most countable set with a metric $d$, i.e. $d\colon X\times X \rightarrow \mathbb{R}$ with the properties
\begin{enumerate}
\item $d(x,y)\ge 0$ for all $x,y \in X$, with $d(x,y)=0$ if and only if $x=y$,
\item $d(x,y)=d(y,x)$, and
\item $d(x,y)\le d(x,z)+d(z,y)$.
\end{enumerate}
Traditionally a set $A$ is convex if for all points $x,y\in A$ every point on the line segment connecting them is also in $A$. Notice that a point $z$ is on the line segment connecting $x,y\in A$ if and only if $d(x,y)=d(x,z)+d(z,y)$. Hence we take the following definitions:
For $A\subset X$ define
\[c_1(A) = \{z\in X\colon d(x,y)=d(x,z)+d(z,y) \text{ for some } x,y\in A\} \]
when $A=\emptyset$, take $c_1(\emptyset)=\emptyset$, and inductively
$c_n(A)=c_1(c_{n-1}(A))$.
Note that $0=d(x,x)=d(x,x)+d(x,x)$, hence $A\subset c_1(A)\subset \cdots \subset c_n(A)$ for all $n$.
\begin{df}
Let $A\subset X$. The \emph{convex hull} of $A$ is
\[{\rm cvx}(A) = \bigcup_{n=1}^\infty c_n(A).\]
Naturally, the set $A$ is said to be \emph{convex} if ${\rm cvx}(A)=A$. Clearly $\emptyset$ and $X$ are convex.
We say that the point $z$ is \emph{in between} $x$ and $y$ whenever $d(x,y)=d(x,z)+d(z,y)$ is satisfied.
\end{df}
Consequently,
\begin{lma}\label{L:a_cvx<=>a=c_1(a)}
A set $A\subset X$ is convex if and only if $A=c_1(A)$.
\end{lma}
\begin{proof}
If $A=c_1(A)$ then $c_2(A) = c_1(c_1(A))=c_1(A)=A$. Hence by induction $c_n(A)=A$ and so $A=\cup c_n(A) = {\rm cvx}(A)$. Thus $A$ is convex.
Suppose that $A$ is convex. Then $A={\rm cvx}(A) = \cup c_n(A) \supset c_1(A) \supset A$. Thus $A=c_1(A)$.
\end{proof}
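On a finite metric space the hull operator $c_1$ and the convex hull can be computed by iterating $c_1$ to a fixed point, which terminates since $A \subset c_1(A)$ and the space is finite. The following Python sketch (the function names and data layout are our own, purely illustrative) implements the definitions:

```python
from itertools import product

def c1(A, points, d):
    """One application of the hull operator: all z in between some x, y in A."""
    return {z for z in points
            if any(d[x][y] == d[x][z] + d[z][y]
                   for x, y in product(A, repeat=2))}

def cvx(A, points, d):
    """Convex hull: iterate c1 until it stabilizes (cvx(A) is the union of the c_n(A))."""
    hull = set(A)
    while True:
        bigger = c1(hull, points, d)
        if bigger == hull:
            return hull
        hull = bigger

# Path graph on {0, 1, 2, 3} with the graph metric d(i, j) = |i - j|.
pts = [0, 1, 2, 3]
d = {i: {j: abs(i - j) for j in pts} for i in pts}
assert cvx({0, 3}, pts, d) == {0, 1, 2, 3}  # every vertex lies between 0 and 3
assert cvx({0, 1}, pts, d) == {0, 1}        # an edge is already convex
```

By Lemma \ref{L:a_cvx<=>a=c_1(a)}, a set is convex exactly when one application of \texttt{c1} returns it unchanged, so convexity can be tested without the full iteration.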
\begin{prp}
For all sets $A, B\subset X$,
\begin{align}
A &\subset {\rm cvx}(A) \\
A\subset B &\Rightarrow {\rm cvx}(A)\subset {\rm cvx}(B)\\
{\rm cvx}(A) &= {\rm cvx}({\rm cvx}(A)).
\end{align}
\end{prp}
\begin{proof}
\begin{enumerate}
\item We've already shown that $A\subset c_1(A)\subset \cdots \subset c_n(A)$ for all $n$ and so $A\subset \cup c_n(A)={\rm cvx}(A)$.
\item For any sets $S, T \subset X$, if $S\subset T$ then $c_1(S)\subset c_1(T)$. Indeed for any $z\in c_1(S)$ there exist by definition $x_1,x_2\in S$ so that $d(x_1,x_2)=d(x_1,z)+d(z,x_2)$, but as $x_1,x_2\in S\subset T$ this shows that $z\in c_1(T)$. Then as $A\subset B$, we have $c_1(A)\subset c_1(B)$. Then by induction, $c_n(A)\subset c_n(B)$. Therefore ${\rm cvx}(A)\subset {\rm cvx}(B)$.
\item The claim ${\rm cvx}(A) = {\rm cvx}({\rm cvx}(A))$ amounts to saying that ${\rm cvx}(A)$ is convex. We will use Lemma \ref{L:a_cvx<=>a=c_1(a)} to show this. Consider any $z\in c_1({\rm cvx}(A))$. This means there exist $x,y\in {\rm cvx}(A) = \cup c_n(A)$ so that $d(x,y) = d(x,z)+d(z,y)$. However as $A\subset c_1(A)\subset c_2(A)\subset \cdots \subset c_n(A) \subset \cdots$ we know $x, y\in c_n(A)$ for some $n$, and so $z\in c_1(c_n(A))=c_{n+1}(A)\subset {\rm cvx}(A)$. Hence $c_1({\rm cvx}(A))={\rm cvx}(A)$, and so by Lemma \ref{L:a_cvx<=>a=c_1(a)} the set ${\rm cvx}(A)$ is convex.
\end{enumerate}
\end{proof}
The following proposition shows that our definition of convex hull is equivalent to the usual one, i.e. the convex hull of $A$ is the intersection of all convex sets that contain $A$.
\begin{prp}
For any $A\subset X$, the set ${\rm cvx}(A)$ is the intersection of all convex sets that contain $A$.
\end{prp}
\begin{proof}
Let $B\subset X$ be a convex set containing $A$. As noted previously $A\subset B$ implies ${\rm cvx}(A)\subset {\rm cvx}(B)$. However ${\rm cvx}(B)=B$ by hypothesis. Hence ${\rm cvx}(A) \subset B$ for all convex $B$ containing $A$. Therefore
\[{\rm cvx}(A) \subset \bigcap\{B\colon A\subset B \text{ and } B \text{ convex} \}.\]
As ${\rm cvx}(A)$ is convex and $A\subset {\rm cvx}(A)$, it must be included in the intersection above. Thus
\[\bigcap\{B\colon A\subset B \text{ and } B \text{ convex} \} \subset {\rm cvx}(A).\qedhere\]
\end{proof}
\begin{prp}
If $A$ and $B$ are convex, then $A\cap B$ is convex.
\end{prp}
\begin{proof}
Let $A$ and $B$ be convex. Then by Lemma \ref{L:a_cvx<=>a=c_1(a)} $A=c_1(A)$ and $B=c_1(B)$. We will show that $c_1(A\cap B)\subset c_1(A)\cap c_1(B)=A\cap B$. We've already noted that $A\cap B\subset c_1(A\cap B)$.
Suppose that $z\in c_1(A\cap B)$. Then there exist $x,y\in A\cap B$ such that $d(x,y)=d(x,z) + d(z,y)$. Hence $z\in c_1(A)$ and $z\in c_1(B)$, that is, $z\in c_1(A)\cap c_1(B)$. As $A=c_1(A)$ and $B=c_1(B)$, we now have $z\in c_1(A)\cap c_1(B) = A\cap B$. Therefore $c_1(A\cap B)\subset A\cap B$. Thus $A\cap B = c_1(A\cap B)$ and so $A\cap B$ is convex.
\end{proof}
\begin{prp}Let $I$ be a totally ordered set and take $\{A_\alpha\}_{\alpha\in I}$ to be a collection of convex sets in $X$ where $A_\alpha\subset A_\beta$ whenever $\alpha < \beta$ and $\alpha,\beta\in I$. Then the union $\cup_{\alpha\in I} A_\alpha$ is convex.
\end{prp}
\begin{proof}
We must show that $\cup A_\alpha$ is convex.
Consider the set $c_1(\cup A_\alpha)$. For any $z\in c_1(\cup A_\alpha)$, we can find $x,y\in \cup A_\alpha$ so that $d(x,y)=d(x,z)+d(z,y)$. However $x,y \in \cup A_\alpha$ implies that $x \in A_\alpha$ and $y \in A_\beta$ for some $\alpha, \beta \in I$. Without loss of generality we assume that $\alpha < \beta$. By hypothesis, $A_\alpha\subset A_\beta$. Hence $x,y\in A_\beta$. Since $z$ satisfies $d(x,y)=d(x,z)+d(z,y)$ for $x,y\in A_\beta$ with $A_\beta$ convex, we see that $z\in c_1(A_\beta)=A_\beta$. As $z$ was arbitrarily chosen from $c_1(\cup A_\alpha)$, we have $c_1(\cup A_\alpha)\subset \cup A_\alpha$.
By construction the reverse inclusion $\cup A_\alpha\subset c_1(\cup A_\alpha)$ is immediate. Hence $c_1(\cup A_\alpha)= \cup A_\alpha$. Recall (Lemma \ref{L:a_cvx<=>a=c_1(a)}) that a set $A$ is convex if and only if $A=c_1(A)$. Thus $\cup A_\alpha$ is convex.
\end{proof}
\begin{df}
Let $A$ be a convex set. A function $f\colon A\rightarrow \mathbb{R}$ is \emph{convex at the point} $z\in A$ if
\[f(z) \le \frac{d(y,z)}{d(x,y)}f(x)+\frac{d(x,z)}{d(x,y)}f(y)\]
whenever $z$ is in between $x, y\in A$, i.e. $d(x,y)=d(x,z)+d(z,y)$. A function is said to be \emph{convex on} $A$ if it is convex at every point in $A$. Furthermore, a function is simply called \emph{convex} when it is convex on the entire set $X$.
\end{df}
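On a finite metric space this pointwise condition can be checked by brute force over all pairs. A minimal Python sketch (names and data layout are ours, not from the text):

```python
def is_convex_at(f, z, points, d):
    """Check f(z) <= (d(y,z)/d(x,y)) f(x) + (d(x,z)/d(x,y)) f(y)
    over all pairs x != y with z in between them."""
    for x in points:
        for y in points:
            if x != y and d[x][y] == d[x][z] + d[z][y]:
                bound = (d[y][z] * f[x] + d[x][z] * f[y]) / d[x][y]
                if f[z] > bound:
                    return False
    return True

# Path graph on {0,...,4} with d(i, j) = |i - j|.
pts = range(5)
d = {i: {j: abs(i - j) for j in pts} for i in pts}
f = {i: (i - 2) ** 2 for i in pts}   # convex at the midpoint
g = {i: -abs(i - 2) for i in pts}    # concave: fails at the midpoint
assert is_convex_at(f, 2, pts, d)
assert not is_convex_at(g, 2, pts, d)
```

For pairs with $z$ not in between them the condition is vacuous, matching the definition.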
The vertices of a graph admit a natural metric defined as the length of the shortest path between them. With this, the notions of convex and convex functions extend naturally to all graphs, see \cite{CMOP05, FJ86, S83, S91}.
\subsection{Subharmonic functions on a graph}
Introductions to various aspects of the theory can be found in \cite{BLS07, K05, S94, W94}.
Consider a graph $G$. The vertices of this graph will be denoted $X$ (to stay consistent with above), which shall be the domain of our (sub)harmonic functions. A function $f\colon X \rightarrow \mathbb{R}$ is said to be \emph{harmonic} at $x\in X$ if
\[f(x) = \frac{1}{\deg(x)}\sum_{y\sim x}f(y)\]
and \emph{subharmonic} at $x\in X$ if
\[f(x) \le \frac{1}{\deg(x)}\sum_{y\sim x}f(y)\]
where $\deg(x)$ denotes the degree of $x$ and $y\sim x$ means that $y$ is adjacent to $x$. A function is (sub)harmonic if it is (sub)harmonic at every point $x\in X$. Observe that constant functions are always harmonic (thereby subharmonic too), and so these classes of functions are never empty.
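Subharmonicity at a vertex is likewise a finite check. In the Python sketch below (\texttt{adj} is our own notation for the adjacency lists), the function $f(i)=|i-1|$ on the path $0$--$1$--$2$ is convex in the sense of the previous subsection, yet fails to be subharmonic at the two leaves; this is why vertices of degree one are excluded in the results of Section \ref{S:graph-metric}.

```python
def is_subharmonic_at(f, x, adj):
    """f(x) is at most the average of f over the neighbors of x."""
    return f[x] <= sum(f[y] for y in adj[x]) / len(adj[x])

# Path graph 0 - 1 - 2, with f(i) = |i - 1|: convex, but not subharmonic at the leaves.
adj = {0: [1], 1: [0, 2], 2: [1]}
f = {0: 1, 1: 0, 2: 1}
assert is_subharmonic_at(f, 1, adj)       # interior vertex: 0 <= (1 + 1)/2
assert not is_subharmonic_at(f, 0, adj)   # leaf: 1 > f(1) = 0
```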
\begin{lma}
If the graph $X$ is connected, regular of degree two and triangle free, then subharmonicity is the same as convexity.
\end{lma}
\begin{proof}
Each vertex $z$ has only two neighbors $x,y$. As the graph is triangle free $d(x,y)=2$. Hence
\[\frac{1}{\deg(z)}\sum_{\zeta\sim z}f(\zeta) = \frac{1}{2}\left(f(x)+f(y)\right) = \frac{d(y,z)}{d(x,y)}f(x)+\frac{d(x,z)}{d(x,y)}f(y).\]
By definition $f$ is subharmonic at $z$ if $f(z)$ is less than or equal to the left side of the equation above, and $f$ is convex at $z$ if $f(z)$ is less than or equal to the right side. Therefore subharmonicity and convexity are equivalent conditions.
\end{proof}
We will also use a standard modification of the definition of subharmonic functions on graphs to allow for positive edge weights. Namely, a function $f\colon X\rightarrow \mathbb{R}$ is subharmonic at $x$ if
\[0\le \sum_{y\sim x} e(x,y)[f(y)-f(x)],\]
which with some arithmetic becomes
\[f(x)\le \frac{1}{M_x}\sum_{y\sim x} e(x,y)f(y),\]
where $e(x,y)=e(y,x)\ge 0$ is the edge weight and $M_x=\sum_{y\sim x} e(x,y)$. If the edge weights are all taken to be one, then this definition is identical to the first.
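The arithmetic relating the two displayed forms is just division by $M_x$. The following Python sketch (the weights, values, and names are arbitrary choices of ours) checks that both forms agree on a small weighted triangle:

```python
def defect(f, x, adj, e):
    """sum_y e(x,y) * (f(y) - f(x)); nonnegative iff f is subharmonic at x."""
    return sum(e[frozenset((x, y))] * (f[y] - f[x]) for y in adj[x])

def weighted_mean(f, x, adj, e):
    """(1/M_x) * sum_y e(x,y) f(y), with M_x = sum_y e(x,y)."""
    M = sum(e[frozenset((x, y))] for y in adj[x])
    return sum(e[frozenset((x, y))] * f[y] for y in adj[x]) / M

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}   # a triangle
e = {frozenset((0, 1)): 1.0, frozenset((0, 2)): 2.0, frozenset((1, 2)): 3.0}
f = {0: 0.0, 1: 1.0, 2: 4.0}
# The two formulations of subharmonicity agree at every vertex:
assert all((defect(f, v, adj, e) >= 0) == (f[v] <= weighted_mean(f, v, adj, e))
           for v in adj)
```

Symmetric weights $e(x,y)=e(y,x)$ are encoded here by keying on the unordered pair \texttt{frozenset((x, y))}.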
\section{The distance is given by the graph metric.}\label{S:graph-metric}
In this section we provide two simple theorems which show that for a large class of graphs, convex functions are indeed subharmonic.
\begin{thm}
Let $z$ be a point in $X$. Suppose that $\deg(z)>1$ and that $z$ is not part of any triangle. If $f$ is convex at $z$, then $f$ is subharmonic at $z$. Consequently, if the graph has no triangles or vertices of degree less than $2$, then every convex function is subharmonic.
\end{thm}
\begin{proof}
Let $B=\{y\in X\colon y\sim z\}$ be all the vertices adjacent to $z$. By hypothesis $\deg(z)=|B|>1$, and so there are at least two vertices $y_1, y_2\in B$. As $z$ is adjacent to both $y_1$ and $y_2$, and as $z$ is assumed to not be part of a triangle, $y_1$ is not adjacent to $y_2$. Hence $z$ is in between $y_1$ and $y_2$, that is, on a geodesic connecting $y_1$ and $y_2$. In fact, $2=d(y_1, y_2)=d(y_1,z)+d(z,y_2)$ with $d(y_1,z)=d(z,y_2)=1$. Hence for every pair of distinct $y_1, y_2\in B$ we have
\begin{equation}\label{E:basic}
2f(z) \le f(y_1)+f(y_2)
\end{equation}
by convexity.
Now we sum Equation (\ref{E:basic}) over all unordered pairs of points $y_1, y_2\in B$. Naturally there are $\binom{\deg(z)}{2}$ such pairs and each vertex $y\in B$ will appear precisely $\deg(z)-1$ times. (Recall $B=\{y\colon y\sim z\}$ and so $|B| = \deg(z)$.) Hence
\[\binom{\deg(z)}{2} 2f(z) \le (\deg(z)-1)\sum_{y\sim z}f(y),\]
which simplifies to
\[f(z) \le \frac{1}{\deg(z)}\sum_{y\sim z}f(y).\]
Thus $f$ is subharmonic at $z$.
\end{proof}
Furthermore,
\begin{thm}
Let $z$ be a point in $X$. If the neighbors of $z$ can be partitioned into pairs such that the vertices in each pair are non-adjacent, then any function that is convex at $z$ is also subharmonic at $z$.
\end{thm}
\begin{proof}
The vertices $y_1, y_2$ in each pair of the partition of the neighbors of $z$ are non-adjacent, so the vertex $z$ must lie between them, and hence
\[2f(z) \le f(y_1)+f(y_2)\]
for any function $f$ convex at $z$. Summing this inequality over all $\deg(z)/2$ pairs, we have
\[2 \frac{\deg(z)}{2} f(z) \le \sum_{y\sim z}f(y).\]
Therefore $f$ is subharmonic at $z$.
\end{proof}
Notice that for the standard square lattice both theorems imply that a convex function is subharmonic. If $z$ were connected to an odd number of pairwise non-adjacent points, then only the first theorem would imply that a function convex at $z$ is subharmonic at $z$. Similarly, when the graph is the standard triangular tiling of the plane, only the second theorem shows that every convex function is subharmonic.
\begin{thm}
Let $F$ be any subset of $X$. If the distance function
\[d(\cdot, F):=\inf\left\{ d(\cdot, f) \colon f\in F \right\}\]
is convex, then $F$ is convex.
\end{thm}
\begin{proof}
Consider any point $z\in X$ that lies between $x,y\in F$. If the distance function is convex, we have
\[0\le d(z,F) \le \frac{d(y,z)}{d(x,y)}d(x,F)+\frac{d(x,z)}{d(x,y)}d(y,F),\]
but $d(x,F)=d(y,F)=0$ as $x,y\in F$. Therefore $d(z,F)=0$ and so $z$ must also be a point in $F$.
\end{proof}
\begin{ex}
\label{Ex:cycle}
Consider a cycle on four vertices, i.e. $X=\{a,x,y,z\}$ with $a\sim x, x\sim y, y\sim z, z\sim a$. Clearly the set $F=\{a\}$ is convex. Now $d(x,F)=d(z,F)=1$, and $y$ lies between $x$ and $z$. However
\[2 = d(y,a) \not\le \frac{1}{2} d(x,a) +\frac{1}{2} d(z,a) = 1.\]
Hence $d(\cdot, a)$ is not convex, and certainly not subharmonic.
Observe also that the set $\{x,y,z\}$ is NOT convex. We believe this reveals part of the problem with this definition of convexity: a geodesic line segment need not be convex. It seems that `few' graphs have convex geodesics. (However, $X=\mathbb{Z}$ with $x\sim y$ when $|x-y|=1$, and the standard triangular tiling of the plane, are two such.)
\end{ex}
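The failure of convexity in this example is easily confirmed by brute force (an illustrative Python sketch; all names are ours):

```python
from collections import deque

# The 4-cycle a - x - y - z - a; d(., a) fails to be convex at y.
verts = ["a", "x", "y", "z"]
edges = {("a", "x"), ("x", "y"), ("y", "z"), ("z", "a")}
adj = lambda u, v: (u, v) in edges or (v, u) in edges

def dist(u, v):
    seen, queue = {u: 0}, deque([u])     # breadth-first search
    while queue:
        w = queue.popleft()
        for s in verts:
            if adj(w, s) and s not in seen:
                seen[s] = seen[w] + 1
                queue.append(s)
    return seen[v]

f = {v: dist(v, "a") for v in verts}
# y lies between x and z: d(x, z) = d(x, y) + d(y, z) = 2.  Convexity at y
# would require d(x, z) f(y) <= d(z, y) f(x) + d(x, y) f(z):
lhs = dist("x", "z") * f["y"]                            # 2 * 2 = 4
rhs = dist("z", "y") * f["x"] + dist("x", "y") * f["z"]  # 1 + 1 = 2
print(lhs <= rhs)  # False
```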
It would seem that more structure is needed to have a workable theory.
\section{Graphs over a normed abelian group.} \label{S:background_metric}
For the remainder of this paper, we consider weighted graphs where the vertex set $X$ is a normed abelian group, and the graph is compatible with the norm. We will denote the norm by $||\cdot||$. Recall that the graph structure is \emph{compatible with the norm} if there is a constant $r>0$ such that $x\sim y$ if and only if $||x-y||\le r$, and the edge weights are given by the norm: $e(x,y)=||x-y||\le r$.
In particular, graphs of this type include all lattice graphs. By rescaling $X$ by $r$ we can always assume without loss of generality that $r=1$.
Graphs of this type pick up a number of traits from analysis. Not least among these is a local similarity property. When one does analysis in a domain $D\subset \mathbb{R}^n$ (or on a manifold), every point $z\in D$ has a neighborhood which is locally like a ball in $\mathbb{R}^n$. We see the same property here.
This can also be viewed as a translation invariance property: we could translate any point $x_0$ to the origin by taking $X\mapsto X-x_0$ and nothing would change. More explicitly, we denote $B_r(x_0):=\{y\in X\colon y\sim x_0\}$, and for every $x_0$ in $X$ there is a simple one-to-one correspondence between $B_r(x_0)$ and $B_r(0)$: if $y\in B_r(x_0)$, then $z=y-x_0\in B_r(0)$, and if $z\in B_r(0)$, then $x_0+z\in B_r(x_0)$.
Furthermore, if $\zeta\in B_r(0)$, then $-\zeta\in B_r(0)$. Hence
\begin{equation}\label{E:sim_to_zero}
\{y\in X:y\sim x\}:=B_r(x)=\{x+\zeta\colon \zeta\in B_r(0)\}=\{x-\zeta\colon \zeta\in B_r(0)\}\,.
\end{equation}
We maintain the same notion of a convex function, namely
\[||x-y||f(z)\le ||y-z||f(x)+||x-z||f(y),\]
whenever $||x-y||=||x-z||+||z-y||$. However in this context we can work with midpoints.
In \cite{K96}, Kiselman defines a function $f$ on an abelian group $X$ to be \emph{midpoint convex} if
\[f(x) \le \frac{1}{2}f(x+z)+\frac{1}{2}f(x-z)\]
for all $x$ and $z$ in $X$. (Actually he uses the notion of upper addition for functions defined on the extended real line, i.e. $\mathbb{R}\cup\{\pm\infty\}$, but we will not need such subtleties here.) Trivially, a convex function is always midpoint convex.
We will now see that this notion of midpoint convexity allows us to achieve our goals.
\begin{thm}\label{T:cvx=>sub}
Consider a weighted graph where the vertex set $X$ is a normed abelian group and the graph is compatible with the norm. Every midpoint convex function is subharmonic.
\end{thm}
\begin{proof}
Pick any $x\in X$. Observe that by Equation (\ref{E:sim_to_zero})
\begin{align*}\sum_{y\sim x} e(x,y) f(y) &= \frac{1}{2}\sum_{z\in B_r(0)} e(x,x+z) f(x+z) + \frac{1}{2}\sum_{z\in B_r(0)} e(x,x-z) f(x-z) \\
&= \sum_{z\in B_r(0)} e(x,x+z) \left(\frac{1}{2}f(x+z)+\frac{1}{2}f(x-z)\right).
\end{align*}
Hence by (midpoint) convexity
\[f(x)M_x=f(x)\sum_{z\in B_r(0)}e(x,x+z) \le \sum_{y\sim x} e(x,y) f(y),\]
which shows that $f$ is subharmonic at $x$.
\end{proof}
A set $A\subset X$ is called \emph{convex} if the function
\[\chi_A(x)=\begin{cases} 0 &\colon x\in A, \\+\infty &\colon x\in X\setminus A,\end{cases}\]
is convex, or, equivalently, if $z\in A$ whenever there exist $x,y\in A$ such that $||x-y||=||x-z||+||z-y||$. This again easily implies midpoint convexity, i.e. $z\in A$ whenever there is an $x\in X$ such that both $z+x$ and $z-x$ are in $A$.
\begin{prp}\label{P:dist-cvx=>cvx}
Let $F$ be any subset of $X$. If the distance function $d(x,F)=\inf\{||x-y||\colon y\in F\}$ is convex, then the set $F$ is convex.
\end{prp}
\begin{proof}
Let $x\in X$ so that there is some $z\in X$ with $x\pm z\in F$. Then by midpoint convexity
\[0\le d(x,F) \le \frac{1}{2} d(x+z,F) + \frac{1}{2} d(x-z,F) = 0.\]
Thus $d(x,F)=0$ and so $x\in F$.
\end{proof}
Notice that for the simple case $F=\{a\}$ we get the converse of the previous result.
\begin{lma}\label{L:dist-to-a-point=cvx}
For any fixed $a\in X$, the function $f(z) = ||z-a||$ is midpoint convex.
\end{lma}
\begin{proof}
This follows immediately from the triangle inequality for the norm. Indeed, for any $x,z\in X$ we have
\begin{align*}
2f(x) & = 2||x-a|| = ||2(x-a)|| \\
& = ||(x-a)-z + (x-a)+z|| \\
& \le ||(x-a)-z || + ||(x-a)+z|| \\
& = f(x-z)+f(x+z). \qedhere
\end{align*}
\end{proof}
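The lemma can also be confirmed by brute force, for instance on $X=\mathbb{Z}^2$ with the $\ell^1$ norm (an illustrative Python sketch):

```python
import itertools

# Check 2 f(x) <= f(x - z) + f(x + z) for f(w) = ||w - a||, on a finite
# sample of Z^2 with the l^1 norm.
def norm1(v):
    return abs(v[0]) + abs(v[1])

a = (2, -1)
f = lambda w: norm1((w[0] - a[0], w[1] - a[1]))

rng = range(-4, 5)
for x in itertools.product(rng, rng):
    for z in itertools.product(rng, rng):
        xm = (x[0] - z[0], x[1] - z[1])
        xp = (x[0] + z[0], x[1] + z[1])
        assert 2 * f(x) <= f(xm) + f(xp)
print("midpoint convexity holds on the sample")
```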
Of course, the minimum of two convex functions is in general not convex, which is perhaps one reason why the following result is interesting.
Note, however, that the classical proofs rely heavily upon the fact that for any point $x$ and any convex set $F$ there is always a unique nearest neighbor $y\in F$ to $x$.
\begin{df}
We say that a set $F$ has the \emph{nearest neighbor} property if for all $y_1, y_2\in F$ and $z\in X$ there exists a $y\in F$ (possibly $y_1$ or $y_2$) such that
\[2 ||y-z|| \le ||y_1+y_2 -2z||.\]
\end{df}
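For example, an interval in $X=\mathbb{Z}$ with the usual absolute value has the nearest neighbor property; a brute-force check (illustrative Python sketch):

```python
# Check the nearest neighbor property for F = {0, ..., 5} in Z:
# for all y1, y2 in F and z in (a window of) Z, there must exist y in F
# with 2|y - z| <= |y1 + y2 - 2z|.
def has_nn_property(F, Z):
    return all(
        any(2 * abs(y - z) <= abs(y1 + y2 - 2 * z) for y in F)
        for y1 in F for y2 in F for z in Z
    )

print(has_nn_property(range(0, 6), range(-10, 16)))  # True
```

Here the witness $y$ is an integer point of $F$ closest to the (possibly half-integer) midpoint $(y_1+y_2)/2$.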
\begin{prp}\label{P:cvx=>dist-cvx}
If $F$ is a convex subset of $X$ with the nearest neighbor property, then the distance function $d(\cdot,F)$ is midpoint convex (and hence subharmonic).
\end{prp}
\begin{proof}
Pick any $z\in X\setminus F$. We will show that $d(\cdot,F)$ is midpoint convex at $z$. By replacing $F$ with $F-z$ we may assume without loss of generality that $z=0$.
Clearly it is possible for there to be an $x\in B_r(0)$ such that $d(x,F)\le d(0,F)$. However, by working in a normed abelian group we have a strong property at our disposal: if $x\in B_r(0)$ then $-x\in B_r(0)$. We will show that, for convex sets with the nearest neighbor property,
\[2d(0,F)\le d(x,F)+d(-x,F),\]
that is to say that $d(\cdot, F)$ is midpoint convex (and hence subharmonic).
We can find $y_1, y_2\in F$ such that $d(x,F)= ||x - y_1||$ and $d(-x,F)= ||(-x) - y_2||$. Let $y$ be a point in $F$ such that $2||y||\le ||y_1+y_2||$. Then
\begin{align*}
2d(0, F) &\le 2||y|| \\
& \le ||y_1+y_2|| \\
& = ||y_1+y_2 + x - x|| \\
& = ||(y_1-x) + (y_2+x)|| \\
& \le ||y_1-x|| +||y_2+x|| \\
& = d(x,F)+d(-x,F). \qedhere
\end{align*}
\end{proof}
\bibliographystyle{amsplain}
| {
"timestamp": "2013-04-02T02:08:19",
"yymm": "1304",
"arxiv_id": "1304.0428",
"language": "en",
"url": "https://arxiv.org/abs/1304.0428",
"abstract": "We explore the relationship between convex and subharmonic functions on discrete sets. Our principal concern is to determine the setting in which a convex function is necessarily subharmonic. We initially consider the primary notions of convexity on graphs and show that more structure is needed to establish the desired result. To that end, we consider a notion of convexity defined on lattice-like graphs generated by normed abelian groups. For this class of graphs, we are able to prove that all convex functions are subharmonic.",
"subjects": "Combinatorics (math.CO)",
"title": "Convex and subharmonic functions on graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9748211604938801,
"lm_q2_score": 0.7279754430043072,
"lm_q1q2_score": 0.7096458661605052
} |
https://arxiv.org/abs/1805.05417 | Hamiltonian systems: symbolical, numerical and graphical study | Hamiltonian dynamical systems can be studied from a variety of viewpoints. Our intention in this paper is to show some examples of usage of two Maxima packages for symbolical and numerical analysis (\texttt{pdynamics} and \texttt{poincare}, respectively), along with the set of scripts \KeTCindy\ for obtaining the \LaTeX\ code corresponding to graphical representations of Poincaré sections, including animation movies. |
\section{Introduction}
For simplicity, we will consider Hamiltonians defined on the symplectic manifold
$\mathbb{R}^{2n}$, with coordinates $(q^j,p_j)$ ($1\leq j\leq n$), endowed with
the canonical form $w=\mathrm{d}p_j\wedge \mathrm{d}q^j$, and the induced Poisson bracket
on $\mathcal{C}^\infty (\mathbb{R}^{2n})$
$$
\{f,g\}=\sum^n_{i=1}\left( \frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q^i}
-\frac{\partial f}{\partial q^i}\frac{\partial g}{\partial p_i}\right)\,,
$$
although all the results remain valid for an arbitrary symplectic manifold.
For background on Hamiltonian systems, see \cite{Cus97}.
Given a Hamiltonian system defined by the Hamiltonian function
$H\in\mathcal{C}^\infty (\mathbb{R}^{2n})$ and the Poisson bracket $\{\cdot ,\cdot\}$,
\begin{align}\label{eq1}
\dot{q}^j &= \frac{\partial H}{\partial p_j} \nonumber\\
\dot{p}_j &= -\frac{\partial H}{\partial q^j}\,,
\end{align}
two of the main goals in the theory of dynamical systems are the determination of
possible closed, stable orbits, and the computation of adiabatic invariants (of
course, taking for granted the impossibility of solving \eqref{eq1} explicitly).
Of particular interest is the case in which the Hamiltonian $H$ is a perturbation of
an integrable one, say, $H=H_0 +\sum^n_{j=1}\varepsilon^j H_j$. A widely used
procedure to study it, consists in writing the Hamiltonian in the so-called
\emph{normal form}, that is, as a formal series
\begin{equation*}
H=\sum^\infty_{j=0}\varepsilon^j N_j
\end{equation*}
where $N_0=H_0$, and each $N_j$ commutes with the unperturbed Hamiltonian,
$$
\{H_0,N_j\}=0\,.
$$
Let us recall that given an integral curve, that is, a $c(t)=(q(t),p(t))$ satisfying \eqref{eq1},
the evolution of any observable $f\in \mathcal{C}^\infty (\mathbb{R}^{2n})$ along $c$ is determined by
$$
\dot{f}(t)=\{H, f\}\,.
$$
Any smooth function such that $\{H,f\}=0$ is thus a constant of motion, also called a first integral.
Indeed, given enough first integrals one can solve the motion of the system, as the physical
trajectories are determined by the intersection of their level hypersurfaces. Unfortunately, determining
first integrals is a very difficult problem, and there are only a few systems for which enough
first integrals exist (roughly, these are the so-called integrable systems).
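As a simple illustration (a Python sketch, unrelated to the packages discussed below): for the harmonic oscillator $H=(p^2+q^2)/2$ we have $\{H,H\}=0$, so $H$ itself is a first integral, and it is indeed constant along the exact flow $q(t)=q_0\cos t+p_0\sin t$, $p(t)=p_0\cos t-q_0\sin t$:

```python
import math

# H = (p^2 + q^2)/2 is a first integral of its own flow.
q0, p0 = 0.7, -1.3
H = lambda q, p: 0.5 * (p * p + q * q)

for t in [0.0, 0.5, 1.0, 2.0, 5.0]:
    q = q0 * math.cos(t) + p0 * math.sin(t)
    p = p0 * math.cos(t) - q0 * math.sin(t)
    assert abs(H(q, p) - H(q0, p0)) < 1e-12
print("H is conserved along the flow")
```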
Notice that transforming to the normal form introduces a (possibly infinite)
family of first integrals $N_j$ which might not have been present in the original system.
These additional, spurious symmetries must be removed in order to have a system equivalent to
the original one, and this is usually done by
restricting the system to a reduced phase space through symplectic (singular)
reduction. The basic idea is to restrict the system to a particular level hypersurface and
to consider its evolution there.
A number of well-known techniques are available to do this, for instance the
ones based on Moser's theorem \cite{Mos70}: If $M_h$ denotes the hypersurface
$H_0=h$, suppose that the orbits of the Hamiltonian flow $\mathrm{Fl}^t_{X_{H_0}}$ are all periodic
with period $T$ and let $S$ be the quotient with respect to the
induced $U(1)-$action on $M_h$. Then, to every non-degenerate critical point
$\overline{p}\in S$ of the restricted averaged perturbation
$\left. N_1\right|_S=\left. \left\langle H_1\right\rangle\right|_S$
corresponds a \emph{periodic} trajectory of the full Hamiltonian vector field $X_H$, that branches
off from the orbit represented by $\overline{p}$ and has period close to $2\pi$.
When the critical points are degenerate, one can resort to the second-order normal
form to decide the stability of orbits. An example of this situation is given by the
H\'enon-Heiles Hamiltonian \cite{Cus94}. These results illustrate the importance of
being able to compute efficiently the normal form of a Hamiltonian system. In Section
\ref{sec2} we show how to do this using the Computer Algebra System (CAS) Maxima.
Another aspect related to the study of existence and stability of closed orbits is
the construction of Poincar\'e sections. They provide a direct and very intuitive
way for detecting these orbits, but their computation in closed
form is usually impossible, so numerical methods are needed. The traditional method
used for this task has been the fourth-order Runge-Kutta, but more recently methods
based on symplectic integrators (such as symplectic Euler, St\"ormer-Verlet, symplectic
RK, etc.) are also intensively used, see \cite{BC16} for a recent review.
The choice of one method or another depends very
much on the properties of the system under consideration. In Section \ref{sec3} we will use the RK method, but symplectic methods can be included by substituting the \texttt{rkfun} command in the code of
\texttt{poincare} with \texttt{symplectic\_ode}, recently included in Maxima
(from version 5.39.1 onward). In any case, one of the main
goals is to obtain a clean picture of the phase-space portrait of the system, something
that can be challenging for CASs, whose graphical
output is not very sophisticated in many cases. To deal with this issue we present in
Section \ref{sec4} the set of CindyScript macros called \KeTCindy\,,
which parse the output of Maxima through
the Dynamical Geometry Software (DGS) Cinderella and return the \LaTeX\ code of the
corresponding graphics, that can be included in any document even in the form of an animation.
The data for these graphics are actually codes of TPIC specials for \LaTeX\,, or
pict2e commands in the case of pdf\LaTeX\,, so they can be inserted in scientific
documentation with great flexibility (see \cite{icms2016,cas2016,iccsa2017,castr2017}
for installation and examples of use).
The Maxima packages \texttt{pdynamics} (short for `Poisson Dynamics') and
\texttt{poincare} can be downloaded from \url{http://galia.fc.uaslp.mx/~jvallejo/pdynamics.zip}
and \url{http://galia.fc.uaslp.mx/~jvallejo/poincare.mac} respectively\footnote{There is a
documentation file inside \texttt{pdynamics.zip}, and the documentation for \texttt{poincare.mac}
can be found at \url{http://galia.fc.uaslp.mx/~jvallejo/PoincareDocumentation.pdf}. Both files
contain detailed instructions about the installation.}. The
\KeTCindy\ package is available at \url{http://ketpic.com/?lang=english}.
\section{Symbolic study of Hamiltonian systems: normal forms}\label{sec2}
Given a smooth vector field in $\mathbb{R}^{m}$ (although what follows is valid in an
arbitrary manifold) its flow is a mapping $\mathrm{Fl}_X:\mathbb{R}^{m}\to\mathbb{R}^{m}$
defined by
$$
\mathrm{Fl}_X(t,p)\doteq\mathrm{Fl}^t_X(p)\doteq c_p(t)\,,
$$
where $c_p$ is the integral curve of $X$ that passes through $p\in\mathbb{R}^m$ at $t=0$
(i.e., $c_p(0)=p$). When $m=2n$, there is a canonical symplectic form
$$
\Omega = \mathrm{d}p_1\wedge\mathrm{d}q_1+\cdots +\mathrm{d}p_n\wedge\mathrm{d}q_n\,.
$$
Any Hamiltonian $H\in\mathcal{C}^\infty (\mathbb{R}^{2n})$, has an associated vector field
$X_H$ defined by the condition $i_{X_H}\Omega =-\mathrm{d}H$. In local coordinates
$(q_i,p_i)$ it has the expression
$$
X_H =\left( \frac{\partial H}{\partial p_1},-\frac{\partial H}{\partial q_1},\ldots,
\frac{\partial H}{\partial p_n},-\frac{\partial H}{\partial q_n}\right)\,.
$$
Suppose now that $X$ is the generator of an $\mathbb{S}^1-$action, so the flow $\mathrm{Fl}^t_X$
is periodic in the variable $t$. This property can be used to put $H$ in normal form (for details,
see \cite{AVV13}). To this end, it is essential to define two averaging operators acting on
observables. The first one is denoted as $\left\langle\cdot\right\rangle$ and is given by
integrating the pullback
$$
\left\langle g\right\rangle\doteq \frac{1}{2\pi}\int^{2\pi}_0(\mathrm{Fl}^t_X)^*g\,\mathrm{d}t\,,
$$
for any observable $g\in\mathcal{C}^\infty (\mathbb{R}^{2n})$. The second operator, denoted
$\mathcal{S}$, is defined as
$$
\mathcal{S}(g)\doteq\frac{1}{2\pi}\int^{2\pi}_0(t-\pi)(\mathrm{Fl}^t_X)^*g\,\mathrm{d}t\,.
$$
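As a sanity check of the first operator: for $H_0=(p^2+q^2)/2$ (one degree of freedom, whose flow is a rotation of period $2\pi$) and $g=q^2$, the pullback along the flow is $(q\cos t+p\sin t)^2$, so $\left\langle q^2\right\rangle=(q^2+p^2)/2$. A numerical verification (illustrative Python sketch, independent of the Maxima packages):

```python
import math

# Numerical average of the pullback of g(q, p) = q^2 along the rotation
# flow of H0 = (p^2 + q^2)/2 over one period.
def average_g(q, p, n=100000):
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        total += (q * math.cos(t) + p * math.sin(t)) ** 2
    return total / n

q, p = 0.6, -1.1
print(abs(average_g(q, p) - 0.5 * (q * q + p * p)) < 1e-6)  # True
```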
In the particular case of a perturbed Hamiltonian, of the form $H=H_0 +\varepsilon H_1
+\frac{\varepsilon^2}{2}H_2 +\cdots$, if the non-perturbed part $H_0$ generates an $\mathbb{S}^1-$action in such a way that its flow is periodic with frequency function
$w$, it can be proved (see \cite{AVV13}) that its second-order normal form is
$$
N=H_0 +\varepsilon\left\langle H_1\right\rangle +
\frac{\varepsilon^2}{2}\left(\left\langle H_2\right\rangle +
\left\langle\left\lbrace\mathcal{S}\left(\frac{H_1}{w}\right) ,H_1\right\rbrace\right\rangle
\right)\,.
$$
There are other representations of the normal form (it must be stressed that it is \emph{not}
unique), but this one has the particular features of being global (not depending on action-angle
variables) and of being particularly well-suited for symbolic computation. Let us illustrate the use of
the \texttt{pdynamics} Maxima package by considering the example of the Pais-Uhlenbeck oscillator.
This system is a toy model of a field theory defined by a Lagrangian depending on higher-order
derivatives. These Lagrangians are believed to lead to perturbatively renormalizable
theories, where the infinities appearing in the perturbation series for the field equations can
be cured through some well-defined regularization procedure. The corresponding Hamiltonian is constructed through a higher-order analog of the Legendre transformation, called the
Ostrogradskii formalism \cite{Os50}. After some suitable transformations (see \cite{Pav13})
the Hamiltonian can be expressed
as the \emph{difference} of two harmonic oscillators with respective frequencies $w_1$, $w_2$.
Adding an interaction term in the form of a homogeneous polynomial results in the Hamiltonian
\begin{equation}\label{puham}
H=\frac{1}{2}(p^2_1 +w^2_1q^2_1)-\frac{1}{2}(p^2_2 +w^2_2q^2_2)+\frac{\lambda}{4}(q_1+q_2)^4\,.
\end{equation}
This can be considered as a perturbed system of the form $H=H_0+\lambda H_1$;
let us study it symbolically. We use the following sequence of commands in Maxima:
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i1)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
load(pdynamics)$\end{verbatim}
\end{minipage}
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i2)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
declare(w1,integer)$\end{verbatim}
\end{minipage}
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i3)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
assume(w1>0)$\end{verbatim}
\end{minipage}
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i4)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
declare(w2,integer)$\end{verbatim}
\end{minipage}
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i5)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
assume(w2>0)$\end{verbatim}
\end{minipage}
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i6)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
H0(q1,p1,q2,p2):=(p1^2+w1^2*q1^2)/2-(p2^2+w2^2*q2^2)/2$\end{verbatim}
\end{minipage}
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i7)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
H1(q1,p1,q2,p2):=(q1+q2)^4/4$\end{verbatim}
\end{minipage}
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i8)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
H2(q1,p1,q2,p2):=0$\end{verbatim}
\end{minipage}\\
Up to here, we have just defined the parameters of the system (the frequencies $w_1$, $w_2$, and
the subhamiltonians $H_i$). Let us check that the Hamiltonian flow of the non-perturbed part $H_0$ is
periodic by explicitly computing it (we have slightly edited the output of Maxima by writing it as a
column matrix, to make it more readable):
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i9)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
phamflow(H0);\end{verbatim}
\end{minipage}
\[\tag{\% o9}
\begin{pmatrix}
\frac{p_1\, \sin{\left( t\, w_1\right) }}{w_1}+q_1\, \cos{\left( t\, w_1\right) }\\
p_1\, \cos{\left( t\, w_1\right) }-q_1\, w_1\, \sin{\left( t\, w_1\right) }\\
q_2\, \cos{\left( t\, w_2\right) }-\frac{p_2\, \sin{\left( t\,w_2\right) }}{w_2}\\
q_2\, w_2\, \sin{\left( t\, w_2\right) }+p_2\, \cos{\left( t\, w_2\right) }
\end{pmatrix}
\] \\
It is clear that the flow is periodic with period $T=2\pi w_1w_2$. Thus, we define the
frequency function as
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i10)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
u(q1,p1,q2,p2):=1/(w1*w2)$\end{verbatim}
\end{minipage}\\
Finally, we can compute $N_1=\left\langle H_1\right\rangle$:
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i11)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
phamaverage(H1,H0,u(q1,p1,q2,p2));\end{verbatim}
\end{minipage}
\begin{align*}
\tag{\% o11}
\frac{1}{32 {w_1^{4}}\, {w_2^{4}}}\left[\left( \left( 3 {q_2^{4}}+12 {q_1^{2}}\, {q_2^{2}}+3 {q_1^{4}}\right) \, {w_1^{4}}+\left( 12 {p_1^{2}}\, {q_2^{2}}+6 {p_1^{2}}\, {q_1^{2}}\right) \, {w_1^{2}}+3 {p_1^{4}}\right) \, {w_2^{4}}\right.\\
\left. +\left( \left( 6 {p_2^{2}}\, {q_2^{2}}+12 {p_2^{2}}\, {q_1^{2}}\right) \, {w_1^{4}}+12 {p_1^{2}}\, {p_2^{2}}\, {w_1^{2}}\right) \, {w_2^{2}}+3 {p_2^{4}}\, {w_1^{4}}\right]
\end{align*}
The second-order normal form can be computed along similar lines but, as one can guess, the
expressions become very cumbersome and not very illuminating
(see \eqref{o15} below). Indeed, it is customary to
simplify these expressions by rewriting them in terms of the so-called Hopf variables. The
idea behind these variables is the following: in the case in which the normal subhamiltonians
$N_i$ are polynomials, the fact that they commute with $N_0=H_0$ means that they are invariant under the smooth $\mathbb{S}^1-$action of $X_{H_0}$. The space of smooth invariant functions
is finitely generated (this is a generalization to the smooth case of a classical result of
Hilbert dealing with algebraic invariants, called the Schwarz theorem \cite{Sch75}), and a set of functional generators is precisely given by the Hopf polynomials, that can be considered as new variables.
In other words, any smooth invariant function can be expressed as a smooth function of the Hopf variables. For the Pais-Uhlenbeck oscillator with resonance $1:2$ (that is, when $w_1=1$ and
$w_2=2$), these Hopf invariants can be readily computed \cite{AVV17} and they turn out to be
\begin{align}\label{rhos}
\rho_1 =& q^2_1+p^2_1\nonumber \\
\rho_2=& 4 q^2_2+p^2_2\nonumber \\
\rho_3 =& p_2(p^2_1-q^2_1)-4p_1q_1q_2 \\
\rho_4 =& 2q_2(p^2_1-q^2_1)+2q_1p_1p_2\nonumber \,.
\end{align}
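A quick numerical check (plain Python, independent of the Maxima packages) confirms that these four polynomials are constant along the unperturbed flow with $w_1=1$, $w_2=2$, computed in (\% o9) above:

```python
import math

# Invariance of the Hopf polynomials rho_1, ..., rho_4 along the flow of
# K0 = (p1^2 + q1^2)/2 - (p2^2 + 4 q2^2)/2  (the 1:2 resonance).
def flow(t, q1, p1, q2, p2):
    c1, s1 = math.cos(t), math.sin(t)
    c2, s2 = math.cos(2 * t), math.sin(2 * t)
    return (p1 * s1 + q1 * c1, p1 * c1 - q1 * s1,
            q2 * c2 - p2 * s2 / 2, 2 * q2 * s2 + p2 * c2)

def hopf(q1, p1, q2, p2):
    return (q1**2 + p1**2,
            4 * q2**2 + p2**2,
            p2 * (p1**2 - q1**2) - 4 * p1 * q1 * q2,
            2 * q2 * (p1**2 - q1**2) + 2 * q1 * p1 * p2)

x0 = (0.3, -0.8, 1.1, 0.4)
r0 = hopf(*x0)
for t in [0.1, 0.7, 2.0, 5.3]:
    assert all(abs(a - b) < 1e-9
               for a, b in zip(r0, hopf(*flow(t, *x0))))
print("rho_1, ..., rho_4 are invariant along the flow")
```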
We can compute $N_2$ by extracting the coefficient of $\lambda^2$ in the second-order normal form
of $H$. The following commands show how to study this resonance, defining a function
\texttt{phopf6res12} (not contained in the \texttt{pdynamics} package) adapted to this case, whose
purpose is to express everything in terms of the variables \eqref{rhos}. First, we define the
Hamiltonian:
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i12)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
K0(q1,p1,q2,p2):=(p1^2+q1^2)/2-(p2^2+4*q2^2)/2$\end{verbatim}
\end{minipage}
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i13)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
K1(q1,p1,q2,p2):=(q1+q2)^4/4$\end{verbatim}
\end{minipage}
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i14)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
K2(q1,p1,q2,p2):=0$\end{verbatim}
\end{minipage}\\
\noindent and then compute the second-order normal form
(here and below, the Maxima output has been slightly edited in order to fit the page):
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i15)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
pnormal2(K0,K1,K2
\end{minipage}
\begin{align*}
& \frac{\lambda^2}{4587520} \left(255168 q_2^6+\left( 1184256 q_1^2+191376 p_2^2+1184256 p_1^2\right)\,q_2^4\right. \\
&+ \left. \left( 225792 {q_1^{4}}+\left( 592128 {p_2^{2}}+4580352 {p_1^{2}}\right) \, {q_1^{2}}+47844 {p_2^{4}}+592128 {p_1^{2}}\, {p_2^{2}}+225792 {p_1^{4}}\right) \, {q_2^{2}}\right.\\
&+\left.\left( 2064384 p_1\, p_2\, {q_1^{3}}-2064384 {p_1^{3}}\, p_2\, q_1\right) \, q_2+48384 {q_1^{6}}+\left( 314496 {p_2^{2}}+145152 {p_1^{2}}\right) \, {q_1^{4}}\right.\\
&+\left.\left( 74016 {p_2^{4}}-403200 {p_1^{2}}\, {p_2^{2}}+145152 {p_1^{4}}\right) \, {q_1^{2}}+3987 {p_2^{6}}+74016 {p_1^{2}}\, {p_2^{4}}+314496 {p_1^{4}}\, {p_2^{2}}+48384 {p_1^{6}}\right) \\
&+\frac{\lambda}{512} \left( 48 {q_2^{4}}+\left( 192 {q_1^{2}}+24 {p_2^{2}}+192 {p_1^{2}}\right) \, {q_2^{2}}+48 {q_1^{4}}+\left( 48 {p_2^{2}}+96 {p_1^{2}}\right) \, {q_1^{2}}+3 {p_2^{4}}+48 {p_1^{2}}\, {p_2^{2}}+48 {p_1^{4}}\right) \\
&-\frac{4 {q_2^{2}}+{p_2^{2}}}{2}+\frac{{q_1^{2}}+{p_1^{2}}}{2}\mbox{}\tag{\% o15} \label{o15}
\end{align*}
\noindent Now, the term $N_2$ can be easily extracted:
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i16)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
expand(coeff
\end{minipage}
\begin{align*}
&\frac{3987 {{q_2}^{6}}}{71680}+\frac{2313 {{q_1}^{2}}\, {{q_2}^{4}}}{8960}+\frac{11961 {{p_2}^{2}}\, {{q_2}^{4}}}{286720}+\frac{2313 {{p_1}^{2}}\, {{q_2}^{4}}}{8960}+\frac{63 {{q_1}^{4}}\, {{q_2}^{2}}}{1280}+\frac{2313 {{p_2}^{2}}\, {{q_1}^{2}}\, {{q_2}^{2}}}{17920}\\
&+\frac{639 {{p_1}^{2}}\, {{q_1}^{2}}\, {{q_2}^{2}}}{640}+\frac{11961 {{p_2}^{4}}\, {{q_2}^{2}}}{1146880}+\frac{2313 {{p_1}^{2}}\, {{p_2}^{2}}\, {{q_2}^{2}}}{17920}+\frac{63 {{p_1}^{4}}\, {{q_2}^{2}}}{1280}+\frac{9 p_1\, p_2\, {{q_1}^{3}}\, q_2}{20}\\
&-\frac{9 {{p_1}^{3}}\, p_2\, q_1\, q_2}{20}+\frac{27 {{q_1}^{6}}}{2560}+\frac{351 {{p_2}^{2}}\, {{q_1}^{4}}}{5120}+\frac{81 {{p_1}^{2}}\, {{q_1}^{4}}}{2560}+\frac{2313 {{p_2}^{4}}\, {{q_1}^{2}}}{143360}-\frac{45 {{p_1}^{2}}\, {{p_2}^{2}}\, {{q_1}^{2}}}{512}\\
&+\frac{81 {{p_1}^{4}}\, {{q_1}^{2}}}{2560}+\frac{3987 {{p_2}^{6}}}{4587520}+\frac{2313 {{p_1}^{2}}\, {{p_2}^{4}}}{143360}+\frac{351 {{p_1}^{4}}\, {{p_2}^{2}}}{5120}+\frac{27 {{p_1}^{6}}}{2560}\mbox{}\tag{\% o16}
\end{align*}
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i17)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
define(N2(q1,p1,q2,p2)
\end{minipage}\\
\noindent And, finally, the reduction to Hopf variables can be achieved as follows:
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i18)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
phopf6res12(expr):=block(
[aux,list_coeff,eq,eqs,W,Wp,U,Up,a,l,
w:[q1^2+p1^2,4*q2^2+p2^2],
u:[-4*p1*q1*q2-p2*q1^2+p1^2*p2,
-2*q1^2*q2+2*p1^2*q2+2*p1*p2*q1]],
W:makelist(w[1]^i*w[2]^(3-i),i,makelist(j,j,0,3)),
Wp:makelist
U:makelist(u[1]^i*u[2]^(2-i),i,makelist(j,j,0,2)),
Up:makelist
a:makelist(a[k],k,1,length(W)+length(U)),
aux:facsum(expandwrt(expr-sum(a[i]*W[i],i,1,length(W))-
sum(a[i+length(W)]*U[i],i,1,length(U)),
q1,p1,q2,p2),
q1,p1,q2,p2),
list_coeff:coeffs(aux,q1,p1,q2,p2),
l:length(list_coeff),
for j:2 thru l do (k:j-1, eq[k]:first(list_coeff[j])),
eqs:makelist(eq[k],k,1,l-1),
subst(first(algsys(eqs,a)),
sum(a[i]*Wp[i],i,1,length(Wp))
+sum(a[i+length(W)]*Up[i],i,1,length(U)))
)$\end{verbatim}
\end{minipage}
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i19)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
phopf6res12(N2(q1,p1,q2,p2));\end{verbatim}
\end{minipage}
\begin{align*}
\tag{\% o19}
&-\frac{{{\rho }_{4}^{2}}\, \left( 5120 \% r_1-63\right) }{5120}-\frac{{{\rho }_{3}^{2}}\, \left( 5120 \% r_1-351\right) }{5120}+{{\rho }_{1}^{2}}\, {{\rho }_2}\,\% r_1 \\
&+\frac{3987 {{\rho }_{2}^{3}}}{4587520}+\frac{2313 {{\rho }_1}\, {{\rho }_{2}^{2}}}{143360}+\frac{27 {{\rho }_{1}^{3}}}{2560}\mbox{}
\end{align*}
\noindent
\begin{minipage}[t]{8ex}\bf
(\% i20)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
subst
\end{minipage}
\[\displaystyle
\tag{\% o20}
\frac{63 {{\rho }_{4}^{2}}}{5120}+\frac{351 {{\rho }_{3}^{2}}}{5120}+\frac{3987 {{\rho }_{2}^{3}}}{4587520}+\frac{2313 {{\rho }_1}\, {{\rho }_{2}^{2}}}{143360}+\frac{27 {{\rho }_{1}^{3}}}{2560}\mbox{}
\]\\
Of course, a similar analysis can be done for $N_1$. The resulting expression appears in
\cite{AVV17} (see equation (16) in that paper), applied to the determination of
the existence of closed, stable orbits in the Pais-Uhlenbeck oscillator.
\section{Numerical study: Poincar\'e sections}\label{sec3}
Given a Hamiltonian $H\in\mathcal{C}^\infty (\mathbb{R}^{2n})$, the Maxima package
\texttt{poincare} provides several functions to study its Poincar\'e sections.
The functionality of this package, and even the syntax, is similar to the
package \texttt{DEtools} in Maple\texttrademark\, but it offers two advantages: first,
it uses free (both as in `freedom' and as in `free beer') software and, second,
it is almost three times faster, thus being a serious competitor for long
computations. Moreover, as we will see in Section \ref{sec4}, in conjunction with
\KeTCindy, animation movies describing the evolution of the system in phase space can
be easily constructed.
It should be stressed that Maxima is a CAS, not a language intended for numerical computations such
as Octave. Thus, speed in computations is not one of its goals, nor was it designed to achieve it.
However, the fact that LISP is its underlying programming language allows for the possibility of
writing specialized routines using declared variables, which can then be compiled. The package
\texttt{poincare} uses a compiled version of the Runge-Kutta method, called \texttt{rkfun},
developed by Richard Fateman (\url{http://people.eecs.berkeley.edu/~fateman/lisp/rkfun.lisp}),
and this is the ultimate reason for the gain in speed.
The function \texttt{hameqs} constructs the Hamiltonian equations for a given Hamiltonian $H(q_1,p_1,\ldots ,q_n,p_n)$. Any
names can be used for the variables, but they must be given in pairs
``coordinate, conjugate momentum''. A good choice (used internally) is
$(q1,p1,...,qn,pn)$. A name must be
provided for the components of the Hamiltonian vector field
$$
X_H(q_1,p_1,...,q_n,p_n)=\sum^n_{i=1}\left(
\frac{\partial H}{\partial p_i}\frac{\partial}{\partial q_i}
-\frac{\partial H}{\partial q_i}\frac{\partial}{\partial p_i}
\right)\,.
$$
Once a name, say $XH$, is chosen, the components of the Hamiltonian vector field
will be globally defined functions $XHj$ with $1\leq j\leq 2n$, where $n$ is the
number of degrees of freedom, and will be available to Maxima. Notice that, for
instance,
$$
XH1(t,q_1,p_1,...,q_n,p_n)=\frac{\partial H}{\partial p_1}
$$
and
$$
XH2(t,q_1,p_1,...,q_n,p_n)=-\frac{\partial H}{\partial q_1}\,.
$$
Although we will work with \emph{autonomous}
Hamiltonian systems, the components $XHj$ returned by this command will
have $(t,q_1,p_1,...,q_n,p_n)$ as arguments. This is necessary
to maintain consistency with the \texttt{rkfun} routine (implementing the
Runge-Kutta method), which can work
with both autonomous and non-autonomous systems.
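What \texttt{hameqs} does symbolically can be mimicked numerically: the components of $X_H$ are $(\partial H/\partial p_i,\,-\partial H/\partial q_i)$, approximated below by central finite differences. This Python sketch is purely illustrative (the function names are ours, not the package's); note the dummy argument $t$, kept for the reason just explained.

```python
def hamiltonian_vector_field(H, n, h=1e-6):
    """Return XH(t, z), z = (q1, p1, ..., qn, pn), whose components are
    (dH/dp_i, -dH/dq_i); the partial derivatives of H are approximated
    by central finite differences of step h."""
    def dH(z, i):
        zp, zm = list(z), list(z)
        zp[i] += h
        zm[i] -= h
        return (H(*zp) - H(*zm)) / (2 * h)

    def XH(t, z):
        out = []
        for i in range(n):
            out.append(dH(z, 2 * i + 1))    # dq_i/dt =  dH/dp_i
            out.append(-dH(z, 2 * i))       # dp_i/dt = -dH/dq_i
        return out

    return XH

# A pair of uncoupled harmonic oscillators with frequencies w1, w2:
w1, w2 = 8, 3
H = lambda x, v, q, p: w1 * (x**2 + v**2) / 2 + w2 * (q**2 + p**2) / 2
XH = hamiltonian_vector_field(H, 2)
```

For this $H$ the components evaluate to $(w_1 v,\,-w_1 x,\,w_2 p,\,-w_2 q)$, the same expressions \texttt{hameqs} returns symbolically.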
The function \texttt{poincare3d} constructs the projection of the Hamiltonian
orbits along a certain coordinate which is given as an argument \texttt{coord}.
Other arguments are: a list
of initial conditions $\mathtt{inicond} =[q_1(0),p_1(0),\ldots ,q_n(0),p_n(0)]$, and
a list characterizing the time domain $\mathtt{timestep}=[t,t_{ini},t_{fin},step]$.
Thus, the syntax is \texttt{poincare3d(H,name,inicond,timestep,coord)}. The package is loaded
with
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i1)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
batch("poincare.mac")$
\end{verbatim}
\end{minipage}
As a simple example, let us construct the $3D-$surface of a couple of harmonic oscillators
(this is based on Chapter 9 of \cite{Lyn01}, where a similar discussion using Maple\texttrademark\ is
presented):
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i22)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
H(x,v,q,p):=w1*(x^2+v^2)/2+w2*(q^2+p^2)/2$
\end{verbatim}
\end{minipage}
\noindent The corresponding Hamiltonian equations are:
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i23)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
hameqs(H,XH);
\end{verbatim}
\end{minipage}
\[\displaystyle
\tag{\%{}o23}\label{o23}
[v\,\mathit{w1},-\mathit{w1}x,p\,\mathit{w2},-q\,\mathit{w2}]\mbox{}
\]
\noindent and we fix the values of the frequencies $w1=8$, $w2=3$, so that they are commensurable:
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i24)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
[w1,w2]:[8,3]$
\end{verbatim}
\end{minipage}
Next we plot the $3D$ Poincar\'e surface by projecting along the $p$ coordinate
(thus, the resulting graphics has $(x,v,q)$ coordinates):
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i25)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
data1:poincare3d(H,XH,[0.3,0.5,0,1.5],[t,0,40,0.01],p)$
\end{verbatim}
\end{minipage}
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i26)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
draw3d(title="Poincare section in 3D",
dimensions=[350,500],view=[85,30],
xlabel="x",ylabel="v",zlabel="q",
xtics=1,ytics=1,
surface_hide=true,color="light-blue",
explicit(0,x,-1.35,1.35,y,-1.35,1.35),
point_size=0,points_joined=true,color=black,line_width=1,
points(data1),
user_preamble="set xyplane at -1.8",color="light-blue",
explicit(-1.8,u,-1.5,1.35,v,-1.35,1.35),
point_size=1,point_type=filled_circle,color=red,points_joined=false,
points([[-0.56,0,-1.78],[0.28,-0.52,-1.78],[0.3,0.48,-1.78],
[-0.56,0,0],[0.28,-0.52,0],[0.3,0.48,0]]));
\end{verbatim}
\end{minipage}
\[\displaystyle
\tag{\%{}t26,\%{}o26}\label{t26}
\includegraphics[scale=0.37]{image01.png}
\]
The function \texttt{poincare2d} constructs the surface of section selected by a list
of arguments of the form $\mathtt{scene}=[q0,c,qi,qj]$, that is, the surface
$q0=c$ on which the coordinates $[qi,qj]$ are shown.
The method used in the computation of the Poincar\'e surface is the one described in
\cite{Cheb96}: we select a set of initial conditions,
follow the corresponding orbit numerically, and detect where it has crossed the surface $q0=c$ by
looking for changes of sign in the list of values of this coordinate minus $c$.
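This crossing-detection strategy can be sketched in a few lines of Python (an illustrative reimplementation with our own function names; here the crossing time is refined by linear interpolation between the two samples that bracket it):

```python
import math

def section_crossings(times, coords, c=0.0):
    """Detect crossings of the surface coord = c from sampled values
    coords[k] at times[k]: look for sign changes of coords[k] - c and
    refine each crossing time by linear interpolation."""
    crossings = []
    for k in range(len(coords) - 1):
        a, b = coords[k] - c, coords[k + 1] - c
        if a * b < 0:                        # sign change in [t_k, t_{k+1}]
            frac = a / (a - b)               # linear interpolation weight
            crossings.append(times[k] + frac * (times[k + 1] - times[k]))
    return crossings

# q(t) = 1.5*sin(3 t) (the second oscillator of the previous example)
# vanishes at t = k*pi/3; sample it on a grid and recover those times.
ts = [0.005 + 0.01 * k for k in range(500)]
qs = [1.5 * math.sin(3 * t) for t in ts]
cs = section_crossings(ts, qs)
```

At each detected crossing one would then record the remaining coordinates, e.g. $(x,v)$, to build the section.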
In the previous example, we plotted the $3D-$Poincar\'e surface of a couple of
commensurable oscillators, and we included a $2D-$section (corresponding to $q=0$)
showing that the periodicity
of the system is reflected in the discrete character of the $2D-$Poincar\'e map
(only three points appear in it). Now we can check this directly with \texttt{poincare2d} (notice the selection of the $q=0$ section in the last argument,
\texttt{[q,0,x,v]}):
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i27)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
data2:poincare2d(H,XH,[0.3,0.5,0,1.5],[t,0,40,0.01],[q,0,x,v])$
\end{verbatim}
\end{minipage}
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i28)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
draw2d(title="Poincare section in 2D",
xlabel="x",ylabel="v",
xtics=0.2,
point_size=1,point_type=7,color=red,
points_joined=false,proportional_axes=xy,
points(data2));
\end{verbatim}
\end{minipage}
\[\displaystyle
\tag{\%{}t28,\%{}o28}\label{t28}
\includegraphics[scale=0.37]{PoincareDocumentation_2.png}\mbox{}
\]
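That only three points appear can also be verified by hand, since uncoupled harmonic oscillators have a closed-form flow: with $w1=8$, $w2=3$ (the values set in the session) and the initial condition $[0.3,0.5,0,1.5]$, the coordinate $q(t)=1.5\sin(3t)$ vanishes exactly at $t=k\pi/3$, and the phase $8t$ of the first oscillator takes only three distinct values modulo $2\pi$ at those times. A Python check, independent of the numerical integrator:

```python
import math

w1, w2 = 8, 3
x0, v0, q0, p0 = 0.3, 0.5, 0.0, 1.5    # the initial condition used above

# Exact flow: each (coordinate, momentum) pair rotates with its own
# frequency.  Since q0 = 0, q(t) = p0*sin(w2*t) vanishes at t = k*pi/w2.
def xv(t):
    return (x0 * math.cos(w1 * t) + v0 * math.sin(w1 * t),
            v0 * math.cos(w1 * t) - x0 * math.sin(w1 * t))

# Collect the section points (x, v) at the crossing times, rounded so
# that numerically equal points coincide:
section = {tuple(round(u, 6) for u in xv(k * math.pi / w2)) for k in range(60)}
```

Since $8k\pi/3 \bmod 2\pi$ cycles through three values as $k$ runs over the integers, the set above has exactly three elements.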
For a different example, let us consider the case of an elastic
pendulum \cite{Car94}, with Hamiltonian
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i29)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
H(q1,p1,q2,p2):=(p1^2+p2^2)/2+(q1^2+q2^2)/2-0.75*q1^2*(1+q2)/2$
\end{verbatim}
\end{minipage}
We can obtain an analytic expression for $p_2$ once the energy $E$ and the initial values of $(q_1,p_1,q_2)$ are known. Here we work with $E=0{.}00875$:
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i30)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
solve(H(q1,p1,q2,p2)=0.00875,p2);
\end{verbatim}
\end{minipage}
\begin{align*}
[p_2=-\frac{\sqrt{-4{{q_2}^{2}}+3{{q_1}^{2}}\,
q_2-{{q_1}^{2}}-4{{p_1}^{2}}+0.07}}{2},
p_2=\frac{\sqrt{-4{{q_2}^{2}}+3{{q_1}^{2}}\,
q_2-{{q_1}^{2}}-4{{p_1}^{2}}+0.07}}{2}]
\end{align*}\[\displaystyle
\tag{\%{}o30}\label{o30}\]
Let us define the corresponding functions:
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i31)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
define(f(q1,p1,q2),rhs(first(%o30)))$
\end{verbatim}
\end{minipage}
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i32)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
define(g(q1,p1,q2),rhs(second(%o30)))$
\end{verbatim}
\end{minipage}
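The roots returned by \texttt{solve} can be cross-checked numerically. A Python sketch (the function names are ours; note that the constant $0.07$ in the radicand equals $8E$, and the initial data $(0.15,\,0.05,\,0.001)$ correspond to $j=5$ in the loop below):

```python
import math

E = 0.00875

def h(q1, p1, q2, p2):
    """The elastic-pendulum Hamiltonian defined above."""
    return (p1**2 + p2**2) / 2 + (q1**2 + q2**2) / 2 \
        - 0.75 * q1**2 * (1 + q2) / 2

def p2_plus(q1, p1, q2):
    """Positive root of h = E; the radicand constant 0.07 equals 8*E."""
    return math.sqrt(-4 * q2**2 + 3 * q1**2 * q2
                     - q1**2 - 4 * p1**2 + 8 * E) / 2

p2 = p2_plus(0.15, 0.05, 0.001)
```

Substituting the root back into the Hamiltonian recovers the prescribed energy to machine precision.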
Now we compute the $q_2=0$ surface of section for a sufficient number of initial conditions $(q_1,p_1,q_2)$ ($10$ different sets), using the positive value of $p_2$, and join all the resulting points in a
big list of $2D-$coordinates called \texttt{points1}:
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i33)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
for j:1 thru 10 do data1[j]:poincare2d(H,XH,
[0.15,j/100,0.001,g(0.15,j/100,0.001)],[t,0,1000,0.01],[q2,0,q1,p1])$
\end{verbatim}
\end{minipage}
For future reference, here is the time invested in the computation:
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i34)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
time(%o33);
\end{verbatim}
\end{minipage}
\[\displaystyle
\tag{\%{}o61}\label{o61}
[17.222]\mbox{}
\]
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i35)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
points1:xreduce(append,create_list(data1[j],j,makelist(k,k,1,10)))$
\end{verbatim}
\end{minipage}
The following figure is the plot of these points on the Poincar\'e surface:
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i36)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
draw2d(title="Poincare sections E=0.00875",
xlabel="q1",ylabel="p1",
point_type=7,point_size=0.1,
points(points1)
);
\end{verbatim}
\end{minipage}
\[\displaystyle
\tag{\%{}t36,\%{}o36}\label{t36}
\includegraphics[width=.67\linewidth,height=.52\textheight,keepaspectratio]{PoincareDocumentation_11.png}\mbox{}\]
In order to complete the section, we must select another set of initial conditions, whose orbits pass through the empty region at the center.
This time we use negative values of the momentum $p_2$:
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i37)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
for j:1 thru 10 do data2[j]:poincare2d(H,XH,
[2*j/100,0,j/100+0.0025,f(2*j/100,0,j/100+0.0025)],[t,0,1000,0.01],
[q2,0,q1,p1])$
\end{verbatim}
\end{minipage}
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i38)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
time(%o37);
\end{verbatim}
\end{minipage}
\[\displaystyle
\tag{\%{}o38}\label{o38}
[17.369]\mbox{}
\]
The $2D$ coordinates of the corresponding points are stored in the list \texttt{points2}
and then plotted with the aid of the \texttt{draw2d} command, which admits many optional
arguments to fine-tune the appearance of the figure. Here, we specify that points be represented by
filled circles (\texttt{point\_type=7}) with a given radius (\texttt{point\_size=0.1}):
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i39)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
points2:xreduce(append,create_list(data2[j],j,makelist(k,k,1,10)))$
\end{verbatim}
\end{minipage}
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i40)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
draw2d(title="Poincare sections E=0.00875",
xlabel="q1",ylabel="p1",
point_type=7,point_size=0.1,
points(points2)
);
\end{verbatim}
\end{minipage}
\[\displaystyle
\tag{\%{}t40,\%{}o40}\label{t40}
\includegraphics[width=.67\linewidth,height=.52\textheight,keepaspectratio]{PoincareDocumentation_12.png}\mbox{}\]
The full Poincar\'e section is obtained by joining both sets of points. The Maxima command
\texttt{append} does exactly that when two lists are given:
\noindent
\begin{minipage}[t]{8ex}\bf
(\%{}i41)
\end{minipage}
\begin{minipage}[t]{\textwidth}\begin{verbatim}
draw2d(title="Poincare sections E=0.00875",
xlabel="q1",ylabel="p1",
point_type=7,point_size=0.1,
points(append(points1,points2))
);
\end{verbatim}
\end{minipage}
\[\displaystyle
\tag{\%{}t41}\label{t41}
\includegraphics[width=.75\linewidth,height=.60\textheight,keepaspectratio]{PoincareDocumentation_13.png}\mbox{}\]
We have done our computations with a fixed value for the energy
$E=0{.}00875$. The same steps can be followed to consider other
values. The collection of Poincar\'e sections so obtained is a
valuable tool to visualize the dynamics of the system.
In Section \ref{sec4} we will see how to put all these
sections together in the form of a movie animating the evolution
in phase space.
\section{Graphical study with KeTCindy}\label{sec4}
\ketpic{} is a macro package, developed by one of the authors (ST), that drives several pieces of
mathematical software to produce high-quality figures to be inserted into \LaTeX\ documents.
It can use the DGS Cinderella
as a graphical interface through \ketcindy{}, another set of macros which acts as an interface
between them.
The reason for choosing Cinderella is that it has its own scripting language, CindyScript,
featuring an easy-to-understand syntax.
Using CindyScript, we have added a layer to \ketcindy{} to call other software such as Maxima,
\textbf{\textsf{R}}, Fricas, Risa/Asir or C. We refer to previous works for more details on
the CindyScript syntax \cite{icms2016,cas2016,iccsa2017,castr2017}; here we proceed in a more
direct way, explaining how \ketcindy\ calls Maxima
by way of an example devoted to finding the indefinite and definite integrals of a function.
For this, the following code must be inserted into the script editor of Cinderella:
\begin{verbatim}
cmdL=[
"f(x):=sin(x)+cos(2*x)",[],
"ans1:integrate",["f(x)","x"],
"ans2:integrate",["f(x)","x",0,"%pi/3"],
"ans1::ans2",[]
];
CalcbyM("ans",cmdL,[""]);
\end{verbatim}
Here \texttt{cmdL} is a list of Maxima commands which are parsed sequentially. By executing
\texttt{CalcbyM} in the next line, a file called \texttt{simpleexampleans.txt}
in the directory \texttt{ketwork} with the contents given below will be created:
\begin{verbatim}
writefile("ketwork/simpleexampleans.txt")$/*##*/
powerdisp:false$/*##*/
display2d:false$/*##*/
linel:1000$/*##*/
f(x):=sin(x)+cos(2*x)$/*##*/
ans1:integrate(f(x),x)$/*##*/
ans2:integrate(f(x),x,0,%pi/3)$/*##*/
disp(ans1)$/*##*/
disp(ans2)$/*##*/
closefile()$/*##*/
quit()$/*##*/
\end{verbatim}
These instructions will be processed by Maxima, and the results will be passed to
\ketcindy{} to generate either direct graphical output in Cinderella or a \LaTeX\ file with
the corresponding code to generate the graphics that
can be inserted in another document. In more detail, \texttt{CalcbyM} will sequentially do the following:
\begin{enumerate}
\item Create the \texttt{txt} file to be processed by Maxima.
\item Create a batch file \texttt{kc.sh(bat)} to call Maxima.
\item Call a Java program to execute the batch file above.
\item Hand the result from Maxima, parsed as strings, to \ketcindy{}.
\end{enumerate}
In the above example, the result \texttt{ans} is a list containing two strings:
\begin{verbatim}
[sin(2*x)/2-cos(x),(sqrt(3)+2)/4]
\end{verbatim}
This result can be directly used in \ketcindy; for example, we can draw the graph of the indefinite integral with the following command:
\begin{verbatim}
Plotdata("1",ans_1,"x");
\end{verbatim}
\begin{center}
\input{simpleexample.tex}
\end{center}
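The two returned values are mutually consistent, as a quick numerical check shows. We assume here that the definite integral is taken over $[0,\pi/3]$ (an assumption on our part, chosen to be consistent with the value Maxima returns):

```python
import math

# ans1, the antiderivative returned by Maxima:
F = lambda x: math.sin(2 * x) / 2 - math.cos(x)

b = math.pi / 3                      # assumed upper integration limit
definite = F(b) - F(0)               # fundamental theorem of calculus
expected = (math.sqrt(3) + 2) / 4    # ans2 as returned by Maxima
```

Indeed $F(\pi/3)-F(0)=\sqrt3/4-1/2+1=(\sqrt3+2)/4$.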
As mentioned, \ketcindy\ can also produce a \TeX\ animation. We illustrate this feature
with the Poincar\'e sections of an elastic pendulum as an example. We begin by defining
\texttt{Elist} as a list of increasing energies:
\begin{verbatim}
Elist=[0.00875,0.0125,0.01625,0.02,0.02375,0.0275,0.03125,0.035,0.03875];
\end{verbatim}
Next, we generate the corresponding data for each energy using Maxima with the package
\texttt{poincare}. The following list of Maxima commands is just the same
as the one described in Section \ref{sec3}:
\begin{verbatim}
cmdL1=concat(Mxload("rkfun.lisp"),Mxbatch("pdynamics.mac"));
cmdL1=concat(cmdL1,Mxbatch("poincare.mac"));
cmdL1=concat(cmdL1,[
"H(q1,p1,q2,p2):=(p1^2+p2^2)/2+(q1^2+q2^2)/2-0.75*q1^2*(1+q2)/2",[],
"ans:solve(H(q1,p1,q2,p2)=E,p2)",[],
"define(f(q1,p1,q2),rhs(first(ans)))",[],
"define(g(q1,p1,q2),rhs(second(ans)))",[],
]);
forall(1..9,nn,
cmdL2=[
"E:"+textformat(Elist_nn,6),[],
"for j:1 thru 10 do data1[j]:poincare2d(H,XH,[0.15,j/100,0.001,
g(0.15,j/100,0.001)],[t,0,1000,0.01],[q2,0,q1,p1])",[],
"points1:xreduce(append,create_list(data1[j],j,makelist(k,k,1,10)))",[],
"for j:1 thru 10 do data2[j]:poincare2d(H,XH,[2*j/100,0,j/100+0.0025,
f(2*j/100,0,j/100+0.0025)],[t,0,1000,0.01],[q2,0,q1,p1])",[],
"points2:xreduce(append,create_list(data2[j],j,makelist(k,k,1,10)))",[],
"points1::points2",[]
];
cmdL=concat(cmdL1,cmdL2);
CalcbyM("Points",cmdL,[mr,"Wait=40"]);
);
\end{verbatim}
Each result is stored in a text file (with extension \texttt{txt}) whose name is constructed by
appending a number sequentially to the prefix \texttt{ptdata}.
Finally, we define a function \texttt{mf(nn)} which describes the animation frame numbered
\texttt{nn}.
\begin{verbatim}
mf(nn):=(
regional(tmp,Points1,Points2);
Com1st("ReadOutData('ptdata"+text(nn)+".txt')");
Setcolor("red");
Pointdata("1","Points",[red,"Size=0.4"]);
Setcolor("black");
Expr(D,"c","E="+textformat(Elist_(nn),6));
);
Setpara("poincare","mf(nn)",1..9,
["m","Frate=3","Scale=0.7","OpA=[loop]"]);
\end{verbatim}
The animation is generated by putting together all the frames with the command \texttt{Mkanimation()}.
The result, shown below, requires Adobe Acrobat Reader\texttrademark\ for playback\footnote{Currently, it is the only PDF reader capable of that. Some versions of the KDE reader Okular have been reported to be able to
reproduce some animations, but we have not had success when using it.}:
\begin{center}
\begin{animateinline}[autoplay,loop,controls]{3}%
\scalebox{0.7}{\input{p001.tex}}%
\newframe%
\scalebox{0.7}{\input{p002.tex}}%
\newframe%
\scalebox{0.7}{\input{p003.tex}}%
\newframe%
\scalebox{0.7}{\input{p004.tex}}%
\newframe%
\scalebox{0.7}{\input{p005.tex}}%
\newframe%
\scalebox{0.7}{\input{p006.tex}}%
\newframe%
\scalebox{0.7}{\input{p007.tex}}%
\newframe%
\scalebox{0.7}{\input{p008.tex}}%
\newframe%
\scalebox{0.7}{\input{p009.tex}}%
\end{animateinline}
\end{center}
\medskip
This graphical analysis illustrates several features common to any perturbed system of the form
$H=H_0+\varepsilon H_1$, where $H_0$ is an integrable Hamiltonian and $\varepsilon \sim 0$. Starting
at low values of the energy, many closed curves can be detected, corresponding to periodic motions
of the system. Those closed curves are intersections of the tori determined by the unperturbed
part with the Poincar\'e surface. As the energy of the system increases, the tori are destroyed and
the trajectories initially confined to them start wandering all over the phase space. At a certain
point, we cannot distinguish any periodicity and the behavior is completely chaotic. This generic
picture is the content of the famous KAM (for Kolmogorov, Arnold and Moser) theorem, although the
theorem refers to increasing values of the perturbation parameter rather than of the total energy (see
\cite{Cue92} for the application to this case, along with some comments on the applicability of the
KAM theorem, which is not immediate).
\section{Conclusions}
The second-order normal form of a perturbed Hamiltonian system can be
quickly computed in closed form with the aid of the Maxima CAS,
directly in terms of the Hopf invariants. The package \texttt{pdynamics} provides
a practical implementation.
The Maxima package \texttt{poincare} can reproduce the results appearing in
textbooks and research papers dealing with Hamiltonian systems.
The graphical output quality is quite good, comparable (to say the least) to that
of commercial software, but at no cost (for comparison, Maple\texttrademark\ in
its student version costs $1\,000$ USD). Regarding computation times, the Maxima
version outperforms commercial competitors: the heaviest computation
in this paper is the one timed in \eqref{o38} while, for instance,
the same computation takes $50$ seconds in Maple\texttrademark\footnote{Used here:
Maple 2016.1a (build 1133417). Maplesoft, a division of Waterloo Maple Inc.,
Waterloo, Ontario. The Maxima version was 5.38.0.} (as can be seen in the worksheet
\url{http://galia.fc.uaslp.mx/~jvallejo/ElasticPendulum-MapleSession.pdf}, for
which the same computer was used). Maxima requires only about a third of this time.
On the other hand, \ketcindy\ in combination with Maxima can produce \TeX\ animations,
ready for use in complex documents which require high-quality graphics, such as
research papers or handouts to be used in teaching.
The union of these features results in an easy-to-use,
powerful integrated system particularly suitable for studying the dynamics of
Hamiltonian systems.
| {
"timestamp": "2018-05-16T02:02:12",
"yymm": "1805",
"arxiv_id": "1805.05417",
"language": "en",
"url": "https://arxiv.org/abs/1805.05417",
"abstract": "Hamiltonian dynamical systems can be studied from a variety of viewpoints. Our intention in this paper is to show some examples of usage of two Maxima packages for symbolical and numerical analysis (\\texttt{pdynamics} and \\texttt{poincare}, respectively), along with the set of scripts \\KeTCindy\\ for obtaining the \\LaTeX\\ code corresponding to graphical representations of Poincaré sections, including animation movies.",
"subjects": "Numerical Analysis (math.NA); Mathematical Physics (math-ph); Chaotic Dynamics (nlin.CD); Computational Physics (physics.comp-ph)",
"title": "Hamiltonian systems: symbolical, numerical and graphical study"
} |
https://arxiv.org/abs/2103.10758 | Intermediate spaces, Gaussian probabilities and exponential tightness | Let us consider a Gaussian probability on a Banach space. We prove the existence of an intermediate Banach space between the space where the Gaussian measure lives and its RKHS. Such a space has full probability and a compact embedding. This extends what happens with Wiener measure, where the intermediate space can be chosen as a space of Hölder paths. From this result it is very simple to deduce a result of exponential tightness for Gaussian probabilities. | \section{Introduction}
Let $E=\cl C_0([0,T],\mathbb{R}^m)$ be the space of continuous $\mathbb{R}^m$-valued paths starting at $0$ and endowed with the sup norm and let $\mu$ be the Wiener measure on it. It is well known that the Reproducing Kernel Hilbert Space (RKHS) of this Gaussian probability is the space $\cl H=H^1_0([0,T])$ of the paths $\gamma$ vanishing at $0$, that are absolutely continuous and have a square integrable derivative.
Let us denote by $\cl C^0_\alpha\subset E$ the space of the paths that are $\alpha$-H\"older continuous and whose modulus of continuity
$$
\omega(\delta)=\sup_{{0\le s<t\le T\atop|t-s|\le\delta}}|\gamma(t)-\gamma(s)|
$$
is such that
$$
\lim_{\delta\to 0+}\frac {\omega(\delta)}{\delta^\alpha}=0\ .
$$
This space (whose elements are sometimes called the ``small'' $\alpha$-H\"older continuous functions) is separable and it is also well known (thanks to Kolmogorov's continuity theorem) that, for $0<\alpha<\frac 12$, $\mu(\cl C^0_\alpha)=1$ (whereas $\mu(\cl H)=0$). It is also well known that, still for $0<\alpha<\frac 12$,
$$
E\hookleftarrow \cl C^0_\alpha\hookleftarrow\cl H
$$
{\it the embeddings being compact}.
We are concerned with the question whether the existence of such an ``intermediate'' space is a general fact, i.e. true for every centered Gaussian probability on a separable Banach space.
More precisely we prove the following result.
\begin{theorem}\label{main} Let $E$ be a separable Banach space, $\mu$ a centered Gaussian probability on $E$ and $\cl H$ the corresponding RKHS. Then there exists a Banach space $\widetilde E$, separable and such that
\tin{a)} $\mu(\widetilde E)=1$ and
\tin{b)} the embeddings
$$
E\hookleftarrow\widetilde E\hookleftarrow\cl H
$$
are compact.
\end{theorem}
We shall even prove that there are infinitely many such spaces. We shall call ``intermediate space'' any separable Banach space satisfying a) and b) of Theorem \ref{main}.
The proof of Theorem \ref{main} is the object of \S\ref{sec-main}. Of course Theorem \ref{main} is obvious if $E$ is finite dimensional, as then we can choose $E=\widetilde E=\cl H$. Therefore in the sequel we implicitly assume that $E$ is infinite dimensional.
In the proof we shall also assume that $E=\mathop{\rm supp}(\mu)$. Otherwise just consider $\mathop{\rm supp} (\mu)$ instead of $E$.
This investigation was motivated by an application to the Large Deviations of the sequence of probabilities $(\mu_\varepsilon)_\varepsilon$, $\mu_\varepsilon$ being the image of $\mu$ through the map $x\mapsto\varepsilon x$. The property of exponential tightness is a key step in the proof of these estimates. One remarks that its proof in the case of Wiener measure is particularly simple and is based, besides Fernique's theorem, on the existence of the spaces of H\"older continuous functions, which are intermediate spaces. Thanks to Theorem \ref{main} the same, simple proof of exponential tightness for the Wiener measure works for a general Gaussian probability on a separable Banach space as developed in \S\ref{sec-lg}.
\S\ref{sec-comm} is devoted to comments and complements.
The proof of Theorem \ref{main} is largely inspired by the arguments of the fundamental papers of L.~Gross \cite{gross-TAMS} and \cite{gross-berkeley}.
\section{Motivation: exponential tightness of Gaussian measures}\label{sec-lg}
Let $\mu$ be a centered Gaussian probability on the separable Banach space $E$ and let $\mu_\varepsilon$ be its image through the map $x\mapsto \varepsilon x$ as above.
The Large Deviations properties of the family $(\mu_\varepsilon)_\varepsilon$ as $\varepsilon\to0$ have been well understood for a long time (see e.g. \cite{dvIII} \S 5). One of the main steps in this investigation is to prove that the family $(\mu_\varepsilon)_\varepsilon$ is {\it exponentially tight} at speed $\varepsilon\mapsto\varepsilon^2$, i.e. that for every $R>0$ there exists a compact set $K_R\subset E$ such that
\begin{equation}\label{eq-et}
\limsup_{\varepsilon\to 0}\varepsilon^2\log \mu_\varepsilon(K_R^c)\le -R\ .
\end{equation}
This fact follows immediately from Theorem \ref{main}. Let us denote by $\|\enspace\|_i$ the norm of the intermediate space $\widetilde E$. As $\|\enspace\|_i$ is $\mu$-a.s. finite (this is a) of Theorem \ref{main}), by Fernique's theorem (\cite{fernique-cras}, \cite{fernique-stflour74}) we have, for some $\rho>0$,
\begin{equation*}
\int_E{\rm e}^{\rho\|x\|_i^2}\,d\mu(x):=C_\rho<+\infty\ .
\end{equation*}
Let $K_R$ denote the ball of radius $\sqrt{R/\rho}$ of $\widetilde E$, which is compact in $E$.
From
$$
C_\rho\ge \int_{K_R^c/\varepsilon}{\rm e}^{\rho\|x\|_i^2}\,d\mu(x)\ge
\mu(\tfrac 1{\varepsilon}K_R^c){\rm e}^{R/\varepsilon^2}
$$
we deduce
$$
\limsup_{\varepsilon\to 0}\varepsilon^2\log \mu_\varepsilon(K_R^c)=\limsup_{\varepsilon\to 0}\varepsilon^2\log \mu(\tfrac 1{\varepsilon} K_R^c)\le-R\ ,
$$
i.e. \eqref{eq-et}.
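The argument can be followed numerically in the simplest case $E=\mathbb{R}$, $\mu=N(0,1)$, where one can take $\widetilde E=E$ and any $\rho<\frac12$ in Fernique's theorem. An illustrative Python check, not part of the proof (the values of $R$, $\rho$ and $\varepsilon$ are arbitrary choices):

```python
import math

def log_tail(t):
    """log P(|N(0,1)| > t), via the complementary error function."""
    return math.log(math.erfc(t / math.sqrt(2)))

R, rho = 1.0, 0.4                  # any rho < 1/2 works in dimension one
radius = math.sqrt(R / rho)        # K_R = [-radius, radius]
eps = 0.1
# eps^2 * log mu_eps(K_R^c) = eps^2 * log P(|X| > radius/eps):
val = eps**2 * log_tail(radius / eps)
```

Already at $\varepsilon=0.1$ the quantity is below $-R$ (its limit as $\varepsilon\to0$ is $-R/(2\rho)$).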
\section{Proof of the main result}\label{sec-main}
From now on $\mu$ will denote a Gaussian probability on the infinite dimensional separable Banach space $E$ as in the introduction, $\|\enspace\|$ being the norm of the Banach space $E$.
If we denote by $E'$ the dual of $E$, then to every continuous functional $\xi\in E'$ we can associate the r.v. $(E,\mu)\to \mathbb{R}$ defined as
$x\mapsto\langle \xi,x\rangle$. Let $E'_\mu$ be the completion of $E'$ in $L^2(\mu)$. This is a separable Hilbert space which is also a Gaussian space.
For every $g\in E'_\mu$ the vectors
\begin{equation}\label{eq-alter}
h=\int_E xg(x)\, d\mu(x)
\end{equation}
form a vector space $\cl H\subset E$ which, endowed with the scalar product
$$
\langle h_1,h_2\rangle_{\cl H}=\int_E g_1(x)g_2(x)\, d\mu(x)
$$
for
$$
h_1=\int_E xg_1(x)\, d\mu(x),\qquad h_2=\int_E xg_2(x)\, d\mu(x)\ ,
$$
is a Hilbert space isometric to $E'_\mu$. Remark that \eqref{eq-alter} can also be written $h={\rm E}[Xg(X)]$, $X$ denoting an $E$-valued r.v. having law $\mu$. $\cl H$ is called the Reproducing Kernel Hilbert Space (RKHS) of $\mu$.
For more details on the structure of Gaussian probabilities see \cite{ledoux-talagrand} or the very nice and very short presentation in \S 2 of \cite{deacosta-small}.
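In finite dimensions formula \eqref{eq-alter} recovers the covariance operator: for $\mu=N(0,\Sigma)$ on $\mathbb{R}^2$ and $g(x)=\langle\xi,x\rangle$ one gets $h=\Sigma\xi$. A Monte Carlo sketch in Python ($\Sigma$ and $\xi$ are arbitrary illustrative choices):

```python
import math
import random

random.seed(0)

# Sigma = [[1.0, 0.5], [0.5, 1.0]] and its Cholesky factor L (Sigma = L L^T):
l21, l22 = 0.5, math.sqrt(0.75)
xi = (1.0, -2.0)                      # an arbitrary functional in E'

n = 200_000
s0 = s1 = 0.0
for _ in range(n):
    z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    x = (z1, l21 * z1 + l22 * z2)     # X ~ N(0, Sigma)
    g = xi[0] * x[0] + xi[1] * x[1]   # g(X) = <xi, X>
    s0 += x[0] * g
    s1 += x[1] * g
h_mc = (s0 / n, s1 / n)               # Monte Carlo estimate of E[X g(X)]

sigma_xi = (1.0 * xi[0] + 0.5 * xi[1],
            0.5 * xi[0] + 1.0 * xi[1])   # Sigma @ xi = (0.0, -1.5)
```

The empirical average of $Xg(X)$ matches $\Sigma\xi$ up to the Monte Carlo error, so in this setting $\cl H$ is the range of $\Sigma$.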
\medskip
\noindent{\it Proof} of Theorem \ref{main}. Recall that we assume $E$ to be infinite dimensional.
\tin{a)} First step: construction of $\widetilde E$.
Let $(\Omega,\cl F,{\rm P})$ be a probability space and $X:\Omega\to E$ a Gaussian r.v. having distribution $\mu$. Let $(g_n)_n$ be an orthonormal basis of the Hilbert space $E'_\mu$; it is also a sequence of independent $N(0,1)$-distributed r.v.'s. Let $e_n={\rm E}[X g_n(X)]$; then $e_n\in E$ and $(e_n)_n$ is an orthonormal basis of the RKHS $\cl H$.
Then it is well known (see Proposition 3.6 p. 64 in \cite{ledoux-talagrand}) that the sequence
\begin{equation}\label{eq-Xn}
X_n=\sum_{j=1}^ng_je_j
\end{equation}
is a square integrable $E$-valued martingale converging a.s. and in $L^2$ to $X$. Hence
$$
\lim_{n\to\infty} {\rm E}\Bigl(\Bigl\|\sum_{j=n}^\infty g_je_j\Bigr\|^2 \Bigr)=0\ .
$$
Let $\alpha>0$ be fixed and let $(n_k)_k$ be an increasing sequence of integers such that $n_0=0$ and
\begin{equation}\label{eq-nk}
{\rm E}\Bigl(\Bigl\|\sum_{j=n_k+1}^{\infty} g_je_j\Bigr\|^2 \Bigr)\le 2^{-k(3+2\alpha)}\ .
\end{equation}
Let $\cl H_k=\mathop{\rm span}(e_{n_k+1},\dots,e_{n_{k+1}})$ and let $Q_k$ be the orthogonal projector $\cl H\to\cl H_k$. For a vector $x=\sum_{n=1}^\infty \alpha_n e_n\in \cl H$ let
\begin{equation}\label{eq-rinf}
\|x\|_i:=\sum_{k=0}^\infty 2^{k\alpha}\Big\|\sum_{j=n_k+1}^{n_{k+1}}\alpha_j e_j\Big\|=
\sum_{k=0}^\infty 2^{k\alpha}\|Q_k(x)\|\ .
\end{equation}
$\|\enspace\|_i$ is a norm on $\cl H$: Lemma \ref{lem-finitenorm} below states that $\|x\|_i<+\infty$ for every $x\in\cl H$, while subadditivity and positive homogeneity are immediate.
Let
\begin{equation}\label{eq-wk}
W_k:=\sum_{j=n_k+1}^{n_{k+1}}g_j e_j\ .
\end{equation}
The $E$-valued r.v.'s $W_k$ are Gaussian and independent. Remark that, with our choice of the numbers $n_k$, thanks to the Markov inequality we have
\begin{equation}\label{key-ineq}
{\rm P}(2^{k\alpha}\|W_k\|\ge 2^{-k})\le 2^{2k(1+\alpha)}\,{\rm E}(\|W_k\|^2)\le2^{2k(1+\alpha)}\,{\rm E}\Bigl(\Bigl\|\sum_{j=n_k+1}^{\infty} g_je_j\Bigr\|^2 \Bigr)\le 2^{-k}\ .
\end{equation}
We can now define $\widetilde E$ as the completion of $\cl H$ with respect to the norm $\|\enspace\|_i$. Remark that, as for $x\in\cl H$
$$
\|x\|_i=
\sum_{k=0}^\infty 2^{k\alpha}\|Q_k(x)\|\ge
\sum_{k=0}^\infty \|Q_k(x)\|\ge\Big\|\sum_{k=0}^\infty Q_k(x)\Big\| =\|x\|\ ,
$$
we have $\widetilde E\subset E$. It is also obvious that $\widetilde E$ is dense in $E$, as it contains $\cl H$, which is itself dense in $E$.
\tin{b)} Second step: $\mu(\widetilde E)=1$. Let
$$
Y_k=X_{n_k}=\sum_{j=1}^{n_k}g_j e_j
$$
and let us prove that $(Y_k)_k$, as a sequence of $\widetilde E$-valued r.v.'s, converges in probability. We proceed much as in the proof of Lemma \ref{lem-finitenorm} below. Let $\varepsilon>0$. If $p_0$ is such that $2^{-p_0}<\frac \ep2$, then for $p_0\le \ell< r$,
$$
\displaylines{
{\rm P}(\|Y_r-Y_\ell\|_i>\varepsilon)\le{\rm P}\Bigl(\sum_{k=\ell+1}^{r}2^{\alpha k}\|W_k\|>\sum_{k=\ell+1}^r2^{-k}\Bigr)\le\cr \le\sum_{k=\ell+1}^r{\rm P}\bigl(2^{\alpha k}\|W_k\|>2^{-k}\bigr)\le \sum_{k=p_0+1}^{\infty}2^{- k}= 2^{-p_0}<\varepsilon\ .\cr
}
$$
Therefore $(Y_k)_k$ is a Cauchy sequence in probability in $\widetilde E$, hence it converges, in probability, to some $\widetilde E$-valued r.v. $\widetilde X$. As $\widetilde E\subset E$ and its topology is stronger, $(Y_k)_k$ also converges in probability in $E$. But we know already that $(Y_k)_k$ in $E$ converges to a r.v. $X$ having law $\mu$. Hence $\widetilde X=X$ a.s. and
$\mu(\widetilde E)={\rm P}(X\in \widetilde E)={\rm P}(\widetilde X\in \widetilde E)=1$.
\tin{c)} Step three: the embedding $E\hookleftarrow\widetilde E$ is compact.
Let $(x_p)_p$ be a bounded sequence in $\widetilde E$ and $(z_p)_p\subset \cl H$ another sequence such that $\|x_p-z_p\|_i<2^{-p}$, which is possible as $\cl H$ is dense in $\widetilde E$. Let $M$ be such that $\|z_p\|_i\le M$ for every $p$. As the projectors $Q_k$ have finite dimensional range, for every $k$ there exists a subsequence $(p_r^{(k)})_r$ such that $\|Q_kz_{p_r^{(k)}}-y^{(k)}\|_i\to 0$ for some vector $y^{(k)}\in \cl H_k$, i.e. of the form
$$
y^{(k)}=\sum_{m=n_k+1}^{n_{k+1}} \alpha_m e_m\ .
$$
By the diagonal argument there exists a subsequence $(p'_r)_r$ such that
$\|Q_kz_{p'_r}-y^{(k)}\|_i\to 0$ as $r\to\infty$ for every $k$. Let now
$\varepsilon>0$ be fixed. We have, for every positive integer $k_0$,
\begin{equation}\label{eq-diagonal}
\|z_{p'_r}-z_{p'_\ell}\|=\Bigl\|\sum_{k=1}^\infty Q_k(z_{p'_r}-z_{p'_\ell})\Bigr\|\le \Bigl\|\sum_{k=1}^{k_0} Q_k(z_{p'_r}-z_{p'_\ell})\Bigr\|+\Bigl\|\sum_{k=k_0+1}^\infty Q_k(z_{p'_r}-z_{p'_\ell})\Bigr\|\ .
\end{equation}
We first choose $k_0$ such that $M\,2^{-\alpha k_0}<\frac \ep3$, so that
$$
\displaylines{
\Bigl\|\sum_{k=k_0+1}^\infty Q_k(z_{p'_r}-z_{p'_\ell})\Bigr\|\le
\sum_{k=k_0+1}^\infty \Bigl\|Q_k(z_{p'_r}-z_{p'_\ell})\Bigr\|\le\cr
\le2^{-\alpha k_0}\sum_{k=k_0+1}^\infty 2^{\alpha k} \Bigl\|Q_k(z_{p'_r}-z_{p'_\ell})\Bigr\|\le 2^{-\alpha k_0}\|z_{p'_r}-z_{p'_\ell}\|_i\le 2^{-\alpha k_0}\cdot 2M\le\frac23\, \varepsilon\cr
}
$$
and then $p_0$ so that, for $r,\ell\ge p_0$
$$
\Bigl\|\sum_{k=1}^{k_0} Q_k(z_{p'_r}-z_{p'_\ell})\Bigr\|\le \frac \ep3\ \cdotp
$$
Therefore $(z_{p'_r})_r$ is a Cauchy sequence in $E$, hence so is $(x_{p'_r})_r$, which proves the compactness of the embedding $\widetilde E\hookrightarrow E$.
\tin{d)} Last step: the embedding $\widetilde E \hookleftarrow \cl H$ is compact. This is immediate: since $\mu(\widetilde E)=1$, $\cl H$ is also the RKHS of the Gaussian probability $\mu$ on $\widetilde E$, and such an embedding is always compact.
\par\hfill$\blacksquare$
\begin{lem}\label{lem-concentration} Let $\cl K$ be a finite dimensional Hilbert space and $\cl F\subset\cl K$ a subspace, and let us denote by $\nu$ and $\nu'$ the standard $N(0,I)$ distributions on $\cl K$ and $\cl F$ respectively. Let $B\subset \cl K$ be a centrally symmetric, convex set. Then
$$
\nu(B)\le\nu'(\cl F\cap B)\ .
$$
\end{lem}
For the proof of Lemma \ref{lem-concentration} the reader is referred to \cite{gross-TAMS}, as it is a weaker version of Lemma 4.1 there.
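Although no substitute for the proof in \cite{gross-TAMS}, the inequality of Lemma \ref{lem-concentration} can be checked in closed form in a toy case, taking $\cl K=\mathbb{R}^2$, $\cl F$ the first coordinate axis and $B=[-a,a]^2$; the short Python script below (all names are ours and purely illustrative) does so via the error function.

```python
import math

def p_abs_le(a):
    # P(|g| <= a) for a standard normal g, via the error function
    return math.erf(a / math.sqrt(2.0))

# K = R^2 with nu = N(0, I); F = the x-axis with nu' = N(0, 1);
# B = [-a, a]^2 is convex and centrally symmetric, and F ∩ B = [-a, a].
for a in [0.1, 0.5, 1.0, 2.0, 5.0]:
    nu_B = p_abs_le(a) ** 2        # product measure of the square
    nu_prime_FB = p_abs_le(a)      # one-dimensional Gaussian measure of the slice
    assert nu_B <= nu_prime_FB     # the inequality of the lemma
```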
\begin{lem}\label{lem-finitenorm}
$\|x\|_i<+\infty$ for every $x\in\cl H$.
\end{lem}
\noindent{\it Proof} Let us first prove that the sequence of real r.v.'s
$$
Z_n=\sum_{k=1}^n 2^{k\alpha}\|W_k\|
$$
converges in probability.
Let $\varepsilon>0$ and $p_0$ such that $2^{-p_0}<\frac \ep2$. Remark that this implies $\varepsilon>\sum_{k=p_0+1}^\infty2^{-k}$. For $p_0\le n<m$, we have thanks to \eqref{key-ineq}
$$
\displaylines{
{\rm P}(|Z_n-Z_m|>\varepsilon)={\rm P}\Bigl(\sum_{k=n+1}^m 2^{k\alpha}\|W_k\|>\varepsilon\Bigr)\le
{\rm P}\Bigl(\sum_{k=n+1}^m 2^{k\alpha}\|W_k\|>\sum_{k=n+1}^m2^{-k}\Bigr)\le \cr
\le\sum_{k=n+1}^m{\rm P}\Bigl( 2^{k\alpha}\|W_k\|>2^{-k}\Bigr)\le
\sum_{k=n+1}^m 2^{-k}\le \sum_{k=p_0+1}^\infty 2^{-k}=2^{-p_0}< \varepsilon\ .\cr
}
$$
Hence $(Z_n)_n$ is a Cauchy sequence in probability
and converges in probability to some real r.v. $Z$.
Let us prove that, for every $\varepsilon>0$, we have ${\rm P}(Z<\varepsilon)>0$. Let
$$
Z_N=\sum_{k=1}^{N}2^{k\alpha}\|W_k\|,\qquad Z-Z_N=\sum_{k=N+1}^\infty 2^{k\alpha}\|W_k\|\ .
$$
We have ${\rm P}(Z_N<\frac\ep2)>0$, as $Z_N$ depends only on the norms of finitely many r.v.'s $W_k$, each of them Gaussian and with values in a finite dimensional vector space. Moreover let $N$ be large enough so that
${\rm P}(Z-Z_N<\frac\ep2)>0$. As the r.v.'s $Z-Z_N$ and $Z_N$ are independent (they depend on different $W_k$'s)
\begin{equation}\label{eq-ep}
{\rm P}(Z<\varepsilon)\ge {\rm P}\Bigl(Z-Z_N<\frac\ep2,Z_N<\frac\ep2\Bigr)={\rm P}\Bigl(Z-Z_N<\frac\ep2\Bigr)
{\rm P}\Bigl(Z_N<\frac\ep2\Bigr)>0\ .
\end{equation}
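The elementary inequality behind \eqref{eq-ep} — for independent nonnegative $X,Y$, ${\rm P}(X+Y<\varepsilon)\ge{\rm P}(X<\varepsilon/2)\,{\rm P}(Y<\varepsilon/2)$ — can be verified in closed form for, say, exponential variables (an illustrative choice of ours):

```python
import math

# For independent nonnegative X, Y the event {X < eps/2} ∩ {Y < eps/2}
# is contained in {X + Y < eps}, so by independence
#   P(X + Y < eps) >= P(X < eps/2) * P(Y < eps/2).
# Exact check with X, Y ~ Exp(1), where X + Y ~ Gamma(2, 1).
for eps in [0.1, 0.5, 1.0, 2.0, 5.0]:
    lhs = 1.0 - math.exp(-eps) * (1.0 + eps)     # Gamma(2,1) CDF at eps
    rhs = (1.0 - math.exp(-eps / 2.0)) ** 2      # product of the two Exp(1) CDFs
    assert lhs >= rhs
```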
Finally, let us assume that there exists $x=\sum_{j=1}^\infty \alpha_je_j\in \cl H$ such that $\|x\|_i=+\infty$ and let us derive a contradiction. Of course we can assume $\|x\|_{\cl H}=1$. Let us consider the seminorms on $\cl H$
$$
\|z\|_k=\sum_{j=1}^k 2^{j\alpha}\|Q_j(z)\|
$$
so that $\lim_{k\to\infty} \|z\|_k=\|z\|_i$. Let $\cl K_k=\mathop{\rm span}(e_1,\dots,e_{n_k})$ so that the r.v. $X_{n_k}$ of \eqref{eq-Xn} takes values in $\cl K_k$. Let $x^{(k)}=\sum_{j=1}^{n_k} \alpha_je_j$ be the projection of $x$ on $\cl K_k$. Remark that $\|x\|_k=\|x^{(k)}\|_k$. We apply Lemma \ref{lem-concentration} considering the convex set
$$
B=\{z\in\cl K_k, \|z\|_k\le a\}
$$
and $\cl F=\mathop{\rm span}(x^{(k)})$. Let $\xi_k=\sqrt{\alpha_1^2+\dots+\alpha_{n_k}^2}$, so that the vector
$
\frac {x^{(k)}}{\xi_k}
$
has modulus $1$ in $\cl H$ and the r.v.
$$
g:=\frac 1{\xi_k}\sum_{j=1}^{n_k}\alpha_jg_j
$$
is $N(0,1)$-distributed. Hence the r.v.
$$
\frac1{\xi_k}\sum_{j=1}^{n_k} \alpha_jg_j\cdot\frac {x^{(k)}}{\xi_k}
$$
is $N(0,1)$-distributed and $\cl F$-valued.
Let $a$ be a continuity point of the distribution function of $Z$. Thanks to Lemma \ref{lem-concentration}, as $\xi_k\to \|x\|_{\cl H}=1$ and we assume $\|x\|_i=+\infty$, we have
$$
\displaylines{
{\rm P}(Z\le a)=\lim_{k\to\infty}{\rm P}\Bigl(\Bigl\|\sum_{j=1}^{n_k}g_j e_j\Bigr\|_i\le a\Bigr)\le
\lim_{k\to\infty}{\rm P}\Bigl(\Bigl|\frac 1{\xi_k}\sum_{j=1}^{n_k}\alpha_jg_j\Bigr|\,\frac{\|x\|_k}{\xi_k}\le a\Bigr)=\cr
=\lim_{k\to\infty}{\rm P}\Bigl(|g|\le\frac {a\xi_k}{\|x\|_k} \Bigr)=0\cr
}
$$
which is in contradiction with \eqref{eq-ep} and completes the proof of Lemma \ref{lem-finitenorm}.
\par\hfill$\blacksquare$\hphantom{mm}
\section{Remarks and complements}\label{sec-comm}%
\begin{rem}\label{rem1}\rm In fact we have proved the existence of infinitely many intermediate spaces $\widetilde E$ between $E$ and $\cl H$ (recall that we assume that $E$ is infinite dimensional). Actually the argument above can be repeated in order to construct a further intermediate space $\widetilde E_1$ between $\cl H$ and $\widetilde E$, necessarily different from $\widetilde E$, the embedding
$\widetilde E\hookleftarrow\widetilde E_1$ being compact. And so on.
\end{rem}
\begin{rem}\label{rem2}\rm In a first attempt to prove Theorem \ref{main} the author tried considering interpolation spaces. More precisely let, for $x\in E$,
$$
K(t,x)=\inf_{a+b=x, a\in E, b\in \cl H}(\|a\|+t|b|_{\cl H})
$$
and let, for $0<\th<1$,
\begin{equation}\label{xf}
\|x\|_\th=\sup_{t>0}t^{-\th}K(t,x)\ .
\end{equation}
Let us define the vector space $G_\th$ as the set of vectors $x\in E$ such that $\|x\|_\th<+\infty$, endowed with the norm $\|\enspace\|_\th$. See \cite{bergh-lofstrom}, \cite{lunardi} or \cite{pisier} for more details on this topic.
It is well known that $G_\th$ is a Banach space and also that the embeddings $E\hookleftarrow G_\th\hookleftarrow\cl H$ are compact (\cite{lions-peetre}, \S V.2).
The question remains whether $\mu(G_\th)=1$.
In the case of the Wiener space, $E=\cl C([0,T],\mathbb{R})$, $\cl H=H^1_0$ and $\mu=$the Wiener measure, it can be proved that
$G_\th$ contains the space of small $\alpha$-H\"older paths for $\alpha>\th$, which is a separable Banach space having Wiener measure $1$. This gives $\mu(G_\th)=1$ for $\th<\frac 12$, which however leaves open the question in the case $\th\ge\frac 12$.
The author does not know whether such a
Banach space $G_\th$ is itself separable, but it can be proved that the closure of $H^1_0$ in $G_\th$, say $\widetilde G_\th$, also contains the small $\alpha$-H\"older paths for $\alpha>\th$. Hence $\widetilde G_\th$, which is separable, is an intermediate space in the sense of Theorem \ref{main} in this case. Note that the requirement $\th<\frac 12$ means that $G_\th$ should be ``closer'' to $E$ than to $\cl H$.
Many questions about interpolation spaces, possibly of interest, thus remain open.
Is it true, in general, that the interpolated space $G_\th$ is an intermediate space? For every $0<\th<1$, or just for some values of the interpolation parameter $\th$?
\end{rem}
\begin{rem}\label{rem-ref1}\rm The construction of the intermediate space $\widetilde E$ of \S\ref{sec-main} is of course not unique, as other possibilities are available for the candidate norm \eqref{eq-rinf}. For instance, let us define the sequence $(n_k)_k$ so that
\begin{equation}\label{eq-nk2}
{\rm E}\Bigl(\Bigl\|\sum_{j=n_k+1}^\infty g_je_j\Bigr\|^2 \Bigr)\le 2^{-2k(\alpha+\eta)}
\end{equation}
for some $\eta>0$, and then set, for $x=\sum_{n=1}^\infty \alpha_n e_n\in \cl H$,
\begin{equation}\label{eq-rinf2}
\|x\|'=\sup_{k>0}2^{k\alpha}\Big\|\sum_{j=n_k+1}^{n_{k+1}}\alpha_j e_j\Big\|\ .
\end{equation}
In this remark we prove that $\|x\|'<+\infty$ for every $x\in\cl H$ as well, and that the completion of $\cl H$ with respect to $\|\enspace\|'$ is also an intermediate space.
This is of some interest, as it shows that there are (many) other possible ways of constructing intermediate spaces. We shall also see, in the next remark, that for a suitable choice of the orthonormal system $(e_n)_n$, in the case $E=\cl C([0,1])$ with the Wiener measure, the resulting intermediate spaces are the H\"older spaces.
The proof that $\|x\|'<+\infty$ for $x\in\cl H$ is actually even simpler than that of Lemma \ref{lem-finitenorm}: let $W_k$ be as in \eqref{eq-wk} and, for $n>0$,
$$
Z_n=\sup_{k\le n} 2^{k\alpha}\|W_k\|\ .
$$
Let us show that $Z_n\to Z$ where $Z$ is a r.v. such that ${\rm P}(Z<\varepsilon)>0$ for every $\varepsilon>0$. The a.s. convergence of $(Z_n)_n$ is immediate, the sequence being increasing. Denoting by $Z$ its limit we have
$$
\displaylines{
{\rm P}(Z\le \varepsilon)=\lim_{n\to\infty}{\rm P}\bigl(2^{k\alpha}\|W_k\|<\varepsilon, k=1,\dots,n\bigr)=\cr
=\lim_{n\to\infty}\prod_{k=1}^n{\rm P}(2^{k\alpha}\|W_k\|\le \varepsilon)=\prod_{k=1}^\infty\bigl(1-{\rm P}(2^{k\alpha}\|W_k\|>\varepsilon)\bigr)\ .\cr
}
$$
The infinite product above converges to a strictly positive number if and only if the series $\sum_{k=1}^\infty{\rm P}(2^{k\alpha}\|W_k\|>\varepsilon)$ is convergent. But, by Markov's inequality and using the bound \eqref{eq-nk2},
\begin{equation}\label{eq-convprime1}
{\rm P}(2^{k\alpha}\|W_k\|>\varepsilon)\le \frac 1{\varepsilon^2}\,2^{2k\alpha}{\rm E}(\|W_k\|^2)\le \frac 1{\varepsilon^2}\,2^{-2k\eta}
\end{equation}
which is the general term of a convergent series. Moreover the limit $Z$ is a.s. finite: as the r.v.'s $W_k$ are independent, $\{Z=+\infty\}$ is a tail event and, having probability $<1$, it has probability $0$ by Kolmogorov's $0$--$1$ law.
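Numerically the mechanism is transparent: replacing each probability by the Markov bound of \eqref{eq-convprime1} (here with the illustrative values $\varepsilon=2$, $\eta=\frac12$, our choice) gives a summable sequence, and the corresponding lower bound for the infinite product stays away from $0$:

```python
# Lower bound for the infinite product: replace each probability by the
# Markov bound p_k = (1/eps^2) * 2^(-2*k*eta), here with eps = 2, eta = 1/2,
# so p_k = 0.25 * 2^(-k).  Summable p_k gives a strictly positive product.
prod, series = 1.0, 0.0
for k in range(1, 200):
    p_k = 0.25 * 2.0 ** (-k)
    prod *= 1.0 - p_k
    series += p_k
assert series < 0.26 and prod > 0.7   # positive limit, as needed for P(Z <= eps) > 0
```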
The remainder of the argument of Lemma \ref{lem-finitenorm} carries over to $\|\enspace\|'$ in much the same way as in \S\ref{sec-main}.
The new norm $\|\enspace\|'$ also produces a family of intermediate spaces, as stated in the next result.
\begin{theorem}\label{main2} Let $\widetilde E'=$the completion of $\cl H$ with the norm $\|\enspace\|'$. Then $\widetilde E'$ is an intermediate space.
\end{theorem}
\noindent{\it Proof} Let us prove first that $\mu(\widetilde E')=1$. Let, as in the proof of Theorem \ref{main},
$Y_k=X_{n_k}=\sum_{j=1}^{n_k}g_j e_j$
and let us prove that $(Y_k)_k$, as a sequence of $\widetilde E'$-valued r.v.'s, converges in probability. If $k_0\le\ell\le r$ we have now
$$
{\rm P}(\|Y_r-Y_\ell\|'>\varepsilon)={\rm P}\Bigl(\sup_{\ell\le k\le r}2^{\alpha k}\|W_k\|>\varepsilon\Bigr)\le{\rm P}\Bigl(\sup_{k\ge k_0}2^{\alpha k}\|W_k\|>\varepsilon\Bigr)\ .
$$
Markov's inequality gives ${\rm P}(2^{\alpha k}\|W_k\|>\varepsilon)\le\frac 1{\varepsilon^2}\, 2^{-2k\eta}$, hence by the Borel--Cantelli Lemma, a.s. $2^{\alpha k}\|W_k\|>\varepsilon$ for finitely many $k$ only, and
$$
\lim_{k_0\to\infty}{\rm P}\Bigl(\sup_{k\ge k_0}2^{\alpha k}\|W_k\|>\varepsilon \Bigr)=0
$$
so that $(Y_k)_k$ is a Cauchy sequence in probability in $\widetilde E'$ which implies, with the same argument as in the proof of Theorem \ref{main}, that $\mu(\widetilde E')=1$.
We are left with the proof that the embedding $E\hookleftarrow\widetilde E'$ is compact, which is quite similar to the argument of the proof of Theorem \ref{main}.
Let again $(x_p)_p$ be a bounded sequence in $\widetilde E'$ and $(z_p)_p\subset \cl H$ another sequence such that $\|x_p-z_p\|'<2^{-p}$, which is possible as $\cl H$ is dense in $\widetilde E'$. Let $M$ be such that $\|z_p\|'\le M$ for every $p$. As the projectors $Q_k$ are finite dimensional, for every $k$ there exists a subsequence $(p_r^{(k)})_r$ such that $\|Q_kz_{p_r^{(k)}}-y^{(k)}\|'\to 0$ for some vector $y^{(k)}\in \cl H_k$, i.e. of the form
$y^{(k)}=\sum_{m=n_k+1}^{n_{k+1}} \alpha_m e_m$.
By the diagonal argument there exists a subsequence $(p'_r)_r$ such that
$\|Q_kz_{p'_r}-y^{(k)}\|'\to 0$ as $r\to\infty$ for every $k$. Let now
$\varepsilon>0$ be fixed. We have for every positive integer $k_0$
$$
\|z_{p'_r}-z_{p'_\ell}\|=\Bigl\|\sum_{k=1}^\infty Q_k(z_{p'_r}-z_{p'_\ell})\Bigr\|\le \Bigl\|\sum_{k=1}^{k_0} Q_k(z_{p'_r}-z_{p'_\ell})\Bigr\|+\Bigl\|\sum_{k=k_0+1}^\infty Q_k(z_{p'_r}-z_{p'_\ell})\Bigr\|\ .
$$
As $\|Q_k(z_{p'_r}-z_{p'_\ell})\|\le2^{-k\alpha}\|z_{p'_r}-z_{p'_\ell}\|'\le 2\cdot2^{-k\alpha} M$, we have
$$
\Bigl\|\sum_{k=k_0+1}^\infty Q_k(z_{p'_r}-z_{p'_\ell})\Bigr\|\le \sum_{k=k_0+1}^\infty \bigl\|Q_k(z_{p'_r}-z_{p'_\ell})\bigr\|\le
2 M\sum_{k=k_0+1}^\infty2^{-k\alpha}\le 2^{-k_0\alpha}\frac {2 M}{1-2^{-\alpha}}
$$
which, for $k_0$ large enough, is $\le \frac\ep2$. We can now choose $p_0$ so that, for $r,\ell\ge p_0$,
$$
\Bigl\|\sum_{k=1}^{k_0} Q_k(z_{p'_r}-z_{p'_\ell})\Bigr\|\le \frac \ep2\ \cdotp
$$
Therefore $(z_{p'_r})_r$ is a Cauchy sequence in $E$, hence so is $(x_{p'_r})_r$, which proves the compactness of the embedding $\widetilde E'\hookrightarrow E$.
\par\hfill$\blacksquare$\hphantom{mm}
Note that the norms $\|\enspace\|_i$ and $\|\enspace\|'$ differ not only because of their definitions (\eqref{eq-rinf} as opposed to \eqref{eq-rinf2}) but also on the different requirements on the sequence $(n_k)_k$ (\eqref{eq-nk} and \eqref{eq-nk2}).
\end{rem}
One of the referees raised the natural question whether the intermediate spaces $\widetilde E$ constructed in Theorem \ref{main} might, in the case of the Wiener space, produce the H\"older spaces or, more generally, whether they can be described in terms of the regularity of the paths, the idea being that the larger the parameter $\alpha$, the greater the regularity of the paths of $\widetilde E$.
In general the question seems to the author to require an analysis going beyond the scope of the present paper, in particular because regularity also depends on the choice of the orthonormal system $(e_n)_n$ and possibly on the regularity of its elements.
In the next remark we show, however, that for a certain choice of the orthonormal system the intermediate spaces of Theorem \ref{main2} are actually the H\"older spaces $\cl C^0_\alpha$.
\begin{rem}\label{rem-ref}\rm Let $E=\cl C([0,1],\mathbb{R})$ and $\|\enspace\|$ the sup norm.
Let us recall the characterization, due to Ciesielski \cite{Cies1}, of the small H\"older spaces $\cl C_\alpha^0$.
Let $\{\chi_n\}_n$ be the Haar system, namely the set of functions on the interval $[0,1]$ defined as $\chi_1(t) \equiv 1$ and
$$
\chi_{2^k+j}(t) =\begin{cases}
\sqrt{2^k}\qquad &{\rm if\ }t\in [{2j-2\over 2^{k+1}},{2j-1\over 2^{k+1}}[\\
-\sqrt{2^k}\qquad &{\rm if\ }t\in [{2j-1\over 2^{k+1}},{2j\over 2^{k+1}}[\\
0 &{\rm otherwise}
\end{cases}
$$
for $k=0,1,2,\dots$, $j=1,2,\dots,2^k$. It is well known that $\{\chi_n\}_n$ is a
complete orthonormal system of $L^2([0,1],\mathbb{R})$.
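As a quick sanity check of the definition (not needed for the argument), the script below, ours, evaluates the first Haar functions on a dyadic grid, on which the $L^2$ inner products of these step functions are exact, and verifies orthonormality:

```python
import math
import numpy as np

N = 256                           # dyadic grid; integrals of these step functions are exact
t = (np.arange(N) + 0.5) / N      # midpoints of the grid cells

def haar(n):
    """chi_n sampled on the grid, n = 1 or n = 2^k + j with k >= 0, 1 <= j <= 2^k."""
    if n == 1:
        return np.ones(N)
    k = int(math.floor(math.log2(n - 1)))
    j = n - 2 ** k
    lo, mid, hi = (2*j - 2) / 2**(k+1), (2*j - 1) / 2**(k+1), (2*j) / 2**(k+1)
    chi = np.zeros(N)
    chi[(t >= lo) & (t < mid)] = math.sqrt(2.0 ** k)
    chi[(t >= mid) & (t < hi)] = -math.sqrt(2.0 ** k)
    return chi

# Gram matrix of chi_1, ..., chi_16: it should be the identity
G = np.array([[np.mean(haar(m) * haar(n)) for n in range(1, 17)] for m in range(1, 17)])
assert np.allclose(G, np.eye(16))
```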
\begin{figure}[h!]
\hbox to \hsize\bgroup\hss
\beginpicture
\setcoordinatesystem units <.37truein,.37truein>
\setplotarea x from -4.2 to 4.2, y from 0 to 2
\axis bottom shiftedto y=0 ticks short withvalues $(j-1)2^{-k}$ $j2^{-k}$ $(j+1)2^{-k}$ $(j+2)2^{-k}$ / at -4 -2 2 4 / /
\axis left shiftedto x=0 invisible ticks length <0pt> withvalues $2^{-1-k/2}$ / at 2 / /
\setplotsymbol ({\bf.})
\plot -4.2 0 -2 0 0 2 2 0 4.2 0 /
\setdots
\plot -5 0 -4.2 0 /
\plot 4.2 0 5 0 /
\endpicture
\hss\egroup
\caption{The graph of $\phi_{2^k+j}$.\label{ex3.sommadiuniform}}
\end{figure}
Moreover let $\phi_n(t)=\int_0^t\chi_n(s)\, ds$ be the
primitive of $\chi_n$ (the Schauder basis). For a continuous path $x\in \cl C([0,1],\mathbb{R})$ let us consider the coefficients $\xi_n=\int_0^1\chi_n(s)\,dx(s)$
which are well defined, as $\chi_n$ is piecewise constant.
Ciesielski \cite{Cies1} proved that the separable Banach spaces $\cl C_\alpha^0$ and $c_0$ (the sequences vanishing at $\infty$ endowed with the sup norm) are isomorphic ($0<\alpha< 1$).
More precisely if
$$
w_{2^k+j}(\alpha)=2^{k(\alpha-\frac 12)+(1-\alpha)}
$$
then $x\in\cl C_\alpha^0,\ 0<\alpha< 1$, if and only if
$\xi=\{\xi_nw_n(\alpha)\}_n\in c_0$. Let us denote by $c_\alpha$ the space of the sequences $\xi=(\xi_n)_n$ such that
$(\xi_nw_n(\alpha))_n\in c_0$.
Ciesielski's theorem states that the mapping
$$
(\xi_n)_n\enspace \leftrightarrow \sum_{m=1}^\infty \xi_m\phi_m
$$
is an isomorphism between $c_\alpha$ and $\cl C^0_\alpha$.
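To see the characterization at work, consider the illustrative path $x(t)=\sqrt t$ (our choice), which is $\frac12$-H\"older: the coefficients $\xi_n$ reduce to second differences of $x$ at dyadic points, and for $\alpha<\frac12$ the weighted coefficients $w_n(\alpha)|\xi_n|$ decay along the levels, consistently with $x\in\cl C^0_\alpha$:

```python
import math

def x(t):
    return math.sqrt(t)          # a 1/2-Hoelder path, chosen for illustration

alpha = 0.4                      # any alpha < 1/2 behaves similarly

def weighted_level_max(k):
    """max over j of w_{2^k+j}(alpha) * |xi_{2^k+j}| at dyadic level k."""
    w = 2.0 ** (k * (alpha - 0.5) + (1.0 - alpha))
    best = 0.0
    for j in range(1, 2 ** k + 1):
        lo, mid, hi = (2*j - 2) / 2**(k+1), (2*j - 1) / 2**(k+1), (2*j) / 2**(k+1)
        # xi_n = int chi_n dx = sqrt(2^k) * (2 x(mid) - x(lo) - x(hi))
        xi = math.sqrt(2.0 ** k) * (2.0 * x(mid) - x(lo) - x(hi))
        best = max(best, w * abs(xi))
    return best

levels = [weighted_level_max(k) for k in range(12)]
assert levels[11] < levels[0]    # weighted coefficients decay along the levels
```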
Remark that under Ciesielski's isomorphism $\cl H$ is mapped into $\ell_2$. Actually $(\phi_n)_n$ is itself an orthonormal basis of $\cl H=H^1_0$.
The spaces $\cl C^0_\alpha$ for $0<\alpha<\frac 12$ are actually the intermediate spaces obtained in Remark \ref{rem-ref1} if we choose, as an orthonormal system for $\cl H$, $e_n=\phi_n$. Actually, if $n_k=2^k$, noting that the supports of the $\phi_j$, $2^k+1\le j\le 2^{k+1}$, are disjoint and that
$\|\phi_j\|= 2^{-1-k/2}$ for such $j$, we have, for large $k$,
$$
{\rm E}\Bigl(\Bigl\|\sum_{j=2^k+1}^{2^{k+1}} g_j\phi_j\Bigr\|^2 \Bigr)=2^{-2-k}{\rm E}\Bigl(\sup_{2^k+1\le j\le 2^{k+1}}g_j^2\Bigr)\le 2^{-k\lambda}
$$
for every $\lambda<1$. This is an elementary, but somewhat involved, computation that we shall not spell out here. Hence, as in Remark \ref{rem-ref1}, the norm
$$
\|x\|'=\sup_{k>0}2^{k\alpha}\Big\|\sum_{j=n_k+1}^{n_{k+1}}\alpha_j e_j\Big\|
$$
is finite on $\cl H$ as soon as $2(\alpha+\eta)<1$ for some $\eta>0$, i.e. for every $\alpha<\frac 12$.
Note that, again using the fact that the supports of the $\phi_j$ are disjoint and that
$\|\phi_j\|= 2^{-1-k/2}$ for $2^k+1\le j\le 2^{k+1}$,
$$
\displaylines{
\sup_{k>0}2^{k\alpha}\Big\|\sum_{j=2^k+1}^{2^{k+1}}\xi_j\phi_j\Big\|=
\sup_{k>0}\sup_{2^k+1\le j\le 2^{k+1}}2^{k\alpha}2^{-1-k/2}\, |\xi_j| = {\rm const}\,\sup_n w_n(\alpha)|\xi_n| \cr
}
$$
so that, thanks to Ciesielski's isomorphism, the intermediate norm $\|\enspace\|'$ is finite if and only if $x\in \cl C^0_\alpha$.
\end{rem}
\title{Spectral Condition-Number Estimation of Large Sparse Matrices}
\begin{abstract}
We describe a randomized Krylov-subspace method for estimating the spectral condition number of a real matrix $A$, or indicating that it is numerically rank deficient. The main difficulty in estimating the condition number is the estimation of the smallest singular value $\sigma_{\min}$ of $A$. Our method estimates this value by solving a consistent linear least-squares problem with a known solution using a specific Krylov-subspace method called LSQR. In this method, the forward error tends to concentrate in the direction of a right singular vector corresponding to $\sigma_{\min}$. Extensive experiments show that the method is able to estimate well the condition number of a wide array of matrices. It can sometimes estimate the condition number when running a dense SVD would be impractical due to the computational cost or the memory requirements. The method uses very little memory (it inherits this property from LSQR), and it works equally well on square and rectangular matrices.
\end{abstract}
\section{Introduction}
Estimating the smallest singular value $\sigma_{\min}$ of a matrix
is difficult. Dense SVD algorithms can approximate $\sigma_{\min}$
well and their running time is predictable, but they are also slow.
Furthermore, dense SVD algorithms require space that is proportional
to $mn$ when the matrix is $m$-by-$n$, which is impractical for
large sparse matrices.
Symmetrization is not an effective way to address the problem. If
we work with the Gram matrix $A^{*}A$, we cannot estimate condition
numbers $\kappa(A)=\sigma_{\max}/\sigma_{\min}$ greater than $1/\sqrt{\epsilon_{\text{machine}}}$,
where $\epsilon_{\text{machine}}$ is the unit roundoff (machine precision).
If we work with the augmented matrix
\[
\begin{bmatrix}0 & A^{*}\\
A & 0
\end{bmatrix}\;,
\]
$\sigma_{\min}(A)$ is transformed into a pair of eigenvalues $\pm\sigma_{\min}$
in the middle of the spectrum. Such eigenvalues are difficult to compute
accurately with Lanczos and its variants
\footnote{For example, the ARPACK User's Guide states that a shift-invert iteration
is usually required to compute eigenvalues in the interior of the
spectrum~\cite[Section 3.4]{ARPACK-UG}. Experiments with ARPACK
on some of the matrices presented later in the paper, whose condition
number our method was able to estimate, showed that ARPACK does not
converge on them when it tries to compute the smallest-magnitude eigenvalues
of the augmented matrix without inversion.
}, and it is essentially impossible for such algorithms to determine
that there is no eigenvalue closer to zero than the one that has already
been computed.
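The squaring effect behind this limitation is easy to demonstrate. The sketch below (ours; the orthogonal factors and singular values are arbitrary) builds a matrix with $\kappa(A)=10^{6}$ and checks that the eigenvalue spread of $A^{*}A$ is $\kappa(A)^{2}$, which is why Gram-based estimates cannot resolve condition numbers beyond $1/\sqrt{\epsilon_{\text{machine}}}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal factors
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.array([1.0, 1e-2, 1e-4, 1e-6])              # kappa(A) = 1e6
A = U @ np.diag(s) @ V.T

eig = np.linalg.eigvalsh(A.T @ A)                  # ascending eigenvalues of the Gram matrix
kappa_gram = eig[-1] / eig[0]
# the Gram matrix squares the condition number: kappa(A^*A) = kappa(A)^2
assert 0.9e12 < kappa_gram < 1.1e12
```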
This paper describes a Krylov-subspace method that can estimate $\sigma_{\min}$
and hence the spectral condition number $\mathbf{\sigma_{\max}/}\sigma_{\min}$
accurately. Our method is reliable in the sense that it does not incorrectly
report significant overestimates of $\sigma_{\min}$ as accurate (at
least when $A$ is not close to being rank deficient). Our method
is also frugal in memory: it requires very little workspace.
Our method also has some flaws. The main one is that it sometimes
converges very slowly, making it essentially impossible to compute
$\sigma_{\min}$. Our experience shows that the method always converges
eventually, but that convergence might be too slow to be of practical
use. There is no good way to determine how close the method is to
termination, although it tends to behave consistently on related matrices
(e.g. from the same application area). When $\kappa(A)$ is close
to $\epsilon_{\text{machine}}^{-1}$, the method sometimes overestimates
$\sigma_{\min}$ by several orders of magnitude, but it still returns
a small estimate (smaller than $\sigma_{\max}\times10^{-11}$ in our
experience, with $\epsilon_{\text{machine}}\approx10^{-16}$).
Even with these flaws, to the best of our knowledge this method is
the only practical way to compute $\sigma_{\min}$ with reasonable
accuracy (to within a factor of $2$ or better) on many large matrices.
The key idea of our method is to apply LSQR, a Krylov-subspace least-squares
solver, to minimize $\|Ax-b\|$ (all norms in this paper denote the
$2$-norm unless stated otherwise) when we already have an $x^{\star}$
that satisfies $Ax^{\star}=b$. Thanks to having $x^{\star}$, we
can compute the forward error, which allows us to exploit the tendency
of LSQR to concentrate the forward error in the direction of a singular
vector associated with $\sigma_{\min}$. This is often seen as a flaw
in LSQR and in Conjugate Gradients (LSQR is mathematically equivalent
to Conjugate Gradients applied to $A^{T}Ax=A^{T}b$). These solvers
converge slowly when the coefficient matrix $A$ is ill conditioned
because it is difficult for them to get rid of the error in the small
subspaces of $A$. Our method exploits this flaw, which acts as a
sieve that captures a vector from this subspace.
The method sometimes converges very rapidly and sometimes very slowly.
This depends not on the size of the problem, nor on how ill
conditioned it is, but on the distribution of its singular values.
The rest of this paper is organized as follows. Section~\ref{sec:Related-Work}
surveys related work on condition number estimation. Section~\ref{sec:The-Algorithm}
describes the core of our algorithm. Section~\ref{sec:Rationale}
explains how it works using mostly visualizations of numerical experiments
that display the singular-vector concentration effect. Section~\ref{sec:analysis-small-err}
analyzes the unique stopping criteria that our method relies upon.
The bulk of our experimental evidence is summarized in Section~\ref{sec:Additional-Experiments}.
\section{\label{sec:Related-Work}Related Work}
The largest singular value $\sigma_{\max}$ of $A$ can be computed
accurately using a bounded number of matrix-vector multiplications
involving $A$ and $A^{*}$. This can be done using the power method,
for example, whose analysis for this application we explain below.
The Lanczos method can reduce the number of matrix-vector products
even further~\cite{Kuczynsky92}. Random projection methods can also
estimate $\sigma_{\max}(A)$~\cite{HalkoMartinssonTropp11}.
Estimating $\sigma_{\min}$ is computationally more challenging, because
applying the pseudo-inverse is usually much harder than applying $A$
itself. In general, existing random projection methods cannot efficiently
estimate $\sigma_{\min}(A)$ unless a decomposition of $A$ is computed,
or $A$ is low rank (or numerically low rank). If $A$ is low rank,
random projection methods can be used to estimate $\sigma_{k}(A)$,
where $k$ is the (numerical) rank~\cite{HalkoMartinssonTropp11}.
The LINPACK condition-number estimator requires a triangular factorization
of $A$ (see Higham's monograph~\cite[Chapter 15]{Higham02} for
details on this and related estimators). The Gotsman--Toledo~\cite{GostmanToledo2008}
and the Bischof et al.~\cite{Bischof90} condition-number estimators,
which are specialized to sparse matrices, also require a triangular
factorization. Estimators that require a triangular factorization
are less expensive than the SVD, but they still cannot be applied
to huge matrices.
The LAPACK condition-number estimator relies on repeated applications
of the pseudo-inverses of $A$ and $A^{*}$~\cite{Higham88}. One
way to apply them is using a factorization, but they can also be applied
using an iterative solver. With an effective preconditioner, repeated
applications of the pseudo-inverse may be less expensive than the
method that we propose, but without one our method is less expensive.
Kenney et al.~\cite{Kenney98} describe a way to estimate the condition
number of a square matrix using a single application of the inverse
to one or several vectors.
The spectral condition number measures the norm-wise sensitivity of
matrix-vector products and linear systems to small perturbations in
the inputs. There are methods that estimate more focused metrics,
such as the sensitivity of individual components of the inputs or
output~\cite{Kenney98}. Our method does not address this problem.
\section{\label{sec:The-Algorithm}The Algorithm}
\begin{algorithm}
\begin{algorithmic}[1]
\small{
\STATE \textbf{Input: $A\in\mathbb{R}^{m\times n}$}
\STATE \textbf{Parameters and defaults: $c_{1}$} ($8\epsilon_{\text{machine}}$),
$c_{2}$ ($10^{-3}$), $c_{3}$ ($64/\epsilon_{\text{machine}}$),
$c_{4}$ ($\sqrt{\epsilon_{\text{machine}}}$) and $c_{1}^{\prime}$
($4\epsilon_{\text{machine}}$)
\STATE
\STATE Estimate $\hat{\sigma}_{\max}=\sigma_{\max}(A)$, along with
a certificate $\hat{v}_{\max}$, using power iteration.
\STATE $\hat{\sigma}_{\min}=\hat{\sigma}_{\max}$, $\hat{v}_{\min}=\hat{v}_{\max}$
\STATE Draw a random vector $\hat{x}\in\mathbb{R}^{n}$ with independent
normal entries
\STATE $\tau\gets\erf^{-1}(c_{2})/\Vert\hat{x}\Vert$
\STATE $x^{\star}\gets\hat{x}/\Vert\hat{x}\Vert$
\STATE $b\gets Ax^{\star}$
\STATE $\beta^{(0)}\gets\Vert b\Vert$, $u^{(0)}\gets b/\beta^{(0)}$
\STATE $v^{(0)}\gets A^{*}u^{(0)}$, $\alpha^{(0)}\gets\Vert v^{(0)}\Vert$,
$v^{(0)}\gets v^{(0)}/\alpha^{(0)}$
\STATE $w^{(0)}\gets v^{(0)}$
\STATE $x^{(0)}\gets0_{n\times1}$
\STATE $\bar{\phi}^{(0)}\gets\beta^{(0)}$, $\bar{\rho}^{(0)}\gets\alpha^{(0)}$
\STATE $T\gets\infty$\qquad{} \COMMENT{Value of $T$ is set later.}
\FOR{$t=1,\dots,T$}
\STATE $u^{(t)}\gets Av^{(t-1)}-\alpha^{(t-1)} u^{(t-1)}$
\STATE $\beta^{(t)}\gets\Vert u^{(t)}\Vert$
\STATE $u^{(t)}\gets u^{(t)}/\beta^{(t)}$
\STATE $v^{(t)}\gets A^{*}u^{(t)}-\beta^{(t)}v^{(t-1)}$
\STATE $\alpha^{(t)}\gets\Vert v^{(t)}\Vert$
\STATE $v^{(t)}\gets v^{(t)}/\alpha^{(t)}$
\STATE $\rho^{(t)}\gets\left\Vert \left(\begin{array}{cc}
\bar{\rho}^{(t-1)} & \beta^{(t)}\end{array}\right)\right\Vert $
\STATE $c^{(t)}\gets\bar{\rho}^{(t-1)}/\rho^{(t)}$, $s^{(t)}\gets\beta^{(t)}/\rho^{(t)}$
\STATE $\theta^{(t)}\gets s^{(t)}\alpha^{(t)}$, $\bar{\rho}^{(t)}\gets-c^{(t)}\alpha^{(t)}$
\STATE $\phi^{(t)}\gets c^{(t)}\bar{\phi}^{(t-1)}$, $\bar{\phi}^{(t)}\gets s^{(t)}\bar{\phi}^{(t-1)}$
\STATE $x^{(t)}\gets x^{(t-1)}+(\phi^{(t)}/\rho^{(t)})w^{(t-1)}$
\STATE $w^{(t)}\gets v^{(t)}-(\theta^{(t)}/\rho^{(t)})w^{(t-1)}$
\STATE $R_{tt}^{(t)}\gets\rho^{(t)}$\qquad{} \COMMENT{Only diagonal and superdiagonal of $R$ are kept in memory}
\STATE \textbf{if $t>1$ set $R_{t-1,t}^{(t)}\gets\theta^{(t-1)}$}
\STATE $d^{(t)}\gets x^{\star}-x^{(t)}$
\STATE \textbf{if }$d^{(t)}=0$ \textbf{set $\hat{\sigma}_{\min}\gets\hat{\sigma}_{\max}$,
$\hat{v}_{\min}\gets\hat{v}_{\max}$ and break for}\\
\qquad{} \COMMENT{For matrices with $\kappa \neq 1$ the probability of getting $d^{(t)}=0$ is $0$}.
\IF{$\Vert A d^{(t)} \Vert \leq \hat{\sigma}_{\min} \Vert d^{(t)} \Vert$}
\STATE $\hat{\sigma}_{\min}\gets\Vert Ad^{(t)}\Vert/\Vert d^{(t)}\Vert,$
$\hat{v}_{\min}\gets d^{(t)}$
\ENDIF
\STATE \textbf{if $\hat{\sigma}_{\max}/\hat{\sigma}_{\min}\geq c_{4}$
then }$c_{1}\gets c_{1}^{\prime}$
\IF{ not converged ($T=\infty$) and ($\frac{\Vert Ad^{(t)}\Vert}{\hat{\sigma}_{\max}\Vert x^{(t)}\Vert+\Vert b\Vert}\leq c_{1}$
\textbf{or }$\Vert d^{(t)}\Vert\leq\tau$ \textbf{or $\hat{\sigma}_{\max}/\hat{\sigma}_{\min}\geq c_{3}$})
}
\STATE\textbf{$T\gets\left\lceil 1.25t\right\rceil $}
\ENDIF
\ENDFOR
\STATE Estimate $\tilde{\sigma}_{\min}=\sigma_{\min}(R^{(T)})$,
using inverse power iteration
\STATE $\tilde{\sigma}_{\min}\gets\min(\tilde{\sigma}_{\min},\hat{\sigma}_{\min})$
\STATE \RETURN $\hat{\sigma}_{\max}$,$\hat{\sigma}_{\min}$, $\hat{v}_{\max}$
and $\hat{v}_{\min}$, $\tilde{\sigma}_{\min}$
}
\end{algorithmic}
\caption{\label{alg:the-alg}The condition number estimation algorithm.}
\end{algorithm}
This section describes our algorithm for estimating the condition
number of $A$. A detailed pseudo-code description appears in Algorithm~\ref{alg:the-alg}.
The algorithm starts by estimating $\sigma_{\max}(A)$ and a corresponding
certificate vector using power iteration on $A^{*}A$. We perform
enough iterations to estimate $\sigma_{\max}$ to within 10\% with
probability at least $1-10^{-12}$. Using a bound due to Klein and
Lu~\cite[Section 4.4]{KleinLu1996}\footnote{Note that the statement of Lemma~6 in~\cite{KleinLu1996} is incorrect;
the proof shows the correct bound. Also, the discussion that follows
the proof of the lemma repeats the error in the statement of the lemma.
}, we find that given a relative error parameter $\epsilon$ and a
failure probability parameter $\delta$, if we perform
\[
\left\lceil \frac{1}{\epsilon}\left(\ln\left(2n\right)^{2}+\ln\left(\frac{1}{\epsilon\delta^{2}}\right)\right)\right\rceil
\]
iterations, the relative error in our approximation is less than $\epsilon$
with probability at least $1-\delta$. For the parameters $\epsilon=10^{-1}$
and $\delta=10^{-12}$, 1004 iterations suffice even for matrices
with up to $10^{9}$ columns. For $\epsilon=1/3$ and $\delta=10^{-12}$,
only 298 iterations suffice for matrices with up to $10^{9}$ columns.
(The accuracy of the $\sigma_{\max}$ estimate in the power method
is typically much higher than predicted by this bound, but the additional
accuracy depends on the gap between the largest and second-largest
singular values; the bound that we use makes no assumption on the
gap.)
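A minimal sketch of this first phase (ours, on an arbitrary dense Gaussian test matrix; 298 is the iteration count computed above for $\epsilon=1/3$):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 100
A = rng.standard_normal((m, n))        # arbitrary dense test matrix

# power iteration on A^T A with a random start vector
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
for _ in range(298):
    v = A.T @ (A @ v)
    v /= np.linalg.norm(v)
sigma_max_hat = np.linalg.norm(A @ v)  # estimate, with certificate vector v

sigma_max = np.linalg.svd(A, compute_uv=False)[0]
assert sigma_max_hat <= sigma_max * (1.0 + 1e-10)  # ||Av|| never exceeds sigma_max
assert sigma_max_hat >= (2.0 / 3.0) * sigma_max    # within the guaranteed relative error
```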
The main phase of the algorithm uses a slightly-enhanced LSQR iteration~\cite{LSQR}
to estimate $\sigma_{\min}$ and to produce a corresponding certificate
vector. The algorithm first generates a uniformly-distributed random
vector $x^{\star}$ on the unit sphere by first generating a vector
$\hat{x}$ with normally-distributed independent random components,
and setting $x^{\star}=\hat{x}/\|\hat{x}\|$. The algorithm multiplies
it by $A$ to produce a consistent right-hand side $b=Ax^{\star}$.
Now the algorithm runs LSQR on this $b$, using Paige and Saunders's
original formulation~\cite[pages 50--51]{LSQR}. LSQR minimizes $\|Ax-b\|$
iteratively using a Lanczos-type bidiagonalization procedure. It is
mathematically equivalent to solving the normal equations $A^{*}Ax=A^{*}b$
using the Conjugate Gradients algorithm, but it behaves much better
numerically.
Our algorithm adds a few steps to each LSQR iteration. At the end
of each (standard) LSQR iteration, we have an updated approximate
solution $x^{(t)}$ and an estimate of $\|r^{(t)}\|=\|Ax^{(t)}-b\|$,
denoted by $\bar{\phi}^{(t)}$. This estimate of $\|r^{(t)}\|$ is
exact in exact arithmetic, but the equality of $\bar{\phi}^{(t)}$ and
$\|r^{(t)}\|$ depends on the orthogonality of the Lanczos vectors,
which is gradually lost in floating-point arithmetic as the algorithm
progresses. Our algorithm also computes $d^{(t)}=x^{\star}-x^{(t)}$
and $\|d^{(t)}\|$. We have
\begin{eqnarray*}
\left\Vert Ad^{(t)}\right\Vert & = & \left\Vert A\left(x^{\star}-x^{(t)}\right)\right\Vert \\
& = & \left\Vert b-Ax^{(t)}\right\Vert \\
& = & \left\Vert r^{(t)}\right\Vert \;,
\end{eqnarray*}
which in exact arithmetic equals $\bar{\phi}^{(t)}$, but to improve
the robustness of the algorithm we compute $\|Ad^{(t)}\|$ explicitly.
(In our numerical experiments we have found $\bar{\phi}^{(t)}$ to
be an accurate estimate, but we prefer to avoid any reliance on the
orthogonality of the Lanczos vectors in our algorithm.) We also compute
$\|x^{(t)}\|$.
Next, the algorithm computes the ratio $\|Ad^{(t)}\|/\|d^{(t)}\|$,
which, like any Rayleigh quotient, is an upper bound on $\sigma_{\min}$.
If this ratio is the smallest we have seen so far, the algorithm treats
it as an estimate of $\sigma_{\min}$ and stores both the ratio and
the certificate $d^{(t)}$. When the algorithm terminates, it outputs
the best ratio it has found and the corresponding certificate.
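The main loop can be sketched as follows. For brevity the sketch drives the iteration with conjugate gradients on the normal equations, which the text notes is mathematically equivalent to LSQR (though numerically inferior), and it omits the stopping criteria; all names are ours:

```python
import numpy as np

def certified_sigma_min(A, iters, seed=0):
    """Plant a random solution, solve the consistent system iteratively,
    and track the smallest Rayleigh-quotient ratio ||A d|| / ||d||."""
    rng = np.random.default_rng(seed)
    _, n = A.shape
    xhat = rng.standard_normal(n)
    xstar = xhat / np.linalg.norm(xhat)      # uniform on the unit sphere
    b = A @ xstar                            # consistent right-hand side
    x = np.zeros(n)
    r = A.T @ b                              # normal-equations residual A^T(b - Ax)
    p = r.copy()
    rs = r @ r
    best, cert = np.inf, None
    for _ in range(iters):
        if rs < 1e-30:                       # fully converged; stop
            break
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        d = xstar - x                        # forward error = candidate certificate
        nd = np.linalg.norm(d)
        if nd > 1e-14:
            ratio = np.linalg.norm(A @ d) / nd   # upper bound on sigma_min
            if ratio < best:
                best, cert = ratio, d / nd
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return best, cert
```

On a matrix whose smallest singular value is well separated from the rest, the best ratio closely matches $\sigma_{\min}$ and the returned unit vector certifies it.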
We use three stopping criteria. The first stopping criterion is the
one used by the standard LSQR algorithm~\cite{LSQR} for consistent
systems:
\begin{equation}
\frac{\left\Vert r^{(t)}\right\Vert }{\hat{\sigma}_{\max}\left\Vert x^{(t)}\right\Vert +\left\Vert b\right\Vert }\leq c_{1}\,,\label{eq:stop1}
\end{equation}
where $\hat{\sigma}_{\max}$ is our estimate of $\Vert A\Vert$ and
$c_{1}$ is a parameter that is set by default to $8\epsilon_{\text{machine}}$.
It has been observed experimentally~\cite{CPT09} that for consistent
systems, as long as $c_{1}=\Omega(\epsilon_{\text{machine}})$, this
criterion will eventually be met in spite of the loss of orthogonality
in the bidiagonalization process; however, the residual norm does
not seem to decrease much below the value required to satisfy \eqref{eq:stop1}~\cite{CPT09},
so a much smaller $c_{1}$ cannot be used.
In many cases our second stopping criterion, which is non-standard,
will stop LSQR well before the residual is that small. This second
condition is
\begin{equation}
\left\Vert d^{(t)}\right\Vert \leq\frac{\erf^{-1}(c_{2})}{\Vert\hat{x}\Vert}\;,\label{eq:stop2}
\end{equation}
where $\erf^{-1}$ is the inverse error function (we use a numerical
approximation of $\erf^{-1}(c_{2})$), and $c_{2}$ is a parameter
that is set by default to $10^{-3}$. We explain this stopping criterion
and how the choice of $c_{2}$ affects the algorithm later, in Section~\ref{sec:analysis-small-err}.
The third stopping criterion is
\begin{equation}
\frac{\Vert Ad^{(t)}\Vert}{\Vert d^{(t)}\Vert}\leq c_{3}\hat{\sigma}_{\max}\,,\label{eq:stop3}
\end{equation}
where $c_{3}$ is a parameter that is set by default to $64\epsilon_{\text{machine}}$
(so the criterion triggers when the estimated condition number reaches
$\left(64\epsilon_{\text{machine}}\right)^{-1}\approx7\times10^{13}$).
In other words, at this threshold we consider the matrix to be numerically
rank deficient and we do not attempt to estimate the exact condition
number. This criterion is used in the standard LSQR algorithm~\cite{LSQR}
as a regularizing criterion.
To achieve good accuracy even for matrices that are terribly ill conditioned
(condition number close to $1/\epsilon_{\text{machine}}$), the stopping
criteria are refined in two additional ways:
\begin{enumerate}
\item If at some point we have
\[
\frac{\Vert Ad^{(t)}\Vert}{\Vert d^{(t)}\Vert}\leq c_{4}\hat{\sigma}_{\max}\,,
\]
where $c_{4}$ is a parameter that is set by default to $\sqrt{\epsilon_{\text{machine}}}$,
we set $c_{1}$ (residual-based stopping threshold) to $c_{1}^{\prime}$,
which is set by default to $4\epsilon_{\text{machine}}$.
\item Even when the method detects convergence using one of its three criteria
(small residual, small error, and numerical rank deficiency), it keeps
iterating. The number of extra iterations is one quarter of the number
performed until convergence was detected. This rule is a heuristic
that tries to improve the accuracy of the condition number estimate.
The cost of this heuristic is obviously limited and it can be turned
off by the user.
\end{enumerate}
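The two main convergence tests can be collected into a standalone check (packaging and names are ours; the rank-deficiency test and the refinement rules above are omitted from this sketch):

```python
import numpy as np

EPS = np.finfo(float).eps

def should_stop(r_norm, x_norm, b_norm, d_norm, sigma_max_est,
                xhat_norm, tau, c1=8 * EPS):
    """tau is the precomputed small-error threshold erfinv(c2)."""
    # (1) small residual, measured relative to sigma_max*||x|| + ||b||
    if r_norm / (sigma_max_est * x_norm + b_norm) <= c1:
        return True
    # (2) small forward error relative to the planted solution
    if d_norm <= tau / xhat_norm:
        return True
    return False
```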
There is one more twist to the algorithm. The algorithm stores the
matrix $R^{(t)}$, one of two bidiagonal matrices that LSQR incrementally
constructs but normally discards. As in the symmetric Lanczos algorithm,
the singular values of $R^{(t)}$ converge to the singular values
of $A$. Once the algorithm terminates, we compute $\tilde{\sigma}_{\min}\approx\sigma_{\min}(R^{(t)})$;
if it is smaller than the best $\|Ad^{(t)}\|/\|d^{(t)}\|$ estimate,
we output both estimates. One estimate ($\tilde{\sigma}_{\min}$)
is tighter, but it comes with no certificate vector; the other is
looser, but comes with a certificate. Generating the certificate for
the Lanczos estimate requires storing the Lanczos vectors or repeating
the iterations, both of which we consider to be too expensive.
Storing $R^{(t)}$ and estimating $\sigma_{\min}(R^{(t)})$ is relatively
inexpensive since $R^{(t)}$ is bidiagonal. We estimate it by running
inverse iteration on $R^{(t)}$, again performing enough iterations
to get to within 10\% with very high probability. Since we use inverse (power) iteration,
the error in $\tilde{\sigma}_{\min}$ is one sided: it is always the
case that $\tilde{\sigma}_{\min}\geq\sigma_{\min}(R^{(t)})$ (because
it is generated by a Rayleigh quotient). Also, $\sigma_{\min}(R^{(t)})\geq\sigma_{\min}(A)$.
Therefore, $\tilde{\sigma}_{\min}$ is also an upper bound on $\sigma_{\min}(A)$,
not just an estimate.
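The final Lanczos-estimate step can be sketched as follows, assuming a square bidiagonal $R$ (dense solves stand in for the $O(t)$ bidiagonal solves an implementation would use; names are ours):

```python
import numpy as np

def bidiagonal_sigma_min_estimate(R, iters=50, seed=0):
    """Inverse iteration on R^T R.  The returned Rayleigh-quotient value
    never falls below sigma_min(R), so the error is one sided."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(R.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        y = np.linalg.solve(R.T, v)   # two bidiagonal solves apply (R^T R)^{-1}
        w = np.linalg.solve(R, y)
        v = w / np.linalg.norm(w)
    return np.linalg.norm(R @ v)      # ||Rv|| >= sigma_min(R) for any unit v
```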
\section{\label{sec:Rationale}Rationale}
This section explains the rationale behind the method using several
illustrative numerical experiments. All the experiments were done
on $1000$-by-$400$ matrices with real entries with prescribed singular
values and random singular vectors.
Let us begin the discussion with a matrix that has 300 singular values
that are distributed logarithmically between $10^{-3}$ and $10^{-2}$,
10 values at $10^{-8}$, and 90 at $1$.
\begin{figure}
\begin{centering}
\includegraphics[width=0.6\textwidth]{illustration_our}
\par\end{centering}
\caption{\label{fig:illustration_our}The behavior of the method on one matrix
described in the text.}
\end{figure}
Figure~\ref{fig:illustration_our} shows the behavior of our method
on one matrix generated with this spectrum. In the first 90 iterations
or so the residual diminishes logarithmically; the norm of $x^{(t)}-x^{\star}$
drops a bit but then largely stagnates. These two effects cause
our estimate to also diminish roughly logarithmically. The Lanczos
estimate ($\sigma_{\min}(R^{(t)})$) drops a bit initially but stagnates
from iteration 30 or so. What is happening up to iteration 90 or so
is that the Lanczos bidiagonalization resolves the singular values
in the $10^{-3}$-to-$10^{-2}$ cluster, while LSQR removes much of
the projection of the corresponding singular vectors from the residual
and from the error. Around iteration 160 Lanczos has resolved enough
of the spectrum in the $10^{-3}$-to-$10^{-2}$ cluster and the small
singular value of $R^{(t)}$ starts moving toward $10^{-8}$. At that
point, most of the remaining error consists of singular vectors corresponding
to the singular value $10^{-8}$, which causes our estimate to be
accurate (to within 9 decimal digits!). The norm of $x^{(t)}-x^{\star}$
is still large, because the error contains a significant component
in the subspace associated with the $10^{-8}$ singular values. At
this point, LSQR starts to resolve the error in this subspace: the
residual starts decreasing again, and the norm of $x^{(t)}-x^{\star}$
starts decreasing, which eventually causes our stopping criterion to be met.
\begin{figure}
\begin{centering}
\includegraphics[width=0.6\textwidth]{illustration_our_proj}
\par\end{centering}
\caption{\label{fig:illustration_our_proj}The projection of the forward error
on the right singular vectors in the experiment shown in Figure~\ref{fig:illustration_our}.
The bottom plot shows the singular values and the top image shows
the projections. Blue represents values near $\epsilon_{\text{machine}}$,
green represents values near $\sqrt{\epsilon_{\text{machine}}}$,
and red represents values near $1$.}
\end{figure}
Figure~\ref{fig:illustration_our_proj} visualizes the behavior described
in the previous paragraph. We see that up to around iteration 50,
the error in the singular subspaces associated with singular
values other than $1$ remains very large. From that point on to about
iteration 150, the error in subspaces corresponding to values between
$10^{-3}$ and $10^{-2}$ is resolved, but the error associated with
the singular value $10^{-8}$ is still very large. This is a point
where our method finds the small singular value and its certificate
(the error). As LSQR starts to resolve the error in the $10^{-8}$
singular subspace, the errors in the $10^{-3}$ to $10^{-2}$ subspaces
grow (perhaps due to loss of orthogonality) but they are reduced
again later.
The method yielded similar results when the small singular value was
moved down to $10^{-13}$, with convergence after about 440 iterations
and an estimate that is correct to within 5 decimal digits. The value
$\sigma_{\min}=10^{-13}$ is about the lower limit for which the $\|d^{(t)}\|$
stopping criterion is useful.
When $A$ is rank deficient there is more than one solution to the
system $Ax=b$. The solution $x^{\star}$, which was generated randomly,
has no special property that distinguishes it from other solutions
(like minimum norm), so no least-squares solver can recover $x^{\star}$.
Therefore, it is unlikely that $\|d^{(t)}\|$ will become small enough
to cause our method to stop. In this case, the method stops because
the residual eventually becomes very small (close to $\epsilon_{\text{machine}}$)
or because the estimated condition number becomes too big (stopping
condition~\eqref{eq:stop3}). In Figure~\ref{fig:illustration_rankdef_our}
we illustrate the behavior of the algorithm on a rank deficient matrix;
the matrix has 10 singular values that are $10^{-16}$ (numerical
zeros), 300 distributed logarithmically between $10^{-3}$ and $10^{-2}$,
and 90 at $1$.
\begin{figure}
\begin{centering}
\includegraphics[width=0.6\textwidth]{illustration_rankdef_our}
\par\end{centering}
\caption{\label{fig:illustration_rankdef_our}The behavior of the method on
one matrix described in the text.}
\end{figure}
The algorithm stopped because the condition number got too big; however,
a few iterations later it would have stopped because $\Vert r^{(t)}\Vert$
became too small (stopping condition~\eqref{eq:stop1}). Our estimate
is not very accurate (around $1.8\times10^{-15}$). This still indicates
to the user that the matrix is numerically rank deficient, although
the algorithm cannot tell the user exactly how close to $\epsilon_{\text{machine}}^{-1}$
the condition number is (and certainly not whether it is higher).
Stopping criteria \eqref{eq:stop1} and \eqref{eq:stop3}, and the
relatively-inaccurate estimates they yield, are used only when the
matrix is close to rank deficiency (condition number of about $10^{14}$
or larger).
Can we estimate $\sigma_{\min}$ by minimizing $\|Ax-b\|$ with a
random $b$, which with high probability is inconsistent if $A$ has
fewer columns than rows or is rank deficient? On some matrices, applying
the pseudo-inverse of $A$ to such a $b$ produces a minimizer whose
norm is larger than the norm of $b$ by about a factor of $\sigma_{\min}^{-1}$.
However, unless there is a large gap between the smallest singular
value and the rest, the norm of the minimizer falls well short of
$\|b\|/\sigma_{\min}$, and the ratio of norms fails to accurately estimate $\sigma_{\min}^{-1}$.
\begin{figure}
\begin{centering}
\includegraphics[width=0.6\textwidth]{random_b_norm_of_x}
\par\end{centering}
\caption{\label{fig:random_x_norm_of_b}Estimating $\sigma_{\min}$ by $\|A^{+}b\|^{-1}$
for a unit-length random $b$ with normal independent components.}
\end{figure}
Figure~\ref{fig:random_x_norm_of_b} shows the relative errors in
this estimate for matrices whose singular values are distributed linearly
between $\sigma_{\min}$ and $1$. The errors are huge, sometimes
by more than 3 orders of magnitude. This is not a particularly useful
method. This method amounts to one half of inverse iteration on $A^{*}A$,
so it is not surprising that it is not accurate; performing more iterations
would make the method very reliable, but at the cost of applying the
pseudo-inverse many times.
This estimator is clearly biased (the estimate is always larger than
$\sigma_{\min}$); so is any fixed number of steps of inverse iteration.
Kenney et al.~\cite{Kenney98} derive an unbiased estimator of this
type for the Frobenius-norm condition number. To the best of our knowledge,
this is not possible in the Euclidean norm.
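The one-shot estimator discussed above takes only a few lines (a sketch; names are ours). The one-sided bias is easy to see: $\|A^{+}b\|\leq\|b\|/\sigma_{\min}$, so the estimate never falls below $\sigma_{\min}$:

```python
import numpy as np

def sigma_min_est_pinv(A, seed=0):
    """Estimate sigma_min by ||b|| / ||A^+ b|| for a random unit-length b."""
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(A.shape[0])
    b /= np.linalg.norm(b)
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # x = A^+ b
    return 1.0 / np.linalg.norm(x)
```

On a matrix whose singular values are distributed linearly between $10^{-4}$ and $1$, the estimate stays above $\sigma_{\min}$ but typically overestimates it by a large factor, as in the figure.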
Other distributions of the singular values lead to more accurate estimates
with this method. But will this method work when the least-squares solver
is an iterative method like LSQR? The following experiment suggests
that the answer is no. The matrix used in the experiment has 50 singular
values distributed logarithmically between $10^{-10}$ and $10^{-9}$,
50 more distributed logarithmically between $10^{-1}$ and $1$, and
the rest are all $1$.
\begin{figure}
\begin{centering}
\includegraphics[width=0.6\textwidth]{no_Atr_convergence}
\par\end{centering}
\caption{\label{fig:no_Atr_convergence}Running LSQR with an inconsistent right-hand-side
$b$.}
\end{figure}
The results, presented in Figure~\ref{fig:no_Atr_convergence}, indicate
that there is no good way to decide when to terminate LSQR when used
in this way to estimate $\sigma_{\min}$. We obviously cannot rely
on the residual approaching $\epsilon_{\text{machine}}$, because
the problem is inconsistent. The original LSQR paper~\cite{LSQR}
suggests another stopping condition,
\[
\frac{\left\Vert A^{*}r^{(t)}\right\Vert }{\hat{\sigma}_{\max}\left\Vert r^{(t)}\right\Vert }\leq c\;,
\]
but our experiment shows that this ratio may fail to get close to
$\epsilon_{\text{machine}}$. In our experiment the best local minimum
is around $10^{-10}$, six orders of magnitude larger than $\epsilon_{\text{machine}}$!
Moreover, at that local minimum, around iteration 60, the estimate
$\|b+r^{(t)}\|/\|x^{(t)}\|$ is still near $1$, very far from $\sigma_{\min}$.
The Lanczos estimate $\sigma_{\min}(R^{(t)})$ is also very inaccurate
at that time. There does not appear to be a good way to decide when
to stop the iterations and to report the best estimate seen so far.
On matrices with this singular value distribution, our method detects
convergence after 2400--2500 iterations, returning a certified estimate
of $\sigma_{\min}$ that is accurate to within 15--40\% (the accuracy
of the Lanczos estimate is better, with relative errors smaller than
10\%). Figure~\ref{fig:no_Atr_convergence_our} shows a typical run.
The number of iterations is large, but the stopping criteria are robust.
\begin{figure}
\begin{centering}
\includegraphics[width=0.6\columnwidth]{no_Atr_convergence_our}
\par\end{centering}
\caption{\label{fig:no_Atr_convergence_our}Running our method on the same
matrix as in Figure~\ref{fig:no_Atr_convergence}.}
\end{figure}
\section{\label{sec:analysis-small-err}Analysis of the Small-Error Stopping
Criterion}
Now that we understand how the method works, we can explain how to
derive the small-error stopping criterion \eqref{eq:stop2}. Suppose
that the smallest singular value of $A$ is simple and that it is
well separated from larger singular values. Let $x^{\star}=\sum_{i=1}^{n}\alpha_{i}v_{i}$
be the planted solution vector represented in the basis of the right singular
vectors of $A$. As LSQR progresses towards finding $x^{\star}$ it
will tend initially to resolve components in the directions of the
singular vectors corresponding to the largest singular values. Since
the $v_{n}$ direction is not present
in $x^{(0)}=0$, we expect it to remain absent during the initial
iterations, i.e.\ $v_{n}^{T}x^{(t)}\approx0$. This implies that during
the initial iterations $t$ we expect $\vert v_{n}^{T}(x^{\star}-x^{(t)})\vert\approx\vert\alpha_{n}\vert$,
so $\|x^{\star}-x^{(t)}\|\geq\vert\alpha_{n}\vert$. Now, at some point in the
iteration, the solution $x^{(t)}$ will be roughly $x^{(t)}\approx\sum_{i=1}^{n-1}\alpha_{i}v_{i}$,
i.e. the error remains mostly in the direction of the small singular
subspace, but the $v_{n}$ direction is not present at all. At that
point, $\|x^{\star}-x^{(t)}\|\approx\vert\alpha_{n}\vert$. LSQR will
now start to resolve that error at least partially and the norm of
the error will decrease below $\vert\alpha_{n}\vert$. If we stop
the iteration when $\|x^{\star}-x^{(t)}\|\gg\vert\alpha_{n}\vert$,
the error is unlikely to be a good estimate of a small singular vector.
If we stop when $\|x^{\star}-x^{(t)}\|\leq\vert\alpha_{n}\vert$ we
will likely have a good estimate of a small singular value. Ideally,
we want to stop immediately when $\|x^{\star}-x^{(t)}\|$ drops below
$\vert\alpha_{n}\vert$. Stopping later (when the error is much smaller
than $\vert\alpha_{n}\vert$) does not do any harm, since we report
the best Rayleigh quotient seen, but it does not improve the estimate
by much.
We do not know $\alpha_{n}=v_{n}^{T}x^{\star}$ (which is also a random
variable), but we can do a test for which passing it implies that
$\|x^{\star}-x^{(t)}\|\leq\vert\alpha_{n}\vert$ with high probability.
Recall that $x^{\star}=\hat{x}/\|\hat{x}\|$ where $\hat{x}$ is a
vector with normally-distributed independent random components. Let
$\hat{x}^{(t)}$ be the LSQR estimates that we would have found if
we run LSQR on $\hat{b}=A\hat{x}$, and let $\hat{\alpha}_{n}=v_{n}^{T}\hat{x}$.
Clearly, $\hat{x}^{(t)}=\Vert\hat{x}\Vert x^{(t)}$ and $\hat{\alpha}_{n}=\Vert\hat{x}\Vert\alpha_{n}$,
so $\|x^{\star}-x^{(t)}\|\leq\vert\alpha_{n}\vert$ if and only if
$\|\hat{x}-\hat{x}^{(t)}\|\leq\vert\hat{\alpha}_{n}\vert$. Therefore,
if we find a value $\tau$ such that $\Pr(\vert\hat{\alpha}_{n}\vert\geq\tau)\geq1-c_{2}$,
then whenever $\|\hat{x}-\hat{x}^{(t)}\|\leq\tau$ we also have $\|\hat{x}-\hat{x}^{(t)}\|\leq\vert\hat{\alpha}_{n}\vert$
(and $\|x^{\star}-x^{(t)}\|\leq\vert\alpha_{n}\vert$), with probability
at least $1-c_{2}$. The condition $\|\hat{x}-\hat{x}^{(t)}\|\leq\tau$
is equivalent to $\|x^{\star}-x^{(t)}\|\leq\tau/\Vert\hat{x}\Vert$.
Since $\Vert v_{n}\Vert=1$ we have $\hat{\alpha}_{n}\sim N(0,1)$.
Therefore, for any $0<c_{2}<1$
\[
\Pr(\vert\hat{\alpha}_{n}\vert\geq\erf^{-1}(c_{2}))\geq1-c_{2}\,.
\]
This immediately leads to the stopping criterion
\[
\|x^{\star}-x^{(t)}\|\leq\frac{\erf^{-1}(c_{2})}{\Vert\hat{x}\Vert}\,.
\]
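This criterion can be sanity-checked numerically; the bisection-based inverse below mirrors the numerical approximation of $\erf^{-1}$ mentioned earlier (names are ours). Sampling standard normals confirms that $\vert\hat{\alpha}_{n}\vert\geq\erf^{-1}(c_{2})$ with frequency at least $1-c_{2}$:

```python
import math
import random

def erfinv(y, iters=80):
    # bisection inverse of math.erf on [0, 6); accurate to double precision
    lo, hi = 0.0, 6.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Monte Carlo check: |alpha_n| >= erfinv(c2) with frequency at least 1 - c2
random.seed(0)
c2 = 0.05
tau = erfinv(c2)
hits = sum(abs(random.gauss(0.0, 1.0)) >= tau for _ in range(200_000))
freq = hits / 200_000
```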
The choice of $c_{2}$ in the algorithm determines our confidence
that $\|x^{\star}-x^{(t)}\|$ dropped below $\vert\alpha_{n}\vert$.
We use a default value of $10^{-3}$, which implies a small probability
of failure, but not a tiny one. The user can, of course, change the
value if higher confidence is needed (in either case the error is one
sided). Another approach is to use a much larger $c_{2}$ and to repeat
the algorithm $\ell$ times (possibly in a single run, exploiting
matrix-matrix multiplies), say $\ell=3$. The probability that we
succeed in at least one run is at least $1-c_{2}^{\ell}$. For $c_{2}=10^{-2}$,
say, setting $\ell=3$ or so should suffice. In our experience it
is better to make $c_{2}$ smaller than to set $\ell>1$, but we did
not do a formal analysis of this issue.
If the small singular value is multiple (associated with a singular
subspace of dimension $k>1$), our situation is even better, because
we can stop when $x^{\star}-x\approx\sum_{i=n-k+1}^{n}\alpha_{i}v_{i}$,
at which point $\|x^{\star}-x\|\approx\sqrt{\alpha_{n-k+1}^{2}+\cdots+\alpha_{n}^{2}}$,
which is even more likely to exceed our stopping threshold.
When the small singular value is not well separated, the stopping
criterion is still sound but the Rayleigh quotient estimate we obtain
is not as accurate, because in such cases $x^{\star}-x$ tends to
be a linear combination of singular vectors corresponding to several
singular values. These singular values are all small, but they are
not exactly the same, thereby pulling the Rayleigh quotient up a bit.
\section{\label{sec:Additional-Experiments}Additional Experiments}
\subsection{Additional Illustrative Examples}
In Figure~\ref{fig:linear_1e-8} all the singular values are distributed
linearly from $10^{-8}$ up to $1$. Convergence is fairly slow. The
gap between $\sigma_{\min}=\sigma_{400}=10^{-8}$ and $\sigma_{399}$
is relatively large, around $\frac{1}{400}$, so $\sigma_{\min}$
is computed accurately.
\begin{figure}
\begin{centering}
\includegraphics[width=0.6\textwidth]{illustration_linear_1e-8}
\par\end{centering}
\caption{\label{fig:linear_1e-8}Singular values are distributed linearly from
$10^{-8}$ up to $1$.}
\end{figure}
When many singular values are distributed logarithmically or nearly
so, convergence is very slow and the small relative gap between $\sigma_{\min}$
and the next-larger singular values causes the method to return a
less accurate estimate.
\begin{figure}
\begin{centering}
\includegraphics[width=0.6\textwidth]{illustration_log_1e-3}
\par\end{centering}
\caption{\label{fig:log_1e-3}200 singular values are distributed logarithmically
from $10^{-3}$ up to $1$, and the rest are at $1$.}
\end{figure}
Figure~\ref{fig:log_1e-3} plots the convergence when 200 singular
values are distributed logarithmically between $10^{-3}$ and $1$
and the rest are at $1$. We do not see a period of stagnation during
which the error is a good estimate of $v_{\min}$. The certified estimate
is only accurate to within 31\% and the Lanczos estimate to within
10\% (much worse than when the small singular value is well separated
from the rest).
LSQR might have several periods of stagnation. This happens when the
spectrum contains several well-separated clusters. Figure~\ref{fig:multiple_stagnations}
plots the convergence when the matrix has a multiple singular value
at $10^{-10}$, a multiple singular value at $10^{-7}$ (both with
multiplicity 10), 300 singular values that are distributed logarithmically
between $10^{-3}$ and $10^{-2}$, and the rest are at $1$. We see
multiple stagnation periods in the residual, the error, the Lanczos
estimate, and our certified estimate.
\begin{figure}
\begin{centering}
\includegraphics[width=0.6\textwidth]{illustration_multiple_stagnations}
\par\end{centering}
\caption{\label{fig:multiple_stagnations}The matrix has multiple singular
values at $10^{-10}$ and at $10^{-7}$ (each with multiplicity 10), 300
singular values distributed logarithmically between $10^{-3}$ and
$10^{-2}$, and the rest at $1$.}
\end{figure}
\subsection{Experiments on Large Structured Random Matrices}
The next set of experiments was performed on the sparse matrices that
motivated this project. These matrices have exactly 3 nonzeros per
column; the locations of the nonzeros are uniformly random, and
their values are $+1$ or $-1$ with equal and independent probabilities.
Matrices of this type arise in simulating the evolution of
random 2-dimensional complexes in various stochastic models~\cite{ALLM12}.
Such $m$-by-$n$ matrices tend to be well conditioned when $n<0.9m$
and rank deficient when $n>0.95m$.
\begin{figure}
\noindent \begin{centering}
\begin{tabular}{ccc}
\includegraphics[width=0.35\textwidth]{irad_100000_90000} & ~ & \includegraphics[width=0.35\textwidth]{irad_100000_95000}\tabularnewline
\end{tabular}
\par\end{centering}
\begin{centering}
\par\end{centering}
\caption{\label{fig:irad_m=00003D100000}Random matrices with 3 nonzeros per
column. On the left we see the convergence on a $100,000$-by-$90,000$
matrix and on the right the convergence on a $100,000$-by-$95,000$ matrix.}
\end{figure}
Figure~\ref{fig:irad_m=00003D100000} shows that the method converges
quite quickly even on large matrices in both the well conditioned
and the rank deficient cases. On smaller matrices of this type we
were able to assess the accuracy of the method. For $m=1000$, $n=900$
yielded a relative error of 22\% (the Lanczos estimate was off by
78\%), and $n=450$ yielded a relative error of 41\% (the Lanczos
estimate was off by 18\%). Problems of this type with $m=1,000,000$
required a similar number of iterations and were easily solved on a
laptop. It is worth noting that, due to the random structure of the
nonzero pattern, factorization-based condition
number estimators are likely to be very slow when applied to matrices of this type.
\subsection{\label{sub:Large-Scale-Experiments}Experiments on Many Real-World
Matrices}
We ran both a dense SVD and our method on all the matrices from Tim
Davis's sparse matrix collection~\cite{UFDavis11} for which $mn^{2}<256\times10^{9}$.
Our method converged in 100,000 iterations or less on 1024 out of
the 1468 matrices in this category.
Out of the 1468 matrices, 404 had condition number $\left(64\epsilon_{\text{machine}}\right)^{-1}\approx7\times10^{13}$
or larger. Our method converged on 278 out of them, delivering condition
number estimates of $5\times10^{11}$ or larger. In other words, on
all the matrices that were close to rank deficiency, our method detected
that the condition number is large, but in some cases it underestimated
the actual condition number.
On matrices with condition number smaller than $\left(64\epsilon_{\text{machine}}\right)^{-1}$,
our method always estimated the condition number to within a relative
error of 24\% or less.
We ran the method again on some of the matrices on which it failed
to converge in 100,000 iterations, allowing the method to run longer.
It converged in all cases. For example, on \texttt{nos1}, the method
detected convergence after 169,791 iterations. The $\sigma_{\min}$
estimate it returned was actually from iteration 90,173, meaning that
by iteration 100,000 it had in fact converged, but the algorithm was
not yet able to detect convergence. We note that \texttt{nos1} is
a square matrix of dimension 237; the method can be slow even on small
matrices.
The running time of our method obviously varies a lot and is not easy
to characterize. But on large matrices it is often much faster than
a dense SVD. On one matrix in our test set, \texttt{bips98\_606} (a
square matrix of dimension 7135), our method was more than 550 times
faster than a dense SVD, even though the dense SVD routine used all
4 cores of the Intel Core i7 machine, whereas our method used only
one. The machine had 16GB of RAM and the SVD computation did not perform
any paging activity.
\subsection{\label{sub:Experiments-on-Large}Experiments on Large Real-World
Matrices}
We ran the algorithm on a few very large matrices from the same matrix
collection. As on the smaller matrices, the method sometimes converged
but sometimes it exceeded the maximal number of iterations (up to
1,000,000). Table~\ref{tab:very-large} shows the statistics of successful
runs; they indicate that when the singular spectrum is clustered,
the method works well even on very large matrices.
\begin{table}
\begin{centering}
\begin{tabular}{lrrrrr}
& $m$ & $n$ & time (s) & iterations & $\kappa$ (est.)\tabularnewline
\cline{2-6}
rajat10 & 30202 & 30202 & 43 & 5219 & 1.2e+03\tabularnewline
flower\_7\_4 & 67593 & 27693 & 4 & 231 & 1.6e+01\tabularnewline
flower\_8\_4 & 125361 & 55081 & 15 & 537 & 2.2e+13\tabularnewline
wheel\_601 & 902103 & 723605 & 1278 & 5260 & 1.3e+14\tabularnewline
Franz11 & 47104 & 30144 & 1.4 & 59 & 3.3e+15\tabularnewline
lp\_ken\_18 & 154699 & 105127 & 59 & 1836 & 2.5e+14\tabularnewline
lp\_pds\_20 & 108175 & 33874 & 13 & 697 & 1.2e+14\tabularnewline
\end{tabular}
\par\end{centering}
\caption{\label{tab:very-large}Large real-world matrices whose condition number
was successfully computed by our method.}
\end{table}
\section{Summary}
We have presented an adaptation of LSQR to the estimation of the condition
number of matrices.
Our method is yet another tool in the condition-number estimation
toolbox. It relies almost solely on matrix-vector multiplications,
so it can be applied to very large sparse matrices. It does not require
much memory, and it is at least as fast as a single application of
un-preconditioned LSQR to solve a least-squares problem. The method
is reliable in the sense that it never returns an overestimate of
the condition number.
In many cases, the method is orders-of-magnitude faster than competing
methods, especially if $A$ is large and has no sparse triangular
factorization.
However, the performance of the method depends on the distribution
of the singular values of $A$, and some distributions lead to very
slow convergence. In such cases, the method still provides a lower
bound on the condition number, but it may be loose, and
methods that are based on orthogonal or triangular factorizations
or on preconditioned iterative solvers may be faster.
Our method is primarily based on one key insight: that the forward
error in LSQR tends to converge to an approximate singular vector
associated with $\sigma_{\min}$. This property of LSQR and related
Krylov-subspace solvers is normally seen as a deficiency (because
it slows down the convergence to the minimizer), but it turns out
to be beneficial for condition-number estimation.
\paragraph*{Acknowledgments}
This research was supported in part by grant 1045/09 from the Israel
Science Foundation (founded by the Israel Academy of Sciences and
Humanities) and by grant 2010231 from the US-Israel Binational Science
Foundation.
\bibliographystyle{plain}
Many classical theorems in extremal graph theory concern the maximum number of copies of a fixed graph $H$ in an $n$-vertex graph\footnote{All graphs in this paper are undirected, finite, and simple, unless stated otherwise. Let $\mathbb{N}:=\{1,2,\dots\}$ and $\mathbb{N}_0:=\mathbb{N}\cup\{0\}$. For $a,b\in\mathbb{N}_0$, let $[a,b]:=\{a,a+1,\dots,b\}$ and $[b]:=[1,b]$. } in some class $\mathcal{G}$. Here, a \emph{copy} means a subgraph isomorphic to $H$. For example, Tur\'an's Theorem determines the maximum number of copies of $K_2$ (that is, edges) in an $n$-vertex $K_t$-free graph~\citep{Turan41}. More generally, Zykov's Theorem determines the maximum number of copies of a given complete graph $K_s$ in an $n$-vertex $K_t$-free graph~\citep{Zykov49}. The excluded graph need not be complete. The Erd\H{o}s--Stone Theorem~\citep{ES46} determines, for every non-bipartite graph $X$, the asymptotic maximum number of copies of $K_2$ in an $n$-vertex graph with no $X$-subgraph. Analogues of the Erd\H{o}s--Stone Theorem for the number of (induced) copies of a given graph within a graph class defined by an excluded (induced) subgraph have recently been widely studied \citep{AS19,AS16,AKS18,GGMV20,GMV19,GP19,GSTZ19,EMSG19,NesOss11,MQ20,Letzter19}.
For graphs $H$ and $G$, let $C(H,G)$ be the number of copies of $H$ in $G$. For a graph class $\mathcal{G}$, let
$$C(H,\mathcal{G},n) := \max_{G\in\mathcal{G},\,|V(G)|=n} C(H,G).$$
This paper determines the asymptotic behaviour of $C(T,\mathcal{G},n)$ as $n\to \infty$ for various sparse graph classes $\mathcal{G}$ and for an arbitrary fixed forest $T$. In particular, we show that $C(T,\mathcal{G},n) \in \Theta(n^k)$ for some $k$ depending on $T$ and $\mathcal{G}$.
It turns out that $k$ depends on the size of particular stable sets in $T$. A set $S$ of vertices in a graph $G$ is \emph{stable} if no two vertices in $S$ are adjacent. Let $\alpha(G)$ be the size of a largest stable set in $G$. For a graph $G$ and $s\in\mathbb{N}_0$, let $$\alpha_s(G):= \alpha( G[\{ v\in V(G): \deg_G(v)\leq s\}]).$$
Note that for a forest $T$ (indeed any bipartite graph), $\alpha_s(T)$ can be computed in polynomial time. See \citep{BW05,BSW-DM04,BDW-CDM06} for bounds on the size of bounded degree stable sets in forests, planar graphs, and other classes.
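To make this computation concrete, here is a minimal Python sketch (not part of the paper) that computes $\alpha_s(T)$ for a forest given as an adjacency dict, by running standard maximum-stable-set dynamic programming on each component of the induced subforest $T[\{v:\deg_T(v)\leq s\}]$; all function and variable names are ours.

```python
# Sketch: alpha_s(T) = alpha(T[{v : deg_T(v) <= s}]) for a forest T.
# The induced subgraph F is again a forest, so a maximum stable set
# can be found by tree dynamic programming on each component.

def alpha_s(adj, s):
    """adj: {vertex: set of neighbours} for a forest T; returns alpha_s(T)."""
    low = {v for v in adj if len(adj[v]) <= s}       # vertices of degree <= s
    fadj = {v: adj[v] & low for v in low}            # induced subforest F = T[low]

    seen, total = set(), 0
    for root in fadj:
        if root in seen:
            continue
        # iterative post-order DP on the component containing `root`:
        # inc[v]/exc[v] = max stable set in v's subtree with v in/out.
        inc, exc = {}, {}
        stack = [(root, None, False)]
        while stack:
            v, parent, done = stack.pop()
            if not done:
                seen.add(v)
                stack.append((v, parent, True))
                for w in fadj[v]:
                    if w != parent:
                        stack.append((w, v, False))
            else:
                inc[v] = 1 + sum(exc[w] for w in fadj[v] if w != parent)
                exc[v] = sum(max(inc[w], exc[w]) for w in fadj[v] if w != parent)
        total += max(inc[root], exc[root])
    return total
```

For example, on the path $P_4$ this gives $\alpha_1(P_4)=2$ (the two endpoints) and $\alpha_2(P_4)=2$, and on the star $K_{1,3}$ it gives $\alpha_1(K_{1,3})=3$ (the leaves).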
The first sparse class we consider is the class of graphs of given degeneracy\footnote{A graph $G$ is \emph{$k$-degenerate} if every subgraph of $G$ has minimum degree at most $k$.}.
\begin{thm}
\label{Degeneracy}
Fix $k\in\mathbb{N}$ and let $\mathcal{D}_k$ be the class of $k$-degenerate graphs. Then for every fixed forest $T$,
$$C(T,\mathcal{D}_k,n) \in \Theta(n^{\alpha_k(T)}).$$
\end{thm}
Our second main theorem determines $C(T,\mathcal{G},n)$ for many minor-closed classes\footnote{A graph $H$ is a \emph{minor} of a graph $G$ if a graph isomorphic to $H$ can be obtained from a subgraph of $G$ by contracting edges. A graph class $\mathcal{G}$ is \emph{minor-closed} if some graph is not in $\mathcal{G}$, and for every graph $G\in \mathcal{G}$, every minor of $G$ is also in $\mathcal{G}$.}\textsuperscript{,}\footnote{A \emph{tree decomposition} of a graph $G$ is given by a tree $T$ whose nodes index a collection $(B_x\subseteq V(G):x\in V(T))$ of sets of vertices in $G$ called \emph{bags}, such that: (T1) for every edge $vw$ of $G$, some bag $B_x$ contains both $v$ and $w$, and (T2) for every vertex $v$ of $G$, the set $\{x\in V(T):v\in B_x\}$ induces a non-empty (connected) subtree of $T$. The \emph{width} of such a tree decomposition is $\max\{|B_x|-1:x\in V(T)\}$. The \emph{treewidth} of a graph $G$, denoted by $\tw(G)$, is the minimum width of a tree decomposition of $G$.
See~\citep{Reed97,HW17} for surveys on treewidth. For each $s\in \mathbb{N}$ the class of graphs with treewidth at most $s$ is minor-closed.}. Several examples of this result are given in \cref{MinorExamples}.
\begin{thm}
\label{MinorClosedClass}
Fix $s,t\in\mathbb{N}$ and let $\mathcal{G}$ be a minor-closed class such that every graph with treewidth at most $s$ is in $\mathcal{G}$ and $K_{s+1,t}\not\in\mathcal{G}$. Then for every fixed forest $T$,
$$C(T,\mathcal{G},n) \in \Theta(n^{\alpha_s(T)}).$$
\end{thm}
The lower bounds in \cref{Degeneracy,MinorClosedClass} are proved via the same construction given in \cref{LowerBounds}. The upper bounds in \cref{Degeneracy,MinorClosedClass} are proved in \cref{UpperBounds}. We in fact prove a stronger result (\cref{UpperBound}) that shows that for any fixed forest $T$ and $s\in\mathbb{N}$ there is a particular finite set $\mathcal{F}$ such that $C(T,G) \in O(n^{\alpha_s(T)})$ for every $n$-vertex graph $G$ with $O(n)$ edges and containing no subgraph in $\mathcal{F}$. This result is applied in \cref{Beyond} to determine $C(T,\mathcal{G},n)$ for various non-minor-closed classes $\mathcal{G}$. For example,
we show a $\Theta(n^{\alpha_2(T)})$ bound for graphs that can be drawn in a fixed surface with a bounded average number of crossings per edge, which matches the known bound with no crossings.
\subsection{Related Results}
Before continuing we mention related results from the literature.
For a fixed complete graph $K_s$, $C(K_s,\mathcal{G},n)$ has been extensively studied for various graph classes $\mathcal{G}$ including: graphs of given maximum degree \citep{Chase20,CR14,EG14,Kahn01,ACM12,Galvin11,GLS15,Wood-GC07,CR17}; graphs with a given number of edges, or more generally, a given number of smaller complete graphs~\citep{CR11,Frohmader10,Eckhoff-DM04,Eckhoff-DM99,FR90,KR19,KN75,FR92,PR00,Hedman-DM85}; graphs without long cycles~\citep{Luo18}; planar graphs~\citep{HS79,Wood-GC07,PY-IPL81}; graphs with given Euler genus~\citep{DFJSW,HJW20}; and graphs excluding a fixed minor or subdivision~\citep{ReedWood-TALG,NSTW-JCTB06,FOT,LO15,FW17,FW20}.
When $\mathcal{J}$ is the class of planar graphs, $C(H,\mathcal{J},n)$ has been determined for various graphs $H$ including: complete bipartite graphs \citep{AC84}, planar triangulations without non-facial triangles~\citep{AC84}, triangles \citep{HS79,HS82,HHS01,Wood-GC07}, 4-cycles \citep{HS79,Alameddine80}, 5-cycles \citep{GPSTZb}, 4-vertex paths \citep{GPSTZa}, and 4-vertex complete graphs \citep{AC84,Wood-GC07}. $C(H,\mathcal{J},n)$ has also been studied for more general planar graphs $H$. Perles (see \citep{AC84}) conjectured that if $H$ is a fixed 3-connected planar graph, then $C(H,\mathcal{J},n) \in \Theta(n)$. Perles noted the converse: if $H$ is planar, not 3-connected, and $|V(H)|\geq 4$, then $C(H,\mathcal{J},n) \in \Omega(n^2)$. Perles' conjecture was proved by \citet{Wormald86} and independently by \citet{Eppstein93}.
Recently, \citet{HJW20} extended these results to all surfaces and all graphs $H$ (see \cref{MinorExamples}).
Finally, we mention a result of \citet{NesOss11}, who proved that for every infinite nowhere dense hereditary graph class $\mathcal{G}$ and for every fixed graph $F$, the maximum, taken over all $n$-vertex graphs $G\in\mathcal{G}$, of the number of induced subgraphs of $G$ isomorphic to $F$ is $\Omega( n^{\beta})$ and $O( n^{\beta+o(1)})$ for some integer $\beta\leq \alpha(F)$. Our results (when $F$ is a forest and $\mathcal{G}$ is one of the classes that we consider) imply this upper bound (since the number of induced copies of $T$ in $G$ is at most $C(T,G)$). Moreover, our bounds are often more precise since $\alpha_s(T)$ can be significantly less than $\alpha(T)$.
\section{Lower Bound}
\label{LowerBounds}
\begin{lem}
\label{LowerBound}
Fix $s\in\mathbb{N}$ and let $T$ be a fixed forest with $\alpha_s(T)=k$. Then there exists a constant $c_{\ref{LowerBound}}(k):= (2k)^{-k}$ such that for all sufficiently large $n\in\mathbb{N}$, there exists a graph $G$ with $|V(G)|\leq n$ and $\tw(G)\leq s$ and $C(T,G)\geq c_{\ref{LowerBound}}(k) n^k$.
\end{lem}
\begin{proof}
Let $S$ be a maximum stable set in $T[\{ v\in V(T): \deg_T(v)\leq s\}]$ with $|S|=k$. Let $m:=\floor{\frac{n-|V(T)|}{k}}$. Let $G$ be the graph obtained from $T$ as follows: for each vertex $v$ in $S$ add to $G$ a set $C_v$ of $m$ vertices, such that $N_G(x):= N_T(v)$ for each vertex $x\in C_v$. Observe that $G$ has at most $n$ vertices. Each choice of one vertex $x\in C_v$ (for each $v\in S$), along with the vertices in $V(T)\setminus S$, induces a copy of $T$. Thus $C(T,G)\geq m^k$, which is at least $c_{\ref{LowerBound}}(k) n^k$ for $n\geq 2|V(T)| + 2k$.
We now show $\tw(G) \leq s$. Let $T_1$ be a connected component of $T$ and $G_1$ be the corresponding connected component of $G$. Since the treewidth of a graph equals the maximum treewidth of its components, it suffices to show $\tw(G_1) \leq s$. We may assume $|V(T_1)| \geq 2$, as otherwise $\tw(G_1)=0$. Let $T_1'$ be the tree obtained from $T_1$ as follows: for each vertex $v\in S \cap V(T_1)$ and each vertex $x\in C_v$, add one new vertex $x$ and one new edge $xv$ to $T_1'$. Choose $r \in V(T_1) \setminus S$ and consider $T_1'$ to be rooted at $r$. We use $T_1'$ to define a tree-decomposition of $G_1$, where the bags are defined as follows. Let $B_r:=\{r\}$. For each vertex $w\in V(T_1)\setminus(S\cup\{r\})$, if $p$ is the parent of $w$ in $T_1'$, let $B_w:=\{w,p\}$. For each vertex $v\in S \cap V(T_1)$ and each vertex $x$ in $C_v$, let $B_v:=N_{T_1}(v) \cup\{v\}$ and $B_x := N_{T_1}(v) \cup\{x\}$.
We now show that $(B_x:x\in V(T_1'))$ is a tree-decomposition of $G_1$.
The bags containing $r$ are indexed by $N_{T_1}(r)\cup\{r\}$, which induces a (connected) subtree of $T_1'$. For each vertex $w\in V(T_1)\setminus(S\cup\{r\})$ with parent $p$, the bags containing $w$ are those indexed by $\cup \{ C_v\cup\{v\} : v \in N_{T_1}(w) \cap S \}\cup\{w\}\cup (N_{T_1}(w)\setminus\{p\})$, which induces a subtree of $T_1'$ (since $vx\in E(T_1')$ for each $x\in C_v$ where $v\in N_{T_1}(w)\cap S$).
For each vertex $v\in S$ with parent $p$, the bags containing $v$ are those indexed by $N_{T_1}(v) \cup\{v\} \setminus \{p\}$, which induces a subtree of $T_1'$.
For each vertex $v\in S$ and $x\in C_v$, $B_x$ is the only bag that contains $x$. Hence property (T2) in the definition of tree-decomposition holds. For each edge $pv$ of $T_1$ where $p$ is the parent of $v$, the bag $B_v$ contains both $p$ and $v$. Every other edge of $G_1$ joins $x$ and $w$ for some $v\in S$ and $x\in C_v$ and $w\in N_{T_1}(v)$, in which case $B_x$ contains both $x$ and $w$. Hence (T1) holds. Therefore $(B_x:x\in V(T_1'))$ is a tree-decomposition of $G_1$. Since each bag has size at most $s+1$, we have $\tw(G_1)\leq s$.
\end{proof}
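For concreteness, the cloning construction in the proof above can be sketched as follows (an illustration with our own naming, assuming graphs are adjacency dicts; not the paper's code). Each choice of one clone per vertex of $S$, together with $V(T)\setminus S$, induces a copy of $T$, giving at least $m^k$ copies.

```python
# Sketch of the lower-bound construction: clone each vertex v of a
# stable set S of low-degree vertices of T, m times, where each clone
# gets exactly v's neighbourhood in T (clones are not adjacent to v
# or to each other).

def clone_construction(tree_adj, S, m):
    """tree_adj: forest T as {v: set(neighbours)}; S: stable set of T;
    m: clones per vertex of S. Returns the cloned graph G."""
    G = {v: set(tree_adj[v]) for v in tree_adj}
    for v in S:
        for i in range(m):
            x = (v, i)                    # a fresh clone of v
            G[x] = set(tree_adj[v])       # N_G(x) := N_T(v)
            for u in tree_adj[v]:
                G[u].add(x)
    return G
```

For the path $1$--$2$--$3$ with $S=\{1,3\}$ and $m=2$, the resulting graph has $3+4=7$ vertices and every clone of $1$ or $3$ is adjacent to the middle vertex $2$ only.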
\section{Upper Bound}
\label{UpperBounds}
To prove upper bounds on $C(T,\mathcal{G},n)$, it is convenient to work in the following setting. For graphs $G$ and $H$, an \emph{image} of $H$ in $G$ is an injection $\phi: V(H) \to V(G)$ such that $\phi(u)\phi(v) \in E(G)$ for all $uv \in E(H)$. Let $I(H,G)$ be the number of images of $H$ in $G$. For a graph class $\mathcal{G}$, let $I(H,\mathcal{G},n)$ be the maximum of $I(H,G)$ taken over all $n$-vertex graphs $G\in\mathcal{G}$.
If $H$ is fixed then $C(H,G)$ and $I(H,G)$ differ by a constant factor. In particular, if $|V(H)|=h$ then
\begin{align}
C(H,G)& \leq I(H,G) \leq h!\, C(H,G), \nonumber \\
\label{CIC}
C(H,\mathcal{G},n)& \leq I(H,\mathcal{G},n) \leq h!\, C(H,\mathcal{G},n).
\end{align}
So to bound $C(T,\mathcal{G},n)$ it suffices to work with images rather than copies.
Our proof needs two tools from the literature. The first is due to \citet{Eppstein93}. A collection $\mathcal{H}$ of images of a graph $H$ in a graph $G$ is \emph{coherent} if for all distinct images $\phi_1, \phi_2 \in \mathcal{H}$ and for all distinct vertices $x,y\in V(H)$, we have $\phi_1(x) \neq \phi_2(y)$.
\begin{lem}[\citep{Eppstein93}]
\label{coherence}
Let $H$ be a graph with $h$ vertices and let $G$ be a graph. Every collection of at least $c_{\ref{coherence}}(h,t):=h!^2 t^h$ images of $H$ in $G$ contains a coherent subcollection of size at least $t$.
\end{lem}
We also use the following result of \citet{ER60}; see~\citep{ALWZ,BCW21} for recent quantitative improvements. A \emph{$t$-sunflower} is a collection $\mathcal{S}$ of $t$ sets for which there exists a set $R$ such that $X\cap Y=R$ for all distinct $X,Y\in\mathcal{S}$. The set $R$ is called the \emph{kernel} of $\mathcal{S}$.
\begin{lem}[Sunflower Lemma~\citep{ER60}] \label{sunflower}
Every collection of at least $c_{\ref{sunflower}}(h,t):=h!(t-1)^h +1$ many $h$-subsets of a set contains a $t$-sunflower.
\end{lem}
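For intuition only, a $t$-sunflower in a small family can be found by brute force as follows (exponential time, purely illustrative; the function name is ours, and the Sunflower Lemma guarantees existence only for families as large as $c_{\ref{sunflower}}(h,t)$).

```python
from itertools import combinations

def find_sunflower(sets, t):
    """Search a list of sets for t of them whose pairwise intersections
    all equal one common kernel R. Returns (sunflower, kernel) or None."""
    for combo in combinations(sets, t):
        R = set.intersection(*map(set, combo))   # candidate kernel
        if all(set(a) & set(b) == R for a, b in combinations(combo, 2)):
            return combo, R
    return None
```

For example, $\{1,2\},\{1,3\},\{1,4\}$ form a $3$-sunflower with kernel $\{1\}$, whereas $\{1,2\},\{2,3\},\{1,3\}$ contain no $3$-sunflower since their pairwise intersections differ.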
Consider graphs $H$ and $G$. An \emph{$H$-model} in a graph $G$ is a collection $(X_v:v\in V(H))$ of pairwise disjoint connected subgraphs of $G$ indexed by the vertices of $H$, such that for each edge $vw\in E(H)$ there is an edge of $G$ joining $X_v$ and $X_w$. Each subgraph $X_v$ is called a \emph{branch set}. A graph $G$ contains an $H$-model if and only if $H$ is a minor of $G$. An $H$-model $(X_v:v\in V(H))$ in $G$ is \emph{$c$-shallow} if $X_v$ has radius at most $c$ for each $v\in V(H)$. An $H$-model $(X_v:v\in V(H))$ in $G$ is \emph{$c$-small} if $|V(X_v)|\leq c$ for each $v\in V(H)$. Shallow models are key components in the sparsity theory of \citet{Sparsity}. Small models have also been studied \citep{CHJR19,FJTW12,Montgomery15,SS15}.
The next lemma is the heart of the paper. To describe the result we need the following construction, illustrated in \cref{Construction}. For a graph $H$, and $s,t\in\mathbb{N}$, and $v\in V(H)$ let
$$\compdeg{H}{s}{v} := \max\{s+1-\deg_H(v),0\}.$$
Then define $\blah{H}{s,t}$ to be the graph with vertex set
\begin{align*}
V( \blah{H}{s,t}) := \,
& \{(v,i):v\in V(H),i\in[t]\} \; \cup\\
& \{(v,j)^\star:v\in V(H),j\in[\compdeg{H}{s}{v}]\}
\end{align*}
and edge set
\begin{align*}
E( \blah{H}{s,t}) := \,
& \{(v,i)(w,i):vw\in E(H),i\in[t]\}\; \cup\\
& \{(v,i)(v,j)^\star:v\in V(H),i\in[t],j\in[\compdeg{H}{s}{v}]\}.
\end{align*}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{Hst}
\vspace*{-3ex}
\caption{$\blah{H}{3,4}$ where $V(H)=\{a,b,c,d,e\}$. \label{Construction}}
\end{figure}
Several notes about $\blah{H}{s,t}$ are in order:
\begin{enumerate}[(A)]
\item For each $i\in[t]$, let $X_i$ be the subgraph of $\blah{H}{s,t}$ induced by $\{(v,i):v\in V(H)\}$. Then $X_i\cong H$.
Contracting each $X_i$ to a single vertex produces $K_{s',t}$ where
$$s':=\!\! \sum_{v\in V(H)} \!\! \compdeg{H}{s}{v}
\geq \!\! \sum_{v\in V(H)} \!\! ( s+1-\deg_H(v) )
= (s+1)\,|V(H)| -2|E(H)|.$$
If $H$ is a non-empty tree then $s'\geq |V(H)|(s-1)+2\geq s+1$, implying $K_{s+1,t}$ is a minor of $\blah{H}{s,t}$.
\item Each vertex $(v,j)^\star$ has degree $t$ and each vertex $(v,i)$ has degree $\deg_H(v)+\compdeg{H}{s}{v}\geq s+1$. In particular, if $t\geq s+1$ then $\blah{H}{s,t}$ has minimum degree at least $s+1$.
\item If $H$ is connected then $\text{diameter}(\blah{H}{s,t})\leq\text{diameter}(H)+2$.
\end{enumerate}
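The construction of $\blah{H}{s,t}$ and note (B) can be checked on small examples with the following sketch (our own names and adjacency-dict representation; not from the paper). Vertices $(v,i)$ are tagged \texttt{'v'} and vertices $(v,j)^\star$ are tagged \texttt{'star'}.

```python
def build_Hst(H, s, t):
    """H: {v: set(neighbours)}; returns <H>_{s,t} as an adjacency dict."""
    G = {}
    for v in H:
        for i in range(t):
            G[('v', v, i)] = set()
        for j in range(max(s + 1 - len(H[v]), 0)):   # compdeg(v) = max(s+1-deg,0)
            G[('star', v, j)] = set()

    def link(a, b):
        G[a].add(b); G[b].add(a)

    for v in H:
        for i in range(t):
            for w in H[v]:
                link(('v', v, i), ('v', w, i))       # i-th copy of H
            for j in range(max(s + 1 - len(H[v]), 0)):
                link(('v', v, i), ('star', v, j))    # attach star vertices
    return G
```

For $H=K_2$, $s=3$, $t=4$: each side has $\compdeg{H}{3}{v}=3$, so the graph has $2\cdot 4+2\cdot 3=14$ vertices, every star vertex has degree $t=4$, and the minimum degree is $s+1=4$, as note (B) predicts.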
Define the \emph{density} of a graph $G$ to be $\rho(G):=\frac{|E(G)|}{|V(G)|}$. For a graph class $\mathcal{G}$, let $\rho(\mathcal{G}):=\sup\{ \rho(G): G\in \mathcal{G}\}$.
\begin{lem}
\label{UpperBound}
For all $s,t, h \in \mathbb{N}$ and $\rho \in \mathbb{R}_{\geq0}$, there exists a constant $c:=c_{\ref{UpperBound}}(s,t,h,\rho) := c_{\ref{coherence}}( h, c_{\ref{sunflower}}(h,t))\, (\rho+1)^h$
such that for every forest $T$ with $h$ vertices, if $G$ is a graph with $\rho(G)\leq \rho$ and $I(T,G) \geq c\,|V(G)|^{\alpha_s(T)}$, then $G$ contains $\blah{U}{s,t}$ as a subgraph for some (non-empty) subtree $U$ of $T$.
\end{lem}
\begin{proof}
Let $S:=\{ v \in V(T): \deg_T(v)\leq s\}$. Let $X$ be a stable set in $F:=T[S]$ of size $k:=\alpha_s(T)$. Since $F$ is bipartite, by K\H{o}nig's Edge Cover Theorem~\citep{Konig36}, there is a set $Y\subseteq V(F)\cup E(F)$ with $|Y|=|X|$ such that each vertex of $F$ is either in $Y$ or is incident to an edge in $Y$. In fact, $Y\cap V(F)$ is the set of isolated vertices of $F$, although we will not need this property.
Let $G$ be an $n$-vertex graph with $\rho(G)\leq \rho$ and $I(T,G) \geq c\,n^k$. Let $\mathcal{I}$ be the set of images of $T$ in $G$. So $|\mathcal{I}|\geq c\,n^{k}$. Let $\mathcal{X} := \binom{ V(G) \cup E(G)}{k}$. Note that $|\mathcal{X}| \leq \binom{(\rho+1)n}{k} \leq (\rho+1)^k n^k$. For each $\phi\in\mathcal{I}$, let
$$Y_\phi := \{ \phi(x):x\in Y\cap V(F)\} \cup \{ \phi(x)\phi(y) : xy \in Y \cap E(F)\},$$
which is an element of $\mathcal{X}$ since $|Y|=k$.
For each $Z\in\mathcal{X}$, let $\mathcal{I}_Z:= \{\phi\in\mathcal{I}: Y_\phi=Z\}$. By the pigeonhole principle, there exists $Z\in\mathcal{X}$ such that
$$|\mathcal{I}_Z|\geq
|\mathcal{I}| / |\mathcal{X}| \geq
c /(\rho+1)^k \geq
c /(\rho+1)^h =
c_{\ref{coherence}}( h, c_{\ref{sunflower}}(h,t)).$$
By \cref{coherence} applied to $\mathcal{I}_Z$, there is a coherent family $\mathcal{I}_1\subseteq \mathcal{I}_Z$ with $|\mathcal{I}_1|=c_{\ref{sunflower}}(h,t)$.
We claim that the vertex sets in $G$ corresponding to the images of $T$ in $\mathcal{I}_1$ are all distinct. Suppose that $\phi_1(V(T))=\phi_2(V(T))$ for some $\phi_1,\phi_2\in \mathcal{I}_1$. Let $x$ be any vertex in $T$. If $\phi_1(x)\neq\phi_2(x)$, then $\phi_2(y)=\phi_1(x)$ for some vertex $y$ of $T$ with $y\neq x$ (since $\phi_1(V(T))=\phi_2(V(T))$), which contradicts the definition of coherence. Thus $\phi_1(x)=\phi_2(x)$ for each vertex $x$ of $T$, and so $\phi_1=\phi_2$. This proves our claim.
Therefore, by \cref{sunflower} applied to $\{ \phi(V(T)) : \phi\in\mathcal{I}_1\}$, there is a set $R$ of vertices in $G$ and a subfamily $\mathcal{I}_2 \subseteq \mathcal{I}_1$ such that $\phi_1(V(T))\cap \phi_2(V(T))=R$ for all distinct $\phi_1, \phi_2 \in \mathcal{I}_2$, and $|\mathcal{I}_2|=t$.
Fix $\phi_0 \in \mathcal{I}_2$ and let $K:=\phi_0^{-1}(R)$. Note that $K$ does not depend on the choice of $\phi_0$. Moreover,
$S \subseteq K$ because $Y_\phi=Z$ for every $\phi\in \mathcal{I}_2$, and each vertex in $S$ is either in $Y$ or is incident to an edge in $Y$. Let $U$ be some connected component of $T - K$. Note that $V(U) \cap S=\emptyset$, since $S \subseteq K$. Thus each vertex $v\in V(U)$ has $\deg_T(v)\geq s+1$, so there is a set $N_v$ of at least $\compdeg{U}{s}{v}$ neighbours of $v$ in $K$. Again by coherence, $\phi_1(N_v)=\phi_2(N_v)$ for all $\phi_1,\phi_2\in\mathcal{I}_2$ and $v\in V(U)$. Observe that $N_{v_1}\cap N_{v_2}=\emptyset$ for distinct $v_1, v_2 \in V(U)$, as otherwise $T$ would contain a cycle. Thus $(\phi(U):\phi\in\mathcal{I}_2)$ and $(\phi_0(N_v):v\in V(U))$ define a subgraph of $G$ isomorphic to $\blah{U}{s,t}$.
\end{proof}
We now prove our first main result.
\begin{proof}[Proof of \cref{Degeneracy}]
Since every graph with treewidth at most $k$ is in $\mathcal{D}_k$, \cref{LowerBound} implies $C(T,\mathcal{D}_k,n)\in \Omega(n^{\alpha_k(T)})$. For the upper bound, let $G$ be a $k$-degenerate graph, so $\rho(G)\leq k$. Let $c:=c_{\ref{UpperBound}}(k,k+1,|V(T)|,k)$. By \cref{UpperBound} with $s=k$ and $t=k+1$, if $I(T,G) \geq c\,|V(G)|^{\alpha_k(T)}$ then $G$ contains $\blah{U}{k,k+1}$ as a subgraph for some subtree $U$ of $T$. However, $\blah{U}{k,k+1}$ has minimum degree at least $k+1$ by (B), contradicting the $k$-degeneracy of $G$. Hence $I(T,G) < c\,|V(G)|^{\alpha_k(T)}$ and $C(T,\mathcal{D}_k,n)\in O(n^{\alpha_k(T)})$ by \cref{CIC}.
\end{proof}
The following special case of \cref{UpperBound} will be useful. Say $\{X_1,\dots,X_s;Y_1,\dots,Y_t\}$ is a \emph{$(p,q)$-model} of $K_{s,t}$ in a graph $G$ if:
\begin{compactitem}
\item $X_1,\dots,X_s,Y_1,\dots,Y_t$ are pairwise disjoint connected subgraphs of $G$,
\item for each $i\in[s]$ and $j\in[t]$ there is an edge in $G$ between $X_i$ and $Y_j$,
\item $|V(X_i)|\leq p$ for each $i\in[s]$ and $|V(Y_j)|\leq q$ for each $j\in[t]$.
\end{compactitem}
\begin{cor}
\label{UpperBoundCorollary}
For all $s,t, h \in \mathbb{N}$ and $\rho \in \mathbb{R}_{\geq0}$,
for every forest $T$ with $h$ vertices,
if $G$ is a graph with $\rho(G)\leq \rho$ and
$I(T,G) \geq c_{\ref{UpperBound}}(s,t,h,\rho) \,|V(G)|^{\alpha_s(T)}$,
then for some $h' \in [h]$, $G$ contains a subgraph of diameter at most $h'+1$ that contains a $(1,h')$-model of $K_{h'(s-1)+2,t}$. In particular, $G$ contains a $(1,h)$-model of $K_{s+1,t}$.
\end{cor}
\begin{proof}
By \cref{UpperBound}, $G$ contains $\blah{U}{s,t}$ as a subgraph for some subtree $U$ of $T$. The main claim follows from (A) and (C) where $h':=|V(U)|$. The final claim follows since $h'\in[h]$, implying $h'(s-1)+2 \geq s+1$.
\end{proof}
\section{Minor-Closed Classes}
\label{MinorExamples}
\cref{MinorClosedClass} follows from \cref{LowerBound,UpperBoundCorollary} together with the fact that every minor-closed class has bounded density \citep{Thomason84,Kostochka84}. We now give several examples of \cref{MinorClosedClass}.
\subsection*{Treewidth:}
Let $\mathcal{T}_k$ be the class of graphs with treewidth at most $k$. Then $\mathcal{T}_k$ is a minor-closed class, and every graph in $\mathcal{T}_k$ has minimum degree at most $k$, implying $\rho(\mathcal{T}_k)\leq k$ and $K_{k+1,k+1}\not\in \mathcal{T}_k$. Thus \cref{MinorClosedClass} with $s=k$ implies that for every fixed forest $T$,
\begin{equation}
\label{Treewidth}
C(T,\mathcal{T}_k,n) \in \Theta(n^{\alpha_k(T)}).
\end{equation}
\subsection*{Surfaces:}
Let $\SS_{\Sigma}$ be the class of graphs that embed\footnote{For $h\geq 0$, let $\mathbb{S}_h$ be the sphere with $h$ handles. For $c\geq 0$, let $\mathbb{N}_c$ be the sphere with $c$ cross-caps. Every surface is homeomorphic to $\mathbb{S}_h$ or $\mathbb{N}_c$. The \emph{Euler genus} of $\mathbb{S}_h$ is $2h$. The \emph{Euler genus} of $\mathbb{N}_c$ is $c$. The \emph{Euler genus} of a graph $G$ is the minimum Euler genus of a surface in which $G$ embeds with no crossings. See~\citep{MoharThom} for background about graphs embedded in surfaces.} in a surface $\Sigma$. Then $\SS_{\Sigma}$ is a minor-closed class. \citet{HJW20} proved that for every $H\in\SS_\Sigma$,
$$C(H, \SS_{\Sigma}, n)\in \Theta(n^{f(H)}),$$
where $f(H)$ is a graph invariant called the \emph{flap-number} of $H$, which is independent of $\Sigma$.
\citet{HJW20} noted that $f(T)=\alpha_2(T)$ for a forest $T$. So, in particular,
$$C(T, \SS_{\Sigma}, n) \in \Theta(n^{\alpha_2(T)}).$$
This result is also implied by \cref{MinorClosedClass} since
for every surface $\Sigma$ of Euler genus $g$,
Euler's formula implies that $K_{3,2g+3}$ is not in $\SS_\Sigma$ (first observed by \citet{Ringel65}), and
$$\rho(\SS_\Sigma)\leq \rho_g:= \max\{3,\tfrac14 (5 + \sqrt{24g+1}) \};$$
see \citep{OOW19} for a proof.
\subsection*{Excluding a Complete Bipartite Minor:}
Let $\mathcal{B}_{s,t}$ be the class of graphs containing no complete bipartite graph $K_{s,t}$ minor, where $t\geq s$. Since $K_{s,t}$ has treewidth $s$, every graph with treewidth at most $s-1$ is in $\mathcal{B}_{s,t}$. By \cref{MinorClosedClass}, for every fixed forest $T$, \begin{equation}
\label{ExcludedCompleteBipartiteMinor}
C(T,\mathcal{B}_{s,t},n)\in \Theta(n^{\alpha_{s-1}(T)}).
\end{equation}
This answers affirmatively a question raised by \citet{HJW20}.
\subsection*{Excluding a Complete Minor:}
Let $\mathcal{C}_k$ be the class of graphs containing no complete graph $K_k$ minor. Then $K_{k-1,k-1}\not\in \mathcal{C}_k$ (since contracting a $(k-2)$-edge matching in $K_{k-1,k-1}$ gives $K_k$). Every graph with treewidth at most $k-2$ is in $\mathcal{C}_k$. Thus \cref{MinorClosedClass} with $s=k-2$ implies that for every fixed forest $T$,
$$C(T,\mathcal{C}_k,n) \in \Theta(n^{\alpha_{k-2}(T)}).$$
\subsection*{Colin de Verdi\`ere Number:}
The Colin de Verdi\`ere parameter $\mu(G)$ is an important graph invariant introduced by \citet{CdV90,CdV93}; see~\citep{HLS,Schrijver97} for surveys. It is known that $\mu(G)\leq 1$ if and only if $G$ is a disjoint union of paths, $\mu(G)\leq 2$ if and only if $G$ is outerplanar, $\mu(G)\leq 3$ if and only if $G$ is planar, and $\mu(G)\leq 4$ if and only if $G$ is linklessly embeddable.
Let $\mathcal{V}_k:=\{G:\mu(G)\leq k\}$. Then $\mathcal{V}_k$ is a minor-closed class~\citep{CdV90,CdV93}. \citet{GB11} proved that $\mu(G) \leq\tw(G)+1$. So every graph with treewidth at most $k-1$ is in $\mathcal{V}_k$. \citet{HLS} proved that $\mu(K_{s,t}) = s+1$ for $t\geq\max\{s,3\}$, so $K_{k,\max\{k,3\}} \not\in \mathcal{V}_k$. Thus \cref{MinorClosedClass} with $s=k-1$ and $t=\max\{k,3\}$ implies that for every fixed forest $T$,
\begin{equation}
\label{CdV}
C(T,\mathcal{V}_k,n) \in \Theta(n^{\alpha_{k-1}(T)}).
\end{equation}
\subsection*{Linkless Graphs:}
A graph is \emph{linklessly embeddable} if it has an embedding in $\mathbb{R}^3$ with no two linked cycles~\citep{Sachs83,RST93a}. Let $\mathcal{L}$ be the class of linklessly embeddable graphs. Then $\mathcal{L}$ is a minor-closed class whose minimal excluded minors are the so-called Petersen family~\citep{RST95}, which includes $K_6$, $K_{4,4}$ minus an edge, and the Petersen graph. As mentioned above, $\mathcal{L}=\mathcal{V}_4$. Thus \cref{CdV} with $k=4$ implies for every fixed forest $T$, $$C(T,\mathcal{L},n) \in \Theta(n^{\alpha_{3}(T)}).$$
\subsection*{Knotless Graphs:}
A graph is \emph{knotlessly embeddable} if it has an embedding in $\mathbb{R}^3$ in which every cycle forms a trivial knot; see~\citep{Alfonsin05} for a survey. Let $\mathcal{K}$ be the class of knotlessly embeddable graphs. Then $\mathcal{K}$ is a minor-closed class whose minimal excluded minors include $K_7$ and $K_{3,3,1,1}$ (see \citep{CG83,Foisy02}). More than 260 minimal excluded minors are known~\citep{GMN14}, but the full list is unknown. Since $K_7\not\in\mathcal{K}$, we have $\rho(\mathcal{K})\leq\rho(\mathcal{C}_7)<5$ by a theorem of \citet{Mader68}. \citet{Shimabara88} proved that $K_{5,5}\not\in\mathcal{K}$. By \cref{MinorClosedClass},
$$C(T,\mathcal{K},n)\in O(n^{\alpha_4(T)}).$$
This bound would be tight if every graph with treewidth at most 4 is knotlessly embeddable, which is an open problem of independent interest.
The above results all depend on excluded complete bipartite minors. We now show that excluded complete bipartite minors determine $C(T,\mathcal{G},n)$ for a broad family of minor-closed classes.
\begin{thm}
\label{BiconnectedForbiddenMinors}
Let $\mathcal{G}$ be a minor-closed class such that every minimal forbidden minor of $\mathcal{G}$ is 2-connected. Let $s$ be the maximum integer such that $K_{s,t} \in \mathcal{G}$ for every $t\in \mathbb{N}$. Then for every forest $T$,
$$C(T,\mathcal{G},n) \in \Theta( n^{\alpha_s(T)} ).$$
\end{thm}
\begin{proof}
Note that the condition that every minimal forbidden minor of $\mathcal{G}$ is 2-connected is equivalent to saying that $\mathcal{G}$ is closed under the 1-sum operation (that is, if $G_1,G_2\in\mathcal{G}$ and $|V(G_1\cap G_2)|\leq 1$, then $G_1\cup G_2\in \mathcal{G}$).
The proof of \cref{LowerBound} shows that for all sufficiently large $n\in\mathbb{N}$ there exists an $n$-vertex graph $G$ with $C(T,G)\geq c\, n^{\alpha_s(T)}$ for some constant $c>0$ depending only on $T$ and $s$, where $G$ is obtained from 1-sums of complete bipartite graphs $K_{s',t}$ with $s'\leq s$. By the definition of $s$ and since $\mathcal{G}$ is closed under 1-sums, $G\in\mathcal{G}$. Thus $C(T,\mathcal{G},n) \in \Omega( n^{\alpha_s(T)} )$.
Now we prove the upper bound. Since $\mathcal{G}$ is minor-closed, $\mathcal{G}$ has bounded
density~\citep{Thomason84,Kostochka84}. By the definition of $s$, there exists $t\in\mathbb{N}$ such that $K_{s+1,t}\not\in \mathcal{G}$. By (A), we have $\blah{U}{s,t}\not\in\mathcal{G}$ for every non-empty subtree $U$ of $T$. Thus $I(T,\mathcal{G},n) \in O( n^{\alpha_s(T)})$ by \cref{UpperBound}.
\end{proof}
Note that minor-closed classes with bounded pathwidth (that is, those excluding a fixed forest as a minor \citep{BRST91}) are examples not covered by \cref{BiconnectedForbiddenMinors}. Determining $C(T,\mathcal{G}_k,n)$, where $\mathcal{G}_k$ is the class of pathwidth $k$ graphs, is an interesting open problem.
\section{Beyond Minor-Closed Classes}
\label{Beyond}
This section asymptotically determines $C(T,\mathcal{G},n)$ for several non-minor-closed graph classes $\mathcal{G}$.
\subsection{Shortcut Systems}
\citet{DMW} introduced the following definition which generalises the notion of shallow immersion~\citep{NesOss15} and provides a way to describe a graph class in terms of a simpler graph class. Then properties of the original class are (in some sense) transferred to the new class. Let $\mathcal{P}$ be a set of non-trivial paths in a graph $G$. Each path $P\in\mathcal{P}$ is called a \emph{shortcut}; if $P$ has endpoints $v$ and $w$ then it is a \emph{$vw$-shortcut}. Given a graph $G$ and a shortcut system $\mathcal{P}$ for $G$, let $G^\mathcal{P}$ be the simple supergraph of $G$ obtained by adding the edge $vw$ for each $vw$-shortcut in $\mathcal{P}$. \citet{DMW} defined $\mathcal{P}$ to be a \emph{$(k,d)$-shortcut system} (for $G$) if:
\begin{compactitem}
\item every path in $\mathcal{P}$ has length at most $k$, and
\item for every $v\in V(G)$, the number of paths in $\mathcal{P}$ that use $v$ as an internal vertex is at most $d$.
\end{compactitem}
We use the following variation. Say $\mathcal{P}$ is a \emph{$(k,d)^\star$-shortcut system} (for $G$) if:
\begin{compactitem}
\item every path in $\mathcal{P}$ has length at most $k$, and
\item for every $v\in V(G)$, if $M_v$ is the set of vertices $u\in V(G)$ such that there exists a $uw$-shortcut in $\mathcal{P}$ in which $v$ is an internal vertex, then $|M_v| \leq d$. \end{compactitem}
Clearly, every $(k,d)^\star$-shortcut system is a $(k,\binom{d}{2})$-shortcut system (since $G^\mathcal{P}$ is simple), and every $(k,d)$-shortcut system is a $(k,2d)^\star$-shortcut system.
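As a small illustration (our own naming and representation, not the paper's code), the supergraph $G^{\mathcal{P}}$ and the two $(k,d)^\star$ conditions can be realised as follows.

```python
def apply_shortcuts(G, paths):
    """G: {v: set(neighbours)}; paths: list of vertex sequences (shortcuts).
    Returns the simple supergraph G^P with an edge vw for each vw-shortcut."""
    H = {v: set(G[v]) for v in G}
    for P in paths:
        v, w = P[0], P[-1]
        if v != w:
            H[v].add(w); H[w].add(v)
    return H

def is_kd_star(G, paths, k, d):
    """Check the (k,d)^star conditions: shortcut length <= k, and
    |M_v| <= d where M_v collects endpoints of shortcuts with v internal."""
    if any(len(P) - 1 > k for P in paths):
        return False
    M = {v: set() for v in G}
    for P in paths:
        for v in P[1:-1]:
            M[v].update((P[0], P[-1]))
    return all(len(M[v]) <= d for v in G)
```

For example, the single shortcut $1\,2\,3$ on the path $1$--$2$--$3$ adds the edge $13$, and it is a $(2,2)^\star$-shortcut system but not a $(2,1)^\star$-shortcut system, since $M_2=\{1,3\}$.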
The next lemma shows that if $G^\mathcal{P}$ contains a `small' model of a `large' complete bipartite graph, then so does $G$.
\begin{lem}
\label{pqModelShortcut}
For all $s,t,d,k,p,q\in\mathbb{N}$, let
$s':= (d(k-1)(p-1) + 1)(s-1)+1$ and
$t':= ( 2d(k-1)(s+q-1) + 1 )(t-1) + 1 + sd ( p+(k-1)(p-1) )$.
Let $\mathcal{P}$ be a $(k,d)^\star$-shortcut system for a graph $G$.
If $G^{\mathcal{P}}$ contains a $(p,q)$-model of $K_{s',t'}$, then $G$ contains a
$( p+(k-1)(p-1),q+(k-1)(s+q-1))$-model of $K_{s,t}$.
\end{lem}
\begin{proof}
Let $\{X_1,\dots,X_{s'};Y_1,\dots,Y_{t'}\}$ be a $(p,q)$-model of $K_{s',t'}$ in $G^{\mathcal{P}}$. We may assume that each edge of $G$ is itself a path of length 1 in $\mathcal{P}$. Let $I:=[s']$ and $J:=[t']$. We may assume that $X_i$ and $Y_j$ are subtrees of $G^{\mathcal{P}}$ for $i\in I$ and $j\in J$.
Consider each $i\in I$. Let $C_i$ be the set of all vertices internal to some $uw$-shortcut with $uw\in E(X_i)$. Since $|E(X_i)| \leq p-1$, we have $|C_i|\leq (k-1)(p-1)$. For each $i\in I$, let $\hat{X}_i$ be the subgraph of $G$ induced by $V(X_i)\cup C_i$. By construction, $\hat{X}_i$ is connected and $|V(\hat{X}_i)| \leq p+(k-1)(p-1)$.
Consider the graph $A$ with $V(A):=I$ where two vertices $i,i'\in V(A)$ are adjacent if $V(\hat{X_i}) \cap V(\hat{X_{i'}}) \neq\emptyset$.
For each $ii' \in E(A)$, fix a vertex $v_{i,i'}$ in $V(\hat{X}_i)\cap V(\hat{X}_{i'})$, which is in $C_i \cup C_{i'}$ since $V(X_i) \cap V(X_{i'}) = \emptyset$. For $i\in I$ and $v \in C_i$, define $E_{v,i}$ to be the set of all edges $ii'\in E(A)$ with $v_{i,i'} = v$. If $ii'$ is in $E_{v,i}$ and $v\not \in X_{i'}$, then $|M_v\cap X_{i'}|\geq 2$. Also $|M_v\cap X_i|\geq 2$. Since $v$ is in at most one $X_{i'}$, in total, $|M_v|\geq 2|E_{v,i}|$, implying $|E_{v,i}|\leq \frac{d}{2}$. Since $|I|=|V(A)|$ and $|C_i|\leq (k-1)(p-1)$,
$$|E(A)|
\leq \sum_{i\in I} \sum_{v\in C_i} |E_{v,i}|
\leq \tfrac{d}{2}(k-1)(p-1)\, |V(A)|.$$
Thus $A$ has average degree at most $d(k-1)(p-1)$. By Tur\'an's Theorem, $A$ contains a stable set $I'$ of size $\ceil{ |I|/ ( d(k-1)(p-1) + 1 )} =s$. For distinct $i,i'\in I'$, the subgraphs $\hat{X}_i$ and $\hat{X}_{i'}$ are disjoint. Let $\mathcal{X}:=\bigcup_{i \in I'} V(\hat{X}_i)$. Note that $|\mathcal{X}| \leq s ( p+(k-1)(p-1) )$.
Let $Z:=\bigcup_{x\in \mathcal{X}}M_x$. Then $|Z|\leq sd ( p+(k-1)(p-1) )$.
Thus $Y_j$ intersects $Z$ for at most $sd ( p+(k-1)(p-1) )$ elements $j\in J$.
Hence $J$ contains a subset $K$ of size $( 2d(k-1)(s+q-1) + 1 )(t-1) +1$ such that $V(Y_j) \cap Z=\emptyset$ for each $j\in K$.
Consider each $j\in K$. Initialise $D_j:=\emptyset$. For each $i\in I'$, choose $x\in V(X_i)$ and $w\in V(Y_j)$ such that $xw \in E(G^{\mathcal P})$, and add all the internal vertices of the $xw$-shortcut $P \in \mathcal P$ to $D_j$. For each edge $uw$ of $Y_j$, add all the internal vertices of the $uw$-shortcut $P\in \mathcal{P}$ to $D_j$. Note that
$$|D_j| \leq (k-1)|I'| + (k-1)|E(Y_j)| \leq (k-1)(s+q-1),$$
since $Y_j$ has at most $q-1$ edges.
Moreover, $D_j \cap \mathcal{X} = \emptyset$ since $V(Y_j)\cap Z=\emptyset$.
For each $j\in K$, let $\hat{Y}_j$ be the subgraph of $G$ induced by $V(Y_j)\cup D_j$. By construction, $\hat{Y}_j$ is connected with at most $q+(k-1)(s+q-1)$ vertices and is disjoint from $\mathcal{X}$.
Consider the graph $B$ with $V(B):=K$ where two vertices $j,j'\in V(B)$ are adjacent if $V(\hat{Y_j}) \cap V(\hat{Y_{j'}}) \neq\emptyset$. For each $jj' \in E(B)$, fix a vertex $v_{j,j'}$ in $V(\hat{Y}_j)\cap V(\hat{Y}_{j'})$, which is in $D_j \cup D_{j'}$ since $V(Y_j) \cap V(Y_{j'}) = \emptyset$. For $j\in K$ and $v \in D_j$, define $E_{v,j}$ to be the set of all edges $jj'\in E(B)$ with $v_{j,j'} = v$.
We now bound $|E(B)|$. If $jj'$ is in $E_{v,j}$ and $v\not \in Y_{j'}$, then $|M_v\cap Y_{j'}|\geq 1$. Also $|M_v\cap Y_j|\geq 1$. Since $v$ is in at most one $Y_{j'}$, in total, $|M_v| \geq |E_{v,j}|$, implying $|E_{v,j}| \leq d$. Since $|K|=|V(B)|$ and $|D_j|\leq (k-1)(s+q-1)$,
$$|E(B)|
\leq \sum_{j\in K} \sum_{v\in D_j} |E_{v,j}|
\leq d(k-1)(s+q-1)\, |V(B)|,$$
implying $B$ has average degree at most $2d(k-1)(s+q-1)$. By Tur\'an's Theorem, $B$ contains a stable set $L$ of size $\ceil{ |K|/ ( 2d(k-1)(s+q-1) + 1 )} =t$.
For distinct $j,{j'}\in L$, since $L$ is a stable set in $B$, $\hat{Y}_j$ and $\hat{Y}_{j'}$ are disjoint. For each $j\in L$, $Y_j$ and $\mathcal{X}$ are disjoint by assumption, and $D_j$ and $\mathcal{X}$ are disjoint by construction. Also, for each $i\in I'$ and $j\in L$, there is an edge between $\hat{X}_i$ and $\hat{Y}_j$ by construction. Thus $\{\hat{X}_i:i\in I'\}$ and $\{\hat{Y}_j:j \in L\}$ form a $( p+(k-1)(p-1),q+(k-1)(s+q-1))$-model of $K_{s,t}$ in $G$.
\end{proof}
\cref{pqModelShortcut} with $p=1$ implies the following result. We emphasise that the value of $s$ does not change in the two models.
\begin{cor}
\label{1qModelShortcut}
Fix $s,t,k,d,q\in\mathbb{N}$. Let $t':= ( 2d(k-1)(s+q-1) + 1 )(t-1) + 1 + sd$.
Let $\mathcal{P}$ be a $(k,d)^\star$-shortcut system for a graph $G$.
If $G^{\mathcal{P}}$ contains a $(1,q)$-model of $K_{s,t'}$,
then $G$ contains a $(1,q+(k-1)(s+q-1))$-model of $K_{s,t}$.
\end{cor}
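The parameter bookkeeping in the corollary is easy to get wrong, so the following Python sketch (hypothetical helper names, not part of the paper) evaluates the closed-form expression for $t'$ and the width $q+(k-1)(s+q-1)$ of the resulting model, so that the later specialisations can be checked mechanically.

```python
def t_prime(s, t, k, d, q):
    # t' = (2d(k-1)(s+q-1) + 1)(t-1) + 1 + sd, as in the corollary
    return (2 * d * (k - 1) * (s + q - 1) + 1) * (t - 1) + 1 + s * d

def model_width(s, k, q):
    # second parameter of the (1, .)-model obtained in G
    return q + (k - 1) * (s + q - 1)

for params in [(3, 4, 2, 5, 2), (2, 3, 3, 1, 1)]:
    s, t, k, d, q = params
    print(params, "->", t_prime(s, t, k, d, q), model_width(s, k, q))
```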
\subsection{Low-Degree Squares of Graphs}
The above result on shortcut systems leads to the following extension of our results for minor-closed classes. For a graph $G$ and $d\in\mathbb{N}$, let $G^{(d)}$ be the graph obtained from $G$ by adding a clique on $N_G(v)$ for each vertex $v\in V(G)$ with $\deg_G(v)\leq d$. (This definition incorporates and generalises the square of a graph with maximum degree $d$.)\ Note that $G^{(d)}=G^{\mathcal{P}}$, where $\mathcal{P}$ is the $(2,d)^\star$-shortcut system $\{uvw: v\in V(G);\deg_G(v)\leq d; u,w\in N_G(v); u\neq w\}$. For a graph class $\mathcal{G}$, let $\mathcal{G}^{(d)}:=\{G^{(d)}:G\in \mathcal{G}\}$. Note that $\rho(G^{(d)}) \leq \rho(G)+\binom{d}{2}$. \cref{UpperBoundCorollary} and \cref{1qModelShortcut} with $k=2$ and $q=h$ imply:
\begin{cor}
\label{SquareGraph}
Fix $s,t,d,h\in\mathbb{N}$ and $\rho\in\mathbb{R}_{\geq 0}$.
Let $T$ be a fixed forest with $h$ vertices.
Let $t':= ( 2d(s+h-1) + 1 )(t-1) + 1 + sd$.
Let $G$ be a graph with $\rho(G)\leq \rho$ and containing no $(1,2h+s-1)$-model of $K_{s,t}$.
Then $G^{(d)}$ contains no $(1,h)$-model of $K_{s,t'}$, and
$$C(T,G^{(d)}) \leq I(T,G^{(d)})
\leq c_{\ref{UpperBound}}(s-1,t',h,\rho+\tbinom{d}{2}) \,|V(G)|^{\alpha_{s-1}(T)}.$$
\end{cor}
With \cref{LowerBound} we have:
\begin{thm}
\label{SquareClass}
Fix $s,t,d,h\in\mathbb{N}$ and $\rho\in\mathbb{R}_{\geq 0}$.
Let $T$ be a fixed forest with $h$ vertices.
Let $t':= ( 2d(s+h-1) + 1 )(t-1) + 1 + sd$.
Let $\mathcal{G}$ be a graph class such that $\rho(\mathcal{G})\leq \rho$,
every graph with treewidth at most $s-1$ is in $\mathcal{G}$, and
no graph in $\mathcal{G}$ contains a $(1,2h+s-1)$-model of $K_{s,t}$.
Then no graph in $\mathcal{G}^{(d)}$ contains a $(1,h)$-model of $K_{s,t'}$, and
$$C(T,\mathcal{G}^{(d)},n) = \Theta( n^{\alpha_{s-1}(T)} ).$$
\end{thm}
\cref{SquareClass} is applicable to all the minor-closed classes discussed in \cref{MinorExamples}. For example, we have the following extension of \cref{ExcludedCompleteBipartiteMinor}. Recall that $\mathcal{B}_{s,t}^{(d)}$ is the class of graphs $G^{(d)}$ where $G$ contains no $K_{s,t}$-minor. Then for every fixed forest $T$,
$$C(T,\mathcal{B}_{s,t}^{(d)},n) = \Theta( n^{\alpha_{s-1}(T)}).$$
\subsection{Map Graphs}
Map graphs are defined as follows. Start with a graph $G_0$ embedded in a surface $\Sigma$, with each face labelled a ``nation'' or a ``lake'', where each vertex of $G_0$ is incident with at most $d$ nations. Let $G$ be the graph whose vertices are the nations of $G_0$, where two vertices are adjacent in $G$ if the corresponding faces in $G_0$ share a vertex. Then $G$ is called a \emph{$(\Sigma,d)$-map graph}. A $(\mathbb{S}_0,d)$-map graph is called a (plane) \emph{$d$-map graph}; such graphs have been extensively studied \citep{FLS-SODA12,Chen07,DFHT05,CGP02,Chen01}.
Let $\mathcal{M}_{\Sigma,d}$ be the set of all $(\Sigma,d)$-map graphs.
Since $\mathcal{M}_{\Sigma,3}=\SS_\Sigma$ (see \citep{CGP02,DEW17}), map graphs provide a natural generalisation of graphs embeddable in a surface.
Let $G\in\mathcal{M}_{\Sigma,d}$ where $\Sigma$ has Euler genus $g$. Let $T$ be a fixed forest with $h$ vertices. \citet{DMW} proved that $G$ is a subgraph of $G_0^\mathcal{P}$ for some graph $G_0\in\SS_\Sigma$ and some $(2,\frac12 d(d-3))$-shortcut system $\mathcal{P}$ of $G_0$. Inspecting the proof in \citep{DMW} one observes that $\mathcal{P}$ is a $(2,d)^\star$-shortcut system. In the plane case, \citet{Chen01} proved that $\rho(\mathcal{M}_{\mathbb{S}_0,d})< d$. An analogous argument shows that $\rho(\mathcal{M}_{\Sigma,d})\in O(d \sqrt{g+1})$. The same bound can also be concluded from \cref{gkCloseEdges}.
Since $G_0$ contains no $K_{3,2g+3}$ minor, by \cref{1qModelShortcut},
for each $q\in\mathbb{N}$,
$G_0^{\mathcal{P}}$ and thus $G$ contains no $(1,q)$-model of $K_{3,t'}$
where $t':= ( 2d(q+2) + 1 )(2g+2) +1 + 3d$.
With $q=h$, \cref{UpperBoundCorollary} then implies that
$C(T,G) \leq I(T,G) \leq c_{\ref{UpperBound}}(2,t',h,\rho) \,|V(G)|^{\alpha_2(T)}$. Hence
$$C(T,\mathcal{M}_{\Sigma,d},n) \in \Theta( n^{\alpha_2(T)} ),$$
where the lower bound follows from \cref{LowerBound} since every graph with treewidth 2 is planar and is thus a $(\Sigma,d)$-map graph. Also note the $q=1$ case above shows that
$$K_{3,( 6d + 1 )(2g+2)+1 + 3d} \not\in \mathcal{M}_{\Sigma,d}.$$
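As an arithmetic sanity check (illustrative helpers, assuming the formula for $t'$ from \cref{1qModelShortcut}), the map-graph value of $t'$ is exactly the $s=3$, $k=2$, $t=2g+3$ specialisation of the general formula, and $q=1$ recovers the explicit exclusion above.

```python
def t_prime(s, t, k, d, q):
    # general formula from the shortcut-system corollary
    return (2 * d * (k - 1) * (s + q - 1) + 1) * (t - 1) + 1 + s * d

def t_prime_map(d, g, q):
    # value claimed for (Sigma, d)-map graphs: s = 3, k = 2, t = 2g+3
    return (2 * d * (q + 2) + 1) * (2 * g + 2) + 1 + 3 * d

for d in range(1, 6):
    for g in range(4):
        for q in range(1, 5):
            assert t_prime(3, 2 * g + 3, 2, d, q) == t_prime_map(d, g, q)
        # q = 1 gives the explicit exclusion K_{3,(6d+1)(2g+2)+1+3d}
        assert t_prime_map(d, g, 1) == (6 * d + 1) * (2 * g + 2) + 1 + 3 * d
print("ok")
```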
\subsection{Bounded Number of Crossings}
Here we consider drawings of graphs with a bounded number of crossings per edge. Throughout the paper, we assume that no three edges cross at a single point in a drawing of a graph. For a surface $\Sigma$ and $k\in \mathbb{N}$, let $\SS_{\Sigma,k}$ be the class of graphs $G$ that have a drawing in $\Sigma$ such that each edge is in at most $k$ crossings. Since $\SS_{\Sigma,0}=\SS_\Sigma$, this class provides a natural generalisation of graphs embeddable in surfaces and is widely studied~\citep{PachToth-DCG02,DMW,OOW19}. Graphs in $\SS_{\mathbb{S}_0,k}$ are called \emph{$k$-planar}. The case $k=1$ is particularly important in the graph drawing literature; see \citep{KLM17} for a bibliography with over 100 references.
Let $T$ be a fixed forest with $h$ vertices. Let $G\in \SS_{\Sigma,k}$ where $\Sigma$ has Euler genus $g$. \citet{DMW} noted that by replacing each crossing point by a dummy vertex we obtain a graph $G_0\in\SS_\Sigma$ such that $G$ is a subgraph of $G_0^\mathcal{P}$ for some $(k+1,2)$-shortcut system $\mathcal{P}$, which is a $(k+1,4)^\star$-shortcut system.
Results of \citet{OOW19} show that $\rho(\SS_{\Sigma,k}) \leq 2\sqrt{k+1}\rho_g$ (see \cref{gkCloseEdges} below).
Since $G_0$ contains no $K_{3,2g+3}$ minor, by \cref{1qModelShortcut},
for all $q\in\mathbb{N}$, $G_0^\mathcal{P}$ and thus $G$ contains no $(1,q)$-model of
$K_{3,t'}$ where $t':= ( 8k(q+2) + 1 )(2g+2) + 13$. Applying this result with $q=h$, \cref{UpperBoundCorollary} then implies
$C(T,G) \leq I(T,G) \leq c_{\ref{UpperBound}}(2,t',h,2\sqrt{k+1}\rho_g) \,|V(G)|^{\alpha_{2}(T)}$. Hence
\begin{equation}
\label{gkPlanar}
C(T,\SS_{\Sigma,k},n) \in \Theta(n^{\alpha_2(T)}),
\end{equation}
where the lower bound follows from \cref{LowerBound} since every treewidth 2 graph is planar and is thus in $\SS_{\Sigma,k}$. Also note the $q=1$ case above shows that
$$K_{3,( 24k + 1 )(2g+2) + 13} \not\in \SS_{\Sigma,k}.$$
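Similarly, the $k$-planar value of $t'$ is the specialisation of the general shortcut-system formula under the substitution $k\mapsto k+1$, $d\mapsto 4$ arising from the $(k+1,4)^\star$-shortcut system (illustrative check, not code from the paper):

```python
def t_prime(s, t, k, d, q):
    # general formula from the shortcut-system corollary
    return (2 * d * (k - 1) * (s + q - 1) + 1) * (t - 1) + 1 + s * d

def t_prime_kplanar(k, g, q):
    # value claimed for drawings with at most k crossings per edge
    return (8 * k * (q + 2) + 1) * (2 * g + 2) + 13

for k in range(1, 5):
    for g in range(4):
        for q in range(1, 5):
            # (k+1, 4)^* shortcut system: substitute k -> k+1, d -> 4, s = 3, t = 2g+3
            assert t_prime(3, 2 * g + 3, k + 1, 4, q) == t_prime_kplanar(k, g, q)
        assert t_prime_kplanar(k, g, 1) == (24 * k + 1) * (2 * g + 2) + 13
print("ok")
```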
\subsection{Bounded Average Number of Crossings}
\label{Crossings}
Here we generalise the results from the previous section for graphs that can be drawn with a bounded average number of crossings per edge. \citet{OOW19} defined a graph $G$ to be \emph{$k$-close to Euler genus $g$} if every subgraph $G'$ of $G$ has a drawing in a surface of Euler genus at most $g$ with at most $k\,|E(G')|$ crossings\footnote{The case $g=0$ is similar to other definitions from the literature, as we now explain. \citet{EG17} defined the \emph{crossing graph} of a drawing of a graph $G$ to be the graph with vertex set $E(G)$, where two vertices are adjacent if the corresponding edges in $G$ cross. \citet{EG17} defined a graph to be a \emph{$d$-degenerate crossing graph} if it admits a drawing whose crossing graph is $d$-degenerate. Independently, \citet{GapPlanar18} defined a graph $G$ to be \emph{$k$-gap-planar} if $G$ has a drawing in the plane in which each crossing is assigned to one of the two involved edges and each edge is assigned at most $k$ of its crossings. This is equivalent to saying that the crossing graph has an orientation with outdegree at most $k$ at every vertex. \citet{Hakimi65} proved that any graph $H$ has such an orientation if and only if every subgraph of $H$ has average degree at most $2k$. So a graph $G$ is $k$-gap-planar if and only if $G$ has a drawing such that every subgraph of the crossing graph has average degree at most $2k$ if and only if $G$ has a drawing such that every subgraph $G'$ of $G$ has at most $k\,|E(G')|$ crossings in the induced drawing of $G'$.
The only difference between ``$k$-close to planar'' and ``$k$-gap-planar'' is that a $k$-gap-planar graph has a single drawing in which every subgraph has the desired number of crossings. To complete the comparison, the definition of \citet{EG17} is equivalent to saying that $G$ has a drawing in which the crossing graph has an acyclic orientation with outdegree at most $k$ at every vertex. Thus every $k$-degenerate crossing graph is a $k$-gap-planar graph, and every $k$-gap-planar graph is a $2k$-degenerate crossing graph.
}. Let $\mathcal{E}_{g,k}$ be the class of graphs $k$-close to Euler genus $g$.
This is a broader class than $\SS_{\Sigma,k}$ since it allows an average of $k$ crossings per edge, whereas $\SS_{\Sigma,k}$ requires a maximum of $k$ crossings per edge. In particular, if $\Sigma$ has Euler genus $g$, then $\SS_{\Sigma,k} \subseteq \mathcal{E}_{g,k/2}$.
The next lemma is of independent interest.
\begin{lem}
\label{gkCloseShallow}
Fix $g,r\in\mathbb{N}_0$ and $d\in\mathbb{N}$ and $k\in\mathbb{R}_{\geq 0}$. Assume that a graph $G\in\mathcal{E}_{g,k}$ contains an $r$-shallow $H$-model $(X_v:v\in V(H))$ such that for every vertex $v\in V(H)$ we have $\deg_H(v)\leq d$ or $|V(X_v)|=1$. Then $H$ is in $\mathcal{E}_{g,2kd^2(2r+1)}$.
\end{lem}
\begin{proof}
For each $v\in V(H)$, let $a_v$ be the central vertex of $X_v$. We may assume that $X_v$ is a BFS spanning tree of $G[V(X_v)]$ rooted at $a_v$ and with radius at most $r$. Orient the edges of $X_v$ away from $a_v$.
Let $H'$ be an arbitrary subgraph of $H$. For each $v\in V(H')$, let $X'_v$ be a minimal subtree of $X_v$ rooted at $a_v$, such that $(X'_v:v\in V(H'))$ is an $r$-shallow $H'$-model. By minimality, $X'_v$ has at most $\deg_{H'}(v)$ leaves. Each edge of $X'_v$ is on a path from a leaf to $a_v$, implying $|E(X'_v)| \leq r\deg_{H'}(v)$.
Let $G'$ be the subgraph of $G$ consisting of $\bigcup_{v\in V(H')}X'_v$ along with one undirected edge $y_{vw}y_{wv}$ for each edge $vw\in E(H')$, where $y_{vw}\in V(X'_v)$ and $y_{wv}\in V(X'_w)$. Let $P_{vw}$ be the directed $a_vy_{vw}$-path in $X'_v$. Note that $$|E(G')|
\,=\,
|E(H')| + \!\! \sum_{v\in V(H')} \!\!\! |E(X'_v)|
\,\leq\,
|E(H')| + r \!\! \sum_{v\in V(H')} \!\!\! \deg_{H'}(v)
\,=\, (2r+1) |E(H')| . $$
Since $G$ is $k$-close to Euler genus $g$, $G'$ has a drawing in a surface of Euler genus at most $g$ with at most $k\,|E(G')|$ crossings. For each $e\in E(G')$, let $\ell(e)$ be the number of crossings on $e$ in this drawing of $G'$. Since each crossing contributes towards $\ell$ for exactly two edges, $$\sum_{e\in E(G')}\!\!\ell(e) \leq 2k\,|E(G')| \leq 2k(2r+1)|E(H')| .$$
Let $G''$ be the multigraph obtained from $G'$ as follows: for each vertex $v\in V(H')$ and edge $e$ in $X'_v$, let the multiplicity of $e$ in $G''$ equal the number of edges $vw\in E(H')$ for which the path $P_{vw}$ uses $e$. Edges of $G''$ inherit their orientation from $G'$. Note that $G''$ has multiplicity at most $d$. By replicating edges in the drawing of $G'$ we obtain a drawing of $G''$ such that every edge of $G''$ corresponding to $e\in E(G')$ is in at most $d\, \ell(e)$ crossings. Since each edge $e\in E(G')$ has multiplicity at most $d$ in $G''$, the number of crossings in the drawing of $G''$ is at most
$\sum_{e\in E(G')} d^2\ell(e) \leq 2kd^2(2r+1)\,|E(H')|$.
Note that at each vertex $y$ in $G''$, in the circular ordering of edges in $G''$ incident to $y$ determined by the drawing of $G''$, all the incoming edges form an interval. We now use the drawing of $G'$ to produce a drawing of a graph $G'''$, which is a subdivision of $H'$, where each vertex $v\in V(H')$ is drawn at the location of $a_v$. Here is the idea (see \cref{gkCloseH}): First `assign' each edge $y_{vw}y_{wv}$ of $G'$ to the edge $vw$ of $H'$. Next `assign' each edge of $G'$ arising from some $X'_v$ to exactly one edge incident to $v$, such that for each edge $vw$ of $H'$ incident to $v$ there is a path in $G'$ from $a_v$ to $y_{vw}$ consisting of edges assigned to $vw$. Then each edge $vw$ in $H'$ is drawn by following this path.
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{DrawModel}
\vspace*{-3ex}
\caption{Construction of the drawing of $H$.\label{gkCloseH}}
\end{figure}
We now provide the details of this idea. Initialise $V(G'''):= V(G')$ and $E(G'''):=\{ y_{vw}y_{wv}: vw\in E(H)\}$. Consider each vertex $v\in V(H')$. Consider the vertices $y\in V(X'_v)\setminus\{a_v\}$ in non-increasing order of $\text{dist}_{X'_v}(a_v,y)$ (that is, we consider the vertices of $X'_v$ furthest from $a_v$ first, and then move towards the root). Let $x$ be the parent of $y$ in $X'_v$. The incoming edges at $y$ are copies of $xy$. Each outgoing/undirected edge $yz$ at $y$ is already assigned to one edge $vw$ incident to $v$. Say $yz_1,\dots,yz_q$ are the outgoing/undirected edges of $G''$ incident to $y$ in clockwise order in the drawing of $G''$, where $yz_i$ is assigned to edge $vw_i$.
If $e_1,\dots,e_q$ are the incoming edges at $y$ in clockwise order, then assign $e_{q-i+1}$ to $vw_i$ for each $i\in[q]$.
Now in $G'''$ replace vertex $y$ by vertices $y_1,\dots,y_q$ drawn in a sufficiently small disc around $y$, where $y_i$ is incident to $e_{q-i+1}$ and $y_iz_i$ in $G'''$.
Thus the edges in $G'''$ assigned to $vw$ form a path from $a_v$ to $y_{vw}$ and a path from $a_w$ to $y_{wv}$. Hence $G'''$ is a subdivision of $H'$ (since $y_{vw}y_{wv}$ is an edge of $G'''$). Each edge of $G'''$ has the same number of crossings as the corresponding edge of $G''$.
Thus, the total number of crossings in the drawing of $G'''$ is at most $2kd^2(2r+1)|E(H')|$.
Since $G'''$ is a subdivision of $H'$, the drawing of $G'''$ determines a drawing of $H'$ with the same number of crossings.
Therefore $H$ is $2kd^2(2r+1)$-close to Euler genus $g$.
\end{proof}
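The edge count $|E(G')|\leq(2r+1)|E(H')|$ in the proof is just the handshake identity $\sum_v\deg_{H'}(v)=2\,|E(H')|$; a quick randomised check (illustrative only, not part of the paper):

```python
import random

random.seed(0)
for _ in range(100):
    n = random.randint(2, 10)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if random.random() < 0.4]
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    r = random.randint(0, 5)
    # |E(H')| + r * sum(deg) = (2r+1)|E(H')| since sum(deg) = 2|E(H')|
    assert len(edges) + r * sum(deg) == (2 * r + 1) * len(edges)
print("ok")
```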
We need the following results of \citet{OOW19}:
\begin{align}
\label{gkCloseEdges} \rho( \mathcal{E}_{g,k} ) & \leq 2\sqrt{2k+1}\,\rho_g\\
\label{K3t} K_{3,3k(2g+3)(2g+2)+2} & \not\in \mathcal{E}_{g,k}.
\end{align}
We now reach the main result of this section.
\begin{thm}
\label{gkClose}
For fixed $k,g\in\mathbb{N}_0$ and every fixed forest $T$,
$$C(T,\mathcal{E}_{g,k},n) \in \Theta(n^{\alpha_2(T)}).$$
\end{thm}
\begin{proof}
First we prove the lower bound. By \cref{LowerBound} with $s=2$, for all sufficiently large $n\in\mathbb{N}$, there exists a graph $G$ with $|V(G)|\leq n$ and $\tw(G)\leq 2$ and $C(T,G)\geq c_{\ref{LowerBound}}(\alpha_2(T))\, n^{\alpha_2(T)}$. Since $\tw(G)\leq 2$, $G$ is planar and is thus in $\mathcal{E}_{g,k}$. Hence $C(T,\mathcal{E}_{g,k},n)\in \Omega(n^{\alpha_2(T)})$.
Now we prove the upper bound. Let $s:=2$ and $r:=|V(T)|$ and
$t:= 54k(2r+1)(2g+3)(2g+2)+2$.
Let $G$ be an $n$-vertex graph in $\mathcal{E}_{g,k}$. By \cref{gkCloseEdges}, $\rho(G) \leq 2\sqrt{2k+1}\,\rho_g $. Suppose on the contrary that $I(T,G)\geq cn^{\alpha_2(T)}$ where $c:=c_{\ref{UpperBound}}(s,t,r,2\sqrt{2k+1}\,\rho_g )$.
Let $H:=K_{3,t}$. \cref{UpperBoundCorollary} implies that $G$ contains a $(1,r)$-model $(X_v:v\in V(H))$ of $H$. This model is $r$-shallow and for every vertex $v\in V(H)$ we have $\deg_H(v)\leq 3$ or $|V(X_v)|=1$. Thus \cref{gkCloseShallow} is applicable with $d=3$, implying that $K_{3,t} \in \mathcal{E}_{g,18k(2r+1)}$, which contradicts \cref{K3t}.
\end{proof}
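The constants in this proof can be traced mechanically (illustrative helpers, assuming \cref{gkCloseShallow} with $d=3$ and the bound \cref{K3t}):

```python
def close_genus_param(k, d, r):
    # gkCloseShallow: E_{g,k} maps into E_{g, 2 k d^2 (2r+1)}
    return 2 * k * d * d * (2 * r + 1)

def k3t_threshold(k, g):
    # K_{3, 3k(2g+3)(2g+2)+2} is not in E_{g,k}
    return 3 * k * (2 * g + 3) * (2 * g + 2) + 2

for k in range(1, 4):
    for r in range(1, 5):
        for g in range(3):
            kk = close_genus_param(k, 3, r)   # d = 3 for K_{3,t}-models
            assert kk == 18 * k * (2 * r + 1)
            # matches t = 54k(2r+1)(2g+3)(2g+2) + 2 from the proof
            assert k3t_threshold(kk, g) == \
                54 * k * (2 * r + 1) * (2 * g + 3) * (2 * g + 2) + 2
print("ok")
```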
An almost identical proof to that of \cref{gkCloseShallow} shows the following analogous result for $\SS_{\Sigma,k}$. This can be used to prove \cref{gkPlanar} without using shortcut systems.
\begin{lem}
\label{gkPlanarShallow}
Fix a surface $\Sigma$ and $k,r\in\mathbb{N}_0$ and $d\in\mathbb{N}$. Let $G$ be a graph in $\SS_{\Sigma,k}$ that contains an $r$-shallow $H$-model $(X_v:v\in V(H))$ such that for every vertex $v\in V(H)$ we have $\deg_H(v)\leq d$ or $|V(X_v)|=1$. Then $H$ is in $\SS_{\Sigma,kd^2(2r+1)}$.
\end{lem}
\section{Open Problems}
\label{OpenProblems}
In this paper we determined the asymptotic behaviour of $C(T,\mathcal{G},n)$ as $n\to \infty$ for various sparse graph classes $\mathcal{G}$ and for an arbitrary fixed forest $T$. One obvious question is: what happens when $T$ is not a forest?
For arbitrary graphs $H$, the answer is no longer given by $\alpha_s(H)$. \citet{HJW20} define a more general graph parameter, which they conjecture governs the behaviour of $C(H,\mathcal{G},n)$. An \emph{$s$-separation} of $H$ is a pair $(A,B)$ of edge-disjoint subgraphs of $H$ such that $A \cup B=H$, $V(A) \setminus V(B) \neq \emptyset$, $V(B) \setminus V(A) \neq \emptyset$, and $|V(A) \cap V(B)|=s$. A \emph{$(\leq s)$-separation} is an $s'$-separation for some $s' \leq s$. Separations $(A,B)$ and $(C,D)$ of $H$ are \emph{independent} if $E(A) \cap E(C) = \emptyset$ and $(V(A) \setminus V(B)) \cap (V(C) \setminus V(D))=\emptyset$. If $H$ has no $(\leq s)$-separation, then let $f_s(H):=1$; otherwise, let $f_s(H)$ be the maximum number of pairwise independent $(\leq s)$-separations in $H$.
\begin{conj}[\citep{HJW20}] \label{flopnumber}
Let $\mathcal{B}_{s,t}$ be the class of graphs containing no $K_{s,t}$ minor, where $t\geq s \geq 1$. Then for every fixed graph $H$ with no $K_{s,t}$ minor,
$$C(H,\mathcal{B}_{s,t},n) \in \Theta(n^{f_{s-1}(H)}).$$
\end{conj}
As evidence for \cref{flopnumber}, \citet{Eppstein93} proved it when $f_{s-1}(H)=1$ and \citet{HJW20} proved it when $s \leq 3$ (and that the lower bound holds for all $s \geq 1$). It is easy to show that $f_s(T)=\alpha_s(T)$ for all $s \geq 1$ and every forest $T$. Thus, if true, \cref{flopnumber} would simultaneously generalise \cref{MinorClosedClass} and results from \citep{HJW20}.
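To make the parameter $\alpha_s(T)$ concrete, the following brute-force Python sketch (an illustrative helper, not code from the paper) computes $\alpha_s(T)$ directly from its definition: the size of a largest stable set in the subforest of $T$ induced by the vertices of degree at most $s$.

```python
from itertools import combinations

def alpha(edges, n, s):
    """alpha_s of the graph on vertices 0..n-1 with the given edge list."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    low = [v for v in range(n) if deg[v] <= s]   # vertices of degree <= s
    E = {frozenset(e) for e in edges}
    # brute force: largest subset of `low` with no edge inside it
    for r in range(len(low), 0, -1):
        for sub in combinations(low, r):
            if all(frozenset((u, v)) not in E for u, v in combinations(sub, 2)):
                return r
    return 0

# Path on 4 vertices: with s = 1 only the two leaves qualify, giving 2;
# with s = 2 all vertices qualify and a largest stable set still has size 2.
path4 = [(0, 1), (1, 2), (2, 3)]
print(alpha(path4, 4, 1), alpha(path4, 4, 2))
```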
In light of \cref{Degeneracy} we also conjecture the following generalisation.
\begin{conj}
Let $\mathcal{D}_k$ be the class of $k$-degenerate graphs. Then for every fixed $k$-degenerate graph $H$,
$$C(H,\mathcal{D}_k,n) \in \Theta(n^{f_{k}(H)}).$$
\end{conj}
\subsection*{Acknowledgements}
Many thanks to both referees for several helpful comments.
\subsection*{Note}
Subsequent to this work, \citet{Liu21} disproved Conjectures~16 and 17, amongst many other results.
% arXiv:2009.12989 (math.CO): "Tree densities in sparse graph classes"
% arXiv:1907.09046: "Fujita's conjecture for quasi-elliptic surfaces"
% Abstract: We show that Fujita's conjecture is true for quasi-elliptic surfaces. Explicitly, for any quasi-elliptic surface $X$ and an ample line bundle $A$ on $X$, we have $K_X + tA$ is base point free for $t \geq 3$ and is very ample for $t \geq 4$.
\section{Introduction}
Let $X$ be a smooth projective variety and $A$ be an ample line bundle.
The classical problem is to understand whether the adjoint linear system $K_X+A$ is base point free or very ample.
Thanks to Serre's theorem, we know that $K_X+mA$ is very ample for $m$ sufficiently large,
and there is a great interest in understanding the smallest value of $m$ for which this holds.
In \cite{fujita1987polarized}, Fujita raised the following conjecture.
\begin{conj}[Fujita]
Let $X$ be a smooth projective variety of dimension $n$ and $A$ be an ample line bundle.
Then $K_X+kA$ is base point free (resp. very ample) whenever $k\geq n+1$ (resp. $k\geq n+2$).
\end{conj}
\begin{rmk}
In the conjecture, $n+1$ (resp. $n+2$) is optimal when $X = \bP^n$ and $A = \cO(1)$.
\end{rmk}
For the curve case, this conjecture follows from the Riemann-Roch theorem.
For the surface case in characteristic zero, the conjecture follows from Reider's theorem,
which utilizes the Kawamata-Viehweg vanishing theorem.
But in positive characteristic, this approach fails since \mbox{Raynaud} gave a counterexample to the Kawamata-Viehweg vanishing in \cite{raynaud1978conterexample}.
However, Shepherd-Barron showed in \cite{shepherd1991unstable} that the conjecture still holds for surfaces if $X$ is not quasi-elliptic or of general type.
In this paper, we will show the following.
\begin{thm}\label{main}
Fujita's conjecture holds for quasi-elliptic surfaces $X$.
That is, given a quasi-elliptic surface $X$ and any ample divisor $A$ on $X$, we have
\begin{enumerate}
\item $K_X+kA$ is base point free for $k\geq 3$; and
\item $K_X+kA$ is very ample for $k\geq 4$.
\end{enumerate}
\end{thm}
To prove this result, we follow the ideas of \cite{di2015effective} together with a careful case-by-case study.
Note that, in \cite{di2015effective}, it is proved that, when $p=3$, $K_X+kA$ is base point free for $k\geq 4$ and it is very ample for $k\geq 8$;
and when $p=2$, $K_X+kA$ is base point free for $k\geq 5$ and it is very ample for $k\geq 19$.
\section{Preliminaries}
\subsection{Hodge Inequality}
\begin{thm}\label{Hodeg_ineq}
Let $X$ be a smooth projective surface over an algebraically closed field $k$ and $N$ a nef divisor on $X$.
Then for any divisor $D$ on $X$, we have the following
\[N^2D^2\leq (N.D)^2.\]
Moreover, if $N$ is ample, then the equality holds only when $D$ is numerically proportional to $N$.
\end{thm}
\begin{pf}
Since we can approximate nef divisors by ample $\bQ$-divisors and the desired inequality is homogeneous,
we can reduce to the case when $N$ is ample.
Now let's consider $E = (N.D)N - N^2D$.
Notice that $E.N = 0$.
Then by the Hodge index theorem, we have $E^2\leq 0$ and we get the desired inequality.
Moreover, the equality holds only when $E\equiv 0$.
That is, $D$ is numerically proportional to $N$. \qed
\end{pf}
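The inequality above is a reversed Cauchy-Schwarz inequality forced by the hyperbolic signature of the intersection form. As a toy illustration (not part of the paper), on a rank-2 lattice with form $\mathrm{diag}(1,-1)$, any $N$ with $N^2>0$ satisfies $N^2D^2\leq(N.D)^2$ for every $D$, with equality exactly when $D$ is proportional to $N$:

```python
def dot(u, v):
    # intersection form diag(1, -1): signature (1,1)
    return u[0] * v[0] - u[1] * v[1]

checked = 0
for a in range(-5, 6):
    for b in range(-5, 6):
        N = (a, b)
        if dot(N, N) <= 0:
            continue                      # require N^2 > 0
        for c in range(-5, 6):
            for d in range(-5, 6):
                D = (c, d)
                assert dot(N, N) * dot(D, D) <= dot(N, D) ** 2
                if dot(N, N) * dot(D, D) == dot(N, D) ** 2:
                    # equality iff a*d - b*c = 0, i.e. D proportional to N
                    assert a * d - b * c == 0
                checked += 1
print("checked", checked)
```

The identity behind both assertions is $(N.D)^2 - N^2D^2 = (ad-bc)^2 \geq 0$.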
\subsection{Unstability}
In this section we recall some classical results about smooth surfaces in positive characteristic.
\begin{defn}
A rank-two vector bundle $\cE$ on $X$ is unstable if it fits into a short exact sequence
\[\xymatrix{0\ar[r] & \cO_X(D_1)\ar[r] & \cE \ar[r] & \cI_Z\cdot\cO_X(D_2) \ar[r] & 0}\]
where $D_1$ and $D_2$ are effective Cartier divisors such that $D' := D_1 - D_2$ is big and $(D')^2>0$
and $Z$ is an effective 0-cycle on $X$.
\end{defn}
\begin{defn}
A big divisor $D$ on a smooth surface $X$ with $D^2>0$ is $m$-unstable for a positive integer $m$ if either
\begin{itemize}
\item $H^1(X,\cO_X(-D)) = 0$; or
\item $H^1(X,\cO_X(-D)) \neq 0$ and there exists a nonzero effective divisor $E$ such that
\begin{itemize}
\item $mD-2E$ is big;
\item $(mD-E).E\leq 0$.
\end{itemize}
\end{itemize}
\end{defn}
In \cite{bogomolov1978holomorphic}, Bogomolov showed that, in characteristic zero, every rank-two vector bundle $\cE$ on a smooth surface
with $c_1^2(\cE)>4c_2(\cE)$ is unstable.
Also, in positive characteristic, there is a result related to the unstability of vector bundles.
\begin{thm}[Bogomolov]\label{lift}
Let $\cE$ be a rank-two vector bundle on a smooth projective surface $X$ over a field of positive characteristic
such that Bogomolov's inequality $c_1^2(\cE)\leq 4c_2(\cE)$ does not hold (that is, such that $c_1^2(\cE)>4c_2(\cE)$).
Then there exists a reduced and irreducible surface $Y$ contained in the ruled threefold $\bP(\cE)$ such that
\begin{itemize}
\item the restriction $\rho : Y \to X$ is $p^e$-purely inseparable for some $e>0$.
\item $(F^e)^*\cE$ is unstable.
\end{itemize}
Moreover, we have \[K_Y \equiv \rho^*\left(K_X-\frac{p^e-1}{p^e}D'\right)\]
where \[\xymatrix{0\ar[r] & \cO_X(D_1)\ar[r] & F^{e*}\cE \ar[r] & \cI_Z\cdot\cO_X(D_2) \ar[r] & 0}\]
is a destabilizing sequence for $(F^e)^*\cE$ and $D' = D_1-D_2$.
\end{thm}
\begin{pf}
See \cite[Theorem 1]{shepherd1991unstable}.
\end{pf}
\begin{rmk}\label{key_rmk}
If $H^1(X,\cO_X(-D))\neq 0$, $D^2>0$, and $D$ is big,
then $D$ is $p^e$-unstable for some $e>0$.
Indeed, from the assumption, there exists a non-split short exact sequence
\[\xymatrix{0\ar[r] & \cO_X \ar[r] & \cE \ar[r] & \cO_X(D) \ar[r] & 0}\]
given by a nonzero element of $\mbox{Ext}^1(\cO_X(D),\cO_X) \cong H^1(X,\cO_X(-D))$,
where $\cE$ is a vector bundle of rank two.
Note that $c_1^2(\cE)-4c_2(\cE) = D^2>0$.
By Theorem~\ref{lift}, we have the following diagram.
\[\xymatrix{ & & 0 \ar[d] & & \\
& & \cO_X \ar[d]^-{g_1} & & \\
0 \ar[r] & \cO_X(D_1) \ar[r]^-{f_1} \ar[rd]_-\tau & (F^e)^*\cE \ar[r]^-{f_2} \ar[d]^-{g_2} & \cI_Z\cdot\cO_X(D_2) \ar[r] & 0 \\
& & \cO_X(p^eD) \ar[d] & & \\
& & 0 & & }\]
We would like to claim that $\tau = g_2\circ f_1$ is not zero.
Indeed, if $\tau = 0$, then $f_1 = g_1\circ\tau'$ where $\tau'$ is a nonzero map from $\cO_X(D_1)$ to $\cO_X$.
That means $-D_1$ is linearly equivalent to an effective divisor.
Since $D_1-D_2$ is big, we have $-D_2$ is also big.
Now notice that $D_1+D_2\equiv c_1((F^e)^*\cE) \equiv p^eD$ is big, and the intersection of a big divisor with an ample divisor is positive.
Thus, for any ample divisor $H$, we have
\[0<p^eD.H = (D_1+D_2).H = -(-D_1).H - (-D_2).H < 0, \mbox{ which is impossible.}\]
Hence $\tau\neq 0$, and so $D_2 \equiv c_1((F^e)^*\cE) -D_1\equiv p^eD-D_1$ is effective.
So $p^eD-2D_2\equiv D_1-D_2$ is big and
\[(p^eD-D_2).D_2 = D_1.D_2 = c_2((F^e)^*\cE) - \deg Z = -\deg Z \leq 0,\]
where $c_2((F^e)^*\cE)=0$ since $c_2(\cE)=0$ for the extension $\cE$ of $\cO_X(D)$ by $\cO_X$.
Also $D_2\neq 0$ since otherwise the vertical exact sequence
\[\xymatrix{0\ar[r] & \cO_X \ar[r] & \cE \ar[r] & \cO_X(D) \ar[r] & 0}\]
splits, which is a contradiction.
To sum up, $D$ is $p^e$-unstable. \qed
\end{rmk}
\subsection{Bend and Break}
We recall a well-known result in birational geometry.
\begin{thm}[Bend-and-Break]\label{BB}
Let $X$ be a variety over an algebraically closed field and let $C$ be a smooth, projective, and irreducible curve with a morphism
$h : C \to X$ such that $X$ has only local complete intersection singularities along $h(C)$ and $h(C)$ intersects the smooth locus of $X$.
Assume $K_X.C<0$.
Then for every point $x\in h(C)$, there exists a rational curve $C_x$ in $X$ passing through $x$ such that
we have an algebraic equivalence \[h_*[C] \approx k_0[C_x] + \sum_{i\neq 0}k_i[C_i]\]
with $k_i\geq 0$ for all $i$ and $-K_X.C_x\leq \dim X+1$.
\end{thm}
For a reference, see \cite[Theorem II.5.14, its proof, Remark II.5.15, and Theorem II.5.7]{kollar2013rational}.
\section{Proof}
Recall that if $X$ is a quasi-elliptic surface over an algebraically closed field $k$, then the characteristic of $k$ is 2 or 3.
From now on, $X$ and $Y$ are quasi-elliptic surfaces and $A$ is an ample divisor on $X$.
Also let $p\in\{2,3\}$ be the characteristic of the base field.
\begin{prop}\label{unstable}
Let $X$ be a quasi-elliptic surface and $D$ a big divisor on $X$ with $D^2>0$.
Then $D$ is $p$-unstable. Moreover, if $H^1(X,\cO_X(-D))\neq 0$, we have
\begin{itemize}
\item $(3D-2E).F = 1$ when $p=3$
\item $(D-E).F = 1$ when $p=2$
\end{itemize}
where $E$ is a non-zero effective divisor associated to the $p$-unstability of $D$ and $F$ is a general fiber of the canonical fibration on $X$.
\end{prop}
\begin{pf}
Assume that $H^1(X,\cO_X(-D))\neq 0$.
Then any non-zero element of $H^1(X,\cO_X(-D))$ gives a non-split extension
\[0\to\cO_X\to \cE \to \cO_X(D) \to 0.\]
By Theorem~\ref{lift}, $(F^e)^*\cE$ is unstable for some $e>0$; let $\rho : Y \to X$ be the associated $p^e$-purely inseparable morphism.
Following Remark~\ref{key_rmk}, we want to show that $e=1$.
Let $F$ be the general fiber of the canonical fibration $f : X \to B$, $C = \rho^*F$, and $g = f\circ \rho$.
Note that $F$ is rational and
\begin{eqnarray*}
-K_Y.C &=& \rho^*\left(\frac{p^e-1}{p^e}D'-K_X\right).C \\
&=& \rho^*\left(\frac{p^e-1}{p^e}(p^eD-2E)-K_X\right).C \\
&=& p^e\left(\frac{p^e-1}{p^e}(p^eD-2E)-K_X\right).F \\
&=& (p^e-1)(p^eD-2E).F >0
\end{eqnarray*}
where the first equality follows from Theorem~\ref{lift},
the second follows from $D'\equiv p^eD-2E$ in Remark~\ref{key_rmk},
the fourth follows since the arithmetic genus of $F$ is 1,
and the last inequality follows from bigness of $p^eD-2E$ and $F$ being a fiber.
More precisely, $p^eD-2E\sim_\bQ A+(\mbox{effective})$, $A.F>0$, and $(\mbox{effective}).F\geq 0$ since $F$ is a fiber.
Now because $Y$ is defined\footnotemark via a quasi-section of $\bP((F^e)^*\cE)$,
\footnotetext{See the proof of Theorem~\ref{lift}, which is \cite[Theorem 1]{shepherd1991unstable}. }
it has hypersurface singularities along $C$.
By applying Theorem~\ref{BB} for any general point $y$, there exists a rational curve $C_y$ passing through $y$ such that
\[-K_Y.C_y\leq 3 \mbox{ with } C \approx k_0[C_y] + \sum_{i\neq 0}k_i[C_i]\]
By exercise II.4.1.10 in \cite{kollar2013rational}, we know that every curve $C_i$ on the right hand side of the equivalence above is in the fiber of $g$.
Note that any general fiber of $g$ is irreducible.
So each $C_i$ and $C_y$ is algebraically equivalent to $C$.
Thus, we have $-K_Y.C\leq 3$.
This gives
\[3\geq -K_Y.C = (p^e-1)(p^eD-2E).F>0\]
When $p=3$, we have $\mbox{RHS}\geq 3^e-1\geq 8$ if $e\geq 2$, which is impossible.
When $p=2$, we have $\mbox{RHS} = 2(2^e-1)(2^{e-1}D-E).F\geq 2(2^e-1)\geq 6$ if $e\geq 2$, which is impossible.
Thus, $e$ must be $1$ and $D$ is $p$-unstable.
When $p=3$, we have
\[(p^e-1)(p^eD-2E).F = 2(3D-2E).F,\]
a positive even integer at most $3$, hence equal to $2$.
So $(3D-2E).F=1$.
When $p=2$, we have
\[(p^e-1)(p^eD-2E).F = (2D-2E).F = 2(D-E).F,\]
again a positive even integer at most $3$, hence equal to $2$.
So $(D-E).F=1$. \qed
\end{pf}
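The numerical case analysis at the end of the proof can be summarised as a standalone arithmetic check (illustrative only, not part of the paper):

```python
# For e >= 2 the quantity (p^e - 1)(p^e D - 2E).F would exceed 3:
for e in range(2, 8):
    assert 3 ** e - 1 >= 8        # p = 3 case
    assert 2 * (2 ** e - 1) >= 6  # p = 2 case

# For e = 1 the quantity is even and lies in {1, 2, 3}, so it equals 2:
assert [v for v in range(1, 4) if v % 2 == 0] == [2]
print("ok")
```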
\begin{prop}\label{key_prop}
Let $\pi : Y \to X$ be a birational morphism between two smooth surfaces and let $\wt{D}$ be a big Cartier divisor on $Y$ such that $\wt{D}^2>0$.
Assume there is a non-zero effective divisor $\wt{E}$ such that
\begin{itemize}
\item $\wt{D}-2\wt{E}$ is big and
\item $(\wt{D}-\wt{E}).\wt{E}\leq 0$.
\end{itemize}
Set $D = \pi_*\wt{D}$, $E = \pi_*\wt{E}$ and $\alpha = D^2-\wt{D}^2$.
If $D$ is nef and $E$ is a nonzero effective divisor, then
\begin{itemize}
\item $0\leq D.E<\alpha/2$
\item $D.E-\alpha/4\leq E^2\leq (D.E)^2/D^2$.
\end{itemize}
\end{prop}
\begin{pf}
See \cite[Proposition 2]{sakai1990reider}.
\end{pf}
\begin{cor}\label{blowup}
Let $\pi : Y \to X$ be a birational morphism between two smooth surfaces and let $\wt{D}$ be a big Cartier divisor on $Y$ such that $\wt{D}^2>0$.
Assume that
\begin{itemize}
\item $H^1(Y,\cO_Y(-\wt{D}))\neq 0$;
\item $\wt{D}$ is $m$-unstable for some $m>0$.
\end{itemize}
That is, there exists a nonzero effective divisor $\wt{E}$ such that
\begin{itemize}
\item $m\wt{D} - 2\wt{E}$ is big and
\item $(m\wt{D}-\wt{E}).\wt{E}\leq 0$.
\end{itemize}
Set $D = \pi_*\wt{D}$, $E = \pi_*\wt{E}$ and $\alpha = D^2-\wt{D}^2$.
If $D$ is nef and $E$ is a nonzero effective divisor, then
\begin{itemize}
\item $0\leq D.E<m\alpha/2$
\item $mD.E-m^2\alpha/4\leq E^2\leq (D.E)^2/D^2$.
\end{itemize}
\end{cor}
\begin{pf}
Write $\wt{B} = m\wt{D}$.
Since $\wt{D}$ is $m$-unstable, $\wt{B}$ is 1-unstable.
Thus, we can use Proposition~\ref{key_prop} above.
Note that $\alpha_B = B^2-\wt{B}^2 = m^2\alpha_D$. \qed
\end{pf}
\vspace{12pt}
\begin{pf}[Proof of Theorem~\ref{main}]
We divide the proof into several steps.
Recall that for any quasi-elliptic surface $X$, the characteristic $p$ of the base field is $2$ or $3$.
\begin{itemize}
\item Base point freeness.
\begin{enumerate}[(a)]
\item Let $D = kA$ and assume that $|K_X+D|$ has a base point at $x\in X$.
Let $\pi : Y \to X$ be the blow-up at $x$.
Since $x$ is a base point, we have that
\[H^1(X,\cO_X(K_X+D)\otimes \mathfrak{m}_x) = H^1(Y,\cO_Y(K_Y+\pi^*D-2E_x))\neq 0\] where $E_x$ is the exceptional divisor of $\pi$.
Let $\wt{D} = \pi^*D-2E_x$.
In order to apply Proposition~\ref{unstable}, we need to check $\wt{D}$ is big and $\wt{D}^2>0$.
Note that
\begin{eqnarray*}
h^0(Y,\cO_Y(\ell(\pi^*D-2E_x))) &=& h^0(X, \cO_X(\ell D)\otimes \mathfrak{m}_x^{2\ell}) \\
&\geq & \frac{D^2}{2}\ell^2 + O(\ell) - \binom{2\ell + 1}{2} \\
&=& \frac{k^2A^2-4}{2}\ell^2+O(\ell).
\end{eqnarray*}
So $\wt{D}$ is big whenever $k\geq 3$.
Also note that $\wt{D}^2 = D^2-4 = k^2A^2-4\geq 5$.
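For the reader's convenience, the binomial term in the estimate above can be spelled out; this is a standard count of the linear conditions imposed by a point of high multiplicity, not taken from the source:

```latex
% Vanishing to order 2\ell at a point x of a smooth surface imposes at most
% \binom{2\ell+1}{2} linear conditions on sections: in local coordinates a
% section must contain no monomial x_1^a x_2^b with a+b < 2\ell, and there
% are 1 + 2 + \dots + 2\ell = \binom{2\ell+1}{2} = 2\ell^2 + \ell such
% monomials.  Combined with Riemann--Roch on the surface X, this gives
h^0\bigl(X,\cO_X(\ell D)\otimes\mathfrak m_x^{2\ell}\bigr)
\;\geq\; \frac{D^2}{2}\ell^2+O(\ell)-\bigl(2\ell^2+\ell\bigr)
\;=\;\frac{D^2-4}{2}\ell^2+O(\ell).
```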
By Proposition~\ref{unstable} on $Y$ and $\wt{D}$, we have that $\wt{D}$ is $p$-unstable.
So there is a nonzero effective divisor $\wt{E}$ such that $p\wt{D}-2\wt{E}$ is big and $(p\wt{D}-\wt{E}).\wt{E}\leq 0$.
This implies that $\wt{E}$ is not a positive multiple of the exceptional divisor and so $E = \pi_*\wt{E}$ is a non-zero effective divisor.
Also $\pi_*\wt{D} = D = kA$ is ample and $\alpha = D^2 - \wt{D}^2 = 4$.
Hence, by Corollary~\ref{blowup}, we have
\[\begin{array}{l}
0\leq kA.E < 2p\leq 6 \\
pkA.E-p^2\leq E^2\leq (A.E)^2/A^2
\end{array}\]
So we get $0<A.E < \frac{6}{k}\leq 2$.
Thus, $A.E = 1$ and so $E$ is an irreducible curve.
The second inequality becomes
\begin{equation}\label{bpf}
pk-p^2\leq E^2\leq 1/A^2\leq 1
\end{equation}
\item If $p=2$, then $2\leq 2k-4 \leq E^2\leq 1$, which is impossible.
\item If $p=3$, then $3k-9\leq E^2\leq 1$.
This only happens when $k=3$ and $E^2=0$ or $1$.
Now, by Proposition~\ref{unstable}, we have
\begin{equation}\label{p3bpf}
(3\wt{D}-2\wt{E}).F = (9A-2E).F=1 \mbox{ because $F$ is a general fiber.\footnotemark}
\end{equation}
\footnotetext{By abuse of notation, $F$ denotes a general fiber of $X$ and $Y$.}
Since $F$ is nef and $A$ is ample, we get $9A.F\geq 9$ and so, by~(\ref{p3bpf}), we have
\begin{equation}\label{cf4}
E.F\geq 4.
\end{equation}
Since $E$ is an irreducible curve with $E^2\geq 0$, the divisor $E$ is nef, and hence so is $E+F$.
\item If $E^2 = 1$, then $A^2 = 1$ by equation~(\ref{bpf}) and $A$ is numerically equivalent to $E$ by Theorem~\ref{Hodeg_ineq}.
Thus, from~(\ref{p3bpf}), we have $7A.F = 1$, which is impossible.
\item So $E^2 = 0$.
Now by Theorem~\ref{Hodeg_ineq} applied to $9A-2E$ and $E+F$, we have
\[(9A-2E)^2(E+F)^2\leq \left((9A-2E).(E+F)\right)^2.\]
Thus, by an easy computation, we have
\[(81A^2-36)(2F.E)\leq (9A.E+(9A-2E).F)^2.\]
Since $A.E=1$ and $(9A-2E).F = 1$, the right hand side equals $100$ and so
\[5\leq 9A^2-4\leq \frac{100}{18(F.E)} \leq \frac{100}{18\times 4}\leq 2, \mbox{ which is impossible.}\]
Hence, we have shown the freeness part of Fujita's conjecture.
\end{enumerate}
\item Very ampleness.
\begin{enumerate}[(a)]
\item Let $D = kA$. We want to show $|K_X+D|$ separates points and tangents.\footnotemark
\footnotetext{See~\cite[Proposition II.7.3]{hartshorne1977algebraic}. }
Assume that $|K_X+D|$ does not separate the points $x$ and $y$ (resp.\ does not separate tangents at $x$).
It therefore suffices to show that
\begin{eqnarray*}
& & H^1(X,\cO_X(K_X+D)\otimes\mathfrak{m}_x\otimes\mathfrak{m}_y) = H^1(Y,\cO_Y(K_Y+\pi^*D-2E_x-2E_y))\neq 0 \\
&\mbox{resp.}& H^1(X,\cO_X(K_X+D)\otimes\mathfrak{m}^2_x) = H^1(Y,\cO_Y(K_Y+\pi^*D-3E_x))\neq 0
\end{eqnarray*}
is \emph{impossible}, where $\pi : Y\to X$ is the blow-up of $X$ at $x$ and $y$ and $E_x, E_y$ are the exceptional divisors
(resp.\ $\pi : Y\to X$ is the blow-up of $X$ at $x$ and $E_x$ is the exceptional divisor).
Now let $\wt{D} = \pi^*D-2E_x-2E_y$ (resp. $\wt{D} = \pi^*D-3E_x$).
By the above argument, $\wt{D}$ is big and $\wt{D}^2>0$ whenever $k\geq 4$.
Applying Proposition~\ref{unstable} to $Y$ and $\wt{D}$, we have that $\wt{D}$ is $p$-unstable.
So there is a nonzero effective divisor $\wt{E}$ such that $p\wt{D}-2\wt{E}$ is big and $(p\wt{D}-\wt{E}).\wt{E}\leq 0$.
This implies that $\wt{E}$ is not a sum of multiples of the exceptional divisors and so $E = \pi_*\wt{E}$ is a non-zero effective divisor.
Also $\pi_*\wt{D} = D = kA$ is ample and $\alpha = D^2 - \wt{D}^2 = 8$ (resp. $9$).
Hence, by Corollary~\ref{blowup}, we have
\begin{equation}\label{va}
\begin{array}{l}
0\leq kA.E < p\alpha/2\\
pkA.E-p^2\alpha/4\leq E^2\leq (A.E)^2/A^2
\end{array}
\end{equation}
\item By Proposition~\ref{unstable}, when $p=3$, we have
\[(3\wt{D}-2\wt{E}).F = 1.\]
Then we get
\begin{equation}\label{va2}
(3kA-2E).F=1.
\end{equation}
If $k$ is even, then the left hand side is even, so it cannot equal $1$, which is impossible.
\item If $k$ is odd, using~(\ref{va}), we get
\begin{equation}\label{vap3kodd}
\begin{array}{l}
0< kA.E < \frac{3}{2}\alpha\leq \frac{27}{2}\\
3kA.E-\frac{9}{4}\alpha\leq E^2\leq (A.E)^2/A^2
\end{array}
\end{equation}
Then we have $A.E=1$ or $2$ since $A.E<\frac{27}{2k}\leq\frac{27}{10}<3$.
If $A.E=2$, then we have
\[9< 6k-\frac{81}{4}\leq E^2\leq 4/A^2\leq 4, \mbox{ which is impossible.}\]
Thus $A.E=1$ and so $E$ is an irreducible curve.
Also, from~(\ref{vap3kodd}), we have
\[3k-\frac{81}{4}\leq 3k-\frac{9\alpha}{4}\leq E^2\leq\frac{1}{A^2}\leq 1.\]
Thus $k$ is 5 or 7 and
\begin{equation}\label{E2geq-5}
-5\leq E^2\leq 1.
\end{equation}
Using~(\ref{va2}) again, we have $1+2E.F = 3kA.F\geq 15$.
So
\begin{equation}\label{efgeq7}
E.F\geq 7
\end{equation}
and $E+F$ is nef.
Now by Theorem~\ref{Hodeg_ineq} applied to $3kA-2E$ and $E+F$, we have
\[(3kA-2E)^2(E+F)^2\leq ((3kA-2E).(E+F))^2\]
Note that the left hand side
\begin{eqnarray*}
(3kA-2E)^2(E+F)^2 &=& (9k^2A^2-12k+4E^2)(E^2+2E.F) \\
&\geq & (4E^2+141)(E^2+2E.F) \\
&\geq & (4E^2+141)(E^2+14)
\end{eqnarray*}
where the first inequality comes from $A^2\geq 1$, $5\leq k\leq 7$, and nefness of $E+F$;
and the second inequality comes from~(\ref{E2geq-5}) and (\ref{efgeq7}).
And the right hand side
\begin{eqnarray*}
((3kA-2E).(E+F))^2 &=& (1+3k-2E^2)^2 \\
&\leq & (2E^2-22)^2
\end{eqnarray*}
where the equality comes from~(\ref{va2}) and (\ref{E2geq-5}) and the inequality comes from $k\leq 7$.
Thus, by an easy computation, we get $E^2\leq -6$, which contradicts~(\ref{E2geq-5}).
\item Now we deal with $p=2$.
The inequalities~(\ref{va}) becomes
\[\begin{array}{l}
0< kA.E < \alpha\leq 9\\
2kA.E-\alpha\leq E^2\leq (A.E)^2/A^2
\end{array}\]
Hence, $A.E = 1$ or $2$.
\item If $A.E = 2$, then we have $7\leq 4k-\alpha \leq E^2\leq 4/A^2\leq 4$, which is impossible.
\item Thus we have $A.E = 1$ and so $E$ is an irreducible curve.
Then \[2k-9\leq 2k-\alpha\leq E^2\leq 1/A^2\leq 1.\]
So $k=5$ and $E^2=1$; or $k=4$ and $-1\leq E^2\leq 1$.
Now again by Proposition~\ref{unstable}, we have $(kA-E).F=1$.
So $E.F\geq 3$.
Thus, $E+F$ is nef.
Applying Theorem~\ref{Hodeg_ineq} to $kA-E$ and $E+F$, we get
\begin{equation}\label{va_p2}
(kA-E)^2(E+F)^2\leq \left((kA-E).(E+F)\right)^2.
\end{equation}
\item If $k = 5$ and $E^2 = 1$, then $A\equiv E$.
But \[1 = (5A-E).F = 4A.F\]
which is impossible.
\item Thus $k=4$.
When $E^2 = 1$, then by the above argument, this case is impossible.
When $E^2 = 0$, from~(\ref{va_p2}), we have
\[2A^2-1\leq \frac{25}{16(E.F)}\leq \frac{25}{16\times 3}<1,\]
which is impossible.
When $E^2 = -1$, from~(\ref{va_p2}), we have
\[(16A^2-9)(2E.F-1)\leq 36\]
Since $E.F\geq 3$, we have $A^2 = 1$.
Thus $E.F=3$, and so $A.F = 1$ from $(4A-E).F=1$.
However, it is also impossible since, by Theorem~\ref{Hodeg_ineq}, we have
\[5 = A^2(E+F)^2\leq (A.(E+F))^2 = 4.\]\qed
\end{enumerate}
\end{itemize}
\end{pf}
\bibliographystyle{amsalpha}
\addcontentsline{toc}{chapter}{\bibname}
\normalem
arXiv:1907.09046 (math.AG, July 2019): ``Fujita's conjecture for quasi-elliptic surfaces.''
(https://arxiv.org/abs/1907.09046)

Abstract: We show that Fujita's conjecture is true for quasi-elliptic surfaces. Explicitly, for any quasi-elliptic surface $X$ and an ample line bundle $A$ on $X$, we have $K_X + tA$ is base point free for $t \geq 3$ and is very ample for $t \geq 4$.
arXiv:0811.1075: ``Resolution Trees with Lemmas: Resolution Refinements that Characterize DLL Algorithms with Clause Learning.''
(https://arxiv.org/abs/0811.1075)

Abstract: Resolution refinements called w-resolution trees with lemmas (WRTL) and with input lemmas (WRTI) are introduced. Dag-like resolution is equivalent to both WRTL and WRTI when there is no regularity condition. For regular proofs, an exponential separation between regular dag-like resolution and both regular WRTL and regular WRTI is given. It is proved that DLL proof search algorithms that use clause learning based on unit propagation can be polynomially simulated by regular WRTI. More generally, non-greedy DLL algorithms with learning by unit propagation are equivalent to regular WRTI. A general form of clause learning, called DLL-Learn, is defined that is equivalent to regular WRTL. A variable extension method is used to give simulations of resolution by regular WRTI, using a simplified form of proof trace extensions. DLL-Learn and non-greedy DLL algorithms with learning by unit propagation can use variable extensions to simulate general resolution without doing restarts. Finally, an exponential lower bound for WRTL where the lemmas are restricted to short clauses is shown.
\section{A Lower Bound for RTLW with short lemmas}
\label{sec:smlem}
In this section we prove a lower bound showing that learning only
short clauses does not help a DLL algorithm for certain hard formulas.
The proof system corresponding to DLL algorithms with learning
restricted to clauses of length $k$ is, according to
\pref{sec:regwrti-dll}, regWRTI with the additional restriction that
every used lemma is a clause of length at most $k$. We prove a lower
bound for a stronger proof system that allows arbitrary lemmas instead of just
input lemmas, drops the regularity restriction, and uses the general
weakening rule instead of just w-resolution, i.e., RTLW as defined
in \pref{sec:wrtl}. We define RTLW($k$) to be the restriction of RTLW
in which every lemma used, i.e., every leaf label that does not occur
in the initial formula, is of size at most $k$.
The hard example formulas we prove the lower bound for are the
well-known Pigeonhole Principle formulas. This principle states that
there can be no 1-to-1 mapping from a set of size $n+1$ into a set of
size $n$. In propositional logic, the negation of this principle
gives rise to an unsatisfiable set of clauses $PHP_n$ in the variables
$x_{i,j}$ for $1 \leq i\leq n+1$ and $1 \leq j\leq n$\@. The variable
$x_{i,j}$ is intended to state that $i$ is mapped to $j$. The set
$PHP_n$ consists of the following clauses:
\begin{enumerate}[$\bullet$]
\item the \emph{pigeon clause} $P_i = \bigl\{ x_{i,j} \, ;\, 1\leq j \leq n
\bigr\}$ for every $1\leq i\leq n+1$.
\item the \emph{hole clause} $H_{i,j,k} = \{ \bar{x}_{i,k} , \bar{x}_{j,k} \}$
for every $1\leq i<j \leq n+1$ and $1\leq k \leq n$.
\end{enumerate}
It is well-known that the pigeonhole principle requires exponential
size dag-like resolution proofs: Haken \cite{Haken85} shows that every
RD refutation of $PHP_n$ is of size $2^{\Omega(n)}$.
Note that the number of variables is $O(n^2)$, so that this lower
bound is far from maximal. In fact, Iwama and Miyazaki \cite{iwamiy99}
prove a larger lower bound for tree-like refutations.
\begin{thm}[Iwama and Miyazaki \cite{iwamiy99}] \label{the:treelb}
Every resolution tree refutation of $PHP_n$ is of size at least
$(n/4)^{n/4}$.
\end{thm}
We will show that for $k\leq n/2$, RTLW($k$) refutations of $PHP_n$
are asymptotically of the same size $2^{\Omega(n\log n)}$ as
resolution trees. On the other hand, it is known \cite{BusPit97} that
dag-like resolution proofs need not be much larger than Haken's lower
bound: there exist RD refutations of $PHP_n$ of size $2^n\cdot n^2$.
These refutations are even regular, and thus can be simulated by
regWRTI. Hence $PHP_n$ can be solved in time $2^{O(n)}$ by some
variant of \textsc{DLL-L-UP} when learning arbitrary long clauses,
whereas our lower bound shows that any DLL algorithm that learns only
clauses of size at most $n/2$ needs time $2^{\Omega(n\log n)}$.
In fact, we will prove our lower bound for the weaker
\emph{functional} pigeonhole principle $FPHP_n$, which also includes
the following clauses:
\begin{enumerate}[$\bullet$]
\item The functional clause $F_{i,j,k} = \{ \bar{x}_{i,j} , \bar{x}_{i,k} \}$
for every $1\leq i \leq n+1$ and every $1\leq j<k\leq n$.
\end{enumerate}
While the lower bound of Iwama and Miyazaki is only stated for the
clauses $PHP_n$, it is easily verified that their proof works as well
when the functional clauses are added to the formula.
Our lower bound proof uses the fact that resolution trees with
weakening (RTW) are natural, i.e., preserved under restrictions in the
following sense:
\begin{prop}
Let $R$ be an RTW proof of $C$ from $F$ of size $s$, and let $\rho$ be a
restriction. There is an RTW proof $R'$ for $\rest{C}{\rho}$ from
$\rest{F}{\rho}$ of size at most $s$.
\end{prop}
We denote the resolution tree $R'$ by $\rest{R}{\rho}$. Since this
proposition is well-known, a proof will not be given.
Next, we need to bring refutations in RTLW($k$) to a certain normal
form. First, we show that it is unnecessary to use clauses as lemmas
that are subsumed by axioms in the refuted formula.
\begin{lem} \label{lem:subs} If there is an RTLW($k$) refutation of
some formula $F$ of size $s$, then there is an RTLW($k$) refutation
of $F$ of size at most $2s$ in which no clause $C$ with $C \supseteq
D$ for some clause $D$ in $F$ is used as a lemma.
\end{lem}
\proof
If a clause $C$ with $C\supseteq D$ for some $D\in F$ is used as a
lemma, replace every leaf labeled $C$ by a weakening inference of
$C$ from $D$.
\qed
Secondly, we need the fact that an RTLW($k$) refutation does not
need to use any tautological clauses, i.e., clauses of the form
$C \cup \{ x , \bar{x}\}$ for a variable $x$.
\begin{lem} \label{lem:taut}
If there is an RTLW($k$) refutation of some formula $F$ of size $s$,
then there is an RTLW($k$) refutation of $F$ of size at most $s$ that
contains no tautological clause.
\end{lem}
\proof
Let $P$ be an RTLW($k$)-refutation of $F$ of size $s$ that contains
$t$ occurrences of tautological clauses. We transform $P$ into a
refutation $P'$ of size $|P'|\leq s$ such that $P'$ contains fewer
than $t$ occurrences of tautological clauses. Finitely many
iterations of this process yield the claim.
We obtain $P'$ as follows.
Since the final clause of~$P$ is not tautological, if
$t>0$, there must be a
tautological clause $C \cup
\{x,\bar{x}\}$ which is resolved with a clause $D\cup \{x\}$
to yield a non-tautological clause $C\cup D\cup \{x\}$.
The idea is to cut out the subtree~$T_0$ that derives
the clause $C\cup\{x,\bar x\}$, and derive $C\cup D\cup\{x\}$
by a weakening from $D\cup \{ x\}$. This gives a ``proof''~$P_0$
with fewer tautological clauses than~$P$.
However, $P_0$~may not be a valid proof, since
some of the clauses in~$T_0$ might be used as lemmas in $P_0$.
To fix this, we shall extract
parts of~$T_0$ and plant them onto~$P_0$
so that all lemmas used are derived. In order to make this
construction precise, we need the notion of trees in which some of
the used lemmas are not derived.
A \emph{partial RTLW} from~$F$
is defined to be a tree~$T$ which satisfies all the
conditions of an RTLW, except that some leaves may be
labeled by clauses that occur neither in~$F$ nor earlier in
$T$; these are called the \emph{open leaves} of~$T$.
We construct $P'$ in stages by defining, for $i\geq 0$, a partial
RTLW refutation~$P_i$ of~$F$ and a partial RTLW derivation~$T_i$
of $C\cup\{x,\bar x\}$ from~$F$
with the following properties:
\begin{enumerate}[$\bullet$]
\item All open leaves in $P_i$ appear
in $T_i$. The first open leaf in~$P_i$ is denoted $C_i$.
\item All open leaves in $T_i$ appear in $P_i$ before~$C_i$.
\item $|P_i| + |T_i| = |P|$.
\end{enumerate}
$P_0$ and~$T_0$ were defined above and certainly satisfy these properties.
Given $P_i$ and~$T_i$, we construct $P_{i+1}$ and~$T_{i+1}$ as follows:
We locate the first occurrence of $C_i$ in $T_i$ and
let $T^\ast_i$ be the subtree of~$T_i$ rooted at this occurrence.
We form $T_{i+1}$ by replacing in~$T_i$ the subtree~$T^\ast_i$ by
a leaf labeled~$C_i$. And, we form $P_{i+1}$ by replacing the
first open leaf,~$C_i$, in~$P_i$ by the tree~$T^\ast_i$.
The invariants are easily seen to be preserved. Obviously,
$|P_{i+1}| + |T_{i+1}| = |P_i| + |T_i| = |P|$.
The open leaves of~$T^\ast_i$ appear in~$P_i$ before~$C_i$, and therefore,
any open leaf in~$P_{i+1}$, and in particular,
$C_{i+1}$ if it exists, must occur after the (formerly open leaf)
clause~$C_i$.
New open
leaves in~$T_{i+1}$ are~$C_i$ and possibly some lemmas derived in~$T^\ast_i$,
and these all occur in~$P_{i+1}$ before~$C_{i+1}$.
Since $P_{i+1}$ contains fewer open leaves than~$P_i$ for every~$i$,
there is an~$m$ such that $P_m$ contains no open leaves, and thus
is an RTLW refutation. We then discard~$T_m$ and set $P' := P_m$.
Each lemma used in~$P'$ was a lemma in~$P$, thus $P'$ is also an
RTLW($k$) refutation.
Note that the total number of occurrences
of tautological clauses in $P_{i+1}$ and~$T_{i+1}$ combined is
the same as in $P_i$ and~$T_i$ combined. This is also equal to the
number of tautological clauses in~$P$. Furthermore, $T_m$~must
contain at least one tautological clause, namely its root
$C\cup\{x,\bar x\}$. It follows that $P^\prime$ has fewer tautological
clauses than~$P$.
\qed
A \emph{matching} $\rho$ is a set of pairs
$\bigl\{ (i_1,j_1) , \ldots , (i_m,j_m) \bigr\} \subset \{1,\ldots,n+1\} \times
\{ 1 ,\ldots,n\}$
such that all the $i_\nu$ as well as all the $j_\nu$ are
pairwise distinct. The size of $\rho$ is $|\rho| = m$.
A matching $\rho$ induces a partial assignment to the variables of
$PHP_n$ as follows:
\[ \rho(x_{i,j}) = \begin{cases} 1 & \text{if } (i,j) \in \rho \\
0 & \text{if there is } (i,j') \in \rho \text{ with } j\neq j' \\
& \text{ or } (i',j) \in \rho \text{ with } i\neq i'\\
\text{undefined} & \text{otherwise.}
\end{cases} \]
We will identify a matching and the assignment it induces.
The crucial property of such a matching restriction $\rho$ is that
$\rest{FPHP_n}\rho$ is -- up to renaming of variables -- the same as
$FPHP_{n-|\rho|}$.
The next lemma states that a short clause occurring as a lemma
in an RTLW refutation can always be falsified by a small matching
restriction.
\begin{lem}\label{lem:smallrestr}
Let $C$ be a clause of size $k \leq n/2$ such that
\begin{enumerate}[$\bullet$]
\item $C$ is not tautological,
\item $C \not\supseteq H_{i,i',j}$ for any hole clause $H_{i,i',j }$,
\item $C \not\supseteq F_{i,j,j'}$ for any functional clause $F_{i,j,j'}$.
\end{enumerate}
Then there is a matching $\rho$ of size $|\rho| \leq k$ such that
$\rest{C}{\rho} = \Box$.
\end{lem}
\proof
First, we let $\rho_1$ consist of all those pairs $(i,j)$ such that
the negative literal $\bar{x}_{i,j}$ occurs in $C$. By the second
and third assumption, these pairs form a matching. All the negative
literals in $C$ are set to $0$ by $\rho_1$, and by the first
assumption, no positive literal in $C$ is set to $1$ by $\rho_1$.
Now consider all pigeons $i_1, \ldots , i_r$ mentioned in
positive literals in $C$ that are not already set to $0$ by
$\rho_1$, i.e., that are not mentioned in any of the negative
literals in $C$.
Pick distinct $j_1, \ldots , j_r$ among the holes not mentioned in $C$
(at least $n/2$ holes are available, since $|C|\leq n/2$),
and set $\rho_2 := \bigl\{ (i_1 , j_1) , \ldots , (i_r,j_r) \bigr\}$.
This matching sets the remaining positive literals to $0$, thus
for $\rho := \rho_1 \cup \rho_2$, we have $\rest{C}{\rho} = \Box$.
Clearly the size of~$\rho$ is at most~$k$ since we have picked
at most one pair for each literal in~$C$.
\qed
Finally, we are ready to put all ingredients together to prove our
lower bound.
\begin{thm} \label{the:wrtlklb}
For every $k\leq n/2$, every RTLW($k$)-refutation of $FPHP_n$
is of size $2^{\Omega(n \log n)}$.
\end{thm}
\proof
Let $R$ be an RTLW($k$)-refutation of $FPHP_n$ of size~$s$. By
Lemmas \ref{lem:subs} and~\ref{lem:taut},
$R$~can be transformed into~$R'$ of size at most~$2s$
in which no clause is tautological and
no clause used as a lemma is subsumed by a clause in $FPHP_n$.
Let $C$ be the first clause in~$R'$ which is used as a lemma;
$C$~is of size at most~$k$. The subtree~$R_C$ of~$R'$
rooted at~$C$ is a resolution tree for~$C$ from $FPHP_n$.
By \pref{lem:smallrestr},
there is a matching restriction~$\rho$ of size $|\rho|\leq k$ such
that $\rest{C}{\rho} = \Box$. Then $\rest{R_C}{\rho}$ is a
resolution tree with weakening refutation of $\rest{FPHP_n}{\rho}$,
which is the same as $FPHP_{n-|\rho|}$ with $|\rho|\leq k$. By \pref{pro:rtweak},
applications of the weakening rule can be eliminated from
$\rest{R_C}\rho$ without increasing the size.
Therefore by \pref{the:treelb}, $R_C$ is of size at least
\[ \Bigl(\frac{n-k}{4}\Bigr)^{\frac{n-k}{4}} \geq \Bigl(\frac{n}{8}\Bigr)^{\frac{n}{8}} \]
and hence the size of $R$ is at least
$$ s \geq \frac12|R_C| \geq 2^{\Omega(n \log n)}.\eqno{\qEd}$$
\section{Preliminaries}\label{sec:prelim}
\paragraph{Propositional logic.}
Propositional formulas are formed using Boolean
connectives $\lnot$, $\land$, and $\lor$. However, this paper works only
with formulas in conjunctive normal form, namely formulas that can be
expressed as a set of clauses. We write $\overline x$ for the negation
of~$x$, and $\overline{\overline{x}}$ denotes~$x$.
A {\em literal}~$l$ is defined to be
either a variable~$x$ or a negated variable~$\overline x$. A clause~$C$
is a finite set of literals, and is interpreted as being the disjunction
of its members. The empty clause is denoted~$\Box$.
A {\em unit} clause is a clause containing a single literal.
A set~$F$ of clauses is interpreted as the conjunction
of its clauses, i.e., a conjunctive normal form formula (CNF).
An assignment~$\alpha$ is a (partial)
mapping from the set of variables to $\{0,1\}$, where we identify $1$ with {\em True}
and $0$ with {\em False}. The assignment~$\alpha$
is implicitly extended to assign values to literals
by letting $\alpha(\overline x) = 1-\alpha(x)$,
and the domain, $\dom(\alpha)$, of~$\alpha$ is the set of literals
assigned values by~$\alpha$.
The {\em restriction} of
a clause~$C$ under~$\alpha$ is the clause
\begin{equation*}
\rest{C}{\alpha} = \begin{cases}
1 & \text{if there is an } l \in C \text{ with } \alpha(l)=1\\
0 & \text{if } \alpha(l)=0 \text{ for every } l \in C\\
\set{l \in C}{l \not\in \dom(\alpha)}& \text{otherwise}
\end{cases}
\end{equation*}
The \emph{restriction}
of a set $F$ of clauses under~$\alpha$ is
\begin{equation*}
\rest{F}{\alpha} = \begin{cases}
0 & \text{if there is a } C \in F \text{ with } \rest{C}{\alpha}=0\\
1 & \text{if } \rest{C}{\alpha}=1 \text{ for every } C \in F\\
\set{\rest{C}{\alpha}}{C \in F} \setminus \{1\} & \text{otherwise}
\end{cases}
\end{equation*}
If $\rest F \alpha = 1$, then we say $\alpha$ \emph{satisfies}~$F$.
An assignment is called {\em total} if it assigns values to all variables.
We call two CNFs $F$ and~$F^\prime$ \emph{equivalent} and write $F\equiv F^\prime$
to indicate that $F$ and~$F^\prime$ are satisfied by exactly the same
total assignments.
Note, however, that $F\equiv F^\prime$ does not always imply that
they are satisfied by the same
partial assignments.
If $\epsilon \in \{0,1\}$ and $x$~is a variable, we define $x^\epsilon$ by
letting $x^0$ be~$x$ and $x^1$ be~$\overline{x}$.
\paragraph{Resolution.}
Suppose that $C_0$ and~$C_1$ are clauses and $x$ is a variable with
$x \in C_0$ and $\overline x \in C_1$. Then the {\em resolution rule}
can be used to derive the clause
$C = (C_0\setminus\penalty10000\{x\})\cup (C_1\setminus \{\overline x\})$.
In this case we write $C_0,C_1 \vdash_x C$ or just
$C_0,C_1 \vdash C$.
A \emph{resolution proof} of a clause~$C$ from a CNF $F$ consists of
repeated applications of the resolution rule to derive the clause~$C$
from the clauses of~$F$.
If $C = \Box$, then $F$~is unsatisfiable and
the proof is called a {\em resolution refutation}.
We represent resolution proofs either as graphs or as trees.
A {\em resolution dag} (RD) is a dag $G=(V,E)$ with labeled
edges and vertices satisfying the
following properties.
Each node is labeled with a
clause and a variable, and,
in addition, each edge is labeled with a literal.
There must be
a single node of out-degree zero, labeled with
the conclusion clause.
Further, all nodes with in-degree zero
are labeled with clauses from the initial
set~$F$. All other nodes must have in-degree two and are labeled with a
variable~$x$ and a clause $C$ such that $C_0,C_1\vdash_x C$ where
$C_0$ and~$C_1$ are the labels on the two immediate
predecessor nodes and $x\in C_0$ and $\overline x\in C_1$.
The edge from $C_0$ to~$C$ is labeled~$\overline x$,
and the edge from $C_1$ to~$C$ is labeled~$x$. (The convention
that
that $x\in C_0$ and $\overline x$ is on the edge from~$C_0$
might seem strange,
but it
allows a more natural formulation of
Theorem~\ref{the:regWprops} below.)
A resolution dag~$G$ is \emph{$x$-regular} iff every path in~$G$
contains at most one node that is labeled with the variable~$x$.
$G$~is \emph{regular} (or a
regRD) if $G$ is $x$-regular for every~$x$.
We define the {\em size} of a resolution dag~$G=(V,E)$ to be the
number $|V|$ of vertices in the dag.
$\var(G)$ is the set of variables used
as resolution variables in~$G$. Note that if $G$ is a resolution proof rather
than a refutation, then $\var(G)$ may not include all the variables
that appear in clause labels of~$G$.
A {\em resolution tree} (RT) is a resolution dag which is tree-like,
i.e., a dag in which every vertex other than the conclusion clause has
out-degree one.
A regular resolution tree is called a regRT for short.
The notion of (p-)simulation is
an important tool for comparing the strength of proof systems.
If $\mathcal Q$ and~$\mathcal R$ are refutation systems,
we say that $\mathcal Q$ {\em simulates}~$\mathcal R$
provided there is a polynomial~$p(n)$
such that, for every $\mathcal R$-refutation of a CNF~$F$ of size~$n$, there is
a $\mathcal Q$-refutation of~$F$ of size~$\le p(n)$.
If the $\mathcal Q$-refutation can be found by a polynomial time procedure,
then this is called a {\em p-simulation}. Two systems that simulate
(resp.\ p-simulate)
each other are called {\em equivalent} (resp.\ {\em p-equivalent}).
Some basic prior results for simulations of resolution systems include:
\begin{thm}
\hspace*{1ex}
\begin{enumerate}[\em(a)]
\setlength{\itemsep}{0pt}
\item {\rm \cite{Tseitin68}}
Regular tree resolution (regRT) p-simulates tree resolution (RT).
\item {\rm \cite{Goerdt1993,Alekhnovich2002}}
Regular resolution (regRD) does not simulate resolution (RD).
\item {\rm \cite{BEGJ00}}
Tree resolution (RT) does not simulate regular resolution (regRD).
\end{enumerate}
\end{thm}
\paragraph{Weakening and w-resolution.}
The {\em weakening} rule
allows the derivation of any clause $C^\prime \supseteq C$
from a clause~$C$. However, instead of using the weakening
rule, we introduce a {\em w-resolution} rule that essentially incorporates
weakening into the resolution rule. Given two clauses $C_0$ and~$C_1$,
and a variable~$x$, the {\em w-resolution rule} allows one to
infer
$C = (C_0\setminus\{x\})\cup (C_1\setminus \{\overline x\})$. We
denote this condition $C_0, C_1 \vdash^w_x C$. Note
that $x\in C_0$ and $\overline x\in C_1$ are not required for the
w-resolution inference.
We use
the notations WRD, regWRD, WRT, and regWRT for the proof systems that
correspond to RD, regRD, RT, and regRT (respectively) but with the resolution
rule replaced with the w-resolution rule.
That is, given a node labeled with $C$, an edge from $C_0$ to $C$ labeled
with $\bar x$ and an edge from $C_1$ to $C$ labeled with $x$, we have
$C = (C_0\setminus\{x\})\cup (C_1\setminus \{\overline x\})$.
Similarly, we use the notations RDW and RTW for the proof systems that
correspond to RD and RT, but with the general weakening rule added. In
an application of the weakening rule, the edge connecting a clause
$C^\prime \supseteq C$ with its single predecessor $C$ does not bear
any label.
The resolution and weakening rules
can certainly p-simulate the w-resolution rule,
since a use of the w-resolution rule can be replaced by weakening
inferences that derive $C_0\cup\{x\}$ from $C_0$ and
$C_1\cup\{\overline x\}$ from $C_1$, and then a resolution inference
that derives~$C$. The converse is not true, since w-resolution
cannot completely simulate weakening; this is because w-resolution
cannot introduce completely new variables that do not occur in the
input clauses. According to the well-known subsumption principle,
however, weakening cannot increase the strength of resolution, and the
same reasoning applies to w-resolution; namely, we
have:
\begin{prop}\label{pro:subsumeweak}
Let $R$ be a WRD proof of~$C$ from~$F$ of size~$n$. Then there is an
RD proof~$S$ of~$C^\prime$ from~$F$ of size $\le n$ for
some $C^\prime\subseteq C$. Furthermore, if $R$ is regular, so
is~$S$, and if $R$ is a tree, so is~$S$.
\end{prop}
\proof
The proof of the proposition is straightforward. Writing $R$ as a
sequence $C_0, C_1, \ldots, C_n = C$, define clauses $C_i^\prime
\subseteq C_i$ by induction on~$i$ so that the new clauses form the
desired proof~$S$. For $C_i\in F$, let $C^\prime_i =C_i$. Otherwise
$C_i$~is inferred by w-resolution from $C_j$ and~$C_k$ w.r.t.\ a
variable~$x$. If $x\in C_j$ and $\overline x \in C_k$, let
$C_i^\prime$ be the resolvent of $C_j^\prime$
and~$C_k^\prime$ as obtained by the usual resolution
rule; if not, then let $C^\prime_i$ be $C^\prime_j$ if
$x\notin C^\prime_j$, or~$C^\prime_k$ if $\overline x \notin
C^\prime_k$. It is easy to check that each $C_i^\prime \subseteq C_i$
and that, after removing duplicate clauses, the clauses~$C^\prime_j$
form a valid resolution proof~$S$. If $R$~is regular, then so is~$S$,
and if $R$~is a tree so is~$S$.
\qed
Essentially the same proof shows the same property for the system with the
full weakening rule:
\begin{prop}\label{pro:rtweak}
Let $R$ be an RDW proof of~$C$ from~$F$ of size~$s$. Then there is an
RD proof~$S$ of~$C^\prime$ from~$F$ of size $\le s$ for
some $C^\prime\subseteq C$. Furthermore, if $R$ is regular, so is~$S$,
and if $R$ is a tree, so is~$S$.
\end{prop}
There are several reasons why we prefer to work with w-resolution, rather
than with the weakening rule. First, we find it to be an elegant
way to combine weakening with resolution.
Second, it works well for using resolution
trees (with input lemmas, see the next section) to simulate DLL search
algorithms. Third, since weakening and resolution together are stronger than
w-resolution, w-resolution is a more refined restriction on
resolution. Fourth, for regular resolution, using
w-resolution instead of general weakening can be a quite restrictive condition,
since any w-resolution inference
$C_0, C_1 \wres_x C$
``uses up'' the variable~$x$, making it unavailable for other
resolution inferences on the same path, even if the variable does not
occur at all in $C_0$ and~$C_1$. The last two reasons mean that
w-resolution can be rather weak; this strengthens
our results below
(Theorems \ref{the:regWRTIforLearnables} and~\ref{the:rWRTIsimDLL})
about the existence
of regular proofs that use w-resolution.
The following simple theorem gives some
useful properties for regular w-resolution.
\begin{thm}\label{the:regWprops}
Let $G$ be a regular w-resolution refutation. Let $C$ be a clause in~$G$.
\begin{enumerate}[\em(a)]
\item Suppose that $C$~is
derived from $C_0$ and~$C_1$ with the edge from~$C_0$ (resp.~$C_1$)
to~$C$ labeled with~$\overline x$ (resp.~$x$). Then $\overline x\notin C_0$,
and $x\notin C_1$.
\item
Let $\alpha$ be an assignment such that for every literal~$l$ labeling
an edge on the path from~$C$ to the final clause, $\alpha(l) = True$.
Then $\rest C \alpha = 0$.
\end{enumerate}
\end{thm}
\proof
The proof of part~(a) is based on the observation that if $\overline x \in C_0$,
then also $\overline x \in C$. However, by the regularity of the
resolution refutation, every clause on the path from~$C$ to the final
clause~$\Box$ must contain~$\overline x$. But clearly $\overline x\notin\Box$.
Part~(b) is a well-known fact for regular resolution proofs. It holds
for similar reasons for regular w-resolution proofs: the proof proceeds
by induction on clauses in the proof, starting at the final clause~$\Box$
and moving up towards the leaves. Part~(a) makes the induction step trivial.
\qed
\paragraph{Directed acyclic graphs}
We define some basic concepts that will be useful
for analyzing both resolution proofs and
conflict graphs (which are defined below in \pref{sec:dll-up}).
Let $G=(V,E)$ be a dag.
The set of leaves (nodes in~$V$ of in-degree~0) of~$G$ is denoted $\leafs{G}$.
The {\em depth} of a node~$u$ in~$V$ is defined
to equal the maximum number of edges
on any path from a leaf of~$G$ to the node~$u$. Hence leaves have depth~$0$.
The subgraph rooted at~$u$ in~$G$ is denoted $\graph u G$; its nodes
are the nodes~$v$ for which there is a path from $v$ to~$u$ in~$G$, and its
edges are the induced edges of~$G$.
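The depth function just defined can be computed by a straightforward memoized recursion; the following sketch is only illustrative and uses our own representation, a dag given as a map from each node to the list of its in-neighbors.

```python
from functools import lru_cache

def make_depth(preds):
    """preds maps each node to the list of its in-neighbors.
    Leaves (in-degree 0) have depth 0, as in the text; otherwise the
    depth is one more than the maximum depth of an in-neighbor."""
    @lru_cache(maxsize=None)
    def depth(u):
        return 0 if not preds[u] else 1 + max(depth(v) for v in preds[u])
    return depth
```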
\section{DLL algorithms with clause learning}
\label{sec:dll-up}
\subsection{The basic DLL algorithm}
\label{sec:basic_dll}
The DLL proof search algorithm is named after the
authors Davis, Logemann, and Loveland of
the paper where it was introduced~\cite{DavisLogemann1962}. Since they built
on the work of Davis and Putnam~\cite{DavisPutnam1960}, the algorithm
is sometimes called the DPLL algorithm.
There are several variations on the DLL algorithm, but the basic
algorithm is shown in \pref{fig:dll}. The input is a set~$F$ of
clauses, and a partial assignment~$\alpha$. The assignment~$\alpha$
is a set of ordered pairs $(x,\epsilon)$, where $\epsilon\in\{0,1\}$,
indicating that $\alpha(x)=\epsilon$.
The DLL algorithm is implemented
as a recursive procedure and returns
either
\texttt{UNSAT} if $F$ is unsatisfiable or otherwise a satisfying assignment
for~$F$.
\begin{figure}[htbp]
\begin{center}
\begin{minipage}{1.0\linewidth}
\tt \small
\begin{tabbing}
123\=123455\=12345\=12345\=12345\=12345\=12345\=12345 \kill
\>{\sc DLL}($F,\alpha$)\\
\>1\>if $\rest{F}{\alpha} = 0$ then\\
\>2\>\>return UNSAT\\
\>3\>if $\rest{F}{\alpha} = 1$ then\\
\>4\>\>return $\alpha$ \\
\>5\>choose $x \in \var(\rest{F}{\alpha})$ and $\epsilon\in\{0,1\}$ \\
\>6\>$\beta \leftarrow${\sc DLL}($F,\alpha \cup \{(x,\epsilon)\}$)\\
\>7\>if $\beta \neq$ UNSAT then\\
\>8\>\>return $\beta$\\
\>9\>else\\
\>10\>\>return {\sc DLL}($F,\alpha \cup \{(x,1-\epsilon)\}$)
\end{tabbing}
\end{minipage}
\caption{The basic DLL algorithm.}
\label{fig:dll}
\end{center}
\end{figure}
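As a concrete, purely illustrative rendering of this schema, the following Python sketch implements \textsc{DLL} with clauses as frozensets of integer literals ($x$ positive, $\overline x$ negative) and a deliberately naive branching rule standing in for line~5; all names and representation choices are ours.

```python
def restrict(F, alpha):
    """F restricted by alpha: drop satisfied clauses, delete false literals."""
    result = []
    for C in F:
        if any(alpha.get(abs(l)) == (1 if l > 0 else 0) for l in C):
            continue                      # clause satisfied under alpha
        result.append(frozenset(l for l in C if abs(l) not in alpha))
    return result

def dll(F, alpha):
    G = restrict(F, alpha)
    if frozenset() in G:                  # some clause falsified: F|alpha = 0
        return "UNSAT"
    if not G:                             # all clauses satisfied: F|alpha = 1
        return alpha
    x = abs(next(iter(G[0])))             # naive stand-in for line 5
    for eps in (1, 0):
        beta = dll(F, {**alpha, x: eps})
        if beta != "UNSAT":
            return beta
    return "UNSAT"
```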
Note that the DLL algorithm is not fully specified, since line~5 does not
specify how
to choose the branching variable~$x$
and its value~$\epsilon$.
Rather, one can think of the algorithm either as being nondeterministic or as
being an algorithm schema.
We prefer to think of the algorithm as an algorithm schema, so that it
incorporates a variety of possible algorithms. Indeed, there has
been extensive research into how to choose the branching variable
and its value \cite{Freeman1995,Nadel2002}.
There is a well-known close connection between regular
resolution and DLL algorithms.
In particular, a run of DLL can be viewed as a regular resolution
tree, and vice-versa. This can be formalized by the following two propositions.
\begin{prop} \label{pro:DLL_RT2} Let $F$ be an unsatisfiable set of
clauses and $\alpha$~an
assignment. If there is an execution of \hbox{\rm{\sc DLL}($F,\alpha$)}
that
returns \texttt{UNSAT} and performs $s$ recursive calls, then there
exists a clause $C$ with $\rest{C}{\alpha} = 0$ such that $C$~has a
regular resolution tree~$T$ from~$F$ with $|T| \leq s+1$ and
$\var(T) \cap \dom(\alpha) = \varnothing$.
\end{prop}
The converse simulation of \pref{pro:DLL_RT2} holds, too, that is, a
regular resolution tree can be transformed directly into a run of
\textsc{DLL}.
\begin{prop} \label{pro:DLL_RT1}
Let $F$ be an unsatisfiable
set of clauses.
Suppose that $C$~has
a regular resolution proof tree~$T$ of size~$s$ from~$F$.
Let $\alpha$ be an assignment with $\rest{C}{\alpha} = 0$ and
$\var(T) \cap \dom(\alpha) = \varnothing$. Then there is an
execution of \hbox{\rm{\sc DLL}($F,\alpha$)} that returns \texttt{UNSAT}
after at most $s-1$ recursive calls.
\end{prop}
The two propositions are based on the following correspondence between
a resolution tree and a DLL search tree: first, a leaf clause in
a resolution tree corresponds to a clause falsified by~$\alpha$ (so that
$\rest F \alpha = 0$), and second, a resolution inference with respect to
a variable~$x$ corresponds to the use of $x$~as a
branching variable in the DLL algorithm.
Together the two propositions give the following
well-known exact correspondence between regular
resolution trees and DLL search.
\begin{thm}\label{the:DLL_RT}
If $F$ is unsatisfiable, then there is an execution of
\hbox{\rm {\sc DLL}($F,\varnothing$)} that executes with $< s$ recursive calls
if and only if there exists a regular refutation tree for~$F$ of
size $\le s$.
\end{thm}
\subsection{Learning by unit propagation}
Two of the most successful enhancements of DLL that are used by most
modern SAT solvers are unit propagation and clause learning.
\emph{Unit clause propagation} (also called Boolean
constraint propagation) was already part of the original DLL algorithm
and is based on the following observation: If
$\alpha$ is a partial assignment for a set of clauses~$F$ and if there
is a clause $C\in F$ with $\rest C \alpha = \{ l \}$ a unit clause,
then any $\beta\supset \alpha$ that satisfies~$F$ must assign $l$ the
value {\em True}.
There are a couple of methods that the DLL algorithm can
use to implement unit propagation.
One method is to just use unit propagation
to guide the choice of a branching variable by modifying line~5 so that,
if there is a unit clause in~$\rest F \alpha$, then $x$ and~$\epsilon$
are chosen to make the literal true. More commonly though, DLL algorithms
incorporate unit propagation as a separate phase during which the
assignment~$\alpha$ is iteratively extended to make any unit clause
true until there are no unit clauses remaining. As the unit propagation
is performed, the DLL algorithm keeps track of which variables were
set by unit propagation and which clause was used as the basis for
the unit propagation. This information is then useful for clause
learning.
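The propagation phase described here can be sketched as follows, in an illustrative Python rendering under our own conventions: literals are nonzero integers, and the recorded reason for each propagated variable is the clause that became unit.

```python
def unit_propagate(F, alpha):
    """Extend alpha by unit propagation.
    Returns (alpha, reasons, conflict), where reasons maps each
    propagated variable to the clause used as the basis for setting it,
    and conflict is a falsified clause or None."""
    alpha, reasons = dict(alpha), {}
    while True:
        for C in F:
            if any(alpha.get(abs(l)) == (1 if l > 0 else 0) for l in C):
                continue                  # clause already satisfied
            unassigned = [l for l in C if abs(l) not in alpha]
            if not unassigned:            # C is falsified: a conflict
                return alpha, reasons, C
            if len(unassigned) == 1:      # C|alpha is a unit clause {l}
                l = unassigned[0]
                alpha[abs(l)] = 1 if l > 0 else 0
                reasons[abs(l)] = C
                break                     # restart scan with extended alpha
        else:
            return alpha, reasons, None   # no unit clauses remain
```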
\emph{Clause learning} in DLL algorithms was first introduced
by Silva and Sakallah~\cite{SilvaSakallah1996} and
means that
new clauses are effectively added to~$F$.
A learned clause~$D$ must be implied by~$F$, so that adding $D$ to~$F$
does not change the space of satisfying assignments.
In theory, there are many potential methods for clause learning; however,
in practice, the only useful method for learning clauses is based on
unit propagation as in the original proposal \cite{SilvaSakallah1996}.
In fact, all deterministic state-of-the-art
SAT solvers for structured (non-random) instances of SAT are
based on clause learning via unit propagation. This includes
solvers such as Chaff~\cite{MoskewiczMalik2001},
Zchaff~\cite{MahajanFu2004} and MiniSAT~\cite{EenBiere2005}.
These DLL algorithms apply clause learning when the set~$F$
is falsified by the current assignment~$\alpha$. Intuitively,
they analyze the {\em reason} some clause~$C$ in~$F$ is falsified
and use this reason to infer a
clause~$D$ from~$F$ to be learned. There are two ways
in which a DLL algorithm assigns values to variables, namely, by unit
propagation and by setting a branching variable. However, if unit propagation
is fully carried out, then the first time a clause is falsified is during
unit propagation. In particular, this happens when there are two unit
clauses $\rest{C_1}\alpha = \{x \}$ and $\rest{C_2}\alpha = \{ \overline x \}$
requiring a variable~$x$ to be set both {\em True} and {\em False}. This
is called a {\em conflict}.
The reason for a conflict is analyzed by building a
conflict graph. Generally, this is done by maintaining a \emph{unit
propagation graph} that tracks, for each variable that
has been assigned a value, the reason for the setting of that variable.
The two possible reasons are that either (a)~the variable was set by
unit propagation when a
particular clause~$C$ became a unit clause, in which case $C$~is the reason,
or (b)~the variable was set arbitrarily as a branching variable. The
unit propagation graph~$G$ has literals as its nodes. The leaves of~$G$
are literals that were set true as branching variables,
and the internal nodes are literals that were set true
by unit propagation. If a literal~$l$ is an internal node in~$G$,
then it was set
true by unit propagation applied to some clause~$C$. In this case, for
each literal~${l^\prime}\not= l$ in~$C$, $\overline{l^\prime}$~is
a node in~$G$ and there
is an edge from $\overline{ l^\prime }$ to~$l$. If the
unit propagation graph contains a conflict, it is called a \emph{conflict graph}.
More formally, a conflict graph is defined as follows.
\begin{defi}
A {\em conflict graph}~$G$ for a set~$F$ of clauses
under the assignment~$\alpha$
is a
dag $G=(V\cup\{\Box\},E)$
where $V$ is a set of literals and where the following hold:
\begin{enumerate}[(a)]
\setlength{\itemsep}{1pt}
\item For each $l\in V$, either (i)~$l$ has in-degree~0 and
$\alpha(l)=1$,
or (ii)~there is
a clause~$C\in F$ such that
$C = \{l\} \cup \{ l^\prime : (\overline {l^\prime},l)\in E\}$.
For a fixed conflict graph~$G$, we denote this clause as~$C_l$.
\item There is a unique variable~$x$ such that
$V\supseteq\{x,\overline x\}$.
\item The node~$\Box$ has only the two incoming edges
$(x,\Box)$ and $(\overline x,\Box)$.
\item The node $\Box$ is the only node with outdegree zero.
\end{enumerate}
\end{defi}
Let $\leafs{G}$ denote the nodes in~$G$ of in-degree zero. Then, letting
$\alpha_G = \{ (x,\epsilon) : x^\epsilon \in \leafs G \}$, the conflict
graph~$G$ shows that every vertex~$l$ must be made true
by any satisfying assignment for~$F$ that extends~$\alpha_G$. Since
for some~$x$, both $x$ and $\overline x$ are nodes of~$G$, this implies that
$\alpha_G$ cannot be extended to a satisfying assignment for~$F$.
Therefore, the clause $D = \{ \overline l : l\in\leafs G \}$ is implied
by~$F$, and $D$~can be taken as a learned clause. We call this clause~$D$
the {\em conflict clause} of~$G$ and denote it $\Cc G$.
There is a second type of clause that can be learned from the conflict
graph~$G$ in
addition to the conflict clause~$\Cc G$. Namely, let $l\not = \Box$
be any non-leaf node
in~$G$. Further, let $\leafs { \graph l G }$ be the set of leaves~$l^\prime$
of~$G$
such that there is a path from~$l^\prime$ to~$l$. Then, the clauses in~$F$
imply that if all the leaves~$l^\prime \in \leafs{\graph l G}$ are assigned
true, then $l$~is assigned true. Thus, the clause
$D = \{ l \} \cup
\{ \overline {l^\prime} : l^\prime \in \leafs{\graph l G} \}$
is
implied by~$F$ and can be taken as a learned clause. This clause~$D$
is called the {\em induced clause} of $\graph l G$ and is denoted $\ic l G$.
In the degenerate case where $\graph l G$ consists of only the single
literal~$l$, this would make $\ic l G$ equal to $\{ l, \overline l \}$; rather
than permit this as a clause, we instead say that the induced clause does
not exist.
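Both kinds of learnable clauses can be read off a conflict graph mechanically. The following sketch uses our own representation (a dict mapping each node, with the string 'box' standing for~$\Box$, to the list of its in-neighbors) and returns None in the degenerate case where the induced clause does not exist.

```python
def leaves_below(G, node):
    """Leaves of the subgraph rooted at node, i.e. in-degree-0 nodes
    from which there is a path to node."""
    seen, stack, leaves = set(), [node], set()
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        if G[u]:
            stack.extend(G[u])            # follow edges back to premises
        else:
            leaves.add(u)
    return leaves

def conflict_clause(G):
    """The conflict clause: negations of all leaves of G."""
    return frozenset(-l for l in leaves_below(G, 'box'))

def induced_clause(G, l):
    """The induced clause of the subgraph rooted at non-leaf node l,
    or None in the degenerate single-literal case."""
    below = leaves_below(G, l)
    return None if below == {l} else frozenset({l} | {-m for m in below})
```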
In practice, both conflict clauses $\Cc G$ and induced clauses~$\ic l G$
are used by SAT solvers. It appears that most SAT solvers learn the
\emph{first-UIP} clauses~\cite{SilvaSakallah1996}, which equal $\Cc{G}$
and $\ic l {G^\prime}$ for appropriately formulated~$G$ and~$G^\prime$.
Other conflict clauses that can be learned include
\emph{all-UIP} clauses~\cite{ZhangMadigan2001},
\emph{rel-sat} clauses~\cite{BayardoSchrag:CSPlookback},
\emph{decision} clauses~\cite{ZhangMadigan2001},
and
\emph{first cut} clauses~\cite{BeameKautzSabharwal2004}.
All of these are conflict clauses $\Cc{G}$ for appropriate~$G$.
Less commonly, multiple clauses are learned, including clauses
based on the cuts
advocated by the works mentioned above \cite{SilvaSakallah1996,ZhangMadigan2001}, which
are a type of induced clause.
In order to prove the correspondence in \pref{sec:regwrti-dll} between
DLL with clause learning and regWRTI proofs, we must put some restrictions
on the kinds of clauses that can be (simultaneously)
learned. In essence, the point is that for DLL with clause learning to
simulate regWRTI proofs it is necessary to learn multiple clauses at
once in order to learn all the clauses in a regular input subproof.
But on the other hand, for regWRTI to simulate DLL with clause learning,
regWRTI must be able to include regular input proofs that derive
all the learned clauses so as to have them available for subsequent use as
input lemmas. Thus, we define a notion of ``compatible clauses'' which
is a set of clauses that can be simultaneously learned. For this, we
define the notion of a series-parallel decomposition of a conflict graph~$G$.
\begin{defi}
A graph~$H=(W,E^\prime)$ is a {\em subconflict graph} of
the conflict graph~$G=(V,E)$ provided that
$H$~is a conflict graph with $W\subseteq V$ and $E^\prime\subseteq E$,
and that each non-leaf vertex of~$H$ (that is,
each vertex in $W\setminus \leafs H$)
has the same in-degree in~$H$ as in~$G$.
$H$~is a {\em proper} subconflict graph of~$G$ provided
there is no path in~$G$ from any non-leaf vertex of~$H$
to a vertex in~$\leafs H$.
\end{defi}
Note that if $l$~is a non-leaf vertex in the subconflict graph~$H$ of~$G$,
then the clause $C_l$ is the same whether it is defined with respect to~$H$ or
with respect to~$G$.
\begin{defi}
Let $G$~be a conflict graph. A
{\em decomposition} of~$G$ is a sequence
$H_0\subset H_1\subset \cdots\subset H_k$, $k\ge 1$, of distinct
proper
subconflict graphs of~$G$ such that $H_k=G$ and
$H_0$~is the dag on the three nodes~$\Box$ and its
two predecessors $x$ and~$\overline x$.
\end{defi}
A decomposition of~$G$ will be used to describe sets of clauses
that can be simultaneously learned. For this, we put a structure
on the decomposition that describes the exact types of clauses
that can be learned:
\begin{defi}
A {\em series-parallel decomposition}~$\mathcal H$ of~$G$ consists of a
decomposition $H_0,\ldots,H_k$ plus, for each $0\le i<k$,
a sequence
$H_i=H_{i,0}\subset H_{i,1}\subset \cdots \subset H_{i,m_i}=H_{i+1}$
of proper subconflict graphs of~$G$.
Note that the sequence
\[
H_0=H_{0,0}, H_{0,1}, H_{0,2},\ldots,
H_{0,m_0}=H_1=H_{1,0}, H_{1,1}, \ldots,
H_{k-1,m_{k-1}} = H_k
\]
is itself a decomposition of~$G$. However, we prefer to view
it as a two-level decomposition.
A {\em series} decomposition is a series-parallel decomposition with
trivial parallel part, i.e., with $k=1$.
A {\em parallel} decomposition is a series-parallel decomposition
in which $m_i=1$ for all~$i$.
Note that
we always have $H_i\not= H_{i+1}$
and $H_{i,j}\not=H_{i,j+1}$.
\end{defi}
Figure~\ref{seriesparallelFig} illustrates a series-parallel decomposition.
\begin{defi}
For $\mathcal H$ a series-parallel decomposition, the set of {\em learnable
clauses}, $\Cc {\mathcal H}$, for~$\mathcal H$ consists of the following
induced clauses and conflict clauses:
\begin{enumerate}[$\bullet$]
\item For each $1\le j \le m_0$, the conflict clause
$\Cc {H_{0,j}}$, and
\item For each $0<i<k$ and $0<j\le{m_i}$
and each $l \in \leafs{H_i} \setminus \leafs{H_{i,j}}$, the induced
clause $\ic l {H_{i,j}}$.
\end{enumerate}
\end{defi}
\begin{figure}[t]
\psset{unit=0.04cm}
\begin{center}
\begin{pspicture}(-50,0)(175,200)
\cnodeput(0,0){box}{\makebox(0, 6.6){$\Box$}}
\cnodeput(20,20){abar}{\makebox(0, 6.6){$\overline a$}}
\cnodeput(-20,20){a}{\makebox(0, 6.6){$a$}}
\ncline{a}{box}
\ncline{abar}{box}
\cnodeput(20,40){c}{\makebox(0, 6.6){$c$}}
\cnodeput(-20,50){b}{\makebox(0, 6.6){$b$}}
\cnodeput(20,60){d}{\makebox(0, 6.6){$d$}}
\cnodeput(0,80){e}{\makebox(0, 6.6){$e$}}
\ncline{c}{abar}
\ncline{b}{a}
\ncline{b}{abar}
\ncline{d}{c}
\ncline{e}{b}
\ncline{e}{d}
\cnodeput(-20,105){f}{\makebox(0, 6.6){$f$}}
\cnodeput(20,105){g}{\makebox(0, 6.6){$g$}}
\cnodeput(-25,130){h}{\makebox(0, 6.6){$h$}}
\cnodeput(15,130){i}{\makebox(0, 6.6){$i$}}
\ncline{f}{e}
\ncline{g}{e}
\ncline{h}{f}
\ncline{i}{f}
\ncline{i}{g}
\cnodeput(-20,160){j}{\makebox(0, 6.6){$j$}}
\cnodeput(20,160){k}{\makebox(0, 6.6){$k$}}
\cnodeput(-5,180){ell}{\makebox(0, 6.6){$\ell$}}
\cnodeput(25,180){m}{\makebox(0, 6.6){$m$}}
\ncline{j}{h}
\ncline{j}{i}
\ncline{k}{i}
\ncline{ell}{j}
\ncline{ell}{k}
\ncline{m}{k}
\psline[linewidth=1.5pt](-30,30)(30,30)
\psline[linewidth=1.5pt](-30,92)(30,92)
\psline[linewidth=1.5pt](-30,147)(30,147)
\psline[linewidth=1.5pt](-30,195)(30,195)
\pscurve[linewidth=1.5pt,linestyle=dotted](-30,62)(-25,62)(20,50)(30,50)
\pscurve[linewidth=1.5pt,linestyle=dotted](-30,67)(-25,67)(25,72)(30,72)
\psline[linewidth=1.5pt,linestyle=dotted](-30,115)(30,115)
\pscurve[linewidth=1.5pt,linestyle=dotted]%
(-30,120)(-25,120)(10,138)(20,140)(30,140)
\rput[l](33,30){$H_0 = H_{0,0}$}
\rput[l](33,50){$H_{0,1}$}
\rput[l](33,72){$H_{0,2}$}
\rput[l](33,92){$H_1=H_{0,3}=H_{1,0}$}
\rput[l](33,115){$H_{1,1}$}
\rput[l](33,137){$H_{1,2}$}
\rput[l](33,147){$H_2=H_{1,3}=H_{2,0}$}
\rput[l](33,194){$H_3=H_{2,1}$}
\rput[c](150,200){\underline{Learnable clauses}}
\rput[c](150,188){$\{ \overline\ell, h \}$ }
\rput[c](150,175){$\{ \overline\ell, \overline m, i \}$ }
\rput[c](150,150){$\{ \overline h, \overline i, e \}$ }
\rput[c](150,138){$\{ \overline f, \overline i, e \}$ }
\rput[c](150,115){$\{ \overline f, \overline g, e \}$ }
\rput[c](150,92){$\{ \overline e \}$ }
\rput[c](150,72){$\{ \overline b, \overline d \}$ }
\rput[c](150,50){$\{ \overline b, \overline c \}$ }
\end{pspicture}
\end{center}
\caption{A series-parallel decomposition. Solid lines define
the sets~$H_i$ of the parallel part of the decomposition, and dotted lines
define the sets $H_{i,j}$ in the series part. Each line (solid or dotted)
defines the set of nodes that lie below the line.
The learnable clauses associated with each
set are shown in the right column.
}
\label{seriesparallelFig}
\end{figure}
It should be noted that the definition of the parallel decomposition
incorporates the notion of ``cut'' used
by Silva and Sakallah~\cite{SilvaSakallah1996}.
The DLL algorithm shown in \pref{fig:dll-l-up} chooses
a single series-parallel decomposition~$\mathcal H$
and learns some subset of the learnable clauses in~$\Cc {\mathcal H}$.
It is clear
that this generalizes all of the clause learning algorithms
mentioned above.
The algorithm schema \textsc{DLL-L-UP} that is given in
\pref{fig:dll-l-up} is a modification of the schema \textsc{DLL}. In
addition to returning a satisfying assignment
or \texttt{UNSAT}, it returns a modified formula that might include
learned clauses. If $F$ is a set of clauses
and $\alpha$~is an assignment then \textsc{DLL-L-UP}($F,\,\alpha$)
returns $(F',\alpha')$ such that $F^\prime \supseteq F$ and
$F^\prime$ is equivalent to~$F$ and such that
$\alpha^\prime$~either
is \texttt{UNSAT} or is a satisfying assignment for~$F$.\footnote{
Our definition of \textsc{DLL-L-UP} is slightly different from the version of
the algorithm as originally defined in Hoffmann's thesis \cite{Hoffmann2007}. The first
main difference is that we use series-parallel decompositions
rather than the compatible sets of subconflict graphs of Hoffmann~\cite{Hoffmann2007}.
The second
difference is that our algorithm does not build the implication
graph incrementally by the use of explicit unit propagation;
instead, it builds the implication graph once
a conflict has been found.}
\begin{figure}[htbp]
\begin{center}
\begin{minipage}{1.0\linewidth}
\tt \small
\begin{tabbing}
123\=123455\=12345\=12345\=12345\=12345\=12345\=12345\=12345\=12345\=12345\=12345 \kill
\>{\sc DLL-L-UP}($F,\alpha$)\\
\>1\>if $\rest{F}{\alpha} = 1$ then return ($F,\alpha$) \\
\>2\>if there is a conflict graph for~$F$ under~$\alpha$ then \\
\>3\>\>choose a conflict graph~$G$ for~$F$ under~$\alpha$ \\
\>4\>\>\>and a series-parallel decomposition~$\mathcal H$ of~$G$ \\
\>5\>\>choose a subset $S$ of $\Cc{{\mathcal H}}$ ~~ -- the learned clauses \\
\>6\>\>return ($F\cup S$, UNSAT) \\
\>7\>choose $x \in \var(\rest{F}{\alpha})$ and $\epsilon \in \{0,1\}$\\
\>8\>($G,\beta$)$\leftarrow${\sc DLL-L-UP}($F,\alpha \cup \{(x,\epsilon)\}$)\\
\>9\>if $\beta \neq $ UNSAT then \\
\>10\>\>return ($G,\beta$)\\
\>11\>return {\sc DLL-L-UP}($G,\alpha \cup \{(x,1-\epsilon)\})$
\end{tabbing}
\end{minipage}
\caption{DLL with Clause Learning.}
\label{fig:dll-l-up}
\end{center}
\end{figure}
The \textsc{DLL-L-UP} algorithm as shown in \pref{fig:dll-l-up} does not
explicitly include unit propagation. Rather, the use of unit propagation is
hidden in the test on line~2 of whether unit propagation can be used to
find a conflict graph. In practice, of course, most algorithms set
variables by unit propagation as soon as possible and update
the implication graph each time a new unit variable is set. The
algorithm as formulated in \pref{fig:dll-l-up} is more general, and thus
covers more possible implementations of \textsc{DLL-L-UP}, including
algorithms that may change the implication graph retroactively or may pick
among several conflict graphs depending on the details of how $F$~can be
falsified. There is at
least one implemented clause learning algorithm that does this \cite{FMM:SAT04zChaff}.
As shown in \pref{fig:dll-l-up}, if $\rest F \alpha$ is false, then the
algorithm must return \texttt{UNSAT} (lines 2--6).
Sometimes, however,
we use instead a ``non-greedy'' version of \textsc{DLL-L-UP}. For the
non-greedy version it is optional for the algorithm to immediately return
\texttt{UNSAT} once $F$ has a conflict graph.
Thus the non-greedy \textsc{DLL-L-UP} algorithm can set
a branching variable (lines 7--11) even if $F$~has already been falsified
and even if there are unit clauses present.
This non-greedy version of \textsc{DLL-L-UP} will
be used in the next section to simulate
regWRTI proofs.
The constructions of
Section~\ref{sec:regwrti-dll}
also imply that \textsc{DLL-L-UP} is p-equivalent
to the restriction of \textsc{DLL-L-UP} in which only series
decompositions are allowed. That is to say, \textsc{DLL-L-UP}
with only series decompositions can simulate any run of
\textsc{DLL-L-UP} with at most polynomially many more recursive
calls.
\section{w-resolution trees with lemmas}\label{sec:wrtl}
This section first gives an alternate characterization of resolution
dags by using \emph{resolution trees with lemmas}. We then refine the notion
of lemmas to allow only \emph{input lemmas}. For non-regular derivations,
resolution trees with lemmas and resolution trees with input lemmas
are both proved below to be p-equivalent to resolution.
However, for regular proofs,
the notions are apparently different. (In fact we give an exponential
separation between regular resolution and regular w-resolution trees with
input lemmas.) Later in the paper we will
give a tight correspondence between resolution trees with input lemmas
and DLL search algorithms.
The intuition for the definition of a resolution tree with lemmas is
to allow any clause proved earlier in the resolution tree to be reused as
a leaf clause. More formally, assume we are
given a resolution proof tree~$T$, and further assume~$T$ is {\em ordered} in
that each internal node has a left child and a right child.
We define
$ <_T $ to be the post-ordering of~$T$,
namely, the linear ordering of the nodes of~$T$
such that if $u$~is a node in~$T$ and $v$~is in the subtree rooted
at $u$'s left child, and $w$~is in the subtree rooted at $u$'s right
child, then $v <_T w <_T u$.
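The ordering $<_T$ is the usual post-order traversal of an ordered binary tree; for concreteness, here is a minimal sketch with our own node encoding, a tuple (left, right, clause) where both children of a leaf are None.

```python
def post_order(node):
    """List the clauses of an ordered binary tree in the order <_T:
    left subtree, then right subtree, then the node itself."""
    left, right, clause = node
    out = []
    if left is not None:
        out += post_order(left)
    if right is not None:
        out += post_order(right)
    out.append(clause)
    return out
```

A lemma at a leaf may then be any clause occurring strictly earlier in this list.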
For $F$ a set of clauses, a {\em resolution tree with lemmas} (RTL) proof
from~$F$
is an ordered binary tree such that
(1)~each leaf node~$v$ is labeled with either a member
of~$F$ or with a clause that labels some node $u <_T v$, and
(2)~each internal node~$v$ is labeled with a variable~$x$ and a clause~$C$,
such that~$C$
is inferred by resolution w.r.t.~$x$ from the clauses labeling the two children
of~$v$, and (3)~the unique out-degree zero node is labeled
with the conclusion clause~$D$. If $D=\Box$, then the RTL proof is
a refutation.
{\em w-resolution trees with lemmas} (WRTL) are defined just like
RTL's, but allowing w-resolution in place of resolution, and
\emph{resolution trees with lemmas and weakening} (RTLW) are defined
in the same way, but allowing the weakening rule in addition to
resolution.
An RTL or WRTL proof is {\em regular} provided
that no path in the proof tree contains more than one (w-)resolution
using a given variable~$x$. Note that paths follow the tree edges
only; any maximal path starts at a leaf node (possibly
a lemma) and ends at the conclusion.
It is not hard to see that resolution trees with lemmas (RTL) and resolution dags (RD)
p-simulate each other. Namely, an RD can be converted into an RTL by doing
a depth-first, leftmost traversal of the RD. In addition, it is clear
that regular RTL's p-simulate regular RD's. The converse
is open,
and it is false for regular WRTL, as we prove in \pref{sec:regwrti-dll}:
intuitively, the problem is that
when one converts an RTL proof into an RD, new path connections are
created when leaf clauses are replaced with edges back to the node
where the lemma was derived.
We next define resolution trees with input lemmas (RTI) proofs. These
are a restricted version of resolution trees with lemmas, where the lemmas
are required to have been derived earlier in the proof by \emph{input proofs}.
Input proofs have also been called \emph{trivial proofs} by
Beame et al.~\cite{BeameKautzSabharwal2004}, and they are useful for
characterizing the clause learning permissible for DLL algorithms.
\begin{defi}
An {\em input resolution tree} is a resolution tree such
that every internal node
has at least one child that is a leaf.
Let $v$~be a node
in a tree~$T$ and let $T_v$ be the subtree of~$T$ with root~$v$.
The node~$v$ is called
an {\em input-derived node} if $T_v$~is an input resolution tree.
\end{defi}
Often the node~$v$ and its label~$C$ are identified. In this case,
$C$~is called an {\em input-derived clause}. In RTI proofs,
input-derived clauses may be reused as lemmas. Thus, in an RTI proof,
an input-derived clause is derived by an input proof whose leaves
either are initial clauses or are clauses that were already input-derived.
\begin{defi}
A {\em resolution tree with input lemmas} (RTI) proof~$T$
is an RTL proof with the
extra condition that every lemma in~$T$ must appear earlier in~$T$ as
an input-derived clause. That is to say, every leaf node~$u$ in~$T$ is
labeled either with an initial clause from~$F$ or with a clause that labels
some input-derived node $v <_T u$.
\end{defi}
The notions of w-resolution trees with input lemmas (WRTI), regular
resolution trees with input lemmas (regRTI), and regular w-resolution
trees with input lemmas (regWRTI) are defined similarly.%
\footnote{A small, but
important point is that w-resolution inferences are not allowed in
input proofs, even for input proofs that are part of WRTI proofs.
We have chosen the definition of input proofs so as
to make the results in \pref{sec:regwrti-dll} hold that show the equivalence
between regWRTI proofs and DLL-L-UP search algorithms.
Although similar results could be
obtained if the definition of input proof were changed to allow
w-resolution inferences, it would require also using a modified, and
less natural, version of clause learning.}
It is clear that the resolution dags (RD) and resolution trees with lemmas (RTL)
p-simulate resolution trees with input lemmas (RTI). Somewhat surprisingly,
the next theorem shows that the converse p-simulation holds as well.
\begin{thm}\label{the:RD->RTI}
Let $G$ be a resolution dag of
size~$s$ for the clause~$C$ from the set~$F$ of clauses.
Let $d$ be the depth of~$C$ in~$G$.
Then there is an RTI proof~$T$ for~$C$ from~$F$ of size $< 2sd$.
If $G$ is regular then $T$ is also regular.
\end{thm}
\proof
The dag proof~$G$ can be unfolded into a proof tree~$T^\prime$, possibly
exponentially bigger. The proof idea is to prune clauses away
from~$T^\prime$, leaving an RTI proof~$T$ of the desired size.
Without loss of generality, no clause appears more than once in~$G$; hence,
for a given clause~$C$ in the tree~$T^\prime$, every occurrence of~$C$
in~$T^\prime$ is derived by the same subproof~$T^\prime_C$.
Let $d_C$ be the depth of~$C$
in the proof, i.e., the height of the tree~$T^\prime_C$. Clauses
at leaves have depth~$0$.
We give the proof tree~$T^\prime$ an arbitrary left-to-right order, so that it
makes sense to talk about the $i$-th occurrence of a clause~$C$ in~$T^\prime$.
We define the
$j$-th occurrence of a clause~$C$ in~$T^\prime$ to be \emph{leafable},
provided $j > d_C$. The intuition is that the leafable clauses
will have been proved as an input-derived clause earlier in~$T$, and thus
any leafable clause may be used as a lemma in~$T$.
To form~$T$ from~$T^\prime$, remove from~$T^\prime$ any clause~$D$ if it has a
successor that is leafable, so that every leafable occurrence of a clause
either does not appear in~$T$ or appears in~$T$
as a leaf.
To prove that $T$~is a valid RTI proof, it suffices
to prove, by induction on~$i$, that if $C$~has depth $d_C=i>0$,
then the
$i$-th occurrence of~$C$ is input-derived in~$T$.
Note that the two children $C_0$ and~$C_1$ of~$C$
must have depth $<d_C$.
Since every occurrence of~$C$ is derived from the same two clauses, these
occurrences of $C_0$ and~$C_1$ must be at least their
$i$-th occurrences. Therefore, by the induction hypothesis, the
children $C_0$ and~$C_1$ are leafable and appear in~$T$ as leaves.
Thus, since it is derived by a single
inference from two leaves, the $i$-th occurrence of~$C$ is input-derived.
It follows that $T$ is a valid RTI proof. If the proof~$G$ was regular,
clearly $T$~is regular too.
To prove the size bound for~$T$,
note that $G$ has at most
$s-1$ internal nodes. Each one occurs at most $d$ times as an internal
node in~$T$, so $T$ has at most $d(s-1)$ internal nodes. Thus, $T$~has
at most $2d\cdot (s-1) +1 < 2sd$ nodes in all.
\qed
The following two theorems summarize the relationships between our
various proof systems. We write ${\mathcal R}\equiv{\mathcal Q}$ to denote
that $\mathcal R$ and~$\mathcal Q$ are p-equivalent, and ${\mathcal Q}\le {\mathcal R}$
to denote that $\mathcal R$ p-simulates $\mathcal Q$. The notation
${\mathcal Q} < {\mathcal R}$ means that $\mathcal R$ p-simulates~$\mathcal Q$ but
$\mathcal Q$ does not simulate~$\mathcal R$.
\begin{thm}\label{the:all_equiv}
$\text{RD} \equiv \text{WRD} \equiv \text{RTI} \equiv \text{WRTI}
\equiv \text{RTL} \equiv \text{WRTL}$
\end{thm}
\proof
The p-equivalences
$\text{RD} \equiv \text{WRD}$ and $\text{RTI} \equiv \text{WRTI}$ and
$\text{RTL} \equiv \text{WRTL}$ are shown by (the proof of)
\pref{pro:subsumeweak}. The simulations
$\text{RTI} \le \text{RTL} \equiv \text{RD}$ are straightforward.
Finally, $\text{RD} \le \text{RTI}$ is shown by Theorem~\ref{the:RD->RTI}.
\qed
For regular resolution, we have the following theorem.
\begin{thm}\label{the:hierarchy}
$\text{regRD} \equiv \text{regWRD} \leq \text{regRTI}
\leq \text{regRTL} \leq \text{regWRTL} \leq \text{RD}$ and
$\text{regRTI} \leq \text{regWRTI} \leq \text{regWRTL}$.
\end{thm}
\proof
$\text{regRD} \equiv \text{regWRD}$ and $\text{regWRTL} \leq \text{RD}$
follow from the definitions and the proof of \pref{pro:subsumeweak}.
The p-simulations $\text{regRTI} \leq \text{regRTL}
\leq \text{regWRTL}$ and $\text{regRTI} \leq
\text{regWRTI} \leq \text{regWRTL}$ follow from the definitions.
The p-simulation $\text{regRD} \leq \text{regRTI}$ is shown by
Theorem~\ref{the:RD->RTI}.
\qed
Below, we prove, as \pref{the:regRDnosimregWRTI}, that
$\text{regRD} < \text{regWRTI}$.
This is the only separation in the hierarchy that is known.
In particular, it is open whether
$\text{regRD} < \text{regRTI}$,
$\text{regRTI} < \text{regRTL}$, $\text{regRTL} < \text{regWRTL}$,
$\text{regWRTL} < \text{RD}$, or $\text{regWRTI} < \text{regWRTL}$ holds.
It is also open whether regWRTI and regRTL are comparable.
\section{Introduction}\label{sec:intro}
Although
the satisfiability problem for propositional logic (SAT) is NP-complete,
there exist SAT solvers that can decide SAT on present-day computers
for many formulas that are relevant in practice
\cite{SilvaSakallah1996, MoskewiczMalik2001, MahajanFu2004,
BerreSimon2003, BerreSimon2004, BerreSimon2005}.
The fastest SAT solvers for structured problems are based on the basic
backtracking procedures known as DLL algorithms
\cite{DavisLogemann1962}, extended with additional techniques such as
clause learning.
DLL algorithms can be seen as a kind of proof search procedure since
the execution of a DLL algorithm on an unsatisfiable CNF formula
yields a tree-like resolution refutation of that formula. Conversely,
given a tree-like resolution refutation, an execution of a DLL
algorithm on the refuted formula can be constructed whose runtime is
roughly the size of the refutation. By this exact correspondence,
upper and lower bounds on the size of tree-like resolution proofs
transfer to bounds on the runtime of DLL algorithms.
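As a concrete illustration of this correspondence, the following minimal Python sketch (illustrative code of our own, not any particular solver; all names are ours) extracts from a DLL search a clause that is derivable by tree-like resolution and falsified by the current partial assignment; at the root this is the empty clause. Literals are signed integers, clauses are frozensets of literals, and the formula is assumed unsatisfiable.

```python
def resolve(c_pos, c_neg, v):
    """Resolvent on variable v, where v occurs in c_pos and -v in c_neg."""
    return (c_pos - {v}) | (c_neg - {-v})

def dll_refute(clauses, alpha=frozenset()):
    """DLL search on an unsatisfiable CNF.  Returns a clause that is
    falsified by the partial assignment alpha and has a tree-like
    resolution derivation from `clauses` mirroring the recursion tree."""
    for c in clauses:                  # conflict: c itself is the certificate
        if all(-l in alpha for l in c):
            return c
    v = next(abs(l) for c in clauses for l in c
             if l not in alpha and -l not in alpha)    # branching variable
    c_true = dll_refute(clauses, alpha | {v})          # branch v := true
    if -v not in c_true:               # the other branch is not needed
        return c_true
    c_false = dll_refute(clauses, alpha | {-v})        # branch v := false
    if v not in c_false:
        return c_false
    return resolve(c_false, c_true, v) # one resolution inference per branch node
```

For instance, on the four clauses $\{x\vee y,\ x\vee\overline y,\ \overline x\vee y,\ \overline x\vee\overline y\}$, encoded as \texttt{[frozenset(\{1,2\}), frozenset(\{1,-2\}), frozenset(\{-1,2\}), frozenset(\{-1,-2\})]}, the search returns the empty clause.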
This paper generalizes this exact correspondence to extensions of DLL
by clause learning. To this end, we define natural, rule-based
resolution proof systems and then prove that they correspond to DLL
algorithms that use various forms of clause learning.
The motivation for this is that
the correspondence between
a clause learning DLL algorithm and a proof system helps explain the
power of the algorithm by describing the space of proofs that
the algorithm searches.
In addition, upper and lower bounds
on proof complexity can be transferred to upper and lower bounds on the
possible runtimes of large classes of DLL algorithms with clause learning.
We introduce, in {\pref{sec:wrtl}}, tree-like
resolution refinements using the notions of a resolution tree with
lemmas (RTL) and a resolution tree with input lemmas (RTI).
An RTL is a tree-like resolution proof in which every
clause needs to be derived only once; if it is used several times, it can
be copied and used as a leaf of the tree (i.e., as a lemma). As the
reader might guess, RTL is polynomially equivalent to general
resolution.
Since DLL algorithms use learning based on unit propagation, and since
unit propagation is equivalent to input resolution (sometimes
called ``trivial resolution'' \cite{BeameKautzSabharwal2004}), it is
useful to restrict the lemmas that are used in an RTL to those that
appear as the root of input subproofs.
This gives rise to proof systems based on resolution
trees with input lemmas (RTI). Somewhat surprisingly, we
show that RTI can also simulate general resolution.
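The correspondence between unit propagation and input resolution can be made concrete with a small sketch (illustrative Python under our own conventions; literals are signed integers, an assignment is the set of literals it makes true, and the function names are hypothetical). When propagation reaches a conflict, the conflict clause is resolved against the reason clauses of the propagated literals, in reverse propagation order, one input-resolution step per literal:

```python
def unit_prop_refute(clauses, alpha):
    """Unit-propagate from the assignment `alpha`; on conflict, return a
    clause falsified by the original alpha, derived by input resolution
    (each step resolves the running clause with an initial clause).
    Returns None if propagation reaches no conflict."""
    alpha, trail, reason = set(alpha), [], {}
    progress = True
    while progress:
        progress = False
        for c in clauses:
            if any(l in alpha for l in c):
                continue                        # clause already satisfied
            free = [l for l in c if -l not in alpha]
            if not free:                        # conflict: c is falsified
                learned = set(c)
                for u in reversed(trail):       # resolve propagated literals
                    if -u in learned:           # away, newest first
                        learned = (learned - {-u}) | (reason[u] - {u})
                return learned
            if len(free) == 1:                  # unit clause: force a literal
                u = free[0]
                alpha.add(u)
                trail.append(u)
                reason[u] = set(c)
                progress = True
    return None
```

For example, from $F=\{\overline x\vee y,\ \overline y\vee z,\ \overline x\vee\overline z\}$ under the assignment making $x$ true, propagation forces $y$ and then $z$, and the conflict analysis returns the clause $\{\overline x\}$, each step being an input resolution against a clause of~$F$.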
A resolution proof is called {\em regular} if no variable is used as a
resolution variable twice along any path in the tree. Regular proofs
occur naturally in the present context, since a backtracking algorithm
would never query the same variable twice on one branch of its
execution.
It is known that regular resolution is weaker than
general resolution~\cite{Goerdt1993,Alekhnovich2002},
but it is unknown whether
regular resolution can simulate regular RTL or regular RTI. This
is because, in regular RTL/RTI proofs, variables that are used for
resolution to derive a clause can be reused on paths where this clause
appears as a lemma.
For resolution and regular resolution,
the use of a weakening rule does not increase the power
of the proof system (by the subsumption principle). However,
for RTI and regular RTL proofs, the weakening rule may increase the strength
of the proof system (this is an open question, in fact),
since eliminating uses
of weak inferences may require pruning away parts of the proof that contain
lemmas needed later in the proof.
Accordingly, \pref{sec:wrtl} also defines proof systems
regWRTL and regWRTI that consist of regular RTL and regular RTI
(respectively), but with a modified form
of resolution, called ``w-resolution'', that incorporates
a restricted form
of the weakening rule.
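As a sketch of how such an inference behaves (the precise definition appears in \pref{sec:wrtl}; the reading below is ours), w-resolution on a variable~$x$ infers $(C_1\setminus\{\overline x\})\cup(C_2\setminus\{x\})$ from $C_1$ and~$C_2$, without requiring that $\overline x$ actually occur in~$C_1$ or $x$ in~$C_2$:

```python
def wresolve(c1, c2, x):
    """w-resolution on variable x (a sketch): infer (c1 - {-x}) | (c2 - {x}).
    If -x occurs in c1 and x in c2, this is the ordinary resolvent; if
    either literal is absent, the step amounts to resolution combined
    with a restricted weakening."""
    return (set(c1) - {-x}) | (set(c2) - {x})
```

For example, \texttt{wresolve(\{-1, 2\}, \{1, 3\}, 1)} is the ordinary resolvent $\{2,3\}$, while \texttt{wresolve(\{2\}, \{1, 3\}, 1)} also yields $\{2,3\}$, implicitly weakening the first hypothesis.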
In {\pref{sec:dll-up}} we propose a general framework
for DLL
algorithms with clause learning, called \textsc{DLL-L-UP}.
The schema \textsc{DLL-L-UP} is an attempt to give a short and abstract
definition of
modern SAT solvers, and it
incorporates all common learning
strategies, including all the specific strategies discussed by
Beame et al.~\cite{BeameKautzSabharwal2004}.
{\pref{sec:regwrti-dll}} proves that,
for any of these learning strategies, a proof search tree
can be transformed into a regular WRTI proof with only a polynomial
increase in size. Conversely, any regular WRTI proof can be simulated by
a ``non-greedy'' DLL search tree with clause learning, where by ``non-greedy''
is meant that the algorithm can continue decision branching even after
unit propagation could yield a contradiction.
In {\pref{sec:dll-learn}} we give another
generalization of
DLL with clause learning called {\textsc{DLL-Learn}}.
The algorithm {\textsc{DLL-Learn}}{} can simulate the clause learning algorithm
\textsc{DLL-L-UP}.
More precisely, we prove that
{\textsc{DLL-Learn}}{} p-simulates, and is
p-simulated by, regular WRTL.
The
{\textsc{DLL-Learn}}{} algorithm
is very similar to the ``pool resolution'' algorithm that
has been introduced by Van Gelder~\cite{VanGelder2005}
but differs from pool resolution by using the
``w-resolution'' inference in place of the ``degenerate'' inference
used by Van Gelder (the terminology ``degenerate'' is
used by Hertel et al.~\cite{BHPvG:clauselearn}).
Van Gelder has shown that
pool resolution can simulate not only regular resolution, but
also any resolution refutation which has a regular depth-first search
tree.
The latter proof system is the same as
the proof system regRTL in our framework; therefore,
the same holds for {\textsc{DLL-Learn}}{}.
It is unknown whether {\textsc{DLL-Learn}}{} or \textsc{DLL-L-UP} can p-simulate
pool resolution or vice versa.
Sections \ref{sec:dll-up}--\ref{sec:dll-learn} prove the equivalence
of clause learning algorithms with the two proof systems regWRTI and
regWRTL.
The genuinely novel system is regWRTI: this system has the
advantage of using
input lemmas in a manner that closely matches the range of clause
learning algorithms that can be used by practical DLL algorithms.
In particular, the regWRTI proof system's use of input lemmas
corresponds directly to the clause learning strategies
of Silva and Sakallah \cite{SilvaSakallah1996}, including
first-UIP, relsat, and other clauses based on cuts, and
including learning multiple clauses at a time.
Van Gelder~\cite{VanGelder2005} shows that pool resolution can also
simulate these kinds of clause learning (at least, for learning
single clauses), but the correspondence is much
more natural for the system regWRTI than for either pool resolution
or {\textsc{DLL-Learn}}{}.
It is known that DLL algorithms with clause learning
and restarts can simulate full (non-regular, dag-like)
resolution by learning every derived clause,
and doing a restart
each time a clause is learned~\cite{BeameKautzSabharwal2004}.
Our proof systems, regWRTI and {\textsc{DLL-Learn}}{}, do not handle
restarts; instead, they can be viewed as capturing what can happen
between restarts. Another approach to simulating full resolution
is via the use of ``proof trace extensions'' introduced by
Beame et al.~\cite{BeameKautzSabharwal2004}.
Proof trace extensions allow resolution to be simulated by
clause learning DLL algorithms, and a related construction is
used by Hertel et al.~\cite{BHPvG:clauselearn}
to show that pool resolution can ``effectively''
p-simulate full resolution. These constructions require introducing
new variables and clauses in a way that does not affect satisfiability, but
allow a clause learning
DLL algorithm or pool resolution to establish non-satisfiability.
However,
the constructions by Beame et al.~\cite{BeameKautzSabharwal2004} and the initially
circulated
preprint of Hertel et al.~\cite{BHPvG:clauselearn} had
the drawback that the number of extra
introduced variables depends on the size of the (unknown) resolution
refutation.
\pref{sec:varexp} introduces an improved form of proof trace
extensions called
``variable extensions''. Theorem~\ref{the:pte_trick} shows that
variable extensions can be used to give a p-simulation
of full resolution by regWRTI (at the cost of changing the
formula that is being refuted).
Variable extensions are simpler and more powerful than proof
trace extensions.
Their main advantage
is that a variable extension depends only on the number of variables,
not on the size of the (unknown) resolution proof.
The results of \pref{sec:varexp} were first published
in the second author's diploma thesis \cite{Hoffmann2007};
the subsequently published version of the article of Hertel et al.~\cite{BHPvG:clauselearn}
gives a similarly improved construction (for pool resolution)
that does not depend on the
size of the resolution proof and, in addition, does not use
degenerate resolution inferences.
One consequence of Theorem~\ref{the:pte_trick} is that
regWRTI can effectively p-simulate full resolution. This
improves on the results of Hertel et al.~\cite{BHPvG:clauselearn}
since regWRTI is not known to be
as strong as pool resolution.
It remains open whether regWRTI or pool resolution
can p-simulate general resolution without variable extensions.
\pref{sec:smlem}
proves a lower bound that shows that for certain hard formulas,
the pigeonhole principle $PHP_n$, learning only small
clauses does not help a DLL-algorithm. We show that resolution trees
with lemmas require size exponential in $n\log n$ to refute $PHP_n$
when the size of clauses used as lemmas is restricted to be less than
$n/2$. This bound is asymptotically the same as the lower bound shown
for tree-like resolution refutations of $PHP_n$ \cite{iwamiy99}. On
the other hand, there are
regular resolution refutations of $PHP_n$
of size exponential in~$n$~\cite{BusPit97},
and our results show that
these can be simulated by \textsc{DLL-L-UP}. Hence the ability
to learn large clauses can give a DLL-algorithm a superpolynomial
speedup over one that learns only short clauses.
\section{Equivalence of regWRTI and DLL-L-UP}\label{sec:regwrti-dll}
\subsection{regWRTI simulates DLL-L-UP}
We shall prove that regular WRTI proofs are equivalent to
non-greedy \hbox{\textsc{DLL-L-UP}} searches. We start by showing that
every \textsc{DLL-L-UP} search can be converted into a
regWRTI proof. As a first step, we prove that, for a
given series-parallel decomposition~$\mathcal H$ of a conflict graph, there
is a single regWRTI proof~$T$ such that every learnable clause of~$\mathcal H$
appears as an input-derived
clause in~$T$. Furthermore, $T$~is polynomial size;
in fact,
$T$ has size at most quadratic in the number of distinct variables
that appear in the conflict graph.
This theorem generalizes earlier, well-known results of Chang
\cite{Chang1970} and Beame et al.~\cite{BeameKautzSabharwal2004} that
any individual learned clause can be derived by input resolution (or, more
specifically, that unit resolution is equivalent to input resolution).
The theorem states a similar fact about proving an entire
set of learnable clauses simultaneously.
\begin{thm} \label{the:regWRTIforLearnables}
Let $G$~be a conflict graph of size~$n$ for~$F$ under the assignment~$\alpha$.
Let $\mathcal H$ be a series-parallel decomposition for~$G$.
Then there is a regWRTI proof~$T$ of size~$\le n^2$
such that every learnable clause
of~$\mathcal H$ is an input-derived clause in~$T$. The final clause of~$T$
is equal to $\Cc{G}$.
Furthermore, $T$~uses as resolution variables only
variables that are used as nodes (possibly negated) in
$G\setminus \leafs{G}$.
\end{thm}
First we prove a lemma.
Let the subconflict graphs $H_0\subset H_1\subset \cdots \subset H_k$
and $H_{0,0}\subset H_{0,1} \subset \cdots \subset H_{k-1,m_{k-1}}$
be as in the definition of series-parallel decomposition.
\begin{lem} \label{lem:lemmaA}
\hspace*{1em}
\begin{enumerate}[\em(a)]
\item There is an input proof~$T_0$ from~$F$ which contains
every conflict clause $\CC {H_{0,j}}$, for $j=1,\ldots,m_0$.
Every resolution variable in~$T_0$
is a non-leaf node (possibly negated) in~$H_1$.
\item Suppose that $1\le i<k$ and $u$~is a literal in $\leafs{H_i}$.
Then there is an input proof~$T^u_i$ which contains every
(existing) induced clause $\ic{u}{H_{i,j}}$ for $j=1,\ldots,m_i$.
Every resolution variable in~$T^u_i$ is a non-leaf node (possibly negated)
in the subgraph $(H_{i+1})_u$ of~$H_{i+1}$ rooted at~$u$.
\end{enumerate}
\end{lem}
\proof
We prove part~a.\ of the lemma and then indicate the minor
modifications needed to prove part~b.
The construction of~$T_0$ proceeds by induction on~$j$ to build
proofs $T_{0,j}$; at the end, $T_0$~is set equal to~$T_{0,m_0}$.
Each proof $T_{0,j}$ ends with the clause $\CC{H_{0,j}}$ and contains
the earlier proof~$T_{0,j-1}$ as a subproof.
In addition, the only variables used as resolution variables in~$T_{0,j}$
are variables that are non-leaf nodes (possibly negated) in~$H_{0,j}$.
To prove the base case $j=1$, we must show that
$\CC{H_{0,1}}$ has an input proof~$T_{0,1}$.
Let the two immediate predecessors of~$\Box$ in~$G$ be the literals $x$
and~$\overline x$.
Define a clause~$C$ as follows. If $x$ is not a leaf in~$H_{0,1}$,
then we let $C = C_x$; recall that $C_x$~is the clause that contains
the literal~$x$ and the negations of literals that are immediate
predecessors of~$x$ in the conflict graph. Otherwise,
since $H_{0,1}\not= H_0$, $\overline x$ is not a leaf in~$H_{0,1}$,
and we let
$C=C_{\overline x}$.
By inspection,
$C$~has the
property that it contains only negations of literals that are in~$H_{0,1}$.
For $l\in C$, define
the $\{0,1\}$-depth of~$l$ as the maximum length
of a path to~$\overline l$ from a leaf of~$H_{0,1}$. If all literals in~$C$
have $\{0,1\}$-depth equal to zero, then $C = \CC{H_{0,1}}$,
and $C$~certainly has an input proof from~$F$ (in fact, since $C=C_x$
or $C=C_{\overline x}$, we must have $C\in F$).
Suppose, on the other hand,
that $C$ is a subset of the nodes of~$H_{0,1}$ with
some literals of non-zero $\{0,1\}$-depth.
Choose a literal~$l$ in~$C$
of maximum $\{0,1\}$-depth~$d$ and
resolve $C$ with the clause $C_{\overline {l}}\in F$ to
obtain a new clause~$C^\prime$. Since $C_{\overline {l}}\in F$,
the resolution step introducing~$C^\prime$ preserves the property of
having an input proof from~$F$.
Furthermore, the new literals in~$C^\prime\setminus C$
have $\{0,1\}$-depth strictly less than~$d$.
Redefine $C$~to be the just-constructed clause~$C^\prime$. If
this new~$C$ is a subset of~$\CC{H_{0,1}}$, we are done constructing~$C$.
Otherwise,
some literal in~$C$ has non-zero $\{0,1\}$-depth. In this latter case,
we repeat the above construction to obtain a new~$C$, and continue
iterating this process
until we obtain~$C\subset \CC{H_{0,1}}$.
When the above construction is finished, $C$~is constructed as a clause
with a regular input proof~$T_{0,1}$ from~$F$ (the regularity follows from the
fact that variables introduced in~$C^\prime$ have $\{0,1\}$-depth less than
that of the resolved-upon variable). Furthermore $C\subset \CC{H_{0,1}}$.
In fact, $C = \CC{H_{0,1}}$ must hold, because there is a path, in~$H_{0,1}$,
from each leaf of~$H_{0,1}$ to~$\Box$.
That completes the proof of the $j=1$ base case.
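The iteration in this base case can be traced with a small loop. The Python sketch below uses a hypothetical encoding of our own: \texttt{clause\_of[v]} plays the role of the clause~$C_v$, and \texttt{depth[l]} records the $\{0,1\}$-depth of the literal~$l$.

```python
def peel(C, clause_of, depth):
    """Starting from clause C, repeatedly pick a literal l of maximum
    positive depth and resolve with the clause C_{l-bar} = clause_of[-l].
    Every literal introduced has strictly smaller depth, so the loop
    terminates with a clause of depth-0 (leaf) literals only."""
    C = set(C)
    while any(depth[l] > 0 for l in C):
        l = max(C, key=lambda t: depth[t])            # max-depth literal
        C = (C - {l}) | (set(clause_of[-l]) - {-l})   # one resolution step
    return C
```

For instance, for a node~$3$ whose predecessors $1,2$ are leaves, \texttt{peel(\{-3\}, \{3: \{3, -1, -2\}\}, \{-3: 1, -1: 0, -2: 0\})} returns $\{-1,-2\}$.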
For the induction step, with $j>1$,
the induction hypothesis is that we have constructed
an input proof~$T_{0,j}$ such that
$T_{0,j}$ contains all the clauses $\CC{H_{0,p}}$ for $1\le p \le j$ and
such that the final clause in~$T_{0,j}$ is the clause $\CC{H_{0,j}}$.
We are seeking to extend this input proof to an input proof
$T_{0,j+1}$ that ends with the
clause $\CC{H_{0,j+1}}$. The construction of~$T_{0,j+1}$ proceeds exactly
like the construction above of~$T_{0,1}$, but now we start with
the clause $C = \CC{H_{0,j}}$ (instead of $C=C_x$ or~$C_{\overline x}$),
and we update~$C$ by choosing the literal~${l}\in C$
of maximum $\{0,j+1\}$-depth
and resolving with~$C_{\overline {l}}$ to derive the next~$C$.
The rest of the construction of~$T_{0,j+1}$ is similar to
the previous argument.
For the regularity of the proof it is essential that $H_{0,j}$ is a
proper subconflict graph of $H_{0,j+1}$.
By inspection, any literal~$l$ used for resolution in the new
part of~$T_{0,j+1}$ is a non-leaf node in~$H_{0,j+1}$ and has a path
from~$l$ to some leaf node of~$H_{0,j}$. Since $H_{0,j}$ is proper,
it follows that $l$~is not an inner node of~$H_{0,j}$ and thus is
not used as a resolution literal in~$T_{0,j}$. Thus $T_{0,j+1}$ is
regular.
This completes the proof of part~a.
The proof for part~b.\ is very similar to the proof for part~a.
Fixing $i>0$, let $u$~be any literal in $\leafs{H_{i,0}}$. We
need to prove, for $1\le j\le m_i$, there is an input proof~$T_{i,j}^u$
from~$F$
such that
(a)~$T_{i,j}^u$~contains every existing induced clause $\ic u {H_{i,k}}$ for
$1\le k<j$, and (b)~$T_{i,j}^u$ ends with the
induced clause $\ic u {H_{i,j}}$,
and (c)~the resolution variables used in~$T^u_{i,j}$ are all non-leaf nodes
(possibly negated) of $V_{(H_{i,j})_u}$. The proof is by induction on~$j$.
One starts with the clause $C = C_u$. The main step of the construction
of $T^u_{i,j+1}$ from~$T^u_{i,j}$ is to find the literal $v\not=u$ in~$C$
of maximum $\{i,j\}$-depth, and resolve $C$ with~$C_{\overline v}$
to obtain the next~$C$. This process proceeds iteratively
exactly like the construction
used for part~a.
This completes the proof of \pref{lem:lemmaA}.
\qed
We can now prove \pref{the:regWRTIforLearnables}. \pref{lem:lemmaA}
constructed separate regular input resolution
proofs $T_{0,m_0}=T_0$ and~$T_{i,m_i}^u=T_i^u$ that included
all the learnable clauses of~$\mathcal H$. To complete the proof
of \pref{the:regWRTIforLearnables}, we combine all these proofs
into one single regWRTI proof. For this, we construct
proofs $T^*_i$ of the clause $\CC {H_i}$. $T^*_1$~is just~$T_{0}$.
The proof~$T^*_{i+1}$ is constructed from~$T^*_i$ by
successively resolving the final clause of~$T^*_i$ with the final clauses
of the proofs~$T^u_i$,
using each $u \in \leafs {H_i}\setminus \leafs{H_{i+1}}$ as
a resolution variable, taking
the~$u$'s in order of increasing $\{i,m_i\}$-depth to preserve
regularity.
Letting $T = T^*_k$,
it is clear
that $T^*_k$~contains all the clauses from~$\Cc{\mathcal H}$,
and, by construction, $T^*_k$~is regular.
To bound the size of~$T$, note that any
regular input proof~$S$ has size $2r+1$, where $r$~is
the number of distinct variables used as resolution variables in~$S$.
Since $T$ is regular, and is formed by combining the regular
input proofs $T_0$, $T^u_i$ in a linear fashion, the total size
of~$T$ is less than $n + \sum_{k=0}^{n-1} 2k + 1 = n^2+1$.
This completes the proof of \pref{the:regWRTIforLearnables}.
\qed
Note that, since the final clause of~$T$ contains only literals
from $\leafs G$, $T$~does not use any variable that occurs in its final
clause as a resolution variable.
\medskip
We can now prove the first main result of this section, namely, that
regWRTI proofs polynomially simulate \textsc{DLL-L-UP} search trees.
\begin{thm}\label{the:rWRTIsimDLL}
Suppose that $F$~is an unsatisfiable set of clauses and that there is
an execution of a
(possibly non-greedy)
\textsc{DLL-L-UP} search algorithm on input~$F$ that outputs \texttt{UNSAT}
with $s$~recursive calls. Then there is a regWRTI refutation of~$F$
of size at most $s\cdot n^2$ where $n = |\var(F) |$.
\end{thm}
\proof
Let $S$ be the search tree associated with the \textsc{DLL-L-UP}
algorithm's execution.
We order~$S$ so that the \textsc{DLL-L-UP} algorithm effectively
traverses~$S$ in
a depth-first, left-to-right order. We transform~$S$ into
a regWRTI proof tree~$T$ as follows. The tree~$T$ contains a copy
of~$S$, but adds subproofs at the leaves of~$S$ (these subproofs will
be derivations of learned clauses). For each internal node in~$S$,
if the corresponding branching variable was~$x$ and was first set
to the value~$x^\epsilon$, then the corresponding node in~$T$ is
labeled with $x$ as the resolution variable, and its left incoming
edge is labeled with~$x^{\epsilon}$ and its right incoming edge
is labeled with~$x^{1-\epsilon}$.
For each node~$u$ in~$S$, let $\alpha_u$~be
the assignment at that node that is held by the
\textsc{DLL-L-UP} algorithm upon
reaching that node.
By construction, $\alpha_u$~is equivalently defined as the assignment
that has $\alpha_u(l) = 1$ for every literal~$l$
that labels an edge
on the path (in~$T$) between~$u$ and the root of~$T$.
For a node~$u$ that is a leaf of~$S$, the \textsc{DLL-L-UP} algorithm chooses
a conflict graph~$G_u$ with
a series-parallel decomposition~${\mathcal H}_u$ such that every
leaf node~$l$ of~$G_u$ is a literal set to
true by~$\alpha_u$. Also, let~$F_u$ be the
set~$F$ of original clauses augmented with all clauses learned
by the \textsc{DLL-L-UP} algorithm before reaching node~$u$.
By \pref{the:regWRTIforLearnables},
there is a proof~$T_u$ from the clauses~$F_u$ such that
every learnable clause of~${\mathcal H}_u$ appears in~$T_u$ as an
input-derived clause. Hence, of course, every clause learned at~$u$ by the
\textsc{DLL-L-UP} algorithm appears in~$T_u$ as an input-derived clause.
The leaf node~$u$ of~$S$ is then
replaced by the proof~$T_u$ in~$T$.
Note that by \pref{the:regWRTIforLearnables} and
the definition of conflict graphs, the final clause~$C_u$ of~$T_u$
is a clause that contains only literals falsified by~$\alpha_u$.
So far, we have defined the clauses~$C_u$ that label nodes~$u$ in~$T$
only for leaf nodes~$u$. For internal nodes~$u$, we define $C_u$~inductively
by letting $v$ and~$w$ be the immediate predecessors of~$u$ in~$T$ and
defining $C_u$~to be the clause obtained by (w-)resolution
from the clauses $C_v$ and~$C_w$ with respect to the branching
variable~$x$ that was picked at node~$u$ by the \textsc{DLL-L-UP}
algorithm. Clearly, using induction from the leaves of~$S$,
the clause~$C_u$ contains only literals that are falsified by the
assignment~$\alpha_u$. This makes $T$ a regWRTI
proof.
Let $r$~be the root node of~$S$. Since $\alpha_r$~is the empty assignment,
the clause~$C_r$
must equal the empty clause~$\Box$. Thus $T$~is a regWRTI refutation
of~$F$ and \pref{the:rWRTIsimDLL} is proved.
\qed
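The inductive computation of the clauses~$C_u$ in the proof above can be sketched as follows (illustrative Python; the encoding is ours: a leaf of the search tree is represented by the final clause of its attached input proof, and an internal node by a triple of its branching variable and the two subtrees, with the left branch taken to set the variable true):

```python
def label(node):
    """Compute the clause C_u at each node of the search tree: a leaf is
    labelled by its own clause, and an internal node (x, left, right) by
    the (w-)resolvent of its children's labels on the branching variable x."""
    if isinstance(node, (set, frozenset)):
        return set(node)                       # leaf: its clause
    x, left, right = node
    c_true = label(left)                       # branch x := true
    c_false = label(right)                     # branch x := false
    return (set(c_false) - {x}) | (set(c_true) - {-x})
```

At the root the result is the empty clause; e.g.\ \texttt{label((1, \{-1\}, \{1\}))} returns the empty set.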
Since DLL clause learning based on first cuts has been shown
to give exponentially shorter proofs than
regular resolution~\cite{BeameKautzSabharwal2004},
and since
\pref{the:rWRTIsimDLL} states that regWRTI can simulate DLL
search algorithms (including ones that learn first cut clauses),
we have proved that regRD does not simulate regWRTI:
\begin{thm}\label{the:regRDnosimregWRTI}
$\text{regRD} < \text{regWRTI}$.
\end{thm}
Hoffmann \cite{Hoffmann2007} gave a
direct proof of \pref{the:regRDnosimregWRTI} based on the variable
extensions described below in \pref{sec:varexp}.
\subsection{DLL-L-UP simulates regWRTI}
We next show that the non-greedy \textsc{DLL-L-UP} search procedure can simulate
any regWRTI proof~$T$. The intuition is that we split~$T$ into two
parts: the \emph{input parts} are the subtrees of~$T$ that contain
only input-derived clauses. The \emph{interior part} of~$T$ is the rest of~$T$.
The interior part will be simulated by a \textsc{DLL-L-UP} search procedure
that traverses the tree~$T$ and at each node, chooses the resolution
variable as the branching variable and sets the branching variable
according to the label on the left incoming edge. In this way, the
tree~$T$ is traversed in a depth-first, left-to-right order. The
input parts of~$T$ are not traversed, however. Once an input-derived clause
is reached, the \textsc{DLL-L-UP} search learns all the clauses in
that input subproof and backtracks returning \texttt{UNSAT}.
The heart of the procedure is how a conflict graph and corresponding
series-parallel decomposition can be picked so as to make all the
clauses in a given input subproof learnable. This is the content of
the next lemma.
\begin{lem}\label{lem:lemmaB}
Let $T$~be a regular input proof of~$C$ from a set of clauses~$F$.
Suppose that $\alpha$ falsifies~$C$, that is, $\rest C \alpha = 0$.
Further suppose no variable in~$C$ is used as a resolution variable
in~$T$.
Then there is a conflict graph~$G$ for~$F$ under~$\alpha$ and
a series decomposition~$\mathcal H$ for~$G$ such that the set of learnable
clauses of~${\mathcal H}$ is equal to the set of input-derived clauses of~$T$.
\end{lem}
Recall that a series decomposition just means a series-parallel decomposition
with a trivial parallel part, i.e., $k=1$ in the definition of
series-parallel decompositions.
\proof
Without loss of generality, $F$~is just the set of initial
clauses of~$T$. Let the input proof~$T$ contain clauses $C_{m+1}=C,
C_{m}, \ldots,C_1, D_{m},\ldots,D_1$ as illustrated in
\pref{fig:regRTI} with $m=4$. Each $C_{i+1}$~is inferred from $C_{i}$
and~$D_{i}$ by resolution on~$l_{i}$, where
$\overline{l_i}\in C_i$ and $l_i \in D_i$.
For each~$i$, we have
$D_i = \{l_i\} \cup D^\prime_i$, where $ D^\prime_i\subseteq C_{i+1}$.
Likewise,
$C_i = \{\overline{l_i}\} \cup C^\prime_i$,
where $ C^\prime_i\subseteq C_{i+1}$.
\begin{figure}
\begin{center}
\psset{unit=1cm}
\begin{pspicture}(-6,-0.2)(3,4)
\pscircle*(-4,4){0.07}
\pscircle*(-2,4){0.07}
\pscircle*(-3,3){0.07}
\pscircle*(-1,3){0.07}
\pscircle*(-2,2){0.07}
\pscircle*(0,2){0.07}
\pscircle*(-1,1){0.07}
\pscircle*(1,1){0.07}
\pscircle*(0,0){0.07}
\psline(-4,4)(0,0)
\psline(-2,4)(-3,3)
\psline(-1,3)(-2,2)
\psline(0,2)(-1,1)
\psline(1,1)(0,0)
\uput[-45](0.5,0.5){$\overline{l_4}$}
\uput[-45](-0.5,1.5){$\overline{l_3}$}
\uput[-45](-1.5,2.5){$\overline{l_2}$}
\uput[-45](-2.5,3.5){$\overline{l_1}$}
\uput[-135](-0.5,0.5){$l_4$}
\uput[-135](-1.5,1.5){$l_3$}
\uput[-135](-2.5,2.5){$l_2$}
\uput[-135](-3.5,3.5){$l_1$}
\uput[45](1,1){$D_4$}
\uput[45](0,2){$D_3$}
\uput[45](-1,3){$D_2$}
\uput[45](-2,4){$D_1$}
\uput[225](-1,1){$C_4$}
\uput[225](-2,2){$C_3$}
\uput[225](-3,3){$C_2$}
\uput[135](-4,4){$C_1$}
\uput[-90](0,0){$C_{5}=C$}
\end{pspicture}
\end{center}
\caption{A regular input proof of~$C$. Edges are labeled $l_i$ or
$\overline{l_i}$. The $C_i$'s and $D_i$'s are clauses.}
\label{fig:regRTI}
\end{figure}
As illustrated in Figure~\ref{fig:conflictDecomp},
we construct conflict graphs
$H_{0,0} = \{\Box,l_1,\overline{l_1}\} \subset H_{0,1}
\subset \cdots \subset H_{0,m} =G$ which form a series decomposition
of~$G$. $H_{0,i}$~will be a conflict graph
from the set of clauses $\{C_1,D_1,\ldots,D_i\}$ under~$\alpha_i$ where
$\alpha_i$~is the assignment that falsifies all the literals in~$C_{i+1}$.
Indeed, the leaves of~$H_{0,i}$ are precisely the negations
of literals in~$C_{i+1}$.
For $i>0$, the
non-leaf nodes of $H_{0,i}$ are $\overline{l_1}$ and $l_1,\ldots,l_i$. The
predecessors of~$\overline{l_1}$ are defined to be the literals~$u$
with $\overline{u} \in C_1^\prime$, that is $C_{\overline{l_1}} = C_1$.
Likewise, the predecessors of~$l_i$ are
the literals~$u$ with $\overline{u} \in D_i^\prime$ so that
$C_{l_i} = D_i$.
To start with, we define $H_{0,0}$ to equal $\{\Box, l_1, \overline l_1\}$.
Let $H_{0,i}$ be already
constructed. Then we have $\overline{l}_{i+1} \in C_{i+1}$ since
$C_{i+2}$ is inferred by
resolution on~$l_{i+1}$ from~$C_{i+1}$.
It follows that $\alpha_i(l_{i+1}) = 1$ and that $l_{i+1}$~is
a leaf in~$H_{0,i}$. We obtain~$H_{0,i+1}$ from~$H_{0,i}$ by adding the
predecessors of~$l_{i+1}$ (i.e., the literals~$u$ with $\overline{u} \in
D_{i+1}^\prime $) to~$H_{0,i}$. The leaves of~$H_{0,i+1}$ are now exactly
the negations of the literals in the clause~$C_{i+2}$. Finally,
the graph $H_{0,m}$ equals~$G$, and the series decomposition~$\mathcal{H}$
defined by the graphs~$H_{0,i}$ is as desired. This completes the proof of
\pref{lem:lemmaB}.
\qed
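The growth of the leaf sets in this construction can be traced mechanically. The Python sketch below (using our own hypothetical encoding of the clause chain of \pref{fig:regRTI}, with literals as signed integers) computes the clauses $C_{i+1}$ along the input proof and the corresponding leaf sets of the graphs~$H_{0,i}$:

```python
def leaf_sets(C1, chain):
    """Follow a regular input proof: `chain` lists the pairs (D_i, l_i),
    and C_{i+1} = (C_i - {-l_i}) | (D_i - {l_i}).  The leaf set of H_{0,i}
    consists of the negations of the literals of C_{i+1}."""
    C, leaves = set(C1), []
    for D, l in chain:
        C = (C - {-l}) | (set(D) - {l})     # resolve C_i with D_i on l_i
        leaves.append({-u for u in C})      # leaves of H_{0,i}
    return leaves
```

For example, with $C_1=\{\overline{l_1},\overline{l_2},\overline a\}$, $D_1=\{l_1,\overline b\}$, and $D_2=\{l_2,\overline c\}$, the leaf sets grow from $\{l_2,a,b\}$ to $\{a,b,c\}$.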
\begin{figure}
\begin{center}
\psset{yunit=1.2cm}
\psset{xunit=0.8cm}
\begin{pspicture}(-6,-0.2)(7,8)
\rput(0,0){$\Box$}
\pnode(0,0){BOX}
\pscircle(0,0){0.4cm}
\rput(0,6){$l_4$}
\pnode(0,6){L4}
\pscircle(0,6){0.4cm}
\rput(1.5,4.5){$l_3$}
\pnode(1.5,4.5){L3}
\pscircle(1.5,4.5){0.4cm}
\rput(3.0,3.0){$l_2$}
\pnode(3.0,3.0){L2}
\pscircle(3.0,3.0){0.4cm}
\rput(4.5,1.5){$l_1$}
\pnode(4.5,1.5){L1}
\pscircle(4.5,1.5){0.4cm}
\rput(-4.5,1.5){$\overline{l_1}$}
\pnode(-4.5,1.5){L1neg}
\pscircle(-4.5,1.5){0.4cm}
\psset{nodesep=0.4cm}
\ncline{->}{L4}{L3}
\ncline{->}{L3}{L2}
\ncline{->}{L2}{L1}
\ncline{->}{L1}{BOX}
\ncline{->}{L4}{L1neg}
\ncline{->}{L3}{L1neg}
\ncline{->}{L2}{L1neg}
\ncline{->}{L1neg}{BOX}
\rput(5.5,3.0){$D^{\prime\prime}_1$}
\rput(4.2,4.7){$D^{\prime\prime}_2$}
\rput(2.7,6.2){$D^{\prime\prime}_3$}
\rput(0.0,7.5){$D^{\prime\prime}_4$}
\rput(-5.2,3.2){$C^{\prime\prime}_1$}
\pnode(5.5,3.0){D1}
\pnode(4.2,4.7){D2}
\pnode(2.7,6.2){D3}
\pnode(0.0,7.5){D4}
\pnode(-5.2,3.2){C1}
\ncline[doubleline=true]{->}{D1}{L1}
\ncline[doubleline=true]{->}{D2}{L2}
\ncline[doubleline=true]{->}{D3}{L3}
\ncline[doubleline=true]{->}{D4}{L4}
\ncline[doubleline=true]{->}{C1}{L1neg}
\psset{arcangle=-25}
\ncarc{->}{L4}{L2}
\ncarc{->}{L3}{L1}
\psset{arcangle=-35}
\ncarc{->}{L4}{L1}
\psset{linestyle=dotted,linewidth=1.5pt}
\psline(-6.5,2.25)(6.5,2.25)
\psline(-6.1,3.75)(6.1,3.75)
\psline(-5.7,5.25)(5.7,5.25)
\psline(-5.3,6.75)(5.3,6.75)
\psline(-4.9,8.25)(4.9,8.25)
\uput[0](6.5,2.25){$H_{0,0}$}
\uput[0](6.1,3.75){$H_{0,1}$}
\uput[0](5.7,5.25){$H_{0,2}$}
\uput[0](5.3,6.75){$H_{0,3}$}
\uput[0](4.9,8.25){$H_{0,4}$}
\end{pspicture}
\end{center}
\caption{A conflict graph and a series decomposition. The solid lines
and arcs
indicate edges that may or may not be present.
The notations $C^{\prime\prime}_1$
and $D^{\prime\prime}_i$ indicate zero or more literals, and
the double lines indicate an edge from each literal in the set.
The dotted lines indicate
cuts, and thereby the sets $H_{0,i}$ in the
series decomposition. Namely,
the set~$H_{0,i}$ contains the nodes below the corresponding
dotted line.}
\label{fig:conflictDecomp}
\end{figure}
We can now finish the proof that \textsc{DLL-L-UP} simulates
regWRTI.
\begin{thm}\label{the:DLLsimrWRTI}
Suppose that $F$~has a regWRTI refutation of size~$s$. Then there is
an execution of the non-greedy {\rm \textsc{DLL-L-UP}}
algorithm with the input \hbox{\rm ($F,\varnothing$)}
that makes $<s$~recursive calls.
\end{thm}
\proof
Let $T$~be a regWRTI refutation of~$F$. The \textsc{DLL-L-UP} algorithm
works by traversing the proof tree~$T$ in a depth-first, left-to-right order.
At each non-input-derived node~$u$ of~$T$, labeled with
a clause~$C$, the resolution variable for that clause
is chosen as the branching variable~$x$, and the variable~$x$ is
assigned the value 1 or~0, corresponding to the label on the
edges coming into~$u$. By part~b.\ of \pref{the:regWprops},
the clause~$C$ is
falsified by the assignment~$\alpha$. At each input-derived node of~$T$,
the \textsc{DLL-L-UP} algorithm learns the clauses in the input subproof
above~$u$ by using the conflict graph and series decomposition given
by \pref{lem:lemmaB}. Since the \textsc{DLL-L-UP} search cannot find
a satisfying assignment, it must terminate after traversing the (non-input)
nodes in the regWRTI refutation tree. The number of recursive calls will
equal twice the number of non-input-derived nodes of~$T$,
which is less than~$s$.
\qed
\section{Generalized DLL with clause learning}\label{sec:dll-learn}
\subsection{The algorithm DLL-Learn}
This section presents a new formulation of DLL with learning
called \textsc{DLL-Learn}. This algorithm differs from
\textsc{DLL-L-UP} in two important ways. First, unit propagation is
no longer used explicitly (although it can be simulated). Second,
the \textsc{DLL-Learn} algorithm uses more information that arises
during the DLL search process, namely, it can infer clauses
by resolution at each node in the search tree. This makes it
possible for \textsc{DLL-Learn} to simulate regular resolution trees with
full lemmas; more
specifically, \textsc{DLL-Learn} is equivalent to
regWRTL.
The {\textsc{DLL-Learn}}{} algorithm is very similar to the pool resolution
system introduced by Van Gelder~\cite{VanGelder2005}. Furthermore,
our Theorem~\ref{the:DLL-Learn=regwRTL}
is similar to results obtained by Van Gelder
for pool resolution.
Our constructions differ mostly
in that we use w-resolution in place
of the degenerate resolution inference of Van Gelder~\cite{VanGelder2005}.
Loosely speaking,
Van Gelder's degenerate resolution inference is a method of allowing
resolution to operate on any two clauses without any weakening. In contrast,
our w-resolution is a method for allowing resolution to operate on
any two clauses, but with the maximum reasonable amount of weakening.
The idea of \textsc{DLL-Learn} is to extend DLL
so that it can learn a new clause~$C$ at each node in the
search tree. As usual, the new clause will
satisfy $F \equiv F\cup\{C\}$.
At leaves, \textsc{DLL-Learn} does not learn a
new clause, but marks a preexisting falsified clause as ``new''.
At internal nodes, after branching on a variable~$x$ and
making two recursive calls, the \textsc{DLL-Learn} algorithm can
use w-resolution to infer a new clause,
$C$, from the two identified new clauses $C_0$ and~$C_1$ returned
by the recursive calls.
Since $x$ need not occur in $\var(C_0)$ or~$\var(C_1)$,
$C$~is obtained by a w-resolution instead of resolution.
The \textsc{DLL-Learn}
algorithm shown in \pref{fig:dll-learn}
uses non-greedy detection of contradictions.
Namely, the ``{\tt optionally do}'' on line~2 of \pref{fig:dll-learn}
allows the algorithm to
continue to branch on variables even if the formula is already
unsatisfied.
This feature is
needed for a direct proof of \pref{the:DLL-Learn=regwRTL}.
In addition, it could be helpful in an implementation of the
algorithm: Think of a call of \textsc{DLL}$(F,\alpha)$ such that
$\rest{F}{\alpha} = 0$ and suppose that all of the falsified clauses
$C \in F$ are very large and thus undesirable to learn.
It might, for example, be the case that $\rest{F}{\alpha}$ contains
two conflicting unit clauses $\rest{C_0}{\alpha}=\{x\}$ and
$\rest{C_1}{\alpha}= \{\neg x\}$, where $C_0$ and~$C_1$ are small.
In that case, it could be better to branch on the
variable~$x$ and to learn the resolvent of $C_0$ and~$C_1$.
There is one situation where it is not optional to
execute lines 3--4: namely, if $\alpha$~is a total assignment,
then the algorithm must execute lines 3--4.
Note that it is possible to remove $C_0$ and~$C_1$ from $F$ in line~13
if they were previously learned. Additionally, in an implementation
of \textsc{DLL-Learn} it could be helpful to tag~$C_i$ as the new
clause in~$H$ in line~13 if $C_i \subseteq C$ for some $i\in\{0,1\}$ instead of
learning~$C$ --- this would be essentially equivalent to using
Van Gelder's degenerate resolution instead of
w-resolution.
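As a concrete sketch of the learning step on line~11 of \textsc{DLL-Learn}, the following Python fragment encodes the literal $x^1$ as $+x$ and $x^0$ as $-x$ (an encoding chosen here purely for illustration):

```python
def lit(x, b):
    """The literal x^b: x^1 is encoded as +x, x^0 as -x (illustrative)."""
    return x if b == 1 else -x

def w_resolve(c0, c1, x, eps):
    """Line 11 of DLL-Learn: C <- (C0 - {x^(1-eps)}) U (C1 - {x^eps}).

    Unlike ordinary resolution, neither clause is required to mention
    the branching variable x; this is what makes it a w-resolution."""
    return frozenset((c0 - {lit(x, 1 - eps)}) | (c1 - {lit(x, eps)}))
```

For example, with $\epsilon=1$ the clauses $\{\neg x, a\}$ and $\{x, b\}$ resolve to $\{a,b\}$, while two clauses not mentioning $x$ simply merge.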
\begin{figure}[htbp]
\begin{center}
\begin{minipage}{1.0\linewidth}
\tt \small
\begin{tabbing}
123\=123455\=12345\=12345\=12345\=12345\=12345\=12345\=12345\=12345\=12345\=12345 \kill
\>{\sc DLL-Learn}($F,\alpha$)\\
\>1\>if $\rest{F}{\alpha} = 1$ then return ($F,\alpha$) \\
\>2\>if $\rest{F}{\alpha} = 0$ then optionally do \>\>\>\>\>\>\>\>\> \\
\>3\>\>tag a $C \in F$ with $\rest{C}{\alpha}=0$ as the new clause\\
\>4\>\>return ($F,$\hspace{0.2em}UNSAT)\\
\>5\>choose $x \in \var(F) \setminus \dom(\alpha)$ and a value $\epsilon \in \{0,1\}$\\
\>6\>($G,\beta$)$\leftarrow${\sc DLL-Learn}($F,\alpha \cup \{(x,\epsilon)\})$\\
\>7\>if $\beta \neq $ UNSAT then return ($G,\beta$)\\
\>8\>($H,\gamma$)$\leftarrow${\sc DLL-Learn}($G,\alpha \cup \{(x,1-\epsilon)\})$\\
\>9\>if $\gamma \neq$ UNSAT then return ($H,\gamma$)\\
\>10\>select the new $C_0 \in G$ and the new $C_1 \in H$\\
\>11\>$C \leftarrow (C_0 - \{x^{1-\epsilon}\}) \cup (C_1 - \{x^\epsilon\})$\\
\>12\>$H \leftarrow H \cup \{C\}$ \>\>\>\>\>\>\>\>\> -- {\sl learn a clause}\\
\>13\>tag $C$ as the new clause in~$H$. \\
\>14\>return ($H,$\hspace{0.2em}UNSAT)
\end{tabbing}
\end{minipage}
\caption{DLL with generalized clause learning.}
\label{fig:dll-learn}
\end{center}
\end{figure}
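To make the correspondence with search trees concrete, here is a small greedy rendering of \textsc{DLL-Learn} in Python (greedy in the sense that lines 3 and 4 are always executed when $\rest{F}{\alpha}=0$, so the ``optionally do'' is not modeled; clauses are frozensets of nonzero integers with $+x$ and $-x$ as the two literals of variable~$x$, and the fixed branching order is an illustrative choice):

```python
def dll_learn(F, alpha):
    """Toy DLL-Learn. F: frozenset of clauses (frozensets of ints),
    alpha: dict variable -> 0/1. Returns (F', status, new_clause)."""
    def value(l):  # truth value of literal l under alpha, None if unset
        v = alpha.get(abs(l))
        return None if v is None else (v == 1) == (l > 0)
    if all(any(value(l) for l in c) for c in F):        # line 1: F|alpha = 1
        return F, 'SAT', None
    for c in F:                                          # lines 2-4 (greedy)
        if all(value(l) is False for l in c):
            return F, 'UNSAT', c                         # tag falsified clause
    x = min({abs(l) for c in F for l in c} - alpha.keys())   # line 5
    eps = 1
    G, s0, c0 = dll_learn(F, {**alpha, x: eps})          # line 6
    if s0 == 'SAT':
        return G, s0, None                               # line 7
    H, s1, c1 = dll_learn(G, {**alpha, x: 1 - eps})      # line 8
    if s1 == 'SAT':
        return H, s1, None                               # line 9
    sgn = lambda b: x if b == 1 else -x                  # literal x^b
    C = frozenset((c0 - {sgn(1 - eps)}) | (c1 - {sgn(eps)}))  # line 11
    return H | {C}, 'UNSAT', C                           # lines 12-14
```

On an unsatisfiable input, the clause tagged at the root is falsified by the empty assignment and hence must be the empty clause.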
It is easy to verify that, at any point in the \textsc{DLL-Learn}
algorithm, when a clause~$C$ is tagged as new, then $\rest C \alpha = 0$.
There is a straightforward, and direct, translation between executions
of the \textsc{DLL-Learn} search algorithm on input $(F,\varnothing)$ and
regWRTL proofs of~$F$. An execution of \textsc{DLL-Learn}($F,\varnothing$)
can be
viewed as traversing a tree in depth-first, left-to-right order. If there
are $s-1$ recursive calls to \textsc{DLL-Learn}, the tree has $s$~nodes.
Each node of the search tree is labeled with the clause tagged in the
corresponding call to \textsc{DLL-Learn}. Thus, leaves of the
tree are labeled with clauses that either are from~$F$ or were learned
earlier in the tree. The clause on an internal node of the tree
is inferred from the clauses on the two
children using w-resolution with respect to the branching variable.
Finally, the clause~$C$ labeling the root node,
where $\alpha = \varnothing$, must
be the empty clause, since $\alpha$~must falsify~$C$.
In this way the search algorithm describes precisely a regWRTL
proof tree. Conversely, any regWRTL refutation of~$F$ corresponds exactly
to an execution of \textsc{DLL-Learn}($F,\varnothing$).
This translation between \textsc{DLL-Learn} executions and regWRTL proof trees
gives the following theorem.
\begin{thm}\label{the:DLL-Learn=regwRTL}
Let $F$~be a set of clauses.
There exists a regWRTL refutation of~$F$ of size~$s$
if and only if there is an execution of
\textsc{DLL-Learn}$(F,\varnothing)$ that performs exactly $s-1$
recursive calls.
\end{thm}
It follows as a corollary of
Theorems \ref{the:hierarchy} and~\ref{the:DLL-Learn=regwRTL}
that \textsc{DLL-Learn} can polynomially
simulate \textsc{DLL-L-UP}.
\section{Variable Extensions}\label{sec:varexp}
This section introduces the notion of a \emph{variable extension} of a CNF
formula. A variable extension augments a set~$F$ of clauses with additional
clauses such that the modified formula $\ve F$~is satisfiable if and only if $F$
is satisfiable.
Variable extensions will be used to prove that regWRTI
proofs can simulate resolution dags, in the sense
that if there is an RD refutation of~$F$, then there is a
polynomial size regWRTI refutation of~$\ve F$.
Hence,
\textsc{DLL-Learn}
and the non-greedy version of \textsc{DLL-L-UP}
can simulate full (non-regular) resolution in the same sense.
Our definition of
variable extensions is inspired by the proof trace extensions
of Beame et al.~\cite{BeameKautzSabharwal2004} that were used to separate
DLL with clause learning from regular resolution dags.
A similar construction was used by Hertel et~al.~\cite{BHPvG:clauselearn}
to show that pool resolution can simulate full resolution.
Our results strengthen and extend the prior results by applying
directly to regWRTI proofs.
More importantly,
in contrast to proof trace extensions, variable extensions do
not depend on the size of a (possibly unknown) resolution proof but only
on the number of variables in the formula.
\begin{defi}
Let $F$ be a set of clauses
and $|\var(F)|=n$.
The set of \emph{extension variables} of~$F$ is $\ev{F} = \{q,p_1,
\ldots, p_n\}$, where $q$ and~$p_i$ are new variables.
The \emph{variable extension} of~$F$ is the set of clauses
\[
\ve{F} ~=~ F
\cup \big\{ \{q, \bar l\}:l \in C \in F\big\}
\cup \big\{\{p_1,p_2, \ldots, p_n \}\big\}.
\]
\end{defi}
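The construction is easy to carry out mechanically. A minimal Python sketch, under the assumed encoding that the original variables are $1,\dots,n$, that $q$ is variable $n+1$, and that $p_i$ is variable $n+1+i$ (literals are signed integers):

```python
def variable_extension(F, n):
    """Build ve(F) for a set F of clauses over variables 1..n.

    Assumed encoding: q is variable n+1, p_i is variable n+1+i."""
    q = n + 1
    p = [n + 1 + i for i in range(1, n + 1)]
    guards = {frozenset({q, -l}) for C in F for l in C}  # clauses {q, ~l}
    return frozenset(F) | guards | {frozenset(p)}
```

Since at most one guard clause is produced per literal occurrence, the output has size linear in $|F|$.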
Obviously $\ve F$ is satisfiable if and only if~$F$ is. Furthermore,
$|\ve F| = O(|F|)$.
Suppose that $G$ is a resolution dag (RD) proof from~$F$.
We can reexpress~$G$ as a sequence of (derived) clauses $C_1,C_2,\ldots, C_t$
which has the following properties: (a)~$C_t$~is the final
clause of~$G$,
and (b)~each $C_i$ is inferred by resolution from two clauses $D$ and~$E$,
where each of $D$ and~$E$
either are in~$F$ or
appear earlier in the sequence as $C_j$ with $j<i$. Basically, the sequence
is an ordinary resolution refutation, but with the clauses from~$F$ omitted.
\begin{lem}\label{lem:pte_trick_helper}
Suppose that $D,E\vdash_x C$. Then, there is
an input resolution proof tree~$T_C$ of the clause~$\{q\}$ from
$\ve F \cup \{D,E\}$ such that $C$~appears in~$T_C$ and such that
$|T_C| = 2\cdot |C|+3$.
\end{lem}
\proof
The proof~$T_C$ starts by resolving $D$ and~$E$ to yield~$C$. It
then resolves successively with the clauses $\{q,\overline l\}$,
for $l\in C$, to derive~$\{q\}$.
\qed
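The proof of the lemma can be replayed mechanically; the sketch below (Python, with literals as signed integers and $q$ an assumed fresh variable) builds the chain and counts the nodes of $T_C$:

```python
def input_proof_to_q(D, E, x, q):
    """Resolve D, E on x to get C, then resolve each literal l of C
    away against the clause {q, ~l}; returns (C, final_clause, |T_C|).

    Node count: the leaves D, E plus the node C give 3 nodes; every
    literal of C contributes one leaf {q, ~l} and one internal node,
    so |T_C| = 2|C| + 3."""
    assert x in D and -x in E
    C = frozenset((D - {x}) | (E - {-x}))
    size, cur = 3, C
    for l in sorted(C):
        cur = frozenset((cur - {l}) | {q})   # resolvent of cur and {q, ~l}
        size += 2
    return C, cur, size
```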
\begin{thm}\label{the:pte_trick}
Let $F$ be a set of clauses, $n = |\var(F)|$, and let $C$~be a clause.
Suppose that $G$ is a resolution dag proof of~$C$ from~$F$ of size~$s$.
Then, there is a regWRTI proof~$T$ of~$C$ from~$\ve F$
of size $\le 2s\cdot(d+2)+1$ where $d = \max \{ |D| : D\in G \}\le n$.
\end{thm}
\proof
Let $C_1,\ldots, C_t$ be a sequence of the derived clauses in~$G$ as above.
Without loss of generality, $t< 2^n$, since $F$~also has a regular resolution
tree refutation, which has depth at most~$n$ and thus fewer than $2^n$
internal nodes.
Let $T^\prime$~be a binary tree with $t$~leaves and
of height~$h = \lceil \log_2 t \rceil \le n$. For each
node~$u$ in~$T^\prime$, let $l(u)$~be the level of~$u$ in~$T^\prime$, namely,
the number of edges between $u$ and the root.
Label $u$ with the variable~$p_{l(u)}$. Also, label every node~$u$ in~$T^\prime$
with the clause~$\{q\}$. $T^\prime$ will form the middle part of
a regWRTI proof:
each clause $\{q\}$ at level $i$ is inferred by w-resolution from
its two children clauses (also equal to~$\{q\}$) with respect to the
variable~$p_i$.
Now, we expand $T^\prime$ into a
regWRTI proof tree~$T^{\prime\prime}$. For this, for $1\le i\le t$,
we replace the
$i$-th leaf of~$T^\prime$ with a new subproof~$T_{C_i}$ defined as follows.
Letting $C_i$ be as above, let $D_i$ and~$E_i$ be the
two clauses from which $C_i$~is inferred in~$G$.
Then replace the $i$-th leaf of~$T^\prime$ by the input proof~$T_{C_i}$ from
\pref{lem:pte_trick_helper} which contains~$C_i$ and ends with the
clause~$\{q\}$. Note that each of $D_i$ and~$E_i$ either is in~$F$ or appeared
as an input clause in a proof, $T_{D_i}$ or~$T_{E_i}$,
inserted at an earlier leaf of~$T^\prime$. Therefore $T^{\prime\prime}$ is
a valid regWRTI proof of~$\{q\}$ from~$\ve F$.
Since there are at most $s-1$ internal nodes in~$T^\prime$ and each
$T_{C_i}$ has size $\le 2d+3$,
$T^{\prime\prime}$ has size
at most $(s-1) + s\cdot(2d+3)$.
Finally, we form a regWRTI proof of~$C$ by modifying~$T^{\prime\prime}$ by
adding a new root labeled with the clause~$C$ and the resolution
variable~$q$. Let the
left child of this new root be the root of~$T^{\prime\prime}$,
and let the right child be a new node labeled also with~$C$.
(This is permissible since $C$~is input-derived in~$T^{\prime\prime}$.)
Label the left edge coming to the new root with the literal~$\overline q$,
and the right edge with the literal~$q$. This makes $C$ inferred from
$\{q\}$ and~$C$ by w-resolution with respect to~$q$.
$T$~is a valid regWRTI proof of size at most $s+1+s\cdot(2d+3) = 2s\cdot(d+2)+1$.
\qed
Since \textsc{DLL-L-UP} and \textsc{DLL-Learn} simulate
regWRTI, \pref{the:pte_trick}
implies that these two systems p-simulate full resolution by the
use of variable extensions:
\begin{cor}
Suppose that $F$ has a resolution dag refutation of size~$s$. Then both
\textsc{DLL-L-UP} and \textsc{DLL-Learn}, when
given $\ve F$ as input, have executions that return
\texttt{UNSAT} after at most $p(s)$ recursive calls, for some
polynomial~$p$.
\end{cor}
We now consider some issues about ``naturalness'' of proofs based
on resolution with lemmas.
Beame et al.~\cite{BeameKautzSabharwal2004} defined a refutation system
to be natural provided that, whenever $F$ has a refutation of size~$s$,
then $\rest F \alpha$ has a refutation of size at most~$s$. We need
a somewhat relaxed version of this notion:
\begin{defi}
Let $\mathcal R$ be a refutation system for sets of clauses.
The system~$\mathcal R$ is {\em p-natural} provided there is a polynomial~$p$
such that, whenever a set~$F$
has an $\mathcal R$-refutation of size~$s$, and $\alpha$~is a
restriction, then $\rest F \alpha$ has an $\mathcal R$-refutation
of size $\le p(s)$.
\end{defi}
The next proposition is well-known.
\begin{prop}
Resolution dags (RD) and regular resolution dags (regRD) are
natural proof systems.
\end{prop}
As a corollary to Theorem~\ref{the:pte_trick} we obtain the following
theorem.
\begin{thm}\label{the:equivNatural}
\hspace*{1em}
\begin{enumerate}[\em(a)]
\item regWRTI is equivalent to RD if and only if
regWRTI is p-natural.
\item regWRTL is equivalent to RD if and only if
regWRTL is p-natural.
\end{enumerate}
\end{thm}
\proof
Suppose that $\text{regWRTI}\equiv \text{RD}$. Then, since RD is
natural, we have immediately that regWRTI is p-natural.
Conversely, suppose that regWRTI is p-natural. By \pref{the:hierarchy},
RD p-\penalty10000simulates regWRTI. So it suffices to
prove that regWRTI p-simulates RD. Let $F$~have an
RD refutation of size~$s$. By \pref{the:pte_trick}, $\ve F$
has a regWRTI proof of size~$2s(s+2)+1$.
Let $\alpha$~be the assignment that assigns the value~$1$ to each of
the extension variables $q$ and $p_1,\ldots,p_n$.
Since $\rest {\ve F} \alpha$ is~$F$ and
since regWRTI is
p-natural, $F$~has a regWRTI proof of size at most $p(2s(s+2)+1)$. This
proves that regWRTI p-simulates RD, and completes the proof of~a.
The proof of {b.} is similar.
\qed
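The key step above, that $\rest{\ve F}{\alpha} = F$, is easy to check concretely: every guard clause contains the literal $q$ and is satisfied by~$\alpha$, as is $\{p_1,\dots,p_n\}$. A hedged Python sketch, with an illustrative hand-built encoding of $\ve F$ for $F = \{\{x_1,x_2\},\{\neg x_1\}\}$, taking $q = 3$, $p_1 = 4$, $p_2 = 5$:

```python
def restrict(F, alpha):
    """F|alpha: drop satisfied clauses, delete falsified literals."""
    out = set()
    for c in F:
        truth = {l: None if alpha.get(abs(l)) is None
                 else (alpha[abs(l)] == 1) == (l > 0) for l in c}
        if any(t for t in truth.values()):
            continue                      # clause satisfied: drops out
        out.add(frozenset(l for l in c if truth[l] is None))
    return frozenset(out)

F = frozenset({frozenset({1, 2}), frozenset({-1})})
veF = F | {frozenset({3, -1}), frozenset({3, -2}), frozenset({3, 1}),
           frozenset({4, 5})}
alpha = {3: 1, 4: 1, 5: 1}      # q = p_1 = p_2 = 1
# restrict(veF, alpha) recovers exactly F
```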
Theorem~\ref{the:equivNatural} is stated for the equivalence of
systems with RD. It could also be stated for {\em p-equivalence}, but then
one needs an ``effective'' version of p-natural, where the
$\mathcal R$-refutation of~$\rest F \alpha$
is computable in
polynomial time from $\alpha$ and an $\mathcal R$-refutation of~$F$.
| {
"timestamp": "2008-12-05T11:34:34",
"yymm": "0811",
"arxiv_id": "0811.1075",
"language": "en",
"url": "https://arxiv.org/abs/0811.1075",
"abstract": "Resolution refinements called w-resolution trees with lemmas (WRTL) and with input lemmas (WRTI) are introduced. Dag-like resolution is equivalent to both WRTL and WRTI when there is no regularity condition. For regular proofs, an exponential separation between regular dag-like resolution and both regular WRTL and regular WRTI is given.It is proved that DLL proof search algorithms that use clause learning based on unit propagation can be polynomially simulated by regular WRTI. More generally, non-greedy DLL algorithms with learning by unit propagation are equivalent to regular WRTI. A general form of clause learning, called DLL-Learn, is defined that is equivalent to regular WRTL.A variable extension method is used to give simulations of resolution by regular WRTI, using a simplified form of proof trace extensions. DLL-Learn and non-greedy DLL algorithms with learning by unit propagation can use variable extensions to simulate general resolution without doing restarts.Finally, an exponential lower bound for WRTL where the lemmas are restricted to short clauses is shown.",
"subjects": "Logic in Computer Science (cs.LO); Computational Complexity (cs.CC)",
"title": "Resolution Trees with Lemmas: Resolution Refinements that Characterize DLL Algorithms with Clause Learning",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9626731126558705,
"lm_q2_score": 0.7371581684030623,
"lm_q1q2_score": 0.7096423484962763
} |
https://arxiv.org/abs/1807.11675 | The V-monoid of a weighted Leavitt path algebra | We compute the $V$-monoid of a weighted Leavitt path algebra of a row-finite weighted graph, correcting a wrong computation of the $V$-monoid that exists in the literature. Further we show that the description of $K_0$ of a weighted Leavitt path algebra that exists in the literature is correct (although the computation was based on a wrong $V$-monoid description). | \section{Introduction}
The weighted Leavitt path algebras (wLpas) were introduced by R. Hazrat in \cite{hazrat13}. They generalise the Leavitt path algebras (Lpas). While the Lpas only embrace Leavitt's algebras $L_K(1,1+k)$ where $K$ is a field and $k\geq 0$, the wLpas embrace all of Leavitt's algebras $L_K(n,n+k)$ where $K$ is a field, $n\geq 1$ and $k\geq 0$. In \cite{hazrat-preusser} linear bases for wLpas were obtained. They were used to classify the simple and graded simple wLpas and the wLpas which are domains. In \cite{preusser} the Gelfand-Kirillov dimension of a weighted Leavitt path algebra $L_K(E,w)$, where $K$ is a field and $(E,w)$ is a row-finite weighted graph, was determined. Further finite-dimensional wLpas were investigated. In \cite{preusser1} locally finite wLpas were investigated.
The $V$-monoid $V(R)$ of an associative, unital ring $R$ is the set of all isomorphism classes of finitely generated projective right $R$-modules, which becomes an abelian monoid by defining $[P]+[Q]:=[P\oplus Q]$ for any $[P],[Q]\in V(R)$. It can also be defined using matrices and the definition can be extended to include all associative rings, see Section 4. For an associative ring $R$ with local units, the Grothendieck group $K_0(R)$ is the group completion $V(R)^+$ of $V(R)$.
A presentation for $V(L_K(E,w))$ where $K$ is a field and $(E,w)$ is a row-finite weighted graph was given in \cite[Theorem 5.21]{hazrat13}. In \cite{hazrat-preusser} this presentation was used to show that there is a large class $C$ of wLpas that are domains but are neither isomorphic to an Lpa nor to a Leavitt algebra. Unfortunately, \cite[Theorem 5.21]{hazrat13} is wrong, as we will show in Section 4. In this paper we correct this result. It turns out that the statement of \cite[Theorem 5.21]{hazrat13} is true at least for row-finite weighted graphs $(E,w)$ with the property that for any vertex $v\in E^0$ all the edges in $s^{-1}(v)$ have the same weight. Further it turns out, surprisingly, that \cite[Theorem 5.23]{hazrat13}, which gives a presentation for the Grothendieck group $K_0(L_K(E,w))$ of a wLpa, is correct.
The rest of this paper is organised as follows. In Section 2 we recall some standard notation which is used throughout the paper. In Section 3 we recall the definition of a wLpa. In Section 4 we prove our main result Theorem \ref{thmm} and show that the description of $K_0(L_K(E,w))$ obtained in \cite{hazrat13} is correct. Moreover, we show that there is a class $D$, containing the class $C$ mentioned above, that consists of wLpas that are domains but neither are isomorphic to an Lpa nor to a Leavitt algebra. In the last section we determine the $V$-monoids of some concrete examples of wLpas.
\section{Notation}
Throughout the paper $K$ denotes a field. $\mathbb{N}$ denotes the set of positive integers and $\mathbb{N}_0$ the set of nonnegative integers. If $m,n\in\mathbb{N}$ and $R$ is a ring, then $\operatorname{\mathbb{M}}_{m\times n}(R)$ denotes the set of $m\times n$-matrices whose entries are elements of $R$. Instead of $\operatorname{\mathbb{M}}_{n\times n}(R)$ we might write $\operatorname{\mathbb{M}}_{n}(R)$.
\section{Weighted Leavitt path algebras}
\begin{definition}\label{defdg}
A {\it directed graph} is a quadruple $E=(E^0,E^1,s,r)$ where $E^0$ and $E^1$ are sets and $s,r:E^1\rightarrow E^0$ maps. The elements of $E^0$ are called {\it vertices} and the elements of $E^1$ {\it edges}. If $e$ is an edge, then $s(e)$ is called its {\it source} and $r(e)$ its {\it range}. $E$ is called {\it row-finite} if $s^{-1}(v)$ is a finite set for any vertex $v$ and {\it finite} if $E^0$ and $E^1$ are finite sets.
\end{definition}
\begin{definition}
A {\it weighted graph} is a pair $(E,w)$ where $E$ is a directed graph and $w:E^1\rightarrow \mathbb{N}$ is a map. If $e\in E^1$, then $w(e)$ is called the {\it weight} of $e$. $(E,w)$ is called {\it row-finite} (resp. {\it finite}) if $E$ is row-finite (resp. finite). In this article all weighted graphs are assumed to be row-finite. For a vertex $v\in E^0$ we set $w(v):=\max\{w(e)\mid e\in s^{-1}(v)\}$ with the convention $\max \emptyset=0$.
\end{definition}
\begin{remark}
In \cite{hazrat13} and \cite{hazrat-preusser}, $E^1$ was denoted by $E^{\operatorname{st}}$. The set $\{e_i \mid e\in E^1, 1\leq i\leq w(e)\}$ was denoted by $E^1$.
\end{remark}
\begin{definition}\label{def3}
Let $(E,w)$ be a weighted graph. The associative $K$-algebra presented by the generating set $E^0\cup \{e_i,e_i^*\mid e\in E^1, 1\leq i\leq w(e)\}$ and the relations
\begin{enumerate}[(i)]
\item $uv=\delta_{uv}u\quad(u,v\in E^0)$,
\item $s(e)e_i=e_i=e_ir(e),~r(e)e_i^*=e_i^*=e_i^*s(e)\quad(e\in E^1, 1\leq i\leq w(e))$,
\item $\sum\limits_{e\in s^{-1}(v)}e_ie_j^*= \delta_{ij}v\quad(v\in E^0,1\leq i, j\leq w(v))$ and
\item $\sum\limits_{1\leq i\leq w(v)}e_i^*f_i= \delta_{ef}r(e)\quad(v\in E^0,e,f\in s^{-1}(v))$
\end{enumerate}
is called the {\it weighted Leavitt path algebra (wLpa) of $(E,w)$} and is denoted by $L_K(E,w)$. In relations (iii) and (iv), we set $e_i$ and $e_i^*$ to zero whenever $i > w(e)$.
\end{definition}
\begin{example}\label{exex1}
If $(E,w)$ is a weighted graph such that $w(e)=1$ for all $e \in E^{1}$, then $L_K(E,w)$ is isomorphic to the usual Leavitt path algebra $L_K(E)$.
\end{example}
\begin{example}\label{wlpapp}
Let $n\geq 1$ and $k\geq 0$. If $(E,w)$ is a weighted graph with precisely one vertex and precisely $n+k$ edges each of which has weight $n$, then $L_K(E,w)$ is isomorphic to the Leavitt algebra $L_K(n,n+k)$, for details see \cite[Example 4]{hazrat-preusser}.
\end{example}
\begin{remark}
Let $(E,w)$ be a weighted graph. Then there is an involution $*$ on $L_K(E,w)$ mapping $x\mapsto x$, $v\mapsto v$, $e_i\mapsto e_i^*$ and $e_i^*\mapsto e_i$ for any $x\in K$, $v\in E^0$, $e\in E^1$ and $1\leq i\leq w(e)$, see \cite[Proof of Proposition 5.7]{hazrat13}. If $m,n\in\mathbb{N}$, then $*$ induces a map $\operatorname{\mathbb{M}}_{m\times n}(L_K(E,w))\rightarrow \operatorname{\mathbb{M}}_{n\times m}(L_K(E,w))$ mapping a matrix $\sigma$ to the matrix $\sigma^*$ one gets by transposing $\sigma$ and then applying the involution $*$ to each entry.
\end{remark}
\section{The $V$-monoid of a weighted Leavitt path algebra}
Consider the weighted graphs\\
\[
(E,w):\xymatrix@C+15pt{ u& v\ar[l]_{e,2}\ar[r]^{f,2}& x}\quad\text{ and }\quad(E,w'):\xymatrix@C+15pt{ u& v\ar[l]_{e,1}\ar[r]^{f,2}& x}.
\]\\
Set $L:=L_K(E,w)$ and $L':=L_K(E,w')$. According to \cite[Theorem 5.21]{hazrat13} it is true that
\[V(L)\cong V(L')\cong \mathbb{N}_0^{E^0}/\langle 2\alpha_v=\alpha_u+\alpha_x\rangle\]
where for any $y\in E^0$, $\alpha_y$ denotes the element of $\mathbb{N}_0^{E^0}$ whose $y$-component is one and whose other components are zero. Hence $V(L')$ is not a refinement monoid. But in \cite[Example 47]{preusser} it was shown that $L'\cong M_3(K)\oplus M_3(K)$. Hence $V(L')\cong \mathbb{N}_0^2$ and therefore $V(L')$ is a refinement monoid. In view of this contradiction, is \cite[Theorem 5.21]{hazrat13} wrong?
First we consider the algebra $L$. By the relations for the generators of a wLpa (see Definition \ref{def3}), the matrix $A:=\begin{pmatrix}e_1&f_1\\e_2&f_2\end{pmatrix}\in \operatorname{\mathbb{M}}_2(L)$ defines a ``universal'' (in the sense of ``as general as possible'') isomorphism $uL\oplus xL\rightarrow vL\oplus vL$ (by left multiplication). Set
\begin{align*}
B_0&:=K^{E^0}\text{ and }\\
B_1&:=B_0\langle i,i^{-1}:\overline {\alpha_uB_0\oplus\alpha_xB_0}\cong \overline {\alpha_vB_0\oplus\alpha_vB_0}\rangle
\end{align*}
(see \cite[p. 38]{bergman74}) where for any $y\in E^0$, $\alpha_y$ denotes the element of $B_0$ whose $y$-component is one and whose other components are zero. One checks easily that $L\cong B_1$. It follows from \cite[Theorem 5.2]{bergman74} that
\[V(L)\cong \mathbb{N}_0^{E^0}/\langle \alpha_u+\alpha_x=2\alpha_v\rangle\]
and hence \cite[Theorem 5.21]{hazrat13} yields a correct presentation for $V(L)$.
For the algebra $L'$ the situation is a bit different. By the relations for the generators of a wLpa, the matrix $A:=\begin{pmatrix}e_1&f_1\\0&f_2\end{pmatrix}\in \operatorname{\mathbb{M}}_2(L')$ defines an isomorphism $uL'\oplus xL'\rightarrow vL'\oplus vL'$, but this isomorphism is not universal since an entry of $A$ is zero. Since $A$ defines an isomorphism $uL'\oplus xL'\rightarrow vL'\oplus vL'$, we have $vL'\oplus vL'=P\oplus Q$ where $P=\operatorname{im} \begin{pmatrix}e_1\\0\end{pmatrix}$ and $Q=\operatorname{im}\begin{pmatrix}f_1\\f_2\end{pmatrix}$. But this splitting into two direct summands is not universal, since $P$ is already a direct summand of $vL'$. Instead $vL'$ universally breaks up into two direct summands, namely $vL'\cong P\oplus O$ where $O=\operatorname{im} \begin{pmatrix}f_1\\0\end{pmatrix}$. Clearly $e_1$ defines a universal isomorphism $uL'\cong P$ and $\begin{pmatrix}f_1\\f_2\end{pmatrix}$ defines a universal isomorphism $xL'\cong O \oplus vL'$. Hence we can describe $L'$ as follows. Set
\begin{align*}
B_0&:=K^{E^0},\\
B_1&:=B_0\langle\epsilon:\overline{\alpha_vB_0}\rightarrow\overline{\alpha_vB_0};\epsilon^2=\epsilon\rangle,\\
B_2&:=B_1\langle i,i^{-1}:\overline {\alpha_u B_1}\cong \overline {\ker \epsilon}\rangle\text{ and }\\
B_3&:=B_2\langle j,j^{-1}:\overline {\alpha_x B_2}\cong \overline {\operatorname{im} \epsilon\oplus \alpha_vB_2}\rangle
\end{align*}
(see \cite[pp. 38-39]{bergman74}). One checks easily that $L'\cong B_3$ (for details see Theorem \ref{thmm}, Part II). It follows from \cite[Theorems 5.1, 5.2]{bergman74} that
\[V(L')\cong \mathbb{N}_0^{E^0\cup\{p,q\}}/\langle \alpha_p+\alpha_q=\alpha_v,\alpha_u=\alpha_p,\alpha_x=\alpha_q+\alpha_v\rangle\cong \mathbb{N}_0^2.\]
The last isomorphism is obtained by using the relations to eliminate the generators $\alpha_v=\alpha_p+\alpha_q$, $\alpha_u=\alpha_p$ and $\alpha_x=\alpha_p+2\alpha_q$, which leaves the free abelian monoid on $\alpha_p$ and $\alpha_q$.
Thus \cite[Theorem 5.21]{hazrat13} indeed is wrong.
In this section we repair \cite[Theorem 5.21]{hazrat13}. Further we show that \cite[Theorem 5.23]{hazrat13}, which gives a presentation for $K_0(L_K(E,w))$ where $(E,w)$ is any weighted graph, is correct. In particular $K_0(L)\cong K_0(L')$ where $L$ and $L'$ are the wLpas defined above, while $V(L)\not\cong V(L')$.
We denote by $\operatorname{\mathcal{G}^w}$ the category whose objects are all weighted graphs and whose morphisms are the complete weighted graph homomorphisms between weighted graphs (see \cite[p. 884]{hazrat13}). Further we denote the category of associative $K$-algebras by $\operatorname{\mathcal{A}_K}$ and the category of abelian monoids by $\operatorname{\mathcal{M}^{ab}}$. We start by defining three functors, $L_K:\operatorname{\mathcal{G}^w}\rightarrow \operatorname{\mathcal{A}_K}$, $V:\operatorname{\mathcal{A}_K}\rightarrow \operatorname{\mathcal{M}^{ab}}$ and $M:\operatorname{\mathcal{G}^w}\rightarrow \operatorname{\mathcal{M}^{ab}}$. We will then show that $V\circ L_K\cong M$.
\begin{definition}
In Definition \ref{def3} we associated to any weighted graph $(E,w)$ an associative $K$-algebra $L_K(E,w)$. If $\phi:(E,w)\rightarrow (E',w')$ is a morphism in $\mathcal{G}^w$, then there is a unique $K$-algebra homomorphism $L_K(\phi):L_K(E,w)\rightarrow L_K(E',w')$ such that $L_K(\phi)(v)=\phi^0(v)$, $L_K(\phi)(e_i)=(\phi^1(e))_i$ and $L_K(\phi)(e_i^*)=(\phi^1(e))_i^*$ for any $v\in E^0$, $e\in E^1$ and $1\leq i \leq w(e)$. One checks easily that $L_K:\operatorname{\mathcal{G}^w} \rightarrow \operatorname{\mathcal{A}_K}$ is a functor that commutes with direct limits.
\end{definition}
\begin{definition}
Let $A$ be an associative $K$-algebra. Let $\operatorname{\mathbb{M}}_\infty(A)$ be the directed union of the rings $\operatorname{\mathbb{M}}_n(A)~(n\in\mathbb{N})$, where the transition maps $\operatorname{\mathbb{M}}_n(A)\rightarrow \operatorname{\mathbb{M}}_{n+1}(A)$ are given by $x\mapsto \begin{pmatrix}x&0\\0&0\end{pmatrix}$. Let $I(\operatorname{\mathbb{M}}_\infty(A))$ denote the set of all idempotent elements of $\operatorname{\mathbb{M}}_\infty(A)$. If $e, f\in I(\operatorname{\mathbb{M}}_\infty(A))$, write $e\sim f$ iff there are $x,y\in \operatorname{\mathbb{M}}_\infty(A)$ such that
$e = xy$ and $f = yx$. Then $\sim$ is an equivalence relation on $I(\operatorname{\mathbb{M}}_\infty(A))$. Let $V(A)$ be the set of all $\sim$-equivalence classes, which becomes an abelian monoid by defining
\[[e]+[f]=\left[\begin{pmatrix}e&0\\0&f\end{pmatrix}\right]\]
for any $[e],[f]\in V(A)$. If $\phi:A\rightarrow B$ is a morphism in $\operatorname{\mathcal{A}_K}$, let $V(\phi):V(A)\rightarrow V(B)$ be the canonical monoid homomorphism induced by $\phi$. One checks easily that $V:\operatorname{\mathcal{A}_K}\rightarrow \operatorname{\mathcal{M}^{ab}}$ is a functor that commutes with direct limits.
\end{definition}
\begin{remark}\label{rempro}
Let $A$ be an associative, unital $K$-algebra. Let $V'(A)$ denote the set of isomorphism classes of finitely generated projective right $A$-modules, which becomes an abelian monoid by defining $[P]+[Q]:=[P\oplus Q]$ for any $[P],[Q]\in V'(A)$. Then $V'(A)\cong V(A)$ as abelian monoids, see \cite[Definition 3.2.1]{abrams-ara-molina}.
\end{remark}
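A tiny worked instance of the relation $\sim$ (with matrices modeled over the integers in Python, purely for illustration): the idempotents $e=\operatorname{diag}(1,0)$ and $f=\operatorname{diag}(0,1)$ in $\operatorname{\mathbb{M}}_2(A)$ satisfy $e\sim f$, since $e = xy$ and $f = yx$ for suitable $x,y$, so both represent the same class in $V(A)$:

```python
def matmul(a, b):
    """Multiply matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

e = [[1, 0], [0, 0]]   # idempotent diag(1, 0)
f = [[0, 0], [0, 1]]   # idempotent diag(0, 1)
x = [[0, 1], [0, 0]]
y = [[0, 0], [1, 0]]
# x*y = e and y*x = f, witnessing e ~ f in M_2(A)
```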
\begin{definition}\label{defM}
Let $(E,w)$ be a weighted graph. For any $v\in E^0$ write $w(s^{-1}(v))=\{w_1(v),\dots,w_{k_v}(v)\}$ where $k_v\geq 0$ and $w_1(v)<\dots<w_{k_v}(v)$ (hence $k_v$ is the number of different weights of the edges in $s^{-1}(v)$). Further set $w_0(v):=0$ for any $v\in E^0$ (note that with this convention one has $w_{k_v}(v)=w(v)$ for any $v\in E^0$). Let $M(E,w)$ be the abelian monoid presented by the generating set $\{v,q_1^v,\dots,q^v_{k_v-1}\mid v\in E^0\}$ and the relations
\begin{equation}
q^v_{i-1}+(w_i(v)-w_{i-1}(v))v=q_i^v+\sum\limits_{\substack{e\in s^{-1}(v),\\w(e)=w_i(v)}}r(e)\quad\quad(v\in E^0,1\leq i\leq k_v)
\end{equation}
where $q^v_0=q^v_{k_v}=0$. If $\phi:(E,w)\rightarrow (E',w')$ is a morphism in $\mathcal{G}^w$, then there is a unique monoid homomorphism $M(\phi):M(E,w)\rightarrow M(E',w')$ such that $M(\phi)([v])=[\phi^0(v)]$ and $M(\phi)([q_i^v])=[q_i^{\phi^0(v)}]$ for any $v\in E^0$ and $1\leq i \leq k_v-1$. One checks easily that $M:\operatorname{\mathcal{G}^w}\rightarrow \operatorname{\mathcal{M}^{ab}}$ is a functor that commutes with direct limits.
\end{definition}
\begin{remark}
If $k_v\leq 1$ for any $v\in E^0$, then $M(E,w)$ is the abelian monoid $M_E$ defined in \cite[Theorem 5.21]{hazrat13}.
\end{remark}
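The defining relations of $M(E,w)$ can be generated programmatically, which is a useful sanity check. A Python sketch (the graph encoding as a list of triples $(s(e), r(e), w(e))$ is an illustrative choice), applied to the graph $(E,w')$ from the beginning of this section:

```python
from collections import Counter

def monoid_relations(vertices, edges):
    """Relations of M(E,w); edges is a list of (source, range, weight).
    Generators: vertex names and triples ('q', v, i); elements of the
    free abelian monoid are modeled as Counters."""
    rels = []
    for v in vertices:
        ws = sorted({w for (s, _, w) in edges if s == v})
        k = len(ws)
        q = lambda j: Counter() if j in (0, k) else Counter({('q', v, j): 1})
        prev = 0
        for i, wi in enumerate(ws, start=1):
            lhs = q(i - 1) + Counter({v: wi - prev})
            rhs = q(i) + Counter(r for (s, r, w) in edges
                                 if s == v and w == wi)
            rels.append((lhs, rhs))
            prev = wi
    return rels

rels = monoid_relations(['u', 'v', 'x'], [('v', 'u', 1), ('v', 'x', 2)])
# yields the relations  v = q_1^v + u  and  q_1^v + v = x
```

Eliminating generators via these two relations leaves the free abelian monoid on two generators, consistent with $V(L')\cong \mathbb{N}_0^2$.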
\begin{lemma}\label{lemmon}
Let $G$ be an abelian group (resp. an abelian monoid) presented by a generating set $X$ and relations
\[l_i=r_i~ (i\in I)\text{ and }y=\sum\limits_{x\in X\setminus\{y\}}n_xx\]
where for any $i\in I$, $l_i$ and $r_i$ are elements of the free abelian group (resp. the free abelian monoid) $G\langle X \rangle$ generated by $X$, $y$ is an element of $X$, the $n_x$ are integers (resp. nonnegative integers) and only finitely many of them are nonzero. Let $G\langle X\setminus\{y\}\rangle$ be the free abelian group (resp. the free abelian monoid) generated by $X\setminus \{y\}$ and $f:G\langle X \rangle\rightarrow G\langle X\setminus\{y\}\rangle$ the homomorphism which maps each $x\in X\setminus\{y\}$ to $x$ and $y$ to $\sum\limits_{x\in X\setminus\{y\}}n_xx$. Then $G$ is also presented by the generating set $X\setminus\{y\}$ and the relations $f(l_i)=f(r_i)~(i\in I)$.
\end{lemma}
\begin{proof}
Straightforward.
\end{proof}
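In the abelian-monoid case the lemma is a Tietze move that can be carried out mechanically. A minimal Python sketch (my own illustration; elements of the free abelian monoid are encoded as Counters over the generating set):

```python
from collections import Counter

def substitute(elem, y, repl):
    """Apply the homomorphism f of the lemma: replace every occurrence
    of the generator y in the free abelian monoid element `elem`
    (a Counter) by the element `repl`."""
    n = elem[y]
    out = Counter({x: c for x, c in elem.items() if x != y})
    if n:
        for x, c in repl.items():
            out[x] += n * c
    return out

def eliminate(relations, y, repl):
    """Drop the generator y using the relation y = repl, rewriting
    every relation l_i = r_i as f(l_i) = f(r_i)."""
    return [(substitute(l, y, repl), substitute(r, y, repl))
            for l, r in relations]
```

For instance, eliminating $v$ from $\langle u,v,q,x \mid v=q+u,\ q+v=x\rangle$ makes the first relation trivial and turns the second into $2q+u=x$; eliminating $x$ as well leaves no nontrivial relations, so the monoid is free on $\{u,q\}$.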
\begin{theorem}\label{thmm}
$V\circ L_K\cong M$. Moreover, if $(E,w)$ is finite, then $L_K(E,w)$ is left and right hereditary.
\end{theorem}
\begin{proof}
We have divided the proof into two parts, Part I and Part II. In Part I we define a natural transformation $\theta:M\rightarrow V\circ L_K$. In Part II we show that $\theta$ is a natural isomorphism and further that $L_K(E,w)$ is left and right hereditary provided that $(E,w)$ is finite.\\
\\
{\bf Part I}
Let $(E,w)$ be a weighted graph. Let $v\in E^0$ be a vertex that emits edges (i.e. $s^{-1}(v)\neq \emptyset$). Write $s^{-1}(v)=\{e^{1,v}, \dots, e^{n(v),v}\}$ where $w(e^{1,v})\leq\dots \leq w(e^{n(v),v})$. Let $A=A(v)\in\operatorname{\mathbb{M}}_{w(v)\times n(v)}(L_K(E,w))$ be the matrix whose entry at position $(i,j)$ is $e^{j,v}_i$ (we set $e^{j,v}_i:=0$ if $i>w(e^{j,v})$). By relations (iii) and (iv) in Definition \ref{def3} we have that
\begin{equation}
AA^*=\begin{pmatrix} v&& \\&\ddots &\\&&v\end{pmatrix}\in\operatorname{\mathbb{M}}_{w(v)}(L_K(E,w))
\end{equation}
and
\begin{equation}
A^*A=\begin{pmatrix} r(e^{1,v})&& \\&\ddots &\\&&r(e^{n(v),v})\end{pmatrix}\in\operatorname{\mathbb{M}}_{n(v)}(L_K(E,w)).
\end{equation}
As in Definition \ref{defM}, set $w_0(v):=0$ and write $w(s^{-1}(v))=\{w_1(v),\dots,w_{k_v}(v)\}$ where $w_1(v)<\dots<w_{k_v}(v)$. For any $0\leq l\leq k_v$ set $n_l(v):=|s^{-1}(v)\cap w^{-1}(\{w_0(v), \dots , w_l(v)\})|$ (note that $n_0(v)=0$ and $n_{k_v}(v)=n(v)$). For $0\leq l<t\leq k_v$ and $0\leq l'<t'\leq k_v$ let $A^{n_{l'},n_{t'}}_{w_{l},w_{t}}=A^{n_{l'},n_{t'}}_{w_{l},w_{t}}(v)\in\operatorname{\mathbb{M}}_{(w_{t}(v)-w_{l}(v))\times (n_{t'}(v)-n_{l'}(v))}(L_K(E,w))$ be the matrix whose entry at position $(i,j)$ is $e^{n_{l'}(v)+j,v}_{w_{l}(v)+i}$. Then $A$ has the block form
\[A=\begin{pmatrix}
A^{n_0,n_1}_{w_0,w_1}&A^{n_1,n_2}_{w_0,w_1}&\dots&A^{n_{k_v-1},n_{k_v}}_{w_0,w_1}\\
0&A^{n_1,n_2}_{w_1,w_2}&\dots&A^{n_{k_v-1},n_{k_v}}_{w_1,w_2}\\
0&0&\ddots&\vdots\\
0&0&0&A^{n_{k_v-1},n_{k_v}}_{w_{k_v-1},w_{k_v}}
\end{pmatrix}\]
and $A^*$ has the block form
\[A^*=\begin{pmatrix}
(A^{n_0,n_1}_{w_0,w_1})^*&0&0&0\\
(A^{n_1,n_2}_{w_0,w_1})^*&(A^{n_1,n_2}_{w_1,w_2})^*&0&0\\
\vdots&\vdots&\ddots&0\\
(A^{n_{k_v-1},n_{k_v}}_{w_0,w_1})^*&(A^{n_{k_v-1},n_{k_v}}_{w_1,w_2})^*&\hdots&(A^{n_{k_v-1},n_{k_v}}_{w_{k_v-1},w_{k_v}})^*
\end{pmatrix}.\]
For any $1\leq l \leq k_v-1$ set $\epsilon_l=\epsilon_l(v):=A^{n_{l},n_{k_v}}_{w_0,w_l}(A^{n_{l},n_{k_v}}_{w_0,w_l})^*\in \operatorname{\mathbb{M}}_{w_l(v)}(L_K(E,w))$.
It follows from equation (2) that
\begin{equation}
\epsilon_l=\begin{pmatrix} v&& \\&\ddots &\\&&v\end{pmatrix}-A^{n_{0},n_l}_{w_0,w_l}(A^{n_{0},n_l}_{w_0,w_l})^*.
\end{equation}
By equation (3) we have
\begin{equation}
(A^{n_{0},n_l}_{w_0,w_l})^*A^{n_{0},n_l}_{w_0,w_l}=\begin{pmatrix} r(e^{1,v})&& \\&\ddots &\\&&r(e^{n_l(v),v})\end{pmatrix}.
\end{equation}
Equations (4) and (5) imply that $\epsilon_l$ is an idempotent matrix for any $1\leq l\leq k_v-1$.\\
Let $F$ be the free abelian monoid generated by the set $\{v,q_1^v,\dots,q^v_{k_v-1}\mid v\in E^0\}$. There is a unique monoid homomorphism $\psi:F\rightarrow V(L_K(E,w))$ such that $\psi(v)=[(v)]$ and $\psi(q_l^v)=[\epsilon_l(v)]$ for any $v\in E^0$ and $1\leq l \leq k_v-1$. In order to show that $\psi$ induces a monoid homomorphism $M(E,w)\rightarrow V(L_K(E,w))$ we have to check that $\psi$ preserves the relations (1), i.e.
\begin{align}
&\psi(q^v_{l-1}+(w_l(v)-w_{l-1}(v))v)=\psi(q_l^v+\sum\limits_{\substack{e\in s^{-1}(v),\\w(e)=w_l(v)}}r(e))\nonumber
\\
\Leftrightarrow &\left[\begin{pmatrix}\epsilon_{l-1}(v)&&&\\&v&&\\&&\ddots&\\&&&v\end{pmatrix}\right]=\left[\begin{pmatrix}\epsilon_{l}(v)&&&\\&r(e^{n_{l-1}(v)+1,v})&&\\&&\ddots&\\&&&r(e^{n_l(v),v})\end{pmatrix}\right]
\end{align}
for any $v\in E^0$ and $1\leq l \leq k_v$ (where $\epsilon_{0}(v)$ and $\epsilon_{k_v}(v)$ are the empty matrix). Set
\[X_l(v):=\begin{pmatrix}\epsilon_l(v)& A^{n_{l-1},n_l}_{w_0,w_l}(v)\end{pmatrix}\text{ and }Y_l(v):=(X_l(v))^*=\begin{pmatrix}\epsilon_l(v)\\ (A^{n_{l-1},n_l}_{w_0,w_l}(v))^*\end{pmatrix}.\]
Clearly
\begin{align}
X_l(v)Y_l(v)&=\epsilon_l(v)+A^{n_{l-1},n_l}_{w_0,w_l}(v)(A^{n_{l-1},n_l}_{w_0,w_l}(v))^*.
\end{align}
Writing $A^{n_{0},n_l}_{w_0,w_l}(v)$ in block form
\[A^{n_{0},n_l}_{w_0,w_l}(v)=\begin{pmatrix}A^{n_{0},n_{l-1}}_{w_0,w_{l-1}}(v)&A^{n_{l-1},n_l}_{w_0,w_{l-1}}(v)\\0&A^{n_{l-1},n_{l}}_{w_{l-1},w_l}(v)\end{pmatrix}\]
it is easy to deduce from equation (4) that
\begin{equation}
\epsilon_l(v)=\begin{pmatrix}\epsilon_{l-1}(v)&&&\\&v&&\\&&\ddots&\\&&&v\end{pmatrix}-A^{n_{l-1},n_l}_{w_0,w_{l}}(v)(A^{n_{l-1},n_l}_{w_0,w_{l}}(v))^*.
\end{equation}
By equations (7) and (8) we have
\begin{equation*}
X_l(v)Y_l(v)=\begin{pmatrix}\epsilon_{l-1}(v)&&&\\&v&&\\&&\ddots&\\&&&v\end{pmatrix}.
\end{equation*}
On the other hand one checks easily that
\[Y_l(v)X_l(v)=\begin{pmatrix}\epsilon_{l}(v)&&&\\&r(e^{n_{l-1}(v)+1,v})&&\\&&\ddots&\\&&&r(e^{n_l(v),v})\end{pmatrix}\]
(note that $(A^{n_{l},n_{k_v}}_{w_0,w_l}(v))^*A^{n_{l-1},n_l}_{w_0,w_l}(v)=0$ by equation (3); hence $\epsilon_l(v)A^{n_{l-1},n_l}_{w_0,w_l}(v)=0$ and $(A^{n_{l-1},n_l}_{w_0,w_l}(v))^*\epsilon_l(v)=0$). Thus equation (6) holds for any $v\in E^0$ and $1\leq l \leq k_v$ and therefore $\psi$ induces a monoid homomorphism $\theta_{(E,w)}:M(E,w)\rightarrow V(L_K(E,w))$. It is an easy exercise to show that $\theta:M\rightarrow V\circ L_K$ is a natural transformation (note that for $v\in E^0$ and $1\leq l\leq k_v-1$, the matrix $\epsilon_l(v)$ does not depend on the weight-respecting order of the elements of $s^{-1}(v)$ chosen in the second line of Part I).\\
\\
{\bf Part II} We want to show that the natural transformation $\theta:M\rightarrow V\circ L_K$ defined in Part I is a natural isomorphism, i.e. that $\theta_{(E,w)}:M(E,w)\rightarrow V(L_K(E,w))$ is an isomorphism for any weighted graph $(E,w)$. By \cite[Lemma 5.19]{hazrat13} any weighted graph is a direct limit of a direct system of finite weighted graphs. Hence it is sufficient to show that $\theta_{(E,w)}$ is an isomorphism for any finite weighted graph $(E,w)$ (note that $M$, $V$ and $L_K$ commute with direct limits).\\
Let $(E,w)$ be a finite weighted graph. Set $B_0:=K^{E^0}$. We denote by $\alpha_v$ the element of $B_0$ whose $v$-component is $1$ and whose other components are $0$. Let $\{v_1, \dots, v_m\}$ be the elements of $E^0$ which emit edges. Let $1\leq t \leq m$ and assume that $B_{t-1}$ has already been defined. We define an associative $K$-algebra $B_t$ as follows. Set $C_{t,0}:=B_{t-1}$ and let $\beta^{t,0}:C_{t,0}\rightarrow C_{t,0}$ be the map sending any element to $0$. For $1\leq l \leq k_{v_t}-1$ define inductively $C_{t,l}:=C_{t,l-1}\langle \beta^{t,l}: \overline{O_{t,l}}\rightarrow \overline{O_{t,l}};(\beta^{t,l})^2=\beta^{t,l}\rangle $ (see \cite[p. 39]{bergman74}) where
\[O_{t,l}=\operatorname{im} (\beta^{t,l-1})\oplus \bigoplus\limits_{h=w_{l-1}(v_t)+1}^{w_l(v_t)} \alpha_{v_t}C_{t,l-1}.\]
Set $D_{t,0}:=C_{t,k_{v_t}-1}$. For $1\leq l \leq k_{v_t}-1$ define inductively $D_{t,l}:=D_{t,l-1}\langle \gamma^{t,l},(\gamma^{t,l})^{-1}:\overline{P_{t,l}}\cong\overline{Q_{t,l}}\rangle $ (see \cite[p. 38]{bergman74}) where
\[P_{t,l}=\bigoplus\limits_{h=n_{l-1}(v_t)+1}^{n_l(v_t)}\alpha_{r(e^{h,v_t})}D_{t,l-1}\text{ and }Q_{t,l}=\ker(\beta^{t,l}).\]
Finally define $B_t:=D_{t,k_{v_t}-1}\langle \gamma^{t,k_{v_t}},(\gamma^{t,k_{v_t}})^{-1}:\overline{P_{t,k_{v_t}}}\cong\overline{Q_{t,k_{v_t}}}\rangle $ where
\begin{align*}
&P_{t,k_{v_t}}=\bigoplus\limits_{h=n_{k_{v_t}-1}(v_t)+1}^{n_{k_{v_t}}(v_t)}\alpha_{r(e^{h,v_t})}D_{t,k_{v_t}-1}\text{ and }\\
&Q_{t,k_{v_t}}=\operatorname{im}(\beta^{t,k_{v_t}-1})\oplus\bigoplus\limits_{h=w_{k_{v_t}-1}(v_t)+1}^{w_{k_{v_t}}(v_t)} \alpha_{v_t}D_{t,k_{v_t}-1}.
\end{align*}
We will show that $L_K(E,w)\cong B_m$.\\
Investigating the proofs of \cite[Theorems 3.1, 3.2]{bergman74} we see that $B_{m}$ is presented by the generating set
\begin{align*}
X:=&\{\alpha_{v}\mid v\in E^0\}\cup\{\beta^{t,l}_{i,j}\mid 1\leq t \leq m, 1\leq l\leq k_{v_t}-1, 1\leq i,j\leq w_l(v_t)\}\\
&\cup \{\gamma^{t,l}_{i,j},(\gamma^{t,l}_{j,i})^*\mid 1\leq t \leq m, 1\leq l\leq k_{v_t}, 1\leq i\leq w_l(v_t), 1\leq j \leq n_l(v_t)-n_{l-1}(v_t)\}
\end{align*}
and the relations
\begin{center}
\begin{tabular}{r l l}
(i)&$\alpha_u\alpha_v=\delta_{uv}\alpha_u$&$(u,v\in E^0)$,\\
(ii)& $\operatorname{id}_{O_{t,l}}\beta^{t,l}=\beta^{t,l}=\beta^{t,l}\operatorname{id}_{O_{t,l}}$&$(1\leq t \leq m, 1\leq l\leq k_{v_t}-1),$\\
(iii)&$(\beta^{t,l})^2=\beta^{t,l}$&$(1\leq t \leq m, 1\leq l\leq k_{v_t}-1),$\\
(iv)&$\gamma^{t,l}\operatorname{id}_{P_{t,l}}=\gamma^{t,l}=\operatorname{id}_{Q_{t,l}}\gamma^{t,l}$&$(1\leq t \leq m, 1\leq l\leq k_{v_t})$,\\
(v)& $(\gamma^{t,l})^*\operatorname{id}_{Q_{t,l}}=(\gamma^{t,l})^*=\operatorname{id}_{P_{t,l}}(\gamma^{t,l})^*$&$(1\leq t \leq m, 1\leq l\leq k_{v_t}),$\\
(vi)&$\gamma^{t,l}(\gamma^{t,l})^*=\operatorname{id}_{Q_{t,l}}$&$(1\leq t \leq m, 1\leq l\leq k_{v_t}),$\\
(vii)&$(\gamma^{t,l})^*\gamma^{t,l}=\operatorname{id}_{P_{t,l}}$&$(1\leq t \leq m, 1\leq l\leq k_{v_t})$
\end{tabular}
\end{center}
where $\beta^{t,l}\in\operatorname{\mathbb{M}}_{w_l(v_t)}(K\langle X \rangle)$ (we denote by $K\langle X \rangle$ the free associative $K$-algebra generated by $X$) is the matrix whose entry at position $(i,j)$ is $\beta^{t,l}_{i,j}$, $\gamma^{t,l}\in\operatorname{\mathbb{M}}_{w_l(v_t)\times (n_l(v_t)-n_{l-1}(v_t))}(K\langle X \rangle)$ is the matrix whose entry at position $(i,j)$ is $\gamma^{t,l}_{i,j}$, $(\gamma^{t,l})^*\in\operatorname{\mathbb{M}}_{(n_l(v_t)-n_{l-1}(v_t))\times w_l(v_t)}(K\langle X \rangle)$ is the matrix whose entry at position $(i,j)$ is $(\gamma^{t,l}_{j,i})^*$ and further
\begin{align*}
\operatorname{id}_{O_{t,l}}&=\begin{pmatrix}
\beta^{t,l-1}&&&\\
&\alpha_{v_t}&&\\
&&\ddots&\\
&&&\alpha_{v_t}
\end{pmatrix}\in\operatorname{\mathbb{M}}_{w_l(v_t)}(K\langle X \rangle),\\
\operatorname{id}_{P_{t,l}}&=\begin{pmatrix}
\alpha_{r(e^{n_{l-1}(v_t)+1,v_t})}&&\\
&\ddots&\\
&&\alpha_{r(e^{n_{l}(v_t),v_t})}
\end{pmatrix}\in\operatorname{\mathbb{M}}_{n_{l}(v_t)-n_{l-1}(v_t)}(K\langle X \rangle),\\
\operatorname{id}_{Q_{t,l}}&=\operatorname{id}_{O_{t,l}}-\beta^{t,l}\in\operatorname{\mathbb{M}}_{w_l(v_t)}(K\langle X \rangle)\text{ if } l<k_{v_t}\text{ and }\\
\operatorname{id}_{Q_{t,k_{v_t}}}&=\begin{pmatrix}
\beta^{t,k_{v_t}-1}&&&\\
&\alpha_{v_t}&&\\
&&\ddots&\\
&&&\alpha_{v_t}
\end{pmatrix}\in\operatorname{\mathbb{M}}_{w_{k_{v_t}}(v_t)}(K\langle X \rangle)
\end{align*}
(we let $\beta^{t,0}$ be the empty matrix). Define a $K$-algebra homomorphism $\zeta:L_K(E,w)\rightarrow B_m$ by \\
\[\zeta(v)=\alpha_v~(v\in E^0),\quad\zeta(A_{w_0,w_l}^{n_{l-1},n_l}(v_t))=\gamma^{t,l},\quad\zeta((A_{w_0,w_l}^{n_{l-1},n_l}(v_t))^*)=(\gamma^{t,l})^*~(1\leq t\leq m,1\leq l\leq k_{v_t})\]\\
(meaning that each entry of $A_{w_0,w_l}^{n_{l-1},n_l}(v_t)$ (resp. $(A_{w_0,w_l}^{n_{l-1},n_l}(v_t))^*$) is mapped to the corresponding entry of $\gamma^{t,l}$ (resp. $(\gamma^{t,l})^*$)). Define a $K$-algebra homomorphism $\xi:B_m\rightarrow L_K(E,w)$ by\\
\begin{align*}
&\xi(\alpha_v)=v~(v\in E^0),\quad \xi(\beta^{t,l})=\epsilon_l(v_t)~(1\leq t\leq m,1\leq l\leq k_{v_t}-1)\\
&\xi(\gamma^{t,l})=A_{w_0,w_l}^{n_{l-1},n_l}(v_t), \quad\xi((\gamma^{t,l})^*)=(A_{w_0,w_l}^{n_{l-1},n_l}(v_t))^*~(1\leq t\leq m,1\leq l\leq k_{v_t}).
\end{align*}\\
We leave it to the reader to show that $\zeta$ and $\xi$ are well-defined and further $\xi\circ\zeta=\operatorname{id}_{L_K(E,w)}$ and $\zeta\circ\xi=\operatorname{id}_{B_m}$ (a hint: in order to show that $\zeta(\xi(\beta^{t,l}))=\beta^{t,l}$, it is convenient to use equation (8) and relation (vi) above). Thus $L_K(E,w)\cong B_m$.\\
By \cite[Theorems 5.1, 5.2]{bergman74}, the abelian monoid $V'(B_m)$ (see Remark \ref{rempro}) is presented by the generating set $\{v,p_1^v,\dots,p^v_{k_v-1},q_1^v,\dots,q^v_{k_v-1}\mid v\in E^0\}$ and the relations
\begin{enumerate}[(i)]
\item $q^v_{i-1}+(w_i(v)-w_{i-1}(v))v=q_i^v+p_i^v\quad(v\in E^0,1\leq i\leq k_v-1)$,
\item $p_i^v=\sum\limits_{\substack{e\in s^{-1}(v),\\w(e)=w_i(v)}}r(e)\quad(v\in E^0,1\leq i\leq k_v-1)$ and
\item $q^v_{k_v-1}+(w_{k_v}(v)-w_{k_v-1}(v))v=\sum\limits_{\substack{e\in s^{-1}(v),\\w(e)=w_{k_v}(v)}}r(e)\quad(v\in E^0)$
\end{enumerate}
where $q^v_0=0$. It follows from Lemma \ref{lemmon} that $M(E,w)\cong V'(B_m)\cong V'(L_K(E,w))\cong V(L_K(E,w))$. One checks easily that the monoid isomorphism $M(E,w)\rightarrow V(L_K(E,w))$ one gets in this way is precisely $\theta_{(E,w)}$. \\
Furthermore, the right global dimension of $B_m\cong L_K(E,w)$ is $\leq 1$ by \cite[Theorems 5.1, 5.2]{bergman74}, i.e. $L_K(E,w)$ is right hereditary. Since $L_K(E,w)$ is a ring with involution, we have $L_K(E,w)\cong L_K(E,w)^{op}$. Thus $L_K(E,w)$ is also left hereditary.
\end{proof}
\begin{corollary}\label{cornum}
Let $(E,w)$ be a weighted graph. If there is a vertex $v\in E^0$ such that $k_v>1$ (i.e. there are $e,f\in s^{-1}(v)$ such that $w(e)\neq w(f)$), then $|V(L_K(E,w))|=\infty$.
\end{corollary}
\begin{proof}
Let $v\in E^0$ be a vertex such that $k_v>1$. For any $n\in \mathbb{N}_0$ let $[nq_1^v]$ denote the equivalence class of $nq^v_1$ in $M(E,w)$. One checks easily that $[nq_1^v]=\{nq_1^v\}$. Hence the elements $[nq_1^v]~(n\in\mathbb{N}_0)$ are pairwise distinct in $M(E,w)$ (and therefore we have an embedding $\mathbb{N}_0\hookrightarrow M(E,w)$ defined by $n\mapsto [nq_1^v]$). It follows from Theorem \ref{thmm} that $|V(L_K(E,w))|=\infty$.
\end{proof}
In \cite[Section 4]{hazrat-preusser} it was ``proved'', using the false \cite[Theorem 5.21]{hazrat13}, that if $(E,w)$ is an LV-rose (see \cite[Definition 38]{hazrat-preusser}) such that the minimal weight is $2$, the maximal weight is $l\geq 3$ and the number of edges is $l+m$ for some $m > 0$, then the domain $L_K(E,w)$ is not isomorphic to any of the Leavitt algebras $L_K(n,n+k)$ where $n,k\geq 1$. Using Theorem \ref{thmm} we prove a stronger statement:
\begin{corollary}
Let $(E,w)$ be an LV-rose such that there are edges of different weights. Then $L_K(E,w)$ is a domain that is isomorphic as a $K$-algebra neither to a Leavitt path algebra $L_K(F)$ nor to a Leavitt algebra $L_K(n,n+k)$.
\end{corollary}
\begin{proof}
First we show that $L_K(E,w)$ is not isomorphic to a Leavitt path algebra. By \cite[Theorem 41]{hazrat-preusser}, $L_K(E,w)$ is a domain (i.e. a nonzero ring without zero divisors). It is well-known that if $F$ is a directed graph such that $L_K(F)$ is a domain, then $F$ is either the graph $\xymatrix{\bullet}$ and we have $L_K(F)\cong K$, or the graph $\xymatrix{\bullet\ar@(ur,dr)}$\hspace{0.5cm} and we have $L_K(F)\cong K[x,x^{-1}]$. In both cases we have $V(L_K(F))\cong\mathbb{N}_0$ by Example \ref{exex1} and Theorem \ref{thmm}. Assume that there is an isomorphism $\phi:\mathbb{N}_0\rightarrow M(E,w)$. One checks easily that if $q_1^v=a+b$ for some $a,b\in M(E,w)$, then $a=0$ and $b=q_1^v$ or vice versa. Hence $\phi(1)=q_1^v$. But then $\phi$ cannot be surjective (see the proof of the previous corollary). Hence $V(L_K(E,w))\not\cong \mathbb{N}_0$ and therefore $L_K(E,w)$ is not isomorphic to a Leavitt path algebra $L_K(F)$.\\
Next we show that $L_K(E,w)$ is not isomorphic to a Leavitt algebra $L_K(n,n+k)$ where $n\geq 1$ and $k\geq 0$. It follows from Example \ref{wlpapp} and Theorem \ref{thmm} that $V(L_K(n,n+k))\cong \mathbb{N}_0/\langle n=n+k\rangle$. If $k=0$, then $V(L_K(n,n+k))\cong \mathbb{N}_0$ and therefore $L_K(E,w)$ is not isomorphic to $L_K(n,n+k)$ by the previous paragraph. Suppose now that $k\geq 1$. Then $|V(L_K(n,n+k))|=n+k<\infty$. But by Corollary \ref{cornum}, $|V(L_K(E,w))|=\infty$. Hence $L_K(E,w)$ is not isomorphic to a Leavitt algebra $L_K(n,n+k)$.
\end{proof}
Now we consider $K_0$ of a wLpa. Let $(E,w)$ denote a weighted graph. Since $L_K(E,w)$ is clearly a ring with local units, $K_0(L_K(E,w))$ is the group completion $(V(L_K(E,w)))^+$ of the abelian monoid $V(L_K(E,w))$, see \cite[p. 77]{abrams-ara-molina}. By Theorem \ref{thmm}, $(V(L_K(E,w)))^+\cong (M(E,w))^+$. It follows from \cite[Equation (45)]{hazrat13} that $(M(E,w))^+$ is presented as abelian group by the generating set $\{v,q_1^v,\dots,q^v_{k_v-1}\mid v\in E^0\}$ and the relations
\begin{equation*}
q^v_{i-1}+(w_i(v)-w_{i-1}(v))v=q_i^v+\sum\limits_{\substack{e\in s^{-1}(v),\\w(e)=w_i(v)}}r(e)\quad\quad(v\in E^0,1\leq i\leq k_v)
\end{equation*}
where $q^v_0=q^v_{k_v}=0$. We can rewrite the relations above in the form
\begin{align*}
&q_i^v=q^v_{i-1}+(w_i(v)-w_{i-1}(v))v-\sum\limits_{\substack{e\in s^{-1}(v),\\w(e)=w_i(v)}}r(e)\quad\quad(v\in E^0,1\leq i\leq k_v).
\end{align*}
By successively applying Lemma \ref{lemmon} we get that $(M(E,w))^+$ is presented by the generating set $E^0$ and the relations
\[w(v)v=\sum\limits_{e\in s^{-1}(v)}r(e)\quad\quad(v\in E^0).\]
It follows that \cite[Theorem 5.23]{hazrat13} is correct!
\section{Examples}
\begin{example}
Consider the weighted graph\\
\[
(E,w):\xymatrix@C+15pt{ u& v\ar[l]_{e,1}\ar[r]^{f,2}& x}.
\]\\
As mentioned at the beginning of the previous section, $L_K(E,w)\cong \operatorname{\mathbb{M}}_3(K)\oplus \operatorname{\mathbb{M}}_3(K)$.
By Theorem \ref{thmm} and Lemma \ref{lemmon},
\[V(L_K(E,w))\cong \mathbb{N}_0^{\{u,v,q_1^v,x\}}/\langle \alpha_v=\alpha_{q_1^v}+\alpha_u,\alpha_{q_1^v}+\alpha_v=\alpha_x\rangle\cong \mathbb{N}_0^2.\]
\end{example}
\begin{example}
Consider the weighted graph
\[
(E,w):\xymatrix@C+15pt{ u& v\ar@/_1.7pc/[l]_{e,1}\ar@/^1.7pc/[l]^{f,2}}.
\]
Let $F$ be the directed graph
\[
F:\xymatrix@R+15pt@C+25pt{ u_1&u_2\ar[l]_{g}&u_3\ar@(dr,ur)_{j}\ar[l]_{h}\ar@/_2pc/[ll]_{i}\\ &v\ar@/_0.8pc/[u]_{e^{(2)}}\ar@/^0.8pc/[u]^{f}\ar[ul]^{e^{(1)}}\ar[ur]_{e^{(3)}}&}.
\]
There is a $*$-algebra isomorphism $L_K(E,w)\rightarrow L_K(F)$ mapping $u\mapsto \sum u_i$, $v\mapsto v$, $e_1 \mapsto \sum e^{(i)}$, $f_1\mapsto f$ and $f_2\mapsto fg+e^{(1)}i^*+e^{(2)}h^* +e^{(3)}j^*$.
By Theorem \ref{thmm} and Lemma \ref{lemmon},
\[V(L_K(E,w))\cong \mathbb{N}_0^{\{u,v,q_1^v\}}/\langle \alpha_v=\alpha_{q_1^v}+\alpha_u,\alpha_{q_1^v}+\alpha_v=\alpha_u\rangle\cong \mathbb{N}_0^2/\langle(1,0)=(1,2)\rangle.\]
\cite[Theorem 3.2.5]{abrams-ara-molina} (or Theorem \ref{thmm}, which generalises \cite[Theorem 3.2.5]{abrams-ara-molina}) yields the same result for $V(L_K(F))$.
\end{example}
\begin{example}
Consider the weighted graph\\
\[
(E,w):\xymatrix@C+15pt{ v\ar@(dl,ul)^{e,1}\ar@(dr,ur)_{f,2}}.
\]\\
By Theorem \ref{thmm},
\[V(L_K(E,w))\cong \mathbb{N}_0^{\{v,q_1^v\}}/\langle \alpha_v=\alpha_{q_1^v}+\alpha_v,\alpha_{q_1^v}+\alpha_v=\alpha_v\rangle\cong \mathbb{N}_0^2/\langle(1,0)=(1,1)\rangle.\]
Let $F$ be the directed graph\\
\[
F:\xymatrix@C+15pt{u\ar@(dl,ul)^{e}\ar[r]^{f}&v}.
\]\\
Its Leavitt path algebra $L_K(F)$ is called {\it algebraic Toeplitz $K$-algebra}, see \cite[Example 1.3.6]{abrams-ara-molina}. By \cite[Theorem 3.2.5]{abrams-ara-molina} we have $V(L_K(F))\cong V(L_K(E,w))$. But $\operatorname{GKdim} L_K(F)=2$ by \cite[Theorem 5]{zel12} while $\operatorname{GKdim} L_K(E,w)=\infty$ by \cite[Theorem 22]{preusser}. Hence $L_K(F)\not\cong L_K(E,w)$.
\end{example}
\begin{example}
Consider the LV-roses
\[(E,w):\xymatrix{
v \ar@(ur,dr)^{e,3} \ar@(dr,dl)^{f,3} \ar@(dl,ul)^{g,3}\ar@(ul,ur)^{h,3}&
}\quad \text{ and }\quad (E,w'):\xymatrix{
v \ar@(ur,dr)^{e,2} \ar@(dr,dl)^{f,3} \ar@(dl,ul)^{g,3}\ar@(ul,ur)^{h,3}&
}.
\]
By Theorem \ref{thmm},
\[V(L_K(E,w))\cong \mathbb{N}_0^{\{v\}}/\langle 3\alpha_v=4\alpha_v\rangle\]
and
\[V(L_K(E,w'))\cong \mathbb{N}_0^{\{v,q_1^v\}}/\langle 2\alpha_v=\alpha_{q_1^v}+\alpha_v,\alpha_{q_1^v}+\alpha_v=3\alpha_v\rangle.\]\\
Since the class of $L_K(E,w)$ in $V(L_K(E,w))$ (resp. of $L_K(E,w')$ in $V(L_K(E,w'))$) is $\alpha_v$ (see Remark \ref{rempro}), the module type of $L_K(E,w)$ is $(3,1)$ and the module type of $L_K(E,w')$ is $(2,1)$.
\end{example}
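The one-vertex monoids appearing above are quotients $\mathbb{N}_0/\langle n = n+k\rangle$, whose elements have an obvious canonical form. A small Python sketch (my own illustration, not part of the paper) reads off the module type from this canonical form:

```python
def canon(x, n, k):
    """Canonical form of x in N_0 / <n = n+k>: values below n are
    untouched; from n on, x is periodic with period k."""
    return x if x < n else n + (x - n) % k

def module_type(n, k):
    """Smallest pair (p, q) with p identified with p + q in
    N_0 / <n = n+k>, i.e. the module type: the least p, q with
    R^p isomorphic to R^{p+q} as free modules."""
    for p in range(1, n + k + 1):
        for q in range(1, k + 1):
            if canon(p, n, k) == canon(p + q, n, k):
                return (p, q)
```

For $(E,w)$ above, where $V(L_K(E,w)) \cong \mathbb{N}_0/\langle 3 = 4\rangle$, this gives module type $(3,1)$; for $\mathbb{N}_0/\langle 1 = 3\rangle$ (the monoid of the classical Leavitt algebra $L_K(1,3)$) it gives $(1,2)$.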
% arXiv:1807.11675 (math.RA, 2018), ``The V-monoid of a weighted Leavitt path algebra''
% arXiv:0910.0096, ``Gr\"{o}bner-Shirshov bases for Coxeter groups I''
\begin{abstract}
A conjecture on Gr\"{o}bner-Shirshov bases of arbitrary Coxeter groups was proposed by L.A. Bokut and L.-S. Shiao \cite{bs01}. In this paper, we give an example showing that the conjecture is not true in general. We list all possible nontrivial inclusion compositions that arise when dealing with general Coxeter groups. We give a Gr\"{o}bner-Shirshov basis of a Coxeter group without the nontrivial inclusion compositions mentioned above.
\end{abstract}
\section{Introduction}
Let $M=\|m_{ij}\|_{n\times n}$ be a symmetric $n\times n$ matrix
such that $m_{ii}=1,\ 2\leq m_{ij}\leq\infty$. The Coxeter group
$W=W(M)$ is defined by the generators $s_1,\cdots, s_n$ and the
defining relations $(s_is_j)^{m_{ij}}=1$.
A conjecture of Gr\"{o}bner-Shirshov basis of any Coxeter group has
proposed by L.A. Bokut and L.-S. Shiao \cite{bs01}.
Gr\"{o}bner-Shirshov bases of all finite Coxeter groups were given
in \cite{bs01, Lee, Sv}. As it is hypothesis, the conjecture is
true for any finite Coxeter group. In this paper, we give an example
to show that the above conjecture is not true in general. We list
all possible nontrivial inclusion compositions (four cases) when we
deal with the general cases of the Coxeter groups. We then give a
new conjecture and prove it is true in some cases. We give a
Gr\"{o}bner-Shirshov basis of a Coxeter group which is without
nontrivial inclusion compositions mentioned the above. We give some
examples of such Coxeter groups but not the finite Coxeter groups.
We will consider other cases in
another papers in the future.
\section{Preliminaries}
We first cite some concepts and results from the literature
\cite{Sh, b72, b76} which are related to Gr\"{o}bner-Shirshov bases
for associative algebras. The notion of a pre-Gr\"{o}bner-Shirshov
basis is new.
Let $X$ be a set and $F$ a field, $F\langle X\rangle$ the free
associative algebra over $F$ generated by $X$, and $X^*$ the free
monoid generated by $X$. A well ordering $<$ on $X^*$ is monomial if
for any $u, v\in X^*$,
$$
u < v \Rightarrow w_{1}uw_{2} < w_{1}vw_{2}, \ \text{ for all } \
w_{1}, \ w_{2}\in X^*.
$$
For any $u\in X^*$, denote by $|u|$ the length of $u$.
A standard example of a monomial ordering on $X^*$ is the deg-lex
ordering, which first compares two words by length and then
lexicographically, where $X$ is a well-ordered set.
Then, for any polynomial $f\in F\langle X\rangle$, $f$ has the
leading (maximal) word $\overline{f}$. We call $f$ {\it monic} if
the coefficient of $\overline{f}$ is 1.
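The deg-lex ordering and the leading word are easy to implement; the sketch below (my own illustration, with words encoded as strings over a given alphabet) merely fixes the conventions:

```python
def deg_lex_key(word, alphabet):
    """Sort key realizing the deg-lex ordering on X*: compare by
    length first, then lexicographically via the well order on
    X given by the string `alphabet`."""
    rank = {x: i for i, x in enumerate(alphabet)}
    return (len(word), [rank[c] for c in word])

def leading_word(poly, alphabet):
    """Leading (maximal) word of a polynomial, given as a dict
    word -> coefficient (zero coefficients are ignored)."""
    return max((w for w, c in poly.items() if c),
               key=lambda w: deg_lex_key(w, alphabet))
```

For example, with $x<y$ the leading word of $xy - yx + 5x$ is $yx$.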
Let $f,\ g\in F\langle X\rangle$ be two monic polynomials and $w\in
X^*$.
If $w=\overline{f}b=a\overline{g}$ for some $a,b\in X^*$ such that
$|\overline{f}|+|\overline{g}|>|w|$, then $(f,g)_w=fb-ag$ is called
the {\it intersection composition }of $f,g$ relative to $w$.
If $w=\overline{f}=a\overline{g}b$ for some $a, b\in X^*$, then
$(f,g)_w=f-agb$ is called the {\it inclusion composition} of $f,g$
relative to $w$. The transformation $f\mapsto f-agb$ is called the
elimination of leading word (ELW) of $g$ in $f$.
In $(f,g)_w$, $w$ is called the {\it ambiguity} of the composition.
Let $S\subset F\langle X\rangle$ be a monic set. A composition
$(f,g)_w$ is called trivial modulo $(S,w)$, denoted by
$$
(f,g)_w\equiv0 \ \ \ mod(S,w)
$$
if $(f,g)_w=\sum\alpha_ia_is_ib_i,$ where every $\alpha_i\in F, \
s_i\in S,\ a_i,b_i\in X^*$, and $a_i\overline{s_i} b_i<w$.
Generally, for $f,g\in F\langle X\rangle,\ f\equiv g \ \ \ mod(S,w)$
we mean $f-g=\sum\alpha_ia_is_ib_i,$ where every $\alpha_i\in F, \
s_i\in S,\ a_i,b_i\in X^*$, and $a_i\overline{s_i} b_i<w$.
Recall that $S$ is a {\it Gr\"{o}bner-Shirshov basis} if any
composition of polynomials from $S$ is trivial modulo $S$.
\ \
Let $f$ and $r_1$ be two polynomials. Then $f\mapsto f_1$ by ELW of
$r_1$ in $f$ means $f=\alpha_1a_1r_1b_1+f_1$ where $a_1,b_1\in X^*,\
\alpha_1\in F$ and $\bar f=a_1\overline{r_1} b_1$. Generally,
$f\mapsto f_1\mapsto\cdots \mapsto f_n\mapsto r$ means that $f=\sum
\alpha_ia_ir_ib_i+r$ where $\bar f=a_1\overline{r_1}
b_1>a_2\overline{r_2}b_2>\cdots
>a_n\overline{r_n}b_n>r$. If this is the case, we say that $f$ can be reduced to
$r$ via $\{r_1,\dots, r_n\}$.
Clearly, if $(f,g)_w$ can be reduced to zero by ELW of $S$, then
$(f,g)_w\equiv 0\ \ mod(S,w)$.
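In the semigroup case every defining polynomial has the form $u - v$ with $\bar f = u$, and ELW becomes string rewriting. A minimal Python sketch of this special case (my own illustration; it assumes every right-hand side is smaller than its left-hand side in a monomial ordering, so the loop terminates):

```python
def elw_reduce(word, rules):
    """Repeated elimination of leading words for semigroup relations:
    replace the leftmost occurrence of any key of `rules` (a leading
    word u) by the corresponding smaller word v, until no leading
    word occurs in `word`."""
    changed = True
    while changed:
        changed = False
        for i in range(len(word)):
            for lhs, rhs in rules.items():
                if word.startswith(lhs, i):
                    word = word[:i] + rhs + word[i + len(lhs):]
                    changed = True
                    break
            if changed:
                break
    return word
```

With relations of the form $s^2=1$ encoded as `"aa" -> ""` and `"bb" -> ""`, the word $abba$ reduces to the empty word, while $abab$ is already reduced.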
The following lemma was first proved
by Shirshov \cite{Sh} for free Lie algebras (with deg-lex ordering)
(see also Bokut \cite{b72}). Bokut \cite{b76} specialized the
approach of Shirshov to associative algebras (see also Bergman
\cite{b}). For commutative polynomials, this lemma is known as
Buchberger's Theorem (see \cite{bu65, bu70}).
\begin{lemma}\label{l1}
{\em (Composition-Diamond Lemma)} \ Let $F$ be a field, $A=F\langle
X|S\rangle=F\langle X\rangle/Id(S)$ and $<$ a monomial ordering on
$X^*$, where $Id(S)$ is the ideal of $F\langle X\rangle$ generated
by $S$. Then the following statements are equivalent:
\begin{enumerate}
\item[(1)] $S$ is a Gr\"{o}bner-Shirshov basis.
\item[(2)] $f\in Id(S)\Rightarrow \bar{f}=a\bar{s}b$
for some $s\in S$ and $a,b\in X^*$.
\item[(3)] $Irr(S) = \{ u \in X^* \mid u \neq a\bar{s}b,\ s\in S,\ a ,b \in X^*\}$
is an $F$-basis of the algebra $A=F\langle X| S \rangle$.
\end{enumerate}
\end{lemma}
If a subset $S$ of $F\langle X\rangle$ is not a Gr\"{o}bner-Shirshov
basis then one can add all nontrivial compositions of polynomials of
$S$ to $S$. Continuing this process repeatedly, we finally obtain a
Gr\"{o}bner-Shirshov basis $S^{comp}$ that contains $S$. Such a
process is called Shirshov algorithm.
A set $S$ is called {\it reduced Gr\"{o}bner-Shirshov basis} if it
is a Gr\"{o}bner-Shirshov basis and there are no inclusion
compositions in $S$.
A set $S$ is called {\it pre-Gr\"{o}bner-Shirshov basis} if there
exists a subset $R\subset F\langle X\rangle$ such that the following
conditions hold.
(i) $Id(R)=Id(S)$ and $R$ is a Gr\"{o}bner-Shirshov basis. $R$ is
called a Gr\"{o}bner-Shirshov basis related to $S$.
(ii) For any $r\in R$, there exists $s\in S$ with $|\bar s|=|\bar
r|$ such that either $r=s$ or there exists a finite sequence of
ELW's of $S\setminus\{s\}$, $ s=s_0\mapsto s_1\mapsto\cdots \mapsto
s_n=r$, i.e., $s$ can be reduced to $r$ via $S\setminus\{s\}$.
\begin{lemma}
Let $S\subset F\langle X\rangle$ be an effective set (meaning that
for any $n\geq 0$ one can list the set $S_n$ of all polynomials
$s\in S$ of degree at most $n$, and this set is finite). If $S$ is a
pre-Gr\"{o}bner-Shirshov basis, then the word problem is solvable
for the algebra $F\langle X| S\rangle=F\langle X\rangle/Id (S)$.
\end{lemma}
\textbf{Proof} Let $f\in F\langle X\rangle$ be a polynomial of
degree $n\geq 1$ and let $R$ be a Gr\"{o}bner-Shirshov basis related
to $S$. Then $f\in Id(S)$ iff $f$ reduces to $0$ by ELW's of $R$. So
we only need to know the set $R_n$ of all polynomials $r\in R$ of
degree at most $n$. By the definition of a
pre-Gr\"{o}bner-Shirshov basis, $R_n$ is obtained by ELW's of
polynomials from $S_n$. Since we know $S_n$, we can find
$R_n$ effectively. \hfill $\blacksquare$
\ \
Let $A=sgp\langle X|S\rangle$ be a semigroup presentation. Then $S$
is also a subset of $F\langle X \rangle$ and we can find a
Gr\"{o}bner-Shirshov basis
$S^{comp}$. We also call $S^{comp}$ a
Gr\"{o}bner-Shirshov basis of $A$. The set $Irr(S^{comp})=\{u\in
X^*\mid u\neq a\overline{f}b,\ a ,b \in X^*,\ f\in S^{comp}\}$ is a
linear basis of $F\langle X|S\rangle$ which is also the set of all
normal forms of elements of $A$.
\section{Gr\"{o}bner-Shirshov bases of Coxeter groups}
Let $\Sigma=\{\sigma_1, \cdots, \sigma_n\}$ be a finite set. Let
$M=(m_{ij})$ be a symmetric $n\times n$ matrix over the natural
numbers together with $\infty$, such that $m_{ii}=1,\ 2\leq
$m_{ij}\leq\infty$ for $i\neq j$. Such an $M$ is called a Coxeter
matrix. Now, we use $W$ to denote
$$
W=W(M)=sgp\langle \Sigma|(\sigma_i\sigma_j)^{m_{ij}}=1,\ 1\leq i,\
j\leq n,\ m_{ij}\neq\infty\rangle.
$$
$W$ is called the Coxeter group (see, for example, \cite{S02}) with
respect to the Coxeter matrix $M$.
We order $\Sigma^*$ by the deg-lex ordering, where
$\sigma_1<\dots<\sigma_n$.
For any $i,j\ \ (1\leq i,j\leq n)$, write
$m_{\sigma_{_i}\sigma_{_j}}=m_{ij}$. For any $s,s'\in \Sigma$ with
$m_{ss'}$ finite, we define the following notation:

\ \ \ \ \ \ $m(s,s')=ss'\cdots$ (there are $m_{ss'}$ alternating
letters $s,s'$),

\ \ \ \ \ $(m-i)(s,s')=ss'\cdots$ (there are $m_{ss'}-i$ alternating
letters $s,s'$, $1\leq i\leq m_{ss'}$). \\
With the above notation, the defining relations of W can be
presented in the following forms
\begin{eqnarray}
\label{e1}&&s^2=1 \\
\label{e2}&&m(s,s')=m(s',s),\ s>s'
\end{eqnarray}
for all $s,s'\in \Sigma$ and finite $m_{ss'}$.
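The alternating words $m(s,s')$ and $(m-i)(s,s')$ are straightforward to generate; a small Python helper (my own illustration, with generators encoded as characters) pins down the notation:

```python
def alt_word(s, sp, length):
    """The alternating word s s' s s' ... with `length` letters, so
    m(s,s') = alt_word(s, sp, m_{ss'}) and
    (m-i)(s,s') = alt_word(s, sp, m_{ss'} - i)."""
    return ''.join(s if j % 2 == 0 else sp for j in range(length))
```

For instance, with $m_{ss'}=5$ and $s>s'=t$, the defining relation (2) reads $m(s,t)=m(t,s)$, i.e. $ststs = tstst$.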
Define $s\rhd s'$ if $s>s'$ and $m_{ss'}=2$.
\begin{lemma} (\cite{bs01})\label{s.1}
In group $W$, we have
\begin{eqnarray}
\nonumber &&(m-1)(s_{0},s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})
m(s_{k+1},s'_{k+1})\\
\label{e3}&=&m(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})
\end{eqnarray}
where $k\geq 0,\ s_0,s'_0,\dots,s_{k+1}, s'_{k+1}\in \Sigma$
and for any $i, \ 0\leq i\leq k$
$s'_{i+1}= \left\{
\begin{array}{ll} s'_{i} \ \ \ \ \ \
\mbox{ if }\ m_{s_is'_{i}}\ \mbox{ is
even},\\
s_{i} \ \ \ \ \ \ \mbox{ if }\ m_{s_{i}s'_{i}}\ \mbox{ is odd}.
\end{array}\right.$
\end{lemma}
\textbf{Proof}\ Since
\begin{eqnarray*}
&&(m-1)(s_{0},s'_0)(m-1)(s_1,s'_1)\cdots (m-1)(s_{k},s'_{k})
m(s_{k+1},s'_{k+1})\\
&=&(m-1)(s_{0},s'_0)(m-1)(s_1,s'_1)\cdots (m-1)(s_{k},s'_{k})
m(s'_{k+1},s_{k+1})\\
&=&(m-1)(s_{0},s'_0)(m-1)(s_1,s'_1)\cdots m(s_{k},s'_{k})
(m-1)(s_{k+1},s'_{k+1})\\
&=&\cdots\\
&=&m(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1}),
\end{eqnarray*}
we obtain the result. \hfill $\blacksquare$
\ \
Denote by
$$
S=\{(\ref{e1}),(\ref{e2}),(3')\}
$$
where $(3')$ consists of all relations in (\ref{e3}) with the extra
properties
\begin{eqnarray}
\label{e4}&&s_0>s'_0,\ s_1<s'_1,\ \cdots,\ s_k<s'_k,\ s_{k+1}<s'_{k+1}\\
\label{e5}&&\{s_i,s'_i\}\neq\{s_{i+1},s'_{i+1}\},\ 0\leq i\leq k
\end{eqnarray}
It was conjectured in \cite{bs01} that a Gr\"{o}bner-Shirshov basis
of $W$ can be obtained from $S$ using only the commutation relations
of $W$ (the relations $m(s,s')=m(s',s)$ with $m_{ss'}=2$). The
following example shows that this conjecture is not true in general.
\begin{example}\label{ex1}
Let $\Sigma=\{s_1,s_2,s_3,s_4\}$ with $s_1<s_2<s_3<s_4$,
$M=(m_{ij})$ the $4\times4$ Coxeter matrix where
$m_{s_1s_2}=m_{s_2s_3}=m_{s_2s_4}=\infty,\ m_{s_1s_3}=3,\
m_{s_1s_4}=2,\ m_{s_3s_4}=5$ and $m_{s_is_i}=1,\ i=1,2,3,4$. Then
\begin{eqnarray*}
(\ref{e1})&=&\{s_i^2=1,\ i=1,2,3,4\},\\
(\ref{e2})&=&\{s_4s_1=s_1s_4,\
s_3s_1s_3=s_1s_3s_1,\ s_4s_3s_4s_3s_4=s_3s_4s_3s_4s_3\},\\
(3')&=&\{(m-1)(s_4,s_3)m(s_1,s_4)=m(s_3,s_4)(m-1)(s_1,s_4),\\
&&\
(m-1)(s_4,s_3)(m-1)(s_1,s_4)m(s_3,s_4)=m(s_3,s_4)(m-1)(s_1,s_4)(m-1)(s_3,s_4),\\
&&\
(m-1)(s_4,s_3)(m-1)(s_1,s_4)(m-1)(s_3,s_4)m(s_1,s_3)\\
&&=m(s_3,s_4)(m-1)(s_1,s_4)(m-1)(s_3,s_4)(m-1)(s_1,s_3)\}.
\end{eqnarray*}
A Gr\"{o}bner-Shirshov basis of $W$ is $(\ref{e1})\cup
(\ref{e2})\cup (3'')$, where
\begin{eqnarray*}
(3'')&=&\{(m-1)(s_4,s_3)m(s_1,s_4)=m(s_3,s_4)(m-1)(s_1,s_4),\\
&&
(m-3)(s_4,s_3)s_1s_4s_3s_1(m-1)(s_4,s_3)=m(s_3,s_4)(m-1)(s_1,s_4)(m-1)(s_3,s_4),\\
&&
(m-3)(s_4,s_3)s_1s_4s_3s_1(m-3)(s_4,s_3)s_1s_4(m-1)(s_3,s_1)\\
&&=m(s_3,s_4)(m-1)(s_1,s_4)(m-1)(s_3,s_4)(m-1)(s_1,s_3)\}
\end{eqnarray*}
which is obtained from $(3')$ by using the relations
$s_4s_1=s_1s_4$ and $s_3s_1s_3=s_1s_3s_1$. \hfill $\blacksquare$
\end{example}
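The rewriting claimed at the end of the example can be verified mechanically. The following Python sketch (with our own single-character encoding of the generators; an illustration, not part of the paper) checks that the second relation of $(3'')$ arises from the second relation of $(3')$ by one application of the braid relation $s_3s_1s_3=s_1s_3s_1$ followed by one application of the commutation $s_4s_1=s_1s_4$:

```python
# Encode s1 -> "1", s3 -> "3", s4 -> "4" (our own encoding, for
# illustration only).  With m_{s3 s4} = 5 and m_{s1 s4} = 2:
#   (m-1)(s4, s3) = "4343",  (m-1)(s1, s4) = "1",  m(s3, s4) = "34343".
lhs_3p = "4343" + "1" + "34343"      # LHS of the 2nd relation of (3')
w = lhs_3p.replace("313", "131", 1)  # apply s3 s1 s3 = s1 s3 s1 once
w = w.replace("41", "14", 1)         # apply s4 s1 = s1 s4 once
lhs_3pp = "43" + "1431" + "4343"     # (m-3)(s4,s3) s1 s4 s3 s1 (m-1)(s4,s3)
assert w == lhs_3pp                  # LHS of the 2nd relation of (3'')
```

Note that $s_3s_1s_3=s_1s_3s_1$ is not a commutative relation; this is the point of the example.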
\ \
We now state the following conjecture.
\ \
\noindent\textbf{Conjecture (L.A. Bokut):} The set of relations
(\ref{e1}),(\ref{e2}),(\ref{e3}) is a pre-Gr\"{o}bner-Shirshov basis
of $W$.
\ \
In this paper, we will show that the above new conjecture is true
when $M$ satisfies certain conditions.
\begin{theorem} \label{s.2}
Let $S=\{(\ref{e1}),(\ref{e2}),(3')\}$. If $S$ is a
pre-Gr\"{o}bner-Shirshov basis of $W$, then so is
$\{(\ref{e1}),(\ref{e2}),(\ref{e3})\}$.
\end{theorem}
\textbf{Proof} It suffices to show that for any
\begin{eqnarray*}
f&=&(m-1)(s_{0},s'_0)(m-1)(s_1,s'_1)\cdots (m-1)(s_{k},s'_{k})
m(s_{k+1},s'_{k+1})\\
&&-m(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})
\end{eqnarray*}
in $(\ref{e3})$
without property $(\ref{e4})$ or $(\ref{e5})$, $f$ has an
expression $f=\sum a_ir_ib_i$, where $r_i\in S$ and $a_i,b_i\in X^*$.
We prove this by induction on $k$.
For $k=0$,
$f=(m-1)(s_0,s'_0)m(s_1,s'_1)-m(s'_0,s_0)(m-1)(s_1,s'_1)$. There are
two cases to consider.
Case 1. $f$ is without property $(\ref{e4})$.
If $s_1>s'_1$, then
$$
f=(m-1)(s_0,s'_0)(m(s_1,s'_1)-m(s'_1,s_1))+(m(s_0,s'_0)-m(s'_0,s_0))(m-1)(s_1,s'_1).
$$
If $s_0<s'_0$, then
$$
f=-(m(s'_0,s_0)-m(s_0,s'_0))(m-1)(s_1,s'_1)+(m-1)(s_0,s'_0)(m(s_1,s'_1)-m(s'_1,s_1)).
$$
Case 2. $f$ is without property $(\ref{e5})$.
If $\{s_0,s'_0\}=\{s_1,s'_1\}$, then by ELW's of $s_0^2=1$ and
$s_0'^2=1$, $f\mapsto \cdots \mapsto
f_{m_{s_0s'_0}-1}=s'_0-s'_0=0$.
Thus the result is true for $k=0$.
For $k>0$, there are also two cases to consider.
Case 1. $f$ is without property $(\ref{e4})$.
If $s_{k+1}>s'_{k+1}$, then
$$
f=(m-1)(s_{0},s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})r_1+r_2(m-1)(s_{k+1},s'_{k+1})
$$
where $r_1=m(s_{k+1},s'_{k+1})-m(s'_{k+1},s_{k+1})\in (\ref{e2})$
and
$$r_2=(m-1)(s_{0},s'_0)(m-1)(s_1,s'_1)\cdots
m(s_{k},s'_{k})-m(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})\in (\ref{e3}).$$ By induction, $r_2$ is a
combination of relations in $(3')$. Then the result follows.
If $s_0'>s_0$, then
$$
f=-r_1(m-1)(s_1,s'_1)\cdots (m-1)(s_{k},s'_{k})
(m-1)(s'_{k+1},s_{k+1})+(m-1)(s_0,s'_0)r_2
$$
where $r_1=m(s'_0,s_0)-m(s_0,s'_0)\in (\ref{e2})$ and
$
r_2=(m-1)(s_1,s'_1)\cdots (m-1)(s_{k},s'_{k})
m(s'_{k+1},s_{k+1})-m(s'_1,s_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})
$
is in (\ref{e3}). By induction, the result follows.
If there exists $i,\ 0<i<k+1$ such that $s_i>s'_i,\ s_0>s'_0,\
s_{k+1}<s'_{k+1}$, then
$$
f=(m-1)(s_{0},s'_0)\cdots
(m-1)(s_{i-1},s'_{i-1})r_1+r_2(m-1)(s_{i},s'_{i})\cdots
(m-1)(s_{k+1},s'_{k+1})
$$
where $r_1=(m-1)(s_i,s'_i)\cdots
m(s_{k+1},s'_{k+1})-m(s'_i,s_i)\cdots (m-1)(s_{k+1},s'_{k+1})$,
$r_2=(m-1)(s_{0},s'_0)\cdots m(s_{i-1},s'_{i-1})-m(s'_0,s_0)\cdots
(m-1)(s_{i-1},s'_{i-1})$, and both of them are in (\ref{e3}). By
induction, the result follows.
Case 2. $f$ is without property $(\ref{e5})$.
Suppose that $f$ satisfies condition (\ref{e4}) and that
$\{s_i,s'_i\}=\{s_{i+1},s'_{i+1}\}$ for some $i,\ 0\leq i\leq k.$
If $i<k$, then by ELW's of $s_i^2=1$ and $s_i'^2=1$,
\begin{eqnarray*} f\mapsto\cdots &\mapsto&(m-1)(s_0,s'_0)\cdots
(m-1)(s_{i-1},s'_{i-1})(m-1)(s_{i+2},s'_{i+2})\cdots
m(s_{k+1},s'_{k+1})\\
&&-m(s_0',s_0)\cdots
(m-1)(s_{i-1},s'_{i-1})(m-1)(s_{i+2},s'_{i+2})\cdots(m-1)(s_{k+1},s'_{k+1})
\end{eqnarray*}
is in (\ref{e3}) since $s'_{i+2}$ is the second-to-last letter of
$(m-1)(s_{i+1},s'_{i+1})$ which, in fact, is $s'_i$. By induction,
the result follows.
If $i=k$ then by ELW's of $s_k^2=1$ and $s_k'^2=1$,
\begin{eqnarray*}
f\mapsto\cdots &\mapsto&(m-1)(s_0,s'_0)\cdots
m(s_{k-1},s'_{k-1})-m(s'_0,s_0)\cdots (m-1)(s_{k-1},s'_{k-1})
\end{eqnarray*}
is in (\ref{e3}). By induction, the result follows. \hfill
$\blacksquare$
\ \
We now deal with the inclusion compositions $(f,g)_w,\ \bar f=a\bar gb,
\ w=\bar f$, where $f\in (3')$ and $g\in (\ref{e2})\cup(3')$. We will prove
that in most cases they are trivial, with the exception of the six cases
in Theorems \ref{s.11}, \ref{s.14}, \ref{s.16} and \ref{s.17}.
\ \
\noindent {\bf Notation}:
\ \
We will fix two ``typical'' relations in $(3')$.
Let $f$ be a relation in $(3')$,
\begin{eqnarray}\label{e6}
&&f=u_0u_1\cdots u_ku_{k+1}y_{k+1}-s_0'u_0u_1\cdots u_ku_{k+1}=\bar f-f_0\\
\nonumber && u_i=(m-1)(s_i,s'_i),\\
\nonumber &&x_{i}\ \mbox{ the\ last\ letter\ of}\ (m-1)(s_i,s'_i),\\
\nonumber &&y_{i}\ \mbox{ the\ last\ letter\ of}\ m(s_i,s'_i),\
\ \ \ \ \ \ \ \ \ \ 0\leq i\leq k+1
\end{eqnarray}
where
$
\{x_i,s'_{i+1}\}=\{s_i,s'_i\},\ y_i=s'_{i+1},\
m(s_i,s'_i)=(m-1)(s_i,s'_i)s'_{i+1},\ \ \ \ \ 0\leq i\leq k.
$
Let $g$ be another relation in $(3')$,
\begin{eqnarray}\label{e7}
&&g=v_0v_1\cdots v_qv_{q+1}z_{q+1}-p'_0v_0v_1\cdots v_qv_{q+1}=\bar g-g_0\\
\nonumber && v_i=(m-1)(p_i,p'_i),\\
\nonumber &&t_{i}\ \mbox{ the\ last\ letter\ of}\ v_i,\\
\nonumber &&z_{i}\ \mbox{ the\ last\ letter\ of}\ m(p_i,p'_i),\
\ \ \ \ \ \ \ \ \ \ 0\leq i\leq q+1
\end{eqnarray}
where $\{t_i,p'_{i+1}\}=\{p_i,p'_i\},\ z_i=p'_{i+1},\
m(p_i,p'_i)=(m-1)(p_i,p'_i)p'_{i+1},\ \ \ \ \ \ 0\leq i\leq q$.
\ \
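The bookkeeping behind (\ref{e6}) and (\ref{e7}), namely $m(s_i,s'_i)=(m-1)(s_i,s'_i)y_i$ with $\{x_i,y_i\}=\{s_i,s'_i\}$, can be sanity-checked by a short Python sketch (again with a hypothetical alt_word helper; an illustration only):

```python
# alt_word is a hypothetical helper (not from the paper): the
# alternating word s s' s s' ... of the given length.
def alt_word(s, sp, length):
    return [s if i % 2 == 0 else sp for i in range(length)]

# Check, for small m_{ss'}, that m(s, s') = (m-1)(s, s') y with y the
# last letter of m(s, s'), and that {x, y} = {s, s'} where x is the
# last letter of (m-1)(s, s') -- the bookkeeping behind x_i and y_i.
for m in range(2, 9):
    full = alt_word("s", "t", m)       # m(s, s')
    trunc = alt_word("s", "t", m - 1)  # (m-1)(s, s')
    x, y = trunc[-1], full[-1]
    assert full == trunc + [y]
    assert {x, y} == {"s", "t"}
```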
In Lemmas and Theorems \ref{s.3}--\ref{s.14}, we always assume that
$f,g\in (3')$ with the forms (\ref{e6}), (\ref{e7}) respectively
and $\bar f=a\bar gb$ for some words $a,b$.
\begin{lemma}\label{s.3}
If $\bar f=a\bar g$, then $a=1$ and $f=g$.
\end{lemma}
\textbf{Proof}\ Since $y_{k+1}=z_{q+1}$ and $x_{k+1}=t_{q+1}$,
$u_{k+1}y_{k+1}=v_{q+1}z_{q+1}$. Since $x_{k}=t_{q}$ and
$y_k=s'_{k+1}=p'_{q+1}=z_{q}$, $u_{k}y_{k}=v_{q}z_{q}$. Similarly,
we have $u_{k-1}y_{k-1}=v_{q-1}z_{q-1},\cdots,u_0y_0=v_0z_0$. Then
$a=1$ and $\bar f=\bar g$.
Noting that $u_0\cdots u_{k+1}=v_0\cdots v_{q+1}$, in order to prove
$f=g$ it is sufficient to show that $s'_0=p'_0$. We use induction
on $k$.
If $k=0$, then $y_{1}=z_{q+1}$ and $x_{1}=t_{q+1}$. Then
$u_{1}y_{1}=v_{q+1}z_{q+1}$. Since $x_0=t_q$ and $s'_1=p'_{q+1}$,
$u_{0}y_0=v_qp'_{q+1}$. Then $q=k=0$ and $s'_0=p'_0$.
For $k>0$, we have $y_{k+1}=z_{q+1}$ and $x_{k+1}=t_{q+1}$,
$u_{k+1}y_{k+1}=v_{q+1}z_{q+1}$. Then $y_k=z_q$.
Let $h=u_0\cdots u_{k}y_k-s'_0u_0\cdots u_{k}$ and $h'= v_0\cdots
v_{q}z_{q}-p'_0v_0\cdots v_{q}$. Clearly, $\bar h=\bar h'$. Then by
induction, we have $s'_0=p'_0$. \hfill $\blacksquare$
\begin{lemma}\label{s.4}
If there exist $i,j$ such that $s_i=p_j$, $s_i'=p_j'$ and $u_i$ is a
subword of $\bar g$, then $\bar f=\bar g$.
\end{lemma}
\textbf{Proof}\ If $i=0$ then $j=0$ since $u_iy_i=v_jz_{j}$. Then
$s'_1=y_0=z_0=p'_1$. Since $\bar g$ is a subword of $\bar f$,
$s_1=p_1$ and $u_2y_2=v_2z_2$. Hence $u_iy_i=v_iz_{i}$ for any $i,\
1\leq i\leq k+1$. Then $\bar f=\bar g$.
If $i\neq 0$, then $j\neq 0$. Otherwise, we have
$p_0=s_i<s'_{i+1}=p'_0$, a contradiction. Then $x_{i-1}=t_{j-1}$.
Since $y_i=s'_{i+1}=p'_{j+1}=z_j$, $u_{i-1}y_i=v_{j-1}z_{j}$.
Similarly, we have
$u_{i-2}y_{i-2}=v_{j-2}z_{j-2},\cdots,u_0y_0=v_0z_0$ and $j=i$.
Also, $s_{i+1}=p_{i+1},\ s'_{i+1}=y_{i}=z_{i}=p'_{i+1}$ imply that
$u_{i+1}y_{i+1}=v_{i+1}z_{i+1}$. Therefore,
$u_{i+2}y_{i+2}=v_{i+2}z_{i+2},\cdots,u_{k+1}y_{k+1}=v_{k+1}z_{k+1}$.
\hfill $\blacksquare$
\ \
In what follows we assume that $\bar f\neq \bar g$.
\begin{lemma}\label{ls.1}
If there exists $i>0$ such that $\ |u_i|>1$, $\bar g=cu_id$, $\bar
f=acu_idb$, $ac=u_0\cdots u_{i-1}$ and $c=v_0\cdots v_{j-1}$, then
$u_i=v_jv_{j+1}\cdots v_n$ and $|v_j|=\cdots=|v_n|=1$.
Moreover, if $u_{i+1}$ is also a subword of $\bar g$, then $
u_{i+1}=v_{n+1}\cdots v_{l}$ such that $|v_j|=\cdots=|v_{l}|=1$.
\end{lemma}
\textbf{Proof} By Lemma \ref{s.4} and $\bar f\neq \bar g$, we have
$u_i\neq v_j$.
Since $v_j=(m-1)(s_i,p'_{j})$ and $u_i\neq v_j$, $p'_j\neq s'_{i}$.
Then $|v_j|=1$ and $v_{j+1}=(m-1)(s'_{i},p'_j)$. If $|v_{j+1}|>1$,
then $p'_j=s_{i+1}$ and $u_i=s_is'_{i}$. Now,
$s'_{i}<p'_j=s_{i+1}<s'_{i+1}=s_i$, a contradiction. Then
$|v_{j+1}|=1$. This shows that $u_i=v_jv_{j+1}\cdots v_n$ such that
$|v_j|=\cdots=|v_n|=1$.
If $u_{i+1}$ is also a subword of $\bar g$, we have
$v_{n+1}=(m-1)(s_{i+1},p'_j)$. If $|u_{i+1}|>1$, then by a similar
proof of the above, we have $u_{i+1}=v_{n+1}\cdots v_{l}$ such that
$|v_{n+1}|=\cdots =|v_{l}|=1$. If $|u_{i+1}|=1$ and $|v_{n+1}|>1$,
then $p'_j=s_{i+2}$,
$s'_{i+1}<s_{i+2}<s'_{i+2}\in\{s_{i+1},s'_{i+1}\}$, a contradiction.
Therefore, $|v_{n+1}|=1$ and $u_{i+1}=v_{n+1}$. \hfill
$\blacksquare$
\ \
\begin{lemma}\label{ls.2}
If there exist $i,i'\ (i'\geq 1)$ such that $u_i\cdots
u_l=v_{i'}\cdots v_{q+1}$, then $|u_i|=\cdots =|u_l|=1$.
\end{lemma}
\textbf{Proof}\ Suppose there exists a minimal $ j\ (i\leq j\leq l)$
such that $|u_j|>1$. We will show that $\bar g=cu_jd$, where
$c=v_0\cdots v_n,\ i'-1\leq n\leq q$. Otherwise, $s_j$ is a subword
of $v_n$. Then $v_n=(m-1)(s_{j-1},s_j)=s_{j-1}s_j$ and
$v_{n+1}=(m-1)(s'_{j},s_{j-1})\ (j> 1)$. So, $s'_{j}<s_{j-1}<s_j$, a
contradiction.
Then by Lemma \ref{ls.1}, we have $u_j=v_{n+1}\cdots v_{l'}$ such
that $|v_{n+1}|=\cdots=|v_{l'}|=1$.
Moreover, $u_{j+1}\cdots u_l=v_{l'+1}\cdots v_{q+1}$ such that
$|v_{l'+1}|=\cdots=|v_{q+1}|=1$. Then $z_{q+1}=s_{l+1}$ and there
exists $v_p\ (n+1\leq p\leq q+1)$ such that $s'_{l+1}=v_p<s_{l+1}$,
a contradiction. \hfill $\blacksquare$
\ \
\begin{lemma} \label{s.7}
If $\bar f=\bar gb$ with $b\neq 1$, then $|u_0|=1$ or $|u_0|=2$.
\end{lemma}
\textbf{Proof} If $|u_0|>2$, then $|v_0|=1$. Otherwise, by Lemma
\ref{s.4}, $\bar f=\bar g$, a contradiction. Clearly, $|v_1|=1$.
Then $p_2=s_{0}$ and $p_2<p'_2=p'_0<p_0=s_{0}$, a contradiction.
\hfill $\blacksquare$
\begin{lemma} \label{s.8}
Suppose that $\bar f=\bar gb=\bar
g(m-2)(s'_{l+1},s_{l+1})u_{l+2}\cdots u_{k+1}y_{k+1}$. Then
$|u_1|=\cdots =|u_l|=1$, $|v_0|=1$ and $(f,g)_{\bar f}\equiv 0$.
\end{lemma}
\textbf{Proof} There are two cases to consider.
Case 1. $u_0=s_0$.
We will show that $v_0=s_{0}$. Otherwise, $v_0=s_{0}s_1$. If
$|u_1|>1$, then $v_1=(m-1)(s'_1,s_0)=(m-1)(s'_0,s_0)=s'_0=s'_1$,
$u_1=s_1s'_1$ and $v_2\cdots v_{q+1}=u_2\cdots u_l$. By Lemma
\ref{ls.2}, we have $|u_2|=\cdots=|u_l|=1$. Then there exists
$s_{j}\in\{s_2,\cdots,s_{l+1}\}$ such that $s_{j}=s_{0}$, a
contradiction. Then $|u_1|=1$ and $v_1\cdots v_{q+1}=u_2\cdots u_l$.
By Lemma \ref{ls.2}, we have $|u_2|=\cdots=|u_l|=1$. This implies
that there exists $l+1\geq j>1$ such that $s_{j}=s_0$, a
contradiction.
Since $v_0=s_0$ and $v_1\cdots v_{q+1}=u_1\cdots u_l$, by Lemma
\ref{ls.2}, we have $|u_1|=\cdots=|u_l|=1$.
Suppose $p'_0=s_{j}$ where $s_{j}\in\{s_2,\cdots,s_{l+1}\}$. If
$j<l+1$, there exists an $i$ such that $|v_i|>1$ and so
$|u_{l+1}|=1$. By Lemma \ref{s.6}, $(f,g)_{\bar f}\equiv 0$.
If $j=l+1$, then $s_0\rhd s_{l+1}\rhd s_j$, $s_1\rhd s'_{l+1}\rhd
s_j$ for any $j,\ 1\leq j\leq l$ and
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& s_{l+1}s_0s_1\cdots
s_l(m-2)(s'_{l+1},s_{l+1})\cdots
m(s_{k+1},s'_{k+1})\\
&&-s'_0s_{l+1}s_0s_1\cdots s_l(m-2)(s'_{l+1},s_{l+1})\cdots
(m-1)(s_{k+1},s'_{k+1})\\
&\equiv&s_{l+1}s'_{l+1}s_0s_1\cdots
s_l(m-3)(s_{l+1},s'_{l+1})\cdots m(s_{k+1},s'_{k+1})\\
&&-s'_0s_{l+1}s'_{l+1}s_0s_1\cdots s_l(m-3)(s_{l+1},s'_{l+1})\cdots
(m-1)(s_{k+1},s'_{k+1})\\
&&\cdots \\
&\equiv& (m-1)(s_{l+1},s'_{l+1})s_0s_1\cdots
s_l(m-1)(s_{l+2},s'_{l+2})\cdots
m(s_{k+1},s'_{k+1})\\
&&-s'_0(m-1)(s_{l+1},s'_{l+1})s_0s_1\cdots
s_l(m-1)(s_{l+2},s'_{l+2})\cdots(m-1)(s_{k+1},s'_{k+1})\\
&\equiv& (m-1)(s_{l+1},s'_{l+1})s'_{l+2}s_0s_1\cdots
s_l(m-1)(s_{l+2},s'_{l+2})\cdots
(m-1)(s_{k+1},s'_{k+1})\\
&&-(m-1)(s_{l+1},s'_{l+1})s'_{l+2}s_0s_1\cdots
s_l(m-1)(s_{l+2},s'_{l+2})\cdots (m-1)(s_{k+1},s'_{k+1})\\
&\equiv&0
\end{eqnarray*}
since $s'_{l+1}=\cdots =s'_0$,
$(m-1)(s_{l+1},s'_{l+1})s'_{l+2}=m(s_{l+1},s'_{l+1})$ and
$h=s_0s_1\cdots s_lu_{l+2}\cdots
u_{k+1}y_{k+1}-s'_{l+2}s_0s_1\cdots s_lu_{l+2}\cdots
u_{k+1}$ is in (\ref{e3}) with property (\ref{e4}).
Case 2. $u_0=s_0s'_0$. Then $v_0=s_0,\ v_1=(m-1)(s'_0,p'_0)$. There
are two subcases to consider.
Subcase 1. $|v_1|>1$. Then $p'_0=s_1$ and $|u_1|=1$. If $|v_1|>2$,
then $s_2=s'_0,\ v_1=s'_0s_1s'_0$ and $u_2=s_2s'_2=s'_0s_0$. This
shows $v_2=(m-1)(s_0,s_1)$, a contradiction. Then $v_1=s'_0s_1$ and
$v_2\cdots v_{q+1}=u_2\cdots u_l$. By Lemma \ref{ls.2},
$|u_2|=\cdots=|u_l|=1$. Clearly, $s'_0\not\in
\{s_{2},\cdots,s_{l-1}\}$, otherwise, there exists $u_i\ (2\leq
i\leq l)$ such that $s_{i-1}=s'_0$ and $u_{i}=(m-1)(s'_0,s_0)$ which
contradicts $|u_i|=1$.
Then $|v_2|=\cdots=|v_{q+1}|=1$ and $s_{l+1}=s'_0$, $s'_{l+1}=s_0$.
Now,
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv &s_{1}s_0s'_0s_1\cdots
s_ls'_{l+1}u_{l+2}\cdots u_{k+1}y_{k+1}-s'_0s_{1}s_0s'_0s_1\cdots
s_ls'_{l+1}u_{l+2}\cdots
u_{k+1}\\
&\equiv&s_{1}s'_0s_0s'_0s_1\cdots s_lu_{l+2}\cdots
u_{k+1}y_{k+1}-s'_0s_{1}s'_0s_0s'_0s_1\cdots s_lu_{l+2}\cdots
u_{k+1}\\
&\equiv&s_{1}s'_0s_0s_1s'_0s_1\cdots s_lu_{l+2}\cdots
u_{k+1}-s'_0s_{1}s'_0s_0s'_0s_1\cdots s_lu_{l+2}\cdots u_{k+1}
\end{eqnarray*}
since $h=s'_0s_1\cdots s_lu_{l+2}\cdots
u_{k+1}y_{k+1}-s_1s'_0s_1\cdots s_lu_{l+2}\cdots u_{k+1}$ is in
(\ref{e3}) with property (\ref{e4}) and $s'_{l+2}=s_{l+1}=s'_0$.
Since $s_0\rhd s_1$, $m_{s_1s'_0}=3$ and $s_1>s'_0$, we have
$s_{1}s'_0s_0s_1\mapsto s_{1}s'_0s_1s_0\mapsto s'_0s_1s'_0s_0$ and
hence $(f,g)_{\bar f}\equiv 0$.
Subcase 2. $|v_1|=1$. Then $v_2\cdots v_{q+1}=u_1\cdots u_l$. By
Lemma \ref{ls.2}, we have $|u_1|=\cdots=|u_l|=1$.
Suppose $p'_0=s_{j}$ where $s_{j}\in \{s_1,\cdots,s_{l+1}\}$. If
$j<l+1$, then $|u_{l+1}|=1$. If $j=l+1$, we have $|u_{l+1}|=1$ since
$s_0\rhd s_{l+1}$. Then
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv &s_{j}s_0s'_0s_1\cdots s_lu_{l+2}\cdots
u_{k+1}y_{k+1}-s'_0s_{j}s_0s'_0s_1\cdots s_lu_{l+2}\cdots
u_{k+1}\\
&\equiv&s_{j}s'_0s_0s'_0s_1\cdots s_lu_{l+2}\cdots
u_{k+1}-s'_0s_{j}s_0s'_0s_1\cdots s_lu_{l+2}\cdots u_{k+1} \\
&\equiv&s'_0s_{j}s_0s'_0s_1\cdots s_lu_{l+2}\cdots
u_{k+1}-s'_0s_{j}s_0s'_0s_1\cdots s_lu_{l+2}\cdots u_{k+1}\\
&\equiv& 0
\end{eqnarray*}
since $s_0s'_0s_1\cdots s_lu_{l+2}\cdots
u_{k+1}y_{k+1}-s'_0s_0s'_0s_1\cdots s_lu_{l+2}\cdots u_{k+1}$ is in
(\ref{e3}). \hfill $\blacksquare$
\ \
\begin{lemma} \label{s.6}
If $|u_i|=\cdots =|u_{l+1}|=1$ and $u_i\cdots u_{l+1}=\bar g$, then
$(f,g)_{\bar f}\equiv 0$.
\end{lemma}
\textbf{Proof} Clearly, $\bar g=u_i\cdots u_{l+1}\mapsto
u_ju_i\cdots u_{l}=g_0$ for some $j,\ i<j\leq l+1$.
If $i=0$, then
$$
(f,g)_{\bar f}\equiv u_ju_0\cdots u_lu_{l+2}\cdots
u_{k+1}y_{k+1}-u_js'_0u_i\cdots u_lu_{l+2}\cdots u_{k+1}\equiv s_jh
$$
where $h=u_0\cdots u_lu_{l+2}\cdots u_{k+1}y_{k+1}-s'_0u_0\cdots
u_lu_{l+2}\cdots u_{k+1}$ is in (\ref{e3}) with property (\ref{e4})
and $s_j\bar h<\bar f$. By Theorem \ref{s.2}, the result follows.
If $i>0$, then
$$
(f,g)_{\bar f}\equiv u_0\cdots u_{i-1}u_ju_i\cdots u_lu_{l+2}\cdots
u_{k+1}y_{k+1}-s'_0u_0\cdots u_{i-1}u_ju_i\cdots u_lu_{l+2}\cdots
u_{k+1}\triangleq h
$$
where $h$ is in (\ref{e3}) with property (\ref{e4}) and $\bar
h<\bar f$. By Theorem \ref{s.2}, the result follows. \hfill
$\blacksquare$
\ \
The following lemmas deal with the case $\bar f=a\bar g b$,
$a\neq 1,\ b\neq 1$.
\ \
In Lemmas and Theorems \ref{s.9}--\ref{s.15}, $i$ and $l$ are fixed
such that $0\leq i<l\leq k$, $u_0\cdots u_{i-1}=1$ if $i=0$ and
$u_{l+2}\cdots u_{k+1}=1$ if $l=k$.
\ \
\begin{lemma} \label{s.9}
If $\bar f=u_0\cdots u_{i-1}(m-2)(s_i,s'_i)\bar
g(m-2)(s'_{l+1},s_{l+1})u_{l+2}\cdots u_{k+1} y_{k+1}$, then
$|u_{i+1}|=\cdots =|u_l|=1$.
\end{lemma}
\textbf{Proof} There are three cases to consider.
Case 1. $v_0=x_{i}$. Then $v_1\cdots v_{q+1}=u_{i+1}\cdots u_{l}$.
By Lemma \ref{ls.2}, we have $|u_{i+1}|=\cdots=|u_l|=1$.
Case 2. $v_0=(m-1)(x_{i},s_{i+1})$ and $|v_0|>2$. Then $|u_{i+1}|=1$
and $s_{i+2}=x_{i}$. If $|u_i|>1$, then $v_0=x_{i}s_{i+1}x_{i}$ and
$v_1=(m-1)(s'_{i+2},s_{i+1})$, where $s'_{i+2}=s'_{i+1}<s_{i+1}$, a
contradiction. Then $|u_i|=1$ and $v_0=u_iu_{i+1}\cdots u_j$ such
that $|u_i|=\cdots=|u_j|=1$ for some $j$. Then $v_2\cdots
v_{q+1}=u_{j+1}\cdots u_l$ and by Lemma \ref{ls.2},
$|u_{j+1}|=\cdots =|u_l|=1$. Moreover, $|u_{l+1}|=1$.
Case 3. $v_0=(m-1)(x_{i},s_{i+1})$ and $|v_0|=2$, i.e.,
$v_0=x_{i}s_{i+1}$. If $|u_{i+1}|>1$, we have
$v_1=(m-1)(s'_{i+1},x_{i})$. If $i=0$, then $x_{0}=s_{0}$ and $
s'_1=s'_0$. If $|v_1|>1$, then $s_{2}=x_{0}=s_{0}$, a contradiction.
Then $|u_0|=|v_1|=1, p_0=u_0$, a contradiction. Then $i>0$.
Moreover, $s'_{i+1}=s_{i}$, $m_{s_{i}s'_{i}}$ is odd,
$x_{i}=s_{i+2}$ and $u_{i+1}=s_{i+1}s'_{i+1}$. Then
$s'_{i+2}=s_{i+1}$ and $s'_{i+1}<s_{i+2}<s'_{i+2}=s_{i+1}$, also a
contradiction. Thus $|u_{i+1}|=1$ and $u_{i+2}\cdots u_l=v_1\cdots
v_{q+1}$. By Lemma \ref{ls.2}, we have $|u_{i+2}|=\cdots=|u_l|=1$.
\hfill $\blacksquare$
\begin{lemma} \label{s.10}
If $\bar f=u_0\cdots u_{i-1}(m-2)(s_i,s'_i)\bar
g(m-2)(s'_{l+1},s_{l+1})u_{l+2}\cdots u_{k+1} y_{k+1}$, and either
$|u_i|= 1$ or $|u_{l+1}|= 1$, then $(f,g)_{\bar f}\equiv 0$.
\end{lemma}
\textbf{Proof} There are two cases to consider.
Case 1. $|u_i|=1$. Suppose $p'_0=s_j$. Then $g=s_i\cdots s_{l+1}-
s_js_i\cdots s_l$.
If $j=l+1$, i.e., $p'_0=s_{l+1}$, then we have $s'_{l+1}\rhd s_i\rhd
s_{l+1},\ s_{l+1}\rhd s_n,\ s'_{l+1}\rhd s_n$ for all $n,\ i+1\leq
n\leq l$, and
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& u_0\cdots u_{i-1}s_{l+1}s_i\cdots
s_l(m-2)(s'_{l+1},s_{l+1})\cdots u_{k+1}y_{k+1}\\
&&-s'_0u_0\cdots u_{i-1}s_{l+1}s_i\cdots
s_l(m-2)(s'_{l+1},s_{l+1})\cdots u_{k+1}\\
&\equiv& u_0\cdots u_{i-1}s_{l+1}s'_{l+1}s_i\cdots
s_l(m-3)(s_{l+1},s'_{l+1})\cdots u_{k+1}y_{k+1}\\
&&-s'_0u_0\cdots u_{i-1}s_{l+1}s'_{l+1}s_i\cdots
s_l(m-3)(s_{l+1},s'_{l+1})\cdots u_{k+1}\\
&\equiv& u_0\cdots u_{i-1}u_{l+1}s_i\cdots
s_lu_{l+2}\cdots u_{k+1}y_{k+1}\\
&&-s'_0u_0\cdots u_{i-1}u_{l+1}s_i\cdots
s_lu_{l+2}\cdots u_{k+1}\\
&\equiv& 0
\end{eqnarray*}
since $u_0\cdots u_{i-1}u_{l+1}u_i\cdots u_{l}u_{l+2}\cdots
u_{k+1}y_{k+1}-s'_0u_0\cdots u_{i-1}u_{l+1}u_i\cdots
u_{l}u_{l+2}\cdots u_{k+1}$ is in (\ref{e3}).
If $j<l+1$, then there exists $i'$ such that $|v_{i'}|>1$ which
implies $|u_{l+1}|=1$. Then by Lemma \ref{s.6}, $(f,g)_{\bar
f}\equiv 0$.
Case 2. $|u_{l+1}|=1$ and $|u_i|\neq 1$. Suppose $p'_0=s_j$. Then
$x_i\rhd s_j\rhd s_{i+1},\cdots,s_{j-1}$, $s'_{i+1}=s'_{j+1}\rhd
s_j$ and
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& u_0\cdots
u_{i-1}(m-2)(s_i,s'_i)s_{j}x_i\cdots
s_lu_{l+2}\cdots u_{k+1}y_{k+1}\\
&&-s'_0u_0\cdots u_{i-1}(m-2)(s_i,s'_i)s_{j}x_i\cdots
s_lu_{l+2}\cdots u_{k+1}.
\end{eqnarray*}
Since $(m-2)(s_i,s'_i)s_{j}\mapsto\cdots\mapsto s_j(m-2)(s_i,s'_i)$,
we have
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& u_0\cdots
u_{i-1}s_{j}(m-1)(s_i,s'_i)s_{i+1}\cdots
s_lu_{l+2}\cdots u_{k+1}y_{k+1}\\
&&-s'_0u_0\cdots u_{i-1}s_{j}(m-1)(s_i,s'_i)s_{i+1}\cdots
s_lu_{l+2}\cdots u_{k+1}\\
&\equiv& 0
\end{eqnarray*}
since $u_0\cdots u_{i-1}u_ju_iu_{i+1}\cdots u_{l}u_{l+2}\cdots
u_{k+1}y_{k+1}-s'_0u_0\cdots u_{i-1}u_{j}u_is_{i+1}\cdots
u_{l}u_{l+2}\cdots u_{k+1}$ is in (\ref{e3}). \hfill $\blacksquare$
\ \
\begin{theorem} \label{s.11}
Suppose that $\bar f=u_0\cdots u_{i-1}(m-2)(s_i,s'_i)\bar
g(m-2)(s'_{l+1},s_{l+1})u_{l+2}\cdots u_{k+1} y_{k+1}$, $|u_i|> 1$
and $|u_{l+1}|>1$. Then one of the following holds:
\begin{enumerate}
\item[(i)] $|v_n|=1$ for all $n,\ 0\leq n\leq q+1$
and
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& u_0\cdots
u_{i-1}(m-2)(s_i,s'_i)s_{l+1}x_is_{i+1}\cdots
s_l(m-2)(s'_{l+1},s_{l+1})u_{l+2}\cdots u_{k+1}y_{k+1}\\
&&-s'_0u_0\cdots u_{k+1}.
\end{eqnarray*}
\item[(ii)] $|v_0|=2,\ |v_n|=1$ for all $n,\ 1\leq n\leq q+1$ and
if $m_{s_is'_i}=3$, then $(f,g)_{\bar f}\equiv 0$;
if $m_{s_is'_i}>3$, then $s_{l+1}=x_i$, $s'_{l+1}=s'_{i+1}=y_i$ and
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& u_0\cdots
u_{i-1}(m-3)(s_i,s'_i)s_{i+1}s'_{i+1}x_is_{i+1}\cdots
s_l(m-2)(s'_{l+1},s_{l+1})u_{l+2}\cdots u_{k+1} y_{k+1}\\
&&-s'_0u_0\cdots u_{k+1}.
\end{eqnarray*}
\end{enumerate}
\end{theorem}
\textbf{Proof} By Lemma \ref{s.9}, we have $|u_{i+1}|=\cdots
=|u_{l}|=1$ and so $\bar g=x_is_{i+1}\cdots s_{l}s_{l+1}$. There are
two cases to consider.
Case 1. $v_0=x_i$. Then $|v_n|=1$ for all $n,\ 1\leq n\leq q+1$.
Otherwise, $z_{q+1}=s_{l+1}\in \{s_{i+1},\cdots, s_l\}$ which shows
$|u_{l+1}|=1$, a contradiction. Then
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& u_0\cdots
u_{i-1}(m-2)(s_i,s'_i)s_{l+1}x_is_{i+1}\cdots
s_l(m-2)(s'_{l+1},s_{l+1})u_{l+2}\cdots u_{k+1} y_{k+1}\\
&&-s'_0u_0\cdots u_{k+1}.
\end{eqnarray*}
Case 2. $v_0=x_is_{i+1}$. Then $|v_n|=1$ for all $n,\ 1\leq n\leq
q+1$. Otherwise, we have $s_j=p_0=x_i$ for some $j\ \ (i+1<j<l+1)$.
Then $u_j=(m-1)(x_i,s'_{i+1})$ and $|u_j|=|u_i|>1$, a contradiction.
Therefore $z_{q+1}=x_i=s_{l+1}$, $u_{l+1}=(m-1)(x_i,s'_{i+1})$ and
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& u_0\cdots
u_{i-1}(m-2)(s_i,s'_i)s_{i+1}x_is_{i+1}\cdots
s_l(m-2)(s'_{l+1},s_{l+1})u_{l+2}\cdots u_{k+1}y_{k+1}\\
&&-s'_0u_0\cdots u_{k+1}\\
&\equiv&
u_0\cdots u_{i-1}(m-3)(s_i,s'_i)s_{i+1}s'_{i+1}x_is_{i+1}\cdots
s_l(m-2)(s'_{l+1},s_{l+1})u_{l+2}\cdots u_{k+1} y_{k+1}\\
&&-s'_0u_0\cdots u_{k+1}.
\end{eqnarray*}
If $m_{s_is'_i}=3$, we have $s'_{i+1}=s_i,\
x_i=s'_{i}=s_{l+1}<s'_{l+1}=s'_{i+1}=s_i$. Therefore $i=0$ and
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& s_{1}s_{0}s'_0s_{1}\cdots
s_ls_0u_{l+2}\cdots u_{k+1} y_{k+1}-s'_0s_{1}s_{0}s'_0s_{1}\cdots
s_ls_0u_{l+2}\cdots u_{k+1}\\
&\equiv&s_{1}s'_0s_{0}s'_0s_{1}\cdots s_lu_{l+2}\cdots u_{k+1}
y_{k+1}-s'_0s_{1}s'_0s_{0}s'_0s_{1}\cdots s_lu_{l+2}\cdots
u_{k+1}\\
&\equiv&s_{1}s'_0s_{0}s_1s'_0s_{1}\cdots s_lu_{l+2}\cdots
u_{k+1}-s_{1}s'_0s_1s_{0}s'_0s_{1}\cdots s_lu_{l+2}\cdots
u_{k+1}\\
&\equiv&s_{1}s'_0s_1s_{0}s'_0s_{1}\cdots s_lu_{l+2}\cdots
u_{k+1}-s_{1}s'_0s_1s_{0}s'_0s_{1}\cdots s_lu_{l+2}\cdots
u_{k+1}\\
&\equiv& 0.
\end{eqnarray*}
If $m_{s_is'_i}>3$, we have
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& u_0\cdots
u_{i-1}(m-3)(s_i,s'_i)s_{i+1}s'_{i+1}x_is_{i+1}\cdots
s_l(m-2)(s'_{l+1},s_{l+1})u_{l+2}\cdots u_{k+1} y_{k+1}\\
&&-s'_0u_0\cdots u_{k+1}.
\end{eqnarray*}
The proof is completed. \hfill $\blacksquare$
\ \
\begin{lemma} \label{s.12}
Suppose $\bar f=u_0\cdots u_{i-1}(m-3)(s_i,s'_i)\bar
g(m-2)(s'_{l+1},s_{l+1})u_{l+2}\cdots u_{k+1} y_{k+1}$. Then
$|u_{i+1}|=\cdots =|u_l|=1$.
\end{lemma}
\textbf{Proof} Clearly, $v_0=s'_{i+1}$. There are two cases to
consider.
Case 1. $v_1=x_{i}$. Since $u_{i+1}\cdots u_l=v_2\cdots v_{q+1}$, by
Lemma \ref{ls.2}, we have $|u_{i+1}|=\cdots=|u_l|=1$.
Case 2. $v_1=(m-1)(x_{i},s_{i+1})$. We have $m_{s_{i+1}s'_{i+1}}=2$,
i.e., $|u_{i+1}|=1$.
If $|v_1|>2$, then $s_{i+2}=x_{i}$. We have $|u_{i+2}|>1$,
$|v_1|=3$, $v_2=(m-1)(s'_{i+2},s_{i+1})$ and
$s'_{i+1}=s'_{i+2}<s_{i+1}$, a contradiction. Then $|v_1|=2$ and
$v_2\cdots v_{q+1}=u_{i+2}\cdots u_l$. By Lemma \ref{ls.2}, we have
$|u_{i+2}|=\cdots =|u_l|=1$. \hfill $\blacksquare$
\ \
\begin{theorem} \label{s.14}
Suppose that $\bar f=u_0\cdots u_{i-1}(m-3)(s_i,s'_i)\bar
g(m-2)(s'_{l+1},s_{l+1})u_{l+2}\cdots u_{k+1} y_{k+1}$ and
$|u_{l+1}|>1$. Then one of the following holds.
\begin{enumerate}
\item[(i)] $|v_0|=|v_1|=1$ and
if $i=0$, then $(f,g)_{\bar f}\equiv 0$;
if $i>0$, then
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& u_0\cdots
u_{i-1}s_is_j(m-2)(s'_{i},s_{i})s_{i+1}\cdots s_{l}u_{l+2}\cdots
u_{k+1}y_{k+1}-s'_0u_0\cdots u_{k+1}.
\end{eqnarray*}
\item[(ii)] $|v_0|=1,\ |v_1|=2,\ |v_n|=1$ for all $n \ (1<n\leq q+1)$ and
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& u_0\cdots
u_{i-1}(m-3)(s_{i},s'_{i})s_{i+1}s'_{i+1}x_{i}s_{i+1}\cdots
s_{l}(m-2)(s'_{i+1},x_{i})u_{l+2}\cdots u_{k+1}y_{k+1}\\
&&-s'_0u_0\cdots u_{k+1}.
\end{eqnarray*}
\end{enumerate}
\end{theorem}
\textbf{Proof} By Lemma \ref{s.12}, $|u_{i+1}|=\cdots=|u_{l}|=1$ and
$v_0=s'_{i+1}$. There are two cases to consider.
Case 1. $v_1=x_i$. Then $p'_0=s_j$ for some $s_j\in
\{s_{i+1},\cdots, s_{l+1}\}$. If $j=l+1$, then $|v_{n}|=1$ for all
$n$ and $s'_{l+1}=s'_{i+1}\rhd s_{l+1}$. Thus, $|u_{l+1}|=1$. If
j\neq l+1$, there exists $n>1$ such that $|v_{n}|>1$. Then
$s_{l+1}\in\{s_{i+1},\cdots,s_l\}$ and $|u_{l+1}|=1$.
If $i>0$, then $s'_{i+1}=s'_{i},\ x_{i}=s_{i}$. Hence
$m_{s_{i}s'_{i}}$ is even and $s'_{i+1}=\cdots =s'_{l+1}=s'_{i}$.
Now by ELW's, we have
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& u_0\cdots
u_{i-1}(m-3)(s_{i},s'_{i})s_js'_{i}s_{i}s_{i+1}\cdots
s_{l}u_{l+2}\cdots u_{k+1}y_{k+1}-s'_0u_0\cdots u_{k+1}\\
&\equiv& u_0\cdots
u_{i-1}s_is_j(m-2)(s'_{i},s_{i})s_{i+1}\cdots s_{l}u_{l+2}\cdots
u_{k+1}y_{k+1}-s'_0u_0\cdots u_{k+1}.
\end{eqnarray*}
If $i=0$, $s_1'=s_0$ since $s'_{1}>x_0$. Then $m_{s_0s'_0}$ is odd
and $s_0=s'_{1}=s'_2=\cdots =s'_{l+1}=s'_{l+2}$. Since
$h=u_0s_1\cdots s_{l}u_{l+2}\cdots u_{k+1}y_{k+1}-s'_0u_0s_1\cdots
s_{l}u_{l+2}\cdots u_{k+1}$ is in (\ref{e3}), we have
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& (m-3)(s_0,s'_0)s_{j}s_0s'_0s_1\cdots
s_{l}u_{l+2}\cdots
u_{k+1}y_{k+1}\\
&& -(m-2)(s'_0,s_0)s_{j}s_0s'_0s_1\cdots s_{l}u_{l+2}\cdots
u_{k+1}\\
&\equiv& s_{j}u_0s_1\cdots s_{l}u_{l+2}\cdots
u_{k+1}y_{k+1}-s'_0s_ju_0s_1\cdots s_{l}u_{l+2}\cdots
u_{k+1}\\
&\equiv& s_{j}u_0s_1\cdots s_{l}u_{l+2}\cdots
u_{k+1}y_{k+1}-s_js'_0u_0s_1\cdots s_{l}u_{l+2}\cdots u_{k+1}\\
&\equiv& 0.
\end{eqnarray*}
Case 2. $v_1=x_is_{i+1}$. Clearly,
$x_{i}\not\in\{s_{i+1},\cdots,s_{l}\}$. Otherwise, $x_{i}=s_{j}$ for
some $j\ (i+1\leq j\leq l)$ and so $|u_{j}|=|u_i|>1$, a
contradiction. Then $x_{i}=s_{l+1}$, $|v_2|=\cdots=|v_{q+1}|=1$ and
$u_{l+1}=(m-1)(x_i,s'_{i+1})$. Moreover, we have $s'_{i+1}\rhd
s_{i+1}>x_i$ and $m_{s_{i}s'_{i}}>2,\ m_{x_{i}s_{i+1}}=3$. By ELW's,
we have
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv&u_0\cdots
u_{i-1}(m-3)(s_{i},s'_{i})s_{i+1}s'_{i+1}x_{i}s_{i+1}\cdots
s_{l}(m-2)(s'_{i+1},x_{i})u_{l+2}\cdots u_{k+1}y_{k+1}\\
&&-s'_0u_0\cdots u_{k+1}.
\end{eqnarray*}
If $m_{s_is'_i}=3$, then $|u_i|=2$, $x_i=s'_i$ and $s'_{i+1}=s_{i}$.
Since $s'_{i+1}>s_{i+1}>x_i$, we have $i=0$ and $a=1$, which
contradicts $a\neq 1$. Then $|u_i|>2$ and $m_{s_is'_i}$ is even.
\hfill $\blacksquare$
\ \
Now, let
\begin{eqnarray*}\label{e8}
g=m(s,s')-m(s',s),\ \ s>s'
\end{eqnarray*}
be a relation in (\ref{e2}) and let $f$ be as in (\ref{e6}). In the
following Lemma \ref{s.15} and Theorems \ref{s.16} and \ref{s.17}, we
will deal with the remaining inclusion compositions $(f,g)_w,\ w=\bar
f=a\bar gb,\ \bar g=m(s,s')$. There are two more nontrivial cases,
which are treated in Theorems \ref{s.16} and \ref{s.17}.
\begin{lemma} \label{s.15}
If $\bar f=u_0\cdots u_{i-1}\bar gu_{l+2}\cdots u_{k+1} y_{k+1}$,
then $|u_{i}|=\cdots=|u_{l+1}|=1$ and $(f,g)_{\bar f}\equiv 0$.
\end{lemma}
\textbf{Proof} If there exists $j$ such that $|u_j|>1$, there will
be three different letters in $\bar g$, a contradiction. Therefore,
$|u_i|=\cdots =|u_{l+1}|=1$.
Similarly to the proof of Lemma \ref{s.6}, the result holds. \hfill
$\blacksquare$
\ \
\begin{theorem} \label{s.16}
Suppose $\bar f=u_0\cdots u_{i-1}(m-2)(s_i,s'_i)\bar g
(m-2)(s'_{i+1},s_{i+1})u_{i+2}\cdots u_{k+1} y_{k+1}$, where $0\leq
i\leq k$, $u_0\cdots u_{i-1}=1$ if $i=0$, and $u_{i+2}\cdots u_{k+1}=1$
if $i=k$. Then the following statements hold.
\begin{enumerate}
\item[(i)]\ $g=x_is_{i+1}-s_{i+1}x_i,\
x_i>s_{i+1}$.
\item[(ii)]\ $(f,g)_{\bar f}\equiv 0$ if $|u_i|=1$ or
$|u_{i+1}|=1$.
\item[(iii)]\ If $|u_i|>1$ and $|u_{i+1}|>1$, then
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv&u_0\cdots u_{i-1}(m-2)(s_i,s'_i)s_{i+1}x_i
(m-2)(s'_{i+1},s_{i+1})u_{i+2}\cdots u_{k+1} y_{k+1}\\
&&-s'_0u_0\cdots u_{k+1}.
\end{eqnarray*}
\end{enumerate}
\end{theorem}
\textbf{Proof} (i) is clear.
(ii) Suppose $|u_i|=1$. Since $u_0\cdots
u_{i-1}u_{i+1}s_iy_{i+1}-s'_0u_0\cdots u_{i-1}u_{i+1}s_i$ is in
(\ref{e3}), we have
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& u_0\cdots u_{i-1}s_{i+1}s_i
(m-2)(s'_{i+1},s_{i+1})u_{i+2}\cdots u_{k+1} y_{k+1}\\
&&-s'_0u_0\cdots u_{i-1}s_{i+1}s_i
(m-2)(s'_{i+1},s_{i+1})u_{i+2}\cdots u_{k+1}\\
&\equiv& u_0\cdots u_{i-1}u_{i+1}s_i u_{i+2}\cdots u_{k+1}
y_{k+1}-s'_0u_0\cdots u_{i-1}u_{i+1}s_iu_{i+2}\cdots u_{k+1}\\
&\equiv& u_0\cdots u_{i-1}u_{i+1}s_iy_{i+1} u_{i+2}\cdots u_{k+1}
-s'_0u_0\cdots u_{i-1}u_{i+1}s_iu_{i+2}\cdots u_{k+1}\\
&\equiv& 0.
\end{eqnarray*}
Suppose $|u_{i+1}|=1$. Since $u_0\cdots
u_{i-1}u_{i+1}s_iy_{i+1}-s'_0u_0\cdots u_{i-1}u_{i+1}s_i$ is in
(\ref{e3}), we have
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv&u_0\cdots u_{i-1}(m-2)(s_i,s'_i)s_{i+1}x_i
u_{i+2}\cdots u_{k+1} y_{k+1}\\
&&-s'_0u_0\cdots u_{i-1}(m-2)(s_i,s'_i)s_{i+1}x_i
u_{i+2}\cdots u_{k+1}\\
&\equiv& u_0\cdots u_{i-1}u_{i+1}s_i u_{i+2}\cdots u_{k+1}
y_{k+1}-s'_0u_0\cdots u_{i-1}u_{i+1}s_iu_{i+2}\cdots u_{k+1}\\
&\equiv& u_0\cdots u_{i-1}u_{i+1}s_iy_{i+1} u_{i+2}\cdots u_{k+1}
-s'_0u_0\cdots u_{i-1}u_{i+1}s_iu_{i+2}\cdots u_{k+1}\\
&\equiv& 0.
\end{eqnarray*}
(iii) Suppose $|u_i|>1$ and $|u_{i+1}|>1$. Then by ELW's, we have
\begin{eqnarray*}
(f,g)_{\bar f}\equiv u_0\cdots u_{i-1}(m-2)(s_i,s'_i)s_{i+1}x_i
(m-2)(s'_{i+1},s_{i+1})u_{i+2}\cdots u_{k+1} y_{k+1}-s'_0u_0\cdots
u_{k+1}.
\end{eqnarray*}
The proof is completed. \hfill $\blacksquare$
\ \
\begin{theorem} \label{s.17}
Suppose $\bar f=u_0\cdots u_{i-1}(m-2)(s_i,s'_i)\bar g
(m-2)(s'_{i+2},s_{i+2})u_{i+3}\cdots u_{k+1} y_{k+1}$, where $0\leq
i\leq k-1$, $u_0\cdots u_{i-1}=1$ if $i=0$, $u_{i+3}\cdots
u_{k+1}=1$ if $i=k-1$. Then the following statements hold.
\begin{enumerate}
\item[(i)]\ $g=x_is_{i+1}x_i-s_{i+1}x_is_{i+1},\
x_i>s_{i+1}$.
\item[(ii)]\ $(f,g)_{\bar f}\equiv 0$ if $|u_i|=1$
or $|u_{i}|=2$.
\item[(iii)]\ If $|u_i|>2$, then
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv &u_0\cdots
u_{i-1}(m-3)(s_i,s'_i)s_{i+1}s'_{i+1}x_is_{i+1}
(m-2)(s'_{i+2},x_i)u_{i+3}\cdots u_{k+1} y_{k+1}\\
&&-s'_0u_0\cdots u_{k+1}.
\end{eqnarray*}
\end{enumerate}
\end{theorem}
\textbf{Proof} (i) is clear.
(ii) If $|u_i|=1$, then $|u_{i+1}|=|u_{i+2}|=1$. Similarly to Lemma
\ref{s.6}, we have $(f,g)_{\bar f}\equiv 0$.
If $|u_i|=2$, then $u_i=s_is'_i$,
$s'_i=x_i=s_{i+2}<s'_{i+2}=s'_{i+1}=s_i$ which implies $i=0$. Since
$s_0s'_0s_1s_0-s'_0s_0s'_0s_1$ and $s'_0s_1u_3\cdots
u_{k+1}y_{k+1}-s_1s'_0s_1u_3\cdots u_{k+1}$ are in $(\ref{e3})$,
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& s_0s_1s'_0s_1s_0u_3\cdots
u_{k+1}y_{k+1}-s'_0s_0s_1s'_0s_1s_0u_3\cdots u_{k+1}\\
&\equiv&s_1s_0s'_0s_1s_0u_3\cdots
u_{k+1}y_{k+1}-s'_0s_1s_0s'_0s_1s_0u_3\cdots u_{k+1}\\
&\equiv&s_1s'_0s_0s'_0s_1u_3\cdots
u_{k+1}y_{k+1}-s'_0s_1s'_0s_0s'_0s_1u_3\cdots u_{k+1}\\
&\equiv&s_1s'_0s_0s_1s'_0s_1u_3\cdots
u_{k+1}-s_1s'_0s_1s_0s'_0s_1u_3\cdots u_{k+1}\\
&\equiv&s_1s'_0s_1s_0s'_0s_1u_3\cdots
u_{k+1}-s_1s'_0s_1s_0s'_0s_1u_3\cdots u_{k+1}\\
&\equiv& 0.
\end{eqnarray*}
(iii) Suppose $|u_i|>2$. By ELW's, we have
\begin{eqnarray*}
(f,g)_{\bar f}&\equiv& u_0\cdots
u_{i-1}(m-2)(s_i,s'_i)s_{i+1}x_is_{i+1}
(m-2)(s'_{i+2},x_i)u_{i+3}\cdots u_{k+1} y_{k+1}\\
&&-s'_0u_0\cdots u_{k+1}\\
&\equiv& u_0\cdots
u_{i-1}(m-3)(s_i,s'_i)s_{i+1}s'_{i+1}x_is_{i+1}
(m-2)(s'_{i+2},x_i)u_{i+3}\cdots u_{k+1} y_{k+1}\\
&&-s'_0 u_0\cdots u_{k+1}.
\end{eqnarray*}
The proof is completed. \hfill $\blacksquare$
\ \
This finishes all the cases of inclusion compositions. Most of them
are trivial, except the six cases mentioned in Theorems
\ref{s.11}, \ref{s.14}, \ref{s.16} and \ref{s.17}. In fact, these
six cases can be classified into four cases.
\ \
We now consider in which instances the nontrivial cases may
occur.
The first nontrivial case, which is the first case of Theorem
\ref{s.11} and the nontrivial case of Theorem \ref{s.16}, happens
if the following $f$ exists:
{\bf C1}: $f=(m-1)(s_0,s'_0)\cdots (m-1)(s_i,s'_i)\cdots
(m-1)(s_{l+1},s'_{l+1})\cdots m(s_{k+1},s'_{k+1})-m(s'_0,s_0)\cdots
(m-1)(s_{k+1},s'_{k+1})$, where $0\leq i\leq l\leq k$, such that
\begin{eqnarray*}
&&(a)\
|(m-1)(s_i,s'_i)|\geq 2, \ |(m-1)(s_{l+1},s'_{l+1})|\geq 2,\ x_i\rhd s_{l+1};\\
&&(b)\ (m-1)(s_j,s'_j)=s_j, \ \ s_{l+1}\rhd s_j\ \mbox{ for any }j,\
i+1\leq j\leq l.
\end{eqnarray*}
\noindent {\bf Remarks}: In the case {\bf C1}, we have
1) $\bar f$ contains $\bar g$ as a subword where $g=x_is_{i+1}\cdots
s_{l+1}-s_{l+1}x_is_{i+1}\cdots s_l$, $x_i\rhd s_{l+1}$ and
$s_{l+1}\rhd s_j\ \mbox{ for any }j,\ i+1\leq j\leq l$.
2) If there is no $f\in (3')$ with {\bf C1} where $0\leq i= l\leq
k$, then no $f\in (3')$ has property {\bf C1}.
\ \
The second nontrivial case, which is the second case of Theorem
\ref{s.11} and the nontrivial case of Theorem \ref{s.17}, happens if
the following $f$ exists:
{\bf C2}: $f=(m-1)(s_0,s'_0)\cdots (m-1)(s_i,s'_i)\cdots
(m-1)(s_{l+1},s'_{l+1})\cdots m(s_{k+1},s'_{k+1})-m(s'_0,s_0)\cdots
(m-1)(s_{k+1},s'_{k+1})$, where $0\leq i<l\leq k$, such that
\begin{eqnarray*}
&&(a)\ |(m-1)(s_i,s'_i)|>2,\ (m-1)(s_{i+1},s'_{i+1})=s_{i+1},\ x_i=s_{l+1}>s_{i+1},\ m_{x_is_{i+1}}=3;\\
&&(b)\ (m-1)(s_j,s'_j)=s_j,\ s_{l+1}\rhd s_j\mbox{ for any }j,\
i+2\leq j\leq l.
\end{eqnarray*}
\noindent {\bf Remarks}: In the case {\bf C2}, we have
1) $\bar f$ contains $\bar g$ as a subword where $g=x_is_{i+1}\cdots
s_{l+1}-s_{l+1}s_{i+1}\cdots s_l$, $m_{x_is_{i+1}}=3$ and
$s_{l+1}\rhd s_j\mbox{ for any }j,\ i+2\leq j\leq l$.
2) If there is no $f\in (3')$ with {\bf C2} where $0\leq i=
l-1\leq k-1$, then no $f\in (3')$ has property {\bf C2}.
\ \
The third nontrivial case, which is the first case of Theorem
\ref{s.14}, happens if the following $f$ exists:
{\bf C3}:
$f=(m-1)(s_0,s'_0)\cdots (m-1)(s_i,s'_i)\cdots
(m-1)(s_{l+1},s'_{l+1})\cdots m(s_{k+1},s'_{k+1})-m(s'_0,s_0)\cdots
(m-1)(s_{k+1},s'_{k+1})$, where $0\leq i\leq l\leq k$, such that
\begin{eqnarray*}
&&(a)\ |(m-1)(s_{i},s'_{i})|\geq 2,\ m_{s_is'_i}\ \mbox{is\ even and
there\ exists\ }m \ (\ i+1\leq m\leq l+1)\ \mbox{such \
that\ }\\
&&\ \ \ \ s'_{i+1}\rhd s_m\rhd x_i;\\
&&(b)\ (m-1)(s_j,s'_j)=s_j\mbox{ for any } j,\ i+1\leq j\leq l+1,\\
&&\ \ \ \ s_m\rhd s_n \mbox{ for any }n,\ i+1 \leq n\leq
m-2\ \mbox{and}\\
&&\ \ \ \ s_{m-1}\cdots s_{i_1}=(m-1)(s_{m-1},s_m),\ s_{m-1}<s_m,\\
&&\ \ \ \ s_{i_1+1}\cdots
s_{i_2}=(m-1)(s_{i_1+1},s_{i_1+2}),\ s_{i_1+1}<s_{i_1+2},\cdots,\\
&&\ \ \ \ s_{i_n+1}\cdots s_{l+1}=m(s_{i_n+1},s_{i_n+2}),\
s_{i_n+1}<s_{i_n+2}.
\end{eqnarray*}
\noindent {\bf Remarks}: In the case {\bf C3}, we have
1) $\bar f$ contains $\bar g$ as a subword where
$g=s'_{i+1}x_is_{i+1}\cdots s_{l+1}-s_ms'_{i+1}x_is_{i+1}\cdots
s_{l}\in(3')$ such that $s'_{i+1}\rhd s_m\rhd x_i$ for some $m\ (\
i+1\leq m\leq l+1)$ and $s_m\rhd s_n$ for any $n,\ i+1 \leq n\leq
m-2.$
2) If there is no $f\in (3')$ with {\bf C3} where $0\leq i= l\leq
k$, then no $f\in (3')$ has property {\bf C3}.
\ \
The fourth nontrivial case, which is the second case of Theorem
\ref{s.14}, happens if the following $f$ exists:
{\bf C4}: $f=(m-1)(s_0,s'_0)\cdots (m-1)(s_i,s'_i)\cdots
(m-1)(s_{l+1},s'_{l+1})\cdots m(s_{k+1},s'_{k+1})-m(s'_0,s_0)\cdots
(m-1)(s_{k+1},s'_{k+1})$, where $0\leq i<l\leq k$, such that
\begin{eqnarray*}
&&(a)\ |(m-1)(s_i,s'_i)|>2,\ (m-1)(s_{i+1},s'_{i+1})=s_{i+1},\ s_{i+1}>x_i=s_{l+1},\ m_{x_is_{i+1}}=3;\\
&&(b)\ (m-1)(s_j,s'_j)=s_j,\ s_{l+1}\rhd s_j \mbox{ for any }j,\
i+2\leq j\leq l.
\end{eqnarray*}
\noindent {\bf Remarks}: In the case {\bf C4}, we have
1) $\bar f$ contains $\bar g$ as a subword where
$g=s'_{i+1}x_i\cdots s_{l+1}-s_{i+1}s'_{i+1}x_i\cdots s_l\in (3')$,
$s'_{i+1}\rhd s_{i+1},\ m_{x_is_{i+1}}=3$ and $s_{l+1}\rhd s_j$ for
any $j,\ i+2\leq j\leq l$.
2) If there is no $f\in (3')$ with {\bf C4} where $0\leq i=
l-1\leq k-1$, then no $f\in (3')$ has property {\bf C4}.
\ \
\noindent\textbf{Remark:} In Example \ref{ex1}, there exist
relations in $(3')$ with properties {\bf C1} and {\bf C2}.
\begin{theorem}\label{t3.19}
$S=\{(\ref{e1}),(\ref{e2}),(3')\}$ is a Gr\"{o}bner-Shirshov basis
of $W$ if there is no $f\in (3')$ with properties ${\bf C1}\vee {\bf
C2}\vee {\bf C3}\vee {\bf C4}$.
\end{theorem}
\noindent\textbf{Proof.}
We prove that all possible compositions are trivial
modulo $S$. Denote by $(i\wedge j)_w$ the composition of type
$(i)$ and type $(j)$ with respect to the ambiguity $w$.
By Lemmas \ref{s.6}, \ref{s.10} and \ref{s.15}, and Theorems
\ref{s.11}, \ref{s.14}, \ref{s.16} and \ref{s.17}, all
inclusion compositions are trivial. Thus, it remains only to check the
intersection compositions.
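The reductions below repeatedly use two elementary word manipulations: prepending $s$ to $m(s',s)$ yields the word $(m+1)(s,s')$, and rewriting $(m+1)(s,s')$ by the braid relation $m(s,s')\to m(s',s)$ followed by cancelling the resulting square $s^2=1$ yields $(m-1)(s',s)$. Assuming $k(s,t)$ denotes the alternating word $stst\cdots$ of length $k$ (the helper below and its name are ours, not from the paper), these identities can be sanity-checked mechanically:

```python
def alt_word(k, s, t):
    """Alternating word of length k beginning with s: s t s t ..."""
    return [(s, t)[i % 2] for i in range(k)]

for m in (3, 4, 5, 6):
    # prepending s to m(t,s) gives exactly the word (m+1)(s,t)
    assert ['s'] + alt_word(m, 't', 's') == alt_word(m + 1, 's', 't')
    # rewrite the first m letters of (m+1)(s,t) by the braid relation
    # m(s,t) -> m(t,s); a square appears at the end ...
    w = alt_word(m, 't', 's') + [alt_word(m + 1, 's', 't')[m]]
    assert w[-1] == w[-2]
    # ... and cancelling it (s^2 = 1) leaves (m-1)(t,s)
    assert w[:-2] == alt_word(m - 1, 't', 's')
```

These are precisely the steps written out in the $(1\wedge2)$ and $(2\wedge1)$ reductions below.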
\begin{enumerate}
\item[($1\wedge2$)]\ $w=sm(s,s'),\ s>s'$.
\begin{eqnarray*}
(1\wedge2)_w&=&-(m-1)(s',s)+sm(s',s)\\
&\equiv&-(m-1)(s',s)+(m+1)(s,s')\\
&\equiv&-(m-1)(s',s)+(m-1)(s',s)\\
&\equiv&0.
\end{eqnarray*}
\item[($1\wedge3'$)]\
$w=s_0(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_k,s'_k)m(s_{k+1},s'_{k+1}).$
\begin{eqnarray*}
(1\wedge3')_w&=&-(m-2)(s'_0,s_0)(m-1)(s_1,s'_1)\cdots(m-1)(s_{k},s'_{k})m(s_{k+1},s'_{k+1})\\
&&+s_0m(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&\equiv&-(m-1)(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&&+(m-1)(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&\equiv&0.
\end{eqnarray*}
\item[($2\wedge1$)]\ $w=m(s,s')x,\ s>s'$, where $x$ is the last letter of $m(s,s')$.
\begin{eqnarray*}
(2\wedge1)_w&=&-m(s',s)x+(m-1)(s,s')\\
&\equiv&-(m+1)(s',s)+(m-1)(s,s')\\
&\equiv&-(m-1)(s,s')+(m-1)(s,s')\\
&\equiv&0.
\end{eqnarray*}
\item[($2\wedge2$)]\ There are two cases to consider.
Case 1. $w=m(s,s')(m-1)(s'',x),\ s>s',\ x>s''$, where $x$ is the
last letter of $m(s,s')$.
$$
(2\wedge2)_w=-m(s',s)(m-1)(s'',x)+(m-1)(s,s')m(s'',x)\equiv0.
$$
Case 2. $w=(2i)(s,s')m(s,s'),\ s>s',\ 1\leq i< m_{ss'}/2$. We
prove only the case where $m_{ss'}$ is even; the proof for odd
$m_{ss'}$ is similar. Assume that $m_{ss'}$ is even. Then
$$
(2\wedge2)_w\equiv
-m(s',s)(2i)(s,s')+(2i)(s,s')m(s',s)\equiv-(m-2i)(s',s)+(m-2i)(s',s)\equiv
0.
$$
\item[($2\wedge3'$)]\ There are two cases to consider.
Case 1. $ w=(m-1)(s,s')(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_k,s'_k)m(s_{k+1},s'_{k+1})$, $s>s',\ s_0>s'_0$, where $s_0$
is the last letter of $m(s,s')$. Since
$h=(m-1)(s,s')m(s'_0,s_0)-m(s',s)(m-1)(s'_0,s_0)\in (3')$, we have
\begin{eqnarray*}
&&(2\wedge3')_w\\
&=&-m(s',s)(m-2)(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_k,s'_k)m(s_{k+1},s'_{k+1})\\
&&+(m-1)(s,s')m(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&\equiv&-m(s',s)(m-1)(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&&+m(s',s)(m-1)(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&\equiv&0.
\end{eqnarray*}
Case 2. $ w=(2i)(s_0,s'_0)(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_k,s'_k)m(s_{k+1},s'_{k+1})$, \ $1\leq i<m_{s_0s'_0}/2$. We
prove only the case where $m_{s_0s'_0}$ is even; the proof for odd
$m_{s_0s'_0}$ is similar. Assume that
$m_{s_0s'_0}$ is even. Then
\begin{eqnarray*}
&&(2\wedge3')_w\\
&=&-m(s'_0,s_0)(2i-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots(m-1)(s_k,s'_k)m(s_{k+1},s'_{k+1})\\
&&+(2i)(s_0,s'_0)m(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&=&-(m-2i+1)(s'_0,s_0)(m-1)(s_1,s'_1)\cdots(m-1)(s_k,s'_k)m(s_{k+1},s'_{k+1})\\
&&+(m-2i)(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&\equiv&-(m-2i+1)(s'_0,s_0)s'_1(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&&+(m-2i)(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&\equiv&-(m-2i)(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&&+(m-2i)(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&\equiv&0.
\end{eqnarray*}
\item[($3'\wedge1$)]
$w=(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_k,s'_k)m(s_{k+1},s'_{k+1})y_{k+1}$, where $y_{k+1}$ is the
last letter of $m(s_{k+1},s'_{k+1})$.
\begin{eqnarray*}
&&(3'\wedge1)_w\\
&=&-m(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})y_{k+1}\\
&&+(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&\equiv&-m(s'_0,s_0)s'_1(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&&+(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&\equiv&-(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&&+(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&\equiv&0.
\end{eqnarray*}
\item[($3'\wedge2$)]\ There are two cases to consider.
Case 1. $w=(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
m(s_{k+1},s'_{k+1})(m-1)(t,y_{k+1}), \ y_{k+1}>t$, where $y_{k+1}$
is the last letter of $m(s_{k+1},s'_{k+1})$. Then
\begin{eqnarray*}
&&(3'\wedge2)_w\\
&=&-m(s'_0,s_0)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})(m-1)(t,y_{k+1})\\
&&+(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k+1},s'_{k+1})m(t,y_{k+1})\\
&\equiv&0.
\end{eqnarray*}
Case 2. $w=(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_k,s'_k)s_{k+1}(2i)(s'_{k+1},s_{k+1})$ $m(s'_{k+1},s_{k+1}),\
0\leq i\leq (m_{s_{k+1}s'_{k+1}}-2)/2$. We consider only the case
where $m_{s_{k+1}s'_{k+1}}$ is odd; the proof for even
$m_{s_{k+1}s'_{k+1}}$ is similar. Assume that $m_{s_{k+1}s'_{k+1}}$
is odd. Then
\begin{eqnarray*}
&&(3'\wedge2)_w\\
&=&-m(s'_0,s_0)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})(1+2i)(s'_{k+1},s_{k+1})\\
&&+(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_k,s'_k)s_{k+1}(2i)(s'_{k+1},s_{k+1})m(s_{k+1},s'_{k+1})\\
&\equiv&-m(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-2-2i)(s_{k+1},s'_{k+1})\\
&&+(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots(m-1)(s_{k},s'_{k})(m-1-2i)(s'_{k+1},s_{k+1})\\
&\equiv&-(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
m(s_{k},s'_{k})(m-2-2i)(s_{k+1},s'_{k+1})\\
&&+(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots(m-1)(s_{k},s'_{k})(m-1-2i)(s'_{k+1},s_{k+1})\\
&\equiv&-(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1-2i)(s'_{k+1},s_{k+1})\\
&&+(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots(m-1)(s_{k},s'_{k})(m-1-2i)(s'_{k+1},s_{k+1})\\
&\equiv&0.
\end{eqnarray*}
\item[($3'\wedge3'$)]\ There are two cases to consider.
Case 1. $w=(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_k,s'_k)m(s_{k+1},s'_{k+1})
(m-2)(t,y_{k+1})(m-1)(t_1,t'_1)\cdots
(m-1)(t_l,t'_l)m(t'_{l+1},t_{l+1}), \ \ y_{k+1}>t,$ where $y_{k+1}$
is the last letter of $m(s_{k+1},s'_{k+1})$.
\begin{eqnarray*}
&&(3'\wedge3')_w\\
&=&-m(s'_0,s_0)\cdots (m-1)(s_{k+1},s'_{k+1})\cdot(m-2)(t,y_{k+1})
(m-1)(t_1,t'_1)\cdots
m(t'_{l+1},t_{l+1})\\
&&+(m-1)(s_0,s'_0)\cdots (m-1)(s_{k+1},s'_{k+1})m(t,y_{k+1})\cdots
(m-1)(t_{l+1},t'_{l+1})\\
&\equiv&-m(s'_0,s_0)\cdots(m-1)(s_{k+1},s'_{k+1})(m-1)(t,y_{k+1})\cdots
(m-1)(t_{l+1},t'_{l+1})\\
&&+m(s'_0,s_0)\cdots(m-1)(s_{k+1},s'_{k+1})(m-1)(t,y_{k+1})\cdots(m-1)(t_{l+1},t'_{l+1})\\
&\equiv&0.
\end{eqnarray*}
Case 2. $w=(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_k,s'_k)s_{k+1}(2i)(s'_{k+1},s_{k+1})
(m-1)(s'_{k+1},s_{k+1})(m-1)(t_1,t'_1)\cdots
(m-1)(t_l,t'_l)m(t_{l+1},t'_{l+1}), \ \ \ 0\leq i\leq
(m_{s_{k+1}s'_{k+1}}-2)/2$. We consider only the case where
$m_{s_{k+1}s'_{k+1}}$ is odd; the proof for even
$m_{s_{k+1}s'_{k+1}}$ is similar. Assume that
$m_{s_{k+1}s'_{k+1}}$ is odd. Then
\begin{eqnarray*}
&&(3'\wedge3')_w\\
&=&-m(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1)(s_{k+1},s'_{k+1})\\
&&\ \ \cdot (2i)(s'_{k+1},s_{k+1})(m-1)(t_1,t'_1)\cdots
(m-1)(t_l,t'_l)m(t_{l+1},t'_{l+1})\\
&&+(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})s_{k+1}(2i)(s'_{k+1},s_{k+1})\\
&&\ \ \cdot m(s_{k+1},s'_{k+1}) (m-1)(t_1,t'_1)\cdots
(m-1)(t_{l+1},t'_{l+1})\\
&\equiv&-m(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1-2i)(s_{k+1},s'_{k+1})\\
&&\ \ \cdot (m-1)(t_1,t'_1)\cdots(m-1)(t_{l},t'_{l})m(t_{l+1},t'_{l+1})\\
&&+(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1-2i)(s'_{k+1},s_{k+1})\\
&&\ \ \cdot (m-1)(t_1,t'_1)\cdots
(m-1)(t_{l+1},t'_{l+1})\\
&\equiv&-m(s'_0,s_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-2-2i)(s_{k+1},s'_{k+1})\\
&&\ \ \cdot (m-1)(t_1,t'_1)\cdots(m-1)(t_{l},t'_{l})(m-1)(t_{l+1},t'_{l+1})\\
&&+(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1-2i)(s'_{k+1},s_{k+1})\\
&& \ \ \cdot (m-1)(t_1,t'_1)\cdots
(m-1)(t_{l+1},t'_{l+1})\\
&\equiv&-(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1-2i)(s'_{k+1},s_{k+1})\\
&&\ \ \cdot (m-1)(t_1,t'_1)\cdots(m-1)(t_{l},t'_{l})(m-1)(t_{l+1},t'_{l+1})\\
&&+(m-1)(s_0,s'_0)(m-1)(s_1,s'_1)\cdots
(m-1)(s_{k},s'_{k})(m-1-2i)(s'_{k+1},s_{k+1})\\
&& \ \ \cdot (m-1)(t_1,t'_1)\cdots
(m-1)(t_{l+1},t'_{l+1})\\
&\equiv&0.
\end{eqnarray*}
\end{enumerate}
Thus, the theorem is proved. \hfill $\blacksquare$
\ \
We now give some examples that fall under Theorem \ref{t3.19}
but are not finite Coxeter groups (see \cite{bs01, Lee, Sv}).
\begin{example}
Let $W$ be the Coxeter group with respect to Coxeter matrix
$M=(m_{ij})$. Suppose that one of the following conditions holds:
\begin{enumerate}
\item[(i)]\ for any $i,j\ (i>j)$, $ m_{ij}\geq 3$;
\item[(ii)]\ for any $i,j\ (i>j)$,
either $ m_{ij}=2$ or $m_{ij}=\infty$;
\item[(iii)]\ $ m_{i1}=2$ for any $i\geq 2$ and $m_{ij}\geq 3$
for any $i,j\ (i>j\geq2)$.
\end{enumerate}
Then in $(3')$, there are no relations with property ${\bf C1}\vee
{\bf C2}\vee {\bf C3}\vee {\bf C4}$. By Theorem \ref{t3.19},
$S=\{(\ref{e1}),(\ref{e2}),(3')\}$ is a Gr\"{o}bner-Shirshov basis
of such a Coxeter group $W$.
\end{example}
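The three conditions above are purely combinatorial and can be checked mechanically on a given Coxeter matrix. The sketch below is our own illustration (0-based indexing, with \texttt{math.inf} encoding $m_{ij}=\infty$); it is not part of the paper.

```python
import math

def satisfies_example_conditions(M):
    """Return which of the conditions (i)/(ii)/(iii) hold for the
    symmetric Coxeter matrix M (m_ii = 1, math.inf for infinity)."""
    n = len(M)
    pairs = [(i, j) for i in range(n) for j in range(i)]  # i > j, 0-based
    conds = []
    if all(M[i][j] >= 3 for i, j in pairs):               # condition (i)
        conds.append('i')
    if all(M[i][j] == 2 or M[i][j] == math.inf
           for i, j in pairs):                            # condition (ii)
        conds.append('ii')
    if all(M[i][0] == 2 for i in range(1, n)) and \
       all(M[i][j] >= 3 for i, j in pairs if j >= 1):     # condition (iii)
        conds.append('iii')
    return conds

# rank-2 matrix with m_12 = 3 satisfies (i) only
assert satisfies_example_conditions([[1, 3], [3, 1]]) == ['i']
# rank-3 matrix with generator 1 commuting with the rest satisfies (iii)
assert satisfies_example_conditions([[1, 2, 2], [2, 1, 3], [2, 3, 1]]) == ['iii']
```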
\ \
In the next paper, we will try to prove that the new conjecture is
true if $W$ is a Coxeter group without property ${\bf C2}\vee {\bf C3}\vee
{\bf C4}$.
\ \
\noindent{\bf Acknowledgement}: The authors would like to thank
Professor L.A. Bokut for his guidance, useful discussions and
enthusiastic encouragement in writing up this paper.
% arXiv:0910.0096 [math.GR], 2009-10-01: ``Gr\"obner-Shirshov bases for Coxeter groups I''
% arXiv:2104.01965: ``AuTO: A Framework for Automatic differentiation in Topology Optimization''
\section{Introduction}
\label{sec:introduction}
\paragraph{} Fueled by improvements in manufacturing capabilities and computational modeling, the field of topology optimization (TO) has witnessed tremendous growth in recent years. To further accelerate the development of TO, we consider here automating a critical step in TO, namely computing the sensitivities, i.e., computing the derivatives of objectives, constraints, material models, projections and filters, with respect to the design variables, typically the elemental pseudo-densities, in the popular density-based TO.
\paragraph{} Conceptually, there are four different methods for computing sensitivities \cite{baydin2017automatic}: (1) numerical, (2) symbolic, (3) manual, and (4) automatic differentiation. Numerical, i.e., finite-difference, sensitivity computation suffers from truncation and floating-point errors, and is therefore not recommended. Symbolic differentiation using software packages such as SymPy \cite{symPy} or Mathematica~\cite{Mathematica} is a reasonable choice for simple expressions. However, it is impractical when the quantity of interest involves loops (such as when assembling stiffness matrices) and/or flow control (if-then-else). The default method today for computing sensitivities is manual. While theoretically straightforward, the manual process is unfortunately cumbersome and error-prone; it is often the bottleneck in the development of new TO modules and exploratory studies. In this educational paper, we therefore \emph{illustrate and promote the use of automatic differentiation for computing sensitivities in TO}.
\paragraph{} Automatic differentiation (AD) is a collection of methods for efficiently and accurately computing derivatives of numeric functions expressed as computer programs \cite{baydin2017automatic}. AD has been around for decades \cite{Rumelhart1986BackProp} and has been exploited in a wide range of problems, from molecular dynamics simulations \cite{schoenholz2019JaxMD} to the design of photonic crystals \cite{minkov2020InversePhotonicCrystalDesign}; see \cite{rall2006perspectivesOnAD} for a critical review. AD in the context of finite element analysis is reviewed in \cite{ozaki1995higherorderDerivUsingAD} and \cite{van2005reviewSensAnalForTO}. More recently, AD was demonstrated for shape optimization in \cite{paganini2021fireshape} using the Firedrake framework \cite{gangl2020ADforShapeOpt}. AD has also been exploited in TO of turbulent fluid flow systems \cite{dilgen2018TO_turbulentFlowUsingAD}, \cite{dilgen2018ADforHeatTransfer}.
\paragraph{} Despite the pioneering research, AD is not widely used in TO. The objective of this educational paper \emph {is to accelerate the adoption of AD in TO by providing standalone codes for popular TO problems.} In particular, we employ JAX \cite{jax2018github}, a high-performance Python library for end-to-end AD. Its NumPy \cite{harris2020NumPy} like syntax, low memory footprint and support of just-in-time (JIT) compilation for accelerated code performance makes it an ideal candidate for the task. We demonstrate the use of AD within the popular density-based TO framework \cite{bendsoe2013topology}, by replicating existing educational TO codes for compliance minimization \cite{sigmund2001Code99}, compliant mechanism design \cite{Bendsoe2003} and microstructural design \cite{xia2015design}. Critical code snippets are highlighted in this article; the complete codes are available at \href{https://github.com/UW-ERSL/AuTO}{https://github.com/UW-ERSL/AuTO}
\section{ Compliance minimization }
\label{sec:compliance}
\subsection{Problem Formulation }
First we consider compliance minimization, as modeled in \cite{sigmund2001Code99}, subject to a volume constraint; this TO problem is very popular due to its self-adjoint nature. In a mesh-discretized form, the problem can be posed as:
\begin{subequations}
\begin{align}
& \underset{\boldsymbol{\rho}}{\text{minimize}}
& &J = \boldsymbol{u}^\mathsf{T}\boldsymbol{K}(\boldsymbol{\rho})\boldsymbol{u}\label{eqnObj_compliance}\\
& \text{subject to}
& & \boldsymbol{K}(\boldsymbol{\rho})\boldsymbol{u} = \boldsymbol{f}\label{eqn:GoverningEqn_compliance}\\
& & & \sum_e \rho_e v_e \leq V^*\label{eqn:volcons_compliance}
\end{align}
\label{eq:complianceMinimization}
\end{subequations}
where $\boldsymbol{u}$ is the displacement (in structural problems) or temperature (in thermal problems), $\boldsymbol{K}$ is the stiffness matrix, $\boldsymbol{\rho}$ is the vector of pseudo-density design variables, $\boldsymbol{f}$ is the structural/thermal load and $V^*$ is the volume constraint. To solve this problem, one must define the material model (see below), rely on finite element analysis to solve Equation \ref{eqn:GoverningEqn_compliance}, and use design update schemes such as MMA \cite{svanberg1987MMA} or Optimality Criteria \cite{bendsoe1995optimization}.
\paragraph{}A critical ingredient for the design update schemes is the sensitivity, i.e., derivative, of the objective and constraint with respect to the pseudo-density variables. As mentioned earlier, this is typically carried out manually. For the above self-adjoint problem, the sensitivity of the compliance, for the solid isotropic material with penalization (SIMP) \cite{bendsoe1995optimization} material model, can be easily derived:
\begin{equation}
\frac{\partial J}{\partial \rho_e} = -\boldsymbol{u}^T \frac{\partial \boldsymbol{K}}{\partial \rho_e}\boldsymbol{u} = -p\rho_e^{p-1}\boldsymbol{u}_e^T \boldsymbol{K}_0\boldsymbol{u}_e
\label{eq:sensCompliance}
\end{equation}
where $\boldsymbol{u}_e$ is the element displacement vector and $\boldsymbol{K}_0$ is the element stiffness matrix for unit Young's modulus.
\emph {However, in this paper, we will rely on automatic differentiation (AD) framework for sensitivity analysis.}
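The structure of Equation \ref{eq:sensCompliance} is easy to verify on a one-degree-of-freedom toy problem (entirely our own construction, not part of AuTO): for a scalar SIMP stiffness, the expression $-u\,(\partial K/\partial\rho)\,u$ should match a central finite difference of the compliance $J=f^2/K$.

```python
import numpy as np  # NumPy only; no AD is needed for this check

Emin, Emax, p, f = 1e-9, 1.0, 3.0, 2.0

def K(rho):
    # scalar SIMP-interpolated stiffness
    return Emin + (Emax - Emin) * rho**p

def compliance(rho):
    # J = u*K*u with K*u = f, i.e. J = f^2/K
    u = f / K(rho)
    return f * u

rho = 0.5
u = f / K(rho)
dKdrho = p * (Emax - Emin) * rho**(p - 1)
analytic = -u * dKdrho * u                 # -u^T (dK/drho) u
h = 1e-6
fd = (compliance(rho + h) - compliance(rho - h)) / (2 * h)
assert abs(analytic - fd) / abs(fd) < 1e-5
```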
\subsection{AuTO Framework }
\paragraph{} The algorithm for solving the above compliance minimization problem is summarized in Algorithm \ref{alg:complianceMinimization}. Code snippets that illustrate the use of AD are discussed below.
\begin{algorithm}[H]
\caption{Compliance Minimization}
\label{alg:complianceMinimization}
\begin{algorithmic}[1]
\Procedure{complianceMin}{mesh, material, filter, BC, $V^*$}
\State $i = 0$ \Comment{Iteration index}
\State $\rho = V^*$ \Comment{Design variable initialization}
\State $\Delta = 1.0$ \Comment{Design change}
\While{$\Delta > \epsilon \; \text{and} \; i \leq \text{MaxIter}$}
\State $i \gets i+1 $
\State $E \gets \rho$ \Comment{Material model} \label{algo:materialModel}
\State $\bm{K} \gets E$ \Comment{Compute stiffness matrix and assemble } \label{algo:stiffness}
\State $ u \; \text{via} \; Ku=f$ \Comment{Solve with imposed BC} \label{algo:solve}
\State $J \gets (K, u)$ \Comment{Objective} \label{algo:compliance}
\State $ \frac{\partial J}{\partial \rho} \gets AD(\rho \rightarrow J)$ \label{algo:AD_objective} \Comment{Automatic differentiation of objective}
\label{algo:constraint}
\State $g \gets (\rho,V^*)$ \Comment{Vol. Constraint}
\State $ \frac{\partial g}{\partial \rho} \gets AD(\rho \rightarrow g)$ \Comment{Automatic differentiation of constraint}
\label{algo:AD_constraint}
\State $\phi^i \gets (J, g , \frac{\partial J}{\partial \rho},\frac{\partial g}{\partial \rho})$ \Comment{MMA Solver \cite{svanberg1987MMA}} \label{algo:callMMA}
\State $\Delta = (||\rho^i - \rho^{i-1}||) $
\EndWhile
\EndProcedure
\end{algorithmic}
\end{algorithm}
Steps \ref{algo:materialModel}-\ref{algo:compliance} are captured by the following Python code, where the \textcolor{purple}{@jit} directive invokes just-in-time (JIT) compilation, i.e., the compiler translates the Python functions into optimized machine code at run-time, approaching the speeds of C or FORTRAN \cite{lam2015numba}.
\begin{python}
@jit
def computeCompliance(rho):
E = MaterialModel(rho)
K = assembleK(E)
u = solveKuf(K)
J = jnp.dot(u.T, jnp.dot(K,u))
return J
\end{python}
SIMP \cite{bendsoe1995optimization} is a typical material model, and implemented as follows (the \textcolor{purple}{@jit} directive has been removed here to avoid repetition).
\begin{python}
def MaterialModel(rho):
E = Emin + (Emax-Emin)*rho**penal # SIMP
return E
\end{python}
The stiffness matrix is assembled in a compact manner as follows.
\begin{python}
def assembleK(E):
K = jnp.zeros((ndof,ndof))
sK = D0.flatten()[np.newaxis]*E.T.flatten()
K = jax.ops.index_add(K, idx, sK)
    return K
\end{python}
where $D0 = \int\limits_{\Omega_e}[B]^T[C_0][B] d \Omega_e$ is the element base stiffness matrix \cite{bathe2006finite} with $E = 1$ and prescribed $\nu$; \textit{idx} reflects the global numbering of the element nodes. The underlying linear system is solved using a direct solver.
\begin{python}
def solveKuf(K):
u_free = jax.scipy.linalg.solve(K[free,:][:,free],force[free])
u = jnp.zeros((ndof))
u = jax.ops.index_add(u, free,u_free.reshape(-1))
    return u
\end{python}
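The scatter-add assembly in \texttt{assembleK} has a direct NumPy analogue, \texttt{np.add.at}. The following self-contained sketch assembles a two-element 1D bar (the mesh and stiffness values are our own toy choices, not AuTO's):

```python
import numpy as np

D0 = np.array([[1.0, -1.0], [-1.0, 1.0]])  # element base stiffness (unit E)
E = np.array([2.0, 3.0])                   # interpolated element stiffnesses
conn = [[0, 1], [1, 2]]                    # element -> global dof numbering
ndof = 3

K = np.zeros((ndof, ndof))
for e, dofs in enumerate(conn):
    # scatter-add the scaled element matrix into the global matrix
    np.add.at(K, np.ix_(dofs, dofs), E[e] * D0)

expected = np.array([[ 2., -2.,  0.],
                     [-2.,  5., -3.],
                     [ 0., -3.,  3.]])
assert np.allclose(K, expected)
```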
Finally, to compute the compliance and its sensitivity in step \ref{algo:AD_objective}, we simply request for the function and its derivative as follows. The JAX environment automatically traces the chain of function calls, and ensures an end-to-end automatic differentiation.
\begin{python}
J, gradJ = value_and_grad(computeCompliance)(rho)
\end{python}
The global volume constraint (in step \ref{algo:constraint}) is defined as follows,
\begin{python}
@jit
def globalVolumeConstraint(rho):
vc = jnp.mean(rho)/vf - 1.
    return vc
\end{python}
As before, the value and its gradient (via AD) can be computed via
\begin{python}
g, gradg = value_and_grad(globalVolumeConstraint)(rho)
\end{python}
As summarized in step \ref{algo:callMMA} of the algorithm, the computed objective, objective gradient, constraint, constraint gradient are then passed to standard optimizers (MMA in our case) \cite{svanberg1987MMA}. The reader is referred to the complete code provided.
\subsection{Illustrative Examples}
\paragraph{} We illustrate the above AD framework using two popular examples of compliance minimization \cite{bendsoe2013topology}: (a) minimizing the structural compliance of a tip-loaded cantilever (see Figure \ref{fig:Compliance_all_BC}a) and (b) minimizing the thermal compliance of a square plate under a uniform heat load (see Figure \ref{fig:Compliance_all_BC}b). The mesh was chosen to be a 60 $\times$ 30 grid for the structural problem, and a 60 $\times$ 60 grid for the thermal problem. The target volume fraction in both problems is $V^* = 0.5$. The material properties are $E = 1$, $\nu = 0.3$, $k = 1$, and MMA was used as the design update scheme, with default parameters. The computed designs illustrated in Figures \ref{fig:Compliance_all_BC}a and \ref{fig:Compliance_all_BC}b match those in the literature \cite{bendsoe2013topology}.
\begin{figure}[!htpb]
\begin{center}
\includegraphics[scale=0.6]{figures/result/compliance/Compliance_all_BC.pdf}%
\caption{Compliance minimization examples: (a) Tip loaded cantilever and optimized topology at $V^* = 0.5$ (b) Heat conduction on a square plate and optimized topology at $V^* = 0.5$}
\label{fig:Compliance_all_BC}
\end{center}
\end{figure}
The total time taken for optimization using analytical derivatives and using AD is compared in Figure \ref{fig:timing_compliance}. We observe that AD is marginally more expensive, but we will see later that this is not always the case.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5,trim={100 90 30 80},clip]{figures/result/compliance/cost_ADvsAnal.pdf}%
\caption{Computational cost for compliance minimization using AD and analytical implementation.}
\label{fig:timing_compliance}
\end{center}
\end{figure}
\subsection{Advantages of AD}
\paragraph{} While SIMP is a popular material model, other models have been proposed \cite{dzierzanowski2012comparisonSIMPvRAMP}. An advantage of AD is that one can easily replace SIMP with, for example, RAMP~\cite{stolpe2001RAMP}, by simply changing the material model.
\begin{python}
def MaterialModel(rho):
E = Emax*rho/(1.+S*(1.-rho)) # RAMP
    return E
\end{python}
All downstream sensitivity computations are handled automatically.
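A quick numerical comparison of the two interpolations (the parameter values below are our own choices): both reach full stiffness at $\rho=1$, but RAMP retains a nonzero slope $E_{max}/(1+S)$ at $\rho=0$, whereas SIMP with $p>1$ is essentially flat there.

```python
import numpy as np

Emin, Emax, p, S = 1e-9, 1.0, 3.0, 8.0

def simp(rho):
    return Emin + (Emax - Emin) * rho**p

def ramp(rho):
    return Emax * rho / (1.0 + S * (1.0 - rho))

# both interpolations reach full stiffness at rho = 1
assert np.isclose(simp(1.0), Emax) and np.isclose(ramp(1.0), Emax)

# slopes at rho = 0, estimated by a forward difference
h = 1e-7
assert (simp(h) - simp(0.0)) / h < 1e-3                        # ~0 for p > 1
assert np.isclose((ramp(h) - ramp(0.0)) / h, Emax / (1 + S), rtol=1e-4)
```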
\paragraph{} Often, additional filters and projections are used in TO. For instance, they can be used to remove checkerboard patterns \cite{Sigmund1998NumericalInstabTO}, impose a minimum length scale \cite{guest2004MinLengthScaleProjection}, limit gray elements \cite{Wu2017}, etc. The filters, apart from being complex in their own right, are often used in tandem. For instance, in \cite{Wu2017ShellInfill}, eight such schemes were compounded to obtain shell-infill type structures. This results in highly complicated sensitivity expressions that can be laborious to derive. With an AD framework, however, the user simply needs to include the desired projections in the pipeline, and the sensitivities are taken care of.
For instance, we can introduce the following filter to reduce grayness in design, just before computing the material model.
\begin{python}
def projectionFilter(rho):
if(projection['isOn']):
nmr = np.tanh(c0*beta) + jnp.tanh(beta*(rho-c0))
dnmr = np.tanh(c0*beta) + np.tanh(beta*(1-c0))
rho_tilde = nmr/dnmr
return rho_tilde
else:
return rho
\end{python}
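The filter above is a smoothed Heaviside projection with threshold $c_0$ and sharpness $\beta$; a plain-NumPy check (the parameter values are our own) confirms that it pushes intermediate densities toward 0/1 while leaving the endpoints and the threshold fixed:

```python
import numpy as np

def project(rho, beta=8.0, c0=0.5):
    # same tanh projection as in projectionFilter, NumPy-only for testing
    nmr = np.tanh(c0 * beta) + np.tanh(beta * (rho - c0))
    dnmr = np.tanh(c0 * beta) + np.tanh(beta * (1.0 - c0))
    return nmr / dnmr

assert project(0.9) > 0.9                 # pushed toward 1
assert project(0.1) < 0.1                 # pushed toward 0
assert abs(project(0.5) - 0.5) < 1e-12    # threshold is fixed (c0 = 0.5)
assert abs(project(0.0)) < 1e-12 and abs(project(1.0) - 1.0) < 1e-12
```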
\paragraph{} Finally, manufacturing constraints \cite{vatanabe2016ManufCons}, \cite{liu2018current} are often imposed in TO; these include limiting overhang of structures \cite{Qian2017}, connectivity \cite{li2016structuralConnectivityConstraint}, material utilization \cite{Sanders2018Multimaterial}, and length scale control \cite{Guest2009MaxLengthScale}. Such constraints are easy to impose within the AD framework. For example, the local volume constraint proposed in \cite{Guest2009MaxLengthScale} may be implemented as follows.
\begin{python}
def maxLengthScaleConstraint(rho):
    v = jnp.matmul(L, (1.01-rho)**n) # L averaged prior
    cons = 1 - jnp.power(jnp.sum(v**p),1./p)/vstar
    return cons
\end{python}
As before, one can calculate the value and gradient of the constraint via
\begin{python}
vc, gradvc = value_and_grad(maxLengthScaleConstraint)(rho)
\end{python}
The computed constraint and gradient can then be passed on to MMA. To illustrate, for the tip cantilever problem in Figure \ref{fig:Compliance_all_BC}(a), with the additional max length scale radius of $r = 30$, and maximum void volume at $0.75 \pi r^2$, the resulting topology is illustrated in Figure \ref{fig:tipCantilever_50vf_maxLS}.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.25,trim={10 90 0 70},clip]{figures/result/lengthScale/tipCantilever_50vf.pdf}%
\caption{Tip cantilever beam with length scale control. }
\label{fig:tipCantilever_50vf_maxLS}
\end{center}
\end{figure}
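The constraint above aggregates the local void measures with a $p$-norm, which bounds the maximum from above and tightens as $p$ grows while remaining differentiable. A quick check with arbitrary values of our own:

```python
import numpy as np

v = np.array([0.2, 0.5, 0.9])
for p in (8, 16, 64):
    pnorm = np.sum(v**p) ** (1.0 / p)
    assert pnorm >= v.max()        # the p-norm bounds the max from above
# ... and approaches the max as p grows
assert abs(np.sum(v**64.0) ** (1.0 / 64.0) - v.max()) < 0.02
```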
\section{Compliant Mechanism Design}
\label{sec:compliantMechanism}
We next illustrate the AuTO framework using compliant mechanisms (CMs) \cite{howell2013compliant}; see \cite{zhu2020design} for a comprehensive review on TO for CMs.
\subsection{Problem Formulation}
\label{sec:compmech_probFormulation}
Consider the displacement inverter considered in the 104-line educational MATLAB code \cite{bendsoe2013topology}. The objective is to maximize the output displacement $u_{out}$ at the point of interest when a force $f_{in}$ is applied, as illustrated in Figure \ref{fig:Compliant_Mech_BC}. The spring constants are specified by the user to control the behavior of the CM.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale = 0.8]{figures/result/compliantMechanism/CM_fig_hal.pdf}
\caption{The displacement inverter compliant mechanism.}
\label{fig:Compliant_Mech_BC}
\end{center}
\end{figure}
This TO problem can be written as:
\begin{subequations}
\begin{align}
& \underset{\boldsymbol{\rho}}{\text{maximize}}
& &\boldsymbol{u}_{out}\label{eqnObj_compliantMech}\\
& \text{subject to}
& & \boldsymbol{K}(\boldsymbol{\rho})\boldsymbol{u} = \boldsymbol{f_{in}}\label{eqn:GoverningEqn_compliantMech}\\
& & & \sum_e \rho_e v_e \leq V^*\label{eqn:volcons_compliantMech}
\end{align}
\end{subequations}
The standard implementation entails computing the elemental sensitivity $\frac{\partial u_{out}}{\partial \rho_e}$ given by:
\begin{equation}
\frac{\partial u_{out}}{\partial \rho_e} = \boldsymbol{\lambda}^T \frac{\partial \boldsymbol{K} }{\partial \rho_e}\boldsymbol{u} = p\rho_e^{p-1}\boldsymbol{\lambda}_e^T \boldsymbol{K}_0\boldsymbol{u}_e
\end{equation}
where $\boldsymbol{\lambda}$ is the solution of the adjoint problem $\boldsymbol{K\lambda} = -\boldsymbol{l}$, $\boldsymbol{l}$ is a vector with the value 1 at the degree of freedom corresponding to the output point and zeros elsewhere, and $\boldsymbol{K}_0$ is the element stiffness matrix for unit Young's modulus. Observe that, in the manual method, two solves (one for $\boldsymbol{u}$, and one for $\boldsymbol{\lambda}$) are required per iteration to evaluate the sensitivities.
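The adjoint recipe can be verified on a two-spring toy system (entirely our own construction, not from the paper): with $\boldsymbol{K\lambda}=-\boldsymbol{l}$, the product $\boldsymbol{\lambda}^T(\partial\boldsymbol{K}/\partial\rho)\boldsymbol{u}$ matches a finite difference of the output displacement.

```python
import numpy as np

k2 = 0.7                                   # fixed second spring
f = np.array([0.0, 1.0])                   # input load
l = np.array([1.0, 0.0])                   # selects the output dof

def K(rho):
    # rho scales the grounded spring of a 2-dof spring chain
    return np.array([[rho + k2, -k2], [-k2, k2]])

dK = np.array([[1.0, 0.0], [0.0, 0.0]])    # dK/drho

rho = 2.0
u = np.linalg.solve(K(rho), f)
lam = np.linalg.solve(K(rho), -l)          # adjoint solve: K lam = -l
adjoint = lam @ dK @ u                     # lambda^T (dK/drho) u

u_out = lambda r: l @ np.linalg.solve(K(r), f)
h = 1e-6
fd = (u_out(rho + h) - u_out(rho - h)) / (2 * h)
assert abs(adjoint - fd) < 1e-8            # here u_out = 1/rho, so d/drho = -1/rho^2
```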
\subsection{AuTO Framework}
The implementation in AuTO for CM design is similar to the compliance minimization problem, with two minor changes: (a) the stiffness matrix assembly includes the spring constants, and (b) the objective is the displacement at the output node.
The relevant code snippets are provided below.
\begin{python}
def assembleKWithSprings(E):
K = jnp.zeros((ndof,ndof))
sK = D0.flatten()[np.newaxis]*E.T.flatten()
K = jax.ops.index_add(K, idx, sK)
# springs at input and output nodes
K = jax.ops.index_add(K, jax.ops.index[nodeIn, nodeIn], kspringIn)
K = jax.ops.index_add(K, jax.ops.index[nodeOut, nodeOut], kspringOut)
return K;
\end{python}
\begin{python}
def CompliantMechanism(rho):
E = MaterialModel(rho)
K = assembleKWithSprings(E)
u = solveKuf(K)
return u[bc['nodeOut']]
\end{python}
To compute the objective and its gradient, we rely on JAX as follows.
\begin{python}
J, gradJ = value_and_grad(CompliantMechanism)(rho)
\end{python}
The design update using MMA is as per Section \ref{sec:compliance}. Using the problem specification in \cite{Bendsoe2003}, the resulting topology for the inverter is illustrated in Figure \ref{fig:output_compliant_mech_all}a; this is in agreement with the result in \cite{Bendsoe2003}.
\subsection{Advantages of AD}
For the design of CMs using TO, a key advantage of AD stems from the following observation \cite{zhu2020design} "\textit{no universally accepted objective formulation exists}". For example, consider two additional objectives:
\begin{enumerate}
\item $\min: -\omega MSE + (1 - \omega)SE$ \cite{nishiwaki1998topology}
\item $\min: -MSE/SE$ \cite{saxena2000optimal}
\end{enumerate}
where $MSE = \boldsymbol{v}^T\boldsymbol{K}\boldsymbol{u}$ is the mutual strain energy, which describes the flexibility of the designed mechanism and $SE = \boldsymbol{u}^T\boldsymbol{K}\boldsymbol{u}$ is the strain energy, $\boldsymbol{v}$ is the output displacement when a unit dummy load applied at the degree of freedom corresponding to the output point.
In the AuTO framework, one can easily explore various objectives as follows.
\begin{python}
def CompliantMechanism(rho):
E = MaterialModel(rho)
K = assembleKWithSprings(E)
u = solveKuf(K)
v = solve_dummy(K)
MSE = jnp.dot(v.T, jnp.dot(K,u))
SE = jnp.dot(u.T, jnp.dot(K,u));
J = -MSE/SE # or
# w = 0.9
# J = -w*MSE + (1 - w)*SE
return J
\end{python}
The topologies obtained with the two additional objectives are illustrated in Figure \ref{fig:output_compliant_mech_all}b and Figure \ref{fig:output_compliant_mech_all}c.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.6]{figures/result/compliantMechanism/cm_results_all.pdf}%
\caption{Displacement inverter design using three formulations at $V^* = 0.35$}
\label{fig:output_compliant_mech_all}
\end{center}
\end{figure}
For the objective of maximizing output displacement, the computational costs using analytical and AD methods are illustrated in Figure \ref{fig:timing_call_CM}. Observe that the the analytical method is more expensive since one must solve an adjoint problem explicitly. On the other hand, JAX internally optimizes the code for computing sensitivities via AD.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5,trim={100 90 30 80},clip]{figures/result/compliantMechanism/compliantMechanism_cost_ADvsAnal.pdf}%
\caption{Computational cost of optimization using AD vs analytical implementation. }
\label{fig:timing_call_CM}
\end{center}
\end{figure}
\section{Design of Materials}
\label{sec:microstructuralDesign}
In this section, we replicate the educational article \cite{xia2015design} for the design of microstructures using AuTO. In particular, we consider (a) maximizing bulk modulus (b) maximizing shear modulus, and (c) designing microstructures with negative Poisson's ratio.
\subsection{Problem setup}
The mathematical formulation is as follows \cite{xia2015design}:
\begin{subequations}
\begin{align}
& \underset{\boldsymbol{\rho}}{\text{minimize}}
& &c( E_{ijkl}^H( \boldsymbol{\rho}))\label{eqnObj_microstr}\\
& \text{subject to}
& & \boldsymbol{K}(\boldsymbol{\rho})\boldsymbol{U}^{A(kl)} = \boldsymbol{F}^{(kl)}, k,l= 1,2,\ldots,d\label{eqn:GoverningEqn_microstr}\\
& & & \sum_e \rho_e v_e \leq V^*\label{eqn:volcons_microstr}
\end{align}
\end{subequations}
where the objective $c(E_{ijkl}^H)$ represents the material property we intend to minimize, $\boldsymbol{K}$ is the stiffness matrix, $\boldsymbol{U}^{A(kl)}$ and $\boldsymbol{F}^{(kl)}$ are the displacement vector and the external force vector for test case $(kl)$ respectively. The different test cases correspond to the unit strain tests along different directions, where $d$ is the spatial dimension.
\paragraph{}In 2D, maximization of bulk modulus corresponds to:
\begin{equation}\label{eq:bulk}
c = -(E_{1111}+E_{1122}+E_{2211}+E_{2222})
\end{equation}
and maximization of shear modulus corresponds to:
\begin{equation}\label{eq:shear}
c = -E_{1212}
\end{equation}
Finally, for the design of materials with negative Poisson's ratio, the following was proposed \cite{xia2015design}:
\begin{equation}\label{eq:poisson}
c = -E_{1122} - \beta^l(E_{1111}+E_{2222})
\end{equation}
where $\beta \in (0,1)$ is a user-defined fixed parameter and $l$ is the design iteration number. Observe that, in the manual method, computing the sensitivity requires solving for the adjoint \cite{xia2015design},
\subsection{Implementation on AuTO}
In AuTO, the bulk modulus objective, for example, can be captured as follows:
\begin{python}
def MicrosructuralDesign(rho):
E = MaterialModel(rho)
K = assembleK(E)
Kr, F = computeSubMatrices(K)
U = performFE(Kr, F)
EMatrix = homogenizedMatrix(U, rho)
bulkModulus = -EMatrix['0_0']-EMatrix['0_1']-EMatrix['1_1']-EMatrix['1_0']
return bulkModulus
\end{python}
Other objectives can be similarly captured. Figure \ref{fig:method_fiberOptimization} illustrates three different microstuctures for the three different objectives, for a volume fraction of 0.25.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.5,trim={10 240 20 80},clip]{figures/result/microstrOpt/microstrOpt.pdf}%
\caption{Maximization of bulk modulus, shear modulus and design of material with negative Poisson's ratio, with $v_f^* = 0.25$.}
\label{fig:method_fiberOptimization}
\end{center}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
\paragraph{}In this paper, we demonstrated the simplicity and benefits of AD in TO. Possible extensions include multi-physics \cite{alexandersen2020review, semmler2018material, deng2017TO_thermoelasticBuckling} and non-linear problems \cite{wang2014design,clausen2015topology}. In the current implementation,
direct solvers were employed. For large scale problems, sparse pre-conditioned iterative solvers \cite{andersen2013cvxopt}, \cite{Yadav2013AssemblyFree} will be critical (but not fully supported by JAX). One of the advantages of the manual approach (that is lost in AD) is that the expressions can provide key insights to the problem.
\section*{Replication of results}
The Python code pertinent to this paper is available at \href{https://github.com/UW-ERSL/AuTO}{https://github.com/UW-ERSL/AuTO}.
\section*{Acknowledgments}
The authors would like to thank the support of National Science Foundation through grant CMMI 1561899.
\section*{Compliance with ethical standards}
The authors declare that they have no conflict of interest.
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:introduction}
\paragraph{} Fueled by improvements in manufacturing capabilities and computational modeling, the field of topology optimization (TO) has witnessed tremendous growth in recent years. To further accelerate the development of TO, we consider here automating a critical step in TO, namely computing the sensitivities, i.e., computing the derivatives of objectives, constraints, material models, projections and filters, with respect to the design variables, typically the elemental pseudo-densities, in the popular density-based TO.
\paragraph{} Conceptually, there are four different methods for computing sensitivities \cite{baydin2017automatic}: (1) numerical, (2) symbolic, (3) manual, and (4) automatic differentiation. Numerical, i.e., finite-difference, sensitivity computation suffers from truncation and floating-point errors, and is therefore not recommended. Symbolic differentiation using software packages such as SymPy \cite{symPy} or Mathematica~\cite{Mathematica} is a reasonable choice for simple expressions. However, it is impractical when the quantity of interest involves loops (such as when assembling stiffness matrices) and/or flow control (if-then-else). The default method today for computing sensitivities is manual differentiation. While theoretically straightforward, the manual process is unfortunately cumbersome and error-prone; it is often the bottleneck in the development of new TO modules and exploratory studies. In this educational paper, we therefore \emph{illustrate and promote the use of automatic differentiation for computing sensitivities in TO}.
\paragraph{} Automatic differentiation (AD) is a collection of methods for efficiently and accurately computing derivatives of numeric functions expressed as computer programs \cite{baydin2017automatic}. AD has been around for decades \cite{Rumelhart1986BackProp} and has been exploited in a wide range of problems, ranging from molecular dynamics simulations \cite{schoenholz2019JaxMD} to the design of photonic crystals \cite{minkov2020InversePhotonicCrystalDesign}; see \cite{rall2006perspectivesOnAD} for a critical review. AD in the context of finite element analysis is reviewed in \cite{ozaki1995higherorderDerivUsingAD} and \cite{van2005reviewSensAnalForTO}. More recently, AD was demonstrated for shape optimization in \cite{paganini2021fireshape} using the Firedrake framework \cite{gangl2020ADforShapeOpt}. AD has also been exploited in TO of turbulent fluid flow systems \cite{dilgen2018TO_turbulentFlowUsingAD,dilgen2018ADforHeatTransfer}.
\paragraph{} Despite the pioneering research, AD is not widely used in TO. The objective of this educational paper \emph{is to accelerate the adoption of AD in TO by providing standalone codes for popular TO problems.} In particular, we employ JAX \cite{jax2018github}, a high-performance Python library for end-to-end AD. Its NumPy-like \cite{harris2020NumPy} syntax, low memory footprint and support of just-in-time (JIT) compilation for accelerated code performance make it an ideal candidate for the task. We demonstrate the use of AD within the popular density-based TO framework \cite{bendsoe2013topology} by replicating existing educational TO codes for compliance minimization \cite{sigmund2001Code99}, compliant mechanism design \cite{Bendsoe2003} and microstructural design \cite{xia2015design}. Critical code snippets are highlighted in this article; the complete codes are available at \href{https://github.com/UW-ERSL/AuTO}{https://github.com/UW-ERSL/AuTO}.
\section{ Compliance minimization }
\label{sec:compliance}
\subsection{Problem Formulation }
First, we consider compliance minimization as modeled in \cite{sigmund2001Code99}, subject to a volume constraint; this TO problem is very popular due to its self-adjoint nature. In a mesh-discretized form, the problem can be posed as:
\begin{subequations}
\begin{align}
& \underset{\boldsymbol{\rho}}{\text{minimize}}
& &J = \boldsymbol{u}^\mathsf{T}\boldsymbol{K}(\boldsymbol{\rho})\boldsymbol{u}\label{eqnObj_compliance}\\
& \text{subject to}
& & \boldsymbol{K}(\boldsymbol{\rho})\boldsymbol{u} = \boldsymbol{f}\label{eqn:GoverningEqn_compliance}\\
& & & \sum_e \rho_e v_e \leq V^*\label{eqn:volcons_compliance}
\end{align}
\label{eq:complianceMinimization}
\end{subequations}
where $\boldsymbol{u}$ is the displacement (in structural problems) or temperature (in thermal problems), $\boldsymbol{K}$ is the stiffness matrix, $\boldsymbol{\rho}$ is the vector of pseudo-density design variables, $\boldsymbol{f}$ is the structural/thermal load and $V^*$ is the volume constraint. To solve this problem, one must define the material model (see below), rely on finite element analysis to solve Equation \ref{eqn:GoverningEqn_compliance}, and use design update schemes such as MMA \cite{svanberg1987MMA} or optimality criteria \cite{bendsoe1995optimization}.
\paragraph{}A critical ingredient for the design update schemes is the sensitivity, i.e., derivative, of the objective and constraint with respect to the pseudo-density variables. As mentioned earlier, this is typically carried out manually. For the above self-adjoint problem, the sensitivity of the compliance, for the solid isotropic material with penalization (SIMP) \cite{bendsoe1995optimization} material model, can be easily derived:
\begin{equation}
\frac{\partial J}{\partial \rho_e} = -\boldsymbol{u}^T \frac{\partial \boldsymbol{K}}{\partial \rho_e}\boldsymbol{u} = -p\rho_e^{p-1}\boldsymbol{u}_e^T \boldsymbol{K}_0\boldsymbol{u}_e
\label{eq:sensCompliance}
\end{equation}
\emph{However, in this paper, we will rely on an automatic differentiation (AD) framework for sensitivity analysis.}
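As a quick sanity check of Equation \ref{eq:sensCompliance} (and of what AD computes), consider a hypothetical one-element bar with $K(\rho) = E(\rho)k_0$ and load $f$, so that $J = f^2/K$. The snippet below is purely illustrative (not part of the AuTO code) and compares the analytical SIMP sensitivity against a central finite difference.
\begin{python}
# Illustrative one-element bar; all parameter values are hypothetical.
Emin, Emax, p, k0, f = 1e-9, 1.0, 3, 1.0, 1.0

def compliance(rho):
    E = Emin + (Emax - Emin)*rho**p  # SIMP material model
    return f*f/(E*k0)                # J = u^T K u with u = f/K

rho = 0.5
u = f/((Emin + (Emax - Emin)*rho**p)*k0)
dJ_analytic = -u*(p*(Emax - Emin)*rho**(p - 1)*k0)*u  # analytical sensitivity
h = 1e-6
dJ_fd = (compliance(rho + h) - compliance(rho - h))/(2*h)  # stand-in for AD
print(dJ_analytic, dJ_fd)  # both approximately -48.0
\end{python}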
\subsection{AuTO Framework }
\paragraph{} The algorithm for solving the above compliance minimization problem is summarized in Algorithm \ref{alg:complianceMinimization}. Code snippets that illustrate the use of AD are discussed below.
\begin{algorithm}[H]
\caption{Compliance Minimization}
\label{alg:complianceMinimization}
\begin{algorithmic}[1]
\Procedure{complianceMin}{mesh, material, filter, BC, $V^*$}
\State $i = 0$ \Comment{Iteration index}
\State $\rho = V^*$ \Comment{Design variable initialization}
\State $\Delta = 1.0$ \Comment{Design change}
\While{$\Delta > \epsilon \; \text{and} \; i \leq \text{MaxIter}$}
\State $i \gets i+1 $
\State $E \gets \rho$ \Comment{Material model} \label{algo:materialModel}
\State $\bm{K} \gets E$ \Comment{Compute stiffness matrix and assemble } \label{algo:stiffness}
\State $ u \; \text{via} \; Ku=f$ \Comment{Solve with imposed BC} \label{algo:solve}
\State $J \gets (K, u)$ \Comment{Objective} \label{algo:compliance}
\State $ \frac{\partial J}{\partial \rho} \gets AD(\rho \rightarrow J)$ \Comment{Automatic differentiation of objective} \label{algo:AD_objective}
\State $g \gets (\bar{\rho},V^*)$ \Comment{Vol. constraint} \label{algo:constraint}
\State $ \frac{\partial g}{\partial \rho} \gets AD(\rho \rightarrow g)$ \Comment{Automatic differentiation of constraint} \label{algo:AD_constraint}
\State $\phi^i \gets (J, g , \frac{\partial J}{\partial \rho},\frac{\partial g}{\partial \rho})$ \Comment{MMA Solver \cite{svanberg1987MMA}} \label{algo:callMMA}
\State $\Delta \gets ||\rho^i - \rho^{i-1}||$
\EndWhile
\EndProcedure
\end{algorithmic}
\end{algorithm}
Steps \ref{algo:materialModel}-\ref{algo:compliance} are captured through the following Python code, where the \textcolor{purple}{@jit} directive requests just-in-time compilation, i.e., the compiler translates the Python functions to optimized machine code at run-time, approaching the speeds of C or FORTRAN \cite{lam2015numba}.
\begin{python}
@jit
def computeCompliance(rho):
E = MaterialModel(rho)
K = assembleK(E)
u = solveKuf(K)
J = jnp.dot(u.T, jnp.dot(K,u))
return J
\end{python}
SIMP \cite{bendsoe1995optimization} is a typical material model, and is implemented as follows (the \textcolor{purple}{@jit} directive is omitted here and in later snippets to avoid repetition).
\begin{python}
def MaterialModel(rho):
E = Emin + (Emax-Emin)*rho**penal # SIMP
return E
\end{python}
The stiffness matrix is assembled in a compact manner as follows.
\begin{python}
def assembleK(E):
K = jnp.zeros((ndof,ndof))
sK = D0.flatten()[np.newaxis]*E.T.flatten()
K = jax.ops.index_add(K, idx, sK)
return K;
\end{python}
where $D0 = \int\limits_{\Omega_e}[B]^T[C_0][B] d \Omega_e$ is the element base stiffness matrix \cite{bathe2006finite} with $E = 1$ and prescribed $\nu$; \textit{idx} reflects the global numbering of the element nodes. The underlying linear system is solved using a direct solver.
\begin{python}
def solveKuf(K):
u_free = jax.scipy.linalg.solve(K[free,:][:,free],force[free])
u = jnp.zeros((ndof))
u = jax.ops.index_add(u, free,u_free.reshape(-1))
return u;
\end{python}
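When \texttt{solveKuf} appears in the call chain, JAX differentiates through the linear solve using the implicit relation $\partial \boldsymbol{u}/\partial \theta = -\boldsymbol{K}^{-1}(\partial \boldsymbol{K}/\partial \theta)\boldsymbol{u}$. The hypothetical $2\times2$ system below (illustrative, not part of AuTO) verifies this rule against a finite difference.
\begin{python}
import numpy as np

f = np.array([1.0, 2.0])
def K_of(t):  # hypothetical parameterized stiffness matrix
    return np.array([[2.0 + t, -1.0], [-1.0, 2.0 + t]])

t = 0.3
K = K_of(t)
u = np.linalg.solve(K, f)
dK = np.eye(2)                         # dK/dt
du_rule = -np.linalg.solve(K, dK @ u)  # implicit-function rule
h = 1e-6
du_fd = (np.linalg.solve(K_of(t + h), f)
         - np.linalg.solve(K_of(t - h), f))/(2*h)
print(np.abs(du_rule - du_fd).max())   # close to zero
\end{python}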
Finally, to compute the compliance and its sensitivity in step \ref{algo:AD_objective}, we simply request for the function and its derivative as follows. The JAX environment automatically traces the chain of function calls, and ensures an end-to-end automatic differentiation.
\begin{python}
J, gradJ = value_and_grad(computeCompliance)(rho)
\end{python}
The global volume constraint (in step \ref{algo:constraint}) is defined as follows,
\begin{python}
@jit
def globalVolumeConstraint(rho):
vc = jnp.mean(rho)/vf - 1.
return vc;
\end{python}
As before, the value and its gradient (via AD) can be computed via
\begin{python}
g, gradg = value_and_grad(globalVolumeConstraint)(rho)
\end{python}
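Since the constraint is linear in $\rho$, its gradient is simply the constant $1/(n \cdot v_f)$ for every element, where $n$ is the number of elements; the small stand-alone check below (with illustrative values, a finite difference standing in for AD) confirms this.
\begin{python}
# Hypothetical 4-element design with volume fraction vf = 0.5.
rho = [0.4, 0.6, 0.5, 0.5]
vf, h = 0.5, 1e-7

def vc(r):
    return sum(r)/len(r)/vf - 1.0

r2 = list(rho); r2[0] += h
grad0 = (vc(r2) - vc(rho))/h   # d(vc)/d(rho_0)
print(grad0, 1.0/(len(rho)*vf))  # both 0.5
\end{python}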
As summarized in step \ref{algo:callMMA} of the algorithm, the computed objective, objective gradient, constraint, and constraint gradient are then passed to a standard optimizer (MMA in our case) \cite{svanberg1987MMA}. The reader is referred to the complete code provided.
\subsection{Illustrative Examples}
\paragraph{} We illustrate the above AD framework using two popular compliance minimization examples \cite{bendsoe2013topology}: (a) minimizing the structural compliance of a tip-loaded cantilever (see Figure \ref{fig:Compliance_all_BC}a) and (b) minimizing the thermal compliance of a square plate under a uniform heat load (see Figure \ref{fig:Compliance_all_BC}b). The mesh was chosen to be a 60 $\times$ 30 grid for the structural problem, and a 60 $\times$ 60 grid for the thermal problem. The target volume fraction in both problems is $V^* = 0.5$. The material properties are $E = 1$, $\nu = 0.3$, $k = 1$, and MMA was used as the design update scheme, with default parameters. The computed designs in Figures \ref{fig:Compliance_all_BC}a and \ref{fig:Compliance_all_BC}b match those in the literature \cite{bendsoe2013topology}.
\begin{figure}[!htpb]
\begin{center}
\includegraphics[scale=0.6]{figures/result/compliance/Compliance_all_BC.pdf}%
\caption{Compliance minimization examples: (a) Tip loaded cantilever and optimized topology at $V^* = 0.5$ (b) Heat conduction on a square plate and optimized topology at $V^* = 0.5$}
\label{fig:Compliance_all_BC}
\end{center}
\end{figure}
The total optimization times using analytical derivatives and AD are compared in Figure \ref{fig:timing_compliance}. We observe that AD is marginally more expensive, but we will observe later that this is not always the case.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5,trim={100 90 30 80},clip]{figures/result/compliance/cost_ADvsAnal.pdf}%
\caption{Computational cost for compliance minimization using AD and analytical implementation.}
\label{fig:timing_compliance}
\end{center}
\end{figure}
\subsection{Advantages of AD}
\paragraph{} While SIMP is a popular material model, other models have been proposed \cite{dzierzanowski2012comparisonSIMPvRAMP}. The advantage of AD is that one can easily replace SIMP with, for example, RAMP~\cite{stolpe2001RAMP}, by simply changing the material model.
\begin{python}
def MaterialModel(rho):
E = Emax*rho/(1.+S*(1.-rho)) # RAMP
  return E
\end{python}
All downstream sensitivity computations are handled automatically.
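One practical difference between the two interpolations is their slope at $\rho = 0$: SIMP (with $p > 1$) is flat there, whereas RAMP retains the finite slope $E_{max}/(1+S)$. The stand-alone snippet below, with illustrative parameter values, makes this concrete.
\begin{python}
# Hypothetical parameters; slopes estimated by a forward difference.
Emin, Emax, p, S = 1e-9, 1.0, 3.0, 8.0

def simp(rho): return Emin + (Emax - Emin)*rho**p
def ramp(rho): return Emax*rho/(1.0 + S*(1.0 - rho))

h = 1e-7
s_simp = (simp(h) - simp(0.0))/h
s_ramp = (ramp(h) - ramp(0.0))/h
print(s_simp)  # ~0: SIMP is flat at rho = 0
print(s_ramp)  # ~Emax/(1+S) = 0.111...
\end{python}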
\paragraph{} Often, additional filters and projections are used in TO. For instance, they can be used to remove checkerboard patterns \cite{Sigmund1998NumericalInstabTO}, impose a minimum length scale \cite{guest2004MinLengthScaleProjection}, limit gray elements \cite{Wu2017}, etc. The filters, apart from being complex in their own right, are often used in tandem. For instance, in \cite{Wu2017ShellInfill}, eight such schemes were compounded to obtain shell-infill type structures. This results in highly complicated sensitivity expressions that can be laborious to derive. With an AD framework, however, the user simply needs to include the desired projections in the pipeline, and the sensitivity is taken care of.
For instance, we can introduce the following filter to reduce grayness in design, just before computing the material model.
\begin{python}
def projectionFilter(rho):
if(projection['isOn']):
nmr = np.tanh(c0*beta) + jnp.tanh(beta*(rho-c0))
dnmr = np.tanh(c0*beta) + np.tanh(beta*(1-c0))
rho_tilde = nmr/dnmr
return rho_tilde
else:
return rho
\end{python}
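For illustration, the effect of the projection sharpness $\beta$ on an intermediate density can be evaluated in isolation (the threshold $c_0 = 0.5$ and the sample density are hypothetical values):
\begin{python}
import math

def project(rho, beta, c0=0.5):
    # Same tanh projection as above, on a scalar density.
    nmr = math.tanh(c0*beta) + math.tanh(beta*(rho - c0))
    dnmr = math.tanh(c0*beta) + math.tanh(beta*(1 - c0))
    return nmr/dnmr

p_low = project(0.3, 1.0)    # mild sharpening
p_high = project(0.3, 16.0)  # driven almost to void
print(p_low, p_high)
\end{python}
As $\beta$ grows, densities below the threshold are pushed toward 0 and those above it toward 1, reducing grayness.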
\paragraph{} Finally, manufacturing constraints \cite{vatanabe2016ManufCons}, \cite{liu2018current} are often imposed in TO; these include limiting overhang of structures \cite{Qian2017}, connectivity \cite{li2016structuralConnectivityConstraint}, material utilization \cite{Sanders2018Multimaterial}, and length scale control \cite{Guest2009MaxLengthScale}. Such constraints are easy to impose within the AD framework. For example, the local volume constraint proposed in \cite{Guest2009MaxLengthScale} may be implemented as follows.
\begin{python}
def maxLengthScaleConstraint(rho):
v = jnp.matmul(L, (1.01-rho)**n); # L averaged prior
cons = 1 - jnp.power(jnp.sum(v**p),1./p)/vstar;
return cons;
\end{python}
As before, one can calculate the value and gradient of the constraint via
\begin{python}
vc, gradvc = value_and_grad(maxLengthScaleConstraint)(rho)
\end{python}
The computed constraint and gradient can then be passed on to MMA. To illustrate, for the tip cantilever problem in Figure \ref{fig:Compliance_all_BC}(a), with the additional max length scale radius of $r = 30$, and maximum void volume at $0.75 \pi r^2$, the resulting topology is illustrated in Figure \ref{fig:tipCantilever_50vf_maxLS}.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.25,trim={10 90 0 70},clip]{figures/result/lengthScale/tipCantilever_50vf.pdf}%
\caption{Tip cantilever beam with length scale control. }
\label{fig:tipCantilever_50vf_maxLS}
\end{center}
\end{figure}
\section{Compliant Mechanism Design}
\label{sec:compliantMechanism}
We next illustrate the AuTO framework using compliant mechanisms (CMs) \cite{howell2013compliant}; see \cite{zhu2020design} for a comprehensive review on TO for CMs.
\subsection{Problem Formulation}
\label{sec:compmech_probFormulation}
Consider the displacement inverter addressed in the 104-line educational MATLAB code \cite{bendsoe2013topology}. The objective is to maximize the output displacement $u_{out}$ at the point of interest when a force $f_{in}$ is applied, as illustrated in Figure \ref{fig:Compliant_Mech_BC}. The spring constants are specified by the user to control the behavior of the CM.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale = 0.8]{figures/result/compliantMechanism/CM_fig_hal.pdf}
\caption{The displacement inverter compliant mechanism.}
\label{fig:Compliant_Mech_BC}
\end{center}
\end{figure}
This TO problem can be written as:
\begin{subequations}
\begin{align}
& \underset{\boldsymbol{\rho}}{\text{maximize}}
& &\boldsymbol{u}_{out}\label{eqnObj_compliantMech}\\
& \text{subject to}
& & \boldsymbol{K}(\boldsymbol{\rho})\boldsymbol{u} = \boldsymbol{f_{in}}\label{eqn:GoverningEqn_compliantMech}\\
& & & \sum_e \rho_e v_e \leq V^*\label{eqn:volcons_compliantMech}
\end{align}
\end{subequations}
The standard implementation entails computing the elemental sensitivity $\frac{\partial u_{out}}{\partial \rho_e}$ given by:
\begin{equation}
\frac{\partial u_{out}}{\partial \rho_e} = \boldsymbol{\lambda}^T \frac{\partial \boldsymbol{K} }{\partial \rho_e}\boldsymbol{u} = p\rho_e^{p-1}\boldsymbol{\lambda}_e^T \boldsymbol{K}_0\boldsymbol{u}_e
\end{equation}
where $\boldsymbol{\lambda}$ is the solution to the adjoint problem $\boldsymbol{K\lambda} = -\boldsymbol{l}$, and $\boldsymbol{l}$ is a vector with the value 1 at the degree of freedom corresponding to the output point, and zeros elsewhere. Observe that, in the manual method, two finite element solves (one to compute $\boldsymbol{u}$, and the other to compute $\boldsymbol{\lambda}$) are required per iteration to evaluate the sensitivities.
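The adjoint identity can be checked on a small hypothetical system: with $\boldsymbol{K\lambda} = -\boldsymbol{l}$, the product $\boldsymbol{\lambda}^T(\partial \boldsymbol{K}/\partial \rho)\boldsymbol{u}$ matches a finite-difference derivative of $u_{out} = \boldsymbol{l}^T\boldsymbol{u}$; reverse-mode AD performs the equivalent computation automatically. All numbers below are illustrative.
\begin{python}
import numpy as np

f = np.array([0.0, 1.0])
l = np.array([1.0, 0.0])                 # selects the output DOF
def K_of(rho):                           # hypothetical 2x2 stiffness
    return np.array([[3.0*rho, -1.0], [-1.0, 2.0]])

rho = 0.7
K = K_of(rho)
u = np.linalg.solve(K, f)
lam = np.linalg.solve(K, -l)             # adjoint solve: K lam = -l
dK = np.array([[3.0, 0.0], [0.0, 0.0]])  # dK/drho
sens_adjoint = lam @ dK @ u

h = 1e-6
uout = lambda r: l @ np.linalg.solve(K_of(r), f)
sens_fd = (uout(rho + h) - uout(rho - h))/(2*h)
print(sens_adjoint, sens_fd)             # the two values match
\end{python}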
\subsection{AuTO Framework}
The implementation in AuTO for CM design is similar to the compliance minimization problem, with two minor changes: (a) the stiffness matrix assembly includes the spring constants, and (b) the objective is the displacement at the output node.
The relevant code snippets are provided below.
\begin{python}
def assembleKWithSprings(E):
K = jnp.zeros((ndof,ndof))
sK = D0.flatten()[np.newaxis]*E.T.flatten()
K = jax.ops.index_add(K, idx, sK)
# springs at input and output nodes
K = jax.ops.index_add(K, jax.ops.index[nodeIn, nodeIn], kspringIn)
K = jax.ops.index_add(K, jax.ops.index[nodeOut, nodeOut], kspringOut)
return K;
\end{python}
\begin{python}
def CompliantMechanism(rho):
E = MaterialModel(rho)
K = assembleKWithSprings(E)
u = solveKuf(K)
return u[bc['nodeOut']]
\end{python}
To compute the objective and its gradient, we rely on JAX as follows.
\begin{python}
J, gradJ = value_and_grad(CompliantMechanism)(rho)
\end{python}
The design update using MMA is as per Section \ref{sec:compliance}. Using the problem specification in \cite{Bendsoe2003}, the resulting topology for the inverter is illustrated in Figure \ref{fig:output_compliant_mech_all}a; this is in agreement with the result in \cite{Bendsoe2003}.
\subsection{Advantages of AD}
For the design of CMs using TO, a key advantage of AD stems from the following observation \cite{zhu2020design}: ``\textit{no universally accepted objective formulation exists}''. For example, consider two additional objectives:
\begin{enumerate}
\item $\min: -\omega MSE + (1 - \omega)SE$ \cite{nishiwaki1998topology}
\item $\min: -MSE/SE$ \cite{saxena2000optimal}
\end{enumerate}
where $MSE = \boldsymbol{v}^T\boldsymbol{K}\boldsymbol{u}$ is the mutual strain energy, which describes the flexibility of the designed mechanism, $SE = \boldsymbol{u}^T\boldsymbol{K}\boldsymbol{u}$ is the strain energy, and $\boldsymbol{v}$ is the displacement field when a unit dummy load is applied at the degree of freedom corresponding to the output point.
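On small hypothetical vectors, the two formulations evaluate as follows; with AD, switching between them only requires editing the scalar objective expression.
\begin{python}
import numpy as np

# Illustrative 2-DOF quantities; not from an actual optimization.
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
u = np.array([1.0, 0.5])   # response to the input load
v = np.array([0.2, -0.4])  # response to the unit dummy load

MSE = v @ K @ u            # mutual strain energy
SE = u @ K @ u             # strain energy
w = 0.9
obj_ratio = -MSE/SE
obj_weighted = -w*MSE + (1 - w)*SE
print(obj_ratio, obj_weighted)  # -0.2, -0.12
\end{python}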
In the AuTO framework, one can easily explore various objectives as follows.
\begin{python}
def CompliantMechanism(rho):
E = MaterialModel(rho)
K = assembleKWithSprings(E)
u = solveKuf(K)
v = solve_dummy(K)
MSE = jnp.dot(v.T, jnp.dot(K,u))
SE = jnp.dot(u.T, jnp.dot(K,u));
J = -MSE/SE # or
# w = 0.9
# J = -w*MSE + (1 - w)*SE
return J
\end{python}
The topologies obtained with the two additional objectives are illustrated in Figure \ref{fig:output_compliant_mech_all}b and Figure \ref{fig:output_compliant_mech_all}c.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.6]{figures/result/compliantMechanism/cm_results_all.pdf}%
\caption{Displacement inverter design using three formulations at $V^* = 0.35$}
\label{fig:output_compliant_mech_all}
\end{center}
\end{figure}
For the objective of maximizing the output displacement, the computational costs of the analytical and AD methods are illustrated in Figure \ref{fig:timing_call_CM}. Observe that the analytical method is more expensive, since one must solve an adjoint problem explicitly; JAX, on the other hand, internally optimizes the code for computing sensitivities via AD.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5,trim={100 90 30 80},clip]{figures/result/compliantMechanism/compliantMechanism_cost_ADvsAnal.pdf}%
\caption{Computational cost of optimization using AD vs analytical implementation. }
\label{fig:timing_call_CM}
\end{center}
\end{figure}
\section{Design of Materials}
\label{sec:microstructuralDesign}
In this section, we replicate the educational article \cite{xia2015design} on the design of microstructures using AuTO. In particular, we consider (a) maximizing the bulk modulus, (b) maximizing the shear modulus, and (c) designing microstructures with a negative Poisson's ratio.
\subsection{Problem setup}
The mathematical formulation is as follows \cite{xia2015design}:
\begin{subequations}
\begin{align}
& \underset{\boldsymbol{\rho}}{\text{minimize}}
& &c( E_{ijkl}^H( \boldsymbol{\rho}))\label{eqnObj_microstr}\\
& \text{subject to}
& & \boldsymbol{K}(\boldsymbol{\rho})\boldsymbol{U}^{A(kl)} = \boldsymbol{F}^{(kl)}, k,l= 1,2,\ldots,d\label{eqn:GoverningEqn_microstr}\\
& & & \sum_e \rho_e v_e \leq V^*\label{eqn:volcons_microstr}
\end{align}
\end{subequations}
where the objective $c(E_{ijkl}^H)$ represents the material property we intend to minimize, $\boldsymbol{K}$ is the stiffness matrix, and $\boldsymbol{U}^{A(kl)}$ and $\boldsymbol{F}^{(kl)}$ are the displacement vector and the external force vector for test case $(kl)$, respectively. The test cases correspond to unit strain tests along the different directions, and $d$ is the spatial dimension.
\paragraph{}In 2D, maximization of bulk modulus corresponds to:
\begin{equation}\label{eq:bulk}
c = -(E_{1111}+E_{1122}+E_{2211}+E_{2222})
\end{equation}
and maximization of shear modulus corresponds to:
\begin{equation}\label{eq:shear}
c = -E_{1212}
\end{equation}
Finally, for the design of materials with negative Poisson's ratio, the following was proposed \cite{xia2015design}:
\begin{equation}\label{eq:poisson}
c = -E_{1122} - \beta^l(E_{1111}+E_{2222})
\end{equation}
where $\beta \in (0,1)$ is a user-defined fixed parameter and $l$ is the design iteration number. Observe that, in the manual method, computing the sensitivity requires solving for the adjoint \cite{xia2015design}.
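The role of the continuation weight $\beta^l$ is easy to visualize: it decays with the iteration number, so the relaxation terms fade and the objective approaches $-E_{1122}$. The snippet below uses purely illustrative values for $\beta$ and the homogenized entries.
\begin{python}
beta = 0.8                           # user-defined, in (0,1)
E1111, E2222, E1122 = 1.0, 1.0, 0.3  # sample homogenized entries

def c(l):
    return -E1122 - beta**l*(E1111 + E2222)

c_early, c_late = c(1), c(50)
print(c_early)  # -1.9: stiffness terms dominate early on
print(c_late)   # ~-0.3: essentially -E_1122 in later iterations
\end{python}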
\subsection{Implementation in AuTO}
In AuTO, the bulk modulus objective, for example, can be captured as follows:
\begin{python}
def MicrostructuralDesign(rho):
E = MaterialModel(rho)
K = assembleK(E)
Kr, F = computeSubMatrices(K)
U = performFE(Kr, F)
EMatrix = homogenizedMatrix(U, rho)
bulkModulus = -EMatrix['0_0']-EMatrix['0_1']-EMatrix['1_1']-EMatrix['1_0']
return bulkModulus
\end{python}
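As a concrete check of the objective expression, one can evaluate it on a hypothetical homogenized matrix, here an isotropic plane-stress constitutive matrix with $E = 1$ and $\nu = 0.3$ (illustrative values only):
\begin{python}
import numpy as np

E, nu = 1.0, 0.3
# Isotropic plane-stress constitutive matrix in Voigt notation.
C = E/(1 - nu**2)*np.array([[1, nu, 0],
                            [nu, 1, 0],
                            [0, 0, (1 - nu)/2]])
# c = -(E_1111 + E_1122 + E_2211 + E_2222)
bulk_obj = -(C[0, 0] + C[0, 1] + C[1, 0] + C[1, 1])
print(bulk_obj)  # ~ -2.857; minimizing c maximizes the bulk-type stiffness sum
\end{python}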
Other objectives can be similarly captured. Figure \ref{fig:method_fiberOptimization} illustrates the three microstructures obtained for the three objectives, at a volume fraction of 0.25.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.5,trim={10 240 20 80},clip]{figures/result/microstrOpt/microstrOpt.pdf}%
\caption{Maximization of bulk modulus, shear modulus and design of material with negative Poisson's ratio, with $v_f^* = 0.25$.}
\label{fig:method_fiberOptimization}
\end{center}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
\paragraph{}In this paper, we demonstrated the simplicity and benefits of AD in TO. Possible extensions include multi-physics \cite{alexandersen2020review, semmler2018material, deng2017TO_thermoelasticBuckling} and non-linear problems \cite{wang2014design,clausen2015topology}. In the current implementation, direct solvers were employed; for large-scale problems, sparse preconditioned iterative solvers \cite{andersen2013cvxopt,Yadav2013AssemblyFree} will be critical (these are not yet fully supported by JAX). One advantage of the manual approach (that is lost in AD) is that the derived expressions can provide key insights into the problem.
\section*{Replication of results}
The Python code pertinent to this paper is available at \href{https://github.com/UW-ERSL/AuTO}{https://github.com/UW-ERSL/AuTO}.
\section*{Acknowledgments}
The authors would like to acknowledge the support of the National Science Foundation through grant CMMI 1561899.
\section*{Compliance with ethical standards}
The authors declare that they have no conflict of interest.
\bibliographystyle{unsrt}
\title{Computing Area-Optimal Simple Polygonizations}
\begin{abstract}
We consider methods for finding a simple polygon of minimum (\textsc{Min-Area}) or maximum (\textsc{Max-Area}) possible area for a given set of points in the plane. Both problems are known to be NP-hard; at the center of the recent CG Challenge, practical methods have received considerable attention. However, previous methods focused on heuristics, with no proof of optimality. We develop exact methods, based on a combination of geometry and integer programming. As a result, we are able to solve instances of up to $n=25$ points to provable optimality. While this extends the range of solvable instances by a considerable amount, it also illustrates the practical difficulty of both problem variants.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
While the classic geometric Traveling Salesman Problem (TSP) is to find a (simple)
polygon with a given set of vertices that has shortest perimeter, it is natural
to look for a simple polygon with a given set of vertices that minimizes
another basic geometric measure: the enclosed area.
The problem {\sc Min-Area} asks for a simple polygon with minimum enclosed
area, while {\sc Max-Area} demands one of maximum area; see Figure~\ref{fig:example}
for an illustration.
Both problem variants were shown to be $\mathcal{NP}$-complete by Fekete \cite{f-gtsp-92,fekete2000simple,fekete1993area},
who also showed that no {polynomial-time approximation scheme} (PTAS) exists for the \textsc{Min-Area} problem and
gave a $\frac{1}{2}$-approximation algorithm for \textsc{Max-Area}.
\begin{figure}[h]
\centering
\includegraphics[trim={25mm 16mm 21mm 16mm},clip,width=.35\textwidth]{figures/sample_point_set_min_max.pdf}
\vspace{.2cm}
\includegraphics[trim={25mm 16mm 21mm 16mm},clip,width=.35\textwidth]{figures/polygon_min.pdf}\hfil
\includegraphics[trim={25mm 16mm 21mm 16mm},clip,width=.35\textwidth]{figures/polygon_max.pdf}
\caption{(Top) A set of 20 points. (Bottom Left) \textsc{Min-Area} solution. (Bottom Right) \textsc{Max-Area} solution.}
\label{fig:example}
\end{figure}
\subsection{Related Work}
Most previous practical work has focused on finding heuristics for both problems.
Taranilla et al.~\cite{taranilla2011approaching} proposed
three different heuristics.
Peethambaran~\cite{peethambaran2015randomized,peethambaran2016empirical} later proposed
randomized and greedy algorithms for solving \textsc{Min-Area}, as well as the
$d$-dimensional variants of both \textsc{Min-Area} and \textsc{Max-Area}.
Considerable recent attention and progress was triggered by the 2019 CG Challenge,
which asked contestants to find good solutions for a wide spectrum of benchmark instances with up to
100,000 points; details are described in the survey by Demaine et al.~\cite{challenge19}.
Despite this focus, there has only been a limited amount of work on computing
provably optimal solutions for instances of interesting size. The only
notable exception is by Fekete et al.~\cite{fekete2015area}, who were able to solve
all instances of \textsc{Min-Area} with up to $n=14$ and some with up to $n=16$ points, as well as
all instances of \textsc{Max-Area} with up to $n=17$ and some with up to $n=19$ points.
This illustrates the inherent practical difficulty, which differs considerably from
the closely related TSP, for which even straightforward IP-based approaches can yield provably optimal solutions
for instances with hundreds of points, and sophisticated methods can solve instances with tens of thousands
of points.
\subsection{Our Results}
We present a systematic study of exact methods for \textsc{Min-Area} and \textsc{Max-Area} polygonizations.
We show that a number of careful enhancements can help to extend the range of instances
that can be solved to provable optimality, with different approaches working better
for the two problem variants. On the other hand, our work shows that the problems appear
to be practically harder than geometric optimization problems such as the Euclidean TSP.
\section{Context and Preliminaries}
\label{sec:preliminaries}
A detailed description of background, context and further connections
can be found in the survey by Demaine et al.~\cite{challenge19}.
\section{Tools}
\label{sec:tools}
We considered two models based on integer programming: an \emph{edge-based formulation}
(described in Section~\ref{sec:edge}) and a \emph{triangle-based formulation}
(described in Section~\ref{sec:triangle}). In addition, we developed a number
of further refinements and improvements (described in Section~\ref{sec:enhance}).
\subsection{Edge-Based Formulation}
\label{sec:edge}
The first formulation is based on considering \emph{directed} edges of the polygon boundary.
We denote the boundary of a polygon ${\mathcal P}$ by $\partial {{\mathcal P}}$.
For two points $s_i, s_j \in S$, we consider the two directed half-edges
$e_{ij}$ and $e_{ji}$. Let $E^r$ be the set of half-edges between the
points of $S$. As shown in Figure~\ref{fig:reference_point_approach:area_calculation},
the area $A_{{\mathcal P}}$ of a polygon ${\mathcal P}$ can be computed by adding the signed triangle areas $f_e$
that are formed by directed half-edges $e$ and an arbitrary, fixed reference point $r$:
$f_e$ is positive if $e$ and $r$ form a triangle for which $e$ has a counterclockwise orientation
along its boundary, and negative for a clockwise orientation. Therefore, we have to choose an appropriate
set $E_{{\mathcal P}}\subseteq E^r$ that optimizes the total area, as follows.
\begin{equation}
\label{eq:reference_point_approach:area_calculation}
A_{{\mathcal P}} = A(E_{{\mathcal P}}) = \sum_{e \in E_{{\mathcal P}}} f_e
\end{equation}
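The signed-area decomposition in \eqref{eq:reference_point_approach:area_calculation} can be checked in a few lines of Python. The following sketch is our own illustration (not the authors' implementation); the function names are chosen for exposition:

```python
def signed_triangle_area(r, p, q):
    """Signed area of triangle (r, p, q): positive if r, p, q are counterclockwise."""
    return ((p[0] - r[0]) * (q[1] - r[1]) - (q[0] - r[0]) * (p[1] - r[1])) / 2

def polygon_area(vertices, r=(0.0, 0.0)):
    """Area of a simple polygon given in counterclockwise order, computed as the
    sum of signed triangle areas f_e over the directed boundary edges e."""
    n = len(vertices)
    return sum(signed_triangle_area(r, vertices[i], vertices[(i + 1) % n])
               for i in range(n))
```

For a counterclockwise unit square with $r=(0,0)$, the four edge triangles contribute $0, \frac12, \frac12, 0$, summing to the correct area $1$; the result is independent of the choice of $r$, since the contributions of $r$ telescope around the closed boundary.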
\begin{figure}[h]
\centering
\begin{subfigure}[b]{.5\linewidth}
\centering
\resizebox{.6\columnwidth}{!}{\input{figures/reference_point_approach_polygon_triangles}}%
\caption{Triangles of the polygon.}
\label{fig:reference_point_approach:polygon_triangles}
\end{subfigure}%
\begin{subfigure}[b]{.5\linewidth}
\centering
\resizebox{.6\columnwidth}{!}{\input{figures/reference_point_approach_polygon_triangles_positive}}%
\caption{Positive edge triangles.}
\label{fig:reference_point_approach:polygon_positive}
\end{subfigure}%
\begin{subfigure}[b]{.5\textwidth}
\centering
\resizebox{.6\columnwidth}{!}{\input{figures/reference_point_approach_polygon_triangles_negative}}%
\caption{Negative edge triangles.}
\label{fig:reference_point_approach:polygon_negative}
\end{subfigure}\begin{subfigure}[b]{.5\textwidth}
\centering
\resizebox{.6\columnwidth}{!}{\input{figures/reference_point_approach_polygon_triangles_sum}}%
\caption{Calculated difference between (b) and (c)}
\label{fig:reference_point_approach:polygon_area}
\end{subfigure}
\caption[Area calculation in the boundary based approach]{Area calculation of a polygon using a reference point $r$ (red point)}
\label{fig:reference_point_approach:area_calculation}
\end{figure}
This gives rise to an integer program in which the choice of half-edges $e=(i,j)$ is modeled by 0-1 variables $z_e=z_{ij}$.
\begin{equation}
\{\min , \max\} \sum_{e \in E^r} z_{e} \cdot f_e \label{ip:reference_point_approach:objective}\\
\end{equation}
\begin{align}
\vphantom{\sum_{i=1}^{m}} \forall s_i \in S: \qquad& \sum_{(j,i) \in \delta^+(s_i)} z_{ji} = 1 \label{ip:reference_point_approach:in_degree}\\
\vphantom{\sum_{i=1}^{m}} \forall s_i \in S: \qquad& \sum_{(i,j) \in \delta^-(s_i)} z_{ij} = 1 \label{ip:reference_point_approach:out_degree}\\
\vphantom{\sum_{i=1}^{m}} \forall e=\{i,j\} \in E: \qquad& z_{ij} + z_{ji} \leq 1 \label{ip:reference_point_approach:one_of_edges}\\
\vphantom{\sum_{i=1}^{m}} \forall \text{intersecting}\quad \{i,j\}, \{k,l\} \in E: \qquad& z_{ij} + z_{ji} + z_{kl} + z_{lk} \leq 1 \label{ip:reference_point_approach:intersections}\\
\vphantom{\sum_{i=1}^{m}} (\forall \text{slabs}\quad L) (\forall m=1,\ldots,\vert L \vert): \qquad& \sum_{i=1}^{m} z_{e_{i_L}^{lr}}-z_{e_{i_L}^{rl}} \begin{array}{l}
\leq 1\\
\geq 0
\end{array} \label{ip:reference_point_approach:slabs}\\
\vphantom{\sum_{i=1}^{m}} \forall \emptyset \neq D \subsetneq S: \qquad& \begin{array}{l}\sum_{(k,l) \in \delta^-(D)} z_{kl} \geq 1 \\ \sum_{(k,l) \in \delta^+(D)} z_{kl} \geq 1\end{array} \label{ip:reference_point_approach:subtours}\\
\forall \{i,j\} \in E: \qquad& z_{ij},z_{ji} \in \{0,1\}
\end{align}
The \emph{objective function} \eqref{ip:reference_point_approach:objective} arises from signed triangle areas, as described.
The constraints \eqref{ip:reference_point_approach:in_degree} and
\eqref{ip:reference_point_approach:out_degree} ensure that each point $s_i \in
S$ has one outgoing edge and one incoming edge in the resulting polygon.
Furthermore, constraints \eqref{ip:reference_point_approach:one_of_edges}
guarantee that, for each possible edge, at most one of its two half-edges can be in
$\partial {\mathcal P}$. Intersecting edges in the resulting polygon are excluded by
constraints~\eqref{ip:reference_point_approach:intersections}.
\begin{figure}[h]
\centering
\resizebox{5.5cm}{!}{\input{figures/reference_point_approach_slabs}}%
\caption{Visualization of slabs in a polygon}
\label{fig:reference_point_approach:slabs}
\end{figure}
\begin{figure}[h]
\begin{subfigure}[b]{.5\textwidth}
\centering
\resizebox{2.5cm}{!}{\input{figures/reference_point_approach_slabs_allowed}
\end{subfigure}\begin{subfigure}[b]{.5\textwidth}
\centering
\resizebox{2.5cm}{!}{\input{figures/reference_point_approach_slabs_not_allowed}
\end{subfigure}
\caption[Visualization of slab constraints]{Visualization of slab constraints. Left side shows a valid configuration. The right side shows a violated slab constraint.}
\label{fig:reference_point_approach:slabs_allowed}
\end{figure}
The next set of inequalities \eqref{ip:reference_point_approach:slabs} are called \emph{slab constraints};
they ensure that the polygon is oriented in a counterclockwise manner.
A \emph{slab} $L$ is a vertical strip bounded by the $x$-coordinates of two points that are
consecutive in the $x$-order of the point set. Figure~\ref{fig:reference_point_approach:slabs} shows the slabs of a given
point set. The edges crossing slab $L$ are ordered by their $y$-coordinates at the midpoint
$c_L=\frac{c^{x_{max}}_L+c^{x_{min}}_L}{2}$, where $c^{x_{max}}_L$ ($c^{x_{min}}_L$, resp.) is the $x$-coordinate of the right (left, resp.) boundary of slab $L$.
Figure~\ref{fig:reference_point_approach:slabs} illustrates $c_L$ with the
blue arrow within the indicated slab. Now the bottommost chosen edge has
to be oriented from left to right and the topmost one from right
to left, while chosen edges in between have to alternate in their
direction. This is enforced by the slab constraints
for all possible sums $\sum_{i=1}^{m} z_{e_{i_L}^{lr}}-z_{e_{i_L}^{rl}}$ with
$m=1,\dots,\vert L \vert$, where $e_{i_L}^{lr}$ ($e_{i_L}^{rl}$, resp.) denotes the $i$-th bottom most half-edge along slab $L$ going from left to right (from right to left, resp.).
Figure~\ref{fig:reference_point_approach:slabs_allowed} shows two possible
configurations of a slab. The left one is a valid slab and satisfies all
inequalities \eqref{ip:reference_point_approach:slabs}, while the right one
violates the constraint for $m=5$:
$$ \underbrace{e^{lr}_1-e_1^{rl}}_{=1} + \underbrace{e^{lr}_2-e_2^{rl}}_{=-1} + \underbrace{e^{lr}_3-e_3^{rl}}_{=0} + \underbrace{e^{lr}_4-e_4^{rl}}_{=0} +
\underbrace{e^{lr}_5-e_5^{rl}}_{=-1} = -1 \ngeq 0$$
Note that we have to add inequalities for all $m = 1,\dotsc,\vert L \vert$; for the previous example, all other inequalities are satisfied.
\begin{align*}
e^{lr}_1-e_1^{rl} &&&&&&&&&&&=1 \begin{array}{l}\geq 0\\\leq 1\end{array} \quad m=1\\
e^{lr}_1-e_1^{rl} &+
&e^{lr}_2-e_2^{rl} &&&&&&&&&= 0\begin{array}{l}\geq 0\\\leq 1\end{array} \quad m=2\\
e^{lr}_1-e_1^{rl} &+
&e^{lr}_2-e_2^{rl} &+
&e^{lr}_3-e_3^{rl}
&&&&&&&= 0\begin{array}{l}\geq 0\\\leq 1\end{array} \quad m=3\\
e^{lr}_1-e_1^{rl} &+
&e^{lr}_2-e_2^{rl} &+
&e^{lr}_3-e_3^{rl} &+
&e^{lr}_4-e_4^{rl}
&&&&&= 0\begin{array}{l}\geq 0\\\leq 1\end{array} \quad m=4\\
\underbrace{e^{lr}_1-e_1^{rl}}_{=1} &+ &\underbrace{e^{lr}_2-e_2^{rl}}_{=-1} &+ &\underbrace{e^{lr}_3-e_3^{rl}}_{=0} &+ &\underbrace{e^{lr}_4-e_4^{rl}}_{=0} &+
&\underbrace{e^{lr}_5-e_5^{rl}}_{=-1} &+
&\underbrace{e^{lr}_6-e_6^{rl}}_{=1} &= 0\begin{array}{l}\geq 0\\\leq 1\end{array} \quad m=6
\end{align*}
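The alternation condition enforced by \eqref{ip:reference_point_approach:slabs} is easy to state procedurally. The following Python sketch is our own illustration: the encoding of each potential edge as $+1$, $-1$, or $0$ is an assumption made for exposition, and the function simply checks all prefix sums of one slab at once:

```python
def slab_feasible(directions):
    """Check the slab constraints for a single slab L.

    directions[i] encodes the i-th bottommost potential edge of the slab:
    +1 if it is chosen left-to-right (z = 1 for e^lr), -1 if it is chosen
    right-to-left, and 0 if it is not chosen.  Every prefix sum
    (the constraint for m = 1, ..., |L|) must stay within [0, 1].
    """
    s = 0
    for d in directions:
        s += d
        if s < 0 or s > 1:
            return False
    return True
```

The example from the text, with edge values $1,-1,0,0,-1,1$, fails at $m=5$, where the prefix sum drops to $-1$.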
The last set of constraints \eqref{ip:reference_point_approach:subtours} are
\emph{subtour constraints}; they ensure that all non-trivial subsets of vertices
have at least one incoming and one outgoing edge.
These constraints are very common for many related
optimization problems, including (in undirected form) for the TSP.
Overall, the size of the resulting IP is as follows.
\begin{itemize}
\item There is a total of $2n = O(n)$ point degree constraints \eqref{ip:reference_point_approach:in_degree} and \eqref{ip:reference_point_approach:out_degree}, two for each vertex.
\item There is a total of $\binom{n}{2}=O(n^2)$ half-edge constraints \eqref{ip:reference_point_approach:one_of_edges}, one for each (undirected) edge.
\item There is a total of $O(n^4)$ intersections constraints \eqref{ip:reference_point_approach:intersections}, one for each pair of intersecting edges.
\item There is a total of $O(n^3)$ slab constraints \eqref{ip:reference_point_approach:slabs}, one for each combination of one of the $n-1$ slabs and the $O(n^2)$ possible edges crossing it.
\item There is a total of $O(2^n)$ subtour constraints \eqref{ip:reference_point_approach:subtours}, one for each non-trivial subset of vertices.
\end{itemize}
In the practical implementation, we cannot add all subtour constraints, and we try
to avoid adding intersection constraints before starting the branch-and-cut
algorithm. While solving the IP, we get access to partial solutions and only add
new constraints when necessary. Therefore, we start with a slim IP with only $O(n +
n^2 + n^3) = O(n^3)$ constraints and $O(n^2)$ variables; during the branch-and-cut
algorithm, at most $O(2^n)$ further constraints are added.
\subsection{Triangle-Based Formulation}
\label{sec:triangle}
An alternative is the {triangle-based formulation}, which considers the set $T(P)$ of
possibly $\binom{n}{3}$ many empty triangles of a point set $P$; see Figure~\ref{fig:triangulation_approach:max_triangles}
for an illustration.
Making use of the fact that a simple polygon with $n$ vertices can be triangulated into $n-2$ empty
triangles with pairwise non-intersecting interiors, we get the following IP formulation, in which the presence of
an empty triangle $\triangle$ with unsigned area $f_\triangle$ is described by a 0-1 variable $x_{\scaleto{\triangle}{5pt}}$.
\begin{figure}[t]
\centering
\input{figures/triangulation_approach_max_triangles}
\caption[Empty triangle amount bound]{A set of five points and its ten empty triangles.}
\label{fig:triangulation_approach:max_triangles}
\end{figure}
\begin{equation}
\{\min , \max\} \sum_{{\scaleto{\triangle}{5pt}} \in T(P)} f_{\scaleto{\triangle}{5pt}} \cdot x_{\scaleto{\triangle}{5pt}} \label{ip:triangulation_approach:objective}\\
\end{equation}
\begin{align}
\vphantom{\sum_{i=1}^{n}} \qquad& \sum_{{\scaleto{\triangle}{5pt}} \in T} x_{\scaleto{\triangle}{5pt}} = n-2 \label{ip:triangulation_approach:triangle_count}\\
\vphantom{\sum_{i=1}^{n}} \forall s_i \in S: \qquad& \sum_{{\scaleto{\triangle}{5pt}} \in \delta(s_i)} x_{\scaleto{\triangle}{5pt}} \geq 1 \label{ip:triangulation_approach:in_each_point}\\
\vphantom{\sum_{i=1}^{m}} \forall \text{intersecting}\quad \triangle_i, \triangle_j \in T(P): \qquad& x_{{\scaleto{\triangle}{5pt}}_i} + x_{{\scaleto{\triangle}{5pt}}_j} \leq 1 \label{ip:triangulation_approach:intersections}\\
\vphantom{\sum_{i=1}^{n}} \forall \emptyset \neq D \subsetneq T(P), \vert D \vert \leq n-3: \qquad& \sum_{{\scaleto{\triangle}{5pt}} \in D} x_{\scaleto{\triangle}{5pt}} - \sum_{{\scaleto{\triangle}{5pt}} \in \delta(D)} x_{\scaleto{\triangle}{5pt}}\leq \vert D \vert - 1 \label{ip:triangulation_approach:subtours}\\
\vphantom{\sum_{i=1}^{n}} \forall \triangle \in T(P): \qquad& x_{\scaleto{\triangle}{5pt}} \in \{0,1\}
\end{align}
The objective function \eqref{ip:triangulation_approach:objective} is the sum
of the chosen triangles' areas.
The triangle constraint
\eqref{ip:triangulation_approach:triangle_count} ensures that we choose exactly $n-2$
triangles, which is the number of triangles in any triangulation of a simple polygon with $n$ vertices.
Furthermore, point constraints \eqref{ip:triangulation_approach:in_each_point} guarantee that a solution has
at least one adjacent triangle at each point $s_i \in S$.
Finally, intersection constraints \eqref{ip:triangulation_approach:intersections} ensure that we only
select triangles with disjoint interiors. As shown in Figure~\ref{fig:triangulation_approach:intersectons},
these are indeed necessary, even when minimizing total area.
\begin{figure}[h!]
\centering
\input{figures/triangulation_approach_intersections}
\caption[Triangulations of a point set]{Triangulations of a point set with (right) and without (left) intersection constraints. The difference of both areas is the area difference of both red triangles.}
\label{fig:triangulation_approach:intersectons}
\end{figure}
Finally, the subtour constraints \eqref{ip:triangulation_approach:subtours} ensure that the set of selected triangles
forms a simple polygon:
Either all triangles of $D$ are part of the solution and at least one new triangle must be adjacent to the boundary of $D$ (i.e., a triangle of $\delta(D)$), or one triangle of $D$ is not part of the solution (see Fig.~\ref{fig:subtour_constraint} for a visualization).
\begin{figure}
\centering
\includegraphics[scale=.6]{./figures/triangle_subtour.pdf}
\caption{Visualization of the subtour constraint.
Blue triangles represent the set $D$.
To satisfy the subtour constraint, either one of the red or orange triangles must be added, or one of the blue triangles must be removed.}
\label{fig:subtour_constraint}
\end{figure}
Overall, the size of the resulting IP is as follows.
\begin{itemize}
\item There is a single triangle constraint of type \eqref{ip:triangulation_approach:triangle_count}.
\item There are $O(n)$ point constraints \eqref{ip:triangulation_approach:in_each_point}, one for each point $s_i \in S$.
\item There are $O(n^6)$ intersection constraints \eqref{ip:triangulation_approach:intersections}, one for each intersecting pair of the $O(n^3)$ triangles.
\item There are $O(2^{n^3})$ subtour constraints \eqref{ip:triangulation_approach:subtours}, one for each nonempty $D\subsetneq T(P)$ with $\vert D \vert \leq n-3$.
\end{itemize}
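To give a feel for the $O(n^3)$ triangle count entering these numbers, the empty triangles of a small point set can be enumerated by brute force. The following Python sketch is our own illustration, assuming points in general position apart from explicitly skipped collinear triples:

```python
from itertools import combinations

def cross(o, a, b):
    # Twice the signed area of triangle (o, a, b)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def strictly_inside(p, a, b, c):
    # p lies strictly inside triangle (a, b, c) iff all three
    # orientation tests have the same strict sign
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (d1 > 0 and d2 > 0 and d3 > 0) or (d1 < 0 and d2 < 0 and d3 < 0)

def empty_triangles(points):
    """All non-degenerate triangles with no other point strictly inside."""
    result = []
    for a, b, c in combinations(points, 3):
        if cross(a, b, c) == 0:
            continue  # collinear triple, not a triangle
        if any(strictly_inside(p, a, b, c)
               for p in points if p not in (a, b, c)):
            continue
        result.append((a, b, c))
    return result
```

For four points in convex position, all $\binom{4}{3}=4$ triangles are empty; replacing one hull point by an interior point removes the triangle that contains it.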
Because of the enormous number of subtour constraints and the fact that most of the intersection constraints are not needed,
both types are dynamically added during the branch-and-cut algorithm.
\subsection{Enhancing the Integer Programs}
\label{sec:enhance}
Given the considerable size of the described IP formulations, we employed a number of enhancements to improve efficiency.
\subsubsection{Convex Hull}
\label{sec:convex_hull}
The area of the convex hull is an upper bound for every polygonization of a given point set.
Its combinatorial structure allows omitting a number of constraints:
The edge $e$ between two non-adjacent points on the boundary of the convex hull
divides the point set into two separate pieces; because the boundary of any polygonization
must connect both pieces without crossing its own edges, $e$ cannot appear on the boundary.
We can therefore remove edges between two non-adjacent points of the convex hull from our set of variables.
\subsubsection{Initial Solutions}
When solving an optimization problem, it helps to start with an initial
integer solution, corresponding to a polygon ${\mathcal P}_{initial}$. Until a better
solution has been found, the area of ${\mathcal P}_{initial}$ helps the bounding process
to cut off subtrees of possible solutions.
Starting with a solution that is close to the optimum can considerably accelerate the
computation. As described in the survey article~\cite{challenge19}, there is an
approximation method by Fekete~\cite{f-gtsp-92} for the \textsc{Max-Area} problem, which guarantees a
solution that is at least $\frac{1}{2}$ as large as the optimal one.
For the minimization problem, we use a simple greedy heuristic without performance guarantee,
based on a heuristic of Taranilla et al.~\cite{taranilla2011approaching}.
The algorithm modifies an existing polygon ${\mathcal P}$
until all points are on the boundary.
\begin{enumerate}
\item Compute the convex hull of the point set $S$, resulting in the initial polygon ${\mathcal P}$.
\item Among all points of $S$ inside ${\mathcal P}$ that are not yet part of ${\mathcal P}$, choose a point $s_1$ that forms with some edge $\{s_2,s_3\}$ of ${\mathcal P}$ an empty triangle that does not intersect $\partial {\mathcal P}$ and has maximum area. If no such point $s_1$ exists, we are finished, because ${\mathcal P}$ is a simple polygon that has all points of $S$ on its boundary.
\item Remove edge $\{s_2,s_3\}$ from ${\mathcal P}$ and add edges $\{s_1,s_2\}, \{s_1,s_3\}$. Repeat Step 2.
\end{enumerate}
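The three steps above can be sketched in compact Python. This is a self-contained illustration of the greedy idea, not the authors' code; the monotone-chain hull routine and all helper names are our own choices:

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Monotone-chain convex hull, returned in counterclockwise order."""
    pts = sorted(set(points))
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def in_triangle(p, a, b, c):
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    neg = d1 < 0 or d2 < 0 or d3 < 0
    pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (neg and pos)          # inside or on the boundary

def properly_cross(a, b, c, d):
    # Strict (proper) crossing of segments ab and cd
    d1, d2 = cross(c, d, a), cross(c, d, b)
    d3, d4 = cross(a, b, c), cross(a, b, d)
    return d1 * d2 < 0 and d3 * d4 < 0

def greedy_min_area(points):
    P = convex_hull(points)                        # step 1
    inside = [p for p in points if p not in P]
    while inside:                                  # step 2
        best = None
        for s1 in inside:
            for i in range(len(P)):
                s2, s3 = P[i], P[(i + 1) % len(P)]
                if any(q not in (s1, s2, s3) and in_triangle(q, s1, s2, s3)
                       for q in points):
                    continue                       # triangle not empty
                boundary = [(P[j], P[(j + 1) % len(P)]) for j in range(len(P))]
                if any(properly_cross(s1, s2, a, b) or properly_cross(s1, s3, a, b)
                       for a, b in boundary):
                    continue                       # new edges cross the boundary
                area = abs(cross(s1, s2, s3)) / 2
                if best is None or area > best[0]:
                    best = (area, s1, i)
        area, s1, i = best                         # step 3: replace {s2,s3}
        P.insert(i + 1, s1)                        # by {s2,s1} and {s1,s3}
        inside.remove(s1)
    return P
```

For a unit square with its center point, one greedy step cuts a triangle of area $\frac14$ out of the hull, giving a five-vertex polygon of area $\frac34$.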
\begin{figure}[h!]
\centering
\input{figures/greedy_map}
\caption[The \textsc{Greedy Min-Area} algorithm]{A possible step two of the \textsc{Greedy Min-Area} algorithm}
\label{fig:greedy_map}
\end{figure}
Figure~\ref{fig:greedy_map} illustrates a possible second step of the
\textsc{Greedy Min-Area} algorithm. The first step takes $O(n \log
n)$ time, because we need to compute a convex hull. For the second step, we need to
consider each of the up to $n$ edges of ${\mathcal P}$ and form triangles with the possibly
$n$ points inside ${\mathcal P}$; for each candidate triangle, we need to check whether it is empty and
whether it crosses any of the $n$ edges of ${\mathcal P}$. The second
step has to be carried out for every point that is not part of the convex hull, i.e., up to
$n$ times. This leads to an overall complexity of $O(n^4)$. The third step can
be done in constant time $O(1)$. Overall, this yields a complexity
of $O(n\log n + n^4) = O(n^4)$ for \textsc{Greedy Min-Area}. If we instead choose triangles of
minimum area in step two of the algorithm, we get a heuristic for the maximization problem;
we call this variant \textsc{Greedy Max-Area}.
\subsubsection{Intersections}
\paragraph{Intersection Cliques}
If every pair $o_i,o_j$ in a given set $C=\{o_1,\ldots,o_k\}$ of objects (which may be edges or triangles)
intersects, $C$ forms an \emph{intersection clique}. Clearly, this allows replacing the $\Theta(k^2)$
pairwise intersection constraints by a single one, as follows.
$$\sum_{i=1}^k x_i \leq 1.$$
Because finding maximum-cardinality cliques is NP-hard, we simply use
\emph{maximal cliques}.
We only add intersection constraints incrementally, i.e., whenever we get a new integer solution,
we add new violated intersection constraints.
Because it is very time-consuming to compute all maximal
cliques in every iteration, we grow cliques by consecutively adding
objects to an existing clique $C$ until no further object can be found that
intersects all objects in $C$. We start with $C=\{o_1,o_2\}$, where $o_1,o_2$ are
two intersecting objects of the solution. We then try to add further objects of the
current solution to the clique; afterwards, all
other objects are considered, until no object
can be added to $C$. This produces good cliques in practice.
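The greedy clique growth just described can be sketched as follows. This Python illustration is our own; the intersection predicate is passed in generically, and the crossing-segments example in the test is a hypothetical instance:

```python
def grow_maximal_clique(objects, intersects, o1, o2):
    """Grow an intersection clique from two intersecting objects o1, o2:
    repeatedly add any object that intersects every current member of the
    clique, until no such object remains.  The result is maximal (no object
    can be added), but not necessarily of maximum cardinality."""
    clique = [o1, o2]
    changed = True
    while changed:
        changed = False
        for o in objects:
            if o not in clique and all(intersects(o, c) for c in clique):
                clique.append(o)
                changed = True
    return clique
```

In the branch-and-cut loop, `objects` would first range over the objects of the current integer solution and afterwards over all remaining objects, mirroring the two phases described above.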
\paragraph{Halfspace Constraints}
\label{sec:halfspace_constraints}
For the triangle-based approach,
we introduce the concept of \emph{halfspace constraints}.
The goal is to reduce the number of intersections that need to be added during the optimization process by
excluding obvious intersections from the beginning.
Figure~\ref{fig:intersections_triangulation_example} shows the three types of
intersections that may occur: two intersecting triangles may share no point, a single point, or two points.
\begin{figure}[h!]
\centering
\includegraphics[scale=.7]{./figures/triangle_intersections.pdf}
\caption{All three types of possible intersections in the triangle-based approach.}
\label{fig:intersections_triangulation_example}
\end{figure}
A case deserving special attention is that of two triangles sharing an edge:
two triangles $\triangle_i, \triangle_j$ that share an edge $e$ intersect if their two remaining points
lie on the same side of $e$.
Figure~\ref{fig:intersections_triangulation_halfspace} illustrates this idea
for three triangles. Points $s_i,s_j \in S$ lie on the same side of $e$;
therefore, the triangles they form with $e$ intersect. For points on the
opposite side of $e$, such as $s_k$, the induced triangles with $e$ cannot
intersect those of $s_i$ and $s_j$.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.7]{./figures/intersection_triangle_edge.pdf}
\caption{Intersections in the triangle-based approach with triangles sharing an edge}
\label{fig:intersections_triangulation_halfspace}
\end{figure}
For every possible edge $e$ of the triangulation of the optimal
polygon, at most one chosen triangle may lie on each side of the edge. We
regard $e$ as a hyperplane dividing the plane into two halfspaces;
each halfspace may contain at most one chosen triangle $\triangle$ that has $e$ as an edge.
This allows us to formulate \emph{halfspace constraints}, as follows.
\begin{equation}\label{eq:halfspace_constraints}
\forall e=\{s_i,s_j\} \text{ with } s_i \neq s_j \land s_i,s_j \in S:
\begin{array}{l}
\sum_{{\scaleto{\triangle}{5pt}} \in H^+(e)} x_{\scaleto{\triangle}{5pt}} \leq 1\\
\sum_{{\scaleto{\triangle}{5pt}} \in H^-(e)} x_{\scaleto{\triangle}{5pt}} \leq 1
\end{array}
\end{equation}
$H^+$ and $H^-$ are the two halfspaces induced by $e$ (see Figure~\ref{fig:intersections_triangulation_halfspace}).
Because chords of the convex hull (CH-chords) cannot lie on the boundary of the polygonization,
we can use the \emph{halfspace constraints} to formulate a condition for triangles that
contain such edges: for a CH-chord $e$, the sums over both halfspaces
have to be equal, ensuring that either no triangle containing $e$ is part of
the solution, or there is a triangle on both sides of the
edge.
\begin{equation}
\forall \text{CH-chords } e:
\sum_{{\scaleto{\triangle}{5pt}} \in H^+(e)} x_{\scaleto{\triangle}{5pt}} = \sum_{{\scaleto{\triangle}{5pt}} \in H^-(e)} x_{\scaleto{\triangle}{5pt}}
\end{equation}
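Setting up the halfspace constraints only requires classifying, for each edge $e$, the possible third points by their side of $e$. The following Python sketch is our own illustration (it ignores the emptiness requirement on triangles, which would be filtered separately):

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def halfspace_triangles(points, e):
    """For an edge e = (a, b), return the candidate triangles containing e on
    each side: H^+ holds triangles (a, b, p) with p strictly left of e, H^-
    those with p strictly right.  Each list yields one constraint  sum x <= 1;
    for a CH-chord, the two sums are additionally forced to be equal."""
    a, b = e
    h_plus  = [(a, b, p) for p in points if cross(a, b, p) > 0]
    h_minus = [(a, b, p) for p in points if cross(a, b, p) < 0]
    return h_plus, h_minus
```

For the diagonal of a unit square (a CH-chord), each side holds exactly one candidate triangle, so the equality constraint forces either both or neither into the solution.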
\subsubsection{Branching on Variables}\label{sec:branch_on_variables}
Another fine-tuning trick is based on a simple concept that makes use of the CPLEX
callback API. As stated before, we may not have added all intersection constraints
from the beginning, which leads to many interim solutions with intersecting
edges.
Imagine a branching in which two branches are created: subtree $T_1$ sets
$x_i=1$ and $T_0$ sets $x_i=0$. We are interested in the branch $T_1$: in
that subtree, object $o_i$ is part of the solution for all child nodes, so
no intersecting object may be set to one. In order to prevent the solver from unnecessary branching, we set all
intersecting entities to zero when branching on variable $x_i$. This leads to
fewer intersection constraints and less branching in later stages.
For the triangle-based approach, we can make use of another
characteristic: if we are certain that a group of triangles will not be part of
a solution in the current branch, these triangles may have been the last ones
able to connect two unconnected components. If we have already branched
variables of one component to one, we can exclude all variables of the other
component. If we have branched variables of both components to one, we can prune the
current node, because no future solution can connect both
components.
\subsubsection{Subtour Constraints} \label{sec:subtour_constraints}
Both integer program formulations make use of the concept of \emph{subtour
constraints}. As described, these are only added incrementally during the
optimization process.
\paragraph{Callback Graphs}
The computation of subtours is partly problem specific. However, if
we abstract both problems' interim solutions as undirected \emph{callback graphs} $G$,
the algorithmic approaches have many attributes in common. For the edge-based
approach, we create a vertex for each point $s_i \in S$ and
connect vertices $v_i,v_j$ whenever $(i,j)$ or $(j,i)$ is part of the
solution. In the triangle-based approach, we build the dual graph, i.e., we have
a vertex for each triangle of the solution and an edge between adjacent triangles.
\paragraph{Connected Components}
Finding violated subtour constraints in a given interim solution can simply be based
on searching for connected components in the callback graph.
For computing multiple connected components at once, we use a DFS-based approach that
starts at a vertex $v_i$ and finds its connected component by iterating through
the edges of the callback graph. If there is any non-visited vertex $v_j$, it
repeats this step until all vertices have been visited. This method runs in
time $O(n + \vert E \vert)$.
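The component search on the callback graph is a standard iterative DFS; a sketch in Python (our own illustration, with vertices indexed $0,\dots,n-1$):

```python
def connected_components(n, edges):
    """Connected components of an undirected graph in O(n + |E|) time."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = [False] * n
    components = []
    for start in range(n):
        if seen[start]:
            continue
        seen[start] = True
        stack, comp = [start], []
        while stack:                 # iterative depth-first search
            v = stack.pop()
            comp.append(v)
            for w in adj[v]:
                if not seen[w]:
                    seen[w] = True
                    stack.append(w)
        components.append(comp)
    return components
```

Each component $D$ of an interim solution with $D \neq V$ then yields a violated subtour constraint.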
\paragraph{Edge-Based Approach}
For a given interim solution we compute the connected components;
for each such component $D$, we add one constraint over the sum of outgoing edges of $D$ and one constraint
over the incoming edges of $D$.
\begin{equation*}\sum_{e \in \delta^-(D)} z_e \geq 1 \qquad \sum_{e \in \delta^+(D)} z_e \geq 1\end{equation*}
The constraints can be generated in $O(\vert E \vert)$ time, because we need to
iterate over all edges to find out which ones are leaving or entering
the component. Figure~\ref{fig:connected_components:reference_point_approach}
illustrates an example of three connected components in a solution.
Observe that at least one $\delta^+$ edge and one $\delta^-$ edge have to be part of a
correct solution in order to connect $D$ to the rest of the point set.
\begin{figure}[h]
\centering
\input{figures/subtour_constraints_connected_components}
\caption{Three connected components of an interim solution}
\label{fig:connected_components:reference_point_approach}
\end{figure}
\paragraph{Triangle-Based Approach}
For a given interim solution we compute the connected components. In contrast
to the edge-based approach, we cannot find two sets $\delta^+,\delta^-$,
because the vertices of the callback graph do not all have to be part of the
optimal solution. This makes preventing subtours more complex,
because each subtour constraint has to include information about which
vertices have been chosen so far. The main idea is to force a given component
to have at least one neighbor included in the optimal solution. Iterating these
constraints over the interim solutions eventually forces two unconnected components to
become connected.
Let $D$ be a connected set of triangles and let $\delta(D)$ be the set of all
triangles having one edge on the outer boundary of $D$ and one point in
$T\setminus D$.
\begin{equation}
\sum_{{\scaleto{\triangle}{5pt}} \in D} x_{\scaleto{\triangle}{5pt}} \leq \vert D \vert - 1 + \sum_{{\scaleto{\triangle}{5pt}} \in \delta(D)} x_{\scaleto{\triangle}{5pt}}
\label{eq:triangulation_approach:simple_subtour_constraint}
\end{equation}
Observe that the constraint forces a solution containing $D$ to attach at least one triangle to $D$.
Note that the constraint is not as strong as the subtour constraints of the edge-based approach, because it depends on the specific configuration of $D$:
the constraint is also satisfied if one triangle of $D$ is exchanged for another.
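Constraint \eqref{eq:triangulation_approach:simple_subtour_constraint} can be evaluated directly on 0-1 choices; the encoding below is our own illustration:

```python
def triangle_subtour_satisfied(x_D, x_boundary):
    """Check  sum_{D} x <= |D| - 1 + sum_{delta(D)} x  for one component D.

    x_D: 0/1 choices for the triangles of D;
    x_boundary: 0/1 choices for the triangles of delta(D)."""
    return sum(x_D) <= len(x_D) - 1 + sum(x_boundary)
```

Selecting all of $D$ without any boundary triangle violates the constraint; dropping one triangle of $D$, or attaching one boundary triangle, satisfies it — exactly the behavior discussed above.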
\paragraph{Finding Minimum Cuts}
If we consider fractional solutions of the Linear Programming relaxations of either problem,
finding violated subtour constraints requires searching for \emph{minimum cuts}.
The simplest approach is to use one of the well-known max-flow algorithms.
Ford and Fulkerson~\cite{ford1956maximal} gave an elegant proof that the value of a maximum flow equals the capacity of a minimal cut in a flow
network. For undirected graphs, Stoer and Wagner~\cite{stoer1997simple}
provided an algorithm that finds a minimal cut in $O(\vert V \vert \cdot \vert E \vert
+ \vert V \vert^2 \log \vert V \vert)$ time. To find more than one subtour,
we can also search for min-cuts separating two given vertices $v_i,v_j$: a
minimum cut of $G$ is one of the cuts separating two vertices $v_i \in
V_1$ and $v_j \in V_2$. Gomory and Hu~\cite{gomory1961multi} introduced the
concept of edge-weighted \emph{Gomory-Hu trees} $T(G)$. For every pair of
vertices $v_i,v_j \in V(G)$, the value of the minimum cut separating $v_i$ and $v_j$ in $G$
equals the minimum edge weight on the path from $v_i$ to $v_j$ in $T(G)$. Using
this technique we are able to add more than one subtour constraint at once.
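As an illustration of the simplest route, here is a self-contained Edmonds-Karp max-flow sketch (our own; the Stoer-Wagner and Gomory-Hu methods cited above are separate algorithms) that returns the value of a minimum $s$-$t$ cut together with the source side of the cut, from which a violated subtour constraint can be read off.

```python
from collections import deque, defaultdict

def min_st_cut(edges, s, t):
    """Edmonds-Karp max flow on an undirected capacitated graph.

    edges is a list of (u, v, capacity) triples; returns the cut value and
    the set of vertices on the s-side of a minimum s-t cut."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c   # undirected edge: capacity in both directions
        cap[(v, u)] += c
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        # Breadth-first search for an augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            break  # no augmenting path left: the flow is maximum
        # Collect the path edges and augment by the bottleneck capacity.
        path = []
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= aug
            cap[(v, u)] += aug
        flow += aug
    # Vertices still reachable from s form the s-side of a minimum cut.
    return flow, set(parent)
```

By max-flow min-cut duality, the returned flow value equals the capacity of the cut separating the returned vertex set from the rest of the graph.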
\subsection{Subtour Angle Constraints}\label{sec:subtour_angle_constriant}
For the next idea in the triangle-based approach we consider possible subtours.
\begin{figure}[b]
\centering
\includegraphics[scale=0.7]{figures/triangle_angle_constraint.pdf}
\caption{Subtour types in the triangle-based approach: $C_1$ does not share a point with $C_2$ or $C_3$; $C_2$ and $C_3$ share the point $s_i$.}
\label{fig:subtour_types_triangulation_approach}
\end{figure}
Figure~\ref{fig:subtour_types_triangulation_approach} illustrates two different
types of subtours that may occur in interim solutions of the triangle-based
approach. The first type are connected components in the sense that two
components $C_i, C_j$ do not share a single point. In the illustration, $C_1$
and $C_2$ as well as $C_1$ and $C_3$ are components of such type. The second
type are components that share one or more points. In
Figure~\ref{fig:subtour_types_triangulation_approach}, this type is represented
by the components $C_2$ and $C_3$ that share a point $s_i \in S$.
\begin{figure}[h]
\centering
\input{figures/subtour_constraints_fan_expressions}
\caption{Visualization of subtour angle constraints}
\end{figure}
The construction of the callback graph detects both component types as
subtours, and constraints~\eqref{eq:triangulation_approach:simple_subtour_constraint}
ensure that later solutions consist of a single connected component. Now consider two triangles
that share a point $s_i$, but are not connected in the callback graph. If both
triangles are part of the optimal solution, we can be sure that both
triangles are connected with other triangles containing $s_i$. Let $t_{top},
t_{bottom}$ be two triangles of different components sharing one point $s\in
S$, and let $\alpha_l,\alpha_r$ be the angles between the inner edges of
$t_{top},t_{bottom}$ at $s$. Figure~\ref{fig:fan_expressions:two_components}
illustrates both triangles and their respective angles. From now on we assume
that both triangles are part of an optimal solution. Because an optimal
solution has to be a valid polygon, both $t_{top}$ and $t_{bottom}$ have to
lie in the same component. In the resulting polygon, the two components cannot be
connected solely by triangles that do not include $s$; hence they
have to be connected via triangles in $\alpha_l$ or $\alpha_r$.
Figures~\ref{fig:fan_expressions:left_side} and
\ref{fig:fan_expressions:right_side} show two possible cases of the optimal
polygon: one closes $\alpha_l$, while the other closes $\alpha_r$. The sum of the
inner angles at $s$ of these triangles has to equal $\alpha_l$ or
$\alpha_r$, respectively:
$$\beta_i + \beta_j = \alpha_l \qquad \beta_k + \beta_l + \beta_m = \alpha_r$$
Note that no solution can have both angles filled completely, because $s$ would
no longer be on the boundary of $P$. This leads to
so-called \emph{subtour angle constraints}. The main idea is that if both
triangles $t_{top},t_{bottom}$ are part of the solution, other triangles at $s$
need to close at least an angle of $\min \{\alpha_l, \alpha_r\}$.
\begin{equation}\label{eq:subtour_angle_constriant}
\min \{\alpha_l,\alpha_r\} \cdot (z_{top} + z_{bottom} - 1) \leq \sum_{{\scaleto{\triangle}{5pt}} \in \delta(s)} \beta_{{\scaleto{\triangle}{5pt}}}^s \cdot x_{\scaleto{\triangle}{5pt}}
\end{equation}
By $\beta_{\scaleto{\triangle}{5pt}}^s$ we denote the inner angle of the triangle
$\triangle$ at point $s$. Figures~\ref{fig:fan_expressions:left_side} and
\ref{fig:fan_expressions:right_side} illustrate how the angles
$\beta_{\scaleto{\triangle}{5pt}}^s$ add up to $\alpha_{l/r}$. We have reason
to believe that the triangles of an integer solution obtained during the
solving process are likely to be part of the
optimal solution. Because of that, we add \emph{subtour angle constraints} at
every integer solution. In addition to the previous ideas, we generalize the
idea of two triangles at one point to so-called \emph{triangle fans}.
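For constraint~\eqref{eq:subtour_angle_constriant}, the coefficients $\beta_{\triangle}^s$ are plain inner angles; a small Python sketch (our own illustration, with hypothetical names and data layout) computes them from coordinates.

```python
import math

def inner_angle(tri, s):
    """Inner angle (in radians) of triangle tri at its corner s.

    tri is a triple of 2D points, one of which must equal s."""
    a, b = [p for p in tri if p != s]
    v1 = (a[0] - s[0], a[1] - s[1])
    v2 = (b[0] - s[0], b[1] - s[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.acos(max(-1.0, min(1.0, cos)))  # clamp against rounding

def angle_constraint_data(alpha_l, alpha_r, fan, s):
    """Left-hand factor and right-hand coefficients of the constraint
    min(alpha_l, alpha_r) * (z_top + z_bottom - 1) <= sum_t beta_t^s * x_t."""
    return min(alpha_l, alpha_r), [inner_angle(t, s) for t in fan]
```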
\begin{definition}
In an interim solution, a \emph{triangle fan} is a set $F_s$ of connected triangles that share a point $s \in S$; that is, for all $\triangle_i, \triangle_j \in F_s$ there is a path between $\triangle_i$ and $\triangle_j$ in the callback graph.
\end{definition}
Consider an interim solution obtained during the solving process.
After constructing the callback graph, we iterate over all
triangles of the solution and add each triangle to three triangle fans (one
for each point of $\triangle$). The triangle $\triangle$ will be added to an
existing fan $F_s$ if it lies in the same component, because this indicates a
path between $\triangle$ and the other $\triangle_i \in F_s$. If no such fan is
found, a new fan is created for this component. In the end
we return all triangle fans for each point $s\in S$.
Because a solution contains $n-2$ triangles, we add at most $n-2$ triangles to fans.
As each triangle has three points, it can be part of up to three fans.
This leads to a time complexity of $O((n-2) \cdot 3 \cdot (n-2))=O(n^2)$, where $n$ is the number of points in $S$.
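The fan construction can be sketched as follows (our own illustration): keying fans by the pair (point, component) realizes the search for an existing fan in constant time per corner, with `component_of` a hypothetical map from triangle indices to callback-graph components.

```python
def triangle_fans(triangles, component_of):
    """Group triangles of an interim solution into fans.

    component_of[i] is the connected component (in the callback graph) of
    triangle i; triangles sharing a point s join the same fan only if they
    lie in the same component."""
    fans = {}
    for i, tri in enumerate(triangles):
        for s in tri:  # each triangle joins one fan per corner
            fans.setdefault((s, component_of[i]), []).append(i)
    return fans
```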
After computing all triangle fans we choose two triangles
$t_{top},t_{bottom}$ with a common point $s$ from two different fans in order to
formulate the constraint.
These triangles should not be chosen at random, because
the constraint gets much stronger if the following two conditions apply.
\begin{itemize}
\item The triangles should have a small area.
\item The angles $\alpha_l,\alpha_r$ are similar, i.e., they minimize $\vert \alpha_l-\alpha_r \vert$.
\end{itemize}
Consider two maximal fans $F_1, F_2$ having a point $s$ in common.
We choose triangles $t_{top}\in F_1,
t_{bottom}\in F_2$, such that $\vert \alpha_l-\alpha_r \vert$ is minimized.
If there are multiple candidates,
we choose those that minimize the area.
\subsubsection{Point-based Subtour Constraints}
Consider a point set $X \subseteq S$ of an interim solution with $3 \leq \vert X
\vert \leq n-2$. We write $\dot\triangle \in \delta(X)$ and $\ddot\triangle \in
\delta(X)$ for triangles with exactly one corner and exactly two corners inside $X$,
respectively. We know that the point set $X$ has to be connected to at least
two points outside of $X$. To connect both points to $X$, at least two triangles
are needed that have at least one point outside of $X$ (see Fig.~\ref{fig:pointbased_subtour} right). This leads to the
first point-based subtour constraint
\begin{equation}\label{eq:point_based_subtour_one}
\sum_{\dot\subscripttriangle \in \delta(X)} x_{\dot\subscripttriangle} + \sum_{\ddot\subscripttriangle \in \delta(X)} x_{\ddot\subscripttriangle} \geq 2
\end{equation}
Now suppose there is no triangle connecting two points from $X$.
This implies that each point $s\in X$ needs a triangle connecting $s$ with two points from $S\setminus X$.
Therefore, if $\sum_{\ddot\subscripttriangle \in \delta(X)} x_{\ddot\subscripttriangle} = 0$, then $\sum_{\dot\subscripttriangle \in \delta(X)} x_{\dot\subscripttriangle} = \vert X\vert$ (see Fig.~\ref{fig:pointbased_subtour} left).
If there is at least one triangle with two points in $X$, then a possible solution can exist with only one additional triangle.
Thus, if $\sum_{\ddot\subscripttriangle \in \delta(X)} x_{\ddot\subscripttriangle} \geq 1$, then $\sum_{\dot\subscripttriangle \in \delta(X)} x_{\dot\subscripttriangle} +\sum_{\ddot\subscripttriangle \in \delta(X)} x_{\ddot\subscripttriangle} \geq 2$ (see Fig.~\ref{fig:pointbased_subtour} right).
Combining both cases yields the following constraint for a point
set $X$.
\begin{equation}\label{eq:point_based_subtour_two}
\sum_{\dot\subscripttriangle \in \delta(X)} x_{\dot\subscripttriangle} \geq \vert X \vert - (\vert X \vert - 1) \cdot \sum_{\ddot\subscripttriangle \in \delta(X)} x_{\ddot\subscripttriangle}
\end{equation}
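As an illustration (our own, with hypothetical names), the triangles entering constraints~\eqref{eq:point_based_subtour_one} and~\eqref{eq:point_based_subtour_two} can be classified as follows, after which a violation check for the second constraint is a one-liner.

```python
def point_based_cuts(X, triangles):
    """Split delta(X) into triangles with exactly one corner in X (dot)
    and triangles with exactly two corners in X (ddot)."""
    X = set(X)
    dot = [i for i, t in enumerate(triangles) if len(X & set(t)) == 1]
    ddot = [i for i, t in enumerate(triangles) if len(X & set(t)) == 2]
    return dot, ddot

def violates_second_constraint(X, triangles, x):
    """True iff the 0/1 vector x violates
    sum_dot x >= |X| - (|X| - 1) * sum_ddot x."""
    dot, ddot = point_based_cuts(X, triangles)
    return sum(x[i] for i in dot) < len(X) - (len(X) - 1) * sum(x[i] for i in ddot)
```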
\begin{figure}
\centering
\includegraphics[page=1, scale=.6]{figures/point_based_subtour_constraint.pdf}
\hfil
\includegraphics[page=2, scale=.6]{figures/point_based_subtour_constraint.pdf}
\caption{Illustration of point-based subtour constraints.
Red points correspond to the set $X$.
Left: A valid solution when no outgoing triangle (blue triangles) has two points from $X$.
Then there must be $\vert X\vert$ triangles connecting $X$ to $S\setminus X$.
Right: If at least one triangle has two points from $X$, then there must exist at least one more outgoing triangle.}
\label{fig:pointbased_subtour}
\end{figure}
Separation over these constraints can be achieved analogously to the separation of regular subtour constraints
for the classic TSP, with triangles in our problem corresponding to vertices in the TSP,
and connected components corresponding to connected sets of triangles. This allows
polynomial-time separation, but requires iterating over the $O(n^3)$ triangles.
\section{Experiments}
\label{sec:experiments}
Based on the described approaches, we ran experiments on machines with slightly different specifications and parameters.
We used CPLEX 12.9 with a time limit of 1800 seconds on an AMD Ryzen 7 5800X CPU @ 4.2GHz with eight cores, 16 threads, and a 32MB L3 cache.
The solver was allowed to use up to 128GB of RAM.
Our solver uses the default CPLEX parameters except \textsc{CPXPARAM\_Parallel}, which was set to \textsc{CPX\_PARALLEL\_OPPORTUNISTIC}.
We considered all instances from the CG:SHOP Challenge with up to 50 points;
see Section~\ref{subsec:cgshop} for a detailed description. Because the
original CG:SHOP benchmark set mostly aims at the heuristic and experimental
methods developed in the competition, it reaches all the way up to 1,000,000
points, but is relatively sparse within the range of exact methods. We
accounted for this sparsity by generating additional instances of similar type.
Because of fast run times, we used 20 instances with 5 iterations each for
instance sizes 12-20; for instance sizes 21-23, we considered 20 instances with
1 iteration each; for larger sizes, we limited the number to 10 instances each.
\subsection{Solver Types}
In this section we introduce different versions of each solver, implementing features mentioned in Section~\ref{sec:tools}. For both the triangulation-based and the edge-based approach, we pass a start solution generated by a \textsc{Greedy Min-Area} heuristic inspired by the work of Taranilla et al.~\cite{taranilla2011approaching},
by Fekete's \textsc{Max-Area} approximation~\cite{f-gtsp-92}, or by solutions from the CG:SHOP competition. \textsc{Greedy Min-Area} starts with the polygon $P=conv(S)$ and carves out the largest triangles by replacing an edge $(p_i,p_j)$ of $P$ with two edges $(p_i, q), (q, p_j)$ to an inner point $q$.
\subsubsection{Edge-Based Solvers}
\textsc{EdgeV1} is a basic integer program of the edge-based approach.
It adds all intersection constraints and slab constraints before starting the solving process and
adds subtour constraints at every integer solution. This integer program is an
improvement over the edge-based \textsc{MinArea} integer program presented by
Papenberg et al.~\cite{Papenberg2014, fekete2015area}, in which cycle-based
subtour constraints were added only after an optimal solution had been found,
resulting in poor computing times even for small point sets.
We also utilize properties of the convex hull to exclude certain variables from
the computation, namely edges that connect two non-adjacent points on the convex hull.
\textsc{EdgeV1} makes use of this concept by fixing these variables to zero.
\textsc{EdgeV2} extends the previous version by adding intersection
constraints at interim solutions.
\textsc{EdgeV3} includes a branching extension where branching on a variable $z_e$ results in intersecting edges being branched to zero.
In \textsc{EdgeV4} we additionally search for subtours in fractional interim
solutions and add slab constraints during the solving process. The upcoming sections will show that the edge-based approach is better suited for \textsc{Max-Area} instances.
\subsubsection{Triangle-Based Solvers}
\textsc{TriangulationV1} is the first version of the triangle-based approach.
Compared to the basic triangulation approach of Papenberg \cite{Papenberg2014},
we have fewer variables and different subtour constraints
\eqref{ip:triangulation_approach:subtours}. We added further \emph{halfspace inequalities}
as well as equalities for edges which connect
non-adjacent vertices of the convex hull.
In \textsc{TriangulationV1} we add subtour constraints and intersection
constraints in every integer solution. \textsc{TriangulationV2} extends the
first version with so-called \emph{subtour angle constraints}.
These are added at every integer solution. We are able to reuse the connected
components that we compute along the way, which allows us to add
constraints~\eqref{ip:triangulation_approach:subtours} without much additional
computation time. \textsc{TriangulationV3} makes use of additional results on ineffective subtour constraints.
In addition to the constraints of
\textsc{TriangulationV2}, we add point-based subtour constraints
to every intermediate integer solution. The upcoming sections will show that the triangulation-based approach is better suited for \textsc{Min-Area} instances.
\subsection{Analysis}
\label{sec:computation-time}
\begin{figure}[h]
\includegraphics[width=.9\linewidth]{figures/plots/random_12_25.pdf}
\caption{Runtimes for different solver versions on random instances of size $12-25$ and a time limit of $1800$ seconds. The line is the average runtime over five iterations on $20$ instances for sizes $12-20$. For sizes $21-25$ we solved one iteration on $20$ instances for sizes $21-23$ and $10$ instances of size $24-25$.}
\label{fig:random-detailed-runtime}
\end{figure}
\begin{figure}[h]
\includegraphics[width=.9\linewidth]{figures/plots/gaps_random_19_25.pdf}
\caption{Optimality gap for instances with 19 to 25 points. Shown is the best gap over multiple runs for each instance.}
\label{fig:random-detailed-gap}
\end{figure}
\begin{figure}[h]
\includegraphics[width=.9\linewidth]{figures/plots/ram_usage_random_12_25.pdf}
\caption{RAM usage for the experiments in Figure~\ref{fig:random-detailed-runtime}.}
\label{fig:random-detailed-ram-usage}
\end{figure}
\subsubsection{Edge-Based Solvers}
We compared two different approaches: the
first adds intersection constraints at every
integer and at every fractional solution, while the second adds intersection constraints only at every integer solution.
Our observations showed that searching for intersections in fractional solutions
increases the computation time. We assume that the intersection constraints
obtained from fractional solutions are not needed for computing the optimal
solution and that their generation wastes a lot of computation time. We denote
by \textsc{EdgeV2} the version that adds intersection constraints at every
integer solution.
Figure~\ref{fig:random-detailed-runtime} provides a detailed view on the
runtime of the major \textsc{Edge} versions we consider in our experiments for instances of size $12-25$. The scatter plot shows the runtimes for different iterations and instances while the line is the average runtime over five iterations and $20$ instances for each $n \leq 20$. For $n > 20$, we computed one iteration on $20$ instances (for sizes $21-23$) and $10$ instances (for sizes $24-25$).
Figure~\ref{fig:random-detailed-gap} provides an overview over the optimality gap for instances with at least 19 points.
Figure~\ref{fig:random-detailed-ram-usage} illustrates the amount of memory that was used during the execution.
As in other problems such as the TSP, adding constraints at interim solutions instead of adding all constraints at the beginning only has a significant impact once the instances reach a certain size.
Our goal is to add fewer constraints at the beginning while preserving the baseline runtime and the high success rate of \textsc{EdgeV1} for smaller instances.
The best approach can then be used to solve larger instances for which the construction of the complete integer program consumes too much space.
In comparison to the first version, \textsc{EdgeV2} adds intersection
constraints at every integer solution. Whenever many intersection constraints were needed to obtain the optimal solution, we observed that the runtime and the depth of the branch-and-bound tree increased. Figure~\ref{fig:random-detailed-runtime} shows that the runtime of this approach was slightly higher than the \textsc{EdgeV1} baseline.
\textsc{EdgeV3} further adds intersection constraints during branching, which preserved the low runtime on some instances while mitigating the negative effect on instances that needed a lot of intersection constraints.
Apart from intersections, one might also want to add the slab constraints at interim solutions.
Slab constraints ensure that the resulting polygon is oriented in the right direction.
\textsc{EdgeV4} adds slab constraints during execution and searches for connected components, i.e., min-cuts, in every fractional solution.
We noticed that almost all slab constraints are added to the IP at some point during the execution of \textsc{EdgeV4};
as a consequence, these constraints should be added from the beginning.
In our implementation of \textsc{EdgeV4}, we computed the connected components of the interim solutions, i.e., minimum cuts with value one.
We implemented multiple versions for finding minimum cuts of greater size.
All of them increased the time needed for the computation;
moreover, the approach was unable to solve all instances of larger sizes within the time limit.
This shows that the computation of larger cuts is computationally expensive and the obtained inequalities are not worth the effort.
Despite the low runtime on smaller instances, the number of unneeded intersection
constraints added by \textsc{EdgeV1} grows fast for larger instances.
Therefore, the other approaches add fewer constraints, which results in better computation times.
Overall, the \textsc{EdgeV2} approach, which adds intersection constraints at integral interim solutions, and \textsc{EdgeV3}, which additionally uses some advanced branching techniques, appear to be the best approaches for solving larger instances.
\subsubsection{Triangle-Based Solver}
As shown in Figure~\ref{fig:random-detailed-ram-usage}, the RAM usage of both approaches depends on the problem variant we are trying to solve.
In Section~\ref{sec:triangle} we proposed the triangle-based IP which uses $O(n^3)$ variables and $O(n^6)$ constraints (excluding subtour constraints).
As a consequence, the formulation, the branch-and-bound tree, and the temporary variables of the integer program require a lot of space on the executing machine. This is especially relevant for the \textsc{Max-Area} variant, for which many intersection constraints are needed to obtain a feasible solution.
Figure~\ref{fig:random-detailed-runtime} provides a detailed view on the
runtime of the major \textsc{Triangle} versions we considered in our experiments. Due to the triangulation approach performing worse on the \textsc{Max-Area} variant, we excluded instances of size $\geq 19$ in the experiment.
The scatter plot shows the runtimes for different iterations and instances while the line is the average runtime over five iterations and $20$ instances for each $n \leq 20$. For $n > 20$, we computed one iteration on $20$ instances (for sizes $21-23$) and $10$ instances (for sizes $24-25$).
As described above, interim solutions of the triangle-based approach contain
only few intersections when solving
\textsc{Min-Area} instances, because overlapping areas are counterproductive for obtaining a
minimal solution. Nevertheless, intersections are possible and
need to be eliminated if no other violated constraints are found. The results of the
edge-based approach showed that adding intersection clique constraints can be very efficient,
so we adapted the idea and searched for intersection cliques in \textsc{TriangulationV1} as well.
In \textsc{TriangulationV2}, we search for subtour angle constraints as well.
Subtour angle constraints can help to decrease the runtime for some instances; on other occasions, the approach that added these constraints performed worse. We assume that better results can be achieved by improving the algorithm that finds the possible subtour constraints. We did not investigate this further, as the triangulation approach has large space requirements for larger instances. Figure~\ref{fig:random-detailed-ram-usage} shows that \textsc{TriangulationV1} has a higher RAM usage than the edge-based approaches for $n\geq 24$. When attempting to solve the CG:SHOP instances to optimality in the next section, we reached the limit of 128GB for some instances of size $\geq 30$ and all instances of size $\geq 40$. The point-based subtour constraints added in \textsc{TriangulationV3} did not improve the performance of the approach and should not be considered a valuable extension.
\subsubsection{Convex Hull}
During our experiments, we noticed a strong relation between the number of points on the convex hull of an instance and the runtime needed to find an optimal solution.
We therefore generated instances with $n$ points, of which $3 \leq k \leq n$ lie on the convex hull.
This was done by first generating $k$ points in convex position: we sampled points uniformly at random,
discarding any point that would either lie in the interior of the hull or cause a previously selected point to lie in the interior.
We then added $n-k$ points chosen uniformly at random within the convex hull.
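The generation procedure can be sketched as follows (our own illustration, using rejection sampling with a monotone-chain convex hull; unlike the incremental discarding described above, this sketch resamples the $k$ hull candidates in batches).

```python
import random

def cross(o, a, b):
    """Twice the signed area of triangle (o, a, b); positive iff ccw."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; strictly convex hull in ccw order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def chain(points):
        h = []
        for p in points:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(reversed(pts))
    return lower[:-1] + upper[:-1]

def strictly_inside(hull, p):
    m = len(hull)
    return all(cross(hull[i], hull[(i + 1) % m], p) > 0 for i in range(m))

def instance(n, k, rng):
    """n points in the unit square with exactly k on the convex hull."""
    # Rejection-sample k points until all of them are in convex position.
    while True:
        candidates = [(rng.random(), rng.random()) for _ in range(k)]
        hull = convex_hull(candidates)
        if len(hull) == k:
            break
    # Add n - k points strictly inside the hull, again by rejection.
    pts = list(hull)
    while len(pts) < n:
        p = (rng.random(), rng.random())
        if strictly_inside(hull, p):
            pts.append(p)
    return pts
```

Note that batch resampling becomes slow for large $k$, since the probability that $k$ uniform points are in convex position drops quickly; the incremental variant described above avoids this.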
\begin{figure}[h]
\includegraphics[width=.85\linewidth]{figures/plots/17_edge_convex.pdf}
\caption{Runtime for the solver \textsc{EdgeV1} with $n=17$ points and $20$ instances with a convex hull size of $k$ for every $k=3,\dots,17$.}
\label{fig:17-convex-edge}
\end{figure}
\begin{figure}[h]
\includegraphics[width=.85\linewidth]{figures/plots/17_triangle_convex.pdf}
\caption{Runtime for the solver \textsc{TriangulationV1} with $n=17$ points and $20$ instances with a convex hull size of $k$ for every $k=3,\dots,17$.}
\label{fig:17-convex-triangle}
\end{figure}
We performed an experiment for the solver \textsc{EdgeV1} with $n=17$ points and generated $20$ instances for every $k=3,\dots,17$.
Figure~\ref{fig:17-convex-edge} compares the average aggregated CPU times, that is, the sum of time used by all processors during the optimization phase.
We can clearly observe that the CPU time drastically decreases when more points lie on the convex hull of a point set.
We assume that this is due to the fact that fewer edges are possible candidates and the number of possible polygonizations is smaller for these instances.
Moreover, edges that connect two non-adjacent points of $conv(S)$ are set to zero, simplifying many constraints of the IP.
On the other hand, Figure~\ref{fig:17-convex-triangle} shows the average CPU times for the triangulation approach \textsc{TriangulationV1}. Apart from a drastic decrease at $k=16,17$, we could only observe a minor trend towards shorter runtimes for larger $k$.
\subsection{CG:SHOP Results}
\label{subsec:cgshop}
In this section, we discuss the results both IPs obtained on the CG:SHOP competition instances. We start with the small instances that the approaches were able to solve optimally in a short time. Table~\ref{tab:cgshop-small-10-15} shows the runtimes for the smallest instances of the competition.\\
\begin{table}[h]
\footnotesize
\input{cgshop_small_table.tex}
\caption{CG:SHOP results for \textsc{Min-Area} and \textsc{Max-Area} for instances of size $<20$.}
\label{tab:cgshop-small-10-15}
\end{table}
As point sets of size $\leq 15$ have been observed to be solvable by both approaches in a very short time (see Section~\ref{sec:computation-time}), we only show results for the \textsc{Edge} approach. In comparison with the \textsc{Min-Area} runtimes on uniformly distributed random instances in Figure~\ref{fig:random-detailed-runtime},
the runtimes on the competition instances appear to be similar. Note that the runtimes in the table are the best runtimes we observed on these instances, not the average runtimes.\\
As explained earlier,
the space requirements of the triangulation approach prevent experiments with larger instance sizes. Results from Section~\ref{sec:computation-time} imply that the edge-based approach is better suited for the \textsc{Max-Area} variant. In this paragraph, we investigate the results obtained on competition instances of size $20-50$. For larger instances, we raised the maximum runtime to $43{,}800$ seconds, because finding an optimal solution is very unlikely, especially for the largest instances.
As other competitors submitted their best solutions for the same instances, we were able to provide the IP with the best solution found during the competition. As most of these solutions are likely optimal, the main objective was to find good bounds within the time limit. Keep in mind that we only present the best results obtained after numerous attempts at solving the instances.
{
\footnotesize
\input{max_cgshop_table.tex}
}
Table~\ref{tab:cgshop-gaps} shows the gap between the best solution and the best bound found by CPLEX. Following CPLEX, we use $gap(obj,b)=\frac{|b-obj|}{10^{-10}+|obj|}$ for calculating the gap, where $b$ is the best bound and $obj$ is the objective value of the best integer solution. The table also includes the gap to the area of the convex hull for comparison. The edge-based approach was able to prove optimality for all instances of size $\leq25$. For larger instances, most gaps between the convex hull and the best integer solution are below $0.10$; while the absolute differences are not captured by these bounds, the relative differences are quite small. The largest instance proven optimal was the \emph{euro-night-000045} instance with $45$ points. To the best of our knowledge, this is the largest \textsc{Max-Area} instance that has been solved to provable optimality. For instance sizes $>45$ we could not observe much improvement in the upper bounds over the trivial bound; better cuts would have to be added during the solving process to obtain better bounds for these sizes.
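For reference, the gap computation is a one-liner; the sketch below (our own) mirrors the CPLEX definition quoted above.

```python
def cplex_gap(obj, bound):
    """Relative MIP gap: |b - obj| / (1e-10 + |obj|)."""
    return abs(bound - obj) / (1e-10 + abs(obj))
```

The $10^{-10}$ term guards against division by zero when the incumbent objective value is $0$.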
{
\footnotesize
\input{min_cgshop_table.tex}
}
For the edge-based approach, the minimization variant is significantly harder than the \textsc{Max-Area} problem for most instances. Unfortunately, the triangulation-based approach consumes a lot of space on larger point sets. Table~\ref{tab:cgshop-gaps-min} summarizes our results on \textsc{Min-Area}. For instances of size $20-35$ (except \emph{london-0000035}), the best results were obtained using \textsc{TriangulationV1}; for the other instances, the space requirements were too high, so the edge-based approach was used. Apart from the instance \emph{uniform-0000025-1}, all instances of size $20-25$ could be solved to optimality.
For most of the larger instances of size $>35$, the edge-based approach was unable to improve the trivial bound of 1 (all solutions in the competition must have integral area). However, the competition results for the instances \emph{london-0000045}, \emph{uniform-0000050-2} and \emph{stars-0000050} were proven to be optimal. To the best of our knowledge, \emph{uniform-0000050-2} and \emph{stars-0000050} are the largest \textsc{Min-Area} instances that have been solved optimally. For instance sizes $>50$ we did not observe any improvement in the bounds. Apart from these optimal solutions, the edge-based approach is not well suited for the minimization variant. Other approaches or improvements to the triangulation-based IP will most likely help to find better bounds.
\section{Conclusions}
\label{sec:conclusions}
While our work shows that with some amount of algorithm engineering,
it is possible to extend the range of instances that can be solved to
provable optimality, it also illustrates the practical difficulty of the problem.
This reflects the limitations of such IP-based methods: The edge-based approach
makes use of an asymmetric variant of the TSP, which is known to be harder than the symmetric TSP,
while the triangle-based approach suffers from its inherently large number of variables and constraints.
Furthermore, the non-local nature of \textsc{Min-Area} and \textsc{Max-Area} polygons
(which may contain edges that connect far-away points) makes it difficult to reduce
the set of candidate edges.
As a result, \textsc{Min-Area} and \textsc{Max-Area}
turn out to be prototypes of geometric optimization problems that are difficult
both in theory and practice. This differs fundamentally from a problem such as
\textsc{Minimum Weight Triangulation}, for which provably optimal solutions
to huge point sets can be found~\cite{mwt1}, and practically difficult instances
seem elusive~\cite{mwt2}.
\section*{Acknowledgments}
This work was supported by DFG grant FE407/21-1, ``Computational Geometry: Solving Hard Optimization Problems (CG:SHOP)''.
We thank an anonymous reviewer for various constructive comments that helped to improve the overall presentation.
\bibliographystyle{ACM-Reference-Format}
\section{Tools}
\label{sec:tools}
We considered two models based on integer programming: an \emph{edge-based formulation}
(described in Section~\ref{sec:edge}) and a \emph{triangle-based formulation}
(described in Section~\ref{sec:triangle}). In addition, we developed a number
of further refinements and improvements (described in Section~\ref{sec:enhance}).
\subsection{Edge-Based Formulation}
\label{sec:edge}
The first formulation is based on considering \emph{directed} edges of the polygon boundary.
As shown in Figure~\ref{fig:reference_point_approach:area_calculation},
the area $A_{\cal P}$ of a polygon $\cal P$ can be computed by adding the (signed) triangle areas $f_e$
that are formed by edges $e$ and an arbitrary, fixed reference point $r$.
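As a sanity check of this identity (our own illustration), the following sketch sums the signed areas of the triangles spanned by each directed boundary edge and $r$; the result is independent of the choice of $r$.

```python
def polygon_area(vertices, r=(0.0, 0.0)):
    """Area of a simple polygon, obtained by summing the signed areas of the
    triangles spanned by each directed boundary edge and a fixed point r."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        p, q = vertices[i], vertices[(i + 1) % n]
        # Signed area of triangle (r, p, q): positive iff r, p, q is ccw.
        area += 0.5 * ((p[0] - r[0]) * (q[1] - r[1]) - (q[0] - r[0]) * (p[1] - r[1]))
    return area
```

For a counterclockwise boundary the sum is positive; reversing the orientation flips the sign, which is exactly the role of the slab inequalities below.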
\begin{figure}[h]
\centering
\begin{subfigure}[b]{.5\textwidth}
\centering
\input{figures/reference_point_approach_polygon_triangles}%
\caption{Triangles of the polygon}
\label{fig:reference_point_approach:polygon_triangles}
\end{subfigure}%
\begin{subfigure}[b]{.5\textwidth}
\centering
\input{figures/reference_point_approach_polygon_triangles_positive}%
\caption{Positive edge triangles}
\label{fig:reference_point_approach:polygon_positive}
\end{subfigure}%
\begin{subfigure}[b]{.5\textwidth}
\centering
\input{figures/reference_point_approach_polygon_triangles_negative}%
\caption{Negative edge triangles}
\label{fig:reference_point_approach:polygon_negative}
\end{subfigure}\begin{subfigure}[b]{.5\textwidth}
\centering
\input{figures/reference_point_approach_polygon_triangles_sum}%
\caption{Calculated difference between (b) and (c)}
\label{fig:reference_point_approach:polygon_area}
\end{subfigure}
\caption[Area computation according to the edge-based approach]{Area computation of a polygon using a reference point $r$}
\label{fig:reference_point_approach:area_calculation}
\end{figure}
This leads to the following IP formulation, where the \emph{slab inequalities}~(\ref{ip:reference_point_approach:slabs}) ensure
that the polygon has a counterclockwise orientation, while (\ref{ip:reference_point_approach:subtours}) are \emph{subtour constraints}.
\begin{equation}
\{\min , \max\} \sum_{e^+ \in E^r} z_{e^+} \cdot f_e - \sum_{e^- \in E^r} z_{e^-} \cdot f_e \label{ip:reference_point_approach:objective}\\
\end{equation}
\begin{align}
\vphantom{\sum_{i=1}^{m}} \forall s_i \in S: \qquad& \sum_{(j,i) \in \delta^+(s_i)} z_{ji} = 1 \label{ip:reference_point_approach:in_degree}\\
\vphantom{\sum_{i=1}^{m}} \forall s_i \in S: \qquad& \sum_{(i,j) \in \delta^-(s_i)} z_{ij} = 1 \label{ip:reference_point_approach:out_degree}\\
\vphantom{\sum_{i=1}^{m}} \forall e=\{i,j\} \in E: \qquad& z_{ij} + z_{ji} \leq 1 \label{ip:reference_point_approach:one_of_edges}\\
\vphantom{\sum_{i=1}^{m}} \forall \text{ intersecting}\quad \{i,j\}, \{k,l\} \in E: \qquad& z_{ij} + z_{ji} + z_{kl} + z_{lk} \leq 1 \label{ip:reference_point_approach:intersections}\\
\vphantom{\sum_{i=1}^{m}} (\forall \text{ slabs}\quad D) (\forall m=1,\ldots,\vert D \vert): \qquad& \sum_{i=1}^{m} \left(z_{e_{i_D}^{lr}}-z_{e_{i_D}^{rl}}\right) \geq 0 \label{ip:reference_point_approach:slabs}\\
\vphantom{\sum_{i=1}^{m}} \forall \emptyset \neq D \subsetneq S: \qquad& \begin{array}{l}\sum_{(k,l) \in \delta^-(D)} z_{kl} \geq 1 \\ \sum_{(k,l) \in \delta^+(D)} z_{kl} \geq 1\end{array} \label{ip:reference_point_approach:subtours}\\
\forall e \in E^+ \mathrel{\dot\cup} E^-: \qquad& z_e \in \{0,1\}
\end{align}
As there are $\Theta(n^2)$ possible edges, the number of intersection constraints may be as big as $\Theta(n^4)$. Moreover,
the number of subtour constraints~(\ref{ip:reference_point_approach:subtours}) may be exponential, so they are only added
when necessary in an incremental fashion.
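The incremental separation of the subtour constraints~(\ref{ip:reference_point_approach:subtours}) can be sketched as follows: in an integral solution where every vertex has in- and out-degree one, the chosen edges decompose into directed cycles, and any cycle that misses some vertices identifies a violated cut. The data layout below (a successor map) is an illustrative assumption:

```python
def find_subtours(successor):
    """Decompose a successor map (each vertex has out-degree exactly 1) into
    its cycles. If more than one cycle is returned, each cycle's vertex set D
    yields violated subtour cuts over delta^+(D) and delta^-(D)."""
    unvisited = set(successor)
    cycles = []
    while unvisited:
        start = next(iter(unvisited))
        cycle, v = [], start
        while v in unvisited:
            unvisited.discard(v)
            cycle.append(v)
            v = successor[v]  # follow the chosen outgoing edge
        cycles.append(cycle)
    return cycles
```

A solution is feasible for the subtour constraints exactly when a single cycle covering all of $S$ is returned.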
\subsection{Triangle-Based Formulation}
\label{sec:triangle}
An alternative is the \emph{triangle-based formulation}, which considers the set $T(P)$ of
up to $\binom{n}{3}$ empty triangles of a point set $P$; see Figure~\ref{fig:triangulation_approach:max_triangles}
for an illustration. Making use of the fact that a simple polygon with $n$ vertices can be partitioned into $n-2$ empty
triangles with pairwise non-intersecting interiors, we get the following IP formulation.
\begin{figure}[h]
\centering
\input{figures/triangulation_approach_max_triangles}
\caption[Empty triangle amount bound]{A set of five points and its ten empty triangles.}
\label{fig:triangulation_approach:max_triangles}
\end{figure}
\begin{equation}
\{\min , \max\} \sum_{{\scaleto{\triangle}{5pt}} \in T} f_{\scaleto{\triangle}{5pt}} \cdot x_{\scaleto{\triangle}{5pt}} \label{ip:triangulation_approach:objective}\\
\end{equation}
\begin{align}
\vphantom{\sum_{i=1}^{n}} \qquad& \sum_{{\scaleto{\triangle}{5pt}} \in T} x_{\scaleto{\triangle}{5pt}} = n-2 \label{ip:triangulation_approach:triangle_count}\\
\vphantom{\sum_{i=1}^{n}} \forall s_i \in S: \qquad& \sum_{{\scaleto{\triangle}{5pt}} \in \delta(s_i)} x_{\scaleto{\triangle}{5pt}} \geq 1 \label{ip:triangulation_approach:in_each_point}\\
\vphantom{\sum_{i=1}^{m}} \forall \text{intersecting}\quad \triangle_i, \triangle_j \in T: \qquad& x_{{\scaleto{\triangle}{5pt}}_i} + x_{{\scaleto{\triangle}{5pt}}_j} \leq 1 \label{ip:triangulation_approach:intersections}\\
\vphantom{\sum_{i=1}^{n}} \forall \emptyset \neq D \subsetneq T, \vert D \vert \leq n-3: \qquad& \sum_{{\scaleto{\triangle}{5pt}} \in D} x_{\scaleto{\triangle}{5pt}} \leq \sum_{{\scaleto{\triangle}{5pt}} \in \delta(D)} x_{\scaleto{\triangle}{5pt}} + \vert D \vert - 1 \label{ip:triangulation_approach:subtours}\\
\vphantom{\sum_{i=1}^{n}} \forall \triangle \in T: \qquad& x_{\scaleto{\triangle}{5pt}} \in \{0,1\}
\end{align}
As there are $\Theta(n^3)$ possible empty triangles, the number of intersection constraints may be as big as $\Theta(n^6)$. Again,
the number of subtour constraints~(\ref{ip:triangulation_approach:subtours}) may be exponential, so they are only added
when necessary in an incremental fashion.
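The set $T(P)$ can be enumerated by brute force, which is adequate for the instance sizes considered here ($n \le 25$); the following sketch (helper names are our own) tests each of the $\binom{n}{3}$ candidate triangles for emptiness:

```python
from itertools import combinations

def orient(p, q, r):
    """Twice the signed area of triangle (p, q, r)."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])

def point_in_triangle(s, a, b, c):
    """True if s lies inside or on the boundary of triangle (a, b, c)."""
    d1, d2, d3 = orient(a, b, s), orient(b, c, s), orient(c, a, s)
    has_neg = min(d1, d2, d3) < 0
    has_pos = max(d1, d2, d3) > 0
    return not (has_neg and has_pos)

def empty_triangles(points):
    """Enumerate T(P): all non-degenerate triangles containing no other point."""
    result = []
    for a, b, c in combinations(points, 3):
        if orient(a, b, c) == 0:
            continue  # degenerate (collinear) triple
        if not any(point_in_triangle(s, a, b, c)
                   for s in points if s not in (a, b, c)):
            result.append((a, b, c))
    return result
```

For points in convex position every triangle is empty, matching the ten empty triangles of the five-point example in Figure~\ref{fig:triangulation_approach:max_triangles}.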
\subsection{Enhancing the Integer Programs}
\label{sec:enhance}
Given the considerable size of the described IP formulations, we employed a number of enhancements to improve efficiency.
For points on the \textbf{convex hull}, only a reduced number of neighbors need to be considered.
Employing good \textbf{initial solutions} improves the performance in branch-and-bound searching;
we used a number of greedy heuristics, as well as the $\frac{1}{2}$-approximation of Fekete.
The large number of corresponding inequalities made it particularly important to deal with \textbf{intersections}
in an efficient manner: we condensed the constraints for cliques of intersecting objects into single
inequalities, and introduced special \emph{halfspace inequalities} for the triangle-based approach.
Further increases in efficiency were obtained by careful choices of how to \textbf{branch on variables}
and careful maintenance of \textbf{subtour constraints}.
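The clique-condensation step for intersection constraints can be sketched as follows: instead of one inequality per intersecting pair, a greedy clique cover of the intersection graph emits a single inequality $\sum_{o \in Q} x_o \le 1$ per clique $Q$. This is a simplified sketch under our own assumptions; the paper does not specify the exact condensation procedure:

```python
from itertools import combinations

def clique_condense(conflicts, items):
    """Greedily cover the conflict (intersection) graph with cliques; each
    clique Q replaces its pairwise constraints by one inequality sum x_o <= 1."""
    adj = {o: set() for o in items}
    for a, b in conflicts:
        adj[a].add(b)
        adj[b].add(a)
    uncovered = set(map(frozenset, conflicts))
    cliques = []
    while uncovered:
        a, b = next(iter(uncovered))
        clique = {a, b}
        for o in items:  # greedily extend to a maximal clique
            if o not in clique and clique <= adj[o]:
                clique.add(o)
        cliques.append(clique)
        uncovered -= {frozenset(p) for p in combinations(clique, 2)}
    return cliques
```

Every pairwise constraint is dominated by the clique inequality that covers it, so the condensed model is at least as tight with far fewer rows.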
% Source: https://arxiv.org/abs/2111.05386 (Computing Area-Optimal Simple Polygonizations)
% Source: https://arxiv.org/abs/1408.4895
\title{Simple Parametrization Methods for Generating Adomian Polynomials}
\begin{abstract}
In this paper, we discuss two simple parametrization methods for calculating Adomian polynomials for several nonlinear operators, which utilize the orthogonality of the functions $e^{inx}$, where $n$ is an integer. Some important properties of Adomian polynomials are also discussed and illustrated with examples. These methods require minimal computation, are easy to implement, and extend to the multivariable case. Examples of different forms of nonlinearity, including the one involved in the Navier--Stokes equation, are considered. Explicit expressions for the $n$-th order Adomian polynomials are obtained in most of the examples.
\end{abstract}
\section{Introduction}
The Adomian decomposition method (ADM) \cite{Adomian1986,Adomian1989,Adomian1994} provides an analytical approximate solution for nonlinear functional equation in terms of a rapidly converging series, without linearization, perturbation or discretization. Consider a functional equation
\begin{equation}\label{1.1}
u=f+L(u)+N(u),
\end{equation}
\noindent where $L$ and $N$ are respectively, linear and nonlinear operators from a Hilbert space $H$ into $H$ and $f$ is a known function in $H$. In ADM, the solution $u(x,t)$ of (\ref{1.1}) is decomposed in the form of an infinite series given by
\begin{equation}\label{1.2}
u(x,t)= \sum_{k=0}^{\infty}u_k(x,t).
\end{equation}
\noindent Further, the nonlinear function $N(u)$ is assumed to admit the representation
\begin{equation}\label{1.3}
N(u)=\sum_{k=0}^{\infty}A_k(u_0,u_1,\ldots,u_k),
\end{equation}
where $A_k$'s are called $k$-th order Adomian polynomials. In the linear case $N(u)=u$, $A_k$ simply reduces to $u_k$. Adomian's method is simple in principle, but involves tedious calculations of Adomian polynomials. Adomian \cite{Adomian1986} gave a method for determining these Adomian polynomials, by parametrizing $u(x,t)$ as
\begin{equation}\label{1.4}
u_\lambda(x,t)=\sum_{k=0}^{\infty}u_k(x,t)\lambda^k
\end{equation}
and assuming $N(u_\lambda)$ to be analytic in $\lambda$, which decomposes as
\begin{equation}\label{1.5}
N(u_\lambda)=\sum_{k=0}^{\infty}A_k(u_0,u_1,\ldots,u_k)\lambda^k.
\end{equation}
Hence, the Adomian polynomials $A_n$ are given by
\begin{equation}\label{1.6}
A_n(u_0,u_1,\ldots,u_n)=\left.\frac{1}{n!}\frac{\partial^n N(u_\lambda)}{\partial \lambda^n} \right|_{\lambda=0},\ \forall\ n\in\mathbb{N}_0,
\end{equation}
where $\mathbb{N}_0=\mathbb{N}\cup \{0\}$ and $\mathbb{N}$ denotes the set of positive integers. Rach \cite{Adomian1989,Adomian1994,Rach1984415} suggested the following formula for determining Adomian polynomials:
\begin{eqnarray}\label{1.7}
A_0(u_0)&=&N(u_0),\nonumber\\
A_n(u_0,u_1,...,u_n)&=&\sum_{k=1}^{n}C(k,n)N^{(k)}(u_0),\ \forall\ n\in\mathbb{N},
\end{eqnarray}
\noindent where $C(k,n)$ is the product (or sum of products) of $k$ components of $u(x,t)$ whose subscripts sum to $n$ divided by the factorial of the number of repeated subscripts, that is,
\begin{equation}\label{1.8}
C(k,n)=\underset{\sum_{j=1}^nk_j=k\ ,\ k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{n}jk_j=n}}\prod_{j=1}^{n}\frac{u_j^{k_j}}{k_j!}.
\end{equation}
Wazwaz \cite{Wazwaz200033} suggested a new algorithm in which, after separating $A_0=N(u_0)$ from the other terms of the Taylor series expansion of the nonlinear function $N(u)$, we collect all terms of the expansion such that the sum of the subscripts of the components of $u(x,t)$ in each term is the same. The limitation of this algorithm is that it becomes difficult to keep track of the terms after a few steps. Zhu \textit{et al.} \cite{Zhu2005402} suggested another useful method, but it also involves tedious calculations of the $n$-th derivative to obtain $A_n$. Biazar and Shafiof \cite{Biazar2007975} proposed a recursive method to calculate Adomian polynomials, in which only one differentiation is required at each step. However, the disadvantage is that it does not yield an explicit form for the $A_n$'s.
In this paper, we develop a general parametrization technique for calculating Adomian polynomials and discuss some of their important properties. Indeed, we develop two new simple methods to generate Adomian polynomials using the orthogonality of the functions $\{e^{inx}, n\in\mathbb{Z}\}$. The first method determines these polynomials explicitly, whereas the second generates them recursively. The newly developed techniques are more viable, require less computation and generate Adomian polynomials in fewer steps. Both methods are extended to the case of several variables. Different forms of nonlinearity are discussed as applications of our methods.
\section{Adomian polynomials and parametrization methods}
\setcounter{equation}{0}
\noindent We assume the following hypotheses \cite{Cherruault1993103}:
\begin{itemize}
\item[$H1:$] The series solution $u=\sum_{k=0}^{\infty}u_k$ of (\ref{1.1}) is absolutely convergent and,
\item[$H2:$] The nonlinear function $N(u)$ is developable into an entire series with a convergence radius equal to infinity, that is,
\begin{equation}\label{2.1}
N(u)=\sum_{k=0}^{\infty}N^{(k)}(0)\frac{u^k}{k!}\ ,\ \ \ \ |u|<\infty.
\end{equation}
\end{itemize}
The second assumption is almost always satisfied in concrete physical problems. By $H1$ and $H2$, we have Adomian series as a generalization of Taylor series \cite{Cherruault1993103},
\begin{equation}\label{2.2}
N(u)=\sum_{k=0}^{\infty}A_k(u_0,u_1,\ldots,u_k)=\sum_{k=0}^{\infty}N^{(k)}(u_0)\frac{(u-u_0)^k}{k!}.
\end{equation}
Note that (\ref{2.2}) is a rearrangement of an absolutely convergent series (\ref{2.1}). We look at a more general form of parametrization than the one given in (\ref{1.4}). That is, we consider
\begin{equation}\label{2.3}
u_\lambda(x,t)=\sum_{k=0}^{\infty}u_k(x,t)f^k(\lambda),
\end{equation}
\noindent where $\lambda$ is a real parameter and $f$ is any real or complex valued function with $|f|<1$. Note that for such a parametrization function $f$, the series (\ref{2.3}) is also absolutely convergent.
\begin{remark}\label{r1}
If the chosen parametrization function $f$ in (\ref{2.3}) is complex valued and the complex conjugate $\overline{u}(x,t)$ of $u(x,t)$ appears in the nonlinear function $N(u)$, then $\overline{u}(x,t)$ is parametrized as
\begin{equation}\label{2.4}
\overline{u}_\lambda(x,t)=\sum_{k=0}^{\infty}\overline{u}_k(x,t)f^k(\lambda).
\end{equation}
\end{remark}
\noindent Now substituting (\ref{2.3}) in (\ref{2.2}) we have
\begin{equation}\label{2.5}
N(u_\lambda)=\sum_{k=0}^{\infty}N^{(k)}(u_0)\frac{\left(\sum_{j=1}^{\infty}u_j(x,t)f^j(\lambda)\right)^k}{k!}\ .
\end{equation}
Since $\sum_{j=1}^{\infty}u_j(x,t)f^j(\lambda)$ is absolutely convergent, we can rearrange $N(u_\lambda)$ in a series form of the type, $\sum_{k=0}^{\infty}A_kf^k(\lambda)$. Therefore, from (\ref{2.5}) we collect the coefficients $A_k$ of $f^k(\lambda)$, which leads to Adomian polynomials. That is,
\begin{eqnarray}\label{2.6}
N(u_\lambda)&=&N(u_0)+N^{(1)}(u_0)\left(u_1f(\lambda)+u_2f^2(\lambda)+\ldots\right)\nonumber\\&&+\frac{N^{(2)}(u_0)}{2!}\left(u_1f(\lambda)+u_2f^2(\lambda)+\ldots\right)^2\nonumber\\&&+\frac{N^{(3)}(u_0)}{3!}\left(u_1f(\lambda)+u_2f^2(\lambda)+\ldots\right)^3+\ldots\nonumber\\
&=&N(u_0)+N^{(1)}(u_0)u_1f(\lambda)+\left(N^{(1)}(u_0)u_2+N^{(2)}(u_0)\frac{u_1^2}{2!}\right)f^2(\lambda)\nonumber\\&&
+\left(N^{(1)}(u_0)u_3+N^{(2)}(u_0)u_1u_2+N^{(3)}(u_0)\frac{u_1^3}{3!}\right)f^3(\lambda)+\ldots \nonumber\\
&=&\sum_{k=0}^{\infty}A_k(u_0,u_1,\ldots,u_k)f^k(\lambda).
\end{eqnarray}
Also note that the $A_k$'s are polynomials in $u_0,u_1,\ldots,u_k$ only. For a suitable choice of $f$, one can develop a convenient method to determine these Adomian polynomials. One such method was given by Adomian himself, who chose $f(\lambda)=\lambda$ and, taking the $n$-th derivative on both sides of (\ref{2.6}), obtained (\ref{1.6}). In Section 4, we choose $f(\lambda)=e^{i\lambda}$ and develop two new methods to determine Adomian polynomials.
\section{Some properties of Adomian polynomials}
\setcounter{equation}{0}
In this section, we discuss some important properties of Adomian polynomials, which are very useful; in many cases, they yield the Adomian polynomials of certain nonlinear operators without explicit calculations. As far as the calculation of Adomian polynomials is concerned, formal power series can be used efficiently. Formal power series are purely algebraic objects and can be defined without the notion of convergence. In order to obtain Adomian polynomials, we utilize some well-known operations on formal power series.\\
Let $f$ and $g$ be formal power series in $x$ with $f(x)=\sum_{k=0}^{\infty}a_kx^k$ and $g(x)=\sum_{k=0}^{\infty}b_kx^k$. Then,
\begin{equation}\label{3.1}
\frac{g(x)}{f(x)}=\sum_{k=0}^{\infty}c_kx^k,\ \ c_0=\frac{b_0}{a_0},\ c_k=\frac{1}{a_0}\left(b_k-\sum_{j=1}^{k}a_jc_{k-j}\right),
\end{equation}
and
\begin{equation}\label{3.2}
f^n(x)=\sum_{k=0}^{\infty}c_kx^k,\ \ c_0=a_0^n,\ c_k=\frac{1}{ka_0}\sum_{j=1}^{k}(jn-k+j)a_jc_{k-j},
\end{equation}
provided $a_0$ is invertible in the ring of scalars.\\
\begin{theorem}\label{t1}
Let $A_{1_n},A_{2_n},\ldots,A_{m_n},\ n\geq1,$ be the Adomian polynomials corresponding to nonlinear operators $N_1,N_2,\ldots,N_m$, respectively. Then the Adomian polynomials of
\begin{itemize}
\item[(i)] $N(u)=\sum_{k=1}^{m}\alpha_k N_k(u)$ are given by $A_n=\sum_{k=1}^{m}\alpha_k A_{k_n}\ \forall\ n\in\mathbb{N}_0$, where the $\alpha_k$ are scalars.
\item[(ii)] $N(u)=\prod_{k=1}^{m}N_k(u)$ are given by
\begin{equation}\label{3.3}
A_n=\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{m}k_j=n}}\prod_{j=1}^{m}A_{j_{k_j}},\ \forall\ n\in\mathbb{N}_0.
\end{equation}
In particular, Adomian polynomials of $N(u)=N_1(u)N_2(u)$ are
\begin{equation*}
A_n=\sum_{k=0}^{n}A_{1_k}A_{2_{n-k}}.
\end{equation*}
\item[(iii)] $N(u)=\frac{N_1(u)}{N_2(u)}$ are given by $A_0=\frac{A_{1_0}}{A_{2_0}}$ and
\begin{equation}\label{3.4}
A_n=\frac{1}{A_{2_0}}\left(A_{1_n}-\sum_{k=1}^{n}A_{2_k}A_{n-k}\right),\ \forall\ n\in\mathbb{N}.
\end{equation}
\item[(iv)] $N(u)=N^p_1(u)$ for any $p\in\mathbb{N}$ are given by $A_0=A^p_{1_0}$ and
\begin{equation}\label{3.5}
A_n=\frac{1}{nA_{1_0}}\sum_{k=1}^{n}(kp-n+k)A_{1_k}A_{n-k},\ \forall\ n\in\mathbb{N}.
\end{equation}
\item[(v)] $N(u)=N_1\left(N_2(u)\right)$ are given by $A_0=N_1\left(A_{2_0}\right)$ and
\begin{equation}\label{3.6}
A_n=\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{n}jk_j=n}}N_1^{(\sum_{j=1}^{n}k_j)}\left(A_{2_0}\right)\prod_{j=1}^{n}\frac{A_{2_j}^{k_j}}{k_j!},\ \forall\ n\in\mathbb{N}.
\end{equation}
\end{itemize}
\end{theorem}
\begin{proof}
\begin{itemize}
\item[(i)] Directly follows from (\ref{1.6}).
\item[(ii)] Note that Leibniz rule \cite{Johnson02thecurious} for higher derivatives of product of $m$ functions is given by
\begin{equation}\label{3.7}
\frac{d^n}{dt^n}\left(f_1(t)f_2(t)\ldots f_m(t)\right)=\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{m}k_j=n}}n!\prod_{j=1}^{m}\frac{f_j^{(k_j)}(t)}{k_j!}.
\end{equation}
Using (\ref{1.6}) and (\ref{3.7}), the Adomian polynomials are
\begin{eqnarray*}
A_n(u_0,u_1,\ldots,u_n)&=&\left.\frac{1}{n!}\frac{\partial^n N_1(u_\lambda)N_2(u_\lambda)\ldots N_m(u_\lambda)}{\partial \lambda^n} \right|_{\lambda=0}\\
&=&\frac{1}{n!}\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{m}k_j=n}}n!\prod_{j=1}^{m}\left.\frac{1}{k_j!}\frac{\partial^{k_j}N_j(u_\lambda)}{\partial\lambda^{k_j}}\right|_{\lambda=0}\\
&=&\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{m}k_j=n}}\prod_{j=1}^{m}A_{j_{k_j}},\ \forall\ n\in\mathbb{N}_0.
\end{eqnarray*}
\item[(iii)] Follows directly from (\ref{1.5}) and (\ref{3.1}) whereas (iv) follows from (\ref{1.5}) and (\ref{3.2}) .
\item[(v)] Adomian \cite{Adomian1986504} proposed an algorithm for the Adomian polynomials of composite nonlinearity. We hereby give an explicit formula for the same by using Fa\`{a} di Bruno's formula \cite{Johnson02thecurious} for generalized chain rule for higher derivatives of composition of two functions given by
\begin{equation}\label{3.8}
\frac{d^n}{dt^n}g\big(f(t)\big)=\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{n}jk_j=n}}n!g^{(\sum_{j=1}^{n}k_j)}\big(f(t)\big)\prod_{j=1}^{n}\frac{1}{k_j!}\left(\frac{f^{(j)}(t)}{j!}\right)^{k_j},\ \forall\ n\in\mathbb{N}.
\end{equation}
Hence, from (\ref{1.6}) and using (\ref{3.8}), we have
\begin{eqnarray*}
A_n(u_0,u_1,\ldots,u_n)&=&\left.\frac{1}{n!}\frac{\partial^n N_1\left(N_2(u_\lambda)\right)}{\partial \lambda^n} \right|_{\lambda=0}\\
&=&\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{n}jk_j=n}}N_1^{(\sum_{j=1}^{n}k_j)}\left(N_2(u_\lambda)\right)\prod_{j=1}^{n}\left.\frac{1}{k_j!}\left(\frac{1}{j!}\frac{\partial^jN_2(u_\lambda)}{\partial \lambda^j}\right)^{k_j}\right|_{\lambda=0}\\
&=&\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{n}jk_j=n}}N_1^{(\sum_{j=1}^{n}k_j)}\left(A_{2_0}\right)\prod_{j=1}^{n}\frac{A_{2_j}^{k_j}}{k_j!},\ \forall\ n\in\mathbb{N}.
\end{eqnarray*}
\end{itemize}
\end{proof}
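Property (iv) can be checked numerically: with $N_1(u)=u$ (so that $A_{1_k}=u_k$), the recursion reproduces the Adomian polynomials of $u^p$, which for $p=2$ must agree with the convolution $\sum_{k=0}^{n}u_ku_{n-k}$. A minimal numeric sketch with illustrative names:

```python
def power_adomian(u, p, nmax):
    """Adomian polynomials of N(u) = N_1(u)^p via the recursion of
    Theorem 1(iv), specialized to N_1(u) = u so that A_{1,k} = u_k.
    Works on component *values*, not symbols; requires u[0] != 0."""
    A = [u[0] ** p]  # A_0 = A_{1,0}^p
    for n in range(1, nmax + 1):
        s = sum((k * p - n + k) * u[k] * A[n - k] for k in range(1, n + 1))
        A.append(s / (n * u[0]))
    return A
```

The same recursion with $p=3$ yields $A_2 = 3u_0^2u_2 + 3u_0u_1^2$, the familiar second-order polynomial of $u^3$.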
\begin{remark} Rach's formula (\ref{1.7}) is a particular case of (\ref{3.6}) for the composite function $N(u_\lambda)$.
\end{remark}
\section{Two new methods to calculate Adomian polynomials}
\setcounter{equation}{0}
In this section, we give two new methods to calculate Adomian polynomials. The basic idea is to avoid the tedious calculations of higher derivatives involved in prevalent methods. Consider the set of orthogonal functions $\{e^{inx}, n\in\mathbb{Z}\},$ which indeed forms a basis for the Hilbert space $L^2[-\pi,\pi]$ with inner product
\begin{equation*}\label{4.1}
\langle f,g\rangle=\int^\pi_{-\pi} f(x)\overline{g(x)} \,dx.
\end{equation*}
Specifically, we use the fact
\begin{equation}\label{4.2}
\langle e^{in\lambda},e^{im\lambda}\rangle=\int^\pi_{-\pi} e^{in\lambda} e^{-im\lambda} \,d\lambda =\left\{
\begin{array}{ll}
0 & \mbox{if } m\neq n,\\
2\pi & \mbox{if } m=n.
\end{array}
\right.
\end{equation}
\noindent We choose $f(\lambda)=e^{i\lambda}$ in (\ref{2.3}), to obtain
\begin{equation}\label{4.3}
u_\lambda=\sum_{k=0}^{\infty}u_k e^{ik\lambda}
\end{equation}
and from Remark \ref{r1}, its complex conjugate, $\overline{u}(x,t)$ is parametrized as $\overline{u}_\lambda=\sum_{k=0}^{\infty}\overline{u}_k e^{ik\lambda}$.
\begin{remark}
Note that $u_\lambda$ in (\ref{4.3}), as a function of $\lambda$, is a series of periodic functions each of period $2\pi$ and therefore $N(u_\lambda)$ is also $2\pi$-periodic. The absolute convergence of $u_\lambda(x,t)=\sum_{k=0}^{\infty}u_k e^{ik\lambda}$ and $N(u_\lambda)$ follow from hypotheses $H1$ and $H2$. Also, for parametrization (\ref{4.3}), Adomian polynomials for the nonlinear function $N(u)$ turn out to be the Fourier coefficients of the periodic function $N(u_\lambda)$.
\end{remark}
\begin{theorem}\label{t2}
Let $u_\lambda=\sum_{k=0}^{\infty}u_k e^{ik\lambda}$ be a parametrized representation of $u(x,t)$, where $\lambda$ is a real parameter and $N$ be the nonlinear function defined in (\ref{1.1}). Then,
\begin{equation}\label{4.4}
\int^\pi_{-\pi} N\left(u_\lambda\right) e^{-in\lambda} \,d\lambda = \int^\pi_{-\pi} N\left(\sum_{k=0}^{n}u_k e^{ik\lambda}\right) e^{-in\lambda} \,d\lambda,\ \forall\ n\in\mathbb{N}_0.
\end{equation}
\end{theorem}
\begin{proof}
From the first assumption $H1$, $\sum_{k=0}^{\infty}|u_k|<\infty$; set $M=\sum_{j=1}^{\infty}|u_j|$. Then each term of the series (\ref{2.2}) is bounded as
\begin{eqnarray*}\label{4.5}
\left|\frac{N^{(k)}(u_0)}{k!}\left(\sum_{j=1}^{\infty}u_j e^{ij\lambda}\right)^k\right| &\leq&
\left|\frac{N^{(k)}(u_0)}{k!}\right|\left(\sum_{j=1}^{\infty}|u_j|\right)^k=\left|\frac{N^{(k)}(u_0)}{k!}\right|M^k.
\end{eqnarray*}
Since (\ref{2.2}) is an absolutely convergent series with infinite radius of convergence, $\sum_{k=0}^{\infty}\left|\frac{N^{(k)}(u_0)}{k!}\right|M^k$ converges. By the Weierstrass M-test, the series
\begin{equation*}
\sum_{k=0}^{\infty}\frac{N^{(k)}(u_0)}{k!}\left(\sum_{j=1}^{\infty}u_j e^{ij\lambda}\right)^k
\end{equation*}
converges uniformly. Hence, using (\ref{2.2}), we get for $n\in\mathbb{N}_0$
\begin{eqnarray*}\label{new}
\int^\pi_{-\pi} N(u_\lambda) e^{-in\lambda} \,d\lambda
&=& \int^\pi_{-\pi}\sum_{k=0}^{\infty} \frac{N^{(k)}(u_0)}{k!}\left(\sum_{j=1}^{n}u_j e^{ij\lambda}+\sum_{j=n+1}^{\infty}u_j e^{ij\lambda}\right)^ke^{-in\lambda} \,d\lambda\nonumber\\
&=& \int^\pi_{-\pi}\underset{m\rightarrow \infty}{\lim}\sum_{k=0}^{m} \frac{N^{(k)}(u_0)}{k!}\left(\sum_{j=1}^{n}u_j e^{ij\lambda}+\sum_{j=n+1}^{\infty}u_j e^{ij\lambda}\right)^ke^{-in\lambda} \,d\lambda\nonumber\\
&=&\underset{m\rightarrow \infty}{\lim}\int^\pi_{-\pi}\sum_{k=0}^{m}\frac{N^{(k)}(u_0)}{k!}\left(\sum_{j=1}^{n}u_j e^{ij\lambda}+\sum_{j=n+1}^{\infty}u_j e^{ij\lambda}\right)^ke^{-in\lambda} \,d\lambda\\
&=& \underset{m\rightarrow \infty}{\lim}\sum_{k=0}^{m}\int^\pi_{-\pi} \frac{N^{(k)}(u_0)}{k!}\left(\sum_{j=1}^{n}u_j e^{ij\lambda}\right)^ke^{-in\lambda} \,d\lambda\ \ \ \ \mathrm{(using\ (\ref{4.2}))}\\
&=& \underset{m\rightarrow \infty}{\lim}\int^\pi_{-\pi} \sum_{k=0}^{m}\frac{N^{(k)}(u_0)}{k!}\left(\sum_{j=1}^{n}u_j e^{ij\lambda}\right)^ke^{-in\lambda} \,d\lambda\\
&=& \int^\pi_{-\pi}\sum_{k=0}^{\infty}\frac{N^{(k)}(u_0)}{k!}\left(\sum_{j=0}^{n}u_j e^{ij\lambda}-u_0\right)^ke^{-in\lambda} \,d\lambda\\
&=& \int^\pi_{-\pi} N\left(\sum_{k=0}^{n}u_k e^{ik\lambda}\right) e^{-in\lambda} \,d\lambda,
\end{eqnarray*}
where the last step follows from (\ref{2.2}). This completes the proof.
\end{proof}
\noindent Using Theorem \ref{t2}, we propose two new methods to calculate Adomian polynomials.
\subsubsection*{First Method}
\noindent Let $u_\lambda=\sum_{k=0}^{\infty}u_k e^{ik\lambda}$, and $N(u_\lambda)=\sum_{k=0}^{\infty}A_k e^{ik\lambda},$ where $A_k$'s are Adomian polynomials. Then
\begin{equation}\label{4.6}
\int^\pi_{-\pi} N\left(\sum_{k=0}^{\infty}u_k e^{ik\lambda}\right) e^{-in\lambda} \,d\lambda =\int^\pi_{-\pi} \sum_{k=0}^{\infty}A_k e^{ik\lambda} e^{-in\lambda}\,d\lambda=2\pi A_n.
\end{equation}
The last equality in (\ref{4.6}) follows due to the uniform convergence of $\sum_{k=0}^{\infty}A_k e^{i(k-n)\lambda}$ and by using (\ref{4.2}). Hence,
\begin{equation}\label{4.7}
A_n(u_0,u_1,\ldots,u_n)=\frac{1}{2\pi}\int^\pi_{-\pi} N\left(\sum_{k=0}^{\infty}u_k e^{ik\lambda}\right) e^{-in\lambda} \,d\lambda.
\end{equation}
Applying Theorem \ref{t2},
\begin{equation}\label{4.8}
A_n(u_0,u_1,\ldots,u_n)=\frac{1}{2\pi}\int^\pi_{-\pi} N\left(\sum_{k=0}^{n}u_k e^{ik\lambda}\right) e^{-in\lambda} \,d\lambda,\ \forall\ n\in\mathbb{N}_0.
\end{equation}
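Formula (\ref{4.8}) lends itself to direct numerical evaluation: sampling $\lambda$ on a uniform grid turns the integral into a discrete Fourier coefficient, which is exact up to rounding whenever the integrand is a trigonometric polynomial of degree below the sample count. A minimal Python sketch with illustrative names:

```python
import cmath

def adomian_first_method(N, u, n, samples=256):
    """Numerically evaluate A_n = (1/2pi) * integral of
    N(sum_{k=0}^n u_k e^{ik*lambda}) e^{-in*lambda} d(lambda)
    via a uniform grid (a discrete Fourier coefficient)."""
    total = 0j
    for m in range(samples):
        lam = 2 * cmath.pi * m / samples
        ulam = sum(u[k] * cmath.exp(1j * k * lam) for k in range(n + 1))
        total += N(ulam) * cmath.exp(-1j * n * lam)
    return total / samples
```

For $N(u)=u^2$ this reproduces the convolution polynomials $\sum_{k}u_ku_{n-k}$, and for $N(u)=e^u$ it reproduces $A_1=u_1e^{u_0}$.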
\subsubsection*{Second Method}
\noindent We can also calculate Adomian polynomials recursively. Define an operator $T$ by
\begin{equation}\label{4.9}
T(A_{n}(u_0,u_1,\ldots,u_{n}))=\frac{1}{2\pi}\int^\pi_{-\pi}A_{n}(v_0,v_1,\ldots,v_{n})e^{-i\lambda} \,d\lambda,
\end{equation}
where\ $v_k=u_k+(k+1)u_{k+1}e^{i\lambda}$ and in view of Remark \ref{r1}, we put $\overline{v}_k=\overline{u}_k+(k+1)\overline{u}_{k+1}e^{i\lambda}$, $\ \forall\ k\in\{0,1,2,\ldots,n\}$.
\begin{lemma}\label{l4.1}
Let $u=\sum_{k=0}^{\infty}u_k$ be the solution of (\ref{1.1}) and $N$ be a nonlinear operator. Then, operator $T$ given by (\ref{4.9}) satisfies the following properties.
\begin{itemize}
\item[(i)] $T(u_k)=(k+1)u_{k+1},\ \forall\ k\in\mathbb{N}_0,$
\item[(ii)] $T(N^{(k)}(u_0))=u_1N^{(k+1)}(u_0),\ \forall\ k\in\mathbb{N}_0,$
\item[(iii)] $T(u_{k_1}u_{k_2}\ldots u_{k_m})=u_{k_1}T(u_{k_2}u_{k_3}\ldots u_{k_m})+u_{k_2}u_{k_3}\ldots u_{k_m}T(u_{k_1}),\ \forall\ m\in\mathbb{N},\ m\geq2,$
\item[(iv)] $T(u_{k_1}\ldots u_{k_m}N^{(k)}(u_0))=u_{k_1}\ldots u_{k_m}T(N^{(k)}(u_0))+T(u_{k_1}\ldots u_{k_m})N^{(k)}(u_0),\ \forall\ m\in\mathbb{N},\ m\geq2,$
\item[(v)] $T(\alpha u_{k_1}\ldots u_{k_m}N^{(k)}(u_0)+\beta u_{j_1}\ldots u_{j_l}N^{(k')}(u_0))=\alpha T(u_{k_1}\ldots u_{k_m}N^{(k)}(u_0))\\+\beta T(u_{j_1}\ldots u_{j_l}N^{(k')}(u_0)),\ \forall\ m,l\in\mathbb{N},\ m,l\geq2,\ where\ \alpha,\beta\ are\ scalars.$
\end{itemize}
\end{lemma}
\begin{proof}
Parts (i), (iii) and (v) follow easily by using (\ref{4.2}).
\begin{itemize}
\item[(ii)] From (\ref{4.9}), we have
\begin{equation}\label{4.10}
T(N^{(k)}(u_0))=\frac{1}{2\pi}\int^\pi_{-\pi}N^{(k)}(u_0+u_1e^{i\lambda})e^{-i\lambda} \,d\lambda,\ \forall\ k\in\mathbb{N}_0.
\end{equation}
From (\ref{4.8}), the left-hand side of (\ref{4.10}) is $A_1$ for $N^{(k)}(u)$, which by (\ref{1.7}) equals $u_1N^{(k+1)}(u_0)$.
\item[(iv)] Using (\ref{2.2}) and (\ref{4.2}), we get
\begin{equation}\label{4.11}
\frac{1}{2\pi}\int^\pi_{-\pi}N^{(k)}(u_0+u_1e^{i\lambda})e^{-in\lambda} \,d\lambda=0,\ \forall\ n\in\mathbb{N}.
\end{equation}
From (\ref{4.9}) and (\ref{4.11}),
\begin{eqnarray*}
&&T(u_{k_1}\ldots u_{k_m}N^{(k)}(u_0))\\&=&\frac{1}{2\pi}\int^\pi_{-\pi}\prod_{j=1}^{m}(u_{k_j}+(k_j+1)u_{k_j+1}e^{i\lambda})N^{(k)}(u_0+u_1e^{i\lambda})e^{-i\lambda} \,d\lambda\\
&=&\prod_{j=1}^{m}u_{k_j}\frac{1}{2\pi}\int^\pi_{-\pi}N^{(k)}(u_0+u_1e^{i\lambda})e^{-i\lambda} \,d\lambda\\
&&+\sum_{l=1}^{m}(k_l+1)u_{k_l+1}\underset{j\neq l}{\prod_{j=1}^{m}}u_{k_j}\frac{1}{2\pi}\int^\pi_{-\pi}N^{(k)}(u_0+u_1e^{i\lambda}) \,d\lambda\\
&=&u_{k_1}\ldots u_{k_m}T(N^{(k)}(u_0))+T(u_{k_1}\ldots u_{k_m})N^{(k)}(u_0)\ \forall\ m\geq2.
\end{eqnarray*}
\end{itemize}
The last equality follows from (\ref{4.10}) and Theorem \ref{t2}.
\end{proof}
\noindent For an operator $T$ satisfying the above properties, the following result due to Babolian and Javadi \cite{Babolian2004253} holds:
\begin{equation}\label{4.12}
A_n(u_0,u_1,\ldots,u_n)=\frac{1}{n}T(A_{n-1}(u_0,u_1,\ldots,u_{n-1})).
\end{equation}
After calculating $A_0$ from (\ref{4.8}) as
\begin{equation}\label{4.13}
A_0(u_0)=N(u_0),
\end{equation}
$A_n$ can be calculated by the following recursive formula, obtained using (\ref{4.9}) and (\ref{4.12}),
\begin{equation}\label{4.14}
A_n(u_0,u_1,\ldots,u_n)=\frac{1}{2n\pi}\int^\pi_{-\pi} A_{n-1}(v_0,v_1,\ldots,v_{n-1})e^{-i\lambda} \,d\lambda,\ \forall\ n\in \mathbb{N},
\end{equation}
where\ $v_k=u_k+(k+1)u_{k+1}e^{i\lambda}$ and $\overline{v}_k=\overline{u}_k+(k+1)\overline{u}_{k+1}e^{i\lambda}$, $\ \forall\ k\in\{0,1,2,\ldots,n-1\}$.
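The recursion (\ref{4.13})--(\ref{4.14}) can likewise be checked numerically by nesting the integrals: each level evaluates $A_{n-1}$ at the shifted components $v_k$. The following sketch is our own numeric illustration (its cost grows as $\text{samples}^n$, so it is only for small orders); for $N(u)=u^2$ it reproduces the convolution polynomials:

```python
import cmath

def adomian_second_method(N, u, nmax, samples=64):
    """Recursive second method: A_0 = N(u_0) and
    A_n = (1/(2n*pi)) * integral of A_{n-1}(v_0,...,v_{n-1}) e^{-i*lambda},
    with v_k = u_k + (k+1) u_{k+1} e^{i*lambda}. A_{n-1} is itself evaluated
    by the same numeric recursion, so n integrals are nested."""
    def A(n, comps):
        if n == 0:
            return N(comps[0])
        total = 0j
        for m in range(samples):
            lam = 2 * cmath.pi * m / samples
            v = [comps[k] + (k + 1) * comps[k + 1] * cmath.exp(1j * lam)
                 for k in range(n)]
            total += A(n - 1, v) * cmath.exp(-1j * lam)
        return total / (n * samples)
    return [A(n, u[:n + 1]) for n in range(nmax + 1)]
```

The uniform grid makes each level a discrete Fourier coefficient, exact for polynomial nonlinearities of low degree.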
\section{New methods applied to different forms of nonlinearity}
\setcounter{equation}{0}
We will frequently apply the first method to calculate Adomian polynomials, as it is much simpler. By using the proposed methods and the properties of Adomian polynomials, $A_n$ can be determined easily without manipulation of indices, rearrangement of infinite series or any calculation of derivatives. The second method is efficient in cases where a Taylor series expansion is required, for example for exponential, logarithmic and trigonometric nonlinearities. The advantage is that the second algorithm requires at most the first two terms of the Taylor series expansion. Applications of the properties of Adomian polynomials discussed in Section 3 are also illustrated.
\subsection{Nonlinear polynomials}
\begin{example}
(By First Algorithm) Adomian polynomials for $N(u)=u^m$, where $m\in\mathbb{N}$.\\
\noindent We use (\ref{4.8}) to find $A_n$. Obviously, $A_0=u_0^m$ and
\begin{eqnarray*}
A_1&=&\frac{1}{2\pi}\int^\pi_{-\pi} (u_0 + u_1e^{i\lambda})^m e^{-i\lambda} \,d\lambda\\
&=&\frac{1}{2\pi}\int^\pi_{-\pi} \left[\sum_{k=0}^{m}{\binom{m}{k}}u_0^k(u_1e^{i\lambda})^{m-k}\right]e^{-i\lambda} \,d\lambda\\
&=&mu_0^{m-1}u_1,\\
A_2&=&\frac{1}{2\pi}\int^\pi_{-\pi} (u_0 + u_1e^{i\lambda} + u_2e^{2i\lambda} )^me^{-2i\lambda} \,d\lambda\\
&=&\frac{1}{2\pi}\int^\pi_{-\pi} \left[\sum_{\sum_{j=1}^3k_j=m}{\binom{m}{k_1,k_2,k_3}}u_0^{k_1}(u_1e^{i\lambda})^{k_2}(u_2e^{2i\lambda})^{k_3}\right]e^{-2i\lambda} \,d\lambda\\
&=&mu_0^{m-1}u_2+\frac{1}{2}m(m-1)u_0^{m-2}u_1^2.
\end{eqnarray*}
\noindent Similarly $A_3, A_4,\ldots$ can be calculated. Indeed, from Theorem \ref{t1} (ii), the $n$-th order Adomian polynomial is given by
\begin{equation}\label{5.1}
A_n(u_0,u_1,\ldots,u_n)=\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^m{k_j}=n}}\ \prod_{j=1}^{m} u_{k_j},\ \forall\ n\in \mathbb{N}_0.
\end{equation}
Note that from Theorem \ref{t1} (iv), $A_0=u_0^m$ and $A_n=\frac{1}{nu_0}\sum_{k=1}^{n}(km-n+k)u_kA_{n-k}.$
\end{example}
\begin{remark}
The case $m=2$ with $N(u)=u^2$ appears in the nonlinear fractional wave equation
\begin{equation}\label{5.2}
(D_t^\frac{3}{2}-D_t^\frac{1}{2})u+u_{xx}+u^2=0,\ u(x,0)=x,\ u_t(x,0)=\sin{x},\ t>0.
\end{equation}
Here, $D_t^{\alpha}$ is the Caputo fractional derivative of order $\alpha$, defined by
\begin{equation}\label{5.3}
D_t^{\alpha}u(x,t) =\left\{
\begin{array}{ll}
\frac{1}{\Gamma{(m-\alpha)}}\int^t_{0} (t-\tau)^{m-\alpha-1}\frac{\partial^mu(x,\tau)}{\partial\tau^m}\,d\tau, & \mbox{for } m-1<\alpha<m,\\\\
\frac{\partial^mu(x,t)}{\partial t^m}, & \mbox{for } \alpha=m\in\mathbb{N}.
\end{array}
\right.
\end{equation}
Gejji and Bhalekar \cite{DaftardarGejji2008113} used the existing method to calculate Adomian polynomials. From (\ref{5.1}), the Adomian polynomials for $u^2$ are
\begin{equation}\label{5.4}
A_n(u_0,u_1,\ldots,u_n)=\sum_{k=0}^{n}u_ku_{n-k},\ \forall\ n\in\mathbb{N}_0,
\end{equation}
or recursively, $A_0=u_0^2$ and $A_n=\frac{1}{nu_0}\sum_{k=1}^{n}(3k-n)u_kA_{n-k},\ \forall\ n\in\mathbb{N}.$
\end{remark}
\subsection{Nonlinear fractional derivatives}
\begin{example}\label{e5.2}
Let $m\in\mathbb{N}$ and consider $N(u)=u^mL(u)$, where $L$ is a fractional differential or integral operator given by (\ref{5.3}) or (\ref{7.5}), respectively. For the linear part $L(u)$, the Adomian polynomials are simply $L(u_n)$. By using Theorem \ref{t1} (ii), we get
\begin{eqnarray}\label{5.5}
A_n(u_0,u_1,\ldots,u_n)&=&\sum_{k=0}^{n}B_kL(u_{n-k})\nonumber\\
&=&\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{m+1}{k_j}=n}}\ \prod_{j=1}^{m} u_{k_j}L(u_{k_{m+1}}),\ \forall\ n\in \mathbb{N}_0,
\end{eqnarray}
\noindent where $B_k$'s are the Adomian polynomials for $u^m$ given by (\ref{5.1}).
\end{example}
\begin{remark}
When $m=2$ and $L(u)=\frac{\partial u}{\partial x}\ ,$ the nonlinear term in
\begin{equation}\label{5.6}
u_t+u^2\frac{\partial u}{\partial x}=0,\ u(x,0)=3x,
\end{equation}
is $N(u)=u^2\frac{\partial u}{\partial x}$. Wazwaz \cite{Wazwaz200033} used the sum-of-the-indices technique, and later Biazar \textit{et al.} \cite{Biazar2003523} calculated the Adomian polynomials using different approaches. From (\ref{5.5}), the $n$-th order Adomian polynomial is
\begin{equation}\label{5.7}
A_n(u_0,u_1,\ldots,u_n) =\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{3}k_j=n}}u_{k_1}u_{k_2}\frac{\partial u_{{k_3}}}{\partial x},\ \forall\ n\in \mathbb{N}_0.
\end{equation}
The case $m=1$ with $L(u)=D_x^{\beta}u$, that is, $N(u)=u D_x^{\beta}u$, appears in the time- and space-fractional nonlinear Burgers' equation,
\begin{eqnarray}\label{5.8}
D_t^{\alpha}u=vD_x^{\beta}(D_x^{\beta}u)-\lambda u D_x^{\beta}u,\ u(x,0)=x^2,\ t>0,\ 0<\alpha,\beta\leq 1.
\end{eqnarray}
Gepreel \cite{Gepreel2012636} used existing techniques to compute the Adomian polynomials for $N(u)=uD_x^{\beta}u$, where $D_x^{\beta}u$ is given by (\ref{5.3}). From (\ref{5.5}), we obtain
\begin{equation}\label{5.9}
A_n(u_0,u_1,\ldots,u_n) =\sum_{k=0}^{n}u_kD_x^{\beta}u_{n-k},\ \forall\ n\in \mathbb{N}_0.
\end{equation}
\end{remark}
\subsection{Trigonometric and hyperbolic functions}
\begin{example}\label{e5.3}
(By First Algorithm) Adomian polynomials for $N(u)= \cosh{u}+\sin{u}$.\\
\noindent Using (\ref{4.8}), $A_0=\cosh{u_0}+\sin{u_0}$, and
\begin{eqnarray*}
A_1&=&\frac{1}{2\pi}\int^\pi_{-\pi} \left[\cosh({u_0+u_1e^{i\lambda}})+\sin({u_0+u_1e^{i\lambda}})\right]e^{-i\lambda}\,d\lambda\\
&=&\frac{1}{2\pi}\int^\pi_{-\pi} \left[\cosh{u_1e^{i\lambda}}\cosh{u_0} +\sinh{u_1e^{i\lambda}}\sinh{u_0} \right.\\&&\left.+\cos{u_1e^{i\lambda}}\sin{u_0} +\sin{u_1e^{i\lambda}}\cos{u_0}\right]e^{-i\lambda} \,d\lambda\\
&=&\frac{1}{2\pi}\int^\pi_{-\pi} \left[\left\{1+\frac{u_1^2e^{2i\lambda}}{2!}+\ldots\right\}\cosh{u_0}+\left\{u_1e^{i\lambda}+\frac{u_1^3e^{3i\lambda}}{3!}+\ldots\right\}\sinh{u_0}\right.\\
&&\left.+\left\{1-\frac{u_1^2e^{2i\lambda}}{2!}+\ldots\right\}\sin{u_0}+ \left\{u_1e^{i\lambda}-\frac{u_1^3e^{3i\lambda}}{3!}+\ldots\right\}\cos{u_0}\right]e^{-i\lambda}\,d\lambda\\
&=&u_1(\cos{u_0}+\sinh{u_0}),\\
A_2&=&\frac{1}{2\pi}\int^\pi_{-\pi} \left[\cosh({u_0+u_1e^{i\lambda}+u_2e^{2i\lambda}})+\sin({u_0+u_1e^{i\lambda}+u_2e^{2i\lambda}})\right]e^{-2i\lambda}\,d\lambda\\
&=&\frac{1}{2\pi}\int^\pi_{-\pi} \left[\left\{1+\frac{(u_1e^{i\lambda}+u_2e^{2i\lambda})^2}{2!}+\ldots\right\}\cosh{u_0}+\left\{(u_1e^{i\lambda}+u_2e^{2i\lambda})+\ldots\right\}\sinh{u_0}\right.\\&&\left.+\left\{1-\frac{(u_1e^{i\lambda}+u_2e^{2i\lambda})^2}{2!}+\ldots\right\}\sin{u_0}+ \left\{(u_1e^{i\lambda}+u_2e^{2i\lambda})-\ldots\right\}\cos{u_0}\right]e^{-2i\lambda}\,d\lambda\\
&=&\frac{1}{2}u_1^2(\cosh{u_0}-\sin{u_0})+u_2(\cos{u_0}+\sinh{u_0}),\\
A_3&=&\frac{1}{2\pi}\int^\pi_{-\pi} \left[\cosh({u_0+u_1e^{i\lambda}+u_2e^{2i\lambda}+u_3e^{3i\lambda}})\right.\\&&\left.+\sin({u_0+u_1e^{i\lambda}+u_2e^{2i\lambda}+u_3e^{3i\lambda}})\right]e^{-3i\lambda}\,d\lambda\\
&=&\frac{1}{2\pi}\int^\pi_{-\pi} \left[\left\{1+\frac{(u_1e^{i\lambda}+u_2e^{2i\lambda}+u_3e^{3i\lambda})^2}{2!}+\ldots\right\}\cosh{u_0}\right.\\&&\left.+\left\{(u_1e^{i\lambda}+u_2e^{2i\lambda}+u_3e^{3i\lambda})+\frac{(u_1e^{i\lambda}+u_2e^{2i\lambda}+u_3e^{3i\lambda})^3}{3!}+\ldots\right\}\sinh{u_0}\right.\\&&+ \left.\left\{1-\frac{(u_1e^{i\lambda}+u_2e^{2i\lambda}+u_3e^{3i\lambda})^2}{2!}+\ldots\right\}\sin{u_0}\right.\\&&\left.+\left\{(u_1e^{i\lambda}+u_2e^{2i\lambda}+u_3e^{3i\lambda})-\frac{(u_1e^{i\lambda}+u_2e^{2i\lambda}+u_3e^{3i\lambda})^3}{3!}+\ldots\right\} \cos{u_0}\right]e^{-3i\lambda}\,d\lambda\\
&=&\frac{1}{6}u_1^3(\sinh{u_0}-\cos{u_0})+u_3(\sinh{u_0}+\cos{u_0})+u_1u_2(\cosh{u_0}-\sin{u_0}).
\end{eqnarray*}
\noindent Similarly, $A_4, A_5,\ldots$ can be calculated.
\end{example}
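The closed forms above are easy to confirm numerically: the integral in (4.8) can be approximated by a uniform Riemann sum over $[-\pi,\pi)$, which is spectrally accurate for smooth periodic integrands. This is our own sketch; the sample values chosen for $u_0,u_1,u_2$ are arbitrary:

```python
import numpy as np

def adomian_first_algorithm(N, u, n, num=4096):
    # A_n = (1/2 pi) * integral over [-pi, pi) of N(sum_{k<=n} u_k e^{ik lam}) e^{-in lam},
    # cf. (4.8); a uniform Riemann sum is spectrally accurate for periodic integrands.
    lam = np.linspace(-np.pi, np.pi, num, endpoint=False)
    s = sum(u[k]*np.exp(1j*k*lam) for k in range(n + 1))
    return np.mean(N(s)*np.exp(-1j*n*lam))

N = lambda z: np.cosh(z) + np.sin(z)
u = [0.3, -0.7, 0.5]          # arbitrary sample values for u0, u1, u2
u0, u1, u2 = u

A1 = adomian_first_algorithm(N, u, 1)
A2 = adomian_first_algorithm(N, u, 2)

# Closed forms from Example 5.3
A1_exact = u1*(np.cos(u0) + np.sinh(u0))
A2_exact = 0.5*u1**2*(np.cosh(u0) - np.sin(u0)) + u2*(np.cos(u0) + np.sinh(u0))

assert abs(A1 - A1_exact) < 1e-8
assert abs(A2 - A2_exact) < 1e-8
```

The agreement to machine precision reflects the fast decay of the Fourier coefficients of the analytic integrand.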
\begin{example}
Adomian polynomials for $N(u)= u^2(\cosh{u}+\sin{u})$.\\
\noindent Using Theorem \ref{t1} (ii), the $n$-th order Adomian polynomial for product of two nonlinear operators is
\begin{equation}\label{5.10}
A_n(u_0,u_1,\ldots,u_n)=\sum_{k=0}^{n}B_kC_{n-k},
\end{equation}
\noindent where $B_n$ and $C_n$ are Adomian polynomials for $u^2$ and $(\cosh{u}+\sin{u})$ respectively. From (\ref{5.4}) and Example \ref{e5.3}, Adomian polynomials for $u^2(\cosh{u}+\sin{u})$ are
\begin{eqnarray*}
A_0&=&u_0^2(\cosh{u_0}+\sin{u_0}),\\
A_1&=&u_0^2u_1(\sinh{u_0}+\cos{u_0})+2u_0u_1(\cosh{u_0}+\sin{u_0}),\\
A_2&=&(u_0^2u_2+2u_0u_1^2)(\sinh{u_0}+\cos{u_0})\\& &+(u_1^2+2u_0u_2)(\cosh{u_0}+\sin{u_0})+\frac{1}{2}u_0^2u_1^2(\cosh{u_0}-\sin{u_0}),
\end{eqnarray*}
and so on.
\end{example}
\subsection{Exponential and logarithmic functions}
\begin{example}
(By First Algorithm) Adomian polynomials for $N(u)= e^u$.\\
\noindent From (\ref{4.8}), we have $A_0= e^{u_0}$ and
\begin{eqnarray*}
A_1&=&\frac{1}{2\pi}\int^\pi_{-\pi} e^{u_0 + u_1e^{i\lambda}} e^{-i\lambda}\,d\lambda\\
&=&\frac{1}{2\pi}\int^\pi_{-\pi} e^{u_0}\left\{1+ \frac{u_1e^{i\lambda}}{1!} +\ldots\right\} e^{-i\lambda}\,d\lambda\\
&=&u_1e^{u_0},\\
A_2&=&\frac{1}{2\pi}\int^\pi_{-\pi} e^{u_0 + u_1e^{i\lambda} + u_2e^{2i\lambda}} e^{-2i\lambda}\,d\lambda\\
&=&\frac{1}{2\pi}\int^\pi_{-\pi} e^{u_0}\left\{1+ \frac{(u_1e^{i\lambda}+ u_2e^{2i\lambda})}{1!} +\frac{(u_1e^{i\lambda}+ u_2e^{2i\lambda})^2}{2!} + \ldots\right\} e^{-2i\lambda}\,d\lambda\\
&=&\left(u_2 + \frac{u_1^2}{2}\right)e^{u_0},
\end{eqnarray*}
\noindent and so on. Indeed, from Theorem \ref{t1} (v), the $n$-th order Adomian polynomial for $e^u$ is
\begin{equation}\label{5.11}
A_n(u_0,u_1,\ldots,u_n)=e^{u_0}\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{n}jk_j=n}}\prod_{j=1}^{n}\frac{u_j^{k_j}}{k_j!},\ \forall\ n\in \mathbb{N}.
\end{equation}
\end{example}
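A quick symbolic sanity check of (5.11) (our own sketch): the right-hand side is a sum over the integer partitions $n=\sum_j jk_j$, which can be compared with the derivative definition of $A_n$:

```python
import sympy as sp
from sympy.utilities.iterables import partitions

lam = sp.symbols('lambda')
n_max = 4
u = sp.symbols(f'u0:{n_max + 1}')

N = sp.exp(sum(u[k]*lam**k for k in range(n_max + 1)))

for n in range(1, n_max + 1):
    # derivative definition of A_n
    A_n = sp.diff(N, lam, n).subs(lam, 0) / sp.factorial(n)
    # closed form (5.11): sum over partitions n = sum_j j*k_j
    closed = sp.Integer(0)
    for p in partitions(n):          # p maps each part j to its multiplicity k_j
        term = sp.Integer(1)
        for j, k in p.items():
            term *= u[j]**k / sp.factorial(k)
        closed += term
    closed *= sp.exp(u[0])
    assert sp.simplify(A_n - closed) == 0
```

For instance, $n=2$ gives the two partitions $\{2\}$ and $\{1,1\}$, reproducing $A_2=(u_2+u_1^2/2)e^{u_0}$ above.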
\begin{example}
(By Second Algorithm) Adomian polynomials for $N(u)= \ln{u}$.\\
Obviously, $A_0=\ln{u_0}$, from (\ref{4.13}). Also, from (\ref{4.14}),
\begin{eqnarray*}
A_1&=&\frac{1}{2\pi}\int^\pi_{-\pi} (\ln{u_0 + u_1e^{i\lambda}}) e^{-i\lambda} \,d\lambda\\
&=&\frac{1}{2\pi}\int^\pi_{-\pi} \ln{\left[u_0\left(1 +\frac{u_1e^{i\lambda}}{u_0}\right)\right]}e^{-i\lambda} \,d\lambda\\
&=&\frac{1}{2\pi}\int^\pi_{-\pi} \left[\ln{u_0}+\left\{\frac{u_1e^{i\lambda}}{u_0}+\ldots \right\}\right] e^{-i\lambda} \,d\lambda\\
&=&\frac{u_1}{u_0},\\
A_2&=&\frac{1}{4\pi}\int^\pi_{-\pi} \frac{(u_1+2u_2e^{i\lambda})}{(u_0+u_1e^{i\lambda})}e^{-i\lambda}\,d\lambda\\
&=&\frac{1}{4\pi}\int^\pi_{-\pi} \frac{(u_1+2u_2e^{i\lambda})}{u_0}\left(1+\frac{u_1e^{i\lambda}}{u_0}\right)^{-1}e^{-i\lambda} \,d\lambda\\
&=&\frac{1}{4\pi}\int^\pi_{-\pi} \frac{(u_1+2u_2e^{i\lambda})}{u_0}\left\{1-\frac{u_1e^{i\lambda}}{u_0}+\ldots\right\}e^{-i\lambda}\,d\lambda\\
&=&\frac{u_2}{u_0} -\frac{u_1^2}{2u_0^2},
\end{eqnarray*}
and so on. Indeed, from Theorem \ref{t1} (v), we get
\begin{equation*}\label{5.12}
A_n(u_0,u_1,\ldots,u_n)=\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{n}jk_j=n}}\frac{(-1)^{\sum_{j=1}^{n}k_j-1}\left(\sum_{j=1}^{n}k_j-1\right)!}{u_0^{\sum_{j=1}^{n}k_j}}\prod_{j=1}^{n}\frac{u_j^{k_j}}{k_j!},\ \forall\ n\in \mathbb{N}.
\end{equation*}
\end{example}
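The first few polynomials above, together with $A_3$ as evaluated from the general formula (partitions $\{3\}$, $\{2,1\}$, $\{1,1,1\}$), can be confirmed symbolically. This is our own check, not part of the paper's derivation:

```python
import sympy as sp

lam = sp.symbols('lambda')
u0, u1, u2, u3 = sp.symbols('u0:4', positive=True)

# Derivative definition applied directly to N(u) = ln(u)
N = sp.log(u0 + u1*lam + u2*lam**2 + u3*lam**3)
A1 = sp.diff(N, lam, 1).subs(lam, 0)
A2 = sp.diff(N, lam, 2).subs(lam, 0) / 2
A3 = sp.diff(N, lam, 3).subs(lam, 0) / 6

assert sp.simplify(A1 - u1/u0) == 0
assert sp.simplify(A2 - (u2/u0 - u1**2/(2*u0**2))) == 0
# A3 as evaluated from the closed form above
assert sp.simplify(A3 - (u3/u0 - u1*u2/u0**2 + u1**3/(3*u0**3))) == 0
```

The `positive=True` assumption on the symbols is only there to keep the logarithm real.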
\subsection{Composite nonlinearity}
\begin{example}
(By First Algorithm) Adomian polynomials for $N(u)= e^{\sin{u}}$.\\
\noindent Using (\ref{4.8}), $A_0=e^{\sin{u_0}}$, and
\begin{eqnarray*}
A_1&=&\frac{1}{2\pi}\int^\pi_{-\pi} e^{\sin{(u_0+u_1e^{i\lambda})}}e^{-i\lambda} \,d\lambda\\
&=&\frac{e^{\sin{u_0}}}{2\pi}\int^\pi_{-\pi} e^{\left(\sin{u_1e^{i\lambda}}\cos{u_0}-2\sin^2{\frac{u_1e^{i\lambda}}{2}}\sin{u_0}\right)}e^{-i\lambda} \,d\lambda\\
&=&\frac{e^{\sin{u_0}}}{2\pi}\int^\pi_{-\pi} \left[1+\left(\left\{u_1e^{i\lambda}-\ldots\right\}\cos{u_0}-2\left\{\frac{1}{2}u_1e^{i\lambda}-\ldots\right\}^2\sin{u_0}\right)+\ldots\right]e^{-i\lambda}\,d\lambda\\
&=&u_1\cos{u_0}e^{\sin{u_0}},\\
A_2&=&\frac{1}{2\pi}\int^\pi_{-\pi} e^{\sin{(u_0+u_1e^{i\lambda}+u_2e^{2i\lambda})}} e^{-2i\lambda}\,d\lambda\\
&=&\frac{e^{\sin{u_0}}}{2\pi}\int^\pi_{-\pi} \left[1+\left(\left\{(u_1e^{i\lambda}+u_2e^{2i\lambda})-\ldots\right\}\cos{u_0}-2\left\{\frac{1}{2}(u_1e^{i\lambda}+u_2e^{2i\lambda})-\ldots\right\}^2\sin{u_0}\right)\right.\\&&+\left.\frac{1}{2!}\left(\left\{(u_1e^{i\lambda}+u_2e^{2i\lambda})-\ldots\right\}\cos{u_0}-2\left\{\frac{1}{2}(u_1e^{i\lambda}+u_2e^{2i\lambda})-\ldots\right\}^2\sin{u_0}\right)^2+\ldots\right]e^{-2i\lambda}\,d\lambda\\
&=&\left(u_2\cos{u_0}-\frac{1}{2}u_1^2\sin{u_0}+\frac{1}{2}u_1^2\cos^2{u_0}\right)e^{\sin{u_0}}.
\end{eqnarray*}
Using Theorem \ref{t1} (v), we obtain
\begin{equation}\label{5.13}
A_n(u_0,u_1,\ldots,u_n)=e^{\sin{u_0}}\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{n}jk_j=n}}\prod_{j=1}^{n}\frac{B_j^{k_j}}{k_j!},\ \forall\ n\in \mathbb{N},
\end{equation}
\noindent where $B_n$ are Adomian polynomials of $\sin{u}$, calculated in Example \ref{e5.3}.
\end{example}
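As a sanity check of $A_1$ and $A_2$ above (our own illustration), one may differentiate $e^{\sin(u_0+u_1\lambda+u_2\lambda^2)}$ directly and compare:

```python
import sympy as sp

lam = sp.symbols('lambda')
u0, u1, u2 = sp.symbols('u0:3')

# Derivative definition applied directly to N(u) = exp(sin(u))
N = sp.exp(sp.sin(u0 + u1*lam + u2*lam**2))
A1 = sp.diff(N, lam, 1).subs(lam, 0)
A2 = sp.diff(N, lam, 2).subs(lam, 0) / 2

E = sp.exp(sp.sin(u0))
assert sp.simplify(A1 - u1*sp.cos(u0)*E) == 0
assert sp.simplify(A2 - (u2*sp.cos(u0) - u1**2/2*sp.sin(u0) + u1**2/2*sp.cos(u0)**2)*E) == 0
```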
\begin{example}
Adomian polynomials for $N(u)= e^{-\sin^2\frac{u}{2}}$.\\
\noindent Adomian and Rach \cite{Adomian1986504} calculated Adomian polynomials for this nonlinear term and later Zhu \textit{et al.} \cite{Zhu2005402} used their algorithm.\\
Note that $N(u)= e^{-\sin^2\frac{u}{2}}=e^{-\frac{1}{2}}e^{\frac{1}{2}\cos{u}}$. From Theorem \ref{t1} (v), we get
\begin{equation}\label{5.14}
A_n(u_0,u_1,\ldots,u_n)=e^{-\sin^2\frac{u_0}{2}}\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{n}jk_j=n}}\prod_{j=1}^{n}\frac{B_j^{k_j}}{k_j!},\ \forall\ n\in \mathbb{N},
\end{equation}
\noindent where $B_n$ are Adomian polynomials for $\frac{1}{2}\cos{u}$, which can be easily calculated by our first method. Using (\ref{4.8}), we have $A_0=e^{-\sin^2\frac{u_0}{2}}$ and from (\ref{5.14}),
\begin{eqnarray*}
A_1&=&-\frac{u_1}{2}\sin{u_0}e^{-\sin^2{\frac{u_0}{2}}},\\
A_2&=&\left(-\frac{u_2}{2}\sin{u_0}+\frac{u_1^2}{8}\sin^2{u_0}-\frac{u_1^2}{4}\cos{u_0}\right)e^{-\sin^2{\frac{u_0}{2}}},\\
A_3&=&\left(-\frac{u_3}{2}\sin{u_0}+\frac{u_1^3}{12}\sin{u_0}+\frac{u_1u_2}{4}\sin^2{u_0}\right.\\&&\left.-\frac{u_1u_2}{2}\cos{u_0}+\frac{u_1^3}{16}\sin{2u_0}-\frac{u_1^3}{48}\sin^3{u_0}\right)e^{-\sin^2{\frac{u_0}{2}}},
\end{eqnarray*}
and so on.
\end{example}
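The listed $A_1$ and $A_2$ can again be checked against the derivative definition (our own sketch; `expand_trig` rewrites $\sin u_0$ and $\cos u_0$ in half angles so the difference cancels symbolically):

```python
import sympy as sp

lam = sp.symbols('lambda')
u0, u1, u2 = sp.symbols('u0:3')

# Derivative definition applied directly to N(u) = exp(-sin^2(u/2))
N = sp.exp(-sp.sin((u0 + u1*lam + u2*lam**2)/2)**2)
A1 = sp.diff(N, lam, 1).subs(lam, 0)
A2 = sp.diff(N, lam, 2).subs(lam, 0) / 2

E = sp.exp(-sp.sin(u0/2)**2)
A1_expected = -u1/2*sp.sin(u0)*E
A2_expected = (-u2/2*sp.sin(u0) + u1**2/8*sp.sin(u0)**2 - u1**2/4*sp.cos(u0))*E

# expand_trig puts both sides into half-angle form before comparing
assert sp.simplify(sp.expand_trig(A1 - A1_expected)) == 0
assert sp.simplify(sp.expand_trig(A2 - A2_expected)) == 0
```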
\section{Extension to nonlinearity of several variables}
\setcounter{equation}{0}
Our methods extend to the multivariable case as well. Consider the system of $m$ functional equations
\begin{equation}\label{6.1}
u_j=f_j+L_j(u_1,u_2,\ldots,u_m)+N_j(u_1,u_2,\ldots,u_m),\ j=1,2,\ldots,m.
\end{equation}
Here $L_j$ and $N_j$ are linear and nonlinear operators, respectively, and the $f_j$ are known functions. As before, we assume:
\begin{itemize}
\item[$H3:$] The solutions $u_j=\sum_{k=0}^{\infty}u_{j_k}$ of (\ref{6.1}) are absolutely convergent for $j=1,2,\ldots,m$.
\item[$H4:$] The nonlinear function $N_j(u_1,u_2,\ldots,u_m)$ is developable into an entire series with infinite radius of convergence, so that for each $j=1,2,\ldots,m$ we have,
\begin{equation}\label{6.2}
N_j(u_1,u_2,\ldots,u_m)=\sum_{k_1=0}^{\infty}\sum_{k_2=0}^{\infty}\ldots\sum_{k_m=0}^{\infty}\frac{\partial^{k_1+\ldots+k_m}N_j(0,0,\ldots,0)}{\partial u_1^{k_1}\ldots\partial u_m^{k_m}}\prod_{l=1}^{m}\frac{u_l^{k_l}}{k_l!}.
\end{equation}
\end{itemize}
Since (\ref{6.2}) is absolutely convergent, it can be rearranged as
\begin{eqnarray}\label{6.3}
N_j(u_1,u_2,\ldots,u_m)&=&\sum_{k=0}^{\infty}A_{j_k}(u_{1_0},\ldots,u_{1_k},u_{2_0},\ldots,u_{2_k},\ldots,u_{m_0},\ldots,u_{m_k})\ \nonumber \\
&=&\sum_{k_1=0}^{\infty}\sum_{k_2=0}^{\infty}\ldots\sum_{k_m=0}^{\infty}\frac{\partial^{k_1+\ldots+k_m}N_j(u_0,\ldots,u_0)}{\partial u_1^{k_1}\ldots\partial u_m^{k_m}}\prod_{l=1}^{m}\frac{(u_l-u_0)^{k_l}}{k_l!}.\nonumber\\
\end{eqnarray}
Parameterize $u_j(x,t)$ and its complex conjugate $\overline{u}_j(x,t)$ as follows:
\begin{equation}\label{6.4}
u_{j_\lambda}=\sum_{k=0}^{\infty}u_{j_k}f^k(\lambda)\ \text{and}\ \overline{u}_{j_\lambda}(x,t)=\sum_{k=0}^{\infty}\overline{u}_{j_k}f^k(\lambda)\ ,\ \ \forall\ \ j=1,2,\ldots,m,
\end{equation}
\noindent where $\lambda$ is a real parameter and $f$ is any real- or complex-valued function with $|f|<1$.\\ Since the series (\ref{6.3}) is absolutely convergent, $N_j(u_{1_\lambda},u_{2_\lambda},\ldots,u_{m_\lambda})$ can be decomposed as
\begin{equation}\label{6.5}
N_j(u_{1_\lambda},u_{2_\lambda},\ldots,u_{m_\lambda})=\sum_{k=0}^{\infty}A_{j_k}(u_{1_0},\ldots,u_{1_k},u_{2_0},\ldots,u_{2_k},\ldots,u_{m_0},\ldots,u_{m_k})f^k(\lambda),
\end{equation}
for $j=1,2,\ldots,m.$ Taking $f(\lambda)=e^{i\lambda}$, the parametrized form of $u_j(x,t)$, for each $j$, is given by
\begin{equation}\label{6.6}
u_{j_\lambda}=\sum_{k=0}^{\infty}u_{j_k} e^{ik\lambda}
\end{equation}
and its complex conjugate $\overline{u}_j(x,t)$ is parametrized as $ \overline{u}_{j_\lambda}=\sum_{k=0}^{\infty}\overline{u}_{j_k} e^{ik\lambda}$. We first give the extended version of Theorem \ref{t2} for the multivariable case.
\begin{theorem}\label{t3}
Let the parametrized representation of $u_j(x,t)$ for $j=1,2,\ldots,m$ be given by (\ref{6.6}), where $\lambda$ is a real parameter and $N_j(u_1,u_2,\ldots,u_m)$ are the nonlinear terms in (\ref{6.1}). Then
\begin{equation}\label{6.7}
\int^\pi_{-\pi} N_j(u_{1_\lambda},u_{2_\lambda},\ldots,u_{m_\lambda}) e^{-in\lambda} \,d\lambda = \int^\pi_{-\pi} N_j\left(\sum_{k=0}^{n}u_{1_k} e^{ik\lambda},\ldots,\sum_{k=0}^{n}u_{m_k} e^{ik\lambda}\right) e^{-in\lambda} \,d\lambda.
\end{equation}
\end{theorem}
\begin{proof}
For convenience, we use $m$-dimensional multi-index notation.\\ Let $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_m)$, $\textbf{u}=(u_1,u_2,\ldots,u_m)$, $\textbf{u}_\lambda=\left(\sum_{k=0}^{\infty}u_{1_k} e^{ik\lambda},\ldots,\sum_{k=0}^{\infty}u_{m_k} e^{ik\lambda}\right)$, $\textbf{u}_{n_\lambda}=\left(\sum_{k=0}^{n}u_{1_k} e^{ik\lambda},\ldots,\sum_{k=0}^{n}u_{m_k} e^{ik\lambda}\right)$, and $\textbf{u}_0=(u_0,u_0,\ldots,u_0)$. Then $|\alpha|=\sum_{k=1}^m\alpha_k$, $\alpha!=\prod_{k=1}^m\alpha_k!$, $\textbf{u}^\alpha=\prod_{k=1}^m u_k^{\alpha_k}$ and $\partial^\alpha=\prod_{k=1}^m\frac{\partial^{\alpha_k}}{\partial u^{\alpha_k}_k}$.\\
From $H3$, $\sum_{k=0}^{\infty}|u_{j_k}|=M_j<\infty$ for $j=1,2,\ldots,m$ and therefore\\
\begin{equation}\label{6.8}
\left|\partial^\alpha N_j(\textbf{u}_0)\frac{(\textbf{u}_\lambda-\textbf{u}_0)^\alpha}{\alpha!}\right|\leq
\left|\frac{\partial^\alpha N_j(\textbf{u}_0)}{\alpha!}\right|\textbf{M}^{\alpha},
\end{equation}
where $\textbf{M}=(M_1,M_2,\ldots,M_m)$. By $H4$, the series $\sum_{|\alpha|\geq0}\left|\frac{\partial^\alpha N_j(\textbf{u}_0)}{\alpha!}\right|\textbf{M}^{\alpha}$ converges, so by the Weierstrass M-test, $$\sum_{|\alpha|\geq0}\partial^\alpha N_j(\textbf{u}_0)\frac{(\textbf{u}_\lambda-\textbf{u}_0)^\alpha}{\alpha!}$$ converges uniformly in $\lambda$. Hence, for $n\in\mathbb{N}_0$, using (\ref{6.3}) we get
\begin{eqnarray*}\label{new2}
\int^\pi_{-\pi} N_j(\textbf{u}_\lambda) e^{-in\lambda} \,d\lambda
&=& \int^\pi_{-\pi}\sum_{|\alpha|\geq0}\partial^\alpha N_j(\textbf{u}_0)\frac{(\textbf{u}_\lambda-\textbf{u}_0)^\alpha}{\alpha!} e^{-in\lambda} \,d\lambda\nonumber\\
&=& \int^\pi_{-\pi}\underset{r\rightarrow \infty}{\lim}\sum_{|\alpha|\leq r}\partial^\alpha N_j(\textbf{u}_0)\frac{(\textbf{u}_\lambda-\textbf{u}_0)^\alpha}{\alpha!} e^{-in\lambda} \,d\lambda\nonumber\\
&=&\underset{r\rightarrow \infty}{\lim}\int^\pi_{-\pi}\sum_{|\alpha|\leq r}\partial^\alpha N_j(\textbf{u}_0)\frac{(\textbf{u}_\lambda-\textbf{u}_0)^\alpha}{\alpha!} e^{-in\lambda} \,d\lambda\\
&=&\underset{r\rightarrow \infty}{\lim}\sum_{|\alpha|\leq r}\int^\pi_{-\pi}\partial^\alpha N_j(\textbf{u}_0)\frac{(\textbf{u}_{n_\lambda}-\textbf{u}_0)^\alpha}{\alpha!} e^{-in\lambda} \,d\lambda\ \ \ \ \mathrm{(using (\ref{4.2}))}\\
&=&\underset{r\rightarrow \infty}{\lim}\int^\pi_{-\pi}\sum_{|\alpha|\leq r}\partial^\alpha N_j(\textbf{u}_0)\frac{(\textbf{u}_{n_\lambda}-\textbf{u}_0)^\alpha}{\alpha!} e^{-in\lambda} \,d\lambda\\
&=& \int^\pi_{-\pi}\underset{r\rightarrow \infty}{\lim}\sum_{|\alpha|\leq r}\partial^\alpha N_j(\textbf{u}_0)\frac{(\textbf{u}_{n_\lambda}-\textbf{u}_0)^\alpha}{\alpha!} e^{-in\lambda} \,d\lambda\\
&=& \int^\pi_{-\pi}\sum_{|\alpha|\geq0}\partial^\alpha N_j(\textbf{u}_0)\frac{(\textbf{u}_{n_\lambda}-\textbf{u}_0)^\alpha}{\alpha!} e^{-in\lambda} \,d\lambda\\
&=&\int^\pi_{-\pi} N_j(\textbf{u}_{n_\lambda}) e^{-in\lambda} \,d\lambda,
\end{eqnarray*}
and thus the proof is complete using (\ref{6.3}).
\end{proof}
\subsubsection*{Extension of the first method}
Note that the nonlinear terms $N_j(u_{1_\lambda},u_{2_\lambda},\ldots,u_{m_\lambda})$ decompose as
\begin{equation}\label{6.9}
N_j(u_{1_\lambda},u_{2_\lambda},\ldots,u_{m_\lambda})=\sum_{k=0}^{\infty}A_{j_k}(u_{1_0},\ldots,u_{1_k},u_{2_0},\ldots,u_{2_k},\ldots,u_{m_0},\ldots,u_{m_k}) e^{ik\lambda},
\end{equation}
for $j=1,2,\ldots,m$. To determine $A_{j_n}$, multiply (\ref{6.9}) by $e^{-in\lambda}$ and integrate both sides with respect to $\lambda$ from $-\pi$ to $\pi$ to get
\begin{equation}\label{6.10}
\int^\pi_{-\pi} N_j\left(\sum_{k=0}^{\infty}u_{1_k} e^{ik\lambda},\ldots,\sum_{k=0}^{\infty}u_{m_k} e^{ik\lambda}\right) e^{-in\lambda}\,d\lambda = \int^\pi_{-\pi} \sum_{k=0}^{\infty}A_{j_k} e^{ik\lambda} e^{-in\lambda} \,d\lambda=2\pi A_{j_n}.
\end{equation}
The last equality in (\ref{6.10}) follows due to the uniform convergence of $\sum_{k=0}^{\infty}A_{j_k} e^{i(k-n)\lambda}$. Hence,
\begin{equation}\label{6.11}
A_{j_n}(u_{1_0},\ldots,u_{1_n},\ldots,u_{m_0},\ldots,u_{m_n})=\frac{1}{2\pi}\int^\pi_{-\pi} N_j\left(\sum_{k=0}^{\infty}u_{1_k} e^{ik\lambda},\ldots,\sum_{k=0}^{\infty}u_{m_k} e^{ik\lambda}\right) e^{-in\lambda} \,d\lambda.
\end{equation}
Applying Theorem \ref{t3}, we get for $j=1,2,\ldots,m$ and $n\in \mathbb{N}_0$,
\begin{equation}\label{6.12}
A_{j_n}(u_{1_0},\ldots,u_{1_n},\ldots,u_{m_0},\ldots,u_{m_n})=\frac{1}{2\pi}\int^\pi_{-\pi} N_j\left(\sum_{k=0}^{n}u_{1_k} e^{ik\lambda},\ldots,\sum_{k=0}^{n}u_{m_k} e^{ik\lambda}\right) e^{-in\lambda} \,d\lambda.
\end{equation}
\subsubsection*{Extension of the second method}
\noindent As seen earlier, the Adomian polynomials can also be calculated recursively. We define an operator $T$ as
\begin{equation}\label{6.13}
T(A_{j_n}(u_{1_0},\ldots,u_{1_n},\ldots,u_{m_0},\ldots,u_{m_n}))=\frac{1}{2\pi}\int^\pi_{-\pi} A_{j_{n}}(v_{1_0},\ldots,v_{1_{n}},\ldots,v_{m_0},\ldots,v_{m_{n}})e^{-i\lambda} \,d\lambda,
\end{equation}
where\ $v_{j_k}=u_{j_k}+(k+1)u_{j_{k+1}}e^{i\lambda},\ \forall\ k\in\{0,1,2,\ldots,n\}$. From (\ref{6.12}), we get for $j=1,2,\ldots,m$,
\begin{equation}\label{6.14}
A_{j_0}(u_{1_0},u_{2_0},\ldots,u_{m_0})= N_j(u_{1_0},u_{2_0},\ldots,u_{m_0}).
\end{equation}
Note that the operator $T$ defined in (\ref{6.13}) satisfies all the properties of Lemma \ref{l4.1}. Therefore, applying (\ref{4.12}), we get the following recursive formula for $A_{j_n}\ (1\leq j\leq m,\ n\in\mathbb{N})$:
\begin{equation}\label{6.15}
A_{j_n}(u_{1_0},\ldots,u_{1_n},\ldots,u_{m_0},\ldots,u_{m_n})=\frac{1}{2n\pi}\int^\pi_{-\pi} A_{j_{n-1}}(v_{1_0},\ldots,v_{1_{n-1}},\ldots,v_{m_0},\ldots,v_{m_{n-1}})e^{-i\lambda} \,d\lambda,
\end{equation}
where $v_{j_k}=u_{j_k}+(k+1)u_{j_{k+1}}e^{i\lambda}$ and $\overline{v}_{j_k}=\overline{u}_{j_k}+(k+1)\overline{u}_{j_{k+1}}e^{i\lambda},\ \forall\ k\in\{0,1,2,\ldots,n-1\}$.
\begin{example}
(Extended First Method) Consider the nonlinear terms
\begin{equation*}
N_j(u_1,u_2,u_3)=u_1\frac{\partial u_j}{\partial x}+u_2\frac{\partial u_j}{\partial y}+u_3\frac{\partial u_j}{\partial z}\ \forall\ j=1,2,3.
\end{equation*}
\noindent These nonlinear terms appear in the Navier--Stokes equations for incompressible fluid flow,
\begin{equation}\label{6.16}
\frac{\partial V}{\partial t}+(V\cdot\nabla)V=\frac{\eta}{\rho}\Delta V-\frac{1}{\rho}\nabla p.
\end{equation}
\noindent Here $x,y,z$ are the spatial coordinates and $V=(u_1,u_2,u_3)$ denotes the velocity vector. Seng \textit{et al.} \cite{Seng199659} computed the Adomian polynomials for the nonlinear term $(V\cdot\nabla)V$ in (\ref{6.16}). Using our first algorithm, we calculate $A_n$ in a few steps. From (\ref{6.12}), the Adomian polynomials $A_{j_n}$ for $j=1,2,3$ are\\
\begin{eqnarray*}
A_{j_0}&=&u_{1_0}\frac{\partial u_{j_0}}{\partial x}+u_{2_0}\frac{\partial u_{j_0}}{\partial y}+u_{3_0}\frac{\partial u_{j_0}}{\partial z},\\
A_{j_1}&=&\frac{1}{2\pi}\int^\pi_{-\pi} \left[(u_{1_0}+u_{1_1}e^{i\lambda})\frac{\partial (u_{j_0}+u_{j_1}e^{i\lambda})}{\partial x}+(u_{2_0}+u_{2_1}e^{i\lambda})\frac{\partial (u_{j_0}+u_{j_1}e^{i\lambda})}{\partial y}\right.\\&&\left.+(u_{3_0}+u_{3_1}e^{i\lambda})\frac{\partial (u_{j_0}+u_{j_1}e^{i\lambda})}{\partial z} \right]e^{-i\lambda}\,d\lambda\\
&=&u_{1_0}\frac{\partial u_{j_1}}{\partial x}+u_{1_1}\frac{\partial u_{j_0}}{\partial x}+u_{2_0}\frac{\partial u_{j_1}}{\partial y}+u_{2_1}\frac{\partial u_{j_0}}{\partial y}+u_{3_0}\frac{\partial u_{j_1}}{\partial z}+u_{3_1}\frac{\partial u_{j_0}}{\partial z},\\
A_{j_2}&=&\frac{1}{2\pi}\int^\pi_{-\pi} \left[(u_{1_0}+u_{1_1}e^{i\lambda}+u_{1_2}e^{2i\lambda})\frac{\partial (u_{j_0}+u_{j_1}e^{i\lambda}+u_{j_2}e^{2i\lambda})}{\partial x}\right.\\&&\left.+(u_{2_0}+u_{2_1}e^{i\lambda}+u_{2_2}e^{2i\lambda})\frac{\partial (u_{j_0}+u_{j_1}e^{i\lambda}+u_{j_2}e^{2i\lambda})}{\partial y}\right.\\&&\left.+(u_{3_0}+u_{3_1}e^{i\lambda}+u_{3_2}e^{2i\lambda})\frac{\partial (u_{j_0}+u_{j_1}e^{i\lambda}+u_{j_2}e^{2i\lambda})}{\partial z} \right]e^{-2i\lambda} \,d\lambda\\
&=&u_{1_0}\frac{\partial u_{j_2}}{\partial x}+u_{1_1}\frac{\partial u_{j_1}}{\partial x}+u_{1_2}\frac{\partial u_{j_0}}{\partial x}+u_{2_0}\frac{\partial u_{j_2}}{\partial y}\\&&+u_{2_1}\frac{\partial u_{j_1}}{\partial y}+u_{2_2}\frac{\partial u_{j_0}}{\partial y}+u_{3_0}\frac{\partial u_{j_2}}{\partial z}+u_{3_1}\frac{\partial u_{j_1}}{\partial z}+u_{3_2}\frac{\partial u_{j_0}}{\partial z}.
\end{eqnarray*}
Thus, by using the extended first method, the Adomian polynomials are given by
\begin{equation*}\label{6.17}
A_{j_n}(u_{1_0},\ldots,u_{1_n},\ldots,u_{3_0},\ldots,u_{3_n}) =\sum_{(k,w)\in\{(1,x),(2,y),(3,z)\}}\underset{a,b\in\mathbb{N}_0}{\sum_{a+b=n}}u_{k_a}\frac{\partial u_{j_b}}{\partial w},\ \forall\ n\in\mathbb{N}_0,\ j=1,2,3.
\end{equation*}
\end{example}
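Since the integral in (6.12) simply extracts the coefficient of $e^{in\lambda}$, the computation above can be replicated symbolically by treating $e^{i\lambda}$ as a formal variable $t$ and collecting the coefficient of $t^n$. This is our own sketch; the function names are hypothetical placeholders for the components $u_{j_k}$:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
n_max = 2
# hypothetical placeholder functions for the components u_{j,k}(x, y, z)
U = [[sp.Function(f'u{j}_{k}')(x, y, z) for k in range(n_max + 1)]
     for j in range(1, 4)]

# parametrized components with e^{i lam} replaced by the formal variable t
Ut = [sum(U[j][k]*t**k for k in range(n_max + 1)) for j in range(3)]

j = 0   # check the j = 1 component; the others are symmetric
Nj = Ut[0]*sp.diff(Ut[j], x) + Ut[1]*sp.diff(Ut[j], y) + Ut[2]*sp.diff(Ut[j], z)

# A_{j,n} is the coefficient of t^n -- exactly what the integral in (6.12) extracts
A = [sp.expand(Nj).coeff(t, n) for n in range(n_max + 1)]

expected_A1 = (U[0][0]*sp.diff(U[j][1], x) + U[0][1]*sp.diff(U[j][0], x)
             + U[1][0]*sp.diff(U[j][1], y) + U[1][1]*sp.diff(U[j][0], y)
             + U[2][0]*sp.diff(U[j][1], z) + U[2][1]*sp.diff(U[j][0], z))
assert sp.simplify(A[1] - expected_A1) == 0
```

Coefficient extraction also makes clear why components $u_{j_k}$ with $k>n$ cannot contribute to $A_{j_n}$: powers of $t$ only add.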
\section{An Application to PDEs}
\setcounter{equation}{0}
Adomian polynomials for $N(u)=u^m\overline{u}$, where $m\in\mathbb{N}$, can be calculated easily by using (\ref{5.5}) as
\begin{equation}\label{7.1}
A_n(u_0,u_1,\ldots,u_n)=\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^{m+1}{k_j}=n}}\ \prod_{j=1}^{m} u_{k_j}\overline{u}_{k_{m+1}},\ \forall\ \ n\in \mathbb{N}_0.
\end{equation}
\noindent Note that when $m=2$, the nonlinear term $u^2\overline{u}$ appears in the time-fractional nonlinear Schr\"{o}dinger equation
\begin{equation}\label{7.2}
iD^\alpha_tu(x,t)+\frac{1}{2}\frac{\partial^2u}{\partial x^2} +\left|u\right|^2u=0,\ u(x,0)=e^{ix},\ 0<\alpha\leq 1,\ t>0,
\end{equation}
where $ D_t^{\alpha}$ is the Caputo fractional derivative of order $\alpha$ given by (\ref{5.3}). This equation has been solved using ADM by Rida \textit{et al.} \cite{Rida2008553}. The initial value problem (IVP) (\ref{7.2}) is equivalent to the integral equation
\begin{equation*}\label{7.3}
u(x,t)=e^{ix}+iI^\alpha_t\left(\frac{1}{2}\frac{\partial^2u}{\partial x^2}\right)+iI^\alpha_t\left(\left|u\right|^2u\right),
\end{equation*}
that is,
\begin{equation*}\label{7.4}
\sum_{k=0}^{\infty}u_k=u_0(x,t)+iI^\alpha_t\left(\sum_{k=0}^{\infty}\frac{1}{2}\frac{\partial^2u_k}{\partial x^2}\right)+iI^\alpha_t\left(\sum_{k=0}^{\infty}A_k\right),
\end{equation*}
where $I^\alpha_t$ is the Riemann--Liouville fractional integral operator of order $\alpha\geq 0$ defined by
\begin{equation}\label{7.5}
I^\alpha_tu(x,t)=\left\{
\begin{array}{ll}
\frac{1}{\Gamma{(\alpha)}}\int^t_{0} (t-\tau)^{\alpha-1}u(x,\tau)\,d\tau, & \mbox{for } \alpha>0,\\
u(x,t), & \mbox{for } \alpha=0.
\end{array}
\right.
\end{equation}
From (\ref{7.1}), the Adomian polynomials for the nonlinear term $N(u)=\left|u\right|^2u=u^2\overline{u}$ are
\begin{equation*}\label{7.6}
A_n(u_0,u_1,\ldots,u_{n-1},u_n)=\underset{k_j\in\mathbb{N}_0}{\sum_{\sum_{j=1}^3k_j=n}}u_{k_1}u_{k_2}\overline{u}_{k_3}\ ,\ \ \forall\ \ n\in \mathbb{N}_0.
\end{equation*}
Therefore, the first four Adomian polynomials are
\begin{eqnarray*}
A_0(u_0)&=&\left|u_0\right|^2u_0,\\
A_1(u_0,u_1)&=&u_0^2\overline{u}_1+2\left|u_0\right|^2u_1,\\
A_2(u_0,u_1,u_2)&=&u_0^2\overline{u}_2+u_1^2\overline{u}_0+2\left|u_1\right|^2u_0+2\left|u_0\right|^2u_2,\\
A_3(u_0,u_1,u_2,u_3)&=&u_0^2\overline{u}_3+2u_0u_1\overline{u}_2+2u_0u_2\overline{u}_1+2u_1u_2\overline{u}_0+\left|u_1\right|^2u_1+2\left|u_0\right|^2u_3.
\end{eqnarray*}
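The four polynomials above follow from the triple convolution; symbolically, one can treat the conjugates $\overline{u}_k$ as independent stand-in variables $v_k$ and compare with the derivative definition (our own check):

```python
import sympy as sp

lam = sp.symbols('lambda')
n_max = 3
u = sp.symbols(f'u0:{n_max + 1}')    # the components u_k
ub = sp.symbols(f'v0:{n_max + 1}')   # independent stand-ins v_k for the conjugates

phi = sum(u[k]*lam**k for k in range(n_max + 1))
phib = sum(ub[k]*lam**k for k in range(n_max + 1))
N = phi**2*phib                      # N(u) = u^2 * conj(u)

for n in range(n_max + 1):
    A_n = sp.diff(N, lam, n).subs(lam, 0) / sp.factorial(n)
    triple = sum(u[k1]*u[k2]*ub[n - k1 - k2]
                 for k1 in range(n + 1) for k2 in range(n + 1 - k1))
    assert sp.expand(A_n - triple) == 0

# n = 3 reproduces the listed A_3 after substituting conj(u_k) back for v_k
A3 = sp.expand(sp.diff(N, lam, 3).subs(lam, 0) / 6)
expected = (u[0]**2*ub[3] + 2*u[0]*u[1]*ub[2] + 2*u[0]*u[2]*ub[1]
            + 2*u[1]*u[2]*ub[0] + u[1]**2*ub[1] + 2*u[0]*ub[0]*u[3])
assert sp.expand(A3 - expected) == 0
```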
Applying ADM, we get
\begin{equation*}\label{7.7}
u_n=\frac{e^{ix}(it^\alpha)^n}{2^n\Gamma(n\alpha+1)},\ \forall\ n\in\mathbb{N}_0.
\end{equation*}
Hence, the solution to (\ref{7.2}) is
\begin{equation*}\label{7.8}
u(x,t)=e^{ix}E_\alpha\left(\frac{it^\alpha}{2}\right),
\end{equation*}
where $E_\alpha(x)$ is the Mittag-Leffler function \cite{Rida2008553} of order $\alpha$ defined by
\begin{equation*}\label{7.9}
E_\alpha(x)=\sum_{k=0}^{\infty}\frac{x^k}{\Gamma(\alpha k+1)}.
\end{equation*}
\section{Conclusions}
The Adomian decomposition method is a powerful method for solving nonlinear functional equations of many kinds (algebraic, differential, partial differential, integral, fractional differential, etc.). The crucial step in the method is the use of the ``Adomian polynomials'', which represent the nonlinear portion of the equation as a convergent series, without linearization, perturbation or discretization. However, the calculation of Adomian polynomials is in general tedious, since it requires a large amount of computation.
In this paper, we have discussed some important properties of Adomian polynomials and have developed two new methods which avoid the tedious calculation of the higher derivatives required by prevalent methods. Another advantage is that we do not have to keep track of the sum of the indices of the components of $u(x,t)$ at every stage. Moreover, the second algorithm is efficient in cases where a Taylor series expansion is involved, for example for exponential, logarithmic and trigonometric nonlinearities, since it requires only the first two terms of the expansion. We have illustrated our approach with typical examples.
\printbibliography
\end{document}
% Source: https://arxiv.org/abs/1109.4019
% Tate-Hochschild homology and cohomology of Frobenius algebras
\begin{abstract}
We study Tate-Hochschild homology and cohomology for a two-sided Noetherian Gorenstein algebra. These (co)homology groups are defined for all degrees, non-negative as well as negative, and they agree with the usual Hochschild (co)homology groups for all degrees larger than the injective dimension of the algebra. We prove certain duality theorems relating the Tate-Hochschild (co)homology groups in positive degree to those in negative degree, in the case where the algebra is Frobenius. We explicitly compute all Tate-Hochschild (co)homology groups for certain classes of Frobenius algebras, namely, certain quantum complete intersections.
\end{abstract}
\section{Introduction}\label{intro}
Hochschild cohomology was introduced by Hochschild in \cite{Hochschild1, Hochschild2} as a tool for
studying the structure of associative algebras. A bit later, Tate introduced a cohomology theory based on complete resolutions, which consequently defined cohomology in all degrees, positive \emph{and} negative (cf. the end of \cite{Tate}). In this paper we combine these two notions of cohomology and extend Hochschild cohomology to the `negative side,'
arriving at what we call \emph{Tate-Hochschild cohomology}.
It turns out that the `positive side' of Tate-Hochschild cohomology agrees with the usual Hochschild cohomology. We show
that in some cases the `positive' and `negative' sides are symmetric. However, this is not the case in
general, and we illustrate this by computing explicitly both sides of Tate-Hochschild cohomology
for certain classes of algebras.
More specifically, let $k$ be a field and $\Lambda$ denote a two-sided Noetherian
Gorenstein $k$-algebra. Then $\Lambda$ has a complete resolution $\mathbb T$ over the enveloping algebra
$\Lambda^e$ of $\Lambda$, and for a $\Lambda$-$\Lambda$-bimodule $B$ one can define the
Tate-Hochschild cohomology groups with coefficients in $B$ by
\[
\operatorname{\widehat{HH}}\nolimits^n(\Lambda,B)=\operatorname{H}\nolimits^n(\operatorname{Hom}\nolimits_{\Lambda^e}(\mathbb T,B))
\]
for all $n\in\mathbb Z$. (See Section \ref{tatehochschild} for details.)
When $\Lambda$ is a finite dimensional algebra and $B$ is finitely generated, then the Tate-Hochschild cohomology groups are finite dimensional vector spaces over $k$. We prove in Section
\ref{tatehochschild}
general duality results which relate the vector space dimensions of the positive cohomology to
those of the negative cohomology with coefficients in a dual module. We use these
results in Section \ref{frobeniusalg} to establish, for example, the following consequence when $\Lambda$ is moreover a Frobenius algebra:
\begin{theoremempty} Let $\Lambda$ be a Frobenius algebra, with Nakayama automorphism
$\nu$. Then
\[
\dim_k \operatorname{\widehat{HH}}\nolimits^n(\Lambda,\Lambda) = \dim_k\operatorname{\widehat{HH}}\nolimits^{-(n+1)}(\Lambda,{_{\nu^2}\Lambda_1})
\]
for all $n\in\mathbb Z$, where ${_{\nu^2}\Lambda_1}$ denotes the bimodule $\Lambda$ twisted on the right by the automorphism $\nu^2$.
\end{theoremempty}
Thus Tate-Hochschild cohomology is symmetric when $\nu$ squares to the identity automorphism, and this
is the case, for example, when $\Lambda$ is a symmetric algebra or an exterior algebra. On the other hand, for certain classes
of Frobenius algebras, Tate-Hochschild cohomology is not symmetric. In Section \ref{quantumci} we compute
the Tate-Hochschild cohomology for the quantum complete intersection $A = k \langle X,Y \rangle / (X^a, XY-qYX, Y^b)$ with $a,b\ge 2$ and $q$ not a root of unity in $k$, finding that
\[
\dim \operatorname{\widehat{HH}}\nolimits^n(A,A) = \left \{
\begin{array}{ll}
1 & \text{if } n=0 \\
2 & \text{if } n=1 \\
1 & \text{if } n=2 \\
0 & \text{if } n \neq 0,1,2. \\
\end{array}
\right.
\]
Throughout the paper we also treat the homology version, \emph{Tate-Hochschild homology}.
It turns out that Tate-Hochschild homology behaves quite differently from the cohomology.
For example, in Section \ref{frobeniusalg} we give the companion to the theorem above, showing
that Tate-Hochschild homology is always symmetric when $\Lambda$ is a Frobenius algebra. This result was first proved in \cite{EuSchedler}.
\begin{theoremempty} Let $\Lambda$ be a Frobenius algebra. Then
\[
\dim_k \operatorname{\widehat{HH}}\nolimits_n(\Lambda,\Lambda) = \dim_k\operatorname{\widehat{HH}}\nolimits_{-(n+1)}(\Lambda,\Lambda)
\]
for all $n\in\mathbb Z$.
\end{theoremempty}
Again, this theorem is a consequence of more general duality statements which we prove in Section \ref{tatehochschild}.
\section{Tate-Hochschild (co)homology}\label{tatehochschild}
Let $k$ be a commutative ring and $\Lambda$ a $k$-algebra. We denote by $\Lambda^{\operatorname{op}\nolimits}$ the opposite algebra of $\Lambda$, and by $\Lambda^{\e}$ the enveloping algebra $\Lambda
\otimes_k \Lambda^{\operatorname{op}\nolimits}$ of $\Lambda$. The $k$-dual $\operatorname{Hom}\nolimits_k(-,k)$ is denoted by
$D(-)$, and the ring dual $\operatorname{Hom}\nolimits_{\Lambda}(-, \Lambda )$ by $(-)^*$.
The classical Hochschild cohomology groups of an algebra were introduced by Hochschild in \cite{Hochschild1, Hochschild2}. For every non-negative integer $n$, let $Q_n$ denote the $n$-fold tensor product $\Lambda \otimes_k \cdots \otimes_k \Lambda$ of $\Lambda$ over $k$, with $Q_0=k$. If $B$ is a $\Lambda$-$\Lambda$-bimodule, the corresponding \emph{Hochschild cohomology complex}
$$\cdots \to 0 \to 0 \to H^0 \xrightarrow{\partial^0} H^1 \xrightarrow{\partial^1} H^2 \xrightarrow{\partial^2} H^3 \to \cdots$$
is defined as follows:
$$H^n = \left \{
\begin{array}{ll}
0 & \text{for } n<0, \\
B & \text{for } n=0, \\
\operatorname{Hom}\nolimits_k(Q_n,B) & \text{for } n>0,
\end{array} \right.$$
with differentiation given by
\begin{eqnarray*}
( \partial^0b)( \lambda ) & = & \lambda b-b \lambda \\
( \partial^nf)( \lambda_1 \otimes \cdots \otimes \lambda_{n+1} ) & = & \lambda_1 f( \lambda_2 \otimes \cdots \otimes \lambda_{n+1} ) \\
& + & \sum_{i=1}^n(-1)^if( \lambda_1 \otimes \cdots \otimes \lambda_i \lambda_{i+1} \otimes \cdots \otimes \lambda_{n+1} ) \\
& + & (-1)^{n+1} f( \lambda_1 \otimes \cdots \otimes \lambda_n ) \lambda_{n+1}.
\end{eqnarray*}
The cohomology of this complex is the \emph{Hochschild cohomology of $\Lambda$, with coefficients in $B$}. We denote this by $\operatorname{HH}\nolimits^* ( \Lambda, B)$. The homological counterpart to Hochschild cohomology is defined using tensor product instead of the $\operatorname{Hom}\nolimits$-functor. The \emph{Hochschild homology complex}
$$\cdots \to H_3 \xrightarrow{\partial_3} H_2 \xrightarrow{\partial_2} H_1 \xrightarrow{\partial_1} H_0 \to 0 \to 0 \to \cdots$$
is defined as follows:
$$H_n = \left \{
\begin{array}{ll}
0 & \text{for } n<0, \\
B & \text{for } n=0, \\
B \otimes_k Q_n & \text{for } n>0,
\end{array} \right.$$
with differentiation given by
\begin{eqnarray*}
\partial_n ( b \otimes \lambda_1 \otimes \cdots \otimes \lambda_n ) & = & b \lambda_1 \otimes \lambda_2 \otimes \cdots \otimes \lambda_n \\
& + & \sum_{i=1}^{n-1}(-1)^i b \otimes \lambda_1 \otimes \cdots \otimes \lambda_i \lambda_{i+1} \otimes \cdots \otimes \lambda_n \\
& + & (-1)^n \lambda_n b \otimes \lambda_1 \otimes \cdots \otimes \lambda_{n-1}.
\end{eqnarray*}
The homology of this complex is the \emph{Hochschild homology of $\Lambda$, with coefficients in $B$}. We denote this by $\operatorname{HH}\nolimits_* ( \Lambda, B)$.
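Unwinding these differentials in low degrees recovers familiar invariants; the following standard identifications may serve as orientation. A $0$-cocycle is an element $b \in B$ with $\lambda b = b \lambda$ for all $\lambda \in \Lambda$, so $\operatorname{HH}\nolimits^0( \Lambda, \Lambda )$ is the center of $\Lambda$. A $1$-cocycle is a $k$-linear map $f \colon \Lambda \to B$ satisfying $f( \lambda_1 \lambda_2 ) = \lambda_1 f( \lambda_2 ) + f( \lambda_1 ) \lambda_2$, that is, a derivation, and the $1$-coboundaries are precisely the inner derivations $\lambda \mapsto \lambda b - b \lambda$; thus $\operatorname{HH}\nolimits^1( \Lambda, B)$ is the space of derivations modulo inner derivations. On the homology side, $\partial_1( b \otimes \lambda ) = b \lambda - \lambda b$, so that
$$\operatorname{HH}\nolimits_0( \Lambda, \Lambda ) = \Lambda / [ \Lambda, \Lambda ],$$
where $[ \Lambda, \Lambda ]$ denotes the $k$-span of all commutators.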
When the algebra $\Lambda$ is projective as a module over the ground ring $k$, the Hochschild cohomology and homology groups can be interpreted using $\operatorname{Ext}\nolimits$ and $\operatorname{Tor}\nolimits$ over the enveloping algebra $\Lambda^{\e}$. Namely, for each non-negative integer $n$, let $P_n = Q_{n+2}$, that is, the $(n+2)$-fold tensor product of $\Lambda$ over $k$. We endow $P_n$ with a left $\Lambda^{\e}$-module structure (that is, a bimodule structure) by defining
$$( \lambda \otimes \lambda' )( \lambda_0 \otimes \cdots \otimes \lambda_{n+1} ) = \lambda \lambda_0 \otimes \cdots \otimes \lambda_{n+1} \lambda',$$
and for each $n \ge 1$, define a bimodule homomorphism $P_n \xrightarrow{d_n} P_{n-1}$ by
$$\lambda_0 \otimes \cdots \otimes \lambda_{n+1} \mapsto \sum_{i=0}^n(-1)^i \lambda_0 \otimes \cdots \otimes \lambda_i \lambda_{i+1} \otimes \cdots \otimes \lambda_{n+1}.$$
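These maps form a complex; as a routine check at the augmentation, note that
$$( \mu \circ d_1 )( \lambda_0 \otimes \lambda_1 \otimes \lambda_2 ) = \mu ( \lambda_0 \lambda_1 \otimes \lambda_2 - \lambda_0 \otimes \lambda_1 \lambda_2 ) = \lambda_0 \lambda_1 \lambda_2 - \lambda_0 \lambda_1 \lambda_2 = 0,$$
and a similar pairwise cancellation of terms shows that $d_n \circ d_{n+1} = 0$ for all $n \ge 1$.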
The sequence
$$\mathbb{S} \colon \cdots \to P_3 \xrightarrow{d_3} P_2 \xrightarrow{d_2} P_1 \xrightarrow{d_1} P_0 \xrightarrow{\mu} \Lambda \to 0$$
of bimodules and homomorphisms, where $\mu$ is the multiplication map, is exact (cf.\ \cite[p.\ 174-175]{CartanEilenberg}), and we denote by $\mathbb{S}_{\Lambda}$ the complex obtained by deleting $\Lambda$. Since $P_n$ and $\Lambda^{\e} \otimes_k Q_n$ are isomorphic as $\Lambda^{\e}$-modules, adjointness gives
$$\operatorname{Hom}\nolimits_{\Lambda^{\e}}(P_n,B) \cong \operatorname{Hom}\nolimits_k(Q_n, \operatorname{Hom}\nolimits_{\Lambda^{\e}}( \Lambda^{\e},B)) \cong \operatorname{Hom}\nolimits_k(Q_n,B),$$
and the Hochschild cohomology complex is isomorphic to the complex $\operatorname{Hom}\nolimits_{\Lambda^{\e}}( \mathbb{S}_{\Lambda}, B)$ (where we view $B$ as a left $\Lambda^{\e}$-module). Similarly, the Hochschild homology complex is isomorphic to the complex $B \otimes_{\Lambda^{\e}} \mathbb{S}_{\Lambda}$ (where we view $B$ as a right $\Lambda^{\e}$-module). Now, if $\Lambda$ is projective as a module over $k$, then so is $Q_n$, hence the functor $\operatorname{Hom}\nolimits_k(Q_n,-)$ is exact. By adjointness, this functor is isomorphic to the functor $\operatorname{Hom}\nolimits_{\Lambda^{\e}}(P_n,-)$, and therefore $P_n$ is a projective bimodule. Thus the sequence $\mathbb{S}$ is a projective bimodule resolution of $\Lambda$, giving isomorphisms
\begin{eqnarray*}
\operatorname{HH}\nolimits^*( \Lambda, B ) & \cong & \operatorname{Ext}\nolimits_{\Lambda^{\e}}^*( \Lambda, B) \\
\operatorname{HH}\nolimits_*( \Lambda, B ) & \cong & \operatorname{Tor}\nolimits^{\Lambda^{\e}}_*( B, \Lambda ).
\end{eqnarray*}
The Hochschild cohomology of an algebra lives only in positive degrees, as does the Hochschild homology. The focus of this paper is a (co)homological theory which extends the classical one. In order to give the definition, we recall some general notions from \cite{AvramovMartsinkovsky}. Suppose $\Lambda$ is a two-sided Noetherian Gorenstein ring, say of Gorenstein dimension $d$. That is to say, the injective dimensions of $\Lambda$, both as a left and as a right module over itself, are equal to $d$. Then every finitely generated left $\Lambda$-module $M$ admits a \emph{complete resolution}
$$\mathbb{T} \colon \cdots \to T_2 \to T_1 \to T_0 \to T_{-1} \to T_{-2} \to \cdots,$$
i.e.\ an acyclic complex of finitely generated projective modules with the following properties
(see \cite[Theorem 3.2]{AvramovMartsinkovsky}):
\begin{enumerate}
\item the dual complex $\mathbb{T}^*$ is acyclic,
\item there exists a projective resolution $\mathbb{P}$ of $M$ and a chain map $\mathbb{T} \xrightarrow{f} \mathbb{P}$ with the property that $f_n$ is bijective for $n \ge d$.
\end{enumerate}
Property (2) implies that $\mathbb{T}$ is ``eventually'' a projective resolution of $M$. Given another $\Lambda$-module $N$ and an integer $n \in \mathbb{Z}$, the \emph{Tate cohomology group} $\operatorname{\widehat{Ext}}\nolimits_{\Lambda}^n(M,N)$ is the $n$th cohomology of the complex $\operatorname{Hom}\nolimits_{\Lambda}( \mathbb{T},N)$. If $N$ is a right module, the \emph{Tate homology group} $\operatorname{\widehat{Tor}}\nolimits^{\Lambda}_n(N,M)$ is the $n$th homology of the complex $N \otimes_{\Lambda} \mathbb{T}$. Naturally, the Tate (co)homology is independent of the complete resolution of $M$, and, in the homological case, it can be computed using a complete resolution of $N$ \cite{ChristensenJorgensen}. Moreover, by property (2) there are isomorphisms
\begin{eqnarray*}
\operatorname{\widehat{Ext}}\nolimits_{\Lambda}^n(M,N) & \cong & \operatorname{Ext}\nolimits_{\Lambda}^n(M,N) \\
\operatorname{\widehat{Tor}}\nolimits^{\Lambda}_n(N,M) & \cong & \operatorname{Tor}\nolimits^{\Lambda}_n(N,M)
\end{eqnarray*}
for all $n \ge d+1$. The original cohomological definition is due to Tate, who introduced the cohomology groups for modules over the integral group ring of a finite group in order to study class field theory (cf.\ \cite[XII, \S 3]{CartanEilenberg}).
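Tate's original computation may serve as orientation (a standard example, not needed in the sequel). Let $G$ be a finite cyclic group of order $m$ with generator $g$, and consider the trivial module $\mathbb{Z}$ over $\mathbb{Z}G$. It admits the $2$-periodic complete resolution
$$\cdots \to \mathbb{Z}G \xrightarrow{N} \mathbb{Z}G \xrightarrow{g-1} \mathbb{Z}G \xrightarrow{N} \mathbb{Z}G \xrightarrow{g-1} \mathbb{Z}G \to \cdots,$$
where $N = 1 + g + \cdots + g^{m-1}$ is the norm element. Applying $\operatorname{Hom}\nolimits_{\mathbb{Z}G}(-, \mathbb{Z})$, the map $g-1$ induces the zero map and $N$ induces multiplication by $m$, so
$$\operatorname{\widehat{Ext}}\nolimits_{\mathbb{Z}G}^n( \mathbb{Z}, \mathbb{Z} ) \cong \left \{
\begin{array}{ll}
\mathbb{Z}/m\mathbb{Z} & \text{for } n \text{ even}, \\
0 & \text{for } n \text{ odd}.
\end{array} \right.$$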
Having recalled the classical definition of Tate cohomology and homology, we may now define the Hochschild cohomological and homological versions.
\begin{definition}
Let $k$ be a commutative ring and $\Lambda$ a $k$-algebra such that the enveloping algebra $\Lambda^{\e}$ is two-sided Noetherian and Gorenstein. For an integer $n \in \mathbb{Z}$ and a bimodule $B$, the $n$th \emph{Tate-Hochschild cohomology} group $\operatorname{\widehat{HH}}\nolimits^n ( \Lambda, B)$ and the $n$th \emph{Tate-Hochschild homology} group $\operatorname{\widehat{HH}}\nolimits_n ( \Lambda, B)$ are defined as
\begin{eqnarray*}
\operatorname{\widehat{HH}}\nolimits^n( \Lambda, B ) & = & \operatorname{\widehat{Ext}}\nolimits_{\Lambda^{\e}}^n( \Lambda, B) \\
\operatorname{\widehat{HH}}\nolimits_n( \Lambda, B ) & = & \operatorname{\widehat{Tor}}\nolimits^{\Lambda^{\e}}_n( B, \Lambda ).
\end{eqnarray*}
\end{definition}
Note that if the Gorenstein dimension of the enveloping algebra is $d$, then for every $n \ge d+1$ there are isomorphisms $\operatorname{\widehat{HH}}\nolimits^n( \Lambda, B ) \cong \operatorname{Ext}\nolimits_{\Lambda^{\e}}^n( \Lambda, B)$ and $\operatorname{\widehat{HH}}\nolimits_n( \Lambda, B ) \cong \operatorname{Tor}\nolimits^{\Lambda^{\e}}_n( B, \Lambda )$. In particular, when $\Lambda$ is projective as a $k$-module, then there are isomorphisms
\begin{eqnarray*}
\operatorname{\widehat{HH}}\nolimits^n( \Lambda, B ) & \cong & \operatorname{HH}\nolimits^n( \Lambda, B ) \\
\operatorname{\widehat{HH}}\nolimits_n( \Lambda, B ) & \cong & \operatorname{HH}\nolimits_n( \Lambda, B )
\end{eqnarray*}
whenever $n \ge d+1$. A special case appears when the enveloping algebra is two-sided Noetherian and selfinjective. By definition, the enveloping algebra is then of Gorenstein dimension zero, and the Tate-Hochschild (co)homology groups are therefore defined and agree with the classical Hochschild (co)homology groups in all positive degrees. In particular, this is the case for finite dimensional Frobenius algebras (see the next section); for such algebras, the Tate-Hochschild (co)homology agrees with the \emph{stable Hochschild (co)homology} introduced in \cite{EuSchedler}.
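As a concrete illustration of the selfinjective case (a standard example, not needed later), take $\Lambda = k[x]/(x^2)$ over a field $k$. Its enveloping algebra is $\Lambda^{\e} \cong k[x,y]/(x^2,y^2)$, a finite dimensional selfinjective algebra, and with $u = x \otimes 1 - 1 \otimes x$ and $v = x \otimes 1 + 1 \otimes x$ one checks that
$$\cdots \to \Lambda^{\e} \xrightarrow{\cdot u} \Lambda^{\e} \xrightarrow{\cdot v} \Lambda^{\e} \xrightarrow{\cdot u} \Lambda^{\e} \xrightarrow{\mu} \Lambda \to 0$$
is a projective bimodule resolution of $\Lambda$; note that $uv = x^2 \otimes 1 - 1 \otimes x^2 = 0$ in $\Lambda^{\e}$. Extending this $2$-periodic complex in both directions yields a complete resolution, so the Tate-Hochschild groups of $\Lambda$ are $2$-periodic in $n$.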
We shall mainly be working with finite dimensional $k$-algebras (so $k$ will be a field), in which case the requirement (in the definition of Gorenstein algebras) that the enveloping algebra be two-sided Noetherian is automatically satisfied. In other words, a finite dimensional algebra is Gorenstein if and only if its injective dimensions as a left and right module over itself are finite. It is known that in this case the two injective dimensions are the same. The following result shows that if a finite dimensional algebra is Gorenstein, then so is its enveloping algebra. We include a proof due to the lack of a reference. Consequently, Tate-Hochschild (co)homology is defined for finite dimensional Gorenstein algebras. Note that the result shows in particular that the enveloping algebra of a selfinjective algebra is again selfinjective.
\begin{lemma}\label{tensorgorenstein}
If $k$ is a field and $\Lambda$ and $\Gamma$ are finite dimensional Gorenstein $k$-algebras of Gorenstein dimensions $s$ and $t$, respectively, then their tensor product $\Lambda \otimes_k \Gamma$ is Gorenstein of Gorenstein dimension at most $s+t$. In particular, the enveloping algebra $\Lambda^{\e}$ is Gorenstein of Gorenstein dimension at most $2s$.
\end{lemma}
\begin{proof}
Choose injective resolutions
$$0 \to \Lambda \to I^0_{\Lambda} \to \cdots \to I^s_{\Lambda} \to 0$$
and
$$0 \to \Gamma \to I^0_{\Gamma} \to \cdots \to I^t_{\Gamma} \to 0$$
over $\Lambda$ and $\Gamma$, respectively, both as left modules. When we delete the algebras and tensor the resulting complexes over $k$, we obtain a complex
$$\mathbb{E} \colon 0 \to E^0 \to E^1 \to \cdots \to E^{s+t} \to 0$$
in which $E^n = \oplus_{j=0}^n (I^j_{\Lambda} \otimes_k I^{n-j}_{\Gamma})$. In general, if $I_{\Lambda}$ and $I_{\Gamma}$ are injective left modules over $\Lambda$ and $\Gamma$, respectively, then the right modules $D(I_{\Lambda})$ and $D(I_{\Gamma})$ are projective, and so $D(I_{\Lambda}) \otimes_k D(I_{\Gamma})$ is a projective right ($\Lambda \otimes_k \Gamma$)-module. But this right ($\Lambda \otimes_k \Gamma$)-module is isomorphic to $D(I_{\Lambda} \otimes_k I_{\Gamma})$, and consequently the left ($\Lambda \otimes_k \Gamma$)-module $I_{\Lambda} \otimes_k I_{\Gamma}$ is injective. This shows that the complex $\mathbb{E}$ is an injective resolution of $\Lambda \otimes_k \Gamma$ as a left module over itself. Similarly, by starting with injective resolutions of right modules, we end up with an injective resolution (of length $s+t$) of $\Lambda \otimes_k \Gamma$ as a right module over itself. This proves the first part of the lemma. The second part follows immediately, since the opposite algebra of a Gorenstein algebra is also Gorenstein of the same dimension.
\end{proof}
Note also that when $\Lambda$ is a finite dimensional algebra and $B$ is a $\Lambda$-$\Lambda$-bimodule which is finitely
generated as either a left or right $\Lambda$-module, then the Tate-Hochschild homology $\operatorname{\widehat{HH}}\nolimits_n(\Lambda,B)$
and cohomology $\operatorname{\widehat{HH}}\nolimits^n(\Lambda,B)$ are finite dimensional vector spaces over $k$ for all
$n\in\mathbb Z$.
The main results in this section establish Tate-Hochschild duality isomorphisms for Gorenstein algebras. These results follow from a more general duality result for Tate homology, which we prove after the following two lemmas. The first lemma is well known in the case of ordinary (co)homology: over any finite dimensional algebra $\Gamma$ there is an isomorphism
$$D \left ( \operatorname{Ext}\nolimits_{\Gamma}^i(X,Y) \right ) \cong \operatorname{Tor}\nolimits^{\Gamma}_i(D(Y),X)$$
for all $i \ge 0$ and all modules $X,Y$ (cf.\ \cite[VI, Proposition 5.3]{CartanEilenberg}).
\begin{lemma}\label{dualityexttor}
Let $\Lambda$ be a finite dimensional Gorenstein algebra and $M$ and $N$ two left $\Lambda$-modules, with $M$ finitely generated. Then there is an isomorphism
$$D \left ( \operatorname{\widehat{Ext}}\nolimits_{\Lambda}^n( M,N ) \right ) \cong \operatorname{\widehat{Tor}}\nolimits^{\Lambda}_n( D(N),M )$$
for all $n \in \mathbb{Z}$. In particular, if $B$ is a bimodule, then there is an isomorphism
$$D \left ( \operatorname{\widehat{HH}}\nolimits^n( \Lambda, B ) \right ) \cong \operatorname{\widehat{HH}}\nolimits_n( \Lambda, D(B) )$$
for all $n \in \mathbb{Z}$.
\end{lemma}
\begin{proof}
Let $\mathbb{T}$ be a complete resolution of $M$, and for each $i \in \mathbb{Z}$, denote by $\Omega_{\Lambda}^i ( \mathbb{T} )$ the image of the $i$th differential in $\mathbb{T}$. Fix $n\in\mathbb Z$, and denote the Gorenstein dimension of $\Lambda$ by $d$. Let $m$ be any integer with the property that $m+n>d$. Then there are isomorphisms
\begin{eqnarray*}
D \left ( \operatorname{\widehat{Ext}}\nolimits_{\Lambda}^n( M,N ) \right ) & \cong & D \left ( \operatorname{H}\nolimits^n( \operatorname{Hom}\nolimits_{\Lambda}( \mathbb{T},N ) ) \right ) \\
& \cong & D \left ( \operatorname{\widehat{Ext}}\nolimits_{\Lambda}^{n+m}( \Omega_{\Lambda}^{-m} ( \mathbb{T} ),N ) \right ) \\
& \cong & D \left ( \operatorname{Ext}\nolimits_{\Lambda}^{n+m}( \Omega_{\Lambda}^{-m} ( \mathbb{T} ),N ) \right ) \\
& \cong & \operatorname{Tor}\nolimits^{\Lambda}_{n+m}( D(N), \Omega_{\Lambda}^{-m} ( \mathbb{T} ) ) \\
& \cong & \operatorname{\widehat{Tor}}\nolimits^{\Lambda}_{n+m}( D(N), \Omega_{\Lambda}^{-m} ( \mathbb{T} ) ) \\
& \cong & \operatorname{H}\nolimits_n( D(N) \otimes_{\Lambda} \mathbb{T} )\\
& \cong & \operatorname{\widehat{Tor}}\nolimits^{\Lambda}_{n}( D(N), M ),
\end{eqnarray*}
and we have proved the first part. The second part follows from the first and the definition of Tate-Hochschild (co)homology.
\end{proof}
The second lemma seems to be well known; it is a special case of \cite[Proposition 20.10]{AndersonFuller}. We include a proof.
\begin{lemma}\label{dualityprojective}
Let $\Lambda$ be any ring and $M$ a left $\Lambda$-module. If $P$ is a finitely generated projective left $\Lambda$-module, then there is an isomorphism
$$\psi_P \colon \operatorname{Hom}\nolimits_{\Lambda}(P, \Lambda ) \otimes_{\Lambda} M \to \operatorname{Hom}\nolimits_{\Lambda} (P,M)$$
given by $\psi_P(f \otimes m)(p) = f(p)m$. This isomorphism is natural in $P$.
\end{lemma}
\begin{proof}
The map $\psi_P$ is well defined since the pairing
\begin{eqnarray*}
\operatorname{Hom}\nolimits_{\Lambda}(P, \Lambda ) \times M & \xrightarrow{\theta} & \operatorname{Hom}\nolimits_{\Lambda} (P,M) \\
(f,m) & \mapsto & (p \mapsto f(p)m )
\end{eqnarray*}
satisfies $\theta ( f \lambda, m ) = \theta (f, \lambda m )$ for all $\lambda \in \Lambda$. When $P = \Lambda$, this map is just the composition of the isomorphisms
$$\operatorname{Hom}\nolimits_{\Lambda}( {_{\Lambda}\Lambda}, {_{\Lambda}\Lambda}_{\Lambda} ) \otimes_{\Lambda} M \to \Lambda_{\Lambda} \otimes_{\Lambda} M \to M \to \operatorname{Hom}\nolimits_{\Lambda} ( {_{\Lambda}\Lambda},M)$$
and hence an isomorphism itself. Extending to the case when $P$ is a finitely generated free module, and then to the case when $P$ is a summand of such a module, we see that the first half of the lemma holds.
As for the naturality in $P$, let $P_1 \xrightarrow{h} P_2$ be a map between finitely generated projective left $\Lambda$-modules, and consider the diagram
$$\xymatrix{
\operatorname{Hom}\nolimits_{\Lambda}(P_2, \Lambda ) \otimes_{\Lambda} M \ar[d]^{\psi_{P_2}} \ar[r]^{h^* \otimes 1_M} & \operatorname{Hom}\nolimits_{\Lambda}(P_1, \Lambda ) \otimes_{\Lambda} M \ar[d]^{\psi_{P_1}} \\
\operatorname{Hom}\nolimits_{\Lambda} (P_2,M) \ar[r]^{h^*} & \operatorname{Hom}\nolimits_{\Lambda} (P_1,M) }$$
If $f \in \operatorname{Hom}\nolimits_{\Lambda}(P_2, \Lambda ), m \in M$ and $p \in P_1$, then
\begin{eqnarray*}
\left [ (h^* \circ \psi_{P_2})(f \otimes m) \right ] (p) & = & ( \psi_{P_2}(f \otimes m) \circ h) (p) \\
& = & \psi_{P_2} (f \otimes m) \left ( h(p) \right ) \\
& = & f \left ( h(p) \right ) m \\
& = & (f \circ h)(p)m \\
& = & \psi_{P_1}(f \circ h \otimes m) (p) \\
& = & \left [ ( \psi_{P_1} \circ (h^* \otimes 1_M))(f \otimes m) \right ] (p),
\end{eqnarray*}
hence the diagram commutes.
\end{proof}
We are now ready to prove the general duality result for Tate homology and cohomology. Recall first that when $k$ is a field and $\Lambda$ is a finite dimensional $k$-algebra, then every finitely generated module $M$ admits a \emph{minimal projective resolution}
$$\cdots \to P_2 \xrightarrow{d_2} P_1 \xrightarrow{d_1} P_0 \xrightarrow{d_0} M \to 0.$$
This projective resolution appears as a direct summand of every projective resolution of $M$, and it is unique up to isomorphism. For every $n \ge 0$, the $n$th \emph{syzygy} of $M$, denoted $\Omega_{\Lambda}^n(M)$, is the image of the map $d_n$.
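For example (a standard computation), over $\Lambda = k[x]/(x^2)$ the simple module $k = \Lambda/(x)$ has minimal projective resolution
$$\cdots \to \Lambda \xrightarrow{\cdot x} \Lambda \xrightarrow{\cdot x} \Lambda \to k \to 0,$$
and the image of each map $\cdot x$ is $x \Lambda \cong k$, so $\Omega_{\Lambda}^n(k) \cong k$ for all $n \ge 1$.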
\begin{theorem}\label{dualitygeneral}
Let $\Lambda$ be a finite dimensional Gorenstein algebra, $M,L$ two finitely generated left modules, and $N$ a finitely generated right module. If the Gorenstein dimension of $\Lambda$ is at most $d$, then there are vector space isomorphisms
\begin{eqnarray*}
\operatorname{\widehat{Tor}}\nolimits^{\Lambda}_n(N,M) & \cong & \operatorname{\widehat{Tor}}\nolimits^{\Lambda}_{-(n-d+1)}( \Omega_{\Lambda}^d(M)^*, D(N)) \\
\operatorname{\widehat{Ext}}\nolimits_{\Lambda}^n(M,L) & \cong & \operatorname{\widehat{Ext}}\nolimits_{\Lambda}^{-(n-d+1)}( L,D( \Omega_{\Lambda}^d(M)^*))
\end{eqnarray*}
for all $n \in \mathbb{Z}$.
\end{theorem}
\begin{proof}
Consider the minimal projective resolution
$$\cdots \to P_2 \to P_1 \to P_0 \to M \to 0$$
of $M$. It follows from \cite[Lemma 2.5 and Construction 3.6]{AvramovMartsinkovsky} that $M$ admits a complete resolution
$$\mathbb{T} \colon \cdots \to T_2 \xrightarrow{\partial_2} T_1 \xrightarrow{\partial_1} T_0 \xrightarrow{\partial_0} T_{-1} \xrightarrow{\partial_{-1}} T_{-2} \to \cdots$$
such that there exists a chain map
$$\xymatrix{
\cdots \ar[r] & T_2 \ar[r]^{\partial_2} \ar[d]^{f_2} & T_1 \ar[r]^{\partial_1} \ar[d]^{f_1} & T_0 \ar[r]^{\partial_0} \ar[d]^{f_0} & T_{-1} \ar[r]^{\partial_{-1}} \ar[d]^{f_{-1}} & T_{-2} \ar[d]^{f_{-2}} \ar[r] & \cdots \\
\cdots \ar[r] & P_2 \ar[r] & P_1 \ar[r] & P_0 \ar[r] & 0 \ar[r] & 0 \ar[r] & \cdots }$$
in which $f_n$ is bijective for $n \ge d$. Consequently the image of the map $\partial_d$ is isomorphic to $\Omega_{\Lambda}^d(M)$; we denote this module by $X$. Since $\operatorname{\widehat{Tor}}\nolimits^{\Lambda}_n(N,X)$ is isomorphic to $\operatorname{\widehat{Tor}}\nolimits^{\Lambda}_{n+d}(N,M)$, it suffices to show that
$$\operatorname{\widehat{Tor}}\nolimits^{\Lambda}_n(N,X) \cong \operatorname{\widehat{Tor}}\nolimits^{\Lambda}_{-(n+1)}( X^*, D(N))$$
for all $n$.
By adjointness, the complexes $\operatorname{Hom}\nolimits_k( N \otimes_{\Lambda} \mathbb{T}, k )$ and $\operatorname{Hom}\nolimits_{\Lambda} ( \mathbb{T}, \operatorname{Hom}\nolimits_k(N,k) )$ are isomorphic, that is, there is an isomorphism
$$D( N \otimes_{\Lambda} \mathbb{T} ) \cong \operatorname{Hom}\nolimits_{\Lambda} ( \mathbb{T}, D(N) )$$
of complexes. Moreover, by Lemma \ref{dualityprojective}, there is an isomorphism
$$\xymatrix{
\cdots \to \operatorname{Hom}\nolimits_{\Lambda} ( T_{n-1} , D(N) ) \ar[d]^{\psi_{T_{n-1}}^{-1}} \ar[r]^{\partial_n} & \operatorname{Hom}\nolimits_{\Lambda} ( T_{n} , D(N) ) \ar[d]^{\psi_{T_{n}}^{-1}} \ar[r] & \cdots \\
\cdots \to \operatorname{Hom}\nolimits_{\Lambda} ( T_{n-1}, \Lambda ) \otimes_{\Lambda} D(N) \ar[r]^{\partial_n} & \operatorname{Hom}\nolimits_{\Lambda} ( T_{n}, \Lambda ) \otimes_{\Lambda} D(N) \ar[r] & \cdots }$$
between the complexes $\operatorname{Hom}\nolimits_{\Lambda} ( \mathbb{T}, D(N) )$ and $\mathbb{T}^* \otimes_{\Lambda} D(N)$. In general, note that if $C$ is a complex of finite dimensional vector spaces over $k$, then $\operatorname{H}\nolimits_n(C)$ and
$\operatorname{H}\nolimits_{-n}(D(C))$ have the same dimension, and are therefore isomorphic as vector spaces. This explains
the second isomorphism below. Now since $\mathbb{T}^*$ is a complete resolution of $X^*$, we see that
\begin{eqnarray*}
\operatorname{\widehat{Tor}}\nolimits^{\Lambda}_n(N,X) & \cong & \operatorname{H}\nolimits_{n+d} ( N \otimes_{\Lambda} \mathbb{T} ) \\
& \cong & \operatorname{H}\nolimits_{-(n+d)} \left ( D( N \otimes_{\Lambda} \mathbb{T} ) \right ) \\
& \cong & \operatorname{H}\nolimits_{-(n+d)} \left ( \operatorname{Hom}\nolimits_{\Lambda} ( \mathbb{T}, D(N) ) \right ) \\
& \cong & \operatorname{H}\nolimits_{-(n+d)} \left ( \mathbb{T}^* \otimes_{\Lambda} D(N) \right ) \\
& \cong & \operatorname{\widehat{Tor}}\nolimits^{\Lambda}_{-(n+1)}( X^*, D(N))
\end{eqnarray*}
and the proof of the homology part is complete.
For the cohomology part, we use Lemma \ref{dualityexttor} twice, together with the homology part we just proved:
\begin{eqnarray*}
D \left ( \operatorname{\widehat{Ext}}\nolimits_{\Lambda}^n( M,L ) \right ) & \cong & \operatorname{\widehat{Tor}}\nolimits^{\Lambda}_n( D(L),M ) \\
& \cong & \operatorname{\widehat{Tor}}\nolimits^{\Lambda}_{-(n-d+1)}( \Omega_{\Lambda}^d(M)^*, D^2(L)) \\
& \cong & \operatorname{\widehat{Tor}}\nolimits^{\Lambda}_{-(n-d+1)}( \Omega_{\Lambda}^d(M)^*, L) \\
& \cong & D \left ( \operatorname{\widehat{Ext}}\nolimits_{\Lambda}^{-(n-d+1)}( L,D( \Omega_{\Lambda}^d(M)^*))\right ).
\end{eqnarray*}
Hence $\operatorname{\widehat{Ext}}\nolimits_{\Lambda}^n( M,L )$ and $\operatorname{\widehat{Ext}}\nolimits_{\Lambda}^{-(n-d+1)}( L,D( \Omega_{\Lambda}^d(M)^*))$ are isomorphic.
\end{proof}
We can now prove the duality result for Tate-Hochschild (co)homology; this is just a direct application of Theorem \ref{dualitygeneral}.
\begin{theorem}\label{dualityTHgorenstein}
If $\Lambda$ is a finite dimensional Gorenstein algebra of Gorenstein dimension $d$, and $B$ is a $\Lambda$-$\Lambda$-bimodule which is finitely generated as either a left or right $\Lambda$-module, then
there are isomorphisms of vector spaces
\begin{eqnarray*}
\operatorname{\widehat{HH}}\nolimits_n( \Lambda, B ) & \cong & \operatorname{\widehat{Tor}}\nolimits^{\Lambda^{\e}}_{-(n-2d+1)}(\Omega_{\Lambda^{\e}}^{2d}( \Lambda )^* , D(B)) \\
\operatorname{\widehat{HH}}\nolimits^n( \Lambda, B ) & \cong & \operatorname{\widehat{Ext}}\nolimits_{\Lambda^{\e}}^{-(n-2d+1)} \left ( B, D \left ( \Omega_{\Lambda^{\e}}^{2d}( \Lambda )^* \right ) \right )
\end{eqnarray*}
for all $n \in \mathbb{Z}$, where $(-)^* = \operatorname{Hom}\nolimits_{\Lambda^{\e}}(-, \Lambda^{\e} )$. In particular, there are isomorphisms
\begin{eqnarray*}
\operatorname{\widehat{HH}}\nolimits_n( \Lambda, \Lambda ) & \cong & \operatorname{\widehat{Tor}}\nolimits^{\Lambda^{\e}}_{-(n-2d+1)}( \Omega_{\Lambda^{\e}}^{2d}( \Lambda )^*, D( \Lambda ) ) \\
\operatorname{\widehat{HH}}\nolimits^n( \Lambda, \Lambda ) & \cong & \operatorname{\widehat{HH}}\nolimits^{-(n-2d+1)} \left ( \Lambda , D \left ( \Omega_{\Lambda^{\e}}^{2d}( \Lambda )^* \right ) \right )
\end{eqnarray*}
for all $n \in \mathbb{Z}$.
\end{theorem}
\begin{proof}
By Lemma \ref{tensorgorenstein}, the enveloping algebra $\Lambda^{\e}$ is Gorenstein of dimension at most $2d$, hence the isomorphisms in the first part follow immediately from Theorem \ref{dualitygeneral}:
$$\operatorname{\widehat{HH}}\nolimits_n( \Lambda, B ) = \operatorname{\widehat{Tor}}\nolimits^{\Lambda^{\e}}_n( B, \Lambda ) \cong \operatorname{\widehat{Tor}}\nolimits^{\Lambda^{\e}}_{-(n-2d+1)}( \Omega_{\Lambda^{\e}}^{2d}( \Lambda )^*, D(B) )$$
$$\operatorname{\widehat{HH}}\nolimits^n( \Lambda, B ) = \operatorname{\widehat{Ext}}\nolimits_{\Lambda^{\e}}^n( \Lambda, B ) \cong \operatorname{\widehat{Ext}}\nolimits_{\Lambda^{\e}}^{-(n-2d+1)} \left ( B, D \left ( \Omega_{\Lambda^{\e}}^{2d}( \Lambda )^* \right ) \right ).$$
The last part of the theorem follows directly from the first.
\end{proof}
We end this section by specializing to selfinjective algebras. Such an algebra is by definition Gorenstein, and its Gorenstein dimension is zero. Therefore, for this class of algebras, Theorem \ref{dualityTHgorenstein} takes the following form.
\begin{theorem}\label{dualityTHselfinjective}
If $\Lambda$ is a finite dimensional selfinjective algebra, and $B$ is a bimodule, then there are isomorphisms
of vector spaces
\begin{eqnarray*}
\operatorname{\widehat{HH}}\nolimits_n( \Lambda, B ) & \cong & \operatorname{\widehat{Tor}}\nolimits^{\Lambda^{\e}}_{-(n+1)}( \Lambda^*, D(B) ) \\
\operatorname{\widehat{HH}}\nolimits^n( \Lambda, B ) & \cong & \operatorname{\widehat{Ext}}\nolimits_{\Lambda^{\e}}^{-(n+1)} \left ( B, D( \Lambda^*) \right )
\end{eqnarray*}
for all $n \in \mathbb{Z}$, where $(-)^* = \operatorname{Hom}\nolimits_{\Lambda^{\e}}(-, \Lambda^{\e} )$. In particular, there are isomorphisms
\begin{eqnarray*}
\operatorname{\widehat{HH}}\nolimits_n( \Lambda, \Lambda ) & \cong & \operatorname{\widehat{Tor}}\nolimits^{\Lambda^{\e}}_{-(n+1)}( \Lambda^*, D( \Lambda ) ) \\
\operatorname{\widehat{HH}}\nolimits^n( \Lambda, \Lambda ) & \cong & \operatorname{\widehat{HH}}\nolimits^{-(n+1)} \left ( \Lambda , D( \Lambda^* ) \right )
\end{eqnarray*}
for all $n \in \mathbb{Z}$.
\end{theorem}
\section{Frobenius algebras}\label{frobeniusalg}
In this section, we apply the Tate-Hochschild duality results from the last section to a special class of selfinjective algebras. Recall that a finite dimensional algebra $\Lambda$ is \emph{Frobenius} if $\Lambda$ and $D(\Lambda)$ are
isomorphic as left $\Lambda$-modules, and \emph{symmetric} if they are
isomorphic as bimodules. Suppose $\Lambda$ is Frobenius, and fix an
isomorphism $\phi \colon \Lambda \to D( \Lambda )$ of left modules. Let $y
\in \Lambda$ be any element, and consider the linear functional $\phi(1)
\cdot y \in D(\Lambda)$. This is the $k$-linear map $\Lambda \to k$ defined
by $\lambda \mapsto \phi(1)(y \lambda)$, where $k$ is the ground field. Since $\phi$ is surjective,
there is an element $x \in \Lambda$ having the property that $\phi(x) =
\phi(1) \cdot y$, giving $x \cdot \phi(1) = \phi(1) \cdot y$ since
$\phi$ is a map of left $\Lambda$-modules. The map $y \mapsto x$ defines
a $k$-algebra automorphism on $\Lambda$, and its inverse $\nu$ is the
\emph{Nakayama automorphism} of $\Lambda$ (with respect to $\phi$). Thus
$\nu$ is defined by $\phi(1)(\lambda x) = \phi(1)( \nu(x) \lambda)$
for all $\lambda, x \in \Lambda$. The Nakayama automorphism is unique up
to an inner automorphism. Namely, if $\phi' \colon \Lambda \to D( \Lambda )$
is another isomorphism of left modules yielding a Nakayama
automorphism $\nu'$, then there exists an invertible element $z \in
\Lambda$ such that $\nu = z \nu' z^{-1}$.
Note that $\Lambda$ is symmetric if and only if the Nakayama
automorphism is the identity.
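For example (a standard fact), the group algebra $kG$ of a finite group $G$ is symmetric: the left module isomorphism $\phi \colon kG \to D(kG)$ determined by letting $\phi(1)$ pick out the coefficient of the identity element satisfies
$$\phi(1)(xy) = \sum_{g \in G} a_g b_{g^{-1}} = \phi(1)(yx) \quad \text{for } x = \sum_{g} a_g g, \ y = \sum_{g} b_g g,$$
so the corresponding Nakayama automorphism is the identity.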
Since $D( \Lambda )$ is an injective left $\Lambda$-module, a Frobenius
algebra is always left selfinjective. However, the definition is
left-right symmetric. For if $\phi \colon _{\Lambda}\Lambda \to
D(\Lambda_{\Lambda})$ is an isomorphism of left $\Lambda$-modules, we can
dualize and obtain an isomorphism $D( \phi ) \colon D^2( \Lambda_{\Lambda}
) \to D( _{\Lambda}\Lambda )$ of right modules. Composing with the natural
isomorphism $\Lambda_{\Lambda} \cong D^2( \Lambda_{\Lambda} )$, we obtain an
isomorphism $\Lambda_{\Lambda} \to D( _{\Lambda}\Lambda)$ of right $\Lambda$-modules. This left-right symmetry implies that the opposite algebra of a Frobenius algebra is also Frobenius, and that its Nakayama automorphism is the inverse of the original one.
\begin{lemma}\label{frobeniusopposite}
If $\Lambda$ is a Frobenius algebra with a Nakayama automorphism
$\nu$, then $\Lambda^{\operatorname{op}\nolimits}$ is Frobenius with $\nu^{-1}$ as a Nakayama
automorphism.
\end{lemma}
\begin{proof}
As seen above, the definition of a Frobenius algebra is left-right
symmetric. Moreover, an isomorphism $\Lambda \to D( \Lambda )$ of right
$\Lambda$-modules may be viewed as an isomorphism $\Lambda^{\operatorname{op}\nolimits} \to D(
\Lambda^{\operatorname{op}\nolimits} )$ of left $\Lambda^{\operatorname{op}\nolimits}$-modules. Hence $\Lambda$ is Frobenius
if and only if $\Lambda^{\operatorname{op}\nolimits}$ is.
Now suppose $\Lambda$ is Frobenius, and let $\phi \colon \Lambda \to D( \Lambda
)$ be an isomorphism of left modules with corresponding Nakayama
automorphism $\nu \colon \Lambda \to \Lambda$. The composition of
isomorphisms
$$\Lambda \to D^2( \Lambda ) \xrightarrow{D( \phi )} D( \Lambda )$$
of right $\Lambda$-modules can then be viewed as an isomorphism
$\phi^{\operatorname{op}\nolimits}$ of left $\Lambda^{\operatorname{op}\nolimits}$-modules. Thus $\phi^{\operatorname{op}\nolimits}(1)(
\lambda ) = \phi ( \lambda )(1)$ for all $\lambda \in \Lambda^{\operatorname{op}\nolimits}$.
Denote the multiplication of two elements $x$ and $y$ in $\Lambda^{\operatorname{op}\nolimits}$
by $x \cdot y$, so that $x \cdot y = yx$, where $yx$ is the ordinary
product in $\Lambda$. Then
\begin{eqnarray*}
\phi^{\operatorname{op}\nolimits}(1)( \lambda \cdot x ) & = & \phi^{\operatorname{op}\nolimits}(1)( x \lambda ) \\
& = & \phi (x \lambda )(1) \\
& = & \phi ( \lambda \nu^{-1}(x))(1) \\
& = & \phi^{\operatorname{op}\nolimits}(1)( \lambda \nu^{-1}(x)) \\
& = & \phi^{\operatorname{op}\nolimits}(1)( \nu^{-1}(x) \cdot \lambda)
\end{eqnarray*}
for all $\lambda, x \in \Lambda^{\operatorname{op}\nolimits}$, hence $\nu^{-1}$ is a Nakayama
automorphism for $\Lambda^{\operatorname{op}\nolimits}$.
\end{proof}
The tensor product of two Frobenius algebras is also Frobenius, with the obvious Nakayama automorphism. We record this in the following lemma.
\begin{lemma}\label{frobeniustensor}
If $\Lambda$ and $\Gamma$ are Frobenius $k$-algebras with Nakayama
automorphisms $\nu_{\Lambda}$ and $\nu_{\Gamma}$, respectively, then
$\Lambda \otimes_k \Gamma$ is Frobenius with $\nu_{\Lambda} \otimes
\nu_{\Gamma}$ as a Nakayama automorphism.
\end{lemma}
\begin{proof}
Let $\phi_{\Lambda} \colon \Lambda \to D( \Lambda )$ and $\phi_{\Gamma} \colon
\Gamma \to D( \Gamma )$ be isomorphisms of left modules, with
corresponding Nakayama automorphisms $\nu_{\Lambda}$ and
$\nu_{\Gamma}$, respectively. Then the composition
$$\Lambda \otimes_k \Gamma \xrightarrow{\phi_{\Lambda} \otimes \phi_{\Gamma}} D( \Lambda ) \otimes_k
D( \Gamma ) \to D( \Lambda \otimes_k \Gamma )$$ of left $( \Lambda
\otimes_k \Gamma )$-isomorphisms shows that $\Lambda \otimes_k \Gamma$
is Frobenius.
Denote this composition by $\phi$. Then
\begin{eqnarray*}
\phi ( 1 \otimes 1 ) \left ( [ \lambda \otimes \gamma ][x \otimes
y] \right ) &
= & \phi ( 1 \otimes 1 ) ( \lambda x \otimes \gamma y ) \\
& = & \phi_{\Lambda} ( \lambda x ) \phi_{\Gamma} ( \gamma y ) \\
& = & \phi_{\Lambda} ( \nu_{\Lambda}(x) \lambda ) \phi_{\Gamma} ( \nu_{\Gamma}(y) \gamma ) \\
& = & \phi ( 1 \otimes 1 ) ( \nu_{\Lambda}(x) \lambda \otimes
\nu_{\Gamma}(y) \gamma ) \\
& = & \phi ( 1 \otimes 1 ) ( [ \nu_{\Lambda}(x) \otimes
\nu_{\Gamma}(y) ][ \lambda \otimes \gamma ] )
\end{eqnarray*}
for all $\lambda \otimes \gamma$ and $x \otimes y$ in $\Lambda \otimes_k
\Gamma$. This shows that $\nu_{\Lambda} \otimes \nu_{\Gamma}$ is a
Nakayama automorphism for $\Lambda \otimes_k \Gamma$.
\end{proof}
Combining Lemma \ref{frobeniusopposite} and Lemma \ref{frobeniustensor}, we see that the enveloping algebra of a Frobenius algebra is again Frobenius.
\begin{corollary}\label{frobeniusenveloping}
If $\Lambda$ is a Frobenius algebra with a Nakayama automorphism
$\nu$, then $\Lambda^{\e}$ is Frobenius with $\nu \otimes \nu^{-1}$ as a
Nakayama automorphism.
\end{corollary}
The Tate-Hochschild duality results below for Frobenius algebras involve twisted modules. If $\Lambda$ is an arbitrary ring and $\Lambda \xrightarrow{f} \Lambda$ is an automorphism, then we can endow a $\Lambda$-module $M$ with a new $\Lambda$-module structure as follows: for $\lambda \in \Lambda$ and $m \in M$, let $\lambda \cdot m = f( \lambda )m$. We denote this twisted $\Lambda$-module by ${_fM}$. If $B$ is a bimodule over $\Lambda$, and $\Lambda \xrightarrow{g} \Lambda$ another automorphism, then we can twist on both sides and obtain a bimodule ${_fB}_g$. A special case is the bimodule ${{_f}\Lambda}_g$, which is isomorphic to ${_{g^{-1}}\Lambda}_{f^{-1}}$ when the two automorphisms $f$ and $g$ commute. In particular, the bimodules ${{_f}\Lambda}_1$ and ${{_1}\Lambda}_{f^{-1}}$ are isomorphic, and so are ${{_f}\Lambda}_f$ and $\Lambda$ itself. Note that the twisted module ${_fM}$ is isomorphic to ${{_f}\Lambda}_1 \otimes_{\Lambda} M$.
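For instance, when $f$ and $g$ commute, an explicit isomorphism ${{_f}\Lambda}_g \to {_{g^{-1}}\Lambda}_{f^{-1}}$ is given by $\lambda \mapsto (fg)^{-1}( \lambda )$; the element $a \cdot \lambda \cdot b = f(a) \lambda g(b)$ of ${{_f}\Lambda}_g$ is mapped to
$$(fg)^{-1} \left ( f(a) \lambda g(b) \right ) = g^{-1}(a) \, (fg)^{-1}( \lambda ) \, f^{-1}(b),$$
which is precisely $a \cdot (fg)^{-1}( \lambda ) \cdot b$ computed in ${_{g^{-1}}\Lambda}_{f^{-1}}$.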
Suppose now, as before, that $\Lambda$ is Frobenius with an isomorphism $\phi \colon \Lambda \to D( \Lambda )$ of left modules, and let $\nu$ be a corresponding Nakayama automorphism. Then $\phi$ is an isomorphism between the bimodules $_1\Lambda_{\nu^{-1}}$ and $D(\Lambda)$, and from above we see that $D(\Lambda)$ is also isomorphic to ${_{\nu}\Lambda}_1$. We can use this to show that the ring dual of a $\Lambda$-module is just the $k$-dual twisted by $\nu$.
\begin{lemma}\label{ringdual}
If $\Lambda$ is a Frobenius algebra with a Nakayama automorphism
$\nu$, then for any finitely generated left module $M$, the right modules $M^*$ and $D(M)_{\nu}$ are isomorphic.
\end{lemma}
\begin{proof}
Standard $\operatorname{Hom}\nolimits$-tensor adjunction gives
\begin{eqnarray*}
M^* & = & \operatorname{Hom}\nolimits_{\Lambda}(M, \Lambda ) \\
& \cong & \operatorname{Hom}\nolimits_{\Lambda}(M, D^2( \Lambda ) ) \\
& = & \operatorname{Hom}\nolimits_{\Lambda}(M, \operatorname{Hom}\nolimits_k(D( \Lambda ),k) ) \\
& \cong & \operatorname{Hom}\nolimits_k( D( \Lambda ) \otimes_{\Lambda} M,k ) \\
& \cong & \operatorname{Hom}\nolimits_k( {_{\nu}\Lambda}_1 \otimes_{\Lambda} M,k ) \\
& \cong & \operatorname{Hom}\nolimits_k( {_{\nu}M},k ) \\
& = & D( {_{\nu}M} ).
\end{eqnarray*}
Since $D(M)_{\nu} = D( {_{\nu}M} )$, the result follows.
\end{proof}
Using this lemma, Theorem \ref{dualitygeneral} takes the following form for modules over a Frobenius algebra.
\begin{theorem}\label{dualitygeneralfrobenius}
Let $\Lambda$ be a Frobenius algebra with a Nakayama automorphism $\nu$, $M,L$ two finitely generated left modules, and $N$ a finitely generated right module. Then there are isomorphisms of vector spaces
\begin{eqnarray*}
\operatorname{\widehat{Tor}}\nolimits^{\Lambda}_n(N,M) & \cong & \operatorname{\widehat{Tor}}\nolimits^{\Lambda}_{-(n+1)}( D(M)_{\nu}, D(N)) \\
\operatorname{\widehat{Ext}}\nolimits_{\Lambda}^n(M,L) & \cong & \operatorname{\widehat{Ext}}\nolimits_{\Lambda}^{-(n+1)}( L, {_{\nu}M} )
\end{eqnarray*}
for all $n \in \mathbb{Z}$.
\end{theorem}
\begin{proof}
The homology isomorphism is obtained directly by combining Theorem \ref{dualitygeneral} with Lemma \ref{ringdual}, and so is the cohomology isomorphism, once we note that there are isomorphisms
$$D( M^* ) \cong D \left ( D(M)_{\nu} \right ) = {_{\nu}D^2(M)} \cong {_{\nu}M}$$
of left $\Lambda$-modules.
\end{proof}
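For example, when $\Lambda$ is symmetric, so that $\nu$ may be chosen to be the identity, the isomorphisms in Theorem \ref{dualitygeneralfrobenius} take the simpler form
$$\operatorname{\widehat{Tor}}\nolimits^{\Lambda}_n(N,M) \cong \operatorname{\widehat{Tor}}\nolimits^{\Lambda}_{-(n+1)}( D(M), D(N)), \hspace{5mm} \operatorname{\widehat{Ext}}\nolimits_{\Lambda}^n(M,L) \cong \operatorname{\widehat{Ext}}\nolimits_{\Lambda}^{-(n+1)}( L, M )$$
for all $n \in \mathbb{Z}$.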
Before applying this to Tate-Hochschild (co)homology, we include a result which shows the following: if one of the modules in a Tate homology group is twisted by an automorphism, then we may instead twist the other module by the inverse.
\begin{lemma}\label{homologytwist}
Let $\Lambda$ be a ring with an automorphism $\Lambda \xrightarrow{f} \Lambda$, and $M$ and $N$ a right and a left $\Lambda$-module, respectively. Then there is an isomorphism $\operatorname{Tor}\nolimits^{\Lambda}_n( M_f,N) \cong \operatorname{Tor}\nolimits^{\Lambda}_n( M, {_{f^{-1}}N})$ for every $n \ge 0$. If in addition $\Lambda$ is a finite dimensional Gorenstein algebra, and $M$ is finitely generated, then there are isomorphisms $\operatorname{\widehat{Tor}}\nolimits^{\Lambda}_n( M_f,N) \cong \operatorname{\widehat{Tor}}\nolimits^{\Lambda}_n( M, {_{f^{-1}}N})$ for every $n \in \mathbb{Z}$.
\end{lemma}
\begin{proof}
For the first part, note that the map $M_f \otimes_{\Lambda} N \to M \otimes_{\Lambda} {_{f^{-1}}N}$ given by $m \otimes n \mapsto m \otimes n$ is well defined, and therefore an isomorphism. Moreover, if $P$ is a projective right $\Lambda$-module, then so is $P_f$, and the twisting operation is exact. Therefore, if $\mathbb{P}$ is a projective resolution of $M$, then $\mathbb{P}_f$ is a projective resolution of $M_f$, giving
$$\operatorname{Tor}\nolimits^{\Lambda}_n( M_f,N) \cong \operatorname{H}\nolimits_n \left ( \mathbb{P}_f \otimes _{\Lambda} N \right ) \cong \operatorname{H}\nolimits_n \left ( \mathbb{P} \otimes _{\Lambda} {_{f^{-1}}N} \right ) \cong \operatorname{Tor}\nolimits^{\Lambda}_n( M, {_{f^{-1}}N}).$$
Suppose now that $\Lambda$ is a finite dimensional Gorenstein algebra, and $M$ is finitely generated. If $\mathbb{T}$ is a complete resolution of $M$, then by definition there exists a projective resolution $\mathbb{P}$ of $M$ and a chain map $\mathbb{T} \xrightarrow{h} \mathbb{P}$ with the property that $h_n$ is bijective for $n \ge d$. Twisting by $f$, we see that $h$ is also a chain map $\mathbb{T}_f \xrightarrow{h} \mathbb{P}_f$. Moreover, we know that $\mathbb{P}_f$ is a projective resolution of $M_f$, and that the complex $\mathbb{T}_f$ is acyclic and consists of finitely generated projective modules. Now, if $X$ is an arbitrary right $\Lambda$-module and $Y$ is a bimodule, then the map $\operatorname{Hom}\nolimits_{\Lambda}(X_f,Y) \to \operatorname{Hom}\nolimits_{\Lambda}(X,Y_{f^{-1}})$ given by $g \mapsto g$ is an isomorphism of left $\Lambda$-modules. Therefore
$$( \mathbb{T}_f )^* = \operatorname{Hom}\nolimits_{\Lambda}( \mathbb{T}_f, \Lambda ) \cong \operatorname{Hom}\nolimits_{\Lambda}( \mathbb{T}, {_1\Lambda}_{f^{-1}} ) \cong \operatorname{Hom}\nolimits_{\Lambda} ( \mathbb{T}, {_f\Lambda}_1 ) \cong {_f( \mathbb{T}^* )},$$
hence $( \mathbb{T}_f )^*$ is also acyclic. Consequently $\mathbb{T}_f$ is a complete resolution of $M_f$, giving
$$\operatorname{\widehat{Tor}}\nolimits^{\Lambda}_n( M_f,N) \cong \operatorname{H}\nolimits_n \left ( \mathbb{T}_f \otimes _{\Lambda} N \right ) \cong \operatorname{H}\nolimits_n ( \mathbb{T} \otimes _{\Lambda} {_{f^{-1}}N} ) \cong \operatorname{\widehat{Tor}}\nolimits^{\Lambda}_n( M, {_{f^{-1}}N}).$$
This completes the proof.
\end{proof}
We may now prove the Tate-Hochschild duality result for Frobenius algebras. The duality for the Tate-Hochschild homology $\operatorname{\widehat{HH}}\nolimits_n( \Lambda, \Lambda )$ was proved by Eu and Schedler in \cite{EuSchedler}.
\begin{theorem}\label{dualityTHfrobenius}
If $\Lambda$ is a Frobenius algebra with a Nakayama automorphism $\nu$, and $B$ a bimodule, then there are isomorphisms
\begin{eqnarray*}
\operatorname{\widehat{HH}}\nolimits_n( \Lambda, B )& \cong & \operatorname{\widehat{Tor}}\nolimits^{\Lambda^{\e}}_{-(n+1)}( \Lambda, {_{\nu^{-1}}D(B)}_1 ) \\
\operatorname{\widehat{HH}}\nolimits^n( \Lambda, B )& \cong & \operatorname{\widehat{Ext}}\nolimits_{\Lambda^{\e}}^{-(n+1)} ( B, {_{\nu^2}\Lambda}_1 )
\end{eqnarray*}
for all $n \in \mathbb{Z}$. In particular, there are isomorphisms
\begin{eqnarray*}
\operatorname{\widehat{HH}}\nolimits_n( \Lambda, \Lambda )& \cong & \operatorname{\widehat{HH}}\nolimits_{-(n+1)}( \Lambda, \Lambda ) \\
\operatorname{\widehat{HH}}\nolimits^n( \Lambda, \Lambda )& \cong & \operatorname{\widehat{HH}}\nolimits^{-(n+1)} ( \Lambda , {_{\nu^2}\Lambda}_1 )
\end{eqnarray*}
for all $n \in \mathbb{Z}$.
\end{theorem}
\begin{proof}
For the homology isomorphism, we use Theorem \ref{dualitygeneralfrobenius} together with Lemma \ref{homologytwist}:
\begin{eqnarray*}
\operatorname{\widehat{HH}}\nolimits_n( \Lambda, B ) & = & \operatorname{\widehat{Tor}}\nolimits_n^{\Lambda^{\e}}( B, \Lambda )\\
& \cong & \operatorname{\widehat{Tor}}\nolimits_{-(n+1)}^{\Lambda^{\e}}( D( \Lambda )_{\nu_{\Lambda^{\e}}}, D(B) ) \\
& \cong & \operatorname{\widehat{Tor}}\nolimits_{-(n+1)}^{\Lambda^{\e}}( ( {_{\nu}\Lambda}_1)_{( \nu \otimes \nu^{-1} )}, D(B) ) \\
& \cong & \operatorname{\widehat{Tor}}\nolimits_{-(n+1)}^{\Lambda^{\e}}( \Lambda_{( \nu \otimes 1 )}, D(B) ) \\
& \cong & \operatorname{\widehat{Tor}}\nolimits_{-(n+1)}^{\Lambda^{\e}}( \Lambda, {_{( \nu \otimes 1 )^{-1}}D(B)} ) \\
& = & \operatorname{\widehat{Tor}}\nolimits_{-(n+1)}^{\Lambda^{\e}}( \Lambda, {_{\nu^{-1}}D(B)}_1 ).
\end{eqnarray*}
For the cohomology isomorphism, we use Theorem \ref{dualitygeneralfrobenius} directly:
\begin{eqnarray*}
\operatorname{\widehat{HH}}\nolimits^n( \Lambda, B ) & = & \operatorname{\widehat{Ext}}\nolimits_{\Lambda^{\e}}^n( \Lambda, B ) \\
& \cong & \operatorname{\widehat{Ext}}\nolimits_{\Lambda^{\e}}^{-(n+1)} ( B, {_{\nu_{\Lambda^{\e}}}\Lambda} ) \\
& = & \operatorname{\widehat{Ext}}\nolimits_{\Lambda^{\e}}^{-(n+1)} ( B, {_{\nu \otimes \nu^{-1}}\Lambda} ) \\
& = & \operatorname{\widehat{Ext}}\nolimits_{\Lambda^{\e}}^{-(n+1)} ( B, {_{\nu}\Lambda}_{\nu^{-1}} ) \\
& \cong & \operatorname{\widehat{Ext}}\nolimits_{\Lambda^{\e}}^{-(n+1)} ( B, {_{\nu^2}\Lambda}_1 ).
\end{eqnarray*}
When $B= \Lambda$, then the isomorphism $\operatorname{\widehat{HH}}\nolimits^n( \Lambda, \Lambda ) \cong \operatorname{\widehat{HH}}\nolimits^{-(n+1)} ( \Lambda , {_{\nu^2}\Lambda}_1 )$ follows directly, whereas the isomorphism $\operatorname{\widehat{HH}}\nolimits_n( \Lambda, \Lambda ) \cong \operatorname{\widehat{HH}}\nolimits_{-(n+1)}( \Lambda, \Lambda )$ follows from the fact that $D( \Lambda ) \cong {_{\nu}\Lambda}_1$.
\end{proof}
Of course, when the Nakayama automorphism squares to the identity, then the duality for Tate-Hochschild cohomology is as nice as for the homology. We end this section by recording this in the following corollary.
\begin{corollary}\label{nakayamasquare}
If $\Lambda$ is a Frobenius algebra with a Nakayama automorphism $\nu$ such that $\nu^2 = 1$, then there is an isomorphism
$$\operatorname{\widehat{HH}}\nolimits^n( \Lambda, \Lambda ) \cong \operatorname{\widehat{HH}}\nolimits^{-(n+1)} ( \Lambda , \Lambda )$$
for all $n \in \mathbb{Z}$.
\end{corollary}
\section{Quantum complete intersections}\label{quantumci}
Quantum complete intersections are noncommutative analogues of truncated polynomial rings, and are obtained by replacing the ordinary commutation relations between the generators by quantum versions. The terminology dates back to work by Avramov, Gasharov and Peeva, and, ultimately, Manin; the notion of \emph{quantum symmetric algebras} was introduced in \cite{Manin}, and that of \emph{quantum regular sequences} in \cite{AvramovGasharovPeeva}.
Fix a field $k$, let $c \ge 1$ be an integer, and let ${\bf{q}} =
(q_{ij})$ be a $c \times c$ commutation matrix with entries in $k$. That is, the diagonal entries $q_{ii}$ are all $1$, and $q_{ij}q_{ji}=1$ for all $i,j$. Furthermore, let ${\bf{a}}_c = (a_1, \dots, a_c)$ be an ordered sequence of $c$ integers with $a_i \ge 2$. The \emph{quantum complete intersection} $A_{\bf{q}}^{{\bf{a}}_c}$ determined by these data is the algebra
$$A_{\bf{q}}^{{\bf{a}}_c} \stackrel{\text{def}}{=} k \langle X_1, \dots, X_c \rangle / (X_i^{a_i}, X_iX_j-q_{ij}X_jX_i),$$
a finite dimensional algebra of dimension $\prod_{i=1}^c a_i$. The image of $X_i$ in this quotient will be denoted by $x_i$. Note that the class of all quantum complete intersections includes the exterior algebras
$$k \langle X_1, \dots, X_c \rangle / (X_i^2, X_iX_j+X_jX_i),$$
as well as finite dimensional commutative complete intersections of the form
$$k[X_1, \dots, X_c]/(X_1^{a_1}, \dots, X_c^{a_c}).$$
These two types of algebras are Frobenius, and the following result shows that the same is true for all quantum complete intersections.
\begin{lemma}\cite[Lemma 3.1]{Bergh}\label{nakayamaQCI}
A quantum complete intersection $A_{\bf{q}}^{{\bf{a}}_c}$ is
Frobenius, with an isomorphism $A_{\bf{q}}^{{\bf{a}}_c}
\xrightarrow{\phi} D(A_{\bf{q}}^{{\bf{a}}_c})$ of left modules and corresponding
Nakayama automorphism $A_{\bf{q}}^{{\bf{a}}_c}
\xrightarrow{\nu} A_{\bf{q}}^{{\bf{a}}_c}$ given by
\begin{eqnarray*}
\phi(1) \left ( \sum_{i_1, \dots, i_c} \alpha_{i_1, \dots, i_c} x_c^{i_c} \cdots x_1^{i_1} \right ) & = & \alpha_{a_1-1, \dots, a_c-1} \\
\nu (x_w) & = & \left ( \prod_{i=1}^c q_{iw}^{a_i-1} \right ) x_w
\end{eqnarray*}
for $1 \le w \le c$.
\end{lemma}
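For example, for an exterior algebra, where each $a_i = 2$ and $q_{ij} = -1$ for $i \neq j$, the lemma gives
$$\nu (x_w) = \left ( \prod_{\substack{i=1 \\ i \neq w}}^c (-1) \right ) x_w = (-1)^{c-1} x_w,$$
so that $\nu$ is the identity when $c$ is odd. Similarly, for a commutative complete intersection, where $q_{ij} = 1$ for all $i,j$, we obtain $\nu = 1$, so that the algebra is symmetric.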
Thus the Tate-Hochschild duality results from the previous section apply to quantum complete intersections; in particular, there are isomorphisms
\begin{eqnarray*}
\operatorname{\widehat{HH}}\nolimits_n( A_{\bf{q}}^{{\bf{a}}_c}, A_{\bf{q}}^{{\bf{a}}_c} ) & \cong & \operatorname{\widehat{HH}}\nolimits_{-(n+1)}( A_{\bf{q}}^{{\bf{a}}_c}, A_{\bf{q}}^{{\bf{a}}_c} ) \\
\operatorname{\widehat{HH}}\nolimits^n( A_{\bf{q}}^{{\bf{a}}_c}, A_{\bf{q}}^{{\bf{a}}_c} ) & \cong & \operatorname{\widehat{HH}}\nolimits^{-(n+1)} ( A_{\bf{q}}^{{\bf{a}}_c} , {_{\nu^2}(A_{\bf{q}}^{{\bf{a}}_c})}_1 )
\end{eqnarray*}
for all $n \in \mathbb{Z}$. As in Corollary \ref{nakayamasquare}, the quantum complete intersections whose Nakayama automorphisms square to the identity satisfy the same nice duality for Tate-Hochschild cohomology as for homology. In particular, this holds for the exterior algebras.
\begin{theorem}\label{exterioralgebras}
If $k$ is a field and $A$ an exterior algebra
$$A = k \langle X_1, \dots, X_c \rangle / (X_i^2, X_iX_j+X_jX_i),$$
then there are isomorphisms
\begin{eqnarray*}
\operatorname{\widehat{HH}}\nolimits_n( A,A ) & \cong & \operatorname{\widehat{HH}}\nolimits_{-(n+1)}( A,A ) \\
\operatorname{\widehat{HH}}\nolimits^n( A,A ) & \cong & \operatorname{\widehat{HH}}\nolimits^{-(n+1)} ( A,A )
\end{eqnarray*}
for all $n \in \mathbb{Z}$.
\end{theorem}
\begin{proof}
Only the cohomology isomorphism needs explanation. By Lemma \ref{nakayamaQCI}, the Nakayama automorphism of $A$ is the identity when $c$ is odd, and maps each generator $x_i$ to $-x_i$ when $c$ is even. In either case, the automorphism squares to the identity.
\end{proof}
We shall calculate the dimensions of the Tate-Hochschild (co)homology groups of all exterior algebras and certain commutative complete intersections. Moreover, we shall also find lower bounds for the dimensions of the homology groups of a general quantum complete intersection. In order to do this, we need an explicit description of a complete bimodule resolution of these algebras ``near zero''. Let therefore $A$ denote a general quantum complete intersection $A_{\bf{q}}^{{\bf{a}}_c}$, and consider the element
$$s = \sum_{\substack{0 \le i_1 <a_1 \\ \vdots \\ 0 \le i_c <a_c}} \left ( \prod_{1 \le u<v \le c} q_{uv}^{-i_v(a_u-i_u-1)} \right ) x_c^{i_c} \cdots x_1^{i_1} \otimes x_c^{a_c-i_c-1} \cdots x_1^{a_1-i_1-1}$$
in the enveloping algebra $A^{\e}$. Furthermore, let $( A^{\e} )^c \xrightarrow{f} A^{\e}$ be the map obtained by multiplying an element of $( A^{\e} )^c$ by the $c \times 1$ matrix
$$(1 \otimes x_1-x_1 \otimes 1, \dots, 1 \otimes x_c-x_c \otimes 1)^T$$
from the right. We claim that the sequence
$$( A^{\e} )^c \xrightarrow{f} A^{\e} \xrightarrow{\cdot s} A^{\e}$$
of left $A^{\e}$-homomorphisms is exact. A direct (but tedious) computation shows that, for $1 \le t \le c$, the expression
\begin{eqnarray*}
& & \sum_{\substack{0 \le i_1 <a_1 \\ \vdots \\ 0 \le i_c <a_c}} \left ( \prod_{1 \le u<v \le c} q_{uv}^{-i_v(a_u-i_u-1)} \right ) x_c^{i_c} \cdots x_1^{i_1} \otimes x_c^{a_c-i_c-1} \cdots x_1^{a_1-i_1-1}x_t \\
& - & \sum_{\substack{0 \le i_1 <a_1 \\ \vdots \\ 0 \le i_c <a_c}} \left ( \prod_{1 \le u<v \le c} q_{uv}^{-i_v(a_u-i_u-1)} \right ) x_tx_c^{i_c} \cdots x_1^{i_1} \otimes x_c^{a_c-i_c-1} \cdots x_1^{a_1-i_1-1}
\end{eqnarray*}
is zero in $A^{\e}$. But this expression is the product $(1 \otimes x_t-x_t \otimes 1)s$, hence the sequence is a complex. Since the cokernel of the left map is $A$, it is enough to show that the dimension of the image of the right map is at least the dimension of $A$, namely $a_1a_2 \cdots a_c$. This is easy: the elements
$$(x_c^{j_c} \cdots x_1^{j_1} \otimes 1)s, \hspace{1cm} 0 \le j_1 < a_1, \dots, 0 \le j_c < a_c$$
are linearly independent in $A^{\e}$.
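To illustrate, consider the simplest case $c=1$, where $A = k[x]/(x^a)$ and $s = \sum_{i=0}^{a-1} x^i \otimes x^{a-i-1}$. Here the complex condition is a telescoping computation:
$$(1 \otimes x - x \otimes 1)s = \sum_{i=0}^{a-1} x^i \otimes x^{a-i} - \sum_{i=0}^{a-1} x^{i+1} \otimes x^{a-i-1} = \sum_{j=1}^{a-1} x^j \otimes x^{a-j} - \sum_{j=1}^{a-1} x^j \otimes x^{a-j} = 0,$$
since the terms $1 \otimes x^a$ and $x^a \otimes 1$ vanish. Moreover, the elements $(x^j \otimes 1)s$ for $0 \le j < a$ are visibly linearly independent, since $(x^j \otimes 1)s$ is the only one involving the basis vector $x^j \otimes x^{a-1}$.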
Consequently the sequence is exact, and may therefore be considered as the part
$$P_1 \xrightarrow{d_1} P_0 \xrightarrow{d_0} P_{-1}$$
of a complete bimodule resolution of $A$. We shall use it to calculate $\operatorname{\widehat{HH}}\nolimits_0(A, {_{\psi}A_1})$ for various twisted bimodules ${_{\psi}A_1}$, with the help of the following lemma.
\begin{lemma}\label{zeromaps}
Let $A=A_{\bf{q}}^{{\bf{a}}_c}$ be a quantum complete intersection, and $A \xrightarrow{\psi} A$ an automorphism given by $x_i \mapsto \alpha_i x_i$, where $\alpha_1, \dots, \alpha_c$ are nonzero scalars. Furthermore, let $e^c_w$ denote the $w$th standard generator in the $c$-fold direct sum ${_{\psi}A_1^c}$, and $\underline{\alpha}$ the element
$$(1+ \alpha_1 + \cdots + \alpha_1^{a_1-1})(1+ \alpha_2 + \cdots + \alpha_2^{a_2-1}) \cdots (1+ \alpha_c + \cdots + \alpha_c^{a_c-1}).$$
Then there is an isomorphism
$$\xymatrix{
{_{\psi}A_1} \otimes_{A^{\e}} ( A^{\e} )^c \ar[r]^{1 \otimes f} \ar[d]^{\wr} & {_{\psi}A_1} \otimes_{A^{\e}} A^{\e} \ar[r]^{1 \otimes ( \cdot s)} \ar[d]^{\wr} & {_{\psi}A_1} \otimes_{A^{\e}} A^{\e} \ar[d]^{\wr} \\
{_{\psi}A_1^c} \ar[r]^{d_1^{\psi}} & {_{\psi}A_1} \ar[r]^{d_0^{\psi}} & {_{\psi}A_1} }$$
of complexes, where the maps $d_i^{\psi}$ are given as follows:
\begin{eqnarray*}
d_1^{\psi} (x_c^{u_c} \cdots x_1^{u_1}e^c_w) & = & \left ( \alpha_w \prod_{i=w}^c q_{wi}^{u_i} - \prod_{j=1}^w q_{jw}^{u_j} \right ) (x_c^{u_c} \cdots x_w^{u_w+1} \cdots x_1^{u_1}) \\
d_0^{\psi} (x_c^{u_c} \cdots x_1^{u_1}) & = & \left \{
\begin{array}{ll}
0 & \text{if } u_i>0 \text{ for some } i \\
\underline{\alpha} x_c^{a_c-1} \cdots x_1^{a_1-1} & \text{if } u_1= \cdots =u_c =0.
\end{array} \right.
\end{eqnarray*}
\end{lemma}
\begin{proof}
Clearly
$$d_1^{\psi} (x_c^{u_c} \cdots x_1^{u_1}e^c_w) = (x_c^{u_c} \cdots x_1^{u_1}) \cdot (1 \otimes x_w - x_w \otimes 1),$$
where the product means right scalar action on ${_{\psi}A_1}$ from $A^{\e}$. Therefore
\begin{eqnarray*}
d_1^{\psi} (x_c^{u_c} \cdots x_1^{u_1}e^c_w) & = & \psi (x_w) x_c^{u_c} \cdots x_1^{u_1} - x_c^{u_c} \cdots x_1^{u_1}x_w \\
& = & \alpha_w x_w x_c^{u_c} \cdots x_1^{u_1} - x_c^{u_c} \cdots x_1^{u_1}x_w \\
& = & \left ( \alpha_w \prod_{i=w}^c q_{wi}^{u_i} - \prod_{j=1}^w q_{jw}^{u_j} \right ) (x_c^{u_c} \cdots x_w^{u_w+1} \cdots x_1^{u_1}).
\end{eqnarray*}
As for $d_0^{\psi}$, this is just right multiplication by $s$. Since the total weight of $x_i$ in $s$ is $a_i-1$, it is easy to see that $d_0^{\psi} (x_c^{u_c} \cdots x_1^{u_1})=0$ whenever $u_i \ge 1$ for some $i$. Thus only $d_0^{\psi}(1)$ remains:
\begin{eqnarray*}
d_0^{\psi}(1) & = & 1 \cdot s \\
& = & \sum_{\substack{0 \le i_1 <a_1 \\ \vdots \\ 0 \le i_c <a_c}} \left ( \prod_{1 \le u<v \le c} q_{uv}^{-i_v(a_u-i_u-1)} \right ) \psi ( x_c^{a_c-i_c-1} \cdots x_1^{a_1-i_1-1} ) x_c^{i_c} \cdots x_1^{i_1} \\
& = & \underline{\alpha} x_c^{a_c-1} \cdots x_1^{a_1-1}.
\end{eqnarray*}
\end{proof}
As a first application of Lemma \ref{zeromaps}, we calculate the Tate-Hochschild (co)homology groups of certain finite dimensional commutative complete intersections.
\begin{theorem}\label{ci}
Let $k$ be a field of characteristic $p$, and $A$ a finite dimensional commutative complete intersection of the form
$$A = k [ X_1, \dots, X_c ] / (X_1^{a}, \dots, X_c^{a}),$$
where $a \ge 2$.
Then
$$\dim \operatorname{\widehat{HH}}\nolimits_n(A,A) = \left \{
\begin{array}{ll}
\binom{c+n-1}{n} a^c & \text{if } p \mid a \\
a^c-1 & \text{if } p \nmid a \text{ and } n=0 \\
\sum_{t=0}^c \binom{c}{t} \binom{n-1}{n-c+t}a^t(a-1)^{c-t} & \text{if } p \nmid a \text{ and } n \ge 1
\end{array}
\right.$$
for $n \ge 0$, and
$$\dim \operatorname{\widehat{HH}}\nolimits_n(A,A) = \dim \operatorname{\widehat{HH}}\nolimits^n(A,A) = \dim \operatorname{\widehat{HH}}\nolimits_{-(n+1)}(A,A) = \dim \operatorname{\widehat{HH}}\nolimits^{-(n+1)}(A,A)$$
for all $n \in \mathbb{Z}$.
\end{theorem}
\begin{proof}
Since $A$ is symmetric, it follows from Lemma \ref{dualityexttor} that the dimension of $\operatorname{\widehat{HH}}\nolimits^n(A,A)$ equals that of $\operatorname{\widehat{HH}}\nolimits_n(A,A)$ for all $n \in \mathbb{Z}$. Together with Theorem \ref{dualityTHfrobenius}, this gives the three dimension equalities.
To calculate $\operatorname{\widehat{HH}}\nolimits_0(A,A)$, we use Lemma \ref{zeromaps} with $\psi =1$. The map $d^1_1$ is clearly the zero map, hence $\dim \operatorname{\widehat{HH}}\nolimits_0(A,A) = \dim \operatorname{Ker}\nolimits d^1_0$. Since
$$d_0^1 (x_c^{u_c} \cdots x_1^{u_1}) = \left \{
\begin{array}{ll}
0 & \text{if } u_i>0 \text{ for some } i \\
a^c x_c^{a-1} \cdots x_1^{a-1} & \text{if } u_1= \cdots =u_c =0,
\end{array} \right.$$
we obtain
$$\dim \operatorname{\widehat{HH}}\nolimits_0(A,A) = \left \{
\begin{array}{ll}
a^c -1 & \text{if } p \nmid a \\
a^c & \text{if } p \mid a.
\end{array} \right.$$
\sloppy To calculate $\operatorname{\widehat{HH}}\nolimits_n(A,A)$ for $n \ge 1$, we use the fact that $\dim \operatorname{\widehat{HH}}\nolimits^n(A,A) = \dim \operatorname{\widehat{HH}}\nolimits_n(A,A)$, and that, for such $n$, there is an isomorphism $\operatorname{\widehat{HH}}\nolimits^n(A,A) \cong \operatorname{HH}\nolimits^n(A,A)$. Moreover, by \cite[Theorem X.7.4]{Maclane}, there is an isomorphism
$$\operatorname{HH}\nolimits^n(A,A) = \bigoplus_{n_1+ \cdots +n_c =n} \left ( \operatorname{HH}\nolimits^{n_1} \left ( k[X]/(X^{a}) \right ) \otimes_k \cdots \otimes_k \operatorname{HH}\nolimits^{n_c} \left ( k[X]/(X^{a}) \right ) \right ),$$
hence
\begin{equation*}\label{formula}
\dim \operatorname{HH}\nolimits^n(A,A) = \sum_{n_1+ \cdots +n_c =n} \left ( \prod_{i=1}^c \dim \operatorname{HH}\nolimits^{n_i} \left ( k[X]/(X^{a}) \right ) \right ). \tag{$\dagger$}
\end{equation*}
By \cite[Proposition 2.2]{Holm}, the dimensions of the Hochschild cohomology groups of the truncated polynomial algebra $k[X]/(X^a)$ are given by
$$\dim \operatorname{HH}\nolimits^n \left ( k[X]/(X^a) \right ) = \left \{
\begin{array}{ll}
a & \text{when } n=0 \\
a & \text{when } n>0 \text{ and } p \mid a \\
a-1 & \text{when } n>0 \text{ and } p \nmid a.
\end{array}
\right.$$
Therefore, when $p \mid a$, then
\begin{eqnarray*}
\dim \operatorname{HH}\nolimits^n(A,A) &=& \sum_{\substack{n_1+ \cdots +n_c =n \\ n_i \ge 0}} a^c \\
&=& \binom{c+n-1}{n} a^c.
\end{eqnarray*}
When $p \nmid a$, then we have to keep track of how many times $\operatorname{HH}\nolimits^0 \left ( k[X]/(X^{a}) \right )$ appears in each summand in the formula (\ref{formula}), since now $\dim \operatorname{HH}\nolimits^0 \left ( k[X]/(X^{a}) \right ) =a$ whereas $\dim \operatorname{HH}\nolimits^m \left ( k[X]/(X^{a}) \right ) =a-1$ for $m \ge 1$. If exactly $t$ out of the numbers $n_1, \dots, n_c$ are zero, then the remaining $c-t$ are nonzero. The number of integer solutions to
$$x_1 + \cdots +x_{c-t} = n, \hspace{5mm} x_i \ge 1$$
is the same as the number of solutions to
$$y_1 + \cdots +y_{c-t} = n-c+t, \hspace{5mm} y_i \ge 0,$$
namely
$$\binom{n-1}{n-c+t}.$$
Therefore, in the formula (\ref{formula}), the total contribution from all the summands in which precisely $t$ out of the numbers $n_1, \dots, n_c$ are zero, is
$$\binom{c}{t} \binom{n-1}{n-c+t}a^t(a-1)^{c-t}.$$
Summing up, we see that when $p \nmid a$, then
$$\dim \operatorname{HH}\nolimits^n(A,A) = \sum_{t=0}^c \binom{c}{t} \binom{n-1}{n-c+t}a^t(a-1)^{c-t}.$$
\end{proof}
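As a sanity check of the formulas in Theorem \ref{ci}, take $c=1$, so that $A = k[X]/(X^a)$. When $p \nmid a$ and $n \ge 1$, the sum reduces to
$$\sum_{t=0}^1 \binom{1}{t} \binom{n-1}{n-1+t} a^t (a-1)^{1-t} = \binom{n-1}{n-1}(a-1) + \binom{n-1}{n} a = a-1,$$
since $\binom{n-1}{n}=0$, whereas when $p \mid a$ the formula gives $\binom{n}{n} a^1 = a$. Both values agree with the dimensions of $\operatorname{HH}\nolimits^n \left ( k[X]/(X^a) \right )$ recalled from \cite[Proposition 2.2]{Holm}.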
\begin{remark}\label{remarkCI}
Theorem \ref{ci} is probably well known to some, at least in terms of the ordinary Hochschild (co)homology. However, we were unable to find a reference. Note that the same method of proof also applies to general finite dimensional commutative complete intersections of the form
$$k [ X_1, \dots, X_c ] / (X_1^{a_1}, \dots, X_c^{a_c}).$$
However, the resulting formulas become much more complicated.
\end{remark}
Next, we calculate the Tate-Hochschild (co)homology groups of all exterior algebras.
\begin{theorem}\label{THexterioralgebras}
Let $k$ be a field of characteristic $p$, and $A$ an exterior algebra
$$A = k \langle X_1, \dots, X_c \rangle / (X_i^2, X_iX_j+X_jX_i).$$
Then
$$\dim \operatorname{\widehat{HH}}\nolimits_n(A,A) = \left \{
\begin{array}{ll}
2^c \binom{c+n-1}{c-1} & \text{if } p =2 \\
2^c-2^{c-1} & \text{if } p \neq 2 \text{ and } n=0 \\
2^{c-1} \binom{c+n-1}{c-1} & \text{if } p \neq 2 \text{ and } n \ge 1
\end{array}
\right.$$
for $n \ge 0$, and
$$\dim \operatorname{\widehat{HH}}\nolimits_n(A,A) = \dim \operatorname{\widehat{HH}}\nolimits^n(A,A) = \dim \operatorname{\widehat{HH}}\nolimits_{-(n+1)}(A,A) = \dim \operatorname{\widehat{HH}}\nolimits^{-(n+1)}(A,A)$$
for all $n \in \mathbb{Z}$.
\end{theorem}
\begin{proof}
For positive $n$, the dimensions of $\operatorname{\widehat{HH}}\nolimits_n(A,A)$ and $\operatorname{\widehat{HH}}\nolimits^n(A,A)$ are given by \cite[Theorem 2 and Theorem 3]{XuHan}, and those results also show that the two dimensions agree. In view of Theorem \ref{exterioralgebras}, we therefore only have to calculate $\operatorname{\widehat{HH}}\nolimits_0(A,A)$ and $\operatorname{\widehat{HH}}\nolimits^0(A,A)$.
First we calculate the dimension of $\operatorname{\widehat{HH}}\nolimits_0(A,A)$. In the terminology of Lemma \ref{zeromaps}, we must calculate the homology of the complex
$$A^c \xrightarrow{d^1_1} A \xrightarrow{d^1_0} A,$$
with maps given by
\begin{eqnarray*}
d_1^1 (x_c^{u_c} \cdots x_1^{u_1}e^c_w) & = & \left ( (-1)^{u_{w+1}+ \cdots + u_c}- (-1)^{u_1+ \cdots + u_{w-1}} \right ) (x_c^{u_c} \cdots x_w^{u_w+1} \cdots x_1^{u_1}) \\
d_0^1 (x_c^{u_c} \cdots x_1^{u_1}) & = & \left \{
\begin{array}{ll}
0 & \text{if } u_i>0 \text{ for some } i \\
2^c x_c \cdots x_1 & \text{if } u_1= \cdots =u_c =0.
\end{array} \right.
\end{eqnarray*}
Suppose that $p \neq 2$. Then $x_c^{u_c} \cdots x_1^{u_1} \in \Im d^1_1$ if and only if $u_1 + \cdots + u_c$ is a positive even number, and so
$$\dim \Im d^1_1 = \binom{c}{2} + \binom{c}{4} + \cdots = 2^{c-1}-1.$$
Moreover, the dimension of $\operatorname{Ker}\nolimits d^1_0$ is $2^c-1$. If $p=2$, then $d^1_1$ and $d^1_0$ are both zero, and consequently
$$\dim \operatorname{\widehat{HH}}\nolimits_0(A,A) = \left \{
\begin{array}{ll}
2^c & \text{if } p =2 \\
2^c-2^{c-1} & \text{if } p \neq 2.
\end{array}
\right.$$
Next, we calculate the dimension of $\operatorname{\widehat{HH}}\nolimits^0(A,A)$. If $c$ is odd or $p=2$, then $A$ is symmetric, and so $\dim \operatorname{\widehat{HH}}\nolimits^0(A,A) = \dim \operatorname{\widehat{HH}}\nolimits_0(A,A)$ in this case. Suppose therefore that $c$ is even and $p \neq 2$. By Lemma \ref{dualityexttor}, the dimension of $\operatorname{\widehat{HH}}\nolimits^0(A,A)$ equals that of $\operatorname{\widehat{HH}}\nolimits_0(A, {_{\nu}A_1})$, where the Nakayama automorphism $\nu$ maps each $x_i$ to $-x_i$. Using Lemma \ref{zeromaps}, we must therefore calculate the homology of the complex
$${_{\nu}A^c_1} \xrightarrow{d^{\nu}_1} {_{\nu}A_1} \xrightarrow{d^{\nu}_0} {_{\nu}A_1},$$
with
$$d_1^{\nu} (x_c^{u_c} \cdots x_1^{u_1}e^c_w) = - \left ( (-1)^{u_{w+1}+ \cdots + u_c}+ (-1)^{u_1+ \cdots + u_{w-1}} \right ) (x_c^{u_c} \cdots x_w^{u_w+1} \cdots x_1^{u_1})$$
and $d_0^{\nu} = 0$. We see that $x_c^{u_c} \cdots x_1^{u_1} \in \Im d^{\nu}_1$ if and only if $u_1 + \cdots + u_c$ is a positive odd number, and so
$$\dim \Im d^{\nu}_1 = \binom{c}{1} + \binom{c}{3} + \cdots = 2^{c-1}.$$
Consequently, the dimension of $\operatorname{\widehat{HH}}\nolimits^0(A,A)$ equals that of $\operatorname{\widehat{HH}}\nolimits_0(A,A)$ also in this case, namely $2^c-2^{c-1}$.
\end{proof}
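Note that when $p=2$, the exterior algebra coincides with the commutative complete intersection $k[X_1, \dots, X_c]/(X_1^2, \dots, X_c^2)$, and the formulas in Theorem \ref{THexterioralgebras} and Theorem \ref{ci} agree, as they must: with $a=2$ and $p \mid a$, Theorem \ref{ci} gives
$$\dim \operatorname{\widehat{HH}}\nolimits_n(A,A) = \binom{c+n-1}{n} 2^c = 2^c \binom{c+n-1}{c-1}$$
for $n \ge 0$.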
The next result establishes lower bounds for the dimensions of the Tate-Hochschild homology groups of an arbitrary quantum complete intersection.
\begin{theorem}\label{lowerbound}
Let $k$ be a field of characteristic $p$, and
$$A_{\bf{q}}^{{\bf{a}}_c} = k \langle X_1, \dots, X_c \rangle / (X_i^{a_i}, X_iX_j-q_{ij}X_jX_i)$$
a quantum complete intersection with $a_i \ge 2$ for all $i$. Furthermore, suppose that $p$ divides exactly $d$ of the exponents $a_1, \dots, a_c$. Then
$$\dim \operatorname{\widehat{HH}}\nolimits_n ( A_{\bf{q}}^{{\bf{a}}_c}, A_{\bf{q}}^{{\bf{a}}_c} ) \ge \left \{
\begin{array}{ll}
\sum_{i=1}^ca_i - c & \text{if } n=0,-1 \text{ and } p \nmid a_i \text{ for all } i \\
\sum_{i=1}^ca_i - c+1 & \text{if } n=0,-1 \text{ and } p \mid a_i \text{ for some } i \\
\sum_{i=1}^ca_i - c+ d & \text{if } n \neq 0,-1,
\end{array}
\right.$$
in particular $\operatorname{\widehat{HH}}\nolimits_n ( A_{\bf{q}}^{{\bf{a}}_c}, A_{\bf{q}}^{{\bf{a}}_c} ) \neq 0$ for all $n \in \mathbb{Z}$.
\end{theorem}
\begin{proof}
Denote the algebra by $A$. It follows from \cite[Proposition 4.9]{BerghMadsen} that the dimension of $\operatorname{\widehat{HH}}\nolimits_n(A,A)$ is at least $\sum_{i=1}^ca_i - c+ d$ when $n \ge 1$. Therefore, by Theorem \ref{dualityTHfrobenius}, we only need to establish the bound for the dimension of $\operatorname{\widehat{HH}}\nolimits_0(A,A)$.
As before, we use Lemma \ref{zeromaps}: the space $\operatorname{\widehat{HH}}\nolimits_0(A,A)$ is the homology of the complex
$$A^c \xrightarrow{d^1_1} A \xrightarrow{d^1_0} A,$$
with maps given by
\begin{eqnarray*}
d_1^1 (x_c^{u_c} \cdots x_1^{u_1}e^c_w) & = & \left ( \prod_{i=w}^c q_{wi}^{u_i} - \prod_{j=1}^w q_{jw}^{u_j} \right ) (x_c^{u_c} \cdots x_w^{u_w+1} \cdots x_1^{u_1}) \\
d_0^1 (x_c^{u_c} \cdots x_1^{u_1}) & = & \left \{
\begin{array}{ll}
0 & \text{if } u_i>0 \text{ for some } i \\
\left ( \prod_{i=1}^c a_i \right ) x_c^{a_c-1} \cdots x_1^{a_1-1} & \text{if } u_1= \cdots =u_c =0.
\end{array} \right.
\end{eqnarray*}
For any $1 \le w \le c$ and $1 \le u \le a_w-1$, the element $x_w^u$ in $A$ is not contained in the image of $d^1_1$, because $d^1_1(x_w^{u-1}e^c_w)=0$. Also, the identity in $A$ is not contained in this image, hence
$$\dim \Im d^1_1 \le \dim A - \sum_{i=1}^c (a_i-1) -1.$$
As for the dimension of $\operatorname{Ker}\nolimits d^1_0$, this is $\dim A$ when $p$ divides one of the exponents $a_1, \dots, a_c$, and $\dim A-1$ if not. The lower bound for the dimension of $\operatorname{\widehat{HH}}\nolimits_0(A,A)$ follows immediately from this.
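Explicitly, these two estimates combine to give
$$\dim \operatorname{\widehat{HH}}\nolimits_0(A,A) = \dim \operatorname{Ker}\nolimits d^1_0 - \dim \Im d^1_1 \ge \dim \operatorname{Ker}\nolimits d^1_0 - \dim A + \sum_{i=1}^c (a_i-1) + 1,$$
which equals $\sum_{i=1}^c a_i - c$ when $p \nmid a_i$ for all $i$, and $\sum_{i=1}^c a_i - c + 1$ when $p \mid a_i$ for some $i$.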
\end{proof}
Of course, for some quantum complete intersections, the difference between the actual dimensions of the Tate-Hochschild homology groups and the lower bound given in Theorem \ref{lowerbound} can be arbitrarily large. For example, suppose that $A$ is either a finite dimensional commutative complete intersection of the form
$$k [ X_1, \dots, X_c ] / (X_1^{a}, \dots, X_c^{a}),$$
or an exterior algebra
$$k \langle X_1, \dots, X_c \rangle / (X_i^2, X_iX_j+X_jX_i).$$
Then Theorem \ref{ci} and Theorem \ref{THexterioralgebras} show that
$$\lim_{n \to \pm \infty} \dim \operatorname{\widehat{HH}}\nolimits_n(A,A) = \infty,$$
in fact, when $n \ge 1$, then $\dim \operatorname{\widehat{HH}}\nolimits_n(A,A)$ is given by a polynomial of degree $c-1$.
However, as the following result shows, there are quantum complete intersections where the lower bound given in Theorem \ref{lowerbound} is the actual dimension of the Tate-Hochschild homology groups.
\begin{theorem}\label{homologycodim2}
Let $k$ be a field of characteristic $p$, and
$$A = k \langle X,Y \rangle / (X^a, XY-qYX, Y^b)$$
a quantum complete intersection with $a,b\ge 2$ and $q$ not a root of unity in $k$. Then
$$\dim \operatorname{\widehat{HH}}\nolimits_n(A,A) = \left \{
\begin{array}{ll}
a+b-2 & \text{if } n=0,-1 \text{ and } p \nmid a,b \\
a+b-1 & \text{if } n=0,-1 \text{ and } p \mid a \text{ or } p \mid b \\
a+b-2 & \text{if } n \neq 0,-1 \text{ and } p \nmid a,b \\
a+b-1 & \text{if } n \neq 0,-1 \text{ and either } p \mid a \text{ or } p \mid b \\
a+b & \text{if } n \neq 0,-1 \text{ and } p \mid a,b.
\end{array}
\right.$$
\end{theorem}
\begin{proof}
The dimensions of $\operatorname{\widehat{HH}}\nolimits_n(A,A)$ for $n \ge 1$ follow from \cite[Theorem 3.1]{BerghErdmann}, and so by Theorem \ref{dualityTHfrobenius}, we only need to calculate the dimension of $\operatorname{\widehat{HH}}\nolimits_0(A,A)$. By Lemma \ref{zeromaps}, this homology group is the homology of the complex
$$A^2 \xrightarrow{d_1^1} A \xrightarrow{d_0^1} A,$$
with maps given by
\begin{eqnarray*}
d_1^1 (y^u x^ve^2_1) & = & (q^u-1)y^ux^{v+1} \\
d_1^1 (y^u x^ve^2_2) & = & (1-q^v)y^{u+1}x^v \\
d_0^1 (y^u x^v) & = & \left \{
\begin{array}{ll}
0 & \text{if } u>0 \text{ or } v>0 \\
ab y^{b-1}x^{a-1} & \text{if } u=v=0.
\end{array} \right.
\end{eqnarray*}
It is easy to see that an element $y^u x^v \in A$ belongs to $\Im d^1_1$ if and only if both $u$ and $v$ are positive. Indeed, if $u,v \ge 1$, then $d_1^1 (y^{u-1} x^ve^2_2) = (1-q^v)y^ux^v$, and $1-q^v$ is nonzero since $q$ is not a root of unity; conversely, the coefficient $q^u-1$ vanishes precisely when $u=0$ and the coefficient $1-q^v$ vanishes precisely when $v=0$, so every monomial in the image has both exponents positive. This shows that $\dim \Im d^1_1 = (a-1)(b-1) = ab-a-b+1$. The dimension of $\operatorname{Ker}\nolimits d^1_0$ is $ab-1$ if $p$ divides neither $a$ nor $b$, and $ab$ otherwise. The dimension of $\operatorname{\widehat{HH}}\nolimits_0(A,A)$ follows from this.
\end{proof}
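Since $d_1^1$ sends each basis monomial of $A^2$ to a scalar multiple of a single monomial, and since $q^m = 1$ only for $m = 0$ when $q$ is not a root of unity, the rank computation in the proof reduces to counting which monomials are hit with a nonzero coefficient. The sketch below replays this bookkeeping for the characteristic-zero case $p \nmid a,b$; it is a check of the counting, not part of the argument, and the helper name is ours.

```python
def hh0_dim(a, b):
    """dim of the degree-0 Tate-Hochschild homology of
    A = k<x,y>/(x^a, xy - q yx, y^b), q not a root of unity, char k = 0.
    Basis of A: monomials y^u x^v with 0 <= u < b, 0 <= v < a."""
    hit = set()
    for u in range(b):
        for v in range(a):
            # d_1(y^u x^v e_1) = (q^u - 1) y^u x^{v+1}: coefficient nonzero
            # iff u != 0, and y^u x^{v+1} survives in A iff v + 1 < a
            if u != 0 and v + 1 < a:
                hit.add((u, v + 1))
            # d_1(y^u x^v e_2) = (1 - q^v) y^{u+1} x^v: coefficient nonzero
            # iff v != 0, and the target survives iff u + 1 < b
            if v != 0 and u + 1 < b:
                hit.add((u + 1, v))
    im_d1 = len(hit)
    # d_0 kills every monomial except 1, which maps to ab*y^{b-1}x^{a-1} != 0
    ker_d0 = a * b - 1
    return ker_d0 - im_d1

print(hh0_dim(3, 5))  # expected a + b - 2
```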
When it comes to cohomology, the situation is quite different from the homology case. On the one hand, if $A$ is either a finite dimensional commutative complete intersection, or an exterior algebra, then from Theorem \ref{ci} and Theorem \ref{THexterioralgebras} we see that $\operatorname{\widehat{HH}}\nolimits^n(A,A)$ is nonzero for all $n \in \mathbb{Z}$. In fact, just as for homology, when $n \ge 1$, the dimension $\dim \operatorname{\widehat{HH}}\nolimits^n(A,A)$ is given by a polynomial of degree $c-1$ (where $c$ is the number of defining generators for $A$). On the other hand, if $A$ is as in Theorem \ref{homologycodim2}, then it follows from \cite[Theorem 3.2]{BerghErdmann} that $\operatorname{\widehat{HH}}\nolimits^n(A,A) =0$ when $n \ge 3$. Consequently, there is no cohomological counterpart to Theorem \ref{lowerbound}: there is no universal lower bound for the dimensions of the Tate-Hochschild cohomology groups of quantum complete intersections.
Our final main result in this paper is the cohomological version of Theorem \ref{homologycodim2}; we shall determine the dimensions of all the Tate-Hochschild cohomology groups of the quantum complete intersection
$$A = k \langle X,Y \rangle / (X^a, XY-qYX, Y^b)$$
when $q$ is not a root of unity in $k$. To do this, we first calculate the dimension of the (ordinary) Hochschild homology group $\operatorname{HH}\nolimits_n(A, {_{\nu^{-1}}A_1})$ for $n \ge 1$.
By Lemma \ref{nakayamaQCI}, the automorphism $\nu^{-1}$ is given by $x \mapsto q^{b-1}x, y \mapsto q^{1-a}y$.
For parameters $t,i,u,v$, all non-negative integers, define the following eight scalars:
\begin{align*}
K_1(t,i,u,v) & = q^{a+b-ab-1} \sum_{j=0}^{b-1} q^{j \left ( a+ \frac{ai}{2}+v-1 \right ) } & i \text{ even }, i \le 2t \\
K_2(t,i,u,v) & = \sum_{j=0}^{a-1} q^{j \left ( bt+b- \frac{bi}{2} +u-1 \right ) } & i \text{ even }, i \le 2t \\
K_3(t,i,u,v) & = q^{\frac{ai-a+2+2v}{2}} - q^{1-a} & i \text{ odd }, i \le 2t-1 \\
K_4(t,i,u,v) & = q^{\frac{2bt-bi+b+2u}{2}} -1 & i \text{ odd }, i \le 2t-1 \\
K_5(t,i,u,v) & = q^{1-a} - q^{\frac{ai+2v}{2}} & i \text{ even }, i \le 2t \\
K_6(t,i,u,v) & = \sum_{j=0}^{a-1} q^{j \left ( bt+b- \frac{bi}{2} +u \right ) } & i \text{ even }, i \le 2t \\
K_7(t,i,u,v) & = q^{a+b-ab-1} \sum_{j=0}^{b-1} q^{j \left ( a+ \frac{a(i-1)}{2}+v \right ) } & i \text{ odd }, i \le 2t+1 \\
K_8(t,i,u,v) & = q^{\frac{2bt-bi+3b+2u-2}{2}} - 1 & i \text{ odd }, i \le 2t+1.
\end{align*}
Note that since $q$ is not a root of unity and $a,b \ge 2$, all these scalars are nonzero in $k$. Next, for each integer $n \ge 0$, denote by $\oplus_{i=0}^n A e^n_i$ the vector space consisting of $n+1$ copies of $A$. Finally, for each $n \ge 1$, define a map
$$\oplus_{i=0}^n Ae^n_i \xrightarrow{\delta_n} \oplus_{i=0}^{n-1} Ae^{n-1}_i$$
by
$$
\begin{array}{l}
\delta_{2t} \colon y^ux^v e^{2t}_i \mapsto \vspace{2mm} \\
\left \{
\begin{array}{ll}
K_1(t,i,u,v)y^{u+b-1}x^ve^{2t-1}_i +
K_2(t,i,u,v)y^ux^{v+a-1}e^{2t-1}_{i-1}, & \text{ for $i$
even}
\\
\\
K_3(t,i,u,v)y^{u+1}x^ve^{2t-1}_i + K_4(t,i,u,v)y^ux^{v+1}e^{2t-1}_{i-1}, & \text{
for $i$ odd}
\end{array} \right. \\
\\
\delta_{2t+1} \colon y^ux^v e^{2t+1}_i \mapsto \vspace{2mm} \\
\left \{
\begin{array}{ll}
K_5(t,i,u,v)y^{u+1} x^ v e^{2t}_i +
K_6(t,i,u,v) y^u x^{v+a-1} e^{2t}_{i-1}, & \text{ for $i$
even}
\\
\\
K_7(t,i,u,v) y^{u+b-1} x^v e^{2t}_i + K_8(t,i,u,v)y^u x^{v+1} e^{2t}_{i-1}, &
\text{ for $i$ odd,}
\end{array} \right.
\end{array}
$$
where we use the convention $e^n_{-1} = e^n_{n+1} =0$. With this notation, it follows from \cite[pages 510--511]{BerghErdmann} that $\operatorname{HH}\nolimits_n(A, {_{\nu^{-1}}A_1})$ is the homology of the complex
$$\cdots \to \oplus_{i=0}^{n+1} Ae^{n+1}_i \xrightarrow{\delta_{n+1}}
\oplus_{i=0}^n Ae^n_i \xrightarrow{\delta_n} \oplus_{i=0}^{n-1} Ae^{n-1}_i \to \cdots$$
of $k$-vector spaces.
\begin{proposition}\label{negativecohomologycodim2}
Let $k$ be a field and
$$A = k \langle X,Y \rangle / (X^a, XY-qYX, Y^b)$$
a quantum complete intersection with $a,b\ge 2$ and $q$ not a root of unity in $k$. Then $\operatorname{HH}\nolimits_n(A, {_{\nu^{-1}}A_1})=0$ for $n \ge 1$.
\end{proposition}
\begin{proof}
We first compute the kernel of $\delta_{2t}$ for $t \ge 1$. If $i$ is even, then
$$\delta_{2t}(y^ux^ve^{2t}_i) =0 \Leftrightarrow \left \{
\begin{array}{l}
u \ge 1, v \ge 1, i \in \{ 0,2, \dots, 2t \}, \text{ or} \\
u \ge 1, v=0, i=0, \text{ or} \\
u=0, v \ge 1, i=2t.
\end{array}
\right.$$
There are $(b-1)(a-1)(t+1)+(b-1)+(a-1)$ such vectors. If $i$ is odd, then
$$\delta_{2t}(y^ux^ve^{2t}_i) =0 \Leftrightarrow u=b-1, v=a-1, i \in \{ 1,3, \dots, 2t-1 \},$$
and there are $t$ such vectors. Finally, the nontrivial linear combinations in $\operatorname{Ker}\nolimits \delta_{2t}$ are
\begin{align*}
& x^ve^{2t}_i + C_1(t,i,u,v)y^{b-1}x^{v-1}e^{2t}_{i+1} & v \ge 1, i \in \{ 0, 2, \dots, 2t-2 \} \\
& y^ue^{2t}_i + C_2(t,i,u,v)y^{u-1}x^{a-1}e^{2t}_{i-1} & u \ge 1, i \in \{ 2,4, \dots, 2t \},
\end{align*}
where $C_1(t,i,u,v)$ and $C_2(t,i,u,v)$ are suitable nonzero scalars in $k$. There are $(a+b-2)t$ such linear combinations in total. Summing up, we see that the dimension of $\operatorname{Ker}\nolimits \delta_{2t}$ is $abt+ab-1$.
Next, we compute the kernel of $\delta_{2t+1}$ for $t \ge 0$. If $i$ is even, then
$$\delta_{2t+1}(y^ux^ve^{2t+1}_i) =0 \Leftrightarrow \left \{
\begin{array}{l}
u=b-1, v \ge 1, i \in \{ 0,2, \dots, 2t \}, \text{ or} \\
u=b-1, v=0, i=0.
\end{array}
\right.$$
There are $(a-1)(t+1)+1$ such vectors. If $i$ is odd, then
$$\delta_{2t+1}(y^ux^ve^{2t+1}_i) =0 \Leftrightarrow \left \{
\begin{array}{l}
u \ge 1, v=a-1, i \in \{ 1,3, \dots, 2t+1 \}, \text{ or} \\
u=0, v=a-1, i=2t+1,
\end{array}
\right.$$
and there are $(b-1)(t+1)+1$ such vectors. Finally, the nontrivial linear combinations in $\operatorname{Ker}\nolimits \delta_{2t+1}$ are
\begin{align*}
& y^ux^ve^{2t+1}_i + C_3(t,i,u,v)y^{u+1}x^{v-1}e^{2t+1}_{i+1} & u \le b-2, v \ge 1, i \in \{ 0, 2, \dots, 2t \} \\
& y^{b-1}e^{2t+1}_i + C_4(t,i,u,v)x^{a-1}e^{2t+1}_{i-1} & i \in \{ 2,4, \dots, 2t \},
\end{align*}
for suitable nonzero scalars $C_3(t,i,u,v)$ and $C_4(t,i,u,v)$ in $k$. There are $(b-1)(a-1)(t+1)+t$ such linear combinations. Consequently, the total dimension of $\operatorname{Ker}\nolimits \delta_{2t+1}$ is $abt+ab+1$.
We have shown that when $n \ge 1$, then
$$\dim \operatorname{Ker}\nolimits \delta_n = \left \{
\begin{array}{ll}
ab \frac{n+2}{2} -1 & \text{for $n$ even} \\
ab \frac{n+1}{2} +1 & \text{for $n$ odd}.
\end{array}
\right.$$
The exact sequence
$$0 \to \operatorname{Ker}\nolimits \delta_n \to \oplus_{i=0}^n Ae^n_i \xrightarrow{\delta_n} \Im \delta_n \to 0$$
gives $\dim \Im \delta_n = (n+1)ab- \dim \operatorname{Ker}\nolimits \delta_n$, and so
$$\dim \Im \delta_{n+1} = \left \{
\begin{array}{ll}
ab \frac{n+2}{2} -1 & \text{for $n$ even} \\
ab \frac{n+1}{2} +1 & \text{for $n$ odd}.
\end{array}
\right.$$
This shows that $\operatorname{HH}\nolimits_n(A, {_{\nu^{-1}}A_1})=0$ for $n \ge 1$.
\end{proof}
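The kernel dimensions in the proof above are each obtained by adding up several separate counts; the following lines merely re-check that arithmetic, together with the identity $\dim \Im \delta_{n+1} = (n+2)ab - \dim \operatorname{Ker} \delta_{n+1} = \dim \operatorname{Ker} \delta_n$ that yields the vanishing, over a range of parameters. The function names are ours.

```python
def ker_even(a, b, t):   # dim Ker delta_{2t}, t >= 1
    return ((b-1)*(a-1)*(t+1) + (b-1) + (a-1)   # even i, three families
            + t                                  # odd i
            + (a+b-2)*t)                         # nontrivial combinations

def ker_odd(a, b, t):    # dim Ker delta_{2t+1}, t >= 0
    return ((a-1)*(t+1) + 1                      # even i
            + (b-1)*(t+1) + 1                    # odd i
            + (b-1)*(a-1)*(t+1) + t)             # nontrivial combinations

for a in range(2, 7):
    for b in range(2, 7):
        for t in range(1, 6):
            # closed forms from the proof
            assert ker_even(a, b, t) == a*b*t + a*b - 1
            assert ker_odd(a, b, t) == a*b*t + a*b + 1
            # vanishing: dim Im delta_{n+1} = dim Ker delta_n
            n = 2*t                                   # n even, n+1 = 2t+1
            assert (n+2)*a*b - ker_odd(a, b, t) == ker_even(a, b, t)
            n = 2*t + 1                               # n odd, n+1 = 2(t+1)
            assert (n+2)*a*b - ker_even(a, b, t+1) == ker_odd(a, b, t)
```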
Using Proposition \ref{negativecohomologycodim2}, we can now compute all the Tate-Hochschild cohomology groups of $A$. Note that the characteristic of the ground field does not matter, in contrast to the homology case in Theorem \ref{homologycodim2}.
\begin{theorem}\label{cohomologycodim2}
Let $k$ be a field and
$$A = k \langle X,Y \rangle / (X^a, XY-qYX, Y^b)$$
a quantum complete intersection with $a,b\ge 2$ and $q$ not a root of unity in $k$. Then
$$\dim \operatorname{\widehat{HH}}\nolimits^n(A,A) = \left \{
\begin{array}{ll}
1 & \text{if } n=0 \\
2 & \text{if } n=1 \\
1 & \text{if } n=2 \\
0 & \text{if } n \neq 0,1,2. \\
\end{array}
\right.$$
\end{theorem}
\begin{proof}
For $n \ge 1$, the dimensions follow from \cite[Theorem 3.2]{BerghErdmann}. Moreover, by Theorem \ref{dualityTHfrobenius} and Lemma \ref{dualityexttor} there are equalities
\begin{eqnarray*}
\dim \operatorname{\widehat{HH}}\nolimits^n(A,A) &=& \dim \operatorname{\widehat{HH}}\nolimits^{-(n+1)}(A, {_{\nu^2}A_1}) \\
&=& \dim \operatorname{\widehat{HH}}\nolimits_{-(n+1)}(A, D({_{\nu^2}A_1})) \\
&=& \dim \operatorname{\widehat{HH}}\nolimits_{-(n+1)}(A, {_{\nu^{-1}}A_1})
\end{eqnarray*}
for all $n \in \mathbb{Z}$. It follows from Proposition \ref{negativecohomologycodim2} that $\operatorname{\widehat{HH}}\nolimits_n(A, {_{\nu^{-1}}A_1}) =0$ for $n \ge 1$, hence $\operatorname{\widehat{HH}}\nolimits^n(A,A)=0$ for $n \le -2$. What remains is therefore to compute $\operatorname{\widehat{HH}}\nolimits^0(A,A)$ and $\operatorname{\widehat{HH}}\nolimits^{-1}(A,A)$.
Since $\dim \operatorname{\widehat{HH}}\nolimits^0(A,A) = \dim \operatorname{\widehat{HH}}\nolimits_0(A, {_{\nu}A_1})$ by Lemma \ref{dualityexttor}, we use Lemma \ref{zeromaps}. Namely, the space $\operatorname{\widehat{HH}}\nolimits_0(A, {_{\nu}A_1})$ is the homology of the complex
$${_{\nu}A^2_1} \xrightarrow{d^{\nu}_1} {_{\nu}A_1} \xrightarrow{d^{\nu}_0} {_{\nu}A_1},$$
with maps given by
\begin{eqnarray*}
d^{\nu}_1(y^ux^ve^2_1) &=& (q^{u+1-b}-1)y^ux^{v+1} \\
d^{\nu}_1(y^ux^ve^2_2) &=& (q^{a-1}-q^{v})y^{u+1}x^v \\
d^{\nu}_0(y^ux^v) &=& \left \{
\begin{array}{ll}
0 & \text{if $u \ge 1$ or $v \ge 1$} \\
\frac{q^{a-ba}-1}{q^{1-b}-1} \frac{q^{ba-b}-1}{q^{a-1}-1} y^{b-1}x^{a-1} & \text{if $u=v=0$}.
\end{array}
\right.
\end{eqnarray*}
We see that
$$y^ux^v \in \Im d^{\nu}_1 \Leftrightarrow (u,v) \notin \{ (0,0),(b-1,a-1) \},$$
hence $\dim \Im d^{\nu}_1 = ab-2$. Since $\dim \operatorname{Ker}\nolimits d^{\nu}_0 = ab-1$, it follows that the dimension of $\operatorname{\widehat{HH}}\nolimits^0(A,A)$ is $1$.
\sloppy Finally, we compute $\operatorname{\widehat{HH}}\nolimits^{-1}(A,A)$. From the beginning of the proof we know that $\dim \operatorname{\widehat{HH}}\nolimits^{-1}(A,A) = \dim \operatorname{\widehat{HH}}\nolimits_0(A, {_{\nu^{-1}}A_1})$, so once again we use Lemma \ref{zeromaps}. The space $\operatorname{\widehat{HH}}\nolimits_0(A, {_{\nu^{-1}}A_1})$ is the homology of the complex
$${_{\nu^{-1}}A^2_1} \xrightarrow{d^{\nu^{-1}}_1} {_{\nu^{-1}}A_1} \xrightarrow{d^{\nu^{-1}}_0} {_{\nu^{-1}}A_1},$$
with maps given by
\begin{eqnarray*}
d^{\nu^{-1}}_1(y^ux^ve^2_1) &=& (q^{u+b-1}-1)y^ux^{v+1} \\
d^{\nu^{-1}}_1(y^ux^ve^2_2) &=& (q^{1-a}-q^{v})y^{u+1}x^v \\
d^{\nu^{-1}}_0(y^ux^v) &=& \left \{
\begin{array}{ll}
0 & \text{if $u \ge 1$ or $v \ge 1$} \\
\frac{q^{b-ba}-1}{q^{1-a}-1} \frac{q^{ba-b}-1}{q^{b-1}-1} y^{b-1}x^{a-1} & \text{if $u=v=0$}.
\end{array}
\right.
\end{eqnarray*}
Here we see that
$$y^ux^v \in \Im d^{\nu^{-1}}_1 \Leftrightarrow (u,v) \neq (0,0),$$
hence $\dim \Im d^{\nu^{-1}}_1 = ab-1$. Since $\dim \operatorname{Ker}\nolimits d^{\nu^{-1}}_0 = ab-1$, it follows that $\operatorname{\widehat{HH}}\nolimits^{-1}(A,A)=0$.
\end{proof}
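The two image computations in this proof again reduce to tracking which coefficients vanish: since $q$ is not a root of unity, $q^m-1=0$ forces $m=0$, and $q^{m_1}-q^{m_2}=0$ forces $m_1=m_2$. The sketch below reruns the counts for a sample choice of $a, b$; it is a consistency check only, and the helper names are ours.

```python
def im_dim(a, b, coef1_nonzero, coef2_nonzero):
    """Dimension of the image of a differential sending
    y^u x^v e_1 -> c1(u) y^u x^{v+1}  and  y^u x^v e_2 -> c2(v) y^{u+1} x^v
    in A = k<x,y>/(x^a, y^b); each target is a single basis monomial, so the
    image is spanned by the monomials hit with nonzero coefficient."""
    hit = set()
    for u in range(b):
        for v in range(a):
            if coef1_nonzero(u) and v + 1 < a:
                hit.add((u, v + 1))
            if coef2_nonzero(v) and u + 1 < b:
                hit.add((u + 1, v))
    return len(hit)

a, b = 4, 5
# d^nu_1: coefficients q^{u+1-b}-1 and q^{a-1}-q^v vanish exactly when
# u = b-1, respectively v = a-1
im_nu = im_dim(a, b, lambda u: u != b - 1, lambda v: v != a - 1)
assert im_nu == a*b - 2          # complement {(0,0), (b-1,a-1)}
# d^{nu^{-1}}_1: coefficients q^{u+b-1}-1 and q^{1-a}-q^v never vanish
im_nu_inv = im_dim(a, b, lambda u: True, lambda v: True)
assert im_nu_inv == a*b - 1      # complement {(0,0)}
# both kernels of d_0 have dimension ab - 1, hence:
assert (a*b - 1) - im_nu == 1        # dim HH^0(A,A)
assert (a*b - 1) - im_nu_inv == 0    # dim HH^{-1}(A,A)
```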
% Source metadata: arXiv:1109.4019, "Tate-Hochschild homology and cohomology of Frobenius algebras" (math.KT; math.RA), 2011.
% https://arxiv.org/abs/1808.00885
\title{Kodaira dimensions of almost complex manifolds I}
\begin{abstract}
This is the first of a series of papers, in which we study the plurigenera, the Kodaira dimension and, more generally, the Iitaka dimension on compact almost complex manifolds. Based on the Hodge theory on almost complex manifolds, we introduce the plurigenera, Kodaira dimension and Iitaka dimension on compact almost complex manifolds. We show that the plurigenera and the Kodaira dimension, as well as the irregularity, are birational invariants in the almost complex category, at least in dimension $4$, where a birational morphism is defined to be a degree one pseudoholomorphic map. However, they are no longer deformation invariants, even in dimension $4$ or under a tameness assumption. On the way to establishing the birational invariance, we prove the Hartogs extension theorem in the almost complex setting by the foliation-by-disks technique. Some interesting phenomena of these invariants are shown through examples. In particular, we construct non-integrable compact almost complex manifolds with large Kodaira dimensions. Hodge numbers and plurigenera are computed for the standard almost complex structure on the six sphere $S^6$, which are different from the data of a hypothetical complex structure.
\end{abstract}
\section{Introduction}
The Iitaka dimension of a holomorphic line bundle $L$ over a compact complex manifold is a numerical invariant measuring the size of the space of holomorphic sections. It can be equivalently defined as the growth rate of the dimension of the space $H^0(X, L^{\otimes d})$, or as the maximal dimension of the images of the rational maps to projective space determined by powers of $L$, or as one less than the dimension of the section ring of $L$. The Iitaka dimension of the canonical bundle $\mathcal K_X$ of a compact complex manifold $X$ is called its Kodaira dimension, and $\dim H^0(X, \mathcal K_X^{\otimes d})$ is called the $d$-th plurigenus.
The Kodaira dimension, plurigenera and the canonical section ring are birational invariants. They play important roles in the study of complex manifolds. In particular, the Kodaira dimension is used to give a rough birational classification of complex manifolds. It is known that the Kodaira dimension is a deformation invariant for compact complex surfaces, although this is no longer true for complex (non-K\"ahler) $3$-folds \cite{Nak}. Siu \cite{S1, S2} shows that plurigenera, and thus also the Kodaira dimension, are invariant with respect to projective deformations of algebraic varieties. Birkar--Cascini--Hacon--McKernan \cite{BCHM} show that the canonical ring of a smooth projective variety is finitely generated, which implies that there is a unique canonical model for every variety of general type.
The theory of complex manifolds lies in the more general framework of almost complex manifolds. In \cite{Z2}, intersection theory of almost complex manifolds is introduced. As a consequence, pseudoholomorphic degree one maps are considered as birational morphisms in the almost complex category. An important step to develop birational geometry for almost complex manifolds is to introduce and study birational invariants.
In this series of papers, we will generalize the notions of Kodaira dimension, plurigenera, as well as the space of holomorphic $p$-forms, to compact almost complex manifolds. The crucial initial step is to find generalizations of holomorphic line bundles and their holomorphic sections. We have two equivalent versions.
The first is from differential geometry. A {\it pseudoholomorphic structure} on a complex vector bundle $E$ over an almost complex manifold $X$ is a differential operator $\bar{\partial}_E$ acting on smooth sections which satisfies the Leibniz rule (Definition \ref{ps}). In particular, the canonical bundle has a natural pseudoholomorphic structure inherited from the almost complex structure on $X$. By the Koszul--Malgrange theorem, on a complex manifold the condition $\bar{\partial}_E^2=0$ is equivalent to a holomorphic structure on the complex bundle $E$. Our generalized version of holomorphic sections consists of the smooth sections in the kernel of $\bar{\partial}_E$.
To show that these generalized holomorphic sections form a finite dimensional space, we apply the method of Hodge theory. Hodge theory is well developed on compact complex manifolds and on general compact Riemannian manifolds. In order to make sense of the counting of pseudoholomorphic sections and to define plurigenera for almost complex manifolds, we develop the Hodge theory for Hermitian bundles over compact almost complex manifolds in detail and show the following theorem.
\begin{thm}
Let $E$ be a complex vector bundle with a pseudoholomorphic structure over a compact almost complex manifold $(X, J)$. Then $H^0(X, E)$ is finite dimensional. In particular, $H^0(X, \mathcal K^{\otimes m})$ is finite dimensional and an invariant of $J$.
\end{thm}
This result gives us a good basis for counting pseudoholomorphic sections. In fact, we are able to define Dolbeault harmonic forms $\mathcal{H}_{\bar{\partial}_{E}}^{(p,q)}(X, E)$ which give Dolbeault type cohomology groups when $q=0$. The vector space $H^0(X, E)$ is simply the case of $p=q=0$. When $E$ is the trivial bundle, the space of harmonic forms of type $(p,q)$ has been defined in \cite{Hir}. Problem 20 in Hirzebruch's list \cite{Hir}, raised by Kodaira and Spencer, asks whether their dimensions are independent of the choice of the Hermitian structure.\footnote{This problem was recently answered negatively in \cite{HZ}.} Our discussion gives an affirmative answer to this problem when $q=0$ or, by Serre duality (Proposition \ref{Serre}), $q=\dim_{\C} X$.
The second description of pseudoholomorphic sections is more geometric. There are special almost complex structures on the total space of the complex vector bundle $E$, called {\it bundle almost complex structures}, introduced by De Bartolomeis-Tian in \cite{DeT}. They show that there is a bijection between bundle almost complex structures and the pseudoholomorphic structures on $E$ (see also Proposition \ref{bacs=ps}). We further observe, in Corollary \ref{holo}, that a section in the kernel of a pseudoholomorphic structure $\bar{\partial}_E$ is exactly a pseudoholomorphic section with respect to the bundle almost complex structure $\mathcal J$ corresponding to $\bar{\partial}_E$.
With these two equivalent descriptions understood, we are able to give our definition of the $(E, \mathcal J)$-genus and of the Iitaka dimension, as well as their special cases, the plurigenera and the Kodaira dimension, at the end of Section \ref{defIita}.
\begin{defn}\label{Iv1}
Let $E$ be a complex vector bundle with bundle almost complex structure $\mathcal J$ over an almost complex manifold $(X,J)$. The $(E, \mathcal J)$-genus is defined as $$P_{E, \mathcal J}:=\dim H^0(X, (E, \mathcal J)),$$ where $H^0(X, (E, \mathcal J))$ denotes the space of $(J,\mathcal J)$ pseudoholomorphic sections. The $m^{th}$ plurigenus of $(X,J)$ is defined to be $P_m(X, J)=\dim H^0(X, \mathcal K^{\otimes m})$.
Let $L$ be a complex line bundle with bundle almost complex structure $\mathcal J$ over $(X, J)$. The Iitaka dimension $\kappa^J(X, (L, \mathcal J))$ is defined as
$$\kappa^J(X, (L, \mathcal J))=\begin{cases}\begin{array}{cl} -\infty, &\ \text{if} \ P_{L^{\otimes m},\mathcal J}=0\ \text{for any} \ m\geq 0\\
\limsup_{m\rightarrow \infty} \dfrac{\log P_{L^{\otimes m},\mathcal J}}{\log m}, &\ \text{otherwise.}
\end{array}\end{cases}$$
The Kodaira dimension $\kappa^J(X)$ is defined by choosing $L=\mathcal K$ and $\mathcal J$ to be the bundle almost complex structure induced by $\bar{\partial}$.
\end{defn}
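To illustrate how the limsup in the definition reads off polynomial growth, consider a hypothetical plurigenera sequence $P_{L^{\otimes m},\mathcal J}=\binom{m+\kappa}{\kappa}$, modeled on $h^0(\mathbb{CP}^\kappa, \mathcal O(m))$; the quotient $\log P_{L^{\otimes m},\mathcal J}/\log m$ then tends to $\kappa$. The toy computation below only illustrates this growth-rate formula and assumes nothing about the almost complex setting.

```python
import math

# Hypothetical plurigenera P_m = C(m + kappa, kappa), which grows like
# m^kappa / kappa!; the limsup of log P_m / log m recovers kappa.
m = 10**100
for kappa in range(4):
    P_m = math.comb(m + kappa, kappa)
    ratio = math.log(P_m) / math.log(m)   # math.log handles big integers
    assert abs(ratio - kappa) < 0.01
```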
The advantage of having the second description is that the intersection theory of almost complex submanifolds developed by the second author in \cite{Z2} can come into play. The theory works particularly well when the base manifold $(X, J)$ is of dimension $4$. In this situation, the zero locus of a pseudoholomorphic section is a $J$-holomorphic curve in the first Chern class of the complex bundle $E$. With this understood, the rich theory of pseudoholomorphic curves is at our disposal.
As we mentioned above, the plurigenera and thus the Kodaira dimension are classical birational invariants. We certainly expect the same to hold for our $P_m(X)$ and $\kappa^J(X)$.
As suggested by Theorem 1.5 in \cite{Z2}, a degree one pseudoholomorphic map is the right notion of birational morphism in the almost complex setting, at least in dimension $4$. The next result, as a combination of Theorems \ref{Kodbir} and \ref{hodgebir}, confirms the birational invariance of plurigenera, Kodaira dimension and the irregularity $h^{1,0}(X):=\dim H^{0}(X, \Omega^1(\mathcal O))$.
\begin{thm}\label{birintro}
Let $u: (X, J_X)\rightarrow (Y, J_Y)$ be a degree one pseudoholomorphic map between closed almost complex $4$-manifolds. Then $P_m(X, J_X)=P_m(Y, J_Y)$ and thus $\kappa^{J_X}(X)= \kappa^{J_Y}(Y)$. Moreover, the Hodge number $h^{1,0}(X)=h^{1,0}(Y)$.
\end{thm}
The most essential ingredient is to establish the desired Hartogs extension theorem in the almost complex setting, which is certainly of independent interest. We establish it only in dimension $4$, by the foliation-by-disks technique (see {\it e.g.} \cite{T96, Z2}). It is Theorem \ref{hartogs}, which we reproduce in the following.
\begin{thm}\label{hartogsintro}
Let $(E, \mathcal J)$ be a complex vector bundle with a bundle almost complex structure over the almost complex $4$-manifold $(X, J)$, and $p\in X$. Then any section in $H^0(X\setminus p, (E, \mathcal J)|_{X\setminus p})$ extends to a section in $H^0(X, (E, \mathcal J))$.
\end{thm}
The next step is to study the property of plurigenera under deformation of almost complex structures. For projective manifolds, the plurigenera are invariant under projective deformation. On complex surfaces, the plurigenera (hence the Kodaira dimension) are even diffeomorphism invariants \cite{FM, FQ}, although it is no longer true when the complex dimension is greater than $2$ (see \cite{R}). Moreover, the irregularity of a complex surface is a homotopy invariant.
By virtue of our Hodge theoretic description of plurigenera, they are upper semi-continuous functions under smooth deformation. However, it is easy to see that the dimensions can jump. When we deform an integrable almost complex structure of a surface of general type, a generic perturbed almost complex structure does not admit any pseudoholomorphic curve, while, as mentioned above, the zero locus of a non-trivial pseudoholomorphic section of a pluricanonical bundle is a pseudoholomorphic curve in the class $mK$. This argument itself does not exclude the possibility of invariance when the canonical class is torsion. In Section 6, we construct some explicit deformations on the Kodaira-Thurston surface and the $4$-torus, and show that the plurigenera, the Kodaira dimension and the irregularity are not constant under smooth deformation even when the canonical class is trivial.
Also in Section 6, we study the relation between the non-integrability of almost complex structures and the Kodaira dimension. Namely, we investigate which values the Kodaira dimension can take when the almost complex structure is non-integrable. By applying the Riemann-Roch formula and the almost complex K\"unneth formula, we prove the following result (Theorem \ref{niKod}).
\begin{thm}
For every $k\in\{-\infty, 0, 1, \cdots, n-1\}$, $n\geq 2$, there are examples of compact $2n$-dimensional non-integrable almost complex manifolds with $\kappa^J=k$.
\end{thm}
We point out that the above range of Kodaira dimension for non-integrable almost complex structure is optimal. More precisely, we will show that if the Kodaira dimension equals the complex dimension of the manifold, then the almost complex structure must be integrable. This will appear in the second paper.
In the last section of the paper, we compute the Hodge numbers, the plurigenera and the Kodaira dimension of the six sphere $S^6$. It is classically known that there exist almost complex structures on $S^6$ \cite{Eh}. A standard construction applies the cross product on $\mathbb R^7$ to the tangent spaces of $S^6$. Denote this standard almost complex structure by $\mathsf J$. In Theorem \ref{sphere}, we prove that
\begin{thm}
For the standard almost complex structure $\mathsf J$ on $S^6$, the following hold: (1) $h^{1,0}=h^{2,0}=h^{2,3}=h^{1,3}=0$; (2) $P_m(S^6, \mathsf J)=1$ for any $m\geq 1$ and $\kappa^{\mathsf J}=0$.
\end{thm}
This calculation is somewhat surprising since it is generally believed that the Kodaira dimension of a hypothetical complex structure is $-\infty$. Our plurigenera distinguish $\mathsf J$ from hypothetical complex structures on $S^6$, since for the latter $P_1=h^{3, 0}=0$.
In paper II, we will interpret the Kodaira dimension through the pluricanonical map and discuss the significant geometric consequences. We will also investigate its comparison with the symplectic Kodaira dimension \cite{L} on symplectic 4-manifolds. Some vanishing theorems on positively-curved almost Hermitian manifolds will also be proved.\\
\textbf{Acknowledgements} The authors are kindly informed by Tian-Jun Li that he has a joint project with Gabriel La Nave on Kodaira dimension for almost K\"ahler manifolds with a totally different strategy. The first author would also like to thank Professors Bo Guan, Jiaping Wang and Fangyang Zheng for their encouragement and thank Xiaolan Nie for her support.
\section{Notations}
We start by fixing our notations and explain the natural pseudoholomorphic structure on the pluricanonical bundles.
Let $(X,J)$ be a $2n$-dimensional almost complex manifold. The complexification of the cotangent bundle of $X$ decomposes as $T^*X\otimes \mathbb C=(T^*X)^{1, 0}\oplus (T^*X)^{0,1}$ where $(T^*X)^{1,0}$ annihilates the subspace in $TX\otimes \mathbb C$ where $J$ acts as $-i$. A $(1,0)$-form is a smooth section of $(T^*X)^{1, 0}$; similarly for a $(0, 1)$-form. The splitting of the cotangent bundle induces a splitting of all exterior powers. Write $\Lambda^{p, q}X=\Lambda^p((T^*X)^{1, 0})\otimes \Lambda^q((T^*X)^{0, 1})$. Then for any $r\ge 0$, we have the decomposition $$\Lambda^rT^*X\otimes \mathbb C=\oplus_{p+q=r}\Lambda^{p, q}X.$$ Let $\pi^{p,q}$ be the projection to $\Lambda^{p, q}X$. A $(p, q)$-form is a smooth section of the bundle $\Lambda^{p, q}X$. The space of all such sections is denoted $\Omega^{p, q}(X)=\Gamma(X, \Lambda^{p,q})$.
The operators $\bar{\partial}$ and $\partial$ are defined by
$$\bar{\partial}=\pi^{p,q+1}\circ d: \Omega^{p,q}(X)\rightarrow \Omega^{p,q+1}(X)$$
$$\partial=\pi^{p+1,q}\circ d: \Omega^{p,q}(X)\rightarrow \Omega^{p+1,q}(X),$$
where $d$ is the exterior differential. Both $\bar{\partial}$ and $\partial$ satisfy the Leibniz rule, but in general $\bar{\partial}^2$ and $\partial^2$ may not be zero. They contain important information about the almost complex structure. Applying $\bar{\partial}$ to $\Lambda^{p, 0}$, and in particular to $\mathcal K=\Lambda^{n,0}$, we have$$\bar{\partial}: \Lambda^{p,0}\rightarrow \Lambda^{p,1}\cong (T^*X)^{0,1} \otimes \Lambda^{p,0},$$
$$\bar{\partial}: \mathcal K\rightarrow \Lambda^{n,1}\cong (T^*X)^{0,1} \otimes \mathcal K.$$ Here we write $\mathcal K$ (or any vector bundle) as shorthand for its space of smooth sections. We extend $\bar{\partial}$ to an operator $\bar{\partial}_m: \mathcal K^{\otimes m}\rightarrow (T^*X)^{0,1}\otimes \mathcal K^{\otimes m}$ for $ m\geq 2$ inductively by the product rule
$$\bar{\partial}_m(s_1\otimes s_2)=\bar{\partial}s_1\otimes s_2+s_1\otimes \bar{\partial}_{m-1} s_2.$$
It satisfies the Leibniz rule $\bar{\partial}_m (fs)=\bar{\partial}f\otimes s+f\bar{\partial}_m s$ for any smooth function $f$ and any section $s$ of $\mathcal K^{\otimes m}$; the analogous rule holds for sections of $\Lambda^{p,0}$.
\begin{defn}
The space of holomorphic sections of $\mathcal K^{\otimes m}$ is defined to be
$$H^0(X, \mathcal K^{\otimes m})=\{s\in \Gamma(X, \mathcal K^{\otimes m}): \bar{\partial}_m s=0\}.$$
\end{defn}
\begin{remk} The space $H^0(X, \mathcal K^{\otimes m})$ is categorical in the almost complex category. Indeed, if $(X', J')$ is another almost complex manifold which admits a diffeomorphism $F: X'\rightarrow X$ to $(X,J)$ satisfying $dF\circ J'=J\circ dF$, we say that $(X', J')$ is \textit{pseudoholomorphically isomorphic} to $(X,J)$. Then $F^*(\Omega^{p,q}(X))=\Omega^{p,q}(X')$ and $F^*\circ \bar{\partial}_J=\bar{\partial}_{J'}\circ F^*$. So $s\in \Gamma(X, \mathcal K_J)$ satisfies $\bar{\partial}s=0$ if and only if $\bar{\partial}_{J'} F^* s=0$. A similar result holds on $\mathcal K_J^{\otimes m}$, where $F^*$ and $\bar{\partial}$ are replaced by an isomorphism $F^*_m$ and the operator $\bar{\partial}_m$. Therefore, $F$ induces an isomorphism $F^*_m: H^0(X, \mathcal K_J^{\otimes m})\rightarrow H^0(X',\mathcal K_{J'}^{\otimes m})$ for any $m\geq 1$.
\end{remk}
\section{Hodge theory on almost complex manifolds}
In this section, we will define Dolbeault cohomology groups for a complex bundle associated with a pseudoholomorphic structure. We show that they are finite dimensional when $X$ is compact in Theorem \ref{finite}. As a consequence, $H^0(X,\mathcal K^{\otimes m})$ is finite dimensional. We will follow the method of Hodge theory to define a formal adjoint operator of $\bar{\partial}_m$ and apply the elliptic theory.
Hodge theory is well developed on compact complex manifolds (see \cite{GH}, \cite{Huy}, \cite{V}, \cite{Z}). The Hodge theory on compact almost complex manifolds was initiated in \cite{Hir} for $(p,q)$ forms. To derive our results, we will set up the Hodge theory for $(p,q)$ forms with value in a Hermitian pseudoholomorphic bundle $E$. We then apply it to complex line bundles, in particular the bundle $\mathcal K^{\otimes m}$ and show that $H^0(X, \mathcal K^{\otimes m})$ is finite dimensional. One of our observations is that for a holomorphic vector bundle $E$ over a complex manifold, the $\bar{\partial}$ operator on the holomorphic dual $E^*$ coincides with the $(0,1)$ part of a Hermitian connection on $\bar{E}$ when identifying $E^*$ with $\bar{E}$ by a Hermitian metric.
Choose a Riemannian metric $g$ on $X$ compatible with $J$, namely $g(Ju, Jv)=g(u,v)$ for any $u, v\in TX$. Then $g$ induces a Hermitian structure $h$ on $TX\otimes \mathbb{C}$ by $h=g-i\omega$, where $\omega(u, v)=g(Ju, v)$. Also, $h$ can be extended to Hermitian structures on the bundles $\Lambda^{p,q}X$ for any $(p, q)$, which we still denote by $h$. Explicitly, assume that $\{e_i\}$ is a local unitary frame in $TX\otimes \mathbb{C}$ and $\{\phi_i\}$ is the unitary coframe so that $$h=\sum^n_{i=1} \phi_i\otimes \bar{\phi}_i.$$ If $\alpha=\{i_1,i_2,\cdots, i_p\}$ and $\beta=\{j_1,j_2,\cdots, j_q\}$ are any ordered multi-indices of lengths $p$ and $q$, denote $\phi_{\alpha}=\phi_{i_1}\wedge\phi_{i_2}\wedge\cdots\wedge \phi_{i_p}$, $\bar{\phi}_{\beta}=\bar{\phi}_{j_1}\wedge\bar{\phi}_{j_2}\wedge\cdots\wedge \bar{\phi}_{j_q}$. Then $h$ on $\Lambda^{p,q}$ is defined by letting $\{\phi_{\alpha}\wedge \bar{\phi}_{\beta}\}$ be orthogonal and $h(\phi_{\alpha}\wedge \bar{\phi}_{\beta}, \phi_{\alpha}\wedge \bar{\phi}_{\beta})=2^{p+q}$.
The $*$ operator on an almost Hermitian manifold is the unique $\mathbb{C}$-linear operator $$*: \Lambda^{p,q}\rightarrow \Lambda^{n-q,n-p}$$ satisfying \begin{align} h(\varphi_1, \varphi_2)dV=\varphi_1\wedge\overline{*\varphi_2}\end{align} where $dV$ is the volume form of $g$ and $\varphi_1, \varphi_2\in \Lambda^{p,q}.$
Using the unitary coframe $\{\phi_i\}$, we can write out the $*$ operator directly. Let $\hat{\alpha}, \hat{\beta}$ be the ordered sets of complementary multi-indices of $\alpha, \beta$ in $\{1,2,\cdots,n\}$. As $\omega=i\sum_{i=1}^n \phi_i\wedge \bar{\phi}_i$ and $dV=\dfrac{\omega^n}{n!}$, if we define on basis elements \begin{align} *(\phi_{\alpha}\wedge\bar{\phi}_{\beta})=2^{(p+q-n)}(-i)^n\epsilon_{\alpha\beta\hat{\beta}\hat{\alpha}} \phi_{\hat{\beta}}\wedge\bar{\phi}_{\hat{\alpha}} \end{align} where $\epsilon_{\alpha\beta\hat{\beta}\hat{\alpha}}$ is the sign of the permutation $$(i_1,\cdots, i_p, j_1,\cdots, j_q, \hat{j}_1,\cdots,\hat{j}_{n-q},\hat{i}_1,\cdots,\hat{i}_{n-p})\rightarrow (1,1',2,2'\cdots,n,n'),$$
then the operator of (2) satisfies (1). By uniqueness it gives the $*$ operator.
Define an inner product on $\Omega^{p,q}(X)$ by $\langle \varphi_1, \varphi_2\rangle=\int_X h(\varphi_1,\varphi_2)dV$ for $\varphi_1,\varphi_2 \in \Omega^{p,q}(X)$. Let $$\bar{\partial}^*=-*\partial*.$$ Then the following holds $$\langle \bar{\partial}\varphi_1, \varphi_2\rangle=\langle \varphi_1, \bar{\partial}^*\varphi_2\rangle.$$ The proof is the same as in the integrable case, because the Leibniz rule holds and $\bar{\partial}=d$ acting on $A^{(n,n-1)}$.
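The identity $\bar{\partial}=d$ on $A^{(n,n-1)}$ follows from counting types. On an almost complex manifold the exterior differential decomposes as
$$d=\mu+\partial+\bar{\partial}+\bar{\mu},$$
where the four components map $\Lambda^{p,q}$ to $\Lambda^{p+2,q-1}$, $\Lambda^{p+1,q}$, $\Lambda^{p,q+1}$ and $\Lambda^{p-1,q+2}$ respectively. For $(p,q)=(n,n-1)$ the first, second and fourth target spaces vanish for degree reasons, so only the $\bar{\partial}$ component survives.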
Indeed, by (1), we have \begin{align*} \langle \bar{\partial}\varphi_1, \varphi_2\rangle&= \int_X h(\bar{\partial}\varphi_1,\varphi_2)dV=\int_X \bar{\partial}\varphi_1\wedge \overline{*\varphi_2}\\
&=\int_X \bar{\partial}(\varphi_1\wedge \overline{*\varphi_2})-(-1)^{p+q-1}\int_X \varphi_1\wedge \overline{\partial*\varphi_2}\\
&=\int_X \varphi_1\wedge \overline{*(\bar{\partial}^*\varphi_2)}\\
&=\int_X h(\varphi_1,\bar{\partial}^*\varphi_2)dV=\langle \varphi_1, \bar{\partial}^*\varphi_2\rangle,
\end{align*}
where we use Stokes' theorem in the third line.
The above discussion produces the formal dual operator of $\bar{\partial}$ on $\Omega^{p,q}(X)$. The next important step is to generalize this operator to any Hermitian bundle $E$ with a pseudoholomorphic structure.
\begin{defn} Let $(E,h_E)$ be a Hermitian vector bundle over $(X,J)$. A connection $\nabla: \Gamma(X,E)\rightarrow \Gamma(X, (T^*X\otimes \C)\otimes E)$ is called a Hermitian connection if \begin{align} \label{def} d (h_E(s_1, s_2))=h_E(\nabla s_1, s_2)+h_E(s_1, \nabla s_2),\end{align}
for any two sections $s_1, s_2$ of $E$. \end{defn}
\begin{defn}\label{ps}
A pseudoholomorphic structure on $E$ is given by a differential operator $\bar{\partial}_E: \Gamma(X, E)\rightarrow \Gamma(X, (T^*X)^{0,1}\otimes E)$ which satisfies the Leibniz rule $$\bar{\partial}_E (fs)=\bar{\partial}f\otimes s+f\bar{\partial}_Es$$ where $f$ is a smooth function and $s$ is a section of $E$.
\end{defn}
If a pseudoholomorphic structure $\bar{\partial}_E$ satisfies $\bar{\partial}_E^2=0$ on a complex manifold, then it is equivalent to a holomorphic structure on the complex bundle $E$ by the Koszul-Malgrange theorem. In particular, any pseudoholomorphic structure on a complex vector bundle over a Riemann surface $S$ is holomorphic, since $(T^*S)^{0,2}=0$.
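For instance, on the trivial line bundle $E=X\times \C$ every pseudoholomorphic structure has the form $\bar{\partial}+\alpha$ for some $\alpha\in \Omega^{0,1}(X)$: writing $\bar{\partial}_E s_0=\alpha\otimes s_0$ for the constant section $s_0=1$, the Leibniz rule gives, for any section $s=fs_0$,
$$\bar{\partial}_E(fs_0)=(\bar{\partial}f+f\alpha)\otimes s_0.$$
Conversely, any operator of this form clearly satisfies the Leibniz rule.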
Denote by $\nabla^{(1,0)}$ and $\nabla^{(0,1)}$ the $(1,0)$ and $(0,1)$ components of $\nabla$. We have
\begin{lem}\label{cancon}
For any Hermitian bundle $(E, h_E)$ with a pseudoholomorphic structure $\bar{\partial}_E$, there is a unique Hermitian connection $\nabla$ so that $\nabla^{(0,1)}=\bar{\partial}_E$.
\end{lem}
The lemma is well known when $J$ is integrable and should be known to experts for general $J$ (see \cite{DeT}). We include a proof for the convenience of the reader.
\begin{proof} We first prove the existence. Assume that $\{U_\alpha\}$ is an open chart covering of $X$ with partition of unity $\{\varphi_\alpha\}$ such that $E|_{U_\alpha}$ is trivial. For any $s\in \Gamma(X,E)$ and any connection $\nabla$, $\nabla s=\nabla\sum_{\alpha} \varphi_\alpha s=\sum_{\alpha} \nabla(\varphi_\alpha s)$. So we only need to define $\nabla$ locally on $U_\alpha$.
Let $\{s_{i}, 1\leq i\leq N\}$ be a unitary frame of $E$ on $U_\alpha$. Using summation notation, denote $\bar{\partial}_Es_i=\theta_i^js_j$, where $\theta_i^j\in T^{0,1}$. Let $\{s'_{i}, 1\leq i\leq N\}$ be another unitary frame of $E$ with $s'_i=f_i^j s_j$. As $\sum_jf_i^j\bar{f}_k^j=\delta_{ik}$, $(f^{-1})_i^j=\bar{f}^i_j$. Denote $\bar{\partial}_Es'_i=(\theta')_i^js_j'$. We have
$$(\theta')_i^j=\sum_k(\bar{\partial}f_i^k+f_i^l\theta_l^k)\bar{f}^k_j.$$
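This transition formula is a direct consequence of the Leibniz rule. Since $s_k=\bar{f}^k_js'_j$, we compute
\begin{align*}
\bar{\partial}_Es'_i=\bar{\partial}_E(f_i^ks_k)=\bar{\partial}f_i^k\otimes s_k+f_i^l\theta_l^ks_k=\sum_k(\bar{\partial}f_i^k+f_i^l\theta_l^k)\bar{f}^k_js'_j.
\end{align*}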
To define $\nabla$, let $\omega^j_i=\theta_i^j-\overline{\theta_j^i}$ and $\nabla s_{i}=\omega^j_i s_j$. Similarly, for $\{s'_{i}\}$, let $(\omega')^j_i=(\theta')_i^j-\overline{(\theta')_j^i}$ and $\nabla s'_{i}=(\omega')^j_i s'_j$. If $\{\omega^j_i\}$ and $\{(\omega')^j_i\}$ satisfy the transition equation
$(\omega')_i^j=\sum_k(df^k_i+f_i^l\omega_l^k)\bar{f}_j^k,$ they give a well-defined connection. This follows from
\begin{align*}
(\omega')_i^j&=(\theta')_i^j-\overline{(\theta')_j^i}\\
&=\sum_k((\bar{\partial}f_i^k+f_i^l\theta_l^k)\bar{f}^k_j-(\partial\bar{f}_j^k+\bar{f}_j^l\overline{\theta_l^k})f^k_i)\\
&=\sum_k(df^k_i+f_i^l(\theta_l^k-\overline{\theta_k^l}))\bar{f}^k_j=\sum_k(df^k_i+f_i^l\omega_l^k)\bar{f}_j^k
\end{align*}
where we use $\partial(\sum_kf_i^k\bar{f}_j^k)=0$ for the third equality. So $\nabla$ is independent of the frames. Moreover, since $\theta_i^j$ is of type $(0,1)$ and $\overline{\theta_j^i}$ of type $(1,0)$, we have $(\omega_i^j)^{0,1}=\theta_i^j$, that is, $\nabla^{(0,1)}=\bar{\partial}_E$. From the skew-Hermitian symmetry $\omega_i^j=-\overline{\omega_j^i}$ of the connection matrix, we know that $\nabla$ is a Hermitian connection compatible with $h_E$.
The uniqueness follows easily if we restrict $\nabla$ to the open chart above.
\end{proof}
\begin{remk}\label{chern}
Recall that the almost Chern connection \cite{EL} (see also \cite{Gau2}) associated to $g$ is the unique connection $\nabla^c$ on the tangent bundle such that $\nabla^c J=\nabla^c g=0$ and that the torsion $\Theta$ has vanishing $(1,1)$ part. The $\bar{\partial}$ operator on $(T^*X)^{1,0}$ induces a natural pseudoholomorphic structure. It turns out that the unique Hermitian connection on $(T^*X)^{1,0}$ induced by $\bar{\partial}$ as in Lemma \ref{cancon} equals $\nabla^c$. To see this, assume that $\nabla^c e_i=\omega_i^j e_j$ for a unitary frame $\{e_i\}$. By the first structure equation, the $i^{th}$ component of $\Theta$ is $\Theta^i=d \phi_i+\omega_j^i \wedge \phi_j$, where $\{\phi_i\}$ is the coframe. Also $\nabla^c$ acts on $(T^*X)^{1,0}$ by $\nabla^c \phi_i=-\omega^i_j\phi_j$. Then $\Theta^i$ has vanishing $(1,1)$ part if and only if $\bar{\partial}\phi_i+(\omega_j^i)^{0,1} \wedge \phi_j=0$, which is equivalent to $(\nabla^c)^{(0,1)}=\bar{\partial}$.
Now suppose $E$ is the pluricanonical bundle $\mathcal K_J^{\otimes m}$ with the induced pseudoholomorphic structure $\bar{\partial}_m$. By the above discussion, the unique Hermitian connection on $\mathcal K_J^{\otimes m}$ induced by the Chern connection on $(T^*X)^{1,0}$ is just the unique Hermitian connection determined by $\bar{\partial}_m$ in Lemma \ref{cancon}.
\end{remk}
Let $(E,h_E)$ be a Hermitian bundle with a pseudoholomorphic structure $\bar{\partial}_E$. We can define a unique dual pseudoholomorphic structure on $E^*$: $$\bar{\partial}_{E^*}: \Gamma(X, E^*)\rightarrow \Gamma(X, (T^*X)^{0,1}\otimes E^*)$$ as follows. For any section $s^*\in \Gamma(X, E^*)$ and any section $s'\in \Gamma(X, E)$, let
\begin{align}\label{L} (\bar{\partial}_{E^*}(s^*))(s')=\bar{\partial} (s^*(s'))-s^*(\bar{\partial}_E(s')). \end{align}
It is easy to verify that $\bar{\partial}_{E^*}$ satisfies the Leibniz rule, giving a pseudoholomorphic structure.
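Indeed, checking the Leibniz rule is a one-line computation: for a smooth function $f$ and sections $s^*\in \Gamma(X, E^*)$, $s'\in \Gamma(X, E)$,
\begin{align*}
(\bar{\partial}_{E^*}(fs^*))(s')&=\bar{\partial}(f\,s^*(s'))-f\,s^*(\bar{\partial}_E(s'))\\
&=\bar{\partial}f\,s^*(s')+f\left(\bar{\partial}(s^*(s'))-s^*(\bar{\partial}_E(s'))\right)=(\bar{\partial}f\otimes s^*+f\bar{\partial}_{E^*}s^*)(s').
\end{align*}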
With the Hermitian structure $h_E$, there exists a natural complex linear isomorphism $E^*\cong \bar{E}$, where $\bar{E}$ is the conjugate bundle of $E$. Therefore, $\bar{\partial}_{E^*}$ induces a pseudoholomorphic structure on $\bar{E}$. On the other hand, by Lemma \ref{cancon}, there is a unique Hermitian connection $\nabla$ on $E$ determined by $\bar{\partial}_E$ and $h_E$. The conjugate of the $(1,0)$ part of $\nabla$ induces $$\overline{\nabla^{(1,0)}}: \bar{E}\rightarrow (T^*X)^{0,1}\otimes \bar{E}.$$
Define $\bar{\partial}_{\bar{E}}=\overline{\nabla^{(1,0)}}$. We have
\begin{lem}\label{lem3.5}
By identifying $\bar{E}$ with $E^*$, $\bar{\partial}_{\bar{E}}=\bar{\partial}_{E^*}.$
\end{lem}
\begin{proof} Let $\bar{s}\in \bar{E}$ and $s'\in E$. The inner product $h_E$ on $E$ induces a bilinear pairing between $E$ and $\bar{E}$ which we still denote by $h_E$. Then by (\ref{def}), \begin{align} \label{bar} \bar{\partial}h_E(s',\bar{s})=h_E(\nabla^{(0,1)}s',\bar{s})+h_E(s',\overline{\nabla^{(1,0)}}\bar{s})=h_E(\bar{\partial}_Es',\bar{s})+h_E(s',\bar{\partial}_{\bar{E}}\bar{s}).\end{align} Hence $\bar{\partial}_{\bar{E}}$ satisfies the defining relation (\ref{L}), and therefore $\bar{\partial}_{\bar{E}}=\bar{\partial}_{E^*}$.
\end{proof}
Next we can extend the $\bar{\partial}_E$ operator to $\Lambda^{p,q}\otimes E$ and $\bar{\partial}_{\bar{E}}$ to $\Lambda^{r,s}\otimes \bar{E}$ by \begin{align} \label{tensor} \bar{\partial}_E (\varphi\otimes u)=(\bar{\partial}\varphi)\otimes u+(-1)^{p+q}\varphi\wedge \bar{\partial}_E u\\ \notag \bar{\partial}_{\bar{E}} (\phi\otimes v)=(\bar{\partial}\phi)\otimes v+(-1)^{r+s}\phi\wedge \bar{\partial}_{\bar{E}} v.\end{align} Then there is a wedge pairing $$\wedge: (\Lambda^{p,q}\otimes E)\ \times\ (\Lambda^{r,s}\otimes
\bar{E})\rightarrow \Lambda^{p+r,q+s}$$ defined by $(\varphi_1\otimes u)\wedge (\varphi_2\otimes v)=h_E(u,v)\varphi_1\wedge\varphi_2$. Here, as before, $h_E(u,v)$ denotes the $\mathbb{C}$-bilinear pairing between $E$ and $\bar{E}$. We have the Leibniz rule for the wedge pairing
\begin{align}\label{Leibniz} \bar{\partial}_E(\varphi_1\otimes u)\wedge (\varphi_2\otimes v)=&(\bar{\partial}\varphi_1\otimes u+(-1)^{p+q}\varphi_1\wedge \bar{\partial}_E u)\wedge (\varphi_2\otimes v) \notag \\
=&h_E(u,v)\bar{\partial}\varphi_1\wedge\varphi_2+(-1)^{2(p+q)}h_E(\bar{\partial}_E u,v)\wedge\varphi_1\wedge\varphi_2 \notag\\
=&h_E(u,v)(\bar{\partial}(\varphi_1\wedge\varphi_2)-(-1)^{p+q}\varphi_1\wedge\bar{\partial}\varphi_2)\\
& +(\bar{\partial}h_E(u,v)-h_E(u, \bar{\partial}_{\bar{E}} v))\wedge\varphi_1\wedge\varphi_2 \notag \\
=&\bar{\partial}((\varphi_1\otimes u)\wedge (\varphi_2\otimes v))-(-1)^{p+q}(\varphi_1\otimes u)\wedge \bar{\partial}_{\bar{E}}(\varphi_2\otimes v) \notag
\end{align}
where we use (\ref{bar}) in the third line and $h_E(u, \bar{\partial}_{\bar{E}} v)\wedge\varphi_1\wedge\varphi_2 = (-1)^{p+q+r+s}(\varphi_1\otimes u)\wedge (\varphi_2\otimes \bar{\partial}_{\bar{E}}v)$ for the fourth line.
Now we are able to find the dual operator $\bar{\partial}^*_E$. Define $$*: \Lambda^{p,q}\otimes E\rightarrow \Lambda^{n-q,n-p}\otimes E$$ by $*(\varphi\otimes u)=(*\varphi)\otimes u$ for any $\varphi\in \Lambda^{p,q}, u\in E$. From the definition we have
$$h_E(\varphi_1\otimes u_1, \varphi_2\otimes u_2)dV=(\varphi_1\otimes u_1)\wedge \overline{*(\varphi_2\otimes u_2)}$$
where $\varphi_1\otimes u_1, \varphi_2\otimes u_2\in \Lambda^{p,q}\otimes E$, $h_E$ denotes the original inner product on $E$ and $dV$ is the volume form of $X$. The inner product on $\Gamma(X, \Lambda^{p,q}\otimes E)$ is given by $$
\langle \varphi_1\otimes u_1, \varphi_2\otimes u_2\rangle =\int_X h_E(\varphi_1\otimes u_1, \varphi_2\otimes u_2)dV.$$
Define $$\bar{\partial}_E^*=-*\nabla^{(1,0)}*.$$ We have
\begin{align*}
\langle \bar{\partial}_E(\phi\otimes w), \varphi\otimes u\rangle&=\int_X h_E(\bar{\partial}_E(\phi\otimes w), \varphi\otimes u)dV\\
&=\int_X\bar{\partial}_E(\phi\otimes w)\wedge \overline{*(\varphi\otimes u)}\\
&=\int_X \bar{\partial}((\phi\otimes w)\wedge \overline{*(\varphi\otimes u)})-(-1)^{p+q-1}(\phi\otimes w)\wedge \bar{\partial}_{\bar{E}}(\overline{*(\varphi\otimes u)})\\
&=-\int_X (\phi\otimes w)\wedge (-1)^{p+q-1}\overline{\nabla^{(1,0)}*(\varphi\otimes u)}\\
&=\langle \phi\otimes w, \bar{\partial}_E^*(\varphi\otimes u) \rangle
\end{align*}
for $\phi\otimes w\in \Lambda^{p,q-1}\otimes E, \varphi\otimes u\in \Lambda^{p,q}\otimes E$. We use \eqref{Leibniz} in the third line and Stokes' theorem in the fourth line. So $\bar{\partial}_E^*$ gives the formal adjoint of $\bar{\partial}_E$.
Define the Laplacian \begin{align}\label{lap} \Delta_{\bar{\partial}_E}=\bar{\partial}_E\bar{\partial}_E^*+\bar{\partial}_E^*\bar{\partial}_E.
\end{align}
As $\langle \Delta_{\bar{\partial}_E} s, s\rangle=\langle \bar{\partial}_E s, \bar{\partial}_E s\rangle +\langle \bar{\partial}_E^*s, \bar{\partial}_E^* s\rangle$, $\Delta_{\bar{\partial}_E} s=0$ if and only if $\bar{\partial}_E s=0$ and $\bar{\partial}_E^*s=0$. Denote the space of harmonic $E$-valued $(p,q)$ forms by
$$\mathcal{H}_{\bar{\partial}_E}^{(p,q)}(X,E)=\{s\in \Gamma(X, \Lambda^{p,q}\otimes E)| \Delta_{\bar{\partial}_E} s=0\}.$$
\begin{thm} \label{finite}
$\Delta_{\bar{\partial}_E}$ is an elliptic differential operator and $\mathcal{H}_{\bar{\partial}_E}^{(p,q)}(X,E)$ is finite dimensional.
\end{thm}
When $E$ is the trivial bundle, this was pointed out in \cite{Hir}.
\begin{proof}
We first prove the case when $E$ is a trivial line bundle with $\bar{\partial}_E=\bar{\partial}$. We shall show that $\Delta_{\bar{\partial}}$ is elliptic at any point $p\in X$. As ellipticity is a local property, it suffices to work in a coordinate chart $U$. Let $(J_c, h_c)$ be the constant almost complex structure and Hermitian structure on $U$ with $J_c(p)=J(p), h_c(p)=h(p)$. $J_c$ is isomorphic to the canonical complex structure of an open set in $\mathbb{C}^n$. Denote by $\bar{\partial}_c$ the $\bar{\partial}$ operator of $J_c$ and by $*_c$ the operator corresponding to $h_c$. We have
$$(\bar{\partial}\varphi-\bar{\partial}_c\varphi)(p)=0, \ *(\varphi)(p)=*_c(\varphi)(p)$$
for any $\varphi\in \Gamma(U, \Lambda^{p,q})$. So $\bar{\partial}$ and $\bar{\partial}_c$ (and likewise $\partial$ and $\partial_c$) differ by a differential operator whose coefficients vanish at $p$. As the principal symbol only depends on the highest order terms, any operator composed from $\bar{\partial}$, $\partial$ and $*$ has the same principal symbol at $p$ as the operator obtained by replacing them with $\bar{\partial}_c$, $\partial_c$ and $*_c$. In particular, $\Delta_{\bar{\partial}}$ has the same principal symbol as $\Delta_{\bar{\partial}_c}$ at $p$. The latter is the flat Laplacian on $\mathbb{C}^n$, which is elliptic. Therefore $\Delta_{\bar{\partial}}$ is elliptic at $p$ and hence everywhere.
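For completeness, here is a sketch of how ellipticity of the flat model is seen; the normalization below may differ from other conventions by nonzero constants. For a one-form $\alpha$, exterior multiplication $\varepsilon_{\alpha}$ and contraction $\iota_{\alpha^{\sharp}}$ satisfy the algebraic identity $\varepsilon_{\alpha}\iota_{\alpha^{\sharp}}+\iota_{\alpha^{\sharp}}\varepsilon_{\alpha}=|\alpha|^2\,\mathrm{id}$. Up to such constants, the principal symbols of $\bar{\partial}_c$ and $\bar{\partial}_c^*$ at a real covector $\xi$ are $\varepsilon_{\xi^{0,1}}$ and $-\iota_{(\xi^{0,1})^{\sharp}}$, so
$$\sigma_{\Delta_{\bar{\partial}_c}}(\xi)=-\left(\varepsilon_{\xi^{0,1}}\iota_{(\xi^{0,1})^{\sharp}}+\iota_{(\xi^{0,1})^{\sharp}}\varepsilon_{\xi^{0,1}}\right)=-|\xi^{0,1}|^2\,\mathrm{id},$$
which is invertible whenever $\xi\neq 0$ since $|\xi^{0,1}|^2=\frac{1}{2}|\xi|^2$ for real $\xi$.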
Now let $E$ be an arbitrary complex vector bundle. Let $\{\phi_i\}$ be a local unitary coframe of $(T^*X)^{1,0}$ and $\{u_{\nu}\}$ a unitary frame of $E$. Using Einstein summation notation, any section $s$ of $\Lambda^{p,q}\otimes E$ can be expressed as:
$$s=f^{\alpha\beta\nu}\phi_{\alpha}\wedge\bar{\phi}_{\beta}\otimes u_{\nu},$$
where $(\alpha, \beta)$ runs over all multi-indices of $(p,q)$. Then $$\bar{\partial}_E s= \bar{\partial}f^{\alpha\beta\nu}\wedge\phi_{\alpha}\wedge\bar{\phi}_{\beta}\otimes u_{\nu}+f^{\alpha\beta\nu}\bar{\partial}_E(\phi_{\alpha}\wedge\bar{\phi}_{\beta}\otimes u_{\nu}).$$
The principal symbol is calculated solely from the first term.
We use $\simeq$ to mean operators with the same principal symbol. Let $\bar{\partial}\otimes id$ represent the differential operator determined by $\bar{\partial}\otimes id(f^{\alpha\beta\nu}\phi_{\alpha}\wedge\bar{\phi}_{\beta}\otimes u_{\nu}) := \bar{\partial}(f^{\alpha\beta\nu}\phi_{\alpha}\wedge\bar{\phi}_{\beta})\otimes u_{\nu}$ and the Leibniz rule (it depends on the choice of $\{u_{\nu}\}$ and is only defined locally). Let $\Delta_{\bar{\partial}}\otimes id$ be the operator given by $\Delta_{\bar{\partial}}\otimes id (f^{\alpha\beta\nu}\phi_{\alpha}\wedge\bar{\phi}_{\beta}\otimes u_{\nu}):= \Delta_{\bar{\partial}}(f^{\alpha\beta\nu}\phi_{\alpha}\wedge\bar{\phi}_{\beta})\otimes u_{\nu}$. We have $$\bar{\partial}_E\simeq \bar{\partial}\otimes id \ \text{and}\ \Delta_{\bar{\partial}_E}\simeq \Delta_{\bar{\partial}}\otimes id.$$
We have shown above that $\Delta_{\bar{\partial}}$ is elliptic. Then $\Delta_{\bar{\partial}}\otimes id$ is elliptic. Therefore $\Delta_{\bar{\partial}_E}$ is elliptic.
The second part follows directly from the elliptic theory (e.g. \cite{LM}). Explicitly, there is a Green operator $G$ together with the projection operator $H: \Omega^{p,q}(X,E)\rightarrow \mathcal{H}_{\bar{\partial}_E}^{(p,q)}(X,E)$ such that $\Delta_{\bar{\partial}_E}\circ G+H=Id$. Also, $\Delta_{\bar{\partial}_E}$ and $G$ are both Fredholm operators and $\mathcal{H}_{\bar{\partial}_E}^{(p,q)}(X,E)$ is finite dimensional.
\end{proof}
Moreover, the following Serre duality also holds on compact almost complex manifolds.
\begin{prop}\label{Serre}
For any $0\leq p,q\leq n$, $\mathcal{H}_{\bar{\partial}_{E}}^{(p,q)}(X, E)\cong (\mathcal{H}_{\bar{\partial}_{E^*}}^{(n-p,n-q)}(X, E^*))^*$.
\end{prop}
\begin{proof} The argument is essentially the same as in the classical case (see for example Proposition 4.1.15 of \cite{Huy}), except for clarifying the operators on the bundle $E^*$. We only need to show that the natural pairing between $\mathcal{H}_{\bar{\partial}_{E}}^{(p,q)}(X, E)$ and $\mathcal{H}_{\bar{\partial}_{E^*}}^{(n-p,n-q)}(X, E^*)$ is nondegenerate. For any nonzero $s\in \mathcal{H}_{\bar{\partial}_{E}}^{(p,q)}(X, E)$, since $\bar{\partial}_E s=\bar{\partial}_E^* s=0$, by Lemma \ref{lem3.5}, we have $$\bar{\partial}_{E^*}(\overline{*s})=\bar{\partial}_{\bar{E}}(\overline{*s})=\overline{\nabla^{(1,0)}*s}=0,$$ and $$\bar{\partial}^*_{E^*}(\overline{*s})=-*\overline{\bar{\partial}_E}*(\overline{*s})=-(-1)^{(p+q)(n-p-q)}*\overline{\bar{\partial}_E s}=0.$$ So $\overline{*s}\in \mathcal{H}_{\bar{\partial}_{E^*}}^{(n-p,n-q)}(X, E^*)$. As $\int_X s\wedge \overline{*s}=\|s\|^2\neq 0$, the pairing is nondegenerate.
\end{proof}
Since $\Lambda^{p,1}\cong (T^*X)^{0,1}\otimes \Lambda^{p,0}$, the $\bar{\partial}$ operator induces a natural pseudoholomorphic structure on $\Lambda^{p,0}$ for $0\leq p\leq n$. Denote $\Omega^p(E)=\Lambda^{p,0}\otimes E$. The pseudoholomorphic structures on $\Lambda^{p,0}$ and $E$ give a pseudoholomorphic structure on $\Omega^p(E)$. Identifying $\Lambda^{p,1}\otimes E$ with $(T^*X)^{0,1}\otimes \Omega^p(E)$ by a permutation sign, the pseudoholomorphic structure on $\Omega^p(E)$ coincides with the $\bar{\partial}_E$ operator given by (\ref{tensor}). Define $$H^0(X, \Omega^p(E))=\{s\in \Gamma(X,\Omega^p(E))=\Omega^{p,0}(X, E): \bar{\partial}_{E} s=0\}.$$ We have
\begin{prop}\label{finsecE}
Let $E$ be a complex vector bundle with a pseudoholomorphic structure over a compact almost complex manifold $X$. Then $H^0(X, \Omega^p(E))$ is finite dimensional for $0\leq p\leq n$.
\end{prop}
\begin{proof}
As $*$ maps $\Lambda^{p,0}\otimes E$ into $\Lambda^{n,n-p}\otimes E$, on which $\nabla^{(1,0)}$ takes values in $\Lambda^{n+1,n-p}\otimes E=0$, we have $\bar{\partial}^*_{E}=-*\nabla^{(1,0)}*=0$ on $ \Omega^{p,0}(X, E)$. Hence $\bar{\partial}_{E} s=0$ is equivalent to $\Delta_{\bar{\partial}_{E} }s=0$. So $$H^0(X, \Omega^p(E))=\mathcal{H}_{\bar{\partial}_{E}}^{(p,0)}(X, E),$$ which is finite dimensional.
\end{proof}
\begin{cor}
$H^0(X,\mathcal K^{\otimes m})$ is finite dimensional.
\end{cor}
\begin{proof}
Let $E=\mathcal K^{\otimes m}$ with $\bar{\partial}_E=\bar{\partial}_m$ and $p=0$. Then it follows from Proposition \ref{finsecE}.
\end{proof}
Proposition \ref{finsecE} also implies that $\mathcal{H}_{\bar{\partial}_{E}}^{(p,0)}(X, E)$ is independent of the Hermitian metric used to define $\Delta_{\bar{\partial}_{E}}$. This is not expected to hold for $\mathcal{H}_{\bar{\partial}_{E}}^{(p,q)}(X, E)$ when $q>0$. However, it is possible that the dimension of $\mathcal{H}_{\bar{\partial}_{E}}^{(p,q)}(X, E)$ is independent of the defining Hermitian metric. When $q=\dim_{\C}X$, Proposition \ref{Serre} implies this is true. In general, we have the following question as a generalization of Problem 20 (Kodaira-Spencer) in Hirzebruch's list \cite{Hir}.
\begin{que}\label{Dol}
Does $\dim \mathcal{H}_{\bar{\partial}_{E}}^{(p,q)}(X, E)$ depend only on $J$ and $\bar{\partial}_{E}$ for any $0<q<\dim_{\C} X$?
\end{que}
\section{Bundle almost complex structure and Iitaka dimension}\label{defIita}
Recall that for any holomorphic vector bundle over a complex manifold, the total space is also a complex manifold, so that a smooth section $s$ is $\bar{\partial}$ closed if and only if $s$ induces a holomorphic map. For a complex vector bundle $E$ over the almost complex manifold $(X,J)$, a {\it bundle almost complex structure} as in \cite{DeT} (here we use the reformulation from \cite{LS}) is an almost complex structure $\mathcal J$ on $TE$ so that
\begin{enumerate}[(i)]
\item the projection is $(\mathcal J, J)$-holomorphic,
\item $\mathcal J$ induces the standard complex structure on each fiber, i.e. multiplying by $i$,
\item the fiberwise addition $\alpha: E\times_X E\rightarrow E$ and the fiberwise multiplication by a complex number $\mu:\C\times E\rightarrow E$ are both pseudoholomorphic.
\end{enumerate}
It is shown in \cite{DeT} that a bundle almost complex structure $\mathcal J$ on $E$ determines a pseudoholomorphic structure $\bar{\partial}_{\mathcal J}$, and the map $\mathcal J\mapsto \bar{\partial}_{\mathcal J}$ is a bijection between the spaces of bundle almost complex structures and pseudoholomorphic structures on $E$.
We include here a direct proof for the reader's convenience.
\begin{prop}[De Bartolomeis-Tian]\label{bacs=ps} There is a bijection between bundle almost complex structures and the pseudoholomorphic structures on $E$. \label{deT}
\end{prop}
\begin{proof}
Assume that $\mathcal{J}$ is a bundle almost complex structure. Let $s:X\longrightarrow E$ be in $\Gamma(X,E)$ and $\pi: E\longrightarrow X$ be the projection. We have $\pi\circ s=id_X$. Define $d''s: TX\longrightarrow TE$ by $d''s=\frac{1}{2}(ds+\mathcal{J}\circ ds\circ J)$. Then $d''s=0$ if and only if $s$ is a $(J,\mathcal{J})$ holomorphic map.
From $d''s$, we will define an element in $\Gamma(X,(T^*X)^{(0,1)}\otimes E)$. We have \begin{align*}
d\pi\circ d''s&=\frac{1}{2}(d\pi\circ ds+d\pi\circ \mathcal{J}\circ ds\circ J)\\
&=\frac{1}{2}(id+J\circ d\pi\circ ds\circ J)=\frac{1}{2}(id-id)=0,\end{align*}
where property i) is used in the second line. So $d''s(TX)\subset \ker(d\pi)=V(E)$, with $V(E)$ being the vertical tangent bundle of $E$. For a vector bundle, $V(E)$ is canonically isomorphic to $\pi^*(E)$ on $E$ and there is a commutative diagram:
\begin{center}
\begin{tikzcd}
V(E)\cong \pi^*(E) \arrow{r}{\pi^*} \arrow{d}
& E\arrow{d}
\\
E\arrow{r}{\pi}
& X
\end{tikzcd}
\end{center}
As $\pi\circ s=id_X$, we get a bundle homomorphism $\pi^*\circ d''s: TX\longrightarrow E$ over $id:X\longrightarrow X$. Define \begin{align}\label{partial} \bar{\partial}_Es=\pi^*\circ d''s|_{(TX)^{(0,1)}},\end{align} then $\bar{\partial}_Es\in \Gamma(X,(T^*X)^{(0,1)}\otimes E)$. To verify that $\bar{\partial}_E$ satisfies the Leibniz rule, from $d(fs)=df s+fds$, we get
\begin{align*} d''(fs)&=\frac{1}{2}(d(fs)+ \mathcal{J}\circ d(fs)\circ J)=\frac{1}{2}(df s+fds+\mathcal{J}\circ (dfs+fds)\circ J)\\
&=\frac{1}{2}((df+idf\circ J)s+f(ds+ \mathcal{J}\circ ds\circ J))\\
&=\bar{\partial}fs+fd''s
\end{align*}
where we use properties ii) and iii) of $\mathcal{J}$ in the second line. Passing to $E$, we have $\bar{\partial}_E (fs)=\bar{\partial}f\otimes s+f\bar{\partial}_Es$. So $\bar{\partial}_E$ gives a pseudoholomorphic structure on $E$.
On the other hand, let $\bar{\partial}_E$ be a pseudoholomorphic structure and $s:X\longrightarrow E$ in $\Gamma(X,E)$. Then $\bar{\partial}_Es\in\Gamma(X,(T^*X)^{(0,1)}\otimes E)$ and $\overline{\bar{\partial}_Es}\in\Gamma(X,(T^*X)^{(1,0)}\otimes E)$. Together they give $\bar{\partial}_Es+\overline{\bar{\partial}_Es}: TX\longrightarrow E$. As $ V(E)\cong\pi^*(E)=\{(v_1,v_2)\in E\times E, \pi(v_1)=\pi(v_2)\}$, there is an embedding $s^*: E\longrightarrow V(E)$ induced by $s$ given by $$s^*(v)=(s\pi(v), v).$$
Composing it with $\bar{\partial}_Es+\overline{\bar{\partial}_Es}$, we have $s^*\circ (\bar{\partial}_Es+\overline{\bar{\partial}_Es}): TX\longrightarrow V(E)\subset TE$. At each point $v$ of $E$, from $d\pi\circ ds=id_{TX}$, we get $T_vE=ds(TX)\oplus V_v(E)$. Then define the bundle almost complex structure $\mathcal{J}$ on $TE$ by letting $$\mathcal{J}=(-2s^*\circ (\bar{\partial}_Es+\overline{\bar{\partial}_Es})+ds)\circ J\circ ds^{-1}$$ on $ds(TX)$ and $\mathcal{J}=J_{st}$ on the vertical tangent space, where $J_{st}$ is the standard complex structure given by multiplication by $i$. Using the Leibniz rule of $\bar{\partial}_E$, it can be shown that $\mathcal{J}$ is independent of the smooth section $s$ and satisfies properties i), ii), iii). Also, the constructions give the one-to-one correspondence discovered in \cite{DeT}.
\end{proof}
Denote $\bar{\partial}_{\mathcal{J}}$ the pseudoholomorphic structure determined by a bundle almost complex structure $\mathcal{J}$. We have
\begin{cor} \label{holo}
For any $s\in \Gamma(X, E)$, $\bar{\partial}_{\mathcal{J}}s=0$ if and only if $s$ is $(J, \mathcal{J})$ holomorphic.
\end{cor}
\begin{proof}
Since the result is used frequently in this note, we offer two separate proofs which have their own ingredients.
The first proof follows directly from Proposition \ref{deT}. From (\ref{partial}) we have $\pi^*\circ d''s=\bar{\partial}_{\mathcal{J}}s+\overline{\bar{\partial}_{\mathcal{J}}s}$. Then $\bar{\partial}_{\mathcal{J}}s=0$ is equivalent to $d''s=0$, which means that $s$ is $(J, \mathcal{J})$ holomorphic.
The second proof applies a local argument. For each $2$-dimensional $J$-invariant subspace $P$ in $T_pX$ at a point $p$, there is a $J$-holomorphic disk $D$ passing through $p$ with tangent plane $P$. Then $J$ is integrable on $D$ and $E|_D$ is a holomorphic bundle for dimension reasons (see the argument after Definition 3.2). Restricted to $D$, it is known that $\bar{\partial}_{\mathcal{J}}s=0$ is equivalent to $s|_D$ being $(J, \mathcal{J})$ holomorphic. Since both $\bar{\partial}_{\mathcal{J}}s=0$ and $(J, \mathcal{J})$ holomorphicity are local conditions and only depend on the complex directions, we can let $P$ range over all such planes and obtain the general equivalence.
\end{proof}
We call such a section $s$ a pseudoholomorphic section of $(E, \mathcal J)$. The above correspondence builds a bridge to the second author's paper \cite{Z2} on the intersections of almost complex submanifolds, and is used frequently in this paper. In particular, when $E$ is a complex line bundle over a $4$-manifold $(X, J)$, the zero locus of a pseudoholomorphic section $s$ is a $J$-holomorphic $1$-subvariety in the class $c_1(E)$ by Corollary 1.3 of \cite{Z2}.
For any $(E, \mathcal J)$, define $$H^0(X, (E, \mathcal J))=H^{0,0}_{\bar{\partial}_{\mathcal J}}(X, E)=\{s\in \Gamma(X, E): \bar{\partial}_{\mathcal J} s=0\}.$$ By Theorem \ref{finite}, it is finite dimensional. The $(E, \mathcal J)$-genus of $X$ is defined as $P_{E, \mathcal J}:=\dim H^0(X, (E, \mathcal J))$. When there is no confusion about the choice of bundle almost complex structure $\mathcal J$, we will simply write it as $P_E$. The bundle almost complex structure $\mathcal J$ on $E$ induces bundle almost complex structures on $E^{\otimes m}$, which are also denoted by $\mathcal J$. Thus the notation $P_{E^{\otimes m}, \mathcal J}$, or simply $P_{E^{\otimes m}}$, makes sense. When $E=\mathcal K$ endowed with the standard bundle almost complex structure, the $m^{th}$ plurigenus of $(X,J)$ is defined to be $P_m(X, J)=\dim H^0(X, \mathcal K^{\otimes m})$.
We are now ready to define the Iitaka dimension (and the Kodaira dimension).
\begin{defn}
The Iitaka dimension $\kappa^J(X, (L, \mathcal J))$ of a complex line bundle $L$ with bundle almost complex structure $\mathcal J$ over $(X, J)$ is defined as
$$\kappa^J(X, (L, \mathcal J))=\begin{cases} -\infty, &\ \text{if} \ P_{L^{\otimes m},\mathcal J}=0\ \text{for all} \ m\geq 1,\\
\limsup_{m\rightarrow \infty} \dfrac{\log P_{L^{\otimes m},\mathcal J}}{\log m}, &\ \text{otherwise.}
\end{cases}$$
The Kodaira dimension $\kappa^J(X)$ is defined by choosing $L=\mathcal K$ and $\mathcal J$ to be the bundle almost complex structure induced by $\bar{\partial}$.
\end{defn}
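To see what the formula measures, note that if the plurigenera grow polynomially, say $P_{L^{\otimes m},\mathcal J}\sim c\,m^{\kappa}$ for some constant $c>0$ (as happens, for example, for ample line bundles on complex projective manifolds with $\kappa=\dim_{\C}X$), then
$$\limsup_{m\rightarrow \infty}\frac{\log P_{L^{\otimes m},\mathcal J}}{\log m}=\limsup_{m\rightarrow \infty}\left(\kappa+\frac{\log c}{\log m}\right)=\kappa,$$
while plurigenera that are bounded and eventually nonzero give $\kappa^J=0$.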
\section{Birational invariants}\label{birational}
As suggested by the results in \cite{Z2}, degree $1$ pseudoholomorphic maps are the right notion of birational morphism in the almost complex category. We define two almost complex manifolds $M$ and $N$ to be birational to each other if there are almost complex manifolds $M_1, \cdots, M_{n+1}$ and $X_1, \cdots, X_{n}$ such that $M_1=M$ and $M_{n+1}=N$, and there are degree one pseudoholomorphic maps $f_i: X_i\rightarrow M_i$ and $g_i: X_{i}\rightarrow M_{i+1}$, $i=1, \cdots, n$.
The next natural step is to find birational invariants. In birational geometry, there are many important birational invariants, including the fundamental group, the Hodge numbers $h^{p,0}$, the plurigenera and in particular the Kodaira dimension. As shown in Theorem 1.5 of \cite{Z2}, $X=M\#k\overline{\mathbb CP^2}$ when there is a degree one pseudoholomorphic map $\phi: X\rightarrow M$ between $4$-dimensional almost complex manifolds. Hence, the fundamental group is also a birational invariant in the almost complex category.
We will show in this section that the almost complex Kodaira dimension $\kappa^J$, plurigenera $P_m$ and Hodge numbers $h^{p,0}$ are birational invariants for $4$-dimensional almost complex manifolds.
We first show that plurigenera and Kodaira dimension for almost complex $4$-manifolds are non-increasing under pseudoholomorphic maps of non-zero degree.
\begin{lem}\label{kjsurj}
Let $u: (X, J)\rightarrow (Y, J_Y)$ be a surjective pseudoholomorphic map between closed almost complex $2n$-manifolds. Then $P_m(X, J)\ge P_m(Y, J_Y)$. Hence, $\kappa^{J}(X)\ge \kappa^{J_Y}(Y)$.
\end{lem}
\begin{proof}
Pullback of sections defines $$u^*_m: H^0(Y, \mathcal K_Y^{\otimes m})\rightarrow H^0(X, \mathcal K_X^{\otimes m})$$ for all $m\ge 1$. Combining the argument of Theorem 5.5 and the result of Theorem 3.8 in \cite{Z2}, we know that the singularity subset $\mathcal S_u$ has finite $(2n-2)$-dimensional Hausdorff measure. (Theorem 1.4 of \cite{Z2} shows that $\mathcal S_u$ supports a $J$-holomorphic $1$-subvariety when $n=2$.)
For any $s\in H^0(Y, \mathcal K_Y^{\otimes m})$, if $u^*_m(s)=0$, then the restriction $u_m^*(s)|_{X\setminus \mathcal S_u}=0$ would imply $s|_{Y\setminus u(\mathcal S_u)}=0$. Since $s$ is smooth and $\overline{Y\setminus u(\mathcal S_u)}=Y$, we know $s=0$. Hence $u_m^*$ is injective, which implies the inequalities.
\end{proof}
For any pseudoholomorphic structure $\bar\partial_E$ of a complex vector bundle $E$, it also induces a pseudoholomorphic structure on $E|_D$ for any non-compact embedded $J$-holomorphic curve $D\subset X$. By the Koszul-Malgrange theorem, it is holomorphic. Since $D$ is Stein, by Oka's principle, any holomorphic bundle over it is isomorphic to the product $D\times \C^k$.
Let $s$ be a smooth section of $E$ over a compact almost complex manifold $X$. Then for any point $x\in X$ and a $J$-holomorphic disk $D$ passing through it, we could write $s|_{D}$ as a vector valued complex function $s': D\rightarrow \C^k$. In fact, $s'$ is the composition of $s$ with the projection from $D\times \C^k$ to $\C^k$. Since the projection is holomorphic, $s$ is $(J, \mathcal J)$-holomorphic if and only if $s'$ is holomorphic. In other words, $s'$ is a holomorphic function. Later, we will simply write $s$ instead of $s'$ by abuse of notation.
Since there is no local complex coordinate system for a general almost complex manifold, we use the $J$-fiber-diffeomorphism \cite{T96} to play such a role.
We start with any point $x\in X$, and want to choose an open neighborhood $U$ of $x$. Without loss of generality, as in \cite{T96, Z2}, we can assume the almost complex structure $J$ is on $\mathbb C^2$. It agrees with the standard almost complex structure $J_0$ at the origin, but typically nowhere else.
Let $D\subset \mathbb C$ be the disk of radius $\rho$, and denote by $D_w:=\{(\xi, w)\,|\, |\xi|<\rho\}$, $w\in D$, the corresponding family of holomorphic disks. What we get from \cite{T96}, mainly Lemma 5.4, is a diffeomorphism $f: D\times D \rightarrow \mathbb C^2$ onto its image $U$ such that:
\begin{itemize}
\item For all $w\in D$, $f(D_w)$ is a $J$-holomorphic submanifold containing $(0, w)$.
\item For all $w\in D$, $\mathrm{dist}((\xi, w), f(\xi, w))\le z\cdot \rho\cdot |\xi|$. Here $z$ depends only on $\Omega$ and $J$.
\item For all $w\in D$, the derivatives of order $m$ of $f$ are bounded by $z_m\cdot \rho$, where $z_m$ depends only on $\Omega$ and $J$.
\end{itemize}
We call such a diffeomorphism a $J$-fiber-diffeomorphism. We have the freedom to choose the ``direction" of these disks by rotating the Gaussian coordinate system. As in \cite{T96, Z2, BZ}, we are also able to choose the center $f(0\times D)$ to be $J$-holomorphic.
With these preparations, we are able to derive the following version of Hartogs' extension theorem for almost complex manifolds.
\begin{thm}\label{hartogs}
Let $(E, \mathcal J)$ be a complex vector bundle with a bundle almost complex structure over the almost complex $4$-manifold $(X, J)$, and $p\in X$. Then any section in $H^0(X\setminus p, (E, \mathcal J)|_{X\setminus p})$ extends to a section in $H^0(X, (E, \mathcal J))$.
\end{thm}
\begin{proof}
Near $p$, as in \cite{Z2}, we choose a $J$-fiber-diffeomorphism of a neighborhood $U$ of $p$, $f: D\times D\rightarrow U$, such that $f(0\times D)$ and $f(D\times w), \forall w\in D$, are embedded $J$-holomorphic disks. By possibly shrinking $U$, our complex vector bundle $(E, \mathcal J)$ can be trivialized so that the restrictions of any section (defined on a subset of $U$) to $f(D_w)$ and to $f(0\times D)$ are complex vector valued functions. We achieve this by first choosing the trivialization along $f(0\times D)$ and then fiberwise along each $f(D_w)$.
Let $s\in H^0(X\setminus p, (E, \mathcal J)|_{X\setminus p})$. By choosing the above trivialization and the previous discussion, $s$ is a vector valued holomorphic function along each $D_w$ when $w\ne 0$, $(D\setminus \{0\})\times 0$ and $0\times (D\setminus \{0\})$. We use Cauchy integration formula to define $$a_j(z_2)=\frac{1}{2\pi i}\int_{|\xi|=\rho}\frac{s(\xi, z_2)}{\xi^{j+1}}d\xi, \, \,\, \forall j\in \mathbb Z.$$ It is a smooth (vector valued) function and $a_0(z_2)=s(0, z_2)$ when $z_2\ne 0$. Hence, in particular, $a_0(z_2)$ is holomorphic on $D\setminus \{0\}$. We let $a_j(0)=\lim_{z_2\rightarrow 0} a_j(z_2)$. Since for fixed $z_2\ne 0$, $ s(\xi, z_2)$ is holomorphic for $\xi \in D$, we know $a_{-j}(z_2)=0$ for $j>0$. By the continuity of $s$, we know $a_{-j}(0)=0, \forall j>0$. Hence, $s(\xi, 0)=\sum_{j=-\infty}^{\infty}a_j(0) \xi^j=\sum_{j=0}^{\infty}a_j(0) \xi^j$ is also holomorphic at $\xi=0$ with value $a_0(0)$ at $\xi=0$. In particular, $a_0(0)=\frac{1}{2\pi i}\int_{|\xi|=\rho}\frac{s(\xi, 0)}{\xi}d\xi$. Since $a_0(z_2)=s(0, z_2)$ is holomorphic when $z_2\ne 0$, the partial derivative $$\frac{\partial}{\partial \bar z_2}a_0(z_2)=\frac{1}{2\pi i}\int_{|\xi|=\rho}\frac{\frac{\partial}{\partial \bar z_2}s(\xi, z_2)}{\xi}d\xi=0.$$ This extends to $z_2= 0$ since $s$ is smooth. Hence, $a_0(z_2)=s(0, z_2)$ is also holomorphic at $z_2=0$.
To summarize, what we have proved in the above is that the extensions of holomorphic functions $s(0, z_2)$ and $s(z_1, 0)$ on $0\times (D\setminus\{0\})$ and $(D\setminus\{0\})\times 0$ to $(0, 0)$ have the same value $a_0(0)$, and are holomorphic at both disks $0\times D$ and $D_0$. As in \cite{Z2}, we can choose the center $f(0\times D)$ of the $J$-fiber diffeomorphism (that transverse to $f(D_0)$) to be (a subdisk of) any given $J$-holomorphic disk. Moreover, we can also choose a family of disks passing through $p$ whose complex tangent directions at $p$ form a disk around a given direction $\kappa$ in $\mathbb CP^1$. Moreover, each of them is the $D_0$ fiber of a $J$-fiber diffeomorphism (see Lemma 5.8 of \cite{T96} or Lemma 3.10 of \cite{Z2}). Since $\mathbb CP^1$ is compact, we can choose finite many such families such that their union covers a neighborhood of $p$, and their tangent directions cover $\mathbb CP^1$.
We choose $J$-fiber diffeomorphisms around $p$ whose fiber passing through $p$ lying in the above union of families and take the center of the foliation to be either $f(0\times D)$ or $f(D_0)$ in our above construction. By this process, we know all the disks in the above union of families have the same extended value at $p$, and are holomorphic at all the directions. Hence, our section $s\in H^0(X\setminus p, (E, \mathcal J)|_{X\setminus p})$ is extended over $p$ to a section in $H^0(X, (E, \mathcal J))$.
\end{proof}
We are ready to show the Kodaira dimension $\kappa^J$ is a birational invariant for almost complex $4$-manifolds.
\begin{thm}\label{Kodbir}
Let $u: (X, J)\rightarrow (Y, J_Y)$ be a degree one pseudoholomorphic map between closed almost complex $4$-manifolds. Then $P_m(X, J)=P_m(Y, J_Y)$ and thus $\kappa^{J}(X)= \kappa^{J_Y}(Y)$.
\end{thm}
\begin{proof}
First, by Corollary \ref{holo}, we know that any element in $H^0(X, \mathcal K_X^{\otimes m})$ is $(J, \mathcal J_{J})$-holomorphic where $\mathcal J_J$ is the bundle almost complex structure corresponding to $\bar{\partial}_m$.
By Lemma \ref{kjsurj}, we only need to show $u^*_m$ is surjective.
By Theorem 1.5 of \cite{Z2}, we know there is a finite set $Y_1\subset Y$ such that $$u: X\setminus u^{-1}(Y_1) \rightarrow Y\setminus Y_1$$ is a diffeomorphism. For $\sigma\in H^0(X, \mathcal K_X^{\otimes m})$, we could pull it back by $u^{-1}|_{Y\setminus Y_1}$ to get $(u^{-1})^*(\sigma)\in H^0(Y\setminus Y_1, \mathcal K_{Y\setminus Y_1}^{\otimes m})$. By Theorem \ref{hartogs}, we could extend point-by-point over $Y_1$ to get a unique element in $H^0(Y, \mathcal K_Y^{\otimes m})$.
Hence, $u_m^*$ is surjective and we complete the proof.
\end{proof}
There are also other birational invariants. Since $\Lambda^{p,1}\cong (T^*X)^{0,1}\otimes \Lambda^{p,0}$, the $\bar{\partial}$ operator induces a natural pseudoholomorphic structure on $\Lambda^{p,0}$ for $0\leq p\leq n$. By Proposition \ref{finsecE}, $H^0(X, \Omega^p_X):=H^0(X, \Omega^p(\mathcal O))=\mathcal H_{\bar{\partial}}^{(p,0)}(X, \mathcal O)$ is finite dimensional. We denote $h^{p,0}(X):=\dim H^0(X, \Omega^p(\mathcal O))$. For a pseudoholomorphic map $u: (X, J)\rightarrow (Y, J_Y)$ between closed almost complex $2n$-manifolds and any $0\le p\le n$, pullback of sections defines $u^*: H^0(Y, \Omega^p_Y)\rightarrow H^0(X, \Omega^p_X)$. When $u$ is surjective, by the same argument as Lemma \ref{kjsurj}, we have
\begin{lem}\label{p0surj}
Let $u: (X, J)\rightarrow (Y, J_Y)$ be a surjective pseudoholomorphic map between closed almost complex $2n$-manifolds. Then $u^*: H^0(Y, \Omega^p_Y)\rightarrow H^0(X, \Omega^p_X)$ is injective and $h^{p,0}(X)\ge h^{p,0}(Y)$ for any $0\le p\le n$.
\end{lem}
We can also show that $h^{p,0}$ are birational invariants in dimension $4$. In fact, the only one which does not follow from Theorem \ref{Kodbir} is the irregularity $h^{1,0}$.
\begin{thm}\label{hodgebir}
Let $u: (X, J)\rightarrow (Y, J_Y)$ be a degree one pseudoholomorphic map between closed almost complex $4$-manifolds. Then $h^{p,0}(X)=h^{p,0}(Y)$ for any $0\le p\le 2$.
\end{thm}
\begin{proof}
First, by Corollary \ref{holo}, we know that any element in $H^0(X, \Omega^p_X)$ is $(J, \mathcal J_{J})$-holomorphic where $\mathcal J_J$ is the bundle almost complex structure on $\Lambda^{p,0}$ corresponding to he natural pseudoholmorphic structure induced by $\bar{\partial}$.
By Lemma \ref{p0surj}, we only need to show $u^*$ is surjective.
By Theorem 1.5 of \cite{Z2}, we know there is a finite set $Y_1\subset Y$ such that $$u: X\setminus u^{-1}(Y_1) \rightarrow Y\setminus Y_1$$ is a diffeomorphism. For $\sigma\in H^0(X, \Omega^p_X)$, we could pull it back by $u^{-1}|_{Y\setminus Y_1}$ to get $(u^{-1})^*(\sigma)\in H^0(Y\setminus Y_1, \Omega^p_{Y\setminus Y_1})$, which are the pseudoholomorphic sections of $\Lambda^{p,0}(Y)$ over $Y\setminus Y_1$. By Theorem \ref{hartogs}, we could extend it point-by-point over $Y_1$ to get a unique element in $H^0(Y, \Omega^p_Y)$.
Hence, $u^*$ is surjective and we complete the proof.
\end{proof}
We would like to remark that the dimension of the $J$-anti-invariant cohomology $H_J^-(X, \mathbb R)$ defined in \cite{LZcag} is also a birational invariant as shown in \cite{BZ}.
\section{Examples}\label{exam}
In this section, we give some explicit examples on the calculation of the almost complex plurigenera, the Kodaira dimension $\kappa^J$ and the irregularity. As we have seen in Section \ref{birational}, all of them are birational invariant on 4-manifolds. However, different from the integrable case, they are no longer deformation invariants. This is easy to see by deforming an integrable almost complex structure of a surface of general type as shown in the introduction. This argument does not quite extend to the case when the canonical class is torsion. Our first two examples study such explicit deformations on Kodaira-Thurston surface and $4$-torus.
In Section \ref{bigex}, we show that there are examples of compact $2n$-dimensional nonintegrable almost complex manifolds with Kodaira dimension $\{-\infty, 0, 1, \cdots, n-1\}$ for $n\geq 2$ (Theorem \ref{niKod}).
\subsection{The Kodaira-Thurston surface}\label{kt}
Consider the Kodaira-Thurston surface $X=S^1\times (\Gamma \backslash \text{Nil}^3)$, where $\text{Nil}^3$ is the Heisenberg group
$$\text{Nil}^3=\{ A\in GL(3, \mathbb{R})| A=\begin{pmatrix} 1&x&z\\0&1&y\\0&0&1\end{pmatrix}\}$$
and $\Gamma$ is the subgroup in $\text{Nil}^3$ consisting of element with integer entries, acting by left multiplication (see \cite{TW}). $X$ is homogeneous and has trivial tangent and cotangent bundle. An invariant frame of the tangent bundle is given by $$\frac{\partial}{\partial t}, \hspace{4mm} \frac{\partial}{\partial x},\hspace{4mm} \frac{\partial}{\partial y}+x\frac{\partial}{\partial z}, \hspace{4mm}\frac{\partial}{\partial z},$$
where $t$ is the coordinate of $S^1$. The corresponding dual invariant coframe is given by
$$dt, \hspace{4mm}dx,\hspace{4mm} dy,\hspace{4mm} dz-xdy.$$
For any $a\neq 0\in \mathbb{R}$, define the almost complex structures $J_a$ by:
$$J_a(\frac{\partial}{\partial t})=\frac{\partial}{\partial x}, \hspace{2mm} J_a(\frac{\partial}{\partial x})=-\frac{\partial}{\partial t}, \hspace{2mm} J_a(\frac{\partial}{\partial y}+x\frac{\partial}{\partial z})=\frac{1}{a}\frac{\partial}{\partial z},\hspace{2mm} J_a(\frac{\partial}{\partial z})=-a(\frac{\partial}{\partial y}+x\frac{\partial}{\partial z}).$$
We can compute the Nijenhuis tensor to get $N(\frac{\partial}{\partial x}, \frac{\partial}{\partial z})=a^2(\frac{\partial}{\partial y}+x\frac{\partial}{\partial z})\neq 0$. Therefore $J_a$ is not integrable by the Newlander-Nirenberg theorem \cite{NN}. As $(T^*X)^{1,0}$ is spanned by $\phi_1=dt+idx, \phi_2=dy+ia(dz-xdy)$, any section of $\mathcal K$ can be written as $s=f\phi_1\wedge\phi_2$. Since $$d\phi_2=-iadx\wedge dy=-\frac{a}{4}(\phi_1\wedge\bar{\phi}_2-\bar{\phi}_1\wedge \phi_2+\bar{\phi}_1\wedge\bar{\phi}_2-\phi_1\wedge\phi_2),$$ we have
\begin{align*}
\bar{\partial}(\phi_1\wedge\phi_2)&=-\phi_1\wedge\bar{\partial}\phi_2=\frac{a}{4}\phi_1\wedge(\phi_1\wedge\bar{\phi}_2-\bar{\phi}_1\wedge\phi_2)\\
&=\frac{a}{4}\bar{\phi}_1\wedge\phi_1\wedge\phi_2.
\end{align*}
So $\bar{\partial}s=0$ if and only if \begin{align}\bar{\partial}f+\frac{a}{4}f\bar{\phi}_1=0. \label{09}\end{align}
Let $w=t+ix, \frac{\partial}{\partial \bar{w}}=\frac{1}{2}(\frac{\partial}{\partial t}+i\frac{\partial}{\partial x})$ and $V=\frac{1}{2}((\frac{\partial}{\partial y}+x\frac{\partial}{\partial z})+i\frac{1}{a}\frac{\partial}{\partial z})$. Then $\frac{\partial}{\partial \bar{w}}, V$ are dual vectors of $\bar{\phi}_1, \bar{\phi}_2$. From (\ref{09}) we have
\begin{align}\label{04}&\frac{\partial f}{\partial \bar{w}}+\frac{a}{4}f=0\\
\label{5}&V(f)=0.
\end{align}
Let $f=f_1+if_2$, where $f_1, f_2$ are smooth real functions on $X$. From (\ref{5}) we get that $\bar{V}Vf=0$ where $\bar{V}$ is the conjugate of $V$. As $\bar{V}V=\frac{1}{4}((\frac{\partial}{\partial y}+x\frac{\partial}{\partial z})^2+(\frac{1}{a}\frac{\partial}{\partial z})^2)$, we obtain
\begin{align}\label{8}
&\frac{\partial^2 f_1}{\partial y^2}+2x\frac{\partial^2f_1}{\partial y\partial z}+(x^2+\frac{1}{a^2})\frac{\partial^2f_1}{\partial z^2}=0,\\
\label{9}
&\frac{\partial^2 f_2}{\partial y^2}+2x\frac{\partial^2f_2}{\partial y\partial z}+(x^2+\frac{1}{a^2})\frac{\partial^2f_2}{\partial z^2}=0
\end{align}
Consider the fibration $\rho: X\rightarrow T^2=\mathbb{R}^2/\mathbb{Z}^2$ given by
$$\rho([t, x, y, z])=[t, x].$$
The fiber of $\rho$ is a torus with coordinate $(y,z)$. \eqref{8}, \eqref{9} is strictly elliptic without zero order term when viewing $f$ as a function of $y, z$. As the fiber is compact, by the maximum principle $f$ is constant in each fiber. We can push down $f$ to a function on the base $T^2$ with $(t, x)$ coordinate. To solve the equation (\ref{04}) on $T^2$, consider the Fourier series $$\mathcal{F}(f)=\sum_{(k,l)\in \mathbb{Z}^2}f_{k,l}e^{2\pi i(kt+lx)},\ f_{k,l}=\int_{T^2} f(t,x)e^{-2\pi i(kt+lx)}dtdx.$$ For smooth function $f$, $f=0$ if and only if $f_{k, l}=0, \forall (k,l)\in \mathbb Z^2$ by the completion of the series $\{e^{2\pi i(kt+lx)}\}$.
Apply $\mathcal{F}$ to \eqref{04}, we get
$$\sum_{(k,l)\in \mathbb{Z}^2} (\frac{a}{4}+\pi(ik-l))f_{k,l} e^{2\pi i(kt+lx)}=0.$$
If $a\notin 4\pi\mathbb{Z}$, then $\frac{a}{4}+\pi(k-il)\neq 0$ for any $(k,l)\in \mathbb{Z}^2$. So $f_{k,l}=0$ and $f=0$. If $a=4l\pi$ for some $l\in\mathbb{Z}\backslash \{0\}$, then $f=Ce^{2\pi i lx}$ are the solutions. Therefore we get
$$P_1(X, J_a)=\begin{cases} 0, a\notin 4\pi\mathbb{Z}\\
1, a\in 4\pi\mathbb{Z}\end{cases}.$$
For $m\geq 2$, assume that $s=f(\phi_1\wedge\phi_2)^{\otimes m}$ is a holomorphic section of $\mathcal K^{\otimes m}$. Then $$\bar{\partial}_m s=(\bar{\partial}f + \frac{ma}{4}f\bar{\phi}_1)(\phi_1\wedge\phi_2)^{\otimes m}=0.$$ The same computation from above shows that $f$ is constant on $(y,z)$ and satisfying
$$\frac{\partial f}{\partial \bar{w}}+\frac{ma}{4}f=0.$$
Using Fourier transform, we get that if $a\notin \frac{4}{m}\pi\mathbb{Z}$, then $f=0$; if $a=\frac{4l\pi}{m}$ for some $l\in \mathbb{Z}\backslash \{0\}$, then $f=Ce^{2\pi i lx}$. So
\begin{equation}\label{KTpm}
P_m(X, J_a)=\begin{cases} 0, a\notin \frac{4}{m}\pi\mathbb{Z}\\
1, a\in \frac{4}{m}\pi\mathbb{Z}.\end{cases}
\end{equation}
\vspace{.2cm}
To compute the irregularity $h^{1,0}(X)$, assume that $\gamma=g_1\phi_1+g_2\phi_2\in H^0(X,\Omega_X)$. As $d\phi_1=0$ and $\bar{\partial}\phi_2=-\frac{a}{4}\phi_1\wedge\bar{\phi}_2+\frac{a}{4}\bar{\phi}_1\wedge \phi_2$, from $\bar{\partial}\gamma=0$ we obtain
\begin{align}\bar{\partial}g_1+\frac{a}{4}g_2\bar{\phi}_2=0 \label{18} \\ \bar{\partial}g_2-\frac{a}{4}g_2\bar{\phi}_1=0 \label{19} \end{align}
(\ref{19}) is in the same form with \eqref{09}. So we have $V(g_2)=0$ and then $g_2$ is independent of $y,z$. (\ref{18}) is equivalent to
\begin{align}
\label{020} \frac{\partial g_1}{\partial \bar{w}}=0\\V(g_1)+\frac{a}{4}g_2=0 \label{021}
\end{align}
From \eqref{020} we have that $g_1$ is independent of $t, x$. As $g_2$ is independent of $y,z$, composing $\bar{V}$ to (\ref{021}) we get that $\bar{V}V(g_1)=0$. Therefore, $g_1$ is a constant. Returning to (\ref{021}) we get that $g_2=0$. Therefore $\gamma=c\phi_1$ for some constant $c$ and $h^{1,0}(X)=1$.
In conclusion, we have
\begin{prop}\label{6.1}
For any $a\neq 0 \in \mathbb{R}$, there is a nonintegrable almost complex structure $J_a$ on $X=S^1\times (\Gamma \backslash \text{Nil}^3)$ such that $h^{1,0}(X)=1$ and
$$\kappa^{J_a}(X)=\begin{cases}\begin{array}{cl} -\infty, &\ a\notin \pi\mathbb{Q}\\
0, &\ a\in \pi\mathbb{Q}\backslash\{0\}\end{array}\end{cases}$$
\end{prop}
\begin{proof}
As $\bigcup_{m\in\mathbb{Z}_+} \frac{4}{m}\pi\mathbb{Z}=\pi\mathbb{Q}$, we have that if $a\notin \pi\mathbb{Q}$, $P_m=0$ for all $m$; if $a\in \pi\mathbb{Q}$, then $P_m=1$ for some $m$.
\end{proof}
If we choose $a=4\pi, 2\pi, \frac{4}{3}\pi, \cdots, \frac{4}{n}\pi, \cdots$, then the first nonzero plurigenera are $P_1, P_2, P_3, \cdots, P_n,\cdots$. Therefore they are not birationally equivalent though with $\kappa^J=0$.
\begin{remk}
Let $J$ be the almost complex structure given by
$$J(\frac{\partial}{\partial t})=\frac{\partial}{\partial z}, \hspace{2mm} J(\frac{\partial}{\partial x})=\frac{\partial}{\partial y}+x\frac{\partial}{\partial z}.$$
Then $J$ is integrable and induces the usual complex structure on $X$. In this case, $\mathcal K$ is holomorphically trivial with a closed section $(dt+i(dz-xdy))\wedge(dy+idx)$. So $P_m(X,J)=1$ for any $m\geq 1$ and $\kappa^{J}(X)=0$.
\end{remk}
From \eqref{KTpm}, we see that both plurigenera and Kodaira dimension are not deformation invariant. However, we still have upper semi-continuity.
Assume that $\Delta$ is an open set in $\mathbb{C}$ and $\{J(t), t\in \Delta\}$ is a family of almost complex structures on a compact smooth manifold, depending smoothly on $t$. Let $P_m(t), h^{p,0}(t)$ be the $m$-th plurigenus and $(p,0)$ Hodge number of $J(t)$. We have
\begin{prop} $P_m(t)$ and $h^{p,0}(t)$ are upper semi-continuous function of $t$.
\end{prop}
\begin{proof}
As all sections in $H^0(X,\mathcal K(t)^{\otimes m})$ and $H^0(X,\Omega^p(t))$ are exactly the harmonic sections, by the properties of elliptic operators (Theorem 4.3 in \cite{KM}, see also \cite{DLZ}), $P_m(t)$ and $h^{p,0}(t)$ are upper semi-continuous.
\end{proof}
\subsection{$4$-torus}\label{T4}
We offer another example on the four torus.
Consider the four torus $X=T^4=\mathbb{R}^4/ \mathbb{Z}^4$ with coordinates $(x_1, x_2, x_3, x_4)$. We study the almost complex structure $J$ introduced in \cite{CKT} given by
$$J=\begin{pmatrix} 0&-1&\alpha&\beta\\1&0&-\beta&\alpha\\0&0&0&1\\0&0&-1&0\end{pmatrix}.$$
We assume that $\alpha, \beta$ are any two real smooth functions on $T^4$ satisfying $\frac{\partial^2 (\beta+i\alpha)}{\partial x_1^2}+\frac{\partial^2 (\beta+i\alpha)}{\partial x_2^2}\neq 0$ in a dense open set. For example, $\alpha=\cos 2\pi(x_1+x_2), \beta=\sin 2\pi(x_1+x_2)$. Direct computation shows that $J$ is integrable if and only if $\alpha,\beta$ are independent of $x_1,x_2$ (see \cite{CKT}). Therefore, $J$ is not integrable by our assumption. Let $$\phi_1=dx_1+i(dx_2-\alpha dx_3-\beta dx_4),\ \ \ \phi_2=dx_3-idx_4.$$
Then $(T^*X)^{1,0}$ is spanned by $\phi_1, \phi_2$. Assume that $s=f\phi_1\wedge\phi_2$ is a smooth section of $\mathcal K$. Let $w=x_1+ix_2, \frac{\partial}{\partial w}=\frac{1}{2}(\frac{\partial}{\partial x_1}-i\frac{\partial}{\partial x_2})$, we have $\bar{\partial}s=0$ if and only if
\begin{align}\label{t4}
\bar{\partial}f+\frac{1}{2}\dfrac{\partial(\beta+i\alpha)}{\partial w} f\bar{\phi}_2=0
\end{align}
It is equivalent to
\begin{align}\label{t41}
&\frac{\partial f}{\partial \bar{w}}=0\\
\label{t42}\frac{\partial f}{\partial x_3}+\alpha\frac{\partial f}{\partial x_2}-i(&\frac{\partial f}{\partial x_4}+\beta\frac{\partial f}{\partial x_2})+\dfrac{\partial(\beta+i\alpha)}{\partial w} f=0
\end{align}
As $T^4$ is compact, from \eqref{t41} we get that $f$ is constant in the $(x_1,x_2)$ direction. Then \eqref{t42} become
\begin{align}\label{t42'} \frac{\partial f}{\partial x_3}-i\frac{\partial f}{\partial x_4}+\dfrac{\partial(\beta+i\alpha)}{\partial w} f=0.\end{align}
Apply $\frac{\partial}{\partial \bar{w}}$ to \eqref{t42'} to get
\begin{align}\dfrac{\partial^2(\beta+i\alpha)}{\partial w\partial\bar{w}} f=0.\end{align}
By the assumption of $\alpha, \beta$, we have $f=0$ and $s=0$. Similarly, for $\mathcal K^{\otimes m}, m\geq 2$, if $s=f(\phi_1\wedge\phi_2)^{\otimes m}$ is holomorphic, then
$$\bar{\partial}f+\frac{m}{2}\dfrac{\partial(\beta+i\alpha)}{\partial w} f\bar{\phi}_2=0.$$
The same argument gives that $s=0$. Therefore, $P_m(X,J)=0, m\geq 1$ and $\kappa^J(X)=-\infty$.\\
For the irregularity, assume that $\gamma=g_1\phi_1+g_2\phi_2\in H^0(X,\Omega_X)$. Then from $\bar{\partial} \gamma=0$ we get that \begin{align} \bar{\partial}g_1+\frac{1}{2}\dfrac{\partial(\beta+i\alpha)}{\partial w} g_1\bar{\phi}_2=0 \label{21}\\ \bar{\partial}g_2+\frac{1}{2}\dfrac{\partial(\beta-i\alpha)}{\partial \bar{w}} g_1\bar{\phi}_1=0. \label{22}\end{align}
(\ref{21}) is the same with (\ref{t4}), so we get that $g_1=0$. Putting it to (\ref{22}), we deduce that $g_2$ is a constant. So $\gamma=c\phi_2$ and $h^{1,0}=1$.
For any $t=t_1+it_2\in \mathbb{C}$, let $$J(t)=\begin{pmatrix} 0&-1&t_1\alpha&t_2\beta\\1&0&-t_2\beta&t_1\alpha\\0&0&0&1\\0&0&-1&0\end{pmatrix}.$$
$J(0)$ is the standard complex structure on $T^4$ and $J(1+i)=J$. By the above calculation, for any $m\geq 1$, $t\in \mathbb{C}$, we have $$P_m(t)=\begin{cases} 0, t\neq 0 \\ 1, t=0\end{cases}, \, \,\, h^{1,0}(t)=\begin{cases} 1, t\neq 0 \\ 2, t=0\end{cases}.$$
This gives an example where the plurigenera, the Kodaira dimension and the irregularity are not constant under smooth deformation even when $K=0$.
\subsection{Non-integrable almost complex manifolds with large Kodaira dimension}\label{bigex}
Although a generic almost complex structure does not have any pseudoholomorphic curve, which forces Kodaira dimension to be $-\infty$ or $0$, we still have interesting non-integrable examples with large Kodaira dimension.
In this subsection, we give examples of non-integrable almost complex structures on $2n$-manifolds with Kodaira dimension lying among $-\infty, 0, 1, \cdots, n-1$. First, we construct non-integrable almost complex 4-manifolds with $\kappa^J=1$.
Let $S$ be a compact Riemann surface with genus $g\geq 2$. We shall define a nonintegrable almost complex structure on $X=T^2\times S$.\\
Denote the two projections by
$$\pi_1: T^2\times S\longrightarrow T^2, \pi_2: T^2\times S\longrightarrow S.$$ Assume that $T^2=\mathbb{R}^2/\mathbb{Z}^2$ has coordinate $(x,y)$, then $\frac{\partial}{\partial x}, \frac{\partial}{\partial y}$ is a global frame on $T^2$. The tangent bundle of $X$ has a splitting $TX=TT^2\times TS$. Let $J_S$ be the complex structure on $S$ with local holomorphic coordinate $w$ and $h=h(w)$ be a smooth real nonconstant function on $S$. $h$ is pulled back by $\pi_2$ to be a function on $X$ (we still denote it by $h$ which is constant on $(x,y)$ direction). Define an almost complex structure on $X$ by
$$J(\frac{\partial}{\partial x})=-h\frac{\partial}{\partial x}+\frac{\partial}{\partial y}, J(\frac{\partial}{\partial y})=-(1+h^2)\frac{\partial}{\partial x}+h\frac{\partial}{\partial y}$$
$$J|_{TS}=J_S$$
Then $J^2=-id$ and $(TX)^{1,0}=<V, \frac{\partial}{\partial w}>$, where $V=\frac{\partial}{\partial x}+i(h\frac{\partial}{\partial x}-\frac{\partial}{\partial y})$. As $$[V,\frac{\partial}{\partial w}]=-i\frac{\partial h}{\partial w}\frac{\partial}{\partial x}=-\frac{i}{2}\frac{\partial h}{\partial w}(V+\bar{V}),$$ $J$ is not integrable by Newlander-Nirenberg's theorem since $\frac{\partial h}{\partial w}\neq 0$.\\
We have $J(dx)=-(hdx+(1+h^2)dy)$. Let $\alpha=dx+i(hdx+(1+h^2)dy)$. Then locally $$(T^*X)^{1,0}=<\alpha, dw> \ \text{and}\ \ \mathcal K_J^{\otimes m}=<(\alpha\wedge dw)^{\otimes m}>$$ for any $m\geq 1$. There is an embedding $$\pi_2^*: \Gamma(S, \mathcal K_S^{\otimes m})\longrightarrow \Gamma(X, \mathcal K_J^{\otimes m})$$ given by $\pi_2^*(\gamma)=(\alpha)^{\otimes m}\wedge \gamma$ for any $\gamma\in \Gamma(S, \mathcal K_S^{\otimes m})$.
Defining the $(0,1)$ form $\beta=\frac{-i(h+i)}{2(h-i)}\bar{\partial} h$, we get
\begin{lem} \label{induce}
$\pi_2^*(\gamma)\in H^0(X, \mathcal K^{\otimes m})$ if and only if $\bar{\partial} \gamma+m\beta \wedge \gamma=0$.
\end{lem}
\begin{proof}
Assume locally that $\gamma=f(w)dw^{\otimes m}$. Then $\pi_2^*(\gamma)=(\alpha)^{\otimes m}\wedge \gamma=f(\alpha\wedge dw)^{\otimes m}$. We have
\begin{align*} \bar{\partial}_J (\alpha\wedge dw)&=(d\alpha\wedge dw)^{2,1}\\ &=\frac{i}{2}\frac{\partial h}{\partial \bar{w}}(1-\frac{2h(i+h)}{1+h^2})d\bar{w}\wedge \alpha\wedge dw\\ &=\frac{-i(h+i)}{2(h-i)}\frac{\partial h}{\partial \bar{w}}d\bar{w}\wedge \alpha\wedge dw.\end{align*}
Let $b=\frac{-i(h+i)}{2(h-i)}\frac{\partial h}{\partial \bar{w}}$ and $\beta=\frac{-i(h+i)}{2(h-i)}\bar{\partial} h$. As $f$ depends only on $w$, we have $$\bar{\partial}_J(\pi_2^*(\gamma))= (\frac{\partial f}{\partial \bar{w}}+mb f)d\bar{w}(\alpha\wedge dw)^{\otimes m}.$$ So $\bar{\partial}_J(\pi_2^*(\gamma))=0$ is equivalent to $\frac{\partial f}{\partial \bar{w}}+mb f=0$ which gives $\bar{\partial} \gamma+m\beta \wedge \gamma=0$.
\end{proof}
Denote $$H_h^0(S, \mathcal K_S^{\otimes m})=\{\gamma\in \Gamma(S, \mathcal K_S^{\otimes m}), \bar{\partial} \gamma+m\beta \wedge \gamma=0\}.$$ When $h=0$, then $\beta=0$ and the group is the ordinary holomorphic pluricanonical section group of $\mathcal K_S^{\otimes m}$. From Lemma \ref{induce}, we get an injective map $$\pi_2^*:H_h^0(S, \mathcal K_S^{\otimes m})\longrightarrow H^0(X, \mathcal K^{\otimes m}).$$ We can compute the dimension of $H_h^0(S, \mathcal K_S^{\otimes m})$ explicitly when $m>1$. Notice that the operator $\bar{\partial}_h=\bar{\partial}+m\beta\wedge\_$ satisfies the Leibniz rule and then gives a deformed holomorphic structure of $\mathcal K_S^{\otimes}$ as $\dim S=1$. For the holomorphic line bundle $(\mathcal K_S^{\otimes m}, \bar{\partial}_h)$, $$\deg(\mathcal K_S^{\otimes m}, \bar{\partial}_h)=\deg(\mathcal K_S^{\otimes m}, \bar{\partial})=2m(g-1)$$ by the deformation invariance of $c_1$. Also,
$$H^0(S, \mathcal K_S\otimes (m\mathcal K_S, \bar{\partial}_h)^*)=0$$ for $m>1$ since the degree is negative.
Applying the Riemann-Roch formula to $(\mathcal K_S^{\otimes m}, \bar{\partial}_h)$, we have
\begin{align} \dim H_h^0(S, \mathcal K_S^{\otimes m})&=\dim H_h^0(S, \mathcal K_S^{\otimes m})-\dim H^0(S, \mathcal K_S\otimes (\mathcal K_S^{\otimes m}, \bar{\partial}_h)^*) \notag \\
&=\deg(\mathcal K_S^{\otimes m}, \bar{\partial}_h)-g+1 \notag \\
&=(2m-1)(g-1) \label{RR} \end{align}
for $m>1$. When $m=1$, as $\deg(\mathcal K_S\otimes (\mathcal K_S, \bar{\partial}_h)^*)=0$, we have $\dim H^0(S, \mathcal K_S\otimes (\mathcal K_S, \bar{\partial}_h)^*)\leq 1$. Applying the Riemann-Roch formula we obtain $$g-1\leq \dim H_h^0(S, \mathcal K_S)\leq g.$$
Next, we show that $\pi_2^*$ is surjective.
\begin{lem} \label{pull}
For any $s\in H^0(X, \mathcal K^{\otimes m})$, $s=\pi_2^*(\gamma)$ for some $\gamma\in H_h^0(S, \mathcal K_S^{\otimes m})$.
\end{lem}
\begin{proof} We offer two different proofs. The first follows from direct calculation. The second proof applies the results of intersection theory built in \cite{Z2} which can be generalized to other cases (Theorem \ref{ellk=1} below).
Assume that $s=g(\alpha\wedge dw)^m$ locally. As $\bar{\partial}\alpha=\beta\wedge\alpha$, we have $\bar{\partial}_m s=(\bar{\partial}g+mg\beta)(\alpha\wedge dw)^m$. So $\bar{\partial}g+mg\beta=0$, which is equivalent to
\begin{align} \bar{V}(g)=0 \label{24} \\ (\frac{\partial g}{\partial \bar{w}}+mb g)d\bar{w}=0 \end{align}
Using the same technique in example 6.1, from (\ref{24}) we get that $g$ is independent of $x, y$. So we can define $\gamma=g(dw)^{\otimes m}\in H_h^0(S, \mathcal K_S^{\otimes m})$, and $s=\pi_2^*(\gamma)$.\vspace{.2cm}
The second approach is more topological. Define a deformation $$J_t(\frac{\partial}{\partial x})=-(th)\frac{\partial}{\partial x}+\frac{\partial}{\partial y}, J_t(\frac{\partial}{\partial y})=-(1+t^2h^2)\frac{\partial}{\partial x}+(th)\frac{\partial}{\partial y}$$ $$J_t|_{TS}=J_S, 0\leq t\leq 1.$$ Then $J_1=J$ and $J_0$ is the product complex structure on $X$. By the homotopy invariance of the Chern classes, we have $c_1(\mathcal K_J)=c_1(\mathcal K_{J_0})=(2g-2)[T^2]$, where $[T^2]$ is the cohomology class of the fiber of $\pi_2$. Also, each fiber $T^2$ is a $J$-holomorphic curve by definition.
Let $z_0=(t_0,w_0)$ be any point in $X$ where $t_0\in T^2, w_0\in S$ and $s\in H^0(X, \mathcal K^{\otimes m})$ a nontrivial section. First assume that $s(z_0)=0$. By Corollary \ref{holo}, $s$ induces a holomorphic map. Therefore, $s^{-1}(0)$ supports a pseudoholomorphic 1-subvariety in $X$ (Corollary 1.3 in \cite{Z2}). By the positive intersection of pseudoholomorphic curves (see \cite{Z2}), either $T^2\times \{w_0\}\subset s^{-1}(0)$ or $T^2\times \{w_0\}$ has positive intersection with $s^{-1}(0)$. As $$[s^{-1}(0)]=m\cdot c_1(\mathcal K_J)=m(2g-2)[T^2]$$ and $[T^2]\cdot [T^2]=0$, the latter case cannot be possible. So $s|_{T^2\times \{w_0\}}=0$.
Next, assume that $s(z_0)\neq 0$.
Denote $H^1_h(S, \mathcal K_S^{\otimes m}-\{w_0\})$ the first sheaf cohomology group of the tensor bundle $(\mathcal K_S^{\otimes m}, \bar{\partial}_h)\otimes (-\{w_0\})$. By the Kodaira vanishing theorem, when $m>1$, $$H^1_h(S, \mathcal K_S^{\otimes m}-\{w_0\})=0$$ as $\deg(\mathcal K_S^{\otimes (m-1)}, \bar{\partial}_h)\geq 2$. From the exact sequence $$0\longrightarrow \mathcal K_S^{\otimes m}-\{w_0\}\longrightarrow \mathcal K_S^{\otimes m}\longrightarrow {\mathcal K_S^{\otimes m}}|_{w_0}\longrightarrow 0,$$ we get the exact sequence of cohomology groups (\cite{GH}) : $$0\longrightarrow H_h^0(S, \mathcal K_S^{\otimes m}-\{w_0\})\longrightarrow H^0_h(S, \mathcal K_S^{\otimes m})\longrightarrow {\mathcal K_S^{\otimes m}}|_{w_0}\longrightarrow H^1_h(S, \mathcal K_S^{\otimes m}-\{w_0\})=0.$$ Therefore,
there is a $\tilde{\gamma}\in H_h^0(S, \mathcal K_S^{\otimes m})$ such that $\tilde{\gamma}(w_0)\neq 0$ when $m>1$. Then $\pi_2^*(\tilde{\gamma})(z_0)\neq 0$. Since $s(z_0)\neq 0$, there is some $k\neq 0$ such that $(s-k\pi_2^*(\tilde{\gamma}))(z_0)=0.$ As $s-k\pi_2^*(\tilde{\gamma})\in H^0(X, \mathcal K^{\otimes m}),$ by the same argument as in the first case, $$(s-k\pi_2^*(\tilde{\gamma}))|_{T^2\times \{w_0\}}=0.$$ So $s=k\pi_2^*(\tilde{\gamma})$ on $T^2\times \{w_0\}$.
Therefore, in either case, $s$ is constant on the fiber of $\pi_2$. Then we can push down $s$ through $\pi_2$ to get a section $\gamma\in \Gamma(S, \mathcal K_S^{\otimes m})$ such that $s=\pi_2^*(\gamma)$. By Lemma \ref{induce}, $\gamma\in H_h^0(S, \mathcal K_S^{\otimes m})$.
\end{proof}
Combining Lemma \ref{induce}, Lemma \ref{pull} and (\ref{RR}), we have
\begin{prop} \label{4}
$\pi_2^*:H_h^0(S, \mathcal K_S^{\otimes m})\longrightarrow H^0(X,\mathcal K^{\otimes m})$ is an isomorphism. Therefore, $P_m(X,J)=\dim H_h^0(S, \mathcal K_S^{\otimes m})=(2m-1)(g-1)$ for $m>1$, $g-1\leq P_1(X,J)\leq g$ and $\kappa^J(X)=1$.
\end{prop}
To compute the irregularity of $X$, assume that $\tau \in H^0(X, \Omega_X)$. Locally write $\tau=g_1\alpha+g_2dw$. From $\bar{\partial}\tau=0$ we get \begin{align} \bar{\partial}g_1+g_1\beta=0 \label{26} \\ \bar{\partial}g_2=0. \label{27}\end{align} From (\ref{26}) we get that \begin{align} \bar{V}(g_1)=0 \label{027}\\ \frac{\partial g_1}{\partial\bar{w}}+bg_1=0.\label{28} \end{align} Then (\ref{027}) gives that $g_1$ is independent of $x,y$ as before. (\ref{28}) can be interpreted as follows. The $\bar{\partial}_h=\bar{\partial}+\beta\wedge\_$ also induces a deformed complex structure on the trivial line bundle. Define $H_h^0(S, \mathcal O)=\{g\in C^{\infty}(S),\bar{\partial}_h g=0\}.$ Then (\ref{28}) is equivalent to $g_1\in H_h^0(S, \mathcal O)$. As $\deg \mathcal O=0$, we have $\dim H_h^0(S, \mathcal O)\leq 1$. From (\ref{27}) we get that $$\bar{V}g_2=0, \frac{\partial g_2}{\partial\bar{w}}=0.$$ which implies that $g_2$ is constant. Therefore $\tau=g_1\alpha+cdw$, with $g_1\in H_h^0(S, \mathcal O)$. As $h^{1,0}(S)=g$, we obtain $$g\leq h^{1,0}(X)\leq g+1,$$
The case $h^{1,0}(X)= g+1$ corresponds to $\dim H_h^0(S, \mathcal O)=1$ which implies that $(\mathcal O, \bar{\partial}_h)$ is holomorphic trivial. The case $h^{1,0}(X)= g$ corresponds to $\dim H_h^0(S, \mathcal O)=0$.\\
We can generalize the calculation to the case where $X$ admits a smooth pseudoholomorphic elliptic fibration.
\begin{thm}\label{ellk=1}
If $(X^4, J)$ admits a smooth pseudoholomorphic elliptic fibration over a Riemann surface of genus greater than $1$ with $J$ tamed, then $\kappa^J=1$.
\end{thm}
\begin{proof}
Let $\pi: X\rightarrow S$ be the pseudoholomorphic elliptic fibration. By \cite{T96}, the canonical class $K$ is represented by $J$-holomorphic $1$-subvariety $\Theta$. For the fiber class $T$, we have $T\cdot T=0$. Hence $K\cdot T=0$ by adjunction formula. By positivity of intersection, any component of $\Theta$ is contained in a fiber. Since each fiber is smooth, we have $K=bT$. On the other hand, any section of $\mathcal K_X$ pushed down to a section of $\mathcal K_S$ by integrate out the fiber. Hence $K=(2g-2)T$.
In other words, as complex bundles, $\pi^*(\mathcal K_S^{\otimes m})=\mathcal K_X^{\otimes m}$. We notice that $\bar{\partial}_m$ maps $\pi^*\Gamma(S, \mathcal K_S^{\otimes m})$ to $\pi^*\Gamma(S, \mathcal K_S^{\otimes m}\otimes T^*S)$. We may thus write $\bar{\partial}_m\pi^*(\gamma)=\pi^*(\bar{\partial}_{\pi}(\gamma))$, where $\bar{\partial}_{\pi}$ is an operator mapping $\Gamma(S, \mathcal K_S^{\otimes m})$ to $\Gamma(S, \mathcal K_S^{\otimes m}\otimes T^*S)$. Hence, for any smooth function $f$ on $S$, by the Leibniz rule for $\bar{\partial}_m$, we have $$\pi^*(\bar{\partial}_{\pi}(f\gamma))=\bar{\partial}_m\pi^*(f\gamma)=\bar{\partial}\pi^*f\wedge\pi^*\gamma+\pi^*f\cdot \bar{\partial}_m\pi^*\gamma=\pi^*(\bar{\partial}_{\pi}f\wedge\gamma+f\bar{\partial}_{\pi}\gamma).$$
That is to say, $\bar{\partial}_{\pi}$ also satisfies the Leibniz rule and hence is a pseudoholomorphic structure on $\mathcal K_S^{\otimes m}$. Since $S$ is a Riemann surface, it defines a holomorphic structure on $\mathcal K_S^{\otimes m}$. To summarize, $\pi^*(\gamma)\in H^0(X, \mathcal K^{\otimes m})$ if and only if $\gamma\in H^0_{\pi}(S, \mathcal K_S^{\otimes m})$, where $H^0_{\pi}(S, \mathcal K_S^{\otimes m})=\{\gamma\in \Gamma(S, \mathcal K_S^{\otimes m}), \bar{\partial}_{\pi} \gamma=0\}.$
Since any section $s\in H^0(X, \mathcal K^{\otimes m})$ would have zero locus a $J$-holomorphic $1$-subvariety in class $mK=m(2g-2)T$, Lemma \ref{pull} (or the argument in the first paragraph) still applies and we know any section $s\in H^0(X, \mathcal K^{\otimes m})$ is of the form $\pi^*(\gamma)$ for some $\gamma\in H^0_{\pi}(S, \mathcal K_S^{\otimes m})$.
Therefore, $P_m(X,J)=\dim H_{\pi}^0(S, \mathcal K_S^{\otimes m})=(2m-1)(g-1)$ for $m>1$ and $\kappa^J(X)=1$.
\end{proof}
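The dimension count at the end of the proof can be sanity-checked numerically. Below is a minimal sketch (the function name is ours) encoding $\dim H^0(S,\mathcal K_S^{\otimes m})=(2m-1)(g-1)$ for a genus-$g$ curve with $g>1$ and $m>1$; the linear growth in $m$ is precisely the statement $\kappa^J=1$.

```python
def plurigenus(g: int, m: int) -> int:
    """dim H^0(S, K_S^{otimes m}) for a genus-g Riemann surface, g > 1, m > 1,
    by Riemann-Roch: (2m - 1)(g - 1)."""
    assert g > 1 and m > 1
    return (2 * m - 1) * (g - 1)
```

Since the increment $P_{m+1}-P_m=2(g-1)$ is a nonzero constant, $P_m$ grows like $m^1$, matching $\kappa^J(X)=1$.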
We remark that the only place we use tameness is that it guarantees the existence of pseudoholomorphic $1$-subvariety in the (pluri)canonical class.
In fact, the examples in Section \ref{kt} (as well as Section \ref{T4}) are smooth pseudoholomorphic elliptic fibrations over $T^2$. In these cases, the $(\mathcal K^{\otimes m}, \bar{\partial}_{\pi})$ are holomorphic line bundles of degree $0$. These bundles are holomorphically trivial if and only if $P_m=1$.
With those $4$-manifolds with $\kappa^J=1$ in hand, we can construct more nonintegrable almost complex manifolds with large Kodaira dimensions. First, we derive the K\"unneth formula for pluricanonical sections of almost complex manifolds. For two almost complex manifolds $(X_1, J_1)$ and $(X_2, J_2)$, the product $J_1\times J_2$ induces an almost complex structure on $X_1\times X_2$. We have
\begin{prop}\label{Kun}
$P_m(X_1\times X_2, J_1\times J_2)=P_m(X_1, J_1)P_m(X_2, J_2)$ for $m\geq 1$.
\end{prop}
\begin{proof}
We apply the harmonic theory in Section 3 to derive the formula, similar to the argument in the integrable case (see \cite{GH}). Let $$p_1: X_1\times X_2\longrightarrow X_1, p_2: X_1\times X_2\longrightarrow X_2$$ be the two projections. We have $\mathcal K_{X_1\times X_2}=p_1^*(\mathcal K_{X_1})\wedge p_2^*(\mathcal K_{X_2})$. Choose Hermitian metrics $g_1$ and $g_2$ on $X_1, X_2$ respectively. Then $g_1\times g_2$ gives a Hermitian metric on $X_1\times X_2$. A form $\phi\in \Gamma(X_1\times X_2, \mathcal K_{X_1\times X_2})$ is called decomposable if $\phi=p_1^*(\phi_1)\wedge p_2^*(\phi_2)$. Arguments similar to those on page 104 of \cite{GH} show that the decomposable smooth forms are dense in the Hilbert space $L^2(X_1\times X_2, \mathcal K_{X_1\times X_2})$.
Denote by $\Delta_{J_1}, \Delta_{J_2}$ the Laplacian operators associated to $\bar{\partial}_{J_1}, \bar{\partial}_{J_2}$ as given in (\ref{lap}). By the definition, they are both semi-positive operators. Let $\varphi_1, \varphi_2, \cdots,$ be the eigenforms of $\Delta_{J_1}$ in $\Gamma(X_1, \mathcal K_{X_1})$ with eigenvalues $\lambda_1, \lambda_2, \cdots$ and $\psi_1, \psi_2, \cdots,$ be the eigenforms of $\Delta_{J_2}$ in $\Gamma(X_2, \mathcal K_{X_2})$ with eigenvalues $\mu_1, \mu_2, \cdots$. Then $\lambda_i\geq 0, \mu_i\geq 0$ for any $i$. Let $\Delta_{J_1\times J_2}$ be the Laplacian operator associated to $J_1\times J_2$ and $g_1\times g_2$. From the definition, we directly get $\Delta_{J_1\times J_2}=\Delta_{J_1}+\Delta_{J_2}.$ Also, $$\Delta_{J_1\times J_2}(p_1^*(\varphi_i)\wedge p_2^*(\psi_j))=(\lambda_i+\mu_j)p_1^*(\varphi_i)\wedge p_2^*(\psi_j).$$ So we have $\Delta_{J_1\times J_2}(p_1^*(\varphi_i)\wedge p_2^*(\psi_j))=0$ if and only if $\lambda_i=\mu_j=0$. As $\{\varphi_i\}$, $\{\psi_i\}$ give Hilbert bases for $L^2(X_1, \mathcal K_{X_1})$ and $L^2(X_2, \mathcal K_{X_2})$ respectively, $\{p_1^*(\varphi_i)\wedge p_2^*(\psi_j)\}$ gives a Hilbert basis of $L^2(X_1\times X_2, \mathcal K_{X_1\times X_2})$ by the denseness of decomposable forms. Therefore, we get $\ker(\Delta_{J_1\times J_2})=\langle p_1^*(\varphi_i)\wedge p_2^*(\psi_j)\ |\ \lambda_i=\mu_j=0\rangle$. Namely,
$$H^0(X_1\times X_2,\mathcal K_{X_1\times X_2})=H^0(X_1, \mathcal K_{X_1})\otimes H^0(X_2, \mathcal K_{X_2}).$$
This shows that $P_1(X_1\times X_2, J_1\times J_2)=P_1(X_1, J_1)P_1(X_2, J_2)$. A similar argument gives that $$H^0(X_1\times X_2,\mathcal K_{X_1\times X_2}^{\otimes m})=H^0(X_1, \mathcal K_{X_1}^{\otimes m})\otimes H^0(X_2, \mathcal K_{X_2}^{\otimes m})$$ and $P_m(X_1\times X_2, J_1\times J_2)=P_m(X_1, J_1)P_m(X_2, J_2)$ for any $m>1$.
\end{proof}
From the definition of Kodaira dimension we have
\begin{cor}
$\kappa^{J_1\times J_2}(X_1\times X_2)=\kappa^{J_1}(X_1)+\kappa^{J_2}(X_2)$ for any two compact almost complex manifolds $(X_1, J_1), (X_2, J_2)$.
\end{cor}
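The additivity of the corollary can be illustrated numerically. The sketch below is a toy model (function and example names are ours, and the growth-exponent heuristic stands in for the actual definition of $\kappa^J$): since plurigenera multiply, the polynomial growth rates of $m\mapsto P_m$ add.

```python
import math

def growth_degree(pm, m=10**6):
    """Growth exponent of m -> P_m (the Kodaira dimension in these model
    cases): -inf if P_m vanishes, else round(log P_m / log m)."""
    v = pm(m)
    return -math.inf if v == 0 else round(math.log(v) / math.log(m))

def p_kappa1(m):   # e.g. an elliptic fibration over a genus-2 curve
    return 2 * m - 1

def p_kappa0(m):   # e.g. a trivial-canonical-bundle case
    return 1

def p_product(m):  # plurigenera of the product, by the Kunneth formula
    return p_kappa1(m) * p_kappa0(m)
```

The growth degree of `p_product` is $1 = 1 + 0$, and squaring `p_kappa1` gives growth degree $2$, mirroring $\kappa^{J_1\times J_2}=\kappa^{J_1}+\kappa^{J_2}$.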
\begin{thm}\label{niKod}
There are examples of compact $2n$-dimensional nonintegrable almost complex manifolds whose Kodaira dimension takes any value in $\{-\infty, 0, 1, \cdots, n-1\}$ for $n\geq 2$.
\end{thm}
\begin{proof} By taking direct products of the Kodaira-Thurston surface with copies of the $2$-torus $T^2$, we can get compact $2n$-manifolds with nonintegrable almost complex structure and $\kappa^J=-\infty$ or $0$.
By taking direct products of the 4-manifold $X=T^2\times S$ as in Proposition \ref{4} with copies of the $2$-torus $T^2$ or of the Riemann surface $\Sigma$ with $g>1$, we get compact $2n$-manifolds with nonintegrable almost complex structures and $\kappa^J=1, 2, \cdots, n-1$.
\end{proof}
\section{The six sphere}
By a result of Borel and Serre \cite{BS}, the only spheres which admit almost complex structures are $S^2$ and $S^6$. The standard way to construct an almost complex structure on $S^6$ is to use the cross product of $\mathbb R^7$ applied to the tangent spaces of $S^6$. In this section, we will compute the Hodge numbers, the plurigenera and the Kodaira dimension of the standard almost complex structure. Our method is to consider $S^6$ as a homogeneous space of the exceptional Lie group $G_2$ and apply an explicit real representation of the Lie algebra $\mathfrak g_2$.
First, we review some definitions following \cite{Br3}.
Let $e_1, e_2, \cdots,e_7$ be the standard basis of $\mathbb R^7$ and $e^1, e^2,\cdots, e^7$ be the dual basis. Denote $e^{ijk}$ the wedge product $e^i\wedge e^j\wedge e^k$ and define $$\Phi=e^{123}+e^{145}+e^{167}+e^{246}-e^{257}-e^{347}-e^{356}$$
Then $\Phi$ induces a unique bilinear map, the cross product $\times : \mathbb R^7\times \mathbb R^7\longrightarrow \mathbb R^7$, defined by $(u\times v)\cdot w=\Phi(u,v,w)$, where $\cdot$ is the Euclidean metric on $\mathbb R^7$. It follows that $u\times v=-v\times u$ and \begin{align}\label{c1} (u\times v)\cdot u=0.\end{align} Also, further discussion (see \cite{Br3}) shows that \begin{align}\label{c2} u\times (u\times v)=(u\cdot v)u-(u\cdot u)v.\end{align} We remark that the cross product $\times$ differs from the cross product induced by Cayley's table of octonion multiplication, though they are isomorphic. For example, here $e_1\times e_6=e_7$.
The six sphere is $S^6=\{u\in \mathbb R^7\ |\ u\cdot u=1\}.$ The tangent space at $u\in S^6$ is $T_uS^6=\{v\in \mathbb R^7| u\cdot v=0\}$. Let $J_u=u\times \_$ be the cross product operator of $u$. Then by (\ref{c1}) and (\ref{c2}), $J_u(T_uS^6)\subset T_uS^6$ and $J_u^2=-id$ on $T_uS^6$. In particular, when $u=e_1$, we have
\begin{align}\notag &J_{e_1}(e_2)=e_3, \quad J_{e_1}(e_3)=-e_2, \quad J_{e_1}(e_4)=e_5,\\ &J_{e_1}(e_5)=-e_4, \quad J_{e_1}(e_6)=e_7, \quad J_{e_1}(e_7)=-e_6.\label{Je}\end{align} Let $\mathsf J=\{J_u, u\in S^6\}$. Then $\mathsf J$ gives an almost complex structure on $S^6$ which is the standard almost complex structure we consider. It is shown \cite{EF}\cite{EL} that $\mathsf J$ is not integrable since the Nijenhuis tensor of $\mathsf J$ is nowhere vanishing.
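As a sanity check (the script is ours, not from the paper), one can rebuild the cross product directly from the seven terms of $\Phi$ via $(u\times v)\cdot e_k=\Phi(u,v,e_k)$ and verify (\ref{c1}), (\ref{c2}), the value $e_1\times e_6=e_7$, and the table (\ref{Je}):

```python
from itertools import permutations

# The seven terms of Phi = e^123 + e^145 + e^167 + e^246 - e^257 - e^347 - e^356.
TERMS = {(1, 2, 3): 1, (1, 4, 5): 1, (1, 6, 7): 1,
         (2, 4, 6): 1, (2, 5, 7): -1, (3, 4, 7): -1, (3, 5, 6): -1}

def _sign(pos):
    """Sign of a permutation given by its position indices."""
    inv = sum(pos[a] > pos[b] for a in range(3) for b in range(a + 1, 3))
    return -1 if inv % 2 else 1

# Totally skew-symmetric coefficients eps_{ijk}, Phi = (1/6) eps_{ijk} e^{ijk}.
EPS = {p: s * _sign(tuple(tri.index(x) for x in p))
       for tri, s in TERMS.items() for p in permutations(tri)}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    """Cross product on R^7: (u x v)_k = sum_{i,j} eps_{ijk} u_i v_j."""
    w = [0] * 7
    for (i, j, k), s in EPS.items():
        w[k - 1] += s * u[i - 1] * v[j - 1]
    return w

def e(i):
    """i-th standard basis vector of R^7, 1-indexed."""
    v = [0] * 7
    v[i - 1] = 1
    return v
```

For instance `cross(e(1), e(6))` returns $e_7$, matching the remark in the text.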
On the other hand, denote \begin{align} \label{G2} G_2=\{g\in GL(7, \mathbb R)|\ g^*(\Phi)=\Phi\}.\end{align} This $G_2$ is the simple Lie group of type $G_2$: it is compact, connected and simply connected, of real dimension 14 (see \cite{Br}). $G_2$ preserves the inner product $\cdot$ and the cross product $\times$, and acts transitively on $S^6$. Let $G_2\times S^6\longrightarrow S^6$ be this action and $p:G_2\longrightarrow S^6$ the induced map given by $p(g)=g(e_1)$. The map $p$ is a submersion with $p^{-1}(e_1)=\{g\in G_2|\ g(e_1)=e_1\}\cong SU(3)$. This makes $G_2$ into a principal right $SU(3)$-bundle over $S^6$.
Next, we give the explicit representation of the Lie algebra $\mathfrak g_2$ of $G_2$ and define a left-invariant almost complex structure on it. Let $\epsilon_{ijk}$ be the totally skew-symmetric coefficients such that $\Phi=\frac{1}{6}\epsilon_{ijk}e^{ijk}$. For example, $\epsilon_{123}=-\epsilon_{132}=\epsilon_{231}=1$. By the characterization in \cite{Br2} (section 2.5 there), a skew-symmetric matrix $A=(a_{jk})$ is in $\mathfrak g_2$ if and only if $\sum_{j,k=1}^7\epsilon_{ijk}a_{jk}=0$ for all $1\leq i\leq 7$.
Let $\vec{x}=(x_1,x_2,x_3,x_4,x_5,x_6)\in \mathbb R^6$, $\vec{y}=(y_1,y_2,\cdots,y_8)\in \mathbb R^8$. Direct calculation gives that a general element $A$ in $\mathfrak g_2\subset gl(7,\mathbb R)$ has the following form
\begin{align}
A=\{\vec{x},\vec{y}\}:=
\begin{pmatrix} 0&x_1&-x_2&x_3&-x_4&x_5&-x_6\\
-x_1&0&y_1&-x_6+y_4&x_5+y_3&x_4-y_6&-x_3-y_5\\
x_2&-y_1&0&-y_3&y_4&y_5&-y_6\\
-x_3&x_6-y_4&y_3&0&-y_1+y_2&-x_2-y_8&x_1-y_7\\
x_4&-x_5-y_3&-y_4&y_1-y_2&0&y_7&-y_8\\
-x_5&-x_4+y_6&-y_5&x_2+y_8&-y_7&0&-y_2\\
x_6&x_3+y_5&y_6&-x_1+y_7&y_8&y_2&0
\end{pmatrix}\notag
\end{align}
Here $\{\cdot,\cdot\}$ denotes an operation $\{\cdot,\cdot\}: \mathbb R^6\times \mathbb R^8\longrightarrow \mathfrak g_2$ whose definition is stated above. The above expression is chosen so that it suits our later discussion on $S^6$.
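The displayed matrix can be checked mechanically against the characterization of $\mathfrak g_2$ above. The sketch below (our own encoding; helper names are ours) builds $A=\{\vec x,\vec y\}$ and verifies skew-symmetry together with the condition $\sum_{j,k}\epsilon_{ijk}a_{jk}=0$, with $\epsilon_{ijk}$ rebuilt from the seven terms of $\Phi$:

```python
from itertools import permutations

def g2_matrix(x, y):
    """The element {x, y} of g_2, following the displayed 7x7 matrix."""
    x1, x2, x3, x4, x5, x6 = x
    y1, y2, y3, y4, y5, y6, y7, y8 = y
    return [
        [0,    x1,       -x2,  x3,       -x4,       x5,       -x6],
        [-x1,  0,         y1, -x6 + y4,   x5 + y3,  x4 - y6,  -x3 - y5],
        [x2,  -y1,         0, -y3,        y4,       y5,       -y6],
        [-x3,  x6 - y4,   y3,  0,        -y1 + y2, -x2 - y8,   x1 - y7],
        [x4,  -x5 - y3,  -y4,  y1 - y2,   0,        y7,       -y8],
        [-x5, -x4 + y6,  -y5,  x2 + y8,  -y7,       0,        -y2],
        [x6,   x3 + y5,   y6, -x1 + y7,   y8,       y2,        0],
    ]

TERMS = {(1, 2, 3): 1, (1, 4, 5): 1, (1, 6, 7): 1,
         (2, 4, 6): 1, (2, 5, 7): -1, (3, 4, 7): -1, (3, 5, 6): -1}

def _sign(pos):
    inv = sum(pos[a] > pos[b] for a in range(3) for b in range(a + 1, 3))
    return -1 if inv % 2 else 1

EPS = {p: s * _sign(tuple(tri.index(x) for x in p))
       for tri, s in TERMS.items() for p in permutations(tri)}

def in_g2(A):
    """Skew-symmetry plus sum_{j,k} eps_{ijk} a_{jk} = 0 for all i."""
    skew = all(A[i][j] == -A[j][i] for i in range(7) for j in range(7))
    trace_cond = all(
        sum(EPS.get((i, j, k), 0) * A[j - 1][k - 1]
            for j in range(1, 8) for k in range(1, 8)) == 0
        for i in range(1, 8))
    return skew and trace_cond
```

Since both conditions are linear in $(\vec x,\vec y)$, checking a generic integer pair confirms the whole $14$-dimensional family.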
Denote $\vec{\alpha}_i, i=1,\cdots,6,$ the $i$-th unit vector in $\mathbb R^6$ and $\vec{\beta}_j, j=1,\cdots,8,$ the $j$-th unit vector in $\mathbb R^8$. Define
$f_i=\{\vec{\alpha}_i,\vec{0}\}, h_j=\{\vec{0},\vec{\beta}_j\}$. For example, $f_1=\{(1,0,0,0,0,0),\vec{0}\}, h_2=\{\vec{0},(0,1,0,0,0,0,0,0)\}$. Then $\{f_i, h_j;1\leq i\leq 6,1\leq j\leq 8\}$ forms a basis of $\mathfrak g_2$. The Lie brackets between $f_i$ and $h_j$ are computed in the appendix. Let $$\mathfrak m=span\{f_1,\cdots,f_6\},\hspace{.8cm} \mathfrak h=span\{h_1,\cdots,h_8\}.$$
Then $\mathfrak g_2=\mathfrak m\oplus \mathfrak h,\ [\mathfrak h,\mathfrak h]\subset \mathfrak h$ and $\mathfrak h\cong su(3).$ A Cartan subalgebra of $\mathfrak g_2$ is given by the span of $h_1, h_2$. The corresponding decomposition of $\mathfrak g_2$ into root spaces can also be calculated. For the projection $p:G_2\longrightarrow S^6$, we have $$\ker \ dp=\mathfrak h,\ \ \ \ dp(f_i)=(-1)^ie_{i+1}.$$
Define an almost complex structure $\tilde{J}$ on $\mathfrak g_2$ by \begin{align*}
&\tilde{J}(f_1)=-f_2, \hspace{.4cm} \tilde{J}(f_3)=-f_4, \hspace{.4cm}\tilde{J}(f_5)=-f_6,\\
&\tilde{J}(h_1)=-h_2,\ \ \tilde{J}(h_3)=-h_4, \ \ \tilde{J}(h_5)=-h_6,\ \ \ \tilde{J}(h_7)=-h_8.\end{align*}
$\tilde{J}$ induces a left invariant almost complex structure on $G_2$ which is still denoted by $\tilde{J}$. By (\ref{Je}), the following holds at $1_{G_2}$, \begin{align} \label{pseudo} dp\circ \tilde{J}=\mathsf{J}\circ dp.\end{align} Since both $\tilde{J}$ and $\mathsf{J}$ are $G_2$-invariant, (\ref{pseudo}) holds globally on $G_2$. In other words, $p$ is a $(\tilde{J},\mathsf{J})$-pseudoholomorphic map.
With the construction of $\tilde{J}$, we prove the following
\begin{thm} \label{sphere}
For the standard almost complex structure $\mathsf J$ on $S^6$, $h^{1,0}=h^{2,0}=h^{2,3}=h^{1,3}=0$, $P_m(S^6, \mathsf J)=1$ for any $m\geq 1$ and $\kappa^{\mathsf J}=0$.
\end{thm}
We would like to thank several people, including Huijun Fan, Valentino Tosatti, Jiaping Wang and Bo Yang, for encouraging us to carry out the calculation.
\begin{proof}
We first compute the plurigenera $P_m(S^6, \mathsf J)$. Denote by $(T^*S^6)^{1,0}$ the bundle of $(1,0)$-forms on $S^6$ and by $p^*$ the pull-back map on forms. As $p$ is $(\tilde{J},\mathsf{J})$-pseudoholomorphic, we have $p^*((T^*S^6)^{1,0})\subset (T^*G_2)^{1,0}=(\mathfrak g_2^*)^{1,0}$. Moreover, $p^*$ is injective since $p$ is a submersion.
Denote $\{f^i, h^j\}$ the basis in $\mathfrak g_2^*$, dual to $\{f_i,h_j\}$. Then $(\mathfrak g_2^*)^{1,0}$ is generated by $\{\phi^1,\cdots,\phi^7\}$, where
\begin{align*}&\phi^1=f^1-if^2, \ \ \ \phi^2=f^3-if^4,\ \ \ \phi^3=f^5-if^6, \\ & \phi^4=h^1-ih^2,\ \ \phi^5=h^3-ih^4,\ \ \phi^6=h^5-ih^6, \ \ \phi^7=h^7-ih^8.\end{align*}
As $[\mathfrak h,\mathfrak h]\subset \mathfrak h$, using the Lie brackets in the appendix, we get
\begin{align*}
df^1&=-f^2\wedge h^1-2f^3\wedge f^6+f^3\wedge h^4-2f^4\wedge f^5-f^4\wedge h^3-f^5\wedge h^6+f^6\wedge h^5,\\
df^2&=f^3\wedge h^3+f^4\wedge h^4-f^5\wedge h^5-f^6\wedge h^6+f^1\wedge h^1,\\
df^3&=-f^1\wedge h^4+2f^1\wedge f^6+f^2\wedge f^5-f^2\wedge h^3+f^4\wedge h^1-f^4\wedge h^2-f^5\wedge h^8+f^6\wedge h^7,\\
df^4&=f^1\wedge f^5+f^1\wedge h^3-f^2\wedge h^4-f^3\wedge h^1+f^3\wedge h^2-f^5\wedge h^7-f^6\wedge h^8,\\
df^5&=-f^1\wedge f^4+f^1\wedge h^6-f^2\wedge f^3+f^2\wedge h^5+f^3\wedge h^8+f^4\wedge h^7+ f^6\wedge h^2,\\
df^6&=-2f^1\wedge f^3-f^1\wedge h^5+f^2\wedge h^6-f^3\wedge h^7+f^4\wedge h^8-f^5\wedge h^2.
\end{align*}
Therefore, the definition of $\bar{\partial}$ gives \begin{align} \bar{\partial} \phi^1&=-\frac{i}{2}\phi^1\wedge \bar{\phi}^4-i\phi^2\wedge \bar{\phi}^5+i\phi^3\wedge \bar{\phi}^6,\notag \\
\bar{\partial} \phi^2&=-\frac{i}{2}\phi^1\wedge \bar{\phi}^3-\frac{1-i}{2}\phi^2\wedge \bar{\phi}^4+i\phi^3\wedge \bar{\phi}^7,\label{s6d} \\
\bar{\partial} \phi^3&=\frac{i}{2}\phi^1\wedge \bar{\phi}^2-\frac{i}{2}\phi^2\wedge \bar{\phi}^1+\frac{1}{2}\phi^3\wedge \bar{\phi}^4. \notag\end{align}
Then $$\bar{\partial} (\phi^1\wedge \phi^2\wedge \phi^3)=\bar{\partial}\phi^1\wedge \phi^2\wedge \phi^3- \phi^1\wedge \bar{\partial}\phi^2\wedge \phi^3+\phi^1\wedge \phi^2\wedge \bar{\partial}\phi^3=0.$$ By the arguments in \cite{Br3} (equation (2.11) in \cite{Br3}), $\phi^1\wedge \phi^2\wedge \phi^3$ induces a nowhere-vanishing $G_2$-invariant $(3,0)$-form on $S^6$, which we again denote by $\Phi$. As $p$ is pseudoholomorphic and $p^*$ is injective, $\bar{\partial}\Phi=0$.
If $s\in H^0(S^6, \mathcal K_{\mathsf J})$, then $s=f\Phi$, where $f$ is a smooth function on $S^6$. From $\bar{\partial}s=0$ we get that $\bar{\partial}f=0$. Since $S^6$ is compact, the maximum principle gives that $f$ is a constant. Therefore, $P_1(S^6, \mathsf J)=h^{3,0}=1$ with $\Phi$ being a generator. Similarly, we get $P_m(S^6, \mathsf J)=1$ for $m\geq 2$, with $\Phi^m$ being a generator of $H^0(S^6, \mathcal K_{\mathsf J}^{\otimes m})$. So $\kappa^{\mathsf J}=0$.
Next, we compute the Hodge numbers $h^{1,0}$ and $h^{2,0}$. Assume that $s\in H^{1,0}(S^6)$. Then $p^*s$ lies in the span of $\{\phi^1,\phi^2, \phi^3\}$, satisfying $\bar{\partial}(p^*s)=0$. Let $p^*s=k_1\phi^1+k_2\phi^2+k_3\phi^3$, where $k_i$ are smooth functions on $G_2$. From (\ref{s6d}) we get that
\begin{align}
\bar{\partial}k_3&=ik_1\bar{\phi}^6+ik_2\bar{\phi}^7+\frac{1}{2}k_3\bar{\phi}^4. \label{k13}
\end{align}
Let $X_i, 1\leq i\leq 7,$ be the complex vector fields dual to the $\phi^i$. Namely, $X_1=\frac{1}{2}(f_1+if_2), \cdots, X_7=\frac{1}{2}(h_7+ih_8)$. From the Appendix, the following Lie brackets hold
\begin{align}
[\bar{X}_1,\bar{X}_2]=-iX_3+\frac{i}{2}\bar{X}_5, \quad [\bar{X}_3,\bar{X}_5]=\frac{i}{2}h_1,\quad [X_3,\bar{X}_3]=\frac{i}{2}h_2.\label{X11}
\end{align}
Equation (\ref{k13}) gives us that $$\bar{X}_1(k_3)=\bar{X}_2(k_3)=\bar{X}_3(k_3)=\bar{X}_5(k_3)=0.$$
From (\ref{X11}), we have $X_3(k_3)=0$ and $h_1(k_3)=0$. Then by the last relation in (\ref{X11}), $h_2(k_3)=0$. So $\bar{X}_4(k_3)=0$. Evaluating (\ref{k13}) on $\bar{X}_4$, we get that $k_3=0$. Then (\ref{k13}) directly gives that $k_1=k_2=0$. Therefore, $p^*s=0$. As $p^*$ is injective, we get $s=0$. Hence, $H^{1,0}(S^6)=0$ and $h^{1,0}=0$.
To calculate $h^{2,0}$, assume that $\sigma\in H^{2,0}(S^6)$. Then $p^*\sigma$ satisfies $\bar{\partial}(p^*\sigma)=0$. Let $p^*\sigma=l_1\phi^1\wedge\phi^2+l_2\phi^2\wedge\phi^3+l_3\phi^3\wedge\phi^1$, where $l_i$ are smooth functions on $G_2$. From (\ref{s6d}) we get
\begin{align*} \bar{\partial} (\phi^1\wedge \phi^2)&=\frac{1}{2}\phi^1\wedge\phi^2\wedge \bar{\phi}^4+i\phi^2\wedge \phi^3\wedge\bar{\phi}^6-i\phi^1\wedge \phi^3\wedge \bar{\phi}^7,\notag \\
\bar{\partial} (\phi^2\wedge \phi^3)&=\frac{i}{2}\phi^1\wedge \phi^3\wedge\bar{\phi}^3-\frac{i}{2}\phi^2\wedge\phi^3\wedge \bar{\phi}^4+\frac{i}{2}\phi^1\wedge \phi^2\wedge \bar{\phi}^2,\notag \\
\bar{\partial} (\phi^3\wedge \phi^1)&=-\frac{i}{2}\phi^1\wedge \phi^2\wedge\bar{\phi}^1+\frac{1-i}{2}\phi^1\wedge\phi^3\wedge \bar{\phi}^4-i\phi^2\wedge \phi^3\wedge \bar{\phi}^5. \end{align*}
So $\bar{\partial}(p^*\sigma)=0$ gives that
\begin{align}
\bar{\partial}l_2&=-il_1\bar{\phi}^6+\frac{i}{2}l_2\bar{\phi}^4+il_3\bar{\phi}^5. \label{l13}
\end{align}
Then $\bar{X}_1(l_2)=\bar{X}_2(l_2)=\bar{X}_3(l_2)=\bar{X}_7(l_2)=0$. By the Appendix, the following hold
\begin{align} &[\bar{X}_1,\bar{X}_7]=-\frac{i}{2}h_2,\hspace{1cm} [\bar{X}_2,\bar{X}_7]=i\bar{X}_6, \hspace{1cm} [\bar{X}_2,\bar{X}_6]=\frac{i}{2}(h_1+h_2).\label{h20} \end{align}
From (\ref{h20}), we have $h_1(l_2)=h_2(l_2)=0$. Then $\bar{X}_4(l_2)=0$. Substituting this back into (\ref{l13}), we derive $l_1=l_2=l_3=0$. So $p^*\sigma=0$. By the injectivity of $p^*$, $\sigma=0$. Therefore, $H^{2,0}(S^6)=0$ and $h^{2,0}=0$.
By the Serre duality (Proposition \ref{Serre}), we have $h^{1,3}(S^6)=h^{2,3}(S^6)=0$.
\end{proof}
On the other hand, for a hypothetical complex structure on $S^6$, $P_1=h^{3,0}=0$. The key point is that a $\bar{\partial}$-closed $(3,0)$-form is also $d$-closed on a complex $3$-fold.
\section{Appendix}
Direct calculation gives the following Lie brackets of $\mathfrak g_2$:
\begin{align*}&[f_1,f_2]=h_1+h_2,\hspace{.5cm}[f_1,f_3]=2f_6,\hspace{.5cm}[f_1,f_4]=f_5,\hspace{.5cm} [f_1,f_5]=-f_4,\\
&[f_1,f_6]=-2f_3,\hspace{.5cm}[f_1,h_1]=-(f_2-h_8),\hspace{.5cm}[f_1,h_2]=-h_8,\\
&[f_1,h_3]=-(f_4+h_6),\hspace{.5cm}[f_1,h_4]=f_3,\hspace{.5cm} [f_1,h_5]=f_6,\\
&[f_1,h_6]=-f_5+h_3,\hspace{.5cm}[f_1,h_7]=0,\hspace{.5cm}[f_1,h_8]=h_2\\
&[f_2,f_3]=f_5-h_3,\hspace{.5cm}[f_2,f_4]=-h_4,\hspace{.5cm}[f_2,f_5]=-f_3+h_5,\\
&[f_2,f_6]=h_6,\hspace{.5cm}[f_2,h_1]=f_1+h_7,\hspace{.5cm}[f_2,h_2]=-h_7,\\
&[f_2,h_3]=f_3-h_5,\hspace{.5cm}[f_2,h_4]=f_4,\hspace{.5cm}[f_2,h_5]=-f_5+h_3,\\
&[f_2,h_6]=-f_6,\hspace{.5cm}[f_2,h_7]=h_2,\hspace{.5cm}[f_2,h_8]=0\\
&[f_3,f_4]=h_2,\hspace{.5cm}[f_3,f_5]=h_8,\hspace{.5cm}[f_3,f_6]=2f_1,\hspace{.5cm}[f_3,h_1]=f_4+h_6,\\
&[f_3,h_2]=-f_4,\hspace{.5cm}[f_3,h_3]=-(f_2-h_8),\hspace{.5cm}[f_3,h_4]=-f_1,\hspace{.5cm}[f_3,h_5]=0,\\
&[f_3,h_6]=-(h_1+h_2),\hspace{.5cm}[f_3,h_7]=f_6,\hspace{.5cm}[f_3,h_8]=-f_5,
\end{align*}
\begin{align*}
&[f_4,f_5]=2(f_1+h_7),\hspace{.5cm}[f_4,f_6]=h_8,\hspace{.5cm}[f_4,h_1]=h_5-f_3,\\
&[f_4,h_2]=f_3,\hspace{.5cm}[f_4,h_3]=f_1+h_7,\hspace{.5cm}[f_4,h_4]=-f_2,\\
&[f_4,h_5]=-(h_1+h_2),\hspace{.5cm}[f_4,h_6]=0,\hspace{.5cm}[f_4,h_7]=-f_5,\hspace{.5cm}[f_4,h_8]=-f_6,\\
&[f_5,f_6]=-h_2,\hspace{.5cm}[f_5,h_1]=h_4,\hspace{.5cm}[f_5,h_2]=f_6,\\
&[f_5,h_3]=0,\hspace{.5cm}[f_5,h_4]=-h_1,\hspace{.5cm}[f_5,h_5]=f_2-h_8,\\
&[f_5,h_6]=f_1+h_7, \hspace{.5cm}[f_5,h_7]=f_4,\hspace{.5cm}[f_5,h_8]=f_3,\\
&[f_6,h_1]=h_3,\hspace{.5cm}[f_6,h_2]=-f_5,\hspace{.5cm}[f_6,h_3]=-h_1,\hspace{.5cm}[f_6,h_4]=0,\\
&[f_6,h_5]=-f_1,\hspace{.5cm}[f_6,h_6]=f_2,\hspace{.5cm}[f_6,h_7]=-f_3,\hspace{.5cm}[f_6,h_8]=f_4,\\
&[h_1,h_2]=0, \hspace{.5cm} [h_1,h_3]=-2h_4,\hspace{.5cm} [h_2,h_3]=h_4,\hspace{.5cm} [h_1,h_4]=2h_3,\\
&[h_2,h_4]=-h_3,\hspace{.5cm}[h_1,h_5]=[h_2,h_5]=-h_6, \hspace{.5cm} [h_1,h_6]=[h_2,h_6]=h_5,\\
&[h_1,h_7]=h_8,\hspace{.5cm} [h_2,h_7]=-2h_8,\hspace{.5cm} [h_1,h_8]=-h_7, \hspace{.5cm}[h_2,h_8]=2h_7.
\end{align*}
\section{Introduction}\label{sec:one}
Let $R$ be a polynomial ring over a field $\mathbb{K}$, and let $I$ be a
homogeneous ideal. Then the module $R/I$ admits a finite minimal graded
free resolution over $R$:
\begin{center}
$ \mathbb{F}: \mbox{ }\cdots \rightarrow \bigoplus\limits_{j \in
J_2} R(-d_{2,j}) \rightarrow \bigoplus\limits_{j \in J_1}
R(-d_{1,j}) \rightarrow R \rightarrow R/I\rightarrow 0.$
\end{center}
Many important numerical invariants of $I$ and the associated scheme can be
read off from the free resolution. For example, the {\em Hilbert
polynomial} is the polynomial $f(t) \in \mathbb{Q}[t]$ such that for all $m
\gg 0$, $\dim_\mathbb{K} (R/I)_m = f(m)$; if $f(t)$ has degree $n$ and lead
coefficient $d$, then the {\it degree} of $I$ is $n!d$. When one has an
explicit free resolution in hand, then it is possible to write down the
Hilbert polynomial, and hence the degree, in terms of the shifts $d_{i,j}$
which appear in the free resolution.
If $R/I$ is Cohen-Macaulay and has a {\em pure resolution}
\[ 0 \rightarrow R^{e_p}(-d_p)\cdots
\rightarrow R^{e_2}(-d_2) \rightarrow R^{e_1}(-d_1)\rightarrow R
\rightarrow R/I\rightarrow 0,
\]
then Huneke and Miller show in \cite{HM} that $\deg(I) = (\prod_{i=1}^p d_i)/p!$.
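As a toy check (helper name ours) of the Huneke-Miller formula, consider the Koszul resolution of a complete intersection cut out by $p$ forms of a common degree $a$: the resolution is pure with shifts $d_i = ia$, and $(\prod_i ia)/p! = a^p$, which is indeed the degree of such a complete intersection.

```python
from math import factorial

def pure_degree(shifts):
    """deg(I) = (prod of the shifts d_1, ..., d_p) / p! for a Cohen-Macaulay
    algebra R/I with a pure resolution (Huneke-Miller)."""
    p = len(shifts)
    out = 1
    for d in shifts:
        out *= d
    return out // factorial(p)
```

For example, two quadrics in general position give shifts $(2,4)$ and `pure_degree([2, 4])` returns $4$.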
Their result points to a more general possibility:
\begin{conj}[Huneke \& Srinivasan] \label{conj1}
Let $R/I$ be a Cohen-Macaulay algebra with minimal free resolution of the
form
\begin{center}
$ 0 \rightarrow \bigoplus\limits_{j \in J_p} R(-d_{p,j}) \rightarrow
\dots \rightarrow \bigoplus\limits_{j \in J_2} R(-d_{2,j})
\rightarrow \bigoplus\limits_{j \in J_1} R(-d_{1,j}) \rightarrow R
\rightarrow R/I \rightarrow 0$.
\end{center}
Let $m_i = \min \; \{d_{i,j} \;|\; j \in J_i\}$ be the minimum
degree shift at the $i$th step and let $M_i = \max \;\{d_{i,j} \;|\:
j \in J_i\}$ be the maximum degree shift at the $i$th step. Then
\[
\frac{\prod_{i=1}^p m_i}{p!} \;\leq \; \deg(I) \;\leq\;
\frac{\prod_{i=1}^p M_i}{p!}.
\]
\end{conj}
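For a complete intersection of type $(d_1,\ldots,d_n)$, one of the cases where the conjecture is known, both bounds can be checked numerically from the Koszul shifts: the shifts at step $i$ are the sums of $i$ of the $d_j$, so $m_i$ and $M_i$ are the sums of the $i$ smallest, respectively largest, degrees, while $\deg(I)=d_1\cdots d_n$. The sketch below (names ours) does this by brute force for small data.

```python
from itertools import combinations, product
from math import factorial, prod

def koszul_bounds(ds):
    """(lower, upper) bounds of the conjecture for a complete intersection
    of type ds, read off from its Koszul resolution."""
    n = len(ds)
    lo = hi = 1
    for i in range(1, n + 1):
        sums = [sum(c) for c in combinations(ds, i)]
        lo *= min(sums)   # m_i: sum of the i smallest degrees
        hi *= max(sums)   # M_i: sum of the i largest degrees
    return lo / factorial(n), hi / factorial(n)
```

For type $(2,2,2)$ both bounds are sharp: $m_i=M_i=2i$ and $\prod m_i/3! = 8 = \deg(I)$.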
When $R/I$ is not Cohen-Macaulay, it is easy to see that the lower
bound fails; for example if $I = (x^2, x y) \subset k[x,y]$, then
$\deg(I)= 1$, $m_1 = 2$ and $m_2 = 3$, so the claimed lower bound $\frac{(2)(3)}{2!} = 3 \leq \deg(I)$ fails.
However, in \cite{HS}, Herzog and Srinivasan conjecture that even if
$R/I$ is not Cohen-Macaulay, the upper bound is still valid if one
takes $p = \mathop{\rm codim}\nolimits(I)$. Conjecture 1.1 is verified in \cite{HS} in a
number of situations: when $I$ is codimension two; for codimension
three Gorenstein ideals with five generators (in fact, the upper bound
holds for codimension three Gorenstein with no restriction on the
number of generators); when $I$ is a complete intersection, and also
for certain classes of monomial ideals. Additional cases where
Conjecture 1.1 has been verified appear in \cite{GHP}, \cite{G},
\cite{GV}. In the non-Cohen-Macaulay case, \cite{HS}
proves the bound for stable monomial ideals \cite{EK}, squarefree
strongly stable monomial ideals \cite{AHH}, and ideals with a pure
resolution; \cite{R} proves it for codimension two. In fact, in
the codimension two Cohen-Macaulay and codimension three Gorenstein
cases, a stronger version of the conjecture holds, see \cite{MNR}.
Most of the situations where the conjecture is known to be
true are when the entire minimal free resolution is known; the work in
proving the conjecture generally involves a complicated analysis
translating the numbers $d_{i,j}$ to the actual degree. In this paper
we take a different approach. Our goal is to obtain {\it only}
the information germane to the conjecture; in particular we need the
smallest and biggest shift at each step. When $I$ is Cohen-Macaulay we
can always slice with hyperplanes without changing the degree or free
resolution, hence the study of the conjecture, in the Cohen-Macaulay
case, always reduces to the study of zero-schemes.
Suppose $Y$ is a zero-scheme, and $Z$ is a zero-scheme residual to $Y$
inside a complete intersection $X$. The resolution for $I_X$ is known, so
if one has some control over $Z$, (for example, when $Z$ consists of a
small number of points, or points in special position), then linkage allows
us to say something about the resolution for $I_Y$.
Central to this are the results of Peskine-Szpiro \cite{PS} connecting
resolutions and linkage.
\subsection{Resolutions and linkage}
Two codimension $r$ subschemes $Y$ and $Z$ of $\mathbb{P}^n$ are
{\em linked} in a complete intersection $X$ if
$I_Y = I_X:I_Z$ and $I_Z = I_X:I_Y$.
The most familiar form of linkage is the Cayley-Bacharach
theorem \cite{EGH1}, which was our original motivation.
\begin{thm}[see \cite{PS} or \cite{N}]\label{linkres}
Let $X \subset \proj{n}$ be an arithmetically
Gorenstein scheme of codimension $n$, with minimal free resolution
\[
0 \rightarrow R(-\alpha) \rightarrow F_{n-1} \rightarrow F_{n-2}
\rightarrow \cdots \rightarrow F_1 \rightarrow R \rightarrow R/I_X
\rightarrow 0.
\]
Suppose that $Z$ and $Y$ are linked in $X$, and that the minimal
free resolution of $R/I_Z$ is given by:
\[
0 \rightarrow G_n \rightarrow G_{n-1} \rightarrow \cdots \rightarrow
G_1 \rightarrow R \rightarrow R/I_Z \rightarrow 0.
\]
Then there is a free resolution for $R/I_Y$ given by
\[
0 \rightarrow G_1^\vee(-\alpha) \rightarrow
\begin{array}{c}
G_2^\vee(-\alpha) \\
\oplus\\
F_1^\vee(-\alpha)
\end{array}
\rightarrow
\begin{array}{c}
G_3^\vee(-\alpha) \\
\oplus\\
F_2^\vee(-\alpha)
\end{array}
\rightarrow
\cdots \rightarrow
\begin{array}{c}
G_n^\vee(-\alpha) \\
\oplus\\
F_{n-1}^\vee(-\alpha)
\end{array}
\rightarrow
R \rightarrow
R/I_Y \rightarrow 0.
\]
\end{thm}
It turns out that in certain situations
the shifts in the mapping cone resolution for $Y$ given by the
theorem above are such that no cancellation
of the relevant shifts can occur.
\section{Ideals linked to a collinear subscheme}
We assume for the remainder of the paper that $n \geq 3$ and that
$X$ is a non-degenerate (all the $d_i > 1$) complete intersection
zero-scheme of type $(d_1,d_2,\ldots, d_n)$; let $d_X$ denote the
degree of $X$, and $\alpha_X = \sum_{i=1}^n d_i$.
Suppose $Z$ is a complete intersection subscheme of $X$, of type
$(e_1,\ldots, e_n)$; with $d_Z$ and $\alpha_Z$ as above.
A minimal free resolution for $R/I_X$ is given by
$F_i = \wedge^i(\oplus_{j=1}^n R(-d_j))$, and a minimal free
resolution for $R/I_Z$ is given by
$G_i = \wedge^i(\oplus_{j=1}^n R(-e_j))$.
In this case it is easy to see that Theorem 1.2 implies that
there exists $f$ of degree $a = \alpha_X-\alpha_Z$ such that
$I_Y = I_X:I_Z = I_X+(f)$ and $I_Z = I_X:f$; in particular, $I_Y$ is
an almost complete intersection. Since $I_X \subseteq I_Z$, there is a
natural surjection $R/I_X \rightarrow R/I_Z$; the mapping cone of Theorem 1.2 comes from
a map of complexes which begins:
\begin{small}
\[
\xymatrix{
\ar[r] &\wedge^2(\oplus_{i=1}^n R(-d_i)) \ar[r]& \oplus_{i=1}^n
R(-d_i) \ar[r]\ar[d]^{\phi} & R \ar[r] \ar[d] &
R/I_X \ar[d] \ar[r]& 0\\
\ar[r]& \wedge^2(\oplus_{i=1}^n R(-e_i)) \ar[r] &\oplus_{i=1}^n R(-e_i)\ar[r] &
R \ar[r] & R/I_Z \ar[r] &0\\
}
\]
\end{small}
The comparison map $\phi$
which makes the diagram commute is simply an expression of the
generators of $I_X$ in terms of the generators of $I_Z$
(e.g.\cite{E}, Exercise 21.23). If
$I_X \subseteq \mathfrak{m}I_Z$ then $\phi$ has entries in
$\mathfrak{m}$; in the construction of Theorem 1.2 the map
$G_{n-1}^\vee \rightarrow F_{n-1}^\vee$ is the transpose of $\phi$.
Since the comparison maps further back in the
resolution are simply exterior powers of $\phi$, we have:
\begin{lem}\label{nocancel}
If $I_X \subseteq \mathfrak{m}I_Z$, then the mapping cone resolution
is in fact a minimal free resolution for $I_Y$.
\end{lem}
So if $I_X \subseteq \mathfrak{m}I_Z$, then the
minimal free resolution $H_\bullet$ for $R/I_Y$ has
$H_n = \oplus_{i=1}^n R(e_i - \alpha_X)$, and for
$i \in \{1,\ldots, n-1\}$,
\[
H_i = \left( \wedge^{n-i}(\oplus_{j=1}^n R(d_j)) \bigoplus
\wedge^{n-i+1}(\oplus_{j=1}^n R(e_j)) \right)(-\alpha_X).
\]
If $I_X \not\subseteq \mathfrak{m}I_Z$, then $I_X$ and $I_Z$
share some minimal generators; in this case, there can be
cancellation in the mapping cone resolution:
\begin{exm}
Let $I_X = \langle x^2,y^2, z^6 \rangle \subseteq k[x,y,z,w]$,
and let $I_Z = \langle x,y, z^6\rangle$. Then we find that $I_Y =
I_X+\langle xy \rangle$. In Betti diagram notation the mapping cone
resolution of $R/I_Y$ is:
\begin{small}
$$
\vbox{\offinterlineskip
\halign{\strut\hfil# \ \vrule\quad&# \ &# \ &# \ &# \ &# \ &# \
&# \ &# \ &# \ &# \ &# \ &# \ &# \
\cr
degree&1&4&6&3\cr
\noalign {\hrule}
0&1 &--&--&--&\cr
1&--&3 &2 &1 &\cr
2&--&--&1 &-- &\cr
3&--&--&--&--&\cr
4&--&--&--&--&\cr
5&--&1 &--&--&\cr
6&--&--&3 &2 &\cr
\noalign{\bigskip}
\noalign{\smallskip}
}}
$$
\end{small}
This is not a minimal resolution; the $R(-4)$ summand can be pruned off.
The degree of $I_Y$ is $18$. Checking against the pruned resolution, we obtain
$\prod_{i=1}^3 m_i = 54$ and $\prod_{i=1}^3 M_i = 432$, and indeed
$54/3! = 9 \le 18 \le 72 = 432/3!$. Notice that the upper bound was not affected
when we pruned the resolution, and the value of $\prod_{i=1}^3 m_i$
increased after pruning.
\end{exm}
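The arithmetic in this example is easy to check mechanically. A minimal sketch in Python; the shifts $m_i$ and $M_i$ below are the extremal twists read off the pruned Betti diagram above:

```python
from math import factorial

# Extremal twists of the pruned minimal free resolution of R/I_Y,
# I_Y = (x^2, y^2, z^6, xy), read off the Betti diagram above.
m = [2, 3, 9]        # minimal shifts (after pruning the R(-4) summand)
M = [6, 8, 9]        # maximal shifts
n, deg_Y = 3, 18     # codimension and degree of Y

prod_m = prod_M = 1
for mi, Mi in zip(m, M):
    prod_m *= mi
    prod_M *= Mi

assert (prod_m, prod_M) == (54, 432)
# Conjectured bounds: prod(m_i)/n! <= deg(Y) <= prod(M_i)/n!
assert prod_m <= factorial(n) * deg_Y <= prod_M   # 9 <= 18 <= 72
```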
\begin{exm}\label{onept}
Let $Z$ be a single point. For $Y,$ Lemma~\ref{nocancel} implies that
$M_n=m_n=\alpha_X-1$, and for $i<n$,
$M_i = \alpha_X-n+i-1$ and $m_i = \sum_{j=1}^i d_j$ (where $d_i \le
d_j$ if $i \le j$). We want to show that
\[
( \prod_{j=1}^{n-1} \sum_{i=1}^j d_i) (\sum_{i=1}^n d_i -1) \;
\leq \; n!(d_X-1) \; \leq \; \prod_{i=1}^n (\alpha_X -i).
\]
For the upper bound there are two cases. If
$d_1 < d_n$, then we have the following inequalities:
\begin{eqnarray*}
n d_1 & \leq & d_1 + d_2 + \cdots +d_{n-1}+ d_n -1 = \alpha_X-1\\ (n-1) d_2 &
\leq & (d_2 + \cdots + d_n) + (d_1 -2) = \alpha_X -2\\ \vdots & & \vdots \\ 2
d_{n-1} & \leq & (d_{n-1} + d_n) + (d_1 + d_2 + \cdots + d_{n-2} - (n-1)) =
\alpha_X -(n-1)\\ d_{n} & \leq & (d_n) + (d_1 + d_2 + \cdots + d_{n-1} - n) =
\alpha_X -n
\end{eqnarray*}
So it follows that $n!(d_X-1) \; \leq \; n! d_1 d_2 \cdots d_n \; \leq
\; \prod_{i=1}^n (\alpha_X-i).$ If $d_1 = d_n = \delta$, then
\begin{eqnarray*}
n \delta & \leq & n \delta = \alpha_X \\
(n-1) \delta & \leq & (n-1) \delta + (\delta -2) = \alpha_X - 2(1)
\; \leq \; \alpha_X -2 \\
(n-2) \delta & \leq & (n-2) \delta + (2)(\delta -2) = \alpha_X - 2(2)
\; \leq \; \alpha_X - 3\\
\vdots & & \vdots \\
2 \delta & \leq & 2 \delta + (n-2)(\delta -2) = \alpha_X - 2(n-2)
\; \leq \; \alpha_X - (n-1)\\
\delta & \leq & \delta + (n-1)(\delta -2) = \alpha_X - 2(n-1)
\end{eqnarray*}
So $n! (\delta^n-1) \; \leq \; n! \delta^n \; \leq \;
(\alpha_X) \left( \prod_{i=2}^{n-1} (\alpha_X-i) \right) (\alpha_X -2n+2).$
To finish the upper bound, we must verify that
$\alpha_X (\alpha_X -2n+2) \; \leq \; (\alpha_X -1)(\alpha_X -n)$;
this follows since $n \geq 3$.
The lower bound is easier: it holds for a complete intersection, and by assumption
$d_j \geq 2$ for all $j$, so we have
\[
\prod_{j=1}^{n} \sum_{i=1}^j d_i \le n!d_X \;\;\mbox{ and }\;\;
j+1 \; \leq \; 2 j \; \leq \; \sum_{i=1}^j d_i.
\]
Thus
\[
n! = \prod_{j=1}^{n-1} (j+1) \; \leq \; \prod_{j=1}^{n-1} 2j
\; \leq \; \prod_{j=1}^{n-1} \sum_{i=1}^j d_i.
\]
Combining these two inequalities yields the lower bound.
\end{exm}
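The inequality chain established in this example can be spot-checked by brute force. A sketch in Python; this is only a finite check over small hypothetical degree tuples, not a proof:

```python
from math import factorial
from itertools import combinations_with_replacement

def bounds_hold(ds):
    """Check (prod_{j<n} sum_{i<=j} d_i)(alpha-1) <= n!(d-1) <= prod_i (alpha-i)."""
    n, alpha = len(ds), sum(ds)
    d = 1
    for di in ds:
        d *= di
    lower = alpha - 1
    for j in range(1, n):
        lower *= sum(ds[:j])
    upper = 1
    for i in range(1, n + 1):
        upper *= alpha - i
    return lower <= factorial(n) * (d - 1) <= upper

# nondecreasing tuples with 2 <= d_1 <= ... <= d_n <= 6
for n in (3, 4, 5):
    assert all(bounds_hold(ds)
               for ds in combinations_with_replacement(range(2, 7), n))
```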
\begin{lem}\label{ineq}
If $X$ is a non-degenerate zero-dimensional complete intersection in
$\mathbb{P}^n$, with $n\ge 3$, then $d_X \leq { \alpha_X-1 \choose
n}$, i.e. $d_X n! \le (\alpha_X-1)(\alpha_X-2) \cdots (\alpha_X-n)$.
\end{lem}
\begin{proof}
The bounds in Conjecture~\ref{conj1} hold for a $(d_1, d_2,\cdots,
d_n)$ complete intersection, so
$d_X \, n! \leq \alpha_X (\sum_{i=2}^n d_i) (\sum_{i=3}^n d_i) \cdots
d_n.$ If $d_1 < d_n$, then as in the first case of Example~\ref{onept},
$d_X\, n! \leq (\alpha_X-1) (\sum_{i=2}^n
d_i) (\sum_{i=3}^n d_i) \cdots d_n$.
Hence it suffices to show
\[
\alpha_X(\sum_{i=2}^n d_i) (\sum_{i=3}^n d_i) \cdots (\sum_{i=n}^n d_i)
\leq \prod_{j=1}^{n}(\alpha_X-j)
\]
\noindent{\it Case 1:} $d_1 > 2$. Then $(\sum_{i=2}^n d_i) \leq
(\alpha_X-3)$ and $(\sum_{i=j}^n d_i) \leq (\alpha_X-j)$ for all $j\geq 3$. So
since $\alpha_X(\alpha_X-3) \leq (\alpha_X-1)(\alpha_X-2)$, we obtain:
\[
\begin{array}{rcl}
\alpha_X (\sum_{i=2}^n d_i) (\sum_{i=3}^n d_i) \cdots (\sum_{i=n}^n d_i)
&\leq &\alpha_X(\alpha_X-3) (\alpha_X-3) (\alpha_X -4) \cdots (\alpha_X-n)\\
&\leq &\prod_{j=1}^{n}(\alpha_X-j)
\end{array}
\]
\noindent{\it Case 2:} $d_1 = 2$. Then $(\sum_{i=3}^n d_i) \leq (\alpha_X-4)$
and $(\sum_{i=j}^n d_i) \leq (\alpha_X-j)$ for all $j\geq 2$, so
\[
\alpha_X (\sum_{i=2}^n d_i) (\sum_{i=3}^n d_i) \cdots (\sum_{i=n}^n d_i)
\leq
\alpha_X(\alpha_X-2) (\alpha_X-4) (\alpha_X -4) \cdots(\alpha_X-n).
\]
Since $\alpha_X (\alpha_X-4) \leq (\alpha_X-1)(\alpha_X-3)$, we obtain
$ \alpha_X(\alpha_X-2) (\alpha_X-4) (\alpha_X -4) \cdots(\alpha_X-n)
\leq \prod_{j=1}^{n}(\alpha_X-j).$
\end{proof}
The proof of the next lemma is similar, so we omit it.
\begin{lem}\label{betterineq}
With the same hypothesis as Lemma~\ref{ineq},
$d_X n! \leq \alpha_X (\alpha_X-2) (\alpha_X-4) (\alpha_X -6) \cdots
(\alpha_X-2(n-1))$.
\end{lem}
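As with the previous lemma, the stated inequality can be probed numerically. A brute-force sketch in Python over small hypothetical degree tuples (a finite spot-check, not a proof):

```python
from math import factorial
from itertools import combinations_with_replacement

def lemma_holds(ds):
    """Check d * n! <= alpha(alpha-2)(alpha-4)...(alpha-2(n-1))."""
    n, alpha = len(ds), sum(ds)
    d = 1
    for di in ds:
        d *= di
    rhs = 1
    for i in range(n):
        rhs *= alpha - 2 * i
    return d * factorial(n) <= rhs

# equality occurs for d_1 = ... = d_n = 2, so the range starts there
for n in (3, 4, 5):
    assert all(lemma_holds(ds)
               for ds in combinations_with_replacement(range(2, 8), n))
```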
\begin{defn}
A subscheme $Z \subseteq \mathbb{P}^n$ is {\em collinear} if $I_Z =
\langle l_1,\ldots, l_{n-1},f \rangle$, where the $l_i$ are linearly
independent linear forms
and $\deg f = t$.
\end{defn}
We now use linkage to study the case where $Y$ is linked in $X$ to a
collinear subscheme $Z$. While we expect our methods to work more
generally, this case is already complicated enough to be interesting.
Since the line $V(l_1,\ldots,l_{n-1})$ cannot be contained
in all of the hypersurfaces defining $X$ (otherwise $X$ would contain
the whole line), the line on which $Z$
is supported must intersect one of the hypersurfaces defining
$X$ in a zero-scheme. Thus, $Z$ has degree at most $d_n$.
Henceforth we write $\alpha$ for $\alpha_X$ and $d$ for $d_X$.
\begin{thm}\label{ptsonlinethm}
Let $X$ be a zero-dimensional complete intersection of type $d_1,
d_2, \ldots, d_n$ in $\proj{n}$. Let $Z \subset X$ be a collinear
subscheme of degree $t$, and let $Y$ be residual to $Z$. Then
Conjecture~\ref{conj1} holds for $R/I_Y$.
\end{thm}
\begin{proof}
{\bf Upper bound.}
Because $d_j \geq 2$ for all $j$, even if cancellation occurs we
have $M_i = \alpha -n+i-1$ for $i \in \{2, \ldots, n\}$, as in
Example~\ref{onept}. For $i=1$, $M_1 \geq d_n$ or $M_1=d_n-1$, depending on
the amount of cancellation. If $t \leq \sum_{i=1}^{n-1} (d_i-1)$,
then $\alpha-n-t+1 \geq d_n$ and so $M_1 \geq d_n$. If
$\sum_{i=1}^{n-1} (d_i-1) < t$, then cancellation can occur.
\smallskip
\noindent{\it Case 1: $M_1 \geq d_n$.} In this case, since
\[
n!(d-t) \leq n!d \leq \alpha (\alpha-d_1) (\alpha-d_1-d_2) \cdots (d_n),
\]
it suffices to show that
\[
\alpha (\alpha-d_1) (\alpha-d_1-d_2) \cdots (d_{n-1}+d_n)(d_n)
\leq
(\alpha-1)(\alpha-2)(\alpha-3) \cdots (\alpha-(n-1))M_1
\]
Since $d_j \geq 2$ for all $j$, $\alpha(\alpha-d_1-d_2)
\leq (\alpha-1)(\alpha-3)$, and
\[
\begin{array}{rcl}
(\alpha-d_1) &\leq &(\alpha-2)\\
(\alpha-d_1-d_2-d_3) &\leq & (\alpha-4)\\
(\alpha-d_1-d_2-d_3-d_4) &\leq &(\alpha-5)\\
& \vdots &
\end{array}
\]
the result follows if $n\ge 5$. If $n = 4$, then we must replace
the $\alpha-4$ above with $M_1$. The result holds since $M_1 \ge d_4 =
\alpha-d_1-d_2-d_3$.
For $n=3$, there are four cases to analyze.
If $d_1 \geq 3$, then $\alpha(\alpha-d_1) \leq (\alpha-1)(\alpha-2)$.
If $d_1 = 2$, then if $d_2 \geq 3$ we find that $6d \leq
(\alpha-1)(\alpha-2)d_3$ because $11 d_2 \leq d_2^2 + 2d_2d_3+d_3^2 +d_3$.
If $d_1=2$ and $d_2=2,$ but $d_3 \geq 3$, then we find that $24 d_3 \leq
d_3^3+5d_3^2 + 6d_3$. Since $d_3 \geq 3$, $18 \leq d_3^2+5d_3$ so the
inequality is true.
Finally, if $d_1=d_2=d_3=2$, then as long as $t>1$ we have $6(8-t) \leq
(5)(4)(2)$, so the bound holds when $t>1$. The case $t=1$ is covered
by Example~\ref{onept}, which concludes Case 1.
\smallskip
\noindent{\it Case 2: $d_n > M_1$.} Then $\alpha-t-n+1 = d_n-1$. If $d_1 =
d_n$, then since at most $n-1$ of the $d_i$'s can cancel, this forces $M_1
= d_1 =d_n$ and the inequalities from the previous case apply. So
henceforth we assume $d_1 < d_n$, which as noted in Lemma~\ref{ineq}
implies $d \, n! \leq (\alpha-1) (\sum_{i=2}^n d_i) (\sum_{i=3}^n d_i)
\cdots d_n$. We wish to show
\[
n!(d-t) \leq (\alpha-t-n+1) \prod_{i=2}^n (\alpha-n+i-1)
= (\alpha-t-n+1) \prod_{i=1}^{n-1} (\alpha-i)
\]
Suppose $n \geq 5$. We claim that $d_n(d_n+d_{n-1}) \leq (d_n-1)(\alpha - n+2)
= (d_n-1)(d_n+t)$. This follows from the inequalities
\[
\begin{array}{rcl}
(d_n-1)(d_n+t) - d_n(d_n+d_{n-1}) &=& -d_n + t(d_n-1) - d_{n-1}d_n\\ &\geq
&-d_n + (d_n-1)(d_{n-1} + n-2) -d_{n-1}d_n
\end{array}
\]
because $t = \alpha-d_n -n +2 = d_{n-1} + \sum_{i=1}^{n-2} (d_i-1) \geq
d_{n-1}+n-2$. Then
\[
\begin{array}{rcl}
-d_n + (d_n-1)(d_{n-1} + n-2) -d_{n-1}d_n & = &-d_n + (n-2) d_n -
d_{n-1}-(n-2)\\
&=& (n-4)d_n + (d_n-d_{n-1}) - (n-2)\\
&\geq& (n-4) d_n - (n-2)\\
&=& (n-4) (d_n-1) -2.
\end{array}
\]
Finally $(n-4)(d_n-1) \geq 2$ because $n \geq 5$ and $d_n > d_1 \geq 2$, so
we obtain
\begin{small}
\[
\begin{array}{rcl}
n!d &\leq &d_n (d_n+d_{n-1}) (d_n+d_{n-1}+d_{n-2}) \cdots
(\alpha-d_1)(\alpha-1)\\
& \leq &(d_n-1) (d_n+t) (d_n+d_{n-1}+d_{n-2}) \cdots
(\alpha-d_1)(\alpha-1)\\
&= &(d_n-1) (\alpha-n+2) (d_n+d_{n-1}+d_{n-2}) \cdots
(\alpha-d_1)(\alpha-1)\\
&\leq& (d_n-1) (\alpha-n+2) (\alpha-n+1)(d_n+d_{n-1}+d_{n-2}+d_{n-3}) \cdots
(\alpha-d_1)(\alpha-1)\\
&\leq &(d_n-1) (\alpha-n+2) (\alpha - n+1)
(\alpha-(n-3))(\alpha-(n-4)) \cdots (\alpha-2)(\alpha-1).
\end{array}
\]
\end{small}
Hence, the upper bound holds if $n\ge 5$.
\smallskip
If $n= 4$ and $d_2 < d_4$, then $3d_2 \leq d_2+d_3+d_4-1+d_1-2 = \alpha
-3$. If $d_4=d_3$, then since $d_1<d_4$, we also have $4d_1 \leq \alpha-2$.
So, $12d_1d_2 \leq (\alpha -2)(\alpha -3)$. On the other hand, if
$d_2=d_4$, then $3d_2 \leq \alpha-2$ and $4d_1 \leq \alpha-3$ so we also
find that $12d_1d_2 \leq (\alpha -2)(\alpha -3)$. It just remains to show
that $2d_3d_4 \leq (\alpha -1)(d_4-1)$. But $(d_4-1)(\alpha -1) -2d_3d_4
\ge (d_4-1)(2d_4+3)-2d_4^2=d_4-3\ge 0$. Thus the upper bound holds when
$d_4=d_3$. If $d_3<d_4$, we may only have $4d_1\leq (\alpha -1)$.
Nevertheless,
\[
\begin{array}{rcl}
(\alpha -2)(d_4-1)-2d_3d_4 &= &(d_1+d_2+d_4-d_3-2)(d_4-1)-2d_3 \\
& \ge & (d_1+d_2+d_4-d_3- 2)(d_4-1)-2(d_4-1)\\
& = &(d_1+d_2+d_4-d_3-4)(d_4-1)\\
& = & (d_1+d_2-4 +d_4- d_3)(d_4-1) \ge 0.
\end{array}
\]
Thus, the upper bound holds when $n=4$.
\smallskip
If $n=3$, then since $M_1 =d_3-1$, $d_2 \neq d_3$. If $3d_1\leq (\alpha
-2)$ then as before, $(\alpha -1)(d_3-1)-2d_2d_3 \ge (d_1-d_2+d_3-3)(d_3-1)
\ge 0$. If $3d_1=\alpha -1$, we must have $d_1=d_2=d_3-1$. In this case,
using the fact that $t=2d_1-1$, we calculate the inequality directly:
$6(d_1^2(d_1+1)- (2d_1-1) ) \leq (d_1)(3d_1+1-2) (3d_1+1-1)$ simplifies to
the true statement $0 \leq 3(d_1-1)(d_1^2-2d_1+2)$.
\vskip .15in
\noindent {\bf Lower bound.}
If there is no cancellation, then $m_n = \alpha - t$ and for $i<n$
we have $m_i = \min\{\alpha-n-t+i, \sum_{j=1}^i d_j \}$.
In particular, $m_i \leq \sum_{j=1}^i d_j$, for $i \in \{1,\ldots,
n-1 \}$, and so
\[
\prod_{i=1}^n m_i = (\prod_{i=1}^{n-1} m_i) m_n \leq (\prod_{i=1}^{n-1}
\sum_{j=1}^i d_j) m_n.
\]
Hence it is sufficient to prove that
\[
(\prod_{i=1}^{n-1} \sum_{j=1}^i d_j) (\alpha-t) \leq n!(d-t) .
\]
Exactly as in Example~\ref{onept}, we have
\[
\Bigl(\prod_{i=1}^{n-1} \sum_{j=1}^i d_j\Bigr) \alpha \le n!\,d \;\;\mbox{ and }\;\;
i+1 \; \leq \; 2 i \; \leq \; \sum_{j=1}^i d_j.
\]
So $n! t = t \prod_{i=1}^{n-1} (i+1) \;
\leq \; t \prod_{i=1}^{n-1} \sum_{j=1}^i d_j.$
Subtracting this inequality from the left hand inequality above yields
the desired inequality, so the lower bound holds for $R/I_Y$ if
there is no cancellation.
\smallskip
Now let us look at where cancellation can occur. Cancellation matters
only when a term whose degree realizes one of the minima
disappears. There are two cases:
\smallskip
\noindent{\it Case 1: $t < d_n$.} Then $\alpha-t > \alpha-d_n$, and so
$\alpha-t -1 \ge \alpha-d_n$, hence $m_{n-1} \le \alpha-d_n$. Also
$\alpha-t -1 \ge \alpha-d_n$ implies $\alpha-t -1 >
\alpha-d_n-d_{n-1}$, so that $m_{n-2} \le \alpha-d_n-d_{n-1}$, and in
general $m_{n-i} \le \alpha-d_n-\cdots -d_{n-i+1}$. So if $m_n =
\alpha-t$, then the argument from the previous case holds.
However, if $t=d_l$ for some $l<n$, then it is possible that $m_n =
\alpha-1$. So in this case, we need to show that
\[
(\prod_{i=1}^{n-1} \sum_{j=1}^i d_j) (\alpha-1)
\leq n! (d-d_l).
\]
We have the inequalities
\[
\begin{array}{ccc}
d_1 & \le & d_1 \\
d_1+d_2 & \le & 2d_2 \\
\vdots\\
d_1+d_2+\cdots +d_{n-2}+d_{n-1} & \le & (n-1)d_{n-1} \\
\alpha+1 & \le & nd_n,
\end{array}
\]
where the last row follows since $d_l < d_n$.
Subtracting $2\prod_{i=1}^{n-1} \sum_{j=1}^i d_j$ from the
product of the left hand column and $n!d_l$ from the product of
the right hand column would yield the desired inequality, so
it suffices to show that $n!d_l \le 2\prod_{i=1}^{n-1} \sum_{j=1}^i
d_j$. Let $\beta = \prod_{i=1}^{n-2} \sum_{j=1}^i d_j$, so
\[
2\prod_{i=1}^{n-1} \sum_{j=1}^i d_j = 2(d_{n-1}+\sum_{j=1}^{n-2} d_j)\beta.
\]
Since $d_l \le d_{n-1}$, it is enough to show that $n! \le
2\beta$. Since the $d_i$ are at least two,
\[
2^{n-1}(n-2)! \le 2\beta,
\]
and the inequality holds if $n\ge 6$. For $n \in \{3,4,5\}$, a
case analysis shows we have to verify the bound directly for
\[
\begin{array}{ll}
n=3 & d_1=2 \\
n=4 & (d_1,d_2) = (2,2) \mbox{ or }(2,3) \\
n=5 & (d_1,d_2,d_3) = (2,2,2) \mbox{ or }(2,2,3).
\end{array}
\]
For example, if $n=3$ and $d_1=2$, we must verify that
\[
2(2+d_2)(2+d_2+d_3-1) \le 6(2d_2d_3-d_2).
\]
This follows by summing the inequalities:
\[
\begin{array}{ccc}
(2+d_2)d_3 & \le & (2d_2)d_3 \\
(2+d_2)(d_2+1) & \le & (2d_2)d_3,
\end{array}
\]
and observing that $2d_2d_3-3d_2=d_2(2d_3-3)\ge 0$. The other
cases are similar so we omit them.
\smallskip
\noindent{\it Case 2: $t = d_n$.} The $\alpha-d_n$ term cancels with $\alpha-t$, and
so $m_n = \alpha-1$. Also $m_{n-1} = \min\{\alpha-d_{n-1}, \alpha-t-1\}
\leq \alpha-t -1 = \alpha - d_n-1$. Since all the $d_i \geq 2$, we cannot
have $\alpha-d_n- \cdots -d_{k+1} = \alpha-n-t+k+1$ for any $k \leq n-2$,
and hence we always have $m_i \leq \sum_{j=1}^i d_j$ for $i \leq n-2$. In
order to prove the lower bound, we need to show
\[
(\alpha-1)(\alpha-d_n-1) \prod_{i=1}^{n-2} \sum_{j=1}^i d_j
\leq
n! (d-d_n)
\]
We can write
\[
n! (d-d_n) = d_n n (n-1)! (d'-1)
\]
where $d' = \prod_{i=1}^{n-1} d_i$. By the bound on the complete
intersection of type $d_1, d_2, \ldots, d_{n-1}$, we know that
\[
(n-1)! d' \geq \prod_{i=1}^{n-1} \sum_{j=1}^i d_j = (\alpha - d_n)
\prod_{i=1}^{n-2} \sum_{j=1}^i d_j.
\]
It is also true that $n-1 \leq 2^{n-2}$ for all $n \geq 2$, so
\[
(n-1)! \leq 2^{n-2} (n-2)! \leq \prod_{i=1}^{n-2} \sum_{j=1}^i d_j,
\mbox{ since }d_i \ge 2.
\]
Therefore
\[
(n-1)! (d'-1) = (n-1)! d' - (n-1)! \geq (\alpha-d_n) \prod_{i=1}^{n-2}
\sum_{j=1}^i d_j - \prod_{i=1}^{n-2} \sum_{j=1}^i d_j = (\alpha-d_n-1)
\prod_{i=1}^{n-2} \sum_{j=1}^i d_j.
\]
But since $nd_n \geq \alpha \ge \alpha-1$, this gives
\[
n! (d - d_n) = d_n n (n-1)! (d'-1) \geq (\alpha-1) (\alpha-d_n-1)
\prod_{i=1}^{n-2} \sum_{j=1}^i d_j.
\]
\end{proof}
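The two target inequalities used in the proof (with the no-cancellation shifts) admit a quick brute-force sanity check. A sketch in Python over small hypothetical degree tuples and $1 \le t \le d_n$; a finite spot-check only, not a proof:

```python
from math import factorial
from itertools import combinations_with_replacement

def targets_hold(ds, t):
    """Check (prod_{i<n} sum_{j<=i} d_j)(alpha-t) <= n!(d-t)
    and n!(d-t) <= (alpha-t-n+1) * prod_{i=1}^{n-1}(alpha-i)."""
    n, alpha = len(ds), sum(ds)
    d = 1
    for di in ds:
        d *= di
    partial = 1                      # prod_{i<n} sum_{j<=i} d_j
    for i in range(1, n):
        partial *= sum(ds[:i])
    tail = 1                         # prod_{i=1}^{n-1} (alpha - i)
    for i in range(1, n):
        tail *= alpha - i
    lower_ok = partial * (alpha - t) <= factorial(n) * (d - t)
    upper_ok = factorial(n) * (d - t) <= (alpha - t - n + 1) * tail
    return lower_ok and upper_ok

for n in (3, 4):
    for ds in combinations_with_replacement(range(2, 8), n):
        for t in range(1, ds[-1] + 1):
            assert targets_hold(ds, t), (ds, t)
```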
\section{$Y$ is linked to 3 general points}
In this section, we study the simplest $Z$ which is not a collinear scheme:
three general points. While we are able to carry out the degree analysis in
this case, it also serves to illustrate that this type of argument
will become increasingly complex.
\begin{thm}\label{3noncollinearptsthm}
Let $X$ be a zero-dimensional complete intersection of type $d_1, d_2,
\ldots, d_n$ in $\proj{n}$, $n>2$. Let $Z \subset X$ be a set of 3
non-collinear points, and suppose $Y$ is linked to $Z$ in $X$. Then
Conjecture~\ref{conj1} holds for $R/I_Y$.
\end{thm}
By Theorem~\ref{linkres}, the mapping cone resolution of $I_Y =
I_X:I_Z$ is
\begin{tiny}
\[
0 \rightarrow
\begin{array}{c}
R^{n-2} (-(\alpha-1))\\
\oplus\\
R^3 (-(\alpha-2))
\end{array}
\rightarrow
\cdots \rightarrow
\begin{array}{c}
R^{{{n-2} \choose {n-i+1}}} (-(\alpha-n-1+i))\\
\oplus\\
R^{3 {{n-2} \choose {n-i}} + 2{{n-2} \choose {n-i-1}}} (-(\alpha-n-2+i))\\
\oplus\\
\bigoplus\limits_{|\Lambda| = i} R (-(\sum_{j \in \Lambda} d_j))
\end{array}
\rightarrow
\cdots
\]
\[
\cdots \rightarrow
\begin{array}{c}
R^{{n-2 \choose n-2}} (-(\alpha-n+2))\\
\oplus\\
R^{3 {n-2 \choose n-3} + 2{n-2 \choose n-4} } (-(\alpha-n+1))\\
\oplus\\
\bigoplus\limits_{|\Lambda| = 3} R (-(\sum_{j \in \Lambda} d_j))
\end{array}
\rightarrow
\begin{array}{c}
R^{3 {n-2 \choose n-2} + 2{n-2 \choose n-3}} (-(\alpha-n))\\
\oplus\\
\bigoplus\limits_{j < k} R (-(d_j + d_k))
\end{array}
\rightarrow
\begin{array}{c}
R^{ 2{n-2 \choose n-2}} (-(\alpha-n-1))\\
\oplus\\
\bigoplus\limits_{j=1}^{n} R (-d_j)
\end{array}
\rightarrow
I_Y
\]
\end{tiny}
\begin{proof}
{\bf Upper bound.}
We begin with the upper bound. If $n\geq 4$, then there is no cancellation
of terms which affect the upper bound, and for $i \in \{3, \ldots, n-1 \}$,
$M_i = \max \{\sum_{j=n-i+1}^n d_j, \alpha-n+i-1\} = \alpha-n+i-1$,
while
\[
\begin{array}{l}
M_1 = \max \{ d_n, \alpha-n-1\} = \alpha-n-1\\
M_2 = \max \{d_{n-1}+d_n, \alpha-n\} = \alpha-n \\
M_n = \alpha - 1.
\end{array}
\]
So we want to show that
\[
n! (d-3) \leq (\alpha-n)(\alpha-(n+1)) \prod_{i=1}^{n-2} (\alpha-i).
\]
Since we know that
\[
n!(d-3) \leq n! d \leq \alpha \prod_{i=2}^n \sum_{j=i}^n d_j,
\]
it is enough to show that
\[
\alpha \prod_{i=2}^n \sum_{j=i}^n d_j \leq
(\alpha-n)(\alpha-(n+1)) \prod_{i=1}^{n-2} (\alpha-i).
\]
By Lemma~\ref{betterineq}, we know that
\[
\alpha \prod_{i=2}^n \sum_{j=i}^n d_j \leq
\alpha (\alpha-2) (\alpha-4) (\alpha -6) \cdots
(\alpha-2(n-1)),
\]
so it is enough to show that
\[
\alpha (\alpha-2) (\alpha-4) (\alpha -6) \cdots (\alpha-2(n-1))
\leq
(\alpha-n)(\alpha-(n+1)) \prod_{i=1}^{n-2} (\alpha-i).
\]
If $n>4$, then
\[
\begin{array}{rcl}
\alpha-2 &\leq&\alpha-2\\
\alpha-2(3) &\leq& \alpha - 4\\
\alpha-2(4) &\leq& \alpha - 5\\
&\vdots &\\
\alpha-2(n-3) &\leq& \alpha - (n-2)\\
\alpha-2(n-2) &\leq& \alpha-n\\
\alpha-2(n-1) &\leq& \alpha-(n+1)
\end{array}
\]
and
\[
\begin{array}{rcl}
\alpha(\alpha-4) &\leq&(\alpha-1)(\alpha-3).
\end{array}
\]
Taking the product, we see that the bound holds if $n>4$. If $n = 4$,
then we must show that
\[
\alpha(\alpha-2)(\alpha-4)(\alpha-6)
\leq (\alpha-1)(\alpha-2)(\alpha-4)(\alpha-5);
\]
which is true since $\alpha(\alpha-6) \leq (\alpha-1)(\alpha-5)$ for all
$\alpha$.
Finally, if $n=3$, then we have to be a bit more careful. It is always true
that $M_1=\alpha-4$ and $M_3 = \alpha-1$. The value of $M_2$ is either
$\alpha-2$ or $\alpha-3$ depending on cancellation.
\noindent{\it Case 1: $d_1=d_2=d_3=2$.}
We check directly that
\[
30=3!(8-3) = (2)(6-3)(5) =(\alpha-4)(\alpha-3)(\alpha-1)\leq M_1M_2M_3.
\]
{\it Case 2: $d_1=d_2=2, d_3>2$.}
In this case $\alpha=d_3+4$, and so $M_2 \geq d_3+1$. Again we plug in
values, and check to see that the resulting inequality is true. Is
$6(d-3) = 6(4d_3-3) \leq (d_3)(d_3+1)(d_3+3)$? This is equivalent to
$0 \leq d_3^3+4d_3^2 -21 d_3 + 18 =(d_3-2)(d_3^2+6d_3-9)$, which is
true for $d_3 \geq 3$.
\noindent{\it Case 3: $d_1=2, d_2>2$.} Here $\alpha=d_2+d_3+2$ and $M_2 \geq
d_2+d_3-1$, so we need to check that $6(2d_2d_3-3) \leq
(d_2+d_3-2)(d_2+d_3-1)(d_2+d_3+1)$. This inequality reduces to checking
that $d_2^3+3d_2^2d_3+3d_2d_3^2+d_3^3-2d_2^2-16d_2d_3-2d_3^2-d_2-d_3+20
\geq 0$, which is true since for $3 \leq d_2 \leq d_3$,
\[
\begin{array}{rcl}
d_2^3+3d_2^2d_3+3d_2d_3^2+d_3^3 &\geq & 3d_2^2+9d_2d_3 +9d_3^2+3d_3^2\\
&\geq&2d_2^2+2d_3^2+d_2^2+9d_2d_3+8d_3^2\\
&\geq&2d_2^2+2d_3^2+d_2^2+9d_2d_3+7d_2d_3+d_3^2\\
&=&2d_2^2+2d_3^2+16d_2d_3+d_2^2+d_3^2\\
&\geq &2d_2^2+2d_3^2+16d_2d_3+d_2+d_3.
\end{array}
\]
{\it Case 4: $d_1>2$.} In this case, we check directly that
\[
\alpha(\alpha-d_1)(\alpha-d_1-d_2) \leq
(\alpha-1)(\alpha-d_1)(\alpha-4) \leq M_1 M_2 M_3.
\]
The leftmost expression is the familiar product bounding $3!\,d$ for the
complete intersection $X$, so it is at least $3!\,d$, and hence at least
$3!(d-3)$. So the upper bound holds.
\vskip .1in
\noindent{\bf Lower bound}
Now we will prove the lower bound. Notice that the only cancellation
that is numerically feasible is at the last step because $d_j \geq 2$
for all $j$. So cancellation can only happen if $d_1$, $d_2$, and
possibly $d_3$ are all 2. Such a cancellation will affect $m_n$ only
if all three terms of degree $\alpha-2$ cancel, that is, if
$d_1=d_2=d_3=2$ and all possible cancellation occurs, and $d_4 \ge 3$
when $n\ge 4$. Therefore for $i< n$ we have
$m_i= \min \{\sum_{j=1}^i d_j,\alpha-n+i-2\}$, and $m_n$ is
either $\alpha - 1$ or $\alpha - 2$.
If we assume $m_n=\alpha-2$, then there are four cases to consider.
\smallskip
\noindent{\it Case $n \geq 4$:}
We know that
\[
(\alpha-2) \prod_{i=1}^{n-1} m_i
\leq
(\alpha-2) \prod_{i=1}^{n-1} \sum_{j=1}^i d_j,
\]
so we need to show that the rightmost expression is less than or equal
to $n! (d-3)$. Since $d_j \geq 2$, $2i \leq \sum_{j=1}^i d_j$, so
$n! 3
\leq
2^n (n-1)! \leq
2 \prod_{i=1}^{n-1} \sum_{j=1}^i d_j$. Thus
\[
(\alpha-2) \prod_{i=1}^{n-1} \sum_{j=1}^i d_j
\leq n!d - 2 \prod_{i=1}^{n-1} \sum_{j=1}^i d_j
\leq n! d - n! 3 = n!(d-3)
\]
\smallskip
\noindent{\it Case $n=3$, $d_1=d_2=d_3=2$:}
In this case $m_1 = 2$, $m_2 =3$, and $m_3=4$, so we check directly
that
\[
24 = (2)(3)(4) \leq 3! (2^3-3) = 30.
\]
\smallskip
\noindent{\it Case $n=3$, $d_1=d_2=2$, $d_3 > 2$:}
In this case we check directly that $(2)(4)(\alpha-2) \leq 3!(d-3)$.
Since $\alpha= d_3+4$, this inequality holds as long as $d_3 \geq
\frac{17}{8}$, which it is.
\smallskip \noindent{\it Case $n=3$, $d_2 > 2$:} In this case $m_1 \leq d_1$,
$m_2 \leq d_1+d_2$, and $m_3=\alpha-2$. Using the bound for the
complete intersection of type $d_1,d_2,d_3$, we have that
\[
d_1 (d_1+d_2)(\alpha-2) =
d_1 (d_1+d_2)\alpha - 2 d_1 (d_1+d_2) \leq 3! d -18,
\]
which is true if $2 d_1 (d_1+d_2) \geq 18$. But $2 d_1 (d_1+d_2) \geq
2(2)(5) = 20$, so the bound holds.
If on the other hand $m_n = \alpha-1$, then it
must be true that $d_1=d_2=d_3=2$. We know
\[
\prod_{i=1}^{n} m_i \leq (\alpha-1) \prod_{i=1}^{n-1} \sum_{j=1}^i d_j,
\]
and so it suffices to show
\[
\alpha \prod_{i=1}^{n-1} \sum_{j=1}^i d_j - \prod_{i=1}^{n-1} \sum_{j=1}^i
d_j \leq n!d - 3 \,n!,
\]
which would follow from
\[
3 \,n!
\leq \prod_{i=1}^{n-1} \sum_{j=1}^i d_j.
\]
Since $d_4 \geq 2$, we have that
\[
5!3 = 3 \cdot 2 \cdot 3 \cdot 4 \cdot 5
\leq
2 \cdot 4 \cdot 6 \cdot 8
\leq
2 \cdot 4 \cdot 6 \cdot (6+d_4) = \prod_{i=1}^4 \sum_{j=1}^i d_j,
\]
and once $n$ is at least $6$,
$\prod_{i=6}^n i \leq \prod_{i=5}^{n-1} \sum_{j=1}^i
d_j$; hence the desired inequality follows if $n \geq 5$.
If $n=4$, then we check directly. We have that $m_1=2$, $m_2 = 4$,
$m_3= 6$ and $m_4 = d_4+5$. A simple calculation shows
that in fact $4!(8d_4-3) \geq (2)(4)(6)(d_4+5)$ since $d_4 \geq 3$
(recall that cancellation in $m_n$ forces $d_4 \ge 3$ when $n=4$).
If $n=3$, then again we may check directly. We have that $m_1=2$,
$m_2=3$, and $m_3=5$. So we see that $30=3!(8-3) \geq (2)(3)(5)=30$.
\end{proof}
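The two base cases checked directly at the end of the proof can also be verified by machine. A minimal sketch in Python (note that $d_4 \ge 3$ here, by the cancellation analysis earlier in the proof):

```python
from math import factorial

# n = 4 base case: 4!(8 d_4 - 3) >= (2)(4)(6)(d_4 + 5) for d_4 >= 3.
def n4_case(d4):
    return factorial(4) * (8 * d4 - 3) >= 2 * 4 * 6 * (d4 + 5)

assert all(n4_case(d4) for d4 in range(3, 100))
# The inequality genuinely needs d_4 >= 3: it fails at d_4 = 2.
assert not n4_case(2)

# n = 3 base case: 3!(8 - 3) >= (2)(3)(5), with equality (both sides 30).
assert factorial(3) * (8 - 3) == 2 * 3 * 5
```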
\vskip .15in
\noindent{\bf Acknowledgments}
Macaulay~2 computations provided evidence for the results in this
paper. The first author thanks the University of Missouri
for supporting her visit during the fall of 2003, when portions of
this work were performed.
\renewcommand{\baselinestretch}{1.0}
\small\normalsize
\bibliographystyle{amsalpha}
% arXiv:math/0510552 (Commutative Algebra, math.AC), October 2005.
% Title: Betti Numbers and Degree Bounds for Some Linked Zero-Schemes.
% Abstract: In their paper on multiplicity bounds (1998), Herzog and Srinivasan
% study the relationship between the graded Betti numbers of a homogeneous
% ideal I in a polynomial ring R and the degree of I. For certain classes of
% ideals, they prove a bound on the degree in terms of the largest and smallest
% Betti numbers, generalizing results of Huneke and Miller (1985). The bound is
% conjectured to hold in general; we study this using linkage. If R/I is
% Cohen-Macaulay, we may reduce to the case where I defines a zero-dimensional
% subscheme Y. If Y is residual to a zero-scheme Z of a certain type (low
% degree or points in special position), then we show that the conjecture is
% true for I_Y.
% arXiv:1106.4598. Title: Inverse problems for Jacobi operators II: Mass
% perturbations of semi-infinite mass-spring systems.
% Abstract: We consider an inverse spectral problem for infinite linear
% mass-spring systems with different configurations obtained by changing the
% first mass. We give results on the reconstruction of the system from the
% spectra of two configurations. Necessary and sufficient conditions for two
% real sequences to be the spectra of two modified systems are provided.
\section{Introduction}
\label{sec:intro}
In this work we treat the two spectra inverse problem for Jacobi
operators in $l_2(\mathbb{N})$. The Jacobi operators considered here are
obtained from each other by a particular kind of rank-two
perturbation. The special form of the perturbation has a physical
motivation; it is the extension to the semi-infinite case of an
inverse problem for finite mass-spring systems studied in
\cite{delrio-kudryavtsev} and \cite{Ram}.
The Jacobi operator $J$ in the Hilbert space $l_2(\mathbb{N})$ is the operator
whose matrix representation with respect to the canonical basis in
$l_2(\mathbb{N})$ is a semi-infinite Jacobi matrix of the form
\begin{equation}
\label{eq:jm-0}
\begin{pmatrix}
q_1 & b_1 & 0 & 0 & \cdots
\\[1mm] b_1 & q_2 & b_2 & 0 & \cdots \\[1mm] 0 & b_2 & q_3 &
b_3 & \\
0 & 0 & b_3 & q_4 & \ddots\\ \vdots & \vdots & & \ddots
& \ddots
\end{pmatrix}\,,
\end{equation}
where $q_n\in\mathbb{R}$ and $b_n>0$ for every $n\in\mathbb{N}$ (see
\cite{MR1255973} for the definition of the matrix representation of an
unbounded symmetric operator). $J$ is closed by definition and it may
be self-adjoint or have deficiency indices (1,1). In this work we deal
with self-adjoint operators, so, if $J\ne J^*$, we consider its
self-adjoint extensions denoted $J^{(g)}$, where
$g\in\mathbb{R}\cup\{\infty\}$ (see Definition~\ref{def:s-a-ext-def} a)).
If $J=J^*$ we assume $J^{(g)}=J$ for all $g\in\mathbb{R}\cup\{\infty\}$
(see Definition~\ref{def:s-a-ext-def} b)).
The two spectra inverse problem for Jacobi operators $J^{(g)}$ takes
as input data the spectra of two operators in an operator family
obtained by perturbing $J^{(g)}$ in a certain way. Solving the problem
means recovering the matrix (\ref{eq:jm-0}) and, if necessary, the
``boundary condition at infinity'' $g$. The case of the
operator family consisting of rank-one perturbations of a self-adjoint
Jacobi operator has been amply studied in
\cite{MR49:9676,MR499269,MR0221315} and, in the more general setting
of rank-one perturbations of $J^{(g)}$, in
\cite{weder-silva,MR1643529}. Rank-one perturbations can be seen as a
change of the ``boundary condition at the origin'' for the
corresponding difference equation (see \cite[Appendix]{weder-silva}).
We remark that the case of finite Jacobi matrices has also been
thoroughly studied (see
\cite{Chu-Golub,deBoor-Golub,MR1616422,MR2102477,MR0382314}).
It is known that the dynamics of a finite mass-spring system is
characterized by the spectral properties of a finite Jacobi matrix
\cite{MR2102477}. Accordingly, in solving the inverse problem for
mass-spring systems mentioned above, \cite{Ram} provides necessary and
sufficient conditions for two point sets to be the spectra of two
finite Jacobi matrices corresponding to two mass-spring systems, one
of which has a mass and a spring modified. The results of \cite{Ram}
are related to the study of microcantilevers
\cite{spletzer-et-al1,spletzer-et-al2}, which are modeled by a
spring-mass system whose masses and spring constants correspond to
the mechanical parameters of the system. The inverse problem treated
in \cite{Ram} could be used as a theoretical framework for the problem
of measuring micromasses with the help of microcantilevers
\cite{spletzer-et-al1,spletzer-et-al2}.
Let us consider a semi-infinite spring-mass system with masses
$\{m_j\}_{j=1}^\infty$ and spring constants $\{k_j\}_{j=1}^\infty$ as
in Fig.~\ref{fig:1}.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}
[mass1/.style={rectangle,draw=black!80,fill=black!13,thick,inner sep=0pt,
minimum size=7mm},
mass2/.style={rectangle,draw=black!80,fill=black!13,thick,inner sep=0pt,
minimum size=5.7mm},
mass3/.style={rectangle,draw=black!80,fill=black!13,thick,inner sep=0pt,
minimum size=7.7mm},
wall/.style={postaction={draw,decorate,decoration={border,angle=-45,
amplitude=0.3cm,segment length=1.5mm}}}]
\node (mass3) at (7.1,1) [mass3] {\footnotesize$m_3$};
\node (mass2) at (4.25,1) [mass2] {\footnotesize$\,m_2$};
\node (mass1) at (2.2,1) [mass1] {\footnotesize$m_1$};
\draw[decorate,decoration={coil,aspect=0.4,segment
length=2.1mm,amplitude=1.8mm}] (0,1) -- node[below=4pt]
{\footnotesize$k_1$} (mass1);
\draw[decorate,decoration={coil,aspect=0.4,segment
length=1.5mm,amplitude=1.8mm}] (mass1) -- node[below=4pt]
{\footnotesize$k_2$} (mass2);
\draw[decorate,decoration={coil,aspect=0.4,segment
length=2.5mm,amplitude=1.8mm}] (mass2) -- node[below=4pt]
{\footnotesize$k_3$} (mass3);
\draw[decorate,decoration={coil,aspect=0.4,segment
length=2.1mm,amplitude=1.8mm}] (mass3) -- node[below=4pt]
{\footnotesize$k_4$} (9.3,1);
\draw[line width=.8pt,loosely dotted] (9.4,1) -- (9.8,1);
\draw[line width=.5pt,wall](0,1.7)--(0,0.3);
\end{tikzpicture}
\end{center}
\caption{Semi-infinite mass-spring system}\label{fig:1}
\end{figure}
By standard reasoning (see
\cite{MR2102477,marchenko-new,mono-marchenko}) one verifies that the
infinite system of Fig.~\ref{fig:1} is modeled by the spectral
properties of the Jacobi operator $J$ with
\begin{equation}
\label{eq:spring-mass}
q_j = -\frac{k_{j+1}+k_j}{m_j}\,, \qquad
b_j=\frac{k_{j+1}}{\sqrt{m_j m_{j+1}}}\,,
\qquad j\in\mathbb{N}\,.
\end{equation}
We remark that in \cite{MR2102477,marchenko-new,mono-marchenko} the obtained
matrix corresponds to $-J$. An alternative physical interpretation is
provided by a one dimensional harmonic crystal \cite[Sec. 1.5]{MR1711536}.
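For a finite truncation one can check (\ref{eq:spring-mass}) numerically. A small sketch in Python with NumPy; the sample masses and spring constants are arbitrary illustrative values, not taken from the text:

```python
import numpy as np

def jacobi_matrix(masses, springs):
    """N x N truncation of the Jacobi matrix built from masses m_1..m_N
    and spring constants k_1..k_{N+1} via the formulas above."""
    m, k = np.asarray(masses, float), np.asarray(springs, float)
    N = len(m)
    q = -(k[1:N + 1] + k[:N]) / m                    # diagonal q_j
    b = k[1:N] / np.sqrt(m[:-1] * m[1:])             # off-diagonal b_j
    return np.diag(q) + np.diag(b, 1) + np.diag(b, -1)

J = jacobi_matrix(np.linspace(1.0, 2.0, 8), np.linspace(1.0, 3.0, 9))
assert np.allclose(J, J.T)                  # J is symmetric
assert np.all(np.linalg.eigvalsh(J) < 0)    # here -J is positive definite
```

The negativity of the eigenvalues reflects the sign convention noted above: the matrix obtained from the mechanical system in the cited references is $-J$, which is positive definite for positive masses and spring constants.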
In this work we assume that the spectrum of $J^{(g)}$ is discrete (if
$J\ne J^*$ this is always the case). Below, in Remarks
\ref{rem:discrete-spectrum} and \ref{rem:nonselfadjoint} we comment on
matrices of the form (\ref{eq:jm-0}) whose corresponding operator
$J^{(g)}$ has discrete spectrum.
The discreteness of $\sigma(J^{(g)})$ implies that the motion of our
mechanical system is a superposition of harmonic oscillations whose
frequencies are the square roots of the absolute values of the eigenvalues.
Along with the self-adjoint operator $J^{(g)}$ we consider the family
of operators $J^{(g)}(\theta)$ ($\theta>0$), which are self-adjoint
extensions of the Jacobi operator whose matrix representation with
respect to the canonical basis in $l_2(\mathbb{N})$ is
\begin{equation}
\label{eq:jm-theta}
\begin{pmatrix}
\theta^2q_1 & \theta b_1 & 0 & 0 & \cdots
\\[1mm] \theta b_1 & q_2 & b_2 & 0 & \cdots \\[1mm] 0 & b_2 & q_3 &
b_3 & \\
0 & 0 & b_3 & q_4 & \ddots\\ \vdots & \vdots & & \ddots
& \ddots
\end{pmatrix}\,.
\end{equation}
$J^{(g)}(\theta)$ ($\theta>0$) will be the family of perturbed Jacobi
operators. Note that the operators of the family are not obtained from
each other by a rank-one perturbation (see (\ref{eq:rank-two}) below).
Going from $J^{(g)}$ to $J^{(g)}(\theta)$ corresponds to changing the
first mass by $\Delta m=m_1(\theta^{-2}-1)$. In other words,
$\theta^2$ is the ratio of the original mass $m_1$ to the new mass
$m_1+\Delta m$. This is illustrated in Fig.~\ref{fig:2}. It is worth
mentioning that we also consider here the cases when $\Delta m<0$,
equivalently, $\theta>1$, although physical applications correspond to
$\theta<1$ \cite{spletzer-et-al1,spletzer-et-al2}.
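The identification of the corner scaling in (\ref{eq:jm-theta}) with a change of the first mass can be verified numerically on a truncation. A sketch in Python with NumPy; the `jacobi_matrix` helper mirrors (\ref{eq:spring-mass}), and the sample data are hypothetical:

```python
import numpy as np

def jacobi_matrix(masses, springs):
    """Truncated Jacobi matrix from masses m_1..m_N, springs k_1..k_{N+1}."""
    m, k = np.asarray(masses, float), np.asarray(springs, float)
    N = len(m)
    q = -(k[1:N + 1] + k[:N]) / m
    b = k[1:N] / np.sqrt(m[:-1] * m[1:])
    return np.diag(q) + np.diag(b, 1) + np.diag(b, -1)

m = np.array([1.0, 2.0, 1.5, 3.0])
k = np.array([1.0, 2.5, 0.5, 1.0, 2.0])
theta = 0.7

# Scale the corner of J as in the perturbed matrix above ...
J_theta = jacobi_matrix(m, k)
J_theta[0, 0] *= theta**2      # theta^2 q_1
J_theta[0, 1] *= theta         # theta b_1
J_theta[1, 0] *= theta

# ... and compare with replacing m_1 by m_1/theta^2 = m_1 + Delta m.
m_new = m.copy()
m_new[0] = m[0] / theta**2
assert np.allclose(J_theta, jacobi_matrix(m_new, k))
```

Only the entries $q_1$ and $b_1$ involve $m_1$, which is why the perturbation is confined to the upper-left corner of the matrix.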
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}
[mass1/.style={rectangle,draw=black!80,fill=black!13,thick,inner sep=0pt,
minimum size=7mm},
mass2/.style={rectangle,draw=black!80,fill=black!13,thick,inner sep=0pt,
minimum size=5.7mm},
mass3/.style={rectangle,draw=black!80,fill=black!13,thick,inner sep=0pt,
minimum size=7.7mm},
dmass/.style={rectangle,draw=black!80,fill=black!13,thick,inner sep=0pt,
minimum size=5mm},
wall/.style={postaction={draw,decorate,decoration={border,angle=-45,
amplitude=0.3cm,segment length=1.5mm}}}]
\node (mass3) at (7.1,1) [mass3] {\footnotesize$m_3$};
\node (mass2) at (4.25,1) [mass2] {\footnotesize$\,m_2$};
\node (mass1) at (2.2,1) [mass1] {\footnotesize$m_1$};
\node (dmass) at (2.2,1.6) [dmass] {\scriptsize$\,\Delta m\,$};
\draw[decorate,decoration={coil,aspect=0.4,segment
length=2.1mm,amplitude=1.8mm}] (0,1) -- node[below=4pt]
{\footnotesize$k_1$} (mass1);
\draw[decorate,decoration={coil,aspect=0.4,segment
length=1.5mm,amplitude=1.8mm}] (mass1) -- node[below=4pt]
{\footnotesize$k_2$} (mass2);
\draw[decorate,decoration={coil,aspect=0.4,segment
length=2.5mm,amplitude=1.8mm}] (mass2) -- node[below=4pt]
{\footnotesize$k_3$} (mass3);
\draw[decorate,decoration={coil,aspect=0.4,segment
length=2.1mm,amplitude=1.8mm}] (mass3) -- node[below=4pt]
{\footnotesize$k_4$} (9.3,1);
\draw[line width=.8pt,loosely dotted] (9.4,1) -- (9.8,1);
\draw[line width=.5pt,wall](0,1.8)--(0,0.4);
\end{tikzpicture}
\end{center}
\caption{Perturbed semi-infinite mass-spring system}\label{fig:2}
\end{figure}
The problem of reconstructing the initial and the perturbed matrices
from their spectra can then be interpreted, from the physical point of
view, as the problem of finding the mechanical parameters of the
spring-mass system from the frequencies of its oscillations before
and after the modification.
We emphasize that, although the operators and the particular kind of
perturbation considered here were motivated by a physical system, the
work is carried out in the general mathematical setting throughout. Thus,
the entries in (\ref{eq:jm-0}) are subject to no restriction other than $J$
being a Jacobi operator ($q_n\in\mathbb{R}$, $b_n>0$) and $J^{(g)}$ having
discrete spectrum (see Remarks \ref{rem:discrete-spectrum},
\ref{rem:nonselfadjoint}). Note that $J$ is then not necessarily
semibounded, though it is when $J$ corresponds to a
mass-spring system.
This work is organized as follows. In Section~\ref{sec:preliminaries}
we lay down the notation, introduce the Jacobi operator and its
perturbations, and present some preparatory facts related to the
inverse spectral problems of such
operators. Section~\ref{sec:direct-problem} gives an account of the
spectral properties of the family of perturbed Jacobi operators
$J^{(g)}(\theta)$. The problem of reconstruction is treated in
Section~\ref{sec:reconstruction}. This section gives some necessary
conditions for the spectra of $J^{(g)}(\theta)$, provides an algorithm
for reconstructing the matrix, and establishes uniqueness of the
reconstruction. Finally, Section~\ref{sec:nec-suf} gives necessary
and sufficient conditions for two sequences of real numbers to be the
spectra of $J^{(g)}$ and its perturbation $J^{(g)}(\theta)$
($\theta\ne 1$).
\section{Preliminaries}
\label{sec:preliminaries}
Let $\Upsilon$ be a second order symmetric difference
expression such that for any sequence $f=\{f_k\}_{k=1}^\infty$
\begin{align}
\label{eq:initial-spectral}
(\Upsilon f)_1&:= q_1 f_1 + b_1 f_2\,,\\
\label{eq:recurrence-spectral}
(\Upsilon f)_k&:= b_{k-1}f_{k-1} + q_k f_k + b_kf_{k+1}\,,
\quad k \in \mathbb{N} \setminus \{1\},
\end{align}
where, for $n\in\mathbb{N}$, $b_n$ is positive and $q_n$ is real. Let
$l_{\rm{fin}}(\mathbb{N})$ be the linear space of complex sequences with a
finite number of non-zero elements. In the Hilbert space
$l_2(\mathbb{N})$, let us consider the operator whose domain is
$l_{\rm{fin}}(\mathbb{N})$ and which acts according to the expression
$\Upsilon$. This operator is symmetric, being densely defined and
Hermitian, and is therefore closable. Let $J$ be the closure of
this operator.
We have defined the operator $J$ so that the semi-infinite Jacobi
matrix (\ref{eq:jm-0}) is its matrix representation with respect to
the canonical basis $\{\delta_n\}_{n=1}^\infty$ in $l_2(\mathbb{N})$
(see \cite[Sec. 47]{MR1255973} for the definition of the matrix
representation of an unbounded symmetric operator). Indeed, $J$ is the
minimal closed symmetric operator satisfying
\begin{equation*}
\begin{array}{l}
\inner{\delta_n}{J\delta_n}=q_n\,,\quad
\inner{\delta_{n+1}}{J\delta_n}=\inner{\delta_n}{J\delta_{n+1}}=b_n\,,\\
\inner{J\delta_n}{\delta_{n+k}}
=\inner{\delta_{n}}{J\delta_{n+k}}=0\,,
\end{array}
\quad n\in\mathbb{N}\,,\,k\in\mathbb{N}\setminus\{1\}\,.
\end{equation*}
We shall refer to $J$ as the \emph{Jacobi operator} and to
(\ref{eq:jm-0}) as its associated matrix.
The operator $J^*$ turns out to be given by
\begin{equation*}
\dom(J^*)=\{f\in l_2(\mathbb{N}):\Upsilon f\in l_2(\mathbb{N})\},
\qquad J^*f=\Upsilon f\,,
\end{equation*}
which follows directly from the definition of $J$ \cite[Chap.\,4
Sec.\,1.1]{MR0184042}, \cite[Thm.\,2.7]{MR1627806}.
Given the complex number $f_1$, the solution of
the difference equation
\begin{equation*}
\Upsilon f= \zeta f\,,
\qquad \zeta \in \mathbb{C}\,,
\end{equation*}
is uniquely determined by recurrence from (\ref{eq:initial-spectral}) and
(\ref{eq:recurrence-spectral}). For the elements of this
solution when $f_1=1$, the following notation is standard
\cite[Chap. 1, Sec. 2.1]{MR0184042}
\begin{equation*}
P_{k-1}(\zeta):=f_k\,,\qquad k\in\mathbb{N}\,,
\end{equation*}
where the polynomial $P_k(\zeta)$ (of degree $k$) is referred
to as the $k$-th orthogonal polynomial of the first kind
associated with the matrix (\ref{eq:jm-0}). Now, let us
solve the difference equation
\begin{equation*}
(\Upsilon f)_k= \zeta f_k, \qquad k\in \mathbb{N} \setminus \{1\}
\end{equation*}
under the assumption that $f_1=0$ and $f_2=b_1^{-1}$, and define
\begin{equation*}
Q_{k-1}(\zeta):=f_k\,,\qquad k\in\mathbb{N}\,.
\end{equation*}
The function $Q_k(\zeta)$ is a polynomial of degree $k-1$ and is called
the $k$-th orthogonal polynomial of the second kind associated with the
matrix (\ref{eq:jm-0}).
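As a concrete illustration (not part of the mathematical development), the two recurrences can be run numerically. In the following sketch the entries $q_k$, $b_k$ and the point $\zeta$ are arbitrary choices, and the constancy of the standard Wronskian-type expression $b_k(P_{k-1}Q_k-P_kQ_{k-1})=1$ serves as a consistency check:

```python
import numpy as np

def first_second_kind(q, b, zeta, n):
    """Polynomials of the first kind P_0..P_{n-1} and of the second kind
    Q_0..Q_{n-1}, evaluated at zeta, for the Jacobi matrix with main
    diagonal q = (q_1, q_2, ...) and off-diagonal b = (b_1, b_2, ...)."""
    P = [1.0, (zeta - q[0]) / b[0]]   # f_1 = 1 and (Upsilon f)_1 = zeta f_1
    Q = [0.0, 1.0 / b[0]]             # f_1 = 0, f_2 = 1/b_1
    for k in range(2, n):
        # (Upsilon f)_k = zeta f_k solved for f_{k+1}; note P_k = f_{k+1}
        P.append(((zeta - q[k - 1]) * P[k - 1] - b[k - 2] * P[k - 2]) / b[k - 1])
        Q.append(((zeta - q[k - 1]) * Q[k - 1] - b[k - 2] * Q[k - 2]) / b[k - 1])
    return np.array(P), np.array(Q)

q = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [1.0, 1.0, 1.0, 1.0]
P, Q = first_second_kind(q, b, zeta=0.5, n=5)
# Wronskian-type identity: b_k (P_{k-1} Q_k - P_k Q_{k-1}) = 1 for every k
wronskians = [b[k - 1] * (P[k - 1] * Q[k] - P[k] * Q[k - 1]) for k in range(1, 5)]
```
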
The sequence $P(\zeta):=\{P_{k-1}(\zeta)\}_{k=1}^\infty$ is not in
$l_{\rm{fin}}(\mathbb{N})$, but it may happen that
\begin{equation}
\label{eq:generalized-eigenvector}
\sum_{k=0}^\infty\abs{P_k(\zeta)}^2<\infty\,,
\end{equation}
in which case $P(\zeta)\in\ker(J^*-\zeta I)$. Since $J$ is symmetric,
if the series in (\ref{eq:generalized-eigenvector}) is convergent for
one $\zeta$ in the upper half plane $\mathbb{C}_+$ (the lower half plane
$\mathbb{C}_-$), then it is convergent in all $\mathbb{C}_+$
($\mathbb{C}_-$). Actually, because of the reality of the coefficients
of $P_{k-1}(\zeta)$ for all $k\in\mathbb{N}$, the series in
(\ref{eq:generalized-eigenvector}) will then be convergent in all
$\mathbb{C}\setminus\mathbb{R}$ and $J$ has deficiency indices $(1,1)$. When
the series in (\ref{eq:generalized-eigenvector}) is divergent for one
$\zeta$ in $\mathbb{C}\setminus\mathbb{R}$, $J$ has deficiency indices
$(0,0)$ and the operator is self-adjoint since $J$ is closed. There
are known conditions on the matrix (\ref{eq:jm-0}) which guarantee
that $J$ is self-adjoint \cite[Addenda\,1]{MR0184042},
\cite[Chap.\,7,\,Thms.\,1.2--1.4]{MR0222718}.
We now introduce the operators that will be at the center of our
considerations in this work.
\begin{definition}
\label{def:s-a-ext-def}
Let the operator $J^{(g)}$ be defined as follows:
\begin{enumerate}[a)]
\item In case $J\ne J^*$, define the sequence
$v(g)=\{v_k(g)\}_{k=1}^\infty$ such that $\forall k\in\mathbb{N}$
\begin{equation*}
v_k(g):=P_{k-1}(0)+gQ_{k-1}(0)\,,
\quad g\in\mathbb{R}
\end{equation*}
and
\begin{equation*}
v_k(\infty):=Q_{k-1}(0)\,.
\end{equation*}
Let $J^{(g)}$ be the restriction of
$J^*$ to the set
\begin{equation*}
\left\{f=\{f_k\}_{k\in\mathbb{N}}\in\dom(J^*):
\lim_{k\to\infty}b_k(v_k(g)f_{k+1}-f_kv_{k+1}(g))=0\right\}\,.
\end{equation*}
As $g$ runs over $\mathbb{R}\cup\{\infty\}$, $J^{(g)}$ runs over all self-adjoint
extensions of $J$; moreover, different values of $g$ yield different
self-adjoint extensions \cite[Lemma 2.20]{MR1711536}.
\item In case $J=J^*$, let us define $J^{(g)}:=J$ for all
$g\in\mathbb{R}\cup\{\infty\}$.
\end{enumerate}
\end{definition}
Alongside the operator $J^{(g)}$ we consider the operators $J_n^{(g)}$
($n\in\mathbb{N}$) in the Hilbert space
$l_2(\mathbb{N})\ominus\Span\{\delta_1,\dots,\delta_n\}$ defined by
restricting $J^{(g)}$ to
$l_2(\mathbb{N})\ominus\Span\{\delta_1,\dots,\delta_n\}$. Thus, $J_n^{(g)}$
is a self-adjoint extension of the Jacobi operator whose associated
matrix is (\ref{eq:jm-0}) with the first $n$ columns and $n$ rows
removed.
Finally we introduce the perturbed operators $J^{(g)}(\theta)$. They
are defined as follows. Consider $J^{(g)}$ with fixed
$g\in\mathbb{R}\cup\{\infty\}$ and take any $\theta>0$. Then
\begin{equation}
\label{eq:rank-two}
J^{(g)}(\theta):=J^{(g)} +
q_1(\theta^2-1)\inner{\delta_1}{\cdot}\delta_1 +
b_1(\theta-1)(\inner{\delta_1}{\cdot}\delta_2 +
\inner{\delta_2}{\cdot}\delta_1)\,,
\end{equation}
where we take the inner product to be antilinear in its first
argument. By this definition $J^{(g)}(\theta)$ is a self-adjoint
extension of the Jacobi operator whose associated matrix is
(\ref{eq:jm-theta}). Note that $J^{(g)}(\theta)$ is a finite-rank
perturbation of $J^{(g)}$ and thus $\dom(J^{(g)})=\dom(J^{(g)}(\theta))$.
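For a quick sanity check that (\ref{eq:rank-two}) is indeed a rank-two perturbation, one can compare finite truncations of (\ref{eq:jm-0}) and (\ref{eq:jm-theta}); a minimal sketch, with arbitrary illustrative entries:

```python
import numpy as np

def jacobi(q, b):
    """n x n truncation of a Jacobi matrix with diagonal q, off-diagonal b."""
    return np.diag(q) + np.diag(b, 1) + np.diag(b, -1)

q, b, theta = [2.0, -1.0, 3.0, 0.5], [1.0, 2.0, 1.5], 0.7
J = jacobi(q, b)
# truncation of J(theta): first diagonal entry theta^2 q_1,
# first off-diagonal entry theta b_1, all other entries unchanged
Jt = jacobi([theta**2 * q[0]] + q[1:], [theta * b[0]] + b[1:])
D = Jt - J   # should be exactly the perturbation in (eq:rank-two)
```
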
Fix $g\in\mathbb{R}\cup\{\infty\}$ and take the
resolution of the identity $E^{(g)}(t)$ of $J^{(g)}$, so
\begin{equation*}
J^{(g)}=\int_\mathbb{R}tdE^{(g)}(t)\,.
\end{equation*}
Since $J^{(g)}$ is simple \cite[Sec.\,2.2\,Chap.\,4]{MR0184042}, it is
particularly useful to consider
the function
\begin{equation}
\label{eq:spectral-measure}
\rho^{(g)}(t):=\inner{\delta_1}{E^{(g)}(t)\delta_1}\,,\quad t\in\mathbb{R}\,.
\end{equation}
It turns out that all the moments of the measure generated by $\rho^{(g)}$
are finite \cite[Thm.\,4.1.3]{MR0184042}, that is,
\begin{equation}
\label{eq:moments}
s_k=\int_\mathbb{R} t^kd\rho^{(g)}(t)<\infty\qquad\forall k\in\mathbb{N}\cup\{0\}\,,
\end{equation}
and the polynomials are
dense in $L_2(\mathbb{R},d\rho^{(g)})$ \cite[Thms.\,2.3.2,\,4.1.4]{MR0184042},
\cite[Prop.\,4.15]{MR1627806}.
In this work we also make use of the so-called Weyl $m$-function
\begin{equation}
\label{eq:m-function}
m^{(g)}(\zeta):=\inner{\delta_1}{(J^{(g)}-\zeta I)^{-1}
\delta_1}\,,\quad\zeta\not\in\sigma(J^{(g)})\,.
\end{equation}
The functions (\ref{eq:spectral-measure}) and (\ref{eq:m-function})
are related by the Borel transform, viz.,
\begin{equation*}
m^{(g)}(\zeta) =\int_{\mathbb{R}}\frac{d\rho^{(g)}(t)}{t-\zeta}\,,
\end{equation*}
so $m^{(g)}$ is a Herglotz function, i.\,e.,
\begin{equation*}
\frac{\im m^{(g)}(\zeta)}{\im \zeta}>0\,,\qquad\im\zeta>0\,.
\end{equation*}
Using the von Neumann expansion for the resolvent
(cf.\ \cite[Chap.\,6,\,Sec.\,6.1]{MR1711536})
\begin{equation*}
(J^{(g)}-\zeta I)^{-1}=
-\sum_{k=0}^{N-1}\frac{(J^{(g)})^k}{\zeta^{k+1}}
+\frac{(J^{(g)})^N}{\zeta^{N}}
(J^{(g)}-\zeta I)^{-1}\,,
\end{equation*}
where $\zeta\in\mathbb{C}\setminus\sigma(J^{(g)})$,
one can easily obtain the following asymptotic formula
\begin{equation}
\label{eq:m-asympt}
m^{(g)}(\zeta)=-\frac{1}{\zeta}-\frac{q_1}{\zeta^2}
-\frac{b_1^2+q_1^2}{\zeta^3}
+O(\zeta^{-4})\,,
\end{equation}
as $\zeta\to\infty$ ($\im \zeta\ge \epsilon$, $\epsilon>0$).
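The expansion (\ref{eq:m-asympt}) can be probed numerically; the sketch below uses a finite truncation as a stand-in for $J^{(g)}$ (an assumption of the illustration, with arbitrary entries) and compares the resolvent matrix element with the three displayed terms:

```python
import numpy as np

q, b = [2.0, -1.0, 3.0, 0.5], [1.0, 2.0, 1.5]
J = np.diag(q) + np.diag(b, 1) + np.diag(b, -1)

def m(zeta):
    """<delta_1, (J - zeta)^(-1) delta_1> for the truncated matrix."""
    e1 = np.zeros(len(q)); e1[0] = 1.0
    return e1 @ np.linalg.solve(J - zeta * np.eye(len(q)), e1)

zeta = 1.0e3
three_terms = -1/zeta - q[0]/zeta**2 - (b[0]**2 + q[0]**2)/zeta**3
```
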
The inverse Stieltjes transform allows one to recover the spectral
function (\ref{eq:spectral-measure}) from its corresponding Weyl
$m$-function (\ref{eq:m-function}), so the two are in one-to-one
correspondence. Furthermore, either (\ref{eq:spectral-measure}) or
(\ref{eq:m-function}) uniquely determines the Jacobi operator
$J^{(g)}$, i.\,e., the matrix (\ref{eq:jm-0}) and the parameter $g$ in
the non-self-adjoint case. Indeed, there are two general methods for
recovering the matrix (\ref{eq:jm-0}) that work without any assumption
on the spectrum. One method, developed in \cite{MR1616422} (see also
\cite{MR1643529}), makes use of the asymptotic behavior of the Weyl
$m$-function and the Riccati equation \cite[Eq.\,2.15]{MR1616422},
\cite[Eq.\,2.23]{MR1643529},
\begin{equation}
\label{eq:riccati}
b_n^2 m_n^{(g)}(\zeta)=
q_n-\zeta-\frac{1}{m_{n-1}^{(g)}(\zeta)}\,,\quad n\in\mathbb{N}\,,
\end{equation}
where $m_n^{(g)}(\zeta)$ is the Weyl $m$-function of the
Jacobi operator $J_n^{(g)}$ ($m_0^{(g)}=m^{(g)}$).
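The Riccati relation (\ref{eq:riccati}) holds verbatim for finite truncations, which makes it easy to illustrate the first step of this reconstruction method; a hedged sketch with arbitrary entries and an arbitrary non-real point $\zeta$:

```python
import numpy as np

def m_func(q, b, zeta):
    """Weyl m-function of a truncated Jacobi matrix at a non-real zeta."""
    J = np.diag(q) + np.diag(b, 1) + np.diag(b, -1)
    e1 = np.zeros(len(q)); e1[0] = 1.0
    return e1 @ np.linalg.solve(J - zeta * np.eye(len(q)), e1)

q, b = [2.0, -1.0, 3.0, 0.5, 1.0], [1.0, 2.0, 1.5, 0.8]
zeta = 0.3 + 1.0j
m0 = m_func(q, b, zeta)              # m-function of J
m1 = m_func(q[1:], b[1:], zeta)      # m-function of J_1 (first row/column removed)
# Riccati step for n = 1: b_1^2 m_1 = q_1 - zeta - 1/m_0, so q_1 and b_1
# can be peeled off from m_0 and the procedure repeated
lhs = b[0]**2 * m1
rhs = q[0] - zeta - 1/m0
```
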
The other method of reconstruction (see
\cite[Chap.\,7,\,Sec.\,1.5]{MR0222718} and, particularly,
\cite[Chap.\,7,\,Thm.\,1.11]{MR0222718}) has its starting point in the
sequence $\{t^k\}_{k=0}^\infty$, $t\in\mathbb{R}$. From
(\ref{eq:moments}) all the elements of the sequence
$\{t^k\}_{k=0}^\infty$ are in $L_2(\mathbb{R},d\rho^{(g)})$ and one
can apply, in this Hilbert space, the Gram-Schmidt procedure of
orthonormalization to the sequence $\{t^k\}_{k=0}^\infty$. One thus
obtains a sequence of polynomials $\{P_k(t)\}_{k=0}^\infty$ normalized
and orthogonal in $L_2(\mathbb{R},d\rho^{(g)})$. These polynomials
satisfy a three term recurrence equation
\cite[Chap.\,7,\,Sec.\,1.5]{MR0222718}, \cite[Sec.\,1]{MR1627806}
\begin{align}
\label{eq:favard-system1}
tP_{k-1}(t) &= b_{k-1}P_{k-2}(t) + q_k
P_{k-1}(t) + b_k P_k(t)\,,
\quad k \in \mathbb{N} \setminus \{1\}\,,\\
\label{eq:favard-system2}
tP_0(t) &= q_1 P_0(t) + b_1 P_1(t)\,,
\end{align}
where all the coefficients $b_k$ ($k\in\mathbb{N}$) turn out to be
positive and $q_k$ ($k\in\mathbb{N}$) are real numbers. The system
(\ref{eq:favard-system1}) and (\ref{eq:favard-system2}) defines a
Jacobi matrix which is the matrix representation of either
$J^{(g)}$ or a restriction of $J^{(g)}$ depending on whether $J=
J^*$ or not.
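In a finite model this second method can be carried out exactly: the measure generated by $\rho^{(g)}$ has atoms at the eigenvalues with masses equal to the squared first components of the normalized eigenvectors, and the Gram-Schmidt procedure (here implemented on polynomial values at the atoms) returns the matrix entries. A minimal sketch, with an arbitrary test matrix:

```python
import numpy as np

def reconstruct(lam, w, n):
    """Recover q_1..q_n and b_1..b_{n-1} by orthonormalizing 1, t, t^2, ...
    in L2 of the discrete measure with atoms lam and masses w; polynomials
    are represented by their values at the atoms."""
    ip = lambda f, g: np.sum(w * f * g)
    P_prev, b_prev = np.zeros_like(lam), 0.0
    P = np.ones_like(lam) / np.sqrt(np.sum(w))      # normalized P_0
    q, b = [], []
    for k in range(n):
        q.append(ip(lam * P, P))                    # q_{k+1} from the recurrence
        if k < n - 1:
            R = lam * P - q[-1] * P - b_prev * P_prev
            b.append(np.sqrt(ip(R, R)))             # b_{k+1} > 0
            P_prev, P, b_prev = P, R / b[-1], b[-1]
    return q, b

q0, b0 = [2.0, -1.0, 3.0, 0.5], [1.0, 2.0, 1.5]
J = np.diag(q0) + np.diag(b0, 1) + np.diag(b0, -1)
lam, V = np.linalg.eigh(J)
w = V[0, :]**2                  # masses of the spectral measure
q, b = reconstruct(lam, w, 4)   # should reproduce q0 and b0
```
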
The function (\ref{eq:m-function}), equivalently
(\ref{eq:spectral-measure}), determines the parameter $g$
which defines the self-adjoint extension when the
reconstructed matrix turns out to be the matrix representation of a
non-self-adjoint operator. Indeed, consider a pole $\gamma$ of
$m^{(g)}$ (there is always one when $J\ne J^*$) and evaluate
$P_k(\gamma)$, $k\in\mathbb{N}$. Then either
\begin{equation*}
\lim_{k\to\infty}b_k(Q_{k-1}(0)P_{k}(\gamma)-P_{k-1}(\gamma)Q_{k}(0))=0\,,
\end{equation*}
which means that $g=\infty$, or
\begin{equation*}
g=-\frac{\lim_{k\to\infty}b_k(P_{k-1}(0)P_{k}(\gamma)-P_{k-1}(\gamma)P_{k}(0))}
{\lim_{k\to\infty}b_k(Q_{k-1}(0)P_{k}(\gamma)-P_{k-1}(\gamma)Q_{k}(0))}\,.
\end{equation*}
The details of this recipe are explained for instance in
\cite[Sec.\,2]{weder-silva}.
Since any simple self-adjoint operator in an infinite dimensional
Hilbert space is unitarily equivalent to some operator $J=J^*$
\cite[Thm.\,4.2.3]{MR0184042}, \cite[Sec.\,69]{MR1255973}, in the case
$J=J^*$, $\sigma(J^{(g)})$ may be any non-empty closed infinite set in
$\mathbb{R}$. In particular $J^{(g)}$ may have discrete spectrum, that is,
$\sigma_{ess}(J^{(g)})=\emptyset$. When $J\ne J^*$, this is always the
case; that is, all self-adjoint extensions $J^{(g)}$ of the
non-self-adjoint operator $J$ have discrete spectrum
\cite[Lem.\,2.19]{MR1711536}.
Assume that $J$ has discrete spectrum (this always happens when $J\ne
J^*$), so the spectrum is a sequence of real numbers,
$\{\lambda_k\}_k$, without finite points of accumulation.
The simplicity of $J^{(g)}$ implies
that all eigenvalues are of multiplicity one. In this case the
function $\rho^{(g)}(t)$, defined by (\ref{eq:spectral-measure}), can be
written as follows
\begin{equation}
\label{eq:rho-discrete}
\rho^{(g)}(t)=\sum_{\lambda_k< t}\frac{1}{\alpha_k}\,,
\end{equation}
where the coefficients $\{\alpha_k\}_k$ are called the normalizing
constants and according to \cite[Chap.\,7,\,Thm.\,1.17]{MR0222718} are
given by
\begin{equation}
\label{eq:def-normalizing}
\alpha_n=\sum_{k=0}^\infty\abs{P_k(\lambda_n)}^2\,.
\end{equation}
Thus, from (\ref{eq:rho-discrete}) and (\ref{eq:m-function}) one has
that
\begin{equation}
\label{eq:m-discrete}
m^{(g)}(\zeta)=\sum_{k}\frac{1}{\alpha_k(\lambda_k-\zeta)}\,.
\end{equation}
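Formulae (\ref{eq:rho-discrete})--(\ref{eq:m-discrete}) have an exact finite-dimensional analogue in which $\alpha_k$ is the reciprocal of the squared first component of the $k$-th normalized eigenvector; the following sketch (arbitrary entries) checks (\ref{eq:m-discrete}) against the resolvent:

```python
import numpy as np

q, b = [2.0, -1.0, 3.0, 0.5], [1.0, 2.0, 1.5]
n = len(q)
J = np.diag(q) + np.diag(b, 1) + np.diag(b, -1)
lam, V = np.linalg.eigh(J)
alpha = 1.0 / V[0, :]**2       # normalizing constants in the finite model

zeta = 0.1 + 0.5j
e1 = np.zeros(n); e1[0] = 1.0
m_resolvent = e1 @ np.linalg.solve(J - zeta * np.eye(n), e1)
m_series = np.sum(1.0 / (alpha * (lam - zeta)))   # as in (eq:m-discrete)
```
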
\begin{remark}
\label{rem:m-zeros-poles}
In the case of discrete spectrum, the set of poles of the
meromorphic Weyl $m$-function coincides with $\sigma(J^{(g)})$. By
(\ref{eq:riccati}), the set of zeros coincides with
$\sigma(J_1^{(g)})$. The zeros and poles of the Weyl $m$-function
are simple and interlace, as is the case for any meromorphic Herglotz
function. Interlacing means that between two contiguous poles there
is exactly one zero and between two contiguous zeros there is
exactly one pole (see the proof of \cite[Chap.\,7,\,Thm.\,1]{MR589888}).
\end{remark}
\begin{remark}
\label{rem:discrete-simple}
By elementary perturbation theory (Weyl's theorem), $J^{(g)}$ has
discrete spectrum if and only if $J^{(g)}(\theta)$ has discrete
spectrum. Note that $J^{(g)}(\theta)$ has simple spectrum since it
is a self-adjoint extension of a Jacobi operator.
\end{remark}
\begin{remark}
\label{rem:discrete-spectrum}
Let us comment briefly on the criteria for discreteness of
$\sigma(J^{(g)})$ on the basis of the matrix entries in
(\ref{eq:jm-0}) when $J=J^*$. Consider a matrix whose main diagonal
is a sequence $\{q_k\}_{k=1}^\infty$ of pairwise distinct real
numbers without finite accumulation points and the sequence defining
the off-diagonals $\{b_k\}_{k=1}^\infty$ is such that $b_k=o(q_k)$
as $k\to\infty$. Then, it can be shown that $J$ is the sum of the
operator $D$ whose matrix representation is
$\diag\{q_k\}_{k=1}^\infty$ and a perturbation relatively compact
with respect to $D$. By perturbation theory, $J$ is thus self-adjoint
and has discrete spectrum. Of course there are other examples of
self-adjoint Jacobi operators having discrete spectrum and whose
matrix representation diagonals do not satisfy the conditions just
given (see for instance \cite{naboko,toloza-growing-weights}).
\end{remark}
\begin{remark}
\label{rem:nonselfadjoint}
There are conditions on the entries of (\ref{eq:jm-0}) which
guarantee that $J\ne J^*$ (see for instance
\cite[Addenda\,1]{MR0184042} and
\cite[Thm.\,7.1.5]{MR0222718}). Thus, for (\ref{eq:jm-0}) satisfying
those conditions, $J^{(g)}$ has discrete spectrum
\cite[Lem.\,2.19]{MR1711536}.
\end{remark}
\begin{remark}
\label{rem:discrete-mass-spring}
Consider the mass-spring system of the Introduction. On the basis of
Remarks \ref{rem:discrete-spectrum}, \ref{rem:nonselfadjoint}, and
by means of the recurrence equations given below in Remark
\ref{rem:inverse-mass-spring}, one could construct a mass-spring
system whose corresponding operator $J^{(g)}$ has discrete spectrum.
\end{remark}
\section{Direct spectral analysis of $J^{(g)}$ and $J^{(g)}(\theta)$}
\label{sec:direct-problem}
We begin this section by noting that
\begin{equation*}
J_1^{(g)}=J_1^{(g)}(\theta)\,, \qquad \forall\theta>0\,.
\end{equation*}
Fix $g\in\mathbb{R}\cup\{\infty\}$ and consider the Weyl $m$-functions
$m^{(g)}$, $m^{(g,\theta)}$ of the operators $J^{(g)}$ and
$J^{(g)}(\theta)$. Then, taking into account that $m_1^{(g)}$ and
$m_1^{(g,\theta)}$ coincide, (\ref{eq:riccati}) implies that
\begin{equation}
\label{eq:aux-m-m-theta}
\theta^2\left(\zeta+\frac{1}{m^{(g)}(\zeta)}\right)=
\zeta+\frac{1}{m^{(g,\theta)}(\zeta)}\,.
\end{equation}
Let us now
consider the function
\begin{equation}
\label{eq:m-goth-def}
\mathfrak{m}(\zeta):=\frac{m^{(g)}(\zeta)}{m^{(g,\theta)}(\zeta)}\,.
\end{equation}
\begin{remark}
\label{rem:zeros-poles}
In view of Remark~\ref{rem:discrete-simple}, if $J^{(g)}$ has
discrete spectrum, the function $\mathfrak{m}$ is meromorphic by
(\ref{eq:m-goth-def}). Since the zeros of $m^{(g)}$ and
$m^{(g,\theta)}$ are the same (see
Remark~\ref{rem:m-zeros-poles}), it follows that for all $\theta>0$
the set of poles of $\mathfrak{m}$ is a subset of $\sigma(J^{(g)})$,
while $\sigma(J^{(g)}(\theta))$ contains all the zeros of
$\mathfrak{m}$. Observe also that, from (\ref{eq:aux-m-m-theta}),
$0\in\sigma(J^{(g)})$ if and only if
$0\in\sigma(J^{(g)}(\theta))$. Moreover, whenever $\theta\ne 1$,
(\ref{eq:aux-m-m-theta}) implies that the sets $\sigma(J^{(g)})$ and
$\sigma(J^{(g)}(\theta))$ can intersect only at $0$.
\end{remark}
\begin{remark}
\label{rem:continuous}
By \cite[Chap.\,7,\,Thm.\,3.9]{MR0407617} the zeros
of $\mathfrak{m}$ are analytic functions of the parameter
$\theta$. The same is true for the eigenvectors of $J^{(g)}(\theta)$.
\end{remark}
\begin{proposition}
\label{prop:derivative}
Let $J^{(g)}$ have discrete spectrum and let $\{\lambda_k(\theta)\}_k$
be the set of eigenvalues of $J^{(g)}(\theta)$ ($\theta>0$). For a fixed
$k$ the following holds
\begin{equation*}
\frac{d}{d\theta}\lambda_k(\theta)=
\frac{2\lambda_k(\theta)}{\theta\alpha_k(\theta)}\,,
\end{equation*}
where $\alpha_k(\theta)$ is the normalizing constant corresponding
to $\lambda_k(\theta)$.
\end{proposition}
\begin{proof}
Let us denote by $f(\theta)$ the eigenvector of $J^{(g)}(\theta)$
corresponding to $\lambda_k(\theta)$. We assume that $f(\theta)$
is normalized in such a way that
\begin{equation}
\label{eq:eigenvalue-1}
\inner{\delta_1}{f(\theta)}=1\,.
\end{equation}
Pick any small real $\tau$ (it suffices that
$\abs{\tau}<\theta$). Then, taking into account that $\dom
(J^{(g)})=\dom (J^{(g)}(\theta))$ and the self-adjointness of
$J^{(g)}(\theta)$ for any $\theta>0$, we have that
\begin{align*}
(\lambda_k(\theta+\tau)-\lambda_k(\theta))
\inner{f(\theta)}{f(\theta+\tau)}&=
\inner{f(\theta)}{J^{(g)}(\theta+\tau)f(\theta+\tau)}\\
&- \inner{J^{(g)}(\theta)f(\theta)}{f(\theta+\tau)}\\
&=\inner{f(\theta)}
{(J^{(g)}(\theta+\tau)
-J^{(g)}(\theta)+J^{(g)}(\theta))f(\theta+\tau)}\\
&-\inner{J^{(g)}(\theta)f(\theta)}{f(\theta+\tau)}\\
&=\inner{f(\theta)}
{(J^{(g)}(\theta+\tau)-J^{(g)}(\theta))f(\theta+\tau)}\,.
\end{align*}
From (\ref{eq:eigenvalue-1}) it follows that the
entries of $f(\theta+\tau)$ and $f(\theta)$ are the
polynomials of the first kind associated with the matrices of
$J^{(g)}(\theta+\tau)$ and $J^{(g)}(\theta)$, so
\begin{equation*}
f_2(\theta+\tau)=\frac{\lambda_k(\theta+\tau)-(\theta+\tau)^2q_1}
{(\theta+\tau)b_1}\,,\qquad
f_2(\theta)=\frac{\lambda_k(\theta)-\theta^2q_1}{\theta b_1}\,.
\end{equation*}
Now, taking into account these last equalities and
(\ref{eq:eigenvalue-1}), together with
\begin{equation*}
J^{(g)}(\theta+\tau)-J^{(g)}(\theta)=
\begin{pmatrix}
(2\theta\tau+\tau^2)q_1 & \tau b_1 & 0 & 0 & \cdots
\\[1mm] \tau b_1 & 0 & 0 & 0 & \cdots \\[1mm] 0 & 0 & 0 &
0 & \\
0 & 0 & 0 & 0 & \ddots\\ \vdots & \vdots & & \ddots
& \ddots
\end{pmatrix}\,,
\end{equation*}
one obtains that
\begin{equation*}
(\lambda_k(\theta+\tau)-\lambda_k(\theta))
\inner{f(\theta)}{f(\theta+\tau)}=
\tau\left(\frac{\lambda_k(\theta+\tau)}{\theta+\tau}+
\frac{\lambda_k(\theta)}{\theta}\right)\,.
\end{equation*}
Therefore, on the basis of Remark~\ref{rem:continuous}, one has
\begin{equation*}
\lim_{\tau\to
0}\frac{\lambda_k(\theta+\tau)-\lambda_k(\theta)}{\tau}=
\lim_{\tau\to
0}\frac{1}{\inner{f(\theta)}{f(\theta+\tau)}}
\left(\frac{\lambda_k(\theta+\tau)}{\theta+\tau}+
\frac{\lambda_k(\theta)}{\theta}\right)=
\frac{2\lambda_k(\theta)}{\theta\alpha_k(\theta)}\,.
\end{equation*}
\end{proof}
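The computation in the proof applies verbatim to finite truncations, so the formula of Proposition~\ref{prop:derivative} can be checked against a finite-difference derivative; a sketch with arbitrary entries, an arbitrary $\theta$, and an arbitrary eigenvalue index:

```python
import numpy as np

q, b = [2.0, -1.0, 3.0, 0.5], [1.0, 2.0, 1.5]

def J_theta(theta):
    """Truncation of J(theta): theta^2 q_1 and theta b_1 in the first entries."""
    qt = [theta**2 * q[0]] + q[1:]
    bt = [theta * b[0]] + b[1:]
    return np.diag(qt) + np.diag(bt, 1) + np.diag(bt, -1)

theta, k, h = 0.9, 1, 1e-6
lam, V = np.linalg.eigh(J_theta(theta))
alpha_k = 1.0 / V[0, k]**2                  # normalizing constant of lambda_k
predicted = 2 * lam[k] / (theta * alpha_k)  # formula of the proposition
numeric = (np.linalg.eigh(J_theta(theta + h))[0][k]
           - np.linalg.eigh(J_theta(theta - h))[0][k]) / (2 * h)
```
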
The proposition below can be proven by means of
Remark~\ref{rem:zeros-poles}, \ref{rem:continuous}, and
Proposition~\ref{prop:derivative}. However, we present an alternative
proof based on the following expression
\begin{equation}
\label{eq:m-through-m}
\mathfrak{m}(\zeta)=\zeta(\theta^2-1)m^{(g)}(\zeta)+\theta^2\,,
\end{equation}
which follows from (\ref{eq:aux-m-m-theta}) and (\ref{eq:m-goth-def}).
\begin{proposition}
\label{prop:interlacing}
Fix $g\in\mathbb{R}\cup\{\infty\}$ and let $J^{(g)}$ have discrete
spectrum. The spectra $\sigma(J^{(g)})$, $\sigma(J^{(g)}(\theta))$
interlace in $\mathbb{R}_+$ and $\mathbb{R}_-$. Moreover,
$\sigma(J^{(g)}(\theta))$ in $\mathbb{R}_+$ ($\mathbb{R}_-$) is shifted with
respect to $\sigma(J^{(g)})$ to the left (right) if $\theta<1$, and
to the right (left) if $\theta>1$.
\end{proposition}
\begin{proof}
In view of Remark~\ref{rem:zeros-poles}, one only needs to verify
that between two positive and contiguous eigenvalues of $J^{(g)}$
there is exactly one eigenvalue of $J^{(g)}(\theta)$ and vice versa.
Take two positive and contiguous eigenvalues of $\sigma(J^{(g)})$,
$\lambda<\widetilde{\lambda}$. Due to (\ref{eq:m-discrete}), one
has
\begin{equation}
\label{eq:m-asympt-in-eigenvalue}
\lim_{\substack{t\to\widetilde{\lambda}^- \\
t\in\mathbb{R}}}m^{(g)}(t)=+\infty\,,
\qquad
\lim_{\substack{t\to\lambda^+ \\ t\in\mathbb{R}}}m^{(g)}(t)=-\infty\,.
\end{equation}
Now, in (\ref{eq:m-through-m}) assume that $\theta>1$. Thus,
because of the positivity of $\lambda,\widetilde{\lambda}$,
(\ref{eq:m-through-m}) and (\ref{eq:m-asympt-in-eigenvalue}) imply that
\begin{equation*}
\lim_{\substack{t\to\widetilde{\lambda}^- \\
t\in\mathbb{R}}}\mathfrak{m}(t)
=+\infty\,,\qquad
\lim_{\substack{t\to\lambda^+ \\ t\in\mathbb{R}}}\mathfrak{m}(t)=-\infty\,.
\end{equation*}
Since $\mathfrak{m}$ is analytic on
the interval $(\lambda,\widetilde{\lambda})$, it must cross the
0-axis an odd number of times. If it crosses this axis three or more
times as in Fig. \ref{fig:3} ($a$), then, by
Remarks~\ref{rem:m-zeros-poles} and \ref{rem:zeros-poles}, there are
at least two elements of $\sigma(J_1^{(g)})$ in
$(\lambda,\widetilde{\lambda})$. But, because of
Remark~\ref{rem:m-zeros-poles}, this would contradict the fact that
$\lambda,\widetilde{\lambda}$ are contiguous.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}
\draw[yshift=-15] (-.7,-.5) .. controls (0,3) and (.65,-1.9)
.. (1.5,1.7);
\draw[->] (-1,0) -- (2,0);
\path (0.5,-1.2) node {$a$};
\draw[xshift=110,yshift=-8.3] (-.7,-.5) .. controls (0,3) and (.65,-1.9)
.. (1.5,1.7);
\draw[->,xshift=110] (-1,0) -- (2,0);
\path[xshift=110] (0.5,-1.2) node {$b$};
\draw[xshift=220,yshift=-24] (-.7,-.5) .. controls (0,3) and (.65,-1.9)
.. (1.5,1.7);
\draw[->,xshift=220] (-1,0) -- (2,0);
\path[xshift=220] (0.5,-1.2) node {$c$};
\end{tikzpicture}
\end{center}
\caption{Impossible crossings of the 0-axis by $\mathfrak{m}$}\label{fig:3}
\end{figure}
Observe that one must also rule out the possibility of a single crossing
of the 0-axis combined with a tangential touch of it, as in
Fig.~\ref{fig:3} ($b$) and ($c$). But again the impossibility of this follows from the fact that
the poles of $m^{(g,\theta)}$ are simple (see
Remark~\ref{rem:m-zeros-poles}). Analogously, between two contiguous
eigenvalues of $J^{(g)}(\theta)$, the function
$\frac{1}{\mathfrak{m}}\eval{\mathbb{R}}$ crosses the $0$-axis exactly
once. Thus, the interlacing in $\mathbb{R}_+$ has been established. By the
same token, the spectra interlace in $\mathbb{R}_-$. The case $\theta <1$
is treated in a similar way. The second assertion follows directly
from Proposition~\ref{prop:derivative}.
\end{proof}
\begin{remark}
We note that $\sigma(J^{(g)})\cap\mathbb{R}_+$ and
$\sigma(J^{(g)})\cap\mathbb{R}_-$ may be finite or empty.
\end{remark}
\section{Inverse spectral analysis for $J^{(g)}$ and $J^{(g)}(\theta)$}
\label{sec:reconstruction}
In this section we find some necessary conditions for the spectra of
$J^{(g)}(\theta)$ ($\theta>0$). We also provide an algorithm for
reconstructing the Jacobi matrix and establish uniqueness of the
reconstruction. Some of the formulae obtained in this section (see for
instance Corollary~\ref{cor:theta}) have
analogues in the finite-dimensional case \cite{delrio-kudryavtsev}, \cite{Ram}.
A central part of our approach is the Weyl
$m$-function and its properties. We begin our discussion by setting
out a convention for enumerating the elements of the spectra.
\begin{convention}
\label{con:enumeration}
For a given countable set of real numbers $S$ without finite points of
accumulation, let $M$ be an infinite subset of consecutive integers
such that there is a one-to-one increasing function $h:M\to S$ with
the property that $h^{-1}(0)=\{0\}$ when $0\in S$.
Thus, $M$
is semi-bounded from above (below) if and only if
the same holds for $S$. We write $S=\{\lambda_k\}_{k\in M}$, where
$\lambda_k=h(k)$. Note that in the sequence $\{\lambda_k\}_{k\in M}$
only $\lambda_0$ is allowed to be zero. Thus, if $-1,1\in M$,
then
\begin{equation*}
\lambda_{-1} < 0 < \lambda_1\,.
\end{equation*}
In the sequel, the spectra of all operators will be enumerated
according to this convention.
When $\{\lambda_k\}_{k\in M}$ is considered together with a sequence
interlacing with it, we use the same set $M$ for enumerating both
sequences. For instance, if $\{\lambda_k\}_{k\in M}$ and
$\{\mu_k\}_{k\in M}$ are interlacing and not semi-bounded, then one can
assume that
\begin{equation*}
\lambda_k<\mu_k<\lambda_{k+1}\,,\quad\forall k\in M.
\end{equation*}
\end{convention}
The following auxiliary result can be found in
\cite[Sec.\,4]{weder-silva}. We sketch the proof here for the reader's
convenience.
\begin{lemma}
Let $J^{(g)}$ have discrete spectrum and assume that
$\sigma(J^{(g)})=\{\lambda_k\}_{k\in M}$, and
$\sigma(J_1^{(g)})=\{\eta_k\}_{k\in M}$. Then, the following formula
holds for the Weyl $m$-function of $J^{(g)}$
\begin{equation}
\label{eq:levin-herglotz-gen}
m^{(g)}(\zeta)=C \frac{\zeta-\eta_0}{\zeta-\lambda_0}
\prod_{\substack{k\in M\\k\ne 0}} \left(1-\frac{\zeta}{\eta_k}\right)
\left(1-\frac{\zeta}{\lambda_k}\right)^{-1}\,.
\end{equation}
Moreover, $C<0$ and
\begin{equation}
\label{eq:enum-zeros-poles-alt}
\eta_k<\lambda_k<\eta_{k+1}\quad\forall k\in M\,,
\end{equation}
if $\sigma(J^{(g)})$ is semi-bounded from above,
while, $C>0$ and
\begin{equation}
\label{eq:enum-zeros-poles}
\lambda_k<\eta_k<\lambda_{k+1}\quad\forall k\in M
\end{equation}
otherwise.
\end{lemma}
\begin{proof}
Assume first that $\sigma(J^{(g)})$ is semi-bounded from
below. Since the greatest lower bound of $J$ does not exceed the
greatest lower bound of $J_1^{(g)}$, the smallest element of
$\{\lambda_k\}_{k\in M}$ is less than the smallest of
$\{\eta_k\}_{k\in M}$ (see
\cite[Chap.\,6,\,Sec.\,1.3]{MR1192782}). Thus one can enumerate the
sequences $\{\lambda_k\}_{k\in M}$ and $\{\eta_k\}_{k\in M}$ so that
they obey our convention and (\ref{eq:enum-zeros-poles}). According
to \cite[Chap.\,7,\,Thm.\,1]{MR589888},
(\ref{eq:levin-herglotz-gen}) holds with $C>0$.
Clearly, when $\sigma(J^{(g)})$ is not semi-bounded, the sequences can be
arranged to obey (\ref{eq:enum-zeros-poles}), and then
(\ref{eq:levin-herglotz-gen}) holds with $C>0$.
Now suppose that $\sigma(J^{(g)})$ is semi-bounded from above. Then
$\sigma(-J^{(g)})$ is semi-bounded from below and, consequently, the
greatest of $\{\eta_k\}_{k\in M}$ is less than the greatest of
$\{\lambda_k\}_{k\in M}$. Thus $\{\lambda_k\}_{k\in M}$, and
$\{\eta_k\}_{k\in M}$ cannot be arranged according to
(\ref{eq:enum-zeros-poles}). However, we are still able to use
(\ref{eq:enum-zeros-poles}) for arranging the zeros and poles
of the meromorphic Herglotz function $-\frac{1}{m^{(g)}}$, that is, we use
(\ref{eq:enum-zeros-poles-alt}).
Therefore \cite[Chap.\,7,\,Thm.\,1]{MR589888} gives
\begin{equation*}
-\frac{1}{m^{(g)}(\zeta)}= \widetilde{C}
\frac{\zeta-\lambda_0}{\zeta-\eta_0}
\prod_{\substack{k\in M\\k\ne 0}} \left(1-\frac{\zeta}{\lambda_k}\right)
\left(1-\frac{\zeta}{\eta_k}\right)^{-1}\,,\qquad \widetilde{C}>0\,.
\end{equation*}
To complete the proof, it only remains to note that the last
equation can be rewritten as asserted in the lemma. The infinite
product in (\ref{eq:levin-herglotz-gen}) is convergent because of
(\ref{eq:enum-zeros-poles-alt}) (see the proof of
\cite[Chap.\,7,\,Thm.\,1]{MR589888}).
\end{proof}
Another simple auxiliary result, to be used later, is the following lemma.
\begin{lemma}
\label{lem:uniform-convergence}
Let $J^{(g)}$ have discrete spectrum and $\{\lambda_k(\theta)\}_k$
be the set of eigenvalues of $J^{(g)}(\theta)$. Then,
the series
\begin{equation}
\label{eq:s-1-moment}
\sum_{k\in M}\frac{\lambda_k(\theta)}{\alpha_k(\theta)}
\end{equation}
converges uniformly in $[\theta_1,\theta_2]\subset\mathbb{R}_+$ to
$s_1(\theta)$ (see (\ref{eq:moments})).
\end{lemma}
\begin{proof}
From (\ref{eq:moments}) and (\ref{eq:rho-discrete}), it follows
that the series converges pointwise to $s_1(\theta)$. The series
\begin{equation}
\label{eq:s-2-moment}
\sum_{k\in M}\frac{\lambda_k^2(\theta)}{\alpha_k(\theta)}
\end{equation}
also converges pointwise, to the function $s_2(\theta)$. Since this
function is continuous on $[\theta_1,\theta_2]$,
(\ref{eq:s-2-moment}) converges uniformly on that interval (see
\cite[Sec.\,1.31]{titchmarsh_functions}). Now, for any
$\theta\in[\theta_1,\theta_2]$ and $\abs{\lambda_k}>1$, one has
\begin{equation*}
\abs{\lambda_k}<\lambda_k^2\,,
\end{equation*}
so (\ref{eq:s-1-moment}) is uniformly convergent in
$[\theta_1,\theta_2]$.
\end{proof}
\begin{remark}
\label{rem:arrange_sequences}
Proposition~\ref{prop:interlacing} tells us that the interlacing of the
sequences $\sigma(J^{(g)})=\{\lambda_k\}_k$ and
$\sigma(J^{(g)}(\theta))=\{\mu_k\}_k$ is different in $\mathbb{R}_+$ and
$\mathbb{R}_-$. So let us agree to enumerate the sequences according to
our convention (the subscripts of the sequences run over $M$ and
only the eigenvalues with subscript equal to zero are allowed to be
zero) so that they obey
\begin{equation*}
\lambda_k<\mu_k<\lambda_{k+1} \quad\text{in}\ \mathbb{R}_+\,,
\qquad
\mu_k<\lambda_k<\mu_{k+1} \quad\text{in}\ \mathbb{R}_-\,,
\end{equation*}
when $\theta>1$, and
\begin{equation*}
\mu_k<\lambda_k<\mu_{k+1} \quad\text{in}\ \mathbb{R}_+\,,
\qquad
\lambda_k<\mu_k<\lambda_{k+1} \quad\text{in}\ \mathbb{R}_-\,,
\end{equation*}
if $\theta<1$.
\end{remark}
\begin{proposition}
\label{prop:convergence-eigenvalues}
Fix $g\in\mathbb{R}\cup\{\infty\}$ and $0<\theta_1<\theta_2$. Let
$J^{(g)}$ have discrete spectrum and assume that
$\sigma(J^{(g)}(\theta_1))=\{\lambda_k\}_{k\in M}$ and
$\sigma(J^{(g)}(\theta_2))=\{\mu_k\}_{k\in M}$, where the sequences
have been arranged according to
Remark~\ref{rem:arrange_sequences}. Then,
\begin{equation*}
\sum_{k\in M}(\mu_k-\lambda_k)=q_1(\theta_2^2-\theta_1^2)\,.
\end{equation*}
\end{proposition}
\begin{proof}
Observe that from Proposition~\ref{prop:derivative} it follows that
\begin{equation*}
\mu_k-\lambda_k=2\int_{\theta_1}^{\theta_2}
\frac{\lambda_k(\theta)d\theta}{\theta\alpha_k(\theta)}\,.
\end{equation*}
Consider a sequence $\{M_n\}_{n=1}^\infty$ of finite subsets of $M$, such
that $M_n\subset M_{n+1}$ and $\cup_nM_n=M$.
Thus
\begin{equation*}
\sum_{k\in M}(\mu_k-\lambda_k)=2
\lim_{n\to\infty}\int_{\theta_1}^{\theta_2}\left(
\sum_{k\in M_n}\frac{\lambda_k(\theta)}{\alpha_k(\theta)}
\right)\frac{d\theta}{\theta}\,.
\end{equation*}
By Lemma \ref{lem:uniform-convergence} and the fact that
\begin{equation*}
s_1(\theta)=\inner{\delta_1}{J^{(g)}(\theta)\delta_1}=q_1\theta^2\,,
\end{equation*}
one obtains
\begin{equation*}
\sum_{k\in M}(\mu_k-\lambda_k)=
2q_1\int_{\theta_1}^{\theta_2}\theta d\theta
=q_1(\theta_2^2-\theta_1^2)\,.
\end{equation*}
\end{proof}
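The trace identity above can be sanity-checked on finite truncations. The sketch below is a hypothetical illustration, not the paper's construction: it assumes, consistently with $s_1(\theta)=q_1\theta^2$, that the perturbed matrix differs from the unperturbed one only in its first row and column, with $(1,1)$ entry $q_1\theta^2$ and first off-diagonal entry $\theta b_1$; for finite matrices the total eigenvalue shift is then just a trace difference.

```python
import numpy as np

def jacobi_truncation(q, b, theta=1.0):
    """Finite tridiagonal truncation; the perturbation is the assumed one:
    (1,1) entry q_1 -> q_1*theta^2, first off-diagonal b_1 -> theta*b_1."""
    d = np.array(q, dtype=float)
    e = np.array(b, dtype=float)
    d[0] *= theta**2
    e[0] *= theta
    return np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

q = [-2.0, -3.0, -2.5, -4.0]   # illustrative diagonal entries q_j
b = [1.0, 1.5, 0.8]            # illustrative off-diagonal entries b_j
theta1, theta2 = 0.7, 1.3

lam = np.linalg.eigvalsh(jacobi_truncation(q, b, theta1))
mu = np.linalg.eigvalsh(jacobi_truncation(q, b, theta2))

# Only the (1,1) entry contributes to the trace difference, so the total
# eigenvalue shift equals q_1*(theta2^2 - theta1^2), as in the proposition.
assert abs(np.sum(mu - lam) - q[0] * (theta2**2 - theta1**2)) < 1e-10
```

Note that the off-diagonal scaling plays no role in the trace, so the identity holds for any truncation size.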
\begin{proposition}
\label{prop:m-goth-actual-form}
Fix $g\in\mathbb{R}\cup\{\infty\}$ and $0<\theta\ne 1$. Let $J^{(g)}$ have
discrete spectrum and assume that
$\sigma(J^{(g)})=\{\lambda_k\}_{k\in M}$ and
$\sigma(J^{(g)}(\theta))=\{\mu_k\}_{k\in M}$, where the sequences
have been arranged according to
Remark~\ref{rem:arrange_sequences}. Then,
\begin{equation*}
\mathfrak{m}(\zeta)=
\prod\limits_{k\in M}\frac{\zeta-\mu_k}{\zeta-\lambda_k}\,.
\end{equation*}
\end{proposition}
\begin{proof}
Consider a sequence $\{M_n\}_{n=1}^\infty$ of finite subsets of $M$, such
that $M_n\subset M_{n+1}$ and $\cup_nM_n=M$. From (\ref{eq:levin-herglotz-gen}) and
(\ref{eq:m-goth-def}) it follows that
\begin{align}
\label{eq:m-goth-krein}
\mathfrak{m}(\zeta)&=C\frac{\zeta-\mu_0}{\zeta-\lambda_0}
\lim_{n\to\infty}\frac{\displaystyle
\prod_{\substack{k\in M_n\\k\ne 0}} \left(1-\frac{\zeta}{\eta_k}\right)
\left(1-\frac{\zeta}{\lambda_k}\right)^{-1}}{\displaystyle
\prod_{\substack{k\in M_n\\k\ne 0}} \left(1-\frac{\zeta}{\eta_k}\right)
\left(1-\frac{\zeta}{\mu_k}\right)^{-1}}\notag\\
&=C\frac{\zeta-\mu_0}{\zeta-\lambda_0}
\prod_{\substack{k\in M\\k\ne 0}}
\left(1-\frac{\zeta}{\mu_k}\right)
\left(1-\frac{\zeta}{\lambda_k}\right)^{-1}\,.
\end{align}
On the other hand, by
Proposition~\ref{prop:convergence-eigenvalues}, we have
\begin{equation}
\label{eq:infinite-product-equalities}
\prod_{\substack{k\in M\\k\ne 0}}
\left(1-\frac{\zeta}{\mu_k}\right)
\left(1-\frac{\zeta}{\lambda_k}\right)^{-1}=
\prod_{\substack{k\in M\\k\ne 0}}\frac{\lambda_k}{\mu_k}
\prod_{\substack{k\in M\\k\ne 0}}\frac{\zeta-\mu_k}{\zeta-\lambda_k}\,.
\end{equation}
From (\ref{eq:m-asympt}) and (\ref{eq:m-through-m}) it follows that
\begin{equation}
\label{eq:function-tends-to-1}
\lim_{\substack{\zeta\to\infty \\ \im \zeta\ge\epsilon>0}}
\mathfrak{m}(\zeta)=1\,.
\end{equation}
Also, since the second product on the right-hand side of
(\ref{eq:infinite-product-equalities}) converges uniformly, one has
\begin{equation}
\label{eq:product-tends-to-1}
\lim_{\substack{\zeta\to\infty \\
\im \zeta\ge\epsilon}}
\prod_{k\in M}
\frac{\zeta-\mu_k}{\zeta-\lambda_k}=
\lim_{\substack{\zeta\to\infty \\
\im \zeta\ge\epsilon}}
\prod_{k\in M}
\left(1+\frac{\mu_k-\lambda_k}{\lambda_k-\zeta}\right)=1\,.
\end{equation}
Thus, (\ref{eq:m-goth-krein}), (\ref{eq:infinite-product-equalities}),
(\ref{eq:function-tends-to-1}), and (\ref{eq:product-tends-to-1})
imply that
\begin{equation*}
C=\prod_{\substack{k\in M\\k\ne 0}}\frac{\mu_k}{\lambda_k}
\end{equation*}
and the proposition is proven.
\end{proof}
\begin{corollary}
\label{cor:theta}
Fix $g\in\mathbb{R}\cup\{\infty\}$ and $\theta>0$. Let $J^{(g)}$ have discrete
spectrum and assume that
$\sigma(J^{(g)})=\{\lambda_k\}_k$ and
$\sigma(J^{(g)}(\theta))=\{\mu_k\}_k$, where the sequences have
been arranged according to Remark~\ref{rem:arrange_sequences}. Then,
\begin{equation*}
\theta^2=
\prod\limits_{k\in M}\frac{\eta-\mu_k}{\eta-\lambda_k}\,,
\end{equation*}
where $\eta$ is any element of $\sigma(J_1^{(g)})$. Moreover, when
$0\not\in\sigma(J^{(g)})$,
\begin{equation}
\label{eq:parameter-recovery-zero}
\theta^2=
\prod\limits_{k\in M}\frac{\mu_k}{\lambda_k}\,,
\end{equation}
and, if $0\in\sigma(J^{(g)})$,
\begin{equation}
\label{eq:theta-special-case}
\theta^2=\frac{1}{\alpha_0-1}\left\{\alpha_0
\prod_{\substack{k\in M\\k\ne 0}}\frac{\mu_k}{\lambda_k} -1\right\}\,,
\end{equation}
where $\alpha_0$ is given in (\ref{eq:def-normalizing}).
\end{corollary}
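In a finite truncation, the identity (\ref{eq:parameter-recovery-zero}) reduces to a determinant computation: with the hypothetical perturbation sketched earlier (the $(1,1)$ entry scaled by $\theta^2$, the first off-diagonal entry by $\theta$), expanding the determinant along the first row gives $\det J(\theta)=\theta^2\det J$, so the product of eigenvalue ratios is exactly $\theta^2$. A quick numerical check with illustrative entries chosen so that $0$ is not an eigenvalue:

```python
import numpy as np

def jacobi_truncation(q, b, theta=1.0):
    # Hypothetical perturbation: (1,1) entry q_1 -> q_1*theta^2, b_1 -> theta*b_1.
    d = np.array(q, dtype=float)
    e = np.array(b, dtype=float)
    d[0] *= theta**2
    e[0] *= theta
    return np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

q = [-2.0, -3.0, -2.5, -4.0]   # illustrative entries; by Gershgorin all
b = [1.0, 1.5, 0.8]            # eigenvalues are negative, so 0 is excluded
theta = 1.4

lam = np.linalg.eigvalsh(jacobi_truncation(q, b))
mu = np.linalg.eigvalsh(jacobi_truncation(q, b, theta))
assert np.all(np.abs(lam) > 1e-8)                 # 0 is not an eigenvalue
assert abs(np.prod(mu / lam) - theta**2) < 1e-8   # prod mu_k/lam_k = theta^2
```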
\begin{proof}
The first two identities for $\theta^2$ are a straightforward
consequence of Proposition~\ref{prop:m-goth-actual-form} and
(\ref{eq:m-through-m}). As for (\ref{eq:theta-special-case}),
note that, from (\ref{eq:m-discrete}), one has
\begin{equation}
\label{eq:alpha-residue}
\alpha_k^{-1}=-\res_{\zeta=\lambda_k}m(\zeta)\,.
\end{equation}
Thus, according to (\ref{eq:m-through-m}),
\begin{equation}
\label{eq:0-case-m-goth-0}
\theta^2-\alpha_0^{-1}(\theta^2-1)=\mathfrak{m}(0)=
\prod_{\substack{k\in M\\k\ne 0}}\frac{\mu_k}{\lambda_k}\,.
\end{equation}
\end{proof}
\begin{remark}
\label{rem:0-case-m-goth-0}
Due to (\ref{eq:0-case-m-goth-0}) and the properties of the
normalizing constants, when $0\in\sigma(J^{(g)})$, one of the following
inequalities holds, the first when $\theta<1$ and the second when $\theta>1$:
\begin{equation*}
\theta^2<\mathfrak{m}(0)=
\prod_{\substack{k\in M\\k\ne 0}}\frac{\mu_k}{\lambda_k}<1,\qquad
1<\mathfrak{m}(0)=
\prod_{\substack{k\in M\\k\ne 0}}
\frac{\mu_k}{\lambda_k}<\theta^2\,.
\end{equation*}
\end{remark}
\begin{theorem}
\label{prop:reconstruction}
Fix $g\in\mathbb{R}\cup\{\infty\}$ and $\theta>0$. Let $J^{(g)}$ have
discrete spectrum and assume that $0\not\in\sigma(J^{(g)})$. The spectra
$\sigma(J^{(g)})$, $\sigma(J^{(g)}(\theta))$ ($\theta\ne 1$)
uniquely determine the Jacobi matrix (\ref{eq:jm-0}), that is the
operator $J$, the parameter $\theta$ defining the perturbation, and
the parameter $g$ specifying the self-adjoint extension when $J\ne
J^*$.
\end{theorem}
\begin{proof}
Given the sequences $\sigma(J^{(g)})$ and $\sigma(J^{(g)}(\theta))$,
one finds the parameter $\theta$ from
(\ref{eq:parameter-recovery-zero}).
Proposition~\ref{prop:m-goth-actual-form}
yields the function $\mathfrak{m}$, and equation
(\ref{eq:m-through-m}) gives the Weyl function $m^{(g)}$. According to the
Preliminaries, this function allows one to recover the matrix associated
with the Jacobi operator and the parameter $g$ which determines the
self-adjoint extension when $J\ne J^*$.
\end{proof}
\begin{theorem}
\label{prop:reconstruction-2}
Fix $g\in\mathbb{R}\cup\{\infty\}$ and $\theta>0$. Let $J^{(g)}$ have
discrete spectrum and assume that $0\in\sigma(J^{(g)})$. The spectra
$\sigma(J^{(g)})$, $\sigma(J^{(g)}(\theta))$ ($\theta\ne 1$), together with
either $q_1$ or $\alpha_0$, uniquely determine the matrix
associated to $J$, the parameter $\theta$, and the parameter $g$
when $J\ne J^*$. Alternatively, the
spectra $\sigma(J^{(g)})$, $\sigma(J^{(g)}(\theta))$ and the parameter
$\theta\ne 1$ uniquely determine the matrix corresponding to $J$ and
the parameter $g$ when $J$ is not self-adjoint.
\end{theorem}
\begin{proof}
This follows immediately from the proof of the previous theorem,
taking into account (\ref{eq:theta-special-case}). Note that
$\theta$ can be determined either by Proposition
\ref{prop:convergence-eigenvalues} or by the asymptotic formula
\begin{equation*}
\mathfrak{m}(\zeta)=1+\frac{q_1(1-\theta^2)}{\zeta}
+O(\zeta^{-2})\,,
\end{equation*}
as $\zeta\to\infty$ ($\im \zeta\ge \epsilon$, $\epsilon>0$),
obtained by combining (\ref{eq:m-asympt}) and
(\ref{eq:m-through-m}).
\end{proof}
\begin{remark}
\label{rem:inverse-mass-spring}
Theorems \ref{prop:reconstruction} and \ref{prop:reconstruction-2}
solve the problem of reconstructing the matrix from spectral
data. However, in order to solve the inverse problem for the
mass-spring system, one should also recover the masses and spring
constants from the matrix entries. This is actually not difficult, as
shown below (cf. \cite[Chap.\,8]{marchenko-new}).
On the basis of (\ref{eq:spring-mass}), one finds the equations
\begin{align*}
k_{j+1}&=-(k_j+q_jm_j),\\
m_{j+1}&=\frac{k_{j+1}^2}{m_jb_j^2},
\end{align*}
which allow one to find recursively all spring constants and masses of the
system from the first spring constant and mass. Note that, when
the parameters $k_1$ and $m_1$ are given, only the quotient $\frac{k_1}{m_1}$
does not depend on the choice of mass unit. This quotient has a
concrete physical meaning: it equals the squared natural
frequency of the mass $m_1$ attached by the spring
$k_1$ to a fixed support. Thus, it is physically convenient to express
$k_j/m_j$ in terms of $k_1/m_1$. This is achieved by means of the
following continued fraction
\begin{equation*}
\frac{k_{j+1}}{m_{j+1}}=
\cfrac{-b_j^2}{q_j
- \cfrac{b_{j-1}^2}{\cdots q_2
- \cfrac{b_1^2}{q_1+\frac{k_1}{m_1}}}}\,,
\end{equation*}
which is constructed from $\frac{k_1}{m_1}$ upwards (cf. [17,
p.~76]). We remark that, unlike the finite matrix case, here one cannot apply,
without substantial changes, the method developed in
\cite[Chap.\,8]{marchenko-new} for determining the set of admissible
values for the quotient
$\frac{k_1}{m_1}$. Admissible values of $\frac{k_1}{m_1}$ are those
for which $\frac{k_{j+1}}{m_{j+1}}$ is a positive real number for
any $j\in\mathbb{N}$.
\end{remark}
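The recursion and the continued fraction of the remark are consistent, since they combine into $k_{j+1}/m_{j+1}=-b_j^2/(q_j+k_j/m_j)$. The minimal sketch below uses illustrative entries $q_j$, $b_j$ (signs chosen so that all $k_j$, $m_j$ stay positive, i.e. so that the chosen $k_1/m_1$ is admissible):

```python
def masses_and_springs(q, b, k1, m1):
    """Recover spring constants and masses from
    k_{j+1} = -(k_j + q_j m_j),  m_{j+1} = k_{j+1}^2 / (m_j b_j^2)."""
    ks, ms = [k1], [m1]
    for qj, bj in zip(q, b):
        k_next = -(ks[-1] + qj * ms[-1])
        ks.append(k_next)
        ms.append(k_next**2 / (ms[-1] * bj**2))
    return ks, ms

def ratios(q, b, r1):
    """k_{j+1}/m_{j+1} via r_{j+1} = -b_j^2/(q_j + r_j), i.e. the
    continued fraction of the remark, built from r_1 = k_1/m_1 upwards."""
    rs = [r1]
    for qj, bj in zip(q, b):
        rs.append(-bj**2 / (qj + rs[-1]))
    return rs

q, b = [-3.0, -3.0], [1.0, 1.0]   # illustrative matrix entries
k1, m1 = 1.0, 1.0
ks, ms = masses_and_springs(q, b, k1, m1)
rs = ratios(q, b, k1 / m1)
for kj, mj, rj in zip(ks, ms, rs):
    assert kj > 0 and mj > 0          # admissible choice of k_1/m_1
    assert abs(kj / mj - rj) < 1e-12  # recursion matches continued fraction
```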
\section{Necessary and sufficient conditions for the spectra\\
of $J^{(g)}$ and $J^{(g)}(\theta)$}
\label{sec:nec-suf}
The following statement gives an if-and-only-if criterion for two
sequences to be the spectra of $J^{(g)}$ and $J^{(g)}(\theta)$. In the
finite case the interlacing condition given in a) (see below) is
necessary and sufficient \cite{delrio-kudryavtsev,Ram}.
\begin{theorem}
\label{prop:necessary-sufficient}
Given two infinite real sequences $\{\lambda_k\}_k$ and
$\{\mu_k\}_k$ without finite points of accumulation, such that neither
of them contains zero, there is a unique positive $\theta$, a
unique operator $J$, and a unique $g\in\mathbb{R}\cup\{\infty\}$ if
$J\ne J^*$, such that $\{\mu_k\}_k$ is the spectrum of
$J^{(g)}(\theta)$ and $\{\lambda_k\}_k$ is the spectrum of $J^{(g)}$
if and only if the following conditions are satisfied.
\begin{enumerate}[\ a)]
\item $\{\lambda_k\}_k$ and $\{\mu_k\}_k$ interlace in $\mathbb{R}_+$,
$\mathbb{R}_-$ with one sequence shifted to the right (left) in
$\mathbb{R}_+$, ($\mathbb{R}_-$) with respect to the other one. Thus, the
sequences can be ordered according to
Remark~\ref{rem:arrange_sequences}.
\label{interlace-sufficient}
\item The series
\begin{equation*}
\sum_{k\in M}(\mu_k-\lambda_k)
\end{equation*}
converges.\label{sum-spectr-sufficient}
By condition \ref{sum-spectr-sufficient}), the products
$\displaystyle\prod_{\substack{k\in M\\k\ne n}}
\frac{\mu_k-\lambda_n}{\lambda_k-\lambda_n}$ and
$\displaystyle\prod_{k\in M}
\frac{\mu_k}{\lambda_k}$
are convergent, so one may
define
\begin{equation}
\label{eq:tau-def-1}
\tau_n:=
\frac{(\mu_n-\lambda_n)
\displaystyle\prod_{\substack{k\in M\\k\ne n}}
\frac{\mu_k-\lambda_n}{\lambda_k-\lambda_n}
}{\lambda_n\left(\displaystyle\prod_{k\in M}
\frac{\mu_k}{\lambda_k}-1\right)}\,,
\quad \forall n\in M\,.
\end{equation}
\item The sequence $\{\tau_n\}_{n\in M}$
is such that, for $m=0,1,2,\dots$, the series
\begin{equation*}
\sum_{k\in M}\lambda_k^{2m}\tau_k
\quad\text{converges.}
\end{equation*}
\label{finite-moments-sufficient}
\item If a sequence of complex numbers $\{\beta_k\}_{k\in M}$
is such that the series
\begin{equation*}
\sum_{k\in
M}\abs{\beta_k}^2\tau_k
\quad\text{converges}
\end{equation*}
and, for $m=0,1,2,\dots$,
\begin{equation*}
\sum_{k\in
M}\beta_k\lambda_k^m\tau_k=0\,,
\end{equation*}
then $\beta_k=0$ for all $k\in M$.
\label{density-poly-sufficient}
\end{enumerate}
\end{theorem}
\begin{proof}
In view of Propositions \ref{prop:interlacing} and
\ref{prop:convergence-eigenvalues}, for proving the necessity of
the conditions, it only remains to show that for all $n\in M$,
$\tau_n=\alpha_n^{-1}$. Indeed,
\emph{\ref{finite-moments-sufficient}}) and
\emph{\ref{density-poly-sufficient}}) will then follow from the fact
that all moments of the spectral measure
(\ref{eq:rho-discrete}) exist and that the
polynomials are dense in $L_2(\mathbb{R},\rho)$.
From (\ref{eq:m-through-m}), (\ref{eq:alpha-residue}), and
Proposition \ref{prop:m-goth-actual-form}, it follows that
\begin{align*}
\alpha_n^{-1}&=\frac{1}{\theta^2-1}\lim_{\zeta\to\lambda_n}
\frac{\lambda_n-\zeta}{\zeta}\mathfrak{m}(\zeta)\\
&=\frac{\mu_n-\lambda_n}{\lambda_n(\theta^2-1)}
\prod_{\substack{k\in M\\k\ne n}}
\frac{\lambda_n-\mu_k}{\lambda_n-\lambda_k}\,.
\end{align*}
Hence, by Corollary \ref{cor:theta}, one verifies that
$\tau_n=\alpha_n^{-1}$.
We now prove that conditions
\emph{\ref{interlace-sufficient}}),
\emph{\ref{sum-spectr-sufficient}}),
\emph{\ref{finite-moments-sufficient}}), and
\emph{\ref{density-poly-sufficient}}) are sufficient.
The condition \emph{\ref{interlace-sufficient}}) implies that
\begin{equation*}
\frac{\lambda_n-\mu_k}{\lambda_n-\lambda_k}>0,\qquad
\forall k\in M\,,\ k\ne n.
\end{equation*}
On the other hand, by \emph{\ref{sum-spectr-sufficient}}) one can
define the number
\begin{equation}
\label{eq:kappa-def}
\kappa=\prod_{k\in M}
\frac{\mu_k}{\lambda_k},
\end{equation}
which is clearly positive; moreover, $\kappa>1$ if
$\abs{\mu_k}>\abs{\lambda_k}$ for all $k\in M$, and $\kappa<1$ if
$\abs{\mu_k}<\abs{\lambda_k}$ for all $k\in M$. Thus,
\begin{equation*}
\frac{\mu_n-\lambda_n}{\lambda_n(\kappa-1)}>0
\qquad
\forall n\in M.
\end{equation*}
Hence, for all $n\in M$, $\tau_n>0$, so one may define the function
\begin{equation}
\label{eq:rho-fro-proof}
\rho(t):=\sum_{\lambda_k<t}\tau_k\,.
\end{equation}
It follows from \emph{\ref{finite-moments-sufficient}}) that the
moments of the measure corresponding to $\rho$ are finite.
Now, on the basis of \emph{\ref{interlace-sufficient}}) and
\emph{\ref{sum-spectr-sufficient}}), define the meromorphic functions
\begin{equation*}
\widetilde{\mathfrak{m}}(\zeta):=
\prod_{k\in M}\frac{\zeta-\mu_k}{\zeta-\lambda_k}
\end{equation*}
and
\begin{equation}
\label{eq:definition-m-tilde}
\widetilde{m}(\zeta):=
\frac{\widetilde{\mathfrak{m}}(\zeta)-
\displaystyle\prod_{k\in M}
\frac{\mu_k}{\lambda_k}}
{\zeta\left(\displaystyle\prod_{k\in M}
\frac{\mu_k}{\lambda_k}-1\right)}\,.
\end{equation}
Thus, taking into account (\ref{eq:tau-def-1}), one has
\begin{equation}
\label{eq:residue-tilde}
\res_{\zeta=\lambda_n}\widetilde{m}(\zeta)=
\left(\prod_{k\in M}
\frac{\mu_k}{\lambda_k}-1\right)^{-1}\lim_{\zeta\to\lambda_n}
\frac{\zeta-\lambda_n}{\zeta}\widetilde{\mathfrak{m}}(\zeta)
=-\tau_n\,.
\end{equation}
Arguing as in (\ref{eq:product-tends-to-1}),
\begin{equation}
\label{eq:limit-tilde-one}
\lim_{\substack{\zeta\to\infty \\ \im \zeta\ge\epsilon>0}}
\widetilde{\mathfrak{m}}(\zeta)=1\,.
\end{equation}
Therefore,
\begin{equation}
\label{eq:limit-tilde}
\lim_{\substack{\zeta\to\infty \\ \im \zeta\ge\epsilon>0}}
\widetilde{m}(\zeta)=\left(\prod_{k\in M}
\frac{\mu_k}{\lambda_k}-1\right)^{-1}
\lim_{\substack{\zeta\to\infty \\ \im \zeta\ge\epsilon>0}}
\frac{\widetilde{\mathfrak{m}}(\zeta)}
{\zeta}=0\,.
\end{equation}
By (\ref{eq:residue-tilde}) and (\ref{eq:limit-tilde}),
\cite[Chap.\,7,\,Thm.\,2]{MR589888} implies that
\begin{equation}
\label{eq:m-tilde-as-sum}
\widetilde{m}(\zeta)=
\sum_{k\in M}\frac{\tau_k}{\lambda_k-\zeta}\,.
\end{equation}
On the other hand, using (\ref{eq:limit-tilde-one}), one obtains
\begin{equation*}
\lim_{\substack{\zeta\to\infty \\ \im \zeta\ge\epsilon>0}}
\zeta\widetilde{m}(\zeta)=\left(\prod_{k\in M}
\frac{\mu_k}{\lambda_k}-1\right)^{-1}
\lim_{\substack{\zeta\to\infty \\ \im \zeta\ge\epsilon>0}}
\left(\widetilde{\mathfrak{m}}(\zeta)
-\prod_{k\in M}\frac{\mu_k}{\lambda_k}\right)=-1\,.
\end{equation*}
But
\begin{equation*}
\lim_{\substack{\zeta\to\infty \\ \im \zeta\ge\epsilon>0}}
\zeta\widetilde{m}(\zeta)=-\sum_{k\in M}\tau_k\,,
\end{equation*}
so it has been proven that, for the function given in
(\ref{eq:rho-fro-proof}),
\begin{equation*}
\int_\mathbb{R} d\rho(t)=1\,.
\end{equation*}
Thus the measure corresponding to $\rho$ is appropriately normalized
and all the moments exist, so one may apply, in $L_2(\mathbb{R},\rho)$, the
Gram-Schmidt orthonormalization procedure to the sequence
$\{t^k\}_{k=0}^\infty$ to obtain a Jacobi matrix, as was explained in
the Preliminaries. Denote by $J$ the operator whose matrix
representation is the obtained matrix (cf. \cite[Sec. 47]{MR1255973}).
Now, depending on the sequence of moments, $J$ is self-adjoint or
not. If $J=J^*$, the function $\rho$ is the resolution of the identity
of $J$, while if $J\ne J^*$, $\rho$ corresponds to the resolution of
the identity of a self-adjoint extension of $J$. This is a
consequence of condition \emph{\ref{density-poly-sufficient}}) since
it means that the polynomials are dense in $L_2(\mathbb{R},\rho)$
\cite[Prop.\,4.15]{MR1627806}.
Finally, denote by $J^{(g)}$ the self-adjoint extension of $J$
corresponding to $\rho$ and consider the operator $J^{(g)}(\theta)$
obtained from $J^{(g)}$ as indicated in the Preliminaries with
$\theta$ given by (\ref{eq:parameter-recovery-zero}). By construction
the sequence $\{\lambda_k\}_{k\in M}$ is the spectrum of
$J^{(g)}$. For the proof to be complete it only remains to show that
$\{\mu_k\}_{k\in M}$ is the spectrum of $J^{(g)}(\theta)$. For the
function given in (\ref{eq:m-goth-def}), taking into account
(\ref{eq:m-through-m}) and (\ref{eq:m-discrete}), one has
\begin{equation*}
\mathfrak{m}(\zeta)=\theta^2+\zeta\left(\theta^2-1\right)
\sum_{k\in M}\frac{1}{\alpha_k(\lambda_k-\zeta)}\,.
\end{equation*}
On the other hand, from (\ref{eq:definition-m-tilde}) and
(\ref{eq:m-tilde-as-sum}), it follows that
\begin{equation*}
\widetilde{\mathfrak{m}}(\zeta)=\theta^2+\zeta\left(\theta^2-1\right)
\sum_{k\in M}\frac{\tau_k}{\lambda_k-\zeta}\,.
\end{equation*}
But we have already proven that $\alpha_k^{-1}=\tau_k$ for $k\in
M$. Thus $\mathfrak{m}=\widetilde{\mathfrak{m}}$, meaning that the
zeros of $\mathfrak{m}$ are given by the sequence $\{\mu_k\}_{k\in
M}$.
\end{proof}
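The residue identity (\ref{eq:residue-tilde}) and the formula (\ref{eq:tau-def-1}) for $\tau_n$ can be sanity-checked numerically when only finitely many eigenvalues are kept. The sequences below are illustrative (not from the paper), interlacing with $|\mu_k|>|\lambda_k|$ for every $k$, so all $\tau_n$ come out positive:

```python
import numpy as np

lam = np.array([-3.0, -1.0, 2.0, 4.0])   # illustrative lambda_k, all nonzero
mu = np.array([-3.5, -1.5, 2.5, 4.5])    # mu_k shifted away from the origin
kappa = np.prod(mu / lam)                 # finite analogue of prod mu_k/lambda_k

def m_frak(z):
    return np.prod((z - mu) / (z - lam))

def m_tilde(z):
    return (m_frak(z) - kappa) / (z * (kappa - 1.0))

n = 2
tau_n = (mu[n] - lam[n]) / (lam[n] * (kappa - 1.0)) * np.prod(
    [(mu[k] - lam[n]) / (lam[k] - lam[n]) for k in range(len(lam)) if k != n])

# (z - lambda_n) * m_tilde(z) tends to the residue as z -> lambda_n,
# which should equal -tau_n.
eps = 1e-7
residue = eps * m_tilde(lam[n] + eps)
assert tau_n > 0
assert abs(residue + tau_n) < 1e-4
```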
\begin{theorem}
\label{prop:necessary-sufficient-zero}
Let $\{\lambda_k\}_k$ and $\{\mu_k\}_k$ be two infinite real
sequences without finite points of accumulation, such that each of
them contains exactly one element equal to zero, and consider any
positive real number $\theta\ne 1$. There exists a unique operator
$J$, and a unique $g\in\mathbb{R}\cup\{\infty\}$ if $J\ne J^*$, such
that $\{\mu_k\}_k$ is the spectrum of $J^{(g)}(\theta)$ and
$\{\lambda_k\}_k$ is the spectrum of $J^{(g)}$ if and only if the
conditions \ref{interlace-sufficient}),
\ref{sum-spectr-sufficient}), \ref{finite-moments-sufficient}), and
\ref{density-poly-sufficient}) hold with
\begin{align*}
\tau_n&:=
\frac{\mu_n-\lambda_n}{\lambda_n\left(\theta^2-1\right)}
\prod_{\substack{k\in M\\k\ne n}}
\frac{\mu_k-\lambda_n}{\lambda_k-\lambda_n}\,,
\quad n\in M\,, n\ne 0\,, \\
\tau_0&:=(\theta^2-1)^{-1}\left(\theta^2-
\prod_{\substack{k\in M\\k\ne 0}}
\frac{\mu_k}{\lambda_k}\right)\,,
\end{align*}
where
\begin{equation}
\label{eq:bound-on-theta}
\theta^2
\begin{cases}
<\displaystyle\prod_{\substack{k\in M\\k\ne 0}}\frac{\mu_k}{\lambda_k} &
\text{if}\ \{\mu_k\}_k\ \text{is shifted to the
left in}\
\mathbb{R}_+\ \text{w.r.t.}\ \{\lambda_k\}_k\,,\\
>\displaystyle\prod_{\substack{k\in M\\k\ne 0}}\frac{\mu_k}{\lambda_k} &
\text{otherwise}.
\end{cases}
\end{equation}
\end{theorem}
\begin{proof}
The proof is analogous to the proof of
Theorem~\ref{prop:necessary-sufficient}. Recall that by our
convention for enumerating the sequences $\lambda_0=\mu_0=0$. Thus,
for proving the necessity of the conditions
\emph{\ref{interlace-sufficient}})--\emph{\ref{density-poly-sufficient}}),
one need only verify that $\tau_0=\alpha_0^{-1}$ and that
(\ref{eq:bound-on-theta}) holds. This is immediate in view of
(\ref{eq:0-case-m-goth-0}) and Remark~\ref{rem:0-case-m-goth-0}. The
sufficiency of the conditions is established as in the proof of
Theorem~\ref{prop:necessary-sufficient}. Here, one substitutes
(\ref{eq:kappa-def}) by
\begin{equation*}
\kappa=\prod_{\substack{k\in M\\k\ne 0}}
\frac{\mu_k}{\lambda_k}
\end{equation*}
and (\ref{eq:definition-m-tilde}) by
\begin{equation*}
\widetilde{m}(\zeta):=
\frac{\widetilde{\mathfrak{m}}(\zeta)-\theta^2}
{\zeta\left(\theta^2-1\right)}\,,\qquad \zeta\ne 0\,.
\end{equation*}
Then, one verifies that $
\res_{\zeta=\lambda_n}\widetilde{m}(\zeta)=-\tau_n$ for all $n\in M$
and $\sum_{k\in M}\tau_k=1$. Note that (\ref{eq:bound-on-theta})
guarantees that $\tau_n>0$ for all $n\in M$. The rest of the proof
repeats that of Theorem~\ref{prop:necessary-sufficient} taking
into account that now the zeros of $\mathfrak{m}$ are given by
$\{\mu_k\}_{k\in M}\setminus\{0\}$.
\end{proof}
\begin{theorem}
\label{prop:necessary-sufficient-aa}
Given two infinite real sequences $\{\lambda_k\}_k$ and
$\{\mu_k\}_k$ without finite points of accumulation, such that neither
of them contains zero, there is a unique positive $\theta$ and a
unique operator $J=J^*$ such that $\{\mu_k\}_k$ is the spectrum of
$J^{(g)}(\theta)$ and $\{\lambda_k\}_k$ is the spectrum of $J$
if and only if conditions \ref{interlace-sufficient}),
\ref{sum-spectr-sufficient}), \ref{finite-moments-sufficient}),
together with
\begin{equation*}
\text{d')}\qquad\qquad\qquad
\lim_{n\to\infty}
\frac{\det\begin{pmatrix}
s_0 & s_1 & \cdots & s_n\\[1mm]
s_1 & s_2 & \cdots & s_{n+1}\\[1mm]
\hdotsfor[2]{4}\\[1mm]
s_n & s_{n+1} & \cdots & s_{2n}
\end{pmatrix}}
{\det\begin{pmatrix}
s_4 & s_5 & \cdots & s_{n+2}\\[1mm]
s_5 & s_6 & \cdots & s_{n+3}\\[1mm]
\hdotsfor[2]{4}\\[1mm]
s_{n+2} & s_{n+3} & \cdots & s_{2n}
\end{pmatrix}}=0\,,\qquad\qquad\qquad
\end{equation*}
where $s_n:=\sum_{k\in M}\lambda_k^n\tau_k$ for $n\in\mathbb{N}\cup\{0\}$,
are fulfilled. Note that, by our convention on the notation,
$J^{(g)}(\theta)$ is a non-singular finite-rank perturbation of $J$
which does not depend on $g$.
\end{theorem}
\begin{proof}
We again repeat the reasoning of the proof of
Theorem~\ref{prop:necessary-sufficient}. Clearly, $s_n$
($n\in\mathbb{N}\cup\{0\}$) are the numbers given in
(\ref{eq:moments}). Thus, on the basis of the Hamburger criterion (see
\cite[Addenda\,2,\,Sec.\,9]{MR0184042}),
\emph{d')} holds when $J=J^*$. For the sufficiency, note that, due
to \cite[Addenda\,2,\,Sec.\,9]{MR0184042},
\emph{d')} implies that the
measure corresponding to the function given in
(\ref{eq:rho-fro-proof}) is the unique solution of the moment
problem, so $J=J^*$ and \emph{d)} is not needed.
\end{proof}
\begin{remark}
\label{last}
Admittedly, \emph{d')} is not easy to check; however, it allows one to give
necessary and sufficient conditions in the self-adjoint case. Note
that one can also give the analogous self-adjoint version of
Theorem~\ref{prop:necessary-sufficient-zero} by
replacing condition \emph{d)} by \emph{d')}.
\end{remark}
\begin{acknowledgments}
The authors thank the referee whose comments have led to an improved
presentation of this work.
\end{acknowledgments}
% arXiv:1106.4598 --- Inverse problems for Jacobi operators II: Mass perturbations of semi-infinite mass-spring systems (math-ph)
% arXiv:1202.3066 --- Sets computing the symmetric tensor rank
\section{Introduction}\label{S1}
Let $\nu _d: \mathbb {P}^r \to \mathbb {P}^N$, $N:= \binom{r+d}{r}-1$,
denote the degree $d$ Veronese embedding of $\mathbb {P}^r$.
Set $X_{r,d}:= \nu _d(\mathbb {P}^r)$.
For any $P\in \mathbb {P}^N$, the {\emph {symmetric rank}} or {\emph
{symmetric tensor rank}} or, just, the {\emph {rank}}
$sr (P)$ of $P$ is the minimal cardinality of a finite set
$S\subset X_{r,d}$ such that $P\in \langle S\rangle$, where
$\langle \ \ \rangle$ denotes the linear span.
For any $P\in \mathbb {P}^N$, let $\mathcal {S}(P)$ denote the set
of all finite subsets $A\subset \mathbb {P}^r$ such that $\nu _d(A)$
computes $sr(P)$,
i.e. the set of all $A\subset \mathbb {P}^r$ such that
$P\in \langle \nu _d(A)\rangle$ and $\sharp (A) = sr (P)$. Notice that
if $A\in \mathcal {S}(P)$, then
$P\notin \langle \nu _d(A')\rangle$ for any $A'\subsetneq A$.
The study of the sets $\mathcal {S}(P)$ has a natural role in the theory
of symmetric tensors. Indeed, if we interpret points
$P\in \mathbb {P}^N$ as symmetric tensors, then $\mathcal {S}(P)$
is the set of all the representations of $P$ as a sum
of rank $1$ tensors. For many applications, it is crucial
to have some information about the structure of $\mathcal {S}(P)$.
We do not recall the impressive literature on the subject
(but see \cite{kb} for a good repository of references).
The interest in the theory is growing, since applications of tensors
are currently increasing in Algebraic Statistics, and hence in Biology, Chemistry
and also Linguistics (see e.g. \cite{kb} and \cite{l}).
Let us mention one relevant aspect, from our point of view.
If we are looking for one specific decomposition of $P$ as a sum of tensors of
rank $1$, and we find some decomposition (there is software which tries
heuristically to compute it), how can we ensure that the
decomposition found is the expected one? Of course, if $\mathcal {S}(P)$
is a singleton, the answer is obvious. In a recent paper
(\cite{bgl}) Buczy\'{n}ski, Ginensky and Landsberg proved
that $\sharp (\mathcal {S}(P)) =1$ when the rank is small, i.e.
$sr (P) \le (d+1)/2$. This important uniqueness theorem
(which holds more generally for $0$-dimensional schemes, see \cite{bl} Proposition 2.3)
turns out to be sharp, even if $r=1$. For larger values of the rank, one can determine
the uniqueness of the decomposition, when
an element $A\in\mathcal{S}(P)$ satisfies some geometric properties
(e.g. when no $3$ points of $A$ are collinear, see \cite{bb}, Theorem 2
or when $A$ is in {\it general uniform position}, see \cite{bc}).
In this paper, we describe more closely the set $\mathcal {S}(P)$,
for tensors whose rank sits in the range $sr(P)< 3d/2$.
In particular, we show that for each $P$ with
$\sharp (\mathcal {S}(P)) >1$, the set $\mathcal {S}(P)$ has no
isolated points.
This result has a consequence. Assume we are given
$Q\in \mathbb {P}^N$ with $sr (Q) < 3d/2$, and we find
$A\in \mathcal {S}(Q)$ which is isolated in $\mathcal {S}(Q)$.
Then we can conclude that $A$ is the unique element of $\mathcal {S}(Q)$
(in other words, $Q$ is {\it identifiable}).
This means that, in the specified range, given
one decomposition $A\in \mathcal{S}(P)$, one can conclude that
$A$ is unique just by performing an analysis
of $\mathcal{S}(P)$ in a neighbourhood of $A$. This is
much easier than looking for other points of $\mathcal{S}(P)$
in the whole space.
Our precise statement is:
\begin{theorem}\label{i2}
Assume $r\ge 2$. Fix a positive integer $t<3d/2$.
Fix $P\in \mathbb {P}^N$ such that $sr (P)=t$ and the
symmetric rank of $P$ is computed by at least two different sets
$A, B\subset \mathbb {P}^r$. Then
$sr (P)$ is computed by an infinite family of subsets of
$\mathbb {P}^r$, and this family has no isolated points.
\end{theorem}
We notice that the notion of ``isolated points'' requires an algebraic
structure on the set $\mathcal {S}(P)$. As is well-known (and checked
in Section \ref{S2}), the set $\mathcal {S}(P)$ is constructible
in the sense of Algebraic Geometry
(\cite{h}, Ex. II.3.18 and Ex. II.3.19). This makes more precise
the expression ``no isolated point'' above (see Remark \ref{o000}
in section \ref{S2} for the details).
We also prove that the bound $t<3d/2$, in the statement of Theorem \ref{i2}, is
sharp. Indeed, Example \ref{i1} provides one tensor $P$ with $sr (P) =3d/2$
(so $d$ is even) and $\sharp (\mathcal {S}(P))=2$.
In the proof, it is not difficult to see that if $\mathcal {S}(P)$
contains at least two elements and
$sr(P)<3d/2$, then the shape of the Hilbert functions of $A,B$
shows that both sets have a large intersection
with either a line, or a conic of $\mathbb{P}^r$
(we will refer to \cite{bb} and \cite{c2}, for this part of the theory).
Then, we perform a (maybe tedious, but necessary) analysis
of the behaviour of sets of points, with a big intersection
with either a line or a conic.
We also provide a deeper description of $\mathcal {S}(P)$,
still in the range $sr(P)<3d/2$ and
assuming that $\mathcal {S}(P)$ is not a singleton (hence it is infinite).
Indeed, we have the following:
\begin{theorem}\label{i3}
Assume $r\ge 2$ and $d\ge 3$. Fix a positive integer $t<3d/2$.
Fix $P\in \mathbb {P}^N$ such that $sr (P)=t$. Then, the set $\mathcal
{S}(P)$ is not a single point if and only if $P$ may be described
in one of the following ways:
\begin{itemize}
\item[(a)] for any $A\in \mathcal {S}(P)$, there is a line
$D\subset \mathbb {P}^r$ such that $\sharp (A\cap D) \ge \lceil
(d+2)/2\rceil$; set $F:= A\setminus A\cap D$; the set
$\langle \nu _d(A\cap D)\rangle \cap \langle\{P\}\cup \nu _d(F)\rangle$,
is formed by a unique point $P_D$ and $\mathcal {S}(P_D)$ is infinite;
for each $E\in \mathcal {S}(P_D)$ we have $E\cap F =\emptyset$
and $E\cup F \in \mathcal {S}(P)$.
\item[(b)] for any $A\in \mathcal {S}(P)$, there is a smooth conic
$T\subset \mathbb {P}^r$ such that $\sharp (A\cap T) \ge d+1$;
set $F:= A\setminus A\cap T$; the set
$\langle \nu _d(A\cap T)\rangle \cap \langle \{P\}\cup F\rangle$,
is formed by a unique point $P_T$ and $\mathcal {S}(P_T)$ is infinite;
for each $E\in \mathcal {S}(P_T)$ we have $E\cap F =\emptyset$;
every element of $\mathcal {S}(P)$ is of the form $E'\cup F$ for some
$E'\subset T$ computing $\mathcal {S}(P_T)$ with respect to the
rational normal curve $\nu _d(T)$.
\item[(c)] $d$ is odd; for any $A\in \mathcal {S}(P)$, there is
a reducible conic $T=L_1\cup L_2\subset \mathbb {P}^r$, $L_1\ne
L_2$, such that $\sharp (A\cap L_1) = \sharp (A\cap L_2)=(d+1)/2$ and
$L_1\cap L_2\notin A$.
\end{itemize}
\end{theorem}
Let us mention that if $L$ is a linear subspace of
dimension $m$ in $\mathbb{P}^r$,
then the Veronese embedding $\nu_d$, restricted to
$L$, can be identified with a $d$-th Veronese embedding
of $\mathbb{P}^m$. Thus, if $Q$ is a point of the linear
span $\langle \nu_d(L)\rangle$, then we can consider
the rank of $Q$, either with respect to $X_{r,d}$, or with respect
to $X_{m,d}$. Fortunately, in our cases where
this ambiguity could arise, \cite{bl} Corollary 2.2
will guarantee that the two ranks are equal, and
every decomposition $A\in\mathcal {S}(Q)$, with respect to
$X_{r,d}$, is contained in $X_{m,d}$. Indeed, we have:
\begin{remark}\label{i4}
Take $P_D$ (resp. $P_T$) as in case (a) (resp. (b)) of
Theorem \ref{i3}. By \cite{ls}, Proposition 3.1, or \cite{lt},
subsection 3.2, $sr (P_D)$ (resp. $sr (P_T)$) is equal to its
symmetric rank with respect to the rational normal curve
$\nu _d(D)$ (resp. $\nu _d(T))$. By the symmetric case of
\cite{bl}, Corollary 2.2, each element of $\mathcal {S}(P_D)$
(resp. $\mathcal {S}(P_T)$) is contained in $D$ (resp. $T$).
Several algorithms are available, to get an element of $\mathcal
{S}(P_D)$ or $\mathcal {S}(P_T)$ (\cite{cs}, \cite{lt}, \cite{bgi}).
\end{remark}
Finally, we wish to thank J. Landsberg, who pointed out to us
the importance of studying the existence of isolated points $A\in \mathcal
{S}(P)$, when $\mathcal {S}(P)$ is not a singleton.
\section{Preliminaries}\label{S2}
We work over an algebraically closed field $\mathbb {K}$
such that $\mbox{char}(\mathbb {K})=0$.
\smallskip
Recall, from the introduction, that $\nu _d: \mathbb {P}^r \to \mathbb {P}^N$,
$N:= \binom{r+d}{r}-1$ denotes the degree $d$ Veronese embedding of $\mathbb {P}^r$.
Call $X_{r,d}$ the image of this map.
For any closed subscheme $W\subseteq \mathbb {P}^r$, let $\langle W\rangle$
denote the linear span of $W$. If $W$ sits in some hyperplane, $\langle W\rangle$
is the intersection of all the hyperplanes of $\mathbb {P}^r$ containing $W$.
For any integer $m>0$ and any integral, positive-dimensional subvariety
$T\subset \mathbb {P}^r$, we let $\Sigma _m(T)$ denote the embedded $m$-th
secant variety of $T$, i.e. the closure in $\mathbb {P}^r$
of the union of all $(m-1)$-dimensional linear subspaces spanned by $m$ points of $T$.
We take the closure with respect to the Zariski topology. Notice that, over
the complex number field, the closure in the Euclidean topology gives the same set.
For any integer $k>0$, let $\mbox{Hilb}^k(\mathbb {P}^r)^0$ denote the set of
all finite ($0$--dimensional) reduced subsets of $\mathbb {P}^r$, with cardinality $k$.
$\mbox{Hilb}^k(\mathbb {P}^r)^0$ is a smooth and quasi-projective variety of dimension $rk$.
\begin{remark}\label{o000}
We observe that the set $\mathcal {S}(P)$, defined in the
introduction, is always constructible.
Indeed, let $G:=G(k-1,r)$ denote the Grassmannian of all $(k-1)$-dimensional
linear subspaces of $\mathbb {P}^r$.
For any point $P\in \mathbb {P}^r$, set $G(k-1,r)(P):= \{V\in G(k-1,r): P\in V\}$
and $G(k-1,r)(P)_+:= \{V\in G(k-1,r)(P): P$ is spanned by $k$ points of $V\cap X\}$.
Notice that, by definition, $G(k-1,r)(P)_+ =\emptyset$ for all $k < sr (P)$
and $G(sr (P)-1,r)(P)_+ \ne \emptyset$.
Now, put $\mathcal {J}:= \{(S,V)\in \mbox{Hilb}^{sr (P)}(\mathbb {P}^r)^0\times G(sr (P)-1,r)(P)_+ :
P\in \langle \nu _d(S)\rangle \}$. This set $\mathcal {J}$
is locally closed. If $\pi_1$ denotes the projection onto the first factor, then
$\mathcal {S}(P)$ is exactly the image $\pi _1(\mathcal {J})$.
Hence, a theorem of Chevalley guarantees that $\mathcal {S}(P)$ is a constructible set
(\cite{h}, Ex. II.3.18 and Ex. II.3.19).
We are interested in isolated points of $\mathcal {S}(P)$. Notice that $Z$ is an
isolated point of $\mathcal {S}(P)$ when $\{Z\}$ is an irreducible component of the closure of
$\mathcal {S}(P)$. Thus, the notion of {\it isolated point} of $\mathcal {S}(P)$
is the same whether we use the Zariski or the Euclidean topology on $\mathcal {S}(P)$.
\end{remark}
\begin{remark}\label{a00}
Let $X$ be any projective scheme and $D$ any effective Cartier divisor of $X$. For any closed
subscheme $Z$ of $X$, we denote by $\mbox{Res}_D(Z)$ the residual scheme of $Z$ with
respect to $D$, i.e. the closed subscheme of $X$ with ideal sheaf
$\mathcal {I}_Z:\mathcal {I}_D$ (where $\mathcal {I}_Z, \mathcal {I}_D$ are the ideal sheaves
of $Z$ and $D$, respectively).
We have $\deg (Z) = \deg (Z\cap D) + \deg (\mbox{Res}_D(Z))$. If $Z$ is a finite reduced set,
then $\mbox{Res}_D(Z) = Z\setminus Z\cap D$. For every $L\in \mbox{Pic}(X)$ we have the exact sequence
\begin{equation}\label{eqa1.0}
0 \to \mathcal {I}_{\mbox{Res}_D(Z)}\otimes L(-D) \to \mathcal {I}_Z\otimes L \to
\mathcal {I}_{Z\cap D,D}\otimes (L\vert D)
\to 0
\end{equation}
From (\ref{eqa1.0}) we get
$$h^i(X,\mathcal {I}_Z\otimes L) \le h^i(X,\mathcal {I}_{\mbox{Res}_D(Z)}
\otimes L(-D))+h^i(D,\mathcal {I}_{Z\cap
D,D}\otimes (L\vert D))$$ for every integer $i\ge 0$.
\end{remark}
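For concreteness, here is a minimal instance of the degree count above (our illustration): take $X=\mathbb {P}^2$, let $D$ be a line and let $Z\subset \mathbb {P}^2$ be a set of four points, exactly two of which lie on $D$. Then $Z\cap D$ consists of the two points lying on $D$, $\mbox{Res}_D(Z)$ consists of the two remaining points, and indeed
$$\deg (Z) = \deg (Z\cap D)+\deg (\mbox{Res}_D(Z)) = 2+2 = 4.$$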
\section{The proofs}\label{S3}
We will make an extensive use of the following two results.
\begin{lemma}\label{v1} Let $A, B\subset \mathbb{P}^r$
be two zero-dimensional schemes such that $A\ne B$.
Assume the existence of $P\in \langle \nu _d(A)\rangle \cap \langle
\nu _d(B)\rangle$ such that $P\notin \langle \nu _d(A')\rangle$
for any $A'\subsetneq A$ and $P\notin \langle \nu _d(B')\rangle$ for any $B'\subsetneq B$.
Then $h^1(\mathbb P^r,\mathcal {I}_{A\cup B}(d)) >0$.
\end{lemma}
\begin{proof} See \cite{bb}, Lemma 1.
\end{proof}
The following lemma was proved (with $D$ a hyperplane)
in \cite{bb1}, Lemma 7. The same proof works for an arbitrary
hypersurface $D$ of $\mathbb {P}^r$.
\begin{lemma}\label{v2}
Fix positive integers $r, d, t$ such that $t\le d$ and finite sets
$A, B\subset \mathbb {P}^r$. Assume the existence of a degree $t$ hypersurface
$D\subset \mathbb {P}^r$ such that $h^1(\mathcal {I}_{(A\cup B)\setminus
(A\cup B)\cap D}(d-t)) =0$. Set $F:= A\cap B \setminus (D\cap A\cap B)$.
Then $\nu _d(F)$ is linearly independent. Moreover $\langle \nu _d(A)\rangle
\cap \langle \nu _d(B)\rangle$ is the linear span of the two supplementary subspaces
$\langle \nu _d(F)\rangle$ and $\langle \nu _d(A\cap D)\rangle
\cap \langle \nu _d(B\cap D)\rangle$.
Assume there is $P\in \langle \nu _d(A)\rangle \cap \langle
\nu _d(B)\rangle$ such that $P\notin \langle \nu _d(A')\rangle$ for any $A'\subsetneq A$,
and $P\notin \langle \nu _d(B')\rangle$ for any $B'\subsetneq B$. Then
$A = (A\cap D)\sqcup F$, $B = (B\cap D)\sqcup F$ and $A\setminus A\cap D = B\setminus B\cap D$.
\end{lemma}
Next, we need to point out first the case
of the Veronese embeddings $X_{1,d}$ of $\mathbb P^1$.
This (already non--trivial) case anticipates some features
of the behaviour of the sets $\mathcal S(P)$, in higher dimension.
\begin{lemma}\label{w1}
Assume $r=1$ and hence $N =d$. Fix $P\in \mathbb {P}^d$ such that $sr (P)$ is
computed by at least two different subsets of $X_{1,d}$.
Then $\dim (\mathcal {S}(P)) > 0$ and $\mathcal {S}(P)$ has no isolated points.
\end{lemma}
\begin{proof}
Let $t$ be the border rank of $P$, i.e. the minimal integer such that $P$ sits in
the secant variety $\Sigma _t(X_{1,d})$. The dimensions of the secant varieties of an
irreducible curve are well known (\cite{a}, Remark 1.6), and it turns out that
$t \le \lfloor (d+2)/2\rfloor$. Take $A, B$ computing $sr (P)$ and such that
$A\ne B$. Lemma \ref{v1} gives $h^1(\mathcal {I}_{A\cup B}(d)) >0$.
Since any set of at most $d+1$ points is separated by divisors of degree $d$, we
see that $\sharp (A\cup B) \ge d+2$. Hence $\sharp (A) =\sharp (B)\ge t$
and equality holds only if $t =(d+2)/2$ and $A\cap B=\emptyset$.
\quad (i) First assume $t =(d+2)/2$, so that, as we observed above, $t$
is also the symmetric rank of $P$.
In this case, by \cite{a}, Remark 1.6, a standard dimensional count
proves that $\Sigma _{t}(X_{1,d}) = \mathbb {P}^d$.
Moreover, $\mathcal {S}(P)$ can be described as the fiber of a natural
proper map of varieties. Namely, let $G(t-1,d)$ denote the Grassmannian
of $(t-1)$-dimensional linear subspaces of $\mathbb {P}^d$.
Let $\mathbb {I} := \{(O,V)\in \mathbb {P}^d\times G(t-1,d): O\in V\}$
denote the incidence correspondence, and $\pi _1$, $\pi _2$ denote the morphisms induced
from the projections to the two factors.
Since $X_{1,d}$ is a rational normal curve, of degree $d$, notice that
$\dim (\langle W\rangle )=t-1$ for {\it every} $W\in \mbox{Hilb}^t(X_{1,d})$.
Thus, the map $Z\mapsto \langle Z\rangle$ defines
a proper morphism $\phi : \mbox{Hilb}^t(X_{1,d})\to G(t-1,d)$.
Set $\Phi := \pi _2^{-1}(\phi (\mbox{Hilb}^t(X_{1,d})))$.
By construction, $\mathcal {S}(P)$ corresponds to the fiber of the map
$\pi_{1|\Phi} : \Phi \to \mathbb {P}^d$ over $P$.
$\Phi$ (the {\it abstract secant variety}) is an integral variety of
dimension $\dim (\Phi )=d+1$ (\cite{a}).
Since $\pi_{1|\Phi}$ is proper and $\Phi$ is integral, every fiber of $\pi_{1|\Phi}$
has dimension at least $1$ and no isolated points (\cite{h}, Ex. II.3.22 (d)).
Thus, the claim holds, in this case.
\quad (ii) Now assume $d \ge 2t-1$. Hence $t< sr (P)$. A theorem of Sylvester
(see \cite{cs}, or \cite{lt}, Theorem 4.1) proves that, in this case,
$sr (P) = d+2 -t$. Moreover, by \cite{lt} \S 4, there is a unique
zero-dimensional scheme $Z\subset \mathbb {P}^1$ such that $\deg (Z)=t$ and
$P\in \langle \nu _d(Z)\rangle$. As $t<sr(P)$, this subscheme $Z$ cannot
be reduced.
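To illustrate Sylvester's formula in a standard instance (not needed in the sequel): for the point $P$ associated to the binary form $x_0^{d-1}x_1$, $d\ge 3$, the border rank is $t=2$, the unique degree $2$ scheme $Z$ with $P\in \langle \nu _d(Z)\rangle$ is the double point supported at $[1:0]$, and
$$sr (P) = d+2-t = d.$$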
Fix any $A\in \mathcal {S}(P)$. Since $h^1(\mathcal {I}_{A\cup Z}(d))>0$ (Lemma \ref{v1})
and $\deg (A)+\deg (Z) =d+2$, we have $Z\cap A = \emptyset$. Fix any $E\subset A$ such that
$d-\sharp (E) = 2t-2$. Let $Y_E\subset \mathbb {P}^{2t-2}$ be the image of $X_{1,d}$
under the projection $\pi _E$ from the linear subspace $\langle \nu _d(E)\rangle$.
Notice that $Y_E$ is again a rational normal curve, of degree $2t-2$, so that it coincides,
up to a projectivity, with $X_{1,2t-2}$.
We have $Z\cap E = \emptyset$. Moreover $\deg (Z)+\sharp (E) \le d+1$, so that, by the
properties of the rational normal curve mentioned above, the set $\nu_d(Z)\cup\nu_d(E)$
is linearly independent. It follows $\langle \nu _d(Z)\rangle\cap \langle \nu _d(E)\rangle = \emptyset$.
Hence $\pi _E$ is a morphism at each point of $\langle \nu _d(Z)\rangle$ and maps it isomorphically
onto a $(t-1)$-dimensional linear subspace of $\mathbb {P}^{2t-2}$. As $\deg (A) \le d+1$,
for the same reason we also have $\langle \nu _d(A\setminus E)\rangle \cap \langle \nu _d(E)\rangle
= \emptyset$. It follows that the symmetric rank of $\pi_E(P)$ (with respect to $Y_E$)
is exactly $t$, and $\pi _E(\nu _d(A\setminus E))$ is one of the elements
of the set $\mathcal {S}(\pi _E(P))$. Moreover, for any $U\in \mathcal {S}(\pi _E(P))$
the set $U\cup E$ computes $sr (P)$. We saw above that $\pi _E(\nu _d(A\setminus E))$
is not an isolated element of $\mathcal {S}(\pi _E(P))$. Thus $A$ is not an isolated
element of $\mathcal {S}(P)$.
\end{proof}
\vspace{0.3cm}
Now, we are ready to prove our first main result.
\smallskip
\qquad {\emph {Proof of Theorem \ref{i2}.}}
Since $A\ne B$, Lemma \ref{v1} gives $h^1(\mathcal {I}_{A\cup B}(d)) >0$.
Then, since $\sharp (A\cup B) \le 2t < 3d$,
one of the following cases occurs (\cite{c2}, Th. 3.8):
\begin{itemize}
\item[(i)] there is a line $D\subset \mathbb {P}^r$ such that $\sharp (D\cap (A\cup B))\ge d+2$;
\item[(ii)] there is a conic $T\subset \mathbb {P}^r$ such that $\sharp (T\cap (A\cup B))\ge 2d+2$.
\end{itemize}
We will prove the statement by showing that Lemma \ref{w1} implies that
we can move the points of $A\cap D$ (in case (i)), or $A\cap T$ (in case (ii)),
in a continuous family, whose elements, together with $A\setminus (A\cap D)$,
determine a non-trivial family of sets in $\mathcal {S}(P)$ containing $A$.
\quad (a) In this step, we assume the existence of a line $D\subset \mathbb {P}^r$ such that
$\sharp (D\cap (A\cup B))\ge d+2$.
Set $F:= A\setminus (A\cap D)$. Let $H\subset \mathbb {P}^r$ be a general hyperplane containing $D$.
Since $A\cup B$ is finite and $H$ is general, we have $(A\cup B)\cap H = (A\cup B)\cap D$.
First assume $h^1(\mathcal {I}_{(A\cup B)\setminus (A\cup B)\cap D}(d-1))=0$.
Lemma \ref{v2} gives $A\setminus (A\cap D) = B\setminus (B\cap D)$. Hence $\sharp (A\cap D) =
\sharp (B\cap D)$ and $A\cap D \ne B\cap D$, since $A\ne B$.
The Grassmann's formula shows that $\langle \nu _d(A)\rangle \cap
\langle \nu _d(B)\rangle$ is the linear span of its (supplementary) subspaces
$\langle \nu _d(A\setminus (A\cap D))\rangle$ and $\langle \nu _d(A\cap D)\rangle \cap
\langle \nu _d(B\cap D)\rangle$. This means that one can find a point $P_D\in \langle \nu _d(A\cap D)\rangle
\cap \langle \nu _d(B\cap D)\rangle$ such that $P\in \langle \{P_D\}\cup
\nu _d(A\setminus A\cap D)\rangle = \langle \{P_D\}\cup
\nu _d(F)\rangle$. We notice that $A\cap D$ and $B\cap D$ are two different
subsets of the rational normal curve $\nu_d(D)$, and they compute the rank of $P_D$
with respect to $\nu_d(D)$ (which can be identified with $X_{1,d}$, see
the Introduction). Indeed, if $P_D$ belongs to the span of a subset $Z$
of $\nu_d(D)$, with cardinality smaller than $\sharp (A\cap D)$, then $P$ would belong
to the span of the subset $\nu_d(F)\cup Z$, of cardinality smaller than $sr(P)$, a
contradiction. By Lemma \ref{w1}, $A\cap D$ is not an isolated point of $\mathcal{S}(P_D)$.
\quad {\emph {Claim 1:}} Fix any $E\in \mathcal {S}(P_D)$. Then
$sr (P) = \sharp (F) +sr (P_D)$
and $E\cup F\in \mathcal {S}(P)$.
\quad {\emph {Proof of Claim 1:}} Notice that, by the symmetric case of \cite{bl},
Corollary 2.2 (see also Remark \ref{i4}),
every element of $\mathcal {S}(P_D)$ is contained in $D$ and
in particular it is disjoint from $F$.
Since $P_D\in \langle \nu _d(E)\rangle$ and $P\in \langle \{P_D\}\cup \nu _d(F)\rangle$,
we have $P\in \langle \nu_d(E\cup F)\rangle $.
Hence, to prove Claim 1 it is sufficient to prove $\sharp (E\cup F) \le sr (P)$.
Since $F\cap D = \emptyset$, we have $\sharp (E\cup F) = sr (P) +sr (P_D) -\sharp (A\cap D)$.
Since $P_D\in \langle \nu _d(A\cap D)\rangle$, we have $\sharp (A\cap D) \ge sr (P_D)$
by the definition of $sr (P_D)$, concluding the proof of Claim 1.
Claim 1 implies that $A$ is not an isolated
point of $\mathcal {S}(P)$. Namely, let $\Delta$ be an integral affine curve and $o\in \Delta$
such that there is $\{\alpha _\lambda \}_{\lambda \in \Delta} \subseteq \mathcal {S}(P_D)$
with $\alpha _o = A\cap D$ and $\alpha _\lambda \subset D$ for all $\lambda \in \Delta$
(Lemma \ref{w1}). By Claim 1, we have $F\cup \alpha _\lambda \in \mathcal {S}(P)$ for
all $\lambda \in \Delta$.
Now assume $h^1(\mathcal {I}_{(A\cup B)\setminus (A\cup B)\cap D}(d-1))>0$. Since
$\sharp ((A\cup B)\setminus (A\cup B)\cap D) \le 2d-2\le 2d-1$, again there is a line
$L\subset \mathbb {P}^r$ such that $\sharp (L\cap ((A\cup B)\setminus (A\cup B)\cap D)) \ge d+1$.
Let $H_2\subset \mathbb {P}^r$ be a general quadric hypersurface containing $D\cup L$
(it exists, because if $L\cap D =\emptyset$, then $r\ge 3$). Since $L\cup D$ is the base locus
of the linear system $\vert \mathcal {I}_{L\cup D}(2)\vert$, $A\cup B$ is finite and
$H_2$ is general in $\vert \mathcal {I}_{L\cup D}(2)\vert$,
we have $H_2\cap (A\cup B) = (L\cup D)\cap (A\cup B)$. Since $\sharp ((A\cup B)\setminus (A\cup B)\cap H_2)
\le (3d-1)-(2d+3) \le d-1$, we have $h^1(\mathcal {I}_{(A\cup B)\setminus (A\cup B)\cap H_2}(d-2)) =0$.
Lemma \ref{v2} gives $A\setminus (A\cap (D\cup L)) = B\setminus (B\cap (D\cup L))$.
Notice that either $\sharp (A\cap L) \ge (d+2)/2$, or $\sharp (B\cap L) \ge
(d+2)/2$, since $\sharp ((A\cup B)\cap (D\cup L)) \ge 2d+3$ and $\sharp (A\cap (D\cup L))
= \sharp (B\cap (D\cup L))$.
Assume $x:= \sharp (A\cap L) \ge (d+2)/2$. Since $P\in \langle \nu _d(A)\rangle$
and $P\notin \langle \nu _d(A')\rangle$ for any $A'\subsetneq A$, the set $\langle \{P\} \cup
\nu _d(A\setminus A\cap L)\rangle \cap \langle \nu _d(A\cap L)\rangle$ is a single point.
Call $P_{L,A}$ this point. Since $A$ computes $sr(P)$, we see that
$A\cap L$ computes the rank of $P_{L,A}$, with respect to the rational normal curve $\nu_d(L)$.
Since $2x+1> d$, as explained in the proof of Lemma \ref{w1}, $A\cap L$ is not an isolated
point of $\mathcal{S}(P_{L,A})$ (w.r.t. $\nu_d(L)$). On the other hand, as in Claim 1,
adding $A\setminus (A\cap L)$ to any set in $\mathcal {S}(P_{L,A})$
we obtain sets in $\mathcal{S}(P)$. As above, this implies that $A$ is not an isolated
point of $\mathcal {S}(P)$.
In the same way we conclude if $\sharp (B\cap L) \ge (d+2)/2$.
\quad (b) Here we assume the non-existence of a line $D\subset \mathbb {P}^r$ such that
$\sharp (D\cap (A\cup B))\ge d+2$. Hence there is a conic $T\subset \mathbb {P}^r$
such that $\sharp (T\cap (A\cup B))\ge 2d+2$.
Since $A$ computes $sr (P)$, the set
$\langle \{P\}\cup \nu _d(A\setminus A\cap T)\rangle \cap \langle \nu _d(A\cap T)\rangle$
is a single point. Call this point $P_T$. Let $H_2$ be a general element of
$\vert \mathcal {I}_T(2)\vert$. Since $\mathcal {I}_T(2)$ is spanned outside $T$ and
$A\cup B$ is finite, we have $H_2\cap (A\cup B) = T\cap (A\cup B)$. Since $\sharp (A\cup B)
-\sharp ((A\cup B)\cap T) \le d-2\le d-1$, we have $h^1(\mathcal {I}_{A\cup B\setminus (A\cup B)
\cap H_2}(d-2))=0$. Lemma \ref{v2} gives $A\setminus A\cap T = B\setminus B\cap T$.
First assume that $T$ is a smooth conic. Hence $\nu _d(T)$ is a rational normal curve
of degree $2d$. In this case, the conclusion follows
by repeating the proof of the case
$h^1(\mathcal {I}_{(A\cup B)\setminus (A\cup B)
\cap D}(d-1))=0$ of step (a), including Claim 1, with $\nu _d(T)$
instead of $\nu _d(D)$, and applying
Lemma \ref{w1} for the integer $2d$.
Now assume that $T$ is singular. Since $A\cup B$
is reduced, we may find $T$ as above which is not a double
line, say $T = L_1\cup L_2$ with $L_1\ne L_2$. Since $\sharp ((A\cup B)\cap T)\ge 2d+2$
and $\sharp ((A\cup B)\cap R)\le d+1$ for every line $R$, we have $\sharp ((A\cup B)\cap
$L_1)=\sharp ((A\cup B)\cap L_2) =d+1$ and $L_1\cap L_2\notin (A\cup B)$. If either
$\sharp (A\cap L_i) \geq (d+2)/2$ or $\sharp (B\cap L_i) \geq (d+2)/2$ for some $i$, we may repeat
the proof of the case $h^1(\mathcal {I}_{(A\cup B)\setminus (A\cup B)\cap D}(d-1))>0$ taking
$L_1\cup L_2$ instead of $L\cup D$.
Thus, it remains to consider the case where $d$ is odd and $\sharp (A\cap L_i) = \sharp (B
\cap L_i) =(d+1)/2$ for all $i$.
Set $\{O\}:= L_1\cap L_2$. Since $\langle \nu _d(L_1)
\rangle \cap \langle \nu _d(L_2)\rangle = \{\nu _d(O)\}$
and $P_T\notin \langle \nu _d(L_i)\rangle$, $i=1,2$, the linear space $\langle \nu _d(L_i)\rangle
\cap \langle \{P_T\}\cup \nu _d(L_{3-i})\rangle$ is a line $D_i \subset
\langle \nu _d(L_i)\rangle$ passing through $\nu _d(O)$. The set $\langle \nu _d(A\cap L_i)
\rangle \cap D_i$ is a point $P_{L_i,A}\in D_i\setminus\{\nu _d(O)\}$. Notice
that $\langle D_1\cup D_2\rangle$ is a plane and $P_T\in \langle D_1\cup D_2\rangle
\setminus (D_1\cup D_2)$. Hence for each $U_1\in D_1\setminus \{\nu _d(O)\}$ there is a
unique $U_2\in D_2\setminus \{\nu _d(O)\}$ such that $P_T\in \langle \{U_1,U_2\}\rangle$.
By construction, $P_{L_i,A}$ has symmetric tensor
rank $sr _{L_i}(P_{L_i,A}) =(d+1)/2$ with respect to the rational normal
curve $\nu _d(L_i)$ (\cite{lt}, Theorem 4.1 or \cite{bgi}, \S 3); we also have
$sr (P_{L_i,A}) = (d+1)/2$, by \cite{ls}, Proposition 3.1. The non-empty
open subset $\langle \nu _d(L_i)\rangle \setminus \Sigma _{(d-1)/2}(\nu _d(L_i))$ of
$\langle \nu _d(L_i)\rangle$ is the set of all $Q\in \langle \nu _d(L_i)\rangle$
whose symmetric rank with respect to $\nu _d(L_i)$ is exactly $sr _{L_i}(Q) = (d+1)/2$.
Since $h^1(\mathbb {P}^1,\mathcal {I}_E(d)) =0$ for
every set $E\subset \mathbb {P}^1$ such that $\sharp (E)\le d+1$, for every
$Q\in \langle \nu _d(L_i)\rangle \setminus \Sigma _{(d-1)/2}(\nu _d(L_i))$ there is a
unique $A_{i,Q} \subset L_i$ such that $\nu _d(A_{i,Q})$ computes $sr _{L_i}(Q)$.
Set $\mathcal {U}_i:= \left(\langle \nu _d(L_i)\rangle \setminus \Sigma _{(d-1)/2}(\nu _d(L_i))\right)
\cap D_i$. For each $Q_1\in \mathcal {U}_1$,
call $Q_2$ the only point
of $D_2\setminus \{\nu _d(O)\}$ such that $P_T\in \langle \{Q_1,Q_2\}\rangle$.
By moving $Q_1$ in $\mathcal {U}_1$, we find an integral one-dimensional
variety $\Delta := \{F\cup A_{1,Q_1}\cup A_{2,Q_2}\} \subseteq \mathcal {S}(P)$
with $A\in \Delta$. Hence $A$ is not an isolated point of
$\mathcal {S}(P)$.\qed
\medskip
The following example shows that the bound $sr (P) < 3d/2$
in the statement of Theorem \ref{i2} is sharp, for large $d$.
\begin{example}\label{i1}
Fix an even integer $d \ge 6$. Assume $r \ge 2$. Here we construct
$P\in \mathbb {P}^N$ such that $sr (P) =3d/2$ and its symmetric rank is computed
by exactly two subsets of $X_{r,d}$.
Fix a $2$-dimensional linear subspace $M\subseteq \mathbb {P}^r$
and a smooth plane cubic $C\subset M$. Since $h^1(M,\mathcal {I}_C(d)) =
h^1(M,\mathcal {O}_M(d-3)) =0$, we have $\deg (\nu _d(C)) =3d$, $\dim (\langle \nu _d(C)\rangle )
=3d-1$ and $\nu _d(C)$ is a linearly normal elliptic curve of $\langle \nu _d(C)\rangle$.
Since no non-degenerate curve is defective (\cite{a}, Remark 1.6), we have $\Sigma _{3d/2}(\nu _d(C))
=\langle \nu _d(C)\rangle$ and $\Sigma _{3d/2}(\nu _d(C))\setminus \Sigma _{(3d-2)/2}(\nu _d(C))$
is a non-empty open subset of the secant variety $\Sigma _{3d/2}(\nu _d(C))$.
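As a numerical check of this dimension count (our computation): for $d=6$ we have $\deg (\nu _6(C)) =18$, $\dim (\langle \nu _6(C)\rangle )=17$ and $\dim \Sigma _9(\nu _6(C)) = \min \{2\cdot 9-1, 17\} = 17$, so that indeed $\Sigma _{3d/2}(\nu _d(C)) =\langle \nu _d(C)\rangle$.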
Fix a general $P\in \Sigma _{3d/2}(\nu _d(C))$.
Since $\nu _d(C)$ is not a rational normal curve, by \cite{cc1}, Theorem 3.1 and \cite{cc1}, Proposition 5.2, there are exactly $2$ (reduced) subsets of $\nu_d(C)$,
of cardinality $3d/2$, which compute the symmetric
rank of $P$. Thus, to settle the example, it is sufficient to prove that
any $B\subset \mathbb {P}^r$ such that $\nu _d(B)$ computes $sr (P)$, is
a subset of $C$. Obviously $\sharp (B) \le 3d/2$.
Assume $B\nsubseteq C$, and let $A\subset C$ be one of the two sets computing $sr (P)$.
Let $H_3$ be a general cubic hypersurface containing
$C$ (hence $H_3 = C$ if $r=2$). Set $B':= B\setminus B\cap C$. Since $B$ is finite and $H_3$ is general,
we have $B\cap H_3 = B\cap C$. Since $A\subset C$, we have $B' = (A\cup B)\setminus (A\cup B)\cap C$.
Lemma \ref{v1} gives $h^1(\mathcal {I}_{A\cup B}(d)) >0$. Hence $h^1(M,\mathcal {I}_{A\cup B}(d)) > 0$.
Remark \ref{a00} gives that either $h^1(C,\mathcal {I}_{(A\cup B)\cap C}(d)) > 0$
or $h^1(\mathcal {I}_{B'}(d-3)) > 0$.
\quad (a) First assume $h^1(\mathcal {I}_{B'}(d-3)) > 0$. Since $d\ge 3$ and $\sharp (B') \le 2d-1$,
there is a line $D\subset M$ such that $\sharp (D\cap B') \ge d-1$
(see \cite{bgi}, Lemma 34, or \cite{c2}, Th. 3.8). Since $\nu _d(B)$
is linearly independent, we have $\sharp (D\cap B)\le d+1$.
Assume $\sharp (D\cap (A\cup B)) \le d+1$. Hence $h^1(D,\mathcal {I}_{(A\cup B)\cap D}(d))=0$.
Remark \ref{a00} gives $h^1(M,\mathcal {I}_{(A\cup B)\setminus (A\cup B)\cap D}(d-1)) >0$.
Set $F:= (A\cup B)\setminus ((A\cup B)\cap D)$. We easily compute $\sharp (F) < 3(d-1)$.
By \cite{c2}, Theorem 3.8, we get that either there is a line $D_1$ such that $\sharp (F\cap D_1)\ge d+1$
or there is a conic $D_2 $ such that $\sharp (D_2\cap F) \ge 2d$.
As $P\in \Sigma _{3d/2}(\nu _d(C))$ is general, then also $A$ is general in $C$ (hence reduced).
Thus, no $3$ of its points are collinear and no $6$ of its points are contained in a conic.
Hence if $D_1$ exists, we get $\sharp (B) \ge 2d-2$, while if $D_2$ exists, we get $\sharp (B)
\ge d-1 +(2d-5) =3d-6$; both lead to a contradiction, because $d\ge 6$ and $\sharp (B) =3d/2$.
Now assume $\sharp (D\cap (A\cup B)) \ge d+2$. Let $H\subset \mathbb {P}^r$ be a general
hyperplane containing $D$. Since $A\cup B$ is finite and $H$ is general, we have
$H\cap (A\cup B) = D\cap (A\cup B)$. If $h^1(\mathcal {I}_{(A\cup B)\setminus (A\cup B)\cap H}(d-1)) =0$,
then Lemma \ref{v2} gives $B\setminus B\cap D = A\setminus A\cap D$. Hence $\sharp (A\cap D) =
\sharp (B\cap D)$. Since $\sharp (A\cap D) \le 2$, we get $d\le 2$, a contradiction.
Now assume $h^1(\mathcal {I}_{(A\cup B)\setminus (A\cup B)\cap H}(d-1))>0$. Since
$\sharp ((A\cup B)\setminus (A\cup B)\cap H) \le 2d-2$, there is a line $L\subset \mathbb {P}^r$
such that $\sharp (L\cap (A\cup B)\setminus ((A\cup B)\cap D)) \ge d+1$. Let $H_2\subset \mathbb {P}^r$
be a general quadric hypersurface containing $L\cup D$. As usual, since $A\cup B$ is finite,
$L\cup D$ is the base locus of the linear system $\vert \mathcal {I}_{L\cup D}(2)\vert$
and $H_2$ is general in $\vert \mathcal {I}_{L\cup D}(2)\vert$, we have
$H_2\cap (A\cup B) = (L\cup D)\cap (A\cup B)$.
Since $\sharp ((A\cup B)\setminus (A\cup B)\cap H_2) \le d-3$,
we have $h^1(\mathcal {I}_{(A\cup B)\setminus (A\cup B)\cap H_2}(d-2)) =0$. Hence
Lemma \ref{v2} gives $A\setminus A\cap H = B\setminus B\cap H$. Hence $\sharp (A\cap (L\cup D)) =
\sharp (B\cap (L\cup D))$. This is absurd, because $d\ge 4$ while, by generality,
no $6$ points of $A$ are on a conic.
\quad (b) Assume $h^1(C,\mathcal {I}_{(A\cup B)\cap C}(d)) > 0$ and $h^1(\mathcal {I}_{B'}(d-3))=0$.
Since $C$ is a smooth elliptic curve and $\deg (\mathcal {O}_C(d)) =3d$,
either $\deg ((A\cup B)\cap C) \ge 3d+1$ or $\deg ((A\cup B)\cap C) = 3d$ and
$\mathcal {O}_C((A\cup B)\cap C) \cong \mathcal {O}_C(d)$. Hence $\sharp (B\cap C) \ge (3d-1)/2$.
Therefore $\sharp (B') \le 2$. Taking $D:= C$ in Lemma \ref{v2} we get $B'=\emptyset$,
because $A\subset C$.
\end{example}
\vspace{0.3cm}
Next, we prove Theorem \ref{i3}, a more precise description of the positive dimensional
components of $\mathcal{S}(P)$, when $sr(P)<3d/2$.
\qquad {\emph {Proof of Theorem \ref{i3}.}} Fix $A\in \mathcal {S}(P)$
and assume the existence of $B\in \mathcal {S}(P)$ such that $B\ne A$.
At the beginning of the proof of Theorem \ref{i2}
we showed that either:
\begin{itemize}
\item[(i)] there is a line $D\subset \mathbb {P}^r$ such that $\sharp (D\cap (A\cup B))\ge d+2$;
\item[(ii)] there is a conic $T\subset \mathbb {P}^r$ such that $\sharp (T\cap (A\cup B))\ge 2d+2$.
\end{itemize}
\quad (i) Here we assume the existence of a line $D\subset \mathbb {P}^r$ such that
$\sharp ((A\cup B)\cap D) \ge d+2$. We proved in step (a) of the proof of Theorem \ref{i2}
that $\sharp (A\cap D) = \sharp (B\cap D)$. Hence $\sharp (A\cap D) \ge \lceil (d+2)/2\rceil$.
Set $F:= A\setminus A\cap D$. Since $P\in \langle \nu _d(A)\rangle$ and
$P\notin \langle \nu _d(A')\rangle$ for any $A'\subsetneq A$, the set $\langle
\nu _d(A\cap D)\rangle \cap \langle \{P\}\cup \nu _d(F)\rangle$ is a single point. Let $P_D$ denote
this point. Lemma \ref{w1} and the symmetric case of \cite{bl}, Corollary 2.2, give that
$\mathcal {S}(P_D)$ is infinite and each element of it is contained in $D$.
Thus, to prove that we are in case (a) of the statement, it is sufficient to prove that
$E\cup F\in \mathcal {S}(P)$ for any $E\in \mathcal {S}(P_D)$.
This assertion is just Claim 1 of the proof of Theorem \ref{i2}.
\quad (ii) Now assume the non-existence of a line $D$ as above. Then, there is a (reduced) conic
$T\subset \mathbb {P}^r$ such that $\sharp (T\cap (A\cup B)) \ge 2d+2$ and
$A\setminus A\cap T = B\setminus B\cap T$.
Hence $\sharp (A\cap T) = \sharp (B\cap T) \ge d+1$. We consider separately the cases in which
$T$ is smooth or $T$ is singular.
\qquad (ii.1) Assume $T$ is smooth. Set $F:= A\setminus A\cap T$. As in step (i),
we see that $\langle \nu _d(A\cap T)\rangle \cap \langle \{P\}\cup \nu _d(F)\rangle$ is a single point,
$P_T$. Moreover, we see that $\sharp (A\cap T) = sr (P_T)$ and $\mathcal {S}(P_T)$ is infinite,
since $\{F\cup E\}_{E\in \mathcal {S}(P_T)}\subseteq \mathcal {S}(P)$. To conclude that we are in case (b),
we need to prove that every element of $\mathcal {S}(P)$ is of the form $F\cup E$,
$E\in \mathcal {S}(P_T)$. Fix any $B\in \mathcal {S}(P)$ such that $B \ne A$. Since $\sharp (A\cup B)<3d$
and $h^1(\mathcal {I}_{A\cup B}(d)) >0$, either there is a line $D_1$ such that
$\sharp ((A\cup B)\cap D_1) \ge d+2$, or there is a reduced conic $T_2$ such that
$\sharp ((A\cup B)\cap T_2) \ge 2d+2$ (\cite{c2}, Theorem 3.8).
Assume the existence of the line $D_1$. If $h^1(\mathcal {I}_{(A\cup B)\setminus
((A\cup B)\cap D_1)}(d-1)) =0$, then Lemma \ref{v2} gives $A\setminus
A\cap D_1 = B\setminus B\cap D_1$. Since
$\sharp (A) = sr (P) = \sharp (B)$, we get $\sharp (A\cap D_1)=\sharp (B\cap D_1)
\geq (d+2)/2$, which contradicts the fact that we are not in case (i).
Therefore $h^1(\mathcal {I}_{(A\cup B)\setminus
(A\cup B)\cap D_1}(d-1))>0$. Hence there is a line $D_2$ such that $\sharp (D_2\cap ((A\cup B)\setminus
(A\cup B)\cap D_1))\ge d+1$. Let $H_2$ be a general quadric hypersurface containing $D_1\cup D_2$
(it exists, because if $D_1\cap D_2 =\emptyset$, then $r\ge 3$).
Since $\sharp ((A\cup B)\setminus (A\cup B)\cap H_2)\le (3d-1)-2d-3
\le d-1$, we have $h^1(\mathcal {I}_{(A\cup B)\setminus (A\cup B)\cap H_2}(d-2))=0$.
Hence Lemma \ref{v2} implies $A\setminus A\cap H_2 = B\setminus B\cap H_2$. Since
$H_2$ is a general quadric hypersurface containing $D_1\cup D_2$, we have
$A\cap H_2 = A\cap (D_1\cup D_2)$ and $B\cap H_2 = B\cap (D_1\cup D_2)$. Since
Since $\sharp (T\cap (D_1\cup D_2)) \le 4$, we get $2d+3 \le \sharp ((A\cup B)\cap (D_1\cup D_2)) \le 8$,
contradicting the assumption $d \ge 3$.
Assume the existence of the conic $T_2$ and assume $T\neq T_2$. In step (ii) of the proof of Theorem \ref{i2},
we proved that $A\setminus T_2\cap A = B\setminus T_2\cap B$. Since $\sharp (A) = sr (P) =\sharp (B)$,
we get $\sharp (A\cap T_2) = \sharp (B\cap T_2)$. Since $\sharp (T\cap
T_2) \le 4$ and $\sharp (A\setminus A\cap T) \le (3d-1)/2-d-1$, we have
$\sharp (A\cap T_2) \le (3d-1)/2 -d+3 = (d+5)/2$. Hence $\sharp (A\cap T_2) =
\sharp (B\cap T_2) \ge 2d+2 -(d+5)/2 = (3d-1)/2$. Since $\sharp (A\cap T_2) +\sharp (B\cap T_2) \ge
\sharp ((A\cup B)\cap T_2) \ge 2d+2$ we get $d=3$ and $A\subset T$. Hence $\sharp(B\cap T_2)\geq 4$
so that $B\subset T_2$. Thus $A\subset T $ and $B\subset T_2$ and moreover
$A\setminus A\cap T_2 = B\setminus B\cap T_2=\emptyset$. It follows that $A = T\cap T_2$.
Since $A\subset T$ and $T$ is a smooth conic, we have $P\in \langle \nu _3(T)\rangle$
and the symmetric rank of $P$, with respect to the rational normal curve $\nu _3(T)\subset\mathbb {P}^6$,
is $4$. It follows that $\mathcal {S}(P)$ is infinite. By the symmetric case
of \cite{bl}, Corollary 2.2, we have $B\subset \nu _3(T)$ for all $B\in \mathcal {S}(P)$.
Hence (b) holds, in this case.
Finally, assume that $T_2$ exists and $T=T_2$. I.e. assume $\sharp (T\cap (A\cup B)) \ge 2d+2$.
In step (ii) of the proof of Theorem \ref{i2}, we proved that
$A\setminus T\cap A = B\setminus T\cap B$ and that $B\cap T$ computes $sr (P_T)$.
Hence $B\in \{F\cup E\}_{E\in \mathcal {S}(P_T)}$.
\qquad (ii.2) Here we assume the existence of a {\it reducible} conic $T$ such that
$\sharp (A\cap T)\ge d+1$. Write $T = L_1\cup L_2$ with $\sharp (A\cap L_1) \ge \sharp (A\cap L_2)$.
If $\sharp (A\cap L_1)\ge (d+2)/2$, then, by step (i), we are in case (a).
If $\sharp (A\cap L_1)< (d+2)/2$, then we get $\sharp (A\cap L_1) = \sharp (A\cap
L_2)=(d+1)/2$ and $L_1\cap L_2\notin A$. We also get that $d$ is odd.
It only remains to prove that $\mathcal {S}(P) \ne \{A\}$. Indeed, we proved
that $\mathcal {S}(P)$ is infinite in the second part of step (ii) of the proof of Theorem \ref{i2}.
The proof of the statement is completed.\qed
% End of arXiv:1202.3066, ``Sets computing the symmetric tensor rank'' (math.AG), 2012-02-15.
https://arxiv.org/abs/2209.10309 | On the Existential Fragments of Local First-Order Logics with Data | We study first-order logic over unordered structures whose elements carry a finite number of data values from an infinite domain which can be compared with respect to equality. As the satisfiability problem for this logic is undecidable in general, in a previous work we introduced a family of local fragments that restrict quantification to neighbourhoods of a given reference point. We provide here the precise complexity characterisation of the satisfiability problem for the existential fragments of this local logic, depending on the number of data values carried by each element and the radius of the considered neighbourhoods. |
\subsection{Preliminary results: 0 and 1 data values}
We introduce two preliminary results that we shall use in this
section to obtain new decidability results. First, note that
formulas in $\ndFO{0}{\Sigma}$ (i.e.\ where no data values are considered)
correspond to first-order logic formulas with a set of predicates and
equality as the only relation. As mentioned in Chapter 6.2.1 of
\cite{borger-classical-springer97}, these formulas belong to the
\emph{L\"owenheim class with equality}, also called relational
monadic formulas, and their satisfiability problem is in
\textsc{NEXP}. Furthermore, thanks to Theorem 11 of \cite{etessami-first-ic02}, we know that this latter problem is \textsc{NEXP}-hard even
if one considers formulas which use only two variables.
\begin{theorem}\label{thm:0fo}
$\nDataSat{\textup{dFO}}{0}$ is \textsc{NEXP}-complete.
\end{theorem}
In \cite{Mundhenk09}, the authors study the satisfiability problem for
Hybrid logic over Kripke structures where the transition relation is
an equivalence relation, and they show that it is
\textsc{N2EXP}-complete. Furthermore in \cite{Fitting12}, it is shown
that Hybrid logic can be translated to first-order logic in
polynomial time and this holds as well for the converse
translation. Since $1$-data structures can be interpreted as Kripke
structures with one equivalence relation, altogether this allows us to
obtain the following preliminary result about the satisfiability
problem of $\ndFO{1}{\Sigma}$.
\begin{theorem}\label{thm:1fo}
$\nDataSat{\textup{dFO}}{1}$ is \textsc{N2EXP}-complete.
\end{theorem}
\subsection{Two data values and balls of radius 2}
In this section, we prove that the satisfiability problem for the
existential fragment of local first-order logic with two data values and balls of radius two is decidable.
To obtain this result we provide a reduction to the satisfiability
problem for first-order logic over $1$-data structures. Our reduction is based on the following intuition. Consider a
$2$-data structure $\mathfrak{A}=(A,(P_{\sigma}),\f{1},\f{2}) \in
\nData{2}{\Sigma}$ and an element $a \in A$. If we take an
element $b$ in $\Ball{2}{a}{\mathfrak{A}}$, the radius-$2$-ball around $a$, we
know that either $\f{1}(b)$ or $\f{2}(b)$ is a value shared with
$a$. Indeed, if $b$ is at distance $1$ from $a$, this holds by definition, and
if $b$ is at distance $2$, then $b$ shares a value with some element $c$ at distance $1$ from
$a$; this value has to be shared with $a$ as well, so $b$ ends up being at distance $1$ from $a$. The
trick then consists in using extra labels for the elements sharing a value with
$a$, so that these shared values can be forgotten, and in keeping only the value of $b$ not
present in $a$; this construction leads to a $1$-data structure. It
remains to show that we can ensure, with a formula of $\ndFO{1}{\Sigma'}$ (where
$\Sigma'$ is obtained from $\Sigma$ by adding extra predicates), that a $1$-data structure
arises from this construction.\\
The first step of our reduction consists in providing a
characterisation of the elements located in the radius-$1$-ball and the radius-$2$-ball around
another element.
\begin{lemma}\label{lem:shape-balls}
Let $\mathfrak{A}=(A,(P_{\sigma}),\f{1},\f{2}) \in
\nData{2}{\Sigma}$ and $a,b\in A$ and $j \in \{1,2\}$. We have:
\begin{enumerate}
\item $(b,j)\in \Ball{1}{a}{\mathfrak{A}}$ iff there is $i\in \{1,2\}$ such that $\relsaa{i}{j}{\mathfrak{A}}{a}{b}$.
\item $(b,j)\in \Ball{2}{a}{\mathfrak{A}}$ iff there exist $i,k\in \{1,2\}$ such that $\relsaa{i}{k}{\mathfrak{A}}{a}{b}$.
\end{enumerate}
\end{lemma}
\begin{proof} We show both statements:
\begin{enumerate}
\item If $(b,j)\in \Ball{1}{a}{\mathfrak{A}}$, then by definition we have either $b=a$, in which case $\relsaa{j}{j}{\mathfrak{A}}{a}{b}$ holds, or $b \neq a$ and necessarily there exists $i\in \{1,2\}$ such that $\relsaa{i}{j}{\mathfrak{A}}{a}{b}$. Conversely, if $\relsaa{i}{j}{\mathfrak{A}}{a}{b}$ for some $i\in\{1,2\}$, then $(b,j)$ is at distance at most $1$ from $(a,i)$, hence $(b,j)\in \Ball{1}{a}{\mathfrak{A}}$.
\item First, if there exist $i,k\in \{1,2\}$ such that
$\relsaa{i}{k}{\mathfrak{A}}{a}{b}$, then $(b,k)\in \Ball{1}{a}{\mathfrak{A}}$ and $(b,j)\in \Ball{2}{a}{\mathfrak{A}}$ by definition. Assume now that $(b,j)\in \Ball{2}{a}{\mathfrak{A}}$. Hence there exists $i\in \{1,2\}$ such that $\distaa{(a,i)}{(b,j)}{\mathfrak{A}}\leq 2$.
We perform a case analysis on the value of
$\distaa{(a,i)}{(b,j)}{\mathfrak{A}}$.
\begin{itemize}
\item \textbf{Case $\distaa{(a,i)}{(b,j)}{\mathfrak{A}}=0$}. In that case
$a=b$ and $i=j$ and we have $\relsaa{i}{i}{\mathfrak{A}}{a}{b}$.
\item \textbf{Case $\distaa{(a,i)}{(b,j)}{\mathfrak{A}}=1$}. In that case,
$((a,i),(b,j))$ is an edge in the data graph $\gaifmanish{\mathfrak{A}}$
of $\mathfrak{A}$ which means that $\relsaa{i}{j}{\mathfrak{A}}{a}{b}$ holds.
\item \textbf{Case $\distaa{(a,i)}{(b,j)}{\mathfrak{A}}=2$}. Note that
we have by definition $a \neq b$. Furthermore, in that case, there is
$(c,k)\in A\times\{1,2\}$ such that $((a,i),(c,k))$ and
$((c,k),(b,j))$ are edges in $\gaifmanish{\mathfrak{A}}$. If $c\neq a$ and
$c\neq b$, this implies that $\relsaa{i}{k}{\mathfrak{A}}{a}{c}$ and
$\relsaa{k}{j}{\mathfrak{A}}{c}{b}$, so $\relsaa{i}{j}{\mathfrak{A}}{a}{b}$ and
$\distaa{(a,i)}{(b,j)}{\mathfrak{A}}=1$ which is a contradiction.
If $c=a$ and $c\neq b$, this implies that $\relsaa{k}{j}{\mathfrak{A}}{a}{b}$.
If $c\neq a$ and $c = b$, this implies that $\relsaa{i}{k}{\mathfrak{A}}{a}{b}$.
\end{itemize}
\end{enumerate}
\end{proof}
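The ball characterisation above lends itself to a quick experimental check. The following sketch uses our own encoding (not taken from the paper): a $2$-data structure is given by two value maps, and the data graph has an edge between two positions when they lie on the same element or carry equal data values. Both items of the lemma are then verified by brute-force breadth-first search on a small example.

```python
from itertools import product

# Our encoding (an assumption, not the paper's notation): a 2-data structure
# over elements A is given by two value maps f[1], f[2] : A -> N.
A = ['a', 'b', 'c', 'd', 'e', 'f']
f = {1: {'a': 1, 'b': 1, 'c': 3, 'd': 5, 'e': 4, 'f': 2},
     2: {'a': 2, 'b': 3, 'c': 2, 'd': 6, 'e': 3, 'f': 7}}
POS = list(product(A, (1, 2)))          # positions (element, field)

def adjacent(u, v):
    # Edge of the data graph: same element, or equal data values.
    (a, i), (b, j) = u, v
    return a == b or f[i][a] == f[j][b]

def dist(u, v):
    # Breadth-first search over the (tiny) data graph.
    seen, frontier, d = {u}, {u}, 0
    while frontier:
        if v in frontier:
            return d
        frontier = {w for w in POS for x in frontier if adjacent(x, w)} - seen
        seen |= frontier
        d += 1
    return float('inf')

def ball(r, a):
    # Positions at distance <= r from some field of a.
    return {p for p in POS if any(dist((a, i), p) <= r for i in (1, 2))}

for a, (b, j) in product(A, POS):
    # Item 1: radius-1 membership iff some value of a equals f_j(b).
    assert ((b, j) in ball(1, a)) == any(f[i][a] == f[j][b] for i in (1, 2))
    # Item 2: radius-2 membership iff a and b share any value at all.
    assert ((b, j) in ball(2, a)) == any(f[i][a] == f[k][b]
                                         for i in (1, 2) for k in (1, 2))
print("Lemma (shape-balls) verified on this example")
```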
We consider a formula $\phi=\exists x_1\ldots\exists
x_n.\phi_{qf}(x_1,\ldots,x_n)$ of $\eFO{2}{\Sigma}{2}$ in prenex normal form, i.e., such that $\phi_{qf}(x_1,\ldots,x_n)\in\qfFO{2}{\Sigma}{2}$. We know that there is a structure $\mathfrak{A}=(A,(P_{\sigma})_{\sigma \in \Sigma},\linebreak[0]\f{1},\f{2})$ in $\nData{2}{\Sigma}$ such that $\mathfrak{A}\models\phi$ if and only if there are $a_1,\ldots,a_n \in A $ such that $\mathfrak{A}\models\phi_{qf}(a_1,\ldots,a_n)$.
Let $\mathfrak{A}=(A,(P_{\sigma})_{\sigma \in \Sigma},\f{1},\f{2})$ be a structure in $\nData{2}{\Sigma}$ and let $\tuple{a} = (a_1,\ldots,a_n) \in A^n$ be a tuple of elements. We shall present the construction of a $1$-data structure
$\widehat{(\mathfrak{A},\tuple{a})}$ in $\nData{1}{\Unary'}$ (with $\Sigma \subseteq \Unary'$) with the same set of nodes as $\mathfrak{A}$, but where each node carries a single data value. In order to retrieve the data relations that hold in $\mathfrak{A}$ while reasoning over $\widehat{(\mathfrak{A},\tuple{a})}$, we introduce extra-predicates in $\Unary'$ to establish whether a node shares a common value with one of the nodes among $a_1,\ldots,a_n$ in $\mathfrak{A}$.
\begin{figure*}[htbp]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{tikzpicture}[node distance=2cm]
\node [data, label=below left:$a$] (A) {1
\nodepart{two} 2 };
\node [data, above left of=A,xshift=-1em,label=below:$b$] (B) {1 \nodepart{second} 3};
\node [data, above right of=A,xshift=1em,label=below right:$c$] (C) {3 \nodepart{second} 2};
\node [dataredred, below left of=A,label=below:$d$] (D) {5 \nodepart{second} 6};
\node [dataredred, above right of=B,xshift=1em, label=below:$e$]
(E) {4 \nodepart{second} 3};
\node [data, below right of=A, label=below:$f$] (F) {2 \nodepart{second} 7};
\draw[line width=0.7pt,<->] (A.one north) .. controls +(0,.5) and
+(.5,0).. (B.one east);
\draw[line width=0.7pt,<->] (B.two east) .. controls +(2,-0.5) and
+(-2,.5).. (C.one west);
\draw[line width=0.7pt,<->] (E.two east) .. controls +(0,0) and
+(0,0.5).. (C.one north);
\draw[line width=0.7pt,<->] (E.south west) .. controls +(0,0) and
+(0.5,.2).. (B.two east);
\draw[line width=0.7pt,<->] (A.south east) .. controls +(1,-.5)
and +(0,0).. (C.south west);
\draw[line width=0.7pt,<->] (A.south) .. controls +(0,-0.5) and
+(0,0).. (F.one west);
\draw[line width=0.7pt,<->] (F.north) .. controls +(0,0) and
+(0,0).. (C.south);
\selfconnectionright{A};
\selfconnectionleft{B};
\selfconnectionright{C};
\selfconnectionleft{D};
\selfconnectionleft{E};
\selfconnectionright{F};
\end{tikzpicture}
\caption{A data structure $\mathfrak{A}$ and $\gaifmanish{\mathfrak{A}}$.}
\label{fig:abstract-a}
\end{subfigure}
\unskip\ \vrule\ \hspace{1em}
\begin{subfigure}[b]{0.45\textwidth}
\begin{tikzpicture}[node distance=2cm]
\node [dataone, label=below left:$a$] (A) {8};
\node [dataone, above left of=A,xshift=-1em,label=below:$b$] (B) {3};
\node [dataone, above right of=A,xshift=1em,label=below:$c$] (C) {3};
\node [dataone, below left of=A,label=below:$d$] (D) {9};
\node [dataone, above right of=B,xshift=1em, label=below:$e$]
(E) {10};
\node [dataone, below right of=A, label=below:$f$] (F) {7};
\node [right of=C,yshift=-3em] (G) {$\begin{array}{l}\uP{a[1,1]}=\{a,b\} \\
\uP{a[2,2]}=\{a,c\}\\
\uP{a[1,2]}=\emptyset \\
\uP{a[2,1]}=\{f\}\\\end{array}$};
\end{tikzpicture}
\caption{$\sem{\mathfrak{A}}_{(a)}$.}
\label{fig:abstract-b}
\end{subfigure}
\caption{A $2$-data structure $\mathfrak{A}$ with its data graph, and the associated $1$-data structure.
\label{fig:abstract}}
\end{figure*}
We now explain formally how we build $\widehat{(\mathfrak{A},\tuple{a})}$. Let $\Udeci{n}=\{\udd{p}{i}{j}\mid p\in\{1,\ldots,n\}, i,j\in\{1,2\}\}$ be a set of new unary predicates and $\Unary' = \Sigma \cup \Udeci{n}$.
For every element $b\in A$, the predicates in $\Udeci{n}$ are used to keep track of the relation between the data values of $b$ and the one of $a_1,\ldots,a_n$ in $\mathfrak{A}$.
Formally, we define $\uP{\udd{p}{i}{j}}=\{b\in A\mid \mathfrak{A}\models \rels{i}{j}{a_p}{b}\}$.
We now define a data function $f:A\to \N$.
We recall for this matter that $\Valuessub{\mathfrak{A}}{\tuple{a}} = \{f_1(a_1),f_2(a_1),\ldots,f_1(a_n),f_2(a_n)\}$ and let $f_{\textup{new}}:A\to\N\setminus \Values{\mathfrak{A}}$ be an injection. For every $b \in A$, we set:
\[
f(b) = \begin{cases}
f_2(b) \text{ if } f_1(b)\in \Valuessub{\mathfrak{A}}{\tuple{a}} \text{ and } f_2(b)\notin \Valuessub{\mathfrak{A}}{\tuple{a}}\\
f_1(b) \text{ if } f_1(b)\notin \Valuessub{\mathfrak{A}}{\tuple{a}} \text{ and } f_2(b)\in \Valuessub{\mathfrak{A}}{\tuple{a}}\\
f_{\textup{new}}(b) \text{ otherwise}
\end{cases}
\]
Hence, depending on whether $f_1(b)$ or $f_2(b)$ belongs to $\Valuessub{\mathfrak{A}}{\tuple{a}}$, this definition splits the elements of $\mathfrak{A}$ into four categories.
If $f_1(b)$ and $f_2(b)$ are in $\Valuessub{\mathfrak{A}}{\tuple{a}}$, the predicates in $\Udeci{n}$ allow us to retrieve all the data values of $b$.
Given $j\in\{1,2\}$, if $f_j(b)$ is in $\Valuessub{\mathfrak{A}}{\tuple{a}}$ but $f_{3-j}(b)$ is not, the new predicates will give us the $j$-th data value of $b$ and we have to keep track of the $(3-j)$-th one, so we save it in $f(b)$.
Lastly, if neither $f_1(b)$ nor $f_2(b)$ is in $\Valuessub{\mathfrak{A}}{\tuple{a}}$, we will never be able to see the data values of $b$ in $\phi_{qf}$ (thanks to Lemma \ref{lem:shape-balls}), so they do not matter to us. Finally, we have $\widehat{(\mathfrak{A},\tuple{a})} = (A, (\uP{\sigma})_{\sigma\in\Unary'}, f) $. Figure \ref{fig:abstract-b} provides an example of this construction for the data structure depicted on Figure \ref{fig:abstract-a} and $\tuple{a}=(a)$.
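The construction of $\widehat{(\mathfrak{A},\tuple{a})}$ can be sketched as follows. The encoding is ours (the names `hat`, `D`, `tup` and `g` are not from the paper); the value maps reproduce the structure of Figure \ref{fig:abstract-a} with $\tuple{a}=(a)$.

```python
from itertools import count, product

# Our encoding of the hat construction (names are ours, an assumption).
A = ['a', 'b', 'c', 'd', 'e', 'f']
f = {1: {'a': 1, 'b': 1, 'c': 3, 'd': 5, 'e': 4, 'f': 2},
     2: {'a': 2, 'b': 3, 'c': 2, 'd': 6, 'e': 3, 'f': 7}}
tup = ['a']                                # the tuple (a_1, ..., a_n)

def hat(A, f, tup):
    n = len(tup)
    values = {f[i][ap] for ap in tup for i in (1, 2)}   # Values of the tuple
    # New unary predicates D[p, i, j] = { b | f_i(a_p) = f_j(b) }.
    D = {(p, i, j): {b for b in A if f[i][tup[p]] == f[j][b]}
         for p, i, j in product(range(n), (1, 2), (1, 2))}
    # Injective supply of fresh values, outside every value used in A.
    fresh = count(1 + max(v for m in f.values() for v in m.values()))
    g = {}
    for b in A:
        in1, in2 = f[1][b] in values, f[2][b] in values
        if in1 and not in2:
            g[b] = f[2][b]      # keep the value the predicates cannot see
        elif in2 and not in1:
            g[b] = f[1][b]
        else:
            g[b] = next(fresh)  # both or neither visible: fresh value
    return D, g

D, g = hat(A, f, tup)
print(g)
```

On this input the computed predicates and single data value agree with Figure \ref{fig:abstract-b}: $b$ and $c$ both keep the value $3$, $f$ keeps $7$, while $a$, $d$ and $e$ receive fresh values.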
The next lemma formalizes the connection existing between $\mathfrak{A}$ and
$\widehat{(\mathfrak{A},\tuple{a})}$ with $\tuple{a} = (a_1,\ldots,a_n)$.
\begin{lemma}\label{lem:r2dv2-semantique}
Let $b,c\in A$ and $j,k\in\{1,2\}$ and $p\in\{1,\ldots,n\}$. The following statements then hold.
\begin{enumerate}
\item If $(b,j)\in\Ball{1}{a_p}{\mathfrak{A}}$ and $(c,k)\in\Ball{1}{a_p}{\mathfrak{A}}$ then $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$ iff there is $i\in\{1,2\}$ s.t. $b \in \uP{\udd{p}{i}{j}}$ and $c \in \uP{\udd{p}{i}{k}}$.
\item If $(b,j)\in\Ball{2}{a_p}{\mathfrak{A}}\setminus\Ball{1}{a_p}{\mathfrak{A}}$ and $(c,k)\in\Ball{1}{a_p}{\mathfrak{A}}$ then $\vprojr{\mathfrak{A}}{a_p}{2}\nvDash\rels{j}{k}{b}{c}$
\item If $(b,j),(c,k) \in\Ball{2}{a_p}{\mathfrak{A}}\setminus\Ball{1}{a_p}{\mathfrak{A}}$ then $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$ iff either $\relsaa{1}{1}{\widehat{(\mathfrak{A},\tuple{a})}}{b}{c}$ or there exist $p' \in \{1,\ldots,n\}$ and $\ell \in \{1,2\}$ such that $b \in \uP{\udd{p'}{\ell}{j}}$ and $c \in \uP{\udd{p'}{\ell}{k}}$.
\item If $(b,j)\notin\Ball{2}{a_p}{\mathfrak{A}}$ and $(c,k)\in\Ball{2}{a_p}{\mathfrak{A}}$ then $\vprojr{\mathfrak{A}}{a_p}{2}\nvDash\rels{j}{k}{b}{c}$
\item If $(b,j)\notin\Ball{2}{a_p}{\mathfrak{A}}$ and $(c,k)\notin\Ball{2}{a_p}{\mathfrak{A}}$ then $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$ iff $b=c$ and $j=k$.
\end{enumerate}
\end{lemma}
\begin{proof}
We suppose that $\vprojr{\mathfrak{A}}{a_p}{2} = (A,(\uP{\sigma})_\sigma,f^p_1,f^p_2)$.
\begin{enumerate}
\item Assume that $(b,j)\in\Ball{1}{a_p}{\mathfrak{A}}$ and $(c,k)\in\Ball{1}{a_p}{\mathfrak{A}}$.
It implies that $f^p_j(b)=f_j(b)$ and $f^p_k(c)=f_k(c)$.
Then assume that $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$.
As $(b,j)\in\Ball{1}{a_p}{\mathfrak{A}}$, thanks to Lemma \ref{lem:shape-balls}.1 it means that there is a $i\in\{1,2\}$ such that $\relsaa{i}{j}{\mathfrak{A}}{a_p}{b}$.
So we have $f_k(c)=f^p_k(c)=f^p_j(b)=f_j(b)=f_i(a_p)$, that is $\relsaa{i}{k}{\mathfrak{A}}{a_p}{c}$. Hence by definition, $b \in \uP{\udd{p}{i}{j}}$ and $c \in \uP{\udd{p}{i}{k}}$.
Conversely, let $i\in\{1,2\}$ such that $b \in \uP{\udd{p}{i}{j}}$ and $c \in \uP{\udd{p}{i}{k}}$. This means that $\relsaa{i}{j}{\mathfrak{A}}{a_p}{b}$ and $\relsaa{i}{k}{\mathfrak{A}}{a_p}{c}$.
So $f^p_j(b)=f_j(b)=f_i(a_p)=f_k(c)=f^p_k(c)$, that is $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$.
\item Assume that $(b,j)\in\Ball{2}{a_p}{\mathfrak{A}}\setminus\Ball{1}{a_p}{\mathfrak{A}}$ and $(c,k)\in\Ball{1}{a_p}{\mathfrak{A}}$.
It implies that $f^p_j(b)=f_j(b)$ and $f^p_k(c)=f_k(c)$.
Thanks to Lemma \ref{lem:shape-balls}.1, $(c,k)\in\Ball{1}{a_p}{\mathfrak{A}}$ implies that $f_k(c)\in\{f_1(a_p),f_2(a_p)\}$ and $(b,j)\notin\Ball{1}{a_p}{\mathfrak{A}}$ implies that $f_j(b)\notin\{f_1(a_p),f_2(a_p)\}$.
So $\vprojr{\mathfrak{A}}{a_p}{2}\not \models\rels{j}{k}{b}{c}$.
\item Assume that $(b,j), (c,k) \in\Ball{2}{a_p}{\mathfrak{A}}\setminus\Ball{1}{a_p}{\mathfrak{A}}$. As previously, we have that $f_j(b)\notin\{f_1(a_p),f_2(a_p)\}$ and $f_k(c)\notin\{f_1(a_p),f_2(a_p)\}$, and thanks to Lemma \ref{lem:shape-balls}.2, we have $f_{3-j}(b) \in \{f_1(a_p),f_2(a_p)\}$ and $f_{3-k}(c) \in \{f_1(a_p),f_2(a_p)\}$. There are then two cases:
\begin{itemize}
\item Suppose that there does not exist $p' \in \{1,\ldots,n\}$ such that $f_{j}(b) \in \{f_1(a_{p'}),f_2(a_{p'})\}$. This allows us to deduce that $f^p_j(b)=f_j(b)=f(b)$ and $f^p_k(c)=f_k(c)$. If $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$, then necessarily there does not exist $p' \in \{1,\ldots,n\}$ such that $f_{k}(c) \in \{f_1(a_{p'}),f_2(a_{p'})\}$, so we have $f^p_k(c)=f_k(c)=f(c)$ and $f(b)=f(c)$, and consequently $\relsaa{1}{1}{\widehat{(\mathfrak{A},\tuple{a})}}{b}{c}$. Conversely, assume that $\relsaa{1}{1}{\widehat{(\mathfrak{A},\tuple{a})}}{b}{c}$; this means that $f(b)=f(c)$, and either $b=c$ and $k=j$, or $b \neq c$ and, by injectivity of $f$, we have $f_j(b)=f(b)=f(c)=f_k(c)$. This allows us to deduce that $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$.
\item Otherwise, there exist $p' \in \{1,\ldots,n\}$ and $\ell \in \{1,2\}$ such that $f_{j}(b) = f_\ell(a_{p'})$. Then we have $b \in \uP{\udd{p'}{\ell}{j}}$. Consequently, we have $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$ iff $c \in \uP{\udd{p'}{\ell}{k}}$.
\end{itemize}
\item We prove cases 4 and 5 at the same time.
Assume that $(b,j)\notin\Ball{2}{a_p}{\mathfrak{A}}$.
It means that in order to have $f^p_j(b)=f^p_k(c)$, we must have $(b,j)=(c,k)$.
So if $(c,k)\in\Ball{2}{a_p}{\mathfrak{A}}$, we cannot have $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$, which proves case 4.
And if $(c,k)\notin\Ball{2}{a_p}{\mathfrak{A}}$, we have that $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$ iff $b=c$ and $j=k$.
\end{enumerate}
\end{proof}
We shall now see how we translate the formula
$\phi_{qf}(x_1,\ldots,x_n)$ into a formula
$\phit{\phi_{qf}}(x_1,\ldots,x_n)$ in $\ndFO{1}{\Unary'}$ such that $\mathfrak{A}$ satisfies $\phi_{qf}(a_1,\ldots,a_n)$ if, and only if, $\widehat{(\mathfrak{A},\tuple{a})}$ satisfies $\phit{\phi_{qf}}(a_1,\ldots,a_n)$. Thanks to the previous lemma we know that if $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$ then $(b,j)$ and $(c,k)$ must belong to the same set among $\Ball{1}{a_p}{\mathfrak{A}}$, $\Ball{2}{a_p}{\mathfrak{A}}\setminus\Ball{1}{a_p}{\mathfrak{A}}$ and $\comp{\Ball{2}{a_p}{\mathfrak{A}}}$ and we can test in $\widehat{(\mathfrak{A},\tuple{a})}$ whether $(b,j)$ is a member of $\Ball{1}{a_p}{\mathfrak{A}}$ or $\Ball{2}{a_p}{\mathfrak{A}}$.
Indeed, thanks to Lemmas \ref{lem:shape-balls}.1 and \ref{lem:shape-balls}.2, we have $(b,j) \in \Ball{1}{a_p}{\mathfrak{A}}$ iff $b\in\bigcup_{i=1,2}\uP{\udd{p}{i}{j}}$ and $(b,j) \in \Ball{2}{a_p}{\mathfrak{A}}$ iff $b\in\bigcup_{i=1,2}^{j'=1,2} \uP{\udd{p}{i}{j'}}$. This reasoning leads to the following formulas in $\ndFO{1}{\Unary'}$ with $p \in \{1,\ldots,n\}$ and $j \in \{1,2\}$:
\begin{itemize}
\item $\phiBun{j}(y) := \udd{p}{1}{j}(y) \ou \udd{p}{2}{j}(y)$ to test if the $j$-th field of an element belongs to $\Ball{1}{a_p}{\mathfrak{A}}$
\item $\phi_{\Ball{2}{a_p}{}}^{}(y) := \phiBun{1}(y) \ou \phiBun{2}(y)$ to test if a field of an element belongs to $\Ball{2}{a_p}{\mathfrak{A}}$
\item $\phiBdsu{j}(y) := \phi_{\Ball{2}{a_p}{}}^{}(y) \et \neg\phiBun{j}(y)$ to test that the $j$-th field of an element belongs to $\Ball{2}{a_p}{\mathfrak{A}}\setminus\Ball{1}{a_p}{\mathfrak{A}}$
\end{itemize}
We shall now present how we use these formulas to translate atomic formulas of the form $\rels{j}{k}{y}{z}$ under some $\locformr{-}{x_p}{2}$. For this matter, we rely on the three following formulas of $\ndFO{1}{\Unary'}$:
\begin{itemize}
\item The first formula asks for $(y,j)$ and $(z,k)$ to be in $\Ball{1}{a_p}{\mathfrak{A}}$ (where we abuse notation, using variables for the elements they represent) and for these two data values to coincide with one data value of $a_p$; it corresponds to Lemma \ref{lem:r2dv2-semantique}.1:
$$
\phi_{j,k,a_p}^{r=1}(y,z) := \phiBun{j}(y) \et \phiBun{k}(z) \et \Ou_{i=1,2}\big(\udd{p}{i}{j}(y)\et\udd{p}{i}{k}(z)\big)
$$
\item The second formula asks for $(y,j)$ and $(z,k)$ to be in $\Ball{2}{a_p}{\mathfrak{A}}\setminus\Ball{1}{a_p}{\mathfrak{A}}$ and checks whether the data values of $y$ and $z$ in $\widehat{(\mathfrak{A},\tuple{a})}$ are equal or whether there exist $p'$ and $\ell$ such that $y$ satisfies $\udd{p'}{\ell}{j}$ and $z$ satisfies $\udd{p'}{\ell}{k}$; it corresponds to Lemma \ref{lem:r2dv2-semantique}.3:
$$
\phi_{j,k,a_p}^{r=2}(y,z) := \phiBdsu{j}(y) \et \phiBdsu{k}(z) \et \big (y\sim z \ou\big(\Ou^n_{p'=1}\Ou^2_ {\ell=1}\udd{p'}{\ell}{j}(y)\et\udd{p'}{\ell}{k}(z)\big)\big)
$$
\item The third formula asks for $(y,j)$ and $(z,k)$ not to belong to $\Ball{2}{a_p}{\mathfrak{A}}$ and for $y=z$; it corresponds to Lemma \ref{lem:r2dv2-semantique}.5:
$$
\phi_{j,k,a_p}^{r>2}(y,z) := \begin{cases}
\neg \phi_{\Ball{2}{a_p}{}}^{}(y) \et \neg\phi_{\Ball{2}{a_p}{}}^{}(z) \et y=z &\text{ if } j=k \\
\bot &\text{ otherwise}
\end{cases}
$$
\end{itemize}
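As a sanity check of the three formulas above, one can evaluate them semantically over $\widehat{(\mathfrak{A},\tuple{a})}$ and compare with value equality in the radius-$2$ projection. The sketch below does this exhaustively on the running example, under our own encoding: the projection is modelled by the fact that positions outside the ball carry pairwise distinct fresh values, so that equality involving them holds only between identical positions.

```python
from itertools import count, product

# Our encoding (an assumption): the running example and its hat structure.
A = ['a', 'b', 'c', 'd', 'e', 'f']
f = {1: {'a': 1, 'b': 1, 'c': 3, 'd': 5, 'e': 4, 'f': 2},
     2: {'a': 2, 'b': 3, 'c': 2, 'd': 6, 'e': 3, 'f': 7}}
tup = ['a']
n = len(tup)
values = {f[i][ap] for ap in tup for i in (1, 2)}
D = {(p, i, j): {b for b in A if f[i][tup[p]] == f[j][b]}
     for p, i, j in product(range(n), (1, 2), (1, 2))}
fresh = count(8)                     # fresh values outside every value of A
g = {b: (f[2][b] if f[1][b] in values and f[2][b] not in values
         else f[1][b] if f[2][b] in values and f[1][b] not in values
         else next(fresh))
     for b in A}

def B1(p, j, y):                     # phi_{B1(a_p)}^j(y)
    return any(y in D[p, i, j] for i in (1, 2))

def B2(p, y):                        # phi_{B2(a_p)}(y)
    return B1(p, 1, y) or B1(p, 2, y)

def trans(p, j, k, y, z):            # T_p applied to the atom j ~ k (y, z)
    r1 = B1(p, j, y) and B1(p, k, z) and any(
        y in D[p, i, j] and z in D[p, i, k] for i in (1, 2))
    r2 = (B2(p, y) and not B1(p, j, y) and B2(p, z) and not B1(p, k, z)
          and (g[y] == g[z] or any(y in D[q, m, j] and z in D[q, m, k]
                                   for q in range(n) for m in (1, 2))))
    far = j == k and y == z and not B2(p, y)
    return r1 or r2 or far

def proj(p, j, k, y, z):             # reference: the radius-2 projection
    if (y, j) == (z, k):
        return True
    in2 = lambda b: any(f[i][tup[p]] == f[m][b]
                        for i, m in product((1, 2), (1, 2)))
    return in2(y) and in2(z) and f[j][y] == f[k][z]

assert all(trans(p, j, k, y, z) == proj(p, j, k, y, z)
           for p, j, k, y, z in product(range(n), (1, 2), (1, 2), A, A))
print("translated atoms agree with the radius-2 projection")
```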
Finally, here is the inductive definition of the translation $\T{-}$, which uses auxiliary transformations $\Tp{-}$ in order to remember the centre of the ball, and leads to the construction of $\phit{\phi_{qf}}(x_1,\ldots,x_n)$:
\[ \begin{array}{rcl}
\T{\phi\ou\phi'} &=& \T{\phi} \ou \T{\phi'}\\
\T{x_p=x_{p'}} &=& x_p=x_{p'} \\
\T{\neg\phi} &=& \neg\T{\phi} \\
\T{\locformr{\psi}{x_p}{2}} &=& \Tp{\psi} \\
\Tp{\rels{j}{k}{y}{z}} &=&\phi_{j,k,a_p}^{r=1}(y,z) \ou \phi_{j,k,a_p}^{r=2}(y,z) \ou \phi_{j,k,a_p}^{r>2}(y,z)\\
\Tp{\sigma(x)} &=& \sigma(x) \\
\Tp{x=y} &=& x=y \\
\Tp{\phi\ou\phi'}&=& \Tp{\phi} \ou \Tp{\phi'} \\
\Tp{\neg\phi} &=& \neg\Tp{\phi}\\
\Tp{\exists x. \phi} &=& \exists x.\Tp{\phi}\\
\end{array}\]
\begin{lemma} \label{lem:correct}
We have $\mathfrak{A}\models\phi_{qf}(\tuple{a})$ iff $\widehat{(\mathfrak{A},\tuple{a})}\models\phit{\phi_{qf}}(\tuple{a})$.
\end{lemma}
\begin{proof}
Given the inductive definition of $\T{-}$, and since only the atomic formulas $\rels{j}{k}{y}{z}$ are modified by the translation, we only have to prove that, given $b,c\in A$, we have $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$ iff $\widehat{(\mathfrak{A},\tuple{a})}\models \Tp{\rels{j}{k}{y}{z}}(b,c)$.
We first suppose that $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$.
Using Lemma \ref{lem:r2dv2-semantique}, it implies that $(b,j)$ and $(c,k)$ belong to the same set among $\Ball{1}{a_p}{\mathfrak{A}}$, $\Ball{2}{a_p}{\mathfrak{A}} \setminus \Ball{1}{a_p}{\mathfrak{A}}$ and $\comp{\Ball{2}{a_p}{\mathfrak{A}}}$. We proceed by a case analysis.
\begin{itemize}
\item If $(b,j),(c,k)\in\Ball{1}{a_p}{\mathfrak{A}}$ then by Lemma \ref{lem:r2dv2-semantique}.1 we have that $\widehat{(\mathfrak{A},\tuple{a})}\models\phi_{j,k,a_p}^{r=1}(b,c)$ and thus $\widehat{(\mathfrak{A},\tuple{a})}\models \Tp{\rels{j}{k}{y}{z}}(b,c)$.
\item If $(b,j),(c,k)\in\Ball{2}{a_p}{\mathfrak{A}} \setminus \Ball{1}{a_p}{\mathfrak{A}}$ then by Lemma \ref{lem:r2dv2-semantique}.3 we have that $\widehat{(\mathfrak{A},\tuple{a})}\models\phi_{j,k,a_p}^{r=2}(b,c)$ and thus $\widehat{(\mathfrak{A},\tuple{a})}\models \Tp{\rels{j}{k}{y}{z}}(b,c)$.
\item If $(b,j),(c,k)\in\comp{\Ball{2}{a_p}{\mathfrak{A}}}$ then by Lemma \ref{lem:r2dv2-semantique}.5 we have that $\widehat{(\mathfrak{A},\tuple{a})}\models\phi_{j,k,a_p}^{r>2}(b,c)$ and thus $\widehat{(\mathfrak{A},\tuple{a})}\models \Tp{\rels{j}{k}{y}{z}}(b,c)$.
\end{itemize}
We now suppose that $\widehat{(\mathfrak{A},\tuple{a})}\models \Tp{\rels{j}{k}{y}{z}}(b,c)$.
It means that $\widehat{(\mathfrak{A},\tuple{a})}$ satisfies at least one of $\phi_{j,k,a_p}^{r=1}(b,c)$, $\phi_{j,k,a_p}^{r=2}(b,c)$ and $\phi_{j,k,a_p}^{r>2}(b,c)$.
If $\widehat{(\mathfrak{A},\tuple{a})}\models\phi_{j,k,a_p}^{r=1}(b,c)$, it implies that $(b,j)$ and $(c,k)$ are in $\Ball{1}{a_p}{\mathfrak{A}}$, and we can then apply Lemma \ref{lem:r2dv2-semantique}.1 to deduce that $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$.
If $\widehat{(\mathfrak{A},\tuple{a})}\models\phi_{j,k,a_p}^{r=2}(b,c)$, it implies that $(b,j)$ and $(c,k)$ are in $\Ball{2}{a_p}{\mathfrak{A}} \setminus \Ball{1}{a_p}{\mathfrak{A}}$, and we can then apply Lemma \ref{lem:r2dv2-semantique}.3 to deduce that $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$.
If $\widehat{(\mathfrak{A},\tuple{a})}\models\phi_{j,k,a_p}^{r>2}(b,c)$, it implies that $(b,j)$ and $(c,k)$ are in $\comp{\Ball{2}{a_p}{\mathfrak{A}}}$, and we can then apply Lemma \ref{lem:r2dv2-semantique}.5 to deduce that $\vprojr{\mathfrak{A}}{a_p}{2}\models\rels{j}{k}{b}{c}$.
\end{proof}
\medskip
To provide a reduction from $\nDataSat{\eFOr{2}}{2}$ to
$\nDataSat{\textup{dFO}}{1}$, the formula $\phit{\phi_{qf}}(x_1,\ldots,x_n)$
is not enough: in order to use the result of the previous
lemma, whenever a model $\mathfrak{B}$ and a tuple
of elements $(a_1,\ldots,a_n)$ satisfy $\mathfrak{B} \models
\phit{\phi_{qf}}(a_1,\ldots,a_n)$, we also need to ensure that there exists
$\mathfrak{A}\in \nData{2}{\Sigma}$ such that $ \mathfrak{B} = \widehat{(\mathfrak{A},\tuple{a})}$. We now explain how we
can ensure this last point, by characterising the structures of the form $\widehat{(\mathfrak{A},\tuple{a})}$.
Given $\mathfrak{B} =
(A,(\uP{\sigma})_{\sigma\in\Unary'},f)\in\nData{1}{\Unary'}$ and
$\tuple{a}\in A$, we say that $(\mathfrak{B},\tuple{a})$ is \emph{well formed}
iff there exists a structure $\mathfrak{A}\in \nData{2}{\Sigma}$ such that $ \mathfrak{B}
= \widehat{(\mathfrak{A},\tuple{a})}$. Hence $(\mathfrak{B},\tuple{a})$ is \emph{well formed} iff there
exist two functions $f_1,f_2:A\to\N$ such that $\mathfrak{B}=\widehat{(\mathfrak{A},\tuple{a})}$ where $\mathfrak{A}=(A,(\uP{\sigma})_{\sigma\in\Sigma}, f_1,f_2)$.
We state three properties on $(\mathfrak{B},\tuple{a})$, and we will show that they characterize being well formed.
\begin{enumerate}
\item (Transitivity) For all $b,c\in A$, $p,q \in\{1,\ldots,n\}$,
$i,j,k,\ell \in\{1,2\}$ if $b\in\uP{\udd{p}{i}{j}}$, $c\in\uP{\udd{p}{i}{\ell}}$ and $b\in\uP{\udd{q}{k}{j}}$ then $c\in\uP{\udd{q}{k}{\ell}}$.
\item (Reflexivity) For all $p$ and $i$, we have $a_p\in\uP{\udd{p}{i}{i}}$
\item (Uniqueness) For all $b\in A$, if $b\in\bigcap_{j=1,2}\bigcup_{p=1,\ldots,n}^{i=1,2} \uP{\udd{p}{i}{j}}$ or $b\notin\bigcup_{j=1,2}\bigcup_{p=1,\ldots,n}^{i=1,2} \uP{\udd{p}{i}{j}}$, then for any $c\in A$ such that $f(c)=f(b)$ we have $c=b$.
\end{enumerate}
Each property can be expressed by a first order logic formula, which
we respectively name $\phi_{\mathit{tran}}$, $\phi_{\mathit{refl}}$ and $\phi_{\mathit{uniq}}$ and we
denote by $\phi_{\mathit{wf}}$ their conjunction:
$$
\begin{array}{ll}
\phi_{\mathit{tran}} &= \forall y \forall z.\Et_{p,q=1}^{n}\Et_{i,j,k,\ell=1}^2 \Big(\udd{p}{i}{j}(y) \et \udd{p}{i}{\ell}(z) \et \udd{q}{k}{j}(y) \donc \udd{q}{k}{\ell}(z)\Big) \\
\phi_{\mathit{refl}}(x_1,\ldots,x_n) &=\Et_{p=1}^n\Et_{i=1}^2 \udd{p}{i}{i}(x_p) \\
\phi_{\mathit{uniq}} &= \forall y. \Big(\Et_{j=1}^2 \Ou^n_{p=1} \Ou_{i=1}^2 \udd{p}{i}{j}(y) \ou \Et_{j=1}^2 \Et^n_{p=1}\Et^2_{i=1} \neg\udd{p}{i}{j}(y)\Big) \donc (\forall z. y\sim z \donc y=z)\\
\phi_{\mathit{wf}}(x_1,\ldots,x_n) &=\phi_{\mathit{tran}} \et \phi_{\mathit{refl}}(x_1,\ldots,x_n) \et
\phi_{\mathit{uniq}}
\end{array}
$$
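The three conjuncts of $\phi_{\mathit{wf}}$ translate directly into finite checks. A minimal sketch under our own encoding, run on the $1$-data structure of Figure \ref{fig:abstract-b}:

```python
from itertools import product

# Our encoding (an assumption): D[(p, i, j)] models the predicate a_p[i, j],
# g1 is the single data map of the 1-data structure B.
A = ['a', 'b', 'c', 'd', 'e', 'f']
tup = ['a']
n, I = len(tup), (1, 2)
D = {(0, 1, 1): {'a', 'b'}, (0, 2, 2): {'a', 'c'},
     (0, 1, 2): set(),      (0, 2, 1): {'f'}}
g1 = {'a': 8, 'b': 3, 'c': 3, 'd': 9, 'e': 10, 'f': 7}

def tran():
    # phi_tran: membership in D[p,i,j], D[p,i,l], D[q,k,j] forces D[q,k,l].
    return all(not (y in D[p, i, j] and z in D[p, i, l] and y in D[q, k, j])
               or z in D[q, k, l]
               for y, z in product(A, A)
               for p, q in product(range(n), range(n))
               for i, j, k, l in product(I, I, I, I))

def refl():
    # phi_refl: each a_p satisfies its own diagonal predicates.
    return all(tup[p] in D[p, i, i] for p in range(n) for i in I)

def uniq():
    # phi_uniq: elements seen in both fields, or in none, carry unique values.
    seen = lambda b, j: any(b in D[p, i, j] for p in range(n) for i in I)
    guarded = [b for b in A
               if (seen(b, 1) and seen(b, 2))
               or (not seen(b, 1) and not seen(b, 2))]
    return all(c == b for b in guarded for c in A if g1[c] == g1[b])

assert tran() and refl() and uniq()
print("phi_wf holds: the structure is well formed")
```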
The next lemma expresses that the formula $\phi_{\mathit{wf}}$ allows us to
characterise precisely the $1$-data structures in $\nData{1}{\Unary'}$
which are well formed.
\begin{lemma}\label{lem:well-formed}
Let $\mathfrak{B}\in\nData{1}{\Unary'}$ and $a_1,\ldots,a_n$ elements of
$\mathfrak{B}$, then $(\mathfrak{B},\tuple{a})$ is well formed iff $\mathfrak{B}\models\phi_{\mathit{wf}}(\tuple{a})$.
\end{lemma}
\begin{proof}
First, if $(\mathfrak{B},\tuple{a})$ is well formed, then there exists
$\mathfrak{A}\in \nData{2}{\Sigma}$ such that $ \mathfrak{B} = \widehat{(\mathfrak{A},\tuple{a})}$ and by
construction we have $\widehat{(\mathfrak{A},\tuple{a})} \models\phi_{\mathit{wf}}(\tuple{a})$. We now suppose
that $\mathfrak{B}=(A,(\uP{\sigma})_{\sigma\in\Unary'},f)$ and $\mathfrak{B}\models\phi_{\mathit{wf}}(\tuple{a})$.
In order to define the functions $f_1,f_2:A\to\N$, we need
to introduce some objects.
We first define a function $g :
\{1,\ldots,n\} \times \{1,2\} \to \N\setminus \im{f}$ (where
$\im{f}$ is the image of $f$ in $\mathfrak{B}$) which
verifies the following properties:
\begin{itemize}
\item for all $p \in \{1,\ldots,n\}$ and $i \in \{1,2\}$, we
have
$a_p \in \uP{\udd{p}{i}{3-i}} $ iff $g(p,1)=g(p,2)$;
\item for all $p, q \in \{1,\ldots,n\}$ and $i,j \in \{1,2\}$,
we have $a_q \in \uP{\udd{p}{i}{j}} $ iff $g(p,i)=g(q,j)$.
\end{itemize}
We use this function to fix the two data values carried by the
elements in $\{a_1,\ldots,a_n\}$. We now explain why this function is
well defined; this is due to the fact
that $\mathfrak{B}\models\phi_{\mathit{tran}} \et \phi_{\mathit{refl}}(a_1,\ldots,a_n)$. In fact, since
$\mathfrak{B} \models \phi_{\mathit{refl}}(a_1,\ldots,a_n)$, we have for all $p \in
\{1,\ldots,n\}$ and $i \in \{1,2\}$, $a_p \in \uP{\udd{p}{i}{i}}
$. Furthermore if $a_p \in \uP{\udd{p}{i}{j}}$ then $a_p \in
\uP{\udd{p}{j}{i}}$ thanks to the formula $\phi_{\mathit{tran}}$; indeed since we
have $a_p \in \uP{\udd{p}{i}{j}}$ and $a_p \in \uP{\udd{p}{i}{i}}$
and $a_p \in \uP{\udd{p}{j}{j}}$, we obtain $a_p \in
\uP{\udd{p}{j}{i}}$. Next, we also have that if $a_q \in
\uP{\udd{p}{i}{j}}$ then $a_p \in
\uP{\udd{q}{j}{i}}$ again thanks to $\phi_{\mathit{tran}}$; indeed since we
have $a_q \in \uP{\udd{p}{i}{j}}$ and $a_p \in \uP{\udd{p}{i}{i}}$
and $a_q \in \uP{\udd{q}{j}{j}}$, we obtain $a_p \in
\uP{\udd{q}{j}{i}}$.
We also need a natural number $d_{\mathit{out}}$ belonging to $\N\setminus
(\im{g}\cup\im{f})$. For $j \in
\{1,2\}$, we define $f_j$ as follows for all $b \in A$:
\[f_j(b) = \left\{\begin{array}{ll}
g(p,i) & \text{if for some } p,i \text{ we have } b\in\uP{\udd{p}{i}{j}} \\
f(b) &\text{if for all $p,i$ we have $b\notin\uP{\udd{p}{i}{j}}$ and for some $p,i$ we have $b\in\uP{\udd{p}{i}{3-j}}$} \\
d_{\mathit{out}} &\text{if for all $p,i,j'$, we have } b\notin\uP{\udd{p}{i}{j'}}
\end{array}\right.
\]
Here again, we can show that since $\mathfrak{B}\models\phi_{\mathit{tran}} \et
\phi_{\mathit{refl}}(a_1,\ldots,a_n)$, the functions $f_1$ and $f_2$ are well
defined. Indeed, assume that $b\in\uP{\udd{p}{i}{j}} \cap
\uP{\udd{q}{k}{j}}$, then we have necessarily that
$g(p,i)=g(q,k)$. For this we need to show that $a_p \in
\uP{\udd{q}{k}{i}}$ and we use again the formula $\phi_{\mathit{tran}}$. This can be
obtained because we have $b\in\uP{\udd{p}{i}{j}}$ and
$a_p\in\uP{\udd{p}{i}{i}}$ and $b \in \uP{\udd{q}{k}{j}}$.
We then define $\mathfrak{A}$ as the $2$-data structure
$(A,(P_{\sigma})_{\sigma \in \Sigma},\f{1},\f{2})$. It remains to
prove that $\mathfrak{B} = \widehat{(\mathfrak{A},\tuple{a})}$.
First, note that for all $b\in A$, $p \in \{1,\ldots,n\}$ and
$i,j\in\{1,2\}$, we have $b\in\uP{\udd{p}{i}{j}}$ iff
$\relsaa{i}{j}{\mathfrak{A}}{a_p}{b}$. Indeed, if $b\in\uP{\udd{p}{i}{j}}$,
then $f_j(b)=g(p,i)$, and since $a_p \in \uP{\udd{p}{i}{i}}$ we
have as well that $f_i(a_p)=g(p,i)$; as a consequence,
$\relsaa{i}{j}{\mathfrak{A}}{a_p}{b}$. In the other direction, if
$\relsaa{i}{j}{\mathfrak{A}}{a_p}{b}$, it means that $f_j(b)=f_i(a_p)=g(p,i)$
and thus $b\in\uP{\udd{p}{i}{j}}$. Now to have $\mathfrak{B} = \widehat{(\mathfrak{A},\tuple{a})}$, one has
only to be careful in the choice of function $f_{\textup{new}}$
while building $\widehat{(\mathfrak{A},\tuple{a})}$. We recall that this function is injective and is
used to give a value to the elements $b \in A$ such that neither
$f_1(b)\in \Valuessub{\mathfrak{A}}{\tuple{a}} \text{ and } f_2(b)\notin
\Valuessub{\mathfrak{A}}{\tuple{a}}$ nor $ f_1(b)\notin
\Valuessub{\mathfrak{A}}{\tuple{a}} \text{ and } f_2(b)\in
\Valuessub{\mathfrak{A}}{\tuple{a}}$. For these elements, we make $f_{\textup{new}}$
match the function $f$, and the fact that we thereby define an injection
is guaranteed by the formula $\phi_{\mathit{uniq}}$.
\end{proof}
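The proof above is constructive, and the extraction of $f_1,f_2$ from a well-formed pair can be sketched as follows (our encoding; the values $g(p,i)$ are merged greedily, which is sound thanks to $\phi_{\mathit{tran}}$ and $\phi_{\mathit{refl}}$):

```python
from itertools import product

# Our encoding (an assumption): rebuild a 2-data structure from a
# well-formed 1-data structure, following the proof of the lemma.
A = ['a', 'b', 'c', 'd', 'e', 'f']
tup = ['a']
n, I = len(tup), (1, 2)
D = {(0, 1, 1): {'a', 'b'}, (0, 2, 2): {'a', 'c'},
     (0, 1, 2): set(),      (0, 2, 1): {'f'}}
g1 = {'a': 8, 'b': 3, 'c': 3, 'd': 9, 'e': 10, 'f': 7}   # data map of B

fresh = iter(range(100, 200))        # values outside the image of g1
g = {}                               # g(p, i): values of the tuple elements
for p, i in product(range(n), I):
    # Merge with an already chosen value when a_q belongs to D[p, i, j].
    old = next((v for (q, j), v in list(g.items()) if tup[q] in D[p, i, j]),
               None)
    g[p, i] = old if old is not None else next(fresh)

d_out = next(fresh)                  # one value for fully unconstrained fields

def build_f(j):
    out = {}
    for b in A:
        hit = [(p, i) for p, i in product(range(n), I) if b in D[p, i, j]]
        if hit:
            out[b] = g[hit[0]]       # value forced by a predicate
        elif any(b in D[p, i, 3 - j] for p, i in product(range(n), I)):
            out[b] = g1[b]           # value that was kept in B
        else:
            out[b] = d_out
    return out

f = {1: build_f(1), 2: build_f(2)}
# Sanity check: b in D[p, i, j] iff f_i(a_p) = f_j(b) in the rebuilt structure.
assert all((b in D[p, i, j]) == (f[i][tup[p]] == f[j][b])
           for b in A for (p, i, j) in D)
print(f)
```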
Using Lemmas \ref{lem:correct} and
\ref{lem:well-formed}, we deduce that the formula $\phi=\exists x_1\ldots\exists
x_n.\phi_{qf}(x_1,\ldots,x_n)$ of $\eFO{2}{\Sigma}{2}$ is satisfiable
iff the formula $\psi=\exists x_1\ldots\exists
x_n.\phit{\phi_{qf}}(x_1,\ldots,x_n) \wedge \phi_{\mathit{wf}}(x_1,\ldots,x_n) $
is satisfiable. Note that $\psi$ can be built in polynomial time from
$\phi$ and that it belongs to $\ndFO{1}{\Unary'}$. Hence, thanks to
Theorem \ref{thm:1fo}, we obtain that $\nDataSat{\eFOr{2}}{2}$ is in
\textsc{N2EXP}.
We can as well obtain a matching lower bound thanks to a
reduction from $\nDataSat{\textup{dFO}}{1}$. For this matter we rely on two
crucial points. First in the formulas of $\eFO{2}{\Sigma}{2}$, there is no
restriction on the use of quantifiers for the formulas located under the scope
of the $\locformr{\cdot}{x}{2}$ modality and consequently we can write
inside this modality a formula of $\ndFO{1}{\Sigma}$ without any
modification. Second, we can extend a
model of $\ndFO{1}{\Sigma}$ into a $2$-data structure in which all
elements and their values are located in the same radius-$2$-ball, by adding everywhere a second
data value equal to $0$. More formally, let $\phi$ be
a formula in $\ndFO{1}{\Sigma}$ and consider the formula $\exists
x.\locformr{\phi}{x}{2}$ where we interpret $\phi$ over $2$-data
structures (this formula simply never mentions the values located in the second
fields). We have then the following lemma.
\begin{lemma} \label{lem:hardness-radius2-2}
There exists $\mathfrak{A} \in \nData{1}{\Sigma}$ such that $\mathfrak{A} \models \phi$
if and only if there exists $\mathfrak{B} \in \nData{2}{\Sigma}$ such that
$\mathfrak{B} \models \exists
x.\locformr{\phi}{x}{2}$.
\end{lemma}
\begin{proof}
Assume that there exists $\mathfrak{A}=(A,(P_{\sigma})_{\sigma \in
\Sigma},\f{1})$ in $\nData{1}{\Sigma}$ such that $\mathfrak{A} \models
\phi$. Consider the $2$-data structure $\mathfrak{B}=(A,(P_{\sigma})_{\sigma \in
\Sigma},\f{1},\f{2})$ such that $\f{2}(a)=0$ for all $a\in
A$. Let $a \in A$. It is clear that we have $\vprojr{\mathfrak{B}}{a}{2}=\mathfrak{B}$
and that $\vprojr{\mathfrak{B}}{a}{2} \models \phi$ (because $\mathfrak{A} \models
\phi$ and $\phi$ never mentions the second values of the elements
since it is a formula in $\ndFO{1}{\Sigma}$). Consequently $\mathfrak{B} \models \exists
x.\locformr{\phi}{x}{2}$.
Assume now that there exists $\mathfrak{B}=(A,(P_{\sigma})_{\sigma \in
\Sigma},\f{1},\f{2})$ in $ \nData{2}{\Sigma}$ such that $\mathfrak{B} \models \exists
x.\locformr{\phi}{x}{2}$. Hence there exists $a \in A$ such that
$\vprojr{\mathfrak{B}}{a}{2} \models \phi$, but then by forgetting the second
value in $\vprojr{\mathfrak{B}}{a}{2}$ we obtain a model in $\nData{1}{\Sigma}$
which satisfies $\phi$.
\end{proof}
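The padding used in this proof can be sketched concretely. The following minimal Python sketch assumes a hypothetical encoding of data structures as dictionaries mapping each element to its tuple of values; the function name is illustrative:

```python
def pad_with_zero(values):
    """Extend a 1-data structure into a 2-data structure by giving every
    element 0 as its second value (dictionary encoding is illustrative)."""
    return {a: (ds[0], 0) for a, ds in values.items()}

padded = pad_with_zero({"a": (1,), "b": (5,), "c": (5,)})
```

Since every element shares the value $0$ in its second field, any two vertices of the data graph are close to one another, so each radius-$2$ view coincides with the whole structure, which is exactly what the proof exploits.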
Since $\nDataSat{\textup{dFO}}{1}$ is
\textsc{N2EXP}-hard (see Theorem \ref{thm:1fo}), we obtain the desired lower bound.
\begin{theorem}\label{thm:radius2-2}
The problem $\nDataSat{\eFOr{2}}{2}$ is \textsc{N2EXP}-complete.
\end{theorem}
\subsection{Balls of radius 1 and any number of data values }
Let $D \geq 1$. We first show that $\nDataSat{\eFOr{1}}{D}$ is in
\textsc{NEXP} by providing a reduction to
$\nDataSat{\textup{dFO}}{0}$. This reduction uses the characterisation of
the radius-1-ball provided by Lemma \ref{lem:shape-balls} and is very
similar to the reduction provided in the previous section. In fact,
for an element $b$ located in the radius-1-ball of another
element $a$, we use extra unary predicates to make explicit which
values of $b$ are common with the values of $a$. We provide here
the main step of this reduction, whose proof follows the same lines as
that of Theorem \ref{thm:radius2-2}.
We consider a formula $\phi=\exists x_1\ldots\exists
x_n.\phi_{qf}(x_1,\ldots,x_n)$ of $\eFO{D}{\Sigma}{1}$ in prenex normal
form, i.e., such that $\phi_{qf}(x_1,\ldots,x_n)\in\qfFO{D}{\Sigma}{1}$. We
know that there is a structure $\mathfrak{A}=(A,(P_{\sigma})_{\sigma \in
\Sigma},\linebreak[0]\f{1},\f{2},\ldots,\f{D})$ in
$\nData{D}{\Sigma}$ such that $\mathfrak{A}\models\phi$ if and only if there
are $a_1,\ldots,a_n \in A $ such that
$\mathfrak{A}\models\phi_{qf}(a_1,\ldots,a_n)$. Let then $\mathfrak{A}=(A,(P_{\sigma})_{\sigma \in \Sigma},\f{1},\f{2},\ldots,\f{D})$ in $\nData{D}{\Sigma}$ and a tuple $\tuple{a} = (a_1,\ldots,a_n)$ of elements in $A^n$. Let $\Omega_n=\{\udd{p}{i}{j}\mid p\in\{1,\ldots,n\}, i,j\in\{1,\ldots,D\}\}$ be a set of new unary predicates and $\Unary' = \Sigma \cup \Omega_n$.
For every element $b\in A$, the predicates in $\Omega_n$ are used to keep track of the relations between the data values of $b$ and those of $a_1,\ldots,a_n$ in $\mathfrak{A}$.
Formally, we have $\uP{\udd{p}{i}{j}}=\{b\in A\mid \mathfrak{A}\models \rels{i}{j}{a_p}{b}\}$.
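These bookkeeping predicates can be computed directly from the structure. The sketch below assumes a hypothetical dictionary encoding ({element: tuple of $D$ values}) with 1-based field indices, as in the paper:

```python
def track_predicates(values, D, anchors):
    """Compute the predicates U^p_{i,j}: element b satisfies U^p_{i,j}
    iff the i-th value of a_p equals the j-th value of b (1-based
    indices; the dictionary encoding is illustrative)."""
    return {(p, i, j): {b for b, ds in values.items()
                        if values[anchors[p - 1]][i - 1] == ds[j - 1]}
            for p in range(1, len(anchors) + 1)
            for i in range(1, D + 1)
            for j in range(1, D + 1)}

# a small 2-data structure and a single anchor a_1 = "a"
U = track_predicates({"a": (1, 2), "b": (1, 3), "c": (3, 2)}, 2, ["a"])
```

For instance, $U^1_{1,1}$ collects the elements whose first value equals the first value of $a_1$.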
Finally, we build the $0$-data-structure
$\sem{\mathfrak{A}}'_{\tuple{a}}= (A, (\uP{\sigma})_{\sigma\in\Unary'})
$. Similarly to Lemma \ref{lem:r2dv2-semantique}, we have the
following connection between $\mathfrak{A}$ and $\sem{\mathfrak{A}}'_{\tuple{a}}$.
\begin{lemma}\label{lem:r1-semantique}
Let $b,c\in A$ and $j,k\in\{1,\ldots,D\}$ and $p\in\{1,\ldots,n\}$. The following statements hold:
\begin{enumerate}
\item If $(b,j)\in\Ball{1}{a_p}{\mathfrak{A}}$ and $(c,k)\in\Ball{1}{a_p}{\mathfrak{A}}$ then $\vprojr{\mathfrak{A}}{a_p}{1}\models\rels{j}{k}{b}{c}$ iff there is $i\in\{1,\ldots,D\}$ s.t. $b \in \uP{\udd{p}{i}{j}}$ and $c \in \uP{\udd{p}{i}{k}}$.
\item If $(b,j)\notin\Ball{1}{a_p}{\mathfrak{A}}$ and $(c,k)\in\Ball{1}{a_p}{\mathfrak{A}}$ then $\vprojr{\mathfrak{A}}{a_p}{1}\nvDash\rels{j}{k}{b}{c}$.
\item If $(b,j)\notin\Ball{1}{a_p}{\mathfrak{A}}$ and $(c,k)\notin\Ball{1}{a_p}{\mathfrak{A}}$ then $\vprojr{\mathfrak{A}}{a_p}{1}\models\rels{j}{k}{b}{c}$ iff $b=c$ and $j=k$.
\end{enumerate}
\end{lemma}
We shall now see how we translate the formula
$\phi_{qf}(x_1,\ldots,x_n)$ into a formula
$\phit{\phi_{qf}}'(x_1,\ldots,x_n)$ in $\ndFO{0}{\Unary'}$ such that $\mathfrak{A}$
satisfies $\phi_{qf}(a_1,\ldots,a_n)$ if, and only if,
$\sem{\mathfrak{A}}'_{\tuple{a}}$ satisfies
$\phit{\phi_{qf}}'(a_1,\ldots,a_n)$. As in the previous section, we
introduce the following formula in $\ndFO{0}{\Unary'}$ with $p \in
\{1,\ldots,n\}$ and $j \in \{1,\ldots,D\}$ to test if the $j$-th field of an element belongs to $\Ball{1}{a_p}{\mathfrak{A}}$:
$$
\phiBun{j}(y) := \bigvee_{i \in \{1,\ldots,D\}}\udd{p}{i}{j}(y)
$$
We now present how we translate atomic formulas of the form $\rels{j}{k}{y}{z}$ under some $\locformr{-}{x_p}{1}$. For this matter, we rely on two formulas of $\ndFO{0}{\Unary'}$ which can be described as follows:
\begin{itemize}
\item The first formula asks for $(y,j)$ and $(z,k)$ to be in $\Ball{1}{a_p}{\mathfrak{A}}$ (here we abuse notation, using variables for the elements they represent) and for these two data values to coincide with one data value of $a_p$; it corresponds to Lemma \ref{lem:r1-semantique}.1:
$$
\psi_{j,k,a_p}^{r=1}(y,z) := \phiBun{j}(y) \et \phiBun{k}(z) \et \Ou^D_{i=1}\big(\udd{p}{i}{j}(y)\et\udd{p}{i}{k}(z)\big)
$$
\item The second formula asks for $(y,j)$ and $(z,k)$ not to belong to $\Ball{1}{a_p}{\mathfrak{A}}$ and for $y=z$; it corresponds to Lemma \ref{lem:r1-semantique}.3:
$$
\psi_{j,k,a_p}^{r>1}(y,z) := \begin{cases}
\bigwedge^D_{i=1} (\neg
\phiBun{i}(y) \wedge \neg \phiBun{i}(z)) \et y=z &\text{ if } j=k \\
\bot &\text{ otherwise}
\end{cases}
$$
\end{itemize}
Finally, as before we provide an inductive definition of the
translation $\Tbis{-}$ which uses subtransformations $\Tpbis{-}$ in
order to remember the centre of the ball and leads to the
construction of $\phit{\phi_{qf}}'(x_1,\ldots,x_n)$. We only
detail the case
$$
\Tpbis{\rels{j}{k}{y}{z}} =\psi_{j,k,a_p}^{r=1}(y,z) \ou
\psi_{j,k,a_p}^{r>1}(y,z)
$$
as the other cases are identical to those of the
translation $\T{-}$ given in the previous section. This leads to
the following lemma (which is the counterpart of Lemma
\ref{lem:correct}).
\begin{lemma} \label{lem:correct2}
We have $\mathfrak{A}\models\phi_{qf}(\tuple{a})$ iff $\sem{\mathfrak{A}}'_{\tuple{a}}\models\phit{\phi_{qf}}'(\tuple{a})$.
\end{lemma}
As we had to characterise well-formed $1$-data structures in the previous
section, a similar trick is necessary here. For this matter, we use the following
formulas:
$$
\begin{array}{ll}
\psi_{\mathit{tran}} &= \forall y \forall z.\Et_{p,q=1}^{n}\Et_{i,j,k,\ell=1}^D \Big(\udd{p}{i}{j}(y) \et \udd{p}{i}{\ell}(z) \et \udd{q}{k}{j}(y) \donc \udd{q}{k}{\ell}(z)\Big) \\
\psi_{\mathit{refl}}(x_1,\ldots,x_n) &=\Et_{p=1}^n\Et_{i=1}^D \udd{p}{i}{i}(x_p) \\
\psi_{\mathit{wf}}(x_1,\ldots,x_n) &=\psi_{\mathit{tran}} \et \psi_{\mathit{refl}}(x_1,\ldots,x_n)
\end{array}
$$
Finally, with the same reasoning as in the previous
section, we can show that the formula $\phi=\exists x_1\ldots\exists
x_n.\linebreak[0]\phi_{qf}(x_1,\ldots,x_n)$ of $\eFO{D}{\Sigma}{1}$ is satisfiable
iff the formula $\exists x_1\ldots\exists
x_n.\linebreak[0]\phit{\phi_{qf}}'(x_1,\ldots,x_n) \wedge \psi_{\mathit{wf}}(x_1,\ldots,x_n) $
is satisfiable. Note that this latter formula can be built in polynomial time from
$\phi$ and that it belongs to $\ndFO{0}{\Unary'}$. Hence, thanks to
Theorem \ref{thm:0fo}, we obtain that $\nDataSat{\eFOr{1}}{D}$ is in
\textsc{NEXP}. The matching lower bound is obtained in the same
way, by reducing $\nDataSat{\textup{dFO}}{0}$ to $\nDataSat{\eFOr{1}}{D}$:
a formula $\phi$ in $\ndFO{0}{\Sigma}$ is satisfiable
iff the formula $\exists
x.\locformr{\phi}{x}{1}$ in $\eFO{D}{\Sigma}{1}$ is satisfiable.
\begin{theorem}
For all $D \geq 1$, the problem $\nDataSat{\eFOr{1}}{D}$ is \textsc{NEXP}-complete.
\end{theorem}
\section{Introduction}
\input{introduction}
\section{Structures and first-order logic}
\label{sec:preliminaries}
\input{structures}
\section{Decidability results}
\label{sec:decidability}
\input{decidability}
\section{Undecidability results}
\label{sec:undecidability}
\input{undecidability}
\bibliographystyle{eptcs}
\subsection{Data Structures}
We define here the class of models we are interested in. It consists of sets of nodes carrying data values, with the assumption that each node is labeled by a set of predicates and carries the same number of values. Hence, we consider a finite set $\Sigma$ of unary relation symbols (sometimes
called unary predicates) and an integer $\nbd \geq 0$. A \emph{$\nbd$-data structure} over $\Sigma$ is a tuple
$\mathfrak{A}=(A,(P_{\sigma})_{\sigma \in \Sigma},\f{1},\ldots,\f{\nbd})$
(in the following, we simply write $(A,(P_{\sigma}),\f{1},\ldots,\f{\nbd})$)
where $A$ is a nonempty finite set,
$P_\sigma \subseteq A$ for all $\sigma \in \Sigma$, and
the $\f{i}$ are mappings $A \to \N$.
Intuitively, $A$ represents the set of nodes and, for each node $a \in A$, $f_i(a)$ is the $i$-th data value carried by $a$.
For $X\subseteq A$, we let $\Valuessub{\mathfrak{A}}{X} = \{\f{i}(a) \mid a \in X, i\in\{1,\ldots,\nbd\}\}$.
The set of all $\nbd$-data structures over $\Sigma$
is denoted by $\nData{\nbd}{\Sigma}$.
While this representation is often very convenient to represent
data values, a more standard way of
representing mathematical structures is in terms of binary
relations.
For every $(i,j) \in \{1,\ldots,\nbd\} \times \{1,\ldots,\nbd\}$, the mappings
$\f{1},\ldots,\f{\nbd}$ determine a binary relation
${\relsaaord{i}{j}{\mathfrak{A}}} \subseteq A \times A$
as follows:
$\relsaa{i}{j}{\mathfrak{A}}{a}{b}$ iff $\funct{i}(a) = \funct{j}(b)$.
We may omit the superscript $\mathfrak{A}$ if it is clear from the context
and if $\nbd=1$, as there will be only one relation, we may write $\sim$ for $\relsaord{1}{1}$.
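As a small illustration of this relational view, here is a sketch assuming a hypothetical encoding of a $\nbd$-data structure as a dictionary from elements to value tuples (the names are illustrative):

```python
# one entry per element; each element carries a tuple of D = 2 data values
values = {"a": (1, 2), "b": (1, 3), "c": (3, 2)}

def rel(i, j, a, b):
    """a ~_{i,j} b iff the i-th value of a equals the j-th value of b
    (1-based indices, as in the paper)."""
    return values[a][i - 1] == values[b][j - 1]
```

Note that $\relsaord{i}{j}$ is not symmetric in general, but $a \relsaord{i}{j} b$ iff $b \relsaord{j}{i} a$.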
\subsection{First-Order Logic}
Let $\mathcal{V} = \{x,y,\ldots\}$ be a
countably infinite set of variables. The set $\ndFO{\nbd}{\Sigma}$ of first-order formulas interpreted over $\nbd$-data structures
over $\Sigma$ is inductively given by the grammar
$\varphi ::= \Pform{\sigma}{x} \mid \rels{i}{j}{x}{y} \mid x=y \mid \varphi \vee \varphi \mid \neg \varphi \mid \exists x.\varphi$,
where $x$ and $y$ range over $\mathcal{V}$, $\sigma$ ranges over $\Sigma$, and $i,j \in \{1,\ldots,\nbd\}$.
We use standard abbreviations such as $\wedge$ for conjunction and
$\Rightarrow$ for implication.
We write $\phi(x_1,\ldots,x_k)$ to indicate that the free variables of $\phi$ are among
$x_1,\ldots,x_k$. We call $\phi$ a \emph{sentence} if it does not contain free variables.
For $\mathfrak{A}=(A,(P_{\sigma}),\f{1},\ldots,\f{\nbd}) \in \nData{\nbd}{\Sigma}$ and a formula $\phi\in\ndFO{\nbd}{\Sigma}$,
the satisfaction relation $\mathfrak{A} \models_I \phi$ is defined wrt.\
an interpretation function $I: \mathcal{V} \to A$. The
purpose of $I$ is to assign an interpretation to every (free)
variable of $\phi$ so that $\phi$ can be assigned a truth value.
For $x \in \mathcal{V}$ and $a \in A$, the interpretation function $\Intrepl{x}{a}$
maps $x$ to $a$ and coincides
with $I$ on all other variables.
We then define:
\begin{center}
\begin{tabular}{ll}
$\mathfrak{A} \models_I \Pform{\sigma}{x}$ if $I(x) \in P_{\sigma}$ &
$\mathfrak{A} \models_I \phi_1 \vee \phi_2$ if $\mathfrak{A} \models_I \phi_1$ or $\mathfrak{A} \models_I \phi_2$\\
$\mathfrak{A} \models_I \rels{i}{j}{x}{y}$ if $\relsaa{i}{j}{\mathfrak{A}}{I(x)}{I(y)}$~~~ &
$\mathfrak{A} \models_I \neg \phi$ if $\mathfrak{A} \not\models_I \phi$\\
$\mathfrak{A} \models_I x = y$ if $I(x) = I(y)$ &
$\mathfrak{A} \models_I \exists x.\phi$ if there is $a \in A$ s.t. $\mathfrak{A} \models_{\Intrepl{x}{a}} \phi$
\end{tabular}
\end{center}
Finally, for a data structure $\mathfrak{A}=(A,(P_{\sigma}),\f{1},\ldots,\f{\nbd})$, a formula $\phi(x_1,\ldots,x_k)$ and $a_1,\ldots,a_k\in A$,
we write $\mathfrak{A}\models\phi(a_1,\ldots,a_k)$ if there exists an interpretation function $I$ such that $\mathfrak{A}\models_{I[x_1/a_1]\ldots[x_k/a_k]} \phi$. In particular, for a sentence $\phi$, we write $\mathfrak{A}\models\phi$ if there exists an interpretation function $I$ such that $\mathfrak{A} \models_I \phi$.
\begin{example}\label{ex:leader-election}
Assume a unary predicate $\mathrm{leader} \in \Sigma$.
The following formula from $\ndFO{2}{\Sigma}$ expresses
correctness of a leader-election algorithm: (i)~there is a unique
process that has been elected leader, and (ii)~all processes agree,
in terms of their output values (their second data), on
the identity (the first data) of the leader:
$ \exists x. \big(\mathrm{leader}(x) \et \forall y. (\mathrm{leader}(y)
\Rightarrow y=x)\big) \et \forall y. \exists x. (\mathrm{leader}(x) \et \rels{1}{2}{x}{y})$.
\end{example}
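To make the example concrete, the following sketch checks this property directly, assuming a hypothetical encoding where each process maps to its pair (identity, output) and `leader` is the set of elements carrying the predicate:

```python
def leader_election_ok(values, leader):
    """Check the formula's two conjuncts: (i) exactly one element is
    labelled leader, (ii) every element's output (second value) equals
    the identity (first value) of the leader (encoding is illustrative)."""
    if len(leader) != 1:
        return False
    ell = next(iter(leader))
    return all(out == values[ell][0] for (_ident, out) in values.values())
```

The second conjunct is exactly the quantified atom $\rels{1}{2}{x}{y}$: the first value of the leader equals the second value of every $y$.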
We are interested here in the satisfiability problem for these logics.
Let $\mathcal{F}$ denote a generic class of first-order formulas, parameterized
by $\Sigma$ and $\nbd$.
In particular, for $\mathcal{F} = \textup{dFO}$,
we have that $\mathcal{F}[\Sigma,\nbd]$ is the class $\ndFO{\nbd}{\Sigma}$. The satisfiability problem for $\mathcal{F}$ wrt.\ $\nbd$-data structures
is defined as follows:
\begin{center}
\begin{decproblem}
\problemtitle{\nDataSat{\mathcal{F}}{\nbd}}
\probleminput{A finite set $\Sigma$ and a sentence $\varphi \in \mathcal{F}[\Sigma,\nbd]$.}
\problemquestion{Is there $\mathfrak{A} \in \nData{\nbd}{\Sigma}$ such that $\mathfrak{A} \models \varphi$\,?}
\end{decproblem}
\end{center}
The following negative result (see \cite[Theorem~1]{Janiczak-Undecidability-fm53}) calls for restrictions of the general logic.
\begin{theorem}\cite{Janiczak-Undecidability-fm53}\label{thm:undec-general}
The problem $\nDataSat{\textup{dFO}}{2}$ is undecidable, even when we require that $\Sigma = \emptyset$ and we do not use $\relsaord{1}{2}$ and $\relsaord{2}{1}$ in the considered formulas.
\end{theorem}
\subsection{Local First-Order Logic and its existential fragment}
We are interested in logics combining the advantages of $\ndFO{\nbd}{\Sigma}$,
while preserving decidability. With this in mind, we have introduced in \cite{bollig-local-fsttcs21}, for the case of two data values, a \emph{local} restriction, where the scope of quantification in the presence of free variables is restricted to the view of a given element. We now present the definition of such restrictions, adapted to the case of many data values.
First, the view of a node $a$ includes all elements whose distance to $a$ is bounded by a given radius.
It is formalized using the notion of a Gaifman graph (for an
introduction, see~\cite{Libkin04}).
We use here a variant that is
suitable for our setting and that we call \emph{data graph}.
Given a data structure $\mathfrak{A}=(A,(P_{\sigma}),\f{1},\ldots,\f{\nbd}) \in \nData{\nbd}{\Sigma}$, we define its \emph{data graph} $\gaifmanish{\mathfrak{A}}=(\Vertex{\gaifmanish{\mathfrak{A}}},\Edge{\gaifmanish{\mathfrak{A}}})$ with set of vertices $\Vertex{\gaifmanish{\mathfrak{A}}} = A \times\{1,\ldots,\nbd\}$ and set of edges
$\Edge{\gaifmanish{\mathfrak{A}}} = \{ ((a,i),(b,j)) \in \Vertex{\gaifmanish{\mathfrak{A}}} \times \Vertex{\gaifmanish{\mathfrak{A}}} \mid a=b$ or $\rels{i}{j}{a}{b} \}$.
Figure \ref{fig:gaifman-a} provides an example of the graph
$\gaifmanish{\mathfrak{A}}$ for a data structure with $2$ data values.
\newcommand{\selfconnectionright}[1]{\draw[<->, line width=0.7pt] (#1.one east) .. controls +(.4,0) and +(.4,0) .. (#1.two east);}
\newcommand{\selfconnectionleft}[1]{\draw[<->, line width=0.7pt] (#1.one west) .. controls +(-.4,0) and +(-.4,0) .. (#1.two west);}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\begin{tikzpicture}[node distance=2cm]
\node [data, label=below left:$a$] (A) {1
\nodepart{two} 2 };
\node [data, above left of=A,xshift=-1em,label=below:$b$] (B) {1 \nodepart{second} 3};
\node [data, above right of=A,xshift=1em,label=below right:$c$] (C) {3 \nodepart{second} 2};
\node [dataredred, below left of=A,label=below:$d$] (D) {5 \nodepart{second} 6};
\node [dataredred, above right of=B,xshift=1em, label=below:$e$]
(E) {4 \nodepart{second} 3};
\node [data, below right of=A, label=below:$f$] (F) {2 \nodepart{second} 7};
\draw[line width=0.7pt,<->] (A.one north) .. controls +(0,.5) and
+(.5,0).. (B.one east);
\draw[line width=0.7pt,<->] (B.two east) .. controls +(2,-0.5) and
+(-2,.5).. (C.one west);
\draw[line width=0.7pt,<->] (E.two east) .. controls +(0,0) and
+(0,0.5).. (C.one north);
\draw[line width=0.7pt,<->] (E.south west) .. controls +(0,0) and
+(0.5,.2).. (B.two east);
\draw[line width=0.7pt,<->] (A.south east) .. controls +(1,-.5)
and +(0,0).. (C.south west);
\draw[line width=0.7pt,<->] (A.south) .. controls +(0,-0.5) and
+(0,0).. (F.one west);
\draw[line width=0.7pt,<->] (F.north) .. controls +(0,0) and
+(0,0).. (C.south);
\selfconnectionright{A};
\selfconnectionleft{B};
\selfconnectionright{C};
\selfconnectionleft{D};
\selfconnectionleft{E};
\selfconnectionright{F};
\end{tikzpicture}
\caption{A data structure $\mathfrak{A}$ and $\gaifmanish{\mathfrak{A}}$.}
\label{fig:gaifman-a}
\end{subfigure}
\unskip\ \vrule\ \hspace{2em}
\begin{subfigure}[b]{0.45\textwidth}
\begin{tikzpicture}[node distance=2cm]
\node [data, label=below left:$a$] (A) {1
\nodepart{two} 2 };
\node [data, above left of=A,xshift=-1em,label=below:$b$] (B) {1 \nodepart{second} 3};
\node [data, above right of=A,xshift=1em,label=below right:$c$] (C) {3 \nodepart{second} 2};
\node [dataredred, below left of=A,label=below:$d$] (D) {10 \nodepart{second} 11};
\node [dataredred, above right of=B,xshift=1em, label=below:$e$]
(E) {8 \nodepart{second} 9};
\node [data, below right of=A, label=below:$f$] (F) {2 \nodepart{second} 7};
\end{tikzpicture}
\caption{$\vprojr{\mathfrak{A}}{a}{2}$: the $2$ view of $a$}
\label{fig:gaifman-b}
\end{subfigure}
\caption{
\label{fig:gaifman}}
\end{figure*}
\medskip
We then define the distance $\distaa{(a,i)}{(b,j)}{\mathfrak{A}} \in \N \cup \{\infty\}$ between two elements $(a,i)$ and $(b,j)$ from $A \times\{1,\ldots,\nbd\}$ as the length of the shortest path from $(a,i)$ to $(b,j)$ in $\gaifmanish{\mathfrak{A}}$.
For $a \in A$ and $r \in \N$, the \emph{radius-$r$-ball around} $a$ is
the set $\Ball{r}{a}{\mathfrak{A}} = \{ (b,j)\in\Vertex{\gaifmanish{\mathfrak{A}}} \mid
\distaa{(a,i)}{(b,j)}{\mathfrak{A}}\leq r $ for some $i \in
\{1,\ldots,\nbd\}\}$. This ball contains the elements of $\Vertex{\gaifmanish{\mathfrak{A}}}$ that can be reached from $(a,1),\ldots,(a,\nbd)$ through a path of length at most $r$.
In Figure~\ref{fig:gaifman-a}, the blue nodes represent $\Ball{2}{a}{\mathfrak{A}}$.
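The radius-$r$-ball can be computed by a breadth-first search over the data graph. The sketch below assumes a hypothetical encoding of a $D$-data structure as a dictionary mapping each element to its tuple of $D$ values (1-based field indices, as in the paper):

```python
from collections import deque

def ball(values, D, a, r):
    """BFS in the data graph: vertices are (element, field) pairs; (b, i)
    and (c, j) are adjacent when b == c or the i-th value of b equals the
    j-th value of c.  Returns the set of vertices at distance <= r from a."""
    def neighbours(v):
        b, i = v
        return {(c, j) for c, ds in values.items() for j in range(1, D + 1)
                if c == b or values[b][i - 1] == ds[j - 1]}
    dist = {(a, i): 0 for i in range(1, D + 1)}
    queue = deque(dist)
    while queue:
        v = queue.popleft()
        if dist[v] < r:                      # only expand below radius r
            for w in neighbours(v):
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
    return set(dist)

# the structure of Figure 1a: elements a..f with two values each
example = {"a": (1, 2), "b": (1, 3), "c": (3, 2),
           "d": (5, 6), "e": (4, 3), "f": (2, 7)}
B2 = ball(example, 2, "a", 2)
```

On this example, `B2` contains every field of $a$, $b$, $c$ and $f$ and no field of $d$ or $e$, matching the colouring of the figure.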
We now define the $r$-view of an element $a$ in the $D$-data structure
$\mathfrak{A}$. Intuitively, it is a $D$-data structure with the same elements as $\mathfrak{A}$,
but where the data values that are not in the radius-$r$-ball around $a$ are
replaced by fresh values, all pairwise distinct.
Let $f_{\textup{new}}: A \times \{1,\ldots,\nbd\} \to \N \setminus \Values{\mathfrak{A}}$
be an injective mapping. The \emph{$r$-view of $a$ in $\mathfrak{A}$} is the
structure $\vprojr{\mathfrak{A}}{a}{r} = (A,(P_{\sigma}),\f{1}',\ldots,\f{\nbd}')
\in \nData{\nbd}{\Sigma}$ whose universe and unary predicates are the same as
those of $\mathfrak{A}$, and where $\funct{i}'(b)= \funct{i}(b)$ if $(b,i)
\in\Ball{r}{a}{\mathfrak{A}}$, and $\funct{i}'(b)= f_{\textup{new}}((b,i))$
otherwise.
In Figure~\ref{fig:gaifman-b},
the structure $\vprojr{\mathfrak{A}}{a}{2}$ is depicted, where the values of the red nodes, which do not belong to $\Ball{2}{a}{\mathfrak{A}}$, have been replaced by
fresh values not in $\{1,\ldots,7\}$.
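This replacement of out-of-ball values by pairwise-distinct fresh ones can be sketched directly (same hypothetical dictionary encoding as before; the ball computation is inlined so the sketch is self-contained):

```python
from collections import deque
from itertools import count

def r_view(values, D, a, r):
    """Sketch of the r-view of a: keep the values of (element, field)
    pairs inside the radius-r ball around a, and replace every other
    value by a fresh one, all fresh values being pairwise distinct."""
    def neighbours(v):
        b, i = v
        return {(c, j) for c, ds in values.items() for j in range(1, D + 1)
                if c == b or values[b][i - 1] == ds[j - 1]}
    dist = {(a, i): 0 for i in range(1, D + 1)}    # BFS up to depth r
    queue = deque(dist)
    while queue:
        v = queue.popleft()
        if dist[v] < r:
            for w in neighbours(v):
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
    fresh = count(1 + max(d for ds in values.values() for d in ds))
    return {b: tuple(ds[i - 1] if (b, i) in dist else next(fresh)
                     for i in range(1, D + 1))
            for b, ds in values.items()}

example = {"a": (1, 2), "b": (1, 3), "c": (3, 2),
           "d": (5, 6), "e": (4, 3), "f": (2, 7)}
view = r_view(example, 2, "a", 2)    # the 2-view of a, as in Figure 1b
```

On the structure of Figure~\ref{fig:gaifman-a}, the $2$-view of $a$ keeps the values of $a,b,c,f$ and renames the four fields of $d$ and $e$ with pairwise-distinct fresh values.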
We are now ready to present the logic $\rndFO{\nbd}{\Sigma}{r}$, where $r \in \N$, interpreted over structures from
$\nData{\nbd}{\Sigma}$. It is given by the grammar
\begin{align*}
\varphi ~&::=~ \locformr{\psi}{x}{r} \;\mid\; x=y \;\mid\; \exists x.\varphi \;\mid\; \varphi \vee \varphi \;\mid\; \neg \varphi
\end{align*}
where $\psi$ is a formula from $\ndFO{\nbd}{\Sigma}$
with (at most) one free variable $x$. This logic uses the \emph{local
modality} $\locformr{\psi}{x}{r}$ to specify that the formula $\psi$
should be interpreted over the $r$-view of the element associated to
the variable $x$.
For $\mathfrak{A} \in \nData{\nbd}{\Sigma}$ and an interpretation function $I$,
we have indeed
$\mathfrak{A} \models_I \locformr{\psi}{x}{r}$ iff $\vprojr{\mathfrak{A}}{I(x)}{r} \models_I \psi$.
\begin{example}
We now illustrate what can be specified by formulas in
the logic $\rndFO{2}{\Sigma}{1}$. We can rewrite the formula from Example~\ref{ex:leader-election} so that
it falls into our fragment as follows:
$\exists x. (\locformr{\mathrm{leader}(x)}{x}{1} \et \forall
y. \linebreak[0](\locformr{\mathrm{leader}(y)}{y}{1} \Rightarrow x=y))
\et \forall
y. \linebreak[0] \locformr{\exists x. \mathrm{leader}(x) \et \rels{2}{1}{y}{x}
}{y}{1} $.
The next formula specifies an algorithm in which all processes suggest a value and then choose a new value among those that have been suggested at least twice:
$\forall x.\locformr{\exists
y.\exists z. y \neq z \et \rels{2}{1}{x}{y} \et \rels{2}{1}{x}{z} }{x}{1} $. We can also specify partial renaming, i.e., two output values agree only if their input values are the same:
$\forall x.\locformr{\forall y.(\rels{2}{2}{x}{y}\donc\rels{1}{1}{x}{y})}{x}{1}$.
Conversely, the formula $\forall x.\locformr{\forall y.(\rels{1}{1}{x}{y}\donc\rels{2}{2}{x}{y})}{x}{1}$ specifies partial fusion of equivalence classes.
\end{example}
In \cite{bollig-local-fsttcs21}, we have studied the decidability
status of the satisfiability problem for $\rndFO{2}{\Sigma}{r}$ with $r \geq 1$ and we have shown
that \nDataSat{$\rndFOr{2}$}{2} is undecidable and that
\nDataSat{$\rndFOr{1}$}{2} is decidable when restricting the
formulas (and the view of elements) to binary relations belonging to
the set $\{\rels{1}{1}{}{},\rels{2}{2}{}{},\rels{1}{2}{}{}\}$. Whether
\nDataSat{$\rndFOr{1}$}{2} in its full generality is decidable or not
remains an open problem.
We wish to study here the existential fragment of
$\rndFO{\nbd}{\Sigma}{r}$ (with $r \geq 1$ and $D \geq 1$) and
establish when its satisfiability problem is
decidable. This fragment, denoted by $\eFO{\nbd}{\Sigma}{r}$, is given by the grammar
\[\phi ~::=~ \locformr{\psi}{x}{r} \;\mid\; x=y \;\mid\; \neg(x=y) \;\mid\; \exists x.\phi \;\mid\; \phi\ou\phi \;\mid\; \phi\et\phi \]
where $\psi$ is a formula from $\ndFO{\nbd}{\Sigma}$
with (at most) one free variable $x$. The quantifier free fragment $\qfFO{\nbd}{\Sigma}{r}$ is
defined by the grammar
$\phi ~::=~ \locformr{\psi}{x}{r} \;\mid\; x=y \;\mid\; \neg(x=y)
\;\mid\; \phi\ou\phi \;\mid\; \phi\et\phi $.
\begin{remark}
Note that for both these
fragments, we do not impose any restrictions on the use of quantifiers in
the formula $\psi$ located under the local modality
$\locformr{\psi}{x}{r}$.
\end{remark}
\subsection{Radius 3 and two data values}
In order to reduce $\nDataSat{\textup{dFO}}{2}$ to
$\nDataSat{\eFOr{3}}{2}$, we show that
we can slightly transform any $2$-data structure $\mathfrak{A}$ into another
$2$-data structure $\addge{\mathfrak{A}}$ such that $\addge{\mathfrak{A}}$ corresponds to the
radius-3-ball of any of its elements, and this transformation has
some kind of inverse. Furthermore, given a formula $\phi \in
\ndFO{2}{\Sigma}$, we transform it into a formula $T(\phi)$ in
$\ndFO{2}{\Sigma\cup\{\mathsf{ge}\}}$ such that $\mathfrak{A}$ satisfies $\phi$ iff $\addge{\mathfrak{A}}$
satisfies $T(\phi)$. What follows is the formalisation of this reasoning.
Let $\mathfrak{A}=(A,(\uP{\sigma})_{\sigma},\ifunct,\ofunct)$ be a $2$-data
structure in $\nData{2}{\Sigma}$ and $\mathsf{ge}$ be a fresh unary
predicate not in $\Sigma$. From $\mathfrak{A}$ we build the following $2$-data structure
$\addge{\mathfrak{A}}=(A',(\uP{\sigma}')_{\sigma},\ifunct',\ofunct')\in\nData{2}{\Sigma\cup\{\mathsf{ge}\}}$
such that:
\begin{itemize}
\item $A' = A \uplus \Values{\mathfrak{A}}\times\Values{\mathfrak{A}}$,
\item for $i\in\{1,2\}$ and $a\in A$, $f_i'(a)=f_i(a)$, and for $(d_1,d_2)\in \Values{\mathfrak{A}}\times\Values{\mathfrak{A}}$, $f_i'((d_1,d_2))=d_i$,
\item for $\sigma\in\Sigma$, $\uP{\sigma}'=\uP{\sigma}$,
\item $\uP{\mathsf{ge}}'=\Values{\mathfrak{A}}\times\Values{\mathfrak{A}}$.
\end{itemize}
Hence, to build $\addge{\mathfrak{A}}$ from $\mathfrak{A}$, we have added to the elements of
$\mathfrak{A}$ all pairs of data values present in $\mathfrak{A}$, and in order to recognise
these new elements in the structure we use the new unary predicate
$\mathsf{ge}$. We add these extra elements to ensure that all the elements of
the structure are located in the
radius-3-ball of any element of $\addge{\mathfrak{A}}$.
We have then the following property.
\begin{lemma}\label{lem:ge-has-radius-3}
$\vprojr{\addge{\mathfrak{A}}}{a}{3}=\addge{\mathfrak{A}}$ for all $a \in A'$.
\end{lemma}
\begin{proof}
Let $b\in A'$ and $i,j \in \{1,2\}$. We show that
$\distaa{(a,i)}{(b,j)}{\addge{\mathfrak{A}}}\leq 3$, i.e., that there is a path of length at most $3$ from $(a,i)$ to $(b,j)$ in the data graph $\gaifmanish{\addge{\mathfrak{A}}}$.
By construction of $\addge{\mathfrak{A}}$, there is an element $c\in A'$ such that $f_1'(c)=f_i'(a)$ and $f_2'(c)=f_j'(b)$.
So we have the path $(a,i),(c,1),(c,2),(b,j)$ of length at most $3$ from $(a,i)$ to $(b,j)$ in $\gaifmanish{\addge{\mathfrak{A}}}$.
\end{proof}
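The construction of $\addge{\mathfrak{A}}$ can be sketched as follows, under a hypothetical dictionary encoding where new elements are keyed by their value pair and collected in the set playing the role of $P_{\mathsf{ge}}$:

```python
def add_ge(values):
    """Add one fresh element per pair of data values occurring in the
    structure; the set of new keys plays the role of the predicate ge
    (encoding and names are illustrative)."""
    vals = sorted({d for ds in values.values() for d in ds})
    new = {("ge", d1, d2): (d1, d2) for d1 in vals for d2 in vals}
    return {**values, **new}, set(new)

extended, ge = add_ge({"a": (1, 2), "b": (3, 4)})
```

This mirrors the lemma's proof: for any fields $(a,i)$ and $(b,j)$ there is a pair element $c$ with $f_1'(c)=f_i'(a)$ and $f_2'(c)=f_j'(b)$, giving the connecting path of length at most $3$.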
Conversely, to $\mathfrak{A}=(A,(\uP{\sigma})_{\sigma},\ifunct,\ofunct)\in\nData{2}{\Sigma\cup\{\mathsf{ge}\}}$, we associate $\minusge{\mathfrak{A}}=(A',(\uP{\sigma}')_{\sigma},\ifunct',\ofunct')\in\nData{2}{\Sigma}$ where:
\begin{itemize}
\item $A' = A \setminus \uP{\mathsf{ge}}$,
\item for $i\in\{1,2\}$ and $a\in A'$, $f_i'(a)=f_i(a)$,
\item for $\sigma\in\Sigma$, $\uP{\sigma}'=\uP{\sigma}\setminus \uP{\mathsf{ge}}$.
\end{itemize}
Finally we inductively translate any formula $\phi\in\ndFO{2}{\Sigma}$ into $T(\phi)\in\ndFO{2}{\Sigma\cup\{\mathsf{ge}\}}$ by making it quantify over elements not labeled with $\mathsf{ge}$: $T(\sigma(x)) = \sigma(x)$, $T(\rels{i}{j}{x}{y})=\rels{i}{j}{x}{y}$, $T( x=y )= (x=y) $, $T(\exists x.\phi)=\exists x. \neg \mathsf{ge}(x) \wedge T(\phi)$, $T( \varphi \vee \varphi')=T(\varphi) \vee T(\varphi')$ and $T(\neg \varphi)=\neg T(\varphi)$.
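This translation is a straightforward recursion on the syntax tree. A sketch over a hypothetical tuple-based formula AST (the constructor names are illustrative):

```python
def T(phi):
    """Relativise every existential quantifier to elements not labelled
    ge; atoms are left unchanged.  Formulas are tuples, e.g.
    ("exists", "x", body), ("pred", "sigma", "x"), ("eq", "x", "y")."""
    op = phi[0]
    if op == "exists":
        _, x, body = phi
        return ("exists", x, ("and", ("not", ("pred", "ge", x)), T(body)))
    if op in ("and", "or"):
        return (op, T(phi[1]), T(phi[2]))
    if op == "not":
        return ("not", T(phi[1]))
    return phi  # atoms are unchanged
```

For example, $T(\exists x.\sigma(x))$ becomes $\exists x.\neg\mathsf{ge}(x)\et\sigma(x)$.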
\begin{lemma}\label{lem:ge-vs-without}
Let $\phi$ be a sentence in $\ndFO{2}{\Sigma}$,
$\mathfrak{A}\in\nData{2}{\Sigma}$ and $\mathfrak{B} \in
\nData{2}{\Sigma\cup\{\mathsf{ge}\}}$. The two following properties hold:
\begin{itemize}
\item $\mathfrak{A}\models\phi$ iff $\addge{\mathfrak{A}}\models T(\phi)$
\item $\minusge{\mathfrak{B}} \models\phi$ iff $\mathfrak{B}\models T(\phi)$.
\end{itemize}
\end{lemma}
\begin{proof}
As for any $\mathfrak{A}\in\nData{2}{\Sigma}$ we have $\minusge{(\addge{\mathfrak{A}})} = \mathfrak{A}$, it is sufficient to prove the second point.
We reason by induction on $\phi$.
Let $\mathfrak{A}=(A,(\uP{\sigma})_{\sigma},\ifunct,\ofunct)\in\nData{2}{\Sigma\cup\{\mathsf{ge}\}}$ and let $\minusge{\mathfrak{A}}=(A',(\uP{\sigma}')_{\sigma},\ifunct',\ofunct')\in\nData{2}{\Sigma}$.
The inductive hypothesis is that for any formula $\phi\in\ndFO{2}{\Sigma}$ (closed or not) and any interpretation function $I: \mathcal{V} \to A'$ we have $\minusge{\mathfrak{A}} \models_I \phi \text{ iff } \mathfrak{A} \models_I T(\phi)$.
Note that the inductive hypothesis is well founded in the sense that the interpretation $I$ always maps variables to elements of the structures.
We prove two cases: when $\phi$ is a unary predicate and when
$\phi$ starts with an existential quantification, the other cases
being similar. First, assume that $\phi = \sigma(x)$ where $\sigma\in\Sigma$.
Then $\minusge{\mathfrak{A}} \models_I \sigma(x)$ holds iff $I(x)\in\uP{\sigma}'$.
As $I(x)\in A\setminus \uP{\mathsf{ge}}$, we have $I(x)\in\uP{\sigma}'$ iff $I(x)\in\uP{\sigma}$, which is equivalent to $\mathfrak{A} \models_I T(\sigma(x))$.
Second, assume $\phi = \exists x.\phi'$.
Suppose that $\minusge{\mathfrak{A}} \models_I \exists x.\phi'$.
Thus, there is an $a\in A'$ such that $\minusge{\mathfrak{A}} \models_\Intrepl{x}{a} \phi'$.
By inductive hypothesis, we have $\mathfrak{A}\models_\Intrepl{x}{a} T(\phi')$.
As $a\in A' = A\setminus \uP{\mathsf{ge}}$, we have $\mathfrak{A}\models_\Intrepl{x}{a} \neg\mathsf{ge}(x)$, so $\mathfrak{A}\models_I \exists x. \neg\mathsf{ge}(x)\et T(\phi')$ as desired.
Conversely, suppose that $\mathfrak{A} \models_I T(\exists x.\phi')$.
It means that there is an $a\in A$ such that $\mathfrak{A} \models_\Intrepl{x}{a}\neg\mathsf{ge}(x)\et T(\phi')$.
So we have $a\in A'=A\setminus \uP{\mathsf{ge}}$, which means that $\Intrepl{x}{a}$ takes values in $A'$ and we can apply the inductive hypothesis to get that
$\minusge{\mathfrak{A}} \models_\Intrepl{x}{a} \phi'$.
So we have $\minusge{\mathfrak{A}} \models_I \exists x.\phi'$.
\end{proof}
From Theorem \ref{thm:undec-general}, we know that
$\nDataSat{\textup{dFO}}{2}$ is undecidable. From a closed formula
$\phi\in\ndFO{2}{\Sigma}$, we build the formula $\exists
x.\locformr{T(\phi)}{x}{3}\in\eFO{2}{\Sigma\cup\{\mathsf{ge}\}}{3}$. Now if
$\phi$ is satisfiable, it means that there exists $\mathfrak{A}\in
\nData{2}{\Sigma}$ such that $\mathfrak{A}\models\phi$. By Lemma
\ref{lem:ge-vs-without}, $\addge{\mathfrak{A}}\models T(\phi)$. Let $a$ be an element
of $\mathfrak{A}$, then thanks to Lemma \ref{lem:ge-has-radius-3}, we have $\vprojr{\addge{\mathfrak{A}}}{a}{3}\models T(\phi)$.
Finally by definition of our logic, $\addge{\mathfrak{A}}\models\exists x.\locformr{T(\phi)}{x}{3}$.
So $\exists x.\locformr{T(\phi)}{x}{3}$ is satisfiable. Now assume
that $\exists x.\locformr{T(\phi)}{x}{3}$ is satisfiable. So there
exist $\mathfrak{A} \in \nData{2}{\Sigma\cup\{\mathsf{ge}\}}$ and an element $a$ of
$\mathfrak{A}$ such that $\vprojr{\mathfrak{A}}{a}{3}\models T(\phi)$.
Using Lemma \ref{lem:ge-vs-without}, we obtain
$(\vprojr{\mathfrak{A}}{a}{3})_{\setminus\mathsf{ge}}\models\phi$. Hence $\phi$ is
satisfiable. This shows that we can reduce $\nDataSat{\textup{dFO}}{2}$ to $\nDataSat{\eFOr{3}}{2}$.
\begin{theorem}\label{thm:undec-existential-r3}
The problem $\nDataSat{\eFOr{3}}{2}$ is undecidable.
\end{theorem}
\subsection{Radius 2 and three data values}
We provide here a reduction from $\nDataSat{\textup{dFO}}{2}$ to
$\nDataSat{\eFOr{2}}{3}$. The idea is similar to the one used in the
proof of Lemma \ref{lem:hardness-radius2-2} to show that
$\nDataSat{\eFOr{2}}{2}$ is \textsc{N2EXP}-hard by reducing
$\nDataSat{\textup{dFO}}{1}$. Indeed, we have the following lemma.
\begin{lemma}
Let $\phi$ be
a formula in $\ndFO{2}{\Sigma}$. There exists $\mathfrak{A} \in \nData{2}{\Sigma}$ such that $\mathfrak{A} \models \phi$
if and only if there exists $\mathfrak{B} \in \nData{3}{\Sigma}$ such that
$\mathfrak{B} \models \exists
x.\locformr{\phi}{x}{2}$.
\end{lemma}
\begin{proof}
Assume that there exists $\mathfrak{A}=(A,(P_{\sigma})_{\sigma \in
\Sigma},\f{1},\f{2})$ in $\nData{2}{\Sigma}$ such that $\mathfrak{A} \models
\phi$. Consider the $3$-data structure $\mathfrak{B}=(A,(P_{\sigma})_{\sigma \in
\Sigma},\f{1},\f{2},\f{3})$ such that $\f{3}(a)=0$ for all $a\in
A$. Let $a \in A$. It is clear that we have $\vprojr{\mathfrak{B}}{a}{2}=\mathfrak{B}$
and that $\vprojr{\mathfrak{B}}{a}{2} \models \phi$ (because $\mathfrak{A} \models
\phi$ and $\phi$ never mentions the third value of the elements,
since it is a formula in $\ndFO{2}{\Sigma}$). Consequently $\mathfrak{B}
\models \exists
x.\locformr{\phi}{x}{2}$.
Assume now that there exists $\mathfrak{B}=(A,(P_{\sigma})_{\sigma \in
\Sigma},\f{1},\f{2},\f{3})$ in $ \nData{3}{\Sigma}$ such that $\mathfrak{B} \models \exists
x.\locformr{\phi}{x}{2}$. Hence there exists $a \in A$ such that
$\vprojr{\mathfrak{B}}{a}{2} \models \phi$, but then by forgetting the third
value in $\vprojr{\mathfrak{B}}{a}{2}$ we obtain a model in $\nData{2}{\Sigma}$
which satisfies $\phi$.
\end{proof}
Using Theorem \ref{thm:undec-general}, we obtain the following result.
\begin{theorem}\label{thm:undec-existential-r2}
The problem $\nDataSat{\eFOr{2}}{3}$ is undecidable.
\end{theorem}
| {
"timestamp": "2022-09-22T02:15:54",
"yymm": "2209",
"arxiv_id": "2209.10309",
"language": "en",
"url": "https://arxiv.org/abs/2209.10309",
"abstract": "We study first-order logic over unordered structures whose elements carry a finite number of data values from an infinite domain which can be compared wrt. equality. As the satisfiability problem for this logic is undecidable in general, in a previous work, we have introduced a family of local fragments that restrict quantification to neighbourhoods of a given reference point. We provide here the precise complexity characterisation of the satisfiability problem for the existential fragments of this local logic depending on the number of data values carried by each element and the radius of the considered neighbourhoods.",
"subjects": "Logic in Computer Science (cs.LO)",
"title": "On the Existential Fragments of Local First-Order Logics with Data",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9706877717925421,
"lm_q2_score": 0.7310585727705127,
"lm_q1q2_score": 0.7096296170524449
} |
https://arxiv.org/abs/1612.06706 | The Coulomb potential in quantum mechanics revisited | The procedure commonly used in textbooks for determining the eigenvalues and eigenstates for a particle in an attractive Coulomb potential is not symmetric in the way the boundary conditions at $r=0$ and $r \rightarrow \infty$ are considered. We highlight this fact by solving a model for the Coulomb potential with a cutoff (representing the finite extent of the nucleus); in the limit that the cutoff is reduced to zero we recover the standard result, albeit in a non-standard way. This example is used to emphasize that a more consistent approach to solving the Coulomb problem in quantum mechanics requires an examination of the non-standard solution. The end result is, of course, the same. | \section{introduction}
The solution of the quantum mechanical problem of determining the energy levels of a (bound) particle in the presence of
an attractive Coulomb potential, i.e. the hydrogen atom with centre-of-mass coordinate removed, was a spectacular achievement by
Schr\"odinger, published in the same paper in which his famous equation was first introduced,\cite{schrodinger26a} early in 1926.
This solution is now reproduced in every undergraduate textbook on quantum mechanics, with additional steps inserted to make the
derivation easier to understand for the novice. The purpose of this note is to draw attention to the omission of an important part of this
derivation; including it of course leads to the same final result, but the problem is then addressed in
what we consider a more systematic manner.
We will first summarize the standard process for the Coulomb potential, mostly in words; the detailed mathematics is available in many textbooks, of which several
clearly laid out ones are cited here.\cite{griffiths05,shankar94,gasiorowicz96,cohen-tannoudji77,bransden00,townsend12} As
described below, all of these references use a power series solution that requires truncation to avoid an un-normalizable solution
{\it at $r \rightarrow \infty$}. The other solution is assumed to diverge as $r \rightarrow 0$, and is
discarded for that reason (but it will be shown that there are certain energy values for which the second solution does not diverge
as $r \rightarrow 0$.) We will
demonstrate that the symmetric equivalent of this procedure is also possible --- discard the solution that
diverges as $r \rightarrow \infty$, and truncate the other solution to avoid a divergence as $r \rightarrow 0$. To highlight this second procedure, we consider a more
realistic problem, the Coulomb potential with a cutoff
near the origin, where we are forced to follow this route to the solution. This problem is in any case more physical than the pure Coulomb problem, as the cutoff models the finite extent of the nucleus. While this necessarily requires a
knowledge of more complicated mathematical functions, it can be argued that a rudimentary knowledge of this mathematics
is necessary to fully appreciate even the standard Coulomb problem, where both procedures are possible, and students have
a choice on how to proceed.
\section{The Textbook Coulomb Problem}
The standard treatment is as follows.\cite{griffiths05} The Hamiltonian for the Coulomb potential is given by
\begin{equation}
H = -{\hbar^2 \over 2m}\nabla^2 - {e^2 \over 4 \pi \epsilon_0}{1 \over r},
\label{ham}
\end{equation}
with the first and second terms representing the kinetic and potential energies, respectively, of a particle with mass $m$
(this is the reduced mass of the electron if this Hamiltonian arises from the hydrogen problem). Since the Coulomb potential is
central, the solution for the angular part of the wave function is standard, and one is left with the radial equation.
The radial equation for $u(\rho) \equiv r R(r)$, where $R(r)$ is the radial part
of the wave function and $\rho \equiv \kappa r$, with $\kappa \equiv \sqrt{-2mE}/\hbar$, is usually rendered in dimensionless form; it is
given by
\begin{equation}
{d^2u \over d\rho^2} = \left[1 - {\rho_0 \over \rho} + {\ell (\ell + 1) \over \rho^2} \right] u.
\label{uofrho}
\end{equation}
Here, $\ell$ is the azimuthal quantum number,
$\rho_0 \equiv 2/(\kappa a_0)$ with $a_0 \equiv 4\pi \epsilon_0 \hbar^2/(me^2)$ the Bohr radius,
and $E<0$ indicates that we are considering bound states. Asymptotic solutions are then `peeled off'
by examining the behavior as $\rho \rightarrow \infty$ and $\rho \rightarrow 0$.
A more general consideration rules out solutions that diverge at the origin; when this is addressed at all (e.g. see Sect.~12.6 in
Ref.~\onlinecite{shankar94}), it is based on normalization and/or conditions of hermiticity. However, the elimination of such solutions
on general grounds is premature in some cases, as will become evident in the next section.
Incorporating the asymptotic behavior, one writes the solution $u(\rho)$ as
\begin{equation}
u(\rho) = \rho^{\ell + 1} e^{-\rho} v(\rho),
\label{ansatz}
\end{equation}
and writes a new 2nd order differential equation for $v(\rho)$ (see below). This is then solved in one of two ways: (i) most commonly this
function is expanded in a power series in $\rho$ and then a recursion relation is derived for the coefficients in the power series,
or (ii) the equation is recognized as the differential equation for the confluent hypergeometric function, and then the solution is
simply written down as the Kummer function.\cite{landau77,nist10}
\begin{figure}[h]
\center
\includegraphics[scale=0.40]{hfig1.pdf}
\caption{$V_{\rm cut}(r)$ vs. $r$, as given in Eq.~(\ref{VCcut}). The range of the `nuclear' part, $r_0$ is highly exaggerated
in the figure.}
\label{hfig1}
\end{figure}
In either case it is recognized that in fact the Kummer function
diverges as $e^{2\rho}$ as $\rho \rightarrow \infty$, which overwhelms the `peeled-off' solution, and gives rise to a non-normalizable
wave function. In the version that utilizes the power series, a remedy is then recognized: by making one of the parameters in the
problem, $\rho_0 \equiv 2/(\kappa a_0)$, equal to a positive even integer, the recursion relation is truncated, so instead of an
infinite power series that describes exponentially growing behavior, we obtain a polynomial of finite order.
The same conclusion is reached for those familiar with the properties of the Kummer function, and in fact one recognizes that
these polynomials are the Associated Laguerre polynomials.\cite{nist10,abramowitz72}
The radial part of the wave function therefore consists of an Associated Laguerre polynomial times an exponential
with argument $-r/(a_0n)$ and
$n$ is a positive integer, the so-called principal quantum number.
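The truncation mechanism is easy to see numerically. Below is a minimal sketch (Python; the function name is ours) of the standard recursion for the series coefficients of $v(\rho)$, which follows from the differential equation for $v$ (it is written out in, e.g., Ref.~\onlinecite{griffiths05}):

```python
def series_coeffs(rho0, ell, jmax):
    """Coefficients c_j of v(rho) = sum_j c_j rho^j, from the standard recursion
    c_{j+1} = [2(j + ell + 1) - rho0] / [(j + 1)(j + 2 ell + 2)] c_j, with c_0 = 1."""
    c = [1.0]
    for j in range(jmax):
        c.append((2 * (j + ell + 1) - rho0) / ((j + 1) * (j + 2 * ell + 2)) * c[-1])
    return c

# With rho0 = 2n the series terminates: for n = 3, ell = 0 the coefficients
# vanish beyond j = n - ell - 1 = 2, leaving a polynomial of degree 2.
c = series_coeffs(rho0=6.0, ell=0, jmax=10)
assert all(cj == 0.0 for cj in c[3:])
```

For $\rho_0$ not an even integer, no coefficient ever vanishes and the series builds up the divergent $e^{2\rho}$ behavior discussed above.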
\section{Bound-state solutions for the Coulomb potential with a cutoff near the origin\label{Coulomb}}
Because the radius of a proton is of order one femtometer, roughly five orders of magnitude smaller than the Bohr radius, it is
usually disregarded (except perhaps as an example of a perturbation) in undergraduate studies of the hydrogen atom.
Nonetheless, a more realistic potential for the hydrogen atom is
\begin{equation}
V_{\rm cut}(r)=\left\{ \begin{array}{lll}
-\frac {e^2}{4 \pi \epsilon_0}{1 \over r_0}, \qquad & 0<r\leq r_0, & ({\rm region}\ I)\\
& & \\
-\frac {e^2}{4 \pi \epsilon_0}{1 \over r},\qquad & r\geq r_0, & ({\rm region}\ II)\end{array}\right.
\label{VCcut}\end{equation}
where $r_0$ represents the radius of the nucleus. A schematic is provided in Fig.~\ref{hfig1}.
One immediate question a novice might ask is, does this potential support an infinite
number of bound states, as is the case for the Coulomb potential without a cutoff? As we shall see below, the answer is `yes,'
as is obvious to those who realize that this infinite number of bound states is associated with the long-range
tail of the Coulomb potential (and not with the singular behavior near the origin).
The strategy for the solution to this problem is standard; determine solutions appropriate to the two regions, with
arbitrary coefficients, and then match the wave function and its derivative at $r=r_0$ to determine the remaining
coefficients.
With $\ell = 0$ the solution for $0< r< r_0$ is elementary --- a linear combination of $\sin{(qr)}$ and $\cos{(qr)}$ with the coefficient of
the $\cos{(qr)}$ solution set to zero to achieve the proper behavior at $r=0$ (i.e. $u(r) = 0$ as $r \rightarrow 0$),
with $q \equiv \sqrt{{2m}(E + V_0)/\hbar^2}$,
and $V_0 \equiv e^2/(4 \pi \epsilon_0 r_0)$. Therefore, in region I,
\begin{equation}
u_I(r) = A{\sin{(qr)} \over \sin{(qr_0)}},
\label{region1}
\end{equation}
where $A$ is an unknown coefficient. The solution for $r_0 < r < \infty$ is more difficult. One can attempt a power series in $\rho$,
as was done in the case with no cutoff, and in fact this is the first hint that perhaps the recipe provided in the previous section is
not the whole story. For one thing, it has likely occurred to the reader already that the standard power series solution represents one
solution; since the equation is a 2nd order differential equation, there should be two independent solutions.
In fact, the equation for $v(\rho)$ follows from insertion of Eq.~(\ref{ansatz}) into Eq.~(\ref{uofrho})
\begin{equation}
\rho {d^2v \over d\rho^2} + 2 (\ell + 1 - \rho){dv \over d\rho} + [\rho_0 - 2(\ell + 1)]v = 0,
\label{hyper}
\end{equation}
and is a particular example of the confluent hypergeometric equation:
\begin{equation}
z\frac{d^2y}{dz^2}+\left(b-z\right)\frac{dy}{dz}-ay=0,
\label{Kummer}
\end{equation}
whose general solution is
\begin{equation}
y=C\ M\left(a,b,z\right)+D\ U\left(a,b,z\right),
\label{KummerSol}
\end{equation}
where $C$ and $D$ are arbitrary constants. $M\left(a,b,z\right)$ is known as the Kummer confluent hypergeometric
function, and $U\left(a,b,z\right)$ is known as the Tricomi confluent hypergeometric function; these two solutions are independent
of one another. They are further discussed in the Appendix.
Henceforth we will focus on $\ell = 0$ to simplify the analysis. If we substitute $z\equiv 2\rho$ into Eq.~(\ref{hyper}) then we see from Eq. (\ref{KummerSol}) that this equation has two independent solutions,
\begin{equation}
v(\rho) = C \ M(1 - \rho_0/2,2,2\rho) + D \ U(1 - \rho_0/2,2,2\rho),
\label{two_solutions}
\end{equation}
with $a \equiv 1 - \rho_0/2$ and $b \equiv 2$. Usually, in the confluent hypergeometric functions, $a$ and $b$ are thought of as parameters and $z$ is the variable. It turns out (students are usually not told this!) that the Tricomi function generally diverges as $z \rightarrow 0$ (more on this later).
Perhaps for this reason it is usually not considered in the solution to the usual Coulomb problem.
But there is a twist! Note that when we wrote down the solution for region I, we eliminated one of the arbitrary constants by
examining the boundary
condition at $r=0$ (recall $\rho \equiv \kappa r$). Similarly we now eliminate one of the constants for the solution in region II, by examining
the boundary condition at $\rho \rightarrow \infty$, which immediately gives $C = 0$ (since, as we learned in the standard Coulomb
problem, the Kummer function blows up exponentially in this limit (more on this below), and we cannot `salvage' the solution by making $\rho_0$ equal to a
positive even integer --- instead, it will be determined by the matching at $r=r_0$). We now have the remaining task of matching the wave function and its derivative at $r = r_0$. Using Eq.~(\ref{two_solutions}) (with $C=0$) in Eq.~(\ref{ansatz}) and matching with Eq.~(\ref{region1}),
we obtain two equations,
\begin{equation}
A = D \kappa r_0 e^{-\kappa r_0} U(1-\rho_0/2,2,2\kappa r_0),
\label{eqn1}
\end{equation}
and
\begin{eqnarray}
Aq \cot(qr_0) &=& u_{II}(r_0)\left[{1 \over r_0} - \kappa\right] \nonumber \\
&& +\, 2D \kappa^2 r_0 e^{-\kappa r_0} \left.{dU(1-\rho_0/2,2,z) \over dz}\right|_{z=2\kappa r_0}.
\label{eqn2}
\end{eqnarray}
Dividing the latter equation by the former, and inserting the identity,\cite{nist10}
\begin{equation}
{dU(a,b,z) \over dz} = -aU(a+1,b+1,z),
\label{identity}
\end{equation}
gives us an equation to determine the allowed bound state energies,
\begin{equation}
qr_0 \cot(qr_0) -1 = -\kappa r_0\left[1 + 2 (1 - \rho_0/2){U(2-\rho_0/2,3,2\kappa r_0) \over U(1-\rho_0/2,2,2\kappa r_0)}
\right].
\label{eigenvalue}
\end{equation}
Equation~(\ref{eigenvalue}) can be rewritten in terms of the dimensionless variables, $\tilde{r}_0 \equiv r_0/a_0$ and
\begin{equation}
x \equiv 1/\sqrt{-\epsilon}; \ \ \ \ \ \ \ \epsilon = E/E_0; \ \ \ \ \ \ \ \ \ E_0 \equiv {\hbar^2 \over 2 m a_0^2}.
\label{x_epsilon}
\end{equation}
The equation becomes
\begin{eqnarray}
&&\sqrt{\tilde{r}_0\left(2-{\tilde{r}_0 \over x^2}\right)}\cot \sqrt{\tilde{r}_0\left(2-{\tilde{r}_0 \over x^2}\right)} -1 = \nonumber \\
&&-{\tilde{r}_0 \over x} \left[1+2\left(1-x\right)\frac{U\left(2-x,3,{2\tilde{r}_0 \over x}\right)}{U\left(1-x,2,{2\tilde{r}_0\over x}\right)}\right].
\label{eqn_in_x}
\end{eqnarray}
Note that we require solutions $x$ as a function of $\tilde{r}_0$ in order to determine the energy. The virtue of using the variable $x$
is that the solutions for $x$ should approach the positive integers as the cutoff $\tilde{r}_0$ approaches zero.
\begin{figure}[h]
\center
\includegraphics[scale=0.50,angle=-90]{sfig2.pdf}
\caption{The left-hand side (LHS) and right-hand side (RHS) of Eq. (\ref{eqn_in_x}) plotted as a function of $x \equiv 1/\sqrt{-\epsilon}$.
Solid (red) curves are for the parameter $\tilde{r}_0 = 0.3$, while, for reference, we have also plotted the solutions for
$\tilde{r}_0 = 0.01$ [dashed (blue) curves]. In both cases the thicker curves with many branches (two of which are labelled)
refer to the RHS, while the thinner curves (both labeled) refer to the LHS. The solutions to Eq. (\ref{eqn_in_x}) are given by
the intersection of thin and thick curves. For $\tilde{r}_0 = 0.01$ these essentially coincide with the integers, as indicated by the
pink dots, since there is essentially no cutoff. For $\tilde{r}_0 = 0.3$ (red curves) the solutions are clearly at higher values of $x$; given
that $x \equiv 1/\sqrt{-\epsilon}$ this corresponds to higher values of energy, as we would expect.}
\label{hfig2}
\end{figure}
Fig.~\ref{hfig2} illustrates the graphical solution as represented by the left-hand-side (LHS) and right-hand-side (RHS) of
Eq.~(\ref{eqn_in_x}) for two different values of $\tilde{r}_0$. The solutions shown here make apparent that the energies increase
as $\tilde{r}_0$ increases from zero. In Fig.~\ref{hfig3} we show the solutions for a variety of values of $\tilde{r}_0$ showing
how the limit of the
Coulomb potential (no cutoff) is approached for sufficiently small values of $\tilde{r}_0$. It is also evident that the number of bound
states remains fixed, i.e. there is a one-to-one correspondence between bound state energies for the Coulomb potential and those
for the Coulomb potential with a cutoff, even if the cutoff is $1000 \times$ the Bohr radius.
\begin{figure}[h]
\center
\includegraphics[scale=0.50,angle=-90]{sfig3.pdf}
\caption{The product $n^2 \frac{E_n}{E_0}$ as a function of $n$ for various values of $\tilde{r}_0$, as shown.
For $\tilde{r}_0$ approaching zero we obtain a horizontal line at $-1$, which corresponds to the eigenvalues of the
Coulomb potential. With $\tilde{r}_0 = 0.01$ this limit has clearly been achieved.
For $\tilde{r}_0 \gg 1$, we expect the first few energies to have values almost equal to those of a spherical square well, as given by Eq.~(\ref{sqwell}), indicated with a dashed (red) curve (after multiplication by $n^2$) for $\tilde{r}_0=1000$. The reasonable
agreement with the data for $\tilde{r}_0=1000$ indicates that this limit has been achieved for the lowest energy levels for
this value of $\tilde{r}_0$.}
\label{hfig3}
\end{figure}
More specifically, as $r_0/a_0$ increases from zero, the energy eigenvalues are all slightly increased in value (reduced in magnitude),
$\epsilon_n \approx -(1-\delta_n)^2/n^2$, where $\delta_n$ is a small positive quantity. Increasing $r_0$ to values $r_0 \gg a_0$
increases these eigenvalues further, but {\it all these states remain bound}. For very large $r_0$ the potential resembles a finite
square well, with (shallow) depth $V_0 \equiv {e^2 \over 4 \pi \epsilon_0}{1 \over r_0}$ and (large) width $r_0$, augmented with a
Coulomb tail. Viewed as an attractive square well potential, the lowest energy levels are given by
\begin{equation}
\epsilon_n \approx -{2 \over \tilde{r}_0} + \left( {n \pi \over \tilde{r}_0} \right)^2,
\label{sqwell}
\end{equation}
so that even in this limit the argument of the cotangent function on the left-hand-side of Eq.~(\ref{eqn_in_x}) remains real. The bound
state energies as a function of the principal quantum number $n$ are plotted in Fig.~\ref{hfig3} for various values of $\tilde{r}_0$,
where the two limits are clearly indicated. For a
Coulomb potential with no cutoff ($\tilde{r}_0 \rightarrow 0$) we expect all results at $n^2 \epsilon_n = -1$, while the opposite extreme
($\tilde{r}_0 \rightarrow \infty$), the dashed curve is Eq.~(\ref{sqwell}) for $\tilde{r}_0 = 1000$ and indicates that the results have
approached the limit described by Eq.~(\ref{sqwell}).
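A quick numerical feel for this limit can be had from Eq.~(\ref{sqwell}) directly (Python sketch; the function name is ours):

```python
from math import pi

def eps_square_well(n, r0_tilde):
    """Approximate dimensionless levels of Eq. (sqwell): -2/r0 + (n pi / r0)^2."""
    return -2.0 / r0_tilde + (n * pi / r0_tilde) ** 2

# For r0/a0 = 1000 the lowest levels are still bound (negative) and
# increase monotonically with n, consistent with Fig. 3.
levels = [eps_square_well(n, 1000.0) for n in range(1, 6)]
assert all(e < 0 for e in levels) and levels == sorted(levels)
```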
\section{So What?}
We have solved for a more realistic variation of the Coulomb potential. If we let $r_0 \rightarrow 0$ we should recover
the usual results. However, returning to the discussion in the previous section following Eq.~(\ref{two_solutions}) we note
that in this limiting process, we are left only with the Tricomi function, $U(1-\rho_0/2,2,2\rho)$. We know that the Kummer function
will reduce to the Laguerre polynomials (note\cite{remark1} that we use the physicist's definition of the Laguerre
polynomials, as found for example in Ref.~\onlinecite{griffiths05}) through
\begin{equation}
\lim_{\rho_0 \rightarrow 2n} M(1-\rho_0/2,2,2\rho) = {1 \over n}{1 \over n!} L_{n-1}^1(2\rho);
\label{laguerrem}
\end{equation}
but the Kummer function has been eliminated by setting the coefficient $C=0$.
For reference, Fig.~\ref{hfig4} shows the product of $e^{-z/2}$ and the
Kummer function $M(1-\rho_0/2,2,z)$ as a function of $z$ for several values of $\rho_0$ close to $2.0$. This combination
diverges except for the special case when $\rho_0 = 2n$, with $n$ a positive integer (here, $n=1$). This is the condition that
normally ``saves'' the solution to the standard Coulomb potential from blowing up and gives us the Coulomb eigenvalues.
However, in the way we have set up the
problem, this solution is no longer salvageable as $r_0 \rightarrow 0$, as it was eliminated from the start. How are we to recover
the known solutions?
\begin{figure}[h]
\center
\includegraphics[scale=0.5,angle=-90]{sfig4.pdf}
\caption{The function $e^{-z/2}M(1-\rho_0/2, 2, z)$ as a function of $z$, with $\rho_0=2.004$ (red, lowest),
$\rho_0=2.002$ (green, second lowest), $\rho_0=2.000$ (blue, middle curve), $\rho_0=1.998$ (pink, second highest),
and $\rho_0=1.996$ (black, highest), where the rankings refer to the far right of the figure. Note that on the left of the
figure the results all agree with one another to high accuracy. Also note that if one investigated this function only out
to $z=10$ or $12$, it would appear to converge for all values of the parameter $1-\rho_0/2$. Only for larger values of
$z$ is it clear that for non-integer values of $1-\rho_0/2$ this function actually diverges. Here, $M(a,b,z)$ is the so-called
Kummer function; details on how to calculate it are given in the Appendix.}
\label{hfig4}
\end{figure}
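The behavior shown in Fig.~\ref{hfig4} is easy to reproduce numerically with the power series of the Appendix. In the sketch below (Python; function names are ours), the product $e^{-z/2}M(1-\rho_0/2,2,z)$ stays bounded only when $\rho_0$ is tuned to an even integer:

```python
from math import exp

def kummer_M(a, b, z, terms=250):
    """Kummer function M(a,b,z), summed term by term from its power series."""
    s = t = 1.0
    for k in range(1, terms):
        t *= (a + k - 1) * z / (k * (b + k - 1))
        s += t
    return s

def damped(rho0, z):
    return exp(-z / 2) * kummer_M(1 - rho0 / 2, 2, z)

# rho0 = 2 (n = 1): M(0,2,z) = 1, so the product decays like e^{-z/2};
# detuning rho0 slightly makes the product blow up at large z, as in Fig. 4.
assert damped(2.0, 40.0) < 1e-6
assert damped(1.996, 40.0) > 1.0 and damped(2.004, 40.0) < -1.0
```

Note that, as remarked in the caption, the divergence only becomes visible at fairly large $z$; at $z \lesssim 10$ all five curves are indistinguishable.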
\begin{figure}[h]
\center
\includegraphics[scale=0.5,angle=-90]{sfig5.pdf}
\caption{The function $e^{-z/2}U(1-\rho_0/2, 2, z)$ as a function of $z$, with $\rho_0=2.004$ (red, lowest),
$\rho_0=2.002$ (green, second lowest), $\rho_0=2.000$ (blue, middle curve), $\rho_0=1.998$ (pink, second highest),
and $\rho_0=1.996$ (black, highest), where the rankings refer to the far left of the figure. Note that on the right of the
figure the results all agree with one another to high accuracy. It is clear that for non-integer values of $1-\rho_0/2$ this
function actually diverges as $z \rightarrow 0$. Here, $U(a,b,z)$ is the so-called
Tricomi function, further discussed in the Appendix.}
\label{hfig5}
\end{figure}
The answer is provided in Fig.~\ref{hfig5}, where the product of $e^{-z/2}$ and the Tricomi function $U(1-\rho_0/2,2,z)$ is plotted as
a function of $z$ for several values of $\rho_0$ close to $2.0$. This function is always well behaved as $z \rightarrow \infty$, but
tends to diverge as $z \rightarrow 0$, {\it except} in the case where $\rho_0 = 2n$, with $n$ a positive integer --- precisely the condition
that yields the known eigenvalues for the Coulomb potential. The case shown in
Fig.~\ref{hfig5} corresponds to $n=1$. It is also true (and probably less known) that
\begin{equation}
\lim_{\rho_0 \rightarrow 2n} U(1-\rho_0/2,2,2\rho) = {(-1)^{n-1} \over n} L_{n-1}^1(2\rho)
\label{laguerreu}
\end{equation}
so we indeed recover not only the correct eigenvalues but also the correct eigenfunctions, when $r_0 \rightarrow 0$.
The point we wish to make is that, even when we consider the usual Coulomb potential, without a cutoff, we should include
the Tricomi solution as well as the Kummer solution. Both are ``saved'' (i.e. rendered normalizable) in the same way, by having
$\rho_0 = 2n$ with $n$ a positive integer. That is, both functions, $M(a,b,z)$ and $U(a,b,z)$, reduce to Laguerre polynomials
when the parameter $a$ is a non-positive integer (with $b = 2$ here). There is therefore an
equivalent symmetric procedure for solving this standard problem; one can first view the boundary condition at $r \rightarrow \infty$,
realize that the Kummer function diverges there, and therefore set the constant in front of this function equal to zero, as is
normally done (usually implicitly) for the Tricomi solution. Having done this, one can now declare the Tricomi function to be the
solution, only to discover on more careful examination that this function diverges (and is un-normalizable) as $r \rightarrow 0$.
We can then discover that this difficulty is overcome by requiring $\rho_0 = 2n$ with $n$ a positive integer, which gives both
the correct eigenvalues and the correct eigenfunctions.
\section{Summary}
We have presented solutions for the cutoff Coulomb potential, a model for the hydrogen atom that includes the finite
extent of the nucleus. The number of bound states remains infinite, on a one-to-one mapping with the solutions for the
standard Coulomb problem. Naturally, they are elevated in value compared to the standard Coulomb problem. To solve
this problem we have followed the procedure normally followed for the standard problem, except it has been necessary
to include the two independent solutions to the radial equation. We have further shown that this more difficult procedure
can also be followed for the standard problem. That is, either the Kummer function
{\it or} the Tricomi function can be
retained in the solution to the standard problem. Both these functions cause difficulties; the former diverges at $r \rightarrow \infty$, while
the latter diverges at $r=0$. Divergences at both ends, near $r=0$ and as $r\rightarrow \infty$, are
prevented by a quantization condition which is identical at either end, and ultimately gives the usual Coulomb eigenvalues,
$E = -E_0/n^2$, with $E_0 = \hbar^2/(2ma_0^2)$, and the usual eigenstates, proportional to the Laguerre polynomials. The usual
procedure only recognizes the `salvaging' of the Kummer solution; one of the primary purposes of this paper is to alert instructors
and students that for the Coulomb potential both solutions are possible and an equivalent symmetric
procedure is available, as outlined here. The standard
procedure for `salvaging' the one (demanding that $\rho_0 = 2n$ where $n$ is a positive integer) also `salvages' the other. Therefore
the correct eigenvalues and eigenvectors are obtained in either case.
\section*{Acknowledgements}
A. Othman acknowledges financial support from the Taibah University (Medina, Saudi Arabia).
We are also grateful to the Natural Sciences and Engineering Research Council of Canada (NSERC), to the Alberta iCiNano program, and to the University of Alberta Teaching and Learning Enhancement Fund (TLEF) grant
for partial support.
\section*{Appendix}
Two independent solutions to the confluent hypergeometric equation (also sometimes called Kummer's Equation),
\begin{equation}
z\frac{d^2y}{dz^2}+\left(b-z\right)\frac{dy}{dz}-ay=0,
\label{Kummer_app1}
\end{equation}
are given by the Kummer function, $M(a,b,z)$, and the Tricomi function, $U(a,b,z)$.\cite{nist10,abramowitz72}
While these are not familiar to
most undergraduates, they underlie the known (and correct) solutions for the ground and excited eigenstates of the
single-particle problem in a Coulomb potential. They each have a number of representations; for the Kummer function,
a power series solution is given by
\begin{equation}
M(a,b,z) = 1 + {a \over b} z + {a(a+1) \over b(b+1)} {z^2 \over 2 !} + {a(a+1)(a+2) \over b(b+1)(b+2)}{z^3 \over 3!} + ....,
\label{kummer_app2}
\end{equation}
which exists for all parameter and variable values except when $b$ is a non-positive integer. Eq.~(\ref{kummer_app2}) is
written more concisely using so-called Pochhammer symbols, $(a)_k$, where
\begin{equation}
(a)_k \equiv a(a+1)(a+2)....(a+k-1), \phantom{aaaaaaaa} (a)_0 \equiv 1.
\label{poch}
\end{equation}
These are simple if $a$ is an integer. For example, $(2)_k = (k+1)!$, $(3)_k = (k+2)!/2$, and so on. The concise form is then
\begin{equation}
M(a,b,z) = \sum_{k=0}^\infty {(a)_k \over (b)_k} {z^k \over k!}.
\label{kummer_sum}
\end{equation}
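The integer examples of Pochhammer symbols quoted above are easy to verify (Python sketch; the helper name is ours):

```python
from math import factorial

def poch(a, k):
    """Pochhammer symbol (a)_k = a(a+1)...(a+k-1), with (a)_0 = 1."""
    out = 1
    for i in range(k):
        out *= a + i
    return out

# (2)_k = (k+1)!  and  (3)_k = (k+2)!/2, as quoted in the text
assert all(poch(2, k) == factorial(k + 1) for k in range(10))
assert all(2 * poch(3, k) == factorial(k + 2) for k in range(10))
```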
Note that this function
has simple limiting forms,
\begin{equation}
\lim_{z \rightarrow 0} M(a,b,z) = 1,
\label{kummer_limits}
\end{equation}
and, asymptotically,
\begin{equation}
M(a,b,z) \sim {\Gamma(b) \over \Gamma(a)}\, e^z z^{a-b}, \ \ \ z \rightarrow \infty, \ \ \ a \ne 0, -1, -2, ....
\label{kummer_limits2}
\end{equation}
where $\Gamma(a)$ is the Gamma Function.\cite{abramowitz72,nist10} Notice that $M(a,b,z)$ generally diverges as $z$
increases, except (refer back to Eq.~(\ref{kummer_app2})) if $a$ is equal to a non-positive integer. Then in fact the infinite series
terminates, and $M(a,b,z)$ becomes a polynomial. In fact this is already quoted in the text, and we repeat Eq.~(\ref{laguerrem}) here
in more generic form, for the case encountered in the Coulomb problem ($b=2$):
\begin{equation}
M(-n,2,z) = {1 \over n+1}{1 \over (n+1)!} L_{n}^1(z), \ \ n=0, 1, 2, ....
\label{laguerre_gen}
\end{equation}
and the polynomial is identified as the Associated Laguerre polynomial.\cite{remark1} Note that the terms in the summation
Eq.~(\ref{kummer_sum}) satisfy a recursion relation,
\begin{equation}
M(a,b,z) = \sum_{k=0}^\infty S_k, \ \ {\rm with}\ \ S_{k} = {(a+k-1) z \over k(b+k-1)}S_{k-1},
\label{recursion}
\end{equation}
which makes Eq.~(\ref{kummer_sum}) very easy to program. Convergence is very fast; for everything required in this
manuscript, 30 terms in the summation were more than enough for 8-digit accuracy.
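The recursion is indeed only a few lines of code. A minimal sketch (Python; names are ours), checked against two elementary special cases, $M(1,1,z)=e^z$ and $M(1,2,z)=(e^z-1)/z$:

```python
from math import exp

def kummer_M(a, b, z, terms=60):
    """Kummer function M(a,b,z): sum the series with the term-to-term
    recursion S_k = (a + k - 1) z / (k (b + k - 1)) S_{k-1}, S_0 = 1."""
    S = total = 1.0
    for k in range(1, terms):
        S *= (a + k - 1) * z / (k * (b + k - 1))
        total += S
    return total

assert abs(kummer_M(1, 1, 2.0) - exp(2.0)) < 1e-10
assert abs(kummer_M(1, 2, 2.0) - (exp(2.0) - 1) / 2.0) < 1e-10
```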
Much of this will be somewhat familiar
to the student who has studied the power series solution for the Coulomb potential --- it is just Eq.~(\ref{kummer_app2}), and requiring
the parameter $a$ to be a non-positive integer is precisely the condition required to `salvage' this solution, i.e. to keep it normalizable.
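Eq.~(\ref{laguerre_gen}) can also be checked numerically. In the conventional (mathematician's) normalization of the Associated Laguerre polynomials, $L^{(1)}_n$, the same identity reads $M(-n,2,z)=L^{(1)}_n(z)/(n+1)$; the physicist's polynomials used in the text differ only by the factor $(n+1)!$. A Python sketch (names ours) verifying this form:

```python
from math import comb, factorial

def M_poly(n, z):
    """M(-n, 2, z): the hypergeometric series terminates at k = n."""
    total = term = 1.0
    for k in range(1, n + 1):
        term *= (-n + k - 1) * z / (k * (k + 1))
        total += term
    return total

def laguerre(n, alpha, z):
    """Generalized Laguerre polynomial, conventional normalization."""
    return sum((-1) ** k * comb(n + alpha, n - k) * z ** k / factorial(k)
               for k in range(n + 1))

# M(-n, 2, z) = L_n^(1)(z) / (n + 1) in this normalization
for n in range(6):
    assert abs(M_poly(n, 1.7) - laguerre(n, 1, 1.7) / (n + 1)) < 1e-10
```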
The Tricomi function is less familiar. The power series solution is, when $b=n+1$, $n=0, 1, 2, ....$, and $a \ne 0, -1, -2, ...$,
\begin{eqnarray}
U(a,n+1,z) &=& {(-1)^{n+1} \over n!\Gamma(a-n)} \sum_{k=0}^\infty {(a)_k \over (n+1)_k} {z^k \over k!}
h(k,n,a,z) \nonumber \\
&+& {1 \over \Gamma(a)} \sum_{k=1}^n {(k-1)!(1-a+k)_{n-k} \over (n-k)!}{1 \over z^k}
\label{tricomi_sum}
\end{eqnarray}
where $\psi(z)$ is the Digamma function\cite{abramowitz72} and
\begin{equation}
h(k,n,a,z) \equiv \ln z + \psi(a+k) - \psi(k+1) - \psi(n+k+1).
\label{defn_of_h}
\end{equation}
For $a=-m$, $m = 0, 1, 2, ...$,
\begin{equation}
U(-m,n+1,z) =(-1)^{m} \sum_{k=0}^m \left( {m \atop k}\right) (n+k+1)_{m-k}(-z)^k,
\label{tricomi_sum2}
\end{equation}
with $\left( {m \atop k}\right) \equiv {m! \over k! (m-k)!}$.
These are forms we have found suitable for programming, again with no more than 30 terms required for high accuracy. Similar to
Eq.~(\ref{laguerre_gen}), a special case of Eq.~(\ref{tricomi_sum2}) pertinent to the Coulomb potential is
\begin{equation}
U(-n,2,z) = {(-1)^n \over n+1} L_{n}^1(z), \ \ n=0, 1, 2, ....
\label{laguerre_gen2}
\end{equation}
Once again, the Associated Laguerre polynomials appear, this time as a result of `salvaging' the solutions that would
otherwise diverge at the origin. Thus, for special parameter values ($b=2$ and $a=-n$, $n = 0, 1, 2, ...$) both (formerly)
independent solutions become proportional to the same Associated Laguerre polynomial.
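The finite sum Eq.~(\ref{tricomi_sum2}) is equally simple to program. In the conventional normalization of the Laguerre polynomials, the special case Eq.~(\ref{laguerre_gen2}) reads $U(-n,2,z)=(-1)^n\, n!\, L^{(1)}_n(z)$. A Python sketch (names ours) verifying this:

```python
from math import comb, factorial

def poch(a, k):
    """Pochhammer symbol (a)_k."""
    out = 1
    for i in range(k):
        out *= a + i
    return out

def U_poly(m, n, z):
    """U(-m, n+1, z) from the finite sum valid for a = -m."""
    return (-1) ** m * sum(comb(m, k) * poch(n + k + 1, m - k) * (-z) ** k
                           for k in range(m + 1))

def laguerre(n, alpha, z):
    """Generalized Laguerre polynomial, conventional normalization."""
    return sum((-1) ** k * comb(n + alpha, n - k) * z ** k / factorial(k)
               for k in range(n + 1))

# U(-n, 2, z) = (-1)^n n! L_n^(1)(z) in this normalization
for n in range(6):
    assert abs(U_poly(n, 1, 2.3)
               - (-1) ** n * factorial(n) * laguerre(n, 1, 2.3)) < 1e-8
```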
| {
"timestamp": "2016-12-21T02:06:33",
"yymm": "1612",
"arxiv_id": "1612.06706",
"language": "en",
"url": "https://arxiv.org/abs/1612.06706",
"abstract": "The procedure commonly used in textbooks for determining the eigenvalues and eigenstates for a particle in an attractive Coulomb potential is not symmetric in the way the boundary conditions at $r=0$ and $r \\rightarrow \\infty$ are considered. We highlight this fact by solving a model for the Coulomb potential with a cutoff (representing the finite extent of the nucleus); in the limit that the cutoff is reduced to zero we recover the standard result, albeit in a non-standard way. This example is used to emphasize that a more consistent approach to solving the Coulomb problem in quantum mechanics requires an examination of the non-standard solution. The end result is, of course, the same.",
"subjects": "General Physics (physics.gen-ph)",
"title": "The Coulomb potential in quantum mechanics revisited",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9706877684006774,
"lm_q2_score": 0.7310585727705127,
"lm_q1q2_score": 0.7096296145727932
} |
https://arxiv.org/abs/1110.2554 | Stable Birational Equivalence and Geometric Chevalley-Warning | We propose a 'geometric Chevalley-Warning' conjecture, that is a motivic extension of the Chevalley-Warning theorem in number theory. It is equivalent to a particular case of a recent conjecture of F. Brown and O.Schnetz. In this paper, we show the conjecture is true for linear hyperplane arrangements, quadratic and singular cubic hypersurfaces of any dimension, and cubic surfaces in $\Pbb^3$. The last section is devoted to verifying the conjecture for certain special kinds of hypersurfaces of any dimension. As a by-product, we obtain information on the Grothendieck classes of the affine 'Potts model' hypersurfaces considered in \cite{aluffimarcolli1}. | \section{introduction}
Let ${\mathbb{F}}_q$ be the finite field of $q$ elements with $q$ a prime power. The Chevalley-Warning theorem states that the number of solutions in ${\mathbb{F}}_q$ of a system of polynomial equations with $n$ variables is divisible by $q$, provided that the sum of the degrees of these polynomials is less than $n$.
In \cite{brownschnetz}, \S3.3, F.~Brown and O.~Schnetz conjecture that a similar
statement holds in the Grothendieck ring of varieties over a $C_1$ field~$k$. They conjecture that the class of an affine $k$-variety defined by equations satisfying the same degree condition should be a multiple of the class ${\mathbb{L}}$ of the affine line ${\mathbb{A}}^1_k$. Even the
`geometric' case, i.e., when $k$ is an algebraically closed field, appears to be open. We propose the following variant of
this conjecture:
\begin{conj}[Geometric Chevalley-Warning]\label{CW}
Let $f_1,\ldots, f_l$ be homogeneous polynomials in $k[x_0,\ldots, x_n]$ such that $\sum_{i=1}^{l}\deg(f_i)<n+1$, where $k$ is an algebraically closed field of characteristic $0$. Then $[Z(f_1,\ldots, f_l)] \equiv 1 \pmod{{\mathbb{L}}}$ in $K_0(Var_k)$,
where $Z(f_1,\dots,f_l)$ denotes the set of zeros of $f_1,\dots,f_l$ in ${\mathbb{P}}^n$.
\end{conj}
Over a field $k$ as in this statement, Conjecture \ref{CW} is equivalent to the conjecture of Brown and Schnetz. Indeed, let $X=Z(f_1,\dots,f_l)\subseteq {\mathbb{P}}^n$; then $[X]\cdot ({\mathbb{L}}-1)+1$ is the class of the zero-locus of
$f_1,\dots,f_l$ in ${\mathbb{A}}^{n+1}$; the Brown-Schnetz conjecture would imply that this class is
$\equiv 0\mod {\mathbb{L}}$, and this is equivalent to $[X]\equiv 1 \mod {\mathbb{L}}$.
In this paper, we show
that Conjecture~\ref{CW}
is true
for
hyperplane arrangements,
for quadratic hypersurfaces of any dimension,
for cubic surfaces in ${\mathbb{P}}^3$,
and for singular cubic hypersurfaces in any dimension.
Along the way, we also establish a
result which settles some special cases of the conjecture in higher dimensions. The hypothesis that $k$ is an algebraically closed field of characteristic zero is used in our proofs, and it underlies the contextual remarks that follow in this introduction.
We note that the statement of Conjecture~\ref{CW} (for any $l$) is equivalent
to the case of hypersurfaces ($l=1$). Indeed,
we have the equality $[Z(f_1,f_2)]=[Z(f_1)]+[Z(f_2)]-[Z(f_1f_2)]$ in $K_0(Var)$, and the
condition on degrees is satisfied by polynomials on one side of the equation whenever
it is satisfied by polynomials on the other side of the equation. It follows that the
conjecture is true for $Z(f_1,f_2)$ as long as it is true for the hypersurfaces $Z(f_1)$,
$Z(f_2)$, and $Z(f_1f_2)$. The same considerations apply to the zero set of
any finite number of polynomial equations.
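The scissor relation used in this reduction can be sanity-checked by point counts: over a finite field, the class of a variety specializes to its number of rational points, and $[Z(f_1,f_2)]=[Z(f_1)]+[Z(f_2)]-[Z(f_1f_2)]$ becomes an inclusion-exclusion of point counts. A minimal sketch over ${\mathbb{F}}_5$ (the polynomials $f_1$, $f_2$ are illustrative choices, not taken from the paper):

```python
from itertools import product

q = 5  # work over F_5

def proj_points(dim):
    """One representative per point of P^dim(F_q): first nonzero coord is 1."""
    return [v for v in product(range(q), repeat=dim + 1)
            if any(v) and v[min(i for i, c in enumerate(v) if c)] == 1]

# illustrative homogeneous polynomials on P^2 (not from the paper)
f1 = lambda x, y, z: (x * x + y * z) % q
f2 = lambda x, y, z: (x + z) % q
f1f2 = lambda x, y, z: (f1(x, y, z) * f2(x, y, z)) % q

P2 = proj_points(2)
Z = lambda *fs: sum(1 for p in P2 if all(f(*p) == 0 for f in fs))

# point-count shadow of [Z(f1, f2)] = [Z(f1)] + [Z(f2)] - [Z(f1*f2)]
assert Z(f1, f2) == Z(f1) + Z(f2) - Z(f1f2)
```

The identity holds set-theoretically because $Z(f_1f_2)=Z(f_1)\cup Z(f_2)$ and $Z(f_1,f_2)=Z(f_1)\cap Z(f_2)$, so the check passes for any choice of polynomials.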
When the variety under consideration is a hypersurface, the degree condition required
by the geometric Chevalley-Warning conjecture becomes $\deg(f)<n+1$. This condition
is reminiscent of results concerning the ``weakened rationality" of varieties.
Recall that a variety is rationally chain connected if two general points on the variety can be
joined by a chain of rational curves. It is known that a smooth hypersurface of degree $d$ in
${\mathbb{P}}^n$ is rationally chain connected if and only if $d<n+1$ \cite{MR1158625}. Moreover,
if we fix the degree of the hypersurface and let the dimension of the ambient
projective space be large enough, then a general such hypersurface is
unirational \cite{paranjapesrinivas}.
Introducing the notion of ${\mathbb{L}}$-rationality, Conjecture~\ref{CW} admits an equivalent reformulation.
\begin{defin} \cite{MR2775124}
A variety is ${\mathbb{L}}$-rational if its class in $K_0(Var)$ is 1 modulo ${\mathbb{L}}$.
\end{defin}
\begin{conj}\label{lrat}
Every hypersurface of degree $<n+1$ in ${\mathbb{P}}^n$ is ${\mathbb{L}}$-rational.
\end{conj}
Conjecture~\ref{lrat} postulates that ${\mathbb{L}}$-rationality behaves in a sense as rational chain connectedness
does. For instance, smooth cubic and quartic
threefolds in ${\mathbb{P}}^4$ are known not to be {\em rational\/} \cite{MR0291172}, \cite{MR0302652}, while they would be both ${\mathbb{L}}$-rational and rationally chain connected according to Conjecture~\ref{lrat} and the previous discussion.
The notion of ${\mathbb{L}}$-rationality is motivated by stable rationality. We recall that two nonsingular irreducible varieties $X$ and $Y$ are ``stably birational'' if
$X \times {\mathbb{P}}^k$ is birational to $Y \times {\mathbb{P}}^l$ for some $k$ and $l$.
We say that a nonsingular, complete irreducible variety is `stably rational'
if it is stably birational to projective space. For nonsingular complete varieties, ${\mathbb{L}}$-rationality and stable rationality are equivalent. The argument is the following. Recall that
the ideal generated by ${\mathbb{L}}$ in $K_0(Var)$ has a concrete meaning in stably
birational geometry.
Denote by ${\mathbb{Z}}[SB]$ the monoid ring generated by stably birational classes of smooth
complete irreducible varieties. Then M.~Larsen and V.~Lunts prove
in \cite{MR1996804} that there exists a surjective homomorphism $\Psi_{SB} :
K_0(Var) \rightarrow {\mathbb{Z}}[SB]$,
mapping the class of a smooth complete variety in $K_0(Var)$ to its class in ${\mathbb{Z}}[SB]$,
and the kernel of this homomorphism is precisely $({\mathbb{L}})$.
Thus, a smooth projective variety is stably birational to projective space precisely
when its class in $K_0(Var)$ is $1$ modulo ${\mathbb{L}}$.
The reader should note that, e.g., every {\em cone\/} is ${\mathbb{L}}$-rational; cf.~Corollary~\ref{cone}. Also, according to the result recalled above, a smooth projective rational variety is ${\mathbb{L}}$-rational. However, {\em singular\/} rational varieties may well fail to be ${\mathbb{L}}$-rational. For example, if the normalization morphism of an irreducible rational curve is not set-theoretically injective, then the curve itself is not ${\mathbb{L}}$-rational. Thus, `most' singular rational curves are not ${\mathbb{L}}$-rational. Examples in higher dimension may be obtained by applying Lemma~\ref{special}. While all varieties considered in this paper are ruled or rational, it is by no means obvious a priori that they should be ${\mathbb{L}}$-rational, as prescribed by Conjecture~\ref{CW} and as we prove below.
We wrap up this discussion by noting that rational chain connectedness admits a
description analogous to the description of stable rationality we just recalled.
In \cite{kahnsujatha}, B.~Kahn and R.~Sujatha construct a category of pure birational
motives by localizing the category of pure motives with respect to certain classes of
birational morphisms. They prove (\cite{kahnsujatha}, \S3.1) that if $X$ is a
rationally chain connected smooth projective $F$-variety, then $h^\circ(X)=1$ in
$Mot_{rat}^\circ(F,{\mathbb{Q}})$. Thus, $h^\circ(X)$ plays for rational chain connectedness
a role analogous to that played by the class of $X$ in ${\mathbb{Z}}[SB]\cong K_0(Var)/({\mathbb{L}})$ for stable
rationality.
\section{A few simple cases of the conjecture}\label{simplecases}
In this section we verify that the conjecture is true
when the degrees of the homogeneous polynomials defining the variety are low.
Namely, we will show the following results:
\begin{prop}\label{lineareqs}
If $X$ is the union of $n$ or fewer hyperplanes in ${\mathbb{P}}^n$, then $X$ is ${\mathbb{L}}$-rational.
\end{prop}
\begin{prop}\label{quadratic}
Any quadratic hypersurface in ${\mathbb{P}}^n \ (n>1)$ is ${\mathbb{L}}$-rational.
\end{prop}
Proposition~\ref{lineareqs} can be proved in a way similar to the reduction of the varieties in Conjecture~\ref{CW} to hypersurfaces. Indeed, the equation of $X$ can be written as $f_1\cdots f_l=0$, where $l$ is the number of hyperplanes and the $f_i$ are linear forms. Then $[X]=[Z(f_1\cdots f_{l-1})]+[Z(f_l)]-[Z(f_1\cdots f_{l-1}, f_l)]$, where $Z(\ldots)$ denotes the set of common zeros of the equations listed in the bracket, as in Conjecture~\ref{CW}. The last term is the class of a union of $l-1$ hyperplanes in ${\mathbb{P}}^{n-1}$. By induction, all terms on the right-hand side are congruent to $1$ modulo ${\mathbb{L}}$, and hence so is $[X]$.
To prove Proposition~\ref{quadratic}, we observe that any singular quadratic hypersurface is a cone. By the following lemma and its corollary, the singular case is settled in general, and we are left to consider the class of a nonsingular quadratic hypersurface.
\begin{lemma}\label{join}
Let $Z$ be the join of the varieties $X$ and $Y$, obtained by connecting pairs of points of $X$ and $Y$ by lines ${\mathbb{P}}^1$, where we assume these rational curves meet only at points of $X$ or $Y$. If either $X$ or $Y$ is ${\mathbb{L}}$-rational, then $Z$ is also ${\mathbb{L}}$-rational.
\end{lemma}
\begin{proof}
Removing $X$ and $Y$ from the variety $Z$, we get a bundle over $X\times Y$ whose fiber is ${\mathbb{P}}^1$ with two points removed. Thus the class of $Z$ in $K_0(Var)$ is $[X]\cdot [Y]\cdot ({\mathbb{L}} -1)+[X]+[Y]=[X]\cdot [Y]\cdot {\mathbb{L}} -([X]-1)\cdot([Y]-1)+1$. If $[X]$ or $[Y]$ is congruent to $1$ modulo ${\mathbb{L}}$, the right-hand side is congruent to $1$ modulo ${\mathbb{L}}$, so $Z$ is ${\mathbb{L}}$-rational.
\end{proof}
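The class identity in this proof, and the step from it to ${\mathbb{L}}$-rationality of $Z$, can be verified symbolically. A sketch (the symbols $x$, $y$ stand for the classes $[X]$, $[Y]$; for a polynomial expression in $L$, reduction modulo the ideal $(L)$ is evaluation at $L=0$):

```python
from sympy import symbols, expand

L, x, y, u = symbols('L x y u')

lhs = x * y * (L - 1) + x + y
rhs = x * y * L - (x - 1) * (y - 1) + 1
assert expand(lhs - rhs) == 0

# if [X] = 1 + L*u for some class u (i.e. [X] = 1 mod L), then [Z] = 1 mod L;
# reduction mod L of this polynomial expression is evaluation at L = 0
Z_class = lhs.subs(x, 1 + L * u)
assert expand(Z_class).subs(L, 0) == 1
```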
\begin{corol}\label{cone}
If the projective variety $X'\subset {\mathbb{P}}^m$ is a cone over another projective variety $X \subset {\mathbb{P}}^n$, $n<m$, then $X'$ is ${\mathbb{L}}$-rational.
\end{corol}
\begin{remark}
The union of $n$ or fewer hyperplanes in ${\mathbb{P}}^n$ is a cone (over any point of their intersection). Thus we get a new proof of Proposition~\ref{lineareqs}.
\end{remark}
A special case of the result of Larsen-Lunts mentioned in the introduction gives an effective treatment of the Chevalley-Warning problem for nonsingular quadratic hypersurfaces, namely:
\begin{lemma}~\label{rational}
Every rational smooth complete variety is ${\mathbb{L}}$-rational.
\end{lemma}
\begin{proof}
A rational smooth complete variety has the same stable birational class as a point. Thus the difference between its class in $K_0(Var)$ and $1$ lies in the ideal generated by ${\mathbb{L}}$ \cite{MR1996804}.
\end{proof}
The previous lemmas settle the Chevalley-Warning problem for quadratic hypersurfaces. However, we can give another proof in the nonsingular case that avoids Lemma~\ref{rational}. Let $Q$ be a nonsingular quadratic hypersurface. The projection from the ``north pole'' gives a birational map between $Q$ and ${\mathbb{A}}^n$ which allows us to stratify $Q$.
\begin{proof}[Proof of Proposition~\ref{quadratic} in the nonsingular case]
First, let us fix notation. Let $Q_n$ be the nonsingular quadratic hypersurface in
$ {\mathbb{P}}^{n+1} $ defined by the equation $ X^2_0+X^2_1+\ldots+X^2_{n+1}=0 $
and $Y_n$ be the affine variety defined by $ \sum_{i=1}^{n+1} y^2_i=1 $ in
$ {\mathbb{A}}^{n+1} $.
Projecting from the point $ P=(0,0,\ldots,1) $, we can establish a birational map between $ Y_n $ and $ {\mathbb{A}}^n $. The formula is given by:
\begin{equation*}
x_i=-\frac {y_i}{y_{n+1}-1}
\end{equation*}
Here, the $x_i$'s are coordinates of the affine space $ {\mathbb{A}}^n $.
The inverse rational map from $ {\mathbb{A}}^n $ to $Y_n$ is given by the formula:
\begin{equation*}
y_i=\frac {2x_i}{\sum_{j=1}^{n} x^2_j +1} \quad
y_{n+1}=\frac {\sum_{j=1}^{n} x^2_j -1}{\sum_{j=1}^{n} x^2_j +1}
\end{equation*}
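The projection and its inverse can be verified symbolically, e.g.\ for $n=3$ (a sketch: the check confirms that the displayed map lands in $Y_n$ and inverts the projection from the pole $P$):

```python
from sympy import symbols, simplify

n = 3
xs = symbols(f'x1:{n + 1}')        # affine coordinates x_1, ..., x_n
s = sum(xi**2 for xi in xs)

# the inverse map A^n --> Y_n from the proof
ys = [2 * xi / (s + 1) for xi in xs]
y_last = (s - 1) / (s + 1)

# the image satisfies the defining equation of Y_n
assert simplify(sum(yi**2 for yi in ys) + y_last**2 - 1) == 0

# composing with the projection x_i = -y_i / (y_{n+1} - 1) recovers x_i
for xi, yi in zip(xs, ys):
    assert simplify(-yi / (y_last - 1) - xi) == 0
```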
From this description it is easy to see that the closed set $Z(\sum_{i=1}^n x^2_i +1)\subset {\mathbb{A}}^n$ is not in the image of the projection from the pole; over the algebraically closed field $k$ this set is isomorphic to $Y_{n-1}$ (rescale the coordinates by a square root of $-1$). Then we have the following relation in $ K_0(Var) $:
\begin{equation}
[Y_n]-[Z(\sum_{i=1}^n y^2_i)]=[{\mathbb{A}}^{n}]-[Y_{n-1}]
\end{equation}
In addition to this relation, we also have the trivial relation
\begin{equation}
[Q_n]=[Q_{n-1}]+[Y_n]
\end{equation}
Because the variety $ Z(\sum_{i=1}^n y^2_i) \subset {\mathbb{A}}^n$ is the affine cone over $Q_{n-2}$, its class in $ K_0(Var) $ equals $ 1+({\mathbb{L}}-1)[Q_{n-2}] $, by the proof of Corollary~\ref{cone}. So we can replace our first equation by:
\begin{equation}
[Y_n]-(1+({\mathbb{L}}-1)[Q_{n-2}])=[{\mathbb{A}}^n]-[Y_{n-1}]
\end{equation}
With the last two equations and the simple cases $ [Q_1]={\mathbb{L}}+1 $, $ [Y_0]=2 $, $ [Y_1]={\mathbb{L}}-1 $, we conclude by induction on the dimension that $ [Q_n] \equiv 1 \pmod{{\mathbb{L}}} $ when $ n>0 $ and $ [Y_n] \equiv 0 \pmod{{\mathbb{L}}} $ when $n>1$.
\end{proof}
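The induction can be carried out mechanically: relations (2) and (3) determine $[Q_n]$ and $[Y_n]$ as polynomials in ${\mathbb{L}}$, and reduction modulo ${\mathbb{L}}$ is evaluation at ${\mathbb{L}}=0$. A sketch (it assumes $[Q_0]=2$, the quadric in ${\mathbb{P}}^1$ being a pair of points over an algebraically closed field; the other base cases are those listed in the proof):

```python
from sympy import symbols, expand

L = symbols('L')

# base cases; [Q_0] = 2 is an assumption (two points over an algebraically
# closed field), the others are the simple cases listed in the proof
Q = {0: 2, 1: L + 1}
Y = {0: 2, 1: L - 1}

for n in range(2, 9):
    # relation (3): [Y_n] = [A^n] + 1 + (L - 1)[Q_{n-2}] - [Y_{n-1}]
    Y[n] = expand(L**n + 1 + (L - 1) * Q[n - 2] - Y[n - 1])
    # relation (2): [Q_n] = [Q_{n-1}] + [Y_n]
    Q[n] = expand(Q[n - 1] + Y[n])

# reduction modulo L = evaluation at L = 0
assert all(Q[n].subs(L, 0) == 1 for n in range(1, 9))
assert all(Y[n].subs(L, 0) == 0 for n in range(2, 9))
```

As a cross-check, the recursion returns the familiar class $[Q_3]={\mathbb{L}}^3+{\mathbb{L}}^2+{\mathbb{L}}+1$ of a smooth quadric threefold.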
\section{cubic hypersurfaces}\label{sec:cubic}
The next easiest case to consider is the variety defined by a cubic equation in $ {\mathbb{P}}^3 $. We have the following theorem:
\begin{theorem}\label{cubic}
Any cubic surface in $ {\mathbb{P}}^3 $ is ${\mathbb{L}}$-rational.
\end{theorem}
In fact, we can prove something more:
\begin{theorem}\label{scubic}
Any singular cubic hypersurface in ${\mathbb{P}}^n \ (n\geqslant 3)$ is ${\mathbb{L}}$-rational.
\end{theorem}
The following lemma helps to analyze singular cubic hypersurfaces.
\begin{lemma}\label{special}
Let $X\subseteq {\mathbb{P}}^n$ have equation
$F=x_n f_k(x_0,\ldots,x_{n-1})+f_{k+1}(x_0,\ldots,x_{n-1}) =0$ where $f_k$ and
$f_{k+1}$ are homogeneous polynomials of degree $k$ and $k+1$ respectively.
$X$ is ${\mathbb{L}}$-rational if and only if the variety in ${\mathbb{P}}^{n-1}$ defined by $f_k= 0$ is ${\mathbb{L}}$-rational.
\end{lemma}
\begin{proof}[Proof of Lemma ~\ref{special}]
On the hypersurface $Z(F)$, the equation $f_k=0$ defines a cone over the point
$(0,\ldots,1)$. By Corollary~\ref{cone}, this subvariety of the hypersurface $Z(F)$ is
${\mathbb{L}}$-rational. On the other hand, we have the
isomorphism between the affine open set $f_k \neq 0$ in ${\mathbb{P}}^{n-1}$ and $Z(F)-Z(f_k)$ provided by $(x_0,\ldots,x_{n-1})
\rightarrow (x_0,\ldots,x_{n-1},-\frac{f_{k+1}}{f_k})$. We see the ${\mathbb{L}}$-rationality of $X$ is equivalent to the condition that the class of the affine open set $f_k \neq 0$ is in the ideal generated by ${\mathbb{L}}$. This happens if and only if the hypersurface $f_k=0$ in ${\mathbb{P}}^{n-1}$ is ${\mathbb{L}}$-rational.
\end{proof}
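The stratification in this proof can likewise be checked by point counts over a finite field: the stratum of $Z(F)$ where $f_k\neq 0$ is in bijection with the open set $f_k\neq 0$ of ${\mathbb{P}}^{n-1}$. A brute-force sketch over ${\mathbb{F}}_5$, with $n=3$, $k=2$, and illustrative choices of $f_2$, $f_3$:

```python
from itertools import product

q = 5  # work over F_5

def proj_points(dim):
    """One representative per point of P^dim(F_q): first nonzero coord is 1."""
    return [v for v in product(range(q), repeat=dim + 1)
            if any(v) and v[min(i for i, c in enumerate(v) if c)] == 1]

# illustrative choice with n = 3, k = 2: F = x3*f2 + f3 defines a cubic in P^3
f2 = lambda x0, x1, x2: (x0**2 + x1**2 + x2**2) % q
f3 = lambda x0, x1, x2: (x0 * x1 * x2) % q
F = lambda x0, x1, x2, x3: (x3 * f2(x0, x1, x2) + f3(x0, x1, x2)) % q

# on Z(F) \ Z(f2) the last coordinate is determined by x3 = -f3/f2, so this
# stratum is in bijection with the open set {f2 != 0} of P^2
count_on_ZF = sum(1 for p in proj_points(3)
                  if F(*p) == 0 and f2(*p[:3]) != 0)
count_in_P2 = sum(1 for p in proj_points(2) if f2(*p) != 0)
assert count_on_ZF == count_in_P2
```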
\begin{proof}[Proof of Theorem~\ref{scubic}]
For a singular cubic hypersurface, after a change of projective coordinates we can assume that one of its singular points is $(0:\ldots:0:1)$. We keep the notation of the previous lemma. The equation of the cubic hypersurface can then be written as $x_n f_2(x_0,\ldots,x_{n-1})+f_3(x_0,\ldots,x_{n-1})=0$ or $f_3(x_0,\ldots,x_{n-1})=0$, depending on whether the singular point is a double point or a triple point. The ${\mathbb{L}}$-rationality of the singular cubic hypersurface then follows immediately from Lemma~\ref{special}, together with Proposition~\ref{quadratic} or Corollary~\ref{cone}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{cubic}]
A nonsingular cubic surface arises as the blow-up of ${\mathbb{P}}^2$ at $6$ general points;
in particular, it is rational. Its ${\mathbb{L}}$-rationality follows from Lemma~\ref{rational}.
The singular case was handled in Theorem~\ref{scubic}.
\end{proof}
\begin{remark}
One can also approach Theorem~\ref{cubic} via the classification of cubic surfaces given in \cite{MR533323}: one can directly compute their classes in $K_0(Var)$ from the standard equations given in that reference. This, however, leads to a lengthy computation.
\end{remark}
\begin{remark}
The criterion derived in Lemma~\ref{special} can be applied to give another proof of Proposition~\ref{quadratic}: as long as the rank of the quadratic form is greater than or equal to $2$, the equation can be written as $F=x_0 x_1 + x_2^2 + \ldots$.
\end{remark}
\begin{corol}\label{quartic}
If a singular quartic hypersurface in ${\mathbb{P}}^4$ has a triple point, then it is ${\mathbb{L}}$-rational.
\end{corol}
\begin{proof}
Assuming the triple point is $(0:0:0:0:1)$, the equation of the quartic hypersurface can be written as $F=x_4 f_3(x_0,x_1,x_2,x_3) + g_4(x_0,x_1,x_2,x_3)$. The ${\mathbb{L}}$-rationality follows immediately from Lemma~\ref{special} and Theorem~\ref{cubic}.
\end{proof}
\section{${\mathbb{L}}$-rationality of higher dimensional varieties}
\begin{theorem}
If the equation of a hypersurface of degree $n$ in ${\mathbb{P}}^m \ (m\geqslant n\geqslant4)$ can be written as $F=x_n \ldots x_4 f_3(x_0, x_1,x_2,x_3) + \sum_{i=5}^{n}x_n\ldots x_{i} g_{i-1}(x_0,\ldots,x_{i-2})+g_n(x_0,\ldots,x_{n-1})$, then this hypersurface is ${\mathbb{L}}$-rational.
\end{theorem}
\begin{proof}
When $m>n$, not all coordinates of ${\mathbb{P}}^m$ appear in $F$. In this case $F$ defines a cone in ${\mathbb{P}}^m$ and the ${\mathbb{L}}$-rationality of this hypersurface follows from Corollary~\ref{cone}. When $m=n$, rewrite the polynomial as
\begin{multline*}
F=x_n[x_{n-1}\ldots x_4 f_3(x_0,x_1,x_2,x_3) + \sum_{i=5}^{n-1}x_{n-1}\ldots x_{i}g_{i-1}(x_0,\ldots,x_{i-2})\\+g_{n-1}(x_0,\ldots,x_{n-2})] + g_n(x_0,\ldots,x_{n-1}).
\end{multline*}
The proof then follows by induction, using Lemma~\ref{special} and Corollary~\ref{quartic}.
\end{proof}
\begin{theorem}
If a hypersurface of degree at most $n$ in ${\mathbb{P}}^n \ (n\geqslant 4)$ is defined by an equation of degree at most $1$ in each variable except at most $4$ of the variables, then this hypersurface is ${\mathbb{L}}$-rational.
\end{theorem}
\begin{proof}
Suppose the $4$ possibly nonlinear variables are $x_0,x_1,x_2,$ and $x_3$. We proceed by induction on $n$. When $n=4$, the equation of the hypersurface is either $x_4 f_k(x_0,x_1,x_2,x_3)+f_{k+1}(x_0,x_1,x_2,x_3)=0 \ (k\leqslant 3)$ or $f_k(x_0,x_1,x_2,x_3)=0 \ (k\leqslant 4)$. Since we have already checked the ${\mathbb{L}}$-rationality of cubic surfaces and of quadratic hypersurfaces, the ${\mathbb{L}}$-rationality of such hypersurfaces is guaranteed by Lemma~\ref{special} or Corollary~\ref{cone}. When $n>4$, if the equation of the hypersurface involves only $x_0,x_1,x_2,x_3$, then the hypersurface is a cone. Otherwise, let $x_n$ be one of its linear variables. Then the equation of the hypersurface is $x_n f_k(x_0,\ldots, x_{n-1})+f_{k+1}(x_0,\ldots,x_{n-1})=0$, where $f_k$ is again linear in all variables except at most $4$. So the ${\mathbb{L}}$-rationality follows from Lemma~\ref{special} and the induction hypothesis.
\end{proof}
\begin{remark}
Theorem~4.2 generalizes Corollary 3.3 of \cite{MR2775124}: since the equations of the ``graph hypersurfaces'' considered there are linear in all variables, these hypersurfaces are ${\mathbb{L}}$-rational by Theorem~4.2.
\end{remark}
\begin{example}[Affine Potts model hypersurface]
The previous theorem also gives an easy way to calculate the class modulo ${\mathbb{L}}$ of the affine Potts model hypersurfaces appearing in \cite{aluffimarcolli1}, definition (2.2), equation (2.5). The equations for such affine hypersurfaces are
$$ Z_G(q,t)=\sum_{G' \subseteq G}q^{k(G')} \prod_{e \in E(G')}t_e,$$
where $G'$ runs over the subgraphs of $G$, and $k(G')$ and $E(G')$ are the number of connected components and the set of edges of the graph $G'$, respectively. Fix $q$ in this equation and denote by $n$ the number of edges of the graph $G$; then this equation defines the affine Potts model hypersurface in ${\mathbb{A}}^n$. Its class in $K_0(Var)$ is congruent to $1$ modulo ${\mathbb{L}}$ if $n$ is odd and congruent to $-1$ modulo ${\mathbb{L}}$ when $n$ is even.
The calculation goes as follows. These hypersurfaces are defined by inhomogeneous polynomials of degree $n$ in ${\mathbb{A}}^n$, linear in each variable, where $n$ is the number of edges of the graph in consideration. Homogenize the equation and write it as $F=0$; then $F$ is a homogeneous polynomial of degree $n$, linear in all variables except the variable $x_0$ introduced in homogenizing. Now the class of the affine Potts model hypersurface is $[Z(F)]-[Z(x_0, x_1x_2\ldots x_{n})]$. $Z(F)$ is ${\mathbb{L}}$-rational by the previous theorem, and we are left to calculate $[Z(x_0, x_1x_2\ldots x_{n})]$, which is the class of the union of $n$ hyperplanes in ${\mathbb{P}}^{n-1}$.
Let $x_1,\ldots, x_{n}$ be the projective coordinates of ${\mathbb{P}}^{n-1}$. Consider the complement of $Z(x_1x_2\ldots x_{n})$, which consists of the points $(x_1:\cdots :x_{n})$ with all projective coordinates nonzero. This affine open set is isomorphic to the $(n-1)$-fold cartesian product of ${\mathbb{A}}^1 - \{pt\}$. So the class of the union of $n$ hyperplanes in ${\mathbb{P}}^{n-1}$ equals $[{\mathbb{P}}^{n-1}]- ({\mathbb{L}} -1)^{n-1}$. Taking into account that $[{\mathbb{P}}^{n-1}]= 1+{\mathbb{L}} +\cdots +{\mathbb{L}}^{n-1}$, we see this class is $1+(-1)^n$ modulo ${\mathbb{L}}$. We conclude that the affine Potts model hypersurface is ${\mathbb{L}}$-rational when $n$ is odd, and that its class is congruent to $-1$ modulo ${\mathbb{L}}$ when $n$ is even.
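The congruence $[{\mathbb{P}}^{n-1}]-({\mathbb{L}}-1)^{n-1}\equiv 1+(-1)^n \pmod{{\mathbb{L}}}$, and the resulting sign for the Potts model class, can be checked for small $n$ (reduction modulo ${\mathbb{L}}$ is evaluation at ${\mathbb{L}}=0$):

```python
from sympy import symbols, expand

L = symbols('L')

for n in range(1, 10):
    P = expand(sum(L**i for i in range(n)))     # [P^{n-1}] = 1 + L + ... + L^{n-1}
    hyperplanes = expand(P - (L - 1)**(n - 1))  # class of the union of n hyperplanes
    # reduction modulo L = evaluation at L = 0
    assert hyperplanes.subs(L, 0) == 1 + (-1)**n
    # the affine Potts class is [Z(F)] - hyperplanes, with [Z(F)] = 1 mod L
    assert 1 - hyperplanes.subs(L, 0) == (-1)**(n + 1)
```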
\end{example}
\bibliographystyle{alpha}
| {
"timestamp": "2011-10-13T02:01:21",
"yymm": "1110",
"arxiv_id": "1110.2554",
"language": "en",
"url": "https://arxiv.org/abs/1110.2554",
"abstract": "We propose a 'geometric Chevalley-Warning' conjecture, that is a motivic extension of the Chevalley-Warning theorem in number theory. It is equivalent to a particular case of a recent conjecture of F. Brown and O.Schnetz. In this paper, we show the conjecture is true for linear hyperplane arrangements, quadratic and singular cubic hypersurfaces of any dimension, and cubic surfaces in $\\Pbb^3$. The last section is devoted to verifying the conjecture for certain special kinds of hypersurfaces of any dimension. As a by-product, we obtain information on the Grothendieck classes of the affine 'Potts model' hypersurfaces considered in \\cite{aluffimarcolli1}.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "Stable Birational Equivalence and Geometric Chevalley-Warning",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.970687766704745,
"lm_q2_score": 0.7310585727705127,
"lm_q1q2_score": 0.7096296133329673
} |
https://arxiv.org/abs/1408.0673 | Geometric structure for the principal series of a split reductive $p$-adic group with connected centre | Let $\mathcal{G}$ be a split reductive $p$-adic group with connected centre. We show that each Bernstein block in the principal series of $\mathcal{G}$ admits a definite geometric structure, namely that of an extended quotient. For the Iwahori-spherical block, this extended quotient has the form $T//W$ where $T$ is a maximal torus in the Langlands dual group of $\mathcal{G}$ and $W$ is the Weyl group of $\mathcal{G}$. | \section{Introduction}
Let ${\mathcal G}$ be a split reductive $p$-adic group with connected centre, \
and let $G = {\mathcal G}^\vee$ denote the Langlands dual group. Then $G$ is a complex reductive group.
Let $T$ be a maximal torus in $G$ and let $W$ be the common Weyl group of ${\mathcal G}$ and $G$.
We can form the quotient variety
\[T/W
\]
and, familiar from noncommutative geometry \cite[p.77]{K}, the \emph{noncommutative quotient algebra}
\[
{\mathcal O}(T) \rtimes W .
\]
Within periodic cyclic homology (a noncommutative version of de Rham theory)
there is a canonical isomorphism
\[
{\rm HP}_*({\mathcal O}(T) \rtimes W) \simeq {\rm H}^*(T{/\!/} W ; \mathbb C)
\]
where
\[
T{/\!/} W
\]
denotes the \emph{extended quotient} of $T$ by $W$, see \S \ref{sec:extquot}.
In this sense, the extended quotient $T{/\!/} W$, a complex algebraic variety,
is a more concrete version of the noncommutative quotient algebra ${\mathcal O}(T) \rtimes W$.
Returning to the $p$-adic group ${\mathcal G}$, let ${\mathbf {Irr}}({\mathcal G})^{{\mathfrak i}}$ denote the subset of the smooth dual
${\mathbf {Irr}}({\mathcal G})$ comprising all the irreducible smooth Iwahori-spherical representations of ${\mathcal G}$.
We prove in this article that there is a continuous bijective map, satisfying several
constraints, as follows:
\[
T{/\!/} W \simeq {\mathbf {Irr}}({\mathcal G})^{{\mathfrak i}}.
\]
We note that there is nothing in the classical representation theory of ${\mathcal G}$
to indicate that ${\mathbf {Irr}}({\mathcal G})^{{\mathfrak i}}$ admits such a geometric structure.
Nevertheless, such a structure was conjectured by the present authors in \cite{ABPS1},
and so this article is a confirmation of that conjecture, for the single point ${\mathfrak i}$
in the Bernstein spectrum of ${\mathcal G}$. We prove, more generally, that,
subject to constraints itemized in \cite{ABPS1}, and subject to Condition~\ref{CC}
on the residual characteristic, there is a continuous bijective map
\[
T^{\mathfrak s} {/\!/} W^{\mathfrak s} \simeq {\mathbf {Irr}}({\mathcal G})^{\mathfrak s}
\]
for each point ${\mathfrak s}$ in the Bernstein spectrum for the principal series of ${\mathcal G}$,
see Theorem \ref{split}. Here, $T^{\mathfrak s}$ and $W^{\mathfrak s}$ are the complex torus
and the finite group attached to ${\mathfrak s}$.
This, too, is a confirmation of the geometric conjecture in \cite{ABPS1}.
Let ${\mathbf W}_F$ denote the Weil group of $F$.
A Langlands parameter $\Phi$ for the principal series of
${\mathcal G}$
should have
$\Phi (\mathbf{W}_F)$ contained in a maximal torus of $G$. In particular, it should suffice to consider
parameters $\Phi$ such that
$\Phi \big|_{\mathbf{W}_F}$ factors through $\mathbf{W}_F^{{\rm ab}} \cong F^\times$, that is, such that $\Phi$ factors as follows:
\[
\Phi \colon\mathbf{W}_F\times {\rm SL}_2(\mathbb C) \to F^{\times} \times {\rm SL}_2(\mathbb C) \to G.
\]
Such a parameter is \emph{enhanced} in the following way. Let $\rho$ be an irreducible representation
of the component group of the centralizer of the image of $\Phi$:
\[
\rho \in {\mathbf {Irr}} \, \pi_0 ( Z_G({\rm im\,} \, \Phi)) .
\]
The pair $(\Phi, \rho)$ will be called an \emph{enhanced Langlands parameter}.
We rely on Reeder's classification of the constituents of a given principal series
representation of ${\mathcal G}$, see \cite[Theorem 1, p.101-102]{Reed}.
Reeder's theorem amounts to a local Langlands correspondence for the principal series
of ${\mathcal G}$. Reeder uses only enhanced Langlands parameters with a particular geometric origin,
namely those which occur in the homology of a certain variety of Borel subgroups of $G$.
This condition is essential, see, for example, the discussion, in \cite{ABP2},
of the Iwahori-spherical representations of the exceptional group $G_2$.
In Theorem \ref{compareParameters} we show how to replace the enhanced Langlands
parameters of this kind, namely those of geometric origin,
by the \emph{affine Springer parameters} defined in \S \ref{par:affSpringer}.
These affine Springer
parameters are defined in terms of data attached to the complex reductive group $G$ --
in this sense, the affine Springer parameters are independent of the cardinality $q$ of the residue
field of $F$. The scene is now set for us to prove the first theorem of geometric structure,
namely Theorem \ref{thm:bijection}, from which our main structure theorem,
Theorem \ref{split} follows.
We also relate our basic structure theorem with $L$-packets in the principal series of ${\mathcal G}$,
see Theorem \ref{Lpackets}.
An earlier, less precise version of our conjecture was formulated in \cite{ABP1}.
That version was proven in \cite{Sol} for Bernstein components which are described nicely
by affine Hecke algebras. These include the principal series of split groups
(with possibly disconnected centre), symplectic
and orthogonal groups and also inner forms of ${\rm GL}_n$. \\
\textbf{Acknowledgements.}
Thanks to Mark Reeder for drawing our attention to the article of Kato \cite{Kat}.
We thank Joseph Bernstein, David Kazhdan, George Lusztig, and David Vogan for
enlightening comments and discussions.
\section{Extended quotient}
\label{sec:extquot}
Let $\Gamma$ be a finite group acting on a complex affine variety $X$
by automorphisms,
\[
\Gamma \times X \to X.
\]
The quotient variety $X/\Gamma$ is obtained by collapsing each orbit to a point.
For $x\in X ,\; \Gamma_x$ denotes the stabilizer group of $x$:
\[
\Gamma_x = \{\gamma\in \Gamma : \gamma x = x\}.
\]
Let $c(\Gamma_x)$ denote the set of conjugacy classes of $\Gamma_x$. The extended quotient
is obtained from $X / \Gamma$ by replacing the orbit of $x$ by $c(\Gamma_x)$.
This is done as follows:\\
\noindent Set $\widetilde{X} = \{(\gamma, x) \in \Gamma \times X : \gamma x = x\}$.
It is an affine variety and a subvariety of $\Gamma \times X$.
The group $\Gamma$ acts on $\widetilde{X}$:
\begin{align*}
& \Gamma \times \widetilde{X} \to \widetilde{X}\\
& \alpha(\gamma, x) = (\alpha\gamma \alpha^{-1}, \alpha x), \quad\quad \alpha \in \Gamma,
\quad (\gamma, x) \in \widetilde{X}.
\end{align*}
\noindent The extended quotient, denoted $ X/\!/\Gamma $, is $\widetilde{X}/\Gamma$.
Thus the extended quotient $ X/\!/\Gamma $ is the usual quotient for the action of
$\Gamma$ on $\widetilde{X}$.
The projection
\[
\widetilde{X} \to X ,\; (\gamma, x) \mapsto x
\]
is $\Gamma$-equivariant
and so passes to quotient spaces to give a morphism of affine varieties
\[
\rho\colon X/\!/\Gamma \to X/\Gamma.
\]
This map will be referred to as the projection of the extended quotient
onto the ordinary quotient. The inclusion
\begin{align*}
& X \hookrightarrow \widetilde{X}\\
& x \mapsto (e,x)\qquad e=\text{identity element of }\Gamma
\end{align*}
is $\Gamma$-equivariant and so passes to quotient spaces to give an inclusion of affine
varieties $X/\Gamma\hookrightarrow X/\!/\Gamma$.
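Although the construction is stated for complex affine varieties, its combinatorial content can already be seen for a finite group acting on a finite set. A toy sketch (all names are illustrative) with $\Gamma={\mathbb{Z}}/2$ acting on a three-point set: the ordinary quotient has $2$ points, while the extended quotient has $3$, because the fixed point is replaced by the two conjugacy classes of its stabilizer.

```python
# toy model: Gamma = {e, g} acting on X = {0, 1, 2}, with g swapping 0 and 1
X = [0, 1, 2]
e = {0: 0, 1: 1, 2: 2}
g = {0: 1, 1: 0, 2: 2}
Gamma = [e, g]

def compose(a, b):
    return {x: a[b[x]] for x in X}

def key(gam):
    return tuple(sorted(gam.items()))  # hashable form of a permutation

# Xtilde = {(gamma, x) : gamma.x = x}
Xtilde = [(key(gam), x) for gam in Gamma for x in X if gam[x] == x]

def act(alpha, pair):
    """alpha.(gamma, x) = (alpha gamma alpha^{-1}, alpha.x); here alpha^{-1} = alpha."""
    gam = dict(pair[0])
    return (key(compose(compose(alpha, gam), alpha)), alpha[pair[1]])

orbits = {frozenset(act(a, p) for a in Gamma) for p in Xtilde}
X_orbits = {frozenset(a[x] for a in Gamma) for x in X}

# ordinary quotient: 2 points; extended quotient: 3 points, since the fixed
# point 2 contributes the two conjugacy classes of its stabilizer {e, g}
assert len(X_orbits) == 2 and len(orbits) == 3
```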
This article will be dominated by extended quotients of the form
$T{/\!/} W$ or, more generally, extended quotients of the form $T^{\mathfrak s} {/\!/} W^{\mathfrak s}$.
\section{The group $W^{\mathfrak s}$ as a Weyl group}
\label{sec:Lp}
Let ${\mathcal G}$ be a connected reductive $p$-adic group over $F$, which is $F$-split and
has connected centre. Let $\mathcal T$ be an $F$-split maximal torus in ${\mathcal G}$. Let
$G$, $T$ denote the Langlands dual groups of $\mathcal{G}$, $\mathcal{T}$.
The principal series consists
of all $\mathcal G$-representations that are obtained by parabolic induction
from characters of $\mathcal T$.
We will suppose that the residual characteristic $p$ of $F$ satisfies the hypothesis
in \cite[p.~379]{Roc}, for all reductive subgroups $H \subset G$ containing $T$:
\begin{Condition}\label{CC}
If the root system $R (H,T)$ is irreducible,
then the restriction on the residual characteristic $p$ of $F$ is as follows:
\begin{itemize}
\item for type $A_n \quad p > n+1$
\item for types $B_n, C_n, D_n \quad p \neq 2$
\item for type $F_4 \quad p \neq 2,3$
\item for types $G_2, E_6 \quad p \neq 2,3,5$
\item for types $E_7, E_8 \quad p \neq 2,3,5,7.$
\end{itemize}
If $R (H,T)$ is reducible, one excludes primes attached to each of its
irreducible factors.
\end{Condition}
Since $R (H,T)$ is a subset of $R (G,T) \cong R ({\mathcal G},{\mathcal T})^\vee$,
these conditions are fulfilled when they hold for $R ({\mathcal G},{\mathcal T})$.
We denote the collection of all Bernstein components of ${\mathcal G}$ of the form
${\mathfrak s}=[{\mathcal T},\chi ]_{\mathcal G}$ by $\mathfrak B ({\mathcal G},{\mathcal T})$ and call these the Bernstein
components in the principal series. The union
\[
{\mathbf {Irr}} ({\mathcal G},{\mathcal T}) := \bigcup_{{\mathfrak s} \in \mathfrak B ({\mathcal G},{\mathcal T})} {\mathbf {Irr}} ({\mathcal G} )^{\mathfrak s}
\]
is by definition the set of all irreducible subquotients of principal series
representations of $\mathcal G$.
Choose a uniformizer $\varpi_F \in F$. There is a bijection $t \mapsto \nu$ between
points in $T$ and unramified characters of $\mathcal{T}$, determined by the relation
\[
\nu (\lambda(\varpi_F)) = \lambda (t)
\]
where $\lambda \in X_* (\mathcal{T}) = X^* (T)$.
The space ${\mathbf {Irr}} ({\mathcal T} )^{[{\mathcal T} ,\chi]_{\mathcal T}}$ is in bijection with $T$ via
$t \mapsto \nu \mapsto \chi \otimes \nu$. Hence Bernstein's torus $T^{\mathfrak s}$ is isomorphic
to $T$. However, because the isomorphism is not canonical and the action of the group
$W^{\mathfrak s}$ depends on it, we prefer to denote it $T^{\mathfrak s}$.
The uniformizer $\varpi_F$
gives rise to a group isomorphism ${\mathfrak o}_F^\times \times \mathbb Z \to F^\times$,
which sends $1 \in \mathbb Z$ to $\varpi_F$.
Let ${\mathcal T}_0$ denote the maximal compact subgroup of ${\mathcal T}$. As the latter is $F$-split,
\begin{equation}\label{eq:cT0}
{\mathcal T} \cong F^\times \otimes_{\mathbb Z} X_* ({\mathcal T}) \cong ({\mathfrak o}_F^\times \times \mathbb Z)
\otimes_{\mathbb Z} X_* ({\mathcal T}) = {\mathcal T}_0 \times X_* ({\mathcal T}) .
\end{equation}
Because ${\mathcal W}^G = W ({\mathcal G},{\mathcal T})$ acts trivially on the factor $F^\times$, these isomorphisms are
${\mathcal W}^G$-equivariant if we endow the right hand side with the diagonal ${\mathcal W}^G$-action.
Thus \eqref{eq:cT0} determines a ${\mathcal W}^G$-equivariant isomorphism of character groups
\begin{equation}\label{eq:split}
{\mathbf {Irr}} ({\mathcal T}) \cong {\mathbf {Irr}} ({\mathcal T}_0) \times {\mathbf {Irr}} (X_* ({\mathcal T})) = {\mathbf {Irr}} ({\mathcal T}_0) \times X_{{\rm unr}}({\mathcal T}) .
\end{equation}
\begin{lem}\label{lem:cBernstein}
Let $\chi$ be a character of ${\mathcal T}$, and let
\begin{align}\label{artin}
{\mathfrak s} = [{\mathcal T},\chi]_{{\mathcal G}}
\end{align}
be the inertial class of the pair $({\mathcal T},\chi)$.
Then ${\mathfrak s}$ determines, and is determined by, the ${\mathcal W}^G$-orbit of a smooth morphism
\[
c^{\mathfrak s} \colon {\mathfrak o}_F^\times \to T.
\]
\end{lem}
\begin{proof}
There is a natural isomorphism
\[
{\mathbf {Irr}} ({\mathcal T}) = {\rm Hom} (F^\times \otimes_{\mathbb{Z}} X_* ({\mathcal T}),\mathbb C^\times) \cong
{\rm Hom} (F^\times ,\mathbb C^\times \otimes_\mathbb{Z} X^* ({\mathcal T})) = {\rm Hom} (F^\times ,T) .
\]
Together with \eqref{eq:split} we obtain isomorphisms
\begin{align*}
& {\mathbf {Irr}} ({\mathcal T}_0) \cong {\rm Hom} ({\mathfrak o}_F^\times ,T) , \\
& X_{{\rm unr}}({\mathcal T}) \cong {\rm Hom} (\mathbb{Z} ,T) = T .
\end{align*}
Let $\hat \chi \in {\rm Hom} (F^\times ,T)$ be the image of $\chi$ under these isomorphisms.
By the above, the restriction of $\hat \chi$ to ${\mathfrak o}_F^\times$ is unchanged by
unramified twists, so we take that restriction as $c^{\mathfrak s}$. Conversely, by \eqref{eq:split}
$c^{\mathfrak s}$ determines $\chi$ up to unramified twists. Two elements of ${\mathbf {Irr}} ({\mathcal T})$ are
${\mathcal G}$-conjugate if and only if they are ${\mathcal W}^G$-conjugate, so, in view of \eqref{artin},
the ${\mathcal W}^G$-orbit of $c^{\mathfrak s}$ carries exactly the same information as ${\mathfrak s}$.
\end{proof}
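For instance, if ${\mathcal G} = {\mathcal T} = F^\times$, then $T = \mathbb C^\times$, ${\mathcal W}^G = 1$ and
$\hat \chi = \chi$. The inertial class ${\mathfrak s} = [{\mathcal T},\chi ]_{\mathcal T}$ consists of all twists
$\chi \otimes \nu$ with $\nu$ unramified; these agree on ${\mathfrak o}_F^\times$ and differ only
in their value at $\varpi_F$. Thus ${\mathfrak s}$ is encoded exactly by
\[
c^{\mathfrak s} = \chi \big|_{{\mathfrak o}_F^\times} \colon {\mathfrak o}_F^\times \to \mathbb C^\times = T ,
\]
in agreement with the lemma.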
We define
\begin{equation}
H = H^{\mathfrak s} := Z_G({\rm im\,} c^{\mathfrak s}) ,
\end{equation}
writing $H^{\mathfrak s}$ when we wish to stress the dependence of $H$ on ${\mathfrak s}$.
The following crucial result is due to Roche, see \cite[pp.~394--395]{Roc}.
\begin{lem} \label{lem:Roche}
The group $H^{\mathfrak s}$ is connected, and the finite group $W^{\mathfrak s}$ is the Weyl group of $H^{\mathfrak s}$:
\[
W^{\mathfrak s} = {\mathcal W}^{H^{\mathfrak s}}
\]
\end{lem}
\section{Comparison of different parameters}
\label{sec:Borel}
\subsection{Varieties of Borel subgroups}
We clarify some issues with different varieties of Borel subgroups and different
kinds of parameters arising from them.
Let $\mathbf{W}_F$ denote the Weil group of $F$, let $\mathbf{I}_F$ be the inertia
subgroup of $\mathbf{W}_F$.
Let $\mathbf{W}_F^{{\rm der}}$ denote the closure of the commutator subgroup of
$\mathbf{W}_F$, and write $\mathbf{W}_F^{{\rm ab}} = \mathbf{W}_F/\mathbf{W}^{{\rm der}}_F$.
The group of units in $\mathfrak{o}_F$ will be denoted ${\mathfrak o}_F^\times$.
Next, we consider conjugacy classes in $G$ of continuous morphisms
\[
\Phi\colon \mathbf{W}_F\times {\rm SL}_2 (\mathbb{C}) \to G
\]
which are rational on ${\rm SL}_2 (\mathbb{C})$ and such that $\Phi(\mathbf{W}_F)$
consists of semisimple elements in $G$.
Let $B_2$ be the upper triangular Borel subgroup in ${\rm SL}_2 (\mathbb{C})$.
Let $\mathcal B^{\Phi (\mathbf{W}_F \times B_2)}$ denote the variety of Borel
subgroups of $G$ containing $\Phi(\mathbf{W}_F \times B_2)$.
The variety $\mathcal B^{\Phi (\mathbf{W}_F \times B_2)}$ is non-empty if and
only if $\Phi \big|_{\mathbf{W}_F}$ factors through $\mathbf W_F^{{\rm ab}}$, see \cite[\S 4.2]{Reed}.
In that case we may, via the Artin reciprocity map, view the domain of $\Phi$ as
$F^{\times} \times {\rm SL}_2 (\mathbb{C})$:
\[
\Phi\colon F^{\times} \times {\rm SL}_2 (\mathbb{C}) \to G.
\]
In Section \ref{subsec:enp} we show
how such a Langlands parameter $\Phi$ can be enhanced with a parameter $\rho$.
We start with the following data: a Bernstein component ${\mathfrak s} = [{\mathcal T}, \chi]_{{\mathcal G}}$ and an $L$-parameter
\[
\Phi \colon F^{\times} \times {\rm SL}_2 (\mathbb C) \to G
\]
for which
\[
\Phi \big|_{{\mathfrak o}_F^\times} = c^{\mathfrak s}.
\]
These data determine the following elements and groups:
\begin{equation}\label{H}
\begin{aligned}
& t := \Phi(\varpi_F, I),\\
& x := \Phi \left( 1, \matje{1}{1}{0}{1} \right) ,\\
& M := {\rm Z}_H (t) .
\end{aligned}
\end{equation}
We note that $\Phi ({\mathfrak o}^{\times}_F) \subset {\rm Z}(H)$ and that $t$ commutes with
$\Phi ({\rm SL}_2 (\mathbb C)) \subset M$.
For $\alpha \in \mathbb{C}^{\times}$ we define the following matrix in ${\rm SL}_2 (\mathbb{C})$:
\[
Y_{\alpha} = \matje{\alpha}{0}{0}{\alpha^{-1}} .
\]
For any $q^{1/2} \in \mathbb C^\times$ the element
\begin{equation}\label{eq:S.12}
t_q := t \, \Phi \big( 1, Y_{q^{1/2}} \big)
\end{equation}
satisfies the familiar relation $t_q x t_q^{-1} = x^q$. Indeed
\begin{equation}\label{eq:tqx} \begin{split}
t_q x t_q^{-1} & = t \, \Phi \big( 1, Y_{q^{1/2}} \big) \Phi \big( 1, \matje{1}{1}{0}{1} \big)
\Phi \big( 1, Y_{q^{1/2}}^{-1} \big) t^{-1} \\
& = t \, \Phi \big( 1, Y_{q^{1/2}} \matje{1}{1}{0}{1} Y_{q^{1/2}}^{-1} \big) t^{-1} \\
& = t \, \Phi \big( 1, \matje{1}{q}{0}{1} \big) t^{-1} = x^q .
\end{split} \end{equation}
Notice that $\Phi ({\mathfrak o}^{\times}_F)$ lies in every Borel subgroup of $H$, because it
is contained in ${\rm Z}(H)$. We abbreviate ${\rm Z}_H (\Phi) =
{\rm Z}_H ({\rm im\,} \Phi)$ and similarly for other groups.
\begin{lem}\label{inc}
The inclusion map
${\rm Z}_H (\Phi) \to {\rm Z}_H (t,x)$
is a homotopy equivalence.
\end{lem}
\begin{proof}
Our proof depends on \cite[Prop. 3.7.23]{CG}. There is a Levi decomposition
\[
{\rm Z}_{H} (x) = {\rm Z}_{H} (\Phi ({\rm SL}_2 (\mathbb C))) U_x
\]
where ${\rm Z}_{H} (\Phi ({\rm SL}_2 (\mathbb C)))$ is a maximal reductive subgroup of ${\rm Z}_H(x)$ and
$U_x$ is the unipotent radical of ${\rm Z}_H(x)$. Therefore
\begin{equation}\label{eq:S.1}
{\rm Z}_{H} (t,x) = {\rm Z}_{H} (\Phi) {\rm Z}_{U_x}(t)
\end{equation}
We note that ${\rm Z}_{U_x}(t) \subset U_x$ is contractible, because it is a unipotent complex group.
It follows that
\begin{equation}\label{eq:S.10}
{\rm Z}_{H} (\Phi) \to {\rm Z}_{H} (t,x)
\end{equation}
is a homotopy equivalence.
\end{proof}
If a group $A$ acts on a variety $X$, let ${\mathcal R}(A, X)$ denote the set of irreducible
representations of $A$ appearing in the homology $H_*(X)$.
The variety of Borel subgroups of $G$ which contain $\Phi(\mathbf{W}_F \times B_2)$
will be denoted ${\mathcal B}_G^{\Phi(\mathbf{W}_F \times B_2)}$ and the
variety of Borel subgroups of $H$ containing $\{t,x\}$ will be denoted ${\mathcal B}^{t,x}_H$.
Lemma~\ref{inc} allows us to define
\[
A: = \pi_0( {\rm Z}_H (\Phi)) = \pi_0( {\rm Z}_H (t,x)).
\]
\begin{thm}\label{Rgroup}
We have
\[
{\mathcal R}(A, {\mathcal B}_G^{\Phi(\mathbf{W}_F \times B_2)}) = {\mathcal R}(A, {\mathcal B}^{t,x}_H).
\]
\end{thm}
\begin{proof}
This statement is equivalent to \cite[Lemma 4.4.1]{Reed} with a minor
adjustment in his proof. To translate into Reeder's notation, write
\[
t_q = \tau, \quad \Phi \big( 1, Y_{q^{1/2}} \big) = \tau_u, \quad x = u, \quad t = s.
\]
The adjustment consists in the observation that a Borel subgroup $B$ of $H$
contains $\{x, t_q, \Phi ( 1, Y_{q^{1/2}} )\}$ if and only if $B$ contains
$\{x, t, \Phi ( 1, Y_{q^{1/2}} )\}$. This is because $t = t_q \, \Phi ( 1, Y_{q^{1/2}} )^{-1}$.
Therefore, in the conclusion of his proof, ${\mathcal B}^{\tau, u}_H$, which is ${\mathcal B}_H^{t_q, x}$,
can be replaced by ${\mathcal B}_H^{t,x}$.
\end{proof}
In the following sections we will make use of two different but related
kinds of parameters.
\vspace{2mm}
\subsection{Enhanced Langlands parameters}
\label{subsec:enp}
Recall that ${\mathbf W}_F$ denotes the Weil group of $F$. Via the Artin reciprocity map,
a Langlands parameter $\Phi$ for the principal
series of ${\mathcal G}$ factors through $F^{\times} \times {\rm SL}_2(\mathbb C)$:
\begin{align}\label{Phi}
\Phi : {\mathbf W}_F \times {\rm SL}_2(\mathbb C) \to F^{\times} \times {\rm SL}_2(\mathbb C) \to G.
\end{align}
Such a parameter is \emph{enhanced} in the following way. Let $\rho$ be an irreducible
representation of the component group of the centralizer of the image of $\Phi$:
\[
\rho \in {\mathbf {Irr}} \, \pi_0 ( Z_G({\rm im\,} \, \Phi)).
\]
The pair $(\Phi, \rho)$ will be called an \emph{enhanced Langlands parameter}.
We rely on Reeder's
classification of the constituents of a given principal series representation of ${\mathcal G}$,
see \cite[Theorem 1, pp.~101--102]{Reed}. Reeder's theorem amounts to a
local Langlands correspondence for the principal series
of ${\mathcal G}$. Reeder uses only enhanced Langlands parameters with a particular geometric origin,
namely those which occur in the homology of a certain variety of Borel subgroups of $G$.
Let $B_2$ denote the standard Borel subgroup of ${\rm SL}_2(\mathbb C)$.
For a Langlands parameter as in \eqref{Phi}, the variety of Borel subgroups
$\mathcal B_G^{\Phi (\mathbf W_F \times B_2)}$ is nonempty, and the centralizer
${\rm Z}_G (\Phi)$ of the image of $\Phi$ acts on it. Hence the group of components
$\pi_0 ({\rm Z}_G (\Phi))$ acts on the homology $H_* \big( \mathcal B_G^{\Phi (\mathbf W_F
\times B_2)} ,\mathbb C \big)$. We call an irreducible representation $\rho$ of
$\pi_0 ({\rm Z}_G (\Phi))$ \emph{geometric} if
\[
\rho\in{\mathcal R}\left(\pi_0 ({\rm Z}_G (\Phi)),\mathcal B_G^{\Phi (\mathbf W_F \times B_2)}\right).
\]
Consider the set of enhanced Langlands parameters $(\Phi,\rho)$ for which $\rho$ is geometric.
The group $G$ acts on these parameters by
\begin{equation}\label{eq:defKLRparameter}
g \cdot (\Phi,\rho) = (g \Phi g^{-1}, \rho \circ \mathrm{Ad}_g^{-1})
\end{equation}
and we denote the corresponding equivalence class by $[\Phi,\rho ]_G$.
\begin{defn}\label{Psi}
Let $\Psi(G)_{{\rm en}}^{\mathfrak s}$ denote the set of $H$-conjugacy classes of enhanced
parameters $(\Phi, \rho)$ for ${\mathcal G}$ such that
\begin{itemize}
\item $\rho$ is geometric;
\item $\Phi \big|_{{\mathfrak o}_F^\times} = c^{\mathfrak s}$.
\end{itemize}
\end{defn}
Let us define a topology on $\Psi(G)_{{\rm en}}^{\mathfrak s}$.
For any $(\Phi, \rho) \in \Psi(G)_{{\rm en}}^{\mathfrak s}$ the element
$x = \Phi \big( 1, \matje{1}{1}{0}{1} \big) \in H$
is unipotent and $t = \Phi (\varpi_F,I) \in H$ is semisimple. By the Jacobson--Morozov Theorem
the $H$-conjugacy class of $\Phi$ is determined completely by the $H$-conjugacy class of
$(t,x)$, see \cite[\S 4.2]{Reed}.
We endow the finite set $\mathfrak U^{\mathfrak s}$ of unipotent conjugacy classes in $H$ with the discrete
topology and we regard the space of semisimple conjugacy classes in $H$ as the algebraic variety
$T^{\mathfrak s} / W^{\mathfrak s}$. On $T^{\mathfrak s} / W^{\mathfrak s} \times \mathfrak U^{\mathfrak s}$ we take the product topology and we
endow $\Psi(G)_{{\rm en}}^{\mathfrak s}$ with the pullback topology from $T^{\mathfrak s} / W^{\mathfrak s} \times \mathfrak U^{\mathfrak s}$,
with respect to the map $(\Phi,\rho) \mapsto (t,x)$.
Notice that for this topology $\rho$ plays no role: two elements
of $\Psi (G)^{\mathfrak s}_{{\rm en}}$ with the same $\Phi$ are inseparable.
\begin{thm}\cite{Reed}\label{Reed}
Suppose that the residual characteristic of $F$ satisfies Condition \ref{CC}.
\begin{enumerate}
\item There is a canonical continuous bijection
\[
\Psi(G)_{{\rm en}}^{\mathfrak s} \to {\mathbf {Irr}} ({\mathcal G})^{\mathfrak s} .
\]
\item This bijection maps the set of enhanced Langlands parameters $(\Phi,\rho)$ for which
$\Phi (F^\times)$ is bounded onto ${\mathbf {Irr}} ({\mathcal G})^{\mathfrak s} \cap {\mathbf {Irr}} ({\mathcal G})_{\mathrm{temp}}$.
\item If $\sigma \in {\mathbf {Irr}}({\mathcal G})^{\mathfrak s}$ corresponds to $(\Phi,\rho)$, then the cuspidal support
$\pi^{\mathfrak s} (\sigma) \in T^{\mathfrak s} / W^{\mathfrak s}$, considered as a semisimple conjugacy class in $H^{\mathfrak s}$,
equals $\Phi \big( \varpi_F, Y_{q^{1/2}} \big)$.
\end{enumerate}
\end{thm}
\begin{proof} (1) The canonical bijection is Reeder's classification of the constituents
of a given principal series representation, see \cite[Theorem 1, pp.~101--102]{Reed}.
First he associates to
$\Phi$ a finite length ``standard'' representation of ${\mathcal G}$, say $M_{t,x}$, with a unique maximal
semisimple quotient $V_{t,x}$. Then $(\Phi,\rho)$ is mapped to an irreducible constituent of
$V_{t,x}$. To check that the bijection is continuous with respect to the above topology, it
suffices to see that $M_{t,x}$ depends continuously on $t$ when $x$ is fixed. This property
is clear from \cite[\S 3.5]{Reed}.\\
(2) Reeder's work is based on that of
Kazhdan--Lusztig, and it is known from \cite[\S 8]{KL} that the tempered ${\mathcal G}$-representations
correspond precisely to the set of bounded enhanced L-parameters in the setting of \cite{KL}.
As the constructions in \cite{Reed} preserve temperedness, this characterization remains valid
in Reeder's setting.\\
(3) The element $\Phi \big( \varpi_F, Y_{q^{1/2}} \big) \in H$ is the same as $t_q$
in \eqref{eq:S.12}, up to $H$-conjugacy. In the setting of Kazhdan--Lusztig,
it is known from \cite[5.12 and Theorem 7.12]{KL} that property (3) holds. As in (2), this
is respected by the constructions of Reeder that lead to (1).
\end{proof}
\subsection{Affine Springer parameters}
\label{par:affSpringer}
As before, suppose that $t \in H$ is semisimple and that $x \in {\rm Z}_H (t)$ is unipotent.
Then ${\rm Z}_H (t,x)$ acts on $\mathcal B_H^{t,x}$ and $\pi_0 ({\rm Z}_H (t,x))$ acts on the
homology of this variety. In this setting we say that $\rho_1 \in {\mathbf {Irr}} \big(
\pi_0 ({\rm Z}_H (t,x)) \big)$ is \emph{geometric} if it belongs to
${\mathcal R}\left(\pi_0 ({\rm Z}_H (t,x)),\mathcal B_H^{t,x} \right)$.
For the affine Springer parameters it does not matter whether we
consider the total homology or only the homology in top degree. Indeed, it follows
from \cite[bottom of page~296 and Remark 6.5]{Shoji} that any irreducible representation
$\rho_1$ which appears in
$H_* \big( \mathcal B_H^{t,x} ,\mathbb C \big)$, already appears in the top homology of this
variety. Therefore, we may refine Theorem~\ref{Rgroup} as follows:
\begin{thm}\label{Rgroup_refined}
\[
{\mathcal R}(A, {\mathcal B}_G^{\Phi(\mathbf{W}_F \times B_2)}) = {\mathcal R}^{{\rm top}}(A, {\mathcal B}^{t,x}_H),
\]
where ${\rm top}$ refers to the highest degree in which the homology is nonzero,
namely the real dimension of $\mathcal B_H^{t,x}$.
\end{thm}
We call such triples $(t,x,\rho_1)$ affine Springer parameters for $H$,
because they appear naturally in the representation theory of the affine Weyl group
associated to $H$. The group $H$ acts on such parameters by conjugation, and we
denote the conjugacy classes by $[t,x,\rho_1]_H$.
\begin{defn}
The set of $H$-conjugacy classes of affine Springer parameters will be denoted $\Psi(H)_{{\rm aff}}$.
\end{defn}
Notice that the projection on the first coordinate is a canonical map
$\Psi (H)_{\rm aff} \to T / {\mathcal W}^H$. We endow $\Psi(H)_{{\rm aff}}$ with a topology in the same way
as we did for $\Psi(G)_{{\rm en}}^{\mathfrak s}$, as the pullback of the product topology on
$T / {\mathcal W}^H \times \mathfrak U^{\mathfrak s}$ via the map $[t,x,\rho_1]_H \mapsto (t,x)$.
For use in Theorem \ref{thm:bijection} we recall the parametrization of
irreducible representations of $X^* (T) \rtimes \mathcal W^H$ from \cite{Kat}.
Let $t \in T$ and let $x \in M^\circ = {\rm Z}_H (t)^\circ$ be unipotent.
Kato defines an action of $X^* (T) \rtimes \mathcal W^H$ on the top homology
$H_{d(x)}(\mathcal B^{t,x}_H,\mathbb C)$, which commutes with the action of
${\rm Z}_H (t,x)$ induced by conjugation of Borel subgroups.
By \cite[Proposition 6.2]{Kat} there is an isomorphism of
$X^* (T) \rtimes \mathcal W^H$-representations
\begin{equation}\label{eq:indKato}
H_{d(x)}(\mathcal B^{t,x}_H,\mathbb C) \cong
\mathrm{ind}_{X^* (T) \rtimes {\mathcal W}^{M^\circ}}^{X^* (T) \rtimes {\mathcal W}^H}
\big( \mathbb C_t \otimes H_{d(x)}(\mathcal B^{x}_{M^\circ},\mathbb C) \big) .
\end{equation}
Here $H_{d(x)}(\mathcal B^{x}_{M^\circ},\mathbb C)$ is a representation occurring in
the Springer correspondence for ${\mathcal W}^{M^\circ}$, promoted to a representation of
$X^* (T) \rtimes {\mathcal W}^H$ by letting $X^* (T)$ act trivially. Hence \eqref{eq:indKato}
has central character ${\mathcal W}^H t$. We note that the underlying vector space of this
representation does not depend on $t$, and that this determines an algebraic family of
$X^* (T) \rtimes \mathcal W^H$-representations parametrized by $T^{{\mathcal W}^{M^\circ}}$.
Let $\rho_1 \in {\mathbf {Irr}} \big( \pi_0 ({\rm Z}_H (t,x)) \big)$. By \cite[Theorem 4.1]{Kat} the
$X^* (T) \rtimes \mathcal W^H$-representation
\begin{equation}\label{eq:KatoMod}
\mathrm{Hom}_{\pi_0 ({\rm Z}_H (t,x))} \big( \rho_1, H_{d(x)}(\mathcal B^{t,x}_H,\mathbb C) \big)
\end{equation}
is either irreducible or zero. Moreover every irreducible representation of
$X^* (T) \rtimes \mathcal W^H$ is obtained in this way, and the data $(t,x,\rho_1)$
are unique up to $H$-conjugacy. So Kato's results provide a natural bijection
\begin{equation}\label{eq:affSpringer}
\Psi (H)_{\rm aff} \to {\mathbf {Irr}} (X^* (T) \rtimes {\mathcal W}^H ) .
\end{equation}
This generalizes the Springer correspondence for finite Weyl groups, which can be
recovered by considering the representations on which $X^* (T)$ acts trivially.
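Concretely, take $t = 1$, so that $M^\circ = H$ and ${\mathcal W}^{M^\circ} = {\mathcal W}^H$. Then the
induction in \eqref{eq:indKato} is trivial, $X^* (T)$ acts trivially on
$H_{d(x)}(\mathcal B^{x}_H,\mathbb C)$, and \eqref{eq:KatoMod} reduces to
\[
\mathrm{Hom}_{\pi_0 ({\rm Z}_H (x))} \big( \rho_1, H_{d(x)}(\mathcal B^{x}_H,\mathbb C) \big) ,
\]
which is exactly (in our normalization) the Springer correspondence for the finite
Weyl group ${\mathcal W}^H$.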
In \cite{KL,Reed} there are some indications that the above kinds of parameters are
essentially equivalent. The next result allows us to make this precise
in the necessary generality.
\begin{thm}\label{compareParameters}
Let ${\mathfrak s}$ be a Bernstein component in the principal series, associate
$c^{\mathfrak s} \colon {\mathfrak o}_F^\times \to T$
to it as in Lemma \ref{lem:cBernstein} and let $H$ be as in (\ref{H}).
There are natural bijections between $H$-equivalence classes of:
\begin{itemize}
\item enhanced Langlands parameters $(\Phi, \rho)$ for ${\mathcal G}$,
with $\rho$ geometric and $\Phi \big|_{{\mathfrak o}_F^\times} = c^{\mathfrak s}$;
\item affine Springer parameters for $H$.
\end{itemize}
In other words we have a homeomorphism
\[
\Psi(G)_{{\rm en}}^{\mathfrak s} \simeq \Psi(H)_{{\rm aff}}.
\]
\end{thm}
\begin{proof}
An $L$-parameter gives rise to the ingredients $t,x$ in an affine Springer parameter
in the following way. For an $L$-parameter
\[
\Phi \colon F^{\times} \times {\rm SL}_2(\mathbb C) \to G
\]
we set $t = \Phi(\varpi_F, I)$ and $x = \Phi \big( 1, \matje{1}{1}{0}{1} \big)$.
Conversely, we work with the Jacobson--Morozov Theorem \cite[p. 183]{CG}.
Let $x$ be a unipotent element in $M^\circ$. There exist rational homomorphisms
\begin{equation} \label{eqn:gamt}
\gamma \colon {\rm SL}_2 (\mathbb{C}) \to M^\circ \quad \text{with} \quad
\gamma \big( \matje{1}{1}{0}{1} \big) = x ,
\end{equation}
see \cite[\S 3.7.4]{CG}. Any two such homomorphisms $\gamma$ are conjugate by
elements of ${\rm Z}_{M^\circ}(x)$.
Define the Langlands parameter $\Phi$ as follows:
\begin{equation}\label{eqn:Phi}
\Phi \colon F^{\times} \times {\rm SL}_2 (\mathbb{C}) \to G, \qquad
(u\varpi_F^n,Y) \mapsto c^{\mathfrak s} (u) \cdot t^n\cdot \gamma(Y)
\end{equation}
for all $u \in {\mathfrak o}_F^\times, \; n \in \mathbb{Z},\; Y \in {\rm SL}_2 (\mathbb{C})$.
Note that the definition of $\Phi$ uses the appropriate data:
the semisimple element $t \in T$, the map $c^{\mathfrak s}$, and the
homomorphism $\gamma$ (which depends on $x$).
Since $x$ determines $\gamma$ up to $M^\circ$-conjugation, $c^{\mathfrak s},x$ and $t$
determine $\Phi$ up to conjugation by their common centralizer in $G$.
Notice also that one can recover $c^{\mathfrak s}, x$ and $t$ from $\Phi$ and that
\begin{equation}\label{eq:hPhi}
h (\alpha): = \Phi (1, Y_\alpha)
\end{equation}
defines a cocharacter $\mathbb C^\times \to T$.
To complete $\Phi$ or $(t,x)$ to a parameter of the appropriate kind,
we must add an irreducible representation $\rho$ or $\rho_1$.
Then the bijectivity follows from Theorem~\ref{Rgroup_refined}.
It is clear that the above correspondence between $\Phi$ and $(t,x)$ is
continuous in both directions. In view of the chosen topologies on
$\Psi(G)_{{\rm en}}^{\mathfrak s}$ and $\Psi(H)_{{\rm aff}}$, this implies that the
bijection is a homeomorphism.
\end{proof}
\section{Structure theorem}
\label{sec:unip}
Let ${\mathfrak s} \in \mathfrak B ({\mathcal G},{\mathcal T})$ and construct $c^{\mathfrak s}$ as in
Lemma \ref{lem:cBernstein}. We note that the set of enhanced Langlands parameters
$\Psi(G)_{\rm en}^{\mathfrak s}$ is naturally labelled by the unipotent classes in $H$:
\begin{equation}
\Psi(G)_{\rm en}^{{\mathfrak s},[x]} := \big\{ (\Phi,\rho) \in \Psi(G)_{\rm en}^{\mathfrak s} \mid
\Phi \big( 1, \matje{1}{1}{0}{1} \big) \text{ is conjugate to } x \big\} .
\end{equation}
Via Theorem \ref{compareParameters} and \eqref{eq:affSpringer} the sets $\Psi(G)_{\rm en}^{\mathfrak s}$
and ${\mathbf {Irr}} (X^* (T) \rtimes {\mathcal W}^H)$ are naturally in
bijection with $\Psi (H)_{\rm aff}$. In this way we can associate to any of these
parameters a unique unipotent class in $H$:
\begin{equation} \label{eq:labelling}
\begin{aligned}
& {\mathbf {Irr}} ({\mathcal G} )^{\mathfrak s} && = \; \bigcup\nolimits_{[x]} {\mathbf {Irr}} ({\mathcal G} )^{{\mathfrak s},[x]} , \\
& \Psi (H)_{\rm aff} && = \; \bigcup\nolimits_{[x]} \Psi (H)_{\rm aff}^{[x]} ,\\
& {\mathbf {Irr}} (X^* (T) \rtimes {\mathcal W}^H) &&
= \; \bigcup\nolimits_{[x]} {\mathbf {Irr}} (X^* (T) \rtimes {\mathcal W}^H)^{[x]} .
\end{aligned}
\end{equation}
As ${\mathbf {Irr}} ({\mathcal G} )^{\mathfrak s} = {\mathbf {Irr}} ({\mathcal H}^{\mathfrak s})$ and ${\mathbf {Irr}} (X^* (T) \rtimes {\mathcal W}^H) =
{\mathbf {Irr}} (\mathbb C [X^* (T) \rtimes {\mathcal W}^H])$, these spaces are endowed with the Jacobson topology
from the respective algebras ${\mathcal H}^{\mathfrak s}$ and $\mathbb C [X^* (T) \rtimes {\mathcal W}^H]$.
Recall from Section \ref{sec:extquot} that
\[
\widetilde{T^{\mathfrak s}} = \{ (w,t) \in W^{\mathfrak s} \times T^{\mathfrak s} \mid w t = t \}
\]
and $T^{\mathfrak s} /\!/ W^{\mathfrak s} = \widetilde{T^{\mathfrak s}} / W^{\mathfrak s}$. We endow $\widetilde{T^{\mathfrak s}}$ with
the product of the Zariski topology on $T^{\mathfrak s} \cong T$ and the discrete topology on
$W^{\mathfrak s} = {\mathcal W}^H$. Then $T^{\mathfrak s} /\!/ W^{\mathfrak s}$ with the quotient topology from $\widetilde{T^{\mathfrak s}}$
becomes a disjoint union of algebraic varieties. The following result enables us to transfer
the labellings \eqref{eq:labelling} to $T^{\mathfrak s} {/\!/} W^{\mathfrak s}$.
\begin{thm} \label{thm:bijection}
There exists a bijection
$\tilde{\mu}^{{\mathfrak s}} : T^{\mathfrak s} {/\!/} W^{\mathfrak s} \to {\mathbf {Irr}} (X^* (T) \rtimes W^{\mathfrak s})$ such that:
\begin{itemize}
\item[(1)] $\tilde{\mu}^{{\mathfrak s}}$ respects the projections to $T^{\mathfrak s} / W^{\mathfrak s}$;
\item[(2)] for every unipotent class $x$ of $H$, the inverse image
$(\tilde{\mu}^{\mathfrak s} )^{-1} {\mathbf {Irr}} (X^* (T) \rtimes W^{\mathfrak s})^{[x]}$ is a union of connected
components of $T^{\mathfrak s} {/\!/} W^{\mathfrak s}$.
\end{itemize}
\end{thm}
\begin{proof}
Consider the ring $R (X^* (T^{\mathfrak s}) \rtimes W^{\mathfrak s})$ of virtual finite dimensional complex
representations of $X^* (T^{\mathfrak s}) \rtimes W^{\mathfrak s}$. Its canonical $\mathbb Z$-basis is
\[
{\mathbf {Irr}} (X^* (T) \rtimes W^{\mathfrak s}) = \{ \tau (t,x,\rho_1) : (t,x,\rho_1) \in \Psi (H)_{\rm aff} \} .
\]
The $\mathbb Q$-vector space
\[
R_\mathbb Q (X^* (T) \rtimes W^{\mathfrak s}) := \mathbb Q \otimes_{\mathbb Z} R (X^* (T) \rtimes W^{\mathfrak s})
\]
possesses another useful basis coming from $T^{\mathfrak s} {/\!/} W^{\mathfrak s}$. Given $w \in W^{\mathfrak s}$,
let $C_w$ be the cyclic subgroup it generates. We define a character $\chi_w$ of $C_w$
by the formula
\[
\chi_w (w^n ) = \exp (2 \pi i n / |C_w|).
\]
For any $t \in (T^{\mathfrak s})^w$ we obtain a character $\mathbb C_t \otimes \chi_w$ of
$X^* (T^{\mathfrak s}) \rtimes C_w$. We induce that to a $X^* (T^{\mathfrak s}) \rtimes W^{\mathfrak s}$-representation
\[
\chi (w,t) := \mathrm{ind}^{X^* (T) \rtimes W^{\mathfrak s}}_{X^* (T) \rtimes C_w}
( \mathbb C_t \otimes \chi_w )
\]
with central character $W^{\mathfrak s} t$. The representation $\chi (w,t)$ is irreducible whenever
$t$ is a generic point of $(T^{\mathfrak s})^w$, which in this case means simply that $t$ is not fixed
by any element of $W^{\mathfrak s} \setminus C_w$. It is easy to see that $\chi (w,t) \cong \chi (w',t')$
if and only if $(w,t)$ and $(w',t')$ are $W^{\mathfrak s}$-associate (which means that they determine
the same point of $T^{\mathfrak s} {/\!/} W^{\mathfrak s}$). Moreover, it follows from Clifford theory and Artin's
theorem \cite[Theorem 17]{Ser} that
\[
\{ \chi (w,t) : [w,t] \in T^{\mathfrak s} {/\!/} W^{\mathfrak s} \}
\]
is a $\mathbb Q$-basis of $R_\mathbb Q (X^* (T) \rtimes W^{\mathfrak s})$, see \cite[(40)]{Sol2}.
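As a small illustration (not needed in the sequel), suppose that $W^{\mathfrak s} = \{1,w\}$
acts on $T^{\mathfrak s} \cong \mathbb C^\times$ by $w \cdot t = t^{-1}$, so that
$X^* (T) \rtimes W^{\mathfrak s} \cong \mathbb Z \rtimes \mathbb Z / 2 \mathbb Z$ is the infinite
dihedral group. Then $(T^{\mathfrak s})^w = \{1,-1\}$ and $T^{\mathfrak s} {/\!/} W^{\mathfrak s}$ is the disjoint
union of $T^{\mathfrak s} / W^{\mathfrak s}$ and the two isolated points $[w,1]$ and $[w,-1]$. Here
$C_w = W^{\mathfrak s}$ and $\chi_w$ is the sign character, so $\chi (w,\pm 1)$ is the
one-dimensional representation on which $X^* (T)$ acts via $t = \pm 1$ and $w$ acts as $-1$,
while for generic $t$ the two-dimensional induced representation $\chi (1,t)$ is irreducible.
At $t = 1$, for example, $\chi (1,1)$ is the direct sum of the trivial and the sign
characters of $W^{\mathfrak s}$, inflated to $X^* (T) \rtimes W^{\mathfrak s}$, so
\[
\{ \chi (1,1), \, \chi (w,1) \} = \{ \mathrm{triv} \oplus \mathrm{sgn}, \; \mathrm{sgn} \}
\]
is indeed a $\mathbb Q$-basis (in fact a $\mathbb Z$-basis) of the span of the irreducible
representations with central character $W^{\mathfrak s} \cdot 1$.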
Now we construct the desired map $\tilde{\mu}^{\mathfrak s}$ by a recursive procedure. Take
$0 \leq d \leq \dim_\mathbb C (T)$. For $w \in W^{\mathfrak s}$, write
\[
(T^{\mathfrak s})^w := \{t \in T^{\mathfrak s} : wt = t\}.
\]
Suppose that we already have defined $\tilde{\mu}^{\mathfrak s}$
on all connected components of $T^{\mathfrak s} {/\!/} W^{\mathfrak s}$ of dimension $<d$, and that
\begin{equation}\label{eq:span}
\begin{split}
\text{span } \tilde{\mu}^{\mathfrak s} \big( \{ [w,t] \in T^{\mathfrak s} {/\!/} W^{\mathfrak s} :
\dim (T^{\mathfrak s})^w < d \} \big) \; \cap \\
\text{span } \{ \chi (w,t) : (w,t) \in \widetilde{T^{\mathfrak s}}, \dim (T^{\mathfrak s})^w \geq d \} \; = 0 .
\end{split}
\end{equation}
Fix $t_1 \in T^{\mathfrak s}$. Since \eqref{eq:indKato} has central character $W^{\mathfrak s} t$,
both $\{ \tau (t_1,x,\rho_1) : (t_1,x,\rho_1) \in \Psi (H)_{\rm aff} \}$ and
$\{ \chi (w,t_1) : [w,t_1] \in T^{\mathfrak s} {/\!/} W^{\mathfrak s}\}$ are bases of the finite dimensional
$\mathbb Q$-vector space $R_\mathbb Q (X^* (T) \rtimes W^{\mathfrak s})_{W^{\mathfrak s} t_1}$ spanned by the
$X^* (T) \rtimes W^{\mathfrak s}$-representations which admit the central character $W^{\mathfrak s} t_1$.
From this and the assumption \eqref{eq:span} we see
that we can find, for every $w \in W^{\mathfrak s}$ fixing $t_1$, an irreducible constituent
$\tilde{\mu}^{\mathfrak s} ([w,t_1])$ of $\chi (w,t_1)$ such that
\begin{multline*}
\tilde{\mu}^{\mathfrak s} \big( \{ [w,t_1] \in T^{\mathfrak s} {/\!/} W^{\mathfrak s} :
\dim (T^{\mathfrak s})^w \leq d \} \big) \\
\cup \; \{ \chi (w,t_1) : (w,t_1) \in \widetilde{T^{\mathfrak s}} ,\, \dim (T^{\mathfrak s})^w > d \}
\end{multline*}
is again a $\mathbb Q$-basis of $R_\mathbb Q (X^* (T) \rtimes W^{\mathfrak s})_{W^{\mathfrak s} t_1}$. In this way we construct
$\tilde{\mu}^{\mathfrak s}$ on the $d$-dimensional connected components of $T^{\mathfrak s} {/\!/} W^{\mathfrak s}$, such that
\eqref{eq:span} becomes valid for $d+1$.
Thus we obtain a bijection $\tilde{\mu}^{\mathfrak s} : T^{\mathfrak s} {/\!/} W^{\mathfrak s} \to {\mathbf {Irr}} (X^* (T) \rtimes W^{\mathfrak s})$
which satisfies (1).
It remains to check (2). Fix $w \in W^{\mathfrak s}$ and consider a connected component $(T^{\mathfrak s})^w_i$
of $(T^{\mathfrak s})^w$. For generic $t \in (T^{\mathfrak s})^w_i$ the representation
$\chi (w,t) = \tilde{\mu}^{\mathfrak s} ([w,t])$
is irreducible. We note that both $\{ \chi (w,t) : t \in (T^{\mathfrak s})^w_i \}$ and \eqref{eq:indKato}
(with fixed $x$) are algebraic families of $X^* (T) \rtimes W^{\mathfrak s}$-representations parametrized
by $(T^{\mathfrak s})^w_i$. That set is an irreducible algebraic variety because it is a coset of the
neutral component of $(T^{\mathfrak s})^w$, which is a subtorus of $T^{\mathfrak s}$. It follows that the
irreducible $\chi (w,t)$ are all contained in ${\mathbf {Irr}} (X^* (T) \rtimes W^{\mathfrak s})^{[x]}$ for one $x$.
By continuity $\chi (w,t)$ is a subrepresentation of $H_{d(x)} (\mathcal B_H^{t,x},\mathbb C)$ for
all $t \in (T^{\mathfrak s})^w_i$, which implies that the subquotient $\tilde{\mu}^{\mathfrak s} ([w,t])$ of
$\chi (w,t)$ has the form $\tau (t,x,\rho_1)$ for the same $x$.
Hence $\tilde{\mu}^{\mathfrak s} ( [w,(T^{\mathfrak s})^w_i] ) \subset {\mathbf {Irr}} (X^* (T) \rtimes W^{\mathfrak s})^{[x]}$.
\end{proof}
We remark that with more effort it is possible to refine the above construction so that
$\tilde{\mu}^{\mathfrak s}$ becomes continuous. But since we do not need that refinement, we refrain from
writing it down here.
\begin{thm}\label{split}
Let ${\mathcal G}$ be a split reductive $p$-adic group with connected
centre, such that the residual characteristic satisfies Condition \ref{CC}.
Then, for each Bernstein component ${\mathfrak s}$ in the principal series of ${\mathcal G}$, we have a continuous bijection
\[
\mu^{\mathfrak s} : T^{\mathfrak s} {/\!/} W^{\mathfrak s} \to {\mathbf {Irr}}({\mathcal G})^{\mathfrak s} .
\]
It maps $T^{\mathfrak s}_{\rm cpt} {/\!/} W^{\mathfrak s}$ onto ${\mathbf {Irr}}({\mathcal G})^{\mathfrak s} \cap {\mathbf {Irr}} ({\mathcal G})_{\mathrm{temp}}$.
\end{thm}
\begin{proof}
To get the bijection $\mu^{\mathfrak s}$, apply Theorems \ref{compareParameters},
\ref{Reed}.(1) and \ref{thm:bijection}.
The properties (1) and (2) in Theorem \ref{thm:bijection} ensure that the composed map
\[
T^{\mathfrak s} {/\!/} W^{\mathfrak s} \to {\mathbf {Irr}} (X^* (T) \rtimes W^{\mathfrak s}) \to \Psi (H)_{\rm aff}
\]
is continuous, so $\mu^{\mathfrak s}$ is continuous as well.
By Theorem \ref{thm:bijection} $T^{\mathfrak s}_{\rm cpt} {/\!/} W^{\mathfrak s}$ is first mapped bijectively
to the set of parameters in $\Psi (H)_{\rm aff}$ with $t$ compact. From the proof of Theorem
\ref{compareParameters} we see that the latter set is mapped onto the set of enhanced
Langlands parameters $(\Phi,\rho)$ with $\Phi \big|_{{\mathfrak o}_F^\times} = c^{\mathfrak s}$ and
$\Phi (\varpi_F)$ compact. These are just the bounded enhanced Langlands parameters, so by
Theorem \ref{Reed}.(2) they correspond to ${\mathbf {Irr}}({\mathcal G})^{\mathfrak s} \cap {\mathbf {Irr}} ({\mathcal G})_{\mathrm{temp}}$.
\end{proof}
\section{Correcting cocharacters and L-packets}
\label{sec:cochar}
In this section we construct ``correcting cocharacters'' on the extended quotient
$T^{\mathfrak s} /\!/ W^{\mathfrak s}$. These measure the difference between the canonical projection
$T^{\mathfrak s} /\!/ W^{\mathfrak s} \to T^{\mathfrak s} / W^{\mathfrak s}$ and the composition of $\mu^{\mathfrak s}$ (from Theorem
\ref{split}) with the cuspidal support map ${\mathbf {Irr}} ({\mathcal G})^{\mathfrak s} \to T^{\mathfrak s} / W^{\mathfrak s}$.
As conjectured in \cite{ABPS1}, they show how to determine when two elements of
$T^{\mathfrak s} {/\!/} W^{\mathfrak s}$ give rise to ${\mathcal G}$-representations in the same L-packet.
Every enhanced Langlands parameter $(\Phi,\rho)$ naturally determines a cocharacter $h_\Phi$
and elements $\theta (\Phi,\rho,z) \in T^{\mathfrak s}$ by
\begin{equation}\label{eq:defhPhi}
\begin{aligned}
& h_\Phi (z) = \Phi \big( 1,\matje{z}{0}{0}{z^{-1}} \big) ,\\
& \theta (\Phi,\rho,z) = \Phi \big( \varpi_F, \matje{z}{0}{0}{z^{-1}} \big) =
\Phi (\varpi_F) h_\Phi (z) .
\end{aligned}
\end{equation}
Although these formulas obviously do not depend on $\rho$, it turns out to be convenient to
include it in the notation anyway.
However, in this generality we would end up with infinitely many correcting cocharacters,
most of them with range outside $T$. To reduce to finitely many cocharacters with values
in $T$, we must fix some representatives for $\mathfrak U^{\mathfrak s}$ in $H$.
Fix a Borel subgroup $B_H$ of $H$ containing $T$. Following the recipe from the
Bala--Carter classification \cite[Theorem 5.9.6]{Car} we choose a set of
representatives ${\mathfrak U}^{\mathfrak s} \subset B_H$ for the unipotent classes of $H$.
\begin{lem}\label{lem:Bala-Carter}
Every commuting pair $(t,x)$ with $t \in H$ semisimple and $x \in H$
unipotent is conjugate to one with $x \in {\mathfrak U}^{\mathfrak s}$ and $t \in T$.
\end{lem}
\begin{proof}
Obviously we can achieve that $x \in {\mathfrak U}^{\mathfrak s}$ via conjugation in $H$. Choose a homomorphism
of algebraic groups $\gamma : {\rm SL}_2 (\mathbb C) \to H$ with $\gamma ( \matje{1}{1}{0}{1} ) = x$.
As noted in \eqref{eqn:gamt}, such a $\gamma$ exists and is unique up to conjugation
by $Z_H (x)$. The constructions for the Bala--Carter theorem in \cite[\S 5.9]{Car} entail
that we can choose $\gamma$ such that $\gamma (Y_\alpha) \in T$ for all
$\alpha \in \mathbb C^\times$. On the other hand, we can also construct such a $\gamma$ inside
the reductive group $Z_H (t)$. So, upon conjugating $t$ by a suitable element of
$Z_H (x)$, we can achieve that the standard maximal torus $T_x$ of $\gamma ({\rm SL}_2 (\mathbb C))$
is contained in $T$ and commutes with $t$. Let $S \subset H$ be a maximal torus
containing $T_x$ and $t$. Then
\[
T = (T \cap Z_G ( \mathrm{im} \, \gamma ))^\circ Z(G) T_x ,
\]
and similarly for $S$. It follows that
\[
T \cap Z_G ( \mathrm{im} \, \gamma )^\circ \quad \text{and} \quad
S \cap Z_G ( \mathrm{im} \, \gamma )^\circ
\]
are maximal tori of $Z_G ( \mathrm{im} \gamma )^\circ$. They are conjugate, which shows
that we can conjugate $t$ to an element of $T$ without changing $x \in {\mathfrak U}^{\mathfrak s}$.
\end{proof}
Recall that \eqref{eq:labelling} and Theorem \ref{thm:bijection}
determine a labelling of the connected
components of $T^{\mathfrak s} /\!/ W^{\mathfrak s}$ by unipotent classes in $H$. This enables us to define the
correcting cocharacters: for a connected component $\mathbf c$ of $T^{\mathfrak s} /\!/ W^{\mathfrak s}$ with
label (represented by) $x \in \mathfrak U^{\mathfrak s}$ let $\gamma_x = \gamma$
be as in \eqref{eqn:gamt} and $\Phi$ as in \eqref{eqn:Phi}. We take the cocharacter
\begin{equation}\label{eq:defhx}
h_{\mathbf c} = h_x : \mathbb C^\times \to T ,\quad h_x (z) = \gamma_x \matje{z}{0}{0}{z^{-1}} .
\end{equation}
Let $\widetilde{\mathbf c}$ be a connected component of $\widetilde{T^{\mathfrak s}}$ that projects
onto $\mathbf c$ and centralizes $x$. In view of Lemma \ref{lem:Bala-Carter} this can always
be achieved by adjusting by an element of $W^{\mathfrak s}$. We define
\begin{equation} \label{eq:defThetaz}
\begin{aligned}
& \widetilde{\theta_z} : \widetilde{\mathbf c} \to T^{\mathfrak s} ,& & (w,t) \mapsto
t \, h_{\mathbf c}(z) , \\
& \theta_z : \mathbf c \to T^{\mathfrak s} / W^{\mathfrak s} ,& & [w,t] \mapsto W^{\mathfrak s} t \, h_{\mathbf c}(z) .
\end{aligned}
\end{equation}
\begin{thm}\label{Lpackets}
Let $[w,t],[w',t'] \in T^{\mathfrak s} /\!/W^{\mathfrak s}$. Then $\mu^{\mathfrak s} [w,t]$ and $\mu^{\mathfrak s} [w',t']$ are
in the same L-packet if and only if
\begin{itemize}
\item $[w,t]$ and $[w',t']$ are labelled by the same unipotent class in $H$;
\item $\theta_z [w,t] = \theta_z [w',t']$ for all $z \in \mathbb C^\times$.
\end{itemize}
\end{thm}
\begin{proof}
Suppose that the two ${\mathcal G}$-representations $\mu^{\mathfrak s} [w,t] = \pi (\Phi,\rho)$ and \\
$\mu^{\mathfrak s} [w',t'] = \pi (\Phi',\rho')$ belong to the
same L-packet. By definition this means that $\Phi$ and $\Phi'$ are $G$-conjugate.
Hence they are labelled by the same unipotent class, say $[x]$ with $x \in \mathfrak U^{\mathfrak s}$.
By choosing suitable representatives we may assume that $\Phi = \Phi'$ and that
$\{(\Phi,\rho),(\Phi,\rho')\} \subset \Phi (G)_{\rm en}^{{\mathfrak s},[x]}$. Then
\[
\theta (\Phi,\rho,z) = \theta (\Phi,\rho',z) \text{ for all } z \in \mathbb C^\times.
\]
Although in general $\theta (\Phi,\rho,z) \neq \widetilde{\theta_z} (w,t)$, they differ only
by an element of $W^{\mathfrak s}$. Hence $\theta_z [w,t] = \theta_z [w',t']$ for all $z \in \mathbb C^\times$.
Conversely, suppose that $[w,t],[w',t']$ fulfill the two conditions of the theorem. Let
$x \in \mathfrak U^{\mathfrak s}$ be the representative for the unipotent class which labels them.
From Lemma \ref{lem:Bala-Carter} we see that there are
representatives for $[w,t]$ and $[w',t']$ such that $t (T^w)^\circ$ and $t' (T^{w'})^\circ$
centralize $x$. Then
\[
\widetilde{\theta_z} (w,t) = t \, h_x (z) \quad \text{and} \quad
\widetilde{\theta_z} (w',t') = t' \, h_x (z)
\]
are $W^{\mathfrak s}$-conjugate for all $z \in \mathbb C^\times$. As these points depend continuously on $z$
and $W^{\mathfrak s}$ is finite, this implies that there exists a $v \in W^{\mathfrak s}$ such that
\[
v (t \, h_x (z)) = t' \, h_x (z) \quad \text{for all } z \in \mathbb C^\times .
\]
For $z = 1$ we obtain $v(t) = t'$, so $v$ fixes $h_x (z)$ for all $z$.
Consider the minimal parabolic root subsystem $R_P$ of $R (G,T)$ that supports $h_x$; in
other words, $P$ is the unique set of roots such that $h_x$ lies in a facet of type
$P$ in the chamber decomposition of $X^* (T) \otimes_{\mathbb Z} \mathbb R$. We write
\[
T^P = \{ t \in T \mid \alpha (t) = 1 \; \forall \alpha \in P \}^\circ .
\]
Then $t (T^w)^\circ$ and $t' (T^{w'})^\circ$ are subsets of $T^P$ and $v$ stabilizes $T^P$.
It follows from \cite[Proposition B.4]{Opd} that $h_x (q^{1/2}) t T^P$ and
$h_x (q^{1/2}) t' T^P$ are residual cosets in the sense of Opdam. By the above, these two
residual cosets are conjugate via $v \in W^{\mathfrak s}$. Now \cite[Corollary B.5]{Opd} says that
the pairs $(h_x (q^{1/2}) t,x)$ and $(h_x (q^{1/2}) t',x)$ are $H$-conjugate. Hence the
associated Langlands parameters are conjugate, which means that $\mu^{\mathfrak s} [w,t]$ and
$\mu^{\mathfrak s} [w',t']$ are in the same L-packet.
\end{proof}
\begin{cor}\label{cor:properties}
Properties 1--5 from \cite[\S 15]{ABPS1} hold for $\mu^{\mathfrak s}$ as in
Theorem \ref{split}, with the morphism $\theta_z$ from \eqref{eq:defThetaz}
and the labelling by unipotent classes in $H^{\mathfrak s}$ from \eqref{eq:labelling}
and Theorem \ref{thm:bijection}.
Together with Theorem \ref{split} this proves the conjecture from \cite{ABPS1}
for all Bernstein components in the principal series of a split reductive
$p$-adic group with connected centre, such that the residual characteristic satisfies
Condition \ref{CC}.
\end{cor}
\begin{proof}
Property (1) was shown in Theorem \ref{split}.
By the definition of $\theta_z$ \eqref{eq:defThetaz}, property (4) holds.
Property (3) is a consequence of property (4), in combination with Theorems \ref{Reed}.(3),
\ref{split} and \ref{thm:bijection}. Property (2) follows from Theorem \ref{split} and
property (3). Property (5) is none other than Theorem \ref{Lpackets}.
\end{proof}
% End of ``Geometric structure for the principal series of a split reductive $p$-adic group with connected centre'' (arXiv:1408.0673, math.RT).
% Source: https://arxiv.org/abs/2009.05124
\title{Tiered Random Matching Markets: Rank is Proportional to Popularity}
\begin{abstract}
We study the stable marriage problem in two-sided markets with randomly generated preferences. We consider agents on each side divided into a constant number of ``soft tiers'', which intuitively indicate the quality of the agent. Specifically, every agent within a tier has the same public score, and agents on each side have preferences independently generated proportionally to the public scores of the other side. We compute the expected average rank which agents in each tier have for their partners in the men-optimal stable matching, and prove concentration results for the average rank in asymptotically large markets. Furthermore, we show that despite having a significant effect on ranks, public scores do not strongly influence the probability of an agent matching to a given tier of the other side. This generalizes results of [Pittel 1989] which correspond to uniform preferences. The results quantitatively demonstrate the effect of competition due to the heterogeneous attractiveness of agents in the market, and we give the first explicit calculations of rank beyond uniform markets.
\end{abstract}
\section{Introduction}
\label{sectionIntroduction}
The theory of stable matching, initiated by~\cite{gale1962college}, has led to a deep understanding of two-sided matching markets and inspired successful real-world market designs. Examples of such markets include marriage markets, online dating, assigning students to schools, labor markets, and college admissions. In a market matching ``men'' to ``women'' (a commonly used analogy), a matching is stable if no man-woman pair prefer each other over their assigned partners.
A fundamental issue is characterizing stable outcomes of matching markets, i.e. the outcome agents should {\it expect} based on market characteristics.
Such characterizations are not only useful for describing outcomes but also likely to be fruitful in market designs. Numerous papers so far have studied stable matchings in random markets, in which agents' preferences are generated uniformly at random \cite{pittel1989average, knuth1990stable, ashlagi2017unbalanced, pittel2019likely}. This paper contributes to the literature by expanding these results to a situation where preferences are drawn according to different tiers of ``public scores'', generalizing the uniform case. We ask how public scores, which correspond to the attractiveness of agents, impact the outcome in the market.
Formally, we study the following class of {\it tiered random markets}. There are $n$ men and $n$ women. Each side of the market is divided into a constant number of ``soft tiers''. A fraction $\epsilon_i$ of the women are in tier $i$, each with public score $\alpha_i$, and a fraction $\delta_j$ of the men are in tier $j$, each with public score $\beta_j$. For each agent we draw a complete preference list by sampling without replacement proportionally to the public scores of the agents on the other side of the market.\footnote{These are also termed popularity-based preferences \cite{gimbert2019popularity,immorlica2015incentives}, and are equivalent to generating preferences according to a Multinomial-Logit (MNL) model induced by the public scores.}
So a man's preference list is generated by sampling women one at a time without replacement according to a distribution that is proportional to their public scores. Using $\bm{\alpha}, \bm{\epsilon}$ to denote the vectors of scores and proportions of tiers on the women's side, we see that the marginal probability of drawing any particular woman in tier $i$ is $\alpha_i / (n\bm{\epsilon}\cdot\bm{\alpha})$. An analogous statement holds for the tier configuration $\bm{\beta}, \bm{\delta}$ of the men.
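A minimal sketch of this sampling procedure (hypothetical helper name; repeated score-weighted draws with removal implement sampling without replacement):

```python
import random

def sample_preference_list(agents, rng):
    """Draw one complete preference list: repeatedly sample an agent of
    the other side without replacement, proportionally to public score.
    `agents` is a list of (agent_id, public_score) pairs."""
    pool = list(agents)
    prefs = []
    while pool:
        weights = [score for _, score in pool]
        # random.choices draws with replacement, but removing each pick
        # from the pool yields sampling without replacement overall.
        i = rng.choices(range(len(pool)), weights=weights)[0]
        prefs.append(pool.pop(i)[0])
    return prefs

# Two tiers of women: scores alpha = (2, 1), proportions eps = (1/2, 1/2).
women = [(w, 2.0) for w in range(5)] + [(w, 1.0) for w in range(5, 10)]
print(sample_preference_list(women, random.Random(0)))  # a permutation of 0..9
```

Each man's list is an independent draw of this kind; women's lists are drawn analogously from the men's scores.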
These preferences are a natural next step beyond the uniform distribution over preference lists, and provide a~priori heterogeneous quality of agents while still remaining tractable for theoretical analysis.
Our primary goal is to study the \emph{average rank} of agents in each
tier under the man-optimal stable matching, with a focus on the asymptotic
behavior in large markets. The rank of an agent is defined to be the index
of their partner on their full preference list, where lower is better.
Additionally, we prove results on the \emph{match type distribution},
i.e. the fraction of tier $i$ women matched to tier $j$ men
(for each $i, j$).
We show that, for large enough markets, the following hold
to within an arbitrarily small approximation factor:
\begin{enumerate}
\item (Theorem~\ref{thrmMenRanksCentralConcentration}.)
With high probability, the average rank of men in tier $j$ is
\[ \frac{\bm{\epsilon}\cdot\bm{\alpha}}{\alpha_{\min}}\cdot
\frac{1}{\bm{\delta}\cdot\bm{\beta}^{-1}}\cdot\frac{\ln n}{\beta_j}. \]
\item (Theorem~\ref{thrmWomenRanks}.)
With high probability, the average rank of women in tier $i$ is
\[ (\bm{\delta}\cdot\bm{\beta})(\bm{\delta}\cdot\bm{\beta}^{-1}) \frac{\alpha_{\min}}{\alpha_i}
\frac{n}{\ln n}. \]
\item (Theorem~\ref{thrmMatchTypes}.)
The probability that a woman in tier $i$ matches to a man
in tier $j$ is $\delta_j$.
\end{enumerate}
In the above, $\bm{\beta}^{-1} = \{1/\beta_j\}$ denotes
the vector of the reciprocals of men's public scores,
$\alpha_{\min}$ denotes the smallest public score
on the women's side, and $\bm{x}\cdot\bm{y}$ denotes
the dot product of the vectors $\bm{x}$ and $\bm{y}$.
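As a quick numerical illustration of results (i) and (ii), the following sketch (hypothetical function name and tier values) simply evaluates the displayed formulas:

```python
import math

def predicted_ranks(alpha, eps, beta, delta, n):
    """Evaluate the closed forms (i) and (ii) above for every tier."""
    ea = sum(e * a for e, a in zip(eps, alpha))      # eps . alpha
    db = sum(d * b for d, b in zip(delta, beta))     # delta . beta
    dbinv = sum(d / b for d, b in zip(delta, beta))  # delta . beta^{-1}
    a_min = min(alpha)
    men = [ea / a_min / dbinv * math.log(n) / bj for bj in beta]
    women = [db * dbinv * a_min / ai * n / math.log(n) for ai in alpha]
    return men, women

# Hypothetical two-tier market of size n = 10^4.
men_rank, women_rank = predicted_ranks(
    alpha=[2.0, 1.0], eps=[0.5, 0.5],
    beta=[3.0, 1.0], delta=[0.3, 0.7], n=10_000)
print(men_rank, women_rank)
```

In the degenerate one-tier case all scores cancel and the formulas reduce to the classical $\ln n$ and $n / \ln n$.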
\subparagraph*{Intuition and Observations.}
As in the case of uniform
preferences~\cite{pittel1989average}, in the man-optimal stable outcome,
men get a much lower rank than women. Indeed,
both men and women get the same order of rank
as in the uniform case ($\ln n$ and $n/\ln n$, respectively).
This in itself
is an interesting consequence of this work -- a constant
tier structure affects the market only up to constants.
This fact also highlights that determining these constants
is an interesting area for investigation,
as the constants capture how the outcome of the market changes
with respect to the public scores.
The first observation we make is that
agents on each side get a rank inversely
proportional to their public score.
Perhaps more interesting is the following observation:
The rank of both sides depends on the tier structure of the other side, but
\emph{each tier is affected the same amount} by the tier parameters of the
other side.
This is closely related to the fact
that the probability of a woman matching to a man in tier $j$
is proportional to only the number of men in tier $j$
(regardless of the tier the woman lies in).
Moreover, both $\bm{\epsilon}\cdot\bm{\alpha} / \alpha_{\min}$ and
$(\bm{\delta}\cdot\bm{\beta})(\bm{\delta}\cdot\bm{\beta}^{-1})$ are always greater
than or equal to one\footnote{
To prove $(\bm{\delta}\cdot\bm{\beta})(\bm{\delta}\cdot\bm{\beta}^{-1}) \ge 1$,
use Jensen's inequality to conclude that
$\sum_j \delta_j \beta_j \ge \left(
\sum_j \delta_j \beta_j^{-1}\right)^{-1}$.
}.
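For completeness, since $\|\bm{\delta}\|_1 = \|\bm{\epsilon}\|_1 = 1$, both claims follow from Cauchy--Schwarz (equivalently, the footnote's Jensen argument):

```latex
\[
1 = \Big( \sum_j \delta_j \Big)^{\!2}
  = \Big( \sum_j \sqrt{\delta_j \beta_j}\,\sqrt{\delta_j \beta_j^{-1}} \Big)^{\!2}
  \le \Big( \sum_j \delta_j \beta_j \Big)\Big( \sum_j \delta_j \beta_j^{-1} \Big)
  = (\bm{\delta}\cdot\bm{\beta})(\bm{\delta}\cdot\bm{\beta}^{-1}),
\]
with equality if and only if all $\beta_j$ are equal; likewise
$\bm{\epsilon}\cdot\bm{\alpha} = \sum_i \epsilon_i \alpha_i
 \ge \alpha_{\min} \sum_i \epsilon_i = \alpha_{\min}$.
```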
Thus, in these markets, any heterogeneity in the public scores
of one side harms the average ranks of the other side
(but does not significantly affect the likelihood
that an agent matches to a certain tier on the other side).
Another interesting feature is the following: While the average
ranks for men's tiers depend on public score distributions on both sides
of the market, the average rank of women in tier $i$ depends only on the
ratio between $\alpha_i$ and the public score $\alpha_{\min}$ of the
bottom tier of women (and the distribution of public scores on the men's
side). Intuitively, the rank of the men depends on the distribution of
scores of the women because \emph{men are competing to avoid being
matched to the lowest tier of women}.
To elaborate on that last point, let us first consider the total number of
proposals made during the man-proposing deferred acceptance process (DA).
The algorithm will terminate
when the last woman receives a proposal. Naturally one would expect that
this woman will belong to the bottom tier. Therefore, using standard coupon
collector arguments, the total number of proposals made to women {\it in
the bottom tier} until they all receive a proposal is expected to be
$(\epsilon_{\min} n)\ln(\epsilon_{\min} n)$, where $\epsilon_{\min}$ is
the fraction of women in the bottom tier.
These proposals are an
$\epsilon_{\min} \alpha_{\min} /\bm{\epsilon}\cdot\bm{\alpha}$ fraction of the total
proposals, so one expects the number of total proposals to be
\[ \frac{ (\epsilon_{\min} n)\ln(\epsilon_{\min} n) }
{\epsilon_{\min} \alpha_{\min} /\bm{\epsilon}\cdot\bm{\alpha}}
= \frac{\bm{\epsilon}\cdot\bm{\alpha}} {\alpha_{\min}}
\cdot n \ln n - O(n).
\]
This introduces the factor of $\bm{\epsilon}\cdot\bm{\alpha}/\alpha_{\min}$ in result (i) on the men's
ranks (i.e. the number of proposals per man).
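The coupon-collector heuristic can be simulated directly. A sketch (hypothetical function name; draws are i.i.d. with replacement, which the actual DA proposal sequence only approximates):

```python
import math
import random

def draws_until_bottom_complete(n, alpha, eps, seed=0):
    """Count i.i.d. score-proportional draws of women until every woman
    in the bottom tier (smallest score) has been drawn at least once."""
    rng = random.Random(seed)
    tier_sizes = [round(e * n) for e in eps]
    tier_weights = [e * a for e, a in zip(eps, alpha)]
    bottom = min(range(len(alpha)), key=lambda i: alpha[i])
    seen, total = set(), 0
    while len(seen) < tier_sizes[bottom]:
        total += 1
        t = rng.choices(range(len(alpha)), weights=tier_weights)[0]
        if t == bottom:
            seen.add(rng.randrange(tier_sizes[bottom]))
    return total

n, alpha, eps = 2000, [2.0, 1.0], [0.5, 0.5]
total = draws_until_bottom_complete(n, alpha, eps)
predicted = sum(e * a for e, a in zip(eps, alpha)) / min(alpha) * n * math.log(n)
print(total, round(predicted))  # total matches the prediction up to the O(n) correction
```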
On the other hand, the probability that one of these proposals goes to a
woman in tier $i$ is $\alpha_i / (n\bm{\epsilon}\cdot\bm{\alpha})$, implying that such a
woman should receive roughly $(\alpha_i / \alpha_{\min}) \ln n$ proposals.
Thus, for a given woman, the increase in the total number of proposals
caused by the tier proportions $\bm{\epsilon}$ is exactly canceled
out by the likelihood that a proposal goes to that woman,
and the only thing that matters is the woman's score (relative
to the bottom tier).
If men are uniform, women should then expect rank roughly $(\alpha_{\min} /
\alpha_{i})(n/\ln n)$, which helps explain the corresponding factors
in result (ii).
Consider now the public scores of the men, and for simplicity assume that the bottom tier of men has score $1$.
Suppose for the sake of demonstration that every time a man with public score $\beta_j$ proposes to a woman who is already matched, this man is $\beta_j$ times more likely to be accepted than a man with public score $1$.\footnote{
As we discuss below,
this approximation is only valid if the woman is already matched with a man she ranks highly.
A major technical step in our proof is showing that,
in certain situations, ``enough'' women are ``matched
well enough'' for this approximation to be used.
}
We would expect that such a man makes only a $1/\beta_j$ fraction as many proposals before his next acceptance, and hence a $1/\beta_j$ fraction as many proposals overall.
Let $S$ be the total number of proposals, let $r_j$ denote the rank of a man in tier $j$, and let $r_{\min}$ denote the rank of the bottom tier of men. If each of the $\delta_j n$ men in tier $j$ makes $r_{\min}/\beta_j$ proposals, then we should have
\[ S = \sum_j (n\delta_j) \beta_j^{-1} r_{\min}
\qquad \Longrightarrow \qquad
r_{\min} = \frac{S}{n\bm{\delta}\cdot\bm{\beta}^{-1}},\qquad
r_j = \frac{S}{(n\bm{\delta}\cdot\bm{\beta}^{-1})\beta_j},
\]
which introduces the factor of $1/((\bm{\delta}\cdot\bm{\beta}^{-1})\beta_j )$
in result (i) on the men's rank.
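A numeric sanity check of this bookkeeping (hypothetical values for $S$ and the tier parameters), confirming that the tier-wise ranks account for all $S$ proposals:

```python
n, S = 1000, 250_000
delta, beta = [0.3, 0.7], [3.0, 1.0]
dbinv = sum(d / b for d, b in zip(delta, beta))  # delta . beta^{-1}
r = [S / (n * dbinv * bj) for bj in beta]        # r_j from the display
r_min = S / (n * dbinv)                          # bottom-tier rank
# Summing n*delta_j proposals-per-man over tiers recovers S exactly.
assert abs(sum(n * d * rj for d, rj in zip(delta, r)) - S) < 1e-6
print(r_min, r)  # 312.5 [104.16..., 312.5]
```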
The final remaining factor in our results
is $(\bm{\delta}\cdot\bm{\beta})(\bm{\delta}\cdot\bm{\beta}^{-1})$ in result (ii).
Deriving this term requires reasoning about the number
of proposals from each tier of men received by a fixed woman $w$.
Building from the previous paragraph,
we reason that each of the $\delta_j n$ men in tier
$j$ makes a number of proposals proportional to $1/\beta_j$.
Each such proposal has the same probability of going to $w$,
regardless of the tier $j$.
So the number of proposals $w$ receives from tier $j$ men
is proportional to $\delta_j/\beta_j$.
The factor $(\bm{\delta}\cdot\bm{\beta})(\bm{\delta}\cdot\bm{\beta}^{-1})$ then
arises for somewhat technical reasons
(described in section~\ref{sectionWomenRanks}) which have to do
with the way women generate their preference lists.
We now describe how result (iii), which may seem somewhat
more mysterious than the other results, emerges as a corollary
of computing the ranks women receive.
We argued above that a woman $w$ in tier $i$ receives approximately
$(\delta_j / \beta_j) U_i$ proposals from men in tier $j$,
for some value of $U_i$ independent of $j$.
Recall that $w$ applies weight $\beta_j$
to each proposal she sees from a man in tier $j$.
Moreover, the identity of $w$'s favorite proposal
is independent of the order in which $w$ saw proposals.
Thus, the probability that $w$'s favorite proposal
(i.e. the proposal of the man she matches to) came from tier $j$
is approximately $(\delta_j U_i) / U_i = \delta_j $, which is
\emph{independent of $\beta_j$}, as well as independent of the
tier $w$ is in.
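This cancellation can be checked numerically. A small sketch (hypothetical function name; proposal counts proportional to $\delta_j/\beta_j$ and an MNL favorite with weight $\beta_j$, as in the heuristic above):

```python
import random

def favorite_tier_frequencies(delta, beta, trials=20000, base=60, seed=0):
    """A woman receives ~ base*delta_j/beta_j proposals from tier j and
    picks a favorite with MNL weight beta_j.  Return the empirical
    frequency with which the favorite lies in each tier."""
    rng = random.Random(seed)
    counts = [0] * len(delta)
    proposals = []  # tier label repeated once per proposal
    for j, (d, b) in enumerate(zip(delta, beta)):
        proposals += [j] * round(base * d / b)
    weights = [beta[j] for j in proposals]
    for _ in range(trials):
        counts[rng.choices(proposals, weights=weights)[0]] += 1
    return [c / trials for c in counts]

freq = favorite_tier_frequencies(delta=[0.3, 0.7], beta=[3.0, 1.0])
print(freq)  # close to delta = [0.3, 0.7], independent of beta
```

The weights $\beta_j$ exactly cancel the deficit of tier-$j$ proposals, leaving the tier proportions $\delta_j$.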
Thus, up to lower order terms, the distribution of match
types is the same as it would be in a uniformly random matching
market,
and the match is not assortative.
Intuitively, result (iii) arises when men make enough proposals
to offset any disadvantage (in the type of their match)
they have due to public score.
Due to the highly connected and relatively competitive nature
of our markets,
men in the lowest tier make more proposals,
but they are not more likely to end up matched with lower tier agents.
Put another way, men in lower tiers are less likely
to attain matches they idiosyncratically like,
but often settle for a high-quality agent which
is low on their personal preference list.
This indicates that public scores that differ by constant factors do not provide any significant a~priori predictive power
over the matches agents receive.
In particular, agents with lower public scores
can still hope to achieve high-tier
matches if they consider enough options.
\subparagraph*{Techniques.}
Our proofs require developing some technical tools that may be of independent interest, especially when we reason about the ranks achieved by the men. We build on the analysis of DA from \cite{wilson1972analysis,pittel1989average,immorlica2015incentives,ashlagi2017unbalanced} to handle public scores rather than just uniform random preferences. As in these previous works, a key step in our proof is letting all men but one (call him $m$) first propose and match through DA, and then tracking the proposals of $m$ (this works because DA is independent of the order of proposals). For demonstration purposes, let's call the proposals before man $m$ the ``setup''. A key fact in previous works is that the distribution of proposals made by $m$ is identical for every man, and moreover that the distribution of setups is identical as well. This fails to hold in tiered random markets, and thus we must develop new techniques.
We prove that, for ``most'' setups, the rank a man can achieve is approximately given by a certain geometric distribution, whose parameter $p$ is essentially the probability that a proposal by that man will be accepted. We then prove that, up to lower order terms, this success parameter scales up with the public score of the men. This gives the fact that the rank of men is inversely proportional to public score.
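As a caricature of this step (not the actual proof, which must condition on the setup): if each proposal of a score-$\beta$ man were accepted independently with probability $\beta p_0$, his rank would be geometric with mean $1/(\beta p_0)$, inversely proportional to his score. A sketch with hypothetical parameters:

```python
import random

def mean_rank(beta, p0=0.02, trials=40000, seed=0):
    """Toy model: each proposal by a score-beta man is accepted
    independently with probability beta*p0; his rank is the number of
    proposals he makes, so E[rank] = 1/(beta*p0)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        r = 1
        while rng.random() >= beta * p0:
            r += 1
        total += r
    return total / trials

m1, m2 = mean_rank(1.0), mean_rank(2.0)
print(m1, m2)  # roughly 50 and 25: rank inversely proportional to score
```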
Characterizing the setups where our proof goes through requires a technical analysis, and we term the setups which work ``smooth matching states''. The most crucial thing we need for these setups is that \emph{many women are matched to partners they rank highly}, which helps us prove that 1) men are likely to remain matched to their first acceptance (so our approximation with a geometric distribution is valid), and 2) a man with fitness $\beta$ is approximately $\beta$ times more likely to be accepted every time. For details, see section~\ref{sectionRankOfMen}.
Finally, to prove that the average rank of men within a tier \emph{concentrates}, we need to show the correlation between the ranks of different men is not too large. Thus, we track the proposals of the last \emph{two} men to propose, and find that the joint distribution of the ranks of these men can be approximated by a pair of independent geometric distributions. Intuitively, this is because men do not propose to very many women overall, and thus the last two men are unlikely to interfere with each other as they make proposals.
The crucial aspects of our model are that preferences
of each agent are independent and identically distributed,
that preference weights are constant, and that the market is roughly
in balance. While our techniques are useful to reason about
markets which do not have these properties,
the results are not nearly as clean; indeed the tier structure simplifies our analysis, but most of it
goes through if each agent has an
individual, constant, bounded public score.
\subsection{Related literature}
\label{sec:relatedliterature}
Several papers have studied matching markets with complete preference lists that are generated uniformly at random. Coupon collector techniques are used in \cite{wilson1972analysis} to upper bound the men's average rank by $\ln n$. \cite{pittel1989average,knuth1990stable,pittel1992likely} analyze further balanced markets with $n$ men and $n$ women. They find that in the man-optimal stable matching in balanced markets, men and women match on average to their $\ln n$ and $\frac{n}{\ln n}$ ranks, respectively. Our results generalize these findings to markets with preferences induced by public scores, thus incorporating much more heterogeneity in the market.
\cite{ashlagi2017unbalanced,pittel2019likely,cai2019short} study markets with uniformly drawn preferences but with an imbalance between men and women. These papers find that in any stable matching the average ranks of men and women are similar to the average ranks under the short-side-proposing DA. Additionally, \cite{kanoria2020random} investigates the relation between the imbalance and the length of preference lists (though the model is still uniform for each agent). This paper does not consider imbalanced markets but we believe that similar techniques to those we develop will be useful to reason about unbalanced tiered random markets.
Several papers look at random matching markets in which preferences are generated based on public scores \cite{immorlica2015incentives,kojima2009incentives,ashlagi2014stability}. These papers restrict attention to the size of the core (a measure of the difference between the man-optimal and woman-optimal outcome) and strategic manipulation of agents under a stable matching mechanism. Key assumptions in these papers generate outcomes which leave many agents unmatched. In particular, their models either assume that preference lists of men are of constant length, or, alternatively, one side has many more agents than the other.\footnote{Some papers additionally consider manipulations in more restricted randomized settings~\cite{coles2014optimal} or in deterministic (worst case) settings~\cite{Gonczarowski14}.}
Closely related to this paper is~\cite{gimbert2019popularity},
which primarily studies a special case of highly correlated
popularity preferences which is termed ``geometric preferences''.
While our work focuses on the rank agents achieve in the
man-optimal outcome (a canonical stable matching),
\cite{gimbert2019popularity} focuses on the size
of the core (more specifically, they study the number of stable partners
that agents have in typical stable matchings) using techniques
specialized to geometric preferences.
Other papers have addressed tiered matching markets,
especially in market design settings.
However, these papers mostly study ``hard tiers'',
i.e. such that agents in higher tiers are deterministically
ranked above lower tiers by every agent on the other side.
Examples include~\cite{BeyhaghiST17, AshlagiBKS17}.
\cite{Lee16} also considers a certain restricted
tiered model of cardinal utilities (which is incomparable
with our model), focusing on
which tier of agents match to which tier.
Our contribution to the literature is a detailed study
of ``soft tiers'', a natural special case of the
popularity preferences of~\cite{immorlica2015incentives, kojima2009incentives, gimbert2019popularity}.
In cases where each agent's utility for each match on the other side
is independent and identically distributed,
popularity preferences are the natural next step beyond
uniform markets, as they model situations where agents on each
side have significant but non-definitive variation in a~priori quality.
Our techniques build on the large body of work analyzing the
``proposal dynamics'' of deferred acceptance for random preferences,
such as \cite{wilson1972analysis,immorlica2015incentives,
ashlagi2017unbalanced,gimbert2019popularity}.
Our results give insight into how constant-factor preference
biases affect stable matching markets,
including the first explicit calculations of expected rank
beyond uniform markets.
The rest of the paper is organized as follows: Section~\ref{sectionDefsAndPreliminaries} offers basic definitions and preliminaries for our discussion. Section~\ref{sectionCouponCollector} studies the tiered coupon collector process, which serves as an important coupling process for the deferred acceptance algorithm. Section~\ref{sectionRankOfMen} and~\ref{sectionWomenRanks} present the core results of this paper, namely the average rank among tiers of men and women. Section~\ref{sectionSimulations} showcases some numerical experiments.
\section{Definitions and Preliminaries}\label{sectionDefsAndPreliminaries}
A matching market consists of a finite set of men $M$ and a finite set of women $W$. Each man (woman) has a complete and strict preference list over women (men). A matching is a mapping $\mu:M\cup W\to M\cup W$ such that: for every $m\in M$, $\mu(m)\in W$ (or $\mu(m)$ is undefined), for every woman $w\in W$, $\mu(w)\in M$ (or $\mu(w)$ is undefined), and for every $m\in M$ and $w\in W$, $\mu(m)=w$ if and only if $\mu(w)=m$. A matching $\mu$ is {\it stable} if no man-woman pair who are not matched in $\mu$ prefer each other to their matched partners.
It is well-known that there is a unique man-optimal stable matching, which can be found using the man-proposing deferred acceptance algorithm (DA).
While this algorithm does not fully specify an execution order, it is a
classical result that the order does not affect the final outcome.
\begin{algorithm}
\SetAlgoLined
\caption{(Man-Proposing) Deferred Acceptance Algorithm (DA)}
\label{algDA}
Initialize matching $\mu$ to be empty (i.e. every agent's partner is undefined)\;
Initialize $\mathcal{U}=M$ to be the set of all unmatched men\;
\While{$|\mathcal{U}|>0$}{
Choose any $m\in\mathcal{U}$\;
Let $m$ propose to his most preferred woman
$w$ to whom he has not made a proposal yet\;
\If{$w$ prefers $m$ to $\mu(w)$ (or if $\mu(w)$ is undefined)}{
\lIf{$\mu(w)$ is defined}{Add $\mu(w)$ to $\mathcal{U}$}
Remove $m$ from $\mathcal{U}$\;
Assign $\mu(w)=m$\;
}
}
\end{algorithm}
\begin{lemma}[\cite{gale1962college, mcvitie1970stable}]
\label{thrmDAExecutionOrderDoesntMatter}
The same proposals are made in every run of DA, regardless of which
man is chosen to propose at each step.
\end{lemma}
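A compact sketch of Algorithm~\ref{algDA}, together with a check of Lemma~\ref{thrmDAExecutionOrderDoesntMatter} and of stability (hypothetical function names; preference lists given as explicit permutations):

```python
import random

def deferred_acceptance(men_prefs, women_prefs, rng=None):
    """Man-proposing DA.  men_prefs[m] lists women in m's order of
    preference; women_prefs[w] lists men likewise.  If `rng` is given,
    the next proposer is chosen at random -- by the lemma above this
    cannot change the outcome."""
    n = len(men_prefs)
    w_rank = [{m: r for r, m in enumerate(p)} for p in women_prefs]
    next_prop = [0] * n    # index into each man's preference list
    match_w = [None] * n   # match_w[w] = current partner of woman w
    unmatched = list(range(n))
    while unmatched:
        i = rng.randrange(len(unmatched)) if rng is not None else len(unmatched) - 1
        m = unmatched.pop(i)
        w = men_prefs[m][next_prop[m]]
        next_prop[m] += 1
        cur = match_w[w]
        if cur is None or w_rank[w][m] < w_rank[w][cur]:
            match_w[w] = m
            if cur is not None:
                unmatched.append(cur)
        else:
            unmatched.append(m)
    return match_w

def is_stable(match_w, men_prefs, women_prefs):
    """Check that no man-woman pair prefer each other to their partners."""
    n = len(men_prefs)
    m_rank = [{w: r for r, w in enumerate(p)} for p in men_prefs]
    w_rank = [{m: r for r, m in enumerate(p)} for p in women_prefs]
    match_m = [None] * n
    for w, m in enumerate(match_w):
        match_m[m] = w
    return not any(
        m_rank[m][w] < m_rank[m][match_m[m]]
        and w_rank[w][m] < w_rank[w][match_w[w]]
        for m in range(n) for w in range(n))

gen = random.Random(0)
n = 8
men = [gen.sample(range(n), n) for _ in range(n)]
women = [gen.sample(range(n), n) for _ in range(n)]
mu = deferred_acceptance(men, women)
assert mu == deferred_acceptance(men, women, rng=random.Random(42))
assert is_stable(mu, men, women)
```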
We study the man-optimal stable matching in a class of tiered random markets, which will be defined below. We will assume that $|M| = |W|$
and that no agent finds any other agent on the other side unacceptable. We will also assume that each agent draws their preference list independently from the same underlying distribution as the other agents on their side, and moreover that these preferences are generated by repeatedly sampling
without replacement from a fixed distribution on the \emph{agents} of the other side.
In \cite{immorlica2015incentives, gimbert2019popularity},
this assumption is termed ``popularity-based preferences'', with the weight
of an agent in the distribution intuitively indicating their popularity for
agents on the other side.
Our main goal is to study randomized matching markets with \emph{a constant number of constant weight tiers} of agents on each side.
For this entire paper, we consider the tier structure to be defined by
fixed proportions $\bm{\epsilon}, \bm{\delta}$ of agents in each tier
and constant weights $\bm{\alpha}, \bm{\beta}$ for each tier,
and we investigate the outcome of the man-proposing DA as $n\to\infty$.
\begin{definition}
Consider constant vectors $\bm{\alpha}, \bm{\epsilon} \in \mathbb{R}^{k_1}_{> 0}$
and $\bm{\beta}, \bm{\delta} \in \mathbb{R}^{k_2}_{>0}$,
where $\|\bm{\epsilon}\|_1, \|\bm{\delta}\|_1=1$.
A \emph{tiered matching market} of size $n$ with respect to
$\bm{\alpha}, \bm{\epsilon}, \bm{\beta}, \bm{\delta}$ is defined
by generating agents' preference lists as follows:
\begin{itemize}
\item The set of $n$ women $W$ is divided into tiers $T_1, \ldots, T_{k_1}$,
of size $|T_i| = \epsilon_i n$ each\footnote{
Note that, for most vectors $\bm{\epsilon}, \bm{\delta}$,
many values of $n$ will produce tier sizes which are not integers.
However, as all our results are \emph{continuous} in
$\bm{\epsilon}, \bm{\delta}$ this is not a problem -- for any particular fixed
$n$, each tier size can be rounded in a way that effectively just changes
$\bm{\epsilon}, \bm{\delta}$ by a tiny amount, and our results will still
hold as written as $n\to\infty$.
}.
Define a distribution $\mathcal{W}$ on women such that a woman in tier $i$ is
selected with probability proportional to $\alpha_i$.
That is, the weight of $w\in T_i$ in $W$ is
$\alpha_i / (n\bm{\epsilon}\cdot\bm{\alpha})$ (which we often denote by $\pi_i$).
\item The set of $n$ men $M$ is divided into tiers
$T_1, \ldots, T_{k_2}$, of size $|T_j| = \delta_j n$ each.
Define a distribution $\mathcal{M}$ on men such that a man in tier $j$ is
selected with probability proportional to $\beta_j$.
That is, the weight of $m\in T_j$ in $M$ is
$\beta_j / (n\bm{\delta}\cdot\bm{\beta})$.
\end{itemize}
For each man $m$ independently,
women are repeatedly sampled from $\mathcal{W}$ without replacement,
and the order in which women are selected is $m$'s preference list.
Preferences for the women are analogously drawn over the distribution $\mathcal{M}$.
The rank that a man has for a woman $w$ is the index of $w$ on his preference list (where lower is better).
\end{definition}
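To make the sampling model concrete, the following Python sketch (all names hypothetical, not from any published artifact) generates the preference lists of a tiered market; tier sizes $\epsilon_i n$ and $\delta_j n$ are assumed to be integers:

```python
import random

def weighted_sample_without_replacement(weights, rng):
    """Return a permutation of range(len(weights)) obtained by repeatedly
    sampling indices proportionally to `weights`, without replacement."""
    remaining = list(range(len(weights)))
    w = list(weights)
    order = []
    while remaining:
        idx = rng.choices(range(len(remaining)), weights=w, k=1)[0]
        order.append(remaining.pop(idx))
        w.pop(idx)
    return order

def tiered_market(n, eps, alpha, delta, beta, rng):
    """Generate preference lists for a tiered matching market of size n.
    eps/alpha describe the women's tiers, delta/beta the men's tiers.
    Tier sizes eps_i * n and delta_j * n are assumed integral."""
    # Per-agent weights: a woman in tier i has weight alpha[i], etc.
    w_weights = [a for a, e in zip(alpha, eps) for _ in range(round(e * n))]
    m_weights = [b for b, d in zip(beta, delta) for _ in range(round(d * n))]
    men_prefs = [weighted_sample_without_replacement(w_weights, rng)
                 for _ in range(n)]
    women_prefs = [weighted_sample_without_replacement(m_weights, rng)
                   for _ in range(n)]
    return men_prefs, women_prefs, w_weights, m_weights
```

Each generated list is a full permutation of the other side, matching the assumption that no agent is unacceptable.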
We refer to each $\alpha_i$ as the \emph{weight} or \emph{public score}
of the women in tier $i$, and similarly for the men.
For simplicity of certain arguments, we assume that each $\alpha_i\ge 1$
and each $\beta_j\ge 1$ (although for clarity of our results, we do not
assume that the smallest weight is exactly $1$). We write $\alpha_{\min}$ for the weight of the bottom tier of women, and $\epsilon_{\min}$ for the corresponding tier proportion.
Using a simple generalization of the ``principle of deferred decisions''
used in \cite{Knuth76}, we can arrive at a
characterization of the random process of running DA with a tiered matching
market.
\begin{lemma}\label{thrmWomenDeferedDecisions}
The distribution of runs of DA
for a tiered matching market can be generated as follows:
For the men,
every time a man is chosen to propose, he samples a woman at random from
$\mathcal{W}$, and repeats this until he samples a woman who he has not yet
proposed to.
For the women,
suppose $w$ has seen proposals from a set of men $p(w)$,
and let $\Gamma_w = \sum_{m\in p(w)}\beta(m)$, where $\beta(m)$
denotes the public score of a man $m\in p(w)$.
Then if a proposal from a man $m_*$ with public score $\beta_*$
arrives, $w$ accepts the proposal from $m_*$
with probability
\[
\frac{\beta_*}{\beta_* + \Gamma_w}.
\]
\end{lemma}
\begin{proof}
The above formula gives the probability that $m_*$ is chosen
as $w$'s favorite out of the set of men $p(w)\cup\{m_*\}$.
The only additional observation we need to make is that
the probability that $m_*$ is the new
favorite is independent of the identity of the old favorite.
\end{proof}
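The acceptance rule in lemma~\ref{thrmWomenDeferedDecisions} is easy to check empirically. The sketch below (hypothetical helper names) verifies by Monte Carlo that accepting with probability $\beta_*/(\beta_* + \Gamma_w)$ agrees with the probability that the newcomer is the weighted favorite of $p(w)\cup\{m_*\}$:

```python
import random

def accepts(beta_star, gamma_w, rng):
    """Deferred-decisions rule: accept a new proposal of weight beta_star
    with probability beta_star / (beta_star + gamma_w)."""
    return rng.random() < beta_star / (beta_star + gamma_w)

def favorite_is_new(weights_seen, beta_star, rng):
    """Draw w's favorite among previously seen proposers plus the newcomer,
    each chosen proportionally to public score; True if it is the newcomer."""
    pool = list(weights_seen) + [beta_star]
    idx = rng.choices(range(len(pool)), weights=pool, k=1)[0]
    return idx == len(pool) - 1
```

For example, with prior proposals of weights $\{1,2\}$ (so $\Gamma_w = 3$) and $\beta_* = 2$, both empirical frequencies concentrate around $2/5$.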
We often call $\Gamma_w$ the total ``weight of proposals'' woman $w$ has seen at some point during DA.
\subsection{Deferred acceptance with re-proposals}
With respect to any popularity-based model of preferences,
we can define a procedure analogous to DA.
In our case, we will show that the difference between DA and this procedure
is indeed small.
\begin{definition}
Consider any random matching market with men's preferences determined by
sampling from a distribution $\mathcal{W}$ over women.
The \emph{deferred acceptance with re-proposals} algorithm
is defined as being identical to Algorithm~\ref{algDA}, except
\begin{itemize}
\item Every time a man is chosen to propose to a woman, he draws a
woman from $\mathcal{W}$ with replacement, and may propose more than once to a
single woman.
\item Women's preferences are consistent throughout proposals from the
same man (so if a woman rejected a man before, she will reject him
again).
\end{itemize}
\end{definition}
Since re-proposals are ignored, this process will always yield
the same outcome as Algorithm~\ref{algDA}.
\subparagraph*{Notation.}
We write $x = (1\pm \epsilon) y$ to mean
$(1-\epsilon) y \le x \le (1+\epsilon) y$.
We let $\epsilon$ denote an arbitrarily small constant
greater than $0$, while $\bm{\epsilon}$ and $\epsilon_i$
denote the tier parameters of the women.
We let $\alpha_{\min}$ denote the smallest public score
for the women's side, and $\epsilon_{\min}$ denotes
the corresponding tier proportion.
We let $\bm{v}\cdot \bm{w}$ denote the inner product
of vectors $\bm{v}, \bm{w}$.
We denote the exponential and geometric distributions by $\Exp(\lambda)$
and $\Geo(p)$, respectively.
We denote the fact that a random variable $X$ is a draw from a distribution
$D$ by $X\sim D$.
We use $X\preceq Y$ to denote the fact that $X$ is statistically dominated
by $Y$ (i.e. for all $t\in \mathbb{R}$, we have $\P{X \ge t}\le \P{Y\ge t}$).
We let $\Cov(X,Y)$ denote the covariance of $X$ and $Y$.
We write $f(n) = \widetilde O(g(n))$ if there exists a constant $k$
such that $f(n) = O( g(n) \log^k (g(n)))$.
\section{The Coupon Collector and the Total Number of Proposals}\label{sectionCouponCollector}
Fix a tier structure $\bm{\alpha}, \bm{\epsilon}$ corresponding to men's
preferences over the women.
Consider running deferred acceptance with re-proposals.
Recall that each man samples a woman in tier
$i$ with probability $\pi_i = \alpha_i/(n\bm{\epsilon}\cdot\bm{\alpha})$
each draw.
Define $\pi_{\min} = \alpha_{\min}/(n\bm{\epsilon}\cdot\bm{\alpha})$
as the probability of drawing a woman in the lowest tier
(and keep in mind that $\pi_{\min}$ scales like $O(1/n)$).
The core tool we use to reason about the total number of proposals in DA is
the classically studied coupon collector process.
In particular, we study this process when
coupons from different tiers are drawn with a constant-factor
difference in probability.
\begin{definition}
\label{defCouponCollector}
Given a probability distribution $(p_i)_{i\in [n]}$,
we define the \emph{coupon collector with unequal probabilities}
as follows: once every time step, an integer from $[n]$
is drawn independently and with replacement according to distribution
$(p_i)_{i\in [n]}$.
The coupon collector random variable with respect to $(p_i)_{i\in [n]}$
is defined as the number of total draws required
before every integer in $[n]$ has appeared at least once.
The coupon collector $T$ which we are interested in is defined
by taking the distribution $\mathcal{W}$ of men's preferences.
\end{definition}
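A direct simulation of this process is short; the following sketch (function name hypothetical) is included only as an illustration of the definition:

```python
import random

def coupon_collector_draws(weights, rng):
    """Count weighted draws with replacement from range(len(weights))
    until every index has appeared at least once."""
    n = len(weights)
    seen = set()
    draws = 0
    while len(seen) < n:
        seen.add(rng.choices(range(n), weights=weights, k=1)[0])
        draws += 1
    return draws
```

For two equal-size tiers with weights $2$ and $1$, the draw count is on the order of $(\bm{\epsilon}\cdot\bm{\alpha}/\alpha_{\min})\, n\ln n = 1.5\, n\ln n$, matching the expectation bound stated below.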
We will show in section~\ref{sectionTotalProposals} that, in our case,
this random process
is also very close to that of DA (without re-proposals).
For now, we simply bound the expectation of the coupon collector
(with the proof deferred to appendix~\ref{appendixCouponExpectation}).
Note that similar probabilistic problems have been considered before (see e.g.
\cite{brayton1963asymptotic, doumas2012couponsRevisited}) but we include
our own full proofs in appendix~\ref{appendixCouponExpectation}
and~\ref{appendixCouponConcentration} for completeness.
\begin{restatable}{theorem}{RestateThrmCouponExpectation}
\label{thrmCouponCentralConcentration}
Let $T$ denote the number of draws in a coupon collector process with
weights proportional to $\mathcal{W}$.
We have
\[ \E{T} = \big(1 \pm O(1/\ln n)\big)
\frac{\bm{\epsilon}\cdot\bm{\alpha}}{\alpha_{\min}} n \ln n.
\]
\end{restatable}
\begin{remark}\label{remarkLogEspMinErrorBody}
While we are mostly interested in the asymptotic performance of these
matching markets, we make one comment here that the above big-$O$ notation
hides a constant factor of order $\ln(1/\epsilon_{\min})$. For small
values of $\epsilon_{\min}$, this can be much larger than $\ln n$ for most
realistic market sizes.
Note that this error term already
showed up in the intuition given in section~\ref{sectionIntroduction},
where our estimate for the total number of proposals had an additive
term of $O(\ln(1/\epsilon_{\min})\, n)$. For more information,
see proposition~\ref{thrmCouponExpectationLowerBound}.
\end{remark}
\subsection{The Total Number of Proposals in Deferred Acceptance}
\label{sectionTotalProposals}
Let $S = S_n$ denote the total number of proposals made in a run of DA
with random preferences given by our tiered market.
As before, let $T = T_n$ denote the number of draws in a coupon collector
process with distribution $\mathcal{W}$.
As in many prior studies of randomized deferred
acceptance~\cite{knuth1990stable, pittel1989average},
our starting point is the following simple observation, which implies that
$S$ is statistically dominated by $T$:
\begin{proposition}
\label{thrmCouponIsDAReprops}\label{thrmStatisticalDominance}
The coupon collector random variable $T$ is distributed identically to
the total number of proposals made in deferred acceptance with
re-proposals (regardless of the preferences that women have for men).
Moreover, if $S$ is the number of proposals in DA,
then $S\preceq T$ (i.e. $S$ is statistically dominated by $T$).
\end{proposition}
\begin{proof}
First, recall that DA terminates as soon as every man is matched.
Observe that women never return
to being unmatched once they receive a single proposal.
Because the market is balanced (i.e. $|W|=|M|$), this means DA will
terminate as soon as every woman has been proposed to.
Moreover, because re-proposals are allowed, every proposal is
distributed exactly according to $\mathcal{W}$.
Thus, ignoring the identity of the man doing the proposing,
$T$ is distributed exactly according to the coupon collector random
process.
Furthermore, we can recover the exact distribution of $S$, the number of
proposals in DA, simply by ignoring each repeated proposal in $T$.
Thus, $S \le T$ for each run of deferred acceptance with re-proposals, so
$S\preceq T$.
\end{proof}
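The coupling in the proof can be checked by simulation. The sketch below (hypothetical names; women's full preference ranks are generated up front so that their decisions are consistent across re-proposals) runs man-proposing DA where every proposal is a fresh weighted draw over all women, and reports both the total number of draws $T$ and the number of distinct proposals $S$, so that $S\le T$ holds run by run:

```python
import random

def da_with_reproposals(w_weights, women_rank, rng):
    """Man-proposing DA with re-proposals in a balanced market.
    w_weights[w] is woman w's sampling weight; women_rank[w][m] is the
    rank woman w gives man m (lower is better).
    Returns (T, S): total weighted draws, and distinct proposals made."""
    n = len(w_weights)
    match_of_w = [None] * n               # current fiance of each woman
    proposed = [set() for _ in range(n)]  # proposed[m]: women m proposed to
    unmatched = set(range(n))             # currently unmatched men
    T = S = women_hit = 0
    while women_hit < n:                  # balanced market: DA terminates
        m = next(iter(unmatched))         # once every woman has a proposal
        w = rng.choices(range(n), weights=w_weights, k=1)[0]
        T += 1
        if w in proposed[m]:
            continue                      # repeat: ignored by the woman
        proposed[m].add(w)
        S += 1
        cur = match_of_w[w]
        if cur is None:
            women_hit += 1
            match_of_w[w] = m
            unmatched.discard(m)
        elif women_rank[w][m] < women_rank[w][cur]:
            match_of_w[w] = m
            unmatched.discard(m)
            unmatched.add(cur)            # cur re-enters the proposal pool
    return T, S
```

By lemma~\ref{thrmDAExecutionOrderDoesntMatter}, the (arbitrary) choice of which unmatched man proposes next does not affect the outcome.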
We proceed to show that the upper bound provided by $T$ is essentially
tight, i.e. there is not a big difference between $T$ and $S$.
The key step will be to upper bound the maximum number of distinct women any
man proposes to in $S$, and thus upper bound the
probability that any proposal
in $T$ is a repeat for the man making the proposal.
Crucially, this lemma will have to account for the preferences of the women
(which up until this point have been ignored, but which play a significant role
in the distribution of proposals in DA).
Recall that we denote the sizes of the tiers of the men by the vector
$\bm{\delta}$, and the public scores of the men in each tier by $\bm{\beta}$.
\begin{lemma} \label{thrmMaxPropsLastMan}
Consider running DA with all men except $m_*$,
and suppose that at most $O(n\ln n)$ proposals are made during this process.
Afterwards, consider $m_*$ joining and run DA until the end.
Then for any $C\ge 0$, with probability
$1 - 1/n^C$, the number of proposals made by $m_*$ is at most $O(C\ln^2 n)$.
\end{lemma}
\begin{proof}
This proof follows logic similar to that of the proof of Lemma B.4 (ii) in
\cite{ashlagi2017unbalanced}.
Suppose $m_*$ has public score $\beta_*$,
and that he proposes at the end (and $O(n\ln n)$ prior proposals have been made).
We proceed as follows:
\begin{enumerate}
\item When $m_*$ makes a proposal, he will choose a woman who he has
not yet proposed to. For some fixed proposal index $i$ of $m_*$,
denote the set of all women $m_*$ has not proposed to by $W_*$, and denote by $\mathcal{W}_*$ the distribution
of $m_*$'s next proposal, i.e. a sample over $W_*$ weighted by the
public scores $\alpha_i$.
For a woman $w$, denote her sample weight by $\alpha(w)$ and the
set of proposals she has received by $p(w)$.
Further denote by $\Gamma_w = \sum_{m\in p(w)} \beta(m)$ the sum of the
public scores of men who have proposed to $w$.
Suppose that $|W_*|\ge n/2$, i.e. that $m_*$ has not yet proposed to
over half the women.
Using the assumption that the total number of proposals made is at most
$O(n\ln n)$,
we can bound the expected total weight of proposals women have seen by
\[ \Es{w\sim \mathcal{W}_*}{\Gamma_w}
= \frac{\sum_{w\in W_*} \alpha(w)\Gamma_w}
{\sum_{w\in W_*} \alpha(w)}
\le \frac{ \alpha_{\max}\sum_{w\in W}\Gamma_w}
{|W_*|\alpha_{\min}}
\le \frac{ \alpha_{\max}\beta_{\max}\cdot O(n\ln n)}
{|W_*|\alpha_{\min}}
\le O(\ln n).
\]
Thus, by lemma~\ref{thrmWomenDeferedDecisions}, the probability that
the proposal by $m_*$ will be accepted is
\[
p_1
:= \Es{w \sim \mathcal{W}_*}{\frac{\beta_*}{\beta_* + \Gamma_w}}
\ge \frac{\beta_*}{\beta_* + \Es{w\sim \mathcal{W}_*}{\Gamma_w}}
\ge \Omega(1/\ln n),
\]
where the first inequality is due to Jensen's inequality.
\item If $m_*$ proposes to $w$ and is accepted, then the subsequent
rejection chain can either end at the last woman without proposals,
$w_{\mathrm{last}}$,
or cycle back to $w$, who this time rejects $m_*$. Notice that
for each subsequent proposal, the ratio between the probability that it
goes to $w_{\mathrm{last}}$
(in which case the process will be terminated) and the
probability that it returns to $w$ is at most
$\alpha_{\max}:\alpha_{\min}$ (and possibly less if the proposing man
has already proposed to $w$). Hence, the probability that the chain
ends at the last woman $w_{\mathrm{last}}$ is bounded below by
\[p_2 :=
\frac{\alpha_{\min}}{\alpha_{\max}+\alpha_{\min}}
\ge \Omega(1).
\]
Note that this ignores the chance that a new proposal to $w$ is
rejected, but it still suffices for a lower bound.
\item \label{itemMaxPropsLastProbWrapup}
The probability that $m_*$ makes more than $K\ln^2 n$
proposals is thus bounded above by
\[ (1-p_1p_2)^{K\ln^2 n} \leq \exp(-p_1p_2 K\ln^2 n)
= \exp(-\Omega(K\ln n))
\le n^{-C}
\]
as long as we choose $K = \Omega(C)$ large enough.
\end{enumerate}
\end{proof}
\begin{corollary}\label{thrmMaxIndividualProposals}
For any constant $C\ge 1$,
with probability $1 - 1/n^C$, the maximum number of proposals made by any
man in DA is $O(C\ln^2 n)$.
\end{corollary}
\begin{proof}
By \ref{thrmStatisticalDominance} and \ref{thrmCouponUpperTail},
the total number of proposals made in DA is $O(C n\ln n)$ with probability
$1 - 1/n^C$. In particular, if we consider any $m_*$ and let all other
agents propose, this will be true.
Recall that by lemma~\ref{thrmDAExecutionOrderDoesntMatter},
DA is independent of the order in which men are chosen to propose.
Thus, for each man $m_*$ we can apply lemma~\ref{thrmMaxPropsLastMan} to get
that, with probability $1 - 1/n^{C+1}$, $m_*$ makes fewer than $O((C+1)\ln^2 n)=O(C\ln^2 n)$
proposals. Taking a union bound over the $n$ men gets the desired result.
\end{proof}
\begin{remark}
Both of the above results hold for deferred acceptance with re-proposals
as well as deferred acceptance.
Indeed, even with re-proposals, deferred acceptance will be independent
of the order of proposals (as re-proposals are ignored by the women).
Moreover, the logic required to prove points 1. and 2. of the proof of
lemma~\ref{thrmMaxPropsLastMan} is only easier to prove when men sample
over all of $W$ as opposed to just the set $W_*$.
\end{remark}
The above result is enough to show that
theorem~\ref{thrmCouponCentralConcentration} holds for DA as well as for
the coupon collector,
because repeated proposals are at most a $O(\ln^2 n / n) = o(1)$
fraction of total proposals in deferred acceptance
with re-proposals. We defer the proof to
appendix~\ref{appendixTotalProposalsExpectation}.
\begin{restatable}{theorem}{RestateThrmDACentralConcentration}
\label{thrmDACentralConcentration}
Let $S$ be the total number of proposals made in DA
with tiers of women $\bm{\epsilon}, \bm{\alpha}$,
and arbitrary constant tiers on the men.
We have
\[ \E{S} = \big(1 - O(\ln^2 n/n)\big) \E{T}
= \big(1 \pm O(1/\ln n)\big)
\frac{\bm{\epsilon}\cdot\bm{\alpha}}{\alpha_{\min}} n \ln n.
\]
\end{restatable}
\section{Rank Achieved by the Men}
\label{sectionRankOfMen}
Up until this point, our arguments have only crudely considered the
preferences women have for men.
Due to the asymmetry across the different tiers, this
means we cannot yet calculate the expected rank men get.
Consider a man $m$ in tier $j$. Our main goal is to prove that the rank of $m$ is inversely proportional to $\beta_j$.
As in~\ref{thrmMaxPropsLastMan},
the core tool of our proof will be the fact that deferred acceptance is
independent of execution order (by~\ref{thrmDAExecutionOrderDoesntMatter}),
and thus we can wait until all other men
have finished proposing and found a match before letting $m$ propose.
Once this is done, the major ideas are
\begin{enumerate}
\item Suppose $m$ has public score $1$, and define
\[ p = \Es{w\sim\mathcal{W}}{\P{\text{$w$ accepts a proposal from $m$}}}. \]
Note that, if $m$ were able to propose to a woman independently
multiple times, the number of proposals until $m$ gets his first
acceptance would be distributed exactly according to $\Geo(p)$,
and the expected value would be $1/p$.
We show that (because men make far fewer than $n$ proposals) the
difference due to re-proposals is not large.
\item Because $m$ is the last man to propose, most women have already
seen many proposals and arrived at a decent match.
When $m$ gets his first acceptance, he should thus be likely to stay
where he is. We show that, while the probability of $m$ proposing to
more women is non-negligible,
it still contributes only $O(1)$ in expectation.
So $m$'s expected rank is $1/p$ up to lower-order terms.
\item Another consequence of a woman $w$ receiving a large number
of proposals is the following:
\begin{align*}
\P{\text{$w$ accepts a proposal from $m'$ with weight $\beta$}}
\qquad\qquad\qquad\\
\approx \beta\cdot
\P{\text{$w$ accepts a proposal from $m$ with weight $1$}},
\end{align*}
simply by lemma~\ref{thrmWomenDeferedDecisions} and the fact
that $\beta / (\beta + \Gamma_w) \approx \beta \cdot 1/(1+\Gamma_w)$
for $\Gamma_w$ (the sum of public scores of men who proposed to $w$) large.
Thus, if $m$ had public score $\beta$, the effective value of $p$ would be
approximately $\beta p$, and the expected rank of $m$
would become approximately $1/(\beta p)$.
In other words, while we are not able to calculate $p$ directly, we
show that
$p$ scales properly with $m$'s score.
\item \label{itemIssueTierSelectionPrior}
Finally, we prove that the above holds for most sequences of
proposals of men before $m$,
and thus holds in expectation over the entire execution of DA.
Note that the distribution of proposals before $m$ changes slightly
depending on which tier $m$ is chosen from, but in a large market, we
do not expect this to make a big difference.
\end{enumerate}
The biggest difference between the above proof sketch
and its implementation is that we focus on \emph{two} men proposing
at the end of DA. This serves to address
point~\ref{itemIssueTierSelectionPrior} above -- we are able to show that,
for the vast majority of sequences of proposals before the last two men,
their expected ranks are proportional to the ratio of their scores.
Thus, this ratio holds in expectation over all of DA.
Focusing on two men also allows us to bound the \emph{correlation} between
the two men's ranks, which is crucial for our concentration results.
In our proof, we also formalize what it means for all men other than two to
propose, with the notion of a ``partial matching state''.
Moreover, we give the term \emph{smooth} to those states in which the proof
sketch above goes through. Most crucially, in smooth matching states,
``most women have received a lot of proposals'',
so that the reasoning in points~2 and~3 is valid.
Additionally, to address certain technicalities (such as being able to
bound the magnitude of the expected number of proposals) we define smooth
matching states to not have too many proposals in total.
\subsection{Smooth matching states}
\begin{definition}\label{defPartialMatchingState}
Given a set of men $L$,
we define the \emph{partial matching state excluding $L$},
denoted $\mu_{-L}$, as follows:
Run DA with men in $M\setminus L$ proposing to $W$,
and keep track of which proposals were made.
More specifically, if $\mu$ is the (partial) matching
resulting from running DA with a set of men $M\setminus L$
and set of women $W$, and
$P = \{ (m_{i_{\ell}}, w_{j_\ell}) \}_{\ell}$
is the set of all tuples $(m_i, w_j)$
where $m_i$ proposed to $w_j$ during this process,
then $\mu_{-L} = (\mu, P)$.
In a random matching market, we consider this state as a random
variable. In a tiered random matching market, to specify this random
variable, it suffices to give a multiset of tiers which the men in $L$
belong to.
For a fixed $\mu_{-L}$,
denote by $\Gamma_w$ the total sum of
weights which woman $w$ received in $P$.
\end{definition}
Note that the state $\mu_{-L}$ keeps track of which proposals
have been made (in addition to which current matches are formed)
before the men in $L$ propose.
\begin{definition}\label{defSmoothMatchingState}
We call a partial matching state $\mu_{-L}$
\emph{smooth} if the following hold for some constants
$C_1, C_2, C_3 > 0$:
\begin{enumerate}
\item At most $C_1 n\ln n$ proposals were made to women overall.
\item At most $n^{1-C_2}$ women have received fewer than $C_3 \ln n$
proposals.
\end{enumerate}
\end{definition}
The constants $C_1, C_2, C_3$ in the above depend on the tier structure,
and can simply be chosen such that the following proposition holds.
Our arguments will go through if smoothness holds with respect to
any $C_1, C_2, C_3$ which are held constant as $n\to \infty$.
The proof is given in appendix~\ref{appendixReachingSmooth}.
\begin{restatable}{proposition}{RestateThrmSmoothWHP}
\label{thrmSmoothWHP}
Let $L = \{m_1, m_2\}$ be any pair of men.
After running deferred acceptance,
$\mu_{-L}$ is smooth with probability $1 - n^{-\Omega(1)}$.
\end{restatable}
Once we know that $\mu_{-L}$ is smooth, our two main tasks are to show that
men's ranks scale inverse-proportionally to their score,
and that the ranks of different men do not correlate too highly.
These are the main technical novelties of the paper. The exact details are given in Appendix~\ref{sectionSmooth}.
\begin{restatable}{proposition}{RestateThrmSmoothRanksScale}
\label{thrmSmoothedRanksProportional}
Suppose $\mu_{-L}$ is smooth, and
let $r_1$ and $r_2$ be the ranks of $m_1$ and $m_2$ after running DA with
$m_1$ and $m_2$ starting from $\mu_{-L}$.
We have
\begin{equation*}
\mathbb{E}_{L}[r_1]
= \big(1 \pm O(1/\ln n)\big)\frac{\beta_2}{\beta_1} \mathbb{E}_{L}[r_2],
\end{equation*}
where we use $\Es{L}{}$ to denote taking an expectation over the random
process of $m_1, m_2$ proposing in DA after starting from state $\mu_{-L}$.
\end{restatable}
\begin{restatable}{proposition}{RestateThrmSmoothRanksCovariance}
\label{thrmSmoothCovar}
Suppose $\mu_{-L}$ is smooth, and
let $r_1$ and $r_2$ be the ranks of $m_1$ and $m_2$ after running DA with
$m_1$ and $m_2$ starting from $\mu_{-L}$.
Then we have $\Cov(r_1, r_2) = O(\ln^{3/2}n)$.
\end{restatable}
\subsection{Expected rank of the men}
\label{sectionRankOfMenConclusion}
In this subsection, we show that expected rank scales
inverse-proportionally to public score overall, not just under smooth matching states. This allows us to compute the expected rank of the men.
The proofs (deferred to appendix~\ref{appendixMenRanksProof})
follow by carefully keeping track of the (limited)
effect of non-smooth matching states on the expectation.
\begin{restatable}{proposition}{RestateThrmMenRanksProportional}
\label{thrmMenRanksProportional}
Let $r_i$ and $r_j$ denote the rank of a man in tiers $i$ and $j$.
Then we have
\[ \E{ r_i } = \big(1 \pm O(1/\ln n)\big)
\frac{\beta_j}{\beta_i}\E{r_j}.
\]
\end{restatable}
\begin{restatable}{theorem}{RestateThrmMenRanksExpectation}
\label{thrmMenRanksExpectation}
Let $\bm{\beta}^{-1}$ denote the vector $(1/\beta_i)_{i}$.
For each tier $j$, the rank $r_j$ of men in tier $j$ has expectation
\[ \E{r_j}
= \big(1 \pm O(1/\ln n)\big)
\frac{\E{S}}{(n\bm{\delta}\cdot\bm{\beta}) \beta_j }
= \big(1 \pm O(1/\ln n)\big)
\frac{\bm{\epsilon}\cdot\bm{\alpha}}{\alpha_{\min}}\cdot
\frac{1}{(\bm{\delta}\cdot\bm{\beta}^{-1})}\cdot
\frac{\ln n}{\beta_j}.
\]
\end{restatable}
Finally, we also use our results on the covariance of men's ranks to prove concentration. We defer the proof to appendix~\ref{appendixMenRanksProof}. At a high level, the proof follows simply because the weak correlation implied by~\ref{thrmSmoothCovar} means that the variance of the average of the ranks is lower-order (compared to its expectation), so Chebyshev's inequality can be used.
\begin{restatable}{theorem}{RestateThrmMenRanks}
\label{thrmMenRanksCentralConcentration}
For any tier $j$, let
$\overline R^{M}_j = (\delta_j n)^{-1} \sum_{m\in T_j} r_m$ denote the average rank of
men in tier $j$.
Then, for any $\epsilon > 0$,
\[ \overline R^{M}_j = (1 \pm \epsilon)
\frac{\bm{\epsilon}\cdot\bm{\alpha}}{\alpha_{\min}}\cdot
\frac{1}{(\bm{\delta}\cdot\bm{\beta}^{-1})}\cdot
\frac{\ln n}{\beta_j}
\]
with probability approaching $1$ as $n\to\infty$.
\end{restatable}
\section{Expected rank of the women and the distribution of match types}
\label{sectionWomenRanks}
\subsection{Expected rank of women}
We saw in section~\ref{sectionRankOfMenConclusion}
that men achieve ranks proportional to the inverse
of their public scores. In this section, we turn to the women.
To study the rank the women achieve, we
need to reason about the number of proposals women
receive on average.
By theorem~\ref{thrmMenRanksCentralConcentration}, we expect that
for each tier $j$ of men, the $\delta_j n$ men make a total of approximately
\[ \frac{\delta_j \beta_j^{-1}}{\bm{\delta}\cdot\bm{\beta^{-1}}}
\cdot\frac{\bm{\alpha}\cdot\bm{\epsilon}}{\alpha_{\min}}n \ln n
\]
proposals.
Each of these proposals goes to a woman in tier $i$ with
probability $\pi_i = \alpha_i / (n\bm{\epsilon}\cdot\bm{\alpha})$,
so we expect such a woman to receive approximately
$ (\delta_j \beta_j^{-1})/(\bm{\delta}\cdot\bm{\beta^{-1}})
\cdot(\alpha_i/{\alpha_{\min}})\ln n
$
proposals from men in tier $j$.
Each of these men has public score $\beta_j$, so we expect $\Gamma_w$,
the total sum of public scores of men proposing to $w$,
to be roughly
\[ \Gamma_w \approx
\sum_j \beta_j\frac{\delta_j\beta_j^{-1}}{\bm{\delta}\cdot\bm{\beta^{-1}}}
\cdot\frac{\alpha_i}{\alpha_{\min}}\ln n
= \frac{\alpha_i \ln n}
{\alpha_{\min}(\bm{\delta}\cdot\bm{\beta^{-1}})}.
\]
It is not immediately clear how the above
value of $\Gamma_w$ should translate to the \emph{rank}
that $w$ gets. Unlike in the case where men are uniform,
we cannot simply divide $n$ by the number of proposals
which $w$ receives.
Indeed, suppose a woman $w$ receives exactly the total
sum of weight $\Gamma_w$ predicted above. What should her rank be?
This is essentially the following:
across all tiers of $\delta_j n$ men each,
how many do we expect to beat her best proposal so far?
The probability that $w$ ranks a man $m$ higher than
her match, when
viewed according to~\ref{thrmWomenDeferedDecisions},
is a function only of the weight $\beta(m)$ of
$m$ and the weight of proposals $\Gamma_w$ which $w$ received.
Specifically, this probability is
$\beta(m) / (\beta(m) + \Gamma_w) \approx \beta_j / \Gamma_w$.
Summing this
across all the men, we get
\[
\E{r_w}
\approx \sum_m \frac{\beta(m)}{\beta(m) + \Gamma_w}
\approx \frac{n \bm{\delta}\cdot\bm{\beta}}{\Gamma_w}
\approx (\bm{\delta}\cdot\bm{\beta})
(\bm{\delta}\cdot\bm{\beta}^{-1})
\frac{\alpha_{\min}}{\alpha_i}
\cdot \frac{n}{\ln n}.
\]
Note that this ignores the fact that a woman will never rank $m$ higher than her match if that $m$ already proposed to
her during DA. But since $w$ likely receives only $O(\ln n) \ll n/\ln n$ proposals, the difference is not noticeable.
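The approximation $\sum_m \beta(m)/(\beta(m)+\Gamma_w) \approx n(\bm{\delta}\cdot\bm{\beta})/\Gamma_w$ used above can be sanity-checked numerically; the sketch below uses illustrative parameters, not values taken from any specific market:

```python
def expected_rank_heuristic(n, delta, beta, gamma_w):
    """Exact sum over men (by tier) of beta_j / (beta_j + Gamma_w), versus
    the approximation n * (delta . beta) / Gamma_w used in the sketch."""
    exact = n * sum(d * b / (b + gamma_w) for d, b in zip(delta, beta))
    approx = n * sum(d * b for d, b in zip(delta, beta)) / gamma_w
    return exact, approx
```

For $\Gamma_w$ of order $\ln n$ and constant weights $\beta_j$, the relative gap is $O(\beta_{\max}/\Gamma_w)$, which vanishes as $n\to\infty$.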
It turns out that, with a detailed probabilistic
analysis, the above proof sketch goes through.
The details are given in appendix~\ref{appendixWomenRanks}.
\begin{restatable}{theorem}{RestateThrmWomenRanks}
\label{thrmWomenRanks}
Let ${\overline R}^{W}_i = (\epsilon_i n)^{-1} \sum_{w\in T_i} r_w$
denote the average rank of women in tier $i$.
For all $\epsilon>0$, we have
\[ {\overline R}^{W}_i = (1 \pm \epsilon)
(\bm{\delta}\cdot\bm{\beta})(\bm{\delta}\cdot\bm{\beta}^{-1}) \frac{\alpha_{\min}}{\alpha_i}
\frac{n}{\ln n}
\]
with probability approaching $1$ as $n\to\infty$.
\end{restatable}
\subsection{The distribution of match types}
Fix a woman $w$ in tier $i$.
We now study the probability that $w$ is matched to a man
from some tier $j$.
In the previous section, we argued that with high probability
$w$ receives approximately
\[ \frac{\delta_j \beta_j^{-1}}{\bm{\delta}\cdot\bm{\beta^{-1}}}
\cdot\frac{\alpha_i}{\alpha_{\min}}\ln n
\]
proposals from men in tier $j$.
Thus, the contribution to $\Gamma_w$ (the total weight of proposals $w$
received) from men in tier $j$ is
\[ \Gamma_{j\to w} \approx
\frac{\delta_j}{\bm{\delta}\cdot\bm{\beta^{-1}}}
\cdot\frac{\alpha_i}{\alpha_{\min}}\ln n
\approx \delta_j \Gamma_w.
\]
Moreover, it turns out that, with high probability,
the above holds up to $(1\pm\epsilon)$
for all tiers $j$ simultaneously.
Regardless of the order in which $w$ saw proposals,
the probability that $w$'s favorite proposal
came from a man in tier $j$ is $\Gamma_{j\to w}/\Gamma_w$.
Thus, this probability is approximately $\delta_j$.
We formally implement this proof in appendix~\ref{appendixMatchType}.
\begin{restatable}{theorem}{RestateThrmMatchTypes}
\label{thrmMatchTypes}
Consider an arbitrary tier $i$ of women and $j$ of men.
For all $\epsilon>0$, there is an $n$ large enough such
that the probability that a woman in tier $i$ matches
to a man in tier $j$ is $(1\pm \epsilon) \delta_j$.
\end{restatable}
\section{Computational Experiments on Expected Rank}
\label{sectionSimulations}
In this section, we provide computational experiments to back up the main
features of our theorems -- the estimates for the rank which agents on each
side achieve. First, we find that, as the theory suggests, men have a large
advantage in rank compared to women, with men getting ranks
of order $\ln n$ and women of order $n/\ln n$.
More interestingly,
these two sets of simulations together isolate and
investigate all of the major constant factors
present in our estimates.
We find that, overall, our estimates correspond
to the empirical averages.
\subsection{ Women divided into tiers }
Figures \ref{fig:vary_women_tier_avg_men},
\ref{fig:vary_women_tier_avg_women_1}, and
\ref{fig:vary_women_tier_avg_women_2} showcase the expected rank in a
market where women are broken into two tiers,
while men have a constant public score.
In such a market, theorems~\ref{thrmMenRanksCentralConcentration}
and~\ref{thrmWomenRanks} predict that the expected rank of men,
and the expected rank of women in tier $i$, are respectively:
\[
\frac{\bm{\epsilon}\cdot\bm{\alpha}}{\alpha_{\min}}\cdot
\ln n
\qquad\text{ and }\qquad
\frac{\alpha_{\min}}{\alpha_i}\cdot \frac{n}{\ln n}.
\]
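These predictions are simple to evaluate; the sketch below (hypothetical function name) computes them for the two-tier setup of this section:

```python
import math

def predicted_ranks(n, eps1, alpha1):
    """Predicted average ranks for men, tier-1 women, and tier-2 women in
    a market with uniform men and women tiers (eps1, 1 - eps1) of weights
    (alpha1, 1), per the displayed formulas (so alpha_min = 1)."""
    dot = eps1 * alpha1 + (1.0 - eps1)        # eps . alpha
    men = dot * math.log(n)                   # expected rank of men
    women_top = (1.0 / alpha1) * n / math.log(n)  # tier-1 women
    women_bot = n / math.log(n)               # tier-2 women
    return men, women_top, women_bot
```

For instance, at $n=1000$, $\epsilon_1=0.5$, $\alpha_1=3$, men's predicted rank is $2\ln 1000$ and top-tier women do three times better than bottom-tier women.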
In our experiment, there are $n=1000$ agents on each side.
The tiers of women have fraction
$\bm{\epsilon} = (\epsilon_1, 1-\epsilon_1)$, and weight
$\bm{\alpha} = (\alpha_1, 1)$, where tier $1$ is the ``top tier''
(i.e. $\alpha_1 > 1$ and $\alpha_{\min} = 1$).
Each plot has $\alpha_1$ ranging from 1 to 10 at each multiple of $0.25$,
and $\epsilon_1$ ranging from 0.025 to 0.975 at each multiple of $0.025$.
In each plot, we show the average rank in the result of DA, i.e. the
man-optimal stable matching, as this is the quantity studied in our
theorems.
Figure \ref{fig:vary_women_tier_avg_men} shows
men's average rank of partners, which in this market is approximately
the total number of proposals divided by $n$, because men are uniform.
Note that our prediction becomes increasingly inaccurate as $\epsilon_1$
approaches $1$, even though for any fixed constant $\epsilon_1$,
we have convergence by theorem~\ref{thrmMenRanksCentralConcentration}.
This is natural because, as per remarks~\ref{remarkLogEspMinErrorBody}
and~\ref{remarkLogEspMinErrorAppendix}, our estimates break
down for fixed $n$ as $\epsilon_{\min}\to 0$.
Indeed, we find that the total number of proposals
is much less than our estimate in cases where $\epsilon_{\min}$
is small, as proposition~\ref{thrmCouponExpectationLowerBound}
simply proves that the true average is at least
our estimate minus $O(\ln(1/\epsilon_{\min})n)$.
This comment also applies to figures~\ref{fig:vary_women_tier_avg_women_1}
and~\ref{fig:vary_women_tier_avg_women_2}.
Accounting for cases with very small tiers (say, tiers which
grow sublinearly with $n$) is an intriguing future research direction.
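The man-proposing deferred acceptance procedure underlying these experiments can be sketched as follows. This is a minimal illustration of our own (not the code used for the figures); preference lists are sampled with probability proportional to the public scores of the other side, and all function names are ours.

```python
import random

def sample_pref(scores, rng):
    """Sample a preference order: repeatedly draw a remaining agent
    with probability proportional to its public score."""
    pool = list(range(len(scores)))
    order = []
    while pool:
        j = rng.choices(range(len(pool)), weights=[scores[i] for i in pool])[0]
        order.append(pool.pop(j))
    return order

def deferred_acceptance(men_prefs, women_ranks):
    """Man-proposing DA. women_ranks[w][m] is the rank woman w assigns
    to man m (smaller is better). Returns engaged_to[w] = man matched to w."""
    n = len(men_prefs)
    next_choice = [0] * n        # next woman each man will propose to
    engaged_to = [None] * n
    free_men = list(range(n))
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        cur = engaged_to[w]
        if cur is None:
            engaged_to[w] = m
        elif women_ranks[w][m] < women_ranks[w][cur]:
            engaged_to[w] = m
            free_men.append(cur)
        else:
            free_men.append(m)
    return engaged_to

# A small two-tier instance: top-tier women have score alpha_1 = 5.
rng = random.Random(0)
n = 30
alpha = [5.0] * 10 + [1.0] * 20   # women's public scores (two tiers)
beta = [1.0] * n                  # men's public scores (uniform)
men_prefs = [sample_pref(alpha, rng) for _ in range(n)]
women_ranks = [{m: r for r, m in enumerate(sample_pref(beta, rng))}
               for _ in range(n)]
engaged_to = deferred_acceptance(men_prefs, women_ranks)
match_of_man = {m: w for w, m in enumerate(engaged_to)}
avg_rank_men = sum(men_prefs[m].index(match_of_man[m]) + 1
                   for m in range(n)) / n
```

Since men are uniform here, `avg_rank_men` is approximately the total number of proposals divided by $n$, the quantity plotted in figure~\ref{fig:vary_women_tier_avg_men}.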
\begin{figure}[ph]
\centering
\includegraphics[scale=0.45]{Figures/VaryWomenTiers_AvgRkMen.png}
\caption{
Men's average rank under DA.
The left panel computes the prediction
$(\bm{\epsilon}\cdot\bm{\alpha}/\alpha_{\min}) \ln n = (\bm{\epsilon}\cdot\bm{\alpha}) \ln n$,
while the right panel is an average over 200 realizations.
}\label{fig:vary_women_tier_avg_men}
\includegraphics[scale=0.45]{Figures/VaryWomenTiers_AvgRkWomen1.png}
\caption{
Top tier women's average rank under DA.
The left panel computes the prediction
$(\alpha_{\min}/\alpha_i) (n / \ln n)
= (1/\alpha_1) (n / \ln n)$, while the
right panel is an average over 200 realizations.
}\label{fig:vary_women_tier_avg_women_1}
\includegraphics[scale=0.45]{Figures/VaryWomenTiers_AvgRkWomen2.png}
\caption{
Bottom tier women's average rank under DA.
Our prediction is the constant $n/\ln n$
as $\bm{\alpha}, \bm{\epsilon}$ change,
whereas the right panel is an average over 200 realizations.
}\label{fig:vary_women_tier_avg_women_2}
\end{figure}
\subsection{ Men divided into tiers }
Figures~\ref{fig:vary_men_tier_avg_men_1},
\ref{fig:vary_men_tier_avg_men_2}, and~\ref{fig:vary_men_tier_avg_women}
showcase the expected rank in a
market where men are broken into two tiers,
while women have a constant public score.
In such a market, our prediction for the expected
rank of men in tier $j$
and the expected rank of women are respectively:
\[
\frac{1}{(\bm{\delta}\cdot\bm{\beta}^{-1})}\cdot
\frac{\ln n}{\beta_j}
\qquad\text{ and }\qquad
(\bm{\delta}\cdot\bm{\beta})(\bm{\delta}\cdot\bm{\beta}^{-1}) \cdot
\frac{n}{\ln n}.
\]
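These two predictions are straightforward to evaluate numerically. The helper below is our own illustration (hypothetical names), evaluated at the parameters used in this section ($\bm{\delta}=(0.5,0.5)$, $\bm{\beta}=(3,1)$, $n=1000$):

```python
import math

def predicted_ranks(delta, beta, n):
    """Predicted average ranks when men fall into tiers with fractions
    delta and public scores beta, and women are uniform."""
    d_dot_binv = sum(dj / bj for dj, bj in zip(delta, beta))
    d_dot_b = sum(dj * bj for dj, bj in zip(delta, beta))
    men = [math.log(n) / (bj * d_dot_binv) for bj in beta]  # one entry per tier j
    women = d_dot_b * d_dot_binv * n / math.log(n)
    return men, women

men_pred, women_pred = predicted_ranks((0.5, 0.5), (3.0, 1.0), 1000)
```

For these parameters $\bm{\delta}\cdot\bm{\beta}^{-1}=2/3$ and $\bm{\delta}\cdot\bm{\beta}=2$, so the predicted ranks are $\ln n/2$ for top-tier men, $1.5\ln n$ for bottom-tier men, and $(4/3)\,n/\ln n$ for women.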
\begin{figure}[ph]
\centering
\includegraphics[scale=0.45]{Figures/VaryMenTiers_AvgRkMen1.png}
\caption{
Top tier men's average ranks in DA.
The left panel computes the prediction
$\ln n / (\beta_1\bm{\delta}\cdot\bm{\beta}^{-1})$,
while the right panel is an average over 200 realizations.
}\label{fig:vary_men_tier_avg_men_1}
\includegraphics[scale=0.45]{Figures/VaryMenTiers_AvgRkMen2.png}
\caption{
Bottom tier men's average ranks in DA.
The left panel computes the predicted value
$\ln n / (\beta_{\min}\bm{\delta}\cdot\bm{\beta}^{-1})
= \ln n / (\bm{\delta}\cdot\bm{\beta}^{-1})$,
while the right panel is an average over 200 realizations.
}\label{fig:vary_men_tier_avg_men_2}
\includegraphics[scale=0.45]{Figures/VaryMenTiers_AvgRkWomen.png}
\caption{
Women's average ranks in DA.
The left panel computes the prediction
$(\bm{\delta}\cdot\bm{\beta})(\bm{\delta}\cdot\bm{\beta}^{-1})(n / \ln n)$,
while the right panel is an average over 200 realizations.
}\label{fig:vary_men_tier_avg_women}
\end{figure}
We again take $n=1000$ agents on each side.
The tiers of men have fraction
$\bm{\delta} = (\delta_1, 1-\delta_1)$
and weight $\bm{\beta} = (\beta_1, 1)$, with $\beta_1 > 1$.
Each plot takes $\beta_1$ ranging from 1 to 10 at each multiple of $0.25$, and $\delta_1$ ranging from 0.025 to 0.975 at each multiple of 0.025.
Because the women's side is balanced, the number of proposals in these
markets does not suffer the loss of accuracy observed in certain
parameter settings of the previous regime (when $\epsilon_{\min}$
was small).
However, lower order terms still make a visible difference,
especially in the rank achieved by the women.
\subsection{Distribution of matched pairs among tiers}
\label{sectionDistributionMatchExperiment}
As we have seen from proposition~\ref{thrmMenRanksProportional}, an individual man from tier $i$ has a better expected rank of partner than a man from tier $j$ whenever $\beta_i > \beta_j$. In our last experiment, we take a macro viewpoint and explore the distribution of matched pairs across tiers on both sides of the man-optimal stable matching in a tiered market.
We demonstrate this effect by considering a sequence of balanced markets with two equal-sized tiers on each side, i.e. $\bm{\delta}=\bm{\epsilon}=(0.5,0.5)$, with public scores $\bm{\beta}=(3,1)$ and $\bm{\alpha}=(5,1)$ for men and women, respectively. The market size $n$ grows from $2^4$ to $2^{18}$ through the integer powers of 2. In such markets, the distribution of matched pairs is characterized entirely by the fraction of men in tier 1 who are matched to women in tier 1, denoted $m_{11}$. For each market configuration in the sequence, we simulate 1,000 realizations of man-proposing deferred acceptance and record the value of $m_{11}$ for each realization. The results are shown in figure~\ref{fig:distribution_of_pairs}.
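Given a matching and the tier assignments, the statistic $m_{11}$ is a simple count. The sketch below is our own illustration (hypothetical names; `match_of_man` maps each man to his matched woman):

```python
def fraction_top_top(match_of_man, men_tier, women_tier):
    """m_11: the fraction of tier-1 men matched to tier-1 women."""
    top_men = [m for m, t in enumerate(men_tier) if t == 1]
    hits = sum(1 for m in top_men if women_tier[match_of_man[m]] == 1)
    return hits / len(top_men)

# Toy example: men 0,1 are tier 1; man 0 matches woman 0 (tier 1),
# man 1 matches woman 2 (tier 2), so m_11 = 1/2.
m11 = fraction_top_top({0: 0, 1: 2, 2: 1, 3: 3}, [1, 1, 2, 2], [1, 1, 2, 2])
```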
\begin{figure}[h]
\centering
\includegraphics[scale=0.6]{Figures/DistributionPairs_b1-3_a1-5.png}
\caption{Percentage of tier 1 men matched to tier 1 women under the man-optimal outcome in markets with parameters $\bm{\delta}=\bm{\epsilon}=(0.5,0.5)$, $\bm{\beta}=(3,1)$, and $\bm{\alpha}=(5,1)$. The two dashed curves indicate the 3rd and 97th percentiles, respectively.}\label{fig:distribution_of_pairs}
\end{figure}
The simulation suggests that the distribution of matched pairs approaches uniformity as the market size increases, with a slight skew favoring the better tier. For example, in a market with 1,000 men and women on each side, two tiers of equal size, and the values of $\bm{\beta}$ and $\bm{\alpha}$ specified above, $52.4\pm 1.1\%$ of the men in tier 1 are matched to women in tier 1. That is, top-tier men are only slightly more likely than bottom-tier men to match with top-tier women, even though at a micro level each man in the first tier does on average three times better than those in the second tier. In the future, it may also be of interest to examine how the tier structure determines the deviation from uniformity.
On the other hand, we stress that the approximately uniform distribution of matched pairs among tiers relies heavily on our assumption of bounded public scores. The result may cease to hold if the gap in scores is allowed to grow with the market size. Figure~\ref{fig:distribution_of_pairs_non_uniform} shows the deviation from the uniform distribution when the gap between the scores of the top and bottom tier men grows polynomially, namely as $\sqrt{n}/2$.
We hypothesize that in this case the fraction of tier 1 men matched to tier 1 women converges to the solution of the equation $x^5 + x=1$, approximately $0.755$. Note that this would coincide with the limiting match-type distribution if the top-tier men were deterministically preferred over the lower-tier men.
It is an interesting direction to study the matching dynamics when scores grow with market size (for example, polylogarithmically or polynomially).
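The root of $x^5+x=1$ quoted above is easy to check numerically; since $x^5+x-1$ is increasing on $[0,1]$ and changes sign there, bisection suffices (a minimal sketch of our own):

```python
def bisect_root(f, lo, hi, tol=1e-12):
    """Find the root of an increasing function f in [lo, hi] by bisection,
    assuming f(lo) < 0 < f(hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

root = bisect_root(lambda x: x**5 + x - 1, 0.0, 1.0)  # approximately 0.755
```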
\begin{figure}[h]
\centering
\includegraphics[scale=0.6]{Figures/DistributionPairs_b1-sqrtn_a1-5.png}
\caption{Non-uniform distribution of matched pairs among tiers when scores may grow with market size. The fraction of tier 1 men matched to tier 1 women under the man-optimal outcome deviates from $0.5$ in markets with parameters $\bm{\delta}=\bm{\epsilon}=(0.5,0.5)$, $\bm{\beta}=(\sqrt{n}/2,1)$, and $\bm{\alpha}=(5,1)$. The solid curve marks the average across 200 runs, and the two dashed curves indicate the 3rd and 97th percentiles, respectively.}\label{fig:distribution_of_pairs_non_uniform}
\end{figure}
\section{Summary}
\label{sec:summary}
The model and findings in this paper contribute to the understanding of random stable matching markets. The results quantify the effect of competition arising from heterogeneous agent quality, specifically when the agents fall into different constant-factor tiers of quality. Novel technical tools are developed in order to reason about the proposal dynamics of deferred acceptance.
Relaxing some of the modeling assumptions raises interesting questions that cannot be trivially answered. These include tiers whose size or public score varies with $n$,
personalized private scores that give agents different distributions of preferences,
and an imbalance in the number of agents on the two sides of the market. Moreover, it is natural to ask when one should expect the matching to be sorted, i.e., when higher tiers are more likely to match with higher tiers (e.g., \cite{hitsch2010matching} demonstrates the presence of sorting in dating markets).
| {
"timestamp": "2021-01-14T02:03:29",
"yymm": "2009",
"arxiv_id": "2009.05124",
"language": "en",
"url": "https://arxiv.org/abs/2009.05124",
"abstract": "We study the stable marriage problem in two-sided markets with randomly generated preferences. We consider agents on each side divided into a constant number of \"soft tiers\", which intuitively indicate the quality of the agent. Specifically, every agent within a tier has the same public score, and agents on each side have preferences independently generated proportionally to the public scores of the other side.We compute the expected average rank which agents in each tier have for their partners in the men-optimal stable matching, and prove concentration results for the average rank in asymptotically large markets. Furthermore, we show that despite having a significant effect on ranks, public scores do not strongly influence the probability of an agent matching to a given tier of the other side. This generalizes results of [Pittel 1989] which correspond to uniform preferences. The results quantitatively demonstrate the effect of competition due to the heterogeneous attractiveness of agents in the market, and we give the first explicit calculations of rank beyond uniform markets.",
"subjects": "Computer Science and Game Theory (cs.GT); Theoretical Economics (econ.TH)",
"title": "Tiered Random Matching Markets: Rank is Proportional to Popularity",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9918120908243659,
"lm_q2_score": 0.7154240018510026,
"lm_q1q2_score": 0.709566175101778
} |
https://arxiv.org/abs/0902.4899 | New results on the lower central series quotients of a free associative algebra | We continue the study of the lower central series and its associated graded components for a free associative algebra with n generators, as initiated by B. Feigin and B. Shoikhet. We establish a linear bound on the degree of tensor field modules appearing in the Jordan-Hoelder series of each graded component, which is conjecturally tight. We also bound the leading coefficient of the Hilbert polynomial of each graded component. As applications, we confirm conjectures of P. Etingof and B. Shoikhet concerning the structure of the third graded component. | \section{Introduction and results}
In this paper we consider the free associative algebra\footnote{over $\mathbb{C}$, or any field of characteristic zero} $A:=A_n$ on generators $x_1,\ldots, x_n$, for $n\geq 2$, and its lower central series filtration: $L_1=A, L_{m+1}=[A,L_m]$. The corresponding associated graded Lie algebra is $B(A)=\oplus_m B_m(A)$, where $B_m(A) = L_m(A)/L_{m+1}(A)$. The natural grading on $A$ by $S=(\mathbb{Z}_{\geq 0})^n$ descends to each $B_m$, and it is interesting to study the Hilbert series $h_{B_m}(t_1,\ldots,t_n)$ and $h_{B_m}(t):=h_{B_m}(t,\ldots,t)$.
The formula for $h_A$ is straightforward:
$$h_A(t_1,\ldots, t_n)=\frac{1}{1-(t_1 + \cdots + t_n)},$$ and implies that $\dim A[d]$ grows exponentially in $d$. It is thus a somewhat surprising fact that for $m\geq 2$, $\dim B_m[d]$ grows as a polynomial in $d$ of degree $n-1$. For $m=2$, this was shown in \cite{FS}, and for $m\geq 3$ it was conjectured in \cite{FS} and proven in \cite{DE}.
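The contrast between the two growth rates is elementary to verify: degree-$d$ monomials in $A_n$ are words of length $d$ in $n$ noncommuting letters ($n^d$ of them), while degree-$d$ monomials in $\mathbb{C}[x_1,\ldots,x_n]$ are multisets ($\binom{n+d-1}{n-1}$ of them). A small check of our own, for illustration:

```python
from itertools import product
from math import comb

n, D = 3, 5
# dim A_n[d]: words of length d in n noncommuting letters.
dims_free = [sum(1 for _ in product(range(n), repeat=d)) for d in range(D + 1)]
# dim C[x_1,...,x_n][d]: degree-d monomials, a polynomial count in d.
dims_poly = [comb(n + d - 1, n - 1) for d in range(D + 1)]
```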
The proof of this fact is based on the representation theory of the Lie algebra $W_n$ of polynomial vector fields on $\mathbb{C}^n$. Namely, in \cite{FS}, an action of $W_n$ was constructed on each $B_m, m\geq 2$. It was conjectured there, and proved in \cite{DE}, that each $B_m$ had a finite length Jordan-H\"older series, with respect to this action. The proof relied, firstly, on the observation in \cite{FS} that all irreducible subquotients of $B_m$ could be identified with certain tensor field modules $\mathcal{F}_\lambda$ associated to a Young diagram $\lambda$, and secondly, on a bound for the sizes $|\lambda|$ that could occur:
\begin{thm}\cite{DE} For $m\geq 3$ and $\mathcal{F}_\lambda$ in the Jordan-H\"older series of $B_m(A_n)$, we have the following estimate on the size (i.e., the number of squares) of the Young diagram $\lambda$:
$$ |\lambda| \leq (m-1)^2 + 2\lfloor\frac{n-2}{2}\rfloor (m-1)$$
(where $\lfloor x\rfloor$ denotes the integer part of $x$).
\end{thm}
This result, combined with well-known formulas for the Hilbert series of each $\mathcal{F}_\lambda$, established the finiteness of the Jordan-H\"older series, as well as the growth of $\dim B_m[d]$ as a degree $n-1$ polynomial in $d$, for $m\geq 2$. However, it was evident from experimental computations produced by Eric Rains that the maximal $|\lambda|$ for which $\mathcal{F}_\lambda$ occurs in $B_m$ should grow linearly rather than quadratically in $m$. This indeed turns out to be the case. Namely, the main result of this paper is the following improvement of the bound of \cite{DE}:
\begin{thm}\label{2m-3} Let $m\geq 3$.
\begin{enumerate}
\item For $\mathcal{F}_\lambda$ in the Jordan-H\"{o}lder series of $B_m(A_n)$, $$|\lambda| \leq 4m-7 + 2\lfloor \frac{n-2}{2}\rfloor.$$
\item Let $n=2$ or $3$. For $\mathcal{F}_{\lambda}$ in the Jordan-H\"{o}lder series of $B_m(A_n)$, $$|\lambda| \leq 2m-3.$$
\end{enumerate}
\end{thm}
This is proven by means of some elementary commutative algebra, and a technical but very useful result:
\begin{thm}\label{xyxy} Let $m\geq2$.
\begin{enumerate}
\item For all $n$, we have:
$$B_2=\sum_i[x_i,B_1].$$
\item For $n\geq 4$, we have:
$$B_{m+1}= \sum_{i}[x_i,B_m]+\sum_{i\leq j}[x_ix_j,B_m]+\sum_{i<j<k}[x_i[x_j,x_k],B_m].$$
\item For $n=3$, we have:
$$B_{m+1}=\sum_i[x_i,B_m]+\sum_{i \leq j}[x_ix_j,B_m].$$
\item For $n=2$, we have:
$$B_{m+1}= \sum_i[x_i,B_m]+\sum_{i < j}[x_ix_j,B_m].$$
\end{enumerate}
\end{thm}
\begin{rem} Computer experiments suggest that the cubic terms in (2) above are superfluous, and we conjecture that they can be omitted (see also Lemma \ref{B3identity} and Corollary \ref{B3case}). Were this so, it would imply the bound $|\lambda|\leq 2m-3 + 2\lfloor\frac{n-2}{2}\rfloor$ in Theorem \ref{2m-3}.
\end{rem}
\begin{defn} The Hilbert polynomials $p_{mn}(d)$ are defined by
$$p_{mn}(d)=\dim B_m(A_n)[d], \quad (d\gg 0).$$
The \emph{density}, $a_{mn}$, is the leading coefficient of $p_{mn}$, times $(n-1)!$.
\end{defn}
\begin{ex} The Hilbert polynomial of $\mathbb{C}[x_1,\ldots,x_n]$ is $\binom{n+d-1}{n-1}$, with leading coefficient $\frac{1}{(n-1)!}$, so the density is one. More generally, if $\lambda_1\geq 2$ or $\lambda=(1^n)$, the density of $\mathcal{F}_\lambda$ is equal to the dimension of the irreducible representation $V_\lambda$ of $\mathfrak{gl}_n$ with highest weight $\lambda$.
\end{ex}
As a corollary to Theorem \ref{xyxy}, we derive a bound for the density $a_{mn}$:
\begin{cor}\label{rank}
For $n=2$, we have $$a_{m+1,n} \leq 3 a_{m,n}.$$
For $n=3$, we have $$a_{m+1,n} \leq 9 a_{m,n}.$$
For $n\geq 4$, we have $$a_{m+1,n} \leq \frac{n^3+11n}{6}a_{m,n}.$$
\end{cor}
As applications of the theory, we are able to prove the following conjecture of P. Etingof describing the complete structure of $B_3(A_n)$:
\begin{thm}\label{PavelConj}
$$B_3(A_n)=\bigoplus_{i=1}^{\lfloor\frac{n}{2}\rfloor}(2,1^{2i-1},0^{n-2i}).$$
\end{thm}
As a corollary we derive the following conjecture of B. Shoikhet, which motivated the first conjecture:
\begin{cor}\label{BConj} Let $B_3(A_n)[1,\ldots,1]$ denote the subspace of $B_3(A_n)$ of degree 1 in each generator. We have
$$\dim B_3(A_n)[1,\ldots,1]=(n-2)2^{n-2}.$$
\end{cor}
\noindent Here, as elsewhere in the paper, we have used the abbreviation $(p_1,\ldots,p_n)$ instead of $\mathcal{F}_{(p_1,\ldots,p_n)}$. Combining our bounds in Theorem \ref{2m-3} with MAGMA \cite{BCP} computations, we are also able to give the complete Jordan-H\"older series of $B_m(A_n)$ for many new $m$ and $n$.
The structure of the paper is as follows. In Section \ref{pre} we briefly review the representation theory
of the Lie algebra $W_n$, as well as the results of \cite{FS} we will use. In Section \ref{xyxypf} we prove Theorem \ref{xyxy} and Corollary \ref{rank}. In Section \ref{2m-3pf} we prove Theorem \ref{2m-3}. In Section \ref{PavelConjPf}, we prove Theorem \ref{PavelConj}, and present as a corollary a geometric description of the bracket map of $\bar{B}_1$ with $B_2$. In Section \ref{decomps}, we present the Jordan-H\"older series for $B_m(A_n)$ for small $m$ and $n$.
\subsection{Acknowledgments} We would like to heartily thank Pavel Etingof for many helpful conversations as the work progressed, and especially for explaining how to derive Theorem \ref{PavelConj} from our work. The work of both authors was funded by the Research Science Institute at MIT. The research of the second author was partially supported by the NSF grant DMS-0504847.
\section{Preliminaries}\label{pre}
In this section, we recall definitions for the Lie algebra $W_n$, the tensor field modules $\mathcal{F}_\lambda$, and the quantized algebra of even differential forms $\Omega_*^{ev}$.
\begin{defn} Let $W_n=Der(\mathbb{C}[x_1,\ldots,x_n])$ denote the Lie algebra of polynomial vector fields,
$$W_n = \oplus_i\mathbb{C}[x_1,\ldots,x_n] \partial_i,$$
with bracket $[p\partial_i,q\partial_j]=p\frac{\partial q}{\partial x_i} \partial_j - q\frac{\partial p}{\partial x_j}\partial_i$.
\end{defn}
Let $\mathfrak{gl}_n$ denote the Lie algebra of $n$ by $n$ matrices. A Young diagram
$$\lambda=(\lambda_1\geq \lambda_2\geq \cdots \geq \lambda_n)$$
with $n$ rows gives rise to a finite-dimensional irreducible representation $V_{\lambda}$ of $\mathfrak{gl}_n$ contained in the space $(\mathbb{C}^{n*})^{\otimes |\lambda|}$ of covariant tensors of rank $|\lambda|$ on $\mathbb{C}^n$. Let $\widetilde{\mathcal{F}}_{\lambda}$ be the space of polynomial tensor fields of type $V_{\lambda}$ on $\mathbb{C}^n$. As a vector space,
$$\widetilde{\mathcal{F}}_{\lambda}=\mathbb{C}[x_1,\ldots,x_n]\otimes V_{\lambda}.$$
It is well known that $\widetilde{\mathcal{F}}_{\lambda}$ is a representation of $W_n$, with the action given by the standard Lie derivative formula for the action of vector fields on covariant tensor fields (see, e.g., \cite{SL} for details).
\begin{thm}\cite{ANR}
If $\lambda_1 \geq 2$, or if $\lambda=(1^n)$, then $\widetilde{\mathcal{F}}_{\lambda}$ is irreducible. Otherwise, if $\lambda =(1^k,0^{n-k})$, then $\widetilde{\mathcal{F}}_{\lambda}$ is the space $\Omega^k=\Omega^k(\mathbb{C}^n)$ of polynomial differential $k$-forms on $\mathbb{C}^{n}$, and it contains a unique irreducible submodule, namely the space of all closed differential $k$-forms.
\end{thm}
Denote by $\mathcal{F}_{\lambda}$ the irreducible submodule of $\widetilde{\mathcal{F}}_{\lambda}$, so that $\mathcal{F}_{\lambda}=\widetilde{\mathcal{F}}_{\lambda}$ unless \mbox{$\lambda = (1^k,0^{n-k})$} for some $1\leq k \leq n-1$.
\begin{thm}\label{semisimple}\cite{ANR}
Any $W_n$-module on which the operators $x_i\partial_i$, for $i=1,\ldots,n$, act semisimply with
nonnegative integer eigenvalues and finite dimensional common
eigenspaces has a Jordan-H\"{o}lder series
whose composition factors are $\mathcal{F}_{\lambda}$, each occurring with finite
multiplicity. \end{thm}
As a trivial, but important, consequence of the definition of $\mathcal{F}_\lambda$, we have:
\begin{prop}
Suppose $\lambda\neq (1^k,0^{n-k})$ for $1\leq k \leq n-1$. Let $h_{V_\lambda}$ denote the Hilbert series for $V_\lambda$. Then we have
$$h_{\mathcal{F}_\lambda}(t_1,\ldots, t_n) = \frac{h_{V_\lambda}(t_1,\ldots,t_n)}{(1-t_1)\cdots(1-t_n)},$$
\end{prop}
\noindent where we view $V_\lambda$ as a $\mathbb{Z}^n$-graded vector space via its weight decomposition. In particular, we observe that $h_{\mathcal{F}_\lambda}(t_1,\ldots,t_n)(1-t_1)\cdots(1-t_n)$ is a polynomial of total degree $|\lambda|$.
Let $\Omega^{ev}$ be the space of polynomial differential forms on $\mathbb{C}^n$ of even rank, and $\Omega^{ev}_{ex}$ its subspace of even exact forms. These spaces are graded by setting $\deg(x_i)=\deg(dx_i)=1$. Recall that $\Omega^{ev}$ is a \emph{commutative} algebra with respect to the wedge product.
\begin{defn}
The multiplication $a*b=a\wedge b+da\wedge db$ defines an associative product on $\Omega^{ev}$. By $\Omega^{ev}_*$ we will mean the space $\Omega^{ev}$ equipped with the $*$-product, and call it the quantized algebra of even differential forms.
\end{defn}
\begin{defn}
Let $Z$ denote the image of $A[A,[A,A]]$ in $B_1$, which was shown in \cite{FS} to be central in $B$. We define $\bar{B}_{1}$ to be the quotient, $\bar{B}_{1}=B_{1}/Z,$ and define $\bar{B}=\bar{B}_1~\oplus~(\displaystyle\oplus_{i \geq 2} B_i)$. \end{defn}
Clearly $\bar{B}$ inherits the grading from $B$, and is thus a graded Lie algebra generated in degree 1.
\begin{thm}\label{isom}\cite{FS}
There is a unique isomorphism of algebras,
\begin{align*}
\xi: \Omega^{ev}_* &\to A/A[A,[A,A]],\\
x_i &\mapsto x_i,
\end{align*}
which restricts to an isomorphism $\xi:\Omega_{*,ex}^{ev}\overset\sim\to B_2$, and descends to an isomorphism $\xi: \Omega_*^{ev}/\Omega^{ev}_{*,ex}\overset\sim\to\bar{B}_1$.
\end{thm}
\begin{thm}\cite{FS}
The action of $W_n$ on $\bar{B}_1 \cong \Omega^{ev}/\Omega^{ev}_{ex}$ by Lie derivatives uniquely extends to an action of $W_n$ on $\bar{B}$ by grading-preserving derivations.\end{thm}
Thus, for each $m\geq2$, $B_{m}$ is a $W_n$-module clearly satisfying the conditions of Theorem \ref{semisimple}, and thus has composition factors $\mathcal{F}_{\lambda}$, each occurring with finite multiplicity.
\section{Proof of Theorem \ref{xyxy} and Corollary \ref{rank}}\label{xyxypf}
\begin{lem}\label{identity} We have the following identity, which may be directly checked.
\begin{align*}{[u^{3},[v,w]]}=&{3[u^{2},[uv,w]]}-{3[u,[u^{2}v,w]]}
+\frac{3}{2}[u^2,[v,[u,w]]]\\&- \frac{3}{2}[u,[v,[u^{2},w]]]+[u,[u,[u,[v,w]]]]\\&
-\frac{3}{2}[u,[u,[v,[u,w]]]]+\frac{3}{2}[u,[v,[u,[u,w]]]].\end{align*}
\end{lem}
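The identity of Lemma \ref{identity} can be machine-verified by expanding both sides in the free algebra; the following is a check of our own using SymPy's noncommutative symbols (not part of the original argument):

```python
from sympy import symbols, Rational, expand

u, v, w = symbols('u v w', commutative=False)
br = lambda a, b: a*b - b*a  # Lie bracket in the free associative algebra

lhs = br(u**3, br(v, w))
rhs = (3*br(u**2, br(u*v, w)) - 3*br(u, br(u**2*v, w))
       + Rational(3, 2)*br(u**2, br(v, br(u, w)))
       - Rational(3, 2)*br(u, br(v, br(u**2, w)))
       + br(u, br(u, br(u, br(v, w))))
       - Rational(3, 2)*br(u, br(u, br(v, br(u, w))))
       + Rational(3, 2)*br(u, br(v, br(u, br(u, w)))))
diff = expand(lhs - rhs)  # expands to 0 in the free algebra
```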
\begin{cor}\label{symm} Let $S(a,b,c)$ be the symmetrized sum $$S(a,b,c)=\frac16(abc+bca+cab+acb+cba+bac).$$ Then, for all $a,b,c$, we have that $$[S(a,b,c),B_m] \subset [ab,B_m]+[bc,B_m]+[ca,B_m]+[a,B_m]+[b,B_m]+[c,B_m]\subset B_{m+1}.$$
\end{cor}
\begin{proof}
In Lemma \ref{identity}, set $u=t_1a+t_2b+t_3c$, $v$ equal to any element of $A_n$, $w$ equal to any element of $B_{m-1}$, and take the coefficient of $t_1t_2t_3$. The result follows.
\end{proof}
\begin{lem}\label{lemmae}
Let $E\subset \Omega^{ev}_*$ be the span of $S(a,b,c)$, where $a,b,$ and $c$ are of positive degree \emph{(}$\deg(x_i)=\deg(dx_i)=1$\emph{)}. Let $X$ be the span of 1, $x_i$, $x_ix_j$ for $i\leq j$, $x_i dx_j \wedge dx_k$ for $i<j<k$. Then $\Omega^{ev}_*=X+E+\Omega^{ev}_{ex,*}$.
\end{lem}
\begin{proof} $\Omega^{ev}_*$ has a finite length descending filtration by rank of forms, such that the associated graded is the usual commutative algebra of even forms. Therefore, it is sufficient to check the same statement for the commutative algebra of even forms. Then $\Omega^{ev}/E$ is spanned by $$1, ~x_i, ~dx_i \wedge dx_j, ~x_ix_j, ~x_idx_j\wedge dx_k, ~dx_i\wedge dx_j \wedge dx_k \wedge dx_l.$$ As the forms $$dx_i\wedge dx_j, ~{x_idx_i\wedge dx_j}, ~dx_i\wedge
dx_j\wedge dx_k\wedge dx_l$$ are exact, the Lemma follows.
\end{proof}
Now we proceed to prove the theorem. Part (1) follows from the isomorphism $B_2\cong\Omega^{ev}_{ex}$. For (2), we have $B_{m+1}=[\xi(\Omega^{ev}),B_m]$. Now, by Lemma \ref{lemmae} and Corollary \ref{symm}, $[\xi(\Omega^{ev}),B_m]=[\xi(X),B_m]$, so the statement follows.
For (4), we observe the following identity, which may be checked directly:
\begin{lem}\label{second}\begin{align*}
{[x^{2},[y,w]]}=&{2[x,[xy,w]]}+{[y,[x^2,w]]}-{2[xy,[x,w]]}-[w,[x,[y,x]]].\\
6[x^2,[xy,w]]=&12[x,[x^2y,w]] + 4[y,[x^3,w]] -6[xy,[x^2,w]]\\
& -3[x^2,[y,[x,w]]] - 3[y,[x^2,[x,w]]] + 9[x,[y,[x^2,w]]]\\
& -3[x,[x,[x,[y,w]]]] +3[x,[x,[y,[x,w]]]] -3[x,[y,[x,[x,w]]]]\\&-[y,[x,[x,[x,w]]]].
\end{align*}
\end{lem}
Setting $x=x_1$, $y=x_2$, and letting $w\in B_{m-2}$, we get that $[x_i^2,B_{m-1}]\subset [x_1,B_{m-1}]+[x_2,B_{m-1}]+[x_1x_2,B_{m-1}]$ as desired.
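The first identity of Lemma \ref{second} can likewise be verified by direct noncommutative expansion (our own sketch; the longer second identity can be checked the same way):

```python
from sympy import symbols, expand

x, y, w = symbols('x y w', commutative=False)
br = lambda a, b: a*b - b*a  # Lie bracket

lhs = br(x**2, br(y, w))
rhs = (2*br(x, br(x*y, w)) + br(y, br(x**2, w))
       - 2*br(x*y, br(x, w)) - br(w, br(x, br(y, x))))
diff = expand(lhs - rhs)  # expands to 0 in the free algebra
```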
For (3), we use a lemma:
\begin{lem}\cite{DE} There exists a function $\epsilon:S_{m}\to\mathbb{Q}$, such that
$$[a_0,[a_{1},\ldots,[a_{m-1},a_m]\cdots]] = \sum_{\sigma \in S_{m}}\epsilon(\sigma)[a_{\sigma(1)},[a_{\sigma(2)},\ldots,[a_{\sigma(m)},a_0]\cdots]].$$
\end{lem}
Thus an element $x=[a_0,[a_{1},\ldots,[a_{m-1},a_m]\cdots]]$ with $a_0\in\Omega^2$ is a sum of elements with $a_0$ in the innermost bracket. The innermost bracket then lies in $\xi(\Omega^4)$, which is zero in $B_2$ for $n=3$.
\qed
Corollary \ref{rank} follows immediately by counting the dimension of $X$.
\section{Proof of Theorem \ref{2m-3}}\label{2m-3pf}
Let $\xi: \Omega^{ev} \to \bar{B}_1$ be the Feigin-Shoikhet surjective map from Theorem \ref{isom}.
Consider the map $f_m: (\Omega^{ev})^{\otimes m}\to B_m$
given by
$$f_m(a_1,\ldots,a_m)=[\xi(a_1),[\xi(a_2),\ldots[\xi(a_{m-1}),\xi(a_m)]].$$
\noindent Since $\bar{B}$ is generated in degree 1, this map is surjective. By Theorem \ref{xyxy}, the restriction of $f_m$ to $Y:=(\Omega^0)^{\otimes m}$
is surjective for $n=2,3$, while the restriction to $Y:=(\Omega^0+\Omega^2)^{\otimes(m-2)}\otimes(\oplus_{j+k\leq \lfloor \frac{n-2}{2}\rfloor}\Omega^{2j}\otimes\Omega^{2k})$ is surjective for $n\geq 4$. The idea of the proof is to find a large $W_n$-submodule $K$ of $Y$ such that $f_m|_K=0$ and such that the composition factors $\mathcal{F}_{\lambda}$ of $Y/K$ satisfy the bound on $|\lambda|$ given in the theorem. This will obviously imply Theorem \ref{2m-3}.
\begin{lem}\label{cor1}
Let $K \subset Y$ be the submodule spanned by elements of the form:
\begin{enumerate} \item{$p_1\otimes\cdots\otimes p_{m-3-i}\otimes(1\otimes b)*(a\otimes1-1\otimes a)^3\otimes p_{m-2-i}\otimes\cdots\otimes p_{m-2},$ for $0\leq i\leq m-3,$ where $a\in\Omega_*^{0}, b\in \Omega_*^{ev},$ and $p_i\in \Omega_*^{ev}$, \emph{and}}
\item{$p_1\otimes\cdots\otimes p_{m-2}\otimes(b_1\otimes b_2)*(a\otimes1-1\otimes a)^2,$ where $a\in\Omega_*^{0}, b_1,b_2\in \Omega_*^{ev},$ and $p_i\in \Omega_*^{ev}$.}
\end{enumerate}
Then $f_m|_K=0$.
\end{lem}
\begin{proof}
Elements of type 1 are killed by $f_m$ by Lemma \ref{identity}, as $$(1\otimes b)*(a\otimes 1-1\otimes a)^3=(a^3\otimes b-3a^2\otimes b*a+3a\otimes b*a^2-1\otimes b*a^3).$$ Elements of type 2 are killed, as, for any functions $a,b_1,b_2$, $$d(b_1a^2)\wedge db_2-2d(b_1a)\wedge d(b_2a)+db_1\wedge d(b_2a^2)=0. $$\end{proof}
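The exactness identity used for elements of type 2 can also be checked symbolically. Since the coefficient of each $dx_i\wedge dx_j$ involves only partial derivatives with respect to $x_i$ and $x_j$, it suffices to verify the identity for generic functions of two variables (a sketch of our own):

```python
from sympy import symbols, Function, diff, expand

x, y = symbols('x y')
a, b1, b2 = (Function(s)(x, y) for s in ('a', 'b1', 'b2'))

def wedge(f, g):
    # coefficient of dx /\ dy in df /\ dg for functions f, g on the plane
    return diff(f, x) * diff(g, y) - diff(f, y) * diff(g, x)

expr = wedge(b1 * a**2, b2) - 2 * wedge(b1 * a, b2 * a) + wedge(b1, b2 * a**2)
checked = expand(expr)  # expands to 0 for arbitrary smooth a, b1, b2
```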
Let $K_0\subset Y$ be the associated graded of $K$ under the ``rank of forms'' filtration, and let $K_0'$ be the span of elements of type $1$ and $2$ in the commutative algebra of forms. The Jordan-H\"{o}lder series of $Y/K$ is the same as the Jordan-H\"{o}lder series of $Y/K_0$, so it is dominated by the Jordan-H\"{o}lder series of $Y/K_0'$. So, it suffices to show that all $\mathcal{F}_{\lambda}$ occurring in $Y/K_0'$ satisfy the bound of the theorem. We will do this by precisely computing the Hilbert series of $Y/K_0'$. First, we recall a well-known fact from commutative algebra.
\begin{lem}\label{cor2} Let $B$ be a commutative algebra, and $I\subset B\otimes B$ be the kernel of the multiplication homomorphism
$\mu:B\otimes B \to B.$ Also, let $k\in \mathbb{N}$. Then, $I^k$ is spanned by elements of the form $(1\otimes b)(a\otimes 1-1\otimes a)^k$, where $a,b \in B$.
\end{lem}
\begin{proof} Obviously $(a\otimes1-1\otimes a)\in I$, so $(1\otimes b)(a\otimes 1-1\otimes a)^k\in I^k$. Now $I^k$ is generated by $(a_1\otimes 1-1\otimes a_1)\cdots(a_k\otimes 1-1\otimes a_k)$, for $a_i\in B$. Because for every vector space $V$, $S^k(V)$ is spanned by $\{v^{\otimes k}\mid v\in V\}$, such elements can be obtained as linear combinations of elements of the form $(a\otimes1-1\otimes a)^k$. So, $I^k$ is spanned by $(b_1 \otimes b_2)(a\otimes1-1\otimes a)^k$. But
\begin{align*}(b_1\otimes b_2)(a\otimes 1-1\otimes a)^k=&(1\otimes b_2)(b_1a\otimes 1-1\otimes b_1a)(a\otimes1-1\otimes a)^{k-1}\\
&-(1\otimes ab_2)(b_1 \otimes 1-1\otimes b_1)(a\otimes 1 - 1\otimes a)^{k-1},\end{align*}
so we are done. \end{proof}
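The telescoping identity in this proof can be checked in coordinates by encoding $x\otimes y$ as a product of "left" and "right" copies of the variables (our own encoding, shown here for $k=3$):

```python
from sympy import symbols, expand

# xL stands for x⊗1 and xR for 1⊗x; both legs commute.
aL, aR, b1L, b1R, b2R = symbols('aL aR b1L b1R b2R')
k = 3
lhs = b1L * b2R * (aL - aR)**k
rhs = (b2R * (b1L*aL - b1R*aR) * (aL - aR)**(k - 1)
       - aR * b2R * (b1L - b1R) * (aL - aR)**(k - 1))
diff = expand(lhs - rhs)  # expands to 0
```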
We let $R=\mathbb{C}[x_1,\ldots,x_n]^{\otimes m}=\mathbb{C}[x_1^1,\ldots,x_n^1,x_1^2,\ldots,x_n^2,\ldots,x_1^m,\ldots,x_n^m]$. Let $J_j$, for $1\leq j \leq m-1$ be the ideal in $R$ generated by $X^{j}_i:=x^{j}_i-x^{j+1}_i$. Let $$J=\displaystyle \sum_{j=1}^{m-2}J^3_j+J_{m-1}^2.$$
\begin{cor}
$K'_0=JY,$ so $Y/K'_0=Y/JY$.
\end{cor}
\begin{proof}This follows immediately from Lemma \ref{cor1} and Lemma \ref{cor2}.\end{proof}
Now, we can finish the proof of the theorem. Namely, $Y$ is a free module over $R$, so $h_{Y/JY}=h_{R/J}\cdot h_{X}$, where $h_X$ is the Hilbert series of the generators over $R$ of $Y$. For $n=2,3$, we have $h_X=1$, while for $n\geq 4$, we have:
\begin{equation}\label{hX}h_X= (1+\sigma_2)^{m-2}\cdot\sum_{j+k\leq \lfloor\frac{n-2}{2}\rfloor} \sigma_{2j}\cdot\sigma_{2k},\end{equation}
where $\sigma_l=\displaystyle \sum_{i_1<\cdots<i_l} t_{i_1}\cdots t_{i_l}$ denotes the elementary symmetric function of degree $l$. Now, from the description of $J$, we compute
$$h_{R/J}=\frac{(1+\displaystyle \sum t_i+\displaystyle\sum_{i\leq j}t_it_j)^{m-2}(1+\displaystyle\sum t_i)}{(1-t_1)\cdots(1-t_n)}.$$ Thus $h_{Y/JY}\cdot(1-t_1)\cdots(1-t_n)$ is a polynomial of degree at most $2m-3$ for $n=2,3$, and at most $2m-3 +2(m-2) + 2\lfloor\frac{n-2}{2}\rfloor = 4m-7+2\lfloor\frac{n-2}{2}\rfloor$ for $n\geq 4$, proving Theorem \ref{2m-3}.
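The degree count for the numerator of $h_{R/J}$ can be sanity-checked symbolically; the sketch below (ours, for illustration) does so for $n=2$ variables, where the expected total degree is $2(m-2)+1=2m-3$:

```python
from sympy import symbols, expand, Poly

def numerator_degree(m, n=2):
    """Total degree of h_{R/J} * (1-t_1)...(1-t_n), i.e. of
    (1 + e_1 + h_2)^(m-2) * (1 + e_1) in n variables."""
    ts = symbols(f't1:{n + 1}')
    s1 = sum(ts)
    h2 = sum(ts[i] * ts[j] for i in range(n) for j in range(i, n))
    P = expand((1 + s1 + h2)**(m - 2) * (1 + s1))
    return Poly(P, *ts).total_degree()

degrees = [numerator_degree(m) for m in range(3, 7)]
```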
\begin{rem}
For $n=2$, it follows from the proof of the theorem that the image, $f_m(v_m)$, of
$$v_m = (x_1-x_2)(y_1-y_2)\cdots(x_{m-2}-x_{m-1})(y_{m-2}-y_{m-1})(x_{m-1}-x_m)$$
generates a copy of $(m-1,m-2)$ if it is non-zero (here we have used the notation $x_k:=x_1^k$, $y_k:=x_2^k$). We conjecture that $f_m(v_m)$ is indeed non-zero, and that $(m-1,m-2)$ occurs with multiplicity one in $B_m(A_2)$. Furthermore, we conjecture that for any $(p,q)$ occurring in $B_m(A_2)$, one has $p\leq m-1$. For $m\leq 7$, these conjectures are confirmed in Theorem \ref{JH}.
\end{rem}
\section{The complete structure of $B_3(A_n)$}\label{PavelConjPf}
In this section, we prove Theorem \ref{PavelConj}, which was first conjectured by P. Etingof. Firstly, we show that only representations of the form $(2,1^{2i-1},0^{n-2i})$ can appear in the Jordan-H\"older series of $B_3(A_n)$; this is accomplished by strengthening Theorem \ref{2m-3} in the case $m=3$. Secondly, we use the representation theory of $\mathfrak{gl}_n$ to show that each $(2,1^{2i-1},0^{n-2i})$ occurs in $B_3(A_n)$ with multiplicity at most one. Thirdly, we exhibit a non-zero vector in each $(2,1^{2i-1},0^{n-2i})$ by representing the generators of $A_n$ in a certain quotient algebra. Finally, we show that the sum appearing in the theorem is direct. Steps 2-4 were explained to us by P. Etingof.
\subsection{Step one.}
\begin{lem} \label{simplemult}
Let $m \geq 3$. Then no $\mathcal{F}_{(1^k,0^{n-k})}$ occurs in $B_m$.
\end{lem}
\begin{proof}
\textbf{Case 1: $k=n$.}\\
As a representation of $S_n$, the polylinear part (i.e., the part of degree 1 in each variable) of $(1,\ldots,1)$ is isomorphic to the sign representation. On the other hand, the polylinear part of $A_n$ (i.e., the span of monomials of the form $x_{\sigma(1)}\cdots x_{\sigma(n)}$ for $\sigma \in S_n$) is clearly a copy of the regular representation, which contains the sign representation exactly once, and thus the total multiplicity of $(1,\ldots,1)$ in all $B_{m}(A_n)$ is equal to 1. In \cite{FS}, it is shown that when $n$ is odd, $(1,\ldots,1) \subset \bar{B}_1(A_n)$ and when $n$ is even, $(1,\ldots,1) \subset {B}_2(A_n)$, so $(1,\ldots,1)$ cannot occur in $B_m$ for $m\geq3$.\\
\textbf{Case 2: $k<n$.}\\
Suppose that $(1^k,0^{n-k})$ with $k<n$ occurs in $B_m(A_n)$. Then $B_m(A_{k})$ contains a copy of $(1^k)$, which contradicts Case 1.
\end{proof}
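The character-theoretic fact used in Case 1, that the regular representation of $S_n$ contains the sign representation exactly once, can be spot-checked by brute force for small $n$. The following sketch (ours, not from the paper) computes the multiplicity as the inner product of the sign character with the character of the regular representation:

```python
from itertools import permutations
from math import factorial

def sign(p):
    """Sign of a permutation of {0,...,n-1}, via its inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def compose(p, q):
    """(p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

n = 4
G = list(permutations(range(n)))
# Character of the regular representation at g: number of fixed basis vectors
# of left multiplication by g, i.e. |G| at the identity and 0 elsewhere.
chi_reg = {g: sum(1 for h in G if compose(g, h) == h) for g in G}
# Multiplicity of sign = (1/|G|) * sum_g sgn(g) * chi_reg(g).
mult = sum(sign(g) * chi_reg[g] for g in G) // factorial(n)
assert mult == 1
```

The computation reduces to evaluating the sum at the identity alone, which is exactly why the multiplicity is $1$ for every $n$.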
\begin{lem}\label{B3identity} We have:
\begin{equation*}[x[y,z],[w,v]]=[x,[w[y,z],v]] - [y,[w[x,z],v]]+[z,[w[x,y],v]]\mod L_4\end{equation*}
\end{lem}
\begin{proof}
Let $G$ denote the symmetric group on the set $\{x,y,z,w,v\}$, and let $\psi$ denote the RHS minus the LHS of the identity. Applying the Jacobi identity to the term $[x[y,z],[w,v]]$, we have
\begin{align*}
\psi =[x,[w[y,z],v]] - [y,[w[x,z],v]] + [z,[w[x,y],v]] - [w,[x[y,z],v]] + [v,[x[y,z],w]].
\end{align*}
Recall the isomorphism $\xi:\Omega^{ev,\geq 2}_{ex}\overset\sim\to B_2$ of Theorem \ref{isom}; for any $\alpha,\beta,\gamma,\delta\in \bar{B}_1$, we have $[\alpha[\beta,\gamma],\delta]=4\xi(d\alpha\wedge d\beta \wedge d\gamma \wedge d\delta) \mod L_3$. Thus we may re-express $\psi$:
\begin{align*}
\psi =& 4\big( [x,\xi(dw\wedge dy \wedge dz \wedge dv)] - [y,\xi(dw\wedge dx \wedge dz\wedge dv)] + [z,\xi(dw\wedge dx \wedge dy \wedge dv)]\\ &- [w,\xi(dx\wedge dy \wedge dz \wedge dv)] + [v,\xi(dx \wedge dy \wedge dz \wedge dw)]\big).
\end{align*}
We see immediately that $\psi$ is skew-symmetric with respect to each of the transpositions $(x,y)$, $(y,z)$, $(z,w)$ and $(w,v)$. As these transpositions generate $G$, it follows that if $\psi$ is non-zero, then $\mathbb{C}\psi$ is isomorphic to the sign representation. However, we have already seen in the proof of Lemma \ref{simplemult} that the sign representation does not occur in the polylinear part of $B_m(A_n)$ for any $m\geq 3$, and thus $\psi=0 \mod L_4$.
\end{proof}
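The Jacobi expansion of $\psi$ in the proof is an exact identity in any associative algebra; only the final vanishing argument works mod $L_4$. As an independent sanity check (ours), one can verify the expansion numerically with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def c(u, v):
    """Commutator [u, v] = uv - vu."""
    return u @ v - v @ u

# Five random matrices standing in for x, y, z, w, v.
X, Y, Z, W, V = (rng.standard_normal((4, 4)) for _ in range(5))

P = X @ c(Y, Z)          # the element x[y,z]
lhs = c(P, c(W, V))      # [x[y,z],[w,v]], the LHS of the lemma
# RHS of the lemma:
rhs = (c(X, c(W @ c(Y, Z), V)) - c(Y, c(W @ c(X, Z), V))
       + c(Z, c(W @ c(X, Y), V)))

# The proof's five-term Jacobi expansion of psi = rhs - lhs:
expansion = rhs - c(W, c(P, V)) + c(V, c(P, W))
assert np.allclose(rhs - lhs, expansion)
```

The assertion reduces to $[P,[w,v]]=[w,[P,v]]-[v,[P,w]]$, which is the Jacobi identity applied to $P=x[y,z]$.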
\begin{cor} \label{B3case}$B_3(A_n)=\sum_{i}[x_i,B_2] + \sum_{i\leq j}[x_ix_j,B_2]$.
\end{cor}
\begin{proof}
Lemma \ref{B3identity}, combined with Corollary \ref{symm}, allows us to reduce the degree of any element of degree three or greater appearing in the outer slot, so that only linear and quadratic terms remain in the outer slot, as desired.
\end{proof}
\begin{lem} \label{B3bound} For $\mathcal{F}_\lambda$ appearing in the Jordan-H\"older series of $B_3(A_n)$, we have
$$|\lambda|\leq 3 + 2\left\lfloor\frac{n-2}{2}\right\rfloor=\left\{\begin{array}{ll}n, &\textrm{$n$ odd}\\n+1, &\textrm{$n$ even}\end{array}\right.$$
\end{lem}
\begin{proof}
By Corollary \ref{B3case}, the map $f_3$ in the proof of Theorem \ref{2m-3} is surjective when restricted to $Y:=(\Omega^0)\otimes(\oplus_{j+k\leq \lfloor \frac{n-2}{2}\rfloor}\Omega^{2j}\otimes\Omega^{2k})$. Thus we may omit the factor $(1+\sigma_2)$ in equation (\ref{hX}), and we compute that $h_{Y/JY}\cdot(1-t_1)\cdots(1-t_n)$ is a polynomial of degree less than or equal to $3+2\lfloor\frac{n-2}{2}\rfloor$.
\end{proof}
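The closed form in Lemma \ref{B3bound} is elementary arithmetic, but since the parity case split is reused below, here is a quick check (ours):

```python
def bound(n):
    """Degree bound of Lemma B3bound: 3 + 2*floor((n-2)/2)."""
    return 3 + 2 * ((n - 2) // 2)

# Equals n for odd n and n + 1 for even n.
for n in range(2, 100):
    assert bound(n) == (n if n % 2 == 1 else n + 1)
```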
\begin{cor} If $\mathcal{F}_\lambda$ appears in the Jordan-H\"older series of $B_3(A_n)$, then $\lambda=(2,1^{2i-1},0^{n-2i})$ for some $1\leq i \leq \lfloor\frac{n}{2}\rfloor$.
\end{cor}
\begin{proof}
We proceed by induction on $n$, the case $n=2$ having been proved in \cite{DKM}. First, suppose for contradiction that some $\mathcal{F}_\lambda$ occurs in the Jordan-H\"older series for $B_3(A_n)$, with $\lambda_1\geq 3$. Then we have $\lambda_n=0$ by Lemma \ref{B3bound}, which implies that $(\lambda_1,\ldots,\lambda_{n-1})$ occurs in $B_3(A_{n-1})$. This contradicts the induction assumption. Thus $\lambda_1\leq 2$.
Let us again suppose for contradiction that some $\mathcal{F}_\lambda$ occurs with \mbox{$\lambda_1=\lambda_2=2$}. Then $\lambda_n=0$, and so $(2,2,\lambda_3,\ldots,\lambda_{n-1})$ occurs in $B_3(A_{n-1})$, which contradicts the induction assumption.
Furthermore, by Lemma \ref{simplemult}, no factors $(1^k,0^{n-k})$ may occur in $B_3$. The only remaining possibilities are of the form $(2,1^k,0^{n-k-1})$, and it remains only to show that $k$ must be odd. Indeed, if $k$ is even and $\mathcal{F}_\lambda$ with $\lambda=(2,1^k,0^{n-k-1})$ occurs in $B_3(A_n)$, then $(2,1^k)$ occurs in $B_3(A_{k+1})$, which contradicts Lemma \ref{B3bound}.
\end{proof}
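The parity step at the end of the proof (an even $k$ forces $(2,1^k)$ into $B_3(A_{k+1})$, violating the degree bound, while an odd $k$ gives no contradiction) is a one-line inequality; a quick check (ours):

```python
def bound(n):
    """Lemma B3bound: n for odd n, n + 1 for even n."""
    return 3 + 2 * ((n - 2) // 2)

for k in range(2, 60, 2):        # k even
    size = 2 + k                 # |(2, 1^k)| = k + 2 boxes
    assert size > bound(k + 1)   # k + 1 is odd, so bound(k + 1) = k + 1

for k in range(1, 60, 2):        # k odd: no contradiction, equality holds
    assert 2 + k <= bound(k + 1)
```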
\subsection{Step two.}
\begin{lem}\label{multbound} The multiplicity of $(2,1^{n-1})$ in $B_3(A_n)$ is at most one.\end{lem}
\begin{proof}
Let $n=2k$. By Lemma \ref{B3case}, we have a surjection:
$$f_3:\sum_{p=0}^{k-1} Y_p\to B_3,$$
where $Y_p= \Omega^0 \otimes\Omega^0 \otimes\Omega^{2p}$. As in the proof of Theorem \ref{2m-3}, we let
$$R=\mathbb{C}[x_1^1,\ldots, x_n^1, x_1^2,\ldots, x_n^2, x_1^3,\ldots,x_n^3].$$
We can identify $Y_p$ with the free module over $R$ with generators
$$X_p=\{dx_{\alpha_1}\wedge\cdots\wedge dx_{\alpha_{2p}}\},$$ with Hilbert series $h_{X_p}=\sigma_{2p}$. We let $J_j$, for $j=1,2$, denote the ideal generated by $X^j_i:=x^j_i-x^{j+1}_i$, $i=1,\ldots,n$, and let $J=J_1^3 + J_2^2$. Then Lemmas \ref{cor1} and \ref{cor2} imply that $JY_p$ is in the kernel of $f_3$. As $J$ is $W_n$-invariant, we have a surjection of $W_n$-modules $f_3:Y_p/JY_p\to f_3(Y_p)\subset B_3$. We compute:
$$h_{Y_p/JY_p}=h_{R/J}h_{X_p}=\frac{(1+\displaystyle \sum t_i+\displaystyle\sum_{i\leq j}t_it_j)(1+\displaystyle\sum t_i)\sigma_{2p}}{(1-t_1)\cdots(1-t_n)}.$$ Thus $h_{Y_p/JY_p}\cdot(1-t_1)\cdots(1-t_n) $ is a polynomial of degree less than or equal to $2p+3$, so that the maximal size of a $\lambda$ which can appear in each $Y_p/JY_p$ is $2p+3$. Thus only $Y_{k-1}$ can contribute to the multiplicity of $(2,1^{n-1})$.
So we consider the image $f_3(Y_{k-1}) \subset B_3$. Scalars in the outer slot are sent to zero, and we can view the inner two slots naturally as the degree $n$ subspace of $B_2\cong\Omega_{ex}^{ev,\geq 2}$. We can thus write $f_3(Y_{k-1})$ as a quotient of
$$M = (\oplus_{j\ge 1}S^jV)\otimes \Lambda^n V\otimes (\oplus_{m\ge 0} S^mV).$$
The multiplicity of the $\mathfrak{gl}_n$ module $(2,1^{n-1})$ in $M$ is equal to the multiplicity
of $(1,0^{n-1})$ in $(\Lambda^nV)^*\otimes M = (\oplus_{j\ge 1}S^jV)\otimes (\oplus_{m\ge 0} S^mV)$, which is clearly one.
\end{proof}
\begin{cor}A cyclic generator of $(2,1^{n-1})$ in $B_3(A_n)$, where $n=2k$, is $$v_n=[x_1,[x_1,x_2]\cdots[x_{2k-1},x_{2k}]].$$\end{cor}
\begin{proof} Apply $f_3$ to the generator $x_1\otimes x_1\wedge\cdots\wedge x_n$ of $(2,1^{n-1})$ in $M$.
\end{proof}
\begin{cor} The multiplicity of $(2,1^{2i-1},0^{n-2i})$ in $B_3(A_n)$ is at most one.\end{cor}
\begin{proof} If we assume to the contrary that $(2,1^{2i-1},0^{n-2i})$ appears with multiplicity greater than one, then it follows that $(2,1^{2i-1})$ occurs in $B_3(A_{2i})$ with multiplicity greater than one, contradicting Lemma \ref{multbound}.
\end{proof}
\subsection{Step three.}
Fix some $k\in \mathbb{N}$, let $A$ be any algebra, let $E$ denote the exterior algebra in generators $\zeta_1,\ldots, \zeta_k$, and let $B=A\otimes E$. We denote by $E^i$, $E_+$, $E_-$, $E_+^{\geq j}$, and $E_-^{\geq j}$ the $i$-th graded component, the even part, the odd part, and the even and odd parts of degree at least $j$, respectively.
\begin{lem} We have
\begin{align*}
[B,B]=&[A,A]\otimes (E^0\oplus E_-) \oplus A\otimes E_+^{\ge 2}.\\
[B,[B,B]]=&[A,[A,A]]\otimes (E^0\oplus E^1)\oplus A[A,A]\otimes E_+^{\ge
2}\oplus [A,A]\otimes E_-^{\ge 3}.\\
[B,[B,[B,B]]]=&[A,[A,[A,A]]]\otimes (E^0\oplus E^1)\oplus
([A,A[A,A]]+A[A,[A,A]])\otimes E^2\\ &\oplus [A,A[A,A]]\otimes E_-^{\ge 3}\oplus
A[A,A]\otimes E_+^{\ge 4}. \nonumber
\end{align*}
\end{lem}
\begin{proof} A direct computation using the skew commutativity of $E$.
\end{proof}
\begin{cor} We have
\begin{align*}
[B,[B,B]]/[B,[B,[B,B]]]=&([A,[A,A]]/[A,[A,[A,A]]])\otimes
(E^0\oplus E^1)\\&\oplus (A[A,A]/([A,A[A,A]]+A[A,[A,A]]))\otimes E^2\\&\oplus
([A,A]/[A,A[A,A]])\otimes E_-^{\ge 3}.
\end{align*}
\end{cor}
\begin{prop} The generator $v_n=[x_1,[x_1,x_2]\cdots[x_{2k-1},x_{2k}]]$ of $(2,1^{n-1})$ is nonzero in $B_3(A_n)$.
\end{prop}
\begin{proof}
Clearly, it suffices to find some algebra $C$ and elements $x_1,\ldots, x_{2k}$ such that the expression defining $v_n$ is not in $L_4(C)$. We let $A$ be the free algebra in two generators $a,b$, let $E$ be the exterior algebra in generators $\zeta_0,\ldots,\zeta_{2k}$, and let $B=A\otimes E$.
We set $x_i=\zeta_i$ for $i=2,\ldots,2k,$ and $x_{1}=a\zeta_{0}+b\zeta_{1}$. Then a direct
computation shows:
$$[x_1,[x_1,x_2]\cdots[x_{2k-1},x_{2k}]]=2^{k+1}[a,b]\otimes \zeta_0\wedge\cdots\wedge\zeta_{2k}.$$
By the corollary, this is nonzero in $[B,[B,B]]/[B,[B,[B,B]]]$, as its component in $([A,A]/[A,A[A,A]])\otimes E_-^{\ge 3}$ is non-zero. The proposition is proved.\end{proof}
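The direct computation above can be machine-checked for small $k$ with a toy model of $B=A\otimes E$: basis elements are pairs (word in the free algebra on $a,b$, wedge monomial in the $\zeta_i$). The following sketch (ours; all names are our own) verifies the case $k=1$:

```python
def mul_zetas(z1, z2):
    """Wedge two monomials in the zeta's; return (sign, merged) or (0, None)."""
    if set(z1) & set(z2):
        return 0, None                      # a repeated generator squares to 0
    merged, sign = list(z1) + list(z2), 1
    for i in range(len(merged)):            # bubble sort, tracking the sign
        for j in range(len(merged) - 1):
            if merged[j] > merged[j + 1]:
                merged[j], merged[j + 1] = merged[j + 1], merged[j]
                sign = -sign
    return sign, tuple(merged)

def add(u, v, s=1):
    """Linear combination u + s*v of elements given as {basis: coeff} dicts."""
    out = dict(u)
    for key, c in v.items():
        out[key] = out.get(key, 0) + s * c
        if out[key] == 0:
            del out[key]
    return out

def mul(u, v):
    """Product in A x E: concatenate free-algebra words, wedge the zetas."""
    out = {}
    for (w1, z1), c1 in u.items():
        for (w2, z2), c2 in v.items():
            s, z = mul_zetas(z1, z2)
            if s:
                out = add(out, {(w1 + w2, z): s * c1 * c2})
    return out

def brk(u, v):
    """Ordinary commutator [u, v] in the associative algebra B."""
    return add(mul(u, v), mul(v, u), -1)

# k = 1: x1 = a*z0 + b*z1, x2 = z2
x1 = {(('a',), (0,)): 1, (('b',), (1,)): 1}
x2 = {((), (2,)): 1}
result = brk(x1, brk(x1, x2))
# Expected: 2^{k+1} [a,b] (x) z0 ^ z1 ^ z2 = 4(ab - ba) (x) z0 z1 z2.
assert result == {(('a', 'b'), (0, 1, 2)): 4, (('b', 'a'), (0, 1, 2)): -4}
```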
\begin{cor} The Jordan-H\"older series of $B_3(A_n)$ is $\{(2,1^{2i-1},0^{n-2i})\}_{1\leq i\leq\lfloor\frac{n}{2}\rfloor}$, each appearing with multiplicity one.
\end{cor}
\subsection{Step four.}
\begin{prop} Each $(2,1^{2i-1},0^{n-2i})$ is a submodule, so the sum in Theorem \ref{PavelConj} is direct.
\end{prop}
\begin{proof}
Let $v_k=[x_1,[x_1,x_2]\cdots[x_{2k-1},x_{2k}]]\in B_3(A_n)$, and let $X_k$ be the submodule generated by $v_k$. Clearly, $\partial_iv_k=0$ for all $i$, so the JH series of $X_k$ involves only terms of the form $(2,1^{2r-1},0^{n-2r})$, where $r\geq k$. On the other hand, we saw in the proof of Lemma \ref{multbound} that $X_k$ cannot involve any $\mathcal{F}_\lambda$ with more than $2k+1$ boxes. Thus, we must have $X_k=(2,1^{2k-1},0^{n-2k})$, as desired.
\end{proof}
Corollary \ref{BConj} can now be derived by computing the dimensions of the graded components in the decomposition of Theorem \ref{PavelConj}.
\subsection{A geometric description of the bracket of $\bar{B}_1$ and $B_2$.}
The isomorphism in Theorem \ref{PavelConj} yields the following geometric description of the bracket map
\begin{align*}[-,-]:&(\bar{B}_1/\mathbb{C}) \otimes B_2\to B_3,\\&a\otimes b \mapsto [a,b].\end{align*}
To begin, we identify $\bar B_1/\mathbb{C}\cong\Omega_{ex}^{odd}$, and $B_2\cong\Omega_{ex}^{ev,\ge
2}$, as in \cite{FS}. Also, we identify $B_3$ with the direct sum of Theorem \ref{PavelConj}, by sending
$[x_1,[x_1,x_2]\cdots[x_{2k-1},x_{2k}]]$
to $\pi(dx_1\otimes dx_1\wedge\cdots\wedge dx_{2k}),$ where $\pi:V\otimes\Lambda^{2k}V\to (2,1^{2k-1},0^{n-2k})$ is the standard projection.\footnote{Here, to fix normalizations unambiguously, we define the $\mathfrak{gl}(V)$-modules $(2,1^{p-1},0,\ldots,0)$ as the unique submodules of the corresponding type in $V\otimes \Lambda^p V.$}
\begin{defn}Let $\psi_s: \Lambda^{s+1} V\otimes \Lambda^q V \to (2,1^{s+q-1},0,\ldots,0)$ be the unique (up to scaling) morphism of $\mathfrak{gl}_n$-modules:
$$\psi_s(v_0\wedge\cdots\wedge v_s\otimes b)=\sum_{i=0}^s (-1)^i \pi(v_i\otimes
(v_0\wedge\cdots\hat v_i\cdots\wedge v_s\wedge b))$$
\end{defn}
\begin{prop} For $a\in \Omega_{ex}^{2p+1}, b\in \Omega_{ex}^{ev,\ge 2},$ the bracket map is given by
the formula:
$$[a,b]=\psi_{2p}(a\otimes b).$$
\end{prop}
\begin{proof}
This follows immediately from Lemma 5.1 by induction on $p$.
\end{proof}
In particular, the proposition implies that the bracket map is induced from a fiberwise morphism of the corresponding vector bundles on $\mathbb{A}^n$.
\section{Decompositions}\label{decomps}
Using the computational algebra system MAGMA \cite{BCP}, we were able to produce
the bi-graded Hilbert series of $B_{m}(A_{2})$ up to degree $12$, and the tri-graded Hilbert series of $B_{m}(A_{3})$ up to degree $8$. Combined with Theorem \ref{2m-3}, these results imply the following:
\begin{thm}\label{JH} The Jordan-H\"{o}lder series of $B_{m}(A_{2})$ for $m=2,\ldots,7$ are
\begin{align*}
B_{2}(A_{2}) =& (1,1) \textrm{\cite{FS}}\\
B_{3}(A_{2}) =& (2,1) \textrm{\cite{DKM}}\\
B_{4}(A_{2}) =& (3,1)+(3,2) \textrm{\cite{DKM}}\\
B_{5}(A_{2}) =& (4,1)+(3,2)+(4,2)+(4,3)\\
B_{6}(A_{2}) =& (5,1)+(4,2)+(3,3)+2(5,2)+2(4,3)+(5,3)+(5,4)\\
B_{7}(A_{2}) =&
(6,1)+2(5,2)+2(4,3)+2(6,2)+3(5,3)+2(4,4)+2(6,3)+2(5,4)\\ &+(6,4)+(6,5)
\end{align*}
The Jordan-H\"{o}lder series of $B_{m}(A_3)$ for $m=2,\ldots,5$ are
\begin{align*}
B_{2}(A_{3}) &=(1,1,0)\textrm{\cite{FS}}\\
B_{3}(A_{3}) &=(2,1,0)\textrm{\cite{DE}}\\
B_{4}(A_{3}) &=(3,1,0)+(2,1,1)+(3,2,0)+(2,2,1) \quad\textrm{\emph{(conj. in \cite{FS})}}\\
B_{5}(A_{3}) &=
(4,1,0)+(3,2,0)+(3,1,1)+(2,2,1)+(4,2,0)+(4,1,1)\\&+
3(3,2,1)+(2,2,2)+(4,3,0)+(3,3,1) \end{align*}
\end{thm}
\pagestyle{empty}
%% Source record: arXiv:0902.4899, "New results on the lower central series
%% quotients of a free associative algebra" (math.RA; math.QA; math.RT).
%% Abstract: We continue the study of the lower central series and its
%% associated graded components for a free associative algebra with n
%% generators, as initiated by B. Feigin and B. Shoikhet. We establish a linear
%% bound on the degree of tensor field modules appearing in the Jordan-Hoelder
%% series of each graded component, which is conjecturally tight. We also bound
%% the leading coefficient of the Hilbert polynomial of each graded component.
%% As applications, we confirm conjectures of P. Etingof and B. Shoikhet
%% concerning the structure of the third graded component.
%% Source record: arXiv:0902.4682, "Lectures on Jacques Herbrand as a Logician".
%% Abstract: We give some lectures on the work on formal logic of Jacques
%% Herbrand, and sketch his life and his influence on automated theorem proving.
%% The intended audience ranges from students interested in logic over
%% historians to logicians. Besides the well-known correction of Herbrand's
%% False Lemma by Goedel and Dreben, we also present the hardly known
%% unpublished correction of Heijenoort and its consequences on Herbrand's
%% Modus Ponens Elimination. Besides Herbrand's Fundamental Theorem and its
%% relation to the Loewenheim-Skolem-Theorem, we carefully investigate
%% Herbrand's notion of intuitionism in connection with his notion of falsehood
%% in an infinite domain. We sketch Herbrand's two proofs of the consistency of
%% arithmetic and his notion of a recursive function, and last but not least,
%% present the correct original text of his unification algorithm with a new
%% translation.
\section{\Firstsectionname}%
\label{sec:preface}%
Regarding the work on formal logic of \herbrandname\ \herbrandlifetime,
our following lectures will provide a lot of useful information for the
student interested in logic
as well as a few surprising insights for the experts in the fields
of history and logic.
As \herbrandname\ today is an idol of many scholars,
right from the start
we could not help asking ourselves the following questions: \
Is there still something to learn from
his work on logic
which has not found its way into the standard textbooks on logic? \
Has everything already been published
which should be said or written on him? \
Should we treat him just as an icon?
Well, the lives of
mathematical prodigies who
passed away very early
after ground-breaking work invoke a fascination for later generations: \
The early death of \abelname\ \abellifetime\ from ill health after a
sled trip to visit his \fiance\ for Christmas; \hskip .2em the obscure
circumstances of \galoisname' \galoislifetime\ duel; \hskip .2em the
deaths of consumption of \eisensteinname\ \eisensteinlifetime\ (who
sometimes lectured his few students from his bedside) and of
\rochname\ \rochlifetime\ in Venice; \hskip .2em the drowning of the
topologist {\urysohnname} {\urysohnlifetime} on vacation; \hskip .2em
the burial of \paleyname\ \paleylifetime\ in an avalanche at Deception
Pass in the Rocky Mountains; \hskip .2em as well as the fatal
imprisonment of
\index{Gentzen!Gerhard}%
\gentzenname\ \gentzenlifetime\ in
\Prag\footnote{\Cf\ \citep{last-months-gentzen}.} --- these are
tales most scholars of logic and mathematics have heard in their
student days.
\index{Herbrand!Jacques|(}%
\herbrandname, a young prodigy admitted to the
{\frenchfont\EcoleNormaleSuperieure} as the best student of the
year\,1925, when he was\,17, died only six years later in a
mountaineering accident in {\frenchfont La B\'erarde (Is\`ere)} in France. \
He left a legacy in logic and mathematics that is outstanding.
Despite his very short life,
\herbrand's contributions
were of great significance at his time and they had a strong impact on the
work by others later in
mathematics, proof theory, computer science, and artificial intelligence. \
Even today the name ``\herbrand'' can be found astonishingly often
in research papers in fields that did not even exist at his time.\footnote
{To \nolinebreak wit, the search in any online library (\eg\ citeseer)
reveals that astonishingly many authors
dedicate parts of their work directly to \herbrandname. \
A ``Google Scholar'' search gives a little less than ten thousand hits and the
phrases we find by such an experiment include:
\herbrand\ agent language, \herbrand\ analyses, \herbrand\ automata,
\herbrand\ base, \herbrand\ complexity, \herbrand\ constraints,
\herbrand\ disjunctions, \herbrand\ entailment, \herbrand\ equalities,
\herbrandexpansion, \herbrandsfundamentaltheorem, \herbrand\
functions, \herbrand--\gentzen\ theorem, \herbrand\ interpretation,
\herbrand--\kleene\ universe, \herbrand\ model, \herbrand\ normal
forms, \herbrand\ procedures, \herbrand\ quotient, \herbrand\
realizations, \herbrand\ semantics, \herbrand\ strategies, \herbrand\
terms, \herbrandribet\ theorem, \herbrand's theorem, \herbrand\
theory, \herbranduniverse. \ Whether and to what extent these
references to \herbrand\ are justified is sometimes open for debate.
This list shows, however, that in addition to the
foundational importance of his work at the time, his insights still
have an impact on research even at the present time. \ \herbrand's
name is therefore not only frequently mentioned among the most important
mathematicians and logicians of the \nth{20} century but also among
the pioneers of modern computer science and artificial intelligence.}
Let us start this \firstsectionname\ by sketching a preliminary
list of topics that were influenced by \herbrand's work.%
\vfill\pagebreak
\subsection{Proof Theory}\noindent
\index{Dalen, Dirk van}%
\dalenname\ \dalenlifetime\ \hskip.2em
begins his review on
\citep{herbrand-logical-writings} \hskip.2em
as follows:%
\index{consistency!proof of|(}%
\index{consistency!of arithmetic|(}%
\notop\halftop\begin{quote}
``Much of the logical activity in the first half of this century was inspired by
\index{programme!Hilbert's|(}%
\hilbertsprogram, which contained, besides fundamental reflections on the
nature of mathematics, a number of clear-cut problems for technically gifted
people. \
In particular the quest for so-called ``consistency proofs'' was
taken up by quite a number of logicians. \
Among those, two men can be singled
out for their imaginative approach to logic and mathematics: \
\herbrandname\ and
\index{Gentzen!Gerhard}%
\gentzenname. \hskip.3em
Their contributions to this specific area of logic, called
``proof theory'' ({\germanfont Bewei\esi theorie}) \hskip.3em
following
\index{Hilbert!David}%
\hilbert, are so fundamental that
one easily recognizes their stamp in modern proof theory.''\footnote
{\Cfnlb\ \citep[\p\,544]{dalen74:_review}.}
\notop\halftop\end{quote}
\noindent
\dalen\ continues:
\notop\halftop\begin{quote}
``When we realize that
\herbrand's activity in logic took place in just a few
years, we cannot but recognize him as a giant of proof theory. \
He discovered
an extremely powerful theorem and experimented with it in proof theory. \
It is
fruitless to speculate on the possible course \herbrand\ would have chosen,
had he not died prematurely; \hskip.2em
a \nolinebreak consistency proof for full arithmetic would
have been within his reach.''\footnote
{\Cfnlb\ \citep[\p\,548]{dalen74:_review}.}
\notop\halftop\end{quote}
\noindent
The major thesis of
\index{Anellis!Irving H.}%
\cite{anellis-loewenheim}
is that, building on the
\index{L\"owenheim!--Skolem Theorem}%
\loewenheimskolemtheorem,
it was \herbrand's work in elaborating
\index{Hilbert!David}%
\hilbert's concept
of ``being a proof'' that gave rise to the development
of the variety of \firstorder\ calculi in the 1930s,
such as the ones of the
\index{Hilbert!school}%
\hilbert\ school,
and such as Natural Deduction and Sequent calculi in
\index{Gentzen!'s calculi}%
\citep{gentzen}.
As will be shown in \sectref{section lemma}, \hskip.2em
\index{Herbrand!'s Fundamental Theorem|(}%
\herbrandsfundamentaltheorem\ has directly influenced
\index{Bernays, Paul}%
\bernaysname' work on proof theory. \
The main inspiration in the
\index{programme!unwinding|(}%
{\em unwinding programme},\footnote
{\Cfnlb\
\makeaciteoffour{kreisel-1951}{kreisel-1952}{kreisel-1958}{kreisel-1982},
and do not miss the discussion in
\index{Feferman, Sol(omon)}%
\citep{unwinding}!\begin{quote}
``{\em To determine the constructive (recursive) content or the
constructive equivalent of the non-constructive concepts and
theorems used in mathematics},
particularly arithmetic and analysis.''\getittotheright
{\cite[\p\,155]{kreisel-1958}}\notop\notop\end{quote}}
which
--- \nolinebreak to save the merits of proof theory \nolinebreak---
\index{Kreisel, Georg}%
\kreiselname\ \kreisellifetime\ \hskip.2em
suggested as a replacement for
\index{programme!Hilbert's|)}%
\hilbert's failed programme,
is \herbrandsfundamentaltheorem,
especially for \kreisel's notion of a {\em recursive interpretation}\/
of a logic calculus in another one,\footnote
{\Cfnlb\ \cite[\p\,160]{kreisel-1958} and \citep[\p\,259\f]{unwinding}.}
such as given by \herbrandsfundamentaltheorem\
for his \firstorder\ calculus in the
\sententialtautologies\ over the language \signatureenlargedby\
\skolem\ functions.
\herbrand's approach to consistency proofs, as we will sketch in
\sectrefs{section 1 Proof}{section 2 Proof}, has a
semantical flavor and is inspired by \hilbert's evaluation method of
\index{Hilbert!'s epsilon}%
\mbox{\math\varepsilon-substitution},
whereas it avoids the dependence on \hilbert's $\varepsilon$-calculus. \
The main idea (\cfnlb\ \sectref{section 2 Proof}) \hskip .2em
is to replace the induction axiom
by
\index{function!recursive|(}%
recursive functions of finitistic character. \
\herbrand's
approach is in contrast to the purely syntactical style of
\index{Gentzen!'s consistency proof}%
\gentzen\footnote
{\Cfnlb\
\makeaciteofthree{gentzenfirstconsistent}{gentzenconsistent}{gentzenepsilon}.}
and \schuette\footnote
{\Cfnlb\ \citep{schuette60:_beweis}.}
in which semantical interpretation plays no \role.
So-called
\index{Herbrand!-style consistency proof}%
{\em \herbrand-style consistency proofs}\/
follow \herbrand's idea of constructing finite sub-models to
imply consistency by \herbrandsfundamentaltheorem.
During the early 1970s, this technique was used by
\index{Scanlon, Thomas M., Jr.}%
\scanlonname\ \scanlonlifetime\ \hskip.2em
in collaboration with
\index{Dreben, Burton}%
\dreben\ and
\index{Goldfarb, Warren}%
\goldfarb.\footnote
{\Cfnlb\ \citep{herbrand-style-consistency-proofs},
\citep{scanlon73:_consis_number_theor_via_theor},
\citep{goldfarb-herbrand-consistency}.}
These consistency proofs for arithmetic
roughly follow
\index{Ackermann, Wilhelm}%
\ackermann's previous proof,\footnote
{\Cfnlb\ \citep{ackermann-consistency-of-arithmetic}.}
but they apply \herbrandsfundamentaltheorem\ in advance
and consider \skolemizedform\ instead of
\index{Hilbert!'s epsilon}%
\hilbert's \nlbmath\varepsilon-terms.\footnote
{Contrary to the proofs of \citep{herbrand-style-consistency-proofs}
and \citep{ackermann-consistency-of-arithmetic},
the proof of \citep{scanlon73:_consis_number_theor_via_theor},
which is otherwise
similar to the proof of \citep{herbrand-style-consistency-proofs},
admits
the inclusion of induction axioms over {\em any}\/
recursive \wellordering\ on the natural numbers: \par
By an application of
\index{Herbrand!'s Fundamental Theorem|)}%
\herbrandsfundamentaltheorem,
from a given derivation of an inconsistency,
we can compute a positive natural number
\nlbmath n such that
\index{Property C}%
\propertyC\ of order \nlbmath n holds. \
Therefore, in his analog of \hilbert's and
\index{Ackermann, Wilhelm}%
\ackermann's
\index{Hilbert!'s epsilon}%
\math\varepsilon-substitution method,
\index{Scanlon, Thomas M., Jr.}%
\scanlon\ can effectively pick a minimal counterexample on the
\index{champ fini}%
{\frenchfont champ fini}
\nlbmath{\termsofdepthnovars n}
from a given critical counterexample, even if it has neither
a direct predecessor nor a finite initial segment. \
\par
This result was then further generalized in
\index{Goldfarb, Warren}%
\citep{goldfarb-herbrand-consistency}
to $\omega$-consistency of arithmetic.%
}
\index{Gentzen!'s {\germanfont Hauptsatz}}%
\gentzen's and \herbrand's insight on Cut and
\index{modus ponens}%
\index{modus ponens!elimination}%
\index{Cut elimination}%
{\em modus ponens} elimination
and the existence of normal form derivations
with mid-sequents had a strong influence
on
\index{Craig!William}%
\craigname's work on interpolation\fullstopnospace\footnote
{\Cfnlb\ \citep{craig57:_linear_reason,craig57:_three_uses_herbr_gentz_theor}.}
The impact of
\index{Craig!'s Interpolation Theorem}%
{\em\craig's Interpolation Theorem}\/ on various
disciplines in turn has recently been discussed at the Interpolations
Conference in Honor of William Craig in May 2007.\footnote
{\Cfnlb\ \url{http://sophos.berkeley.edu/interpolations/}.}
\subsection{Recursive Functions and \protect
\goedelssecondIncompletenessTheorem}%
\index{Goedel@G\"odel!'s Second Incompleteness Theorem}%
\noindent
As will be discussed in detail in \nlbsectref{section recursive functions},
in his 1934 \Princetonnostate\ lectures,
\index{Goedel@G\"odel!Kurt}%
\goedel\ introduced the notion of
\index{function!recursive|)}%
{\em (general) recursive functions} and mentioned that this notion
had been proposed to him in a letter from \herbrand,
\cfnlb\ \nlbsectref{sec:life}. \
This letter, however, seems to have had more influence on \goedel's thinking,
namely on the consequences of \goedelssecondincompletenesstheorem:
\notop\halftop\begin{quote}
``Nowhere in the correspondence
does the issue of
{\em general}\/ computability arise. \
\herbrand's discussion, in particular, is solely
trying to explore the limits of
\index{consistency!proof of|)}%
\index{consistency!of arithmetic|)}%
consistency proofs that are imposed by the second theorem. \
\goedel's response also
focuses on that very topic. \
It seems that he subsequently developed
a more critical perspective on the
very character and generality of this theorem.''\getittotheright
{\citep[\p\,180]{sieg05:_only}}
\notop\halftop\end{quote}
\noindent
\citet{sieg05:_only} argues that the letter of \herbrand\ to
\goedel\ caused a change of \goedel's perception of the impact of his
own \secondincompletenesstheorem\ on
\index{programme!Hilbert's}%
\hilbertsprogram:
Initially \goedel\ did assert that it would not
contradict \hilbert's viewpoint. \
Influenced by \herbrand's letter, however, he accepted the
more critical opinion of \herbrand\ on this matter.%
\vfill\pagebreak
\yestop\begin{figure}[h]
\framebox{%
\begin{minipage}{.995\linewidth}
\small
\englishherbrandribettheorem
\end{minipage}}
\caption{\herbrandribet\ Theorem\label{figure herbrand ribet}}
\end{figure}
\yestop\yestop\yestop\yestop\subsection{Algebra and Ring Theory}
\yestop\noindent
In 1930--1931, within a few months, \herbrand\ wrote several papers
on algebra and ring theory. \
During his visit to Germany he met and
briefly worked on this topic with
\index{Noether, Emmy}%
\noether,
\index{Hasse, Helmut}%
\hasse, and
\index{Artin!Emil}%
\artin;
\cfnlb\ \sectref{sec:life}\@. \
He contributed several new theorems of his own and
simplified proofs of results by
\index{Kronecker, Leopold}%
\kroneckername\ \kroneckerlifetime,
\webername\ \weberlifetime,
\takaginame\ \takagilifetime,
\index{Hilbert!David}%
\hilbert, and
\index{Artin!Emil}%
\artin, thereby generalizing some of these results.
The
\index{Herbrand!--Ribet Theorem|(}%
\herbrandribet\ Theorem
is a result on the class number of certain number fields and it strengthens
\kummer's convergence criterion; \cfnlb\ \figuref{figure herbrand ribet}.
\vfill\pagebreak
Note that there is no direct connection between \herbrand's work on logic and
his work on algebra.
Useful applications of proof theory to mathematics are very rare.
\index{programme!unwinding|)}%
\kreisel's ``unwinding'' of
\index{Artin!'s Theorem!Artin's proof}%
\artin's proof of
\index{Artin!'s Theorem|(}%
\artin's Theorem
into a constructive proof seems to be one of the few exceptions.\footnote
{With \artin's Theorem we mean:
\begin{quote}``{\germanfontfootnote\germantextartineins}''
\getittotheright{\citep[\p\,100]{artin-1927}}
\end{quote}
\begin{quote}``{\germanfontfootnote\germantextartinzwei}''
\getittotheright{\citep[\p\,109, modernized orthography]{artin-1927}}
\end{quote}
\Cf\ \cite{unwinding-artin} for the unwinding
of \artin's proof of \artin's Theorem.
\\\Cf\ \cite{unwinding} for a discussion of the application of proof theory
to mathematics in general. %
\index{Herbrand!--Ribet Theorem|)}%
\index{Artin!'s Theorem|)}%
}%
\subsection{A first r\'esum\'e}
Our following lectures will be more self-contained than this
\firstsectionname. \
But already on the basis of this first overview,
we just have to admit that
\herbrandname's merits are so outstanding
that he has no chance to escape idolization. \
Actually, he has left a world heritage in logic in a very short time. \
But does this mean that we should treat him just as an icon?
On a more careful look,
we will find out that this genius had his flaws,
just like every one of us made of this strange protoplasmic variant of matter,
and that he has left some of them in his scientific writings.
And he can teach us not only to be less afraid
of logic than of mountaineering; \hskip.2em
he can also provide us with a surprising amount of insight
that partly still waits to be rescued from the contortions
of praise and faulty quotations.
\subsection{Still ten minutes to go}\label
{section Still ten minutes to go}
Inevitably, when all introductory words are said,
we will feel the urge to point out to the young
students that
there are things beyond the
latest developments of computer
technology or the fabric
of the Internet: eternal truths valid on planet Earth but
in all those far away galaxies just as \nolinebreak well.
And as there are still ten minutes to go till the end of the lecture,
the students listen in surprise to the strange tale about the unknown
flying objects from far away, now visiting planet Earth and being
welcomed by a party of human dignitaries from all strata of society.
Not knowing what to make of all this, the little green visitors will
ponder the state of evolution on this strange but beautiful planet:
obviously life is there --- but can it think?
The Earthlings seem to have flying machines, they are all connected
planet-wide by communicators --- but can they really think?
Their gadgets and little pieces of machinery appear impressive --- but is
there a true civilization on planet Earth?
How dangerous are they,
these Earthlings made of a strange protoplasmic variant of matter?
And then cautiously looking through the electronic windows of their
flying unknown objects, they notice that strange little bearded
Earthling, being pushed into the back by the more powerful
dignitaries, who holds up a sign post with
\par\yestop\noindent\LINEnomath{\fbox{$\ \models\quad\equiv\ \ \yields$}}
\par\yestop\noindent
written on it.
Blank faces, not knowing what to make of all this, the
oldest and wisest scientist is slowly moved out through the e-door of
the flying object, slowly being put down to the ground, and
now the bearded Earthling is asked to come forward and the two begin
that cosmic debate about syntax and semantics, proof theory and model
theory, while the dignitaries stay stunned and silent.
And soon there is a sudden flash of recognition and
a warm smile on that green and wrinkled old face
of one who has seen it all and now waves back to his
fellow travelers who remained safely within the flying object:
``Yes, they have minds --- yes \nolinebreak oh \nolinebreak yes!''
And this is why the name ``{\herbrandname}'' is finally written among others
with a piece of chalk onto the blackboard --- and now that the introductory
lecture is coming to a close,
we promise
to tell in the following lectures,
what this name stands for and what that young scientist found
out when he was only 21\,years old.
\vfill
\begin{center}
\includegraphics
[width=1.0\linewidth]
{KnowingUfoVisitors.eps}
\end{center}
\vfill
\vfill\pagebreak
\section{\herbrand's Life}
\label{sec:life}
\noindent
This brief r\'esum\'e of \herbrandname's life
focuses on his entourage and the people he met\fullstopnospace\footnote
{More complete accounts of \herbrand's life and personality can be found in
\makeaciteoftwo{herbrand-thoughts}{chevalley1982-herbrand-colloquium},
\citep{herbrand-praise}, \citep[\Vol\,V, \PP{3}{25}]{goedelcollected}. \
All in all, very little is known about his personality and life.}
He was born on \herbrandbirthday, in \Paris, France,
where his father,
\index{Herbrand!Jacques, Sr.\ (Herbrand's father)|(}%
\jacques\ \herbrand\ \sr, worked as a trader in
antique paintings\fullstopnospace\footnote
{\label{note chevalley1982-herbrand-colloquium}\Cfnlb\
\citep{chevalley1982-herbrand-colloquium}.}
He remained the only
child of his parents, who were of Belgian origin. He died --- 23
years old --- in a mountaineering accident on \herbranddeathday, in
{\frenchfont La B\'erarde, Is\`ere}, France.
In 1925, only 17 years old, he was ranked first at the entrance
examination to the prestigious {\frenchfont\em\EcoleNormaleSuperieure}
({\frenchfont\em ENS}\/)\footnote
{\Cf\ \url{http://www.ens.fr} for the {\frenchfont ENS} in
general and \url{http://www.archicubes.ens.fr} for the former
students of the {\frenchfont ENS}.}
in \Paris\ --- but he showed little interest in
the standard courses at the Sorbonne,
which he considered a waste of time\fullstopnospace
\arXivfootnotemarkref{note chevalley1982-herbrand-colloquium} \
However, he closely
followed the famous ``{\frenchfont S\'eminaire \hadamard}'' at the
{\frenchfont Coll\`ege de France}, organized by
\index{Hadamard, Jacques S.|(}%
\hadamardname\ \hadamardlifetime, from 1913 until 1933.\footnote
{\Cfnlb\ \citep[\p\,82]{hadamard-RS-memories},
\citep[\p\,107]{chevalley-praise}.} \
That seminar attracted many students. \
Among those attending in \herbrand's time,
the following became prominent in their later lives:\par\noindent
\index{Weil, Andr\'e}%
\index{Dieudonn\'e, Jean}%
\index{Lautman, Albert}%
\index{Chevalley!Claude}%
\LINEnomath{
\begin{tabular}{l|l|l}
name
&lifetime
&year of entering the {\frenchfont ENS}
\\\hline
\headroom
\weilname &\weillifetime &1922
\\\dieudonnename&\dieudonnelifetime&1924
\\\herbrandname &\herbrandlifetime &1925
\\\lautmanname &\lautmanlifetime &1926
\\\chevalleyname&\chevalleylifetime&1926
\end{tabular}}
\halftop\par\noindent\weil, \dieudonne, and \chevalley\
would later be known among the
eight founding members of the renowned \bourbaki\ group:
the French mathematicians who published the book series
on the formalization of mathematics, starting with \cite{bourbaki}.
\weil, \lautman, and \chevalley\ were \herbrand's friends.
\chevalley\ and
\herbrand\ became particularly close friends\footnote
{%
\index{Chevalley!Catherine}%
\catherine\ \chevalley, the daughter of \chevalleyname, has written
to
\index{Roquette, Peter}%
\roquettename\ on \herbrand: ``he was maybe my father's dearest
friend''
\index{Roquette, Peter}%
\citep[\p\,36, \litnoteref{44}]{roquette-artin}.}
and they worked together on algebra.\footnote
{\Cfnlb\ \citep{herbrand-chevalley}.} \hskip.4em
\chevalley\ depicts \herbrand\ as an adventurous, passionate, and often
perfectionistic personality who was not only interested in
mathematics, but also in poetry and sports.\arXivfootnotemarkref
{note chevalley1982-herbrand-colloquium} \
In particular, he
seems to have liked extreme sporting challenges: mountaineering,
hiking, and long-distance swimming.
His interest in philosophical
issues and foundational problems of science was developed well beyond
his \nolinebreak age.
At that time, the {\frenchfont ENS}
did not award a diploma, but the students had to
prepare the {\frenchfont\em agr\'egation},
an examination necessary to be promoted
to {\frenchfont\em professeur agr\'eg\'e},\footnote
{This corresponds to a high-school teacher.
The original \role\ of the {\frenchfont ENS} was to educate students
to become high-school teachers. \
Also Jean-Paul Sartre
started his career like this.}
even though most students engaged in research.
\herbrand\ passed the {\frenchfont agr\'egation} in\,1928,
again ranked first, and he prepared his doctoral thesis under the
direction of
\index{Vessiot, Ernest|(}%
\vessiotname\ \vessiotlifetime, who had been the director of
the {\frenchfont ENS} since\,1927.\footnote
{It is interesting to note that
{\vessiotname} and
\index{Hadamard, Jacques S.|)}%
\hadamardname\ were the two top-ranked students
at the examination for the {\frenchfont ENS} in 1884.}%
\herbrand\ submitted his thesis \citep{herbrand-PhD}, entitled
{\frenchfont\em\herbrandPhDtitle}, on \herbrandPhDdating. \
It was approved for publication on \herbrandPhDapprovedforpublicationdate. \
In \nolinebreak
October that year he had to join the army for his military service,
which in those days lasted one year.
He finally defended his
thesis on \herbrandPhDdefensedate.\footnote
{One reason for the late defense is that because of the minor \role\
mathematical logic played at that time in France, \herbrand's
supervisor \vessiotname\ had problems finding examiners for the
thesis. The final committee consisted of
\index{Vessiot, Ernest|)}%
\vessiotname,
\denjoyname\ \denjoylifetime\ and \frechetname\ \frechetlifetime.}
After completing his military service in September\,1930, awarded
with a Rocke\-feller Scholarship, he spent the academic year
1930--1931 in Germany and planned to stay in \Princetonnostate\footnote
{\Cfnlb\ \citep[\Vol\,V, \p\,3\f]{goedelcollected}. \
\Cfnlb\ \citep{hasse-herbrand-correspondence}
for the letters
between \hasse, \herbrand, and \wedderburnname
\index{Wedderburn, Joseph H. M.}%
\ \wedderburnlifetime\ (\Princetonnostate)
on \herbrand's visit to \Princetonnostate.}
for the year after.
He visited the following mathematicians:\halftop\par\noindent
\index{Neumann, John von}%
\index{Artin!Emil}%
\index{Noether, Emmy}%
\LINEnomath{
\begin{tabular}{l|l|l|l}
hosting scientist
&lifetime
&place
&time of \herbrand's stay
\\\hline
\begin{tabular}[c]{@{}l@{}}
\neumannname
\\\artinname
\\\noethername\footnotemark
\\\end{tabular}
&\begin{tabular}[c]{@{}c@{}}
\neumannlifetime
\\\artinlifetime
\\\noetherlifetime
\\\end{tabular}
&\begin{tabular}[c]{@{}l@{}}
\Berlin\
\\\Hamburg
\\\Goettingen
\\\end{tabular}
&\begin{tabular}[c]{@{}l@{}}
\math\{
\\\math\{
\\\math\{
\\\end{tabular}
\begin{tabular}[c]{@{}l@{}}
middle of \Oct\,1930
\\middle of \May\,1931
\\middle of \Jun\,1931
\\middle of \Jul\,1931\footnotemark
\\\end{tabular}
\\\end{tabular}}\addtocounter{footnote}{-1}\footnotetext
{\herbrand\ had met \noether\ already very early in\,1931 in \Halle. \
\Cfnlb\ \citep[\p 106, \litnoteref{10}]{noether-hasse-correspondence}.}%
\addtocounter{footnote}{1}\footnotetext
{According to \citep[\p 73]{Dubreil-1983-a},
\herbrand\ stayed in
\index{Goettingen@G\"ottingen}%
\Goettingen\
until the {\em beginning}\/ of \Jul\,1931. \
In \nolinebreak his report to the Rockefeller
Foundation \herbrand\ wrote that his stay in Germany lasted from \Oct\,20,
1930, until the {\em end}\/ of \Jul\,1931, which is unlikely because
he died on \herbranddeathday, in France.}%
\par\halftop\noindent\herbrand\ discussed his ideas with
\index{Bernays, Paul}%
\bernaysname\ \bernayslifetime\
in \Berlin, and he met \bernaysname,
\index{Hilbert!David}%
\hilbertname\ \hilbertlifetime, and
\index{Courant, Richard}%
\courantname\ \courantlifetime\
later in
\index{Goettingen@G\"ottingen}%
\Goettingen.\footnote
{That \herbrand\ met \bernays, \hilbert, and \courant\ in
\index{Goettingen@G\"ottingen}%
\Goettingen\ is most likely, but we cannot document it. \
\hilbert\ was still lecturing regularly in\,1931, \cfnlb\
\citep[\p 199]{reid-hilbert}. \
\courant\ wrote a letter to \herbrand's father,
\index{Herbrand!Jacques, Sr.\ (Herbrand's father)|)}%
\cfnlb\ \citep[\p\,25, \litnoteref 1]{herbrand-logical-writings}.}
On \herbrandtogoedeldate, \herbrand\ wrote a letter to
\index{Goedel@G\"odel!Kurt}%
\goedelname\ \goedellifetime, who answered with some delay on \Jul\,25, most
probably too late for the letter to reach \herbrand\ before his early
death two days later\fullstopnospace\footnote
{\Cfnlb\ \citep[\Vol\,V, \PP{3}{25}]{goedelcollected}.}
In other words, although {\herbrandname} was still a relatively
unknown young scientist, he was well connected to the best
mathematicians and logicians of his time, particularly to those
interested in the foundations of mathematics.
\begin{sloppypar}
\herbrand\ met
\index{Hasse, Helmut}%
\hassename\ \hasselifetime\ at the {\germanfont
Schiefk\"orper-Kongre\ss}
(\Feb\,26~--~\Mar\,1, 1931) \hskip.2em
in \Marburg, \hskip.2em
and he wrote several letters including plenty of mathematical ideas to
\hasse\ afterwards.\footnote
{\Cfnlb\ \makeaciteoftwo{roquette-email}{hasse-herbrand-correspondence}.} \
After exchanging several
compassionate letters with \herbrand's father,\footnote
{\Cfnlb\ \citep{hasse-herbrand-senior-correspondence}.} \hskip.3em
\hasse\ wrote \herbrand's obituary which is printed as the
foreword to
\herbrand's article on the consistency of arithmetic
\citep{herbrand-consistency-of-arithmetic}.\pagebreak\end{sloppypar}
\section{Finitistic Classical \FirstOrder\ Proof Theory}\label
{section Subject Area and Methodological Background}%
\index{finitism}%
\index{logic!classical}%
\index{logic!two-valued}%
\noindent \herbrand's work on logic falls into the area of what is
called {\em proof theory}\/ today. \
More specifically, he is concerned
with the {\em finitistic} analysis of {\em two-valued} (\ie\ {\em
classical}\/), {\em\firstorder}\/ logic and its relationship to
sentential, \ie\ propositional logic.
Over the millennia, logic developed as {\em proof theory}.
The key
observation of the ancient Greek schools, first formulated by
\aristotlename\ \aristotlelifetime, \hskip .2em is that certain
patterns of reasoning are valid irrespective of their actual
denotation. \
From ``all men are mortal'' and ``Socrates is a man'' we
can conclude that ``Socrates is mortal\closequotecomma irrespective
of Socrates' most interesting personality and the contentious meaning
of ``being mortal'' in this and other possible worlds. \
The discovery
of those patterns of reasoning, called {\em syllogisms}, where
meaningless symbols are used instead of everyday words, was the
starting point of the known history of mathematical logic in the
ancient world. \
For over two millennia, the development of these rules
for drawing conclusions from given assumptions just on the basis of
their {\em syntactical form} was the main subject of logic.
{\em Model theory} --- on the other hand --- the study of formal languages
and their interpretation, became a respectable way of logical
reasoning through the seminal works of
\index{L\"owenheim!Leopold}%
\loewenheimname\ \loewenheimlifetime\ and
\index{Tarski, Alfred}%
\tarskiname\ \tarskilifetime. \ \
Accepting the
actual infinite, model theory considers the
set-theoretic semantical structures of a given language. \
With \tarski's work, the relationship between these two areas of logic
assumed overwhelming importance --- as
captured in our little anecdote
of the \firstsectionname\ (\sectref{section Still ten minutes to go}),
where `\math\models' \nolinebreak signifies model-theoretic validity and
`\tightyields' \nolinebreak
denotes proof-theoretic derivability.\footnote
{As we will discuss in \sectref{section herbrand loewenheim skolem},
\herbrandname\ still had problems in telling `\tightyields' and
`\math\models' apart: \
For instance, he blamed
\index{L\"owenheim!Leopold}%
\loewenheim\ for not showing
\index{consistency!of first-order logic}%
consistency of \firstorder\ logic,
which is a property related to
\herbrand's \nolinebreak `\tightyields\closesinglequotecommaextraspace
but not to \loewenheim's \nolinebreak
`\math\models\closesinglequotefullstopextraspace}
\herbrand's
scientific work
coincided with the maturation
of modern logic,
as marked inter alia by
\goedel's
\index{Goedel@G\"odel!'s First Incompleteness Theorem}%
\index{Goedel@G\"odel!'s Second Incompleteness Theorem}%
\incompletenesstheorem s of 1930--1931.\footnote
{\label{note incompleteness theorems}\Cfnlb\ \citep{goedel},
\citep{rosser-incompleteness}. \
For an interesting discussion of the reception of
the \incompletenesstheorem s \cfnlb\
\index{Dawson, John W., Jr.}%
\cite{dawson-reception-goedel}.}
It was strongly influenced by the
foundational crisis in mathematics as well. \
\index{Russell!'s Paradox}%
\russellsparadox\ was not only a personal calamity to
\index{Frege, Gottlob}%
\fregename\ \fregelifetime,\footnote
{\Cfnlb\ \citep[\Vol\,II]{frege-grundgesetze}\@.}
but it jeopardized the whole
enterprise of set theory and thus the foundation of modern mathematics. \
From an epistemological point of view, maybe there
was less reason for getting scared than it appeared at the time: \
As
\index{Wittgenstein, Ludwig}%
\wittgenstein\ \wittgensteinlifetime\ reasoned
later\commanospace\footnote{\Cfnlb\ \citep{wittgenstein}.}
the detection of
inconsistencies is an inevitable element of human learning, and many
logicians today would be happy to live at such an interesting time of
a raging\footnote
{This crisis has actually never been resolved in the sense that we
would have a single set theory that suits all the needs of a
working mathematician.
\notop\halftop\begin{quote}
``\englishquoteforstertext''
\getittotheright{\englishquoteforstertextplace}
\end{quote}}
foundational crisis. \
\pagebreak
\yestop\yestop\noindent
The development of mathematics, however, more
often than not attracts intelligent young men looking for clarity
and reliability in a puzzling world threatened by social complexity. \
As
\index{Hilbert!David}%
\hilbertname\ put it:
\begin{quote}
``\germantexthilbertsicherheit''\footnote
{\Cfnlb\ \citep[\p 170]{unendliche}.\begin{quote}
``And where else are security and truth to be found,
if even mathematical thinking fails?''
\end{quote}}
\end{quote}
\begin{quote}
``\germantexthilbertignorabimus''\footnote
{\Cfnlb\ \citep[\p 180, modernized orthography]{unendliche}.\begin{quote}
``After all, one of the things that attract us most when we apply
ourselves to a mathematical problem is precisely that within us we
always hear the call: here is a problem, search for the solution;
you can find it by pure thought, for in mathematics there is no {\em
ignorabimus}.''\getittotheright
{\translationnotewithlongcite
{\p\,384}{heijenoort-source-book}{translation by \bauermengelbergname\index{Bauer-Mengelberg, Stefan}}}%
\end{quote}}
\end{quote}
\yestop\halftop\noindent
Furthermore,
\index{Hilbert!David}%
\hilbert\ did not want to surrender to the new
\index{intuitionism}%
``intuitionistic'' movements of
\index{Brouwer, Luitzen}%
\brouwername\ \brouwerlifetime\ and
\index{Weyl, Hermann}%
\weylname\ \weyllifetime, who suggested a restructuring of mathematics
with emphasis on the problems of existence and
\index{consistency|(}%
consistency rather than
elegance,
giving up many previous achievements, especially in analysis and in
the set theory of \cantorname\ \cantorlifetime:\footnote
{\Cfnlb\ \makeaciteofthree{brouwer1}{brouwer2}{brouwer3}, \
\makeaciteoftwo{weyl-1921}{weyl-grundlagenvortrag}, \
\citep{cantor-collected}.}
\begin{quote}
``\germantexthilbertcantorsparadise''\footnote
{\Cfnlb\ \germantexthilbertcantorsparadisecitation.\begin{quote}
``No one shall drive us from the paradise \cantor\ has created.''
\notop\notop\end{quote}%
}
\end{quote}
\begin{sloppypar}\yestop\halftop\noindent
Building on the works
of
\index{Dedekind!Richard}%
\dedekindname\ \dedekindlifetime, \hskip.3em
\index{Peirce!Charles S.}%
\peircename\ \peircelifetime, \hskip.3em
\index{Schr\"oder!Ernst}%
\schroedername\ \schroederlifetime, \hskip.3em
\index{Frege, Gottlob}%
\fregename\ \fregelifetime, \hskip.3em
and
\index{Peano!Guiseppe}%
\peanoname\ \peanolifetime, \hskip.3em
the celebrated three volumes of
\index{Principia Mathematica}%
{\em\PM}\/ \citep{PM} \hskip.3em
of
\index{Whitehead!Alfred North}%
\whiteheadname\ \whiteheadlifetime\ \hskip.3em
and
\index{Russell!Bertrand}%
\russellname\ \russelllifetime\ \hskip.3em
had provided evidence
that --- \nolinebreak in principle \nolinebreak--- mathematical
proofs could be reduced to logic, \hskip.3em
using only a few rules of inference and appropriate axioms.
\end{sloppypar}
\yestop\yestop\noindent
The goals of
\index{Hilbert!David|(}%
\index{programme!Hilbert's|(}%
\index{finitism|(}%
{\em\hilbertsprogram}\/ on the foundation of
mathematics, however, extended well beyond this: \
His contention was that the reduction of mathematics to formal theories of
logical calculi would be
insufficient to resolve
the foundational crisis of mathematics, nor would it protect against
\index{Russell!'s Paradox}%
\russellsparadox\ and other inconsistencies in the future --- unless the
consistency of these theories could be shown formally by simple
means.
\yestop\yestop\noindent
Let us elaborate on what was meant by these ``simple means\closequotefullstop
\\\mbox{}\raisebox{-1.9ex}{\rule{0ex}{.5ex}}
\vfill\pagebreak
Until he moved to \Goettingen,
\hilbert\ lived in \Koenigsberg, \hskip.2em
and his view on mathematics in the 1920s
was partly influenced by
\index{Kant, Immanu\"el|(}%
\kant's {\em Critique of pure reason}.\footnote
{\label{note objective reality}The transcendental philosophy of
{\em pure speculative\/\footnotemark\ reason}\/
is developed in the main work on
epistemology, the
{\em Critique of pure reason}\/ \makeaciteoftwo{KrVA}{KdrV},
of \kantname\ \kantlifetime, who spent most of his life in \Koenigsberg\
and strongly influenced the education at \hilbert's
high school and university in \Koenigsberg. \
The {\em Critique of pure reason}\/ elaborates how little we can know about
things independent of an observer
({\em things in themselves}, {\germanfontfootnote\em Dinge an sich selbst}\/)
in comparison to our conceptions,
\ie\ the representations of the things within our thinking
({\germanfontfootnote Erscheinungen und sinnliche Anschauungen,
Vorstellungen}). \
In \nolinebreak what he compared\footnotemark\
to the \kopernikan\ revolution
\cite[\p\,XVI]{KdrV}, \hskip.2em
\kant\ \nolinebreak
considered the conceptions gained in connection with sensual experience
to be real and partly objectifiable,
and accepted the
things in themselves only as limits of
our thinking, about which nothing can be known for certain. \
}\addtocounter{footnote}{-1}\footnotetext
{The term ``{\em pure (speculative) reason}\/'' is opposed to ``{\em (pure)
practical reason}\/\closequotefullstop}\addtocounter{footnote}{1}\footnotetext
{Contrary to what is often written, \kant\ never wrote
of ``his \kopernikan\ revolution of philosophy\closequotefullstop} \
Mathematics as directly and intuitively perceived by a mathematician is called
{\em contentual}\/\footnote
{The word
\index{contentual!history of the English word}%
``contentual'' was not part of the English language
until recently. \
For instance,
it \nolinebreak is not listed in the most complete Webster's \cite{webster}. \
According to \cite[\p\,viii]{heijenoort-source-book}, \hskip .3em
this \nolinebreak neologism was especially introduced by \bauermengelbergname\index{Bauer-Mengelberg, Stefan}\
as a translation for the word
``{\germanfontfootnote inhaltlich}'' in German texts on mathematics and logic,
because there was no other way to reflect the special intentions
of the \hilbert\ school when using this word. \
In \nolinebreak January\,2008, \hskip .2em
``contentual'' got 6350 Google hits, 5600 of which, however,
contain neither the word ``\hilbert'' nor the word
``\bernays\closequotefullstopextraspace
As these hits also include a pop song, \hskip .2em
``contentual'' is likely to become an English word outside science in
the near future. \
For a comparison,
there were 4\,million Google hits for ``contentious\closequotefullstop}
({\germanfont\em inhaltlich}\/) by \nolinebreak\hilbert. \
According to \citep{unendliche}, \hskip.2em
the notions and methods of contentual mathematics are partly abstracted from
finite symbolic structures
(such as explicitly and concretely given natural numbers, proofs, and
algorithms) \hskip.2em
where we can effectively {\em decide}\/ (in
finitely many effective steps) \hskip.2em
whether a given object has a certain
property or not. \
Beyond these
{\em aposterioristic}\/ abstractions from phenomena,
contentual mathematics also has a
{\em synthetic}\/\footnote
{\label{note wrong}``synthetic'' is the opposite of
``analytic'' and means that a statement provides new information
that cannot be deduced from a given knowledge base. \ Contrary to
\kant's opinion that all mathematical theorems are synthetic
\citep[\p 14]{KdrV}, \ (contentual) mathematics also has
analytic sentences. \
In \nolinebreak particular, \kant's example
\bigmaths{7+5=12}{} becomes analytic when we read it as
\bigmaths{\plusppnoparentheses{\sppiterated 7\zeropp} {\sppiterated
5\zeropp}=\sppiterated{12}\zeropp}{} and assume the
non-necessary, synthetic, aprioristic axioms \bigmathnlb{\plusppnoparentheses
x\zeropp=x}{} and \bigmaths{\plusppnoparentheses x{\spp
y}=\spp{\plusppnoparentheses x y}}. \
\index{Frege, Gottlob}%
\Cfnlb\ \citep[\litsectref{89}]{frege-grundlagen}.}
{\em aprioristic}\/\footnote
{``{\em a priori}\/'' is the opposite of
``{\em a posteriori}\/'' and means that a statement does not depend
on any form of experience. \ For instance, all necessary
\citep[\p\,3]{KdrV} and all analytic \citep[\p 11]{KdrV}
statements are {\em a priori}. \
Finally, \kant\ additionally
assumes that all aprioristic statements are
necessary \citep[\p\,219]{KdrV}, which seems to be wrong,
\cfnlb\ \noteref{note wrong}.}
aspect,
which depends neither
on experience nor on deduction, and which cannot be reduced to logic,
but which is transcendentally related to intuitive conceptions. \
Or, as \nolinebreak\hilbert\ put \nolinebreak it:
\notop\halftop\begin{quote}
\noindent``\germantexthilbertonkant''\footnote
{\Cfnlb\ \citep[\p 170\f, modernized orthography]{unendliche}.%
\footroom\notop\halftop
\begin{quote}
``\englishtexthilbertonkant''\getittotheright
{\translationnotewithlongcite
{\p\,376}
{heijenoort-source-book}
{translation by \bauermengelbergname\index{Bauer-Mengelberg, Stefan}, modified\footnotemark
}}\notop\notop\end{quote}}%
\pagebreak\par
\footnotetext
{We have replaced ``\mbox{extralogical}'' with
``extra-logical\closequotecomma
and --- more importantly ---
``experience'' with ``conception\closequotecomma for
the following reason:
Contrary to ``{\germanfontfootnote Erfahrung}'' (experience),
the German word ``{\germanfontfootnote Erlebni\es}''
does not suggest an {\em aposterioristic}
intention, which would contradict the obviously {\em aprioristic}\/
intention of \hilbert's sentence.}
\end{quote}
\noindent
To refer to intellectual concepts
which are not directly related to sensual perception
or intuitive conceptions, \hskip.2em
both \kant\ and \hilbert\ use the word
``ideal\closequotefullstopextraspace
{\em Ideal}\/ objects and methods in mathematics ---~as opposed to
contentual ones~--- may involve the
\index{infinite!the actual}%
actual infinite; \hskip.3em
such \nolinebreak as quantification, \math\varepsilon-binding, set theory, and
non-terminating computations.
According to both \citep{KdrV} and \citep{unendliche}, \hskip.2em
the only possible criteria for the acceptance of {\em
ideal}\/ notions are {\em consistency}\/ and {\em usefulness}. \ \
Contrary to
\index{Kant, Immanu\"el|)}%
\kant,\footnote
{\kant\ considers ideal notions to be problematic,
because they transcend what he considers to be the area
of objective reality; \cfnlb\ \noteref{note objective reality}. \
For notions that are consistent, useful, and ideal,
\kant\ actually introduces the technical term {\em problematic}\/
({\germanfontfootnote problematisch}):
\notop\halftop\begin{quote}
``{\germanfontfootnote\germantextkantsnotionofproblematisch}''\nopagebreak
\getittotheright
{\citep[\p\,310, modernized orthography]{KdrV}}\notop\notop\end{quote}}
however, \hilbert\
is willing to accept useful ideal theories, provided that
their consistency can be shown with contentual and intuitively clear
methods --- \ie\ with ``simple means\closequotefullstop
These ``simple means'' that may be admitted here must be,
on the one hand,
sufficiently expressive and powerful to show the
\index{consistency!of arithmetic}%
consistency of arithmetic,
but, on the other hand,
simple, \ie\ intuitively clear and contentually reliable. \
The notion of
\index{finitism}%
{\em\hilbert's finitism}\/ was born out of the conflict
of these two goals.
Moreover, \hilbert\ expresses the hope that the new proof theory,
primarily developed to show the
\index{consistency!proof of}%
consistency of ideal mathematics with
contentual means, would also admit (possibly ideal, \ie\
non-finitistic) proofs of
\index{completeness!{\em definition}}%
{\em completeness}\/\footnote
{A theory is {\em complete} \udiff\ for any formula \nlbmath A
without free variables
(\ie\ any closed formula in the given language) \hskip.2em
which is not part of this theory,
its negation \nlbmath{\neg A} is part of this theory.}
for certain mathematical theories. \
If this goal of \hilbertsprogram\ had been achieved, then ideal proofs
would have been justified as convenient short-cuts for constructive,
contentual, and intuitively clear proofs, so that --- even under the
threat of
\index{Russell!'s Paradox}%
\russellsparadox\ and others --- there would be no reason to
give up the paradise of axiomatic mathematics and abstract set theory.
And these basic convictions of the \hilbert\ school constituted the
most important influence on the young student of mathematics \herbrandname.
As \goedel\ showed with his famous
\index{Goedel@G\"odel!'s First Incompleteness Theorem}%
\index{Goedel@G\"odel!'s Second Incompleteness Theorem}%
\incompletenesstheorem s in
1930--1931,\arXivfootnotemarkref{note incompleteness theorems}
however, the
consistency of any (reasonably conceivable)
recursively enumerable
mathematical theory that
includes arithmetic excludes both its
\index{completeness}%
completeness and
the existence of a finitistic
\index{consistency|)}%
\index{consistency!proof of}%
consistency proof.
Nevertheless, the contributions of
\index{Ackermann, Wilhelm}%
\ackermannname\ \ackermannlifetime,
\index{Bernays, Paul}%
\bernays, \herbrand, and
\index{Gentzen!Gerhard}%
\gentzen\ within
\index{Hilbert!David|)}%
\index{programme!Hilbert's|)}%
\index{finitism|)}%
\hilbertsprogram\ gave {\em
proof theory}\/ a new meaning as a field in which proofs are the
objects and their properties and constructive transformations are the
field of mathematical study, just as in arithmetic the numbers are the
objects and their properties and algorithms are the field of study.
\vfill\pagebreak
\section
{\herbrand's Main Contributions to Logic and\\his Notion of Intuitionism}\label
{section Contributions and his Notion of Intuitionism}%
\index{intuitionism|(}%
\noindent
The essential
works of {\herbrand} on logic are his \PhDthesis\
\citep{herbrand-PhD} \hskip .3em
and the subsequent journal article
\citep{herbrand-consistency-of-arithmetic}, \hskip .3em
both to be found in \citep{herbrand-logical-writings}.\footnote
{\label{note on herbrand-logical-writings}This
book is still the best source on \herbrand's writings today. \
It is not just an English translation
of \herbrand's complete works on logic
(all of \herbrand's work in logic was written in French, \cfnlb\
\citep{herbrand-ecrits-logiques}),
but contains additional
annotation, brief introductions, and extended notes by
\index{Heijenoort!Jean van}%
\heijenoortname,
\index{Dreben, Burton}%
\drebenname, and
\index{Goldfarb, Warren}%
\goldfarbname.
Besides minor
translational corrections,
possible addenda
for future editions would be the original texts in French;
\herbrand's mathematical writings outside of logic; some remarks on
the two \repair s of \herbrand's False Lemma by \goedel\ and
\index{Heijenoort!Jean van}%
\heijenoort, respectively, \cfnlb\ \sectrefs
{section lemma}{section herbrand fundamental theorem} below; \
and \herbrand's correspondence. \
The correspondence with
\index{Goedel@G\"odel!Kurt}%
\goedel\ is published in
\citep[\Vol\,V, \PP{14}{25}]{goedelcollected}. \
\herbrand's letters to
\index{Hasse, Helmut}%
\hasse\ are still in private possession according to
\index{Roquette, Peter}%
\citep{roquette-email}. \
The whereabouts of the rest of his correspondence are unknown.
The main contribution is captured in what is called today
\index{Herbrand!'s Fundamental Theorem|(}%
{\em\herbrandsfundamentaltheorem}. \
Sometimes it is simply
called ``\herbrand's Theorem\closequotecomma but the longer name is
preferable as there are other important
``\herbrand\ theorems\closequotecomma such as the {\herbrand--\ribet}
Theorem. \
Moreover, \herbrand\ himself calls it
``{\frenchfont Th\'eor\`eme fondamental}\/\closequotefullstop
The subject of \herbrandsfundamentaltheorem\ is the effective reduction
of (the semi-decision problem of) provability in \firstorder\ logic to
provability in sentential logic.
Here we use the distinction well-known to \herbrand\ and
his contemporaries between {\em\firstorder\ logic}\/
(where quantifiers bind variables ranging over objects of
the universe, \ie\ of the domain of reasoning or discourse) \hskip .2em
and {\em sentential logic}\/ without any quantifiers. \
Validity of a formula in sentential logic is effectively decidable,
for instance with the truth-table method. \
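The truth-table method mentioned above can be sketched in a few lines of
modern code: enumerate all $2^n$ assignments of truth values to the $n$
sentential variables and check that the formula comes out true under each.
The following Python sketch is our own illustration (the function names and
the encoding of formulas as Python predicates are assumptions, not
\herbrand's formulation); in keeping with \herbrand's presentation of
sentential logic, the example formulas use only negation and disjunction.

```python
from itertools import product

def is_tautology(formula, variables):
    """Decide sentential validity by the truth-table method:
    evaluate `formula` (a predicate on an assignment dict) under
    all 2^n truth-value assignments to `variables`."""
    return all(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# Law of the excluded middle, not-p or p, is valid:
print(is_tautology(lambda a: (not a["p"]) or a["p"], ["p"]))        # True
# not-p or q (i.e. p implies q) is not valid on its own:
print(is_tautology(lambda a: (not a["p"]) or a["q"], ["p", "q"]))   # False
```

Since the table has $2^n$ rows, the procedure is effective but exponential
in the number of variables; for the decidability claim in the text, only
effectiveness matters.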
Although \herbrand\ spends \litchapref 1
of his thesis on the subject, \hskip.1em
he actually ``shows no interest for the sentential work''
\index{Heijenoort!Jean van}%
\citep[\p 120]{heijenoort-work-herbrand},
and takes it for granted.
\begin{enumerate}
\item[A.] Contrary to
\index{Gentzen!'s {\germanfont Hauptsatz}}%
\gentzensHauptsatz\ \citep{gentzen}, \hskip.3em
\herbrandsfundamentaltheorem\ starts right with a
{\em single \sententialtautology}\/
(\cfnlb\ \sectref{section herbrands calculi}). \
He treats this property as given and does not fix a
concrete method for establishing it.
\item[B.] The way \herbrand\ presents his sentential logic in terms
of `\math\neg' and `\math\vee' indicates that he is not concerned
with
\index{logic!intuitionistic}%
{\em intuitionistic logic}\/ as we understand the term today.\footnote
{\Cfnlb\ \eg\
\index{Heyting, Arend|(}%
\makeaciteoftwo{heyting-1930-logic}{heyting}, \citep{gentzen}.}
\end{enumerate}
Contrary to
\index{Gentzen!'s calculi}%
\gentzen's sequent calculus \nolinebreak\LK\
\citep{gentzen}, in \herbrand's calculi
we do not find something like a sub-calculus \nolinebreak\LJ\
for intuitionistic logic. \
Moreover, there is no way to generalize \herbrandsfundamentaltheorem\
to include intuitionistic logic: \
Contrary to the Cut elimination in
\index{Cut elimination}%
\index{Gentzen!'s {\germanfont Hauptsatz}}%
\gentzensHauptsatz,
the \nolinebreak elimination of
\index{modus ponens}%
\index{modus ponens!elimination}%
{\em modus ponens}\/ according to
\index{Herbrand!'s Fundamental Theorem|)}%
\herbrandsfundamentaltheorem\ does not hold for intuitionistic logic.
\begin{quote}
``All the attempts to generalize \herbrand's theorem in that
direction have only led to partial and unhandy results (see
\index{Heijenoort!Jean van}%
\citep{heijenoort-mints}).'' \getittotheright
{%
\index{Heijenoort!Jean van}%
\citep[\p 120\f]{heijenoort-work-herbrand}}
\end{quote}
\vfill\pagebreak
\noindent
When \herbrand\ uses the term ``intuitionism\closequotecomma this
typically should be understood as referring to something closer to the
\index{finitism}%
finitism of \hilbert\ than to the intuitionism of
\index{Brouwer, Luitzen}%
\brouwer.\footnote
{%
\index{Goedel@G\"odel!Kurt}%
\goedel, however, expressed a
different opinion in a letter to
\index{Heijenoort!Jean van}%
\heijenoort\ of \Sep\,18, 1964:\begin{quote}
``In \litnoteref 3 of \citep{herbrand-consistency-of-arithmetic}
he does {\em not}\/ require the enumerability of mathematical objects, and
gives a definition which fits
\index{Brouwer, Luitzen}%
\brouwer's intuitionism
very well''
\getittotheright
{\citep[\Vol\,V, \p\,319\f]{goedelcollected}}\vspace*{-2ex}\end{quote}} \
This ambiguous usage of the term ``intuitionism'' ---~centered around the
partial rejection of the Law of the Excluded Middle, the
\index{infinite!the actual}%
actual infinite,
as well as quantifiers and other binders~--- was common
in the
\index{Hilbert!school}%
\hilbert\ school at \herbrand's time.\footnote
{\Cfnlb\ \eg\ \citep[\p\,283\f]{herbrand-logical-writings}, \
\index{Tait, William W.}%
\citep[\p\,82\ff]{tait-2006}.} \
\herbrand's view on what he calls ``intuitionism''
is best captured in the following quote:
\begin{quote}%
``\frenchtextninty''\footnote
{\Cfnlb\ \frenchtextnintylocationoriginal. \
\frenchtextnintylocationoriginalmodifierforreprint\ also in:
\frenchtextnintylocationreprint.
\begin{quote}``By an intuitionistic argument
we understand an argument satisfying the following conditions: in it
we never consider anything but a given finite number of objects and
of functions; these functions are well-defined, their definition
allowing the computation of their value in a univocal way; we never
state that an object exists without giving the means of constructing
it; we never consider the totality of all the objects \nlbmath x of
an infinite collection; and when we say that an argument (or a
theorem) is true for all these \nlbmath x, we mean that, for each
\nlbmath x taken by itself, it is possible to repeat the general
argument in question, which should be considered to be merely the
prototype of the particular arguments.'' \getittotheright
{\translationnotewithlongcite{\litnoteref 5, \p\,288\f}
{herbrand-logical-writings}{translation by \heijenoort}}\notop\end{quote}}
\end{quote}
Contrary to today's precise meaning of the term ``intuitionistic
logic\closequotecomma the terms
\index{intuitionism|)}%
``intuitionism'' and ``finitism''
denote slightly different concepts, which are related to the
philosophical background, differ from person to person, and vary
over time.\footnote
{Regarding intuitionism,
besides
\index{Brouwer, Luitzen}%
\brouwer,
\index{Weyl, Hermann}%
\weyl, and \hilbert, we may count
\index{Kronecker, Leopold}%
\kroneckername\ \kroneckerlifetime\ and
\index{Poincar\'e, Henri}%
\poincarename\ \poincarelifetime\ among the ancestors, and have to mention
\index{Heyting, Arend|)}%
\heytingname\ \heytinglifetime\ for his major differing view, \cfnlb\ \eg\
\makeaciteofthree{heyting-1930-logic}{heyting-1930-mathematics}{heyting}. \
Deeper discussions of \herbrand's notion of ``intuitionism'' can be found in
\index{Heijenoort!Jean van}%
\citep[\PP{113}{118}]{heijenoort-work-herbrand}
and in
\index{Tait, William W.}%
\citep[\p\,82\ff]{tait-2006}\@. \
Moreover, we briefly discuss it in \noteref{note intuitionism}. \
For more on finitism \cfnlb\ \eg\
\citep{parsons-finitism},
\index{Tait, William W.}%
\citep{tait-finitism}, \citep{zach-finitism}. \
For more on \herbrand's background in
philosophy of mathematics, \cfnlb\ \citep{herbrand-thoughts},
\citep{DubucsEgre-Herbrand-PUF}.}
While \herbrand\ is not concerned with
\index{logic!intuitionistic}%
intuitionistic logic,
he is a finitist with respect to the following two aspects:
\begin{enumerate}
\item[1.]\herbrand's work is strictly contained within
\index{Hilbert!David|(}%
\index{programme!Hilbert's|(}%
\index{finitism|(}%
\hilbert's
finitistic programme and he puts ample emphasis on his finitistic
standpoint and the finitistic character of his theorems.
\raisebox{-1.9ex}{\rule{0ex}{.5ex}}
\pagebreak
\noitem\item[2.]\herbrand\ does not accept any
model-theoretic semantics unless the models are finite. \ In this
respect, \herbrand\ is more finitistic than \hilbert, who demanded
finitism only for consistency proofs.
\noitem\begin{quote}
``\herbrand's negative view of set theory leads him to take,
on certain questions,
a \nolinebreak stricter attitude than \hilbert\ and his collaborators. \
He is more royalist than the king. \
\hilbert's metamathematics has as its main goal to establish the consistency
of certain branches of mathematics and thus to justify them; \hskip.3em
there, one had to restrict himself to finitistic methods. \
But in logical investigations other than the consistency problem of
mathematical theories the
\index{Hilbert!school}%
\hilbert\ school was ready to work with
set-theoretic notions.''\getittotheright
{%
\index{Heijenoort!Jean van}%
\citep[\p 118]{heijenoort-work-herbrand}}
\notop\end{quote}\end{enumerate}
\section{The Context of \herbrand's Work on Logic}
\noindent Let us now have a look at what was known in \herbrand's time
and at the papers that influenced his work on logic.
\index{Zaremba, Stanis\l aw}%
\zarembaname\ \zarembalifetime\ \hskip.2em
is mentioned in \citep{herbrand-first}, \hskip.2em
where \herbrand\ cites
\zaremba's textbook on mathematical logic \citep{zaremba}, \hskip.2em
which clearly influenced \herbrand's notation.\footnote
{\Cfnlb\ \goldfarb's Note to \citep{herbrand-first} on
\p\,32\ff\ in \citep{herbrand-logical-writings}. \
\zaremba\ was one of the leading Polish mathematicians in the 1920s. \
He had close connections to \Paris,
but we do not know whether \herbrand\ ever met him.}
\herbrand's subject,
\index{logic!first-order}%
\firstorder\ logic, became a field of special interest
not least because of the seminal paper \citep{loewenheim-1915},
which singled out \firstorder\ logic in the {\em Theory of
Relatives}\/ developed by
\index{Peirce!--Schr\"oder tradition}%
\peirce\ and \schroeder.\footnote
{\label{note peirce schroeder tradition}For the heritage of
\index{Peirce!Charles S.}%
\peirce\ \cfnlb\
\index{Peirce!Charles S.}%
\citep{peirce-1885},
\citep{brady}; for that of
\index{Schr\"oder!Ernst}%
\schroeder\ \cfnlb\
\index{Schr\"oder!Ernst}%
\citep{schroeder-vorlesungen-III},
\citep{brady},
\citep{schroeder-handbook}.}
With this paper, attention turned to the surprising
meta-mathematical properties of \firstorder\ logic,
which had been intended merely as an especially useful tool with a
restricted area of application.\footnote
{Without set theory, \firstorder\ logic was too poor to serve
as a single universal logic of the kind
for which
\index{Frege, Gottlob}%
\frege,
\index{Peano!Guiseppe}%
\peano, and
\index{Russell!Bertrand}%
\russell\ had been searching;
\cfnlb\
\index{Heijenoort!Jean van}%
\citep{heijenoort-absolutism-relativism}. \
For the suggestion
of \firstorder\ logic as the basis for set theory,
we should mention \citep{skolem-1923b},
which is sometimes cited as of the year\,1922,
and therefore easily confused with \citep{skolem-1923a}. \
For the emergence of \firstorder\ logic as
the basis for mathematics see \cite{moore-1987}.}
As the presentation in \citep{loewenheim-1915} is opaque,
\index{Skolem!Thoralf}%
\skolemname\ \skolemlifetime\ wrote five clarifying papers
contributing to the substance of
\index{L\"owenheim!Leopold}%
\loewenheim's
{\germanfont Satz\,2}, the now famous
\index{L\"owenheim!--Skolem Theorem}%
\loewenheimskolemtheorem\
\index{Skolem!Thoralf}%
\makeaciteoffive
{skolem-1920}{skolem-1923b}{skolem-1928}{skolem-1929}{skolem-1941}. \
From these papers, \herbrand\ cites \citep{loewenheim-1915} and
\citep{skolem-1920}, and the controversy pro and contra \herbrand's
reading of \citep{skolem-1923b} and \citep{skolem-1928} will be
presented in \nlbsectref{section herbrand loewenheim skolem} below.
While \herbrand\ neither cites
\index{Peano!Guiseppe}%
\peano\ nor even mentions
\index{Frege, Gottlob}%
\frege, the
\index{Principia Mathematica}%
{\em\PM} \citep{PM} were influential at his time,
and he was well aware of \nolinebreak this. \hskip.2em
\herbrand\ cites all editions of the
\index{Principia Mathematica}%
{\em\PMshort}\/
and there are indications that he
studied parts of it carefully.\footnote
{\herbrand\ seems to have studied \math{\ast9} and \math{\ast10} of
\index{Principia Mathematica}%
\citep[\Vol\,I]{PM} \hskip.3em
carefully, \hskip.2em
leaving traces in \herbrand's
\index{Property A}%
\propertyA\ and in \litchapref 2 of \herbrand's \PhDthesis, \ \cfnlb\
\index{Heijenoort!Jean van}%
\citep[\PP{102}{106}]{heijenoort-work-herbrand}.}
But
\index{Russell!Bertrand}%
\russell's influence is minor compared to
\hilbert's, as, indeed, \herbrand\ was most interested in proving
consistency, decidability, and
\index{completeness}%
completeness. \
\index{Heijenoort!Jean van}%
\heijenoortname\ \heijenoortlifetime\ notes on \herbrand
\begin{quote}
``The difficulties provoked by the
\index{Russell!'s Paradox}%
\russell\ Paradox,
stratification, ramification, the problems connected with the
\index{axiom!of infinity}%
axiom of infinity or the
\index{axiom!of reducibility}%
axiom of reducibility, nothing of that seems to
retain his
attention.
\\
The reason for this attitude is that \herbrand\ does not share
\index{Russell!Bertrand}%
\russell's conception concerning the relation between logic and mathematics,
but had adopted
\index{Hilbert!David}%
\hilbert's.
In\,1930
\herbrand\ indicates quite well where he sees the limits of
\russell's accomplishment:
`So far we have only replaced ordinary language
with another more convenient one,
but this does not help us at all with respect to the problems regarding
the principles of mathematics.' \makeanextendedciteoftwo
{\p\,248}{herbrand-hilbert}{\p\,208}{herbrand-logical-writings}.
And the sentence that follows indicates the way to be followed:
`\hilbert\ sought to resolve the questions which can be raised
by applying himself to the study of collections
of signs which are translations of
propositions true in a determinate theory.'\,\,''
\getittotheright{%
\index{Heijenoort!Jean van}%
\citep[\p 105]{heijenoort-work-herbrand}.}
\end{quote}
\noindent As \herbrand's major orientation was toward the
\index{Hilbert!school}%
\hilbert\ school,
it is not surprising that the majority of his citations\footnote
{In his thesis \citep{herbrand-PhD}, \herbrand\
cites \citep{ackermann-1925},
\index{Artin!Emil}%
\citep{artin-schreier},
\citep{behmann}, \citep{grundlagenvortrag-zusatz},
\index{Bernays, Paul}%
\citep{bernays-schoenfinkel},
\makeaciteofthree{neubegruendung}{unendliche}{grundlagenvortrag},
\citep{grundzuege}, and
\index{Neumann, John von}%
\makeaciteofthree{neumann-1925}{neumann-1927}{neumann-1928}. \ Furthermore,
\herbrand\ cites \citep{nicod} in \citep{herbrand-PhD}, \hskip.3em
\citep{ackermann-1928} in \citep{herbrand-fundamental-problem}, \hskip.3em
and
\citep{zahlenlehre} and
\index{Goedel@G\"odel!Kurt}%
\citep{goedel} in
\citep{herbrand-consistency-of-arithmetic}.}
refer to mathematicians related either to the
\index{Hilbert!school}%
\hilbert\ school or to
\index{Goettingen@G\"ottingen}%
\Goettingen,
which was the Mecca of mathematicians until its
intellectual
and
organizational
destruction by the Nazis in 1933.
By the title of his thesis {\frenchfont\em\herbrandPhDtitle},
\herbrand\ clearly commits himself to King \hilbert's following. \
As the first contributor to \hilbert's finitistic programme in France,
he was given the opportunity to write on \hilbert's logic
in a French review journal; \hskip.2em
this paper \citep{herbrand-hilbert}
is historically interesting because it captures \herbrand's
personal view of
\index{Hilbert!David|)}%
\index{programme!Hilbert's|)}%
\index{finitism|)}%
\hilbert's finitistic programme.
\section{A Genius with some Flaws}\label{section flaws}
\noindent On the one hand, \herbrand\ was a creative mathematician whose
ideas were truly outstanding, not only for his time. \
Besides logic,
he also contributed
to class-field theory and to the
theory of algebraic number fields. \
Although this is not our subject here,
we should keep in mind that \herbrand's contributions to algebra
are as important from a mathematical point of view and as numerous
as his contributions to logic.\footnote
{\Cfnlb\ \ \startacitewithnine
{herbrand-1930b}{herbrand-corps-abstract}{herbrand-group}
{herbrand-1932}{herbrand-1932a}
{herbrand-corps-I}{herbrand-discriminant}{herbrand-algebraic-functions}
{herbrand-corps-II}\stopacitewithone{herbrand-1936} \ and \
\index{Chevalley!Claude}%
\citep{herbrand-chevalley}, as well as
\index{Dieudonn\'e, Jean}%
\cite{dieudonne-1982}.}
Among the many statements about \herbrand's abilities as
a mathematician is
\index{Weil, Andr\'e}%
\weil's letter to
\index{Hasse, Helmut}%
\hasse\ in August\,1931 where he
writes that he would not need to tell him what a loss \herbrand's death means
especially for number theory.\footnote
{\Cfnlb\ \citep[\p 119, \litnoteref 6]{noether-hasse-correspondence}.}
As
\index{Andrews, Peter B.}%
\andrewsname\ put it:\notop\halftop\begin{quote}
``\herbrand\ was by all accounts a brilliant mathematician.''
\getittotheright
{\citep[\p 171]{andrews-herbrand-award}.}
\end{quote}
\noindent
On the other hand, \hskip.2em
\herbrand\ neither had the education nor the
supervision to present his results in proof theory with the technical
rigor and standard, \hskip.2em
say, \hskip.2em
of the
\index{Hilbert!school}%
\hilbert\ school in the
1920s, \hskip.3em
let alone today's emphasis on formal precision.\footnote
{\index{Heijenoort!Jean van}%
\heijenoort\ writes in his well-known ``source
book\closequotecolon
\begin{quote}
``\herbrand's thesis bears the marks of hasty writing; this is
especially true of \nolinebreak\litchapref 5\@. \ Some sentences
are poorly constructed, and the punctuation is haphazard.
\herbrand's thoughts are not nebulous, but they are so hurriedly
expressed that many a passage is ambiguous or obscure. To bring
out the proper meaning of the text the translators had to depart
from a literal rendering, and more rewriting has been allowed in
this translation than in any other translation included in the
present volume.'' \getittotheright
{\citep[\p\,525]{heijenoort-source-book}}
\end{quote}
Similarly,
\index{Goldfarb, Warren}%
\goldfarbname,
the translator and editor of \herbrand's logical writings,
writes:\begin{quote}
``\herbrand\ also tended to express himself rather hastily,
resulting in many obscurities;
in these translations an attempt has been made to balance the demands
of literalness and clarity.'' \getittotheright
{\citep[\p V]{herbrand-logical-writings}}
\notop\end{quote}} \
Finitistic proof theory demands a strict
disambiguation of form and content
and a higher degree of precision than most other mathematical fields. \
Moreover, \hskip.2em
the field was novel at \herbrand's time and probably
hardly anybody in France was able to advise \herbrand\ competently. \
Therefore, \hskip.2em
\herbrand, \hskip.2em
a {\frenchfont\em g\'enie cr\'eateur}, \hskip.2em
as
\index{Heijenoort!Jean van}%
\heijenoort\ called him,\footnote
{\Cfnlb\ \citep[\p 1]{herbrand-ecrits-logiques}.} \hskip.2em
was apt to make undetected errors. \
Well known today is a serious flaw in his thesis
which went unnoticed by its reviewers at the time. \
Moreover, \hskip.2em
several theorems are in fact conceptually correct, \hskip.2em
but incorrectly formulated. \
\yestop\noindent
Let us have a look at three
flaws in \litsectref{3.3} of the
\litchaprefs{2, \nolinebreak 3}{5}, respectively:
\begin{description}
\item[\litchapwithsectref{2}{3.3}: ]
A typical instance for an
incorrectly formulated theorem which is conceptually correct
can be found
in \litchapwithsectref 2{3.3}, on
\index{inference!deep}%
deep inference:\\
{\em From \bigmathnlb{\yields\,B\implies C}{} we can conclude
\bigmathnlb{\yields\,A[B]\implies A[C]}, provided that
\nlbmath{[\cdots]} denotes only positive positions\/\footnote
{\label{note positive and negative positions}Note that a position
in a formula (seen as a tree built-up from logical operators
`\tightund\closesinglequotecomma
`\tightoder\closesinglequotecomma
`\math\neg\closesinglequotecomma
`\math\forall\closesinglequotecomma and \nolinebreak
`\math\exists') \hskip.2em
is {\em positive} \udiff\ the number of \math\neg-operators preceding
it on the path from the root position is even, and {\em
negative} \udiff\ it is odd.}
in \nlbmath A.}
\\
\herbrand, however, states \\\LINEmaths{\yields\ \inpit{B\implies
C}\nottight\implies
\inpit{A[B]\implies A[C]}}{}\\
which is not valid; to wit\footnote
{\Cfnlb\ also \citep
[\goldfarb's \litnoterefs{6 (\p 78)}{A (\p\,98)}]{herbrand-logical-writings}.}
apply the substitution
\\\LINEmaths{\{\ A[\ldots]\mapsto\forall x.\,[\ldots]\comma
B\mapsto\truepp\comma C\mapsto\Pppp x\ \}}. \
\item[\litchapwithsectref{3}{3.3}: ]
Incorrectly formulated is also
\herbrand's theorem on the relativization of quantifiers.\footnote
{\label{note relativization}Relativization of quantifiers was first discussed
in the elaboration of \firstorder\ logic in
\index{Peirce!Charles S.}%
\citep{peirce-1885}. \
Roughly speaking, \hskip.3em
it means to restrict quantifiers to a
predicate \nlbmath\Ppsymbol\
by replacing any \bigmathnlb{\forall x\stopq A}{} with
\bigmathnlb{\forall x\stopq\inpit{\Pppp x\implies A}},
and, \
dually, \bigmaths{\exists x\stopq A}{} with
\bigmaths{\exists x\stopq\inpit{\Pppp x\und A}}.}
This error was recognized later by \herbrand\ himself.\footnote
{In \litnoteref 1 of \citep{herbrand-consistency-of-arithmetic}.}
Moreover, notice
that in this context,
\herbrand\ discusses the
{\em many-sorted}\/ \firstorder\ logic related to the restriction
to the language where all quantifiers are relativized,
extending a similar discussion found already in
\index{Peirce!Charles S.}%
\citep{peirce-1885}.%
\vfill\pagebreak\end{description}
\yestop\noindent
All of this is not terribly interesting,
except that it gives us some clues on how
\herbrand\ developed his theorems: \
It seems that he started, like any mathematician,
with a strong intuition of the semantics and used it
to formulate the theorem. \
Then he had a careful look at those parts of the proof
that might violate the finitistic standpoint. \
The final critical check of minor details of the formalism in the
actual proof,\footnote
{\index{Hadamard, Jacques S.}%
\Cfnlb\ \citep[\litchapref V]{hadamard-psychology} for a nice
account of
this mode of mathematical creativity.}
however, hardly played a \role\ in this work.
\begin{description}
\item[\litchapwithsectref{5}{3.3}: ]
The drawback of his intuitive
style of work manifests itself in a serious mistake, which concerns
a lemma that has crucial applications in the proof of the
\index{Herbrand!'s Fundamental Theorem}%
\fundamentaltheorem, namely the ``\hskip.001em lemma'' of \litchapwithsectref
5{3.3}, which we will call
\index{Herbrand!'s False Lemma}%
{\em\herbrand's False Lemma}. \ Before we
discuss this in \nlbsectref{section lemma}, however, we have to define some
notions.
\end{description}
\section
[Champs Finis, \herbrand\ Universe, and \herbrand\ Expansion]
{Champs Finis, \herbrand\ Universe, and \\\herbrand\ Expansion}\label
{section herbrand expansion}%
\index{champ fini|(}%
\index{Herbrand!expansion|(}%
\index{Herbrand!universe|(}%
\noindent Most students of logic or computer science know
\herbrand's name in the form of {\em\herbranduniverse}\/ or
{\em\herbrandexpansion}. \
Today, the
\index{Herbrand!universe!{\em definition}}%
{\em\herbranduniverse}\/ is usually defined as the
set of all terms over a given signature,
and the {\em\herbrandexpansion}\/ of a set
of formulas results from a systematic replacement of all variables
in that set of formulas with
terms from the
\index{Herbrand!universe|)}%
\herbranduniverse. \
Historically, however, this is not quite correct. \
First of all, \herbrand\ does not use {\em term structures}\/ for two reasons:
\nopagebreak\begin{enumerate}\nopagebreak\noitem\item
\herbrand\ typically equates terms with objects of the
universe, and thereby avoids working explicitly with term
structures.\footnote{As this equating of terms has no
essential function in \herbrand's works, but only adds extra
complication to \herbrand's subjects, we will completely ignore it
here and exclusively use free term structures in what follows.}
\noitem\item As a finitist more royalist than King \hilbert,
\herbrand\ does not accept structures with infinite universes.
\noitem\end{enumerate}
As a finite substitute for a typically infinite full term universe,
\herbrand\ uses what he calls a
{\frenchfont\em champ fini}\/ of order \nlbmath {n},
which we will denote with
\index{\termsofdepthnovars n}%
\index{T@\termsofdepthnovars n}%
\nlbmath{\termsofdepthnovars n}. \
Such a
\index{champ fini|)}%
\index{champ fini!{\em definition}}%
{\frenchfont champ fini}\/ differs from a full term universe in
containing only the terms \nlbmath t with \bigmaths{\CARD t<n}{\,.} \
We \nolinebreak use \nlbmath{\CARD t}
to denote the
\index{height of a term}%
\index{height of a term!{\em definition}}%
{\em height}\/ of the term \nlbmath t,
which is given by
\\\LINEmaths{\CARD{\anonymousfpp{t_1}{t_{m}}}
\nottight{\nottight{\nottight=}}1+\max\{0,\CARD{t_1},\ldots,\CARD{t_m}\}}.\\
\noindent
The terms of \nlbmath{\termsofdepthnovars n} are constructed from the
function symbols and constant symbols
(which we will tacitly subsume under the function symbols in the following)
of a finite signature and from a
finite set of variables.
We will assume that an additional variable \nlbmath l, \hskip .2em
the {\em lexicon}, \hskip .2em is \nolinebreak
included in this construction, if necessary
to have \ \maths{\termsofdepthnovars n\tightnotequal\emptyset}.
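For illustration only, the height function and the construction of a
{\frenchfont champ fini}\/ of order \nlbmath n can be sketched in Python,
under the assumption (ours, not \herbrand's) that a term
\math{\anonymousfpp{t_1}{t_m}} is encoded as a nested tuple:

```python
from itertools import product

# Illustrative encoding (ours, not Herbrand's): a term f(t1, ..., tm)
# is the nested tuple (f, t1, ..., tm); constants have arity 0.

def height(t):
    """Height |t| = 1 + max{0, |t1|, ..., |tm|} of a term t = f(t1, ..., tm)."""
    _, *args = t
    return 1 + max([0] + [height(a) for a in args])

def champ_fini(signature, n):
    """All terms over `signature` (a symbol -> arity map) of height < n."""
    terms = set()
    for _ in range(n - 1):               # round k builds all heights <= k
        layer = list(terms)              # snapshot of the previous rounds
        for f, arity in signature.items():
            for args in product(layer, repeat=arity):
                t = (f,) + args
                if height(t) < n:
                    terms.add(t)
    return terms

sig = {"0": 0, "s": 1}                   # one constant, one unary function
T3 = champ_fini(sig, 3)                  # the terms of height < 3: 0, s(0)
```

The function names and the signature are purely hypothetical choices; the
sketch only fixes the idea that \nlbmath{\termsofdepthnovars n} is finite
once the signature and the set of variables are.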
\herbrand\ prefers to treat all logical symbols besides
`\math\neg\closesinglequotecomma `\math\vee\closesinglequotecomma and
`\math\exists' as defined. \
Only in
\index{form!prenex}%
prenex forms is the universal quantifier `\math\forall'
also treated as a primitive symbol.
\vfill\pagebreak
\yestop\noindent
The first elaborate description of \firstorder\ logic
---~under the name ``first-intentional logic of relatives''~---
was published by
\index{Peirce!Charles S.}%
\citet{peirce-1885} \hskip.2em
shortly after the invention of quantifiers by \citet{begriffsschrift}. \
What today we call an {\em\herbrandexpansion}\/
was implicitly given
already in that publication \citep{peirce-1885}. \
\herbrand\ spoke of ``reduction'' ({\frenchfont\em r\'eduite}) instead.
\begin{definition}[\herbrand\ Expansion, \math{A^{\mathcal T}}]\\%
\index{Herbrand!expansion!{\em definition}}%
\index{\math{A^{\mathcal T}}}%
\index{T@\math{A^{\mathcal T}}}%
For a finite set of terms \nlbmath{\mathcal T}, \ the expansion
\nlbmath{A^{\mathcal T}} of a formula \nlbmath A is defined as
follows:
\bigmaths{A^{\mathcal T}=A}{} if \math A \nolinebreak does
not have a quantifier, \bigmaths{\inpit{\neg A}^{\mathcal T}=\neg
A^{\mathcal T}}, \bigmathnlb{\inpit{A\oder B}^{\mathcal T}=
A^{\mathcal T}\oder B^{\mathcal T}}, \bigmathnlb{\inpit{\exists
x.\,A}^{\mathcal T}= \bigvee_{t\in\mathcal T}A^{\mathcal
T}\!\{x\tight\mapsto t\}}, and
\bigmaths{\inpit{\forall x.\,A}^{\mathcal T}=
\bigwedge_{t\in\mathcal T}A^{\mathcal T}\!\{x\tight\mapsto t\}},
where \math{A^{\mathcal T}\!\{x\tight\mapsto t\}} denotes the result
of applying the substitution \nlbmath{\{x\tight\mapsto t\}}
to \nlbmath{A^{\mathcal T}}.
\getittotheright\qed\end{definition}
\begin{example}[\herbrand\ Expansion, \math{A^{\mathcal T}}]%
\index{Herbrand!expansion!{\em example}}%
\\
For example, for \bigmaths{{\mathcal T}\nottight{\nottight{:=}}\{ \
\threepp,\ \plusppnoparentheses z\twopp
\ \}}, \ and for \math A being the arithmetic formula \\[.5ex]\LINEmaths{
\forall\boundvari x{}\stopq \inpit{ \boundvari x{}\tightequal\zeropp
\nottight{\nottight\oder} \exists \boundvari y{}\stopq \boundvari
x{}\tightequal\plusppnoparentheses{\boundvari y{}}\onepp}
},\\
the expansion \math{A^{\mathcal T}} is \\[.5ex]\LINEmaths{
\inparenthesesoplist{
\threepp\tightequal\zeropp \oplistoder
\threepp\tightequal\plusppnoparentheses\threepp\onepp\oplistoder
\threepp\tightequal\plusppnoparentheses{\pluspp z\twopp}\onepp}
\nottight{\nottight\und}
\inparenthesesoplist{
\plusppnoparentheses z\twopp\tightequal\zeropp \oplistoder
\plusppnoparentheses z\twopp\tightequal
\plusppnoparentheses\threepp\onepp \oplistoder
\plusppnoparentheses z\twopp\tightequal
\plusppnoparentheses{\pluspp z\twopp}\onepp
}}.\\[-2.0ex]\mbox{}\getittotheright\qed
\end{example}
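The recursive clauses of the definition can be sketched in Python; the
tuple encoding of formulas and the names \texttt{subst} and
\texttt{expand} are our own illustrative assumptions, not \herbrand's
notation:

```python
# Illustrative encoding (ours): ('not', A), ('or', A, B), ('exists', x, A),
# ('forall', x, A); any other tuple is an atom whose arguments are terms.

def subst(obj, x, t):
    """Replace every occurrence of the variable x by the term t."""
    if obj == x:
        return t
    if isinstance(obj, tuple):
        return tuple(subst(o, x, t) for o in obj)
    return obj

def expand(A, T):
    """The expansion A^T over the finite set of terms T."""
    op = A[0] if isinstance(A, tuple) else None
    if op == 'not':
        return ('not', expand(A[1], T))
    if op == 'or':
        return ('or', expand(A[1], T), expand(A[2], T))
    if op in ('exists', 'forall'):       # big disjunction resp. conjunction
        x, body = A[1], A[2]
        node = 'OR' if op == 'exists' else 'AND'
        return (node,) + tuple(subst(expand(body, T), x, t) for t in T)
    return A                             # quantifier-free: A^T = A

# The example above: A = forall x. (x = 0  or  exists y. x = y + 1),
# with T consisting of the two uninterpreted terms 3 and z + 2.
A = ('forall', 'x', ('or', ('=', 'x', '0'),
                     ('exists', 'y', ('=', 'x', ('+', 'y', '1')))))
T = ['3', ('+', 'z', '2')]
E = expand(A, T)                         # an AND of two expanded disjuncts
```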
\noindent
The \herbrandexpansion\ reduces a \firstorder\ formula to sentential
logic in such a way that a sentential formula is reduced to itself,
and that the semantics is invariant if the terms in
\nlbmath{\mathcal T} range over the whole universe.
If, however, this is not the case
--- \nolinebreak
such as in our above example and for all infinite universes
\nolinebreak---, then
\index{Herbrand!expansion|)}%
\herbrandexpansion\ changes the semantics by relativization of
the quantifiers\arXivfootnotemarkref{note relativization}
to range only over those elements of the universe to which the
elements of \nlbmath{\mathcal T}
evaluate.
\section{\skolemization, \smullyan's
Uniform Notation, \\and \math\gamma- and \math\delta-quantification}\label
{section skolemization}
\index{Skolemization|(}%
\index{notation!uniform|(}%
\noindent
A first-order formula may contain existential as well as universal
quantifiers.
Can we make it more uniform by replacing either of them?
Consider the formula \bigmaths{\forall x.\,\exists y.\,\Qppp x y}.
These two quantifiers express a functional dependence between the values for
\nlbmath x and \nlbmath y, which could also be expressed by a (new)
function, say \nlbmath {g}, such that \math{\forall x.\,\Qppp x
{\app {g} x}}, \ie\ this function \nlbmath {g} chooses for each
\nlbmath x the correct value for \nlbmath y, provided that it exists.
In other words, we can replace any existentially quantified variable \nlbmath x
which occurs in the scope of universal quantifiers for
\nlbmath {y_1}, \ldots , \nlbmath {y_n} with the new
\index{term!Skolem|(}%
{\em\skolem\ term}
\nlbmath {\app {g} {{y_1}, \ldots, {y_n}}}.
This replacement, carried out for all existential quantifiers, results
in a formula having only universal quantifiers.
Using the convention that all free
variables are universally quantified,
we may then just drop these quantifiers as well.
Roughly speaking,
this transformation, {\em\skolemization} as we call it today,
leaves satisfiability invariant. \
It occurs for the first time explicitly in
\index{Skolem!Thoralf}%
\cite{skolem-1928}, \hskip.2em
but was already used
in an awkward formulation in
\index{Schr\"oder!Ernst}%
\cite{schroeder-vorlesungen-III}
and
\index{L\"owenheim!Leopold}%
\cite{loewenheim-1915}.
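This satisfiability-preserving transformation can be sketched in Python,
assuming formulas in negation normal form and a simple tuple encoding;
the name \texttt{skolemize} and the fresh symbols \texttt{g0},
\texttt{g1}, \ldots\ are our own illustrative choices:

```python
import itertools

# Illustrative encoding (ours): ('forall', x, A), ('exists', x, A),
# ('and'/'or', A, B), ('not', A); other tuples and strings are atoms/terms.

_fresh = (f"g{i}" for i in itertools.count())   # g0, g1, ... Skolem symbols

def subst(obj, x, t):
    """Replace every occurrence of the variable x by the term t."""
    if obj == x:
        return t
    if isinstance(obj, tuple):
        return tuple(subst(o, x, t) for o in obj)
    return obj

def skolemize(A, universals=()):
    """Replace each existential x by a Skolem term g(y1, ..., yn) over the
    universally quantified y1, ..., yn in whose scope x occurs."""
    if not isinstance(A, tuple):
        return A
    op = A[0]
    if op == 'forall':
        return ('forall', A[1], skolemize(A[2], universals + (A[1],)))
    if op == 'exists':
        sk = (next(_fresh),) + universals       # the Skolem term
        return skolemize(subst(A[2], A[1], sk), universals)
    if op in ('and', 'or', 'not'):
        return (op,) + tuple(skolemize(B, universals) for B in A[1:])
    return A                                    # atom

# forall x. exists y. Q(x, y)  becomes  forall x. Q(x, g0(x)):
S = skolemize(('forall', 'x', ('exists', 'y', ('Q', 'x', 'y'))))
```

\herbrand's dual, validity-preserving form is obtained by exchanging the
roles of the two quantifier cases in this sketch.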
For reasons that will become apparent later,
\herbrand\ employs a form of \skolemization\
that is dual to the one above. \
Now the {\em universal}\/ variables are removed first,
so that all remaining variables are existentially quantified.
How can this be done?
Well, if the universally
quantified variable \nlbmath x
occurs in the scope of the existentially quantified variables
\math{y_1,\ldots,y_m}, we can replace \nlbmath x with the
\index{term!Skolem}%
{\em\skolem\ term}
\nlbmath{\app{\forallvari x{}}{y_1,\ldots,y_m}}. \
The \secondorder\ variable or \firstorder\ function symbol
\nlbmath{\forallvari x{}} in this \skolem\ term
stands for any function with arguments \nlbmath{y_1,\ldots,y_m}. \
Roughly speaking, this dual
form of {\em\skolemization} leaves validity invariant.
For example, let us consider the formula \bigmathnlb
{\exists y.\,\forall x.\,\Qppp x y}. \
Assuming the
\index{axiom!of choice}%
\axiomofchoice\ and the
standard interpretation of (higher-order) quantification, all of the
following statements are logically equivalent:
\begin{itemize}
\item\math{\exists y.\,\forall x.\,\Qppp x y} \ holds.
\item There is an object \nlbmath y such that
\bigmaths{\Qppp x y}{} holds for every object \nlbmath x.
\item
\math{\exists y.\,\Qppp{\app{\forallvari x{}}y}y} \ holds for every function
\nlbmath{\forallvari x{}}.
\item
\math{\forall f.\,\exists y.\,\Qppp{\app f y}y} \ holds.
\end{itemize}
Now \bigmaths{\exists y.\,\Qppp{\app{\forallvari x{}}y}y}{} is called the
{\em (validity)
\index{form!Skolemized@(outer) Skolemized}%
\index{form!Skolemized@(outer) Skolemized!{\em example}}%
\skolemizedform\ of\/ \bigmaths{\exists y.\,\forall x.\,\Qppp x y}.} \
The variable or function symbols
\nlbmath{\forallvari x{}} of increased logical order are
called
\index{function!Skolem|(}%
{\em\skolem\ functions}.\footnote
{\herbrand\ calls \skolem\ functions
\index{function!index}%
{\em index functions},
translated according to \citep{herbrand-logical-writings}. \
Moreover, in \citep{herbrand-style-consistency-proofs}, \
\citep{scanlon73:_consis_number_theor_via_theor}, \ and
\index{Goldfarb, Warren}%
\citep{goldfarb-herbrand-consistency}, \
we find the term
\index{function!indicial}%
{\em indicial functions}\/ instead of
``index functions\closequotefullstopextraspace
The name ``\skolem\ function'' was used in
\index{Goedel@G\"odel!Kurt}%
\cite{goedel-consistency-continuum},
probably for the first time, \cfnlb\
\index{Anellis!Irving H.}%
\cite{anellis-skolem-function}.}
The \skolemizedform\ is also called
\index{form!functional}%
{\em functional form} (with several addenda specifying the dualities),
because
\index{Skolemization|)}%
\skolemization\ turns the object variable \nlbmath x into a
function variable or function symbol
\nlbmath{\app{\forallvari x{}}\cdots}.
Note that
\bigmaths{A\tightimplies B}{} and \bigmaths{\neg A\oder B}{} and
\bigmaths{\neg\inpit{A\und\neg B}}{} are equivalent in two-valued
logic. \ So are \bigmaths{\neg\forall x.\,A}{} and \bigmaths{\exists
x.\,\neg A}, as well as \bigmaths{\neg\exists x.\,A}{} and
\bigmaths{\forall x.\,\neg A}. \
Accordingly, the
\index{notation!uniform|)}%
{\em uniform notation}\/ (as introduced in \citep{smullyan}) \hskip.3em
is a modern classification of formulas
into only four categories:
\index{alpha@\math\alpha}%
\math\alpha,
\index{beta@\math\beta}%
\nlbmath\beta,
\index{gamma@\math\gamma}%
\nlbmath\gamma, and
\index{delta@\math\delta}%
\nlbmath\delta.
More important than the classification of formulas
is the associated classification of the
reductive inference rules applicable to them as
{\em principal formulas}.
According to \citep{gentzen}, \hskip.2em
but viewed under the aspect of reduction
(\ie\ the converse of deduction), \hskip.3em
the {\em principal formula}\/ of an inference rule is the one
which is (partly) replaced by its immediate ``sub''-formulas, depending on
its topmost operator.
\begin{itemize}\item
\mbox{An \math\alpha-formula} is one whose validity reduces to
the validity of a single operand of its topmost operator. \
For example, \bigmath{A\oder B}
may be reduced either to \bigmath A or to
\bigmaths B, and \bigmath{A\implies B} may be reduced either to
\bigmath{\neg A} or to \bigmaths B.
\noitem\item \mbox{A \math\beta-formula} is one whose validity reduces to the
validity of both operands of its topmost binary operator,
introducing two cases of proof (\math\beta\ = \underline branching). \
For example, \bigmath{A\und B}
reduces to both \bigmath A and \bigmaths B, and
\bigmath{\neg\inpit{A\implies B}} reduces to both \bigmath{A} and
\bigmaths{\neg B}.
\pagebreak
\noitem\item \mbox{A \math\gamma-formula} is one whose validity reduces to
the validity of alternative instances of its topmost quantifier. \
For example, \bigmath{\exists y.\,A} \nolinebreak reduces to
\bigmath{A\{y\tight\mapsto\existsvari y{}\}} in addition to
\bigmaths{\exists y.\,A}, for a fresh {\em\fev} \nlbmath{\existsvari y{}}. \
Similarly, \bigmath{\neg\forall y.\,A} reduces to \bigmath{\neg
A\{y\tight\mapsto\existsvari y{}\}} in addition to
\bigmaths{\neg\forall y.\,A}. \
\Fev s may be globally instantiated at any time in a reduction proof.
\noitem\item A \math\delta-formula is one whose validity reduces to
the validity of the instance of its topmost quantifier with its
\index{term!Skolem|)}%
\skolem\ term. \
For example, \bigmath{\forall x.\,A} reduces to
\bigmaths{A\{x\tight\mapsto\app
{\forallvari x{}}{\existsvari y 1,\ldots,\existsvari y m}\}},
where \math{\existsvari y 1,\ldots,\existsvari y m} are
the \fev s\ already in use.\footnote
{In the game-theoretic semantics of \firstorder\ logic, the
\mbox{\math\delta-variables} (such as \math{\forallvari x{}}
in the above example)
stand for the unknown choices by our opponent in the game, whereas,
for showing validity, we have to specify a winning strategy by
describing a finite number of \firstorder\ terms as alternative
solutions for the \mbox{\math\gamma-variables} (such as \math{\existsvari y i}
above), \cf\ \eg\ \citep{hintikkaprinciples}.}
\end{itemize}
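The classification above can be read operationally. The following sketch (our own illustration, not notation from the sources cited) assigns a category to a formula by its topmost operator, under the validity-oriented reduction just described; formulas are encoded as nested tuples such as ('implies', A, B).

```python
def classify(f):
    """Classify a formula by its topmost operator, viewed under
    validity-oriented reduction as in the uniform notation above.
    The nested-tuple encoding is an illustrative assumption."""
    if not isinstance(f, str) and f[0] == 'not':
        inner = f[1][0] if not isinstance(f[1], str) else None
        return {'and': 'alpha',      # neg. conjunction: one operand suffices
                'or': 'beta',        # neg. disjunction: both branches
                'implies': 'beta',   # neg(A implies B): both A and neg B
                'forall': 'gamma',   # negated forall acts existentially
                'exists': 'delta',   # negated exists acts universally
                }.get(inner, 'literal')
    key = f if isinstance(f, str) else f[0]
    return {'or': 'alpha', 'implies': 'alpha',
            'and': 'beta',
            'exists': 'gamma',
            'forall': 'delta'}.get(key, 'literal')
```

Negated atoms fall into the residual 'literal' category, matching the fact that only composite formulas are reduced further.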
For a more elaborate introduction to free
\index{variable!free gamma-@free \math\gamma-}%
\index{variable!free delta-@free \math\delta-}%
\math\gamma- and \math\delta-variables
see
\index{Wirth, Claus-Peter}%
\citep{wirth-jal}.
\herbrand\ considers {\em validity}\/ and
{\em \skolemizedform}\/ as above in his thesis. \hskip.4em
In \nolinebreak a similar context, which we will have to discuss below,
\skolem\ considers {\em unsatisfiability}, a dual of validity, and
{\em\skolemnormalform}\/ in addition to \skolemizedform. \
As \nolinebreak was standard in his time,
\herbrand\ called the two kinds of quantifiers
--- \nolinebreak \ie\ \nolinebreak for \nolinebreak\math\gamma- \nolinebreak
and \nolinebreak\math\delta-formulas\footnote
{Notice that it is obvious how to generalize the definition of
\math\alpha-, \math\beta-,
\math\gamma- and \math\delta-formulas from top positions
to inner occurrences according to the category into which they
would fall in a stepwise reduction. \
Therefore, we can speak of \math\alpha-, \math\beta-, \math\gamma- and
\math\delta-formulas also for the case of subformulas
and classify their quantifiers accordingly.}
\nolinebreak --- \
\index{quantifier!restricted}%
{\em restricted}\/ and
\index{quantifier!general}%
{\em general}\/
quantifiers, respectively. \
To avoid the problem of getting lost in
several dualities in what follows, we prefer to speak of
\index{quantifier!gamma-@\math\gamma-}%
\mbox{\em\math\gamma-quantifiers\/} and
\index{quantifier!delta-@\math\delta-}%
\mbox{\em\math\delta-quantifiers\/} instead. \
The variables bound by \mbox{\math\delta-quantifiers} will be called
\index{variable!bound delta-@bound \math\delta-}%
\mbox{\em bound \math\delta-variables}. \
The \nolinebreak variables bound by
\mbox{\math\gamma-quantifiers} will be called
\index{variable!bound gamma-@bound \math\gamma-}%
\mbox{\em bound \math\gamma-variables}.
For a \firstorder\ formula \nlbmath A in which any bound variable is
bound exactly once and does not occur again \freely,
we define:
The
\index{form!Skolemized@(outer) Skolemized}%
\index{form!Skolemized@(outer) Skolemized!{\em definition}}%
{\em outer}\/\footnote
{\label{note inner}\herbrand\ has no name for the outer \skolemizedform\
and he does not use the inner \skolemizedform,
which is the current standard in two-valued \firstorder\ logic
and which is required for our discussion in \noteref{note discussion inner}. \
The
\index{form!Skolemized @inner Skolemized}%
\index{form!Skolemized @inner Skolemized!{\em definition}}%
{\em inner \skolemizedform\ of \nlbmath A} results from
\nlbmath A by repeating the following
until all \mbox{\math\delta-quantifiers} have been removed: \
Remove an outermost \math\delta-quantifier
and replace its bound variable \bigmath x
with \bigmaths{\app{\forallvari x{}}{y_1,\ldots,y_m}}, \
where \bigmath{\forallvari x{}} is a new symbol and
\bigmaths{y_1,\ldots,y_m}, in this order, are the variables of the
\mbox{\math\gamma-quantifiers} in whose scope the
\mbox{\math\delta-quantifier}
occurs and which actually occur in the scope of the
\mbox{\math\delta-quantifier}. \
The {\em inner}\/ \skolemizedform\
is closely related to the {\em liberalized}\/ \math\delta-rule
(also called \deltaplus-rule) in reductive calculi, such as sequent, tableau,
or matrix calculi; \
\cfnlb\ \eg\
\index{Ferm\"uller, Christian G.}%
\citep{baazdelta}, \
\citep{strongskolem}, \
\index{Wirth, Claus-Peter}%
\citep[\litsectrefs{1.2.3}{2.1.5}]{wirthcardinal}, \
\citep{nonpermut}, \
\citep[\litsectref 4]{wirth-jal}.}
{\em \skolemizedform\ of \nlbmath A}\/ results from \nlbmath
A by removing any \math\delta-quantifier and replacing its bound
variable \bigmath x with \bigmaths{\app{\forallvari x{}}{y_1,\ldots,y_m}}, \
where \bigmath{\forallvari x{}} is a new symbol and
\bigmaths{y_1,\ldots,y_m}, in this order,\footnote{Contrary to
our fixation of the order of the variables as arguments to the
\index{function!Skolem|)}%
\skolem\ functions (to achieve uniqueness of the notion), \herbrand\
does not care for the order in his definition of the outer
\index{form!Skolemized@(outer) Skolemized}%
\skolemizedform. \
Whenever he takes the order into account,
however, he orders by occurrence from left to right or else by the
\index{height of a term}%
height of the terms \wrt\ a substitution, but never by the names of
the variables.}
are the variables of the \math\gamma-quantifiers in
whose scope the \math\delta-quantifier occurs.
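Under the assumptions of this definition (each variable bound exactly once, negations pushed to the atoms so that, for validity, universal quantifiers play the \math\delta-role and existential quantifiers the \math\gamma-role), the outer \skolemizedform\ admits a short executable sketch. The tuple encoding and the Skolem symbol names below are our illustrative assumptions, not Herbrand's notation.

```python
def subst(f, x, t):
    """Replace every occurrence of the variable name x by the term t.
    Safe here because each variable is bound exactly once."""
    if f == x:
        return t
    if isinstance(f, tuple):
        return tuple(subst(g, x, t) for g in f)
    return f

def outer_skolemize(f, gamma_vars=(), counter=None):
    """Outer Skolemization of a negation normal form built from
    'and'/'or'/'exists'/'forall' tuples: every delta-quantifier
    ('forall', for validity) is removed and its bound variable is
    replaced by a fresh Skolem term whose arguments are the
    gamma-variables in whose scope it occurs, in that order."""
    if counter is None:
        counter = [0]
    if isinstance(f, str):
        return f
    op = f[0]
    if op == 'exists':                 # gamma: keep, record the variable
        return ('exists', f[1],
                outer_skolemize(f[2], gamma_vars + (f[1],), counter))
    if op == 'forall':                 # delta: replace by a Skolem term
        counter[0] += 1
        sk = ('sk%d' % counter[0],) + gamma_vars
        return outer_skolemize(subst(f[2], f[1], sk), gamma_vars, counter)
    if op in ('and', 'or'):
        return (op, outer_skolemize(f[1], gamma_vars, counter),
                    outer_skolemize(f[2], gamma_vars, counter))
    return f                           # atom
```

For example, the formula encoded as ('exists', 'y', ('forall', 'x', ('P', 'x', 'y'))) yields an instance of P whose first argument is a Skolem term depending on y.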
\vfill\pagebreak
\section{Axioms and Rules of Inference}\label
{section herbrands calculi}\label
{section where the precedence is explained}
\noindent In the following we will present the calculi
of \herbrand's thesis
(\ie\ the axioms and rules of inference)
as required for our presentation of the \fundamentaltheorem.
When we speak of a term, a formula, or a structure,
we refer to \firstorder\ terminology without
mentioning this explicitly. \
When we explicitly speak of ``first
order\closequotecomma however,
this \nolinebreak is to emphasize the contrast to
sentential logic. \
\begin{description}
\item[\SententialTautology: ]
\index{tautology!sentential}%
Let \math B be a \firstorder\ formula.
\ \math B \nolinebreak is a {\em\sententialtautology}\/ \udiff\ it
is quantifier-free and truth-functionally valid, provided its
atomic subformulas are read as atomic sentential variables.\footnote
{Notice that this notion is more
restrictive than the following,
which is only initially used by \herbrand\ in his thesis,
but which is standard for the
predicate calculi of the
\index{Hilbert!school}%
\index{Hilbert!school!calculi of}%
\hilbert\ school and the
\index{Principia Mathematica!calculi of}%
\PM; \
\cfnlb\ \citep[\Vol\,II, Supplement\,I\,D]{grundlagen}, \
\index{Principia Mathematica}%
\citep[*10]{PM}\@. \
\math B \nolinebreak is \nolinebreak a
\index{tautology!substitutional sentential}%
{\em\substitutionalsententialtautology}\/
\udiff\ there is a truth-functionally valid sentential formula \nlbmath
A and a substitution \nlbmath\sigma\ mapping any sentential
variable in \nlbmath A to a \firstorder\ formula such that \math
B \nolinebreak is \nlbmath{A\sigma}. \
For example, both \bigmathnlb{\Pppp x{\oder}\neg\Pppp x}{} and
\bigmathnlb{\exists x\stopq\Pppp x\nottight{\oder}\neg\exists
x\stopq\Pppp x}{} are \substitutionalsententialtautologies,
related to the truth-functionally
valid sentential formula \bigmathnlb{p{\oder}\neg
p}, but only the first one is a \sententialtautology.}
\yesitem\item[Modus Ponens: ]
\index{modus ponens}%
\index{modus ponens!{\em definition}}%
\bigmaths{\displaystyle{A\quad\quad\quad
A\implies B}\over \displaystyle{B}}.
\item[Generalized Rule of \math\gamma-Quantification: ]
\index{Generalized Rule!of gamma-Quantification@of \math\gamma-Quantification}%
\index{Generalized Rule!of gamma-Quantification@of \math\gamma-Quantification!{\em definition}}%
\bigmaths{\displaystyle{A[B\{x\mapsto t\}]}\over
\displaystyle{A[\gamma x.\,B]}}, where the free variables of the
term \nlbmath t must not be bound by quantifiers in \nlbmath B, and
\math{\gamma} stands for \nlbmath\exists\ if
\math{[\ldots]} denotes a positive
position\arXivfootnotemarkref{note positive and negative positions}
in \nlbmath{A[\ldots]}, and \math{\gamma} stands
for \nlbmath\forall\ if this position is negative. \
Moreover, we
require that \nlbmath{[\ldots]} does not occur in the scope of
any quantifier in \nlbmath{A[\ldots]}. \ This requirement is not
necessary for soundness, but for the constructions in the proof of
\index{Herbrand!'s Fundamental Theorem}%
\herbrandsfundamentaltheorem.
\par\noindent
For example,
we get
\\\LINEmaths{\displaystyle
{\inpit{t\tightprec t}
\nottight{\oder}\neg\inpit{t\tightprec t}}\over\displaystyle
{\inpit{t\tightprec t}
\nottight{\oder}\exists x\stopq\neg\inpit{x\tightprec t}}}{}
\\and\\\LINEmaths{\displaystyle
{\inpit{t\tightprec t}
\nottight{\oder}\neg\inpit{t\tightprec t}}\over\displaystyle
{\inpit{t\tightprec t}
\nottight{\oder}\neg\forall x\stopq\inpit{x\tightprec t}}}{}
\par\noindent
via the meta-level substitutions
\par\noindent\LINEmaths
\{\ \ \ \ A[\ldots]\
\mapsto\ \inpit{t\tightprec t}
\oder [\ldots]\comma\ \ \
B\ \mapsto\ \neg\inpit{x\tightprec t}\ \ \ \ \}}{\,\,}\\and\\
\LINEmaths
\{\ \ \ \ A[\ldots]\ \mapsto\
\inpit{t\tightprec t}
\oder\neg[\ldots]\comma\ \ \
B\ \mapsto\ \inpit{x\tightprec t}\ \ \ \ \}},\\respectively.
Note that \herbrand\ considers equality of formulas only up to
renaming of bound variables and often implicitly assumes that a
bound variable is bound only once and does not occur \freely. \
Thus, if a free variable \nlbmath y of the
term \nlbmath t is bound by quantifiers in \nlbmath B, an implicit
renaming of the bound occurrences of \nlbmath y in
\nlbmath{B} is admitted to enable backward application of the
inference rule.\footroom
\pagebreak
\notop\item[Generalized Rule of \math\delta-Quantification: ]
\index{Generalized Rule!of delta@of \math\delta-Quantification}%
\index{Generalized Rule!of delta@of \math\delta-Quantification!{\em definition}}%
\bigmaths{\displaystyle{A[B]}\over\displaystyle{A[\delta x.\,B]}},
where the variable \math x must not occur in the context
\nlbmath{A[\ldots]}, and \math{\delta} stands for \nlbmath\forall\
if \math{[\ldots]} denotes a positive position in
\nlbmath{A[\ldots]}, and \math{\delta} stands for \nlbmath\exists\ if this
position is negative. \
Moreover, both for soundness and for the reason mentioned above, we require
that \math{[\ldots]} \nolinebreak
does not occur in the scope of any quantifier in \nlbmath{A[\ldots]}.\\
Again, if \math x occurs in the context \nlbmath{A[\ldots]}, an implicit
renaming of the bound occurrences of \nlbmath x in
\bigmaths{\delta x.\,B}{} is admitted to enable backward application.
\noitem\item[Generalized Rule of Simplification: ]
\index{Generalized Rule!of Simplification}%
\index{Generalized Rule!of Simplification!{\em definition}}%
\bigmaths{\displaystyle{A[B\circ B]}\over\displaystyle{A[B]}}, where
\math{\circ} stands for \tightoder\ if \math{[\ldots]} \nolinebreak
denotes a positive position in \nlbmath{A[\ldots]}, \
and \math{\circ} stands for \tightund\ if this position is negative.\\
To enable a forward application of the inference rule,
the bound variables may be renamed such that the two occurrences of
\nlbmath B become equal.\\
Moreover, the
\index{Generalized Rule!of gamma-Simplification@of \math\gamma-Simplification}%
\index{Generalized Rule!of gamma-Simplification@of \math\gamma-Simplification!{\em definition}}%
{\em Generalized Rule of \math\gamma-Simplification}
is the sub-rule for the case that
\math B \nolinebreak is of the form \nlbmath{\exists y.C}
if \math{[\ldots]} denotes a
positive position in \nlbmath{A[\ldots]}, and of the form
\nlbmath{\forall y.C}
if this position is negative.
\noitem\end{description}
\label{section discussion prenex}%
To avoid the complication of quantifiers within a formula,
where it is hard to keep track of the scope of each individual quantification,
all quantifiers can be moved to the front,
provided some caution is taken with the renaming of quantified variables. \
This is called the
\index{form!prenex!{\em definition}}%
{\em prenex form}\/ of a formula. \
The
\index{form!anti-prenex!{\em definition}}%
{\em anti-prenex form}\/ results from the opposite transformation,
\ie\ from moving the quantifiers inwards as much as possible. \
\herbrand\ achieves these transformations with his
\index{Rule!of Passage}%
{\em Rules of Passage}.
\begin{description}
\item[Rules of Passage: ]
\index{Rule!of Passage!{\em definition}}%
The following six \nolinebreak logical
equivalences may be used for rewriting from left to right
\index{direction!prenex!{\em definition}}%
({\em prenex direction}\/) and from right to left
\index{direction!anti-prenex!{\em definition}}%
({\em anti-prenex direction}\/),
resulting in twelve \nolinebreak deep
\index{inference!deep}%
inference rules:\par\noindent\LINEmath{\begin{array}{
l@{~~~~~~}
r
c
l
}
(1)
&
\neg\forall x.A
&\equivalent
&\exists x.\neg A
\\(2)
&\neg\exists x.A
&\equivalent
&\forall x.\neg A
\\(3)
&\inpit{\forall x.A}\oder B
&\equivalent
&\forall x.\,\inpit{A\,\tightoder B}
\\(4)
&B\oder\forall x.A
&\equivalent
&\forall x.\,\inpit{B\,\tightoder A}
\\(5)
&\inpit{\exists x.A}\oder B
&\equivalent
&\exists x.\,\inpit{A\,\tightoder B}
\\(6)
&B\oder\exists x.A
&\equivalent
&\exists x.\,\inpit{B\,\tightoder A}
\\\end{array}}%
\par\noindent
Here, \math B is a formula in which the variable \nlbmath x does not occur. \
As explained above,
if \nlbmath x \nolinebreak occurs \freely\
in \nlbmath B, an implicit renaming of
the bound occurrences of \nlbmath x in \nlbmath A
is admitted to enable rewriting in
\index{direction!prenex}%
prenex direction.
\end{description}
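As an illustration, the prenex direction can be sketched as a terminating rewrite over formulas with disjunction and negation only, exactly the connectives of the six equivalences, assuming all bound variables distinct so that no renaming is needed. The tuple encoding is our own.

```python
def prenex(f):
    """Apply the six Rules of Passage exhaustively in prenex
    direction.  Formulas are nested tuples over 'not', 'or',
    'forall' and 'exists'; all bound variables are assumed distinct,
    so no implicit renaming is ever required."""
    if isinstance(f, str):
        return f
    op = f[0]
    if op == 'not':
        g = prenex(f[1])
        if g[0] == 'forall':                    # rule (1)
            return ('exists', g[1], prenex(('not', g[2])))
        if g[0] == 'exists':                    # rule (2)
            return ('forall', g[1], prenex(('not', g[2])))
        return ('not', g)
    if op == 'or':
        a, b = prenex(f[1]), prenex(f[2])
        if a[0] in ('forall', 'exists'):        # rules (3) and (5)
            return (a[0], a[1], prenex(('or', a[2], b)))
        if b[0] in ('forall', 'exists'):        # rules (4) and (6)
            return (b[0], b[1], prenex(('or', a, b[2])))
        return ('or', a, b)
    if op in ('forall', 'exists'):
        return (op, f[1], prenex(f[2]))
    return f                                    # atom
```

Each recursive call either strips a quantifier off an operand of a disjunction or pushes a negation through a quantifier, so the rewriting terminates with all quantifiers in front.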
If we restrict the ``Generalized'' rules to outermost applications only
\hskip.15em (\ie, if we restrict \math A to
\nolinebreak be the empty context), \hskip.3em
we obtain the rules without the attribute
``Generalized\closequotecommaextraspace
\ie\ the
\index{Rule|see {{\em also}\/\, Generalized Rule}}%
\index{Rule!of gamma-Quantification@of \math\gamma-Quantification!{\em definition}}%
\index{Rule!of delta-Quantification@of \math\delta-Quantification!{\em definition}}%
{\em Rules of \math\gamma- and \math\delta-Quantification}\/ and the
\index{Rule!of Simplification!{\em definition}}%
{\em Rule of Simplification}.\footnote
{The {\em Generalized}\/
Rules of Quantification are introduced (under varying names) in
\index{Heijenoort!Jean van}%
\makeaciteoffour
{heijenoort-tree-herbrand}
{heijenoort-herbrand}
{heijenoort-oeuvre-herbrand}
{heijenoort-work-herbrand}
and under the names \inpit{\mu^\ast} and \inpit{\nu^\ast} in
\citep[\Vol\,II, \p 166]{grundlagen}, but only in the second edition of
1970, not in the first edition of 1939. \ \herbrand\ had only the
non-generalized versions of the Rules of Quantification and named
them ``First \nolinebreak and Second \nolinebreak
\index{Rule!of Generalization}%
Rule of Generalization\closequotecomma translated according to
\citep{herbrand-logical-writings}. \
Note that the restrictions of the Generalized Rules of Quantification guarantee
the equivalence of the generalized and the non-generalized versions by the
\index{Rule!of Passage}%
Rules of Passage; \cfnlb\
\index{Heijenoort!Jean van}%
\citep[\p\,6]{heijenoort-tree-herbrand}. \
\herbrand's name for {\em modus
ponens}\/ is
\index{Rule!of Implication}%
``Rule of Implication\closequotefullstopextraspace
Moreover,
\index{Rule!of Simplification}%
\index{Generalized Rule!of Simplification}%
``(Generalized) Rule of Simplification'' and
\index{Rule!of Passage}%
``Rules of Passage'' are \herbrand's names. \
All other names introduced in \nlbsectref{section herbrands calculi} are
our own invention to simplify our following presentation.}
\vfill\pagebreak
\section{Normal Identities, Properties A, B, and C,
and\\\herbrand\ Disjunction and Complexity}\label
{section herbrands properties}%
\index{Herbrand!disjunction|(}%
\index{Herbrand!complexity|(}%
\notop\halftop\noindent
Key notions of \herbrand's thesis are
\index{identity!normal}%
{\em normal identity},
\index{Property A}%
{\em\propertyA},
\index{Property B}%
{\em\propertyB}, and
\index{Property C|(}%
{\em\propertyC}. \ \
\propertyC\ is the most important and
the only one we need in this account.\footnote
{\label{note normal identity}The following are the definitions for the
omitted notions
{\em normal identity},
{\em\propertyA}, and
{\em\propertyB}
for a formula \nlbmath D. \
\math D is a
\index{identity!normal!{\em definition}}%
{\em normal identity}\/ \udiff\
\math D \nolinebreak
has a linear proof starting with a \sententialtautology,
possibly followed by applications of the Rules of Quantification,
and finally possibly followed by applications of the
\index{Rule!of Passage}%
Rules of Passage. \
\math D has
\index{Property A!{\em definition}}%
{\em\propertyA}\/ \udiff\
\math D \nolinebreak
has a linear proof starting with a \sententialtautology,
possibly followed by applications of the Rules of Quantification,
and finally possibly followed by applications of the
\index{Rule!of Passage}%
Rules of Passage
and the
\index{Generalized Rule!of Simplification}%
Generalized Rule of Simplification. \
\herbrand's original definition of
\index{Property A}%
\propertyA\ is technically more complicated,
but extensionally defines the same property
and is also intensionally very similar. \
Finally, \math D has
\index{Property B!{\em definition}}%
{\em \propertyB\ of order \nlbmath n}\/
\udiff\ \math{D'} \nolinebreak has \propertyC\ of order \nlbmath n,
where \math{D'} results from possibly repeated
application of the
\index{Rule!of Passage}%
Rules of Passage to \nlbmath D, in
\index{direction!anti-prenex}%
anti-prenex direction
as long as possible.}
\herbrand's \propertyC\ was implicitly used already in
\index{L\"owenheim!Leopold}%
\citep{loewenheim-1915}
and
\index{Skolem!Thoralf}%
\citep{skolem-1928}, \hskip.2em
but as an explicit notion,
it was first formulated in \herbrand's thesis. \
It is the main property of
\herbrand's work and may well be called the central property of
\firstorder\ logic, for reasons to be explained in the following.
In essence, \propertyC\ captures the following intuition taken from
\index{L\"owenheim!Leopold}%
\citep{loewenheim-1915}:
Assuming the
\index{axiom!of choice}%
\axiomofchoice,
the validity of a formula \nlbmath A
is equivalent to the validity of its
\index{form!Skolemized@(outer) Skolemized}%
\skolemizedform\ \nlbmath F\@. \
Moreover,
the validity of \nlbmath F would be equivalent to the validity
of the
\index{Herbrand!expansion}%
\herbrandexpansion\ \nlbmath{F^{\cal U}}
for a universe \nlbmath{\cal U},
provided only that this expansion were a finite formula
and did not vary over different universes. \
To achieve this, we replace the semantical objects of the
universe \nlbmath{\cal U} with syntactical objects,
namely the countable set of all terms,
used as ``place holders'' or names. \
To get a {\em finite} formula,
we again replace this set of terms, which is infinite in general,
with the
\index{champ fini|(}%
{\frenchfont champ fini}
\nlbmath{\termsofdepthnovars n}, \
as defined in \sectref{section herbrand expansion}. \
If \nolinebreak we can show
\nlbmath{F^{\termsofdepthnovars n}} to be a sentential tautology
for some positive natural number \nlbmath n, \hskip.2em
then we know
that the \math\gamma-quantifications in \nlbmath F
have solutions in any structure, so that \math F \nolinebreak
and \math A \nolinebreak are valid.\footnote
{Indeed, we have \bigmaths{F^{\termsofdepthnovars n}\yields A},
\cfnlb\ \littheoref 4 in
\index{Heijenoort!Jean van}%
\citep{heijenoort-herbrand},
which roughly is our \lemmref{lemma from C to yields a la heijenoort}.} \
Otherwise, the
\index{L\"owenheim!--Skolem Theorem}%
\loewenheimskolemtheorem\ says that \math A is invalid.
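Our reading of the {\frenchfont champ fini} can be sketched as follows; representing it as the set of all terms of height at most \math n over a given signature is an assumption on our part, since the authoritative definition is the one in \sectref{section herbrand expansion}.

```python
from itertools import product

def champ_fini(signature, n):
    """All terms of height at most n over a signature given as
    (symbol, arity) pairs; constants have arity 0 and height 1.
    A hedged reading of Herbrand's 'champ fini' T_n."""
    terms = set()
    for _ in range(n):
        new = set(terms)
        for sym, arity in signature:
            # build every application of sym to already-generated terms
            for args in product(list(terms), repeat=arity):
                new.add((sym,) + args)
        if new == terms:      # no growth: the signature is exhausted
            break
        terms = new
    return terms
```

For a signature with one constant and one unary function, the sets grow by one term per height level, which is why the expansion over them stays finite for each fixed order.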
\notop\halftop
\begin{definition}[\propertyC, \herbrand\ Disjunction, \herbrand\ Complexity]%
\index{Property C!{\em definition}}%
\index{Herbrand!disjunction!{\em definition}}%
\index{Herbrand!complexity!{\em definition}}%
\\
Let \math A be a \firstorder\ formula, in which,
without loss of generality, any bound variable is
bound exactly once and does not occur again \freely, neither as a
variable nor as a function symbol. \
Let \math F be
the outer
\index{form!Skolemized@(outer) Skolemized}%
\skolemizedform\ of \nlbmath A. \ Let \math n be a positive
natural number. \ Let the
\index{champ fini|)}%
{\frenchfont champ fini}
\nlbmath{\termsofdepthnovars n} be formed over the function
and free variable symbols occurring in \nlbmath F. \\\math A
{\em has \propertyC\ of order \nlbmath n}\/ \udiff\ the
\index{Herbrand!expansion}%
\herbrandexpansion\ \nlbmath{F^{\,\termsofdepthnovars n}} is a
\sententialtautology. \
\\\indent
The \herbrandexpansion\ \nlbmath{F^{\,\termsofdepthnovars n}}
is sententially equivalent to the so-called {\em\herbrand\ disjunction
of \nlbmath A of order \nlbmath n}, \hskip .3em
which is the finite disjunction
\nlbmath{\bigvee_{\FUNDEF\sigma\Y{\termsofdepthnovars n}}E\sigma}, \
\mbox{where \Y\ is}
the set of bound (\math\gamma-) variables of \nlbmath F, \
and \math E results from \math F by removing all
(\math\gamma-) quantifiers.
\\\indent
\enlargethispage{1ex}%
This form of representation can be used to define
the {\em\herbrand\ complexity}\/ of \nlbmath A,
which is the minimal number of instances of \nlbmath E
whose disjunction is a \sententialtautology.\footnote
{\herbrand\ has no name for {\em\herbrand\ disjunction}\/
and does not use the notion of {\em\herbrand\ complexity},
which, however, is closely related to
\index{Herbrand!'s Fundamental Theorem}%
\herbrandsfundamentaltheorem,
which says that the \herbrand\ complexity
of \nlbmath A is always defined as a positive natural number,
provided that \bigmaths{\tightyields A}{} holds. \
More formally,
the {\em\herbrand\ complexity of \nlbmath A}
is defined as the minimal cardinality \nlbmath{\CARD S}
such that, for some positive natural number \nlbmath m and some
\ \bigmaths{S\nottight{\nottight{\nottight\subseteq}}
\FUNSET\Y{\termsofdepthnovars m}}, \ \
the finite disjunction
\bigmath{\bigvee_{\sigma\in S}E\sigma}
is a \sententialtautology. \
It is useful in the comparison of logical calculi
\wrt\ their smallest proofs for certain
generic sets of formulas,
\cfnlb\ \eg\
\index{Ferm\"uller, Christian G.}%
\citep{baazdelta}.}\getittotheright\qed\end{definition}
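The definition suggests a brute-force procedure: enumerate the substitutions from the set of bound \math\gamma-variables into the {\frenchfont champ fini}, instantiate the quantifier-free matrix \math E, and check the resulting disjunction truth-functionally, reading each instantiated atom as a sentential variable. The following sketch does exactly that; the encoding and helper functions are illustrative assumptions.

```python
from itertools import product

def apply_subst(f, s):
    """Instantiate the free (gamma-) variables of f according to s."""
    if isinstance(f, str):
        return s.get(f, f)
    return tuple(apply_subst(g, s) for g in f)

def atoms(f):
    """Collect the atomic subformulas of a quantifier-free formula."""
    if not isinstance(f, str) and f[0] in ('not', 'and', 'or'):
        return set().union(*(atoms(g) for g in f[1:]))
    return {f}

def holds(f, v):
    """Truth value of f under the valuation v of its atoms."""
    if isinstance(f, str) or f[0] not in ('not', 'and', 'or'):
        return v[f]
    if f[0] == 'not':
        return not holds(f[1], v)
    if f[0] == 'and':
        return holds(f[1], v) and holds(f[2], v)
    return holds(f[1], v) or holds(f[2], v)

def herbrand_disjunction(E, Y, terms):
    """One disjunct E*sigma per substitution sigma: Y -> terms."""
    return [apply_subst(E, dict(zip(Y, ts)))
            for ts in product(terms, repeat=len(Y))]

def is_sentential_tautology(disjuncts):
    """Truth-table check, reading instantiated atoms as sentential
    variables (Property C of order n tests exactly this)."""
    ats = list(set().union(*(atoms(d) for d in disjuncts)))
    return all(any(holds(d, dict(zip(ats, row))) for d in disjuncts)
               for row in product([False, True], repeat=len(ats)))
```

The Herbrand complexity of a valid formula is then the least number of these disjuncts whose disjunction already passes the tautology check, over any order of the term set.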
\newcommand\termeins{\app{\forallvari m{}}{\forallvari v{},\forallvari w{}}}%
\newcommand\termzwei{\app{\forallvari m{}}{\forallvari u{},\termeins}}%
\newcommand\boxeins{\framebox{\termeins}}%
\newcommand\boxzwei{\framebox{\termzwei}}%
\newcommand\boxu{\framebox{\forallvari u{}}}%
\newcommand\boxv{\framebox{\forallvari v{}}}%
\newcommand\boxw{\framebox{\forallvari w{}}}%
\begin{example}[\propertyC, \herbrand\ Disjunction, \herbrand\ Complexity]\label
{example running herbrand start}%
\index{Property C!{\em example}}%
\index{Herbrand!disjunction!{\em example}}%
\index{Herbrand!complexity!{\em example}}%
\\
Let \math A be the following formula, which
says that if we have transitivity and
an upper bound of two elements,
then we also have an upper bound of three elements:
\par\noindent\LINEmaths{\noparenthesesoplist{
\forall\boundvari a{},\boundvari b{},\boundvari c{}\stopq\inpit{
\boundvari a{}\tightprec\boundvari b{}
\und
\boundvari b{}\tightprec\boundvari c{}
\implies
\boundvari a{}\tightprec\boundvari c{}}
\oplistund
\forall\boundvari x{},\boundvari y{}\stopq
\exists\boundvari m{}\stopq
\inpit{\boundvari x{}\tightprec\boundvari m{}\und
\boundvari y{}\tightprec\boundvari m{}}
\oplistimplies \forall\boundvari u{},\boundvari v{},\boundvari w{}\stopq
\exists\boundvari n{}\stopq
\inpit{
\boundvari u{}\tightprec\boundvari n{}\und
\boundvari v{}\tightprec\boundvari n{}\und
\boundvari w{}\tightprec\boundvari n{}}}}{}{\Large\math{(A)}}\par\noindent
The outer
\index{form!Skolemized@(outer) Skolemized}%
\skolemizedform\ \nlbmath F of \nlbmath A is
\par\noindent\LINEmaths{\noparenthesesoplist{
\forall\boundvari a{},\boundvari b{},\boundvari c{}\stopq
\inpit{
\boundvari a{}\tightprec\boundvari b{}
\und
\boundvari b{}\tightprec\boundvari c{}
\implies
\boundvari a{}\tightprec\boundvari c{}}
\oplistund
\forall\boundvari x{},\boundvari y{}\stopq
\inpit{
\boundvari x{}\tightprec\app
{\forallvari m{}}{\boundvari x{},\boundvari y{}}\und
\boundvari y{}\tightprec\app
{\forallvari m{}}{\boundvari x{},\boundvari y{}}}
\oplistimplies
\exists\boundvari n{}\stopq
\inpit{
\forallvari u{}\tightprec\boundvari n{}\und
\forallvari v{}\tightprec\boundvari n{}\und
\forallvari w{}\tightprec\boundvari n{}}}}{}{\Large\math{(F)}}\par\noindent
The result of removing
the quantifiers from \nlbmath F is the formula \nlbmath E:
\par\noindent\LINEmaths{\noparenthesesoplist{\inpit{
\boundvari a{}\tightprec\boundvari b{}
\und
\boundvari b{}\tightprec\boundvari c{}
\implies
\boundvari a{}\tightprec\boundvari c{}}
\oplistund
\boundvari x{}\tightprec\app
{\forallvari m{}}{\boundvari x{},\boundvari y{}}\und
\boundvari y{}\tightprec\app
{\forallvari m{}}{\boundvari x{},\boundvari y{}}
\oplistimplies
\forallvari u{}\tightprec\boundvari n{}\und
\forallvari v{}\tightprec\boundvari n{}\und
\forallvari w{}\tightprec\boundvari n{}}}{}{\Large\math{(E)}}\par\noindent
By semantical considerations it is obvious
that a solution for \nlbmath{\boundvari n{}} is
\termzwei. \
This is a term of
\index{height of a term}%
height \nolinebreak 3,
which suggests that
\math A \nolinebreak has \propertyC\ of order \nlbmath 4. \
Let us show that this is indeed the case
and that the
\index{Herbrand!complexity|)}%
\herbrand\ complexity of \nlbmath A is \nlbmath 2. \
Consider the following 2 substitutions: \
\bigmath{\begin{array}[t]{l l@{}l@{}l l@{}l@{}l l@{}l@{}l l}
\{
&\boundvari a{}&\mapsto&\forallvari v{},
&\boundvari b{}&\mapsto&\termeins,
&\boundvari c{}&\mapsto&\termzwei,
\\
&\boundvari x{}&\mapsto&\forallvari v{},
&\boundvari y{}&\mapsto&\forallvari w{},
&\boundvari n{}&\mapsto&\termzwei
&\};
\\\{
&\boundvari a{}&\mapsto&\forallvari w{},
&\boundvari b{}&\mapsto&\termeins,
&\boundvari c{}&\mapsto&\termzwei,
\\
&\boundvari x{}&\mapsto&\forallvari u{},
&\boundvari y{}&\mapsto&\termeins,
&\boundvari n{}&\mapsto&\termzwei
&\}.
\\\end{array}}{}\\\smallheadroom
Indeed, if we normalize the \herbrand\ disjunction generated by
these two substitutions to a disjunctive normal form
(\ie\ a disjunctive set of conjunctions)
we get the following \sententialtautology.\smallfootroom
\\%\par\noindent
\LINEmaths{\begin{array}[c]{@{}l@{}l@{}}
\{\ \
&\forallvari v{}\tightprec\termeins
\und\termeins\tightprec\termzwei
\und\forallvari v{}\tightnotprec\termzwei\comma
\\
&\forallvari w{}\tightprec\termeins
\und\termeins\tightprec\termzwei
\und\forallvari w{}\tightnotprec\termzwei\comma
\\
&\forallvari v{}\tightnotprec\termeins
\comma
\forallvari w{}\tightnotprec\termeins
\comma
\\
&\forallvari u{}\tightnotprec\termzwei\comma
\termeins\tightnotprec\termzwei\comma
\\
&\forallvari u{}\tightprec\termzwei\und
\forallvari v{}\tightprec\termzwei\und
\forallvari w{}\tightprec\termzwei
\ \ \}
\\\end{array}}{}{\begin{tabular}
{@{}r@{}}\\\\{\Large\math{(C)}}\\\\\qed\\\end{tabular}}\par\noindent
\end{example}
\noindent
\enlargethispage{1ex}%
The different treatment of
\mbox{\math\delta-quantifiers} and \math\gamma-quantifiers
in \propertyC,
namely by \skolemization\ and
\index{Herbrand!expansion}%
\herbrandexpansion, respectively,
as found in
\index{Skolem!Thoralf}%
\citep{skolem-1928} and \citep{herbrand-PhD},
rendered the reduction to
sentential logic by hand (or \nolinebreak actually today, on a computer)
practically executable for the first time.\footnote
{For instance, the elimination of both \math\gamma- and
\math\delta-quantifiers with the help of
\index{Hilbert!'s epsilon}%
\hilbert's \mbox{\nlbmath\varepsilon-operator} suffers from an exponential
complexity in formula size. \
As \nolinebreak a result, even small formulas grow so large
that the mere size makes them inaccessible to human inspection; \hskip.3em
and this is still the case
for the term-sharing representation of
\index{Hilbert!'s epsilon}%
\mbox{\math\varepsilon-terms} of \nolinebreak
\index{Wirth, Claus-Peter}%
\citep{wirth-jal}.} \
This different treatment of the two kinds of quantification
is inherited from the
\index{Peirce!--Schr\"oder tradition}%
\peirce--\schroeder\
tradition\arXivfootnotemarkref{note peirce schroeder tradition}
which came on \herbrand\ via
\index{L\"owenheim!Leopold}%
\loewenheim\ and
\index{Skolem!Thoralf}%
\skolem. \
\index{Russell!Bertrand}%
\russell\ and
\index{Hilbert!David}%
\hilbert\ had already merged
that tradition with the one of
\index{Frege, Gottlob}%
\frege,
sometimes emphasizing their \frege\ heritage over
that of
\index{Peirce!--Schr\"oder tradition}%
\index{Peirce!Charles S.}%
\peirce\ and
\index{Schr\"oder!Ernst}%
\schroeder\fullstopnospace\footnote
{While this emphasis on
\index{Frege, Gottlob}%
\frege\
will be understood by everybody who ever had the
fascinating experience of reading \frege,
it introduced some unjustified bias into the historiography of modern logic,
still present in the selection of
the famous source book \citep{heijenoort-source-book}; \
\cf\ \eg\
\index{Anellis!Irving H.}%
\citep[\litchapref 3]{anellis-heijenoort-long}.%
}
It was \herbrand\ who completed the bridge
between these two traditions with his \fundamentaltheorem,
as depicted in
\sectref{section herbrand fundamental theorem}
below.\pagebreak
\section{\herbrand's False Lemma}\label{section lemma}%
\index{Herbrand!'s False Lemma|(}%
\notop\halftop\noindent
For a given positive natural number \nlbmath n, \
\index{Herbrand!'s False Lemma!{\em definition}}%
{\em\herbrand's (False) Lemma} \
says that
\propertyC\ of order \nlbmath n \hskip.2em
is invariant under the application of the
\index{Rule!of Passage}%
Rules of Passage.
\\\indent
The basic function of \herbrand's False Lemma in
the proof of
\index{Herbrand!'s Fundamental Theorem|(}%
\herbrandsfundamentaltheorem\
is to establish the logical equivalence of
\propertyC\ of a formula \nlbmath A
with \propertyC\ of the
\index{form!prenex}%
{\em prenex}\/ and
\index{form!anti-prenex}%
{\em anti-prenex forms}\/ of
\nlbmath A,
\cfnlb\ \sectref{section discussion prenex}. \
\\\indent
\herbrand's Lemma is wrong because the
\index{Rule!of Passage}%
Rules of Passage
may change the
outer
\index{form!Skolemized@(outer) Skolemized}%
\skolemizedform. \
This happens whenever
a \mbox{\math\gamma-quantifier} binding \nlbmath{x} is moved
over a binary operator
whose unchanged operand \nlbmath B
contains a \mbox{\math\delta-quantifier}.\footnote
{\label{footnote same meta}Here we use the same meta variables
as in our description of the Rules of Passage in
\nlbsectref{section herbrands calculi}
and assume that \math x \nolinebreak does not occur \freely\ in \nlbmath B.}
\\\indent
To find a counterexample for \herbrand's Lemma for the case of
\propertyC\ of order \nlbmath 2, \hskip .2em
let us consider moving out the \mbox{\math\gamma-quantifier} \nolinebreak
``\math{\exists\boundvari x{}.}'' \ in
the valid formula
\\\LINEmaths{\inpit{\exists\boundvari x{}.\,\Pppp{\boundvari x{}}}
\oder\neg\exists\boundvari y{}.\,\Pppp{\boundvari y{}}}.\\
The
\index{form!Skolemized@(outer) Skolemized}%
(outer) \skolemizedform\ of this formula
is \\\LINEmaths{\inpit{\exists\boundvari x{}.\,\Pppp{\boundvari x{}}}
\oder\neg\Pppp{\forallvari y{}}}.\\
The \herbrand\ disjunction over the single substitution
\nlbmath{\{\boundvari x{}\tight\mapsto\forallvari y{}\}}
is a \sententialtautology. \
The
\index{form!Skolemized@(outer) Skolemized}%
{\em outer}\/ \skolemizedform\ after moving out the
``\math{\exists\boundvari x{}.}'' is
\\\LINEmaths{\exists\boundvari x{}\stopq\inparentheses
{\Pppp{\boundvari x{}}
\oder\neg\Pppp{\app{\forallvari y{}}{\boundvari x{}}}}}.\\
To get a \sententialtautology\ again,
we now have to take the \herbrand\ disjunction over both
\nlbmath{\{\boundvari x{}\tight\mapsto\app{\forallvari y{}}{l}\}}
and \nlbmath{\{\boundvari x{}\tight\mapsto l\}} \
(instead of the single
\nlbmath{\{\boundvari x{}\tight\mapsto\forallvari y{}\}}), \
for the lexicon \nlbmath l\@. \
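In plain notation (a sketch; we write \math P for the predicate, \math{y^\ast} for the \skolem\ symbol, and \math l for the lexicon, rather than the document's macros), one can check why both substitutions are needed: neither instance alone is a \sententialtautology, but their disjunction contains a complementary pair:

```latex
% Instances of  P(x) \lor \lnot P(y^*(x)):
%   {x |-> l}:        P(l)       \lor  \lnot P(y^*(l))
%   {x |-> y^*(l)}:   P(y^*(l))  \lor  \lnot P(y^*(y^*(l)))
% The disjunction of both contains P(y^*(l)) and \lnot P(y^*(l)),
% and hence is a sentential tautology:
\[
  \underbrace{P(l)\lor\lnot P(y^\ast(l))}_{\{x\mapsto l\}}
  \;\lor\;
  \underbrace{P(y^\ast(l))\lor\lnot P(y^\ast(y^\ast(l)))}_{\{x\mapsto y^\ast(l)\}}
\]
```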
\\\indent
This, however, is not really
a counterexample for \herbrand's Lemma
because \herbrand\ treated the lexicon \nlbmath l as a variable and
defined the height of a \skolem\ constant to be \nlbmath 1,
and the
\index{height of a term}%
height of a variable to be \nlbmath 0,
so that
\ \mbox{\math{\CARD{{\forallvari y{}}{}}=1=\CARD{\app{\forallvari y{}}{l}}}}. \
As free variables and \skolem\ constants play exactly the same \role,
this definition of
\index{height of a term}%
\index{height of a term!{\em treatment of the lexicon}}%
height is a bit unintuitive and was possibly introduced
to avoid this counterexample.
\
But now for the similar formula
\par\noindent\LINEmaths{
\inparentheses{\inpit{\exists\boundvari x{}.\,\Pppp{\boundvari x{}}}
\und\forall\boundvari y{}.\,\Qpppeins{\boundvari y{}}}
\nottight{\oder}\neg\inpit{\exists\boundvari x{}.\,\Pppp{\boundvari x{}}}
\nottight{\oder}\neg
\forall\boundvari y{}.\,\Qpppeins{\boundvari y{}}}{}
\par\noindent
after moving the first \math\gamma-quantifier \nolinebreak
``\math{\exists\boundvari x{}.}'' out
over the ``\tightund\closequotecommaextraspace
we have to apply \\instead of\LINEmath{\begin{array}[b]{l l}
\{ \ \boundvari x{}\mapsto\forallvari x{},
&\boundvari y{}\mapsto\app{\forallvari y{}}{\forallvari x{}} \ \}
\\\{ \ \boundvari x{}\mapsto\forallvari x{}, \
&\boundvari y{}\mapsto\forallvari y{} \ \}
\phantom{\app{\forallvari y{}}{\forallvari x{}}}
\\\end{array}}\phantom{instead of}\\
to get a \sententialtautology,
and we have
\maths{\CARD{\forallvari y{}}=1}{} \ and
\maths{\CARD{\app{\forallvari y{}}{{\forallvari x{}}{}}}=2}, \
and thus
\bigmaths{\forallvari y{}
\in\termsofdepthnovars 2}, but
\bigmaths{\app{\forallvari y{}}{{\forallvari x{}}{}}
\notin\termsofdepthnovars 2}. \ \
This means:
\propertyC\ of order \nlbmath 2 varies under a single application of a
\index{Rule!of Passage}%
Rule of Passage, and thus
we have a proper counterexample for \herbrand's False Lemma \nolinebreak here.
\\\indent
In\,1939,
\index{Bernays, Paul|(}%
\bernays\ remarked that \herbrand's proof is hard to
follow\footnote {In \citep[\Vol\,II]{grundlagen}, in \litnoteref 1 on
\p 158 of the 1939 edition (\p 161 of the 1970 edition), we read: \
``{\germanfontfootnote Die \herbrand sche Bewei\esi f\ue hrung ist
schwer zu verfolgen}''} and --- for the first time --- published a
sound proof of a version of \herbrandsfundamentaltheorem\ which is
restricted to
\index{form!prenex}%
prenex form, but more efficient in the number of terms
that have to be considered in a \herbrand\ disjunction than \herbrand's
quite global limitation to {\em all terms \nlbmath t with \
\math{\CARD t<n}}, \ \ related to \propertyC\ of order \nlbmath n.\footnote
{\label{footnote bernays}\Cfnlb\ \litsectref{3.3} of the 1939 edition of
\citep[\Vol\,II]{grundlagen}\@. \ In the 1970 edition,
\index{Bernays, Paul}%
\bernays\ also indicates how to remove the restriction to
\index{form!prenex}%
prenex formulas.}
\\\indent
According to a conversation with
\index{Heijenoort!Jean van}%
\heijenoort\ in
autumn\,1963,\footnote
{\Cfnlb\ \citep[\p\,8, \litnoteref j]{herbrand-ecrits-logiques}\@.}
\index{Goedel@G\"odel!Kurt}%
\goedel\ noticed the lacuna in the proof
of \herbrand's False Lemma in 1943 and wrote a private note,
but did not publish \nolinebreak it. \
While \goedel's documented attempts to construct a counterexample to
\herbrand's False Lemma failed,
he had actually worked out a
\index{correction (of Herbrand's False Lemma)!G\"odel's and Dreben's|(}%
{\em \repair}\/ of
\herbrand's False Lemma,
which is sufficient for the proof of \herbrandsfundamentaltheorem.\footnote
{\Cfnlb\
\index{Goldfarb, Warren}%
\citep{goldfarb-herbrand-goedel}.}
\\\indent
In\,1962, when \goedel's \repair\ was still unknown,
a young student,
\index{Andrews, Peter B.|(}%
\andrewsname, had the audacity to tell
his advisor
\index{Church, Alonzo}%
\churchname\ \churchlifetime\ that there seemed to be a gap in
the proof of \herbrand's False Lemma.
\church\ sent \andrews\ to
\index{Dreben, Burton}%
\drebenname\ \drebenlifetime, who finally came up
with a counterexample.
And then \andrews\ constructed a simpler counterexample
(essentially the one we presented above),
and in joint work they found a \repair\ similar to \goedel's,\footnote
{\Cfnlb\
\index{Andrews, Peter B.}%
\citep{andrews-herbrand-award},
\citep{false-lemmas-in-herbrand},
\citep{supplement-to-herbrand}.}
which we will call
\index{correction (of Herbrand's False Lemma)!G\"odel's and Dreben's!{\em definition}}%
\index{Andrews, Peter B.|)}%
{\em\goedel's and \dreben's \repair}
in \nlbsectref{section modus ponens elimination}.
\\\indent
Roughly speaking, the \repaired\ lemma says that --- to keep \propertyC\ of
\nlbmath A invariant under (a single application of) a
\index{Rule!of Passage}%
Rule of Passage --- we may have to step from
order \nlbmath n \hskip.2em to order
\nlbmath{n\,\inpit{N^r\tight+1}^n}. \
Here \math{r} \nolinebreak is the
number of \mbox{\math\gamma-quantifiers} in whose scope the
\index{Rule!of Passage}%
Rule of Passage is applied and \math{N} is the cardinality of
\nlbmath{\termsofdepthnovars n} for the function symbols in the outer
\index{form!Skolemized@(outer) Skolemized}%
\skolemizedform\ of \nlbmath A.\footnote
{\Cfnlb\ \citep[\p\,393]{supplement-to-herbrand}.}
This \repair\ is not particularly elegant
because --- iterated several times until a
\index{form!prenex}%
prenex form is reached \nolinebreak--- it can lead to pretty high orders. \
Thus, although this
\index{correction (of Herbrand's False Lemma)!G\"odel's and Dreben's|)}%
\repair\ serves
well for soundness and finitism,
it results in a
complexity that is unacceptable in practice (\eg\ in automated reasoning)
already for small non-prenex formulas.
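The size of this blow-up can be made concrete with a small computation (a sketch; the sample values for \math n, \math N, and \math r are illustrative only, not taken from a particular formula):

```python
# Goedel's and Dreben's repair: to keep Property C invariant under a
# single application of a Rule of Passage, the order may have to grow
# from n to n * (N**r + 1)**n, where r is the number of gamma-quantifiers
# in whose scope the rule is applied and N is the cardinality of the
# champ fini of order n.  The sample values below are illustrative only.

def repaired_order(n: int, N: int, r: int) -> int:
    """Order bound after one Rule-of-Passage application."""
    return n * (N**r + 1) ** n

# Already for n = 4, N = 156, r = 2 the new order exceeds 10**18:
print(repaired_order(4, 156, 2) > 10**18)   # True
```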
\\\indent
The problems with \herbrand's False Lemma in the step from \propertyC\ to
a proof without
\index{modus ponens}%
\index{modus ponens!elimination}%
{\em modus ponens}\/ in his \fundamentaltheorem\
(\cfnlb\ \sectref{section modus ponens elimination}) \hskip.2em
result primarily\footnote
{\label{note discussion inner}Secondarily,
the flaw in
\index{Herbrand!'s False Lemma|)}%
\herbrand's False Lemma is a peculiarity of the
\index{form!Skolemized@(outer) Skolemized}%
{\em outer}\/ \skolemizedform.
For the
\index{form!Skolemized@inner Skolemized}%
{\em inner}\/ \skolemizedform\ (\cfnlb\ \noteref{note inner}),
moving \math\gamma-quantifiers
with the
\index{Rule!of Passage}%
Rules of Passage
cannot change the number of arguments of the \skolem\ functions.
This does not help, however,
because, for the inner \skolemizedform,
moving a \math\delta-quantifier may change the number
of arguments of its \skolem\ function if the
\index{Rule!of Passage}%
Rule of Passage is
applied within the scope of a \mbox{\math\gamma-quantifier} whose bound
variable occurs in \nlbmath B but not in
\nlbmath A.\arXivfootnotemarkref{footnote same meta}
The inner \skolemizedform\ of
\bigmaths{
\exists\boundvari y 1\stopq
\forall\boundvari z 1\stopq
\Qppp{\boundvari y 1}{\boundvari z 1}
\oder
\exists\boundvari y 2\stopq
\forall\boundvari z 2\stopq
\Qppp{\boundvari y 2}{\boundvari z 2}
}{} is
\bigmaths{\exists\boundvari y 1\stopq
\Qppp{\boundvari y 1}{\app{\forallvari z 1}{\boundvari y 1}}
\oder
\exists\boundvari y 2\stopq
\Qppp{\boundvari y 2}{\app{\forallvari z 2}{\boundvari y 2}}
}
but the inner \skolemizedform\ of any
\index{form!prenex}%
prenex form
has a {\em binary}\/ \skolem\ function, unless we use
\henkin\ quantifiers as found in \hintikka's
\firstorder\ logic, \cfnlb\ \citep{hintikkaprinciples}.}
from a detour over
\index{form!prenex}%
prenex form, which
was standard at \herbrand's time.
\index{L\"owenheim!Leopold}%
\loewenheim\ and \skolem\ had always reduced their problems to
\index{form!prenex}%
prenex forms
of various kinds.
The reduction of a proof task to prenex form
has several disadvantages, however, such as
serious negative effects on proof complexity.\footnote
{\Cfnlb\ \eg\
\index{Ferm\"uller, Christian G.}%
\citep{baazdelta}; \ \citep{baazleitschcolllog}.}
\enlargethispage{1ex}%
If \herbrandname\ had known of his flaw,
he would probably have avoided the whole detour over
\index{form!prenex}%
prenex forms,
namely in the form of what we will call
\index{correction (of Herbrand's False Lemma)!Heijenoort's}%
\index{correction (of Herbrand's False Lemma)!Heijenoort's!{\em definition}}%
{\em\heijenoort's \repair},%
\index{Generalized Rule!of gamma-Quantification@of \math\gamma-Quantification}%
\footnote
{The first published hints on {\em\heijenoort's \repair}\/ are
\citep[\litnoteref{77}, \p\,555]{heijenoort-source-book} and
\citep[\litnoteref{60}, \p\,171]{herbrand-logical-writings}. \
On page\,99 of
\index{Heijenoort!Jean van}%
\citep{heijenoort-work-herbrand}, without giving a definition,
\index{Heijenoort!Jean van}%
\heijenoort\ speaks of generalized versions (which \herbrand\ did not have)
of the rules of
``existen\-tial\-ization and universalization\closequotecomma
which we have formalized in our
Generalized Rules
of Quantification in \nlbsectref{section herbrands calculi}.
Having studied \herbrand's \PhDthesis\ \citep{herbrand-PhD} and
\index{Heijenoort!Jean van}%
\makeaciteoftwo{heijenoort-tree-herbrand}{heijenoort-herbrand},
what
\index{Heijenoort!Jean van}%
\heijenoort's generalized rules must look like
can be inferred from the following two facts:
\herbrand\ has a generalized version
of his
\index{Generalized Rule!of Simplification}%
\index{Rule!of Simplification}%
Rule of Simplification in addition to a non-generalized one.
Rewriting with the
Generalized Rule of \math\gamma-Quantification
within the scope
of quantifiers would not permit \herbrand's constructive proof
of his \fundamentaltheorem.%
\par
Note that
\index{correction (of Herbrand's False Lemma)!Heijenoort's}%
\heijenoort's \repair\ avoids
the detour over the Extended
\index{Hilbert!'s epsilon}%
First \nlbmath\varepsilon-Theorem of
the proof of
\index{Bernays, Paul|)}%
\bernays\ mentioned above; \cfnlb\ \noteref{footnote bernays}. \
Moreover,
\index{Heijenoort!Jean van}%
\heijenoort\ gets along without
\herbrand's complicated
\index{form!prenex}%
prenex forms with raised
\math\gamma-multiplicity, which are required for \herbrand's definition of
\index{Property A}%
\propertyA\@. \par
Notice, however, that \goedel's and \dreben's \repair\ is still needed for the
step from a proof with
\index{modus ponens}%
\index{modus ponens!elimination}%
{\em modus ponens}\/ to \propertyC,
\ie\ from Statement\,4 to Statement\,1 in
\theoref{theorem herbrand fundamental two}. \
As the example on top of page\,201
in \citep{herbrand-logical-writings} shows,
an intractable increase of the order of \propertyC\
cannot be avoided in general for
an inference step by {\em modus ponens}.}
which avoids an intractable and
\notop
unintuitive\footnote
{Unintuitive \eg\ in the sense of
\index{Tait, William W.}%
\citep{tait-2006}.}
rise in complexity,
\nolinebreak\cfnlb\ \sectref{section modus ponens elimination}.\pagebreak
\section{The \fundamentaltheorem}%
\label{section herbrand fundamental theorem}%
\noindent The \fundamentaltheorem\ of {\herbrandname}
is not easy to comprehend at first, because of
its technical nature, but it rests upon a basic intuitive idea,
which turned out to be one of the
most profound insights in the history of logic.
We know --- and so did {\herbrand} --- that sentential logic is decidable:
for any given sentential formula,
we could, for instance, use truth-tables to decide its validity. \
But what about a \firstorder\ formula with quantifiers?
There is
\index{L\"owenheim!Leopold}%
\loewenheim's and
\index{Skolem!Thoralf}%
\skolem's observation that
\bigmaths{\forall x\stopq\Pppp{x}}{} in the context of the
existentially quantified variables \nlbmath{y_1,\ldots,y_n} stands for
\Pppp{\forallvari x{}(y_1,\ldots,y_n)} for an arbitrary \skolem\ function
\math{\forallvari x{}(\cdots)},
as outlined in \nlbsectref{section skolemization}. \
This gives us a formula with existential quantifiers only. \
Now, taking the \herbrand\ disjunction,
an existentially quantified formula can be shown to be valid,
if we find a finite set of names denoting elements from the domain to be
substituted for the existentially quantified variables, such that the
resulting sentential formula is truth-functionally valid. \
Thus, we have a model-theoretic argument
for how to reduce a given \firstorder\ formula to
a sentential one. \
The semantical elaboration of this idea is due to
\index{L\"owenheim!Leopold|(}%
\loewenheim\ and
\index{Skolem!Thoralf}%
\skolem, and this was known to {\herbrand}. \
But what about the reducibility of an
{\em actual proof}\/ of a given formula within a \firstorder\ calculus? \
The affirmative answer to this question is the essence of \herbrand's
Fundamental Theorem, and the technical device by which we can
eliminate a switch of quantifiers
(such as \math{\exists y.\,\forall x.\,\Qppp x y}{}
of \nlbsectref{section skolemization}) \hskip .2em
is captured in his \propertyC\@.
Thus, if we want to
cross the river that divides the land of {\em valid}\/
\firstorder\ formulas from the
land of {\em provable}\/ ones, it is the sentential \propertyC\ that
stands firm in the middle of that river and holds the bridge,
whose first half was built by
\index{L\"owenheim!Leopold|)}%
\loewenheim\ and
\index{Skolem!Thoralf}%
\skolem\ and
the other by \herbrand:
\yestop
\begin{center}\includegraphics
[width=144mm,height=66mm]
{LoewenheimSkolemHerbrand-FinalBridge.eps}
\end{center}
\pagebreak
\herbrandsfundamentaltheorem\ shows that if a formula $A$ has
\propertyC\ of some order \nlbmath n\,
--- \nolinebreak\ie, by the
\index{L\"owenheim!--Skolem Theorem}%
\loewenheimskolemtheorem, if $A$ is valid
(\math{\models A}) \nolinebreak---
then we not only {\em know of the existence}\/ of
a proof in any of the standard proof calculi (\math{\tightyields A}), \hskip.3em
but we can actually {\em construct}\/ a
proof for \nlbmath A in \herbrand's calculus
from a given \nlbmath n. \
The proof construction process
is guided by the
\index{champ fini|(}%
{\frenchfont champ fini} of order \nlbmath n,
whose size determines the multiplicities of $\gamma$-quantifiers and whose
elements are the terms substituted as witnesses in the
$\gamma$-Quantification steps.
That proof begins with a sentential tautology
and may use the Rules of\/ \math\gamma- and\/
\math\delta-Quantification, the
\index{Generalized Rule!of Simplification}%
Generalized Rule of Simplification, and the
\index{Rule!of Passage}%
Rules of Passage.
Contrary to what
\index{Herbrand!'s False Lemma}%
\herbrand's False Lemma implies,
a detour over a
\index{form!prenex}%
prenex form of \nlbmath A
dramatically increases
the order of \propertyC\ and thus the length of that proof,
\cfnlb\ \sectref{section lemma}. \
\index{Heijenoort!Jean van}%
\heijenoort, however, observed that this rise of proof length can be overcome
by avoiding the
problematic
\index{Rule!of Passage}%
Rules of Passage
with the help of
\index{inference!deep}%
deep (or Generalized) quantification rules,
which may introduce quantifiers deep within formulas
\index{correction (of Herbrand's False Lemma)!Heijenoort's}%
(``\heijenoort's \repair''). \
We have incorporated these considerations into our statement of
\herbrandsfundamentaltheorem.
\yestop
\begin{theorem}[\herbrandsfundamentaltheorem]%
\label{theorem herbrand fundamental two}%
\index{Herbrand!'s Fundamental Theorem!{\em definition}}%
\\Let\/ \math A be a \firstorder\ formula
in which each bound variable
is bound by a single quantifier and does not occur \freely.
The following five statements are logically equivalent. \
Moreover,
we can construct a witness for any statement from a witness of any other
statement.
\begin{enumerate}
\noitem\item[1.]\math A has \propertyC\ of order \nlbmath n for some positive
natural number \nlbmath n.
\noitem\item[2.]\sloppy We can derive \math A from a \sententialtautology,
starting possibly with applications of
the
\index{Generalized Rule!of delta@of \math\delta-Quantification|(}%
\index{Generalized Rule!of gamma-Quantification@of \math\gamma-Quantification|(}%
Generalized Rules of\/ \math\gamma- and\/ \math\delta-Quantification,
which are then possibly followed by applications of the
\index{Generalized Rule!of gamma-Simplification@of \math\gamma-Simplification|(}%
Generalized Rule of\/ \mbox{\math\gamma-Simplification}.%
\noitem\item[3.]We can derive \math A from a \sententialtautology,
starting possibly with applications of
the Rules of\/ \math\gamma- and\/ \math\delta-Quantification,
which are then possibly followed by applications of
the Generalized Rule of\/ \math\gamma-Simplification and the
\index{Rule!of Passage}%
Rules of Passage.
\noitem\item[4.]We can derive \math A from a \sententialtautology\ with
the
\index{Rule!of delta-Quantification@of \math\delta-Quantification}%
\index{Rule!of gamma-Quantification@of \math\gamma-Quantification}%
Rules of\/ \math\gamma- and\/ \math\delta-Quantification, the
\index{Rule!of Simplification}%
Rule of Simplification, the
\index{Rule!of Passage}%
Rules of Passage, and
\index{modus ponens}%
Modus Ponens.
\noitem\item[5.]We can derive \math A in one of the standard \firstorder\
calculi of
\index{Principia Mathematica!calculi of}%
\PM\ or of the
\index{Hilbert!school!calculi of}%
\hilbert\ school.\footnote
{\Cfnlb\
\index{Principia Mathematica}%
\citep[*10]{PM}, \
\citep[\Vol\,II, Supplement\,I\,D]{grundlagen}, respectively.}\getittotheright
\qed\end{enumerate}\end{theorem}
\yestop\yestop\noindent
The following deserves emphasis: \ \
The derivations in the
above Statements \nolinebreak 2 to \nolinebreak 5 as well as the
number \nlbmath n \hskip.1em
of Statement\,1 \hskip.05em
can be {\em constructed}\/
from each other; \hskip .3em
and this construction is finitistic in the spirit of
\herbrand's basic beliefs in the nature of proof theory and
meta-mathematics. \ \
Statement\,2 \hskip.09em
is due to
\index{correction (of Herbrand's False Lemma)!Heijenoort's}%
\heijenoort's \repair; \hskip.3em
\cfnlb\ \sectrefs{section lemma}{section modus ponens elimination}. \ \ \
Statement\,3 \hskip.09em
and \herbrand's
\index{Property A}%
\propertyA\, are
extensionally equal and intensionally very close to each other. \
\vfill\pagebreak
\section{{\em Modus Ponens}\/ Elimination}\label
{section modus ponens elimination}%
\index{modus ponens|(}%
\index{modus ponens!elimination|(}%
\noindent The following lemma provides the step from Statement\,1 to
Statement\,2 of \theoref{theorem herbrand fundamental two}
with additional details exhibiting
an elimination of {\em modus ponens}\/
similar to the
\index{Cut elimination}%
Cut elimination in
\index{Gentzen!'s {\germanfont Hauptsatz}}%
\gentzensHauptsatz.
We present the lemma in parallel both in the version of
\index{correction (of Herbrand's False Lemma)!G\"odel's and Dreben's}%
\goedel's and \dreben's \repair\
and of
\index{correction (of Herbrand's False Lemma)!Heijenoort's}%
\heijenoort's \repair.\footnote{\Cfnlb\ \sectref{section lemma}.
We actually present \heijenoort's \repair\ in the form of \littheoref 4 in
\index{Heijenoort!Jean van}%
\citep{heijenoort-herbrand} with a slight change,
which becomes necessary for our use of \herbrand\ disjunction
instead of the
\index{Herbrand!expansion}%
\herbrandexpansion, namely the addition of
the underlined part of
Step\,1 in \lemmref{lemma from C to yields a la heijenoort}.} \
To melt these two versions into one,
we underline the parts that are just part of
\index{correction (of Herbrand's False Lemma)!Heijenoort's}%
\heijenoort's \repair\
and overline the parts that result from \goedel's and \dreben's. \
Thus, the lemma stays valid if we omit
either the underlined or else the overlined part of it, but not both.
\yestop\begin{lemma}[{\em Modus Ponens}\/ Elimination\footroom]
\label{lemma from C to yields a la heijenoort}\\\noindent
\index{modus ponens!elimination}%
Let\/ \math A be a \firstorder\ formula
\index{form!prenex}%
\overline{\mbox{in prenex form}}
in which each bound variable
is bound by a single quantifier and does not occur \freely. \
Let\/ \math F be the
\index{form!Skolemized@(outer) Skolemized}%
outer \skolemizedform\ of \nlbmath A. \
Let\/ \Y\ be the set of bound (\math\gamma-) variables of \nlbmath F. \
Let\/ \math E result from \math F by removing all
\mbox{(\math\gamma-) quantifiers}. \
Let\/ \math n be a positive natural number. \
Let the
\index{champ fini|)}%
{\frenchfont champ fini}
\nlbmath{\termsofdepthnovars n}
be formed over the function and free variable symbols occurring in \nlbmath F. \
\\If\/ \math A has \propertyC\ of order \nlbmath n,
then we can construct a derivation of \nlbmath A
of the following form,
in which we read any term starting with a \skolem\ function
as an atomic variable:
\begin{description}\noitem\item[Step\,1: ]\begin{tabular}[t]{@{}l@{}}
We \nolinebreak start with
\underline{a sentential tautology whose disjunctive normal form is a}
\\\underline{re-ordering of a disjunctive normal form of}
the \sententialtautology\nlbmath
{\!\displaystyle\bigvee_{\FUNDEF\sigma\Y{\termsofdepthnovars n}}
\!\!\!\!\!E\sigma}.
\\\end{tabular}
\noitem\item[Step\,2: ]Then we may repeatedly apply the
\index{Generalized Rule!of delta@of \math\delta-Quantification|)}%
\index{Generalized Rule!of gamma-Quantification@of \math\gamma-Quantification|)}%
\underline{Generalized} Rules of\/ \math\gamma- and
\math\delta-Quanti\-fi\-cation.
\noitem\item[Step\,3: ]\begin{tabular}[t]{@{}l@{}}
Then, (after renaming all bound \math\delta-variables)
we may repeatedly apply
\\the
\index
{Generalized Rule!of gamma-Simplification@of \math\gamma-Simplification|)}%
Generalized Rule of\/ \math\gamma-Simplification.
\\[-2ex]\end{tabular}
\\\getittotheright\qed
\end{description}\end{lemma}
\yestop\yestop\noindent
Obviously, there is no use of {\em modus ponens}\/ in such a proof,
and thus it is linear; \ie\ written as a tree,
it \nolinebreak has no branching. \
Moreover, all function and predicate symbols
within this proof occur already in \nlbmath A,
and all formulas in the proof are similar to \nlbmath A
in the sense that they have the so-called {\em``sub''-formula property}.
\yestop\begin{example}[{\em Modus Ponens}\/ Elimination]\label
{example from C to yields}\sloppy\hfill
\index{modus ponens!elimination}%
{\em(continuing \examref{example running herbrand start})}\\
Let us derive the formula \math A of \examref{example running herbrand start}
in \sectref{section herbrands properties}. \
As \math A is not in
\index{form!prenex}%
prenex form we have to apply the version of
\lemmref{lemma from C to yields a la heijenoort} without the overlined
part. \
As explained in \examref{example running herbrand start},
\math A \nolinebreak has
\index{Property C|)}%
\propertyC\ of order \nlbmath n
for \nlbmath{n\tightequal 4}, \hskip .2em
and the result of removing
the quantifiers from the
\index{form!Skolemized@(outer) Skolemized}%
outer \skolemizedform\ of \nlbmath A
is the formula \nlbmath E:
\par\noindent\LINEmaths{\noparenthesesoplist{\inpit{
\boundvari a{}\tightprec\boundvari b{}
\und
\boundvari b{}\tightprec\boundvari c{}
\implies
\boundvari a{}\tightprec\boundvari c{}}
\oplistund
\boundvari x{}\tightprec\app
{\forallvari m{}}{\boundvari x{},\boundvari y{}}\und
\boundvari y{}\tightprec\app
{\forallvari m{}}{\boundvari x{},\boundvari y{}}
\oplistimplies
\forallvari u{}\tightprec\boundvari n{}\und
\forallvari v{}\tightprec\boundvari n{}\und
\forallvari w{}\tightprec\boundvari n{}}}{}{\Large\math{(E)}}
\par\yestop\noindent
Let \math N denote the cardinality of \termsofdepthnovars n. \ \
Let \bigmaths{\termsofdepthnovars n=\{t_1,\ldots,t_N\}}.
\yestop\noindent
For the case of \math{n\tightequal 4}, \
we have \bigmaths{N=3+3^2+\inpit{3+3^2}^2=156}, and, \
for \math{\Y:=\{
\boundvari a{},\boundvari b{},\boundvari c{},
\boundvari n{},\boundvari x{},\boundvari y{}
\}}, \
the \herbrand\ disjunction
\bigmathnlb
{\bigvee_{\FUNDEF\sigma\Y{\termsofdepthnovars 4}}E\sigma}{}
has \ \math{N^{\CARD\Y}} elements, \ie\
more than \nlbmath{10^{13}}. \
Thus,
we had better try a reduction proof here,\footnote
{As \herbrand's proof of his version of
\lemmref{lemma from C to yields a la heijenoort}
\citep[\p 170]{herbrand-logical-writings}
proceeds reductively too,
we explain \herbrand's general proof in parallel to the development of our
special example,
in \notefromtoref
{first note on Herbrand's proof}{last note on Herbrand's proof}. \
\herbrand's proof is interesting by itself and similar
to the later proof
of the
\index{Hilbert!'s epsilon}%
Second \nlbmath\varepsilon-Theorem in
\citep[\Vol\,II,\,\litsectref{3.1}]{grundlagen}\@.}
applying the inference rules backwards,
and be content with arriving at a \sententialtautology\ which is a
sub-disjunction of a re-ordering of a
disjunctive normal form of \bigmathnlb
{\bigvee_{\FUNDEF\sigma\Y{\termsofdepthnovars 4}}E\sigma}. \
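The arithmetic of this size estimate can be rechecked with a short computation (a sketch that only reproduces the numbers stated above, not the champ-fini construction itself):

```python
# Rechecking the counting in the example: for n = 4, the champ fini has
#   N = 3 + 3**2 + (3 + 3**2)**2 = 156
# elements, and the Herbrand disjunction ranges over all substitutions
# of the six gamma-variables in Y, i.e. over N**6 instances.
N = 3 + 3**2 + (3 + 3**2)**2
print(N)                    # 156

disjuncts = N**6
print(disjuncts > 10**13)   # True: more than 10^13 elements
```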
\yestop\halftop\noindent
\index{Generalized Rule!of gamma-Quantification@of \math\gamma-Quantification}%
As the backwards application of the Generalized
Rule of \math\gamma-Quantification
admits only a single (\ie\ linear) application of each
\math\gamma-quantifier (or each ``lemma''), \hskip .2em
and as we will
have to apply both the first and the second line of \math A twice,
we first increase the
\math\gamma-multiplicity of the top \math\gamma-quantifiers
of these two lines to two. \
This is achieved by
applying the
\index{Generalized Rule!of gamma-Simplification@of \math\gamma-Simplification}%
Generalized Rule of \math\gamma-Simplification twice
backwards to \nlbmath A,
resulting in:\footnote
{\label{note gamma}\label{first note on Herbrand's proof}To arrive at the full
\index{Herbrand!disjunction|)}%
\herbrand\ disjunction \bigmathnlb
{\bigvee_{\FUNDEF\sigma\Y{\termsofdepthnovars n}}E\sigma},
\herbrand's proof requires us to apply the
\index{Rule!of Simplification}%
Rule of Simplification
top-down at each occurrence of a \math\gamma-quantifier
\math N \nolinebreak times,
and the idea is to substitute \nlbmath{t_i}
for the \mth i occurrence of this \math\gamma-quantifier on each branch.}
\par\noindent\LINEmaths{\noparenthesesoplist{
\forall\boundvari a{},\boundvari b{},\boundvari c{}\stopq\inpit{
\boundvari a{}\tightprec\boundvari b{}
\und
\boundvari b{}\tightprec\boundvari c{}
\implies
\boundvari a{}\tightprec\boundvari c{}}
\oplistund
\forall\boundvari a{},\boundvari b{},\boundvari c{}\stopq\inpit{
\boundvari a{}\tightprec\boundvari b{}
\und
\boundvari b{}\tightprec\boundvari c{}
\implies
\boundvari a{}\tightprec\boundvari c{}}
\oplistund
\forall\boundvari x{},\boundvari y{}\stopq
\exists\boundvari m{}\stopq\inparenthesesoplist{
\boundvari x{}\tightprec\boundvari m{}
\oplistund
\boundvari y{}\tightprec\boundvari m{}}
\oplistund
\forall\boundvari x{},\boundvari y{}\stopq
\exists\boundvari m{}\stopq\inparenthesesoplist{
\boundvari x{}\tightprec\boundvari m{}
\oplistund
\boundvari y{}\tightprec\boundvari m{}}
\oplistimplies \forall\boundvari u{},\boundvari v{},\boundvari w{}\stopq
\exists\boundvari n{}\stopq\inpit{
\boundvari u{}\tightprec\boundvari n{}\und
\boundvari v{}\tightprec\boundvari n{}\und
\boundvari w{}\tightprec\boundvari n{}}}}{}
\yestop\halftop\noindent
Renaming the bound \math\delta-variables to
some terms from \nlbmath{\termsofdepthnovars n},
and applying the
\index{Generalized Rule!of delta@of \math\delta-Quantification}%
Generalized Rule of \nlbmath\delta-Quantification three times backwards
in the last line,
we get:\nopagebreak
\par\noindent\LINEmaths{\noparenthesesoplist{
\forall\boundvari a{},\boundvari b{},\boundvari c{}\stopq\inpit{
\boundvari a{}\tightprec\boundvari b{}
\und
\boundvari b{}\tightprec\boundvari c{}
\implies
\boundvari a{}\tightprec\boundvari c{}}
\oplistund
\forall\boundvari a{},\boundvari b{},\boundvari c{}\stopq\inpit{
\boundvari a{}\tightprec\boundvari b{}
\und
\boundvari b{}\tightprec\boundvari c{}
\implies
\boundvari a{}\tightprec\boundvari c{}}
\oplistund
\forall\boundvari x{},\boundvari y{}\stopq
\exists\boxeins\stopq\inparenthesesoplist{\boundvari x{}\tightprec\boxeins
\oplistund
\boundvari y{}\tightprec\boxeins}
\oplistund
\forall\boundvari x{},\boundvari y{}\stopq
\exists\boxzwei\stopq\inparenthesesoplist{\boundvari x{}\tightprec\boxzwei
\oplistund
\boundvari y{}\tightprec\boxzwei}
\oplistimplies
\exists\boundvari n{}\stopq\inpit{
\boxu\tightprec\boundvari n{}\und
\boxv\tightprec\boundvari n{}\und
\boxw\tightprec\boundvari n{}}}}{}
\begin{sloppypar}\noindent
The boxes indicate that the enclosed term actually denotes
an atomic variable whose structure cannot be changed by a substitution. \
By this nice trick of taking outermost \skolem\ terms
as names for variables, \herbrand\ avoids the hard task of giving
semantics to \skolem\ functions,
\cfnlb\ \sectref{section herbrand loewenheim skolem}.\footnote
{According to \herbrand's proof we would have to replace
any bound \math\delta-variable \nlbmath{\boundvari x{}}
with its \skolem\ term \nlbmath{\forallvari x{}(t_{i_0},\ldots,t_{i_k})},
provided that
\math{i_0,\ldots,i_k} denotes the branch on which this
\mbox{\math\delta-quantifier}
occurs \wrt\ the previous step of raising
each \math\gamma-multiplicity to \nlbmath N,
described in \noteref{note gamma}.}
\end{sloppypar}
\vfill\pagebreak
\begin{sloppypar}\yestop\noindent We apply
the
\index{Generalized Rule!of gamma-Quantification@of \math\gamma-Quantification}%
Generalized Rule of \nlbmath\gamma-Quantification four times backwards,
resulting in application of \\\linemath{\{
\boundvari x{}\mapsto\boxv
\comma
\boundvari y{}\mapsto\boxw
\}}{} to the third line and
\\\LINEmaths{\{
\boundvari x{}\mapsto\boxu
\comma
\boundvari y{}\mapsto\boxeins
\}\footroom\headroom}{}\\\headroom
to the fourth \nolinebreak line. \
This yields:\footroom\headroom%
\index{Rule!of gamma-Quantification@of \math\gamma-Quantification}%
\footnote
{\label{last note on Herbrand's proof}Note that the terms
to be substituted for a bound \math\gamma-variable,
say \nlbmath{\boundvari y{}},
in such a reduction
step can always be read out from any bound \math\delta-variable
in its scope: \
If there are \math j \mbox{\math\gamma-quantifiers}
between the quantifier for \nlbmath{\boundvari y{}} inclusively and
the quantifier for the \math\delta-variable,
the value for \nlbmath{\boundvari y{}} is the
\nonumbermth j argument of the bound \mbox{\math\delta-variable},
counting from the last argument backwards.
\\
For instance, in the previous reduction step,
the variable \nlbmath{\boundvari y{}} in the third \nolinebreak
line was replaced with \boxw, the last argument of
the bound \mbox{\math\delta-variable} \nlbmath\boxeins,
being first in the scope of \nlbmath{\boundvari y{}}.
\\
This property is obvious from \herbrand's proof
but hard to express as a
property of proof normalization.
\\
Moreover, this property is useful in \herbrand's proof for showing
that the side condition of the
Rule of \mbox{\math\gamma-Quantification}
\bigmaths{{B\{x\mapsto t\}}\over{\exists x.\,B}}{}
is always satisfied, even for a certain
\index{form!prenex}%
prenex form. \
Indeed, \mbox{\math\gamma-variables} never occur in the replacement \nlbmath t
and the
\index{height of a term}%
height of \nlbmath t is strictly smaller than the
height of all bound \math\delta-variables in the scope \nlbmath B, \hskip.2em
so that no free variable in \nlbmath t can be bound by quantifiers in
\nlbmath{B}; \
\cfnlb\ \sectref{section herbrands calculi}.}\end{sloppypar}
\par\noindent\LINEmaths{\noparenthesesoplist{
\forall\boundvari a{},\boundvari b{},\boundvari c{}\stopq\inpit{
\boundvari a{}\tightprec\boundvari b{}
\und
\boundvari b{}\tightprec\boundvari c{}
\implies
\boundvari a{}\tightprec\boundvari c{}}
\oplistund
\forall\boundvari a{},\boundvari b{},\boundvari c{}\stopq\inpit{
\boundvari a{}\tightprec\boundvari b{}
\und
\boundvari b{}\tightprec\boundvari c{}
\implies
\boundvari a{}\tightprec\boundvari c{}}
\oplistund
\exists\boxeins\stopq\inpit{\boxv\tightprec\boxeins\und
\boxw\tightprec\boxeins}
\oplistund
\exists\boxzwei\stopq\inparenthesesoplist{\boxu\tightprec\boxzwei
\oplistund
\boxeins\tightprec\boxzwei}
\oplistimplies
\exists\boundvari n{}\stopq\inpit{
\boxu\tightprec\boundvari n{}\und
\boxv\tightprec\boundvari n{}\und
\boxw\tightprec\boundvari n{}}
}}{}\par\yestop\noindent
Applying (always backwards)
the
\index{Generalized Rule!of delta@of \math\delta-Quantification}%
Generalized Rule of \nlbmath\delta-Quantification twice and the
\index{Generalized Rule!of gamma-Quantification@of \math\gamma-Quantification}%
Generalized Rule of \mbox{\nlbmath\gamma-Quantification} seven times,
and then dropping
the boxes (as they
are irrelevant for sentential reasoning without substitution) and
rewriting it all into a disjunctive list of conjunctions,
we arrive at the disjunctive set \nlbmath C of
\examref{example running herbrand start},
which is a \sententialtautology. \
Moreover, as a list, \math C is obviously a re-ordered
sublist of a disjunctive normal form of
\bigmathnlb
{\bigvee_{\FUNDEF\sigma\Y{\termsofdepthnovars 4}}E\sigma}.
\getittotheright\qed\end{example}
\yestop\yestop\noindent
In the time before \herbrandsfundamentaltheorem,
a calculus was basically a means to describe a set of theorems
in a semi-decidable, but merely theoretical fashion. \
In \nolinebreak\hilbert's calculi, for instance, the
\index{proof search|(}%
search for concrete proofs is very hard.
Contrary to most other
\index{Hilbert!-style calculi}%
\hilbert-style calculi,
the normal form of proofs given in
Statement\,2 of
\index{Herbrand!'s Fundamental Theorem}%
\theoref{theorem herbrand fundamental two}, however,
supports the search for reductive proofs: \
Methods of
\index{proof search!human-oriented}%
\index{theorem proving!human-oriented}%
human\footnote
{Roughly speaking, we may do a proof by hand, count the lemma applications
and remember their instantiations,
and then try to construct a formal normal form proof accordingly,
just as we have done in \examref{example from C to yields}. \
See
\index{Wirth, Claus-Peter}%
\citep{wirthcardinal} for more on this.}
and
\index{proof search!automatic}%
\index{theorem proving!automated}%
automatic\footnote
{Roughly speaking, we may compute the connections and search for a reductive
proof in the style of say \citep{wallen},
which we then transform into a proof in the normal form
of Statement\,2 of \theoref{theorem herbrand fundamental two}.}
proof search may help us to find simple proofs in this normal form.
This means that, for the first time in known history,
\herbrand's version of \lemmref{lemma from C to yields a la heijenoort}
gives us the means to search successfully for
simple proofs in a formal calculus by hand
(or \nolinebreak actually today, on a computer), \hskip.2em
just as we have done in
\examref{example from C to yields}.\footnote
{Even without the avoidance of the detour over
\index{form!prenex}%
prenex forms
due to
\index{correction (of Herbrand's False Lemma)!Heijenoort's}%
\heijenoort's \repair,
this already holds for the normal form given by
Statement\,3 of \theoref{theorem herbrand fundamental two},
which is extensionally equal to \herbrand's original
\index{Property A}%
\propertyA. \
The next further steps to improve this support for proof search would be
sequents and {\em free} \math\gamma- and \math\delta-variables; \
\cfnlb\ \eg\
\index{Wirth, Claus-Peter}%
\makeaciteoftwo{wirthcardinal}{wirth-jal}.}
\pagebreak
The normal form of proofs --- as given by
\lemmref{lemma from C to yields a la heijenoort} --- eliminates
detours via
\index{modus ponens|)}%
\index{modus ponens!elimination|)}%
{\em modus ponens}\/ in a similar fashion
as
\index{Gentzen!'s {\germanfont Hauptsatz}}%
\gentzensHauptsatz\ eliminates the
\index{Cut elimination}%
Cut. \
It \nolinebreak is remarkable not only because it establishes a
connection between \skolem\ terms and free variables
without using any semantics for \skolem\ functions
(and thereby, without using the
\index{axiom!of choice|(}%
\axiomofchoice). \
It \nolinebreak also seems to be the first time
that a normal form of proofs is shown to exist
in which {\em different phases}\/ are considered. \
Even though
\index{Gentzen!'s {\germanfont versch\"arfter Hauptsatz}}%
\gentzensverschaerfterHauptsatz\ followed \herbrand\ in
this respect some years later,
the concrete form of \herbrand's normal form of proofs
remains important to this day, especially in the form of
\index{correction (of Herbrand's False Lemma)!Heijenoort's}%
\heijenoort's \repair,
\cfnlb\ \sectref{section lemma}. \ \
The manner in which modern sequent, tableau, and matrix calculi
organize proof search\footnote
{\Cfnlb\ \eg\ \citep{wallen},
\index{Wirth, Claus-Peter}%
\citep{wirthcardinal},
\index{Autexier, Serge}%
\citep{sergecore}.}
does not follow the
\index{Hilbert!school}%
\hilbert\ school and their
\index{Hilbert!'s epsilon}%
\math\varepsilon-elimination theorems, but
\index{Gentzen!'s calculi}%
\gentzen's and \herbrand's calculi. \
Moreover, regarding their \skolemization, their
\index{inference!deep}%
deep inference,\footnote
{Note that although the deep inference rules of
{\em Generalized}\/ Quantification are an
extension of \herbrand's calculi by
\index{Heijenoort!Jean van}%
\heijenoort,
the deep inference rules of
\index{Rule!of Passage}%
Passage and of
\index{Generalized Rule!of Simplification}%
Generalized Simplification are
\herbrand's original contributions.}
and their focus on \mbox{\math\gamma-quantifiers} and their multiplicity,
these modern
\index{proof search|)}%
proof-search calculi
are even more in \herbrand's tradition than in
\index{Gentzen!'s calculi}%
\gentzen's.
\section
[The \loewenheimskolemtheorem\ and
\\\herbrand's Finitistic Notion of Falsehood in an Infinite Domain]
{\sloppy The \loewenheimskolemtheorem\ and
\herbrand's\protect\linebreak
Finitistic Notion of Falsehood in an Infinite Domain}%
\label{section herbrand loewenheim skolem}%
\index{L\"owenheim!Leopold|(}%
\index{L\"owenheim!--Skolem Theorem|(}%
\index{finitism|(}%
\index{Property C|(}%
\noindent
Let \math A be a \firstorder\ formula whose terms
have a
\index{height of a term}%
height not greater than \nlbmath m. \
\herbrand\ defines that
{\em\math A \nolinebreak is false in an infinite domain}\/
\udiff\
\math A \nolinebreak does not have \propertyC\ of order \nlbmath p
for any positive natural number \nlbmath p.
If, for a given positive natural number \nlbmath p, \hskip.2em the
formula \nlbmath A does not have \propertyC\ of order \nlbmath p,
\hskip.2em then we can
construct a finite structure over the domain
\nlbmath{\termsofdepthnovars{p+m}}
which falsifies \nlbmath{A^{\termsofdepthnovars p}}; \hskip .2em
\cfnlb\ \sectref{section herbrand expansion}. \
Thus, instead of requiring a single infinite structure in which
\nlbmath{A^{\termsofdepthnovars p}} \nolinebreak is false for any
positive natural number \nlbmath p, \hskip.2em
\herbrand's notion of falsehood in an infinite domain
only provides us, \hskip.2em
for each \nlbmath p, \hskip.2em
with a finite structure in which
\nlbmath{A^{\termsofdepthnovars p}} is false. \
\herbrand\ explicitly points out that these structures
do not have to be extensions of each other. \
From a given falsifying structure for some \nlbmath p one can,
of course, generate falsifying structures
for each \nlbmath{p'\!<p}
by restriction to
\nlbmath{\termsofdepthnovars{p'+m}}. \
\herbrand\ thinks, however, that to require an infinite sequence of
structures to be a sequence of extensions would necessarily include
some form of the \axiomofchoice, which he rejects on principle.
Moreover, he writes that the basic prerequisites of the
\loewenheimskolemtheorem\ are generally misunderstood, but does not
make this point clear.
It seems that \herbrand\ reads \citep{loewenheim-1915} as if it
were a paper on provability instead of validity, \ie\
that \herbrand\ confuses \loewenheim's \nolinebreak`\math\models'
with \herbrand's \nolinebreak`\tightyields\closesinglequotefullstopextraspace
All in all, this is, on the one hand, so peculiar and, on the other hand,
so relevant for \herbrand's finitistic views of logic
and proof theory that some quotations may illuminate the controversy.
\begin{quote}
``\frenchtextonehundredandone''\footnote
{\Cfnlb\ \frenchtextonehundredandonesourcelocationoriginal. \
\frenchtextonehundredandonesourcelocationmodifierforreprint\
\frenchtextonehundredandonesourcelocationreprint
\begin{quote}``\englishtextonehundredandone'' \getittotheright
{\translationnotewithlongcite{\p 165}{herbrand-logical-writings}
{translation by \dreben\ and \heijenoort}}\notop\end{quote}}
\notop\halftop\end{quote}
After defining the dual notion for unsatisfiability instead of
validity, \herbrand\ continues:
\notop\halftop\begin{quote}
``\frenchtextonehundredandtwo''\footnote
{\Cfnlb\
\frenchtextonehundredandtwosourcelocationoriginal. \
\frenchtextonehundredandtwosourcelocationmodifierforreprint\ also
in: \frenchtextonehundredandtwosourcelocationreprint
\begin{quote}``\englishtextonehundredandtwo'' \getittotheright
{\translationnotewithlongcite{\p 166}{herbrand-logical-writings}
{translated by \dreben\ and \heijenoort}}\notop\end{quote}}
\notop\halftop\end{quote}
\noindent
\index{Herbrand!'s Fundamental Theorem}%
\herbrandsfundamentaltheorem\ equates {\em provability}\/
with \propertyC, \hskip .09em
whereas the \loewenheimskolemtheorem\
equates {\em validity}\/ with \propertyC. \
Thus, it is not the case that \herbrand\ somehow corrected \loewenheim. \
Instead,
the \loewenheimskolemtheorem\ and \herbrandsfundamentaltheorem\
are better viewed as a bridge from validity to provability
with two arcs and
\propertyC\ as the eminent pillar in the middle of the river,
offering a magnificent view from the bridge of
properties of \firstorder\ logic; \hskip .2em
as depicted in \nlbsectref{section herbrand fundamental theorem}. \ \
And this was probably also \herbrand's view when he correctly wrote:
\notop\halftop\begin{quote}
``\frenchtextonehundred''\footnote
{\Cfnlb\ \frenchtextonehundredsourcelocationoriginal. \
\frenchtextonehundredsourcelocationmodifierforreprint\ also in:
\frenchtextonehundredsourcelocationreprint.
\begin{quote}``\englishtextonehundred'' \getittotheright
{\translationnotewithlongcite
{\englishtextonehundredsourcelocationpagenumber}
{\englishtextonehundredsourcelocationwithoutpagenumber}
{translated by \dreben\ and \heijenoort}}\notop\notop\notop\end{quote}}
\pagebreak
\end{quote}
\noindent
Moreover, \herbrand\ criticizes
\index{L\"owenheim!Leopold}%
\loewenheim\ for not showing the
\index{consistency!of first-order logic}%
consistency of \firstorder\ logic,
but this, of course, was never
\index{L\"owenheim!Leopold}%
\loewenheim's concern.
The mathematically substantial part of \herbrand's critique of
\index{L\"owenheim!Leopold}%
\loewenheim\
refers to the use of the \axiomofchoice\ in \loewenheim's proof of the
\loewenheimskolemtheorem.
The \loewenheimskolemtheorem\ as found in many textbooks,
such as \cite[\p 141]{enderton},
says that any satisfiable set of \firstorder\ formulas
is satisfiable in a countable structure. \
In \citep{loewenheim-1915}, however, we only find a dual statement,
namely that any invalid \firstorder\ formula has a denumerable counter-model. \
Moreover, what is actually proved, read charitably,\footnote
{As \loewenheim's paper lacks some minor details,
there is an ongoing discussion whether its proof
of the \loewenheimskolemtheorem\ is complete
and what is actually shown. \
Our reading of the proof of the \loewenheimskolemtheorem\
in \citep{loewenheim-1915} \hskip.2em
is a standard one. \
Only in
\index{Skolem!Thoralf}%
\citep[\p\,26\ff]{skolem-1941} and
\citep[\litsectref{6.3.4}]{badesa-loewenheim}
did we find an incompatible reading, namely that --- to construct
\nlbmath{\salgebra'} of \lititemref 2
of \theoref{theorem loewenheim skolem loewenheim} \nolinebreak---
\index{L\"owenheim!Leopold}%
\loewenheim's
proof requires an additional falsifying structure of arbitrary cardinality
to be given in advance. \
The similarity of our presentation with
\index{Herbrand!'s Fundamental Theorem|)}%
\herbrandsfundamentaltheorem, however, is in accordance with
\index{Skolem!Thoralf}%
\citep[\p\,30]{skolem-1941}, but not with \citep[\p\,145]{badesa-loewenheim}. \
The relation of \herbrandsfundamentaltheorem\ to the \loewenheimskolemtheorem\
is further discussed in
\index{Anellis!Irving H.}%
\citep{anellis-loewenheim}. \
\Cfnlb\ also our \noteref{note gap}.}
is the following stronger theorem:
\begin{theorem}[\loewenheimskolemtheorem\ \`a la \citet{loewenheim-1915}]%
\label{theorem loewenheim skolem loewenheim}%
\index{L\"owenheim!--Skolem Theorem!\`a la \citet{loewenheim-1915}}%
\\
Let us assume the \axiomofchoice. \
Let \math A be a \firstorder\ formula.\begin{enumerate}\noitem\item[1.]
If\/ \math A
\nolinebreak has \propertyC\ of order \nlbmath p
for some positive natural number \nlbmath p, \
then \bigmaths{\models A}.\noitem\item[2.]
If\/ \math A does not have \propertyC\ of order \nlbmath p
for any positive natural number \nlbmath p, \
then
we can construct a sequence of partial structures \nlbmath{\salgebra_i}
that converges to a structure\/ \nlbmath{\salgebra'}
with a
denumerable universe such that
\bigmaths{\notmodels_{\salgebra'}\ A}.
\getittotheright\qed\end{enumerate}\end{theorem}
As
\index{Property C|)}%
\propertyC\ of order \nlbmath p can be effectively tested for
\math{p=1,2,3,\ldots}, \
\index{L\"owenheim!Leopold}%
\loewenheim's proof provides us with a
\index{completeness}%
complete proof procedure
which went unnoticed by \skolem\ as well as the
\index{Hilbert!school}%
\hilbert\ school. \
Indeed, this procedure is not mentioned in the discussion of the
\index{completeness}%
completeness problem
for \firstorder\ logic
in \citep[\p\,68]{grundzuege}, \
where the problem is still considered
to be open.\footnote
{Actually,
the completeness problem is slightly ill defined in \citep{grundzuege}. \
\Cf\ \eg\ \citep[\Vol\,I, \PP{44}{48}]{goedelcollected}.}
\enlargethispage{1.7ex}%
Thus, for validity instead of provability,
\index{completeness}%
\index{Goedel@G\"odel!'s C@'s Completeness Theorem}%
\goedel's Completeness Theorem\footnote
{\Cfnlb\ \citep{goedel-completeness}.}
is already contained in
\index{L\"owenheim!Leopold}%
\citep{loewenheim-1915}. \
\goedel\ actually acknowledged this
for the version of the proof of the
\loewenheimskolemtheorem\ in
\index{Skolem!Thoralf}%
\citep{skolem-1923b}.\footnote
{Letter of
\index{Goedel@G\"odel!Kurt}%
\goedel\ to
\index{Heijenoort!Jean van}%
\heijenoort,
dated \Aug\,14, 1964. \
\Cfnlb\ \citep[\p\,510, \litnoteref i]{heijenoort-source-book}, \
\citep[\Vol\,I, \p\,51; \Vol\,V, \PP{315}{317}]{goedelcollected}.}
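Since \propertyC\ of order \nlbmath p amounts to a sentential tautology test on a finite expansion, the resulting complete proof procedure is just an unbounded enumeration over \math{p=1,2,3,\ldots}\hskip.2em. The following is a minimal sketch, with the expansion function left abstract; the toy expansion passed in at the end is our own illustration of a formula valid already at order 1, not \loewenheim's actual construction.

```python
from itertools import product

def is_tautology(formula, variables):
    """Brute-force truth-table check; `formula` maps an assignment
    dict (variable name -> bool) to a Boolean."""
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

def complete_proof_procedure(expansion, max_order=None):
    """Enumerate p = 1, 2, 3, ... and return the first order p whose
    propositional expansion is a sentential tautology, i.e. the first
    order at which Property C holds.  Diverges on invalid input unless
    max_order is set -- validity is only semi-decidable."""
    p = 1
    while max_order is None or p <= max_order:
        propositional_formula, variables = expansion(p)
        if is_tautology(propositional_formula, variables):
            return p
        p += 1
    return None  # no order up to max_order suffices

# Hypothetical toy expansion: a valid formula whose expansion at each
# order p is the propositional tautology  a -> a  (as "not a or a").
order = complete_proof_procedure(
    lambda p: ((lambda v: (not v["a"]) or v["a"]), ["a"]),
    max_order=5)
```

The `max_order` cut-off only exists to keep the sketch terminating; the historical procedure has no such bound and therefore only semi-decides validity.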
Note that the convergence of the structures \nlbmath{\salgebra_i} against
\nlbmath{\salgebra'} in \theoref{theorem loewenheim skolem loewenheim}
is hypothetical in two aspects:
First, as validity is not co-semi-decidable,
in general we can never positively know that
we are actually in Case\,2 of \theoref{theorem loewenheim skolem loewenheim},
\ie\ that a convergence toward \nlbmath{\salgebra'} exists. \
Second, even if we knew about the convergence toward \nlbmath{\salgebra'},
we would
have no general procedure to find out which parts of \nlbmath{\salgebra_i}
will be actually found in \nlbmath{\salgebra'} and which will be removed
by backtracking. \
This makes it hard to get an intuition for \nlbmath{\salgebra'}
and may be the philosophical reason for \herbrand's rejection
of ``falsehood in \nlbmath{\salgebra'}\,''
as a meaningful notion. \
Mathematically, however, we see no
justification for \herbrand's rejection of this notion and will
explain this in the following.
\pagebreak
\herbrand's critical remark concerning the \loewenheimskolemtheorem\
is justified, however, insofar as
\index{L\"owenheim!Leopold}%
\loewenheim\ needs the \axiomofchoice\
in two steps of his proof without mentioning it.
\begin{description}\noitem\item[\nth 1 Step: ]
To show the equivalence of a formula to its
\index{form!Skolemized@(outer) Skolemized}%
outer \skolemizedform,
\index{L\"owenheim!Leopold}%
\loewenheim's proof requires the full \axiomofchoice.
\noitem\item[\nth 2 Step: ]
For constructing the structure \nlbmath{\salgebra'},
\loewenheim\ would need
\index{Koenig's@K\oe nig's Lemma|(}%
\koenigslemma,
which is a weak form of the \axiomofchoice.\footnote
{\koenigslemma\ is Form\,10 in \citep{weakaxiomofchoice}. \
This form is even weaker than the well-known
\index{Principle of Dependent Choice}%
Principle of Dependent Choice,
namely Form\,43 in \citep{weakaxiomofchoice}; \
\cfnlb\ also \citep{axiomofchoice}.}
\noitem\end{description}
\noindent
\enlargethispage{1ex}
Contrary to the general perception\commanospace\footnote
{\label{note gap}This perception is partly based on
the unjustified criticism by
\index{Skolem!Thoralf}%
\skolem, \herbrand, and
\index{Heijenoort!Jean van}%
\heijenoort. \
We are not aware of any negative critique of \citep{loewenheim-1915}
at the time of its publication.
\index{Wang, Hao}%
\citep[\p\,27\ff]{wang-skolem},
proof-read by
\index{Bernays, Paul}%
\bernays\ and
\index{Goedel@G\"odel!Kurt}%
\goedel,
after being most critical of the proof in
\index{Skolem!Thoralf}%
\citep{skolem-1923b},
sees no gaps in \loewenheim's proof,
besides the applications of the \axiomofchoice. \
The same holds for \cite[\litsectref 8]{brady},
sharing expertise in the
\index{Peirce!--Schr\protect\oe der tradition}%
\peirce--\schroeder\
tradition\arXivfootnotemarkref{note peirce schroeder tradition}
with \loewenheim.
\par
Let us have a look at the criticism by
\index{Skolem!Thoralf}%
\skolem, \herbrand, and
\index{Heijenoort!Jean van}%
\heijenoort\ in detail:
\par
The following statement of
\index{Skolem!Thoralf}%
\skolem\ on
\citep{loewenheim-1915}
is confirmed
in \citep[\p\,230]{heijenoort-source-book}:
\notop\halftop\begin{quote}
``{\germanfontfootnote\germantextskolemeins}''
\getittotheright{%
\index{Skolem!Thoralf}%
\citep[\p\,220]{skolem-1923b}}
\notop\end{quote}\begin{quote}
``\englishtextskolemeins'' \getittotheright
{\translationnotewithlongcite{\p\,293}{heijenoort-source-book}
{translation by \bauermengelbergname\index{Bauer-Mengelberg, Stefan}}}
\notop\end{quote}
That detour, however, is not an essential part of the proof,
but serves only for the purpose of illustration.
This is clear from the original paper and also the conclusion
in \citep[\litsectrefs{3.2}{3.3}]{badesa-loewenheim}.
\par
When \herbrand\ criticizes
\index{L\"owenheim!Leopold}%
\loewenheim's proof,
he actually does not criticize the proof as such, but only
\loewenheim's semantical notions; even though \herbrand's verbalization
suggests the opposite, especially in
\citep[\litchapref 2]{herbrand-fundamental-problem},
where \herbrand\ repeats
\index{L\"owenheim!Leopold}%
\loewenheim's reducibility results in finitistic style:
\notop\halftop\begin{quote}
``\frenchtextfourhundred'' \getittotheright
{\citep[\p\,187, \litnoteref 29]{herbrand-ecrits-logiques}}
\notop\end{quote}\begin{quote}
``\englishtextfourhundred'' \getittotheright
{\translationnotewithlongcite{\p\,237, \litnoteref 33}
{herbrand-logical-writings}
{translation by \dreben\ and \heijenoort}}
\notop\end{quote}
\index{Heijenoort!Jean van}%
\heijenoort\ realized
that there is a missing step in
\index{L\"owenheim!Leopold}%
\loewenheim's proof:
\notop\halftop\begin{quote}
``What has to be proved is that, from the assignments thus obtained
for all \nlbmath i,
there can be formed one assignment such that \math{\Pi F}
is true, that is, \bigmaths{\Pi F=0}{} is false. \
This \loewenheim\ does not do.''\getittotheright
{\citep[\p\,231]{heijenoort-source-book}}\notop\halftop\end{quote}
Except for the principle of choice, however, the missing step is trivial
because in \loewenheim's presentation the already fixed part of the assignment
is irrelevant for the extension. \
Indeed, in the ``Note to the Second Printing\closequotecomma
in the preface of the \nth 2 printing,
\index{Heijenoort!Jean van}%
\heijenoort\ partially corrected himself:\notop\halftop\begin{quote}
``I am now inclined to think that \loewenheim\
came closer to
\koenigslemma\ than his paper, on the surface, suggests.
But a rewriting of my introductory note on that point (\p\,231)
will have to wait for another occasion.''\getittotheright
{\citep[\p\,ix]{heijenoort-source-book}}\notop\halftop\end{quote}
This correction is easily overlooked because no note was inserted into the
actual text.\halftop
}
there are no essential gaps in \loewenheim's proof,
with the exception of
the implicit application of the \axiomofchoice,
which was common practice at the time.
Indeed, fifteen years later,
\index{Goedel@G\"odel!Kurt}%
\goedel\ still applies the \axiomofchoice\ tacitly in the proof
of his
\index{completeness}%
\index{Goedel@G\"odel!'s C@'s Completeness Theorem}%
Completeness Theorem.\footnote
{\Cfnlb\ \citep{goedel-completeness}.}
Moreover,
as none of these theorems state any consistency properties,
from the point of view of
\index{finitism}%
\hilbert's finitism
there was no reason to avoid the application of the \axiomofchoice. \
\pagebreak
Indeed, in the proof of his Completeness Theorem, \
\goedel\ \
``is not interested in avoiding an appeal to the
\axiomofchoice.''\footnote{\Cfnlb\
\index{Wang, Hao}%
\citep[\p\,24]{wang-skolem}.} \ \
Thus, again,
as we already noted in \lititemref 2 of
\sectref{section Subject Area and Methodological Background}, regarding
\index{finitism|)}%
finitism, \herbrand\ is more royalist than King \hilbert.
\\\indent
The proof of the \loewenheimskolemtheorem\ in
\index{Skolem!Thoralf}%
\citep{skolem-1920} already avoids the \axiomofchoice\ in the \nth 1 Step
by using
\index{form!Skolem normal}%
{\em\skolemnormalform}\/
instead of
\index{form!Skolemized@(outer) Skolemized!vs.\ Skolem normal form}%
\skolemizedform.\footnote
{\label{note broken}To achieve
\index{form!Skolem normal}%
\skolemnormalform,
\index{Skolem!Thoralf}%
\skolem\ defines predicates
for the subformulas
starting with a \mbox{\math\gamma-quantifier},
and then rewrites the formula
into a
\index{form!prenex}%
prenex form with a
\firstorder\ \math{\gamma^*\delta^*}-prefix.
Indeed, for the proofs of the versions of the \loewenheimskolemtheorem,
the \skolemizedform\ \hskip.1em
(which is used in
\index{L\"owenheim!Leopold}%
\citep{loewenheim-1915},
\index{Skolem!Thoralf}%
\citep{skolem-1928}, and
\citep{herbrand-PhD}) \
is used neither in
\index{Skolem!Thoralf}%
\citep{skolem-1920} nor in
\index{Skolem!Thoralf}%
\citep{skolem-1923b},
which use \skolemnormalform\ instead.
\par
By definition,
\index{form!Skolemized@(outer) Skolemized!vs.\ Skolem normal form}%
{\em\skolemizedform s}\/ have a
\math{\delta^*\gamma^*}-prefix with an implicit
higher-order \nlbmath{\delta^*}, and
\index{raising}%
{\em raising}\/ is the dual of \skolemization\ which
produces a \math{\gamma^*\delta^*}-prefix with a
higher-order \nlbmath{\gamma^*},
\cfnlb\ \citep{miller}. \
The {\em\skolemnormalform}, however, has a \math{\gamma^*\delta^*}-prefix
with {\em\firstorder}\/ \nlbmath{\gamma^*}.}
\\\indent
Moreover, in
\index{Skolem!Thoralf}%
\citep{skolem-1923b},
the choices in the \nth 2 Step of the proof become deterministic,
so that no form of the \axiomofchoice\
(such as \index{Koenig's@K\oe nig's Lemma|)}\koenigslemma) \hskip.2em
is needed anymore. \
This is achieved
by taking the universe of the structure \nlbmath{\salgebra'}
to be the natural numbers and by using
the \wellordering\ of the natural numbers.
\notop\halftop
\begin{theorem}[\loewenheimskolemtheorem\ \`a la \citep{skolem-1923b}]%
\label{loewenheim skolem theorem version 1923}%
\index{L\"owenheim!Leopold|)}%
\index{L\"owenheim!--Skolem Theorem|)}%
\index{L\"owenheim!--Skolem Theorem!\`a la \citep{skolem-1923b}}%
\index{Skolem!Thoralf}%
\\
Let \math\Gamma\ be a (finite or infinite) denumerable
set of \firstorder\ formulas.
Assume \nolinebreak\bigmaths{\notmodels\,\,\Gamma}.\\
Without assuming any form of the \axiomofchoice\
we can construct a sequence of partial structures \nlbmath{\salgebra_i}
that converges to a structure\/ \nlbmath{\salgebra'}
with a universe which is a subset of the natural numbers such that
\bigmaths{\notmodels_{\salgebra'}\ \Gamma}.\getittotheright\qed
\end{theorem}
\noindent
Note that \herbrand\ does not need any form of the \axiomofchoice\
for the following reasons: \
In \nolinebreak the \nth 1 Step,
\herbrand\ does not use the semantics of \skolemizedform s
at all,
because \herbrand's \skolem\ terms are just names for free variables,
\cfnlb\ \sectref{section herbrand fundamental theorem}. \ \
In \nolinebreak the \nth 2 \nolinebreak Step, \herbrand's peculiar notion of
``falsehood in an infinite domain'' makes any choice superfluous. \
This is a device
which --- contrary to what \herbrand\ wrote \nolinebreak---
is not really necessary to avoid the
\axiomofchoice, as the above \theoref{loewenheim skolem theorem version 1923}
shows.
\\\indent
In this way, \herbrand\ came close to proving
the
\index{completeness}%
completeness of
\index{Russell!Bertrand}%
\russell's and
\index{Hilbert!David}%
\hilbert's calculi
for \firstorder\ logic,
but he did not trust the left arc of the bridge depicted in
\sectref{section herbrand fundamental theorem}. \
And thus \goedel\ proved it first when he
submitted his thesis in 1929, in the same year as \herbrand, and the
theorem is now called
\index{completeness}%
\index{Goedel@G\"odel!'s C@'s Completeness Theorem}%
{\em \goedel's Completeness Theorem}\/ in all textbooks on logic.\footnote
{\Cfnlb\ \citep{goedel-completeness}.}
\\\indent
It is also interesting to note that \herbrand\ does
not know how to construct a counter-model without using
the
\index{axiom!of choice|)}%
\axiomofchoice,
as explicitly described in
\index{Skolem!Thoralf}%
\citep{skolem-1923b}. \ \
This \nolinebreak is ---~on the one hand~--- a strong indication
that \herbrand\ was not aware of
\index{Skolem!Thoralf}%
\citep{skolem-1923b}.\footnote
{\Cfnlb\ \p 12 of
\index{Goldfarb, Warren}%
\goldfarb's introduction in
\citep{herbrand-logical-writings}.} On the other hand, \herbrand\
mentions
\index{Skolem!'s Paradox}%
\skolemsparadox\ several times and
\index{Skolem!Thoralf}%
\makeaciteoftwo{skolem-1923b}{skolem-1929}
seem to be the only written sources for it at \herbrand's time.\footnote
{%
\index{Skolem!'s Paradox}%
\skolemsparadox\ is also briefly mentioned in
\index{Neumann, John von}%
\citep{neumann-1925}, \
not as a paradox, however,
but as unfavorable conclusions on set theory drawn by
\index{Skolem!Thoralf}%
\skolem, who
wrote about a ``peculiar and apparently paradoxical state of
affairs\closequotecomma \cfnlb\ \citep[\p\,295]{heijenoort-source-book}.}
\\\indent
As \herbrand's
\index{Property C}%
\propertyC\ and its use of the outer
\index{form!Skolemized@(outer) Skolemized}%
\skolemizedform\ are most similar to the treatment in
\index{Skolem!Thoralf}%
\citep{skolem-1928}, \hskip .2em
it \nolinebreak seems likely that \herbrand\
had read \citep{skolem-1928}.\footnote
{Without giving any justification,
\index{Heijenoort!Jean van}%
\heijenoort\ assumes,
however, that
\notop\halftop\begin{quote}``He \nolinebreak was
not acquainted either, certainly, with
\index{Skolem!Thoralf}%
\citep{skolem-1928}.'' \getittotheright
{%
\index{Heijenoort!Jean van}%
\citep[\p 112]{heijenoort-work-herbrand}}\vspace*{-4ex}\end{quote}}%
\pagebreak
\section
[\herbrand's First Proof of the Consistency of Arithmetic]
{\herbrand's First Proof of the\\Consistency of Arithmetic}%
\label{section 1 Proof}%
\index{consistency!proof of|(}%
\index{consistency!of arithmetic|(}%
\notop\halftop\noindent\enlargethispage{1ex}%
Consider a signature of arithmetic that consists only of zero
\nolinebreak`\zeropp\closesinglequotecomma
the successor function
\nolinebreak`\ssymbol\closesinglequotecomma
and the equality predicate
\nolinebreak`\math=\closesinglequotefullstopextraspace
Besides the
\index{axioms!of equality}%
axioms of equality (equivalence and substitutability), \hskip .3em
\herbrand\ considers several subsets of the following axioms:\footnote
{The labels are ours, not \herbrand's. \
\herbrand\ writes `\math{x\tight+1}' instead of
`\spp x\closesinglequotefullstopextraspace
To save the
\index{axiom!of substitutability}%
axiom of substitutability,
\herbrand\ actually uses the biconditional in \nlbmath{(\nat_3)}.}
\par\halftop\noindent\math{\begin{array}{@{}l@{\ \ \ }r@{\ }l@{}}
\inpit{\ident{S}}
&
\app P \zeropp\nottight\und
\forall y\stopq\inparentheses{\app P y\implies\app P{\spp y}}
{\nottight{\nottight{\nottight\implies}}}
\forall x\stopq\app P x
\\\majorheadroom
(\nat_1)
&
{ \boundvari x{}\tightequal\zeropp
\nottight{\nottight\oder}
\exists
\boundvari y{}\stopq
\boundvari x{}\tightequal\spp{\boundvari y{}}}
\\\majorheadroom
(\nat_2)
&
\spp x\tightnotequal\zeropp
\\\majorheadroom
(\nat_3)
&
{\spp x\tightequal\spp y\nottight\implies x\tightequal y}
\\\majorheadroom
(\nat_{4+i})
&\sppiterated{i+1}x\tightnotequal x
\\\end{array}}
\par\halftop\noindent
Axiom \nlbmath{\inpit{\nat_1}} \nolinebreak
together with the \wellfoundedness\ of the
successor relation `\nlbmath\ssymbol'
specifies the natural numbers up to isomorphism\fullstopnospace\footnote
{\Cfnlb\
\index{Wirth, Claus-Peter}%
\citep[\litsectref{1.1.3}]{wirthcardinal}. \
This idea goes back to \citep{pieri}.}
So do the
\index{axioms!Dedekind--Peano}%
\peanoaxiom s \math{\inpit{\nat_2}} and \nlbmath{\inpit{\nat_3}}
together with the \peanoaxiom\ of Structural
Induction \nlbmath{\inpit{\ident S}},
provided that the meta variable \nlbmath P is seen as a universally
quantified \secondorder\ variable with the
standard interpretation\fullstopnospace\footnote
{\Cf\ \eg\
\index{Andrews, Peter B.}%
\citep{andrews}.}
\\\indent
Of course, \herbrand, the finitist,
does not even mention these \secondorder\ properties. \
His discussion is restricted to decidable \firstorder\ axiom sets,
some of which are infinite due to the inclusion of
the infinite sequence \inpit{\nat_4},
\inpit{\nat_5},
\inpit{\nat_6},
\nolinebreak\ldots\ \ \
\\\indent
As \herbrand's axiom sets are first order, \hskip .2em
they cannot specify the natural numbers up to
isomorphism\fullstopnospace\footnote
{For instance, due to the
\index{L\"owenheim!--Skolem--Tarski Theorem!Upward}%
\upwardloewenheimskolemtarskitheorem. \
\Cfnlb\ \eg\ \citep{enderton}.} \
But as the model of arithmetic is infinite,
\herbrand, the finitist, cannot accept it as part of his proof theory. \
Actually, he never even mentions the model of arithmetic.
\\\indent
\herbrand\ shows that
(for the poor signature of \zeropp, \ssymbol, and \nlbmath=) \
the \firstorder\ theory
of the axioms
\nlbmath{\inpit{\nat_i}_{i\geq 1}}
\ (\ie\ \inpit{\nat_i} for any positive natural number \nlbmath i) \
is consistent,
\index{completeness}%
complete, and decidable. \
His constructive proof is elegant, provides a lucid operative
understanding of basic arithmetic, and has been included
inter alia into \litsectref{3.1} of \citep{enderton},
one of the most widely used textbooks on logic. \
\herbrand's proof has two constructive steps:
\begin{description}
\noitem\item[\nth 1 Step: ]
He shows how to rewrite any formula into
an equivalent quantifier-free formula without additional free variables. \
He proceeds by a special form of quantifier elimination, a technique in the
\index{Peirce!--Schr\"oder tradition}%
\peirce--\schroeder\
tradition\arXivfootnotemarkref{note peirce schroeder tradition}
with its first explicit occurrence
in
\index{Skolem!Thoralf}%
\citep{skolem-1919}.\footnote
{More precisely, \cfnlb\
\index{Skolem!Thoralf}%
\citep[\litsectref 4]{skolem-1919}. \
For more information on the subject of quantifier elimination in this context,
\cfnlb\
\index{Anellis!Irving H.}%
\citep[\p 120\f, \litnoteref{33}]{anellis-heijenoort-long},
\index{Wang, Hao}%
\citep[\p\,33]{wang-skolem}.}
\noitem\item[\nth 2 Step: ]
He shows that the quantifier-free fragment is consistent and decidable
and does not depend on the axiom \nlbmath{\inpit{\nat_1}}. \ \
This is achieved with a procedure which rewrites a quantifier-free formula
into an equivalent disjunctive normal form without additional free variables. \
For any quantifier-free formula \nlbmath B,
this normal-form procedure satisfies:\smallfootroom
\\\noindent\LINEnomath{\bigmaths
{\inpit{\nat_i}_{i\geq 2}\yields B}{}
\ \ \uiff\ \ \ the normal form of \bigmaths{\neg B}{} is
\bigmaths{\zeropp\tightnotequal\zeropp}.}\pagebreak\end{description}
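The decidability claimed in the \nth 2 step is easiest to see in the ground case: every closed term of the poor signature denotes some \math{\sppiterated{k}\zeropp}, and a ground atom is true exactly if the two successor counts agree. The following is a minimal sketch of this ground case only, not of \herbrand's normal-form procedure; the tuple encoding of formulas is our own.

```python
# A minimal sketch of why the *ground* quantifier-free fragment of the
# theory of zero, successor, and equality is decidable. This is NOT
# Herbrand's normal-form procedure; the tuple encoding is our own.
# A closed term S(...S(0)...) is represented by its number of successor
# applications, so the atom S^a(0) = S^b(0) becomes ('eq', a, b)
# and is true iff a == b.

def eval_ground(phi):
    """Evaluate a ground formula given as a nested tuple:
    ('eq', a, b) | ('not', f) | ('and', f, g) | ('or', f, g)."""
    tag = phi[0]
    if tag == 'eq':
        return phi[1] == phi[2]
    if tag == 'not':
        return not eval_ground(phi[1])
    if tag == 'and':
        return eval_ground(phi[1]) and eval_ground(phi[2])
    if tag == 'or':
        return eval_ground(phi[1]) or eval_ground(phi[2])
    raise ValueError('unknown connective: %r' % (tag,))
```

\herbrand's actual procedure does more: it also handles free variables, by rewriting a quantifier-free formula into a normal form whose truth can be read off.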
\noindent
This elegant
work of \herbrand\ is hardly discussed in the secondary literature,
probably because --- as a decidability result --- it became
obsolete before it was published,
due to the analogous result for this theory extended with addition,
the so-called
\index{Presburger!Arithmetic}%
{\em\presburgerarithmetic}, as it is known today. \
\index{Presburger!Moj\.zes{}z}%
\presburgername\ \presburgerlifetime\footnote
{\presburger's true name is \presburgertruename. \
He was a student of
\index{Tarski, Alfred}%
\tarskiname,
\lukasiewiczname\ \lukasiewiczlifetime,
\ajdukiewiczname\ \ajdukiewiczlifetime,
and
\kuratowskiname\ \kuratowskilifetime\ in \Warszawa. \
He was awarded a master (not a \PhD) in mathematics on \Oct\,7, 1930. \
As he was of Jewish origin, it is likely that he died
in the Holocaust (Shoah), maybe in 1943. \
\Cfnlb\ \citep{presburger-life}.}
gave his talk on the decidability of his theory with similar techniques
on \Sep\,24, 1929,\footnote
{Moreover, note that \citep{presburger}
did not appear in print before\,1930. \
Some citations date \citep{presburger} at 1927, 1928, and 1929. \
There is evidence, however, that these earlier datings are wrong,
\cfnlb\ \citep{presburger-remarks-translation}, \citep{presburger-life}.}
five months {\em after}\/
\herbrand\ finished his \PhDthesis. \
As \index{Tarski, Alfred}%
\tarski's
work on decision methods developed in his 1927/8 lectures in \Warszawa\
also did not appear in print until after World War\,II,\footnote
{\Cfnlb\ \citep{presburger-remarks-translation}, \citep{tarski-decision}.}
we \nolinebreak
have to consider this contribution of \herbrand\ as completely original. \
Indeed:
\notop\halftop\begin{quote}
``{\germanfont\germantextfifty}''\footnote
{\Cfnlb\ \germantextfiftyquotation.
\begin{quote}``\englishtextfifty''\notop\notop\notop\end{quote}}
\notop\halftop\end{quote}
\begin{sloppypar}
\noindent In addition, \herbrand\ gives a constructive proof that
the \firstorder\ theories given by the following two axiom sets are identical:
\begin{itemize}\notop\item
\math{\inpit{\nat_i}_{i\geq 1}}
\noitem\item \inpit{\nat_2}, \nlbmath{\inpit{\nat_3}},
and the \firstorder\ instances of \nlbmath{\inpit{\ident S}},
provided that \nlbmath{\inpit{\ident S}} is
taken as a \firstorder\ axiom scheme instead of a \secondorder\ axiom.
\noitem\end{itemize}\end{sloppypar}
\section
[\herbrand's Second Proof of the Consistency of Arithmetic]
{\herbrand's Second Proof of the\\Consistency of Arithmetic}\label
{section 2 Proof}
\notop\halftop\noindent
\herbrand's contributions to logic discussed so far
are all published in \herbrand's thesis. \
In this section,
we consider his journal publication \citep{herbrand-consistency-of-arithmetic}
as well as some material from \litchapref 4 of his thesis.
First, the signature is now \signatureenlarged\ to include the
\index{function!recursive|(}%
recursive functions, \cfnlb\ \sectref{section
recursive functions}. \ Second, the axiom scheme
\nlbmath{\inpit{\ident S}} is restricted to just those instances which
result from replacing the meta variable \nlbmath P with {\em
quantifier-free}\/ \firstorder\ formulas. \
For this setting, \herbrand\ again gives a constructive proof of consistency. \
This proof consists of the following two steps:
\begin{description}
\pagebreak
\noitem\item[\nth 1 Step: ]\sloppy
\herbrand\ defines recursive functions \nlbmath{\fsymbol_P}
such that \fppeinsindex P x is the least natural number \nlbmath {y \leq x}
such that \math{\neg\app P y} holds, provided that such a \nlbmath y exists,
and \zeropp\ otherwise. \
The functions \nlbmath{\fsymbol_P} are primitive recursive
unless the terms substituted for \math P contain a
non-primitive recursive function.
These functions imply the instances of \nlbmath{\inpit{\ident S}},
rendering them redundant.
\ %
This is similar to the effect of
\index{Hilbert!'s epsilon}%
\hilbert's \nth 2 \math\varepsilon-formula:\footnote
{\Cfnlb\
\citep[\Vol\,II, \litsectref{2.3}, \p\,82\ff;
\Vol\,II, Supplement\,V\,B, \p\,535\ff]{grundlagen}.}%
\footroom\\\noindent\footroom\LINEmaths{
\varepsilon x.\neg\app P x\nottight{=}\spp y
\nottight{\nottight{\nottight\implies}}
\app P y}.\\\noindent
\herbrand's procedure, however, is
much simpler but only applicable to quan\-ti\-fier-free \nlbmath P.
\item[\nth 2 Step: ] Consider the universal closures of the
\index{axioms!of equality}%
axioms of equality,
the axioms \inpit{\nat_2} and \inpit{\nat_3},
and an arbitrary finite subset of the axioms for recursive functions.
Take the negation of the conjunction of all these formulas.
As all quantified variables of the resulting
formula are \mbox{\math\gamma-variables},
this is already in \skolemizedform. \
Moreover, for any positive natural number \nlbmath n,
it \nolinebreak is easy to show that this formula
does not have
\index{Property C}%
\propertyC\ of order \nlbmath n: \
Indeed, we just have to construct a proper finite substructure of arithmetic
which satisfies all the considered axioms for the elements of
\nlbmath{\termsofdepthnovars n}\@. \hskip.2em
Thus, by
\index{Herbrand!'s Fundamental Theorem!application|(}%
\herbrandsfundamentaltheorem, consistency is immediate.
\end{description}
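The function \nlbmath{\fsymbol_P} of the \nth 1 step is an instance of bounded minimization. A minimal sketch in executable form follows; passing the predicate as a function is our modeling choice, whereas \herbrand\ of course works with a formal quantifier-free formula \nlbmath P.

```python
def f_P(P, x):
    """Least natural number y <= x such that not P(y) holds,
    and 0 if no such y exists. The search is bounded by x,
    which is why f_P is primitive recursive whenever P is."""
    for y in range(x + 1):
        if not P(y):
            return y
    return 0
```

Because the search never exceeds its argument, no unbounded recursion is involved, in accordance with the remark above that \nlbmath{\fsymbol_P} is primitive recursive unless \math P itself contains a non-primitive recursive function.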
\noindent
\index{Herbrand!'s Fundamental Theorem!application|)}%
The \nth 2 step is a prototypical example to demonstrate how
\herbrandsfundamentaltheorem\ helps to answer seemingly
non-finitistic semantical questions on infinite structures
with the help of infinitely many
finite sub-structures. \
Notice that such a semantical argument is finitistically acceptable
if and only if the structures are all finite and
effectively constructible. \
And the latter is always the case for \herbrand's work on logic.
As the theory of all recursive functions is sufficiently expressive,
the question arises
why \herbrand's second consistency proof
does not imply the inconsistency of arithmetic
via
\index{Goedel@G\"odel!'s Second Incompleteness Theorem}%
\goedelssecondincompletenesstheorem. \
\herbrand\ explains that we cannot have the theory of {\em all}\/
total recursive functions because they are not recursively enumerable. \
More precisely,
an evaluation function for an enumerable set of recursive functions
cannot be contained in this set by the standard diagonalization argument.
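The diagonalization argument just mentioned can be made concrete with a small sketch; the enumeration used below is a toy example of our own.

```python
def diagonal(e):
    """Given a total evaluation function e with e(i, n) = f_i(n) for an
    enumeration f_0, f_1, ... of total functions, return a total
    function that differs from every f_i at the argument i.
    Hence e itself cannot belong to the enumerated set."""
    return lambda n: e(n, n) + 1

# Toy enumeration f_i(n) = i * n (an assumption for illustration only):
e = lambda i, n: i * n
d = diagonal(e)
```

Since \math{d(i) = f_i(i) + 1 \neq f_i(i)} for every \math i, the function \math d, and with it the evaluation function, lies outside the enumerated set.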
\index{Hilbert!school}%
\hilbert's school had failed to prove
the consistency of arithmetic, \hskip .3em
except for the special case that for the axiom \nlbmath{\inpit{\ident S}},
the variable \math x does not occur within the scope of any binder in
\nlbmath{\app P x}.%
\index{Goedel@G\"odel!'s Second Incompleteness Theorem}%
\footnote
{\Cfnlb\ \citep[\Vol\,II, \litsectref{2.4}]{grundlagen}. \
More precisely, \hilbert's school had failed to
prove the termination of their
first algorithm for computing a valuation
of the
\index{Hilbert!'s epsilon}%
\mbox{\math\varepsilon-terms} in the
\nth 1 and \nth 2 \math\varepsilon-formulas. \
This de facto failure was less spectacular but
internally more discouraging for
\index{programme!Hilbert's}%
\hilbertsprogram\ than
\goedelssecondincompletenesstheorem\
with its restricted area of application,
\cfnlb\ \citep{goedel}. \
Only after developing a deeper understanding of the notion of a
{\em basic typus}\/ of an
\index{Hilbert!'s epsilon}%
\math\varepsilon-term
({\germanfontfootnote Grund\-typu\es}; introduced in
\index{Neumann, John von}%
\citep{neumann-1927};
called {\em\math\varepsilon-matrix}\/
in \citep{scanlon73:_consis_number_theor_via_theor}) \hskip.2em
and especially of
the independence of its valuation from the valuations of
its subordinate \math\varepsilon-expressions,
was the problem resolved:
The termination problem was cured in
\citep{ackermann-consistency-of-arithmetic}
with the help of a second algorithm of
\index{Hilbert!'s epsilon}%
\math\varepsilon-valuation,
terminating within the ordinal number \nlbmath{\epsilon_0},
just as
\index{Gentzen!'s consistency proof}%
\gentzen's consistency proof
for full arithmetic in
\makeaciteofthree{gentzenfirstconsistent}{gentzenconsistent}{gentzenepsilon}.}
But this fragment of arithmetic is actually equivalent to the
one considered by \herbrand\ here.\footnote
{\Cfnlb\ \citep[\p\,474]{kleene}\@.}
In \nolinebreak this sense,
\herbrand's result on the consistency of arithmetic was
just as strong as the one of the
\index{Hilbert!school}%
\hilbert\ school
by
\index{Hilbert!'s epsilon}%
\math\varepsilon-substitution. \
\herbrand's means, however, are much simpler.%
\index{consistency!proof of|)}%
\index{consistency!of arithmetic|)}%
\footnotemark
\pagebreak
\footnotetext
{\herbrand's results on the consistency of arithmetic have little importance,
however, for today's
\index{theorem proving!inductive}%
\index{theorem proving!automated}%
\index{theorem proving!human-oriented}%
inductive theorem proving
because the restrictions on \nlbmath{\inpit{\ident S}}
usually cannot be met in practice. \
\herbrand's restrictions on \nlbmath{\inpit{\ident S}} require us to
avoid the occurrence of \nlbmath x in the scope of
quantifiers in \nlbmath{\app P x}. \
In the practice of inductive theorem proving,
this is hardly a problem for the \mbox{\math\gamma-quantifiers},
whose bound variables tend to be easily replaceable with witnessing terms. \
There is a problem, however, with the \mbox{\math\delta-quantifiers}. \
If \nolinebreak we remove the \mbox{\math\delta-quantifiers},
letting their bound \math\delta-variables become \fuv s,
the induction hypothesis typically becomes too weak for
proving the induction step. \
This is because the now free \mbox{\math\delta-variables} do not admit
different instantiation in the induction hypotheses and the
induction conclusion.}%
\section{Foreshadowing Recursive Functions}\label
{section recursive functions}
\noindent\herbrand's notion of
a recursive function is quite abstract: \
A recursive function
is given by any new \math{n_i}-ary
function symbol \nlbmath{\fsymbol_i} plus a set of
quantifier-free formulas for its specification (which \herbrand\ calls
{\em the hypotheses}), such that,
for any natural numbers \nlbmath{k_1,\ldots,k_{n_i}},
there is a constructive proof of the unique existence of
a natural number \nlbmath l such that\footroom
\\\LINEmaths{\yields\ \fppeinsindex i
{\sppiterated{k_1}\zeropp,\ldots,\sppiterated{k_{n_i}}\zeropp}
=\sppiterated{l}\zeropp}.\label{section where text is}
\yestop\begin{quote}
\frenchtextonehundredandthirteenwithquotes\footnote
{\frenchtextonehundredandthirteensourcelocationmodifierforreprint\begin{quote}%
``\englishtextonehundredandthirteen''\end{quote}%
}%
\end{quote}
\noindent
In the letter to
\index{Goedel@G\"odel!Kurt|(}%
\goedel\ dated \herbrandtogoedeldate,
mentioned already in \nlbsectref{sec:life}, \hskip.2em
\herbrand\ added the requirement
that the hypotheses defining \nlbmath
{\fsymbol_i} contain only function symbols \nlbmath
{\fsymbol_j} with \bigmaths{j\tight\leq i},
for natural numbers \math i and \nlbmath j.\footnote
{\Cfnlb\ \citep[\Vol\,V, \PP{14}{21}]{goedelcollected}, \
\citep{sieg05:_only}\@.}
\begin{sloppypar}
\yestop\noindent\goedel's version of \herbrand's notion of a recursive function
is a little different: \
He \nolinebreak speaks
of quantifier-free equations instead of quantifier-free formulas
and explicitly lists the already known functions,
but omits the computability of the functions:
\end{sloppypar}
\pagebreak
\begin{quote}``\englishtexttwohundred''\footnote
{\Cfnlb\ \citep[\p\,26]{goedel-lecture-notes}, \
also in: \citep[\Vol\,I, \p\,368]{goedelcollected}.}
\end{quote}
\noindent\goedel\ took this
paragraph on page\,\,26 of his 1934 \Princetonnostate\ lectures
from the above-mentioned letter from \herbrand\ to \goedel,
which \goedel\ considered to be lost, but which was rediscovered
in February\,1986\@.\footnote
{\Cf\
\index{Dawson, John W., Jr.}%
\citep{goedel-herbrand}.}
In\,1963,
\index{Goedel@G\"odel!Kurt}%
\goedel\ wrote to
\index{Heijenoort!Jean van}%
\heijenoort:
\begin{quote}
``\englishtextgoedelonherbrand''\footnote
{\label{note intuitionism}%
\index{intuitionism}%
Letter of \goedel\ dated \Apr\,23,\,1963. \
\Cfnlb\ \citep[\p\,283]{herbrand-logical-writings}, \
also in:
\index{Heijenoort!Jean van}%
\citep[\p 115\f]{heijenoort-work-herbrand}, \
also in: \citep[\Vol\,V, \p\,308]{goedelcollected}\@.
\par
It is not clear whether \goedel\ refers to
\index{Brouwer, Luitzen}%
\brouwer's intuitionism
or
\index{finitism}%
\hilbert's finitism
when he calls \herbrand\ an ``intuitionist'' here; \hskip .2em
\cfnlb\ \sectref{section Subject Area and Methodological Background}.
\par
And there is more confusion regarding the meaning of
two occurrences of the word ``intuitionistically''
on the page of the above quotation
from \citep{herbrand-consistency-of-arithmetic}. \
Both occurrences carry the same note-mark,
probably because \herbrand\ realized that this time
he uses the word with yet another meaning, different from
his own standard and different from its meaning for the other occurrence
in the same quotation: \
It neither refers to
\index{Brouwer, Luitzen}%
\brouwer's intuitionism nor to
\index{finitism}%
\hilbert's finitism, but actually
to the working mathematician's meta level as opposed to the
object level of his studies;
\index{Neumann, John von}%
\cfnlb\ \eg\ \citep[\p\,2\f]{neumann-1927}.
}
\end{quote}
As we have seen, however,
\goedel's memory was wrong insofar as he had added the restriction to
equations and omitted the computability requirement.
\yestop\noindent
Obviously, \herbrand\ had a clear idea of
our current notion of a total recursive function. \
\herbrand's characterization, however,
just transfers the recursiveness
of the meta level to the object level. \
Such a transfer is of little epistemological value. \
While there seems to be no way to do much more than such a transfer
for consistency
of arithmetic in \goedel izable systems
(due to
\index{Goedel@G\"odel!'s Second Incompleteness Theorem}%
\goedelssecondincompletenesstheorem), \hskip.2em
it is quite possible to do more than that for
the notion of recursive functions. \
Indeed,
in the later developments of
the theory of term rewriting systems and today's
standard recursion theory
for total and partial recursive functions,
we find constructive definitions and
\index{consistency!proof of}%
\index{consistency!of arithmetic}%
consistency proofs practically useful in programming and
\index{theorem proving!inductive}%
\index{theorem proving!automated}%
\index{theorem proving!human-oriented}%
inductive theorem proving.\footnote
{Partial recursive functions were introduced in \citep{kleene-1938}. \
For consistency proofs and admissibility conditions for
the practical specification of partial recursive functions
with \pnc\ term rewriting systems \cfnlb\
\index{Wirth, Claus-Peter}%
\citep{wirth-jsc}.}
\ %
Thus, as suggested by
\index{Goedel@G\"odel!Kurt|)}%
\goedel,\footnote
{Letter of
\index{Goedel@G\"odel!Kurt}%
\goedel\ to
\index{Heijenoort!Jean van}%
\heijenoort, dated \Aug\,14, 1964. \
\Cfnlb\
\index{Heijenoort!Jean van}%
\citep[\p 115\f]{heijenoort-work-herbrand}.}
we \nolinebreak may say that \herbrand\ {\em foreshadowed}\/
the notion of a
\index{function!recursive|)}%
recursive function,
although he did not {\em introduce}\/ \nolinebreak it.\footnote
{\Cfnlb\
\index{Heijenoort!Jean van}%
\citep[\p 115\ff]{heijenoort-work-herbrand} for more on this. \
Moreover, note that a general
definition of (total) recursive functions was not required
for \herbrand's second
consistency proof because \herbrand's function \nlbmath{\fsymbol_P}
of \sectref{section 2 Proof}
is actually a {\em primitive} recursive one,
unless \math{P} \nolinebreak contains a non-primitive recursive function.}
\vfill\pagebreak
\section{\herbrand's Influence on Automated Deduction and\\\herbrand's
Unification Algorithm}
\label{sec:atp}
\yestop\yestop\yestop\noindent
\index{proof search|(}%
\index{proof search!automatic|(}%
\index{theorem proving!automated|(}%
In the last fifty years the field of automated deduction,
or automated reasoning as it is more generally called today,
has come a long way:
modern deduction systems are among the most sophisticated and complex
human artefacts we have; they can routinely
search spaces of several
million formulas to find a proof.
Automated theorem proving systems
have solved open mathematical problems and these deduction engines are
used nowadays in many subareas of computer science and artificial
intelligence, including software development and verification as well
as security analysis. The application in industrial software and
hardware development is now
standard practice in most high
quality products.
The handbook
\index{Robinson, J. Alan}%
\citep{HandbookAR} gives a
good impression of the state of the art \nolinebreak today.
\herbrand's work inspired the development of the first computer
programs for automated deduction and mechanical theorem proving,
for the following reason:
The actual test for \herbrand's
\index{Property C}%
\propertyC\ is
very mechanical in nature and thus can be carried out on a computer,
resulting in a mechanical semi-decision procedure for any mathematical
theorem! \
This insight, first articulated in the 1950s,
turned out to be most influential in automated reasoning,
artificial intelligence, and computer science.
\yestop\yestop\yestop\yestop\noindent
Let us recapitulate the general idea as it is common now in most
monographs and introductory textbooks on
\index{theorem proving!automated}%
automated theorem proving.
Suppose we are given a conjecture \nlbmath A\@. \
Let \math{F} be its (validity) \skolemizedform; \hskip.3em
\cfnlb\ \sectref{section skolemization}. \
We then eliminate
all quantifiers in \nlbmath F, and the result is a quantifier-free
formula \nlbmath E\@. \
We now have to show that the
\index{Herbrand!disjunction|(}%
\herbrand\ disjunction
over some possible values of the free variables of \nlbmath E
is valid\@. \
See also \examref{example running herbrand start}
in \sectref{section herbrands properties}.
How do we find these values? \
Well, we do not actually have these
``objects in the domain\closequotecomma
but we can use their {\em names}, \ie\ we
take all terms from the
\index{Herbrand!universe|(}%
{\em\herbranduniverse}\/ and substitute
them in a systematic way for the variables and see what happens:
every substituted formula obtained that way is sentential, so we can
just check whether their disjunction
is valid or not with one of the many available
decision procedures for sentential logic. \
If \nolinebreak it \nolinebreak is valid,
we are done: the original formula \nlbmath A
must be valid by
\index{Herbrand!'s Fundamental Theorem}%
\herbrandsfundamentaltheorem. \
If the disjunction of instantiated sentential formulas turns out to
be invalid, well, then bad luck and we continue the process of
substituting terms from the {\em\herbranduniverse}. \
This process is guaranteed to terminate if the original conjecture is indeed valid. \
But what happens if the given conjecture is in fact not a theorem? \
Well, in that case
the process will either run forever or else, if we are
lucky, we can sometimes nevertheless show that the conjecture is not a theorem.
In the following we will present these general ideas a little more
technically.\footnote
{Standard textbooks covering the early period of automated deduction
are \citep{chang-lee} and \citep{loveland}. \
\citet{chang-lee} present this and other algorithms in more detail and rigor.}
\vfill\pagebreak
\yestop\yestop\yestop\yestop\noindent
Arithmetic provided a testbed for the first automated theorem
proving program: In 1954 the program of
\index{Davis!Martin|(}%
\davisname\ \davislifetime\ proved the exciting
theorem that the sum of two even numbers is again even. \
This date is
still considered a hallmark and was used as the date for the \nth{50}
anniversary of automated reasoning in 2004.
The system that proved this and other theorems was based on
\index{Presburger!Arithmetic}%
\presburgerarithmetic, a decidable fragment of \firstorder\ logic,
mentioned already in \sectref{section 1 Proof}.
\yestop\yestop\yestop\yestop\noindent
Another approach, based directly on
\herbrand's ideas, was tried by
\index{Gilmore, Paul C.}%
\gilmorename\ \gilmorelifetime.\footnote
{\Cfnlb\ \citep{gilmore-1960}. \
The idea is actually due to \loewenheim\ and \skolem\ besides \herbrand,
as discussed in detail in \sectref{section herbrand loewenheim skolem}.}
His program worked as follows: \
A preprocessor
generated
the
\index{Herbrand!disjunction|)}%
{\em\herbrand\ disjunction}\/ in the following sense. \
The formula \nlbmath E
contains finitely many constant and function
symbols, which are used to systematically generate the
{\em\herbranduniverse}\/ for this set; say
\par\noindent\LINEmaths{a, b, f(a, a), f(a, b), f(b, a), f(b, b),
g(a, a), g(a, b), g(b, a), g(b, b),
f(a, f(a, a)),
\ldots}{}\par\noindent
for the constants $a, b$ and the binary function symbols \nlbmath{f,g}. \
The terms of this universe were enumerated and systematically
substituted for the variables in
\nlbmath E
such that the program generates a sequence of propositional
formulas \math{E\sigma_1, E\sigma_2, \ldots}
where $\sigma_1, \sigma_2, \ldots$ are the substitutions. \
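The systematic generation of the \herbranduniverse\ can be spelled out as follows; the string encoding of terms is our own, and the enumeration by nesting depth guarantees that every term of the universe appears after finitely many steps.

```python
from itertools import product

def herbrand_universe(constants, functions, depth):
    """All ground terms up to the given nesting depth, for a signature
    given by a list of constants and a dict mapping each function
    symbol to its arity (here: constants a, b and binary f, g)."""
    terms = list(constants)
    for _ in range(depth):
        new = list(terms)
        for g, arity in functions.items():
            for args in product(terms, repeat=arity):
                t = '{}({})'.format(g, ', '.join(args))
                if t not in new:
                    new.append(t)
        terms = new
    return terms

H1 = herbrand_universe(['a', 'b'], {'f': 2, 'g': 2}, 1)
# H1 begins: a, b, f(a, a), f(a, b), f(b, a), f(b, b), g(a, a), ...
```

Each round of the loop applies every function symbol to all terms generated so far, exactly as in the sequence displayed above.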
Now each of
these formulas can be checked for truth-functional validity, for which
\gilmore\ used the ``multiplication method\closequotefullstopextraspace
This method computes the conjunctive normal form\footnote
{Actually, the historic development of
automated theorem proving
did not follow \herbrand\ but
\index{Skolem!Thoralf}%
\skolem\ in
choosing the duality validity--unsatisfiability.
Thus, instead of proving validity of a conjecture,
the task was to show unsatisfiability of the negated conjecture.
For this reason, \gilmore\ actually used the {\em disjunctive}\/
normal form here.}
and checks the individual elements of this conjunction in turn: \
If any element contains the disjunction of an atom and its negation, it
must be true and hence can be removed from the overall conjunction. \
As soon as all disjunctions have been
removed, the theorem is proved --- else it goes on forever.
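The truth-functional check at the heart of the multiplication method can be sketched as follows: a conjunctive normal form is valid exactly if every one of its disjunctions contains an atom together with its negation. The string encoding of literals, with a leading `~` for negation, is our own.

```python
def is_valid_cnf(cnf):
    """Truth-functional validity test behind the 'multiplication
    method': a conjunction of disjunctions is valid iff every
    disjunction contains some atom together with its negation.
    A clause is a list of literal strings; negation is '~'."""
    def contains_complementary_pair(clause):
        atoms = {lit for lit in clause if not lit.startswith('~')}
        return any(lit[1:] in atoms
                   for lit in clause if lit.startswith('~'))
    return all(contains_complementary_pair(c) for c in cnf)
```

Removing a clause that contains a complementary pair, as described above, does not change validity; the formula is proved as soon as all clauses have been removed.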
This method is not particularly efficient and served to prove a few
very simple theorems only. \
Such algorithms became known as {\em British Museum
Algorithms}.
That name was originally justified as follows:
\begin{quote}``Thus we reflect the basic nature of theorem proving;
that is, its nature prior to building up sophisticated proof techniques.
We will call this algorithm the
\label{British Museum Algorithm}%
British Museum Algorithm,
in recognition of the supposed originators of this type.''\getittotheright
{\citep{newell-shaw-simon}}
\end{quote}
The name has found several more popular explanations since.
The nicest is the following: If monkeys are placed in front of typewriters
and they type in a guaranteed random fashion,
they will reproduce all the books of the
library of the British Museum,
provided they type long enough.
\vfill\pagebreak
\yestop\yestop\yestop\yestop\noindent
A few months later \davisname\ and \putnamname\ \putnamlifetime\
experimented with a better idea,
where the multiplication method is replaced by what is now known as the
\index{Davis!--Putnam procedure}%
{\em \davis--\putnam\ procedure}.\footnote
{\Cfnlb\ \citep{davis-putnam-procedure}.}
It works as follows: the initial
formula \nlbmath E is
transformed (once and for all) into
disjunctive normal form and then the variables are systematically
replaced by terms from the \herbranduniverse\ as before. \
But now the truth-functional check is replaced
with a very effective procedure,
which constituted a huge improvement and is still used today
in many applications involving propositional logic.\footnote
{\Cf\ The international SAT Competitions web page
\url{http://www.satcompetition.org/}.}
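For illustration, here is a minimal splitting-based satisfiability test in the spirit of the \davis--\putnam\ procedure. The unit-propagation and pure-literal rules of the historical procedure are omitted for brevity, so this is a sketch, not the algorithm of \citep{davis-putnam-procedure}.

```python
def assign(clauses, lit):
    """Simplify the clause set under the assumption that lit is true:
    satisfied clauses are dropped, the complementary literal removed."""
    neg = (lit[0], not lit[1])
    out = []
    for c in clauses:
        if lit in c:
            continue                      # clause already satisfied
        out.append([l for l in c if l != neg])
    return out

def dp_sat(clauses):
    """Splitting rule only: a literal is a pair (atom, polarity),
    a clause a list of literals, the input a list of clauses."""
    if not clauses:
        return True                       # every clause satisfied
    if any(not c for c in clauses):
        return False                      # empty clause: contradiction
    atom = clauses[0][0][0]               # split on some atom
    return dp_sat(assign(clauses, (atom, True))) or \
           dp_sat(assign(clauses, (atom, False)))
```

Even this crude version terminates on every propositional input, since each split removes one atom from the clause set.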
However, the
most cumbersome aspect remained: the systematic but blind replacement
of variables by terms from the \herbranduniverse.
Could we not
find these replacements in a more goal-directed and intelligent way?
\yestop\yestop\yestop\yestop\noindent
The first step in that direction was done by
\index{Davis!Martin|)}%
\citet{davis-1963} in a
method called linked conjuncts, where the substitution was cleverly
chosen so that it generated the desired tautologies more directly.
And this idea finally led to the seminal resolution principle
discovered by
\index{Robinson, J. Alan}%
\robinsonname\ \robinsonlifetime,
which has dominated the field ever since.\footnote
{\Cfnlb\ \citep{resolution}.}
This technique --- called a
\index{logic!machine-oriented}%
{\em machine-oriented}\/ logic
by
\index{Robinson, J. Alan}%
\robinson\ --- dispenses with the systematic replacement from the
\herbranduniverse\ altogether and finds the proper
substitutions more directly by an ingenious combination of Cut
and a
\index{unification algorithm|(}%
unification algorithm.
It works as follows: first transform
the formula \nlbmath E
into disjunctive normal form,
\ie\ into a disjunctive set of conjunctions. \
Now suppose that the following elements are in this disjunctive set:
\par\noindent\LINEnomath{\bigmaths{K_1\und\ldots\und K_m\und L}{} \ and \
\bigmaths{\neg L\und M_1\und\ldots\und M_n}.}\par\noindent
Then we can add their {\em resolvent}
\par\noindent\LINEmaths{K_1\und\ldots\und K_m\und M_1\und\ldots\und M_n}{}
\par\noindent
to this disjunction,
simply because one of the previous two must be true if the resolvent is true.
Now suppose that the literals $L$ and $\neg L$
are not already complementary
because they still contain variables, for example
\math{P(x, f(a, y))} and \math{\neg P(a, f(z, b))}. \
It is easy to see that these two atoms can be made equal if we
substitute $a$ for the variables $x$ and $z$ and the constant $b$ for
the variable $y$. \
The most important aspect of the
\index{proof search|)}%
\index{proof search!automatic|)}%
\index{theorem proving!automated|)}%
resolution principle is
that this substitution can be computed by an algorithm, which is
called \emph{unification}. \
Moreover, there is always \emph{at most
one} (up to renaming) most general substitution which unifies two
atoms, and this single unifier stands for the potentially infinitely many
instances from the
\index{Herbrand!universe|)}%
\herbranduniverse\ that would be generated otherwise.
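A minimal sketch of syntactic unification in the spirit of \robinson's algorithm, applied to the example above, may look as follows. The term encoding (strings for variables and constants, tuples for function applications) and the explicit variable set are our own modeling choices.

```python
def walk(term, subst):
    """Follow variable bindings in the current substitution."""
    while isinstance(term, str) and term in subst:
        term = subst[term]
    return term

def occurs(v, term, subst):
    """Occurs check: does variable v occur in term (under subst)?"""
    term = walk(term, subst)
    if term == v:
        return True
    return isinstance(term, tuple) and \
        any(occurs(v, u, subst) for u in term[1:])

def unify(s, t, variables, subst=None):
    """Return a most general unifier of s and t as a dict, or None.
    A term is a string (a variable if contained in `variables`,
    otherwise a constant) or a tuple (symbol, arg_1, ..., arg_n)."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if isinstance(s, str) and s in variables:
        if occurs(s, t, subst):
            return None                   # occurs-check failure
        subst[s] = t
        return subst
    if isinstance(t, str) and t in variables:
        return unify(t, s, variables, subst)
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] and len(s) == len(t):
        for u, v in zip(s[1:], t[1:]):
            if unify(u, v, variables, subst) is None:
                return None
        return subst
    return None

# The example from the text: P(x, f(a, y)) and P(a, f(z, b)).
V = {'x', 'y', 'z'}
mgu = unify(('P', 'x', ('f', 'a', 'y')), ('P', 'a', ('f', 'z', 'b')), V)
```

On the example it returns exactly the substitution described in the text, and the occurs check rules out cyclic bindings such as unifying \math x with \math{f(x)}.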
\yestop\yestop\yestop\yestop\noindent
\index{Robinson, J. Alan}%
\robinson's original unification algorithm is exponential in
time and space.
The race for the fastest \emph{unification algorithm}
lasted more than a quarter of a century and resulted in a linear algorithm;
unification theory became a (small) subfield of computer science,
artificial intelligence, logic, and universal algebra.\footnote
{\Cfnlb\
\index{Siekmann, J\"org}%
\citep{siekmann-1989} for a survey.}
\pagebreak
Unification theory had its heyday in the late 1980s,
when the Japanese challenged the Western
economies with the
\index{programme!Fifth Generation Computer}%
``Fifth Generation Computer Programme'' which was
based, among other things, on logic programming languages. \
The
processors
of these machines realized an ultrafast unification
algorithm cast in silicon, whose performance was measured not in MIPS
(machine instructions per second), as with standard computers, but in
LIPS (logical inferences per second, which amounts to the number of
unifications per second). \
A \nolinebreak myriad of computing machinery based on these new concepts
was built in special hardware or software, and most
industrial countries even founded their own research laboratories to
counteract the Japanese challenge.\footnote
{Such as the European Computer-Industry Research Center (ECRC),
which was supported by
the French company Bull,
the German Siemens company, and
the British company ICL.}
\yestop\yestop\yestop\yestop\noindent
Interestingly, {\herbrandname} had already described the concept of a unifying
substitution and an algorithm for computing it in his thesis of\,1929. \
Here is his account in the original French idiom:
\nopagebreak
\yestop\yestop\begin{quote}
\frenchtextfivehundredwithfrontandend
{``}
{''\footnote
{\label{note translation unification algorithm}\Cfnlb\
\frenchtextfivehundredsource. \
\frenchtextfivehundredsourcemodifier
\begin{quote}\begin{enumerate}\item[``\relax 1.]
\englishtextfivehundredwithoutfrontandend''
\notop\notop\end{enumerate}\end{quote}}}%
\index{unification algorithm|)}%
\end{quote}
\vfill\pagebreak
\section{Conclusion}
With regard to students interested in logic,
in the previous lectures we have
presented all major contributions of \herbrandname\ to logic,
and our 150~notes give hints on where to continue studying.
With regard to historians,
we uncovered some parts of the historical truth about \herbrand\
which had been varnished over by contemporaries such as \goedel\ and \heijenoort.
It was already well known that \goedel's memories of
\herbrand's recursive functions were incorrect,
but to the best of our knowledge
the errors of the reprint of \herbrand's \PhDthesis\
\cite{herbrand-PhD} \hskip.2em
in \cite{herbrand-ecrits-logiques} have not been noted before. \
The English translation in \cite{herbrand-logical-writings}
is based on this flawed reprint: \
The advantage of working with the original prints
should become obvious from
a \nolinebreak comparison of our translation of \herbrand's unification
algorithm in \noteref{note translation unification algorithm}
with the translation in \cite{herbrand-logical-writings}.
With regard to logicians, however, notwithstanding our above critique,
the elaborately commented
book \cite{herbrand-logical-writings} \hskip.2em
is a great achievement and still the best
source on \herbrand\ as a logician
(\cfnlb\ \noteref{note on herbrand-logical-writings}), \
and
our lectures would not have been possible
without \heijenoort's most outstanding and invaluable contributions
to this subject. \
To the best of our knowledge,
what we called
{\em\heijenoort's \repair} of \herbrand's False Lemma
has not been published before,
and we have included it in our version of
\herbrandsfundamentaltheorem.
The consequences of this \repair\ on \herbrand's
\index{modus ponens!elimination}%
{\em Modus Ponens}\/ elimination
(as described in \nlbsectref{section modus ponens elimination}) \hskip.2em
are most relevant still today and should become part of the
standard knowledge on logic, just as \gentzen's
\index{Cut elimination}%
Cut elimination.
While \herbrand's important work on
decidability and consistency of arithmetic was
soon to be topped by \presburger\ and \gentzen,
his \fundamentaltheorem\ will remain of outstanding historical
and practical significance. \
Even under the critical assumptions
(\cfnlb\ the discussion
in \nlbsectref{section herbrand loewenheim skolem}) \hskip.3em
that \herbrand\ took
the outer \skolemizedform\ from \citep{skolem-1928} \hskip.2em
and that he had realized that the presentation in \citep{loewenheim-1915}
included a sound and complete proof procedure, \hskip.3em
\herbrandsfundamentaltheorem\ remains a
truly remarkable creation.
All in all, \herbrandname\ has well
deserved to be the idol that he actually is. \
And thus we were surprised to find out
how little is known about his personality and life, and
that there does not seem to be anything like a
\herbrand\ memorial or museum,
nor even a \nolinebreak street named after him,
nor a decent photo of him available on the
Internet.\footnote
{The best photo of \herbrand\ currently to be found on the Internet
seems to be the one of \figuref{figure herbrand}. \
Outside mathematics, Google hits on \herbrand\ typically refer to
P. Herbrand \& Cie., a historical
street-car production company in \Cologne; \hskip.2em
or else to
\herbrand\ Street close to \russell\ Square in \London,
probably named after \herbrand\ Arthur \russell,
the \nth{11} Duke of Bedford.} \
Moreover, a \nolinebreak careful bilingual edition of \herbrand's complete works
on the basis of the elaborate previous editorial achievements
is sorely needed.\footnote
{\Cfnlb\ also \noteref{note on herbrand-logical-writings}.}
\vfill\pagebreak
\title{A correction term for the asymptotic scaling of drag in flat-plate turbulent boundary layers}
\begin{abstract}
An asymptotic scaling law for drag in flat-plate turbulent boundary layers has been proposed [Dixit SA, Gupta A, Choudhary H, Singh AK and Prabhakaran T. Asymptotic scaling of drag in flat-plate turbulent boundary layers. Phys.\ Fluids Vol.~32, 041702 (2020)]. In this paper, we propose to amend the scaling law with a correction term derived from the logarithmic law for the mean velocity in the streamwise direction.
\end{abstract}
\section{Introduction}
In \cite{dixit_a}, an asymptotic (high Reynolds number) scaling law for drag in zero-pressure-gradient (ZPG) flow has been derived based on an approximation of $M$, the kinematic momentum rate through the boundary layer:
\begin{equation}
\label{eq:dixit_asymp}
M = \int_0^{\delta} U^2 {\rm d}z \sim U_{\tau}^2 \delta,
\end{equation}
\noindent where $\delta$ is the boundary layer thickness, $U$ is the mean velocity in the streamwise direction, $z$ is the distance from the wall and $U_{\tau}$ is the friction velocity (we use $\sim$ to mean ``scales as''). In this paper, we will propose a correction term to Equation (\ref{eq:dixit_asymp}).
The paper is structured as follows: In Section \ref{sec:derivation} we derive the correction term from the logarithmic law for the mean velocity in the streamwise direction. We apply this correction term to the measurements from \cite{dixit_a} in Section \ref{sec:application}, discuss the findings in Section \ref{sec:discussion} and conclude in Section \ref{sec:conclusions}.
\section{Derivation of the correction term}
\label{sec:derivation}
Our first step is to introduce the ``log law'' as formulated in \cite{marusic_a}:
\begin{equation}
\label{eq:log_law}
U^+ = \frac{1}{\kappa} \log (z^+) + A,
\end{equation}
\noindent where $U^+=U/U_{\tau}$, $z^+=z U_{\tau}/\nu$ is the normalized distance from the wall, $\nu$ is the kinematic viscosity, $\kappa$ is the von K\'arm\'an constant and $A$ is a constant for a given wall roughness. Although not strictly correct (close to and far from the wall), as our second step we will assume that the log law holds for the entire boundary layer of ZPG flows and use this to estimate the kinematic momentum rate through the boundary layer:
\begin{eqnarray}
\label{eq:M_log}
M &=& \int_{0}^{\delta} U^2 {\rm d}z \nonumber \\
&\sim& U_{\tau}^2 \delta \times \nonumber \\
& & \left[ \frac{2}{\kappa^2} - \frac{2A}{\kappa} + A^2 +
\log (Re_{\tau}) \left( \frac{2A}{\kappa} - \frac{2}{\kappa^2} \right) + \log(Re_{\tau})^2/\kappa^2 \right],
\end{eqnarray}
\noindent where $Re_{\tau} = \delta U_{\tau} / \nu$ is the friction Reynolds number.
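The bracketed term can be cross-checked numerically. The following Python sketch (our own illustration, not part of \cite{dixit_a} or \cite{marusic_a}) compares the closed form above with direct quadrature of the squared log law, using the substitution $s=z/\delta$ so that $z^+=Re_\tau\, s$:

```python
import math

KAPPA, A = 0.39, 4.3  # log law constants as quoted from the literature

def f_closed(re_tau, kappa=KAPPA, a=A):
    # Bracketed term of the equation above: M / (U_tau^2 * delta).
    ell = math.log(re_tau)
    return (2/kappa**2 - 2*a/kappa + a**2
            + ell*(2*a/kappa - 2/kappa**2) + ell**2/kappa**2)

def f_quadrature(re_tau, kappa=KAPPA, a=A, n=200_000):
    # Composite midpoint rule for int_0^1 (U/U_tau)^2 ds with s = z/delta,
    # i.e. the log law U+ = (1/kappa) ln(re_tau * s) + a, squared and integrated.
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        uplus = math.log(re_tau * s) / kappa + a
        total += uplus * uplus
    return total * h
```

The two agree to well within quadrature accuracy, confirming that the bracket is the exact integral of the squared log law with $\log$ the natural logarithm.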
\begin{figure}[!ht]
\centering
\includegraphics[width=10cm]{M.eps}
\caption{$M/U_{\tau}^2 \delta$ as a function of $Re_{\tau}$. Blue circles are measurements from Table I in \cite{dixit_a}, the red solid (black dashed) line is the log law with original (fitted) constants, respectively.}
\label{fig:M}
\end{figure}
The term in the square brackets of Equation (\ref{eq:M_log}) is assumed to be constant in Equation (\ref{eq:dixit_asymp}) \cite{dixit_a}; however, it is in fact a function of $Re_{\tau}$. In Figure \ref{fig:M}, we show $M/U_{\tau}^2 \delta$ as a function of $Re_{\tau}$ using all measurements from Table I in \cite{dixit_a}. The ratio is clearly not constant: it increases by roughly a factor of 3 when $Re_{\tau}$ increases by about two orders of magnitude. Also shown are two lines:
\begin{itemize}
\item Red solid line: Log law with original constants from \cite{marusic_a}: $\kappa=0.39$ and $A=4.3$
\item Black dashed line: Log law with fitted constants: $\kappa_{\rm fit}=0.39$ and $A_{\rm fit}=5.7$
\end{itemize}
Thus, we have demonstrated that $M$ is a function of both $\delta$ and $Re_{\tau}$:
\begin{equation}
\label{eq:basse_asymp}
M \sim U_{\tau}^2 \delta \times f(Re_{\tau}),
\end{equation}
\noindent where
\begin{equation}
f(Re_{\tau})=\left[ \frac{2}{\kappa^2} - \frac{2A}{\kappa} + A^2 +
\log (Re_{\tau}) \left( \frac{2A}{\kappa} - \frac{2}{\kappa^2} \right) + \log(Re_{\tau})^2/\kappa^2 \right].
\end{equation}
We note that the coefficient of determination $R^2$ with fitted constants is significantly larger than the one using the original constants, see Table \ref{tab:r_squared_log}; i.e., the fitted constants provide a better match than the original ones. However, we cannot expect perfect agreement because of the assumptions made in deriving Equation (\ref{eq:M_log}).
\begin{table}[!ht]
\caption{Fit parameters and coefficient of determination ($R^2$) for the original and fitted $f(Re_{\tau})$.}
\centering
\begin{tabular}{cccc}
\hline\hline
Log law constants & $\kappa$ & $A$ & $R^2$ \\
\hline
Original & 0.39 & 4.3 & 0.80464 \\
Fitted & 0.39 & 5.7 & 0.94960 \\
\hline
\end{tabular}
\label{tab:r_squared_log}
\end{table}
The asymptotic scaling law derived in \cite{dixit_a} is:
\begin{equation}
\label{eq:ZPG_asym_pow_law}
\tilde{U}_{\tau} \sim \frac{1}{\sqrt{\tilde{\delta}}},
\end{equation}
\noindent where
\begin{equation}
\tilde{U}_{\tau} = \frac{U_{\tau} \nu}{M} \sim \frac{\nu}{U_{\tau} \delta} = \frac{1}{Re_{\tau}}
\end{equation}
\noindent is named the ``dimensionless drag'' and
\begin{equation}
\tilde{\delta} = \frac{\delta M}{\nu^2} \sim \frac{\delta^2 U_{\tau}^2}{\nu^2} = Re_{\tau}^2
\end{equation}
\noindent scales as the friction Reynolds number squared.
Our conclusion is to propose that Equation (\ref{eq:basse_asymp}) should be used instead of Equation (\ref{eq:dixit_asymp}). As a consequence, Equation (\ref{eq:ZPG_asym_pow_law}) is modified to:
\begin{equation}
\tilde{U}_{\tau} \times \sqrt{f(Re_{\tau})} \sim \frac{1}{\sqrt{\tilde{\delta}}},
\end{equation}
\noindent where $\sqrt{f(Re_{\tau})}$ is the correction term.
\section{Application of the correction term}
\label{sec:application}
We fit all measurements in \cite{dixit_a} to:
\begin{equation}
\label{eq:p_law_dixit}
\tilde{U}_{\tau} = C \times \tilde{\delta}^D,
\end{equation}
\noindent where $C$ and $D$ are fit parameters, see Table \ref{tab:r_squared} and Figure \ref{fig:p_law_dixit}. Equations (7) and (8) in \cite{dixit_a} are both power laws, fitted to smaller and larger $\tilde{\delta}$ values, respectively; this is referred to as the ``discrete model''. Another model, the ``continuous model'', is presented as Equation (9) in \cite{dixit_a} and covers the entire range of $\tilde{\delta}$. As can be seen from Table \ref{tab:r_squared}, the $R^2$ of the continuous model is larger than the $R^2$ of the two discrete fits, i.e. the continuous model performs better in fitting the measurements.
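A power-law fit of this form reduces to linear regression in log-log space. The following Python sketch illustrates the procedure; the data values are synthetic stand-ins (the actual measurements of \cite{dixit_a} are not reproduced here):

```python
import math

def fit_power_law(delta_tilde, u_tilde):
    """Least-squares fit of u = C * delta**D via log-log linear regression."""
    xs = [math.log(d) for d in delta_tilde]
    ys = [math.log(u) for u in u_tilde]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    D = sxy / sxx                 # exponent
    C = math.exp(my - D * mx)     # multiplier
    return C, D

# Synthetic data following an exact power law (illustration only).
deltas = [10.0 ** k for k in range(6, 12)]
drags = [0.15 * d ** (-0.55) for d in deltas]
C, D = fit_power_law(deltas, drags)
```

On exact power-law data the regression recovers $C$ and $D$ to machine precision; on the measurements it additionally yields the residuals from which $R^2$ is computed.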
\begin{table}[!ht]
\caption{Fit parameters and coefficient of determination ($R^2$) for fits in \cite{dixit_a} and this paper.}
\centering
\begin{tabular}{cccc}
\hline\hline
Equation & $C$ & $D$ & $R^2$ \\
\hline
Equation (7) in \cite{dixit_a} & 0.15144 & -0.55745 & 0.99982 \\
Equation (8) in \cite{dixit_a} & 0.10869 & -0.54261 & 0.99992 \\
Equation (9) in \cite{dixit_a} & - & - & 0.99998 \\
Equation (10) & 0.17291 & -0.56439 & 0.99991 \\
Equation (11) & 1.06598 & -0.50629 & 0.99992 \\
Equation (12) & 1.23257 & -0.51017 & 0.99992 \\
\hline
\end{tabular}
\label{tab:r_squared}
\end{table}
\begin{figure}[!ht]
\centering
\includegraphics[width=10cm]{ZPG_p_law_dixit.eps}
\caption{Measurements from \cite{dixit_a} and fit to Equation (\ref{eq:p_law_dixit}).}
\label{fig:p_law_dixit}
\end{figure}
The next two fits are using the correction term $\sqrt{f(Re_{\tau})}$, either with the original log law constants:
\begin{equation}
\label{eq:p_law_orig}
\tilde{U}_{\tau} \times \sqrt{f(Re_{\tau})_{\rm original~constants}} = C \times \tilde{\delta}^D,
\end{equation}
\noindent or with the fitted log law constants:
\begin{equation}
\label{eq:p_law_fit}
\tilde{U}_{\tau} \times \sqrt{f(Re_{\tau})_{\rm fitted~constants}} = C \times \tilde{\delta}^D,
\end{equation}
\noindent see Table \ref{tab:r_squared} and Figure \ref{fig:p_law_mult}. The quality of the fits is similar to that of Equation (\ref{eq:p_law_dixit}), but the fits with the correction term are interesting because their exponents are very close to $-1/2$. Thus, the deviation from $-1/2$ using Equation (\ref{eq:p_law_dixit}) may be caused not only by $Re$ not being sufficiently large, but also by the omission of the correction term.
\begin{figure}[!ht]
\centering
\includegraphics[width=10cm]{ZPG_p_law_mult.eps}
\caption{Measurements from \cite{dixit_a} with correction terms applied and fits to Equations (\ref{eq:p_law_orig}) and (\ref{eq:p_law_fit}).}
\label{fig:p_law_mult}
\end{figure}
\section{Discussion}
\label{sec:discussion}
By comparing fit results from Equation (\ref{eq:p_law_dixit}) to results from Equations (\ref{eq:p_law_orig}) and (\ref{eq:p_law_fit}) - see Table \ref{tab:r_squared} - we find that the correction term scales weakly with $\tilde{\delta}$:
\begin{equation}
\sqrt{f(Re_{\tau})} \sim \tilde{\delta}^{0.05},
\end{equation}
\noindent which is the reason that the fits with the correction term have an exponent closer to $-1/2$.
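The weak scaling can also be read off from the closed form of $f$: over the range $Re_\tau\sim 10^3$ to $10^5$ (i.e. $\tilde{\delta}\sim 10^6$ to $10^{10}$), the local log-log slope of $\sqrt{f(Re_\tau)}$ versus $\tilde{\delta}$ is roughly $0.05$. A quick numerical check (our own sketch, using the original log law constants):

```python
import math

def f(re_tau, kappa=0.39, a=4.3):
    # Correction factor f(Re_tau) with the original log law constants.
    ell = math.log(re_tau)
    return (2/kappa**2 - 2*a/kappa + a**2
            + ell*(2*a/kappa - 2/kappa**2) + ell**2/kappa**2)

def local_slope(re1, re2):
    # d log sqrt(f) / d log delta_tilde, using delta_tilde ~ re_tau**2.
    num = 0.5 * (math.log(f(re2)) - math.log(f(re1)))
    den = 2.0 * (math.log(re2) - math.log(re1))
    return num / den

slope = local_slope(1e3, 1e4)  # close to 0.05 in this range
```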
For the fit with the correction term using the original log law constants (Equation (\ref{eq:p_law_orig})), we also see that the multiplier $C$ is close to 1 (1.06598, see Table \ref{tab:r_squared}); thus, for that case we propose an exact equation which matches the measurements quite well:
\begin{equation}
\tilde{U}_{\tau} \times \sqrt{f(Re_{\tau})_{\rm original~constants}} = \frac{1}{\sqrt{\tilde{\delta}}}.
\end{equation}
Regarding measurements, we note that there is quite a large variation for large $Re_{\tau}$ (Figure \ref{fig:M}) and, equivalently, at high $\tilde{\delta}$ (Figures \ref{fig:p_law_dixit} and \ref{fig:p_law_mult}). This leads us to speculate that the measurements might have been taken at different wall roughnesses, which would e.g. affect $A$ in the log law. It is not clear to us from the description in \cite{dixit_a} whether this is indeed the case.
\section{Conclusions}
\label{sec:conclusions}
We have derived a correction term to the asymptotic scaling law of drag in ZPG turbulent boundary layers \cite{dixit_a}. The correction term has been applied to existing measurements and demonstrates that it leads to scaling with an exponent closer to -1/2 than the original scaling law.
\paragraph{Acknowledgements}
We are grateful to Google Scholar Alerts for making us aware of \cite{dixit_a} in a 'Recommended articles' e-mail dated 14th of May 2020.
\paragraph{Data availability statement}
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
\label{sec:refs}
\title{Finite element method for radially symmetric solution of a multidimensional semilinear heat equation}
\begin{abstract}
This study aims to present the error and numerical blow up analyses of a finite element method for computing the radially symmetric solutions of semilinear heat equations. In particular, this study establishes optimal order error estimates in $L^\infty$ and weighted $L^2$ norms for the symmetric and nonsymmetric formulation, respectively. Some numerical examples are presented to validate the obtained theoretical results.
\end{abstract}
\section{Introduction}
This study was conducted to investigate the convergence property of the finite element method (FEM) applied to a parabolic equation with singular coefficients for the function $u=u(x,t)$, $x\in\overline{I}=[0,1]$, and $t\ge 0$, as expressed in
\begin{subequations}
\label{eq:1}
\begin{align}
&u_{t}=u_{xx}+\frac{N-1}{x}u_{x}+f(u),
&&x\in I=(0,1),~ t>0, \label{eq:1a}\\
&u_x (0,t)=u(1,t)=0, &&t>0, \label{eq:1b}\\
&u(x,0)=u^0(x), &&x\in I, \label{eq:1c}
\end{align}
\end{subequations}
where $f$ is a given locally Lipschitz continuous function, $u^0$ is a given continuous function, and
\begin{equation}
\label{eq:N}
N\ge 2\quad \mbox{integer}
\end{equation}
is a given parameter.
In the study of
an $N$-dimensional semilinear heat equation, the following problem arises:
\begin{subequations}
\label{eq:1z}
\begin{align}
& U_{t}=\Delta U+ f(U), && \bm{x}\in\Omega,~t> 0\\
& U=0, &&\bm{x}\in\partial\Omega,~t>0,\\
& U(0,\bm{x})=U^{0}(\bm{x}), && \bm{x}\in \Omega,
\end{align}
\end{subequations}
where $\Omega$ \ek{represents} a bounded domain \tn{in} $\mathbb{R}^{N}$.
If one is concerned with the \emph{radially symmetric solution} $u(|\bm{x}|)=U(\bm{x})$ in the $N$-dimensional ball $\Omega=\{\bm{x}\in\mathbb{R}^N\mid |\bm{x}|=|\bm{x}|_{\mathbb{R}^N} < 1\}$, then \eqref{eq:1z} implies \eqref{eq:1}, where $x=|\bm{x}|$ and $u^0 (x)=U^{0}(\bm{x})$.
For a linear case in which $f(u)$ is replaced by a given function $f(x,t)$, the works \cite{et84,tho06} studied the convergence property of the FEM for \eqref{eq:1} along with the corresponding steady-state problem, and proposed two schemes: the symmetric scheme, for which they established the optimal order error estimate in the weighted $L^2$ norm; and the nonsymmetric scheme, for which they proved the $L^\infty$ error estimate. In this paper, both schemes are applied to the semilinear equation \eqref{eq:1} to derive various error estimates. Moreover, this study includes a discussion of discrete positivity conservation properties, which earlier studies \cite{et84,tho06} did not address, but which are actually important in the study of diffusion-type equations.
Our emphasis is on the FEM because it allows non-uniform partitions of the space variable. Therefore, the method is useful for examining solutions that are highly concentrated at the origin. In this connection, we present our motivation for this study.
\ek{The} critical phenomenon appearing in the semilinear heat equation of the form
\[
U_{t}=\Delta U+ U^{1+\alpha},\quad \alpha>0
\]
in a multidimensional space has attracted considerable attention since the pioneering work of Fujita \cite{fuj66}. According to his result, when the equation is posed on the whole $N$-dimensional space, any positive solution blows up in a finite time if $\alpha\le 2/N$, whereas a solution is smooth at any time for a small initial value if $\alpha>2/N$.
Therefore, the exponent $p_c=1+2/N$ is known as Fujita's critical exponent (see \cite{lev90,dl00} for critical exponents of other equations). Generally, similar critical exponents can be found for initial-boundary value problems for the semilinear heat equation; some examples are given in earlier studies \cite{ish10,lev90,dl00}. However,
the concrete values of those critical conditions are apparently unknown.
Therefore, we find it interesting to study numerical methods for computing the solutions of nonlinear partial differential equations in an $N$-dimensional space. However, computing the non-stationary problem in four space dimensions is difficult even for modern computers.
We therefore consider the FEM to solve the one-space-dimensional equation \eqref{eq:1}. However, we face another difficulty in dealing with the singular coefficient $(N-1)/x$, which the FEM handles in a reasonably simple way, as explained later.\\
\tn{
As described above, the main purpose of this paper is to derive various
optimal order error estimates for the symmetric and nonsymmetric
schemes of \cite{et84,tho06} applied to \eqref{eq:1}. These schemes
are described below as (Sym) and (Non-Sym). To this end, we address
mostly the general nonlinearity $f(u)$. Moreover, we study discrete
positivity conservation properties. We summarize our typical results
here.
\begin{itemize}
\item The solution of (Sym) is positive if $f$ and the discretization
parameters satisfy some conditions, as shown by Theorem \ref{prop:2}.
\item If $f$ is a \emph{globally} Lipschitz continuous function, then the
solution of (Sym) converges to the solution of \eqref{eq:1} in the
weighted $L^2$ norm for the space and in the $L^\infty$ norm for time.
Moreover, the convergence is at the optimal order, as shown by Theorem
\ref{th:s1}.
\item If $f$ is a \emph{locally} Lipschitz continuous function and
$N\le 3$, then the solution of (Sym) converges to the solution of
\eqref{eq:1} in the weighted $L^2$ norm for the space and in the
$L^\infty$ norm for time. The convergence is at the optimal order, as shown by Theorem \ref{th:s3}.
\item If $f(u)=u|u|^\alpha$ with $\alpha\ge 1$ and if the time partition
is uniform,
then the solution of (Non-Sym) converges to the solution of \eqref{eq:1} in
the $L^\infty(0,T;L^\infty(I))$ norm. The convergence is at the optimal
order up to the logarithm factor, as shown by Theorem \ref{th:s6}.
\end{itemize}
However, we do not proceed to applications of our schemes to
the blow-up computation in this work. In fact, from the main
results presented in this paper, we infer that the standard schemes of
\cite{et84,tho06} do not fit for the blow-up computation for large
$N$. For the symmetric scheme, the restriction $N\le 3$ reduces
interest in considering radially symmetric problems.
Moreover, for the nonsymmetric scheme, the use of uniform
time-partitions makes it difficult to apply Nakagawa's time-partitions
control strategy: a powerful technique for computing the
approximate blow-up time, as described in earlier reports \cite{che92,nak76,sai16,cho07,cho10,sai16w,che86}.
Nevertheless, we believe that our results are of interest to
researchers in this and related fields. In fact, the validity issue of the
symmetric scheme only for $N\le 3$ was pointed out earlier in
\cite{akr03} for a nonlinear Schr{\" o}dinger equation with no
mathematical evidence. The analysis reported herein reveals weak
points of the two standard schemes.
As a sequel to this study, we propose a new
finite element scheme for \eqref{eq:1}. The scheme, which uses a nonstandard
mass-lumping approximation, is shown to be positivity-preserving
and convergent for any $N\ge 2$. Details will be reported in a
forthcoming paper.
}\\
It is noteworthy that the finite difference method for \eqref{eq:1} has been studied and that its optimal order convergence was proved in an earlier report \cite{che92}. Its finite difference scheme uses a special approximation around the origin to allow a uniform spatial mesh.
This paper comprises \tn{five} sections. Section \ref{sec:2} presents our finite element schemes.
Well-posedness and positivity conservation are examined in Section \ref{sec:3}.
Section \ref{sec:4} presents the error estimates and their proofs.
Finally, Section \ref{sec:5} \ek{presents} some numerical examples that validate our theoretical results.
\section{Finite element method}
\label{sec:2}
First, we derive two alternate weak formulations of \eqref{eq:1}. Unless otherwise stated explicitly, we assume that $f$ is a locally Lipschitz continuous function \ek{such} that
\begin{equation}
\tag{f1}
\label{eq:f1}
\forall \mu>0, \ \exists M_\mu>0:\
|f(s)-f(s')|\le M_\mu|s-s'|\quad (s,s'\in\mathbb{R},|s|,|s'|\le \mu).
\end{equation}
Let $\chi\in \dot{H}^{1}=\{v\in H^{1}(I)\mid v(1)=0\}$ be arbitrary. Multiplying both sides of \eqref{eq:1a} by $x^{N-1}\chi$ and using integration by parts over $I$, we obtain
\begin{equation}
\label{eq:w1}
\int_I x^{N-1} u_t\chi~dx+
\int_I x^{N-1} u_{x}\chi_{x}~dx=
\int_I x^{N-1} f(u)\chi~dx.
\end{equation}
Otherwise, if we multiply both sides of \eqref{eq:1a} by $x\chi$ instead of $x^{N-1}\chi$ and integrate it over $I$, then we have
\begin{equation}
\label{eq:w2}
\int_I x u_t\chi~dx+
\int_I [x u_{x}\chi_{x}+(2-N)u_{x}\chi]~dx=
\int_I x f(u)\chi~dx.
\end{equation}
We \ek{designate} \eqref{eq:w1} the \emph{symmetric} weak form \ek{because of} the symmetric bilinear form associated with the differential operator $u_{xx}+\frac{N-1}{x}u_{x}$.
\ek{In contrast}, \eqref{eq:w2} is the \emph{nonsymmetric} weak form. Both forms are \tn{identical} at $N=2$.
We \ek{now} establish the finite element schemes based on these identities.
For a positive integer $m$, we introduce node points
\[
0=x_0<x_1<\cdots<x_{j-1}<x_{j}<\cdots<x_{m-1}<x_m=1,
\]
and set $I_{j}=(x_{j-1},x_{j})$ and $h_j=x_j-x_{j-1}$, where $j=1,\ldots,m$. The granularity parameter is defined as $h=\max_{1\le j\le m}h_j$.
Let $\mathcal{P}_k(J)$ be the set of all polynomials in an interval $J$ of degree $\le k$.
We define the $\mathrm{P}1$ finite element space \ek{as}
\begin{equation}
\label{eq:2}
S_{h}=\{ v \in H^{1}(I) \mid v\in\mathcal{P}_1(I_j)~(j = 1,\cdots,m),\ v(1)=0\}.
\end{equation}
\ek{Its} standard basis function $\phi_{j}$, $j=0,1,\cdots,m\tn{,}$ is defined as
\[
\phi_{j}(x_{i})=\delta_{ij},
\]
where $\delta_{ij}$ denotes \ek{Kronecker's} delta.
For \ek{time} discretization, we \tn{introduce} \ek{non-uniform partitions}
\[
t_0=0,\quad t_{n} = \sum_{j=0}^{n-1}\tau_{j} \quad (n\ge 1),
\]
where $\tau_j>0$ denotes the time increments.
Generally, we write $\partial_{\tau_n}u_h^{n+1}=(u_h^{n+1}-u_h^n)/\tau_n$.\\
\tn{
We are now in a position to state the finite element schemes to be considered.
}
\smallskip
\noindent \textbf{(Sym)} Find $u_{h}^{n+1} \in S_{h}$, $n=0,1,\ldots$, such that
\begin{equation}
\label{eq:3}
\left(\partial_{\tau_n}u_h^{n+1},\chi\right) + A(u_{h}^{n+1},\chi)=(f(u_{h}^{n}),\chi)\quad
(\chi \in S_{h},~n=0,1,\ldots),
\end{equation}
where $u_h^0\in S_h$ is assumed to be given. Hereinafter, we set
\begin{subequations}
\label{eq:4}
\begin{align}
(w,v)&=\int_I x^{N-1} wv~dx, & \|w\|^2=(w,w)=\int_I x^{N-1} w^2~dx, \label{eq:4a}\\
A(w,v)&=\int_I x^{N-1}w_{x}v_{x}~dx. \label{eq:4b}
\end{align}
\end{subequations}
\medskip
\noindent \textbf{(Non-Sym)} Find $u_{h}^{n+1} \in S_{h}$, $n=0,1,\ldots,$ such that
\begin{equation}
\label{eq:8}
\dual{\partial_{\tau_n}u_h^{n+1},\chi}+B(u_{h}^{n+1}, \chi)=
\dual{f(u_{h}^{n}),\chi} \quad (\chi\in S_h,~n=0,1,\ldots),
\end{equation}
where
\begin{subequations}
\label{eq:9}
\begin{align}
\dual{w,v}&=\int_Ixwv~dx, \qquad \vnorm{w}^2=\dual{w,w}=\int_Ixw^2~dx,\label{eq:9a}\\
B(w,v)&=\int_I xw_{x}v_{x}~dx+(2-N)\int_I w_{x}v~dx.\label{eq:9b}
\end{align}
\end{subequations}
It is noteworthy that $B(\cdot,\cdot)$ is coercive in $\dot{H}^1$ such that
\begin{equation}
B(w,w)=\dual{w_{x},w_{x}}+(2-N)\int_I w_{x}wdx=\vnorm{w_{x}}^2+\frac{N-2}{2} w(0)^{2}\ge\vnorm{w_{x}}^2. \label{eq:bc2}
\end{equation}
\section{Well-posedness and positivity conservation}
\label{sec:3}
In this section, we \ek{prove} the following theorems.
\begin{thm}[Well-posedness of \textup{(Sym)}]
\label{prop:1}
For a given $u_h^n\in S_h$ with $n\ge 0$, the scheme \textup{(Sym)} admits a unique solution $u_h^{n+1}\in S_h$.
\end{thm}
\begin{thm}[Positivity of (Sym)]
\label{prop:2}
In addition to the basic assumption \eqref{eq:f1}, assume that
\begin{equation}
\tag{f2}
\label{eq:f2}
\mbox{$f$ is a non-decreasing function with $f(0)\ge 0$}.
\end{equation}
Let $n\ge 0$ and $u_{h}^n\ge 0$, and assume that
\begin{equation}
\label{eq:tau1}
\tau_n\ge \frac{1}{4}h^2.
\end{equation}
Then, the solution $u_h^{n+1}$ of \textup{(Sym)} satisfies $u_h^{n+1}\ge 0$.
\end{thm}
\begin{thm}[Comparison principle for (Sym)]
\label{prop:3}
Let $n\ge 0$ and assume that $u_{h}^{n},\tilde{u}_h^n\in S_{h}$ satisfy $u_h^n\le \tilde{u}_{h}^{n}$ in $I$. Furthermore, assume that \eqref{eq:f1} and \eqref{eq:f2} are satisfied.
Let $u_{h}^{n+1},\tilde{u}_h^{n+1}\in S_{h}$ be the solutions of \textup{(Sym)} with $u_{h}^{n},\tilde{u}_h^n$, respectively, using the same time increment $\tau_n$.
Moreover, assume that \eqref{eq:tau1} is satisfied. Then, we obtain $u_{h}^{n+1}\le \tilde{u}_{h}^{n+1}$ in $I$; the equality holds if and only if $u_h^n=\tilde{u}_h^n$ in $I$.
\end{thm}
\begin{thm}[Well-posedness of \textup{(Non-Sym)}]
\label{prop:11}
For a given $u_h^n\in S_h$ with $n\ge 0$, the scheme \textup{(Non-Sym)} admits a unique solution $u_h^{n+1}\in S_h$.
\end{thm}
To prove these theorems, we conveniently rewrite \eqref{eq:3} into a matrix form.
That is, we introduce
\begin{align*}
&\mathcal{M}=(\mu_{i,j})_{0\le i,j\le m-1}\in\mathbb{R}^{m\times m},
&& \mu_{i,j}=(\phi_{j},\phi_{i}),\\
&\mathcal{A}=(a_{i,j})_{0\le i,j\le m-1}\in\mathbb{R}^{m\times m} ,
&& a_{i,j}=A(\phi_{j},\phi_{i}),\\
&\bm{u}^{n}=(u_{j}^{n})_{0\le j\le m-1}\in\mathbb{R}^{m}, && u_j^n=u_{h}^{n}(x_{j}),\\
&\bm{F}^{n}=(F_{j}^{n})_{0\le j\le m-1}\in\mathbb{R}^{m}, && F_j^n=(f(u_{h}^{n}),\phi_{j}),
\end{align*}
and express \eqref{eq:3} as
\begin{equation}
\label{eq:3m}
(\mathcal{M}+\tau_{n}\mathcal{A})\bm{u}^{n+1}=\mathcal{M}\bm{u}^{n}+\tau_{n}\bm{F}^{n}\quad (n=0,1,\ldots),
\end{equation}
where $u_m^n=u_h^n(x_m)$ is understood as $u_m^n=0$.
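For concreteness, the weighted matrices $\mathcal{M}$ and $\mathcal{A}$ can be assembled elementwise. The following Python sketch (illustrative only; quadrature is used where exact polynomial integration would work equally well) builds the entries; note that the rows of $\mathcal{A}$ corresponding to interior nodes sum to zero, since the basis functions sum to one locally — the fact exploited in Step 1 of the proof below.

```python
def assemble_weighted(nodes, N, nq=64):
    """Assemble the P1 mass matrix M (mu_ij = (phi_j, phi_i)) and stiffness
    matrix A (a_ij = A(phi_j, phi_i)) with weight x**(N-1) on [0, 1],
    dropping the Dirichlet node x_m.  Composite midpoint quadrature per
    element (a sketch; exact integration of the polynomials is possible)."""
    m = len(nodes) - 1
    M = [[0.0] * m for _ in range(m)]
    A = [[0.0] * m for _ in range(m)]
    for e in range(m):
        xl, xr = nodes[e], nodes[e + 1]
        h = xr - xl
        for q in range(nq):
            x = xl + (q + 0.5) * h / nq
            w = (h / nq) * x ** (N - 1)          # weighted quadrature weight
            phi = ((xr - x) / h, (x - xl) / h)   # local basis values
            dphi = (-1.0 / h, 1.0 / h)           # local basis derivatives
            for a in range(2):
                for b in range(2):
                    i, j = e + a, e + b
                    if i < m and j < m:          # drop node x_m (u(1) = 0)
                        M[i][j] += w * phi[a] * phi[b]
                        A[i][j] += w * dphi[a] * dphi[b]
    return M, A
```

One backward-Euler step of \eqref{eq:3m} then amounts to solving the tridiagonal system $(\mathcal{M}+\tau_n\mathcal{A})\bm{u}^{n+1}=\mathcal{M}\bm{u}^n+\tau_n\bm{F}^n$.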
\begin{lemma}
\label{la:1}
$\mathcal{M}$ and $\mathcal{A}$ are both tri-diagonal and positive-definite matrices.
\end{lemma}
\tn{Theorem \ref{prop:1} is a direct consequence of this lemma. We proceed to proofs of other theorems.}
\begin{proof}[Proof of Theorem \ref{prop:2}]
We use the representative matrix \eqref{eq:3m} instead of \eqref{eq:3} and set
\[
\mathcal{C}=(c_{i,j})_{0\le i,j\le m-1}=\mathcal{M}+\tau_{n}\mathcal{A},\quad
c_{i,j}=\mu_{i,j}+\tau_n a_{i,j}.
\]
If $\mathcal{C}^{-1}\ge O$, then we \ek{obtain}
\[
\bm{u}^{n+1}=\mathcal{C}^{-1}\left(\mathcal{M}\bm{u}^{n}+\tau_{n}\bm{F}^{n}\right)\ge \bm{0},
\]
\ek{because} $\mathcal{M}\ge O$ and $\bm{F}^n\ge \bm{0}$ in view of (f2).
The proof that $\mathcal{C}^{-1}\ge O$ holds under \eqref{eq:tau1} is divided into three steps, as described below.
\noindent \emph{Step 1.} We show that
\begin{equation}
\label{eq:p21}
\sum_{j=0}^{m-1}c_{i,j}>0\qquad (0\le i\le m-1).
\end{equation}
Letting $1\le i\le m-2$, we calculate
\begin{align*}
\sum_{j=0}^{m-1}c_{i,j}
&= \sum_{j=i-1}^{i+1}\mu_{i,j}+\tau_n\sum_{j=i-1}^{i+1} a_{i,j}\\
&= \sum_{j=i-1}^{i+1}\mu_{i,j}+\tau_n \int_{x_{i-1}}^{x_{i+1}}x^{N-1}(\phi_{i-1}+\phi_i+\phi_{i+1})_{x}(\phi_i)_{x}~dx\\
&= \sum_{j=i-1}^{i+1}\mu_{i,j}>0,
\end{align*}
\ek{because} $\phi_{i-1}+\phi_i+\phi_{i+1}\equiv 1$ in $(x_{i-1},x_{i+1})$. \ek{Cases} $i=0$ and $i=m-1$ are verified similarly.
\noindent \emph{Step 2.} We show that, if
\begin{equation}
\label{eq:p25}
\tau_{n}\ge\max\left(-\frac{\mu_{i,i+1}}{a_{i,i+1}},\,-\frac{\mu_{i,i-1}}{a_{i,i-1}}\right)\qquad (i=0,1,\cdots,m-1),
\end{equation}
then $\mathcal{C}^{-1}\ge O$. First, \eqref{eq:p25} implies that $c_{i,i-1},c_{i,i+1}\le 0$ for $0\le i\le m-1$ \ek{because} $a_{i,i-1},a_{i,i+1}<0$.
Matrix $\mathcal{C}$ is decomposed as $\mathcal{C}=\mathcal{D}(\mathcal{I}-\mathcal{E})$, where $\mathcal{D}=(d_{i,j})_{0\le i,j\le m-1}$ and $\mathcal{E}=(e_{i,j})_{0\le i,j\le m-1}$ are defined as
\[
d_{i,j}=
\begin{cases}
c_{i,i} & (i=j)\\
0 & (i\neq j)
\end{cases}
,\qquad
e_{i,j}=
\begin{cases}
0& (i=j)\\
-\frac{c_{i,j}}{c_{i,i}} & (i\neq j),
\end{cases}
\]
and where $\mathcal{I}$ is the identity matrix.
Apparently, $\mathcal{D}$ is non-singular and $\mathcal{D}^{-1}\ge O$.
Using \eqref{eq:p21}, we deduce
\[
\|\mathcal{E}\|_\infty
=\max_{0\le i\le m-1}\left( -\frac{c_{i,i-1}}{c_{i,i}}-\frac{c_{i,i+1}}{c_{i,i}}\right)<1.
\]
Therefore, matrix $\mathcal{I}-\mathcal{E}$ is non-singular and $(\mathcal{I}-\mathcal{E})^{-1}=\sum_{k=0}^\infty\mathcal{E}^k\ge O$. Consequently, we have $\mathcal{C}^{-1}=(\mathcal{I}-\mathcal{E})^{-1}\mathcal{D}^{-1}\ge O$.
\noindent \emph{Step 3.} Finally, we \ek{demonstrate} that \eqref{eq:tau1} implies \eqref{eq:p25}. We calculate
\begin{align*}
\mu_{i,i+1}& = \int_{x_{i}}^{x_{i+1}}x^{N-1}\frac{1}{h_{i+1}^{2}}(x-x_{i})(x_{i+1}-x)~dx
\le\frac{1}{4}h_{i+1}^{2}\int_{x_{i}}^{x_{i+1}}\frac{1}{h_{i+1}^{2}}x^{N-1}~dx,\\
-a_{i,i+1}&=\int_{x_{i}}^{x_{i+1}}x^{N-1}\frac{1}{h_{i+1}^{2}}~dx.
\end{align*}
Therefore, we deduce $-\frac{\mu_{i,i+1}}{a_{i,i+1}}\le \frac{1}{4}h^{2}$.
\end{proof}
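The bound in Step 3 can be spot-checked numerically. The following sketch (our own illustration) evaluates $-\mu_{i,i+1}/a_{i,i+1}$ for a single element $(x_i,x_{i+1})$ by quadrature and confirms that it stays below $h^2/4$ for various elements and dimensions:

```python
def ratio_mu_a(xl, xr, N, nq=20_000):
    """Compute -mu_{i,i+1} / a_{i,i+1} for the element (xl, xr) with weight
    x**(N-1), by composite midpoint quadrature (illustrative sketch)."""
    h = xr - xl
    mu = minus_a = 0.0
    for q in range(nq):
        x = xl + (q + 0.5) * h / nq
        w = (h / nq) * x ** (N - 1)
        mu += w * (x - xl) * (xr - x) / h ** 2   # phi_i * phi_{i+1}
        minus_a += w / h ** 2                     # -(phi_i)_x (phi_{i+1})_x
    return mu / minus_a
```

The bound follows because $(x-x_i)(x_{i+1}-x)\le h_{i+1}^2/4$ pointwise, so the weighted average cannot exceed $h^2/4$ regardless of $N$.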
\begin{proof}[Proof of Theorem \ref{prop:3}]
Because $f(\tilde{u}_h^n)-f(u_h^{n})\ge 0$ in $I$, the proof follows exactly the same pattern as that of Theorem \ref{prop:2}.
\end{proof}
We proceed to the result for \textup{(Non-Sym)}. Analogously to the above, we introduce
\begin{align*}
&\mathcal{M}'=(\mu_{i,j}')_{0\le i,j\le m-1}\in\mathbb{R}^{m\times m},
&& \mu_{i,j}'=\dual{\phi_{j},\phi_{i}},\\
&\mathcal{B}=(b_{i,j})_{0\le i,j\le m-1}\in\mathbb{R}^{m\times m} ,
&& b_{i,j}=B(\phi_{j},\phi_{i}),\\
&\bm{G}^{n}=(G_{j}^{n})_{0\le j\le m-1}\in\mathbb{R}^{m}, && G_j^n=\dual{f(u_{h}^{n}),\phi_{j}},
\end{align*}
and express \eqref{eq:8} as
\begin{equation}
\label{eq:8m}
(\mathcal{M}'+\tau_{n}\mathcal{B})\bm{u}^{n+1}=\mathcal{M}'\bm{u}^{n}+\tau_{n}\bm{G}^{n}\quad (n=0,1,\ldots).
\end{equation}
\tn{
In view of \eqref{eq:bc2}, $\mathcal{M}'$ and $\mathcal{B}$ are both tri-diagonal and positive-definite matrices.
Therefore, the proof of Theorem \ref{prop:11} is completed.
}
\section{Convergence and error analysis}
\label{sec:4}
\subsection{Results}
\label{sec:41}
Our convergence results for (Sym) and (Non-Sym) are stated under a smoothness assumption on the solution $u$ of \eqref{eq:1}: given $T>0$ and setting $Q_T=[0,1]\times [0,T]$, we assume that $u$ is sufficiently smooth such that
\begin{equation}
\label{eq:smooth1}
\kappa_\nu(u)=
\sum_{j=0}^2 \|\partial_x^j u\|_{L^{\infty}(Q_T)}
+\sum_{l=1}^{2+\nu} \|\partial_t^l u\|_{L^{\infty}(Q_T)}
+\sum_{k=1}^{1+\nu}\|\partial_t^k\partial_x^2u\|_{L^{\infty}(Q_T)}
<\infty,
\end{equation}
where $\nu$ is either 0 or 1.
The partition $\{x_j\}_{j=0}^m$ of $\bar{I}=[0,1]$ is assumed to be quasi-uniform, i.e., there is a positive constant $\beta$ independent of $h$ such that
\begin{equation}
\label{eq:beta}
h \le \beta \min_{1\le j \le m}h_{j}.
\end{equation}
Finally, the approximate initial value $u_h^0$ is chosen so that
\begin{equation}
\label{eq:iv1}
\|u_h^0-u^0\|\le C_0h^2
\end{equation}
for a positive constant $C_0$.
Moreover, for $k=1,2,\ldots$, we write $C_k=C_k(\gamma_1,\gamma_2,\ldots)$ and $h_k=h_k(\gamma_1,\gamma_2,\ldots)$ for positive constants depending only on the parameters $\gamma_1,\gamma_2,\ldots$. In particular, $C_k$ and $h_k$ are independent of $h$ and $\tau$.
\ek{Next} we state the following theorems.
\begin{thm}[Convergence for (Sym) in $\|\cdot\|$, I]
\label{th:s1}
Assume that $f$ is a globally Lipschitz continuous function; assume \eqref{eq:f1} and
\begin{equation}
\tag{f3}
\label{eq:f3}
M=\sup_{\mu>0}M_\mu<\infty.
\end{equation}
Assume that, for $T>0$, solution $u$ of \eqref{eq:1} is sufficiently smooth \ek{that} \eqref{eq:smooth1} for $\nu=0$ holds true.
Moreover, assume that \eqref{eq:beta} and \eqref{eq:iv1} are satisfied.
Then, there \ek{exists} $h_1=h_1(N,\beta)$ such that, for any $h\le h_1$, we have
\[
\sup_{0\le t_n\le T}\|u_{h}^{n}-u(\cdot,t_{n})\|\le C_1(h^{2}+\tau),
\]
where $C_1=C_1(T, M, \kappa_0(u), C_0, N,\beta)$ and $u_h^n$ is the solution of \textup{(Sym)}.
\end{thm}
\ek{For} $L^\infty$ error estimates, we \ek{must} further assume that $u_h^0$ is chosen as
\begin{equation}
\label{eq:iv2}
A(u_h^0-u^0,v_h)=0 \quad (v_h\in S_h).
\end{equation}
\begin{thm}[Convergence for (Sym) in $\|\cdot\|_{L^\infty(\sigma,1)}$, I]
\label{th:s2}
In addition to the assumption of Theorem \ref{th:s1}, assume that \eqref{eq:iv2} is satisfied. Furthermore, let $\sigma\in(0,1)$ be arbitrary.
Then, there exists an $h_2=h_2(N,\beta)$ such that, for any $h\le h_2$, we have
\[
\sup_{0\le t_n\le T}\|u_{h}^{n}-u(\cdot,t_{n})\|_{L^{\infty}(\sigma,1)}\le C_2\left(h^{2}\log\frac{1}{h}+\tau\right),
\]
where $C_2=C_2(T, M, \kappa_0(u), C_0, N,\beta,\sigma)$ and $u_h^n$ is the solution of \textup{(Sym)}.
\end{thm}
The restriction that $f$ is a globally Lipschitz continuous function with (f3) can be removed in \ek{the following manner.}
\begin{thm}[Convergence of (Sym) in $\|\cdot\|$, II]
\label{th:s3}
Given \ek{that} $T>0$ and \ek{that} only \eqref{eq:f1} is satisfied,
we assume that \eqref{eq:smooth1} with $\nu=0$, \eqref{eq:beta},
and \eqref{eq:iv1} are satisfied.
Furthermore, assume that $N\le 3$ and
that there exist positive constants $c_1$ and $\sigma$ such that
\begin{equation}
\label{eq:mesh}
\tau h^{-N/2}\le c_1h^{\sigma}.
\end{equation}
\ek{Then} there exists an $h_3=h_3(T, \kappa_0(u), C_0, N,\beta)$ such that, for any $h\le h_3$, we have
\[
\sup_{0\le t_n\le T}\|u_{h}^{n}-u(\cdot,t_{n})\|\le C_3(h^{2}+\tau),
\]
where $C_3=C_3(T, \kappa_0(u), C_0, N,\beta)$ and $u_h^n$ is the solution of \textup{(Sym)}.
\end{thm}
\begin{thm}[Convergence for (Sym) in $\|\cdot\|_{L^\infty(\sigma,1)}$, II]
\label{th:s4}
Given \ek{that} $T>0$ and \ek{that} \eqref{eq:f1} is satisfied,
we assume that \eqref{eq:smooth1} with $\nu=0$, \eqref{eq:beta},
\eqref{eq:iv1}, \eqref{eq:iv2} and \eqref{eq:mesh} are satisfied.
Then, there exists an $h_4=h_4(T, \kappa_0(u), C_0, N,\beta)$ such that, for any $h\le h_4$, we have
\[
\sup_{0\le t_n\le T}\|u_{h}^{n}-u(\cdot,t_{n})\|_{L^{\infty}(\sigma,1)}
\le C_4\left(h^{2}\log\frac{1}{h}+\tau\right),
\]
where $C_4=C_4(T, \kappa_0(u), C_0, N,\beta)$ and $u_h^n$ is the solution of \textup{(Sym)}.
\end{thm}
\tn{Subsequently}, let us proceed to \ek{error} estimates for (Non-Sym).
For the approximate initial value $u_h^0$, we choose
\begin{equation}
\label{eq:iv3}
B(u_h^0-u^0,v_h)=0\qquad (v_h\in S_h).
\end{equation}
Quasi-uniformity is \tn{also} required for the time partition; that is, we assume that
there exists a positive constant $\gamma>0$ such that
\begin{equation}
\label{eq:qt}
\tau \le \gamma \tau_{\min}\ek{,}
\end{equation}
where $\tau_{\min}=\min_{n\ge 0}\tau_n$. Moreover, we set
\begin{equation}
\label{eq:tau}
\delta=\sup_{\tn{t_{k+1}}\in [0,T]}|\tau_{k}-\tau_{k+1}|.
\end{equation}
\begin{thm}[Convergence for (Non-Sym), I]
\label{th:s5}
Let $f$ be a $C^1$ function satisfying
\begin{equation}
\tag{f4}
\label{eq:f4}
M_1=\sup_{s\in\mathbb{R}}|f'(s)|<\infty,\quad
M_2=\sup_{s\ne s'\in \mathbb{R}}\frac{|f'(s)-f'(s')|}{|s-s'|}<\infty.
\end{equation}
Given $T>0$,
we assume that the solution $u$ of \eqref{eq:1} is sufficiently smooth \ek{that} \eqref{eq:smooth1} for $\nu=1$ holds true. Furthermore, we assume that \eqref{eq:beta}, \eqref{eq:iv3} and \eqref{eq:qt} are satisfied. Then, there exists an $h_5=h_5(T, \kappa_1(u), M_1,M_2,\gamma,N,\beta)$ such that, for any $h\le h_5$, we have
\[
\sup_{0\le t_n\le T}\|u_{h}^{n}-u(\cdot,t_{n})\|_{L^{\infty}(I)}
\le C_5\left(\log\frac{1}{h}\right)^{\frac{1}{2}}\left(h^{2}+\tau+\frac{\delta}{\tau_{\min}}\right) ,
\]
where $C_5=C_5(T, \kappa_1(u), M_1,M_2,\gamma,N,\beta)>0$ and $u_h^n$ is the solution of \textup{(Non-Sym)}.
\end{thm}
Finally, we state the error estimates for a nonlinearity $f$ that is not globally Lipschitz continuous. To avoid unnecessary complexity, we deal only with the power nonlinearity $f(s)=s|s|^\alpha$.
\begin{thm}[Convergence for (Non-Sym), II]
\label{th:s6}
Let $f(s)=s|s|^\alpha$ for $s\in\mathbb{R}$, where $\alpha\ge 1$. Given $T>0$,
we assume that \eqref{eq:smooth1} with $\nu=1$, \eqref{eq:beta} and \eqref{eq:iv3} are satisfied.
Then, there exists an $h_6=h_6(T,\kappa_1(u),\gamma,N,\beta)$ such that, for any $h\le h_6$, we have
\[
\sup_{0\le t_n\le T}\|u_{h}^{n}-u(\cdot,t_{n})\|_{L^{\infty}(I)}
\le C_6\left(\log\frac{1}{h}\right)^{\frac{1}{2}}(h^{2}+\tau),
\]
where $C_6=C_6(T, \kappa_1(u),\gamma,N,\beta)$ and $u_h^n$ is the solution of \textup{(Non-Sym)}.
\end{thm}
\subsection{Proof of Theorems \ref{th:s1} and \ref{th:s2}}
\label{sec:43}
We use the projection operator $P_A:\dot{H}^1\to S_h$ associated with $A(\cdot,\cdot)$, defined for $w\in \dot{H}^1$ as
\begin{equation}
P_Aw\in S_h,\quad A(P_{A}w -w,\chi)=0\qquad (\chi\in S_{h}) .
\label{eq:pA}
\end{equation}
In \cite{et84} and \cite{jes78}, the \ek{following} error estimates \ek{are proved}.
\begin{lemma}
\label{prop:tj}
Let $w\in C^{2}(\bar{I})\cap\dot{H}^1$ and assume that \eqref{eq:beta} is satisfied. Then, for $h\le h_7=h_7(N,\beta)$, we obtain
\begin{align}
\|P_Aw-w\| &\le Ch^{2}\|w_{xx}\|, \label{eq:tj1} \\
\|P_Aw-w\|_{L^{\infty}(I)}&\le C\left( \log \frac{1}{h} \right)h^{2}\|w_{xx}\|_{L^{\infty}(I)}, \label{eq:tj2}
\end{align}
where $C$ is a positive constant depending only on $N$ and $\beta$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{th:s1}]
Using $P_Au$, we decompose the error as
\[
u_{h}^{n}-u(t_{n})=\underbrace{(u_{h}^{n}- P_{A}u(t_{n}))}_{=\theta^{n}} + \underbrace{(P_{A}u(t_{n}) - u(t_{n}))}_{=\rho^{n}}.
\]
\ek{From \eqref{eq:tj1}, it is known} that
\begin{equation}
\label{eq:th1.12}
\|\rho^{n}\| \le Ch^{2}\|u_{xx}(t_n)\| \le Ch^2 \|u_{xx}\|_{L^\infty(Q_T)}.
\end{equation}
\ek{Next} we derive \ek{an estimate} for $\theta^n$.
By considering the symmetric weak form \eqref{eq:w1} at $t=t_{n+1}$, we \ek{obtain}
\begin{multline*}
\left(\partial_{\tau_n} u(t_{n+1}),\chi\right) +A(P_{A}u(t_{n+1}),\chi)= (f(u(t_{n})),\chi)
\\
+( f(u(t_{n+1}))-f(u(t_{n})),\chi)
+\left( \partial_{\tau_n}u(t_{n+1})-u_{t}(t_{n+1}), \chi\right)
\end{multline*}
which, together with \eqref{eq:3}, implies that
\begin{multline}
\label{eq:th1.10}
\left( \partial_{\tau_n}\theta^{n+1},\chi\right)+A(\theta^{n+1},\chi)
=(f(u_h^n)-f(u(t_{n})),\chi) \\
- (f(u(t_{n+1}))-f(u(t_{n})),\chi)
- \left(\partial_{\tau_n}u(t_{n+1})-u_{t}(t_{n+1}),\chi \right)-\left(\partial_{\tau_n}\rho^{n+1},\chi \right).
\end{multline}
Substituting this \ek{expression} for $\chi=\theta^{n+1}$ \ek{yields the following:}
\begin{multline*}
\frac{1}{\tau_{n}}\left\{ \|\theta^{n+1}\|^{2}-\|\theta^{n}\|
\cdot\|\theta^{n+1}\|\right\}
\le M\|\theta^{n}+\rho^{n}\|\cdot\|\theta^{n+1}\| \\
+ M\tau_{n}\|u_{t}\|_{L^{\infty}(Q_T)}\cdot\|\theta^{n+1}\|
+C\tau_{n} \|u_{tt}\|_{L^{\infty}(Q_T)}\|\theta^{n+1}\|
+\left\| \partial_{\tau_n}\rho^{n+1}\right\|\cdot\|\theta^{n+1}\|.
\end{multline*}
Moreover, \ek{because}
\[
\partial_{\tau_n}\rho^{n+1}
=P_{A}\left(\frac{u(t_{n+1})-u(t_{n})}{\tau_{n}}\right) -\frac{u(t_{n+1})-u(t_{n})}{\tau_{n}},
\]
we obtain the estimate
\begin{equation}
\label{eq:th1.11}
\left\| \partial_{\tau_n}\rho^{n+1}\right\|
\le Ch^{2}\left\|\frac{u_{xx}(t_{n+1})-u_{xx}(t_{n})}{\tau_{n}}\right\|
\le Ch^{2}\|u_{xxt}\|_{L^{\infty}(Q_T)}.
\end{equation}
To sum up, we obtain
\[
\|\theta^{n+1}\|-\|\theta^{n}\| \le
\tau_{n}M\|\theta^{n}\|
+ Ch^{2}M\tau_{n}+ CM\tau_{n}^{2}
+C\tau_{n}^{2} +Ch^{2}\tau_{n}.
\]
Therefore, by a discrete Gronwall argument,
\begin{align}
\|\theta^{n}\|
&\le e^{MT}\|u_{h}^{0}-P_{A}u^{0}\|+C\frac{e^{MT}-1}{M}(\tau +h^{2}) \nonumber \\
&\le e^{MT}(\|u_{h}^{0}-u^{0}\|+\|u^{0}-P_{A}u^{0}\|)+C\frac{e^{MT}-1}{M}(\tau +h^{2}) \nonumber \\
&\le C'(\tau +h^{2}),
\label{eq:th1.14}
\end{align}
where $C'=C'(T,\kappa_0(u),M,N,\beta,C_{0})>0$.
\ek{By combining} this expression with \eqref{eq:th1.12}, \ek{one can} deduce the desired error estimate.
\end{proof}
\begin{proof}[Proof of Theorem \ref{th:s2}]
We use the same error decomposition as in the previous proof,
$u_{h}^{n}-u(t_{n})=\theta^{n}+\rho^{n}$, and apply \eqref{eq:tj2} to estimate $\|\rho^{n}\|_{L^{\infty}(I)}$. \ek{Because}
\begin{equation}
\label{eq:th2.23}
\|\theta^{n}\|_{L^{\infty}(\sigma,1)}\le\|\theta^{n}_{x}\|_{L^{1}(\sigma,1)}
\le C(\sigma,N)\|\theta^{n}_{x}\|,
\end{equation}
we perform an estimation for $\|\theta^{n}_{x}\|$.
Substituting \eqref{eq:th1.10} for $\chi=\partial_{\tau_n}\theta^{n+1}$, we \ek{obtain the following.}
\begin{multline*}
\left\|\partial_{\tau_n}\theta^{n+1}\right\|^{2}
+A(\theta^{n+1},\partial_{\tau_n}\theta^{n+1})
\le M\|\theta^{n}\|\cdot\left\|\partial_{\tau_n}\theta^{n+1}\right\|\\
+M\|\rho^{n}\|\cdot \left\|\partial_{\tau_n}\theta^{n+1}\right\|
+M\tau_{n}\|u_{t}\|_{L^{\infty}(Q_T)}\cdot\left\|\partial_{\tau_n}\theta^{n+1} \right\| \\
+\|u_{tt}\|_{L^{\infty}(Q_T)}\tau_{n}\left\|\partial_{\tau_n}\theta^{n+1}\right\|+\left\|\partial_{\tau_n}\rho^{n+1}\right\|\cdot\left\|\partial_{\tau_n}\theta^{n+1}\right\|
\end{multline*}
Next, we apply the elementary identity
\begin{align*}
A\left(\theta^{n+1},\partial_{\tau_n}\theta^{n+1}\right)
&=
\frac12 A\left(\theta^{n+1}-\theta^n+\theta^{n+1}+\theta^n,\partial_{\tau_n}\theta^{n+1}\right) \\
&\ge
\frac{1}{2\tau_n}\left[
A\left(\theta^{n+1}, \theta^{n+1}\right)-
A\left(\theta^{n}, \theta^{n}\right) \right]
\end{align*}
along with Young's inequality \ek{to} obtain
\begin{multline*}
\frac{1}{2\tau_{n}}\left[A(\theta^{n+1},\theta^{n+1})- A(\theta^{n},\theta^{n})\right]
\le
\frac{1}{2}\frac{M^{2}}{\delta_{0}^{2}}\|\theta^{n}\|^{2}
+\frac{1}{2}\delta_{0}^{2}\left\|\partial_{\tau_n}\theta^{n+1}\right\|^{2} \\
+\frac{1}{2}\frac{M^{2}}{\delta_{1}^{2}}\|\rho^{n}\|^{2}
+\frac{1}{2}\delta_{1}^{2}\left\|\partial_{\tau_n}\theta^{n+1}\right\|^{2}
+\frac{1}{2}\frac{C^{2}}{\delta_{2}^{2}}\tau_{n}^{2}
+\frac{1}{2}\delta_{2}^{2}\left\|\partial_{\tau_n}\theta^{n+1}\right\|^{2} \\
+ \frac{1}{2}\left\|\partial_{\tau_n}\rho^{n+1}\right\|^{2} +\frac{1}{2}\left\|\partial_{\tau_n}\theta^{n+1}\right\|^{2}
-\left\|\partial_{\tau_n}\theta^{n+1} \right\|^{2},
\end{multline*}
where $\delta_0,\delta_1,\delta_2>0$ are constants.
After setting $\delta_{0}^{2}+\delta_{1}^{2}+\delta_{2}^{2}=1$, we \ek{obtain}
\[
A(\theta^{n+1},\theta^{n+1})-A(\theta^{n},\theta^{n})\le
\tau_{n}\left[\frac{C^{2}}{\delta_{0}^{2}}\|\theta^{n}\|^{2}+\frac{C^{2}}{\delta_{1}^{2}}\|\rho^{n}\|^{2}+\left\|\partial_{\tau_n}\rho^{n+1}\right\|^{2}+\frac{C^{2}}{\delta_{2}^{2}}\tau^2\right].
\]
Therefore,
\[
A(\theta^{n},\theta^{n})\le A(\theta^0,\theta^0)+C^2t_{n} \sup_{1\le k\le n}\left[\|\theta^{k-1}\|^{2}+\|\rho^{k-1}\|^{2}+\left\|\partial_{\tau_{k-1}}\rho^{k} \right\|^{2}+\tau^2\right] .
\]
Consequently, using \eqref{eq:iv2}, \eqref{eq:th1.11}, and \eqref{eq:th1.14}, we deduce
\[
\|\theta^{n}_{x}\| \le C t_{n}^{\frac{1}{2}}\left(\tau+h^{2}\right).
\]
This, together with \eqref{eq:tj2} and \eqref{eq:th2.23}, implies the desired estimate.
\end{proof}
\subsection{Proof of Theorems \ref{th:s3} and \ref{th:s4} }
\label{sec:45}
For the proof, we \ek{use} the inverse inequality that follows.
\begin{lemma}[Inverse inequality]
\label{la:ie}
Under condition \eqref{eq:beta},
\[
\|v_{h}\|_{L^{\infty}(I)}\le C_{\star}h^{-\frac{N}{2}}\|v_{h}\| \qquad (v_h\in S_h),
\]
where $C_{\star}$ is a positive constant depending only on $N$ and $\beta$.
\end{lemma}
\begin{proof}
Let $v_h\in S_h$ be arbitrary.
From the norm equivalence in $\mathbb{R}^2$, we know that
\begin{align*}
\|v_h\|_{L^\infty(I_1)} &\le C_{\star\star}h_1^{-1/2}\|v_h\|_{L^2(\frac{h_1}{2},h_1)},\\
\|v_h\|_{L^\infty(I_j)} &\le C_{\star\star}h_j^{-1/2}\|v_h\|_{L^2(I_j)}\quad (j=2,\ldots,m),
\end{align*}
where $C_{\star\star}$ is an absolute positive constant. If $\|v_h\|_{L^\infty(I)}=\|v_h\|_{L^\infty(I_1)}$, then we can estimate
\begin{align*}
\|v_h\|_{L^\infty(I_1)}^2
&\le C_{\star\star}^2h_1^{-1}\int_{h_1/2}^{h_1} x^{-(N-1)}x^{N-1}v_h^2~dx\\
&\le C_{\star\star}^2h_1^{-1}\left(\frac{h_1}{2}\right)^{-(N-1)}\int_{h_1/2}^{h_1} x^{N-1}v_h^2~dx\\
&\le C_{\star\star}^22^{N-1}h^{-N}\left(\frac{h_1}{h}\right)^{-N}\int_{h_1/2}^{h_1} x^{N-1}v_h^2~dx\\
&\le C_{\star}^2 h^{-N} \|v_h\|^2.
\end{align*}
The case $\|v_h\|_{L^\infty(I)}=\|v_h\|_{L^\infty(I_j)}$ with $j=2,\ldots,m$ is examined similarly.
\end{proof}
\begin{proof}[Proof of Theorem \ref{th:s3}]
Consider \eqref{eq:1} and (Sym) with $f(s)$ replaced by
\[
\tilde{f}(s)=
\begin{cases}
f(\mu) & (s\ge \mu)\\
f(s)& (-\mu\le s\le\mu)\\
f(-\mu) & (s\le -\mu),
\end{cases}
\]
where $\mu>0$ is determined later. Then, $\tilde{f}$ satisfies condition \eqref{eq:f3} in Theorem \ref{th:s1} such that
\[
\sup_{s,s'\in\mathbb{R},s\ne s'}\frac{|\tilde{f}(s)-\tilde{f}(s')|}{|s-s'|}\le M\equiv \sup_{|\lambda|\le \mu}M_\lambda<\infty.
\]
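As a quick sanity check (ours, not the paper's computation), the truncated nonlinearity $\tilde{f}$ can be formed with NumPy's clip function; the difference quotients of $\tilde{f}$ then never exceed the Lipschitz constant of $f$ on $[-\mu,\mu]$. The cubic-type $f$ below is purely illustrative.

```python
import numpy as np

def truncate(f, mu):
    # tilde f freezes f outside [-mu, mu], making it globally Lipschitz.
    return lambda s: f(np.clip(s, -mu, mu))

f = lambda s: s * np.abs(s) ** 2      # illustrative cubic-type nonlinearity
ft = truncate(f, 2.0)

s = np.linspace(-10.0, 10.0, 2001)
# sup |f'| on [-2, 2] is 3 * 2**2 = 12; difference quotients stay below it
dq = np.abs(np.diff(ft(s))) / np.diff(s)
```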
Let $\tilde{u}$ and $\tilde{u}_{h}^{n}$ be the solutions of \eqref{eq:1} and (Sym) with $\tilde{f}$, respectively. Then,
\[
\|\tilde{u}_{h}^{n}\|_{L^{\infty}(I)}\le\|\theta^{n}\|_{L^{\infty}(I)}+\|P_{A}\tilde{u}(t_{n})\|_{L^{\infty}(I)},
\]
where $\theta^{n}=\tilde{u}_{h}^{n}-P_{A}\tilde{u}(t_{n})$ and $\rho^{n}= P_{A}\tilde{u}(t_{n})-\tilde{u}(t_{n})$.
Applying Theorem \ref{th:s1} to $\tilde{u}$ and $\tilde{u}_{h}^{n}$,
\ek{one obtains}
\begin{equation}
\label{eq:s2.1}
\sup_{0\le t_n\le T}\|\tilde{u}_{h}^{n}-\tilde{u}(\cdot,t_{n})\|\le C_2(h^{2}+\tau),
\end{equation}
where $C_2=C_2(T,\kappa_0(\tilde{u}),\mu,C_0, N,\beta)$.
Moreover, the estimate \eqref{eq:th1.14} for $\theta^n$ is available. In view of Lemmas \ref{prop:tj} and \ref{la:ie}, we obtain
\begin{align*}
\|\theta^{n}\|_{L^{\infty}(I)}
& \le C_\star h^{-\frac{N}{2}}\|\theta^{n}\| \le C_3h^{-\frac{N}{2}}(h^{2}+\tau),\\
\|\rho^{n}\|_{L^{\infty}(I)}&\le C_4\left(h^{2}\log\frac{1}{h}\right)\|\tilde{u}_{xx}(t_{n})\|_{L^\infty(I)},
\end{align*}
where $C_3=C_3(T,\kappa_0(\tilde{u}),\mu,C_0,N,\beta)$ and $C_4=C_4(N,\beta)$.
Therefore, we have
\[
\|P_{A}\tilde{u}(t_{n})\|_{L^{\infty}(I)}\le\|\tilde{u}(t_{n})\|_{L^{\infty}(I)}+C_4\left(h^{2}\log\frac{1}{h}\right)\|\tilde{u}_{xx}(t_{n})\|_{L^{\infty}(I)}
\]
and
\[
\|\tilde{u}_{h}^{n}\|_{L^{\infty}(I)}\le C_3(h^{2-\frac{N}{2}} +h^{-\frac{N}{2}}\tau)+\|\tilde{u}(t_{n})\|_{L^{\infty}(I)}+C_4\left(h^{2}\log\frac{1}{h}\right)\|\tilde{u}_{xx}(t_{n})\|_{L^{\infty}(I)}.
\]
At this stage, we set $\mu=1+\|u\|_{L^{\infty}(Q_T)}$, so that $u=\tilde{u}$ in $Q_T$ by uniqueness. Moreover, because $N<4$, we can take $h$ sufficiently small that
\[
C_{3}(h^{2-\frac{N}{2}}+h^{-\frac{N}{2}}\tau)\le \frac{1}{2},\quad
C_{4}\left(h^{2}\log\frac{1}{h}\right)\|u_{xx}(t_{n})\|_{L^{\infty}(I)}\le \frac{1}{2}.
\]
Consequently, $\|\tilde{u}_{h}^{n}\|_{L^{\infty}(I)}\le \mu$ and, by uniqueness, $u_h^n=\tilde{u}_h^n$. Therefore, \eqref{eq:s2.1} implies the desired conclusion.
\end{proof}
\begin{proof}[Proof of Theorem \ref{th:s4}]
The proof follows the exact same \ek{pattern} as \ek{that} for Theorem \ref{th:s3}\ek{, but}
using Theorem \ref{th:s2}
instead of Theorem \ref{th:s1}.
\end{proof}
\subsection{Proof of Theorems \ref{th:s5} and \ref{th:s6} }
\label{sec:48}
We use the projection operator $P_B:\dot{H}^1\to S_h$ associated with $B(\cdot,\cdot)$:
\begin{equation}
B(P_{B}w -w,\chi)=0\qquad (\chi\in S_{h}) .
\label{eq:pB}
\end{equation}
In \cite{et84}, the following error estimates are proved.
\begin{lemma}
\label{prop:tj3}
Let $w\in C^{2}(\bar{I})\cap\dot{H}^1$ and assume that \eqref{eq:beta} is satisfied. Then, for $h\le h_8=h_8(N,\beta)$, we obtain
\begin{equation}
\label{eq:tj3}
\|P_Bw-w\|_{L^{\infty}(I)}\le C_8 h^{2}\|w_{xx}\|_{L^{\infty}(I)},
\end{equation}
where $C_8=C_8(N,\beta)$.
\end{lemma}
We also use a version of Poincar\'{e}'s inequality (see \cite[Lemma 18.1]{tho06}).
\begin{lemma}
\label{la:p}
We have
\begin{equation}
\label{eq:po}
\vnorm{w}\le \vnorm{w_{x}}\qquad (w\in \dot{H}^1).
\end{equation}
\end{lemma}
We are now ready to prove Theorem \ref{th:s5}.
\begin{proof}[Proof of Theorem \ref{th:s5}]
Using $P_{B}u(t)\in S_{h}$, we decompose the error into
\[
u_{h}^{n}-u(t_{n})=\underbrace{(u_{h}^{n}- P_{B}u(t_{n}))}_{=\theta^{n}} + \underbrace{(P_{B}u(t_{n}) - u(t_{n}))}_{=\rho^{n}}.
\]
We know from \eqref{eq:tj3} that
\begin{subequations}
\label{eq:tj3aa}
\begin{align}
\vnorm{\rho^n}& \le \|\rho^{n}\|_{L^{\infty}(I)} \le
Ch^{2}\|u_{xx}\|_{L^\infty(Q_T)}, \label{eq:tj3a}\\
\vnorm{\partial_{\tau_n}\rho^{n+1}}&\le \|\partial_{\tau_n}\rho^{n+1}\|_{L^{\infty}(I)} \le Ch^{2}\|u_{xxt}\|_{L^\infty(Q_T)}. \label{eq:tj3b}
\end{align}
\end{subequations}
Therefore, we focus on estimating $\vnorm{\theta^{n}_{x}}$, because we know that
\[
\|\chi\|_{L^{\infty}(I)}\le\|\chi_{x}\|_{L^{1}(I)}\le C\left(\log\frac{1}{h}\right)^{\frac{1}{2}}\vnorm{\chi_{x}}\qquad (\chi\in S_{h}).
\]
Furthermore, \eqref{eq:w2} and \eqref{eq:8} give
\begin{multline}
\label{eq:s5.1}
\dual{\partial_{\tau_n}\theta^{n+1}+\partial_{\tau_n}\rho^{n+1},\chi}
+B(\theta^{n+1},\chi)=
\dual{f(u_{h}^{n})-f(u(t_{n})),\chi}\\
-\dual{f(u(t_{n+1}))-f(u(t_{n})),\chi}
-\dual{\partial_{\tau_n} u(t_{n+1})-u_{t}(t_{n+1}),\chi}
\end{multline}
for $\chi\in S_h$.
Substituting this for $\chi=\theta^{n+1}$, we have
\begin{multline}
\label{eq:s5.2}
\dual{\partial_{\tau_n}\theta^{n+1},\theta^{n+1}}
+B(\theta^{n+1},\theta^{n+1})\\
=\dual{f(u_{h}^{n})-f(u(t_{n})),\theta^{n+1}}
-\dual{f(u(t_{n+1}))-f(u(t_{n})),\theta^{n+1}}\\
-\dual{\partial_{\tau_n} u(t_{n+1})-u_{t}(t_{n+1}),\theta^{n+1}}
-\dual{\partial_{\tau_n}\rho^{n+1},\theta^{n+1}}.
\end{multline}
This, together with \eqref{eq:bc2}, implies that
\begin{multline*}
\vnorm{\theta^{n+1}_{x}}^2\le
M\vnorm{u_{h}^{n}-u(t_{n})} \cdot \vnorm{\theta^{n+1}} \\
+M\vnorm{u(t_{n+1})-u(t_{n})}\cdot \vnorm{\theta^{n+1}}
+\vnorm{\partial_{\tau_n} u(t_{n+1})-u_{t}(t_{n+1})}\cdot \vnorm{\theta^{n+1}}\\
+\vnorm{\partial_{\tau_n}\rho^{n+1}}\cdot \vnorm{\theta^{n+1}}
+\vnorm{\partial_{\tau_n}\theta^{n+1}}\cdot \vnorm{\theta^{n+1}}.
\end{multline*}
Therefore, using \eqref{eq:po}, we deduce \ek{that}
\begin{align}
\vnorm{\theta^{n+1}_{x}}
& \le M\vnorm{u_{h}^{n}-u(t_{n})} +M\vnorm{u(t_{n+1})-u(t_{n})}\nonumber \\
& { }\qquad +\vnorm{\partial_{\tau_n} u(t_{n+1})-u_{t}(t_{n+1})}
+\vnorm{\partial_{\tau_n}\rho^{n+1}}
+\vnorm{\partial_{\tau_n}\theta^{n+1}}\nonumber\\
& \le M(\vnorm{\theta^n}+\vnorm{\rho^n}) +M\tau_n\|u_t\|_{L^\infty(Q_T)}\nonumber\\
& { }\qquad +\tau_{n} \|u_{tt}\|_{L^{\infty}(Q_T)}
+\vnorm{\partial_{\tau_n}\rho^{n+1}}
+\vnorm{\partial_{\tau_n}\theta^{n+1}}.
\label{eq:s5.4}
\end{align}
The following estimates hold; their proof is postponed to Appendix \ref{sec:a1}:
\begin{subequations}
\label{eq:a1}
\begin{align}
\vnorm{\theta^n}& \le C (h^2+\tau), \label{eq:a11}\\
\vnorm{\partial_{\tau_n}\theta^{n+1}} &\le
C\left(h^2+\tau+\frac{\delta}{\tau}\right). \label{eq:a12}
\end{align}
\end{subequations}
Using \eqref{eq:tj3a},
\eqref{eq:tj3b},
\eqref{eq:a11}, and
\eqref{eq:a12}, we deduce
\[
\vnorm{\theta^{n+1}_{x}} \le C\left(h^2+\tau+\frac{\delta}{\tau}\right),
\]
which completes the proof of Theorem \ref{th:s5}.
\end{proof}
Finally, we prove Theorem \ref{th:s6}.
\begin{proof}[Proof of Theorem \ref{th:s6}]
Consider problems \eqref{eq:1} and \eqref{eq:8}
with $f(s)=s|s|^{\alpha}$ replaced by
\[
\tilde{f}(s)=
\begin{cases}
s|s|^{\alpha}&(|s|\le\mu)\\
[(1+\alpha)\mu^{\alpha}|s|-\alpha\mu^{1+\alpha}] \operatorname{sgn} (s)&(|s|\ge\mu),
\end{cases}
\]
where $\mu>0$ is determined later. Then,
$\tilde{f}$ is a $C^1$ function and the corresponding values of $\tilde{M}_1$ and $\tilde{M}_2$ in (f4) are expressed as
$\tilde{M}_1=(1+\alpha)\mu^{\alpha}$ and $\tilde{M}_2=(1+\alpha)\alpha\mu^{\alpha-1}$.
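As an informal numerical check (ours, not from the paper), one can verify the $C^1$ matching of $\tilde{f}$ at $s=\pm\mu$ and the value $\tilde{M}_1=(1+\alpha)\mu^{\alpha}$; here the outer branch is written as $[(1+\alpha)\mu^{\alpha}|s|-\alpha\mu^{1+\alpha}]\operatorname{sgn}(s)$, and the values $\alpha=1.5$, $\mu=2$ are arbitrary.

```python
import numpy as np

alpha, mu = 1.5, 2.0
f = lambda s: s * np.abs(s) ** alpha

def f_tilde(s):
    # C^1 truncation: s|s|^alpha inside [-mu, mu], affine continuation outside.
    s = np.asarray(s, dtype=float)
    outer = ((1 + alpha) * mu ** alpha * np.abs(s)
             - alpha * mu ** (1 + alpha)) * np.sign(s)
    return np.where(np.abs(s) <= mu, s * np.abs(s) ** alpha, outer)

eps = 1e-6
# slope of the affine branch equals f'(mu) = (1 + alpha) * mu**alpha
slope_out = (f_tilde(mu + 2 * eps) - f_tilde(mu + eps)) / eps
slope_in = (f(mu) - f(mu - eps)) / eps
```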
\ek{Let} $\tilde{u}$ and $\tilde{u}_{h}^{n}$ \ek{respectively represent} the solutions of \eqref{eq:1} and \eqref{eq:8} with $\tilde{f}$.
If $\mu\ge \kappa_1(u)$, then $u=\tilde{u}$ holds true by uniqueness.
\ek{Consequently}, we can apply Theorem \ref{th:s5} to obtain
\begin{equation}
\|\tilde{u}_{h}^{n}-u(t_{n})\|_{L^{\infty}(I)}\le C\left(\log\frac{1}{h}\right)^{\frac{1}{2}}(h^{2}+\tau),
\label{eq:a100}
\end{equation}
where~$C=C(T,\kappa_1(u),\gamma,N,\beta)$.
At this juncture, we take $h$ and $\tau$ so small that $C\left(\log\frac{1}{h}\right)^{\frac{1}{2}}(h^{2}+\tau)<1$, and set $\mu=\kappa_1(u)+1$. Because $\|\tilde{u}_{h}^{n}\|_{L^{\infty}(I)}\le \kappa_1(u)+1=\mu$, we obtain $\tilde{u}_{h}^{n}=u_{h}^{n}$ by uniqueness.
Therefore, \eqref{eq:a100} implies the desired estimate.
\end{proof}
\section{Numerical examples}
\label{sec:5}
\tn{
This section presents some numerical examples to validate our theoretical results.
For this purpose, throughout this section, we set
\[
f(s) = s|s|^{\alpha},\qquad \alpha>0.
\]
In this case, the solution of \eqref{eq:1} may blow up in finite time.
Therefore, particular attention must be devoted to the choice of the time increment $\tau_{n}$.
Particularly, following Nakagawa \cite{nak76} (see also Chen \cite{che92} and Cho--Hamada--Okamoto \cite{cho07}), we
use the time-increment control
\begin{equation}
\label{eq:6.1a}
\tau_{n}=\tau\cdot \min\left\{1,\
\frac{1}{\|u_{h}^{n}\|_{2}^{\alpha}}\right\}
\qquad \left(\|u_{h}^{n}\|_{2}^2=
\textstyle\sum\limits_{j=0}^{m-1}hx_{j+1}^{N-1}u_{h}^{n}(x_j)^{2}
\right),
\end{equation}
where $\tau=\lambda h^2$ and $\lambda=1/2$.
}
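A minimal sketch (ours) of the time-increment control \eqref{eq:6.1a}; the discrete weighted norm follows the quadrature in \eqref{eq:6.1a} on a uniform mesh, with $u_h^n$ given by its nodal values at $x_0,\ldots,x_{m-1}$.

```python
import numpy as np

def next_tau(u_h, x, h, N, alpha, tau):
    # Nakagawa-type control (6.1a): tau_n = tau * min(1, 1 / ||u_h^n||_2^alpha),
    # where ||u_h^n||_2^2 = sum_j h * x_{j+1}^{N-1} * u_h^n(x_j)^2.
    norm_sq = float(np.sum(h * x[1:] ** (N - 1) * u_h ** 2))
    return tau * min(1.0, norm_sq ** (-alpha / 2.0))

x = np.linspace(0.0, 1.0, 5)             # uniform mesh, m = 4, h = 0.25
tau_big = next_tau(np.full(4, 2.0), x, 0.25, 1, 2.0, 0.01)    # large solution
tau_small = next_tau(np.full(4, 0.1), x, 0.25, 1, 2.0, 0.01)  # small solution
```

The step shrinks as the solution grows (here `tau_big` < `tau_small`), which is what allows the scheme to track the blow-up.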
\begin{figure}[htbp]
\begin{minipage}{.49\textwidth}
\begin{center}
\includegraphics[width=.9\textwidth]{ronbun2018_fig1-crop.pdf} \\
(a) (Sym)
\end{center}
\end{minipage}
\begin{minipage}{.49\textwidth}
\begin{center}
\includegraphics[width=.9\textwidth]{ronbun2018_fig2-crop.pdf}\\
(b) (Non-Sym)
\end{center}
\end{minipage}
\caption{$N=5$, $\alpha=\frac{4}{3}$ and $u(0,x)=\cos\frac{\pi}{2}x$.}
\label{fig:1}
\end{figure}
First, we compared the shapes of both solutions of (Sym) and (Non-Sym), as shown in
Fig.~\ref{fig:1} for $N=5$, $\alpha=\frac{4}{3}$ and $u(0,x)=\cos\frac{\pi}{2}x$. We used the uniform space mesh $x_j=jh$ ($j=0,\ldots,m$) and $h=1/m$ with $m=50$.
We computed both solutions until $t_{n}=T=0.2$ or $\|u_{h}\|_{2}^{-1}<\epsilon=10^{-8}$; for this initial value,
both solutions exist globally in time and approach $0$ uniformly in $\overline{I}$ as $t\to\infty$.
No marked differences were observed between Figs.~\ref{fig:1}(a) and \ref{fig:1}(b).
Subsequently, Fig.~\ref{fig:2} shows the case in which the initial value was $u(0,x)=13\cos\frac{\pi}{2}x$, with the rest of the parameters unchanged. In this case, the solutions of (Sym) and (Non-Sym) blew up after $t=0.06$, with the distinct observation that the solution of the former blew up earlier than that of the latter. Furthermore, the solution of (Non-Sym) took negative values, whereas that of (Sym) remained positive.
\begin{figure}[htbp]
\begin{minipage}{.49\textwidth}
\begin{center}
\includegraphics[width=.9\textwidth]{mod-ronbun2018_fig3-crop.pdf} \\
(a) (Sym)
\end{center}
\end{minipage}
\begin{minipage}{.49\textwidth}
\begin{center}
\includegraphics[width=.9\textwidth]{mod-ronbun2018_fig4-crop.pdf}\\
(b) (Non-Sym)
\end{center}
\end{minipage}
\caption{$N=5$, $\alpha=\frac{4}{3}$ and $u(0,x)=13\cos\frac{\pi}{2}x$.}
\label{fig:2}
\end{figure}
We examined the errors of the solutions for the same uniform space mesh $x_j=jh$ ($j=0,\ldots,m$) and $h=1/m$, regarding the numerical solution with $h'=1/480$ as the exact solution.
The following quantities were compared:
\begin{align*}
& \mbox{$L^{1}$err} && \|u_{h'}^{n}-u_{h}^{n}\|_{L^{1}(I)};\\
& \mbox{$L^{2}$err} && \left\|u_{h'}^{n}-u_{h}^{n}\right\|=\left\|x^{\frac{N-1}{2}}(u_{h'}^{n}-u_{h}^{n})\right\|_{L^{2}(I)};\\
& \mbox{$L^{\infty}$err} && \|u_{h'}^{n}-u_{h}^{n}\|_{L^{\infty}(I)}.
\end{align*}
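For concreteness (our sketch, not the authors' code), the weighted norm $\|v\|=\|x^{(N-1)/2}v\|_{L^2(I)}$ appearing in $L^2$err can be approximated from nodal values by the trapezoidal rule:

```python
import numpy as np

def weighted_l2(v, x, N):
    # ||v|| = || x^{(N-1)/2} v ||_{L^2(0,1)} via the trapezoidal rule.
    w = x ** (N - 1) * v ** 2
    return float(np.sqrt(np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))

x = np.linspace(0.0, 1.0, 100001)
# sanity check: ||1|| with N = 3 equals (int_0^1 x^2 dx)^{1/2} = 1/sqrt(3)
val = weighted_l2(np.ones_like(x), x, 3)
```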
Fig.~\ref{fig:3} \ek{presents} results for
$N=3$, $\alpha=\frac{4}{3}$ and $u(0,x)=\cos\frac{\pi}{2}x$.
We used the uniform time increment $\tau_n=\tau=\lambda h^2$ $(n=0,1,\ldots)$ with $\lambda=1/2$ and computed until $t\le T=0.005$.
For (Sym), we observed the theoretical convergence rate $h^2+\tau$ in the $\|\cdot\|$ norm (see Theorem \ref{th:s3})\ek{,} whereas the rate in the $L^\infty$ norm \tn{deteriorated} slightly. For (Non-Sym), we observed \ek{second-order} convergence in the $L^\infty$ norm, which supports the results \ek{presented} in Theorem \ref{th:s5}.
\begin{figure}[htbp]
\begin{minipage}{.49\textwidth}
\begin{center}
\includegraphics[width=.9\textwidth]{ronbun2018_fig9-crop.pdf} \\
(a) (Sym)
\end{center}
\end{minipage}
\begin{minipage}{.49\textwidth}
\begin{center}
\includegraphics[width=.9\textwidth]{ronbun2018_fig10-crop.pdf}\\
(b) (Non-Sym)
\end{center}
\end{minipage}
\caption{Errors. $N=3$, $\alpha=\frac{4}{3}$ and $u(0,x)=\cos\frac{\pi}{2}x$.}
\label{fig:3}
\end{figure}
Moreover, we considered the case $N=4$, which is not covered by Theorem \ref{th:s3} for (Sym); for this case, we chose $\alpha=4$ and $u(0,x)=3\cos\frac{\pi}{2}x$. Fig.~\ref{fig:4}(d) displays the shape of the solution, which blew up at approximately $T=0.0035$.
Furthermore, we computed errors until $T=0.0011, 0.0022$, and $0.0033$ using the uniform meshes $x_j$ and $\tau_n$ with $\lambda=0.11$.
From Fig.~\ref{fig:4}, we observed second-order convergence in the $\|\cdot\|$ norm, suggesting the possibility of removing the assumption \tn{$N\le 3$}.
\begin{figure}[htbp]
\begin{minipage}{0.49\textwidth}
\begin{center}
\includegraphics[width=.9\textwidth]{rate_7-2-crop.pdf} \\
(a) $T=0.0011$
\end{center}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\begin{center}
\includegraphics[width=.9\textwidth]{rate_8-2-crop.pdf}\\
(b) $T=0.0022$
\end{center}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\begin{center}
\includegraphics[width=.9\textwidth]{rate_9-2-crop.pdf}\\
(c) $T=0.0033$
\end{center}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\begin{center}
\includegraphics[width=.9\textwidth]{mod-graph_counterexample-crop.pdf}\\
(d) solution shape
\end{center}
\end{minipage}
\caption{Errors. $N=4$, $\alpha=4$ and $u(0,x)=3\cos\frac{\pi}{2}x$.}
\label{fig:4}
\end{figure}
Finally, we \tn{examined the non-increasing property of the energy functional. The energy functional associated with \eqref{eq:1} is given as
\[
J(t)=\frac{1}{2}\|u_{x}\|^2-\frac{1}{\alpha+2}\int_I x^{N-1}|u|^{\alpha+2}~dx.
\]
We can use the standard method to prove that $J(t)$ is non-increasing in $t$.
This non-increasing property plays an important role in the blow-up analysis of the solution of \eqref{eq:1}, as presented by Nakagawa \cite{nak76}.
Therefore, it is of interest whether a discrete version of this non-increasing property holds true.
Actually, introducing the discrete energy functional associated with (Sym) as
\[
J_{h}(n)=\frac{1}{2}\|(u^{n}_{h})_x\|^{2}-\frac{1}{\alpha+2}\int_Ix^{N-1}|u_{h}^{n}|^{\alpha+2}~dx,
\]
we can prove the following result; Appendix B presents the proof.
\begin{prop}
\label{prop:5.5}
$J_{h}(n)$ is a non-increasing sequence in $n$.
\end{prop}
}
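For illustration (assumptions ours: trapezoidal quadrature for the potential term and midpoint weights for the gradient term), $J_h(n)$ can be evaluated from nodal values as follows:

```python
import numpy as np

def discrete_energy(u, x, N, alpha):
    # J_h = 0.5 * ||u_x||^2 - (1/(alpha+2)) * int_I x^{N-1} |u|^{alpha+2} dx,
    # with the weighted norm ||v||^2 = int_I x^{N-1} v^2 dx.
    dx = np.diff(x)
    ux = np.diff(u) / dx
    xm = 0.5 * (x[1:] + x[:-1])                    # cell midpoints
    grad_sq = float(np.sum(xm ** (N - 1) * ux ** 2 * dx))
    w = x ** (N - 1) * np.abs(u) ** (alpha + 2)
    pot = float(np.sum(0.5 * (w[1:] + w[:-1]) * dx))
    return 0.5 * grad_sq - pot / (alpha + 2)

x = np.linspace(0.0, 1.0, 100001)
# for u = 1 - x, N = 1, alpha = 2: J = 1/2 - (1/4) * int_0^1 (1-x)^4 dx = 0.45
J = discrete_energy(1.0 - x, x, 1, 2.0)
```

Monitoring this quantity along the time steps is how the non-increase asserted in Proposition \ref{prop:5.5} can be checked numerically.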
Now let $N=3$, $\alpha=\frac{4}{3}$, and $u(0,x)=\cos\frac{\pi}{2}x,~13\cos\frac{\pi}{2}x$. We determined the time increment $\tau_{n}$ through \eqref{eq:6.1a} \ek{for} the uniform space mesh $x_j=jh$ with $h=1/m$ and \tn{$m=50$}.
Fig.~\ref{fig:6} presents the results, which support Proposition \ref{prop:5.5}.
\begin{figure}[htbp]
\begin{minipage}{0.49\textwidth}
\begin{center}
\includegraphics[width=0.9\textwidth]{mod-ronbun2018_fig7-crop.pdf} \\
(Sym) \& $u(0,x)=\cos\frac{\pi}{2}x$
\end{center}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\begin{center}
\includegraphics[width=0.9\textwidth]{mod-ronbun2018_fig8-crop.pdf}\\
(Sym) \& $u(0,x)=13\cos\frac{\pi}{2}x$
\end{center}
\end{minipage}
\caption{Energy functional.}
\label{fig:6}
\end{figure}
| {
"timestamp": "2019-08-28T02:18:34",
"yymm": "1902",
"arxiv_id": "1902.07919",
"language": "en",
"url": "https://arxiv.org/abs/1902.07919",
"abstract": "This study aims to present the error and numerical blow up analyses of a finite element method for computing the radially symmetric solutions of semilinear heat equations. In particular, this study establishes optimal order error estimates in $L^\\infty$ and weighted $L^2$ norms for the symmetric and nonsymmetric formulation, respectively. Some numerical examples are presented to validate the obtained theoretical results.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Finite element method for radially symmetric solution of a multidimensional semilinear heat equation"
} |
https://arxiv.org/abs/1205.5983 | Abelian ideals of a Borel subalgebra and root systems | Let $g$ be a simple Lie algebra and $Ab$ the poset of non-trivial abelian ideals of a fixed Borel subalgebra of $g$. In 2003 (IMRN, no.35, 1889--1913), we constructed a partition of $Ab$ into the subposets $Ab_\mu$, parameterised by the long positive roots of $g$, and established some properties of these subposets. In this note, we show that this partition is compatible with intersections, relate it to the Kostant-Peterson parameterisation of abelian ideals and to the centralisers of abelian ideals. We also prove that the poset of positive roots of $g$ is a join-semilattice. | \section*{Introduction}
\noindent
Let $\g$ be a complex simple Lie algebra with a triangular decomposition
$\g=\ut\oplus\te\oplus \ut^-$. Here $\te$ is a fixed Cartan subalgebra and $\be=\ut\oplus\te$
is a fixed Borel subalgebra. Accordingly, $\Delta$ is the set of roots of $(\g,\te)$, $\Delta^+$
is the set of positive roots corresponding to $\ut$, and $\Pi$ is the set of simple roots in
$\Delta^+$. Write $\theta$ for the highest root in $\Delta^+$.
A subspace $\ah\subset\ut$ is an {\it abelian ideal\/} (of $\be$) if
$[\be,\ah]\subset \ah$ and $[\ah,\ah]=0$.
The set of abelian ideals of $\be$ is denoted by $\Ab$.
In the landmark paper~\cite{ko98}, Kostant elaborated on
Dale Peterson's theory of Abelian ideals (in particular, the astounding result
that $\#\Ab=2^{\rk\g}$) and related abelian ideals with problems in representation theory. Since then,
abelian ideals have attracted considerable attention; see, e.g.,
\cite{cp1,cp2,cp3,pr,imrn,subsets,suter}.
We think of $\Ab$ as a poset with respect to inclusion.
As $\ah\in\Ab$ is a sum of certain root spaces, we may (and will) identify such $\ah$
with the corresponding subset $I=I_\ah$ of $\Delta^+$.
Let $\Abo=\Abo(\g)$ denote the set of nonzero abelian ideals and $\Delta^+_l$ the set
of long positive roots. In the simply-laced case, all roots are assumed to be long.
In \cite[Sect.\,2]{imrn}, we defined a surjective mapping
$\tau: \Abo \to \Delta^+_l$ and studied its fibres.
If $\ah\in \Abo$ and $\tau(\ah)=\mu$, then $\mu$ is called the
{\it rootlet\/} of $\ah$, also denoted by $\rt(\ah)$ or $\rt(I_\ah)$.
Letting $\Ab_\mu=\tau^{-1}(\mu)$, we get a partition of $\Abo$ parameterised by
$\Delta^+_l$. Each fibre $\Ab_\mu$ is regarded as a
sub-poset of $\Ab$. It is known that, for any $\mu\in \Delta^+_l$,
$\Ab_\mu$ has a unique minimal and unique maximal element \cite[Sect.\,3]{imrn}.
Regarding abelian ideals as subsets of $\Delta^+$, we write $I(\mu)_{\min}$ (resp. $I(\mu)_{\max}$)
for the minimal (resp. maximal) element of $\Ab_\mu$.
We also say that
$I(\mu)_{\min}$ is the $\mu$-{\it minimal\/} and $I(\mu)_{\max}$ is the
$\mu$-{\it maximal\/} ideal.
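\begin{ex}
As a small illustration, which the reader can verify directly: let $\g$ be of type $\GR{A}{2}$,
so that $\Delta^+=\{\ap_1,\ap_2,\theta=\ap_1+\ap_2\}$ and all roots are long. The abelian ideals,
regarded as upper ideals of $\Delta^+$, are $\emptyset$, $\{\theta\}$, $\{\ap_1,\theta\}$, and
$\{\ap_2,\theta\}$ (the whole of $\Delta^+$ is not abelian, since $\ap_1+\ap_2\in\Delta^+$); thus
$\#\Ab=4=2^{\rk\g}$. A direct computation (e.g., with the minuscule elements recalled in
Section~\ref{sect:odin}) shows that $\rt(\{\theta\})=\theta$ and $\rt(\{\ap_i,\theta\})=\ap_i$ for
$i=1,2$, so that here every fibre $\Ab_\mu$ is a singleton, and the sizes $1,2,2$ of these ideals
agree with the formula $\#I(\mu)_{\min}=(\rho,\theta^\vee-\mu^\vee)+1$ below.
\end{ex}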
Various properties of the $\mu$-minimal ideals are obtained in \cite[Sect.\,4]{imrn}. In particular, it is known that
\begin{itemize}
\item \ $\# I(\mu)_{\min}=(\rho, \theta^\vee-\mu^\vee)+1$, where $\rho=\frac{1}{2}
\sum_{\gamma\in\Delta^+} \gamma$ and $\mu^\vee=2\mu/(\mu,\mu)$;
\item \ $I=I(\mu)_{\min}$ for some $\mu\in\Delta^+_l$ if and only if $I\subset
\EuScript H:=\{\gamma\in \Delta^+ \mid (\gamma, \theta)\ne 0\}$;
\item \ $I(\mu)_{\min} \subset I(\mu')_{\min}$ if and only if $\mu'\preccurlyeq \mu$, where `$\preccurlyeq$' is the usual {\it root order\/} on $\Delta^+$;
\item \ $I(\mu)_{\min}=I(\mu)_{\max}$ if and only if $(\mu,\theta)\ne 0$ \cite[Thm.\,5.1]{imrn}.
\end{itemize}
If $\rt(I)\not\in \Pi$, then there is $I'\in \Ab$ such that
$I'\supset I$, $\#I'=\#I+1$ and $\rt(I')\prec \rt(I)$. This is implicit in~\cite[Thm.\,2.6]{imrn},
cf. also Proposition~\ref{prop:elem-ext}. This implies that the (globally) maximal ideals of $\Ab$ are precisely the maximal elements of the posets $\Ab_{\ap}$ for $\ap\in \Pi\cap\Delta^+_l=:\Pi_l$, see \cite[Cor.\,3.8]{imrn}. A closed formula for the dimension of all maximal abelian ideals is proved in
\cite[Sect.\,8]{cp3}.
In this paper, we elaborate on further properties of the partition
\beq \label{eq:parti}
\Abo=\sqcup_{\mu\in\Delta^+_l}\Ab_\mu \
\eeq
and related properties of abelian ideals and root systems.
In Section~\ref{sect:intersect}, we show that partition \eqref{eq:parti} behaves well with respect to intersections.
\begin{thm} \label{thm:intr2}
Let $\mu,\mu'\in\Delta^+_l$.
\begin{itemize}
\item[\sf (i)] \ If $I \in \Ab_\mu$ and $I'\in \Ab_{\mu'}$, then $I\cap I'$ belongs to
$\Ab_{\nu}$, where $\nu$ does not depend on the choice of $I$ and $I'$. Actually,
$\nu$ is the unique smallest long positive root such that $\nu\succcurlyeq \mu$
and $\nu\succcurlyeq \mu'$. In particular, such $\nu$ always exists;
\item[\sf (ii)] \ Furthermore, $I(\mu)_{\min}\cap I(\mu')_{\min}=I(\nu)_{\min}$,
$I(\mu)_{\max}\cap I(\mu')_{\max}=I(\nu)_{\max}$, and every ideal in $\Ab_\nu$ occurs as
the intersection of two ideals from $\Ab_\mu$ and $\Ab_{\mu'}$, respectively.
\end{itemize}
\end{thm}
\noindent
The root $\nu$ occurring in (i) is denoted by $\mu\vee\mu'$.
In our approach, the existence of $\mu\vee\mu'$ \ ($\mu,\mu'\in \Delta^+_l$) comes up as a
by-product of our theory of posets $\Ab_\mu$.
This prompts the natural question of whether `$\vee$' is well-defined for
{\sl all\/} pairs of positive roots, not necessarily long.
The corresponding general assertion is proved in the Appendix (see Theorem~\ref{thm-app2}).
It seems that this property of root systems has not been noticed before.
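For instance, in type $\GR{A}{2}$ (where all roots are long) one computes directly that
$\Ab_{\ap_1}=\{\{\ap_1,\theta\}\}$, $\Ab_{\ap_2}=\{\{\ap_2,\theta\}\}$, and
$\Ab_\theta=\{\{\theta\}\}$. For $\mu=\ap_1$ and $\mu'=\ap_2$, the smallest long root dominating
both is $\nu=\ap_1\vee\ap_2=\theta$, and indeed
$\{\ap_1,\theta\}\cap\{\ap_2,\theta\}=\{\theta\}\in\Ab_\theta$, as Theorem~\ref{thm:intr2} predicts.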
In Section~\ref{sect:sovpad}, we give a characterisation of $\mu$-minimal abelian ideals that relates two different approaches to $\Ab$. We have associated the rootlet $\rt(I)\in
\Delta^+_l$ to a nonzero abelian ideal $I$. On the other hand, there is a bijection between
$\Ab$ and certain elements in the coroot lattice $Q^\vee$, which is due to Kostant and Peterson~\cite{ko98}. Namely,
\[
\Ab \stackrel{1:1}{\longleftrightarrow} \EuScript Z_1=\{z\in Q^\vee \mid -1\le (z,\gamma)\le 2 \text{ for all }
\gamma\in \Delta^+\} .
\]
The element $z\in Q^\vee$ corresponding to $I\in\Ab$ is denoted by $z_I$. Our result is
\begin{thm} \label{thm:intr1}
For an abelian ideal $I$, we have
$I=I(\mu)_{\min}$ for $\mu=\rt(I)$ if and only if\/ $\rt(I)^\vee=z_I$.
\end{thm}
\noindent We also prove that
\\ \indent
\textbullet \quad an abelian ideal $I$ belongs to $\Ab_\mu$ if and only if
$I\cap\gH=I(\mu)_{\min}$;
\\ \indent
\textbullet \quad $I(\mu)_{\max}\subset \{\nu\in\Delta^+\mid \nu\succcurlyeq\mu\}$.
In Section~\ref{sect:central}, we consider the centralisers of abelian ideals. If $\ah\in\Ab$,
then the centraliser $\z_\g(\ah)$ is a $\be$-stable subspace of $\g$.
However, $\z_\g(\ah)$ is not always contained in $\be$. We give criteria for $\z_\g(\ah)$ to
be a nilpotent subalgebra or a sum of abelian ideals. We also prove
\begin{thm} \label{thm:intr3} Let $\ah\in\Ab$.
Then $\z_\g(\ah)$ is again an abelian ideal if and only if $\rt(\ah)\in \Pi_l$.
In particular, $\z_\g(\ah)=\ah$ if and only if $\ah$ is a maximal ideal in $\Ab$.
\end{thm}
\noindent
In fact, Theorem~\ref{thm:intr3} is closely related to the following interesting observation.
For any $\gS\subset \Delta^+$, let $\min(\gS)$ and $\max(\gS)$ denote the sets of minimal
and maximal elements of $\gS$, respectively.
\begin{thm} \label{thm:intr4}
For every $\ap\in\Pi_l$, there is a one-to-one correspondence between
$\min\bigl( I(\ap)_{\min}\bigr)$ and $\max \bigl(\Delta^+\setminus I(\ap)_{\max}\bigr)$.
Namely, for any
$\nu \in \min\bigl( I(\ap)_{\min}\bigr)$, there is $\nu'\in
\max \bigl(\Delta^+\setminus I(\ap)_{\max}\bigr)$ such that $\nu+\nu'=\theta$; and vice versa.
\end{thm}
In particular, $\max \bigl(\Delta^+\setminus I(\ap)_{\max}\bigr)\subset \gH$.
An analogous statement for arbitrary long roots (in place of $\ap\in\Pi_l$) is false.
However, there is a modification of Theorem~\ref{thm:intr4}
that applies to the connected subsets of $\Pi_l$,
see Theorem~\ref{thm:modification}. Unfortunately, our proof of these two theorems
is based on the classification.
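As a sanity check in type $\GR{A}{2}$ (which the reader can verify directly): here
$I(\ap_1)_{\min}=I(\ap_1)_{\max}=\{\ap_1,\theta\}$, so
$\min\bigl( I(\ap_1)_{\min}\bigr)=\{\ap_1\}$ and
$\max \bigl(\Delta^+\setminus I(\ap_1)_{\max}\bigr)=\{\ap_2\}$, and the correspondence of
Theorem~\ref{thm:intr4} is simply $\ap_1+\ap_2=\theta$.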
\noindent
We refer to \cite{bour,hump} for standard results on root systems and (affine) Weyl groups.
{\small
{\bf Acknowledgements.} This work was done during my visits to CRC~701 at
the Universit\"at Bielefeld and
Max-Planck-Institut f\"ur Mathematik (Bonn). I thank both Institutions for
the hospitality and support.
I am grateful to E.B.\,Vinberg for fruitful discussions related to results in Appendix~\ref{app:A}.
}
\section{Preliminaries on abelian ideals and minuscule elements}
\label{sect:odin}
\noindent
Throughout this paper, $\Delta$ is the root system of $(\g,\te)$ with positive roots
$\Delta^+$ corresponding to $\ut$, simple roots $\Pi=\{\ap_1,\dots,\ap_n\}$, and
Weyl group $W$. Set $\Pi_l:=\Pi\cap \Delta^+_l$.
We equip $\Delta^+$ with the usual partial ordering `$\preccurlyeq$'.
This means that $\mu\preccurlyeq\nu$ if $\nu-\mu$ is a non-negative integral linear combination
of simple roots. Write $\mu\prec\nu$ if $\mu\preccurlyeq\nu$ and $\mu\ne\nu$.
If $\ah$ is an abelian ideal of $\be$, then $\ah$ is a sum of certain root spaces in $\ut$, i.e.,
$\ah=\bigoplus_{\gamma\in I_\ah}\g_\gamma$. The relation $[\be,\ah]\subset \ah$ is equivalent to
the condition that $I=I_\ah$ is an {\it upper ideal\/} of the poset $(\Delta^+, \preccurlyeq)$, i.e.,
if $\nu\in I$, $\gamma\in\Delta^+$, and $\nu\preccurlyeq \gamma$, then $\gamma\in I$.
The property of being abelian means that
$\gamma'+\gamma''\not\in \Delta^+$ for all $\gamma',\gamma''\in I$.
We often work in the setting of root systems, so that a $\be$-ideal $\ah\subset\ut$
is being identified with the corresponding subset $I$ of positive roots.
The theory of abelian ideals relies on the relationship, due to Peterson, between the
abelian ideals and the so-called {\it minuscule elements\/} of the affine Weyl
group of $\Delta$. Recall the necessary setup.
We have the vector space $V=\oplus_{i=1}^n{\mathbb R}\ap_i$,
the Weyl group $W$ generated by simple reflections
$s_1,\dots,s_n$, and a $W$-invariant inner product $(\ ,\ )$ on $V$.
Letting $\widehat V=V\oplus {\mathbb R}\delta\oplus {\mathbb R}\lb$, we extend
the inner product $(\ ,\ )$ to $\widehat V$ so that $(\delta,V)=(\lb,V)=
(\delta,\delta)= (\lb,\lb)=0$ and $(\delta,\lb)=1$. Set $\ap_0=\delta-\theta$, where
$\theta$ is the highest root in $\Delta^+$.
Then
\begin{itemize}
\item[] \
$\widehat\Delta=\{\Delta+k\delta \mid k\in {\mathbb Z}\}$ is the set of affine
(real) roots;
\item[] \ $\HD^+= \Delta^+ \cup \{ \Delta +k\delta \mid k\ge 1\}$ is
the set of positive affine roots;
\item[] \ $\HP=\Pi\cup\{\ap_0\}$ is the corresponding set
of affine simple roots;
\item[] \ $\mu^\vee=2\mu/(\mu,\mu)$ is the coroot corresponding to
$\mu\in \widehat\Delta$;
\item[] \ $Q=\oplus _{i=1}^n {\mathbb Z}\ap_i$
is the {\it root lattice\/} and $Q^\vee=\oplus _{i=1}^n {\mathbb Z}\ap_i^\vee$
is the {\it coroot lattice\/} in $V$.
\end{itemize}
\noindent
For each $\ap_i\in \HP$, let $s_i$ denote the corresponding reflection in $GL(\HV)$.
That is, $s_i(x)=x- (x,\ap_i)\ap_i^\vee$ for any $x\in \HV$.
The affine Weyl group, $\HW$, is the subgroup of $GL(\HV)$
generated by the reflections $s_0,s_1,\dots,s_n$.
The extended inner product $(\ ,\ )$ on $\widehat V$ is $\widehat W$-invariant.
The {\it inversion set\/} of $w\in\HW$ is $\EuScript N(w)=\{\nu\in\HD^+\mid w(\nu)\in -\HD^+\}$.
Following Peterson, we say that $w\in \HW$ is {\it minuscule\/}, if
$\EuScript N(w)=\{-\gamma+\delta\mid \gamma\in I_w\}$
for some subset $I_w\subset \Delta$.
One then proves that {\sf (i)} $I_w\subset \Delta^+$, {\sf (ii)} $I_w$ is an abelian ideal, and
{\sf (iii)} the assignment
$w\mapsto I_w$ yields a bijection between the minuscule elements of
$\HW$ and the abelian ideals, see \cite{ko98}, \cite[Prop.\,2.8]{cp1}.
Accordingly, if $I\in\Ab$, then $w_I$ denotes the corresponding minuscule
element of $\HW$. Obviously, $\# I=\#\EuScript N(w_I)=\ell(w_I)$, where $\ell$ is the usual length function on $\HW$.
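For example, $s_0$ itself is minuscule in any type: $\EuScript N(s_0)=\{\ap_0\}=\{-\theta+\delta\}$,
so $I_{s_0}=\{\theta\}$; this is the smallest nonzero abelian ideal, since every nonzero upper
ideal of $\Delta^+$ contains $\theta$.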
Using minuscule elements of $\HW$, one can assign an element of $Q^\vee$
to any abelian ideal~\cite{ko98}.
In fact, one can associate an element of $Q^\vee$ to any $w\in\HW$.
The following is presented in a more comprehensive form in \cite[Sect.\,2]{losh}.
\noindent
Recall that $\HW$ is a semi-direct product of $W$ and $Q^\vee$, and it can also be regarded as
a group of affine-linear transformations of $V$ \cite[4.2]{hump}.
For any $w\in \HW$, there is a unique decomposition
\beq \label{eq:decomp-w}
w=v{\cdot}t_r,
\eeq
where $v\in W$ and $t_r$ is the translation of $V$ corresponding to $r\in Q^\vee$, i.e.,
$t_r\ast x=x+r$ for all $x\in V$.
Then we assign the element $v(r)\in Q^\vee$ to $w\in\HW$.
An alternative way of doing so, which does not explicitly use the semi-direct product
structure, is based on the relation between
the linear $\HW$-action on $\HV$ and decomposition \eqref{eq:decomp-w}.
Given $w\in\HW$,
define the integers $k_i$, $i=1,\dots,n$, by the formula
$w^{-1}(\ap_i)=\mu_i+k_i\delta$ ($\mu_i\in \Delta$).
Then $v(r)\in Q^\vee$ is determined by the conditions $(v(r),\ap_i)=k_i$, $i=1,\dots,n$.
The reason is that $w^{-1}=v^{-1}{\cdot}t_{-v(r)}$ and
the linear $\HW$-action on $\HV$ satisfies the following relation
\beq \label{eq:general-aff-lin}
w^{-1}(x)=v^{-1}(x)+(x,v(r))\delta \quad \forall x\in V\oplus\BR\delta .
\eeq
[It suffices to verify that $t_r(x)=x-(x,r)\delta$.]
If $w=w_I$ is minuscule, then we also write $z_I$ for the resulting element of $Q^\vee$.
By \cite[Theorem\,2.5]{ko98}, the mapping
$I \mapsto z_I\in V$ sets up a bijection between $\Ab$
and $\EuScript Z_1=\{ z\in Q^\vee \mid (z,\gamma)\in \{-1,0,1,2\} \quad \forall \gamma\in \Delta^+\}$.
A proof of this result is given in \cite[Appendix~A]{subsets}.
Given $I\in\Abo$ and the corresponding non-trivial minuscule element $w_I\in\HW$,
the {\it rootlet\/} of $I$ is defined by
\[
\rt(I)=w_I(\ap_0)+\delta=w_I(2\delta-\theta) .
\]
By \cite[Prop.\,2.5]{imrn}, we have $\rt(I)\in \Delta^+_l$.
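In the simplest case (a direct check): for $I=\{\theta\}$ one has $w_I=s_0$ and
$\rt(I)=s_0(\ap_0)+\delta=-\ap_0+\delta=\theta$. Moreover, since
$s_0=s_\theta{\cdot}t_{-\theta^\vee}$, we get
$z_I=s_\theta(-\theta^\vee)=\theta^\vee=\rt(I)^\vee$, in agreement with
Theorem~\ref{thm:intr1}, because $\{\theta\}=I(\theta)_{\min}$.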
The next result describes a procedure for extensions of abelian ideals. Namely, if
the rootlet of $I=I_w$ is not simple, then one can construct a larger ideal $I'$ such that
$\# I'=\# I+1$ and $\rt(I')=s_\ap(\rt(I))\prec \rt(I)$ for some $\ap\in\Pi$.
\begin{prop} \label{prop:elem-ext}
Let $w\in\HW$ be minuscule and $\mu=\rt(I_w)$.
Suppose that $\mu\not\in\Pi$ and
take any $\ap\in\Pi$ such that $(\ap,\mu)>0$. Then $s_\ap w$ is again minuscule.
Moreover, the only root in $I_{s_\ap w}\setminus I_w$ belongs to $\gH$.
\end{prop}
\begin{proof}
Set $\mu'=s_\ap(\mu)=s_\ap w(2\delta-\theta)$ and $\mu''=\mu-\ap$. (Note that
$\mu'=\mu''$ if and only if $\ap\in\Pi_l$). Then $w(2\delta-\theta)=\mu''+\ap$ and
$w^{-1}(\mu'')+w^{-1}(\ap)=2\delta-\theta$. Therefore,
$\begin{cases} w^{-1}(\mu'')=k\delta-\mu_1 & \\
w^{-1}(\ap)=(2-k)\delta-\mu_2 & \end{cases}$, where $\mu_1,\mu_2\in \Delta$ and
$\mu_1+\mu_2=\theta$.
\\[.6ex]
This clearly implies that both $\mu_1$ and $\mu_2$ are positive and
hence $\mu_1,\mu_2\in \gH$.
Furthermore, since $w$ is minuscule, both $w^{-1}(\mu'')$ and $w^{-1}(\ap)$ must be
positive. [Indeed, if, say, $w^{-1}(\mu'')$ is negative, then $k\le 0$. Hence $w(\mu_1)=
k\delta-\mu''$ is negative and $\mu_1\in\EuScript N(w)$, which contradicts the definition of
minuscule elements.]
Therefore, since both $k\delta-\mu_1$ and $(2-k)\delta-\mu_2$ are positive affine roots, we have $k\ge 1$ and $2-k\ge 1$, i.e., $k=1$. Then
$w(\delta-\mu_2)=\ap\in\Pi$. Since $\EuScript N(s_\ap w)=\EuScript N(w)\cup \{w^{-1}(\ap)\}$,
we then conclude that
$s_\ap w$ is minuscule and the corresponding abelian ideal is
$I_{s_\ap w}=I_w\cup\{\mu_2\}$.
\\
Note also that $\rt(I_{s_\ap w})=\mu'\prec \mu$.
\end{proof}
\section{Intersections of abelian ideals and posets $\Ab_\mu$}
\label{sect:intersect}
\noindent
In this section, we prove that taking intersection of abelian ideals is compatible with partition~\eqref{eq:parti}.
First of all, we notice that the intersection of any collection of nonzero abelian ideals (regarded
as subsets of $\Delta^+$)
is non-empty, since all these ideals contain the highest root $\theta$. In particular,
if $\mu_1,\dots,\mu_s\in \Delta^+_l$, then
\[
I=\bigcap_{i=1}^s I(\mu_i)_{\min}
\]
is again an abelian ideal. Since $I(\mu_i)_{\min}\subset \gH$ for all $i$, we have
$I\subset \gH$, and therefore $I=I(\mu)_{\min}$ for certain $\mu\in \Delta^+_l$ \cite[Thm.\,4.3]{imrn}.
Since $I(\mu)_{\min}\subset I(\mu_i)_{\min}$, we conclude that $\mu\succcurlyeq\mu_i$
\cite[Cor.\,3.3]{imrn}.
On the other hand, if $\gamma\in\Delta^+_l$ and $\gamma\succcurlyeq\mu_i$ for all $i$, then
$I(\gamma)_{\min}\subset I(\mu_i)_{\min}$ \cite[Thm.\,4.5]{imrn}. Therefore, $I(\gamma)_{\min}\subset I(\mu)_{\min}$, i.e.,
$\gamma\succcurlyeq\mu$. Thus, we have proved
\begin{thm} \label{thm:sup-min}
For any collection $\mu_1,\dots,\mu_s\in \Delta^+_l$,
{\sf (i)} \ there exists a unique long root $\mu$ such that $\mu\succcurlyeq\mu_i$ for all $i$, and if $\gamma\in\Delta^+_l$ and $\gamma\succcurlyeq\mu_i$
for all $i$, then $\gamma\succcurlyeq\mu$;
{\sf (ii)} \ $\bigcap_{i=1}^s I(\mu_i)_{\min}=I(\mu)_{\min}$.
\end{thm}
The root $\mu$ occurring in part (i) is denoted by $\mu_1\vee\ldots\vee\mu_s
=\vee_{i=1}^s\mu_i$. We also say that $\mu$ is the {\it least upper bound\/} or
{\it join\/} of $\mu_1,\dots,\mu_s$.
\begin{rmk}
Clearly, the operation `$\vee$' is associative, and
it suffices to describe the least upper bound for only two (long) roots.
In Appendix~\ref{app:A}, we prove directly that the join exists for {\sl all\/} pairs of roots, not
necessarily long ones, and
give an explicit formula for it.
\end{rmk}
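For instance, in type $\GR{A}{3}$ one has $\ap_1\vee\ap_3=\theta=\ap_1+\ap_2+\ap_3$: since
$\ap_1+\ap_3$ is not a root, $\theta$ is the only positive root dominating both $\ap_1$ and
$\ap_3$.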
We are going to play the same game with arbitrary ideals in $\Ab_{\mu_i}$.
To this end, we need an analogue of
\cite[Thm.\,4.5]{imrn} for the $\mu$-maximal ideals, see Corollary~\ref{cor:1}(i) below.
This can be achieved as follows.
\begin{prop} \label{prop:long-ext}
Let $\mu,\mu'$ be long roots such that $\mu'\prec \mu$. Then
\begin{itemize}
\item[{\sf (i)}] \ for any $I\in\Ab_\mu$, there exists $I'\in \Ab_{\mu'}$ such that
$I'\supset I$ and $\# I'=\# I+(\rho,\mu^\vee-{\mu'}^\vee)$;
\item[{\sf (ii)}] \ moreover, if $I=I_0\subset I_1\subset \ldots\subset I_m=I'$ is any chain of
ideals with $m=(\rho,\mu^\vee-{\mu'}^\vee)$ and
$\# I_j=\# I_{j-1}+1$, then $\rt(I_j)\ne \rt(I_{j-1})$ for all $j$.
\end{itemize}
\end{prop}
\begin{proof}
If $\mu\not\in\Pi_l$ and $\ap\in\Pi$ with $(\ap,\mu)>0$, then a direct calculation shows
that $(\rho,\mu^\vee-s_\ap(\mu)^\vee)=1$. [Use the relations $(\rho,\ap^\vee)=1$ and
$(\ap,\mu^\vee)=1$.]
(i) \ Arguing by induction, one readily proves that if $\mu,\mu'$ are both long and
$\mu'\prec\mu$, then $\mu'$ can be reached
from $\mu$ by a sequence of simple reflections:
\[
\mu=\mu_0\to s_{\gamma_1}(\mu_0)=\mu_1\to s_{\gamma_2}(\mu_1)=\mu_2\to \ldots \to
s_{\gamma_m}(\mu_{m-1})=\mu_m=\mu' ,
\]
where $\gamma_i\in\Pi$ and $(\gamma_i,\mu_{i-1})>0$. The number of steps $m$ equals
$(\rho,\mu^\vee-{\mu'}^\vee)$. If $I\in\Ab_{\mu}$ is arbitrary and $w_I$ is the corresponding
minuscule element, then the repeated application of Proposition~\ref{prop:elem-ext} shows that
$w':=s_{\gamma_1}\ldots s_{\gamma_m}w_I$ is again minuscule and $I'=I_{w'}$ is a required ideal.
(ii) \ Let $w_j\in \HW$ be the minuscule element corresponding to $I_j$.
Then $w_j=s_{i_j}w_{j-1}$ for a sequence $(\ap_{i_1},\dots,\ap_{i_m})$ of affine simple roots.
The corresponding sequence of rootlets is
\[
\mu=\mu_0 \to s_{i_1}\mu_0=\mu_1\to s_{i_2}\mu_1=\mu_2 \to \dots \to \mu_m=\mu' .
\]
If $i_j=0$, i.e., the $j$-th step is the reflection with respect to $\ap_0=\delta-\theta$, then
$\mu_{j-1}=\mu_j$, see \cite[Prop.\,3.2]{imrn}.
For the steps corresponding to $\ap_{i_j}\in\Pi$, the value of $(\rho,\mu_j^\vee)$ is reduced
by at most $1$. Since the total decrease over the $m$ steps equals
$(\rho,\mu^\vee-{\mu'}^\vee)=m$, the sequence $(\ap_{i_1},\dots,\ap_{i_m})$ cannot contain
$\ap_0$, and the value of $(\rho,\mu_j^\vee)$ decreases by exactly $1$ at each step, i.e., all these
rootlets are different.
\end{proof}
\begin{cl} \label{cor:1}
If $\mu,\mu'$ are long roots such that $\mu'\preccurlyeq \mu$, then
\begin{itemize}
\item[{\sf (i)}] \ $I(\mu)_{\max}\subset I(\mu')_{\max}$;
\item[{\sf (ii)}] \ $\#\Ab_{\mu'} \ge \#\Ab_\mu$.
\end{itemize}
\end{cl}
\begin{proof}
(i) \ This readily follows from Proposition~\ref{prop:long-ext}(i) applied to $I=I(\mu)_{\max}$.
\\
(ii) \ Argue by induction on $m=(\rho,\mu^\vee-{\mu'}^\vee)$. For $m=1$, the assertion
follows from Proposition~\ref{prop:elem-ext}.
\end{proof}
\begin{thm} \label{thm:sup-max}
For any set $\{\mu_1,\dots,\mu_s\}\subset \Delta^+_l$ and $\mu=\vee_{i=1}^s\mu_i$, we have
\begin{itemize}
\item[{\sf (i)}] \ $\bigcap_{i=1}^s I(\mu_i)_{\max}=I(\mu)_{\max}$,
\item[{\sf (ii)}] \ If $I_i\in \Ab_{\mu_i}$ for $i=1,\dots,s$, then $\bigcap_{i=1}^s I_i\in \Ab_\mu$.
\item[{\sf (iii)}] \ For every $I\in \Ab_\mu$, there exist $I_i\in\Ab_{\mu_i}$ such that
$I=\bigcap_{i=1}^s I_i$.
\end{itemize}
\end{thm}
\begin{proof}
(i) \ Consider the abelian ideal $I= \bigcap_{i=1}^s I(\mu_i)_{\max}$. Since $I\subset I(\mu_i)_{\max}$, we have $\rt(I)\succcurlyeq \mu_i$ for all $i$, hence $\rt(I)\succcurlyeq \vee_{i=1}^s\mu_i=\mu$.
We also have $I\supset \bigcap_{i=1}^s I(\mu_i)_{\min}=I(\mu)_{\min}$, hence
$\rt(I)\preccurlyeq \mu$ by \cite[Cor.\,3.3]{imrn}.
It follows that $\rt(I)=\mu$ and $I\subset I(\mu)_{\max}$.
Since $\mu\succcurlyeq\mu_i$, by Corollary~\ref{cor:1}(i), we have
$I(\mu)_{\max}\subset I(\mu_i)_{\max}$ for all $i$, and $I(\mu)_{\max}\subset I$.
Thus, $I=I(\mu)_{\max}$.
(ii) \ It follows from Theorem~\ref{thm:sup-min}(ii) and part (i) that
$I(\mu)_{\min} \subset \bigcap_{i=1}^s I_i \subset I(\mu)_{\max}$.
By \cite[Thm.\,3.1(iii)]{imrn}, the intermediate ideal $\bigcap_{i=1}^s I_i $ also belongs to $\Ab_\mu$.
(iii) \
Given $I\in \Ab_\mu$, we construct the ideals $I_i\in \Ab_{\mu_i}$,
$i=1,\dots,s$, as prescribed in Proposition~\ref{prop:long-ext}(i).
Then $I\subset \bigcap_{i=1}^s I_i=:J$ and $\rt(J)=\vee_{i=1}^s \mu_i=\mu$.
That is, $\rt (I)=\rt(J)$. By Proposition~\ref{prop:long-ext}(ii), this is only possible if
$J=I$.
\end{proof}
Combining Theorems~\ref{thm:sup-min} and \ref{thm:sup-max} yields
Theorem~\ref{thm:intr2} in the Introduction.
For any $\gamma\in\Delta^+$, set $I\langle{\succcurlyeq}\gamma\rangle=\{\nu\in\Delta^+\mid
\nu\succcurlyeq\gamma\}$. We also say that $I\langle{\succcurlyeq}\gamma\rangle$
is the {\it principal\/} upper ideal of $\Delta^+$ {\it generated\/} by $\gamma$.
It is not necessarily abelian.
\begin{ex} \label{ex:all-maximal}
Let $\ap_1,\dots,\ap_s$ be the set of all long simple roots.
Then $\vee_{i=1}^s\ap_i=\sum_{i=1}^s\ap_i=\vert\Pi_l\vert$ and
$\{ I(\ap_i)_{\max}\mid i=1,\dots,s\}$ is the set of all maximal abelian ideals in $\Ab$.
Hence $\bigcap_{i=1}^s I(\ap_i)_{\max}$ is an ideal with rootlet
$\vert\Pi_l\vert$.
Inspecting the list of root systems, we notice that the ideal
$\bigcap_{i=1}^s I(\ap_i)_{\min}=I(\vert\Pi_l\vert)_{\min}$ has a nice uniform description.
For any $\gamma=\sum_{i=1}^n a_i\ap_i\in\Delta^+$, we set $[\gamma/2]=\sum_{i=1}^n [a_i/2]\ap_i$. Then
$I(\vert\Pi_l\vert)_{\min}$ is the upper ideal of $\Delta^+$ generated by the root $\theta-[\theta/2]$. (It is true that $\theta-[\theta/2]$ is always a root in $\gH$.)
In the {\bf A-D-E} case, we have $\vert \Pi_l\vert=|\Pi |$ and hence $(\theta, |\Pi_l |)\ne 0$.
In fact, $(\theta, |\Pi_l |)\ne 0$ for all simple Lie algebras except type $\GR{C}{n}$, $n\ge 2$.
The condition $(\theta, |\Pi_l |)\ne 0$ implies that $\# \Ab_{|\Pi_l |}=1$ \cite[Thm.\,5.1]{imrn}, i.e.,
$I(|\Pi_l |)_{\min}=I(|\Pi_l |)_{\max}$ if $\g$ is not of type $\GR{C}{n}$.
\end{ex}
\begin{rmk} \label{rmk:comm-roots}
The interest in $[\theta/2]$ is also justified by the following observations.
As in \cite{rodstv}, we say that $\gamma\in\Delta^+$ is {\it commutative}, if the
$\be$-submodule of $\g$ generated by $\g_\gamma$ is an abelian ideal; equivalently, if
the upper ideal $I\langle{\succcurlyeq}\gamma\rangle$
is abelian. Let $\Delta^+_{\mathsf{com}}$ denote the set of all commutative roots.
Clearly, $\Delta^+_{\mathsf{com}}=\bigcup_{\ap_i\in\Pi_l}I(\ap_i)_{\max}$.
It was noticed in \cite[Thm.\,4.4]{rodstv} that $\Delta^+\setminus \Delta^+_{\mathsf{com}}$ has a
unique maximal element, and this maximal element is $[\theta/2]$.
For any $\gamma\in\Delta^+$, it appears to be true that $[\gamma/2]\in \Delta^+\cup\{0\}$ and
$\gamma-[\gamma/2]\in\Delta^+$. It would be interesting to have a conceptual explanation
for this.
\end{rmk}
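For instance, in type $\GR{G}{2}$ (with the Bourbaki numbering, $\ap_1$ short) one has
$\theta=3\ap_1+2\ap_2$, so that $[\theta/2]=\ap_1+\ap_2$ and
$\theta-[\theta/2]=2\ap_1+\ap_2$, and both are indeed positive roots.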
\section{Some properties of posets $\Ab_\mu$}
\label{sect:sovpad}
Let $I\subset \Delta^+$ be an abelian ideal and $w_I=v{\cdot}t_r\in\HW$ the
corresponding minuscule element. Recall that $v\in W$ and $r\in Q^\vee$.
We have associated two objects to these data:
the rootlet $\rt(I)=w_I(2\delta-\theta)\in \Delta^+_l\subset Q$ and the element
$z_I:=v(r)\in Q^\vee$.
\begin{thm}
For an abelian ideal $I$, the following conditions are equivalent:
\begin{itemize}
\item[\sf (i)] \ $\rt(I)^\vee=z_I$;
\item[\sf (ii)] \ $I=I(\mu)_{\min}$ \ for $\mu=\rt(I)$.
\end{itemize}
\end{thm}
\begin{proof}
1) \ Suppose that $I=I(\mu)_{\min}$. By \cite[Thm.\,4.3]{imrn},
$w_I=v_\mu s_0$, where $v_\mu \in W$ is the unique element of minimal length such
that $v_\mu(\theta)=\mu$. Here $\ell(v_\mu)=(\rho,\theta^\vee-\mu^\vee)$. It is easily seen that for $w=s_0$
decomposition~\eqref{eq:decomp-w} is $s_0=s_\theta{\cdot}t_{-\theta^\vee}$, where
$s_\theta\in W$ is the reflection with respect to $\theta$. Hence
the linear part of $w_I$ is $v_\mu s_\theta$ and
$r=-\theta^\vee$.
Therefore, $v_\mu s_\theta(-\theta^\vee)=v_\mu(\theta^\vee)=\mu^\vee$, as required.
2) \ Conversely, if $\rt(I)=\mu$ and $I\ne I(\mu)_{\min}$, then $z_I\ne z_{I(\mu)_{\min}}$.
By the first part, we have
$z_{I(\mu)_{\min}}=\mu^\vee $. Thus, $z_I\ne \rt(I)^\vee$.
\end{proof}
Applying formulae~\eqref{eq:decomp-w} and \eqref{eq:general-aff-lin} to arbitrary minuscule $w_I$,
we obtain
\[
w_I(2\delta-\theta)=v{\cdot}t_r(2\delta-\theta)= v(2\delta+(\theta,r)\delta-\theta)=
-v(\theta)+(2+(\theta,r))\delta .
\]
As we know that $\rt(I)\in \Delta^+$, one must have $(\theta,r)=-2$ and
$-v(\theta)\in\Delta^+$. Therefore, the equality $\rt(I)^\vee=z_I$ is equivalent to
$v(r)=-v(\theta^\vee)$, i.e., $r=-\theta^\vee$.
This can be summarised as follows:
\noindent
{\it If $w_I=v{\cdot}t_r\in \HW$ is minuscule, then $(\theta,r)=-2$. Moreover,
$\rt(I)^\vee=z_I$ if and only if $r=-\theta^\vee$, i.e., $r$ is the shortest element in the affine hyperplane $\{x\in V\mid (x,\theta)=-2\}$.}
The theory developed in \cite[Sect.\,4]{imrn} yields, in principle, a very good understanding of
$\mu$-minimal ideals. In particular, an abelian ideal $I$ is minimal in some $\Ab_\mu$ if and only if
$I\subset \gH=\{\nu\in\Delta^+ \mid (\nu,\theta)\ne 0\}$ \cite[Thm.\,4.3]{imrn}.
The other ideals in $\Ab_\mu$ can be characterised as follows.
\begin{prop} \label{prop:other-in-fibre}
For $\mu\in\Delta^+_l$ and $I\in\Ab$, we have
$I\in\Ab_\mu$ if and only if $I\cap\gH=I(\mu)_{\min}$.
\end{prop}
\begin{proof}
($\Rightarrow$) Since $I(\mu)_{\min}\subset \gH$, we have $I(\mu)_{\min}\subset I\cap\gH$.
Moreover, $I\cap\gH=I(\mu')_{\min}$ for some $\mu'\in\Delta^+_l$.
Then $I(\mu)_{\min}\subset I(\mu')_{\min}\subset I$. By \cite[Cor.\,3.3]{imrn}, this yields
opposite inequalities for the rootlets, i.e.,
$\mu\succcurlyeq\mu'\succcurlyeq \mu$.
($\Leftarrow$) If $\mu'=\rt(I)$, then $I\cap\gH=I(\mu')_{\min}$ according to the previous part.
Hence $\mu'=\mu$.
\end{proof}
This means that partition \eqref{eq:parti} can also be defined via the following equivalence relation on $\Abo$: \
$I\sim J$ if and only if $I\cap\gH=J\cap\gH$. However, this definition does not explain which long
root is associated with a given equivalence class. One also sees
that all the ideals in $\Ab_\mu$ can be obtained from $I(\mu)_{\min}$ by adding
suitable roots outside $\gH$. In particular,
$I(\mu)_{\max}$ is maximal among all abelian ideals having the prescribed intersection,
$I(\mu)_{\min}$, with $\gH$.
Our next goal is to compare the upper ideals $I\langle{\succcurlyeq}\mu\rangle$ and
$I(\mu)_{\max}$ ($\mu\in\Delta^+_l$). This will be achieved in two steps.
\begin{prop} \label{prop:mu_min-inclus}
For any $\mu\in\Delta^+_l$, we have $I(\mu)_{\min}\subset I\langle{\succcurlyeq}\mu\rangle$.
\end{prop}
\begin{proof}
As above, $v_\mu\in W$ is the element of minimal length such that $v_\mu(\theta)=\mu$
and $w=v_\mu s_0$ is the minuscule element for $I(\mu)_{\min}$. Then
$I(\mu)_{\min}=\{\gamma\in\Delta^+\mid -\gamma+\delta\in \EuScript N(w)\}$ and
$\EuScript N(w)=\{\ap_0\}\cup s_0(\EuScript N(v_\mu))$.
Therefore, if $\gamma\in I(\mu)_{\min}$, then either $\gamma=\theta$, or
$-\gamma+\delta\in s_0(\EuScript N(v_\mu))$, i.e., $\theta-\gamma\in \EuScript N(v_\mu)$.
Clearly, $\EuScript N(v_\mu^{-1})=- v_\mu(\EuScript N(v_\mu))$.
Hence $\theta-\gamma\in \EuScript N(v_\mu)$ if and only if
$-v_\mu(\theta-\gamma)=v_\mu(\gamma)-\mu\in \EuScript N(v_\mu^{-1})$.
Consequently,
\[
\gamma\in I(\mu)_{\min} \ \ \& \ \ \gamma\ne\theta \ \Leftrightarrow \ v_\mu(\gamma)-\mu\in
\EuScript N(v_\mu^{-1}) .
\]
Set $\nu=v_\mu(\gamma)-\mu$. Then $\gamma=v_\mu^{-1}(\nu+\mu)=\theta+v_\mu^{-1}(\nu)$. Hence our goal is to prove that
\\[.8ex]
\hbox to \textwidth{\ $(\ast)$ \hfil
{\it for any $\nu\in \EuScript N(v_\mu^{-1})$, one has
$\theta+v_\mu^{-1}(\nu)\succcurlyeq \mu$.} \hfil }
We will argue by induction on $\ell(v_\mu)=(\rho, \theta^\vee-\mu^\vee)$. To perform the induction step, assume that $\mu\not\in\Pi_l$ and $(\ast)$ is satisfied. Take any $\ap\in\Pi$
such that $(\ap,\mu)>0$ and set $\mu':=s_\ap(\mu)\prec \mu$.
Consider $v_{\mu'}=s_\ap v_\mu$, which corresponds to the minuscule element
$w'=s_\ap w=v_{\mu'} s_0$ (Proposition~\ref{prop:elem-ext}) and the larger abelian ideal
$I(\mu')_{\min}$. Then
\[
\EuScript N(v_{\mu'}^{-1})=\{\ap\}\cup s_\ap(\EuScript N(v_\mu^{-1})) .
\]
Thus, to prove the analogue of $(\ast)$ for $\nu'\in \EuScript N(v_{\mu'}^{-1})$, we have to handle two possibilities:
a) \ $\nu'=s_\ap(\nu)$ for $\nu\in \EuScript N(v_\mu^{-1})$. \\
Then $\theta+v_{\mu'}^{-1}(\nu')=\theta+v_\mu^{-1}(\nu)\succcurlyeq \mu \succ \mu'$, as required.
b) \ $\nu'=\ap$. \\
We have to prove here that $\theta+v_{\mu'}^{-1}(\ap)=\theta-v_\mu^{-1}(\ap)\succcurlyeq \mu'=s_\ap(\mu)$.
To this end, take a reduced decomposition
$v_\mu^{-1}=s_{\gamma_k}{\cdots} s_{\gamma_1}$, where
$\{ \gamma_1,\ldots,\gamma_k\}$ is a multiset of simple roots.
Recall that $v_\mu^{-1}(\mu)=\theta$ and $k=(\rho,\theta^\vee-\mu^\vee)$.
Since $(\rho, s_\ap(\nu)^\vee-\nu^\vee)\in \{-1,0,1\}$ for any $\nu\in\Delta^+_l$ and $\ap\in\Pi$, the chain of roots
\[
\mu_0=\mu,\ \mu_1=s_{\gamma_1}(\mu),\ \mu_2=s_{\gamma_2}s_{\gamma_1}(\mu),\dots,\ \mu_k=\theta ,
\]
has the property that $\mu_i\prec \mu_{i+1}$ and
each simple reflection $s_{\gamma_i}$ increases the ``level'' $(\rho, (\cdot)^\vee)$ by $1$. Then
we must have $\theta=\mu+\sum_{i=1}^k n_i\gamma_i$, where \\
$n_i=\begin{cases} 1 & \text{if $\gamma_i$ is long} \\
|\! |{\rm long}|\! |^2 / |\! | {\rm short} |\! |^2 & \text{if $\gamma_i$ is short}.
\end{cases}$
\\
We also have $s_\ap(\mu)=\mu-(\mu,\ap^\vee)\ap$ and
$v_{\mu}^{-1}(\ap)\preccurlyeq \ap + \sum_{i=1}^k n_i\gamma_i$. Whence
\[
v_\mu^{-1}(\ap)+ s_\ap(\mu) \preccurlyeq \mu + \sum_{i=1}^k n_i\gamma_i +(1-(\mu,\ap^\vee))\ap \preccurlyeq \theta .
\]
This completes the induction step and the proof of the proposition.
\end{proof}
\begin{thm} \label{thm:mu_max-inclus}
For any $\mu\in\Delta^+_l$, we have $I(\mu)_{\max}\subset I\langle{\succcurlyeq}\mu\rangle$.
In particular, if $I\in\Ab_\mu$, then $I\subset I\langle{\succcurlyeq}\mu\rangle$.
\end{thm}
\begin{proof}
Suppose that $\gamma\in I(\mu)_{\max}$. In particular, $\gamma$ is a commutative root.
\\
\textbullet\quad If $\gamma\in\Delta^+_l$, then the ideal $I(\gamma)_{\min}$ is well-defined and
\[
I(\gamma)_{\min}\ \underset{Prop.~\ref{prop:mu_min-inclus}}{\subset}\ I\langle{\succcurlyeq}\gamma\rangle\cap \gH\ \ {\subset} \ \
I(\mu)_{\max}\cap \gH\underset{Prop.~\ref{prop:other-in-fibre}}{=}I(\mu)_{\min} .
\]
By \cite[Thm.\,4.5]{imrn}, we conclude that $\gamma\succcurlyeq\mu$. (This completes the proof in the {\bf A-D-E} case!)
\\
\textbullet\quad If $\gamma$ is short and $\gamma\in \gH$, then $\gamma\in I(\mu)_{\min}\subset
I\langle{\succcurlyeq}\mu\rangle$ by Propositions~\ref{prop:other-in-fibre} and
\ref{prop:mu_min-inclus}.
\\
\textbullet\quad The remaining possibility is that $\gamma$ is short and $\gamma\not\in \gH$.
But there are no such commutative roots for $\GR{B}{n},\GR{F}{4},\GR{G}{2}$. (For
$\GR{B}{n}$, the only short commutative root is $\esi_1$ and $\theta=\esi_1+\esi_2$.)
For $\GR{C}{n}$, such commutative roots are of the form $\gamma=\esi_i+\esi_j$
with $2\le i<j\le n$. Here $\gH=\{\esi_1\pm\esi_j\mid 2\le j\le n\}\cup\{2\esi_1\}$ and $I\langle {\succcurlyeq}\esi_i+\esi_j\rangle\cap\gH=I\langle {\succcurlyeq}\esi_1+\esi_j\rangle$.
Then using Proposition~\ref{prop:other-in-fibre} shows that
$\rt \bigl(I\langle {\succcurlyeq}\esi_i+\esi_j\rangle\bigr)=2\esi_j$. Clearly, we have
$\esi_i+\esi_j\succcurlyeq 2\esi_j$.
(As usual, the simple roots of $\GR{C}{n}$ are $\esi_1-\esi_2,\dots,\esi_{n-1}-\esi_n,2\esi_n$.)
\end{proof}
\begin{rmk}
If $\g$ is of type $\GR{A}{n}$ or $\GR{C}{n}$, then
$I(\mu)_{\max}= I\langle{\succcurlyeq}\mu\rangle$ for all $\mu\in\Delta^+_l$. For all other
types, this is not always the case.
\end{rmk}
\section{Centralisers of abelian ideals}
\label{sect:central}
\noindent In this section, we mostly regard abelian ideals as subspaces $\ah$ of $\ut$.
Accordingly, for $\mu\in\Delta^+_l$, the minimal and maximal elements
of $\Ab_\mu$ are denoted by $\ah(\mu)_{\min}$ and $\ah(\mu)_{\max}$, respectively.
If $\ce\subset \g$ is a subspace, then $\z_\g(\ce)$ denotes the {\it centraliser\/} of
$\ce$ in $\g$. If $\ce$ is $\be$-stable, then so is $\z_\g(\ce)$.
If $\ah\in\Ab$, then $\z_\g(\ah)$ is a $\be$-stable subalgebra of $\g$ and
$\z_\g(\ah)\supset \ah$. However, $\z_\g(\ah)$ may contain semisimple elements
and it can happen that $\z_\g(\ah)\not\subset \be$.
Consider the following properties of abelian ideals:
\noindent
(P1): \ $\z_\g(\ah)$ belongs to $\ut$; \
(P2): \ $\z_\g(\ah)$ is a sum of abelian ideals; \
(P3): \ $\z_\g(\ah)$ is an abelian ideal.
Clearly, (P3)$\Rightarrow$(P2)$\Rightarrow$(P1). If (P1) holds, then $\z_\g(\ah)$ is determined by
the corresponding set of roots (upper ideal) $I_{\z_\g(\ah)}\subset \Delta^+$. In this situation,
(P2) is equivalent to the condition that every root in $I_{\z_\g(\ah)}$ is commutative.
\noindent
We say that $\ah$ is of {\it full rank} if $I_\ah$ contains $n$ linearly independent roots ($n=\rk\g$).
\begin{lm} \label{lm:full-rang}
Let $\ah\in\Ab$. Then $\z_\g(\ah)\subset \ut$ if and only if $\ah$ is of full rank.
\end{lm}
\begin{proof}
If $\ah$ is not of full rank, then $\z_\g(\ah)\cap\te\ne 0$. If $\ah$ is of full rank, then
$\z_\g(\ah)\cap \te=0$ and $\z_\g(\ah)$ is $\be$-stable. Therefore, $\z_\g(\ah)$ cannot contain
root spaces corresponding to negative roots.
\end{proof}
\begin{rema}
This assertion is true for any $\be$-ideal $\ah\subset\n$, not necessarily abelian.
\end{rema}
\begin{lm} \label{lm:sum-abelian}
$\z_\g(\ah)$ is a sum of abelian ideals if and only if\/ $\ah$ is of full rank and
$\theta-[\theta/2]\in I_\ah$.
\end{lm}
\begin{proof} The root space $\g_{[\theta/2]}$ belongs to $\z_\g(\ah)$ if and only if
$\theta-[\theta/2]\not\in I_\ah$.
The rest follows from the fact that $[\theta/2]$ is the unique maximal
noncommutative root.
\end{proof}
Recall that $\{\ah(\ap)_{\max} \mid \ap\in \Pi_l\}$ is the complete set of
maximal abelian ideals.
For any $\ah\in\Ab$, $\z_\g(\ah)$ contains the sum of all maximal abelian ideals that
contain $\ah$. Therefore, if $\z_\g(\ah)$ is an abelian ideal, then
$\z_\g(\ah)=\ah(\ap)_{\max}$ for some $\ap\in\Pi_l$ and $\ah(\ap)_{\max}$ is
the only maximal abelian ideal containing $\ah$.
\begin{lm} \label{lm:3}
An abelian ideal $\ah$ belongs to a unique maximal abelian ideal if and only if
there is a unique $\ap\in\Pi_l$ such that $\rt(\ah)\succcurlyeq \ap$. In particular, in the simply-laced case, the last condition means precisely that $\rt(\ah)\in \Pi_l$.
\end{lm}
\begin{proof}
Follows from the fact that the inclusion $\ah\subset\tilde\ah$ implies that $\rt(\ah)\succcurlyeq
\rt(\tilde\ah)$, see \cite[Cor.\,3.3]{imrn}. (Cf. also \cite[Thm.\,2.6(3)]{imrn}.)
\end{proof}
Note that {\it if $\ah$ is a maximal abelian ideal, then $\z_\g(\ah)=\ah$ and thereby
$\z_\g(\ah)$ is an abelian ideal}. For, if $\z_\g(\ah)\supsetneqq\ah$ and $\gamma$ is a
maximal element in $I_{\z_\g(\ah)}\setminus I_\ah$, then $\ah\oplus \g_\gamma$ would be
a larger abelian ideal!
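This self-centralising property is easy to test numerically in the standard matrix realisation of type $\GR{A}{3}$, where $\g=\mathfrak{sl}_4$ and the maximal abelian ideals of $\be$ are the upper-right block corners. The sketch below (Python with NumPy assumed available; an illustration only) computes $\dim\z_\g(\ah)$ for the $2\times 2$ corner block as the null space of the linear map $X\mapsto\bigl([X,A]\bigr)_{A\in\ah}$:

```python
import numpy as np

def E(i, j):
    M = np.zeros((4, 4))
    M[i, j] = 1.0
    return M

# Basis of sl_4: the twelve off-diagonal E_ij plus H_i = E_ii - E_{i+1,i+1}.
basis = [E(i, j) for i in range(4) for j in range(4) if i != j]
basis += [E(i, i) - E(i + 1, i + 1) for i in range(3)]    # dim sl_4 = 15

# Maximal abelian ideal: the upper-right 2x2 corner, span{E_13, E_14, E_23, E_24}.
ideal = [E(i, j) for i in range(2) for j in range(2, 4)]

# Centraliser = {X in sl_4 : [X, A] = 0 for all A in the ideal};
# stack one 16 x 15 block of the linear system per generator A.
blocks = [np.array([(B @ A - A @ B).ravel() for B in basis]).T for A in ideal]
system = np.vstack(blocks)                                 # (4*16) x 15

dim_centraliser = 15 - np.linalg.matrix_rank(system)
```

The null space is 4-dimensional, i.e., the centraliser is exactly the ideal itself, as asserted.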
To get a general answer, we need one more preliminary result.
\begin{lm} \label{lm:4}
For any $\ap\in\Pi_l$, the ideal $\ah(\ap)_{\min}$ is of full rank.
\end{lm}
\begin{proof}
By \cite[Thm.\,4.3]{imrn}, the corresponding minuscule element $w\in\HW$
equals $v_\ap s_0$, where $v_\ap\in W$ is the unique element of minimal length
taking $\theta$ to $\ap$.
Since $v_\ap(\theta)=\ap$, any reduced decomposition of $v_\ap$ contains all simple reflections
corresponding to $\Pi\setminus \{\ap\}$. Therefore $w$ contains reflections corresponding
to $n=\#(\Pi)$ linearly independent roots. This easily implies that the inversion set
$\EuScript N(w)$ contains $n$ linearly independent affine roots. Hence $\ah(\ap)_{\min}$ is of full rank.
\end{proof}
\begin{thm}
Let $\ah\in\Ab$. The following conditions are equivalent:
\begin{itemize}
\item[\sf (1)] \ $\z_\g(\ah)$ is an abelian ideal;
\item[\sf (2)] \ $\z_\g(\ah)=\ah(\ap)_{\max}$ \ for some $\ap\in\Pi_l$;
\item[\sf (3)] \ $\rt(\ah)\in\Pi_l$.
\end{itemize}
\end{thm}
\begin{proof}
(1)$\Rightarrow$(2): \ See the paragraph preceding Lemma~\ref{lm:3}.
(2)$\Rightarrow$(1): \ Obvious.
(2)$\Rightarrow$(3): \ Here $\ah(\ap)_{\max}$ is the only maximal abelian ideal that contains $\ah$. Therefore, in the simply-laced case, the assertion follows from Lemma~\ref{lm:3}.
For the non-simply-laced case, assume that $\rt(\ah)=\gamma\not\in\Pi_l$, but still
$\gamma$ majorizes a unique long simple root. Then $\gamma$ also majorizes a short simple
root, whence $\gamma\not\preccurlyeq |\Pi_l|$.
We claim that $\theta-[\theta/2]\not\in I_\ah$, and thereby $\z_\g(\ah)$ is not a sum of abelian ideals, in view of Lemma~\ref{lm:sum-abelian}.
Indeed, assume that $\theta-[\theta/2]\in I_\ah$. Then $I_\ah$ contains the upper ideal of
$\Delta^+$ generated by $\theta-[\theta/2]$, which is exactly
$\bigcap_{\ap\in\Pi_l}I(\ap)_{\min}=I(|\Pi_l|)_{\min}$, see Example~\ref{ex:all-maximal}.
Then the inclusion $I_\ah\supset I(|\Pi_l|)_{\min}$ implies that
$\gamma=\rt(\ah)\preccurlyeq |\Pi_l|$, a contradiction!
(3)$\Rightarrow$(2): \ It suffices to prove that the centraliser of $\ah(\ap)_{\min}$
equals $\ah(\ap)_{\max}$ for any $\ap\in\Pi_l$.
To this end, we have to check that:
(i) \ $\z_\g(\ah(\ap)_{\min})$ contains no semisimple
elements of $\g$ (i.e., $\ah(\ap)_{\min}$ is of full rank), and
(ii) \ the nilpotent subalgebra $\z_\g(\ah(\ap)_{\min})$ cannot be larger
than $\ah(\ap)_{\max}$, i.e., for any
$\gamma\in \max \bigl(\Delta^+\setminus I(\ap)_{\max}\bigr)$,
there exists a $\nu\in I(\ap)_{\min}$ such that $\gamma+\nu\in\Delta^+$.
For (i): This is Lemma~\ref{lm:4}.
For (ii): This property follows from a precise relation between
$\max \bigl(\Delta^+\setminus I(\ap)_{\max}\bigr)$ and $\min(I(\ap)_{\min})$, which implies that such a $\nu$ can be chosen to be a {\sl minimal\/} element of $I(\ap)_{\min}$,
see Theorem~\ref{thm:stunning} below.
\end{proof}
\begin{thm} \label{thm:stunning}
For $\ap\in\Pi_l$, there is a one-to-one correspondence between
$\min\bigl( I(\ap)_{\min}\bigr)$ and $\max \bigl(\Delta^+\setminus I(\ap)_{\max}\bigr)$.
Namely, for every
$\nu \in \min\bigl( I(\ap)_{\min}\bigr)$, there is $\nu'\in
\max \bigl(\Delta^+\setminus I(\ap)_{\max}\bigr)$ such that $\nu+\nu'=\theta$;
and vice versa.
\end{thm}
\begin{proof}
The only proof I know consists of case-by-case considerations.
The minimal elements of all maximal abelian ideals, i.e., the ideals
$I(\ap)_{\max}$, $\ap\in\Pi_l$, are indicated in \cite[Sect.\,4]{pr}. Then it is not hard to determine the
maximal elements of the complements $\Delta^+\setminus I(\ap)_{\max}$. The minimal elements of $I(\ap)_{\min}$ can be found via the properties of the corresponding minuscule element,
see \cite[Prop.\,4.6]{imrn}. Another possibility is to use the relation $I(\ap)_{\max}\cap \gH=
I(\ap)_{\min}$, see Prop.~\ref{prop:other-in-fibre}.
We provide relevant data in the two extreme cases---the most classical ($\GR{A}{n}$) and
most exceptional ($\GR{E}{8}$).
As usual, $\Delta^+(\GR{A}{n})=\{ \esi_i-\esi_j \mid 1\le i<j\le n+1\}$, and
$\ap_i=\esi_i-\esi_{i+1}$. Here \\
$\min\bigl( I(\ap_i)_{\max}\bigr){=}\{\ap_i\}$ and $\gH=\{\esi_i-\esi_j \mid i=1 \text{ or } j=n+1\}$. Therefore
$\max\bigl(\Delta^+\setminus I(\ap_i)_{\max}\bigr)=\{\esi_1{-}\esi_i, \esi_{i+1}{-}\esi_{n+1} \}$,
$\min\bigl( I(\ap_i)_{\min}\bigr)=\{\esi_i{-}\esi_{n+1}, \esi_1{-}\esi_{i+1} \}$.
\\
The respective roots in each row above sum to $\theta=\esi_1-\esi_{n+1}$.
For $\GR{E}{8}$, we use the natural numbering of $\Pi$, i.e.,
$\left(\text{\begin{E8}{1}{2}{3}{4}{5}{6}{7}{8}\end{E8}}\right)$. The root
$\gamma=\sum_{i=1}^8 n_i\ap_i$ is denoted by $n_1 n_2\dots n_8$.
Then $\theta=23456423$ and $\gamma\in\gH$ if and only if $n_1\ne 0$. The respective maximal and minimal elements are gathered in Table~\ref{tabl-E}, and we see that the sum of elements in the 3rd and 4th columns always equals $\theta$.
\end{proof}
\begin{table}[htb]
\begin{center}
\caption{Data for the root system of type $\GR{E}{8}$}
\begin{tabular}{cccc} \label{tabl-E}
$i$ & $\min\bigl( I(\ap_i)_{\max}\bigr)$ & $\min\bigl( I(\ap_i)_{\min}\bigr)$ &
$\max\bigl(\Delta^+\setminus I(\ap_i)_{\max}\bigr)$ \\ \hline
1 & 12222101 & 12222101 & 11234322 \\ \cline{2-4}
2 & 12222111 & 12222111 & 11234312 \\
& 01234322 & 11234322 & 12222101 \\ \cline{2-4}
3 & 12222211 & 12222211 & 11234212 \\
& 01234312 & 11234312 & 12222111 \\ \cline{2-4}
4 & 12223211 & 12223211 & 11233212 \\
& 01234212 & 11234212 & 12222211 \\ \cline{2-4}
& 12223212 & 12223212 & 11233211 \\
5 & 12233211 & 12233211 & 11223212 \\
& 01233212 & 11233212 & 12223211 \\ \cline{2-4}
6 & 12333211 & 12333211 & 11123212 \\
& 01223212 & 11223212 & 12233211 \\ \cline{2-4}
7 & 00123212 & 11123212 & 12333211 \\ \cline{2-4}
8 & 01233211 & 11233211 & 12223212 \\ \hline
\end{tabular}
\end{center}
\end{table}
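The key assertion of Theorem~\ref{thm:stunning} for $\GR{E}{8}$ -- that the entries in the last two columns of Table~\ref{tabl-E} pair off with sum $\theta$ -- is a purely mechanical check. A short Python sketch (the root strings are transcribed directly from the table):

```python
theta = [2, 3, 4, 5, 6, 4, 2, 3]   # theta = 23456423 in the chosen numbering of Pi

# (minimal element of I(alpha_i)_min, matching maximal element of the complement),
# one pair per data row of the table.
pairs = [
    ("12222101", "11234322"),
    ("12222111", "11234312"), ("11234322", "12222101"),
    ("12222211", "11234212"), ("11234312", "12222111"),
    ("12223211", "11233212"), ("11234212", "12222211"),
    ("12223212", "11233211"), ("12233211", "11223212"), ("11233212", "12223211"),
    ("12333211", "11123212"), ("11223212", "12233211"),
    ("11123212", "12333211"),
    ("11233211", "12223212"),
]

all_sum_to_theta = all(
    [int(a) + int(b) for a, b in zip(nu, nu_prime)] == theta
    for nu, nu_prime in pairs
)
```

All fourteen pairs indeed add up, coordinate by coordinate, to $23456423$.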
Theorem~\ref{thm:stunning} is not true for arbitrary long roots in place
of $\ap\in\Pi_l$. However, it can be slightly extended as follows.
\begin{thm} \label{thm:modification}
Let $\gS$ be any connected subset of $\Pi_l$. Then there is
a one-to-one correspondence between
$\min\bigl(\bigcap_{\ap_i\in\gS} I(\ap_i)_{\min}\bigr)$ and
$\max\bigl(\bigcap_{\ap_i\in\gS}(\Delta^+\setminus I(\ap_i)_{\max})\bigr)$.
Namely, for every
$\nu \in \min\bigl(\bigcap_{\ap_i\in\gS} I(\ap_i)_{\min}\bigr)$, there is
$\nu'\in \max \bigl(\bigcap_{\ap_i\in\gS}(\Delta^+\setminus I(\ap_i)_{\max})\bigr)$
such that $\nu+\nu'=\theta$.
\end{thm}
Again, the proof is based on direct calculations, which are omitted.
It's a challenge to find a conceptual argument, at least in the setting of Theorem~\ref{thm:stunning}.
\begin{ex}
For $\# \gS=1$, we have Theorem~\ref{thm:stunning}. At the other extreme, if $\gS=\Pi_l$, then $\bigcap_{\ap_i\in\Pi_l}(\Delta^+\setminus I(\ap_i)_{\max})=
\Delta^+\setminus \bigcup_{\ap_i\in\Pi_l}I(\ap_i)_{\max}=
\Delta^+\setminus \Delta^+_{\mathsf{com}}$.
Therefore,
\[
\max\Bigl(\bigcap_{\ap_i\in\Pi_l}\bigl(\Delta^+\setminus I(\ap_i)_{\max}\bigr)\Bigr)=\{[\theta/2]\} .
\]
Also, $\bigcap_{\ap_i\in\Pi_l} I(\ap_i)_{\min}=I(|\Pi_l|)_{\min}$ and the unique minimal element of this ideal is $\theta-[\theta/2]$, see Example~\ref{ex:all-maximal}.
Thus, an a priori proof of Theorem~\ref{thm:modification} would provide an explanation
of properties of abelian ideals with rootlet $|\Pi_l|$, cf. Example~\ref{ex:all-maximal} and
Remark~\ref{rmk:comm-roots}.
\end{ex}
| {
"timestamp": "2012-05-29T02:01:50",
"yymm": "1205",
"arxiv_id": "1205.5983",
"language": "en",
"url": "https://arxiv.org/abs/1205.5983",
"abstract": "Let $g$ be a simple Lie algebra and $Ab$ the poset of non-trivial abelian ideals of a fixed Borel subalgebra of $g$. In 2003 (IMRN, no.35, 1889--1913), we constructed a partition of $Ab$ into the subposets $Ab_\\mu$, parameterised by the long positive roots of $g$, and established some properties of these subposets. In this note, we show that this partition is compatible with intersections, relate it to the Kostant-Peterson parameterisation of abelian ideals and to the centralisers of abelian ideals. We also prove that the poset of positive roots of $g$ is a join-semilattice.",
"subjects": "Representation Theory (math.RT); Combinatorics (math.CO)",
"title": "Abelian ideals of a Borel subalgebra and root systems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.983085087724427,
"lm_q2_score": 0.7217432182679957,
"lm_q1q2_score": 0.7095349950455029
} |
https://arxiv.org/abs/0805.3795 | Approximating with Gaussians | Linear combinations of translations of a single Gaussian, e^{-x^2}, are shown to be dense in L^2(R). Two algorithms for determining the coefficients for the approximations are given, using orthogonal Hermite functions and least squares. Taking the Fourier transform of this result shows low-frequency trigonometric series are dense in L^2 with Gaussian weight function. | \section{Linear combinations of Gaussians with a single variance are dense in
$L^{2}$}
$L^{2}\left( \mathbb{R}\right) $ denotes the space of square integrable
functions $f:\mathbb{R}\rightarrow\mathbb{R}$ with norm $\left\| f\right\|
_{2}:=\sqrt{\int_{\mathbb{R}}\left| f\left( x\right) \right| ^{2}dx}$. We
use $f\underset{\epsilon}{\approx}g$ to mean $\left\| f-g\right\|
_{2}<\epsilon$. The following result was announced in \cite{CalcGaussians}.
\begin{theorem}
\label{ThmBumps}For any $f\in L^{2}\left( \mathbb{R}\right) $ and any
$\epsilon>0$ there exist $t>0$, $N\in\mathbb{N}$, and $a_{n}\in\mathbb{R}$
such that%
\[
f\underset{\epsilon}{\approx}\ \overset{N}{\underset{n=0}{%
{\textstyle\sum}
}}a_{n}e^{-\left( x-nt\right) ^{2}}\text{.}%
\]
\end{theorem}
\begin{proof}
Since the span of the Hermite functions is dense in $L^{2}\left(
\mathbb{R}\right) $ we have for some $N$%
\begin{equation}
f\underset{\epsilon/2}{\approx}\ \overset{N}{\underset{n=0}{%
{\textstyle\sum}
}}b_{n}\frac{d^{n}}{dx^{n}}\left( e^{-x^{2}}\right) \text{.}%
\label{PfMainHermite}%
\end{equation}
Now use finite backward differences to approximate the derivatives. We have
for some small $t>0$%
\begin{align}
& \overset{N}{\underset{n=0}{%
{\textstyle\sum}
}}b_{n}\frac{d^{n}}{dx^{n}}\left( e^{-x^{2}}\right) \nonumber\\
& \underset{\epsilon/2}{\approx}b_{0}e^{-x^{2}}+b_{1}\tfrac{1}{t}\left[
e^{-x^{2}}-e^{-\left( x-t\right) ^{2}}\right] +b_{2}\tfrac{1}{t^{2}}\left[
e^{-x^{2}}-2e^{-\left( x-t\right) ^{2}}+e^{-\left( x-2t\right) ^{2}%
}\right] \nonumber\\
& +b_{3}\tfrac{1}{t^{3}}\left[ e^{-x^{2}}-3e^{-\left( x-t\right) ^{2}%
}+3e^{-\left( x-2t\right) ^{2}}-e^{-\left( x-3t\right) ^{2}}\right]
+\cdots\nonumber\\
& =\overset{N}{\underset{n=0}{%
{\textstyle\sum}
}}b_{n}\frac{1}{t^{n}}\overset{n}{\underset{k=0}{%
{\textstyle\sum}
}}\left( -1\right) ^{k}\binom{n}{k}e^{-\left( x-kt\right) ^{2}}%
\text{.}\label{PfMainCoeffs}%
\end{align}
\end{proof}
This result may be surprising; it promises we can approximate to any degree of
accuracy a function such as the following characteristic function of an
interval%
\[
\chi_{\left[ -11,-10\right] }\left( x\right) :=\left\{
\begin{array}
[c]{l}%
1\\
0
\end{array}
\right.
\begin{array}
[c]{l}%
\text{for }x\in\left[ -11,-10\right] \\
\text{otherwise}%
\end{array}
\]
with support far from the means of the Gaussians $e^{-\left( x-nt\right)
^{2}}$ which are located in $\left[ 0,\infty\right) $ at the points $x=nt$.
The graphs of these functions $e^{-\left( x-nt\right) ^{2}}$ are extremely
simple geometrically, being Gaussians with the same variance. We only use the
right translates, and they all shrink precipitously (exponentially) away from
their means.
\noindent%
\[%
{\parbox[b]{2.5261in}{\begin{center}
\includegraphics[
natheight=9.364200in,
natwidth=13.749600in,
height=1.7244in,
width=2.5261in
]%
{I100.png}%
\\
$\sum a_n e^{-\left( x-nt\right) ^2}\approx$ characteristic function?
\end{center}}}%
\]
\bigskip\bigskip
\textit{Surely there is a gap in this sketchy little proof?}
No. We will, however, flesh out the details in section \ref{SecAppAlgo}. The
coefficients $a_{n}$ are explicitly calculated and the $L^{2}$ convergence
carefully justified. But these details are elementary. We include them in the
interest of appealing to a broader audience.
\textit{Then is this merely another pathological curiosity from analysis? We
probably need impractically large values of }$N$\textit{\ to approximate any
interesting functions.}
No, $N$ need only be as large as the Hermite expansion demands. Certainly this
particular approach depends on the convergence of the Hermite expansion, and
for many applications Hermite series converge more slowly than other Fourier
approximations--after all, Hermite series converge on all of $\mathbb{R}$
while, e.g., trigonometric series focus on a bounded interval. Hermite
expansions do have powerful convergence properties, though. For example,
Hermite series converge uniformly on compact subsets whenever $f$ is
twice continuously differentiable (i.e., $C^{2}$) and $O\left( e^{-cx^{2}%
}\right) $ for some $c>1$ as $x\rightarrow\infty$. Alternatively, if $f$ has
finitely many discontinuities but is still $C^{2}$ elsewhere and $O\left(
e^{-cx^{2}}\right) $, the expansion again converges uniformly on any closed
interval which avoids the discontinuities \cite{Stone}, \cite{Szego}. If $f$
is smooth and properly bounded, the Hermite series converges faster than
algebraically \cite{Gottlieb}.
\textit{Then is the method unstable?}
Yes, there are two serious drawbacks to using Theorem \ref{ThmBumps}.
\noindent1. Numerical differentiation is inherently unstable. Fortunately we
are estimating the derivatives of Gaussians, which are as smooth and bounded
as we could hope, and so we have good control with an explicit error formula.
It is true, though, that dividing by $t^{n}$ for small $t$ and large $n$ will
eventually lead to huge coefficients $a_{n}$ and round-off error. There are
quite a few general techniques available in the literature for combating
round-off error in numerical differentiation. We review the well-known
$n$-point difference formulas for derivatives in section \ref{SecImpConv}.
\noindent2. The surprising approximation is only possible because it is weaker
than the typical convergence of a series in the mean. Unfortunately%
\[
f\left( x\right) \neq\overset{\infty}{\underset{n=0}{%
{\textstyle\sum}
}}a_{n}e^{-\left( x-nt\right) ^{2}}%
\]
Theorem \ref{ThmBumps} requires recalculating all the $a_{n}$ each time $N$ is
increased. Further, the $a_{n}$ are not unique. The least squares best choice
of $a_{n}$ are calculated in section \ref{SecLeastSqrs}, but this approach
gives an ill-conditioned matrix. A different formula for the $a_{n}$ is given
in Theorem \ref{AlgBump} which is more computationally efficient.
Despite these drawbacks the result is worthy of note because of the new and
unexpected opportunities which arise using an approximation method with such
simple functions. In this vein, section \ref{SecLowFTrig} details an
interesting corollary of Theorem \ref{ThmBumps}: apply the Fourier transform
to see that low-frequency trigonometric series are dense in $L^{2}\left(
\mathbb{R}\right) $ with Gaussian weight function.
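That corollary rests only on the classical identity $\int_{\mathbb{R}}e^{-(x-a)^{2}}e^{-i\xi x}\,dx=\sqrt{\pi}\,e^{-\xi^{2}/4}e^{-ia\xi}$: each translated Gaussian transforms into a fixed Gaussian weight times a complex exponential. A quick numerical sanity check of the identity (Python with NumPy; the transform convention is the one just stated, and the Riemann sum is spectrally accurate here because the integrand decays like $e^{-x^{2}}$):

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 20001)
dx = x[1] - x[0]

max_err = 0.0
for a in (0.0, 1.0, 3.0):          # translation of the Gaussian
    for xi in (0.0, 0.5, 2.0):     # frequency
        numeric = dx * np.sum(np.exp(-(x - a) ** 2) * np.exp(-1j * xi * x))
        closed = np.sqrt(np.pi) * np.exp(-xi ** 2 / 4 - 1j * a * xi)
        max_err = max(max_err, abs(numeric - closed))
```

The discrepancy stays at roundoff level for every tested pair $(a,\xi)$.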
\section{Calculating the coefficients with orthogonal
functions\label{SecAppAlgo}}
In this section Theorem \ref{AlgBump} gives an explicit formula for the
coefficients $a_{n}$ of Theorem \ref{ThmBumps}. Let's review the details of
the Hermite-inspired expansion%
\[
f\left( x\right) =\overset{\infty}{\underset{n=0}{%
{\textstyle\sum}
}}b_{n}\frac{d^{n}}{dx^{n}}\left( e^{-x^{2}}\right)
\]
claimed in the proof. The formula for these coefficients is%
\[
b_{n}:=\tfrac{1}{n!2^{n}\sqrt{\pi}}\underset{\mathbb{R}}{%
{\textstyle\int}
}f\left( x\right) e^{x^{2}}\frac{d^{n}}{dx^{n}}\left( e^{-x^{2}}\right)
dx\text{.}%
\]
Be warned this is not precisely the standard Hermite expansion, but a simple
adaptation to our particular requirements. Let's check this formula for the
$b_{n}$ using the techniques of orthogonal functions.
Remember the following properties of the Hermite polynomials $H_{n}$
(\cite{Szego}, e.g.). Define $H_{n}\left( x\right) :=\left( -1\right)
^{n}e^{x^{2}}\frac{d^{n}}{dx^{n}}e^{-x^{2}}$. The set of Hermite functions%
\[
\left\{ h_{n}\left( x\right) :=\dfrac{1}{\sqrt{n!2^{n}\sqrt{\pi}}}%
H_{n}\left( x\right) e^{-x^{2}/2}:n\in\mathbb{N}\right\}
\]
is a well-known basis of $L^{2}\left( \mathbb{R}\right) $ and is orthonormal
since%
\begin{equation}
\underset{\mathbb{R}}{%
{\textstyle\int}
}H_{m}\left( x\right) H_{n}\left( x\right) e^{-x^{2}}dx=n!2^{n}\sqrt{\pi
}\delta_{m,n}\text{.}\label{ExHermite2}%
\end{equation}
This means given any $g\in L^{2}\left( \mathbb{R}\right) $ it is possible to
write%
\begin{equation}
g\left( x\right) =\overset{\infty}{\underset{n=0}{%
{\textstyle\sum}
}}c_{n}\tfrac{1}{\sqrt{n!2^{n}\sqrt{\pi}}}H_{n}\left( x\right) e^{-x^{2}%
/2}\label{ExHermite10}%
\end{equation}
$($equality in the $L^{2}$ sense$)$ where%
\[
c_{n}:=\tfrac{1}{\sqrt{n!2^{n}\sqrt{\pi}}}\underset{\mathbb{R}}{%
{\textstyle\int}
}g\left( x\right) H_{n}\left( x\right) e^{-x^{2}/2}dx\in\mathbb{R}\text{.}%
\]
The necessity of this formula for $c_{n}$ can easily be checked by multiplying
both sides of $\left( \ref{ExHermite10}\right) $ by $H_{n}\left( x\right)
e^{-x^{2}/2}$, integrating and applying $\left( \ref{ExHermite2}\right) $.
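These orthogonality relations are also easy to confirm numerically. The sketch below (Python with NumPy; `hermval` evaluates the physicists' Hermite polynomials used throughout) checks $\left( \ref{ExHermite2}\right)$ for $m,n\le 5$:

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial.hermite import hermval

x = np.linspace(-10.0, 10.0, 8001)
dx = x[1] - x[0]
w = np.exp(-x ** 2)

max_err = 0.0
for m in range(6):
    for n in range(6):
        Hm = hermval(x, [0] * m + [1])          # H_m(x)
        Hn = hermval(x, [0] * n + [1])          # H_n(x)
        integral = dx * np.sum(Hm * Hn * w)
        expected = factorial(n) * 2 ** n * sqrt(pi) if m == n else 0.0
        max_err = max(max_err, abs(integral - expected) / max(1.0, abs(expected)))
```

The Riemann sum is spectrally accurate because the integrand and all its derivatives vanish rapidly at the endpoints, so the relative error is at roundoff level.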
However, we want%
\[
f\left( x\right) =\overset{\infty}{\underset{n=0}{%
{\textstyle\sum}
}}b_{n}\frac{d^{n}}{dx^{n}}e^{-x^{2}}%
\]
so apply this process to $g\left( x\right) =f\left( x\right) e^{x^{2}/2}$.
But $f\left( x\right) e^{x^{2}/2}$ may not be $L^{2}$ integrable. If it is
not, we must truncate it: $f\left( x\right) e^{x^{2}/2}\chi_{\left[
-M,M\right] }\left( x\right) $ is $L^{2}$ for any $M<\infty$ and
$f\cdot\chi_{\left[ -M,M\right] }\underset{\epsilon/3}{\approx}f$ for a
sufficiently large choice of $M$. Now we get new $c_{n}$ as follows%
\begin{align*}
f\left( x\right) e^{x^{2}/2}\chi_{\left[ -M,M\right] }\left( x\right)
& =\overset{\infty}{\underset{n=0}{%
{\textstyle\sum}
}}c_{n}\tfrac{1}{\sqrt{n!2^{n}\sqrt{\pi}}}H_{n}\left( x\right) e^{-x^{2}%
/2}\text{\qquad so}\\
f\left( x\right) \chi_{\left[ -M,M\right] }\left( x\right) &
=\overset{\infty}{\underset{n=0}{%
{\textstyle\sum}
}}c_{n}\tfrac{\left( -1\right) ^{n}}{\sqrt{n!2^{n}\sqrt{\pi}}}\left(
-1\right) ^{n}H_{n}\left( x\right) e^{-x^{2}}=\overset{\infty}%
{\underset{n=0}{%
{\textstyle\sum}
}}b_{n}\frac{d^{n}}{dx^{n}}e^{-x^{2}}%
\end{align*}
where%
\begin{align*}
c_{n} & =\tfrac{1}{\sqrt{n!2^{n}\sqrt{\pi}}}\underset{\mathbb{R}}{%
{\textstyle\int}
}f\left( x\right) e^{x^{2}/2}\chi_{\left[ -M,M\right] }\left( x\right)
H_{n}\left( x\right) e^{-x^{2}/2}\,dx\\
& =\tfrac{1}{\sqrt{n!2^{n}\sqrt{\pi}}}\underset{\mathbb{R}}{%
{\textstyle\int}
}f\left( x\right) \chi_{\left[ -M,M\right] }\left( x\right) H_{n}\left(
x\right) dx
\end{align*}
so we must have%
\begin{equation}
b_{n}=c_{n}\tfrac{\left( -1\right) ^{n}}{\sqrt{n!2^{n}\sqrt{\pi}}}=\tfrac
{1}{n!2^{n}\sqrt{\pi}}\underset{\mathbb{R}}{%
{\textstyle\int}
}f\left( x\right) \chi_{\left[ -M,M\right] }\left( x\right) e^{x^{2}%
}\frac{d^{n}}{dx^{n}}e^{-x^{2}}dx\text{.}\label{Line b_n}%
\end{equation}
Now the second step of the proof of Theorem \ref{ThmBumps} claims that the
Gaussian's derivatives may be approximated by divided backward differences%
\[
\frac{d^{n}}{dx^{n}}e^{-x^{2}}\approx\frac{1}{t^{n}}\overset{n}{\underset
{k=0}{%
{\textstyle\sum}
}}\left( -1\right) ^{k}\binom{n}{k}e^{-\left( x-kt\right) ^{2}}%
\]
in the $L^{2}\left( \mathbb{R}\right) $ norm. We'll use the ``big oh''
notation: for a real function $\Psi$ the statement `` $\Psi\left( t\right)
=O\left( t\right) $ as $t\rightarrow0$ '' means there exist $K>0$ and
$\delta>0$ such that $\left| \Psi\left( t\right) \right| <K\left|
t\right| $ for $0<\left| t\right| <\delta$.
\begin{proposition}
\label{propLpDivDiff}For each $n\in\mathbb{N}$ and $p\in\left( 0,\infty
\right) $%
\[
\left( \underset{\mathbb{R}}{\int}\left| \frac{d^{n}}{dx^{n}}e^{-x^{2}%
}-\frac{1}{t^{n}}%
{\textstyle\sum_{k=0}^{n}}
\left( -1\right) ^{k}\binom{n}{k}e^{-\left( x-kt\right) ^{2}}\right|
^{p}dx\right) ^{1/p}=O\left( t\right) \text{.}%
\]
\end{proposition}
\begin{proof}
In Appendix \ref{SecImpConv} the pointwise formula is derived:%
\[
\frac{d^{n}}{dx^{n}}g\left( x\right) =\frac{1}{t^{n}}%
{\textstyle\sum_{k=0}^{n}}
\left( -1\right) ^{k}\binom{n}{k}g\left( x-kt\right) -\dfrac{t}{\left(
n+1\right) !}\overset{n}{\underset{k=0}{%
{\textstyle\sum}
}}\left( -1\right) ^{k}\binom{n}{k}k^{n+1}g^{\left( n+1\right) }\left(
\xi_{k}\right)
\]
where all of the $\xi_{k}$ are between $x-nt$ and $x$. Therefore the
proposition holds with $g\left( x\right) =e^{-x^{2}}$ since $g^{\left(
n+1\right) }\left( \xi_{k}\right) $ is integrable for each $k$. This is not
perfectly obvious because we don't have explicit formulae for the $\xi_{k}$.
But the tails of $g^{\left( n+1\right) }$ vanish exponentially, the
continuity of $g^{\left( n+1\right) }$ guarantees a finite maximum on the
bounded interval between the tails, and $\left| \xi_{k}-x\right| <k\left|
t\right| $.
\end{proof}
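The $O(t)$ rate is easy to observe numerically: halving $t$ should roughly halve the $L^{2}$ error. A sketch (Python with NumPy; the exact derivative comes from the Rodrigues-type identity $\frac{d^{n}}{dx^{n}}e^{-x^{2}}=(-1)^{n}H_{n}(x)e^{-x^{2}}$), shown for $n=2$:

```python
import numpy as np
from math import comb
from numpy.polynomial.hermite import hermval

g = lambda x: np.exp(-x ** 2)
n = 2
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]

# Exact n-th derivative of e^{-x^2} via (-1)^n H_n(x) e^{-x^2}.
exact = (-1) ** n * hermval(x, [0] * n + [1]) * np.exp(-x ** 2)

def backward_diff(t):
    """(1/t^n) sum_k (-1)^k C(n,k) g(x - k t)."""
    return sum((-1) ** k * comb(n, k) * g(x - k * t) for k in range(n + 1)) / t ** n

def l2_err(t):
    return np.sqrt(dx * np.sum((exact - backward_diff(t)) ** 2))

err, err_half = l2_err(0.1), l2_err(0.05)
```

The ratio `err / err_half` comes out close to 2, consistent with first-order convergence in $t$.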
Continuing the derivation of the coefficients $a_{n}$ we now have for
sufficiently small $t\neq0$%
\begin{equation}
f\underset{\epsilon}{\approx}\ \overset{N}{\underset{n=0}{%
{\textstyle\sum}
}}b_{n}\frac{1}{t^{n}}\overset{n}{\underset{k=0}{%
{\textstyle\sum}
}}\left( -1\right) ^{k}\binom{n}{k}e^{-\left( x-kt\right) ^{2}}%
=\overset{N}{\underset{k=0}{%
{\textstyle\sum}
}}\left[ \overset{N}{\underset{n=k}{%
{\textstyle\sum}
}}b_{n}\frac{\left( -1\right) ^{k}}{t^{n}}\binom{n}{k}\right] e^{-\left(
x-kt\right) ^{2}}\label{Line f approxi}%
\end{equation}
In the last equality we just switched the order of summation (see
\cite{Knuth}, section 2.4 for an overview of such tricks). Combining $\left(
\ref{Line b_n}\right) $ and $\left( \ref{Line f approxi}\right) $ we have
\begin{theorem}
\label{AlgBump}For any $f\in L^{2}\left( \mathbb{R}\right) $ and any
$\epsilon>0$ there exist $N\in\mathbb{N}$ and $t_{0}>0$ such that for any
$t\neq0$ with $\left| t\right| <t_{0}$%
\[
f\underset{\epsilon}{\approx}\ \overset{N}{\underset{n=0}{%
{\textstyle\sum}
}}a_{n}e^{-\left( x-nt\right) ^{2}}%
\]
for some choice of $a_{n}\in\mathbb{R}$ dependent on $N$ and $t$.
\noindent If $f\left( x\right) e^{x^{2}/2}$ is integrable, then one choice
of coefficients is%
\[
a_{n}=\frac{\left( -1\right) ^{n}}{n!\sqrt{\pi}}\overset{N}{\underset{k=n}{%
{\textstyle\sum}
}}\tfrac{1}{\left( k-n\right) !\left( 2t\right) ^{k}}\underset{\mathbb{R}%
}{%
{\textstyle\int}
}f\left( x\right) e^{x^{2}}\frac{d^{k}}{dx^{k}}\left( e^{-x^{2}}\right)
dx\text{.}%
\]
If $f\left( x\right) e^{x^{2}/2}$ is not integrable, replace $f$ in the
above formula with $f\cdot\chi_{\left[ -M,M\right] }$ where $M$ is chosen
large enough that $\left\| f-f\cdot\chi_{\left[ -M,M\right] }\right\|
_{2}<\epsilon$.
\end{theorem}
\begin{remark}
\label{RemUnifG}The approximation in Theorem \ref{AlgBump} also holds on
$C\left[ a,b\right] $ with the uniform norm since the Hermite expansion is
uniformly convergent on $C^{2}\left[ a,b\right] $ (see \cite{Stone},
\cite{Szego}) and the finite difference formula's error term from Appendix
\ref{SecImpConv} converges to 0 uniformly as $t\rightarrow0^{+}$. The
Stone-Weierstrass Theorem does not apply in this situation because linear
combinations of Gaussians with a single variance do not form an algebra.
\end{remark}
\begin{remark}
\label{RemDense}As a consequence of Theorem \ref{AlgBump} for any $\epsilon>0$
the closed linear span of $\left\{ e^{-\left( x-s\right) ^{2}}:s\in\left[
0,\epsilon\right) \right\} $ is $L^{2}\left( \mathbb{R}\right) $. It is
even sufficient to replace $\left[ 0,\epsilon\right) $ with $\left\{
\frac{i}{2^{j}}:i,j\in\mathbb{N}\right\} \cap\left[ 0,\epsilon\right) $.
\end{remark}
Let's explore some concrete examples in applying Theorem \ref{AlgBump}. Choose
an interesting function with discontinuities whose support includes negative $x$:%
\[
f\left( x\right) :=\left( x-1\right) ^{2}\chi_{\left[ -1,2\right]
}\left( x\right) :=\left\{
\begin{array}
[c]{l}%
\left( x-1\right) ^{2}\\
0
\end{array}
\right.
\begin{array}
[c]{l}%
\text{for }x\in\left[ -1,2\right] \\
\text{otherwise}%
\end{array}
\]
and observe graphically:
\noindent$%
{\parbox[b]{1.7253in}{\begin{center}
\includegraphics[
natheight=6.103900in,
natwidth=8.864300in,
height=1.1917in,
width=1.7253in
]%
{I200.png}%
\\
$f\left( x\right) :=\left( x-1\right) ^2\chi_{\left[ -1,2\right]} \left(
x\right) $
\end{center}}}%
{\parbox[b]{1.6094in}{\begin{center}
\includegraphics[
natheight=5.917100in,
natwidth=8.583300in,
height=1.1122in,
width=1.6094in
]%
{I210.png}%
\\
Hermite series $N=20$
\end{center}}}%
{\parbox[b]{1.6025in}{\begin{center}
\includegraphics[
natheight=5.822800in,
natwidth=8.437100in,
height=1.1087in,
width=1.6025in
]%
{I220.png}%
\\
Hermite $N=40$
\end{center}}}%
$%
\[%
{\parbox[b]{1.6155in}{\begin{center}
\includegraphics[
natheight=6.229200in,
natwidth=9.052000in,
height=1.1147in,
width=1.6155in
]%
{I230.png}%
\\%
\begin{tabular}
[c]{c}%
Theorem \ref{AlgBump}\\
$N=20$, $t=.05$%
\end{tabular}
\end{center}}}%
{\parbox[b]{1.6172in}{\begin{center}
\includegraphics[
natheight=5.989700in,
natwidth=8.698300in,
height=1.1156in,
width=1.6172in
]%
{I240.png}%
\\
{}%
\begin{tabular}
[c]{c}%
Theorem \ref{AlgBump}\\
$N=20$, $t=.01$%
\end{tabular}
\end{center}}}%
{\parbox[b]{1.6163in}{\begin{center}
\includegraphics[
natheight=6.124600in,
natwidth=8.895500in,
height=1.1147in,
width=1.6163in
]%
{I250.png}%
\\
{}%
\begin{tabular}
[c]{c}%
Theorem \ref{AlgBump}\\
$N=40$, $t=.01$%
\end{tabular}
\end{center}}}%
\]
\bigskip\bigskip
The Hermite approximation is slowed by discontinuities, but does converge. The
next choice of $f$ is continuous but not smooth.
\noindent%
\[%
{\parbox[b]{1.6414in}{\begin{center}
\includegraphics[
natheight=5.906700in,
natwidth=8.614400in,
height=1.1277in,
width=1.6414in
]%
{I300.png}%
\\
$f\left( x\right) :=\left( \sin x\right) \chi_{\left[ -\pi,\pi\right]}
\left( x\right) $
\end{center}}}
{\parbox[b]{1.6622in}{\begin{center}
\includegraphics[
natheight=6.416900in,
natwidth=9.458500in,
height=1.1312in,
width=1.6622in
]%
{I311.png}%
\\%
\begin{tabular}
[c]{c}%
Hermite expansion\\
$N=10$%
\end{tabular}
\end{center}}}
{\parbox[b]{1.6302in}{\begin{center}
\includegraphics[
natheight=6.979000in,
natwidth=10.073300in,
height=1.132in,
width=1.6302in
]%
{I320.png}%
\\%
\begin{tabular}
[c]{c}%
Hermite expansion\\
$N=20$%
\end{tabular}
\end{center}}}
\]%
\[%
{\parbox[b]{1.6431in}{\begin{center}
\includegraphics[
natheight=6.698000in,
natwidth=9.801800in,
height=1.126in,
width=1.6431in
]%
{I330.png}%
\\%
\begin{tabular}
[c]{l}%
Theorem \ref{AlgBump}\\
$N=10$, $t=.01$%
\end{tabular}
\end{center}}}
{\parbox[b]{1.6579in}{\begin{center}
\includegraphics[
natheight=6.906400in,
natwidth=10.114900in,
height=1.1346in,
width=1.6579in
]%
{I340.png}%
\\%
\begin{tabular}
[c]{l}%
Theorem \ref{AlgBump}\\
$N=20$, $t=.05$%
\end{tabular}
\end{center}}}
{\parbox[b]{1.6475in}{\begin{center}
\includegraphics[
natheight=6.906400in,
natwidth=10.114900in,
height=1.1277in,
width=1.6475in
]%
{I350.png}%
\\%
\begin{tabular}
[c]{l}%
Theorem \ref{AlgBump}\\
$N=20$, $t=.01$%
\end{tabular}
\end{center}}}
\]
\bigskip
In section \ref{SecImpConv} we review a standard technique for accelerating this
convergence in $t$. In our experiments, though, we've found the Hermite
expansion is generally the bottleneck, not the round-off error of the
derivative approximations for $e^{-x^{2}}$.%
\[%
{\parbox[b]{1.4961in}{\begin{center}
\includegraphics[
natheight=6.770600in,
natwidth=9.916800in,
height=1.0222in,
width=1.4961in
]%
{I400.png}%
\\%
\begin{tabular}
[c]{c}%
Hermite expansion\\
$N=60$%
\end{tabular}
\end{center}}}
{\parbox[b]{1.4875in}{\begin{center}
\includegraphics[
natheight=6.854500in,
natwidth=10.031000in,
height=1.0188in,
width=1.4875in
]%
{I410.png}%
\\%
\begin{tabular}
[c]{c}%
Hermite expansion\\
$N=100$%
\end{tabular}
\end{center}}}
{\parbox[b]{1.5333in}{\begin{center}
\includegraphics[
natheight=6.687600in,
natwidth=10.135600in,
height=1.0144in,
width=1.5333in
]%
{I420.png}%
\\%
\begin{tabular}
[c]{c}%
Hermite expansion\\
$N=120$%
\end{tabular}
\end{center}}}
\]
We need about 120 terms before visual accuracy is achieved for this simple
function. There is a host of methods in the literature for improving
convergence of the Hermite expansion, but generally we have better success
with functions that are smooth and bounded \cite{Gottlieb}. Our last examples
in this section illustrate how convergence is faster for functions which are
smooth and ``clamped off'', meaning multiplied by $\left( x-a\right)
^{n}\left( x+a\right) ^{n}\chi_{\left[ -a,a\right] }$, regardless of whether they
are positive or symmetric.
\noindent%
{\parbox[b]{2.3281in}{\begin{center}
\includegraphics[
natheight=6.301900in,
natwidth=9.718800in,
height=1.5134in,
width=2.3281in
]%
{I500.png}%
\\
Hermite $N=10$%
\end{center}}}
{\parbox[b]{2.3298in}{\begin{center}
\includegraphics[
natheight=6.718700in,
natwidth=10.364800in,
height=1.5152in,
width=2.3298in
]%
{I510.png}%
\\
Hermite $N=25$%
\end{center}}}
\noindent%
{\parbox[b]{2.335in}{\begin{center}
\includegraphics[
natheight=6.729100in,
natwidth=10.395900in,
height=1.516in,
width=2.335in
]%
{I520.png}%
\\
Hermite $N=10$%
\end{center}}}
{\parbox[b]{2.3367in}{\begin{center}
\includegraphics[
natheight=6.698000in,
natwidth=10.333600in,
height=1.5195in,
width=2.3367in
]%
{I530.png}%
\\
Hermite $N=25$%
\end{center}}}
\bigskip
\section{Calculating the coefficients with least squares\label{SecLeastSqrs}}
Theorem \ref{ThmBumps} promises that any $L^{2}$ function can be approximated
as $f\left( x\right) \approx\overset{N}{\underset{n=0}{\sum}}a_{n}e^{-\left(
x-nt\right) ^{2}}$. Theorem \ref{AlgBump} gives a formula for the
coefficients $a_{n}$, but this formula is not unique and in fact is not
``best'' according to the classical continuous least squares technique.
\quad\quad%
{\parbox[b]{1.7521in}{\begin{center}
\includegraphics[
natheight=6.489500in,
natwidth=9.437700in,
height=1.2064in,
width=1.7521in
]%
{I600.png}%
\\%
\begin{tabular}
[c]{l}%
Least squares approximation\\
$N=5$, $t=.01$%
\end{tabular}
\end{center}}}
\quad%
{\parbox[b]{1.7556in}{\begin{center}
\includegraphics[
natheight=6.604600in,
natwidth=9.625400in,
height=1.2064in,
width=1.7556in
]%
{I610.png}%
\\%
\begin{tabular}
[c]{l}%
Theorem \ref{AlgBump} approximation\\
$N=5$, $t=.01$%
\end{tabular}
\end{center}}}
\bigskip
\noindent In least squares we minimize the error function%
\[
E_{2}\left( a_{0},...,a_{N}\right) :=\underset{\mathbb{R}}{\int}\left|
f\left( x\right) -\overset{N}{\underset{n=0}{\sum}}a_{n}e^{-\left(
x-nt\right) ^{2}}\right| ^{2}dx
\]
by setting $\frac{\partial E_{2}}{\partial a_{j}}=0$ for $j=0,...,N$ and
solving for the $a_{n}$. These $N+1$ linear equations are called the
\textbf{normal equations}. The matrix form of this system is $M\overrightarrow
{v}=\overrightarrow{b}$ where $M$ is the matrix%
\[
M=\left[ \sqrt{\frac{\pi}{2}}e^{-\left( k^{2}+j^{2}-\frac{\left(
k+j\right) ^{2}}{2}\right) t^{2}}\right] _{j,k=0}^{N}%
\]
and%
\[
\overrightarrow{v}=\left[ a_{j}\right] _{j=0}^{N}\text{\qquad and\qquad
}\overrightarrow{b}=\left[ \underset{\mathbb{R}}{\int}f\left( x\right)
e^{-\left( x-jt\right) ^{2}}dx\right] _{j=0}^{N}%
\]
$M$ is symmetric and invertible, so we can always solve for the $a_{n}$. But
these least squares matrices are notorious for being ill-conditioned when
using non-orthogonal approximating functions. The Hilbert matrix is the
archetypical example. The current application is no exception: since the matrix
entries are very similar for most choices of $N$ and $t$, round-off
error is extreme. Choosing $N=7$ instead of $5$ in the graphed example above
requires almost 300 significant digits.
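The entries simplify, since $k^{2}+j^{2}-\frac{\left( k+j\right) ^{2}}{2}=\frac{\left( k-j\right) ^{2}}{2}$, so $M_{jk}=\sqrt{\pi/2}\,e^{-\left( k-j\right) ^{2}t^{2}/2}$. The following Python sketch (our own illustration; the helper name is hypothetical) shows how quickly the conditioning degrades:

```python
import numpy as np

def normal_matrix(N, t):
    """Normal-equations matrix M_{jk} = sqrt(pi/2) * exp(-(j-k)^2 t^2 / 2)."""
    j = np.arange(N + 1)
    return np.sqrt(np.pi / 2) * np.exp(-((j[:, None] - j[None, :]) ** 2) * t**2 / 2)

# With t = .01 the entries are nearly identical, so M is close to rank one
# and its condition number explodes as N grows.
for N in (3, 5, 7):
    print(N, np.linalg.cond(normal_matrix(N, 0.01)))
```

Already at $N=5$ the condition number reported in double precision is astronomically large, consistent with the remark above about needing hundreds of digits of working precision.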
\section{Low-frequency trig series are dense in $L^{2}$ with Gaussian
weight\label{SecLowFTrig}}
For $f\in L^{2}\left( \mathbb{R},\mathbb{C}\right) $ define the norm%
\[
\left\| f\right\| _{2,G}:=\left( \underset{\mathbb{R}}{%
{\textstyle\int}
}\left| f\left( x\right) \right| ^{2}e^{-x^{2}}dx\right) ^{1/2}\text{.}%
\]
Write $f\underset{\epsilon,G}{\approx}g$ to mean $\left\| f-g\right\|
_{2,G}<\epsilon$.
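As a quick sanity check of this weighted norm (a numerical sketch of our own; the quadrature-based helper below is not part of any algorithm in the paper), note that $\left\| 1\right\| _{2,G}=\pi^{1/4}$:

```python
import numpy as np
from scipy.integrate import quad

def norm_2G(f):
    """Gaussian-weighted L2 norm: (integral of |f(x)|^2 e^{-x^2} dx)^(1/2)."""
    val, _ = quad(lambda x: abs(f(x)) ** 2 * np.exp(-x**2), -np.inf, np.inf)
    return np.sqrt(val)

print(norm_2G(lambda x: 1.0))        # equals pi**0.25, since ∫ e^{-x^2} dx = sqrt(pi)
print(norm_2G(lambda x: np.cos(x)))  # finite for any bounded f
```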
\begin{theorem}
\label{ThmLowFreq}For every $f\in L^{2}\left( \mathbb{R},\mathbb{C}\right) $
and $\epsilon>0$ there exists $N$ $\in\mathbb{N}$ and $t_{0}>0$ such that for
any $t\neq0$ with $\left| t\right| <t_{0}$
\[
f\left( x\right) \underset{\epsilon,G}{\approx}\text{ }\overset{N}%
{\underset{n=0}{%
{\textstyle\sum}
}}a_{n}e^{-intx}%
\]
for some choice of $a_{n}\in\mathbb{C}$ dependent on $N$ and $t$.
\end{theorem}
\begin{proof}
We use the Fourier transform with convention%
\[
\mathcal{F}\left[ f\right] \left( s\right) =\frac{1}{\sqrt{2\pi}}%
\underset{\mathbb{R}}{%
{\textstyle\int}
}f\left( x\right) e^{-isx}dx\text{.}%
\]
$\mathcal{F}$ is a linear isometry of $L^{2}\left( \mathbb{R},\mathbb{C}%
\right) $ with%
\begin{align*}
\mathcal{F}\left[ e^{-\alpha x^{2}}\right] & =\frac{1}{\sqrt{2\alpha}%
}e^{-\frac{s^{2}}{4\alpha}}\text{,}\\
\mathcal{F}\left[ f\left( x+r\right) \right] & =e^{irs}\mathcal{F}%
\left[ f\left( x\right) \right] \text{\qquad and}\\
\mathcal{F}\left[ g\ast h\right] & =\sqrt{2\pi}\mathcal{F}\left[ g\right]
\mathcal{F}\left[ h\right] \text{.}%
\end{align*}
where $\ast$ is convolution.
Let $f\in L^{2}$; we now show that $f_{2}\left( x\right) :=\frac{1}{\sqrt
{2\pi}}e^{-x^{2}}\ast\mathcal{F}^{-1}\left[ f\right] \left( x\right) \in
L^{2}$. Notice $g:=\mathcal{F}^{-1}\left[ f\right] \in L^{2}$ and%
\begin{align*}
\left\| f_{2}\right\| _{2}^{2} & =\underset{\mathbb{R}}{%
{\textstyle\int}
}\left| \underset{\mathbb{R}}{%
{\textstyle\int}
}\frac{1}{\sqrt{2\pi}}g\left( x-y\right) e^{-y^{2}}dy\right| ^{2}%
dx\leq\frac{1}{2\sqrt{\pi}}\underset{\mathbb{R}}{%
{\textstyle\int}
}\underset{\mathbb{R}}{%
{\textstyle\int}
}\left| g\left( x-y\right) \right| ^{2}e^{-y^{2}}\,dy\,dx\\
& =c\left\| \mathcal{W}_{1/4}\left[ \left| g\right| ^{2}\right]
\right\| _{1}=c\left\| \left| g\right| ^{2}\right\| _{1}=c\left\| g\right\|
_{2}^{2}=c\left\| f\right\| _{2}^{2}<\infty
\end{align*}
for some $c>0$. Here $\mathcal{W}_{t}\left[ h\right] $ is the solution to
the diffusion equation for time $t$ and initial condition $h$. (The notation
$\mathcal{W}$ refers to the Weierstrass transform.) The reason for the third
equality in the previous calculation is that $\mathcal{W}_{t}$ maintains the
$L^{1}$ integral of any positive initial condition $h$ for all time $t>0$
\cite{WidderHeat}.
Now approximate the real and imaginary parts of $f_{2}$ with Theorem
\ref{AlgBump}. Then we get%
\[
\tfrac{1}{\sqrt{2\pi}}e^{-x^{2}}\ast\mathcal{F}^{-1}\left[ f\right] \left(
x\right) \underset{\epsilon}{\approx}\ \overset{N}{\underset{n=0}{%
{\textstyle\sum}
}}a_{n}e^{-\left( x-nt\right) ^{2}}\text{\qquad}a_{n}\in\mathbb{C}%
\]
and applying $\mathcal{F}$ gives%
\[
\tfrac{1}{\sqrt{2}}e^{-s^{2}/4}f\left( s\right) \underset{\epsilon}{\approx
}\text{ }\overset{N}{\underset{n=0}{%
{\textstyle\sum}
}}a_{n}e^{-ints}\tfrac{1}{\sqrt{2}}e^{-s^{2}/4}%
\]
Hence%
\[
f\left( s\right) \underset{\sqrt{2}\epsilon,G}{\approx}\text{ }\overset
{N}{\underset{n=0}{%
{\textstyle\sum}
}}a_{n}e^{-ints}%
\]
using the fact that $e^{-s^{2}/4}>e^{-s^{2}}$.
\end{proof}
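The transform conventions above are easy to misstate, so here is a numerical sanity check of the Gaussian identity used in the proof (our own sketch; the `fourier` helper is a hypothetical name):

```python
import numpy as np
from scipy.integrate import quad

def fourier(f, s):
    """F[f](s) = (2*pi)^(-1/2) * integral of f(x) e^{-isx} dx."""
    re, _ = quad(lambda x: f(x) * np.cos(s * x), -np.inf, np.inf)
    im, _ = quad(lambda x: -f(x) * np.sin(s * x), -np.inf, np.inf)
    return (re + 1j * im) / np.sqrt(2 * np.pi)

# F[e^{-alpha x^2}] = e^{-s^2/(4 alpha)} / sqrt(2 alpha)
alpha, s = 0.7, 1.3
lhs = fourier(lambda x: np.exp(-alpha * x**2), s)
rhs = np.exp(-s**2 / (4 * alpha)) / np.sqrt(2 * alpha)
print(abs(lhs - rhs))  # agreement to quadrature accuracy
```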
This result is surprising, even in the context of this paper, because for
instance, series of the form $\overset{N}{\underset{n=-N}{%
{\textstyle\sum}
}}a_{n}e^{-i\left( x+nt\right) }$ for all $t$ and $a_{n}$ are not dense in
$L^{2}$ and in fact only inhabit a 4-dimensional subspace of the infinite
dimensional Hilbert space \cite{CalcFoliation}.
\begin{corollary}
On any finite interval $\left[ a,b\right] $ for any $\omega>0$ the finite
linear combinations of sine and cosine functions with frequency lower than
$\omega$ are dense in $L^{2}\left( \left[ a,b\right] ,\mathbb{R}\right) $.
\end{corollary}
\begin{proof}
On $\left[ a,b\right] $ the Gaussian is bounded and so the norms with or
without weight function are equivalent. Apply Theorem \ref{ThmLowFreq} to
$f\in L^{2}\left( \left[ a,b\right] ,\mathbb{R}\right) $ and choose $t$
such that $Nt<\omega$ to get%
\[
f\underset{\epsilon}{\approx}\text{ }\overset{N}{\underset{n=0}{%
{\textstyle\sum}
}}\operatorname{Re}\left( a_{n}\right) \cos\left( ntx\right)
+\operatorname{Im}\left( a_{n}\right) \sin\left( ntx\right)
\]
where
\[
a_{n}=\frac{\left( -1\right) ^{n}}{n!2\pi}\overset{N}{\underset{k=n}{%
{\textstyle\sum}
}}\tfrac{1}{\left( k-n\right) !\left( 2t\right) ^{k}}\underset{\mathbb{R}%
}{%
{\textstyle\int}
}\left[ e^{-x^{2}}\ast\mathcal{F}^{-1}\left[ f\right] \left( x\right)
\right] e^{x^{2}}\frac{d^{k}}{dx^{k}}\left( e^{-x^{2}}\right) dx\text{.}%
\]
\end{proof}
Applying Remark \ref{RemDense} to this result shows that even discrete sets of
positive frequencies that approach 0 make the span of the corresponding sine
and cosine functions equal to $L^{2}\left( \left[ a,b\right] ,\mathbb{R}%
\right) $.
Finally, low-frequency cosines span the even functions:
\begin{proposition}
On any finite interval $\left[ 0,b\right] $ for any $\omega>0$ the finite
linear combinations of cosine functions with frequency lower than $\omega$ are
dense in $L^{2}\left( \left[ 0,b\right] ,\mathbb{R}\right) $.
\end{proposition}
\begin{proof}
Let $f\in L^{2}\left( \left[ 0,b\right] ,\mathbb{R}\right) $ and extend it
as an even function on $\left[ -b,b\right] $. Now use the previous corollary
to write
\[
f\underset{\epsilon}{\approx}\text{ }\overset{N}{\underset{n=0}{%
{\textstyle\sum}
}}a_{n}\cos\left( ntx\right) +b_{n}\sin\left( ntx\right) \text{.}%
\]
We'd like to conclude right now that the $b_{n}=0$ or $b_{n}\approx0$, but
that is not true. However, every function $g$ on $\left[ -b,b\right] $ may
be written uniquely as a sum of even and odd functions%
\begin{align*}
g & =g_{e}+g_{o}\\
g_{e}\left( x\right) & =\frac{g\left( x\right) +g\left( -x\right) }%
{2}\\
g_{o}\left( x\right) & =\frac{g\left( x\right) -g\left( -x\right) }{2}%
\end{align*}
and so%
\[
g\underset{\epsilon}{\approx}\text{ }h\text{\quad}\Rightarrow\text{\quad}%
g_{e}\underset{\epsilon}{\approx}\text{ }h_{e}\text{.}%
\]
Therefore%
\[
f=f_{e}\underset{\epsilon}{\approx}\text{ }\left[ \overset{N}{\underset{n=0}{%
{\textstyle\sum}
}}a_{n}\cos\left( ntx\right) +b_{n}\sin\left( ntx\right) \right]
_{e}=\overset{N}{\underset{n=0}{%
{\textstyle\sum}
}}a_{n}\cos\left( ntx\right) \text{.}%
\]
\end{proof}
Beware this last result; it is not as strong as Fourier approximation. The
coefficients for the sine functions calculated above may be large; the
proposition merely promises that the linear combination of the sine terms is
small. Using least squares instead, however, the sine coefficients vanish.
\section{Origins and generalizations}
The mathematical inspiration for Theorem \ref{ThmBumps} comes from geometrical
investigations in infinite dimensional control theory. We noticed that
function translation and vector translation in $L^{2}\left( \mathbb{R}%
\right) $ do not commute. Specifically, ``function translation'' is a flow on
the infinite dimensional vector space $L^{2}\left( \mathbb{R}\right) $ given
by the map $F:L^{2}\left( \mathbb{R}\right) \times\mathbb{R}\rightarrow
L^{2}\left( \mathbb{R}\right) $ where $F_{t}\left( f\right) \left(
x\right) :=f\left( x+t\right) $. ``Vector translation'' in the direction of
$g\in L^{2}\left( \mathbb{R}\right) $ is the flow $G:L^{2}\left(
\mathbb{R}\right) \times\mathbb{R}\rightarrow L^{2}\left( \mathbb{R}\right)
$ where $G_{t}\left( f\right) :=f+tg$. Taking for example $g\left(
x\right) :=e^{-x^{2}}$ and composing $F$ and $G$ we see $F_{t}\circ G_{t}\neq
G_{t}\circ F_{t}$ since for $f\equiv0$%
\[
F_{t}\circ G_{t}\left( f\right) \left( x\right) =te^{-\left( x+t\right)
^{2}}\text{ \qquad while\qquad}G_{t}\circ F_{t}\left( f\right) \left(
x\right) =te^{-x^{2}}\text{.}%
\]
Notice however the key fact%
\[
\frac{F_{t}\circ G_{t}-G_{t}\circ F_{t}}{t^{2}}\left( f\right)
\rightarrow\frac{d}{dx}\left( e^{-x^{2}}\right) \text{\qquad as
}t\rightarrow0
\]
In finite dimensions the commutator quotient above gives the Lie bracket
$\left[ X,Y\right] $ of the vector fields $X$ and $Y$ which generate the
flows $F$ and $G$, respectively. A fundamental result in finite-dimensional
control theory states that the reachable set via $X$ and $Y$ is given by the
integral surface to the distribution made up of iterated Lie brackets starting
from $X$ and $Y$ (Chow's Theorem, which is an interpretation of Frobenius'
Foliation Theorem, see \cite{Sontag}, e.g.). The idea we are exploiting is
that iterated Lie brackets for our flows $F$ and $G$ will give successive
derivatives of the Gaussian, whose span is dense in $L^{2}\left(
\mathbb{R}\right) $. Consequently, the reachable set via $F$ and $G$ from
$f\equiv0$ should be all of $L^{2}\left( \mathbb{R}\right) $. That is to
say, sums of translates and multiples of one Gaussian (with fixed variance)
can approximate any square-integrable function.
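The key limit above is easy to check numerically (our own illustration; `commutator_quotient` is a hypothetical name). Starting from $f\equiv0$, the commutator quotient equals $(e^{-\left( x+t\right) ^{2}}-e^{-x^{2}})/t$, a difference quotient for $\frac{d}{dx}e^{-x^{2}}$:

```python
import numpy as np

g = lambda x: np.exp(-x**2)
dg = lambda x: -2 * x * np.exp(-x**2)   # d/dx e^{-x^2}

def commutator_quotient(x, t):
    # (F_t o G_t - G_t o F_t)(0)(x) / t^2  =  (t e^{-(x+t)^2} - t e^{-x^2}) / t^2
    return (g(x + t) - g(x)) / t

for t in (1e-1, 1e-2, 1e-3):
    print(t, abs(commutator_quotient(0.5, t) - dg(0.5)))  # error shrinks like O(t)
```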
Unfortunately this program doesn't automatically work on the infinite
dimensional vector space $L^{2}\left( \mathbb{R}\right) $ since the function
translation flow is not generated by a simple vector field on $L^{2}\left(
\mathbb{R}\right) $. So instead of studying vector fields, we consider flows
as primary. The fundamental results can be rewritten and still hold in the
general context of a metric space \cite{CalcFoliation}. Then other functions
besides $g\left( x\right) =e^{-x^{2}}$ can be checked to be derivative
generating and other flows may be used in place of translation. E.g., Fourier
approximation is achieved using dilation $F:L^{2}\left( \mathbb{R}%
,\mathbb{C}\right) \times\mathbb{R}\rightarrow L^{2}\left( \mathbb{R}%
,\mathbb{C}\right) $ where $F_{t}\left( f\right) \left( x\right)
:=f\left( e^{t}x\right) $ and $G_{t}\left( f\right) \left( x\right)
:=f\left( x\right) +te^{ix}$. This gives us a general tool for determining
the density of various families of functions.
Another opportunity for generalizing the results of this paper presents itself
with the observation that Hermite expansions are valid for functions defined
on $\mathbb{C}$ or $\mathbb{R}^{n}$ and in spaces of tempered distributions;
and divided differences work in all of these spaces as well.
Note also that while the results of section \ref{SecAppAlgo} work for uniform
approximations of continuous functions on finite intervals (Remark
\ref{RemUnifG}), this is an open question for low-frequency trigonometric approximations.
The results of this paper can be ported to the language of control theory
where we can then conclude the system%
\begin{equation}
u_{t}=c_{1}\left( t\right) u_{x}+c_{2}(t)e^{-x^{2}}\label{LineControl2}%
\end{equation}
is bang-bang controllable with controls of the form $c_{1},c_{2}%
:\mathbb{R}^{+}\rightarrow\left\{ -1,0,1\right\} $. Theorem \ref{AlgBump}
drives the initial condition $f\equiv0$ to any state in $L^{2}$ under the
system $\left( \ref{LineControl2}\right) $, but may be nowhere near optimal
for approximating a function such as $e^{-\left( x+10\right) ^{2}}$, since
it uses only Gaussians $e^{-\left( x+s\right) ^{2}}$ with choices of $s\ll10$.
Finally, interpreting Theorem \ref{ThmBumps} in terms of signal analysis, we
see a Gaussian filter is a universal synthesizer with arbitrarily short load
time. Let $G\left( x\right) :=\frac{1}{\sqrt{\pi}}e^{-x^{2}}$. A Gaussian
filter is a linear time-invariant system represented by the operator%
\[
\mathcal{W}\left( f\right) \left( x\right) :=\left( f\ast G\right)
\left( x\right) =\frac{1}{\sqrt{\pi}}\int_{\mathbb{R}}f\left( y\right)
e^{-\left( x-y\right) ^{2}}dy\text{.}%
\]
Notice if you feed $\mathcal{W}$ a Dirac delta distribution $\delta_{t}$ (an
ideal impulse at time $x=t$) you get $\mathcal{W}\left( \delta_{t}\right)
=G\left( x-t\right) $. Then Theorem \ref{ThmBumps} gives
\begin{corollary}
For any $f\in L^{2}\left( \mathbb{R}\right) $ and any $\epsilon>0$ and any
$\tau>0$ there exists $t>0$ and $N\in\mathbb{N}$ with $tN<\tau$ such that%
\[
f\underset{\epsilon}{\approx}\mathcal{W}\left( \overset{N}{\underset{n=0}{%
{\textstyle\sum}
}}a_{n}\delta_{nt}\right)
\]
for some choice of $a_{n}\in\mathbb{R}$.
\end{corollary}
Feeding a Gaussian filter a linear combination of impulses, we can synthesize
any signal with arbitrarily small load time $\tau$. The design of physical
approximations to an analog Gaussian filter is detailed in \cite{Dishal},
\cite{Madrenas}.
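A discrete least-squares sketch of the synthesis corollary (illustrative only; the grid, target signal, and node choices below are our own, and we fit the impulse weights numerically rather than with the paper's formulas): feed the filter impulses at $0,t,\ldots,Nt$ and solve for the weights $a_{n}$.

```python
import numpy as np

G = lambda x: np.exp(-x**2) / np.sqrt(np.pi)   # Gaussian filter impulse response
f = lambda x: np.exp(-(x - 0.55)**2)           # target signal to synthesize

t, N = 0.1, 10                                 # impulses at 0, t, ..., Nt; load time Nt = 1
xs = np.linspace(-5, 5, 2001)
A = np.stack([G(xs - n * t) for n in range(N + 1)], axis=1)
a, *_ = np.linalg.lstsq(A, f(xs), rcond=None)  # weights a_n for W(sum a_n delta_{nt})
print(np.max(np.abs(A @ a - f(xs))))           # small uniform error on the grid
```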
\section{Appendix: Approximating higher derivatives\label{SecImpConv}}
The results in this paper may be much improved with the wealth of techniques
available from numerical analysis. E.g., \cite{Greengard} gives an algorithm
which speeds the calculation of sums of Gaussians, and \cite{Leibon} explores
Hermite expansion acceleration useful in step 1 of the proof of Theorem
\ref{ThmBumps}. This section is devoted to reviewing methods which improve the
error in step 2, approximating derivatives of the Gaussian with finite
differences. We also derive the error formula used in Proposition
\ref{propLpDivDiff}.
Above we approximated derivatives with the formula%
\begin{equation}
\frac{d^{n}}{dx^{n}}f\left( x\right) =%
\begin{tabular}
[c]{c}%
$\underbrace{\frac{1}{t^{n}}%
{\textstyle\sum_{k=0}^{n}}
\left( -1\right) ^{n-k}\binom{n}{k}f\left( x+kt\right) }$\\
gives round-off error as $t\rightarrow0^{+}$%
\end{tabular}%
\begin{tabular}
[c]{c}%
$\underset{}{+}$%
\end{tabular}%
\begin{tabular}
[c]{l}%
$\underbrace{O\left( t\right) }$\\
truncation error
\end{tabular}
\text{.}\label{LineNthDer=O(t)}%
\end{equation}
The N\"{o}rlund-Rice integral may be of interest for extremely large $n$ as it
avoids the calculation of the binomial coefficient by evaluating a complex
integral. In this section, though, we devote our attention to deriving
$n$-point formulas; these formulas decrease round-off error by increasing the
number of evaluations $f\left( x+kt\right) $, which shrinks the truncation
error without sending $t\rightarrow0$.
In approximating the $k$th derivative with an $n+1$ point formula%
\[
f^{\left( k\right) }\left( x\right) \approx\frac{1}{t^{k}}\overset
{n}{\underset{i=0}{%
{\textstyle\sum}
}}c_{i}f\left( x+k_{i}t\right)
\]
we wish to calculate the coefficients $c_{i}$. In the forward difference
method, the $k_{i}=i$, but keeping these values general allows us to find the
coefficients for the central or backward difference formulas just as easily.
The following method for finding the $c_{i}$ was shown to us by our student
Jeffrey Thornton who rediscovered the formula.
Taylor's Theorem has%
\[
f\left( x+k_{i}t\right) =\overset{n}{\underset{j=0}{%
{\textstyle\sum}
}}\frac{\left( k_{i}t\right) ^{j}}{j!}f^{\left( j\right) }\left(
x\right) +\frac{\left( k_{i}t\right) ^{n+1}}{\left( n+1\right)
!}f^{\left( n+1\right) }\left( \xi_{i}\right)
\]
for some $\xi_{i}$ between $x$ and $x+k_{i}t$. From this it follows%
\begin{align*}
& \overset{n}{\underset{i=0}{%
{\textstyle\sum}
}}c_{i}f\left( x+k_{i}t\right) \\
& =\left[
\begin{tabular}
[c]{c}%
$f\left( x\right) $\\
$tf^{\prime}\left( x\right) $\\
$\vdots$\\
$t^{n}f^{\left( n\right) }\left( x\right) $\\
$t^{n+1}$%
\end{tabular}
\right] ^{T}\left[
\begin{tabular}
[c]{cccc}%
$1$ & $1$ & $\cdots$ & $1$\\
$k_{0}$ & $k_{1}$ & $\cdots$ & $k_{n}$\\
$\frac{k_{0}^{2}}{2!}$ & $\frac{k_{1}^{2}}{2!}$ & $\cdots$ & $\frac{k_{n}^{2}%
}{2!}$\\
$\vdots$ & $\vdots$ & $\ddots$ & $\vdots$\\
$\frac{k_{0}^{n}}{n!}$ & $\frac{k_{1}^{n}}{n!}$ & $\cdots$ & $\frac{k_{n}^{n}%
}{n!}$\\
$\tfrac{k_{0}^{n+1}f^{\left( n+1\right) }\left( \xi_{0}\right) }{\left(
n+1\right) !}$ & $\frac{k_{1}^{n+1}f^{\left( n+1\right) }\left( \xi
_{1}\right) }{\left( n+1\right) !}$ & $\cdots$ & $\frac{k_{n}%
^{n+1}f^{\left( n+1\right) }\left( \xi_{n}\right) }{\left( n+1\right)
!}$%
\end{tabular}
\right] \left[
\begin{tabular}
[c]{c}%
$c_{0}$\\
$c_{1}$\\
$\vdots$\\
$c_{n}$%
\end{tabular}
\right]
\end{align*}
Now pick $c=\left[ c_{i}\right] $ as a solution to%
\begin{equation}
\left[
\begin{tabular}
[c]{cccc}%
$1$ & $1$ & $\cdots$ & $1$\\
$k_{0}$ & $k_{1}$ & $\cdots$ & $k_{n}$\\
$\frac{k_{0}^{2}}{2!}$ & $\frac{k_{1}^{2}}{2!}$ & $\cdots$ & $\frac{k_{n}^{2}%
}{2!}$\\
$\vdots$ & $\vdots$ & $\ddots$ & $\vdots$\\
$\frac{k_{0}^{n}}{n!}$ & $\frac{k_{1}^{n}}{n!}$ & $\cdots$ & $\frac{k_{n}^{n}%
}{n!}$%
\end{tabular}
\right] \left[
\begin{tabular}
[c]{c}%
$c_{0}$\\
$c_{1}$\\
$\vdots$\\
$c_{n}$%
\end{tabular}
\right] =\left[
\begin{tabular}
[c]{c}%
$0$\\
$\vdots$\\
$1$\\
$\vdots$\\
$0$%
\end{tabular}
\right] \label{LineNumDiffCoeffMatrix}%
\end{equation}
which is possible since the $k_{i}$ are different, so the matrix is
invertible, as is seen using the Vandermonde determinant%
\[
\det=\frac{\underset{0\leq i<j\leq n}{\Pi}\left( k_{j}-k_{i}\right)
}{\underset{2\leq i\leq n}{\Pi}i!}\text{.}%
\]
Then we must have%
\begin{align*}
\overset{n}{\underset{i=0}{%
{\textstyle\sum}
}}c_{i}f\left( x+k_{i}t\right) & =\left[
\begin{tabular}
[c]{c}%
$f\left( x\right) $\\
$tf^{\prime}\left( x\right) $\\
$\vdots$\\
$t^{n}f^{\left( n\right) }\left( x\right) $\\
$t^{n+1}$%
\end{tabular}
\right] ^{T}\left[
\begin{tabular}
[c]{l}%
$0$\\
$\vdots$\\
$1$\quad($k$-th position)\\
$\vdots$\\
$0$\\
$\frac{1}{\left( n+1\right) !}\overset{n}{\underset{i=0}{%
{\textstyle\sum}
}}c_{i}k_{i}^{n+1}f^{\left( n+1\right) }\left( \xi_{i}\right) $%
\end{tabular}
\right] \\
& =t^{k}f^{\left( k\right) }\left( x\right) +\frac{t^{n+1}}{\left(
n+1\right) !}\overset{n}{\underset{i=0}{%
{\textstyle\sum}
}}c_{i}k_{i}^{n+1}f^{\left( n+1\right) }\left( \xi_{i}\right) \text{.}%
\end{align*}
Therefore%
\[
f^{\left( k\right) }\left( x\right) =\frac{1}{t^{k}}\overset{n}%
{\underset{i=0}{%
{\textstyle\sum}
}}c_{i}f\left( x+k_{i}t\right) +Error
\]
for $c_{i}$ which satisfy $\left( \ref{LineNumDiffCoeffMatrix}\right) $
where%
\[
Error=-\dfrac{t^{n+1-k}}{\left( n+1\right) !}\overset{n}{\underset{i=0}{%
{\textstyle\sum}
}}c_{i}k_{i}^{n+1}f^{\left( n+1\right) }\left( \xi_{i}\right) \text{.}%
\]
This $Error$ formula shows how truncation error may be decreased by increasing
$n$ without shrinking $t$, thus combating round-off error at the expense of
additional function evaluations.
The coefficients in $\left( \ref{LineNthDer=O(t)}\right) $ are obtained by
solving $\left( \ref{LineNumDiffCoeffMatrix}\right) $ for the $c_{i}$ with
$k_{i}$ chosen as $k_{i}=i$.
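The recipe above is straightforward to sketch numerically (the helper name and node choices below are our own): build the matrix with rows $k_{i}^{j}/j!$ and solve for the coefficient vector.

```python
import numpy as np
from math import factorial

def fd_coeffs(k, nodes):
    """Solve the linear system (rows k_i^j / j!) c = e_k for the c_i in
    f^(k)(x) ~ (1/t^k) * sum_i c_i * f(x + k_i t)."""
    n = len(nodes) - 1
    A = np.array([[node**j / factorial(j) for node in nodes] for j in range(n + 1)])
    e = np.zeros(n + 1)
    e[k] = 1.0                     # 1 in the k-th position
    return np.linalg.solve(A, e)

print(fd_coeffs(1, [0, 1, 2]))    # forward difference for f':  [-1.5, 2, -0.5]
print(fd_coeffs(2, [-1, 0, 1]))   # central difference for f'': [1, -2, 1]
```

The recovered coefficients are the classical second-order forward and central difference stencils.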
Thornton also points out that the $k_{i}$ may be chosen as complex values when
$f$ is analytic (as is the case with our Gaussians). This gives us another
opportunity to mitigate round-off error, since a greater quantity of
regularly-spaced nodes $k_{i}$ can be packed into an epsilon ball around zero
in the complex plane than on the real line.
As a final note, we mention that there have been numerous advances to the
present day in inverting the Vandermonde matrix. We mention only the earliest application
to numerical differentiation \cite{Spitzbart} which gives a formula in terms
of the Stirling numbers.
| {
"timestamp": "2008-05-26T15:16:49",
"yymm": "0805",
"arxiv_id": "0805.3795",
"language": "en",
"url": "https://arxiv.org/abs/0805.3795",
"abstract": "Linear combinations of translations of a single Gaussian, e^{-x^2}, are shown to be dense in L^2(R). Two algorithms for determining the coefficients for the approximations are given, using orthogonal Hermite functions and least squares. Taking the Fourier transform of this result shows low-frequency trigonometric series are dense in L^2 with Gaussian weight function.",
"subjects": "Classical Analysis and ODEs (math.CA); Functional Analysis (math.FA)",
"title": "Approximating with Gaussians",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9830850842553892,
"lm_q2_score": 0.7217432182679956,
"lm_q1q2_score": 0.7095349925417482
} |
https://arxiv.org/abs/1108.3479 | The semicircle law for matrices with independent diagonals | We investigate the spectral distribution of random matrix ensembles with correlated entries. We consider symmetric matrices with real valued entries and stochastically independent diagonals. Along the diagonals the entries may be correlated. We show that under sufficiently nice moment conditions the empirical eigenvalue distribution converges almost surely weakly to the semi-circle law. | \section{Introduction}
Large-dimensional random matrices are, among others, of interest in statistics and in theoretical physics, in particular when studying the properties of atoms with heavy
nuclei. One of the most interesting and best studied questions has been to investigate the properties of the eigenvalues of random matrices. For example, Wigner, in his
seminal paper \cite{Wigner}, showed that the spectral distribution of symmetric or Hermitian random matrices with independent Gaussian entries converges, under appropriate
scaling, to the semi-circle law. This was generalized by Arnold \cite{Arnold} to the situation of symmetric or Hermitian random matrices filled with independent
and identically distributed (i.i.d.) random variables with sufficiently many moments. Other generalizations of Wigner's semi-circle law concern matrix ensembles with entries
drawn according to weighted Haar measures on classical (e.g., orthogonal, unitary, symplectic) groups. Such results are particularly interesting, since such random matrices
also play a major role in non-commutative probability (see e.g. \cite{alice_stflour}); other applications are in graph theory, combinatorics or algebra.
This note addresses a question that is much in the spirit of Arnold's generalization of the semi-circle law. Even though a couple of random matrix models include situations
with stochastically correlated entries (see especially \cite{brycdembo}, where the case of random Toeplitz and Hankel matrices is treated), the dependencies are not very
natural from a stochastic point of view. A generic way to construct random matrices with dependent entries could be to consider a two-dimensional (stationary) random field
indexed by $\mathbb{Z}^2$ with correlations that decay with the distance of the indices and to take an $n \times n$ block as entries for a random $n \times n$ matrix.
The present note is a first step to study the asymptotic eigenvalue distribution of such matrix ensembles. Here we will deviate from the independence assumption by
considering (real) random fields with entries that may be dependent on each diagonal, but with stochastically independent diagonals. For such matrices we will prove a semi-circle law.
The setup may look at first glance a bit more artificial than a situation where the matrices are filled with row- or columnwise independent random variables (e.g.
with row- or columnwise independent Markov chains). Note, however, that in order to guarantee real eigenvalues we will need to restrict ourselves to symmetric random matrices.
This would imply that a matrix with rowwise independent entries above the diagonal has columnwise independent entries below it. Not only is this a rather strange setup, one can also
see from simulations that their asymptotic eigenvalue distribution is probably not the semi-circle law.
It also should be mentioned that a similar situation has been studied by Khorunzhy and Pastur in \cite{KhorunzhyPastur94}. They consider the eigenvalue distribution of so-called deformed Wigner ensembles, which consist of matrices that can be written as the sum of a Wigner matrix (a symmetric matrix with independent entries above the diagonal) and a deterministic matrix. It is proven that in this situation the
empirical eigenvalue density converges in probability to a non-random limit. This setup, while similar, is different from ours.
The rest of the note is organized as follows. In the second section we will formalize the situation we want to consider and state our main result. Section 3 is devoted to the
proof, that is based on a moment method. Section 4 contains some examples.
\section{Main Results}
In this section we will state our main theorem, a semi-circle law for symmetric random matrices with independent diagonals (for a precise formulation see Theorem
\ref{main} below). The limit law for their empirical eigenvalue distribution is the semi-circle distribution. Its density is given by
$$
f(x)=\left\{\begin{array}{ll}
\frac 1 {2\pi} \sqrt {4-x^2} & \mbox{if } -2 \le x \le 2\\
0 & \mbox{otherwise.}
\end{array} \right.
$$
We want to consider the following setup: Let $\left\{a(p,q), 1\leq p\leq q< \infty\right\}$ be a real valued random field.
For any $n\in \mathbb{N}$, define the symmetric random $n\times n$ matrix $\textbf{X}_n$ by
\begin{equation*}
\textbf{X}_n (q,p) = \textbf{X}_n (p,q) = \frac{1}{\sqrt{n}} a(p,q), \qquad 1\leq p\leq q\leq n.
\end{equation*}
We will have to impose the following conditions on $\textbf{X}_n$: \\
\begin{enumerate}
\item[(C1)] $\mathbb{E}\left[a(p,q)\right]=0$, $\mathbb{E}\left[a(p,q)^{2}\right] = 1$ and
\begin{equation*}
m_k:=\sup_{n\in\mathbb{N}} \max_{1\leq p\leq q\leq n} \mathbb{E}\left[\left|a(p,q)\right|^{k}\right] < \infty, \quad k\in\mathbb{N}.
\end{equation*}
\item[(C2)] the diagonals of $\textbf{X}_n$, i.e. the families $\left\{a(p,p+r), p\in\mathbb{N}\right\}$, $r\in\mathbb{N}_0$, are independent,
\item[(C3)] the covariance of two entries on the same diagonal depends only on their distance, i.e. for any $\tau\in\mathbb{N}_0$ we can define
\begin{equation*}
\mathrm{Cov}(\tau) := \mathrm{Cov}(a(p,q),a(p+\tau,q+\tau)), \qquad p,q\in\mathbb{N},
\end{equation*}
\item[(C4)] the entries on the diagonals have a quickly decaying dependency structure, which will be expressed in terms of the condition
\begin{equation*}
\sum_{\tau=0}^{\infty} \left|\mathrm{Cov}(\tau)\right| < \infty.
\end{equation*}
\end{enumerate}
We will denote the (real) eigenvalues of $ \textbf{X}_{n}$ by $\lambda_1^{(n)} \le \lambda_2^{(n)} \le \ldots \le \lambda_n^{(n)}$. Let $\mu_n$ be the empirical eigenvalue distribution, i.e.
\begin{equation*}
\mu_n = \frac{1}{n} \sum_{k=1}^n \delta_{\lambda_k^{(n)}}.
\end{equation*}
With these notations we are able to formulate the
central result of this note.
\begin{theorem}
Assume that the symmetric random matrix $\textbf{X}_n$ as defined above satisfies the conditions $(C1)$, $(C2)$, $(C3)$ and $(C4)$. Then, with probability $1$, the empirical spectral distribution of $\textbf{X}_n$ converges weakly to the standard semi-circle distribution, i.e.
\begin{equation*}
\mu_n \Rightarrow \mu \qquad \mbox{ as } n \to \infty
\end{equation*}
both in expectation and $\mathbb{P}$-almost surely.
Here ``$\Rightarrow$'' denotes weak convergence.
\label{main}
\end{theorem}
\begin{rem}
\normalfont{
Note that in order for the semi-circle law to hold, it is not possible to drop condition $(C4)$ without any replacement. Consider for example a Toeplitz matrix, that is, a Hermitian matrix with identical entries on each diagonal. For such a matrix, we clearly have
\begin{equation*}
\sum_{\tau=0}^{\infty} \left|\mathrm{Cov}(\tau)\right| = \infty.
\end{equation*}
Indeed, it was shown in \cite{brycdembo} that the empirical distribution of a sequence of Toeplitz matrices tends with probability $1$ to a nonrandom probability measure with unbounded support.
}
\end{rem}
\section{Proof of Theorem \ref{main}}
We want to resort to the method of moments to prove Theorem \ref{main} (this method has been applied in similar situations in \cite{Arnold} or \cite{Schenker_Schulz-Baldes}, among (many) others). To this end, let $Y$ be distributed according to the semi-circle distribution. For the proof of the theorem it will be important to notice that the moments of $Y$ are given by
\begin{equation}
\mathbb{E}(Y^{k})=\begin{cases} 0, & \text{if} \ k \ \text{is odd}, \\ C_{\frac{k}{2}}, & \text{if} \ k \ \text{is even}, \end{cases}
\end{equation}
where $C_{\frac{k}{2}} = \frac{k!}{\frac{k}{2}!\left(\frac{k}{2}+1\right)!}$ denotes the $\frac{k}{2}$-th Catalan number. Since these moments determine the semicircle distribution uniquely, the weak convergence of the expected empirical distribution will follow from the relation
\begin{equation*}
\lim_{n\to\infty} \frac{1}{n} \mathbb{E}\left[\mathrm{tr}\left(\textbf{X}_{n}^{k}\right)\right] = \begin{cases} 0, & \text{if} \ k \ \text{is odd}, \\ C_{\frac{k}{2}}, & \text{if} \ k \ \text{is even,} \end{cases}
\end{equation*}
where $\mathrm{tr}(\cdot)$ denotes the trace operator. The first part of the proof is to verify this convergence.\\
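The moment identities above are easy to verify directly (a numerical sketch of our own; `semicircle_moment` is a hypothetical helper name):

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def semicircle_moment(k):
    """k-th moment of the standard semicircle density on [-2, 2]."""
    val, _ = quad(lambda x: x**k * np.sqrt(4 - x**2) / (2 * np.pi), -2, 2)
    return val

def catalan(m):
    return factorial(2 * m) // (factorial(m) * factorial(m + 1))

print([round(semicircle_moment(k), 8) for k in range(7)])
# odd moments vanish; even moments 1, 1, 2, 5 are the Catalan numbers C_0..C_3
```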
To start with, consider the set $\mathcal{T}_n(k)$ of $k$-tuples of consistent pairs, that is elements of the form $\left(P_1,\ldots,P_k\right)$ with $P_j = (p_j,q_j) \in \left\{1,\ldots,n\right\}^2$ satisfying $q_j = p_{j+1}$ for any $j=1,\ldots,k$, where $k+1$ is identified with $1$. Then, we have
\begin{equation*}
\frac{1}{n} \mathbb{E}\left[\mathrm{tr}\left(\textbf{X}_{n}^{k}\right)\right] = \frac{1}{n^{1+\frac{k}{2}}} \sum_{\left(P_1,\ldots,P_k\right)\in\mathcal{T}_n(k)} \mathbb{E}\left[a(P_1)\cdot \ldots \cdot a(P_k)\right].
\end{equation*}
Further, define $\mathcal{P}(k)$ to be the set of all partitions $\pi$ of $\left\{1,\ldots,k\right\}$. Any partition $\pi$ induces an equivalence relation $\sim_\pi$ on $\left\{1,\ldots,k\right\}$ by
\begin{equation*}
i\sim_\pi j \quad :\Longleftrightarrow \quad \text{$i$ and $j$ belong to the same set of the partition} \ \pi.
\end{equation*}
We say that an element $\left(P_1,\ldots,P_k\right)\in\mathcal{T}_n(k)$ is a $\pi$-consistent sequence if
\begin{equation*}
\left|p_i - q_i\right| = \left|p_j - q_j\right| \quad \Longleftrightarrow \quad i\sim_{\pi} j.
\end{equation*}
Due to condition $(C2)$, this implies that $a(P_{i_1}),\ldots,a(P_{i_l})$ are stochastically independent if $i_1,\ldots,i_l$ belong to $l$ different blocks of $\pi$. The set of all $\pi$-consistent sequences $\left(P_1,\ldots,P_k\right)\in\mathcal{T}_n(k)$ is denoted by $S_n(\pi)$. Thus, we can write
\begin{equation*}
\frac{1}{n} \mathbb{E}\left[\mathrm{tr}\left(\textbf{X}_{n}^{k}\right)\right] = \frac{1}{n^{1+\frac{k}{2}}} \sum_{\pi \in \mathcal{P}(k)} \sum_{\left(P_1,\ldots,P_k\right)\in S_n(\pi)} \mathbb{E}\left[a(P_1)\cdot \ldots \cdot a(P_k)\right].
\end{equation*}
Now fix a $k\in\mathbb{N}$. For any $\pi\in\mathcal{P}(k)$ let $\# \pi$ denote the number of equivalence classes of $\pi$. We distinguish different cases.\\
\noindent \textbf{First case:} $\quad \# \pi > \frac{k}{2}$ \\
Since $\pi$ is a partition of $\left\{1,\ldots,k\right\}$, there is at least one equivalence class with a single element $l$. Consequently, for any sequence $\left(P_1,\ldots,P_k\right)\in S_n(\pi)$ we have
\begin{equation*}
\mathbb{E}\left[a(P_1)\cdot \ldots \cdot a(P_k)\right] = \mathbb{E}\Big[\prod_{i\neq l}a(P_i)\Big] \cdot \mathbb{E}\left[a(P_l)\right] = 0,
\end{equation*}
due to the independence of elements in different equivalence classes.
Hence, we obtain
\begin{equation*}
\frac{1}{n} \mathbb{E}\left[\mathrm{tr}\left(\textbf{X}_{n}^{k}\right)\right] = \frac{1}{n^{1+\frac{k}{2}}} \sum_{\underset{\# \pi \leq \frac{k}{2}}{\pi \in \mathcal{P}(k),}} \sum_{\left(P_1,\ldots,P_k\right)\in S_n(\pi)} \mathbb{E}\left[a(P_1)\cdot \ldots \cdot a(P_k)\right].
\end{equation*}
\noindent \textbf{Second case:} $\quad r:= \# \pi < \frac{k}{2}$ \\
We need to calculate $\# S_n(\pi)$. To fix an element $\left(P_1,\ldots,P_k\right)\in S_n(\pi)$, we first choose the pair $P_1 = (p_1,q_1)$. There are at most $n$ possibilities to assign a value to $p_1$ and another $n$ possibilities for $q_1$. To fix $P_2 = (p_2,q_2)$, note that the consistency of the pairs implies $p_2 = q_1$. If now $1\sim_\pi 2$, the condition $\left|p_1 - q_1\right| = \left|p_2 - q_2\right|$ allows at most two choices for $q_2$. Otherwise, if $1\not\sim_\pi 2$, we have at most $n$ possibilities. We now proceed sequentially to determine the remaining pairs. When arriving at some index $i$, we check whether $i$ is equivalent to any preceding index $1,\ldots,i-1$. If this is the case, then we have at most two choices for $P_i$ and otherwise, we have $n$. Since there are exactly $r$ different equivalence classes, we can conclude that
\begin{equation*}
\# S_n(\pi) \leq n^2 \cdot n^{r-1} \cdot 2^{k-r} \leq C \cdot n^{r+1}
\end{equation*}
with a constant $C=C(r,k)$ depending on $r$ and $k$.
Now the uniform boundedness of the moments and the H\"{o}lder inequality together imply that for any sequence $(P_1,\ldots,P_k)$,
\begin{equation}
\left|\mathbb{E}\left[a(P_1)\cdot \ldots \cdot a(P_k)\right] \right| \leq \left[\mathbb{E}\left|a(P_1)\right|^{k}\right]^{\frac{1}{k}} \cdot \ldots \cdot \left[\mathbb{E}\left|a(P_k)\right|^{k}\right]^{\frac{1}{k}} \leq m_k.
\label{holder}
\end{equation}
Consequently, taking account of the relation $r< \frac{k}{2}$, we get
\begin{equation*}
\frac{1}{n^{1+\frac{k}{2}}} \sum_{\underset{\# \pi < \frac{k}{2}}{\pi \in \mathcal{P}(k),}} \sum_{\left(P_1,\ldots,P_k\right)\in S_n(\pi)} \left|\mathbb{E}\left[a(P_1)\cdot \ldots \cdot a(P_k)\right] \right| \leq C \cdot \frac{1}{n^{1+\frac{k}{2}}} \cdot n^{r+1} = o(1).
\end{equation*} \\
\noindent Combining the calculations in the first and the second case, we can conclude that
\begin{equation*}
\lim_{n\to\infty} \frac{1}{n} \mathbb{E}\left[\mathrm{tr}\left(\textbf{X}_{n}^{k}\right)\right] = \lim_{n\to\infty} \frac{1}{n^{1+\frac{k}{2}}} \sum_{\underset{\# \pi = \frac{k}{2}}{\pi \in \mathcal{P}(k),}} \sum_{\left(P_1,\ldots,P_k\right)\in S_n(\pi)} \mathbb{E}\left[a(P_1)\cdot \ldots \cdot a(P_k)\right],
\end{equation*}
if the limits exist. \\
\noindent Now consider the case where $k$ is \textbf{odd}. Since then the condition $\# \pi = \frac{k}{2}$ cannot be satisfied, the considerations above immediately yield
\begin{equation*}
\lim_{n\to\infty} \frac{1}{n} \mathbb{E}\left[\mathrm{tr}\left(\textbf{X}_{n}^{k}\right)\right] = 0.
\end{equation*} \\
\noindent It remains to cope with \textbf{even} $k$. Denote by $\mathcal{P}\mathcal{P}(k)\subset \mathcal{P}(k)$
the set of all pair partitions of $\left\{1,\ldots,k\right\}$. In particular, $\# \pi = \frac{k}{2}$ for any $\pi \in \mathcal{P}\mathcal{P}(k)$.
On the other hand, if $\#\pi = \frac{k}{2}$ but $\pi \notin \mathcal{P}\mathcal{P}(k)$, we can conclude that $\pi$ has at least one equivalence class with a
single element and hence, as in the first case, the expectation corresponding to the $\pi$ consistent sequences will become zero. Consequently,
\begin{equation*}
\lim_{n\to\infty} \frac{1}{n} \mathbb{E}\left[\mathrm{tr}\left(\textbf{X}_{n}^{k}\right)\right] = \lim_{n\to\infty} \frac{1}{n^{1+\frac{k}{2}}} \sum_{\pi\in\mathcal{P}\mathcal{P}(k)} \sum_{\left(P_1,\ldots,P_k\right)\in S_n(\pi)} \mathbb{E}\left[a(P_1)\cdot \ldots \cdot a(P_k)\right],
\end{equation*}
if the limits exist. We have now reduced the original set $\mathcal{P}(k)$ to the subset $\mathcal{P}\mathcal{P}(k)$. Next we want to fix a $\pi\in\mathcal{P}\mathcal{P}(k)$ and cope with the set $S_n(\pi)$.
\begin{lemma}[cf. \cite{brycdembo}, Proposition 4.4.]
Let $S_{n}^{*}(\pi) \subseteq S_n(\pi)$ denote the set of $\pi$ consistent sequences $(P_1,\ldots,P_k)$ satisfying
\begin{equation*}
i\sim_\pi j \quad \Longrightarrow \quad q_i - p_i = p_j - q_j
\end{equation*}
for all $i\neq j$. Then, we have
\begin{equation*}
\# \left(S_n(\pi)\backslash S_{n}^{*}(\pi)\right) = o\left(n^{1+\frac{k}{2}}\right).
\end{equation*}
\label{snstar}
\end{lemma}
\begin{proof}
We call a pair $(P_i,P_j)$ with $i\sim_\pi j$, $i\neq j$, positive if $q_i-p_i = q_j - p_j > 0$ and negative if $q_i - p_i = q_j - p_j < 0$. Since $\sum_{i=1}^k (q_i - p_i) = 0$ by consistency, the existence of a negative pair implies the existence of a positive one. Thus, we can assume that any sequence $(P_1,\ldots,P_k) \in S_n(\pi)\backslash S_{n}^{*}(\pi)$ contains a positive pair $(P_l,P_m)$. To fix such a sequence, we first determine the positions of $l$ and $m$ and then fix the signs of the remaining differences $q_i - p_i$. The number of possibilities to accomplish this depends only on $k$ and not on $n$. Now we choose one of $n$ possible values for $p_l$. In a next step, we fix the values of the differences $\left|q_i - p_i\right|$ for all $P_i$ except for $P_l$ and $P_m$. Since equivalent pairs share the same absolute difference, this amounts to choosing $\frac{k}{2}-1$ values, leaving at most $n^{\frac{k}{2}-1}$ possibilities. Then, $\sum_{i=1}^k (q_i - p_i) = 0$ implies that
\begin{equation*}
0 < 2(q_l - p_l) = q_l - p_l + q_m - p_m = \sum_{\underset{i\neq l,m}{i=1,}}^k \left(p_i - q_i\right).
\end{equation*}
Since we have already chosen the signs of the differences as well as their absolute values, we know the value of the sum on the right hand side. Hence, the difference $q_l - p_l = q_m - p_m$ is fixed. We now have the index $p_l$, all differences $\left|q_i - p_i\right|, i\in\left\{1,\ldots,k\right\}$, and their signs. Thus, we can start at $P_l$ and go systematically through the whole sequence $(P_1,\ldots,P_k)$ to see that it is uniquely determined. Consequently, our considerations lead to
\begin{equation*}
\# \left(S_n(\pi)\backslash S_{n}^{*}(\pi)\right) \leq C \cdot n^{\frac{k}{2}} = o\left(n^{1+\frac{k}{2}}\right).
\end{equation*}
\end{proof}
\noindent As a consequence of Lemma~\ref{snstar} and relation \eqref{holder}, we obtain
\begin{equation*}
\lim_{n\to\infty} \frac{1}{n} \mathbb{E}\left[\mathrm{tr}\left(\textbf{X}_{n}^{k}\right)\right] = \lim_{n\to\infty} \frac{1}{n^{1+\frac{k}{2}}} \sum_{\pi\in\mathcal{P}\mathcal{P}(k)} \sum_{\left(P_1,\ldots,P_k\right)\in S_{n}^{*}(\pi)} \mathbb{E}\left[a(P_1)\cdot \ldots \cdot a(P_k)\right],
\end{equation*}
\noindent if the limits exist. \\
\noindent We call a pair partition $\pi \in \mathcal{P}\mathcal{P}(k)$ \textbf{crossing} if there are indices $i<j<l<m$ with $i\sim_\pi l$ and $j\sim_\pi m$. Otherwise, we call $\pi$ \textbf{non-crossing}. The set of all non-crossing pair partitions is denoted by $\mathcal{N}\mathcal{P}\mathcal{P}(k)$.
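The crossing/non-crossing dichotomy can be checked against the Catalan numbers by direct enumeration for small $k$; the following Python sketch (illustrative only) generates all pair partitions of $\{1,\ldots,k\}$ and counts the non-crossing ones.

```python
import math

def pair_partitions(elems):
    # Yield all pair partitions of the (even-length) sorted list `elems`;
    # the first element of each emitted pair is its smaller member
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in pair_partitions(remaining):
            yield [(first, partner)] + sub

def is_crossing(partition):
    # Crossing: there are pairs (i, l) and (j, m) with i < j < l < m
    return any(i < j < l < m
               for (i, l) in partition for (j, m) in partition)

def count_ncpp(k):
    return sum(1 for p in pair_partitions(list(range(1, k + 1)))
               if not is_crossing(p))
```

For even $k$, the count agrees with $C_{k/2}=\frac{k!}{(k/2)!\,(k/2+1)!}$.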
\begin{lemma}
For any crossing $\pi \in \mathcal{P}\mathcal{P}(k) \backslash \mathcal{N}\mathcal{P}\mathcal{P}(k)$, we have
\begin{equation*}
\sum_{\left(P_1,\ldots,P_k\right)\in S_{n}^{*}(\pi)} \mathbb{E}\left[a(P_1)\cdot \ldots \cdot a(P_k)\right] = o\left(n^{\frac{k}{2}+1}\right).
\end{equation*}
\label{crossing}
\end{lemma}
\begin{proof}
Let $\pi$ be crossing and consider a sequence $\left(P_1,\ldots,P_k\right)\in S_{n}^{*}(\pi)$. Note that if there is an $l\in\left\{1,\ldots,k\right\}$ with $l\sim_\pi l+1$, where $k+1$ is identified with $1$, we immediately have
\begin{equation*}
a(P_l) = a(P_{l+1}),
\end{equation*}
since $q_l = p_{l+1}$ by consistency and then $p_l = q_{l+1}$ by definition of $S_{n}^{*}(\pi)$. In particular,
\begin{equation*}
\mathbb{E}\left[a(P_l) \cdot a(P_{l+1})\right] = 1.
\end{equation*}
The sequence $\left(P_1,\ldots, P_{l-1},P_{l+2}, \ldots,P_k\right)$ is still consistent because of the relation $q_{l-1} = p_l = q_{l+1} = p_{l+2}$. Since there are at most $n$ choices for $q_l = p_{l+1}$, it follows
\begin{equation*}
\# S_{n}^{*}(\pi) \leq n \cdot \# S_{n}^{*}(\pi^{(1)}),
\end{equation*}
where $\pi^{(1)}\in \mathcal{P}\mathcal{P}(k-2) \backslash \mathcal{N}\mathcal{P}\mathcal{P}(k-2)$ is the pair partition induced by $\pi$ after eliminating the indices $l$ and $l+1$. Let $r$ denote the maximum number of pairs of indices that can be eliminated in this way. Since $\pi$ is crossing, there are at least two pairs left and hence, $r\leq\frac{k}{2}-2$. By induction, we conclude that
\begin{equation*}
\# S_{n}^{*}(\pi) \leq n^r \cdot \# S_{n}^{*}(\pi^{(r)}),
\end{equation*}
where now $\pi^{(r)}\in \mathcal{P}\mathcal{P}(k-2r) \backslash \mathcal{N}\mathcal{P}\mathcal{P}(k-2r)$ is the still crossing pair partition induced by $\pi$. Thus, we so far have
\begin{align}
\begin{split}
& \sum_{\left(P_1,\ldots,P_k\right)\in S_{n}^{*}(\pi)} \left|\mathbb{E}\left[a(P_1)\cdot \ldots \cdot a(P_k)\right]\right| \\
& \qquad \qquad \qquad \qquad \leq n^r \sum_{(P^{(r)}_1,\ldots,P^{(r)}_{k-2r})\in S_{n}^{*}(\pi^{(r)})} \left|\mathbb{E}\left[a(P^{(r)}_1)\cdot \ldots \cdot a(P^{(r)}_{k-2r})\right]\right|.
\end{split}
\label{estimate}
\end{align}
Choose $i\sim_{\pi^{(r)}} i+j$ such that $j$ is minimal. We want to count the number of sequences $(P_1^{(r)},\ldots,P_{k-2r}^{(r)})\in S_{n}^{*}(\pi^{(r)})$ given that $p^{(r)}_i$ and $q^{(r)}_{i+j}$ are fixed. Therefore, we start with choosing one of $n$ possible values for $q^{(r)}_i$. But then, we can also deduce the value of
\begin{equation*}
p^{(r)}_{i+j} = q^{(r)}_i - p^{(r)}_i + q^{(r)}_{i+j}.
\end{equation*}
Since $j$ is minimal, any element in $\left\{i+1,\ldots,i+j-1\right\}$ is equivalent to some element outside the set $\left\{i,\ldots,i+j\right\}$. There are $n$ possibilities to fix $P^{(r)}_{i+1}$ as $p^{(r)}_{i+1}=q^{(r)}_i$ is already fixed. Proceeding sequentially, we have $n$ possibilities for the choice of any pair $P^{(r)}_l$ with $l\in \left\{i+2,\ldots,i+j-2\right\}$ and there is only one choice for $P^{(r)}_{i+j-1}$ since $q^{(r)}_{i+j-1}=p^{(r)}_{i+j}$ is already chosen. For any other pair that has not yet been fixed, there are at most $n$ possibilities if it is not equivalent to one pair that has already been chosen. Otherwise, there is only one possibility. Hence, assuming that the elements $p^{(r)}_i$ and $q^{(r)}_{i+j}$ are fixed, we have at most
\begin{equation*}
n \cdot n^{j-2} \cdot n^{\frac{k}{2}-r-j} = n^{\frac{k}{2}-r-1}
\end{equation*}
possibilities to choose the rest of the sequence $(P^{(r)}_1,\ldots,P^{(r)}_{k-2r})\in S_{n}^{*}(\pi^{(r)})$. Consequently, estimating the term in \eqref{estimate} further, we obtain
\begin{align*}
\sum_{\left(P_1,\ldots,P_k\right)\in S_{n}^{*}(\pi)} \left|\mathbb{E}\left[a(P_1)\cdot \ldots \cdot a(P_k)\right]\right| & \leq n^{\frac{k}{2}-1} \sum_{p^{(r)}_i,q^{(r)}_{i+j}=1}^{n} | \mathrm{Cov}(|q^{(r)}_{i+j}-p^{(r)}_i|) | \\
& \leq C \cdot n^{\frac{k}{2}} \sum_{\tau=0}^{n-1} \left|\mathrm{Cov}(\tau)\right| = o\left(n^{1+\frac{k}{2}}\right),
\end{align*}
since $\sum_{\tau=0}^{\infty} \left|\mathrm{Cov}(\tau)\right| < \infty$ by condition $(C4)$.
\end{proof}
\noindent Lemma~\ref{crossing} now guarantees that we need to consider only non-crossing pair partitions, that is
\begin{equation*}
\lim_{n\to\infty} \frac{1}{n} \mathbb{E}\left[\mathrm{tr}\left(\textbf{X}_{n}^{k}\right)\right] = \lim_{n\to\infty} \frac{1}{n^{1+\frac{k}{2}}} \sum_{\pi\in\mathcal{N}\mathcal{P}\mathcal{P}(k)} \sum_{\left(P_1,\ldots,P_k\right)\in S_{n}^{*}(\pi)} \mathbb{E}\left[a(P_1)\cdot \ldots \cdot a(P_k)\right],
\end{equation*}
if the limits exist.
\begin{lemma}
Let $\pi \in \mathcal{N}\mathcal{P}\mathcal{P}(k)$. For any $\left(P_1,\ldots,P_k\right)\in S_{n}^{*}(\pi)$, we have
\begin{equation*}
\mathbb{E}\left[a(P_1)\cdot\ldots\cdot a(P_k)\right] = 1.
\end{equation*}
\label{le1}
\end{lemma}
\begin{proof}
Let $l<m$ with $m\sim_\pi l$. Since $\pi$ is non-crossing, the number $m-l-1$ of elements between $l$ and $m$ must be even. In particular, there are $l\leq i< j\leq m$ with $i\sim_\pi j$ and $j=i+1$. By the properties of $S_{n}^{*}$, we have $a(P_i)=a(P_j)$, and the sequence $\left(P_1,\ldots, P_l,\ldots,P_{i-1},P_{i+2},\ldots,P_m,\ldots,P_k\right)$ is still consistent. Applying this argument successively, all pairs between $l$ and $m$ vanish and we see that the sequence $\left(P_1,\ldots,P_l,P_m,\ldots,P_k\right)$ is consistent, that is $q_l=p_m$. Then, the identity $p_l=q_m$ also holds. In particular, $a(P_l)=a(P_m)$. Since $l,m$ have been chosen arbitrarily, we obtain
\begin{equation*}
\mathbb{E}\left[a(P_1)\cdot\ldots\cdot a(P_k)\right] = \prod_{\stackrel{l< m}{l\sim_\pi m}} \mathbb{E}\left[a(P_l)\cdot a(P_m)\right] = 1.
\end{equation*}
\end{proof}
\noindent It remains to verify
\begin{lemma}
For any $\pi\in\mathcal{N}\mathcal{P}\mathcal{P}(k)$, we have
\begin{equation*}
\lim_{n\to\infty} \frac{\# S_{n}^{*}(\pi)}{n^{\frac{k}{2}+1}} = 1.
\end{equation*}
\label{le2}
\end{lemma}
\begin{proof}
To calculate the number of elements in $S_{n}^{*}(\pi)$, first choose $P_1$. There are $n^2$ possibilities for that choice. If $1\sim_\pi 2$, then $P_2$ is uniquely determined since $p_2 = q_1$ and by definition of $S_{n}^{*}(\pi)$, $q_2 = p_1$. If $1\not\sim_\pi 2$, then there are $n-1$ possibilities to fix $P_2$. Proceeding in the same way, we see that if $i \in \left\{2,\ldots,k\right\}$ is equivalent to some element in $\left\{1,\ldots,i-1\right\}$, there is always only one value $P_i$ can take. Otherwise there are asymptotically $n$ choices. The latter case will occur exactly $\frac{k}{2}-1$ times. In conclusion,
\begin{equation*}
\# S_{n}^{*}(\pi) \sim n^2 \cdot n^{\frac{k}{2}-1} = n^{1+\frac{k}{2}}.
\end{equation*}
\end{proof}
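For the smallest nontrivial case $k=4$ and the non-crossing pair partition $\pi=\{\{1,2\},\{3,4\}\}$, the asymptotics $\# S_n^*(\pi)\sim n^{1+k/2}=n^3$ can be checked by brute force; the following Python sketch is illustrative only and not part of the proof.

```python
def count_s_star(n):
    # Brute-force #S_n^*(pi) for k = 4, pi = {{1,2},{3,4}}.  A consistent
    # 4-tuple of pairs is determined by (p1, p2, p3, p4) via q_j = p_{j+1}
    # (cyclically), so we enumerate those four indices directly.
    count = 0
    rng = range(1, n + 1)
    for p1 in rng:
        for p2 in rng:
            for p3 in rng:
                for p4 in rng:
                    d = (p2 - p1, p3 - p2, p4 - p3, p1 - p4)  # d_j = q_j - p_j
                    # pi-consistency: |d_i| = |d_j| exactly when i ~ j
                    if abs(d[0]) != abs(d[1]) or abs(d[2]) != abs(d[3]):
                        continue
                    if abs(d[0]) == abs(d[2]):  # 1 and 3 are not equivalent
                        continue
                    # S_n^* condition: d_i = -d_j for paired indices i, j
                    if d[0] != -d[1] or d[2] != -d[3]:
                        continue
                    count += 1
    return count
```

Already for moderate $n$ the ratio $\#S_n^*(\pi)/n^3$ is close to $1$.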
\noindent Lemma~\ref{le1} and Lemma~\ref{le2} now provide that
\begin{align*}
\lim_{n\to\infty} \frac{1}{n} \mathbb{E}\left[\mathrm{tr}\left(\textbf{X}_{n}^{k}\right)\right] &= \lim_{n\to\infty} \frac{1}{n^{1+\frac{k}{2}}} \sum_{\pi\in\mathcal{N}\mathcal{P}\mathcal{P}(k)} \sum_{\left(P_1,\ldots,P_k\right)\in S_{n}^{*}(\pi)} \mathbb{E}\left[a(P_1)\cdot \ldots \cdot a(P_k)\right] \\
&= \lim_{n\to\infty} \frac{1}{n^{1+\frac{k}{2}}} \sum_{\pi\in\mathcal{N}\mathcal{P}\mathcal{P}(k)} \# S_{n}^{*}(\pi) = \# \mathcal{N}\mathcal{P}\mathcal{P}(k).
\end{align*}
\noindent Since the number of non-crossing pair partitions $\# \mathcal{N}\mathcal{P}\mathcal{P}(k)$ equals exactly the Catalan number $C_{\frac{k}{2}}$, we can conclude that the expected empirical spectral distribution of $\textbf{X}_n$ tends to the semi-circle law. This is the asserted convergence in expectation. \\
It remains to deduce almost sure convergence. Here, we follow the ideas of \cite{brycdembo}. To this end, we need
\begin{lemma}
Suppose the conditions of Theorem~\ref{main} hold. Then, for any $k,n \in\mathbb{N}$,
\begin{equation*}
\mathbb{E}\left[\left(\mathrm{tr}\left(\textbf{X}_{n}^{k}\right) - \mathbb{E}\left[\mathrm{tr} \left(\textbf{X}_{n}^{k}\right)\right]\right)^4\right] \leq C \cdot n^{2}.
\end{equation*}
\label{lemma}
\end{lemma}
\begin{proof}
Fix $k,n \in\mathbb{N}$. Using the notation
\begin{equation*}
P = (P_1,\ldots,P_k) = ( (p_1,q_1), \ldots, (p_k,q_k) ), \qquad a (P) = a(P_{1})\cdot \ldots \cdot a(P_{k}),
\end{equation*}
\noindent we have that
\begin{multline}
\mathbb{E}\left[\left(\mathrm{tr}\left(\textbf{X}_{n}^{k}\right) - \mathbb{E}\left[\mathrm{tr} \left(\textbf{X}_{n}^{k}\right)\right]\right)^4\right] \\
= \frac{1}{n^{2k}} \sum_{\pi^{(1)},\ldots,\pi^{(4)} \in \mathcal{P}(k)} \sum_{P^{(i)}\in S_n\left(\pi^{(i)}\right), i=1,\ldots,4} \mathbb{E}\Big[\prod_{j=1}^{4} \left(a (P^{(j)}) - \mathbb{E}\left[a (P^{(j)})\right]\right)\Big].
\label{eq1}
\end{multline}
Now consider a partition $\boldsymbol{\pi}$ of $\left\{1,\ldots,4k\right\}$. We say that a sequence $(P^{(1)},\ldots,P^{(4)})$ is $\boldsymbol{\pi}$ consistent if each $P^{(i)}, i=1,\ldots,4$, is a consistent sequence and
\begin{equation*}
\big|q_{l}^{(i)} - p_{l}^{(i)}\big| \ = \ \left|q_{m}^{(j)} - p_{m}^{(j)}\right| \quad \Longleftrightarrow \quad l + (i-1) k \ \sim_{\boldsymbol{\pi}} \ m + (j-1) k.
\end{equation*}
Let $\mathcal{S}_n (\boldsymbol{\pi})$ denote the set of all $\boldsymbol{\pi}$ consistent sequences with entries in $\left\{1,\ldots,n\right\}$. Then, \eqref{eq1} becomes
\begin{multline}
\mathbb{E}\left[\left(\mathrm{tr}\textbf{X}_{n}^{k} - \mathbb{E}\left[\mathrm{tr} \textbf{X}_{n}^{k}\right]\right)^4\right] \\
= \frac{1}{n^{2k}} \sum_{\boldsymbol{\pi}\in\mathcal{P}(4k)} \sum_{(P^{(1)}, \ldots, P^{(4)})\in \mathcal{S}_n\left(\boldsymbol{\pi}\right)} \mathbb{E}\Big[\prod_{j=1}^{4} \left(a (P^{(j)}) - \mathbb{E}\left[a (P^{(j)})\right]\right)\Big].
\label{eq2}
\end{multline}
We want to analyze the expectation on the right hand side. Therefore, fix a $\boldsymbol{\pi} \in \mathcal{P}(4k)$. We call $\boldsymbol{\pi}$ a matched partition if
\begin{enumerate}
\item any equivalence class of $\boldsymbol{\pi}$ contains at least two elements and
\item for any $i\in\left\{1,\ldots,4\right\}$ there is a $j\neq i$ and $l,m\in\left\{1,\ldots,k\right\}$ with
\begin{equation*}
l + (i-1) k \ \sim_{\boldsymbol{\pi}} \ m + (j-1) k.
\end{equation*}
\end{enumerate}
In case $\boldsymbol{\pi}$ is not matched, we can conclude that
\begin{align*}
\sum_{(P^{(1)}, \ldots, P^{(4)})\in \mathcal{S}_n\left(\boldsymbol{\pi}\right)} \mathbb{E}\Big[\prod_{j=1}^{4} \left(a (P^{(j)}) - \mathbb{E}\left[a (P^{(j)})\right]\right)\Big] = 0.
\end{align*}
\noindent Thus, we only have to consider matched partitions to evaluate the sum in \eqref{eq2}. Let $\boldsymbol{\pi}$ be such a partition and denote by $r = \#\boldsymbol{\pi}$ the number of equivalence classes of $\boldsymbol{\pi}$. Note that condition (1) implies $r\leq 2k$. To count all $\boldsymbol{\pi}$ consistent sequences $(P^{(1)},\ldots,P^{(4)})$, we first choose one of at most $n^r$ possibilities to fix the common absolute differences of the $r$ different equivalence classes. Afterwards, we fix the elements $p_1^{(1)},\ldots,p_1^{(4)}$, which can be done in $n^4$ ways. Since now the differences $|q_{l}^{(i)} - p_{l}^{(i)}|$ are uniquely determined by the choice of the corresponding equivalence classes, we can proceed sequentially to see that there are at most two choices left for any pair $P_l^{(i)}$. To sum up, we have at most
\begin{equation*}
2^{4k} \cdot n^4 \cdot n^r = C \cdot n^{r+4}
\end{equation*}
possibilities to choose $(P^{(1)},\ldots,P^{(4)})$. If now $r\leq 2k-2$, we can conclude that
\begin{equation}
\# \mathcal{S}_n(\boldsymbol{\pi}) \leq C \cdot n^{2k+2}.
\label{eq3}
\end{equation}
Hence, it remains to consider the case where $r=2k-1$ and $r=2k$, respectively. \\
To begin with, let $r=2k-1$. Then, we have either two equivalence classes with three elements or one equivalence class with four. Since $\boldsymbol{\pi}$ is matched, there must exist an $i\in\left\{1,\ldots,4\right\}$ and an $l\in\left\{1,\ldots,k\right\}$ such that $P_l^{(i)}$ is not equivalent to any other pair in the sequence $P^{(i)}$. Without loss of generality, we can assume that $i=1$. In contrast to the construction of $(P^{(1)},\ldots,P^{(4)})$ above, we now alter our procedure as follows: We fix all equivalence classes except the one that $P_l^{(1)}$ belongs to. There are $n^{r-1}$ possibilities to accomplish that. Again, we choose one of $n^4$ possible values for $p_1^{(1)},\ldots,p_1^{(4)}$. Hereafter, we fix $q_m^{(1)}$, $m=1,\ldots,l-1$, and then start from $q_k^{(1)} = p_1^{(1)}$ to go backwards and obtain the values of $p_{k}^{(1)}, \ldots, p_{l+1}^{(1)}$. Each of these steps leaves at most two choices to us, that is $2^{k-1}$ choices in total. But now, $P_l^{(1)}$ is uniquely determined since $p_l^{(1)} = q_{l-1}^{(1)}$ and $q_l^{(1)} = p_{l+1}^{(1)}$ by consistency. Thus, we had to make one choice less than before, implying \eqref{eq3}. \\
Now, let $r=2k$. In this case, each equivalence class has exactly two elements. Since we consider a matched partition, we can find here as well an $l\in\left\{1,\ldots,k\right\}$ such that $P_l^{(1)}$ is not equivalent to any other pair in the sequence $P^{(1)}$. But in addition to that, we also have an $m\in\left\{1,\ldots,k\right\}$ such that, possibly after relabeling, $P_m^{(2)}$ is neither equivalent to any element in $P^{(1)}$ nor to any other element in $P^{(2)}$. Thus, we can use the same argument as before to see that this time, we can reduce the number of choices to at most $C \cdot n^{r+2} = C \cdot n^{2k+2}$. In conclusion, \eqref{eq3} holds for any matched partition $\boldsymbol{\pi}$. To sum up our results, we obtain that
\begin{align*}
& \mathbb{E}\left[\left(\mathrm{tr}\textbf{X}_{n}^{k} - \mathbb{E}\left[\mathrm{tr} \textbf{X}_{n}^{k}\right]\right)^4\right] \\
& \quad = \frac{1}{n^{2k}} \sum_{\stackrel{\boldsymbol{\pi}\in\mathcal{P}(4k),}{\boldsymbol{\pi} \ \text{matched}}} \sum_{(P^{(1)}, \ldots, P^{(4)})\in \mathcal{S}_n\left(\boldsymbol{\pi}\right)} \mathbb{E}\Big[\prod_{j=1}^{4} \left(a (P^{(j)}) - \mathbb{E}\left[a (P^{(j)})\right]\right)\Big] \leq C \cdot n^{2},
\end{align*}
which is the statement of Lemma~\ref{lemma}.
\end{proof}
From Lemma~\ref{lemma} and Chebyshev's inequality, we can now conclude that for any $\varepsilon>0$ and any $k,n\in\mathbb{N}$,
\begin{equation*}
\mathbb{P}\left( \left| \frac{1}{n} \mathrm{tr}\textbf{X}_n^k - \mathbb{E} \left[\frac{1}{n}\mathrm{tr}\textbf{X}_n^k\right] \right| > \varepsilon \right) \leq \frac{C}{\varepsilon^4 n^2}.
\end{equation*}
Hence, the convergence in expectation part of Theorem~\ref{main} together with the Borel-Cantelli lemma yields that
\begin{equation*}
\lim_{n\to\infty} \frac{1}{n} \mathrm{tr}\textbf{X}_n^k = \mathbb{E} \left[Y^k\right] \qquad \text{almost surely},
\end{equation*}
where $Y$ is distributed according to the standard semi-circle law. In particular, we have that, with probability $1$, the empirical spectral distribution of $\textbf{X}_n$ converges weakly to the semi-circle law.
\section{Examples}
\subsection{Gaussian processes} Let $\left\{a(p,p+r), p\in\mathbb{N}\right\}$, $r\in{\mathbb N}_0$, be independent families of stationary Gaussian Markov processes with mean $0$ and variance $1$. In addition to this, we assume that the processes are non-degenerate in the sense that $\mathbb{E}\left[a(p,p+r)|a(q,q+r),q\leq p-1\right] \neq a(p,p+r)$. In this case, the conditions of Theorem~\ref{main} are satisfied. Indeed, for fixed $r\in{\mathbb N}_0$ and any $p\in{\mathbb N}$, we can represent $a_p := a(p,p+r)$ as
\begin{equation*}
a_p = x_p \sum_{j=1}^{p} y_j \xi_j,
\end{equation*}
where $\left\{\xi_j\right\}$ is a family of independent standard Gaussian variables and $x_p,y_1,\ldots,y_p \in {\mathbb R}\backslash\left\{0\right\}$. Then, we obtain
\begin{equation*}
\mathrm{Cov}(\tau) = \mathrm{Cov}(a_p,a_{p+\tau}) = \frac{x_{p+\tau}}{x_p},
\end{equation*}
implying $\mathrm{Cov}(\tau)=\mathrm{Cov}(1)^\tau$ for any $\tau\in{\mathbb N}_0$. By calculating the second moment of $a_2 = x_2 y_2 \xi_2 + \mathrm{Cov}(1) a_1$, we can conclude that $|\mathrm{Cov}(1)|<1$. Thus, we have $\sum_{\tau=0}^{\infty} |\mathrm{Cov}(\tau)|<\infty$.
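The representation and the resulting geometric covariance can be made concrete; the following Python sketch uses one convenient (not unique) choice of coefficients realizing a stationary AR(1)-type chain with correlation $\rho\in(0,1)$, and verifies $\mathrm{Cov}(\tau)=\mathrm{Cov}(1)^{\tau}$ exactly.

```python
import math

def ar1_coefficients(rho, length):
    # One choice of x_p, y_j realizing a stationary Gaussian Markov chain with
    # correlation rho in the form a_p = x_p * sum_{j <= p} y_j * xi_j:
    # x_p = rho^p, y_1 = rho^{-1}, y_j = sqrt(1 - rho^2) * rho^{-j} for j >= 2
    x = [rho ** p for p in range(1, length + 1)]
    y = [1.0 / rho] + [math.sqrt(1.0 - rho * rho) * rho ** (-j)
                       for j in range(2, length + 1)]
    return x, y

def covariance(x, y, p, q):
    # Cov(a_p, a_q) = x_p * x_q * sum_{j <= min(p,q)} y_j^2 for iid standard xi_j
    m = min(p, q)
    return x[p - 1] * x[q - 1] * sum(t * t for t in y[:m])
```

One checks $\mathrm{Var}(a_p)=1$ and $\mathrm{Cov}(a_p,a_{p+\tau})=\rho^{\tau}$, independently of $p$.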
\subsection{Markov chains with finite state space} We want to verify that condition (C$4$) holds for stationary $N$-state Markov chains which are ergodic and reversible. Let $\left\{X_k,k\in{\mathbb N}\right\}$ be such a Markov chain with mean $0$ and variance $1$. Denote by $P$ the corresponding $N\times N$ transition matrix and by $\pi$ its stationary distribution. Reversibility yields that $P$ is diagonalizable. Hence, for any $k\in{\mathbb N}$, we can write
\begin{equation*}
P^k = TD^kT^{-1}
\end{equation*}
for some invertible matrix $T$ and a diagonal matrix $D=\mathrm{diag}(\lambda_1,\ldots,\lambda_N)$. Denoting by $s_1,\ldots,s_N$ the $N$ possible states of the chain, we get
\begin{align*}
\mathrm{Cov}(X_n,X_{n+k}) = \sum_{i,j=1}^{N} s_i s_j \pi(i) P^k(i,j) = \sum_{l=1}^{N} \lambda_l^k \left(\sum_{i=1}^{N} s_i \pi(i) T(i,l) \sum_{j=1}^{N} s_j T^{-1}(l,j) \right).
\end{align*}
Since $P$ is stochastic, we have $|\lambda_l|\leq 1$ for any $l=1,\ldots,N$. If $|\lambda_l| = 1$, ergodicity implies that $\lambda_l=1$. The corresponding space of right eigenvectors is spanned by the vector $v=(1,\ldots,1)^T$. Consequently,
\begin{equation*}
\sum_{i=1}^{N} s_i \pi(i) T(i,l) = c \ \mathbb{E}\left[X_1\right] = 0.
\end{equation*}
Thus $\mathrm{Cov}(X_n,X_{n+k})$ decays exponentially to $0$ as $k\to\infty$ and condition (C$4$) is satisfied.
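A concrete instance of the computation above (an illustration, not taken from the text): for the two-state chain on states $\pm 1$ with symmetric transition matrix, the covariance decays like $(1-2p)^k$.

```python
def chain_cov(p, k):
    # Stationary reversible two-state chain on states s = (+1, -1) with
    # transition matrix [[1-p, p], [p, 1-p]]; the stationary distribution is
    # uniform, so E[X_1] = 0 and Var(X_1) = 1.
    s = (1.0, -1.0)
    pi = (0.5, 0.5)
    P = [[1.0 - p, p], [p, 1.0 - p]]
    Pk = [[1.0, 0.0], [0.0, 1.0]]          # k-th power of P, built iteratively
    for _ in range(k):
        Pk = [[sum(Pk[i][l] * P[l][j] for l in range(2)) for j in range(2)]
              for i in range(2)]
    return sum(s[i] * s[j] * pi[i] * Pk[i][j]
               for i in range(2) for j in range(2))
```

The eigenvalues of $P$ are $1$ and $1-2p$, and only the second survives in the covariance, which is therefore exactly $(1-2p)^k$.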
\bibliographystyle{alpha}
% https://arxiv.org/abs/1608.06086
\title{Power of Two as Sums of Three Pell Numbers}

\begin{abstract}
In this paper, we find all the solutions of the Diophantine equation $P_\ell + P_m +P_n=2^a$ in nonnegative integer variables $(n,m,\ell, a)$, where $P_k$ is the $k$-th term of the Pell sequence $\{P_n\}_{n\ge 0}$ given by $P_0=0$, $P_1=1$ and $P_{n+1}=2P_{n}+ P_{n-1}$ for all $n\geq 1$.
\end{abstract}

\section{Introduction}
\noindent The Pell sequence $\{P_n\}_{n\ge 0}$ is the binary recurrence sequence given by $P_0=0$, $P_1=1$ and $P_{n+1}=2P_{n}+ P_{n-1}$ for all $n\geq 1$. There are many papers in the literature dealing with Diophantine equations obtained by asking that
members of some fixed binary recurrence sequence be squares, factorials, triangular, or belonging to some other interesting sequence of positive integers.
For example, in $2008$, A. Peth\H o \cite{AP} found all the perfect powers (of exponent larger than $1$) in the Pell sequence. His result is the following.
\begin{theorem}[A. Peth\H o, \cite{AP}]
\label{th:1}
The only positive integer solutions $(n,q,x)$ with $q\ge 2$ of the Diophantine equation
$$P_n=x^q
$$
are $(n,q,x)=(1,q,1)$ and $(7,2,13)$. That is, the only perfect powers of exponent larger than $1$ in the Pell numbers are
$$
P_1=1 \quad \hbox{and}\quad P_7=13^2.
$$
\end{theorem}
The case $q=2$ had been treated earlier by Ljunggren \cite{LJ}. Peth\H o's result was rediscovered by J. H. E. Cohn \cite{Cohn}.
In this paper, we study the following Diophantine equation: Find all nonnegative solutions $(\ell,m,n,a)$ of the equation
\begin{equation}
\label{eq:main}
P_\ell+P_m+P_n=2^a.
\end{equation}
There is already a vast literature on equations similar to \eqref{eq:main}. For example, writing $s_a(n)$ for the sum of the base-$a$ digits of the positive integer $n$ (for an integer base $a\ge 2$),
Senge and Straus \cite{SS} showed that for each fixed $K$ and multiplicatively independent positive integers $a$ and $b$, the set $\{n: s_a(n)<K~{\text{\rm and}}~s_b(n)<K\}$ is finite.
This was made effective by Stewart \cite{CL} using Baker's theory of lower bounds for linear forms in logarithms of algebraic numbers (see also \cite{LuQu}).
More concretely, the analogous equation \eqref{eq:main} when Pell numbers are replaced by Fibonacci numbers was solved in \cite{EJ3} (the special case when only two Fibonacci numbers are involved on the left has been solved
earlier in \cite{EJ1}). Variants of this problem with $k$-generalized Fibonacci numbers and Lucas numbers instead of Fibonacci numbers were studied in \cite{BGL16} and \cite{EJ2}, respectively. In \cite{BHL}, all Fibonacci numbers which are sums of three factorials were found, while in \cite{LuSi}, all factorials which are sums of three Fibonacci numbers were found. Repdigits which are sums of three Fibonacci numbers were found in \cite{LuRep}, while Fibonacci numbers which are sums of at most two repdigits were found in \cite{DL}.
Our main result concerning \eqref{eq:main} is the following.
\medskip
\begin{theorem}
\label{th:2}
The only solutions $(n,m,\ell,a)$ of the Diophantine equation
\begin{equation}
\label{eq:1}
P_n+P_m+P_\ell = 2^a
\end{equation}
in integers $n\geq m \geq \ell \geq 0$ are in
$$
\{(2,1,1,2),(3,2,1,3),(4,2,2,4),(5,2,1,5),(6,5,5,7),(1,1,0,1),(2,2,0,2),(2,0,0,1),(1,0,0,0)\}.
$$
\end{theorem}
We use the method from \cite{LuRep}.
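Before turning to the proof, the small solutions can be found by an exhaustive search; in the following Python sketch (illustrative, not part of the paper) the bounds $n\le 30$ and $a\le 40$ are ad hoc choices that comfortably cover the listed solutions.

```python
def pell_numbers(N):
    # P_0, ..., P_N from the recurrence P_{n+1} = 2 P_n + P_{n-1}
    P = [0, 1]
    while len(P) <= N:
        P.append(2 * P[-1] + P[-2])
    return P

def search(max_index=30, max_a=40):
    # All (n, m, l, a) with n >= m >= l >= 0, n <= max_index, a <= max_a
    # and P_n + P_m + P_l = 2^a
    P = pell_numbers(max_index)
    powers = {2 ** a: a for a in range(max_a + 1)}
    found = set()
    for n in range(max_index + 1):
        for m in range(n + 1):
            for l in range(m + 1):
                s = P[n] + P[m] + P[l]
                if s in powers:
                    found.add((n, m, l, powers[s]))
    return found
```

Within this box the search takes well under a second, and its output can be compared against the statement of Theorem \ref{th:2}.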
\section{Preliminary results}
\noindent Let $(\alpha,\beta)=(1+{\sqrt{2}},1-{\sqrt{2}})$ be the roots of the characteristic equation $x^2-2x-1=0$ of the Pell sequence $\{P_n\}_{n\ge 0}$. The Binet formula for $P_n$ is
\begin{equation}
\label{eq:BinetP}
P_n= \frac{\alpha^n - \beta^n}{\alpha-\beta} \quad {\text{\rm for~ all}}\quad n\ge 0.
\end{equation}
This implies easily that the inequalities
\begin{equation}
\label{eq:sizePn}
\alpha^{n-2}\le P_n\le \alpha^{n-1}
\end{equation}
hold for all positive integers $n$.
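These bounds are easy to confirm numerically; the following Python sketch (floating point, adequate for moderate $n$) checks the Binet formula and the inequalities \eqref{eq:sizePn}.

```python
import math

SQRT2 = math.sqrt(2.0)
ALPHA, BETA = 1.0 + SQRT2, 1.0 - SQRT2

def pell_numbers(N):
    # P_0, ..., P_N from the recurrence
    P = [0, 1]
    while len(P) <= N:
        P.append(2 * P[-1] + P[-2])
    return P

def binet(n):
    # Binet formula for the Pell numbers
    return (ALPHA ** n - BETA ** n) / (ALPHA - BETA)
```

Note that the upper bound is attained at $n=1$, where $P_1 = 1 = \alpha^{0}$.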
Let $\{Q_n\}_{n\geq 0}$ be the companion Lucas sequence of the Pell sequence given by $Q_0=2$, $Q_1=2$ and $Q_{n+2}=2Q_{n+1}+ Q_{n}$ for all $n\ge 0$. For a prime $p$ and a nonzero integer $\delta$ let $\nu_p(\delta)$ be the exponent with which $p$ appears in the prime factorization of $\delta$. The following result is well-known and easy to prove.
\begin{lemma}
\label{lem:orderof2}
The relations
\begin{itemize}
\item[(i)] $\nu_2(Q_n)=1$,
\item[(ii)] $\nu_2(P_n)=\nu_2(n)$
\end{itemize}
hold for all positive integers $n$.
\end{lemma}
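The lemma can be verified directly for small indices; the following Python sketch checks both statements for $n\le 200$ using exact integer arithmetic.

```python
def nu2(x):
    # 2-adic valuation of a nonzero integer
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

def pell_and_companion(N):
    # P_n and the companion sequence Q_n with Q_0 = Q_1 = 2
    P, Q = [0, 1], [2, 2]
    while len(P) <= N:
        P.append(2 * P[-1] + P[-2])
        Q.append(2 * Q[-1] + Q[-2])
    return P, Q
```

Statement (i) also follows by a short induction: if $Q_n \equiv 2 \pmod 4$, then $Q_{n+2} = 2Q_{n+1} + Q_n \equiv Q_n \equiv 2 \pmod 4$, since $Q_{n+1}$ is even.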
The following result is an immediate consequence of Carmichael's primitive divisor theorem for Lucas sequences with real roots (see \cite{Car}).
\begin{lemma}
\label{lem:prim}
If $n\ge 13$, then $P_n$ has a prime factor $\ge n-1$.
\end{lemma}
We also need a Baker-type lower bound for a nonzero linear form in logarithms of algebraic numbers. We choose to use the result of Matveev \cite{MV}. Before proceeding further, we recall some basic notions from algebraic number theory.
Let $\eta$ be an algebraic number of degree $d$ over $\mathbb{Q}$ with minimal primitive polynomial over the integers
$$
f(X) = a_0 \prod_{i=1}^{d}(X-\eta^{(i)}) \in \mathbb{Z}[X],
$$
where the leading coefficient $a_0$ is positive and the $\eta^{(i)}$ are conjugates of $\eta$. The logarithmic
height of $\eta$ is given by
$$
h(\eta) = \dfrac{1}{d}\left(\log a_0 + \sum_{i=1}^{d}\log\max\{|\eta^{(i)}|,1\}\right).
$$
The following properties of the logarithmic height, which will be used in the next section without special reference, are well known:
\begin{itemize}
\item $h(\eta\pm \gamma)\leq h(\eta) + h(\gamma) + \log 2.$
\item $h(\eta\gamma^{\pm 1})\leq h(\eta) + h(\gamma).$
\item $h(\eta^{s})=|s|h(\eta).$
\end{itemize}
With the above notation, Matveev proved the following theorem (see also \cite{BMS}).
\begin{theorem}[Matveev \cite{MV}, Theorem 9.4 \cite{BMS}]
\label{thm:Matveev}
Let ${\mathbb K}$ be a number field of degree $D$ over ${\mathbb Q}$, let $\eta_1, \ldots, \eta_t$ be positive
real elements of ${\mathbb K}$, and let $b_1, \ldots, b_t$ be rational integers. Put
$$
\Lambda = \eta_1^{b_1} \cdots \eta_t^{b_t}-1
\qquad
\text{and}
\qquad
B \geq \max\{|b_1|, \ldots ,|b_t|\}.
$$
Let $A_i \geq \max\{Dh(\eta_i), |\log \eta_i|, 0.16\}$ be real numbers, for
$i = 1, \ldots, t.$
Then, assuming that $\Lambda \not = 0$, we have
$$
|\Lambda| > \exp(-1.4 \times 30^{t+3} \times t^{4.5} \times D^2(1 + \log D)(1 + \log B)A_1 \cdots A_t).
$$
\end{theorem}
In $1998$, Dujella and Peth\H o in \cite[Lemma 5$(a)$]{DP} gave a version of the reduction method originally proved by Baker and Davenport \cite{Baker-Davenport}. We next present the following lemma from \cite{BL1} (see also \cite{BGL16}), which is an immediate variation of the result due to Dujella and Peth\H o from \cite{DP}, and is the key tool used to reduce the upper bound on the variable $n$. For a real number $x$ we put $\|x\|=\min\{|x-n|: n\in\mathbb{Z}\}$ for the distance from $x$ to the nearest integer.
\begin{lemma}
\label{reduce}
Let $M$ be a positive integer, let $p/q$ be a convergent of the continued fraction of the irrational $\gamma$ such that $q>6M$, and let $A,B,\mu$ be some real numbers with $A>0$ and $B>1$. Let $\epsilon:=||\mu q||-M||\gamma q||$. If $\epsilon >0$, then there is no solution to the inequality
$$
0<|u\gamma-v+\mu|<AB^{-w},
$$
in positive integers $u,v$ and $w$ with
$$
u\leq M \quad\text{and}\quad w\geq \frac{\log(Aq/\epsilon)}{\log B}.
$$
\end{lemma}
\section{Proof of Theorem \ref{th:2}}
\subsection{The case $\ell=0$}
If $\ell=m=0$, we then get that $P_n=2^a$. This implies that $n\le 12$ by Lemma \ref{lem:prim}.
If $\ell=0$ but $m>0$, we then get
\begin{equation}
\label{eq:tv}
P_n+P_m=2^a.
\end{equation}
Since $P_m$ and $P_n$ are positive, we get that $a>0$, so $P_n$ and $P_m$ have the same parity. The left--hand side above factors as
\begin{equation}
\label{eq:RelP1}
P_n+P_m= P_{(n+\delta m)/2}Q_{(n-\delta m)/2},
\end{equation}
where $\delta\in \{\pm 1\}$ is $1$ if $n\equiv m\pmod{4}$ and $-1$ otherwise, a fact easily checked. Thus, equation \eqref{eq:tv} becomes
$$
P_{(n+\delta m)/2}Q_{(n-\delta m)/2}=2^a.
$$
Lemmas \ref{lem:orderof2} and \ref{lem:prim} show that $(n-\delta m)/2\in \{0,1\}$ and $(n+\delta m)/2\le 12$, and all solutions can now be easily found. All in all, the case $\ell=0$ gives the last four solutions listed in the statement of Theorem \ref{th:2}.
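The finitely many possibilities can also be enumerated directly. The following brute-force sketch (illustrative only; it searches the slightly larger range $n\le 20$ for safety) recovers exactly four solutions $(n,m,a)$ of $P_n+P_m=2^a$ with $0\le m\le n$, where $P_0=0$ absorbs the case $m=0$:

```python
# Enumerate all solutions of P_n + P_m = 2^a with 0 <= m <= n <= 20.
P = [0, 1]
for _ in range(19):
    P.append(2 * P[-1] + P[-2])   # P_0 .. P_20

sols = set()
for n in range(1, 21):
    for m in range(0, n + 1):
        s = P[n] + P[m]
        if s & (s - 1) == 0:              # s >= 1 is a power of two
            sols.add((n, m, s.bit_length() - 1))

# P_1 = 2^0, P_2 = 2^1, P_1 + P_1 = 2^1, P_2 + P_2 = 2^2
assert sols == {(1, 0, 0), (2, 0, 1), (1, 1, 1), (2, 2, 2)}
```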
\subsection{Bounding $n-m$ and $n-\ell$ in terms of $n$}
From now on, we assume $n\geq m\geq \ell\geq 1$. First of all, if $n=m=\ell$, then equation \eqref{eq:1} becomes $3P_n=2^a$, which is impossible. Thus, we assume from now on that either $n> m$ or $m> \ell$. We next perform a computation to show that there are no solutions to equation \eqref{eq:1} other than those listed in Theorem \ref{th:2} in the range $1\leq \ell\leq m\leq n\leq 150$. So, from now on we work under the assumption that $n>150$.
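This range computation can be reproduced with a short brute-force script (a sketch, not the computation actually used in the paper). The particular solutions asserted below are easily confirmed by hand, for instance $P_5+P_2+P_1=29+2+1=32=2^5$:

```python
# Search P_n + P_m + P_l = 2^a over 1 <= l <= m <= n <= 150.
P = [0, 1]
for _ in range(149):
    P.append(2 * P[-1] + P[-2])   # P_0 .. P_150 (exact big integers)

found = set()
for n in range(1, 151):
    for m in range(1, n + 1):
        for l in range(1, m + 1):
            s = P[n] + P[m] + P[l]
            if s & (s - 1) == 0:          # power of two
                found.add((n, m, l, s.bit_length() - 1))

# Solutions checkable by hand: 2+1+1=4, 5+2+1=8, 12+2+2=16, 29+2+1=32, 70+29+29=128
for sol in [(2, 1, 1, 2), (3, 2, 1, 3), (4, 2, 2, 4), (5, 2, 1, 5), (6, 5, 5, 7)]:
    assert sol in found
```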
We find a relation between $a$ and $n$. Using equation \eqref{eq:1} and the right-hand side of inequality \eqref{eq:sizePn}, we get that
$$
2^a<\alpha^{n-1}+\alpha^{m-1} +\alpha^{\ell-1}< 2^{2n-2}( 1 + 2^{2(m-n)}+2^{2(\ell-n)})< 2^{2n+1}.
$$
where in the middle nequality we used the fact that $\alpha<2^2.$ Hence, we have that $a\leq 2n$.
We rewrite equation \eqref{eq:1} using \eqref{eq:BinetP} as
$$
\frac{\alpha^n}{2\sqrt{2}}-2^a=\frac{\beta^n}{2\sqrt{2}}-(P_m+ P_\ell).
$$
We take absolute values on both sides of the above relation and use the right-hand side of \eqref{eq:sizePn}, obtaining
$$
\Big| \frac{\alpha^n}{2\sqrt{2}}-2^a\Big|\leq \frac{|\beta|^n}{2\sqrt{2}}+P_m+P_\ell< \frac{1}{2}+\(\alpha^m+\alpha^\ell\).
$$
Dividing both sides by $\alpha^n/(2\sqrt{2})$, we get
\begin{equation}
\label{eq:2}
\Big| 1-2^{a+1}\cdot\alpha^{-n}\cdot\sqrt{2}\Big|< \frac{8}{\alpha^{n-m}}.
\end{equation}
We are now in a position to apply Matveev's result (Theorem \ref{thm:Matveev}) to the left-hand side of \eqref{eq:2}. This expression is nonzero, since otherwise $2^{a+1}=\alpha^n/\sqrt{2}$, which would give $\alpha^{2n}=2^{2a+3}\in {\mathbb Z}$ for some positive integer $n$, which is false. Hence, we take ${\mathbb K}:={\mathbb Q}(\sqrt{2})$, for which $D=2$. We take
$$
t:=3,\quad \eta_1:=2,\quad \eta_2:=\alpha,\quad \eta_3:=\sqrt{2},\quad b_1:=a+1,\quad b_2:=-n,\quad b_3:=1.
$$
So, we can take $A_1:=1.4$, $A_2:=0.9$ and $A_3:=0.7$. Finally we recall that $a\leq 2n$ and deduce that $\max\{|b_1|,|b_2|,|b_3|\}\leq 2n+1$, so we take $B:=2n+1$.
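These choices can be checked against the requirement $A_i\ge\max\{Dh(\eta_i),|\log\eta_i|,0.16\}$, using $h(2)=\log 2$, $h(\alpha)=(\log\alpha)/2$ and $h(\sqrt2)=(\log2)/2$. A quick numerical check (illustrative only):

```python
# Verify A_1 = 1.4, A_2 = 0.9, A_3 = 0.7 satisfy A_i >= max{D h(eta_i), |log eta_i|, 0.16}.
import math

D = 2  # degree of K = Q(sqrt(2))
alpha = 1 + math.sqrt(2)
# tuples: (A_i, D * h(eta_i), log eta_i)
checks = [
    (1.4, D * math.log(2),         math.log(2)),            # eta_1 = 2
    (0.9, D * math.log(alpha) / 2, math.log(alpha)),        # eta_2 = alpha
    (0.7, D * math.log(2) / 2,     math.log(math.sqrt(2)))  # eta_3 = sqrt(2)
]
for A, Dh, log_eta in checks:
    assert A >= max(Dh, abs(log_eta), 0.16)
```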
Theorem \ref{thm:Matveev} implies that a lower bound on the left-hand side of \eqref{eq:2} is
\begin{equation}
\label{eq:3}
\exp\left(-1.4\times 30^6\times 3^{4.5}\times 2^2\times (1+\log 2)(2\log n)\times 1.4\times 0.9\times 0.7\right).
\end{equation}
In the above inequality, we used $1+\log (2n+1)< 2\log n$, which holds in our range of $n$. Taking logarithms in inequality \eqref{eq:2} and comparing the resulting inequality with \eqref{eq:3}, we get that
\begin{equation}
\label{eq:4}
(n-m)\log \alpha < 1.8\times 10^{12}\log n.
\end{equation}
We now consider a second linear form in logarithms by rewriting equation \eqref{eq:1} in a different way. Using the Binet formula \eqref{eq:BinetP}, we get that
$$
\frac{\alpha^n}{2\sqrt{2}} +\frac{\alpha^m}{2\sqrt{2}}-2^a=\frac{\beta^n}{2\sqrt{2}} +\frac{\beta^m}{2\sqrt{2}}-P_\ell,
$$
which implies
$$
\Big| \frac{\alpha^n}{2\sqrt{2}}(1+\alpha^{m-n})-2^a\Big|\leq \frac{|\beta|^n + |\beta|^m}{2\sqrt{2}}+P_\ell<\frac{1}{2}+\alpha^\ell.
$$
Dividing both sides of the above inequality by the first term of the left-hand side, we obtain
\begin{equation}\label{eq:5}
\Big| 1-2^{a+1}\cdot\alpha^{-n}\cdot\sqrt{2}(1+\alpha^{m-n})^{-1}\Big|< \frac{5}{\alpha^{n-\ell}}.
\end{equation}
We again apply Theorem \ref{thm:Matveev}, with the same ${\mathbb K}$ as before.
We take
$$
t:=3,\quad \eta_1:=2,\quad \eta_2:=\alpha,\quad \eta_3:=\sqrt{2}(1+\alpha^{m-n})^{-1},\quad b_1:=a+1,\quad b_2:=-n,\quad b_3:=1.
$$
So, we can take $A_1:=1.4$, $A_2:=0.9$ and $B:=2n+1$. We observe that the left-hand side of \eqref{eq:5} is not zero because otherwise we would get
\begin{equation}
\label{eq:6}
2^{a+1}\sqrt{2}=\alpha^n(1+\alpha^{m-n})=\alpha^n+\alpha^m.
\end{equation}
By conjugating the above in $\mathbb{K}$ we get that
\begin{equation}
\label{eq:7}
-2^{a+1}\sqrt{2}=\beta^n+\beta^m.
\end{equation}
Equations \eqref{eq:6} and \eqref{eq:7}, lead to
$$
\alpha^n<\alpha^n+\alpha^m=|\beta^n + \beta^m|\leq |\beta|^n+|\beta|^m<1,
$$
which is impossible. Now, let us look at the logarithmic height of $\eta_3$. Since
$$
\eta_3=\sqrt{2}(1+\alpha^{m-n})^{-1}<\sqrt{2}\quad\hbox{and}\quad\eta_3^{-1}=\frac{1+\alpha^{m-n}}{\sqrt{2}}<\frac{2}{\sqrt{2}},
$$
we get that $|\log \eta_3|<1$. Furthermore, we notice that
$$
h(\eta_3)\leq \log \sqrt{2}+|m-n|\(\frac{\log \alpha}{2}\)+\log 2=\log(2\sqrt{2})+(n-m)\(\frac{\log \alpha}{2}\).
$$
Thus, we can take $A_3:= 3+(n-m)\log\alpha>\max\{2h(\eta_3),|\log\eta_3|,0.16\}.$
As before, Theorem \ref{thm:Matveev} and \eqref{eq:5} imply that
\begin{equation}
\label{eq:8}
\exp\left(-2.45\times 10^{12}\times \log n\times (3+(n-m)\log\alpha)\right)<\frac{5}{\alpha^{n-\ell}}
\end{equation}
giving
\begin{equation}
\label{eq:9}
(n-\ell)\log\alpha < 2.5\times 10^{12}\times \log n\times (3+(n-m)\log\alpha).
\end{equation}
Inserting inequality \eqref{eq:4} into \eqref{eq:9}, we obtain
\begin{equation}
\label{eq:9bis}
(n-\ell)\log \alpha<5\times 10^{24}\log^2 n.
\end{equation}
\subsection{Bounding $n$}
We now use a third linear form in logarithms by rewriting equation \eqref{eq:1} in a different way. Using the Binet formula \eqref{eq:BinetP}, we get that
$$\frac{\alpha^n}{2\sqrt{2}} +\frac{\alpha^m}{2\sqrt{2}}+\frac{\alpha^\ell}{2\sqrt{2}}-2^a=\frac{\beta^n}{2\sqrt{2}} +\frac{\beta^m}{2\sqrt{2}}+\frac{\beta^\ell}{2\sqrt{2}}, $$
which implies
$$\Big| \frac{\alpha^n}{2\sqrt{2}}(1+\alpha^{m-n}+\alpha^{\ell-n})-2^a\Big|\leq \frac{|\beta|^n + |\beta|^m + |\beta|^\ell}{2\sqrt{2}}<\frac{1}{2}$$
for all $n> 150$ and $m\geq\ell\geq 1$. Dividing both sides of the above inequality by the first term of the left-hand side, we obtain
\begin{equation}
\label{eq:5bis}
\Big| 1-2^{a+1}\cdot\alpha^{-n}\cdot\sqrt{2}(1+\alpha^{m-n} + \alpha^{\ell-n})^{-1}\Big|< \frac{2}{\alpha^{n}}.
\end{equation}
As before, we use Matveev Theorem \ref{thm:Matveev} with the same ${\mathbb K}$ as before and with
$$
t:=3,\quad \eta_1:=2,\quad \eta_2:=\alpha,\quad \eta_3:=\sqrt{2}(1+\alpha^{m-n}+\alpha^{\ell-n})^{-1},\quad b_1:=a+1,\quad b_2:=-n,\quad b_3:=1.
$$
As before, we take $A_1:=1.4$, $A_2:=0.9$ and $B:=2n+1$. It remains to prove that the left-hand side of \eqref{eq:5bis} is not zero. Assuming the contrary, we would get
\begin{equation}
\label{eq:6bis}
2^{a+1}\sqrt{2}=\alpha^n(1+\alpha^{m-n}+ \alpha^{\ell-n})=\alpha^n+\alpha^m + \alpha^{\ell}.
\end{equation}
Conjugating the above relation in $\mathbb{K}$ we get that
\begin{equation}
\label{eq:7bis}
-2^{a+1}\sqrt{2}=\beta^n+\beta^m + \beta^{\ell}.
\end{equation}
Equations \eqref{eq:6bis} and \eqref{eq:7bis}, lead to
$$
\alpha^n<\alpha^n+\alpha^m +\alpha^\ell =|\beta^n + \beta^m+ \beta^{\ell}|\leq |\beta|^n+|\beta|^m + |\beta|^\ell<1
$$
which is impossible since $\alpha>2$. It remains to estimate the logarithmic height of $\eta_3$. Since
$$
\eta_3=\sqrt{2}(1+\alpha^{m-n}+ \alpha^{\ell-n})^{-1}<\sqrt{2}\quad\hbox{and}\quad\eta_3^{-1}=\frac{1+\alpha^{m-n}+\alpha^{\ell-n}}{\sqrt{2}}<\frac{3}{\sqrt{2}},
$$
it follows that $|\log \eta_3|<1$. Furthermore, we notice that
\begin{eqnarray*}
h(\eta_3) &\leq &\log \sqrt{2}+|m-n|\(\frac{\log \alpha}{2}\)+ |\ell-n|\(\frac{\log \alpha}{2}\)+2\log 2 \\
&=&\log(4\sqrt{2})+(n-m)\(\frac{\log \alpha}{2}\) + (n-\ell)\(\frac{\log \alpha}{2}\).
\end{eqnarray*}
Thus, we can take
$$
A_3:= 4+(n-m)\log\alpha+ (n-\ell)\log\alpha>\max\{2h(\eta_3),|\log\eta_3|,0.16\}.
$$
As before, Theorem \ref{thm:Matveev} and \eqref{eq:5bis} imply that
\begin{equation}\label{eq:8bis}
\exp\left(-2.45\times 10^{12}\times \log n\times (4+(n-m)\log\alpha+ (n-\ell)\log\alpha)\right)< \frac{2}{\alpha^{n}}
\end{equation}
which leads to
\begin{equation}
\label{eq:91bis}
n\log\alpha < 2.5\times 10^{12}\times \log n\times (4+(n-m)\log\alpha+ (n-\ell)\log\alpha).
\end{equation}
Inserting inequalities \eqref{eq:4} and \eqref{eq:9bis} into \eqref{eq:91bis} and performing the required computations, we obtain
\begin{equation}
\label{eq:92bis}
n<1.7\times 10^{37}\log^3 n,
\end{equation}
giving $n<1.7\times 10^{43}.$ We summarize the conclusion of this section as follows.
\medskip
\begin{lemma}
\label{lem:1}
If $(n,m,\ell, a)$ is a solution in positive integers of equation \eqref{eq:1}, with $n\ge m\ge \ell$,
then
$$
a< 2n+1<4\times 10^{43}.
$$
\end{lemma}
\subsection{Reducing the bound on $n$}
We apply Lemma \ref{reduce} several times to reduce the bound on $n$. We return to \eqref{eq:2}. Put
$$
\Lambda_1:=(a+1)\log 2-n\log \alpha+\log\sqrt{2}.
$$
Then \eqref{eq:2} implies that
\begin{equation}
\label{eq:10}
|1-e^{\Lambda_1}|<\frac{8}{\alpha^{n-m}}.
\end{equation}
Note that $\Lambda_1>0$ since
$$
\frac{\alpha^n}{2\sqrt{2}}<P_n+1\leq P_n+P_m+P_\ell=2^{a}.
$$
Hence, using the fact that $1+x<e^x$ holds for all positive real numbers $x$, we get that
$$
0<\Lambda_1\leq e^{\Lambda_1}-1<\frac{8}{\alpha^{n-m}}.
$$
Dividing across by $\log\alpha$, we get
\begin{equation}
\label{eq:12}
0< (a+1)\left(\frac{\log 2}{\log\alpha}\right)-n+\left(\frac{\log\sqrt{2}}{\log\alpha}\right)<\frac{10}{\alpha^{n-m}}.
\end{equation}
We are now ready to apply Lemma \ref{reduce} with the obvious parameters
$$
\gamma:=\frac{\log 2}{\log \alpha},\quad \mu:=\frac{\log\sqrt{2}}{\log\alpha},\quad A:=10,\quad B:=\alpha.
$$
It is easy to see that $\gamma$ is irrational. We can take $M:=4\times 10^{43}$. Applying Lemma \ref{reduce} and performing the calculations with $q_{91}>6M$ and $\epsilon:=||\mu q_{91}||-M||\gamma q_{91}||>0$, we get that if $(n,m,\ell,a)$ is a solution to equation \eqref{eq:1}, then $n-m\in[0,130]$.
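The reduction can be reproduced with exact rational arithmetic. The sketch below (an illustration, not the computation used in the paper; all variable names are our own) walks the convergents of $\gamma$ at 250-digit precision, takes the first one with $q>6M$ and $\epsilon>0$, and computes the resulting bound on $n-m$ from Lemma \ref{reduce}:

```python
# Baker-Davenport reduction for gamma = log 2 / log alpha, mu = log sqrt(2) / log alpha,
# A = 10, B = alpha, M = 4*10^43, using a 250-digit rational approximation.
from decimal import Decimal, getcontext
from fractions import Fraction
import math

getcontext().prec = 250                                 # enough digits for q ~ 10^45
log_alpha = (1 + Decimal(2).sqrt()).ln()
gamma = Fraction(Decimal(2).ln() / log_alpha)           # log 2 / log alpha
mu = Fraction(Decimal(2).sqrt().ln() / log_alpha)       # log sqrt(2) / log alpha

M = 4 * 10 ** 43
A, logB = 10, math.log(1 + math.sqrt(2))

def dist_to_Z(x):
    """Distance ||x|| from a Fraction x to the nearest integer, as a Fraction."""
    fr = x - (x.numerator // x.denominator)
    return min(fr, 1 - fr)

# Walk the convergents p_k/q_k of gamma until q > 6M and epsilon > 0.
quots = [gamma.numerator // gamma.denominator]          # a_0 = 0
g, (p0, q0), (p1, q1) = gamma, (1, 0), (quots[0], 1)
while True:
    g = 1 / (g - (g.numerator // g.denominator))
    a_k = g.numerator // g.denominator
    quots.append(a_k)
    (p0, q0), (p1, q1) = (p1, q1), (a_k * p1 + p0, a_k * q1 + q0)
    if q1 > 6 * M:
        eps = dist_to_Z(mu * q1) - M * dist_to_Z(gamma * q1)
        if eps > 0:
            break

# By the lemma, there is no solution with n - m >= log(A q / eps) / log B.
bound = math.log(A * q1 / float(eps)) / logB
assert 0 < bound < 200
print(f"q = {q1}, n - m < {bound:.1f}")
```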
We now work with inequality \eqref{eq:5} to obtain an upper bound on $n-\ell$. We put
$$
\Lambda_2:=(a+1)\log 2-n\log \alpha+\log g(n-m),
$$
where we put $g(x):=\sqrt{2}(1+\alpha^{-x})^{-1}$. Then \eqref{eq:5} implies that
\begin{equation}
\label{eq:11bis}
|1-e^{\Lambda_2}|<\frac{5}{\alpha^{n-\ell}}.
\end{equation}
Using the Binet formula of the Pell sequence with \eqref{eq:1}, one can show that $\Lambda_2> 0$ since
$$
\frac{\alpha^n}{2\sqrt{2}}+\frac{\alpha^m}{2\sqrt{2}}<P_n+P_m+1\leq P_n+P_m+P_\ell=2^{a}.
$$
From this and \eqref{eq:11bis} we get
$$
0<\Lambda_2<\frac{5}{\alpha^{n-\ell}}.
$$
Replacing $\Lambda_2$ in the above inequality by its formula and arguing as in \eqref{eq:12}, we get that
\begin{equation}
\label{eq:13}
0< (a+1)\left(\frac{\log 2}{\log\alpha}\right)-n+\frac{\log g(n-m)}{\log\alpha}<\frac{6}{\alpha^{n-\ell}}.
\end{equation}
Here, we take $M:=4\times 10^{43}$ and, as explained before, we apply Lemma \ref{reduce} to inequality \eqref{eq:13} for all possible choices of $n-m \in [0,130]$, except $n-m=1,2$. Computing all the possible cases with suitable values of the parameter $q$, we find that if $(n,m,\ell,a)$ is a solution of \eqref{eq:1} with $n-m\neq 1,2$, then $n-\ell\leq 140$.
For the special cases where $n-m=1,2$, we have that
\begin{equation*}
\label{eq:RelP}
\frac{\log g(x)}{\log\alpha}=\left\{ \begin{matrix}
0 & {\text{if}} & x=1;\vspace{0.3cm}\\
1-\frac{\log 2}{\log\alpha} & {\text{if}} & x=2.
\end{matrix}\right.
\end{equation*}
Thus, we cannot apply Lemma \ref{reduce}, because the value of the parameter $\epsilon$ is always $\leq 0$, so in these cases the reduction algorithm is not useful. However, if $n-m=1,2$, then the inequality resulting from \eqref{eq:13} has the shape $0 <|x\gamma-y|<6/\alpha^{n-\ell}$ with $\gamma$ an irrational number and $x,y \in\mathbb{Z}$. So, we can appeal to the known properties of the convergents of continued fractions to obtain a nontrivial lower bound for $|x\gamma-y|$, which in turn gives an upper bound on $n-\ell$. We now give the details.
When $n-m=1$, $\log g(n-m)/\log\alpha=0$ and we get from \eqref{eq:13} that
\begin{equation}
\label{eq:15}
0< (a+1)\gamma-n<\frac{6}{\alpha^{n-\ell}} \quad\hbox{where}\quad \gamma:=\frac{\log 2}{\log\alpha}.
\end{equation}
Let $[a_0,a_1,a_2,\ldots]=[0, 1, 3, 1, 2,\ldots]$ be the continued fraction expansion of the above $\gamma$, and let $p_k/q_k$ denote its $k$-th convergent. Recall that $a+1< 4\times 10^{43}.$ A quick computation with Mathematica shows that
$$q_{87}<4\times 10^{43}<q_{88}.$$
Furthermore, $a_M:=\max\{a_i: i=1,\ldots,88\}=100.$ Then, from the properties of continued fractions, inequality \eqref{eq:15} becomes
$$
\frac{1}{(a_M+2)(a+1)}< (a+1)\gamma-n<\frac{6}{\alpha^{n-\ell}}
$$
which yields
\begin{equation}
\label{exp1}
\alpha^{n-\ell}<6\cdot 102\cdot 4\times 10^{43}.
\end{equation}
Thus, $n-\ell<122.$ The same argument as before gives that $n-\ell<122$ in the case when $n-m=2.$ Therefore, $n-\ell\leq 140$ always holds.
Finally, in order to obtain a better upper bound on $n$, we use again inequality \eqref{eq:5bis} where we put
$$
\Lambda_3:=(a+1)\log 2-n\log \alpha+\log \phi(n-m,n-\ell),
$$
with $\phi(x_1,x_2):=\sqrt{2}(1+\alpha^{-x_1}+\alpha^{-x_2})^{-1}$. Then \eqref{eq:5bis} implies that
\begin{equation}
\label{eq:11}
|1-e^{\Lambda_3}|<\frac{2}{\alpha^{n}}.
\end{equation}
We observe that $\Lambda_3\neq 0$. We now analyze the cases $\Lambda_3>0$ and $\Lambda_3<0$. If $\Lambda_3>0,$ then
$$
0<\Lambda_3<\frac{2}{\alpha^n}.
$$
Suppose now that $\Lambda_3<0.$ Since $2/\alpha^n<1/2$ for $n>150$, from \eqref{eq:11}, we get that $|e^{\Lambda_3}-1|<1/2$, therefore $e^{|\Lambda_3|}<2.$ Since $\Lambda_3<0$, we have that
$$
0<|\Lambda_3|\leq e^{|\Lambda_3|}-1=e^{|\Lambda_3|}|e^{\Lambda_3}-1|<\frac{4}{\alpha^n}.
$$
Thus, we get in both cases that
$$
0<|\Lambda_3|<\frac{4}{\alpha^n}.
$$
Replacing $\Lambda_3$ in the above inequality by its formula and arguing as in \eqref{eq:12}, we get that
\begin{equation}
\label{eq:16}
0< \left|(a+1)\left(\frac{\log 2}{\log\alpha}\right)-n+\left(\frac{\log \phi(n-m,n-\ell)}{\log\alpha}\right)\right|<\frac{5}{\alpha^{n}}.
\end{equation}
Here, we take $M:=4\times 10^{43}$ and as we explained before, we apply Lemma \ref{reduce} to inequality \eqref{eq:16} for all possible choices of $n-m \in [0,130]$ and $n-\ell\in [0,140]$. With the help of Mathematica, we find that if $(n, m, \ell, a)$ is a possible solution of the equation \eqref{eq:1}, then $n < 150$, contradicting our assumption that $n>150$. This finishes the proof of the theorem.
\section*{Acknowledgments}
\noindent J.~J.~B.~was supported in part by Project VRI ID 3744 (Universidad del Cauca). B.~F.\ thanks AIMS for the AASRG. Her work on this project was carried out with financial support from the government of Canada's International Development Research Centre (IDRC) and within the framework of the AIMS Research for Africa Project.
% arXiv:1608.06086 -- Power of Two as Sums of Three Pell Numbers (2016)
% Source: https://arxiv.org/abs/2006.03639
% Title: Topological charges and conservation laws involving an arbitrary function of time for dynamical PDEs
\section{Introduction}\label{sec:intro}
Many dynamical PDEs have the form of a spatial divergence.
In one spatial dimension, the simplest such form consists of
\begin{equation}\label{1D.eqn}
u_{tx} = D_x F(t,x,u,u_x,u_{xx},\ldots)
\end{equation}
with $D_x$ denoting the total derivative with respect to $x$.
A prominent physical example is
the Lagrangian form of the Korteweg-de Vries (KdV) equation
for uni-directional shallow water waves,
$u_{tx} +\alpha u_xu_{xx} +\beta u_{xxxx}=0$,
where $u_x=v$ is the wave amplitude.
(Throughout, $\alpha,\beta$, etc.\ will denote constants.)
In two and three spatial dimensions,
the analogous form (up to a point transformation) is given by
\begin{equation}\label{multiD.eqn}
\begin{aligned}
u_{tx} = &
D_x F^x(t,x,y,z,u,\nabla u,\nabla^2 u,\ldots) + D_y F^y(t,x,y,z,u,\nabla u,\nabla^2 u,\ldots)
\\&\quad
+ D_z F^z(t,x,y,z,u,\nabla u,\nabla^2 u,\ldots)
\end{aligned}
\end{equation}
where $\nabla=(\partial_x,\partial_y,\partial_z)$ is the spatial gradient operator.
One important example is the Kadomtsev--Petviashvili (KP) equation \cite{KadPet}
$u_{tx} + (\alpha uu_{x} +\beta u_{xxx})_x \pm u_{yy}=0$,
where $u$ is the wave amplitude.
In the ``$+$'' case,
it describes shallow water waves with small surface tension,
and in the ``$-$'' case,
waves in thin films with large surface tension.
Two related examples are
$(u_{t} + \alpha uu_{x})_x + u_{yy}+u_{zz}=0$
which describes weakly nonlinear, weakly diffracting acoustic waves \cite{ZabKho},
and $(u_{t} + \beta u^2 u_{x})_x + u_{yy}+u_{zz}=0$
which describes nonlinear, linearly-polarized shear waves \cite{Zab}.
Another physical example is the Zakharov--Kuznetsov (ZK) equation \cite{ZakKuz}
in Lagrangian form
$u_{tx} +\alpha u_xu_{xx} +\beta u_{xxxx} + \gamma(u_{xxyy} + u_{xxzz})=0$,
which describes ion-acoustic waves in a magnetized plasma,
where $u_x=v$ is the wave amplitude.
In addition,
it is possible to consider PDEs with higher spatial derivatives of $u_t$.
A notable physical example is the vorticity equation in incompressible fluid flow in two dimensions \cite{MajBer}
$\Delta u_t +u_x \Delta u_y -u_y \Delta u_x =\mu \Delta^2 u$,
where $(-u_y,u_x)$ are the components of the fluid velocity
and $\Delta u$ is the vorticity scalar.
An example from the theory of integrable systems is
the Novikov--Veselov (NV) equation \cite{VesNov},
which has the potential form
$u_{txy} +\alpha (u_{xy}u_{xx})_x +\beta (u_{xy} u_{yy})_y + u_{xxxxy} + u_{xyyyy} =0$.
This equation arises from isospectral flows
for the two-dimensional Schr\"odinger operator at zero energy \cite{NovVes}.
All dynamical PDEs \eqref{1D.eqn} and \eqref{multiD.eqn}
possess at least one family of conservation laws
involving an arbitrary function $f(t)$ of time $t$:
\begin{equation}\label{multiD.conslaw.f}
D_x( f(t)(u_t - F^x) ) + D_y( {-}f(t)F^y ) + D_z( {-}f(t)F^z ) =0
\end{equation}
holding on solutions $u(t,x,y,z)$.
This conservation law family arises from $f(t)$ being a multiplier,
whose product with the PDE yields a total divergence.
A full discussion of multipliers can be found in \Ref{Olv-book,BCA-book,Anc-review}.
When a dynamical PDE can be expressed in a form given by higher spatial derivatives,
then it can admit additional conservation law families involving $f(t)$.
For instance, consider
\begin{equation}\label{multiD.eqn.2ndorder}
\begin{aligned}
u_{tx} = &
D_x F^x(t,x,y,z,u,\nabla u,\nabla^2 u,\ldots) + D_y^2 F^y(t,x,y,z,u,\nabla u,\nabla^2 u,\ldots)
\\&\quad
+ D_z F^z(t,x,y,z,u,\nabla u,\nabla^2 u,\ldots)
\end{aligned}
\end{equation}
whose form has a second-order derivative with respect to $y$.
Every such dynamical PDE possesses an additional conservation law family
\begin{equation}\label{multiD.conslaw.yf}
D_x( y f(t)(u_t - F^x) ) + D_y( f(t)F^y -yf(t)(F^y)_y ) +D_z( {-}yf(t)F^z ) =0
\end{equation}
which arises from $yf(t)$ being a multiplier.
Both the KP equation and the ZK equation are examples.
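For the KP equation, taking the ``$+$'' case with $F^x=-(\alpha uu_{x} +\beta u_{xxx})$ and $F^y=-u$, both conservation law families can be verified symbolically. The following sketch (illustrative only; it requires SymPy, and the variable names are our own) checks that the multipliers $f(t)$ and $yf(t)$ each produce a total spatial divergence:

```python
# Symbolic check of the multiplier identities for the KP equation ("+" case).
import sympy as sp

t, x, y = sp.symbols('t x y')
al, be = sp.symbols('alpha beta')
u = sp.Function('u')(t, x, y)
f = sp.Function('f')(t)
Dx = lambda e: sp.diff(e, x)
Dy = lambda e: sp.diff(e, y)

# KP: u_tx = Dx(Fx) + Dy(Dy(Fy))
Fx = -(al * u * Dx(u) + be * sp.diff(u, x, 3))
Fy = -u
G = sp.diff(u, t, x) - Dx(Fx) - Dy(Dy(Fy))    # vanishes on solutions

# Multiplier f(t): the divergence identity of the first family
div_f = Dx(f * (sp.diff(u, t) - Fx)) + Dy(-f * Dy(Fy))
assert sp.simplify(div_f - f * G) == 0

# Multiplier y f(t): the additional family
div_yf = Dx(y * f * (sp.diff(u, t) - Fx)) + Dy(f * Fy - y * f * Dy(Fy))
assert sp.simplify(div_yf - y * f * G) == 0
```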
A converse statement can be made,
by seeking the most general form for a dynamical PDE that possesses
a family of conservation laws involving an arbitrary function of $t$.
As shown recently in \Ref{PopBih},
any PDE that admits a multiplier $f(t)$ necessarily has the form of a spatial divergence,
and any PDE that admits a multiplier $f_0(t)+f_1(t)y$ will be given by
a spatial divergence form having a second-order derivative with respect to $y$.
An analogous general form characterizes dynamical PDEs for which
$f_0(t)+f_1(t)x+f_2(t)y + f_3(t)z$ is a multiplier.
In general,
conservation laws of dynamical PDEs are central to the analysis of solutions
by providing physical, conserved quantities as well as conserved norms needed
for studying well-posedness, stability, and global behaviour.
Conservation laws are also important in checking the accuracy of numerical schemes
and in devising numerical schemes that have good properties \cite{BihDosPop,WanBihNav}.
For a given dynamical PDE,
all of its conservation laws (up to any specified differential order)
can be found systematically by the multiplier method \cite{Olv-book,BCA-book,Anc-review}.
Multipliers that are linear in an arbitrary function $f(t)$ and possibly a finite number of its derivatives
will determine conservation laws in which $f(t)$ and its derivatives appear linearly.
The wide variety of physically important dynamical PDEs that possess
conservation laws involving an arbitrary function of $t$
motivates asking some fundamental questions about such conservation laws:
What is their mathematical structure? What physical meaning do they have?
What information do they provide about solutions of the given dynamical PDE?
The present paper is addressed to these questions and finds an interesting answer:
\begin{itemize}
\item
a non-trivial conservation law involving an arbitrary function of $t$
in one spatial dimension
describes the presence of an $x$-independent \emph{source/sink};
\\
\item
in two and more spatial dimensions,
a non-trivial conservation law involving an arbitrary function of $t$
describes a \emph{topological charge}.
\end{itemize}
These main results have some interesting applications for
the study of dynamical PDEs.
One application is that any non-trivial conservation law involving an arbitrary function $f(t)$
in two or more spatial dimensions
can be used to introduce a spatial potential system,
allowing nonlocal conservation laws and symmetries to be found
for a given dynamical PDE.
This type of potential system has a different form and different gauge freedom
compared to potential systems that arise from ordinary conservation laws.
In general, any potential system can also be useful for doing analysis,
because it provides an equivalent formulation which may have better properties
for the study of solutions of a PDE.
A more analytical, deeper application emerges when the conserved integral
on a spatial domain is considered for
a non-trivial conservation law involving an arbitrary function $f(t)$
in two or more spatial dimensions.
For all solutions of the given dynamical PDE,
the conserved integral can be shown to be equal to a lower-dimensional integral over the domain boundary,
which thereby can be evaluated entirely in terms of the boundary values of $u$ and its derivatives.
This yields an integral constraint relation
whenever the conserved integral is non-trivial off of the solution space of the dynamical PDE.
Specifically, if the initial/boundary data is chosen such that the boundary integral vanishes,
then the conserved integral itself has to vanish.
This kind of constraint relation has important implications
for the well-posedness of the Cauchy problem,
especially when the conserved integral is given by a Sobolev norm or an energy.
Both of these applications have not been studied in any generality previously,
and thus they advance the study of dynamical PDEs as well as the study of conservation laws.
The rest of the paper is organized as follows.
In section~\ref{sec:results},
we state and prove the main results in a general form for dynamical PDEs in $n\geq 1$ spatial dimensions.
We also give an explicit explanation of these results
for dynamical PDEs with a spatial-divergence form
in one, two, and three spatial dimensions.
In section~\ref{sec:applications},
we discuss the applications of these results
to constructing spatial potential systems
and to uncovering boundary-value integral relations.
In section~\ref{sec:examples},
we illustrate all of the preceding developments
by considering some nonlinear PDEs
from applied mathematics and integrable system theory:
the KdV equation in Lagrangian form,
the KP equation,
a universal modified KP equation,
and equations of shear waves.
These examples cover one, two, and three spatial dimensions.
We also consider two examples of nonlinear PDEs with higher spatial derivatives of $u_t$:
the Novikov--Veselov equation in potential form,
and the vorticity equation.
For all of these examples, we derive conserved topological charges
from conservation laws that involve an arbitrary function $f(t)$,
and we write down the associated spatial potential systems.
These conservation laws are found by the standard multiplier method (see e.g.~\Ref{Anc-review}).
In the case of conservation laws whose conserved density is non-trivial off of solutions,
we derive integral constraint relations on initial data
for the Cauchy problem.
In section~\ref{sec:conclude},
we make some concluding remarks.
\section{Main results}\label{sec:results}
We begin by considering a general dynamical PDE of order $m\geq 1$
\begin{equation}\label{dyn.pde}
G(t,x,u,\partial u,\ldots,\partial^m u)=0
\end{equation}
for $u(t,x)$ in $n\geq1$ spatial dimensions,
where $t$ is time, and $x=(x^1,\ldots,x^n)$ are spatial coordinates in ${\mathbb R}^n$.
Here $\partial = (\partial_t,\partial_{x^1},\ldots,\partial_{x^n})$.
The space of all formal solutions to the PDE will be denoted ${\mathcal E}$.
A \emph{local conservation law} for a PDE \eqref{dyn.pde}
is a continuity equation
\begin{equation}\label{conslaw}
(D_t T + {\rm Div}\, \mathbf\Phi)|_{\mathcal E} =0
\end{equation}
holding for all solutions $u(t,x)$ of the PDE,
where $T$ is the conserved density,
and $\mathbf\Phi=(\Phi^1,\ldots,\Phi^n)$ is the spatial flux,
which are functions of $t$, $x$, $u$, and derivatives of $u$ up to a finite order.
The pair $(T,\mathbf\Phi)$ is called a \emph{conserved current}.
When solutions $u(t,x)$ are considered
in a given spatial domain $\Omega\subseteq{\mathbb R}^n$,
every local conservation law yields a corresponding conserved integral
\begin{equation}\label{conserved.integral}
\mathcal{C}[u]= \int_{\Omega} T|_{\mathcal E} dV
\end{equation}
satisfying the global balance equation
\begin{equation}\label{global.conslaw}
\frac{d}{dt}\mathcal{C}[u]
= -\oint_{\partial\Omega} \mathbf\Phi|_{\mathcal E}\cdot d\mathbf{A}
\end{equation}
with $d\mathbf{A}=\hat{\mathbf n}dA$,
where $\hat{\mathbf n}$ is the unit outward normal vector of the domain boundary $\partial\Omega$,
and where $dA$ is the boundary volume element.
This global equation \eqref{global.conslaw} has the physical meaning that
the rate of change of the quantity \eqref{conserved.integral} on the spatial domain
is balanced by the net outward flux through the boundary of the domain.
A conservation law is \emph{locally trivial}
\cite{Olv-book,BCA-book,Anc-review}
when, for all solutions $u(t,x)$ in $\Omega$,
the global balance equation \eqref{global.conslaw} becomes an identity.
This happens iff the conserved density reduces to a spatial divergence,
$T|_{\mathcal E} ={\rm Div}\,\mathbf\Psi|_{\mathcal E}$,
and the spatial flux reduces to a time derivative,
$\mathbf\Phi|_{\mathcal E} =-D_t\mathbf\Psi|_{\mathcal E}$
modulo a spatial curl, ${\rm Div}\,\underline{\mathbf\Theta}|_{\mathcal E}$
where $\underline{\mathbf\Theta}$ is a skew-tensor
(namely, ${\rm Div}\,({\rm Div}\,\underline{\mathbf\Theta})=0$ holds identically off of ${\mathcal E}$).
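The identity ${\rm Div}\,({\rm Div}\,\underline{\mathbf\Theta})=0$ for a skew tensor follows from the symmetry of mixed partial derivatives. A symbolic spot-check in three dimensions (illustrative only; requires SymPy):

```python
# Div(Div(Theta)) = 0 identically for a skew 2-tensor Theta in 3D.
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
a, b, c = (sp.Function(n)(x, y, z) for n in ('a', 'b', 'c'))
# skew tensor Theta_ij = -Theta_ji with independent smooth entries a, b, c
Theta = [[0, a, b],
         [-a, 0, c],
         [-b, -c, 0]]
div_Theta = [sum(sp.diff(Theta[i][j], X[j]) for j in range(3)) for i in range(3)]
total = sp.expand(sum(sp.diff(div_Theta[i], X[i]) for i in range(3)))
assert total == 0  # mixed partials cancel pairwise by skewness
```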
Likewise, two conservation laws are \emph{locally equivalent}
\cite{Olv-book,BCA-book,Anc-review}
if they differ by a locally trivial conservation law,
for all solutions $u(t,x)$ in $\Omega$.
A conservation law with $T|_{\mathcal E}=0$ is called
a \emph{spatial-flux} conservation law,
\begin{equation}\label{fluxconslaw}
{\rm Div}\,\mathbf\Phi|_{\mathcal E} =0 .
\end{equation}
Its corresponding conserved integral \eqref{conserved.integral} vanishes.
The meaning of this conserved integral depends on the number of spatial dimensions $n$,
and the topology of the domain $\Omega\subseteq{\mathbb R}^n$.
In the one-dimensional case ($n=1$),
when $\Omega$ is a connected interval,
the boundary $\partial\Omega$ consists of two endpoints.
The global conservation law \eqref{global.conslaw} then reduces to
boundary source/sink terms
\begin{equation}\label{1D.sourcesink}
(\Phi|_{\partial\Omega})|_{\mathcal E} =0 .
\end{equation}
Its content is that the source/sink flux at one endpoint is balanced by
the source/sink flux at the other endpoint.
This balance is non-trivial iff $\Phi\neq 0$.
In the multi-dimensional case ($n\geq2$),
there are two different general situations.
If $\Omega$ is a connected volume that is topologically a solid ball,
then the boundary $\partial\Omega$ consists of
a closed hypersurface $S$ that is topologically a hypersphere.
Hence the spatial-flux conservation law \eqref{fluxconslaw},
via the divergence theorem, shows that the net spatial flux through this hypersurface vanishes,
\begin{equation}\label{multiD.charge}
\oint_{S} \mathbf\Phi|_{\mathcal E}\cdot d\mathbf{A} =0 .
\end{equation}
This integral describes a vanishing \emph{topological charge},
which is conserved (namely, time-independent).
It is a topological quantity in the sense that it is unchanged
if the closed hypersurface $S$ is continuously deformed.
The same interpretation holds if $\Omega$ has a more general topology
such as a solid torus or its higher genus counterparts.
This topological charge \eqref{multiD.charge} is analogous to electric/magnetic charge
for electromagnetic fields in free space,
where the net electric/magnetic flux through a closed surface vanishes due to the absence of electric/magnetic charges inside the volume bounded by the surface.
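Concretely, for the electric field this is the source-free form of Gauss' law:
\begin{equation}
\oint_{S} \mathbf{E}\cdot d\mathbf{A} = \frac{1}{\epsilon_0}\int_{V} \rho\, dV = 0
\end{equation}
when the charge density $\rho$ is zero throughout the volume $V$ bounded by $S$.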
Alternatively, if $\Omega$ is a connected volume that is topologically a shell,
then the boundary $\partial\Omega$ comprises two closed hypersurfaces, $S_1$ and $S_2$,
each of which is topologically a hypersphere.
The spatial-flux conservation law \eqref{fluxconslaw} thereby
shows that the net spatial fluxes through the two hypersurfaces are equal,
\begin{equation}\label{multiD.fluxes}
\oint_{S_1} \mathbf\Phi|_{\mathcal E}\cdot d\mathbf{A}
=\oint_{S_2} \mathbf\Phi|_{\mathcal E}\cdot d\mathbf{A} .
\end{equation}
This equality describes the absence of any source/sink of net flux
inside $\Omega$.
It is analogous to conservation of net electric/magnetic flux
for electromagnetic fields in free space.
The same interpretation holds if $\Omega$ is given by
taking some connected volume with a more general topology
and removing from its interior some other connected volume.
These hypersurface integrals \eqref{multiD.charge} and \eqref{multiD.fluxes}
will be identically zero by Stokes' theorem iff
$\mathbf\Phi|_{\mathcal E} = {\rm Div}\,\underline{\mathbf\Theta}|_{\mathcal E}$
holds for some skew-tensor $\underline{\mathbf\Theta}$,
for all solutions $u(t,x)$ of the PDE.
Consequently, non-triviality is characterized by the following condition.
\begin{prop}\label{prop:nontriv.flux.multiD}
The topological charge integral \eqref{multiD.charge}
and the source/sink flux integrals \eqref{multiD.fluxes}
are non-trivial iff
$\mathbf\Phi|_{\mathcal E} \neq {\rm Div}\,\underline{\mathbf\Theta}|_{\mathcal E}$
holds for all skew-tensors $\underline{\mathbf\Theta}$
which are functions of $t$, $x$, $u$, and derivatives of $u$ up to a finite order.
\end{prop}
We next state and prove the main results about conservation laws that involve an arbitrary function of time.
\begin{thm}\label{thm:main}
If a dynamical PDE \eqref{dyn.pde} possesses a local conservation law \eqref{conslaw}
given by a family of conserved currents
\begin{align}
T & =\sum_{i=0}^{N} T_i(t,x,u,\partial u,\ldots,\partial^l u) \partial_t^i f(t)
\label{dens.arbfunct}
\\
\mathbf\Phi & = \sum_{i=0}^{N+1} {\mathbf\Phi}_i(t,x,u,\partial u,\ldots,\partial^l u) \partial_t^i f(t)
\label{flux.arbfunct}
\end{align}
involving an arbitrary function $f(t)$,
where the functions $T_i$ and $\mathbf\Phi_i$ do not contain $f(t)$ or its derivatives,
then the conservation law is locally equivalent to a spatial-flux conservation law
\begin{equation}\label{chargeflux.conslaw}
f(t){\rm Div}\,\mathbf\Gamma|_{\mathcal E} =0,
\quad
\mathbf\Gamma = \sum_{j=0}^{N+1} (-D_t)^{j}\mathbf\Phi_{j}(t,x,u,\partial u,\ldots,\partial^l u) .
\end{equation}
\end{thm}
\begin{proof}
There are three main steps.
We will show first that $T$ is locally trivial,
and next that $\mathbf\Phi$ is locally trivial modulo a conserved flux.
Finally, we will show that the resulting conservation law is locally equivalent to a spatial-flux conservation law.
The $t$-derivative of the conserved density \eqref{dens.arbfunct} is given by
\begin{equation}\label{Dt.dens}
D_t T = \sum_{i=0}^{N} D_t T_i \partial_t^i f(t) + \sum_{i=0}^{N} T_i \partial_t^{i+1} f(t)
= D_t T_0 f(t) + \sum_{i=1}^{N} (D_tT_i +T_{i-1})\partial_t^i f(t) + T_N \partial_t^{N+1} f(t) .
\end{equation}
Hence, the conservation law \eqref{conslaw} implies
\begin{equation}\label{Div.flux}
(D_t T_0+{\rm Div}\,\mathbf\Phi_0)|_{\mathcal E} f(t)
+\sum_{i=1}^{N} (D_tT_i +T_{i-1}+{\rm Div}\,\mathbf\Phi_i)|_{\mathcal E} \partial_t^i f(t)
+(T_N+{\rm Div}\,\mathbf\Phi_{N+1})|_{\mathcal E} \partial_t^{N+1} f(t) =0.
\end{equation}
This equation splits with respect to $f(t),\partial_t f(t),\ldots,\partial_t^{N+1} f(t)$,
because $f(t)$ is an arbitrary function of $t$,
and so their separate coefficients yield the relations
\begin{align}
{\rm Div}\,\mathbf\Phi_{N+1}|_{\mathcal E} & = -T_{N}|_{\mathcal E} ,
\label{dens.N.eqn}\\
{\rm Div}\,\mathbf\Phi_i|_{\mathcal E} & = -(D_t T_i + T_{i-1})|_{\mathcal E},
\quad
i=1,\ldots,N,
\label{dens.i.eqn}\\
{\rm Div}\,\mathbf\Phi_0|_{\mathcal E} & = -D_t T_0|_{\mathcal E} .
\label{flux.0.eqn}
\end{align}
Equations \eqref{dens.N.eqn} and \eqref{dens.i.eqn}
can be solved recursively for $T_N,\ldots,T_0$:
\begin{equation}\label{dens.i}
T_i|_{\mathcal E} = -\sum_{j=0}^{N-i} (-D_t)^j{\rm Div}\,\mathbf\Phi_{i+j+1}|_{\mathcal E},
\quad
i=0,\ldots,N .
\end{equation}
This shows that each $T_i$ is locally trivial,
namely
\begin{equation}\label{dens.triv.i}
T_i|_{\mathcal E} = {\rm Div}\,\mathbf\Psi_i|_{\mathcal E},
\end{equation}
with
\begin{equation}\label{triv.i}
\mathbf\Psi_i= -\sum_{j=0}^{N-i} (-D_t)^j\mathbf\Phi_{i+j+1},
\quad
i=0,\ldots,N .
\end{equation}
Therefore,
\begin{equation}\label{dens.triv}
T|_{\mathcal E} = {\rm Div}\,\mathbf\Psi|_{\mathcal E},
\quad
\mathbf\Psi = -\sum_{i=0}^{N} \partial_t^i f(t) \sum_{j=0}^{N-i} (-D_t)^j \mathbf\Phi_{i+j+1}
\end{equation}
is locally trivial.
Next, the $t$-derivative of $\mathbf\Psi$ is given by
\begin{equation}\label{Dt.triv}
\begin{aligned}
D_t\mathbf\Psi
& = \sum_{i=0}^{N} \big(
\partial_t^{i+1} f(t)\, \mathbf\Psi_{i} + \partial_t^{i} f(t) D_t\mathbf\Psi_{i}
\big)
\\
&
= f(t) D_t\mathbf\Psi_{0}
+ \sum_{i=1}^{N} \partial_t^{i} f(t)(D_t\mathbf\Psi_{i} + \mathbf\Psi_{i-1})
+\partial_t^{N+1}f(t)\, \mathbf\Psi_{N} ,
\end{aligned}
\end{equation}
which can be written in terms of the expressions $\mathbf\Phi_i$.
In the last term in \eqref{Dt.triv},
\begin{equation}\label{last.term}
\mathbf\Psi_{N}
= -\mathbf\Phi_{N+1}
\end{equation}
holds from expression \eqref{triv.i} for $i=N$.
Likewise the first term in \eqref{Dt.triv} can be simplified by observing
that
$D_t\mathbf\Psi_{0}
= \sum_{j=0}^{N} (-D_t)^{j+1}\mathbf\Phi_{j+1}$
holds from expression \eqref{triv.i} for $i=0$,
and that
$\sum_{j=0}^{N} (-D_t)^{j+1}{\rm Div}\,\mathbf\Phi_{j+1}|_{\mathcal E} = -{\rm Div}\, \mathbf\Phi_0|_{\mathcal E}$
follows from equation \eqref{flux.0.eqn} combined with equation \eqref{dens.i} for $i=0$.
This implies
\begin{equation}\label{first.term}
D_t\mathbf\Psi_{0}
= \sum_{j=0}^{N} (-D_t)^{j+1}\mathbf\Phi_{j+1}
= (\mathbf\Gamma -\mathbf\Phi_0),
\quad
{\rm Div}\,\mathbf\Gamma|_{\mathcal E}=0 ,
\end{equation}
holds for some vector function $\mathbf\Gamma(t,x,u,\partial u,\ldots,\partial^{l'} u)$.
For the remaining terms in \eqref{Dt.triv},
\begin{equation}\label{middle.terms}
(D_t\mathbf\Psi_{i} + \mathbf\Psi_{i-1})
= \sum_{j=0}^{N-i} (-D_t)^{j+1}\mathbf\Phi_{i+j+1}
-\sum_{j=0}^{N-i+1} (-D_t)^{j}\mathbf\Phi_{i+j}
= -\mathbf\Phi_i ,
\quad
i=1,\ldots,N
\end{equation}
holds because the two sums telescope, leaving only the $j=0$ term of the second sum.
Substituting equations \eqref{last.term}--\eqref{middle.terms}
into equation \eqref{Dt.triv}
yields
\begin{equation}
D_t\mathbf\Psi =
f(t)\mathbf\Gamma -\sum_{i=0}^{N+1} \partial_t^i f(t) \mathbf\Phi_i .
\end{equation}
This equation combined with the flux expression \eqref{flux.arbfunct}
then gives
\begin{equation}\label{flux.triv}
\mathbf\Phi|_{\mathcal E} = -D_t\mathbf\Psi|_{\mathcal E} + f(t)\mathbf\Gamma|_{\mathcal E} ,
\end{equation}
which consists of a locally trivial term plus a divergence-free term.
Finally,
consider the locally trivial conserved current
$({\rm Div}\,\mathbf\Psi,-D_t\mathbf\Psi)|_{\mathcal E}$.
When this conserved current is subtracted from the conserved current
given by the density \eqref{dens.triv} and the flux \eqref{flux.triv},
the resulting locally equivalent conservation law is given by the conserved current
\begin{equation}
(0,f(t)\mathbf\Gamma)|_{\mathcal E}
\end{equation}
as shown by equations \eqref{dens.triv} and \eqref{flux.triv}.
To conclude the proof,
note that equation \eqref{first.term} yields the expression for $\mathbf\Gamma$
given in equation \eqref{chargeflux.conslaw}.
\end{proof}
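For instance, in the simplest case $N=0$,
the conserved current consists of $T = T_0\, f(t)$ and $\mathbf\Phi = \mathbf\Phi_0\, f(t) + \mathbf\Phi_1\, \partial_t f(t)$,
and splitting the conservation law with respect to $f(t)$ and $\partial_t f(t)$ yields
\begin{equation}
{\rm Div}\,\mathbf\Phi_1|_{\mathcal E} = -T_0|_{\mathcal E},
\quad
{\rm Div}\,\mathbf\Phi_0|_{\mathcal E} = -D_t T_0|_{\mathcal E} .
\end{equation}
The trivializing vector \eqref{dens.triv} is simply $\mathbf\Psi = -f(t)\mathbf\Phi_1$,
and Theorem~\ref{thm:main} yields the spatial-flux conservation law
$f(t){\rm Div}\,\mathbf\Gamma|_{\mathcal E}=0$ with
\begin{equation}
\mathbf\Gamma = \mathbf\Phi_0 - D_t\mathbf\Phi_1 .
\end{equation}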
Existence of a spatial-flux conserved current \eqref{chargeflux.conslaw} will require
that a dynamical PDE \eqref{dyn.pde} have a certain form.
The problem of exactly determining this form can be addressed by
first working out the general form for the multiplier,
which must look like
$\sum_{i=0}^{N'} \partial_t ^i f(t) Q_i(t,x,u,\partial u,\ldots,\partial^{l'_i} u)$,
and then generalizing the methods in \Ref{PopBih} using Euler operators
for determining the form of PDEs that possess multipliers $f(t)$.
This is a non-trivial problem and will be left for elsewhere.
It will be useful for the sequel to remark that
the underlying mathematical setting for all the developments here
is calculus on jet space \cite{Olv-book}.
For a given PDE \eqref{dyn.pde}, the associated jet space is simply
the coordinate space $J= (t,x,u,\partial u,\partial^2 u,\ldots)$.
The solution space ${\mathcal E}$ of the PDE is represented by the surface
defined by equation \eqref{dyn.pde} in $J$
along with the corresponding surfaces given by all derivatives of equation \eqref{dyn.pde}.
Each solution $u(t,x)$ of the PDE is represented by the points of its prolonged graph, which lie on these surfaces.
When an expression such as $T$ or $\mathbf\Phi$ is evaluated on the solution space ${\mathcal E}$,
this means that the expression $T|_{\mathcal E}$ or $\mathbf\Phi|_{\mathcal E}$ is evaluated on the solution surfaces in $J$.
For purposes of computation,
the evaluation of any expression on ${\mathcal E}$ can be carried out by
first writing the PDE \eqref{dyn.pde} in a solved form
with respect to some leading derivative,
and then substituting the leading derivative and all of its derivatives into the expression.
See \Ref{Anc-review} for more details and examples.
\subsection{Topological charges for dynamical PDEs with a spatial-divergence form}\label{sec:topologicalcharge}
As a consequence of Theorem~\ref{thm:main},
the following result is obtained.
\begin{cor}\label{cor:topological.charge}
For a dynamical PDE \eqref{dyn.pde} in $n>1$ spatial dimensions,
any conservation law that involves an arbitrary function of $t$
will yield a corresponding conserved topological charge
\begin{equation}
\oint_{\partial\Omega} \mathbf\Gamma|_{\mathcal E}\cdot d\mathbf{A} =0
\end{equation}
associated to any spatial domain $\Omega\subseteq{\mathbb R}^n$
whose boundary is a closed hypersurface $\partial\Omega$.
The topological nature of the charge is that it is unchanged
under continuous deformations of the hypersurface.
\end{cor}
This result will now be specialized to dynamical PDEs with a spatial-divergence form
\begin{equation}\label{pde.divform}
\hat{\mathbf k}\cdot\nabla u_t = \nabla\cdot {\mathbf F}(t,x,u,\nabla u,\ldots,\nabla^m u)
\end{equation}
where
$\mathbf{F}= (F^1,\ldots,F^n)$ is a vector function,
and $\hat{\mathbf k}=(k^1,\ldots,k^n)$ is a constant unit vector.
Any such PDE can be directly expressed as a spatial-flux conservation law
\begin{equation}\label{nD.flux.conslaw}
\nabla\cdot\mathbf\Gamma|_{\mathcal E} =0,
\quad
\mathbf\Gamma= u_t \hat{\mathbf k} - \mathbf{F} .
\end{equation}
This conservation law is non-trivial if and only if
the flux does not have the form of a spatial curl,
$\mathbf\Gamma|_{\mathcal E} \neq {\rm Div}\,\underline{\mathbf\Theta}|_{\mathcal E}$,
for all skew-tensor functions
$\underline{\mathbf\Theta}(t,x,u,\partial u,\ldots,\partial^l u)$.
But the relation $u_t\hat{\mathbf k} -\mathbf{F}={\rm Div}\,\underline{\mathbf\Theta}|_{\mathcal E}$
would clearly be inconsistent in any finite jet space
because the left side contains $u_t$ but no derivatives of $u_t$,
while if $\underline{\mathbf\Theta}|_{\mathcal E}$ contains $u_t$ (or its derivatives)
then the right side will always contain at least $\nabla u_{t}$ (or its derivatives).
This argument can be made rigorous by employing
the spatial Euler operator \cite{BCA-book,Anc-review}
similarly to the methods in \Ref{PopBih}.
We will now look at the content of the resulting global conservation law
first for PDEs in one spatial dimension
and then for PDEs in two and more spatial dimensions.
Consider a dynamical PDE \eqref{pde.divform} in one spatial dimension,
$u_{tx} = D_x F(t,x,u,\partial_x u,\ldots,\partial_x^m u)$.
Every such PDE has the form of a spatial-flux conservation law $D_x\Phi|_{\mathcal E}=0$
in which the flux is given by $\Phi = u_t - F$.
For solutions $u(t,x)$ on a connected interval $\Omega\subseteq{\mathbb R}$,
the corresponding global form of the conservation law is obtained by
integration with respect to $x$,
yielding
\begin{equation}\label{1D.sourcesink.conslaw}
0 = \int_{\Omega} D_x(u_t-F)|_{\mathcal E}\,dx
= ((u_t-F)|_{\partial\Omega})|_{\mathcal E} ,
\end{equation}
which is a boundary term at the two endpoints $\partial\Omega$ of the interval.
This boundary-type conservation law corresponds to the integrated form of the PDE
\begin{equation}\label{1D.ut}
u_{t} = F(t,x,u,\partial_x u,\ldots,\partial_x^m u) + w(t)
\end{equation}
whereby $u_t-F = w$ is independent of $x$.
Here $w$ can be viewed as a source/sink
in the resulting evolution equation \eqref{1D.ut}.
Note that, through this equation,
each solution $u(t,x)$ of the PDE
$u_{tx} = D_x F(t,x,u,\partial_x u,\ldots,\partial_x^m u)$
will give rise to a corresponding function $w(t)$
which, in general, will be non-zero.
As a consequence,
the source/sink conservation law \eqref{1D.sourcesink.conslaw}
is non-trivial.
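For instance, the KdV equation $v_t + vv_x + v_{xxx}=0$ (revisited in section~\ref{sec:examples})
acquires this spatial-divergence form under the potential substitution $v=u_x$:
\begin{equation}
u_{tx} = D_x\big( -\tfrac{1}{2}u_x^2 - u_{xxx} \big) ,
\end{equation}
so here $F = -\tfrac{1}{2}u_x^2 - u_{xxx}$,
and the integrated form \eqref{1D.ut} is the potential KdV equation
$u_t = -\tfrac{1}{2}u_x^2 - u_{xxx} + w(t)$
with a source/sink term $w(t)$.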
The situation differs for a dynamical PDE \eqref{pde.divform}
in two spatial dimensions.
First, note that we can put $\hat{\mathbf k}=(1,0)$ by a point transformation,
so that the PDE takes the specific form
\begin{equation}\label{2D.eqn}
u_{tx} = \nabla_x F^x(t,x,y,u,\nabla u,\ldots,\nabla^m u) + \nabla_y F^y(t,x,y,u,\nabla u,\ldots,\nabla^m u) .
\end{equation}
This is a spatial-flux conservation law
$(D_x\Phi^x +D_y\Phi^y)|_{\mathcal E}=0$
in which the flux is given by the vector function
\begin{equation}\label{2D.flux}
(\Phi^x,\Phi^y) = (u_t - F^x,-F^y) .
\end{equation}
The corresponding global form of the conservation law is obtained by
integration with respect to $x,y$ over a given connected domain $\Omega\subseteq{\mathbb R}^2$,
yielding
\begin{equation}\label{2D.charge.conslaw}
0 = \int_{\Omega} \big(D_x(u_t - F^x) + D_y(-F^y)\big)\big|_{\mathcal E} \,dx\,dy
= \oint_{\partial\Omega} \big( F^y\,dx + (u_t - F^x)\,dy \big)\big|_{\mathcal E}
\end{equation}
for all solutions $u(t,x)$.
This line integral is the two-dimensional form of the topological charge \eqref{multiD.charge}.
It is non-trivial because the vector function \eqref{2D.flux}
cannot be expressed in a curl form $(D_y\Theta,-D_x\Theta)$
for any scalar function $\Theta$ of $t$, $x$, $y$, $u$, and derivatives of $u$ up to a finite order,
for all solutions of the PDE \eqref{2D.eqn}.
(This follows from the argument used to show non-triviality of the conservation law \eqref{nD.flux.conslaw}.)
The topological charge conservation law \eqref{2D.charge.conslaw}
can also be viewed as corresponding to the integrated form of the PDE \eqref{2D.eqn}
as given by
\begin{equation}\label{2D.integrated.eqn}
u_t-F^x(t,x,y,u,\nabla u,\ldots,\nabla^m u) = w_y,
\quad
F^y(t,x,y,u,\nabla u,\ldots,\nabla^m u) =w_x .
\end{equation}
Note that each solution $u(t,x,y)$ of the PDE \eqref{2D.eqn}
will give rise to a corresponding function $w(t,x,y)$.
In particular, $w$ depends nonlocally on $u$,
and so the equations \eqref{2D.integrated.eqn}
cannot be viewed as surfaces in the jet space $J= (t,x,u,\partial u,\partial^2 u,\ldots)$.
In terms of $w$,
the topological charge conservation law \eqref{2D.charge.conslaw}
takes the form $\oint_{\partial\Omega} w_x\,dx + w_y\,dy=0$
which holds for any function $w(t,x,y)$
by the gradient line integral theorem.
There is also a dynamical interpretation of the topological charge conservation law \eqref{2D.charge.conslaw},
which comes from expressing it in the form
\begin{equation}\label{2D.dyn.flux.conslaw}
\frac{d}{dt}\oint_{C} u\,dy|_{\mathcal E}
= \oint_{C} (F^x\,dy - F^y\,dx)|_{\mathcal E}
\end{equation}
where $C$ is any closed curve in the $(x,y)$-plane.
The line integral on the left side is the net circulation $\oint_{C} u\,dy|_{\mathcal E}$ of
the transverse transport vector $(0,u)= u\hat{\mathbf k}_\perp$ around the closed curve,
while the line integral on the right side is the net circulation of
the vector field $(-F^y,F^x)|_{\mathcal E}$.
Thus, this vector field drives the rate of change of the net circulation of
the transport vector.
In three or more spatial dimensions,
the global form of the spatial-flux conservation law \eqref{nD.flux.conslaw}
is given by the vanishing flux integral
\begin{equation}\label{fluxcharge.conslaw}
\oint_{S} (\hat{\mathbf k} u_t -\mathbf{F})|_{\mathcal E}\cdot d\mathbf{A} =0
\end{equation}
on any closed hypersurface $S$ in ${\mathbb R}^n$.
This integral describes a conserved topological charge
for all solutions $u(t,x)$ of the PDE \eqref{pde.divform}.
It can be interpreted dynamically as stating that
the rate of change of the net flux of the transport vector $u \hat{\mathbf k}$
through a closed hypersurface
is balanced by the net flux of the vector field $\mathbf{F}$:
\begin{equation}\label{dyn.flux.conslaw}
\frac{d}{dt} \oint_{S} u \hat{\mathbf k}|_{\mathcal E} \cdot d\mathbf{A}
= \oint_{S} \mathbf{F}|_{\mathcal E} \cdot d\mathbf{A} .
\end{equation}
Further understanding will be provided by the examples in Section~\ref{sec:examples}.
A general discussion of topological conservation laws
in three dimensions can be found in \Ref{AncChe2018}.
\section{Applications}\label{sec:applications}
We will present two innovative applications of the results in Theorem~\ref{thm:main} and Corollary~\ref{cor:topological.charge}:\\
\indent$\bullet$
constructing spatial potential systems; \\
\indent$\bullet$
uncovering integral constraint relations on solutions. \\
For each application, we first consider a general dynamical PDE \eqref{dyn.pde},
and then we will specialize the presentation to the situation
when the PDE has a spatial-divergence form \eqref{pde.divform}.
Examples will be given in section~\ref{sec:examples}.
\subsection{Potential systems}
Any spatial-flux conservation law \eqref{chargeflux.conslaw}
for a given dynamical PDE \eqref{dyn.pde} in $n>1$ spatial dimensions
can be converted into a spatial potential system
via the introduction of a potential $\underline{\mathbf w}(t,x)$
in the form of a skew-tensor.
This is achieved by expressing the spatial-flux vector in a curl form
\begin{equation}\label{div.curl.flux}
\mathbf\Gamma(t,x,u,\partial u,\ldots) ={\rm Div}\,\underline{\mathbf w}
\end{equation}
in terms of the potential.
Then ${\rm Div}\,\mathbf\Gamma(t,x,u,\partial u,\ldots) =0$
holds identically (even off of the solution space ${\mathcal E}$ of the PDE),
as ${\rm Div}\,({\rm Div}\,\underline{\mathbf w})={\rm Div}\,^2\underline{\mathbf w}$
is identically zero
due to the Hessian matrix ${\rm Div}\,^2$ being symmetric
while the potential $\underline{\mathbf w}$ is skew.
Note this is the $n$-dimensional version of the identity that
the divergence of a curl in three dimensions is identically zero.
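In index notation, with $\underline{\mathbf w}=(w^{ij})$, $w^{ij}=-w^{ji}$,
this identity is immediate:
\begin{equation}
{\rm Div}\,^2\underline{\mathbf w}
= \partial_i\partial_j w^{ij}
= \tfrac{1}{2}\partial_i\partial_j\big( w^{ij} + w^{ji} \big)
= 0 ,
\end{equation}
after symmetrizing the dummy indices,
since the derivatives $\partial_i\partial_j$ commute while $w^{ij}$ is antisymmetric.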
We view the vector-flux equation \eqref{div.curl.flux}
as defining a potential system with variables $(u,\underline{\mathbf w})$
associated to the given dynamical PDE \eqref{dyn.pde}.
Such a potential system is purely spatial,
since it does not contain a time derivative of $\underline{\mathbf w}$,
in contrast to the more familiar potential systems that arise from
ordinary conservation laws in which the conserved density is non-trivial
(see e.g. \Ref{BCA-book}).
Moreover, it exists only for $n>1$,
since curls do not exist in dimension $n=1$.
There is a natural correspondence between
the solution space of the dynamical PDE
and the solution space of the spatial potential system
modulo some gauge freedom which we will describe next.
This gauge freedom is different in form than the gauge freedom inherent in non-spatial potential systems.
Consider, firstly, the situation in two spatial dimensions ($n=2$).
The spatial-flux vector $\mathbf\Gamma=(\Gamma^x,\Gamma^y)$
has two components,
while the potential can be expressed explicitly as a skew matrix
\begin{equation}
\underline{\mathbf w}=\begin{pmatrix}0&w\\-w&0\end{pmatrix}
\end{equation}
involving a single scalar variable $w$.
The spatial potential system \eqref{div.curl.flux} is given by
\begin{equation}\label{2D.div.curl.flux}
\Gamma^x = w_y,
\quad
\Gamma^y = -w_x .
\end{equation}
It is unchanged under the gauge freedom
\begin{equation}\label{2D.gaugefreedom}
w\to w+ \chi(t)
\end{equation}
where $\chi(t)$ is an arbitrary differentiable function of $t$.
Every solution $(u(t,x),w(t,x))$ of this potential system \eqref{2D.div.curl.flux}
yields a solution $u(t,x)$ of the given dynamical PDE.
Conversely, every solution $u(t,x)$ of the given dynamical PDE
determines a set of solutions $(u(t,x),w(t,x)+\chi(t))$ of the potential system.
Modulo the gauge freedom, this correspondence is one-to-one.
For a dynamical PDE that has a spatial-divergence form \eqref{2D.eqn}
in two spatial dimensions,
the spatial potential system \eqref{2D.div.curl.flux} is given by
the pair of equations \eqref{2D.integrated.eqn} in terms of $(u,w)$.
By comparison, non-spatial potential systems in two spatial dimensions
involve three potential variables
and have a gauge freedom which involves
arbitrary functions of all three independent variables $t,x,y$.
Such gauge freedom requires that a gauge condition be appended to the potential system
so that it can yield potential symmetries and potential conservation laws \cite{AncBlu1997b,BCA-book}.
In contrast,
a spatial potential system \eqref{2D.div.curl.flux} has less gauge freedom,
and therefore gauge conditions are not necessarily required.
Secondly, consider the situation in three spatial dimensions ($n=3$).
Now the spatial-flux vector has three components
$\mathbf\Gamma=(\Gamma^x,\Gamma^y,\Gamma^z)$,
and the potential can be expressed explicitly as a $3\times 3$ skew matrix
\begin{equation}
\underline{\mathbf w}=\begin{pmatrix}0&w^z&-w^y\\-w^z&0&w^x\\w^y&-w^x&0\end{pmatrix}
\end{equation}
which involves three scalar variables $(w^x,w^y,w^z)$.
The resulting spatial potential system \eqref{div.curl.flux} has the form
\begin{equation}\label{3D.div.curl.flux}
\Gamma^x=(w^z)_y-(w^y)_z,
\quad
\Gamma^y=(w^x)_z-(w^z)_x,
\quad
\Gamma^z=(w^y)_x-(w^x)_y .
\end{equation}
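In standard vector-calculus notation, with $\mathbf{w}=(w^x,w^y,w^z)$,
this system is simply the curl representation
\begin{equation}
\mathbf\Gamma = \nabla\times\mathbf{w} ,
\end{equation}
so that ${\rm Div}\,\mathbf\Gamma = \nabla\cdot(\nabla\times\mathbf{w})=0$ is
the familiar statement that the divergence of a curl vanishes.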
It is unchanged under the gauge freedom
\begin{equation}\label{3D.gaugefreedom}
(w^x,w^y,w^z)\to(w^x+\partial_x\chi(t,x,y,z),w^y+\partial_y\chi(t,x,y,z),w^z+\partial_z\chi(t,x,y,z))
\end{equation}
where $\chi(t,x,y,z)$ is an arbitrary differentiable function of all independent variables.
Modulo this gauge freedom, there is a one-to-one correspondence between
the solution space of the potential system and the solution space of the given dynamical PDE.
When a dynamical PDE in three dimensions
has a spatial-divergence form
\begin{equation}\label{3D.eqn}
\begin{aligned}
u_{tx} &
= \nabla_x F^x(t,x,y,z,u,\nabla u,\ldots,\nabla^m u) + \nabla_y F^y(t,x,y,z,u,\nabla u,\ldots,\nabla^m u)
\\&\qquad
+ \nabla_z F^z(t,x,y,z,u,\nabla u,\ldots,\nabla^m u) ,
\end{aligned}
\end{equation}
its spatial potential system \eqref{3D.div.curl.flux} is given by
\begin{subequations}\label{3D.integrated.eqn}
\begin{align}
u_t -F^x(t,x,y,z,u,\nabla u,\ldots,\nabla^m u) & = (w^z)_y-(w^y)_z,
\\
F^y(t,x,y,z,u,\nabla u,\ldots,\nabla^m u) & = (w^x)_z-(w^z)_x,
\\
F^z(t,x,y,z,u,\nabla u,\ldots,\nabla^m u) & = (w^y)_x-(w^x)_y,
\end{align}
\end{subequations}
in terms of $(u,w^x,w^y,w^z)$.
Non-spatial potential systems in three spatial dimensions involve six potentials and four gauge functions.
Thus, spatial potential systems \eqref{3D.div.curl.flux} are considerably easier to use
(in particular, they require only a single gauge condition).
A similar correspondence extends to $n>3$ spatial dimensions,
where the spatial potential system \eqref{div.curl.flux} for a dynamical PDE
with a spatial-divergence form \eqref{pde.divform}
is given by
\begin{equation}
\hat{\mathbf k} u_t -{\mathbf F}(t,x,u,\nabla u,\ldots,\nabla^m u)
={\rm Div}\,\underline{\mathbf w}
\end{equation}
for $(u,\underline{\mathbf w})$.
The gauge freedom in the skew-tensor variable consists of
\begin{equation}
\underline{\mathbf w}(t,x) \to\underline{\mathbf w}(t,x) + {\rm Div}\,\underline{\boldsymbol\chi}(t,x)
\end{equation}
with $\underline{\boldsymbol\chi}(t,x)$ being an antisymmetric rank-3 tensor
whose components are arbitrary differentiable functions of all independent variables $(t,x)$.
In particular,
note that ${\rm Div}\,({\rm Div}\,\underline{\boldsymbol\chi}(t,x))={\rm Div}\,^2\underline{\boldsymbol\chi}(t,x)$ is identically zero
since ${\rm Div}\,^2$ is symmetric while $\underline{\boldsymbol\chi}$ is totally antisymmetric.
\subsection{Integral constraint relations}
For a given dynamical PDE \eqref{dyn.pde} in $n>1$ spatial dimensions,
any conservation law \eqref{dens.arbfunct}--\eqref{flux.arbfunct}
involving an arbitrary function $f(t)$ and its derivatives $\partial_t^i f(t)$,
$i=0,1,\ldots,N$,
gives rise to the set of $N+1$ divergence relations \eqref{dens.triv.i}--\eqref{triv.i}
which hold for all solutions $u(t,x)$ of the PDE.
Each of these relations can be integrated
over any given spatial domain $\Omega\subseteq{\mathbb R}^n$ to obtain
an associated integral relation
\begin{equation}\label{integral.relation}
\int_{\Omega} T_i|_{\mathcal E}\,dV
= \oint_{\partial\Omega} \mathbf\Psi_i|_{\mathcal E}\cdot d\mathbf{A},
\quad
i=0,\ldots,N
\end{equation}
where
$T_i$ is a scalar function of $t$, $x$, $u$, and derivatives of $u$,
and where
$\mathbf\Psi_i$ is a vector function of $t$, $x$, $u$, and derivatives of $u$.
This relation \eqref{integral.relation}
holds for all solutions $u(t,x)$ of the PDE.
It shows that the volume integral on the left-hand side
can be evaluated entirely in terms of the values of $u$ and its derivatives
on the domain boundary
appearing in the hypersurface flux integral on the right-hand side.
We remark that, off of the solution space of a given PDE,
an integral relation \eqref{integral.relation}
is equivalent to a divergence-type identity
\begin{equation}\label{dens.div.id}
T = {\rm Div}\, \mathbf\Psi + R(G)
\end{equation}
where $R$ is a linear differential operator in total derivatives
whose coefficients are non-singular on the solution space ${\mathcal E}$ of the PDE.
An integral relation \eqref{integral.relation} is significant
when the Cauchy problem for a given dynamical PDE is considered
on a spatial domain $\Omega\subseteq{\mathbb R}^n$.
For simplicity, suppose that the PDE has a spatial divergence form \eqref{pde.divform},
whereby it can be expressed as a nonlocal evolution equation
$u_t = \hat{\mathbf k}\cdot{\mathbf F} + (\hat{\mathbf k}\cdot\nabla)^{-1}(\nabla\cdot{\mathbf F}_\perp)$
where ${\mathbf F}_\perp = {\mathbf F} - (\hat{\mathbf k}\cdot{\mathbf F})\hat{\mathbf k}$ satisfies $\hat{\mathbf k}\cdot{\mathbf F}_\perp=0$.
The Cauchy problem then consists of specifying
initial data $u_0(x)$ for $u$ at $t=0$ on $\Omega$.
There are two basic situations:
$\Omega\subset{\mathbb R}^n$ is finite;
or $\Omega={\mathbb R}^n$ is infinite.
When the domain is finite,
if boundary conditions for $u$ are posed on $\partial\Omega$ for $t\geq0$
such that $\oint_{\partial\Omega} \mathbf\Psi\cdot d\mathbf{A}=0$,
then the initial data must satisfy the restriction
\begin{equation}\label{u0.finitedomain}
\int_{\Omega} T|_{u=u_0}\,dV =0 .
\end{equation}
Likewise, when the domain is infinite,
if $u$ obeys asymptotic conditions posed for $t\geq0$
such that $\lim_{r\to\infty}\oint_{S^{n-1}(r)} \mathbf\Psi\cdot d\mathbf{A}=0$
where $S^{n-1}(r)$ is a sphere of radius $r>0$ in ${\mathbb R}^n$,
then the initial data must satisfy the restriction
\begin{equation}\label{u0.infinitedomain}
\int_{{\mathbb R}^n} T|_{u=u_0}\,dV =0 .
\end{equation}
Such a restriction \eqref{u0.finitedomain} or \eqref{u0.infinitedomain}
has significant implications if the integral is
a non-negative energy expression
or a norm in a function space such as $L^p$ or $H^s$.
In this situation,
the integral would be a priori strictly positive for all non-trivial solutions
$u(t,x)$ of the given dynamical PDE on the spatial domain $\Omega$,
which seems to suggest a contradiction.
But in fact it implies that there must exist some a priori restrictions
on the possible kinds of initial conditions and boundary conditions
that can be posed to have the resulting initial-boundary value problem for the PDE
be well-posed in the function space determined by the norm
$\int_\Omega T|_{\mathcal E}\,dV$.
To illustrate in general how such a non-trivial integral relation could exist,
consider a dynamical PDE with a spatial-divergence form \eqref{2D.eqn}
in two dimensions:
$G = u_{tx} -D_x F^x -D_y F^y=0$.
For simplicity, suppose that the PDE possesses a scaling symmetry
$t\to\lambda t$, $x\to \lambda^a x$, $y\to \lambda^b y$, $u\to \lambda^c u$,
with scaling weights $a,b,c$.
We can derive an integral relation for $T=u_x^{p+1}$,
with $p$ assumed to be a positive integer,
by the following steps.
First, we start from a scaling-homogeneous multiplier of the form
\begin{equation}
Q=f(t) u_x^p + f'(t) y
\end{equation}
where the constants $p,a,b,c$ are related by $(c-a)p=b-1$
due to scaling homogeneity.
Next, we observe
\begin{equation}
\begin{aligned}
QG & =(f(t) u_x^p + f'(t) y)(u_{tx} -D_xF^x -D_yF^y)
\\
& = D_t( \tfrac{1}{p+1} u_x^{p+1}f(t) ) +D_x( y (u_t-F^x)f'(t) ) -D_y( \tfrac{1}{2p+1} u_x^{2p+1} f(t) +y F^y f'(t) )
\\&\qquad
- u_x^p(D_xF^x +D_y\tilde F^y)f(t) +\tilde F^y f'(t) ,
\quad
\tilde F^y = F^y -\tfrac{1}{p+1} u_x^{p+1} .
\end{aligned}
\end{equation}
This identity shows that $Q$ will be a multiplier iff the last two terms are each a total spatial derivative
\begin{equation}
u_x^p(D_xF^x +D_y\tilde F^y) = D_x X_0 + D_y Y_0,
\quad
\tilde F^y = D_x X_1 + D_y Y_1 ,
\end{equation}
for some functions $X_0$, $X_1$, $Y_0$, $Y_1$
of $u$ and its spatial derivatives as well as possibly $t,x,y$.
The necessary and sufficient conditions are given by
\begin{equation}
E_u(u_x^{p-1}(u_{xx}F^x +u_{xy}\tilde F^y)) =0,
\quad
E_u(\tilde F^y)=0
\end{equation}
where $E_u$ is the Euler operator (variational derivative) with respect to $u$.
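The equivalence between these conditions and the total-derivative conditions above
can be seen by moving $u_x^p$ inside the total derivatives:
\begin{equation}
u_x^p(D_xF^x + D_y\tilde F^y)
= D_x\big( u_x^p F^x \big) + D_y\big( u_x^p \tilde F^y \big)
- p\, u_x^{p-1}\big( u_{xx}F^x + u_{xy}\tilde F^y \big) ,
\end{equation}
so the left-hand side is a total spatial divergence iff the last term is,
which holds iff it is annihilated by $E_u$.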
If we specify a maximum differential order for $F^x$ and $\tilde F^y$
as functions in jet space,
then this pair of equations splits with respect to all higher-order jet variables,
yielding a PDE system on $F^x$ and $\tilde F^y$.
The system turns out to have non-trivial solutions
as shown by the example of the mKP equation
in section~\ref{sec:examples}.
Hence, we obtain a conservation law of the form \eqref{dens.arbfunct} and \eqref{flux.arbfunct}
with
\begin{subequations}
\begin{gather}
T_0 =\tfrac{1}{p+1} u_x^{p+1},
\\
\Phi^x_0 = -X_0,
\quad
\Phi^x_1 = y(u_t-F^x) +X_1 ,
\\
\Phi^y_0 = -\tfrac{1}{2p+1} u_x^{2p+1} -Y_0,
\quad
\Phi^y_1 = -y F^y +Y_1 ,
\end{gather}
\end{subequations}
where $N=0$.
Finally, from Theorem~\ref{thm:main},
we obtain a single spatial divergence relation \eqref{dens.triv.i}--\eqref{triv.i},
which is explicitly given by
\begin{equation}
\begin{aligned}
T_0|_{\mathcal E} & = \tfrac{1}{p+1} u_x^{p+1} \\
& = -(D_x\Phi^x_1 + D_y\Phi^y_1)|_{\mathcal E} = D_x( y(F^x-u_t)-X_1 )|_{\mathcal E} +D_y( y F^y -Y_1 )|_{\mathcal E} .
\end{aligned}
\end{equation}
Such an identity is highly non-trivial because it expresses a power of $u_x$ as a local divergence.
This yields a corresponding non-trivial integral relation
\begin{equation}
\int_{\Omega} \tfrac{1}{p+1} u_x^{p+1}\,dV
= \oint_{\partial\Omega} ( y(F^x-u_t)-X_1 )|_{\mathcal E} dy +( Y_1 - y F^y)|_{\mathcal E} dx
\end{equation}
where $\Omega\subseteq {\mathbb R}^2$ is a domain in the $(x,y)$-plane,
and $\partial\Omega$ is its boundary curve.
If $p$ is an odd integer,
then we conclude that the $L^{p+1}(\Omega)$ norm of $u_x$
is determined by the values of $u$ and derivatives of $u$ on the boundary.
In particular,
this holds for any initial data posed for the given PDE \eqref{2D.eqn}.
As a consequence,
for initial data on $\Omega={\mathbb R}^2$
with asymptotic decay conditions at $\partial\Omega=\lim_{r\to\infty} S^1(r)$,
the Cauchy problem for $u_t = F^x + \partial_x^{-1}(D_yF^y)$
cannot be well-posed when the initial data have
sufficiently rapid radial decrease such that
\begin{equation}
\| u_x \|_{L^{p+1}}<\infty
\text{ and }
\lim_{r\to\infty}\oint_{S^1(r)} \big( (X_1+r\,\partial_x^{-1}(D_yF^y)\sin\theta)\cos\theta +(Y_1-rF^y\sin\theta)\sin\theta \big)|_{\mathcal E} r\,d\theta =0
.
\end{equation}
In general,
the connection established by equation \eqref{integral.relation} relating
integral constraints \eqref{u0.finitedomain}--\eqref{u0.infinitedomain}
and conservation laws \eqref{dens.arbfunct}--\eqref{flux.arbfunct} involving an arbitrary function of $t$
provides a systematic way to detect and derive
integral constraints and the corresponding local differential identities.
\section{Examples}\label{sec:examples}
We will consider several examples of PDEs with a spatial divergence form
in one, two, and three spatial dimensions.
The examples come from applied mathematics and integrable system theory.
For each PDE,
we first present conservation laws that involve an arbitrary function $f(t)$.
These conservation laws are obtained by the standard multiplier method,
explained in \Ref{Olv-book,BCA-book,Anc-review}.
We then write down the topological charge(s) and discuss the applications
explained in section~\ref{sec:applications}.
\subsection{KdV equation}
The KdV equation $v_t + vv_x + v_{xxx}=0$,
where the coefficients have been scaled to $1$,
is a well-known integrable system that describes uni-directional shallow water waves.
It has a Lagrangian formulation in terms of a potential $u$
given by $v=u_x$:
\begin{equation}\label{pKdV}
u_{tx} + u_xu_{xx} + u_{xxxx}=0 .
\end{equation}
The potential has gauge freedom $u\to u+f(t)$ where $f(t)$ is arbitrary.
Equation \eqref{pKdV} matches the form of the general spatial divergence PDE \eqref{pde.divform}
for $n=1$ with
\begin{equation}
F=-(\tfrac{1}{2}u_x^2 +u_{xxx}) .
\end{equation}
Through Noether's theorem,
all local conservation laws of equation \eqref{pKdV} arise from multipliers $Q$,
which coincide with the characteristics of variational symmetries.
The gauge freedom in $u$ corresponds to a variational point symmetry
and hence a multiplier
\begin{equation}
Q=f(t) .
\end{equation}
The corresponding conservation law is given by the conserved current
\begin{equation}
(T,\Phi) = (0, (u_t + \tfrac{1}{2}u_x^2 + u_{xxx})f(t) ) .
\end{equation}
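This conservation law can be verified from the characteristic identity $D_t T + D_x\Phi = Q\,G$, where $G$ denotes the left-hand side of equation \eqref{pKdV}. A minimal symbolic sketch, assuming Python's sympy is available (symbol names are illustrative):

```python
import sympy as sp

t, x = sp.symbols('t x')
u = sp.Function('u')(t, x)
f = sp.Function('f')(t)

# potential KdV (pKdV): G = u_tx + u_x*u_xx + u_xxxx
G = sp.diff(u, t, x) + sp.diff(u, x)*sp.diff(u, x, 2) + sp.diff(u, x, 4)

# conserved current (T, Phi) for the multiplier Q = f(t)
T = sp.Integer(0)
Phi = f*(sp.diff(u, t) + sp.Rational(1, 2)*sp.diff(u, x)**2 + sp.diff(u, x, 3))

# characteristic form of the conservation law: D_t T + D_x Phi = Q*G
residual = sp.simplify(sp.diff(T, t) + sp.diff(Phi, x) - f*G)
assert residual == 0
```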
This represents a spatial-flux conservation law.
Its global form on any connected interval $[a,b]$ consists of
$f(t)(u_t + \tfrac{1}{2}u_x^2 + u_{xxx})|^{b}_{a} =0$
for all KdV solutions $u(t,x)$.
The overall factor $f(t)$ can be dropped without loss of generality,
yielding the global conservation law
\begin{equation}
(u_t + \tfrac{1}{2}u_x^2 + u_{xxx})|^{b}_{a} =0
\end{equation}
holding on the KdV solution space.
Since $a,b$ can be chosen freely,
this global conservation law can be expressed as
$u_t =h(t) - \tfrac{1}{2}u_x^2 -u_{xxx}$,
with $h(t)$ describing an $x$-independent source/sink term
as explained in section~\ref{sec:topologicalcharge}.
In particular, since $u$ describes a mass density,
$h(t)>0$ will drive an increase in mass and $h(t)<0$ will drive a decrease in mass.
\subsection{KP equation}
The KP equation,
given by \cite{KadPet}
\begin{equation}\label{KP}
u_{tx} + (uu_{x} + u_{xxx})_x + \sigma u_{yy}=0,
\quad
\sigma=\pm 1
\end{equation}
up to a scaling of the coefficients,
is an integrable generalization of the KdV equation in two spatial dimensions.
It describes weakly-transverse shallow water waves when $\sigma=+1$,
and weakly-transverse thin film waves when $\sigma=-1$,
where the transverse direction is orthogonal to the main propagation direction $x$.
This equation \eqref{KP} matches the form of the general spatial divergence PDE \eqref{pde.divform}
for $n=2$ with
\begin{equation}
F^x=-(uu_x +u_{xxx}) = -(\tfrac{1}{2}u^2 + u_{xx})_x ,
\quad
F^y = -\sigma u_y = -(\sigma u)_y .
\end{equation}
Note that both components $(F^x,F^y)$ are themselves a spatial divergence.
Equation \eqref{KP} admits the two multipliers
\begin{equation}\label{KP.Q1}
f(t),
\quad
y f(t) ,
\end{equation}
as expected from the discussion in section~\ref{sec:intro}.
A direct computation of all multipliers
that have the form $Q(t,x,y,u,u_t,u_x,u_y)$
and that involve an arbitrary function $f(t)$
yields two additional multipliers:
\begin{equation}\label{KP.Q2}
x f(t) -\tfrac{1}{2} \sigma y^2 f'(t),
\quad
xy f(t) -\tfrac{1}{6} \sigma y^3 f'(t) .
\end{equation}
The existence of these multipliers is due to the second-order divergence form of the spatial terms in equation \eqref{KP}.
These four multipliers \eqref{KP.Q1}--\eqref{KP.Q2}
correspond to all variational point symmetries
admitted by the Lagrangian form of the KP equation
as shown by the results in \Ref{AncGanRec2018}.
The corresponding conservation laws of the KP equation,
which involve $f(t)$ and $f'(t)$,
are given by the following conserved currents $(T,\Phi^x,\Phi^y)$:
\begin{align}
& f(t)( 0, u_{t} +uu_{x} + u_{xxx}, \sigma u_{y} ) ;
\label{KP.conslaw1}
\\
& f(t)( 0, y(u_{t} +uu_{x} + u_{xxx}), \sigma (yu_{y}-u) ) ;
\label{KP.conslaw2}
\\
&\begin{aligned}
& f(t)\big( u , \tfrac{1}{2} u^2 +u_{xx}-x(u_{t}+uu_{x}+u_{xxx}), -\sigma x u_{y} \big)
\\&\quad
+ f'(t)\big( 0,
\tfrac{1}{2} \sigma y^2( u_{t}+ u u_{x} +u_{xxx} ) ,
{-y} u + \tfrac{1}{2} y^2 u_{y}
\big) ;
\end{aligned}
\label{KP.conslaw3}
\\
&\begin{aligned}
& f(t)\big( yu, y(\tfrac{1}{2} u^2 +u_{xx})-xy(u_{t}+uu_{x} +u_{xxx}), \sigma x (u - yu_{y}) \big)
\\&\quad
+ f'(t)\big( 0,
\tfrac{1}{6} \sigma y^3 ( u_{t}+ u u_{x} + u_{xxx}),
{-\tfrac{1}{2} y^2} u +\tfrac{1}{6} y^3 u_{y}
\big) .
\end{aligned}
\label{KP.conslaw4}
\end{align}
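The first two currents can be checked against the characteristic identities $D_x\Phi^x + D_y\Phi^y = Q\,G$ with $Q=f(t)$ and $Q=yf(t)$, where $G$ is the left-hand side of equation \eqref{KP}. A symbolic sketch, assuming sympy:

```python
import sympy as sp

t, x, y, sigma = sp.symbols('t x y sigma')
u = sp.Function('u')(t, x, y)
f = sp.Function('f')(t)

# KP equation (KP): G = u_tx + (u u_x + u_xxx)_x + sigma*u_yy
G = sp.diff(u, t, x) + sp.diff(u*sp.diff(u, x) + sp.diff(u, x, 3), x) + sigma*sp.diff(u, y, 2)

# conserved current (KP.conslaw1), multiplier Q = f(t)
Px1 = f*(sp.diff(u, t) + u*sp.diff(u, x) + sp.diff(u, x, 3))
Py1 = f*sigma*sp.diff(u, y)
r1 = sp.simplify(sp.diff(Px1, x) + sp.diff(Py1, y) - f*G)

# conserved current (KP.conslaw2), multiplier Q = y*f(t)
Px2 = y*Px1
Py2 = f*sigma*(y*sp.diff(u, y) - u)
r2 = sp.simplify(sp.diff(Px2, x) + sp.diff(Py2, y) - y*f*G)

assert r1 == 0 and r2 == 0
```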
The first two conserved currents \eqref{KP.conslaw1} and \eqref{KP.conslaw2}
represent spatial-flux conservation laws.
In global form \eqref{2D.charge.conslaw},
they yield the topological charges
$\oint_{C} F^y\,dx + (u_t - F^x)\,dy=0$
and
$\oint_{C} (yF^y+\sigma u)\,dx + y(u_t - F^x)\,dy=0$
holding for KP solutions $u(t,x,y)$,
where $C$ is any closed curve in the $(x,y)$-plane.
Alternatively, they can be expressed as balance equations
\begin{align}
\frac{d}{dt} \oint_{C} u \,dy & = \oint_{C} \sigma u_y \,dx - (uu_x + u_{xxx})\,dy ,
\label{KP.charge1}
\\
\frac{d}{dt} \oint_{C} y u\,dy & = \oint_{C} \sigma (yu_y -u)\,dx - y(uu_x + u_{xxx})\,dy ,
\label{KP.charge2}
\end{align}
for the circulation of the transverse mass transport vector $(0,u)$ and its $y$-moment $(0,y u)$
around closed curves $C$.
Note these vectors lie in the $y$ direction.
If we take $f(t)=1$,
then the conservation laws given by the other two conserved currents \eqref{KP.conslaw3} and \eqref{KP.conslaw4}
describe balance equations for the rate of change of
the mass $\int_{{\rm int}(C)} u\,dxdy$
and for the rate of change of
the $y$-moment of the mass $\int_{{\rm int}(C)} yu\,dxdy$,
holding for KP solutions $u(t,x,y)$,
where ${\rm int}(C)$ is the interior of the closed curve $C$.
The divergence relation \eqref{dens.div.id} for each conservation law
yields the identities
\begin{align}
& u =
-( \tfrac{1}{2} \sigma y^2(u_{t}+u u_{x} +u_{xxx}) )_x
+ (yu-\tfrac{1}{2}y^2u_{y})_y
+ \tfrac{1}{2} \sigma y^2 G ,
\label{KP.id1}
\\
& yu =
-( \tfrac{1}{6} \sigma y^3(u_{t}+ u u_{x} + u_{xxx}) )_x
+ (\tfrac{1}{2}y^2u -\tfrac{1}{6}y^3u_{y})_y
+ \tfrac{1}{6} \sigma y^3 G ,
\label{KP.id2}
\end{align}
where $G = u_{tx} + (uu_{x} + u_{xxx})_x + \sigma u_{yy}$.
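The divergence structure underlying the first of these identities can be confirmed symbolically: the density $u$ in \eqref{KP.conslaw3} differs from a spatial divergence, built out of the $f'(t)$-coefficients of its fluxes, by the term $\tfrac{1}{2}\sigma y^2 G$. A sympy sketch checking this identically in jet space, for each value $\sigma=\pm1$:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
u = sp.Function('u')(t, x, y)

residuals = []
for sigma in (1, -1):
    # KP equation (KP)
    G = (sp.diff(u, t, x) + sp.diff(u*sp.diff(u, x) + sp.diff(u, x, 3), x)
         + sigma*sp.diff(u, y, 2))
    # f'(t)-coefficients of the fluxes in (KP.conslaw3)
    Px1 = sp.Rational(1, 2)*sigma*y**2*(sp.diff(u, t) + u*sp.diff(u, x) + sp.diff(u, x, 3))
    Py1 = -y*u + sp.Rational(1, 2)*y**2*sp.diff(u, y)
    # u plus the spatial divergence of (Px1, Py1) equals (1/2)*sigma*y^2*G
    residuals.append(sp.simplify(u + sp.diff(Px1, x) + sp.diff(Py1, y)
                                 - sp.Rational(1, 2)*sigma*y**2*G))
assert all(r == 0 for r in residuals)
```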
When the Cauchy problem for the KP equation in the form
$u_{t} + (uu_{x} + u_{xxx}) + \sigma \partial_x^{-1} u_{yy}=0$
is considered on ${\mathbb R}^2$,
these two identities \eqref{KP.id1} and \eqref{KP.id2} imply that
initial data $u|_{t=0}=u_0(x,y)$ must satisfy the integral constraints \cite{MolSauTzv}
$\int_{{\mathbb R}^2} u_0\,dxdy = 0$ and $\int_{{\mathbb R}^2} yu_0\,dxdy =0$.
The origin of these integral constraints from conservation laws has not been previously noticed.
Moreover, due to the identities \eqref{KP.id1} and \eqref{KP.id2},
the conserved currents \eqref{KP.conslaw3} and \eqref{KP.conslaw4}
are locally equivalent to the following respective spatial-flux currents
\begin{align}
&\begin{aligned}
\big( 0,&\,
\tfrac{1}{2} u^2 +u_{xx} -x(u_t + uu_{x}+u_{xxx}) -\tfrac{1}{2} \sigma y^2(u_{tt} + u_t u_x + u u_{tx} + u_{txxx}), \\&\quad
yu_t -\sigma x u_{y} -\tfrac{1}{2}y^2 u_{ty}
\big) ,
\end{aligned}
\label{KP.current3}
\\
&\begin{aligned}
\big( 0,&\,
y(\tfrac{1}{2} u^2 +u_{xx}) -xy(u_{t}+uu_{x}+u_{xxx}) -\tfrac{1}{6} \sigma y^3(u_{tt} + u_t u_x + u u_{tx} + u_{txxx}), \\&\quad
\sigma x (u-yu_{y}) +\tfrac{1}{2}y^2 u_{t} -\tfrac{1}{6}y^3 u_{ty}
\big) ,
\end{aligned}
\label{KP.current4}
\end{align}
given by Theorem~\ref{thm:main},
where, without loss of generality, we have dropped an overall factor $f(t)$.
Their global form \eqref{2D.charge.conslaw}
yields two non-trivial topological charges
\begin{align}
&\begin{aligned}
\oint_{C} & (-\tfrac{1}{2} u^2 - u_{xx} + x(u_t + uu_x + u_{xxx} ) + \tfrac{1}{2} \sigma y^2 (u_{tt} + u_t u_x + u u_{tx} + u_{txxx} ))\, dy
\\&\qquad
+ (yu_t -\sigma xu_y -\tfrac{1}{2} y^2 u_{ty})\, dx = 0 ,
\end{aligned}
\label{KP.charge3}
\\
&\begin{aligned}
\oint_{C} & (-y(\tfrac{1}{2} u^2 +u_{xx}) +xy(u_{t}+uu_{x}+u_{xxx}) +\tfrac{1}{6} \sigma y^3(u_{tt} + u_t u_x + u u_{tx} + u_{txxx}) )\,dy
\\&\qquad
+ (\sigma x (u-yu_{y}) +\tfrac{1}{2}y^2 u_{t} -\tfrac{1}{6}y^3 u_{ty} )\,dx =0 ,
\end{aligned}
\label{KP.charge4}
\end{align}
which are independent of the closed curve $C$,
for KP solutions $u(t,x,y)$.
Finally,
the spatial potential systems constructed from
the four spatial-flux conserved currents \eqref{KP.conslaw1}, \eqref{KP.conslaw2}, \eqref{KP.current3}, \eqref{KP.current4}
consist of pairs of equations for $(u,w)$:
\begin{align}
&
u_t + uu_x + u_{xxx} = w_y ,
\quad
u_y =-\tfrac{1}{\sigma} w_x ;
\label{KP.potsys1}
\\
&
y(u_t + uu_x + u_{xxx}) = w_y ,
\quad
yu_y-u =-\tfrac{1}{\sigma} w_x;
\label{KP.potsys2}
\\
&
\begin{aligned}
& \tfrac{1}{2}u^2 +u_{xx} - x(u_t + uu_x + u_{xxx} ) -\tfrac{1}{2} \sigma y^2 (u_{tt} + u_t u_x + u u_{tx} + u_{txxx} ) = w_y,
\\&
yu_t -\sigma xu_y -\tfrac{1}{2} y^2 u_{ty} = -w_x ;
\end{aligned}
\label{KP.potsys3}
\\
&
\begin{aligned}
& y(\tfrac{1}{2} u^2 +u_{xx}) -xy(u_{t}+uu_{x}+u_{xxx}) -\tfrac{1}{6} \sigma y^3(u_{tt} + u_t u_x + u u_{tx} + u_{txxx}) = w_y,
\\&
\sigma x (u-yu_{y}) +\tfrac{1}{2}y^2 u_{t} -\tfrac{1}{6}y^3 u_{ty} =-w_x .
\end{aligned}
\label{KP.potsys4}
\end{align}
Each system is equivalent (modulo gauge freedom \eqref{2D.gaugefreedom})
to the KP equation.
\subsection{Universal modified KP equation}
A general modified version of the KP equation is given by
\begin{equation}\label{universalmKP}
(v_t -v^2 v_x + \alpha v_x\partial_x^{-1}v_y + \beta vv_y+ v_{xxx})_x +\sigma v_{yy}=0
\end{equation}
up to scaling of the coefficients, where $\sigma^2= 1$.
This equation has recently been derived \cite{RatBri} as the governing equation of
phase modulations of travelling waves in general nonlinear systems
in 2+1 dimensions.
The case $\alpha^2=2$, $\beta=0$, $\sigma =1$ is called the mKP equation,
which is known to be integrable \cite{KonDub1984}.
In the general case,
equation \eqref{universalmKP} can be written as a PDE
in terms of a potential $u$ via $v=u_x$,
which yields
$(u_{tx} - u_x^2 u_{xx} + \alpha u_y u_{xx} + \beta u_x u_{xy} + u_{xxxx})_x +\sigma u_{xyy}=0$.
By integrating this PDE with respect to $x$, and dropping the integration constant (a function of $t,y$), we obtain the equation
\begin{equation}\label{mKP}
u_{tx} - u_x^2 u_{xx} + \alpha u_y u_{xx} + \beta u_x u_{xy} + u_{xxxx} +\sigma u_{yy}=0,
\quad
\sigma^2= 1
\end{equation}
which we will refer to as the universal mKP (\emph{umKP}) equation.
Note that it matches the form of the general spatial divergence PDE \eqref{pde.divform}
for $n=2$ with
\begin{equation}
F^x=\tfrac{1}{3} u_x^3 -\alpha u_y u_x -u_{xxx},
\quad
F^y = \tfrac{1}{2} (\alpha - \beta) u_x^2 -\sigma u_y .
\end{equation}
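This spatial divergence form can be verified directly; a sympy sketch, with the umKP equation written out explicitly:

```python
import sympy as sp

t, x, y, alpha, beta, sigma = sp.symbols('t x y alpha beta sigma')
u = sp.Function('u')(t, x, y)
ux, uy = sp.diff(u, x), sp.diff(u, y)

Fx = sp.Rational(1, 3)*ux**3 - alpha*uy*ux - sp.diff(u, x, 3)
Fy = sp.Rational(1, 2)*(alpha - beta)*ux**2 - sigma*uy

# umKP equation (mKP), written out
umKP = (sp.diff(u, t, x) - ux**2*sp.diff(u, x, 2) + alpha*uy*sp.diff(u, x, 2)
        + beta*ux*sp.diff(u, x, y) + sp.diff(u, x, 4) + sigma*sp.diff(u, y, 2))

# spatial divergence form: u_tx - D_x F^x - D_y F^y reproduces the umKP equation
residual = sp.simplify(umKP - (sp.diff(u, t, x) - sp.diff(Fx, x) - sp.diff(Fy, y)))
assert residual == 0
```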
The umKP equation \eqref{mKP} admits the multiplier $f(t)$
given by an arbitrary function of $t$.
Similarly to the KP equation,
another multiplier is $yf(t)$ when $F^y$ is a $y$-derivative,
namely in the case $\alpha=\beta$.
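This multiplier condition can be tested with the Euler operator: $Q$ is a multiplier if and only if $E_u(QG)=0$. In the sympy sketch below, `euler_op` is a hand-rolled variational derivative (not a library routine), and the check confirms that $Q=yf(t)$ works precisely when $\alpha=\beta$:

```python
import sympy as sp
from itertools import combinations_with_replacement

t, x, y, alpha, beta, sigma = sp.symbols('t x y alpha beta sigma')
u = sp.Function('u')(t, x, y)
f = sp.Function('f')(t)

def euler_op(L, w, xs, order=4):
    """Euler operator E_w(L) = sum_J (-D)_J (dL/dw_J), up to the given order."""
    res = sp.diff(L, w)
    for k in range(1, order + 1):
        for combo in combinations_with_replacement(xs, k):
            res += (-1)**k * sp.diff(L.diff(sp.Derivative(w, *combo)), *combo)
    return sp.simplify(res)

ux, uy = sp.diff(u, x), sp.diff(u, y)
# umKP equation (mKP)
G = (sp.diff(u, t, x) - ux**2*sp.diff(u, x, 2) + alpha*uy*sp.diff(u, x, 2)
     + beta*ux*sp.diff(u, x, y) + sp.diff(u, x, 4) + sigma*sp.diff(u, y, 2))

# Q = y f(t) is a multiplier iff E_u(Q*G) = 0, which holds precisely when alpha = beta
r = euler_op(sp.expand(y*f*G), u, (t, x, y))
assert sp.simplify(r.subs(beta, alpha)) == 0
assert sp.simplify(r.subs({alpha: 2, beta: 0})) != 0
```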
From a recent classification result in \Ref{AncGanRec2020}
for conservation laws of a $p$-power generalization of equation \eqref{mKP},
we obtain four additional multipliers:
\begin{equation}
(\alpha -\beta) u_{x} f(t) + y f'(t) ;
\label{mKP.Q1}
\end{equation}
\begin{equation}
2u_y f(t) -(\tfrac{3}{4} \alpha x +\tfrac{1}{2} y u_x) f'(t) +\tfrac{3}{8} \alpha y^2 f''(t)
\label{mKP.Q2}
\end{equation}
in the case $\alpha^2=\tfrac{2}{3}$, $\beta=2\alpha$, $\sigma=1$;
and
\begin{gather}
(x - \alpha yu_{x}) f(t)-\tfrac{1}{2} y^2 f'(t) ,
\label{mKP.Q3}\\
(\tfrac{4}{3} \alpha u_{x} u_{y} + \tfrac{2}{3} u_{t} -\tfrac{8}{9} u_{x}^3 + \tfrac{8}{3} u_{xxx}) f(t)
-\tfrac{2}{3}\alpha x u_{x} f'(t)
+\tfrac{1}{3}( y^2 -\alpha x y ) f''(t)
+\tfrac{1}{18} y^3 f'''(t) ,
\label{mKP.Q4}
\end{gather}
in the integrable case $\alpha^2=2$, $\beta=0$, $\sigma=1$.
Each multiplier yields a conservation law of the umKP equation,
involving $f(t)$ and its derivatives.
We will omit the lengthy expressions for the fluxes
and present only the conserved densities:
\begin{gather}
\tfrac{1}{2} (\alpha-\beta) u_{x}^2 f(t) ;
\label{mKP.dens1}
\\
u_{x} u_{y} f(t) +\tfrac{1}{4} (3\alpha u -y u_{x}^2) f'(t) ;
\label{mKP.dens2}
\\
(u +\tfrac{1}{2} \alpha y u_{x}^2)f(t) ;
\label{mKP.dens3}
\\
( u_{xx}^2 +\tfrac{1}{6} (u_{x}^2 -\alpha u_{y})^2 ) f(t)
+\tfrac{1}{3} x u_{x}^2 f'(t)
-\tfrac{1}{6}( 2\alpha y u + y^2 u_{x}^2 ) f''(t) .
\label{mKP.dens4}
\end{gather}
If we take $f(t)=1$,
then the first density \eqref{mKP.dens1} yields, up to the constant factor $\tfrac{1}{2}(\alpha-\beta)$, the squared $L^2$ norm of $u_x$:
$\| u_x \|_{L^2(\Omega)}^2 = \int_\Omega u_x^2\,dx\,dy$,
on any spatial domain $\Omega\subseteq{\mathbb R}^2$.
The same conserved density is admitted by the KP equation
and physically describes an $x$-momentum quantity
where $u_x=v$ is the wave amplitude.
Likewise for $f(t)=1$,
the second density \eqref{mKP.dens2} yields a $y$-momentum quantity
while the third density \eqref{mKP.dens3} yields a conserved quantity
related to the $y$-moment of the $x$-momentum.
The fourth density \eqref{mKP.dens4} for $f(t)=1$ yields the conserved energy
$E[u] = \int_\Omega ( u_{xx}^2 +\tfrac{1}{6} (u_{x}^2 -\alpha u_{y})^2 )\,dx\,dy$,
which was first found in \Ref{KenMar}
in a study of the Cauchy problem for the mKP equation.
The divergence relation \eqref{dens.div.id} can be applied to each of these four conservation laws,
yielding the following remarkable identities
where $G=u_{tx} - u_x^2 u_{xx} + \alpha u_y u_{xx} + \beta u_x u_{xy} + u_{xxxx} +\sigma u_{yy}$:
\begin{align}
& u_{x}^2 =
\tfrac{2}{\alpha-\beta}\big( D_x X +D_y Y + y G \big),
\label{mKP.id1}\\
&
X=-y( u_{t} -\tfrac{1}{3} u_{x}^3 +\alpha u_{y}u_{x} +u_{xxx} ) ,
\quad
Y = \sigma u -y( \sigma u_{y} -\tfrac{1}{2}(\alpha-\beta) u_{x}^2 ) ;
\label{mKP.id1.flux}
\end{align}
and
\begin{align}
& u_{x}u_{y} = D_x X + D_y Y
- (\tfrac{1}{2} u_{x} y +\tfrac{3}{4} \alpha x) G
-\tfrac{1}{4\alpha} y^2 G_t ,
\label{mKP.id2}\\
&\begin{aligned}
X= &
{-\tfrac{3}{4}} \alpha u_{xx}
+x (
\tfrac{1}{2} u_{x} u_{y}
-\alpha ( \tfrac{1}{4} u_{x}^3 -\tfrac{3}{4}(u_{t}+u_{xxx}) )
)
\\&\quad
-y (
\tfrac{1}{8} u_{x}^4
+\tfrac{1}{4} u_{y}^2
+\tfrac{1}{4} u_{xx}^2
-\tfrac{1}{2} u_{x} u_{xxx}
-\tfrac{1}{4} \alpha u_{x}^2 u_{y}
)
\\&\quad
+y^2 (
\tfrac{1}{4}(u_{y} u_{tx}+u_{x} u_{ty})
+\alpha \tfrac{3}{8}(u_{tt} -u_{x}^2 u_{tx} +u_{txxx})
)
,
\\
Y = &
x (
\tfrac{3}{4} \alpha u_{y}+\tfrac{1}{4} u_{x}^2
)
+ y (
-\alpha(\tfrac{3}{4} u_{t} - \tfrac{1}{4} u_{x}^3)
+\tfrac{1}{2} u_{x} u_{y}
)
+y^2 (
\tfrac{1}{4} u_{x} u_{tx}+\tfrac{3}{8} \alpha u_{ty}
)
,
\end{aligned}
\label{mKP.id2.flux}
\end{align}
in the case $\alpha^2=\tfrac{2}{3}$, $\beta=2\alpha$, $\sigma=1$;
and also
\begin{align}
& y u_{x}^2 + \alpha u = D_x X + D_y Y -\tfrac{1}{2}\alpha y^2 G ,
\label{mKP.id3}\\
& X =
-y^2 (u_{y} u_{x} + \tfrac{1}{2}\alpha (u_{t} +\tfrac{1}{3} u_{x}^3 -u_{xxx})
)
,
\quad
Y =
\alpha y u - \tfrac{1}{2} y^2(\alpha u_{y} -u_{x}^2)
,
\label{mKP.id3.flux}
\end{align}
\begin{flalign}
& \begin{aligned}
& u_{xx}^2 +\tfrac{1}{6} (\alpha u_{y} -u_{x}^2)^2
\\& =
D_x X + D_y Y
+\tfrac{1}{3} \alpha (2 x u_{x} + y^2 u_{tx}) G
-\tfrac{1}{3}(2\alpha x y - y^2 u_{x}) G_{t}
-\tfrac{1}{9}\alpha y^3 G_{tt} ,
\end{aligned}
\label{mKP.id4}\\
& \begin{aligned}
X = &
\tfrac{2}{3} u_{x} u_{xx}
-\tfrac{1}{3}\alpha y u_{txx}
+ x (
\tfrac{1}{6} u_{x}^4
+\tfrac{1}{3} u_{y}^2
-\tfrac{1}{3}\alpha u_{x}^2 u_{y}
+\tfrac{1}{3} u_{xx}^2
-\tfrac{2}{3} u_{x} u_{xxx}
)
\\&\quad
+ x y (
\tfrac{2}{3}( u_{x} u_{ty}+ u_{y}u_{tx} )
+\tfrac{1}{3}(u_{tt} - u_{x}^2 u_{tx} + u_{txxx})
)
\\&\quad
+y^2 (
\tfrac{1}{3}(
u_{y}u_{ty}
+ u_{x}^3 u_{tx}
+ u_{xx} u_{txx}
- u_{tx} u_{xxx}
- u_{x} u_{txxx}
)
-\tfrac{1}{6}\alpha ( 2u_{x} u_{y} u_{tx} + u_{x}^2u_{ty} )
)
\\&\quad
+ y^3( \tfrac{1}{9}( u_{y} u_{ttx} + u_{x} u_{tty} -2 \alpha u_{tx} u_{ty} )
-\tfrac{1}{18}\alpha ( 2 u_{x} u_{tx}^2 + u_{x}^2 u_{ttx} -u_{ttt} -u_{ttxxx} )
) ,
\\
Y = &
-x (
\alpha (\tfrac{1}{3} u_{t} -\tfrac{1}{9} u_{x}^3)
+\tfrac{2}{3} u_{x} u_{y}
)
+\tfrac{1}{3} x y (
\alpha u_{ty}
-2 u_{x} u_{tx}
)
\\&\quad
-\tfrac{1}{6} y^2 (
\alpha (u_{tt} -u_{x}^2 u_{tx})
+2( u_{y}u_{tx} + u_{x} u_{ty} )
)
-\tfrac{1}{18} y^3 (
2u_{tx}^2
+2u_{x} u_{ttx}
-\alpha u_{tty}
)
,
\end{aligned}
\label{mKP.id4.flux}
\end{flalign}
in the integrable case $\alpha^2=2$, $\beta=0$, $\sigma=1$.
For umKP solutions $u(t,x,y)$,
the first identity \eqref{mKP.id1} implies that the $L^2$ norm of $u_x$
can be expressed as a line integral around the boundary of the spatial domain,
while the fourth identity \eqref{mKP.id4} implies that the energy norm of $u$
can be expressed in a similar form.
Consequently,
if we consider the Cauchy problem for the umKP equation
\begin{equation}\label{mKP.ut.eqn}
u_{t} = \tfrac{1}{3}u_x^3 - u_{xxx} -\alpha u_y u_{x} -\beta u_x u_{xy} +\partial_x^{-1}(\tfrac{1}{2}(\alpha-\beta) u_x^2 -\sigma u_{y})_y
\end{equation}
on ${\mathbb R}^2$,
then it cannot be well-posed in the following two situations:
\begin{equation}
\| u_x \|_{L^2({\mathbb R}^2)} <\infty
\quad\text{ and }\quad
\lim_{r\to\infty}\oint_{S^1(r)} ( Y_{L^2} \sin\theta + X_{L^2} \cos\theta )|_{\mathcal E} r\,d\theta =0
\end{equation}
where $(X_{L^2},Y_{L^2})$ is given by expressions \eqref{mKP.id1.flux};
or
\begin{equation}
\alpha^2=2,
\
\beta=0,
\
\sigma=1,
\
E[u] <\infty
\ \text{ and }
\lim_{r\to\infty}\oint_{S^1(r)} ( Y_{E} \sin\theta + X_{E} \cos\theta )|_{\mathcal E} r\,d\theta =0
\end{equation}
where $(X_{E},Y_{E})$ is given by expressions \eqref{mKP.id4.flux}.
In both line integrals,
$u_t$ is eliminated through equation \eqref{mKP.ut.eqn},
so that the integrand depends on only $u$ and its $x,y$-derivatives.
The conservation laws corresponding to the four conserved densities \eqref{mKP.dens1}--\eqref{mKP.dens4}
are each locally equivalent to a spatial flux conservation law by Theorem~\ref{thm:main},
which yields four conserved non-trivial topological charges for the umKP equation.
In particular, through the divergence relation \eqref{mKP.id4},
the energy quantity can be expressed as a topological charge:
\begin{equation}
E[u] = \oint_{\partial\Omega} (X_{E}\, dy - Y_{E}\, dx )|_{\mathcal E}
\end{equation}
where $\partial\Omega$ is the boundary of the spatial domain $\Omega$.
This remarkable integral relation answers
a question raised in \Ref{KenMar}
as to why the conserved energy could not be used to obtain
a global existence result for solutions of the Cauchy problem for the integrable mKP equation.
Finally, each topological charge
gives rise to an associated spatial potential system for the umKP equation,
in the same fashion as for the KP equation.
\subsection{Shear-wave equations}
The propagation of shear waves in incompressible nonlinear solids
can be modelled by the equation
\begin{equation}\label{ZZKeqn}
(u_{t} + (\alpha u +\beta u^2)u_{x})_x + \Delta_\perp u=0
\end{equation}
up to a scaling of the coefficients.
Here $\Delta_\perp = \partial_y^2 +\partial_z^2$ is the transverse Laplacian.
For $\alpha=0$,
equation \eqref{ZZKeqn} describes nonlinear, linearly-polarized shear waves \cite{Zab},
while for $\beta=0$,
it describes weakly nonlinear, weakly diffracting shear waves \cite{DesLaiOriSac,ZabKho}.
In the latter case, the equation is known as the Khokhlov--Zabolotskaya equation,
and its local conservation laws have been found in \Ref{Sha}.
This equation has the form of a dispersionless Gardner equation extended to three spatial dimensions.
It matches the form of the general spatial divergence PDE \eqref{pde.divform}
for $n=3$ with
\begin{equation}
F^x=-(\alpha u +\beta u^2)u_{x} = -(\tfrac{1}{2}\alpha u^2 +\tfrac{1}{3}\beta u^3)_x ,
\quad
F^y = -u_y ,
\quad
F^z = -u_z ,
\end{equation}
where all components $(F^x,F^y,F^z)$ are themselves a spatial divergence.
Equation \eqref{ZZKeqn} admits the multipliers $f(t)$, $yf(t)$, $zf(t)$,
in accordance with the discussion in section~\ref{sec:intro}.
In fact, these multipliers are special cases of a more general multiplier
\begin{equation}\label{ZZK.Q}
f_t(t,y,z) -x \Delta_\perp f(t,y,z),
\quad
\Delta_\perp^2 f(t,y,z) =0 ,
\end{equation}
which is obtained by a direct computation of all multipliers
that have the form $Q(t,x,y,z,u,u_t,u_x,u_y,u_z)$
with arbitrary dependence on $t$.
Note that if $f(t,y,z)$ is assumed to be analytic in $t$
then without loss of generality
we can put $f(t,y,z)=\tilde f(t) \phi(y,z)$
where $\phi$ satisfies the biharmonic equation $\Delta_\perp^2\phi=0$,
and where $\tilde f(t)$ is an arbitrary function of $t$.
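The multiplier condition $E_u(QG)=0$ can be spot-checked for a concrete biharmonic weight. In the sketch below, $\phi=y^3$ is an illustrative choice (so that $\Delta_\perp\phi=6y$ and $\Delta_\perp^2\phi=0$), and `euler_op` is a hand-rolled Euler operator:

```python
import sympy as sp
from itertools import combinations_with_replacement

t, x, y, z, alpha, beta = sp.symbols('t x y z alpha beta')
u = sp.Function('u')(t, x, y, z)
g = sp.Function('g')(t)

def euler_op(L, w, xs, order=3):
    """Euler operator (variational derivative) with respect to w, up to the given order."""
    res = sp.diff(L, w)
    for k in range(1, order + 1):
        for combo in combinations_with_replacement(xs, k):
            res += (-1)**k * sp.diff(L.diff(sp.Derivative(w, *combo)), *combo)
    return sp.simplify(res)

# shear-wave equation (ZZKeqn)
G = (sp.diff(sp.diff(u, t) + (alpha*u + beta*u**2)*sp.diff(u, x), x)
     + sp.diff(u, y, 2) + sp.diff(u, z, 2))

# illustrative biharmonic weight: phi = y**3, so Delta_perp(phi) = 6*y, Delta_perp^2(phi) = 0
phi = y**3
f = g*phi
Q = sp.diff(f, t) - x*(sp.diff(f, y, 2) + sp.diff(f, z, 2))

# multiplier condition: E_u(Q*G) = 0
r = euler_op(sp.expand(Q*G), u, (t, x, y, z))
assert r == 0
```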
The conservation law arising from this multiplier \eqref{ZZK.Q}
is given by the following conserved current $(T,\Phi^x,\Phi^y,\Phi^z)$:
\begin{equation}\label{ZZK.current}
\begin{aligned}
\big( &
(\Delta_\perp u) f ,
-(x \Delta_\perp u_t) f
+( \tfrac{1}{2}\alpha u^2 +\tfrac{1}{3}\beta u^3 -x(\alpha u +\beta u^2) u_x )\Delta_\perp f
+(u_t + (\alpha u +\beta u^2)u_x)f_t
,
\\&
x( u_{txy} f -u_{tx} f_y -u_y \Delta_\perp f + u \Delta_\perp f_y ),
x( u_{txz} f -u_{tx} f_z -u_z \Delta_\perp f + u \Delta_\perp f_z )
\big) .
\end{aligned}
\end{equation}
For $f=\phi$,
this conserved current describes a balance equation
for the rate of change of the weighted mass
$\int_{{\rm int}(S)} \phi \Delta_\perp u \,dxdydz$,
holding for solutions $u(t,x,y,z)$ of equation \eqref{ZZKeqn},
where ${\rm int}(S)$ is the interior of any volume bounded by a closed surface $S$
in $(x,y,z)$-space.
The divergence relation \eqref{dens.div.id} applied to the conserved current \eqref{ZZK.current}
yields the identity
\begin{equation}\label{ZZK.id}
\phi \Delta_\perp u = -D_x\big( (u_t + (\alpha u +\beta u^2)u_x)\phi \big) +\phi G ,
\end{equation}
where $G= (u_{t} + (\alpha u +\beta u^2)u_{x})_x + \Delta_\perp u$.
Consequently, the conserved current is locally equivalent to
a spatial flux conservation law.
The resulting topological charge is given by
\begin{equation}\label{ZZK.charge}
\oint_{S} (u_t+ (\alpha u +\beta u^2)u_x)\phi \,dydz =0 .
\end{equation}
It can be expressed as a balance equation
\begin{equation}
\frac{d}{dt}\oint_{S} u \phi \,dydz = -\oint_{S} (\alpha u +\beta u^2)u_x \phi \,dydz
\end{equation}
for the weighted flux of the mass transport vector $(u,0,0)$
through closed surfaces $S$,
with weighting factor $\phi$.
In particular, if $S$ is taken to comprise the planar surfaces bounding a rectangular volume with infinite extent in the $y$ and $z$ directions,
then $\oint_{S} u \,dydz$ is the net flux through the two transverse surfaces,
say $x=a$ and $x=b$, for solutions $u(t,x,y,z)$ with sufficient transverse decay.
The spatial potential system arising from the topological charge \eqref{ZZK.charge}
consists of the following system of equations for $(u,w^x,w^y,w^z)$:
\begin{equation}
\begin{aligned}
& ( \tfrac{1}{2}\alpha u^2 +\tfrac{1}{3}\beta u^3 -x(\alpha u +\beta u^2) u_x )\Delta_\perp\phi
-( u_{tt} + (\alpha +2\beta u)u_{t}u_{x} + (\alpha u +\beta u^2) u_{tx} +x\Delta_\perp u_t )\phi
\\&
=(w^z)_y - (w^y)_z,
\\&
x( u_{txy} \phi -u_y \Delta_\perp\phi -u_{tx} \phi_y + u \Delta_\perp\phi_y )
= (w^x)_z - (w^z)_x,
\\&
x( u_{txz} \phi -u_{tx} \phi_z -u_z \Delta_\perp\phi + u \Delta_\perp \phi_z ) = (w^y)_x - (w^x)_y,
\end{aligned}
\end{equation}
where $\phi$ is an arbitrary biharmonic function.
This system is equivalent (modulo gauge freedom \eqref{3D.gaugefreedom})
to equation \eqref{ZZKeqn}.
Finally,
when the Cauchy problem is considered for equation \eqref{ZZKeqn} in the form
$u_{t} + (\alpha u +\beta u^2)u_{x} + \partial_x^{-1} \Delta_\perp u=0$
on ${\mathbb R}^3$,
the identity \eqref{ZZK.id} implies that
initial data $u|_{t=0}=u_0(x,y,z)$ must satisfy the integral constraint
$\int_{{\mathbb R}^3} \phi \Delta_\perp u_0 \,dxdydz =0$
for existence of solutions $u(t,x,y,z)$ with sufficiently strong spatial decay.
Similarly,
if Dirichlet boundary conditions on $u$ are imposed at $x=a$ and $x=b$,
then the identity \eqref{ZZK.id} imposes an integral constraint
$\int_{V} \phi \Delta_\perp u\,dx\,dy\,dz=0$
on solutions of the corresponding boundary-value problem
posed in the infinite rectangular volume $V$ enclosed by the surfaces $x=a$ and $x=b$.
\subsection{Novikov--Veselov equation}
The Novikov--Veselov (NV) equation \cite{VesNov,NovVes}
\begin{equation}\label{NV.isoflow}
v_t +\alpha (v\partial_y^{-1}v_x)_x +\beta (v\partial_x^{-1}v_y)_y + v_{xxx} + v_{yyy} =0
\end{equation}
generates isospectral flows for the two-dimensional Schr\"odinger operator
at zero energy.
It generalizes the KdV equation in the sense that
any NV solution $v(t,x,y)$ produces a KdV solution given by $v(t,x,x)$
up to a scaling of $t,x,v$.
Equation \eqref{NV.isoflow} can be written as a PDE
by introducing a potential $u$
given by $v=u_{xy}$, which satisfies
\begin{equation}\label{NVeqn}
u_{txy} +\alpha (u_{xy}u_{xx})_x +\beta (u_{xy} u_{yy})_y + u_{xxxxy} + u_{xyyyy} =0 .
\end{equation}
Special cases of this PDE are $\alpha=0$ and $\beta=0$.
In these cases, the PDE can be expressed in a lower-order form
through $\tilde u=u_y$ or $\tilde u=u_x$, which respectively yield
\begin{align}
\tilde u_{tx} + \beta (\tilde u_{x} \tilde u_{y})_y + \tilde u_{xxxx} + \tilde u_{xyyy} =0,
\quad
\tilde u_{ty} + \alpha (\tilde u_{x} \tilde u_{y})_x + \tilde u_{xxxy} + \tilde u_{yyyy} =0 .
\end{align}
These two PDEs are related by $x\leftrightarrow y$ and $\alpha\leftrightarrow \beta$;
they are sometimes referred to as the asymmetric NV equation.
The NV equation \eqref{NVeqn} has a higher-order spatial divergence form
$u_{txy} =F_{xy}+ (F^x)_x + (F^y)_y$ given by
$F= -(u_{xxx} + u_{yyy})$,
$F^x= - \alpha u_{xy}u_{xx}$,
$F^y = -\beta u_{xy} u_{yy}$.
This equation (with $\alpha,\beta\neq0$) admits the multipliers
\begin{equation}\label{NV.Qs}
f(t),
\quad
2 u_{xx} f(t)- (1/\alpha) x f'(t) ,
\quad
2 u_{yy} f(t)- (1/\beta) y f'(t) ,
\end{equation}
which come from a direct computation of all multipliers
that have the form $Q(t,x,y,u,u_t,u_x,u_y,u_{tx},u_{ty},u_{xy},u_{xx},u_{yy})$.
The conservation law corresponding to the first multiplier
is given by the following conserved current $(T,\Phi^x,\Phi^y)$:
\begin{equation}\label{NV.current1}
f(t)\big( 0,
u_{xxxy}+\alpha u_{xy} u_{xx} +\tfrac{1}{2} u_{ty},
u_{xyyy}+\beta u_{xy} u_{yy}+\tfrac{1}{2} u_{tx}
\big) .
\end{equation}
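This current can be verified against the characteristic identity $D_x\Phi^x + D_y\Phi^y = f(t)\,G$, with $G$ the left-hand side of equation \eqref{NVeqn}; a sympy sketch:

```python
import sympy as sp

t, x, y, alpha, beta = sp.symbols('t x y alpha beta')
u = sp.Function('u')(t, x, y)
f = sp.Function('f')(t)
uxy = sp.diff(u, x, y)

# NV equation (NVeqn)
G = (sp.diff(u, t, x, y) + alpha*sp.diff(uxy*sp.diff(u, x, 2), x)
     + beta*sp.diff(uxy*sp.diff(u, y, 2), y)
     + sp.diff(u, x, 4, y) + sp.diff(u, x, y, 4))

# conserved current (NV.current1) for the multiplier Q = f(t)
Px = f*(sp.diff(u, x, 3, y) + alpha*uxy*sp.diff(u, x, 2) + sp.Rational(1, 2)*sp.diff(u, t, y))
Py = f*(sp.diff(u, x, y, 3) + beta*uxy*sp.diff(u, y, 2) + sp.Rational(1, 2)*sp.diff(u, t, x))
residual = sp.simplify(sp.diff(Px, x) + sp.diff(Py, y) - f*G)
assert residual == 0
```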
Likewise,
the conservation laws corresponding to the second and third multipliers
are given by a pair of conserved currents
\begin{equation}\label{NV.current2}
\begin{aligned}
& f(t)\big( u_{xx} u_{xy} ,
-u_{tx} u_{xy} +\alpha u_{xx}^2 u_{xy} -\beta u_{xy}^2 u_{yy} +u_{xyy}^2 +2 u_{xx} u_{xxxy} ,
\\&\quad
\tfrac{1}{3} (\alpha u_{xx}^3 + \beta u_{xy}^3)
-u_{xxx}^2 -2 u_{xxy} u_{xyy}
+(u_{tx}+2 \beta u_{xy} u_{yy}+2 u_{xyyy}) u_{xx}
\big)
\\&
+f'(t) \big( 0,
(1/\alpha) u_{xxx}
-x (u_{xx} u_{xy} +(1/\alpha) u_{xxxy}) ,
-(1/\alpha) x (u_{tx} +\beta u_{xy} u_{yy} +u_{xyyy})
\big)
\end{aligned}
\end{equation}
and
\begin{equation}\label{NV.current3}
\begin{aligned}
& f(t)\big( u_{yy} u_{xy} ,
\tfrac{1}{3} (\alpha u_{xy}^3 + \beta u_{yy}^3)
-u_{yyy}^2 -2 u_{xxy} u_{xyy}
+(u_{ty} +2\alpha u_{xx} u_{xy} +2 u_{xxxy})u_{yy} ,
\\&\quad
-\alpha u_{xx} u_{xy}^2
+\beta u_{xy} u_{yy}^2
-u_{ty} u_{xy}
+u_{xxy}^2
+2 u_{yy} u_{xyyy}
\big)
\\&
+ f'(t) \big( 0,
-(1/\beta) y (\alpha u_{xy} u_{xx} +u_{ty}+u_{xxxy}) ,
(1/\beta) u_{yyy} -y (u_{xy} u_{yy} +(1/\beta) u_{xyyy} )
\big)
\end{aligned}
\end{equation}
which are related by reflection under $x\leftrightarrow y$.
The first conserved current \eqref{NV.current1}
represents a spatial-flux conservation law.
Its global form \eqref{2D.charge.conslaw} is a conserved topological charge
$\oint_{C} Y\,dx - X\,dy=0$
given by
\begin{equation}
X = \tfrac{1}{2} u_{ty} +\alpha u_{xx}u_{xy} + u_{xxxy},
\quad
Y = \tfrac{1}{2} u_{tx} +\beta u_{xy}u_{yy} + u_{xyyy}
\label{NV.charge1}
\end{equation}
for NV solutions $u(t,x,y)$,
where $C$ is any closed curve in the $(x,y)$-plane.
This conservation law can also be expressed as a balance equation
\begin{equation}
\frac{d}{dt} \oint_{C} u_{x}\,dx - u_{y}\,dy =
2\oint_{C} (\alpha u_{xx}u_{xy} + u_{xxxy})\,dy -(\beta u_{xy}u_{yy} + u_{xyyy})\,dx
\end{equation}
for the circulation of the vector $(u_x,-u_y)$ whose curl is $-2u_{xy}=-2v$.
If we take $f(t)=1$,
then the other two conserved currents \eqref{NV.current2} and \eqref{NV.current3}
represent conservation laws describing balance equations
for rate of change of momenta
$\int_{{\rm int}(C)} u_{xx} u_{xy}\,dxdy$
and $\int_{{\rm int}(C)} u_{xy} u_{yy}\,dxdy$,
holding for NV solutions $u(t,x,y)$,
where ${\rm int}(C)$ is the interior of any closed curve $C$
in the $(x,y)$-plane.
The divergence relation \eqref{dens.div.id} applied to these conservation laws
yields the respective identities
\begin{align}
& u_{xx} u_{xy}
= (1/\alpha)\big( D_x( x (\alpha u_{xx} u_{xy} + u_{xxxy}) )
+ D_y( x (u_{tx} +\beta u_{xy} u_{yy} + u_{xyyy}) -u_{xxx} )
- x G \big),
\label{ZZK.id1}
\\
& u_{xy} u_{yy}
= (1/\beta)\big( D_x( y (u_{ty} +\alpha u_{xx} u_{xy} + u_{xxxy}) -u_{yyy} )
+ D_y( y (\beta u_{xy} u_{yy} + u_{xyyy}) )
-y G \big),
\label{ZZK.id2}
\end{align}
where $G = u_{txy} +\alpha (u_{xy}u_{xx})_x +\beta (u_{xy} u_{yy})_y + u_{xxxxy} + u_{xyyyy}$.
When the Cauchy problem for the NV equation \eqref{NVeqn} is considered on ${\mathbb R}^2$,
these identities \eqref{ZZK.id1} and \eqref{ZZK.id2} imply that
initial data $v|_{t=0}=v_0(x,y)=u_0{}_{xy}$
must satisfy the integral constraints
$\int_{{\mathbb R}^2} u_0{}_{xx}u_0{}_{xy}\,dxdy = \int_{{\mathbb R}^2} u_0{}_{yy}u_0{}_{xy}\,dxdy = 0$.
In addition, the identities \eqref{ZZK.id1} and \eqref{ZZK.id2}
show that each of the momenta conservation laws is locally equivalent to
a spatial flux conservation law.
The corresponding topological charges
$\oint_{C} Y\,dx - X\,dy=0$ for any closed curve $C$
are given by the fluxes
\begin{equation}\label{NV.charge2}
\begin{aligned}
X = &
x (u_{xy} u_{txx}+u_{xx} u_{txy}+ (1/\alpha) u_{txxxy})
-u_{xy} u_{tx} +\alpha u_{xx}^2 u_{xy} -\beta u_{xy}^2 u_{yy}
+u_{xyy}^2 +2 u_{xx} u_{xxxy} ,
\\
Y = &
(1/\alpha) x ( u_{ttx} +\beta (u_{yy} u_{txy} + u_{xy} u_{tyy}) +u_{txyyy} )
+\tfrac{1}{3}(\alpha u_{xx}^3 +\beta u_{xy}^3)
-u_{xxx}^2
\\&\quad
+(u_{tx} + 2 \beta u_{xy} u_{yy} +2 u_{xyyy}) u_{xx}
-2 u_{xxy} u_{xyy} -(1/\alpha) u_{txxx} ,
\end{aligned}
\end{equation}
and
\begin{equation}\label{NV.charge3}
\begin{aligned}
X = &
(1/\beta) y ( u_{tty} +\alpha (u_{xx} u_{txy} + u_{xy} u_{txx}) +u_{txxxy} )
+\tfrac{1}{3}(\alpha u_{xy}^3 +\beta u_{yy}^3)
-u_{yyy}^2
\\&\quad
+(u_{ty} +2 \alpha u_{xy} u_{xx} +2 u_{xxxy}) u_{yy}
-2 u_{xxy} u_{xyy} -(1/\beta) u_{tyyy} ,
\\
Y = &
x (u_{xy} u_{tyy}+u_{yy} u_{txy}+ (1/\beta) u_{txyyy})
-u_{xy} u_{ty} -\alpha u_{xy}^2 u_{xx} +\beta u_{yy}^2 u_{xy}
+u_{xxy}^2 +2 u_{yy} u_{xyyy} .
\end{aligned}
\end{equation}
Finally,
the spatial potential systems arising from the three topological charges \eqref{NV.charge1}, \eqref{NV.charge2} and \eqref{NV.charge3}
consist of pairs of equations for $(u,w)$:
\begin{equation}
X= w_y,
\quad
Y = -w_x .
\end{equation}
Each system is equivalent (modulo gauge freedom \eqref{2D.gaugefreedom})
to the NV equation.
\subsection{Vorticity equation}
In two spatial dimensions,
the Navier--Stokes equations for incompressible fluid flow
\begin{equation}
\vec{v}_t + \vec{v}\cdot\nabla\vec{v}
= -(1/\rho)\nabla P + \mu \Delta\vec{v},
\quad
\nabla\cdot\vec{v}=0
\end{equation}
have a well-known formulation in terms of a scalar potential $u$,
where $\mu$ is the viscosity,
and where $\vec{v} = (-u_y,u_x)$ is the fluid velocity.
The vorticity of the fluid is a scalar given by \cite{MajBer}
$\omega = (v^y)_x-(v^x)_y=\Delta u$,
with $\Delta = \partial_x^2 + \partial_y^2$
being the two-dimensional Laplacian.
An evolution equation for $\omega$ is obtained by taking the curl of the velocity equation,
yielding
\begin{equation}\label{vorteqn}
\Delta u_t +u_x \Delta u_y -u_y \Delta u_x -\mu \Delta^2 u =0 .
\end{equation}
This equation has a higher-order spatial divergence form
$\Delta u_t = \nabla\cdot\vec{F}$
given by
\begin{equation}
\vec{F} = (u_y\Delta u +\mu \Delta u_x, -u_x\Delta u +\mu \Delta u_y)
= -(\Delta u)\vec v + \mu \vec\nabla \Delta u .
\end{equation}
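The divergence form can be checked directly. The following SymPy sketch (our own verification aid) confirms that $\nabla\cdot\vec F$ equals $u_y\Delta u_x - u_x\Delta u_y + \mu\Delta^2 u$, which is exactly $\Delta u_t$ on solutions of \eqref{vorteqn}.

```python
import sympy as sp

t, x, y, mu = sp.symbols('t x y mu')
u = sp.Function('u')(t, x, y)
lap = lambda g: sp.diff(g, x, 2) + sp.diff(g, y, 2)  # 2D Laplacian

# flux components of the higher-order divergence form
Fx = sp.diff(u, y)*lap(u) + mu*lap(sp.diff(u, x))
Fy = -sp.diff(u, x)*lap(u) + mu*lap(sp.diff(u, y))

div_F = sp.diff(Fx, x) + sp.diff(Fy, y)
# on solutions of the vorticity equation, Delta u_t equals this expression
rhs = (sp.diff(u, y)*lap(sp.diff(u, x)) - sp.diff(u, x)*lap(sp.diff(u, y))
       + mu*lap(lap(u)))
assert sp.simplify(div_F - rhs) == 0
```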
As expected from the discussion in section~\ref{sec:intro},
equation \eqref{vorteqn} admits the multipliers
\begin{equation}
f(t) ,
\end{equation}
and in the inviscid case $\mu=0$,
\begin{equation}
xf(t),
\quad
yf(t) .
\end{equation}
There are no additional multipliers
that have the form $Q(t,x,y,u,u_t,u_x,u_y,u_{tx},u_{ty},u_{xy},$ $u_{xx},u_{yy})$
and that involve an arbitrary function $f(t)$,
as shown by a direct computation.
The corresponding conservation laws are given by
the following conserved currents $(T,\Phi^x,\Phi^y)$:
\begin{equation}
f(t)\big( 0, u_{tx} -u_y\Delta u -\mu \Delta u_x, u_{ty} +u_x\Delta u -\mu \Delta u_y \big) ;
\end{equation}
and in the case $\mu=0$,
\begin{align}
& f(t)\big( 0,
-u_t +u_x u_y + x(u_{tx} -u_y u_{xx} + u_x u_{xy}) ,
-u_x^2 + x (u_{ty} -u_y u_{xy} + u_x u_{yy})
\big) ,
\\
& f(t)\big( 0,
u_y^2 +y (u_{tx} -u_y u_{xx} +u_x u_{xy}) ,
-u_t -u_x u_y + y (u_{ty} -u_y u_{xy} +u_x u_{yy})
\big) .
\end{align}
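That these are conserved currents can be confirmed symbolically: since $T=0$ in each case, the spatial divergence of the flux must equal the corresponding multiplier times the left-hand side of \eqref{vorteqn}. A SymPy sketch (our own check) for the multipliers $f(t)$ and, in the inviscid case, $xf(t)$; the $yf(t)$ case is analogous.

```python
import sympy as sp

t, x, y, mu = sp.symbols('t x y mu')
u = sp.Function('u')(t, x, y)
f = sp.Function('f')(t)
D = sp.diff
lap = lambda g: D(g, x, 2) + D(g, y, 2)

# left-hand side of the vorticity equation and its inviscid version
E = lap(D(u, t)) + D(u, x)*lap(D(u, y)) - D(u, y)*lap(D(u, x)) - mu*lap(lap(u))
E0 = E.subs(mu, 0)

# multiplier f(t): T = 0, so the flux divergence must equal f(t)*E
Px = f*(D(u, t, x) - D(u, y)*lap(u) - mu*lap(D(u, x)))
Py = f*(D(u, t, y) + D(u, x)*lap(u) - mu*lap(D(u, y)))
assert sp.simplify(D(Px, x) + D(Py, y) - f*E) == 0

# multiplier x f(t), inviscid case: flux divergence must equal x*f(t)*E0
Qx = f*(-D(u, t) + D(u, x)*D(u, y)
        + x*(D(u, t, x) - D(u, y)*D(u, x, 2) + D(u, x)*D(u, x, y)))
Qy = f*(-D(u, x)**2
        + x*(D(u, t, y) - D(u, y)*D(u, x, y) + D(u, x)*D(u, y, 2)))
assert sp.simplify(D(Qx, x) + D(Qy, y) - x*f*E0) == 0
```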
The resulting topological charges are given by the line integrals
\begin{align}
& \oint_{C} (u_{ty} -F^y)\,dx -(u_{tx} -F^x)\,dy =0 ,
\label{vort.charge1}\\
& \oint_{C} ( x(u_{ty} -u_y u_{xy} + u_x u_{yy}) -u_x^2 )\,dx -( x(u_{tx} -u_y u_{xx} + u_x u_{xy}) -u_t +u_x u_y )\,dy =0 ,
\label{vort.charge2}\\
&
\oint_{C} ( y(u_{ty} -u_y u_{xy} +u_x u_{yy}) -u_t - u_x u_y )\,dx -( y(u_{tx} -u_y u_{xx} +u_x u_{xy}) +u_y^2 )\,dy =0 ,
\label{vort.charge3}
\end{align}
where $C$ is any fixed closed curve in the $(x,y)$-plane.
They can be expressed equivalently in the form of circulation balance equations
\begin{align}
\frac{d}{dt}\oint_{C} u_{y}\,dx -u_{x}\,dy
& = \oint_{C} F^y\,dx -F^x\,dy ,
\\
\frac{d}{dt}\oint_{C} x u_{y}\,dx +(u -xu_{x})\,dy
& = \oint_{C} ( x(u_y u_{xy} -u_x u_{yy}) +u_x^2 )\,dx -( x(u_y u_{xx} - u_x u_{xy}) -u_x u_y )\,dy ,
\\
\frac{d}{dt} \oint_{C} (y u_{y} -u)\,dx - y u_{x}\,dy
& = \oint_{C} ( y(u_y u_{xy} -u_x u_{yy}) +u_x u_y )\,dx -(y(u_y u_{xx} -u_x u_{xy}) +u_y^2 )\,dy .
\end{align}
The first equation relates the rate of change
in the circulation of fluid velocity $\oint_{C} \vec v\cdot d\vec s$ around closed curves
to the flux of $\vec F$ through the curve.
The other two equations are generalizations involving a weighted circulation of fluid velocity.
The spatial potential systems for $(u,w)$ arising from
the topological charges \eqref{vort.charge1}--\eqref{vort.charge3}
are given by
\begin{align}
&
u_{tx} -u_y\Delta u -\mu \Delta u_x =w_y,
\quad
u_{ty} +u_x\Delta u -\mu \Delta u_y =-w_x;
\\
&
{-}u_t +u_x u_y + x (u_{tx} - u_y u_{xx} + u_x u_{xy})
= w_y,
\quad
u_x^2 - x (u_{ty} - u_y u_{xy} + u_x u_{yy})
= w_x ,
\\
&
u_y^2 +y (u_{tx} -u_y u_{xx} +u_x u_{xy})
= w_y,
\quad
u_t + u_x u_y - y (u_{ty} -u_y u_{xy} +u_x u_{yy})
=w_x .
\end{align}
\section{Concluding remarks}\label{sec:conclude}
For dynamical PDEs with a spatial divergence form,
we have shown how conservation laws that involve an arbitrary function of time
contain interesting and useful information which goes beyond what is contained
in ordinary conservation laws.
Conservation laws in one spatial dimension
involving an arbitrary function $f(t)$
describe the presence of an $x$-independent source/sink;
in two and more spatial dimensions,
such conservation laws are locally equivalent to spatial-flux conservation laws
that yield non-trivial topological charges.
We have explored two important systematic applications of
this type of conservation law:
construction of an associated spatial potential system, which opens
the possibility of finding nonlocal symmetries and nonlocal conservation laws;
and derivation of an integral constraint relation which imposes
restrictions on well-posedness of the Cauchy problem
with associated initial/boundary data.
It is well known that the existence of a non-trivial conservation law of a PDE
has a cohomological meaning in the setting of the variational bicomplex \cite{Olv-book}.
Existence of a topological charge can be understood as stating that
a given dynamical PDE possesses a non-trivial spatial cohomology.
All of these developments will have a natural counterpart
for conservation laws that involve an arbitrary function of
other independent variables.
\section*{Acknowledgements}
SCA is supported by an NSERC research grant
and thanks the University of C\'adiz for additional support during the period
when this work was initiated.
| {
"timestamp": "2020-12-29T02:09:29",
"yymm": "2006",
"arxiv_id": "2006.03639",
"language": "en",
"url": "https://arxiv.org/abs/2006.03639",
"abstract": "Dynamical PDEs that have a spatial divergence form possess conservation laws that involve an arbitrary function of time. In one spatial dimension, such conservation laws are shown to describe the presence of an $x$-independent source/sink; in two and more spatial dimensions, they are shown to describe a topological charge. Two applications are demonstrated. First, a topological charge gives rise to an associated spatial potential system, allowing nonlocal conservation laws and symmetries to be found for a given dynamical PDE. Second,when a conserved density involves derivatives of an arbitrary function of time in addition to the function itself, its integral on any given spatial domain reduces to a boundary integral, which in some situations can place restrictions on initial/boundary data for which the dynamical PDE will be well-posed. Several examples of nonlinear PDEs from applied mathematics and integrable system theory are used to illustrate these new results.",
"subjects": "Mathematical Physics (math-ph); Analysis of PDEs (math.AP)",
"title": "Topological charges and conservation laws involving an arbitrary function of time for dynamical PDEs"
} |
https://arxiv.org/abs/1705.01973 | Evolving Affine Evolutoids | The envelope of straight lines affine normal to a plane curve C is its affine evolute; the envelope of the affine lines tangent to C is the original curve, together with the entire affine tangent line at each inflexion of C. In this paper, we consider plane curves without inflexions. We use some techniques of singularity theory to explain how the first envelope turns into the second, as the (constant) slope between the set of lines forming the envelope and the set of affine tangents to C changes from 0 to 1. In particular, we guarantee the existence of the first slope for which singularities occur. Moreover, we explain how these singularities evolve in the discriminant surface. | \section{Introduction}
\label{intro}
Let $\gamma$ be a plane curve, which we shall assume closed, smooth and without affine inflexions. The {\it envelope} of a family of lines is formed by the intersections of infinitesimally close consecutive lines; equivalently, it is a curve tangent to all the lines of the family. For example, the envelope of the family of affine tangent lines to $\gamma$ contains at least the curve itself, and the envelope of the affine normals is called the {\it affine evolute} of $\gamma$.
It is natural to ask what lies ``between'' the envelope of affine tangents and the envelope of affine normals. Let us fix a number $\alpha$ between 0 and 1 and consider the lines $L_\alpha$ that pass through $\gamma(s)$ with slope $\alpha\gamma_s+(1-\alpha)\gamma_{ss}$, where $s$ is the affine arclength parameter. The Euclidean case was investigated by Giblin and Warder \cite{Giblin3}.
This work makes explicit the envelope of the lines $L_\alpha$, which we call the affine evolutoid, and provides several results: regularity conditions for the envelope, the existence of a first $\alpha$ for which singularities occur, and conditions for the existence of ordinary affine cusps. Moreover, we apply results from singularity theory to describe how the singularities evolve on the discriminant of the three-parameter family obtained from the equations that define $L_\alpha.$ More precisely, we show that, locally, the discriminant surfaces are cuspidal edges or swallowtail surfaces.
\section{Review of the affine geometry of planar curves}
In this section, we present the basic concepts of the affine differential geometry of planar smooth curves. For further details, see \cite{Nomizu,Buchin}.
Let $\gamma: [0,1] \longrightarrow \mathbb{R}^2$ be a planar curve parametrized by $t$. A basic aim of planar affine differential geometry is to define a new parametrization $s$ which is affine-invariant; the simplest such parametrization is obtained by requiring, at every curve point $\gamma(s)$, the relation
\begin{equation}\label{pcaa}
[ \gamma_s, \gamma_{ss}]=1,
\end{equation}
\noindent where $[\,,\,]$ denotes the determinant of the pair of vectors.
When a curve satisfies equation \eqref{pcaa}, we say it is
parameterized by affine arclength.
The vectors $\gamma_s$ and $\gamma_{ss}$ are the affine tangent and the affine normal, respectively.
The parameters $s$ and $t$ are related by
$$ [\gamma_t,\gamma_{tt}]=\left[\gamma_s s_t,
\gamma_{ss}(s_t)^2+\gamma_s s_{tt}\right]=s_t^3[\gamma_s,\gamma_{ss}]=s_t^3.$$
Thus,
$$ \dfrac{ds}{dt}= [\gamma_t,\gamma_{tt}]^{\frac{1}{3}}.$$
By differentiating the equation \eqref{pcaa}, we obtain
$$ [\gamma_s, \gamma_{sss}]=0 \Rightarrow
\gamma_{sss}+ \mu(s)\gamma_s=0,$$ for some $\mu(s) \in \mathbb{R}.$
The function $\mu(s)$ is the affine curvature, the simplest non-trivial affine differential invariant. Since $\gamma_{sss}=-\mu(s)\gamma_s$, taking the determinant with $\gamma_{ss}$ gives
$$\mu(s)=[\gamma_{ss}, \gamma_{sss}].$$
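As a concrete illustration of these formulas, the following SymPy sketch (our own verification aid, using an ellipse as test curve) checks the normalization \eqref{pcaa} and computes the resulting constant affine curvature.

```python
import sympy as sp

t, s = sp.symbols('t s')
a, b = sp.symbols('a b', positive=True)
det = lambda v, w: sp.simplify(v[0]*w[1] - v[1]*w[0])

# ellipse in an arbitrary parameter t
gamma = sp.Matrix([a*sp.cos(t), b*sp.sin(t)])
kappa = det(gamma.diff(t), gamma.diff(t, 2))
assert sp.simplify(kappa - a*b) == 0          # so ds/dt = (ab)^(1/3)

# reparameterize by affine arclength: s = (ab)^(1/3) t
c = (a*b)**sp.Rational(1, 3)
g = sp.Matrix([a*sp.cos(s/c), b*sp.sin(s/c)])
assert sp.simplify(det(g.diff(s), g.diff(s, 2)) - 1) == 0   # [gamma_s, gamma_ss] = 1

mu = det(g.diff(s, 2), g.diff(s, 3))          # affine curvature
assert sp.simplify(mu - (a*b)**sp.Rational(-2, 3)) == 0     # constant, as for any conic
```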
\begin{theorem}\cite{Nomizu} \label{thm1}
Curves have constant affine curvature if and only if they are conic sections.
\end{theorem}
\section{The affine normal and the affine curvature of a curve not parameterized by affine arclength}
\begin{proposition} Let $\gamma: \mathbb{R} \longrightarrow \mathbb{R}^2$ be a regular curve parametrized by an arbitrary parameter $t$, and let $\kappa=[\gamma_t,\gamma_{tt}]$. The affine normal $\xi(t)$ is given by:
\begin{eqnarray*}\label{eq0}
\xi(t)=\kappa^{-\frac{2}{3}}\gamma_{tt}-\dfrac{1}{3}\kappa_t\kappa^{-\frac{5}{3}}\gamma_t
\end{eqnarray*}
\end{proposition}
The affine curvature of a planar curve $\gamma$ parametrized by an arbitrary parameter is given in the next result.
\begin{proposition}
Let $\gamma$ be a smooth plane curve without inflexion points parametrized by an arbitrary parameter $t$. Considering
$\kappa=[\gamma_t,\gamma_{tt}]$, we conclude that the affine curvature is given by
\begin{eqnarray} \label{eq00}
\mu = \dfrac{1}{9}\left(3\kappa\kappa_{tt}-5\kappa_t^2+9\kappa[\gamma_{tt},\gamma_{ttt}]\right)\kappa^{-\frac{8}{3}}.
\end{eqnarray}
\end{proposition}
\begin{proof}
Note that $s_t=\kappa^{\frac{1}{3}}$ and
$\gamma_s=\gamma_t\kappa^{-\frac{1}{3}}$. Now compute
$\gamma_{ss}$ and $\gamma_{sss}$, use the fact that
$\kappa_t=[\gamma_t,\gamma_{ttt}]$, and evaluate
$\mu=[\gamma_{ss},\gamma_{sss}].$
\end{proof}
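Formula \eqref{eq00} can be checked independently by comparing it with the direct computation $\mu=[\gamma_{ss},\gamma_{sss}]$ outlined in the proof. A SymPy sketch (our own check; the test curve is an arbitrary choice without Euclidean inflexions near the origin):

```python
import sympy as sp

t = sp.symbols('t', real=True)
det = lambda v, w: v[0]*w[1] - v[1]*w[0]

# test curve with non-constant kappa
gamma = sp.Matrix([t, t**2/2 + t**4])
kappa = det(gamma.diff(t), gamma.diff(t, 2))    # 1 + 12 t^2 > 0

# direct computation: d/ds = kappa^(-1/3) d/dt
ds = lambda v: v.diff(t) * kappa**sp.Rational(-1, 3)
g_s = gamma.diff(t) * kappa**sp.Rational(-1, 3)
g_ss = ds(g_s)
g_sss = ds(g_ss)
assert sp.simplify(det(g_s, g_ss) - 1) == 0     # affine arclength normalization
mu_direct = det(g_ss, g_sss)

# formula for mu in an arbitrary parameter
mu_formula = (3*kappa*kappa.diff(t, 2) - 5*kappa.diff(t)**2
              + 9*kappa*det(gamma.diff(t, 2), gamma.diff(t, 3))) \
             * kappa**sp.Rational(-8, 3) / 9

assert sp.simplify(mu_direct - mu_formula) == 0
```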
Consider a plane curve in Monge form, without Euclidean inflexions close to the origin, that is,
$$\gamma(t)=\left(t, \displaystyle\frac{1}{2}a_2t^2+ \cdots +\dfrac{1}{k!}a_kt^k+g(t)t^{k+1}
\right),$$ where $a_i \in \mathbb{R}$, $a_2\neq0$ and $g$ is a smooth function. Using the previous proposition, the affine curvature of $\gamma$ at
$\gamma(0)$ is
$$\mu(0)=\dfrac{3a_2a_4-5a_3^2}{9a_2^{\frac{8}{3}}}.$$
This means that the affine curvature is an affine differential invariant of order 4 of $\gamma$.
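This value of $\mu(0)$ can be recovered from \eqref{eq00} with a short SymPy computation (our own check; only the terms up to order $4$ matter at $t=0$):

```python
import sympy as sp

t = sp.symbols('t')
a2 = sp.symbols('a2', positive=True)
a3, a4 = sp.symbols('a3 a4', real=True)
det = lambda v, w: v[0]*w[1] - v[1]*w[0]

# Monge form truncated at order 4
gamma = sp.Matrix([t, a2*t**2/2 + a3*t**3/6 + a4*t**4/24])
kappa = det(gamma.diff(t), gamma.diff(t, 2))

mu = (3*kappa*kappa.diff(t, 2) - 5*kappa.diff(t)**2
      + 9*kappa*det(gamma.diff(t, 2), gamma.diff(t, 3))) \
     * kappa**sp.Rational(-8, 3) / 9

mu0 = sp.simplify(mu.subs(t, 0))
assert sp.simplify(mu0 - (3*a2*a4 - 5*a3**2)/(9*a2**sp.Rational(8, 3))) == 0
```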
\section{Affine Envelopes}\label{sec4}
Let $\gamma: I \longrightarrow \mathbb{R}^2$ be a smooth closed curve without affine inflexions. It is known that the envelope of the affine tangents to $\gamma$ is formed by the curve itself together with the affine tangent lines at the affine inflexion points \cite{Sano}. It is also known that the envelope of the affine normals is the affine evolute of the curve $\gamma$. Inspired by the work \cite{Giblin3}, we ask what the envelope of lines with slope between the affine tangent and the affine normal to the curve $\gamma$ would be.
Let $(1-|\alpha|)\gamma_s+\alpha\gamma_{ss}$ be a vector between $\gamma_s$ and $\gamma_{ss}$, where $\alpha \in [-1,1]$. In this paper, we consider the case $\alpha\geq0$; the case $\alpha<0$ is similar.
We are interested in the envelope of the lines with slope $v^\alpha=(1-\alpha)\gamma_s+\alpha\gamma_{ss}$, $\alpha \in [0,1],$ which we denote by $L_\alpha$. The equation of the line $L_\alpha$ is given by $$\begin{array}{cccl}
F: & \mathbb{R}^2\times I & \longrightarrow & \mathbb{R} \\
 & (X,s) & \longmapsto & F(X,s)=\left[X-\gamma, (1-\alpha)\gamma_s+\alpha\gamma_{ss}\right] \\
\end{array},
$$ where $[\, ,\,]$ denotes the determinant.
For $\alpha$ fixed, $F(X,s)=0$ defines a family of lines: for each $s$ we have a line, and as $s$ varies the line moves in the $xy$-plane.
The envelope of the family $F(X,s)$ is given by $$E_\alpha=\left\{X=(x,y)\in \mathbb{R}^2 \mid \textrm{there is} \ s \ \textrm{such that} \ F(X,s)=F_s(X,s)=0\right\}.$$
Since $\alpha$ is fixed (constant), it follows that $$F_s(X,s)=\left[-\gamma_s,(1-\alpha)\gamma_s+\alpha\gamma_{ss}\right]+\left[X-\gamma,(1-\alpha)\gamma_{ss}-\alpha\mu\gamma_s\right].$$
Here we use the fact that $s$ is the affine arclength parameter, so $\gamma_{sss}=-\mu(s)\gamma_s,$ where $\mu$ is the affine curvature of $\gamma$.
By solving the system $F=F_s=0$, we obtain
\begin{equation}\label{envelope}
X(s)=\gamma(s) + \dfrac{\alpha}{(1-\alpha)^2+\mu(s)\alpha^2}\left((1-\alpha)\gamma_s(s)+\alpha\gamma_{ss}(s)\right).
\end{equation}
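As a sanity check that \eqref{envelope} indeed solves $F=F_s=0$, the following SymPy sketch (our own verification aid) carries out the computation for an ellipse in affine arclength parameterization, for which $\mu=(ab)^{-2/3}$ is constant.

```python
import sympy as sp

s, alpha = sp.symbols('s alpha', positive=True)
a, b = sp.symbols('a b', positive=True)
det = lambda v, w: v[0]*w[1] - v[1]*w[0]

# affine-arclength ellipse; its affine curvature is (ab)^(-2/3)
c = (a*b)**sp.Rational(1, 3)
g = sp.Matrix([a*sp.cos(s/c), b*sp.sin(s/c)])
mu = (a*b)**sp.Rational(-2, 3)

v = (1 - alpha)*g.diff(s) + alpha*g.diff(s, 2)         # direction of L_alpha
X = g + (alpha/((1 - alpha)**2 + mu*alpha**2))*v       # claimed envelope point

x0, y0 = sp.symbols('x0 y0')
F = det(sp.Matrix([x0, y0]) - g, v)
Fs = F.diff(s)                                         # differentiate first, then evaluate

sub = {x0: X[0], y0: X[1]}
assert sp.simplify(F.subs(sub)) == 0
assert sp.simplify(Fs.subs(sub)) == 0
```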
\begin{remark} \label{r1}
\
\begin{enumerate}
\item[(a)] Notice that $(1-\alpha)^2+\mu(s)\alpha^2\neq0.$ Otherwise the affine curvature would be a negative constant, and then, by Theorem \ref{thm1}, $\gamma$ would be a conic that is not closed.
\item[(b)] If $\alpha=1$, then the lines $F(X,s)=0$ are the affine normals to $\gamma$ and the envelope is the affine evolute, i.e., the set of points $\gamma+\dfrac{1}{\mu}\gamma_{ss}$, which are the centers of the conics having $5$-point contact with $\gamma$, also called the centers of affine curvature of $\gamma.$
\item[(c)] If $\alpha=0$, then the lines are the affine tangents to $\gamma$ and the envelope is the original curve $\gamma.$
\end{enumerate}
\end{remark}
\section{Regularity of the envelope}
Consider the envelope of the family $F$ given by equation \eqref{envelope}. We investigate when this curve is regular and when it is not; the next proposition gives the precise condition.
\begin{proposition}\label{regularidade_do_envelope}
The envelope \eqref{envelope} is not regular if and only if
\begin{equation}\label{nonregularity}
\mu_s= \dfrac{(1-\alpha)((1-\alpha)^2+\mu\alpha^2)}{\alpha^3},
\end{equation} where $\mu_s$ is the derivative of affine curvature with respect to $s$ on $\gamma$.
\end{proposition}
\begin{proof}
Assume $\mu(s)\neq0$. By differentiating the solution \eqref{envelope} of the envelope of $F$ with respect to the affine arclength parameter $s$, we obtain $X_s=A(s)\left((1-\alpha)\gamma_s+\alpha\gamma_{ss}\right),$ where $$A(s)=\dfrac{(1-\alpha)}{{(1-\alpha)^2+\mu\alpha^2}}-\dfrac{\alpha^3\mu_s}{[(1-\alpha)^2+\mu\alpha^2]^2}.$$
Therefore, $X_s$ vanishes if and only if $A(s)=0$, i.e., $\mu_s= \dfrac{(1-\alpha)((1-\alpha)^2+\mu\alpha^2)}{\alpha^3}.$
\end{proof}
For $\alpha=1$, the envelope is the affine evolute. By differentiating equation \eqref{envelope}, we obtain the familiar condition $\mu_s=0$, i.e., $\gamma$ has an extremum of affine curvature, that is, an affine vertex. In the case $\alpha=0$, the envelope is the curve $\gamma$ itself, which is regular by assumption.
\begin{example}\label{ellipse}
Consider an ellipse parameterized by $\gamma(t)=(a\cos(t),b\sin(t)),$ where $b>a>0$ (for $a=2, b=3$ see Fig. \ref{fig1}). The reparameterization by affine arclength is $\gamma(s)=\left(a\cos\left(\dfrac{s}{(ab)^{\frac{1}{3}}}\right),b\sin\left(\dfrac{s}{(ab)^{\frac{1}{3}}}\right)\right)$. Applying the condition of Proposition \ref{regularidade_do_envelope}, we conclude that, for any $\alpha\in[0,1)$, the affine evolutoid is smooth. This was expected, because the affine curvature of $\gamma$ is constant.
\end{example}
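The claim in this example can be confirmed with a small SymPy check (our own aid): since $\mu_s=0$ for the ellipse, non-regularity would require the right-hand side of \eqref{nonregularity} to vanish, and this happens only at $\alpha=1$.

```python
import sympy as sp

alpha = sp.symbols('alpha')
a, b = sp.symbols('a b', positive=True)

mu = (a*b)**sp.Rational(-2, 3)   # constant affine curvature of the ellipse
rhs = (1 - alpha)*((1 - alpha)**2 + mu*alpha**2)/alpha**3

# mu_s = 0, so singularities would require rhs = 0;
# for a = 2, b = 3 the right-hand side is positive on (0,1) and vanishes at alpha = 1
vals = [sp.Rational(k, 10) for k in range(1, 10)]
assert all(rhs.subs({a: 2, b: 3, alpha: v}) > 0 for v in vals)
assert rhs.subs(alpha, 1) == 0
```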
\begin{figure}[ht]
\centering
\includegraphics[scale=0.2]{affine_evolutoid_ellipse.eps}\\
\caption{Ellipse $\gamma(t)=(2\cos(t),3\sin(t))$ and the affine evolutoid for $\alpha=0.75$. In fact, for all $0\leq\alpha<1$ the affine evolutoids are smooth, and for $\alpha=1$ the affine evolutoid is the degenerate affine evolute.}
\label{fig1}
\end{figure}
\begin{example}\label{ex2}
Consider the curve $\gamma(t)=\left(\cos(2t)-\cos(t+a), \sin(2t)+\sin(t)\right)$. Here the affine evolutoid has singularities (see Fig. \ref{fig2}, where $a=1.9$).
\end{example}
\bigskip
\begin{figure}[ht]
\centering
\includegraphics[scale=0.2]{affine_evolutoid_example.eps}\\
\caption{Curve $\gamma(t)=\left(\cos(2t)-\cos(t+1.9), \sin(2t)+\sin(t)\right)$ and the affine evolutoid for $\alpha=0.9$. }
\label{fig2}
\end{figure}
The existence of a first $\alpha$ such that the affine evolutoid is not smooth is guaranteed in the next result.
\begin{theorem}[Birth of singularities]\label{born}
Consider $\gamma$ as in Section \ref{sec4}. There is a first $\alpha$ for which the condition \eqref{nonregularity} given in Proposition \ref{regularidade_do_envelope} occurs.
\end{theorem}
\begin{proof}
The ordinary differential equation given by the condition \eqref{nonregularity} has the solution $$\mu(s)=-\dfrac{(1-\alpha)^2}{\alpha^2}+ Ce^{\frac{(1-\alpha)}{\alpha}s},$$ where $C\in \mathbb{R}.$
Define the function $G : (0,1]\times I \longrightarrow \mathbb{R}$ by $G(\alpha,s)=-\mu(s)-\dfrac{(1-\alpha)^2}{\alpha^2}+ Ce^{\frac{(1-\alpha)}{\alpha}s}$. For fixed $s$, this function is continuous. Observe that $\lim_{\alpha\rightarrow0^+}|G(\alpha,s)|=\infty$: given any real number $M>0$, there is $\delta>0$ such that $|G(\alpha,s)|>M$ for $0<\alpha<\delta$. Thus $G|_{[\delta,1]\times\{s\}}$ is a continuous function defined on a compact set, and therefore there is $\alpha_0=\alpha_0(s)$ such that $G(\alpha_0,s)=\min_{\delta\leq\alpha\leq1}G(\alpha,s)$.
\end{proof}
\begin{remark}
In Example \ref{ellipse}, the birth occurs at $\alpha=1$. On the other hand, in Example \ref{ex2}, the existence of a first $\alpha$ such that the affine evolutoid is not smooth is guaranteed by Theorem \ref{born}, but it is nontrivial to make it explicit.
\end{remark}
\begin{proposition}\label{p5}
Assume $\mu\neq0$ and $\alpha\in (0,1).$ Then the affine cusps presented in Proposition \ref{regularidade_do_envelope} are ordinary affine cusps if and only if $\alpha\mu_{ss}\neq (1-\alpha)\mu_s$, where the derivatives are evaluated at the affine cusp point.
\end{proposition}
\begin{proof}
The condition for an ordinary affine cusp is that the second and third derivatives of the envelope, evaluated at the affine cusp point, be linearly independent vectors. We have $X_s=A(s)v^\alpha,$ where $v^\alpha=(1-\alpha)\gamma_s+\alpha\gamma_{ss}$. By differentiating $X_s$ twice, we obtain $$ X_{ss}=A_sv^\alpha+Av^\alpha_s, \ \ X_{sss}=A_{ss}v^\alpha+2A_sv^\alpha_s+Av^\alpha_{ss}.$$
If $A(s)=0$, we have $$[X_{ss},X_{sss}]=2A_s^2[v^\alpha,v^\alpha_s]=2A_s^2\left((1-\alpha)^2+\mu\alpha^2\right).$$ Thus these vectors are linearly dependent if and only if $2A_s^2[v^\alpha,v^\alpha_s]=0$, i.e., $A_s=0$, since $(1-\alpha)^2+\mu\alpha^2\neq0$ given that $\mu\neq0$ and $\alpha\in(0,1)$. Writing out $A_s=0$ yields the required formula.
\end{proof}
This article aims to study not only a single value of $\alpha$, but also what happens to $E_\alpha$ as $\alpha$
varies. To this end, the investigation is conducted in a broader context.
\section{Discriminants and singularity theory}
Consider the family of functions of one variable $s$ with three parameters $(x,y,\alpha)$
\begin{equation}\label{F}
F(X,\alpha,s)=\left[ X-\gamma(s), (1-\alpha)\gamma_s+\alpha\gamma_{ss}\right],
\end{equation}
where $X=(x,y), \gamma(s)=(x(s),y(s))$ and $s$ is the parameter of affine arclength. The discriminant of this family is given by
\begin{equation}\label{discriminant}
D_F=\{(X,\alpha) : \textrm{there is} \ s \ \textrm{such that} \ F(X,\alpha,s)=F_s(X,\alpha,s)=0\}.
\end{equation}
This discriminant is the union of all the envelopes of lines $L_\alpha$ for each $\alpha.$
Now consider the discriminant $D_F$ and the function $h(x,y,\alpha)=\alpha$. The level sets $h=\textrm{constant}$ are the individual envelopes of the family. We intend to investigate precisely how they change as $\alpha$ varies.
\begin{example} Consider the ellipse $\gamma(t)=(3\cos(t),2\sin(t))$ and the curve $\sigma(t)=(\cos(2t)-\cos(t+1.9), $ $\sin(2t)+\sin(t))$. Let $D_F$ be the discriminant surface associated with $\gamma$ and $D_G$ the discriminant surface associated with $\sigma$, as illustrated in Fig. \ref{fig3}. Note that the discriminant surface $D_G$ seems to have cuspidal edges and swallowtail surfaces\footnote{For details about cusps, cuspidal edges and swallowtail surfaces, see \cite{Giblin4}.}, and the function $h$ seems to have level sets which undergo a swallowtail transition for certain values of $\alpha$. We are interested in verifying these observations.
\end{example}
\begin{figure}[ht]
\centering
\subfigure[Discriminant surface $D_F$]{\includegraphics[width=4cm]{{discrimant_ellipse.eps}}}
\subfigure[Discriminant surface $D_G$]{\includegraphics[width=4cm]{{discrimant_curve.eps}}}
\caption{For the discriminant surface $D_F$ ($\alpha$-axis is vertical) $\alpha=0$ corresponds to the bottom that is the original ellipse and $\alpha=1$ is the top, which corresponds to the envelope of affine normals, which degenerates at a point. For the discriminant surface $D_G$, $\alpha=0$ corresponds to original curve $\sigma$, and $\alpha=1$ corresponds to affine evolute (envelope of affine normals, which has six cusps). In $D_G$, cuspidal edges appear and the horizontal sections seem to undergo a swallowtail transition.} \label{fig3}
\end{figure}
To verify the observations given in the example above, we apply results from singularity theory that allow us to make precise statements about how the envelopes evolve as $\alpha$ changes.
\begin{definition}
For $(X_0,\alpha_0)=(x_0,y_0,\alpha_0)$, the function $f(s) = F(X_0,\alpha_0,s)$ has a singularity of
\begin{enumerate}
\item[(i)] type $A_2$ at $s = s_0$ if $f'(s_0) = f''(s_0) = 0, f'''(s_0)\neq 0$, \\
\item[(ii)] type $A_3$ at $s = s_0$ if $f'(s_0) = f''(s_0) = 0, f'''(s_0)=0, f^{(4)}(s_0)\neq 0$.
\end{enumerate}
\end{definition}
\begin{proposition}\label{P5}
Let $(X_0,\alpha_0)=(x_0,y_0,\alpha_0)$ be a point satisfying $F=F_s=0$, and suppose $\mu(s_0)\neq 0.$ Then $f(s)=F(X_0,\alpha_0,s)$ has a singularity of
\begin{enumerate}
\item[$($i$)$] type $A_2$ at $s_0$, if $ \alpha\mu_{ss}-(1-\alpha)\mu_s\neq0;$ \\
\item[$($ii$)$] type $A_3$ at $s_0$, if $\alpha^5\mu_{sss}\neq-(1-\alpha)^3((1-\alpha)^2+\alpha^2\mu).$
\end{enumerate}
\end{proposition}
\begin{proof}
$(i)$ The equation $F_{ss}=0$ implies
\begin{equation} \label{eq12}
\alpha^3\mu_s=(1-\alpha)((1-\alpha)^2+\alpha^2\mu).
\end{equation}
By differentiating $F$ with respect to $s$ three times, using the equation \eqref{eq12} and the hypothesis $\alpha\mu_{ss}-(1-\alpha)\mu_s\neq0$, we obtain $F_{sss}\neq0.$
$(ii)$ The equation $F_{sss}=0$ implies
\begin{equation} \label{eq13}
\alpha\mu_{ss}-(1-\alpha)\mu_s=0.
\end{equation}
By differentiating $F$ with respect to $s$ four times, using the equation \eqref{eq13} and the hypothesis $\alpha^5\mu_{sss}\neq-(1-\alpha)^3((1-\alpha)^2+\alpha^2\mu),$ we obtain $F_{ssss}\neq0.$
\end{proof}
We highlight the following criterion (for further details, see \cite{Giblin4}), which is used for studying the behavior of singularities.
\begin{definition}[Criterion for versality]\label{versal} Let $H(X,z,s)=H(x,y,z,s)$ be a $3$-parameter family. Suppose that $H = H_s = 0$ at $(X_0,z_0, s_0)$ and that $h(s) = H(X_0,z_0,s)$ has an $A_r$ singularity at $s_0.$ Consider the partial derivatives $H_{x}, H_y, H_z$, evaluated at $(X_0,z_0,s_0)$, and, in particular, their Taylor polynomials $T_i$ up to degree $r-1,$ expanded about $s_0$ (so these have $r$ terms). The family $H(X,z,s)$ is called a versal unfolding of $h$ at $s_0$ if the $T_i$ span a vector space of dimension $r$. Thus, if the coefficients in the $T_i$ are placed as the columns of an $r \times 3$ matrix, the rank is $r.$ Clearly, this is possible only for $r \leq 3.$
\end{definition}
\begin{remark}[See \cite{Giblin4}]\label{T3}
It is known from singularity theory that, if a family $H$ satisfies the criterion of the definition above, then in a neighborhood of $(X_0,z_0)\in D_H$ the discriminant is locally diffeomorphic to a cuspidal edge surface when $r=2$, and to a swallowtail surface when $r=3$.
\end{remark}
In the next result, we show that $F$, defined by equation \eqref{F}, satisfies the criterion given in Definition \ref{versal}.
\begin{theorem}
The family $F$ satisfies the conditions of the criterion given in Definition \ref{versal}. Thus, when $f(s) = F(X_0,\alpha_0, s)$ has an $A_r$ singularity at $s_0, r = 2$ or $3$, in the cases covered by Proposition \ref{P5}, the discriminant $D_F$ is always locally diffeomorphic to a standard cuspidal edge ($r = 2$) or a standard swallowtail surface ($r = 3$) in a neighborhood of $(X_0,\alpha_0)$.
\end{theorem}
\begin{proof}
Recall that
$$
\begin{array}{ccl}
F(X,\alpha,s) & = & [X-\gamma,(1-\alpha)\gamma_s+\alpha\gamma_{ss}] \\
& = & (x-x(s))((1-\alpha)y_s+\alpha y_{ss})-(y-y(s))((1-\alpha)x_s+\alpha x_{ss}).
\end{array}$$
We shall prove that $F$ is versal, i.e., that the matrix $J$ below has rank $2$. $$J=\left(\begin{array}{ccc}
F_x & F_y & F_\alpha \\
F_{xs} & F_{ys} & F_{\alpha s} \\
\end{array}\right)=$$$$=\left(\begin{array}{ccc}
(1-\alpha)y_s+\alpha y_{ss} & -(1-\alpha)x_s-\alpha x_{ss} & \dfrac{\alpha}{(1-\alpha)^2+\mu\alpha^2} \\
(1-\alpha)y_{ss}-\alpha \mu y_{s} & -(1-\alpha)x_{ss}+\alpha \mu x_{s} & -\dfrac{1-\alpha}{(1-\alpha)^2+\mu\alpha^2} \\
\end{array}\right).
$$
Observe that the determinant of $J_1$ is $(1-\alpha)^2+\mu\alpha^2$, where $$J_1=\left(\begin{array}{cc}
F_x & F_y \\
F_{xs} & F_{ys} \\
\end{array}\right),$$ which is nonzero by assumption (see Remark \ref{r1} (a)). This settles the case where the singularity is of type $A_2$ at $s_0.$
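The determinant of $J_1$ can be double-checked symbolically. The following SymPy sketch (our own verification aid) evaluates it for the affine-arclength ellipse, where $[\gamma_s,\gamma_{ss}]=1$ and $\mu$ is constant, recovering $(1-\alpha)^2+\mu\alpha^2$.

```python
import sympy as sp

s, alpha = sp.symbols('s alpha')
a, b = sp.symbols('a b', positive=True)

# affine-arclength ellipse and its constant affine curvature
c = (a*b)**sp.Rational(1, 3)
xs, ys = a*sp.cos(s/c), b*sp.sin(s/c)
mu = (a*b)**sp.Rational(-2, 3)

# entries of J_1, using gamma_sss = -mu*gamma_s
Fx = (1 - alpha)*sp.diff(ys, s) + alpha*sp.diff(ys, s, 2)
Fy = -(1 - alpha)*sp.diff(xs, s) - alpha*sp.diff(xs, s, 2)
Fxs = (1 - alpha)*sp.diff(ys, s, 2) - alpha*mu*sp.diff(ys, s)
Fys = -(1 - alpha)*sp.diff(xs, s, 2) + alpha*mu*sp.diff(xs, s)

detJ1 = sp.simplify(Fx*Fys - Fy*Fxs)
assert sp.simplify(detJ1 - ((1 - alpha)**2 + mu*alpha**2)) == 0
```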
In the case where the singularity is of type $A_3$ at $s_0$, we shall prove that the matrix $\bar{J}$ below has rank $3$, which shows that $F$ is versal.
$$\bar{J}=\left(\begin{array}{ccc}
F_x & F_y & F_\alpha \\
F_{xs} & F_{ys} & F_{\alpha s} \\
F_{xss} & F_{yss} & F_{\alpha ss} \\
\end{array}\right)=$$$$=\dfrac{1}{(1-\alpha)^2+\mu\alpha^2}\left(\begin{array}{ccc}
(1-\alpha)y_s+\alpha y_{ss} & -(1-\alpha)x_s-\alpha x_{ss} & \alpha \\
(1-\alpha)y_{ss}-\alpha \mu y_{s} & -(1-\alpha)x_{ss}+\alpha \mu x_{s} & -(1-\alpha) \\
F_{xss} & F_{yss} & \dfrac{(1-\alpha)^2}{\alpha} \\
\end{array}\right),
$$
where the derivatives $F_{xss}$ and $F_{yss}$ are given by
$$F_{xss}=-(1-\alpha)^3y_s-\alpha^3y_{ss}-2\alpha^2(1-\alpha)\mu y_s$$
$$F_{yss}= (1-\alpha)^3x_s+\alpha^3x_{ss}+2\alpha^2(1-\alpha)\mu x_s.$$ Observe that, using equation \eqref{eq12}, the determinant of $\bar{J}$ is given by $(\alpha^2\mu+3(1-\alpha)^2)/\alpha$, which is zero iff $\mu(s_0)=-3(1-\alpha)^2/\alpha^2<0.$ If $\mu(s)$ does not change sign for $s\in I$, then $\gamma$ is not closed, which is a contradiction. If $\mu(s)$ does change sign, then $\mu(\bar{s_0})=0$ for some $\bar{s_0} \in I$; therefore $\gamma$ has an affine inflexion at this point, which is another contradiction.
\end{proof}
The next result presents properties of the behavior of discriminant surface. For details, see \cite{arnold,bruce}.
\begin{proposition}\label{swal}
\begin{itemize}
\item[(i)] For a cuspidal edge surface, with $(X_0,\alpha_0)$ on the line of cusps, the level sets of $h$ on $D_F$ will all be cusped curves, provided that the plane $K$ containing the level set of $h$ through $(X_0,\alpha_0)$ (i.e., the plane $\alpha=\alpha_0$) does not contain the tangent to the line of cusps through $(X_0,\alpha_0).$ (“$K$ is transverse to the line of cusps”.)
On the other hand, the level sets undergo a “beaks” or “lips” transition, provided that $K$ does contain this tangent but does not coincide with the limiting tangent plane to the cuspidal edge surface at points approaching $(X_0,\alpha_0).$ (“$K$ is transverse to this limiting tangent plane”.)
\item[(ii)] For a swallowtail surface, with $(X_0,\alpha_0)$ at the swallowtail point, the level sets on $D_F$ undergo a swallowtail transition, with two cusps merging and disappearing, provided that $K$ does not contain the limiting tangent to the lines of cusps on $D_F$ at $(X_0,\alpha_0)$. (“$K$ is transverse to this limiting tangent line”.)
\end{itemize}
\end{proposition}
In the next result, we prove that the transversality conditions of Proposition \ref{swal} are always satisfied for the discriminant $D_F$.
\begin{theorem}
The affine evolutoids $E_\alpha$ evolve locally according to a stable cusp at $A_2$ points, where the affine curvature satisfies $\mu\neq0$, and according to a swallowtail transition at $A_3$ points, where $\mu\neq0$. At all other points, the envelope $E_\alpha$ is a smooth curve.
\end{theorem}
\begin{proof}
Suppose that $f(s)=F(X_0,\alpha_0,s)$ has an $A_2$ or $A_3$ singularity at $s=s_0$. In the $A_2$ case, we know that $D_F$ is locally diffeomorphic to a cuspidal edge surface close to $(X_0,\alpha_0).$ This cuspidal edge is given by $F=F_s=F_{ss}=0$, i.e., three equations in the four variables $x, y, \alpha, s,$ whose solutions are then projected to $(x,y,\alpha)$-space, where $D_F$ lies. Let $J_2$ be the matrix obtained from $\bar{J}$ by adjoining the column formed by $F_s, F_{ss}$ and $F_{sss}$. Notice that this fourth column is the transpose of the vector $(0,0,F_{sss}\neq0)$ at an $A_2$ point, and the transpose of $(0,0,0)$ at an $A_3$ point. Thus, at an $A_2$ point, we can always find a kernel vector $(\bar{x},\bar{y},\bar{\alpha},\bar{s})$ whose first three components are not all zero, using the first two rows of $J_2$, and then determine $\bar{s}$ using the third row of $J_2$, since $F_{sss}\neq0.$ Then $(\bar{x},\bar{y},\bar{\alpha})$ is a nonzero tangent vector to the line of cusps $C$ on $D_F$ in $(x,y,\alpha)$-space. However, this cannot be done with $\bar{\alpha}=0$, in view of the non-singularity of $J_1$, a sub-matrix of $J_2$. So a tangent vector to $C$ is never horizontal, and changing $\alpha$ to nearby values gives a stable cusp.
There is clearly a problem with this argument at an $A_3$ point $(X_0,\alpha_0)$, where $F_{sss}=0$: in view of the non-singularity of $\bar{J}$, all kernel vectors of $J_2$ have the form $(0,0,0,\bar{s})$. This simply says that, in $(x,y,\alpha)$-space, the curve $C$ on $D_F$ is singular at $(x_0,y_0,\alpha_0)$, which is true, since $D_F$ is a swallowtail surface and the space curve $C$ itself has a cusp at $(x_0,y_0,\alpha_0)$. However, the above argument still applies, by taking a unit tangent vector $(\bar{x},\bar{y},\bar{\alpha})$ and moving towards $(x_0,y_0,\alpha_0)$ along $C$: the last component cannot tend to $0$ without the other two tending to $0$ as well, which is a contradiction. In the present case, we can be more explicit: a tangent vector to $C$, obtained from the first two rows of $J_2$, is $$\left(2\alpha(\alpha-1)x_{ss}+x_s(\alpha^2\mu-(1-\alpha)^2),2\alpha(\alpha-1)y_{ss}+y_s(\alpha^2\mu-(1-\alpha)^2),-((1-\alpha)^2+\alpha^2\mu)^2\right).$$ It is clear that this cannot have a limit in which the third component is $0.$
\end{proof}
\bibliographystyle{amsplain}
| {
"timestamp": "2017-05-08T02:00:54",
"yymm": "1705",
"arxiv_id": "1705.01973",
"language": "en",
"url": "https://arxiv.org/abs/1705.01973",
"abstract": "The envelope of straight lines affine normal to a plane curve C is its affine evolute; the envelope of the affine lines tangent to C is the original curve, together with the entire affine tangent line at each inflexion of C. In this paper, we consider plane curves without inflexions. We use some techniques of singularity theory to explain how the first envelope turns into the second, as the (constant) slope between the set of lines forming the envelope and the set of affine tangents to C changes from 0 to 1. In particular, we guarantee the existence of the first slope for which singularities occur. Moreover, we explain how these singularities evolve in the discriminant surface.",
"subjects": "Differential Geometry (math.DG)",
    "title": "Evolving Affine Evolutoids"
} |
https://arxiv.org/abs/1912.11518 | Fluctuations of the spectrum in rotationally invariant random matrix ensembles | We investigate traces of powers of random matrices whose distributions are invariant under rotations (with respect to the Hilbert--Schmidt inner product) within a real-linear subspace of the space of $n\times n$ matrices. The matrices we consider may be real or complex, and Hermitian, antihermitian, or general. We use Stein's method to prove multivariate central limit theorems, with convergence rates, for these traces of powers, which imply central limit theorems for polynomial linear eigenvalue statistics. In contrast to the usual situation in random matrix theory, in our approach general, nonnormal matrices turn out to be easier to study than Hermitian matrices. | \section{Introduction}
The limiting behavior of the eigenvalues of random matrices is a
central problem in modern probability, with applications and
connections in statistics, physics, and beyond. The eigenvalues of
the classical ensembles have been studied extensively, and much is
known. However, there are many other ensembles which are natural in
applied contexts that have been less thoroughly explored. In this
paper, we study the eigenvalues of rotationally invariant random
matrix ensembles; i.e., probability measures on real-linear spaces of
$n \times n$ matrices which are invariant under rotations within those
spaces. We emphasize that this is different from the more common
assumption of invariance under conjugation by orthogonal or unitary
$n\times n$ matrices; ensembles with the latter invariance property
are most often referred to as ``matrix models'' or as ``orthogonally
invariant'' or ``unitarily invariant'' ensembles, respectively, but
are unfortunately also sometimes referred to as rotationally
invariant. The spaces we consider include the spaces of all real or
complex $n \times n$ matrices, the space of all $n \times n$ Hermitian
matrices, or others. The classical Gaussian random matrix ensembles
are of this type, and so are random matrices chosen uniformly from the
sphere with respect to the Hilbert--Schmidt norm. Beyond the
classical Gaussian cases, such ensembles have been studied in the
physics literature (see, e.g.,
\cite{AdToKu,CaDe,DeCa,Guhr,LiZh,MuKl}), frequently under the names
``fixed trace ensembles'' (for matrices uniformly distributed on a
sphere for the Hilbert--Schmidt norm) or ``norm-dependent ensembles'';
fixed trace ensembles have also been investigated in the numerical
analysis literature \cite{Demmel,Edelman-cond,Edelman-det} and in more
mathematically oriented work on random matrix theory
\cite{EiKn,GoGoLe,GoGo}.
In this paper we investigate the fluctuations of traces of powers of
such random matrices, showing that these fluctuations have a jointly
Gaussian distribution, under certain hypotheses, in the
high-dimensional limit. This implies, in particular, that linear
eigenvalue statistics $\sum_{j=1}^n f(\lambda_j)$ are asymptotically
Gaussian, where $\lambda_1, \dots, \lambda_n$ are the eigenvalues of
our random matrix and $f$ is a polynomial function. Gaussian limits
for fluctuations of linear eigenvalue statistics have been studied
intensively for other random matrix ensembles; we mention in
particular \cite{AnZe,BaSi,Johansson2,LyPa,Shcherbina,SiSo} for
Wigner-type matrices (random Hermitian matrices whose entries on and
above the diagonal are independent),
\cite{DiEv,DoSt,DoSt2,Fulman2,Johansson,Soshnikov,Stein,Wieand} for
Haar-distributed random matrices from the classical compact groups,
and \cite{CiErSc,NoPe,Rider,RiSi,RiVi} for the typically most
difficult case of random matrices with all independent entries.
Our proofs are based on the infinitesimal or continuous version of
Stein's method of exchangeable pairs, which has found a number of
applications in random matrix theory, and which is particularly well
suited to the analysis of settings like ours that exhibit continuous
geometric symmetries. This method has been used to prove central
limit theorems for linear eigenvalue statistics for various random
matrix ensembles in \cite{DoSt,DoSt2,Fulman2,JoSm,JoSmWe,Stein,Stolz};
other applications in random matrix theory appear in
\cite{ChMe,Fulman1,EM-linear,MM-quantum}. We also mention
\cite{Chatterjee-gauge}, which does not apply Stein's method for
distributional approximation but uses a continuous family of
exchangeable pairs to prove identities for expectations, similar to
our proof of Theorem \ref{T:traces-of-powers-means} below; and
\cite{Chatterjee,NoPe}, which apply other versions of Stein's method
to investigate linear eigenvalue statistics of random matrices.
An unusual feature of our proofs is that they allow a unified approach
to both the Hermitian and non-Hermitian cases. More surprisingly, it
turns out that, in contrast to the usual situation in random matrix
theory, the non-Hermitian case is easier to handle here, for reasons
that will be discussed below.
One can reasonably object that in the non-Hermitian case it is natural
to consider more general linear eigenvalue statistics; for example in
the polynomial setting one should allow the test function $f$ to be a
polynomial in both $z$ and $\overline{z}$. However, as in the present
work, most known central limit theorems for linear eigenvalue
statistics for non-normal random matrices require $f$ to be analytic
or otherwise highly restricted (as in, e.g.,
\cite{NoPe,ORoRe,Rider,RiSi}), and even so, the proofs are more
difficult than in the Hermitian case. An exception is the very recent
work \cite{CiErSc}, which handles random matrices with i.i.d.\ complex
entries and test functions with only $2+\epsilon$ derivatives.
We now turn to a more precise description of the random matrix
ensembles we consider and our results.
The random matrices we consider are drawn from a real-linear subspace
$V$ of the space $\mat{n}{\mathbb{C}}$ of $n\times n$ matrices over $\mathbb{C}$. We
take $V$ to be one of the following: $\mat{n}{\mathbb{C}}$ itself; the space
$\mat{n}{\mathbb{R}}$ of $n\times n$ matrices over $\mathbb{R}$; the space
$\symmat{n}{\mathbb{R}}$ of real symmetric $n\times n$ matrices; the space
$\symmat{n}{\mathbb{C}}$ of complex Hermitian $n\times n$ matrices; the space
$\asymmat{n}{\mathbb{R}}$ of real antisymmetric $n\times n$ matrices; and the
space $\asymmat{n}{\mathbb{C}}$ of complex anti-Hermitian $n\times n$
matrices. All of these spaces are real inner product spaces with
respect to the inner product $\inprod{A}{B}=\Re\tr(AB^*)$, and have
the associated Hilbert--Schmidt norm $\norm{A} = \sqrt{\tr (AA^*)}$.
(We will also make some use of the complex (Hilbert--Schmidt) inner
product, and the operator norm $\norm{A}_{op}$.)
The distributions we consider on $V$ are rotationally invariant in the
sense that they are invariant under linear isometries of the entire
space $V$ equipped with this inner product; this is stronger than the
more commonly considered property of invariance under multiplication
or conjugation by a unitary matrix in $\mat{n}{\mathbb{C}}$. If $X \in V$ has
a rotationally invariant distribution, then we can write
$X = \norm{X} \widetilde{X}$, where $\widetilde{X}$ is uniformly
distributed on the unit sphere (with respect to the Hilbert--Schmidt
norm) of $V$ and is independent from $\norm{X}$. (In fact, in the
proofs below it will be convenient to use a slightly different
normalization for $\widetilde{X}$.)
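The decomposition $X=\norm{X}\widetilde{X}$ can be illustrated numerically; the following sketch (ours, with $V=\mat{n}{\mathbb{R}}$ for concreteness) samples the uniform direction by normalizing a Gaussian point of $V$, whose rotational invariance makes the direction uniform on the Hilbert--Schmidt sphere.

```python
import numpy as np

# Illustrative sketch (not from the paper): a point uniformly distributed
# on the Hilbert--Schmidt sphere of radius sqrt(n) in V = M_n(R), obtained
# by normalizing a Gaussian point of V; rotational invariance of the
# Gaussian makes the direction uniform, independent of the discarded radius.
rng = np.random.default_rng(0)
n = 6
G = rng.standard_normal((n, n))          # Gaussian point of V = M_n(R)
X = np.sqrt(n) * G / np.linalg.norm(G)   # uniform on the sphere ||X|| = sqrt(n)
print(np.linalg.norm(X) ** 2)            # equals n exactly
```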
Rotationally invariant distributions can also be described concretely
in terms of orthonormal bases on each space. Let $E_{jk}$ denote the
$n\times n$ matrix with a one in the $(j,k)$ position and zeroes
everywhere else. For $j<k$, let
$F_{jk}=\frac{1}{\sqrt{2}}(E_{jk}+E_{kj})$ and
$G_{jk}=\frac{1}{\sqrt{2}}(E_{jk}-E_{kj})$. Let $d$ be the (real)
dimension of $V$. We denote by $\{B_\alpha\}_{\alpha=1}^d$ orthonormal
bases (with respect to the real inner product
$\inprod{A}{B}=\Re\tr(AB^*)$) for the spaces $V$ above, as follows.
\begin{center}\begin{tabular}{|c|c|}
\hline
$V$&$\phantom{\Big|}\{B_\alpha\}_{\alpha=1}^d$\\
\hline\hline
$\mat{n}{\mathbb{C}}$&$\phantom{\Big|}\{E_{jk}\}_{1\le j,k\le n}\cup \{iE_{jk}\}_{1\le
j,k\le n}$\\
\hline
$\mat{n}{\mathbb{R}}$&$\phantom{\Big|}\{E_{jk}\}_{1\le j,k\le n}$\\
\hline
$\symmat{n}{\mathbb{R}}$&$\phantom{\Big|}\{E_{jj}\}_{1\le j\le n}\cup \{F_{jk}\}_{1\le
j<k\le n}$\\
\hline
$\symmat{n}{\mathbb{C}}$&$\phantom{\Big|}\{E_{jj}\}_{1\le j\le n}\cup \{F_{jk}\}_{1\le
j<k\le n}\cup \{iG_{jk}\}_{1\le
j<k\le n}$\\
\hline
$\asymmat{n}{\mathbb{R}}$&$\phantom{\Big|} \{G_{jk}\}_{1\le
j<k\le n}$\\
\hline
$\asymmat{n}{\mathbb{C}}$&$\phantom{\Big|}\{iE_{jj}\}_{1\le j\le n}\cup \{G_{jk}\}_{1\le
j<k\le n}\cup \{iF_{jk}\}_{1\le
j<k\le n}$\\
\hline
\end{tabular}\end{center}
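The table above can be checked directly; the following sketch (our illustration, for the Hermitian case $V=\symmat{n}{\mathbb{C}}$ with $n=3$) builds the listed basis and verifies that it is orthonormal for the real inner product $\inprod{A}{B}=\Re\tr(AB^*)$, with $d=n^2$.

```python
import numpy as np

# Sketch (ours): the orthonormal basis of Herm(n, C) listed in the table,
# using the matrices E_{jk}, F_{jk}, G_{jk} defined in the text.
n = 3
def E(j, k):
    M = np.zeros((n, n), dtype=complex); M[j, k] = 1; return M
F = lambda j, k: (E(j, k) + E(k, j)) / np.sqrt(2)
G = lambda j, k: (E(j, k) - E(k, j)) / np.sqrt(2)

basis = [E(j, j) for j in range(n)]
basis += [F(j, k) for j in range(n) for k in range(j + 1, n)]
basis += [1j * G(j, k) for j in range(n) for k in range(j + 1, n)]
assert len(basis) == n * n               # d = n^2 for Hermitian matrices

# Real inner product <A, B> = Re tr(A B*); the Gram matrix is the identity.
inprod = lambda A, B: np.real(np.trace(A @ B.conj().T))
Gram = np.array([[inprod(A, B) for B in basis] for A in basis])
assert np.allclose(Gram, np.eye(n * n))
```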
For each choice of $V$, consider a random vector
$\{X_\alpha\}_{\alpha=1}^d$ with a rotationally invariant distribution
in $\mathbb{R}^d$, normalized so that \(\mathbb{E} \sum_{\alpha=1}^d X_\alpha^2=n,\) and define
$$X=\sum_{\alpha=1}^{d}X_\alpha B_\alpha.$$
The random matrix $X \in V$ then has a rotationally invariant
distribution in $V$ and satisfies \(\mathbb{E}\|X\|^2=n\). Note that choosing
the random vector $\{X_\alpha\}_{\alpha=1}^d$ according to a Gaussian
distribution results in various classical random matrix ensembles: in
the case of unrestricted real or complex matrices, we have the real,
respectively complex Ginibre ensembles, and in the case of real
symmetric or complex Hermitian matrices, we have the Gaussian
Orthogonal Ensemble (GOE) and Gaussian Unitary Ensemble (GUE),
respectively.
Our first main result identifies the means of the random variables
$W_p = \tr X^p$ for $p \in \mathbb{N}$. This result is essentially known, and
can easily be deduced from the Gaussian cases, where the
classical proofs make essential use of the independence of the
entries. Here we give an independent proof which is an easy
by-product of the analysis of the exchangeable pair used to prove
Theorems \ref{T:traces-of-powers-clt-nonnormal} and
\ref{T:traces-of-powers-clt-normal} below. (As noted above, a similar
approach was used in \cite{Chatterjee-gauge} to prove identities for
expectations of functions of random orthogonal matrices.)
\begin{thm}\label{T:traces-of-powers-means}
Let $X$ be a random matrix in $V\subseteq \mat{n}{\mathbb{C}}$ as above,
whose distribution is invariant under rotations of $V$.
Suppose that $\mathbb{E}\|X\|^2=n$ and that for each $k$, there is a
constant $\alpha_k$ depending only on $k$ such that
\[
t_k(X)=\left|n^{-k/2}\mathbb{E}\|X\|^{k}-1\right|\le\frac{\alpha_k}{n}.
\]
For $p\in\mathbb{N}$, let $W_p=\tr(X^p)$. In all cases, if $p$ is odd, then
$\mathbb{E} W_p=0$. For $p=2r$,
\[
\mathbb{E} W_p=\begin{cases}0 & \text{if } V=\mat{n}{\mathbb{C}},\\
1+O\left(\frac{1}{n}\right) & \text{if } V=\mat{n}{\mathbb{R}},\\
nC_r+O(1) & \text{if } V=\symmat{n}{\mathbb{C}} \text{ or } \symmat{n}{\mathbb{R}},\\
(-1)^{r}nC_r+O(1) & \text{if }
V=\asymmat{n}{\mathbb{C}} \text{ or } \asymmat{n}{\mathbb{R}},
\end{cases}
\]
where $C_r = \frac{1}{r+1}\binom{2r}{r}$ is the $r$th Catalan
number.
\end{thm}
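A quick Monte Carlo sanity check (ours) of the even-moment formula for $V=\symmat{n}{\mathbb{C}}$: a GUE matrix normalized so that $\mathbb{E}\|X\|^2=n$ is a special case of the ensembles above, and Theorem \ref{T:traces-of-powers-means} predicts $\mathbb{E} W_{2r}=nC_r+O(1)$.

```python
import numpy as np

# Sketch (ours, not from the paper): for a GUE matrix with entry variance
# 1/n we have E||X||^2 = n, and E tr(X^4) should be close to n*C_2 = 2n.
rng = np.random.default_rng(1)
n, trials = 80, 60
def gue(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    X = (A + A.conj().T) / 2             # Hermitian
    return X / np.sqrt(n)                # entry variance 1/n, E||X||^2 = n
m4 = np.mean([np.trace(np.linalg.matrix_power(gue(n), 4)).real
              for _ in range(trials)])
print(m4 / n)                            # close to C_2 = 2
```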
In Theorem \ref{T:traces-of-powers-means}, as well as all the
following results, the $O$ terms refer to $n \to \infty$, with implied
constants that may depend on $p$ (or $m$ below) and the constants
$\alpha_k$, but do not otherwise depend on the precise distribution of
$X$.
In just the first case of Theorem \ref{T:traces-of-powers-means} (when
$V = \mat{n}{\mathbb{C}}$), the hypothesis on $t_k(X)$ can be replaced by the
weaker assumption that $t_k(X) < \infty$ for each $k$; that is, simply
that all moments of $\norm{X}$ are finite. In the other cases that
hypothesis can be weakened to assuming each $t_k(X)$ is $o(1)$, at the
expense of a more complicated version of the error terms. We have
chosen here to assume a simple and quite mild hypothesis that lets us
state a clean result.
Theorems \ref{T:traces-of-powers-clt-nonnormal} and
\ref{T:traces-of-powers-clt-normal} describe the fluctuations of the
$W_p$, formulated as comparisons of integrals of $C^2$ test
functions. In what follows, for $g\in C^2(\mathbb{R}^m)$,
\[
M_1(g)=\sup_{x\in\mathbb{R}^m}|\nabla g(x)|
\] denotes the Lipschitz constant of $g$ and
\[
M_2(g)=\sup_{x\in\mathbb{R}^m}\|\mathrm{Hess\,}(g)(x)\|_{op}
\]
the maximum operator norm of the Hessian of $g$. For $g:\mathbb{C}^m\to\mathbb{R}$,
these quantities are computed by identifying $g$ with a function on
$\mathbb{R}^{2m}$.
We begin with the cases of unrestricted real or complex $n \times n$
matrices.
\begin{thm}\label{T:traces-of-powers-clt-nonnormal}
Let $X$ be a random matrix in $V = \mat{n}{\mathbb{C}}$ or $\mat{n}{\mathbb{R}}$,
whose distribution is invariant under rotations of $V$.
Suppose that $\mathbb{E}\|X\|^2=n$ and that for each $k$, there is a
constant $\alpha_k$ depending only on $k$ such that
\[
t_k(X)=\left|n^{-k/2}\mathbb{E}\|X\|^{k}-1\right|\le\frac{\alpha_k}{n}.
\]
Fix $m\in\mathbb{N}$ with $m \ge 3$ and
\[W=(W_1,W_2,\ldots,W_m)=(\tr(X),\tr(X^2),\ldots,\tr(X^m))\in\mathbb{R}^m.\]
\begin{enumerate}
\item If $V=\mat{n}{\mathbb{C}}$ and $G$ is a standard complex Gaussian random vector in $\mathbb{C}^m$, then for any $f\in C^2(\mathbb{C}^m)$,
\[\big|\mathbb{E} f(W)-\mathbb{E} f(\Sigma^{1/2}G)\big|\le \frac{\kappa_mM_2(f)}{n},\]
where $\Sigma$ is the diagonal matrix with $p$-$p$ entry given by
\(\sigma_{pp}=p\) and $\kappa_m$ is a positive constant depending only
on $m$ and $\alpha_1,\ldots, \alpha_m$.
\item If $V=\mat{n}{\mathbb{R}}$ and $G$ is a standard Gaussian random vector in $\mathbb{R}^m$, then for any $f\in C^2(\mathbb{R}^m)$,
\[\big|\mathbb{E} f(W-\mathbb{E} W)-\mathbb{E} f(\Sigma^{1/2}G)\big|\le \frac{\kappa_m(M_1(f)+M_2(f))}{n},\]
where $\Sigma$ is the diagonal matrix with $p$-$p$ entry given by
\(\sigma_{pp}=p\) and $\kappa_m$ is a positive constant depending only
on $m$ and $\alpha_1,\ldots, \alpha_m$.
\end{enumerate}
\end{thm}
As in Theorem \ref{T:traces-of-powers-means}, the hypothesis on $t_k$
can be weakened somewhat, at the expense of a more complicated version
of the conclusion.
Theorem \ref{T:traces-of-powers-clt-nonnormal} immediately implies the
following.
\begin{cor}
\label{T:polynomial-clt-nonnormal}
For each $n$, let $X_n$ be an $n\times n$ random matrix satisfying the hypotheses
of Theorem \ref{T:traces-of-powers-clt-nonnormal} (with constants
$\alpha_k$ independent of $n$). Given any polynomial function
$g:\mathbb{C} \to \mathbb{C}$, define $X_{g,n} = \tr g(X_n)$. Then the stochastic
process $\{X_{g,n} - n g(0) \}_g$ indexed by polynomials converges
as $n \to \infty$, in the sense of finite dimensional distributions,
to a centered complex-valued Gaussian process $\{Z_g\}_g$ with
covariance given by
\[
\mathbb{E} Z_g \overline{Z_h} = \frac{1}{\pi} \int_D g'(z)
\overline{h'(z)} \ d^2 z,
\]
where $D = \Set{z \in \mathbb{C}}{\abs{z} \le 1}$ and $d^2 z$ refers to
integration with respect to Lebesgue measure.
\end{cor}
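As a consistency check (ours, not in the paper), evaluating the covariance formula on the monomials $g(z)=z^p$, $h(z)=z^q$ recovers the diagonal covariance $\Sigma$ of Theorem \ref{T:traces-of-powers-clt-nonnormal}:

```latex
\mathbb{E} Z_g\overline{Z_h}
  = \frac{pq}{\pi}\int_D z^{p-1}\,\overline{z}^{\,q-1}\,d^2z
  = \frac{pq}{\pi}\int_0^1\!\!\int_0^{2\pi} r^{p+q-1}e^{i(p-q)\theta}\,d\theta\,dr
  = \begin{cases} p & \text{if } p=q,\\ 0 & \text{otherwise,}\end{cases}
```

since the angular integral vanishes unless $p=q$, in which case the expression reduces to $2p^2\int_0^1 r^{2p-1}\,dr = p$, matching $\sigma_{pp}=p$.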
Results of the same form as Corollary \ref{T:polynomial-clt-nonnormal}
are proved in \cite{RiSi,CiErSc} and \cite{NoPe} for random matrices
with independent complex or real entries (satisfying some technical
conditions), respectively. In \cite{RiSi} the test functions need not
be polynomials, but are required to be analytic on a neighborhood
of the disc of radius $4$; in \cite{CiErSc} this is weakened
substantially to $C^{2+\epsilon}$ test functions. One might hope to
extend Corollary \ref{T:polynomial-clt-nonnormal} from polynomials to
analytic or still more general test functions by approximation (as is
done, for example, in \cite{DoSt} in the case of Haar-distributed
random unitary matrices). However, the dependence of the constants
$\kappa_m$ in Theorem \ref{T:traces-of-powers-clt-nonnormal} on $m$
provided by our proofs is insufficient to carry out such an
approximation argument. (Moreover, as seen in \cite{RiVi,CiErSc},
extending beyond analytic functions requires a more complicated
description of the limiting covariance structure.)
\bigskip
There are several key differences between the Hermitian case and the
case of unrestricted complex matrices, the most crucial of which is
that $W_2=\tr(X^2)=\tr(XX^*)=\|X\|^2$ when $X$ is Hermitian. In
particular, a multivariate central limit theorem cannot hold in
general for the vector
\[W=(W_1,W_2,\ldots,W_m)\]
because the second component need not have Gaussian fluctuations. In the case of $X$ uniformly distributed on the sphere of
radius $\sqrt{n}$ in $\symmat{n}{\mathbb{C}}$, $W_2$ is deterministic, and so
one could hope for a central limit theorem involving a covariance
matrix of rank $m-1$ (and indeed this is the case).
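The identity driving this rank deficiency is elementary and easy to check numerically; the sketch below (ours) verifies $\tr(X^2)=\tr(XX^*)=\|X\|^2$ for a Hermitian $X$.

```python
import numpy as np

# Sketch (ours): for Hermitian X, W_2 = tr(X^2) equals the squared
# Hilbert--Schmidt norm, so it is deterministic on the sphere ||X|| = sqrt(n).
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
X = A + A.conj().T                       # Hermitian
w2 = np.trace(X @ X).real
assert np.isclose(w2, np.linalg.norm(X, 'fro') ** 2)
```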
A related difference from the non-Hermitian case is that $\mathbb{E} W_p$ is
of order $n$ for all even $p$ in the Hermitian case; a consequence of
this fact is that it is necessary to make a stronger (though still
rather mild) concentration hypothesis for $\norm{X}$ than in Theorems
\ref{T:traces-of-powers-means} and
\ref{T:traces-of-powers-clt-nonnormal}.
\begin{thm}\label{T:traces-of-powers-clt-normal}
Let $X$ be a random matrix in $V = \symmat{n}{\mathbb{C}}$ or $\symmat{n}{\mathbb{R}}$,
whose distribution is invariant under rotations of $V$.
Suppose that $\mathbb{E}\|X\|^2=n$ and that for each $k$, there is a
constant $\alpha_k$ depending only on $k$ such that
\[t_k(X)=\left|n^{-k/2}\mathbb{E}\|X\|^{k}-1\right|\le\frac{\alpha_k}{n^2}.\]
Fix $m\in\mathbb{N}$ with $m \ge 3$ and
\[W=(W_1,W_2,\ldots,W_m)=(\tr(X),\tr(X^2),\ldots,\tr(X^m))\in\mathbb{R}^m.\]
Let $Y_2=\|X\|^2-n$ and define
\[
Z=(Z_1,Z_3,Z_4,\ldots,Z_m)\qquad\qquad Z_p=W_p-\mathbb{E} W_p-\frac{p\mathbb{E}
W_p}{2n}Y_2.
\]
Let $\Sigma=A^{-1}B,$ where $A$ and $B$ are indexed by
$\{1,\ldots,m\}\setminus\{2\}$ with entries
\[
a_{pq}=\begin{cases}
-2pC_{(p-2-q)/2}& \text{if $1\le q\le p-2$ and $p-q$ is even,}\\
p& \text{if }q=p,\\
0&\text{otherwise},
\end{cases}
\]
and
\[
b_{pq}=2pq \begin{cases}
C_{(p+q-2)/2} - C_{p/2} C_{q/2} & \text{if $p$ and $q$ are both even,} \\
C_{(p+q-2)/2} & \text{if $p$ and $q$ are both odd,} \\
0 & \text{if $p$ and $q$ have opposite parities}.
\end{cases}
\]
where $C_r$ again denotes the $r$th Catalan number.
Then for any $f\in C^2(\mathbb{R}^{m-1})$,
\[
\big|\mathbb{E} f(Z)-\mathbb{E} f(\Sigma^{1/2}G)\big|\le
\frac{\kappa_m(M_1(f)+M_2(f))}{n},
\]
where $\kappa_m$ is a positive constant depending only on $m$
and $\alpha_1,\ldots, \alpha_m$, and $G$ is a standard
Gaussian random vector in $\mathbb{R}^{m-1}$.
\end{thm}
It is not obvious from the form given in the statement of Theorem
\ref{T:traces-of-powers-clt-normal} that the covariance matrix
$\Sigma$ is symmetric, let alone positive semidefinite. It will,
however, follow from the proof of Theorem
\ref{T:traces-of-powers-clt-normal} that this is indeed the case.
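The following sketch (ours) implements the formulas for $A$ and $B$ verbatim and checks, for small $m$, that $\Sigma=A^{-1}B$ is indeed symmetric and positive semidefinite:

```python
import numpy as np
from math import comb

# Sketch (ours): the matrices A and B of the theorem, indexed by
# {1,...,m} \ {2}, with catalan(r) = C_r the r-th Catalan number.
catalan = lambda r: comb(2 * r, r) // (r + 1)

def sigma(m):
    idx = [p for p in range(1, m + 1) if p != 2]
    A = np.zeros((len(idx), len(idx)))
    B = np.zeros_like(A)
    for i, p in enumerate(idx):
        for j, q in enumerate(idx):
            if q == p:
                A[i, j] = p
            elif 1 <= q <= p - 2 and (p - q) % 2 == 0:
                A[i, j] = -2 * p * catalan((p - 2 - q) // 2)
            if p % 2 == q % 2:           # b_{pq} = 0 for opposite parities
                c = catalan((p + q - 2) // 2)
                if p % 2 == 0:
                    c -= catalan(p // 2) * catalan(q // 2)
                B[i, j] = 2 * p * q * c
    return np.linalg.solve(A, B)         # Sigma = A^{-1} B

S = sigma(5)
assert np.allclose(S, S.T)                       # symmetric
assert np.linalg.eigvalsh(S).min() > -1e-9       # positive semidefinite
```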
Theorem \ref{T:traces-of-powers-clt-normal} immediately implies a
multivariate central limit theorem for traces of \emph{odd} powers of
$X$. It also implies a central limit theorem for traces of powers
other than $2$ if $X$ is uniformly distributed on the sphere of radius
$\sqrt{n}$ in $\symmat{n}{\mathbb{C}}$ or $\symmat{n}{\mathbb{R}}$, or more generally
if $\mathbb{E} \abs{Y_2} = o(1)$. In either of those two situations, one can
deduce a result analogous to Corollary
\ref{T:polynomial-clt-nonnormal}, although with more complicated
expressions both for the means (as seen in Theorem
\ref{T:traces-of-powers-means}) and for the covariance; for brevity we
omit a precise statement here. Analogous results for Wigner matrices,
in various levels of generality, were proved in
\cite{Johansson2,LyPa,Shcherbina,SiSo}.
Rotationally invariant ensembles of antihermitian matrices reduce to
the Hermitian case: if $X$ is a rotationally invariant Hermitian
random matrix, then $iX$ is a rotationally invariant antihermitian
matrix, and in particular $\tr (iX)^p = i^p \tr (X^p)$. A version of
Theorem \ref{T:traces-of-powers-clt-normal} for antihermitian matrices
is therefore a formal consequence of Theorem
\ref{T:traces-of-powers-clt-normal} itself. The explicit statement
will be somewhat complicated, however, since the random vector $Z$
will be distributed in a particular $(m-1)$-dimensional real subspace
of $\mathbb{C}^{m-1}$.
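The reduction is easy to verify numerically; the sketch below (ours) checks that $iX$ is anti-Hermitian and that $\tr\left((iX)^p\right)=i^p\tr(X^p)$ for a Hermitian $X$.

```python
import numpy as np

# Sketch (ours): the antihermitian case reduces to the Hermitian one
# via X -> iX, with tr((iX)^p) = i^p tr(X^p).
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
X = A + A.conj().T                                 # Hermitian
assert np.allclose((1j * X).conj().T, -(1j * X))   # iX is anti-Hermitian
for p in range(1, 6):
    lhs = np.trace(np.linalg.matrix_power(1j * X, p))
    rhs = (1j ** p) * np.trace(np.linalg.matrix_power(X, p))
    assert np.isclose(lhs, rhs)
```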
In contrast, the case of real antisymmetric matrices requires an
independent analysis. Note in particular that if $X \in
\asymmat{n}{\mathbb{R}}$ then $\tr (X^p) = 0$ for every odd $p$. We have the
following result for such random matrices.
\begin{thm}\label{T:real-asymm-clt}
Let $X$ be a random matrix in $V = \asymmat{n}{\mathbb{R}}$
whose distribution is invariant under rotations of $V$.
Suppose that $\mathbb{E}\|X\|^2=n$ and that for each $k$, there is a
constant $\alpha_k$ depending only on $k$ such that
\[t_k(X)=\left|n^{-k/2}\mathbb{E}\|X\|^{k}-1\right|\le\frac{\alpha_k}{n^2}.\]
Let
$m \ge 4$ be even and
\[
W=(W_4,W_6,\ldots,W_m)=(\tr(X^4),\tr(X^6),\ldots,\tr(X^m)).
\]
Then for each even $p$,
\[
\mathbb{E} W_p= (-1)^{p/2} n C_{p/2}+O(1),
\]
where $C_{p/2}$ is the $(p/2)$th Catalan number.
Let $Y_2=\|X\|^2-n$ and define
\[
Z=(Z_4,Z_6,\ldots,Z_m)\qquad\qquad Z_p=W_p-\mathbb{E} W_p-(-1)^{p/2}\frac{p\mathbb{E}
W_p}{2n}Y_2.
\]
Let $\Sigma=A^{-1}B,$ where $A$ and $B$ have entries (for $p,q \ge 4$
even)
\[
a_{pq}=
\begin{cases}
(-1)^{(p-q)/2}2 C_{(p-2-q)/2} & \text{if } 4\le q\le p-2,\\
1 & \text{if } q=p, \\
0 &\mbox{otherwise},\end{cases}
\]
and
\[
b_{pq} = (-1)^{(p+q)/2} q (C_{(p+q-2)/2} - C_{p/2} C_{q/2}).
\]
Then for any $f\in C^2(\mathbb{R}^{(m-2)/2})$,
\[\big|\mathbb{E} f(Z)-\mathbb{E} f(\Sigma^{1/2}G)\big|\le
\frac{\kappa_m(M_1(f) + M_2(f))}{n},\] where $\kappa_m$ is a
positive constant depending only on $m$ and
$\alpha_1,\ldots, \alpha_m$, and $G$ is a standard Gaussian random
vector in $\mathbb{R}^{(m-2)/2}$.
\end{thm}
To our knowledge, the only previous paper whose results explicitly
include a central limit theorem for linear eigenvalue statistics of
real antisymmetric random matrices is \cite{ORoRe}, although the
methods of most previous works on Hermitian random matrices could
presumably be adapted to cover the real antisymmetric case as well.
Our results are proved using a general Gaussian approximation theorem
for exchangeable pairs \cite{EM-Stein-multi,DoSt}. In section
\ref{S:pair} below we state the general approximation theorem, define
the exchangeable pair for an arbitrary matrix subspace $V$, and carry
out as much of the analysis as possible without specifying $V$; this
may be characterized as the essentially ``algebraic'' part of our
proofs. The remaining sections carry out the ``asymptotic'' part of
the argument, on a case-by-case basis, for each of the subspaces $V$
considered here. In sections \ref{S:nonsym-c} and \ref{S:nonsym-r} we
prove Theorems \ref{T:traces-of-powers-means} and
\ref{T:traces-of-powers-clt-nonnormal} for the cases of
$V = \mat{n}{\mathbb{C}}$ and $\mat{n}{\mathbb{R}}$, respectively. In section
\ref{S:Hermitian} we prove Theorems \ref{T:traces-of-powers-means} and
\ref{T:traces-of-powers-clt-normal} for $V = \symmat{n}{\mathbb{C}}$. In
section \ref{S:symmetric} we indicate how to modify the proofs of
section \ref{S:Hermitian} for $V = \symmat{n}{\mathbb{R}}$. The proof of
Theorem \ref{T:real-asymm-clt} is yet another variation on the same
theme, and is omitted.
\section{Common framework: The exchangeable pair}\label{S:pair}
As discussed in the introduction, the proof of Theorem
\ref{T:traces-of-powers-means} is essentially a by-product of the
proofs of Theorems \ref{T:traces-of-powers-clt-nonnormal} and
\ref{T:traces-of-powers-clt-normal}, and so we postpone the proof of
Theorem \ref{T:traces-of-powers-means} for the moment. The other main
theorems are proved via a version of Stein's method. The complex form
of the multivariate infinitesimal version of Stein's method of
exchangeable pairs stated below is due to D\"obler and Stolz
\cite{DoSt}, following earlier work of E.\ Meckes
\cite{EM-Stein-multi} in the real case.
\begin{thm}\label{T:inf-abstract}
Let $W$ be a centered random vector in $\mathbb{C}^m$ and, for each
$\epsilon\in(0,1)$, suppose that $(W,W_\epsilon)$ is an exchangeable
pair. Let $\mathcal{G}$ be a $\sigma$-algebra with respect to which
$W$ is measurable. Suppose that there is an invertible matrix
$\Lambda$, a symmetric, non-negative definite matrix $\Sigma$, a
$\mathcal{G}$-measurable random vector $E \in \mathbb{C}^m$,
$\mathcal{G}$-measurable random matrices $E', E'' \in \mat{m}{\mathbb{C}}$,
and a deterministic function $s(\epsilon)$ such that
\begin{enumerate}
\item \label{inf-lincond}
$$\frac{1}{s(\epsilon)}\mathbb{E}\left[W_\epsilon-W\big|\mathcal{G}\right]\xrightarrow[\epsilon\to0]{L_1}-\Lambda W+E,$$
\item \label{inf-quadcond}
$$\frac{1}{s(\epsilon)}\mathbb{E}\left[(W_\epsilon-W)(W_\epsilon-W)^*\big|\mathcal{G}\right]\xrightarrow[\epsilon\to0]{L_1(\|\cdot\|)}2\Lambda\Sigma+E',$$
\item \label{inf-quadcond2}
$$\frac{1}{s(\epsilon)}\mathbb{E}\left[(W_\epsilon-W)(W_\epsilon-W)^T\big|\mathcal{G}\right]\xrightarrow[\epsilon\to0]{L_1(\|\cdot\|)}E''.$$
\item \label{inf-tricond}
For each $\rho>0$,
$$\lim_{\epsilon\to0}\frac{1}{s(\epsilon)}\mathbb{E}\left[|W_\epsilon-W|^2\textbf{1}(|W_\epsilon-W|^2>\rho)\right]=0.$$
\end{enumerate}
Then for $g\in C^2(\mathbb{C}^m)$,
\begin{equation}\begin{split}\label{inf-bd1}
\big|\mathbb{E} g(W)-\mathbb{E}
g(\Sigma^{1/2}Z)\big|&\le\|\Lambda^{-1}\|_{op}\left[
M_1(g)\mathbb{E}|E|+\frac{\sqrt{m}}{4}M_2(g)
\left(\mathbb{E}\|E'\|+\mathbb{E}\|E''\|\right)\right],
\end{split}\end{equation}
where $Z$ is a standard complex Gaussian random vector in $\mathbb{C}^m$;
i.e., $Z_j=X_j+iY_j$, where $\{X_1,Y_1,\ldots,X_m,Y_m\}$ are i.i.d.\
$\mathcal{N}\left(0,\tfrac{1}{2}\right)$, and $\abs{E}$
denotes the Euclidean norm of the random vector $E$.
\end{thm}
\noindent \emph {Remarks:}
\begin{enumerate}
\item To recover the real case of Theorem \ref{T:inf-abstract}, one
omits condition (3) and the term $\mathbb{E} \norm{E''}$ in
\eqref{inf-bd1}. The real case will be used for all the proofs
below except for the case of $V = \mat{n}{\mathbb{C}}$.
\item In practice, we typically replace condition (4) with
the formally stronger condition
\[
\lim_{\epsilon\to0}\frac{1}{s(\epsilon)}\mathbb{E}|W_\epsilon-W|^3=0.
\]
This condition is trivially satisfied in our applications, since
$W_\epsilon$ is constructed so that $W_\epsilon-W=\epsilon Y$ for
some random vector $Y$ with $\mathbb{E}|Y|^3<\infty$.
\end{enumerate}
\bigskip
A parametrized family $(X,X_\epsilon)$ of
exchangeable pairs of random matrices can be constructed as
follows. As above, let $X = \sum_{\alpha=1}^d X_\alpha B_\alpha$,
where $\{X_\alpha\}_{\alpha = 1}^d$ is a random vector in $\mathbb{R}^d$ with a
rotationally invariant distribution and $\{B_\alpha\}_{\alpha = 1}^d$
is an orthonormal basis of a $d$-dimensional subspace $V$ of
$\mat{n}{\mathbb{C}}$. We assume that $\mathbb{E} \norm{X}^2 = n$ and that $\mathbb{E}
\norm{X}^{2m} < \infty$.
For a $d \times d$ matrix $A=\left[a_{jk}\right]_{j,k=1}^d$ in the
orthogonal group $\Orthogonal{d}$, denote by $A(X)$ the transformation
of $X$ given by
\[
A(X)=\sum_{\alpha=1}^d\left(\sum_{\beta=1}^da_{\alpha\beta}X_\beta\right)B_\alpha.
\]
Now fix $\epsilon$, and let
$$R_\epsilon=\begin{bmatrix}\sqrt{1-\epsilon^2}&\epsilon\\-\epsilon&
\sqrt{1-\epsilon^2}\end{bmatrix}\oplus I_{d-2}\in\Orthogonal{d}.$$
That is, $R_\epsilon$ represents a rotation by $\arcsin(\epsilon)$ in
the plane spanned by the first two standard basis vectors of $\mathbb{R}^d$.
Choose $U\in \Orthogonal{d}$ according to Haar measure, independent
of $X$, and let
$$X_\epsilon=(UR_\epsilon U^T)(X).$$
That is, $X_\epsilon$ is a small random rotation (in matrix space) of the random matrix $X$, and so $(X,X_\epsilon)$ is exchangeable for each $\epsilon$.
For each $p\in\{1,\ldots,m\}$, define
\[W_{\epsilon,p}:=\tr(X_\epsilon^p);\]
the $m$-dimensional random vectors $(W,W_\epsilon)$ are then exchangeable for each $\epsilon$.
To apply Theorem \ref{T:inf-abstract}, the difference $W_\epsilon-W$
must be expanded in powers of $\epsilon$. First,
\[UR_\epsilon U^T=U\left[I_d+\epsilon C\oplus 0_{d-2}+\left(-\frac{\epsilon^2}{2}+O(\epsilon^4)\right)I_2\oplus 0_{d-2}\right]U^T,\]
where $0_{n}$ is the $n\times n$ matrix of all zeroes, $C$ is the $2\times 2$ matrix
\[C=\begin{bmatrix}0&1\\-1&0\end{bmatrix},\]
and the $O(\epsilon^4)$ is the deterministic error in replacing $\sqrt{1-\epsilon^2}-1$ by $-\frac{\epsilon^2}{2}$. Letting $K$ denote the first two columns of $U$ and $Q:=KCK^T$, we have
\[UR_\epsilon U^T=I_d+\epsilon Q+\left(-\frac{\epsilon^2}{2}+O(\epsilon^4)\right)KK^T.\]
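The expansion above can be checked numerically; the following sketch (ours) builds $R_\epsilon$, a random orthogonal $U$, and $Q=KCK^T$, and verifies that $UR_\epsilon U^T$ agrees with $I_d+\epsilon Q-\tfrac{\epsilon^2}{2}KK^T$ to order $\epsilon^4$.

```python
import numpy as np

# Sketch (ours): with K the first two columns of an orthogonal U and
# Q = K C K^T, the rotation U R_eps U^T equals
# I + eps*Q - (eps^2/2) K K^T up to a deterministic O(eps^4) error.
rng = np.random.default_rng(4)
d, eps = 10, 1e-2
U, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthogonal matrix
C = np.array([[0.0, 1.0], [-1.0, 0.0]])
R = np.eye(d)
R[:2, :2] = np.sqrt(1 - eps**2) * np.eye(2) + eps * C   # R_eps
K = U[:, :2]
Q = K @ C @ K.T
approx = np.eye(d) + eps * Q - (eps**2 / 2) * (K @ K.T)
err = np.linalg.norm(U @ R @ U.T - approx)
assert err < 10 * eps**4                 # only the O(eps^4) term remains
```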
It follows that
\begin{equation}\begin{split}\label{E:diff-expansion}
W_{\epsilon,p}&-W_p\\&=\tr(X_\epsilon^p-X^p)\\&=\tr\left(\left[X+\epsilon Q(X)+\left(-\frac{\epsilon^2}{2}+O(\epsilon^4)\right)KK^T(X)\right]^p-X^p\right)\\&=\epsilon\sum_{j=0}^{p-1}\tr\left(X^j[Q(X)]X^{p-1-j}\right)\\&\qquad+\epsilon^2\left[\sum_{j=0}^{p-2}\sum_{k=0}^{p-2-j}\tr\left(X^j[Q(X)]X^k[Q(X)]X^{p-2-j-k}\right)-\frac{1}{2}\sum_{j=0}^{p-1}\tr\left(X^j[KK^T(X)]X^{p-1-j}\right)\right]+O(\epsilon^3)\\&=\epsilon p\tr(X^{p-1}[Q(X)])\\&\qquad+\epsilon^2\left[\sum_{\ell=0}^{p-2}(\ell+1)\tr\left(X^\ell[Q(X)]X^{p-2-\ell}[Q(X)]\right)-\frac{p}{2}\tr(X^{p-1}[KK^T(X)])\right]+O(\epsilon^3),
\end{split}\end{equation}
where the implied constant in the error $O(\epsilon^3)$ is a random
variable (with all moments finite) depending on $X$ and
$U$. (The $O(\epsilon^3)$
terms here and below may depend on $\mathbb{E} \norm{X}^{2m}$, and hence are
not necessarily uniform in either $n$ or $m$ without more assumptions
than have been made up to this point. However, in Theorem
\ref{T:inf-abstract} the limits as $\epsilon \to 0$ are taken with $n$
and $m$ both fixed, so this poses no difficulty.)
Analyzing this expression comes down to integrals over the orthogonal
group $\Orthogonal{d}$ and over the sphere $\mathbb{S}^{d-1}$. The
following concentration result from \cite{MeSz} plays an important
technical role. Given a
polynomial $P_0(x,y)$ in two variables, we refer to the function
$P(X) = P_0(X,X^*)$ on $\mat{n}{\mathbb{C}}$ as a $*$-polynomial.
\begin{prop}\label{T:star-poly-concentration}
Let $P$ be a $*$-polynomial of degree at most $p$, and let $X$ be a
random $n\times n$ matrix uniformly distributed in a sphere of radius
$\sqrt{n}$ in a subspace of $\mat{n}{\mathbb{C}}$ of dimension $d \ge c n^2$. Then
\[
\mathbb{P}[\abs{\tr P(X) - \mathbb{E} \tr P(X)} \ge t] \le \kappa_p \exp[-c_p \min\{t^2,
nt^{2/p}\}]
\]
and
\[
\norm{\tr P(X) - \mathbb{E} \tr P(X)}_q \le C_p \max\left\{\sqrt{q}, \left(\frac{q}{n}\right)^{p/2}\right\}
\]
for each $q \ge 1$.
Here $\kappa_p, c_p, C_p \ge 0$ are constants depending only on $p$
and $c$, and $\norm{Y}_q = (\mathbb{E} \abs{Y}^q)^{1/q}$ denotes the $L_q$
norm of a random variable.
\end{prop}
The following lemma is key in applying Theorem \ref{T:inf-abstract}.
\begin{lemma}\label{T:limits_all_spaces} For $W$ as above and $p,q$
fixed,
\begin{enumerate}
\item \label{P:lin}\begin{equation*}\begin{split}
\lim_{\epsilon\to0}\frac{1}{\epsilon^2}&\mathbb{E}\left[W_{\epsilon,p}-W_p\big|X\right]\\
&=\sum_{\ell=0}^{p-2}\frac{2(\ell+1)
\|X\|^2}{d(d-1)}\tr\left(X^\ell\sum_\alpha B_\alpha X^{p-2-\ell}B_\alpha\right)-\frac{p(p+d-2)}{d(d-1)}W_p,\end{split}\end{equation*}
\item \label{P:quad_bar}\begin{equation*}\begin{split}\lim_{\epsilon\to0}&\frac{1}{\epsilon^2}\mathbb{E}\left[(W_\epsilon-W)_p\overline{(W_\epsilon-W)_q}|X\right]\\&=\frac{2
pq}{d(d-1)}\left[\|X\|^2\sum_{\alpha=1}^d
\tr(X^{p-1}B_\alpha)\overline{\tr(X^{q-1}B_\alpha)}-\sum_{\alpha,\beta=1}^dX_\alpha
X_\beta\tr(X^{p-1}B_\alpha)\overline{\tr(X^{q-1}B_\beta)}\right],\end{split}\end{equation*}
\item \label{P:quad_no_bar}\begin{equation*}\begin{split}\lim_{\epsilon\to0}&\frac{1}{\epsilon^2}\mathbb{E}\left[(W_\epsilon-W)_p(W_\epsilon-W)_q|X\right]\\&=\frac{2
pq}{d(d-1)}\left[\|X\|^2\sum_{\alpha=1}^d
\tr(X^{p-1}B_\alpha)\tr(X^{q-1}B_\alpha)-\sum_{\alpha,\beta=1}^dX_\alpha
X_\beta\tr(X^{p-1}B_\alpha)\tr(X^{q-1}B_\beta)\right],\end{split}\end{equation*}
\item \label{P:cube}\[\lim_{\epsilon\to0}\frac{1}{\epsilon^2}\mathbb{E}|W_\epsilon-W|^3=0.\]
\end{enumerate}
In each case, the convergence is in the $L_1$ sense.
\end{lemma}
\begin{proof}
By the expansion of $W_{\epsilon,p}-W_p$ in powers of $\epsilon$ given in \eqref{E:diff-expansion}, it follows from the independence of $X$ and $U$ that
\begin{equation*}\begin{split}
\mathbb{E}\left[W_{\epsilon,p}-W_p\big|X\right]
&=\epsilon p\tr(X^{p-1}\mathbb{E}\left[Q(X)\big|X\right])\\
&\quad+\epsilon^2\left[\sum_{\ell=0}^{p-2}(\ell+1)\mathbb{E}\left[\left.\tr\left(X^\ell[Q(X)]X^{p-2-\ell}[Q(X)]\right)\right|X\right]\right.\\
&\qquad \qquad \left.\phantom{\sum_{\ell=0}^{p-2}}
-\frac{p}{2}\tr\left(X^{p-1}\mathbb{E}\left[KK^T(X)\big|X\right]\right)\right]+O(\epsilon^3),
\end{split}\end{equation*}
where here and in what follows, the implied constants in the error
term are random but bounded in $L_1$.
The entries of $KK^T$ and $Q$ are given in terms of the entries of $U=[u_{jk}]_{j,k=1}^d$ by
\[
[KK^T]_{jk}=u_{j1}u_{k1}+u_{j2}u_{k2},
\qquad\qquad[Q]_{jk}=u_{j1}u_{k2}-u_{j2}u_{k1}.
\]
From this it is easy to see that
\[
\mathbb{E}[KK^T]=\frac{2}{d}I_d,\qquad\qquad\mathbb{E}[Q]=0,
\]
and thus
\begin{equation}\begin{split}\label{E:diff1}\mathbb{E}&\left[W_{\epsilon,p}-W_p\big|X\right]=\epsilon^2\left[\sum_{\ell=0}^{p-2}(\ell+1)\mathbb{E}\left[\left.\tr\left(X^\ell[Q(X)]X^{p-2-\ell}[Q(X)]\right)\right|X\right]-\frac{p}{d}W_p\right]+O(\epsilon^3).\end{split}\end{equation}
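The moment identities $\mathbb{E}[KK^T]=\frac{2}{d}I_d$ and $\mathbb{E}[Q]=0$ feeding into \eqref{E:diff1} can be checked by simulation. The following is a Monte Carlo sketch (the tolerances are loose sampling tolerances, not part of the argument); Haar orthogonal matrices are sampled by QR with the usual sign correction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 4, 20_000
C = np.array([[0.0, 1.0], [-1.0, 0.0]])

def haar_orthogonal(d, rng):
    # QR of a Gaussian matrix, with column signs fixed, is Haar-distributed.
    z = rng.standard_normal((d, d))
    q, r = np.linalg.qr(z)
    return q * np.sign(np.diag(r))

mean_KKT = np.zeros((d, d))
mean_Q = np.zeros((d, d))
for _ in range(N):
    U = haar_orthogonal(d, rng)
    K = U[:, :2]
    mean_KKT += K @ K.T / N
    mean_Q += K @ C @ K.T / N

assert np.allclose(mean_KKT, (2 / d) * np.eye(d), atol=0.05)
assert np.allclose(mean_Q, 0, atol=0.05)
```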
Now,
\[\mathbb{E}\left[\left.\tr\left(X^\ell[Q(X)]X^{p-2-\ell}[Q(X)]\right)\right|X\right]=\tr\left(X^\ell\mathbb{E}\left[\left.[Q(X)]X^{p-2-\ell}[Q(X)]\right|X\right]\right).\]
For notational convenience, write $A:=X^{p-2-\ell}$. If $q_{\alpha\beta}$ denotes the $(\alpha,\beta)$ entry of $Q$, then by expanding in the basis $\{B_j\}$,
\[[Q(X)]A[Q(X)]=\sum_{\alpha,\beta,\gamma,\delta=1}^dq_{\alpha\beta}q_{\gamma\delta}X_\beta
X_\delta B_\alpha AB_\gamma\]
and so
\[\mathbb{E}\left[\left.[Q(X)]X^{p-2-\ell}[Q(X)]\right|X\right]=\sum_{\alpha,\beta,\gamma,\delta=1}^d\mathbb{E}\left[q_{\alpha\beta}q_{\gamma\delta}\right]X_\beta
X_\delta B_\alpha AB_\gamma.\]
The formulae above for $q_{\alpha\beta}$ in terms of the entries of $U$ can be used to derive the following (see Lemma 9 of \cite{ChMe}):
\begin{equation}\label{E:qmoments}\mathbb{E}\left[q_{\alpha\beta}q_{\gamma\delta}\right]=\frac{2}{d(d-1)}\left[\delta_{\alpha\gamma}\delta_{\beta\delta}-\delta_{\alpha\delta}\delta_{\beta\gamma}\right],\end{equation}
and so
\begin{equation*}\begin{split}
\mathbb{E}\left[\left.[Q(X)]X^{p-2-\ell}[Q(X)]\right|X\right]&=\frac{2}{d(d-1)}\left[\sum_{\alpha,\beta}X_\beta^2B_\alpha
AB_\alpha-\sum_{\alpha,\beta}X_\alpha X_\beta B_\alpha
AB_\beta\right]\\&=\frac{2}{d(d-1)}\left[\|X\|^2\sum_{\alpha}B_\alpha
AB_\alpha -XAX\right].
\end{split}\end{equation*}
It thus follows from \eqref{E:diff1} that
\begin{equation*}\begin{split}
\mathbb{E}&\left[W_{\epsilon,p}-W_p\big|X\right]\\&=\epsilon^2\left[\sum_{\ell=0}^{p-2}\frac{2(\ell+1)}{d(d-1)}\left(\|X\|^2\tr\left(X^\ell\sum_\alpha
B_\alpha X^{p-2-\ell}B_\alpha\right)-W_p\right)-\frac{p}{d}W_p\right]+O(\epsilon^3)
\\&=\epsilon^2\left[\sum_{\ell=0}^{p-2}\frac{2(\ell+1)
\|X\|^2}{d(d-1)}\tr\left(X^\ell\sum_\alpha B_\alpha X^{p-2-\ell}B_\alpha\right)-\frac{p(p+d-2)}{d(d-1)}W_p\right]+O(\epsilon^3),
\end{split}\end{equation*}
whence the statement of part \ref{P:lin} of the lemma.
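The covariance formula \eqref{E:qmoments} for the entries of $Q$ admits a similar Monte Carlo check (again with a loose sampling tolerance; the first two columns of a Haar orthogonal matrix are sampled directly by QR of a $d\times 2$ Gaussian matrix).

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 4, 20_000
C = np.array([[0.0, 1.0], [-1.0, 0.0]])

Qs = np.empty((N, d, d))
for i in range(N):
    z = rng.standard_normal((d, 2))
    k, r = np.linalg.qr(z)          # first two columns of a Haar orthogonal matrix
    k = k * np.sign(np.diag(r))
    Qs[i] = k @ C @ k.T

emp = np.einsum('nab,ncd->abcd', Qs, Qs) / N   # empirical E[q_ab q_cd]
I = np.eye(d)
exact = (2 / (d * (d - 1))) * (np.einsum('ac,bd->abcd', I, I)
                               - np.einsum('ad,bc->abcd', I, I))
assert np.abs(emp - exact).max() < 0.05
```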
For part \ref{P:quad_bar}, again using the expansion of $W_\epsilon-W$ in
\eqref{E:diff-expansion} yields
\begin{equation*}\begin{split}\mathbb{E}&\left[(W_\epsilon-W)_p\overline{(W_\epsilon-W)_q}|X\right]\\&\qquad=\epsilon^2pq\mathbb{E}\left[\left.\tr(X^{p-1}[Q(X)])\overline{\tr(X^{q-1}[Q(X)])}\right|X\right]+O(\epsilon^3)\\&\qquad=\epsilon^2 pq\mathbb{E}\left[\left.\tr\left(\sum_{\alpha,\beta=1}^dq_{\alpha\beta}X_\beta X^{p-1}B_\alpha\right) \overline{\tr\left(\sum_{\gamma,\delta=1}^dq_{\gamma\delta}X_\delta X^{q-1} B_\gamma\right)}\right|X\right]+O(\epsilon^3)\\&\qquad=\epsilon^2 pq\sum_{\alpha,\beta,\gamma,\delta=1}^d\mathbb{E}[q_{\alpha\beta}q_{\gamma\delta}]X_\beta X_\delta\tr(X^{p-1}B_\alpha)\overline{\tr(X^{q-1}B_\gamma)} +O(\epsilon^3).\end{split}\end{equation*}
Making use of the moment formula for $Q$ given in \eqref{E:qmoments}
then gives that
\begin{equation*}\begin{split}\mathbb{E}&\left[(W_\epsilon-W)_p\overline{(W_\epsilon-W)_q}|X\right]\\&=\frac{2\epsilon^2
pq}{d(d-1)}\left[\sum_{\alpha,\beta=1}^dX_\beta^2
\tr(X^{p-1}B_\alpha)\overline{\tr(X^{q-1}B_\alpha)}-\sum_{\alpha,\beta=1}^dX_\alpha
X_\beta\tr(X^{p-1}B_\alpha)\overline{\tr(X^{q-1}B_\beta)}\right]+O(\epsilon^3).\end{split}\end{equation*}
Exactly the same argument for part \ref{P:quad_no_bar} gives that
\begin{equation*}\begin{split}\mathbb{E}&\left[(W_\epsilon-W)_p(W_\epsilon-W)_q|X\right]\\&=\frac{2\epsilon^2
pq}{d(d-1)}\left[\sum_{\alpha,\beta=1}^dX_\beta^2
\tr(X^{p-1}B_\alpha)\tr(X^{q-1}B_\alpha)-\sum_{\alpha,\beta=1}^dX_\alpha
X_\beta\tr(X^{p-1}B_\alpha)\tr(X^{q-1}B_\beta)\right]+O(\epsilon^3).\end{split}\end{equation*}
Finally, it is clear from the expansion in $\epsilon$ that
\[\mathbb{E}|W_\epsilon-W|^3=O(\epsilon^3),\]
which completes the proof.
\end{proof}
At this point in the analysis, it is necessary to consider the various
subspaces separately; this is carried out in the following sections.
\section{Rotationally invariant ensembles in $\mat{n}{\mathbb{C}}$}\label{S:nonsym-c}
We begin with the following technical lemma.
\begin{lemma}\label{T:prelim-nosymm-c}
Let $X$ be a random matrix in $\mat{n}{\mathbb{C}}$ whose distribution is
invariant under rotations within $\mat{n}{\mathbb{C}}.$ Suppose that
$\mathbb{E}\|X\|^2=n$ and that for each $k$, there is a constant $\alpha_k$
depending only on $k$ such that
\[
t_k(X)=\left|n^{-k/2}\mathbb{E}\|X\|^{k}-1\right|\le\frac{\alpha_k}{n}.
\]
Then for $p,q\in\mathbb{N}$,
\[
\mathbb{E}\left[\|X\|^2\tr(X^p(X^*)^{q})\right]=\begin{cases}n^2+O(n),&p=q;\\0,&\mbox{otherwise}.\end{cases}
\]
Here, the implied constant in the $O(n)$ may depend on $p,q,$ and the
constants $\alpha_k$.
\end{lemma}
\begin{proof}
For $p\neq q$, $\mathbb{E}\left[\|X\|^2\tr(X^{p}(X^*)^{q})\right]=0$ by
symmetry. We suppose from now on that $p=q$.
By the rotational invariance of $X$, we can write
$X = \frac{\norm{X}}{\sqrt{n}}\widetilde{X}$, where $\widetilde{X}$ is uniformly distributed
on the sphere of radius $\sqrt{n}$ in $\mat{n}{\mathbb{C}}$ and $\widetilde{X}$ is
independent from $\norm{X}$. We then have
\begin{align*}
\mathbb{E}\left[\|X\|^2\tr(X^{p}(X^*)^{p})\right]
& = \left( \frac{\mathbb{E} \norm{X}^{2p+2}}{n^{p+1}}\right)
n \mathbb{E} \tr(\widetilde{X}^{p}(\widetilde{X}^*)^{p}) ,
\end{align*}
and thus
\begin{equation} \label{Eq:uniform-to-general}
\abs{\mathbb{E}\left[\|X\|^2\tr(X^{p}(X^*)^{p})\right]
- n \mathbb{E} \tr(\widetilde{X}^{p}(\widetilde{X}^*)^{p}) }
\le n t_{2p+2}(X) \mathbb{E} \tr(\widetilde{X}^{p}(\widetilde{X}^*)^{p}).
\end{equation}
It therefore suffices to prove the lemma under the assumption that
$X$ is uniformly distributed on the sphere of radius $\sqrt{n}$ in
$\mat{n}{\mathbb{C}}$; the general case follows from
\eqref{Eq:uniform-to-general} and the assumption on $t_k(X)$.
Making this assumption, we now consider the expansion
\begin{equation}\label{E:Eentry-explicit}
\mathbb{E} \tr(X^{p}(X^*)^{p})
= \sum_{i_1,\ldots,i_{2p}}\mathbb{E}\left[x_{i_1i_2}x_{i_2i_3}\cdots
x_{i_{p}i_{p+1}}\overline{x_{i_{p+2}i_{p+1}}}\cdots
\overline{x_{i_1i_{2p}}}\right].
\end{equation}
By rotational symmetry, a term on the right side of
\eqref{E:Eentry-explicit} is non-zero only if each $x_{ij}$ appears
the same number of times as $\overline{x_{ij}}$. Consider the
contribution from those terms in which $i_1,\ldots,i_{p+1}$ are distinct
and $i_2=i_{2p}$, $i_3=i_{2p-1}$, and so on. The contribution of
such terms is
\begin{align*}
n(n-1)\cdots(n-p)
&\mathbb{E}\left[|x_{11}|^2|x_{12}|^2\cdots|x_{1p}|^2\right]
= n^p \frac{n (n-1) \cdots (n-p)}{(n^2+p-1) \cdots n^2}
= n + O(1),
\end{align*}
using the standard formula for integrating polynomials over
the sphere (see, e.g., Lemma 14 of \cite{MM-quantum}).
The sum of the remaining terms of \eqref{E:Eentry-explicit} is $O(1)$,
since they necessarily involve the choice of fewer indices from
$\{1,\ldots,n\}$, while all of the non-zero expectations appearing on
the right-hand side have the same order in $n$ (this is immediate from
the formula in \cite{MM-quantum}). Together with
\eqref{Eq:uniform-to-general}, this completes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of Theorems \ref{T:traces-of-powers-means} and \ref{T:traces-of-powers-clt-nonnormal} for $V=\mat{n}{\mathbb{C}}$]
Recall that in this context, the orthonormal basis
$\{B_\alpha\}_{\alpha=1}^d$ is $\{E_{jk}\}_{1\le j,k\le n}\cup
\{iE_{jk}\}_{1\le j,k\le n}$. It follows that for $A\in\mat{n}{\mathbb{C}}$,
\[\sum_{\alpha=1}^dB_\alpha AB_\alpha=\sum_{j,k=1}^nE_{jk}AE_{jk}+\sum_{j,k=1}^n(iE_{jk})A(iE_{jk})=0.\]
Part \ref{P:lin} of Lemma \ref{T:limits_all_spaces} then implies that
\begin{equation}\label{E:cmat-linearity}
\lim_{\epsilon \to 0}
\frac{1}{\epsilon^2}\mathbb{E}\left[W_{\epsilon,p}-W_p\big|X\right]=-\frac{p(p+d-2)}{d(d-1)}W_p.
\end{equation}
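The cancellation $\sum_\alpha B_\alpha AB_\alpha=0$ used above is easy to confirm numerically, since the $\{E_{jk}\}$ and $\{iE_{jk}\}$ halves of the basis contribute $A^T$ and $-A^T$ respectively:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

total = np.zeros((n, n), dtype=complex)
for j in range(n):
    for k in range(n):
        E = np.zeros((n, n), dtype=complex)
        E[j, k] = 1.0
        # E_jk A E_jk = a_kj E_jk, and the iE_jk term flips its sign.
        total += E @ A @ E + (1j * E) @ A @ (1j * E)

assert np.allclose(total, 0)
```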
Note that by taking expectations of both sides of Equation
\eqref{E:cmat-linearity}, the exchangeability of $(W_p,W_{\epsilon,p})$ implies that $\mathbb{E} W_p=0$ for all $p$; this is
also apparent from symmetry considerations (and hence the
$\mat{n}{\mathbb{C}}$ case of Theorem \ref{T:traces-of-powers-means}).
Equation \eqref{E:cmat-linearity} shows that the matrix $\Lambda$ in the statement of Theorem
\ref{T:inf-abstract} may be taken to be diagonal, with $(p,p)$ entry
given by $\frac{p(p+d-2)}{d(d-1)}$, and that the random vector $E=0$. In particular,
\[\norm{\Lambda^{-1}}_{op}=d.\]
Next, consider part \ref{P:quad_bar} of Lemma \ref{T:limits_all_spaces}.
Let $\inprod{A}{B}_{HS}$ denote the complex Hilbert--Schmidt inner
product $\inprod{A}{B}_{HS}=\tr(AB^*)$. Since
$\{B_\alpha\}_{\alpha=1}^d=\{E_{jk}\}_{j,k=1}^n\cup\{iE_{jk}\}_{j,k=1}^n$,
\begin{align*}
\sum_\alpha\tr(X^{p-1}B_\alpha)\overline{\tr(X^{q-1}B_\alpha)}
&=2\sum_{j,k=1}^n \tr(X^{p-1}E_{jk})\overline{\tr(X^{q-1}E_{jk})}\\
&=2\sum_{j,k=1}^n \inprod{X^{p-1}}{E_{kj}}_{HS} \overline{\inprod{X^{q-1}}{E_{kj}}_{HS}}\\
&=2\inprod{X^{p-1}}{X^{q-1}}_{HS}\\
&=2\tr(X^{p-1} (X^{q-1})^*),
\end{align*}
where the third equality follows from the fact that $\{E_{kj}\}_{j,k=1}^n$ is an orthonormal basis for the complex inner product $\inprod{\cdot}{\cdot}_{HS}$.
Similarly,
\begin{align*}
\sum_{\alpha=1}^dX_\alpha\tr(X^{p-1}B_\alpha)
&=\sum_{j,k=1}^n\inprod{X}{E_{jk}}\tr(X^{p-1}E_{jk})+\sum_{j,k=1}^n\inprod{X}{iE_{jk}}\tr(X^{p-1}iE_{jk})\\
&=\sum_{j,k=1}^n\left[\inprod{X}{E_{jk}}+i\inprod{X}{i E_{jk}}\right]\tr(X^{p-1}E_{jk}) \\
&=\sum_{j,k=1}^n \inprod{X}{E_{jk}}_{HS} \overline{\inprod{(X^{p-1})^*}{E_{jk}}_{HS}}\\
&= \inprod{X}{(X^{p-1})^*}_{HS} = \tr(X^p).
\end{align*}
It therefore follows from Lemma \ref{T:limits_all_spaces} that
\begin{equation}\label{E:cnosymm-quadcond1}
\lim_{\epsilon\to 0} \frac{1}{\epsilon^2}
\mathbb{E}[(W_\epsilon-W)_p\overline{(W_\epsilon-W)_q}|X]
=\frac{2pq}{d(d-1)}\left(2\norm{X}^2\tr(X^{p-1}(X^{q-1})^*)-W_p\overline{W_q}\right).
\end{equation}
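Both trace identities entering \eqref{E:cnosymm-quadcond1} can be spot-checked for a random matrix. In the sketch below the coordinate $X_\alpha$ is computed as the real inner product $\mathrm{Re}\,\tr(XB_\alpha^*)$, the presumed meaning of $X_\alpha$ when $\mat{n}{\mathbb{C}}$ is viewed as a real inner product space.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 4, 3, 5
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Real orthonormal basis {E_jk} ∪ {iE_jk} of M_n(C).
basis = []
for j in range(n):
    for k in range(n):
        E = np.zeros((n, n), dtype=complex)
        E[j, k] = 1.0
        basis += [E, 1j * E]

Xp = np.linalg.matrix_power(X, p - 1)
Xq = np.linalg.matrix_power(X, q - 1)

s1 = sum(np.trace(Xp @ B) * np.conj(np.trace(Xq @ B)) for B in basis)
assert np.allclose(s1, 2 * np.trace(Xp @ Xq.conj().T))

# X_alpha = Re tr(X B_alpha^*), the coordinates of X in this real basis.
s2 = sum(np.real(np.trace(X @ B.conj().T)) * np.trace(Xp @ B) for B in basis)
assert np.allclose(s2, np.trace(np.linalg.matrix_power(X, p)))
```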
Note that if $p\neq q$, the expectation of both terms on the right is zero by symmetry. If $p=q$, then taking expectations of both sides of
\eqref{E:cnosymm-quadcond1} gives that
\begin{align*}
2\mathbb{E}[\|X\|^2\tr(X^{p-1}(X^*)^{p-1})]-\mathbb{E}|W_p|^2&=\lim_{\epsilon\to0}\frac{d(d-1)}{2p^2\epsilon^2}\mathbb{E}[|(W_\epsilon-W)_p|^2]\\&=\lim_{\epsilon\to0}\frac{-d(d-1)}{p^2\epsilon^2}\mathbb{E}[(W_\epsilon-W)_p\overline{W_p}]\\&=\lim_{\epsilon\to0}\frac{-d(d-1)}{p^2\epsilon^2}\mathbb{E}\Big[\mathbb{E}\big[(W_\epsilon-W)_p\big|W\big]\overline{W_p}\Big]\\&=\frac{p+d-2}{p}\mathbb{E}|W_p|^2,
\end{align*}
where the second line follows by exchangeability and the last line
follows from formula \eqref{E:cmat-linearity} for
$\mathbb{E}[(W_\epsilon-W)_p|W]$.
Since $d=2n^2$, combining this computation with Lemma \ref{T:prelim-nosymm-c} means that
\begin{equation}
\label{E:var-bound}
\mathbb{E}|W_p|^2=\frac{2p}{2p+2n^2-2}\mathbb{E}\left[\|X\|^2\tr(X^{p-1}(X^*)^{p-1})\right]=p+O\left(\frac{1}{n}
\right),
\end{equation}
and then by Equation \eqref{E:cnosymm-quadcond1},
\[
\lim_{\epsilon\to 0} \frac{1}{\epsilon^2}
\mathbb{E}[(W_\epsilon-W)_p\overline{(W_\epsilon-W)_q}]
= \left(\frac{2p^2(p+d-2)}{d(d-1)}+ O\left(\frac{1}{n^3}\right)
\right)\delta_{pq}.
\]
We define $\Sigma$ to be the diagonal matrix with
$\sigma_{pp}=p$. Taking $\mathcal{G} = \sigma(X)$ in Theorem \ref{T:inf-abstract}, the random matrix $E'$ then has $(p,q)$ entry
\begin{align*}
[E']_{pq} & = \frac{2pq}{d(d-1)} \left[ 2 \norm{X}^2 \tr
\bigl(X^{p-1} (X^{q-1})^*\bigr) - W_p
\overline{W_q}\right] - \frac{2p^2 (p+d-2)}{d(d-1)}
\delta_{pq} \\
& =\frac{2pq}{d(d-1)}\Big[2\|X\|^2 \tr(X^{p-1}(X^{q-1})^*)-\mathbb{E}\left[2\|X\|^2\tr(X^{p-1}(X^{q-1})^*)\right]\\
&\qquad \qquad \qquad -W_p\overline{W_q}+\mathbb{E} \left(W_p\overline{W_q}\right)\Big]+O\left(\frac{1}{n^3}\right)\delta_{pq}.
\end{align*}
We will estimate the expected Hilbert--Schmidt norm by
\[
\mathbb{E} \norm{E'} \le \mathbb{E} \sum_{p,q=1}^m \abs{[E']_{pq}}.
\]
We first have
\[
\mathbb{E}\abs{W_p\overline{W_q}-\mathbb{E} \left(W_p\overline{W_q}\right)}\le 2 \mathbb{E} \abs{W_p\overline{W_q}} \le 2 \sqrt{\mathbb{E}
\abs{W_p}^2} \sqrt{\mathbb{E} \abs{W_q}^2} = pq + O\left(\frac{1}{n}\right)
\]
by the Cauchy--Schwarz inequality and \eqref{E:var-bound}.
As in the proof of Lemma \ref{T:prelim-nosymm-c}, we write
$X = \frac{\norm{X}}{\sqrt{n}}\widetilde{X}$, where $\widetilde{X}$ is uniformly distributed
on the sphere of radius $\sqrt{n}$ in $\mat{n}{\mathbb{C}}$ and $\widetilde{X}$ is
independent from $\norm{X}$. We then have
\begin{align*}
\mathbb{E}\Big|\|X\|^2
&\tr(X^{p-1}(X^{p-1})^*)-\mathbb{E}\left[\|X\|^2
\tr(X^{p-1}(X^{p-1})^*)\right]\Big|\\
&=n\mathbb{E}\left|\frac{\|X\|^{2p}}{n^{p}}\tr(\widetilde{X}^{p-1}(\widetilde{X}^{p-1})^*)-\left(\mathbb{E}\frac{\|X\|^{2p}}{n^{p}}\right)\mathbb{E}
\tr(\widetilde{X}^{p-1}(\widetilde{X}^{p-1})^*)\right|\\
&\le n\mathbb{E} \tr(\widetilde{X}^{p-1}(\widetilde{X}^{p-1})^*) \mathbb{E}\left|\frac{\|X\|^{2p}}{n^{p}}-\mathbb{E}
\frac{\|X\|^{2p}}{n^{p}}\right| \\
& \qquad +n\left(\mathbb{E}
\frac{\|X\|^{2p}}{n^{p}}\right) \mathbb{E} \abs{\tr(\widetilde{X}^{p-1}(\widetilde{X}^{p-1})^*) - \mathbb{E} \tr(\widetilde{X}^{p-1}(\widetilde{X}^{p-1})^*)}\\
&\le n \mathbb{E} \tr(\widetilde{X}^{p-1}(\widetilde{X}^{p-1})^*)\left(\sqrt{t_{4p}(X)+2t_{2p}(X)}+ t_{2p} (X) \right) \\
& \qquad + n (1 + t_{2p}(X)) \mathbb{E} \abs{\tr(\widetilde{X}^{p-1}(\widetilde{X}^{p-1})^*) - \mathbb{E} \tr(\widetilde{X}^{p-1}(\widetilde{X}^{p-1})^*)}.
\end{align*}
Lemma \ref{T:prelim-nosymm-c} implies that
\[
\mathbb{E} \tr(\widetilde{X}^{p-1}(\widetilde{X}^{p-1})^*)=n+O(1),
\]
and Proposition \ref{T:star-poly-concentration} implies that
\[
\mathbb{E} \abs{\tr(\widetilde{X}^{p-1}(\widetilde{X}^{p-1})^*) - \mathbb{E} \tr(\widetilde{X}^{p-1}(\widetilde{X}^{p-1})^*)} \le \kappa_p.
\]
We therefore have
\begin{align*}\mathbb{E}\Big|\|X\|^2&\tr(X^{p-1}(X^{p-1})^*)-\mathbb{E}\left[\|X\|^2
\tr(X^{p-1}(X^{p-1})^*)\right]\Big|\\&\le \kappa
n^2 \left(\sqrt{t_{4p}(X)+2t_{2p}(X)}+ t_{2p} (X) \right) +\kappa_pn(1+t_{2p}(X))=O(n).
\end{align*}
Similarly, recalling that when $p\neq q$ the means are $0$,
\begin{align*}
\mathbb{E} \abs{\norm{X}^2 \tr \bigl(X^{p-1}(X^{q-1})^*\bigr)}
& = n \mathbb{E} \abs{\frac{\norm{X}^{p+q}}{n^{(p+q)/2}}} \mathbb{E} \abs{\tr
\bigl(\widetilde{X}^{p-1} (\widetilde{X}^{q-1})^*\bigr)}
\le \kappa n (1 + t_{p+q}(X)) = O(n).
\end{align*}
Making
use of the fact that $\|\Lambda^{-1}\|_{op}=d=2n^2$, it now follows
that
\[
\|\Lambda^{-1}\|_{op}\mathbb{E}\|E'\|\le \frac{\kappa_m}{n}
\]
for some constant $\kappa_m$ depending only on $m$.
Finally, consider part \ref{P:quad_no_bar} of Lemma \ref{T:limits_all_spaces}.
Observe that
\begin{align*}
\sum_\alpha&\tr(X^{p-1}B_\alpha)\tr(X^{q-1}B_\alpha)\\&=\sum_{j,k=1}^n\tr(X^{p-1}E_{jk})\tr(X^{q-1}E_{jk})-\sum_{j,k=1}^n\tr(X^{p-1}E_{jk})\tr(X^{q-1}E_{jk})=0,
\end{align*}
and from above,
\begin{align*}
\sum_{\alpha=1}^dX_\alpha\tr(X^{p-1}B_\alpha)=\tr(X^p).
\end{align*}
It thus follows from Lemma \ref{T:limits_all_spaces} that
\[
[E'']_{p,q}=\lim_{\epsilon\to0}\frac{1}{\epsilon^2}\mathbb{E}[(W_\epsilon-W)_p(W_\epsilon-W)_q|X]=\frac{2pq}{d(d-1)}W_pW_q.
\]
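As with the conjugated sum, the vanishing of $\sum_\alpha\tr(X^{p-1}B_\alpha)\tr(X^{q-1}B_\alpha)$ is easy to confirm numerically:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, q = 4, 3, 4
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Xp = np.linalg.matrix_power(X, p - 1)
Xq = np.linalg.matrix_power(X, q - 1)

s = 0.0
for j in range(n):
    for k in range(n):
        E = np.zeros((n, n), dtype=complex)
        E[j, k] = 1.0
        for B in (E, 1j * E):   # the E_jk and iE_jk contributions cancel in pairs
            s += np.trace(Xp @ B) * np.trace(Xq @ B)

assert abs(s) < 1e-6
```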
By the Cauchy--Schwarz inequality and \eqref{E:var-bound},
$\mathbb{E} |W_pW_q|$ is bounded independent of $n$, and so
\[\mathbb{E}\|E''\|\le\frac{\kappa_m}{d(d-1)},\]
where $\kappa_m$ is a constant depending only on $m$. This completes the
proof of Theorem \ref{T:traces-of-powers-clt-nonnormal} in the case of $\mat{n}{\mathbb{C}}$.
\end{proof}
\section{Rotationally invariant ensembles in $\mat{n}{\mathbb{R}}$}\label{S:nonsym-r}
As in the previous section, we begin with a technical lemma.
\begin{lemma}\label{T:traces-of-products}
Let $X$ be a random matrix in $\mat{n}{\mathbb{R}}$ whose distribution is
invariant under rotations in $\mat{n}{\mathbb{R}}$. Suppose that
$\mathbb{E}\|X\|^2=n$ and that for each $k$, there is a constant $\alpha_k$
such that
\[
t_k(X)=\mathbb{E}\Big|n^{-k/2}\|X\|^k-1\Big|\le\frac{\alpha_k}{n}.
\]
Then for all $p,q\in \mathbb{N}$,
\[
\mathbb{E}\left[\|X\|^2\tr(X^p(X^T)^q)\right]=\begin{cases}n^2+O(n)&
\text{if }p=q,
\\O(n)& \text{if $p\neq q$ and $p-q$ is even},\\0&
\text{if $p-q$ is odd};\end{cases}
\]
where the implied constants depend on $p$, $q$, and the $\alpha_k$.
\end{lemma}
\begin{proof}
First note that if $p - q$ is odd, then
$\mathbb{E}\left[\|X\|^2\tr(X^{p}(X^T)^{q})\right]=0$ by symmetry.
If $p-q$ is even, then as in the proof of Lemma
\ref{T:prelim-nosymm-c} we may first assume that $X$ is uniformly
distributed on the sphere of radius $\sqrt{n}$ in $\mat{n}{\mathbb{R}}$.
We have the expansion
\begin{equation}\label{Eq:real-expansion}
\mathbb{E}\left[\tr(X^{p}(X^T)^{q})\right]=
\sum_{\substack{i,j\\i_1,\ldots,i_{p-1}\\j_1,\ldots,j_{q-1}}}
\mathbb{E}\left[x_{ii_1}x_{i_1i_2}\cdots
x_{i_{p-1}j}x_{ij_1}x_{j_1j_2}\cdots x_{j_{q-1}j}\right].
\end{equation}
Consider first the case that $p=q$. A term on the right side of
\eqref{Eq:real-expansion} is nonzero only if each matrix entry
$x_{ij}$ appears an even number of times. The total contribution
from terms in which the indices (including $i$ and $j$) are chosen
such that $i_1=j_1,\ldots,i_{p-1}=j_{p-1}$, but are otherwise
distinct, is
\begin{align*}
n(n-1)\cdots\left(n-p\right) \mathbb{E}\left[x_{11}^2\cdots
x_{1p}^2\right] =
n^p \frac{n(n-1)\cdots\left(n-p\right)}{(n^2+2p-2)(n^2+2p-4)\cdots
n^2}
& =n+O(1).
\end{align*}
Each of the non-zero expectations has the same order in $n$, and so
this is the main contribution to the sum, since it involves the
maximum number of distinct indices.
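The spherical moment used in this computation states that for $X$ uniform on the sphere of radius $\sqrt{n}$ in $\mat{n}{\mathbb{R}}\cong\mathbb{R}^{n^2}$, $\mathbb{E}\left[x_{11}^2\cdots x_{1p}^2\right]=n^p/\big(n^2(n^2+2)\cdots(n^2+2p-2)\big)$. A Monte Carlo sketch with a loose sampling tolerance:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, N = 5, 2, 100_000
D = n * n                     # real dimension of M_n(R)

# X uniform on the sphere of radius sqrt(n) in R^D; only p entries are needed.
g = rng.standard_normal((N, D))
x = np.sqrt(n) * g / np.linalg.norm(g, axis=1, keepdims=True)

emp = np.mean(np.prod(x[:, :p] ** 2, axis=1))
exact = n**p / np.prod([D + 2 * i for i in range(p)])
assert abs(emp - exact) < 0.1 * exact    # loose Monte Carlo tolerance
```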
Now suppose that $p\neq q$ and $p-q$ is even; assume without loss of
generality that $p< q$ and write $q = p + 2k$. Consider the
contribution of those terms on the right side of
\eqref{Eq:real-expansion} in which $i_\ell=j_\ell$ for
$1\le \ell\le p-1$, $j_\ell=j_{k+\ell}$ for $p\le\ell\le p+k-1$, and
$j = j_p = j_{q-1}$, and the indices are distinct except for these
restrictions. The contribution of these terms is
\begin{align*}
n(n-1)\cdots\left(n-p-k+1\right)
\mathbb{E}\left[x_{11}^2\cdots x_{1,p+k}^2\right]
&
=n^{(p+q)/2}\frac{n(n-1)\cdots\left(n-\frac{p+q}{2}+1\right)}{(n^2+p+q-2)(n^2+p+q-4)\cdots n^2} \\
& =1+O\left(n^{-1}\right).
\end{align*}
The leading contribution to \eqref{Eq:real-expansion} is made by
these terms, and others obtained by permuting the equality structure
among the indices $j, j_p, \dots, j_{q-1}$; these equality
structures maximize the number of indices which can be chosen to be
distinct in this group, and all nonzero terms are of the same order
in $n$. Since we are not interested in the leading coefficient in
\eqref{Eq:real-expansion} in this case, it suffices for our purposes
to note that the number of permutations is bounded in terms of $p$
and $q$.
\end{proof}
We now proceed with the proofs of the $\mat{n}{\mathbb{R}}$ cases of Theorems
\ref{T:traces-of-powers-means} and \ref{T:traces-of-powers-clt-nonnormal}.
\begin{proof}[Proof of Theorem \ref{T:traces-of-powers-means} for $V=\mat{n}{\mathbb{R}}$]
Trivially, if $p$ is odd then $\mathbb{E} W_p=0$.
To treat the case that $p$ is even, we make use of Lemma
\ref{T:limits_all_spaces}. Recall that in $\mat{n}{\mathbb{R}}$, the orthonormal basis
$\{B_\alpha\}_{\alpha=1}^d=\{E_{jk}\}_{j,k=1}^n$, so that given $A\in\mat{n}{\mathbb{R}}$,
\[
\sum_{\alpha=1}^dB_\alpha AB_\alpha =\sum_{j,k=1}^nE_{jk}AE_{jk}=A^T.
\]
It follows from this computation and part \ref{P:lin} of Lemma
\ref{T:limits_all_spaces} that
\begin{equation}\label{E:lin-diff-real-nosym}\begin{split}
\lim_{\epsilon\to0}\frac{1}{\epsilon^2}\mathbb{E}\left[W_{\epsilon,p}-W_p\big|X\right]
&=\sum_{\ell=0}^{p-2}\frac{2\|X\|^2(\ell+1)
}{d(d-1)}\tr\left(X^\ell(X^T)^{p-2-\ell}\right)-\frac{p(p+d-2)}{d(d-1)}W_p \\
&=\sum_{\ell=0}^{p-2}\frac{\|X\|^2 p}{d(d-1)}\tr\left(X^\ell(X^T)^{p-2-\ell}\right)-\frac{p(p+d-2)}{d(d-1)}W_p,
\end{split}
\end{equation}
where the second line follows by replacing $\ell$ with $p-2-\ell$, and
averaging the resulting expression with the first.
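The basis computation $\sum_{j,k}E_{jk}AE_{jk}=A^T$ behind \eqref{E:lin-diff-real-nosym} is easily confirmed numerically:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n))

total = np.zeros((n, n))
for j in range(n):
    for k in range(n):
        E = np.zeros((n, n))
        E[j, k] = 1.0
        total += E @ A @ E    # E_jk A E_jk = a_kj E_jk

assert np.allclose(total, A.T)
```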
Since the expectation of the left-hand side of \eqref{E:lin-diff-real-nosym} is zero by
exchangeability, taking expectations of both sides gives that
\[
\mathbb{E}
W_p=\sum_{\ell=0}^{p-2}\frac{1}{(p+d-2)}\mathbb{E}\left[\|X\|^2\tr\left(X^\ell(X^T)^{p-2-\ell}\right)\right].
\]
By Lemma \ref{T:traces-of-products},
\[
\mathbb{E}\left[\|X\|^2\tr\left(X^{\frac{p}{2}-1}(X^T)^{\frac{p}{2}-1}\right)\right]=n^2+O(n),
\]
while all the other terms in the sum above are $O(n)$; thus
$\mathbb{E} W_p=1+O(n^{-1})$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{T:traces-of-powers-clt-nonnormal} for
$V=\mat{n}{\mathbb{R}}$]
We begin with condition \eqref{inf-lincond} of Theorem
\ref{T:inf-abstract}. Starting from Equation
\eqref{E:lin-diff-real-nosym} above, since both sides of the equation
have mean zero, it follows that if $Y_p:=W_p-\mathbb{E} W_p$ and
$Y_{\epsilon,p}:=W_{\epsilon,p}-\mathbb{E} W_{\epsilon,p}$, then
\begin{equation*}\begin{split}
\lim_{\epsilon\to0}\frac{1}{\epsilon^2}&\mathbb{E}\left[Y_{\epsilon,p}-Y_p\big|X\right]\\
&=\frac{p}{d(d-1)}\sum_{\ell=0}^{p-2}\left[\|X\|^2\tr\left(X^\ell(X^T)^{p-2-\ell}\right)
-\mathbb{E}\left(\|X\|^2
\tr\left(X^\ell(X^T)^{p-2-\ell}\right)\right)\right]\\
&\qquad\qquad-\frac{p(p+d-2)}{d(d-1)}Y_p.
\end{split}\end{equation*}
It follows essentially as in the previous section that for any $\ell$,
\begin{align*}\mathbb{E}\Big|\|X\|^2&\tr(X^{\ell}(X^{T})^{p-2-\ell})-\mathbb{E}\left[\|X\|^2 \tr(X^{\ell}(X^{T})^{p-2-\ell})\right]\Big|=O\big(n\big),
\end{align*}
and we therefore choose the matrix $\Lambda$ in the statement of Theorem \ref{T:inf-abstract} to be diagonal with $p$th entry given by $\frac{p(p+d-2)}{d(d-1)}$, the function $s(\epsilon)=\epsilon^2$, and the error $E$ to have $p$th entry
\[E_p=\frac{p}{d(d-1)}\left[\|X\|^2\tr\left(\sum_{\ell=0}^{p-2}X^\ell(X^T)^{p-2-\ell}\right)-\mathbb{E}\left[\|X\|^2 \tr\left(\sum_{\ell=0}^{p-2}X^\ell(X^T)^{p-2-\ell}\right)\right]\right],\]
so that
\[\|\Lambda^{-1}\|_{op}\mathbb{E}|E|\le \frac{\kappa_m}{n},\]
with the constant $\kappa_m$ depending only on $m$.
Moving on to condition \eqref{inf-quadcond} of Theorem \ref{T:inf-abstract}, it follows from Lemma \ref{T:limits_all_spaces} exactly as in the previous case that
\[\lim_{\epsilon\to0}\frac{1}{\epsilon^2}\mathbb{E}\left[(W_\epsilon-W)_p(W_\epsilon-W)_q|X\right]=\frac{2
pq}{n^2(n^2-1)}\left[\|X\|^2\tr((X^T)^{p-1}X^{q-1})-W_pW_q\right].\]
It follows from Proposition \ref{T:star-poly-concentration} and the fact that $|\mathbb{E} W_p|=O(1)$ that $\mathbb{E}|W_pW_q|\le \kappa_{p,q}$ for some constant depending only on $p$ and $q$, and so if we choose $\Sigma$ to be diagonal with $\sigma_{pp}=p$,
the random matrix $E'$ in the statement of Theorem
\ref{T:inf-abstract} has $p$-$q$ entry
\begin{align*}[E']_{pq}=\frac{2pq}{n^2(n^2-1)}&\left[2\|X\|^2\tr((X^T)^{p-1}X^{q-1})-\mathbb{E} \left[2\|X\|^2\tr((X^T)^{p-1}X^{q-1})\right]\right.\\&\qquad\qquad\left.-W_pW_q+\mathbb{E} W_pW_q+O(n)\right].\end{align*}
By Proposition \ref{T:star-poly-concentration},
$\mathbb{E}[|W_p|^2-\mathbb{E}|W_p|^2]^2$ is bounded independently of $n$, and we have observed already that
\begin{align*}\mathbb{E}\Big|\|X\|^2&\tr((X^T)^{p-1}X^{q-1})-\mathbb{E}\left[\|X\|^2\tr((X^T)^{p-1}X^{q-1})\right]\Big|=O(n)\end{align*} and so
(making use of the fact that $\|\Lambda^{-1}\|_{op}=d=n^2$),
\[\|\Lambda^{-1}\|_{op}\mathbb{E}\|E'\|\le \frac{\kappa'_m}{n}\]
for some constant $\kappa_m'$ depending only on $m$.
\end{proof}
\section{Rotationally invariant ensembles in $\symmat{n}{\mathbb{C}}$}\label{S:Hermitian}
We initially proceed via Lemma \ref{T:limits_all_spaces} as above.
Since in $\symmat{n}{\mathbb{C}}$, the orthonormal basis is
\[\{B_\alpha\}_{\alpha=1}^d=\{E_{jj}\}_{j=1}^n\cup\{F_{jk}\}_{1\le
j<k\le n}\cup\{iG_{jk}\}_{1\le j<k\le n},\]
for a given $A\in\symmat{n}{\mathbb{C}}$,
\begin{equation*}\begin{split}\sum_{\alpha=1}^dB_\alpha AB_\alpha&=\sum_{j=1}^nE_{jj}AE_{jj}+\frac{1}{2}\sum_{1\le
j<k\le
n}(E_{jk}+E_{kj})A(E_{jk}+E_{kj}) \\&\qquad\qquad-\frac{1}{2}\sum_{1\le
j<k\le
n}(E_{jk}-E_{kj})A(E_{jk}-E_{kj})\\&=\sum_{j,k=1}^nE_{jk}AE_{kj}\\&=\tr(A)I.\end{split}\end{equation*}
It thus follows from Lemma \ref{T:limits_all_spaces} that
\begin{equation}\begin{split}\label{E:csym-linear1}
\lim_{\epsilon\to0}\frac{1}{\epsilon^2}
\mathbb{E}\left[W_{\epsilon,p}-W_p\big|X\right]
& =\sum_{\ell=0}^{p-2}\frac{2(\ell+1)}{d(d-1)} W_2W_\ell
W_{p-2-\ell}-\frac{p(p+d-2)}{d(d-1)}W_p \\
& = \frac{p}{d(d-1)} W_2 \sum_{\ell=0}^{p-2}W_\ell
W_{p-2-\ell}-\frac{p(p+d-2)}{d(d-1)}W_p,
\end{split}\end{equation}
where the second line follows by replacing $\ell$ with $p-2-\ell$, and
averaging the resulting expression with the first.
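The identity $\sum_\alpha B_\alpha AB_\alpha=\tr(A)I$ can be confirmed numerically. The sketch below takes $F_{jk}=(E_{jk}+E_{kj})/\sqrt2$ and $G_{jk}=(E_{jk}-E_{kj})/\sqrt2$, their presumed normalizations, consistent with the factors of $\tfrac12$ in the display above:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (Z + Z.conj().T) / 2      # a Hermitian test matrix

def E(j, k):
    M = np.zeros((n, n), dtype=complex)
    M[j, k] = 1.0
    return M

basis = [E(j, j) for j in range(n)]
basis += [(E(j, k) + E(k, j)) / np.sqrt(2)
          for j in range(n) for k in range(j + 1, n)]
basis += [1j * (E(j, k) - E(k, j)) / np.sqrt(2)
          for j in range(n) for k in range(j + 1, n)]

total = sum(B @ A @ B for B in basis)
assert np.allclose(total, np.trace(A) * np.eye(n))
```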
We first use this expression to prove Theorem \ref{T:traces-of-powers-means}.
\begin{proof}[Proof of Theorem \ref{T:traces-of-powers-means} for $V=\symmat{n}{\mathbb{C}}$.]
If $p$ is odd, then $\mathbb{E} W_p=\mathbb{E}[\tr(X^p)]=0$ by symmetry.
Suppose now that $p$ is even.
As in the proofs of Lemmas \ref{T:prelim-nosymm-c} and
\ref{T:traces-of-products}, we may assume that $X$ is uniformly
distributed in the sphere of radius $\sqrt{n}$ in $\symmat{n}{\mathbb{C}}$, so
that $W_2 = n$ is constant.
Equation \eqref{E:csym-linear1} and the fact that
$(W_{\epsilon, p}, W_p)$ is exchangeable imply that
\begin{equation}
\label{Eq:EW_p-symm-c}
\mathbb{E} W_p
=\frac{n}{p+d-2}\sum_{\ell=0}^{p-2}\mathbb{E}[W_{\ell}W_{p-\ell-2}].
\end{equation}
Proposition
\ref{T:star-poly-concentration} implies that
\begin{equation}
\label{Eq:W-covariance-bound-hermitian}
\mathbb{E} [W_{\ell}W_{p-\ell-2}] - (\mathbb{E} W_{\ell})(\mathbb{E} W_{p-\ell-2}) =
\cov(W_{\ell},W_{p-2-\ell})
\le \sqrt{\var(W_{\ell}) \var(W_{p-2-\ell})} = O(1),
\end{equation}
and so by \eqref{Eq:EW_p-symm-c},
\begin{align*}
\frac{\mathbb{E} W_p}{n} & = \frac{n^2}{p+d-2}\left(\sum_{\ell=0}^{p-2}\frac{\mathbb{E}
W_{\ell}}{n}\frac{\mathbb{E} W_{p-\ell-2}}{n} + O(n^{-2})\right).
\end{align*}
Writing $p = 2r$ and $\beta_r = \frac{\mathbb{E} W_{2r}}{n}$, we therefore
have that $\beta_0 = \beta_1 = 1$ and
\[
\beta_r = \left(\sum_{k=0}^{r-1} \beta_k \beta_{r-k-1} + O(n^{-2})\right)(1 + O(n^{-2}))
\]
for $r \ge 2$. Recalling that the Catalan numbers
$C_r = \frac{1}{r+1}\binom{2r}{r}$ satisfy the recurrence $C_0 = 1$
and $C_r = \sum_{k=0}^{r-1} C_k C_{r-k-1}$, it now follows by induction on
$r$ that $\beta_r = C_r + O(n^{-2})$, where the $O$ term may also
depend on $r$.
\end{proof}
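The recursion at the end of the proof can be illustrated by dropping the $O(n^{-2})$ terms: the resulting sequence is exactly the Catalan sequence.

```python
from math import comb

def catalan_closed(r):
    # C_r = binom(2r, r) / (r + 1)
    return comb(2 * r, r) // (r + 1)

# The beta_r recurrence from the proof, with the O(n^{-2}) errors dropped
# (beta_0 = 1; the convolution then forces beta_1 = 1 as well).
beta = [1]
for r in range(1, 10):
    beta.append(sum(beta[k] * beta[r - 1 - k] for k in range(r)))

assert beta == [catalan_closed(r) for r in range(10)]
```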
Note that if $X$ is uniformly distributed in the sphere of
$\symmat{n}{\mathbb{C}}$, then $iX$ is uniformly distributed in the sphere of
$\asymmat{n}{\mathbb{C}}$. The anti-Hermitian case of Theorem
\ref{T:traces-of-powers-means} thus follows immediately from the
Hermitian case.
Recall that if $X$ is uniformly distributed on the sphere of
radius $\sqrt{n}$ in $\symmat{n}{\mathbb{C}}$, it follows from Proposition
\ref{T:star-poly-concentration} that the $W_p$ have bounded variance;
the following proposition shows that this also holds under the
concentration condition we have put on $\|X\|^2$.
\begin{prop}\label{T:variance-bound-hermitian}
Let $X$ be a random matrix in $\symmat{n}{\mathbb{C}}$ as above, whose
distribution is invariant under rotations in $\symmat{n}{\mathbb{C}}$, and let $W_p=\tr(X^p)$.
Suppose that $\mathbb{E}\|X\|^2=n$ and that for each $k$, there is a constant $\alpha_k$ depending only on $k$ such that
\[t_k(X)=\left|n^{-k/2}\mathbb{E}\|X\|^{k}-1\right|\le\frac{\alpha_k}{n^2}.\]
Then for each fixed $p\in\mathbb{N}$, there are constants $\kappa_{p,2}$ and
$\kappa_{p,4}$, depending on
$p$ and the $\alpha_k$
but not $n$, such that
\[
\var(W_p) \le \kappa_{p,2}
\qquad \text{and} \qquad
\mathbb{E} (W_p - \mathbb{E} W_p)^4 \le \kappa_{p,4} n^2.
\]
\end{prop}
\begin{proof}
As above, we write $X = \frac{\norm{X}}{\sqrt{n}}\widetilde{X}$,
where $\widetilde{X}$ is uniformly distributed on the sphere of
radius $\sqrt{n}$ in $\symmat{n}{\mathbb{C}}$ and $\widetilde{X}$ is
independent from $\norm{X}$. Let $R:=\frac{\|X\|}{\sqrt{n}}$ and
$\widetilde{W}_p=\tr(\widetilde{X}^p)$.
We have
\begin{align*}
W_p - \mathbb{E} W_p & = R^p \widetilde{W}_p - (\mathbb{E} R^p) (\mathbb{E}
\widetilde{W}_p) \\
& = R^p (\widetilde{W}_p - \mathbb{E} \widetilde{W}_p) + (\mathbb{E}
\widetilde{W}_p)\bigl[ (R^p - 1) - (\mathbb{E} R^p - 1)\bigr]
\end{align*}
and therefore
\begin{align*}
\left(\mathbb{E} \abs{W_p - \mathbb{E} W_p}^q\right)^{1/q}
& \le \left(1 + t_{pq}(X) \right)^{1/q} \left(\mathbb{E} \abs{\widetilde{W}_p -
\mathbb{E} \widetilde{W}_p}^q\right)^{1/q} \\
& \qquad + \abs{\mathbb{E} \widetilde{W}_p} \left[\left(\mathbb{E} \abs{R^p-1}^q\right)^{1/q}
+ t_p(X)\right]
\end{align*}
for any $q \ge 1$ by the $L^q$ triangle inequality. By Proposition
\ref{T:star-poly-concentration}, Theorem
\ref{T:traces-of-powers-means}, and the fact that
$t_k(X) = O(n^{-2})$ for each $k$, we have
$\left(\mathbb{E} \abs{\widetilde{W}_p - \mathbb{E} \widetilde{W}_p}^q\right)^{1/q}
= O(1)$, $\abs{\mathbb{E} \widetilde{W}_p} = O(n)$,
$\left(\mathbb{E} R^{pq} \right)^{1/q} = O(1)$, and
$\abs{\mathbb{E} R^p - 1} = O(n^{-2})$.
To complete the proof, observe that
\[
\mathbb{E} (R^p - 1)^2 = (\mathbb{E} R^{2p} - 1) - 2 (\mathbb{E} R^p - 1) \le t_{2p}(X) +
2 t_p(X) = O(n^{-2})
\]
and similarly
\[
\mathbb{E} (R^p - 1)^4 \le t_{4p}(X) + 4 t_{3p}(X) + 6 t_{2p}(X) + 4
t_p(X) = O(n^{-2}).
\qedhere
\]
\end{proof}
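As a numerical illustration of Proposition \ref{T:variance-bound-hermitian} (not part of the argument; the sampling scheme and the generous thresholds below are our own choices), one can draw uniform points on the sphere of radius $\sqrt{n}$ in $\symmat{n}{\mathbb{C}}$ by normalizing an isotropic (GUE-type) Gaussian, and check that $W_2=n$ exactly, that $\mathbb{E} W_4\approx nC_2=2n$ (Catalan number), and that the variances of the $W_p$ stay bounded:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere_hermitian(n):
    """Uniform sample from the sphere of radius sqrt(n) in the n x n
    Hermitian matrices (Hilbert-Schmidt norm): normalize an isotropic
    Gaussian in that real-linear space."""
    a = rng.standard_normal((n, n))
    b = rng.standard_normal((n, n))
    G = (a + a.T) / 2 + 1j * (b - b.T) / 2
    return np.sqrt(n) * G / np.linalg.norm(G)

def W(X, p):
    """Trace of the p-th power (real for Hermitian X)."""
    return np.trace(np.linalg.matrix_power(X, p)).real

n, reps = 40, 300
samples = np.array([[W(sphere_hermitian(n), p) for p in (2, 3, 4)]
                    for _ in range(reps)])
assert np.allclose(samples[:, 0], n)             # W_2 = ||X||^2 = n on the sphere
assert abs(samples[:, 2].mean() / n - 2.0) < 0.2 # E W_4 ~ n C_2 = 2n
assert samples[:, 1].var() < 100                 # Var(W_3) is O(1), ...
assert samples[:, 2].var() < 300                 # ... as is Var(W_4)
```

The constants in the last two assertions are loose $O(1)$ bounds; the point is that they do not need to grow with $n$.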
\begin{proof}[Proof of Theorem \ref{T:traces-of-powers-clt-normal} for $V=\symmat{n}{\mathbb{C}}$]
We begin with condition \eqref{inf-lincond} of Theorem
\ref{T:inf-abstract}. We write $R:=\frac{\|X\|}{\sqrt{n}}$,
$Y_p:=W_p-\mathbb{E} W_p$, and
$Y_{\epsilon,p}:=W_{\epsilon,p}-\mathbb{E} W_{\epsilon,p}$. By
\eqref{E:csym-linear1},
\begin{equation}\begin{split}\label{E:lin-diff-Hermite}
\lim_{\epsilon\to0}\frac{1}{\epsilon^2}
&\mathbb{E}\left[Y_{\epsilon,p}-Y_p\big|X\right]\\
&=\sum_{\ell=0}^{p-2}\frac{p}{d(d-1)}\left[W_2W_\ell W_{p-2-\ell}
-\mathbb{E}\left[W_2W_\ell W_{p-2-\ell}\right]\right]
-\frac{p(p+d-2)}{d(d-1)}Y_p\\
&=\sum_{\ell=0}^{p-2}\frac{pn}{d(d-1)}\left[R^2W_\ell W_{p-2-\ell}
-\mathbb{E}\left[R^2W_{p-2-\ell}W_\ell\right]\right]
-\frac{p(p+d-2)}{d(d-1)}Y_p.
\end{split}\end{equation}
Observe that
\begin{align*}
R^2 W_\ell
W_{p-2-\ell} -\mathbb{E}\left[R^2W_{p-2-\ell}W_\ell\right] = Y_\ell\mathbb{E} W_{p-2-\ell}+Y_{p-2-\ell}\mathbb{E}
W_\ell+Y_2\left(\frac{\mathbb{E} W_\ell\mathbb{E}
W_{p-2-\ell}}{n}\right)+F_{p,\ell},
\end{align*}
where
\begin{equation}\label{Eq:Fpl}
\begin{split}
F_{p,\ell}
&:= R^2Y_\ell
Y_{p-2-\ell}-\mathbb{E}[R^2Y_\ell
Y_{p-2-\ell}]+\left[(R^2-1)Y_\ell-\mathbb{E}[(R^2-1)Y_\ell]\right]\mathbb{E}
W_{p-2-\ell}\\
&\qquad+\left[(R^2-1)Y_{p-2-\ell}-\mathbb{E}[(R^2-1)Y_{p-2-\ell}]\right]\mathbb{E}
W_{\ell}\\
&= (R^2-1)Y_\ell
Y_{p-2-\ell} -\mathbb{E}[(R^2-1) Y_\ell
Y_{p-2-\ell}] + Y_\ell Y_{p-2-\ell} - \mathbb{E} [Y_\ell
Y_{p-2-\ell}] \\
&\qquad +\left[(R^2-1)Y_\ell-\mathbb{E}[(R^2-1)Y_\ell]\right]\mathbb{E}
W_{p-2-\ell} \\
& \qquad +\left[(R^2-1)Y_{p-2-\ell}-\mathbb{E}[(R^2-1)Y_{p-2-\ell}]\right]\mathbb{E}
W_{\ell}.
\end{split}
\end{equation}
Using this expression in \eqref{E:lin-diff-Hermite} yields
\begin{equation*}\begin{split}
\lim_{\epsilon\to0}&\frac{1}{\epsilon^2}\mathbb{E}\left[Y_{\epsilon,p}-Y_p\big|X\right]\\
&=\sum_{\ell=0}^{p-2}\frac{pn}{d(d-1)}\left[Y_\ell\mathbb{E} W_{p-2-\ell}+Y_{p-2-\ell}\mathbb{E}
W_\ell+Y_2\left(\frac{\mathbb{E} W_\ell\mathbb{E}
W_{p-2-\ell}}{n}\right)+F_{p,\ell}\right]-\frac{p(p+d-2)}{d(d-1)}Y_p
\\
&=\frac{p}{d(d-1)} \left[2n \sum_{\ell=0}^{p-2} Y_\ell\mathbb{E}
W_{p-2-\ell}+ Y_2\sum_{\ell=0}^{p-2}\mathbb{E} W_\ell\mathbb{E} W_{p-2-\ell}
+n \sum_{\ell=0}^{p-2} F_{p,\ell}
- (p+d-2) Y_p\right].
\end{split}\end{equation*}
As in the statement of Theorem \ref{T:traces-of-powers-clt-normal}, take
\[
Z_p=Y_p-\frac{p\mathbb{E} W_p}{2n}Y_2,
\]
for $p\ge 0$ (in particular, $Z_0 = Z_2 = 0$), and
\[
Z_{\epsilon,p}=Y_{\epsilon,p}-\frac{p\mathbb{E} W_p}{2n}Y_{\epsilon,2}
=Y_{\epsilon,p}-\frac{p\mathbb{E} W_p}{2n}Y_2,
\]
where the last equality follows since
$W_{\epsilon,2} = \|X_\epsilon\|^2 = \|X\|^2 = W_2$, so that
$Z_{\epsilon,p} - Z_p = Y_{\epsilon,p} - Y_p$. We then have
\begin{equation}\begin{split}\label{E:Z-linear}
\lim_{\epsilon\to0}\frac{1}{\epsilon^2}
\mathbb{E}\left[Z_{\epsilon,p}-Z_p\big|X\right]
=\frac{p}{d(d-1)} &\left[2n\sum_{\ell=0}^{p-2}Z_\ell\mathbb{E} W_{p-2-\ell}
+(p+1) Y_2 \sum_{\ell=0}^{p-2}\mathbb{E} W_\ell\mathbb{E}
W_{p-2-\ell} \right.\\
&\qquad\left.+n\sum_{\ell=0}^{p-2} F_{p,\ell} - (p+d-2)\left(Z_p+\frac{p\mathbb{E} W_p}{2n}Y_2\right)\right].
\end{split}\end{equation}
By Theorem \ref{T:traces-of-powers-means},
\[
\sum_{\ell=0}^{p-2} \mathbb{E} W_\ell \mathbb{E} W_{p-2-\ell} = n^2
\sum_{\ell=0}^{p-2} C_{\ell} C_{p-2-\ell} = O(n^2)
\]
if $p$ is even, and is $0$ otherwise. Equation \eqref{E:Z-linear}
therefore implies that
\begin{align*}
\lim_{\epsilon\to0}\frac{1}{\epsilon^2}&\mathbb{E}\left[Z_{\epsilon,p}-Z_p\big|X\right]
=\frac{p}{d(d-1)}\left[2n \sum_{\ell=0}^{p-2} Z_\ell\mathbb{E}
W_{p-2-\ell} -(p+d-2) Z_p + n \sum_{\ell=0}^{p-2} F_{p,\ell} + O(n^2) Y_2\right].
\end{align*}
We define a matrix $\Lambda$ with entries indexed by
$p,q\in\{1,\ldots,m\}\setminus\{2\}$ as follows:
\[
\big[\Lambda\big]_{pq}=\begin{cases}-\frac{2pC_{(p-2-q)/2}}{d-1} &
\text{if $1\le
q\le p-2$, $q\neq 2$, and $p-2-q$ is even};
\\\frac{p}{d-1} & \text{if }q=p;\\0 &\text{otherwise}.\end{cases}
\]
Then $\Lambda=\frac{1}{d-1}T$ where $T$ is an invertible, lower
triangular matrix which is independent of $n$, and so
$\|\Lambda^{-1}\|_{op}=(d-1)\|T^{-1}\|_{op}\le \kappa_mn^2.$
We define the random error $E$ to be
\[
E=\lim_{\epsilon\to 0}\mathbb{E}\left[Z_\epsilon-Z\big|X\right]+\Lambda Z,
\]
so that
\begin{equation}\begin{split}\label{E:Hermitian-linear-error}
E_p&=\frac{2p}{d-1}\sum_{\ell=0}^{p-2}Z_\ell\left(\frac{\mathbb{E}
W_{p-2-\ell}}{n}-C_{(p-2-\ell)/2}\right)
-\frac{p(p-2)}{d(d-1)}Z_p
+ \frac{2 p}{n(d-1)}\sum_{\ell=0}^{p-2}F_{p,\ell} +O(n^{-2}) Y_2.
\end{split}\end{equation}
Since $Y_\ell$ and $Y_2$ are centered with bounded variance and
$\mathbb{E} W_\ell = O(n)$, $Z_\ell$ is centered with bounded variance as
well. We also claim that $\mathbb{E} \abs{F_{p,\ell}} = O(1)$.
To see this, observe first that by H\"older's inequality and
Proposition \ref{T:variance-bound-hermitian},
\begin{align*}
\mathbb{E} \abs{(R^2 - 1) Y_\ell Y_{p-2-\ell}}
& \le \left(\mathbb{E} (R^2 - 1)^2\right)^{1/2} \left(\mathbb{E}
Y_\ell^4\right)^{1/4} \left(\mathbb{E} Y_{p-2-\ell}^4\right)^{1/4} = O(1)
\end{align*}
where $\mathbb{E} (R^2 - 1)^2$ is bounded as in the proof of Proposition
\ref{T:variance-bound-hermitian}. The other terms in
\eqref{E:Hermitian-linear-error} are bounded in $L^1$ similarly. It
then follows that \( \mathbb{E}|E|\le \kappa_m n^{-3}. \)
Finally, we consider part \ref{P:quad_bar} of Lemma
\ref{T:limits_all_spaces}. In the present context, the
Hilbert--Schmidt inner product is real and therefore coincides with
the real inner product $\inprod{\cdot}{\cdot}$. Therefore
\begin{align*}
\sum_{\alpha}\tr(X^{p-1}B_\alpha)\tr(X^{q-1}B_\alpha)
=\sum_\alpha\inprod{X^{p-1}}{B_\alpha}\inprod{X^{q-1}}{B_\alpha}=\inprod{X^{p-1}}{X^{q-1}}=W_{p+q-2}
\end{align*}
and
\begin{align*}
\sum_\alpha X_\alpha\tr(X^{p-1}B_\alpha)
= \sum_{\alpha} \inprod{X}{B_\alpha} \inprod{X^{p-1}}{B_\alpha}
= \inprod{X}{X^{p-1}}
=W_p.
\end{align*}
It thus follows from part (\ref{P:quad_no_bar}) of Lemma \ref{T:limits_all_spaces} that
\begin{align*}\lim_{\epsilon\to0}\frac{1}{\epsilon^2}\mathbb{E}\left[(W_\epsilon-W)_p(W_\epsilon-W)_q\big|X\right]=\frac{2pq}{d(d-1)}\left[W_2W_{p+q-2}-W_pW_q\right].\end{align*}
As before, $\mathbb{E} W_\epsilon=\mathbb{E} W$, so that $W_\epsilon-W=Y_\epsilon-Y$, and since
$Y_{\epsilon,2}=Y_2$, it is furthermore the case that
$Y_\epsilon-Y=Z_\epsilon-Z$. That is, for $p,q\in\{1,\ldots,m\}\setminus\{2\}$,
\begin{align*}
\lim_{\epsilon\to0}\frac{1}{\epsilon^2}\mathbb{E}\left[(Z_\epsilon-Z)_p(Z_\epsilon-Z)_q\big|X\right]=\frac{2pq}{d(d-1)}\left[W_2W_{p+q-2}-W_pW_q\right].
\end{align*}
Theorem \ref{T:traces-of-powers-means}, Proposition
\ref{T:variance-bound-hermitian}, and symmetry imply that
\[
\mathbb{E} W_p W_q = \begin{cases}
n^2 C_{p/2} C_{q/2} + O(n) & \text{if $p$ and $q$ are both even,} \\
O(1) & \text{if $p$ and $q$ are both odd,} \\
0 & \text{if $p$ and $q$ have opposite parities}.
\end{cases}
\]
We define the matrix $\Gamma$ indexed by $p,q\in\{1,\ldots,m\}\setminus\{2\}$ by
\[
\Gamma_{p,q} = \frac{pq}{d-1}\begin{cases}
C_{(p+q-2)/2} - C_{p/2} C_{q/2} & \text{if $p$ and $q$ are both even,} \\
C_{(p+q-2)/2} & \text{if $p$ and $q$ are both odd,} \\
0 & \text{if $p$ and $q$ have opposite parities}
\end{cases}
\]
and let $\Sigma=\Lambda^{-1} \Gamma$. With these choices of $\Lambda$
and $\Sigma$, the random error matrix $E'$ of Theorem
\ref{T:inf-abstract} can be bounded as in the previous sections to
complete the proof. As noted in the introduction, it is not obvious
from this form that $\Sigma$ is positive semidefinite. However, the
argument above shows that $\Sigma$ arises as the limit of a sequence
of covariance matrices, and therefore must be positive semidefinite.
\end{proof}
\section{Rotationally invariant ensembles in $\symmat{n}{\mathbb{R}}$}\label{S:symmetric}
Proceeding by Lemma \ref{T:limits_all_spaces} as before, let $A\in\symmat{n}{\mathbb{R}}$ and recall that in the case of $\symmat{n}{\mathbb{R}}$, $\{B_\alpha\}_{\alpha=1}^d=\{E_{jj}\}_{j=1}^n\cup\{F_{jk}\}_{1\le j<k\le n}.$ We have
\begin{equation*}\begin{split}\sum_{\alpha=1}^dB_\alpha AB_\alpha&=\sum_{j=1}^nE_{jj}AE_{jj}+\frac{1}{2}\sum_{1\le
j<k\le
n}(E_{jk}+E_{kj})A(E_{jk}+E_{kj})\\&=\frac{1}{2}\sum_{j,k=1}^n E_{jk}AE_{jk}+\frac{1}{2}\sum_{j,k=1}^nE_{jk}AE_{kj}\\&=\frac{1}{2}A+\frac{1}{2}\tr(A)I,\end{split}\end{equation*}
using the fact that $A$ is symmetric. It thus follows from Lemma \ref{T:limits_all_spaces} that
\begin{equation}\begin{split}\label{E:rsym-linear1}
\lim_{\epsilon\to0}\frac{1}{\epsilon^2}&\mathbb{E}\left[W_{\epsilon,p}-W_p\big|X\right]\\
&=\sum_{\ell=0}^{p-2}\frac{(\ell+1)
\|X\|^2}{d(d-1)}\left(W_{p-2}+W_\ell
W_{p-2-\ell}\right)-\frac{p(p+d-2)}{d(d-1)}W_p\\
&=\frac{p}{2d(d-1)} \left[(p-1)W_2W_{p-2}+W_2\sum_{\ell=0}^{p-2}
W_\ell W_{p-2-\ell}-2(p+d-2)W_p\right].
\end{split}\end{equation}
Compare with the corresponding expression \eqref{E:csym-linear1} in
the Hermitian case: the first term here is new, but the remaining two
terms are, to top order, $\frac{1}{2}$ times the corresponding term in
the Hermitian case (recall that in $\symmat{n}{\mathbb{R}}$,
$d=\frac{n(n+1)}{2}$). The first term in \eqref{E:rsym-linear1} is of
smaller order than the remaining terms, and so the proofs of Theorems
\ref{T:traces-of-powers-means} and \ref{T:traces-of-powers-clt-normal}
are essentially the same as in the complex Hermitian case.
The proof of Proposition \ref{T:variance-bound-hermitian} carries over
verbatim to this case.
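The operator identity $\sum_\alpha B_\alpha AB_\alpha=\frac12 A+\frac12\tr(A)I$ computed above for symmetric $A$ is easy to verify numerically; the following sketch (an illustration, not part of the proof) builds the orthonormal basis $\{E_{jj}\}\cup\{(E_{jk}+E_{kj})/\sqrt2\}$ of $\symmat{n}{\mathbb{R}}$ explicitly:

```python
import numpy as np

def sum_BAB(A):
    """Compute sum_alpha B_alpha A B_alpha for the orthonormal basis
    {E_jj} union {(E_jk + E_kj)/sqrt(2), j < k} of real symmetric matrices."""
    n = A.shape[0]
    out = np.zeros_like(A)
    for j in range(n):
        E = np.zeros((n, n))
        E[j, j] = 1.0
        out += E @ A @ E
    for j in range(n):
        for k in range(j + 1, n):
            F = np.zeros((n, n))
            F[j, k] = F[k, j] = 1.0 / np.sqrt(2.0)
            out += F @ A @ F
    return out

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                     # a random symmetric matrix
rhs = 0.5 * A + 0.5 * np.trace(A) * np.eye(n)
assert np.allclose(sum_BAB(A), rhs)
```

Note that the factor $1/\sqrt2$ in $F_{jk}$ is what produces the $\frac12$ in front of the sum over $j<k$ in the computation above.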
Returning to condition \eqref{inf-lincond} of Theorem
\ref{T:inf-abstract}, since the expectation of the left-hand side of
\eqref{E:rsym-linear1} is zero, if $Y_p:=W_p-\mathbb{E} W_p$ and
$Y_{\epsilon,p}:=W_{\epsilon,p}-\mathbb{E} W_{\epsilon,p}$, then
\begin{align*}
\lim_{\epsilon\to0}\frac{1}{\epsilon^2}\mathbb{E}\left[\left.Y_{\epsilon,p}-Y_p\right|X\right]&=\frac{p(p-1)}{2d(d-1)}(W_2W_{p-2}-\mathbb{E}
W_2W_{p-2})\\&\quad+\frac{p}{2d(d-1)}\sum_{\ell=0}^{p-2}\left[W_2W_\ell W_{p-2-\ell}-\mathbb{E}
W_2W_\ell W_{p-2-\ell}\right]-\frac{p(p+d-2)}{d(d-1)}Y_p.
\end{align*}
Again, the first term is new and the second two are very similar to
the Hermitian case, differing only in factors of 2 that correspond to
the change in dimension. By recentering and applying Proposition
\ref{T:star-poly-concentration}, it is straightforward to check that
the new term is of smaller order than the others and can be
incorporated into the constant in the final bound. The proof of
Theorem \ref{T:traces-of-powers-clt-normal} then proceeds identically to the
Hermitian case.
\section*{Acknowledgements}
This research was partially supported by grants from the U.S.\
National Science Foundation (DMS-1612589 to E.M.)\ and the Simons
Foundation (\#315593 to M.M.). The authors thank the anonymous
referees for numerous comments that improved the exposition of this
paper.
\bibliographystyle{plain}
https://arxiv.org/abs/1509.05304 | On a magnetic characterization of spectral minimal partitions | Given a bounded open set $\Omega$ in $ \mathbb R^n$ (or in a Riemannian manifold) and a partition of $\Omega$ by $k$ open sets $D_j$, we consider the quantity $\max_j \lambda(D_j)$ where $\lambda(D_j)$ is the ground state energy of the Dirichlet realization of the Laplacian in $D_j$. If we denote by $ \mathfrak L_k(\Omega)$ the infimum over all the $k$-partitions of $ \max_j \lambda(D_j)$, a minimal $k$-partition is then a partition which realizes the infimum. When $k=2$, we find the two nodal domains of a second eigenfunction, but the analysis of higher $k$'s is non trivial and quite interesting. In this paper, we give the proof of one conjecture formulated previously by V. Bonnaillie-Noel and B. Helffer about a magnetic characterization of the minimal partitions when $n=2$. | \section{Introduction}\label{Section1}
\subsection{Main definitions}
We consider mainly the Dirichlet Laplacian in a bounded
domain $\Omega\subset \mathbb R^2$.
We would like to analyze the relations between the nodal domains
of the real-valued eigenfunctions of this Laplacian and the partitions of
$\Omega$
by $ k$
open sets $ D_i$ which are minimal in the sense that the
maximum over the $
D_i$'s of the ground state energy\footnote{The ground state energy is
the smallest eigenvalue.} of the Dirichlet
realization
of the Laplacian $H(D_i)$ in $ D_i$ is minimal. In the case of a
compact Riemannian manifold, the natural extension is to consider
the Laplace--Beltrami operator.
We denote by $ \lambda_j(\Omega)$ the
increasing sequence of its eigenvalues and by $u_j$ some associated
orthonormal basis of real-valued eigenfunctions.
The ground state $ u_1$ can be chosen to be
strictly positive in $ {\Omega}$, but the other eigenfunctions
$ u_k$ must have zerosets.
For any real-valued
$ u\in C_0^0(\overline{\Omega})$, we define the zero set as
\begin{equation}
N(u)=\overline{\{x\in {\Omega}\:\big|\: u(x)=0\}}
\end{equation}
and call the components of $ {\Omega}\setminus N(u)$ the nodal
domains of $ u$.
The number of
nodal domains of $ u$ is called $ \mu(u)$. These
$\mu(u)$ nodal
domains define a $k$-partition of $ \Omega$, with $k=\mu(u)$.
We recall that the Courant nodal
theorem says that, for $k\geq 1$, and if $\lambda_k$ denotes the
$k$-th eigenvalue and $ E(\lambda_k)$ the eigenspace of
$ H(\Omega)$ associated with $\lambda_k$, then, for all real-valued $ u\in E(\lambda_k)\setminus \{0\}\;,\;
\mu (u)\le k\;.
$
In dimension $1$, Sturm-Liouville theory says that equality always
holds in the previous theorem (for the Dirichlet problem in a bounded interval); this is what we will
later call a Courant-sharp situation. A theorem due to Pleijel \cite{Pleijel:1956} from
1956 says
that this cannot be true when the dimension (here we consider the
$2D$-case) is larger than one.
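These two facts can be illustrated on a rectangle with irrational squared side ratio, where the spectrum $\lambda_{m,n}=\pi^2(m^2/a^2+n^2/b^2)$ is simple and the $(m,n)$ product eigenfunction has exactly $mn$ nodal domains. The following sketch (our illustration, with $a=1$ and $b=2^{1/4}$, so that $\lambda_{m,n}\propto m^2+n^2/\sqrt2$) verifies Courant's bound $\mu(u_k)\le k$ and finds that, in the range checked, the first Courant-sharp indices are $k=1,2,4$:

```python
import numpy as np
from itertools import product

# Dirichlet eigenvalues of (0,1) x (0, 2**0.25) are proportional to
# m^2 + n^2/sqrt(2); the irrational ratio makes the spectrum simple.
M = 25   # mode cutoff; large enough that the first 50 ranks are exact
modes = sorted(product(range(1, M + 1), repeat=2),
               key=lambda mn: mn[0] ** 2 + mn[1] ** 2 / np.sqrt(2))

for k, (m, n) in enumerate(modes[:50], start=1):
    assert m * n <= k               # Courant's nodal bound

courant_sharp = [k for k, (m, n) in enumerate(modes[:50], start=1)
                 if m * n == k]
assert courant_sharp[:3] == [1, 2, 4]
```

Here the cutoff $M=25$ guarantees that every omitted mode has a larger eigenvalue than the 50 smallest enumerated ones, so the ranks $k$ are correct.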
We now introduce for $k\in \mathbb N$ ($k\geq 1$),
the notion of $k$-partition. We
will call {\bf $ k$-partition} of $ \Omega$ a family
$ \mathcal D=\{D_i\}_{i=1}^k$ of mutually disjoint sets in $\Omega$.
We call it {\bf open} if the $D_i$ are open sets of
$ \Omega$,
{\bf connected}
if the $ D_i$ are connected. We denote by $ \mathfrak O_k(\Omega)$ the set of open connected
partitions of $\Omega$.
We now introduce the notion of spectral minimal partition sequence.
\begin{definition}\label{regOm}~\\
For any integer $ k\ge 1$,
and for $ \mathcal D$ in $ \mathfrak O_k(\Omega)$, we
introduce
\begin{equation}\label{LaD}
\Lambda(\mathcal D)=\max_{i}{\lambda}(D_i).
\end{equation}
Then we define
\begin{equation}\label{frakL}
\mathfrak L_{k}(\Omega)=\inf_{\mathcal D\in \mathfrak O_k}\:\Lambda(\mathcal D),
\end{equation}
and call $ \mathcal D\in \mathfrak O_k$ a minimal $k$-partition if
$ \mathfrak L_{k}=\Lambda(\mathcal D)$.
\end{definition}
If $ k=2$, it is rather well known
(see \cite{HH:2005a} or
\cite{CTV:2005}) that $ \mathfrak L_2 =\lambda_2$ and
that the associated minimal $ 2$-partition is
a {\bf nodal partition}, i.e. a partition whose elements
are the nodal domains of
some eigenfunction corresponding to
$ \lambda_2$.
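For a rectangle this identity $\mathfrak L_2=\lambda_2$ is concrete: a second eigenfunction cuts the rectangle into two congruent halves, and the ground state energy of each half equals $\lambda_2$, so the nodal $2$-partition achieves $\Lambda(\mathcal D)=\lambda_2$. A quick check (our illustration, with side lengths $1$ and $b=2^{1/4}$):

```python
import numpy as np

def lam(m, n, width, height):
    """Dirichlet eigenvalue of the (m, n) mode on a width x height rectangle."""
    return np.pi ** 2 * ((m / width) ** 2 + (n / height) ** 2)

b = 2 ** 0.25                                   # rectangle (0,1) x (0,b), b > 1
lam2 = min(lam(1, 2, 1, b), lam(2, 1, 1, b))    # second eigenvalue of the rectangle
lam1_half = lam(1, 1, 1, b / 2)                 # ground state of each nodal half
assert np.isclose(lam2, lam1_half)
```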
A partition $ \mathcal D=\{D_i\}_{i=1}^k$ of $
\Omega$ in $ \mathfrak
O_k$ is called {\bf strong} if
\begin{equation}\label{defstr}
{\rm Int\,}(\overline{\cup_i D_i}) \setminus \pa {\Omega} ={\Omega}\;,
\end{equation}
where, for a set $A\subset \mathbb R^2$, ${\rm Int\,} (A)$ means the interior of $A$.
Attached to a strong partition, we associate a closed
set in $ \overline{\Omega}$, which is called the {\bf boundary set} of the partition~:
\begin{equation}\label{assclset}
N(\mathcal D) = \overline{ \cup_i \left( \partial D_i \cap \Omega
\right)}\;.
\end{equation}
$ N(\mathcal D)$ plays the role
of the nodal set (in the case of a nodal partition).
This suggests the following definition:
\begin{definition}\label{AMS}~\\
We call a partition $\mathcal D$ regular if its associated
boundary set $ N(\mathcal D) $, has the following properties~:\\
(i)
Except for finitely many distinct $ x_i\in {\Omega}\cap N$
in the neighborhood of which $ N$ is the union of $\nu_i= \nu(x_i)$
smooth curves ($ \nu_i\geq 3$) with one end at $ x_i$,
$ N$ is locally diffeomorphic to a regular
curve.\\
(ii)
$ \pa{\Omega}\cap N$ consists of a (possibly empty) finite set
of points $ z_i$. Moreover
$N$ is near $ z_i$ the union
of $ \rho_i$ distinct smooth half-curves which hit
$ z_i$.\\
(iii) $ N$ has the {\bf equal angle meeting property}.
\end{definition}
The $x_i$ are called the critical points and define the set
$X(N)$. Similarly
we denote by $Y(N)$ the set of the boundary points $z_i$.
By the {\bf equal angle meeting property}, we mean that the half curves meet with equal angles at each critical
point of $ N$, and also at the boundary points, where the
tangent to the boundary is counted among the meeting curves.
We say that $ D_i,D_j$ are {\bf neighbors}
or $ D_i\sim D_j$, if $
D_{ij}:={\rm Int\,}(\overline {D_i\cup D_j})\setminus \pa {\Omega}$ is
connected.
We associate with
each $ \mathcal D$ a {\bf graph}
$ G(\mathcal D)$ by
associating with each $ D_i$ a vertex and to each
pair $ D_i\sim D_j$ an edge. We will say that the graph is
{\bf bipartite} if it
can be colored by two colors (two neighbours having two different
colors). We recall that the graph associated
with a collection of nodal domains of an eigenfunction is always
bipartite.
\subsection{Motivation and outlook}
Before we state some results on spectral minimal partitions, discuss their
properties and finally formulate and prove the central result of the present paper, we give an informal outlook on our results.
The main result is a new characterization of minimal partitions via specific magnetic Hamiltonians, see Section \ref{Section4} for
the necessary definitions and explanations of those operators.
In \cite{HHOT} we have characterized via minimal partitions the case of equality in Courant's nodal theorem,
see Theorem \ref{L=L} below. Roughly speaking (see Theorem \ref{partnod}), if a minimal partition could in principle stem
from an eigenfunction, it must already be produced by the nodal domains of an eigenfunction, and this can only happen if
there is equality in \eqref{Courant}. Pleijel's result \cite{Pleijel:1956} implies, roughly speaking,
that eigenfunctions associated with higher eigenvalues cannot lead to equality in \eqref{Courant}.
In Section \ref{Section3} we give a few pictures of non-nodal minimal partitions, or more precisely natural candidates,
since it is notoriously hard to work out explicit examples for such partitions. A first glance shows that
there are points where an odd number of nodal arcs meet.
More than ten years ago, together with Maria Hoffmann-Ostenhof
and Mark Owen, we investigated in \cite{HHOO, HHOO1} some special magnetic Schr\"odinger operators, called Aharonov-Bohm Hamiltonians, i.e.
Hamiltonians with zero magnetic field but with a singular
magnetic vector potential having half-integer circulation around holes; see Section \ref{Section4}.
This investigation was motivated by the, at that time surprising, result of Berger and Rubinstein \cite{BeRu} about the
zeroset of a groundstate for such a problem with one hole. For more than one hole, similar results on zerosets
were obtained: each hole was hit by an odd number of nodal arcs.\footnote{Similar results for
punctured domains were later obtained in \cite{AFT}.}
The findings in \cite{HHOO, HHOO1} motivated the conjecture formulated in \cite{BH} and \cite{HeEg}, which is reformulated in the present paper.
The result says roughly that, if ${\Omega}$ is simply connected, spectral minimal partitions are obtained by minimizing a certain eigenvalue
of an Aharonov-Bohm Hamiltonian with respect to the number and the positions of the poles.
See Theorem \ref{Theorem5.1} for the full result.
This approach sheds new light on spectral minimal partitions. While in the original
formulation \cite{HHOT}, say for a fixed ${\Omega}$, computing $\mathfrak L_k({\Omega})$ and the associated minimal partitions of
Definition \ref{regOm} requires the calculation of $\Lambda(\mathcal D)$ over $k$-partitions, the new formulation can be considered as an,
admittedly involved, eigenvalue minimization.
\paragraph{Acknowledgments}~\\
When writing this paper we benefited from useful discussions with V. Bonnaillie-No\"el and S. Terracini.
\section{Basic properties of minimal partitions}\label{Section2}
The following theorem has been proved by Conti-Terracini-Verzini \cite{CTV0, CTV2,
CTV:2005}
and Helffer--T.~Hoffmann-Ostenhof--Terracini \cite{HHOT}:
\begin{theorem}\label{thstrreg}~\\
For any $ k$, there exists a minimal regular $
k$-partition. Moreover
any minimal $ k$-partition has a regular
representative\footnote{Modulo sets of capacity $0$.}.
\end{theorem}
Other proofs of a somewhat weaker version of this statement have been
given by Bucur-Buttazzo-Henrot \cite{BBH} and by Caffarelli-F.H.~Lin \cite{CL1}.
A natural question is whether a minimal partition of $ \Omega$
is a nodal partition, i.e. the family of
nodal domains of an eigenfunction of $ H(\Omega)$.
We have first the following converse theorem (\cite{HH:2005a}, \cite{HHOT}):
\begin{theorem}\label{partnod}~\\
If the graph of a minimal partition
is bipartite, then this partition is nodal.
\end{theorem}
A natural question is now to determine how general the previous
situation is. Surprisingly this only occurs in the so
called Courant-sharp situation.
We say that $ u$ is {\bf Courant-sharp} if
$$ u\in E(\lambda_k)\setminus \{0\} \quad \mbox{
and}\quad
\mu(u)=k\;.
$$
For any integer $ k\ge 1$, we denote by $ L_k(\Omega)$
the smallest eigenvalue of $H(\Omega)$, whose eigenspace
contains an eigenfunction with $ k$ nodal domains. We set
$L_k(\Omega) =\infty$ if there is no
eigenfunction with $ k$ nodal domains.
In general, one can
show that
\begin{equation}
\lambda_k(\Omega) \leq\mathfrak L_k(\Omega) \leq L_k(\Omega) \;.
\end{equation}
The last result gives the full picture of the equality cases~:
\begin{theorem}\label{L=L}~\\
Suppose $ {\Omega}\subset \mathbb R^2$ is regular.
If $\mathfrak L_k(\Omega)=L_k(\Omega)$ or $\mathfrak L_k(\Omega)=\lambda_k(\Omega)$
then
\begin{equation}\label{Courant}
{\lambda}_k(\Omega)=\mathfrak L_k(\Omega)=L_k(\Omega)\,.
\end{equation}
In addition, one can find in $ E({\lambda}_k) $ a Courant-sharp
eigenfunction.
\end{theorem}
This answered a question posed in \cite{BHIM} (Section 7).
\begin{remark}~\\
Very recently spectral partitions for discrete problems, namely
quantum graphs, have been investigated in \cite{Bandetal}.
\end{remark}
\section{Examples of minimal $k$-partitions for special domains}\label{Section3}
Using Theorem \ref{L=L},
it is now easier to analyze the situation for the disk or for rectangles
(at least in the irrational case), since we have just to check for which eigenvalues
one can find associated Courant-sharp eigenfunctions.
The possible topological types of a minimal partition $\mathcal D$ rely essentially
on Euler's formula and the fact that the $D_i$'s have to be nice, which means
\begin{equation}\label{nice}
{\rm Int\,}(\overline D_i)\cap \Omega =D_i\,.
\end{equation}
Figures \ref{fig.5part} and \ref{fig.disk} illustrate possible situations.
\begin{proposition}\label{Euler}~\\
Let $U $ be an open set in $\mathbb R^2$ with piecewise-$C^{1}$ boundary
and let $N$ be a closed set such that $U \setminus N$ has $k$
components and such that $N$ satisfies the properties of Definition
\ref{AMS}. Let $b_0$ be the number of components of
$\pa U$ and $b_1$ be the number of components of $N\cup\pa U$. Denote by $\nu(x_i)$ and $\rho(z_i)$
the numbers of arcs associated with the $x_i\in X(N)$, respectively $z_i\in Y(N)$. Then
\begin{equation}\label{Emu}
k =b_1-b_0+\sum_{x_i\in X(N)}(\frac{\nu(x_i)}{2}-1)+
\frac{1}{2}\sum_{z_i\in Y(N)}\rho(z_i)+1\,.
\end{equation}
\end{proposition}
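Formula \eqref{Emu} is easy to test on explicit configurations. The helper below (an illustration, not from the paper) encodes its right-hand side and checks it on the $Y$-partition of the disk (one interior point where three arcs meet, three boundary points each hit by one arc) and on the disk cut by five half-rays through the center:

```python
def euler_k(b0, b1, nu, rho):
    """Number of partition components predicted by formula (Emu):
    b0 = number of components of the outer boundary,
    b1 = number of components of N union the boundary,
    nu = arc counts nu(x_i) at the interior critical points,
    rho = arc counts rho(z_i) at the boundary points."""
    return b1 - b0 + sum(v / 2 - 1 for v in nu) + sum(rho) / 2 + 1

# Y-partition of the disk: k = 0 + 1/2 + 3/2 + 1 = 3
assert euler_k(b0=1, b1=1, nu=[3], rho=[1, 1, 1]) == 3
# Disk partitioned by 5 half-rays from the center: k = 0 + 3/2 + 5/2 + 1 = 5
assert euler_k(b0=1, b1=1, nu=[5], rho=[1, 1, 1, 1, 1]) == 5
```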
This allows us to analyze minimal partitions of a specific
topological type. If in addition the domain has some symmetries and we assume that a minimal partition keeps some of these symmetries, then we find
natural candidates for minimal partitions.
\paragraph{Minimal $3$-partitions}~\\
In the case of the disk (see \cite{HH:2006}), we have no proof that the minimal $ 3$-partition
is the ``Mercedes star'' or $Y$-partition, i.e. the partition created by three straight rays meeting
at the center with equal angle. But if we assume that the minimal
$ 3$-partition has a unique singular point at the center, then one can show that it is indeed the $Y$-partition. This point of view is explored numerically
by Bonnaillie-Helffer \cite{BH} (using some method equivalent to the Aharonov-Bohm approach and playing with the location of the critical point). There is also an interesting theoretical analysis
by Noris-Terracini \cite{NT}.\\
We have no example of minimal $3$-partitions with two critical points.
For the disk and the square the minimal $4$-partitions are nodal.
\paragraph{Minimal $5$-partitions}~\\
Using the covering approach, we were able (with V. Bonnaillie) in \cite{BH} to produce numerically the
following
candidate $\mathcal D_1$ for a minimal $ 5$-partition assuming a specific topological
type.
\begin{figure}[h!bt]
\begin{center}
\includegraphics[height=3cm]{carrePart5}
\caption{Candidate $\mathcal D_1$ for the $5$-partition of the square.}
\end{center}
\end{figure}
It is interesting to compare with other possible topological types of
minimal $5$-partitions. They can be classified by using Euler's
formula (see formula \eqref{Emu}). Inspired by numerical
computations in \cite{CyBaHo}, one looks for a
configuration which has the symmetries of the square and four
critical points. We get two types of models that we can reduce
to a Dirichlet-Neumann problem on a triangle corresponding to the
eighth of the square. Moving the Neumann boundary on one side
like in \cite{BHV} leads us to two candidates $\mathcal D_2$ and
$\mathcal D_3$.
Of these, $\mathcal D_2$ has the lower energy $\Lambda(\mathcal D)$, and
one recovers the pictures in \cite{CyBaHo}.
\begin{figure}[h!bt]
\begin{center}
\begin{tabular}{ccc}
$\Lambda(\mathcal D_1)= 111.910$
& $\Lambda(\mathcal D_2)=104.294$
& $\Lambda(\mathcal D_3) =131.666$\\
\includegraphics[height=3cm]{carrePart5}
& \includegraphics[height=3cm]{glistri_022_part5}
& \includegraphics[height=3cm]{glistri_033_part5}
\end{tabular}
\caption{Three candidates for the $5$-partition of the square.\label{fig.5part}}
\end{center}
\end{figure}
Note that in the case of the disk a similar analysis leads to a
different answer. The partition of the disk by five half-rays with equal
angles has a lower energy than the
minimal $ 5$-partition with
four singular points.
\begin{figure}[h!bt]
\begin{center}
\begin{tabular}{cc}
$104.367$
& $110.832$\\
\includegraphics[height=3cm]{disque1}
& \includegraphics[height=3cm]{disque2}
\end{tabular}
\caption{Two candidates for the $5$-partition of the disk.\label{fig.disk}}
\end{center}
\end{figure}
\section{The Aharonov-Bohm approach}\label{Section4}
Let us recall some definitions and results about the Aharonov-Bohm
Hamiltonian (for short ${{\bf A}{\bf B}}X$-Hamiltonian) defined in an open set $\Omega$ which can be simply connected or not. These results were initially motivated
by the work of Berger-Rubinstein \cite{BeRu}, and further developed in \cite{AFT, HHOO, HHOO1,BHHO,BH}.
\paragraph{Simply connected case : one pole}~\\
We first consider the case when one pole, denoted by $X=(x_{0},y_{0})$, is chosen in $\Omega$ and
introduce the magnetic potential~:
\begin{equation}
{{\bf A}^X}(x,y) = (A_1^X(x,y),A_2^X(x,y))=\frac{\Phi}{2\pi} \, \left( -\frac{y-y_{0}}{r^2}, \frac{x-x_{0}}{r^2}\right)\,.
\end{equation}
We know that in this case the magnetic field vanishes identically in
$\dot\Omega_{X}\,$, where
\begin{equation}
\dot \Omega_{X}= \Omega \setminus \{X\}\,.
\end{equation}
The ${{\bf A}{\bf B}}X$-Hamiltonian is defined by considering the Friedrichs
extension starting from $C_0^\infty(\dot \Omega_{X})$
and the associated differential operator is
\begin{equation}
-\Delta_{{\bf A}^X} := (D_x - A_1^X)^2 + (D_y-A_2^X)^2\,\mbox{ with }D_x =-i\pa_x\mbox{ and }D_y=-i\pa_y.
\end{equation}
We will consider in the sequel the very special case when the flux $\Phi$ created at $X=(x_0,y_0)$, which can be computed by considering the circulation of ${\bf A}^X$ along a simple closed path turning once anti-clockwise around $X$, satisfies:
\begin{equation}\label{condflux}
\frac{\Phi}{2\pi}=\frac 12 \,.
\end{equation}
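Condition \eqref{condflux} can be checked numerically: the circulation of ${\bf A}^X$ along any simple closed loop turning once anti-clockwise around $X$ equals $\Phi=\pi$, whether or not the loop is centered at the pole. A small sketch (our illustration; the discretization is a periodic Riemann sum, which is very accurate for smooth closed loops):

```python
import numpy as np

HALF_FLUX = 0.5   # Phi / (2 pi), the half-flux condition (condflux)

def A_X(x, y, pole):
    """The Aharonov-Bohm potential A^X with flux Phi = pi at the pole."""
    r2 = (x - pole[0]) ** 2 + (y - pole[1]) ** 2
    return -HALF_FLUX * (y - pole[1]) / r2, HALF_FLUX * (x - pole[0]) / r2

def circulation(pole, center, radius, num=4000):
    """Line integral of A^X along the circle (center, radius),
    traversed once anti-clockwise."""
    t = 2 * np.pi * np.arange(num) / num
    x = center[0] + radius * np.cos(t)
    y = center[1] + radius * np.sin(t)
    A1, A2 = A_X(x, y, pole)
    integrand = A1 * (-radius * np.sin(t)) + A2 * (radius * np.cos(t))
    return integrand.sum() * 2 * np.pi / num

pole = (0.2, -0.1)
assert np.isclose(circulation(pole, pole, 0.5), np.pi)        # loop centered at X
assert np.isclose(circulation(pole, (0.6, 0.1), 1.0), np.pi)  # off-center loop enclosing X
```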
Under assumption \eqref{condflux}, let $K_{X}$ be the anti-linear operator
$$ K_{X} = e^{i \theta_{X}} \; \Gamma\,,$$
with $ (x-x_0)+ i (y-y_0) = \sqrt{|x-x_0|^2+|y-y_0|^2}\, e^{i\theta_{X}}\,$,
where $\Gamma$ is the complex conjugation operator
$$\Gamma u = \bar u\,
$$
and
\begin{equation}\label{deftheta}
\nabla \theta_X = 2 {\bf A}^X\,,
\end{equation}
which can also be rewritten in the form
$$
-{\bf A}^X = {\bf A}^X - \nabla \theta_X\,.
$$
The flux condition \eqref{condflux} shows that one can find a solution $\theta_X$ of \eqref{deftheta} (a priori multi-valued) such that $e^{i \theta_{X}} $ is uni-valued and $C^\infty$. Hence $-\Delta_{{\bf A}^X}$ and $-\Delta_{-{\bf A}^X}$ are intertwined by
the gauge transformation associated with $e^{i \theta_{X}} $. \\
Then we have
\begin{equation}\label{commuterel}
K_X \; \Delta_{{\bf A}^X}= \Delta_{{\bf A}^X}\; K_X\,.
\end{equation}
We say that a function $u$ is $K_{X}$-real, if it satisfies
$K_{X} u =u.$
Then the operator $-\Delta_{{\bf A}^X}$ preserves the
$K_{X}$-real functions. In the same way as one proves that the usual Dirichlet Laplacian admits an orthonormal basis of real-valued eigenfunctions, or restricts this Laplacian to the vector space over $\mathbb R$ of the real-valued $L^2$ functions, one can construct for $-\Delta_{{\bf A}^X}$ a
basis of $K_{X}$-real eigenfunctions or, alternately, consider the
restriction of the ${{\bf A}{\bf B}}X$-Hamiltonian
to the vector space over $\mathbb R$
$$
L^2_{K_{X}}(\dot{\Omega}_{X})=\{u\in L^2(\dot{\Omega}_{X}) \;,\; K_{X}\,u =u\,\}\,.
$$
\paragraph{Non simply connected case}~\\
In this situation, magnetic potentials in $\Omega$ with zero magnetic field can differ from gradients if the fluxes around some holes are not in $(2\pi)\mathbb Z$. We will be interested in potentials for which the flux created by some hole is $\pi$; this will be realized in this article by introducing a pole in the hole. Except that $\dot {\Omega}_X =\Omega$ (there is no singularity in $\Omega$), everything defined before goes through, and this is actually the initial case treated in the pioneering work \cite{BeRu}.\\
\paragraph{Poles and holes}~\\
We can extend our construction of an Aharonov-Bohm Hamiltonian
in the case of a
configuration with $\ell$ distinct points $X_1,\dots, X_\ell$ (putting a flux $\pi$ at each
of these points). These points can be chosen in $\Omega$ or in the holes. They are distinct and each hole contains at most one $X_k$. We can just take as magnetic potential
$$
{\bf A}^{{\bf X}} = \sum_{j=1}^\ell {\bf A}^{X_j}\,,
$$
where ${\bf X}=(X_1,\dots,X_\ell)$.
Our Hamiltonian will be defined in
$
\dot{\Omega}_{{\bf X}} = \Omega \setminus {\bf X}\,.
$
We can also construct (see \cite{HHOO,HHOO1}) the anti-linear
operator $K_{\bf X}$, where $\theta_X$ is replaced by a
multivalued function $\phi_{\bf X}$ such that $\nabla \phi_{\bf X} = 2 {\bf A}^{{\bf X}}$ and $e^{i
\phi_{\bf X}}$ is uni-valued and $C^\infty$. We can then consider the
real subspace of the $K_{{\bf X}}$-real
functions in $L^2_{K_{{\bf X}}}(\dot{\Omega}_{{\bf X}})$ and our operator as an unbounded selfadjoint operator on $L^2_{K_{{\bf X}}}(\dot{\Omega}_{{\bf X}})$.
It was shown in \cite{HHOO,HHOO1} for the case with holes and in \cite{AFT} for the case with poles that the nodal set of such a $K_{\bf X}$-real eigenfunction has
the same structure as the nodal set of a real-valued eigenfunction of the
Laplacian except that an odd number of half-lines meet at each pole and at the boundary of each hole containing some $X_k$.
In the case of one hole, this fact was first observed by Berger-Rubinstein \cite{BeRu} for a first
eigenfunction (assuming that the first eigenvalue is simple).
We denote
by $L_k(\dot{\Omega}_{{\bf X}})$ the lowest eigenvalue,
if it exists, such that there exists a $K_{{\bf X}}$-real eigenfunction with
$k$ nodal domains and we set $L_k(\dot{\Omega}_{{\bf X}})=+\infty$ if there is no such eigenvalue.
\section{The magnetic
characterization
of a minimal partition}\label{Section5}
We now prove the following conjecture presented (in the simply-connected case) in
\cite{BH} and \cite{HeEg}.
\begin{theorem}\label{Theorem5.1}~\\
Suppose ${\Omega}$ is a bounded, not necessarily simply connected, domain with $m$ disjoint closed holes $B_i$ ($i=1,\dots,m$) with non-empty interiors.
Again we assume that $\pa{\Omega}$ is piecewise $C^{1}$. Then
\begin{equation} \label{magL}
\mathfrak L_k({\Omega})=\inf_{\ell\in \mathbb N}\:\inf_{X_1,\dots,X_\ell}L_k(\dot{{\Omega}}_{{\bf X}})
\end{equation}
where in the infimum each $X_j= (x_j,y_j)$ is either in $ {\rm Int\,}(B_i)$ for some $i$ or in ${\Omega}$. Each $B_i$ contains at most one of the $X_j$, and the $X_j \in {\Omega}$
are distinct points.
\end{theorem}
Let us first give the proof in the simply connected case. \\
{\bf Step 1} : $\inf_{\ell \in \mathbb N}\; \inf_{X_1,\dots,
X_\ell} L_k (\dot{\Omega}_{{\bf X}})\,\leq \mathfrak L_k(\Omega)$\\
Considering a minimal $k$-partition $\mathcal D=(D_1,\dots,D_k)$, we know that it has a
regular representative, and we denote by $X^{odd}(\mathcal D):=(X_1,
\dots, X_\ell)$ the critical points of the boundary set of the partition at which
an odd number of half-curves meet.
To prove Step 1, it suffices to show that, for this family of points ${\bf X}=X^{odd}(\mathcal D) $,
$\mathfrak L_k(\Omega)$ is an eigenvalue of the Aharonov-Bohm Hamiltonian associated with $\dot{\Omega}_{{\bf X}}$
and to explicitly construct the corresponding eigenfunction with $k$ nodal domains described by the $D_i$'s.
For this, we recall that we have proven in
\cite{HHOT} the existence of a family $(u_i)_{i=1,\dots,k}$ such that $u_i$ is a
ground state of $\,H(D_i)$ and $u_i -u_j$ is a second eigenfunction of $H(D_{ij})$
when $D_i\sim D_j$. The claim is that one can find a family $(\epsilon_i(x))_i$
of $\mathbb S^1$-valued functions, where $\epsilon_i$ is a suitable\footnote{
Note that by construction the $D_i$'s never contain any point of ${\bf X}$. Hence the ground state energy of the Hamiltonian $H(D_i)$ is the same
as the ground state energy of $H_{{\bf A}^{\bf X}} (D_i)$.}
square root of $e^{i\phi_{{\bf X}}}$ in $D_i$,
such that
$\sum_i \epsilon_i(x) u_i(x)$ is an eigenfunction of the ${\bf A}{\bf B}{\bf X}$-Hamiltonian associated with
the
eigenvalue $\mathfrak L_k$.\\
More explicitly, let us describe how we can construct $\epsilon_i(x)$. We start from some $i_0$ and define $\epsilon_{i_0} (x) = e^{\frac {i}{2}\phi_{{\bf X}}}$. According to the footnote,
$\epsilon_{i_0}(x)$ is a well-defined $C^\infty$ function. If $D_i$ is a nearest neighbor of $D_{i_0}$, we define $\epsilon_i(x) = - e^{\frac {i}{2} \phi_{{\bf X}}}$. Then we can
extend the definition by considering the neighbors of the neighbors. Now we have to check that the construction is consistent. The problem can be reduced to the following question. Consider a closed simple path $\gamma$ in $\dot{\Omega}_{\bf X}$ transversal to $\mathcal N(\mathcal D)$ (and avoiding the critical points). Take some origin $x_0$ on $\gamma \cap D_{i_1}$. We start
from $\epsilon(x) = e^{\frac i2 \phi_{{\bf X}}(x)}$ in $D_{i_1}$ and, choosing the positive orientation, multiply by $-1$ each time that we cross an arc of $\mathcal N(\mathcal D)$. It is then a consequence of Euler's formula
that the number of crossings along $\gamma$ is odd if and only if there is an odd number of points of ${\bf X}$ inside $\gamma$ (apply Euler's formula \eqref{Emu} with $U$ being the open set delimited by $\gamma$). It is then clear that $\epsilon (x)$ is well defined along $\gamma$.\\
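The parity argument above can be mimicked in a toy model (ours, purely illustrative): think of the nodal domains as vertices of a graph with one edge per crossed arc, each edge flipping the sign of $\epsilon$; the propagation is consistent exactly when every cycle flips the sign an even number of times.

```python
from collections import deque

def consistent_signs(flip_edges):
    # Toy model: vertices are nodal domains; each edge is an arc whose
    # crossing multiplies epsilon by -1.  Propagate signs by BFS from
    # domain 0; return the assignment if globally consistent, else None.
    sign = {0: 1}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for a, b in flip_edges:
            if u in (a, b):
                v = b if u == a else a
                if v not in sign:
                    sign[v] = -sign[u]
                    queue.append(v)
                elif sign[v] != -sign[u]:
                    return None  # odd cycle: no consistent choice of signs
    return sign

# Four domains around a crossing of two full curves (even cycle): consistent.
print(consistent_signs([(0, 1), (1, 2), (2, 3), (3, 0)]))
# Three domains around a point where three half-curves meet (odd cycle):
# inconsistent, unless a pole with flux pi sits inside, as in the text.
print(consistent_signs([(0, 1), (1, 2), (2, 0)]))
```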
{\bf Step 2}: $\inf_{\ell \in \mathbb N}\; \inf_{X_1,\dots,
X_\ell} L_k (\dot{\Omega}_{{\bf X}})\,\geq \mathfrak L_k(\Omega)$\\
Conversely, given $\ell$ distinct points $X_i$ in $\Omega$, any family of nodal domains of a $K_{{\bf X}}$-real eigenfunction of the Aharonov-Bohm
operator on $\dot{\Omega}_{{\bf X}}$ corresponding to $L_k$
gives a $k$-partition. Using the results of \cite{HHOO} and \cite{AFT}, we immediately see that the $X_i$'s correspond to the ``odd'' singular points of the partition.
In each of these nodal domains $D_i$, $L_k$ is an eigenvalue of the Dirichlet realization of the Schr\"odinger operator with magnetic potential ${\bf A}^{\bf X}$, which by the diamagnetic inequality is not smaller than the ground state energy of the Dirichlet Laplacian in $D_i$ without magnetic field. Hence the energy $\Lambda_k (\mathcal D)$ of this partition is indeed not larger than $L_k(\dot{\Omega}_{{\bf X}}) $.\\
{\bf Step 3}: Proof in the non simply connected case~\\
The main change is in Step 1. In the non simply connected case, the set ${\bf X}$ consists of the singular points of the boundary set inside $\Omega$ at which an odd number of half-curves meet, together with those points in the holes whose boundary is hit by an odd number of half-curves.\\
\paragraph{Examples}~\\
Let us present a few examples illustrating the theorem in the case of a simply connected domain.
When
$k=2$, there is no need to consider punctured $\Omega$'s. The infimum
is obtained for $\ell =0$. When $k=3$,
it is possible to show (see Remark \ref{Rem5.3} below) that it is enough to minimize over $\ell =0$,
$\ell =1$ and $\ell =2$. In the case of the disk and the square, it is
proven that the infimum cannot be for $\ell =0$ and we
conjecture that the infimum is for $\ell =1$ and attained for the punctured
domain at the center. For $k=5$, it seems
that the infimum is for $\ell =4$ in the case of the square (See Figure \ref{fig.5part}) and for $\ell =1$ in the case of the disk (see Figure \ref{fig.disk}).\\
\begin{remark}\label{Rem5.2}~\\
If $\mathcal D$ is a regular representative of a minimal $k$-partition and if $\dot{\Omega}_{\bf X} $ is constructed as in Step 1 of the proof of the previous theorem, then
$\mathfrak
L_k(\Omega) =\lambda_k(\dot{\Omega}_{\bf X})$ (Courant sharp
situation). Indeed, coming back to this step, one can follow the proof of Theorem 1.13 (Section 6) in \cite{HHOT}.
\end{remark}
\begin{remark}\label{Rem5.3}~\\
Euler's formula \eqref{Emu} implies that for a minimal $k$-partition
$\mathcal D$ of a simply connected domain $\Omega$
the cardinality of $X^{odd}(\mathcal D)$ satisfies
\begin{equation}
\# X^{odd}(\mathcal D ) \leq 2k -3\,.
\end{equation}
Note that if $b_1=b_0$, we necessarily have a singular point in the boundary. The argument depends only on Euler's formula. If we implement the additional property
that the open sets $D_i$'s of a minimal partition are nice (see \eqref{nice}), we can exclude the case when there is only one point on the boundary. We emphasize that this was not a priori excluded from the results of \cite{HHOO,AFT}. Hence, we obtain
$$
b_1-b_0 + \frac 12 \sum \rho(y_i) \geq 1\,,
$$
which implies the inequality
\begin{equation}
\# X^{odd}(\mathcal D ) \leq 2k -4 \,.
\end{equation}
This estimate seems optimal for a general geometry although all the known candidates for minimal partitions
for $k=3$ and $5$ have a lower cardinality of odd critical points.
\end{remark}
\begin{remark}\label{Rem5.4}~\\
The argument around \eqref{nice} shows that a
nodal set of a $K_{\mathbf X}$-real eigenfunction that corresponds to a minimal partition
cannot have a critical point that is met by only one nodal arc. Such a point can actually occur
for ground states of Aharonov-Bohm Hamiltonians (see \cite{HHOO}), which of course do not
correspond to minimal partitions.
\end{remark}
\begin{remark}~\\
It would be interesting to look at the case of the sphere (already considered in \cite{HHOT1}) and the first problem in this case is to define the suitable magnetic Laplacian.
We refer to \cite{WY} for one of the first papers on this question. More specifically, we would like to construct in our case an Aharonov-Bohm Hamiltonian. Note for example that we cannot have such an operator with one pole and a flux $\pi$ around this pole. Fortunately there is no minimal $k$-partition whose boundary set consists of one ``odd'' critical point
on the sphere, as can be seen by
Euler's formula for the sphere (see Remark 4.2 in \cite{HHOT1}). We indeed know that the cardinality of the set of ``odd'' critical points is even. This is actually
a standard result from graph theory: the number of vertices of odd degree is even
(see for example Corollary 1.2 in \cite{BM}).\\
This suggests that instead of putting the flux $\pi$ around each pole, we take
alternately $\pi$ and $-\pi$ for the fluxes in order to get a total flux equal to $0$. In other words, we should probably describe $X^{odd} (\mathcal D)$ as a union of dipoles.
\end{remark}
% arXiv:1111.3454
\title{Permanents of heavy-tailed random matrices with positive elements}
\begin{abstract}
We study the asymptotic behavior of permanents of $n \times n$ random matrices $A$ with positive entries. We assume that $A$ has either i.i.d. entries or is a symmetric matrix with i.i.d. upper triangle. Under the assumption that elements have power law decaying tails, we prove a strong law of large numbers for $\log \perm A$. We calculate the values of the limit $\lim_{n \to \infty}\frac{\log \perm A}{n \log n}$ in terms of the exponent of the power law distribution decay, and observe a first order phase transition in the limit as the mean becomes infinite. The methods extend to a wide class of rectangular matrices. It is also shown that, in the finite mean regime, the limiting behavior holds uniformly over all submatrices of linear size.
\end{abstract}
\section{Introduction}
The permanent of an $m \times n$ matrix $A$ (height $m$ and width $n$) satisfying $m \leq n$ is defined as
\[
\perm A = \sum_{\pi \in S_{m,n}} \prod_{i=1}^m a_{i,\pi(i)},
\]
where $S_{m,n}$ is the set of one-to-one functions from $[m]=\{1,\dots,m\}$ to $[n]=\{1,\dots,n\}$. When $m=n$, that is when $A$ is a square matrix, $S_{m,n} = S_n$, the set of permutations of $[n]$.
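For small matrices the definition can be evaluated by direct enumeration; the following sketch (helper name ours) does exactly the sum above.

```python
import math
from itertools import permutations

def perm(A):
    # Permanent of an m x n matrix (m <= n): sum over all one-to-one maps
    # pi from [m] to [n] of the products A[i][pi(i)].  Exponential time,
    # so only feasible for very small matrices.
    m, n = len(A), len(A[0])
    assert m <= n
    return sum(math.prod(A[i][pi[i]] for i in range(m))
               for pi in permutations(range(n), m))

print(perm([[1, 2], [3, 4]]))        # 1*4 + 2*3 = 10
print(perm([[1, 1, 1], [1, 1, 1]]))  # number of injections from [2] to [3]: 6
```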
In this paper we will study asymptotics of permanents of large matrices with positive, independent and identically distributed elements.
Permanents of random matrices of similar type have been studied in a number of papers.
In \cite{Girko71} and \cite{Girko72} Girko proved that
\[
\lim_{n \to \infty}\frac{\log |\perm A| - \mathbb{E}\log |\perm A|}{n} \to 0,
\]
(without estimating $\mathbb{E}\log |\perm A|$) for $n \times n$ square matrices $A$ with independent elements, when either the characteristic functions or the Laplace transforms of the elements are of the form $\exp(-c|t|^{\alpha})$ (and in some other finite mean cases).
Working in the context of perfect matchings on random bipartite graphs, Janson \cite{Janson94} proved central limit theorems for permanents of matrices with $0$-$1$ iid elements.
In a series of papers Rempa{\l}a and Weso{\l}owski studied the permanents of large rectangular matrices with identically distributed elements of non-zero mean and finite variance (allowing some correlation among elements in each column). In the case of iid elements, relying on earlier results of van Es and Helmers \cite{EsHelmers88} and Borovskikh and Korolyuk \cite{KB92},
they proved central limit theorems \cite{RempalaWesolowski99} for $(\perm A) / \mathbb{E}\perm A$
and later certain strong laws of large numbers \cite{RW02}. See also Chapter 3 in \cite{RempalaWesolowski08} for a self-contained discussion of these results.
Recently Tao and Vu \cite{TaoVu08} obtained significantly different behavior for $n \times n$ matrices $A$ with independent mean zero Bernoulli $\pm 1$ elements. They showed that with high probability $|\perm(A)| = n^{(\frac{1}{2}+o(1))n}$.
The above results demonstrate the contrast between the non-zero mean, finite variance case and the Bernoulli case, which can be summarized as
\begin{equation}\label{eq:old_limits}
\lim_{m,n \to \infty}\frac{\log |\perm A|}{m \log n} = \left\{\begin{array}{r l} 1, & \text{ in the case of finite variance and non-zero mean,} \\ \frac{1}{2}, & \text{ in the Bernoulli case with zero mean for } m=n. \end{array}\right.
\end{equation}
In the non-zero finite mean case ($\mu$ being the mean of the elements) the value of the limit, and especially the upper bounds, can be inferred by calculating the first moment $\mathbb{E}(\perm A)=\binom{n}{m}m!\,\mu^m$, and in the Bernoulli case from the second moment $\mathbb{E}(|\perm A|^2)=n!$.
In this paper we will calculate the value of this limit under the assumption that elements are positive and have power law decaying tails $\mathbb{P}(\xi \geq t) = t^{-1/\beta +o(1)}$. In the case $\beta > 1$ elements have infinite mean which prevents us from guessing the value of the limit. Actually in Theorem \ref{thm:main} we will observe a first order phase transition in the limit at $\beta =1$, when the mean becomes infinite.
\section{Setup and the Result}
In the text we will assume that for $m \leq n$, $A_{m,n}$ is an $m \times n$ matrix ($A_n$ when $m=n$) with independent positive elements distributed as $\xi$. Note that we will drop the subscripts when there is no confusion.
Assuming that the matrices are constructed on a common probability space, theorems below give strong laws of large numbers for $\frac{\log \perm A_n}{n\log n}$ (in particular they imply a weak law of large numbers without the assumption that $A_n$ are given on a common probability space). Extensions to rectangular matrices are given in Section \ref{sec:non-square}.
\begin{theorem}\label{thm:main}
Let $\xi$ be a positive random variable satisfying
\begin{equation}\label{eq:heavy_tail_assumption}
\lim_{t \to \infty}\frac{\log \mathbb{P}(\xi \geq t)}{\log t} = - \frac{1}{\beta},
\end{equation}
for some $\beta > 0$. If $(A_n)_n$ is a sequence of $n \times n$ matrices on a common probability space with elements which are independent and identically distributed as $\xi$, then almost surely
\begin{equation}\label{eq:main_result}
\lim_{n \to \infty} \frac{\log \perm A_n}{n \log n} = \max(1,\beta).
\end{equation}
\end{theorem}
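Since exact computation of permanents is feasible only for very small $n$, and the convergence in \eqref{eq:main_result} is slow, a simulation can only hint at the limit; the following Monte Carlo sketch (ours, purely illustrative, all names hypothetical) prints $\log \perm A_n/(n\log n)$ for Pareto entries with $\beta=2$:

```python
import math
import random
from itertools import permutations

def perm(A):
    # brute-force permanent of a square matrix (small n only)
    n = len(A)
    return sum(math.prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def pareto(beta, rng):
    # inverse transform: P(xi >= t) = t**(-1/beta) for t >= 1
    return (1.0 - rng.random()) ** (-beta)

def ratio(n, beta, rng):
    A = [[pareto(beta, rng) for _ in range(n)] for _ in range(n)]
    return math.log(perm(A)) / (n * math.log(n))

rng = random.Random(0)
for n in (4, 6, 8):
    print(n, ratio(n, 2.0, rng))  # the limit as n -> infinity is max(1, beta) = 2
```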
The random variable $\xi$ in \eqref{eq:heavy_tail_assumption} has finite variance for $\beta < 1/2$, finite mean for $\beta < 1$, and infinite mean for $\beta > 1$, which is when we observe a limit different from the values in \eqref{eq:old_limits}.
The following result generalizes the case $\beta < 1$. It does not require the finite variance assumption, and it gives the general lower bounds, and the upper bounds in the case of finite mean, uniformly over all submatrices of linear size. Note that for an $m \times n$ matrix $A=(a_{ij})$, any matrix $B=(a_{ij})_{i\in I,j\in J}$, where $I \subset [m]$, $J \subset [n]$, is called a submatrix of $A$.
\begin{theorem}\label{thm:general_bounds}
Assume that $(A_n)_n$ is a sequence of $n \times n$ matrices on a common probability space with elements which are independent and identically distributed as $\xi$, and let $0 < \alpha < 1$.
\begin{itemize}
\item[i)] We have
\begin{equation}\label{eq:general_lower_bounds}
\liminf_{n \to \infty} \min_{(k,B)}\frac{\log \perm B}{k \log k} \geq 1,
\end{equation}
where the minimum is taken over all integers $\alpha n \leq k \leq n$ and all $k \times k$ submatrices $B$ of $A_n$.
\item[ii)]
If $\xi$ has a finite mean then
\begin{equation}\label{eq:general_upper_bounds}
\limsup_{n \to \infty} \max_{(k,B)}\frac{\log \perm B}{k \log k} = 1,
\end{equation}
where the maximum is again taken over all integers $\alpha n \leq k \leq n$ and all $k \times k$ submatrices $B$ of $A_n$.
\end{itemize}
\end{theorem}
Condition \eqref{eq:heavy_tail_assumption} in Theorem \ref{thm:main} is satisfied with $\beta > 1$ for many common heavy tail distributions, including the Pareto, L\'{e}vy, Inverse-Gamma and Beta-prime distributions, among others.
We are particularly interested in the case when $\xi$ has Pareto distribution with parameter $\beta$, that is $\mathbb{P}(\xi \geq t) =t^{-1/\beta}$ for $t \geq 1$. Actually, in Section \ref{sec:upper_bounds} the upper bounds in Theorem \ref{thm:main} will be proven in the Pareto case and then extended to the general case via simple stochastic domination. Note that when the convergence in \eqref{eq:heavy_tail_assumption} fails to hold, one cannot guarantee the existence of the limit in \eqref{eq:main_result} (see Example \ref{ex:no_convergence} in Section \ref{sec:non-square}). However, an upper bound on the $\limsup$ in \eqref{eq:heavy_tail_assumption} implies the corresponding upper bound in \eqref{eq:main_result}, and similarly a lower bound on the $\liminf$.
\begin{remark}\label{rem:matchings}
From a more combinatorial point of view, permanents can be interpreted in the context of saturated matchings (or perfect matchings for $m=n$) of bipartite graphs. For a bipartite graph $G=(V,E)$ let $V=V_1 \cup V_2$, $|V_1| \leq |V_2|$, be a decomposition of the vertex set into subsets so that no two vertices in the same $V_i$ are connected by an edge.
Saturated matchings of $G$ can be defined as subsets $\mathcal{M} \subset E$ of the edge set with the property that every vertex is adjacent to at most one edge in $\mathcal{M}$ and that every vertex in the smaller component $V_1$ is adjacent to at least one edge in $\mathcal{M}$.
For $m \leq n$ and an $m\times n$ matrix $A=(a_{ij})$ containing only elements $0$ and $1$, construct a bipartite graph $G$ with $m+n$ vertices $\{v_1, \dots v_m, w_1, \dots w_n\}$ so that $v_i$ and $w_j$ are connected by an edge if and only if $a_{ij}=1$. Clearly every one-to-one function $\pi\colon [m] \to [n]$ for which $\prod_{i=1}^m a_{i\pi(i)}$ is non-vanishing corresponds to a saturated matching on $G$ in a one-to-one manner. Therefore $\perm A$ is equal to the number of saturated matchings on $G$. For general matrices $A$ one can construct the graph by drawing an edge between $v_i$ and $w_j$ whenever $a_{ij} \neq 0$ and putting the weight $a_{ij}$ on this edge. Then $\perm A$ can be interpreted as the total weight of all the saturated matchings on $G$ (the weight of a matching being the product of the weights on its edges). All the results in this paper can be interpreted in this way.
\end{remark}
In the following section we prove the upper bounds in Theorem \ref{thm:main}, and in Section \ref{sec:lower_bounds} we provide the lower bounds and prove Theorem \ref{thm:general_bounds}. In the last section we extend the results to a large class of rectangular matrices and give an example demonstrating that, in general, Theorem \ref{thm:main} fails to hold without \eqref{eq:heavy_tail_assumption}.
\section{Proof of the upper bounds in Theorem \ref{thm:main}}\label{sec:upper_bounds}
The exact calculations needed for the proof of the upper bounds in Theorem \ref{thm:main} are easier to perform when we are given a concrete distribution of $\xi$ to work with. The proof will be provided for the Pareto case, but first we will see how this yields the upper bounds in Theorem \ref{thm:main} for the general case.
\begin{remark}\label{rem:stochastic_domination}
Throughout the paper we will use the following two simple observations.
\emph{i)}
For any $m \times n$ matrix $A$ and $\lambda \in \mathbb{R}$ we have that $\perm (\lambda A) = \lambda^m \perm A$. Thus the value of the limit of $\log \perm A_n /(n \log n )$ in Theorem \ref{thm:main} (as well as $\liminf$ and $\limsup$) is unchanged if we replace the generic random variable $\xi$ by random variables $\lambda \xi$, for any $\lambda >0$.
\emph{ii)}
Assume that we are given two random variables $\xi_1$ and $\xi_2$ with right continuous cumulative distribution functions $F_1(t)$ and $F_2(t)$ (and denote $F_i^-(t) = \lim_{s \uparrow t}F_i(s)$).
If $\xi_2$ stochastically dominates $\xi_1$, that is $F_2(t) \leq F_1(t)$ for any $t> 0$ (equivalently $\mathbb{P}(\xi_1 \geq t) \leq \mathbb{P}(\xi_2 \geq t)$), then one can enlarge the probability space $\Omega$ that supports $\xi_1$ and construct a version of $\xi_2$ on the larger probability space which dominates $\xi_1$ pointwise. For example, if $\xi_1$ is defined on $(\Omega,\mathbf{P})$ then on $(\Omega \times [0,1], \mathbf{P}\times d\lambda)$, where $d\lambda$ is the Lebesgue measure on $[0,1]$, we define $u = \lambda F_1(\xi_1) + (1-\lambda)F_1^-(\xi_1)$. It is easy to check that $u$ is uniformly distributed on $[0,1]$. Knowing this it is also easy to check that $\overline{\xi}_2 = \inf\{t:F_2(t) \geq u\}$ has the same distribution as $\xi_2$ and that $F_1^-(\xi_1) \leq u \leq F_2(\overline{\xi_2})$. The last inequalities imply that $\xi_1 \leq \overline{\xi}_2$ everywhere. Thus if $(A_{m,n}^1)$ is a sequence of $m \times n$ matrices with independent elements distributed as $\xi_1$ defined on a common probability space, one can enlarge the probability space and construct on it a sequence of $m \times n$ matrices $(A_{m,n}^2)$ whose elements are independent and distributed as $\xi_2$ such that for every $n$, the $ij$th element of $A_{m,n}^1$ is not larger than the corresponding element of $A_{m,n}^2$. In particular, if $\xi_1$ and $\xi_2$ are almost surely positive then $\perm A_{m,n}^1 \leq \perm A_{m,n}^2$. The analogous claim holds when $\xi_1$ dominates $\xi_2$.
\end{remark}
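For continuous distributions the coupling in part \emph{ii)} simplifies, since $u=F_1(\xi_1)$ is already uniform and no randomization between $F_1$ and $F_1^-$ is needed. A minimal sketch (function name ours) couples two Pareto laws, where a larger parameter gives the dominating variable:

```python
import random

def coupled_paretos(beta1, beta2, n_samples, seed=0):
    # Pareto(beta): F(t) = 1 - t**(-1/beta) for t >= 1, so the inverse is
    # F^{-1}(u) = (1 - u)**(-beta).  Feeding one uniform u into both
    # inverses realizes the coupling of the remark pointwise.
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_samples):
        u = rng.random()
        pairs.append(((1.0 - u) ** (-beta1), (1.0 - u) ** (-beta2)))
    return pairs

pairs = coupled_paretos(0.5, 2.0, 10000)
print(all(x1 <= x2 for x1, x2 in pairs))  # True: the dominating copy is larger pointwise
```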
\begin{proof}[Proof of the upper bounds in Theorem \ref{thm:main} assuming it holds for the Pareto case]
Fix $\epsilon>0$ and take $M>1$ so that $\mathbb{P}(\xi \geq t) \leq t^{-1/(\beta+\epsilon)}$ holds for all $t \geq M$. Denote by $\overline{\xi}_{\beta+\epsilon}$ a Pareto distributed random variable with parameter $\beta+\epsilon$ and observe that $\mathbb{P}(\xi \geq t) \leq \mathbb{P}(M\overline{\xi}_{\beta+\epsilon} \geq t)$ holds for all $t$. Assuming the statement holds for the Pareto case, Remark \ref{rem:stochastic_domination} implies that almost surely
\[
\limsup_n \frac{\log \perm A_n}{n \log n} \leq \beta + \epsilon.
\]
Since $\epsilon > 0$ was arbitrary the claim follows.
\end{proof}
The rest of this section is devoted to the proof of the upper bounds in the Pareto case in which we show explicit calculations.
A useful observation which we will use extensively is the fact that if $\xi$ is a Pareto distributed random variable with parameter $\beta$, then $Y = (\log \xi)/\beta$ has exponential distribution with rate $1$, that is $\mathbb{P}(Y \geq t) = e^{-t}$ for $t\geq 0$. We start by proving some basic estimates for maxima of independent exponential random variables.
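Before that, the log-transform just mentioned can be checked at the level of survival functions, without sampling: $\mathbb{P}(Y\ge t)=\mathbb{P}(\xi \ge e^{\beta t}) = (e^{\beta t})^{-1/\beta} = e^{-t}$. A two-line sketch (names ours):

```python
import math

def pareto_sf(t, beta):
    # P(xi >= t) = t**(-1/beta) for t >= 1
    return t ** (-1.0 / beta)

def y_sf(t, beta):
    # survival function of Y = log(xi)/beta, computed through xi's law
    return pareto_sf(math.exp(beta * t), beta)

# agrees with the Exp(1) survival function e^{-t}, for every beta
print(all(abs(y_sf(t, b) - math.exp(-t)) < 1e-12
          for b in (0.5, 1.0, 3.0) for t in (0.1, 1.0, 5.0)))  # True
```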
\begin{lemma}
\label{lemma: Max}
Let $n \geq 2$ and $Y_i$, $1 \leq i \leq n$ be independent exponential random variables with rate 1 and $ R=\frac{\max_{1 \leq i \leq n} Y_i}{\log n}$.
\begin{itemize}
\item[i)] For any positive $t$
\begin{equation}
\label{eq: Max1}
\mathbb{P}(R \leq t) \leq \exp\bigl(-n^{1-t}\bigr), \textrm{ and } \ \mathbb{P}(R \geq t) \leq n^{1-t}.
\end{equation}
\item[ii)] The expectation of $e^{R}$ can be bounded as
\begin{equation}
\label{eq: Max2}
\mathbb{E}\left(e^R\right) \leq \exp\biggl(1+\frac{1}{\log n -1}\biggr).
\end{equation}
\end{itemize}
\end{lemma}
\begin{proof}
i) Both inequalities are straightforward. First we calculate
\begin{equation}
\label{eq: RDown1}
\mathbb{P} (R \leq t) = \mathbb{P} (\max_{1 \leq i \leq n}Y_i \leq t\log n) = \left(1-e^{-t\log n}\right)^n = \left(1-n^{-t}\right)^n.
\end{equation}
Now applying the inequality $1-x \leq e^{-x}$ to the right hand side of (\ref{eq: RDown1}) we obtain the first inequality in (\ref{eq: Max1}).
Using (\ref{eq: RDown1}) and the inequality $(1-x)^n \geq 1-nx$ which holds for all positive integers $n$ and all $0 < x < 1$ we prove the second inequality in (\ref{eq: Max1}):
\begin{equation*}
\mathbb{P}(R \geq t) = 1- (1-n^{-t})^n \leq n^{1-t}.
\end{equation*}
ii) Using the second inequality in (\ref{eq: Max1}) we obtain for $t \geq 1$
$$
\mathbb{P} \left(e^{R} \geq t\right) = \mathbb{P}(R \geq \log t) \leq \frac{n}{n^{\log t}} = \frac{n}{t^{\log n}},
$$
from where we get
\begin{multline*}
\mathbb{E}\left(e^R\right) = \int_0^{\infty}\mathbb{P}\left(e^R \geq t\right) dt \leq e+\int_e^{\infty}\frac{n}{t^{\log n}} dt = e+\frac{n}{\log n -1}\frac{1}{e^{\log n-1}} \\
= e \left(1 + \frac{1}{\log n -1}\right) \leq \exp\biggl(1+\frac{1}{\log n- 1}\biggr).
\end{multline*}
\end{proof}
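Both bounds in \eqref{eq: Max1} can be checked numerically against the exact distribution function $(1-n^{-t})^n$ computed in the proof (a sketch; names ours):

```python
import math

def p_le(n, t):
    # exact P(R <= t) = (1 - n**(-t))**n, as computed in the proof
    return (1.0 - n ** (-t)) ** n

ok = True
for n in (2, 10, 1000):
    for t in (0.5, 1.0, 1.5, 3.0):
        ok &= p_le(n, t) <= math.exp(-n ** (1.0 - t)) + 1e-15
        ok &= 1.0 - p_le(n, t) <= n ** (1.0 - t) + 1e-15
print(ok)  # True: both inequalities of the lemma hold
```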
The idea of the proof of the upper bounds in the Pareto case is to estimate (by evaluating the expectation) the number of permutations $\pi$ for which the product $\prod_{i=1}^n \xi_{i\pi(i)}$ will lie in some given interval. The key estimate is provided in Lemma \ref{prop: BoundZMax}. We will only consider the intervals not exceeding $(n\sqrt{\log n})^{\beta n}$, since
as the following lemma shows, the largest product $\prod_{i}\xi_{i\pi(i)}$ typically does not exceed this value.
\begin{lemma}
\label{lemma: MaxPerm}
If $(Y_{i,j})_{i,j}$ are independent exponential random variables with rate 1 then for any $\lambda>0$
\begin{equation}
\label{eq: MaxPermStrong}
\sum_{n = 2}^{\infty}\mathbb{P}\left(\max_{\pi \in S_n}\sum_{i=1}^n Y_{i,\pi(i)} \geq n\log n + n \frac{\log\log n}{\lambda}\right) < \infty.
\end{equation}
\end{lemma}
\begin{proof}
Denote $R_i := \frac{\max_{1 \leq j \leq n}Y_{i,j}}{\log n}$. From the definition of $R_i$ it is obvious that $$ \max_{\pi \in S_n} \sum_{i=1}^n Y_{i,\pi(i)} \leq \log n \left(\sum_{i=1}^n R_i\right).$$ Using the inequality (\ref{eq: Max2}) we have
\begin{multline*}
\mathbb{P} \left(\max_{\pi \in S_n}\sum_{i=1}^n Y_{i,\pi(i)} \geq n\log n + n \frac{\log\log n}{\lambda}\right) \leq \mathbb{P}\left(\sum_{i=1}^n R_i \geq n + n \frac{\log\log n}{\lambda\log n}\right) \\
\leq \mathbb{E}\left(e^{\sum_{i=1}^nR_{i} - n - n \frac{\log\log n}{\lambda\log n}}\right)
= \left(\frac{\mathbb{E}\left(e^{R_{1}}\right)}{e(\log n)^{\frac{1}{\lambda \log n}}}\right)^n \leq \exp\biggl(\Big(\frac{1}{\log n -1} - \frac{\log \log n}{\lambda \log n}\Big)n\biggr).
\end{multline*}
The right hand side above is summable in $n$ which proves the lemma.
\end{proof}
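The first inequality in the proof, $\max_{\pi}\sum_i Y_{i,\pi(i)} \le \sum_i \max_j Y_{i,j}$, can be verified by brute force on a small random matrix (a sketch; names ours):

```python
import random
from itertools import permutations

def max_assignment(Y):
    # brute-force maximum over permutations of sum_i Y[i][pi(i)]
    n = len(Y)
    return max(sum(Y[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

rng = random.Random(42)
n = 6
Y = [[rng.expovariate(1.0) for _ in range(n)] for _ in range(n)]
print(max_assignment(Y) <= sum(max(row) for row in Y))  # True by construction
```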
\begin{lemma}
\label{prop: BoundZMax}
Let $(Y_{i,j})_{i,j}$ be independent exponential random variables with rate 1 and
\begin{equation}\label{eq:number_in_interval}
Z_{n,k}=\biggl|\biggl\{\pi \in S_n: (k-1)n \leq \sum_{i=1}^nY_{i,\pi(i)} < kn \biggr\}\biggr|.
\end{equation}
Then for any $\gamma>1$ and $\lambda > 1$ we have
\begin{equation}
\label{eq: BoundZMax}
\sum_{n = 2}^{\infty} \mathbb{P}\biggl(\bigcup_{1 \leq k \leq \log n + \frac{\log \log n}{\lambda}}\left\{Z_{n,k} > \mathbb{E}(Z_{n,k})^{\gamma}\right\}\biggr) < \infty.
\end{equation}
\end{lemma}
\begin{proof}
First by Markov's inequality
\begin{align*}
\label{eq: BoundZMarkov}
\mathbb{P}\biggl(\bigcup_{1 \leq k \leq \log n + \frac{\log \log n}{\lambda}}\left\{Z_{n,k} > \mathbb{E}(Z_{n,k})^{\gamma}\right\}\biggr) & \leq \sum_{1 \leq k \leq \log n + \frac{\log \log n}{\lambda}} \mathbb{P}\left(Z_{n,k} > \mathbb{E}(Z_{n,k})^{\gamma}\right)
\\
&
\leq \sum_{1 \leq k \leq \log n + \frac{\log \log n}{\lambda}} \frac{1}{ \mathbb{E}(Z_{n,k})^{\gamma-1}}.
\end{align*}
Let us calculate the expectation of $Z_{n,k}$. Clearly for any fixed $\pi \in S_n$ we have $$\mathbb{E}(Z_{n,k})=n!\,\mathbb{P}\biggl((k-1)n \leq \sum_{i=1}^nY_{i,\pi(i)} < kn \biggr).$$ Since for a fixed $\pi$, $\sum_{i=1}^nY_{i,\pi(i)}$ is the sum of $n$ independent exponential random variables with mean $1$, it has the Gamma density $\frac{x^{n-1}e^{-x}}{(n-1)!}$.
Therefore the expectation of $Z_{n,k}$ is given by
\begin{equation}
\label{eq: Z mean}
\mathbb{E}(Z_{n,k})=n! \int_{(k-1)n}^{kn} \frac{x^{n-1}e^{-x}}{(n-1)!}\ dx = n \int_{(k-1)n}^{kn} x^{n-1}e^{-x}\ dx.
\end{equation}
In particular we have
\begin{equation}
\label{eq: upper bound for Z mean}
\mathbb{E}(Z_{n,k}) \leq e^{-(k-1)n}\int_{(k-1)n}^{kn}nx^{n-1}dx \leq \left(kn\right)^{n}e^{-(k-1)n}.
\end{equation}
Furthermore by (\ref{eq: Z mean})
\begin{equation}
\label{eq: Z1}
\mathbb{E}(Z_{n,k})=n \int_{(k-1)n}^{kn} x^{n-1}e^{-x}\ dx \geq e^{-kn}\int_{(k-1)n}^{kn}nx^{n-1}dx = e^{-kn}n^n(k^n-(k-1)^n).
\end{equation}
In particular $\mathbb{E}(Z_{n,1}) \geq \left( \frac{n}{e}\right)^n$, which implies that the series $\sum_{n=1}^{\infty}\mathbb{E}(Z_{n,1})^{1-\gamma}$ converges to a finite limit. Therefore we are left to prove
\begin{equation}
\label{eq: Z2}
\sum_{n =2}^{\infty} \sum_{2 \leq k \leq \log n + \frac{\log \log n}{\lambda}} \frac{1}{ \mathbb{E}(Z_{n,k})^{\gamma-1}} < \infty.
\end{equation}
Again using (\ref{eq: Z1}) and the inequality $k^n \geq (k-1)^n + n(k-1)^{n-1}$ we obtain
$$
\mathbb{E}(Z_{n,k}) \geq e^{-kn}n^{n+1}(k-1)^{n-1} \geq e^{-kn}n^n(k-1)^n,
$$
from where
\begin{equation}\label{eq: BoundUp}
\sum_{2 \leq k \leq \log n + \frac{\log \log n}{\lambda}} \frac{1}{ \mathbb{E}(Z_{n,k})^{\gamma-1}} \leq \frac{e^{n (\gamma-1)}}{n^{n(\gamma-1)}} \sum_{1 \leq k \leq \log n + \frac{\log \log n}{\lambda}} \frac{e^{kn(\gamma-1)}}{k^{n (\gamma-1)}}.
\end{equation}
The function $g(t)=e^{tn(\gamma-1)}t^{-n (\gamma-1)}$ is convex since $$g''(t)=n(\gamma-1) \frac{e^{tn(\gamma-1)}}{t^{n (\gamma-1) +2}}(n (\gamma-1) (t-1)^2+1) \geq 0.$$
Therefore for any $1 \leq k \leq \log n + \frac{\log \log n}{\lambda}$ we have
\begin{equation}
\label{eq: gConvex1}
\frac{e^{kn(\gamma-1)}}{k^{n (\gamma-1)}} = g(k) \leq \max\left\{g(1),g\left(\log n + \frac{\log \log n}{\lambda}\right)\right\}.
\end{equation}
For any $n$ large enough we have
$$
g\left(\log n + \frac{\log \log n}{\lambda}\right) = \frac{n^{n(\gamma-1)}(\log n)^{\frac{n (\gamma-1)}{\lambda}}}{(\log n + \frac{\log \log n}{\lambda})^{n (\gamma-1)}} \geq \left(\frac{n}{2(\log n)^{1-1/\lambda}}\right)^{n (\gamma-1)} \geq e^{n (\gamma-1)} = g(1),
$$
and, for such $n$, using (\ref{eq: gConvex1}) we also get
$$
\frac{e^{kn(\gamma-1)}}{k^{n (\gamma-1)}} \leq g\left(\log n + \frac{\log \log n}{\lambda}\right) = \frac{n^{n(\gamma-1)}(\log n)^{\frac{n (\gamma-1)}{\lambda}}}{(\log n + \frac{\log \log n}{\lambda})^{n (\gamma-1)}} \leq \frac{n^{n(\gamma-1)}}{(\log n)^{n(\gamma-1)(1-1/\lambda)}}.
$$
Thus, for $n$ large enough (\ref{eq: BoundUp}) yields
\begin{align*}
\sum_{2 \leq k \leq \log n + \frac{\log \log n}{\lambda}} \frac{1}{ \mathbb{E}(Z_{n,k})^{\gamma-1}} & \leq \frac{e^{n (\gamma-1)}}{n^{n(\gamma-1)}} \frac{n^{n(\gamma-1)}}{(\log n)^{n(\gamma-1)(1-1/\lambda)}}\left(\log n + \frac{\log \log n}{\lambda}\right) \\
& = \left(\frac{e}{(\log n)^{1-1/\lambda}}\right)^{n(\gamma-1)}\biggl(\log n + \frac{\log \log n}{\lambda}\biggr).
\end{align*}
The expression on the right hand side is summable in $n$, which proves (\ref{eq: Z2}) and thus also (\ref{eq: BoundZMax}).
\end{proof}
Now we are ready to finish the proof of the upper bounds.
\begin{proof}[Proof of the upper bounds in Theorem \ref{thm:main} for the Pareto case]
Define $Z_{n,k}$ as in \eqref{eq:number_in_interval} and
fix an arbitrary $\gamma > 1$.
Lemmas \ref{lemma: MaxPerm} and \ref{prop: BoundZMax} together with the Borel--Cantelli lemma imply that almost surely there exists a positive integer $n_0$ such that for all $n \geq n_0$ we have
\begin{align*}
& \max_{\pi \in S_n}\sum_{i=1}^nY_{i,\pi(i)} \leq n \log n + \frac{n \log\log n}{2} \ \ {\rm and} \\
& Z_{n,k} \leq \mathbb{E}(Z_{n,k})^{\gamma}, \text{ for each } 1 \leq k \leq \log n + \frac{\log \log n}{2}.
\end{align*}
Using (\ref{eq: upper bound for Z mean}), the following inequalities are almost surely satisfied for $n$ large enough:
\begin{multline}
\label{eq: PermEstimate}
\perm A = \sum_{\pi \in S_n} e^{\beta \sum_{i=1}^{n}Y_{i,\pi(i)}} \leq \sum_{1 \leq k \leq \log n + \frac{\log \log n}{2}}e^{\beta kn}Z_{n,k} \leq \sum_{1 \leq k \leq \log n + \frac{\log \log n}{2}}e^{\beta kn}\mathbb{E}(Z_{n,k})^{\gamma} \\
\leq e^{\gamma n} n^{\gamma n} \sum_{1 \leq k \leq \log n + \frac{\log \log n}{2}}k^{\gamma n}e^{(\beta - \gamma) kn} \\
\leq e^{\gamma n} n^{\gamma n} \left(\log n + \frac{\log \log n}{2}\right) \max_{1 \leq \tau \leq \log n+\frac{\log \log n}{2 }}\Big(\tau^{\gamma n}e^{ (\beta - \gamma)\tau n}\Big).
\end{multline}
If $\beta > 1$ and $\gamma$ is such that $\beta > \gamma > 1$, then $\tau^{\gamma n}e^{ (\beta - \gamma)\tau n}$ is an increasing function of $\tau$ on $[1,\infty)$, and thus for $n$ large enough
$$
\perm A \leq e^{\gamma n}n^{\beta n} (\log n)^{\frac{(\beta - \gamma)n}{2}} \left(\log n + \frac{\log \log n}{2}\right )^{\gamma n +1}.
$$
This yields
$$
\frac{\log \perm A}{n \log n} \leq \frac{\gamma}{\log n} + \beta + \frac{\beta - \gamma}{2}\frac{\log \log n}{\log n} + \left (\gamma + \frac{1}{n}\right)\frac{\log \left(\log n + \frac{\log \log n}{2}\right)}{\log n},
$$
from where clearly
$$
\limsup_{n \to \infty} \frac{\log \perm A}{n \log n} \leq \beta.
$$
In the case $\beta \leq 1$ we want to maximize the function $ \tau^{\gamma n}e^{(\beta - \gamma)\tau n}$. Write $e^{h(\tau)}:= \tau^{\gamma n}e^{(\beta - \gamma)\tau n}$. We get
$$h(\tau)=\gamma n \log \tau + (\beta - \gamma) \tau n, \ \ h'(\tau) = \frac{\gamma n}{\tau} + (\beta - \gamma)n, \ \ h''(\tau)=- \frac{\gamma n}{\tau^2} < 0.$$ Therefore the function $h$ is concave and the maximum occurs when $h'(\tau)=0$, that is when $\tau = \frac{\gamma}{\gamma-\beta}$, at which point the value of $e^{h(\tau)}$ equals $\left(\frac{\gamma}{(\gamma - \beta)e}\right)^{\gamma n}$.
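The stationary point computation can be double-checked numerically (again an editorial illustration): the closed form value $\max_\tau h(\tau)/n = \gamma\log\frac{\gamma}{\gamma-\beta} - \gamma$ agrees with a grid maximization.

```python
import math

def h_over_n(tau, beta, gamma):
    # h(tau)/n = gamma*log(tau) + (beta - gamma)*tau
    return gamma * math.log(tau) + (beta - gamma) * tau

beta, gamma = 0.7, 1.5                      # sample values with beta <= 1 < gamma
tau_star = gamma / (gamma - beta)           # zero of h'
max_value = gamma * (math.log(gamma / (gamma - beta)) - 1)

# by concavity no grid point can beat the stationary value
grid_max = max(h_over_n(0.01 * j, beta, gamma) for j in range(1, 2001))
assert grid_max <= max_value + 1e-12
assert max_value - grid_max < 1e-3          # and the grid gets close to it
```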
From (\ref{eq: PermEstimate}) we get
$$
\perm A \leq n^{\gamma n}\left(\log n + \frac{\log \log n}{2}\right)\left(\frac{\gamma}{\gamma - \beta}\right)^{\gamma n}
$$
and so
$$
\frac{\log \perm A}{n \log n} \leq \gamma + \frac{\log \left(\log n + \frac{\log \log n}{2}\right)}{n \log n} +\gamma\frac{\log \gamma - \log (\gamma - \beta)}{\log n} \to \gamma,
$$
as $n\to \infty$. Since $\gamma >1$ was arbitrary the claim follows.
\end{proof}
\section{Lower bounds and the proof of Theorem \ref{thm:general_bounds}}\label{sec:lower_bounds}
In this section we prove Theorem \ref{thm:general_bounds} as well as the lower bounds in Theorem \ref{thm:main}.
An important ingredient is the use of stochastic domination to reduce certain technical issues to the case of matrices with i.i.d. $0$--$1$ entries.
The following result proven by Hall \cite{Hall48} and Mann and Ryser in \cite{Mann_Ryser53} provides lower bounds for permanents of such matrices (see also Theorem 1.2 in Chapter 4 of \cite{Minc78}).
\begin{proposition}\label{thm:lower_bounds_for_01}
Let $A$ be an $m \times n$ matrix, $m\leq n$, all of whose elements are equal to $0$ or $1$. Assume that each row of $A$ contains at least $k$ elements equal to $1$. If $k \geq m$, then
\begin{equation}\label{eq:lower_bounds_on_01_1}
\perm A \geq \frac{k!}{(k-m)!}.
\end{equation}
If $k < m$ and $\perm A > 0$ then
\begin{equation}\label{eq:lower_bounds_on_01_2}
\perm A \geq k!.
\end{equation}
\end{proposition}
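The bounds in Proposition \ref{thm:lower_bounds_for_01} are easy to test by brute force on small matrices; the Python sketch below (an editorial illustration, with our own helper names) computes the permanent of an $m \times n$ matrix, $m \leq n$, as a sum over injections of rows into columns.

```python
from itertools import permutations

def perm(A):
    # permanent of an m x n matrix with m <= n:
    # sum over injections (c_0, ..., c_{m-1}) of A[0][c_0] * ... * A[m-1][c_{m-1}]
    m, n = len(A), len(A[0])
    total = 0
    for cols in permutations(range(n), m):
        p = 1
        for i, c in enumerate(cols):
            p *= A[i][c]
        total += p
    return total

# every row of this 3 x 5 matrix has k = 4 ones and k >= m = 3,
# so eq:lower_bounds_on_01_1 gives perm A >= 4!/(4-3)! = 24
A = [[1, 1, 1, 1, 0],
     [0, 1, 1, 1, 1],
     [1, 1, 0, 1, 1]]
assert perm(A) >= 24

# every row of this 3 x 3 matrix has k = 2 < m = 3 ones and the permanent
# is positive, so eq:lower_bounds_on_01_2 gives perm B >= 2! = 2
B = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
assert perm(B) >= 2
```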
As discussed in the introduction (see Remark \ref{rem:matchings}), the permanent of a matrix with $0$, $1$ elements can be viewed as the number of saturated matchings in the corresponding bipartite graph.
To ensure the positivity of the permanent when applying \eqref{eq:lower_bounds_on_01_2}, we will exploit this connection through Hall's classical marriage theorem, which can be easily stated in this setting (see \cite{Hall35}).
\begin{theorem}\label{thm:positivity_of_01}
Let $G=(V,E)$ be a bipartite graph and let $V = V_1 \cup V_2$ be a decomposition of the vertex set so that no two vertices in $V_i$ are connected by an edge, $i=1,2$. Assuming $|V_1| \leq |V_2|$, there exists a saturated matching on $G$ if and only if for any subset $W \subset V_1$ we have $|W| \leq |\{v \in V_2: v\sim w \text{ for some } w \in W\}|$.
\end{theorem}
Restating the above theorem in terms of permanents of $0$, $1$ matrices yields the following lemma.
\begin{lemma}\label{lemma:permanents_and_matchings}
Let $B$ be an $m \times n$ matrix, $m \leq n$, all of whose elements are either $0$ or $1$. If for any $1 \leq k \leq m$ any $k \times (n-k+1)$ submatrix of $B$ has at least one element equal to $1$, then $\perm B \geq 1$.
\end{lemma}
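Lemma \ref{lemma:permanents_and_matchings} can likewise be checked mechanically; the sketch below (illustrative only, assuming $m \leq n$) tests the hypothesis on every $k \times (n-k+1)$ submatrix and confirms the positivity of the permanent.

```python
from itertools import combinations, permutations

def perm01(A):
    # permanent of an m x n 0/1 matrix with m <= n, counting injections
    m, n = len(A), len(A[0])
    return sum(
        all(A[i][c] for i, c in enumerate(cols))
        for cols in permutations(range(n), m)
    )

def lemma_hypothesis(A):
    # every k x (n-k+1) submatrix contains at least one 1, for 1 <= k <= m
    m, n = len(A), len(A[0])
    return all(
        any(A[i][j] for i in rows for j in cols)
        for k in range(1, m + 1)
        for rows in combinations(range(m), k)
        for cols in combinations(range(n), n - k + 1)
    )

A = [[1, 0, 1, 0],
     [0, 1, 1, 0],
     [1, 1, 0, 0]]
assert lemma_hypothesis(A) and perm01(A) >= 1
```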
All the necessary applications of Proposition \ref{thm:lower_bounds_for_01} and Lemma \ref{lemma:permanents_and_matchings} are summarized in Lemma \ref{lemma:lower_bound_for_submatrices} which, in particular, proves the lower bounds in Theorem \ref{thm:general_bounds}.
\begin{remark}\label{ex:stirling}
Recall that Stirling's formula says that
\[
\lim_{n \to \infty} n!e^nn^{-(n+1/2)} = \sqrt{2\pi}.
\]
In particular there are constants $c_1< c_2$ so that for any $n$ and $1 \leq k \leq n-1$
\begin{equation}\label{eq:stirling_for_binom}
c_1 \frac{n^{n+1/2}}{k^{k+1/2}(n-k)^{n-k+1/2}} \leq \binom{n}{k} \leq c_2 \frac{n^{n+1/2}}{k^{k+1/2}(n-k)^{n-k+1/2}}.
\end{equation}
\end{remark}
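For concreteness (our own numerical illustration), the ratio in \eqref{eq:stirling_for_binom} can be monitored directly: for the moderate values below it stays within $[0.25, 0.5]$, approaching $1/\sqrt{2\pi}\approx 0.3989$ away from the boundary.

```python
import math

def stirling_ratio(n, k):
    # binom(n,k) divided by n^(n+1/2) / (k^(k+1/2) * (n-k)^(n-k+1/2)),
    # computed in log space for numerical stability
    log_binom = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    log_expr = ((n + 0.5) * math.log(n) - (k + 0.5) * math.log(k)
                - (n - k + 0.5) * math.log(n - k))
    return math.exp(log_binom - log_expr)

for n in range(2, 200):
    for k in range(1, n):
        assert 0.25 <= stirling_ratio(n, k) <= 0.5, (n, k)
```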
\begin{lemma}\label{lemma:lower_bound_for_submatrices}
Let $\xi$ be a positive random variable and let $A_n$ be a sequence of $n \times n$ matrices whose elements are independent and identically distributed as $\xi$.
For any $0 < \alpha < 1$ and any $\delta > 0$ there exists $r>0$ with the following property: Almost surely there exists $n_0$ such that for any $n \geq n_0$ and any $\alpha n \leq k \leq n$, any $k \times k$ submatrix $B$ of $A_n$ satisfies $\perm B \geq r^k k^{(1-\delta)k}$.
\end{lemma}
\begin{proof}
Let $q>0$ be such that $\mathbb{P}(\xi \leq q) < \eta$, where $\eta$ is to be chosen later.
Define the random variable $\tilde{\xi} = \mathbf{1}_{(\xi \geq q)}$ and the matrix $\tilde{A}_n = (\tilde{\xi}_{ij})$.
Let $\mathfrak{B}_n$ denote the event that some row of $\tilde{A}_n$ contains more than $\alpha \delta n $ zeros and let $\mathfrak{C}_n$ denote the event that for some $k_1$ and $k_2$ satisfying $\alpha n \leq k_1 + k_2$ there exists a $k_1 \times k_2$ submatrix of $\tilde{A}_n$ containing only zeros. By Lemma \ref{lemma:permanents_and_matchings} on the event $\mathfrak{C}_n^c$ any $k\times k$ submatrix of $\tilde{A}_n$ has a positive permanent, for $\alpha n \leq k \leq n$. Furthermore on the event $\mathfrak{B}_n^c$ every $k \times k$ submatrix of $\tilde{A}_n$ for $k \geq \alpha n$ contains at least $(1-\delta)k$ ones in each row.
Thus on the event $\mathfrak{B}_n^c \cap \mathfrak{C}_n^c$ by \eqref{eq:lower_bounds_on_01_2} we have for any $k \geq \alpha n$ and any $k \times k$ submatrix $B$ of $A_n$
\[
\perm B \geq q^k \perm \tilde{B} \geq q^k \lfloor (1-\delta)k\rfloor! \geq \Big(\frac{q(1-\delta)}{e}\Big)^kk^{(1-\delta)k},
\]
where $\tilde{B}$ is the submatrix of $\tilde{A}_n$ having the same rows and columns as $B$ in $A_n$.
Note that the last inequality above holds for $n$ large enough by Stirling's approximation.
Thus we only need to prove that the probabilities of the events $\mathfrak{B}_n \cup \mathfrak{C}_n$ are summable (since then they happen only finitely many times almost surely). To this end, observe that the average number of $1$s in every row and column of $\tilde{A}_n$ is greater than $n(1-\eta)$, so for $\eta < \alpha \delta$ by standard large deviation arguments there exists a constant $C$ such that $\mathbb{P}(\mathfrak{B}_n) \leq Cne^{-n(\alpha\delta-\eta)/C}$, which is clearly summable. For $\mathfrak{C}_n$ use the union bound to obtain
\begin{multline}\label{eq:large_deviations_for _rectangles}
\mathbb{P}(\mathfrak{C}_n) \leq 2\sum_{\alpha n \leq k_1+k_2 \leq n \atop k_1 \geq k_2}\binom{n}{k_1}\binom{n}{k_2}\eta^{k_1k_2} \leq 2\sum_{\alpha n/2 \leq k_1 \leq n} \binom{n}{k_1} \sum_{1\leq k_2 \leq n}\binom{n}{k_2}\eta^{k_1k_2} \\
\leq 2\sum_{\alpha n/2 \leq k_1 \leq n} \binom{n}{k_1} \Big(\Big(1+\eta^{k_1}\Big)^n-1\Big) \leq 2\sum_{\alpha n/2 \leq k_1 \leq n} \binom{n}{k_1} (2\eta)^{k_1},
\end{multline}
for $\eta$ small enough.
It is easy to check the last inequality by writing $\big(1+\eta^{k_1}\big)^n-1 = \eta^{k_1} \sum_{\ell=0}^{n-1}\big(1+\eta^{k_1}\big)^\ell$ and bounding each of the terms on the right hand side.
To prove that the right hand side above is summable, observe that for $\eta=\eta(\alpha)$ sufficiently small the following inequalities hold for $\alpha n/2 \leq k_1 \leq n$
\[
(2\eta)^{k_1/2} \leq (2\eta)^{\alpha n/4} \leq (1-\sqrt{2\eta})^{(1-\alpha/2)n} \leq (1-\sqrt{2\eta})^{n-k_1}.
\]
Plugging this back into \eqref{eq:large_deviations_for _rectangles} we get
\[
\mathbb{P}(\mathfrak{C}_n) \leq 2 \sum_{\alpha n/2 \leq k_1 \leq n} \binom{n}{k_1} (2\eta)^{k_1/2}(1-\sqrt{2\eta})^{n-k_1}.
\]
The right hand side is just twice the probability that the sum of $n$ independent random variables, each taking the value $1$ with probability $\sqrt{2\eta}$ and the value $0$ otherwise, is greater than $\alpha n/2$. Choosing $\sqrt{2\eta} < \alpha /2$, a standard large deviation estimate implies that this probability is exponentially small, and thus summable in $n$. This finishes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:general_bounds}]
i) Take an arbitrary $\delta > 0$. By Lemma \ref{lemma:lower_bound_for_submatrices} we can find $r > 0$ small enough so that almost surely for $n$ large enough
\[
\frac{\log \perm B}{k \log k} \geq \frac{\log r}{\log k} + (1-\delta),
\]
for any $\alpha n \leq k \leq n$ and any $k \times k$ submatrix $B$ of $A_n$. Thus almost surely
\[
\liminf_n \min_{B,k}\frac{\log \perm B}{k \log k} \geq 1-\delta.
\]
Since $\delta > 0$ was arbitrary the claim follows.
ii)
Let $\tilde{\xi}$ be a Pareto distributed random variable with parameter $1$.
By Markov's inequality, for all $t \geq \mathbb{E}(\xi)$ we have
\[
\mathbb{P}(\xi \geq t) \leq \frac{\mathbb{E}(\xi)}{t} = \mathbb{P}(\mathbb{E}(\xi)\tilde{\xi} \geq t).
\]
Thus $\mathbb{E}(\xi) \tilde{\xi}$ stochastically dominates $\xi$ from above and by Remark \ref{rem:stochastic_domination} we can construct a sequence $(\tilde{A}_n)$ of random $n \times n$ matrices whose elements are independent and identically distributed as $\tilde{\xi}$, such that
for any $\alpha n \leq k \leq n$ and any $k \times k$ submatrix $B$ of $A_n$ for the corresponding submatrix $\tilde{B}$ of $\tilde{A}_n$ we have $\perm B \leq \mathbb{E}(\xi)^k \perm \tilde{B}$. Thus it is enough to prove the claim in the case when we replace $\xi$ with $\tilde{\xi}$. If $\tilde{B}$ is an arbitrary $k \times k$ submatrix of $\tilde{A}_n$ then after permuting the rows and columns we can assume that $\tilde{B}$ is at the intersection of the first $k$ rows and columns. Denote by $\tilde{B}^c$ the matrix at the intersection of the other $n-k$ rows and columns and observe that since $\tilde{A}_n$ has positive elements $\perm \tilde{A}_n \geq \perm \tilde{B} \perm \tilde{B}^c$. Furthermore since all the elements are larger than $1$ we have $\perm \tilde{B}^c \geq (n-k)!$ and thus
\[
\frac{\log \perm \tilde{B}}{k \log k} \leq \frac{\log \perm \tilde{A}_n}{k \log k} - \frac{\log(n-k)!}{k \log k}
\leq \frac{\log \perm \tilde{A}_n}{k \log k} - \frac{(n-k)(\log(n-k) -1) - c}{k \log k},
\]
for some $c> 0$,
where the second inequality follows from Stirling's formula (note that we can assume that $k<n$, since for $k=n$ the upper bounds have been proven in the previous section). By the upper bounds in Theorem \ref{thm:main}, for any $\epsilon >0$ almost surely there is $n_0$ such that for all $n \geq n_0$ we have
$\log \perm \tilde{A}_n \leq (1+\epsilon) n\log n$. For such $n$ we have
\begin{multline*}
\frac{\log \perm \tilde{B}}{k \log k} \leq (1+\epsilon)\frac{n \log n}{k \log k} - \frac{(n-k)\log(n-k)}{k \log k} + \frac{n-k}{k \log k} + \frac{c}{k \log k}\\
\leq \frac{1}{\alpha}\Big(\frac{(1+\epsilon)\log n}{\log n + \log \alpha} -1\Big) + \frac{n-k}{k \log k} + \frac{c}{k \log k}
+ \frac{n}{k} -\Big(\frac{n}{k}-1\Big)\frac{\log (n-k)}{\log k}.
\end{multline*}
For a fixed $\alpha$ the first three terms on the right hand side vanish in the limit and it suffices to show that
\begin{equation}\label{eq:useful_lowerbound}
\limsup_{n \to \infty}\max_{2 \leq k < n}\Big(\frac{n}{k} -\Big(\frac{n}{k}-1\Big)\frac{\log (n-k)}{\log k}\Big) \leq 1.
\end{equation}
Writing $n=tk$, where $t > 1$, we have
\[
\frac{n}{k} -\Big(\frac{n}{k}-1\Big)\frac{\log (n-k)}{\log k} = 1-\frac{(t-1)\log(t-1)}{\log k}.
\]
When taking the supremum one can assume that $1 < t < 2$, that is $k > n/2$, since for $t \geq 2$ we have $(t-1)\log(t-1) \geq 0$ and the expression is at most $1$. For $1 < t < 2$ the claim follows from the fact that $s \mapsto s \log s$ is bounded on $(0,1)$.
\end{proof}
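The uniform bound \eqref{eq:useful_lowerbound} can also be observed numerically (an editorial check, not part of the proof): the maximum over $k$ exceeds $1$ only by a margin of order $1/\log n$.

```python
import math

def expr(n, k):
    # the quantity maximized in eq:useful_lowerbound
    t = n / k
    return t - (t - 1) * math.log(n - k) / math.log(k)

for n in [100, 1000, 10000]:
    worst = max(expr(n, k) for k in range(2, n))
    assert worst <= 1 + 1 / math.log(n) ** 0.5, n   # a generous o(1) margin
```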
\begin{proof}[Proof of the lower bounds in Theorem \ref{thm:main}]
The lower bounds for $\beta \leq 1$ follow from Theorem \ref{thm:general_bounds} i), so in the rest of the proof we will assume that $\beta > 1$.
First define the random variable $Y =(\log \xi)/\beta$ and observe that
\[
\lim_{t \to \infty}\frac{\log \mathbb{P}(Y \geq t)}{t} = -1,
\]
and thus for any $\epsilon >0$ we have $\mathbb{P}(Y \geq t) \geq \exp(-t(1+\epsilon))$, for $t$ large enough. Let $(Y_{ij})$ be an array of independent random variables distributed as $Y$, and for a fixed $n$ define
\[
Q_k = \max_{1 \leq i \leq n-k+1} Y_{ki}, \text{ for } 1 \leq k \leq n.
\]
Following the same arguments as in the proof of the first inequality in \eqref{eq: Max1} one can prove that for any $t'>0$
\[
\mathbb{P}(Q_1 \leq t'\log n) \leq \exp(-n^{1-t'(1+\epsilon)}),
\]
holds for $n$ large enough.
In particular if we fix $0<t'<1$, $\epsilon>0$ such that $t'(1+\epsilon) < 1$ and $0<\rho < 1$ then for $n$ large enough
\begin{align}\label{eq:lower_bounds_estimates_on_almost_exponential}
\sum_{1 \leq k \leq \rho n}\mathbb{P}\left(\frac{Q_k}{\log (n-k+1)} \leq t'\right)
&\leq \sum_{1 \leq k \leq \rho n} \exp\big(-(n-k+1)^{1-t'(1+\epsilon)}\big) \nonumber \\
& \leq \rho n\exp\bigl(-(1-\rho)^{1-t'(1+\epsilon)}n^{1-t'(1+\epsilon)}\bigr).
\end{align}
We will need this estimate later in the proof.
Now we will run a greedy algorithm to extract a large submatrix of $A_n$ with a large permanent. Starting from the first row of $A_n$ we pick a largest possible admissible element in each row. First define $A^{(1)} = A_n$ and $m_1$ as the smallest index between $1$ and $n$, such that $\zeta_1 := \xi_{1,m_1} = \max_{1 \leq i \leq n} \xi_{1,i}$. Next we inductively construct the matrix $A^{(k+1)}$ from $A^{(k)}$ by deleting the first row and the $m_{k}$-th column of the matrix $A^{(k)}$. At each step $\zeta_k$ is defined as the largest element in the first row of the matrix $A^{(k)}$ and $m_k$ as the first column which contains such an element.
Note that conditioned on the elements of the first $k-1$ rows of the matrix $A_n$ the matrix $A^{(k)}$ is just an $(n-k+1) \times (n-k+1)$ matrix with independent elements distributed as $\xi$. This immediately implies that $\zeta_1, \dots , \zeta_n$ are independent and that
the first row in $A^{(k)}$ is distributed as $(\xi_{k,1}, \dots, \xi_{k,n-k+1})$. Therefore
\begin{equation}\label{eq:distribution_of_max_in_rows}
\Big(\frac{ \log \zeta_1}{\beta}, \dots , \frac{\log \zeta_n}{\beta}\Big) \stackrel{d}{=} (Q_1, \dots, Q_n).
\end{equation}
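The greedy extraction is straightforward to implement; the sketch below (illustrative, with hypothetical helper names) runs $k$ greedy steps on a fixed matrix and confirms that $\prod_{i=1}^k\zeta_i$, being one term of the sum defining $\perm B^c$, is a lower bound for it.

```python
import math
from itertools import permutations

def greedy_rows(A, k):
    # k greedy steps: in row i pick the largest entry among still-unused columns,
    # breaking ties at the smallest column index; returns (m_1..m_k, zeta_1..zeta_k)
    n = len(A)
    used, cols, zetas = set(), [], []
    for i in range(k):
        best_val, neg_col = max((A[i][j], -j) for j in range(n) if j not in used)
        col = -neg_col
        used.add(col); cols.append(col); zetas.append(best_val)
    return cols, zetas

def permanent(M):
    # permanent of a square matrix, by direct expansion
    m = len(M)
    return sum(math.prod(M[i][p[i]] for i in range(m)) for p in permutations(range(m)))

A = [[3, 1, 4, 1],
     [5, 9, 2, 6],
     [5, 3, 5, 8],
     [9, 7, 9, 3]]
k = 2
cols, zetas = greedy_rows(A, k)
# B^c: intersection of the first k rows with the greedily chosen columns
Bc = [[A[i][j] for j in cols] for i in range(k)]
assert permanent(Bc) >= math.prod(zetas)
```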
Now fix some $0 < t < t' < 1$ and $t/t' < \rho < 1$. Then for $n$ large enough
\begin{equation}\label{eq: HelpBound}
t \frac{n \log n}{\log n + \log (1-\rho)} \leq t'(n\rho-1).
\end{equation}
Let $k = \lfloor \rho n\rfloor$ and consider the $(n-k) \times (n-k)$ submatrix $B=A^{(k+1)}$ which is at the intersection of the last $(n-k)$ rows and columns $[n]\backslash\{m_i:1 \leq i \leq k\}$. Consider the complement submatrix $B^c$ which lies at the intersection of the first $k$ rows and columns $\{m_i:1 \leq i \leq k\}$. Since $\zeta_i$ is the largest element in the $i$-th row of $B^c$ we have $\perm B^c \geq \prod_{i=1}^k\zeta_i$, and by \eqref{eq:distribution_of_max_in_rows} we have that
\begin{equation}\label{eq:lower_bound_on_the_extreacted_part}
\mathbb{P}\Big(\frac{\log \perm B^c}{\beta n \log n} \leq t\Big) \leq \mathbb{P}\Big(\sum_{i=1}^kQ_i \leq tn\log n\Big)\leq \mathbb{P}\Big(\sum_{i=1}^k\frac{Q_i}{\log(n-i+1)} \leq \frac{t n \log n}{\log ((1-\rho)n)}\Big).
\end{equation}
Now using \eqref{eq: HelpBound} we see that if the event under the probability on the right hand side happens, then $Q_i \leq t'\log(n-i+1)$ happens for some $1 \leq i \leq k$.
Therefore \eqref{eq:lower_bounds_estimates_on_almost_exponential} and subadditivity imply that for any $\epsilon > 0$ such that $t'(1+\epsilon) < 1$ we have
\begin{equation}\label{eq: ThirdBound}
\mathbb{P}\Big(\frac{\log \perm B^c}{\beta n \log n} \leq t\Big)\leq \sum_{i=1}^k\mathbb{P}\Big(\frac{Q_i}{\log(n-i+1)} \leq t'\Big)
\leq \rho n\exp\bigl(-(1-\rho)^{1-t'(1+\epsilon)}n^{1-t'(1+\epsilon)}\bigr),
\end{equation}
for $n$ large enough. Since the right hand side is summable in $n$, the Borel--Cantelli lemma implies that almost surely $\frac{\log \perm B^c}{ n \log n} \leq \beta t$ happens for only finitely many integers $n$.
On the other hand, Lemma \ref{lemma:lower_bound_for_submatrices} implies that for any $\delta > 0$ there is $r>0$ such that almost surely
\[
\perm B \geq r^{n-k}(n-k)^{(1-\delta)(n-k)} \geq \Big(r(1-\rho)^{1-\delta}\Big)^{(1-\rho)n}n^{(1-\delta)(1-\rho)n}.
\]
Since $A_n$ has positive elements, the inequality $\perm A_n \geq \perm B \perm B^c$ holds and thus almost surely for $n$ large enough
\[
\frac{\log \perm A_n}{n \log n} \geq \beta t + (1-\delta)(1-\rho) + \frac{(1-\rho)\bigl(\log r + (1-\delta)\log(1-\rho)\bigr)}{\log n}.
\]
Taking the limit as $n \to \infty$ and then $t \uparrow 1$ (which forces $\rho \uparrow 1$) yields the claim.
\end{proof}
\section{Non-square matrices and the necessity of \eqref{eq:heavy_tail_assumption}}\label{sec:non-square}
In this section we sketch how the above arguments extend to a large class of rectangular matrices. We will still assume that the elements are sampled independently from a distribution supported on $\mathbb{R}^+$, but will now allow the width of the matrix to be significantly larger than the height; in particular, it will suffice for the height to grow polynomially in the logarithm of the width.
The precise condition under which the method extends is that the matrix $A_n$ is $m_n \times n$, that is, it has height $m_n$ and width $n$, and the height satisfies the condition
\begin{equation}\label{eq:condition_on_the_height}
\liminf_{n}\frac{m_n \log \log n}{(\log n)^2} > 1.
\end{equation}
Observe that for an $m \times n$ matrix $A$ with iid elements of mean $\mu$ we have $\mathbb{E}(\perm A) = \binom{n}{m} m! \mu^m$, which demonstrates that the scaling function $n \log n$ has to be replaced by $m_n \log n$.
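The combinatorics behind this expectation, namely that there are $n!/(n-m)! = \binom{n}{m}m!$ injections each contributing $\mu^m$, can be verified on a constant matrix (an editorial illustration):

```python
import math
from itertools import permutations

def permanent_rect(A):
    # permanent of an m x n matrix with m <= n, summing over injections
    m, n = len(A), len(A[0])
    total = 0
    for cols in permutations(range(n), m):
        p = 1
        for i, c in enumerate(cols):
            p *= A[i][c]
        total += p
    return total

m, n, mu = 3, 5, 2
A = [[mu] * n for _ in range(m)]   # deterministic matrix with all entries mu
assert permanent_rect(A) == math.comb(n, m) * math.factorial(m) * mu**m
```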
\begin{theorem}\label{thm:main-rectangular}
Let $\xi$ be a positive random variable satisfying \eqref{eq:heavy_tail_assumption}
for some $\beta > 0$.
If $(A_n)_n$ is a sequence of $m_n \times n$ matrices on a common probability space with elements which are independent and identically distributed as $\xi$ and satisfying \eqref{eq:condition_on_the_height}, then almost surely
\begin{equation}\label{eq:main_result_rectangular}
\lim_{n \to \infty} \frac{\log \perm A_n}{m_n \log n} = \max(1,\beta).
\end{equation}
\end{theorem}
The uniformity over all submatrices of linear size holds as well.
\begin{theorem}\label{thm:general_bounds-rectangular}
Assume that $(A_n)$ is a sequence of $m_n \times n$ matrices on a common probability space with elements which are independent and identically distributed as $\xi$ and satisfying \eqref{eq:condition_on_the_height}, and let $0 < \alpha < 1$.
\begin{itemize}
\item[i)] We have
\begin{equation}\label{eq:general_lower_bounds_rectangular}
\liminf_{n \to \infty} \min_{(k_1, k_2,B)}\frac{\log \perm B}{k_1 \log k_2} \geq 1,
\end{equation}
where the minimum is taken over all pairs of integers $(k_1, k_2)$ satisfying $\alpha m_n \leq k_1 \leq m_n$, $\alpha n \leq k_2 \leq n$ and $k_1 \leq k_2$ and all $k_1 \times k_2$ submatrices $B$ of $A_n$.
\item[ii)]
If $\xi$ has a finite mean then
\begin{equation}\label{eq:general_upper_bounds_rectangular}
\limsup_{n \to \infty} \max_{(k_1, k_2, B)}\frac{\log \perm B}{k_1 \log k_2} = 1,
\end{equation}
where the maximum is taken over all pairs of integers $(k_1,k_2)$ satisfying $\alpha m_n \leq k_1 \leq m_n$, $\alpha n \leq k_2 \leq n$ and $k_1 \leq k_2$ and all $k_1 \times k_2$ submatrices $B$ of $A_n$.
\end{itemize}
\end{theorem}
The proofs of these theorems are modifications of the arguments in the previous two sections. We will briefly sketch how the modifications go and at which points one needs to do a more careful analysis.
Note that, to simplify notation, we will drop the ceiling and the floor notation throughout the section.
\begin{proof}[Sketch of the proof of the upper bounds in Theorem \ref{thm:main-rectangular}]
As before, by stochastic domination, it suffices to prove the claim when the elements are Pareto distributed. To this end, one needs to prove a version of Lemma \ref{lemma: MaxPerm} which states that when $(Y_{i,j})$ are independent exponentially distributed with rate one and $\lambda > 1$, we have
\[
\sum_{n = 2}^{\infty}\mathbb{P}\left(\max_{\pi \in S_{m_n,n}}\sum_{i=1}^{m_n} Y_{i,\pi(i)} \geq m_n\log n + m_n \frac{\log\log n}{\lambda}\right) < \infty.
\]
Proceeding as in the proof of Lemma \ref{lemma: MaxPerm} one is left to show that
\[
\sum_{n=2}^\infty\exp\biggl(\Big(\frac{1}{\log n -1} - \frac{\log \log n}{\lambda \log n}\Big)m_n\biggr) < \infty,
\]
for some $\lambda > 1$.
By \eqref{eq:condition_on_the_height}, for $\lambda > 1$ small enough the expression in the exponent is not larger than $-c \log n$ for some $c > 1$, which yields the claim.
Next one defines the analog of \eqref{eq:number_in_interval} as
\begin{equation}\label{eq:Z_for_rectangular}
Z_{n,k}=\bigg|\biggl\{\pi \in S_{m_n,n} : (k-1)m_n \leq \sum_{i=1}^{m_n}Y_{i,\pi(i)} < km_n \biggr\}\biggr|,
\end{equation}
and one needs to prove \eqref{eq: BoundZMax}. Calculating the expectation of $Z_{n,k}$ as in \eqref{eq: Z mean} yields
\begin{equation}\label{eq:expectation_of_Z_rectangular}
\binom{n}{m_n}e^{-km_n}m_n^{m_n}(k^{m_n} - (k-1)^{m_n}) \leq \mathbb{E}(Z_{n,k}) \leq \binom{n}{m_n}e^{-(k-1)m_n}(km_n)^{m_n}.
\end{equation}
For $k=1$ this yields $\mathbb{E}(Z_{n,1})\geq \binom{n}{m_n}m_n!e^{-m_n}$ which handles the sum $\sum_{n \geq 2}\mathbb{E}(Z_{n,1})^{1-\gamma}$.
It remains to prove the analog of \eqref{eq: BoundUp}, namely that
\[
\sum_{2 \leq k \leq \log n + \frac{\log \log n}{\lambda}} \frac{1}{ \mathbb{E}(Z_{n,k})^{\gamma-1}} \leq \frac{e^{m_n (\gamma-1)}}{\binom{n}{m_n}^{\gamma-1}m_n^{m_n(\gamma-1)}} \sum_{1 \leq k \leq \log n + \frac{\log \log n}{\lambda}} \frac{e^{km_n(\gamma-1)}}{k^{m_n (\gamma-1)}}
\]
is summable in $n$.
Again by the convexity of $g(t)=e^{tm_n(\gamma-1)}t^{-m_n (\gamma-1)}$ and the fact that $g(1) \leq g(\log n + \frac{1}{\lambda} \log \log n)$, proceeding as before one is left to prove that
\[
\left(\frac{ne}{m_n (\log n)^{1-1/\lambda}}\right)^{m_n(\gamma-1)}\frac{\log n + \frac{1}{\lambda}\log \log n}{\binom{n}{m_n}^{\gamma-1}}
\]
is summable in $n$.
Since $(\log n)^{-\kappa m_n}$ is summable for any $\kappa > 0$, it is enough to show that $\binom{n}{m_n} \geq (cn/m_n)^{m_n}$ for some $c> 0$. Taking logarithms on both sides and using \eqref{eq:stirling_for_binom} (we can assume that $m_n < n$) it is enough to show that
\[
(n-m_n +1/2) (\log n - \log (n-m_n)) \geq m_n \log c + \frac{1}{2}\log m_n,
\]
which holds for $c< 1$ and $n$ large enough (since the left hand side is positive and the right hand side negative). This proves \eqref{eq: BoundZMax} for \eqref{eq:Z_for_rectangular}.
To finish the proof assume that both
\begin{align*}
& \max_{\pi \in S_{m_n,n}}\sum_{i=1}^{m_n}Y_{i,\pi(i)} \leq m_n \log n + \frac{m_n \log\log n}{\lambda} \ \ {\rm and} \\
& Z_{n,k} \leq \mathbb{E}(Z_{n,k})^{\gamma}, \text{ for each } 1 \leq k \leq \log n + \frac{\log \log n}{\lambda},
\end{align*}
hold for some $\lambda > 1$, which is true for $n$ large enough almost surely.
The same calculations as in the proof of the upper bounds in Theorem \ref{thm:main} and \eqref{eq:expectation_of_Z_rectangular} yield
\[
\perm A \leq \binom{n}{m_n}^{\gamma}n^{(\beta-\gamma)m_n}m_n^{m_n\gamma}(\log n)^{\frac{\beta - \gamma}{\lambda}m_n}e^{m_n\gamma}\Big(\log n + \frac{1}{\lambda}\log \log n\Big)^{\gamma m_n}
\]
and
\[
\frac{\log \perm A}{m_n \log n} \leq \gamma\frac{\log \binom{n}{m_n}}{m_n \log n} + \beta -\gamma + \gamma \frac{\log m_n}{\log n}+ \frac{\beta - \gamma}{\lambda}\frac{\log \log n}{\log n} + \frac{\gamma}{\log n} + \gamma \frac{\log \left(\log n +\frac{1}{\lambda} \log \log n\right)}{\log n}.
\]
The last three terms vanish in the limit and so it remains to prove
\[
\limsup_{n \to \infty}\Big(\frac{\log \binom{n}{m_n}}{m_n \log n} + \frac{\log m_n}{\log n}\Big) \leq 1.
\]
After applying \eqref{eq:stirling_for_binom} we are left with
\[
\limsup_{n} \Big(\frac{n}{m_n}-\Big(\frac{n}{m_n} -1\Big)\frac{\log (n-m_n)}{\log n}\Big) \leq 1,
\]
which follows from \eqref{eq:useful_lowerbound}. For $\beta \leq 1$ one can repeat the calculations, or simply refer to stochastic domination.
\end{proof}
The proof of Theorem \ref{thm:general_bounds-rectangular} is based on the following equivalent of Lemma \ref{lemma:lower_bound_for_submatrices}.
\begin{lemma}\label{lemma:lower_bound_for_submatrices_rectangular}
Let $\xi$ be a positive random variable and let $A_n$ be a sequence of $m_n \times n$ matrices whose elements are independent and identically distributed as $\xi$ and whose height $m_n$ satisfies \eqref{eq:condition_on_the_height}.
For any $0 < \alpha < 1$ and any $\delta > 0$ there exists $r>0$ with the following property: Almost surely there exists $n_0$ such that for any $n \geq n_0$ and any pair of integers $(k_1,k_2)$ satisfying $\alpha m_n \leq k_1 \leq m_n$, $\alpha n \leq k_2 \leq n$, and $k_1 \leq k_2$, any $k_1 \times k_2$ submatrix $B$ of $A_n$ satisfies $\perm B \geq r^{k_1} k_2^{(1-\delta)k_1}$.
\end{lemma}
\begin{proof}[Sketch of the proof of Lemma \ref{lemma:lower_bound_for_submatrices_rectangular}]
One follows the proof of Lemma \ref{lemma:lower_bound_for_submatrices}. In the definitions one needs to write $n$ for the width of the matrix and $m_n$ for the height, for example $\mathcal{C}_n$ is defined as the event that for some pair of integers $(k_1,k_2)$ satisfying $1 \leq k_1 \leq m_n$, $1 \leq k_2 \leq n$ and $k_1 + k_2 \geq \alpha n$ some $k_1 \times k_2$ submatrix of $A_n$ contains only zeros, and $\mathcal{B}_n$ is defined as before. On $\mathcal{B}_n^c \cap \mathcal{C}_n^c$ one has
\[
\perm B \geq \left\{\begin{array}{ll}
q^{k_1} \lfloor(1-\delta)k_2 \rfloor!, & \text{ for } k_1 \geq (1-\delta)k_2\\
q^{k_1} \frac{\lfloor(1-\delta)k_2 \rfloor!}{(\lfloor(1-\delta)k_2 \rfloor - k_1)!}, & \text{ for } k_1 <(1-\delta)k_2.
\end{array}\right.
\]
In the first case the logarithm of the right hand side will be bounded from below by
\[
k_1 \log q + (1-\delta)k_2 \log ((1-\delta)k_2) - (1-\delta)k_2 \geq k_1 (\log q -1 + \log(1-\delta)) + (1-\delta)k_1\log k_2,
\]
and in the second
\begin{multline*}
k_1 \log q + ((1-\delta)k_2+1/2)\log ((1-\delta)k_2) - ((1-\delta)k_2-k_1+1/2)\log ((1-\delta)k_2 - k_1) - k_1 - 1 \\
\geq k_1\log ((1-\delta)k_2) + k_1(\log q - 2),
\end{multline*}
which is sufficient in both cases.
The probability of the event $\mathcal{B}_n$ is estimated as before. For $\mathcal{C}_n$ one can follow the arguments in \eqref{eq:large_deviations_for _rectangles}, starting with
\[
\mathbb{P}(\mathcal{C}_n) \leq 2\sum_{\alpha n/2 \leq k_2 \leq n} \binom{n}{k_2} \sum_{1\leq k_1 \leq m_n}\binom{m_n}{k_1}\eta^{k_1k_2}.
\]
This inequality follows from the simple fact that $m_n \leq n$ and $k_1 \leq k_2$ imply that $\binom{m_n}{k_2}\binom{n}{k_1} \leq \binom{m_n}{k_1}\binom{n}{k_2}$.
\end{proof}
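As a purely numerical illustration (not part of the proofs), the normalized quantity $\log \perm A_n / (n \log n)$ studied here can be computed exactly for small $n$ via Ryser's formula. The helper names and the Pareto sampler below are our own illustrative choices:

```python
import math
import random
from itertools import combinations

def perm(a):
    """Permanent of a square matrix via Ryser's formula, O(2^n * n^2)."""
    n = len(a)
    total = 0.0
    for r in range(1, n + 1):
        sign = (-1) ** r
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += sign * prod
    return (-1) ** n * total

def normalized_log_perm(n, alpha, seed=0):
    """log(perm A_n) / (n log n) for an n x n matrix with i.i.d.
    Pareto(alpha) entries, i.e. P(xi >= t) = t^(-alpha) for t >= 1."""
    rng = random.Random(seed)
    # inverse-transform sampling; 1 - U lies in (0, 1], so no division by zero
    a = [[(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]
         for _ in range(n)]
    return math.log(perm(a)) / (n * math.log(n))
```

Since all entries are at least $1$, the permanent is at least $n!$, so the normalized log-permanent is always positive; the exponential cost of Ryser's formula restricts this check to very small $n$.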
\begin{proof}[Sketch of the proof of Theorem \ref{thm:general_bounds-rectangular}]
Lower bounds follow from Lemma \ref{lemma:lower_bound_for_submatrices_rectangular}. For the upper bounds, again use Markov's inequality and reduce to the case when the elements of $A_n$ are Pareto distributed with parameter 1.
Similarly as before, observe that for any $k_1 \times k_2$ submatrix $B$, any term in the sum defining $\perm B$ can be expanded in $\binom{n-k_1}{m_n - k_1}(m_n-k_1)!$ ways to a term in the sum defining $\perm A_n$. Since all elements of $A_n$ are greater than or equal to $1$, we have
\begin{multline*}
\log \perm B \leq \log \perm A_n - \log \left(\binom{n-k_1}{m_n - k_1}(m_n-k_1)!\right) \\
\leq \log \perm A_n - (n-k_1+1/2)\log(n-k_1) + (n-m_n+1/2)\log (n-m_n) + (m_n-k_1).
\end{multline*}
The terms $m_n - k_1$, $\log (n-m_n)$ and $\log (n-k_1)$ are $o(k_1 \log k_2)$. Disregarding them and applying the upper bounds already proven in Theorem \ref{thm:main-rectangular}, we find that almost surely, for any $\epsilon > 0$ and all $n$ large enough,
\[
\frac{\log \perm B}{k_1 \log k_2} \leq \frac{\log n}{\log k_2}\left(\epsilon\frac{m_n}{k_1} + \frac{m_n \log n + (n-m_n)\log(n-m_n) - (n-k_1)\log(n-k_1)}{k_1 \log n}\right).
\]
We are left to prove that
\begin{equation}\label{eq:left_to_prove}
- (n-k_1) \log(n-k_1) + (n-m_n)\log(n-m_n) + (m_n-k_1)\log n \leq o(m_n \log n),
\end{equation}
uniformly in $\alpha m_n \leq k_1 \leq m_n$.
Since the left hand side (as a function of $k_1$) is increasing on $(0, n(1-1/e))$ and decreasing on $(n(1-1/e),n)$, it suffices to prove \eqref{eq:left_to_prove} for $k_1 =m_n$ when $m_n \leq n(1-1/e)$ and for $k_1 = n(1-1/e)$ when $n(1-1/e) < m_n \leq n$ (assuming $\alpha < 1-1/e$, which entails no loss of generality). For $k_1 = m_n$ the left hand side is $0$, so there is nothing to prove, and for $k_1 = n(1-1/e)$ the left hand side is equal to
\[
\frac{n}{e} - (n-m_n)\big(\log n - \log (n-m_n)\big),
\]
which is not larger than an $o(n \log n)$ term, and this suffices in the case $n(1-1/e) < m_n \leq n$.
\end{proof}
\begin{proof}[Sketch of the proof of the lower bounds in Theorem \ref{thm:main-rectangular}]
The lower bounds in the case $\beta \leq 1$ follow directly from Theorem \ref{thm:general_bounds-rectangular} i). For the case $\beta > 1$ one can follow the arguments almost verbatim. In \eqref{eq:lower_bounds_estimates_on_almost_exponential} one needs to replace $\rho n$ by $\rho m_n$ in the upper limit of the sum and in the factor at the end. Moreover, the greedy algorithm is run for $k=\rho m_n$ steps, where $\rho$ is defined as in \eqref{eq: HelpBound} with $n$ replaced by $m_n$ (except under $\log$s). One defines the submatrices $B$ and $B^c$ as before and arrives at the analog of \eqref{eq: ThirdBound}
\[
\mathbb{P}\Big(\frac{\log \perm B^c}{\beta m_n \log n} \leq t\Big)\leq \sum_{i=1}^k\mathbb{P}\Big(\frac{Q_i}{\log(n-i+1)} \leq t'\Big)
\leq \rho m_n\exp\bigl(-(1-\rho)^{1-t'(1+\epsilon)}n^{1-t'(1+\epsilon)}\bigr).
\]
To finish the proof, apply Lemma \ref{lemma:lower_bound_for_submatrices_rectangular} to $B$, exactly as Lemma \ref{lemma:lower_bound_for_submatrices} was applied in Section \ref{sec:lower_bounds}.
\end{proof}
The following example shows that Theorem \ref{thm:main} in general fails when the limit in \eqref{eq:heavy_tail_assumption} does not exist. In fact, this failure can occur at arbitrarily small oscillations of the sequence in \eqref{eq:heavy_tail_assumption}. We present the argument for square matrices.
\begin{example}\label{ex:no_convergence}
Let $S=\{k_i\}$ be a set of positive integers labeled so that $k_{i+1} > 2k_i$. Fix $C_2 > C_1 > \lambda > 1$ and for every $k \geq 1$ define the following sequences of positive real numbers
\[
t_k = \exp(\lambda^k), \ \tilde{p}_k' = \exp(- \lambda^k/C_1), \text{ and } p_k' = \left\{\begin{array}{ll}\exp(- \lambda^k/C_1), & \text{ for }k \notin S \\
\exp(- \lambda^k/C_2), & \text{ for } k \in S.
\end{array}
\right.
\]
Clearly both series $\sum_k p_k'$ and $\sum_k \tilde{p}_k'$ converge, so we can normalize the sequences by their sums $Z$ and $\tilde{Z}$, respectively, and obtain sequences $p_k = p_k'/Z$ and $\tilde{p}_k = \tilde{p}_k'/\tilde{Z}$.
Let $\xi$ and $\tilde{\xi}$ be random variables supported on the set $\{t_k\}$ with distributions $\mathbb{P}(\xi = t_k) = p_k$ and $\mathbb{P}(\tilde{\xi} = t_k) = \tilde{p}_k$. Observe that the mappings $t \mapsto \mathbb{P}(\xi \geq t)$ and $t \mapsto \mathbb{P}(\tilde{\xi} \geq t)$ are constant on $(t_k , t_{k+1}]$, that $\mathbb{P}(\xi \geq t_k) \leq 2p_k$ for all $k \in S$ large enough and for infinitely many $k \notin S$ as well, and that $\mathbb{P}(\tilde{\xi} \geq t_k) \leq 2\tilde{p}_k$ for all $k$ large enough. From this it is easy to see that
\begin{align}
&\liminf_{t \to \infty} \frac{\log \mathbb{P}(\xi \geq t)}{\log t} = -\frac{1}{C_1/\lambda} < -\frac{1}{C_2} = \limsup_{t \to \infty} \frac{\log \mathbb{P}(\xi \geq t)}{\log t}, \label{eq:example_bounds_1} \\
&\liminf_{t \to \infty} \frac{\log \mathbb{P}(\tilde{\xi} \geq t)}{\log t} = -\frac{1}{C_1/\lambda} < -\frac{1}{C_1} = \limsup_{t \to \infty} \frac{\log \mathbb{P}(\tilde{\xi} \geq t)}{\log t} \label{eq:example_bounds_2}.
\end{align}
As usual, let $(A_n)$ denote a sequence of $n \times n$ matrices on a common probability space with independent elements distributed as $\xi$. We will prove that
\begin{equation}\label{eq:example_statement}
\liminf_n \frac{\log \perm A_n}{n \log n} \leq C_1, \text{ and } \limsup_n \frac{\log \perm A_n}{n \log n} = C_2.
\end{equation}
To get the upper bound on $\liminf$ take a sequence $(\ell_i)$ of positive integers such that $k_i < \ell_i < k_{i+1}$ and that sequences
$(\ell_i - k_i)$ and $(k_{i+1} -\ell_i)$ are strictly increasing.
Define integers $n_i = \exp(\lambda^{\ell_i}/C_1)$. By a simple union bound, the probability that $A_{n_i}$ contains an element $t_\ell$, for some $\ell \geq k_{i+1}$, is bounded from above by
\[
2n_i^2p_{k_{i+1}} =\frac{1}{Z}\exp(2\lambda^{\ell_i}/C_1) 2\exp(-\lambda^{k_{i+1}} /C_2),
\]
for $i$ large enough. This expression is summable in $i$, so almost surely $A_{n_i}$ does not contain elements greater than or equal to $t_{k_{i+1}}$ for $i$ large enough. Thus to prove the first inequality in \eqref{eq:example_statement} one can assume that the elements of $A_{n_i}$ are distributed as $\xi\mathbf{1}_{(\xi < t_{k_{i+1}})}$.
Next observe that $\xi\mathbf{1}_{(\xi < t_{k_{i+1}})}$ is stochastically dominated by $t_{k_i}\tilde{\xi}$, that is
\[
\mathbb{P}(t \leq \xi < t_{k_{i+1}}) \leq \mathbb{P}(t_{k_i}\tilde{\xi} \geq t).
\]
While the inequality is trivial for $t\geq t_{k_{i+1}}$ and for $t \leq t_{k_i}$, for $t_{k_i} < t < t_{k_{i+1}}$ it follows from the fact that for $J \subset (t_{k_i},t_{k_{i+1}})$
\[
\mathbb{P}(\xi \in J) =\frac{\sum_{k : t_k\in J}p_k'}{Z} \leq \frac{\sum_{k : t_k\in J}\tilde{p}_k'}{\tilde{Z}} = \mathbb{P}(\tilde{\xi} \in J),
\]
since $p_k' = \tilde{p}_k'$, for $t_k \in J$ and $\tilde{Z} \leq Z$.
Thus if $\tilde{A}_n$ is the sequence of $n \times n$ matrices whose elements are independent and identically distributed as $\tilde{\xi}$, then
\[
\liminf_{n} \frac{\log \perm A_n}{n \log n} \leq \lim_i \frac{n_i\log t_{k_i} + \log \perm \tilde{A}_{n_i}}{n_i \log n_i} \leq \lim_i \frac{\lambda^{k_i}}{\lambda^{\ell_i}/C_1} + C_1 = C_1.
\]
Here the second inequality follows from the upper bounds in Theorem \ref{thm:main} and \eqref{eq:example_bounds_2}.
To prove the second relation in \eqref{eq:example_statement} fix $k\in S$ and $\epsilon > 0$, and define the integer $n = n_k = \exp((1+\epsilon)\lambda^k/C_2)$. We proceed with a greedy algorithm analogous to the one in the proof of the lower bound in Theorem \ref{thm:main}. With probability $1- (1-p_k)^n$ there is an element in the first row of $A^{(0)} = A_n$ equal to $t_k$. On this event take the first such element, remove the corresponding column and the first row from $A_n$, and obtain the $(n-1) \times (n-1)$ matrix $A^{(1)}$, which is independent of the first row and is distributed as $A_{n-1}$. Now repeat the step with $A^{(1)}$ instead of $A^{(0)}$ and proceed recursively as long as one is successful at each step. For $0 < \rho < 1$, one will not be able to proceed until step $\rho n$ with probability at most
\[
\sum_{(1-\rho) n \leq i \leq n}(1-p_k)^i \leq \frac{(1-p_k)^{(1-\rho)n}}{p_k} \leq 3Z\exp\Big(-\frac{1-\rho}{Z}\exp(\epsilon\lambda^k/C_2)+\lambda^k/C_2\Big),
\]
if $k$ is chosen large enough. The right hand side is clearly summable in $k$, and thus almost surely, for $k$ large enough and $n=n_k$ constructed as above, the algorithm succeeds for $\rho n$ steps. In that case one can obtain a lower bound on $\perm A_n$ as in the proof of the lower bounds in Theorem \ref{thm:main}: the permanent of the submatrix at the intersection of the first $\rho n$ rows and the removed columns is bounded from below by the product of the extracted elements, that is $t_k^{\rho n}$, and the permanent of the submatrix at the intersection of the last $(1-\rho)n$ rows and the non-removed columns is bounded from below by $((1-\rho) n)!$ (since all of its elements are greater than or equal to $1$). Therefore for $k$ large enough and $n$ constructed as above, $\perm A_n \geq t_k^{\rho n} ((1-\rho) n)!$. Since $\log ((1-\rho) n)!/(n \log n) \to 1-\rho$ and
\[
\frac{\log t_k^{\rho n}}{n \log n} \geq \frac{\rho\log t_k}{\log n} \geq \frac{\rho \lambda^k }{(1+\epsilon)\lambda^k /C_2} \to \frac{C_2 \rho}{1+\epsilon},
\]
we obtain that almost surely
\[
\limsup_{n \to \infty} \frac{\log \perm A_n}{n \log n} \geq \frac{C_2 \rho}{1+\epsilon} + 1- \rho.
\]
By sending $\rho \to 1$ and $\epsilon \to 0$ we get $C_2$ as a lower bound on the $\limsup$, and by Theorem \ref{thm:main} and \eqref{eq:example_bounds_1} the $\limsup$ is in fact equal to $C_2$.
\end{example}
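The greedy extraction in the example above can be sketched in code. A minimal illustration with our own function names and a toy matrix, assuming (as in the example) that all entries are at least $1$:

```python
import math
from itertools import permutations

def greedy_log_lower_bound(a, target):
    """Greedy lower bound on log(perm a): while the first row of the
    remaining matrix contains `target`, extract that entry and delete
    its row and column; the untouched m x m remainder, having entries
    >= 1, contributes at least m! to the permanent."""
    a = [row[:] for row in a]          # work on a copy
    extracted = 0
    while a and target in a[0]:
        j = a[0].index(target)
        a = [row[:j] + row[j + 1:] for row in a[1:]]
        extracted += 1
    m = len(a)
    # log(target^extracted * m!)
    return extracted * math.log(target) + math.lgamma(m + 1)

def log_perm(a):
    """Exact log-permanent by brute force (tiny matrices only)."""
    n = len(a)
    return math.log(sum(math.prod(a[i][p[i]] for i in range(n))
                        for p in permutations(range(n))))
```

The bound is valid because the extracted entries occupy distinct rows and columns disjoint from the remainder, so every permutation of the remainder extends to a permutation of the full matrix.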
\textbf{Acknowledgments}
The author would like to thank Professor Sourav Chatterjee for suggesting this problem and for generous help and guidance with this paper.
\section{Introduction}
For two integers $p,q$ with $q\ge p$, we use $[p,q]$ to denote the set of all integers between $p$
and $q$, inclusive.
We consider only simple graphs. Let $G$ be a graph with maximum degree $\Delta(G)=\Delta$. We denote by $V(G)$ and $E(G)$ the vertex set and edge set of $G$, respectively.
Let $v\in V(G)$ with $d_G(v)=t\ge 2$ and $N_G(v)=\{u_1,\ldots, u_t\}$.
A \emph{vertex-splitting} in $G$ at $v$ into two vertices $v_1$ and $v_2$ gives a new graph $G'$ such that
$V(G')=(V(G)\setminus\{v\})\cup \{v_1,v_2\}$ and $E(G')=E(G-v)\cup \{v_1v_2\}\cup \{v_1u_i: i\in [1,s]\} \cup \{v_2u_i: i\in [s+1,t]\}$, where $s\in [1,t-1]$ is any integer. We say $G'$ is obtained from $G$ by a vertex-splitting.
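To make the definition concrete, here is a small sketch of vertex-splitting on graphs stored as adjacency sets. The representation and the labels $(v,1)$, $(v,2)$ standing for $v_1$, $v_2$ are our own illustrative choices:

```python
def split_vertex(adj, v, part1):
    """Vertex-splitting at v: v1 keeps the neighbours in part1, v2 keeps
    the rest, and the new edge v1v2 is added.  `adj` maps each vertex to
    the set of its neighbours."""
    part1, part2 = set(part1), adj[v] - set(part1)
    # the split index s must lie in [1, t-1], i.e. both parts are nonempty
    assert part1 and part2 and part1 <= adj[v]
    g = {u: set(nbrs) for u, nbrs in adj.items() if u != v}
    v1, v2 = (v, 1), (v, 2)
    g[v1] = part1 | {v2}
    g[v2] = part2 | {v1}
    for u in part1:
        g[u].discard(v)
        g[u].add(v1)
    for u in part2:
        g[u].discard(v)
        g[u].add(v2)
    return g
```

Note that the split graph has one more vertex and, because of the new edge $v_1v_2$, one more edge than the original.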
An {\it edge $k$-coloring\/} or simply a \emph{$k$-coloring} of $G$ is a mapping $\varphi$ from $E(G)$ to the set of integers
$[1,k]$, called {\it colors\/}, such that no two adjacent edges receive the same color with respect to $\varphi$.
The {\it chromatic index\/} of $G$, denoted $\chi'(G)$, is defined to be the smallest integer $k$ so that $G$ has an edge $k$-coloring.
We denote by $\mathcal{C}^k(G)$ the set of all edge $k$-colorings of $G$.
A graph $G$ is \emph{$\Delta$-critical} if $\chi'(G)=\Delta+1$ and $\chi'(H)<\Delta+1$ for every proper subgraph $H$ of $G$.
In the 1960s, Vizing~\cite{Vizing-2-classes} showed that a graph of maximum degree
$\Delta$ has chromatic index either $\Delta$ or $\Delta+1$.
If $\chi'(G)=\Delta$, then $G$ is said to be of {\it Class 1\/}; otherwise, it is said to be
of {\it Class 2\/}.
Holyer~\cite{Holyer} showed that it is NP-complete to determine whether an arbitrary graph is of Class 1.
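By Vizing's theorem the chromatic index is always $\Delta$ or $\Delta+1$, and Holyer's result means no efficient test for the class is expected; for very small graphs one can still decide it by exhaustive search. A minimal sketch (our own helper, not from the paper):

```python
from itertools import product

def chromatic_index(edges):
    """Smallest k admitting a proper edge k-coloring; brute force over
    all k^|E| assignments, so usable only for very small graphs."""
    m = len(edges)
    k = 1
    while True:
        for colors in product(range(k), repeat=m):
            # proper: adjacent edges (sharing an endpoint) differ in color
            if all(colors[i] != colors[j]
                   for i in range(m) for j in range(i + 1, m)
                   if set(edges[i]) & set(edges[j])):
                return k
        k += 1
```

For example, the triangle $K_3$ has $\Delta=2$ but chromatic index $3$, so it is of Class 2 (it is overfull), while a path with two edges has $\Delta=2$ and is of Class 1.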
Nevertheless, if a graph $G$ has too many edges, i.e., $|E(G)|>\Delta \lfloor |V(G)|/2\rfloor$, then every proper edge coloring of $G$ needs $\Delta+1$ colors. Such graphs are called \emph{overfull}.
Clearly, overfull graphs have an odd order, all regular graphs of an odd order are overfull,
and every graph obtained from a regular Class 1 graph by a vertex-splitting is overfull.
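The overfull condition is easy to check mechanically; a small sketch with our own adjacency-set representation:

```python
def is_overfull(adj):
    """Overfull test: |E(G)| > Delta * floor(|V(G)| / 2).
    `adj` maps each vertex to the set of its neighbours."""
    n = len(adj)
    delta = max((len(nb) for nb in adj.values()), default=0)
    m = sum(len(nb) for nb in adj.values()) // 2
    return m > delta * (n // 2)

# C5, a 2-regular graph of odd order, is overfull: 5 edges > 2 * 2
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
```

In contrast, $K_4$ has $6 = 3\cdot\lfloor 4/2 \rfloor$ edges and is not overfull, consistent with the fact that overfull graphs must have odd order.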
Being overfull is definitely a cause for a graph to be Class 2, but is that the only cause?
Hilton and Zhao~\cite{MR1460574} in 1997 conjectured the following:
Let $G$ be an $n$-vertex Class 1 $\Delta$-regular graph with $\Delta>\frac{n}{3}$. If $G^*$
is obtained from $G$ by a vertex-splitting, then $G^*$ is $\Delta$-critical (vertex-splitting conjecture).
Clearly, $G^*$ is overfull. The conjecture asserts that for every $e\in E(G^*)$, $G^*-e$
is no longer Class 2. In other words, being overfull is the only cause for $G^*$ to be Class 2.
This conjecture was verified when $\Delta\ge \frac{n}{2}(\sqrt{7}-1)\approx 0.82n$ by
Hilton and Zhao~\cite{MR1460574} in 1997. Song~\cite{MR1874750} in 2002 showed that the conjecture holds for
a particular class of $n$-vertex Class 1 $\Delta$-regular graphs with $\Delta\ge \frac{n}{2}$.
No other progress on this conjecture has been achieved since then.
We offer the following support for this conjecture.
\begin{THM}\label{thm:vertex spliting}
Let $n$ and $ \Delta$ be positive integers such that $\Delta\ge \frac{3(n-1)}{4}$.
If $G$ is obtained from an $(n-1)$-vertex $\Delta$-regular Class 1 graph by a vertex-splitting, then $G$
is $\Delta$-critical.
\end{THM}
If $G$ is an $n$-vertex overfull graph, then $|E(G)|\ge \Delta(n-1)/2 +1$.
Thus $\sum_{v\in V(G)} (\Delta-d_G(v)) = n\Delta - 2|E(G)| \le \Delta-2$. Therefore, if $G$ has a vertex of degree 2,
then all other vertices of $G$ have maximum degree. Is the converse of this statement true?
That is, when is a Class 2 graph with a degree 2 vertex overfull? We investigate this question and show that this happens
when $\Delta$ is large. In general, for two adjacent vertices $u,v\in V(G)$,
we call $(u,v)$ a \emph{full-deficiency pair} of $G$
if $d(u)+d(v)=\Delta(G)+2$. In particular, if $v$ is of degree 2 in a $\Delta$-critical graph $G$, then each neighbor $u$ of
$v$ has degree $\Delta$ by Vizing's Adjacency Lemma (this lemma will be introduced in Section 2). Therefore, $(u,v)$ is a full-deficiency pair of $G$.
We obtain the following result.
\begin{THM}\label{thm:Delta-critical}
Let $n$ and $ \Delta$ be positive integers such that $\Delta\ge \frac{3(n-1)}{4}$, and
$G$ be an $n$-vertex $\Delta$-critical graph.
If $G$ has a full-deficiency pair, then $G$ is overfull.
Consequently, $G$ is obtained from an $(n-1)$-vertex $\Delta$-regular Class 1 multigraph by a vertex-splitting.
\end{THM}
Theorem~\ref{thm:Delta-critical} partially supports a conjecture of
Chetwynd and Hilton from 1986~\cite{MR848854,MR975994}. The conjecture states
the following: Let $G$ be a simple graph with $\Delta(G)>\frac{1}{3}|V(G)|$. Then $G$ is Class 2 implies that
$G$ contains an overfull subgraph $H$ with $\Delta(H)=\Delta(G)$ (overfull conjecture).
The overfull conjecture was confirmed only for some special classes of graphs.
Chetwynd and Hilton~\cite{MR975994} in 1989 verified the conjecture for
$n$-vertex graphs with $\Delta\ge n-3$.
Hoffman and Rodger~\cite{comm} in 1992 confirmed the conjecture for complete multipartite graphs.
Plantholt~\cite{MR2082738} in 2004 showed that the overfull conjecture is affirmative for
graphs $G$ with an even order $n$, maximum degree $\Delta$ and minimum degree $\delta$ satisfying
$(3\delta-\Delta)/2 \ge cn $ for any $c\ge \frac{3}{4}$.
The overfull conjecture was also confirmed for large regular graphs in 2013~\cite{MR3185848,MR3545109}.
Both the overfull conjecture and the vertex-splitting conjecture are best possible in terms of the
condition on the maximum degree, by considering the critical Class 2 graph $P^*$, which is obtained from the Petersen graph by deleting a vertex. Hilton and Zhao~\cite{MR1460574} proved that the overfull conjecture implies the
vertex-splitting conjecture.
The results in Theorem~\ref{thm:vertex spliting} and Theorem~\ref{thm:Delta-critical} together
imply that all $n$-vertex $\Delta$-critical graphs
with a vertex of degree 2 can be obtained from an $(n-1)$-vertex $\Delta$-regular Class 1 multigraph
by a vertex splitting, when $\Delta\ge \frac{3(n-1)}{4}$.
Thereby, these results provide
a way of constructing dense $\Delta$-critical graphs,
which
are known to be hard.
The remainder of this paper is organized as follows.
We introduce some definitions and preliminary results in Section 2.
In Section 3, we prove Theorem~\ref{thm:vertex spliting} and Theorem~\ref{thm:Delta-critical}
by assuming the truth of
several lemmas. These lemmas will be proved in the last section.
\section{Definitions and Preliminary Results}\label{lemma}
Let $G$ be a graph.
For $e\in E(G)$, $G-e$
denotes the graph obtained from $G$ by deleting the edge $e$.
The symbol $\Delta$ is reserved for $\Delta(G)$, the maximum degree of $G$
throughout this paper. A \emph{$k$-vertex} in $G$ is a vertex of degree exactly $k$
in $G$, and a \emph{$k$-neighbor} of a vertex $v$ is a neighbor of $v$ that is a $k$-vertex in $G$.
For $u,v\in V(G)$, we use $\dist_G(u,v)$ to denote the distance between $u$ and $v$, which is the length of a shortest path connecting $u$
and $v$ in $G$. For $S\subseteq V(G)$, define $\dist_G(u,S)=\min_{v\in S} \dist_G(u,v)$.
An edge $e\in E(G)$ is a \emph{critical edge} of $G$ if $\chi'(G-e)<\chi'(G)$.
It is not hard to see that if
$G$ is connected, $\chi'(G)=\Delta+1$ and every edge of $G$ is critical, then $G$ is $\Delta$-critical.
Critical graphs are useful since they provide more information about the structure around a vertex than general Class 2 graphs. For
example, Vizing's Adjacency Lemma (VAL) from 1965~\cite{Vizing-2-classes} is a useful tool that reveals certain structure at a vertex
by assuming the criticality of an edge.
\begin{LEM}[Vizing's Adjacency Lemma (VAL)]Let $G$ be a Class 2 graph with maximum degree $\Delta$. If $e=xy$ is a critical edge of $G$, then $x$ has at least $\Delta-d_G(y)+1$ $\Delta$-neighbors in $V(G)\setminus \{y\}$.
\label{thm:val}
\end{LEM}
Let $G$ be a graph and
$\varphi\in \mathcal{C}^k(G-e)$ for some edge $e\in E(G)$ and some integer $k\ge 0$.
For any $v\in V(G)$, the set of colors \emph{present} at $v$ is
$\varphi(v)=\{\varphi(f)\,:\, \text{$f$ is incident to $v$}\}$, and the set of colors \emph{missing} at $v$ is $\overline{\varphi}(v)=[1,k]\setminus\varphi(v)$.
For a vertex set $X\subseteq V(G)$, define
$$
\overline{\varphi}(X)=\bigcup _{v\in X} \overline{\varphi}(v).
$$
The set $X$ is called \emph{elementary} with respect to $\varphi$ or simply \emph{$\varphi$-elementary} if $\overline{\varphi}(u)\cap \overline{\varphi}(v)=\emptyset$
for every two distinct vertices $u,v\in X$. Sometimes, we just say that $X$
is elementary if the edge coloring is understood.
For two distinct colors $\alpha,\beta \in [1,k]$, let $H$ be the subgraph of $G$
with $V(H)=V(G)$ and $E(H)$ consisting of edges from $E(G)$ that are colored by $\alpha$
or $\beta$ with respect to $\varphi$. Each component of $H$ is either
an even cycle or a path, which is called an \emph{$(\alpha,\beta)$-chain} of $G$
with respect to $\varphi$. If we interchange the colors $\alpha$ and $\beta$
on an $(\alpha,\beta)$-chain $C$ of $G$, we get a new edge $k$-coloring of $G$,
and we write $$\varphi'=\varphi/C.$$
This operation is called a \emph{Kempe change}.
For a color $\alpha$, a sequence of
{\it Kempe $(\alpha,*)$-changes} is a sequence of
Kempe changes that each involves the exchanging of the color $\alpha$
and another color from $[1,k]$.
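A Kempe change is easy to state operationally. The following sketch uses our own representation, a coloring stored as a dict keyed by frozenset edges, and flips one $(\alpha,\beta)$-chain:

```python
from collections import deque

def kempe_swap(coloring, alpha, beta, start):
    """Swap alpha and beta on the (alpha, beta)-chain containing `start`.
    `coloring` maps edges frozenset({u, v}) -> color; a new coloring is
    returned and the input is left unchanged."""
    # adjacency restricted to edges colored alpha or beta
    adj = {}
    for e, c in coloring.items():
        if c in (alpha, beta):
            u, v = tuple(e)
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    # BFS the component (the chain) containing `start`
    comp, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj.get(u, []):
            if w not in comp:
                comp.add(w)
                queue.append(w)
    new = dict(coloring)
    for e, c in coloring.items():
        if c in (alpha, beta) and e <= comp:   # edge lies on the chain
            new[e] = beta if c == alpha else alpha
    return new
```

Since every $(\alpha,\beta)$-chain is a path or an even cycle, interchanging the two colors on one chain keeps the coloring proper, which is why Kempe changes are the basic recoloring tool in the arguments below.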
Let $x,y\in V(G)$, and $\alpha, \beta, \gamma\in [1,k]$ be three colors. If $x$ and $y$
are contained in a same $(\alpha,\beta)$-chain of $G$ with respect to $\varphi$, we say $x$
and $y$ are \emph{$(\alpha,\beta)$-linked} with respect to $\varphi$.
Otherwise, $x$ and $y$ are \emph{$(\alpha,\beta)$-unlinked} with respect to $\varphi$. Without specifying $\varphi$, when we just say $x$ and $y$ are $(\alpha,\beta)$-linked or $x$ and $y$ are $(\alpha,\beta)$-unlinked, we mean they are linked or unlinked with respect to the current edge coloring.
Let $P$ be an
$(\alpha,\beta)$-chain of $G$ with respect to $\varphi$ that contains both $x$ and $y$.
If $P$ is a path, denote by $\mathit{P_{[x,y]}(\alpha,\beta, \varphi)}$ the subchain of $P$ that has endvertices $x$
and $y$. By \emph{swapping colors} along $P_{[x,y]}(\alpha,\beta,\varphi)$, we mean
exchanging the two colors $\alpha$
and $\beta$ on the path $P_{[x,y]}(\alpha,\beta,\varphi)$.
The notation $P_{[x,y]}(\alpha,\beta)$ always represents this $(\alpha,\beta)$-subchain
with respect to the current edge coloring.
Define $P_x(\alpha,\beta,\varphi)$ to be an $(\alpha,\beta)$-chain or an $(\alpha,\beta)$-subchain of $G$ with respect to $\varphi$ that starts at $x$ and ends at a different vertex missing exactly one of $\alpha$ and $\beta$.
(If $x$ is an endvertex of the $(\alpha,\beta)$-chain that contains $x$, then $P_x(\alpha,\beta,\varphi)$ is unique. Otherwise, we take one segment of the whole chain to be
$P_x(\alpha,\beta,\varphi)$. We will specify the segment when it is used.)
If $u$ is a vertex on $P_x(\alpha,\beta,\varphi)$, we write {$\mathit {u\in P_x(\alpha,\beta, \varphi)}$}; and if $uv$ is an edge on $P_x(\alpha,\beta,\varphi)$, we write {$\mathit {uv\in P_x(\alpha,\beta, \varphi)}$}. Similarly, the notation $P_x(\alpha,\beta)$ always represents the $(\alpha,\beta)$-chain
with respect to the current edge coloring.
If $u,v\in P_x(\alpha,\beta)$ such that $u$ lies between $x$ and $v$,
then we say that $P_x(\alpha,\beta)$ \emph{meets $u$ before $v$}.
Suppose that $\alpha\in \overline{\varphi}(x)$ and $\beta,\gamma\in \varphi(x)$. An $\mathit{(\alpha,\beta)-(\beta,\gamma)}$
\emph{swap at $x$} consists of two operations: first swap colors on $P_x(\alpha,\beta, \varphi)$ to get an edge $k$-coloring $\varphi'$, and then swap
colors on $P_x(\beta,\gamma, \varphi')$.
By convention, an $(\alpha,\alpha)$-swap at $x$ does nothing at $x$.
Suppose the current color of an edge $uv$ of $G$
is $\alpha$; then the notation $\mathit{uv: \alpha\rightarrow \beta}$ means to recolor the edge $uv$ using the color $\beta$. Recall that $\overline{\varphi}(x)$ is the set of colors not present
at $x$.
If $|\overline{\varphi}(x)|=1$, we will also use $\overline{\varphi}(x)$ to denote the color that is missing at $x$.
Let $\alpha, \beta, \gamma, \tau,\eta\in[1,k]$.
We will use a matrix with two rows to denote a sequence of operations taken on $\varphi$.
Each entry in the first row represents a path or a sequence of vertices.
Each entry in the second row indicates the action taken on the object above it.
We require the operations to be taken to follow the ``left to right'' order as they appear in
the matrix.
For example, the matrix below indicates
three sequential operations taken on the graph based
on the coloring from the previous step:
\[
\begin{bmatrix}
P_{[a, b]}(\alpha, \beta) & rs & ab \\
\alpha/\beta & \gamma \rightarrow \tau & \eta
\end{bmatrix}.
\]
\begin{enumerate}[Step 1]
\item Swap colors on the $(\alpha,\beta)$-subchain $P_{[a, b]}(\alpha, \beta,\varphi) $.
\item Do $rs: \gamma \rightarrow \tau $.
\item Color the edge $ab$ using color $\eta$.
\end{enumerate}
Let
$T$ be a sequence of vertices and edges of $G$. We denote by \emph{$V(T)$}
the set of vertices from $V(G)$ that are contained in $T$, and by
\emph{$E(T)$} the set of edges from $E(G)$ that are contained in $T$.
\subsection{Multifan}
Let $G$ be a graph, $e=rs_1\in E(G)$ and $\varphi\in \mathcal{C}^k(G-e)$ for some integer $k\ge 0$.
A \emph{multifan} centered at $r$ with respect to $e$ and $\varphi$
is a sequence $F_\varphi(r,s_1:s_p):=(r, rs_1, s_1, rs_2, s_2, \ldots, rs_p, s_p)$ with $p\geq 1$ consisting of distinct vertices $r, s_1,s_2, \ldots , s_p$ and distinct edges $rs_1, rs_2,\ldots, rs_p$ satisfying the following condition:
\begin{enumerate}[(F1)]
\item For every edge $rs_i$ with $i\in [2,p]$, there exists $j\in [1,i-1]$ such that
$\varphi(rs_i)\in \overline{\varphi}(s_j)$.
\end{enumerate}
We will simply denote a multifan $F_\varphi(r,s_1: s_{p})$ by $F$ if
$\varphi$ and the vertices and edges in $F_\varphi(r,s_1: s_{p})$ are clear.
Let $F_\varphi(r,s_1: s_{p})$ be a multifan.
By its definition, for any $p^*\in [1,p]$, $F_\varphi(r,s_1: s_{p^*})$
is a multifan.
The following result regarding a multifan can be found in \cite[Theorem~2.1]{StiebSTF-Book}.
\begin{LEM}
\label{thm:vizing-fan1}
Let $G$ be a Class 2 graph and $F_\varphi(r,s_1:s_p)$ be a multifan with respect to a critical edge $e=rs_1$ and a coloring $\varphi\in \mathcal{C}^\Delta(G-e)$. Then the following statements hold.
\begin{enumerate}[(a)]
\item $V(F)$ is $\varphi$-elementary. \label{thm:vizing-fan1a}
\item Let $\alpha\in \overline{\varphi}(r)$. Then for every $i\in [1,p]$ and $\beta\in \overline{\varphi}(s_i)$, $r$
and $s_i$ are $(\alpha,\beta)$-linked with respect to $\varphi$. \label{thm:vizing-fan1b}
\end{enumerate}
\end{LEM}
Let $F_\varphi(r,s_1:s_p)$ be a multifan. We call $s_{\ell_1},s_{\ell_2}, \ldots, s_{\ell_k}$, a subsequence of $s_1:s_p$, an \emph{$\alpha$}-sequence with respect to $\varphi$ and $F$ if the following holds:
$$
\varphi(rs_{\ell_1})= \alpha\in \overline{\varphi}(s_1), \quad \varphi(rs_{\ell_i})\in \overline{\varphi}(s_{\ell_{i-1}}), \quad i\in [2,k].
$$
A vertex in an $\alpha$-sequence is called an \emph{$\alpha$-inducing vertex} with respect to $\varphi$ and $F$, and a missing color at an $\alpha$-inducing vertex is called an \emph{$\alpha$-inducing color}. For convenience, $\alpha$ itself is also an $\alpha$-inducing color. We say $\beta$ is {\it induced by} $\alpha$ if $\beta$ is $\alpha$-inducing. By Lemma~\ref{thm:vizing-fan1} (a) and the definition of multifan, each color in $\overline{\varphi}(V(F))$ is induced by a unique color in $\overline{\varphi}(s_1)$. Also if $\alpha_1,\alpha_2$ are two distinct colors in $\overline{\varphi}(s_1)$, then an $\alpha_1$-sequence is disjoint from an $\alpha_2$-sequence. For two distinct $\alpha$-inducing colors $\beta$ and $\delta$, we write {$\mathit \delta \prec \beta$} if there exists an $\alpha$-sequence $s_{\ell_1},s_{\ell_2}, \ldots, s_{\ell_k}$ such that $\delta\in\overline{\varphi}(s_{\ell_i})$, $\beta\in\overline{\varphi}(s_{\ell_j})$ and $i<j$. For convenience, $\alpha\prec\beta$ for any $\alpha$-inducing color $\beta\not=\alpha$.
As a consequence of Lemma~\ref{thm:vizing-fan1} (a), we have the following properties for a multifan.
A proof of the result can be found in~\cite[Lemma 3.2]{HZ}.
\begin{LEM}
\label{thm:vizing-fan2}
Let $G$ be a Class 2 graph and $F_\varphi(r,s_1:s_p)$ be a multifan with respect to a critical edge $e=rs_1$ and a coloring $\varphi\in \mathcal{C}^\Delta(G-e)$. For two colors $\delta\in \overline{\varphi}(s_i)$ and $\lambda\in \overline{\varphi}(s_j)$ with $i,j\in [1,p]$ and $i\ne j$, the following statements hold.
\begin{enumerate}[(a)]
\item If $\delta$ and $\lambda$ are induced by different colors, then $s_i$ and $s_j$ are $(\delta, \lambda)$-linked with respect to $\varphi$.
\label{thm:vizing-fan2-a}
\item If $\delta$ and $\lambda$ are induced by the same color, $\delta\prec\lambda$, and $s_i$ and $s_j$ are $(\delta, \lambda)$-unlinked with respect to $\varphi$,
then $r\in P_{s_j}(\lambda, \delta, \varphi)$. \label{thm:vizing-fan2-b}
\end{enumerate}
\end{LEM}
\subsection{Kierstead path}
Let $G$ be a graph, $e=v_0v_1\in E(G)$, and $\varphi\in \mathcal{C}^k(G-e)$ for some integer $k\ge 0$.
A \emph{Kierstead path} with respect to $e$ and $\varphi$
is a sequence $K=(v_0, v_0v_1, v_1, v_1v_2, v_2, \ldots, v_{p-1}, v_{p-1}v_p, v_p)$ with $p\geq 1$ consisting of distinct vertices $v_0,v_1, \ldots , v_p$ and distinct edges $v_0v_1, v_1v_2,\ldots, v_{p-1}v_p$ satisfying the following condition:
\begin{enumerate}[(K1)]
\item For every edge $v_{i-1}v_i$ with $i\in [2,p]$, there exists $j\in [1,i-1]$ such that
$\varphi(v_{i-1}v_i)\in \overline{\varphi}(v_j)$.
\end{enumerate}
Clearly a Kierstead path with at most 3 vertices is a multifan. We consider Kierstead paths with $4$ vertices. The result below was proved as Theorem 3.3 in~\cite{StiebSTF-Book}.
\begin{LEM}[]\label{Lemma:kierstead path1}
Let $G$ be a Class 2 graph,
$e=v_0v_1\in E(G)$ be a critical edge, and $\varphi\in \mathcal{C}^\Delta(G-e)$. If $K=(v_0, v_0v_1, v_1, v_1v_2, v_2, v_2v_3, v_3)$ is a Kierstead path with respect to $e$
and $\varphi$, then the following statements hold.
\begin{enumerate}[(a)]
\item If $\min\{d_G(v_2), d_G(v_3)\}<\Delta$, then $V(K)$ is $\varphi$-elementary.
\item $|\overline{\varphi}(v_3)\cap (\overline{\varphi}(v_0)\cup \overline{\varphi}(v_1))|\le 1$.
\end{enumerate}
\end{LEM}
\section{Proof of Theorems~\ref{thm:vertex spliting} and~\ref{thm:Delta-critical}}
We will prove Theorems~\ref{thm:vertex spliting} and~\ref{thm:Delta-critical} based on the following lemmas, whose proof will be
presented in the last section.
General properties of Kierstead paths with 5 vertices were proved by the first author
of this paper~\cite{K5}. Here we stress only one of the cases.
\begin{LEM}\label{lem:5vexKpathsettingup}
Let $G$ be a Class 2 graph, $ab\in E(G)$ be a critical edge, $\varphi\in \mathcal{C}^\Delta(G-ab)$, and
$K=(a, ab,b,bu,u, us, s, st, t)$ be a Kierstead path with respect to $ab$ and $\varphi$.
If $|\overline{\varphi}(t)\cap(\overline{\varphi}(a)\cup \overline{\varphi}(b))|\ge 3$, then the following hold:
\begin{enumerate}[(a)]
\item
There exists $\varphi^*\in \mathcal{C}^\Delta(G-ab)$
satisfying the following properties:
\begin{enumerate}[(i)]
\item $\varphi^*(bu)\in \overline{\varphi}^*(a)\cap \overline{\varphi}^*(t)$,
\item $\varphi^*(us)\in \overline{\varphi}^*(b)\cap \overline{\varphi}^*(t)$, and
\item $\varphi^*(st)\in \overline{\varphi}^*(a)$.
\end{enumerate}
\item $d_G(b)=d_G(u)=\Delta$.
\end{enumerate}
Figure~\ref{f3} shows a Kierstead path with the properties described in (a).
\end{LEM}
\begin{figure}[!htb]
\begin{center}
\begin{tikzpicture}[scale=1,rotate=90]
{\tikzstyle{every node}=[draw ,circle,fill=white, minimum size=0.5cm,
inner sep=0pt]
\draw[blue,thick](0,-2) node (a) {$a$};
\draw[blue,thick](0,-3.5) node (b) {$b$};
\draw [blue,thick](0, -5) node (u) {$u$};
\draw [blue,thick](0, -6.5) node (s) {$s$};
\draw [blue,thick](0, -8) node (t) {$t$};
}
\path[draw,thick,black!60!green]
(b) edge node[name=la,pos=0.7, above] {\color{blue} $\alpha$\quad\quad} (u)
(u) edge node[name=la,pos=0.7, above] {\color{blue}$\beta$\quad\quad} (s)
(s) edge node[name=la,pos=0.7,above] {\color{blue} $\gamma$\quad\quad} (t);
\draw[dashed, red, line width=0.5mm] (b)--++(140:1cm);
\draw[dashed, red, line width=0.5mm] (t)--++(200:1cm);
\draw[dashed, red, line width=0.5mm] (t)--++(340:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(40:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(140:1cm);
\draw[blue] (-0.5, -3.4 ) node {$\beta$};
\draw[blue] (0.6, -1.8) node {$\gamma$};
\draw[blue] (-0.6, -1.8) node {$\alpha$};
\draw[blue] (0.6, -8.) node {$\beta$};
\draw[blue] (-0.6, -8.) node {$\alpha$};
{\tikzstyle{every node}=[draw, red ,circle,fill=red, minimum size=0.05cm,
inner sep=0pt]
\draw(-0.2,-1.4) node (f1) {};
\draw(0,-1.4) node (f1) {};
\draw(0.2, -1.4) node (f1) {};
}
\end{tikzpicture}
\end{center}
\caption{Colors on a Kierstead path of 5 vertices}
\label{f3}
\end{figure}
\begin{LEM}\label{lem:5vexKpathsettingup2}
Let $G$ be a Class 2 graph, $ab\in E(G)$ be a critical edge, $\varphi\in \mathcal{C}^\Delta(G-ab)$, and
$K=(a, ab,b,bu,u, us, s, st, t)$ and $K^*=(a,ab,b,bu,u,ux,x)$ be two Kierstead paths with respect to $ab$ and $\varphi$, where $x\not\in V(K)$.
If $|\overline{\varphi}(t)\cap(\overline{\varphi}(a)\cup \overline{\varphi}(b))|\ge 4$ and $\overline{\varphi}(x) \subseteq \overline{\varphi}(a)\cup \overline{\varphi}(b)$, then $d_G(x)=\Delta$.
\end{LEM}
A {\it short-kite} $H$ is a graph with
$$V(H)=\{a,b,c,u,x,y\} \quad \text{and}\quad E(H)=\{ab,ac,bu,cu,ux,uy\}.$$
The lemma below reveals some properties of a short-kite with specified colors on its edges.
\begin{LEM}\label{lemma:class2-with-fullDpair2}
Let $G$ be a Class 2 graph,
$H\subseteq G$
be a short-kite with $V(H)=\{a,b,c,u,x,y\}$, and let $\varphi\in \mathcal{C}^\Delta(G-ab)$.
Suppose $$K=(a,ab,b,bu,u,ux,x) \quad \text{and} \quad K^*=(b,ab,a,ac,c,cu,u,uy)$$
are two Kierstead paths with respect to $ab$ and $\varphi$.
If $\overline{\varphi}(x)\cup \overline{\varphi}(y)\subseteq \overline{\varphi}(a)\cup \overline{\varphi}(b)$, then $\max\{d_G(x),d_G(y)\}=\Delta $.
\end{LEM}
A {\it kite} $H$ is a graph with
$$V(H)=\{a,b,c,u,s_1,s_2,t_1,t_2\} \quad \text{and}\quad E(H)=\{ab,ac,bu,cu,us_1,us_2, s_1t_1,s_2t_2\}.$$
The lemma below reveals some properties of a kite with specified colors on its edges.
\begin{LEM}\label{lem:kite}
Let $G$ be a Class 2 graph, $H\subseteq G$
be a kite with $V(H)=\{a,b,c,u,s_1,s_2,t_1,t_2\}$, and let $\varphi\in \mathcal{C}^\Delta(G-ab)$.
Suppose $$K=(a,ab,b,bu,u,us_1, s_1,s_1t_1,t_1) \quad \text{and} \quad K^*=(b,ab,a,ac,c,cu,u,us_2, s_2,s_2t_2,t_2)$$
are two Kierstead paths with respect to $ab$ and $\varphi$.
If $\varphi(s_1t_1)=\varphi(s_2t_2)$,
then $|\overline{\varphi}(t_1)\cap \overline{\varphi}(t_2) \cap ( \overline{\varphi}(a)\cup \overline{\varphi}(b))|\le 4$.
\end{LEM}
Let $G$ be a $\Delta$-critical graph, $ab\in E(G)$, and $\varphi \in \mathcal{C}^\Delta(G-ab)$.
A {\it fork} $H$ with respect to $\varphi$ is a graph with
$$V(H)=\{a,b,u,s_1,s_2,t_1,t_2\} \quad \text{and}\quad E(H)=\{ab,bu,us_1,us_2, s_1t_1,s_2t_2\}$$
such that $\varphi(bu)\in \overline{\varphi}(a)$, $\varphi(us_1), \varphi(us_2) \in \overline{\varphi}(a)\cup \overline{\varphi}(b)$, and $\varphi(s_1t_1)\in (\overline{\varphi}(a)\cup \overline{\varphi}(b))\cap \overline{\varphi}(t_2) $
and $\varphi(s_2t_2)\in (\overline{\varphi}(a)\cup \overline{\varphi}(b))\cap \overline{\varphi}(t_1)$.
The fork was defined in~\cite{av2}, and
it was shown in~\cite[Proposition B]{av2} that a fork cannot exist in a $\Delta$-critical graph if
the degree sum of $a$, $t_1$, and $t_2$ is small.
\begin{LEM}\label{lem:fork}
Let $G$ be a $\Delta$-critical graph, $ab\in E(G)$, and $\{u,s_1,s_2, t_1,t_2\}\subseteq V(G)$.
If $\Delta\ge d_G(a)+d_G(t_1)+d_G(t_2)+1$,
then for any $\varphi\in \mathcal{C}^\Delta(G-ab)$, $G$ does not contain a fork on $\{a,b,u,s_1,s_2,t_1,t_2\}$ with respect to $\varphi$.
\end{LEM}
We need the following two additional results to prove Theorem~\ref{thm:Delta-critical}.
Since every vertex at which a given color $\alpha$ is present
is saturated by the matching that consists of all edges colored $\alpha$ in $G$, we have the
following result.
\begin{LEM}[Parity Lemma]
Let $G$ be an $n$-vertex graph and $\varphi\in \mathcal{C}^\Delta(G)$.
Then for any color $\alpha\in [1,\Delta]$,
$|\{v\in V(G): \alpha\in \overline{\varphi}(v)\}| \equiv n \pmod{2}$.
\end{LEM}
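To spell out the counting behind the Parity Lemma: writing $M_\alpha$ for the (possibly empty) matching formed by the edges of $G$ colored $\alpha$, every vertex at which $\alpha$ is present is saturated by $M_\alpha$, so
\[
|\{v\in V(G): \alpha\in \overline{\varphi}(v)\}| = n-2|M_\alpha| \equiv n \pmod{2}.
\]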
\begin{LEM}\label{lemma:class2-with-fullDpair}
If $G$ is an $n$-vertex Class 2 graph with a full-deficiency pair $(a,b)$ such that $ab$ is a critical edge of $G$,
then $G$ satisfies the following properties.
\begin{enumerate}[$(i)$]
\item For every $x\in (N_G(a)\cup N_G(b))\setminus\{a,b\}$, $d_G(x)=\Delta$;
\item For every $x\in V(G)\setminus\{a,b\}$, if $\dist_G(x, \{a,b\})=2$, then
$d_G(x)\ge \Delta-1$.
Furthermore,
if $d_G(a)<\Delta$ and $d_G(b)<\Delta$, then $d_G(x)=\Delta$;
\item For
every $x\in V(G)\setminus\{a,b\}$, if $d_G(x)\ge n-|N_G(b)\cup N_G(a)|$,
then $d_G(x)\ge \Delta-1$.
Furthermore,
if $d_G(a)<\Delta$ and $d_G(b)<\Delta$, then $d_G(x)=\Delta$;
\item If there exists $x\in V(G)\setminus \{a,b\}$ such that $d_G(x)<\Delta$, then there exists
$y\in V(G)\setminus \{a,b,x\}$ such that $d_G(y)<\Delta$.
%
\end{enumerate}
\end{LEM}
\textbf{Proof}.\quad We let
$\varphi\in \mathcal{C}^\Delta(G-ab)$ and
$$
F=(b, ba,a)
$$
be the multifan with respect to $ab$ and $\varphi$.
By Lemma~\ref{thm:vizing-fan1}~\eqref{thm:vizing-fan1a},
\begin{equation}\label{pbarFa}
|\overline{\varphi}(F)|= 2\Delta+2-(d_{G}(a)+d_{G}(b))= 2\Delta+2-(\Delta+2)=\Delta.
\end{equation}
By Lemma~\ref{thm:vizing-fan1}, for every $\varphi'\in \mathcal{C}^\Delta(G-ab)$, $\{a,b\}$ is $\varphi'$-elementary and for every
$i\in \overline{\varphi}'(a)$ and $j\in \overline{\varphi}'(b)$, $a$ and $b$ are $(i,j)$-linked with respect to $\varphi'$. We will use this fact very often.
Since all the $\Delta$ colors appear in $\overline{\varphi}(F)$, each of $N_G(a)\cup \{b\}$ and $ N_G(b)\cup \{a\}$
is the vertex set of a multifan with respect to $ab$ and $\varphi$.
By Lemma~\ref{thm:vizing-fan1}~\eqref{thm:vizing-fan1a} and~\eqref{pbarFa},
we know that for every $x\in (N_G(a)\cup N_G(b))\setminus\{a,b\}$, $d_G(x)=\Delta$.
This proves (i).
For (ii), let $x\in V(G)\setminus\{a,b\}$ such that $\dist_G(x, \{a,b\})=2$.
We assume that $\dist_G(x, b)=2$ and let
$u\in (N_G(b)\setminus\{a\})\cap N_G(x)$.
Then by~\eqref{pbarFa}, $K=(a,ab, b, bu, u, ux, x)$ is a Kierstead path with respect to $ab$
and $\varphi$. By~\eqref{pbarFa} and Lemma~\ref{Lemma:kierstead path1} (b), it follows that
$d_G(x)\ge \Delta-1$. If $d_G(a)<\Delta$ and $d_G(b)<\Delta$, by~\eqref{pbarFa} and Lemma~\ref{Lemma:kierstead path1} (a), we get $d_G(x)=\Delta$.
For (iii), let $x\in V(G)\setminus\{a,b\}$ such that $d_G(x)\ge n-|N_G(b)\cup N_G(a)|$. By (i), we may assume that
$x\not\in (N_G(a)\cup N_G(b))\setminus\{a,b\}$. Thus $d_G(x)\ge n-|N_G(b)\cup N_G(a)|$ implies that
there exists $u\in (N_G(a)\cup N_G(b))\cap N_G(x)$.
Therefore, $\dist_G(x,\{a,b\})=2$. Now Statement (ii) yields the conclusion.
Statement (iv) is a consequence of~\eqref{pbarFa} and the Parity Lemma.
\qed
\begin{COR}\label{cor:no2D-1}
Let $G$ be an $n$-vertex Class 2 graph with a full-deficiency pair $(a,b)$ such that $ab$ is a critical edge of $G$.
If $\Delta\ge \frac{3(n-1)}{4}$, then there exists at most one vertex $x\in V(G)\setminus \{a,b\}$ such that $d_G(x)=\Delta-1$.
\end{COR}
\textbf{Proof}.\quad Assume to the contrary that there exist distinct $x,y\in V(G)\setminus \{a,b\}$
such that $d_G(x)=d_G(y)=\Delta-1$. By Lemma~\ref{lemma:class2-with-fullDpair} (i), $x,y\not\in (N_G(a)\cup N_G(b))\setminus\{a,b\}$.
By Lemma~\ref{lemma:class2-with-fullDpair} (iii), we may assume that $d_G(b)=\Delta$.
Thus $d_G(a)=2$ as $d_G(a)+d_G(b)=\Delta+2$.
Let $c$ be the other neighbor of $a$ in $G$. Since $(a,c)$ is a full-deficiency pair of $G$ as well, we may assume $x,y\not\in N_G(c)$.
Since $d_{G}(b)=d_{G}(c)=\Delta$ and $d_{G}(x)=d_{G}(y)=\Delta-1$, we get $|N_{G}(b)\cap N_{G}(c)|\ge \frac{n}{2}-1$ and $|N_{G}(x)\cap N_{G}(y)|\ge \frac{n}{2}-2$.
Since $b,c,x,y\not\in N_{G}(b)\cap N_{G}(c)$ and $b,c,x,y\not\in N_{G}(x)\cap N_{G}(y)$, we get
$|N_{G}(b)\cap N_{G}(c)\cap N_{G}(x)\cap N_G(y)|\ge 1$.
Let
$u\in N_{G}(b)\cap N_{G}(c)\cap N_{G}(x)\cap N_{G}(y)$, $H$ be
the short-kite with $V(H)=\{a,b,c,u,x,y\}$, and let $\varphi\in \mathcal{C}^\Delta(G-ab)$.
As $\{a,b\}$
is $\varphi$-elementary, $|\overline{\varphi}(a)\cup \overline{\varphi}(b)|=2\Delta+2-(d_{G}(a)+d_{G}(b))= \Delta$ and so $\overline{\varphi}(a)\cup \overline{\varphi}(b)=[1,\Delta]$. Thus
$K=(a,ab,b,bu,u,ux,x)$ and $ K^*=(b,ab,a,ac,c,cu,u,uy)$
are two Kierstead paths with respect to $ab$ and $\varphi$, and $\overline{\varphi}(x)\cup \overline{\varphi}(y)\subseteq \overline{\varphi}(a)\cup \overline{\varphi}(b)$.
However, $d_{G}(x)=d_{G}(y)=\Delta-1$, contradicting Lemma~\ref{lemma:class2-with-fullDpair2}.
\qed
\proof[\bf Proof of Theorem~\ref{thm:vertex spliting}]
Since $G$ is overfull, $G$ is Class 2.
We show that every edge of $G$ is critical.
Suppose to the contrary that there exists $xy\in E(G)$ such that $xy$
is not a critical edge of $G$. Let
$$
G^*=G-xy.
$$
Then $\chi'(G^*)=\Delta+1$.
Since $ab$ is a critical edge of $G$, $ab \ne xy$.
Also, since $ab$ is a critical edge of $G$, and any $\Delta$-coloring of $G-ab$
gives a $\Delta$-coloring of $G^*-ab$, $ab$ is also a critical edge of $G^*$.
Since $d_{G^*}(x)=d_{G^*}(y)=\Delta-1$, we reach a contradiction to Corollary~\ref{cor:no2D-1}.
\qed
\proof[\bf Proof of Theorem~\ref{thm:Delta-critical}] Let $(a,b)$ be a full-deficiency pair of $G$.
It suffices to show that $d_G(v)=\Delta$ for every $v\in V(G)\setminus\{a,b\}$.
To see this, let $G$ be a $\Delta$-critical graph with a
full-deficiency pair $(a,b)$ such that $d_G(v)=\Delta$ for every $v\in V(G)\setminus\{a,b\}$.
Let $\varphi\in \mathcal{C}^\Delta(G-ab)$.
Since $\overline{\varphi}(a)\cap \overline{\varphi}(b)=\emptyset$ and $d_G(a)+d_G(b)=\Delta+2$,
$\varphi(a)\cap \varphi(b)=\emptyset$.
Thus, identifying $a$ and $b$ in $G$ gives a $\Delta$-coloring of a $\Delta$-regular multigraph $G^*$.
This implies that $|V(G^*)|=n-1$ is even.
So $n$ is odd. Consequently, $G$ is overfull.
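Explicitly, since $d_G(a)+d_G(b)=\Delta+2$, every vertex of $V(G)\setminus\{a,b\}$ has degree $\Delta$, and $n$ is odd,
\[
|E(G)|=\frac{(n-2)\Delta+(\Delta+2)}{2}=\frac{(n-1)\Delta}{2}+1=\Delta\left\lfloor\frac{n}{2}\right\rfloor+1>\Delta\left\lfloor\frac{n}{2}\right\rfloor,
\]
which is the overfull condition.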
Thus, for the sake of contradiction, we assume
that there exists $x\in V(G)\setminus\{a,b\}$
such that $d_G(x)<\Delta$. By Lemma~\ref{lemma:class2-with-fullDpair} (iv),
there exists $y\in V(G)\setminus\{a,b,x\}$ such that $d_G(y)<\Delta$.
Furthermore,
by Lemma~\ref{lemma:class2-with-fullDpair} (iii) and Corollary~\ref{cor:no2D-1}, there exists at most one vertex $x\in V(G)\setminus\{a,b\}$
such that $d_G(x)=\Delta-1$, and for every other vertex $y\in V(G)\setminus\{a,b,x\}$, if $d_G(y)<\Delta$,
then $d_G(y)<n-|N_G(b)\cup N_G(a)|$.
This gives a vertex $t\in V(G)\setminus\{a,b\}$ such that $d_G(t)<n-|N_G(b)\cup N_G(a)|$.
Let $\varphi\in \mathcal{C}^\Delta(G-ab)$ and
$$
F=(b, ba,a)
$$
be the multifan with respect to $ab$ and $\varphi$.
By Lemma~\ref{thm:vizing-fan1}~\eqref{thm:vizing-fan1a},
\begin{equation}\label{pbarF4}
|\overline{\varphi}(F)|= 2\Delta+2-(d_{G}(a)+d_{G}(b))=\Delta.
\end{equation}
Assume, without loss of generality, that $d_G(b)\ge d_G(a)$. Then $d_G(b)\ge \frac{3(n-1)}{8}+1$
as $d_G(a)+d_G(b)=\Delta+2\ge \frac{3(n-1)}{4}+2$.
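Spelled out, the bound on $d_G(b)$ is obtained by averaging:
\[
d_G(b)\ge \frac{d_G(a)+d_G(b)}{2}=\frac{\Delta+2}{2}\ge \frac{1}{2}\left(\frac{3(n-1)}{4}+2\right)=\frac{3(n-1)}{8}+1.
\]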
By Lemma~\ref{lemma:class2-with-fullDpair} (i) and (ii), we may assume that $\dist_G(t, \{a,b\})\ge 3$.
Since $\Delta\ge \frac{3(n-1)}{4}$, for any $s\in N_G(t)$ with $d_G(s)\ge \Delta-1$ (such $s$ exists as $t$ is adjacent to at least two $\Delta$-neighbors by VAL), we conclude that
there exists $u\in N_G(b)\cap N_G(s)$. Now by~\eqref{pbarF4}, $K=(a,ab,b,bu,u,us, s, st, t)$
is a Kierstead path with respect to $ab$ and $\varphi$.
This implies that $d_G(b)=d_G(u)=\Delta$ by Lemma~\ref{lem:5vexKpathsettingup} (b).
Thus $d_G(a)=2$. We let $c$ be the other $\Delta$-neighbor of $a$.
As $d_G(a)=2$ and $ab\in E(G)$, $|N_G(b)\cup N_G(a)| \ge \Delta+1> \frac{3n}{4}$.
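The estimate above is obtained as follows: since $b\in N_G(a)\setminus N_G(b)$ and $d_G(b)=\Delta$,
\[
|N_G(b)\cup N_G(a)|\ge |N_G(b)|+1=\Delta+1\ge \frac{3(n-1)}{4}+1=\frac{3n+1}{4}>\frac{3n}{4}.
\]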
Since $G$ is $\Delta$-critical, VAL implies that for
every $s\in N_G(t)$, $d_G(s)\ge \Delta+2-d_G(t)\ge \Delta+2 +|N_G(b)\cup N_G(a)|-n\ge n-|N_G(b)\cup N_G(a)|$.
Thus, by Lemma~\ref{lemma:class2-with-fullDpair} (iii) and Corollary~\ref{cor:no2D-1}, there exists at most
one vertex $s\in N_G(t)$ such that $d_G(s)<\Delta$. In this case, $d_G(s)=\Delta-1$.
Next, we claim that
\begin{equation}\label{claimx}
\text{for any $x\in V(G)\setminus \{a,b\}$, $d_G(x)<\Delta \Rightarrow d_G(x)<n-|N_G(b)\cup N_G(a)|\le n-\Delta-1$.}
\end{equation}
Assume to the contrary that $d_G(x)\ge n-|N_G(b)\cup N_G(a)|$. By Lemma~\ref{lemma:class2-with-fullDpair} (iii), we have $d_G(x)=\Delta-1$.
Again, as $\Delta\ge \frac{3(n-1)}{4}$ and every vertex in $(N_G(a)\cup N_G(b)\cup N_G(t)) \setminus\{a,b\}$ has degree at least $\Delta-1$, for any $s\in N_G(t)$, we conclude that
there exists $u\in N_G(b)\cap N_G(s)\cap N_G(x)$.
Now by~\eqref{pbarF4}, $K=(a,ab,b,bu,u,us, s, st, t)$ and $K^*=(a,ab,b,bu,u,ux,x)$
are two Kierstead paths with respect to $ab$ and $\varphi$.
Clearly, $\overline{\varphi}(t)\subseteq \overline{\varphi}(a)\cup \overline{\varphi}(b)$ and $\overline{\varphi}(x)\subseteq \overline{\varphi}(a)\cup \overline{\varphi}(b)$,
and $|\overline{\varphi}(t)|\ge \Delta-(n-\Delta-1)=2\Delta-n+1\ge \frac{n-1}{2}$.
Since $n\ge |V(K)\cup \{x\}|=6$, $\Delta \ge \frac{3(n-1)}{4}$ implies that $\Delta \ge 4$.
As $\{b,s,t,x\}\cap N_G(b)=\emptyset$, we see that $n\ge 4+4=8 $.
Hence, $|\overline{\varphi}(t)|\ge \lceil \frac{n-1}{2} \rceil\ge 4$, a contradiction to Lemma~\ref{lem:5vexKpathsettingup2}.
By Lemma~\ref{lemma:class2-with-fullDpair} (iii) and (iv) and the conclusion in~\eqref{claimx},
we let $t_1,t_2\in V(G)\setminus \{a,b\}$ be two distinct vertices
of degree less than $n-\Delta$.
Let $s_1\in N_G(t_1)$ and $s_2\in N_G(t_2)$ be any two distinct vertices.
Since $G$ is $\Delta$-critical, VAL implies that for
every $s_i\in N_G(t_i)$, $d_G(s_i)\ge 2\Delta-n>n-\Delta$.
Thus, by Lemma~\ref{lemma:class2-with-fullDpair} (iii) and (iv) and the conclusion in~\eqref{claimx}, $d_G(s_1)=d_G(s_2)=\Delta$.
Thus, since $b,c,s_1,s_2\not\in N_G(b)\cap N_G(c)$ and $b,c,s_1,s_2\not\in N_G(s_1)\cap N_G(s_2)$,
\begin{eqnarray*}
|N_G(b)\cap N_G(c)\cap N_G(s_1)\cap N_G(s_2)|&\ge& |N_G(s_1)\cap N_G(s_2)|-(n-|N_G(b)\cap N_G(c)|-4) \\
&\ge & |N_G(s_1)\cap N_G(s_2)|+|N_G(b)\cap N_G(c)|+4-n\\
&\ge & 2\Delta-n+2\Delta-n+4-n\ge 1,
\end{eqnarray*}
as $\Delta\ge \frac{3(n-1)}{4}$. Let $u\in N_G(b)\cap N_G(c)\cap N_G(s_1)\cap N_G(s_2)$. Then $H$ with $V(H)=\{a,b,c,u,s_1,s_2,t_1,t_2\}$
is a kite. By~\eqref{pbarF4}, both $$K=(a,ab,b,bu,u,us_1, s_1,s_1t_1,t_1) \quad \text{and} \quad K^*=(b,ab,a,ac,c,cu,u,us_2, s_2,s_2t_2,t_2)$$
are Kierstead paths with respect to $ab$ and $\varphi$.
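For completeness, the last step in the displayed chain of inequalities above uses only $\Delta\ge \frac{3(n-1)}{4}$:
\[
2\Delta-n+2\Delta-n+4-n=4\Delta-3n+4\ge 3(n-1)-3n+4=1.
\]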
Let
$$
A=\{t\in V(G)\setminus\{a,b\}: d_G(t)<n-\Delta \}.
$$
We consider two cases below.
\medskip
{\bf \noindent Case 1: there exist two distinct $t_1,t_2\in A$
such that $\varphi(t_1)\cap \varphi(t_2)\ne \emptyset$.}
\medskip
In this case, we choose $s_1\in N_G(t_1)$ and $s_2\in N_G(t_2)$ such that $\varphi(s_1t_1)=\varphi(s_2t_2)$.
Let $\Gamma= \overline{\varphi}(t_1)\cap \overline{\varphi}(t_2)$. Since $\overline{\varphi}(a)\cup \overline{\varphi}(b)=[1,\Delta]$, $\Gamma\subseteq \overline{\varphi}(a)\cup \overline{\varphi}(b)$.
By~\eqref{pbarF4} and the assumption of this case,
$|\Gamma|\ge \Delta-2(n-\Delta-2)=3\Delta-2n+4\ge \frac{n-1}{4}+2$.
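The last inequality is again a direct consequence of $\Delta\ge \frac{3(n-1)}{4}$:
\[
3\Delta-2n+4\ge \frac{9(n-1)}{4}-2n+4=\frac{n+7}{4}=\frac{n-1}{4}+2.
\]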
Since $\Delta\ge \frac{3(n-1)}{4}\ge \frac{3}{4}(|V(H)|-1)$, $\Delta\ge 6$.
Since also $s_1,s_2,t_1,t_2\not\in N_G(b)$,
we have $n\ge |V(H)|+3\ge 11$.
Thus, $|\Gamma|\ge \lceil \frac{n-1}{4}\rceil+2\ge 5$,
contradicting Lemma~\ref{lem:kite}.
\medskip
{\bf \noindent Case 2: for each two distinct $t_1,t_2\in A$, it holds that $\varphi(t_1)\cap \varphi(t_2)=\emptyset$.}
\medskip
By~\eqref{pbarF4} and the assumption of this case, we see that $H^*$ with $V(H^*)=\{a,b,u,s_1,s_2,t_1,t_2\}$
is a fork. However, by~\eqref{claimx}, $d_G(a)+d_G(t_1)+d_G(t_2)\le 2+2(n-\Delta-2)=2n-2\Delta-2 <\Delta$,
as $\Delta\ge \frac{3(n-1)}{4}$, contradicting Lemma~\ref{lem:fork}.
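For the arithmetic in the final step: $2n-2\Delta-2<\Delta$ is equivalent to $2n-2<3\Delta$, and indeed
\[
3\Delta\ge \frac{9(n-1)}{4}\ge \frac{8(n-1)}{4}=2n-2,
\]
with the second inequality strict whenever $n>1$.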
The proof is now complete.
\qed
\section{Proof of Lemmas~\ref{lem:5vexKpathsettingup} to~\ref{lem:kite}}
\begin{LEM1}
Let $G$ be a Class 2 graph, $ab\in E(G)$ be a critical edge, $\varphi\in \mathcal{C}^\Delta(G-ab)$, and
$K=(a, ab,b,bu,u, us, s, st, t)$ be a Kierstead path with respect to $ab$ and $\varphi$.
If $|\overline{\varphi}(t)\cap(\overline{\varphi}(a)\cup \overline{\varphi}(b))|\ge 3$, then the following hold:
\begin{enumerate}[(a)]
\item
There exists $\varphi^*\in \mathcal{C}^\Delta(G-ab)$
satisfying the following properties:
\begin{enumerate}[(i)]
\item $\varphi^*(bu)\in \overline{\varphi}^*(a)\cap \overline{\varphi}^*(t)$,
\item $\varphi^*(us)\in \overline{\varphi}^*(b)\cap \overline{\varphi}^*(t)$, and
\item $\varphi^*(st)\in \overline{\varphi}^*(a)$.
\end{enumerate}
\item $d_G(b)=d_G(u)=\Delta$.
\end{enumerate}
\end{LEM1}
\textbf{Proof}.\quad By Lemma~\ref{thm:vizing-fan1}, for every $\varphi'\in \mathcal{C}^\Delta(G-ab)$, $\{a,b\}$ is $\varphi'$-elementary and for every
$i\in \overline{\varphi}'(a)$ and $j\in \overline{\varphi}'(b)$, $a$ and $b$ are $(i,j)$-linked with respect to $\varphi'$.
Let $\Gamma=\overline{\varphi}(t)\cap(\overline{\varphi}(a)\cup \overline{\varphi}(b))$, and
$\alpha,\beta \in \Gamma$.
If $\alpha, \beta \in \overline{\varphi}(a)$, then we let $\lambda\in \overline{\varphi}(b)$, and do a $(\beta,\lambda)$-swap
at $b$. If $\alpha, \beta \in \overline{\varphi}(b)$, then we let $\lambda\in \overline{\varphi}(a)$, and do a $(\beta,\lambda)$-swap
at $a$. Therefore,
we may assume that $$\alpha\in \overline{\varphi}(a) \quad \text{and} \quad \beta\in \overline{\varphi}(b).$$
If $\varphi(bu)=\delta\ne \alpha$, then we do an $(\alpha,\delta)$-swap at $t$,
and rename the color $\delta$ as $\alpha$ and vice versa.
Thus we may assume $$\varphi(bu)=\alpha.$$
Assume first that $\varphi(us)\in \overline{\varphi}(b)$. We do a $(\beta,\varphi(us))$-swap at $t$
and, still denoting the resulting coloring by $\varphi$, we see that $\varphi(us)\in \overline{\varphi}(b)\cap \overline{\varphi}(t)$.
By permuting the names of the colors, we let $\varphi(us)=\beta$.
Let $\varphi(st)=\gamma$. Since $\alpha,\beta\in \overline{\varphi}(t)$, $\gamma\ne \alpha, \beta$.
If $\gamma\in \overline{\varphi}(a)$, we are done. So we assume $\gamma\in \overline{\varphi}(b)\cup \overline{\varphi}(u)$.
We color $ab$ by $\alpha$ and uncolor $bu$. Denote this resulting coloring by $\varphi'$.
Then $K'=(b,bu, u, us,s,st,t)$ is a Kierstead path with respect to $bu$ and $\varphi'$.
However, $\alpha,\beta\in \overline{\varphi}'(t)\cap (\overline{\varphi}'(b)\cup \overline{\varphi}'(u))$, showing a contradiction to
Lemma~\ref{Lemma:kierstead path1} (b).
Thus we let $\varphi(us)=\delta\in \overline{\varphi}(a)$. Then $\delta\ne \beta$
by Lemma~\ref{thm:vizing-fan1}~\eqref{thm:vizing-fan1a}.
Let $\varphi(st)=\gamma$. Clearly, $\gamma\ne \alpha,\beta, \delta$. We have either $\gamma\in \overline{\varphi}(a)$
or $\gamma\in \overline{\varphi}(b)\cup \overline{\varphi}(u)$.
We consider three cases below.
{\bf \noindent Case 1. $\gamma\in \overline{\varphi}(b)$.}
If $u\in P_a(\beta,\delta)=P_b(\beta,\delta)$, we do a $(\beta,\delta)$-swap at $t$.
Since $a$ and $b$ are $(\delta,\gamma)$-linked and $u\in P_t(\delta,\gamma)$,
we do a $(\delta,\gamma)$-swap at $a$. This gives a desired coloring $\varphi^*$.
If $u\not\in P_a(\beta,\delta)=P_b(\beta,\delta)$, we first do a $(\beta,\delta)$-swap at $a$ and then a $(\beta,\gamma)$-swap at $a$. Again this gives a desired coloring $\varphi^*$.
\smallskip
{\bf \noindent Case 2. $\gamma\in \overline{\varphi}(u)$.}
If $\delta\in \Gamma$, since $b$ and $u$
are $(\beta,\gamma)$-linked by Lemma~\ref{thm:vizing-fan1}~\eqref{thm:vizing-fan1b}, we do $(\beta,\gamma)$-swap at $t$.
Now $u\in P_t(\delta,\beta)$, we do a $(\beta,\delta)$-swap at $a$.
This gives a desired coloring $\varphi^*$.
Thus we assume $\delta\not\in \Gamma$. Since $b$ and $u$
are $(\beta,\gamma)$-linked by Lemma~\ref{thm:vizing-fan1}~\eqref{thm:vizing-fan1b},
and $a$ and $u$ are $(\delta,\gamma)$-linked by Lemma~\ref{thm:vizing-fan2}~\eqref{thm:vizing-fan2-a},
we do $(\beta,\gamma)-(\gamma,\delta)$-swaps at $t$.
Finally, since $u\in P_t(\beta,\delta)$, we do a $(\beta,\delta)$-swap at $a$.
This gives a desired coloring $\varphi^*$.
\medskip
{\bf \noindent Case 3. $\gamma\in \overline{\varphi}(a)$.}
If $\delta\in \Gamma$, we do a $(\beta,\gamma)$-swap at $t$ and then a $(\beta,\delta)$-swap at $a$ to get a desired coloring $\varphi^*$.
Thus we assume $\delta\not\in \Gamma$. Let $\tau\in \Gamma\setminus \{\alpha,\beta\}$.
If $\tau\in \overline{\varphi}(u)$, since $a$ and $u$ are $(\delta,\gamma)$-linked by Lemma~\ref{thm:vizing-fan2}~\eqref{thm:vizing-fan2-a}, we do a $(\tau,\delta)$-swap at $t$. This reduces to the previous case, in which
$\delta\in \Gamma$. Next we assume $\tau\in \overline{\varphi}(b)$.
It is clear that $u\in P_a(\tau,\delta)=P_b(\tau,\delta)$, as otherwise, a $(\tau,\delta)$-swap at $a$ gives a desired coloring.
Thus we do a $(\tau,\delta)$-swap at $t$, reducing to the previous case, in which
$\delta\in \Gamma$.
Now we assume $\tau\in \overline{\varphi}(a)$.
If $u\not\in P_a(\beta,\delta)$, we do a $(\beta,\delta)$-swap at $a$. Since $a$ and $b$ are $(\alpha,\delta)$-linked and $u\in P_a(\alpha,\delta)$, we do an $(\alpha,\delta)$ swap at $t$.
Now since $u\in P_t(\gamma,\delta)$, we do a
$(\gamma,\delta)$-swap at $a$, and do $(\beta,\gamma)-(\gamma,\alpha)$-swaps at $t$.
Since $a$ and $b$ are $(\tau,\gamma)$-linked, we do a
$(\tau,\gamma)$-swap at $t$, and then a $(\beta,\gamma)$
-swap at $a$. Now since $u\in P_t(\beta, \delta)$,
we do a $(\beta, \delta)$-swap at $a$. This gives a desired coloring.
Thus, we assume $u\in P_a(\beta,\delta)$. We do a $(\beta,\delta)$-swap at $t$, and then a $(\tau,\beta)$-swap at $t$.
Next we do a $(\beta,\gamma)$-swap at $a$ and then a $(\gamma,\delta)$-swap at $a$.
This gives a desired coloring, completing the proof of (a).
For statement (b), let $\varphi^*\in \mathcal{C}^\Delta(G-ab)$ be a coloring satisfying (i)--(iii).
Let $\alpha,\gamma\in \overline{\varphi}^*(a)$, $\beta\in \overline{\varphi}^*(b)$ with $\alpha,\beta \in \overline{\varphi}^*(t)$ such that
$$
\varphi^*(bu)=\alpha, \quad \varphi^*(us)=\beta,\quad \text{and} \quad \varphi^*(st)=\gamma.
$$
Let $\tau\in \overline{\varphi}^*(t)\setminus\{\alpha,\beta\}$.
Suppose to the contrary first that $d_G(b)\le \Delta-1$. Let $\lambda\in \overline{\varphi}^*(b)\setminus\{\beta\}$.
We do $(\tau,\lambda)-(\lambda,\gamma)$-swaps at $t$.
Now we color $ab$ by $\alpha$ and uncolor $bu$ to get a coloring $\varphi'$.
Then $K'=(b,bu, u, us, s, st, t)$ is a Kierstead path with respect to $bu$ and $\varphi'$. However, $\alpha,\beta \in \overline{\varphi}'(t)\cap (\overline{\varphi}'(b)\cup \overline{\varphi}'(u))$, contradicting Lemma~\ref{Lemma:kierstead path1} (b).
Assume then that $ d_G(b)=\Delta$ and $d_G(u)\le \Delta-1$. Let $\lambda\in \overline{\varphi}^*(u)$.
Since $(a,ab,b,bu,u)$ is a multifan, $\lambda\not\in \{\alpha,\beta,\gamma\}$.
Since $u$ and $b$ are $(\beta,\lambda)$-linked and $u$ and $a$ are
$(\gamma,\lambda)$-linked by Lemma~\ref{thm:vizing-fan2}~\eqref{thm:vizing-fan2-b},
we do $(\beta,\lambda)-(\lambda,\gamma)$-swap(s) at $t$.
Now we color $ab$ by $\alpha$ and uncolor $bu$ to get a coloring $\varphi'$.
Then $K'=(b,bu, u, us, s, st, t)$ is a Kierstead path with respect to $bu$ and $\varphi'$. However, $\alpha \in \overline{\varphi}'(t)\cap \overline{\varphi}'(u)$, contradicting Lemma~\ref{Lemma:kierstead path1} (a), since $d_G(u)<\Delta$.
\qed
\begin{LEM2}
Let $G$ be a Class 2 graph, $ab\in E(G)$ be a critical edge, $\varphi\in \mathcal{C}^\Delta(G-ab)$, and
$K=(a, ab,b,bu,u, us, s, st, t)$ and $K^*=(a,ab,b,bu,u,ux,x)$ be two Kierstead paths with respect to $ab$ and $\varphi$, where $x\not\in V(K)$.
If $|\overline{\varphi}(t)\cap(\overline{\varphi}(a)\cup \overline{\varphi}(b))|\ge 4$ and $\overline{\varphi}(x) \subseteq \overline{\varphi}(a)\cup \overline{\varphi}(b)$, then $d_G(x)=\Delta$.
\end{LEM2}
\textbf{Proof}.\quad
Assume to the contrary that $d_G(x)\le \Delta-1$.
Since $\overline{\varphi}(x) \subseteq \overline{\varphi}(a)\cup \overline{\varphi}(b)$,
Lemma~\ref{Lemma:kierstead path1} (b) gives that $d_G(x)= \Delta-1$.
By Lemma~\ref{lem:5vexKpathsettingup}, $d_G(b)=d_G(u)=\Delta$, and we may assume
that $\overline{\varphi}(b)=\beta$,
$\varphi(bu)=\alpha$, $\varphi(us)=\beta$, $\varphi(st)=\gamma$, $\alpha,\gamma \in \overline{\varphi}(a)$,
and $\alpha,\beta\in \overline{\varphi}(t)$.
In the following, when we swap colors, we always make sure that the
colors on the edges $bu$ and $us$ are unchanged. The color on the edge $st$
might be changed, but the new color will still be a color from $\overline{\varphi}(a)$.
This is guaranteed by using the elementary fact that for every coloring $\varphi'\in \mathcal{C}^\Delta(G-ab)$,
$a$ and $b$ are $(i,j)$-linked for every $i\in \overline{\varphi}'(a)$ and every $j\in \overline{\varphi}'(b)$.
We use this fact very often without explicitly mentioning it.
Let $\varphi(ux)=\delta$ and $\overline{\varphi}(x)=\tau$. We first claim that if $\delta\ne \gamma$, then we may assume
$\delta\in \overline{\varphi}(t)$.
Clearly, $\delta\ne \alpha,\beta$, and
since $K^*$ is a Kierstead path and $\overline{\varphi}(b)=\beta$,
we have $\varphi(ux)=\delta\in \overline{\varphi}(a)$.
Let $\Gamma=\overline{\varphi}(t)\cap(\overline{\varphi}(a)\cup \overline{\varphi}(b))$, and let
$\{\alpha,\beta,\eta,\lambda\}\subseteq \Gamma$.
Suppose that $\delta\not\in \overline{\varphi}(t)$.
We do $(\beta,\gamma)-(\gamma,\eta)$-swaps at $b$. Denote the new coloring by $\varphi'$.
If $ux\not\in P_b(\eta,\delta)$, based on $\varphi'$,
we do an $(\eta,\delta)$-swap at $b$
and then do an $(\alpha,\delta)$-swap
at $t$. If $\tau=\gamma$,
we do $(\delta,\gamma)-(\gamma,\eta)$-swaps at $b$ and then do an $(\alpha,\eta)$-swap at $t$.
Finally we do $(\eta,\lambda)-(\lambda,\gamma)-(\gamma,\beta)$-swaps at $b$.
Thus we assume $\tau \ne \gamma$. Clearly, $\tau \ne \alpha$.
If $\tau=\beta$, we simply do a $(\beta,\delta)$-swap at $b$, then $(\beta,\gamma)-(\gamma,\eta)$-swaps at $b$, an $(\alpha,\eta)$-swap at $t$, and finally $(\eta,\lambda)-(\lambda,\gamma)-(\gamma,\beta)$-swaps at $b$.
Thus, $\tau\ne \alpha,\beta$. We do $(\delta,\tau)-(\tau,\eta)$-swaps at $b$, an $(\alpha,\eta)$-swap at $t$,
and finally do $(\eta,\lambda)-(\lambda,\gamma)-(\gamma,\beta)$-swaps at $b$.
Thus, we assume that $ux\in P_b(\eta,\delta)$. Based on $\varphi'$, we do an $(\eta,\delta)$-swap at $t$
and then $(\eta,\lambda)-(\lambda,\gamma)-(\gamma,\beta)$-swaps at $b$.
After the operations above, we have $\varphi(bu)=\alpha$, $\varphi(us)=\beta$, $\varphi(st)=\gamma$, $\varphi(ux)=\delta$ and $\overline{\varphi}(x)=\tau$, and $\alpha,\beta,\delta,\lambda \in \Gamma$.
\smallskip
{\noindent \bf Case 1: $\overline{\varphi}(x)=\gamma$}.
\smallskip
Recall that $\alpha,\beta,\delta\in \Gamma$.
We color $ab$ by $\alpha$, recolor $bu$ by $\beta$,
and uncolor $us$.
Note that $u$ and $t$
are $(\alpha,\gamma)$-linked, as otherwise an $(\alpha,\gamma)$-swap
at $u$ and a $(\beta,\gamma)$-swap at $s$ gives a coloring $\varphi'$
such that $\gamma\in \overline{\varphi}'(u)\cap \overline{\varphi}'(s)$.
Thus we do an $(\alpha,\gamma)$-swap at both $a$ and $x$, recolor $ux$ by $\alpha$, and then a $(\beta,\delta)$-swap
at both $x$ and $a$. It is clear that $ux\in P_t(\alpha,\gamma)$ and $P_t(\alpha,\gamma)$
meets $u$ before $x$. We now do the following operations:
\[
\begin{bmatrix}
P_{[t,u]}(\alpha,\gamma)& ux & bu & ab \\
\alpha/ \gamma& \alpha \rightarrow \beta & \beta \rightarrow \gamma&
\gamma \rightarrow \beta \end{bmatrix}.
\]
Based on the coloring above, we do $(\alpha,\beta)-(\beta,\delta)$-swaps at both $x$ and $a$,
and then an $(\alpha,\delta)$-swap at $x$. Denote the new coloring by $\varphi'$.
Now we do an $(\alpha,\beta)$-swap at $t$
and color $us$ by $\alpha$, giving a $\Delta$-coloring of $G$, a contradiction.
\smallskip
{\noindent \bf Case 2: $\overline{\varphi}(x)\ne \gamma$ and $\varphi(ux)\ne \varphi(st)=\gamma$}.
\smallskip
Recall that $\alpha,\beta,\delta\in \Gamma$ and $|\Gamma|\ge 4$.
Let $\{\alpha,\beta,\delta,\lambda\}\subseteq \Gamma$. We show that there is a coloring $\varphi'$
such that $\{\alpha,\delta\}\subseteq \overline{\varphi}'(t)$ and $\overline{\varphi}'(x)=\beta$.
Since we already have $\{\alpha,\delta\}\subseteq \overline{\varphi}(t)$, we assume that $\overline{\varphi}(x)=\tau\ne \beta$.
Thus $\tau\in \overline{\varphi}(a)$.
If $\tau=\alpha$, we simply do an $(\alpha,\beta)$-swap at $x$.
Therefore, $\tau\ne \alpha$. It is possible that $\tau =\lambda$, but we deal with this
together with the case that $\tau\ne \lambda$.
We first do $(\beta,\gamma)-(\gamma,\lambda)-(\lambda,\tau)$-swaps at $b$, then we do an $(\alpha,\tau)$-swap at both $x$ and $t$. Now we do $(\tau,\gamma)-(\gamma,\beta)$-swaps at $b$,
and an $(\alpha,\beta)$-swap at both $x$ and $t$.
We now derive a contradiction based on the coloring of $E(K)\cup E(K^*)$, as shown in Figure~\ref{pic1}.
\begin{figure}[!htb]
\begin{center}
\begin{tikzpicture}[scale=1,rotate=90]
{\tikzstyle{every node}=[draw ,circle,fill=white, minimum size=0.5cm,
inner sep=0pt]
\draw[blue,thick](0,-2) node (a) {$a$};
\draw[blue,thick](-1,-3) node (b) {$b$};
\draw [blue,thick](0, -5) node (u) {$u$};
\draw [blue,thick](-1, -7) node (x) {$s$};
\draw [blue,thick](-1, -9) node (t) {$t$};
\draw [blue,thick](1, -7) node (y) {$x$};
}
\path[draw,thick,black!60!green]
(b) edge node[name=la,pos=0.5, above] {\color{blue} $\alpha$\quad\quad} (u)
(u) edge node[name=la,pos=0.6, above] {\color{blue}$\beta$\quad} (x)
(u) edge node[name=la,pos=0.4,above] {\color{blue} \quad$\delta$} (y)
(x) edge node[name=la,pos=0.4,above] {\color{blue} \quad$\gamma$} (t);
\draw[dashed, red, line width=0.5mm] (b)--++(140:1cm);
\draw[dashed, red, line width=0.5mm] (t)--++(200:1cm);
\draw[dashed, red, line width=0.5mm] (t)--++(340:1cm);
\draw[dashed, red, line width=0.5mm] (y)--++(340:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(40:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(90:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(140:1cm);
\draw[blue] (-1.5, -9.5) node {$\alpha$};
\draw[blue] (-.5, -9.5) node {$\delta$};
\draw[blue] (1.5, -7.5) node {$\beta$};
\draw[blue] (-1.2, -2.5) node {$\beta$};
\draw[blue] (0.6, -1.8) node {$\delta$};
\draw[blue] (0.2, -1.3) node {$\gamma$};
\draw[blue] (-0.6, -1.8) node {$\alpha$};
\end{tikzpicture}
\end{center}
\caption{Colors on the edges of $K$ and $K^*$}
\label{pic1}
\end{figure}
We color $ab$ by $\alpha$, recolor $bu$ by $\beta$,
and uncolor $us$. We then do an $(\alpha,\beta)$-swap at both $x$ and $t$,
and an $(\alpha,\delta)$-swap at $x$.
Now since $u$ and $s$ are $(\beta,\delta)$-linked, we do a $(\beta,\delta)$-swap at both $x$
and $a$.
Since $u$ and $t$ are $(\delta,\gamma)$-linked, we do a $(\delta,\gamma)$-swap at $a$.
Finally, we do $(\beta,\delta)-(\delta,\alpha)$-swaps at $x$.
Now $P_u(\alpha,\beta)=uba$, so $u$
and $s$ are $(\alpha,\beta)$-unlinked. We do an $(\alpha,\beta)$-swap at $u$
and color $us$ by $\beta$.
This gives a $\Delta$-coloring of $G$, showing a contradiction.
\smallskip
{\noindent \bf Case 3: $\overline{\varphi}(x)=\tau\ne \gamma$ and $\varphi(ux)=\varphi(st)=\gamma$}.
\smallskip
We again let $\Gamma=\overline{\varphi}(t)\cap(\overline{\varphi}(a)\cup \overline{\varphi}(b))$, and let
$\{\alpha,\beta,\lambda\}\subseteq \Gamma$. We show that this case can be
converted to Case 2.
We first claim that $\overline{\varphi}(x)=\tau\ne \beta$.
As otherwise, we first do an $(\alpha,\beta)$-swap at $x$,
and then a $(\beta,\gamma)$-swap at $b$.
Now $P_b(\gamma,\alpha)=bux$, contradicting the fact that
$a$ and $b$ are $(\alpha,\gamma)$-linked.
Next, we claim that $\tau\ne \alpha$.
As otherwise, we simply do a $(\beta,\gamma)$-swap at $b$
and reach the same contradiction as above.
Thus, $\tau\ne \alpha,\beta$.
We do a $(\beta,\gamma)$-swap at $b$
and an $(\alpha,\gamma)$-swap at $t$.
Now do a $(\tau,\gamma)$-swap at both $x$ and $t$, a $(\gamma,\lambda)$-swap at $b$,
a $(\lambda,\alpha)$-swap at $t$, and
finally a
$(\beta,\lambda)$-swap at $b$. Let the new coloring be $\varphi'$.
We see that $\varphi'(st)=\lambda \ne \varphi'(ux)=\tau$.
We verify that it still holds that $|\overline{\varphi}'(t)\cap(\overline{\varphi}'(a)\cup \overline{\varphi}'(b))|\ge 4$.
If $\tau \in \Gamma$, we now have $\alpha,\beta,\gamma,\tau \in \overline{\varphi}'(t)\cap(\overline{\varphi}'(a)\cup \overline{\varphi}'(b))$.
If $\tau \not\in \Gamma$, then $(\Gamma\setminus\{\lambda\} )\cup \{\tau\} \subseteq \overline{\varphi}'(t)\cap(\overline{\varphi}'(a)\cup \overline{\varphi}'(b))$.
\qed
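The $(\alpha,\beta)$-swaps performed throughout these proofs are Kempe-chain interchanges. As a small self-contained sketch (our own illustration, not from the paper; all function names are ours), the following swaps two colors along the chain starting at a vertex that is assumed to miss one of the two colors, with the proper edge coloring stored as a dictionary keyed by vertex pairs:

```python
def kempe_swap(edge_color, v, a, b):
    """Interchange colors a and b along the (a, b)-chain starting at v.

    edge_color maps frozenset({u, w}) -> color.  We assume the coloring is
    proper and that v misses color b (as in the proofs, swaps are applied
    at such vertices), so the chain is a path beginning with the a-colored
    edge at v, if any.
    """
    def next_edge(u, color, avoid):
        # The edge of the given color at u, other than `avoid`, if any.
        for e, c in edge_color.items():
            if c == color and u in e and e != avoid:
                (w,) = e - {u}
                return e, w
        return None, None

    cur, want, prev = v, a, None
    while True:
        e, nxt = next_edge(cur, want, prev)
        if e is None:
            break
        edge_color[e] = b if want == a else a   # flip this chain edge
        cur, want, prev = nxt, b if want == a else a, e
```

Swapping at the endpoint of a path whose edges alternate $\alpha,\beta,\alpha$ recolors them $\beta,\alpha,\beta$, leaving the coloring proper.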
\begin{LEM3}
Let $G$ be a Class 2 graph,
$H\subseteq G$
be a short-kite with $V(H)=\{a,b,c,u,x,y\}$, and let $\varphi\in \mathcal{C}^\Delta(G-ab)$.
Suppose $$K=(a,ab,b,bu,u,ux,x) \quad \text{and} \quad K^*=(b,ab,a,ac,c,cu,u,uy)$$
are two Kierstead paths with respect to $ab$ and $\varphi$.
If $\overline{\varphi}(x)\cup \overline{\varphi}(y)\subseteq \overline{\varphi}(a)\cup \overline{\varphi}(b)$, then $\max\{d_G(x),d_G(y)\}=\Delta $.
\end{LEM3}
\textbf{Proof}.\quad
Assume to the contrary that $\max\{d_G(x),d_G(y)\}\le \Delta-1$.
Since both $K$ and $K^*$ are Kierstead paths and
$\overline{\varphi}(x)\cup \overline{\varphi}(y)\subseteq \overline{\varphi}(a)\cup \overline{\varphi}(b)$, Lemma~\ref{Lemma:kierstead path1} (a)
and (b) imply that $d_G(b)=d_G(u)=\Delta$
and $d_G(x)=d_G(y)=\Delta-1$.
Let $\overline{\varphi}(b)=\{1\}$. Then $\varphi(ac)=1$.
We may assume $\varphi(uy)=1$, for the following reason.
Since $a$ and $b$ are $(1,\alpha)$-linked for every $\alpha\in \overline{\varphi}(y)\subseteq \overline{\varphi}(a)\cup \overline{\varphi}(b)$,
we may assume $\overline{\varphi}(y)=1$. Then a $(1,\varphi(uy))$-swap at $y$
gives a coloring, call it still $\varphi$, such that $\varphi(uy)=1$.
We consider now two cases.
\smallskip
{\bf \noindent Case 1: $\overline{\varphi}(x)=\overline{\varphi}(y)$.}
\smallskip
Let $\varphi(ux)=\gamma$ and $\overline{\varphi}(x)=\overline{\varphi}(y)=\eta$.
As $\varphi(uy)=\overline{\varphi}(b)=1$, $1\not\in \{\gamma, \eta\}$.
As both $K$ and $K^*$ are Kierstead paths and
$\overline{\varphi}(x)\cup \overline{\varphi}(y)\subseteq \overline{\varphi}(a)\cup \overline{\varphi}(b)$, $\gamma,\eta \in \overline{\varphi}(a)$.
Denote by $P_u(1,\gamma)$ the $(1,\gamma)$-subchain starting at $u$ that does not include the edge $ux$.
\begin{CLA}\label{cla:claim1}
We may assume that $P_u(1,\gamma)$ ends at $x$, at some vertex $z\in V(G)\setminus\{a,b,c,u,x,y\}$, or, after passing through $c$, at $a$.
\end{CLA}
\textbf{Proof}.\quad Note that $P_a(1,\gamma)=P_b(1,\gamma)$. If $u\not\in P_a(1,\gamma)$, then the $(1,\gamma)$-chain containing $u$ is a cycle or a path with endvertices contained in $V(G)\setminus\{a,b,c,u,x,y\}$. Thus
$P_u(1,\gamma)$ ends at $x$ or some $z\in V(G)\setminus\{a,b,c,u,x,y\}$. Hence we assume $u\in P_a(1,\gamma)$.
As a consequence, $P_u(1,\gamma)$ ends at either $b$ or $a$.
If $P_u(1,\gamma)$ ends at $b$, we color $ab$ by 1, uncolor $ac$, and exchange the vertex labels $b$ and $c$.
This gives an edge $\Delta$-coloring of $G-ab$ such that $P_u(1,\gamma)$ ends at $a$.
Thus, if $u\in P_a(1,\gamma)$, we may always assume that $P_u(1,\gamma)$ ends at $a$.
\qed
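Claim~\ref{cla:claim1} uses the standard fact that the edges carrying one of two fixed colors span a subgraph of maximum degree two, so every two-colored component is a path or an even cycle. A minimal sketch of this bookkeeping (ours, with hypothetical names):

```python
from collections import defaultdict

def two_color_components(edge_color, a, b):
    """Vertex sets of the components of the subgraph of edges colored a or b.

    edge_color maps frozenset({u, w}) -> color.  In a proper edge coloring
    each vertex meets at most one a-edge and one b-edge, so every component
    returned here induces a path or an even cycle.
    """
    adj = defaultdict(list)
    for e, c in edge_color.items():
        if c in (a, b):
            u, w = tuple(e)
            adj[u].append(w)
            adj[w].append(u)
    seen, comps = set(), []
    for s in list(adj):
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u])
        seen |= comp
        comps.append(comp)
    return comps
```

A chain through a given vertex is then the component containing it, and its endvertices are exactly the vertices of degree at most one in the two-colored subgraph.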
Let $\varphi(bu)=\delta$. Again, $\delta\in \overline{\varphi}(a)$.
Figure~\ref{f1} depicts the colors and missing colors on these specified edges and vertices, respectively. Clearly, $\delta\ne 1, \gamma$.
Since $a$ and $b$ are $(1,\delta)$-linked with respect to $\varphi$, $\eta\ne \delta$.
Thus, $\gamma, \delta$ and $\eta$ are pairwise distinct.
\begin{figure}[!htb]
\begin{center}
\begin{tikzpicture}[scale=1]
{\tikzstyle{every node}=[draw ,circle,fill=white, minimum size=0.5cm,
inner sep=0pt]
\draw[blue,thick](0,-2) node (a) {$a$};
\draw[blue,thick](-1,-3) node (b) {$b$};
\draw[blue,thick](1,-3) node (c) {$c$};
\draw [blue,thick](0, -5) node (u) {$u$};
\draw [blue,thick](-1, -7) node (x) {$x$};
\draw [blue,thick](1, -7) node (y) {$y$};
}
\path[draw,thick,black!60!green]
(a) edge node[name=la,pos=0.7, above] {\color{blue} $1$} (c)
(c) edge node[name=la,pos=0.5, below] {\color{blue}} (u)
(b) edge node[name=la,pos=0.5, below] {\color{blue} $\delta$\quad\quad} (u)
(u) edge node[name=la,pos=0.6, above] {\color{blue}$\gamma$\quad\quad} (x)
(u) edge node[name=la,pos=0.6,above] {\color{blue} \quad$1$} (y);
\draw[dashed, red, line width=0.5mm] (b)--++(140:1cm);
\draw[dashed, red, line width=0.5mm] (x)--++(200:1cm);
\draw[dashed, red, line width=0.5mm] (y)--++(340:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(40:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(90:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(140:1cm);
\draw[blue] (-1.5, -7.4) node {$\eta$};
\draw[blue] (1.5, -7.4) node {$\eta$};
\draw[blue] (-1.2, -2.5) node {$1$};
\draw[blue] (0.6, -1.8) node {$\delta$};
\draw[blue] (-0.6, -1.8) node {$\gamma$};
\draw[blue] (-0.15, -1.5) node {$\eta$};
\end{tikzpicture}
\end{center}
\caption{Colors on the edges connecting $x$ and $y$ to $u$}
\label{f1}
\end{figure}
\begin{CLA}\label{cla:claim2}
It holds that $ub\in P_y(\eta,\delta)$ and $P_y(\eta,\delta)$ meets $u$ before $b$.
\end{CLA}
\textbf{Proof}.\quad Let $\varphi'$ be obtained from $\varphi$ by coloring $ab$ by $\delta$ and uncoloring $bu$. Note that $\overline{\varphi}'(b)=1, \overline{\varphi}'(u)=\delta$ and $\varphi'(uy)=1$.
Thus $F^*=(u, ub,b, uy, y)$ is a multifan and so $u$ and $y$ are $(\eta, \delta)$-linked by Lemma~\ref{thm:vizing-fan1}~\eqref{thm:vizing-fan1b}. By uncoloring $ab$ and coloring
$bu$ by $\delta$, we get back the original coloring $\varphi$. Therefore, under the coloring $\varphi$, $u\in P_y(\eta, \delta)$ and $P_y(\eta,\delta)$ meets $u$ before $b$.
\qed
We apply the following operations based on $\varphi$:
\[
\begin{bmatrix}
ux& P_{[u,y]}(\eta,\delta) & ub & P_u(1,\gamma) & ab\\
\gamma\rightarrow \eta& \delta/\eta & \delta \rightarrow 1&
1/\gamma& \delta\end{bmatrix}.
\]
By Claim~\ref{cla:claim1}, $P_u(1,\gamma)$ does not end at $b$.
In any case, the above operations give
an edge $\Delta$-coloring of $G$. This contradicts the earlier assumption that $\chi'(G)=\Delta+1$.
\medskip
{\bf \noindent Case 2: $\overline{\varphi}(x)\ne \overline{\varphi}(y)$.}
Let
$$ \varphi(bu)=\alpha, \quad \varphi(ux)=\beta, \quad \overline{\varphi}(x)=\tau, \quad \text{and} \quad \overline{\varphi}(y)=\gamma.$$
As $\varphi(uy)=\overline{\varphi}(b)=1$, $1\not\in \{\alpha,\beta,\gamma\}$.
Also, since $a$ and $b$ are $(1,\alpha)$-linked, $\gamma\ne \alpha$.
Since both $K$ and $K^*$ are Kierstead paths and
$\overline{\varphi}(x)\cup \overline{\varphi}(y)\subseteq \overline{\varphi}(a)\cup \overline{\varphi}(b)$, we have $\alpha,\beta,\tau,\gamma\in\overline{\varphi}(a)$.
\begin{CLA}
We may assume $\overline{\varphi}(x)=\tau=1$.
\end{CLA}
\textbf{Proof}.\quad If $uy\not\in P_x(1,\tau)$, we simply do a $(1,\tau)$-swap at $x$.
Thus, we assume that $u\in P_x(1,\tau)$. We first do a $(1,\tau)$-swap at $b$, then an $(\alpha,\tau)$-swap at $x$. Then we do a $(\gamma,\tau)$-swap at $b$. Finally, a $(1,\gamma)$-swap at $b$ and a $(1,\alpha)$-swap at $x$
give the desired coloring.
\qed
Since $ux\in P_x(1,\beta)$, and $a$ and $b$ are $(1,\beta)$-linked, we do a $(1,\beta)$-swap at $b$.
Now we color $ab$ by $\alpha$, recolor $bu$ by $\beta$
and uncolor $ux$, see Figure~\ref{f2} for a depiction.
\begin{figure}[!htb]
\begin{center}
\begin{tikzpicture}[scale=1]
{\tikzstyle{every node}=[draw ,circle,fill=white, minimum size=0.5cm,
inner sep=0pt]
\draw[blue,thick](0,-2) node (a) {$a$};
\draw[blue,thick](-1,-3) node (b) {$b$};
\draw[blue,thick](1,-3) node (c) {$c$};
\draw [blue,thick](0, -5) node (u) {$u$};
\draw [blue,thick](-1, -7) node (x) {$x$};
\draw [blue,thick](1, -7) node (y) {$y$};
}
\path[draw,thick,black!60!green]
(a) edge node[name=la,pos=0.7, above] {\color{blue} $\beta$} (c)
(c) edge node[name=la,pos=0.5, below] {\color{blue}} (u)
(b) edge node[name=la,pos=0.5, below] {\color{blue} $\alpha$\quad\quad} (u)
(u) edge node[name=la,pos=0.6, above] {\color{blue}$\beta$\quad\quad} (x)
(u) edge node[name=la,pos=0.6,above] {\color{blue} \quad$1$} (y);
\draw[dashed, red, line width=0.5mm] (b)--++(140:1cm);
\draw[dashed, red, line width=0.5mm] (x)--++(200:1cm);
\draw[dashed, red, line width=0.5mm] (y)--++(340:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(40:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(90:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(140:1cm);
\draw[blue] (-1.5, -7.4) node {$1$};
\draw[blue] (1.5, -7.4) node {$\gamma$};
\draw[blue] (-1.2, -2.5) node {$\beta$};
\draw[blue] (0.6, -1.8) node {$\gamma$};
\draw[blue] (-0.6, -1.8) node {$1$};
\draw[blue] (-0.15, -1.5) node {$\alpha$};
\draw [orange,thick](3.5, -5) node (t) {$\Rightarrow$};
\begin{scope}[shift={(7,0)}]
{\tikzstyle{every node}=[draw ,circle,fill=white, minimum size=0.5cm,
inner sep=0pt]
\draw[blue,thick](0,-2) node (a) {$a$};
\draw[blue,thick](-1,-3) node (b) {$b$};
\draw[blue,thick](1,-3) node (c) {$c$};
\draw [blue,thick](0, -5) node (u) {$u$};
\draw [blue,thick](-1, -7) node (x) {$x$};
\draw [blue,thick](1, -7) node (y) {$y$};
}
\path[draw,thick,black!60!green]
(a) edge node[name=la,pos=0.7, above] {\color{blue} $\beta$} (c)
(a) edge node[name=la,pos=0.7, above] {\color{red} $\alpha$} (b)
(c) edge node[name=la,pos=0.5, below] {\color{blue}} (u)
(b) edge node[name=la,pos=0.5, below] {\color{red} $\beta$\quad\quad} (u)
(u) edge node[name=la,pos=0.6,above] {\color{blue} \quad$1$} (y);
\draw[dashed, red, line width=0.5mm] (u)--++(200:1cm);
\draw[dashed, red, line width=0.5mm] (x)--++(200:1cm);
\draw[dashed, red, line width=0.5mm] (x)--++(340:1cm);
\draw[dashed, red, line width=0.5mm] (y)--++(340:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(40:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(140:1cm);
\draw[blue] (-1.5, -7.4) node {$1$};
\draw[blue] (-0.5, -7.4) node {$\beta$};
\draw[blue] (1.5, -7.4) node {$\gamma$};
\draw[blue] (0.6, -1.8) node {$\gamma$};
\draw[blue] (-0.6, -1.8) node {$1$};
\draw[blue] (-0.5, -5.5) node {$\alpha$};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Colors on the edges connecting $x$ and $y$ to $u$}
\label{f2}
\end{figure}
Note that
$$
F^*=(u, ux,x,uy,y), \quad K^*=(x,xu,u,ub,b,ba,a)
$$
are, respectively, a multifan and
a Kierstead path.
By Lemma~\ref{thm:vizing-fan1}~\eqref{thm:vizing-fan1b}, $u$ and $y$ are $(\alpha,\gamma)$-linked, and $u$
and $x$ are $(\alpha,\beta)$-linked and $(1,\alpha)$-linked.
Thus, we do an $(\alpha,\gamma)$-swap at $a$, an $(\alpha,\beta)$-swap at $a$, a $(1,\alpha)$-swap at $a$,
and then an $(\alpha,\gamma)$-swap at $a$. Now $P_u(\alpha,\beta)=uba$, contradicting
Lemma~\ref{thm:vizing-fan1}~\eqref{thm:vizing-fan1b} that $u$ and $x$ are $(\alpha,\beta)$-linked.
The proof is now complete.
\qed
\begin{LEM4}
Let $G$ be a Class 2 graph, $H\subseteq G$
be a kite with $V(H)=\{a,b,c,u,s_1,s_2,t_1,t_2\}$, and let $\varphi\in \mathcal{C}^\Delta(G-ab)$.
Suppose $$K=(a,ab,b,bu,u,us_1, s_1,s_1t_1,t_1) \quad \text{and} \quad K^*=(b,ab,a,ac,c,cu,u,us_2, s_2,s_2t_2,t_2)$$
are two Kierstead paths with respect to $ab$ and $\varphi$.
If $\varphi(s_1t_1)=\varphi(s_2t_2)$,
then $|\overline{\varphi}(t_1)\cap \overline{\varphi}(t_2) \cap ( \overline{\varphi}(a)\cup \overline{\varphi}(b))|\le 4$.
\end{LEM4}
\textbf{Proof}.\quad Let $\Gamma=\overline{\varphi}(t_1)\cap \overline{\varphi}(t_2) \cap ( \overline{\varphi}(a)\cup \overline{\varphi}(b))$.
Assume to the contrary that $|\Gamma|\ge 5$. By considering $K$ and applying Lemma~\ref{lem:5vexKpathsettingup}, we conclude that $d_G(b)=d_G(u)=\Delta$.
We show that there exists $\varphi^*\in \mathcal{C}^\Delta(G-ab)$ satisfying the following properties:
\begin{enumerate}[(i)]
\item $\varphi^*(bu), \varphi^*(cu), \varphi^*(us_2)\in \overline{\varphi}^*(a)\cap \overline{\varphi}^*(t_1)\cap \overline{\varphi}^*(t_2)$,
\item $\varphi^*(us_1)\in \overline{\varphi}^*(b)\cap \overline{\varphi}^*(t_1)\cap \overline{\varphi}^*(t_2)$, and
\item $\varphi^*(s_1t_1)=\varphi^*(s_2t_2)\in \overline{\varphi}^*(a)$.
\end{enumerate}
See Figure~\ref{pic2} for a depiction of the colors described above.
\begin{figure}[!htb]
\begin{center}
\begin{tikzpicture}[scale=1]
{\tikzstyle{every node}=[draw ,circle,fill=white, minimum size=0.5cm,
inner sep=0pt]
\draw[blue,thick](0,-2) node (a) {$a$};
\draw[blue,thick](-1,-3) node (b) {$b$};
\draw[blue,thick](1,-3) node (c) {$c$};
\draw [blue,thick](0, -4.5) node (u) {$u$};
\draw [blue,thick](-1, -6) node (x) {$s_1$};
\draw [blue,thick](1, -6) node (y) {$s_2$};
\draw [blue,thick](-1, -7.5) node (t1) {$t_1$};
\draw [blue,thick](1, -7.5) node (t2) {$t_2$};
}
\path[draw,thick,black!60!green]
(a) edge node[name=la,pos=0.8, above] {\color{blue} $\beta$} (c)
(c) edge node[name=la,pos=0.4, below] {\color{blue} \quad$\tau$} (u)
(b) edge node[name=la,pos=0.4, below] {\color{blue} $\alpha$\quad\quad} (u)
(u) edge node[name=la,pos=0.6, above] {\color{blue}$\beta$\quad\quad} (x)
(x) edge node[name=la,pos=0.7, above] {\color{blue}$\gamma$\quad\quad} (t1)
(y) edge node[name=la,pos=0.7,above] {\color{blue} $\gamma$\quad\quad} (t2)
(u) edge node[name=la,pos=0.6,above] {\color{blue} \quad$\delta$} (y);
\draw[dashed, red, line width=0.5mm] (b)--++(140:1cm);
\draw[dashed, red, line width=0.5mm] (t1)--++(200:1cm);
\draw[dashed, red, line width=0.5mm] (t1)--++(250:1cm);
\draw[dashed, red, line width=0.5mm] (t1)--++(290:1cm);
\draw[dashed, red, line width=0.5mm] (t1)--++(340:1cm);
\draw[dashed, red, line width=0.5mm] (t2)--++(200:1cm);
\draw[dashed, red, line width=0.5mm] (t2)--++(250:1cm);
\draw[dashed, red, line width=0.5mm] (t2)--++(290:1cm);
\draw[dashed, red, line width=0.5mm] (t2)--++(340:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(40:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(100:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(70:1cm);
\draw[dashed, red, line width=0.5mm] (a)--++(140:1cm);
\draw[blue] (-1.6, -9+1.5) node {$\alpha$};
\draw[blue] (1.6, -9+1.5) node {$\alpha$};
\draw[blue] (-1.4, -9.5+1.5) node {$\beta$};
\draw[blue] (1.4, -9.5+1.5) node {$\beta$};
\draw[blue] (-1, -9.6+1.5) node {$\tau$};
\draw[blue] (1, -9.6+1.5) node {$\tau$};
\draw[blue] (-0.45, -9.4+1.5) node {$\delta$};
\draw[blue] (0.45, -9.4+1.5) node {$\delta$};
\draw[blue] (-1.2, -2.5) node {$\beta$};
\draw[blue] (0.6, -1.8) node {$\gamma$};
\draw[blue] (-0.6, -1.8) node {$\alpha$};
\draw[blue] (-0.3, -1.4) node {$\tau$};
\draw[blue] (0.4, -1.4) node {$\delta$};
\end{tikzpicture}
\end{center}
\caption{Colors on the edges of a kite}
\label{pic2}
\end{figure}
Let $\alpha,\beta, \tau,\delta\in \Gamma$, and let $\varphi(s_1t_1)=\varphi(s_2t_2)=\gamma$.
We may assume that $\alpha\in \overline{\varphi}(a)$ and $\beta\in \overline{\varphi}(b)$.
Otherwise, since $d_G(b)=\Delta$, we have $\alpha,\beta \in \overline{\varphi}(a)$.
Let $\lambda\in \overline{\varphi}(b)$. As $a$ and $b$ are $(\beta,\lambda)$-linked, we do a
$(\beta,\lambda)$-swap at $b$. Note that this operation may change some colors of the edges of $K$
and $K^*$, but they are still Kierstead paths with respect to $ab$
and the current coloring.
Since $d_G(b)=d_G(u)=\Delta$, and $\beta\in \overline{\varphi}(b)\cap \overline{\varphi}(t_1)$,
we know that $\gamma\in \overline{\varphi}(a)$, as $K$ is a
Kierstead path.
Next, we may assume that $\varphi(bu)=\alpha$.
If not, let $\varphi(bu)=\alpha'$. Since $a$
and $b$ are $(\alpha,\beta)$-linked, we do an $(\alpha,\beta)$-swap at $b$.
Now $a$ and $b$ are $(\alpha,\alpha')$-linked, so we do an $(\alpha,\alpha')$-swap at $b$.
Finally, we do an $(\alpha',\beta)$-swap at $b$.
None of these swaps changes the colors in $\Gamma$, and the color on $bu$
is now $\alpha$.
We may now assume that $\varphi(cu)=\tau$.
If not, let $\varphi(cu)=\tau'$.
Since $a$ and $b$ are $(\beta,\tau)$-linked, we do a $(\beta,\tau)$-swap at $b$.
Then do $(\tau,\tau')-(\tau',\beta)$-swaps at $b$.
Finally, we show that we can modify $\varphi$
to get $\varphi'$ such that $\varphi'(us_1)=\beta$
and $\varphi'(us_2)=\delta$.
Assume first that $\varphi(us_1)=\beta'\ne \beta$.
If $\beta'\in \Gamma$, we do $(\beta,\gamma)-(\gamma,\beta')$-swaps at $b$.
Thus, we assume $\beta'\not\in \Gamma$.
Let $\lambda\in \Gamma\setminus \{\alpha,\beta,\tau,\delta\}$.
If $u\not\in P_a(\beta,\beta')=P_b(\beta,\beta')$, we simply do a $(\beta,\beta')$-swap at $b$.
Thus, we assume $u\in P_a(\beta,\beta')=P_b(\beta,\beta')$.
We do a $(\beta,\beta')$-swap at both $t_1$
and $t_2$. Since $a$
and $b$ are $(\beta,\lambda)$-linked, we
do a $(\beta,\lambda)$-swap at both $t_1$
and $t_2$.
Now we do $(\beta,\gamma)-(\gamma,\beta')$-swaps at $b$.
By switching the roles of $\beta$ and $\beta'$,
we obtain $\varphi(us_1)=\beta$.
Lastly, we show that we may assume $\varphi(us_2)=\delta$; suppose instead that $\varphi(us_2)=\delta'\ne \delta$.
We claim that $bu\in P_{t_1}(\alpha,\gamma)$.
Otherwise, let $\varphi'=\varphi/P_{t_1}(\alpha,\gamma)$.
Then $P_b(\alpha,\beta)=bus_1t_1$, contradicting the fact that $a$
and $b$ are $(\alpha,\beta)$-linked with respect to $\varphi'$.
Thus, $bu\in P_{t_1}(\alpha,\gamma)$.
Next, we claim that $P_{t_1}(\alpha,\gamma)$ meets $u$ before $b$.
As otherwise, we do the following operations to get a $\Delta$-coloring of $G$:
\[
\begin{bmatrix}
s_1t_1& P_{[s_1,b]}(\alpha,\gamma) & us_1 & bu & ab\\
\gamma\rightarrow \alpha& \alpha/\gamma & \beta \rightarrow \alpha&
\alpha\rightarrow \beta& \gamma\end{bmatrix}.
\]
This gives a contradiction to the assumption that $G$ is $\Delta$-critical.
Thus, we have that $P_{t_1}(\alpha,\gamma)$ meets $u$ before $b$.
This implies that $P_{t_2}(\alpha,\gamma)$ does not meet $u$ before $b$.
In turn, this implies that $u\in P_a(\beta,\delta')=P_b(\beta,\delta')$.
As otherwise, we get a $\Delta$-coloring of $G$ by doing a $(\beta,\delta')$-swap along the $(\beta,\delta')$-chain containing $u$,
and then doing the same operation as above with $t_2$
playing the role of $t_1$.
Since $u\in P_a(\beta,\delta')=P_b(\beta,\delta')$,
we do a $(\beta,\delta')$-swap at both $t_1$ and $t_2$. As $u\in P_a(\beta,\tau)=P_b(\beta,\tau)$,
we do a $(\beta,\tau)$-swap at both $t_1$
and $t_2$. Since $us_1\in P_{t_1}(\beta,\gamma)$,
we do a $(\beta,\gamma)$-swap at $b$, then a $(\gamma,\lambda)$-swap at $b$.
Since $a$ and $b$ are $(\tau,\lambda)$-linked, we do a $(\tau,\lambda)$-swap at both $t_1$
and $t_2$. Now $(\lambda,\delta)-(\delta,\gamma)-(\gamma,\beta)$-swaps at $b$
give the desired coloring.
Still, by the same arguments as above, we have that $P_{t_1}(\alpha,\gamma)$ meets $u$ before $b$,
and $u\in P_a(\beta,\delta)=P_b(\beta,\delta)$.
Let $P_u(\beta,\delta)$ be the $(\beta,\delta)$-chain starting at $u$ not including the edge $us_2$.
It is clear that $P_u(\beta,\delta)$ ends at either $a$ or $b$.
We may assume that $P_u(\beta,\delta)$ ends at $a$.
Otherwise, we color $ab$ by $\beta$, uncolor $ac$, and let $\tau$
play the role of $\alpha$. Let $P_u(\alpha,\gamma)$
be the $(\alpha,\gamma)$-chain starting at $u$ not including the edge $bu$,
which ends at $t_1$ by our earlier argument.
We do the following operations to get a $\Delta$-coloring of $G$:
\[
\begin{bmatrix}
P_u(\alpha,\gamma)& bu & P_u(\beta,\delta) & us_2t_2 & ab\\
\alpha/\gamma& \alpha\rightarrow \beta & \beta/\delta&
\delta/\gamma& \alpha\end{bmatrix}.
\]
This gives a contradiction to the assumption that $G$ is $\Delta$-critical.
The proof is now finished.
\qed
\bibliographystyle{plain}
% Source: https://arxiv.org/abs/math/9703215
% On the zeros of the Hahn-Exton q-Bessel function and associated q-Lommel polynomials
\section{Introduction}
\markboth{\hfill{\sc Preprint}\hfill}{\hfill{\sc Preprint}\hfill}
\pagestyle{myheadings}
\def\theequation{1.\arabic{equation}}
\setcounter{equation}{0}
For the Bessel function
\begin{equation}
\label{bessel} J_{\nu}(z) = \sum\limits_{k=0}^{\infty}
\frac{(-1)^k \left( \frac{z}{2} \right)^{\nu+2k}}{k!
\Gamma(\nu+1+k)}
\end{equation}
there exist several $q$-analogues. The oldest $q$-analogues of the
Bessel function were introduced by F.H. Jackson at the beginning
of this century, see M.E.H. Ismail \cite{Is1} for the appropriate
references.
Another $q$-analogue of the Bessel function has been introduced by
W. Hahn in a special case and by H. Exton in full generality, see
R.F. Swarttouw \cite{Sw1} for a historic overview.
Here we concentrate on properties of the Hahn-Exton $q$-Bessel
function
and in particular on its zeros and the associated $q$-Lommel
polynomials. Ismail has proved very satisfactory results on this
subject
for the Jackson $q$-Bessel functions, cf. \cite{Is1}. In
particular, he
proves orthogonality relations for the associated $q$-Lommel
polynomials.
In section 2 we present the definition of the Hahn-Exton $q$-Bessel
function and some of its properties. The zeros of the Hahn-Exton
$q$-Bessel function of order $\nu>-1$ are the subject of section 3.
The
proofs of the statements in this section rest on the evaluation of
a
$q$-integral closely related to the Fourier-Bessel orthogonality
relations for the Hahn-Exton $q$-Bessel function. The results of
section 3 are in accordance with the results on the zeros of the
Bessel
function. Section 4 deals with
two kinds of associated $q$-Lommel polynomials for which we present
explicit forms as well as some properties. However, the first type
does not give rise to orthogonal polynomials, while the second type
provides polynomials, which are closely related to the orthogonal
modified $q$-Lommel polynomials found by M.E.H. Ismail.
Comparison of our results with the results for the Jackson
$q$-Bessel
functions shows that the Hahn-Exton $q$-Bessel function is less
similar
to the Bessel function in this respect than the Jackson $q$-Bessel
function, since we are not able to prove new orthogonality
relations for
the $q$-Lommel polynomials associated with the Hahn-Exton
$q$-Bessel
function. This seems in support of
M. Rahman \cite{Ra1}, who favours the Jackson $q$-Bessel function
over
the Hahn-Exton $q$-Bessel function. However, it should be noted
that
the proofs concerning the zeros of the Hahn-Exton $q$-Bessel
function
are much simpler and more direct than those for the Jackson
$q$-Bessel
function.
In our opinion both $q$-analogues of the Bessel function are
interesting functions and possess nice properties. Moreover, from
the
harmonic analysis on the quantum group of plane motions, cf.
H.T. Koelink [5, \S6.7], it follows that there exists a common
generalisation of the Jackson and Hahn-Exton $q$-Bessel function.
\section{The Hahn-Exton $q$-Bessel function}
\def\theequation{2.\arabic{equation}}
\setcounter{equation}{0}
In this section we present the Hahn-Exton $q$-Bessel function and
some
of its properties. References for these results are T.H.
Koornwinder and
R.F. Swarttouw \cite{Ko1} and R.F. Swarttouw \cite{Sw1}. The
$q$-Bessel
functions
are defined in terms of basic hypergeometric series, which we will
briefly recall. More information on basic hypergeometric series can
be
found in the book by G. Gasper and M. Rahman [3, Chapter 1].
\vspace{5mm}
We fix $0 < q < 1$ and we define the $q$-shifted factorials
\begin{eqnarray}
\label{qshift} (a;q)_k = \left\{ \begin{array}{ll}
1 &\mbox{ if } k=0\\[1ex]
(1-a)(1-aq)\cdots(1-aq^{k-1}) &\mbox{ if } k \geq 1
\end{array}\right.
\end{eqnarray}
for arbitrary $a\in\Bbb C$ and $k\in\Bbb Z_+$. The limit
$k\rightarrow\infty$
is well defined and yields
\begin{eqnarray}
\label{qshiftinf}(a;q)_{\infty} = \lim\limits_{k\rightarrow\infty}
(a;q)_k.
\end{eqnarray}
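A direct numerical transcription of (\ref{qshift}) and (\ref{qshiftinf}) may be useful for experimentation (our own sketch; the function names and the truncation tolerance are ours):

```python
def q_poch(a, q, k):
    """Finite q-shifted factorial (a; q)_k."""
    p = 1.0
    for j in range(k):
        p *= 1.0 - a * q**j
    return p

def q_poch_inf(a, q, tol=1e-12):
    """(a; q)_infinity for |q| < 1, truncated once the remaining factors are ~1."""
    p, j = 1.0, 0
    while abs(a) * q**j > tol:
        p *= 1.0 - a * q**j
        j += 1
    return p
```

For example, $(1/2;1/2)_\infty \approx 0.2887880951$.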
The $q$-hypergeometric (or basic hypergeometric) series
$_r\phi_s$ is defined by
\begin{eqnarray}
\qhyp{r}{s}{a_1, \dots, a_r}{b_1, \dots, b_s}{z} =
\sum\limits_{k=0}
^{\infty}\frac{(a_1;q)_k(a_2;q)_k\cdots(a_r;q)_k}{(q;q)_k(b_1;q)_
k\cdots
(b_s;q)_k} \left( (-1)^kq^{\frac{1}{2}k(k-1)}\right)^{1+s-r} z^k
\label{qhyper}
\end{eqnarray}
whenever the series converges.
\vspace{5mm}
The Hahn-Exton $q$-Bessel function $J_{\nu}(x;q)$ of order $\nu$ is
defined as
\begin{eqnarray}
J_{\nu}(x;q) = x^{\nu}
\frac{(q^{\nu+1};q)_{\infty}}{(q;q)_{\infty}}
\qhyp{1}{1}{0}{q^{\nu+1}}{qx^2}.\label{heq}
\end{eqnarray}
From (\ref{qhyper}) we see that $x^{-\nu} J_{\nu}(x;q)$
defines a non-zero analytic function on $\Bbb C$, so that
$J_{\nu}(x;q)$
is analytic in $\Bbb C\backslash\{0\}$.
The Hahn-Exton $q$-Bessel function is a $q$-analogue of the Bessel
function, since
$J_{\nu}((1-q)x;q)$ tends to the Bessel function
$J_{\nu}(2x)$ of order $\nu$ as $q\uparrow 1$, cf. [9, Appendix A].
The
standard text on Bessel functions is the classic by G.N. Watson
\cite{Wa1}.
\vspace{5mm}
Next we introduce two concepts of $q$-analysis: the $q$-derivative
and
the $q$-integral, cf. [3, Chapter 1]. The $q$-derivative of a
function
$f$ is defined by
\begin{eqnarray}
\left( D_q f\right)(x) = \frac{f(x) - f(qx)}{(1-q)x},\hspace{5mm}
x \neq 0.\label{qaf}
\end{eqnarray}
Note that $\left( D_q f \right)(x)$ tends to $f'(x)$ as $q\uparrow
1$ whenever $f$ is differentiable at $x$. The product rule for the
$q$-derivative is
\begin{eqnarray}
\label{qprod}\left(D_q fg\right)(x) = f(x) \left(D_q g\right)(x) +
g(qx) \left(D_q f\right)(x).
\end{eqnarray}
The $q$-integral of a function $f$ is defined by
\begin{eqnarray}
\label{qint}\int\limits_0^z f(x) d_qx = (1-q)z \sum\limits_{k=0}^
{\infty} f(zq^k) q^k, \hspace{5mm} z > 0.
\end{eqnarray}
For a Riemann-integrable function $f$ this expression tends to
$\int_0^z f(x)\, dx$ as $q\uparrow 1$. It is easily checked that
for
continuous $f$
\begin{eqnarray}
\int_0^z \left(D_q f\right)(x)\, d_qx = f(z) - f(0).\label{intdif}
\end{eqnarray}
The $q$-product rule leads to the following $q$-partial integration
rule
\begin{eqnarray}
\int\limits_0^z f(qx) \left( D_q g\right)(x) \, d_q x = f(z)g(z) -
f(0)g(0)
-\int\limits_0^z \left( D_q f \right)(x) g(x) \, d_q
x.\label{qpart}
\end{eqnarray}
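The rules (\ref{qaf})--(\ref{qpart}) are easy to check numerically; below is a minimal sketch (ours, with our own names), in which the $q$-integral (\ref{qint}) is truncated after finitely many terms.

```python
def d_q(f, q):
    """q-derivative operator: returns the function x -> (f(x) - f(qx)) / ((1-q)x)."""
    return lambda x: (f(x) - f(q * x)) / ((1 - q) * x)

def q_int(f, z, q, terms=200):
    """Truncated q-integral of f over [0, z]."""
    return (1 - q) * z * sum(f(z * q**k) * q**k for k in range(terms))
```

Applied to polynomials, both the $q$-fundamental theorem (\ref{intdif}) and the $q$-partial integration rule (\ref{qpart}) then hold up to a negligible truncation error.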
\vspace{5mm}
We will use the following $q$-derivatives, cf. [9, (3.2.22),
(3.2.19)]:
\begin{eqnarray}
& &D_q \left[ (\cdot)^{\nu} J_{\nu}(\cdot \, ;q^2)\right](x) =
\frac{x^{\nu}}{1-q} J_{\nu-1}(x;q^2),\label{af1}\\[1ex]
& &D_q \left[ (\cdot)^{-\nu} J_{\nu}(\cdot \, ;q^2)\right](x) =
- \frac{q^{1-\nu} x^{-\nu}}{1-q} J_{\nu+1}(xq;q^2).\label{af2}
\end{eqnarray}
These formulas are $q$-analogues of the case $m=1$ of
[11, 3.2(5), 3.2(6)].
\vspace{5mm}
The second order differential equation for the Bessel function, cf.
[11, 3.2(1)], has the following second order $q$-difference
equation as
a $q$-analogue for the Hahn-Exton $q$-Bessel function, cf. [9,
(4.3.1)]:
\begin{eqnarray}
\label{qdifeq}J_{\nu}(xq^2;q^2) + q^{-\nu} \left( x^2q^2 - 1 -
q^{2\nu}
\right) J_{\nu}(xq;q^2) + J_{\nu}(x;q^2) = 0.
\end{eqnarray}
It will also be useful to have the following relations at hand.
They
involve Hahn-Exton $q$-Bessel functions of several orders and can
be
found in [9, (3.2.15), (3.2.18), (3.2.21)]:
\begin{eqnarray}
& &J_{\nu+1}(x;q^2) = \left( \frac{1-q^{2\nu}}{x} + x \right)
J_{\nu}
(x;q^2) - J_{\nu-1}(x;q^2),\label{mix1}\\[1ex]
& &J_{\nu+1}(xq;q^2) = q^{-\nu-1} \left( \frac{1-q^{2\nu}}{x}
J_{\nu}
(x;q^2) - J_{\nu-1}(x;q^2) \right).\label{mix2}
\end{eqnarray}
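The $q$-difference equation (\ref{qdifeq}) and the contiguous relation (\ref{mix1}) give a strong consistency check on any numerical implementation of the series (\ref{heq}); the sketch below is ours (names and truncation depths are our own), using base $q^2$ as in these formulas.

```python
def q_poch(a, q, k):
    """Finite q-shifted factorial (a; q)_k."""
    p = 1.0
    for j in range(k):
        p *= 1.0 - a * q**j
    return p

def q_poch_inf(a, q, tol=1e-12):
    """(a; q)_infinity for |q| < 1, truncated once the remaining factors are ~1."""
    p, j = 1.0, 0
    while abs(a) * q**j > tol:
        p *= 1.0 - a * q**j
        j += 1
    return p

def hahn_exton_J(nu, x, q, terms=80):
    """Truncated series for the Hahn-Exton q-Bessel function J_nu(x; q)."""
    pref = x**nu * q_poch_inf(q**(nu + 1), q) / q_poch_inf(q, q)
    return pref * sum(
        (-1)**k * q**(k * (k + 1) // 2) * x**(2 * k)
        / (q_poch(q, q, k) * q_poch(q**(nu + 1), q, k))
        for k in range(terms))
```

Substituting sample values of $q$, $\nu>0$ and $x$ into (\ref{qdifeq}) and (\ref{mix1}) should then give residuals at rounding level.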
\section{On the zeros of the Hahn-Exton $q$-Bessel function}
\def\theequation{3.\arabic{equation}}
\setcounter{equation}{0}
In this section we prove some results on the zeros of the
Hahn-Exton
$q$-Bessel function. The proofs rest on an explicit evaluation of
a
$q$-integral related to the Fourier-Bessel orthogonality relations
for
the Hahn-Exton $q$-Bessel function and on some formulas relating
Hahn-Exton $q$-Bessel functions of different order. At the end of
this
section we also present a result on the zeros of the $q$-derivative
of
the Hahn-Exton $q$-Bessel function.
\vspace{5mm}
The starting point in the derivation of the results on the zeros of
the
Hahn-Exton $q$-Bessel function is the following proposition. It is
closely related to the Fourier-Bessel orthogonality relations for
the
Hahn-Exton $q$-Bessel functions, originally proved by H. Exton (see
\cite{Ex1} and \cite{Ex2}) using Sturmian methods.
\vspace{5mm}
{\bf Proposition 3.1.} {\sl For $\Re e(\nu) > -1, \, z > 0$ and
$a,b\in\Bbb C
\backslash\{0\}$ we have}
\begin{eqnarray*}
& &\left(a^2 - b^2 \right) \int\limits_0^z x J_{\nu}(aqx;q^2)
J_{\nu}
(bqx;q^2) \, d_q x\\[1ex]
& &\hspace{1cm}= (1-q) q^{\nu - 1} z \left( a J_{\nu+1}(aqz;q^2)
J_{\nu}(bz;q^2) - b J_{\nu+1}(bqz;q^2) J_{\nu}(az;q^2) \right).
\end{eqnarray*}
\vspace{5mm}
{\bf Proof:} Use the $q$-partial integration rule (\ref{qpart}) and
the
formulas (\ref{af1}) and (\ref{af2}) to obtain
\begin{eqnarray}
& &\int\limits_0^z x J_{\nu}(aqx;q^2) J_{\nu}
(bqx;q^2) \, d_qx\label{proof1}\\[1ex]
\hspace{1cm}& &= \frac{(1-q)}{b} q^{\nu - 1} z J_{\nu}(az;q^2)
J_{\nu+1}(bqz;q^2) + \frac{a}{b} \int\limits_0^z x
J_{\nu+1}(aqx;q^2)
J_{\nu+1}(bqx;q^2)\, d_qx.\nonumber
\end{eqnarray}
The restriction $\Re e(\nu) > -1$ is necessary to ensure the
absolute
convergence
of the series defining the $q$-integral, cf. (\ref{qint}), since
the
Hahn-Exton $q$-Bessel function $J_{\nu}(x;q^2)$ behaves like
$x^{\nu}$
times a constant for small $x$. Interchanging
$a$ and $b$ in (\ref{proof1}) yields a set of two equations, which
can
be solved easily. $\Box$
\vspace{2mm}
The analogous proof for the classical Bessel function is also well
known, see e.g. I.N. Sneddon [8, Chapter 2].
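Proposition 3.1 lends itself to a direct numerical check. The sketch below (ours, not from the paper) evaluates the $q$-integral as the Jackson integral $\int_0^z f(x)\,d_qx = z(1-q)\sum_{k\ge 0}q^k f(zq^k)$, cf. (\ref{qint}), and uses the series representation of $J_{\nu}(\cdot\,;q^2)$ from (\ref{heq}); all function names are ad hoc.

```python
# Numerical check of Proposition 3.1 via the Jackson q-integral.

def qpoch(a, q, n=100):
    """q-shifted factorial (a; q)_n; n=100 effectively gives (a; q)_infinity."""
    prod = 1.0
    for k in range(n):
        prod *= 1.0 - a * q ** k
    return prod

def jnu(nu, x, p, terms=60):
    """Hahn-Exton q-Bessel function J_nu(x; p) for x > 0, 0 < p < 1."""
    s = sum((-1) ** k * p ** (k * (k + 1) / 2) * x ** (2 * k)
            / (qpoch(p, p, k) * qpoch(p ** (nu + 1), p, k))
            for k in range(terms))
    return x ** nu * qpoch(p ** (nu + 1), p) / qpoch(p, p) * s

def qint(f, z, q, terms=120):
    """Jackson q-integral of f over [0, z]."""
    return z * (1 - q) * sum(q ** k * f(z * q ** k) for k in range(terms))

q, nu, a, b, z = 0.5, 0.4, 1.3, 0.8, 1.0
p = q * q
lhs = (a ** 2 - b ** 2) * qint(
    lambda x: x * jnu(nu, a * q * x, p) * jnu(nu, b * q * x, p), z, q)
rhs = (1 - q) * q ** (nu - 1) * z * (
    a * jnu(nu + 1, a * q * z, p) * jnu(nu, b * z, p)
    - b * jnu(nu + 1, b * q * z, p) * jnu(nu, a * z, p))
assert abs(lhs - rhs) < 1e-10
```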
\vspace{5mm}
{\bf Corollary 3.2.} {\sl The zeros of $J_{\nu}(\cdot\, ;q^2)$,
with
$\nu >
-1$, are real.}
\vspace{5mm}
{\bf Proof:} Suppose $a\neq 0$ is a zero of $J_{\nu}(\cdot\,
;q^2)$.
Since $\nu$ is real we have
\begin{eqnarray*}
J_{\nu}\left(\bar{a};q^2\right) = \overline{J_{\nu}(a;q^2)} = 0.
\end{eqnarray*}
Proposition 3.1 with $z = 1$ and $b = \bar{a}$ yields
\begin{eqnarray}
\left( a^2 - \bar{a}^2 \right) \int\limits_0^1 x \left|
J_{\nu}(aqx;q^2)
\right|^2 \, d_qx = 0.\label{abar}
\end{eqnarray}
Now $a^2 = \bar{a}^2$ if and only if $a\in\Bbb R$ or $a\in i\Bbb R$, so
that
in all other cases the $q$-integral in (\ref{abar}) is zero. Using
the
definition of the $q$-integral (\ref{qint}) we get
\begin{eqnarray*}
J_{\nu}\left(aq^{k+1};q^2 \right) = 0, \hspace{2mm} k\in\Bbb Z_+,
\end{eqnarray*}
and this implies that $x^{-\nu} J_{\nu}(x;q^2)$
is identically zero,
since it defines an analytic function on $\Bbb C$ which vanishes on a
sequence with accumulation point $0$. Hence, $J_{\nu}(\cdot\,
;q^2) \equiv 0$, which is impossible.
Finally, we have to show that $ia$, with $a\in\Bbb R$, is not a zero
of
$J_{\nu}(\cdot\, ;q^2)$. Now by (\ref{qhyper}) and (\ref{heq})
\begin{eqnarray*}
J_{\nu}(ia;q^2) = (ia)^{\nu}
\frac{(q^{2\nu+2};q^2)_{\infty}}{(q^2;q^2)
_{\infty}} \sum\limits_{k=0}^{\infty} \frac{q^{k(k+1)} a^{2k}}
{(q^2;q^2)_k
(q^{2\nu+2};q^2)_k}
\end{eqnarray*}
and for $\nu > -1$ this expression cannot be zero, since every term
in
the sum is positive. $\Box$
\vspace{5mm}
From the series representation (\ref{heq}) for the Hahn-Exton
$q$-Bessel
function it follows that if $a$ is a zero of $J_{\nu}(\cdot\,
;q^2)$,
then
$-a$ is also a zero of $J_{\nu}(\cdot\, ;q^2)$. Hence we will
restrict
ourselves to the positive zeros of the Hahn-Exton $q$-Bessel
function
of order $\nu > -1$.
\vspace{5mm}
To obtain an expression for the $q$-integral in Proposition 3.1
with
$a=b$, we use l'H\^{o}pital's rule. The result is
\begin{eqnarray*}
\int\limits_0^z x \left( J_{\nu}(axq;q^2)\right)^2 \, d_qx &=&
\frac
{(1-q)q^{\nu-1}z}{-2a}\left( az J_{\nu+1}(aqz;q^2) J_{\nu}'(az;q^2)
\right.\\[1ex]
& &\left. - J_{\nu+1}(aqz;q^2)J_{\nu}(az;q^2) -aqz
J_{\nu+1}'(aqz;q^2)
J_{\nu}(az;q^2)\right).
\end{eqnarray*}
This formula simplifies to
\begin{eqnarray}
\int\limits_0^1 x \left(J_{\nu}(axq;q^2) \right)^2 \, d_qx =
-\frac{1}
{2}(1-q)q^{\nu-1} J_{\nu+1}(aq;q^2)J_{\nu}'(a;q^2)\label{simp}
\end{eqnarray}
for $z=1$ and $a\neq 0$ a (real) zero of $J_{\nu}(\cdot\,;q^2)$.
\vspace{5mm}
{\bf Lemma 3.3.} {\sl The non-zero (real) zeros of
$J_{\nu}(\cdot\,;q^2)$,
with $\nu > -1$, are simple zeros.}
\vspace{5mm}
{\bf Proof:} Let $a$ be a non-zero (real) zero of
$J_{\nu}(\cdot\, ;q^2)$,
with $\nu > -1$. The integral
\begin{eqnarray*}
\int\limits_0^1 x \left| J_{\nu}(axq;q^2) \right|^2 \, d_qx =
\int\limits
_0^1 x \left( J_{\nu}(axq;q^2) \right)^2 d_qx
\end{eqnarray*}
is strictly positive. (If it were zero, this would imply that the
Hahn-Exton $q$-Bessel function is identically zero as in the proof
of
Corollary 3.2.) Hence, (\ref{simp}) implies that $J_{\nu}'(a;q^2)
\neq 0$, which proves the lemma. $\Box$
\vspace{2mm}
In the proof of this lemma we explicitly use the (usual) derivative
of
the Hahn-Exton $q$-Bessel function in (\ref{simp}). It shows that
intermingling (ordinary) analysis and $q$-analysis may be
fruitful
and should not be avoided, contrary to what M. Rahman \cite{Ra1}
proposes.
\vspace{5mm}
We now come to one of the main results of this section, which shows
that
the zeros of the Hahn-Exton $q$-Bessel function behave like the
zeros
of the classical Bessel function, cf. Watson [11, \S15.3].
\vspace{5mm}
{\bf Theorem 3.4.} {\sl The Hahn-Exton $q$-Bessel function of order
$\nu > -1$ has a countably infinite number of positive simple
zeros.}
\vspace{5mm}
{\bf Proof:} In view of Corollary 3.2 and Lemma 3.3 it remains to
show
that $J_{\nu}(\cdot\,;q^2)$, with $\nu > -1$, has a countably
infinite
number of zeros. We show that the assumption that
$J_{\nu}(\cdot\,;q^2)$,
with $\nu > -1$, has only finitely many positive zeros leads
to
a contradiction.
Assume that $a > 0$ is the largest positive zero of
$J_{\nu}(\cdot\,;q^2)$
and choose $x > 0$ satisfying
\begin{eqnarray*}
x^2q^2 - 1 -q^{2\nu} > 0,\hspace{5mm}\mbox{ and } \hspace{5mm} q^2x
> a.
\end{eqnarray*}
Hence, since $xq^2$, $xq$ and $x$ all exceed $a$, the values
$J_{\nu}(xq^2;q^2), J_{\nu}(xq;q^2)$ and $J_{\nu}(x;q^2)$
are
non-zero real numbers of the same sign and
\begin{eqnarray*}
J_{\nu}(xq^2;q^2) + q^{-\nu} \left( q^2x^2 - 1 -q^{2\nu} \right)
J_{\nu}
(xq;q^2) + J_{\nu}(x;q^2)
\end{eqnarray*}
is a non-zero real number. This is in contradiction with the second
order $q$-difference equation (\ref{qdifeq}) for the Hahn-Exton
$q$-Bessel function. $\Box$
\vspace{5mm}
Theorem 3.4 shows that we can order the positive zeros of $J_{\nu}
(\cdot\,;q^2)$, with $\nu > -1$, as
\begin{eqnarray}
\label{volg}0 < j_1^{\nu}(q^2) < j_2^{\nu}(q^2) < j_3^{\nu}(q^2) <
\dots .
\end{eqnarray}
The positive zeros of $J_{\nu}(\cdot\, ;q^2)$ will also be denoted
by
$j_n$ or $j_n^{\nu}$. Proposition 3.1 and
relation (\ref{simp}) can be combined to state the
Fourier-Bessel orthogonality relations for the Hahn-Exton
$q$-Bessel
function, cf. \cite{Ex1} and \cite{Ex2}.
\vspace{5mm}
{\bf Proposition 3.5.} {\sl Let $\nu > -1$ and $0 < j_1 < j_2 <
\dots$
be the
positive zeros of the Hahn-Exton $q$-Bessel function
$J_{\nu}(\cdot\, ;
q^2)$, then}
\begin{eqnarray*}
& &\int\limits_0^1 x J_{\nu}(qj_nx;q^2)J_{\nu}(qj_mx;q^2)\, d_q
x\\[1ex]
& &\hspace{1cm}= -\frac{1}{2}(1-q)q^{\nu-1}
J_{\nu+1}(qj_n;q^2) J_{\nu}'(j_n;q^2) \delta_{n,m}\\[1ex]
& &\hspace{1cm}= \frac{1}{2}(1-q)^2q^{\nu-2}\left(D_q J_{\nu}
(\cdot\, ;q^2)\right)(j_n) \, J_{\nu}'(j_n;q^2) \delta_{n,m}\\[1ex]
& &\hspace{1cm}= -\frac{1}{2}(1-q)q^{\nu-2}
j_n^{-1} J_{\nu}(qj_n;q^2) J_{\nu}'(j_n;q^2) \delta_{n,m}\\[1ex]
& &\hspace{1cm}= -\frac{1}{2}(1-q)q^{-2}J_{\nu+1}(j_n;q^2) J_{\nu}'
(j_n;q^2) \delta_{n,m}.
\end{eqnarray*}
\vspace{5mm}
{\bf Proof:} It remains to prove the last three equalities. The
first
one is a consequence of
\begin{eqnarray}
\label{af3}\left[D_q J_{\nu}(\cdot\, ;q^2)\right](x) =
\frac{-q}{1-q}
J_{\nu+1}(xq;q^2) + \frac{1-q^{\nu}}{x(1-q)} J_{\nu}(x;q^2),
\end{eqnarray}
which follows from (\ref{qprod}) and (\ref{af2}).
The second equality follows from the definition of the
$q$-derivative
$D_q$, cf. (\ref{qaf}). The third equality follows from $J_{\nu+1}
(j_n;q^2) = q^{\nu+1} J_{\nu+1}(qj_n;q^2)$, which is a consequence
of
(\ref{mix1}) and (\ref{mix2}). $\Box$
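The zeros and the orthogonality relations of Proposition 3.5 can be checked numerically. The sketch below (ours, not part of the paper) locates zeros by sign changes and bisection, evaluates the $q$-integral as a Jackson integral, and approximates $J_{\nu}'$ by a central difference; all function names are ad hoc.

```python
# Locate the first positive zeros of J_nu(. ; q^2) and check the
# Fourier-Bessel orthogonality relations (first form of Prop. 3.5).

def qpoch(a, q, n=100):
    """q-shifted factorial (a; q)_n; n=100 effectively gives (a; q)_infinity."""
    prod = 1.0
    for k in range(n):
        prod *= 1.0 - a * q ** k
    return prod

def jnu(nu, x, p, terms=60):
    """Hahn-Exton q-Bessel function J_nu(x; p) for x > 0, 0 < p < 1."""
    s = sum((-1) ** k * p ** (k * (k + 1) / 2) * x ** (2 * k)
            / (qpoch(p, p, k) * qpoch(p ** (nu + 1), p, k))
            for k in range(terms))
    return x ** nu * qpoch(p ** (nu + 1), p) / qpoch(p, p) * s

def qint(f, z, q, terms=120):
    """Jackson q-integral of f over [0, z]."""
    return z * (1 - q) * sum(q ** k * f(z * q ** k) for k in range(terms))

def zeros(nu, p, xmax, step=0.01):
    """Positive zeros of J_nu(. ; p) below xmax (sign change + bisection)."""
    zs, x, f0 = [], step, jnu(nu, step, p)
    while x < xmax:
        f1 = jnu(nu, x + step, p)
        if f0 * f1 < 0:
            lo, hi = x, x + step
            for _ in range(50):
                mid = 0.5 * (lo + hi)
                if jnu(nu, lo, p) * jnu(nu, mid, p) <= 0:
                    hi = mid
                else:
                    lo = mid
            zs.append(0.5 * (lo + hi))
        x, f0 = x + step, f1
    return zs

q, nu = 0.5, 0.4
p = q * q
j = zeros(nu, p, 12.0)                       # j_1 < j_2 < ...

def inner(n, m):
    return qint(lambda x: x * jnu(nu, q * j[n] * x, p)
                * jnu(nu, q * j[m] * x, p), 1.0, q)

h = 1e-6                                     # central difference for J_nu'
jprime = (jnu(nu, j[0] + h, p) - jnu(nu, j[0] - h, p)) / (2 * h)
diag = -0.5 * (1 - q) * q ** (nu - 1) * jnu(nu + 1, q * j[0], p) * jprime
assert len(j) >= 2
assert abs(inner(0, 1)) < 1e-8               # distinct zeros are orthogonal
assert abs(inner(0, 0) - diag) < 1e-6        # diagonal term, first form
```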
\vspace{5mm}
Next we will derive that the zeros of the Hahn-Exton $q$-Bessel
functions of order $\nu$ and $\nu+1$
are interlaced similarly to the interlacing property of the
zeros of the Bessel functions of order $\nu$ and $\nu+1$, cf.
Watson
[11, \S15.22]. We start with the following lemma.
\vspace{5mm}
{\bf Lemma 3.6.} {\sl Let $\nu > -1$, then $J_{\nu-1}(\cdot\,
;q^2)$ and
$J_{\nu+1}(\cdot\, ;q^2)$ each have at least one zero between two
consecutive positive zeros of $J_{\nu}(\cdot\, ;q^2)$.}
\vspace{5mm}
{\bf Proof:} Consider the function
\begin{eqnarray*}
g(x) = \frac{x^{\nu}}{1-q} J_{\nu-1}(x;q^2) = D_q \left(
(\cdot)^{\nu}
J_{\nu}(\cdot\, ;q^2) \right)(x),
\end{eqnarray*}
where we used (\ref{af1}) as well. If $a$ is a positive zero of
$J_{\nu}(\cdot\, ;q^2)$, then
\begin{eqnarray*}
g(a) = - \frac{(aq)^{\nu} J_{\nu}(aq;q^2)}{(1-q)a}
\end{eqnarray*}
by (\ref{qaf}). It follows from Proposition 3.5 and Lemma 3.3 that
$g(a)
g(b) < 0$ for two consecutive zeros $0 < a < b$ of $J_{\nu}(\cdot\,
;q^2)$. This proves the lemma for $J_{\nu-1}(\cdot\, ;q^2)$.
The other case is reduced to the result for
$J_{\nu-1}(\cdot\,;q^2)$,
since $J_{\nu-1}(a;q^2) = - J_{\nu+1}(a;q^2)$ for a non-zero zero
of
$J_{\nu}(\cdot\,;q^2)$ by (\ref{mix1}). $\Box$
\vspace{5mm}
{\bf Theorem 3.7.} {\sl The positive real zeros of $J_{\nu}(\cdot\,
;q^2)$
and $J_{\nu+1}(\cdot\,;q^2)$, with $\nu > -1$, interlace.
Explicitly,}
\begin{eqnarray*}
0 < j_1^{\nu} < j_1^{\nu+1} < j_2^{\nu} < j_2^{\nu+1} < j_3^{\nu}
<
j_3^{\nu+1} < \dots.
\end{eqnarray*}
\vspace{5mm}
{\bf Proof:} The interlacing of the zeros of $J_{\nu}(\cdot\,;q^2)$
and
$J_{\nu+1}(\cdot\, ;q^2)$ is a straightforward application of Lemma
3.6.
It remains to prove that $j_1^{\nu} < j_1^{\nu+1}$.
First note that the $_1\phi_1$-series in (\ref{heq}) yields $1$ for
$x=0$. Hence, by continuity,
\begin{eqnarray}
\label{jpos}J_{\nu}(x;q^2) > 0, \hspace{5mm} x\in\left( 0,
j_1^{\nu}
\right), \, \nu > -1.
\end{eqnarray}
Thus $J_{\nu}'(j_1^{\nu};q^2) < 0$ and by Lemma 3.3 and Proposition
3.5
we get $J_{\nu+1}(j_1^{\nu};q^2) > 0$. From (\ref{jpos}) it follows
that
there exists an even number of positive zeros of $J_{\nu+1}(\cdot\,
;q^2)$ in the open interval $(0, j_1^{\nu})$. Lemma 3.6 and
$j_1^{\nu}$
being the smallest positive zero of $J_{\nu}(\cdot\, ;q^2)$ imply
that
this number is zero. $\Box$
\vspace{5mm}
An upper bound for the first zero $j^{\nu}_1$ may be obtained as
follows. Take $x=j^{\nu}_1$ in (\ref{qdifeq}) to obtain
\begin{eqnarray*}
J_{\nu}(j^{\nu}_1 q^2;q^2) + q^{-\nu} \left( \left(j^{\nu}_1
\right)^2
q^2 - 1 - q^{2\nu} \right) J_{\nu}(j^{\nu}_1 q;q^2) = 0.
\end{eqnarray*}
Inequality (\ref{jpos}) implies that both $q$-Bessel functions are
positive, so that
\begin{eqnarray*}
\left(j^{\nu}_1 \right)^2 q^2 - 1 - q^{2\nu} < 0
\Longleftrightarrow
j^{\nu}_1 < q^{-1} \sqrt{1+q^{2\nu}}.
\end{eqnarray*}
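Both the interlacing property of Theorem 3.7 and the upper bound for $j_1^{\nu}$ just derived can be observed numerically. A sketch (ours; the zero-finder and series helpers are ad hoc, as in the earlier checks):

```python
# Check interlacing of the zeros of J_nu and J_{nu+1} (base q^2) and
# the bound j_1^nu < q^{-1} sqrt(1 + q^{2 nu}).

def qpoch(a, q, n=100):
    """q-shifted factorial (a; q)_n; n=100 effectively gives (a; q)_infinity."""
    prod = 1.0
    for k in range(n):
        prod *= 1.0 - a * q ** k
    return prod

def jnu(nu, x, p, terms=60):
    """Hahn-Exton q-Bessel function J_nu(x; p) for x > 0, 0 < p < 1."""
    s = sum((-1) ** k * p ** (k * (k + 1) / 2) * x ** (2 * k)
            / (qpoch(p, p, k) * qpoch(p ** (nu + 1), p, k))
            for k in range(terms))
    return x ** nu * qpoch(p ** (nu + 1), p) / qpoch(p, p) * s

def zeros(nu, p, xmax, step=0.01):
    """Positive zeros of J_nu(. ; p) below xmax (sign change + bisection)."""
    zs, x, f0 = [], step, jnu(nu, step, p)
    while x < xmax:
        f1 = jnu(nu, x + step, p)
        if f0 * f1 < 0:
            lo, hi = x, x + step
            for _ in range(50):
                mid = 0.5 * (lo + hi)
                if jnu(nu, lo, p) * jnu(nu, mid, p) <= 0:
                    hi = mid
                else:
                    lo = mid
            zs.append(0.5 * (lo + hi))
        x, f0 = x + step, f1
    return zs

q, nu = 0.5, 0.5
p = q * q
za = zeros(nu, p, 12.0)          # zeros of J_nu
zb = zeros(nu + 1, p, 12.0)      # zeros of J_{nu+1}
assert min(len(za), len(zb)) >= 2
for n in range(min(len(za), len(zb)) - 1):
    assert za[n] < zb[n] < za[n + 1]              # interlacing
assert za[0] < (1 + q ** (2 * nu)) ** 0.5 / q     # first-zero bound
```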
\vspace{5mm}
Theorem 3.7 shows that for $\nu>-1$ the Hahn-Exton $q$-Bessel
functions
of order $\nu$ and $\nu+1$ do not have zeros in common except
possibly
$0$. This also holds for general order.
\vspace{2mm}
{\bf Proposition 3.8.} {\sl The Hahn-Exton $q$-Bessel functions
$J_{\nu}(\cdot \, ;q^2)$ and $J_{\nu+1}(\cdot \, ;q^2)$ have no
common
zeros except possibly $0$.}
\vspace{2mm}
{\bf Proof:} We argue by contradiction, so let $x\in\Bbb C$ be a
non-zero
common zero of $J_{\nu}(\cdot \, ;q^2)$ and $J_{\nu+1}(\cdot \,
;q^2)$; by (\ref{mix1}) it is then also a zero of
$J_{\nu-1}(\cdot \, ;q^2)$.
Eliminating $J_{\nu-1}(x;q^2)$ from (\ref{mix1}) and (\ref{mix2})
leads
to
\begin{eqnarray*}
J_{\nu+1}(x;q^2) - q^{\nu+1} J_{\nu+1}(xq;q^2) = x J_{\nu}(x;q^2).
\end{eqnarray*}
Hence, $J_{\nu+1}(xq;q^2) = 0$ as well. Next consider the
$q$-difference
equation (\ref{qdifeq}) with $\nu$ replaced by $\nu+1$. Since $x$
and
$xq$ are zeros of $J_{\nu+1}(\cdot \, ;q^2)$ we find
$J_{\nu+1}(xq^2;q^2)=0$. Repeated application of (\ref{qdifeq})
leads
to $J_{\nu+1}(xq^k;q^2)=0$ for all $k\in\Bbb Z_+$. This implies
$J_{\nu+1}(\cdot \, ;q^2)\equiv 0$, which is the desired
contradiction.
$\Box$
\vspace{5mm}
Finally we state a result on the zeros of the $q$-derivative of the
Hahn-Exton $q$-Bessel function. Specialise $z=1$ in Proposition 3.1
and use (\ref{af3}) to obtain
\begin{eqnarray}
& &\label{spec}(a^2 - b^2) \int\limits_0^1 x J_{\nu}(aqx;q^2)
J_{\nu}
(bqx;q^2) \, d_q x \\[1ex]
& &\hspace{1cm}= (1-q)^2 q^{\nu-2} \left( b \left(D_q
J_{\nu}(\cdot\,;q^2)
\right)(b) J_{\nu}(a;q^2) - a \left(D_q
J_{\nu}(\cdot\,;q^2)\right)(a)
J_{\nu}(b;q^2) \right)\nonumber
\end{eqnarray}
for $\Re e(\nu) > -1$ and
$a,b\in\Bbb C\backslash\{0\}$.
\vspace{5mm}
{\bf Proposition 3.9.} {\sl The non-zero zeros of $D_q
J_{\nu}(\cdot\,
;q^2)$ for $\nu > 0$ are real and simple.}
\vspace{5mm}
{\bf Proof:} In order to prove that the zeros of
$D_q J_{\nu}(\cdot\,;q^2)$ are
real, we proceed as in the proof of Corollary 3.2. In this case
purely imaginary zeros can be excluded for $\nu > 0$.
To see that the zeros are simple, we apply l'H\^{o}pital's rule to
(\ref{spec}) and then take $a$ a non-zero real zero of $D_q J_{\nu}
(\cdot\,;q^2)$. The result is
\begin{eqnarray*}
\int\limits_0^1 x \left| J_{\nu}(aqx;q^2) \right|^2 \, d_q x = -
\frac
{1}{2}(1-q)^2q^{\nu-2} J_{\nu}(a;q^2) \left( D_q J_{\nu}(\cdot\,
;q^2)
\right)'(a).
\end{eqnarray*}
Since the left hand side is non-zero, the result follows.
$\Box$
\section{On the associated $q$-Lommel polynomials}
\def\theequation{4.\arabic{equation}}
\setcounter{equation}{0}
The Bessel function $J_{\nu}(z)$ satisfies the three term
recurrence
relation
\begin{eqnarray}
\label{recurbes}J_{\nu+1}(z) = \frac{2\nu}{z} J_{\nu}(z) -
J_{\nu-1}(z).
\end{eqnarray}
If we iterate relation (\ref{recurbes}), we can express
$J_{\nu+m}(z)$,
with $m\in\Bbb Z_+$, in terms of $J_{\nu}(z)$ and $J_{\nu-1}(z)$, with
coefficients that are polynomials in $\frac{1}{z}$. Indeed we have
\begin{equation}
\label{recurmbes}J_{\nu+m}(z) = R_{m,\nu}(z)J_{\nu}(z) -
R_{m-1,\nu+1}
(z) J_{\nu-1}(z),
\end{equation}
where $R_{m,\nu}(z)$ are the Lommel polynomials, see Watson [11,
\S9.6].
These polynomials satisfy the three term recurrence relation, cf.
[11, 9.63(2)]
\begin{equation}
\label{recurlom}R_{m+1,\nu}(z) = \frac{2(\nu+m)}{z} R_{m,\nu}(z) -
R_{m-1,\nu}(z),
\end{equation}
with $R_{0,\nu}(z) = 1$ and $R_{1,\nu}(z) = \frac{2\nu}{z}$.
Usually
a related set of polynomials, the so-called modified Lommel
polynomials, is defined by
\begin{eqnarray*}
h_{m,\nu}(z) = R_{m,\nu}\left(\frac{1}{z}\right).
\end{eqnarray*}
It is clear that $h_{m,\nu}(z)$ is a polynomial in $z$ of degree
$m$,
which satisfies the three term recurrence relation
\begin{eqnarray*}
\label{recurmod}h_{m+1,\nu}(z) = 2z(\nu+m) h_{m,\nu}(z) -
h_{m-1,\nu}(z).
\end{eqnarray*}
By Favard's theorem, the modified Lommel polynomials are orthogonal
with respect to a positive measure. The orthogonality measure is a
discrete measure with weights at $\frac{1}{j_n^{\nu}}$, where
$j_n^{\nu}$ are the zeros of the Bessel function $J_{\nu}(z)$.
\vspace{5mm}
The asymptotic behaviour of the Lommel polynomials $R_{m,\nu}$ is
related to the Bessel function by Hurwitz's formula, cf. [11,
9.65(1)]
\begin{eqnarray}
\label{hur} \frac{\left(\frac{1}{2}z\right)^{\nu+m} R_{m,\nu+1}(z)}
{\Gamma(\nu+m+1)} \stackrel{m\rightarrow\infty}{\longrightarrow}
J_{\nu}(z).
\end{eqnarray}
In the next subsections we will derive and investigate two
different
$q$-analogues
of the Lommel polynomials. The first type follows from the
recurrence
relation (\ref{mix1}). The second type is a result of the
difference-recurrence relation (\ref{mix2}).
\subsection{$q$-Lommel polynomials associated with (2.13)}
In this subsection we determine $q$-analogues of the Lommel
polynomials
arising from the recurrence relation (\ref{mix1}). We will present
an
explicit formula, a generating function and an analogue of
Hurwitz's
formula (\ref{hur}).
In order to derive $q$-analogues of the results mentioned in the
previous section, we introduce a second Hahn-Exton $q$-Bessel
function.
Define
\begin{eqnarray}
{\cal J}_{\nu}(x;q) &=& e^{i\nu\pi} q^{-\frac{1}{2}\nu}J_{-\nu}
(xq^{-\frac{1}{2}\nu};q)\label{calj},\\[1ex]
&=& e^{i\nu\pi} \frac{(q^{-\nu +1};q)_{\infty}
}{(q;q)_{\infty}}x^{-\nu}q^{\frac{1}{2}\nu
(\nu-1)}\sum\limits_{k=0}
^{\infty}\frac{(-1)^kq^{\frac{1}{2}k(k+1)}x^{2k}q^{-\nu k}}
{(q^{-\nu +1};q)_k(q;q)_k}\nonumber
\end{eqnarray}
Relations (\ref{af1}) and (\ref{af2}), in combination
with the definition (\ref{calj}) above, give the following formulas:
\begin{eqnarray}
\label{drj1}& &J_{\nu}(xq^{\frac{1}{2}};q) = q^{\frac{1}{2}\nu}
J_{\nu}(x;q) + x q^{\frac{1}{2}}J_{\nu
+1}(xq^{\frac{1}{2}};q),\\[1ex]
\label{drjj1}& &{\cal J}_{\nu}(xq^{\frac{1}{2}};q) = q^{\frac{1}{2}
\nu}{\cal J}_{\nu}(x;q) + x q^{\frac{1}{2}}{\cal J}_{\nu
+1}(xq^{\frac
{1}{2}};q),\\[1ex]
\label{drj2}& &J_{\nu}(xq^{\frac{1}{2}};q) =
q^{-\frac{1}{2}\nu}J_{\nu}
(x;q) - xq^{-\frac{1}{2}\nu}J_{\nu -1}(x;q),\\[1ex]
\label{drjj2}& &{\cal J}_{\nu}(xq^{\frac{1}{2}};q) =
q^{-\frac{1}{2}
\nu}{\cal J}_{\nu}(x;q) - xq^{-\frac{1}{2}\nu}{\cal J}_{\nu
-1}(x;q).
\end{eqnarray}
\vspace{5mm}
\noindent Furthermore, ${\cal J}_{\nu}(x;q)$ also satisfies the
base-$q$ analogues of the relations
(\ref{qdifeq}) and (\ref{mix1}), i.e.
\begin{eqnarray}
\label{recjj1}& &\left\{ x + \frac{1-q^{\nu}}{x} \right\}
{\cal J}_{\nu}(x;q) =
{\cal J}_{\nu -1}(x;q) + {\cal J}_{\nu +1}(x;q),\\[1ex]
\label{difj2}& &{\cal J}_{\nu}(xq;q) + q^{-\frac{1}{2}\nu}\left(
x^2q-1-
q^{\nu}\right) {\cal J}_{\nu}(xq^{\frac{1}{2}};q) + {\cal
J}_{\nu}(x;q)
=0.
\end{eqnarray}
\noindent This follows by combining relations (\ref{drjj1}) and
(\ref{drjj2}).
\vspace{5mm}
\noindent Next, we will give two $q$-analogues of
(\ref{recurmbes}):
\vspace{2mm}
{\bf Proposition 4.1.} {\sl The functions $J_{\nu}(x;q)$ and
${\cal J}_{\nu}(x;q)$
satisfy the (same) recurrence relations}
\begin{eqnarray}
\label{recj2}& &J_{\nu +m}(x;q) = R_{m,\nu}(x;q)J_{\nu}(x;q) -
R_{m-1,\nu +1}(x;q)J_{\nu -1}(x;q),\\[1ex]
\label{recjj2}& &{\cal J}_{\nu +m}(x;q) = R_{m,\nu}(x;q)
{\cal J}_{\nu}
(x;q) - R_{m-1,\nu +1}(x;q){\cal J}_{\nu -1}(x;q).
\end{eqnarray}
\vspace{2mm}
{\bf Proof:} We will give the proof for $J_{\nu}(x;q)$. We can
iterate
the recurrence relation (\ref{mix1}) to get an expression of the
form
\begin{eqnarray*}
J_{\nu + m}(x;q) = R_{m,\nu}(x;q)J_{\nu}(x;q) -
S_{m,\nu}(x;q)J_{\nu -1}(x;q).
\end{eqnarray*}
Next consider $J_{\nu + m + 1}(x;q)$. It satisfies the relations
\begin{eqnarray*}
& &J_{\nu +m+1}(x;q)= R_{m+1,\nu}(x;q)J_{\nu}(x;q) -
S_{m+1,\nu}(x;q)
J_{\nu-1}(x;q)\\[1ex]
& &\hspace{1cm}=R_{m,\nu+1}(x;q)J_{\nu+1}(x;q) - S_{m,\nu+1}(x;q)
J_{\nu}(x;q)\\[1ex]
& &\hspace{1cm}=\left[ R_{m,\nu+1}(x;q) \left\{ x +
\frac{1-q^{\nu}}{x}
\right\} -
S_{m,\nu+1}(x;q)\right] J_{\nu}(x;q) -
R_{m,\nu+1}(x;q)J_{\nu-1}(x;q).
\end{eqnarray*}
Hence $S_{m,\nu}(x;q) = R_{m-1,\nu+1}(x;q)$ and the proof is
completed.
Since $J_{\nu}(x;q)$ and
${\cal J}_{\nu}(x;q)$ satisfy the same recurrence relation (compare
(\ref{mix1}) and (\ref{recjj1})), it is obvious that (\ref{recjj2})
will also hold.$\Box$\newline
\noindent Another consequence of the proof is the recurrence
relation
\begin{eqnarray}
\label{recl1}R_{m+1,\nu}(x;q) = \left\{
x+\frac{1-q^{\nu}}{x}\right\}
R_{m,\nu+1}(x;q) - R_{m-1,\nu+2}(x;q).
\end{eqnarray}
\vspace{5mm}
Since (\ref{recj2}) and (\ref{recjj2}) are $q$-analogues of
(\ref{recurmbes}), the functions $R_{m,\nu}$ are $q$-analogues
of the Lommel polynomials. Later, when an explicit formula is
derived,
this will become clearer.
\vspace{5mm}
{\bf Lemma 4.2.} {\sl For non-integral $\nu$ the function
$R_{m,\nu}$
can be expressed as}
\begin{eqnarray}
\label{exlom1}R_{m,\nu}(x;q) &=& \frac{x e^{-i\pi\nu}
q^{-\frac{1}{2}
\nu(\nu-1)} (q;q)_{\infty}(q;q)_{\infty}}{(q^{\nu};q)_{\infty}
(q^{-\nu+1};q)_{\infty}}\times\\[1ex]
& &\hspace{2cm}\times\left\{ J_{\nu-1}(x;q){\cal J}_{\nu +m}(x;q)
-
J_{\nu +m}(x;q){\cal J}
_{\nu -1}(x;q) \right\}\nonumber.
\end{eqnarray}
\vspace{2mm}
{\bf Proof:} Multiply the recurrence relations (\ref{recj2}) and
(\ref
{recjj2}) by ${\cal J}_{\nu-1}(x;q)$ and
$J_{\nu-1}(x;q)$, respectively,
and subtract the resulting formulas to obtain
\begin{eqnarray*}
R_{m,\nu}(x;q) = \left\{ \frac{{\cal J}_{\nu+m}(x;q)J_{\nu-1}(x;q)
-
J_{\nu+m}(x;q){\cal J}_{\nu-1}(x;q)}{J_{\nu-1}(x;q){\cal
J}_{\nu}(x;q)
- J_{\nu}(x;q){\cal J}_{\nu -1}(x;q)} \right\}.
\end{eqnarray*}
Using (\ref{drj2}) and (\ref{drjj2}) we rewrite the denominator of
this
expression as
\begin{eqnarray*}
& &J_{\nu-1}(x;q){\cal J}_{\nu}(x;q) - J_{\nu}(x;q){\cal
J}_{\nu-1}(x;q)
\\[1ex]
&=& {\cal J}_{\nu}(x;q)\left[ \frac{1}{x}J_{\nu}(x;q) - \frac{1}{x}
q^{\frac{1}{2}\nu}J_{\nu}(xq^{\frac{1}{2}};q)\right] - J_{\nu}(x;q)
\left[ \frac{1}{x} {\cal J}_{\nu}(x;q) -
\frac{1}{x}q^{\frac{1}{2}\nu}
{\cal J}_{\nu}(xq^{\frac{1}{2}};q)\right]\\[1ex]
&=&\frac{q^{\frac{1}{2}\nu}}{x}\left\{ J_{\nu}(x;q){\cal J}_{\nu}(x
q^{\frac{1}{2}};q) - {\cal J}_{\nu}(x;q)J_{\nu}(xq^{\frac{1}{2}};q)
\right\}.
\end{eqnarray*}
The term in braces is a Wronskian-type expression, which has been
evaluated by R.F. Swarttouw [9, (4.3.8)]. Explicitly,
\begin{eqnarray}
\label{wron}J_{\nu}(x;q){\cal J}_{\nu}(xq^{\frac{1}{2}};q) -
{\cal J}_{\nu}
(x;q)J_{\nu}(xq^{\frac{1}{2}};q) =
q^{\frac{1}{2}\nu(\nu-2)}e^{i\nu\pi}
\frac{(q^{\nu};q)_{\infty}(q^{-\nu+1};q)_{\infty}}{(q;q)_{\infty}
(q;q)_{\infty}},
\end{eqnarray}
which is non-zero for non-integral values of $\nu$. Now the lemma
follows easily.$\Box$
\vspace{2mm}
{\bf Remark:} The case $\nu=n\in\Bbb Z$ requires a closer
investigation,
since (\ref{wron}) equals zero if $\nu\in\Bbb Z$. However, for
$n\in\Bbb Z$
we have the relation, cf. [9, (3.2.12)]
\begin{eqnarray}
\label{minplus}J_n(x;q) = (-1)^n q^{-\frac{1}{2}n}
J_{-n}(xq^{-\frac{1}{2}n};q) = {\cal J}_n(x;q),
\end{eqnarray}
so that the numerator in (\ref{exlom1}) will also become zero for
$n\in\Bbb Z$.
Now we can apply l'H\^{o}pital's rule to (\ref{exlom1}) to show
that the
formula still makes sense for $\nu\in\Bbb Z$. However, it will be
easier to
show that $R_{m,n}(x;q)$ is the analytic continuation of
$R_{m,\nu}(x;q)$,
when we have the explicit representation for the $R_{m,\nu}(x;q)$.
Until (\ref{explir}) we assume that $\nu$ is non-integral, but it
is
easily verified by analytic continuation that the results obtained
in
between are valid for integer $\nu$ as well.
\vspace{5mm}
Relation (\ref{exlom1}) gives rise to a three term recurrence
relation
for the $q$-Lommel polynomials:
\vspace{2mm}
{\bf Proposition 4.3.} {\sl The function $R_{m,\nu}(x;q)$ satisfies
the
recurrence relation}
\begin{eqnarray}
\label{recl2}\left\{ x^2 + 1 - q^{\nu+m} \right\} R_{m,\nu}(x;q) =
x \left\{ R_{m-1,\nu}(x;q) + R_{m+1,\nu}(x;q) \right\},
\end{eqnarray}
with $R_{0,\nu}(x;q)=1$ and $R_{1,\nu}(x;q)=x+\frac{1-q^{\nu}}{x}$.
\vspace{2mm}
{\bf Proof:} $R_{0,\nu}(x;q)$ and $R_{1,\nu}(x;q)$ follow
immediately
from (\ref{recjj2}) and (\ref{recjj1}). In order to shorten the
notation
we introduce
\begin{eqnarray}
\label{cnu}C_{\nu}(x;q) =\frac{x
e^{-i\nu\pi}q^{-\frac{1}{2}\nu(\nu-1)}
(q;q)_{\infty}(q;q)_
{\infty}}{(q^{\nu};q)_{\infty}(q^{-\nu+1};q)_{\infty}}.
\end{eqnarray}
Now, starting with the left-hand side of (\ref{exlom1}), with $m$
replaced by $m+1$, we find, using the recurrence relations
(\ref{mix1})
and (\ref{recjj1}) with $\nu$ replaced by $\nu+m$,
\begin{eqnarray*}
R_{m+1,\nu}(x;q)&=&C_{\nu}(x;q) \left\{ J_{\nu-1}(x;q){\cal
J}_{\nu+m+1}
(x;q) - J_{\nu+m+1}(x;q){\cal J}_{\nu-1}(x;q)\right\}\\[1ex]
&=&C_{\nu}(x;q) \left\{J_{\nu-1}(x;q)\left[ \left\{ x +
\frac{1-q^{\nu
+m}}{x}\right\} {\cal J}_{\nu+m}(x;q) - {\cal
J}_{\nu+m-1}(x;q)\right]
\right.+ \\[1ex]
& &\hspace{1cm}-\left.{\cal J}_{\nu-1}(x;q)\left[
\left\{ x+\frac{1-q^{\nu+m}}
{x}\right\} J_{\nu+m}(x;q) - J_{\nu+m-1}(x;q) \right]
\right\}\\[1ex]
&=&\left\{ x + \frac{1-q^{\nu+m}}{x} \right\} R_{m,\nu}(x;q) -
R_{m-1,
\nu}(x;q)
\end{eqnarray*}
which proves the proposition.$\Box$
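The recurrence of Proposition 4.3 together with the expansion of Proposition 4.1 can be tested numerically, generating $R_{m,\nu}(x;q)$ from the recurrence and comparing $J_{\nu+m}(x;q)$ with $R_{m,\nu}(x;q)J_{\nu}(x;q) - R_{m-1,\nu+1}(x;q)J_{\nu-1}(x;q)$. A sketch (ours; note that this section works in base $q$, and the helper names are ad hoc):

```python
# Generate R_{m,nu}(x;q) from the three term recurrence of Prop. 4.3
# and verify the expansion of Prop. 4.1 against the series for J_nu.

def qpoch(a, q, n=100):
    """q-shifted factorial (a; q)_n; n=100 effectively gives (a; q)_infinity."""
    prod = 1.0
    for k in range(n):
        prod *= 1.0 - a * q ** k
    return prod

def jnu(nu, x, p, terms=60):
    """Hahn-Exton q-Bessel function J_nu(x; p) for x > 0, 0 < p < 1."""
    s = sum((-1) ** k * p ** (k * (k + 1) / 2) * x ** (2 * k)
            / (qpoch(p, p, k) * qpoch(p ** (nu + 1), p, k))
            for k in range(terms))
    return x ** nu * qpoch(p ** (nu + 1), p) / qpoch(p, p) * s

def R(m, nu, x, q):
    """q-Lommel function R_{m,nu}(x;q); R_{-1} = 0, R_0 = 1."""
    if m < 0:
        return 0.0
    r_prev, r = 1.0, x + (1 - q ** nu) / x   # R_0 and R_1
    if m == 0:
        return r_prev
    for k in range(1, m):
        # R_{k+1} = ((x^2 + 1 - q^{nu+k})/x) R_k - R_{k-1}
        r_prev, r = r, (x * x + 1 - q ** (nu + k)) / x * r - r_prev
    return r

q, nu, x = 0.5, 0.3, 0.8
for m in range(6):
    lhs = jnu(nu + m, x, q)                  # base q in this section
    rhs = (R(m, nu, x, q) * jnu(nu, x, q)
           - R(m - 1, nu + 1, x, q) * jnu(nu - 1, x, q))
    assert abs(lhs - rhs) < 1e-10
```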
\vspace{5mm}
From Proposition 4.3 it is easy to see that the function
$R_{m,\nu}$
is not a polynomial, but a function of the form
\begin{eqnarray*}
R_{m,\nu}(x;q) = \sum\limits_{n=-m}^m c_n x^n.
\end{eqnarray*}
So $R_{m,\nu}$ is in fact a Laurent polynomial.
This suggests the introduction of the function
$p_m(x;q)=x^m R_{m,\nu}(x;q)$.
From (\ref{recl2}) it follows that $p_m(x;q)$ satisfies the
three term recurrence relation
\begin{eqnarray}
\label{prec} & &\left( x^2 + 1 - q^{\nu+m} \right) p_m(x;q) =
p_{m+1}(x;q) + x^2 p_{m-1}(x;q),\\[1ex]
& &\hspace{1cm} p_0(x;q)=1 \hspace{5mm} \mbox{ and } \hspace{5mm}
p_1(x;q)=x^2+\left(1-q^{\nu}\right).\nonumber
\end{eqnarray}
It follows from (\ref{prec}) that $p_{m}$ is actually a polynomial
of
degree $m$ in $x^2$.
The polynomial $p_m$ does not satisfy the conditions of Favard's
theorem, cf. [10, II.3.2], so that it is not orthogonal with
respect
to a positive
measure. However, $p_m(x;q)$ possesses nice properties, which are
$q$-analogues of the well-known properties of the Lommel
polynomials.
\vspace{5mm}
First we will derive a generating function for the $q$-Lommel
polynomials. Multiply (\ref{prec}) with $t^{m+1}$ and sum from
$m=1$ to
$\infty$. When we introduce the generating function
\begin{eqnarray*}
G(x,t) = \sum\limits_{m=0}^{\infty} t^m p_m(x;q),
\end{eqnarray*}
we find
\begin{eqnarray*}
& &x^2t \left( G(x,t) - 1 \right) + t \left( G(x,t) - 1 \right) -
q^{\nu}t \left( G(x,qt) - 1 \right) \\[1ex]
& &\hspace{1cm}= G(x,t) - 1 - t\left(x^2 + 1 - q^{\nu}\right) +
x^2t^2 G(x,t),
\end{eqnarray*}
which simplifies to
\begin{eqnarray}
G(x,t) = \frac{1 - q^{\nu}t G(x,qt)}{(1-t)(1-x^2t)}.\label{geng}
\end{eqnarray}
When we iterate relation (\ref{geng}) and make use of the fact that
$G(x,0)=1$, we have
\begin{eqnarray*}
G(x,t) = \sum\limits_{j=0}^{\infty} \frac{q^{j\nu}
q^{\frac{1}{2}j(j-1)}
(-t)^j}{(t;q)_{j+1} (x^2t;q)_{j+1}}.
\end{eqnarray*}
This expression is valid for $t\neq q^{-p}, \, x^2t\neq q^{-p}, \,
p\in\Bbb Z_+$. So we have the following generating function for the
function $R_{m,\nu}(x;q)$:
\begin{eqnarray}
\label{genr}\sum\limits_{m=0}^{\infty} (xt)^m R_{m,\nu}(x;q) &=&
\sum\limits_{j=0}^{\infty} \frac{q^{j\nu} q^{\frac{1}{2}j(j-1)}
(-t)^j}
{(t;q)_{j+1} (tx^2;q)_{j+1}}\\[1ex]
&=& \frac{1}{(1-t)(1-x^2t)} \qhyp{2}{2}{q,0}{qt, qtx^2}{tq^{\nu}}.
\nonumber
\end{eqnarray}
\vspace{5mm}
The generating function (\ref{genr}) gives rise to an explicit
expression of $R_{m,\nu}(x;q)$.
Use the $q$-binomial theorem [3, (1.3.2)] twice to obtain
\begin{eqnarray*}
& &\frac{1}{(t;q)_{j+1}} =
\frac{(tq^{j+1};q)_{\infty}}{(t;q)_{\infty}}
= \qhyp{1}{0}{q^{j+1}}{-}{t},\hspace{5mm} \left| t \right| <
1,\\[1ex]
& &\frac{1}{(tx^2;q)_{j+1}} = \qhyp{1}{0}{q^{j+1}}{-}{tx^2},
\hspace{5mm} \left| tx^2 \right| < 1.
\end{eqnarray*}
This yields
\begin{eqnarray*}
\sum\limits_{m=0}^{\infty} (xt)^m R_{m,\nu}(x;q) =
\sum\limits_{j,k,n=0}
^{\infty} \frac{q^{j\nu} q^{\frac{1}{2}j(j-1)} (-1)^j (q^{j+1};q)_k
(q^{j+1};q)_n}{(q;q)_k (q;q)_n} x^{2n} t^{j+k+n}.
\end{eqnarray*}
Equating powers of $t$ yields
\begin{eqnarray*}
x^m R_{m,\nu}(x;q) &=& \sum\limits_{n=0}^m \frac{x^{2n}}{(q;q)_n}
\sum\limits_{j=0}^{m-n} \frac{q^{j\nu} q^{\frac{1}{2}j(j-1)} (-1)^j
(q^{j+1};q)_{m-n-j} (q^{j+1};q)_n}{(q;q)_{m-n-j}}\\[1ex]
&=& \sum\limits_{n=0}^m x^{2n} \sum\limits_{j=0}^{m-n}
\frac{(q^{n-m};q)_j (q^{n+1};q)_j}{(q;q)_j (q;q)_j} q^{j\nu}
q^{j(m-n)}
\\[1ex]
&=& \sum\limits_{n=0}^m x^{2n} \qhyp{2}{1}{q^{n-m},q^{n+1}}{q}
{q^{\nu+m-n}}.
\end{eqnarray*}
This expression for $R_{m,\nu}(x;q)$ shows the analytic dependence
on
$q^{\nu}$, cf. the remark following Lemma 4.2.
Finally, using Heine's transformation formula, cf. [3, (1.4.1)], we
obtain the explicit representation
\begin{eqnarray}
\label{explir}R_{m,\nu}(x;q) = \sum\limits_{n=0}^m x^{2n-m} \frac
{(q^{n+1};q)_{\infty}(q^{\nu};q)_{\infty}}{(q;q)_{\infty}
(q^{\nu+m-n};q)_{\infty}} \qhyp{2}{1}{q^{-n},q^{\nu+m-n}}{q^{\nu}}
{q^{n+1}}.
\end{eqnarray}
The explicit expression (\ref{explir}) can also be obtained from
(\ref{exlom1}) in combination with the explicit series
representations
(\ref{calj}) and (\ref{heq}). In this case a formula [9, (6.4.4)]
for
the product of two Hahn-Exton $q$-Bessel functions has to be used.
\vspace{5mm}
Formally we can obtain a $q$-analogue of Hurwitz's formula
(\ref{hur})
by taking termwise limits in the explicit representation
(\ref{explir}).
This gives
\begin{eqnarray*}
& &x^m R_{m,\nu}(x;q)
\stackrel{m\rightarrow\infty}{\longrightarrow}
\frac{(q^{\nu};q)_{\infty}}{(q;q)_{\infty}}
\sum\limits_{n=0}^{\infty}
x^{2n} (q^{n+1};q)_{\infty} \qhyp{2}{1}{q^{-n},0}{q^{\nu}}{q^{n+1}}
\\[1ex]
& &\hspace{3cm}= (q^{\nu+1};q)_{\infty} \sum\limits_{k=0}^{\infty}
\frac{q^k}{(q^{\nu};q)_k
(q;q)_k} \sum\limits_{n=k}^{\infty} \frac{(q^{-n};q)_k}{(q;q)_n}
q^{nk} x^{2n}.
\end{eqnarray*}
The inner sum equals
\begin{eqnarray*}
(-1)^k q^{\frac{1}{2}k(k-1)} x^{2k}
\frac{1}{(x^2;q)_{\infty}},
\end{eqnarray*}
so that we formally obtain
\begin{eqnarray}
x^m R_{m,\nu}(x;q) &\stackrel{m\rightarrow\infty}{\longrightarrow}&
\frac{(q^{\nu};q)_{\infty}}{(x^2;q)_{\infty}}
\qhyp{1}{1}{0}{q^{\nu}}
{qx^2}\nonumber\\[1ex]
&=& \frac{(q;q)_{\infty}}{(x^2;q)_{\infty}} x^{1-\nu}
J_{\nu-1}(x;q).
\end{eqnarray}
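As a numerical illustration of this formal limit (ours, not from the paper), one may compare $x^m R_{m,\nu}(x;q)$ for moderately large $m$, with $R_{m,\nu}$ generated by the recurrence of Proposition 4.3, against the right-hand side, taking $|x|<1$; the helper names are ad hoc.

```python
# Check the q-analogue of Hurwitz's formula:
#   x^m R_{m,nu}(x;q) -> (q;q)_oo / (x^2;q)_oo * x^(1-nu) J_{nu-1}(x;q).

def qpoch(a, q, n=100):
    """q-shifted factorial (a; q)_n; n=100 effectively gives (a; q)_infinity."""
    prod = 1.0
    for k in range(n):
        prod *= 1.0 - a * q ** k
    return prod

def jnu(nu, x, p, terms=60):
    """Hahn-Exton q-Bessel function J_nu(x; p) for x > 0, 0 < p < 1."""
    s = sum((-1) ** k * p ** (k * (k + 1) / 2) * x ** (2 * k)
            / (qpoch(p, p, k) * qpoch(p ** (nu + 1), p, k))
            for k in range(terms))
    return x ** nu * qpoch(p ** (nu + 1), p) / qpoch(p, p) * s

def R(m, nu, x, q):
    """q-Lommel function R_{m,nu}(x;q) via the recurrence of Prop. 4.3."""
    if m < 0:
        return 0.0
    r_prev, r = 1.0, x + (1 - q ** nu) / x
    if m == 0:
        return r_prev
    for k in range(1, m):
        r_prev, r = r, (x * x + 1 - q ** (nu + k)) / x * r - r_prev
    return r

q, nu, x = 0.5, 0.4, 0.6     # |x| < 1 for convergence of the limit
m = 60
lim = x ** m * R(m, nu, x, q)
target = (qpoch(q, q) / qpoch(x * x, q)) * x ** (1 - nu) * jnu(nu - 1, x, q)
assert abs(lim - target) < 1e-8
```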
\vspace{5mm}
\subsection{$q$-Lommel polynomials associated with (2.14)}
In this subsection we consider the $q$-analogues of the Lommel
polynomials that arise from the iteration of (\ref{mix2}). Note
that
the shift in the argument in (\ref{mix2}) will present some
difficulty. As a consequence, there is no unique definition of
these
$q$-Lommel polynomials. For a certain choice we show that the
(modified)
$q$-Lommel polynomials associated with the Jackson $q$-Bessel
function, see M.E.H. Ismail \cite{Is1}, are obtained in this
context.
These polynomials are orthogonal as shown by M.E.H. Ismail [4,
(4.16)].
For a slightly different definition of the $q$-Lommel polynomials
we
present a Hurwitz-type formula, cf. (\ref{hur}).
We start with iterating (\ref{mix2}).
\vspace{5mm}
{\bf Lemma 4.4.} {\sl There exist unique constants $a_i(\nu,m)$ and
$b_j(\nu,m)$ so that
\begin{eqnarray*}
J_{\nu+m}(xq^m;q^2) = \sum\limits_{i=0}^{[m/2]} \frac{a_i(\nu,
m)}{x^{m-2i}} J_{\nu}(xq^i;q^2) + \sum\limits_{j=0}^
{[(m-1)/2]} \frac{b_j(\nu,m)}{x^{m-1-2j}} J_{\nu-1}(xq^j;q^2),
\end{eqnarray*}
where $[a]$ denotes the largest integer smaller than or equal to
$a$.}
\vspace{2mm}
{\bf Proof:} The proof uses induction with respect to $m$. The case
$m=0$ is trivial and the case $m=1$ is just (\ref{mix2}). For the
induction step we use, cf. (\ref{mix2}),
\begin{eqnarray}
& &J_{\nu+m+1}(xq^{m+1};q^2) = J_{(\nu+m)+1}(xq^m\cdot q;q^2)
\label{ind}\\[1ex]
& &\hspace{1cm}= q^{-(\nu+m+1)} \frac{1-q^{2(\nu+m)}}{xq^m}
J_{\nu+m}(xq^m;q^2) -
q^{-(\nu+m+1)} J_{\nu+(m-1)}(xq\cdot q^{m-1};q^2).\nonumber
\end{eqnarray}
Use the induction hypothesis in the last two terms of (\ref{ind})
and
shift summation parameters to prove the lemma. $\Box$
\vspace{2mm}
Actually this proof yields a recurrence relation for the constants
$a_i(\nu, m+1)$. Explicitly,
\begin{eqnarray}
\label{reca}a_i(\nu,m+1) =
q^{-(\nu+1)}q^{-2m}(1-q^{2(\nu+m)})a_i(\nu,m)
- q^{-\nu + 2(i-1-m)} a_{i-1}(\nu,m-1),
\end{eqnarray}
with $a_i(\nu,m)=0$ for $i < 0$ or $i > [m/2]$.
A similar result holds for the constants $b_j(\nu, m)$, but these
constants will be related to the $a_i(\nu, m)$ in due course, cf.
Lemma
4.5. So we do not present the recurrence relation for the
$b_j(\nu, m)$'s.
\vspace{5mm}
Although Lemma 4.4 also holds with $J_{\nu}$ replaced by
${\cal J}_{\nu}$, cf. (\ref{calj}), this does not lead to nice
expressions as in the previous subsection due to the shift in the
argument in Lemma 4.4.
\vspace{5mm}
Motivated by Lemma 4.4 and the recurrence relation (\ref{reca}) we
define
\begin{eqnarray}
r_{m,\nu}(x;q^2) = \sum\limits_{i=0}^{[m/2]} \frac{a_i(\nu,m)}
{x^{m-2i}} q^{-i(i+1)},\label{rklein}
\end{eqnarray}
which is a polynomial of degree $m$ in $\frac{1}{x}$. The factor
$q^{-i(i+1)}$ in (\ref{rklein}) is chosen in such a way that
(\ref{reca}) yields a three term recurrence relation for the
$r_{m,\nu}(x;q^2)$. Explicitly
\begin{eqnarray}
\label{recr2}q^{2m} r_{m+1,\nu}(x;q^2) = q^{-(\nu+1)}
\frac{1-q^{2(\nu+m)}}{x} r_{m,\nu}(x;q^2) - q^{-(\nu+2)}
r_{m-1,\nu}(x;q^2),
\end{eqnarray}
which follows from multiplying (\ref{reca}) by
$x^{-m-1+2i} q^{-i(i+1)}$
and summing over $i$ from $0$ to $[(m+1)/2]$. The initial values of the
recurrence relation (\ref{recr2}) are
\begin{eqnarray}
\label{begrec}r_{0,\nu}(x;q^2) = 1, \hspace{1cm} r_{1,\nu}(x;q^2)
=
q^{-(\nu+1)} \left( 1-q^{2\nu} \right) x^{-1}.
\end{eqnarray}
\vspace{2mm}
In order to determine the constants $a_i(\nu, m)$ in (\ref{rklein})
we introduce the polynomial $h_{m,\nu}(x;q^2) = r_{m,\nu}\left(
\frac{1}{x};q^2\right)$. The three-term recurrence relation
(\ref{recr2})
with its initial values (\ref{begrec}) is then rewritten as
\begin{eqnarray}
& &q^{2m} h_{m+1,\nu}(x;q^2) = q^{-\nu-1} \left( 1-q^{2(\nu+m)}
\right)
x h_{m,\nu}(x;q^2) - q^{-\nu-2} h_{m-1,\nu}(x;q^2),\nonumber\\[1ex]
& &\hspace{1cm} h_{0,\nu}(x;q^2) = 1 \hspace{5mm} \mbox{ and }
\hspace{5mm} h_{1,\nu}(x;q^2) = q^{-\nu-1} \left(1-q^{2\nu} \right)
x.
\label{modrec}
\end{eqnarray}
\vspace{5mm}
Let us recall the (modified) $q$-Lommel polynomials associated with
the Jackson $q$-Bessel function, as presented by M.E.H. Ismail
\cite{Is1}.
Define, cf. \cite[(3.6)]{Is1},
\begin{eqnarray}
\tilde{h}_{m,\nu}(x;q^2) = \sum\limits_{j=0}^{[m/2]}
\frac{(2x)^{m-2j}
(-1)^j (q^{2\nu};q^2)_{m-j} (q^2;q^2)_{m-j}}{(q^2;q^2)_j
(q^{2\nu};q^2)_j (q^2;q^2)_{m-2j}}q^{2j(j+\nu-1)},\label{his}
\end{eqnarray}
then the set of polynomials $\tilde{h}_{m,\nu}(x;q^2)$ satisfies
the
three-term recurrence relation, cf. \cite[(1.22)]{Is1},
\begin{eqnarray}
& &\tilde{h}_{m+1,\nu}(x;q^2) = 2x \left( 1-q^{2(\nu+m)} \right)
\tilde{h}_{m,\nu}(x;q^2) - q^{2(m+\nu-1)}
\tilde{h}_{m-1,\nu}(x;q^2),
\nonumber\\[1ex]
& &\hspace{1cm} \tilde{h}_{0,\nu}(x;q^2) = 1 \hspace{5mm} \mbox{
and }
\hspace{5mm} \tilde{h}_{1,\nu}(x;q^2) = 2x \left(1-q^{2\nu}
\right).
\label{modis}
\end{eqnarray}
A straightforward calculation shows that (\ref{modrec}) and
(\ref{modis}) are equivalent. It suffices to take
\begin{eqnarray}
h_{m,\nu}(x;q^2) = q^{-\frac{3}{2}m\nu - m^2} \tilde{h}_{m,\nu}
(q^{\frac{\nu}{2}}x/2;q^2).\label{hish}
\end{eqnarray}
From (\ref{hish}), (\ref{his}) and (\ref{rklein}) we obtain the
explicit
expression
\begin{eqnarray}
a_i(\nu,m) = q^{-m(m+\nu)} q^{i(3i+\nu-1)} \frac{(-1)^i
(q^{2\nu};q^2)_{m-i} (q^2;q^2)_{m-i}}{(q^2;q^2)_i (q^{2\nu};q^2)_i
(q^2;q^2)_{m-2i}}.\label{expla}
\end{eqnarray}
\vspace{5mm}
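As a numerical sanity check, the following sketch (with arbitrarily chosen parameter values) compares the polynomials generated by the recurrence (\ref{recr2}) with the explicit expression obtained from (\ref{expla}) and (\ref{rklein}):

```python
# Check the explicit expression (expla) for a_i(nu, m) against the recurrence (recr2)
def qpoch(a, q, n):
    # q-shifted factorial (a; q)_n = prod_{k=0}^{n-1} (1 - a q^k)
    out = 1.0
    for k in range(n):
        out *= 1.0 - a * q**k
    return out

def a_coef(nu, m, i, q):
    Q, A = q*q, q**(2*nu)   # base q^2, argument q^{2 nu}
    return (q**(-m*(m + nu)) * q**(i*(3*i + nu - 1)) * (-1)**i
            * qpoch(A, Q, m - i) * qpoch(Q, Q, m - i)
            / (qpoch(Q, Q, i) * qpoch(A, Q, i) * qpoch(Q, Q, m - 2*i)))

def r_explicit(m, nu, x, q):
    # (rklein): r_{m,nu}(x;q^2) = sum_i a_i(nu,m) q^{-i(i+1)} x^{2i-m}
    return sum(a_coef(nu, m, i, q) * q**(-i*(i + 1)) * x**(2*i - m)
               for i in range(m//2 + 1))

q, nu, x, M = 0.5, 0.75, 1.3, 8
r = [1.0, q**(-nu - 1) * (1 - q**(2*nu)) / x]   # initial values (begrec)
for m in range(1, M):
    # (recr2): q^{2m} r_{m+1} = q^{-(nu+1)}(1-q^{2(nu+m)})/x * r_m - q^{-(nu+2)} r_{m-1}
    r.append(q**(-2*m) * (q**(-nu - 1) * (1 - q**(2*(nu + m))) / x * r[m]
                          - q**(-nu - 2) * r[m - 1]))
for m in range(M + 1):
    assert abs(r[m] - r_explicit(m, nu, x, q)) <= 1e-9 * max(1.0, abs(r[m]))
```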
Now that we have determined the value of $a_i(\nu,m)$ we present
its
link to the constants $b_j(\nu,m)$.
\vspace{2mm}
{\bf Lemma 4.5.}
\begin{eqnarray}
b_j(\nu,m+1) = -q^{2j-m-\nu-1} a_j(\nu+1,m).
\end{eqnarray}
\vspace{2mm}
{\bf Proof:} Consider
\begin{eqnarray}
J_{\nu+m+1}(xq^{m+1};q^2) = J_{(\nu+1)+m}(xq
\cdot q^m;q^2)\label{lem45}
\end{eqnarray}
and apply Lemma 4.4 with $\nu$ replaced by $\nu+1$ to the right
hand
side of (\ref{lem45}). In the resulting sum involving the
Hahn-Exton
$q$-Bessel function of order $\nu+1$ we use (\ref{mix2}).
Comparison
of the coefficients of $x^{2i-m} J_{\nu-1}(xq^i;q^2)$ with the ones
obtained by applying Lemma 4.4 with $m$ replaced by $m+1$ to the
left
hand side of (\ref{lem45}) proves the lemma. $\Box$
\vspace{2mm}
In the proof of Lemma 4.5 we can also compare the coefficients of
$x^{2i-m-1} J_{\nu}(xq^i;q^2)$ and this results in
\begin{eqnarray}
\label{abcom}a_i(\nu,m+1) = q^{i-m-\nu-1} \left( 1-q^{2\nu} \right)
a_i(\nu+1,m) + q^{2i-m-1} b_{i-1}(\nu+1,m).
\end{eqnarray}
Using Lemma 4.5 and (\ref{rklein}) we can rewrite (\ref{abcom}) as
\begin{eqnarray}
\label{recrecr}r_{m+1,\nu}(x;q^2) &=&
\frac{1}{x}\left(1-q^{2\nu}\right)
q^{-(\frac{1}{2}m+\nu+1)} r_{m,\nu+1}(xq^{\frac{1}{2}};q^2)\\[1ex]
& &\hspace{3cm}- q^{-(m+\nu+3)} r_{m-1,\nu+2}(xq;q^2).\nonumber
\end{eqnarray}
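The shifted recurrence (\ref{recrecr}) can likewise be tested numerically against the explicit expression (\ref{expla}); the following sketch does so for a few small values of $m$ and $\nu$:

```python
# Verify (recrecr) numerically, with r_{m,nu}(x;q^2) built from (expla) and (rklein)
def qpoch(a, q, n):
    out = 1.0
    for k in range(n):
        out *= 1.0 - a * q**k
    return out

def r_val(m, nu, x, q):
    Q, A = q*q, q**(2*nu)
    tot = 0.0
    for i in range(m//2 + 1):
        a_i = (q**(-m*(m + nu)) * q**(i*(3*i + nu - 1)) * (-1)**i
               * qpoch(A, Q, m - i) * qpoch(Q, Q, m - i)
               / (qpoch(Q, Q, i) * qpoch(A, Q, i) * qpoch(Q, Q, m - 2*i)))
        tot += a_i * q**(-i*(i + 1)) * x**(2*i - m)
    return tot

q, x = 0.55, 1.2
for nu in (0.3, 1.1):
    for m in range(1, 6):
        lhs = r_val(m + 1, nu, x, q)
        # right-hand side of (recrecr): nu -> nu+1 at argument x q^{1/2}, nu -> nu+2 at x q
        rhs = ((1 - q**(2*nu)) * q**(-(0.5*m + nu + 1)) * r_val(m, nu + 1, x*q**0.5, q) / x
               - q**(-(m + nu + 3)) * r_val(m - 1, nu + 2, x*q, q))
        assert abs(lhs - rhs) <= 1e-9 * max(1.0, abs(lhs))
```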
Since we have identified the $r_{m,\nu}(x;q^2)$ with $q$-analogues of
the Lommel polynomials associated with Jackson's $q$-Bessel functions,
Hurwitz's formula, cf. M.E.H. Ismail \cite[(3.10)]{Is1}, holds for
$r_{m,\nu}(x;q^2)$. We want a $q$-analogue of Hurwitz's formula
involving the Hahn-Exton $q$-Bessel function instead, and to this end
we make use of the following $q$-analogue of the Lommel polynomial:
\begin{eqnarray}
\tilde{r}_{m,\nu}(x;q^2) = \sum\limits_{i=0}^{[m/2]}
\frac{a_i(\nu,m)}
{x^{m-2i}} q^{-2i(i-1)}.\label{tilder}
\end{eqnarray}
From (\ref{reca}) we see that it satisfies the recurrence relation
\begin{eqnarray}
\tilde{r}_{m+1,\nu}(x;q^2) = x^{-1} q^{-\nu-1-2m}
\left(1-q^{2(\nu+m)}
\right) \tilde{r}_{m,\nu}(x;q^2) - q^{1-\nu-3m} \tilde{r}_{m-1,\nu}
(q^{-1}x;q^2).\label{rectilder}
\end{eqnarray}
Note that the last term of (\ref{rectilder}) involves a rescaling of
the argument, so that (\ref{rectilder}) is not a three-term recurrence
relation of the standard form; it follows from Favard's theorem, cf.
[10, II.3.2], that the $\tilde{r}_{m,\nu}\left(\frac{1}{x};q^2\right)$
do not form a set of orthogonal polynomials. However, from
(\ref{expla}) and (\ref{tilder}) we get
\begin{eqnarray}
\tilde{r}_{m,\nu}(x;q^2) &=& x^{-m}q^{-m(m+\nu)} (q^{2\nu};q^2)_m
\label{exrtilde}\\[1ex]
&\times&\qqhyp{5}{3}
{q^{1-m}, -q^{1-m}, q^{-m},
-q^{-m}, 0}{q^{2\nu}, q^{2(1-m-\nu)}, q^{-2m}}
{x^2q^{2-\nu}}\nonumber
\end{eqnarray}
and from (\ref{exrtilde}) we obtain the following $q$-analogue of
Hurwitz's formula
\begin{eqnarray*}
x^m q^{m(m+\nu)} \tilde{r}_{m,\nu}(x;q^2)
\stackrel{m\rightarrow\infty}
{\longrightarrow}
(q^2;q^2)_{\infty} x^{1-\nu} q^{\frac{1}{2}\nu(1-\nu)}
J_{\nu-1}(xq^{\frac{\nu}{2}};q^2),
\end{eqnarray*}
by taking formal termwise limits in (\ref{exrtilde}).
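This limit can be checked numerically. The sketch below assumes the standard series definition of the Hahn-Exton $q$-Bessel function (cf. \cite{Sw1}), namely $J_{\nu}(x;q) = x^{\nu}\frac{(q^{\nu+1};q)_{\infty}}{(q;q)_{\infty}}\sum_{k\geqslant 0}\frac{(-1)^k q^{k(k+1)/2}x^{2k}}{(q^{\nu+1};q)_k(q;q)_k}$, and absorbs the prefactor $x^m q^{m(m+\nu)}$ into the sum defining $\tilde{r}_{m,\nu}$ to avoid large intermediate values:

```python
# Numerical check of the q-analogue of Hurwitz's formula at moderate m
def qpoch(a, q, n):
    out = 1.0
    for k in range(n):
        out *= 1.0 - a * q**k
    return out

def qpoch_inf(a, q, terms=400):
    # truncated infinite q-Pochhammer; terms decay geometrically for 0 < q < 1
    return qpoch(a, q, terms)

def hahn_exton_J(nu, x, q, terms=60):
    # assumed standard normalization of the Hahn-Exton q-Bessel function
    pref = x**nu * qpoch_inf(q**(nu + 1), q) / qpoch_inf(q, q)
    s = sum((-1)**k * q**(k*(k + 1)/2) * x**(2*k)
            / (qpoch(q**(nu + 1), q, k) * qpoch(q, q, k)) for k in range(terms))
    return pref * s

q, nu, x, m = 0.5, 0.75, 0.8, 20
Q, A = q*q, q**(2*nu)
# x^m q^{m(m+nu)} * rtilde_{m,nu}(x;q^2), prefactor absorbed term by term
lhs = sum((-1)**i * q**(i*(i + nu + 1)) * x**(2*i)
          * qpoch(A, Q, m - i) * qpoch(Q, Q, m - i)
          / (qpoch(Q, Q, i) * qpoch(A, Q, i) * qpoch(Q, Q, m - 2*i))
          for i in range(m//2 + 1))
rhs = (qpoch_inf(Q, Q) * x**(1 - nu) * q**(0.5*nu*(1 - nu))
       * hahn_exton_J(nu - 1, x*q**(nu/2), Q))
assert abs(lhs - rhs) < 1e-8
```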
% End of ``On the zeros of the Hahn-Exton $q$-Bessel function and associated $q$-Lommel polynomials'' (arXiv:math/9703215).

% ``Linear inequalities in primes'' (arXiv:1901.04855).

\begin{abstract}
In this paper we prove an asymptotic formula for the number of solutions in prime numbers to systems of simultaneous linear inequalities with algebraic coefficients. For $m$ simultaneous inequalities we require at least $m+2$ variables, improving upon existing methods, which generically require at least $2m+1$ variables. Our result also generalises the theorem of Green-Tao-Ziegler on linear equations in primes. Many of the methods presented apply for arbitrary coefficients, not just for algebraic coefficients, and we formulate a conjecture concerning the pseudorandomness of sieve weights which, if resolved, would remove the algebraicity assumption entirely.
\end{abstract}

\section{Introduction}
\label{section introduction}
Fourier analysis is a vital tool in the study of diophantine problems. In recent years, however, new tools have been developed which can prove asymptotic formulae for the number of solutions to certain systems even when the Fourier-analytic approach is not known to succeed. In particular, in \cite{GT10} Green and Tao established an asymptotic formula for the number of prime solutions to generic systems of $m$ simultaneous linear equations in at least $m+2$ variables. Their result was conditional on various conjectures, but these conjectures were later proved by the same authors and Ziegler, in the series of papers \cite{GT12}, \cite{GTa12} and \cite{GTZ12}.
\begin{Theorem}[Theorem 1.8, \cite{GT10}, Green-Tao-Ziegler]
\label{Theorem Green Tao}
Let $N$, $m$, and $d$ be natural numbers, with $d\geqslant m+2$, and let $C$ be a positive constant. Let $L = (\lambda_{ij})_{i\leqslant m,j\leqslant d}$ be an $m$-by-$d$ matrix with integer coefficients, with rank $m$, and assume the non-degeneracy condition that the only element of the row-space of $L$ over $\mathbb{Q}$ with two or fewer non-zero entries is the zero vector. Let $\mathbf{b} \in \mathbb{Z}^m$, and suppose that $\Vert \mathbf{b}\Vert_\infty \leqslant CN$ and that $\vert \lambda_{ij}\vert \leqslant C$ for all $i$ and $j$. Let $K \subset [-N,N]^d$ be a convex set. Then
\begin{equation}
\label{with non archimdedean factors}
\vert \{ \mathbf{p} \in K : L\mathbf{p} = \mathbf{b}\}\vert = \Big(\alpha_\infty\prod\limits_{p}\alpha_p\Big) (\log N)^{-d} + o_{C,d,m}(N^{d-m}(\log N)^{-d}),
\end{equation}
\noindent where the \emph{local densities} $\alpha_p$ are given, for each prime $p$, by
\begin{equation}
\alpha_p : = \lim\limits_{M \rightarrow \infty} \frac{1}{(2M)^d} \sum\limits_{\substack{\mathbf{n} \in [-M,M]^d \\ L\mathbf{n} = \mathbf{b} \\ (n_i,p) = 1 \, \text{for all } i}} \Big(1 +\frac{1}{p-1}\Big)^d\nonumber
\end{equation}
\noindent and the \emph{global factor} $\alpha_\infty$ is given by
\begin{equation}
\alpha_\infty : = \vert \{ \mathbf{n} \in \mathbb{Z}^d : \mathbf{n} \in K, L\mathbf{n} = \mathbf{b}, n_i \geqslant 0 \, \text{for all } i\} \vert.\nonumber
\end{equation}
\end{Theorem}
\noindent Here and throughout, $p$ denotes a prime, $\mathbf{p}$ denotes a vector all of whose coordinates are prime, and $\mathbf{n}$ denotes a vector all of whose coordinates are integers $n_i$. The expression $(n_i,p)$ denotes the greatest common divisor of $n_i$ and $p$.
To give a concrete example to which this result may be applied, by considering \[ L = \left(\begin{matrix} 1 & -2 & 1 & 0 \\ 0 & 1 & -2 & 1 \end{matrix} \right) , \qquad \mathbf{b} = \mathbf{0}\] one may deduce an asymptotic formula for the number of four-term arithmetic progressions of primes that are less than $N$.
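The following small sketch illustrates what is being counted: it enumerates the $4$-term arithmetic progressions of primes below a small cut-off and confirms that each lies in the kernel of the matrix $L$ above (the cut-off $N = 100$ is for illustration only):

```python
# Enumerate 4-term arithmetic progressions of primes below N and
# confirm that each one satisfies L p = 0 for the matrix L above
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

N = 100
L = [[1, -2, 1, 0], [0, 1, -2, 1]]
aps = [(a, a + d, a + 2*d, a + 3*d)
       for a in range(2, N + 1) if is_prime(a)
       for d in range(1, (N - a)//3 + 1)
       if all(is_prime(a + k*d) for k in (1, 2, 3))]

assert (5, 11, 17, 23) in aps          # a familiar example
# every 4-term AP of primes lies in the kernel of L
assert all(sum(row[j] * t[j] for j in range(4)) == 0 for t in aps for row in L)
```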
For $m \geqslant 2$, Theorem \ref{Theorem Green Tao} is stronger than any similar statement that may be proved using the Fourier transform alone. Indeed, notwithstanding Balog's example \cite[Corollary 3]{Ba92} of a certain non-generic class of $m$ equations in $ m + \lceil \sqrt{2m}\rceil $ prime variables, generically the Fourier transform approach needs at least $2m + 1$ prime variables in order to succeed. The proof of Theorem \ref{Theorem Green Tao} rests on many creative innovations, in particular the authors' use of Gowers norms and their inverse theory, which is a subject that is now referred to as `higher order Fourier analysis'. The object of the present paper is to use certain aspects of this machinery to establish, in a related setting, an analogous reduction in the number of variables that are required to prove an asymptotic formula.\\
We will be concerned with diophantine inequalities, a topic that we first considered in \cite{Wa17}. Before giving our first main result (Theorem \ref{Main theorem simpler version}) let us briefly review some previous results concerning diophantine inequalities in the primes. Consider the following classical theorem of Baker.\footnote{In fact Baker proved a slightly different result, writing in the cited paper that the result we quote here followed easily from the then existing methods. Vaughan proved a similar result in \cite{Va74}.}
\begin{Theorem}[\cite{Ba67}, Baker]
\label{Theorem Baker}
Let $\varepsilon>0$, and let $\lambda_1,\lambda_2,\lambda_3\in\mathbb{R}\setminus \{0\}$ be three non-zero reals that are not all of the same sign. Furthermore, suppose that for all $\alpha\in\mathbb{R}\setminus\{0\}$ the relation $(\alpha\lambda_1,\alpha\lambda_2,\alpha\lambda_3)\notin \mathbb{Z}^3$ holds. Then there exist infinitely many triples of primes $(p_1,p_2,p_3)$ satisfying \begin{equation}
\label{the type of inequality Baker thought about}
\vert \lambda_1p_1 + \lambda_2p_2 + \lambda_3 p_3\vert \leqslant \varepsilon.
\end{equation}
\end{Theorem}
\begin{Remark}
\emph{The condition concerning the signs of $\lambda_1,\lambda_2,\lambda_3$ is clearly a necessary one, as otherwise there exist only finitely many solutions to (\ref{the type of inequality Baker thought about}) in the positive integers (and so certainly there exist only finitely many solutions in the primes). Regarding the other condition, the conclusion of Theorem \ref{Theorem Baker} may hold even if there exists some $\alpha \in \mathbb{R}\setminus \{0\}$ for which \[(\alpha\lambda_1,\alpha\lambda_2,\alpha\lambda_3) = (q_1,q_2,q_3) \in \mathbb{Z}^3.\] But then one is required to solve \[ \vert q_1p_1 + q_2p_2 + q_3p_3\vert \leqslant \varepsilon \alpha,\] which, if $\varepsilon$ is small enough, is equivalent to solving \[ q_1p_1 + q_2p_2 + q_3p_3 = 0.\] Theorem \ref{Theorem Green Tao} can then affirm that there are infinitely many solutions, provided that $q_1$, $q_2$, and $q_3$ satisfy certain local properties. This issue, of when an inequality can encode a certain equation with rational coefficients, will be an important theme of the paper.}
\end{Remark}
The classical approach to proving results such as Theorem \ref{Theorem Baker} involves Fourier analysis over $\mathbb{R}$, after having replaced the characteristic function of the interval $[-\varepsilon,\varepsilon]$ with a smoother cut-off function. This approach is known as the Davenport-Heilbronn method, having originated in a paper \cite{DaHe46} of those two authors. For a variety of technical reasons this method was, until relatively recently, unable to give an asymptotic formula for the number of solutions to (\ref{the type of inequality Baker thought about}) that satisfied $1\leqslant p_1,p_2,p_3\leqslant N$, or even give a lower bound of the expected order of magnitude (at least for arbitrary $N$). However, certain advances of Freeman \cite{Fr00, Fr02} enabled Parsell to achieve the second of these two goals.
\begin{Theorem}[Theorem 1, \cite{Pa02}, Parsell]
\label{Thm Parsel}
Let $\varepsilon>0$, and let $\lambda_1,\lambda_2,\lambda_3\in\mathbb{R}\setminus \{0\}$ be three non-zero reals that are not all of the same sign. Furthermore, suppose that for all $\alpha\in\mathbb{R}\setminus\{0\}$ the relation $(\alpha\lambda_1,\alpha\lambda_2,\alpha\lambda_3)\notin \mathbb{Z}^3$ holds. Then the number of prime triples $(p_1,p_2,p_3)$ satisfying $1\leqslant p_1,p_2,p_3\leqslant N$ and
\begin{equation}
\label{the type of inequality we're talking about}
\vert \lambda_1p_1 + \lambda_2p_2 + \lambda_3 p_3\vert \leqslant \varepsilon
\end{equation}
\noindent is $\Omega_{\lambda_1,\lambda_2,\lambda_3} (\varepsilon N^2 (\log N)^{-3}).$
\end{Theorem}
\noindent Since \cite{Pa02} was published, it has been understood that a very minor modification to Parsell's analytic method can be used to obtain an asymptotic expression for the number of solutions to (\ref{the type of inequality we're talking about}), namely \[C_{\lambda_1,\lambda_2,\lambda_3} \varepsilon N^2 \log^{-3} N + o_{\lambda_1,\lambda_2,\lambda_3}(N^2 \log^{-3} N),\] for some positive constant $C_{\lambda_1,\lambda_2,\lambda_3}$. Furthermore, in the case of $m$ simultaneous (rationally independent) inequalities of the form (\ref{the type of inequality we're talking about}), Parsell's method yields an asymptotic formula for the number of solutions in primes provided that the number of variables is at least $2m + 1$. In Appendix \ref{section an analytic argument} we take the opportunity to record the details of both the statement and the proof of this result.\\
In the main theorems of this paper (Theorem \ref{Main theorem simpler version} and Theorem \ref{Main theorem}) we specialise to the case of algebraic coefficients and reduce the number of variables that are required from $2m+1$ to $m+2$. Our first result does not concern the most general type of diophantine inequality, but nonetheless it enjoys several applications. To state it, we recall the notion of the \emph{dual degeneracy variety}, which we defined in Definition 2.3 of \cite{Wa17} in order to manipulate the non-degeneracy conditions more succinctly.
\begin{Definition}[Dual degeneracy variety, \cite{Wa17}]
\label{Definition dual degeneracy variety}
Let $m,d$ be natural numbers satisfying $d\geqslant m+2$. Let $V^*_{\operatorname{degen}}(m,d)$ denote the set of all $m$-by-$d$ matrices with real coefficients that contain a non-zero row-vector in their row-space over $\mathbb{R}$ that has two or fewer non-zero co-ordinates. We call $V^*_{\operatorname{degen}}(m,d)$ the \emph{dual degeneracy variety}.
\end{Definition}
\noindent For example, the matrix \[ \left ( \begin{matrix} 1 & -2 & 1 & 0 \\ 2 & -4 & 0 & \sqrt{3} \end{matrix} \right) \] is in $V_{\operatorname{degen}}^*(2,4)$, since the vector $(0,0,-2,\sqrt{3})$ lies in its row space. As is explained at length in \cite{Wa17}, if one wishes to count solutions to an inequality given by $L$ using a method involving Gowers norms then one can only possibly succeed if $L \notin V_{\operatorname{degen}}^*(m,d)$. Returning to Theorem \ref{Theorem Green Tao}, we observe that the non-degeneracy condition in the statement of that theorem is exactly the condition that $L\notin V_{\operatorname{degen}}^*(m,d)$. If $d=m+2$, non-degeneracy in this sense is easy to detect. Indeed, $L \notin V_{\operatorname{degen}}^*(m,d)$ if and only if the determinants of all the $m$-by-$m$ submatrices of $L$ are non-vanishing.
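The determinant criterion is easy to test in code; the sketch below applies it to the degenerate example above and to the $4$-term progression matrix from the introduction:

```python
# For d = m + 2, L lies outside V*_degen(m, d) exactly when every
# m-by-m minor is non-zero; test this on the two example matrices
import math
from itertools import combinations

def minors_2x2(L):
    r1, r2 = L
    return {cols: r1[cols[0]]*r2[cols[1]] - r1[cols[1]]*r2[cols[0]]
            for cols in combinations(range(len(r1)), 2)}

L_degen = [[1, -2, 1, 0], [2, -4, 0, math.sqrt(3)]]   # the example above
L_4ap   = [[1, -2, 1, 0], [0, 1, -2, 1]]              # 4-term AP matrix

# the witness vector (0, 0, -2, sqrt(3)) is row2 - 2*row1
witness = [L_degen[1][j] - 2*L_degen[0][j] for j in range(4)]
assert witness == [0, 0, -2, math.sqrt(3)]

assert min(abs(v) for v in minors_2x2(L_degen).values()) == 0   # degenerate
assert min(abs(v) for v in minors_2x2(L_4ap).values()) > 0      # non-degenerate
```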
\begin{Remark}
\label{Remark dual CS}
\emph{The above notion is `dual' to the notion of finite Cauchy-Schwarz complexity (see Definition \ref{Definition finite complexity}), in the sense that $L$ is in the dual degeneracy variety if and only if $\ker L$ may be parametrised by a system of linear forms with finite Cauchy-Schwarz complexity. In \cite{Wa17} we also introduced a \emph{degeneracy variety} in order to manipulate quantitative versions of this fact, but this will not be necessary here. For more on these issues, we invite the reader to consult Sections 6 and 7 of \cite{Wa17}.}
\end{Remark}
We are now ready to state our first main result. In the statement below, $[N]$ refers to the set $\mathbb{N} \cap [1,N]$ and the function $1_{[-\varepsilon,\varepsilon]^m}$ refers to the indicator function of the set $[-\varepsilon,\varepsilon]^m$.
\begin{Theorem}[Main theorem, purely irrational version]
\label{Main theorem simpler version}
Let $N,m,d$ be natural numbers, with $d\geqslant m+2$, and let $C,\varepsilon$ be positive constants. Let $L$ be an $m$-by-$d$ real matrix with algebraic coefficients and rank $m$. Suppose that $L\notin V^*_{\operatorname{degen}}(m,d)$. Suppose further that for all $\boldsymbol{\alpha}\in\mathbb{R}^m\setminus \{ \mathbf{0}\}$ one has $L^T\boldsymbol{\alpha} \notin \mathbb{Z}^d$, i.e. suppose that $L$ is purely irrational in the sense of Definition 2.4 of \cite{Wa17}. Let $\mathbf{v}\in\mathbb{R}^m$ be any vector satisfying $\Vert\mathbf{v}\Vert_\infty\leqslant CN$. Then
\begin{equation}
\label{simpler asymptotic formula}
\sum\limits_{\mathbf{p} \in [N]^d} 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{p}+\mathbf{v}) = \frac{1}{\log^d N}\int\limits_{\mathbf{x} \in [0,N]^d} 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{x}+\mathbf{v}) \, d\mathbf{x} + o_{C,L,\varepsilon}(N^{d-m}(\log N)^{-d})
\end{equation}
as $N\rightarrow \infty$.
\end{Theorem}
\begin{Remark}
\emph{One notes that in the asymptotic formula (\ref{simpler asymptotic formula}) there is not a contribution from any non-archimedean local factors. In Theorem \ref{Main theorem} below, we will remove the supposition that there does not exist any non-zero vector $\boldsymbol{\alpha}\in\mathbb{R}^m\setminus \{\mathbf{0}\}$ for which $L^T\boldsymbol{\alpha} \in \mathbb{Z}^d$. Once these potential rational relations are permitted, one does indeed observe a contribution from local factors.}
\end{Remark}
\begin{Remark}
\label{Remark approximation of integral}
\emph{When $\mathbf{v} = \mathbf{0}$, it is straightforward to show (see Lemma \ref{Lemma singular integral}) that the main term in (\ref{simpler asymptotic formula}) is equal to \[C_L \varepsilon^m N^{d-m} (\log N)^{-d} + o_{L,\varepsilon}(N^{d-m}(\log N)^{-d}),\] where $C_L$ is a constant depending only on $L$. The positivity of $C_L$ may be determined in practice.}
\end{Remark}
\begin{Remark}
\label{Remark fixed L}
\emph{The reader may note that Theorem \ref{Main theorem simpler version} insists upon a fixed matrix $L$, rather than a matrix $L$ with bounded coefficients (as appeared in Theorem \ref{Theorem Green Tao}). In our previous work \cite[Theorem 2.10]{Wa17}, performed in the context of linear inequalities weighted by bounded functions, we proved a result that enabled $L$ to vary, as long as the coefficients of $L$ were bounded and $L$ was bounded away from $V_{\operatorname{degen}}^*(m,d)$. In the present paper there are many auxiliary linear equalities $L^\prime$, which will also need to enjoy such a quantitative non-degeneracy. We found keeping track of these features throughout the whole argument to be extremely complicated, but in principle it should be possible to do so.}
\end{Remark}
\begin{Remark}
\emph{Theorem \ref{Main theorem simpler version} strengthens Theorem \ref{Parsell general asymptotic} of Parsell, in the sense that the number of variables has been reduced (from $2m+1$ to $m+2$). But unfortunately this has been achieved at the cost of imposing an algebraicity assumption on the coefficients of $L$. The situation is regrettable as, under this assumption, the classical Davenport-Heilbronn method alone is adequate to count the number of prime solutions to $m$ simultaneous linear inequalities in $2m+1$ variables, without needing the developments of Parsell. We should stress that most of our method does not rely on the algebraicity assumption. Indeed, the conclusions of Theorems \ref{Main theorem simpler version} and \ref{Main theorem} do in fact hold for some explicit set of matrices $L$ that has full Lebesgue measure (see Remark \ref{Remark almost all}). Unfortunately, owing to the intricacy of the linear-algebraic manipulations in Section \ref{section Cauchy Schwarz argument}, we have not been able to formulate a clean or enlightening characterisation of this full-measure set. We have decided to clarify the exposition of the paper by working with algebraic coefficients throughout.}
\end{Remark}
Let us give a concrete example of a linear inequality to which Theorem \ref{Main theorem simpler version} applies but the Davenport-Heilbronn method does not.
\begin{Example}
\label{Example surd primes}
Let $\varepsilon>0$. Then the number of prime quadruples $(p_1,p_2,p_3,p_4)\in [N]^4$ satisfying
\begin{align}
\label{solutions in concrete example}
\vert p_1 + p_3\sqrt{2} - p_4\sqrt{3} \vert &\leqslant \varepsilon\nonumber\\
\vert p_2+ p_3\sqrt{5} - p_4\sqrt{7} \vert &\leqslant \varepsilon
\end{align}
\noindent is equal to $C\varepsilon^2 N^2 (\log N)^{-4} + o_{\varepsilon}( N^2 (\log N)^{-4} )$, for some positive constant $C$.
\end{Example}
\begin{proof}
Take \[ L = \left(\begin{matrix} 1 & 0 & \sqrt{2} & -\sqrt{3} \\ 0 & 1 & \sqrt{5} & -\sqrt{7} \end{matrix}\right).\] Then $L$ certainly satisfies the hypotheses of Theorem \ref{Main theorem simpler version}, since all the $2$-by-$2$ submatrices have non-zero determinant and square roots of distinct primes are rationally independent. Taking $\mathbf{v} = \mathbf{0}$, one may therefore apply Theorem \ref{Main theorem simpler version}.
This yields an asymptotic expression for the number of solutions to (\ref{solutions in concrete example}) with the main term in the form of an integral. Since $\mathbf{v} = \mathbf{0}$, by Remark \ref{Remark approximation of integral} we may express the main term as $C_L \varepsilon^2 N^2 (\log N)^{-4}$ for some constant $C_L$. Explicitly, from Lemma \ref{Lemma singular integral} and expression (\ref{CL}) therein, \[ \frac{C_L}{4} = \int\limits_{\substack{0\leqslant x_1,x_2\leqslant 1 \\ x_1 \mathbf{v^{(1)}} + x_2 \mathbf{v^{(2)}} \in [0,1]^2}} 1 \, dx_1 \, dx_2,\] where \[ \mathbf{v^{(1)}} = \left( \begin{matrix}
-\sqrt{2} \\ -\sqrt{5}\end{matrix} \right), \quad \mathbf{v^{(2)}} = \left( \begin{matrix} \sqrt{3} \\ \sqrt{7} \end{matrix} \right).\] A numerical computation gives $C_L \approx 1.394$, which is in particular positive.
\end{proof}
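The numerical value of $C_L$ quoted above can be reproduced with a simple one-dimensional quadrature, since for each $x_1$ the admissible values of $x_2$ form an interval (the grid size here is an arbitrary choice):

```python
# Midpoint-rule evaluation of the integral for C_L / 4 from the proof above
import math

s2, s3, s5, s7 = (math.sqrt(k) for k in (2, 3, 5, 7))
n = 100000
area = 0.0
for k in range(n):
    x1 = (k + 0.5) / n
    # constraints: 0 <= -sqrt(2) x1 + sqrt(3) x2 <= 1 and 0 <= -sqrt(5) x1 + sqrt(7) x2 <= 1
    lo = max(0.0, s2*x1/s3, s5*x1/s7)
    hi = min(1.0, (s2*x1 + 1)/s3, (s5*x1 + 1)/s7)
    area += max(0.0, hi - lo) / n
C_L = 4 * area
assert abs(C_L - 1.3941) < 1e-3
```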
Theorem \ref{Main theorem simpler version} may also be used to count prime solutions to other systems.
\begin{Corollary}
\label{Example irrational szemeredi}
Let $(\theta_1,\dots,\theta_d)^T = \boldsymbol{\theta}\in \mathbb{R}^d$ be a real vector with algebraic coefficients. Suppose that there does not exist any $\mathbf{k}\in\mathbb{Z}^d\setminus \{\mathbf{0}\}$ that satisfies $\mathbf{k}\cdot \boldsymbol{\theta} \in \mathbb{Z}$. Let $\mathcal{P}$ denote the set of primes. Then \begin{equation}
\label{expression for corollary}
\sum\limits_{p_1,p_2\leqslant N} \prod\limits_{j=1}^d 1_{\mathcal{P}\cap [N]}(\lfloor p_1+ p_2\theta_j \rfloor) = C_{\boldsymbol{\theta}}\frac{N^2}{\log ^d N} + o_{\boldsymbol{\theta}}\Big(\frac{N^2} {\log ^d N}\Big),
\end{equation} for some positive constant $C_{\boldsymbol{\theta}}$.
\end{Corollary}
\noindent Here $\lfloor x \rfloor$ denotes the floor function of $x$, i.e. the greatest integer that is at most $x$.
\begin{proof}
We can expand the left-hand side of (\ref{expression for corollary}) as \begin{equation*}
\sum\limits_{p_1,p_2\leqslant N}\sum\limits_{p_3,\dots,p_{d+2} \leqslant N} \prod\limits_{j=3}^{d+2} 1_{[0,1)}(p_1 + p_2\theta_{j-2} - p_j).
\end{equation*} Observe that the equation $p_1 + p_2\theta_{j-2} - p_j = 1$ has no solutions, since $\theta_{j-2}$ is irrational by assumption. So the above is equal to \[\sum\limits_{p_1,p_2\leqslant N}\sum\limits_{p_3,\dots,p_{d+2} \leqslant N} \prod\limits_{j=3}^{d+2} 1_{[0,1]}(p_1 + p_2\theta_{j-2} - p_j),\] and this in turn is equal to
\begin{equation}
\label{the expression that were going to apply the simple theorem to}
\sum\limits_{p_1,p_2\leqslant N}\sum\limits_{p_3,\dots,p_{d+2} \leqslant N} 1_{[0,1]^d}(p_1\mathbf{1} + p_2 \boldsymbol{\theta} - \mathbf{p_3^{d+2}}),
\end{equation} where $\mathbf{1}\in \mathbb{R}^d$ is the vector with every coordinate equal to $1$, and $\mathbf{p_3^{d+2}} : = (p_3,\dots,p_{d+2})^T$.
Let $L$ be the $d$-by-$(d+2)$ matrix \[L = \left( \begin{matrix}
\mathbf{1} & \boldsymbol{\theta} & -I\end{matrix} \right).\] Then (\ref{the expression that were going to apply the simple theorem to}) is equal to \[ \sum\limits_{\mathbf{p} \in [N]^{d+2}} 1_{[-\frac{1}{2},\frac{1}{2}]^d}(L\mathbf{p} + \mathbf{v}),\] where $\mathbf{v}: = (-1/2,\dots,-1/2)^T$.
One sees that $L$ satisfies the hypotheses of Theorem \ref{Main theorem simpler version}. Indeed, note first that if there exists some $\boldsymbol{\alpha} \in \mathbb{R}^d \setminus \{\mathbf{0}\}$ for which $L^T \boldsymbol{\alpha} \in \mathbb{Z}^{d+2}$ then by considering the final $d$ coordinates of $L^T \boldsymbol{\alpha}$ it follows that such an $\boldsymbol{\alpha}$ must have integer coordinates. But by considering the second coordinate of $L^T \boldsymbol{\alpha}$ it follows that $\boldsymbol{\alpha} \cdot \boldsymbol{\theta} \in \mathbb{Z}$, which contradicts our assumptions on $\boldsymbol{\theta}$. Secondly, if $L$ were in $V_{\operatorname{degen}}^*(d,d+2)$ then either $\theta_i = 0$ for some index $i$, or $\theta_i = \theta_j$ for two different indices $i$ and $j$. Both of these possibilities are precluded by the assumptions on $\boldsymbol{\theta}$.
Therefore we may apply Theorem \ref{Main theorem simpler version}, and by Remark \ref{Remark approximation of integral} we get a main term of the form $C_{\boldsymbol{\theta}} N^2 (\log N)^{-d}$. Explicitly, using Lemma \ref{Lemma singular integral} and expression (\ref{CL}) as above, we have \[ C_{\boldsymbol{\theta}}= \int\limits_{\substack{0 \leqslant x_1,x_2\leqslant 1 \\ 0 \leqslant x_1 + \theta_i x_2 \leqslant 1 \text{ for all } i}} 1 \, dx_1 \, dx_2.\] For any vector $\boldsymbol{\theta}$ this integral is positive, and so the corollary is proved.
\end{proof}
Let us now present a theorem which does not require $L$ to be purely irrational. This is Theorem \ref{Main theorem} below, and we consider it to be our main result. \\
For ease of notation, we introduce the following definition.
\begin{Definition}
\label{Definition discrete solution count}
Let $N,m,d$ be natural numbers, and let $L:\mathbb{R}^d \longrightarrow \mathbb{R}^m$ be a linear map. Let $F:\mathbb{R}^d\rightarrow \mathbb{R}$ and $G:\mathbb{R}^m\rightarrow \mathbb{R}$ be functions with compact support. Let $\mathbf{v} \in\mathbb{R}^m$. Then, for functions $f_1,\dots,f_{d}:\mathbb{Z}\longrightarrow \mathbb{R}$, we define
\begin{equation}
\label{definiton of the solution count form}
T^{L,\mathbf{v}}_{F,G,N}(f_1,\dots,f_{d}) := \frac{1}{N^{d-m}}\sum\limits_{\mathbf{n}\in \mathbb{Z}^d}\Big(\prod\limits_{j=1}^{d}f_j(n_j)\Big)F(\mathbf{n}/N)G(L\mathbf{n}+\mathbf{v}).
\end{equation}
\end{Definition}
It will be convenient to introduce a logarithmic weighting to the primes. To this end, following \cite{GT10}, we define the function $\Lambda^\prime : \mathbb{Z} \longrightarrow \mathbb{R}$ by \[ \Lambda^\prime(n) : = \begin{cases}
\log n & n \text{ is prime}\\
0 & \text{otherwise}.
\end{cases}\]
\noindent The von Mangoldt function $\Lambda$ will not be needed in this paper.
Another notion from \cite{GT10} will be useful.
\begin{Definition}[Local von Mangoldt function]
\label{Definition local von M function}
For $q\geqslant 2$, the \emph{local von Mangoldt function} $\Lambda_{\mathbb{Z}/q\mathbb{Z}}:\mathbb{Z}\longrightarrow \mathbb{R}$ is the $q$-periodic function defined by $$\Lambda_{\mathbb{Z}/q\mathbb{Z}}(n) = \begin{cases}
\frac{q}{\varphi(q)} & (n,q)=1\\
0 & \text{otherwise.}
\end{cases}$$ We let $\Lambda_{\mathbb{Z}/q\mathbb{Z}}^+: \mathbb{Z} \longrightarrow \mathbb{R}$ denote the restriction of $\Lambda_{\mathbb{Z}/q\mathbb{Z}}$ to the non-negative integers, namely the function $\Lambda_{\mathbb{Z}/q\mathbb{Z}} 1_{[0,\infty)}$.
\end{Definition}
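One basic feature of this model is that $\Lambda_{\mathbb{Z}/q\mathbb{Z}}$ has mean exactly $1$ over a period, matching the average of $\Lambda^\prime$ predicted by the prime number theorem. A minimal sketch:

```python
# The local von Mangoldt function has mean exactly 1 over one period
from math import gcd

def local_von_mangoldt(q):
    phi = sum(1 for n in range(1, q + 1) if gcd(n, q) == 1)   # Euler totient
    return [q / phi if gcd(n, q) == 1 else 0.0 for n in range(q)]

for q in (2, 6, 30, 210):
    vals = local_von_mangoldt(q)
    assert abs(sum(vals) / q - 1.0) < 1e-12
```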
\noindent The local von Mangoldt function, when $q$ is the product of small primes, can be viewed as a model for the function $\Lambda^\prime$. This model\footnote{This is essentially the modified Cram\'{e}r random model.} is intimately connected to a technical device known as the $W$-trick, which we recall in Section \ref{section W trick and Gowers norms}. \\
For a function $F:\mathbb{R}^d \longrightarrow \mathbb{R}^m$ we define the Lipschitz constant of $F$ to be \[ \sup\limits_{\mathbf{x},\mathbf{y} \in \mathbb{R}^d} \frac{\Vert F(\mathbf{x}) - F(\mathbf{y})\Vert_\infty}{\Vert \mathbf{x} - \mathbf{y} \Vert_\infty },\] and call $F$ Lipschitz if this value is finite. \\
We may now state the main theorem.
\begin{Theorem}[Main theorem]
\label{Main theorem}
Let $N,m,d$ be natural numbers with $d\geqslant m+2$, and let $C,\varepsilon,\sigma$ be positive real parameters. Let $L:\mathbb{R}^d\longrightarrow \mathbb{R}^m$ be a surjective linear map with algebraic coefficients, and suppose that $L\notin V_{\operatorname{degen}}^*(m,d)$. Let $\mathbf{v} \in\mathbb{R}^m$ be any vector that satisfies $\Vert \mathbf{v}\Vert_\infty\leqslant CN$. Let $F:\mathbb{R}^d\longrightarrow [0,1]$ and $G: \mathbb{R}^m\longrightarrow [0,1]$ be compactly supported Lipschitz functions with Lipschitz constants at most $\sigma^{-1}$, and assume that $F$ is supported on $[-1,1]^d$ and $G$ is supported on $[-\varepsilon,\varepsilon]^m$. Let $w = w(N):= \log\log \log N$, assuming that $N$ is large enough for this function to be well defined, and let $W = W(N) := \prod\limits_{p\leqslant w} p$. Then
\begin{equation}
\label{expression from main theorem}
T_{F,G,N}^{L,\mathbf{v}}(\Lambda^\prime,\dots,\Lambda^\prime) = T_{F,G,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/W\mathbb{Z}}^+,\dots,\Lambda_{\mathbb{Z}/W\mathbb{Z}}^+) + o_{C,L,\varepsilon,\sigma}(1)
\end{equation}
as $N\rightarrow \infty$.
\end{Theorem}
\begin{Remark}
\label{Remark after statement of Main theorem}
\emph{If $F$ is supported on $[0,1]^d$, we have \[T_{F,G,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/W\mathbb{Z}}^+,\dots,\Lambda_{\mathbb{Z}/W\mathbb{Z}}^+) = T_{F,G,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/W\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/W\mathbb{Z}}).\] We will prove an asymptotic formula for $T_{F,G,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/W\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/W\mathbb{Z}})$ later, in Lemma \ref{Lemma problem for local von Mangoldt} and Remark \ref{Remark asymptotic for local von mangoldt in general}. For example, if $\mathbf{v} = \mathbf{0}$ and \[ L = \left ( \begin{matrix} 1 & -2 & 1 & 0 \\
0 & 1 & - \sqrt{3} & 1 \end{matrix} \right),\] say, and $F$ and $G$ are smooth functions supported on $[0,1]^4$ and $[-1/2,1/2]^2$ respectively, one may use Lemma \ref{Lemma problem for local von Mangoldt} and Remark \ref{Remark asymptotic for local von mangoldt in general} to show that \[ T_{F,G,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/W\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/W\mathbb{Z}}) = \mathfrak{S} J + o_{F,G,L}(1),\] where \[ \mathfrak{S} = \frac{1}{2}\prod\limits_{p\geqslant 3} \Big(1-\frac{2}{p}\Big) \Big(\frac{p}{p-1}\Big)^2\] and \[ J = \frac{1}{N^2}\int\limits_{\mathbf{x} \in \mathbb{R}^3} F(\Xi(\mathbf{x})/N)G(L\Xi(\mathbf{x})) \, d\mathbf{x},\] where \[\Xi(x_1,x_2,x_3) = (x_1,x_1 + x_2,x_1 + 2x_2,x_3).\] The constant $\mathfrak{S}$ is in fact equal to \[ \prod\limits_{p} \frac{1}{p^3}\sum\limits_{x_1,x_2,x_3 \leqslant p} \prod\limits_{j=1}^4 \Lambda_{\mathbb{Z}/p\mathbb{Z}}(\xi_j(x_1,x_2,x_3)),\] where $(\xi_1,\xi_2,\xi_3,\xi_4)$ are the coordinate maps for $\Xi$. }
\emph{It takes some effort to establish precisely what the map $\Xi$ should be for a given $L$. What's more, the asymptotic formula in the general case is not just a product of a local factor and a global factor but rather a finite sum of products of local factors and global factors, and we will need to introduce an abundance of additional notation in order to be able to state these terms properly. Thus, in the interests of readability, we choose not to include this formula as part of the statement of Theorem \ref{Main theorem}.}
\end{Remark}
\begin{Remark}
\emph{If $L$ has rational coefficients\footnote{or more generally if $L$ has rational dimension $m$, see Definition \ref{Definition rational space} below.}, then Theorem \ref{Main theorem} reduces to a statement on linear equations in primes (a reduction which we will make precise in Remark \ref{Remark generalising Green Tao} below). In this sense, our work is a generalisation of Green-Tao-Ziegler.}
\end{Remark}
\begin{Remark}
\emph{We have phrased Theorem \ref{Main theorem} with Lipschitz cut-offs $F$ and $G$. In Section \ref{section removing sharp cut-offs} we will demonstrate how these cut-offs may be removed when $L$ is `purely irrational', and in doing so will show how Theorem \ref{Main theorem} implies Theorem \ref{Main theorem simpler version}. The same methods may be applied when $L$ is not purely irrational, but they will not always succeed, due to the rational degeneracy introduced in those cases. Unfortunately we have not been able to formulate what we regard as a satisfactory general condition for saying when (\ref{expression from main theorem}) holds with sharp cut-offs $F$ and $G$. Note in particular how the proof of Lemma \ref{Lemma singular integral} relies heavily on the convex sets $[-\varepsilon,\varepsilon]^m$ and $[0,N]^d$ being axis-parallel boxes. Therefore we do not present a version of the theorem in which summation is over a general convex set $K$, as is done in Theorem \ref{Theorem Green Tao}. However, if the reader wishes to apply a specific instance of Theorem \ref{Main theorem} with sharp cut-offs, the methods of Section \ref{section removing sharp cut-offs} and Appendix \ref{section Easy calculations} will almost certainly suffice for the purpose.}
\end{Remark}
\begin{Remark}
\emph{The reader will observe that, as in Theorem \ref{Main theorem simpler version}, we do not determine the nature of the dependence of the error term in (\ref{expression from main theorem}) on the map $L$. We discussed this feature in Remark \ref{Remark fixed L}.}
\end{Remark}
We conjecture that the conclusion of Theorem \ref{Main theorem} holds for all $L \notin V_{\operatorname{degen}}^*(m,d)$, provided $w$ grows slowly enough in terms of $L$.
\begin{Conjecture}[Transcendental case]
\label{Conjecture without algebraic}
Let $L$, $\mathbf{v}$, $F$, and $G$ be as in the statement of Theorem \ref{Main theorem}, but do not assume that $L$ necessarily has algebraic coefficients. Then there is some function $w:\mathbb{N} \longrightarrow \mathbb{R}_{\geqslant 0}$, with $w(N) \rightarrow \infty$ as $N\rightarrow \infty$, such that (\ref{expression from main theorem}) holds with $W = \prod\limits_{p\leqslant w}p$.
\end{Conjecture}
\noindent In Section \ref{section proof of pseduorandomness} we will formulate a statement involving smoothed sieve weights (namely Conjecture \ref{Conjecture pseudorandomness}) which, if resolved, would settle Conjecture \ref{Conjecture without algebraic}. \\
\textbf{Acknowledgments.} During the writing of this paper we benefited greatly from the supervision of Ben Green, and had helpful conversations with Sam Chow, Trevor Wooley, Yufei Zhao, Joni Ter\"{a}v\"{a}inen and Kaisa Matom\"{a}ki. We would like to thank an anonymous referee for an exceptionally detailed reading of the manuscript and for many helpful corrections and comments. The majority of the work was carried out while the author was supported by EPSRC grant no.
EP/M50659X/1, continued while the author was a Program Associate at the Mathematical Sciences Research Institute in Berkeley, and finished while the author was supported by a Junior Research Fellowship at Trinity College Cambridge. \\
\section{The structure of the argument}
In this section we discuss our approach to proving Theorem \ref{Main theorem}, and describe the geography of the paper as a whole.
Initially, one might hope that Theorem \ref{Main theorem} could be proved by replacing the coefficients of $L$ with some rational approximations, by considering the corresponding linear equation with rational coefficients, and then by appealing directly to Theorem \ref{Theorem Green Tao} on linear equations in primes. However, unless the coefficients of $L$ are extremely well-approximable by rationals (and in particular are transcendental), such an approach does not seem to succeed. Indeed, let $L = (\lambda_{ij})_{i\leqslant m,j\leqslant d}$ and let $\lambda^\prime_{ij}$ be a rational approximation to $\lambda_{ij}$, with $L^\prime$ being the corresponding approximation to $L$. In order for the comparison of $L$ with $L^\prime$ to be meaningful, we will need $\Vert L\mathbf{n} - L^\prime \mathbf{n}\Vert_\infty = O(1)$ for all relevant $\mathbf{n}$, and in the general situation where all coordinates of $\mathbf{n}$ have magnitude $\Omega(N)$ this requires $\vert\lambda^\prime_{ij} - \lambda_{ij}\vert$ to be $O( N^{-1})$. Hence the numerator and denominator of $\lambda^\prime_{ij}$ must grow rapidly with $N$, unless $\lambda_{ij}$ is extremely well-approximable. Yet Theorem \ref{Theorem Green Tao} requires the coefficients of the associated affine linear equations to have height $O(1)$ (excepting the constant term, which may be $O(N)$). In \cite{B17} Bienvenu offers a slight improvement, but even with this refinement it does not seem that we can apply an existing result on linear equations in primes as a black box.
Instead, we will follow a similar approach to that which we used in our work \cite{Wa17}, a paper that considered diophantine inequalities in the setting of bounded functions. Namely, we replace the function $\Lambda^\prime:\mathbb{Z}\rightarrow \mathbb{R}$ by a suitable convolution $ \Lambda^\prime \ast \chi:\mathbb{R}\rightarrow \mathbb{R}$, designed to ensure the validity of the approximation
\begin{equation}
\label{intro twidles}
\sum\limits_{\mathbf{n}\in \mathbb{Z}^d}\Big(\prod\limits_{j=1}^d \Lambda^\prime(n_j) \Big) F(\mathbf{n}/N) G(L\mathbf{n} + \mathbf{v})\approx \int\limits_{\mathbf{x} \in \mathbb{R}^d}\Big( \prod\limits_{j=1}^d (\Lambda^\prime \ast \chi)(x_j)\Big) F(\mathbf{x}/N) G(L\mathbf{x} + \mathbf{v})\, d\mathbf{x}.
\end{equation} The integral may be manipulated by certain reparametrisations (Lemma \ref{separating out the kernel}), yielding expressions of the form \[ \int\limits_{\mathbf{y} \in \mathbb{R}^{d-m}}\Big(\prod\limits_{j=1}^d g_j(\psi_j(\mathbf{y})) \Big) F(\Psi(\mathbf{y})/N) \, d\mathbf{y},\] where $(\psi_1,\dots,\psi_d) = \Psi:\mathbb{R}^{d-m} \longrightarrow \mathbb{R}^d$ parametrises $\ker L$ and $g_1,\dots,g_d$ are certain functions. By applying the Gowers-Cauchy-Schwarz inequality, in a manner strongly resembling \cite[Appendix C]{GT10}, such expressions may be bounded by the Gowers norm $\Vert\Lambda^\prime - \Lambda_{\mathbb{Z}/W\mathbb{Z}}\Vert_{U^{s+1}[N]}$, for some $s = O(1)$. A qualitative bound on this Gowers norm is known by the work of Green-Tao-Ziegler (see Lemma \ref{Lemma Corollary of tool from Green-Tao}), and so Theorem \ref{Main theorem} follows.\\
The novel aspect of this manipulation, over the work of \cite{GT10} and \cite{Wa17}, is the appearance of various auxiliary linear inequalities, weighted by upper bound sieve weights. These enter in a manner that is somewhat analogous to the way in which the so-called `linear forms condition' arises in \cite{GT10}. Asymptotics for the number of solutions to these auxiliary inequalities underpin the argument, and this leads to a `linear inequalities condition' \[ T_{F,G,N}^{L,\mathbf{v}}( \nu,\dots,\nu) \approx T_{F,G,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/W\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/W\mathbb{Z}})\] for a sieve weight $\nu$, which is our corresponding notion of pseudorandomness (made precise in Definition \ref{Definition linear inequalities condition}). We are unable to verify this pseudorandomness condition in full generality, but we succeed in the case when $L$ has algebraic coefficients. Our key technical tool is a bound for the number of solutions to a diophantine inequality restricted to a lattice, which we prove using the Davenport-Heilbronn method. This is the only part of the entire argument that uses the fact that the coefficients are assumed to be algebraic.
There is a final technical manoeuvre that we employ, one which has no direct analogue in \cite{GT10} or \cite{Wa17}. It will transpire that passing to the local von Mangoldt function $\Lambda_{\mathbb{Z}/W\mathbb{Z}}$ introduces certain singular expressions, which arise from the fact that we are dealing with inequalities rather than equations. To circumvent this issue we find it necessary to work at two different `local scales', introducing functions $\Lambda_{\mathbb{Z}/W^*\mathbb{Z}}$ and $\Lambda_{\mathbb{Z}/W\mathbb{Z}}$. By careful manoeuvring one can ensure that the singular expressions are only introduced by the $W^*$ scale, and so, provided $W^*$ grows slowly enough compared to $W$, these singularities may be offset by the decay in the Gowers norm expressions involving $W$. This further complicates the analysis of the expressions, and in fact our final choice of function $W^*$ will be non-effective. \\
The structure of the paper is as follows. The main elements of the proof of Theorem \ref{Main theorem} take place in Part \ref{part gen von neu}, and the reader may wish to begin with this section. It is here that we reduce matters to bounding certain systems by Gowers norms (Section \ref{section Controlling by Gowers norms}), prove the approximation (\ref{intro twidles}) (Section \ref{section transfer}), and apply the Gowers-Cauchy-Schwarz inequality (Section \ref{section Cauchy Schwarz argument}).
However, the arguments of this part rely heavily on lemmas that are proved earlier in the paper, and these lemmas split naturally into four types. There are those results that are standard properties of smooth functions, and these are recorded in Section \ref{section smooth functions}. We also have lemmas whose proofs involve manipulation of a purely linear algebraic nature, in order to reduce inequalities to ones that are `purely irrational' or to put linear equations into `normal form'. We describe these notions in Part \ref{Part linear algebra}. The definition of pseudorandomness for an enveloping sieve weight is contained in Part \ref{part pseudorandomness}, as is our proof that a certain weight satisfies this pseudorandomness condition. Also in this part one may find Conjecture \ref{Conjecture pseudorandomness}, which, if resolved, would remove the algebraicity assumptions. Part \ref{part the structure of inequalities} is reserved for those lemmas that involve the (somewhat tedious) manipulation of integrals into more pleasant forms. One of these lemmas is Lemma \ref{Lemma approximation of Q}, which is the lemma that introduces the second local scale $W^*$ that we mentioned above.
The first appendix is concerned with elementary estimates relating to the integral that appears in the global factor of Theorem \ref{Main theorem}. As we have already said, Appendix \ref{section an analytic argument} presents a Fourier-analytic argument which is essentially due to Parsell. \\
Finally, let us mention that, to help to streamline the statements of various propositions and lemmas in the paper as a whole, we have found it useful to introduce certain notational conventions that are unique to this paper. We describe these in Section \ref{section conventions}. \\
\part{Preliminaries}
\label{part Preliminaries}
\section{Smooth functions}
\label{section smooth functions}
Smooth functions will play a significant role in the paper, and in this section we collect together those notions and lemmas that will be necessary for our forthcoming manipulations.
Following \cite[Section 2]{HB96}, given a natural number $d$ and a compactly supported smooth function $F:\mathbb{R}^d \longrightarrow \mathbb{R}$, we define $d(F)$ to be the corresponding value of $d$, $\operatorname{Rad}(F)$ to be the smallest $R$ such that $F$ is supported on $[-R,R]^d$, and for every non-negative integer $j$ we define \[ d_j(F): = \max \Big \{ \Big\vert \frac{\partial^{j_1+\dots + j_d } F}{\partial x_1^{j_1}\cdots \partial x_d^{j_d} } \Big|_{\mathbf{x} = \mathbf{a}} \Big\vert : \mathbf{a} \in \mathbb{R}^d , \, j_1,\dots,j_d \geqslant 0, \, \sum_{i=1}^d j_i = j \Big\}.\]
Then, if $P$ is any set, we shall define $\mathcal{C}(P)$ to be the set of those smooth functions $F$ for which \[ d(F), \operatorname{Rad}(F), d_0(F),d_1(F),d_2(F), \dots\] can be bounded above by quantities that depend only on the elements of $P$. For example, let $g: \mathbb{R} \longrightarrow \mathbb{R}$ be the function given by \[ g(x): = \begin{cases}
\exp(-(1-x^2)^{-1}) & \vert x\vert \leqslant 1 \\
0 & \vert x\vert >1 ,
\end{cases}\] and then for a positive parameter $\delta$ let $g_\delta:\mathbb{R}\longrightarrow \mathbb{R}$ be defined by $g_\delta(x):= g(x/\delta)$. Then $g_\delta \in \mathcal{C}(\delta)$, as is proved rather succinctly in \cite[Lemma 9]{BFI87}, say.
In order to shorten some of the statements in the main part of the paper, it will be convenient to consider all functions on $\mathbb{R}^0$ to be smooth (with derivatives equal to $0$). \\
Let us record a standard proposition on smooth majorants and minorants.
\begin{Lemma}
\label{Lemma smooth approximations}
Let $\delta$ be a real number in the range $0<\delta < 1$. Then there exist two smooth functions $f^{+\delta}, f^{-\delta}:\mathbb{R}\rightarrow [0,1]$, with $f^{+\delta}, f^{-\delta} \in \mathcal{C}(\delta)$, satisfying \[ 1_{[\delta,1-\delta]}(x)\leqslant f^{-\delta}(x)\leqslant 1_{[0,1]}(x)\leqslant f^{+\delta}(x)\leqslant 1_{[-\delta,1+\delta]}(x)\] for all $x \in \mathbb{R}$.
\end{Lemma}
\begin{proof}
Let $g$ be as above, and let $C := \int g(x) \, dx$. Then one may define \[ f^{-\delta}(x): = \frac{4}{\delta C} \int\limits_{y\in\mathbb{R}}1_{[\delta/2,1-\delta/2]}(y) g(4(x-y)/\delta) \, dy\] and \[ f^{+\delta}(x): = \frac{4}{\delta C} \int\limits_{y\in\mathbb{R}}1_{[-\delta/2,1+\delta/2]}(y) g(4(x-y)/\delta) \, dy.\] The fact that $f^{+\delta},f^{-\delta} \in \mathcal{C}(\delta)$ follows from differentiating under the integral (which is easily justified by the mean value theorem).
\end{proof}
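As a numerical illustration of this construction (ours, and not needed for the formal argument), one may approximate the defining integrals by quadrature and verify the sandwich property at sample points. The choice $\delta = 0.2$ and the midpoint rule below are arbitrary; $g$ is the bump function defined earlier in this section.

```python
import math

def g(x):
    # smooth bump supported on [-1, 1], as defined earlier in this section
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def midpoint(f, a, b, n=4000):
    # simple midpoint-rule quadrature for the integrals in the proof
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

delta = 0.2
C = midpoint(g, -1.0, 1.0)  # C = integral of g

def mollified(lo, hi, x):
    # (4 / (delta * C)) * integral over [lo, hi] of g(4(x - y)/delta) dy
    return (4.0 / (delta * C)) * midpoint(lambda y: g(4.0 * (x - y) / delta), lo, hi)

f_minus = lambda x: mollified(delta / 2, 1 - delta / 2, x)
f_plus = lambda x: mollified(-delta / 2, 1 + delta / 2, x)

# sandwich property: f_minus <= 1_[0,1] <= f_plus (up to quadrature error)
for x in [-0.3, 0.0, 0.1, 0.5, 0.9, 1.0, 1.3]:
    ind = 1.0 if 0.0 <= x <= 1.0 else 0.0
    assert f_minus(x) <= ind + 1e-3
    assert ind <= f_plus(x) + 1e-3
```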
\begin{Lemma}[Smooth partition of unity]
\label{Lemma smooth partition of unity on interval}
Let $\delta$ be a real number in the range $0< \delta < 1$. Then there exists a natural number $t$, satisfying $t = O(\delta^{-1})$, and functions $f_1,\dots,f_t: \mathbb{R} \longrightarrow [0,1]$ such that
\begin{enumerate}
\item for each $i\leqslant t$, $f_i \in \mathcal{C}(\delta)$;
\item for each $i\leqslant t$, $f_i$ is supported on an interval of length at most $2\delta$;
\item for all $x \in \mathbb{R}$, $1_{[-1 + \delta, 1 - \delta]}(x)\leqslant \sum\limits_{i=1}^t f_i(x) \leqslant 1_{[-1 - \delta, 1 + \delta]}(x) $;
\item for all $x \in \mathbb{R}$, $x$ is contained in the support of at most $2$ of the functions $f_i$.
\end{enumerate}
\end{Lemma}
\begin{proof}
Let $t = \lceil 4\delta^{-1} \rceil$, and write \[1_{[-1,-1+ t\delta/2)}=\sum\limits_{i=1}^t 1_{I_i},\] where \[I_i := [-1 + (i-1)\delta/2, -1 + i\delta/2).\] Then define \[ f_i (x) : = \frac{4}{\delta C} \int\limits_{y\in \mathbb{R}} 1_{I_i}(y) g(4(x-y)/\delta) \, dy.\] Properties (1)--(3) follow as in the proof of Lemma \ref{Lemma smooth approximations}. For property (4), note that $f_i(x) \neq 0$ forces $x$ to lie strictly within $\delta/4$ of $I_i$, and since the disjoint consecutive intervals $I_i$ each have length $\delta/2$, a given $x$ can lie strictly within $\delta/4$ of at most two of them.
\end{proof}
\begin{Lemma}[Approximating Lipschitz functions by smooth boxes]
\label{Lemma approximating Lipschitz functions by smooth boxes}
Let $\delta,\sigma,N$ be positive real parameters, with $\delta,\sigma$ in the range $0 < \delta,\sigma < 1/2$. Let $d$ be a natural number, and let $F:\mathbb{R}^d \longrightarrow [0,1]$ be a Lipschitz function supported on $[-N,N]^d$ with Lipschitz constant at most $(\sigma N)^{-1}$. Then there exists a natural number $k$, satisfying $k = O(\delta^{-d})$, and functions $F_1,\dots,F_{k}:\mathbb{R}^d \longrightarrow [0,1]$ such that
\begin{enumerate}
\item $\Vert F - \sum\limits_{i=1}^{k} F_i\Vert_\infty = O(\delta \sigma^{-1})$;
\item for each $i\leqslant k$, $F_i$ is supported on a box with side length $O(\delta)$;
\item there is a natural number $t$, satisfying $t = O(\delta^{-1})$, and functions $f_1,\dots,f_t: \mathbb{R} \longrightarrow [0,1]$, satisfying $f_1,\dots,f_t \in \mathcal{C}(\delta)$, such that \[F_i(\mathbf{x}) = c_{i,F} \prod\limits_{j=1}^d f_{S^{(i)}_{j}}(x_j/2N)\] for each $i\leqslant k$, for some element $S^{(i)} \in [t]^d$ and some constant $c_{i,F} \in [0,1]$.
\end{enumerate}
\end{Lemma}
\begin{proof}
We have \begin{align}
\label{boxing F}
F(\mathbf{x}) &= F(\mathbf{x})1_{[-1,1]^d}(\mathbf{x}/2N) \nonumber\\
& = F(\mathbf{x}) \Big( \prod\limits_{j=1}^d 1_{[-1,1]}(x_j/2N) \Big) \nonumber\\
& = F(\mathbf{x}) \Big( \prod\limits_{j=1}^d \sum\limits_{i=1}^t f_i(x_j/2N)\Big),
\end{align}
\noindent where the functions $f_1,\dots, f_t$ are those constructed by applying Lemma \ref{Lemma smooth partition of unity on interval} with this value of $\delta$. This manipulation is indeed valid, since $F(\mathbf{x}) = 0$ for any $\mathbf{x}$ for which \[ \prod\limits_{j=1}^d\sum\limits_{i=1}^t f_i(x_j/2N) \neq \prod\limits_{j=1}^d 1_{[-1,1]}(x_j/2N).\]
Expanding the product over the sums, (\ref{boxing F}) equals \[ \sum\limits_{S \in [t]^d} F(\mathbf{x})\Big(\prod\limits_{j=1}^df_{S_j}(x_j/2N)\Big) .\] Let $\mathbf{x}^{(S)}\in \mathbb{R}^d$ be any point at which $\prod\limits_{j=1}^d f_{S_j}(x^{(S)}_j/2N)$ is non-zero. Then the above is equal to
\begin{equation*}
\sum\limits_{S \in [t]^d}(F(\mathbf{x}^{(S)}) + O(\delta \sigma^{-1}))\Big( \prod\limits_{j=1}^d f_{S_j}(x_j/2N)\Big) ,
\end{equation*} by the Lipschitz properties of $F$ and the limited support of the functions $f_1,\dots,f_t$ (which was part (2) of Lemma \ref{Lemma smooth partition of unity on interval}).
Define \[F_S(\mathbf{x}) : = F(\mathbf{x}^{(S)})\Big(\prod\limits_{j=1}^d f_{S_j}(x_j/2N)\Big) .\] These functions satisfy properties (2) and (3) of Lemma \ref{Lemma approximating Lipschitz functions by smooth boxes}. Finally note that, by part (4) of Lemma \ref{Lemma smooth partition of unity on interval}, each $\mathbf{x} \in \mathbb{R}^d$ is contained in the support of at most $O(1)$ of the functions $F_S$, and hence $\Vert F - \sum\limits_{S \in [t]^d} F_S\Vert_\infty = O(\delta \sigma^{-1})$, as required.
\end{proof}
The Fourier transform of smooth functions will be an important tool in Section \ref{section Inequalities in arithmetic progressions}. We choose the following convention. If $F:\mathbb{R}^d \longrightarrow \mathbb{R}$ is a compactly supported smooth function, we define the Fourier transform $\widehat{F}:\mathbb{R}^d \longrightarrow \mathbb{C}$ by the formula \[ \widehat{F}(\boldsymbol{\alpha}): = \int\limits_{\mathbf{x} \in \mathbb{R}^d} F(\mathbf{x}) e(-\boldsymbol{\alpha} \cdot \mathbf{x}) \, d\mathbf{x}.\]
\begin{Lemma}
\label{Lemma by parts}
Let $P$ be a set of parameters and suppose $F \in \mathcal{C}(P)$. Then for every $\boldsymbol{\alpha}$ and every non-negative integer $K$ one has \[\vert \widehat{F}(\boldsymbol{\alpha})\vert \ll_{P,K} (1+ \Vert \boldsymbol{\alpha}\Vert_\infty)^{-K}.\]
\end{Lemma}
\begin{proof}
This follows from integration by parts.
\end{proof}
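\noindent To spell out the main step of this standard argument (our own expansion, using the normalisation of $\widehat{F}$ fixed above): suppose $\Vert \boldsymbol{\alpha}\Vert_\infty \geqslant 1$ and pick a coordinate $j$ with $\vert \alpha_j \vert = \Vert \boldsymbol{\alpha}\Vert_\infty$. Since $F$ is compactly supported there are no boundary terms, and integrating by parts in the variable $x_j$ gives

```latex
\widehat{F}(\boldsymbol{\alpha})
  = \int\limits_{\mathbf{x} \in \mathbb{R}^d} F(\mathbf{x})\, e(-\boldsymbol{\alpha}\cdot\mathbf{x})\, d\mathbf{x}
  = \frac{1}{2\pi i \alpha_j}\int\limits_{\mathbf{x} \in \mathbb{R}^d}
      \frac{\partial F}{\partial x_j}(\mathbf{x})\, e(-\boldsymbol{\alpha}\cdot\mathbf{x})\, d\mathbf{x}.
```

Iterating $K$ times and bounding the final integral trivially yields $\vert \widehat{F}(\boldsymbol{\alpha})\vert \leqslant (2\pi \Vert \boldsymbol{\alpha}\Vert_\infty)^{-K} d_K(F) (2\operatorname{Rad}(F))^d$, while for $\Vert \boldsymbol{\alpha}\Vert_\infty < 1$ the trivial bound $\vert \widehat{F}(\boldsymbol{\alpha})\vert \leqslant d_0(F)(2\operatorname{Rad}(F))^d$ suffices; together these give the stated decay.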
Finally, we recall the definition of dual lattices and the version of the Poisson summation formula that we will use.
\begin{Definition}[Dual lattice]
\label{Definition dual lattice}
Let $h$ be a natural number and let $\Gamma \leqslant \mathbb{R}^h$ be a lattice of rank $h$. Then the dual lattice $\Gamma^*$ is defined by \[ \Gamma^*: = \{ \mathbf{y} \in \mathbb{R}^h: \langle \mathbf{y}, \mathbf{x} \rangle \in \mathbb{Z} \text{ for all } \mathbf{x} \in \Gamma\}.\]
\end{Definition}
\noindent It is easily seen that if $M$ is an $h$-by-$h$ matrix whose columns are a lattice basis for $\Gamma$, then $(M^{-1})^T$ is an $h$-by-$h$ matrix whose columns are a lattice basis for $\Gamma^*$.
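\noindent For a concrete illustration (ours): with $h = 2$ and

```latex
M = \begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix},
\qquad
(M^{-1})^T = \begin{pmatrix} 1/2 & 0 \\ -1/2 & 1 \end{pmatrix},
```

the lattice $\Gamma = M\mathbb{Z}^2$ has dual $\Gamma^*$ generated by the columns $(1/2,-1/2)^T$ and $(0,1)^T$. One checks directly that each of these generators pairs integrally with both columns of $M$, and that $\operatorname{vol}(\mathbb{R}^2/\Gamma)\operatorname{vol}(\mathbb{R}^2/\Gamma^*) = 2 \cdot \tfrac{1}{2} = 1$, as the general theory predicts.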
\begin{Lemma}[Poisson summation]
\label{Lemma Poisson summation}
Let $h$ be a natural number and let $\Gamma \leqslant \mathbb{R}^h$ be a lattice of rank $h$. Let $F: \mathbb{R}^h \longrightarrow \mathbb{C}$ be a smooth compactly supported function. Then \[\sum\limits_{\mathbf{x} \in \Gamma} F(\mathbf{x}) = \frac{1}{\operatorname{vol}(\mathbb{R}^h/\Gamma)} \sum\limits_{ \mathbf{y} \in \Gamma^*} \widehat{F}(\mathbf{y}).\]
\end{Lemma}
\begin{proof}
This is a standard result. The version in which $\Gamma = \mathbb{Z}^h$ appears as \cite[Theorem 3.1.17]{Gr08}, with the extension to general full-rank lattices following from a change of variables.
\end{proof}
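The lemma admits a quick numerical sanity check (ours, and not needed for the argument). Although the Gaussian is not compactly supported, Poisson summation also holds for Schwartz functions, and with the Fourier convention above the function $e^{-\pi x^2}$ is its own transform. Taking $\Gamma = c\mathbb{Z} \leqslant \mathbb{R}$, so that $\Gamma^* = (1/c)\mathbb{Z}$ and $\operatorname{vol}(\mathbb{R}/\Gamma) = c$:

```python
import math

def gauss(x):
    # g(x) = exp(-pi x^2); with the convention in the text its Fourier
    # transform is again exp(-pi y^2), i.e. the Gaussian is self-dual
    return math.exp(-math.pi * x * x)

c = 2.0  # the lattice Gamma = c*Z, with covolume c and dual (1/c)*Z

# both sides of Poisson summation, truncated (the tails are negligible)
lhs = sum(gauss(c * n) for n in range(-50, 51))
rhs = (1.0 / c) * sum(gauss(k / c) for k in range(-50, 51))
assert abs(lhs - rhs) < 1e-9
```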
\section{Notation and Conventions}
\label{section conventions}
For the most part the notation used in this paper is very standard, and any usage that could be viewed as somewhat unusual will be introduced as and when it is required. However, there are a few particular points that will apply to the paper as a whole which we believe to be important to address now. \\
We will use the Bachmann-Landau asymptotic notation $O$, $o$, and $\Omega$, but with one departure from a common convention: for a function $f$ and a positive function $g$, we do not write $f = O(g)$ merely because there exists a constant $C$ such that $\vert f(N)\vert \leqslant C g(N)$ for $N$ sufficiently large. Rather, we require the inequality to hold for all $N$ in some pre-specified range. If $N$ is a natural number, the range is always assumed to be $\mathbb{N}$ unless otherwise specified. It will be a convenient shorthand to use these symbols in conjunction with minus signs whenever they appear in exponents. For example, $N^{-\Omega(1)}$ refers to a term $N^{-c}$, where $c$ is some positive quantity bounded away from $0$ as the asymptotic parameter tends to infinity.
The Vinogradov symbol $\ll$ will be used, where for a function $f$ and a positive function $g$ we write $f\ll g$ if and only if $f = O(g)$. We write $f\asymp g$ if $f\ll g$ and $g\ll f$. If an implied constant or a $o(1)$ term depends on other parameters, we will denote these by subscripts, e.g. $O_{c,C,\varepsilon}(1)$, or $f\asymp_{\varepsilon} g$. However, if the implied constants depend on the underlying dimensions (denoted by $m$, $d$, and occasionally by $h$, $s$, and $t$) we will not record this fact explicitly, as this would render most of the expressions unreadable.
The notation $\operatorname{Rad}(F)$, which was introduced in the previous section for compactly supported smooth functions $F$, will also be used when $F$ is not smooth. \\
In order to keep track of which variables are scalars and which are vectors, we will use boldface $\mathbf{x}$ to denote any $\mathbf{x} \in \mathbb{R}^d$ where $d$ could be at least $2$. In order to describe certain integrals over many variables, the following notational convention will be useful. If $\mathbf{x} \in \mathbb{R}^d$ and if $a$ and $b$ are two subscripts with $1\leqslant a\leqslant b\leqslant d$, we use $\mathbf{x_a^b}$ to denote the vector $(x_a,x_{a+1},\cdots,x_{b})^T \in \mathbb{R}^{b-a+1}$.\\
With a view to trying to shorten some of the statements and proofs to follow, there are certain functions that we will fix throughout the paper, namely $w$, $W$, $\rho$, and $\chi$. From now on, the function $w: \mathbb{N} \longrightarrow \mathbb{R}_{\geqslant 0}$ will always be defined by \[w(N): = \max(1,\log\log\log N).\] Whenever $N$ is a quantity that we have defined, we write $w$ for $w(N)$ and let \[W = W(N) = \prod_{p\leqslant w(N)}p.\] The empty product is considered to be equal to $1$. Whenever other functions $w_1,\dots,w_d, w^*:\mathbb{N}\longrightarrow \mathbb{R}_{\geqslant 0}$ occur, and a natural number $N$ is given, we will define $W_1,\dots,W_d,W^*$ analogously.
The following definition (a smooth version of \cite[Definition 5.2]{Wa17}) will be a useful way to control certain functions that are required in the argument.
\begin{Definition}[$\eta$-supported]
\label{Defintion eta supported}
Let $\chi:\mathbb{R}\longrightarrow [0,1]$ be a smooth function, and let $\eta$ be a positive parameter. We say that $\chi$ is \emph{$\eta$-supported} if $\chi$ is supported on $[-\eta,\eta]$ and $\chi(x) \equiv 1$ for all $x\in [-\eta/2,\eta/2]$.
\end{Definition}
\noindent It follows from Lemma \ref{Lemma smooth approximations} that $1$-supported functions exist. From now on we fix a smooth function \[\rho:\mathbb{R} \longrightarrow [0,1]\] that is $1$-supported. We think of $\rho$ as an element of $\mathcal{C}(\emptyset)$. Whenever a positive parameter $\eta$ is defined we also define \[\chi:\mathbb{R} \longrightarrow [0,1]\] by the relation $\chi(x) : = \rho(x/\eta)$. The function $\chi$ is $\eta$-supported, and satisfies $\chi \in \mathcal{C}(\eta)$. \\
We finish this section with some pieces of notation of a more standard nature. If $X,Y\subset\mathbb{R}^d$ for some $d$, we define \[\operatorname{dist}(X,Y): = \inf\limits_{x\in X, y\in Y} \Vert x - y\Vert_{\infty}.\] If $X$ is the singleton $\{x\}$, we write $\operatorname{dist}(x,Y)$ for $\operatorname{dist}(\{x\},Y)$. We let $\partial(X)$ denote the topological boundary of $X$ (though the symbol $\partial$ will also be used for partial differentiation, as usual). If $A$ and $B$ are two sets with $A\subseteq B$, we let $1_A:B\longrightarrow \{0,1\}$ denote the indicator function of $A$. The relevant set $B$ will usually be obvious from context. If $E$ is some event, e.g. a divisor condition, we will also use $1_E$ for the indicator function of this event. For $\theta \in \mathbb{R}$ we adopt the standard shorthand $e(\theta)$ to mean $e^{2\pi i \theta}$. The M\"{o}bius function will be denoted by $\mu$, though in Section \ref{section Cauchy Schwarz argument} the symbol $\mu$ will also be used to denote a measure. In Section \ref{section proof of pseduorandomness} we will use $\varphi$ for Euler's $\varphi$-function, and for two natural numbers $a$ and $b$ we use the shorthand $(a,b)$ to denote their greatest common divisor. \\
\part{Linear algebra}
\label{Part linear algebra}
In \cite{Wa17} we developed an armoury of linear-algebraic methods, which enabled us to manipulate linear inequalities into certain desired forms. The same manipulation is necessary here. We have chosen not to consign this material to an appendix, nor simply to cite \cite{Wa17}, since the result of Lemma \ref{Lemma generating a purely irrational map} below will be very important during subsequent sections. We will also need a few results (on the vector $\widetilde{\mathbf{r}}$ below) that were not required in our previous work, and so citing \cite{Wa17} won't quite do.
Fortunately, as we do not seek to determine exactly how the error term in Theorem \ref{Main theorem} depends on $L$, we can offer a significant simplification over the work that was presented in \cite{Wa17}. This is another reason to include this material. \\
Before starting, we remind the reader of some of the central definitions from the theory of dual vector spaces and dual linear maps, which will be used liberally throughout. Let $V$ be a finite-dimensional vector space over a field $\mathbb{F}$. Then $V^*$ denotes the dual vector space, i.e. the vector space of all linear maps $\omega: V \longrightarrow \mathbb{F}$ under pointwise addition and scalar multiplication. If $L: V \longrightarrow W$ is a linear map between two finite-dimensional vector spaces, the dual map $L^*: W^* \longrightarrow V^*$ is defined by the relation $(L^*(\boldsymbol{\omega}))(\mathbf{v}): = \boldsymbol{\omega}(L(\mathbf{v}))$ for all $\boldsymbol{\omega}\in W^*$ and $\mathbf{v} \in V$. Given a basis $\mathbf{e_1}, \dots, \mathbf{e_n}$ for $V$, the dual basis $\mathbf{e_1^*}, \dots, \mathbf{e_n^*}$ for $V^*$ is defined by extending linearly the relations $$\mathbf{e_i^*}(\mathbf{e_j}) = \begin{cases} 1 & \text{if } i = j \\
0 & \text{otherwise.}\end{cases}$$ Finally, given a set $S \subset V$ the annihilator $S^0 \subset V^*$ is defined by \[S^0 :=\{ \boldsymbol{\omega} \in V^* : \boldsymbol{\omega}(\mathbf{v}) = 0 \text{ for all } \mathbf{v} \in S \}.\] \\
\section{Dimension reduction}
\label{section linear algebra and dimension reduction}
We begin with a generalisation of Definition \ref{definiton of the solution count form}. Note that the case $m=0$ is permitted below.
\begin{Definition}
\label{Definition most general discrete solution count form}
Let $N,d,h$ be natural numbers, and let $m$ be a non-negative integer. Let $L:\mathbb{R}^h \longrightarrow \mathbb{R}^m$ be a linear map, and let $(\xi_1,\dots,\xi_d) =\Xi:\mathbb{R}^h \longrightarrow \mathbb{R}^d$ be a linear map with integer coefficients. Let $F:\mathbb{R}^d \longrightarrow \mathbb{R}$ and $G:\mathbb{R}^m \longrightarrow \mathbb{R}$ be functions with compact support. Let $\mathbf{v} \in \mathbb{R}^m$ and $\widetilde{\mathbf{r}} \in \mathbb{Z}^d$. Then for $f_1, \dots, f_d:\mathbb{Z} \longrightarrow \mathbb{R}$ we define
\begin{equation}
T_{F,G,N}^{L,\mathbf{v} ,\Xi,\widetilde{\mathbf{r}}}(f_1,\dots,f_d): = \frac{1}{N^{h-m}} \sum\limits_{\mathbf{n} \in \mathbb{Z}^h} \Big( \prod\limits_{j=1}^d f_j(\xi_j(\mathbf{n}) + \widetilde{r}_j) \Big) F\Big(\frac{\Xi(\mathbf{n}) + \widetilde{\mathbf{r}}}{N}\Big) G(L\mathbf{n} + \mathbf{v}),
\end{equation}
\noindent where $\widetilde{r}_j$ is the $j^{th}$ coordinate of $\widetilde{\mathbf{r}}$.
\end{Definition}
\noindent The reader might notice that this definition is subtly different from the similar definition that appeared in \cite{Wa17}, namely Definition 4.3 of that paper, in which the function $\mathbf{n} \mapsto F((\Xi(\mathbf{n})+ \widetilde{\mathbf{r}})/N)$ was treated as an arbitrary function $F_1:\mathbb{R}^h\longrightarrow [0,1]$. When dealing with quantitative aspects of smooth functions (a feature of this paper that is not required in \cite{Wa17}) it is convenient to preserve the internal structure of this particular function, and so we have modified Definition \ref{Definition most general discrete solution count form} accordingly. \\
Recall the notion of \emph{rational maps} from \cite{Wa17}.
\begin{Definition}[Rational dimension, rational map, purely irrational]
\label{Definition rational space}
Let $m$ and $d$ be natural numbers, with $d\geqslant m$. Let $L:\mathbb{R}^d\longrightarrow \mathbb{R}^m$ be a surjective linear map. Let $u$ denote the largest integer for which there exists a surjective linear map $\Theta:\mathbb{R}^m \longrightarrow \mathbb{R}^u$ for which $\Theta L (\mathbb{Z}^d) \subseteq \mathbb{Z}^u$. We call $u$ the \emph{rational dimension} of $L$, and we call any map $\Theta$ with the above property a \emph{rational map} for $L$. We say that $L$ is \emph{purely irrational} if $u=0$.
\end{Definition}
\begin{Remark}
\label{Remark algebraic coefficients of rational map}
\emph{If (the matrix of) $L$ has algebraic coefficients, then there exists a rational map for $L$ that also has algebraic coefficients.}
\end{Remark}
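For illustration, consider the following sketch in exact arithmetic via sympy; both maps are hypothetical examples with $m = 1$ and $d = 2$. The map $L_1(x,y) = \sqrt{2}x + 2\sqrt{2}y$ has rational dimension $1$, witnessed by the rational map $\Theta = 1/\sqrt{2}$, whereas $L_2(x,y) = \sqrt{2}x + \sqrt{3}y$ is purely irrational: if $\theta\sqrt{2}$ and $\theta\sqrt{3}$ were both integers then $\sqrt{3}/\sqrt{2}$ would be rational unless $\theta = 0$.

```python
from sympy import Matrix, sqrt

# L1(x, y) = sqrt(2)*x + 2*sqrt(2)*y: rational dimension 1 (hypothetical example)
L1 = Matrix([[sqrt(2), 2 * sqrt(2)]])
Theta = Matrix([[1 / sqrt(2)]])     # candidate rational map Theta: R^1 -> R^1

TL = Theta * L1                     # exact arithmetic gives [1, 2]
assert all(entry.is_integer for entry in TL)   # so Theta*L1 maps Z^2 into Z

# L2(x, y) = sqrt(2)*x + sqrt(3)*y admits no such non-trivial Theta, i.e. it
# is purely irrational (argued above; not checked mechanically here).
L2 = Matrix([[sqrt(2), sqrt(3)]])
```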
Purely irrational linear maps are those that we may analyse most easily using the Davenport-Heilbronn method (see Section \ref{section Inequalities in arithmetic progressions}). However, even when proving Theorem \ref{Main theorem simpler version}, whose statement concerns only purely irrational linear maps, we will be forced to consider auxiliary linear maps that are not purely irrational. It is necessary therefore to develop a rudimentary theory of these maps. Readers desiring more detail and motivating examples concerning rational maps and rational dimension may consult Sections 2, 4, and 6 of \cite{Wa17}. \\
Our key tool will be Lemma \ref{Lemma generating a purely irrational map}, which is a version of Lemma 4.10 from \cite{Wa17}. This lemma will enable us to `quotient out' the rational relations that are present in a diophantine inequality, leaving behind a purely irrational linear map between spaces of a lower dimension. In particular, we will show that \[T_{F,G,N}^{L,\mathbf{v}}(f_1,\dots,f_d)=\sum\limits_{\widetilde{\mathbf{r}} \in \widetilde{R}}T_{F,G_{\widetilde{\mathbf{r}}},N}^{L^\prime,\mathbf{v}^\prime,\Xi,\widetilde{\mathbf{r}}}(f_1,\dots,f_d),\] where $L^\prime$ is purely irrational, and the vectors $\mathbf{v^\prime}$ and $\widetilde{\mathbf{r}}$, the linear map $\Xi$ and the function $G_{\widetilde{\mathbf{r}}}$ are objects that we may control.
To state the lemma we need to recall explicitly the notion from \cite{GT10} that was mentioned in Remark \ref{Remark dual CS}, namely \emph{finite Cauchy-Schwarz complexity} for linear maps.\footnote{In \cite{Wa17} a notion of degeneracy for pairs of linear maps was useful, but we have structured the present paper in such a way as to avoid requiring this complicated notion.}
\begin{Definition}
[Finite Cauchy-Schwarz complexity]
Let $d,h$ be natural numbers, and let $(\xi_1,\dots,\xi_d): = \Xi:\mathbb{R}^h \longrightarrow \mathbb{R}^d$ be a linear map. We say that $\Xi$ has \emph{infinite Cauchy-Schwarz complexity} if there are two distinct indices $i$ and $j$, and some $\lambda \in \mathbb{R}$, for which $\xi_i = \lambda\xi_j$. If no such $i$ and $j$ exist we say that $\Xi$ has \emph{finite Cauchy-Schwarz complexity}.
\end{Definition}
There is an equivalent definition, which will be more convenient for algebraic manipulations.
\begin{Definition}[Finite Cauchy-Schwarz complexity, equivalent definition]
\label{Definition finite complexity}
Let $d,h$ be natural numbers. Let $\mathbf{e_1}, \dots,\mathbf{e_d}$ denote the standard basis vectors of $\mathbb{R}^d$, and let $\mathbf{e_1}^\ast, \dots,\mathbf{e_d}^\ast$ denote the dual basis of $(\mathbb{R}^d)^\ast$. Then let $V_{\operatorname{degen}}(h,d)$ denote the set of all linear maps $\Xi:\mathbb{R}^h \longrightarrow \mathbb{R}^d$ for which there exist two indices $i,j \leqslant d$, and some real number $\lambda$, such that $\mathbf{e_i} - \lambda \mathbf{e_j}$ is non-zero and $\mathbf{e_i}^* - \lambda \mathbf{e_j}^* \in \ker (\Xi^*)$. If $\Xi \notin V_{\operatorname{degen}}(h,d)$, we say that $\Xi$ has \emph{finite Cauchy-Schwarz complexity}.
\end{Definition}
\noindent The equivalence of these definitions is elementary.
For more background on the notion of finite Cauchy-Schwarz complexity, the reader may consult Section 1 of \cite{GT10} or Section 6 of \cite{Wa17}.\\
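The pairwise-proportionality criterion of the first definition is easy to test mechanically. A minimal sketch follows (representing $\Xi$ by the list of coefficient rows of the forms $\xi_1,\dots,\xi_d$ is our own convention, and the examples are hypothetical):

```python
def is_multiple_of(u, v):
    """Decide whether u = lam * v for some real lam (exact for rational input)."""
    if all(x == 0 for x in u):
        return True          # lam = 0 always works
    if all(x == 0 for x in v):
        return False         # a non-zero u cannot be a multiple of 0
    # u, v non-zero: u = lam*v iff every 2x2 minor of the pair vanishes
    n = len(u)
    return all(u[i] * v[j] == u[j] * v[i] for i in range(n) for j in range(n))

def has_finite_cs_complexity(Xi):
    """Xi = list of d coefficient rows, row j representing the form xi_j."""
    d = len(Xi)
    return not any(is_multiple_of(Xi[i], Xi[j])
                   for i in range(d) for j in range(d) if i != j)

# x, y, x + y, x - y: no two forms are proportional
assert has_finite_cs_complexity([(1, 0), (0, 1), (1, 1), (1, -1)])
# x + 2y and 2x + 4y are proportional, so the complexity is infinite
assert not has_finite_cs_complexity([(1, 2), (2, 4), (0, 1)])
```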
Now we may state and prove the important lemma, which provides the `dimension reduction' of the section title.
\begin{Lemma}[Generating a purely irrational map]
\label{Lemma generating a purely irrational map}
Let $m,d$ be natural numbers, with $d \geqslant m+2$, and let $C,\eta$ be positive parameters. Let $L: \mathbb{R}^d \longrightarrow \mathbb{R}^m$ be a surjective linear map with algebraic coefficients. Let $u$ be the rational dimension of $L$. Let $F:\mathbb{R}^d \longrightarrow [0,1]$ and $G: \mathbb{R}^m \longrightarrow [0,1]$ be compactly supported functions. Assume that $G$ is smooth, $\operatorname{Rad}(G) \leqslant \eta$, and moreover that $G \in \mathcal{C}(P,\eta)$ for some set of parameters $P$. Let $\mathbf{v} \in\mathbb{R}^m$ be a vector with $\Vert \mathbf{v}\Vert_\infty \leqslant CN$. Then there exists a surjective linear map $\Theta:\mathbb{R}^m \longrightarrow \mathbb{R}^u$, a surjective linear map $L^\prime:\mathbb{R}^{d-u} \longrightarrow \mathbb{R}^{m-u}$, an injective linear map $\Xi:\mathbb{R}^{d-u} \longrightarrow \mathbb{R}^d$, a finite subset $\widetilde{R}\subset \mathbb{Z}^d$, a vector $\mathbf{v}^\prime \in \mathbb{R}^{m-u}$, and, for each $\widetilde{\mathbf{r}} \in \widetilde{R}$, a compactly supported function $G_{\widetilde{\mathbf{r}}}:\mathbb{R}^{m-u} \longrightarrow [0,1]$, such that
\begin{enumerate}[(1)]
\item $\Theta$ is a rational map for $L$ with algebraic coefficients;
\item $\Xi$ has integer coefficients, depends only on $L$, and satisfies $\operatorname{Im} \Xi = \ker \Theta L $ and $\Xi(\mathbb{Z}^{d-u}) = \mathbb{Z}^d \cap \operatorname{Im} \Xi$;
\item $\widetilde{R}$ satisfies $\vert \widetilde{R}\vert = O_{L,\eta}(1)$, and $\Vert \widetilde{\mathbf{r}}\Vert_\infty = O_{C,L,\eta}(N)$ for all $\widetilde{\mathbf{r}} \in \widetilde{R}$;
\item for all $\widetilde{\mathbf{r}} \in\widetilde{R}$, the function $G_{\widetilde{\mathbf{r}}}$ is smooth, $\operatorname{Rad}(G_{\widetilde{\mathbf{r}}}) = O_L(\eta)$, and $G_{\widetilde{\mathbf{r}}} \in \mathcal{C}(L,P,\eta)$;
\item $\mathbf{v}^\prime$ satisfies $\Vert \mathbf{v}^\prime\Vert_\infty = O_{C,L}(N)$;
\item for all natural numbers $N$, and for all functions $f_1,\dots,f_d: \mathbb{Z} \longrightarrow \mathbb{R}$, one has \[T_{F,G,N}^{L,\mathbf{v}}(f_1,\dots,f_d)=\sum\limits_{\widetilde{\mathbf{r}} \in \widetilde{R}}T_{F,G_{\widetilde{\mathbf{r}}},N}^{L^\prime,\mathbf{v}^\prime,\Xi,\widetilde{\mathbf{r}}}(f_1,\dots,f_d);\]
\item $L^\prime$ is purely irrational, depends only on $L$, and has algebraic coefficients;
\item if $L \notin V_{\operatorname{degen}}^*(m,d)$ then $\Xi$ has finite Cauchy-Schwarz complexity. \\
\noindent The above properties suffice for Section \ref{section proof of pseduorandomness}, but three additional properties also hold. We will need these additional properties in Section \ref{section structure of Q}. \\
\item Letting $\mathbf{e_1}, \dots,\mathbf{e_{d-u}}$ denote the standard basis of $\mathbb{R}^{d-u}$, there is a set $\{ \mathbf{x_i}: i \leqslant u\} \subset \mathbb{R}^d$ for which \begin{equation}
\mathcal{B}: = \{\mathbf{x_i}:i\leqslant u\} \cup \{ \Xi(\mathbf{e_j}):j\leqslant d-u\}
\end{equation} is a basis for $\mathbb{R}^d$ and a lattice basis for $\mathbb{Z}^d$. Furthermore, $\widetilde{R} \subset \operatorname{span}(\mathbf{x_i}:i\leqslant u)$ and $\{\Theta L \mathbf{x_i}: i\leqslant u\}$ is a lattice basis for $\Theta L \mathbb{Z}^d$;
\item if $\eta$ is small enough in terms of $L$, and if $\mathbf{v} = L\mathbf{a}$ for some $\mathbf{a} \in \mathbb{R}^d$, then $\vert \widetilde{R}\vert = 1$ and the unique element $\widetilde{\mathbf{r}} \in \widetilde{R}$ minimises $\Vert \Theta L (\mathbf{s} + \mathbf{a})\Vert_\infty$ over all $\mathbf{s} \in \mathbb{Z}^d$;
\item for all $\widetilde{\mathbf{r}} \in \widetilde{R}$ and $\mathbf{x} \in \mathbb{R}^{d-u}$ one has \[ G_{\widetilde{\mathbf{r}}}(L^\prime \mathbf{x} + \mathbf{v}^\prime) = G(L\Xi (\mathbf{x}) + L\widetilde{\mathbf{r}} +\mathbf{v}).\]
\end{enumerate}
\end{Lemma}
\begin{proof}
\textbf{Parts (1) and (2)}: Choose $\Theta:\mathbb{R}^m \longrightarrow \mathbb{R}^u$ to be a rational map for $L$ that has algebraic coefficients. By rank-nullity $\ker (\Theta L)$ is a $d-u$ dimensional subspace of $\mathbb{R}^d$, and also the matrix of $\Theta L$ has integer coefficients. Combining these two facts, we see that $\ker (\Theta L) \cap \mathbb{Z}^d$ is a $d-u$ dimensional lattice, and (by the standard algorithms) one can find a lattice basis $\mathbf{v_1},\dots, \mathbf{v_{d-u}} \in \mathbb{Z}^d$ that satisfies $\Vert \mathbf{v_i}\Vert_\infty = O_{L}(1)$ for every $i$.
Let $\mathbf{e_1},\dots,\mathbf{e_{d-u}}$ denote the standard basis of $\mathbb{R}^{d-u}$, and then define $\Xi:\mathbb{R}^{d-u} \longrightarrow \mathbb{R}^{d}$ by \[ \Xi(\mathbf{e_i}):= \mathbf{v_i}\] for all $i\leqslant d-u$. Then $\Xi$ satisfies part (2) of the lemma. \\
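A concrete instance of this construction (with the hypothetical data $d = 3$, $u = 1$, $\Theta L = (1\ 1\ 1)$) also illustrates the basis $\mathcal{B}$ of part (9): a kernel lattice basis together with a preimage vector $\mathbf{x_1}$ gives a unimodular matrix, hence a lattice basis for $\mathbb{Z}^3$.

```python
import numpy as np

# Hypothetical data: d = 3, u = 1, with Theta*L the integer matrix (1 1 1).
ThetaL = np.array([[1, 1, 1]])

# A lattice basis of ker(Theta L) ∩ Z^3, found by hand here (in general one
# would use the Hermite normal form algorithm):
v1, v2 = np.array([1, -1, 0]), np.array([0, 1, -1])
assert (ThetaL @ v1 == 0).all() and (ThetaL @ v2 == 0).all()

# x_1 with Theta*L*x_1 = 1, so that Theta*L*x_1 generates Theta*L(Z^3) = Z:
x1 = np.array([1, 0, 0])
assert (ThetaL @ x1 == 1).all()

# B = {x_1, v_1, v_2} is a lattice basis for Z^3 precisely when the matrix
# with these columns is unimodular, i.e. has determinant +-1:
B = np.column_stack([x1, v1, v2])
assert round(abs(np.linalg.det(B))) == 1
```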
\textbf{Parts (3), (9), and (10)}: There is a set of vectors $\{ \mathbf{a_1},\dots,\mathbf{a_u}\} \subset \mathbb{Z}^u$ that is an integer basis for the lattice $\Theta L(\mathbb{Z}^d)$ and for which $\Vert \mathbf{a_i}\Vert_\infty = O_{L}(1)$ for each $i$. Furthermore there exists a set of vectors $\{ \mathbf{x_1},\dots,\mathbf{x_u}\} \subset \mathbb{Z}^d$ such that $\Theta L(\mathbf{x_i})= \mathbf{a_i}$ for each $i$, and $\Vert \mathbf{x_i}\Vert_\infty = O_{L}(1)$. By Lemma 4.8 of \cite{Wa17},
\begin{equation}
\label{basis for Rd}
\mathcal{B}: = \{\mathbf{x_i}:i\leqslant u\} \cup \{ \Xi(\mathbf{e_j}):j\leqslant d-u\}
\end{equation} is a basis for $\mathbb{R}^d$ and a lattice basis for $\mathbb{Z}^d$.
Now, if $\mathbf{z} \in\mathbb{R}^m$ and $\Theta(\mathbf{z}) = \mathbf{r}$ then $\Vert \mathbf{z} \Vert_\infty = \Omega_L(\Vert\mathbf{r} \Vert_\infty)$. Recall that $\operatorname{Rad}(G) \leqslant \eta$ and that $\Theta L(\mathbb{Z}^d) \subseteq \mathbb{Z}^u$. It follows that there are at most $O_{L,\eta}(1)$ possible vectors $\mathbf{r} \in \mathbb{Z}^u$ for which there exists a vector $\mathbf{n} \in\mathbb{Z}^d$ such that both $G(L\mathbf{n} + \mathbf{v}) \neq 0$ and $\Theta L \mathbf{n} = \mathbf{r}$. Let $R$ denote the set of all such vectors $\mathbf{r}$. Observe that, for all $\mathbf{r} \in R$, $\Vert \mathbf{r}\Vert_\infty = O_{C,L,\eta}(N)$.
For each $\mathbf{r} \in R$, there exists a unique vector $\widetilde{\mathbf{r}}\in \operatorname{span}(\mathbf{x_i}:i\leqslant u)$ such that $\Theta L \widetilde{\mathbf{r}} = \mathbf{r}$. Note that $\Vert \widetilde{\mathbf{r}}\Vert_\infty = O_{C,L,\eta}(N)$. Letting $\widetilde{R}$ denote the set of these $\widetilde{\mathbf{r}}$, we see that $\widetilde{R}$ satisfies part (3).
If $\eta$ is small enough in terms of $L$, then $R$ has size at most $1$. Indeed, if $\mathbf{r^{(1)}}$ and $\mathbf{r^{(2)}}$ are two different vectors in $R$, with respective $\widetilde{\mathbf{r}}^{\mathbf{(1)}}$ and $\widetilde{\mathbf{r}}^{\mathbf{(2)}}$, then $G(L\widetilde{\mathbf{r}}^{\mathbf{(1)}} + \mathbf{v}) \neq 0$ and $G(L\widetilde{\mathbf{r}}^{\mathbf{(2)}} + \mathbf{v}) \neq 0$. Hence $\Vert L(\widetilde{\mathbf{r}}^{\mathbf{(1)}} - \widetilde{\mathbf{r}}^{\mathbf{(2)}})\Vert_\infty \ll\eta$. Yet $\Vert\Theta (L(\widetilde{\mathbf{r}}^{\mathbf{(1)}} - \widetilde{\mathbf{r}}^{\mathbf{(2)}}))\Vert_\infty = \Vert \mathbf{r^{(1)}} - \mathbf{r^{(2)}}\Vert_\infty \gg 1$, which is a contradiction. In this instance, writing $\mathbf{v}$ in the form $L\mathbf{a}$, we may pick $\widetilde{\mathbf{r}} \in \mathbb{Z}^d$ to be an element of $\operatorname{span}(\mathbf{x_i}:i\leqslant u)$ that minimises $\Vert \Theta L(\mathbf{s} + \mathbf{a})\Vert_{\infty}$ over all $\mathbf{s} \in \mathbb{Z}^d$.\\
\textbf{Parts (4), (5), (6), and (11)}: By the definition of $\widetilde{R}$, and the fact that $\Xi(\mathbb{Z}^{d-u}) = \mathbb{Z}^d \cap \ker (\Theta L)$, we have that $T_{F,G,N}^{L,\mathbf{v}}(f_1,\dots,f_d)$ is equal to
\begin{equation}
\label{equation getting integer rows}
\sum\limits_{\widetilde{\mathbf{r}} \in \widetilde{R}}\frac{1}{N^{d-m}} \sum\limits_{\mathbf{n} \in \mathbb{Z}^{d-u}} \Big(\prod\limits_{j=1}^d f_j(\xi_j(\mathbf{n}) + \widetilde{r}_j) \Big)F\Big(\frac{\Xi(\mathbf{n}) + \widetilde{\mathbf{r}}}{N}\Big) G(L\Xi(\mathbf{n}) + L\widetilde{\mathbf{r}} + \mathbf{v}).
\end{equation}
\noindent This is very close to being of the form required for part (6), and indeed it can be massaged into exactly the required form.\\
To do this, note that \begin{equation*}
\label{direct sum}
\mathbb{R}^m = \operatorname{span}(L\mathbf{x_i}:i\leqslant u) \oplus \ker \Theta
\end{equation*} and so there exists an invertible linear map $Q:\mathbb{R}^m \longrightarrow \mathbb{R}^m$ with algebraic coefficients such that \begin{align*}
Q(\operatorname{span}(L\mathbf{x_i}:i\leqslant u)) &= \mathbb{R}^u \times \{0\}^{m-u}, \\
Q(\ker \Theta) &= \{0\}^{u} \times \mathbb{R}^{m-u}.
\end{align*} For all $\mathbf{x} \in \mathbb{R}^{d-u}$ we have \[ G(L\Xi(\mathbf{x}) + L\widetilde{\mathbf{r}} + \mathbf{v}) = (G \circ Q^{-1})(QL\Xi(\mathbf{x}) + QL\widetilde{\mathbf{r}} + Q\mathbf{v}). \] We also note that $QL\Xi(\mathbf{x}) \in \{0\}^u \times \mathbb{R}^{m-u}$, and that $QL\widetilde{\mathbf{r}} \in \mathbb{R}^u \times \{0\}^{m-u}$.
Now, write $\pi_{m-u}:\mathbb{R}^{m}\longrightarrow \mathbb{R}^{m-u}$ for the projection map onto the final $m-u$ coordinates. Define $G_{\widetilde{\mathbf{r}}}:\mathbb{R}^{m-u} \longrightarrow [0,1]$ by
\begin{equation}
\label{explicit form of G}
G_{\widetilde{\mathbf{r}}}(\mathbf{x}) : = (G\circ Q^{-1})(\mathbf{x_0} + QL\widetilde{\mathbf{r}} + Q\mathbf{v} - (\pi_{m-u} Q \mathbf{v})_\mathbf{0}),
\end{equation} where $\mathbf{x_0}$ is the extension of $\mathbf{x}$ by $0$ in the first $u$ coordinates. Then $G_{\widetilde{\mathbf{r}}}$ satisfies the desired properties of part (4), since $\mathbf{x_0}$ and $QL\widetilde{\mathbf{r}} + Q\mathbf{v} - (\pi_{m-u} Q \mathbf{v})_\mathbf{0}$ are orthogonal.
Then (\ref{equation getting integer rows}) is equal to
\begin{equation}
\label{equation end of second part}
\sum\limits_{\widetilde{\mathbf{r}} \in \widetilde{R}}\frac{1}{N^{d-m}} \sum\limits_{\mathbf{n} \in \mathbb{Z}^{d-u}} \Big(\prod\limits_{j=1}^d f_j(\xi_j(\mathbf{n}) + \widetilde{r}_j) \Big)F\Big(\frac{\Xi(\mathbf{n}) + \widetilde{\mathbf{r}}}{N}\Big)G_{\widetilde{\mathbf{r}}}( \pi_{m-u} QL\Xi(\mathbf{n}) + \pi_{m-u} Q\mathbf{v}).
\end{equation}
\noindent Let
\begin{equation}
\label{equation definition of L prime}
L^\prime: = \pi_{m-u} QL\Xi, \qquad \mathbf{v}^\prime = \pi_{m-u} Q\mathbf{v}.
\end{equation} Then $L^\prime:\mathbb{R}^{d-u} \longrightarrow \mathbb{R}^{m-u}$ is surjective, and \[T_{F,G,N}^{L,\mathbf{v}}(f_1,\dots,f_d) = \sum\limits_{\widetilde{\mathbf{r}} \in \widetilde{R}}T_{F,G_{\widetilde{\mathbf{r}}},N}^{L^\prime, \mathbf{v}^\prime,\Xi,\widetilde{\mathbf{r}}}(f_1,\dots,f_d).\] This resolves parts (5) and (6). But furthermore, by the construction of $G_{\widetilde{\mathbf{r}}}$, part (11) is also satisfied. \\
\textbf{Part (7)}: This is immediate from Lemma 4.10 of \cite{Wa17}. To spell it out, suppose for contradiction that there exists some surjective linear map $\varphi:\mathbb{R}^{m-u} \longrightarrow \mathbb{R}$ with $\varphi L^\prime (\mathbb{Z}^{d-u}) \subseteq \mathbb{Z}$, i.e. with $\varphi \pi_{m-u} QL\Xi(\mathbb{Z}^{d-u}) \subseteq \mathbb{Z}$. Then define the map $\Theta^\prime:\mathbb{R}^m \longrightarrow \mathbb{R}^{u+1}$ by \[ \Theta^\prime(\mathbf{x}) : = (\Theta(\mathbf{x}),\varphi \pi_{m-u} Q(\mathbf{x})).\] Then $\Theta^\prime$ is surjective, and $\Theta^\prime L(\mathbb{Z}^d)\subseteq \mathbb{Z}^{u+1}$. This second fact is immediately seen by writing $\mathbb{Z}^d$ with respect to the lattice basis $\mathcal{B}$ from (\ref{basis for Rd}). This contradicts the assumption that $L$ has rational dimension $u$. So $L^\prime$ is purely irrational. \\
\textbf{Part (8)}: Suppose $L\notin V_{\operatorname{degen}}^*(m,d)$ and suppose for contradiction that $\Xi$ has infinite Cauchy-Schwarz complexity. Letting $\mathbf{e_1},\dots,\mathbf{e_d}$ denote the standard basis of $\mathbb{R}^d$, this means that there exist indices $i,j\leqslant d$ and some $\lambda \in \mathbb{R}$, with $\mathbf{e_i} - \lambda\mathbf{e_j}$ non-zero, such that $\mathbf{e_i}^* - \lambda\mathbf{e_j}^* \in \ker (\Xi^*)$. But $\ker (\Xi^*) = (\operatorname{Im} \Xi)^0 = (\ker \Theta L)^0 = \operatorname{Im} (L^* \Theta^*)$. Hence $\mathbf{e_i}^* - \lambda\mathbf{e_j}^* \in \operatorname{Im} L^*$, which implies that $L \in V_{\operatorname{degen}}^*(m,d)$, contradicting our hypothesis.\\
The lemma is proved.
\end{proof}
\begin{Remark}
\label{Remark generalising Green Tao}
\emph{Applying Lemma \ref{Lemma generating a purely irrational map} with $f_j = \Lambda^\prime$ for all $j$, and when $L$ has rational dimension $m$, it is evident that estimating $T_{F,G,N}^{L,\mathbf{v}}(\Lambda^\prime,\dots,\Lambda^\prime)$ is equivalent to counting solutions to $\vert \widetilde{R}\vert$ systems of linear equations given by $\Xi$. This is handled by the Main Theorem of \cite{GT10}. In this sense, one may see how our work in this paper generalises Green-Tao's work in \cite{GT10} to the cases in which the rational dimension is not equal to $m$. }
\end{Remark}
\section{Normal form}
\label{section normal form}
In this section we describe, very briefly, what it means for a linear map $(\psi_1,\dots,\psi_t) = \Psi:\mathbb{R}^d\longrightarrow \mathbb{R}^t$ to be in \emph{$s$-normal form}. For a more complete discussion we refer the reader to \cite{GT10} and \cite{Wa17}.
\begin{Definition}[Normal form]
\label{Definition normal form}
Let $d,t$ be natural numbers, let $s$ be a non-negative integer, and let $(\psi_1,\dots,\psi_t) = \Psi:\mathbb{R}^d\longrightarrow \mathbb{R}^t$ be a linear map. We say that $\Psi$ is in $s$-normal form if for every $i \in [t]$ there exists a collection $J_i \subseteq \{ \mathbf{e_1},\dots,\mathbf{e_d}\}$ of basis vectors of cardinality $\vert J_i\vert\leqslant s+1$ such that $\prod_{\mathbf{e}\in J_i} \psi_{i^\prime}(\mathbf{e})$ is non-zero for $i^\prime = i$ and vanishes otherwise.
\end{Definition}
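The definition can be checked by brute force over all index sets $J_i$ of size at most $s+1$. A small sketch (the examples below are chosen by hand): the system $(x+y,\, y+z,\, x+z)$ is in $1$-normal form but not in $0$-normal form, while the $3$-term progression system $(x,\, x+y,\, x+2y)$ is in $s$-normal form for no $s$, since the form $x$ is non-zero exactly where $x+2y$ is.

```python
from itertools import combinations
from math import prod

def is_in_s_normal_form(Psi, s):
    """Psi is a list of t rows; row i lists psi_i(e_1), ..., psi_i(e_d).
    Brute-force check of the definition above: for each i, some set J_i of at
    most s+1 coordinates must have prod_{e in J_i} psi_{i'}(e) non-zero for
    i' = i and vanishing for every other i'."""
    t, d = len(Psi), len(Psi[0])
    for i in range(t):
        witnesses = (
            J
            for size in range(1, s + 2)
            for J in combinations(range(d), size)
            if prod(Psi[i][e] for e in J) != 0
            and all(prod(Psi[k][e] for e in J) == 0 for k in range(t) if k != i)
        )
        if next(witnesses, None) is None:
            return False
    return True

# (x + y, y + z, x + z): J_1 = {e_1, e_2} etc. works, so 1-normal form holds
assert is_in_s_normal_form([(1, 1, 0), (0, 1, 1), (1, 0, 1)], 1)
assert not is_in_s_normal_form([(1, 1, 0), (0, 1, 1), (1, 0, 1)], 0)
# the 3-AP system (x, x + y, x + 2y) is not in normal form without extension
assert not is_in_s_normal_form([(1, 0), (1, 1), (1, 2)], 1)
```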
The notion of normal form is intimately connected with the notion of finite Cauchy-Schwarz complexity (Definition \ref{Definition finite complexity}). The key proposition was proved\footnote{In \cite{Wa17} we were forced to prove a delicate quantitative version, but this will not be necessary here.} in \cite{GT10}.
\begin{Lemma}[Normal form extensions]
\label{Lemma normal form algorithm}
Let $d,t$ be natural numbers, and let $(\psi_1,\dots,\psi_t) = \Psi:\mathbb{R}^d \longrightarrow \mathbb{R}^t$ be a linear map with finite Cauchy-Schwarz complexity. Then there is a linear map $\Psi^\prime:\mathbb{R}^{d^\prime}\longrightarrow \mathbb{R}^t$ such that:
\begin{itemize}
\item $d^\prime = O(1)$;
\item for some vectors $\mathbf{f_k}\in \mathbb{R}^d$ that satisfy $\Vert \mathbf{f_k}\Vert_\infty=O_{\Psi}(1)$ for every $k$, the map $\Psi^\prime$ is of the form $$\Psi^\prime(\mathbf{u}, x_1,\dots,x_{d^\prime - d}) = \Psi(\mathbf{u} + x_1\mathbf{f_1}+ \dots + x_{d^\prime - d}\mathbf{f_{d^\prime - d}})$$ for all $\mathbf{u} \in \mathbb{R}^d$;
\item $\Psi^\prime$ is in $s$-normal form, for some $s= O(1)$.
\end{itemize}
\end{Lemma}
\begin{proof}
In \cite[Lemma 4.4]{GT10} this lemma was proved for a linear map over a $\mathbb{Q}$-vector space. The proof over $\mathbb{R}$ is identical. Alternatively one can iterate \cite[Proposition 6.7]{Wa17} over all $i\leqslant t$.
\end{proof}
\begin{Remark}
\label{Remark Cauchy Scwarz complexity}
\emph{In Lemma \ref{Lemma normal form algorithm} one may take $s$ to be the Cauchy-Schwarz complexity of $\Psi$. This notion will not be used in this paper, save for the `finite versus infinite' dichotomy already given in Definition \ref{Definition finite complexity}.}
\end{Remark}
\part{Pseudorandomness}
\label{part pseudorandomness}
Notions of pseudorandomness are crucial to the theory of higher order Fourier analysis. A small Gowers norm is one such notion, as is satisfying the `linear forms condition' of \cite{GT08} and \cite{GT10}. In this part we review what is known about Gowers norms in relation to the primes, and then formulate a `linear inequalities condition', which will be the analogous notion of pseudorandomness for this paper. \\
\section{The $W$-trick and Gowers norms}
\label{section W trick and Gowers norms}
To begin with, let us recall the definition of the Gowers norm over a cyclic group and over $[N]$. Given a function $f:\mathbb{Z}/N\mathbb{Z}\longrightarrow \mathbb{C}$, and a natural number $d$, one defines the Gowers $U^{d}$ norm $\Vert f\Vert_{U^{d}(N)}$ to be the unique non-negative solution to the equation
\begin{equation}
\label{Definition of Gowers norms}
\Vert f\Vert_{U^{d}(N)}^{2^d} = \frac{1}{N^{d+1}}\sum\limits_{x,h_1,\cdots,h_d}\prod\limits_{\boldsymbol{\omega}\in \{0,1\}^d}\mathscr{C}^{\vert \boldsymbol{\omega} \vert} f(x+\mathbf{h}\cdot\boldsymbol{\omega}),
\end{equation}
\noindent where $\vert \boldsymbol{\omega}\vert = \sum_i \omega_i$, $\mathbf{h} = (h_1,\cdots,h_d)$, $\mathscr{C}$ is the complex-conjugation operator, and the summation is over $x,h_1,\cdots,h_d\in \mathbb{Z}/N\mathbb{Z}$. It is not immediately obvious why the right-hand side of (\ref{Definition of Gowers norms}) is always a non-negative real, nor why the $U^d$ norms are genuine norms if $d\geqslant 2$, but both facts are true. There are many expositions of the standard theory of these norms available in the literature, for example \cite[Chapter 11]{TaVu10} and \cite{Gr07}. For the most general treatment, the reader may consider Appendices B and C of \cite{GT10}.
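For $d = 2$ the norm has a well-known Fourier description, $\Vert f\Vert_{U^2(N)}^4 = \sum_r \vert \widehat{f}(r)\vert^4$ with $\widehat{f}(r) = \frac{1}{N}\sum_x f(x)e(-rx/N)$, which gives a quick numerical check of the defining formula:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Right-hand side of the defining equation for d = 2:
# (1/N^3) sum_{x,h1,h2} f(x) conj(f(x+h1)) conj(f(x+h2)) f(x+h1+h2)
total = 0
for x in range(N):
    for h1 in range(N):
        for h2 in range(N):
            total += (f[x] * np.conj(f[(x + h1) % N]) *
                      np.conj(f[(x + h2) % N]) * f[(x + h1 + h2) % N])
U2_fourth = (total / N**3).real   # the sum is real up to rounding error

# Fourier identity: ||f||_{U^2(N)}^4 = sum_r |f_hat(r)|^4
fhat = np.fft.fft(f) / N
assert np.isclose(U2_fourth, np.sum(np.abs(fhat) ** 4))
```

In particular the right-hand side of the defining equation is visibly a non-negative real in this case, as claimed.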
In the sequel we will be considering functions defined on $[N]$ rather than on $\mathbb{Z}/N\mathbb{Z}$. However, the Gowers norm of such functions may be easily defined by reference to the cyclic group case. Indeed, if $f:[N] \longrightarrow \mathbb{C}$, and $d$ is a natural number, one chooses a natural number $N^\prime > N$ and then considers $[N]$ as an initial segment of $\mathbb{Z}/N^\prime \mathbb{Z}$ (viewing $[N^\prime]$ as a set of representatives for the residue classes of $\mathbb{Z}/N^\prime \mathbb{Z}$). One then defines \begin{equation}
\label{equation GN over ZNZ}
\Vert f\Vert_{U^d[N]} := \frac{\Vert f1_{[N]}\Vert_{U^d(N^\prime)}}{\Vert 1_{[N]}\Vert_{U^d(N^\prime)}},
\end{equation} which is independent of $N^\prime$ provided $N^\prime/N$ is large enough in terms of $d$.
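The independence of $N^\prime$ can be observed numerically. The sketch below treats $d = 2$ via the Fourier identity for the $U^2$ norm; a short count of additive quadruples suggests that $N^\prime \geqslant 2N$ already suffices in this case, since no wrap-around occurs modulo $N^\prime$.

```python
import numpy as np

def u2_fourth_power(g):
    """||g||_{U^2(N')}^4 for g on Z/N'Z, via the d = 2 Fourier identity."""
    ghat = np.fft.fft(g) / len(g)
    return np.sum(np.abs(ghat) ** 4)

def u2_over_interval(f, Nprime):
    """||f||_{U^2[N]} computed inside Z/N'Z, as in the displayed definition."""
    N = len(f)
    g, ind = np.zeros(Nprime), np.zeros(Nprime)
    g[:N], ind[:N] = f, 1.0          # f*1_[N] and 1_[N] embedded in Z/N'Z
    return (u2_fourth_power(g) / u2_fourth_power(ind)) ** 0.25

f = np.arange(1.0, 9.0)              # an arbitrary test function on [N], N = 8
# two different embeddings give the same value once N'/N is large enough:
assert np.isclose(u2_over_interval(f, 32), u2_over_interval(f, 64))
```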
This is as much background as we will give here, and the reader is invited to consult the aforementioned references for more detail. A Gowers norm over $\mathbb{R}$ will also appear later on in this paper, but will be introduced in Section \ref{section transfer} as and when it is needed. \\
We move our consideration to the primes. Given some fixed modulus $q$ the primes are not uniformly distributed across arithmetic progressions modulo $q$ (as almost all the primes are coprime to $q$), and this lack of uniformity is an obstacle when trying to count solutions to equations in primes. Fortunately, there is a technical device, known as the $W$-trick, that has long been used to manage this difficulty.
This device is usually introduced via the following function.
\begin{Definition}
\label{Definition W tricked von mangoldt function}
Let $N$ be a natural number, and let $W$ be as in Section \ref{section conventions}. For any natural number $b$ with $(b,W)=1$, let $\Lambda_{b,W}^\prime: \mathbb{Z} \longrightarrow \mathbb{R}_{\geqslant 0}$ be defined by \[\Lambda^\prime_{b,W}(n) = \begin{cases} \frac{\varphi(W)}{W}\Lambda^\prime(Wn+b) & n\geqslant 1 \\ 0 & \text{otherwise}. \end{cases} \]
\end{Definition}
\noindent The idea from \cite{GT10}, going back to \cite{Gr05} and \cite{GT08}, is that the function \[\frac{1}{\varphi(W)}\sum\limits_{\substack{b \leqslant W \\ \,(b,W) = 1}} \Lambda_{b,W}^\prime\] should act as a proxy for $\Lambda^\prime$, while each $\Lambda_{b,W}^\prime$ enjoys strong pseudorandomness properties. For example we have the following deep result, which is a crucial component of the proof of Theorem \ref{Theorem Green Tao} on linear equations in primes.
\begin{Theorem}{\cite[Theorem 7.2]{GT10}}
\label{Tool from Green and Tao}
Let $N,s$ be natural numbers, and let $w^*: \mathbb{N} \longrightarrow \mathbb{R}_{ \geqslant 0}$ be any function that satisfies $w^*(n) \longrightarrow \infty$ as $n\rightarrow \infty$ and $w^*(n) \leqslant \frac{1}{2} \log\log n$ for all $n$. Let $b = b(N)$ be a natural number that satisfies $b\leqslant W^*$ and $(b,W^*) = 1$. Then
\begin{equation}
\label{key green tao result}
\Vert \Lambda^\prime_{b,W^*}-1\Vert_{U^{s+1}[N]}=o(1)
\end{equation}
\noindent as $N\rightarrow \infty$, where the $o(1)$ term may depend on the function $w^*$ chosen (but is independent of the choice of $b$).
\end{Theorem}
\noindent We remind the reader that $s$ is a dimension parameter, and so dependence on $s$ is not denoted explicitly in our implied constants.
\begin{Remark}
\emph{In \cite{GT10} Theorem \ref{Tool from Green and Tao} is proved conditionally, relying on two other conjectures. But, as we intimated in the introduction, these conjectures were later settled in joint work of Green-Tao and Green-Tao-Ziegler \cite{GT12, GTa12, GTZ12}.}
\end{Remark}
\begin{Remark}
\label{Remark rescaling w trick}
\emph{We will use Theorem \ref{Tool from Green and Tao} to prove Theorem \ref{Main theorem}. Unfortunately it seems that this cannot be done in the same manner as in \cite{GT10}, i.e. by splitting $[N]$ into arithmetic progressions modulo $W$ at an early stage and then performing subsequent manipulations with the functions $\Lambda^\prime_{b,W}$.}
\emph{As a heuristic, instead of considering an inequality such as
\begin{equation}
\label{basic inequality}
\Vert L\mathbf{n} + \mathbf{v}\Vert_\infty \leqslant \varepsilon,
\end{equation} for some $L$ with irrational coefficients and some positive $\varepsilon$, \cite{GT10} considers (\ref{basic inequality}) for some $L$ with rational coefficients and sets $\varepsilon$ equal to $0$. Under those assumptions one may rescale the variables $\mathbf{n}$ by a factor of $W$, as required in Definition \ref{Definition W tricked von mangoldt function}, without fundamentally altering the problem. However, in the more general scenario of Theorem \ref{Main theorem}, where $\varepsilon$ is strictly positive, rescaling the variable $\mathbf{n}$ by a factor of $W$ means we must replace $\varepsilon$ by $\varepsilon/W$, and we cannot afford this loss, as the manipulations in Section \ref{section transfer} lose some powers of $\varepsilon$. As far as we have been able to tell, this means that we cannot perform the $W$-trick in this manner.}
\end{Remark}
To circumvent this issue of scaling, we will work with the local von Mangoldt function $\Lambda_{\mathbb{Z}/W\mathbb{Z}}$ throughout, saving our rescaling for the very end of the argument. Regarding control of Gowers norms, the following lemma therefore provides the more appropriate bound.
\begin{Lemma}
\label{Lemma Corollary of tool from Green-Tao}
Let $N,s$ be natural numbers. Then
$$\Vert \Lambda^\prime - \Lambda_{\mathbb{Z}/W\mathbb{Z}}\Vert_{ U^{s+1}[N]} = o(1)$$ as $N\rightarrow \infty$.
\end{Lemma}
\noindent The proof is a standard deduction from results of \cite{GT10}, achieved by splitting into arithmetic progressions modulo $W$. We would however like to thank the anonymous referee for suggesting a simplification to our original argument.
\begin{proof}
Let $(\psi_{\boldsymbol{\omega}})_{\boldsymbol{\omega} \in \{0,1\}^{s+1}} = \Psi:\mathbb{R}^{s+2} \longrightarrow \mathbb{R}^{2^{s+1}}$ denote the linear map giving the Gowers norm, i.e. where each $\psi_{\boldsymbol{\omega}}$ is of the form $\psi_{\boldsymbol{\omega}}(x,\mathbf{h}) = x + \boldsymbol{\omega} \cdot \mathbf{h}$. From expression (\ref{equation GN over ZNZ}), we then have
\begin{equation}
\label{general linear form comparison of von mangoldt and local von mangoldt}
\Vert \Lambda^\prime - \Lambda_{\mathbb{Z}/W\mathbb{Z}}\Vert_{ U^{s+1}[N]}^{2^{s+1}}=\frac{1}{\vert Z\vert}\sum\limits_{\mathbf{n}\in Z } \prod\limits_{\boldsymbol{\omega} \in \{0,1\}^{s+1}} (\Lambda^\prime - \Lambda_{\mathbb{Z}/W\mathbb{Z}})(\psi_{\boldsymbol{\omega}}(\mathbf{n}))\end{equation}
\noindent where \[ Z = \{ \mathbf{n} \in \mathbb{Z}^{s+2} : \Psi(\mathbf{n}) \in [1,N]^{2^{s+1}}\}.\] It is immediate that $\vert Z\vert \asymp N^{s+2}$.
We now split into arithmetic progressions modulo $W$. To this end let $A \subset [W]^{s+2}$ be the set defined by \[ A = \{ \mathbf{a} \in [W]^{s+2}: (\psi_{\boldsymbol{\omega}}(\mathbf{a}), W) = 1 \text{ for all } \boldsymbol{\omega} \in \{0,1\}^{s+1}\}.\] Then the right-hand side of (\ref{general linear form comparison of von mangoldt and local von mangoldt}) is
\begin{equation}
\label{gowers correction}
\asymp \frac{1}{N^{s+2}} \sum\limits_{\mathbf{a} \in A} \sum\limits_{ \substack{ \mathbf{m} \in \mathbb{Z}^{s+2} \\ W\mathbf{m} + \mathbf{a} \in Z }} \prod\limits_{ \boldsymbol{\omega} \in \{0,1\}^{s+1}} (\Lambda^\prime - \Lambda_{\mathbb{Z}/W\mathbb{Z}})( \psi_{\boldsymbol{\omega}}(W\mathbf{m} + \mathbf{a})),
\end{equation} plus an error of magnitude at most \[ \frac{ \log^{2^{s+1}} W}{N^{s+2}} \sum\limits_{ \mathbf{n} \in Z } \sum\limits_{ \boldsymbol{\omega} \in \{0,1\}^{s+1}} 1_{[0,W]}(\psi_{\boldsymbol{\omega}}(\mathbf{n})).\] This error is $o(1)$.
By the linearity of $\Psi$, and recalling the definition of $\Lambda_{b,W}^\prime$ from Definition \ref{Definition W tricked von mangoldt function}, we have that expression (\ref{gowers correction}) is equal to
\[ \frac{1}{N^{s+2}} \sum\limits_{\mathbf{a} \in A} \Big(\frac{W}{\varphi(W)}\Big)^{2^{s+1}}\sum\limits_{\substack{\mathbf{m} \in \mathbb{Z}^{s+2} \\ W\mathbf{m} + \mathbf{a} \in Z}} \prod\limits_{\boldsymbol{\omega} \in \{0,1\}^{s+1}} (\Lambda^\prime_{\psi_{\boldsymbol{\omega}}(\mathbf{a}), W} - 1)(\psi_{\boldsymbol{\omega}}(\mathbf{m})).\] Observe that \[\vert A\vert = \beta_W W^{s+2}\frac{\varphi(W)^{2^{s+1}}}{W^{2^{s+1}}},\] where \[\beta_W: = \frac{1}{W^{s+2}}\sum\limits_{\mathbf{m}\in [W]^{s+2}}\prod\limits_{\boldsymbol{\omega}\in \{0,1\}^{s+1}}\Lambda_{\mathbb{Z}/W\mathbb{Z}}(\psi_{\boldsymbol{\omega}}(\mathbf{m}))\] is the local factor associated to the system of forms $\Psi$. Since $\Psi$ has finite Cauchy-Schwarz complexity, we have the bound $\beta_W=O(1)$ (by \cite[Lemma 1.3]{GT10}). This means that the lemma would follow from the bound
\begin{equation}
\label{what it would follow from}
\frac{1}{(N/W)^{s+2}}\sum\limits_{\substack{\mathbf{m} \in \mathbb{Z}^{s+2} \\ \mathbf{m} \in (Z-\mathbf{a})/W}} \prod\limits_{\boldsymbol{\omega} \in \{0,1\}^{s+1}} (\Lambda^\prime_{\psi_{\boldsymbol{\omega}}(\mathbf{a}), W} - 1)(\psi_{\boldsymbol{\omega}}(\mathbf{m})) = o(1)
\end{equation}
for each fixed $\mathbf{a} \in A$. Moreover, the bound (\ref{what it would follow from}) is an immediate consequence of the Gowers-Cauchy-Schwarz inequality when combined with Theorem \ref{Tool from Green and Tao}.
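The identity relating $\vert A\vert$ to the local factor $\beta_W$ above can be verified exactly for a small modulus. The sketch below takes $s = 1$ and the stand-in value $W = 6$, and assumes the standard normalisation $\Lambda_{\mathbb{Z}/W\mathbb{Z}}(n) = (W/\varphi(W))\,1_{(n,W)=1}$; the check is exact in rational arithmetic.

```python
from fractions import Fraction
from itertools import product
from math import gcd

s, W = 1, 6
phi = sum(1 for a in range(1, W + 1) if gcd(a, W) == 1)   # phi(6) = 2
omegas = list(product([0, 1], repeat=s + 1))

def psi(om, m):                      # psi_omega(x, h_1, h_2) = x + omega . h
    return m[0] + sum(o * h for o, h in zip(om, m[1:]))

def lam(n):                          # Lambda_{Z/WZ}, standard normalisation
    return Fraction(W, phi) if gcd(n % W, W) == 1 else Fraction(0)

grid = list(product(range(1, W + 1), repeat=s + 2))

# beta_W by brute force over [W]^{s+2}:
beta = Fraction(0)
for m in grid:
    term = Fraction(1)
    for om in omegas:
        term *= lam(psi(om, m))
    beta += term
beta /= W ** (s + 2)

# |A| by direct count, and the claimed identity:
A = sum(1 for a in grid if all(gcd(psi(om, a) % W, W) == 1 for om in omegas))
assert Fraction(A) == beta * W ** (s + 2) * Fraction(phi, W) ** (2 ** (s + 1))
```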
To spell out some of the details, let $M: = \lfloor N/W \rfloor$ and let $M^\prime>M$ be a natural number with $M^\prime /M$ large enough in terms of $s$. Then, recalling the definition of the set $Z$, the left-hand side of (\ref{what it would follow from}) is equal to
\begin{equation}
\label{next}
\frac{1}{M^{s+2}}\sum\limits_{\substack{\mathbf{m} \in \mathbb{Z}^{s+2} \\ \Psi(\mathbf{m}) \in [M]^{2^{s+1}}}} \prod\limits_{\boldsymbol{\omega} \in \{0,1\}^{s+1}} (\Lambda^\prime_{\psi_{\boldsymbol{\omega}}(\mathbf{a}), W} - 1)(\psi_{\boldsymbol{\omega}}(\mathbf{m})) + o(1).
\end{equation}
Taking the $o(1)$ term as read, this is
\begin{equation}
\label{about to do GCS}
\ll \frac{1}{(M^\prime)^{s+2}} \sum\limits_{ \mathbf{m} \in (\mathbb{Z}/M^\prime \mathbb{Z})^{s+2}}\prod \limits_{\boldsymbol{\omega} \in \{0,1\}^{s+1}} (\Lambda^\prime_{\psi_{\boldsymbol{\omega}}(\mathbf{a}), W}1_{[M]} - 1_{[M]})(\psi_{\boldsymbol{\omega}}(\mathbf{m})).
\end{equation}
Now, by the Gowers-Cauchy-Schwarz inequality (see \cite[Expression (11.6)]{TaVu10}), expression (\ref{about to do GCS}) is at most
\[\max_{\boldsymbol{\omega} \in \{0,1\}^{s+1}} \Vert \Lambda^\prime_{\psi_{\boldsymbol{\omega}}(\mathbf{a}),W}1_{[M]} - 1_{[M]}\Vert_{U^{s+1}(M^\prime)}^{2^{s+1}} .\] By expression (\ref{equation GN over ZNZ}), this is bounded above by a constant times
\begin{equation}
\label{nearly amenable}
\max_{\substack{1 \leqslant b \leqslant (s+2)W \\ (b,W) = 1}} \Vert \Lambda^\prime_{b,W} - 1\Vert_{U^{s+1}[M]}^{2^{s+1}} .
\end{equation}
Expression (\ref{nearly amenable}) is directly amenable to Theorem \ref{Tool from Green and Tao}, the only wrinkle being that Theorem \ref{Tool from Green and Tao} applies only to functions $\Lambda^\prime_{b,W}$ with $b \leqslant W$. But this is easy to deal with. Indeed, for natural numbers $n$ and $k$ we have the identity \[ \Lambda^\prime_{b+kW,W}(n) = \Lambda^\prime_{b,W}(n+k),\] and so one establishes that if $b$ is in the range $1 \leqslant b \leqslant (s+2) W$ then \[\Vert \Lambda^\prime_{b,W} - 1\Vert_{U^{s+1}[M]}^{2^{s+1}} = \Vert \Lambda^\prime_{b^\prime,W} - 1\Vert_{U^{s+1}[M]}^{2^{s+1}} + E,\] where $b^\prime \in [W]$ and $b^\prime \equiv b \,(\text{mod } W)$, and where the error term $E$ is at most a constant times \[ \frac{\log^{2^{s+1}}M}{M^{s+2}}\sum\limits_{ \mathbf{m} \in [M]^{s+2}} \sum\limits_{ \boldsymbol{\omega} \in \{0,1\}^{s+1}} (1_{[s+2]}(\psi_{\boldsymbol{\omega}}(\mathbf{m})) + 1_{\{M + 1,\dots, M + s+2\}}(\psi_{\boldsymbol{\omega}}(\mathbf{m}))).\] We have $E = o(1)$ and therefore, by Theorem \ref{Tool from Green and Tao}, expression (\ref{nearly amenable}) is $o(1)$. The lemma follows.
\end{proof}
\section{Inequalities in lattices}
\label{section Inequalities in arithmetic progressions}
This section will be devoted to proving the following technical lemma. This is the only part of the paper in which we pay special attention to the quantitative aspects of the smooth cut-off functions, as the lemma will be applied in contexts where the functions $F$ and $G$ depend on the asymptotic parameter $N$ (albeit tamely).
\begin{Lemma}[Inequalities in lattices]
\label{Lemma in APs}
Let $N,m,d, h$ be natural numbers, with $d\geqslant h\geqslant m+1$, and let $\gamma$ be a positive constant. Suppose that $N>2^{\frac{1}{\gamma}}$. Let $P$ be an additional set of parameters. Let $(\xi_1,\dots,\xi_d) = \Xi:\mathbb{R}^h\longrightarrow \mathbb{R}^d$ be an injective linear map with integer coefficients and let $L:\mathbb{R}^h\longrightarrow \mathbb{R}^m$ be a purely irrational surjective linear map with algebraic coefficients. Let $\mathbf{v}\in \mathbb{R}^{m}$ and let $\widetilde{\mathbf{r}}\in\mathbb{Z}^d$. Let $e_1,\dots,e_d\in \mathbb{N}$ and suppose that $e_j< N^\gamma$ for all $j$. Let $F:\mathbb{R}^h\longrightarrow [0,1]$ and $G:\mathbb{R}^{m}\longrightarrow [0,1]$ be functions in $\mathcal{C}(P)$. Then, provided that $\gamma$ is small enough in terms of $L$, for all positive $K$ we have
\begin{align}
\label{equation removing divisors}
\frac{1}{N^{h - m}}\sum\limits_{\substack{\mathbf{n} \in \mathbb{Z}^h \\ e_j \vert \xi_j(\mathbf{n}) + \widetilde{r}_j \, \forall j \leqslant d}} F(\mathbf{n}/N)G(L\mathbf{n} + \mathbf{v})=\frac{ \alpha_{\mathbf{e},\widetilde{\mathbf{r}}}}{N^{h-m}} \int\limits_{\mathbf{x}\in \mathbb{R}^h} F(\mathbf{x}/N)G(L\mathbf{x} + \mathbf{v})\, d\mathbf{x}\nonumber \\ +O_{K,L,P}(N^{-K}),
\end{align}
\noindent where $\alpha_{\mathbf{e},\widetilde{\mathbf{r}}}$ is the local factor
\begin{equation}
\label{local factor def}
\alpha_{\mathbf{e},\widetilde{\mathbf{r}}} := \lim\limits_{M\rightarrow \infty}\frac{1}{M^{h}}\sum\limits_{\mathbf{n}\in [M]^h} \prod\limits_{j\leqslant d} 1_{e_j\vert \xi_j(\mathbf{n}) + \widetilde{r}_j}.
\end{equation}
\end{Lemma}
\begin{Remark}
\label{Remark local factor for AP calculation}
\emph{If $h=d$ and if $\Xi:\mathbb{R}^h \longrightarrow \mathbb{R}^d$ is the identity map, then the Chinese Remainder Theorem guarantees that $\alpha_{\mathbf{e},\widetilde{\mathbf{r}}} = (e_1\dots e_d)^{-1}$. In the general case, the local factors $\alpha_{\mathbf{e},\widetilde{\mathbf{r}}}$ are the same objects as those factors $\alpha_{m_1,\dots,m_t}$ considered in \cite[Page 1831]{GT10}. }
\end{Remark}
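To illustrate the first claim of Remark \ref{Remark local factor for AP calculation}: when $\Xi$ is the identity, the sum in (\ref{local factor def}) factors over coordinates, since the $j$\,th divisibility condition constrains only $n_j$. One computes directly that
\[
\frac{1}{M^{h}}\sum\limits_{\mathbf{n}\in [M]^h} \prod\limits_{j\leqslant d} 1_{e_j\vert n_j + \widetilde{r}_j} = \prod\limits_{j\leqslant d}\frac{1}{M}\#\{n\in [M]: e_j \vert n + \widetilde{r}_j\}=\prod\limits_{j\leqslant d}\Big(\frac{1}{e_j}+O\Big(\frac{1}{M}\Big)\Big)\longrightarrow \frac{1}{e_1\cdots e_d}
\]
as $M\rightarrow \infty$.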
\begin{proof}[Proof of Lemma \ref{Lemma in APs}]
We assume throughout that $\gamma$ is small enough in terms of $L$, and that $K$ is large enough in terms of the dimensions $m$, $d$, and $h$.
By applying Fourier inversion to $G$, we see that the left-hand side of (\ref{equation removing divisors}) is equal to
\begin{equation}
\label{after first fourier transform}
\frac{1}{N^{h-m}}\int\limits_{\boldsymbol{\alpha}\in \mathbb{R}^{m}}\widehat{G}(\boldsymbol{\alpha})\sum\limits_{\substack{\mathbf{n} \in \mathbb{Z}^h \\ e_j \vert \xi_j(\mathbf{n}) + \widetilde{r}_j \, \forall j\leqslant d}} F(\mathbf{n}/N)e((L^T\boldsymbol{\alpha})\cdot \mathbf{n} + \boldsymbol{\alpha}\cdot \mathbf{v}) \, d\boldsymbol{\alpha}.
\end{equation}
\noindent To bound this integral, we split $\mathbb{R}^m$ into three ranges. Let $\eta$ be a small positive parameter to be chosen later, which we assume to be small enough in terms of $L$. We then define the so-called `trivial arc' by \[ \mathfrak{t} := \{\boldsymbol{\alpha}\in\mathbb{R}^{m}: \Vert \boldsymbol{\alpha} \Vert_\infty \geqslant N^\eta\}, \] the `minor arc' by \[\mathfrak{m} := \{\boldsymbol{\alpha}\in\mathbb{R}^{m}: N^{-1+\eta}\leqslant \Vert \boldsymbol{\alpha} \Vert_\infty < N^\eta\}, \] and the `major arc' by \[\mathfrak{M} := \{\boldsymbol{\alpha}\in\mathbb{R}^{m}: \Vert \boldsymbol{\alpha} \Vert_\infty < N^{-1+\eta}\}.\] \\
\textbf{Trivial arc:}
By Lemma \ref{Lemma by parts}, $\vert\widehat{G}(\boldsymbol{\alpha})\vert \ll_{K,P}\Vert\boldsymbol{\alpha}\Vert_\infty ^{-K}$. Therefore, applying the trivial $O_P(N^h)$ bound to the inner sum, we have \[\frac{1}{N^{h-m}}\int\limits_{\boldsymbol{\alpha}\in \mathfrak{t}}\widehat{G}(\boldsymbol{\alpha})\sum\limits_{\substack{\mathbf{n} \in \mathbb{Z}^h \\ e_j \vert \xi_j(\mathbf{n}) + \widetilde{r}_j \, \forall j}} F(\mathbf{n}/N)e((L^T\boldsymbol{\alpha})\cdot \mathbf{n} + \boldsymbol{\alpha}\cdot \mathbf{v}) \, d\boldsymbol{\alpha} \ll _{K,P}N^{-\eta K + O(1)}.\]\\
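In slightly more detail: the normalisation $N^{-(h-m)}$ against the trivial bound $O_P(N^h)$ leaves a factor $N^m$, and, since $K$ is large enough in terms of $m$, integrating the decay of $\widehat{G}$ over the trivial arc gives
\[
N^{m}\int\limits_{\Vert \boldsymbol{\alpha}\Vert_\infty \geqslant N^{\eta}} \Vert\boldsymbol{\alpha}\Vert_\infty^{-K}\, d\boldsymbol{\alpha} \ll_{K} N^{m}N^{\eta(m-K)} \leqslant N^{-\eta K + O(1)}.
\]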
\textbf{Minor arc:} Choose $\mathbf{x}\in\mathbb{Z}^h$ to satisfy the simultaneous divisor conditions $e_j\vert \xi_j(\mathbf{x}) + \widetilde{r}_j$ for every $j\leqslant d$. If there is no such $\mathbf{x}\in\mathbb{Z}^h$ then (\ref{equation removing divisors}) is trivially true. Further, we may assume that $\mathbf{x}$ satisfies $\Vert\mathbf{x}\Vert_\infty\leqslant e_1\dots e_d$. Let $\Gamma_{\Xi,\mathbf{e}}$ denote the lattice \[\Gamma_{\Xi,\mathbf{e}} := \{ \mathbf{n} \in \mathbb{Z}^h: e_j\vert \xi_j(\mathbf{n}) \, \text{ for every } j\leqslant d\}.\] Then
\begin{align}
\label{reformulation}
\sum\limits_{\substack{\mathbf{n} \in \mathbb{Z}^h \\ e_j \vert \xi_j(\mathbf{n}) + \widetilde{r}_j \, \forall j\leqslant d}} F(\mathbf{n}/N) e((L^T \boldsymbol{\alpha} )\cdot \mathbf{n}) =
\sum\limits_{\mathbf{n} \in \Gamma_{\Xi,\mathbf{e}}} F((\mathbf{x}+\mathbf{n})/N) e((L^T \boldsymbol{\alpha}) \cdot ( \mathbf{x} + \mathbf{n})).
\end{align}
\noindent Using this reformulation, we apply Poisson summation (Lemma \ref{Lemma Poisson summation}) to the inner sum of (\ref{after first fourier transform}). Then the contribution to (\ref{after first fourier transform}) from the minor arc $\mathfrak{m}$ is equal to
\begin{equation}
\label{after Poisson summation}
\frac{N^{d-h+m}}{\operatorname{vol} (\mathbb{R}^h/ \Gamma_{\Xi,\mathbf{e}})} \int\limits_{\boldsymbol{\alpha}\in\mathfrak{m}} \widehat{G}(\boldsymbol{\alpha}) e(\boldsymbol{\alpha}\cdot \mathbf{v})\sum\limits_{\mathbf{c}\in \Gamma_{\Xi,\mathbf{e}}^*} \widehat{F}(N(\mathbf{c} - L^{ T}\boldsymbol{\alpha})) e(\mathbf{x}\cdot\mathbf{c}) \, d\boldsymbol{\alpha},
\end{equation}
where $\Gamma_{\Xi,\mathbf{e}}^*$ is the lattice that is dual to $\Gamma_{\Xi,\mathbf{e}}$ (see Definition \ref{Definition dual lattice}). \\
We need the following obvious lemma.
\begin{Lemma}
\label{Lemma clearing denominators in dual lattices}
There is a natural number $A$, of size at most $O(N^{O(\gamma)})$, such that $A\Gamma_{\Xi,\mathbf{e}}^*\subset \mathbb{Z}^h$.
\end{Lemma}
\begin{proof}
There is an $h$-dimensional sublattice of $\Gamma_{\Xi,\mathbf{e}}$, namely $(e_1\dots e_d \mathbb{Z})^h$. Therefore, we may choose a lattice basis for $\Gamma_{\Xi,\mathbf{e}}$ all of whose elements $\mathbf{b}$ satisfy $\Vert \mathbf{b}\Vert_\infty = O(N^{O(\gamma)})$. Let $M$ be the $h$-by-$h$ matrix that has these basis vectors as its columns. Then the columns of the matrix $(M^T)^{-1}$ are a lattice basis for the dual lattice $\Gamma_{\Xi,\mathbf{e}}^*$. The entries in $(M^T)^{-1}$ are rational numbers with numerator and denominator at most $O(N^{O(\gamma)})$. Clearing denominators, the lemma follows.
\end{proof}
Let $\langle L^T\boldsymbol{\alpha}\rangle$ denote some $\mathbf{c}\in \Gamma_{\Xi,\mathbf{e}}^*$ that minimises the expression $ \Vert \mathbf{c} -L^T\boldsymbol{\alpha}\Vert_\infty$. We claim that the only term in (\ref{after Poisson summation}) that cannot be easily absorbed into the error term comes from $\mathbf{c} = \langle L^T\boldsymbol{\alpha}\rangle$.
Indeed, let $A$ be the quantity provided by Lemma \ref{Lemma clearing denominators in dual lattices}, and let $\langle L^T \boldsymbol{\alpha} \rangle_2$ denote the second closest point to $L^T\boldsymbol{\alpha}$ in the lattice $\Gamma_{\Xi,\mathbf{e}}^*$. If more than one such point exists, choose arbitrarily. Then \begin{equation}
\label{second closest lattice point}
\Vert\langle L^T \boldsymbol{\alpha} \rangle_2 - L^T\boldsymbol{\alpha}\Vert_\infty \geqslant a A^{-1},
\end{equation} where $a$ is some positive constant which depends only on $h$. By the triangle inequality and dyadic pigeonholing, one then has
\begin{align}
\label{contribution from distant lattice points}
\Big\vert\sum\limits_{\substack{\mathbf{c}\in\Gamma_{\Xi,\mathbf{e}}^*\\ \mathbf{c}\neq \langle L^T\boldsymbol{\alpha} \rangle}} \widehat{F}(N(\mathbf{c} - L^T\boldsymbol{\alpha}))e(\mathbf{x}\cdot\mathbf{c})\Big\vert \leqslant \sum\limits_{k=0}^\infty \sum\limits_{\substack{\mathbf{c}\in\Gamma_{\Xi,\mathbf{e}}^* \\ 2^{k} a A^{-1} \leqslant \Vert \mathbf{c} - L^T \boldsymbol{\alpha}\Vert_\infty \leqslant 2^{k+1} a A^{-1}}}\vert \widehat{F}(N(\mathbf{c} - L^T\boldsymbol{\alpha}))\vert.
\end{align}
\noindent By Lemma \ref{Lemma clearing denominators in dual lattices} we also have the estimate
\begin{equation}
\label{estimate sum over lattice points}
\sum\limits_{\substack{\mathbf{c}\in\Gamma_{\Xi,\mathbf{e}}^* \\ \Vert \mathbf{c} - L^T\boldsymbol{\alpha}\Vert_\infty \leqslant R}} 1 \ll \operatorname{max}(1,R^hA^h),
\end{equation}
\noindent which holds for all $R>0$. Using (\ref{estimate sum over lattice points}), Lemma \ref{Lemma by parts}, and the bound $A = O(N^{O(\gamma)})$, the quantity (\ref{contribution from distant lattice points}) is seen to be \begin{equation}
\label{the thing that it follows from}
\ll_{K,P} N^{-K + O(K\gamma)}.
\end{equation}
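To indicate how (\ref{the thing that it follows from}) arises, apply (\ref{estimate sum over lattice points}) with $R = 2^{k+1}aA^{-1}$ on each dyadic shell, together with Lemma \ref{Lemma by parts} applied to each summand, so that (\ref{contribution from distant lattice points}) is
\[
\ll_{K,P} \sum\limits_{k=0}^\infty \operatorname{max}\big(1,(2^{k+1}a)^h\big)\big(N2^{k}aA^{-1}\big)^{-K} \ll_{K,P} A^{K}N^{-K}\sum\limits_{k=0}^\infty 2^{-k(K-h)} \ll_{K,P} N^{-K+O(K\gamma)},
\]
where the geometric series converges since $K$ is large enough in terms of $h$, the constant $a = O_h(1)$ is absorbed into the implied constants, and the final bound uses $A = O(N^{O(\gamma)})$.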
This implies that the contribution from these lattice points to (\ref{after Poisson summation}) is at most
\begin{align}
\label{overall contributoin form distant lattice points}
&\ll_{K,P} N^{-K + O(K\gamma) + O(1)} \int\limits_{\boldsymbol{\alpha} \in \mathfrak{m}} \vert \widehat {G}(\boldsymbol{\alpha})\vert \, d\boldsymbol{\alpha} \nonumber \\
&\ll_{K,P} N^{-K + O(K \gamma) + O(1) + O(\eta) }.
\end{align}
\noindent Since $\gamma$ and $\eta$ are small enough, (\ref{overall contributoin form distant lattice points}) is \[\ll_{K,P} N^{-K/2},\] which may be absorbed into the error term of (\ref{equation removing divisors}) after adjusting the implied constant appropriately. \\
It remains to estimate
\begin{align}
\label{after removing far lattice points}
\frac{N^{d-h+m}}{\operatorname{vol} (\mathbb{R}^h/ \Gamma_{\Xi,\mathbf{e}})} \int\limits_{\boldsymbol{\alpha}\in\mathfrak{m}} &\widehat{G}(\boldsymbol{\alpha}) e(\boldsymbol{\alpha}\cdot \mathbf{v})\widehat{F}(N(\langle L^T\boldsymbol{\alpha} \rangle - L^T\boldsymbol{\alpha})) e(\mathbf{x}\cdot \langle L^T\boldsymbol{\alpha} \rangle)\, d\boldsymbol{\alpha}.\\\nonumber
\end{align} We have the following key lemma.
\begin{Lemma}
\label{Lemma key to minor arcs}
Under the assumption that $\eta$ and $\gamma$ are suitably small in terms of $L$, \[\operatorname{sup}\limits_{\boldsymbol{\alpha} \in\mathfrak{m}} \vert\widehat{F}(N(\langle L^T\boldsymbol{\alpha} \rangle -L^T\boldsymbol{\alpha}))\vert \ll_{K,L,P} N^{-K\eta}.\]
\end{Lemma}
\begin{Remark}
\emph{The proof of this lemma uses the algebraicity of the coefficients of $L$. One should note that the bound (\ref{only point where algebraicity is used}) below, which holds for matrices with algebraic coefficients, also holds for almost all matrices. It is this fact which ultimately leads to our observation in the introduction that the main theorems of this paper hold for almost all matrices (as well as for matrices with algebraic coefficients, as stated).}
\end{Remark}
\begin{proof}
Certainly, by rescaling $\boldsymbol{\alpha}$ and using Lemmas \ref{Lemma by parts} and \ref{Lemma clearing denominators in dual lattices},
\begin{align}
\label{bound in terms of approximation function}
\operatorname{sup}\limits_{\boldsymbol{\alpha} \in\mathfrak{m}} \vert\widehat{F}(N(\langle L^T\boldsymbol{\alpha} \rangle -L^T\boldsymbol{\alpha}))\vert& \ll_{K,P} N^{-K}( \inf\limits_{\boldsymbol{\alpha} \in\mathfrak{m}}\Vert \langle L^T\boldsymbol{\alpha} \rangle -L^T\boldsymbol{\alpha} \Vert_\infty)^{-K} \nonumber\\
&\ll_{K,P} A^KN^{-K} ( \inf\limits_{\substack{ \boldsymbol{\beta} \in \mathbb{R}^m \\ AN^{-1 + \eta} \leqslant \Vert \boldsymbol{\beta}\Vert_\infty \leqslant AN^{\eta} }} \inf\limits_{\mathbf{n} \in \mathbb{Z}^h} \Vert \mathbf{n} - L^T \boldsymbol{\beta} \Vert_\infty)^{-K}.
\end{align}
\noindent
The quantity
\begin{equation}
\label{approximation function expression}
\inf\limits_{\substack{ \boldsymbol{\beta} \in \mathbb{R}^m \\ AN^{-1 + \eta} \leqslant \Vert \boldsymbol{\beta}\Vert_\infty \leqslant A N^{\eta}}} \inf\limits_{\mathbf{n} \in \mathbb{Z}^h} \Vert \mathbf{n} - L^T \boldsymbol{\beta} \Vert_\infty
\end{equation} encodes information about Diophantine approximations to the coefficients of $L$. For example, since $L$ is purely irrational, by definition\footnote{The reader may consult Definition \ref{Definition rational space}.} we have $L^T \boldsymbol{\beta} \notin \mathbb{Z}^h$ for any $\boldsymbol{\beta} \neq \mathbf{0}$. Therefore, since the function \[ \boldsymbol{\beta} \mapsto \inf\limits_{\mathbf{n} \in \mathbb{Z}^h} \Vert L^T \boldsymbol{\beta} - \mathbf{n} \Vert_\infty\] is continuous and positive on the compact set $\{\boldsymbol{\beta}: AN^{-1+\eta}\leqslant \Vert\boldsymbol{\beta}\Vert_\infty\leqslant AN^{\eta}\}$, (\ref{approximation function expression}) is always non-zero. We will need a quantitative refinement of this fact.
Fortunately, in \cite{Wa17} we extensively analysed expressions such as (\ref{approximation function expression}). Consider Definition 2.8 of \cite{Wa17} in particular, in which we defined the approximation function\footnote{We stress that the notation $A_L$ is unrelated to the parameter $A$ from this section.} $A_L$. In this language, (\ref{approximation function expression}) is equal to \[A_L(AN^{-1+\eta},A^{-1}N^{-\eta}).\] Therefore, since $L$ is purely irrational and has algebraic coefficients, Lemma E.1 of \cite{Wa17} tells us that \begin{equation}
\label{only point where algebraicity is used} \inf\limits_{\substack{ \boldsymbol{\beta} \in \mathbb{R}^m \\ AN^{-1 + \eta} \leqslant \Vert \boldsymbol{\beta}\Vert_\infty \leqslant A N^{\eta}}} \inf\limits_{\mathbf{n} \in \mathbb{Z}^h} \Vert L^T \boldsymbol{\beta} - \mathbf{n} \Vert_\infty \gg_{L} \min ( A N^{-1 + \eta}, A^{-O_L(1)}N^{-O_L( \eta )}).
\end{equation} Since $\eta$ and $\gamma$ are small enough in terms of $L$, and since $A = O(N^{O(\gamma)})$, (\ref{bound in terms of approximation function}) implies that
\[\operatorname{sup}\limits_{\boldsymbol{\alpha} \in\mathfrak{m}} \vert\widehat{F}(N(\langle L^T\boldsymbol{\alpha} \rangle -L^T\boldsymbol{\alpha}))\vert \ll_{K,L,P} N^{-K\eta},\] as claimed.
\end{proof}
\noindent The lemma above implies that (\ref{after removing far lattice points}) has size
\begin{equation}
\label{minor arc total contribution}
O_{K,L,P}(N^{-K \eta + O(1)}),
\end{equation} which is thus our bound for the total contribution from the minor arc $\mathfrak{m}$. \\
\noindent \textbf{Major arc:} Performing the same Poisson summation argument as in the minor arc case, the main term on the left-hand side of (\ref{equation removing divisors}) is equal to
\begin{align}
\label{major arc expression}
\frac{N^{d-h+m}}{\operatorname{vol} (\mathbb{R}^h/ \Gamma_{\Xi,\mathbf{e}})} \int\limits_{\boldsymbol{\alpha}\in\mathfrak{M}} &\widehat{G}(\boldsymbol{\alpha}) e(\boldsymbol{\alpha}\cdot \mathbf{v})
\widehat{F}(N(\langle L^T\boldsymbol{\alpha} \rangle - L^T\boldsymbol{\alpha})) e(\mathbf{x}\cdot \langle L^T\boldsymbol{\alpha} \rangle)\, d\boldsymbol{\alpha}.
\end{align}
For $\boldsymbol{\alpha}\in\mathfrak{M}$ one has $\Vert L^T\boldsymbol{\alpha}\Vert_\infty \ll_L N^{-1+\eta}$, and so $\langle L^T\boldsymbol{\alpha}\rangle = \mathbf{0}$. Therefore (\ref{major arc expression}) is equal to
\begin{equation}
\label{equation before expanding integral}
\frac{N^{d-h+m}}{\operatorname{vol} (\mathbb{R}^h/ \Gamma_{\Xi,\mathbf{e}})}\int\limits_{\boldsymbol{\alpha} \in\mathfrak{M}} \widehat{G}(\boldsymbol{\alpha}) e(\boldsymbol{\alpha}\cdot\mathbf{v}) \widehat{F}(-NL^T\boldsymbol{\alpha}) \, d\boldsymbol{\alpha}.
\end{equation} Since $L^T:\mathbb{R}^{m}\longrightarrow \mathbb{R}^h$ is injective, one has $\Vert L^T\boldsymbol{\alpha}\Vert_\infty \gg_{L} \Vert \boldsymbol{\alpha}\Vert_\infty$. Therefore (\ref{equation before expanding integral}) is equal to
\begin{align*}
&\frac{N^{d-h+m}}{\operatorname{vol} (\mathbb{R}^h/ \Gamma_{\Xi,\mathbf{e}})} \Big(\int\limits_{\boldsymbol{\alpha} \in\mathbb{R}^{m}} \widehat{G}(\boldsymbol{\alpha}) e(\boldsymbol{\alpha}\cdot\mathbf{v}) \widehat{F}(- NL^T\boldsymbol{\alpha}) \, d\boldsymbol{\alpha}\Big) + O_{K,L,P}(N^{-\eta K + O(1)}),
\end{align*}
\noindent which, after the obvious manipulations, equals \[\frac{1}{N^{h-m}\operatorname{vol} (\mathbb{R}^h/ \Gamma_{\Xi,\mathbf{e}})}\Big(\int\limits_{\mathbf{x}\in\mathbb{R}^h} F(\mathbf{x}/N) G(L\mathbf{x} + \mathbf{v}) \, d\mathbf{x}\Big) + O_{K,L,P}(N^{-\eta K + O(1)}).\]
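The error incurred when extending the integral from $\mathfrak{M}$ to all of $\mathbb{R}^m$ can be sketched as follows. For $\Vert\boldsymbol{\alpha}\Vert_\infty\geqslant N^{-1+\eta}$, the injectivity bound gives $N\Vert L^T\boldsymbol{\alpha}\Vert_\infty\gg_L N^{\eta}$, so by Lemma \ref{Lemma by parts}
\[
\int\limits_{\Vert\boldsymbol{\alpha}\Vert_\infty\geqslant N^{-1+\eta}}\vert\widehat{G}(\boldsymbol{\alpha})\vert\,\vert\widehat{F}(-NL^T\boldsymbol{\alpha})\vert\, d\boldsymbol{\alpha} \ll_{K,L,P} N^{-\eta K}\int\limits_{\boldsymbol{\alpha}\in\mathbb{R}^m}\vert\widehat{G}(\boldsymbol{\alpha})\vert\, d\boldsymbol{\alpha} \ll_{K,L,P} N^{-\eta K},
\]
and multiplying by the prefactor $N^{d-h+m}/\operatorname{vol}(\mathbb{R}^h/\Gamma_{\Xi,\mathbf{e}}) = O(N^{d-h+m})$ yields the error term $O_{K,L,P}(N^{-\eta K + O(1)})$.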
Fixing suitably small $\eta$ and $\gamma$, and combining the contribution from the trivial, minor, and major arc, we deduce that \begin{align}
\label{final error term}
\frac{1}{N^{h - m}}\sum\limits_{\substack{\mathbf{n} \in \mathbb{Z}^h \\ e_j \vert \xi_j(\mathbf{n}) + \widetilde{r}_j \, \forall j}} F(\mathbf{n}/N)G(L\mathbf{n} + \mathbf{v})
= \frac{1}{N^{h-m}\operatorname{vol} (\mathbb{R}^h/ \Gamma_{\Xi,\mathbf{e}})}\int\limits_{\mathbf{x}\in\mathbb{R}^h} F(\mathbf{x}/N) G(L\mathbf{x} + \mathbf{v}) \, d\mathbf{x} \nonumber\\+ O_{K,L,P}(N^{-\eta K + O(1)}).
\end{align}
By adjusting the implied constant appropriately, the error term from (\ref{final error term}) is $O_{K,L,P}(N^{-K})$ for all positive real $K$. The final observation is that, considering the definition of the local factor $\alpha_{\mathbf{e},\widetilde{\mathbf{r}}}$ in (\ref{local factor def}) and the fact that we may assume $\alpha_{\mathbf{e},\widetilde{\mathbf{r}}}$ is non-zero (otherwise, as noted in the minor arc analysis, (\ref{equation removing divisors}) is trivially true), \[\frac{1}{\operatorname{vol} (\mathbb{R}^h/ \Gamma_{\Xi,\mathbf{e}})} = \alpha_{\mathbf{e},\widetilde{\mathbf{r}}}.\]
The lemma follows.
\end{proof}
The following estimate will also be needed.
\begin{Lemma}
\label{Lemma just arithmetic progressions}
Under the same hypotheses as Lemma \ref{Lemma in APs}, for all positive $K$ \begin{equation}
\label{equation just aps}
\frac{1}{N^{h}} \sum\limits_{\substack{ \mathbf{n} \in \mathbb{Z}^h\\ e_j \vert \xi_j(\mathbf{n}) + \widetilde{ r}_j \, \forall j\leqslant d}} F(\mathbf{n}/N) = \frac{\alpha_{\mathbf{e},\widetilde{\mathbf{r}}}} {N^{h}} \int\limits_{\mathbf{x} \in \mathbb{R}^h} F(\mathbf{x}/N) \, d\mathbf{x} + O_{K,L,P}( N^{-K}),
\end{equation}
\noindent where $\alpha_{\mathbf{e},\widetilde{\mathbf{r}}}$ is as in (\ref{local factor def}).
\end{Lemma}
\begin{proof}
By applying Poisson summation, the left-hand side of (\ref{equation just aps}) is equal to
\begin{equation}
\frac{N^{d-h}}{ \operatorname{vol}(\mathbb{R}^h/ \Gamma_{\Xi,\mathbf{e}})} \sum\limits_{\mathbf{c} \in \Gamma^*_{\Xi,\mathbf{e}}} \widehat{F}(N\mathbf{c}) e(\mathbf{x} \cdot \mathbf{c}),
\end{equation}
where $\mathbf{x}$ and $\Gamma^*_{\Xi,\mathbf{e}}$ are as in (\ref{after Poisson summation}). By applying estimates (\ref{contribution from distant lattice points}) and (\ref{the thing that it follows from}), one shows that the main term of (\ref{equation just aps}) comes from the $\mathbf{c}=\mathbf{0}$ term above. After the obvious manipulations, the lemma follows.
\end{proof}
\section{The linear inequalities condition}
\label{section proof of pseduorandomness}
In \cite{GT10}, the key notion of pseudorandomness is the so-called `linear forms condition' (see Definition 6.2 of that paper). The upshot is that in order to understand the number of solutions to a particular linear equation in primes, it is enough to understand the number of solutions to certain auxiliary linear equations weighted by a sieve weight $\nu$. In this paper an analogous philosophy holds. Indeed, we will show that, in order to understand the number of solutions to a particular linear inequality in primes, it is enough to understand the number of solutions to certain auxiliary linear inequalities weighted by a sieve weight $\nu$. \\
Let us proceed with the formal definition. The reader is reminded that $W_j = \prod_{p\leqslant w_j(N)} p$ (see Section \ref{section conventions}).
\begin{Definition}[Linear inequalities condition]
\label{Definition linear inequalities condition}
Let $m,d$ be natural numbers, and let $L:\mathbb{R}^d \longrightarrow \mathbb{R}^m$ be a linear map. For each natural number $N$, let $\nu_N:\mathbb{Z}\longrightarrow \mathbb{R}$ be a function. We say that the family of functions $(\nu_N)_{N=1}^\infty$ is \emph{$(L,w)$-pseudorandom} if the following holds. For all positive constants $C$ and for all sets of parameters $P$, for all compactly supported smooth functions $F:\mathbb{R}^d\longrightarrow [0,1]$ and $G:\mathbb{R}^m\longrightarrow [0,1]$ such that $F,G\in\mathcal{C}(P)$, for all functions $w_1,\dots,w_d:\mathbb{N}\longrightarrow \mathbb{R}_{\geqslant 0}$ that each satisfy $w_j(n)\rightarrow \infty$ as $n\rightarrow \infty$ and $w_j(n)\leqslant w(n)$ for all $n$, for all $\mathbf{v}\in\mathbb{R}^m$ satisfying $\Vert \mathbf{v} \Vert_\infty \leqslant CN$, and for all functions $f_1,\dots,f_d:\mathbb{Z} \longrightarrow \mathbb{R}$ such that each $f_j$ equals either $\nu_N$ or $\Lambda_{\mathbb{Z}/W_j\mathbb{Z}}$,
\begin{equation}
\label{eqn defining pseudorandomness}
T^{L,\mathbf{v}}_{F,G,N}(f_1, \dots,f_d) = T^{L,\mathbf{v}}_{F,G,N}(\Lambda_{\mathbb{Z}/W_1\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/W_d\mathbb{Z}}) + o(1)
\end{equation}
as $N\rightarrow \infty$, where the $o(1)$ term may depend on the family $(\nu_N)_{N=1}^\infty$, on $C$, $L$, $P$, and on the functions $w_1,\dots,w_d$.
\end{Definition}
\begin{Remark}
\emph{Equation (\ref{eqn defining pseudorandomness}) might seem to be a slightly curious formulation of a pseudorandomness principle, as it does not claim that the weight $\nu_N$ behaves like the constant $1$ function but rather behaves like the local von Mangoldt function. However, referring to Remark \ref{Remark rescaling w trick}, let us reiterate the comment that we are not performing the $W$-trick in the same manner as \cite{GT10}.}
\end{Remark}
The aim of this section is to introduce a sieve weight $\nu_{N,w}^\gamma$, and to prove that it is $(L,w)$-pseudorandom for a large class of linear maps $L$. We begin by introducing the sieve weight from \cite[Appendix D]{GT10}.
\begin{Definition}[Smooth sieve weight]
\label{Definition Smooth sieve weight}
Let $N$ be a natural number, $\gamma$ be a positive real, and define $R:= N^\gamma$. Let $\rho \in \mathcal{C}(\emptyset)$ be the smooth $1$-supported function fixed in Section \ref{section conventions}. Define the function $\Lambda_{\rho,R,2}:\mathbb{Z}\longrightarrow \mathbb{R}_{\geqslant 0}$ by the formula
\begin{equation}
\label{definition of GOldston weight}
\Lambda_{\rho,R,2}(n):=(\log R) \Big(\sum\limits_{d\vert n}\mu(d)\rho\big(\frac{\log d}{\log R}\big)\Big)^2,
\end{equation}
\noindent for non-negative integers $n$, extended to negative integers in the obvious way.
\end{Definition}
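Assuming, as the terminology `$1$-supported' suggests, that $\rho$ vanishes outside $[-1,1]$, the weight (\ref{definition of GOldston weight}) is transparent on integers free of small prime factors: if every prime factor of $n$ exceeds $R$, then $d=1$ is the only divisor of $n$ for which $\rho(\log d/\log R)\neq 0$, and hence
\[
\Lambda_{\rho,R,2}(n) = (\log R)\,\rho(0)^2.
\]
In particular, up to normalisation, $\Lambda_{\rho,R,2}$ has size $\log R$ on the primes in $(R,N]$.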
We now define the family of majorants themselves.
\begin{Definition}[Pseudorandom majorant]
\label{Definnition pseudorandom majorant}
Let $N$ be a natural number, let $\gamma$ be a positive real, and let $R:= N^\gamma$. Define the constant \[c_{\rho,2} : = \int\limits_{0}^{\infty} \vert \rho^\prime (x)\vert^2 \, dx.\] Then define the weight $\nu_{N,w}^\gamma:\mathbb{Z}\rightarrow \mathbb{R}_{\geqslant 0}$ by \[\nu_{N,w}^\gamma(n) := \frac{1}{2c_{\rho,2}}\Lambda_{\rho,R,2}(n) + \frac{1}{2}\Lambda_{\mathbb{Z}/W\mathbb{Z}}(n).\]
\end{Definition}
\noindent Note that $\nu_{N,w}^{\gamma}$ also depends on $\rho$, but we suppress that dependence from the notation (as we fixed $\rho$ in Section \ref{section conventions}). \\
We now state our main new result on the pseudorandomness of this sieve weight.
\begin{Theorem}[Pseudorandomness of sieve weights]
\label{Theorem pseudorandomness}
Let $m,d$ be natural numbers, with $d\geqslant m+2$. Let $L:\mathbb{R}^d \longrightarrow \mathbb{R}^m$ be a surjective linear map, and suppose that $L\notin V_{\operatorname{degen}}^*(m,d)$ and that the coefficients of $L$ are algebraic. Assume that $\gamma$ is a positive parameter that is small enough in terms of $L$. Then $\nu_{N,w}^{\gamma}$ is $(L,w)$-pseudorandom.
\end{Theorem}
\noindent Temporarily dropping the convention that $w(N) = \max(1,\log\log\log N)$, we speculate that the following general result holds.
\begin{Conjecture}[Pseudorandomness conjecture]
\label{Conjecture pseudorandomness}
Let $m,d$ be natural numbers, with $d\geqslant m+2$. Let $L:\mathbb{R}^d \longrightarrow \mathbb{R}^m$ be a surjective linear map, and suppose that $L\notin V^*_{\operatorname{degen}}(m,d)$. Then there is some value of $\gamma$ and some function $w:\mathbb{N} \longrightarrow \mathbb{R}_{\geqslant 0}$, satisfying $w(N)\rightarrow \infty$ as $N\rightarrow \infty$, for which $\nu_{N,w}^{\gamma}$ is $(L,w)$-pseudorandom.
\end{Conjecture}
\noindent Unfortunately we have not been able to resolve Conjecture \ref{Conjecture pseudorandomness}, but we strongly believe it to be true. If $d$ is large enough in terms of $m$ then the analytic methods of Parsell (see \cite{Pa02} and Appendix \ref{section an analytic argument}) can be used to show that $\nu_{N,w}^\gamma$ is $(L,w)$-pseudorandom without any algebraicity assumptions. But these methods seem harder to apply in the range $d\geqslant m+2$, and we have not been able to establish the appropriate mean value estimate. Resolving Conjecture \ref{Conjecture pseudorandomness} would, after a straightforward adaptation of the methods of this paper, enable one to remove the algebraicity assumption from Theorem \ref{Main theorem simpler version} and Theorem \ref{Main theorem}.
\begin{Remark}
\label{Remark almost all}
\emph{The proof of Theorem \ref{Theorem pseudorandomness} is the only moment during the proof of the main theorems Theorem \ref{Main theorem simpler version} and Theorem \ref{Main theorem} when we use the fact that the coefficients of the original linear map $L$ are algebraic. Furthermore, we will ultimately only ever appeal to the linear inequalities condition for a certain finite collection of linear maps, which includes the original linear map $L$ itself as well as some auxiliary linear maps that are generated from applications of the Cauchy-Schwarz inequality. Since only the diophantine approximation properties of algebraic numbers are used (witness Lemma \ref{Lemma key to minor arcs} and \cite[Lemma E.1]{Wa17}), and since these properties are satisfied by almost all real numbers, one may show that Theorems \ref{Main theorem simpler version} and \ref{Main theorem} remain true for some explicit set of maps $L$ that has full Lebesgue measure.}
\end{Remark}
To demonstrate our approach to proving Theorem \ref{Theorem pseudorandomness}, we first give the argument under the simplifying additional assumption that $L$ is purely irrational (see Definition \ref{Definition rational space}).
\begin{Lemma}
\label{Claim local calculation}
Suppose that $F$, $G$, $L$, $\mathbf{v}$ and the functions $w_1,\dots,w_d$ all satisfy the conditions in Definition \ref{Definition linear inequalities condition}. Suppose in addition that $L$ is surjective, purely irrational, and has algebraic coefficients. Then for all positive $K$ we have
\begin{equation}
\label{asymptotic local first}
T_{F,G,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/ W_1\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/ W_d\mathbb{Z}}) = J + O_{K,L,P}(N^{-K})
\end{equation}
where $J$ is the singular integral
\begin{equation}
\label{equation singular integral}
J:=\frac{1}{N^{d-m}}\int\limits_{\mathbf{x} \in \mathbb{R}^d}F(\mathbf{x}/N)G(L\mathbf{x} + \mathbf{v})\, d\mathbf{x}.
\end{equation}
\end{Lemma}
\begin{proof}
We have the identity \begin{equation}
\label{equation local von identity}
\Lambda_{\mathbb{Z}/W_j\mathbb{Z}}(n) = \frac{W_j}{\varphi(W_j)}\sum\limits_{\substack{e_j\vert n\\ e_j\vert W_j}} \mu(e_j).
\end{equation} Then the expression $T_{F,G,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/ W_1\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/ W_d\mathbb{Z}})$ is equal to
\begin{align}
\label{calculation of comparison term first}
& \Big(\prod\limits_{j=1}^d \frac{W_j}{\varphi(W_j)}\Big)\frac{1}{N^{d-m}} \sum\limits_{\substack{e_1,\dots,e_d\\e_j\vert W_j \,\forall j\leqslant d}} \Big(\prod\limits_{j=1}^{d}\mu(e_j)\Big)\sum\limits_{\substack{\mathbf{n}\in\mathbb{Z}^d\\ e_j\vert n_j \forall j \leqslant d }} F(\mathbf{n}/N) G(L\mathbf{n} + \mathbf{v})\nonumber\\
& = \Big(\prod\limits_{j=1}^d \frac{W_j}{\varphi(W_j)}\Big)\sum\limits_{\substack{e_1,\dots,e_d\\e_j\vert W_j \,\forall j\leqslant d}} \Big(\prod\limits_{j=1}^{d}\mu(e_j)\Big) (J (\prod\limits_{j=1}^d e_j)^{-1} + O_{K,L,P}(N^{-K})),
\end{align}
\noindent by applying Lemma \ref{Lemma in APs} to the inner sum, where in the statement of that lemma we take $h = d$, the map $\Xi:\mathbb{R}^h \longrightarrow \mathbb{R}^d$ to be the identity, and $\widetilde{\mathbf{r}} = \mathbf{0}$. The local factor $\alpha_{\mathbf{e} ,\widetilde{\mathbf{r}}}$ is equal to $\prod_{j\leqslant d} e_j^{-1}$ in this instance.
Sum the error term in (\ref{calculation of comparison term first}) over all $e_j$. The bound $W_j = O(\log\log N)$ that comes from the prime number theorem controls the resulting error term (with room to spare), and the main term of (\ref{asymptotic local first}) follows from the identity
\begin{equation}
\label{identity mobius function}
\Big(\prod\limits_{j=1}^d \frac{W_j}{\varphi(W_j)}\Big) \sum\limits_{\substack{e_1,\dots,e_d\\e_j\vert W_j \,\forall j\leqslant d}} \Big(\prod\limits_{j=1}^{d}\frac{\mu(e_j)}{e_j}\Big) = 1.
\end{equation}
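Indeed, (\ref{identity mobius function}) follows from multiplicativity: since each $W_j$ is squarefree, for every $j$ one has
\[
\sum\limits_{\substack{e_j\geqslant 1\\ e_j\vert W_j}}\frac{\mu(e_j)}{e_j}=\prod\limits_{p\leqslant w_j(N)}\Big(1-\frac{1}{p}\Big)=\frac{\varphi(W_j)}{W_j},
\]
and the product over $j\leqslant d$ telescopes against the prefactor $\prod_{j\leqslant d} W_j/\varphi(W_j)$.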
\end{proof}
To finish the proof of Theorem \ref{Theorem pseudorandomness} (in the case when $L$ is purely irrational, that is) it now suffices to show that
\begin{equation}
\label{sufficing asymptotic}
T_{F,G,N}^{L,\mathbf{v}}(f_1,\dots,f_d) = J + O_{C,L,P,\gamma}(\log ^{-\Omega(1)}N),
\end{equation}
\noindent where each $f_j$ is either $\nu_{N,w}^{\gamma}$ or $\Lambda_{\mathbb{Z}/W_j\mathbb{Z}}$.
By multiplying out the left-hand side of (\ref{sufficing asymptotic}), we see that it is sufficient to prove that
\begin{equation}
\label{equation local or sieve first}
\frac{1}{N^{d-m}} \sum\limits_{\mathbf{n}\in \mathbb{Z}^d} \Big(\prod\limits_{j=1}^d \nu_j(n_j)\Big)F(\mathbf{n}/N)G(L\mathbf{n}+\mathbf{v}) = J + O_{L,P,\gamma}(\log ^{-\Omega(1)} N)
\end{equation}
\noindent where each $\nu_j$ equals either $c_{\rho,2}^{-1}\Lambda_{\rho,R,2}$ or $\Lambda_{\mathbb{Z}/W_j\mathbb{Z}}$ (recall $R: = N^{\gamma}$).
After our analysis in Lemma \ref{Lemma in APs}, it turns out that the estimate (\ref{equation local or sieve first}) will follow almost immediately from the sieve calculation performed in \cite[Theorem D.3]{GT10}. To describe the details, it will be useful to introduce the following notation. Let \[ S: = \{ j \leqslant d: \nu_{j} = c_{\rho,2}^{-1}\Lambda_{\rho, R, 2}\} \] and \[ S^\prime:= \{ j \leqslant d: \nu_{j} = \Lambda_{\mathbb{Z}/W_j\mathbb{Z}}\}.\] We may assume that $S \neq \emptyset$, as otherwise the estimate (\ref{equation local or sieve first}) follows from the estimate (\ref{asymptotic local first}).
Each $\nu_j$ may be expressed as a divisor sum, either using Definition \ref{Definition Smooth sieve weight} or expression (\ref{equation local von identity}). Doing this, and swapping orders of summations, we have that the left-hand side of expression (\ref{equation local or sieve first}) is equal to
\begin{align}
\label{massive expanded out sieve expression}
\Big(\prod\limits_{j \in S^\prime} \frac{W_j}{\varphi(W_j)}\Big)c_{\rho,2}^{-\vert S\vert}(\log R)^{\vert S\vert} \sum\limits_{\substack{(e_{j,k})_{j\in S,k \in [2]} \in \mathbb{N}^{S \times [2]}\\ (e_{j})_{j\in S^\prime}\in \mathbb{N}
^{S^\prime} \\e_j\vert W_j \, \forall j \in S^\prime}}
&\Big(\prod\limits_{\substack{ j \in S \\ k \in [ 2]}} \mu(e_{j,k})\rho\Big(\frac{\log e_{j,k}}{\log R}\Big)\Big)\Big(\prod\limits_{j\in S^\prime} \mu(e_{j})\Big) \nonumber\\
& \frac{1}{N^{d-m}}\sum\limits_{\substack{\mathbf{n}\in \mathbb{Z}^d\\ e_{j}\vert n_j \forall j\leqslant d}} F(\mathbf{n}/N)G(L\mathbf{n} + \mathbf{v})
\end{align}
\noindent where, if $j \in S$, we write $e_{j}$ for the least common multiple $[e_{j,1}, e_{j,2}]$ (recall that $R = N^\gamma$). Using the compact support of the function $\rho$, when analysing the inner sum one may assume that each $e_j$ is at most $N^{2\gamma}$. \\
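The divisor-sum expansion used to obtain (\ref{massive expanded out sieve expression}) relies, for the indices $j \in S^\prime$, on the identity (\ref{equation local von identity}); as the shape of the expansion suggests, this expresses $\Lambda_{\mathbb{Z}/W\mathbb{Z}}(n)$ as $\frac{W}{\varphi(W)}\sum_{e \mid (n,W)}\mu(e)$. A Python sanity check of this expansion (illustrative only, for a single small modulus):

```python
from fractions import Fraction
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    # naive Moebius function, adequate for small arguments
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # n is not squarefree
            result = -result
        p += 1
    return -result if n > 1 else result

W = 30
phi_W = sum(1 for a in range(1, W + 1) if gcd(a, W) == 1)
for n in range(1, W + 1):
    # divisor-sum form: (W/phi(W)) * sum_{e | (n, W)} mu(e)
    divisor_form = Fraction(W, phi_W) * sum(mobius(e) for e in divisors(gcd(n, W)))
    # closed form: (W/phi(W)) on the units mod W, 0 otherwise
    closed_form = Fraction(W, phi_W) if gcd(n, W) == 1 else Fraction(0)
    assert divisor_form == closed_form
```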
Applying Lemma \ref{Lemma in APs}, we find that, provided $\gamma$ is small enough,
\begin{align}
\label{equation analysing inner sum}
&\frac{1}{N^{d-m}} \sum\limits_{\substack{\mathbf{n}\in \mathbb{Z}^d\\ e_j\vert n_j \,\forall j\leqslant d}} F(\mathbf{n}/N)G(L\mathbf{n} +\mathbf{v} ) = J\Big(\prod\limits_{j=1}^d e_j\Big)^{-1} + O_{L,P,\gamma}(N^{-20}),
\end{align}
\noindent as in (\ref{calculation of comparison term first}). By the bounds on $W_j$ and $e_j$, the error term from (\ref{equation analysing inner sum}) may be summed over all $e_j$ and remain acceptable. We also have the identity \begin{equation}
\Big(\prod\limits_{j\in S^{\prime}}\frac{W_j}{\varphi(W_j)}\Big) \sum\limits_{\substack{(e_j)_{j\in S^\prime} \in \mathbb{N}^{S^\prime} \\e_j \vert W_j \, \forall j \in S^\prime}} \Big(\prod\limits_{j\in S^\prime}\frac{\mu(e_j)}{e_j}\Big) = 1.
\end{equation} Therefore expression (\ref{equation local or sieve first}) would follow from the asymptotic
\begin{align}
\label{equation rational from irrational in sieve calculation}
&(\log R)^{\vert S\vert}\sum\limits_{(e_{j,k})_{j\in S, k \in [2]} \in \mathbb{N}^{S \times [2]}} \Big(\prod\limits_{\substack{ j \in S \\ k \in [2]}} \mu(e_{j,k})\rho\Big(\frac{\log e_{j,k}}{\log R}\Big)\Big)\Big(\prod\limits_{j\in S} e_j\Big)^{-1} = c_{\rho,2}^{\vert S\vert} + O(\log^{-1/20} R).
\end{align}
\noindent But this is just estimate (D.4) of \cite{GT10}, applied to the identity map $\Psi: \mathbb{R}^{\vert S\vert} \longrightarrow \mathbb{R}^{\vert S\vert}$. Note that the quantity $X$ in estimate (D.4) of \cite{GT10} is zero: if $\psi_1,\dots,\psi_{\vert S\vert}:\mathbb{R}^{\vert S\vert} \longrightarrow \mathbb{R}$ are the linear maps given by $\psi_j(\mathbf{x}) : = x_j$ for all $j\leqslant \vert S\vert$, then there are no primes $p$ for which there exist two forms $\psi_i$ and $\psi_j$ that are linearly dependent modulo $p$. This proves (\ref{equation local or sieve first}), and hence resolves Theorem \ref{Theorem pseudorandomness} in the case when $L$ is purely irrational. \\
We now present the detailed proof of Theorem \ref{Theorem pseudorandomness} in full generality.
\begin{proof}[Proof of Theorem \ref{Theorem pseudorandomness}]
Let $u$ be the rational dimension of $L$ (see Definition \ref{Definition rational space}). Apply Lemma \ref{Lemma generating a purely irrational map} to both the expression $T_{F,G,N}^{L,\mathbf{v}}(f_1,\dots,f_d)$ and the expression\\ $T_{F,G,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/W_1\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/W_d\mathbb{Z}})$. Writing $h:=d-u$, and renaming $m-u$ as $m$, $L^\prime$ as $L$, $\mathbf{v}^\prime$ as $\mathbf{v}$, and $G_{\widetilde{\mathbf{r}}}$ as $G$, we see that it suffices to prove the following theorem.
\begin{Theorem}
\label{Theorem pseudorandomness after rational reductions}
Let $N,d,h$ be natural numbers, and let $m$ be a non-negative integer. Suppose that $d\geqslant h\geqslant m+2$. Let $C,\gamma$ be positive parameters, and let $P$ be a set of additional parameters. Let $L:\mathbb{R}^h \longrightarrow \mathbb{R}^m$ be a surjective purely irrational linear map with algebraic coefficients, and let $\Xi :\mathbb{R}^{h} \longrightarrow \mathbb{R}^d$ be an injective linear map with integer coefficients. Assume that $\gamma$ is small enough in terms of $L$. Let $\mathbf{v} \in\mathbb{R}^m$ be a vector with $\Vert \mathbf{v} \Vert_\infty \leqslant CN$, and let $\widetilde{\mathbf{r}} \in \mathbb{Z}^d$ be a vector with $\Vert \widetilde{\mathbf{r}}\Vert_\infty\leqslant CN$. Let $F:\mathbb{R}^d \longrightarrow [0,1]$ and $G: \mathbb{R}^m \longrightarrow [0,1]$ be in $\mathcal{C}(P)$. Let $w_1,\dots,w_d:\mathbb{N}\longrightarrow \mathbb{R}_{\geqslant 0}$ be functions that each satisfy $w_j(n)\rightarrow \infty$ as $n\rightarrow \infty$ and $w_j(n)\leqslant w(n)$ for all $n$. \\
\noindent These conditions will be referred to as `the hypotheses of Theorem \ref{Theorem pseudorandomness after rational reductions}'.\\
\noindent Then, if $\Xi$ has finite Cauchy-Schwarz complexity,
\begin{equation}
\label{equation defining pseudorandomness after ratinoal reduction}
T_{F,G,N}^{L,\mathbf{v},\Xi,\widetilde{\mathbf{r}}}(f_1,\dots,f_d) = T_{F,G,N}^{L,\mathbf{v},\Xi,\widetilde{\mathbf{r}}}(\Lambda_{\mathbb{Z}/ W_1\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/ W_d\mathbb{Z}}) + O_{C,L,P,\Xi,\gamma}((\min_j w_j(N))^{-1/2})
\end{equation}
\noindent where each $f_j$ equals either $\nu_{N,w}^\gamma$ or $\Lambda_{\mathbb{Z}/W_j\mathbb{Z}} $.
\end{Theorem}
\begin{proof}[Proof of Theorem \ref{Theorem pseudorandomness after rational reductions}]
Let $\Xi: \mathbb{R}^h \longrightarrow \mathbb{R}^d$ have coordinate maps $(\xi_1,\dots,\xi_d)$. Let \begin{equation}
\label{equation singular series} \mathfrak{S}_{\widetilde{\mathbf{r}}}:=\prod\limits_{p}
\frac{1}{p^h} \sum\limits_{\mathbf{m} \in [p]^h} \prod\limits_{j=1}^d \Lambda_{\mathbb{Z}/p\mathbb{Z}}(\xi_j (\mathbf{m}) +\widetilde{r}_j)
\end{equation} be the \emph{singular series}, where $\widetilde{r}_j$ denotes the $j^{\mathrm{th}}$ coordinate of $\widetilde{\mathbf{r}}$. Let
\begin{equation}
\label{equation singular integral again}
J_{\widetilde{\mathbf{r}}}:=\frac{1}{N^{h-m}}\int\limits_{\mathbf{x} \in \mathbb{R}^h}F((\Xi(\mathbf{x}) + \widetilde{\mathbf{r}})/N)G(L\mathbf{x} + \mathbf{v})\, d\mathbf{x}
\end{equation} be the \emph{singular integral}.
\begin{Lemma}
\label{Lemma singular series and singular integral are bounded}
Under the hypotheses of Theorem \ref{Theorem pseudorandomness after rational reductions}, if $\Xi$ has finite Cauchy-Schwarz complexity then the singular series and singular integral satisfy the bounds \[\mathfrak{S}_{\widetilde{\mathbf{r}}} \ll_{\Xi} 1\] and \[J_{\widetilde{\mathbf{r}}} \ll_{C,L,\Xi} \operatorname{Rad}(F)^{h-m}\operatorname{Rad}(G)^m\Vert G\Vert_\infty .\]
\end{Lemma}
\noindent The reader may find the definition of $\operatorname{Rad}(F)$ and $\operatorname{Rad}(G)$ in Section \ref{section smooth functions}.
\begin{proof}
Since $\Xi$ has finite Cauchy-Schwarz complexity, no two of the forms $\xi_1,\dots,\xi_d$ are parallel. Hence by \cite[Lemma 1.3]{GT10} the singular series $\mathfrak{S}_{\widetilde{\mathbf{r}}}$ converges, and the size may be bounded by a constant depending only on $\Xi$.
The bound on $J_{\widetilde{\mathbf{r}}}$ follows directly from Lemma \ref{Lemma general upper bound}.
\end{proof}
We continue with the following lemma, which is a more general version of Lemma \ref{Claim local calculation}.
\begin{Lemma}
\label{Lemma problem for local von Mangoldt}
Under the hypotheses of Theorem \ref{Theorem pseudorandomness after rational reductions} we have, for every positive real $K$,
\begin{align}
\label{asymptotic local all situations}
T_{F,G,N}^{L,\mathbf{v},\Xi,\widetilde{\mathbf{r}}}(\Lambda_{\mathbb{Z}/W_1\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/W_d\mathbb{Z}}) = \Big(\frac{1}{(\max W_j)^h} \sum\limits_{\mathbf{m} \in [\max W_j]^h} \prod\limits_{j=1}^d \Lambda_{\mathbb{Z}/W_j\mathbb{Z}}(\xi_j(\mathbf{m}) + \widetilde{r}_j) \Big) J_{\widetilde{\mathbf{r}}}\nonumber \\ + O_{C,K,L,P,\Xi}(N^{-K}).
\end{align}
\noindent If $\Xi$ has finite Cauchy-Schwarz complexity, then
\begin{equation}
\label{asymptotic local}
T_{F,G,N}^{L,\mathbf{v},\Xi,\widetilde{\mathbf{r}}}(\Lambda_{\mathbb{Z}/ W_1\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/ W_d\mathbb{Z}}) = \mathfrak{S}_{\widetilde{\mathbf{r}}} J_{\widetilde{\mathbf{r}}} + O_{C,L,P,\Xi}((\min_j w_j(N))^{-1}).
\end{equation}
\end{Lemma}
\begin{proof}
We proceed as in the proof of Lemma \ref{Claim local calculation}. Then $T_{F,G,N}^{L,\mathbf{v},\Xi,\widetilde{\mathbf{r}}}(\Lambda_{\mathbb{Z}/ W_1\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/ W_d\mathbb{Z}})$ is equal to
\begin{align}
\label{calculation of comparison term}
& \Big(\prod\limits_{j=1}^d \frac{W_j}{\varphi(W_j)}\Big)\frac{1}{N^{h-m}} \sum\limits_{\substack{e_1,\dots,e_d\\e_j\vert W_j \,\forall j\leqslant d}} \Big(\prod\limits_{j=1}^{d}\mu(e_j)\Big)\sum\limits_{\substack{\mathbf{n}\in\mathbb{Z}^h\\ e_j\vert \xi_j(\mathbf{n}) + \widetilde{r}_j \forall j \leqslant d }} F((\Xi(\mathbf{n}) + \widetilde{\mathbf{r}})/N) G(L\mathbf{n} + \mathbf{v})\nonumber\\
& = \Big(\prod\limits_{j=1}^d \frac{W_j}{\varphi(W_j)}\Big)\sum\limits_{\substack{e_1,\dots,e_d\\e_j\vert W_j \,\forall j\leqslant d}} \Big(\prod\limits_{j=1}^{d}\mu(e_j)\Big) \alpha_{\mathbf{e},\widetilde{\mathbf{r}}} J_{\widetilde{\mathbf{r}}} + O_{C,K,L,P,\Xi}(N^{-K}),
\end{align}
\noindent by applying Lemma \ref{Lemma in APs} to the inner sum, where \[ \alpha_{\mathbf{e},\widetilde{\mathbf{r}}} = \lim\limits_{M\rightarrow \infty} \frac{1}{M^h} \sum\limits_{\mathbf{n} \in [M]^h} \prod\limits_{j=1}^d 1_{e_j\vert (\xi_j (\mathbf{n}) +\widetilde{r}_j)}. \] If $m=0$, one should apply Lemma \ref{Lemma just arithmetic progressions} in place of Lemma \ref{Lemma in APs}.
By using the identity (\ref{equation local von identity}) again one obtains
\begin{align}
\label{first part of lemma finished}
\Big(\prod\limits_{j=1}^d \frac{W_j}{\varphi(W_j)}\Big)\sum\limits_{\substack{e_1,\dots,e_d\\e_j\vert W_j \,\forall j\leqslant d}} \prod\limits_{j=1}^{d}\mu(e_j) \alpha_{\mathbf{e},\widetilde{\mathbf{r}}}&= \lim\limits_{M\rightarrow \infty}\frac{1}{M^{h}}\sum\limits_{\mathbf{n} \in [M]^h} \prod\limits_{j=1}^d \Lambda_{\mathbb{Z}/W_j\mathbb{Z}}(\xi_j (\mathbf{n}) +\widetilde{r}_j) \nonumber \\
& = \frac{1}{(\max W_j)^h} \sum\limits_{\mathbf{n} \in [\max W_j]^h}\prod\limits_{j=1}^d \Lambda_{\mathbb{Z}/W_j\mathbb{Z}}(\xi_j (\mathbf{n}) +\widetilde{r}_j).
\end{align}
\noindent This settles the first part of the lemma.
For the second part, by the Chinese Remainder Theorem we have that (\ref{first part of lemma finished}) is equal to
\begin{align}
\label{equation product of local factors in local calculation}
\Big(\prod\limits_{p\leqslant \min_j w_j}\frac{1}{p^h} \sum\limits_{\mathbf{m} \in [p]^h} \prod\limits_{j\leqslant d}& \Lambda_{\mathbb{Z}/p\mathbb{Z}}(\xi_j(\mathbf{m})+ \widetilde{r}_j)\Big) \nonumber \\
&\times \Big(\prod\limits_{\min_j w_j < p \leqslant \max_j w_j} \frac{1}{p^h} \sum\limits_{\mathbf{m} \in [p]^h} \prod\limits_{j\leqslant d }^* \Lambda_{\mathbb{Z}/p\mathbb{Z}}(\xi_j(\mathbf{m})+ \widetilde{r}_j)\Big),
\end{align}
\noindent where $\prod^*$ denotes the product over those $j\leqslant d$ for which $p\leqslant w_j$.
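The factorisation (\ref{equation product of local factors in local calculation}) is a direct consequence of the Chinese Remainder Theorem and the multiplicativity of the local averages. A minimal Python illustration (with $h = 1$, the two hypothetical forms $m \mapsto m$ and $m \mapsto m+2$, and the normalisation $\Lambda_{\mathbb{Z}/q\mathbb{Z}} = \frac{q}{\varphi(q)}\cdot 1_{(\cdot,q)=1}$ assumed; not part of the argument):

```python
from fractions import Fraction
from math import gcd, prod

def lam(n, q):
    # assumed normalisation: Lambda_{Z/qZ} = (q/phi(q)) times the indicator of units mod q
    phi_q = sum(1 for a in range(1, q + 1) if gcd(a, q) == 1)
    return Fraction(q, phi_q) if gcd(n, q) == 1 else Fraction(0)

def local_average(q, shifts):
    # (1/q^h) times the sum over a full period of the product of local factors (here h = 1)
    return sum(prod(lam(m + s, q) for s in shifts) for m in range(q)) / Fraction(q)

shifts = [0, 2]  # the two forms m -> m and m -> m + 2
# Chinese Remainder Theorem: the average mod 6 splits into averages mod 2 and mod 3.
assert local_average(6, shifts) == local_average(2, shifts) * local_average(3, shifts)
```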
Since $\Xi$ has finite Cauchy-Schwarz complexity there is no pair of forms $\xi_i$ and $\xi_j$ that are parallel. Therefore we may apply the analysis of local factors in \cite[Lemma 1.3]{GT10} to conclude that the first bracket in (\ref{equation product of local factors in local calculation}) is equal to $\mathfrak{S}_{\widetilde{\mathbf{r}}}(1 + O_{\Xi}((\min_j w_j(N))^{-1}))$, and that the second bracket is equal to $1+ O_{\Xi}((\min_j w_j(N))^{-1})$. Combining these bounds with Lemma \ref{Lemma singular series and singular integral are bounded} gives the second part of the present lemma.
\end{proof}
\begin{Remark}
\label{Remark asymptotic for local von mangoldt in general}
\emph{As we intimated earlier, in Remark \ref{Remark after statement of Main theorem}, one can use Lemma \ref{Lemma problem for local von Mangoldt} to establish an asymptotic expression for $T_{F,G,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/W\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/W\mathbb{Z}})$ in the general case. Indeed, one applies the rational parametrisation process of Lemma \ref{Lemma generating a purely irrational map} and then the asymptotic in Lemma \ref{Lemma problem for local von Mangoldt} to obtain} \[T_{F,G,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/W\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/W\mathbb{Z}}) = \sum\limits_{\widetilde{\mathbf{r}} \in \widetilde{R}} \mathfrak{S}_{\widetilde{\mathbf{r}}} J_{\widetilde{\mathbf{r}}} + o_{C,L,P}(1).\]
\end{Remark}
Now, Theorem \ref{Theorem pseudorandomness after rational reductions} will be settled if we can show that the left-hand side of (\ref{equation defining pseudorandomness after ratinoal reduction}) enjoys the same asymptotic expression as the one present in (\ref{asymptotic local}). By multiplying out the left-hand side of (\ref{equation defining pseudorandomness after ratinoal reduction}), we see that it is sufficient to prove the following lemma.
\begin{Lemma}
\label{Lemma sieve weight asymptotics in general case} Under the hypotheses of Theorem \ref{Theorem pseudorandomness after rational reductions}, if $\Xi$ has finite Cauchy-Schwarz complexity then
\begin{equation}
\label{equation local or sieve}
\frac{1}{N^{h-m}} \sum\limits_{\mathbf{n}\in \mathbb{Z}^h} \Big(\prod\limits_{j=1}^d \nu_j(n_j)\Big)F((\Xi(\mathbf{n}) + \widetilde{\mathbf{r}})/N)G(L\mathbf{n}+\mathbf{v}) = \mathfrak{S}_{\widetilde{\mathbf{r}}} J_{\widetilde{\mathbf{r}}} + O_{C,L,P,\Xi,\gamma}((\min_j w_j(N))^{-1/2}),
\end{equation}
\noindent where each $\nu_j$ equals either $c_{\rho,2}^{-1}\Lambda_{\rho,R,2}$ or $\Lambda_{\mathbb{Z}/W_j\mathbb{Z}}.$
\end{Lemma}
\begin{proof}[Proof of Lemma]
The first half of the proof of this lemma comprises manipulations that are very similar to those that have appeared previously in this section. Indeed, as before, it will be useful to let \[ S: = \{ j \leqslant d: \nu_{j} = c_{\rho,2}^{-1}\Lambda_{\rho, R, 2}\}\] and \[ S^\prime:= \{ j \leqslant d: \nu_{j} = \Lambda_{\mathbb{Z}/W_j\mathbb{Z}}\}.\] We may assume that $S \neq \emptyset$, as otherwise the estimate (\ref{equation local or sieve}) follows from Lemma \ref{Lemma problem for local von Mangoldt}.
Considering (\ref{equation local von identity}) again, and expressing each $\nu_j$ as a divisor sum, we have that the left-hand side of expression (\ref{equation local or sieve}) is equal to
\begin{align}
\label{massive expanded out sieve expression again}
\Big(\prod\limits_{j \in S^\prime} \frac{W_j}{\varphi(W_j)}\Big)c_{\rho,2}^{-\vert S\vert }(\log R)^{\vert S\vert} \sum\limits_{\substack{(e_{j,k})_{j \in S, k \in [2]} \in \mathbb{N}^{S \times [2]}\\ (e_{j})_{j\in S^\prime} \in \mathbb{N}^{S^\prime} \\e_j\vert W_j \, \forall j \in S^\prime}}
&\Big(\prod\limits_{\substack{ j \in S \\ k \in [2]}}\mu(e_{j,k})\rho\Big(\frac{\log e_{j,k}}{\log R}\Big)\Big)\Big(\prod\limits_{j\in S^\prime} \mu(e_{j})\Big) \nonumber\\
& \frac{1}{N^{h-m}}\sum\limits_{\substack{\mathbf{n}\in \mathbb{Z}^h\\ e_{j}\vert \xi_j(\mathbf{n}) + \widetilde{r}_j \forall j\leqslant d}} F((\Xi(\mathbf{n}) + \widetilde{\mathbf{r}})/N)G(L\mathbf{n} + \mathbf{v})
\end{align}
\noindent where if $ j \in S$ we write $e_j$ for the least common multiple $[e_{j,1}, e_{j,2}]$. Using the compact support of the function $\rho$, when analysing the inner sum one may assume that each $e_j$ is at most $N^{2\gamma}$.
We apply Lemma \ref{Lemma in APs} (or, if $m=0$, we apply Lemma \ref{Lemma just arithmetic progressions}). Therefore
\begin{align}
\label{equation analysing inner sum again}
&\frac{1}{N^{h-m}} \sum\limits_{\substack{\mathbf{n}\in \mathbb{Z}^h\\ e_j\vert \xi_j(\mathbf{n}) + \widetilde{r}_j\,\forall j\leqslant d}} F((\Xi(\mathbf{n}) + \widetilde{\mathbf{r}})/N)G(L\mathbf{n} +\mathbf{v} ) = \alpha_{\mathbf{e},\widetilde{\mathbf{r}}} J_{\widetilde{\mathbf{r}}} + O_{C,L,P,\Xi}(N^{-20}),
\end{align} where \begin{equation}
\label{local factor def again}
\alpha_{\mathbf{e},\widetilde{\mathbf{r}}} := \lim\limits_{M\rightarrow \infty}\frac{1}{M^{h}}\sum\limits_{\mathbf{m}\in [M]^h} \prod\limits_{j=1}^d 1_{e_j\vert (\xi_j(\mathbf{m}) + \widetilde{r}_j)}.
\end{equation}
As before, the bounds on the $W_j$ and the $e_j$ allow the error term in (\ref{equation analysing inner sum again}) to be summed over all $e_j$ with an acceptable total contribution, provided $\gamma$ is small enough. Therefore expression (\ref{equation local or sieve}) (and hence the entirety of Theorem \ref{Theorem pseudorandomness after rational reductions}) would follow from the asymptotic expression
\begin{align}
\label{equation rational from irrational in sieve calculation again}
&\Big(\prod\limits_{j \in S^\prime} \frac{W_j}{\varphi(W_j)}\Big)(\log R)^{\vert S\vert} \sum\limits_{\substack{(e_{j,k})_{j \in S, k \in [2]} \in \mathbb{N}^{S \times [2]}\\(e_{j})_{j\in S^\prime} \in \mathbb{N}^{S^\prime} \\ e_j \vert W_j \, \forall j \in S^\prime}}
&\Big(\prod\limits_{\substack{j \in S \\ k \in[2]}} \mu(e_{j,k})\rho\Big(\frac{\log e_{j,k}}{\log R}\Big)\Big)\Big(\prod\limits_{j\in S^\prime} \mu(e_{j})\Big) \alpha_{\mathbf{e},\widetilde{\mathbf{r}}}\nonumber \\
& = c_{\rho,2}^{\vert S\vert }\mathfrak{S}_{\widetilde{\mathbf{r}}} + O((\min_j w_j(N))^{-1/2}).
\end{align}
\noindent Note that this expression concerns linear forms with integer coefficients. We have removed the irrational information entirely. \\
Expression (\ref{equation rational from irrational in sieve calculation again}) follows from the sieve calculation \cite[Theorem D.3]{GT10}, after restricting to suitable arithmetic progressions. Indeed, let \[ A: = \{ \mathbf{a} \in [W]^h: ((\xi_j(\mathbf{a}) + \widetilde{r}_j),W_j) = 1 \, \forall j \in S^\prime\}.\] Then the left-hand side of (\ref{equation rational from irrational in sieve calculation again}) is equal to \begin{align}
\label{split into arithmetic progressions}
&\Big(\prod\limits_{j \in S^\prime} \frac{W_j}{\varphi(W_j)}\Big)\frac{1}{W^h}\sum\limits_{\mathbf{a} \in A}(\log R)^{\vert S\vert}\nonumber\\
&\sum\limits_{(e_{j,k})_{j \in S, k \in [2]} \in \mathbb{N}^{S \times [2]}}\Big(\prod\limits_{\substack{ j \in S \\ k \in [2]}} \mu(e_{j,k})\rho\Big(\frac{\log e_{j,k}}{\log R}\Big)\Big)\lim\limits_{M\rightarrow \infty} \frac{1}{M^h} \sum\limits_{\mathbf{m}\in [M]^h} \prod\limits_{j \in S} 1_{e_j\vert (\xi_j(W\mathbf{m} + \mathbf{a}) + \widetilde{r}_j)}.
\end{align}
The expression following the summation in $\mathbf{a}$ is amenable to the estimate (D.4) from \cite{GT10}, applied with $t = \vert S\vert$ and affine linear forms \[ \psi_j(\mathbf{m}) : = \xi_j(W\mathbf{m} + \mathbf{a}) + \widetilde{r}_j ,\qquad j \in S.\] In order to apply this estimate we note first that $t\neq 0$ (since we have previously assumed that $S \neq \emptyset$). We also note again that, by the finite Cauchy-Schwarz complexity assumption, no two of the forms $\psi_j$ are rational multiples of each other.
So, applying the estimate (D.4) from \cite{GT10} we have that the expression in (\ref{split into arithmetic progressions}) following the summation in $\mathbf{a}$ is equal to
\begin{align}
\label{application of D4}
c_{\rho,2}^{\vert S\vert}\prod\limits_{p}\beta_{p,\mathbf{a}} + O_{C,\Xi,\gamma}(e^{O(X)} \log^{-\frac{1}{20}} R),
\end{align}
where \[ \beta_{p,\mathbf{a}} = \frac{1}{p^h}\sum\limits_{\mathbf{m} \in [p]^h} \prod\limits_{j \in S} \Lambda_{\mathbb{Z}/p\mathbb{Z}} (\xi_j(W\mathbf{m} +\mathbf{a}) + \widetilde{r}_j),\] and \[ X : = \sum\limits_{p\in P_{\Xi}} p^{-1/2},\] where $P_{\Xi}$ is the set of `exceptional' primes, i.e. those primes $p$ for which there exist $i$ and $j$ for which the forms $\xi_i(W\mathbf{m} +\mathbf{a}) + \widetilde{r}_i$ and $\xi_j(W\mathbf{m} +\mathbf{a}) + \widetilde{r}_j$ are affinely related modulo $p$.
\begin{Remark}
\emph{The reader may have noticed that expression (\ref{application of D4}) is not exactly what was proved in estimate (D.4) of \cite{GT10}. Rather than having an error term depending on $\Xi$ and $C$, that expression has an error term depending on the linear maps $\mathbf{m} \mapsto \xi_j(W\mathbf{m} + \mathbf{a}) + \widetilde{r}_j$ which, one notes, have coefficients that depend on $W$ and that are therefore unbounded. Fortunately, the dependence of the error term on the size of the coefficients is only polynomial, and so any contribution from powers of $W$ may be absorbed into the $\log ^{-\frac{1}{20}} R$ factor. }
\emph{This technical manoeuvre is also required in \cite{GT10} (in the application of Theorem D.3 that follows expression (D.24)), although it is not explicitly stated by the authors.}
\end{Remark}
Following on from (\ref{application of D4}) and assuming that $N$ is large enough in terms of $\Xi$, we see that any $p \in P_\Xi$ satisfies $p \leqslant w$ (as $\Xi$ has finite Cauchy-Schwarz complexity). Since $w(N) = \max(1,\log\log\log N)$, the error in (\ref{application of D4}) is therefore $O_{C,\Xi,\gamma}( \log ^{-\Omega(1)} N)$. Furthermore, by \cite[Lemma 1.3]{GT10} we have $\beta_{p,\mathbf{a}} = 1+O(p^{-2})$, and so $\prod\limits_{p>w} \beta_{p,\mathbf{a}} = 1 + O(w^{-1})$. Finally, if $p\leqslant w$ then \[\beta_{p,\mathbf{a}} = \prod\limits_{j \in S} \Lambda_{\mathbb{Z}/p\mathbb{Z}} (\xi_j(\mathbf{a}) + \widetilde{r}_j).\] Therefore expression (\ref{split into arithmetic progressions}), up to an error term of $O_{C,\Xi,\gamma}(w^{-1/2})$, is equal to \begin{align}
\label{long series of local deductions}
& c_{\rho,2}^{\vert S\vert}\Big(\prod\limits_{j\in S^\prime} \frac{W_j}{\varphi(W_j)}\Big) \frac{1}{W^h}\sum\limits_{\mathbf{a}\in A} \prod\limits_{p\leqslant w} \beta_{p,\mathbf{a}}\nonumber\\
= &\, c_{\rho,2}^{\vert S\vert} \Big(\prod\limits_{j\in S^\prime} \frac{W_j}{\varphi(W_j)}\Big) \frac{1}{W^h} \sum\limits_{\mathbf{a}\in A} \prod\limits_{j \in S}\Lambda_{\mathbb{Z}/W\mathbb{Z}}(\xi_j(\mathbf{a}) + \widetilde{r}_j) \nonumber \\
= & \,c_{\rho,2}^{\vert S\vert}\frac{1}{W^h} \sum\limits_{\mathbf{a} \in [W]^h} \prod\limits_{j \in S}\Lambda_{\mathbb{Z}/W\mathbb{Z}}(\xi_j(\mathbf{a}) + \widetilde{r}_j)\prod\limits_{j\in S^\prime} \Lambda_{\mathbb{Z}/W_j\mathbb{Z}}(\xi_j(\mathbf{a}) + \widetilde{r}_j)\nonumber\\
= & \,c_{\rho,2}^{\vert S\vert}\prod\limits_{p\leqslant \min_j w_j}\frac{1}{p^h} \sum\limits_{\mathbf{m} \in [p]^h} \prod\limits_{j\leqslant d} \Lambda_{\mathbb{Z}/p\mathbb{Z}}(\xi_j(\mathbf{m}) + \widetilde{r}_j) \times \prod\limits_{\min_j w_j < p \leqslant w} \widetilde{\beta_p},
\end{align}
\noindent where \[ \widetilde{\beta_p}: = \frac{1}{p^h}\sum\limits_{\mathbf{m} \in [p]^h}\Big(\prod\limits_{j \in S}\Lambda_{\mathbb{Z}/p\mathbb{Z}}(\xi_j(\mathbf{m}) + \widetilde{r}_j)\Big)\Big(\prod\limits_{j\in S^\prime}^{*}\Lambda_{\mathbb{Z}/p\mathbb{Z}}(\xi_j(\mathbf{m}) + \widetilde{r}_j)\Big),\] where $\prod^*$ denotes the product over all $j\in S^\prime$ for which $p\leqslant w_j$.
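The size of these local factors can be made concrete in a toy case: for the hypothetical pair of forms $m \mapsto m$ and $m \mapsto m+2$, a direct computation gives $\beta_p = 1 - (p-1)^{-2}$ for every odd prime $p$, in line with the $1 + O(p^{-2})$ bounds invoked below. A Python check (illustrative only, with the normalisation $\Lambda_{\mathbb{Z}/p\mathbb{Z}}(n) = \frac{p}{p-1}1_{p \nmid n}$ assumed):

```python
from fractions import Fraction

def lam(n, p):
    # assumed normalisation of the local von Mangoldt function modulo a prime p
    return Fraction(p, p - 1) if n % p != 0 else Fraction(0)

for p in [3, 5, 7, 11, 13, 97, 101]:
    beta_p = sum(lam(m, p) * lam(m + 2, p) for m in range(p)) / Fraction(p)
    # exactly 1 - (p-1)^{-2}, so in particular 1 + O(p^{-2})
    assert beta_p == 1 - Fraction(1, (p - 1) ** 2)
```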
By invoking \cite[Lemma 1.3]{GT10} again we conclude that $\widetilde{\beta_p} = 1+O(p^{-2})$ and also that the first part of expression (\ref{long series of local deductions}) is equal to $c_{\rho,2}^{\vert S\vert}\mathfrak{S}_{\widetilde{\mathbf{r}}}(1+O_{C,\Xi}((\min_j w_j)^{-1}))$. Hence, as in the conclusion of the proof of Lemma \ref{Lemma problem for local von Mangoldt}, we conclude that expression (\ref{long series of local deductions}) is equal to $c_{\rho,2}^{\vert S\vert}\mathfrak{S}_{\widetilde{\mathbf{r}}} + O_{C,\Xi,\gamma}((\min_j w_j)^{-1/2})$. This establishes expression (\ref{equation rational from irrational in sieve calculation again}), and so Lemma \ref{Lemma sieve weight asymptotics in general case} is proved.
\end{proof}
Therefore Theorem \ref{Theorem pseudorandomness after rational reductions} is resolved.
\end{proof}
Hence Theorem \ref{Theorem pseudorandomness} is settled as well, i.e. we conclude that the weight $\nu_{N,w}^\gamma$ is $(L,w)$-pseudorandom.
\end{proof}
We finish this section by noting a corollary of the theorems above, which will be useful in its own right.
\begin{Corollary}[Upper bound for linear inequalities]
\label{Corollary upper bound}
Let $N,m,d$ be natural numbers, with $d\geqslant m+2$, and let $C,\varepsilon,\gamma$ be positive reals. Let $L:\mathbb{R}^d \longrightarrow \mathbb{R}^m$ be a surjective linear map with algebraic coefficients, and suppose that $L\notin V_{\operatorname{degen}}^*(m,d)$. Let $u$ be the rational dimension of $L$. Let $w_1,\dots,w_d:\mathbb{N}\longrightarrow \mathbb{R}_{\geqslant 0}$ be functions such that, for each $j$, $w_j(n)\rightarrow \infty$ as $n\rightarrow \infty$ and $w_j(n)\leqslant w(n)$ for all $n$. If $\gamma$ is small enough in terms of $L$, then for all functions $F:\mathbb{R}^d \longrightarrow [0,1]$ supported on $[-C,C]^d$, for all functions $G:\mathbb{R}^m \longrightarrow [0,1]$ supported on $[-\varepsilon,\varepsilon]^m$, and for all $\mathbf{v}\in\mathbb{R}^m$ satisfying $\Vert \mathbf{v} \Vert_\infty \leqslant CN$, one has \[ T_{F,G,N}^{L,\mathbf{v}}(f_1,\dots,f_d) \ll_{C,L} \Vert G\Vert_\infty \varepsilon^{m-u} + o(1)\] as $N\rightarrow \infty$, where each $f_j$ equals either $\nu_{N,w}^\gamma$ or $\Lambda_{\mathbb{Z}/W_j\mathbb{Z}} $. The $o(1)$ term may depend on $C$, $L$, $\varepsilon$, $\gamma$, and the choice of functions $w_1,\dots,w_d$.
\end{Corollary}
\begin{proof}
Using Lemma \ref{Lemma smooth approximations}, replace both $F$ and $G$ by compactly supported smooth majorants $F_1$ and $G_1$ for which \[ 1_{[-C_1,C_1]^d} \leqslant F_1 \leqslant 1_{[-2C_1,2C_1]^d}\] and \[ 1_{[-\varepsilon,\varepsilon]^m} \leqslant G_1 \leqslant 1_{[-2\varepsilon,2\varepsilon]^m}.\] We have $F_1 \in \mathcal{C}(C_1)$ and $G_1 \in \mathcal{C}(\varepsilon)$. Then, by Theorem \ref{Theorem pseudorandomness},
\begin{align*}
T_{F,G,N}^{L,\mathbf{v}}(f_1,\dots,f_d) &\ll T_{F_1,G_1,N}^{L,\mathbf{v}}(f_1,\dots,f_d) \\
& = T_{F_1,G_1,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/W_1\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/W_d\mathbb{Z}}) + o(1),
\end{align*}
where the error term may depend on $C$, $L$, $\varepsilon$, $\gamma$, and the functions $w_1,\dots,w_d$.
In Remark \ref{Remark asymptotic for local von mangoldt in general}, we noted that \[T_{F_1,G_1,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/W_1\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/W_d\mathbb{Z}}) = \sum\limits_{\widetilde{\mathbf{r}} \in \widetilde{R}} \mathfrak{S}_{\widetilde{\mathbf{r}}} J_{\widetilde{\mathbf{r}}} + o(1),\] where the error term depends on the parameters mentioned above, and where $\mathfrak{S}_{\widetilde{\mathbf{r}}}$ and $J_{\widetilde{\mathbf{r}}}$ are of the form (\ref{equation singular series}) and (\ref{equation singular integral again}). The corollary then follows from the bounds in Lemma~\ref{Lemma singular series and singular integral are bounded}.
\end{proof}
This result is to be compared with the following statement.
\begin{Lemma}[Weak upper bound]
\label{Lemma trivial upper bound}
Let $N,m,d$ be natural numbers, with $d\geqslant m$, and let $C,\varepsilon$ be positive parameters. Let $L:\mathbb{R}^d \longrightarrow \mathbb{R}^m$ be a surjective linear map. Then, for all functions $F: \mathbb{R}^d \longrightarrow [0,1]$ supported on $[-C,C]^d$, for all functions $G: \mathbb{R}^m \longrightarrow [0,1]$ supported on $[-\varepsilon,\varepsilon]^m$, for all $\mathbf{v} \in \mathbb{R}^m$, and for all functions $f_1,\dots,f_d: \mathbb{Z} \longrightarrow \mathbb{R}$, \[ T_{F,G,N}^{L,\mathbf{v}}(f_1,\dots,f_d) \ll_{C, L,\varepsilon} \Vert G\Vert_\infty \sup\limits_{\substack{j\leqslant d \\ \vert n\vert \leqslant \operatorname{Rad}(F)N}} \vert f_j(n)\vert^d .\]
\end{Lemma}
\noindent The bound in Lemma \ref{Lemma trivial upper bound} is weaker than the bound in Corollary \ref{Corollary upper bound}, but has the advantage of holding for all surjective maps $L$, a level of generality that will be needed later.\\
\begin{proof}
This is essentially identical to Lemma 3.2 of \cite{Wa17}. Indeed, one sees immediately that \[ T_{F,G,N}^{L,\mathbf{v}}(f_1,\dots,f_d) \ll \frac{1}{N^{d-m}}\Big(\sup\limits_{\substack{j\leqslant d \\ \vert n\vert \leqslant \operatorname{Rad}(F)N}} \vert f_j(n)\vert^d \Big) \times \sum\limits_{\substack{ \mathbf{n} \in [-CN,CN]^d \\ \Vert L\mathbf{n} + \mathbf{v}\Vert_\infty \leqslant \varepsilon}} 1.\] Since $L$ is surjective, without loss of generality we may assume that the first $m$ columns of $L$ form an invertible matrix. If the variables $n_{m+1}$ to $n_d$ are fixed, there are only $O_{\varepsilon,L}(1)$ possible choices for $n_1 ,\dots,n_m$ for which the inequality $\Vert L\mathbf{n} + \mathbf{v} \Vert_\infty \leqslant \varepsilon$ is satisfied. Summing over $n_{m+1}$ to $n_{d}$, the lemma follows.
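The counting step at the heart of this proof can be illustrated numerically. The following Python sketch uses a toy instance ($m = 1$, $d = 2$) with the hypothetical map $L(n_1,n_2) = n_1 + \sqrt{2}\,n_2$ and $\mathbf{v} = \mathbf{0}$; it is not part of the argument:

```python
import math

# Toy instance (m = 1, d = 2) with L(n1, n2) = n1 + sqrt(2) * n2:
# once n2 is fixed, |n1 + sqrt(2) n2| <= eps pins n1 to an interval of
# length 2 * eps < 1, which contains at most one integer.
alpha, eps, N = math.sqrt(2), 0.3, 100
count = sum(1
            for n2 in range(-N, N + 1)
            for n1 in range(-3 * N, 3 * N + 1)
            if abs(n1 + alpha * n2) <= eps)
assert count <= 2 * N + 1  # at most one solution n1 per value of n2
```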
\end{proof}
\part{The structure of inequalities}
\label{part the structure of inequalities}
Before embarking upon this part of the argument, we remind the reader of the following basic notion from functional analysis. A linear map $L:(V, \Vert \cdot \Vert_V) \longrightarrow (W, \Vert \cdot \Vert_W)$ between two normed spaces will be called a \emph{bounded operator} if there exists a constant $C_L$ such that for all $\mathbf{v} \in V$ one has $\Vert L \mathbf{v}\Vert_{W} \leqslant C_L \Vert \mathbf{v} \Vert _V$. It is a standard fact that all linear maps between two finite dimensional normed spaces are bounded. \\
\section{An alternative formulation}
So far all of our theorems and lemmas have been phrased in terms of linear inequalities that are written in the form $T_{F,G,N}^{L,\mathbf{v}}(f_1,\dots,f_d)$. In Section \ref{section General proof of the real variable von Neumann Theorem} the auxiliary inequalities will appear in a different form, but, as is shown in Lemma \ref{Lemma different forms of inequalities} below, these different forms are more or less equivalent. The statement of this lemma is unfortunately rather technical, but the proof is straightforward. The reader may wish in the first instance to consider the special case in which $l = 0$ and $\Phi$ is injective.
\begin{Lemma}[Alternative formulation]
\label{Lemma different forms of inequalities}
Let $m,d,l$ be natural numbers, with $d\geqslant m$, and let $C,\sigma,\eta$ be positive parameters. Let $P$ be another set of parameters. Let $k$ be a non-negative integer, and suppose that $\eta$ is small enough in terms of $m$, $d$, $k$ and $l$. Let $\Phi:\mathbb{R}^{d-m+k}\longrightarrow \mathbb{R}^d$ be a linear map, and suppose that $k = \dim \ker \Phi$. Let $I:\mathbb{R}^d \longrightarrow [0,1]$ and $H:\mathbb{R}^{d-m+k +l} \longrightarrow [0,1]$ be smooth functions, where $\operatorname{Rad}(I) \leqslant \eta$ and $\operatorname{Rad}(H) \leqslant C$. Assume that the Lipschitz constant of $H$ is at most $\sigma^{-1}$ and assume further that $H,I \in \mathcal{C}(P)$. Then
\begin{enumerate}
\item there exists a surjective linear map $L:\mathbb{R}^d \longrightarrow \mathbb{R}^m$ such that $\ker L = \operatorname{Im} \Phi$ and $\Vert L\Vert_\infty = O_{\Phi}(1)$. If $\Phi$ has algebraic coefficients then $L$ can be chosen to have algebraic coefficients.
\item for any $L$ satisfying part (1), if $\Phi$ has finite Cauchy-Schwarz complexity then $L \notin V_{\operatorname{degen}}^*(m,d)$.
\item for any $L$ satisfying part (1), if $\eta$ is small enough in terms of $L$ and $\Phi$ then there exist smooth functions $F:\mathbb{R}^{d+l} \longrightarrow \mathbb{R}_{\geqslant 0}$ and $G:\mathbb{R}^{m} \longrightarrow \mathbb{R}_{\geqslant 0}$, with $F \in \mathcal{C}(P,\Phi)$, $G \in\mathcal{C}(L,P,\Phi)$ and $\operatorname{Rad}(G) \ll_L\eta $, such that for all $\mathbf{v} \in \mathbb{R}^l$, $\mathbf{z} \in \mathbb{R}^d$, and natural numbers $N$,
\begin{equation}
\label{equation different forms of inequalities}
\frac{1}{N^k} \int\limits_{\mathbf{x} \in \mathbb{R}^{d-m+k}} I(\mathbf{z} - \Phi(\mathbf{x}))H((\mathbf{v},\mathbf{x})/N) \, d\mathbf{x} = F((\mathbf{v},\mathbf{z})/N) G(L\mathbf{z}) + E(\mathbf{z},N),\end{equation}
\noindent where $E(\mathbf{z},N)$ is an error term of size at most \[
O_{C,\Phi}(\eta \sigma^{-1} N^{-1} 1_{[0,O_{C,\Phi}(N)]}(\Vert \mathbf{z}\Vert_\infty )1_{[0,O_L(\eta)]} (\Vert L\mathbf{z} \Vert_\infty)).\]
\end{enumerate}
\end{Lemma}
\begin{proof}
Part (1) of the lemma is immediate. Indeed, one has the quotient map $\pi:\mathbb{R}^d \longrightarrow \mathbb{R}^d/ \operatorname{Im} \Phi$. Choosing an isomorphism $\iota: \mathbb{R}^d/ \operatorname{Im} \Phi \cong \mathbb{R}^m$, we may define $L:= \iota \circ \pi$. If $\Phi$ has algebraic coefficients then choosing such an $\iota$ with algebraic coefficients gives a suitable $L$ with algebraic coefficients.
For part (2), suppose that $\Phi$ has finite Cauchy-Schwarz complexity. If $L$ were in $V_{\operatorname{degen}}^*(m,d)$ then there would exist $i,j\leqslant d$ and a real number $\lambda$ for which $\mathbf{e_i} - \lambda \mathbf{e_j}$ is nonzero and $\mathbf{e_i^*} - \lambda\mathbf{e_j^*} \in L^*((\mathbb{R}^{m})^*)$. This would imply $\mathbf{e_i^*} - \lambda\mathbf{e_j^*} \in (\operatorname{Im} \Phi)^0$, and hence that $\Phi$ has infinite Cauchy-Schwarz complexity, contradicting the hypothesis. \\
It remains to prove part (3). Let $\{\mathbf{u^{(1)}},\dots,\mathbf{u^{(k)}}\} \subset \mathbb{R}^{d-m+k}$ be an orthonormal basis for $\ker \Phi$, and extend this to an orthonormal basis $\{\mathbf{u^{(1)}},\dots,\mathbf{u^{(d-m+k)}}\}$ for $\mathbb{R}^{d-m+k}$. Then define the linear map $\Psi:\mathbb{R}^{d-m+k}\longrightarrow \mathbb{R}^{d-m+k}$ by \[\Psi(\mathbf{y}) = \sum\limits_{j=1}^{d-m+k} y_j\mathbf{u^{(j)}}.\] By changing variables, we have that the left-hand side of (\ref{equation different forms of inequalities}) is equal to \[\frac{1}{N^k}\int\limits_{\mathbf{y}\in \mathbb{R}^{d-m+k}} I(\mathbf{z} - \Phi(\Psi(\mathbf{y})))H((\mathbf{v},\Psi(\mathbf{y}))/N)\, d\mathbf{y},\] which equals
\begin{equation}
\label{after second rescaling}
\frac{1}{N^k}\int\limits_{\mathbf{y}\in \mathbb{R}^{d-m+k}} I(\mathbf{z} - \Phi(\Psi(\mathbf{0},\mathbf{y_{k+1}^{d-m+k}})))H((\mathbf{v},(\Psi(\mathbf{y_1^k},\mathbf{0}) + \Psi(\mathbf{0},\mathbf{y_{k+1}^{d-m+k}})))/N) \, d\mathbf{y}.
\end{equation}
\noindent Recall, from Section \ref{section conventions}, that we use the notation $\mathbf{y_1^k}$ to refer to the vector $(y_1,\dots,y_k)^T \in \mathbb{R}^k$, etcetera.\\
We make some observations. Firstly, we observe that (\ref{after second rescaling}) is equal to $0$ unless $\Vert \mathbf{z} \Vert_\infty = O_{C,\Phi}(N)$. Indeed, if $\Vert \mathbf{z} \Vert_\infty \geqslant C_1N$ then for all $y_{k+1},\dots,y_{d-m+k}$ that give a non-zero contribution to (\ref{after second rescaling}) we have \[\Vert \Phi(\Psi(\mathbf{0}, \mathbf{y_{k+1}^{d-m+k}}))\Vert_\infty \geqslant \frac{1}{2} C_1 N,\] if $\eta$ is small enough. This means that \[\Vert \Psi(\mathbf{0}, \mathbf{y_{k+1}^{d-m+k}})\Vert_\infty \gg_{\Phi} C_1 N,\] which if $C_1$ is large enough in terms of $C$ and $\Phi$ means that \[ H((\mathbf{v},(\Psi(\mathbf{y_1^k},\mathbf{0}) + \Psi(\mathbf{0},\mathbf{y_{k+1}^{d-m+k}})))/N) = 0\] for all $y_1,\dots,y_k$. [Note that $\Psi(\mathbf{y_1^k},\mathbf{0})$ and $\Psi(\mathbf{0},\mathbf{y_{k+1}^{d-m+k}})$ are orthogonal.]
Secondly, we observe that \[\Vert \mathbf{z} - \Phi(\Psi(\mathbf{0},\mathbf{y_{k+1}^{d-m+k}}))\Vert_\infty \ll \eta\] for all $y_{k+1},\dots,y_{d-m+k}$ that give a non-zero contribution to the integral (\ref{after second rescaling}). Write $\mathbf{z} = \mathbf{z_1} + \mathbf{z_2}$, where $\mathbf{z_1} \in \operatorname{Im} \Phi$ and $\mathbf{z_2}\in (\operatorname{Im} \Phi)^\perp$. By orthogonality, we conclude that \[\Vert \mathbf{z_1} - \Phi(\Psi(\mathbf{0},\mathbf{y_{k+1}^{d-m+k}}))\Vert_\infty \ll \eta.\] Since $(\Phi|_{(\ker \Phi)^\perp})^{-1}:\operatorname{Im} \Phi \longrightarrow (\ker \Phi)^\perp$ is a bounded linear map, this in turn means that \[ \Vert (\Phi|_{(\ker \Phi)^\perp})^{-1}(\mathbf{z_1}) - \Psi(\mathbf{0},\mathbf{y_{k+1}^{d-m+k}})\Vert_\infty\ll_\Phi \eta.\] Since $H$ is Lipschitz, with Lipschitz constant at most $\sigma^{-1}$, this all means that (\ref{after second rescaling}) is equal to
\begin{align}
\label{two brackets}
&\Big(\int\limits_{\mathbf{y_{k+1}^{d-m+k}} \in \mathbb{R}^{d-m}} I(\mathbf{z} - \Phi(\Psi(\mathbf{0},\mathbf{y_{k+1}^{d-m+k}}))) \, d\mathbf{y_{k+1}^{d-m+k}}\Big)\nonumber \\
&\times \frac{1}{N^k}\Big(\int\limits_{\mathbf{y_1^k}\in \mathbb{R}^k}H((\mathbf{v},(\Psi(\mathbf{y_1^k},\mathbf{0}) + (\Phi|_{(\ker \Phi)^\perp})^{-1}(\mathbf{z_1})))/N) \,d\mathbf{y_1^k}\Big),
\end{align}
\noindent plus an error of size at most \[O_{C,\Phi}(\eta \sigma^{-1} N^{-1} 1_{[0,O_{C,\Phi}(N)]}(\Vert \mathbf{z}\Vert_\infty)1_{[0,O(\eta)]}(\operatorname{dist}(\mathbf{z}, \operatorname{Im} \Phi))).\]
We proceed to analyse the two bracketed factors of (\ref{two brackets}) separately. Firstly, by shifting the variables $y_{k+1},\dots,y_{d-m+k}$ we see that the first bracket of (\ref{two brackets}) is equal to
\begin{equation}
\label{first bracket}
\int\limits_{\mathbf{y_{k+1}^{d-m+k}} \in \mathbb{R}^{d-m}} I(\mathbf{z_2} - \Phi(\Psi(\mathbf{0},\mathbf{y_{k+1}^{d-m+k}}))) \, d\mathbf{y_{k+1}^{d-m+k}}.
\end{equation}
\noindent Now let $L:\mathbb{R}^d \longrightarrow \mathbb{R}^{m}$ be any surjective linear map that satisfies $\ker L = \operatorname{Im} \Phi$. Note that $L|_{(\operatorname{Im} \Phi)^\perp}$ is an injective linear map, and thus (\ref{first bracket}) is equal to \[\int\limits_{\mathbf{y_{k+1}^{d-m+k}} \in \mathbb{R}^{d-m}}I((L|_{(\operatorname{Im} \Phi)^\perp})^{-1}L\mathbf{z_2} - \Phi (\Psi(\mathbf{0},\mathbf{y_{k+1}^{d-m+k}}))) \, d \mathbf{y_{k+1}^{d-m+k}}.\] Differentiating inside the integral, one sees that this expression is equal to $G(L\mathbf{z_2})$ for some smooth compactly supported function $G:\mathbb{R}^{m}\longrightarrow \mathbb{R}_{\geqslant 0}$ satisfying $G \in \mathcal{C}(L,P,\Phi)$. Moreover, $G$ is supported on $[-O_L(\eta),O_L(\eta)]^m$, since $(L|_{(\operatorname{Im} \Phi)^\perp})^{-1}L\mathbf{z_2}$ and $\Phi (\Psi(\mathbf{0},\mathbf{y_{k+1}^{d-m+k}}))$ are orthogonal. Note that $L\mathbf{z_2} = L\mathbf{z}$, so the expression is equal to $G(L\mathbf{z})$.\\
We move to the second term of (\ref{two brackets}). Choose $\iota: \operatorname{Im} \Phi \longrightarrow \mathbb{R}^{d-m}$ to be an isomorphism with $\Vert \iota(\mathbf{x})\Vert_\infty \asymp _{\Phi} \Vert\mathbf{x}\Vert_\infty$. Then the second term of (\ref{two brackets}) is equal to $F_1((\mathbf{v},\iota(\mathbf{z_1}))/N)$ for some smooth function $F_1:\mathbb{R}^{d-m+l}\longrightarrow \mathbb{R}_{\geqslant 0}$ satisfying $F_1 \in \mathcal{C}(P,\Phi)$. Note that $F_1$ is indeed compactly supported, since $(\Phi|_{(\ker \Phi)^\perp})^{-1}(\mathbf{z_1})$ and $\Psi(\mathbf{y_1^k},\mathbf{0})$ are orthogonal vectors.\\
In summary, we have shown that the left-hand side of (\ref{equation different forms of inequalities}) is equal to
\begin{equation}
F_1((\mathbf{v},\iota(\mathbf{z_1}))/N) G(L\mathbf{z}),
\end{equation} plus an error of size \[O_{C,\Phi}(\eta \sigma^{-1} N^{-1} 1_{[0,O_{C,\Phi}(N)]}(\Vert \mathbf{z}\Vert_\infty)1_{[0,O(\eta)]}(\operatorname{dist}(\mathbf{z}, \operatorname{Im} \Phi))).\] By the construction of $L$, this error is bounded by \[O_{C,\Phi}(\eta \sigma^{-1} N^{-1}1_{[0,O_{C,\Phi}(N)]}(\Vert \mathbf{z}\Vert_\infty) 1_{[0,O_L(\eta)]}(\Vert L\mathbf{z}\Vert_\infty)).\] The term $F_1((\mathbf{v},\iota(\mathbf{z_1}))/N) G(L\mathbf{z})$ is not quite of the required form, since $F_1((\mathbf{v},\iota(\mathbf{z_1}))/N)$ is not compactly supported as a function of $\mathbf{z}$. However, it may be easily massaged into this form. Indeed, from the above discussion we know that $G(L\mathbf{z}) \neq 0$ implies that $\Vert\mathbf{z_2}\Vert_\infty \leqslant C_1\eta$, for some constant $C_1$ that satisfies $C_1 = O_{L,\Phi}(1)$. Let $b:\mathbb{R}\longrightarrow [0,1]$ be a $1/2$-supported function (in the sense of Definition \ref{Defintion eta supported}), and let $B:\mathbb{R}^{d}\longrightarrow [0,1]$ be defined by $B(\mathbf{x}) = \prod_{j=1}^d b(x_j)$. Then let $F:\mathbb{R}^{d+l} \longrightarrow \mathbb{R}_{\geqslant 0}$ be defined by \[F(\mathbf{v},\mathbf{z}): = F_1(\mathbf{v},\iota (\mathbf{z_1})) B(\mathbf{z_2}).\] Then $F \in \mathcal{C}(P,\Phi)$, and if $\eta \leqslant 1/(2C_1)$ we have \[ F_1((\mathbf{v},\iota(\mathbf{z_1}))/N) G(L\mathbf{z}) = F((\mathbf{v},\mathbf{z})/N) G(L\mathbf{z}).\] The lemma is proved.
\end{proof}
This reformulation allows us to deduce Corollary \ref{Corollary switching functions} below. This is a corollary of Theorem \ref{Theorem pseudorandomness} and is the result on inequalities and sieve weights that we will actually use in Section \ref{section Cauchy Schwarz argument}. In order to state this inequality, we introduce the following convention.
\begin{Definition}[Convolution]
\label{Definition convolution}
If $f:\mathbb{Z}\longrightarrow \mathbb{R}$ has finite support, and $g:\mathbb{R}\longrightarrow [0,1]$ is a measurable function, we may define the convolution $f\ast g : \mathbb{R}\longrightarrow \mathbb{R}$ by \[ (f\ast g)(x): = \sum\limits_{n\in \mathbb{Z}} f(n) g(x -n).\]
\end{Definition}
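For instance, if $f$ is supported on $\{0,1\}$ with $f(0) = 2$ and $f(1) = 3$, then for any measurable $g:\mathbb{R}\longrightarrow [0,1]$ we have
\[ (f \ast g)(x) = 2g(x) + 3g(x-1),\]
a weighted sum of integer translates of $g$.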
Recall from Section \ref{section conventions} that, for some positive parameter $\eta$, the function $\chi:\mathbb{R} \longrightarrow [0,1]$ denotes a fixed $\eta$-supported function.
\begin{Corollary}[Switching functions]
\label{Corollary switching functions}
Let $N,m,d$ be natural numbers, with $d\geqslant m+2$, and let $k$ be a non-negative integer. Let $C,\gamma,\eta$ be positive parameters, and let $P$ be a set of further parameters. Suppose that $\eta$ is small enough in terms of $m$, $d$, and $k$. Let $(\varphi_1,\dots,\varphi_d) = \Phi:\mathbb{R}^{d-m+k} \longrightarrow \mathbb{R}^d$ be a linear map with algebraic coefficients, and suppose that $k=\dim \ker \Phi$. Suppose that $\Phi$ has finite Cauchy-Schwarz complexity. Let $H:\mathbb{R}^{d-m+k} \longrightarrow [0,1]$ be a smooth function in $\mathcal{C}(P)$. Let $w_1,\dots,w_d$ be any functions with $w_j(n)\leqslant w(n)$ for all $n$ and all $j\leqslant d$, and for which $w_j(n) \rightarrow \infty$ as $n \rightarrow \infty$. For each $j\leqslant d$ let the function $f_j:\mathbb{Z}\longrightarrow \mathbb{R}_{\geqslant 0}$ be equal to either $\nu_{N,w}^\gamma$ or $\Lambda_{\mathbb{Z}/W_j\mathbb{Z}}$. Let $\mathbf{r} \in \mathbb{R}^d$ be any vector satisfying $\Vert \mathbf{r} \Vert_\infty \leqslant C N$.
Then, if $\gamma$ is small enough in terms of $\Phi$, the expression \begin{equation}
\label{upper bounding count} \frac{1}{N^{d-m+k}} \int\limits_{\mathbf{x} \in \mathbb{R}^{d-m+k}} \Big( \prod\limits_{j=1}^{d} (f_j \ast \chi)(\varphi_j(\mathbf{x}) - r_j)\Big) H(\mathbf{x}/N) \, d\mathbf{x}
\end{equation}
\noindent is independent of the choices of the functions $f_j$, up to an error of size $o(1)$ as $N\rightarrow \infty$. This $o(1)$ term may depend on $C$, $P$, $\Phi$, $\eta$, $\gamma$, and on the functions $w_1,\dots,w_d$.
\end{Corollary}
\begin{proof}
Expanding out the definition of $f_j \ast \chi$, one observes that the expression (\ref{upper bounding count}) is equal to
\begin{equation}
\label{equation bounding alternative count}\frac{1}{N^{d-m+k}}\sum\limits_{n_1,\dots,n_d \in \mathbb{Z}} \Big(\prod\limits_{j=1}^d f_j(n_j) \Big)\int\limits_{\mathbf{x} \in \mathbb{R}^{d-m+k}} \Big(\prod\limits_{j=1}^d \chi(\varphi_j(\mathbf{x}) - n_j - r_j) \Big) H(\mathbf{x}/N)\, d\mathbf{x}.
\end{equation} By applying Lemma \ref{Lemma different forms of inequalities} to the inner integral, we get a surjective linear map $L:\mathbb{R}^d \longrightarrow \mathbb{R}^m$ with algebraic coefficients, and smooth functions $F:\mathbb{R}^d \longrightarrow \mathbb{R}_{\geqslant 0}$ and $G:\mathbb{R}^m \longrightarrow \mathbb{R}_{\geqslant 0}$ supported on $[-O_{P,\Phi}(1),O_{P,\Phi}(1)]^d$ and $[-O_{\Phi}(\eta),O_{\Phi}(\eta)]^m$ respectively and with $F,G \in \mathcal{C}(P,\Phi,\eta)$, such that (\ref{equation bounding alternative count}) is equal to
\begin{equation}
\label{main term and error term}\frac{1}{N^{d-m}} \sum\limits_{n_1,\dots,n_d \in \mathbb{Z}} \Big(\prod\limits_{j=1}^d f_j(n_j) \Big)F(\mathbf{n}/N)G(L\mathbf{n} + L\mathbf{r})
\end{equation} plus an error of size
\begin{equation}
\label{plus an error of size}
O_{P,\Phi}\Big(\frac{1}{N^{d-m+1}} \sum\limits_{\substack{n_1,\dots,n_d \ll_{C,\Phi} N\\ \Vert L\mathbf{n} + L\mathbf{r}\Vert_\infty = O_{C,\Phi}(\eta)}} \prod\limits_{j=1}^d f_j(n_j)\Big).
\end{equation} Furthermore, $L \notin V_{\operatorname{degen}}^*(m,d)$.
Now apply Theorem \ref{Theorem pseudorandomness} to the main term (\ref{main term and error term}). As written this theorem applies to functions $F$ and $G$ that take values in $[0,1]$, but by the obvious rescaling we may nonetheless apply the theorem to the present functions $F$ and $G$. This shows immediately that (\ref{main term and error term}) is independent of the particular choices of $f_1,\dots,f_d$, up to an error of size $o(1)$. The $o(1)$ term has the appropriate dependencies.
For the error term (\ref{plus an error of size}), we apply the upper bound in Corollary \ref{Corollary upper bound}. This shows that (\ref{plus an error of size}) is $o(N^{-1})$, so may be absorbed into the $o(1)$ term above. Corollary \ref{Corollary switching functions} is proved.
\end{proof}
An upper bound in this setting will also be convenient.
\begin{Corollary}
\label{Corollary more upper bounds}
Under the same hypotheses as Corollary \ref{Corollary switching functions}, \begin{equation}
\frac{1}{N^{d-m+k}} \int\limits_{\mathbf{x} \in \mathbb{R}^{d-m+k}} \Big( \prod\limits_{j=1}^{d} (f_j \ast \chi)(\varphi_j(\mathbf{x}) + r_j)\Big) H(\mathbf{x}/N) \, d\mathbf{x} \ll 1,
\end{equation}
\noindent where the implied constant may depend on $C$, $P$, $\Phi$, $\eta$, $\gamma$, and on the functions $w_1,\dots,w_d$.
\end{Corollary}
\begin{proof}
Proceed as in the previous proof to get to expression (\ref{main term and error term}). Then apply the upper bound in Corollary \ref{Corollary upper bound}.
\end{proof}
\section{Variation in parameters}
\label{section structure of Q}
This section will be devoted to proving Lemma \ref{Lemma approximation of Q} below. This technical lemma shows that the number of solutions to certain inequalities, weighted by the local von Mangoldt function, is a quantity that behaves well when the underlying parameters are perturbed. The slightly esoteric notation, in which we introduce a dimension $d$ only to consider $\mathbf{x} \in \mathbb{R}^{d-1}$, is designed to correspond to the moment in Section \ref{section Cauchy Schwarz argument} in which this lemma will be applied.
\begin{Lemma}
\label{Lemma approximation of Q}
Let $d,l,N,s$ be natural numbers, with $d\geqslant 2$, and let $C,\eta$ be positive parameters. Let $(\varphi_1,\dots,\varphi_l) = \Phi:\mathbb{R}^{d-1} \longrightarrow \mathbb{R}^l$ and $(\psi_1,\dots,\psi_l) = \Psi: \mathbb{R}^{s+2} \longrightarrow \mathbb{R}^l$ be linear maps with algebraic coefficients. Let $P$ be a set of parameters, and let $b\in\mathcal{C}(C,P,\eta,\Phi,\Psi)$ be an arbitrary smooth function. Let $w^*:\mathbb{N} \longrightarrow \mathbb{R}$ be a function such that $w^*(n) \rightarrow \infty$ as $n\rightarrow \infty$ and $w^*(n) \leqslant w(n)$ for all $n$. Let $\mathbf{a} \in \mathbb{R}^l$ be a vector satisfying $\Vert \mathbf{a}\Vert_\infty \leqslant CN$. For $\mathbf{y} \in \mathbb{R}^{s+1}$, define
\begin{equation}
\label{equation definition of Qy}
Q_{\mathbf{a},N}(\mathbf{y}) : = \frac{1}{N^{d-1}} \int\limits_{\mathbf{x} \in \mathbb{R}^{d-1}} \Big (\prod\limits_{j\leqslant l} (\Lambda_{\mathbb{Z}/W^*\mathbb{Z}} \ast \chi) ( \varphi_j(\mathbf{x}) + \Psi(\mathbf{y}) + a_j ) \Big) b((\mathbf{x},\mathbf{y})/N) \, d\mathbf{x},
\end{equation} where $a_j$ is the $j^{th}$ coordinate of $\mathbf{a}$. Then, if $\eta$ is sufficiently small in terms of $\Phi$ and $\Psi$, there is a function $f_{1}:\mathbb{Z}^{l} \longrightarrow \mathbb{C}$, satisfying $\Vert f_1\Vert_\infty \ll (\log \log W^*)^{O(1)}$, such that \[Q_{\mathbf{a},N}(\mathbf{y}) = b_{\mathbf{a},N}(\mathbf{y}/N) \sum\limits_{\Vert \mathbf{k}\Vert_{\infty}\leqslant (\log\log W^*)^{O(1)}} f_1(\mathbf{k})e\Big(\frac{\mathbf{k} \cdot (\Psi(\mathbf{y}) + \mathbf{a})}{W^*}\Big) +o_{C,P,\eta,\Phi,\Psi}(1).\] Here $b_{\mathbf{a},N} \in \mathcal{C}(C,P,\eta,\Phi,\Psi)$, though it may also depend on $\mathbf{a}$ and $N$.
\end{Lemma}
None of the methods required to prove this lemma will be particularly deep, but the technical manoeuvres will be a little intricate. In particular, we will need to apply the approximation in Lemma \ref{Lemma different forms of inequalities} multiple times within the same argument. \\
The proof of Lemma \ref{Lemma approximation of Q} will require the preliminary result below, namely Lemma \ref{Lemma functions with small support}. To state this lemma, we define a metric on $\mathbb{R}^d/K\mathbb{Z}^d$ by the formula \[ \Vert \mathbf{x}\Vert _{\mathbb{R}^d/K\mathbb{Z}^d} := \min\limits_{\mathbf{n} \in K\mathbb{Z}^d } \Vert \mathbf{x} - \mathbf{n}\Vert_\infty.\] Lipschitz constants of functions $\mathfrak{F}:\mathbb{R}^d/K\mathbb{Z}^d \longrightarrow \mathbb{R}$ will be considered with respect to this metric.
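For instance, when $d = 1$ and $K = 3$ one has
\[\Vert 5 \Vert_{\mathbb{R}/3\mathbb{Z}} = \min\limits_{n \in 3\mathbb{Z}} \vert 5 - n\vert = 1,\]
the minimum being attained at $n = 6$.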
\begin{Lemma}
\label{Lemma functions with small support}
Let $d,m,K$ be natural numbers, and let $\eta,\sigma$ be positive parameters. Let $S:\mathbb{R}^d \longrightarrow \mathbb{R}^m$ be a surjective linear map with integer coefficients, and let $G:\mathbb{R}^m \longrightarrow [0,1]$ be a Lipschitz function supported on $[-\eta,\eta]^m$, with Lipschitz constant at most $\sigma^{-1}$. Let $\mathfrak{F}:S\mathbb{Z}^d \longrightarrow \mathbb{R}$ be any function for which \[\mathfrak{F}(S\mathbf{x}) = \mathfrak{F}(S\mathbf{x} + S\mathbf{n})\] for all $\mathbf{x} \in \mathbb{Z}^d$ and all $\mathbf{n} \in K\mathbb{Z}^d$.
For each $\mathbf{a} \in \mathbb{R}^d$, define $\widetilde{\mathbf{a}} \in \mathbb{Z}^d$ to be some vector with integer coordinates for which \[\Vert S(\widetilde{\mathbf{a}} - \mathbf{a})\Vert_\infty = \min \limits_{\mathbf{n} \in \mathbb{Z}^d} \Vert S(\mathbf{n} - \mathbf{a})\Vert_\infty.\] Then, provided $\eta$ is small enough in terms of $S$, the function
\begin{equation}
\label{function that will be lipschitz}
\mathbf{a} \mapsto \mathfrak{F}(S\widetilde{\mathbf{a}})G(S(\widetilde{\mathbf{a}} - \mathbf{a}))
\end{equation}
\begin{itemize}
\item depends only on the value of $\mathbf{a}$ modulo $K\mathbb{Z}^d$;
\item is Lipschitz when viewed as a function on $\mathbb{R}^d/K\mathbb{Z}^d$, with Lipschitz constant at most \[O_S(\Vert \mathfrak{F}\Vert_\infty(\eta^{-1} + \sigma^{-1})).\]
\end{itemize}
\end{Lemma}
\begin{Remark}
\emph{The expression $\min \limits_{\mathbf{n} \in \mathbb{Z}^d} \Vert S(\mathbf{n} - \mathbf{a})\Vert_\infty$ is well-defined, since $S\mathbb{Z}^d$ is a lattice.}
\end{Remark}
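For instance, if $S:\mathbb{R}^2 \longrightarrow \mathbb{R}$ is given by $S(x,y) = 2x + 3y$, then $S\mathbb{Z}^2 = 2\mathbb{Z} + 3\mathbb{Z} = \mathbb{Z}$, and for any $\mathbf{a} \in \mathbb{R}^2$ the minimum in question is
\[\min\limits_{\mathbf{n}\in\mathbb{Z}^2} \vert S\mathbf{n} - S\mathbf{a}\vert = \operatorname{dist}(S\mathbf{a},\mathbb{Z}) \leqslant 1/2.\]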
\begin{proof}
To prove the first part of the lemma, let $\mathbf{a} \in \mathbb{R}^d$ and first suppose that there is a unique vector $\mathbf{x} \in S \mathbb{Z}^d$ for which \[ \Vert \mathbf{x} - S\mathbf{a}\Vert_\infty = \min \limits_{\mathbf{n} \in \mathbb{Z}^d} \Vert S\mathbf{n} - S\mathbf{a}\Vert_\infty.\] In this case, by the uniqueness of $\mathbf{x}$, we have $\mathbf{x} = S \widetilde{\mathbf{a}}$. By translation, we know that \[S(\widetilde{\mathbf{a} + \mathbf{n}}) - S \mathbf{n} = \mathbf{x}\] for all $\mathbf{n} \in \mathbb{Z}^d$, and hence \[ S(\widetilde{\mathbf{a} + \mathbf{n}}) - S \mathbf{n} = S\widetilde{\mathbf{a}} \] for all $\mathbf{n} \in \mathbb{Z}^d$. Hence \[ G(S(\widetilde{\mathbf{a}} - \mathbf{a})) = G(S(\widetilde{\mathbf{a} + \mathbf{n}} - (\mathbf{a} + \mathbf{n}))),\] and so the function \[ \mathbf{a} \mapsto G(S(\widetilde{\mathbf{a}} - \mathbf{a}))\] depends only on the value of $\mathbf{a}$ modulo $\mathbb{Z}^d$. Furthermore, if $\mathbf{n} \in K\mathbb{Z}^d$, \[ \mathfrak{F}(S(\widetilde{\mathbf{a} + \mathbf{n}})) = \mathfrak{F}(S\widetilde{\mathbf{a}} + S\mathbf{n}) = \mathfrak{F}(S\widetilde{\mathbf{a}}),\] by the invariance properties of $\mathfrak{F}$. Hence the function (\ref{function that will be lipschitz}) only depends on the value of $\mathbf{a}$ modulo $K\mathbb{Z}^d$.
Now suppose that there were two distinct vectors $\mathbf{x_1}$, $\mathbf{x_2} \in S \mathbb{Z}^d$ for which \[ \Vert \mathbf{x_i} - S\mathbf{a}\Vert_\infty = \min \limits_{\mathbf{n} \in \mathbb{Z}^d} \Vert S\mathbf{n} - S\mathbf{a}\Vert_\infty\] for $i=1,2$. Then in fact $G(S(\widetilde{\mathbf{a}} - \mathbf{a})) = 0$. Indeed, if this were not the case then we would have $\Vert \mathbf{x_1} - \mathbf{x_2}\Vert_\infty \leqslant O(\eta)$, which is impossible if $\eta$ is small enough, since $\mathbf{x_1}$ and $\mathbf{x_2}$ are two distinct elements of $\mathbb{Z}^m$ (recall that $S$ has integer coefficients, so $S\mathbb{Z}^d \subseteq \mathbb{Z}^m$). By translation, we may also conclude that $G(S(\widetilde{\mathbf{a} + \mathbf{n}} - (\mathbf{a} + \mathbf{n}))) = 0$ for all $\mathbf{n} \in \mathbb{Z}^d$. So again, the function (\ref{function that will be lipschitz}) depends only on the value of $\mathbf{a}$ modulo $K\mathbb{Z}^d$.\\
Regarding the second part of the lemma, the idea of the proof is similar to the above. Indeed, the only aspect of the function (\ref{function that will be lipschitz}) that could lead to a large Lipschitz constant is of course the term $S\widetilde{\mathbf{a}}$, which could, one fears, jump sharply for small changes in $\mathbf{a}$. However, when such jumps occur, the function $G(S(\widetilde{\mathbf{a}} - \mathbf{a}))$ is always equal to zero.
Let us proceed with the full proof. Indeed, let $\mathbf{a_0}$,$\mathbf{a_1} \in \mathbb{R}^d$ and suppose first that \[\Vert \mathbf{a_0} - \mathbf{a_1}\Vert_{\mathbb{R}^d/K\mathbb{Z}^d} \leqslant \eta.\] By choosing suitable coset representatives, without loss of generality we may assume that \[\Vert \mathbf{a_0} - \mathbf{a_1}\Vert_{\mathbb{R}^d/K\mathbb{Z}^d} = \Vert \mathbf{a_0} - \mathbf{a_1}\Vert_{\infty}.\]
Then either $S\widetilde{\mathbf{a_0}} =S\widetilde{\mathbf{a_1}}$ or $S\widetilde{\mathbf{a_0}} \neq S\widetilde{\mathbf{a_1}}$. If $S\widetilde{\mathbf{a_0}} =S\widetilde{\mathbf{a_1}}$ then
\begin{align}
\vert G(S(\widetilde{\mathbf{a_0}} - \mathbf{a_0})) - G(S(\widetilde{\mathbf{a_1}} - \mathbf{a_1}))\vert &\leqslant \sigma^{-1} \Vert S \mathbf{a_0} - S\mathbf{a_1}\Vert_\infty\nonumber \\
& \ll_S \sigma^{-1} \Vert \mathbf{a_0} - \mathbf{a_1}\Vert_\infty \nonumber \\
& \ll_S \sigma^{-1} \Vert \mathbf{a_0} - \mathbf{a_1}\Vert_{\mathbb{R}^d /K\mathbb{Z}^d}.
\end{align}
\noindent Therefore
\begin{align*}
\vert \mathfrak{F}(S\widetilde{\mathbf{a_0}}) G(S(\widetilde{\mathbf{a_0}} - \mathbf{a_0})) - \mathfrak{F}(S\widetilde{\mathbf{a_1}})G(S(\widetilde{\mathbf{a_1}} - \mathbf{a_1}))\vert &= \vert \mathfrak{F}(S\widetilde{\mathbf{a_0}})\vert \vert G(S(\widetilde{\mathbf{a_0}} - \mathbf{a_0})) - G(S(\widetilde{\mathbf{a_1}} - \mathbf{a_1}))\vert \nonumber \\
&\ll_S \Vert \mathfrak{F}\Vert_\infty \sigma^{-1}\Vert \mathbf{a_0} - \mathbf{a_1}\Vert_{\mathbb{R}^d /K\mathbb{Z}^d}.
\end{align*} That resolves the lemma in this case.
If on the other hand $S\widetilde{\mathbf{a_0}} \neq S \widetilde{\mathbf{a_1}}$, we may conclude that both
\begin{equation}
\label{claim equations}
\Vert S \widetilde{\mathbf{a_0}} - S\mathbf{a_0}\Vert_\infty \geqslant 10\eta
\end{equation} and
\begin{equation}
\label{claim equations 2}
\Vert S \widetilde{\mathbf{a_1}} - S\mathbf{a_1}\Vert_\infty \geqslant 10\eta.
\end{equation} Indeed, if $\Vert S \widetilde{\mathbf{a_0}} - S\mathbf{a_0}\Vert_\infty \leqslant 10\eta$, say, then \[\Vert S\widetilde{\mathbf{a_0}}-S\mathbf{a_1}\Vert_\infty \leqslant 10\eta + \Vert S\mathbf{a_0} - S\mathbf{a_1} \Vert_\infty \ll_S \eta.\] If $\eta$ is small enough, this implies that $S\widetilde{\mathbf{a_0}}$ must be the unique element of $S\mathbb{Z}^d$ for which \[ \Vert S\widetilde{\mathbf{a_0}} - S\mathbf{a_1}\Vert_\infty = \min \limits_{\mathbf{n} \in \mathbb{Z}^d} \Vert S\mathbf{n} - S\mathbf{a_1}\Vert_\infty,\]and hence that $S\widetilde{\mathbf{a_0}} = S\widetilde{\mathbf{a_1}}$, contradicting the assumption.
If $\eta$ is small enough, expressions (\ref{claim equations}) and (\ref{claim equations 2}) imply that
\begin{align}
\label{lipschitz align two}
G(S\widetilde{\mathbf{a_0}} - S\mathbf{a_0}) =G(S\widetilde{\mathbf{a_1}} - S\mathbf{a_1}) = 0,
\end{align}
\noindent and so \[\vert \mathfrak{F}(S\widetilde{\mathbf{a_0}}) G(S(\widetilde{\mathbf{a_0}} - \mathbf{a_0})) - \mathfrak{F}(S\widetilde{\mathbf{a_1}})G(S(\widetilde{\mathbf{a_1}} - \mathbf{a_1}))\vert = 0.\] That resolves the lemma in this case.\\
The only remaining case to consider is when \[ \Vert \mathbf{a_0} - \mathbf{a_1}\Vert_{\mathbb{R}^d/ K\mathbb{Z}^d} \geqslant \eta.\] In this case we bound the Lipschitz constant very crudely, as $O( \eta^{-1}\Vert \mathfrak{F}\Vert_\infty\Vert G\Vert_\infty)$, which is $O(\Vert \mathfrak{F}\Vert_\infty \eta^{-1})$, since $\Vert G\Vert_\infty \leqslant 1$. This settles the lemma.
\end{proof}
We are now ready to prove Lemma \ref{Lemma approximation of Q}.
\begin{proof}[Proof of Lemma \ref{Lemma approximation of Q}]
For this proof we make the following conventions. Any implied constant may depend on $C$, $\Phi$ and $\Psi$, and we will use the notation $b$, $b_1$, $b_2$, etc.\ to denote functions in $\mathcal{C}(C,P,\eta,\Phi, \Psi)$, which may change from line to line.
The first part of the proof will involve establishing an asymptotic formula for $Q_{\mathbf{a},N}(\mathbf{y})$, namely the expression $Q_{\mathbf{a},N}(\mathbf{y}) = \mathfrak{S}_{\mathbf{a},N}(\mathbf{y}) I_{\mathbf{a},N}(\mathbf{y}) + o_{P,\eta}(1)$ in (\ref{Q asymp formula}) below. Indeed, expanding out the definition of $\Lambda_{\mathbb{Z}/W^*\mathbb{Z}}\ast \chi$ (see Definition \ref{Definition convolution}) we have
\begin{align}
\label{expanding out Q z h} Q_{\mathbf{a},N}(\mathbf{y}) = \frac{1}{N^{d-1}}\sum\limits_{\mathbf{n} \in \mathbb{Z}^l} \Big(\prod\limits_{j\leqslant l} \Lambda_{\mathbb{Z}/W^*\mathbb{Z}}(n_j)\Big)\int\limits_{\mathbf{x} \in \mathbb{R}^{d-1}} \boldsymbol{\chi}(\mathbf{n} - \Phi(\mathbf{x}) - \Psi(\mathbf{y}) - \mathbf{a})
b((\mathbf{x},\mathbf{y})/N)\, d\mathbf{x},
\end{align}
\noindent where $\boldsymbol{\chi}: \mathbb{R}^{l} \longrightarrow [0,1]$ is defined by $\boldsymbol{\chi}(\mathbf{z}): = \prod_{j\leqslant l} \chi(z_j)$. Let $k := \dim \operatorname{Im} \Phi$, and note that $k\leqslant d-1$.
The inner integral of (\ref{expanding out Q z h}) may be analysed using Lemma \ref{Lemma different forms of inequalities}. The following table indicates which objects in (\ref{expanding out Q z h}) play which role in Lemma \ref{Lemma different forms of inequalities}.
\begin{center}
\begin{tabular}{c|c}
Notation of Lemma \ref{Lemma different forms of inequalities} & Objects in (\ref{expanding out Q z h})\\
\hline
$\mathbf{z}$ & $\mathbf{n} - \Psi(\mathbf{y}) - \mathbf{a}$ \\
$\mathbf{v}$ & $\mathbf{y}$\\
$\Phi$ & $\Phi$\\
$H$ & $b$\\
$I$ & $\boldsymbol{\chi}$
\end{tabular}
\end{center} So, applying Lemma \ref{Lemma different forms of inequalities}, one sees that (\ref{expanding out Q z h}) is equal to
\begin{align}
\label{equation writing Q as an inequality}
Q_{\mathbf{a},N}(\mathbf{y}) = \frac{1}{N^{d-1}}\sum\limits_{\mathbf{n} \in \mathbb{Z}^l} \Big(\prod\limits_{j\leqslant l} \Lambda_{\mathbb{Z}/W^*\mathbb{Z}}(n_j)\Big)b_1((\mathbf{y},\mathbf{n})/N) b_2(L(\mathbf{n} - \Psi(\mathbf{y}) - \mathbf{a})) + E.
\end{align} Here, $L:\mathbb{R}^{l} \longrightarrow \mathbb{R}^{l-k}$ is a surjective linear map with algebraic coefficients, that depends only on $\Phi$, $\operatorname{Rad}(b_2) = O(\eta)$, and the error term $E$ may be bounded above by
\begin{equation}
\label{easy error term}
\ll_P \Big\vert \frac{1}{N^{d}} \sum^* \Big(\prod\limits_{j\leqslant l} \Lambda_{\mathbb{Z}/W^*\mathbb{Z}}(n_j)\Big) \Big\vert,
\end{equation}
where the summation $\sum^*$ denotes summation over the set \[\{\mathbf{n} \in \mathbb{Z}^l: \Vert \mathbf{n} \Vert_\infty \ll_P N, \, \Vert L\mathbf{n} - L(\Psi(\mathbf{y})) - L\mathbf{a}\Vert_\infty \ll_P \eta\}.\] The error term $E$ is easy to bound. Indeed, by Lemma \ref{Lemma trivial upper bound}, expression (\ref{easy error term}) may be bounded by $O_P(N^{k-d} (\log \log W^*)^d)$. Since $k\leqslant d-1$, this is an $o_P(1)$ error. \\
It remains to analyse the main term in (\ref{equation writing Q as an inequality}), which we will do with the help of Lemma \ref{Lemma generating a purely irrational map}. The reader is invited to consult Section \ref{section linear algebra and dimension reduction} for the statement of this result, and for the definitions of rational map, rational dimension, etcetera.
Now, let $u$ be the rational dimension of $L$, and let $\Theta: \mathbb{R}^{l - k} \longrightarrow \mathbb{R}^u$ be a rational map for $L$ with algebraic coefficients. Then, there exists an injective linear map $(\xi_1,\dots,\xi_l) = \Xi:\mathbb{R}^{l - u} \longrightarrow \mathbb{R}^l$ with integer coefficients, satisfying $\Xi \mathbb{Z}^{l-u} = \mathbb{Z}^l \cap \ker \Theta L$, and a vector $\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y})\in \mathbb{Z}^{l}$, such that the main term of (\ref{equation writing Q as an inequality}) is equal to
\begin{align}
\label{in structure of Q rational manip}
\frac{1}{N^{d-1}} \sum\limits_{\mathbf{n} \in \mathbb{Z}^{l - u}} &\Big(\prod\limits_{j=1}^{l} \Lambda_{\mathbb{Z}/W^*\mathbb{Z}}(\xi_j(\mathbf{n}) + \widetilde{r}(\mathbf{a},\mathbf{y})_j)\Big)\nonumber \\
&b_1((\mathbf{y},\Xi(\mathbf{n}) + \widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}))/N) b_2(L(\Xi(\mathbf{n}) + \widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) -\Psi(\mathbf{y}) - \mathbf{a})),
\end{align}
\noindent where $\widetilde{r}(\mathbf{a},\mathbf{y})_j$ is the $j^{th}$ coordinate of $\mathbf{\widetilde{r}}(\mathbf{a},\mathbf{y})$. Note that we have appealed to part (11) of Lemma \ref{Lemma generating a purely irrational map} for the particular form of the argument of $b_2$. Note also that, since $\eta$ is sufficiently small, we have been able to apply part (10) of that lemma to establish that $\widetilde{R}$ consists of a single element $\mathbf{\widetilde{r}}(\mathbf{a},\mathbf{y})$.
Moreover, from part (10) of the lemma again, we have that $\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y})$ is an element of $\mathbb{Z}^{l}$ for which \[\Vert \Theta L(\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) - \Psi(\mathbf{y}) - \mathbf{a})\Vert_{\infty} = \min \limits_{\mathbf{m} \in \mathbb{Z}^l} \Vert \Theta L(\mathbf{m} - \Psi(\mathbf{y}) - \mathbf{a})\Vert_\infty.\] From part (9) of Lemma \ref{Lemma generating a purely irrational map}, letting $\{ \mathbf{e_1},\dots,\mathbf{e_{l-u}} \}$ be the standard basis vectors of $\mathbb{R}^{l-u}$, we have a set \[ \mathcal{B} = \{\mathbf{x_i}: i\leqslant u\} \cup \{ \Xi(\mathbf{e_j}): j\leqslant l - u \}\] which is a lattice basis for $\mathbb{Z}^{l}$ and for which $\{ \Theta L\mathbf{x_i}:i\leqslant u\}$ is a lattice basis for $\Theta L \mathbb{Z}^l$. Letting $U = \operatorname{span}( \{\mathbf{x_i}: i\leqslant u\})$, we have that $\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y})\in U$.
By applying the first part of Lemma \ref{Lemma problem for local von Mangoldt} to expression (\ref{in structure of Q rational manip}), one immediately derives
\begin{equation}
\label{Q asymp formula}
Q_{\mathbf{a},N}(\mathbf{y}) = \mathfrak{S}_{\mathbf{a},N}(\mathbf{y}) I_{\mathbf{a},N}(\mathbf{y}) + o_{P,\eta}(1),
\end{equation} where
\begin{equation}
\label{singular series which depends on z and h}\mathfrak{S}_{\mathbf{a},N}(\mathbf{y}):=\frac{1}{(W^*)^{l-u}} \sum\limits_{\mathbf{m} \in [W^*]^{l -u}} \prod\limits_{j=1}^{l} \Lambda_{\mathbb{Z}/W^*\mathbb{Z}}(\xi_j (\mathbf{m}) +\widetilde{r}(\mathbf{a},\mathbf{y})_j)
\end{equation} and $I_{\mathbf{a},N}(\mathbf{y})$ is equal to
\begin{equation}
\label{singular integral which depends on z and h}
\frac{1}{N^{d-1}}\int\limits_{\mathbf{x} \in \mathbb{R}^{l -u}}b_1((\mathbf{y},\Xi(\mathbf{x}) + \widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}))/N)b_2(L \Xi(\mathbf{x}) + L\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) - L(\Psi(\mathbf{y})) -L\mathbf{a})\, d\mathbf{x}.
\end{equation}
\noindent Note that \begin{equation}
\label{kernel inequalities}\dim \ker L\Xi \leqslant \dim \ker L = k \leqslant d-1.
\end{equation}
\noindent The first inequality holds since $\Xi$ is injective, whence $\dim \ker L\Xi = \dim(\ker L \cap \operatorname{Im} \Xi)$.
The remainder of the proof of Lemma \ref{Lemma approximation of Q} will consist of analysing expressions (\ref{singular series which depends on z and h}) and (\ref{singular integral which depends on z and h}) for $\mathfrak{S}_{\mathbf{a},N}(\mathbf{y})$ and $I_{\mathbf{a},N}(\mathbf{y})$. \\
We begin with $I_{\mathbf{a},N}(\mathbf{y})$, aiming for expression (\ref{approximation of singular integral}). Letting $V = \operatorname{Im} \Xi$, we have that $\mathbb{R}^l = U \oplus V$. For any vector $\mathbf{v} \in \mathbb{R}^{l}$ let $\mathbf{v}|_U$ and $\mathbf{v}|_V$ be the components in $U$ and $V$ respectively. Then we have that
\begin{equation}
\label{equation U approx}
\Vert \widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) - \Psi(\mathbf{y})|_U - \mathbf{a}|_U\Vert_\infty = O(1),
\end{equation} since \[\Vert\Theta L(\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) - \Psi(\mathbf{y}) - \mathbf{a})\Vert_\infty = O(1).\] Indeed, since $V = \operatorname{Im} \Xi \subseteq \ker \Theta L$ and $\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) \in U$, we have $\Theta L(\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) - \Psi(\mathbf{y}) - \mathbf{a}) = \Theta L(\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) - \Psi(\mathbf{y})|_U - \mathbf{a}|_U)$, and $\Theta L|_U$ is invertible. By the bound on the Lipschitz constant of $b_1$, we may replace $b_1((\mathbf{y},\Xi(\mathbf{x}) + \widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}))/N)$ with $b_1((\mathbf{y},\Xi(\mathbf{x}) + \Psi(\mathbf{y})|_U +\mathbf{a}|_U)/N)$ in (\ref{singular integral which depends on z and h}), up to an error of $O_{P,\eta}(N^{-1})$. Also, note that \[ \frac{1}{N^{d-1}}\int\limits_{\substack{\mathbf{x} \in \mathbb{R}^{l -u}\\ \Vert\mathbf{x}\Vert_\infty \ll N}}b_2(L \Xi(\mathbf{x}) + L\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) - L(\Psi(\mathbf{y})) -L\mathbf{a})\, d\mathbf{x} = O_{P,\eta}(N^{\dim \ker L\Xi -d+1}),\] by Lemma \ref{Lemma general upper bound}. This is $O_{P,\eta}(1)$, since $\dim\ker L\Xi\leqslant d-1$ by (\ref{kernel inequalities}). Therefore we may replace (\ref{singular integral which depends on z and h}) by the expression \begin{equation}
\label{altered version of I}
\frac{1}{N^{d-1}}\int\limits_{\mathbf{x} \in \mathbb{R}^{l-u}}b_1((\mathbf{y},\Xi(\mathbf{x}) + \Psi(\mathbf{y})|_U + \mathbf{a}|_U)/N)b_2(L \Xi(\mathbf{x}) + L\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) - L(\Psi(\mathbf{y})) -L\mathbf{a})\, d\mathbf{x},
\end{equation}
\noindent plus an error of size $o_{P,\eta}(1)$. \\
The expression (\ref{altered version of I}) is in a form that is amenable to Lemma \ref{Lemma different forms of inequalities}. The following table indicates which objects from our present discussion play which role in the notation of Lemma \ref{Lemma different forms of inequalities}.
\begin{center}
\begin{tabular}{ c|c }
Notation of Lemma \ref{Lemma different forms of inequalities} & Objects related to (\ref{altered version of I}) \\
\hline
$\mathbf{z}$ & $L\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) - L(\Psi(\mathbf{y})) - L\mathbf{a}$ \\
$\mathbf{v}$ & $\mathbf{y}$ \\
$\Phi$ & $-L\Xi$\\
$L$ & $\Theta$\\
$ (\mathbf{v},\mathbf{x}) \mapsto H(\mathbf{v}, \mathbf{x})$ & $(\mathbf{y},\mathbf{x}) \mapsto b_1(\mathbf{y}, \Xi(\mathbf{x}) + \frac{\Psi(\mathbf{y})|_U}{N} + \frac{\mathbf{a}|_U}{N})$\\
$I$ & $b_2$
\end{tabular}
\end{center}
\noindent This is a valid application of Lemma \ref{Lemma different forms of inequalities}, since $\ker \Theta = \operatorname{Im} L\Xi$ and the final two functions in the right-hand column are compactly supported smooth functions of their arguments (as $\Xi$ is injective, $\Xi(\mathbf{x}) \in V$, and $V$ is an algebraic complement to $U$). Recalling that $\Theta$ has algebraic coefficients, by the third part of Lemma \ref{Lemma different forms of inequalities} we may therefore replace (\ref{altered version of I}) by an expression of the form
\begin{equation}
\label{getting closer}
b_1((\mathbf{y}, L(\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) - \Psi(\mathbf{y}) - \mathbf{a}))/N)b_2(\Theta L(\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) -\Psi(\mathbf{y}) - \mathbf{a})) + o_{P,\eta}(1)
\end{equation}
\noindent where $\operatorname{Rad}(b_2) = O(\eta)$.
The argument of the function $b_1$ above doesn't depend smoothly on $\mathbf{y}$, but this may be easily rectified. Indeed, by (\ref{equation U approx}) and the fact that $b_1$ is Lipschitz and $b_2$ is bounded, (\ref{getting closer}) is equal to \[ b_1((\mathbf{y}, -L(\Psi(\mathbf{y})|_V + \mathbf{a}|_V))/N)b_2(\Theta L(\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) -\Psi(\mathbf{y}) - \mathbf{a})) + o_{P,\eta}(1),\] i.e. is equal to \begin{equation}
\label{approximation of singular integral}
b_{\mathbf{a},N,1}(\mathbf{y}/N) b_2(\Theta L(\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) -\Psi(\mathbf{y}) - \mathbf{a})) + o_{P,\eta}(1),
\end{equation} where $\operatorname{Rad}(b_2) = O(\eta)$. \\
In summary then, since $\mathfrak{S}_{\mathbf{a},N}(\mathbf{y}) \ll (\log\log W^*)^{O(1)}$, we have shown that
\begin{equation}
\label{important staging post}
Q_{\mathbf{a},N}(\mathbf{y}) = \mathfrak{S}_{\mathbf{a},N}(\mathbf{y}) b_{\mathbf{a},N,1}(\mathbf{y}/N) b_2(\Theta L(\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) -\Psi(\mathbf{y}) - \mathbf{a})) + o_{P,\eta}(1).
\end{equation}
The function
\begin{equation}
\label{function too that will be lipschitz}
(\Psi(\mathbf{y}) + \mathbf{a}) \mapsto \mathfrak{S}_{\mathbf{a},N}(\mathbf{y})b_2(\Theta L(\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) -\Psi(\mathbf{y}) - \mathbf{a}))
\end{equation} is of the form considered in Lemma \ref{Lemma functions with small support} in expression (\ref{function that will be lipschitz}). Indeed, one first notes that (\ref{function too that will be lipschitz}) is a well-defined mapping, since $\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y})$ is determined only by $\Psi(\mathbf{y}) + \mathbf{a}$ and $\mathfrak{S}_{\mathbf{a},N}(\mathbf{y})$ depends on $\mathbf{a}$ and $\mathbf{y}$ only through the value of $\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y})$ (see (\ref{singular series which depends on z and h})). Then, one takes the map $S$ from Lemma \ref{Lemma functions with small support} to be the map $\Theta L:\mathbb{R}^l \longrightarrow \mathbb{R}^u$ here, ones takes $K$ from that lemma to be $W^*$ here, and one takes the map $G$ from that lemma to be $b_2$ here, and one takes the map $\mathfrak{F}: \Theta L \mathbb{Z}^l \longrightarrow \mathbb{R}$ from that lemma to be \[\mathfrak{F}(\mathbf{x}) = \frac{1}{(W^*)^{l-u}} \sum\limits_{\mathbf{m} \in [W^*] ^{l-u}} \prod\limits_{j=1}^l \Lambda_{\mathbb{Z}/W^* \mathbb{Z}} (\xi_j(\mathbf{m}) + (\Theta L|_U)^{-1}(\mathbf{x})_j)\] here. The definition of $\mathfrak{F}$ is valid since $\Theta L |_U: U \longrightarrow \mathbb{R}^u$ is indeed a bijection, and by part (9) of Lemma \ref{Lemma generating a purely irrational map} we have $(\Theta L|_U)^{-1}(\Theta L (\mathbb{Z}^l)) = \mathbb{Z}^l \cap U$. Consulting expression (\ref{singular series which depends on z and h}) for $\mathfrak{S}_{\mathbf{a},N}(\mathbf{y})$, one sees that \[ \mathfrak{F}(\Theta L \widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y})) = \mathfrak{S}_{\mathbf{a},N}(\mathbf{y})\] and so (\ref{function too that will be lipschitz}) is indeed of the form (\ref{function that will be lipschitz}) as we have claimed. 
The only hypothesis of Lemma \ref{Lemma functions with small support} that we haven't already verified is the invariance of $\mathfrak{F}$ under translation by elements of $\Theta L(W^*\mathbb{Z}^l)$, but this is immediate from the definition of $\mathfrak{F}$, since $(\Theta L |_U)^{-1}: \mathbb{R}^u \longrightarrow U$ is linear and $\Lambda_{\mathbb{Z}/W^* \mathbb{Z}}$ is $W^*$-periodic. Therefore, by applying Lemma \ref{Lemma functions with small support}, we conclude that the function (\ref{function too that will be lipschitz}) is Lipschitz on $\mathbb{R}^l/W^*\mathbb{Z}^l$, with Lipschitz constant $O_{P,\eta}((\log\log W^*)^{O(1)}).$\\
The proof of Lemma \ref{Lemma approximation of Q} is nearly complete, since Lipschitz functions enjoy good approximation by short exponential sums. Indeed, by Lemma A.9 of \cite{GT08a}, for all $X>2$ there exists a function $f_1:\mathbb{Z}^{l} \longrightarrow \mathbb{C}$ such that $\Vert f_1\Vert_\infty \ll (\log \log W^*)^{O(1)}$ and \[\mathfrak{S}_{\mathbf{a},N}(\mathbf{y})b_2(\Theta L(\widetilde{\mathbf{r}}(\mathbf{a},\mathbf{y}) -\Psi(\mathbf{y}) - \mathbf{a}))\] equals \[ \sum\limits_{\Vert \mathbf{k}\Vert_{\infty}\leqslant X} f_1(\mathbf{k}) e\Big(\frac{\mathbf{k} \cdot (\Psi(\mathbf{y}) + \mathbf{a})}{W^*}\Big) + O_{P,\eta}( (\log\log W^*)^{O(1)} (\log X)/X).\] Then, picking $X$ to be a suitably large power of $\log\log W^*$, Lemma \ref{Lemma approximation of Q} follows.
\end{proof}
\part{The main argument}
\label{part gen von neu}
Having completed all the preparatory material, the main thrust of the proof can begin in earnest.
\section{Controlling by Gowers norms}
\label{section Controlling by Gowers norms}
In this section we state a type of result that has become known as a `generalised von Neumann theorem', which uses Gowers norms to bound the number of solutions to a diophantine inequality. For readers familiar with \cite{GT10}, the procedure is routine. We will then show that this result implies the main theorem (Theorem \ref{Main theorem}).
\begin{Theorem}[Generalised von Neumann Theorem]
\label{Theorem generalised von neumann}
Let $N,m,d$ be natural numbers, satisfying $d\geqslant m+2$, and let $C,\gamma,\varepsilon,\sigma$ be positive parameters. Let $L:\mathbb{R}^d \longrightarrow \mathbb{R}^m$ be a surjective linear map with algebraic coefficients, and assume that $L\notin V_{\operatorname{degen}}^*(m,d)$ and that $\gamma$ is small enough (depending on $L$). Let $\mathbf{v}\in\mathbb{R}^{m}$ satisfy $\Vert \mathbf{v}\Vert_\infty \leqslant CN$. Let $F:\mathbb{R}^d\longrightarrow [0,1]$ and $G:\mathbb{R}^{m}\longrightarrow [0,1]$ be functions with Lipschitz constants at most $\sigma^{-1}$, and suppose that $F$ is supported on $[-1,1]^d$ and $G$ is supported on $[-\varepsilon,\varepsilon]^m$. Let $f_1,\dots,f_d:[N]\longrightarrow \mathbb{R}$ be arbitrary functions, satisfying $\vert f_j(n)\vert\leqslant \nu_{N,w}^\gamma(n)$ for all $j\leqslant d$ and for all $n\leqslant N$.
Then there exists an $s = O(1)$ such that, if \[\min_{j\leqslant d} \Vert f_j \Vert_{U^{s+1}[N]} =o(1)\] as $N\rightarrow \infty$, then \[\vert T^{L,\mathbf{v}}_{F,G,N}(f_1,\dots,f_d)\vert = o(1)\] as $N \rightarrow \infty$. The second $o(1)$ term may also depend on $C$, $L$, $\gamma$, $\varepsilon$, $\sigma$, and the rate of decay of the first $o(1)$ term.
\end{Theorem}
\begin{proof}[Proof of Theorem \ref{Main theorem} assuming Theorem \ref{Theorem generalised von neumann}]
Assume the hypotheses of Theorem \ref{Main theorem}. By telescoping we have that \[T_{F,G,N}^{L,\mathbf{v}}(\Lambda^\prime,\dots,\Lambda^\prime) - T_{F,G,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/ W\mathbb{Z}}^+,\dots,\Lambda_{\mathbb{Z}/ W\mathbb{Z}}^+)\] is equal to
\begin{align}
\label{telescoping expression}
&T_{F,G,N}^{L,\mathbf{v}}(\Lambda^\prime - \Lambda_{\mathbb{Z}/W\mathbb{Z}}^+,\Lambda_{\mathbb{Z}/W\mathbb{Z}}^+,\dots,\Lambda_{\mathbb{Z}/W\mathbb{Z}}^+) +\nonumber \\ & T_{F,G,N}^{L,\mathbf{v}}(\Lambda^\prime, \Lambda^\prime - \Lambda_{\mathbb{Z}/W\mathbb{Z}}^+,\Lambda_{\mathbb{Z}/W\mathbb{Z}}^+,\dots,\Lambda_{\mathbb{Z}/W\mathbb{Z}}^+) + \dots + T_{F,G,N}^{L,\mathbf{v}}(\Lambda^\prime, \dots, \Lambda^\prime,\Lambda^\prime - \Lambda_{\mathbb{Z}/W\mathbb{Z}}^+).
\end{align}
\noindent Since $F$ is supported on $[-1,1]^d$, we may restrict the functions $\Lambda^\prime$ and $\Lambda^+_{\mathbb{Z}/W\mathbb{Z}}$ to $[N]$ without altering the size of expression (\ref{telescoping expression}).
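The identity (\ref{telescoping expression}) is nothing more than multilinearity of $T_{F,G,N}^{L,\mathbf{v}}$, applied one argument at a time; schematically, in the case of two arguments,
\[ T(f_1,f_2) - T(g_1,g_2) = T(f_1 - g_1,\, g_2) + T(f_1,\, f_2 - g_2),\]
where we have abbreviated $T = T_{F,G,N}^{L,\mathbf{v}}$.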
By the construction of the sieve weight $\nu_{N,w}^\gamma$ we have \[\vert \Lambda^\prime(n) + \Lambda_{\mathbb{Z}/W\mathbb{Z}}^+(n)\vert \ll_{\gamma} \nu_{N,w}^\gamma(n)\] for all $n \leqslant N$. Therefore, after rescaling, we may apply Theorem \ref{Theorem generalised von neumann} in this setting.
Recall that, by Lemma \ref{Lemma Corollary of tool from Green-Tao}, \[\Vert\Lambda^\prime - \Lambda_{\mathbb{Z}/W\mathbb{Z}}^+ \Vert_{U^{s+1}[N]} = \Vert\Lambda^\prime - \Lambda_{\mathbb{Z}/W\mathbb{Z}}\Vert_{U^{s+1}[N]} = o(1)\] as $N\rightarrow \infty$, for all $s\leqslant d-2$. So, applying Theorem \ref{Theorem generalised von neumann} to each term of (\ref{telescoping expression}) separately, we derive \[\vert T_{F,G,N}^{L,\mathbf{v}}(\Lambda^\prime,\dots,\Lambda^\prime) - T_{F,G,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/ W\mathbb{Z}}^+,\dots,\Lambda_{\mathbb{Z}/ W\mathbb{Z}}^+)\vert =o_{C,L,\gamma,\varepsilon,\sigma}(1)\] as $N\rightarrow \infty$. By fixing a suitably small value of $\gamma$, we conclude Theorem \ref{Main theorem}.
\end{proof}
\section{Transferring from $\mathbb{Z}$ to $\mathbb{R}$}
\label{section transfer}
In this section we begin the proof of Theorem \ref{Theorem generalised von neumann}. Following the programme set out in \cite{Wa17}, our first step will be to transfer the problem from the setting of functions on $\mathbb{Z}$ to functions on $\mathbb{R}$.
\begin{Definition}
\label{Definition really continuous solution count}
Let $N,m,d$ be natural numbers. Let $L:\mathbb{R}^{d}\longrightarrow \mathbb{R}^m$ be a linear map, let $\mathbf{v} \in \mathbb{R}^m$, and let $F:\mathbb{R}^{d}\longrightarrow [0,1]$ and $G:\mathbb{R}^m\longrightarrow [0,1]$ be compactly supported measurable functions. Then, for all bounded measurable functions $g_1,\dots,g_{d}:\mathbb{R}\longrightarrow \mathbb{R}$ we define
\begin{equation}
\label{definiton of the continuous solution count form}
\widetilde{T}^{L,\mathbf{v}}_{F,G,N}(g_1,\dots,g_{d}) := \frac{1}{N^{d-m}}\int\limits_{\mathbf{x}\in \mathbb{R}^d}\Big(\prod\limits_{j=1}^{d}g_j(x_j)\Big)F(\mathbf{x}/N)G(L\mathbf{x}+ \mathbf{v})\, d\mathbf{x}.
\end{equation}
\end{Definition}
We now state the key lemma. For the definition of $f \ast \chi$, where $\chi$ is the function we determined in Section \ref{section conventions}, the reader may consult Definition \ref{Definition convolution}.
\begin{Lemma}[Transfer]
\label{Lemma transfer equation}
Let $N,m,d$ be natural numbers, with $d\geqslant m+2$, and let $C$, $\varepsilon$, $\gamma$, $\eta$, $\sigma$ be positive constants. Let $L:\mathbb{R}^d \longrightarrow \mathbb{R}^m$ be a surjective linear map, and let $\mathbf{v} \in \mathbb{R}^m$ be a vector satisfying $\Vert \mathbf{v}\Vert_\infty \leqslant CN$. Let $F:\mathbb{R}^d\longrightarrow [0,1]$ and $G:\mathbb{R}^{m}\longrightarrow [0,1]$ be compactly supported Lipschitz functions, with Lipschitz constants at most $\sigma^{-1}$. Suppose that $F$ is supported on $[-1,1]^d$, and $G$ is supported on $[-\varepsilon,\varepsilon]^m$. Then there exists some positive real number $C_{\chi}$, satisfying $C_\chi \asymp 1$, such that the following holds. Let $f_1,\dots,f_d:[N]\longrightarrow \mathbb{R}$ be arbitrary functions that satisfy $\vert f_j(n)\vert \leqslant \nu_{N,w}^\gamma(n)$ for all $j\leqslant d$ and for all $n\leqslant N$. Assume that $\eta \leqslant \min (1,\varepsilon)$ and that $\gamma$ is small enough depending on $L$. Then
\begin{equation}
\label{eqn transfer equation}
T_{F,G,N}^{L,\mathbf{v}}(f_1,\dots,f_d) = \frac{1}{C_{\chi}\eta^{d}}\widetilde{T}_{F,G,N}^{L,\mathbf{v}}(f_1\ast \chi,\dots,f_d\ast \chi) + O(\eta \sigma^{-1}) + o(1)
\end{equation}
\noindent as $N\rightarrow \infty$. The implied constant in the $O(\eta \sigma^{-1})$ term may depend on $C$, $L$, and $\varepsilon$, and the $o(1)$ term may depend on all these parameters together with $\gamma$ and $\sigma$.
\end{Lemma}
\begin{proof}
The proof is very similar to the proof of Lemma 5.4 in \cite{Wa17}, although we do have to insert various estimates that are only proved in this paper.
Indeed, let $\boldsymbol{\chi}:\mathbb{R}^d \longrightarrow [0,1]$ denote the function $\mathbf{x}\mapsto \prod\limits_{j=1}^d \chi(x_j)$. We choose \[ C_{\chi} := \frac{1}{\eta^d}\int\limits_{\mathbf{x} \in \mathbb{R}^d} \boldsymbol{\chi}(\mathbf{x}) \, d\mathbf{x}.\] Since $\chi$ is $\eta$-supported, $C_{\chi} \asymp 1$. Then, expanding the definition of the convolutions $f_i \ast \chi$, \[\frac{1}{C_{\chi}\eta^{d}}\widetilde{T}_{F,G,N}^{L,\mathbf{v}}(f_1\ast \chi,\dots,f_d\ast \chi)\] equals
\begin{align}
\label{equation transferring zero}
&\frac{1}{N^{d-m}}\sum\limits_{\mathbf{n}\in \mathbb{Z}^d}\Big(\prod\limits_{j=1}^d f_j(n_j)\Big) \frac{1}{C_{\chi}\eta^d} \int\limits_{\mathbf{y}\in \mathbb{R}^d} F(\mathbf{y}/N) G(L\mathbf{y} + \mathbf{v}) \boldsymbol{\chi}(\mathbf{y} - \mathbf{n}) \, d\mathbf{y}.
\end{align}
This is equal to
\begin{equation}
\label{extra equation to be referenced in transfer argument}
\frac{1}{N^{d-m}}\sum\limits_{\substack{\mathbf{n}\in \mathbb{Z}^d \\ \Vert \mathbf{n} \Vert_\infty \leqslant 2 N}} \Big(\prod\limits_{j=1}^d f_j(n_j)\Big) \frac{1}{C_{\chi}\eta^d}\int\limits_{\mathbf{y}\in \mathbb{R}^d} (F(\mathbf{n}/N) + O(\eta\sigma^{-1} N^{-1}))G(L\mathbf{y}+\mathbf{v}) \boldsymbol{\chi}(\mathbf{y} - \mathbf{n}) \, d\mathbf{y}.
\end{equation}
Indeed, the inner integrand is only non-zero when $\Vert \mathbf{y} - \mathbf{n}\Vert_\infty\leqslant\eta$, and $F$ has Lipschitz constant $O(\sigma^{-1})$.\\
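To spell out the two facts just used: for such $\mathbf{y}$ and $\mathbf{n}$,
\[ \vert F(\mathbf{y}/N) - F(\mathbf{n}/N)\vert \ll \sigma^{-1}\Vert (\mathbf{y} - \mathbf{n})/N\Vert_\infty \leqslant \eta\sigma^{-1}N^{-1},\]
and the restriction to $\Vert \mathbf{n}\Vert_\infty \leqslant 2N$ is permissible since $F(\mathbf{y}/N)$ vanishes unless $\Vert \mathbf{y}\Vert_\infty \leqslant N$, whence $\Vert \mathbf{n}\Vert_\infty \leqslant N + \eta \leqslant 2N$.\\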
Continuing, expression (\ref{extra equation to be referenced in transfer argument}) is equal to
\begin{equation}
\label{equation transferring}
\frac{1}{N^{d-m}} \sum\limits_{\substack{\mathbf{n}\in\mathbb{Z}^d \\ \Vert \mathbf{n}\Vert_\infty \leqslant 2N}}\Big(\prod\limits_{j=1}^d f_j(n_j)\Big) F(\mathbf{n}/N) H(L\mathbf{n} + \mathbf{v}) + E
\end{equation}
\noindent where \[H(\mathbf{x}) = \frac{1}{C_{\chi}\eta^d}\int\limits_{\mathbf{y}\in\mathbb{R}^d} \boldsymbol{\chi}(\mathbf{y})G(\mathbf{x} + L\mathbf{y})\, d\mathbf{y}\] and $E$ is a certain error, which may be bounded above by a constant times
\begin{equation}
\label{transfer error bounded above}
\frac{\eta}{\sigma N} \frac{1}{N^{d-m}} \sum\limits_{\substack{\mathbf{n}\in \mathbb{Z}^d \\ \Vert \mathbf{n} \Vert_\infty \leqslant 2 N}}\Big(\prod\limits_{j=1}^d \nu_{N,w}^\gamma (n_j)\Big)H(L\mathbf{n} + \mathbf{v}).
\end{equation}
Let us deal with the first term of (\ref{equation transferring}), in which we wish to replace $H$ with $G$. We therefore consider
\begin{equation*}
\Big\vert\frac{1}{N^{d-m}} \sum\limits_{\mathbf{n}\in\mathbb{Z}^d}\Big(\prod\limits_{j=1}^d f_j(n_j)\Big) F(\mathbf{n}/N) (G(L\mathbf{n} + \mathbf{v}) - H(L\mathbf{n} + \mathbf{v})) \Big\vert,
\end{equation*}
which is
\begin{equation}
\label{equation transferring main term differencing}
\leqslant \frac{1}{N^{d-m}} \sum\limits_{\mathbf{n}\in\mathbb{Z}^d}\Big(\prod\limits_{j=1}^d \nu_{N,w}^\gamma(n_j)\Big)F(\mathbf{n}/N) \vert G-H\vert(L\mathbf{n} + \mathbf{v}).
\end{equation} Observe that $\Vert G - H\Vert_\infty = O(\eta \sigma^{-1})$. Indeed,
\begin{align*}
&G(\mathbf{x}) - \frac{1}{C_{\chi}\eta^d}\int\limits_{\mathbf{y}\in\mathbb{R}^d} G(\mathbf{x}+ L\mathbf{y}) \boldsymbol{\chi}(\mathbf{y}) \, d\mathbf{y} \\
&= G(\mathbf{x}) - \frac{1}{C_{\chi}\eta^d} \int\limits_{\mathbf{y} \in\mathbb{R}^d} (G(\mathbf{x}) + O(\eta \sigma^{-1}))\boldsymbol{\chi}(\mathbf{y}) \, d\mathbf{y}\\
& = O(\eta\sigma^{-1}),\nonumber
\end{align*}
\noindent by the definition of $C_{\chi}$ and the Lipschitz property of $G$. The function $\vert G - H\vert$ is compactly supported, with $\operatorname{Rad}(\vert G - H\vert ) \ll \varepsilon + \eta \ll \varepsilon$.
Of course $\vert G - H\vert$ needn't be smooth, but we may nonetheless apply Corollary \ref{Corollary upper bound}, concluding that expression (\ref{equation transferring main term differencing}) is at most \[ O_{C,L,\varepsilon}( \eta \sigma^{-1} ) + o_{C,L,\gamma,\varepsilon, \sigma}(1).\]
Turning to the error $E$ from (\ref{equation transferring}), we've already remarked that it may be bounded above by expression (\ref{transfer error bounded above}). Applying Corollary \ref{Corollary upper bound} again, expression (\ref{transfer error bounded above}) is $o(1)$ (with the appropriate dependencies on $C$, $L$, etc.).
The lemma then follows.
\end{proof}
We will need to show that the operation of replacing $f$ by $f\ast \chi$ is compatible with Gowers norms.
Firstly, if $g:[-N,N]\longrightarrow \mathbb{R}$ is a bounded measurable function, we define the Gowers norm over the reals $\Vert g\Vert_{U^{d}(\mathbb{R},N)}$ by
\begin{equation}
\label{real gowers norm}
\Vert g \Vert_{U^{d}(\mathbb{R},N)}^{2^{d}}:= \frac{1}{(2N)^{d+1}}\int\limits_{(x,\mathbf{h}) \in\mathbb{R}^{d+1}} \prod\limits_{\boldsymbol{\omega}\in \{0,1\}^d}\mathscr{C}^{\vert \boldsymbol{\omega} \vert} g(x+\sum\limits_{i=1}^d h_i\omega_i) \, dx\,d \mathbf{h}.
\end{equation} More detail about this quantity may be found in Appendix A of \cite{Wa17}.
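For orientation, when $d = 2$ and $g$ is real-valued (so that the conjugation operator $\mathscr{C}$ acts trivially), definition (\ref{real gowers norm}) unravels to
\[ \Vert g\Vert_{U^2(\mathbb{R},N)}^4 = \frac{1}{(2N)^3}\int\limits_{\mathbb{R}^3} g(x)\, g(x+h_1)\, g(x+h_2)\, g(x+h_1+h_2)\, dx\, dh_1\, dh_2,\]
the continuous analogue of the usual $U^2[N]$ norm.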
Secondly, we note that $\Vert f\Vert_{U^{d}[N]}$ and $\Vert f \ast \chi\Vert_{U^{d}(\mathbb{R},2N)}$ may be related.
\begin{Lemma}[Relating different Gowers norms]
\label{Lemma linking different Gowers norms}
Let $s$ be a natural number, and assume that $\eta$ is a positive parameter that is small enough in terms of $s$. Let $N$ be a natural number, and let $f:[N]\longrightarrow \mathbb{R}$ be an arbitrary function. Then we have \begin{equation}
\label{linking different gowers norms}
\Vert f\ast \chi\Vert_{U^{s+1}(\mathbb{R},2N)}\ll \eta^{\frac{s+2}{2^{s+1}}} \Vert f\Vert_{U^{s+1}[N]}.
\end{equation}
\end{Lemma}
\begin{proof}
This is Lemma 5.5 of \cite{Wa17}.
\end{proof}
\section{Parametrising the kernel}
\label{section General proof of the real variable von Neumann Theorem}
In this section we will convert the expression $T_{F,G,N}^{L,\mathbf{v}}(f_1,\dots,f_d)$ into an expression that is tailored to the subsequent manipulations. We begin with a lemma that is very similar to Proposition 8.2 of \cite{Wa17}.
\begin{Lemma}[Separating out the kernel]
\label{Proposition separating out the kernel}
Let $N,m,d$ be natural numbers, with $d\geqslant m+2$, and let $C,\varepsilon,\sigma$ be positive constants. Let $L:\mathbb{R}^d \longrightarrow \mathbb{R}^m$ be a surjective linear map with algebraic coefficients, and assume further that $L \notin V_{\operatorname{degen}}^*(m,d)$. Let $\mathbf{v} \in \mathbb{R}^m$ be a vector with $\Vert \mathbf{v}\Vert_\infty \leqslant CN$. Let $F:\mathbb{R}^d\longrightarrow [0,1]$ be a Lipschitz function supported on $[-1,1]^d$, with Lipschitz constant at most $\sigma^{-1}$, and let $G:\mathbb{R}^{m}\longrightarrow [0,1]$ be any function supported on $[-\varepsilon,\varepsilon]^m$. Then there exists an injective linear map $(\psi_1,\dots,\psi_d) = \Psi:\mathbb{R}^{d-m}\longrightarrow \mathbb{R}^d$ with algebraic coefficients (depending only on $L$), and a Lipschitz function $F_1:\mathbb{R}^{d-m}\longrightarrow [0,1]$ with Lipschitz constant $O_{L}(\sigma^{-1})$ and with $\operatorname{Rad}(F_1) = O_{C,L,\varepsilon}(1)$, such that, if $g_1,\dots,g_d:\mathbb{R}\longrightarrow \mathbb{R}$ are arbitrary bounded measurable functions,
\begin{equation}
\vert\widetilde{T}_{F,G,N}^{L,\mathbf{v}}(g_1,\dots,g_d)\vert\ll_{L,\varepsilon} \Big\vert\frac{1}{N^{d-m}}\int\limits_{\mathbf{x} \in \mathbb{R}^{d-m}} F_1(\mathbf{x}/N)\Big(\prod\limits_{j=1}^d g_j(\psi_j(\mathbf{x}) + a_j)\Big) \, d\mathbf{x}\Big\vert ,
\end{equation}
\noindent where, for each $j$, $a_j$ is some real number that satisfies $a_j = O_{C,L,\varepsilon}(N)$.
Furthermore, $\Psi$ has finite Cauchy-Schwarz complexity (see Definition \ref{Definition finite complexity}).
\end{Lemma}
\begin{proof}[Proof of Lemma \ref{Proposition separating out the kernel}]
For ease of notation, let
\[ \beta: = \widetilde{T}_{F,G,N}^{L,\mathbf{v}}(g_1,\dots,g_d).\]
Noting that $\ker L$ is a vector space of dimension $d-m$, define $\{\mathbf{u^{(1)}},\dots,\mathbf{u^{(d-m)}}\} \subset \mathbb{R}^d$ to be an orthonormal basis for $\ker L$ consisting of vectors with algebraic coordinates. Then the map $(\psi_1,\dots,\psi_d) = \Psi:\mathbb{R}^{d-m} \longrightarrow \mathbb{R}^d$, defined by
\begin{equation}
\label{parametrise the kernel}
\Psi(\mathbf{x}) := \sum\limits_{i=1}^{d-m}x_i \mathbf{u}^{\mathbf{(i)}},
\end{equation}
\noindent is an injective map that parametrises $\ker L$. Furthermore $\Psi$ has finite Cauchy-Schwarz complexity, since otherwise there would exist $i \neq j\leqslant d$ and a real number $\lambda$ such that $\mathbf{e_i}^* - \lambda \mathbf{e_j}^* \in (\operatorname{Im} \Psi)^0$, i.e. $\mathbf{e_i}^* - \lambda \mathbf{e_j}^* \in (\ker L)^{0}$. This implies that $\mathbf{e_i}^* - \lambda \mathbf{e_j}^* \in L((\mathbb{R}^m)^*)$, which, by definition, implies that $L \in V_{\operatorname{degen}}^*(m,d)$, contradicting our hypotheses.
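To illustrate the construction (the following example is purely illustrative and plays no part in the proof), take $d = 3$, $m = 1$ and $L(x_1,x_2,x_3) = x_1 + x_2 + x_3$. Then
\[ \mathbf{u^{(1)}} = \tfrac{1}{\sqrt{2}}(1,-1,0), \qquad \mathbf{u^{(2)}} = \tfrac{1}{\sqrt{6}}(1,1,-2) \]
is an orthonormal basis of $\ker L$ with algebraic coordinates, giving
\[ \psi_1(\mathbf{x}) = \tfrac{x_1}{\sqrt{2}} + \tfrac{x_2}{\sqrt{6}}, \qquad \psi_2(\mathbf{x}) = -\tfrac{x_1}{\sqrt{2}} + \tfrac{x_2}{\sqrt{6}}, \qquad \psi_3(\mathbf{x}) = -\tfrac{2x_2}{\sqrt{6}}, \]
and one checks directly that no $\psi_i$ is a scalar multiple of another $\psi_j$, in line with the finite Cauchy-Schwarz complexity established above.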
Now, extend the orthonormal basis $\{\mathbf{u^{(1)}},\dots,\mathbf{u^{(d-m)}}\}$ for $\ker L$ to an orthonormal basis $\{\mathbf{u^{(1)}},\dots,\mathbf{u^{(d)}}\}$ for $\mathbb{R}^d$. By implementing a change of basis, we may rewrite
\begin{equation}
\label{separating out the kernel}
\beta=\frac{1}{N^{d-m}} \int\limits_{\mathbf{x}\in \mathbb{R}^d} F(\sum\limits_{i=1}^{d}x_i\mathbf{u^{(i)}}/N)G(L(\sum\limits_{i=1}^{d}x_i\mathbf{u^{(i)}}) + \mathbf{v})\Big(\prod\limits_{j=1}^{d} g_j(\psi_j(\mathbf{x})+\sum\limits_{i=d-m+1}^{d} x_i u^{(i)}_j)\Big) \, d\mathbf{x},
\end{equation}
where $u^{(i)}_j$ is the $j^{th}$ coordinate of $\mathbf{u}^{(\mathbf{i})}$.
We wish to remove the presence of the variables $x_{d-m+1},\dots,x_{d}$. To set this up, note that, by the choice of the vectors $\mathbf{u^{(i)}}$, $$G(L(\sum\limits_{i=1}^{d}x_i\mathbf{u^{(i)}}) + \mathbf{v}) = G(L(\sum\limits_{i=d-m+1}^{d}x_i\mathbf{u^{(i)}}) + \mathbf{v}). $$ The vector $\sum_{i=d-m+1}^{d}x_i\mathbf{u^{(i)}}$ is in $(\ker L)^{\perp}$ and so, since $L|_{(\ker L)^{\perp}}$ is a bounded invertible operator, $G(L(\sum_{i=d-m+1}^{d}x_i\mathbf{u^{(i)}}) + \mathbf{v})$ is equal to zero unless $(x_{d-m+1},\dots,x_d)^T\in D$, for some domain $D \subseteq \mathbb{R}^{m}$ of diameter $O_{L}(\varepsilon)$ and satisfying $\sup_{\mathbf{x} \in D} \Vert \mathbf{x}\Vert_\infty = O_{C,L}(N + \varepsilon)$.
We can use this observation to bound the right-hand side of (\ref{separating out the kernel}). Indeed, we have
\begin{align}
\label{equation bound beta by a sup}
\beta \ll \operatorname{vol} D \times \sup\limits_{\mathbf{x_{d-m+1}^d}\in D}\frac{1}{N^{d-m}} \Big\vert \int\limits_{\mathbf{x_{1}^{d-m}} \in \mathbb{R}^{d-m}} F(\sum\limits_{i=1}^{d}x_i\mathbf{u^{(i)}}/N)G(L(\sum\limits_{i=d-m+1}^{d}x_i\mathbf{u^{(i)}}) + \mathbf{v}) \nonumber\\
\prod\limits_{j=1}^{d} g_j(\psi_j(\mathbf{x_{1}^{d-m}})+\sum\limits_{i=d-m+1}^{d} x_iu^{(i)}_j) \, d \mathbf{x_1^{d-m}} \Big\vert.
\end{align} See Section \ref{section conventions} for an explanation of the $\mathbf{x_1^{d-m}}$ notation. So there exists some fixed vector $(x_{d-m+1},\dots,x_d)^T$ in $D$ such that \begin{align}
\label{equation bound beta second}
\beta \ll_{L,\varepsilon}\frac{1}{N^{d-m}} \Big\vert \int\limits_{\mathbf{x_{1}^{d-m}} \in \mathbb{R}^{d-m}} F(\sum\limits_{i=1}^{d}x_i\mathbf{u^{(i)}}/N)G(L(\sum\limits_{i=d-m+1}^{d}x_i\mathbf{u^{(i)}}) + \mathbf{v}) \nonumber\\
\prod\limits_{j=1}^{d} g_j(\psi_j(\mathbf{x_{1}^{d-m}})+\sum\limits_{i=d-m+1}^{d} x_iu^{(i)}_j) \, d \mathbf{x_1^{d-m}} \Big\vert.
\end{align}
Define the function $F_1:\mathbb{R}^{d-m} \longrightarrow [0,1]$ by \[ F_1(\mathbf{x_1^{d-m}}) := F(\Psi(\mathbf{x_1^{d-m}}) + (\sum\limits_{i=d-m+1}^{d} x_i\mathbf{u^{(i)}}/N))\] and, for each $j \leqslant d$, define the shift \[a_j :=\sum\limits_{i=d-m+1}^{d} x_i u^{(i)}_j.\] Then
\begin{equation}
\label{just before we put everything into normal form}
\beta \ll _{L,\varepsilon}\Big\vert\frac{1}{N^{d-m}}\int\limits_{\mathbf{x} \in \mathbb{R}^{d-m}}F_1(\mathbf{x}/N) \Big(\prod\limits_{j=1}^d g_j(\psi_j(\mathbf{x})+a_j)\Big) \, d\mathbf{x}\Big\vert,
\end{equation}
\noindent and $F_1$ and $a_j$ satisfy the conclusions of the lemma.
\end{proof}
The next lemma is essentially identical to an argument that appears in \cite{Wa17} at the end of Section 8 of that paper. Unfortunately that argument is not in an easily citable form, and so we have found it necessary to state and prove the precise version that we need here. For readers unfamiliar with the notion of normal form, we have included a brief summary in Section \ref{section normal form}.
\begin{Lemma}[Parametrising by normal form]
\label{Proposition parametrising by normal form}
Following on from above, there exists a $d^\prime = O(1)$, a linear map $(\psi_1^\prime,\dots,\psi_d^\prime) : = \Psi^\prime:\mathbb{R}^{d^\prime} \longrightarrow \mathbb{R}^d$ with algebraic coefficients that is in $s$-normal form for some $s = O(1)$, and a Lipschitz function $F_2:\mathbb{R}^{d^\prime} \longrightarrow [0,1]$ with Lipschitz constant $O_L(\sigma^{-1})$ and with $\operatorname{Rad}(F_2) = O_{C,L,\varepsilon}(1)$ such that
\begin{equation}
\label{before normal form}
\Big\vert\frac{1}{N^{d-m}}\int\limits_{\mathbf{x} \in \mathbb{R}^{d-m}} F_1(\mathbf{x}/N)\prod\limits_{j=1}^d g_j(\psi_j(\mathbf{x}) + a_j) \, d\mathbf{x}\Big\vert
\end{equation}
\noindent is bounded above by a constant times
\begin{equation}
\label{just after introducing R primed}
\Big\vert\frac{1}{N^{d^\prime}}\int\limits_{\mathbf{x} \in \mathbb{R}^{d^\prime}}F_2(\mathbf{x}/N)\prod\limits_{j=1}^d g_j(\psi_j^\prime(\mathbf{x}) + a_j) \, d\mathbf{x}\Big\vert.
\end{equation}
\end{Lemma}
\begin{proof}
We apply Lemma \ref{Lemma normal form algorithm} to $\Psi$. Therefore, there is a natural number $k = O(1)$ such that, for \emph{any} real numbers $y_1,\dots,y_{k}$, (\ref{before normal form}) is equal to
\begin{equation}
\label{Fixed dummey variables}
\Big\vert \frac{1}{N^{d-m}}\int\limits_{\mathbf{x} \in \mathbb{R}^{d-m}}F_1((\mathbf{x} + \sum\limits_{i=1}^{k} y_i \mathbf{f_i})/N)\prod\limits_{j=1}^d g_j(\psi_j^\prime(\mathbf{y},\mathbf{x}) + a_j) \, d\mathbf{x}\Big\vert,
\end{equation}
\noindent where
\begin{itemize}
\item $\mathbf{f_1},\dots,\mathbf{f_{k}} \in \mathbb{R}^{d-m}$ are some vectors that satisfy $\Vert\mathbf{f_i}\Vert_\infty = O_{\Psi}(1)$ for each $i \leqslant k$;
\item for each $j \leqslant d$, $\psi^\prime_j:\mathbb{R}^{k}\times\mathbb{R}^{d-m} \longrightarrow \mathbb{R}$ is linear, and $(\psi_1^\prime,\dots,\psi^\prime_d) := \Psi^\prime: \mathbb{R}^{k} \times \mathbb{R}^{ d-m} \longrightarrow \mathbb{R}^d$ is defined by \[ \Psi^\prime(\mathbf{y},\mathbf{x}) := \Psi(\mathbf{x} + \sum\limits_{i=1}^{k} y_i\mathbf{f_i});\]
\item $\Psi^\prime$ is in $s$-normal form, for some $s=O(1)$.
\end{itemize}
\noindent We remark that the right-hand side of expression (\ref{Fixed dummey variables}) is independent of $\mathbf{y}$, as it was obtained by applying the change of variables $\mathbf{x} \mapsto \mathbf{x} + \sum_{i=1}^{k} y_i\mathbf{f_i} $.\\
Now, with $\rho$ as fixed in Section \ref{section conventions}, let $P:\mathbb{R}^{k}\longrightarrow [0,1]$ be defined by \[P(\mathbf{y}) := \prod\limits_{i=1}^{k} \rho(y_i).\] Integrating over $\mathbf{y}$, we have that (\ref{Fixed dummey variables}) is at most a constant times
\begin{align}
&
\frac{1}{N^{d-m+k}} \int\limits_{\mathbf{y} \in \mathbb{R}^{k}} P(\mathbf{y}/N) \Big\vert\int\limits_{\mathbf{x}\in\mathbb{R}^{d-m}} F_1((\mathbf{x} + \sum\limits_{i=1}^{k} y_i \mathbf{f_i})/N)\prod\limits_{j=1}^{d} g_j(\psi_j^\prime(\mathbf{y},\mathbf{x}) + a_j) \, d\mathbf{x}\Big\vert \, d\mathbf{y} \nonumber \\
\label{equation after itnegrating over w}
&\ll \Big\vert\frac{1}{N^{d-m+k}} \int\limits_{\substack{\mathbf{x} \in \mathbb{R}^{d-m}\\ \mathbf{y} \in \mathbb{R}^{k}}} F_2((\mathbf{y},\mathbf{x})/N) \prod\limits_{j=1}^{d} g_j(\psi_j^\prime(\mathbf{y},\mathbf{x}) + a_j) \, d\mathbf{x} \, d\mathbf{y}\Big\vert,
\end{align}
where the function $F_2:\mathbb{R}^{d-m+k} \longrightarrow [0,1]$ is defined by \[F_2(\mathbf{y},\mathbf{x}):= F_1(\mathbf{x} + \sum\limits_{i=1}^{k} y_i \mathbf{f_i}) P(\mathbf{y}). \] Notice in (\ref{equation after itnegrating over w}) that we were able to move the absolute value signs outside the integral, as $P$ is positive and the integral over $\mathbf{x}$ is independent of $\mathbf{y}$ (so in particular has constant sign).
Letting $d^\prime: = d-m+k$, the lemma is proved.
\end{proof}
\section{Gowers-Cauchy-Schwarz argument}
\label{section Cauchy Schwarz argument}
This section will be devoted to proving the following theorem, which lies at the heart of the proof of our main results.
\begin{Theorem}[Gowers-Cauchy-Schwarz argument]
\label{Theorem Cauchy}
Let $N,t,d,s$ be natural numbers, and let $\gamma, \eta,\sigma,C$ be positive constants. Let $a_1,\dots,a_t$ be fixed real numbers that satisfy $\vert a_j\vert \leqslant CN$ for all $j$. Let $(\psi_1,\dots,\psi_t) = \Psi:\mathbb{R}^{d} \longrightarrow \mathbb{R}^t$ be a linear map with algebraic coefficients, which is in $s$-normal form. Let $F:\mathbb{R}^d \longrightarrow [0,1]$ be a Lipschitz function supported on $[-1,1]^d$ and with Lipschitz constant at most $\sigma^{-1}$. Let $g_1,\dots,g_t:[-2N,2N]\longrightarrow \mathbb{R}$ be any bounded measurable functions that satisfy $\vert g_j(x)\vert \leqslant (\nu_{N,w}^\gamma\ast \chi)(x+a_j)$ for all $x$. Suppose that \[\min\limits_{j\leqslant t}\Vert g_j \Vert _{U^{s+1}(\mathbb{R},2N)} =o(1)\] as $N \rightarrow \infty$. Then if $\eta$ and $\gamma$ are small enough in terms of $\Psi$ and the dimensions $t$, $d$, and $s$,
\begin{equation}
\label{equation abstract Gen von Neu}
\frac{1}{N^{d}} \int\limits_{\mathbf{x}\in \mathbb{R}^d} \Big(\prod\limits_{j=1}^t g_j (\psi_j(\mathbf{x}))\Big) F(\mathbf{x}/N)\, d\mathbf{x}= o(1)
\end{equation}
as $N\rightarrow \infty$, where the error term can depend on $C$, $\sigma$, $\eta$, $\gamma$, $\Psi$, and the rate of decay of the $o(1)$ term in the hypothesis.
\end{Theorem}
\noindent For the definition of $\Vert g_j\Vert_{U^{s+1}(\mathbb{R},2N)}$, the reader may consult expression (\ref{real gowers norm}).\\
Theorem \ref{Theorem Cauchy} is closely analogous to \cite[Proposition $7.1^{\prime\prime}$]{GT10}, and the first half of our proof will follow the proof of that proposition closely (and in particular will contain no new ideas). However, new technicalities will become apparent as the argument progresses. In particular it will become important to understand the structure of a function that we will come to denote by $Q_{\mathbf{a},N}(z,\mathbf{h})$, and this will not be easy, in that we will have to appeal to the highly technical Lemma \ref{Lemma approximation of Q}. This observation and the subsequent analysis constitute the main new elements of the proof of Theorem \ref{Theorem Cauchy}.
\begin{proof}
We begin by replacing $F$ with a cut-off function that will be easier to work with during the subsequent manipulations. Indeed, let us pick a positive parameter $\delta \in (0,1]$. By Lemma \ref{Lemma approximating Lipschitz functions by smooth boxes} there is some parameter $k = O(\delta^{-d})$ and some smooth functions $F_1,\dots,F_k:\mathbb{R}^d \longrightarrow [0,1]$ such that \[\Vert F - \sum\limits_{i=1}^k F_i \Vert_\infty = O(\delta \sigma^{-1})\] and each $F_i$ is of the form \[ F_i(\mathbf{x}) = c_{i,F} \prod\limits_{j=1}^d b_j^i(x_j/N),\] where $\vert c_{i,F}\vert \leqslant 1$ and the functions $b_j^i:\mathbb{R} \longrightarrow [0,1]$ are smooth, supported on $[-2,2]$, and satisfy $b_{j}^i \in \mathcal{C}(\delta)$.
Therefore, we may write the left-hand side of (\ref{equation abstract Gen von Neu}) as the sum of $O(\delta^{-d})$ expressions of the form
\begin{equation}
\label{stone weierstrass reduction}
c_{i,F}\frac{1}{N^{d}} \int\limits_{\mathbf{x}\in\mathbb{R}^d} \prod\limits_{l=1}^t g_l (\psi_l(\mathbf{x})) \prod\limits_{j=1}^d b_j^i(x_j/N) \, d\mathbf{x},
\end{equation}
plus an error of size at most
\begin{equation}
\label{error from stone weierstrass}
\delta \sigma^{-1}\frac{1}{N^{d}} \int\limits_{\substack{\mathbf{x}\in\mathbb{R}^d \\ \Vert \mathbf{x}\Vert_\infty \ll N} } \prod\limits_{l=1}^t (\nu_{N,w}^\gamma\ast\chi) (\psi_l(\mathbf{x}) + a_l) \, d\mathbf{x}.
\end{equation}
\noindent Since $\Psi$ is in $s$-normal form, it follows that $\Psi$ has finite Cauchy-Schwarz complexity (see Definition \ref{Definition finite complexity}). Therefore, by Corollary \ref{Corollary more upper bounds}, expression (\ref{error from stone weierstrass}) has size $O_{C,\eta,\gamma}(\delta \sigma^{-1})$. \\
We now arrange our notation for the rest of the proof, in part to mimic the notation that is used in the proof of \cite[Proposition $7.1^{\prime\prime}$]{GT10}. This will hopefully increase the readability for those who are familiar with \cite{GT10}. Indeed, without loss of generality we may assume that \[\min_{j\leqslant t}\Vert g_j\Vert_{U^{s+1}(\mathbb{R},2N)} = \Vert g_1\Vert_{U^{s+1}(\mathbb{R},2N)}.\] Since $\Psi$ is in $s$-normal form there is a set $J \subseteq [d]$ of indices with $\vert J\vert \leqslant s+1$ for which $\prod_{j\in J} \psi_i(\mathbf{e_j})$ vanishes for $i \neq 1$ and is nonzero for $i=1$. By the nested property of Gowers norms we may assume that $\vert J\vert = s+1$, and by reordering the variables we can assume without loss of generality that $\prod_{j=1}^{s+1} \psi_i(\mathbf{e_j})$ vanishes for $i\neq 1$ and is nonzero for $i=1$. It will be useful to rename the first $s+1$ variables $\mathbf{x}$ and the remainder as $\mathbf{y}$. If $d = s+1$ then the variable $\mathbf{y}$ is trivial. Note that the coefficients $\psi_1(\mathbf{e_j})$ are non-zero for all $j \in [s+1]$, so, by rescaling the variables $\mathbf{x}$, we may assume that \[ \psi_1(\mathbf{x},\mathbf{y}) = x_1+ \dots + x_{s+1} + \psi_1(\mathbf{0},\mathbf{y}).\]
For $i \leqslant t$, let $\Omega(i)$ denote\footnote{This is the notation used in \cite{GT10}. In this paper it will never risk being confused with the meaning of $\Omega$ in asymptotic notation.} the set \[ \Omega(i): = \{j\in [s+1]: \psi_i(\mathbf{e_j}) \neq 0\}.\] Note that $\Omega(1) = [s+1]$ and $\Omega(i)\neq [s+1]$ for $i=2,\dots,t$. \\
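\noindent To illustrate the sets $\Omega(i)$, consider a hypothetical toy system (not one arising in this paper): take $d = t = 3$ and $s+1 = 2$, with
\[ \psi_1(\mathbf{x}) = x_1 + x_2, \qquad \psi_2(\mathbf{x}) = x_1 + x_3, \qquad \psi_3(\mathbf{x}) = x_2 + x_3.\]
Then $\psi_1(\mathbf{e_1})\psi_1(\mathbf{e_2}) = 1$, while $\psi_2(\mathbf{e_1})\psi_2(\mathbf{e_2}) = \psi_3(\mathbf{e_1})\psi_3(\mathbf{e_2}) = 0$, and
\[ \Omega(1) = \{1,2\} = [s+1], \qquad \Omega(2) = \{1\}, \qquad \Omega(3) = \{2\}. \]
In general, $\Omega(i)$ records which of the distinguished variables $x_1,\dots,x_{s+1}$ the form $\psi_i$ actually involves. \\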
Now, for any set $B\subseteq [s+1]$ and vector $\mathbf{x}\in \mathbb{R}^{s+1}$, we define the vector $\mathbf{x_B}$ to be the restriction of $\mathbf{x}$ to the coordinates in $B$. Then, for any set $B\subseteq [s+1]$ and vector $\mathbf{y} \in \mathbb{R}^{d-s-1}$, we define \[ G_{B,\mathbf{y}}(\mathbf{x_B}): = \prod\limits_{i\in [t] : \Omega(i) = B} g_i(\psi_i(\mathbf{x_B},\mathbf{y})),\] where we have abused notation slightly in viewing $\psi_i$ only as a function of those variables $x_j$ on which it depends. \\
We also use $b:\mathbb{R}^a \longrightarrow \mathbb{R}$ (for some implied dimension parameter $a$) to denote a smooth function in $\mathcal{C}(C,\delta,\eta,\gamma,\Psi)$. The exact function may change from line to line.
With this notation, by picking $\delta$ to be a suitably slowly decaying function of $N$ we see that Theorem \ref{Theorem Cauchy} would follow from the upper bound
\begin{align}
\label{equation first omega product}
\frac{1}{N^{d-s-1}} \int\limits_{\mathbf{y}\in \mathbb{R}^{d-s-1}}\frac{1}{N^{s+1}}\int\limits_{\mathbf{x}\in \mathbb{R}^{s+1}}\prod\limits_{B\subseteq [s+1]} G_{B,\mathbf{y}}(\mathbf{x_B}) \prod\limits_{j=1}^{s+1} b_j(x_j/N) \prod\limits_{k=s+2}^{d} b_k(y_{k-s-1}/N) \, d\mathbf{x} \, d\mathbf{y} \nonumber \\=o_{C,\delta, \eta,\gamma,\Psi}(1).
\end{align} Our entire task is now to establish (\ref{equation first omega product}). From this point onwards, we will allow any error term or implied constant to depend on $C$, $\delta$, $\eta$, $\gamma$, and $\Psi$, without notating so explicitly. \\
We proceed by considering the following version of \cite[Corollary B.4]{GT10}.
\begin{Proposition}[The weighted generalised von Neumann theorem]
\label{Proposition weighted gen von Neu}
Let $A$ be a finite set, and let $(\mu_\alpha)_{\alpha\in A}$ be a finite collection of compactly supported Borel probability measures on $\mathbb{R}$. For every $B\subseteq A$, let $\mu_B$ denote the product measure $\bigotimes\limits_{\alpha\in B} \mu_\alpha$ on $\mathbb{R}^B$, and let $f_B: \mathbb{R}^B\longrightarrow \mathbb{C}$ and $\theta_B:\mathbb{R}^B\longrightarrow \mathbb{R}_{\geqslant 0}$ be integrable functions such that $\vert f_B(\mathbf{x_B})\vert\leqslant \theta _B(\mathbf{x_B})$ for all $\mathbf{x_B} \in \mathbb{R}^B$. Then
\begin{equation}
\label{equation weighted generalised von Neumann theorem}
\Big\vert \int\limits_{\mathbf{x_A} \in \mathbb{R}^A} \Big(\prod\limits_{B\subseteq A} f_B (\mathbf{x_B}) \Big)\, d\mu_A(\mathbf{x_A}) \Big\vert \leqslant \Vert f_A\Vert _{\square^A(\theta;\mu_A)} \prod\limits_{B\subsetneq A} \Vert \theta_B\Vert _{\square^B(\theta;\mu_B)} ^{2^{\vert B\vert - \vert A\vert}},
\end{equation}
where for any $B\subseteq A$ and $h_B:\mathbb{R}^B\longrightarrow \mathbb{C}$ we define $\Vert h_B\Vert _{\square^B(\theta;\mu_B)}$ to be the unique nonnegative real number satisfying
\begin{align}
\label{equation definition of weighted gowers norm}
&\Vert h_B\Vert _{\square^B(\theta;\mu_B)}^{2^{\vert B\vert }}: = \nonumber \\
&\int\limits_{\mathbf{x_B^{(0)}},\mathbf{x_B^{(1)}}\in \mathbb{R}^B} \Big (\prod\limits_{\boldsymbol{\omega_B} \in \{0,1\}^B} \mathscr{C}^{\vert \boldsymbol{\omega_B}\vert } h_B(\mathbf{x_B^{(\boldsymbol{\omega_B})}}) \Big ) \prod\limits_{C\subsetneq B} \prod\limits_{\boldsymbol{\omega_C} \in \{0,1\}^C} \theta_C(\mathbf{x_C^{(\boldsymbol{\omega_C})}}) \, d\mu_B(\mathbf{x_B^{(0)}}) \, d\mu_B(\mathbf{x_B^{(1)}}).
\end{align}
\noindent Here, as before, we use $\mathbf{x_C}$ to denote the restriction of $\mathbf{x_B}$ to $\mathbb{R}^C$.
\end{Proposition}
\begin{proof}
The proof is identical to the proof of \cite[Corollary B.4]{GT10}, replacing all summations with integrals, and is a consequence of the Gowers-Cauchy-Schwarz inequality.
\end{proof}
We now apply this proposition to the left-hand side of (\ref{equation first omega product}) above. Observe that we have the pointwise bounds $\vert G_{B,\mathbf{y}}(\mathbf{x_B})\vert \ll \theta_{B,\mathbf{y}} (\mathbf{x_B})$, where \[ \theta_{B,\mathbf{y}}(\mathbf{x_B}): = \prod\limits_{i\in [t]: \Omega(i) = B} (\nu_{N,w}^\gamma\ast \chi) (\psi_i(\mathbf{x_B},\mathbf{y}) + a_i).\] Therefore, applying Proposition \ref{Proposition weighted gen von Neu} by taking $A$ to be the set $[s+1]$, each $\mu_j$ (for $j \in [s+1]$) to be proportional to $(1/N)b_j(x_j/N) \, dx_j$, and $\theta_B$ to be the function $\theta_{B,\mathbf{y}}$, we establish that the left-hand side of (\ref{equation first omega product}) is
\begin{align}
\label{after Gowers cauchy Schwarz}
\ll \frac{1}{N^{d-s-1}} \int\limits_{\mathbf{y} \in \mathbb{R}^{d-s-1}} \Vert G_{[s+1],\mathbf{y}} \Vert _{\square^{[s+1]}(\theta_{\mathbf{y}};\mu_{[s+1]})}\prod\limits_{B\subsetneq [s+1]} \Vert \theta_{B,\mathbf{y}} \Vert _{\square^B (\theta_{\mathbf{y}}; \mu_B)}^{2^{\vert B\vert - s- 1}}\nonumber \\ \prod\limits_{k=s+2}^{d} b_k(y_{k-s-1}/N) \, d\mathbf{y}.
\end{align}
Observe that \[G_{[s+1],\mathbf{y}}(\mathbf{x_{[s+1]}}) = g_1(x_1+\dots+x_{s+1} + \psi_1 (0,\mathbf{y})),\] and so all the functions $g_j$ other than $g_1$ have been eliminated. Experienced readers will note that, so far, we have been following \cite[Appendix C]{GT10} almost verbatim.\\
After applying H\"{o}lder's inequality to (\ref{after Gowers cauchy Schwarz}), we see that to establish (\ref{equation first omega product}) it suffices to prove
\begin{equation}
\label{very tricky thing involving inequalities}
\frac{1}{N^{d-s-1}}\int\limits_{\mathbf{y} \in\mathbb{R}^{d-s-1}}\Vert G_{[s+1],\mathbf{y}}\Vert^{2^{s+1}}_{\square^{[s+1]}(\theta_{\mathbf{y}}; \mu_{[s+1]})}\prod\limits_{k=1}^{d-s-1} b_k(y_{k}/N) \, d\mathbf{y}=o(1),
\end{equation}
and, for all $B\subsetneq [s+1]$,
\begin{equation}
\label{slightly less tricky thing involving inequalities}
\frac{1}{N^{d-s-1}}\int\limits_{\mathbf{y}\in\mathbb{R}^{d-s-1}} \Vert \theta_{B,\mathbf{y}}\Vert^{2^{\vert B\vert}}_{\square^B(\theta_{\mathbf{y}}; \mu_B)}\prod\limits_{k=1}^{d-s-1} b_k(y_{k}/N) \, d\mathbf{y} \ll 1.
\end{equation}
\noindent These two expressions correspond respectively to expressions (C.10) and (C.11) of \cite{GT10}. \\
Establishing (\ref{slightly less tricky thing involving inequalities}) is straightforward. Indeed, we expand the left-hand side, yielding (up to a multiplicative constant factor) the expression
\begin{align}
\label{equation expanded out nu term}
\frac{1}{N^{d-s-1}} \int\limits_{\mathbf{y}\in \mathbb{R}^{d-s-1}} \frac{1}{N^{2\vert B\vert }} \int\limits_{\mathbf{x_{B}^{(0)}},\mathbf{x_{B}^{(1)}} \in \mathbb{R}^B} \prod\limits_{C\subseteq B} \prod\limits_{\boldsymbol{\omega_C} \in \{0,1\}^C} \prod\limits_{i\in [t]: \Omega(i) = C} (\nu_{N,w}^\gamma \ast \chi)(\psi_i(\mathbf{x_C^{(\boldsymbol{\omega_C})}}, \mathbf{y})+ a_i) \nonumber \\
\prod\limits_{j\in B} b_j(x_j^{(0)}/N)b_j(x_j^{(1)}/N)\prod\limits_{k=1}^{d-s-1} b_{k+s+1}(y_{k}/N) \, d\mathbf{x_{B}^{(0)}} \,d\mathbf{x_{B}^{(1)}} \, d\mathbf{y}.
\end{align}
As noted in \cite[p. 1824]{GT10}, the system of forms given by \[ (\mathbf{y},\mathbf{x_B^{(0)}},\mathbf{x_B^{(1)}}) \mapsto \psi_i(\mathbf{x_C^{(\boldsymbol{\omega_C})}},\mathbf{y}),\] for each $C\subseteq B$, $\boldsymbol{\omega}_\mathbf{C} \in \{0,1\}^C$ and $i \in [t]$ such that $\Omega(i) = C$, has finite Cauchy-Schwarz complexity (since $\Psi$ does). We may therefore apply the upper bound in Corollary \ref{Corollary more upper bounds} to expression (\ref{equation expanded out nu term}), and this immediately yields (\ref{slightly less tricky thing involving inequalities}). \\
It remains to prove (\ref{very tricky thing involving inequalities}), which will be a much more substantial undertaking. We introduce some space-saving notation, namely for any subset $B\subseteq [s+1]$ we define the indexing set \[I_B: = \{ (C,\boldsymbol{\omega_C},i): C\subsetneq B, \, \boldsymbol{\omega_C}\in \{0,1\}^C,\, \Omega(i) = C\}.\] If a product is taken over triples $\mathfrak{t} \in I_B$, we interpret $C$, $\boldsymbol{\omega}_C$ and $i$ as coming from the triple $\mathfrak{t} = (C,\boldsymbol{\omega}_C,i)$. For notational expedience we will also identify the space $\mathbb{R}^{I_B}$ with the space $\mathbb{R}^{\vert I_B\vert}$.
With this notation, the left-hand side of (\ref{very tricky thing involving inequalities}) expands as
\begin{align}
\label{expanding tricky}
&\frac{1}{N^{d+s+1}}\int\limits_{\substack{\mathbf{y}\in\mathbb{R}^{d-s-1} \\\mathbf{x_{[s+1]}^{(0)}},\mathbf{x_{[s+1]}^{(1)}} \in\mathbb{R}^{s+1}}} \Big(\prod\limits _{\boldsymbol{\omega}\in \{0,1\}^{s+1}} g_1\Big(\sum\limits_{j=1}^{s+1}x_j^{(\omega_j)}+\psi_1(\mathbf{0},\mathbf{y})\Big)\Big) \nonumber \\
&\Big(\prod\limits_{\mathfrak{t} \in I_{[s+1]}} (\nu_{N,w}^\gamma \ast \chi)(\psi_i(\mathbf{x_C^{(\boldsymbol{\omega_C})}}, \mathbf{y}) + a_i)\Big) b((\mathbf{x_{[s+1]}^{(0)}},\mathbf{x_{[s+1]}^{(1)}},\mathbf{y})/N) \,d\mathbf{x_{[s+1]}^{(0)}}\,d\mathbf{x_{[s+1]}^{(1)}} \, d\mathbf{y}.
\end{align}
We make the substitution $\mathbf{h}: = \mathbf{x_{[s+1]}^{(1)}} - \mathbf{x_{[s+1]}^{(0)}}$ and $z: = x_1^{(0)} + \dots + x_{s+1}^{(0)} + \psi_1(\mathbf{0},\mathbf{y})$. Given $\mathbf{h}$, $z$, $\mathbf{x_{[s]}^{(0)}}$ and $\mathbf{y}$ one can recover $\mathbf{x_{[s+1]}^{(0)}}$, $\mathbf{x_{[s+1]}^{(1)}}$ and $\mathbf{y}$, so the change of variables is invertible. Therefore we may bound (\ref{expanding tricky}) above by a constant (the Jacobian of the change of variables) times
\begin{equation}
\label{expression with P}
\Big\vert\frac{1}{N^{s+2}}\int\limits_{(z,\mathbf{h}) \in \mathbb{R}^{s+2}} P_{\mathbf{a},N}(z,\mathbf{h}) \Big(\prod\limits_{\boldsymbol{\omega}\in\{0,1\}^{s+1}}g_1(z + \sum\limits_{j=1}^{s+1}\omega_j h_j)\Big) \, dz\, d\mathbf{h}\Big\vert
\end{equation}
where $P_{\mathbf{a},N}(z,\mathbf{h})$ is equal to
\begin{equation}
\label{equation defining P} \frac{1}{N^{d-1}}\int\limits_{\mathbf{(x_{[s]}^{(0)}},\mathbf{y})\in \mathbb{R}^{d-1}} \Big(\prod\limits_{\mathfrak{t} \in I_{[s+1]}} (\nu_{N,w}^\gamma \ast \chi) (\varphi_{\mathfrak{t}}(z,\mathbf{h},\mathbf{x_{[s]}^{(0)}},\mathbf{y}) + a_i)\Big) b((z,\mathbf{h},\mathbf{x_{[s]}^{(0)}},\mathbf{y})/N) \, d\mathbf{x_{[s]}^{(0)}} \, d\mathbf{y}
\end{equation} for some linear functions $\varphi_{\mathfrak{t}} : \mathbb{R}^{d+s+1}\longrightarrow \mathbb{R}$.
To be precise, if $\mathfrak{t} = (C,\boldsymbol{\omega},i)$ then the expression $\varphi_{\mathfrak{t}}(z,\mathbf{h},\mathbf{x_{[s]}^{(0)}},\mathbf{y})$ is equal to \[ \sum\limits_{k=1}^{s} (\psi_i(\mathbf{e_k}) - \psi_i(\mathbf{e_{s+1}}))x_k^{(0)} -\psi_i(\mathbf{e_{s+1}}) \psi_1(0,\mathbf{y}) + \psi_i(0,\mathbf{y}) + c(z,\mathbf{h})_{\mathfrak{t}},\] where \[ c(z,\mathbf{h})_{\mathfrak{t}} = \psi_i(\mathbf{e_{s+1}})z + \sum\limits_{k=1}^{s+1} \psi_i(\mathbf{e_k})\omega_k h_k.\] This expression is analogous to expression (C.14) of \cite{GT10}. We let $\mathbf{c}(z,\mathbf{h}) \in \mathbb{R}^{I_{[s+1]}}$ denote the vector $(c(z,\mathbf{h})_{\mathfrak{t}})_{\mathfrak{t} \in I_{[s+1]}}$. Most fortunately, the exact structure of the linear maps $\varphi_{\mathfrak{t}}$, save for the fact that they form a system with finite Cauchy-Schwarz complexity, will be unimportant. \\
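\noindent For completeness, we record why $\varphi_{\mathfrak{t}}$ takes this shape. The change of variables above is inverted by
\[ x_{s+1}^{(0)} = z - \psi_1(\mathbf{0},\mathbf{y}) - \sum\limits_{k=1}^{s} x_k^{(0)}, \qquad \mathbf{x_{[s+1]}^{(1)}} = \mathbf{x_{[s+1]}^{(0)}} + \mathbf{h}.\]
Moreover, since $\Omega(i) = C$ we have $\psi_i(\mathbf{e_j}) = 0$ for $j \notin C$, and so, writing $x_j^{(\omega_j)} = x_j^{(0)} + \omega_j h_j$ (with $\omega_k := 0$ for $k \notin C$),
\[\psi_i(\mathbf{x_C^{(\boldsymbol{\omega_C})}},\mathbf{y}) = \sum\limits_{k=1}^{s+1} \psi_i(\mathbf{e_k})x_k^{(0)} + \sum\limits_{k=1}^{s+1}\psi_i(\mathbf{e_k})\omega_k h_k + \psi_i(\mathbf{0},\mathbf{y}).\]
Substituting the displayed expression for $x_{s+1}^{(0)}$ into the first sum yields the stated formula for $\varphi_{\mathfrak{t}}(z,\mathbf{h},\mathbf{x_{[s]}^{(0)}},\mathbf{y})$. \\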
Following the philosophy of \cite{GT08} and \cite{GT10}, our next manoeuvre will be to replace $P_{\mathbf{a},N}(z,\mathbf{h})$ with a simpler function. To that end, let $w^*: \mathbb{N} \longrightarrow \mathbb{R}_{\geqslant 0}$ be a function for which $w^*(N) \rightarrow \infty$ as $N\rightarrow \infty$ and $w^*(n) \leqslant w(n)$ for all $n$. Recall from Section \ref{section conventions} that $W^* = W^*(N) = \prod_{p\leqslant w^*(N)} p$.
\begin{Lemma}[Comparing $P_{\mathbf{a},N}(z,\mathbf{h})$ and $Q_{\mathbf{a},N}(z,\mathbf{h})$]
\label{Lemma comparing P and Q}
Define $Q_{\mathbf{a},N}(z,\mathbf{h})$ to be equal to
\begin{equation} \label{equation defining Q z,h}
\frac{1}{N^{d-1}}\int\limits_{\mathbf{(x_{[s]}^{(0)}},\mathbf{y})\in \mathbb{R}^{d-1}} \Big(\prod\limits_{\mathfrak{t} \in I_{[s+1]}} (\Lambda_{\mathbb{Z}/W^*\mathbb{Z}} \ast \chi) (\varphi_{\mathfrak{t}}(z,\mathbf{h},\mathbf{x_{[s]}^{(0)}},\mathbf{y}) + a_i)\Big) b((z,\mathbf{h},\mathbf{x_{[s]}^{(0)}},\mathbf{y})/N) \, d\mathbf{x_{[s]}^{(0)}} \, d\mathbf{y},
\end{equation} where $b((z,\mathbf{h},\mathbf{x_{[s]}^{(0)}},\mathbf{y})/N)$ here denotes the same function as is present in (\ref{equation defining P}). Then expression (\ref{expression with P}) is equal to
\begin{equation}
\frac{1}{N^{s+2}}\int\limits_{(z,\mathbf{h}) \in \mathbb{R}^{s+2}} Q_{\mathbf{a},N}(z,\mathbf{h}) \Big(\prod\limits_{\boldsymbol{\omega}\in\{0,1\}^{s+1}}g_1(z + \sum\limits_{j=1}^{s+1}\omega_j h_j)\Big) \, dz\, d\mathbf{h} + o(1),
\end{equation}
\noindent where the $o(1)$ may depend on the function $w^*$.
\end{Lemma}
\begin{proof}
Considering the upper bound $\vert g_1(x)\vert \leqslant (\nu_{N,w}^\gamma \ast \chi) (x + a_1)$, it suffices to show that \[ \frac{1}{N^{s+2}} \int\limits_{(z,\mathbf{h}) \in \mathbb{R}^{s+2}} \vert P_{\mathbf{a},N}(z,\mathbf{h}) - Q_{\mathbf{a},N}(z,\mathbf{h})\vert \Big(\prod\limits_{\boldsymbol{\omega} \in \{0,1\}^{s+1}} (\nu_{N,w}^\gamma \ast \chi)(z + \sum\limits_{j=1}^{s+1} \omega_jh_j + a_1)\Big) \, dz \, d\mathbf{h}\] is $o(1)$. By Cauchy-Schwarz, it then suffices to show that both
\begin{equation}
\label{expression without P and Q}
\frac{1}{N^{s+2}} \int\limits_{\substack{(z,\mathbf{h}) \in \mathbb{R}^{s+2} \\ \Vert (z,\mathbf{h})\Vert_\infty \ll N}}\Big(\prod\limits_{\boldsymbol{\omega} \in \{0,1\}^{s+1}} (\nu_{N,w}^\gamma \ast \chi)(z + \sum\limits_{j=1}^{s+1} \omega_jh_j + a_1) \Big)\, dz \, d\mathbf{h} \ll 1
\end{equation}
\noindent and
\begin{equation}
\label{expression with P and Q}
\frac{1}{N^{s+2}}\int\limits_{(z,\mathbf{h}) \in \mathbb{R}^{s+2}} (P_{\mathbf{a},N}(z,\mathbf{h}) - Q_{\mathbf{a},N}(z,\mathbf{h}))^2 \Big(\prod\limits_{\boldsymbol{\omega}\in\{0,1\}^{s+1}}(\nu_{N,w}^\gamma \ast \chi)(z + \sum\limits_{j=1}^{s+1}\omega_j h_j + a_1) \Big)\, dz\, d\mathbf{h} = o(1).
\end{equation}
The bound (\ref{expression without P and Q}) is immediate from Corollary \ref{Corollary more upper bounds}. To prove (\ref{expression with P and Q}), we expand out the square, leaving three expressions to consider. One of them is \begin{equation}
\label{one of them is}
\frac{1}{N^{s+2}}\int\limits_{(z,\mathbf{h}) \in \mathbb{R}^{s+2}} P_{\mathbf{a},N}(z,\mathbf{h})^2 \Big(\prod\limits_{\boldsymbol{\omega}\in\{0,1\}^{s+1}}(\nu_{N,w}^\gamma\ast \chi)(z + \sum\limits_{j=1}^{s+1}\omega_j h_j + a_1)\Big) \, dz\, d\mathbf{h}.
\end{equation} When multiplied out, (\ref{one of them is}) is equal to the large expression
\begin{align}
\frac{1}{N^{2d +s}}\int\limits_{\substack{(z,\mathbf{h}) \in \mathbb{R}^{s+2}\\\mathbf{(x_{[s]}^{(0)}},\mathbf{y})\in \mathbb{R}^{d-1}\\ \mathbf{(\widetilde{x}_{[s]}^{(0)}},\mathbf{\widetilde{y}})\in \mathbb{R}^{d-1}}}\Big(\prod\limits_{\mathfrak{t} \in I_{[s+1]}} (\nu_{N,w}^\gamma \ast \chi) (\varphi_{\mathfrak{t}}(z,\mathbf{h},\mathbf{x_{[s]}^{(0)}},\mathbf{y}) + a_i)(\nu_{N,w}^\gamma \ast \chi) (\varphi_{\mathfrak{t}}(z,\mathbf{h},\mathbf{\widetilde{x}_{[s]}^{(0)}},\mathbf{\widetilde{y}}) + a_i) \Big)\nonumber \\\Big(\prod\limits_{\boldsymbol{\omega}\in\{0,1\}^{s+1}}(\nu_{N,w}^\gamma \ast \chi)(z + \sum\limits_{j=1}^{s+1}\omega_j h_j + a_1)\Big)b((z,\mathbf{h},\mathbf{x_{[s]}^{(0)}},\mathbf{y})/N)
b((z,\mathbf{h},\mathbf{\widetilde{x}_{[s]}^{(0)}},\mathbf{\widetilde{y}})/N)\nonumber \\ d\mathbf{x_{[s]}^{(0)}}\, d\mathbf{\widetilde{x}_{[s]}^{(0)}} \, d\mathbf{y} \, d\mathbf{\widetilde{y}} \, dz \, d\mathbf{h}.
\end{align}
\noindent By applying Corollary \ref{Corollary switching functions} to the above expression, we may replace the functions $\nu_{N,w}^\gamma \ast \chi$ with $\Lambda_{\mathbb{Z}/W^*\mathbb{Z}} \ast \chi$, up to an $o(1)$ error.
It is worth noting why the application of Corollary \ref{Corollary switching functions} is valid. Indeed, the underlying set of linear forms is given by (for each $\mathfrak{t} \in I_{[s+1]}$) \[ (z,\mathbf{h},\mathbf{x_{[s]}^{(0)}},\mathbf{y}, \mathbf{\widetilde{x}_{[s]}^{(0)}},\mathbf{\widetilde{y}}) \mapsto (\varphi_{\mathfrak{t}}(z,\mathbf{h},\mathbf{x_{[s]}^{(0)}},\mathbf{y}),\varphi_{\mathfrak{t}}(z,\mathbf{h},\mathbf{\widetilde{x}_{[s]}^{(0)}},\mathbf{\widetilde{y}})).\] We need this linear map to have algebraic coefficients and to have finite Cauchy-Schwarz complexity. Algebraicity follows by the assumptions in the statement of Theorem \ref{Theorem Cauchy}. Establishing finite Cauchy-Schwarz complexity is rather involved, but fortunately this has already been done by Green and Tao, on pages 1826 and 1827 of \cite{GT10}, in the analysis of expression (C.14).
Replacing (\ref{one of them is}) with either of the other two terms that arise from expanding out the square in (\ref{expression with P and Q}), and performing the same estimation, the lemma follows.
\end{proof}
Let us take stock. As a reminder, we are trying to establish that (\ref{very tricky thing involving inequalities}) holds. Lemma \ref{Lemma comparing P and Q} above reduces matters to choosing some function $w^*$ that tends to infinity for which the bound
\begin{equation}
\label{nearly nearly there}
\frac{1}{N^{s+2}}\Big\vert\int\limits_{(z,\mathbf{h})\in \mathbb{R}^{s+2}} Q_{\mathbf{a},N}(z,\mathbf{h})\prod\limits_{\boldsymbol{\omega}\in\{0,1\}^{s+1}}g_1(z + \sum\limits_{k=1}^{s+1}\omega_k h_k + a_1) dz \, d\mathbf{h}\Big\vert =o(1)
\end{equation} holds. If $Q_{\mathbf{a},N}(z,\mathbf{h})$ were identically equal to $1$, then expression (\ref{nearly nearly there}) would be of the order of $\Vert g_1\Vert_{U^{s+1}(\mathbb{R},2N)}^{2^{s+1}}$, and hence be $o(1)$ by the hypotheses of Theorem \ref{Theorem Cauchy}. Of course $Q_{\mathbf{a},N}(z,\mathbf{h})$ is not identically equal to $1$, but we do observe that $Q_{\mathbf{a},N}(z,\mathbf{h})$ is a function of the form considered in Lemma \ref{Lemma approximation of Q}. Indeed, consulting the definition of $Q_{\mathbf{a},N}(z,\mathbf{h})$ in (\ref{equation defining Q z,h}), the following table shows which objects in Lemma \ref{Lemma approximation of Q} correspond to which objects in the definition of $Q_{\mathbf{a},N}(z,\mathbf{h})$.
\begin{center}
\begin{tabular}{c|c}
Lemma \ref{Lemma approximation of Q} & (\ref{equation defining Q z,h})\\
\hline
$\mathbf{a}$ & $\mathbf{a}$ \\
$ \mathbf{x}$ & $(\mathbf{x_{[s]}^{(0)}}, \mathbf{y})$ \\
$ \mathbf{y}$ & $ (z, \mathbf{h})$ \\
$\varphi_j(\mathbf{x})$ & $\varphi_{\mathfrak{t}}(0, \mathbf{0}, \mathbf{x_{[s]}^{(0)}}, \mathbf{y})$ \\
$\Psi(\mathbf{y})$ & $ \mathbf{c}(z,\mathbf{h})$
\end{tabular}
\end{center}
From Lemma \ref{Lemma approximation of Q}, we therefore know that there exists some function $f_1:\mathbb{Z}^{\vert I_{[s+1]}\vert } \longrightarrow \mathbb{C}$ satisfying $\Vert f_1\Vert_\infty \ll (\log \log W^*)^{O(1)}$ for which \[ Q_{\mathbf{a},N}(z,\mathbf{h}) =b_{\mathbf{a},N}((z,\mathbf{h})/N) \sum\limits_{\Vert \mathbf{k}\Vert_\infty \leqslant (\log \log W^*)^{O(1)}} f_1(\mathbf{k}) e\Big(\frac{\mathbf{k} \cdot(\mathbf{c}(z,\mathbf{h}) + \mathbf{a})}{W^*}\Big) + o(1).\]
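\noindent Substituting this expansion into the left-hand side of (\ref{nearly nearly there}) and applying the triangle inequality, the main term is at most
\[ \Vert f_1 \Vert_\infty \cdot \#\{\mathbf{k} \in \mathbb{Z}^{\vert I_{[s+1]}\vert}: \Vert \mathbf{k}\Vert_\infty \leqslant (\log\log W^*)^{O(1)}\} = (\log \log W^*)^{O(1)}\]
times the supremum over $\mathbf{k}$ of the corresponding integral, since $\vert I_{[s+1]}\vert = O(1)$; here the constant phase $e(\mathbf{k}\cdot\mathbf{a}/W^*)$ has modulus $1$ and factors out of the integral, while the factor $b_{\mathbf{a},N}$ is retained inside it.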
Therefore, one gets an upper bound for the left-hand side of (\ref{nearly nearly there}), namely
\begin{align}
\label{equation short exponential sum imput}
&\frac{(\log \log W^*)^{O(1)} }{N^{s+2}} \times \nonumber \\ &\sup\limits_{\mathbf{k} \in \mathbb{Z}^{\vert I_{[s+1]}\vert}}\Big\vert \int\limits_{(z,\mathbf{h}) \in \mathbb{R}^{s+2}} e\Big(\frac{\mathbf{k} \cdot \mathbf{c}(z,\mathbf{h})}{W^*}\Big)b_{\mathbf{a},N}((z,\mathbf{h})/N)\prod\limits_{\boldsymbol{\omega}\in \{0,1\}^{s+1}} g_1(z+ \sum\limits_{j=1}^{s+1} \omega_j h_j + a_1) \, dz\, d\mathbf{h}\Big\vert
\end{align} plus an error of size
\begin{equation}
\label{plus an error of size 1}
o(1) \times \frac{1}{N^{s+2}} \int\limits_{\substack{(z,\mathbf{h}) \in \mathbb{R}^{s+2} \\ \Vert (z,\mathbf{h})\Vert_\infty \ll N}} \prod\limits_{\boldsymbol{\omega} \in \{0,1\}^{s+1}} (\nu_{N,w}^\gamma \ast \chi) (z + \sum\limits_{k=1}^{s+1} \omega_kh_k + a_1) \, dz \, d\mathbf{h}.
\end{equation} By Corollary \ref{Corollary more upper bounds}, the size of term (\ref{plus an error of size 1}) is $o(1)$. To analyse (\ref{equation short exponential sum imput}) we apply Lemma B.4 of \cite{Wa17}. Since the function $b_{\mathbf{a},N}$ is Lipschitz this means that for all $Y>2$ there exists a complex valued function $f_{\mathbf{a},N,2}$ such that $\Vert f_{\mathbf{a},N,2}\Vert_\infty \ll 1$ and for all $(z,\mathbf{h})$ one has \[ b_{\mathbf{a},N}((z,\mathbf{h})/N) = \int\limits_{\Vert\mathbf{y}\Vert_\infty \leqslant Y} f_{\mathbf{a},N,2}(\mathbf{y}) e\Big(\frac{\mathbf{y} \cdot (z,\mathbf{h})}{N}\Big) \, d\mathbf{x} + O((\log Y) / Y).\] Choosing $Y$ to be a suitably large power of $\log\log W^*$, (\ref{equation short exponential sum imput}) may be bounded above by \begin{align}
\label{equation short exponential sum imput second}
&\frac{(\log\log W^*)^{O(1)} }{N^{s+2}} \times \nonumber \\ &\sup\limits_{\mathbf{k},\mathbf{y}}\Big\vert \int\limits_{(z,\mathbf{h}) \in \mathbb{R}^{s+2}} e\Big(\frac{\mathbf{k} \cdot \mathbf{c}(z,\mathbf{h})}{W^*}\Big)e\Big(\frac{\mathbf{y} \cdot (z,\mathbf{h})}{N}\Big)\prod\limits_{\boldsymbol{\omega}\in \{0,1\}^{s+1}} g_1(z+ \sum\limits_{j=1}^{s+1} \omega_j h_j + a_1) \, dz\, d\mathbf{h}\Big\vert
\end{align}
\noindent plus an error of size
\begin{equation}
\label{plus a second error of size}
o(1) \times \frac{1}{N^{s+2}} \int\limits_{\substack{(z,\mathbf{h}) \in \mathbb{R}^{s+2} \\ \Vert (z,\mathbf{h})\Vert_\infty \ll N}} \prod\limits_{\boldsymbol{\omega} \in \{0,1\}^{s+1}} (\nu_{N,w}^\gamma \ast \chi) (z + \sum\limits_{k=1}^{s+1} \omega_kh_k + a_1) \, dz \, d\mathbf{h}.
\end{equation}
\noindent Using Corollary \ref{Corollary more upper bounds} as above, expression (\ref{plus a second error of size}) is $o(1)$. \\
The term (\ref{equation short exponential sum imput second}) may be analysed using standard methods. Indeed, by shifting the variable $z$ (and noting that $\mathbf{c}(z,\mathbf{h})$ is a linear function of $z$ and $\mathbf{h}$) we may assume that $a_1 = 0$. Then, by spreading the exponential functions across the different instances of $g_1$, we see that it suffices to show that
\begin{equation}
\label{the above expression}
\frac{(\log \log W^*)^{O(1)}}{N^{s+2}} \Big\vert \int\limits_{(z,\mathbf{h}) \in \mathbb{R}^{s+2}} \prod\limits_{\boldsymbol{\omega}\in \{0,1\}^{s+1}} g_{\boldsymbol{\omega}}(z+ \sum\limits_{k=1}^{s+1} \omega_k h_k ) \, dz\, d\mathbf{h}\Big\vert = o(1),
\end{equation} where each function $g_{\boldsymbol{\omega}}$ is of the form \[ g_{\boldsymbol{\omega}}(x): = g_1(x)e(\lambda_{\boldsymbol{\omega}} x),\] for some $\lambda_{\boldsymbol{\omega}} \in \mathbb{R}$.
The argument is nearly complete. Considering expression (\ref{real gowers norm}), for each $\boldsymbol{\omega}$ we observe that \[\Vert g_{\boldsymbol{\omega}}\Vert _{U^{s+1}(\mathbb{R},2N)} = \Vert g_1 \Vert_{U^{s+1}(\mathbb{R},2N)}.\] So, by the Gowers-Cauchy-Schwarz inequality (recorded in this setting as Proposition A.4 of \cite{Wa17}), the left-hand side of expression (\ref{the above expression}) is \[O((\log \log W^*)^{O(1)}\Vert g_1 \Vert_{U^{s+1}(\mathbb{R},2N)}^{O(1)}).\] If $w^*$ grows slowly enough, this expression is $o(1)$.\\
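\noindent The identity $\Vert g_{\boldsymbol{\omega}}\Vert _{U^{s+1}(\mathbb{R},2N)} = \Vert g_1 \Vert_{U^{s+1}(\mathbb{R},2N)}$ used here is the standard modulation invariance of Gowers norms: in the $2^{s+1}$-fold product defining $\Vert g_{\boldsymbol{\omega}}\Vert_{U^{s+1}(\mathbb{R},2N)}^{2^{s+1}}$, the phases contribute a factor
\[ e\Big(\lambda_{\boldsymbol{\omega}} \sum\limits_{\boldsymbol{\omega}^\prime \in \{0,1\}^{s+1}} (-1)^{\vert \boldsymbol{\omega}^\prime\vert}\big(z + \sum\limits_{k=1}^{s+1}\omega^\prime_k h_k\big)\Big) = e(0) = 1,\]
since $\sum_{\boldsymbol{\omega}^\prime}(-1)^{\vert\boldsymbol{\omega}^\prime\vert} = 0$ and $\sum_{\boldsymbol{\omega}^\prime}(-1)^{\vert\boldsymbol{\omega}^\prime\vert}\omega^\prime_k = 0$ for each $k$, as $s+1 \geqslant 2$. \\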
We have therefore established the upper bound (\ref{nearly nearly there}), and so, by our long sequence of deductions, Theorem \ref{Theorem Cauchy} is finally proved.
\end{proof}
\section{Combining the lemmas}
\label{section combining the lemmas}
With all the previous lemmas in hand, we may finally prove Theorem \ref{Theorem generalised von neumann} (and hence prove Theorem \ref{Main theorem}).
\begin{proof}[Proof of Theorem \ref{Theorem generalised von neumann}]
Assume the hypotheses of the theorem, fixing a suitably small value of $\gamma$.
By applying Proposition \ref{Proposition separating out the kernel} and Proposition \ref{Proposition parametrising by normal form}, we conclude that there is some $s = O(1)$ and some $d^\prime = O(1)$ for which $\vert \widetilde{T}_{F,G,N}^{L,\mathbf{v}}(f_1\ast \chi,\dots,f_d\ast \chi)\vert$ is
\begin{equation}
\label{final equation of all time}
\ll_{L,\varepsilon} \Big\vert\frac{1}{N^{d^\prime}}\int\limits_{\mathbf{x} \in \mathbb{R}^{d^\prime}}F_2(\mathbf{x}/N)\prod\limits_{j=1}^d (f_j \ast \chi)(\psi_j^\prime(\mathbf{x}) + a_j) \, d\mathbf{x}\Big\vert,
\end{equation} where $(\psi_1^\prime,\dots,\psi_d^\prime) = \Psi^\prime:\mathbb{R}^{d^\prime}\longrightarrow \mathbb{R}^d$ is in $s$-normal form, $F_2: \mathbb{R}^{d^\prime} \longrightarrow [0,1]$ has Lipschitz constant $O_L(\sigma^{-1})$ and $\operatorname{Rad}(F_2) = O_{C,L,\varepsilon}(1)$, and each $a_j$ satisfies $\vert a_j\vert = O_{C,L,\varepsilon}(N)$. Taking this value of $s$ in the hypotheses of Theorem \ref{Theorem generalised von neumann}, without loss of generality we may assume that
\begin{equation}
\label{gowers norm decay}
\Vert f_1\Vert_{U^{s+1}[N]} = o(1)
\end{equation} as $N\rightarrow \infty$.
Then we may apply Theorem \ref{Theorem Cauchy} to expression (\ref{final equation of all time}). Indeed, by rescaling the variable $\mathbf{x}$ we may assume that $F_2$ is supported on $[-1,1]^{d^\prime}$. For each $j\in [d]$ we set \[ g_j : = f_j \ast \chi.\] Provided $\eta$ is small enough, by combining (\ref{gowers norm decay}) and Lemma \ref{Lemma linking different Gowers norms} we deduce that \[\Vert g_1 \Vert_{U^{s+1}(\mathbb{R},2N)} = o_{\eta}(1)\] as $N \rightarrow \infty$. So Theorem \ref{Theorem Cauchy} may indeed be applied, which yields \begin{equation}
\label{final}
\vert\widetilde{T}_{F,G,N}^{L,\mathbf{v}}(f_1\ast \chi,\dots,f_d\ast \chi)\vert = o_{C,L,\gamma,\varepsilon,\eta,\sigma} (1)
\end{equation} as $N\rightarrow \infty$.
But then, combining the estimate (\ref{final}) with Lemma \ref{Lemma transfer equation}, one derives the bound
\begin{equation}
\label{complicated right hand side}
\vert T_{F,G,N}^{L,\mathbf{v}}(f_1,\dots,f_d)\vert = O_{C,L,\gamma,\varepsilon,\sigma}(\eta) + o_{C,L,\gamma,\varepsilon,\eta,\sigma}(1).
\end{equation} Choosing $\eta = \eta(N)$ to be a function tending to zero suitably slowly with $N$, we conclude that \[ \vert T_{F,G,N}^{L,\mathbf{v}}(f_1,\dots,f_d)\vert = o_{C,L,\gamma,\varepsilon,\sigma}(1).\] This is the conclusion of Theorem \ref{Theorem generalised von neumann}, and we are done.
\end{proof}
\noindent Combined with the work in Section \ref{section Controlling by Gowers norms}, this settles Theorem \ref{Main theorem}, the main result of this paper. \qed
\part{Final deductions}
\label{part final deductions}
\section{Removing Lipschitz cut-offs}
\label{section removing sharp cut-offs}
In this section we assume Theorem \ref{Main theorem}, and deduce Theorem \ref{Main theorem simpler version}. This deduction will be a routine matter of removing Lipschitz cut-offs.
\begin{Lemma}
\label{Lemma upper bound for short intervals}
Assume the hypotheses of Theorem \ref{Main theorem simpler version}. Let $\delta$ be a real number in the range $0<\delta<1/2$ and let $I \subset [0,1]$ be an interval of length $\delta $. Then
\[ \frac{1}{N^{d-m}} \sum\limits_{i=1}^d\sum\limits_{\substack{\mathbf{n}\in [N]^d \\n_i \in N\cdot I }} \Big(\prod\limits_{j=1}^d \Lambda^\prime(n_j)\Big) 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{n} + \mathbf{v}) \ll_{L} \delta\varepsilon^m + o_{C,L,\delta,\varepsilon}(1).\]
\end{Lemma}
\noindent The reader will note that this lemma is a slight refinement of Corollary \ref{Corollary upper bound}.
\begin{proof}
Fix some $i\leqslant d$. Let $F:\mathbb{R}^d \longrightarrow [0,1]$ be a smooth function in $\mathcal{C}(\delta)$, supported on $\{\mathbf{x} \in [-1,2]^d: x_i \in I + [-\delta,\delta]\}$, that majorises the indicator function of the set $\{ \mathbf{x} \in [0,1]^d: x_i \in I\}$. Let $G:\mathbb{R}^m \longrightarrow [0,1]$ be some smooth function in $\mathcal{C}(\varepsilon)$, supported on $[-2\varepsilon,2\varepsilon]^m$, that majorises $1_{[-\varepsilon,\varepsilon]^m}$. Let $\gamma$ be small enough in terms of $L$. Then, by Theorem \ref{Theorem pseudorandomness} and Lemma \ref{Lemma problem for local von Mangoldt},
\begin{align}
\label{start of final home straight}
\frac{1}{N^{d-m}} \sum\limits_{\substack{\mathbf{n}\in [N]^d \\n_i \in N \cdot I }} \Big(\prod\limits_{j=1}^d \Lambda^\prime(n_j)\Big) 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{n} + \mathbf{v})& \ll_\gamma \frac{1}{N^{d-m}} \sum\limits_{\substack{\mathbf{n}\in [N]^d \\n_i \in N \cdot I}} \Big(\prod\limits_{j=1}^d \nu_{N,w}^{\gamma}(n_j)\Big) 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{n} + \mathbf{v})\nonumber \\
& \leqslant T_{F,G,N}^{L,\mathbf{v}}(\nu_{N,w}^\gamma,\dots,\nu_{N,w}^\gamma) \nonumber \\
& = T_{F,G,N}^{L,\mathbf{v}}(\Lambda_{\mathbb{Z}/W\mathbb{Z}},\dots,\Lambda_{\mathbb{Z}/W\mathbb{Z}}) + o_{C,L,\gamma,\delta,\varepsilon}(1)\nonumber \\
& = \int\limits_{\mathbf{x} \in \mathbb{R}^d}F(\mathbf{x}/N) G(L\mathbf{x} + \mathbf{v}) \, d\mathbf{x} + o_{C,L,\gamma,\delta,\varepsilon}(1) .
\end{align}
Since $L \notin V_{\operatorname{degen}}^\ast(m,d)$, for each of the $d$ coordinate subspaces $U \leqslant \mathbb{R}^d$ of dimension $d-1$, the map $L|_U: U \longrightarrow \mathbb{R}^m$ is surjective. We may therefore apply Lemma \ref{Lemma crude upper bound lemma} and conclude that expression (\ref{start of final home straight}) is $O_L(\delta \varepsilon^m) + o_{C,L,\delta,\varepsilon,\gamma}(1)$. Fixing a suitable value of $\gamma$ completes the proof.
\end{proof}
\begin{Lemma}
\label{Lemma logged form of simple theorem}
Under the hypotheses of Theorem \ref{Main theorem simpler version},
\[\frac{1}{N^{d-m}}\sum\limits_{\mathbf{n} \in [N]^d} \Big(\prod\limits_{j=1}^d \Lambda^\prime(n_j)\Big) 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{n} + \mathbf{v}) = \frac{1}{N^{d-m}}\int\limits_{\mathbf{x} \in [0,N]^d} 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{x} + \mathbf{v}) \, d\mathbf{x} + o_{C,L,\varepsilon}(1).\]
\end{Lemma}
\begin{proof}
Let $\delta$ be a positive parameter in the range $(0,1/2)$, to be chosen later. Let us first consider \[ \frac{1}{N^{d-m}}\sum\limits_{\substack{\mathbf{n} \in \mathbb{Z}^d \\ \mathbf{n} \in [\delta N,(1-\delta)N]^d}} \Big(\prod\limits_{j=1}^d \Lambda^\prime(n_j)\Big) 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{n} + \mathbf{v}) .\] Let $F^{\pm \delta}:\mathbb{R}^d \longrightarrow [0,1]$ be two Lipschitz functions satisfying \[1_{[3\delta/2 ,1-3\delta/2 ]^d}\leqslant F^{-\delta}\leqslant 1_{[\delta,1-\delta]^d}\leqslant F^{+\delta}\leqslant 1_{[\delta/2 ,1-\delta/2 ]^d},\] with Lipschitz constants depending only on $\delta$. Let $G^{\pm \delta}: \mathbb{R}^{m} \longrightarrow [0,1]$ be two Lipschitz functions satisfying \[1_{[-\varepsilon(1-\delta) ,\varepsilon(1 -\delta) ]^m}\leqslant G^{-\delta}\leqslant 1_{[-\varepsilon,\varepsilon]^m}\leqslant G^{+\delta}\leqslant 1_{[-\varepsilon(1 +\delta) ,\varepsilon(1 +\delta)]^m},\] with Lipschitz constants\footnote{The existence of such functions is immediate by interpolating linearly, or by appealing to the results of Section \ref{section smooth functions}.} depending only on $\delta$. Then we have
\begin{align}
\label{sandwiching expression}
\sum\limits_{\mathbf{n} \in \mathbb{Z}^d}\Big(\prod\limits_{j=1}^d \Lambda(n_j)\Big) F^{-\delta}(\mathbf{n}/N) G^{-\delta}(L\mathbf{n} + \mathbf{v}) \leqslant \sum\limits_{\substack{\mathbf{n} \in \mathbb{Z}^d \\ \mathbf{n} \in [\delta N,(1-\delta)N]^d}} \Big(\prod\limits_{j=1}^d \Lambda^\prime(n_j)\Big) 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{n} + \mathbf{v}) \nonumber\\
\leqslant \sum\limits_{\mathbf{n} \in \mathbb{Z}^d} \Big(\prod\limits_{j=1}^d \Lambda(n_j)\Big)F^{+\delta}(\mathbf{n}/N) G^{+\delta}(L\mathbf{n} + \mathbf{v}).
\end{align}
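As the footnote indicates, such sandwich functions can be built by linear interpolation. A minimal numerical check of the one-dimensional profile of $G^{\pm\delta}$ (the values of $\varepsilon$ and $\delta$ are hypothetical test values; the $m$-dimensional versions are products of these profiles):

```python
import numpy as np

eps, delta = 0.3, 0.1   # hypothetical parameter values

def trapezoid(inner, outer):
    # 1 on [-inner, inner], 0 outside [-outer, outer], linear in between
    return lambda t: np.clip((outer - np.abs(t)) / (outer - inner), 0.0, 1.0)

G_minus = trapezoid(eps * (1 - delta), eps)   # minorant of 1_{[-eps, eps]}
G_plus = trapezoid(eps, eps * (1 + delta))    # majorant of 1_{[-eps, eps]}

t = np.linspace(-1.0, 1.0, 100001)
ind = (np.abs(t) <= eps).astype(float)
ind_in = (np.abs(t) <= eps * (1 - delta)).astype(float)
ind_out = (np.abs(t) <= eps * (1 + delta)).astype(float)
tol = 1e-9  # guard against floating-point ties at the breakpoints
ok = (np.all(ind_in <= G_minus(t) + tol) and np.all(G_minus(t) <= ind + tol)
      and np.all(ind <= G_plus(t) + tol) and np.all(G_plus(t) <= ind_out + tol))
print(ok)
```

The Lipschitz constant of each profile is $1/(\varepsilon\delta)$, so for fixed $\varepsilon$ it depends only on $\delta$, as required.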
By Theorem \ref{Main theorem}, the lower bound in (\ref{sandwiching expression}) is equal to \[ \sum\limits_{\mathbf{n} \in \mathbb{Z}^d} \Big(\prod\limits_{j=1}^d \Lambda_{\mathbb{Z}/W\mathbb{Z}}(n_j) \Big) F^{-\delta}(\mathbf{n}/N) G^{-\delta}(L\mathbf{n} + \mathbf{v}) + o_{C,L,\delta,\varepsilon}(N^{d-m}),\] since we may replace $\Lambda_{\mathbb{Z}/W\mathbb{Z}}^+$ with $\Lambda_{\mathbb{Z}/W\mathbb{Z}}$ as $F^{-\delta}$ is supported on $[0,1]^d$. By Lemma \ref{Lemma problem for local von Mangoldt}, and the properties of the support of $F^{-\delta}$ and $G^{-\delta}$, this is at least
\begin{equation}
\label{this is at most}
\int\limits_{\mathbf{x} \in [3\delta N/2,N( 1-3\delta/2)]^d} 1_{[-\varepsilon (1 -\delta),\varepsilon(1 -\delta) ]^m}(L\mathbf{x} + \mathbf{v}) \, d\mathbf{x} + o_{C,L,\delta,\varepsilon}(N^{d-m}).
\end{equation} Note that the singular series $\mathfrak{S}$ is equal to $1$ in this instance, since $L$ is purely irrational. By Lemma \ref{Lemma moving error terms}, expression (\ref{this is at most}) is at least \[ \int\limits_{\mathbf{x} \in [0,N]^d} 1_{[-\varepsilon,\varepsilon]^m} (L\mathbf{x} + \mathbf{v}) \, d\mathbf{x} - O(\delta\varepsilon^m N^{d-m}) + o_{C,L,\delta,\varepsilon}(N^{d-m}).\]
By performing an analogous manipulation with the upper bound, we may conclude that \[\sum\limits_{\substack{\mathbf{n} \in \mathbb{Z}^d \\ \mathbf{n} \in [\delta N,(1-\delta)N]^d}} \Big(\prod\limits_{j=1}^d \Lambda^\prime(n_j)\Big) 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{n} + \mathbf{v})\] is equal to
\begin{equation}
\label{expression msot well suited to removing log weighting} \int\limits_{\mathbf{x} \in [0,N]^d} 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{x} + \mathbf{v}) \, d\mathbf{x} + O(\delta\varepsilon^m N^{d-m}) + o_{C,L,\delta,\varepsilon}(N^{d-m}).
\end{equation}
Therefore, by Lemma \ref{Lemma upper bound for short intervals}, we have that \[\frac{1}{N^{d-m}}\sum\limits_{\mathbf{n} \in [N]^d} \Big(\prod\limits_{j=1}^d \Lambda^\prime(n_j)\Big) 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{n} + \mathbf{v}) \] is equal to \[\frac{1}{N^{d-m}} \int\limits_{\mathbf{x} \in [0,N]^d} 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{x} + \mathbf{v}) \, d\mathbf{x} +O(\delta\varepsilon^m) + o_{C,L,\delta,\varepsilon}(1).\] Letting $\delta$ be a function of $N$, tending to zero suitably slowly as $N$ tends to infinity, the lemma follows.
\end{proof}
To establish Theorem \ref{Main theorem simpler version} as stated, i.e. to establish Lemma \ref{Lemma logged form of simple theorem} without the log weighting, is standard. To spell it out, Lemma \ref{Lemma logged form of simple theorem} implies that, for any $\delta$ in the range $0<\delta < 1/2$,
\begin{align}
\label{upper bound removing logs}
\sum\limits_{\mathbf{p} \in [\delta N,N]^d }1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{p} + \mathbf{v}) \leqslant \frac{1}{(\log(\delta) + \log N)^d}\sum\limits_{\substack{\mathbf{n} \in \mathbb{Z}^d \\ \mathbf{n} \in [\delta N,N]^d}} \Big(\prod\limits_{j=1}^d \Lambda^\prime(n_j)\Big) 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{n} + \mathbf{v}) \nonumber \\
\leqslant \frac{(1 + o_\delta(1))}{( \log N)^d}\Big(\int\limits_{\mathbf{x} \in [0,N]^d} 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{x} + \mathbf{v}) \, d\mathbf{x}+ o_{C,L,\varepsilon}(N^{d-m})\Big).
\end{align} But also, from expression (\ref{expression msot well suited to removing log weighting}),
\begin{align}
\label{lower bound removing logs}
\sum\limits_{\mathbf{p} \in [\delta N,N]^d} 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{p} + \mathbf{v}) \geqslant \frac{1}{(\log N)^d} \sum\limits_{\mathbf{n} \in [\delta N , (1-\delta)N]^d} \Big(\prod\limits_{j=1}^d \Lambda^\prime(n_j)\Big) 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{n} + \mathbf{v}) \nonumber \\
\geqslant \frac{1}{(\log N)^d} \Big(\int\limits_{\mathbf{x} \in [0,N]^d} 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{x} + \mathbf{v}) \, d\mathbf{x} + O(\delta \varepsilon^m N^{d-m}) + o_{C,L,\delta,\varepsilon}(N^{d-m})\Big).
\end{align}
By Lemma \ref{Lemma general upper bound}, \[\int\limits_{\mathbf{x} \in [0,N]^d} 1_{[-\varepsilon,\varepsilon]^m}(L\mathbf{x} + \mathbf{v}) \, d\mathbf{x} = O_{L,\varepsilon}(N^{d-m}).\] Hence, choosing $\delta$ to be a function of $N$ tending to zero suitably slowly, combining bounds (\ref{lower bound removing logs}) and (\ref{upper bound removing logs}) establishes Theorem \ref{Main theorem simpler version}. \qed
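The log-removal step above can be illustrated in one variable: for a prime $p \in (\delta N, N]$ one has $\log(\delta N) \leq \Lambda^\prime(p) \leq \log N$, which sandwiches the unweighted count between the two weighted sums. A quick numerical sketch (the values of $N$ and $\delta$ are hypothetical):

```python
import numpy as np

N, delta = 200_000, 0.1   # hypothetical values

# Sieve of Eratosthenes up to N
sieve = np.ones(N + 1, dtype=bool)
sieve[:2] = False
for q in range(2, int(N**0.5) + 1):
    if sieve[q]:
        sieve[q*q::q] = False
primes = np.nonzero(sieve)[0]

p = primes[primes > delta * N]          # primes in (delta*N, N]
count = len(p)
weighted = np.log(p).sum()              # sum of Lambda'(p) over the same range
lower = weighted / np.log(N)            # uses Lambda'(p) <= log N
upper = weighted / np.log(delta * N)    # uses Lambda'(p) >= log(delta*N)
print(lower <= count <= upper)
```

Since $\log(\delta N)/\log N \to 1$ for fixed $\delta$ as $N \to \infty$, the two bounds pinch together, which is exactly how the logarithmic weights are removed.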
\part{Appendices}
% arXiv:1901.04855, "Linear inequalities in primes" (Number Theory; Combinatorics)
% https://arxiv.org/abs/1610.09791
% Title: The arc length of a random lemniscate
\begin{abstract}
A polynomial lemniscate is a curve in the complex plane defined by $\{z \in \mathbb{C}:|p(z)|=t\}$. Erd\"os, Herzog, and Piranian posed the extremal problem of determining the maximum length of a lemniscate $\Lambda=\{ z \in \mathbb{C}:|p(z)|=1\}$ when $p$ is a monic polynomial of degree $n$. In this paper, we study the length and topology of a random lemniscate whose defining polynomial has independent Gaussian coefficients. In the special case of the Kac ensemble we show that the length approaches a nonzero constant as $n \rightarrow \infty$. We also show that the average number of connected components is asymptotically $n$, and we observe a positive probability (independent of $n$) of a giant component occurring.
\end{abstract}
\section{Introduction}
A (polynomial) lemniscate is a curve defined in the complex plane by the equation $|p(z)| = t$,
where $p$ is a polynomial.
If the degree of $p$ is $n,$ then from the conjugation-invariant equation $p(z) \overline{p(z)} = t^2$,
it is apparent that the lemniscate is a real algebraic curve of degree $2n$.
Calculating the length of a lemniscate is a classical problem
that played a role in the development of elliptic integrals.
Namely, the length of Bernoulli's lemniscate $|z^2-1| = 1$
is an elliptic integral of the second kind
(the same type of integral that appears in classical mechanics,
as the period of a pendulum, and in classical statics, as the length of an elastica).
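Concretely, the length of $|z^2-1|=1$ reduces to the lemniscatic elliptic integral, and a back-of-the-envelope reduction (ours, not taken from the text) gives the closed form $\Gamma(1/4)^2/\sqrt{\pi} \approx 7.4163$. A short numerical check, parametrising each oval by $z(\phi)=\sqrt{1+e^{i\phi}}$:

```python
import numpy as np
from math import gamma, pi, sqrt

# Each oval of |z^2 - 1| = 1 is traced by z(phi) = sqrt(1 + exp(i*phi)),
# phi in [-pi, pi] (principal branch); the second oval is its negative,
# so the total length is twice the chord sum along one oval.
phi = np.linspace(-pi, pi, 400_001)
z = np.sqrt(1.0 + np.exp(1j * phi))
length = 2.0 * np.abs(np.diff(z)).sum()

# Conjectured closed form via the lemniscatic integral (an assumption to verify)
closed_form = gamma(0.25)**2 / sqrt(pi)
print(length, closed_form)
```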
\subsection{The Erd\"os lemniscate problem}
Erd\"os, Herzog, and Piranian \cite{Erdos}
posed the extremal problem of determining the
maximum length of a lemniscate
\begin{equation} \Lambda=\left\{z\in \mathbb{C}\,:\, |p(z)| = 1 \right\}\end{equation}
when $p$ is a monic polynomial of degree $n$.
The problem was restated by Erd\"os several times (e.g., see \cite{Erdos2})
and is often referred to as the \emph{Erd\"os lemniscate problem}.
Taking $p$ monic guarantees that the length of the lemniscate is bounded,
for instance by $2\pi n$ \cite{Danchenko}.
The maximum was conjectured \cite{Erdos} to occur for the
so-called \emph{Erd\"os lemniscate}, i.e., when $p(z) = z^n-1$.
This conjecture remains open
but has seen positive results \cite{Borwein, EremHay, KuTk},
and Fryntov and Nazarov \cite{FryntovNazarov} have proved
that the Erd\"os lemniscate is indeed
a \emph{local} maximum and that, as $n \rightarrow \infty$,
the maximum length is $2n + o(n)$, which is asymptotic to the conjectured extremal length.
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{ErdosLemni}
\caption{\label{fig:ErdosLemni} The Erd\"os lemniscate for $n=8$.}
\end{center}
\end{figure}
\subsection{The arc length of a random lemniscate}
A random variable $X$ has the
standard complex Gaussian distribution
if it has density $\frac{1}{\pi}\exp(-|z|^2)$ on $\mathbb{C}.$
We denote this by $X\sim N_{\mathbb{C}}(0,1).$
\vspace{0.1in}
\noindent Motivated by seeking a broad point of view on the Erd\"os lemniscate problem,
we give a probabilistic treatment of the length,
by studying the \emph{average} outcome for a random polynomial lemniscate.
We select $\Lambda = \Lambda_n$ randomly by taking $p_n(z)$ to be a random polynomial
from the Kac ensemble,
\begin{equation}\label{eq:KacModel}
p_n(z)=\sum_{k=0}^{n}a_{k}z^k ,\end{equation}
where $a_{k} \sim N_{\mathbb{C}}(0,1)$ are independent,
identically distributed complex Gaussians. The resulting distribution for the random curve $\Lambda$ is invariant under rotation of the angular coordinate.
Indeed, we have:
\begin{equation}
|p_n(e^{i\theta }z)|=\left|\sum_{k=0}^{n}a_{k}e^{i k \theta}z^k\right|,
\end{equation}
and invariance follows from the observation that $b_{k}= a_{k}e^{i k \theta}$ are i.i.d. and distributed as $N_{\mathbb{C}}(0,1)$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.31\textwidth]{Kac10}
\includegraphics[width=0.32\textwidth]{Kac20}
\includegraphics[width=0.335\textwidth]{Kac30}
\includegraphics[width=0.315\textwidth]{Kac40}
\includegraphics[width=0.315\textwidth]{Kac50}
\includegraphics[width=0.31\textwidth]{Kac60}
\caption{\label{fig:samples} Random lemniscates using Kac polynomials of degree
$n=10, 20, 30, 40, 50, 60$ (from left to right).}
\end{center}
\end{figure}
We now state our main result.
\begin{thm}\label{thm:Kac}
Consider a sequence of random polynomials $p_n(z) = \sum_{k=0}^{n}a_kz^k,$ where the $a_k$ are i.i.d. $N_{\mathbb{C}}(0, 1).$ Let $\Lambda_n = \left\{z\in \mathbb{C}\,:\, |p_n(z)| = 1 \right\}.$ Then, $$\lim_{n \rightarrow \infty} \mathbb{E} |\Lambda_n| = C ,$$
where the constant $C \approx 8.3882$ is given by
the integral \eqref{eq:limit} below.
\end{thm}
\subsection{The Erd\"os lemniscate is an outlier}\label{sec:outlier}
The following Corollary of Theorem \ref{thm:Kac}
provides weak concentration of measure around lemniscates
having length of constant order.
\begin{cor}\label{cor:outlier}
Let $L_n$ be any sequence with $L_n \rightarrow \infty$ as $n \rightarrow \infty$.
The probability that $|\Lambda_n| \geq L_n$ converges to zero.
\end{cor}
\begin{proof}
Since the length $|\Lambda_n|$ is a positive random variable,
we can apply Markov's inequality:
$$ \P \{ |\Lambda_n| \geq L_n \} \leq \frac{\mathbb{E} |\Lambda_n|}{L_n} =O(L_n^{-1}), \quad \text{ as } n \rightarrow \infty ,$$
by Theorem \ref{thm:Kac}.
\end{proof}
In particular, the probability that the length has the same order as the extremal case
(i.e., exceeding some fixed portion of $n$) converges to zero.
\subsection{The connected components of a random lemniscate}
How many connected components does a random lemniscate have?
This question was addressed in \cite{Lemni}
in the setting of rational lemniscates.
The next theorem answers this question for a random polynomial lemniscate based
on the Kac model. The notation $b_0(\Lambda_n)$ denotes the zeroth Betti number,
which is the number of connected components.
\begin{thm}\label{thm:cc}
The number $b_0(\Lambda_n)$ of connected components of a random
Kac lemniscate satisfies
$$\mathbb{E} b_0(\Lambda_n) \sim n, \quad \text{as } n \rightarrow \infty.$$
\end{thm}
Along with Theorem \ref{thm:Kac},
this indicates a prevalence of small components.
In fact,
the idea of the proof of Theorem \ref{thm:cc}
is to detect, in the vicinity of each zero,
a component contained
within a disk of radius $n^{-1-\alpha}$,
where $0<\alpha<1/2$.
This suggests that relatively few
components account for most of the length.
It seems natural to further investigate the distribution of lengths of components,
and we begin to do this with the next theorem, which establishes,
with some positive probability independent of $n$,
the presence of at least one ``giant component''
(compare with the samples plotted in Figure \ref{fig:samples}).
\begin{thm}\label{thm:giant}
Fix $r\in (0,1)$ and let $\Lambda_n$ be a random Kac lemniscate. There is a positive probability (depending on $r$ but independent of $n$) that $\Lambda_n$ has a component with length at least $2\pi r$.
\end{thm}
\subsection{Remarks}
The Erd\"os lemniscate is
extremely singular and symmetric (see Figure \ref{fig:ErdosLemni}),
and its length appears to diminish rapidly under perturbations.
Naively, this suggests that
it occupies a rather far corner of the parameter space.
The probabilistic approach taken here provides a framework
for making this notion precise
as we have done in Section \ref{sec:outlier}.
The authors expect that the rate of decay in Corollary \ref{cor:outlier}
can be improved,
and it would be interesting to investigate this topic
from the point of view of large deviations.
The outcome for the average length of a random lemniscate
depends on the definition of ``random''.
The Kac ensemble is one of the most well-studied instances,
and it seems especially appropriate
in the context of the Erd\"os lemniscate problem,
since the zeros of $p_n$ resemble those of the defining polynomial of the Erd\"os lemniscate
in that they are approximately equidistributed on the unit circle \cite{SZ, ZZ}.
We consider several models in the sections below,
including the case that the variances have binomial coefficient weights
and also the case in which they have reciprocal binomial coefficient weights.
In each of these cases, the expected length is $O(n^{-1/2})$.
Another extremal problem, to find the maximal spherical length of a rational lemniscate,
was posed and solved by Eremenko and Hayman \cite{EremHay}.
Lerario and the first author
considered random rational lemniscates on the
Riemann sphere \cite{Lemni}
and computed the average spherical length.
They also studied the connected components
while giving special attention to
nesting of components,
which can occur for rational lemniscates,
but is not possible
for polynomial lemniscates
(the latter statement follows from the maximum principle).
\subsection{Outline of the paper}
Theorem \ref{thm:Kac} will follow from a more general result
proved in Section \ref{sec:general},
namely, Theorem \ref{thm:general}, which
provides the expected length while allowing
the coefficients appearing in \eqref{eq:KacModel}
to be independent centered Gaussians with different variances.
The proof of Theorem \ref{thm:general}
is based on planar integral geometry
combined with the Kac-Rice formula.
In Section \ref{sec:Kac},
we then derive Theorem \ref{thm:Kac} as a consequence
of Theorem \ref{thm:general}.
We also apply Theorem \ref{thm:general} to three other models:
lemniscates generated by Kostlan polynomials are treated in Section \ref{sec:Kostlan},
Weyl polynomials in Section \ref{sec:Weyl},
and a model that we call the ``reciprocal binomial''
model is considered in Section \ref{sec:reciprocal}.
Returning to the Kac model in Section \ref{sec:components},
we study the connected components of a random lemniscate;
we prove Theorem \ref{thm:cc} in Section \ref{sec:cc}
and Theorem \ref{thm:giant} in Section \ref{sec:giant}.
\section{A length formula for Gaussian polynomials}\label{sec:general}
In this section we assume that the
coefficients appearing in $p_n(z)$
are centered, independent, but not necessarily identically distributed complex Gaussians.
\subsection{Length and integral geometry}
Applying the integral geometry formula as in \cite{EremHay}, we have:
$$|\Lambda_n | = \frac{1}{2} \int_0^\pi \int_{-\infty}^\infty N_n(\theta,y) \, dy \, d\theta ,$$
where $N_n(\theta,y)$ is the number of intersections of $\Lambda_n$ with the line $L(\theta,y) := \{z \in \mathbb{C}: \Im (e^{-i\theta}z) = y \}$.
Taking the expectation of both sides and using the rotational invariance of $\Lambda_n$, we have:
\begin{equation}\label{eq:IGF}
\mathbb{E} |\Lambda_n | = \frac{1}{2} \int_0^\pi \int_{-\infty}^\infty \mathbb{E} N_n(\theta,y) \, dy \, d\theta = \frac{\pi}{2} \int_{-\infty}^\infty \mathbb{E} N_n(0,y) \, dy.
\end{equation}
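As a sanity check of the integral geometry formula, take the curve to be a circle of radius $r$: the line $L(\theta,y)$ meets it twice exactly when $|y|<r$, so the right-hand side is $\frac12\cdot\pi\cdot 4r = 2\pi r$, the correct length. A minimal numerical version (the radius is a hypothetical test value):

```python
import numpy as np

r = 1.7                                  # hypothetical radius
y, dy = np.linspace(-2.5, 2.5, 2_000_001, retstep=True)
N = np.where(np.abs(y) < r, 2.0, 0.0)    # intersections of L(theta, y) with the circle
inner = N.sum() * dy                     # ~ 4r, independent of theta by symmetry
length = 0.5 * np.pi * inner             # (1/2) * integral over theta in [0, pi)
print(length, 2 * np.pi * r)
```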
\subsection{The Kac-Rice formula}
We use the Kac-Rice formula to compute $\mathbb{E} N_n(0,y)$
which equals the average number of real zeros of the function
$$ p_n(z) \overline{p_n(z)} - 1,$$
restricted to the line $L(0,y)$.
We have:
$$ \frac{\partial}{\partial x} ( p_n(z) \overline{p_n(z)} - 1 ) = p_n'(z) \overline{p_n(z)} + p_n(z) \overline{p_n'(z)} .$$
Applying the Kac-Rice formula, we have:
\begin{equation}\label{eq:KR}
\mathbb{E} N_n(0,y) = \int_{-\infty}^\infty \mathbb{E} \delta(|p_n(z)|^2-1) |p_n'(z) \overline{p_n(z)} + p_n(z) \overline{p_n'(z)} | dx.
\end{equation}
For the sake of notational clarity we will henceforth suppress the dependence on $n$: for instance, $\Lambda_n$ will be denoted by $\Lambda$, and $p_n$ by $p$. We can rewrite \eqref{eq:KR} in terms of the Gaussian random complex vector
$(U,V) = (p(z),p'(z))$ whose joint probability density function is:
\begin{equation}\label{eq:density}
\rho(u,v;x+iy) = \frac{1}{\pi^2 |\Sigma|} \exp \{- (u,v)^* \Sigma^{-1} (u,v) \},
\end{equation}
where $\Sigma$ is the covariance matrix of $(U,V) = (p(z),p'(z))$,
which can be computed explicitly using the covariance kernel $K(z,w)$:
$$K(z,w) = \mathbb{E} p(z) \overline{p(w)}.$$
Namely, we have:
$$ \Sigma = \left( \begin{array}{cr}
a & b \\ \bar{b} & c
\end{array} \right) ,$$
where
\begin{equation}\label{eq:abc}
a = K(z,z), \quad b = \partial_{z} K(z,z), \quad \mbox{and} \quad c = \partial_{z} \partial_{\bar{z}} K(z,z).
\end{equation}
In terms of this joint density, the expectation inside \eqref{eq:KR}
can be expressed as:
\begin{align*}
\mathbb{E} \delta(|p(z)|^2-1) |p'(z) \overline{p(z)} + p(z) \overline{p'(z)} | &= \int_{\mathbb{C}} \int_{\mathbb{C}} \delta(|u|^2-1) |v\bar{u} + u \bar{v}| \rho(u,v;x+iy) dA(v) dA(u) \\
&= \int_{|u|=1} \int_{\mathbb{C}} \frac{1}{2|u|} |v\bar{u} + u \bar{v}| \rho(u,v;x+iy) dA(v) dA(u) \\
&= \frac{1}{2} \int_{|u|=1} \int_{\mathbb{C}} |v\bar{u} + u \bar{v}| \rho(u,v;x+iy) dA(v) dA(u),
\end{align*}
where we have used the composition property of the $\delta$-function
(\cite{Hor}, Chapter $6$) allowing integration against $\delta(|u|^2-1)$
to be replaced by an integration along the set $|u|^2=1$.
For $|u|=1$, we notice that
\begin{align*}
\rho(u,v;z) &= \frac{1}{\pi^2 |\Sigma|} \exp \{- u\bar{u}(1,\bar{u}v)^* \Sigma^{-1} (1,\bar{u}v) \} \\
&= \frac{1}{\pi^2 |\Sigma|} \exp \{- (1,\bar{u}v)^* \Sigma^{-1} (1,\bar{u}v) \} \\
&= \rho(1,\bar{u}v;z).
\end{align*}
Making the change of variables $t = \bar{u}v$, $dA(t) = dA(v)$,
the integral above becomes
\begin{equation}
\frac{1}{2} \int_{|u|=1} \int_{\mathbb{C}} |t + \bar{t}| \rho(1,t;x+iy) dA(t) du = \pi \int_{\mathbb{C}} |t + \bar{t}| \rho(1,t;x+iy) dA(t) .
\end{equation}
Thus, we have:
$$\mathbb{E} N(0,y) = 2\pi \int_{-\infty}^\infty \int_{\mathbb{C}} |\Re\{t\}| \rho(1,t;x+iy) dA(t) dx.$$
Inserting this into the integral geometry formula \eqref{eq:IGF} gives:
\begin{equation}\label{eq:reduced}
\mathbb{E} |\Lambda | = \pi^2 \int_{\mathbb{C}} \int_{\mathbb{C}} |\Re \{t\} | \rho(1,t;z) dA(t) dA(z).
\end{equation}
Observe that the density $\rho$ can be factored:
\begin{align*}
\rho(1,t;z) &=
\frac{\exp \{-\frac{1}{a} \}}{\pi a} \frac{a}{\pi |\Sigma|} \exp \left\{-\frac{a}{|\Sigma|} \left| t-\frac{b}{a} \right|^2 \right\} \\
&= \frac{\exp \{-\frac{1}{a} \}}{\pi a} \hat{\rho}(t),
\end{align*}
where $\hat{\rho}$ is the probability density function for a complex Gaussian
$N_\mathbb{C}(\mu,\sigma^2)$ with mean $\mu = b/a$ and variance $\sigma^2 = \frac{|\Sigma|}{a}$.
Thus, the following lemma applies.
\begin{lemma}\label{lemma:absreal}
Let $\zeta \sim N_\mathbb{C}(\mu,\sigma^2)$
be a complex Gaussian with mean $\mu = \mu_1 + i \mu_2$.
Then the absolute moment $\mathbb{E} |\zeta_1|$ of the real part of $\zeta = \zeta_1 + i \zeta_2$ is given by
$$\mathbb{E} |\zeta_1| = \frac{\sigma}{\sqrt{\pi}} \exp\{-\mu_1^2/\sigma^2\} + |\mu_1| \erf(|\mu_1|/\sigma) .$$
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:absreal}]
We have
\begin{align*}
\mathbb{E} |\zeta_1| &= \frac{1}{\pi \sigma^2} \int_{\mathbb{C}} |\zeta_1| \exp\left\{\frac{-|\zeta-\mu|^2}{\sigma^2}\right\} dA(\zeta) \\
&= \frac{1}{\pi} \int_{\mathbb{C}} |\sigma w_1 + \mu_1| \exp\left\{ -|w|^2 \right\} dA(w),
\end{align*}
where we have made the change of variables $w = \frac{\zeta-\mu}{\sigma}$, $dA(w) = \frac{1}{\sigma^2}dA(\zeta)$.
Letting $H:=\{w \in \mathbb{C}:\sigma w_1 + \mu_1 > 0 \}$, we can rewrite the above integral as:
$$ \frac{1}{\pi}\left( \int_{H} (\sigma w_1 + \mu_1) \exp\{ -|w|^2 \} dw_1 dw_2 - \int_{\mathbb{C} \setminus H} (\sigma w_1 + \mu_1) \exp\{ -|w|^2 \} dw_1 dw_2 \right).$$
Since $\sigma w_1$ is odd and $\mu_1$ is even (with respect to $w_1$) this can be rewritten as:
\begin{equation}\label{eq:first}
\frac{1}{\pi}\left( \int_{R} |\mu_1| \exp\{ -|w|^2 \} dw_1 dw_2 + \sigma \int_{\mathbb{C} \setminus R} |w_1| \exp\{ -|w|^2 \} dw_1 dw_2 \right),
\end{equation}
where $R:=\left\{w \in \mathbb{C}: |w_1| < \frac{|\mu_1|}{\sigma} \right\}$.
The first integral can be computed in terms of the error function, $\erf$:
\begin{equation}\label{eq:second}
\int_{R} |\mu_1| \exp\{ -|w|^2 \} dw_1 dw_2 = \pi |\mu_1| \erf(|\mu_1|/\sigma),
\end{equation}
and the second integral is elementary:
\begin{equation}\label{eq:third}
\int_{\mathbb{C} \setminus R} |w_1| \exp\{ -|w|^2 \} dw_1 dw_2 = \sqrt{\pi}\exp\{-\mu_1^2/\sigma^2\}.
\end{equation}
Collecting \eqref{eq:first}, \eqref{eq:second}, and \eqref{eq:third},
we arrive at the formula stated in the lemma.
\end{proof}
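The formula can also be verified by Monte Carlo sampling, using the fact that the real part of an $N_\mathbb{C}(\mu,\sigma^2)$ variable is a real Gaussian with mean $\mu_1$ and variance $\sigma^2/2$ (the parameter values and sample size below are hypothetical):

```python
import numpy as np
from math import erf, exp, sqrt, pi

rng = np.random.default_rng(0)
mu, sigma = 0.7 - 0.3j, 1.3               # hypothetical parameters
m = 2_000_000                             # sample size

# Real part of N_C(mu, sigma^2) is N(mu_1, sigma^2 / 2)
re = mu.real + rng.normal(0.0, sigma / np.sqrt(2.0), m)
mc = np.abs(re).mean()

mu1 = mu.real
exact = sigma / sqrt(pi) * exp(-mu1**2 / sigma**2) + abs(mu1) * erf(abs(mu1) / sigma)
print(mc, exact)
```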
Applying Lemma \ref{lemma:absreal} to \eqref{eq:reduced}, we obtain the following
main result of this section:
\begin{thm}\label{thm:general}
Let $p(z)$ be a random polynomial whose coefficients are independent centered complex Gaussians.
Then the expected length of its lemniscate $\Lambda := \{z \in \mathbb{C} : |p(z)| = 1 \}$ is given by
\begin{equation}\label{eq:nonasymp}
\mathbb{E} |\Lambda | = \sqrt{\pi} \int_{\mathbb{C}} \frac{\exp \{-\frac{1}{a} \}}{a} \left[ \sqrt{\frac{|\Sigma|}{a}}\exp\left\{-\frac{|\Re{b}|^2}{a|\Sigma|} \right\} + \sqrt{\pi} \frac{|\Re b|}{a} \erf \left\{ |\Re b|/\sqrt{a |\Sigma|} \right\} \right] dA(z),
\end{equation}
where, as above, $|\Sigma|$ denotes the determinant of the covariance matrix
$\Sigma$, and $a, b, c$ are
the entries of $\Sigma$ given by \eqref{eq:abc}.
\end{thm}
\section{Kac polynomials: proof of Theorem \ref{thm:Kac}}\label{sec:Kac}
In the case $p(z)$ is a random Kac polynomial, for the entries in the covariance matrix,
$$ \Sigma = \left( \begin{array}{cr}
a & b \\ \bar{b} & c
\end{array} \right) ,$$
we have
\begin{align*}
a &= K(z,z) = \sum_{k=0}^n |z|^{2k},\\
b &= \partial_{z} K(z,z) = \bar{z} \sum_{k=1}^n k |z|^{2k-2},\\
c &= \partial_{z} \partial_{\bar{z}} K(z,z) = \sum_{k=1}^n k^2 |z|^{2k-2}.
\end{align*}
We will show that the pointwise limit of the integrand appearing in \eqref{eq:nonasymp} as $n\to\infty$ is:
\begin{equation}\label{eq:pointwise}
\left\{ \begin{array}{cc}
\exp\left\{ -(1-|z|^2) \right\}\left[\frac{\exp\left\{-x^2(1-|z|^2)\right\} }{(1-|z|^2)^{1/2}} + \sqrt{\pi} x\erf\left\{x\sqrt{1-|z|^2}\right\} \right], \quad |z| < 1, \quad \\
0 , \quad |z| \geq 1.
\end{array} \right.
\end{equation}
We will also show that the dominated convergence theorem applies,
so that the integral in Theorem \ref{thm:general}
has a limit as $n \rightarrow \infty$
given by the integral of \eqref{eq:pointwise}.
Consequently, the limiting value is:
\begin{align}\label{eq:limit}
C &:= \lim_{n \rightarrow \infty} \mathbb{E} |\Lambda | \\
&= \sqrt{\pi} \int_{|z|<1} \exp\left\{ -(1-|z|^2) \right\}\left[ \frac{\exp\left\{-x^2(1-|z|^2)\right\} }{(1-|z|^2)^{1/2}} + \sqrt{\pi} x\erf\left\{x\sqrt{1-|z|^2}\right\} \right] dA(z)\\
&\approx 8.3882,
\end{align}
which proves Theorem \ref{thm:Kac}.
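The integral \eqref{eq:limit} can be evaluated numerically; substituting $w=\sqrt{1-|z|^2}$ (so that $\rho\,d\rho = -w\,dw$ in polar coordinates) removes the square-root singularity and leaves a smooth integrand. The following sketch only asserts the rough size of the constant:

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import dblquad

# After substituting w = sqrt(1 - |z|^2) in eq. (eq:limit):
# C = sqrt(pi) * int_0^{2pi} int_0^1 e^{-w^2} [ e^{-x^2 w^2}
#       + sqrt(pi)*|x|*w*erf(|x|*w) ] dw dtheta,  with x = sqrt(1-w^2)*cos(theta).
def integrand(w, theta):
    x = np.sqrt(1.0 - w * w) * np.cos(theta)
    return np.exp(-w * w) * (np.exp(-x * x * w * w)
                             + np.sqrt(np.pi) * abs(x) * w * erf(abs(x) * w))

val, _ = dblquad(integrand, 0.0, 2.0 * np.pi, 0.0, 1.0)
C_num = np.sqrt(np.pi) * val
print(C_num)
```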
It remains to compute the pointwise limit and to show dominated convergence.
First, we derive certain formulas from the covariance kernel $K(z,w)$ of the Kac polynomial,
$$K(z,w) = \mathbb{E} p(z) \overline{p(w)} = \sum_{k=0}^n (z \bar{w})^k = \frac{1-(z \bar{w})^{n+1}}{1-z\bar{w}}.$$
Notice that
\begin{equation}\label{eq:a}
a = K(z,z) = \sum_{k=0}^n |z|^{2k} = \frac{1}{1-|z|^2}- \frac{|z|^{2n+2}}{1-|z|^2}.
\end{equation}
We have
\begin{equation}\label{eq:b}
\frac{\Re \{b\}}{a} = \Re \{ \partial_z \log K(z,z) \}=
\frac{x}{(1-|z|^2)} - \frac{(n+1) x |z|^{2n}}{1-|z|^{2n+2}},
\end{equation}
and from this we observe that for $|z| < 1,$
\begin{align*}
\frac{\Re \{b\}}{a^2} &=
x\frac{\frac{1}{1-|z|^2}-\frac{(n+1)|z|^{2n}}{1-|z|^{2n+2}}}{\frac{1}{1-|z|^2}-\frac{|z|^{2n+2}}{1-|z|^{2}}}\\
&= x\frac{1-(n+1)\frac{|z|^{2n}}{\sum_{k=0}^n|z|^{2k}}}{1-|z|^{2n+2}},
\end{align*}
where the fraction on the right lies in $[0,1]$ (its numerator is nonnegative and at most its denominator); hence $\left|\frac{\Re \{b\}}{a^2}\right|\leq |x|$, and as $n\to\infty,$ $\frac{\Re \{b\}}{a^2}\to x.$
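The closed forms for $a$ and $\Re\{b\}/a$ can be compared against the defining series directly (the degree $n$ and the point $z$ below are hypothetical test values):

```python
import numpy as np

n, z = 50, 0.6 + 0.3j                     # hypothetical test values
r2, x = abs(z)**2, z.real
k = np.arange(n + 1)

a_series = np.sum(r2**k)                            # K(z,z) = sum |z|^{2k}
b_series = np.conjugate(z) * np.sum(k[1:] * r2**(k[1:] - 1))
a_closed = (1 - r2**(n + 1)) / (1 - r2)             # eq. (eq:a)
reb_over_a = x / (1 - r2) - (n + 1) * x * r2**n / (1 - r2**(n + 1))  # eq. (eq:b)

print(abs(a_series - a_closed), abs(b_series.real / a_series - reb_over_a))
```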
On the other hand for $|z| > 1,$ we note that
\begin{align*}
\frac{\Re \{b\}}{a^2} &= x\frac{\frac{(n+1)|z|^{2n}}{|z|^{2n+2}-1}-\frac{1}{|z|^2-1}}{\frac{|z|^{2n+2}}{|z|^2-1}-\frac{1}{|z|^2-1}}\\
& = x\frac{\frac{(n+1)|z|^{2n}}{\sum_{k=0}^n|z|^{2k}}-1}{|z|^{2n+2}-1},
\end{align*}
where again the fraction on the right lies in $[0,1]$, so that $\left|\frac{\Re \{b\}}{a^2}\right|\leq |x|$; moreover, as $n\to\infty,$ $\frac{\Re \{b\}}{a^2}\to 0.$ With a view toward applying the dominated convergence theorem for $|z|>1,$ we record the estimates
$$\left|\frac{\Re \{b\}}{a^2}\right|\leq |x|, \hspace{0.1in} 1 < |z| < 2, $$
$$\left|\frac{\Re \{b\}}{a^2}\right|\leq|x|\frac{2n}{|z|^{2n+2}}\leq\frac{2}{|z|^3}, \hspace{0.1in} |z| > 2. $$
From
\begin{equation}\label{eq:Sigma}
\frac{|\Sigma|}{a^2} = \partial_{\bar{z}} \partial_z \log K(z,z) =
\frac{1}{(1-|z|^2)^2}- \frac{(n+1)^2|z|^{2n}}{(1-|z|^{2n+2})^2},
\end{equation}
we notice that for $|z| < 1,$
\begin{align*}
\frac{|\Sigma|}{a^3} &= \frac{\partial_{\bar{z}} \partial_z \log K(z,z)}{K(z,z)} \\
&= \frac{1}{1-|z|^2}\left(\frac{1-\frac{(n+1)^2|z|^{2n}}{(\sum_{k=0}^n |z|^{2k})^2}}{1-|z|^{2n+2}} \right)\\
&\leq \frac{1}{1-|z|^2},
\end{align*}
\noindent and $\frac{|\Sigma|}{a^3}\to \frac{1}{1-|z|^2}$ as $n\to\infty.$
\vspace{0.1in}
\noindent A similar computation for $|z| > 1,$ yields
$$ \frac{|\Sigma|}{a^3} = \frac{1}{|z|^2-1}\left(\frac{1-\frac{(n+1)^2|z|^{2n}}{(\sum_{k=0}^n |z|^{2k})^2}}{|z|^{2n+2}-1}\right)\leq \frac{1}{|z|^2-1},$$
\noindent and as $n\to\infty,$ $\frac{|\Sigma|}{a^3}\to 0.$ To apply dominated convergence, we use the following bounds, which follow immediately from the above expression:
$$\left|\frac{|\Sigma|}{a^3}\right|\leq \frac{1}{|z|^2-1}, \hspace{0.1in} 1 < |z| < 2.$$
$$\left|\frac{|\Sigma|}{a^3}\right|\leq \frac{2}{|z|^{2n+2}}\leq\frac{2}{|z|^{6}}, \hspace{0.1in} |z| > 2, n\geq 2.$$
Letting $F_n(z)$ denote the integrand in \eqref{eq:nonasymp}, we have:
\begin{align*}\label{eq:dominated}
F_n(z) &= \frac{\exp \{-1/a \}}{a} \left[ \sqrt{\frac{|\Sigma|}{a}}\exp\left\{-\frac{|\Re{b}|^2}{a|\Sigma|} \right\} + \sqrt{\pi} \frac{|\Re b|}{a} \erf \left\{ |\Re b|/\sqrt{a |\Sigma|} \right\} \right] \\
&\leq \exp\{-1/a \} \left[ \sqrt{\frac{|\Sigma|}{a^3}} + \sqrt{\pi} \frac{|\Re b|}{a^2} \right] .
\end{align*}
For $|z|<1$ we have:
$$F_n(z) \leq \exp\left\{-{(1-|z|^2)} \right\} \left[ \frac{1}{\sqrt{1-|z|^2}} + \sqrt{\pi} |x| \right] ,$$
which is integrable. If $|z|> 1$ and $n$ is large enough, we split the integral into regions $ 1 < |z| < 2$ and $|z| >2$ and use the appropriate bounds from before. This justifies the use of the dominated convergence theorem.
In order to see the pointwise limit \eqref{eq:pointwise} of $F_n(z)$,
we notice that for $|z| < 1$, we have (as $n \rightarrow \infty$):
$$ \sqrt{\frac{|\Sigma|}{a^3}} \rightarrow \frac{1}{\sqrt{1-|z|^2}} ,$$
$$ a \rightarrow \frac{1}{1-|z|^2},$$
$$ \frac{\Re\{ b \}}{a^2} \rightarrow x,$$
and
$$ \frac{\Re\{ b \}}{\sqrt{a |\Sigma|}} \rightarrow x \sqrt{1-|z|^2}.$$
As pointed out earlier, for $|z|> 1$, we have:
$$F_n(z)\rightarrow 0.$$
Combining these pointwise limits, we arrive at \eqref{eq:pointwise},
and applying the dominated convergence theorem proves
the formula \eqref{eq:limit} for the asymptotic expected length of a lemniscate
generated by the Kac model.
\section{The expected length for other models}
\subsection{Kostlan Polynomials}\label{sec:Kostlan}
\noindent In this section we compare the average length of the lemniscate for different ensembles of random polynomials, starting with the Kostlan ensemble.
\noindent Consider a sequence of random polynomials whose coefficients are Kostlan random variables. Namely
$$P_n(z) = \sum_{k=0}^{n}a_{kn}z^k,$$
\noindent where $a_{kn}$ are independent $N_{\mathbb C}(0, \binom{n}{k}).$ Applying Theorem \ref{thm:general}, we obtain
\begin{equation}\label{nonasymp2}
\mathbb{E} |\Lambda_n | = \sqrt{\pi} \int_{\mathbb{C}} \frac{\exp \{-\frac{1}{a} \}}{a} \left[ \sqrt{\frac{|\Sigma|}{a}}\exp\left\{-\frac{|\Re{b}|^2}{a|\Sigma|} \right\} + \sqrt{\pi} \frac{|\Re b|}{a} \erf \left\{ |\Re b|/\sqrt{a |\Sigma|} \right\} \right] dA(z),
\end{equation}
\noindent where now for the Kostlan ensemble,
$$ \Sigma = \left( \begin{array}{cr}
a & b \\ \bar{b} & c
\end{array} \right) ,$$
with
\begin{align*}
a &= K(z,z) = (1+|z|^2)^{n},\\
b &= n\bar{z}(1+|z|^2)^{n-1} ,\\
c &= n(n|z|^2+1)(1+|z|^2)^{n-2}.
\end{align*}
\noindent This implies that
\begin{align*}
|\Sigma|& = ac - |b|^2 = n(1+|z|^2)^{2n-2}, \\
\frac{|\Sigma|}{a^3} & = \dfrac{n}{(1+|z|^2)^{n+2}},\\
\frac{\Re \{b\}}{a^2} & = \dfrac{nx}{(1+|z|^2)^{n+1}}, \\
\frac{|\Re{b}|^2}{a |\Sigma|} & = \dfrac{nx^2}{(1+|z|^2)^{n}}.
\end{align*}
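The closed forms for $a$, $b$, $c$, and $|\Sigma|$ in the Kostlan case can be verified against the defining sums; an illustrative sketch (function names are ours):

```python
from math import comb

# Compare the defining sums for the Kostlan covariance entries with the
# closed forms a = (1+|z|^2)^n, b = n zbar (1+|z|^2)^{n-1},
# c = n(n|z|^2+1)(1+|z|^2)^{n-2}, and |Sigma| = n(1+|z|^2)^{2n-2}.
def kostlan_entries(z, n):
    t = abs(z) ** 2
    a = sum(comb(n, k) * t ** k for k in range(n + 1))
    b = z.conjugate() * sum(k * comb(n, k) * t ** (k - 1) for k in range(1, n + 1))
    c = sum(k ** 2 * comb(n, k) * t ** (k - 1) for k in range(1, n + 1))
    return a, b, c

z, n = 0.7 - 0.2j, 12
t = abs(z) ** 2
a, b, c = kostlan_entries(z, n)
assert abs(a - (1 + t) ** n) < 1e-9
assert abs(b - n * z.conjugate() * (1 + t) ** (n - 1)) < 1e-9
assert abs(c - n * (n * t + 1) * (1 + t) ** (n - 2)) < 1e-9
assert abs((a * c - abs(b) ** 2) - n * (1 + t) ** (2 * n - 2)) < 1e-9
```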
\noindent Substituting these expressions into \eqref{nonasymp2}, we obtain
\begin{equation}\label{nonasymp3}
\mathbb{E} |\Lambda_n | = \sqrt{\pi}\int_{\mathbb{C}}\exp\left(-\frac{1}{(1+|z|^2)^{n}}\right)\left[I_{1n}(z) + I_{2n}(z)\right] dA(z)
\end{equation}
\noindent where $I_{1n}(z) = \sqrt{\frac{n}{(1+|z|^2)^{n+2}}}\exp\left(-\frac{nx^2}{(1+|z|^2)^{n}}\right)$ and $I_{2n}(z)= \sqrt{\pi}\frac{nx}{(1+|z|^2)^{n+1}} \erf \left\{\sqrt{n}x/(1+|z|^2)^{n/2}\right\}.$
\noindent Converting the above integral into polar coordinates $(r, \theta),$ followed by the substitution $r = \sqrt{\frac{t}{n}}$ leads us to
\begin{equation}\label{t}
\mathbb{E} |\Lambda_n | = \sqrt{\pi}\int_{0}^{2\pi}\int_{0}^{\infty}\exp\left(-\frac{1}{(1+t/n)^{n}}\right)\left[J_{1n}(t, \theta) + J_{2n}(t, \theta)\right]dtd\theta,
\end{equation}
$$J_{1n}(t, \theta)= \sqrt{\frac{1}{n(1+t/n)^{n+2}}}\exp\left(-\frac{t\cos^2(\theta)}{(1+ t/n)^{n}}\right)$$
$$J_{2n}(t, \theta) = \sqrt{\pi} \frac{\sqrt{t}\cos(\theta)}{\sqrt{n}(1+t/n)^{n+1}} \erf \left\{\sqrt{t}\cos(\theta)/(1+t/n)^{n/2}\right\}.$$
\noindent Factoring $1/\sqrt{n}$ out of the $J_{in}$, we see that the resulting integral has a limit as $n\to\infty.$ Namely, we have the following result:
$$\sqrt{n}\mathbb{E} |\Lambda_n |\to I \hspace{0.1in}\mbox{as}\hspace{0.1in} n\to\infty,$$
where $I$ is the constant given by
$$I = \sqrt{\pi}\int_{0}^{2\pi}\int_{0}^{\infty}\exp\left(-\frac{1}{e^t}\right)\left[\sqrt{\frac{1}{e^t}}\exp\left(-\frac{t\cos^2(\theta)}{e^t}\right)+ \sqrt{\pi}\sqrt{t}\cos(\theta)e^{-t}\erf \left\{\sqrt{t}\cos(\theta)/e^{t/2}\right\}\right]dtd\theta.$$
\subsection{Weyl Polynomials}\label{sec:Weyl}
We now consider Weyl polynomials defined by $P_n(z) = \sum_{k=0}^{n}a_kz^k$ where $a_k$ are independent random variables with $a_k \sim N_{\mathbb{C}}(0, \frac{1}{k!}).$
\vspace{0.1in}
\noindent One can check easily that now the covariance matrix has entries given by
$$ \Sigma = \left( \begin{array}{cr}
a & b \\ \bar{b} & c
\end{array} \right) ,$$
\noindent with
\begin{align*}
a & = \sum_{k=0}^{n}|z|^{2k}/k!,\\
b & = \bar{z}\sum_{k=1}^{n}|z|^{2k-2}/(k-1)! ,\\
c & = \sum_{k=1}^{n}\frac{k^2}{k!}|z|^{2k-2}.
\end{align*}
\noindent Applying Theorem \ref{thm:general}, we obtain
\begin{equation}\label{nonasymp4}
\mathbb{E} |\Lambda_n | = \sqrt{\pi} \int_{\mathbb{C}} \frac{\exp \{-\frac{1}{a} \}}{a} \left[ \sqrt{\frac{|\Sigma|}{a}}\exp\left\{-\frac{|\Re{b}|^2}{a|\Sigma|} \right\} + \sqrt{\pi} \frac{|\Re b|}{a} \erf \left\{ |\Re b|/\sqrt{a |\Sigma|} \right\} \right] dA(z).
\end{equation}
\noindent All the quantities above have finite limits as $n\to\infty.$ For instance $a\to\exp(|z|^2),$ $b\to\bar{z}\exp(|z|^2),$ and $c\to (1+|z|^2)\exp(|z|^2).$ Also, dominated convergence is easy to verify here. Taking the limit as $n\to\infty$ in \eqref{nonasymp4}, we obtain
$$\mathbb{E} |\Lambda_n |\to L,$$
where $$L = \sqrt{\pi} \int_{\mathbb{C}} \frac{\exp \{-\frac{1}{e^{|z|^2}} \}}{e^{|z|^2}} \left[ \sqrt{e^{|z|^2}}\exp\left\{-\frac{x^2}{e^{|z|^2}} \right\} + \sqrt{\pi}x\erf \left\{x/e^{|z|^2/2} \right\} \right] dA(z).$$
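The limiting Weyl entries quoted above can be checked against truncated sums at moderate $n$; a small sketch (ours, not from the paper):

```python
from math import exp, factorial

# Check a -> e^{|z|^2}, b -> zbar e^{|z|^2}, c -> (1+|z|^2) e^{|z|^2}
# for the Weyl ensemble by truncating the defining series at n = 40.
z, n = 0.9 + 0.4j, 40
t = abs(z) ** 2
a = sum(t ** k / factorial(k) for k in range(n + 1))
b = z.conjugate() * sum(t ** (k - 1) / factorial(k - 1) for k in range(1, n + 1))
c = sum(k ** 2 / factorial(k) * t ** (k - 1) for k in range(1, n + 1))
assert abs(a - exp(t)) < 1e-10
assert abs(b - z.conjugate() * exp(t)) < 1e-10
assert abs(c - (1 + t) * exp(t)) < 1e-10
```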
\subsection{Reciprocal binomial distribution}\label{sec:reciprocal}
\noindent Consider a random polynomial of the form
$$p_n(z) = \sum_{k=0}^{n}a_{nk}z^k,$$
where $a_{nk}$ are independent random variables with
$a_{nk} \sim N_{\mathbb{C}} \left( 0, \frac{1}{\binom{n}{k}} \right).$
In this case, the entries $a$, $b$, and $c$ of the covariance matrix $\Sigma$
are given as follows.
\begin{align*}
a & = \sum_{k=0}^{n}\frac{|z|^{2k}}{\binom{n}{k}},\\
b & = \bar{z}\sum_{k=1}^{n}\frac{k|z|^{2k-2}}{\binom{n}{k}} ,\\
c & = \sum_{k=1}^{n}\frac{k^2}{\binom{n}{k}}|z|^{2k-2}.
\end{align*}
\noindent Theorem \ref{thm:general} gives,
\begin{equation}\label{nonasymp5}
\mathbb{E} |\Lambda_n | = \sqrt{\pi} \int_{\mathbb{C}} \frac{\exp \{-\frac{1}{a} \}}{a} \left[ \sqrt{\frac{|\Sigma|}{a}}\exp\left\{-\frac{|\Re{b}|^2}{a|\Sigma|} \right\} + \sqrt{\pi} \frac{|\Re b|}{a} \erf \left\{ |\Re b|/\sqrt{a |\Sigma|} \right\} \right] dA(z).
\end{equation}
We now consider, asymptotically in $n$, the contributions to this integral from $|z| <1$ and from $|z| > 1$. If $|z| <1,$ then we observe from the expressions for $a, b$ and $c$ that
\begin{align*}
a & = 1 + \frac{|z|^2}{n} + o(1),\\
b & = \frac{\bar{z}}{n}\left(1 + \frac{4}{n-1}|z|^2 + o(1) \right) ,\\
c & = \frac{1}{n}\left(1 + \frac{8}{n-1}|z|^2 + o(1) \right).
\end{align*}
\noindent This yields $|\Sigma| = ac - |b|^2 = \frac{1}{n}\left(1+o(1)\right),$ $\frac{|\Re b|}{a} = \frac{|x|}{n}(1+o(1))$ and finally $\frac{|\Re{b}|^2}{a|\Sigma|} = \frac{x^2}{n}(1+o(1)).$ This implies that the contribution to the integral from $|z| < 1$ is of order $\sqrt{\frac{1}{n}}\left(1+o(1)\right),$ so the contribution to $\sqrt{n}\mathbb{E} |\Lambda_n |$ from the unit disc has a finite limit.
\vspace{0.1in}
\noindent We next claim that, after scaling by $\sqrt{n}$, the integral over $|z| > 1$ goes to $0$ asymptotically. Indeed, notice that
\begin{align*}
a & = |z|^{2n}\left(1+ \frac{1}{n|z|^2} + o(1) \right),\\
b & = n\bar{z}|z|^{2n-2}\left(1 + \frac{n-1}{n^2|z|^2}+ o(1) \right) ,\\
c & = n^2|z|^{2n-2}\left(1 + \frac{(n-1)^2}{n^3|z|^2}+ o(1) \right).
\end{align*}
\noindent From here we can deduce that $|\Sigma| = \frac{|z|^{4n-4}}{n}(1+o(1)).$ This gives that
$$\sqrt{\frac{|\Sigma|}{a^3}} = \sqrt{\frac{1}{n}}\frac{1}{|z|^{n+2}}(1+o(1)),$$
$$\frac{|\Re b|}{a^2} = \frac{n|x|}{|z|^{2n+2}}(1 + o(1)).$$
\noindent The pointwise limit of the integrand (even after scaling by $\sqrt{n}$) is clearly $0$, and because of the power decay, dominated convergence holds. So the contribution from the exterior of the unit disc to the integral is negligible.
Ultimately, as $n \rightarrow \infty$, we get that $\sqrt{n}\mathbb{E} |\Lambda_n |$ approaches
a positive constant given by an integral over $|z| < 1$ independent of $n.$
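The two leading-order claims for $|\Sigma|$ in the reciprocal-binomial ensemble can be spot-checked numerically; the sketch below (ours; the tolerances are example choices) evaluates the defining sums directly:

```python
from math import comb

# Check |Sigma| = a c - |b|^2 ~ 1/n for |z| < 1 and
# |Sigma| ~ |z|^{4n-4}/n for |z| > 1, writing r2 = |z|^2.
def sigma_det(r2, n):
    a = sum(r2 ** k / comb(n, k) for k in range(n + 1))
    b2 = r2 * sum(k * r2 ** (k - 1) / comb(n, k) for k in range(1, n + 1)) ** 2
    c = sum(k ** 2 / comb(n, k) * r2 ** (k - 1) for k in range(1, n + 1))
    return a * c - b2

n = 300
assert abs(sigma_det(0.25, n) * n - 1) < 0.1                      # |z| = 1/2
assert abs(sigma_det(2.0, n) * n / 2.0 ** (2 * n - 2) - 1) < 0.1  # |z| = sqrt(2)
```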
\vspace{0.1in}
\section{The connected components of a random lemniscate}\label{sec:components}
\noindent In this section,
we prove asymptotics for the expected number of
connected components $\mathbb{E}(b_0(\Lambda_n))$
of a lemniscate $\Lambda_n = \{z: |p_n(z)| = 1\}$,
where $p_n$ is a random Kac polynomial, i.e.,
$p_n(z) = \sum_{k=0}^{n}a_kz^k,$
with i.i.d. coefficients $a_k \sim N_{\mathbb{C}}(0,1)$.
Consider the set:
\begin{equation}\label{lemn1}
U_n = \{z: |p_n(z)| < 1\}.
\end{equation}
\noindent
Then $U_n$ is a bounded open set and it
is a well-known fact that
the number of connected components of
$U_n$ is at most $n.$
This can be seen by noticing that each component
of $U_n$ must contain a zero of $p_n$;
otherwise the maximum principle could be applied to the harmonic
function $\log|p_n|$ on that component, forcing it to be constant,
a contradiction.
It also follows from the maximum principle that each
component of $U_n$ is simply-connected.
The boundary of $U_n$ is the lemniscate $\Lambda_n$,
which is smooth with probability one.
We conclude that the connected components of
$\Lambda_n$ are in one-to-one correspondence with those of $U_n$.
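The correspondence between components of $\Lambda_n$ and of $U_n$ can be illustrated numerically; the sketch below (ours, not the paper's method) counts grid-connected components of $\{|p| < 1\}$ for two explicit polynomials, consistent with the bound $b_0 \le n$:

```python
import numpy as np

# Count connected components of U = {z : |p(z)| < 1} on a grid by flood fill.
def count_components(coeffs, lim=3.0, m=401):
    xs = np.linspace(-lim, lim, m)
    X, Y = np.meshgrid(xs, xs)
    Z = X + 1j * Y
    inside = np.abs(np.polyval(coeffs, Z)) < 1.0   # coeffs: highest degree first
    seen = np.zeros_like(inside, dtype=bool)
    comps = 0
    for i in range(m):
        for j in range(m):
            if inside[i, j] and not seen[i, j]:
                comps += 1
                stack = [(i, j)]
                seen[i, j] = True
                while stack:                        # depth-first flood fill
                    a, b = stack.pop()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = a + da, b + db
                        if 0 <= u < m and 0 <= v < m and inside[u, v] and not seen[u, v]:
                            seen[u, v] = True
                            stack.append((u, v))
    return comps

assert count_components([1, 0, 0]) == 1    # p(z) = z^2: U is the unit disk
assert count_components([1, 0, -4]) == 2   # p(z) = z^2 - 4: two ovals near z = ±2
```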
\subsection{The expectation of the number of connected components: proof of Theorem \ref{thm:cc}}
\label{sec:cc}
Since the number of connected components $b_0(\Lambda_n)$
is at most $n$, in order to show that $\mathbb{E} b_0(\Lambda_n) \sim n$
it suffices to prove the lower bound
$\mathbb{E} b_0(\Lambda_n) \geq n - o(n)$.
Fix $0< \beta < \alpha <1/2$ with $\alpha - \beta > \frac{1}{2} -\alpha$,
and suppose $n$ is large enough that
$$n^{\beta+\frac{1}{2}- 2 \alpha} \exp\{n^{-\alpha}\} < 1.$$
As a certificate for the appearance of a localized component
we will use the following conditions related to the
Taylor expansion of $p(z)$ centered at $\zeta$.
\begin{equation}
\left\{
\begin{aligned}
p(\zeta) &= 0 \\
|p'(\zeta)| &> 2 \cdot n^{1+\alpha} \\
|p^{(k)}(\zeta)| &< n^{k+\frac{1}{2}+\beta} , \quad \text{for } k=2,3,\ldots,n
\end{aligned}\right.
\label{eq:Taylor}
\end{equation}
These conditions imply that, for any $z$ on the circle defined by
$|z-\zeta| = n^{-1-\alpha}$, we have
\begin{align}
|p(z)| &= \left| p'(\zeta) (z-\zeta) + \sum_{k=2}^n \frac{p^{(k)}(\zeta)}{k!}(z-\zeta)^k \right| \\
&\geq |p'(\zeta) (z-\zeta)| - \left|\sum_{k=2}^n \frac{p^{(k)}(\zeta)}{k!}(z-\zeta)^k \right| \\
&\geq |p'(\zeta)| n^{-1-\alpha} - \sum_{k=2}^n \frac{|p^{(k)}(\zeta)|}{k!}(n^{-1-\alpha})^k \\
&> 2 - \sum_{k=2}^n \frac{n^{(k+\frac{1}{2}+\beta)}}{k!}(n^{-1-\alpha})^k \\
&> 2 - n^{\beta + \frac{1}{2} - 2\alpha} \sum_{k=2}^n \frac{n^{-\alpha (k-2)}}{k!} \\
&> 2 - n^{\beta + \frac{1}{2} - 2\alpha} \exp \{ n^{-\alpha} \} \\
&> 1,
\end{align}
so that $p(\zeta)=0$ and $|p(z)|>1$
on the circle $|z-\zeta| = n^{-1-\alpha}$.
This ensures that there is a connected component of $\Lambda_n$
contained in the disk $|z-\zeta| < n^{-1-\alpha}$.
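The admissibility of the exponents can be checked numerically; in the sketch below (the particular values of $\alpha$ and $\beta$ are example choices of ours) the smallness condition guarantees that the lower bound $2 - n^{\beta+\frac12-2\alpha}\exp\{n^{-\alpha}\}$ exceeds $1$:

```python
import math

# Verify that admissible parameters exist: 0 < beta < alpha < 1/2 with
# alpha - beta > 1/2 - alpha, and that n^{beta + 1/2 - 2 alpha} exp(n^{-alpha}) < 1.
alpha, beta = 0.45, 0.30
assert 0 < beta < alpha < 0.5 and alpha - beta > 0.5 - alpha

for n in (100, 10**4, 10**6):
    slack = n ** (beta + 0.5 - 2 * alpha) * math.exp(n ** (-alpha))
    assert slack < 1          # hence 2 - slack > 1 on the circle |z - zeta| = n^{-1-alpha}
```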
In order to estimate the average number of zeros
for which the conditions \eqref{eq:Taylor} are all satisfied,
we will use a modified version of the Kac-Rice formula.
First recall that the Kac-Rice formula for the
expectation $\mathbb{E} N_p(U)$ of the number of complex zeros of $p$
in a region $U$ states
\begin{align}\label{eq:KRplain}
\mathbb{E} N_p(U) &= \frac{1}{\pi} \int_U \mathbb{E} |p'(z)|^2 \delta(p(z)) dA(z) \\
&= \frac{1}{\pi} \int_U \mathbb{E} \left[ |p'(z)|^2 \, \big| \, p(z)=0 \right] \rho_{p(z)} (0) dA(z),
\end{align}
where $\rho_{p(z)} (0)$ is the marginal probability density of $p(z)$ evaluated at $0$.
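The Kac-Rice prediction can be tested empirically. Integrating the density $\frac{1}{\pi}\partial_{\bar z}\partial_z \log K(z,z)$ from \eqref{eq:Sigma} over a centered disk of radius $r$ gives, for the Kac polynomial, the closed form $\mathbb{E} N(r) = \frac{1}{1-r^2} - \frac{n+1}{1-r^{2n+2}} + n$; the following Monte Carlo sketch (ours; this closed form is derived here, not quoted from the paper) compares it with empirical root counts:

```python
import numpy as np

# Monte Carlo root counts for the Kac polynomial vs. the Kac-Rice prediction.
rng = np.random.default_rng(1)
n, r, trials = 8, 0.8, 4000

counts = []
for _ in range(trials):
    # i.i.d. coefficients a_k ~ N_C(0, 1): real and imaginary parts N(0, 1/2).
    coeffs = (rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)) / np.sqrt(2)
    roots = np.roots(coeffs[::-1])            # coeffs[k] multiplies z^k
    counts.append(np.sum(np.abs(roots) < r))

predicted = 1 / (1 - r**2) - (n + 1) / (1 - r**(2 * n + 2)) + n
assert abs(np.mean(counts) - predicted) < 0.15
```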
We would like to modify \eqref{eq:KRplain} to obtain a lower bound for the expected number $\hat{N}_p$
of zeros satisfying the conditions \eqref{eq:Taylor}. Our approach is based on \cite{AD}, Theorem $5.1.1$. Let $I_1$ be the indicator function of the interval $(2n^{1+\alpha},\infty)$
and, for $k=2,\ldots,n$, let $I_k$ be the indicator function of the interval $[0,n^{k+\frac{1}{2}+\beta}).$ Let $T_n(s) := \{z \in \mathbb{C} : e^{-s/n} < |z| < e^{s/n} \}$ and let $\hat{N}_p(T_n(s))$ denote the number of zeros satisfying \eqref{eq:Taylor} which lie in the annulus $T_n(s).$
Then we have
\begin{align*}
\mathbb{E} \hat{N}_p &\geq \mathbb{E} \hat{N}_p(T_n(s)) \\
&= \frac{1}{\pi} \int_{T_n(s)} \mathbb{E} |p'(z)|^2 \delta(p(z))
\prod_{k=1}^n I_k(|p^{(k)}(z)|) dA(z) \\
&= \frac{1}{\pi} \int_{T_n(s)} \mathbb{E} \left[ |p'(z)|^2
\prod_{k=1}^n I_k(|p^{(k)}(z)|) \, \big| \, p(z)=0 \right] \rho_{p(z)}(0)dA(z).
\end{align*}
In the above chain, Theorem $5.1.1$ from \cite{AD} was used to go from the first line to the second. Next, for each fixed $s$ the above provides a lower bound on the
average number of connected components
\begin{equation}\label{eq:pivot}
\mathbb{E} b_0(\Lambda_n) \geq \frac{1}{\pi} \int_{T_n(s)} \mathbb{E} \left[ |p'(z)|^2 \prod_{k=1}^n I_k(|p^{(k)}(z)|) \, \big| \, p(z)=0 \right] \rho_{p(z)}(0)dA(z).
\end{equation}
The remainder of the proof will establish that the right hand side
of \eqref{eq:pivot}
is asymptotic to a standard Kac-Rice integral of the form \eqref{eq:KRplain}.
Letting $\tilde{I}_k$ denote the indicator function of
$[n^{k+\frac{1}{2}+\beta},\infty)$,
we will use the union-type bound,
\begin{equation}\label{eq:union}
\prod_{k=2}^nI_k(|p^{(k)}(z)|) \geq
1-\sum_{k=2}^n\tilde{I}_k(|p^{(k)}(z)|),
\end{equation}
in order to prove that
\begin{equation}\label{eq:experr}
\mathbb{E} \left[ |p'(z)|^2
\prod_{k=1}^n I_k(|p^{(k)}(z)|) \, \big| \, p(z)=0 \right]
\geq \mathbb{E} \left[ |p'(z)|^2 I_1(|p'(z)|) \, \big| \, p(z)=0 \right] - O\left(\exp \left\{-n^{\beta} \right\} \right) . \\
\end{equation}
First, we use the simple estimate:
\begin{equation}\label{eq:leftover}
\mathbb{E} \left[ |p'(z)|^2 I_1(|p'(z)|) \sum_{k=2}^n \tilde{I}_k(|p^{(k)}(z)|) \, \big| \, p(z)=0 \right]
\leq \sum_{k=2}^n \mathbb{E} \left[ |p'(z)|^2 \tilde{I}_k(|p^{(k)}(z)|) \, \big| \, p(z)=0 \right].\\
\end{equation}
We estimate each summand above using the Cauchy-Schwarz inequality.
\begin{equation}\label{eq:apply}
\begin{aligned}
\mathbb{E} \left[ |p'(z)|^2 \tilde{I}_k(|p^{(k)}(z)|) \, \big| \, p(z)=0 \right]
&\leq \sqrt{\mathbb{E} \left[ |p'(z)|^4 \, \big| \, p(z)=0 \right] } \sqrt{P(|p^{(k)}(z)| \geq n^{k+\frac{1}{2}+\beta} \big| p(z)=0)} \\
&\leq \sqrt{\mathbb{E} \left[ |p'(z)|^4 \, \big| \, p(z)=0 \right] } \exp \left\{-\frac{n^{2\beta}}{2C_1(s)} \right\},
\end{aligned}
\end{equation}
where we have used the estimates
\begin{equation}\label{eq:key}
P( |p^{(k)}(z)| \geq n^{k+\frac{1}{2} + \beta} \big| p(z)=0) \leq \exp \left\{-\frac{n^{2k+1+2\beta}}{C_1(s)n^{2k+1}} \right\} = \exp\left\{ -\frac{n^{2\beta}}{C_1(s)} \right\},
\end{equation}
which follow from
Lemmas \ref{lemma:conditionalGaussian} and \ref{lemma:overwhelming} below.
By the same lemmas, we have
$\sqrt{\mathbb{E} \left[ |p'(z)|^4 \, \big| \, p(z)=0 \right] } = O(n^3)$.
Applying \eqref{eq:apply} to \eqref{eq:leftover}
and relaxing the expression appearing in the exponent to $-n^{\beta}$,
we can neglect the polynomially growing factor
$\sqrt{\mathbb{E} \left[ |p'(z)|^4 \, \big| \, p(z)=0 \right] } = O(n^3)$
as well as the number of terms $(n-1)$ in the sum.
We thus obtain the bound
\begin{equation}
\mathbb{E} \left[ |p'(z)|^2 I_1(|p'(z)|) \sum_{k=2}^n \tilde{I}_k(|p^{(k)}(z)|) \, \big| \, p(z)=0 \right]
= O\left(\exp \left\{-n^{\beta} \right\} \right),
\end{equation}
which establishes \eqref{eq:experr}
by way of the union bound stated in \eqref{eq:union}.
The random variable $p'(z)$
conditioned on $p(z)=0$ is distributed
as a centered complex Gaussian with variance $\frac{ac_1-|b_1|^2}{a}$,
and this implies that $|p'(z)|^2$ conditioned on $p(z)=0$
is distributed as an exponential random variable with parameter
$\lambda = \left( \frac{ac_1-|b_1|^2}{a} \right)^{-1}$,
so we have
\begin{equation}\label{eq:plaincond}
\mathbb{E} \left[ |p'(z)|^2 \, \big| \, p(z)=0 \right]
= \frac{1}{\lambda} = \frac{ac_1-|b_1|^2}{a},
\end{equation}
and
\begin{align*}
\mathbb{E} \left[ |p'(z)|^2 I_1(|p'(z)|) \, \big| \, p(z)=0 \right]
&= \int_{4n^{2+2\alpha}}^\infty x \lambda e^{-\lambda x} dx \\
&= \exp \left\{-4n^{2+2\alpha} \lambda \right\} \left( \frac{1}{\lambda} + 4n^{2+2\alpha} \right)\\
&\geq \frac{1}{\lambda} \exp \left\{-4n^{2+2\alpha} \lambda \right\}\\
&=\mathbb{E} \left[ |p'(z)|^2 \, \big| \, p(z)=0 \right] \left(1 - O(n^{2\alpha-1}) \right) \end{align*}
where we used \eqref{eq:plaincond} in the last line.
Combining this with \eqref{eq:experr}
in order to reassess \eqref{eq:pivot}
we finally conclude the lower bound
\begin{equation}
\begin{aligned}
\mathbb{E} b_0(\Lambda_n) &\geq
(1-O(n^{2\alpha-1})) \frac{1}{\pi} \int_{T_n(s)} \mathbb{E} \left[ |p'(z)|^2
\, \big| \, p(z)=0 \right] \rho_{p(z)}(0)dA(z), \\
&= (1-O(n^{2\alpha-1})) \mathbb{E} N_p(T_n(s)),
\end{aligned}
\end{equation}
where, as in \eqref{eq:KRplain},
$N_p(T_n(s))$ denotes
the number of zeros of $p$ in $T_n(s)$.
We recall \cite{IbZeit} that
$$\mathbb{E} N_p(T_n(s)) \sim n\left(\frac{e^{2s}+1}{e^{2s}-1} - \frac{1}{s}\right),$$
which implies
\begin{equation}
\liminf_{n \rightarrow \infty} \frac{\mathbb{E} b_0(\Lambda_n)}{n}
\geq \left( 1 - \frac{1}{s} \right).
\end{equation}
This lower bound can be made arbitrarily close to $1$
(by increasing $s$),
and along with the deterministic upper bound
$b_0(\Lambda_n) \leq n$ this shows that the limit
$$\lim_{n \rightarrow \infty} \frac{\mathbb{E} b_0(\Lambda_n)}{n} = 1$$
exists, i.e., $\mathbb{E} b_0(\Lambda_n) \sim n$.
This proves Theorem \ref{thm:cc}.
\begin{lemma}\label{lemma:conditionalGaussian}
Fix $z \in\mathbb{C}$.
The random variable $p^{(k)}(z)$ conditioned on
$p(z)=0$ is distributed as a centered complex Gaussian,
$N_\mathbb{C}(0,\sigma^2)$,
with variance
$$\sigma^2 = \frac{a c_k - |b_k|^2}{a},$$
where
\begin{equation}
a = K(z,z), \quad b_k = \partial_{z}^k K(z,z), \quad c_k = \partial_{z}^k \partial_{\bar{z}}^k K(z,z).
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:conditionalGaussian}]
Let $\rho(u,v)$ denote the joint density of $(U,V) = (p(z),p^{(k)}(z))$.
The conditional density $\rho_{V|U=0}$ of $V$ given $U=0$
is given by:
\begin{equation}\label{eq:conditdensity}
\rho_{V|U=0} (v) = \frac{\rho(0,v)}{\rho_U(0)},
\end{equation}
where $\rho_U(u) = \frac{1}{\pi a} \exp\left\{-\frac{|u|^2}{a} \right\}$
is the marginal density of $U$.
We have
\begin{equation}
\rho(u,v) = \frac{1}{\pi^2 |\Sigma_k|} \exp \{- (u,v)^* \Sigma_k^{-1} (u,v) \},
\end{equation}
where $\Sigma_k$ is the covariance matrix of $(U,V)$,
which can be computed explicitly using the covariance kernel $K(z,w)$:
$$K(z,w) = \mathbb{E} p(z) \overline{p(w)}.$$
Namely, we have:
$$ \Sigma_k = \left( \begin{array}{cr}
a & b_k \\ \bar{b_k} & c_k
\end{array} \right) ,$$
where
\begin{equation}
a = K(z,z), \quad b_k = \partial_{z}^k K(z,z), \quad c_k = \partial_{z}^k \partial_{\bar{z}}^k K(z,z).
\end{equation}
Applying this to \eqref{eq:conditdensity} we obtain:
\begin{equation}
\rho_{V|U=0} (v) = \frac{\rho(0,v)}{\rho_U(0)} = \frac{a}{\pi |\Sigma_k|} \exp \left\{- \frac{a|v|^2}{|\Sigma_k|} \right\},
\end{equation}
as desired.
\end{proof}
\begin{lemma}\label{lemma:overwhelming}
There exists a positive constant $C_1(s)$ depending on $s$ but independent of $n$, such that
$$ \frac{ac_k - |b_k|^2}{a} \leq C_1(s) n^{2k+1},$$
for all $z \in T_n(s) = \{z\in\mathbb{C}: e^{-s/n} < |z| < e^{s/n} \}$ and $k=1,2,\ldots,n$. Furthermore, there exists $C_2(s)>0$ such that for $z \in T_n(s),$ and for all large enough $n\geq N(s),$ we have
$$ \frac{ac_1-|b_1|^2}{a} \geq C_2(s) n^{3}.$$
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:overwhelming}]
\noindent For the first estimate, we note that $\dfrac{ac_k - |b_k|^2 }{a}\leq c_k$ and so it is enough to find an upper bound for $c_k.$ We have
$$c_k = \mathbb{E}\left(p^{(k)}(z)\overline{p^{(k)}(z)}\right) = \sum_{j=k}^{n}\left[j(j-1)(j-2)\cdots(j-(k-1))\right]^2|z|^{2j-2k}.$$
\noindent Using the above expression, we observe that for $z\in T_n(s),$
\begin{equation}\label{ubc}
c_k\leq e^{2s}\sum_{j=k}^{n}\left[j(j-1)(j-2)\cdots(j-(k-1))\right]^2\leq e^{2s}\sum_{j=k}^{n} n^{2k}\leq e^{2s}n^{2k+1}.
\end{equation}
\noindent Before proving the second inequality we recall that
$$a = \sum_{k=0}^n |z|^{2k}, \hspace{0.05in} b_1 = \bar{z} \sum_{k=1}^n k |z|^{2k-2}$$
$$c_1 = \sum_{k=1}^n k^2 |z|^{2k-2}.$$
\noindent For $z \in T_n(s),$ we now estimate as follows:
\begin{equation}\label{lba}
a = \sum_{k=0}^n |z|^{2k}\geq (e^{-s/n})^{2n}(n+1) = e^{-2s}(n+1).
\end{equation}
\noindent A similar reasoning gives $a\leq (n+1)e^{2s}.$
We next proceed to bound $c_1$ and $|b_1|^2.$
\begin{equation}\label{lbc}
c_1 = \sum_{k=1}^n k^2 |z|^{2k-2}\geq (e^{-s/n})^{2n-2}\sum_{k=1}^n k^2 = (e^{-s/n})^{2n-2}n(n+1)(2n+1)/6
\end{equation}
\begin{align}\label{ubb}
|b_1|^2 &= |z|^2\left(\sum_{k=1}^n k |z|^{2k-2}\right)^2\\
&\leq e^{2s/n}e^{4s}\left(\sum_{k=1}^{n} k\right)^2 \\
& = e^{2s/n}e^{4s}\dfrac{n^2 (n+1)^2}{4}.
\end{align}
\noindent Combining all the above estimates, we obtain that for large $n$
$$\dfrac{ac_1 - |b_1|^2}{a}\geq C_2(s)n^3.$$
\noindent This proves the second estimate and concludes the proof of the lemma.
\end{proof}
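The bounds of Lemma \ref{lemma:overwhelming} can be spot-checked numerically from the defining sums; in the sketch below (ours) the explicit constant $C_2(s) = e^{-4s}/12$ is an example choice, not taken from the paper:

```python
from math import perm, exp

# Numerical check of (a c_k - |b_k|^2)/a <= e^{2s} n^{2k+1} on T_n(s), and of
# (a c_1 - |b_1|^2)/a >= (e^{-4s}/12) n^3, for the Kac kernel.
def entries(r2, n, k):
    """Return a, |b_k|^2, c_k at |z|^2 = r2 for the degree-n Kac kernel."""
    a = sum(r2 ** j for j in range(n + 1))
    bk2 = r2 ** k * sum(perm(j, k) * r2 ** (j - k) for j in range(k, n + 1)) ** 2
    ck = sum(perm(j, k) ** 2 * r2 ** (j - k) for j in range(k, n + 1))
    return a, bk2, ck

s = 1.0
for n in (50, 200):
    for r2 in (exp(-2 * s / n), 1.0, exp(2 * s / n)):   # |z|^2 across T_n(s)
        for k in (1, 2, 3):
            a, b2, c = entries(r2, n, k)
            assert (a * c - b2) / a <= exp(2 * s) * n ** (2 * k + 1)
        a, b2, c = entries(r2, n, 1)
        assert (a * c - b2) / a >= exp(-4 * s) / 12 * n ** 3
```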
\subsection{Existence of a giant component: proof of Theorem \ref{thm:giant}}
\label{sec:giant}
We now show that a giant component exists
with positive probability (independent of $n$).
\begin{lemma}\label{giant}
Consider a sequence of random polynomials $p_n(z) = \sum_{k=0}^{n}a_kz^k,$ where $a_k$ are i.i.d $\sim N_{\mathbb{C}}(0,1).$ Let $U_n$ be as in \eqref{lemn1} and let $r\in (0,1)$ be given.
Then, there exist $N = N(r)\in\mathbb{N}$ and $c_r > 0$ such that for all $n\geq N$
\begin{equation}\label{eq:giantdomain}
\mathbb{P}\left(B(0,r)\hspace{0.05in}\mbox{is contained in a component of $U_n$}\hspace{0.05in}\right) > c_r .
\end{equation}
\end{lemma}
\begin{proof}
For each $r\in (0,1),$ consider $g(r) = \sum_{k=0}^{\infty}|a_k|r^k.$ Then $g$ is a random function and $\mathbb{E}(g(r)) < \infty.$ Therefore, there exist $a_r, b_r > 0$ such that
$$\mathbb{P}\left(g(r) < a_r\right) > b_r > 0.$$
\noindent For a given $r\in (0,1)$ choose $N$ so that $r^N < \frac{1}{2a_r}.$ Then, for $n\geq N$
\begin{align*}
\mathbb{P}\left(\sup_{\partial B_r}|p_n| < 1 \right)& \geq \mathbb{P}\left(|a_0| + |a_1|r +\cdots+ |a_{N-1}|r^{N-1} < \frac{1}{2} ; r^N\sum_{j=N}^n|a_j|r^{j-N} < \frac{1}{2}\right), \\
&\geq\mathbb{P}\left(|a_0| + |a_1|r +\cdots+ |a_{N-1}|r^{N-1} < \frac{1}{2}\right)\mathbb{P}\left(r^Ng(r) <\frac{1}{2} \right) \\
& \geq\eta_{r}\mathbb{P}\left(g(r) < a_r\right)\\
& = \eta_{r} b_r,
\end{align*}
where $\eta_{r} = \mathbb{P}\left(|a_0| + |a_1|r +\cdots+ |a_{N-1}|r^{N-1} < \frac{1}{2}\right) > 0$ follows from the Gaussian nature of the coefficients. Note that we have used the independence of the
$a_k$'s to go from the first line to the second. This finishes the proof of the lemma.
\end{proof}
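The conclusion of the lemma can be illustrated by simulation; the Monte Carlo sketch below (ours, not the paper's argument) estimates $\mathbb{P}(\sup_{|z|=r}|p_n(z)| < 1)$ for $r = 0.3$ and checks that it stays bounded away from $0$ as $n$ grows (the threshold $0.05$ is a conservative check value):

```python
import numpy as np

# Estimate P(sup_{|z| = r} |p_n(z)| < 1) over a discretized circle.
rng = np.random.default_rng(7)
r, trials = 0.3, 1000
theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
circle = r * np.exp(1j * theta)

for n in (10, 50, 200):
    hits = 0
    for _ in range(trials):
        coeffs = (rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)) / np.sqrt(2)
        vals = np.polynomial.polynomial.polyval(circle, coeffs)  # low degree first
        hits += np.max(np.abs(vals)) < 1
    assert hits / trials > 0.05
```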
When the event in \eqref{eq:giantdomain} occurs,
the associated connected component of $\Lambda_n$ encloses $B(0,r)$,
and hence, by the isoperimetric inequality,
it has length at least $2\pi r$.
This proves Theorem \ref{thm:giant}.
\vspace{0.1in}
\noindent \textbf{Acknowledgements:} The authors are grateful to Antonio Lerario and Manjunath Krishnapur for helpful discussions and suggestions.
| {
"timestamp": "2017-02-02T02:02:53",
"yymm": "1610",
"arxiv_id": "1610.09791",
"language": "en",
"url": "https://arxiv.org/abs/1610.09791",
"abstract": "A polynomial lemniscate is a curve in the complex plane defined by $\\{z \\in \\mathbb{C}:|p(z)|=t\\}$. Erdös, Herzog, and Piranian posed the extremal problem of determining the maximum length of a lemniscate $\\Lambda=\\{ z \\in \\mathbb{C}:|p(z)|=1\\}$ when $p$ is a monic polynomial of degree $n$. In this paper, we study the length and topology of a random lemniscate whose defining polynomial has independent Gaussian coefficients. In the special case of the Kac ensemble we show that the length approaches a nonzero constant as $n \\rightarrow \\infty$. We also show that the average number of connected components is asymptotically $n$, and we observe a positive probability (independent of $n$) of a giant component occurring.",
"subjects": "Probability (math.PR); Classical Analysis and ODEs (math.CA); Complex Variables (math.CV)",
"title": "The arc length of a random lemniscate",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9830850847509661,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7095349870154678
} |
https://arxiv.org/abs/math/0703921 | Sparse Hypergraphs and Pebble Game Algorithms | A hypergraph $G=(V,E)$ is $(k,\ell)$-sparse if no subset $V'\subset V$ spans more than $k|V'|-\ell$ hyperedges. We characterize $(k,\ell)$-sparse hypergraphs in terms of graph theoretic, matroidal and algorithmic properties. We extend several well-known theorems of Haas, Lov{á}sz, Nash-Williams, Tutte, and White and Whiteley, linking arboricity of graphs to certain counts on the number of edges. We also address the problem of finding lower-dimensional representations of sparse hypergraphs, and identify a critical behaviour in terms of the sparsity parameters $k$ and $\ell$. Our constructions extend the pebble games of Lee and Streinu from graphs to hypergraphs. | \section{Introduction \labelsec{introduction}}
The focus of this paper is on $(k,\ell)$-sparse hypergraphs. A
hypergraph (or set system) is a pair $G=(V,E)$ with {\bf vertices}
$V$, $n=|V|$ and {\bf edges} $E$ which are subsets of $V$ (multiple
edges are allowed). If all the edges have exactly two vertices, $G$
is a (multi){\bf graph}. We say that a hypergraph is
$(k,\ell)$-sparse if no subset $V'\subset V$ of $n'=|V'|$ vertices
spans more than $kn'-\ell$ edges in the hypergraph. If, in addition,
$G$ has exactly $kn-\ell$ edges, we say it is $(k,\ell)$-tight.
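The definition can be checked directly by brute force over all vertex subsets; the sketch below (ours, exponential in $|V|$ and intended only for small examples) tests $(k,\ell)$-sparsity and tightness:

```python
from itertools import combinations

# Brute-force check: no subset V' spans more than k|V'| - l hyperedges.
# For simplicity the count is enforced on every nonempty subset.
def is_sparse(vertices, edges, k, l):
    for size in range(1, len(vertices) + 1):
        for sub in combinations(vertices, size):
            s = set(sub)
            spanned = sum(1 for e in edges if set(e) <= s)
            if spanned > k * size - l:
                return False
    return True

def is_tight(vertices, edges, k, l):
    return is_sparse(vertices, edges, k, l) and len(edges) == k * len(vertices) - l

# A spanning tree on 4 vertices is (1,1)-tight ...
tree = [(0, 1), (1, 2), (1, 3)]
assert is_tight(range(4), tree, 1, 1)
# ... while adding any further edge (here a hyperedge) breaks (1,1)-sparsity.
assert not is_sparse(range(4), tree + [(0, 2, 3)], 1, 1)
```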
The $(k,\ell)$-sparse graphs and hypergraphs have applications in
determining connectivity and arboricity (defined later). For some
special values of $k$ and $\ell$, the $(k,\ell)$-sparse graphs have
important applications to rigidity theory: bar-and-joint minimally
rigid frameworks in dimension 2, and body-and-bar structures in
arbitrary dimension are both characterized generically by sparse
graphs.
In this paper, we prove several equivalent characterizations of the
$(k,\ell)$-sparse hypergraphs, and give efficient algorithms for
three specific problems. The {\bf decision} problem asks if a
hypergraph $G$ is $(k,\ell)$-tight. The {\bf extraction} problem
takes an arbitrary hypergraph $G$ as input and returns as output a
maximum size (in terms of edges) $(k,\ell)$-sparse sub-hypergraph of
$G$. The {\bf components} problem takes a {\em sparse} $G$ as input
and returns as output the {\em maximal} $(k,\ell)$-tight induced
sub-hypergraphs of $G$.
The {\em dimension} of a hypergraph is its minimum edge size.
Hypergraphs of large dimension are difficult to visualize. We also address
the {\bf representation} problem, which asks for finding a suitably
defined lower-dimensional hypergraph in the same sparsity class, and
we identify a critical behaviour in terms of the sparsity parameters
$k$ and $\ell$.
There is a vast literature on sparse $2$-graphs (see Section
\ref{sec.related}), but not so much on hypergraphs. In this paper,
we carry over to the most general setting the characterization of
sparsity via pebble games from Lee and Streinu \cite{LeSt05}. Along
the way, we develop structural properties for sparse hypergraph
decompositions, identify the problem of lower dimensional
representations, give the proper hypergraph version of depth-first
search in a directed sense and apply the pebble game to efficiently
find lower-dimensional representations within the same sparsity
class.
Complete historical background is given in Section \ref{sec.related}.
In Section 2, we describe our pebble game for hypergraphs
in detail. The rest of the paper provides the proofs: Sections 3
and 4 address structural properties of sparse hypergraphs; Sections 5 and
6 relate graphs accepted by the pebble game with sparse hypergraphs; Section 7
addresses the questions of representing sparse hypergraphs by lower dimensional
ones.
\subsection{Preliminaries and related work\labelsec{preliminaries}}
In this section we give the definitions and describe the notation
used in the paper.
{\bf Note:} for simplification, we will often use {\em graph}
instead of {\em hypergraph} and {\em edge} instead of {\em
hyperedge}, when the context is clear.
\paragraph{Hypergraphs. \labelsec{hypergraphs}} Let $G=(V,E)$ be a
hypergraph, i.e. the edges of $G$ are subsets of $V$. A vertex $v\in
e$ is called an {\em endpoint} (or simply {\em end}) of the edge. We
allow parallel edges, i.e. multiple copies of the same edge.
For a subset $V'$ of the vertex set $V$, we define span($V'$), the
{\bf span} of $V'$, as the set of edges with all endpoints in $V'$:
$E(V')=\{e\in E : e\subset V'\}$. Similarly, for a subset $E'$ of
$E$, we define the span of $E'$ as the set of vertices in the union
of the edges: $V(E')=\bigcup_{e\in E'} e$. The {\bf hypergraph
dimension} (or dimension) of an edge is its number of elements. The
hypergraph dimension of a graph $G$ is its {\em minimum} edge
dimension. A graph in which each edge has dimension $s$ is called
{\bf $s$-uniform} or, more succinctly, a {\bf $s$-graph}. So what is
typically called a graph in the literature is a $2$-graph, in our
terminology. \reffig{hypergraph-examples} shows two examples of
hypergraphs.
\begin{figure
\centering
\subfigure[]{\includegraphics[height=1.0 in]{hyper-tree-1}}
\hspace{.3in}
\subfigure[]{\includegraphics[height=1.0 in]{hyper-tree-2}}
\caption{Two hypergraphs. The hypergraph in (a) is 3-uniform; (b) is 2-dimensional but not
a 2-graph.}
\label{fig.hypergraph-examples}
\end{figure}
We say that a hypergraph $H=(V,F)$ {\bf represents} a hypergraph
$G=(V,E)$ with respect to some property ${\cal P}$, if both $H$ and
$G$ satisfy the property, and there is an isomorphism $f$ from $E$
to $F$ such that $f(e)\subset e$ for all $e\in E$. In this paper, we
are primarily concerned with representations which preserve
sparsity. In our figures, we visually present hypergraphs as their
lower dimensional representations when possible, as in
\reffig{representations}. We observe that representations with
respect to sparsity are not unique, as shown in \reffig{notunique}.
\begin{figure
\centering
\subfigure[]{\label{fig.represented-example-1}\includegraphics[height=1.0
in]{represented-example-1}}
\hspace{.3in}
\subfigure[]{\label{fig.represented-example-2}\includegraphics[height=1.0
in]{represented-example-2}} \caption{Lower dimensional
representations. In both cases, the $2$-uniform graph on the right
(a tree) represents the hypergraph on the left (a hypergraph tree)
with respect to $(1,1)$-sparsity. The $2$-dimensional
representations of edges have similar styles to the edges they represent
and are labeled with the vertices of the hyperedge.}
\label{fig.representations}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[height=1 in]{represented-tree-1}
\caption{Lower dimensional representations are not unique. Here we
show two 2-uniform representations of the same hypergraph with
respect to $(1,1$)-sparsity.}\label{fig.notunique}
\end{figure}
The standard concept of {\bf degree} of a vertex $v$ extends
naturally to hypergraphs, and is defined as the number of edges to
which $v$ belongs. The degree of a set of vertices $V'$ is the
number of edges with at least one endpoint in $V'$ and another in
$V-V'$.
An {\bf orientation} of a hypergraph is given by identifying as the
{\bf tail} of each edge one of its endpoints.
\reffig{oriented-example} shows an oriented hypergraph and a lower
dimensional representation of the same graph.
\begin{figure}[htbp]
\centering
\includegraphics[height=1.0 in]{oriented-example-1}
\caption{An oriented 3-uniform hypergraph. On the left, the tail of each edge is indicated by the style of the vertex.
In the 2-uniform representation on the right, the edges are shown as directed arcs.}
\label{fig.oriented-example}
\end{figure}
In an oriented hypergraph, a {\bf path} from a vertex $v_1$ to a
vertex $v_t$ is given by a sequence
\begin{eqnarray}
v_1,e_1,v_2,e_2,\ldots,v_{t-1},e_{t-1},v_t
\end{eqnarray}
where $v_i$ is the tail of $e_i$ and $v_{i+1}$ is an endpoint of
$e_i$, for $1\le i\le t-1$.
The concepts of in-degree and out-degree extend to oriented
hypergraphs. The out-degree of a vertex $v$ is the number of edges that
identify $v$ as the tail and connect $v$ to $V-v$; the in-degree of $v$ is
the number of edges containing $v$ that do not identify it as the tail. The
out-degree of a subset $V'$ of $V$ is the number of edges with the
tail in $V'$ and at least one endpoint in $V-V'$; the in-degree of
$V'$ is defined symmetrically. It is easy to check that the
out-degree and in-degree of $V'$ sum to the undirected degree of
$V'$. Notice that loops (one-dimensional edges) contribute nothing
to the out-degree of a vertex-set.
We use the notation $N_{G}(V')$ to denote the set of neighbors in
$G$ of a subset $V'$ of $V$.
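The degree counts for a vertex set can be sketched as follows. This is our own illustration (the paper gives no code); an oriented hypergraph is represented as a list of (edge, tail) pairs.

```python
# Degree, out-degree, and in-degree of a vertex set V', following the
# definitions above. Note that a loop (one-dimensional edge) never has
# an endpoint outside V' when its vertex is in V', so it contributes
# nothing to the out-degree, matching the text.

def degree(oriented_edges, vset):
    """Edges with at least one endpoint in vset and one outside it."""
    vset = set(vset)
    return sum(1 for e, _ in oriented_edges
               if set(e) & vset and set(e) - vset)

def out_degree(oriented_edges, vset):
    """Edges whose tail is in vset and with an endpoint outside vset."""
    vset = set(vset)
    return sum(1 for e, tail in oriented_edges
               if tail in vset and set(e) - vset)

def in_degree(oriented_edges, vset):
    """Edges whose tail is outside vset and with an endpoint in vset."""
    vset = set(vset)
    return sum(1 for e, tail in oriented_edges
               if tail not in vset and set(e) & vset)

# Example (ours): a 3-edge with tail 1, a 2-edge with tail 4, a loop at 2.
H = [({1, 2, 3}, 1), ({3, 4}, 4), ({2}, 2)]
V1 = {1, 2}
assert out_degree(H, V1) + in_degree(H, V1) == degree(H, V1)
print(degree(H, V1), out_degree(H, V1), in_degree(H, V1))  # 1 1 0
```

The final assertion checks the claim in the text that out-degree and in-degree of a set sum to its undirected degree.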
The standard depth-first search algorithm in directed graphs,
starting from a source vertex $v$, extends naturally to oriented
hypergraphs: recursively explore the graph from the unexplored
neighbors of $v$, one after another (ending when it has no
unexplored neighbors left). We will use it in the implementation of
the pebble game to explore vertices of hypergraphs.
\reffig{dfs} shows the depth-first exploration of a hypergraph.
Notice that the picture uses a uniform $2$-dimensional
representation for a $3$-hypergraph (the hyperedges should be clear
from the labels on the $2$-edges representing them).
\begin{figure}[htbp]
\centering
\subfigure[]{\label{fig.dfs-1}\includegraphics[height=1 in]{dfs-example-1}}
\subfigure[]{\label{fig.dfs-2}\includegraphics[height=1 in]{dfs-example-2}}
\subfigure[]{\label{fig.dfs-3}\includegraphics[height=1 in]{dfs-example-3}}
\subfigure[]{\label{fig.dfs-4}\includegraphics[height=1 in]{dfs-example-4}}
\subfigure[]{\label{fig.dfs-5}\includegraphics[height=1 in]{dfs-example-5}}
\subfigure[]{\label{fig.dfs-6}\includegraphics[height=1 in]{dfs-example-6}}
\subfigure[]{\label{fig.dfs-7}\includegraphics[height=1 in]{dfs-example-7}}
\caption{Searching a hypergraph with depth-first search starting at vertex $e$.
Visited edges and vertices are shown with thicker lines.
The search proceeds across an edge from the tail to each of the other endpoints and
backs up at an edge when all its endpoints have been visited
(as in the transition from (b) to (c)). }
\label{fig.dfs}
\end{figure}
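The traversal just described can be sketched in a few lines. This is our own simplified version, not the paper's implementation: from the current vertex we cross each edge that has it as tail, visiting the edge's other endpoints recursively.

```python
# Depth-first search on an oriented hypergraph, given as a list of
# (edge, tail) pairs. The search proceeds across an edge from the tail
# to each of the other endpoints, as in the figure.

def dfs(oriented_edges, start):
    """Return the set of vertices reachable from `start`."""
    visited = set()

    def explore(v):
        visited.add(v)
        for e, tail in oriented_edges:
            if tail == v:                # cross e from its tail...
                for w in e:              # ...to each other endpoint
                    if w not in visited:
                        explore(w)

    explore(start)
    return visited

# Example (ours): edges {a,b,c} with tail a, {c,d} with tail c, {d,e} with tail e.
H = [({'a', 'b', 'c'}, 'a'), ({'c', 'd'}, 'c'), ({'d', 'e'}, 'e')]
print(sorted(dfs(H, 'a')))   # ['a', 'b', 'c', 'd']; 'e' is not reachable
```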
Table \ref{tab.hypergraph-terminology} gives a summary of the
terminology in this section.
\begin{table}
\centering
\begin{tabular}{|l|l|l|}
\hline
{\bf Term} & {\bf Notation} & {\bf Meaning} \\
\hline
\hline
Edge & $e$ & $e\subset V$
\\
\hline
Graph & $G=(V,E)$ & $V$ is a finite set of vertices; $E\subset 2^V$ is a set of edges \\
\hline
Subset of vertices & $V'$ & $V'\subset V$ \\
\hline
Size of $V'$ & $n'$ & $\card{V'}$ \\
\hline
Subset of edges & $E'$ & $E'\subset E$ \\
\hline
Size of a subset of edges & $m'$ & $\card{E'}$ \\
\hline
Span of $V'$ & $E(V')$ & Edges in $E$ that are subsets of $V'$ \\
\hline
Span of $E'$ & $V(E')$ & Vertices in the union of $e\in E'$ \\
\hline
Dimension of $e\in E$ & $|e|$ & Number of elements in $e$ \\
\hline
Dimension of $G$ & $s$ & Minimum dimension of an edge in $E$. \\
\hline
Max size of an edge & $s^*$ & Maximum size of an edge in $E$ \\
\hline
Neighbors of $V'$ in $G$ & $N_G(V')$ & Vertices connected to some $v\in V'$ \\
\hline
\end{tabular}
\caption{Hypergraph terminology used in this paper.}
\label{tab.hypergraph-terminology}
\end{table}
\paragraph{Sparse hypergraphs.\labelsec{sparse}} A graph is {\bf
$(k,\ell)$-sparse} if every subset $V'$ of $n'$ vertices, with span
$E'=E(V')$ of size $m'=|E'|$, satisfies:
\begin{eqnarray}
m' \le kn'-\ell
\labeleq{subset}
\end{eqnarray}
A sparse graph that has exactly $kn-\ell$ edges is called {\bf
tight}; \reffig{2-map-tight} shows a $(2,0)$-tight hypergraph. A
graph that is not sparse is called {\bf dependent}.
A simple observation, formalized below in
\reflem{sparse-graph-rank}, implies that $0\leq \ell \leq s k -1$,
for sparse hypergraphs of dimension $s$. {\em From now on, we will
work with parameters $k, \ell$ and $s$ satisfying this condition.}
We also define $K_n^{k,\ell}$ as the complete hypergraph with edge
multiplicity $ks-\ell$ for $s$-edges. For example $K_n^{k,0}$ has:
$k$ loops on every vertex, $2k$ copies of every $2$-edge, $3k$
copies of every $3$-edge, and so on.
\reflem{loops-and-parallel-edges} shows that every sparse graph is a
subgraph of $K_n^{k,0}$.
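The sparsity counts can be checked by brute force on small examples. The sketch below is ours (exponential in the number of edges, for intuition only) and follows the edge-subset form of the count from Table \ref{tab.sparse-terminology}, with $n'=\card{V(E')}$.

```python
# Brute-force (k, l)-sparsity check: every non-empty subset of m' edges
# spanning n' vertices must satisfy m' <= k*n' - l. Exponential time;
# intended only to illustrate the definition on small graphs.
from itertools import combinations

def is_sparse(edges, k, l):
    for r in range(1, len(edges) + 1):
        for esub in combinations(edges, r):
            nprime = len(set().union(*(set(e) for e in esub)))
            if r > k * nprime - l:
                return False
    return True

def is_tight(vertices, edges, k, l):
    """Sparse with exactly k*n - l edges."""
    return is_sparse(edges, k, l) and len(edges) == k * len(vertices) - l

# K_3 as a 2-graph is (2,3)-tight (a Laman graph): 3 = 2*3 - 3,
# and every edge subset satisfies the count.
V = [1, 2, 3]
E = [{1, 2}, {2, 3}, {1, 3}]
print(is_tight(V, E, 2, 3))   # True
```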
\begin{figure}[htbp]
\centering
\includegraphics[height=1.5 in]{2-map-with-hyperedges}
\caption{A (2,0)-tight hypergraph decomposed into two $(1,0)$-tight ones (gray and black).}
\label{fig.2-map-tight}
\end{figure}
A sparse graph $G$ is {\bf critical} if the only representation of $G$ that is
sparse is $G$ itself. In terms of $B_{G}$, this means that no proper
subgraph $B'$ of $B_{G}$ corresponds to a hypergraph that is
sparse.
There are two important types of subgraphs of sparse graphs. A {\bf block} is a
tight subgraph of a sparse graph. A {\bf component}
is a maximal block.
In this paper, we study five computational problems. The {\bf decision} problem
asks if a graph $G$ is $(k,\ell)$-tight. The {\bf extraction} problem takes a graph
$G$ as input and returns as output a maximum $(k,\ell)$-sparse subgraph of
$G$. The {\bf optimization} problem is a variant of the {\bf extraction} problem;
it takes as its input
a graph $G$ and a weight function on $E$ and returns as
its output a minimum weight maximum $(k,\ell)$-sparse subgraph of $G$.
The {\bf components} problem takes a graph $G$ as input and returns
as output the components of $G$. The {\bf representation} problem takes
as input a sparse graph $G$ and returns as output a sparse graph $H$
that represents $G$ and has lower dimension if this is possible.
\begin{table}
\centering
\begin{tabular}{|l|l|}
\hline
{\bf Term} & {\bf Meaning} \\
\hline
\hline
Sparse graph $G$ & $m'\leq kn'-\ell$ for all subsets $E'$, with $m'=|E'|$ and $n'=\card{V(E')}$ \\
\hline
Tight graph $G$ & $G$ is sparse with $kn-\ell$ edges. \\
\hline
Dependent graph $G$ & $G$ is not sparse \\
\hline
Block $H$ in $G$ & $G$ is sparse, and $H$ is a tight subgraph \\
\hline
Component $H$ of $G$ & $G$ is sparse and $H$ is a maximal block \\
\hline
Decision problem & Decide if a graph $G$ is sparse \\
\hline
Extraction problem & Given $G$, find a maximum sized sparse subgraph $H$ \\
\hline
Optimization problem & Given $G$, find a minimum weight maximum sized sparse subgraph $H$ \\
\hline
Components problem & Given $G$, find the components of $G$ \\
\hline
Representation problem & Given a sparse $G$, find a sparse representation of lower
dimension\\
\hline
\end{tabular}
\caption{Sparse graph terminology used in this paper.}
\label{tab.sparse-terminology}
\end{table}
Table \ref{tab.sparse-terminology} summarizes the notation and terminology
related to sparseness used in this paper.
While the definitions in this section are made for families of
sparse graphs, they can be interpreted in terms of matroids and
rigidity theory. Table \ref{tab.sparse-concepts} relates the
concepts in this section to matroids and generic rigidity, and can
be skipped by readers who are not familiar with these fields.
\begin{table}
\centering
\begin{tabular}{|l|l|l|}
\hline
{\bf Sparse graphs} & {\bf Matroids} & {\bf Rigidity} \\
\hline
\hline
Sparse & Independent & No over-constraints \\
\hline
Tight & Independent and spanning & Isostatic/minimally rigid \\
\hline
Block & --- & Isostatic region \\
\hline
Component & --- & Maximal isostatic region \\
\hline
Dependent & Contains a circuit & Has stressed regions \\
\hline
\end{tabular}
\caption{Sparse graph concepts and analogs in matroids and rigidity.}
\label{tab.sparse-concepts}
\end{table}
\paragraph{Fundamental hypergraphs.} A {\bf map} is a hypergraph that admits
an orientation such that the out-degree of every vertex is exactly
one. A $k$-{\bf map} is a graph that admits a decomposition into $k$
disjoint maps. \reffig{2-map-oriented} shows a $2$-map, with an
orientation of the edges certifying that the graph is a $2$-map.
\begin{figure}[htbp]
\centering
\includegraphics[height=1.5 in]{2-map-oriented}
\caption{The hypergraph from \reffig{2-map-tight}, shown here in a
lower-dimensional representation, is a 2-map. The maps are black
and gray.
Observe that each vertex is the tail of one black edge and one gray one.}\label{fig.2-map-oriented}
\end{figure}
An edge $e$ {\bf connects} subsets $X$ and $Y$ of $V$ if $e$ has an
end in both $X$ and $Y$. A graph is {\bf $k$-edge connected} if
$\card{E(X,V-X)}\ge k,$
for any subset $X$ of $V$, where $E(X,Y)$ is the set of edges connecting
$X$ and $Y$.
A graph is {\bf $k$-partition connected} if
\begin{eqnarray}
\card{\bigcup_{i\neq j} E(P_i,P_j)}\ge k(t-1)
\label{partition}
\end{eqnarray}
for any partition $\mathcal{P}=\{P_1,P_2,\ldots,P_t\}$ of $V$.
This definition appears in \cite{frank2001}.
A {\bf tree} is a minimally 1-partition connected graph. We remind the
reader that this defines a {\em tree} in a {\em hypergraph}; we use the
shortened terminology and drop the {\em hyper} prefix. A
{\bf $k$-arborescence} is a graph that admits a decomposition into $k$
disjoint trees. For $2$-graphs, the definitions of partition
connectivity and edge connectivity coincide by the well-known
theorems of Tutte \cite{tutte61} and Nash-Williams \cite{Na61}. We
also observe that for general hypergraphs, connectivity and
$1$-partition-connectivity are different; a hypergraph with a single
edge containing every vertex is connected but not partition
connected.
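Partition connectivity can also be checked by brute force on small instances. This sketch is ours; it enumerates all set partitions of $V$, which is feasible only for a handful of vertices.

```python
# Brute-force k-partition-connectivity check: for every partition of V
# into t >= 2 parts, the edges meeting two or more parts must number at
# least k*(t - 1).

def partitions(items):
    """Yield all set partitions of `items` as lists of sets."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [p[i] | {first}] + p[i + 1:]
        yield p + [{first}]

def is_partition_connected(vertices, edges, k):
    for p in partitions(list(vertices)):
        t = len(p)
        if t < 2:
            continue
        crossing = sum(1 for e in edges
                       if sum(1 for part in p if set(e) & part) >= 2)
        if crossing < k * (t - 1):
            return False
    return True

# The example from the text: a single hyperedge containing every vertex
# is connected, but not 1-partition connected (one crossing edge cannot
# cover t - 1 = 2 for the partition into singletons).
V = [1, 2, 3]
E = [{1, 2, 3}]
print(is_partition_connected(V, E, 1))   # False
```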
\subsection{Related work \labelsec{related}} Our results expand
theorems spanning graph theory, matroids and algorithms. By
treating the problem in the most general setting, we will obtain
many of the results listed in this section as corollaries of our
more general results.
In this paragraph, we use {\em graph} in its usual sense, i.e.
as a $2$-uniform hypergraph.
\paragraph{Graph Theory and Rigidity Theory.} Sparsity is
closely related to graph arborescence. The well-known results of
Tutte \cite{tutte61} and Nash-Williams \cite{Na61} show the
equivalence of
$(k,k)$-tight
graphs and graphs that can
be decomposed into $k$ edge-disjoint spanning trees. A theorem of Tay
\cite{Ta1,Ta2} relates such graphs to generic rigidity of
bar-and-body structures in arbitrary geometric dimension. The
$(2,3)$-tight $2$-dimensional graphs play an important role in
rigidity theory. These are the generically minimally rigid graphs
\cite{laman70} (also known as Laman graphs), and have been studied
extensively. Results of Recski \cite{Re84I,Re84II} and Lovász
and Yemini \cite{lovasz:yemini} relate them to adding any edge to
obtain a $2$-arborescence. The most general results on $2$-graphs
were proven by Haas in \cite{haas:2002}, who shows the equivalence
of $(k,k+a)$-sparse graphs and graphs which decompose into $k$
edge-disjoint spanning trees after the addition of any $a$ edges. In
\cite{maps} Haas et al. extend this result to graphs that
decompose into edge-disjoint spanning maps, showing that
$(k,\ell)$-sparse graphs are those that admit such a map
decomposition after the addition of any $\ell$ edges.
For hypergraphs, Frank et al. study the $(k,k)$-sparse case in
\cite{frank2001}, generalizing the Tutte and Nash-Williams theorems
to partition connected hypergraphs.
\paragraph{Matroids.} Edmonds \cite{Ed65} used a matroid union approach to
characterize the $2$-graphs that can be decomposed into $k$ disjoint
spanning trees and described the first algorithm for recognizing
them. White and Whiteley \cite{whiteley:matroids} first recognized
the matroidal properties of general $(k,\ell)$-sparse graphs.
In \cite{whiteley:union-matroids}, Whiteley used a classical
theorem of Pym and Perfect \cite{pym-perfect} to show that the
$(k,\ell)$-tight $2$-graphs are exactly those that decompose into an
$\ell$-arborescence and $(k-\ell)$-map for $0\le \ell\le k$.
In the hypergraph setting, Lorea \cite{lorea} described the first
generalization of graphic matroids to hypergraphs. In
\cite{frank2001}, Frank et al. used a union matroid approach to
extend the Tutte and Nash-Williams theorems to arbitrary
hypergraphs.
\paragraph{Algorithms.} Our algorithms generalize the $(k,\ell)$-sparse
graph pebble games of Lee and Streinu \cite{LeSt05}, which in turn
generalize the pebble game of Jacobs and Hendrickson \cite{JaHe97}
for planar rigidity (which would be a $(2,3)$-pebble game in the
sense of \cite{LeSt05}). The elegant pebble game of \cite{JaHe97},
first analyzed for correctness in \cite{Be03}, was intended to be an
easily implementable alternative to the algorithms based on
bipartite matching discovered by Hendrickson in
\cite{hendrickson-thesis}.
The running time analysis of the $(2,3)$-pebble game in \cite{Be03}
showed its running time to be dominated by $O(n^{2})$ queries about
whether two vertices are in the span of a rigid component. This
leads to a data structure problem, considered explicitly in
\cite{LeSt05,cccg}, where it is shown that the running time of the
general $(k,\ell)$-pebble game algorithms on $2$-graphs is $O(n^2)$.
For certain special cases of $k$ and $\ell$, algorithms with better
running times have been discovered for $2$-multigraphs. Gabow and
Westermann \cite{GaWe88} used a matroid union approach to achieve a
running time of $O(n^{3/2})$ for the {\bf extraction} problem when
$\ell\le k$. They also find the set of edges that are in some
component, which they call the {\bf top clump}, with the same
running time as their extraction algorithm. We observe that the
{\bf top clump} problem coincides with the components problem only
for the $\ell=0$ case. Gabow and Westermann also derive an
$O(n^{3/2})$ algorithm for the {\bf decision} problem for
$(2,3)$-sparse (Laman) graphs, which is of particular interest due
to the importance of Laman graphs in many rigidity applications.
Using a matroid intersection approach, Gabow \cite{gabow1995}
also gave an $O((m+n)\log n)$ algorithm for the extraction problem
for $(k,k)$-sparse $2$-graphs.
\subsection{Our Results \labelsec{results}} We describe our results
in this section.
\paragraph{The structure of sparse hypergraphs.} We first describe
conditions for the existence of tight hypergraphs and analyze the
structure of the components of sparse ones. The theorems of this
section are generalizations of results from \cite{LeSt05,szego} to
hypergraphs of dimension $s\ge 3$.
\begin{theorem}[{\bf Existence of tight hypergraphs}]\labelthm{good-range}
There exists an $n_1$ depending on $s$, $k$ and $\ell$ such that
uniform tight graphs on $n$ vertices exist for all values of $n\ge
n_1$. In the smaller range $n<n_1$, such tight graphs may not exist.
\end{theorem}
\begin{theorem}[{\bf Block Intersection and Union}]
If $B_1$ and $B_2$ are blocks of a sparse graph $G$, $0\le \ell\le
ik$, and $B_1$ and $B_2$ intersect on at least $i$ vertices, then
$B_1\cup B_2$ is a block and the subgraph induced by $V(B_1)\cap
V(B_2)$ is a block. \labelthm{block-structure}
\end{theorem}
\begin{theorem}[{\bf Disjointness of Components}]
If $C_1$ and $C_2$ are components of a sparse graph $G$, then
$E(C_1)$ and $E(C_2)$ are disjoint and $\card{V(C_1)\cap V(C_2)}<
s$. If $\ell\leq k$, then the components are vertex disjoint. If
$\ell=0$, then there is only one component.
\labelthm{component-structure}
\end{theorem}
\paragraph{Hypergraph decompositions.} Extending the results of Tutte
\cite{tutte61}, Nash-Williams \cite{Na61}, Recski
\cite{Re84I,Re84II}, Lovász and Yemini \cite{lovasz:yemini},
Haas et al. \cite{haas:2002,maps},
and Frank et al. \cite{frank2001}, we characterize the hypergraphs
that become $k$-arborescences after the addition of any $\ell$
edges.
\begin{theorem}[{\bf Generalized Lov{\'{a}}sz-Recski Property}]
Let $G$ be a $(k,\ell)$-tight hypergraph with $\ell\ge k$. Then the
graph $G'$ obtained by adding any $\ell-k$ edges of dimension at
least 2 to $G$ is a $k$-arborescence.
\labelthm{trees-after-adding-any}
\end{theorem}
In particular, the important special case in which $k=\ell$ was
proven by Frank et al. \cite{frank2001}.
\paragraph{Decompositions into maps.} We also extend the results of Haas
et al. \cite{maps} to hypergraphs. This theorem can also be seen
as a generalization of the characterization of Laman graphs in
\cite{hendrickson-thesis}.
\begin{theorem}[{\bf Generalized Nash-Williams-Tutte Decompositions}]
A graph $G$ is a $k$-map if and only if $G$ is $(k,0)$-tight.
\labelthm{k-maps-are-tight}
\end{theorem}
\begin{theorem}[{\bf Generalized Haas-Lov{\'{a}}sz-Recski Property for Maps}]
The graph $G'$ obtained by adding any $\ell$ edges from $K_n^{k,0}-G$ to a
$(k,\ell)$-tight graph $G$ is a $k$-map.
\labelthm{maps-after-adding-any}
\end{theorem}
Using a matroid approach, we also generalize a theorem of Whiteley
\cite{whiteley:union-matroids} to hypergraphs.
\begin{theorem}[{\bf Maps and Trees Decomposition}]
Let $k\ge \ell$ and $G$ be tight. Then $G$ is the union of an
$\ell$-arborescence and a $(k-\ell)$-map. \labelthm{maps-and-trees}
\end{theorem}
\paragraph{Pebble game constructible graphs.} The main theorem of this
paper, generalizing from $s=2$ in \cite{LeSt05} to hypergraphs of any
dimension, is that the matroidal families of sparse graphs coincide
with the pebble game graphs.
\begin{theorem}[{\bf Main Theorem: Pebble Game Constructible
Hypergraphs}]
Let $k$, $\ell$, $n$ and $s$ meet the conditions of
\refthm{good-range}. Then a hypergraph $G$ is sparse if and only if
it has a pebble game construction.
\labelthm{sparse-graphs-are-pebble-game-graphs}
\end{theorem}
\paragraph{Pebble game algorithms.}
We also generalize the pebble game {\em algorithms} of
\cite{LeSt05} to hypergraphs. We present two algorithms, the {\bf
basic pebble game} and the {\bf pebble game with components}.
We show that on an $s$-uniform input $G$ with $n$ vertices and $m$
edges, the basic pebble game solves the {\bf decision} problem in
time $O((s+\ell)sn^2)$ and space $O(n)$. The {\bf extraction}
problem is solved by the basic pebble game in time $O((s+\ell)snm)$
and space $O(n+m)$. For the {\bf optimization} problem, the basic
pebble game uses time $O((s+\ell)snm+m\log m)$ and space $O(n+m)$.
On an $s$-uniform input $G$ with $n$ vertices and $m$ edges, the
pebble game with components solves the {\bf decision}, {\bf
extraction}, and {\bf components} problems in time
$O((s+\ell)sn^s+m)$ and space $O(n^s)$. For the optimization
problem, the pebble game with components takes time
$O((s+\ell)sn^s+m\log m)$.
\paragraph{Critical representations.}
As an application of the pebble game,
we obtain lower-dimensional representations for certain classes of
sparse hypergraphs, generalizing a result from Lovász
\cite{lovasz-representation} concerning lower-dimensional representations for (hypergraph) trees.
\begin{theorem}[{\bf Lower Dimensional and Critical Representations}]
\labelthm{representation} $G$ is a critical sparse hypergraph of
dimension $s$ if and only if the representation found by the pebble
game construction coincides with $G$. This implies that $G$ is
$s$-uniform and $\ell \leq sk-1$.
\end{theorem}
The proof of \refthm{representation} is based on a modified version of
the pebble game (described below) that solves the {\bf representation}
problem. Its complexity is the same as that of the pebble game with
components: time $O((s+\ell)sn^s+m)$ and space $O(n^s)$ on an $s$-graph.
As corollaries to \refthm{representation}, we obtain:
\begin{corollary}[Lovász \cite{lovasz-representation}]
$G$ is an $s$-dimensional $k$-arborescence if and only if it is
represented by a 2-uniform $k$-arborescence $H$.
\end{corollary}
\begin{corollary}
$G$ is a $k$-map if and only if it is represented by a $k$-map with
edges of dimension $1$.
\end{corollary}
\begin{corollary}
$G$ has a maps-and-trees decomposition if and only if $G$ is
represented by a graph with edges of dimension at most 2 that has a
maps-and-trees decomposition.
\end{corollary}
\section{The pebble game} The {\bf pebble game} is a family of
algorithms indexed by nonnegative integers $k$ and $\ell$.
The game is played by a single player on a fixed finite set of
vertices. The player makes a finite sequence of moves; a move
consists of the addition and/or orientation of an edge. At any
moment of time, the state of the game is captured by a graph: we
call it a {\bf pebble game graph}.
Later in this paper, we will use the pebble game as the basis of
efficient algorithms for the computational problems defined above in
\refsec{sparse}.
We describe the pebble game in terms of its initial configuration
and the allowed moves.
{\bf Initialization:} in the beginning of the pebble game, $H$ has
$n$ vertices and no edges. We start by placing $k$ pebbles on each
vertex of $H$.
{\bf Add edge:} Let $e\subset V$ be a set of vertices with at least
$\ell+1$ pebbles on it. Add $e$ to $E(H)$. Pick up a pebble from
any $v\in e$, and make $v$ the tail of $e$.
\reffig{colored-add-edge} shows an example of this move in the $(2,2)$-pebble game.
\begin{figure}[htbp]
\centering
\subfigure[]{\includegraphics[width=2 in]{colored-add-edge}}
\hspace{.3 in}
\subfigure[]{\includegraphics[width=2 in]{add-edge-ex2}}
\subfigure[]{\includegraphics[width=2 in]{add-edge-ex3}}
\caption{Adding a $3$-edge in the $(2,2)$-pebble game. In all cases, the edge,
shown as a triangle, may be added because there are at least three pebbles present.
The tail of the new edge is filled in; note that in (c) only one of the pebbles
on the tail is picked up.}
\label{fig.colored-add-edge}
\end{figure}
{\bf Pebble shift:} Let $v$ be a vertex with at least one pebble on it,
and let $e$ be an edge with $v$ as one of its ends, and with tail $w$.
Move the pebble from $v$ to $w$ and make $v$ the tail of $e$.
\reffig{colored-pebble-shift} shows an example of this move in the $(2,2)$-pebble game.
The output of playing the pebble game is its complete configuration, which includes
an oriented pebble game graph.
\begin{figure}[htbp]
\centering
\subfigure[]{\includegraphics[width=2 in]{colored-pebble-shift}}
\hspace{.3 in}
\subfigure[]{\includegraphics[width=2 in]{pebble-shift-ex2}}
\caption{Moving a pebble along a $3$-edge in the $(2,2)$-pebble game. The
tail of the edge is filled in. Observe that in (b) the only change is to the
orientation of the edge and the location of the pebble that moved.}
\label{fig.colored-pebble-shift}
\end{figure}
{\bf Output:} At the end of the game, we obtain the oriented hypergraph $H$,
and a map $\ensuremath{\operatorname{peb}}$ from $V$ to $\mathbb{N}$ such that for each vertex $v$,
$\ensuremath{\operatorname{peb}} (v)$ is the number of pebbles on $v$.
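The two moves can be sketched in code. This is our own simplified illustration of the rules above, not the paper's algorithm: the tail chosen by {\bf add edge} is an arbitrary pebbled endpoint, and the depth-first search that gathers pebbles before a move is omitted.

```python
# A minimal (k, l)-pebble game state with the two moves above.
# State: a pebble count per vertex and a list of (edge, tail) pairs.

class PebbleGame:
    def __init__(self, vertices, k, l):
        self.k, self.l = k, l
        self.peb = {v: k for v in vertices}   # k pebbles per vertex
        self.edges = []                       # oriented edges (set, tail)

    def add_edge(self, e):
        """Add e if its endpoints hold at least l+1 pebbles."""
        e = frozenset(e)
        if sum(self.peb[v] for v in e) < self.l + 1:
            return False
        # Pick up a pebble from any endpoint that has one; it becomes the tail.
        tail = max(e, key=lambda v: self.peb[v])
        self.peb[tail] -= 1
        self.edges.append((e, tail))
        return True

    def shift(self, v, i):
        """Move a pebble from v along edge i to its tail; v becomes the tail."""
        e, w = self.edges[i]
        assert v in e and self.peb[v] >= 1 and v != w
        self.peb[v] -= 1
        self.peb[w] += 1
        self.edges[i] = (e, v)

# Adding a 3-edge in the (2,2)-pebble game, as in the figures above:
g = PebbleGame([1, 2, 3], k=2, l=2)
print(g.add_edge({1, 2, 3}))   # True: 6 pebbles present >= l + 1 = 3
```

Note that both moves preserve the invariant that each vertex's pebbles plus the edges it is tail of sum to $k$, which is the accounting behind the sparsity proofs later in the paper.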
\paragraph{Comparison to Lee and Streinu.}
The hypergraph pebble game extends the framework
developed in \cite{LeSt05} for $2$-graphs.
The main challenge was to come up with the concept of orientation of
hyperedges and of moving the pebbles in a way that generalizes
depth-first search for $2$-graphs. Specializing our
algorithm to $2$-uniform hypergraphs gives back the algorithm of
\cite{LeSt05}.
\section{Properties of sparse hypergraphs\labelsec{hypersparse}}
We next develop properties of sparse graphs, starting with the
conditions on $s$, $k$, $\ell$ and $n$ for which there are tight
graphs.
\begin{lemma}
If $\ell\ge ik$ and $G$ is sparse, then $s>i$.
\labellem{sparse-graph-rank}
\end{lemma}
\begin{proof}
If $i\ge s$, then $\ell\ge sk$, so for any edge $e$ of dimension $s$ the
ends of $e$ span one edge while $ks-\ell\le 0$, and \refeq{subset} fails.
\end{proof}
As an immediate corollary, we see that the class of uniform sparse graphs is
trivial when $\ell\ge sk$.
\begin{lemma}
If $\ell\ge sk$, then the class of $s$-uniform $(k,\ell)$-sparse
graphs contains only the empty graph.
\labellem{when-no-sparse-graphs}
\end{lemma}
We also observe that when $\ell<0$, the union of two disjoint sparse graphs need not
be sparse. Since this is a desirable property, for the moment we focus on the case
in which $\ell\ge 0$. Our next task is to further subdivide this range.
\begin{lemma}
Let $G$ be sparse and uniform. The multiplicity of parallel edges
in $G$ is at most $sk-\ell$. \labellem{loops-and-parallel-edges}
\end{lemma}
\begin{proof}
\refeq{subset} holds for no more than $sk-\ell$ parallel edges of
dimension $s$.
\end{proof}
The next lemmas establish a range of parameters for which there are
tight graphs.
\begin{lemma}
Let $\ell\ge (s-1)k$. There are no tight subgraphs on $n<s$ vertices.
\labellem{lower-trivial-range}
\end{lemma}
\begin{proof}
By \reflem{loops-and-parallel-edges} no sparse subgraph may contain edges
of dimension less than $s$.
\end{proof}
\begin{lemma}
If $\ell\ge (s-1)k$ then there is an $n_1$ depending on $s$, $k$ and
$\ell$ such that for $n\ge n_1$ there exist tight $s$-uniform graphs
on $n$ vertices. For $n<n_1$, there may not be tight uniform graphs.
\labellem{bad-range}
\end{lemma}
\begin{proof}
When $\ell\ge (s-1)k$ there are no loops in any sparse graph. Also, by
\reflem{loops-and-parallel-edges} no edge in a uniform graph
has multiplicity greater
than $k$ in a sparse graph. It follows that any tight uniform graph is a
subgraph of the complete $s$-uniform graph on $n$ vertices, allowing
edge multiplicity $k$.
For tight uniform subgraphs to exist, we need to have
\begin{eqnarray}
kn-\ell\le k\binom{n}{s}
\end{eqnarray}
Since $\binom{n}{s}\ge (n/s)^s$ and the function
$f(n)=kn^ss^{-s}-kn+\ell$ is asymptotically positive, the desired $n_1$
must exist.
Notice that there is no tight $2$-uniform graph for $n=3$, $k=3$ and
$\ell=5$; the complete graph $K_3$ has only 3 edges, and by
\reflem{loops-and-parallel-edges} any $(3,5)$-sparse graph must be
simple. Such examples can be constructed for all values of $n\le
n_1$.
\end{proof}
We next turn to showing that tight graphs exist.
\begin{lemma}
Suppose that $\ell\ge (s-1)k$ and that $n\ge n_1$, where $n_1$ is taken
as in \reflem{bad-range}. Then there are tight graphs on $n$ vertices.
\labellem{tight-graphs-exist}
\end{lemma}
\begin{proof}
Start with the complete $s$-uniform hypergraph
with $k$ parallel edges, $K^{k}_{n_1}$. Identify a vertex $v$ and discard up to $\ell$
edges having $v$ as an end until the resulting graph $G_{n_1}$ is sparse. This graph
must be sparse: any subgraph $H$ not spanning $v$ is sparse, as is any
subgraph containing only edges spanning $v$ by construction.
Since $G_{n_1}$ is maximally sparse, it is tight.
To complete the proof, proceed inductively: create $G_n$ from $G_{n-1}$ by
adding a new vertex and $k$ edges having the new vertex as an endpoint
such that the subgraph induced by the new edges is sparse.
\end{proof}
We next characterize the range of parameters for which there are
tight graphs.
\begin{restate}{good-range}{{\bf Existence of tight hypergraphs}}
There is an $n_1$ depending on $s$, $k$ and $\ell$ such that
for $n\ge n_1$ there are uniform tight graphs on $n$ vertices.
For $n<n_1$, there may not be tight graphs.
\end{restate}
\begin{proof}
Immediate from
\reflem{bad-range} and \reflem{tight-graphs-exist}; the existence of tight uniform
hypergraphs implies the existence of tight hypergraphs.
\end{proof}
We next turn to the structure of blocks and components.
\begin{restate}{block-structure}{{\bf Block Intersection and Union}}
If $B_1$ and $B_2$ are blocks of a sparse graph $G$, $0\le \ell\le ik$,
$B_1$ and $B_2$ intersect on at least $i$
vertices, then $B_1\cup B_2$ is a block and the subgraph
induced by $V(B_1)\cap V(B_2)$ is a block.
\end{restate}
\begin{proof}
Let $m_i=\card{E(B_i)}$ and $n_i=\card{V(B_i)}$ for $i=1,2$.
Also let $m_\cap=\card{E(B_1)\cap E(B_2)}$, $m_\cup=\card{E(B_1)\cup E(B_2)}$,
$n_\cup=\card{V(B_1)\cup V(B_2)}$, and
$n_\cap=\card{V(B_1)\cap V(B_2)}$.
The sequence of inequalities
\begin{eqnarray}
kn_\cup-\ell\ge m_\cup=m_1+m_2-m_\cap\ge kn_1-\ell+kn_2-\ell-kn_\cap+\ell=kn_\cup-\ell
\end{eqnarray}
holds whenever $n_\cap\ge i$, which shows that $B_1\cup B_2$ is a block.
From the above, we get
\begin{eqnarray}
m_\cap=m_1+m_2-m_\cup=kn_1-\ell+kn_2-\ell-kn_\cup+\ell=
kn_\cap-\ell,
\end{eqnarray}
completing the proof.
\end{proof}
From \refthm{block-structure}, we obtain the first part of \refthm{component-structure}.
\begin{lemma}
If $C_1$ and $C_2$ are components of a $(k,\ell)$-sparse graph $G$ then
$E(C_1)$ and $E(C_2)$ are disjoint and $\card{V(C_1)\cap V(C_2)}< s$.
\labellem{component-structure}
\end{lemma}
\begin{proof}
Observe that since $0\le \ell<sk$, components with non-empty edge intersection
are blocks meeting the condition of \refthm{block-structure}, as are components
intersecting on at least $s$ vertices. Since components are maximal, no two
components may meet the conditions of \refthm{block-structure}.
\end{proof}
For certain special cases, we can make stronger statements about the components.
\begin{lemma}
The components of a $(k,\ell)$-sparse graph with $\ell\le k$ are vertex disjoint.
\labellem{tree-components}
\end{lemma}
\begin{proof}
Observe that $\ell\le k$ and apply \refthm{block-structure} as above with $i=1$.
\end{proof}
\begin{lemma}
There is at most one component in a $(k,0)$-sparse graph.
\labellem{map-components}
\end{lemma}
\begin{proof}
Applying \refthm{block-structure} with $i=0$ shows that
the components of a
$(k,0)$-sparse graph are vertex disjoint. Now suppose that
$C_1$ and $C_2$ are distinct components of a $(k,0)$-sparse
graph. Then, using the notation of \refthm{block-structure},
$m_1+m_2=kn_1+kn_2=kn_\cup$, which implies that
$C_1\cup C_2$ is a larger block, contradicting the
maximality of $C_1$ and $C_2$.
\end{proof}
Together these lemmas prove the following result about the structure of components.
\begin{restate}{component-structure}{{\bf Disjointness of Components}}
If $C_1$ and $C_2$ are components of a sparse graph $G$, then
$E(C_1)$ and $E(C_2)$ are disjoint and $\card{V(C_1)\cap V(C_2)}< s$. If $\ell\le k$, then the components are vertex disjoint. If $\ell=0$, then there
is only one component.
\end{restate}
\begin{proof}
Immediate from \reflem{component-structure}, \reflem{tree-components},
and \reflem{map-components}.
\end{proof}
\section{Hypergraph Decompositions}
In this section we investigate links between tight hypergraphs and
decompositions into edge-disjoint maps and trees.
\subsection{Hypergraph arboricity} We now generalize
results of Haas \cite{haas:2002} and Frank et al. \cite{frank2001}
to prove an equivalence between
sparse hypergraphs and those for which adding any $a$ edges results
in a $k$-arborescence.
We will make use of the following important result from \cite{frank2001}.
\begin{proposition}[Frank et al. \cite{frank2001}]
A hypergraph $G$ is a $k$-arborescence if and only if $G$ is $(k,k)$-tight.
\labelprop{frank-k-arborescence}
\end{proposition}
\begin{restate}{trees-after-adding-any}{{\bf Generalized Lovász-Recski Property}}
Let $\ell\ge k$ and let $G$ be tight. Then the graph $G'$ obtained by adding any
$\ell-k$ edges of dimension at least 2 to $G$ is a $k$-arborescence.
\end{restate}
\begin{proof}
Suppose that $G$ is tight and that $\ell\ge k$. Let $G'=(V,F)$ be a graph obtained by adding
$\ell-k$ edges of dimension at least 2 to $G$, and consider a subset $V'$ of $V$. It
follows that
\begin{eqnarray}
\card{E_{G'}(V')}\le \card{E_{G}(V')}+\ell-k\le kn'-\ell+\ell-k=kn'-k,
\end{eqnarray}
which implies that $G'$ is $(k,k)$-sparse; since $\card{F}=kn-k$, it is in fact $(k,k)$-tight. By
\refprop{frank-k-arborescence} $G'$ is a $k$-arborescence.
Conversely, if adding any $\ell-k$ edges to $G$ results in a $(k,k)$-tight
graph, then $G$ must be tight; if $V'$ spans more than $kn-\ell$ edges in $G$, then
adding $\ell-k$ edges to the span of $V'$ results in a graph which is not
$(k,k)$-sparse.
\end{proof}
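The counting in this proof can be checked mechanically on a toy instance. Below is a brute-force sketch (our own illustration; the function \texttt{is\_sparse} is not from the paper) verifying that the triangle is $(2,3)$-tight and that adding any single edge of dimension $2$ produces a $(2,2)$-tight multigraph, hence a $2$-arborescence by \refprop{frank-k-arborescence}.

```python
from itertools import combinations

def is_sparse(edges, k, ell):
    # (k, ell)-sparsity: every nonempty subset E' of the edge multiset
    # satisfies |E'| <= k * |V(E')| - ell
    return all(len(sub) <= k * len(set().union(*sub)) - ell
               for size in range(1, len(edges) + 1)
               for sub in combinations(edges, size))

triangle = [(0, 1), (1, 2), (0, 2)]
# the triangle is (2,3)-tight: 3 edges = 2*3 - 3
assert is_sparse(triangle, 2, 3) and len(triangle) == 2 * 3 - 3
# adding any one 2-dimensional edge gives a (2,2)-tight multigraph
for extra in [(0, 1), (1, 2), (0, 2)]:
    assert is_sparse(triangle + [extra], 2, 2)
    assert len(triangle + [extra]) == 2 * 3 - 2
```

The brute force is exponential in the number of edges, of course; the pebble game of the later sections is the efficient replacement for this test.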
\subsection{Decompositions into maps} The main result of this
section shows the equivalence of the $(k,0)$-tight graphs and
$k$-maps. As an application, we obtain a characterization of all
the sparse hypergraphs in terms of adding {\em any} edges.
\begin{restate}{k-maps-are-tight}{{\bf Generalized Nash-Williams-Tutte Decompositions}}
A graph $G$ is a $k$-map if and only if $G$ is $(k,0)$-tight.
\end{restate}
\begin{proof}
Let $G=(V,E)$ be a hypergraph with $n$ vertices and $kn$ edges.
Let $B^k_G=(V_k,E,F)$ be the bipartite graph with one vertex class indexed by $E$ and
the other by $k$ copies of $V$. The edges of $B^k_G$ capture the incidence structure of
$G$. That is, we define $F=\{ v_ie : e\in E, v\in e, i=1,2,\ldots,k\}$; i.e., each edge vertex
is connected to the $k$ copies of each of its ends in $B^k_G$. \reffig{k3-bipartite-example} shows $K_3$
and $B^1_{K_3}$.
\begin{figure}[htbp]
\centering
\includegraphics[height=1.0 in]{k3-bipartite-graph-color}
\caption{The $(1,0)$-sparse 2-graph $K_3$ and its associated bipartite graph
$B^1_{K_3}$. The vertices and edges of $K_3$ are
matched to the corresponding vertices in $B^1_{K_3}$ by shape and
line style.}
\label{fig.k3-bipartite-example}
\end{figure}
Observe that
for any subset $E'$ of $E$,
\begin{eqnarray}
\card{N_{B^k_G}(E')}=k\card{V(E')}\ge \card{E'}
\labeleq{map-hall-condition}
\end{eqnarray}
if and only if $G$ is $(k,0)$-sparse. By Hall's theorem, this implies that
$G$ is $(k,0)$-tight if and only if $B^k_G$ contains a perfect matching.
\fig{width=4 in}{An orientation of a 2-dimensional $2$-map $G$ and the associated bipartite matching in $B^2_G$.}{bipartite-2-map}
The edges matched to the $i$th copy of $V$ correspond to
the $i$th map in the $k$-map, as
shown for a 2-map in \reffig{bipartite-2-map}.
Assign as the tail of each edge the vertex to which it is matched.
It follows that
each vertex has out degree one in the spanning subgraph matched to
each copy of $V$ as desired.
\end{proof}
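The matching argument above is straightforward to run in code. The sketch below (an illustration with our own names, not from the paper) builds the adjacency of $B^k_G$ implicitly and looks for a matching saturating the edge class using Kuhn's augmenting-path method; the matching gives the tail assignment of the $k$-map.

```python
def k_map_decomposition(edges, k):
    """Try to orient a hypergraph (list of vertex tuples) as a k-map
    via a matching in B^k_G that saturates the edge class, following
    the Hall's-theorem argument in the proof.  Returns a dict
    edge_index -> (tail_vertex, copy) on success, or None."""
    # each edge vertex of B^k_G is adjacent to the k copies of its ends
    adj = {j: [(v, i) for v in e for i in range(k)]
           for j, e in enumerate(edges)}
    match = {}  # right vertex (v, i) -> matched edge index

    def augment(j, seen):
        # Kuhn's augmenting-path step for edge vertex j
        for rv in adj[j]:
            if rv in seen:
                continue
            seen.add(rv)
            if rv not in match or augment(match[rv], seen):
                match[rv] = j
                return True
        return False

    for j in range(len(edges)):
        if not augment(j, set()):
            return None  # Hall's condition fails; G is not (k,0)-tight
    # edge j matched to copy i of vertex v: v is the tail of j in map i
    return {j: rv for rv, j in match.items()}
```

For $K_3$ with $k=1$ this recovers a $1$-map orientation as in \reffig{k3-bipartite-example}: each vertex is the tail of exactly one edge.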
\refthm{k-maps-are-tight} implies \refthm{maps-after-adding-any}.
\begin{restate}{maps-after-adding-any}{{\bf Generalized Haas-Lov{\'{a}}sz-Recski Property for Maps}}
The graph $G'$ obtained by adding any $\ell$ edges from $K_n^{k,0}-G$ to a
$(k,\ell)$-tight graph $G$ is a $k$-map.
\end{restate}
\begin{proof}
Similar to the proof of \refthm{trees-after-adding-any}. Because the added edges come from
$K_n^{k,0}-G$, the resulting graph must be sparse.
\end{proof}
We see from the proof of \refthm{maps-after-adding-any} that the condition of adding edges of dimension
at least 2 in \refthm{trees-after-adding-any} is equivalent to saying that the added edges
come from $K_n^{k,k}$.
To prove \refthm{maps-and-trees}, we need several results from matroid
theory.
\begin{proposition}
Let $r$ be a non-negative, increasing, submodular set function on a finite
set $E$. Then the class
$\mathcal{N}=\{A\subset E : \card{A'}\le r(A'), \forall A'\subset A \}$
gives the independent sets of a matroid.
\end{proposition}
We say that $\mathcal{N}$ is generated by $r$. In particular, we see
that our matroids of sparse hypergraphs are generated
by the function $r_{k,\ell}(E')=k\card{V(E')}-\ell$.
Pym and Perfect \cite{pym-perfect}
proved the following result about unions of such matroids.
\begin{proposition}[Pym and Perfect \cite{pym-perfect}]
Let $r_1$ and $r_2$ be non-negative, submodular, integer-valued
functions, and let $\mathcal{N}_1$ and $\mathcal{N}_2$
be matroids they generate. Then the matroid union of
$\mathcal{N}_1$ and $\mathcal{N}_2$ is generated by $r_1+r_2$. \labelprop{pym}
\end{proposition}
Let $\mathcal{M}_{1,0}$ and $\mathcal{M}_{1,1}$ be the matroids which
have as bases the $(1,0)$-tight and $(1,1)$-tight hypergraphs
respectively. That these are matroids is a result of White and Whiteley from
\cite{whiteley:matroids} proven in the appendix of this paper for completeness.
\refthm{k-maps-are-tight} and \refprop{frank-k-arborescence} imply that the bases
of these matroids are the maps and trees and that these matroids are
generated by the functions $r_{1,0}(E')=\card{V(E')}$ and
$r_{1,1}(E')=\card{V(E')}-1$.
With these observations we can prove \refthm{maps-and-trees}.
\begin{restate}{maps-and-trees}{{\bf Decompositions into maps and trees}}
Let $k\ge \ell$ and $G$ be tight. Then $G$ is the union of an
$\ell$-arborescence and a $(k-\ell)$-map.
\end{restate}
\begin{proof}
We first observe that $r_{1,0}$
meets the conditions of \refprop{pym}. Since $r_{1,1}$ does not (it
is not non-negative), we switch to the submodular function
\begin{eqnarray}
r'(V')=n'-c
\end{eqnarray}
where $c$ is the number of non-trivial partition-connected components
spanned by $V'$. It follows that $r'$ is non-negative, since a
graph with no edges has no non-trivial partition-connected
components. Observe also, that if $V'$ spans $c$ partition-connected
components with $n_1,n_2,\ldots,n_c$ vertices we have
\begin{eqnarray}
r_{1,1}(V')=\sum_{i=1}^c(n_i-1)=n'-c=r'(V'),
\end{eqnarray}
since the partition-connected components are blocks of trees, and
thus disjoint.
Applying \refprop{pym} to $r_{1,0}$ and $r'$ now shows that
the union matroid of $k-\ell$ maps and $\ell$ trees is generated by
\begin{eqnarray}
r(V')=(k-\ell)r_{1,0}(V')+\ell r'(V')=(k-\ell)n'+\ell
n'-\ell=kn'-\ell,
\end{eqnarray}
proving that the union of the matroid with bases that decompose into
$(k-\ell)$ maps and $\ell$ trees is $\mathcal{M}_{k,\ell}$ as
desired.
\end{proof}
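As a quick sanity check of the rank computation (an illustrative count, not part of the proof), take $k=2$ and $\ell=1$: a tight graph has $2n-1$ edges, and the decomposition accounts for all of them:
```latex
\[
\underbrace{(n-1)}_{\text{$\ell=1$ spanning tree}}
\;+\;
\underbrace{n}_{\text{$(k-\ell)=1$ map}}
\;=\;2n-1\;=\;kn-\ell .
\]
```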
\section{Pebble game constructible graphs}
The main result of this section is that the matroidal sparse graphs are exactly
the ones that can be constructed by the pebble game.
We begin by
establishing some invariants that hold during the execution of the
pebble game.
\begin{lemma}During the execution of the pebble game, the following
invariants are maintained in $H$:
\begin{description}
\item [{\bf (I1)}] There are at least $\ell$ pebbles on $V$.
\item [{\bf (I2)}] For each vertex $v$, $\ensuremath{\operatorname{span}} v + \ensuremath{\operatorname{out}} v + \ensuremath{\operatorname{peb}} v=k$.
\item [{\bf (I3)}] For each $V'\subset V$, $\ensuremath{\operatorname{span}} V'+\ensuremath{\operatorname{out}} V'+\ensuremath{\operatorname{peb}} V'=kn'$.
\end{description}
\labellem{pebble-game-invariants}
\end{lemma}
\begin{proof}
{\bf (I1)} The number of pebbles on $V$ changes only after an {\bf add edge move}.
When there are fewer than $\ell+1$ pebbles, no {\bf add edge} moves are possible.
{\bf (I2)} This invariant clearly holds at the initialization of the pebble game.
We verify that each of the moves preserves {\bf (I2)}. An {\bf add edge} move
consumes a pebble from exactly one vertex and adds one to its out degree or span.
Similarly, a {\bf pebble shift} move adds one to the out degree of the source and
removes a pebble while adding one pebble to the destination and decreasing its
out degree by one.
{\bf (I3)} Let $V'\subset V$ have $n'$ vertices and span $m^{+}$ edges with
at least two ends. Then
\begin{eqnarray}
\ensuremath{\operatorname{out}} V'=\sum_{v\in V'} \ensuremath{\operatorname{out}} v-m^{+}
\end{eqnarray}
and
\begin{eqnarray}
\ensuremath{\operatorname{span}} V'=m^{+}+\sum_{v\in V'}\ensuremath{\operatorname{span}} v.
\end{eqnarray}
Then we have
\[
\begin{split}
\ensuremath{\operatorname{span}} V'+ \ensuremath{\operatorname{out}} V'+\ensuremath{\operatorname{peb}} V'
&=\sum_{v\in V'} \ensuremath{\operatorname{out}} v-m^{+} + m^{+} +
\sum_{v\in V'} \ensuremath{\operatorname{span}} v + \sum_{v\in V'} \ensuremath{\operatorname{peb}} v \\
&=\sum_{v\in V'}(\ensuremath{\operatorname{out}} v+\ensuremath{\operatorname{span}} v+\ensuremath{\operatorname{peb}} v) = kn',
\end{split}
\]
where the last step follows from {\bf (I2)}.
\end{proof}
From these invariants, we can show that the pebble game constructible graphs
are sparse.
\begin{lemma}Let $H$ be a hypergraph constructed with the pebble game. Then $H$
is sparse. If there are exactly $\ell$ pebbles on $V(H)$, then $H$ is tight.
\labellem{pebble-graphs-are-sparse}
\end{lemma}
\begin{proof}
Let $V'\subset V$ have $n'$ vertices and consider the configuration of the
pebble game immediately after the most recent {\bf add edge} move that
added to the span of $V'$. At this point, $\ensuremath{\operatorname{peb}} V'\ge \ell$. By \reflem{pebble-game-invariants}
{\bf (I3)},
\begin{eqnarray}
kn'\ge \ensuremath{\operatorname{span}} V'+\ensuremath{\operatorname{out}} V'+\ell.
\end{eqnarray}
When $\ensuremath{\operatorname{span}} V'>kn'-\ell$, this implies that $-1\ge\ensuremath{\operatorname{out}} V'$, which is a contradiction.
In the case where there are exactly $\ell$ pebbles on $V(H)$, \reflem{pebble-game-invariants}
{\bf (I3)} implies that $\ensuremath{\operatorname{span}} V=kn-\ell$.
\end{proof}
We now consider the reverse direction: that all the sparse graphs
admit a pebble game construction. We start with the observation
that if there is a path in $H$ from $u$ to $v$ and $v$ has a pebble
on it, then a sequence of {\bf pebble shift} moves can bring the
pebble from $v$ to $u$.
\newcommand{\ensuremath{\operatorname{reach}}}{\ensuremath{\operatorname{reach}}}
Define the {\bf reachability region} of a vertex $v$ in $H$ as the set
\begin{eqnarray}
\ensuremath{\operatorname{reach}} v=\{u\in V : \text{there is a path in $H$ from $v$ to $u$}\}.
\end{eqnarray}
\begin{lemma}
Let $e$ be a set of vertices such that $H+e$ is sparse. If $\ensuremath{\operatorname{peb}} e<\ell+1$,
then a pebble not on $e$ can be brought to an end of $e$.
\labellem{can-bring-another-pebble}
\end{lemma}
\begin{proof}
Let $V'$ be the union of the reachability regions of the ends of $e$; i.e.,
\begin{eqnarray}
V'=\bigcup_{v\in e}\ensuremath{\operatorname{reach}} v.
\end{eqnarray}
Since $V'$ is a union of reachability regions, $\ensuremath{\operatorname{out}} V'=0$. As $H+e$
is sparse and $e$ is in the span of $V'$, $\ensuremath{\operatorname{span}} V'<kn'-\ell$.
It follows by \reflem{pebble-game-invariants} {\bf (I3)}, that $\ensuremath{\operatorname{peb}} V'\ge \ell+1$,
so there is a pebble on $V'-e$. By construction there is a $v\in e$ such that
the pebble is on a vertex $u\in \ensuremath{\operatorname{reach}} v-e$. Moving the pebble from $u$
to $v$ does not affect any of the other pebbles already on $e$.
\end{proof}
It now follows that any sparse hypergraph has a pebble game construction.
\begin{restate}{sparse-graphs-are-pebble-game-graphs}{{\bf The Main Theorem: Pebble Game Constructible Hypergraphs}}
Let $G$ be a $(k,\ell)$-sparse hypergraph with $k$, $\ell$ and $s$
meeting the conditions of \refthm{good-range}. Then $G$ can be constructed by the pebble
game.
\end{restate}
\begin{proof}
For each edge $e$ of $G$ in any order, inductively apply \reflem{can-bring-another-pebble} to
the ends of $e$ until there are $\ell+1$ pebbles on them. At this point, use an {\bf add edge} move to
add $e$ to $H$.
\end{proof}
It is instructive to note that the pebble game invariants enforce
the matroid properties of the sparse graphs. The $\ell+1$
acceptance condition enforces the constraints on $k$, $\ell$ and
$s$, and the proof of \reflem{can-bring-another-pebble} shows that
the order in which edges of a sparse graph are added does not matter
in a pebble game construction.
\section{Pebble games for Components and Extraction}
Until now we were concerned with characterizing sparse and tight
graphs. In this section we describe efficient algorithms based on
pebble game constructions.
\subsection{The basic pebble game}
In this section we develop the basic $(k,\ell)$-pebble game for hypergraphs to solve the
{\bf decision} and {\bf extraction} problems. We first describe the algorithm.
\begin{algorithm}[The $(k,\ell)$-pebble game]
$\quad$ \\
{\bf Input:} A hypergraph $G=(V,E)$ \\
{\bf Output:} `sparse', `tight' or `dependent.' \\
{\bf Method:} Initialize a pebble game construction on $n$
vertices.
For each edge $e$, try to collect $\ell+1$ pebbles on the ends of $e$.
Pebbles can be collected using depth-first search to find a path
to a pebble and then a sequence of {\bf pebble shift} moves to move it.
If it is possible to collect $\ell+1$ pebbles,
use an {\bf add edge} move to add $e$ to $H$.
If any edge was not added to $H$, output `dependent'.
If every edge was added and there are exactly
$\ell$ pebbles left, then output `tight'. Otherwise output `sparse'.
\labelalg{basic-pebble-game}
\end{algorithm}
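For concreteness, here is a minimal Python sketch of the decision version of \refalg{basic-pebble-game}. The data layout and names are ours; the paper specifies the moves, not an implementation.

```python
def pebble_game(n, edges, k, ell):
    """Basic (k, ell)-pebble game, decision version, on a hypergraph
    with vertices 0..n-1 and edges given as tuples of vertices.
    Returns 'dependent', 'tight', or 'sparse'."""
    pebbles = [k] * n
    out = [[] for _ in range(n)]  # out[v]: accepted edges with tail v
    everts = []                   # everts[j]: vertex set of accepted edge j

    def find_pebble(root, exclude):
        # DFS along the orientation for a vertex with a free pebble
        # outside `exclude`; on success a chain of pebble shift moves
        # carries the pebble back to `root`, reversing the path.
        parent, stack = {root: None}, [root]
        while stack:
            u = stack.pop()
            if pebbles[u] > 0 and u not in exclude:
                while parent[u] is not None:
                    j, prev = parent[u]
                    pebbles[u] -= 1      # pebble shift: move the pebble
                    pebbles[prev] += 1   # one step toward root, making
                    out[prev].remove(j)  # u the new tail of edge j
                    out[u].append(j)
                    u = prev
                return True
            for j in out[u]:
                for w in everts[j]:
                    if w not in parent:
                        parent[w] = (j, u)
                        stack.append(w)
        return False

    dependent = False
    for e in edges:
        ends = set(e)
        while sum(pebbles[v] for v in ends) < ell + 1:
            if not any(find_pebble(v, ends) for v in ends):
                break
        if sum(pebbles[v] for v in ends) >= ell + 1:
            tail = next(v for v in ends if pebbles[v] > 0)
            pebbles[tail] -= 1           # add edge move: consume a pebble
            out[tail].append(len(everts))
            everts.append(ends)
        else:
            dependent = True             # e is spanned: reject it
    if dependent:
        return 'dependent'
    return 'tight' if sum(pebbles) == ell else 'sparse'
```

On the triangle with $k=2$, $\ell=3$ this sketch answers `tight', and on $K_4$, which has one edge too many, it answers `dependent'.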
\reffig{collect-pebble} shows an example of collecting a pebble and accepting an edge.
\begin{figure}[htbp]
\centering
\subfigure[]{\label{fig.pebble-1}\includegraphics[height=1.5 in]{pebble-example-1}}
\subfigure[]{\label{fig.pebble-2}\includegraphics[height=1.5 in]{pebble-example-2}}
\subfigure[]{\label{fig.pebble-4}\includegraphics[height=1.5 in]{pebble-example-4}}
\subfigure[]{\label{fig.pebble-7}\includegraphics[height=1.5 in]{pebble-example-7}}
\caption{Collecting a pebble and accepting an edge in a $(2,2)$-pebble game on a
$3$-uniform hypergraph $H$. $H$ is shown via a 2-uniform representation.
In (a), the edge being tested, $cde$ is shown with thick circles around the vertices.
The pebble game starts a search
to bring a pebble to $d$. (This choice is arbitrary; had $e$ been chosen first, the
edge would be immediately accepted.)
In (b) a path from $d$ to $e$ across the edge marked with a thick line is found.
In (c) the pebble is moved and the path is reversed; the new tail
of the edge marked with a thick line is $e$.
In (d) the pebble is picked up, and the edge being checked is accepted. The
tail of the new edge, marked with a thick line, is $d$.
}
\label{fig.collect-pebble}
\end{figure}
The correctness of the basic pebble game for the {\bf decision}
and {\bf extraction} problems
follows immediately from \refthm{sparse-graphs-are-pebble-game-graphs}.
For the {\bf optimization }
problem, sort the edges in order of increasing weight before starting;
the correctness follows from
\refthm{kl-matroid} and the characterization of
matroids by the greedy algorithm (discussed in, e.g.,
\cite{oxley:matroid}).
The running time of the pebble game is dominated by the time needed
to collect pebbles. If the maximum edge size in the hypergraph is
$s^*$, the time for one depth-first search is $O(s^*n+m)$, from
which it follows that the time to find one pebble in $H$ is
$O(s^*n)$. To check an edge requires no more than $s^*+\ell+1$
pebble searches, and $m$ edges need to be checked. To summarize, we
have proven the following.
\begin{lemma}
Let $G$ be a hypergraph with $n$ vertices, $m$
edges, and maximum edge size $s^*$. The running time of the basic pebble game is
$O((s^*+\ell)s^*nm)$; for the {\bf decision} problem, this is
$O((s^*+\ell)s^*n^2)$, since $m=O(n)$.
\labellem{pebble-game-running-time}
\end{lemma}
All of the searching, marking, and pebble counting
can be done with $O(1)$ space per vertex.
Since $H$ has $O(n)$ edges, the space complexity of the basic pebble game
is dominated by the size of the input.
\begin{lemma}
The space complexity of the basic pebble game is $O(m+n)$, where $m$ and $n$ are,
respectively,
the number of edges and vertices in the input.
\labellem{pebble-game-space}
\end{lemma}
Together the preceding lemmas complete the complexity analysis. The running time for the {\bf decision}
problem on an $s$-uniform hypergraph with $n$ vertices and $kn-\ell$ edges
is $O((s+\ell)sn^2)$, and the space used is $O(n)$. For the {\bf optimization} problem, the running
time increases to $O((s+\ell)sn^2+n\log n)$ because of the sorting phase.
The {\bf extraction} problem is solved in time $O((s+\ell)snm)$ and space $O(n+m)$.
\subsection{Detecting components}
In the next several sections we extend the basic pebble game to
solve the {\bf components} problem. Along the way, we also improve
the running time for the {\bf extraction} problem by developing a
more efficient way of discarding dependent edges. As the proof of
\reflem{pebble-game-running-time} shows, the time spent trying to
bring pebbles to the ends of dependent edges can be $\Omega(n^2)$ if
the edges are very large. We will reduce this to $O(s)$, improving
the running time.
We first present an algorithm to detect components.
\begin{algorithm}[Component detection]
$\quad$ \\
{\bf Input:} An oriented hypergraph $H$ and $e$, the most recently accepted edge. \\
{\bf Output:} The component spanning $e$ or `free.' \\
{\bf Method:} When the algorithm starts, there are $\ell$ pebbles on the ends of $e$,
and a vertex $w$ is the tail of $e$. If there are any other pebbles on $\ensuremath{\operatorname{reach}} w$,
stop and output `free.' Otherwise let $C=\ensuremath{\operatorname{reach}} w$, and enqueue any vertex that is
an end of an edge pointing into $C$.
While there are more vertices in the queue, dequeue a vertex $u$. If the only pebbles in
$\ensuremath{\operatorname{reach}} u$ are the $\ell$ on $e$, add $\ensuremath{\operatorname{reach}} u$ to $C$ and
enqueue any newly discovered vertex that is an end of an
edge pointing into $C$.
Finally, output $C$.
\labelalg{detect-components}
\end{algorithm}
In the rest of this section we analyze the correctness and running time of
\refalg{detect-components}. We put off a discussion of the space required to
maintain the components until the next section.
We start with a technical lemma about blocks.
\begin{lemma}
Let $G$ be tight and $\ell>0$. Then $G$ is connected.
\labellem{when-blocks-are-connected}
\end{lemma}
\begin{proof}
If $G$ is disconnected, then $V$ can be partitioned into two nonempty subsets
with no edges between them. By sparsity these subsets span at most
$kn-2\ell<kn-\ell$ edges in total, contradicting that $G$ has $kn-\ell$ edges.
\end{proof}
\begin{lemma}
If \refalg{detect-components} outputs `free,' then $e$ is not spanned by any component. Otherwise
the output $C$ of \refalg{detect-components} is the component spanning $e$.
\labellem{detect-components-correctness}
\end{lemma}
\begin{proof}
\refalg{detect-components} outputs `free' only when it is possible to collect at least $\ell+1$ pebbles
on the ends of $e$. \reflem{pebble-graphs-are-sparse} shows that in this case, $e$ is not spanned by
any block in $H$ and thus no component.
Now suppose that \refalg{detect-components} outputs a set of vertices $C$. By construction,
the number of free pebbles on $C$ is $\ell$. Also, since $C$ is the union of reachability
regions, it has no out edges. By \reflem{pebble-graphs-are-sparse}, $C$ spans a block in $H$.
Since \refalg{detect-components} does a breadth first search in $H$, $C$ is a maximal
connected block.
There are now two cases to consider. When $\ell>0$, blocks are connected
by \reflem{when-blocks-are-connected}. If $\ell=0$, blocks may not be connected, but there
is only one component in $H$ by \reflem{map-components}; add $C$ to the component being maintained.
\end{proof}
For the running time of \refalg{detect-components} we observe that
$O(s^*)$ time is spent processing the vertices of each edge pointing
into $C$ for enqueueing and dequeuing. Vertices are explored by
pebble searches only once; mark vertices accepted into $C$ and also
those from which pebbles can be reached to cut off the searches.
Since $H$ is $(k,\ell)$-sparse, it has $O(n)$ edges. Summarizing,
we have shown the following.
\begin{lemma}
The running time of \refalg{detect-components} is $O(s^*n)$.
\labellem{detect-components-running-time}
\end{lemma}
\subsection{The pebble game with components}
We now present an extension of the basic pebble game that solves the components
problem.
\begin{algorithm}[The $(k,\ell)$-pebble game with components]
$\quad$ \\
{\bf Input:} A hypergraph $G=(V,E)$ \\
{\bf Output:} `sparse', `tight' or `dependent,' and the components of $H$. \\
{\bf Method:} Modify \refalg{basic-pebble-game} as follows. When processing an
edge $e$ first check if it is spanned by a component. If it is, then reject it. Otherwise
collect $\ell+1$ pebbles on $e$ and accept it. After accepting $e$, run
\refalg{detect-components} to find a new component if one has
been created.
Output the components discovered along with the output of the basic pebble
game.
\labelalg{components-pebble-game}
\end{algorithm}
The correctness of \refalg{components-pebble-game} follows
from the fact that $H+e$ is sparse if and only if $e$ is not in the
span of any component and \refthm{sparse-graphs-are-pebble-game-graphs}.
\begin{lemma}
\refalg{components-pebble-game} solves the {\bf decision}, {\bf extraction}
and {\bf components} problems.
\labelthm{components-pebble-game-is-correct}
\end{lemma}
\subsection{Complexity of the pebble game with components}
We analyze the running time of the pebble game with components
in two parts: component maintenance and edge processing.
For component maintenance, we easily generalize the union pair-find data structures described in
\cite{cccg}. If $s^*$ is the largest size of an edge in $G$,
the complexity of
checking whether an edge is spanned by a component is $O(s^*)$, and
the total time spent updating the components discovered is $O(n^{s^*})$.
The complexity is dominated by maintaining a table with $n^{s^*}$ entries that
records which $s^*$-tuples are spanned by some component.
The time spent processing dependent edges is $O(s^*n^{s^*})$; they are exactly those
edges spanned by a component. For each accepted edge, we need to
collect $\ell+1$ pebbles. The analysis is similar to that for the basic pebble game.
Since there are $O(n)$ edges accepted, we have the following total running time.
\begin{lemma}
The running time of \refalg{components-pebble-game} on an $s$-dimensional
hypergraph with $n$ vertices and $m$ edges is $O((s^*+\ell)s^*n^{s^*}+m)$.
\labellem{components-running-time}
\end{lemma}
Since the data structure used to maintain the components uses a table of
size $\Theta(n^{s^*})$, the space complexity of the pebble game with
components is the same on any input.
\begin{lemma}
The pebble game with components uses $O(n^s)$ space.
\labelthm{components-space}
\end{lemma}
Together the preceding lemmas complete the complexity analysis of the
pebble game with components. The running time on an $s$-graph with
$n$ vertices and $m$ edges is $O((s+\ell)sn^s+m)$ and the space used is
$O(n^s)$. For the optimization problem, the sorting phase of the
greedy algorithm takes an additional $O(m\log m)$ time.
\section{Critical representations}
As an application of the pebble game, we investigate the
circumstances under which we may represent a sparse hypergraph with
a lower dimensional sparse hypergraph. The main result of this
section is a complete characterization of the critical sparse
hypergraphs for any $k$ and $\ell$.
Clearly, by \reflem{sparse-graph-rank}, when $\ell\ge (s-1)k$, every sparse
$s$-uniform hypergraph must be critical. In this section we show that
these are the only $s$-uniform critical sparse hypergraphs and describe an
algorithm for finding them.
We first present a modification of the pebble game to compute a
representation. Only the {\bf add edge} and {\bf pebble shift}
moves need to change.
{\bf Represented add edge:} When adding an edge $e$ to $H$, create a set $r(e)$
which is the set of vertices with the $\ell+1$ pebbles used to certify that
$e$ was independent.
{\bf Represented pebble shift:} When a {\bf pebble shift} move makes an end
$v\notin r(e)$ the tail of $e$, add $v$ to $r(e)$ and remove one other element of $r(e)$.
Let $R$ be the oriented hypergraph with the edge set $r(e)$ for $e\in E(H)$.
We now consider the invariants of the represented pebble game.
\begin{lemma}\labellem{represented-pebble-game-invariants}
The invariants {\bf (I1)}, {\bf (I2)}, and {\bf (I3)} hold in $R$
throughout the pebble game.
Also, the invariant:
\begin{enumerate}
\item {\bf (I4)}
$\ensuremath{\operatorname{span}}_R V'+\ensuremath{\operatorname{out}}_R V'+\ensuremath{\operatorname{peb}} V'\le \ensuremath{\operatorname{span}}_R V'+\ensuremath{\operatorname{out}}_H V'+\ensuremath{\operatorname{peb}} V'$
\end{enumerate}
holds for all $V'\subset V$.
\end{lemma}
\begin{proof}
The proof of {\bf (I1)}, {\bf (I2)} and {\bf (I3)} are similar to the
proof of \reflem{pebble-game-invariants}.
For {\bf (I4)}, we just need to observe that since $E_H(V')\subset E_R(V')$,
the out degree in $H$ is at least the out degree in $R$.
\end{proof}
From \reflem{represented-pebble-game-invariants} we see that $R$ must be
sparse, and by construction $R$ has dimension at least $(\ell+1)/k$.
Since $R$ is a pebble game graph, we see that $G$ is critical if and
only if $G=R$ for every represented pebble game construction.
\begin{restate}{representation}{{\bf Critical Representations}}
$G$ is a critical sparse hypergraph of
dimension $s$ if and only if the representation found by the pebble
game construction coincides with $G$. This implies that $G$ is
$s$-uniform and $\ell=sk-1$.
\end{restate}
\begin{proof}
The theorem follows from the fact that we can always move pebbles between
the ends of an independent set of vertices unless there are exactly $sk$ pebbles on it already, which is exactly the acceptance condition for the $(k,sk-1)$-pebble game.
\end{proof}
The observation that $E_H(V')\subset E_R(V')$ also proves that
any component in $H$ induces a block in $R$.
It is instructive to note that blocks in $R$ do {\it not} necessarily
correspond to blocks in $H$.
\section{Conclusions and Open Questions}
We have generalized most of the known results on sparse graphs to the
domain of hypergraphs. In particular, we have provided graph theoretic, algorithmic
and matroid characterizations of the entire family of sparse hypergraphs
for $0\le \ell<ks$.
We also provide an initial result on the meaning of dimension in sparse hypergraphs;
in particular the representation theorem shows that the sparse hypergraphs for
$\ell\ge 2k$ are somehow intrinsically not $2$-dimensional.
The results in this paper suggest a number of open questions, which we
consider below.
\paragraph{Algorithms.} The running time and space complexity of the pebble
game with components is the natural generalization of the $O(n^2)$
achieved by Lee and Streinu in \cite{LeSt05}. Improving our
$\Omega(n^{s^*})$ running time to $O(m+n^2)$ may be possible with a
better data structure.
For the case where $s=2$, the pebble games of Lee and Streinu are not the
best known algorithms for the maps-and-trees range of parameters. We do
not know if the algorithms of \cite{GaWe88} and \cite{gabow1995}
generalize easily to hypergraphs.
\paragraph{Graph theory.}
Proving a partial converse of the lower-dimensional representation
theorem \refthm{representation}
is of particular interest to a
number of applications in rigidity theory.
\bibliographystyle{abbrv}
| {
    "timestamp": "2007-03-30T16:15:00",
    "yymm": "0703",
    "arxiv_id": "math/0703921",
    "language": "en",
    "url": "https://arxiv.org/abs/math/0703921",
    "abstract": "A hypergraph $G=(V,E)$ is $(k,\\ell)$-sparse if no subset $V'\\subset V$ spans more than $k|V'|-\\ell$ hyperedges. We characterize $(k,\\ell)$-sparse hypergraphs in terms of graph theoretic, matroidal and algorithmic properties. We extend several well-known theorems of Haas, Lov{á}sz, Nash-Williams, Tutte, and White and Whiteley, linking arboricity of graphs to certain counts on the number of edges. We also address the problem of finding lower-dimensional representations of sparse hypergraphs, and identify a critical behaviour in terms of the sparsity parameters $k$ and $\\ell$. Our constructions extend the pebble games of Lee and Streinu from graphs to hypergraphs.",
    "subjects": "Combinatorics (math.CO); Data Structures and Algorithms (cs.DS)",
    "title": "Sparse Hypergraphs and Pebble Game Algorithms"
} |
https://arxiv.org/abs/1908.05650 | The maximum number of points in the cross-polytope that form a packing set of a scaled cross-polytope | The problem of finding the largest number of points in the unit cross-polytope such that the $l_{1}$-distance between any two distinct points is at least $2r$ is investigated for $r\in\left(1-\frac{1}{n},1\right]$ in dimensions $\geq2$ and for $r\in\left(\frac{1}{2},1\right]$ in dimension $3$. For the $n$-dimensional cross-polytope, $2n$ points can be placed when $r\in\left(1-\frac{1}{n},1\right]$. For the three-dimensional cross-polytope, $10$ and $12$ points can be placed if and only if $r\in\left(\frac{3}{5},\frac{2}{3}\right]$ and $r\in\left(\frac{4}{7},\frac{3}{5}\right]$ respectively, and no more than $14$ points can be placed when $r\in\left(\frac{1}{2},\frac{4}{7}\right]$. Also, constructive arrangements of points that attain the upper bounds of $2n$, $10$, and $12$ are provided, as well as $13$ points for dimension $3$ when $r\in\left(\frac{1}{2},\frac{6}{11}\right]$. |
\section{Introduction}
Let $K$ and $L$ be origin-symmetric convex sets in $\mathbb{R}^{n}$
with nonempty interiors. A set $D\subset\mathbb{R}^{n}$ is a (translative)
packing set for $K$ if, for all distinct $\mathbf{x},\mathbf{y}\in D$,
\[
\left(\mathbf{x}+\inter K\right)\cap\left(\mathbf{y}+\inter K\right)=\emptyset.
\]
For $r>0$, we consider the problem of finding the maximum number
of points in a packing set $D$ of $rK$ that is contained in $L$.
This quantity will be denoted by
\[
\gamma\left(L,K,r\right):=\max\left\{ \left|D\right|\mid D\subset L\text{ and }\left|\left|\mathbf{x}-\mathbf{y}\right|\right|_{K}\geq2r\text{ for any }\mathbf{x},\mathbf{y}\in D,\mathbf{x}\neq\mathbf{y}\right\} ,
\]
where $\left|\left|\mathbf{x}\right|\right|_{K}=\min\left\{ \lambda\mid\lambda\geq0\text{ and }\mathbf{x}\in\lambda K\right\} $
and for a set $S$, its cardinality is denoted by $\left|S\right|$.
If $K=L$ then we use the notation $\gamma\left(K,r\right)$ as a
shorthand for $\gamma\left(L,K,r\right)$. We will only deal with
the situation where both $K$ and $L$ are the unit cross-polytope
$C_{n}^{*}=\left\{ \mathbf{x}\in\mathbb{R}^{n}\left|\,\sum_{i=1}^{n}\left|x_{i}\right|\leq1\right.\right\} $.
A set $D$ in $C_{n}^{*}$ with $k$ points such that the $l_{1}$-distance
between any two distinct points is greater than or equal to $2r$
is equivalent to a packing $D+rC_{n}^{*}=\left\{ \mathbf{x}+rC_{n}^{*}\mid\mathbf{x}\in D\right\} $ in which each translate
is contained inside the set $\left(1+r\right)C_{n}^{*}$. Unless otherwise
specified, we will use ``distance'' to mean the $l_{1}$-distance.
The vertices of $C_{n}^{*}$ are the $2n$ unit vectors $\left\{ \pm\mathbf{e}_{i}\mid i\in\left\{ 1,\ldots,n\right\} \right\} $,
so the distance between any two distinct vertices is $2$, which implies
that $\gamma\left(C_{n}^{*},r\right)=1$ for any $r>1$ and $\gamma\left(C_{n}^{*},r\right)\geq2n$
for $r\leq1$.
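This construction is easy to verify numerically. The sketch below (illustrative code; the names are ours) checks, for small $n$, that every pair of the $2n$ vertices $\pm\mathbf{e}_{i}$ of $C_{n}^{*}$ is at $l_{1}$-distance exactly $2$:

```python
from itertools import combinations

def l1(x, y):
    # l1-distance between two points given as coordinate tuples
    return sum(abs(a - b) for a, b in zip(x, y))

def cross_polytope_vertices(n):
    # the 2n vertices +-e_i of the unit cross-polytope C_n^*
    verts = []
    for i in range(n):
        e = [0] * n
        e[i] = 1
        verts.append(tuple(e))
        verts.append(tuple(-c for c in e))
    return verts

# every pair of distinct vertices is at distance exactly 2, so the
# 2n vertices form a packing set for r*C_n^* whenever r <= 1
for n in range(2, 6):
    V = cross_polytope_vertices(n)
    assert all(l1(x, y) == 2 for x, y in combinations(V, 2))
```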
The case $r=\frac{1}{2}$ is related to the topic of kissing numbers.
The (translative) kissing number $k\left(K\right)$ of a convex body
$K$ is the maximum number of translates of $K$ such that no two
translates overlap each other and each translate touches but does
not overlap with $K$. In the case of $\frac{1}{2}C_{n}^{*}$, we
have
\[
k\left(\frac{1}{2}C_{n}^{*}\right)=\max\left\{ \left|D\right|\left|\,D\cup\left\{ \mathbf{0}\right\} \text{ is a packing set for }\frac{1}{2}C_{n}^{*}\text{ and }\left(\mathbf{x}+\frac{1}{2}C_{n}^{*}\right)\cap\frac{1}{2}C_{n}^{*}\neq\emptyset\text{ for any }\mathbf{x}\in D\right.\right\} ,
\]
and since $k\left(K\right)$ is invariant under the scaling of $K$,
\[
k\left(C_{n}^{*}\right)+1=k\left(\frac{1}{2}C_{n}^{*}\right)+1\leq\gamma\left(C_{n}^{*},\frac{1}{2}\right).
\]
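To see why this inequality holds, note that a translate $\mathbf{x}+\frac{1}{2}C_{n}^{*}$
touches $\frac{1}{2}C_{n}^{*}$ without overlapping it precisely when
$\left|\left|\mathbf{x}\right|\right|_{1}=1$. Hence if $D$ realizes
$k\left(\frac{1}{2}C_{n}^{*}\right)$, then $D\cup\left\{ \mathbf{0}\right\} $
is a set of $k\left(\frac{1}{2}C_{n}^{*}\right)+1$ points contained
in $C_{n}^{*}$ whose pairwise distances are at least $1=2\cdot\frac{1}{2}$,
which is exactly the type of configuration counted by $\gamma\left(C_{n}^{*},\frac{1}{2}\right)$.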
For the cross-polytope, it is known that $k\left(C_{3}^{*}\right)=18$
\cite{LarmanZong1999,Talata1999}. This result implies that $\gamma\left(C_{3}^{*},\frac{1}{2}\right)\geq19$;
however, because one point is required to be at the origin, it does
not a priori provide an upper bound for an \emph{arbitrary} packing set for
$\frac{1}{2}C_{3}^{*}$. An upper bound for the kissing number of
any convex body $K$ is obtained from a result of Hadwiger \cite{Hadwiger1957},
\[
k\left(K\right)\leq3^{n}-1,
\]
where this inequality is an equality iff $K$ is a parallelepiped.
The cross-polytope is not a parallelepiped, so $k\left(C_{3}^{*}\right)\leq25$,
which results in an upper bound of $\gamma\left(C_{3}^{*},\frac{1}{2}\right)\leq26$.
In the other direction, a well-known result by Swinnerton-Dyer \cite{Swinnerton-Dyer1953}
says that
\[
k\left(K\right)\geq n^{2}+n,
\]
and in the case when $K$ is a cross-polytope the lower bound has
been improved to
\[
k\left(C_{n}^{*}\right)\geq\left(\frac{9}{8}\right)^{\left(1-o\left(1\right)\right)n}
\]
by \cite{LarmanZong1999} and to
\[
k\left(C_{n}^{*}\right)\geq1.13488^{\left(1-o\left(1\right)\right)n}
\]
by \cite{Talata2000}, which is the best known asymptotic lower bound
for the cross-polytope.
We will work with values of $r$ only in the interval $\left(\frac{1}{2},1\right]$
unless otherwise stated. For $r\in\left(1-\frac{1}{n},1\right]$,
the upper bound for the number of points in the cross-polytope such
that the distance between any two distinct points is at least $2r$
is linear in the dimension of the cross-polytope.
\begin{thm}
Let $n\geq2$, then $\gamma\left(C_{n}^{*},r\right)=2n$ for any $r\in\left(1-\frac{1}{n},1\right]$.
Additionally, $\left(1-\frac{1}{n},1\right]$ is the largest possible
interval such that $\gamma\left(C_{n}^{*},r\right)=2n$ for all $r$
in the interval.
\end{thm}
In particular, $\gamma\left(C_{3}^{*},r\right)=6$ for $r\in\left(\frac{2}{3},1\right]$.
The next theorem is specific to the three-dimensional case.
\begin{thm}
In dimension $3$,
\begin{description}
\item [{(a)}] $\gamma\left(C_{3}^{*},r\right)=10$ for any $r\in\left(\frac{3}{5},\frac{2}{3}\right]$,
\item [{(b)}] $\gamma\left(C_{3}^{*},r\right)=12$ for any $r\in\left(\frac{4}{7},\frac{3}{5}\right]$,
and
\item [{(c)}] $\gamma\left(C_{3}^{*},r\right)\leq14$ for $r\in\left(\frac{1}{2},\frac{4}{7}\right]$.
\end{description}
\end{thm}
For the case $n=3$ and $r\in\left(\frac{1}{2},\frac{4}{7}\right]$,
we could not find exact values of $\gamma\left(C_{3}^{*},r\right)$,
but we do have lower bounds. Since $\gamma\left(C_{3}^{*},r'\right)\geq\gamma\left(C_{3}^{*},r\right)$
for $r'<r$, it follows immediately from Theorem 1.2 (b) that $\gamma\left(C_{3}^{*},r\right)\geq12$
for $r\in\left(\frac{1}{2},\frac{4}{7}\right]$. It is possible to
improve this lower bound for a smaller interval of $r$.
\begin{prop}
In dimension $3$, $\gamma\left(C_{3}^{*},r\right)\geq13$ for $r\in\left(\frac{1}{2},\frac{6}{11}\right]$.
\end{prop}
These lower bounds are obtained by explicit constructions. It follows
from Theorems 1.1 and 1.2, Proposition 1.3, and the above discussion
that
\begin{align*}
\gamma\left(C_{n}^{*},r\right)=1 & \qquad\text{for }r\in\left(1,\infty\right)\text{,}\\
\gamma\left(C_{n}^{*},r\right)=2n & \qquad\text{for }r\in\left(1-\frac{1}{n},1\right]\text{,}\\
\gamma\left(C_{3}^{*},r\right)=10 & \qquad\text{for }r\in\left(\frac{3}{5},\frac{2}{3}\right]\text{,}\\
\gamma\left(C_{3}^{*},r\right)=12 & \qquad\text{for }r\in\left(\frac{4}{7},\frac{3}{5}\right]\text{,}\\
12\leq\gamma\left(C_{3}^{*},r\right)\leq14 & \qquad\text{for }r\in\left(\frac{6}{11},\frac{4}{7}\right]\text{,}\\
13\leq\gamma\left(C_{3}^{*},r\right)\leq14 & \qquad\text{for }r\in\left(\frac{1}{2},\frac{6}{11}\right]\text{, and}\\
19\leq\gamma\left(C_{3}^{*},r\right)\leq26 & \qquad\text{for }r=\frac{1}{2}\text{.}
\end{align*}
Below is a chart of the results for dimension $3$, together with
the upper and lower bounds for $r=\frac{1}{2}$ mentioned above.
\noindent \begin{center}
\includegraphics[scale=0.33]{Chart}
\par\end{center}
A closely related topic is the problem of packing a set with copies
of another set. This problem has been explored mainly in dimension
$2$. Let $K$ and $L$ be origin-symmetric convex sets with nonempty
interior, then let
\[
M\left(L,K,m\right):=\sup\left\{ r\mid\left|D\right|=m,\ D\subset L\text{, and}\ \left|\left|\mathbf{x}-\mathbf{y}\right|\right|_{K}\geq2r\text{ for any }\mathbf{x},\mathbf{y}\in D,\mathbf{x}\neq\mathbf{y}\right\} .
\]
From this definition, $M\left(L,K,m\right)$ and $\gamma\left(L,K,r\right)$
are related by the equation
\[
\gamma\left(L,K,M\left(L,K,m\right)\right)=m.
\]
The compendium of Goodman, O'Rourke, and T{\'o}th \cite{GoodmanORourkeToth2017}
lists known values of $M\left(L,B_{2},m\right)$ when $L$
is a square, a circle, or an equilateral triangle, for various values
of $m$, usually small. In three dimensions, $M\left(L,B_{3},m\right)$
is known for small $m$ and when $L$ is a cube, a cross-polytope,
and a tetrahedron \cite{GoodmanORourkeToth2017}. A related problem
of packings of squares and rectangles in squares is described in \cite{CroftFalconerGuy1991}.
Let $B_{n}$ be the unit Euclidean ball in $n$ dimensions. B{\"o}r{\"o}czky
Jr. and Wintsche have obtained $M\left(C_{n}^{*},B_{n},m\right)$
for $n\geq3$ and $m\in\left\{ 3,\ldots,2n+1\right\} $ \cite{BoeroeczkyWintsche2000}.
Let $K$ be a convex set, $B$ be a bounded convex set, $s>0$, and
let $D\left(s,K,B\right)$ be a packing set of $K$ such that
$\left|\left\{ \mathbf{x}\in D\left(s,K,B\right)\mid K+\mathbf{x}\subseteq sB\right\} \right|$
is maximal among all packing sets of $K$. The density
of the densest packing of $K$, or the packing density of $K$, is
defined to be
\[
\delta\left(K\right):=\lim_{s\rightarrow\infty}\frac{\left|\left\{ \mathbf{x}\in D\left(s,K,B\right)\mid K+\mathbf{x}\subseteq sB\right\} \right|\Vol\left(K\right)}{\Vol\left(sB\right)},
\]
see Definition 4 in Section 20 of \cite{GruberLekkerkerker1987}
(page 225), and it is independent of $B$. Then we can set $B=C_{n}^{*}$
and also suppose that $K=C_{n}^{*}$. For $s>1$, since $C_{n}^{*}+\mathbf{x}\subseteq sC_{n}^{*}$
iff $\mathbf{x}\in\left(s-1\right)C_{n}^{*}$, we have $\left|\left\{ \mathbf{x}\in D\left(s,K,B\right)\mid C_{n}^{*}+\mathbf{x}\subseteq sC_{n}^{*}\right\} \right|=\left|\left\{ \mathbf{x}\in D\left(s,K,B\right)\mid\mathbf{x}\in\left(s-1\right)C_{n}^{*}\right\} \right|$.
Next, scale this set by a factor of $\frac{1}{s-1}$ to get $\left|\left\{ \mathbf{x}\in D\left(s,K,B\right)\mid\mathbf{x}\in\left(s-1\right)C_{n}^{*}\right\} \right|=\left|\left\{ \left.\mathbf{x}\in\frac{1}{s-1}D\left(s,K,B\right)\,\right|\mathbf{x}\in C_{n}^{*}\right\} \right|=\gamma\left(C_{n}^{*},\frac{1}{s-1}\right)$.
Now let $r=\frac{1}{s-1}$. It follows from the definition of the
packing density that
\[
\delta\left(C_{n}^{*}\right)=\lim_{s\rightarrow\infty}\frac{\gamma\left(C_{n}^{*},\frac{1}{s-1}\right)\Vol C_{n}^{*}}{\Vol sC_{n}^{*}}=\lim_{r\rightarrow0}\gamma\left(C_{n}^{*},r\right)\left(1-\frac{1}{1+r}\right)^{n}.
\]
Hence the packing density of $C_{n}^{*}$ is related to $\gamma\left(C_{n}^{*},r\right)$
in the sense that $\gamma\left(C_{n}^{*},r\right)\left(1-\frac{1}{1+r}\right)^{n}\sim\delta\left(C_{n}^{*}\right)$
as $r\rightarrow0$.
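As a one-dimensional sanity check for this asymptotic relation, note
that $C_{1}^{*}=\left[-1,1\right]$ tiles $\mathbb{R}$, so $\delta\left(C_{1}^{*}\right)=1$,
while a maximal set of points in $\left[-1,1\right]$ with pairwise
distances at least $2r$ has $\left\lfloor \frac{1}{r}\right\rfloor +1$
points, so $\gamma\left(C_{1}^{*},r\right)=\left\lfloor \frac{1}{r}\right\rfloor +1$
and
\[
\lim_{r\rightarrow0}\gamma\left(C_{1}^{*},r\right)\left(1-\frac{1}{1+r}\right)=\lim_{r\rightarrow0}\left(\left\lfloor \frac{1}{r}\right\rfloor +1\right)\frac{r}{1+r}=1=\delta\left(C_{1}^{*}\right).
\]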
We now mention some related results involving circle packings in a
circle and sphere packings in a cylinder. For the problem of sphere
packing inside a cylinder of fixed width in three dimensions, Fu et
al. \cite{FuSteinhardtZhaoSocolarCharbonneau2015} predict that as
the radius of the spheres approaches zero, the densest packings resemble
the face-centered cubic lattice---a densest sphere packing in three
dimensions \cite{ConwaySloane1998}---except for the spheres that
are near the walls of the cylinder. In the case of dimension two the
densest circle packing is generated by the hexagonal lattice \cite{ConwaySloane1998}.
Hopkins, Stillinger, and Torquato \cite{HopkinsStillingerTorquato2010}
provide examples of this phenomenon for dense packings of circles
inside a large circle under the condition that the large circle has
the same center as one of the small circles. Sch{\"u}rmann \cite{Schuermann2002,Schuermann2005}
has shown that under certain conditions the best finite packings of
strictly convex bodies can only be obtained using nonlattice packings.
Other dense arrangements of $k$ circles within a large circle include
modified wedge hexagonal packings and curved hexagonal packings \cite{HopkinsStillingerTorquato2010},
which are the best known packings for some values of $k$ \cite{LubachevskyGraham1997}.
Basic facts about convexity and the cross-polytope can be found in
books such as those by Gruber \cite{Gruber2007}, Ziegler \cite{Ziegler2007},
and Coxeter \cite{Coxeter1948}, while basic facts about packings are
in Conway and Sloane \cite{ConwaySloane1998}, Gruber \cite{Gruber2007},
and Zong \cite{Zong1999}. Additional details on the kissing number
are also in Zong \cite{Zong1999}.
Section 2 of this paper provides the notation and preliminaries that
will be used for the rest of the text. Section 3 contains the proof
of the $n$-dimensional case, Theorem 1.1. Section 4 proves the equalities
and upper bound present in the three parts of the $3$-dimensional
case, Theorem 1.2, introducing additional notation as needed. Proposition
1.3 is proved in Section 5, and finally Section 6 presents a gallery
of diagrams related to these lower bounds.
\section{General notation and preliminaries}
Here we introduce notation that will be used over the course of this
paper. For a given $r>0$, let $P_{n}\left(r\right)\subset C_{n}^{*}$
be a packing set of $rC_{n}^{*}$. For any polytope $K$, let $\vertices\left(K\right)$
be the set of its vertices. For a fixed $n\in\mathbb{N}$, define
sets $V_{n}$ and $S_{n}\left(r\right)$ as follows:
\[
V_{n}:=\vertices\left(C_{n}^{*}\right)=\left\{ \pm\mathbf{e}_{i}\mid i\in\left\{ 1,\ldots,n\right\} \right\}
\]
and
\[
S_{n}\left(r\right):=\left(V_{n}+2r\inter\left(C_{n}^{*}\right)\right)\cap C_{n}^{*}.
\]
Therefore $S_{n}\left(r\right)$ is the set of all points in $C_{n}^{*}$
that are of distance $<2r$ from some vertex of $C_{n}^{*}$.
For $\mathbf{p}\in\mathbb{R}^{n}$ and $r>0$, we use the notation
\[
C\left(\mathbf{p},r\right):=\left\{ \mathbf{x}\in\mathbb{R}^{n}\mid\left|\left|\mathbf{x}-\mathbf{p}\right|\right|_{1}<r\right\}
\]
to denote the interior of the cross-polytope centered at $\mathbf{p}$
and scaled by the factor $r$.
The following lemma is necessary for the general $n$-dimensional
case.
\begin{lem}
Let $r\in\left(0,1\right]$. For each $j\in\left\{ 1,\ldots,n\right\} $,
\[
C_{n}^{*}\cap C\left(\mathbf{e}_{j},2r\right)\subseteq\overline{C\left(\left(1-r\right)\mathbf{e}_{j},r\right)},
\]
where $\overline{X}$ is the closure of $X$, and similarly for $-\mathbf{e}_{j}$
instead of $\mathbf{e}_{j}$.
\end{lem}
\begin{proof}
Without loss of generality we take the $\mathbf{e}_{j}$ case. Let
$\mathbf{y}=\sum_{i=1}^{n}y_{i}\mathbf{e}_{i}\in C_{n}^{*}\cap C\left(\mathbf{e}_{j},2r\right)$,
then $\left|\left|\mathbf{y}\right|\right|_{1}\leq1$ and $\left|\left|\mathbf{y}-\mathbf{e}_{j}\right|\right|_{1}\leq2r$.
Then the distance from $\mathbf{y}$ to $\left(1-r\right)\mathbf{e}_{j}$
is
\begin{eqnarray*}
\left|\left|\mathbf{y}-\left(1-r\right)\mathbf{e}_{j}\right|\right|_{1} & = & \left|\left|\sum_{i=1}^{n}y_{i}\mathbf{e}_{i}-\left(1-r\right)\mathbf{e}_{j}\right|\right|_{1}\\
& = & \left|\left|\sum_{i\neq j}y_{i}\mathbf{e}_{i}+y_{j}\mathbf{e}_{j}-\left(1-r\right)\mathbf{e}_{j}\right|\right|_{1}\\
& = & \left|\left|\sum_{i\neq j}y_{i}\mathbf{e}_{i}+\left(y_{j}+r-1\right)\mathbf{e}_{j}\right|\right|_{1}\\
& = & \sum_{i\neq j}\left|y_{i}\right|+\left|y_{j}+r-1\right|.
\end{eqnarray*}
If $y_{j}+r-1\geq0$ then since $\sum_{i=1}^{n}\left|y_{i}\right|\leq1$,
\begin{eqnarray*}
\sum_{i\neq j}\left|y_{i}\right|+\left|y_{j}+r-1\right| & \leq & \left(1-y_{j}\right)+y_{j}+r-1\\
& = & r.
\end{eqnarray*}
Similarly, if $y_{j}+r-1<0$ then since $\left|\left|\mathbf{y}-\mathbf{e}_{j}\right|\right|_{1}\leq2r$,
\begin{eqnarray*}
\sum_{i\neq j}\left|y_{i}\right|+\left|y_{j}+r-1\right| & = & \sum_{i\neq j}\left|y_{i}\right|-y_{j}-r+1\\
& = & \sum_{i\neq j}\left|y_{i}\right|+\left|y_{j}-1\right|-r\\
& \leq & 2r-r\\
& = & r.
\end{eqnarray*}
So $\left|\left|\mathbf{y}-\left(1-r\right)\mathbf{e}_{j}\right|\right|_{1}\leq r$,
or in other words, $\mathbf{y}\in\left\{ \mathbf{x}\in\mathbb{R}^{n}\mid\left|\left|\mathbf{x}-\left(1-r\right)\mathbf{e}_{j}\right|\right|_{1}\leq r\right\} =\overline{C\left(\left(1-r\right)\mathbf{e}_{j},r\right)}$.
\end{proof}
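To illustrate Lemma 2.1 with a concrete instance, take $n=2$, $r=\frac{1}{2}$,
and $j=1$, so that $2r=1$ and $\left(1-r\right)\mathbf{e}_{1}=\left(\frac{1}{2},0\right)$.
The point $\mathbf{y}=\left(\frac{1}{2},\frac{1}{4}\right)$ satisfies
$\left|\left|\mathbf{y}\right|\right|_{1}=\frac{3}{4}\leq1$ and $\left|\left|\mathbf{y}-\mathbf{e}_{1}\right|\right|_{1}=\frac{3}{4}<1$,
and indeed $\left|\left|\mathbf{y}-\left(\frac{1}{2},0\right)\right|\right|_{1}=\frac{1}{4}\leq r$.
The closure in the lemma cannot be dropped: the vertex $\mathbf{e}_{1}$
itself lies in $C_{2}^{*}\cap C\left(\mathbf{e}_{1},2r\right)$ and
satisfies $\left|\left|\mathbf{e}_{1}-\left(1-r\right)\mathbf{e}_{1}\right|\right|_{1}=r$,
so it lies on the boundary of $C\left(\left(1-r\right)\mathbf{e}_{1},r\right)$.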
\section{Proof of Theorem 1.1 (the $\boldsymbol{n}$-dimensional case)}
In this section we assume that $n\geq2$. We will show that for any
$r\in\left(1-\frac{1}{n},1\right]$ and any packing set $P_{n}\left(r\right)$,
the number of points in $P_{n}\left(r\right)\cap S_{n}\left(r\right)$
is bounded above by the number of vertices of $C_{n}^{*}$. Then $\left|P_{n}\left(r\right)\right|\leq2n$
and this inequality is true for all $P_{n}\left(r\right)$, so $\gamma\left(C_{n}^{*},r\right)\leq2n$.
As mentioned in the introduction, the set of vertices $V_{n}\subset C_{n}^{*}$
is a packing set of $rC_{n}^{*}$, which means that $2n$ is also
a lower bound, and so $\gamma\left(C_{n}^{*},r\right)=2n$.
\begin{lem}
Let $r\in\left(1-\frac{1}{n},1\right]$, then $C_{n}^{*}=S_{n}\left(r\right)$.
\end{lem}
\begin{proof}
By definition, $S_{n}\left(r\right)\subseteq C_{n}^{*}$, and so it
remains to show the reverse inclusion. Let $\mathbf{x}\in C_{n}^{*}$
and without loss of generality it can be assumed that $\mathbf{x}$
is in the convex hull of $\mathbf{0},\mathbf{e}_{1},\ldots,\mathbf{e}_{n}$.
Then $\mathbf{x}=\sum_{i=1}^{n}x_{i}\mathbf{e}_{i}$ with $0\leq x_{i}\leq1$
and $\sum_{i=1}^{n}x_{i}\leq1$. Then there exists some $j\in\left\{ 1,\ldots,n\right\} $
such that $x_{j}\geq\frac{1}{n}\sum_{i=1}^{n}x_{i}$, so
\begin{eqnarray*}
\left|\left|\mathbf{x}-\mathbf{e}_{j}\right|\right|_{1} & = & \left|\left|\sum_{i=1}^{n}x_{i}\mathbf{e}_{i}-\mathbf{e}_{j}\right|\right|_{1}\\
& = & \left|\left|\sum_{i\neq j}x_{i}\mathbf{e}_{i}+x_{j}\mathbf{e}_{j}-\mathbf{e}_{j}\right|\right|_{1}\\
& = & \left|\left|\sum_{i\neq j}x_{i}\mathbf{e}_{i}+\left(x_{j}-1\right)\mathbf{e}_{j}\right|\right|_{1}\\
& = & \sum_{i\neq j}x_{i}+\left(1-x_{j}\right)\\
& = & \sum_{i=1}^{n}x_{i}-x_{j}+\left(1-x_{j}\right)\\
& = & \sum_{i=1}^{n}x_{i}-2x_{j}+1\\
& \leq & \sum_{i=1}^{n}x_{i}-\frac{2}{n}\sum_{i=1}^{n}x_{i}+1\\
& = & \left(1-\frac{2}{n}\right)\sum_{i=1}^{n}x_{i}+1\\
& \leq & \left(1-\frac{2}{n}\right)+1\qquad\text{(because }{\textstyle 1-\frac{2}{n}}\geq0\text{)}\\
& = & 2\left(1-\frac{1}{n}\right)\\
& < & 2r.
\end{eqnarray*}
So every point in $C_{n}^{*}$ is within distance $2r$ from some
vertex of $C_{n}^{*}$.
\end{proof}
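The bound $2\left(1-\frac{1}{n}\right)$ appearing in this computation
is attained. For the centroid $\mathbf{c}=\frac{1}{n}\left(\mathbf{e}_{1}+\cdots+\mathbf{e}_{n}\right)$
of the facet $\conv\left\{ \mathbf{e}_{1},\ldots,\mathbf{e}_{n}\right\} $
and any $j\in\left\{ 1,\ldots,n\right\} $,
\[
\left|\left|\mathbf{c}-\mathbf{e}_{j}\right|\right|_{1}=\left(1-\frac{1}{n}\right)+\left(n-1\right)\cdot\frac{1}{n}=2\left(1-\frac{1}{n}\right)\qquad\text{and}\qquad\left|\left|\mathbf{c}+\mathbf{e}_{j}\right|\right|_{1}=2,
\]
so $\mathbf{c}\in S_{n}\left(r\right)$ if and only if $2\left(1-\frac{1}{n}\right)<2r$.
This is why the hypothesis $r>1-\frac{1}{n}$ in Lemma 3.1 cannot
be weakened.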
The next lemma will be crucial for showing that the number of points
in $P_{n}\left(r\right)\cap S_{n}\left(r\right)$ is bounded above
by the number of vertices of $C_{n}^{*}$. It is a uniqueness condition
which shows that if a point $\mathbf{p}\in P_{n}\left(r\right)\cap S_{n}\left(r\right)$
is close to a vertex $\mathbf{v}$ of $C_{n}^{*}$, specifically $\left|\left|\mathbf{p}-\mathbf{v}\right|\right|_{1}<2r$,
then no other point in $P_{n}\left(r\right)$ can be close to $\mathbf{v}$.
\begin{lem}
Let $r\in\left(0,1\right]$. If a vertex $\mathbf{v}$ of $C_{n}^{*}$
has the property that $\mathbf{v}\in C\left(\mathbf{p},2r\right)\cap C\left(\mathbf{q},2r\right)$
for some $\mathbf{p},\mathbf{q}\in P_{n}\left(r\right)\cap S_{n}\left(r\right)$,
then $\mathbf{p}=\mathbf{q}$.
\end{lem}
\begin{proof}
Without loss of generality, let $\mathbf{v}=\mathbf{e}_{j}$ for some
$j\in\left\{ 1,\ldots,n\right\} $, then by hypothesis $\mathbf{e}_{j}\in C\left(\mathbf{p},2r\right)\cap C\left(\mathbf{q},2r\right)$.
It suffices to show that $\left|\left|\mathbf{p}-\mathbf{q}\right|\right|_{1}<2r$,
since the distance between two distinct points in $P_{n}\left(r\right)$
must be $2r$ or greater. By the symmetry of the norm, $\mathbf{e}_{j}\in C\left(\mathbf{p},2r\right)\cap C\left(\mathbf{q},2r\right)$
implies that $\mathbf{p},\mathbf{q}\in C\left(\mathbf{e}_{j},2r\right)$.
Since $C\left(\mathbf{e}_{j},2r\right)$ is open there exists a $r'<r$
($r'$ depends on $\mathbf{p}$ and $\mathbf{q}$) such that $\mathbf{p},\mathbf{q}\in C\left(\mathbf{e}_{j},2r'\right)$.
Then it follows from Lemma 2.1 applied to $C_{n}^{*}\cap C\left(\mathbf{e}_{j},2r'\right)$
that
\[
C_{n}^{*}\cap C\left(\mathbf{e}_{j},2r'\right)\subseteq\overline{C\left(\left(1-r'\right)\mathbf{e}_{j},r'\right)}\subseteq C\left(\left(1-r'\right)\mathbf{e}_{j},r\right),
\]
so $\left|\left|\mathbf{p}-\mathbf{q}\right|\right|_{1}<2r$.
\end{proof}
The following lemma will be used both here and in the $3$-dimensional
cases in the next section.
\begin{lem}
Let $r\in\left(0,1\right]$, then
\[
\left|P_{n}\left(r\right)\cap S_{n}\left(r\right)\right|\leq2n.
\]
\end{lem}
\begin{proof}
Define $V_{n}\left(r\right)$ by
\[
V_{n}\left(r\right)=\left\{ \mathbf{v}\in V_{n}\mid\text{there exists a }\mathbf{p}\in P_{n}\left(r\right)\cap S_{n}\left(r\right)\text{ such that }\left|\left|\mathbf{v}-\mathbf{p}\right|\right|_{1}<2r\right\}
\]
(this set may be empty) and a map $f:V_{n}\left(r\right)\rightarrow P_{n}\left(r\right)\cap S_{n}\left(r\right)$
where $f\left(\mathbf{v}\right)$ is the point $\mathbf{p}\in P_{n}\left(r\right)\cap S_{n}\left(r\right)$
such that $\left|\left|\mathbf{v}-\mathbf{p}\right|\right|_{1}<2r$.
First we need to show that $f$ is well-defined. Let $\mathbf{p},\mathbf{q}\in P_{n}\left(r\right)\cap S_{n}\left(r\right)$
be points such that $\mathbf{v}\in C\left(\mathbf{p},2r\right)\cap C\left(\mathbf{q},2r\right)$
for some $\mathbf{v}\in V_{n}$, then $\mathbf{p}=\mathbf{q}$ by
Lemma 3.2, which justifies the use of the words ``the point'' in
the definition of $f$. From the definition of $S_{n}\left(r\right)$,
every point $\mathbf{p}\in P_{n}\left(r\right)\cap S_{n}\left(r\right)$
has the property that there is some $\mathbf{v}\in V_{n}$ such that
$\left|\left|\mathbf{p}-\mathbf{v}\right|\right|_{1}<2r$, so $f$
is surjective. Both the domain and range of $f$ are finite sets,
so the cardinality of the range can be bounded above by
\[
\left|P_{n}\left(r\right)\cap S_{n}\left(r\right)\right|=\left|\Ran\left(f\right)\right|\leq\left|\Dom\left(f\right)\right|=\left|V_{n}\left(r\right)\right|\leq\left|V_{n}\right|=2n,
\]
completing the proof.
\end{proof}
Now we prove Theorem 1.1. With the preparation above, the proof is
mostly a matter of putting together earlier lemmas.
\begin{proof}[\emph{Proof of Theorem 1.1}]
Assume that $P_{n}\left(r\right)$ is nonempty, otherwise $\left|P_{n}\left(r\right)\right|=0$
and there is nothing to prove. Since $r\in\left(1-\frac{1}{n},1\right]$,
it follows from Lemma 3.1 that $C_{n}^{*}=S_{n}\left(r\right)$, so $P_{n}\left(r\right)\cap S_{n}\left(r\right)=P_{n}\left(r\right)$.
Then Lemma 3.3 shows that $\left|P_{n}\left(r\right)\right|=\left|P_{n}\left(r\right)\cap S_{n}\left(r\right)\right|\leq2n$.
This inequality holds for any $P_{n}\left(r\right)$, so
\[
\gamma\left(C_{n}^{*},r\right)\leq2n\qquad\text{for }n\geq2\text{ and }r\in\left(1-\frac{1}{n},1\right]\text{.}
\]
The upper bound of $2n$ is achieved by $V_{n}=\left\{ \pm\mathbf{e}_{i}\mid i\in\left\{ 1,\ldots,n\right\} \right\} $
as a packing set of $rC_{n}^{*}$, so
\[
\gamma\left(C_{n}^{*},r\right)=2n\qquad\text{for }n\geq2\text{ and }r\in\left(1-\frac{1}{n},1\right]\text{.}
\]
The interval $r\in\left(1-\frac{1}{n},1\right]$ cannot be extended
in either direction, because $\gamma\left(C_{n}^{*},r\right)=1$ for
$r>1$ and in Proposition 5.1 we construct a packing set of $rC_{n}^{*}$,
for $r\leq1-\frac{1}{n}$, with $2n+2$ points in $C_{n}^{*}$. For
such $r$, $S_{n}\left(r\right)\subsetneq C_{n}^{*}$ and specifically
the centroid of each facet is not in $S_{n}\left(r\right)$ (cf. Subsection
4.1), so the set consisting of the $2n$ vertices of $C_{n}^{*}$
and the two centroids on opposing facets of $C_{n}^{*}$ is a packing
set of $rC_{n}^{*}$ with $2n+2$ points. Therefore, $\left(1-\frac{1}{n},1\right]$
is the largest possible interval such that $\gamma\left(C_{n}^{*},r\right)=2n$
for all $r$ in the interval.
\end{proof}
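To make the boundary case explicit, let $r=1-\frac{1}{n}$ and let
$\mathbf{c}=\frac{1}{n}\left(\mathbf{e}_{1}+\cdots+\mathbf{e}_{n}\right)$,
so that $\mathbf{c}$ and $-\mathbf{c}$ are the centroids of two opposing
facets of $C_{n}^{*}$. The distance between any two distinct vertices
is $2$, $\left|\left|\mathbf{c}-\left(-\mathbf{c}\right)\right|\right|_{1}=2$,
and for each $j\in\left\{ 1,\ldots,n\right\} $,
\[
\left|\left|\mathbf{e}_{j}-\mathbf{c}\right|\right|_{1}=2\left(1-\frac{1}{n}\right)=2r\qquad\text{and}\qquad\left|\left|\mathbf{e}_{j}+\mathbf{c}\right|\right|_{1}=2,
\]
with the same distances for $-\mathbf{e}_{j}$ by symmetry. Hence all
pairwise distances in $V_{n}\cup\left\{ \mathbf{c},-\mathbf{c}\right\} $
are at least $2r$, and $\gamma\left(C_{n}^{*},1-\frac{1}{n}\right)\geq2n+2$.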
\section{Proof of Theorem 1.2 (the $\boldsymbol{3}$-dimensional case)}
When $r\leq\frac{2}{3}$, the set $S_{3}\left(r\right)$ no longer
covers all of $C_{3}^{*}$, so unlike the $n$-dimensional case above,
the proofs for the three-dimensional cases require consideration of
the remainder $C_{3}^{*}\backslash S_{3}\left(r\right)$.
\subsection{Notation and preliminaries for dimension $\boldsymbol{3}$}
Here we collect some lemmas and notation for the three-dimensional
cases. Let $r\in\left[\frac{1}{2},\frac{2}{3}\right]$. Recall that
\[
V_{3}=\vertices\left(C_{3}^{*}\right)=\left\{ \pm\mathbf{e}_{1},\pm\mathbf{e}_{2},\pm\mathbf{e}_{3}\right\}
\]
and
\begin{eqnarray*}
S_{3}\left(r\right) & = & \left(V_{3}+2r\inter\left(C_{3}^{*}\right)\right)\cap C_{3}^{*}\\
& = & \bigcup_{i=1}^{3}\left(C\left(\mathbf{e}_{i},2r\right)\cup C\left(-\mathbf{e}_{i},2r\right)\right).
\end{eqnarray*}
For any $\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} $,
define the following subsets of $\mathbb{R}^{3}$:
\[
V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right):=\left\{ \begin{pmatrix}\sigma_{1}\left(2r-1\right)\\
\sigma_{2}\left(2r-1\right)\\
\sigma_{3}\left(2r-1\right)
\end{pmatrix},\begin{pmatrix}\sigma_{1}\left(1-r\right)\\
\sigma_{2}\left(1-r\right)\\
\sigma_{3}\left(2r-1\right)
\end{pmatrix},\begin{pmatrix}\sigma_{1}\left(1-r\right)\\
\sigma_{2}\left(2r-1\right)\\
\sigma_{3}\left(1-r\right)
\end{pmatrix},\begin{pmatrix}\sigma_{1}\left(2r-1\right)\\
\sigma_{2}\left(1-r\right)\\
\sigma_{3}\left(1-r\right)
\end{pmatrix}\right\} .
\]
\noindent \begin{center}
\includegraphics[scale=0.3]{002}
\par\end{center}
\noindent \begin{center}
\emph{Figure 4.1}. The grey cross-polytope is the set $C_{3}^{*}$,
the blue spheres are the points of $V\left(\frac{11}{20},\left(1,1,1\right)\right)$,
and the purple spheres are the points of $V\left(\frac{11}{20},\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)$
for $\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} $ and
not all equal to $1$.
\par\end{center}
The behavior of $V\left(r,\left(1,1,1\right)\right)$ at the endpoints
of the range $r\in\left[\frac{1}{2},\frac{2}{3}\right]$ is as follows.
For $r=\frac{2}{3}$, since $2r-1=1-r=\frac{1}{3}$, the set
reduces to
\[
V\left(\frac{2}{3},\left(1,1,1\right)\right)=\left\{ \begin{pmatrix}\frac{1}{3}\\
\frac{1}{3}\\
\frac{1}{3}
\end{pmatrix}\right\} ,
\]
the centroid of the facet $\conv\left\{ \mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\right\} $,
and when $r=\frac{1}{2}$ the set is
\[
V\left(\frac{1}{2},\left(1,1,1\right)\right)=\left\{ \begin{pmatrix}0\\
0\\
0
\end{pmatrix},\begin{pmatrix}\frac{1}{2}\\
\frac{1}{2}\\
0
\end{pmatrix},\begin{pmatrix}\frac{1}{2}\\
0\\
\frac{1}{2}
\end{pmatrix},\begin{pmatrix}0\\
\frac{1}{2}\\
\frac{1}{2}
\end{pmatrix}\right\} ,
\]
which consists of the origin together with the midpoints of the edges of the facet $\conv\left\{ \mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\right\} $.
Subsets defined using midpoints of edges are used to solve the related
problems of finding upper bounds for $k\left(C_{3}^{*}\right)$ and
$M\left(C_{n}^{*},B_{n},m\right)$ for some values of $n$ and $m$.
To find the kissing number of the cross-polytope, Larman and Zong
\cite{LarmanZong1999} divided the boundary of the cross-polytope
into the union of $18$ subsets including sets of the form
\[
\relint\left(\left(\frac{1}{2}\mathbf{m}+\frac{1}{2}C_{3}^{*}\right)\cap C_{3}^{*}\right),
\]
where $\mathbf{m}$ is the midpoint of an edge of $C_{3}^{*}$, and showed
that each subset could contain the center of at most one cross-polytope,
resulting in $k\left(C_{3}^{*}\right)\leq18$. Another method to prove
that $k\left(C_{3}^{*}\right)\leq18$ was used by Talata \cite{Talata1999},
who showed that any packing set achieving a kissing number of $18$
must consist of six points on the vertices, six points on the midpoints
of the edges of two opposing facets, and the remaining points on the
hexagon passing through the midpoints of the other edges. B{\"o}r{\"o}czky
Jr. and Wintsche \cite{BoeroeczkyWintsche2000} use sets defined by vertices
and midpoints of edges to determine an upper bound for $M\left(C_{n}^{*},B_{n},m\right)$
where $n\geq3$ and $m\in\left\{ 4,\ldots,2n\right\} $.
For a packing set $P_{3}\left(r\right)$ and $\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} $,
call the set $\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$
a blocked set of $P_{3}\left(r\right)$ if it does not contain any
points of $P_{3}\left(r\right)$.
First we show that $C_{3}^{*}\backslash S_{3}\left(r\right)$ can
be written in terms of $V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)$.
\begin{lem}
Let $r\in\left(\frac{1}{2},\frac{2}{3}\right]$. For any $\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} $,
define the following subsets of $C_{3}^{*}$:
\[
R\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right):=\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)\cap\left\{ \mathbf{x}\in\mathbb{R}^{3}\mid\sigma_{1}x_{1},\sigma_{2}x_{2},\sigma_{3}x_{3}\geq0\right\} .
\]
Then $R\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)=\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$
and
\[
C_{3}^{*}\backslash S_{3}\left(r\right)=\bigcup_{\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} }\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right).
\]
\end{lem}
\begin{proof}
Without loss of generality, assume that $\sigma_{1}=\sigma_{2}=\sigma_{3}=1$,
and we will show that $R\left(r,\left(1,1,1\right)\right)=\conv\left(V\left(r,\left(1,1,1\right)\right)\right)$.
The set $R\left(r,\left(1,1,1\right)\right)$ is the subset of the
unit cross-polytope with all nonnegative coordinates and excluding
the sets $C\left(\mathbf{e}_{i},2r\right)$ for $i\in\left\{ 1,2,3\right\} $,
and the set $\conv\left(V\left(r,\left(1,1,1\right)\right)\right)$
is the intersection of the half-spaces given by the inequalities $-x_{1}-x_{2}+x_{3}\leq-\left(2r-1\right)$,
$x_{1}-x_{2}-x_{3}\leq-\left(2r-1\right)$, $-x_{1}+x_{2}-x_{3}\leq-\left(2r-1\right)$,
and $x_{1}+x_{2}+x_{3}\leq1$, since the four points in $V\left(r,\left(1,1,1\right)\right)$
satisfy each inequality. We will show that $R\left(r,\left(1,1,1\right)\right)$
is also the intersection of these inequalities.
\noindent \begin{center}
\includegraphics[scale=0.3]{001}
\par\end{center}
\noindent \begin{center}
\emph{Figure 4.2}. The grey cross-polytope is the set $C_{3}^{*}$,
the green cross-polytopes are the sets $\overline{C\left(\mathbf{x},\frac{11}{20}\right)}$
for $\mathbf{x}\in V_{3}$, and the blue spheres are the points of
$V\left(\frac{11}{20},\left(1,1,1\right)\right)$.
\par\end{center}
Let $\mathbf{x}\in R\left(r,\left(1,1,1\right)\right)$. Then $\mathbf{x}\in C_{3}^{*}$
so $x_{1}+x_{2}+x_{3}\leq1$, and in addition, $\mathbf{x}\notin C\left(\mathbf{e}_{1},2r\right)$
so
\begin{eqnarray*}
\left|x_{1}-1\right|+\left|x_{2}\right|+\left|x_{3}\right| & \geq & 2r\\
1-x_{1}+x_{2}+x_{3} & \geq & 2r\\
x_{1}-x_{2}-x_{3} & \leq & -\left(2r-1\right).
\end{eqnarray*}
Similarly, $\mathbf{x}\notin C\left(\mathbf{e}_{2},2r\right)$ and
$\mathbf{x}\notin C\left(\mathbf{e}_{3},2r\right)$ so $-x_{1}+x_{2}-x_{3}\leq-\left(2r-1\right)$
and $x_{1}+x_{2}-x_{3}\geq2r-1$. That proves $R\left(r,\left(1,1,1\right)\right)\subseteq\conv\left(V\left(r,\left(1,1,1\right)\right)\right)$.
For the converse, let $\mathbf{x}\in\conv\left(V\left(r,\left(1,1,1\right)\right)\right)$.
Since $\frac{1}{2}\leq r\leq\frac{2}{3}$, both $2r-1\geq0$ and $1-r\geq0$,
so the four points in $V\left(r,\left(1,1,1\right)\right)$ all have
nonnegative coordinates. Also, $\left|\left|\mathbf{v}\right|\right|_{1}\leq1$
for all $\mathbf{v}\in V\left(r,\left(1,1,1\right)\right)$, and since
$\mathbf{x}$ is in the convex hull of $V\left(r,\left(1,1,1\right)\right)$,
it is also true that $x_{1}+x_{2}+x_{3}\leq1$. Then $x_{1},x_{2},x_{3}\geq0$
and $x_{1}+x_{2}+x_{3}\leq1$ imply that $\mathbf{x}\in C_{3}^{*}$.
Also, $\mathbf{x}$ satisfies $x_{1}-x_{2}-x_{3}\leq-\left(2r-1\right)$,
then
\begin{eqnarray*}
x_{1}-x_{2}-x_{3} & \leq & -\left(2r-1\right)\\
\left(1-x_{1}\right)+x_{2}+x_{3} & \geq & 2r\\
\left|x_{1}-1\right|+\left|x_{2}\right|+\left|x_{3}\right| & \geq & 2r,
\end{eqnarray*}
where the last inequality holds because $x_{2},x_{3}\geq0$ and ${\textstyle 0\leq x_{1}\leq\frac{1}{2}}$,
so $\mathbf{x}\notin C\left(\mathbf{e}_{1},2r\right)$. Similarly,
$-x_{1}+x_{2}-x_{3}\leq-\left(2r-1\right)$ and $-x_{1}-x_{2}+x_{3}\leq-\left(2r-1\right)$
so $\mathbf{x}\notin C\left(\mathbf{e}_{2},2r\right)$ and $\mathbf{x}\notin C\left(\mathbf{e}_{3},2r\right)$.
Hence
\[
\mathbf{x}\in\left(C_{3}^{*}\cap\left\{ x_{1},x_{2},x_{3}\geq0\right\} \right)\backslash\left(\bigcup_{i=1}^{3}C\left(\mathbf{e}_{i},2r\right)\right)=R\left(r,\left(1,1,1\right)\right).
\]
To complete the proof of the lemma, note that for any $\mathbf{x}\in\mathbb{R}^{3}$,
let $\sigma_{i}=\frac{x_{i}}{\left|x_{i}\right|}$ if $x_{i}\neq0$
and $\sigma_{i}=1$ if $x_{i}=0$, then $\sigma_{1}x_{1},\sigma_{2}x_{2},\sigma_{3}x_{3}\geq0$,
so $C_{3}^{*}\backslash S_{3}\left(r\right)$ is indeed covered by
all the $R\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)$,
$\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} $, resulting
in
\[
C_{3}^{*}\backslash S_{3}\left(r\right)=\bigcup_{\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} }R\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)=\bigcup_{\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} }\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right).
\]
\end{proof}
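As a consistency check at the endpoint $r=\frac{2}{3}$: since $2r-1=1-r=\frac{1}{3}$,
each set $\conv\left(V\left(\frac{2}{3},\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$
degenerates to the single point $\frac{1}{3}\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)$,
the centroid of the corresponding facet, so $C_{3}^{*}\backslash S_{3}\left(\frac{2}{3}\right)$
consists of exactly the eight facet centroids. This agrees with Lemma
3.1, which gives $C_{3}^{*}=S_{3}\left(r\right)$ only for $r>\frac{2}{3}$.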
From this lemma, the cross-polytope $C_{3}^{*}$ is the union of $S_{3}\left(r\right)$
and the eight regions $\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$.
By Lemma 3.3,
\[
\left|P_{3}\left(r\right)\cap S_{3}\left(r\right)\right|\leq6,
\]
but for $r\in\left(0,\frac{2}{3}\right]$, some points of $P_{3}\left(r\right)$
may be contained in one or more of the sets $\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$.
For $r\in\left(\frac{1}{2},\frac{2}{3}\right]$, each set $\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$
cannot contain more than one point of $P_{3}\left(r\right)$, which
gives an upper bound of $\left|P_{3}\left(r\right)\right|\leq14$
proved in Subsection 4.4. When $r\in\left(\frac{4}{7},\frac{2}{3}\right]$,
the required minimum distance between points of $P_{3}\left(r\right)$
is large enough so that the presence of a point of $P_{3}\left(r\right)$
in one set $\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$
may imply that one other set $\conv\left(V\left(r,\left(\sigma_{1}',\sigma_{2}',\sigma_{3}'\right)\right)\right)$,
$\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\neq\left(\sigma_{1}',\sigma_{2}',\sigma_{3}'\right)$,
cannot contain any points in $P_{3}\left(r\right)$. Then it is possible
to obtain an upper bound of $12$, and the proof in Subsection 4.3
uses a more complicated argument involving the position of $\mathbf{p}$
in $V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)$.
In Subsection 4.2 we prove that when $r\in\left(\frac{3}{5},\frac{2}{3}\right]$,
a point $\mathbf{p}\in P_{3}\left(r\right)\cap\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$
implies that three other sets of the form $\conv\left(V\left(r,\left(\sigma_{1}',\sigma_{2}',\sigma_{3}'\right)\right)\right)$
cannot contain any points in $P_{3}\left(r\right)$.
\subsection{Proof of Theorem 1.2 (a) (the $\boldsymbol{r\in\left(\frac{3}{5},\frac{2}{3}\right]}$
case)}
\begin{lem}
Let $r\in\left(\frac{3}{5},\frac{2}{3}\right]$ and $\mathbf{p}\in P_{3}\left(r\right)$.
If $\mathbf{p}\in\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$
for some $\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} $,
then $\conv\left(V\left(r,\left(-\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$,
$\conv\left(V\left(r,\left(\sigma_{1},-\sigma_{2},\sigma_{3}\right)\right)\right)$,
and $\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},-\sigma_{3}\right)\right)\right)$
are blocked sets of $P_{3}\left(r\right)$.
\end{lem}
\begin{proof}
Without loss of generality, assume that $\sigma_{1}=\sigma_{2}=\sigma_{3}=1$, and write $\mathbf{p}=\left(p_{1},p_{2},p_{3}\right)^{\mathsf{T}}$.
To show that $\conv\left(V\left(r,\left(-\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$
is a blocked set of $P_{3}\left(r\right)$, it suffices to show that
$\left|\left|\mathbf{p}-\mathbf{y}'\right|\right|_{1}<2r$ for all
\[
\mathbf{y}'\in V\left(r,\left(-1,1,1\right)\right)=\left\{ \begin{pmatrix}-\left(2r-1\right)\\
2r-1\\
2r-1
\end{pmatrix},\begin{pmatrix}-\left(1-r\right)\\
1-r\\
2r-1
\end{pmatrix},\begin{pmatrix}-\left(1-r\right)\\
2r-1\\
1-r
\end{pmatrix},\begin{pmatrix}-\left(2r-1\right)\\
1-r\\
1-r
\end{pmatrix}\right\} ,
\]
since then, by convexity, the statement $\left|\left|\mathbf{p}-\mathbf{y}\right|\right|_{1}<2r$
holds for any $\mathbf{y}\in\conv\left(V\left(r,\left(-1,1,1\right)\right)\right)$.
The calculations are as follows; they use that $p_{1}\leq1-p_{2}-p_{3}$
(since $\left|\left|\mathbf{p}\right|\right|_{1}\leq1$ and $p_{2},p_{3}\geq0$)
and that $p_{2},p_{3}\geq2r-1$ on $\conv\left(V\left(r,\left(1,1,1\right)\right)\right)$:
\begin{eqnarray*}
\left|\left|\mathbf{p}-\begin{pmatrix}-\left(2r-1\right)\\
2r-1\\
2r-1
\end{pmatrix}\right|\right|_{1} & = & \left|p_{1}+\left(2r-1\right)\right|+\left|p_{2}-\left(2r-1\right)\right|+\left|p_{3}-\left(2r-1\right)\right|\\
& = & \left(p_{1}+\left(2r-1\right)\right)+\left(\left(2r-1\right)-p_{2}\right)+\left(\left(2r-1\right)-p_{3}\right)\\
& = & p_{1}-p_{2}-p_{3}+6r-3\\
& \leq & 1-2p_{2}-2p_{3}+6r-3\\
& \leq & 1-2\left(2r-1\right)-2\left(2r-1\right)+6r-3\\
& = & 2-2r\\
& < & 2r,
\end{eqnarray*}
\begin{eqnarray*}
\left|\left|\mathbf{p}-\begin{pmatrix}-\left(1-r\right)\\
1-r\\
2r-1
\end{pmatrix}\right|\right|_{1} & = & \left|p_{1}+\left(1-r\right)\right|+\left|p_{2}-\left(1-r\right)\right|+\left|p_{3}-\left(2r-1\right)\right|\\
& = & \left(p_{1}+\left(1-r\right)\right)+\left(\left(1-r\right)-p_{2}\right)+\left(\left(2r-1\right)-p_{3}\right)\\
& = & p_{1}-p_{2}-p_{3}+1\\
& \leq & 1-2p_{2}-2p_{3}+1\\
& \leq & 1-2\left(2r-1\right)-2\left(2r-1\right)+1\\
& = & 6-8r\\
& < & 2r,
\end{eqnarray*}
and similarly
\[
\left|\left|\mathbf{p}-\begin{pmatrix}-\left(1-r\right)\\
2r-1\\
1-r
\end{pmatrix}\right|\right|_{1}<2r
\]
and
\begin{eqnarray*}
\left|\left|\mathbf{p}-\begin{pmatrix}-\left(2r-1\right)\\
1-r\\
1-r
\end{pmatrix}\right|\right|_{1} & = & \left|p_{1}+\left(2r-1\right)\right|+\left|p_{2}-\left(1-r\right)\right|+\left|p_{3}-\left(1-r\right)\right|\\
& = & \left(p_{1}+\left(2r-1\right)\right)+\left(\left(1-r\right)-p_{2}\right)+\left(\left(1-r\right)-p_{3}\right)\\
& = & p_{1}-p_{2}-p_{3}+1\\
& \leq & 6-8r\\
& < & 2r.
\end{eqnarray*}
By the symmetry of $V\left(r,\left(-1,1,1\right)\right)$, $V\left(r,\left(1,-1,1\right)\right)$,
and $V\left(r,\left(1,1,-1\right)\right)$, it follows that $\left|\left|\mathbf{p}-\mathbf{y}\right|\right|_{1}<2r$
for any $\mathbf{y}$ in the convex hull of any of these three sets,
and so their convex hulls are blocked sets of $P_{3}\left(r\right)$.
\end{proof}
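By convexity it suffices to check the sixteen vertex pairs for each sign flip, and this can also be done by computer. A minimal numerical sketch (Python; the helper names `l1` and `vertices` are ours, not from the text):

```python
def l1(u, v):
    # l1 (taxicab) distance between two points of R^3
    return sum(abs(a - b) for a, b in zip(u, v))

def vertices(r, s):
    # The four points of V(r, (s1, s2, s3)) listed in the lemma.
    s1, s2, s3 = s
    a, b = 2 * r - 1, 1 - r
    return [(s1 * a, s2 * a, s3 * a), (s1 * b, s2 * b, s3 * a),
            (s1 * b, s2 * a, s3 * b), (s1 * a, s2 * b, s3 * b)]

# Sample r in (3/5, 2/3]; every vertex of V(r,(1,1,1)) must be within
# l1-distance strictly less than 2r of every vertex of the three
# sign-flipped sets.
for k in range(1, 101):
    r = 3 / 5 + k * (2 / 3 - 3 / 5) / 100
    for flip in [(-1, 1, 1), (1, -1, 1), (1, 1, -1)]:
        assert all(l1(x, y) < 2 * r
                   for x in vertices(r, (1, 1, 1))
                   for y in vertices(r, flip))
```

Such sampling is no substitute for the algebraic argument, but it guards against sign errors in the vertex coordinates.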
If $P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)=\emptyset$
then trivially every set of the form $\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$,
$\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} $, is a
blocked set of $P_{3}\left(r\right)$. Otherwise, the above lemma
implies that for any given $P_{3}\left(r\right)$, at least three of the eight
sets of the form $\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$
are blocked sets of $P_{3}\left(r\right)$. Since each of these sets can
contain at most one point of $P_{3}\left(r\right)$, it follows that
\[
\left|P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)\right|\leq5.
\]
However, it is possible to lower the $5$ to a $4$ with the following
argument.
\begin{lem}
Let $r\in\left(\frac{3}{5},\frac{2}{3}\right]$. Then for any $P_{3}\left(r\right)$,
there exist at least four blocked sets of $P_{3}\left(r\right)$.
\end{lem}
\begin{proof}
If $P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)=\emptyset$,
then every set is blocked and there is nothing to prove, so assume that
there is a point $\mathbf{p}\in P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)$;
without loss of generality, $\mathbf{p}\in\conv\left(V\left(r,\left(1,1,1\right)\right)\right)$.
Then by Lemma 4.3, $\conv\left(V\left(r,\left(-1,1,1\right)\right)\right)$, $\conv\left(V\left(r,\left(1,-1,1\right)\right)\right)$,
and $\conv\left(V\left(r,\left(1,1,-1\right)\right)\right)$ are blocked sets of $P_{3}\left(r\right)$.
Consider the set $\conv\left(V\left(r,\left(-1,-1,-1\right)\right)\right)$. If it is
a blocked set, then there is nothing more to prove. If it is not,
then it contains a point of $P_{3}\left(r\right)$, and again by Lemma 4.3, $\conv\left(V\left(r,\left(1,-1,-1\right)\right)\right)$,
$\conv\left(V\left(r,\left(-1,1,-1\right)\right)\right)$, and $\conv\left(V\left(r,\left(-1,-1,1\right)\right)\right)$
are blocked sets of $P_{3}\left(r\right)$, resulting in a total of
six blocked sets.
\end{proof}
With this lemma we can prove Theorem 1.2 (a).
\begin{proof}[\emph{Proof of Theorem 1.2 (a)}]
Let $r\in\left(\frac{3}{5},\frac{2}{3}\right]$. As in the proof
of Theorem 1.2 (c) below, we split up $P_{3}\left(r\right)$ into $P_{3}\left(r\right)\cap S_{3}\left(r\right)$
and $P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)$,
so that
\begin{eqnarray*}
\left|P_{3}\left(r\right)\right| & \leq & \left|P_{3}\left(r\right)\cap S_{3}\left(r\right)\right|+\left|P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)\right|\\
& \leq & 6+\left|P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)\right|.
\end{eqnarray*}
By Lemma 4.4, at least four of the eight sets are blocked sets of $P_{3}\left(r\right)$, and hence
\[
\left|P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)\right|\leq4,
\]
which, when combined with the previous inequality, gives
\begin{eqnarray*}
\left|P_{3}\left(r\right)\right| & \leq & 6+4\\
& = & 10.
\end{eqnarray*}
This inequality holds for any $P_{3}\left(r\right)$, so
\[
\gamma\left(C_{3}^{*},r\right)\leq10\qquad\text{for }r\in\left(\frac{3}{5},\frac{2}{3}\right]\text{.}
\]
From Proposition 5.2 below, there is a $10$-point packing set for
$rC_{3}^{*}$ contained in $C_{3}^{*}$. So this upper bound is the
best possible, giving
\[
\gamma\left(C_{3}^{*},r\right)=10\qquad\text{for }r\in\left(\frac{3}{5},\frac{2}{3}\right]\text{.}
\]
\end{proof}
\subsection{Proof of Theorem 1.2 (b) (the $\boldsymbol{r\in\left(\frac{4}{7},\frac{3}{5}\right]}$
case)}
The following additional notation will be used in this section. For
each $r>0$ and $\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} $,
define the following sets $V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right),i\right)$, $i\in\left\{ 1,2,3\right\} $:
\begin{eqnarray*}
V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right),1\right) & := & \left\{ \begin{pmatrix}\sigma_{1}\left(2r-1\right)\\
\sigma_{2}\left(2r-1\right)\\
\sigma_{3}\left(2r-1\right)
\end{pmatrix},\begin{pmatrix}\sigma_{1}\left(2r-1\right)\\
\sigma_{2}\left(1-r\right)\\
\sigma_{3}\left(1-r\right)
\end{pmatrix},\begin{pmatrix}\sigma_{1}\frac{1}{2}r\\
\sigma_{2}\left(1-r\right)\\
\sigma_{3}\frac{1}{2}r
\end{pmatrix},\begin{pmatrix}\sigma_{1}\frac{1}{2}r\\
\sigma_{2}\frac{1}{2}r\\
\sigma_{3}\left(1-r\right)
\end{pmatrix},\frac{1}{3}\begin{pmatrix}\sigma_{1}\\
\sigma_{2}\\
\sigma_{3}
\end{pmatrix}\right\} ,\\
V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right),2\right) & := & \left\{ \begin{pmatrix}\sigma_{1}\left(2r-1\right)\\
\sigma_{2}\left(2r-1\right)\\
\sigma_{3}\left(2r-1\right)
\end{pmatrix},\begin{pmatrix}\sigma_{1}\left(1-r\right)\\
\sigma_{2}\left(2r-1\right)\\
\sigma_{3}\left(1-r\right)
\end{pmatrix},\begin{pmatrix}\sigma_{1}\left(1-r\right)\\
\sigma_{2}\frac{1}{2}r\\
\sigma_{3}\frac{1}{2}r
\end{pmatrix},\begin{pmatrix}\sigma_{1}\frac{1}{2}r\\
\sigma_{2}\frac{1}{2}r\\
\sigma_{3}\left(1-r\right)
\end{pmatrix},\frac{1}{3}\begin{pmatrix}\sigma_{1}\\
\sigma_{2}\\
\sigma_{3}
\end{pmatrix}\right\} \text{, and}\\
V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right),3\right) & := & \left\{ \begin{pmatrix}\sigma_{1}\left(2r-1\right)\\
\sigma_{2}\left(2r-1\right)\\
\sigma_{3}\left(2r-1\right)
\end{pmatrix},\begin{pmatrix}\sigma_{1}\left(1-r\right)\\
\sigma_{2}\left(1-r\right)\\
\sigma_{3}\left(2r-1\right)
\end{pmatrix},\begin{pmatrix}\sigma_{1}\left(1-r\right)\\
\sigma_{2}\frac{1}{2}r\\
\sigma_{3}\frac{1}{2}r
\end{pmatrix},\begin{pmatrix}\sigma_{1}\frac{1}{2}r\\
\sigma_{2}\left(1-r\right)\\
\sigma_{3}\frac{1}{2}r
\end{pmatrix},\frac{1}{3}\begin{pmatrix}\sigma_{1}\\
\sigma_{2}\\
\sigma_{3}
\end{pmatrix}\right\} .
\end{eqnarray*}
They have the property that
\[
\bigcup_{i=1}^{3}\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right),i\right)\right)=\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right),
\]
and the numbering of these subsets is such that the set $V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right),i\right)$
contains the point in the set
\[
\left\{ \begin{pmatrix}\sigma_{1}\left(1-r\right)\\
\sigma_{2}\left(1-r\right)\\
\sigma_{3}\left(2r-1\right)
\end{pmatrix},\begin{pmatrix}\sigma_{1}\left(1-r\right)\\
\sigma_{2}\left(2r-1\right)\\
\sigma_{3}\left(1-r\right)
\end{pmatrix},\begin{pmatrix}\sigma_{1}\left(2r-1\right)\\
\sigma_{2}\left(1-r\right)\\
\sigma_{3}\left(1-r\right)
\end{pmatrix}\right\} \subsetneq V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)
\]
that is furthest away from the vertex $\sigma_{i}\mathbf{e}_{i}$.
\begin{lem}
Let $r\in\left(\frac{4}{7},\frac{3}{5}\right]$ and $\mathbf{p}\in P_{3}\left(r\right)$.
If $\mathbf{p}\in\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$
then there is a blocked set $\conv\left(V\left(r,\left(\sigma_{1}',\sigma_{2}',\sigma_{3}'\right)\right)\right)$
of $P_{3}\left(r\right)$, with $\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)$
and $\left(\sigma_{1}',\sigma_{2}',\sigma_{3}'\right)$ differing
in exactly one coordinate.
\end{lem}
\begin{proof}
Without loss of generality, assume that $\sigma_{1}=\sigma_{2}=\sigma_{3}=1$,
then $\mathbf{p}$ is in one of the subsets $\conv\left(V\left(r,\left(1,1,1\right),i\right)\right)$
for $i\in\left\{ 1,2,3\right\} $. Assume that $\mathbf{p}\in\conv\left(V\left(r,\left(1,1,1\right),1\right)\right)$
and write $\mathbf{p}=\sum_{i=1}^{3}p_{i}\mathbf{e}_{i}$. We will
show that $\left|\left|\mathbf{x}-\mathbf{y}\right|\right|_{1}<2r$
for any $\mathbf{x}\in V\left(r,\left(1,1,1\right),1\right)$ and
$\mathbf{y}\in V\left(r,\left(-1,1,1\right)\right)$, and then by
the convexity of $\conv\left(V\left(r,\left(1,1,1\right),1\right)\right)$
and $\conv\left(V\left(r,\left(-1,1,1\right)\right)\right)$, it follows
that $\left|\left|\mathbf{p}-\mathbf{y}\right|\right|_{1}<2r$ for
any $\mathbf{y}\in\conv\left(V\left(r,\left(-1,1,1\right)\right)\right)$,
which shows that $\conv\left(V\left(r,\left(-1,1,1\right)\right)\right)$
is a blocked set of $P_{3}\left(r\right)$. This approach is similar
to the proof of the previous lemma, but the same approach cannot be
used here as the second calculation in the proof of Lemma 4.3 ends
with $6-8r<2r$, which is not true for $r\leq\frac{3}{5}$. There
are $20$ different combinations of points but not all of them need
to be explicitly checked. To keep track of the cases, we use the following
grid:
\noindent \begin{center}
\begin{tabular}{|c|c||c|c|c|c|}
\hline
\multicolumn{1}{|c}{} & & \multicolumn{4}{c|}{Elements of $V\left(r,\left(-1,1,1\right)\right)$}\tabularnewline
\cline{3-6} \cline{4-6} \cline{5-6} \cline{6-6}
\multicolumn{1}{|c}{} & & ${\scriptstyle \begin{pmatrix}{\scriptscriptstyle -\left(2r-1\right)}\\
{\scriptstyle 2r-1}\\
{\scriptstyle 2r-1}
\end{pmatrix}}$ & ${\scriptstyle \begin{pmatrix}{\scriptstyle -\left(1-r\right)}\\
{\scriptstyle 1-r}\\
{\scriptstyle 2r-1}
\end{pmatrix}}$ & ${\scriptstyle \begin{pmatrix}{\scriptstyle -\left(1-r\right)}\\
{\scriptstyle 2r-1}\\
{\scriptstyle 1-r}
\end{pmatrix}}$ & ${\scriptstyle \begin{pmatrix}{\scriptstyle -\left(2r-1\right)}\\
{\scriptstyle 1-r}\\
{\scriptstyle 1-r}
\end{pmatrix}}$\tabularnewline
\hline
\hline
\multicolumn{1}{|c|}{} & ${\scriptstyle \begin{pmatrix}{\scriptstyle 2r-1}\\
{\scriptstyle 2r-1}\\
{\scriptstyle 2r-1}
\end{pmatrix}}$ & Case 1 & Case 2 & Case 3 & Case 4\tabularnewline
\cline{2-6} \cline{3-6} \cline{4-6} \cline{5-6} \cline{6-6}
& ${\scriptstyle \begin{pmatrix}{\scriptstyle 2r-1}\\
{\scriptstyle 1-r}\\
{\scriptstyle 1-r}
\end{pmatrix}}$ & Case 5 & Case 6 & Case 7 & Case 8\tabularnewline
\cline{2-6} \cline{3-6} \cline{4-6} \cline{5-6} \cline{6-6}
Elements of $V\left(r,\left(1,1,1\right),1\right)$ & ${\scriptstyle \begin{pmatrix}{\scriptstyle \frac{1}{2}r}\\
{\scriptstyle 1-r}\\
{\scriptstyle \frac{1}{2}r}
\end{pmatrix}}$ & Case 9 & Case 10 & Case 11 & Case 12\tabularnewline
\cline{2-6} \cline{3-6} \cline{4-6} \cline{5-6} \cline{6-6}
& ${\scriptstyle \begin{pmatrix}{\scriptstyle \frac{1}{2}r}\\
{\scriptstyle \frac{1}{2}r}\\
{\scriptstyle 1-r}
\end{pmatrix}}$ & Case 13 & Case 14 & Case 15 & Case 16\tabularnewline
\cline{2-6} \cline{3-6} \cline{4-6} \cline{5-6} \cline{6-6}
& ${\scriptstyle \begin{pmatrix}{\scriptstyle \frac{1}{3}}\\
{\scriptstyle \frac{1}{3}}\\
{\scriptstyle \frac{1}{3}}
\end{pmatrix}}$ & Case 17 & Case 18 & Case 19 & Case 20\tabularnewline
\hline
\end{tabular}
\par\end{center}
\noindent For each $k\in\left\{ 1,\ldots,20\right\} $, case $k$
corresponds to the calculation of $\left|\left|\mathbf{x}-\mathbf{y}\right|\right|_{1}$,
where $\mathbf{x}$ is the element of $V\left(r,\left(1,1,1\right),1\right)$
in the same row as $k$ and $\mathbf{y}$ is the element of $V\left(r,\left(-1,1,1\right)\right)$
in the same column as $k$. For example, $\left|\left|\left(2r-1,2r-1,2r-1\right)^{\mathsf{T}}-\left(-\left(2r-1\right),2r-1,2r-1\right)^{\mathsf{T}}\right|\right|_{1}$
will be calculated in case 1 below. Cases that are similar to previous
cases will be pointed out as they arise.
\end{proof}
\begin{enumerate}
\item Since $2r-1,1-r<\frac{1}{2}<r$, it immediately follows that
\[
\left|\left|\begin{pmatrix}2r-1\\
2r-1\\
2r-1
\end{pmatrix}-\begin{pmatrix}-\left(2r-1\right)\\
2r-1\\
2r-1
\end{pmatrix}\right|\right|_{1}<2r.
\]
\item
\begin{eqnarray*}
\left|\left|\begin{pmatrix}2r-1\\
2r-1\\
2r-1
\end{pmatrix}-\begin{pmatrix}-\left(1-r\right)\\
1-r\\
2r-1
\end{pmatrix}\right|\right|_{1} & = & \left|\left(2r-1\right)+\left(1-r\right)\right|+\left|\left(2r-1\right)-\left(1-r\right)\right|\\
& = & \left(\left(2r-1\right)+\left(1-r\right)\right)+\left(\left(1-r\right)-\left(2r-1\right)\right)\\
& = & 2-2r\\
& < & 2r.
\end{eqnarray*}
\item By symmetry, this case is similar to case 2.
\item
\begin{eqnarray*}
\left|\left|\begin{pmatrix}2r-1\\
2r-1\\
2r-1
\end{pmatrix}-\begin{pmatrix}-\left(2r-1\right)\\
1-r\\
1-r
\end{pmatrix}\right|\right|_{1} & = & \left|\left(2r-1\right)+\left(2r-1\right)\right|+\left|\left(2r-1\right)-\left(1-r\right)\right|+\left|\left(2r-1\right)-\left(1-r\right)\right|\\
& = & \left(\left(2r-1\right)+\left(2r-1\right)\right)+\left(\left(1-r\right)-\left(2r-1\right)\right)+\left(\left(1-r\right)-\left(2r-1\right)\right)\\
& = & 2-2r\\
& < & 2r.
\end{eqnarray*}
\item
\begin{eqnarray*}
\left|\left|\begin{pmatrix}2r-1\\
1-r\\
1-r
\end{pmatrix}-\begin{pmatrix}-\left(1-r\right)\\
2r-1\\
2r-1
\end{pmatrix}\right|\right|_{1} & = & \left|\left(2r-1\right)+\left(1-r\right)\right|+\left|\left(2r-1\right)-\left(1-r\right)\right|+\left|\left(2r-1\right)-\left(1-r\right)\right|\\
& = & \left(\left(2r-1\right)+\left(1-r\right)\right)+\left(\left(1-r\right)-\left(2r-1\right)\right)+\left(\left(1-r\right)-\left(2r-1\right)\right)\\
& = & 4-5r\\
& < & 2r.
\end{eqnarray*}
\item
\begin{eqnarray*}
\left|\left|\begin{pmatrix}2r-1\\
1-r\\
1-r
\end{pmatrix}-\begin{pmatrix}-\left(1-r\right)\\
1-r\\
2r-1
\end{pmatrix}\right|\right|_{1} & = & \left|\left(2r-1\right)+\left(1-r\right)\right|+\left|\left(2r-1\right)-\left(1-r\right)\right|\\
& = & \left(\left(2r-1\right)+\left(1-r\right)\right)+\left(\left(1-r\right)-\left(2r-1\right)\right)\\
& = & 2-2r\\
& < & 2r.
\end{eqnarray*}
\item This case follows from case 6 due to symmetry.
\item This case follows from the same argument used in case 1, that $2r-1,1-r<\frac{1}{2}<r$.
\item
\begin{eqnarray*}
\left|\left|\begin{pmatrix}\frac{1}{2}r\\
1-r\\
\frac{1}{2}r
\end{pmatrix}-\begin{pmatrix}-\left(2r-1\right)\\
2r-1\\
2r-1
\end{pmatrix}\right|\right|_{1} & = & \left|\frac{1}{2}r+\left(2r-1\right)\right|+\left|\left(1-r\right)-\left(2r-1\right)\right|+\left|\frac{1}{2}r-\left(2r-1\right)\right|\\
& = & \left(\frac{1}{2}r+\left(2r-1\right)\right)+\left(\left(1-r\right)-\left(2r-1\right)\right)+\left(\frac{1}{2}r-\left(2r-1\right)\right)\\
& = & 2-2r\\
& < & 2r.
\end{eqnarray*}
\item
\begin{eqnarray*}
\left|\left|\begin{pmatrix}\frac{1}{2}r\\
1-r\\
\frac{1}{2}r
\end{pmatrix}-\begin{pmatrix}-\left(1-r\right)\\
1-r\\
2r-1
\end{pmatrix}\right|\right|_{1} & = & \left|\frac{1}{2}r+\left(1-r\right)\right|+\left|\frac{1}{2}r-\left(2r-1\right)\right|\\
& = & \left(\frac{1}{2}r+\left(1-r\right)\right)+\left(\frac{1}{2}r-\left(2r-1\right)\right)\\
& = & 2-2r\\
& < & 2r.
\end{eqnarray*}
\item
\begin{eqnarray*}
\left|\left|\begin{pmatrix}\frac{1}{2}r\\
1-r\\
\frac{1}{2}r
\end{pmatrix}-\begin{pmatrix}-\left(1-r\right)\\
2r-1\\
1-r
\end{pmatrix}\right|\right|_{1} & = & \left|\frac{1}{2}r+\left(1-r\right)\right|+\left|\left(1-r\right)-\left(2r-1\right)\right|+\left|\frac{1}{2}r-\left(1-r\right)\right|\\
& = & \left(\frac{1}{2}r+\left(1-r\right)\right)+\left(\left(1-r\right)-\left(2r-1\right)\right)+\left(\left(1-r\right)-\frac{1}{2}r\right)\\
& = & 4-5r\\
& < & 2r.
\end{eqnarray*}
\item
\begin{eqnarray*}
\left|\left|\begin{pmatrix}\frac{1}{2}r\\
1-r\\
\frac{1}{2}r
\end{pmatrix}-\begin{pmatrix}-\left(2r-1\right)\\
1-r\\
1-r
\end{pmatrix}\right|\right|_{1} & = & \left|\frac{1}{2}r+\left(2r-1\right)\right|+\left|\frac{1}{2}r-\left(1-r\right)\right|\\
& = & \left(\frac{1}{2}r+\left(2r-1\right)\right)+\left(\left(1-r\right)-\frac{1}{2}r\right)\\
& = & r\\
& < & 2r.
\end{eqnarray*}
\item By symmetry, this case is similar to case 9.
\item By symmetry, this case is similar to case 11.
\item By symmetry, this case is similar to case 10.
\item By symmetry, this case is similar to case 12.
\item
\begin{eqnarray*}
\left|\left|\begin{pmatrix}\frac{1}{3}\\
\frac{1}{3}\\
\frac{1}{3}
\end{pmatrix}-\begin{pmatrix}-\left(2r-1\right)\\
2r-1\\
2r-1
\end{pmatrix}\right|\right|_{1} & = & \left|\frac{1}{3}+\left(2r-1\right)\right|+\left|\frac{1}{3}-\left(2r-1\right)\right|+\left|\frac{1}{3}-\left(2r-1\right)\right|\\
& = & \left(\frac{1}{3}+\left(2r-1\right)\right)+\left(\frac{1}{3}-\left(2r-1\right)\right)+\left(\frac{1}{3}-\left(2r-1\right)\right)\\
& = & 2-2r\\
& < & 2r.
\end{eqnarray*}
\item
\begin{eqnarray*}
\left|\left|\begin{pmatrix}\frac{1}{3}\\
\frac{1}{3}\\
\frac{1}{3}
\end{pmatrix}-\begin{pmatrix}-\left(1-r\right)\\
1-r\\
2r-1
\end{pmatrix}\right|\right|_{1} & = & \left|\frac{1}{3}+\left(1-r\right)\right|+\left|\frac{1}{3}-\left(1-r\right)\right|+\left|\frac{1}{3}-\left(2r-1\right)\right|\\
& = & \left(\frac{1}{3}+\left(1-r\right)\right)+\left(\left(1-r\right)-\frac{1}{3}\right)+\left(\frac{1}{3}-\left(2r-1\right)\right)\\
& = & \frac{10}{3}-4r\\
& < & 2r.
\end{eqnarray*}
\item By symmetry, this case is similar to case 18.
\item
\begin{eqnarray*}
\left|\left|\begin{pmatrix}\frac{1}{3}\\
\frac{1}{3}\\
\frac{1}{3}
\end{pmatrix}-\begin{pmatrix}-\left(2r-1\right)\\
1-r\\
1-r
\end{pmatrix}\right|\right|_{1} & = & \left|\frac{1}{3}+\left(2r-1\right)\right|+\left|\frac{1}{3}-\left(1-r\right)\right|+\left|\frac{1}{3}-\left(1-r\right)\right|\\
& = & \left(\frac{1}{3}+\left(2r-1\right)\right)+\left(\left(1-r\right)-\frac{1}{3}\right)+\left(\left(1-r\right)-\frac{1}{3}\right)\\
& = & \frac{2}{3}\\
& < & 2r.
\end{eqnarray*}
\end{enumerate}
\begin{proof}[\emph{Proof, continued}]
Hence $\conv\left(V\left(r,\left(-1,1,1\right)\right)\right)$ is
a blocked set of $P_{3}\left(r\right)$. By symmetry, if $\mathbf{p}\in\conv\left(V\left(r,\left(1,1,1\right),2\right)\right)$
or $\mathbf{p}\in\conv\left(V\left(r,\left(1,1,1\right),3\right)\right)$
then calculations similar to the above can be performed with $\mathbf{y}\in\conv\left(V\left(r,\left(1,-1,1\right)\right)\right)$
or $\mathbf{y}\in\conv\left(V\left(r,\left(1,1,-1\right)\right)\right)$
respectively.
\end{proof}
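The twenty cases can also be re-checked mechanically. A minimal numerical sketch (Python; the helper names `l1`, `V_111_1`, and `V_m11` are ours, not from the text) samples $r\in\left(\frac{4}{7},\frac{3}{5}\right]$ and tests all $5\times4$ vertex pairs:

```python
def l1(u, v):
    # l1 (taxicab) distance between two points of R^3
    return sum(abs(a - b) for a, b in zip(u, v))

def V_111_1(r):
    # The five points of V(r,(1,1,1),1) as listed above.
    a, b = 2 * r - 1, 1 - r
    return [(a, a, a), (a, b, b), (r / 2, b, r / 2),
            (r / 2, r / 2, b), (1 / 3, 1 / 3, 1 / 3)]

def V_m11(r):
    # The four points of V(r,(-1,1,1)).
    a, b = 2 * r - 1, 1 - r
    return [(-a, a, a), (-b, b, a), (-b, a, b), (-a, b, b)]

# Sample r in (4/7, 3/5]; every one of the 20 vertex pairs must be at
# l1-distance strictly less than 2r.
for k in range(1, 101):
    r = 4 / 7 + k * (3 / 5 - 4 / 7) / 100
    assert all(l1(x, y) < 2 * r for x in V_111_1(r) for y in V_m11(r))
```

Sampling does not replace the case analysis, but it guards against sign slips of exactly the kind that are easy to make in twenty hand calculations.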
Using the above lemma and the same argument as after Lemma 4.3 in
the last subsection, we have
\[
\left|P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)\right|\leq7.
\]
However, just like in the previous subsection it is possible to lower
the $7$ to a $6$ with the following argument.
\begin{lem}
Let $r\in\left(\frac{4}{7},\frac{3}{5}\right]$. Then for any $P_{3}\left(r\right)$,
there exist at least two blocked sets of $P_{3}\left(r\right)$.
\end{lem}
\begin{proof}
Let $\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$,
$\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} $, be a blocked
set of $P_{3}\left(r\right)$; such a set exists, since if $P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)=\emptyset$
then every set is blocked, and otherwise the previous lemma provides one.
Consider the set $\conv\left(V\left(r,\left(-\sigma_{1},-\sigma_{2},-\sigma_{3}\right)\right)\right)$.
If $\conv\left(V\left(r,\left(-\sigma_{1},-\sigma_{2},-\sigma_{3}\right)\right)\right)$
is a blocked set of $P_{3}\left(r\right)$ then we are done. Otherwise,
by the previous lemma there must be a blocked set $\conv\left(V\left(r,\left(\sigma_{1}',\sigma_{2}',\sigma_{3}'\right)\right)\right)$
of $P_{3}\left(r\right)$ such that $\left(-\sigma_{1},-\sigma_{2},-\sigma_{3}\right)$
and $\left(\sigma_{1}',\sigma_{2}',\sigma_{3}'\right)$ differ in
exactly one coordinate. Then $\left(\sigma_{1}',\sigma_{2}',\sigma_{3}'\right)\neq\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)$,
which means that $\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$
and $\conv\left(V\left(r,\left(\sigma_{1}',\sigma_{2}',\sigma_{3}'\right)\right)\right)$
are two distinct blocked sets of $P_{3}\left(r\right)$.
\end{proof}
The proof of Theorem 1.2 (b) is virtually identical to the proof of
Theorem 1.2 (a).
\begin{proof}[\emph{Proof of Theorem 1.2 (b)}]
Let $r\in\left(\frac{4}{7},\frac{3}{5}\right]$. As in the proof
of Theorem 1.2 (c), we split up $P_{3}\left(r\right)$ into $P_{3}\left(r\right)\cap S_{3}\left(r\right)$
and $P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)$,
then
\begin{eqnarray*}
\left|P_{3}\left(r\right)\right| & \leq & \left|P_{3}\left(r\right)\cap S_{3}\left(r\right)\right|+\left|P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)\right|\\
 & \leq & 6+\left|P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)\right|.
\end{eqnarray*}
By Lemma 4.6,
\[
\left|P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)\right|\leq6,
\]
which, when combined with the previous inequality, gives
\begin{eqnarray*}
\left|P_{3}\left(r\right)\right| & \leq & 6+6\\
& = & 12.
\end{eqnarray*}
This inequality holds for any $P_{3}\left(r\right)$, so
\[
\gamma\left(C_{3}^{*},r\right)\leq12\qquad\text{for }r\in\left(\frac{4}{7},\frac{3}{5}\right]\text{.}
\]
From Proposition 5.3 below, there is a $12$-point packing set for
$rC_{3}^{*}$ contained in $C_{3}^{*}$. So this upper bound is the
best possible, giving
\[
\gamma\left(C_{3}^{*},r\right)=12\qquad\text{for }r\in\left(\frac{4}{7},\frac{3}{5}\right]\text{.}
\]
\end{proof}
\subsection{Proof of Theorem 1.2 (c) (the $\boldsymbol{r\in\left(\frac{1}{2},\frac{4}{7}\right]}$
case)}
For $r\in\left(\frac{1}{2},\frac{4}{7}\right]$, we will use an approach
that has similarities to Larman and Zong \cite{LarmanZong1999} and
B{\"o}r{\"o}czky Jr. and Wintsche \cite{BoeroeczkyWintsche2000} in that the maximum
distance between any two points in $\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$
is less than $2r$. Then each $\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$
can contain at most one point of $P_{3}\left(r\right)$, and since
$C_{3}^{*}\backslash S_{3}\left(r\right)=\bigcup_{\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} }\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$,
the number of points of $P_{3}\left(r\right)$ in $C_{3}^{*}\backslash S_{3}\left(r\right)$
is bounded above by $8$.
\begin{lem}
Let $r\in\left(\frac{1}{2},\frac{4}{7}\right]$ and $\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} $.
For any two points $\mathbf{x},\mathbf{y}\in\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$,
$\left|\left|\mathbf{x}-\mathbf{y}\right|\right|_{1}<2r$.
\end{lem}
\begin{proof}
Without loss of generality, let $\sigma_{1}=\sigma_{2}=\sigma_{3}=1$,
then $\mathbf{x},\mathbf{y}\in\conv\left(V\left(r,\left(1,1,1\right)\right)\right)$.
It suffices to show that the distance between any two points in $V\left(r,\left(1,1,1\right)\right)$
is less than $2r$, then the conclusion for all points in $\conv\left(V\left(r,\left(1,1,1\right)\right)\right)$
follows by the convexity of $\conv\left(V\left(r,\left(1,1,1\right)\right)\right)$.
We may also assume that the two points are distinct. Suppose first that neither
point is $\left(2r-1,2r-1,2r-1\right)^{\mathsf{T}}$, where $\mathbf{v}^{\mathsf{T}}$
denotes the transpose of $\mathbf{v}$. Then both points are permutations
of $\left(1-r,1-r,2r-1\right)^{\mathsf{T}}$ that agree in exactly one
coordinate, so the distance between the two points is
\begin{eqnarray*}
\left|\left|\mathbf{x}-\mathbf{y}\right|\right|_{1} & = & 0+\left|\left(1-r\right)-\left(2r-1\right)\right|+\left|\left(2r-1\right)-\left(1-r\right)\right|\\
& = & 0+\left(2-3r\right)+\left(2-3r\right)\\
& = & 4-6r\\
& < & 2r.
\end{eqnarray*}
If one of the points is $\left(2r-1,2r-1,2r-1\right)^{\mathsf{T}}$,
then the other point must be a permutation of $\left(1-r,1-r,2r-1\right)^{\mathsf{T}}$,
so the distance between the two points is
\begin{eqnarray*}
\left|\left|\mathbf{x}-\mathbf{y}\right|\right|_{1} & = & \left|\left(1-r\right)-\left(2r-1\right)\right|+\left|\left(2r-1\right)-\left(1-r\right)\right|\\
& = & 4-6r\\
& < & 2r.
\end{eqnarray*}
\end{proof}
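As a sanity check, the diameter bound can be sampled numerically; a minimal sketch (Python; the helper names `l1` and `V_111` are ours) over $r\in\left(\frac{1}{2},\frac{4}{7}\right]$:

```python
from itertools import combinations

def l1(u, v):
    # l1 (taxicab) distance between two points of R^3
    return sum(abs(a - b) for a, b in zip(u, v))

def V_111(r):
    # The four points of V(r,(1,1,1)).
    a, b = 2 * r - 1, 1 - r
    return [(a, a, a), (b, b, a), (b, a, b), (a, b, b)]

# Sample r in (1/2, 4/7]; the diameter of V(r,(1,1,1)) (and hence, by
# convexity, of its convex hull) must be strictly less than 2r.
for k in range(1, 101):
    r = 1 / 2 + k * (4 / 7 - 1 / 2) / 100
    assert max(l1(x, y) for x, y in combinations(V_111(r), 2)) < 2 * r
```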
We can now prove Theorem 1.2 (c).
\begin{proof}[\emph{Proof of Theorem 1.2 (c)}]
Let $r\in\left(\frac{1}{2},\frac{4}{7}\right]$. By Lemma 3.3,
\[
\left|P_{3}\left(r\right)\cap S_{3}\left(r\right)\right|\leq6.
\]
Write $P_{3}\left(r\right)$ as the union of the two sets $P_{3}\left(r\right)\cap S_{3}\left(r\right)$
and $P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)$,
whose cardinalities can be bounded above individually. Expressing the
latter set as a union of the eight regions by Lemma 4.1, we obtain
\begin{eqnarray*}
\left|P_{3}\left(r\right)\right| & \leq & \left|P_{3}\left(r\right)\cap S_{3}\left(r\right)\right|+\left|P_{3}\left(r\right)\cap\left(C_{3}^{*}\backslash S_{3}\left(r\right)\right)\right|\\
 & \leq & 6+\left|P_{3}\left(r\right)\cap\left(\bigcup_{\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} }\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)\right)\right|\\
 & \leq & 6+\sum_{\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} }\left|P_{3}\left(r\right)\cap\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)\right|.
\end{eqnarray*}
An immediate consequence of the preceding lemma is that
\[
\left|P_{3}\left(r\right)\cap\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)\right|\leq1
\]
for all $\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} $,
which, when combined with the previous inequality, gives
\begin{eqnarray*}
\left|P_{3}\left(r\right)\right| & \leq & 6+8\\
& = & 14.
\end{eqnarray*}
This inequality holds for any $P_{3}\left(r\right)$, so
\[
\gamma\left(C_{3}^{*},r\right)\leq14\qquad\text{for }r\in\left(\frac{1}{2},\frac{4}{7}\right]\text{.}
\]
\end{proof}
We have not been able to determine the exact value of $\gamma\left(C_{3}^{*},r\right)$
for such $r$; some lower bounds are given in Section 5.
\section{Constructive lower bounds including the proof of Proposition 1.3}
In contrast to the upper bounds, the lower bounds are all obtained
by explicit constructions of points in the cross-polytope. For $n=3$
and $r\in\left(\frac{1}{2},\frac{2}{3}\right]$, all of the constructions
shown here contain the six points of $V_{3}$ and the remaining points
are in the union of the eight sets $\conv\left(V\left(r,\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$.
There are no claims of uniqueness made here; more than one set of
points may achieve the lower bounds of Proposition 1.3.
The calculations in the proofs below can be performed by hand or using
a computer.
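For instance, such a computer check might look as follows (a Python sketch using exact rational arithmetic; the function names are ours, not from the paper). It verifies the two facts used in the proof of Proposition 5.2 below: all ten points lie in $C_{3}^{*}$, and the minimum pairwise $\ell_{1}$-distance is $\frac{4}{3}$.

```python
from fractions import Fraction as F
from itertools import combinations

def l1(u, v):
    # l1 (taxicab) distance between two points
    return sum(abs(a - b) for a, b in zip(u, v))

def check_packing(points):
    # Return (max l1-norm, min pairwise l1-distance) of a point set.
    norm = max(sum(abs(c) for c in p) for p in points)
    dist = min(l1(u, v) for u, v in combinations(points, 2))
    return norm, dist

# V_3: the six vertices of the cross-polytope C_3^*.
V3 = [tuple(F(s) if i == j else F(0) for i in range(3))
      for j in range(3) for s in (1, -1)]
# Q_10 as in Proposition 5.2.
Q10 = [tuple(F(s, 3) for s in signs)
       for signs in [(1, 1, 1), (-1, -1, 1), (-1, 1, -1), (1, -1, -1)]]

norm, dist = check_packing(V3 + Q10)
assert norm == 1          # all points lie in C_3^*
assert dist == F(4, 3)    # pairwise distances >= 2r for every r <= 2/3
```

Exact `Fraction` arithmetic avoids any floating-point doubt about the borderline equalities.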
\begin{prop}
Let $\mathbf{q}_{n}=\left(\frac{1}{n},\ldots,\frac{1}{n}\right)^{\mathsf{T}}\in\mathbb{R}^{n}$.
Then $V_{n}\cup\left\{ \pm\mathbf{q}_{n}\right\} \subset C_{n}^{*}$
and for $r\in\left(0,1-\frac{1}{n}\right]$,
\[
V_{n}\cup\left\{ \pm\mathbf{q}_{n}\right\} =\left\{ \begin{pmatrix}1\\
0\\
0\\
\vdots\\
0
\end{pmatrix},\begin{pmatrix}-1\\
0\\
0\\
\vdots\\
0
\end{pmatrix},\begin{pmatrix}0\\
1\\
0\\
\vdots\\
0
\end{pmatrix},\begin{pmatrix}0\\
-1\\
0\\
\vdots\\
0
\end{pmatrix},\ldots,\begin{pmatrix}0\\
0\\
\vdots\\
0\\
1
\end{pmatrix},\begin{pmatrix}0\\
0\\
\vdots\\
0\\
-1
\end{pmatrix}\right\} \cup\frac{1}{n}\left\{ \begin{pmatrix}1\\
1\\
\vdots\\
1\\
1
\end{pmatrix},-\begin{pmatrix}1\\
1\\
\vdots\\
1\\
1
\end{pmatrix}\right\}
\]
is a packing set of $rC_{n}^{*}$.
\end{prop}
\begin{proof}
Any points $\mathbf{x},\mathbf{y}\in V_{n}\cup\left\{ \pm\mathbf{q}_{n}\right\} $,
$\mathbf{x}\neq\mathbf{y}$, have the property that $\left|\left|\mathbf{x}\right|\right|_{1}\leq1$
and $\left|\left|\mathbf{x}-\mathbf{y}\right|\right|_{1}\geq2\left(1-\frac{1}{n}\right)$,
so $V_{n}\cup\left\{ \pm\mathbf{q}_{n}\right\} \subset C_{n}^{*}$
is a packing set of $rC_{n}^{*}$ for $r\leq1-\frac{1}{n}$.
\end{proof}
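The minimum pairwise distance of this configuration is in fact exactly $2\left(1-\frac{1}{n}\right)$, attained by the pairs $\mathbf{e}_{j},\mathbf{q}_{n}$; the following sketch (Python, exact arithmetic; `prop_point_set` is our own name) confirms this for small $n$:

```python
from fractions import Fraction as F
from itertools import combinations

def l1(u, v):
    # l1 (taxicab) distance between two points
    return sum(abs(a - b) for a, b in zip(u, v))

def prop_point_set(n):
    # V_n together with +/- q_n, as in the proposition above.
    e = [tuple(F(s) if i == j else F(0) for i in range(n))
         for j in range(n) for s in (1, -1)]
    q = tuple(F(1, n) for _ in range(n))
    return e + [q, tuple(-c for c in q)]

for n in range(2, 8):
    pts = prop_point_set(n)
    # All points lie in C_n^* ...
    assert max(sum(abs(c) for c in p) for p in pts) <= 1
    # ... and the minimum pairwise distance is exactly 2(1 - 1/n).
    assert min(l1(u, v) for u, v in combinations(pts, 2)) == 2 * (1 - F(1, n))
```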
\begin{prop}
Let
\[
Q_{10}=\frac{1}{3}\left\{ \begin{pmatrix}1\\
1\\
1
\end{pmatrix},\begin{pmatrix}-1\\
-1\\
1
\end{pmatrix},\begin{pmatrix}-1\\
1\\
-1
\end{pmatrix},\begin{pmatrix}1\\
-1\\
-1
\end{pmatrix}\right\} .
\]
Then $V_{3}\cup Q_{10}\subset C_{3}^{*}$ and for $r\in\left(0,\frac{2}{3}\right]$,
\[
V_{3}\cup Q_{10}=\left\{ \begin{pmatrix}1\\
0\\
0
\end{pmatrix},\begin{pmatrix}-1\\
0\\
0
\end{pmatrix},\begin{pmatrix}0\\
1\\
0
\end{pmatrix},\begin{pmatrix}0\\
-1\\
0
\end{pmatrix},\begin{pmatrix}0\\
0\\
1
\end{pmatrix},\begin{pmatrix}0\\
0\\
-1
\end{pmatrix}\right\} \cup\frac{1}{3}\left\{ \begin{pmatrix}1\\
1\\
1
\end{pmatrix},\begin{pmatrix}-1\\
-1\\
1
\end{pmatrix},\begin{pmatrix}-1\\
1\\
-1
\end{pmatrix},\begin{pmatrix}1\\
-1\\
-1
\end{pmatrix}\right\}
\]
is a packing set of $rC_{3}^{*}$.
\end{prop}
\begin{proof}
Any points $\mathbf{x},\mathbf{y}\in V_{3}\cup Q_{10}$, $\mathbf{x}\neq\mathbf{y}$,
have the property that $\left|\left|\mathbf{x}\right|\right|_{1}\leq1$
and $\left|\left|\mathbf{x}-\mathbf{y}\right|\right|_{1}\geq\frac{4}{3}$,
so $V_{3}\cup Q_{10}\subset C_{3}^{*}$ is a packing set of $rC_{3}^{*}$
for $r\leq\frac{2}{3}$.
\end{proof}
\begin{prop}
Let
\[
Q_{12}^{+}=\frac{1}{5}\left\{ \begin{pmatrix}2\\
2\\
1
\end{pmatrix},\begin{pmatrix}-2\\
1\\
2
\end{pmatrix},\begin{pmatrix}1\\
-2\\
2
\end{pmatrix}\right\} .
\]
Then $V_{3}\cup Q_{12}^{+}\cup\left(-Q_{12}^{+}\right)\subset C_{3}^{*}$
and for $r\in\left(0,\frac{3}{5}\right]$,
\begin{eqnarray*}
V_{3}\cup Q_{12}^{+}\cup\left(-Q_{12}^{+}\right) & = & \left\{ \begin{pmatrix}1\\
0\\
0
\end{pmatrix},\begin{pmatrix}-1\\
0\\
0
\end{pmatrix},\begin{pmatrix}0\\
1\\
0
\end{pmatrix},\begin{pmatrix}0\\
-1\\
0
\end{pmatrix},\begin{pmatrix}0\\
0\\
1
\end{pmatrix},\begin{pmatrix}0\\
0\\
-1
\end{pmatrix}\right\} \\
& & \,\cup\,\frac{1}{5}\left\{ \begin{pmatrix}2\\
2\\
1
\end{pmatrix},\begin{pmatrix}-2\\
1\\
2
\end{pmatrix},\begin{pmatrix}1\\
-2\\
2
\end{pmatrix}\right\} \cup-\frac{1}{5}\left\{ \begin{pmatrix}2\\
2\\
1
\end{pmatrix},\begin{pmatrix}-2\\
1\\
2
\end{pmatrix},\begin{pmatrix}1\\
-2\\
2
\end{pmatrix}\right\}
\end{eqnarray*}
is a packing set of $rC_{3}^{*}$.
\end{prop}
\begin{proof}
Any two points $\mathbf{x},\mathbf{y}\in V_{3}\cup Q_{12}^{+}\cup\left(-Q_{12}^{+}\right)$
with $\mathbf{x}\neq\mathbf{y}$ satisfy $\left|\left|\mathbf{x}\right|\right|_{1}\leq1$
and $\left|\left|\mathbf{x}-\mathbf{y}\right|\right|_{1}\geq\frac{6}{5}$,
so $V_{3}\cup Q_{12}^{+}\cup\left(-Q_{12}^{+}\right)\subset C_{3}^{*}$
is a packing set of $rC_{3}^{*}$ for $r\leq\frac{3}{5}$.
\end{proof}
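As before, the claimed bounds can be confirmed in exact arithmetic; the sketch below is our own illustration (helper names ours):

```python
from fractions import Fraction as F
from itertools import combinations

def l1(v):
    """The l1-norm of a vector given as a tuple of Fractions."""
    return sum(abs(c) for c in v)

# V_3: the vertices of the unit cross-polytope C_3^*
V3 = [tuple(F(s) if i == j else F(0) for i in range(3))
      for j in range(3) for s in (1, -1)]

# Q_12^+ and its antipodal copy -Q_12^+
Q12p = [tuple(F(c, 5) for c in v) for v in [(2, 2, 1), (-2, 1, 2), (1, -2, 2)]]
Q12m = [tuple(-c for c in v) for v in Q12p]

P = V3 + Q12p + Q12m
assert len(P) == 12
assert all(l1(p) <= 1 for p in P)            # P is a subset of C_3^*
min_dist = min(l1(tuple(a - b for a, b in zip(p, q)))
               for p, q in combinations(P, 2))
assert min_dist == F(6, 5)                   # = 2r for r = 3/5
```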
Finally we consider the case $r\in\left(0,\frac{6}{11}\right]$. The
construction below differs from the previous constructions as there
are no obvious large-scale symmetries.
\begin{prop}
Let
\[
Q_{13}=\frac{1}{11}\left\{ \begin{pmatrix}-1\\
5\\
5
\end{pmatrix},\begin{pmatrix}5\\
-1\\
5
\end{pmatrix},\begin{pmatrix}5\\
5\\
-1
\end{pmatrix},\begin{pmatrix}-5\\
-2\\
4
\end{pmatrix},\begin{pmatrix}-5\\
4\\
-2
\end{pmatrix},\begin{pmatrix}4\\
-2\\
-5
\end{pmatrix},\begin{pmatrix}-3\\
-5\\
-3
\end{pmatrix}\right\} .
\]
Then $V_{3}\cup Q_{13}\subset C_{3}^{*}$ and for $r\in\left(0,\frac{6}{11}\right]$,
\begin{eqnarray*}
V_{3}\cup Q_{13} & = & \left\{ \begin{pmatrix}1\\
0\\
0
\end{pmatrix},\begin{pmatrix}-1\\
0\\
0
\end{pmatrix},\begin{pmatrix}0\\
1\\
0
\end{pmatrix},\begin{pmatrix}0\\
-1\\
0
\end{pmatrix},\begin{pmatrix}0\\
0\\
1
\end{pmatrix},\begin{pmatrix}0\\
0\\
-1
\end{pmatrix}\right\} \\
& & \,\cup\,\frac{1}{11}\left\{ \begin{pmatrix}-1\\
5\\
5
\end{pmatrix},\begin{pmatrix}5\\
-1\\
5
\end{pmatrix},\begin{pmatrix}5\\
5\\
-1
\end{pmatrix},\begin{pmatrix}-5\\
-2\\
4
\end{pmatrix},\begin{pmatrix}-5\\
4\\
-2
\end{pmatrix},\begin{pmatrix}4\\
-2\\
-5
\end{pmatrix},\begin{pmatrix}-3\\
-5\\
-3
\end{pmatrix}\right\}
\end{eqnarray*}
is a packing set of $rC_{3}^{*}$.
\end{prop}
\begin{proof}
Any two points $\mathbf{x},\mathbf{y}\in V_{3}\cup Q_{13}$ with $\mathbf{x}\neq\mathbf{y}$
satisfy $\left|\left|\mathbf{x}\right|\right|_{1}\leq1$
and $\left|\left|\mathbf{x}-\mathbf{y}\right|\right|_{1}\geq\frac{12}{11}$,
so $V_{3}\cup Q_{13}\subset C_{3}^{*}$ is a packing set of $rC_{3}^{*}$
for $r\leq\frac{6}{11}$.
\end{proof}
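Since this configuration has no obvious symmetry, an exact mechanical check is especially reassuring. The sketch below is our own (helper names ours):

```python
from fractions import Fraction as F
from itertools import combinations

def l1(v):
    """The l1-norm of a vector given as a tuple of Fractions."""
    return sum(abs(c) for c in v)

# V_3: the vertices of the unit cross-polytope C_3^*
V3 = [tuple(F(s) if i == j else F(0) for i in range(3))
      for j in range(3) for s in (1, -1)]

# Q_13: the seven asymmetric points, scaled by 1/11
Q13 = [tuple(F(c, 11) for c in v) for v in
       [(-1, 5, 5), (5, -1, 5), (5, 5, -1), (-5, -2, 4),
        (-5, 4, -2), (4, -2, -5), (-3, -5, -3)]]

P = V3 + Q13
assert len(P) == 13
assert all(l1(p) <= 1 for p in P)            # P is a subset of C_3^*
min_dist = min(l1(tuple(a - b for a, b in zip(p, q)))
               for p, q in combinations(P, 2))
assert min_dist == F(12, 11)                 # = 2r for r = 6/11
```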
We do not know if this result can be improved, either in the sense
of a $13$-point configuration for some $r>\frac{6}{11}$ or a $14$-point
configuration for $r=\frac{6}{11}$. Regarding the first avenue for
improvement, the upper end of the range $r\in\left(0,\frac{6}{11}\right]$
cannot be raised without moving the points of $V_{3}\cup Q_{13}$.
As for the second, according to the proof of Theorem 1.2 (c), at
most eight points in any $P_{3}\left(\frac{6}{11}\right)$ can be
in the sets $\conv\left(V\left(\frac{6}{11},\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$
for all $\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} $.
The packing set $V_{3}\cup Q_{13}$ contains points in each set of
the form $\conv\left(V\left(\frac{6}{11},\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\right)\right)$,
$\sigma_{1},\sigma_{2},\sigma_{3}\in\left\{ -1,1\right\} $, except
for $\conv\left(V\left(\frac{6}{11},\left(1,1,1\right)\right)\right)$,
see Figure 6.5. Since
\[
V\left(\frac{6}{11},\left(1,1,1\right)\right)=\left\{ \begin{pmatrix}0\\
0\\
0
\end{pmatrix},\frac{1}{11}\begin{pmatrix}5\\
5\\
1
\end{pmatrix},\frac{1}{11}\begin{pmatrix}5\\
1\\
5
\end{pmatrix},\frac{1}{11}\begin{pmatrix}1\\
5\\
5
\end{pmatrix}\right\}
\]
and the distances from each point in this set to $\frac{1}{11}\left(-1,5,5\right)^{\mathsf{T}}\in V_{3}\cup Q_{13}$
are
\begin{eqnarray*}
\left|\left|\begin{pmatrix}0\\
0\\
0
\end{pmatrix}-\frac{1}{11}\begin{pmatrix}-1\\
5\\
5
\end{pmatrix}\right|\right|_{1} & = & 1\\
& < & \frac{12}{11},
\end{eqnarray*}
\begin{eqnarray*}
\left|\left|\frac{1}{11}\begin{pmatrix}5\\
5\\
1
\end{pmatrix}-\frac{1}{11}\begin{pmatrix}-1\\
5\\
5
\end{pmatrix}\right|\right|_{1} & = & \left|\frac{5}{11}+\frac{1}{11}\right|+\left|\frac{1}{11}-\frac{5}{11}\right|+\left|\frac{5}{11}-\frac{5}{11}\right|\\
& = & \frac{10}{11}\\
& < & \frac{12}{11},
\end{eqnarray*}
\[
\left|\left|\frac{1}{11}\begin{pmatrix}5\\
1\\
5
\end{pmatrix}-\frac{1}{11}\begin{pmatrix}-1\\
5\\
5
\end{pmatrix}\right|\right|_{1}<\frac{12}{11},
\]
and
\begin{eqnarray*}
\left|\left|\frac{1}{11}\begin{pmatrix}1\\
5\\
5
\end{pmatrix}-\frac{1}{11}\begin{pmatrix}-1\\
5\\
5
\end{pmatrix}\right|\right|_{1} & = & \left|\frac{1}{11}+\frac{1}{11}\right|+\left|\frac{5}{11}-\frac{5}{11}\right|+\left|\frac{5}{11}-\frac{5}{11}\right|\\
& = & \frac{2}{11}\\
& < & \frac{12}{11},
\end{eqnarray*}
it follows by convexity of the $\ell_{1}$-distance that the distance from any point in $\conv\left(V\left(\frac{6}{11},\left(1,1,1\right)\right)\right)$
to $\frac{1}{11}\left(-1,5,5\right)^{\mathsf{T}}$
is at most $1<\frac{12}{11}$. Therefore, a $14$-point packing set of $\frac{6}{11}C_{3}^{*}$
is not possible without moving one or more of the points in the subset
\[
\left\{ \begin{pmatrix}1\\
0\\
0
\end{pmatrix},\begin{pmatrix}0\\
1\\
0
\end{pmatrix},\begin{pmatrix}0\\
0\\
1
\end{pmatrix}\right\} \cup\frac{1}{11}\left\{ \begin{pmatrix}-1\\
5\\
5
\end{pmatrix},\begin{pmatrix}5\\
-1\\
5
\end{pmatrix},\begin{pmatrix}5\\
5\\
-1
\end{pmatrix}\right\} \subset V_{3}\cup Q_{13}.
\]
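The four distance computations above can be checked mechanically; the sketch below is our own, verifying them in exact arithmetic and recording the convexity step: since $\mathbf{p}\mapsto\left|\left|\mathbf{p}-\mathbf{c}\right|\right|_{1}$ is convex, its maximum over $\conv(V)$ is attained at a vertex of $V$.

```python
from fractions import Fraction as F

def l1_dist(p, q):
    """The l1-distance between two tuples of Fractions."""
    return sum(abs(a - b) for a, b in zip(p, q))

c = tuple(F(v, 11) for v in (-1, 5, 5))      # nearby point of V_3 ∪ Q_13
V = [tuple(F(0) for _ in range(3)),          # vertices of V(6/11, (1,1,1))
     tuple(F(v, 11) for v in (5, 5, 1)),
     tuple(F(v, 11) for v in (5, 1, 5)),
     tuple(F(v, 11) for v in (1, 5, 5))]

dists = [l1_dist(p, c) for p in V]
assert dists == [F(1), F(10, 11), F(10, 11), F(2, 11)]
# The l1-distance to a fixed point is convex, so every point of conv(V)
# lies within max(dists) = 1 < 12/11 of c: no 14th point fits in conv(V).
assert max(dists) < F(12, 11)
```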
\begin{proof}[\emph{Proof of Proposition 1.3}]
By Proposition 5.4, the set $V_{3}\cup Q_{13}$ is a subset
of $C_{3}^{*}$ with $13$ points and is a packing set for $rC_{3}^{*}$
where $r\in\left(\frac{1}{2},\frac{6}{11}\right]$. Therefore
\[
\gamma\left(C_{3}^{*},r\right)\geq13\qquad\text{for }r\in\left(\frac{1}{2},\frac{6}{11}\right]\text{.}
\]
\end{proof}
\section{Diagrams of cross-polytope packings}
Below are graphs showing the unit cross-polytope with cross-polytopes
of radius $r$ around each point of $V_{3}$, $V_{3}\cup Q_{10}$,
$V_{3}\cup Q_{12}^{+}\cup\left(-Q_{12}^{+}\right)$, and $V_{3}\cup Q_{13}$.
For each diagram except the first one, the value of $r$ in the diagram
is the largest possible for that configuration of points. In each
diagram the grey cross-polytope in the middle is $C_{3}^{*}$.
\paragraph{$\boldsymbol{V_{3}}$: $\boldsymbol{6}$ points in $\boldsymbol{C_{3}^{*}}$}
$V_{3}$ is a packing set of $rC_{3}^{*}$ for all $0<r\leq1$.
\noindent \begin{center}
\includegraphics[scale=0.25]{003}
\par\end{center}
\noindent \begin{center}
\emph{Figure 6.1}. The green cross-polytopes represent the sets $\overline{C\left(\mathbf{x},\frac{9}{10}\right)}$
for $\mathbf{x}\in V_{3}$.
\par\end{center}
\paragraph{$\boldsymbol{V_{3}\cup Q_{10}}$: $\boldsymbol{10}$ points in $\boldsymbol{C_{3}^{*}}$}
$V_{3}\cup Q_{10}$ is a packing set of $rC_{3}^{*}$ for all $0<r\leq\frac{2}{3}$.
\noindent \begin{center}
\includegraphics[scale=0.25]{005}\includegraphics[scale=0.25]{004}
\par\end{center}
\noindent \begin{center}
\emph{Figures 6.2 (left) and 6.3 (right)}. The green cross-polytopes
represent the sets $\overline{C\left(\mathbf{x},\frac{2}{3}\right)}$
for $\mathbf{x}\in V_{3}$ and the blue cross-polytopes represent
the sets $\overline{C\left(\mathbf{y},\frac{2}{3}\right)}$ for $\mathbf{y}\in Q_{10}$.
\par\end{center}
\paragraph{$\boldsymbol{V_{3}\cup Q_{12}^{+}\cup\left(-Q_{12}^{+}\right)}$:
$\boldsymbol{12}$ points in $\boldsymbol{C_{3}^{*}}$}
$V_{3}\cup Q_{12}^{+}\cup\left(-Q_{12}^{+}\right)$ is a packing set
of $rC_{3}^{*}$ for all $0<r\leq\frac{3}{5}$.
\noindent \begin{center}
\includegraphics[scale=0.25]{007}\includegraphics[scale=0.25]{006}
\par\end{center}
\noindent \begin{center}
\emph{Figures 6.4 (left) and 6.5 (right)}. The green cross-polytopes
represent the sets $\overline{C\left(\mathbf{x},\frac{3}{5}\right)}$
for $\mathbf{x}\in V_{3}$ and the blue cross-polytopes represent
the sets $\overline{C\left(\mathbf{y},\frac{3}{5}\right)}$ for $\mathbf{y}\in Q_{12}^{+}\cup\left(-Q_{12}^{+}\right)$.
\par\end{center}
\paragraph{$\boldsymbol{V_{3}\cup Q_{13}}$: $\boldsymbol{13}$ points in $\boldsymbol{C_{3}^{*}}$}
$V_{3}\cup Q_{13}$ is a packing set of $rC_{3}^{*}$ for all $0<r\leq\frac{6}{11}$.
\noindent \begin{center}
\includegraphics[scale=0.25]{008}\includegraphics[scale=0.25]{009}\\
\includegraphics[scale=0.25]{010}\includegraphics[scale=0.25]{011}
\par\end{center}
\noindent \begin{center}
\emph{Figures 6.6 (top left), 6.7 (top right), 6.8 (bottom left),
and 6.9 (bottom right)}. The green cross-polytopes represent the sets
$\overline{C\left(\mathbf{x},\frac{6}{11}\right)}$ for $\mathbf{x}\in V_{3}$,
the blue cross-polytopes represent $\overline{C\left(\frac{1}{11}\left(-1,5,5\right),\frac{6}{11}\right)}$,
$\overline{C\left(\frac{1}{11}\left(5,-1,5\right),\frac{6}{11}\right)}$,
and $\overline{C\left(\frac{1}{11}\left(5,5,-1\right),\frac{6}{11}\right)}$,
the purple cross-polytopes represent $\overline{C\left(\frac{1}{11}\left(-5,-2,4\right),\frac{6}{11}\right)}$,
$\overline{C\left(\frac{1}{11}\left(-5,4,-2\right),\frac{6}{11}\right)}$,
and $\overline{C\left(\frac{1}{11}\left(4,-2,-5\right),\frac{6}{11}\right)}$,
and the magenta cross-polytope represents $\overline{C\left(\frac{1}{11}\left(-3,-5,-3\right),\frac{6}{11}\right)}$.
\par\end{center}
\section{Acknowledgements}
Thanks to Martin Henk for his advice, suggestions, and feedback, and
to Fei Xue for feedback and discussions, especially on organizing and
simplifying the proof of the $n=3$, $r\in\left(\frac{3}{5},\frac{2}{3}\right]$
case.
\bibliographystyle{siam}
% arXiv:1908.05650 (math.MG), August 2019: "The maximum number of points in the cross-polytope that form a packing set of a scaled cross-polytope".
% arXiv:1008.0266: "Dilation properties for weighted modulation spaces".
\begin{abstract}
In this paper we give a sharp estimate on the norm of the scaling operator $U_{\lambda}f(x)=f(\lambda x)$ acting on the weighted modulation spaces $\M{p,q}{s,t}(\R^{d})$. In particular, we recover and extend recent results by Sugimoto and Tomita in the unweighted case. As an application of our results, we estimate the growth in time of solutions of the wave and vibrating plate equations, which is of interest when considering the well-posedness of the Cauchy problem for these equations. Finally, we provide new embedding results between modulation and Besov spaces.
\end{abstract}
\section{Introduction}\label{intro}
The modulation spaces were introduced
by H.~Feichtinger \cite{Fei83}, by
imposing integrability conditions on
the short-time Fourier transform (STFT)
of tempered distributions. More
specifically, for $x, \omega \in
\mathbb{R}^{d}$, we let $M_\omega$ and $T_x$
denote the operators of modulation and
translation. Then, the STFT of $f$
with respect to a nonzero window $g$ in
the Schwartz class is $$V_gf(x,
\omega)=\ip{f}{M_{\omega}T_{x}g}=\int_{\mathbb{R}^{d}}
f(t)\overline{g(t-x)}e^{-2\pi i t\cdot
\omega}\, dt.$$ $V_{g}f(x, \omega)$ measures the frequency content of $f$ in a
neighborhood of $x$.
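In dimension one with $f=g=\varphi(t)=e^{-\pi t^{2}}$, the STFT has the well-known closed form $|V_{\varphi}\varphi(x,\omega)|=2^{-1/2}e^{-\pi(x^{2}+\omega^{2})/2}$. The following numerical sketch is our own illustration (the quadrature routine and its parameters are ours), checking this against a direct Riemann-sum evaluation of the defining integral:

```python
import cmath
import math

def stft_gauss(x, w, T=8.0, n=4000):
    """Midpoint-rule approximation of V_g f(x, w) for f = g = exp(-pi t^2)."""
    dt = 2 * T / n
    s = 0.0 + 0.0j
    for k in range(n):
        t = -T + (k + 0.5) * dt
        s += (math.exp(-math.pi * t * t)
              * math.exp(-math.pi * (t - x) ** 2)
              * cmath.exp(-2j * math.pi * t * w) * dt)
    return s

# Compare |V_phi phi| with the closed form at a few (x, w) points.
for x, w in [(0.0, 0.0), (0.7, -0.3), (1.2, 0.5)]:
    exact = 2 ** -0.5 * math.exp(-math.pi * (x * x + w * w) / 2)
    assert abs(abs(stft_gauss(x, w)) - exact) < 1e-6
```

Because the integrand is analytic and rapidly decaying, the midpoint rule converges extremely fast here, so the tolerance is comfortably met.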
For $s_1, s_2 \in \mathbb{R}$ and $1\leq p, q\leq\infty$, the
weighted modulation space
$\M{p,q}{s_{1}, s_{2}}(\mathbb{R}^{d})$ is
defined to be the Banach space of all
tempered distributions $f$ such that
\begin{equation}\label{modspace}
\nm{f}{\M{p,q}{s_{1}, s_{2}}} =
\biggparen{\int_{\mathbb{R}^{d}}\biggparen{\int_{\mathbb{R}^{d}}|V_{g}f(x,
\omega)|^{p}\, v_{s_{1}}(x)^{p}\, dx}^{q/p}\, v_{s_{2}}(\omega)^{q}\, d\omega}^{1/q}
<\infty.
\end{equation}
Here and in the sequel, we use the notation
$$v_{s}(x)=\langle x\rangle^{s}=(1+|x|^{2})^{s/2}.$$
The definition of modulation space is
independent of the choice of the window
$g$, in the sense that different window
functions yield equivalent
modulation-space norms. Furthermore,
the dual of a modulation space is also
a modulation space: if $p<\infty$ and
$q<\infty$, then $(\M{p, q}{s, t})'=\M{p',
q'}{-s, -t}$, where $p', q'$ denote the
dual exponents of $p$ and $q$,
respectively.
When $s=t=0$, we will simply write
$\M{p, q}{}=\M{p, q}{0, 0}$. The weighted $L^{2}_{s}$ space is
exactly $\M{2, 2}{s, 0}$, while an application of Plancherel's identity shows that
the Sobolev space
$\so{2}{s}$ coincides with $\M{2,2}{0, s}$.
For further properties and
uses of modulation spaces, see Gr\"ochenig's book \cite{book}, and we refer to
\cite{tri83} for equivalent definitions of the
modulation spaces for all $0<p,q\leq \infty$.
In recent years, the modulation spaces have appeared in various areas of mathematics
and engineering. Their relationship with other function spaces has
been investigated, resulting in embeddings of modulation spaces into other
function spaces such as the Besov and Sobolev spaces \cite{kasso04, sugimototomita,
toft04}. Sugimoto and Tomita \cite{sugimototomita} proved the optimality of certain
embeddings of modulation spaces into Besov spaces obtained in \cite{kasso04,
toft04}. These results were obtained as a consequence of optimal bounds on
$\|U_{\lambda}\|_{\M{p,q}{} \to \M{p,q}{}}$ \cite[Theorem 3.1]{sugimototomita},
where $U_{\lambda}f(\cdot)=f(\lambda \cdot)$ for $\lambda >0$.
The operator $U_\lambda$ has been investigated on many other function spaces,
including the Besov spaces. For purposes of comparison with our results, we include
the following result summarizing the behavior of $U_\lambda$ on the Besov spaces \cite[Proposition 3]{rusic}:
\begin{theorem}\label{dilbes}
For $\lambda\in(0,\infty)$, $s\in\mathbb{R}$,
\begin{equation}\label{dilbesov}
C^{-1}\lambda^{-\frac{d}p }\min\{1,\lambda^s\}\|f\|_{B^{p,q}_s}\leq \|
f_\lambda\|_{B^{p,q}_s} \leq C\lambda^{-\frac{d}p
}\max\{1,\lambda^s\}\|f\|_{B^{p,q}_s}.
\end{equation}
\end{theorem}
\smallskip
The estimate on the norm of $U_{\lambda}$ on the (unweighted)
modulation spaces $\M{p,q}{}(\mathbb{R}^{d})$ was first obtained by
Sugimoto and Tomita \cite{sugimototomita}. In this paper, we shall
derive optimal lower and upper bounds for the operator
$U_{\lambda}$ on general modulation spaces $\M{p,q}{t,s}(\mathbb{R}^{d})$. More specifically,
the boundedness of $U_{\lambda}$ on $\M{p,q}{t,s}$ is proved in Theorems \ref{xdil},
\ref{mainfreq} and \ref{mainboth}, and the optimal bounds on $\|U_{\lambda}\|_{\M{p,q}{t, s} \to \M{p,q}{t, s}}$ are established by Theorems \ref{sharp31} and \ref{sharp32}. We wish to point out that it is not trivial to
prove sharp bounds on the norm of the operator $U_{\lambda}$, as one has to construct examples of functions in the modulation spaces that achieve the desired optimal estimates. We construct such examples by exploiting the properties of Gabor frames generated by the Gaussian window.
It is likely that the functions that we construct can play
a role in other areas of analysis where the modulation spaces are
used, e.g., in the time-frequency analysis of pseudodifferential
operators and PDEs.
Interesting applications concern Strichartz estimates for
dispersive equations such as the wave
equation and the vibrating plate
equation on Wiener amalgam and
modulation spaces, where the time
parameter of the Fourier multiplier
symbol is considered as a scaling factor.
We plan to investigate such applications in a
subsequent paper.
Finally, we prove new embeddings between modulation
spaces and Besov spaces, generalizing some of the results of \cite{kasso04}.
Although strictly speaking this is not an application of the above dilation results,
it is clearly in the spirit of the main topic of the present paper,
so that we devote a short subsection to the problem.
\vskip0.1truecm
Our paper is organized as follows. In Section~\ref{prelim} we set up the notation
and prove some preliminary results needed to establish our theorems. In
Section~\ref{main} we prove the complete scaling properties of weighted modulation spaces. In
Section~\ref{sharpness} the sharpness of our results is proved, and in
Section~\ref{applic} we point out the applications of our main results.
Finally, we shall use the notations $A\lesssim B$ to mean that there exists a
constant $c>0$ such that $A\leq cB$, and $A\asymp B$ means that $A\lesssim B
\lesssim A$.
\section{Preliminaries}\label{prelim}
We shall use the set and index terminology of the paper \cite{sugimototomita}.
Namely, for $1\leq p\leq\infty$, let $p'$ be the
conjugate exponent of $p$
($1/p+1/p'=1$). For
$(1/p,1/q)\in [0,1]\times
[0,1]$, we define the subsets
$$ I_1=\left\{(1/p,1/q):\max (1/p,1/p')\leq 1/q\right\},\quad\quad I_1^*=\left\{(1/p,1/q):\min (1/p,1/p')\geq 1/q\right\},
$$
$$ I_2=\left\{(1/p,1/q):\max (1/q,1/2)\leq 1/p'\right\},\quad\quad I_2^*=\left\{(1/p,1/q):\min (1/q,1/2)\geq 1/p'\right\},
$$
$$ I_3=\left\{(1/p,1/q):\max (1/q,1/2)\leq 1/p\right\},\quad\quad I_3^*=\left\{(1/p,1/q):\min (1/q,1/2)\geq
1/p\right\}.
$$
These sets are displayed in Figure 1:
\vspace{1.2cm}
\begin{center}
\includegraphics{figSchr1.1}
\\
$ $
\end{center}
\begin{center}{\quad \quad\quad\quad \quad $0<\lambda\leq 1$\hfill $\lambda\geq
1$}\quad\quad\quad\quad\quad\quad\quad\quad\quad
\end{center}
\begin{center}{ Figure 1. The index sets. }
\end{center}
\vspace{1.2cm}
We introduce the indices:
$$ \mu_1(p,q)=\begin{cases}-1/p & \quad {\mbox{if}} \quad (1/p,1/q)\in I_1^*,\\
1/q-1 & \quad {\mbox{if}} \quad (1/p,1/q)\in I_2^*,\\
-2/p +1/q& \quad {\mbox{if}} \quad (1/p,1/q)\in I_3^*,\\
\end{cases}
$$
and
$$ \mu_2(p,q)=\begin{cases}-1/p & \quad {\mbox{if}} \quad (1/p,1/q)\in I_1,\\
1/q-1 & \quad {\mbox{if}} \quad (1/p,1/q)\in I_2,\\
-2/p +1/q& \quad {\mbox{if}} \quad (1/p,1/q)\in I_3.\\
\end{cases}
$$
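The case distinctions defining $\mu_{1}$ and $\mu_{2}$ can be transcribed directly. The helper below is our own illustration (it takes for granted, as in \cite{sugimototomita}, that the regions $I_{1},I_{2},I_{3}$ and their starred counterparts cover the unit square), and it spot-checks the duality relations $\mu_{1}(p',q')=-1-\mu_{2}(p,q)$ and $\mu_{2}(p',q')=-1-\mu_{1}(p,q)$ used later in the paper:

```python
def mu1(p, q):
    """mu_1(p, q) via the case distinction over I_1^*, I_2^*, I_3^*."""
    ip, ipd, iq = 1 / p, 1 - 1 / p, 1 / q      # 1/p, 1/p', 1/q
    if min(ip, ipd) >= iq:                     # (1/p, 1/q) in I_1^*
        return -ip
    if min(iq, 0.5) >= ipd:                    # (1/p, 1/q) in I_2^*
        return iq - 1
    return -2 * ip + iq                        # (1/p, 1/q) in I_3^*

def mu2(p, q):
    """mu_2(p, q) via the case distinction over I_1, I_2, I_3."""
    ip, ipd, iq = 1 / p, 1 - 1 / p, 1 / q
    if max(ip, ipd) <= iq:                     # (1/p, 1/q) in I_1
        return -ip
    if max(iq, 0.5) <= ipd:                    # (1/p, 1/q) in I_2
        return iq - 1
    return -2 * ip + iq                        # (1/p, 1/q) in I_3

# p = q = 2: both indices give the familiar L^2 scaling exponent -1/2.
assert mu1(2, 2) == mu2(2, 2) == -0.5

def conj(p):
    """Conjugate exponent p' with 1/p + 1/p' = 1."""
    return p / (p - 1) if p > 1 else float('inf')

# Spot-check the duality relations between mu_1 and mu_2.
for p, q in [(1, 1), (2, 4), (4, 3), (3, 2), (1.5, 8)]:
    assert abs(mu1(conj(p), conj(q)) - (-1 - mu2(p, q))) < 1e-12
    assert abs(mu2(conj(p), conj(q)) - (-1 - mu1(p, q))) < 1e-12
```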
Next, we prove a lemma that will be used throughout this paper, and which allows us
to investigate the action of $U_{\lambda}$ only on $\mathcal{S}(\bR^d)$.
\begin{lemma}\label{Fabio} Let $m$ be a polynomially growing weight function and let $A$ be a
linear continuous
operator from $\mathcal{S}'(\bR^d)$ to $\mathcal{S}'(\bR^d)$. Assume that
\begin{equation}\label{extension}
\|Af\|_{\M{p,q}{m}}\leq
C\|f\|_{\M{p,q}{m}},\quad \mbox{
for\,all}\ f\in\mathcal{S}(\bR^d).
\end{equation}
Then
\begin{equation}\label{extension2}
\|Af\|_{\M{p,q}{m}}\leq C\|f\|_{\M{p,
q}{m}},\quad \mbox{ for\,all}\
f\in\M{p, q}{m}(\bR^d).
\end{equation}
\end{lemma}
\begin{proof}
The conclusion is clear if
$p,q<\infty$, because in that
case $\mathcal{S}(\mathbb{R}^{d})$ is dense
in $\M{p,q}{m}(\mathbb{R}^{d})$.\par
Consider now the case
$p=\infty$ or $q=\infty$. For
any given $f \in \M{p,q}{m}$,
consider a sequence $f_n$ of
Schwartz functions, with
$f_n\to f$ in $\mathcal{S}'(\mathbb{R}^{d})$,
and
\begin{equation}\label{controllo}\|f_n\|_{\M{p,q}{m}}\lesssim
\|f\|_{\M{p,q}{m}}
\end{equation}
(see the proof of Proposition
11.3.4 of \cite{book}). Since
$f_n$ tends to $f$ in
$\mathcal{S}'(\bR^d)$, $Af_n$ tends to
$Af$ in $\mathcal{S}'(\bR^d)$, and
$V_\varphi Af_n$ tends to $V_\varphi
Af$ pointwise.
Hence, by Fatou's Lemma, the
assumption \eqref{extension}
and \eqref{controllo},
$$\|A f\|_{\M{p,q}{m}}\leq \liminf_{n\to\infty}
\|A f_n\|_{\M{p,q}{m}}\lesssim
\liminf_{n\to\infty}
\|f_n\|_{\M{p, q}{m}}\lesssim
\|f\|_{\M{p,q}{m}}.$$
\end{proof}
We shall also make use of the following characterization of the modulation spaces by
Gabor frames generated by the Gaussian function, which will be denoted throughout the
paper by $\varphi(x)=e^{-\pi |x|^{2}}, x \in \mathbb{R}^d.$
Recall that for $0< a<1$, the family, $$\mathcal{G}(\varphi, a,
1)=\{\varphi_{k, \ell}(\cdot)=M_{\ell}T_{ak}\varphi=e^{2\pi i \ell \cdot}\varphi(\cdot - ak),
k, \ell \in \mathbb{Z}^{d} \}$$ is a Gabor frame for $L^{2}(\mathbb{R}^{d})$ if and only if there exist
$0<A\leq B < \infty$ such that for all $f \in L^2$ we have
\begin{equation}\label{frameineq}
A\|f\|_{L^{2}}^{2}\leq \sum_{k, \ell \in \mathbb{Z}^{d}}|\ip{f}{\varphi_{k,\ell}}|^{2}\leq
B\|f\|_{L^{2}}^{2}.
\end{equation} Moreover, there exists a dual function $\tilde{\varphi}\in \mathcal{S}$ such
that $\mathcal{G}(\tilde{\varphi}, a, 1)$ is also a frame for $L^2$ and every $f \in
L^2$ can be written as
\begin{equation}\label{gabrecons}
f=\sum_{k, \ell \in \mathbb{Z}^{d}}\ip{f}{\tilde{\varphi}_{k,\ell}}\varphi_{k,\ell}= \sum_{k, \ell \in
\mathbb{Z}^{d}}\ip{f}{\varphi_{k,\ell}}\tilde{\varphi}_{k,\ell}.
\end{equation}
It is easy to see from the isometry of the Fourier transform on $L^2$ and the fact
that $\widehat{M_{\ell}T_{ak}\varphi}=T_{\ell}M_{-ak}\hat{\varphi}=e^{2\pi i a
k \ell}M_{-ak}T_{\ell}\varphi$, that $\mathcal{G}(\varphi, 1, a)$ is a Gabor frame whenever
$\mathcal{G}(\varphi, a, 1)$ is. The characterization of the modulation spaces by
Gabor frames is summarized in the following proposition. We refer to \cite[Chapter
9]{book} for a detailed treatment of Gabor frames in the context of the modulation
spaces. In particular, the next result is proved in \cite[Theorem 7.5.3]{book} and
describes precisely when the Gaussian function generates a Gabor frame on $L^2$.
\begin{proposition}\label{gabframe}
$\mathcal{G}(\varphi, a, 1)$ is a Gabor frame for $L^2$ if and only if $0<a<1$.
In this case, $\mathcal{G}(\varphi, a, 1)$ is also a Banach frame for
$\M{p,q}{t,s}$ for all $1\leq p, q \leq \infty$, and $s, t \in \mathbb{R}$. Moreover, $f \in
\M{p,q}{t,s}$ if and only if there exists a sequence $\{c_{k,\ell}\}_{k, \ell \in \mathbb{Z}^{d}}\in
\ell^{p,q}_{t,s}(\mathbb{Z}^{d}\times \mathbb{Z}^{d})$ such that
$
f=\sum_{k, \ell \in \mathbb{Z}^{d}}c_{k,\ell}\varphi_{k,\ell}$ with convergence in the modulation space norm. In addition,
$$\|f\|_{\M{p,q}{t,s}}\asymp
\|c\|_{\ell^{p,q}_{t,s}}:=\bigg(\sum_{\ell\in \mathbb{Z}^{d}}\bigg(\sum_{k\in
\mathbb{Z}^{d}}|c_{k,\ell}|^{p}v_{t}(k)^{p}\bigg)^{q/p}v_{s}(\ell)^{q}\bigg)^{1/q}.$$
\end{proposition}
\section{Dilation properties of weighted modulation spaces}\label{main}
We first consider the polynomial weights in the time variables $v_{t}(x)=\langle
x\rangle^t=(1+|x|^2)^{t/2}$, $t\in\mathbb{R}$.
\begin{theorem} \label{xdil}
Let $1\leq p,q \leq\infty$, $t\in\mathbb{R}$. Then the following are true:
\noindent (1) There exists a constant $C>0$ such that $\forall f \in
\M{p,q}{t,0},\,\lambda\geq 1$,
\begin{equation}\label{stima1}
C^{-1}\, \lambda^{d\mu_2(p,q)}\min\{1,\lambda^{-t}\}\,\|f\|_{\M{p,q}{t,0}}\leq \|
f_\lambda\|_{\M{p,q}{t,0}}\leq C
\lambda^{d\mu_1(p,q)}\max\{1,\lambda^{-t}\}\,\|f\|_{\M{p,q}{t,0}}.
\end{equation}
\noindent (2) There exists a constant $C>0$ such that $\forall f \in
\M{p,q}{t,0},\,0<\lambda\leq 1$,
\begin{equation}\label{stima2}
C^{-1}\, \lambda^{d\mu_1(p,q)}\min\{1,\lambda^{-t}\}\,\|f\|_{\M{p,q}{t,0}}\leq \|
f_\lambda\|_{\M{p,q}{t,0}}\leq C
\lambda^{d\mu_2(p,q)}\max\{1,\lambda^{-t}\}\,\|f\|_{\M{p,q}{t,0}}.
\end{equation}
\end{theorem}
\begin{proof} We shall only prove the upper halves of each of the
estimates~\eqref{stima1} and~\eqref{stima2}. The lower halves will follow from the
fact that $0< \lambda \leq 1$ if and only if $1/\lambda \geq 1$ and
$f=U_{\lambda}U_{1/\lambda}f=U_{1/\lambda}U_{\lambda}f$.
We first consider the case $\lambda \geq 1$. Recall the definition of the dilation
operator $U_\lambda$ given by $U_\lambda f(x)=f(\lambda x)$.
Since the mapping $f\mapsto \langle \cdot\rangle^t f$ is a homeomorphism from
$\M{p,q}{t_0,s}$ to $\M{p,q}{t_0-t,s}$, $t,t_0,s\in \mathbb{R}$, see, e.g., \cite[Corollary
2.3]{Toftweight}, we have:
$$\|U_\lambda f\|_{\M{p,q}{t,0}}\asymp\|\langle\cdot\rangle^t U_\lambda f\|_{\M{p,q}{}}.
$$
Using $\langle\cdot\rangle^t U_\lambda f=U_\lambda(\langle\lambda^{-1}\cdot\rangle^t ) f$ and the
dilation properties for unweighted modulation spaces in \cite[Theorem
3.1]{sugimototomita}, we obtain
$$\|U_\lambda(\langle\lambda^{-1}\cdot\rangle^t f)
\|_{\M{p,q}{}}\leq C \lambda^{d \mu_1(p,q)}
\|\langle\lambda^{-1}\cdot\rangle^t f\|_{\M{p,q}{}}\asymp
\lambda^{d \mu_1(p,q)} \|
\langle\cdot\rangle^{-t}\langle\lambda^{-1}\cdot\rangle^t
(\langle\cdot\rangle^{t}f)\|_{\M{p,q}{}}.
$$
Hence,
it remains to prove that the pseudodifferential operator with symbol
$g^{(t,\lambda)}(x):=\langle
x\rangle^{-t}\langle\lambda^{-1}x\rangle^{t}$
is bounded on $\M{p,q}{}$, and that its
norm is bounded above by
$\max\{1,\lambda^{-t}\}$.
By \cite[Theorem
14.5.2]{book}, this will follow once we prove that
$\|g^{(t,\lambda)}(x)\|_{\M{\infty,1}{}}
\lesssim\max\{1,\lambda^{-t}\}$.
To see this, observe first
that
\begin{equation}\label{aggiunta1}|g^{(t,\lambda)}(x)|\lesssim
\max\{1,\lambda^{-t}\},\quad \forall x \in \bR^d.
\end{equation} Indeed, let $
v^{(t,\lambda)}(x)= \langle
\lambda^{-1} x\rangle^t$.
Consider the case $t\geq 0$.
Since $\lambda\geq 1$, we
have $\lambda^{-1}|x|\leq
|x|$ and
$v^{(t,\lambda)}(x)\leq \langle
x\rangle^t$.
Analogously, for $t<0$,
we have
$v^{(t,\lambda)}(x)\leq
\lambda^{-t}\langle x\rangle^t$. Consequently, we get the desired
estimates
\eqref{aggiunta1}.
Using the
inclusion
$\mathcal{C}^{d+1}(\bR^d)\hookrightarrow
\M{\infty,1}{}(\bR^d)$ we have
\[
\|g^{(t,\lambda)}(x)\|_{\M{\infty,1}{}}\lesssim
\sup_{|\alpha|\leq
d+1}\sup_{x\in\bR^d}|\partial^\alpha
g^{(t,\lambda)}(x)|.
\]
By Leibniz' formula, the
estimate
$|\partial^\beta\langle
x\rangle^t|\lesssim \langle
x\rangle^{t-|\beta|}$ and
\eqref{aggiunta1} we see that
this last expression is
estimated by
$\max\{1,\lambda^{-t}\}$.
This concludes the proof of the upper half of~\eqref{stima1}.
We now consider the case $0<\lambda \leq 1$. Observe that by \cite{sugimototomita}
we have
$$\|U_\lambda(\langle\lambda^{-1}\cdot\rangle^t f)\|_{\M{p,q}{}}
\leq C \lambda^{d \mu_2(p,q)}\|\langle\lambda^{-1}\cdot\rangle^t
f\|_{\M{p,q}{}}\asymp \lambda^{d \mu_2(p,q)} \|
\langle\cdot\rangle^{-t}\langle\lambda^{-1}\cdot\rangle^t
(\langle\cdot\rangle^{t}f)\|_{\M{p,q}{}}
.$$
Moreover, one easily shows that~\eqref{aggiunta1} still holds using the same
arguments along with the fact that
$v^{(t,\lambda)}(x)=\lambda^{-t}(\lambda^2+|x|^2)^{t/2}\leq
\lambda^{-t}\langle x\rangle^t$ when $t\geq 0$. Similarly, $v^{(t,\lambda)}(x)\leq
\langle x\rangle^t$ when $t<0$. In addition,
$g^{(t,\lambda)}(x)=g^{(-t,\lambda^{-1})}(\lambda^{-1}x)$.
Hence, by the proof of~\eqref{stima1} and \cite[Theorem
3.1]{sugimototomita}, we see that
\[
\|g^{(t,\lambda)}\|_{\M{\infty,1}{}}\lesssim
\|g^{(-t,\lambda^{-1})}\|_{\M{\infty,1}{}}\lesssim\max
\{1,\lambda^{-t}\}.
\]
This establishes the upper half of~\eqref{stima2}.
\end{proof}
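As a sanity check of Theorem \ref{xdil} in the simplest case $p=q=2$, $t=0$, where $\M{2,2}{0,0}=L^{2}$ up to norm equivalence and $\mu_{1}(2,2)=\mu_{2}(2,2)=-\frac{1}{2}$, the two-sided bound collapses to $\|f_{\lambda}\|\asymp\lambda^{-d/2}\|f\|$. The following numerical illustration is our own, in $d=1$ with a Gaussian (quadrature parameters ours):

```python
import math

def l2_norm(f, T=10.0, n=20000):
    """Midpoint-rule approximation of the L^2(R) norm of f on [-T, T]."""
    dt = 2 * T / n
    return math.sqrt(sum(abs(f(-T + (k + 0.5) * dt)) ** 2
                         for k in range(n)) * dt)

f = lambda x: math.exp(-math.pi * x * x)

for lam in (0.5, 2.0, 5.0):
    f_lam = lambda x, l=lam: f(l * x)          # the dilation U_lambda f
    ratio = l2_norm(f_lam) / l2_norm(f)
    # lambda^{d * mu(2,2)} with d = 1 and mu(2,2) = -1/2
    assert abs(ratio - lam ** -0.5) < 1e-6
```

The ratio $\|f_{\lambda}\|_{2}/\|f\|_{2}=\lambda^{-1/2}$ is exact for $L^{2}$, so this only checks that the quadrature and the exponent $d\mu(2,2)=-\frac{1}{2}$ agree.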
We now consider the polynomial weights in the frequency variables $v_{s}(\omega)=\langle
\omega\rangle^s$,
$s\in\mathbb{R}$.
\begin{theorem}\label{mainfreq}
Let $1\leq p,q \leq\infty$, $s\in\mathbb{R}$. Then the following are true:\\
(1) There exists a constant $C>0$ such that $\forall f \in
\M{p,q}{0,s},\,\lambda\geq 1,$
\begin{equation}\label{bof1}
C^{-1}\,\lambda^{d\mu_2(p,q)}\min\{1,\lambda^s\}\,\|f\|_{\M{p,q}{0,s}}\leq \|
f_\lambda\|_{\M{p,q}{0,s}}\leq C
\lambda^{d\mu_1(p,q)}\max\{1,\lambda^s\}\,\|f\|_{\M{p,q}{0,s}}.
\end{equation}
(2) There exists a constant $C>0$ such that $\forall f \in
\M{p,q}{0,s},\,0<\lambda\leq 1,$
\begin{equation}\label{bof2}
C^{-1}\,\lambda^{d\mu_1(p,q)}\min\{1,\lambda^s\}\,\|f\|_{\M{p,q}{0,s}}\leq\|
f_\lambda\|_{\M{p,q}{0,s}}\leq
C\lambda^{d\mu_2(p,q)}\max\{1,\lambda^s\}\,\|f\|_{\M{p,q}{0,s}}.
\end{equation}
\end{theorem}
\begin{proof} Here we use the fact that the mapping $f\mapsto \langle D\rangle^s f$ is a
homeomorphism from $\M{p,q}{t,s_0}$
to $\M{p,q}{t,s_0-s}$, $t,s,s_0\in\mathbb{R}$ (see \cite[Corollary 2.3]{Toftweight}). The
rest of the proof uses similar arguments as those in Theorem \ref{xdil}.
\end{proof}
The next result follows immediately by combining the last two theorems.
\begin{corollary} \label{xodil}
Let $1\leq p,q \leq\infty$, $t,s\in\mathbb{R}$. Then the following are true:\\
(1) There exists a constant $C>0$ such that $\forall f \in
\M{p,q}{t,s},\,\lambda\geq 1,$
\begin{align*}
C^{-1}\lambda^{d\mu_2(p,q)}\min\{1,\lambda^{-t}\}\min\{1,\lambda^{s}\}\,&\|f\|_{\M{p,q}{t,s}}
\leq \| f_\lambda\|_{\M{p,q}{t,s}}\\[1 \jot]
& \leq C
\lambda^{d\mu_1(p,q)}\max\{1,\lambda^{-t}\}\max\{1,\lambda^{s}\}\,\|f\|_{\M{p,q}{t,s}}.
\end{align*}
(2) There exists a constant $C>0$ such that $\forall f \in
\M{p,q}{t,s},\,0<\lambda\leq 1,$
\begin{align*}
C^{-1}
\lambda^{d\mu_1(p,q)}\min\{1,\lambda^{-t}\}\min\{1,\lambda^{s}\}\,&\|f\|_{\M{p,q}{t,s}}
\leq \| f_\lambda\|_{\M{p,q}{t,s}}\\[1 \jot]
& \leq C
\lambda^{d\mu_2(p,q)}\max\{1,\lambda^{-t}\}\max\{1,\lambda^{s}\}\,\|f\|_{\M{p,q}{t,s}}.
\end{align*}
\end{corollary}
The following result is an analogue of Corollary~\ref{xodil} for modulation spaces
defined by non-separable polynomial growing
weight function such as $v_s (x,\omega ):=\langle (x,\omega )\rangle^s=(1+|x|^2+|\omega|^2)^{s/2}$.
\begin{theorem}\label{mainboth}
Let $1\leq p,q \leq\infty$, $s\in\mathbb{R}$. Then the following are true:\\
(1) There exists a constant $C>0$ such that $\forall f \in
\M{p,q}{v_s},\,\lambda\geq 1,$
\begin{equation}
C^{-1} \lambda^{d\mu_2(p,q)}\min\{\lambda^{-s},\lambda^s\}\,\|f\|_{\M{p,q}{v_s}}\leq
\| f_\lambda\|_{\M{p,q}{v_s}} \leq C
\lambda^{d\mu_1(p,q)}\max\{\lambda^{-s},\lambda^s\}\,\|f\|_{\M{p,q}{v_s}}.
\label{boxf1}
\end{equation}
(2) There exists a constant $C>0$ such that $\forall f \in
\M{p,q}{v_s},\,0<\lambda\leq 1,$
\begin{equation}
C^{-1} \lambda^{d\mu_1(p,q)}\min\{\lambda^{-s},\lambda^s\}\,\|f\|_{\M{p,q}{v_s}}
\leq \| f_\lambda\|_{\M{p,q}{v_s}}
\leq C \lambda^{d\mu_2(p,q)}\max\{\lambda^{-s},\lambda^s\}\,\|f\|_{\M{p,q}{v_s}}.
\label{boxf2}
\end{equation}
\end{theorem}
\begin{proof}
We assume $s\geq 0$. A duality argument can be used to complete
the proof when $s<0$. (This duality argument is given explicitly below,
in the proof of the sharpness of Theorem~\ref{xdil} in the case $(1/p,1/q)\in I_2$,
$t\geq 0$.)
Moreover, since the result has been
proved in \cite[Theorem
3.1]{sugimototomita} for $s=0$, one can
use interpolation arguments along with
Lemma~\ref{Fabio} to reduce the proof
to the case when $s$ is an even integer.
The mapping $f\mapsto
\langle x,D\rangle^s f$ is a
homeomorphism from
$\M{p,q}{v_s}$
to $\M{p,q}{}$, $s\in\mathbb{R}$
(see \cite[Theorem 2.2]{Toftweight}). Hence
\begin{align*} \|f_\lambda\|_{\M{p,q}{v_s}}&\asymp
\|\langle x,D\rangle^s f_\lambda\|_{\M{p,q}{}}\\
&=\|U_\lambda (\langle \lambda^{-1} x,\lambda D\rangle ^sf)
\|_{\M{p,q}{}}\\
&\leq C \begin{cases}
\lambda^{d\mu_1(p,q)}\|\langle \lambda^{-1} x,\lambda
D\rangle^s f \|_{\M{p,q}{}}, \quad \lambda\geq1\\
\lambda^{d\mu_2(p,q)}\|\langle \lambda^{-1} x,\lambda
D\rangle^s f \|_{\M{p,q}{}}, \quad 0<\lambda\leq1,
\end{cases}
\end{align*}
where in the last inequality
we used again the dilation
properties for unweighted
modulation spaces of
\cite[Theorem
3.1]{sugimototomita}.
Therefore, writing $f=\langle
x,D\rangle^{-s} \langle
x,D\rangle^{s}f$ we see that
it suffices to prove that the pseudodifferential
operator
\[
\langle \lambda^{-1} x,\lambda
D\rangle^s \langle
x,D\rangle^{-s}
\]
is bounded on $\M{p,q}{}$, and its norm is bounded above by
$\max\{1,\lambda^{-s}\}\max\{1,\lambda^s\}=\max\{\lambda^s,\lambda^{-s}\}$.
To this end, we observe that,
if $s$ is an even integer,
$\langle \lambda^{-1} x,\lambda
D\rangle^s$ is a finite sum of
operators of the form
$\lambda^{k}x^\alpha
D^\beta$, with $|k|\leq s$
and $|\alpha|+|\beta|\leq s$.
Now, Shubin's
pseudo-differential calculus
\cite{shubin} shows that the
operators $x^\alpha D^\beta
\langle x,D\rangle^{-s}$ have
bounded symbols, together
with all their derivatives,
so that they are bounded on
$\M{p,q}{}$. The proof is completed by taking into
account the additional factor
$\lambda^{k}$.
\end{proof}
Finally, it is relatively straightforward to give optimal estimates for the dilation
operator $U_{\lambda}$ on the Wiener amalgam spaces $W(\cF
L^p_s,L^q_t)$. These spaces are images of modulation spaces under the Fourier transform,
that is, $\cF \M{p,q}{t,s}=W(\cF L^p_s,L^q_t)$. It is also worth noticing that the
indices $\mu_1$ and $\mu_2$ obey the following relations,
$$ \mu_1(p',q')=-1-\mu_2(p,q), \quad \mu_2(p',q')=-1-\mu_1(p,q)\, \,
\textrm{whenever}\, \, \tfrac{1}{p}+\tfrac{1}{p'}=\tfrac{1}{q}+\tfrac{1}{q'}=1.$$
Using the above relations along with the definition of the Wiener amalgam spaces,
as well as the behavior of the Fourier transform under dilation, i.e.,
$\widehat{f_\lambda}=\lambda^{-d}(\hat{f})_{\frac1\lambda}$, and Corollary
\ref{xodil}, we obtain the following result.
\begin{proposition}\label{mainbothW}
Let $1\leq p,q \leq\infty$, $t,s\in\mathbb{R}$. Then the following are true:\\
(1) There exists a constant $C>0$ such that $\forall f \in W(\cF
L^p_s,L^q_t),\,\lambda\geq 1,$
\begin{align*}
C^{-1}
\lambda^{d\mu_2(p',q')}\min\{1,\lambda^{t}\}\min\{1,&\lambda^{-s}\}\,\|f\|_{W(\cF
L^p_s,L^q_t)} \leq \| f_\lambda\|_{W(\cF L^p_s,L^q_t)}\\[1 \jot]
& \leq C
\lambda^{d\mu_1(p',q')}\max\{1,\lambda^{t}\}\max\{1,\lambda^{-s}\}\,\|f\|_{W(\cF
L^p_s,L^q_t)}.
\end{align*}
(2) There exists a constant $C>0$ such that $\forall f \in W(\cF
L^p_s,L^q_t),\,\lambda\leq 1,$
\begin{align*}
C^{-1}
\lambda^{d\mu_1(p',q')}\min\{1,\lambda^{t}\}\min\{1,&\lambda^{-s}\}\,\|f\|_{W(\cF
L^p_s,L^q_t)} \leq \| f_\lambda\|_{W(\cF L^p_s,L^q_t)}\\[1 \jot]
& \leq
C\lambda^{d\mu_2(p',q')}\max\{1,\lambda^{t}\}\max\{1,\lambda^{-s}\}\,\|f\|_{W(\cF
L^p_s,L^q_t)}.
\end{align*}
\end{proposition}
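As a heuristic consistency check (not a proof), the exponent $d\mu_1(p',q')$ in part (1) can be predicted from the relations above: since $\widehat{f_\lambda}=\lambda^{-d}(\hat{f})_{\frac1\lambda}$, a dilation bound with exponent $d\mu_{2}(p,q)$ for the parameter $\frac1\lambda\leq 1$ on the modulation space side combines with the factor $\lambda^{-d}$ to give
\[
\lambda^{-d}\,\lambda^{-d\mu_{2}(p,q)}=\lambda^{-d(1+\mu_{2}(p,q))}=\lambda^{d\mu_{1}(p',q')},
\]
which is exactly the power of $\lambda$ appearing in part (1) of Proposition~\ref{mainbothW}.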
\section{Sharpness of Theorems \ref{xdil} and \ref{mainfreq}.}\label{sharpness}
In this section we prove the sharpness of Theorems \ref{xdil} and \ref{mainfreq}.
The sharpness of Theorem \ref{mainboth} follows by modifying the examples constructed in the next subsection, so we omit its proof. We first establish some preliminary lemmas in which we construct functions that achieve
the optimal bounds.
\subsection{Preliminary Estimates}
The next two lemmas involve estimates for the modulation space
norms of various modifications of the Gaussian. Together with
Lemmas~\ref{istar1}--\ref{i3negative}, they provide examples of
functions that achieve the optimal bound under the dilation
operator on weighted modulation spaces with weight on the space
parameter. Similar constructions for weighted modulation spaces
with weight on the frequency parameter are contained in
Lemmas~\ref{i1}--\ref{inftynegfreq}. Finally, in
Lemma~\ref{altertf} we investigate the behavior of the dilation
operator on compactly supported functions.
Recall that $\varphi(x)=e^{-\pi |x|^{2}}$ for $x \in \bR^d$, and that $\varphi_{\lambda}(x)=U_{\lambda}\varphi(x)=\varphi(\lambda x).$
\begin{lemma}\label{Gaussian} For $t, s\geq
0$, we have
\begin{equation}\label{zero}
\|\varphi_\lambda\|_{M^{p,q}_{t,0}}\asymp\lambda^{-\frac d p-t},\quad 0<\lambda\leq 1,
\end{equation}
\begin{equation}\label{infinity}
\|\varphi_\lambda\|_{M^{p,q}_{t,0}}\asymp\lambda^{- d\left(1-\frac1q\right)},\quad
\lambda\geq1,
\end{equation}
\begin{equation}\label{zerof}
\|\varphi_\lambda\|_{\M{p,q}{0,s}}\asymp \lambda^{-\frac d p},\quad 0<\lambda\leq1,
\end{equation}
and
\begin{equation}\label{infinityf}
\|\varphi_\lambda\|_{\M{p,q}{0,s}}\asymp \lambda^{- d\left(1-\frac1q\right)+s},\quad
\lambda\geq 1.
\end{equation}
\end{lemma}
\begin{proof}
We shall only prove the first two estimates, as the last two are proved similarly.
By some straightforward computations, (see, e.g., \cite[Lemma 1.5.2]{book}) we get
\begin{equation}\label{aggiunta.0}
|V_\varphi \varphi_\lambda (x,\omega )| =
(\lambda^2+1)^{-\frac d
2}e^{-\pi
\frac{\lambda^2}{\lambda^2+1}|x|^2}e^{-\pi
\frac{1}{\lambda^2+1}|\omega|^2}.
\end{equation}
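For completeness, here is a sketch of the computation behind \eqref{aggiunta.0} in dimension $d=1$ (the general case factors over coordinates). Completing the square in the Gaussian exponent,
\[
V_\varphi \varphi_\lambda (x,\omega)=\int_{\bR} e^{-\pi\lambda^{2}y^{2}}e^{-\pi(y-x)^{2}}e^{-2\pi i y\omega}\,dy
=e^{-\pi\frac{\lambda^{2}}{\lambda^{2}+1}x^{2}}\int_{\bR} e^{-\pi(\lambda^{2}+1)\big(y-\frac{x}{\lambda^{2}+1}\big)^{2}}e^{-2\pi i y\omega}\,dy,
\]
and since $\big|\int_{\bR} e^{-\pi a (y-c)^{2}}e^{-2\pi i y\omega}\,dy\big|=a^{-1/2}e^{-\pi\omega^{2}/a}$ with $a=\lambda^{2}+1$, taking absolute values yields \eqref{aggiunta.0}.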
Hence
\begin{align*}\|\varphi_\lambda\|_{\M{p,q}{t,0}}&\asymp \|V_\varphi
\varphi_\lambda\|_{\M{p,q}{t,0}}\\
&=p^{-2p}q^{-2q}\lambda^{-\frac{d}p}(\lambda^2+1)^{\frac d2(\frac 1p +\frac 1
q-1)}\left( \int_{\rd} e^{-\pi |x|^2}\langle
\frac{\sqrt{\lambda^2+1}}{\lambda\sqrt{p}}x\rangle^{pt} \,d x\right)^{\frac 1p}.
\end{align*}
If $0< \lambda \leq 1$, then
$$\lambda^{-t}|x|^{t}\leq (\tfrac{\lambda^{2}+1}{\lambda^{2}}|x|^{2})^{t/2}\leq
\big(1 + \tfrac{\lambda^{2}+1}{\lambda^{2}}|x|^{2}\big)^{t/2}\leq 2^{t/2} \lambda^{-t} (1+
|x|^{2})^{t/2}.$$ Thus, we have
$$ \lambda^{-t}\lesssim \left(\int_{\rd} e^{-\pi |x|^2}\langle
\frac{\sqrt{\lambda^2+1}}{\lambda\sqrt{p}}x\rangle^{pt}
\,d x\right)^{1/p}\lesssim \lambda^{-t}, \quad 0< \lambda\leq 1,$$
and the estimate \eqref{zero} follows.
Now, observe that, if $\lambda\geq 1$, then $\langle
\frac{\sqrt{\lambda^2+1}}{\lambda\sqrt{p}}x\rangle \asymp \langle x \rangle$ and
\eqref{infinity} follows.
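Indeed, for $p,q<\infty$ and $\lambda\geq 1$ the Gaussian integrals in \eqref{aggiunta.0} can be evaluated explicitly (a sketch; by the observation above, the weight contributes only a constant):
\[
\|V_\varphi \varphi_\lambda\|_{L^{p,q}}\asymp (\lambda^{2}+1)^{-\frac d2}\Big(\tfrac{\lambda^{2}+1}{p\,\lambda^{2}}\Big)^{\frac d{2p}}\Big(\tfrac{\lambda^{2}+1}{q}\Big)^{\frac d{2q}}
\asymp \lambda^{-d}\,\lambda^{\frac dq}=\lambda^{-d\left(1-\frac1q\right)},\quad \lambda\geq1.
\]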
\end{proof}
\begin{lemma}\label{i2star} For $t\leq 0$, $\lambda\geq1$, consider the family of
functions
\begin{equation}\label{TransGaus}
f(x)=\lambda^{-t}\varphi(x-\lambda
e_1),\ \
e_1=(1,0,0,\ldots,0).
\end{equation}
Then there exists a constant $C>0$ such that
$\|f\|_{\M{p,q}{t,0}}\leq C $, uniformly with respect to $\lambda$. Moreover,
\begin{equation}\label{bo1}
\|f_\lambda\|_{\M{p,q}{t,0}}\gtrsim \lambda^{-t+d(\frac1q-1)},\quad
\forall\,\lambda\geq1.
\end{equation}
\end{lemma}
\begin{proof}
We have
\begin{align*}
\|f\|_{\M{p,q}{t,0}}&\asymp
\|V_\varphi f(x,\omega)\langle
x\rangle^t\|_{L^{p,q}}=\lambda^{-t}\|V_\varphi
\varphi(x-\lambda
e_1,\omega)\langle
x\rangle^t\|_{L^{p,q}}\\
&=\lambda^{-t}\|V_\varphi
\varphi(x,\omega)\langle x+\lambda
e_1\rangle^t\|_{L^{p,q}}\lesssim\lambda^{-t}\lambda^t\|V_\varphi
\varphi\langle x\rangle^{-t}\|_{L^{p,q}}\lesssim1.
\end{align*}
The last inequality follows from the fact that the weight $\langle\cdot\rangle^t$ is
$\langle\cdot\rangle^{-t}$-moderate which implies that
$\langle x+\lambda
e_1\rangle^t\lesssim
\lambda^t\langle
x\rangle^{-t}$. This proves the first part of the Lemma. Let us now
estimate
$\|f_\lambda\|_{\M{p,q}{t,0}}$
from below. We have
\[
f_\lambda(x)=\lambda^{-t}\varphi_\lambda(x-e_1).
\]
Hence, by arguing as above and using \eqref{aggiunta.0}, we have
\begin{align*}
\|f_\lambda\|_{\M{p,q}{t,0}}&
\asymp\lambda^{-t}\|V_\varphi
\varphi_\lambda (x,\omega)\langle
x+e_1\rangle^t\|_{L^{p,q}}\\
&\gtrsim \lambda^{-t} \lambda^{d(\frac1q-1)} \Bigg(\int
e^{-\pi p|x|^2} \langle
x+e_1\rangle^{pt} dx\Bigg)^{\frac1p} \gtrsim
\lambda^{-t+d(\frac1q-1)},
\end{align*}
which concludes the proof.
\end{proof}
\begin{lemma}\label{istar1}
Let $1\leq p, q \leq \infty$, $\epsilon>0$, $t\in\mathbb{R}$, and $\lambda>1 $. Moreover, assume
that $(1/p,1/q)\in I^*_1$.
\noindent a) If $t\geq 0$, define
$$
f(x)=\sum_{\ell\neq 0}|\ell|^{-d/p-\epsilon}e^{2\pi i \lambda^{-1}\ell\cdot
x}\varphi(x)=\sum_{\ell\neq 0}|\ell|^{-d/p-\epsilon}M_{\lambda^{-1}\ell}\varphi(x),\quad
\mbox{in}\,\, \mathcal{S}'(\bR^d).$$
Then there exists a constant $C>0$ such that
$\|f\|_{\M{p,q}{t,0}}\leq C $, uniformly with respect to $\lambda$. Moreover,
\begin{equation}\label{Gaugf}
\|f_{\lambda}\|_{\M{p,q}{t,0}}\gtrsim\lambda^{-d/p - \epsilon}, \qquad \forall \,
\lambda > 1.
\end{equation}
\noindent b) If $t\leq 0$ define
$$
f(x)=\sum_{k\neq 0}|k|^{-d/p-\epsilon-t}\varphi(x-k)=\sum_{k\neq
0}|k|^{-d/p-\epsilon-t}T_{k}\varphi(x),\quad \mbox{in}\,\, \mathcal{S}'(\bR^d).$$ Then there exists a constant $C>0$ such that
$\|f\|_{\M{p,q}{t,0}}\leq C $, uniformly with respect to $\lambda$. Moreover,
\begin{equation}\label{Gaugffreq}
\|f_{\lambda}\|_{\M{p,q}{t,0}}\gtrsim \lambda^{-d/p - \epsilon-t}, \qquad \forall \,
\lambda > 1.
\end{equation}
\end{lemma}
\begin{proof}We only prove part $a)$ as part $b)$ is obtained similarly.
We use Proposition~\ref{gabframe} to prove that $f$ defined in the lemma belongs to $\M{p,q}{t,0}$.
Indeed, $\mathcal{G}(\varphi, 1, \lambda^{-1})$ is a Gabor frame, and the coefficients of $f$ in this frame are given by
$c_{k,\ell}=\delta_{k,0}|\ell|^{-d/p-\epsilon}$ if $\ell\neq 0$ and $c_{0,0}=0.$
It is clear that
$$\|c_{k,\ell}\|_{\ell^{p,q}_{t,0}}=\bigg(\sum_{\ell\in \bZ^d}\bigg(\sum_{k\in
\bZ^d}|c_{k,\ell}|^{p}\langle k
\rangle^{pt}\bigg)^{q/p}\bigg)^{1/q}=\bigg(\sum_{\ell\neq
0}|\ell|^{q(-d/p-\epsilon)}\bigg)^{1/q} < \infty,$$ since $q(d/p+\epsilon)>d$ whenever $q/p\geq 1$. Thus, $f \in \M{p,q}{t,0}$ with norm uniformly bounded with respect to $\lambda$.
Given $\lambda >1,$ we have
$$\|f_{\lambda}\|_{\M{p,q}{t,0}}=\sup_{\|g\|_{\M{p',q'}{-t,
0}}=1}|\ip{f_{\lambda}}{g}|\geq \|\varphi\|_{\M{p',
q'}{-t,0}}^{-2}|\ip{f_{\lambda}}{\varphi}|.$$
Using relation~\eqref{aggiunta.0},
$$\ip{f_{\lambda}}{\varphi}=\sum_{\ell\neq
0}|\ell|^{-d/p-\epsilon}V_{\varphi}\varphi_{\lambda}(0, \ell)=\sum_{\ell\neq
0}|\ell|^{-d/p-\epsilon}(1+\lambda^2)^{-d/2}e^{\tfrac{-\pi
|\ell|^{2}}{\lambda^{2}+1}}.$$
Therefore, if $\lambda >1$,
\begin{align*}
\|f_{\lambda}\|_{\M{p,q}{t,0}} & \geq C\sum_{\ell\neq
0}|\ell|^{-d/p-\epsilon}(1+\lambda^2)^{-d/2}e^{\tfrac{-\pi
|\ell|^{2}}{\lambda^{2}+1}}\\
& \geq C \lambda^{-d} \sum_{\ell\neq 0}|\ell|^{-d/p-\epsilon}e^{\tfrac{-\pi
|\ell|^{2}}{\lambda^{2}+1}}\\
& \geq C \lambda^{-d} \sum_{0< |\ell| <
\lambda}|\ell|^{-d/p-\epsilon}e^{\tfrac{-\pi |\ell|^{2}}{\lambda^{2}+1}}\\
& \geq C \lambda^{-d} \lambda^{-d/p - \epsilon} \sum_{0< |\ell| < \lambda }e^{-\pi} \\
& \geq C \lambda^{-d} \lambda^{-d/p - \epsilon} e^{-\pi}
\lambda^{d}=C\lambda^{-d/p-\epsilon},
\end{align*} from which \eqref{Gaugf} follows.
\end{proof}
The next results extend \cite[Lemma 3.9]{sugimototomita} and \cite[Lemma
3.10 ]{sugimototomita}.
\begin{lemma}\label{lemm1} Let $1\leq p,q \leq\infty$, $t\geq 0$,
$\epsilon>0$. Suppose that $\psi\in \mathcal{S}(\bR^d)$ satisfies $\supp\psi\subset [-1/2,1/2]^d$
and $\psi=1$ on $[-1/4,1/4]^d$.
\noindent a) If $1\leq q < \infty$, define
\begin{equation}\label{lem1ex1}
f(y)=\sum_{k\in\bZ^d\setminus\{0\}} | k|^{-\frac{d}q-\epsilon-t} M_{k}T_k
\psi(y),\quad\mbox{in}\,\,\mathcal{S}'(\bR^d).
\end{equation}
Then, $f\in \M{p,q}{t,0}(\bR^d)$ and
\begin{equation}\label{est1}
\|f_\lambda\|_{\M{p,q}{t,0}}\gtrsim\,
\lambda^{-d(\frac2p-\frac1q)+\epsilon-t},\,\quad\forall\,\,0<\lambda\leq
1.
\end{equation}
\noindent b) If $q=\infty$, let
\begin{equation}\label{es21}
f(y)=\sum_{k\not=0} |k|^{-t}M_{k}T_k
\psi(y),\quad\mbox{in}\,\,\mathcal{S}'(\bR^d).
\end{equation}
Then $f\in \M{p,\infty}{t,0}$ and
\begin{equation}\label{est2}\|f_\lambda\|_{\mathcal{M}^{p,\infty}_{t,0}}\gtrsim
\lambda^{-
\frac{2d}p-t},\quad\forall\,\, 0<\lambda\leq1.
\end{equation}
\end{lemma}
\begin{proof}
We only prove part $a)$, i.e., the case $1\leq q<\infty$ as the case $q=\infty$ is
proved in a similar fashion.
Let $g \in \mathcal{S}(\bR^d)$ satisfy $\supp g\subset
[-1/8,1/8]^d$, and $|\hat{g}|\geq 1$ on $[-2,2]^d$. The proof of each part of the
Lemma is based on the appropriate estimate for $V_{g}f$.
Let us first show that $f\in \M{p,q}{t,0}(\bR^d)$.
We have
\begin{align*}&\left|\int_{\rd} e^{-2\pi i (\omega-k)y}\psi(y-k)g(y-x)dy\right|\\
\quad&=\left|\int_{\rd} \psi(y-k)g(y-x)\{(1+|\omega-k|^2)^{-d}(I-\Delta_y)^d e^{-2\pi i
(\omega-k)y}\} \,dy\right|\\
\quad&=\frac1{(1+|\omega-k|^2)^d}\left|\sum_{|\beta_1+\beta_2|\leq 2d}C_{\beta_1,\beta_2}\int_{\rd}
\partial^{\beta_1}(T_k\psi)(y)(\partial^{\beta_2} g)(x-y)e^{-2\pi i (\omega-k)y} dy\right|\\
\quad&\leq \frac
C{(1+|\omega-k|^2)^d}\sum_{|\beta_1+\beta_2|\leq
2d}(|T_k(\partial^{\beta_1}\psi)|\ast|\partial^{\beta_2}g|)(x).
\end{align*}
Hence
\begin{align}&\|f\|_{\M{p,q}{t,0}}\asymp \|V_g f\|_{L^{p,q}_{t,0}} \notag \\
\quad&=\left(\int_{\rd}\left(\int_{\rd}\left|\sum_{k\not=0}|k|^{-\frac{d}q-\epsilon-t}\int_{\rd}
e^{-2\pi i (\omega -k)y}\psi(y-k)g(y-x)\,dy\right|^p\langle x\rangle^{tp}
dx\right)^{\frac{q}p}\,d\omega\right)^{\frac1q} \notag \\
\quad&\leq C
\Big(\int_{\rd}\big(\sum_{k\not=0}|k|^{-\frac{d}q-\epsilon-t}\frac
1{(1+|\omega-k|^2)^d}\sum_{|\beta_1+\beta_2|\leq
2d}\| |T_k(\partial^{\beta_1}\psi)|\ast
|\partial^{\beta_2}g| \|_{L^p_t}\big)^q
d\omega\Big)^{\frac1q}.
\label{1stkeyest}
\end{align}
Using Young's inequality: $\| |T_k(\partial^{\beta_1}\psi)|\ast|\partial^{\beta_2}g|
\|_{L^p_t}\lesssim \|
T_k\partial^{\beta_1}\psi\|_{L^1_t}\,\|\partial^{\beta_2}g\|_{L^p_t}$, and the
estimate $\|T_k\partial^{\beta_1}\psi\|_{L^1_t}\leq \langle
k\rangle^t\|\partial^{\beta_1}\psi\|_{L^1_t}$, we can control~\eqref{1stkeyest} by
\begin{align}
C & \left(\int_{\rd}\left(\sum_{k\not=0}|k|^{-\frac{d}q-\epsilon}\frac
1{(1+|\omega-k|^2)^d}\right)^q d\omega\right)^{\frac1q} \notag \\
\quad& \leq C
\left(\sum_{\ell\in\bZ^d}\int_{\ell+[-1/2,1/2]^d}\left(\sum_{k\not=0}|k|^{-\frac{d}q-\epsilon}\frac
1{(1+|\omega-k|^2)^d}\right)^q d\omega\right)^{\frac1q} \notag \\
\quad& \leq \tilde{C}
\left(\sum_{\ell\in\bZ^d}\left(\sum_{k\not=0}|k|^{-\frac{d}q-\epsilon}\frac
1{(1+|\ell-k|^2)^d}\right)^q\right)^{\frac1q}\notag \\
\quad &= \tilde{C} \Big\| |k|^{-\frac{d}q-\epsilon}\ast
\frac1{(1+|k|^2)^d}\Big\|_{\ell^q} <\infty,
\label{2nkeyest}
\end{align}
since $\{|k|^{-\frac{d}q-\epsilon}\}_{k\not=0}\in\ell^q.$\par
Next, we prove~\eqref{est1}. Since $V_g f_\lambda(x,\omega )
=\lambda^{-d}V_{g_{\lambda^{-1}}} f(\lambda x,\lambda^{-1}\omega) $, we obtain
\begin{equation*}
\|V_g
f_\lambda\|_{L^{p,q}_{t,0}}=\lambda^{-d(1+\frac1p-\frac1q)}\left(\int_{\rd}\left(\int_{\rd}|V_{g_{\lambda^{-1}}}
f(x,\omega )|^p\langle \lambda^{-1} x\rangle^{p t}\,dx\right)^{\frac{q}p}d\omega\right)^{\frac1q}.
\end{equation*}
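The covariance identity used here can be verified directly via the substitution $u=\lambda y$:
\[
V_g f_\lambda(x,\omega )=\int_{\rd} f(\lambda y)\overline{g(y-x)}e^{-2\pi i y\omega}\,dy
=\lambda^{-d}\int_{\rd} f(u)\overline{g_{\lambda^{-1}}(u-\lambda x)}e^{-2\pi i u\lambda^{-1}\omega}\,du
=\lambda^{-d}V_{g_{\lambda^{-1}}} f(\lambda x,\lambda^{-1}\omega),
\]
since $g(\lambda^{-1}u-x)=g_{\lambda^{-1}}(u-\lambda x)$.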
Observe that, for $0<\lambda\leq1$ and $x\in\ell+[-1/8,1/8]^d$ with $\ell\neq 0$, we have
$$\langle \lambda^{-1} x\rangle\geq \lambda^{-1}|x| \gtrsim \lambda^{-1}|\ell|,$$ and $\supp
g((\cdot -x)/\lambda)\subset \ell+[-1/4,1/4]^d$. Since $\supp \psi(\cdot-k)\subset k+[-1/2,1/2]^d$ and
$\psi(t-k)=1$ if $t\in k+[-1/4,1/4]^d$, the inner integral can be estimated as
follows:
\begin{align*} &\left(\int_{\rd} |V_{g_{\lambda^{-1}}} f(x,\omega )|^p\langle \lambda^{-1}
x\rangle^{pt}\,dx\right)^{\frac1p}\\
\quad &\geq\Big(\sum_{\ell\not=0}\int_{\ell+[-1/8,1/8]^d}\Big|\sum_{k\not=0}
|k|^{-d/q-\epsilon-t}\int_{\rd} e^{-2\pi i (\omega
-k)y}\psi(y-k)\overline{g(\frac{y-x}{\lambda})}\,dy\Big|^p \langle \lambda^{-1}
x\rangle^{pt}\,d x\Big)^{\frac1p}\\
\quad&\gtrsim\Big(\sum_{\ell\not=0}(|\ell|^{-\frac{d}q-\epsilon-t}\lambda^d
|\hat{g}(-\lambda(\omega-\ell))|\,\lambda^{-t}|\ell|^t)^p\Big)^{\frac1p}\\
\quad&\gtrsim\Big(\sum_{\ell\not=0}(|\ell|^{-\frac{d}q-\epsilon}\lambda^{d-t}
|\hat{g}(-\lambda(\omega-\ell))|)^p\Big)^{\frac1p}.
\end{align*}
Consequently,
\begin{align*}
\|V_g f_\lambda\|_{L^{p,q}_{t,0}}
&=\lambda^{-d(1+\frac1p-\frac1q)}\left(\int_{\rd}\left(\int_{\rd}|V_{g_{\lambda^{-1}}}
f(x,\omega )|^p\langle \lambda^{-1} x\rangle^{p t}\,dx\right)^{\frac qp}d\omega\right)^{\frac1q}\\
& \gtrsim \lambda^{-d(1+\frac1p-\frac1q)}\,
\left(\int_{\rd}\Big(\sum_{\ell\not=0}(|\ell|^{-\frac dq-\epsilon}\lambda^{d-t}
|\hat{g}(-\lambda(\omega-\ell))|)^p\Big)^{\frac qp}d\omega\right)^{\frac1q}\\
& = \lambda^{d-t - d/q}\, \lambda^{-d(1+\frac1p-\frac1q)}\,
\left(\int_{\rd}\Big(\sum_{\ell\not=0}(|\ell|^{-\frac dq-\epsilon}
|\hat{g}(\omega+\lambda \ell)|)^p\Big)^{\frac qp}d\omega\right)^{\frac1q}\\
& \gtrsim \lambda^{-t -\frac dp}\, \left(\int_{|\omega|\leq 1}\Big(\sum_{0<|\ell|\leq
\tfrac{1}{\lambda}}(|\ell|^{-\frac dq-\epsilon}
|\hat{g}(\omega+\lambda \ell)|)^p\Big)^{\frac qp}d\omega\right)^{\frac1q}\\
& \gtrsim \lambda^{-t -\frac dp}\, \left(\int_{|\omega|\leq 1}\Big(\sum_{0<|\ell|\leq
\tfrac{1}{\lambda}}(|\ell|^{-\frac dq-\epsilon})^p\Big)^{\frac
qp}d\omega\right)^{\frac1q}\\
& = \lambda^{-t -\frac dp}\, \Big(\sum_{0<|\ell|\leq
\tfrac{1}{\lambda}}(|\ell|^{-\frac dq-\epsilon})^p\Big)^{\frac1p}\gtrsim \lambda^{-t
-\frac dp} \, \lambda^{\frac dq + \epsilon} \Big(\sum_{0<|\ell|\leq
\tfrac{1}{\lambda}}1\Big)^{\frac1p} \gtrsim \lambda^{-t -2\frac dp+ \frac dq +
\epsilon},
\end{align*}
which completes the proof.
\end{proof}
\begin{lemma}\label{i3negative}
Let $1\leq p, q \leq \infty$ be such that $(1/p,1/q)\in I_3$. Let
$\epsilon >0$, $t<0$, and $ 0< \lambda<1$.
\noindent a) If $t\leq -d$ define
\begin{equation}\label{i3neginfd}
f(x)=\lambda^{\tfrac{d}{q} -\tfrac{2d}{p} +2d}\sum_{k\neq
0}|k|^{-\tfrac{\epsilon}{2}}T_{\lambda^{2}k}\varphi(x),\quad \mbox{in}\,\, \mathcal{S}'(\bR^d).
\end{equation}
Then there exists a constant $C>0$ such that
$\|f\|_{\M{p,q}{t,0}}\leq C $, uniformly with respect to $\lambda$. Moreover,
\begin{equation}\label{i3negest}\nm{f_{\lambda}}{\M{p,q}{t,0}}\gtrsim \lambda^{d\mu_{2}(p,q)+\epsilon},\quad\forall\,\,0<\lambda<1.
\end{equation}
\noindent b) If $-d < t< 0$, choose a positive integer $N$ large enough such that
$\tfrac{1}{N} <\tfrac{p-1}{2}-\tfrac{pt}{2d}$. Define
\begin{equation}\label{i3negd0}
f(x)=\lambda^{\tfrac{d}{q}}\sum_{k\neq 0}|k|^{d(\tfrac{2}{Np}-1)
-\tfrac{\epsilon}{N}}T_{\lambda^{N}k}\varphi(x),\quad\mbox{in}\,\,\mathcal{S}'(\bR^d).
\end{equation}
Then the conclusions of part a) still hold.
\end{lemma}
\begin{proof}
\noindent $a)$ For the range of $p,q$ being considered, $
\tfrac{d}{q}+2d -\tfrac{2d}{p}=d\mu_{2}(p,q)+2d\geq 0$, and so if $\lambda < 1$, then $\lambda^{\tfrac{d}{q}+2d -\tfrac{2d}{p}}< 1$.
Next, notice that $\mathcal{G}(\varphi, \lambda^{2}, 1)$ is a Gabor frame. So, to
check that $f \in \M{p,q}{t,0}$ we only need to verify that the sequence of its Gabor
coefficients $c=\{c_{k\ell}\}_{k,\ell\in\bZ^d}$, given by
$c_{k\ell}=\lambda^{\tfrac{d}{q} -\tfrac{2d}{p} +2d}|k|^{-\tfrac{\epsilon}{2}}\delta_{\ell,0}$ for
$k\neq 0$ and $c_{0\ell}=0$, belongs to $\ell^{p,q}_{t,0}$. The condition $t\leq -d$ guarantees this, since
$$\nm{c}{\ell^{p,q}_{t,0}}=\lambda^{\tfrac{d}{q}+2d -\tfrac{2d}{p}}\bigg(\sum_{k\neq
0}|k|^{-p\epsilon/2}(1+|k|^{2})^{pt/2}\bigg)^{1/p} \leq C.$$
Next, as in the proof of Lemma~\ref{istar1}, we have
$$\|f_{\lambda}\|_{\M{p,q}{t,0}}=\sup_{\|g\|_{\M{p',q'}{-t,
0}}=1}|\ip{f_{\lambda}}{g}|\geq \|\varphi\|_{\M{p',
q'}{-t,0}}^{-2}|\ip{f_{\lambda}}{\varphi}|.$$
In this case, $$\ip{f_{\lambda}}{\varphi}=\lambda^{2d+d\mu_{2}(p,q)}\sum_{k\neq
0}|k|^{-\epsilon/2}V_{\varphi}\varphi_{\lambda}(\lambda
k,0)=\lambda^{2d+d\mu_{2}(p,q)}\sum_{k\neq
0}|k|^{-\epsilon/2}(1+\lambda^2)^{-d/2}e^{\tfrac{-\pi
\lambda^{4}|k|^{2}}{\lambda^{2}+1}}.$$ Therefore, if $\lambda < 1$,
\begin{align*}
\|f_{\lambda}\|_{\M{p,q}{t,0}} & \geq C \lambda^{2d+d\mu_{2}(p,q)}\sum_{k\neq
0}|k|^{-\epsilon/2}(1+\lambda^2)^{-d/2}e^{\tfrac{-\pi
\lambda^{4}|k|^{2}}{\lambda^{2}+1}}\\
& \geq C \lambda^{2d+d\mu_{2}(p,q)} \sum_{0< |k| <
\tfrac{1}{\lambda^{2}}}|k|^{-\epsilon/2}e^{\tfrac{-\pi
\lambda^{4}|k|^{2}}{\lambda^{2}+1}}\\
& \geq C \lambda^{2d+d\mu_{2}(p,q)} \lambda^{ \epsilon}
\lambda^{-2d}=C\lambda^{d\mu_{2}(p,q)+\epsilon},
\end{align*} which completes the proof of part $a)$.
\noindent $b)$ If $p\geq 1$, the assumptions $-d<t< 0$ and
$\frac1 N < \tfrac{p-1}{2}-\tfrac{pt}{2d}$ are sufficient to
prove that $f \in \M{p,q}{t,0}.$ In addition, the main estimate is
that
\begin{align*}
\|f_{\lambda}\|_{\M{p,q}{t,0}} & \geq C \lambda^{d/q} \sum_{k\neq
0}|k|^{d(\tfrac{2}{N p}-1) -\tfrac{\epsilon}{N}}(1+\lambda^2)^{-d/2}e^{\tfrac{-\pi
\lambda^{2N}|k|^{2}}{\lambda^{2}+1}}\\
& \geq C \lambda^{d/q} \sum_{0< |k| < \tfrac{1}{\lambda^{N}}}|k|^{d(\tfrac{2}{N
p}-1) -\tfrac{\epsilon}{N} }\geq C\lambda^{d\mu_{2}(p,q)+\epsilon}.
\end{align*}
\end{proof}
We now state results similar to the above lemmas when the weight is in the frequency
variable.
\begin{lemma}\label{i1} For $s\leq 0$, $0<\lambda\leq1$, consider the family of
functions
\begin{equation}\label{ModGaus}
f(x)=\lambda^{s}M_{\lambda^{-1}e_1}\varphi(x),\ e_1=(1,0,0,\ldots,0).
\end{equation}
Then there exists a constant $C>0$ such that
$\|f\|_{\M{p,q}{0,s}}\leq C $, uniformly with respect to $\lambda$. Moreover,
\begin{equation}\label{bo2}
\|f_\lambda\|_{\M{p,q}{0,s}}\gtrsim \lambda^{s-\frac{d}p},\quad
\forall\,\,0<\lambda\leq1.
\end{equation}
\end{lemma}
\begin{proof}
We have
\begin{align*}
\|f\|_{\M{p,q}{0,s}}&\asymp
\|V_\varphi f(x,\omega)\langle
\omega\rangle^s\|_{L^{p,q}}=\lambda^{s}\|V_\varphi
\varphi(x,\omega-\lambda^{-1} e_1)\langle
\omega\rangle^s\|_{L^{p,q}}\\
&=\lambda^{s}\|V_\varphi \varphi(x,\omega)\langle \omega+\lambda^{-1}
e_1\rangle^s\|_{L^{p,q}}\lesssim\lambda^{s}\lambda^{-s}\|V_\varphi
\varphi\langle\omega\rangle^{-s}\|_{L^{p,q}}\lesssim1,
\end{align*}
where we have used again the fact that the weight $\langle\cdot\rangle^s$ is
$\langle\cdot\rangle^{-s}$-moderate. Thus
the
functions $f$ have norms in
$\M{p,q}{0,s}$ uniformly
bounded with respect to
$\lambda$. Let us now estimate
$\|f_\lambda\|_{\M{p,q}{0,s}}$
from below. We have
\[
f_\lambda(x)=\lambda^{s}M_{e_1}\varphi_\lambda(x).
\]
By using \eqref{aggiunta.0}, we obtain
\begin{align*}
\|f_\lambda\|_{\M{p,q}{0,s}}&=\lambda^s\|V_\varphi \varphi_{\lambda}(x,\omega-e_1)\langle\omega
\rangle^s\|_{L^{p,q}}\\
&\gtrsim \lambda^{s-\frac{d}p} \Bigg(\int
e^{-\pi q|\omega|^2} \langle
\omega+e_1\rangle^{qs} d\omega\Bigg)^{\frac1q} \gtrsim \lambda^{s-\frac{d}p},
\end{align*}
as desired.
\end{proof}
\begin{lemma}\label{fistar2} Let $1\leq p, q \leq \infty$ be such that $(1/p,1/q)\in
I^*_2$. Assume that $s\leq 0$, $\epsilon >0$ and $\lambda >1 $.
\noindent a) If $q\geq 2$ and $s\leq 0$, or $1\leq q \leq 2$ and $s\leq -d$, define
$$
f(x)=\sum_{\ell\neq 0}|\ell|^{d(\frac1q-1) -\epsilon}e^{2\pi i \lambda^{-1}\ell\cdot
x}\varphi(x)=\sum_{\ell\neq
0}|\ell|^{d(\frac1q-1)-\epsilon}M_{\lambda^{-1}\ell}\varphi(x),\quad \mbox{in}\,\, \mathcal{S}'(\bR^d).$$
Then there exists a constant $C>0$ such that
$\|f\|_{\M{p,q}{0,s}}\leq C $, uniformly with respect to $\lambda$. Moreover,
\begin{equation}\label{Gaugffreqneg}
\|f_{\lambda}\|_{\M{p,q}{0,s}}\gtrsim \lambda^{d(\frac1q -1)-\epsilon},\,\quad
\forall \lambda > 1.
\end{equation}
\noindent b) If $1\leq q \leq 2$ and $-d< s <0$, choose a positive integer $N$
such that $\tfrac{1}{N} < -\tfrac{sq}{d}$, and define
$$
f(x)=\sum_{\ell\neq 0}|\ell|^{d(\tfrac{1}{Nq}-1) -\epsilon/N}e^{2\pi i
\lambda^{-N}\ell\cdot x}\varphi(x)=\sum_{\ell\neq
0}|\ell|^{d(\tfrac{1}{Nq}-1)-\epsilon/N}M_{\lambda^{-N}\ell}\varphi(x), \quad \mbox{in}\,\,
\mathcal{S}'(\bR^d).$$ Then the conclusions of part a) still hold.
\end{lemma}
\begin{proof}
\noindent $a)$ First of all notice that $\mathcal{G}(\varphi, 1, \lambda^{-1})$ is a
frame. In addition, $q\geq 2$ is equivalent to $1/q -1 \leq -1/q$. Thus, for all $s\leq 0$, $\{|\ell|^{d(1/q -1) - \epsilon}, \ell\neq 0\} \in
\ell^{q}_{s},$ which ensures that the function $f$ defined above belongs to
$\M{p,q}{0,s}$. This is also true when $1\leq q \leq 2$ and $s\leq -d$.
To prove~\eqref{Gaugffreqneg} we follow the proof of Lemma~\ref{istar1}. In
particular, we have
$$
\|f_{\lambda}\|_{\M{p,q}{0,s}} \geq C \sum_{\ell\neq 0}|\ell|^{d(\frac1q
-1)-\epsilon}(1+\lambda^2)^{-d/2}e^{\tfrac{-\pi |\ell|^{2}}{\lambda^{2}+1}} \geq C
\lambda^{-d} \sum_{0< |\ell|\leq {\lambda}}|\ell|^{d(1/q -1)-\epsilon}
e^{\tfrac{-\pi |\ell|^{2}}{\lambda^{2}+1}},$$
from which \eqref{Gaugffreqneg} follows.
\noindent $ b)$ In this case, $\mathcal{G}(\varphi, 1, \lambda^{-N})$ is a frame.
Moreover, the choice of $N$ ensures that $d(1/(Nq) -1) +s <-d$, which is enough to
prove that $f\in \M{p,q}{0,s}$, and that $\|f\|_{\M{p,q}{0,s}}\leq C .$
Relation \eqref{Gaugffreqneg} now follows from
\begin{align*}
\|f_{\lambda}\|_{\M{p,q}{0,s}} & \geq C \sum_{\ell\neq 0}|\ell|^{d(\tfrac{1}{Nq}
-1)-\epsilon/N}(1+\lambda^2)^{-d/2}e^{\tfrac{-\pi
\lambda^{-2N+2}|\ell|^{2}}{\lambda^{2}+1}}\\
& \geq C \lambda^{-d} \sum_{0< |\ell|\leq {\lambda^{N}}}|\ell|^{d(\tfrac{1}{Nq}
-1)-\epsilon/N} e^{\tfrac{-\pi \lambda^{-2N+2}|\ell|^{2}}{\lambda^{2}+1}}\geq C
\lambda^{d(\frac1q -1)-\epsilon}.
\end{align*}
\end{proof}
The next lemma is proved similarly to Lemma~\ref{lemm1}, so we omit its proof.
\begin{lemma}\label{lemm1b} Let $1\leq p,q \leq\infty$, $s\leq 0$,
$\epsilon>0$. Suppose that $\psi\in \mathcal{S}(\bR^d)$ satisfies $\supp\psi\subset [-1/2,1/2]^d$
and $\psi=1$ on $[-1/4,1/4]^d$.
\noindent a) If $1\leq q < \infty$, define
$
f(y)=\sum_{k\in\bZ^d\setminus\{0\}} | k|^{-\frac{d}q-\epsilon-s} M_{k}T_k
\psi(y),\quad\mbox{in}\,\,\mathcal{S}'(\bR^d).
$
Then, $f\in \M{p,q}{0,s}(\bR^d)$ and
\begin{equation}\label{est1b}
\|f_\lambda\|_{\M{p,q}{0,s}}\gtrsim
\lambda^{-d(\frac2p-\frac1q)+\epsilon+s},\,\quad\forall\,\,0<\lambda\leq 1.
\end{equation}
\noindent b) If $q=\infty$, let
\begin{equation}\label{es22}
f(y)=\sum_{k\not=0} |k|^{-s}e^{2\pi i k y}T_k\psi(y),\quad\mbox{in}\,\,\mathcal{S}'(\bR^d).
\end{equation}
Then $f\in \M{p,\infty}{0,s}$ and
\begin{equation}\label{est2b}\|f_\lambda\|_{\mathcal{M}^{p,\infty}_{0,s}}\gtrsim
\lambda^{-
\frac{2d}p+s},\quad\forall\,\,0<\lambda\leq1.
\end{equation}
\end{lemma}
\begin{lemma}\label{i3postivefreq}
Let $1\leq p, q \leq \infty$ be such that $(1/p,1/q)\in I_3$. Let
$\epsilon >0$, $s\geq 0$ and $ 0< \lambda< 1$. Assume that $p>1$,
and choose a positive integer $N$ such that $\frac1N <
\tfrac{p-1}{2}$. Define
\begin{equation}\label{i3posfreq}
f(x)=\lambda^{\tfrac{d}{q}}\sum_{k \neq 0}|k|^{d(\tfrac{2}{N p}-1)
-\tfrac{\epsilon}{N}}T_{\lambda^{N}k}\varphi(x), \quad
\mbox{in}\,\,\mathcal{S}'(\bR^d).
\end{equation}
Then, there exists a constant $C>0$ such that
$\|f\|_{\M{p,q}{0,s}}\leq C $, uniformly with respect to
$\lambda$. Moreover,
$$\nm{f_{\lambda}}{\M{p,q}{0,s}}\gtrsim \lambda^{d\mu_{2}(p,q)+\epsilon}.$$
\end{lemma}
\begin{proof} In this case, $\mathcal{G}(\varphi, \lambda^{N}, 1)$ is a frame.
The condition $\frac1N < \tfrac{p-1}{2}$ is equivalent to $\tfrac{2}{Np}-1 < -\tfrac{1}{p}$ which is enough to show that $\{|k|^{d(\tfrac{2}{Np}-1) - \tfrac{\epsilon}{N}}\}_{k\neq 0} \in \ell^{p}$. Therefore, $ f \in \M{p,q}{0,s}$ with $\nm{f}{\M{p,q}{0,s}}\leq C$ where $C$ is a
universal constant. The rest of the proof is an adaptation of the proof of Lemma~\ref{i3negative}.
\end{proof}
Notice that the previous lemma excludes the case $p=1$. We handle
this remaining case by duality. Observe that the case
$(1/\infty,1/\infty)\in I_{1}^{*}\cap I_{3}^{*}$ was already
covered in dealing with the region $I_{1}^{*}$.
\begin{lemma}\label{inftynegfreq}
Let $1\leq q \leq \infty $ be such that $(1/\infty,1/q)\in
I_3^{*}$. Let $\epsilon >0$, $s\leq 0$ and $ \lambda> 1$.
\noindent a) If $1< q <2$, choose a positive integer $N$ such that
$\frac3N < q-1$. Define
\begin{equation}\label{inftyneqfreqa}
f(x)=\lambda^{d(1-\tfrac{2}{q})}\sum_{\ell \neq
0}|\ell|^{d(\tfrac{3}{N q}-1)
-\tfrac{\epsilon}{N}}M_{\lambda^{-N}\ell}\varphi(x),\quad
\mbox{in}\,\,\mathcal{S}'(\bR^d).
\end{equation}
Then there exists a constant $C>0$ such that
$\|f\|_{\M{\infty,q}{0,s}}\leq C $, uniformly with respect to
$\lambda$. Moreover,
$$\nm{f_{\lambda}}{\M{\infty,q}{0,s}}\gtrsim
\lambda^{\tfrac{d}{q}-\epsilon}.$$
\noindent b) If $2\leq q <\infty$, choose a positive integer $N$
such that $N>2+q$. Define
\begin{equation}\label{inftyneqfreqb}
f(x)=\lambda^{d+\tfrac{d(2-N)}{q}}\sum_{\ell \neq
0}|\ell|^{d(\tfrac{N-1}{N q}-1)
-\tfrac{\epsilon}{N}}M_{\lambda^{-N}\ell}\varphi(x),\quad
\mbox{in}\,\,\mathcal{S}'(\bR^d).
\end{equation}
Then the conclusions of part a) still hold.
\noindent c) If $ q=1$ and $s\leq -d$, define
\begin{equation}\label{infty1neqfreqb}
f(x)=\sum_{\ell \neq
0}|\ell|^{-\tfrac{\epsilon}{2}}M_{\lambda^{-2}\ell}\varphi(x),
\quad\mbox{in}\,\,\mathcal{S}'(\bR^d).
\end{equation}
Then there exists a constant $C>0$ such that
$\|f\|_{\M{\infty,1}{0,s}}\leq C $, uniformly with respect to
$\lambda$. Moreover,
$$\nm{f_{\lambda}}{\M{\infty,1}{0,s}}\gtrsim \lambda^{d-\epsilon}.$$
\noindent d) If $ q=1$ and $-d < s < 0 $, choose a positive
integer $N$ such that $\tfrac{1}{N} < \tfrac{-s}{2d}$. Define
\begin{equation}\label{infty1neqfreqb2}
f(x)=\sum_{\ell \neq
0}|\ell|^{d(\tfrac{2}{N}-1)-\tfrac{\epsilon}{N}}M_{\lambda^{-N}\ell}\varphi(x),
\quad\mbox{in} \,\,\mathcal{S}'(\bR^d).
\end{equation}
Then the conclusions of part c) still hold.
\end{lemma}
\begin{proof}
\noindent $a)$ In this case, $\mathcal{G}(\varphi, 1,
\lambda^{-N})$ is a frame. The hypotheses $1< q <2$ and $\lambda
>1$ imply that $\lambda^{d(1-\tfrac{2}{q})}<1$. In addition, the
condition $\frac3N < q-1$ is equivalent to $\tfrac{3}{Nq}-1 <
-\tfrac{1}{q}$ which is enough to show that
$\{|\ell|^{d(\tfrac{3}{Nq}-1) - \tfrac{\epsilon}{N}}\}_{\ell \neq
0} \in \ell^{q}_{s}$. Therefore, $ f \in \M{\infty,q}{0,s}$ with
$\nm{f}{\M{\infty,q}{0,s}}\leq C$ where $C$ is a universal
constant. The rest of the proof is an adaptation of the proof of
Lemma~\ref{i3negative}.
\noindent $b)$ Assume that $2\leq q < \infty$. The proof is
similar to the above with the following differences: $N > q+2$
and $\lambda >1$ imply that $\lambda^{d(1+\tfrac{2-N}{q})}<1$. In
addition, the condition $q\geq 2$ implies that $\tfrac{N-1}{Nq}-1
< -\tfrac{1}{q}$. This is enough to show that
$\{|\ell|^{d(\tfrac{N-1}{Nq}-1) - \tfrac{\epsilon}{N}}\}_{\ell
\neq 0} \in \ell^{q}_{s}$. Therefore, $ f \in \M{\infty,q}{0,s}$
with $\nm{f}{\M{\infty,q}{0,s}}\leq C$ where $C$ is a universal
constant.
\noindent $c)$ In this case, $\mathcal{G}(\varphi, 1,
\lambda^{-2})$ is a frame. The fact that $s\leq -d$ implies that
$\{|\ell|^{- \tfrac{\epsilon}{2}}\}_{\ell \neq 0} \in
\ell^{1}_{s}$. Therefore, $ f \in \M{\infty,1}{0,s}$ with
$\nm{f}{\M{\infty,1}{0,s}}\leq C$ where $C$ is a universal
constant. The rest of the proof is an adaptation of the proof of
Lemma~\ref{i3negative}.
\noindent $d)$ In this case, $\mathcal{G}(\varphi, 1,
\lambda^{-N})$ is a frame. The fact that $-d< s<0$ and the choice
of $N$ imply that $d(\tfrac{2}{N}-1) + s < -d$. Hence
$\{|\ell|^{d(\tfrac{2}{N}-1)- \tfrac{\epsilon}{N}}\}_{\ell \neq 0}
\in \ell^{1}_{s}$, and therefore $ f \in \M{\infty,1}{0,s}$ with
$\nm{f}{\M{\infty,1}{0,s}}\leq C$ where $C$ is a universal
constant. The rest of the proof is an adaptation of the proof of
Lemma~\ref{i3negative}.
\end{proof}
We finish this subsection by proving lower bound estimates for the dilation of
functions that are compactly supported either in the time or in the frequency
variables.
\begin{lemma}\label{altertf}
Let $u\in \mathcal{S}(\bR^d)$, $\lambda\in(0,\infty)$ and $1\leq p, q \leq \infty$.
(i) If $u$ is supported in a compact set $K\subset \bR^d$, then, for every $t\in \mathbb{R}$,
and $\lambda \geq 1$
\begin{equation}\label{spK}
\| u_{\lambda}\|_{\M{p,q}{t,0}}\gtrsim \lambda^{-d(1-\frac1 q)}\min \{ 1,
\lambda^{-t}\}.
\end{equation}
(ii) If $\hat{u}$ is supported in a compact set $K\subset \bR^d$, then, for every
$s\in\mathbb{R}$, and $\lambda \leq 1$
\begin{equation}\label{spKf}
\| u_{\lambda}\|_{\M{p,q}{0,s}}\gtrsim \lambda^{-\frac d p } \min \{ 1, \lambda^s\}.
\end{equation}
\end{lemma}
\begin{proof}
We use the dilation properties for the Sobolev spaces (Bessel potential spaces)
$H_s^p(\bR^d)$ (see, e.g., \cite[Proposition 3]{rusic}):
$$C^{-1 } \lambda^{-\frac d p} \min \{ 1, \lambda^s\} \|u\|_{H_s^p} \leq
\|u_{\lambda}\|_{H_s^p}\leq C \lambda^{-\frac d p}\max \{ 1,
\lambda^s\}\|u\|_{H_s^p}, \quad 1\leq p\leq \infty, \,\,s\in\mathbb{R}.$$
$(i)$ Let $u$ be supported in a compact set $K\subset \bR^d$. Then $u\in
\mathcal{M}^{p,q}\Leftrightarrow u\in
\mathcal{F} L^q$, and
\begin{equation}
C_K^{-1} \|u\|_{\mathcal{M}^{p,q}}\leq
\|u\|_{\cF L^q}\leq C_K
\|u\|_{\mathcal{M}^{p,q}},
\end{equation}
where $C_K>0$ depends only on
$K$ (see, e.g., \cite{fe89-1, ko09}). Hence, if $\lambda\geq 1$,
\begin{align*} \|u_\lambda\|_{\M{p,q}{t,0}} &\asymp \|\langle \cdot\rangle^t
u_\lambda\|_{\M{p,q}{}}\asymp \|\langle \cdot\rangle^t u_\lambda\|_{\cF L^q} \asymp
\|\cF^{-1} (u_\lambda)\|_{H^q_t}
=\lambda^{-d}\| (\cF^{-1} u)_{\lambda^{-1}}\|_{H^q_t}\\
&\gtrsim \lambda^{-d}\,\lambda^{\frac d q}\min \{1, \lambda^{-t}\}\,\|\cF^{-1} u\|_{H^q_t}=\lambda^{-d(1-\frac1q)}\min \{1, \lambda^{-t}\}\,\|\cF^{-1} u\|_{H^q_t}.
\end{align*}
$(ii)$ Now let $\hat{u}$ be supported in a compact set $K\subset \bR^d$. We have $u\in
\mathcal{M}^{p,q}\Leftrightarrow u\in
L^p$, and
\begin{equation*}
C_K^{-1} \|u\|_{\mathcal{M}^{p,q}}\leq\|u\|_{ L^p}\leq C_K\|u\|_{\mathcal{M}^{p,q}},
\end{equation*}
where $C_K>0$ depends only on $K$ (again, see, e.g., \cite{fe89-1}). Arguing as in
part (i) above with $0<\lambda\leq 1$,
\begin{equation*} \|u_\lambda\|_{\M{p,q}{0,s}} \asymp \|\langle D\rangle^s
u_\lambda\|_{\mathcal{M}^{p,q}}\asymp \|\langle D\rangle^s u_\lambda\|_{L^p} \asymp
\|u_\lambda\|_{H_s^p}
\geq C\lambda^{-\frac d p } \min \{1, \lambda^s\} \| u\|_{H_s^p}
\end{equation*}
and the proof is completed.
\end{proof}
\subsection{Sharpness of Theorems \ref{xdil} and \ref{mainfreq}.}\,
We are now in a position to state and prove the sharpness of the results obtained in
Section~\ref{main}. In particular, Theorem~\ref{xdil} is optimal in the following
sense:
\begin{theorem}\label{sharp31} Let $1\leq p, q \leq \infty.$
\noindent (A) If $t \geq 0$ then the following statements hold:
\noindent Assume that there exist constants $C>0$, and $\alpha, \beta \in \mathbb{R}$ such
that
\begin{equation}\label{sharp1}
C^{-1}\, \lambda^{\beta}\|f\|_{\M{p,q}{t,0}}\leq \|U_{\lambda}f\|_{\M{p,q}{t,0}}\leq
C \lambda^{\alpha}\|f\|_{\M{p,q}{t,0}}
\quad \forall f \in \M{p,q}{t,0}\quad \textrm{and} \quad \lambda \geq 1,
\end{equation} then, $\alpha \geq d\mu_{1}(p,q)$, and $\beta \leq d\mu_{2}(p,q)-t.$
\noindent Assume that there exist constants $C>0$, and $\alpha, \beta \in \mathbb{R}$
such that
\begin{equation}\label{sharp2}
C^{-1}\, \lambda^{\alpha}\|f\|_{\M{p,q}{t,0}}\leq
\|U_{\lambda}f\|_{\M{p,q}{t,0}}\leq C \lambda^{\beta}\|f\|_{\M{p,q}{t,0}}\quad
\forall f \in \M{p,q}{t,0}\quad \textrm{and} \quad 0<\lambda \leq 1,
\end{equation} then, $\alpha \geq d\mu_{1}(p,q)$, and $\beta \leq d\mu_{2}(p,q)-t.$
\noindent (B)
If $t\leq 0$ then the following statements hold:
\noindent Assume that there exist constants $C>0$, and $\alpha, \beta \in \mathbb{R}$ such
that
\begin{equation}\label{sharp3}
C^{-1}\, \lambda^{\beta}\|f\|_{\M{p,q}{t,0}}\leq \|U_{\lambda}f\|_{\M{p,q}{t,0}}\leq
C \lambda^{\alpha}\|f\|_{\M{p,q}{t,0}}\quad \forall f \in \M{p,q}{t,0}\quad \textrm{and} \quad \lambda \geq 1,
\end{equation}
then, $\alpha \geq d\mu_{1}(p,q)-t$, and $\beta \leq d\mu_{2}(p,q).$
\noindent Assume that there exist constants $C>0$, and $\alpha, \beta \in \mathbb{R}$
such that
\begin{equation}\label{sharp4}
C^{-1}\, \lambda^{\alpha}\|f\|_{\M{p,q}{t,0}}\leq
\|U_{\lambda}f\|_{\M{p,q}{t,0}}\leq C \lambda^{\beta}\|f\|_{\M{p,q}{t,0}}\quad
\forall f \in \M{p,q}{t,0}\quad \textrm{and} \quad 0<\lambda \leq 1,
\end{equation} then, $\alpha \geq d\mu_{1}(p,q)-t$, and $\beta \leq d\mu_{2}(p,q).$
\end{theorem}
\begin{proof}
It will be enough to prove the upper half of each of the estimates, as the lower
halves will follow from the fact that $f=U_{\lambda}U_{1/\lambda}f$. Moreover, the
proof relies on analyzing the examples provided by the previous lemmas and on
considering several cases.
\textbf{Case $1$: $(1/p,1/q)\in I^{*}_2$, $t\geq 0$.}
In this case we have $ \lambda\geq 1$ and $\mu_1(p,q)=1/q-1$. Substitute
$f(x)=\varphi(x)=e^{-\pi|x|^{2}}$ in the upper half estimates~\eqref{sharp1} and use
Lemma~\ref{Gaussian} to obtain $$\lambda^{-d(1-1/q)}\lesssim
\|\varphi_\lambda\|_{\M{p,q}{t,0}}\leq C \lambda^\alpha
\|\varphi\|_{\M{p,q}{t,0}},$$ for all
$\lambda\geq 1$. This immediately implies that $\alpha \geq -d(1-1/q)=d\mu_{1}(p,q)$.
\textbf{Case $2$: $(1/p, 1/q) \in I_{2}$, $t\leq0$.} This is the dual case to the
previous case and can be handled as follows. In this case we have $ \lambda\leq 1$
and $\mu_2(p,q)=1/q-1$. Assume that the upper-half estimate in~\eqref{sharp4} holds.
Notice that $(1/p, 1/q) \in I_2$ if and only if $(1/p', 1/q') \in I^{*}_{2}$, and
that $\lambda \leq 1$ if and only if $1/\lambda \geq 1$. Then, by duality,
\begin{align*}\|f_{1/\lambda}\|_{\M{p', q'}{-t, 0}} & =
\sup|\ip{f_{1/\lambda}}{g}|=\lambda^{d}\sup|\ip{f}{g_{\lambda}}|\leq
\lambda^{d}\|f\|_{\M{p', q'}{-t, 0}} \sup\|g_{\lambda}\|_{\M{p,q}{t,0}}\\&\leq
\lambda^{d +\beta} \|f\|_{\M{p', q'}{-t, 0}} \sup\|g\|_{\M{p,q}{t,0}},
\end{align*}
where the supremum is taken over all $g \in \mathcal{S}$ and $\|g\|_{\M{p,q}{t,0}}=1$;
hence,
$$ \|f_{1/\lambda}\|_{\M{p', q'}{-t, 0}}\leq \lambda^{d +\beta} \|f\|_{\M{p',
q'}{-t, 0}}.
$$
Thus from Case $1$ above, $-\beta -d \geq d\mu_{1}(p', q')=d/q'-d$. Hence, $\beta
\leq d\mu_2(p,q)$.
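Explicitly, the last step is the following bookkeeping:
\begin{equation*}
-\beta-d \;\geq\; d\mu_1(p',q')=d\Big(\frac{1}{q'}-1\Big)
\quad\Longrightarrow\quad
\beta\;\leq\;-\frac{d}{q'}\;=\;d\Big(\frac{1}{q}-1\Big)\;=\;d\mu_2(p,q).
\end{equation*}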
\textbf{Case $3$: $(1/p,1/q)\in I_3$, $t\geq 0$.}
In this case we have $ \lambda\leq 1$ and $\mu_2(p,q)=-2/p+1/q$.
First assume that $1\leq q < \infty$ and that the upper-half estimate
in~\eqref{sharp3} holds for all $f \in \M{p,q}{t,0}$ and $0< \lambda <1,$ but that
$\beta >d\mu_{2}(p,q)-t.$ Then there is $\epsilon >0$ such that $\beta >
d\mu_{2}(p,q)-t+\epsilon.$ For this choice of $\epsilon>0$, we construct a
function $f$ as in~\eqref{lem1ex1} of Lemma~\ref{lemm1} such that:
$$\lambda^{d\mu_{2}(p,q) -t +\epsilon} \lesssim \|f_{\lambda}\|_{\M{p,q}{t,0}} \leq
C \lambda^{\beta}\|f\|_{\M{p,q}{t,0}}$$ for some $C>0$ and all $0<\lambda \leq 1$.
This leads to a contradiction on the choice of $\epsilon$.
When $q=\infty$ the function given by~\eqref{es21} of Lemma~\ref{lemm1} gives the
optimal bound.
\textbf{Case $4$: $(1/p,1/q)\in I^{*}_3$, $t\leq0$.} In this case, $\lambda \geq
1$, and $\mu_1(p,q)=-2/p+1/q$. This is the dual of Case $3$, and a duality argument similar to the one used
in Case $2$ above gives the result.
\textbf{Case $5$: $(1/p,1/q)\in I_1^{*}$, $t\leq 0$.}
In this case, $\lambda \geq 1$, and $\mu_1(p,q)=-1/p$. Assume that the upper-half
estimate in~\eqref{sharp3} holds and that $\alpha < d\mu_1(p,q) -t$. Then, choose
$\epsilon >0$ and construct a function $f$ as in part $b)$ of Lemma~\ref{istar1}. A
contradiction immediately follows.
\textbf{Case $6$: $(1/p,1/q)\in I_1$, $t\geq 0$.} In this case $\lambda \leq 1$,
and $\mu_2(p,q)=-1/p$.
This is the dual of Case $5$.
\textbf{Case $7$: $(1/p,1/q)\in I_1^{*}$, $t\geq 0$.} In this case $\lambda \geq
1$, and $\mu_1(p,q)=-1/p$.
Assume that the upper-half estimate in~\eqref{sharp1} holds for all $f \in
\M{p,q}{t,0}$ and $\lambda \geq 1,$ but that $\alpha <d\mu_{1}(p,q).$ Then there is
$\epsilon >0$ such that $\alpha< d\mu_{1}(p,q)-\epsilon.$ For this choice of
$\epsilon>0$, we can now construct a function $f$ as in Lemma~\ref{istar1}, part $a)$,
such that:
$$\lambda^{d\mu_{1}(p,q) -\epsilon} \lesssim \|f_{\lambda}\|_{\M{p,q}{t,0}} \leq C
\lambda^{\alpha}\|f\|_{\M{p,q}{t,0}}$$ for some $C>0$ and all $\lambda \geq 1$. This
leads to a contradiction on the choice of $\epsilon$.
\textbf{Case $8$: $(1/p,1/q)\in I_1$, $t\leq 0$.} In this case $\lambda \leq 1$, and
$\mu_2(p,q)=-1/p$.
This is the dual of Case $7$.
\textbf{Case $9$: $(1/p,1/q)\in I_2^{*}$, $t\leq 0$.} In this case $\lambda \geq 1$,
and $\mu_1(p,q)=1/q-1$. The function constructed in Lemma~\ref{i2star} leads to the
result.
\textbf{Case $10$: $(1/p,1/q)\in I_2$, $t\geq 0$.} In this case $\lambda \leq 1$,
and $\mu_2(p,q)=1/q-1$.
This is the dual of Case $9$.
\textbf{Case $11$: $(1/p,1/q)\in I_3$, $t\leq 0$.}
In this case $\lambda \leq 1$, and $\mu_1(p,q)=-2/p+1/q$ and Lemma~\ref{i3negative}
can be used to conclude.
\textbf{Case $12$: $(1/p,1/q)\in I_3^{*}$, $t\geq 0$.}
In this case $\lambda \geq 1$, and $\mu_2(p,q)=-2/p+1/q$.
This is the dual of Case $11$.
\end{proof}
We next consider the sharpness of Theorem~\ref{mainfreq}.
\begin{theorem}\label{sharp32} Let $1\leq p, q \leq \infty.$
\noindent (A) If $s \geq 0$ then the following statements hold:
\noindent
Assume that there exist constants $C>0$, and $\alpha, \beta \in \mathbb{R}$ such that
\begin{equation}\label{sharp5}
C^{-1}\, \lambda^{\beta}\|f\|_{\M{p,q}{0,s}}\leq \|U_{\lambda}f\|_{\M{p,q}{0,
s}}\leq C \lambda^{\alpha}\|f\|_{\M{p,q}{0, s}}\quad \forall f \in \M{p,q}{0,
s}\quad \textrm{and}\quad \lambda \geq 1,
\end{equation} then, $\alpha \geq d\mu_{1}(p,q) +s$, and $\beta \leq d\mu_{2}(p,q).$
\noindent
Assume that there exist constants $C>0$, and $\alpha, \beta \in \mathbb{R}$ such that
\begin{equation}\label{sharp6}
C^{-1}\, \lambda^{\alpha}\|f\|_{\M{p,q}{0, s}}\leq \|U_{\lambda}f\|_{\M{p,q}{0,
s}}\leq C \lambda^{\beta}\|f\|_{\M{p,q}{0, s}}\quad \forall f \in \M{p,q}{0, s}\quad
\textrm{and}\quad 0<\lambda \leq 1,
\end{equation} then, $\alpha \geq d\mu_{1}(p,q)+s$, and $\beta \leq d\mu_{2}(p,q).$
\noindent (B) If $s \leq 0$ then the following statements hold:
\noindent
Assume that there exist constants $C>0$, and $\alpha, \beta \in \mathbb{R}$ such that
\begin{equation}\label{sharp7}
C^{-1}\, \lambda^{\beta}\|f\|_{\M{p,q}{0,s}}\leq \|U_{\lambda}f\|_{\M{p,q}{0,
s}}\leq C \lambda^{\alpha}\|f\|_{\M{p,q}{0, s}}\quad \forall f \in \M{p,q}{0,
s}\quad \textrm{and}\quad \lambda \geq 1,
\end{equation} then, $\alpha \geq d\mu_{1}(p,q) $, and $\beta \leq d\mu_{2}(p,q)+s.$
\noindent
Assume that there exist constants $C>0$, and $\alpha, \beta \in \mathbb{R}$ such that
\begin{equation}\label{sharp8}
C^{-1}\, \lambda^{\alpha}\|f\|_{\M{p,q}{0, s}}\leq \|U_{\lambda}f\|_{\M{p,q}{0,
s}}\leq C \lambda^{\beta}\|f\|_{\M{p,q}{0, s}}\quad \forall f \in \M{p,q}{0, s}\quad
\textrm{and}\quad 0<\lambda \leq 1,
\end{equation} then, $\alpha \geq d\mu_{1}(p,q)$, and $\beta \leq d\mu_{2}(p,q)+s.$
\end{theorem}
\begin{proof}
As for the time weights, it is enough to prove the upper half of each estimate.
Moreover, in what follows we consider only
$6$ of the $12$ cases to be proved, since the others are obtained by the same
duality argument used in the previous theorem.
\textbf{Case $1$: $(1/p,1/q)\in I_1$, $s\geq 0$.} In this case, $0<\lambda\leq1$ and
$\mu_2(p,q)=-1/p$. Assume there exist constants $C>0$ and $\beta\in \mathbb{R}$ such that
the upper-half estimate \eqref{sharp6} holds.
Taking the Gaussian $f=\varphi$ as in Lemma \ref{Gaussian} and using \eqref{zerof}, we have
$$ \lambda^{-\frac d p} \lesssim \|\varphi_\lambda\|_{\M{p,q}{0, s}}\lesssim
\lambda^\beta \| \varphi\|_{\M{p,q}{0, s}},
$$
for all $0<\lambda\leq 1$. This gives $\beta\leq -d/p$.
\textbf{Case $2$: $(1/p,1/q)\in I_1$, $s\leq 0$.} Here $\lambda \leq 1$ and we
test the upper-half estimate \eqref{sharp8}
on the family of functions \eqref{ModGaus}. Using \eqref{bo2}, we obtain
$\beta\leq s-d/p$.
\textbf{Case $3$: $(1/p,1/q)\in
I_2^*$, $s\geq0$.} Here $\lambda\geq1$, $\mu_1(p,q)=1/q-1$.
We assume the upper-half estimate \eqref{sharp5} and test it on the dilated
Gaussian function in \eqref{infinityf}, obtaining $\alpha\geq d(1/q-1)+s$.
\textbf{Case $4$: $(1/p,1/q)\in I_2^*$, $s\leq0$.} Here $\lambda\geq1$,
$\mu_1(p,q)=1/q-1$. We use a contradiction argument based on Lemma~\ref{fistar2}.
\textbf{Case $5$: $(1/p,1/q)\in I_3$, $s\geq0$.} Here
$\lambda\leq1$, $\mu_2(p,q)=-2/p+1/q$. The sharpness is obtained by
testing the upper-half estimate \eqref{sharp6} on the family of
functions $f_\lambda$, defined in Lemma \ref{i3postivefreq} when
$p>1$.
If $p=1$ we consider the dual case, that is $(1/\infty,1/q)\in
I_{3}^{*}$, $s\leq0$. Here $\lambda\geq1$, $\mu_1(\infty,q)=1/q$.
We use a contradiction argument based on Lemma~\ref{inftynegfreq}.
\textbf{Case $6$: $(1/p,1/q)\in I_3$, $s\leq0$.} Here $\lambda\leq1$,
$\mu_2(p,q)=-2/p+1/q$. The sharpness is obtained by testing the upper-half estimate
\eqref{sharp8} on the family of functions $f_\lambda$, defined in Lemma
\ref{lemm1b}.
\end{proof}
\section{Applications}\label{applic}
\subsection{Applications to dispersive equations}\label{dispde}
\subsubsection{Wave equation.} Let us first recall the Cauchy problem for
the wave equation:
\begin{equation}\label{cpw}
\begin{cases}
\partial^2_t u-\Delta_x u=0\\
u(0,x)=u_0(x),\,\,
\partial_t u (0,x)=u_1(x),\,\,
\end{cases}
\end{equation}
with $t\geq 0$, $x\in\mathbb{R}^d$, $d\geq1$, $\Delta_x=\partial^2_{x_1}+\dots
+\partial^2_{x_d}$. The formal solution $u(t,x)$ is given by
\begin{align*}
u(t,x)& =\int_{\rd} e^{2\pi i x\xi} \cos(2\pi t |\xi|) \widehat{u_0}(\xi)\,d\xi+\int_{\rd}
e^{2\pi i x\xi} \frac{\sin (2\pi t |\xi|)}{2\pi |\xi|} \widehat{u_1}(\xi) \, d\xi,\\
&= H_{\sigma_{0}}u_{0}(x)+H_{\sigma_{1}}u_{1}(x),
\end{align*}
with
$\sigma_0(\xi)= \cos(2\pi t |\xi|)$ and $\sigma_1(\xi)=\frac{\sin (2\pi t
|\xi|)}{2\pi|\xi|}$.
We recall that $H_{\sigma_{i}}$, $i=0, 1$, are examples of Fourier multipliers, which
are defined by
\begin{equation}\label{FM}
H_{\sigma} f(x)=\int_{\rd} e^{2\pi i
x\xi}\sigma(\xi) \hat{f}(\xi)\,d\xi
\end{equation}
where $\sigma$ is called the symbol.
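As a sanity check of this formal solution, one can verify symbolically (here in dimension $d=1$, on the single Fourier mode $e^{2\pi i x\xi}$ with $\xi>0$, so that $|\xi|=\xi$) that the multiplier expression solves \eqref{cpw} with the correct initial data; the computation below is illustrative only:

```python
import sympy as sp

t, x = sp.symbols('t x', real=True)
xi = sp.symbols('xi', positive=True)          # single frequency, |xi| = xi

mode = sp.exp(2 * sp.pi * sp.I * x * xi)      # plane-wave datum u0 = u1 = mode
u = (sp.cos(2 * sp.pi * t * xi)
     + sp.sin(2 * sp.pi * t * xi) / (2 * sp.pi * xi)) * mode

# wave equation u_tt - u_xx = 0 and the two initial conditions
assert sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2)) == 0
assert sp.simplify(u.subs(t, 0) - mode) == 0
assert sp.simplify(sp.diff(u, t).subs(t, 0) - mode) == 0
```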
The boundedness of $H_{\sigma_{i}}$, $i=0, 1$ on modulation spaces was proved in
\cite{bgor, bo09} and in \cite{CNwave}. Moreover, some related local-in-time
well-posedness results for certain nonlinear PDEs were also obtained in \cite{bo09,
CNwave} for initial data in modulation spaces.
\begin{proposition} \label{L1} Let $s\in \mathbb{R}$, and $1\leq p, q \leq \infty$. Then,
the solution $u(t, x)$ of~\eqref{cpw} with initial data $(u_{0}, u_{1}) \in
\M{p,q}{0,s}\times \M{p,q}{0, s-1}$
satisfies
\begin{equation}\label{es1A}\|u(t,\cdot)\|_{\M{p,q}{0, s}}\leq C_0 (1+t)^{d+1}
\|u_0\|_{\M{p,q}{0,s}}+ C_1 t(1+t)^{d+1} \|u_1\|_{\M{p,q}{0, s-1}}
\end{equation}
where $C_{0}$ and $C_{1}$ depend only on the dimension $d$.
\end{proposition}
\begin{proof}
It was proved in \cite{bgor} that $\sigma_0(\xi)\in W(\cF L^1, L^\infty)$ and in
\cite{CNwave} that
$\sigma_1(\xi)\in W(\cF L^1,L^\infty_{1})$. In addition, it was shown in
\cite{CNwave} that the solution satisfies
\begin{align*}
\|u(t,\cdot)\|_{\M{p,q}{0,s}}& \leq \|H_{\sigma_{0}}u_{0}\|_{\M{p,q}{0,s}} +
\|H_{\sigma_{1}}u_{1}\|_{\M{p,q}{0,s}}\\
& \leq \|\sigma_{0}\|_{W(\cF L^1, L^\infty)} \|u_{0}\|_{\M{p,q}{0,s}} +
\|\sigma_{1}\|_{W(\cF L^1, L^\infty_{1})} \|u_{1}\|_{\M{p,q}{0,s-1}} \\
& \leq C_0(t) \|u_0\|_{\M{p,q}{0, s}}+ C_1(t) \|u_1\|_{\M{p,q}{0, s-1}}.
\end{align*}
We can now use the results proved in Section~\ref{main} to estimate $C_{0}(t)$ and
$C_{1}(t)$. More specifically, setting $\widetilde{\sigma_0}(\xi)=\cos |\xi|$, for
$t>0$, we can write
$\sigma_0(\xi)=(\widetilde{\sigma_0})_{{2\pi t}}$. Using~\eqref{mainbothW} with
$\mu_1(\infty,1)=1$, $\mu_2(\infty,1)=0$, we have, for every $R>0$,
\begin{equation*}
\|(\widetilde{\sigma_0})_{2\pi t}\|_{W(\cF L^1,
L^\infty)}\leq \begin{cases} C_{0,R} \|\widetilde{\sigma_0}\|_{W(\cF L^1,
L^\infty)} ,\quad t\leq R\\
C'_{0,R} t^{d+1}\|\widetilde{\sigma_0}\|_{W(\cF L^1,
L^\infty)}, \quad t\geq R.\end{cases}
\end{equation*}
Hence $$C_0(t)\leq \begin{cases} C_{0,R}, \,\quad 0\leq t\leq R\\
C'_{0,R}t^{d+1},\,\quad t\geq R.
\end{cases}
$$
Setting $\widetilde{\sigma_1}(\xi)=\frac{\sin |\xi|}{|\xi|}$, for $t>0$, we can
write $\sigma_1(\xi)=t( \widetilde{\sigma_1})_{{2\pi t}}$ and, for every $R>0$,
\begin{equation*}
\|(\widetilde{\sigma_1})_{{2\pi t}}\|_{W(\cF L^1,
L^\infty_{1})}\leq \begin{cases} C_{1,R} \|\widetilde{\sigma_1}\|_{W(\cF L^1,
L^\infty_{1})} ,\quad t\leq R\\
C'_{1,R} t^{d+1}\|\widetilde{\sigma_1}\|_{W(\cF L^1,
L^\infty_{1})}, \quad t\geq R.\end{cases}
\end{equation*}
Hence $$C_1(t)\leq \begin{cases} C_{1,R}t, \,\quad 0\leq t\leq R\\
C'_{1,R}t^{d+2},\,\quad t\geq R,
\end{cases}
$$ and the estimate~\eqref{es1A} becomes
\begin{equation*}\|u(t,\cdot)\|_{\M{p,q}{0, s}}\leq C_0 (1+t)^{d+1}
\|u_0\|_{\M{p,q}{0, s}}+ C_1t(1+t)^{d+1} \|u_1\|_{\M{p,q}{0,s-1}},
\quad t> 0.
\end{equation*}
\end{proof}
\subsubsection{Vibrating plate equation.}
Consider now the following Cauchy problem for the vibrating plate equation
\begin{equation}\label{cpp}
\begin{cases}
\partial^2_t u+\Delta^2_x u=0\\
u(0,x)=u_0(x),\,\,
\partial_t u (0,x)=u_1(x),\,\,
\end{cases}
\end{equation}
with $t\geq 0$, $x\in\mathbb{R}^d$, $d\geq1$. The formal solution $u(t,x)$ is given by
$$u(t,x)=\int_{\rd} e^{2\pi i x\xi} \cos(4\pi^2 t |\xi|^2)
\widehat{u_0}(\xi)\,d\xi+\int_{\rd} e^{2\pi i x\xi} \frac{\sin (4\pi^2 t
|\xi|^2)}{4\pi^2 |\xi|^2} \widehat{u_1}(\xi) \, d\xi,
$$ and satisfies the following estimate.
\begin{proposition} \label{L2} Let $s\in \mathbb{R}$, and $1\leq p, q \leq \infty$. Then,
the solution $u(t, x)$ of~\eqref{cpp} with initial data $(u_{0}, u_{1}) \in
\M{p,q}{0,s}\times \M{p,q}{0, s-2}$
satisfies
\begin{equation}\label{es1B}\|u(t,\cdot)\|_{\M{p,q}{0, s}}\leq C_0 (1+ t)^{\frac d
2}
\|u_0\|_{\M{p,q}{0,s}}+ C_1 t(1+t)^{\frac d 2+1} \|u_1\|_{\M{p,q}{0, s-2}}
\end{equation}
where $C_{0}$ and $C_{1}$ depend only on the dimension $d$.
\end{proposition}
\begin{proof}
Here the solution is the sum of two Fourier multipliers $u=H_0 u_0+H_1 u_1$ having
symbols
$\sigma_0(\xi)= \cos(4\pi^2 t |\xi|^2) \in W(\cF L^1, L^\infty)$ (see \cite{bgor})
and $\sigma_1(\xi)=
\frac{\sin (4\pi^2 t |\xi|^2)}{4\pi^2 |\xi|^2} \in W(\cF L^1, L^\infty_2)$ (see
\cite{CZplate}).
Since $\sigma_0=\big(\cos (|\cdot|^2)\big)_{2\pi \sqrt{t}}$ and $\sigma_1=t
\left(\frac{\sin ( |\cdot|^2)}{ |\cdot|^2}\right)_{2\pi \sqrt{t}}$, using the same
arguments as for the wave equation we obtain:
$$\|u(t,\cdot)\|_{\M{p,q}{0, s}}\leq C_0 (1+ t)^{\frac d
2}\|u_0\|_{\M{p,q}{0, s}}+ C_1 t(1+t)^{\frac d 2+1}
\|u_1\|_{\M{p,q}{0, s-2}}, \quad t> 0.
$$
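The powers of $t$ come from the dilation parameter $\lambda=2\pi\sqrt{t}$: assuming the same application of~\eqref{mainbothW} as in the wave case (with $\mu_1(\infty,1)=1$, and weight index $s=2$ for the second symbol), for $t\geq R$ one gets
\begin{equation*}
\big\|\big(\cos(|\cdot|^2)\big)_{2\pi\sqrt{t}}\big\|_{W(\cF L^1,L^\infty)}
\lesssim (2\pi\sqrt{t}\,)^{d}\asymp t^{\frac{d}{2}},
\qquad
\Big\|\Big(\tfrac{\sin(|\cdot|^2)}{|\cdot|^2}\Big)_{2\pi\sqrt{t}}\Big\|_{W(\cF L^1,L^\infty_2)}
\lesssim (2\pi\sqrt{t}\,)^{d+2}\asymp t^{\frac{d}{2}+1},
\end{equation*}
which, together with the extra factor $t$ in $\sigma_1$, accounts for the constants in~\eqref{es1B}.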
\end{proof}
\subsection{Embedding of Besov spaces into modulation spaces}\label{embed}
We generalize some results of \cite{kasso04}. But first, we recall the inclusion
relations between Besov spaces and modulation spaces (see
\cite{sugimototomita,baoxiang3}). Consider the following indices, where $\mu_i$,
$i=1, 2$ were defined in Section~\ref{prelim}:
$$\nu_1(p,q)=\mu_1(p,q)+\frac1p,\quad\quad\nu_2(p,q)=\mu_2(p,q)+\frac1p.
$$
The following result was proved in \cite[Theorem 3.1]{toft04}
and in \cite[Theorem 1.1]{baoxiang3}.
\begin{theorem}\label{incl}
Let $1\leq p,q\leq \infty$ and $s\in\mathbb{R}$.
\noindent (i) If $s\geq
d\nu_1(p,q)$ then $
B^{p,q}_s(\bR^d)
\hookrightarrow
\M{p,q}{}(\bR^d)$.
\noindent (ii) If
$s\leq d\nu_2(p,q)$ then
$\M{p,q}{}(\bR^d)
\hookrightarrow
B^{p,q}_s(\bR^d)$.
\end{theorem}
The next results improve those in \cite[Theorem 3.1]{kasso04}.
\begin{theorem}\label{teor31} Let $1\leq p\leq 2$.
\noindent (i) If $s\geq d(1/p-1/p')$ and
$1\leq q\leq p$ then
$B^{p,q}_s\hookrightarrow \M{p}{}.
$
\noindent (ii) If $s> d(1/p-1/p')$ and $1\leq q\leq \infty$ then
$B^{p,q}_s\hookrightarrow \M{p}{}.
$
\end{theorem}
\begin{proof} (i) For $s\geq d(1/p-1/p')\geq d\nu_{1}(p,p)=0$, Theorem \ref{incl}
says that $B^{p,p}_s \hookrightarrow \M{p,p}{}$.
However, the inclusion relations for Besov spaces give $B^{p,q}_s\hookrightarrow
B^{p,p}_s$, for $q\leq p$. Hence the result follows. \\
(ii) If $s> d(1/p-1/p')\geq 0$ and $q\leq p$, then this is exactly (i) above. If
$p\leq q$, then $B^{p,q}_{s}\hookrightarrow B^{p,p}_{d(1/p-1/p')} \hookrightarrow \M{p}{}.$
\end{proof}
The next results improve those in \cite[Theorem 3.2]{kasso04}.
\begin{theorem}\label{teor32}
\noindent (i) Let $1\leq p\leq 2$, $s>0$. Then
$B^{p,q}_s\hookrightarrow \M{p,p'}{},\quad \mbox{for\,all\,}\,\,1\leq q\leq\infty.
$
\noindent(ii) If $2\leq p\leq \infty$, $s> d(1/p'-1/p)$, then
$B^{p,q}_s\hookrightarrow \M{p,p'}{},\quad \mbox{for\,all\,}\,\,1\leq q\leq\infty.
$
\end{theorem}
\begin{proof} (i) For $1\leq p\leq 2$, $\nu_1(p,p')=0$ and using Theorem
\ref{incl} we obtain $B^{p,p'}_0 \hookrightarrow \M{p,p'}{}$. Since $B^{p,q}_s
\hookrightarrow B^{p,p'}_0$, for all $1\leq q\leq\infty$, $s>0$, the result
follows.\\
(ii) If $2\leq p\leq \infty$, $$\nu_1(p,p')=\frac1{p'}-\frac1p\leq\frac1{p'}.$$
Hence, if $s\geq d(1/p'-1/p)$, Theorem \ref{incl} gives $B^{p,p'}_s \hookrightarrow
\M{p,p'}{}$. If $s> d(1/p'-1/p)$,
the inclusion relations for Besov spaces give $B^{p,q}_s \hookrightarrow
B^{p,p'}_{d(1/p'-1/p)}$. This is easy to see if $q\leq p'$. On the other hand if
$q>p'$ it follows by an application of H\"older's inequality for $\ell^p$ spaces. In
any case, this concludes the proof.
\end{proof}
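For the reader's convenience, the H\"older step in part (ii) can be sketched as follows (with $(\Delta_j)_{j\geq 0}$ denoting the Littlewood--Paley blocks defining the Besov norm; this notation is not fixed in the paper, so the display is only an outline). Set $s_0=d(1/p'-1/p)$ and, for $q>p'$, choose $r$ with $1/p'=1/q+1/r$; then
\begin{equation*}
\Big(\sum_{j\geq 0} 2^{j s_0 p'}\|\Delta_j f\|_{p}^{p'}\Big)^{1/p'}
\leq \Big(\sum_{j\geq 0} 2^{j s q}\|\Delta_j f\|_{p}^{q}\Big)^{1/q}
\Big(\sum_{j\geq 0} 2^{-j(s-s_0) r}\Big)^{1/r},
\end{equation*}
and the last factor is finite because $s>s_0$, which gives $B^{p,q}_s\hookrightarrow B^{p,p'}_{s_0}$.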
\section{Acknowledgment} The authors would like to thank Fabio Nicola for helpful
discussions. K.~A.~Okoudjou would also like to acknowledge the partial support of the
Alexander von Humboldt Foundation.
\title{On the flag curvature of invariant Randers metrics}
\begin{abstract}
In the present paper, the flag curvature of invariant Randers metrics on homogeneous spaces and Lie groups is studied. We first give an explicit formula for the flag curvature of invariant Randers metrics arising from invariant Riemannian metrics on homogeneous spaces and, in a special case, Lie groups. We then study Randers metrics of constant positive flag curvature and complete underlying Riemannian metric on Lie groups. Finally we give some properties of those Lie groups which admit a left invariant non-Riemannian Randers metric of Berwald type arising from a left invariant Riemannian metric and a left invariant vector field.
\end{abstract}
\section{Introduction}
The geometry of invariant structures on homogeneous spaces is one
of the interesting subjects in differential geometry. Invariant
metrics are of these invariant structures. K. Nomizu studied many
interesting properties of invariant Riemannian metrics and the
existence and properties of invariant affine connections on
reductive homogeneous spaces (see \cite{KoNo,No}.). Also some
curvature properties of invariant Riemannian metrics on Lie groups
have been studied by J. Milnor \cite{Mi}. It is therefore important to study
invariant Finsler metrics, which are a generalization of invariant
Riemannian metrics.
S. Deng and Z. Hou studied invariant Finsler metrics on reductive
homogeneous spaces and gave an algebraic description of these
metrics \cite{DeHo1,DeHo2}. Also, in \cite{EsSa1,EsSa2}, we have
studied the existence of invariant Finsler metrics on quotient
groups and the flag curvature of invariant Randers metrics on
naturally reductive homogeneous spaces. In this paper we study the
flag curvature of invariant Randers metrics on homogeneous spaces
and Lie groups. Flag curvature, which is a generalization of the
concept of sectional curvature in Riemannian geometry, is one of
the fundamental quantities associated with a Finsler space.
In general, the computation of the flag curvature of Finsler
metrics is very difficult, so it is important to find an
explicit and applicable formula for the flag curvature. Among the
Finsler metrics which have found many applications in
physics are the Randers metrics (see \cite{AnInMa,As}.). In this
article, by using P\"uttmann's formula \cite{Pu}, we give an
explicit formula for the flag curvature of invariant Randers
metrics arising from invariant Riemannian metrics on homogeneous
spaces and Lie groups. Then the Randers metrics of constant
positive flag curvature and complete underlying Riemannian metric
on Lie groups are studied. Finally we give some properties of
those Lie groups which admit a left invariant non-Riemannian
Randers metric of Berwald type arising from a left invariant
Riemannian metric and a left invariant vector field.
\section{Flag curvature of invariant Randers metrics on homogeneous spaces}
The aim of this section is to give an explicit formula for the
flag curvature of invariant Randers metrics of Berwald type,
arising from invariant Riemannian metrics, on homogeneous spaces.
For this purpose we need the P\"uttmann's formula for the
curvature tensor of invariant Riemannian metrics on homogeneous
spaces (see \cite{Pu}.).
Let $G$ be a compact Lie group, $H$ a closed subgroup, and $g_0$ a
bi-invariant Riemannian metric on $G$. Assume that $\frak{g}$ and
$\frak{h}$ are the Lie algebras of $G$ and $H$ respectively. The
tangent space of the homogeneous space $G/H$ is given by the
orthogonal complement $\frak{m}$ of $\frak{h}$ in $\frak{g}$ with
respect to $g_0$. Each invariant metric $g$ on $G/H$ is determined
by its restriction to $\frak{m}$. The $Ad_H$-invariant
inner product on $\frak{m}$ arising from $g$ can be extended to an
$Ad_H$-invariant inner product on $\frak{g}$ by taking $g_0$ for
the components in $\frak{h}$. In this way the invariant metric $g$
on $G/H$ determines a unique left invariant metric on $G$ that we
also denote by $g$. The values of $g_0$ and $g$ at the identity
are inner products on $\frak{g}$ which we denote as $<.,.>_0$ and
$<.,.>$. The inner product $<.,.>$ determines a positive definite
endomorphism $\phi$ of $\frak{g}$ such that $<X,Y>=<\phi X,Y>_0$
for all $X, Y\in\frak{g}$.\\
Now we give the following Lemma which was proved by T. P\"uttmann
(see \cite{Pu}.).
\newtheorem{lem}{Lemma}
\begin{lem}\label{Puttmann}
The curvature tensor of the invariant metric $<.,.>$ on the
compact homogeneous space $G/H$ is given by
\begin{eqnarray}\label{puttmans formula}
<R(X,Y)Z,W> &=& \frac{1}{2}(<B_-(X,Y),[Z,W]>_0+<[X,Y],B_-(Z,W)>_0) \nonumber \\
&+& \frac{1}{4}(<[X,W],[Y,Z]_{\frak{m}}>-<[X,Z],[Y,W]_{\frak{m}}> \\
&-& 2<[X,Y],[Z,W]_{\frak{m}}>)+(<B_+(X,W),\phi^{-1}B_+(Y,Z)>_0 \nonumber\\
&-&<B_+(X,Z),\phi^{-1}B_+(Y,W)>_0)\nonumber,
\end{eqnarray}
where the symmetric resp. skew-symmetric bilinear maps $B_+$ and
$B_-$ are defined by
\begin{eqnarray*}
B_+(X,Y) &=& \frac{1}{2}([X,\phi Y]+[Y,\phi X]), \\
B_-(X,Y) &=& \frac{1}{2}([\phi X,Y]+[X,\phi Y]),
\end{eqnarray*}
and $[.,.]_{\frak{m}}$ is the projection of $[.,.]$ to
$\frak{m}$.\hfill$\Box$
\end{lem}
Let $\tilde{X}$ be an invariant vector field on the homogeneous
space $G/H$ such that
$\parallel\tilde{X}\parallel=\sqrt{g(\tilde{X},\tilde{X})}<1$. One such
case happens when $G/H$ is reductive with
$\frak{g}=\frak{m}\oplus\frak{h}$ and $\tilde{X}$ is the
corresponding left invariant vector field to a vector
$X\in\frak{m}$ such that $<X,X><1$ and $Ad(h)X=X$ for all $h\in H$
(see \cite{DeHo2} and \cite{EsSa1}.). By using $\tilde{X}$ we can
construct an invariant Randers metric on the homogeneous space
$G/H$ in the following way:
\begin{eqnarray}
F(xH,Y) = \sqrt{g(xH)(Y,Y)}+g(xH)(\tilde{X}_x,Y) \ \ \ \ \forall Y\in
T_{xH}(G/H).
\end{eqnarray}
Now we give an explicit formula for the flag curvature of these
invariant Randers metrics.
\newtheorem{thm}{Theorem}
\begin{thm}\label{flagcurvature}
Let $G$ be a compact Lie group, $H$ a closed subgroup, $g_0$ a
bi-invariant metric on $G$, and $\frak{g}$ and $\frak{h}$ the Lie
algebras of $G$ and $H$ respectively. Also let $g$ be any
invariant Riemannian metric on the homogeneous space $G/H$ such
that $<Y,Z>=<\phi Y,Z>_0$ for all $Y, Z\in \frak{g}$. Assume that
$\tilde{X}$ is an invariant vector field on $G/H$ which is
parallel with respect to $g$ and $g(\tilde{X},\tilde{X})<1$ and
$\tilde{X}_H=X$. Suppose that $F$ is the Randers metric arising
from $g$ and $\tilde{X}$, and $(P,Y)$ is a flag in $T_H(G/H)$ such
that $\{Y,U\}$ is an orthonormal basis of $P$ with respect to
$<.,.>$. Then the flag curvature of the flag $(P,Y)$ in $T_H(G/H)$
is given by
\begin{eqnarray}\label{Fcurvatureformula}
K(P,Y)=\frac{A}{(1+<X,Y>)^{3}},
\end{eqnarray}
where $A=\alpha.<X,U>+\gamma(1+<X,Y>)$, and for $A$ we have:
\begin{eqnarray}
\alpha&=&\frac{1}{4}(<[\phi U,Y]+[U,\phi Y],[Y,X]>_0+<[U,Y],[\phi Y,X]+[Y,\phi X]>_0)\nonumber\\
&&+\frac{3}{4}<[Y,U],[Y,X]_\frak{m}>+\frac{1}{2}<[U,\phi X]+[X,\phi U],\phi^{-1}([Y,\phi Y])>_0\nonumber\\
&&-\frac{1}{4}<[U,\phi Y]+[Y,\phi U],\phi^{-1}([Y,\phi X]+[X,\phi
Y])>_0,
\end{eqnarray}
and
\begin{eqnarray}
\gamma&=&\frac{1}{2}<[\phi U,Y]+[U,\phi Y],[Y,U]>_0\nonumber \\
&& \ \ \ +\frac{3}{4}<[Y,U],[Y,U]_{\frak{m}}>+<[U,\phi U],\phi^{-1}([Y,\phi Y])>_0 \\
&& \ \ \ -\frac{1}{4}<[U,\phi Y]+[Y,\phi U],\phi^{-1}([Y,\phi U]+[U, \phi Y])>_0.\nonumber
\end{eqnarray}
\end{thm}
\newproof{pf}{Proof}
\begin{pf}
Since $\tilde{X}$ is parallel with respect to $g$, the metric $F$ is of
Berwald type and the Chern connection of $F$ and the Riemannian
connection of $g$ coincide (see \cite{BaChSh}, page 305.), so we
have $R^F(U,V)W=R^g(U,V)W$, where $R^F$ and $R^g$ are the
curvature tensors of $F$ and $g$, respectively. Let $R:=R^g=R^F$
be the curvature tensor of $F$ (or $g$). Also for the flag
curvature we have (\cite{Sh}):
\begin{equation}\label{flag}
K(P,Y)=\frac{g_Y(R(U,Y)Y,U)}{g_Y(Y,Y).g_Y(U,U)-g_Y^2(Y,U)},
\end{equation}
where $g_Y(U,V)=\frac{1}{2}\frac{\partial^2}{\partial s\partial
t}(F^2(Y+sU+tV))|_{s=t=0}$.\\
By a direct computation for $F$ we get
\begin{eqnarray}\label{g_Y}
g_Y(U,V)&=&g(U,V)+g(X,U).g(X,V)-\frac{g(X,Y).g(Y,V).g(Y,U)}{g(Y,Y)^{\frac{3}{2}}}+\nonumber\\
&&\frac{1}{\sqrt{g(Y,Y)}}\{g(X,U).g(Y,V)+g(X,Y).g(U,V)\\
&&+g(X,V).g(Y,U)\}.\nonumber
\end{eqnarray}
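The formula \eqref{g_Y} can be double-checked symbolically: the sketch below (two-dimensional, with $g$ taken to be the Euclidean inner product, purely for illustration) differentiates $F^2$ directly and compares the result with the right-hand side of \eqref{g_Y} at a random rational point:

```python
import sympy as sp

s, t = sp.symbols('s t', real=True)
syms = sp.symbols('y1 y2 u1 u2 v1 v2 x1 x2', real=True)
Y, U = sp.Matrix(syms[0:2]), sp.Matrix(syms[2:4])
V, X = sp.Matrix(syms[4:6]), sp.Matrix(syms[6:8])

g = lambda a, b: (a.T * b)[0]                 # Euclidean model for g
F = lambda w: sp.sqrt(g(w, w)) + g(X, w)      # Randers metric F

W = Y + s * U + t * V
gY = sp.Rational(1, 2) * sp.diff(F(W) ** 2, s, t).subs({s: 0, t: 0})

n = g(Y, Y)
rhs = (g(U, V) + g(X, U) * g(X, V)
       - g(X, Y) * g(Y, V) * g(Y, U) / n ** sp.Rational(3, 2)
       + (g(X, U) * g(Y, V) + g(X, Y) * g(U, V)
          + g(X, V) * g(Y, U)) / sp.sqrt(n))

vals = dict(zip(syms, sp.Rational(1, 10) * sp.Matrix([3, -11, 7, 2, -5, 9, 1, 2])))
assert sp.simplify((gY - rhs).subs(vals)) == 0
```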
Since $\{Y,U\}$ is an orthonormal basis of $P$ with respect to
$<.,.>$, by using the formula (\ref{g_Y}) we have:
\begin{eqnarray}\label{eq1}
g_Y(Y,Y).g_Y(U,U)-g_Y^2(Y,U)=(1+<X,Y>)^{3}.
\end{eqnarray}
Also we have:
\begin{eqnarray}\label{eq2}
g_Y(R(U,Y)Y,U)&=&<R(U,Y)Y,U>+<X,R(U,Y)Y>.<X,U>\nonumber\\
&&+<X,Y>.<R(U,Y)Y,U>\\&&
+<X,U>.<Y,R(U,Y)Y>,\nonumber
\end{eqnarray}
Now let $\alpha=<X,R(U,Y)Y>$, $\theta=<Y,R(U,Y)Y>$ and
$\gamma=<R(U,Y)Y,U>$.\\
By using P\"uttmann's formula (see Lemma \ref{Puttmann}.) and some
computations we have:
\begin{eqnarray}\label{eq3}
\alpha&=&\frac{1}{4}(<[\phi U,Y]+[U,\phi Y],[Y,X]>_0+<[U,Y],[\phi Y,X]+[Y,\phi X]>_0)\nonumber\\
&&+\frac{3}{4}<[Y,U],[Y,X]_\frak{m}>+\frac{1}{2}<[U,\phi X]+[X,\phi U],\phi^{-1}([Y,\phi Y])>_0\nonumber\\
&&-\frac{1}{4}<[U,\phi Y]+[Y,\phi U],\phi^{-1}([Y,\phi X]+[X,\phi
Y])>_0,
\end{eqnarray}
\begin{eqnarray}\label{eq4}
\theta=0,
\end{eqnarray}
and
\begin{eqnarray}\label{eq5}
\gamma&=&\frac{1}{2}<[\phi U,Y]+[U,\phi Y],[Y,U]>_0+\frac{3}{4}<[Y,U],[Y,U]_{\frak{m}}>\nonumber\\
&&+<[U,\phi U],\phi^{-1}([Y,\phi Y])>_0\\
&& -\frac{1}{4}<[U,\phi Y]+[Y,\phi U],\phi^{-1}([Y,\phi U]+[U, \phi Y])>_0.\nonumber
\end{eqnarray}
Substituting the equations (\ref{g_Y}), (\ref{eq1}), (\ref{eq2}),
(\ref{eq3}), (\ref{eq4}) and (\ref{eq5}) in the equation
(\ref{flag}) completes the proof. \hfill$\Box$
\end{pf}
\newproof{rem}{Remark}
\begin{rem}
In the previous theorem, if we let $H=\{e\}$ and
$\frak{m}=\frak{g}$, then we obtain a formula for the flag
curvature of left invariant Randers metrics of Berwald type
arising from a left invariant Riemannian metric $g$ and a left
invariant vector field $\tilde{X}$ on a Lie group $G$.
\end{rem}
If the invariant Randers metric arises from a bi-invariant
Riemannian metric on a Lie group, then we can obtain a simpler
formula for the flag curvature; we give this formula in the
following theorem.
\begin{thm}
Suppose that $g_0$ is a bi-invariant Riemannian metric on a Lie
group $G$ and $\tilde{X}$ is a left invariant vector field on $G$
such that $g_0(\tilde{X},\tilde{X})<1$ and $\tilde{X}$ is parallel
with respect to $g_0$. Then we can define a left invariant Randers
metric $F$ as follows:
\begin{eqnarray*}
F(x,Y)=\sqrt{g_0(x)(Y,Y)}+g_0(x)(\tilde{X}_x,Y).
\end{eqnarray*}
Assume that $(P,Y)$ is a flag in $T_eG$ such that $\{Y,U\}$ is an
orthonormal basis of $P$ with respect to $<.,.>_0$. Then the flag
curvature of the flag $(P,Y)$ in $T_eG$ is given by
\begin{eqnarray*}
K(P,Y)=\frac{<[Y,[U,Y]],X>_0.<X,U>_0+<[Y,[U,Y]],U>_0(1+<X,Y>_0)}{4(1+<X,Y>_0)^{3}}.
\end{eqnarray*}
\end{thm}
\begin{pf}
Since $\tilde{X}$ is parallel with respect to $g_0$, the curvature
tensors of $g_0$ and $F$ coincide. On the other hand, for $g_0$ we
have $R(X,Y)Z=\frac{1}{4}[Z,[X,Y]]$, therefore by substituting $R$
in the equation (\ref{flag}) and using equation (\ref{g_Y}) the
proof is completed. \hfill$\Box$
\end{pf}
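To illustrate the formula, one can identify the Lie algebra $\frak{su}(2)$ with $(\mathbb{R}^3,\times)$, for which the Euclidean inner product is bi-invariant. Note that $\frak{su}(2)$ has trivial center, so no nonzero left invariant vector field is parallel here; the purely numerical evaluation below (with a test vector $X$ chosen so that $<X,Y>_0=0$, making the denominator reduce to $4$) only exercises the algebra of the formula, and recovers, for $X=0$, the classical bi-invariant sectional curvature $\frac{1}{4}\|[U,Y]\|_0^2$:

```python
import numpy as np

bracket = np.cross          # Lie bracket on su(2) ~ (R^3, cross product)
ip = np.dot                 # bi-invariant inner product <.,.>_0

def flag_curvature(X, Y, U):
    # numerator of the theorem's formula; {Y, U} is assumed orthonormal
    # and <X, Y>_0 = 0, so the denominator reduces to 4
    num = (ip(bracket(Y, bracket(U, Y)), X) * ip(X, U)
           + ip(bracket(Y, bracket(U, Y)), U) * (1.0 + ip(X, Y)))
    return num / 4.0

Y = np.array([1.0, 0.0, 0.0])
U = np.array([0.0, 1.0, 0.0])

K0 = flag_curvature(np.zeros(3), Y, U)   # Riemannian case X = 0
assert abs(K0 - 0.25) < 1e-12            # equals (1/4) * ||[U, Y]||^2

K1 = flag_curvature(np.array([0.0, 0.3, 0.0]), Y, U)
assert abs(K1 - 1.09 / 4.0) < 1e-12
```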
\section{Invariant Randers metrics on Lie groups}
In this section we study left invariant Randers metrics on Lie
groups and, in some special cases, obtain results about the
dimension of Lie groups which can admit invariant Randers metrics.
These conclusions are obtained by using the Yasuda-Shimada theorem,
one of the important theorems characterizing
Randers spaces. In 2001, Shen's examples
of Randers manifolds with constant flag curvature motivated Bao
and Robles to determine necessary and sufficient conditions for a
Randers manifold to have constant flag curvature. Shen's examples
showed that the original version of the Yasuda-Shimada theorem (1977)
is wrong. Bao and Robles then corrected it and gave
the current version, the Yasuda-Shimada theorem (2001) (see \cite{BaRo}.). (For a
comprehensive history of the Yasuda-Shimada theorem see \cite{Ba}.)\\
Suppose that $M$ is an $n$-dimensional manifold endowed with a
Riemannian metric $g=(g_{ij}(x))$ and a nowhere zero 1-form
$b=(b_i(x))$ such that $\|b\|^2=b_i(x)b_j(x)g^{ij}(x)<1$. We can
define a Randers metric on $M$ as follows
\begin{equation}\label{eq6}
F(x,Y)=\sqrt{g_{ij}(x)Y^iY^j}+b_i(x)Y^i.
\end{equation}
Next, we consider the 1-form $\beta=b^i(b_{j|i}-b_{i|j})dx^j$,
where the covariant derivative is taken with respect to the
Levi-Civita connection of $M$. We now state the Yasuda-Shimada
theorem, following \cite{Ba}.
\begin{thm}\label{Yasuda-Shimada}
(Yasuda-Shimada; see \cite{Ba}) Let $F$ be a strongly convex
non-Riemannian Randers metric on a smooth manifold $M$ of
dimension $n\geq 2$. Let $g_{ij}$ be the underlying Riemannian
metric and $b_i$ the drift 1-form. Then:
\begin{description}
\item[(+)] $F$ satisfies $\beta=0$ and has constant positive
flag curvature $K$ if and only if:
\begin{itemize}
\item $b$ is a non-parallel Killing field of $g$ with
constant length;
\item the Riemann curvature tensor of $g$ is given by
\begin{eqnarray*}
R_{hijk}&=&K(1-\|b\|^2)(g_{hk}g_{ij}-g_{hj}g_{ik})\\
&&+K(g_{ij}b_hb_k-g_{ik}b_hb_j)\\
&&-K(g_{hj}b_ib_k-g_{hk}b_ib_j)\\
&&-b_{i|j}b_{h|k}+b_{i|k}b_{h|j}+2b_{h|i}b_{j|k}
\end{eqnarray*}
\end{itemize}
\item[(0)] $F$ satisfies $\beta=0$ and has zero flag
curvature $\Leftrightarrow$ it is locally Minkowskian.
\item[(--)] $F$ satisfies $\beta=0$ and has constant negative
flag curvature if and only if:
\begin{itemize}
\item $b$ is a closed 1-form;
\item $b_{i|k}=\frac{1}{2}\sigma(g_{ik}-b_ib_k)$, with
$\sigma^2=-16K$;
\item $g$ has constant negative sectional curvature $4K$,
that is, \\ $R_{hijk}=4K(g_{ij}g_{hk}-g_{ik}g_{hj})$.
\end{itemize}
\end{description} \hfill$\Box$
\end{thm}
Since any Randers manifold of dimension $n=1$ is a Riemannian
manifold, from now on we consider $n>1$.
An immediate conclusion of Yasuda-Shimada theorem is the following
corollary.
\newtheorem{cor}{Corollary}
\begin{cor}
There is no non-Riemannian Randers metric of Berwald type with
$\beta=0$ and constant positive flag curvature.
\end{cor}
Indeed, for a Randers metric of Berwald type the 1-form $b$ is
parallel, whereas case (+) of Theorem \ref{Yasuda-Shimada} requires
$b$ to be a non-parallel Killing field.
Now by using the results of \cite{BeFa} we obtain the following
conclusions.
\begin{thm}\label{parallel}
Let $F^n=(M,F,g_{ij},b_i)$ be an $n$-dimensional parallelizable
Randers manifold of constant positive flag curvature with
$\beta=0$ on $M$ and complete Riemannian metric $g=(g_{ij})$. Then
the dimension of $M$ must be $3$ or $7$.
\end{thm}
\begin{pf}
By Theorem 2.2 of \cite{BeFa}, $M$ is diffeomorphic to a
sphere of dimension $n=2k+1$. But a sphere $S^m$ is parallelizable
if and only if $m=1,3$ or $7$ (see \cite{Ad}). Therefore $n=3$ or
$7$. \hfill$\Box$
\end{pf}
A family of Randers metrics of constant positive flag curvature on
the Lie group $S^3$ was studied by D. Bao and Z. Shen (see
\cite{BaSh}). For each $K>1$ they produced an explicit example of a
compact boundaryless (non-Riemannian) Randers space on the Lie
group $S^3$ that has constant positive flag curvature $K$ and is
not projectively flat. In the following we give some results about
the dimension of Lie groups which can admit Randers metrics of
constant positive flag curvature. These results show that dimension
$3$ plays a special role.
\begin{cor}
There is no Randers Lie group of constant positive flag curvature
with $\beta=0$, complete Riemannian metric $g=(g_{ij})$ and $n\neq
3$.
\end{cor}
\begin{pf}
Any Lie group is parallelizable, so by Theorem \ref{parallel} and
the condition $n\neq 3$, $n$ must be $7$. But then $G$ would be
diffeomorphic to $S^7$, and $S^7$ cannot admit any Lie group
structure; hence the proof is completed. \hfill$\Box$
\end{pf}
In analogy with \cite{Mi}, where the sectional curvature of left
invariant Riemannian metrics on Lie groups is computed, we compute
the flag curvature of left invariant Randers metrics on Lie groups
in the following theorem.
\begin{thm}\label{flag(ei)}
Let $G$ be a compact Lie group with Lie algebra $\frak{g}$, $g_0$
a bi-invariant Riemannian metric on $G$, and $g$ any left
invariant Riemannian metric on $G$ such that $<X,Y>=<\phi X,Y>_0$
for a positive definite endomorphism
$\phi:\frak{g}\longrightarrow\frak{g}$. Assume that $X\in\frak{g}$
is a vector such that $<X,X><1$ and $F$ is the Randers metric
arising from $\tilde{X}$ and $g$ as follows:
\begin{eqnarray*}
F(x,Y)=\sqrt{g(x)(Y,Y)}+g(x)(\tilde{X}_x,Y),
\end{eqnarray*}
where $\tilde{X}$ is the left invariant vector field corresponding
to $X$, and we have assumed $\tilde{X}$ is parallel with respect
to $g$. Let $\{e_1,\cdots,e_n\}\subset\frak{g}$ be a
$g$-orthonormal basis for $\frak{g}$. Then the flag curvature of
$F$ for the flag $P=span\{e_i,e_j\} (i\neq j)$ at the point
$(e,e_i)$, where $e$ is the unit element of $G$, is given by the
following formula:
\begin{eqnarray*}
K(P=span\{e_i,e_j\},e_i)=\frac{X_j.<R(e_j,e_i)e_i,X>+(1+X_i).<R(e_j,e_i)e_i,e_j>}{(1+X_i)^2(1-X_i)},
\end{eqnarray*}
where $X=X^ke_k$,
\begin{eqnarray*}
<R(e_j,e_i)e_i,X>&=& -\frac{1}{4}(<[\phi e_j,e_i],[e_i,X]>_0+<[e_j,\phi e_i],[e_i,X]>_0 \\
&&+<[e_j,e_i],[\phi e_i,X]>_0+<[e_j,e_i],[e_i,\phi X]>_0)\\
&&+\frac{3}{4}<[e_j,e_i],[e_i,X]>\\
&&-\frac{1}{2}<[e_j,\phi X]+[X,\phi e_j],\phi^{-1}([e_i,\phi e_i])>_0\\
&&+\frac{1}{4}<[e_j,\phi e_i]+[e_i,\phi e_j],\phi^{-1}([e_i,\phi X]+[X,\phi e_i])>_0
\end{eqnarray*}
and
\begin{eqnarray*}
<R(e_j,e_i)e_i,e_j>&=&-\frac{1}{2}(<[\phi e_j,e_i],[e_i,e_j]>_0+<[e_j,\phi e_i],[e_i,e_j]>_0) \\
&&+\frac{3}{4}<[e_j,e_i],[e_i,e_j]>-<[e_j,\phi e_j],\phi^{-1}([e_i,\phi e_i])>_0\\
&&+\frac{1}{4}<[e_j,\phi e_i]+[e_i,\phi e_j],\phi^{-1}([e_i,\phi e_j]+[e_j,\phi e_i])>_0.
\end{eqnarray*}
\end{thm}
\begin{pf}
The result follows directly from Theorem \ref{flagcurvature}.
\hfill$\Box$
\end{pf}
Now we give some properties of those Lie groups which admit a left
invariant non-Riemannian Randers metric of Berwald type arising
from a left invariant Riemannian metric and a left invariant
vector field.
\begin{thm}
There is no left invariant non-Riemannian Randers metric of
Berwald type arising from a left invariant Riemannian metric and a
left invariant vector field on connected Lie groups with a perfect
Lie algebra, that is, a Lie algebra $\frak{g}$ for which the
equation $[\frak{g},\frak{g}]=\frak{g}$ holds.
\end{thm}
\begin{pf}
If a left invariant vector field $X$ is parallel with respect to a
left invariant Riemannian metric $g$ then, by using Lemma 4.3 of
\cite{BrFiSpTaWu}, $g(X,[\frak{g},\frak{g}])=0$. Since $\frak{g}$
is perfect therefore $X$ must be zero. \hfill$\Box$
\end{pf}
\begin{cor}
There is no left invariant non-Riemannian Randers metric of
Berwald type arising from a left invariant Riemannian metric and a
left invariant vector field on semisimple connected Lie groups.
\end{cor}
\begin{cor}
If a Lie group $G$ admits a left invariant non-Riemannian Randers
metric of Berwald type $F$ arising from a left invariant
Riemannian metric $g$ and a left invariant vector field $X$ then
for sectional curvature of the Riemannian metric $g$ we have
\begin{eqnarray*}
K(X,u)\geqslant 0
\end{eqnarray*}
for all $u$, where equality holds if and only if $u$ is orthogonal
to the image $[X,\frak{g}]$.
\end{cor}
\begin{pf}
Since $F$ is of Berwald type, $X$ is parallel with respect to $g$.
By using Lemma 4.3 of \cite{BrFiSpTaWu}, $ad(X)$ is skew-adjoint,
therefore by Lemma 1.2 of \cite{Mi} we have $K(X,u)\geqslant 0$.
\hfill$\Box$
\end{pf}
| {
    "arxiv_id": "1305.2856",
    "url": "https://arxiv.org/abs/1305.2856",
    "title": "On the flag curvature of invariant Randers metrics",
    "subjects": "Differential Geometry (math.DG)",
    "abstract": "In the present paper, the flag curvature of invariant Randers metrics on homogeneous spaces and Lie groups is studied. We first give an explicit formula for the flag curvature of invariant Randers metrics arising from invariant Riemannian metrics on homogeneous spaces and, in special case, Lie groups. We then study Randers metrics of constant positive flag curvature and complete underlying Riemannian metric on Lie groups. Finally we give some properties of those Lie groups which admit a left invariant non-Riemannian Randers metric of Berwald type arising from a left invariant Riemannian metric and a left invariant vector field."
} |
https://arxiv.org/abs/1501.03053 | Random Triangle Theory with Geometry and Applications | What is the probability that a random triangle is acute? We explore this old question from a modern viewpoint, taking into account linear algebra, shape theory, numerical analysis, random matrix theory, the Hopf fibration, and much much more. One of the best distributions of random triangles takes all six vertex coordinates as independent standard Gaussians. Six can be reduced to four by translation of the center to $(0,0)$ or reformulation as a 2x2 matrix problem.In this note, we develop shape theory in its historical context for a wide audience. We hope to encourage other to look again (and differently) at triangles.We provide a new constructive proof, using the geometry of parallelians, of a central result of shape theory: Triangle shapes naturally fall on a hemisphere. We give several proofs of the key random result: that triangles are uniformly distributed when the normal distribution is transferred to the hemisphere. A new proof connects to the distribution of random condition numbers. Generalizing to higher dimensions, we obtain the "square root ellipticity statistic" of random matrix theory.Another proof connects the Hopf map to the SVD of 2 by 2 matrices. A new theorem describes three similar triangles hidden in the hemisphere. Many triangle properties are reformulated as matrix theorems, providing insight to both. This paper argues for a shift of viewpoint to the modern approaches of random matrix theory. As one example, we propose that the smallest singular value is an effective test for uniformity. New software is developed and applications are proposed. | \section{Introduction}
Triangles live on a hemisphere and are linked to 2 by 2 matrices.
The familiar triangle is seen in a different light. New understanding
and new applications come from its connections to the modern developments of random matrix theory.
You may never look at a triangle the same way again.
We began with an idle
question: \textit{Are most triangles acute or obtuse\,?}
While looking for an answer, a note was passed in lecture. (We do
not condone our behavior\,!) The note contained an integral over a region in $\mathbb{R}^{6}$. The evaluation of that integral gave us a number -- the fraction of obtuse triangles. This paper will present several other ways to reach that number, but our real purpose is to provide a more complete picture of {}``triangle space.''
Later we learned that Lewis Carroll (as Charles Dodgson) asked the same question in $1884$. His answer for the probability of an obtuse triangle (by his rules) was
\[
\dfrac{3}{8-\dfrac{6}{\pi}\sqrt{3}}\approx0.64.
\]
Variations of interpretation lead to multiple answers (see \cite{eisenberg96,Portnoy94} and their references). Portnoy reports that in the first issue of The Educational Times $(1886)$, Woolhouse reached $9/8-4/\pi^{2}\approx0.72$. In every case obtuse triangles are the winners -- if our mental image of a typical triangle is acute, we are wrong. Probably a triangle taken randomly from a high school geometry book would indeed be acute. Humans generally think of acute triangles, indeed nearly equilateral triangles or right triangles, in our mental representations of a generic triangle. Carroll fell short of our favorite answer $3/4$, which is more mysterious than it seems. There is no paradox, just different choices of probability measure.
The most developed piece of the subject is humbly known as {}``Shape
Theory.'' It was the last interest of the first professor of mathematical statistics at Cambridge University, David Kendall
\cite{kendall89,kendall10}. We rediscovered on our own what the shape theorists knew, that \textit{triangles are naturally mapped onto points of a hemisphere}. It was a thrill to discover both the result and the history of shape space.
We will add a purely geometrical derivation of the picture of triangle
space, delve into the linear algebra point of view, and connect triangles to
computational mathematics issues including condition number analysis, Kahan's \cite{kahan1,kahan2} accuracy of areas of needle shaped triangles, and random matrix theory.
We hope to rejuvenate the study of shape theory\,!
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.25]{bookcover}\includegraphics[angle=359,scale=0.22]{problem58}
\par\end{centering}
\caption[]{Lewis Carroll's Pillow Problem $58$ (January $20$, $1884$). $25$ and $83$ are page numbers for his answer and his method of solution. He specifies the longest side $AB$ and assumes that $C$ falls uniformly in the region where $AC$ and $BC$ are not longer than $AB$.}
\label{fig:Lewis-Carroll's-Pillow}
\end{figure}
\subsection{Random Angle Space}
\vspace{-.1in}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.4]{pasted15}
\par\end{center}
\vspace{-2ex}
\caption[]{Uniform angle distribution\,: angle $1 +$
angle $2 +$ angle $3=180^{\circ}$.}
\label{fig:Uniform:-angle-1}
\end{figure}
The simplest model of a random triangle works with random angles.
We mention it as a contrast to the Gaussian model that is more central to this paper.
Figure~\ref{fig:Uniform:-angle-1} shows a picture of angle space in $\mathbb{R}^{3}$. It is a {}``barycentric'' picture in the plane for which $\hbox{angle }1+\hbox{ angle } 2+\hbox{angle }3=180^{\circ}$. A random point chosen from this space has a natural distribution, the uniform distribution, on the three angles. We then reach a simple fraction\,: $3/4$ of the triangles are obtuse.
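The uniform angle model is easy to simulate. A minimal sketch (the function name is ours), using the standard fact that two uniform cut points on $[0,180]$ produce a uniformly distributed point of the angle simplex:

```python
import random

def obtuse_fraction_uniform_angles(trials, seed=0):
    """Estimate P(obtuse) when (angle1, angle2, angle3) is uniform on
    the simplex angle1 + angle2 + angle3 = 180 degrees."""
    rng = random.Random(seed)
    obtuse = 0
    for _ in range(trials):
        # Two uniform cut points split [0, 180] into three parts:
        # a uniformly distributed point of the angle simplex.
        u, v = sorted((rng.uniform(0, 180), rng.uniform(0, 180)))
        if max(u, v - u, 180 - v) > 90:
            obtuse += 1
    return obtuse / trials
```

With a few hundred thousand trials the estimate settles near $3/4$, matching the area count in Figure~\ref{fig:Uniform:-angle-1}: the obtuse region consists of three half-scale corner triangles, each covering one quarter of the simplex.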
\emph{The normal distribution on vertices also gives the fraction} $3/4$. This result is much less obvious. We are not aware of an argument that links the {}``angle picture'' with the normal distribution, though it is hard to imagine that anything in mathematics is a coincidence. Nonetheless, the normal distribution on vertices gives a very nonuniform distribution on angles. We explore this further in Section 3.6.
In the next section we will discuss the {}``natural'' shape picture
of triangle space that is equivalent to the normal distribution. It seems worth repeating a key message: the uniform angle distribution is not the same triangle measure as the Gaussian distribution yet obtuse triangles have the same probability, $3/4.$
\subsection{Table of Random Triangles}
Before computers were handy, random number tables were widely available for applications. On first sight, a booklet of random numbers seemed an odd use of paper and ink, but these tables were highly useful. In the same spirit, we publish in Figure~\ref{Fig:table} a table of $1000$ random triangles. Only the shapes matter, not the scaling. You might try to count how many are acute. The vertices have six independent standard normally distributed coordinates. Each triangle is recentered and rescaled, but not rotated.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.9]{thousandtriangles.pdf}
\par\end{centering}
\caption[]{1000 Random Triangles (Gaussian Distribution): Most triangles are obtuse.}
\label{Fig:table}
\end{figure}
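The count behind Figure~\ref{Fig:table} is easy to reproduce. A sketch under the stated Gaussian model (the function name is ours); obtuseness is tested with the law of cosines on squared side lengths:

```python
import random

def obtuse_fraction_gaussian(trials, seed=1):
    """Vertices (x1,y1), (x2,y2), (x3,y3) have six i.i.d. standard
    normal coordinates.  By the law of cosines a triangle is obtuse
    exactly when its largest squared side exceeds the sum of the
    other two."""
    rng = random.Random(seed)
    obtuse = 0
    for _ in range(trials):
        x1, y1, x2, y2, x3, y3 = (rng.gauss(0, 1) for _ in range(6))
        sides = sorted(((x2 - x3) ** 2 + (y2 - y3) ** 2,
                        (x1 - x3) ** 2 + (y1 - y3) ** 2,
                        (x1 - x2) ** 2 + (y1 - y2) ** 2))
        if sides[2] > sides[0] + sides[1]:
            obtuse += 1
    return obtuse / trials
```

The estimate settles near $3/4$, the same fraction as the uniform angle model.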
\subsection{A Fortunate {}``Optical Illusion''}
This subsection moves from $1,\!000$ to $50,\!000$ random triangles. The assumptions are the same\,: $x_{1},y_{1},x_{2},y_{2},x_{3},y_{3}$ are six independent random numbers drawn from the standard normal distribution (mean $0$, variance $1$). These six numbers generate a random triangle with vertices $(x_{1},y_{1})$, $(x_{2},y_{2})$ and $(x_{3,}y_{3})$. From the vertices, we can compute the three side lengths, $a$, $b$, $c$. And since we do not care about scaling, we may normalize so that $a^{2}+b^{2}+c^{2}=1$.
Instead of drawing the actual triangle, we represent it by
the point $(a^{2},b^{2},c^{2})$ in the plane $x+y+z=1$. The
result for many random triangles appears in Figure~\ref{fig:Points-represent-triangle}.
The first curiosity is that the collection of points forms a disk.
This is the triangle inequality, though the connection is not obvious. Points on the outer circumference
represent degenerate triangles, with area $0$.
A second curiosity concerns right triangles. The points that represent
right triangles land on the white figure, an equilateral triangle
inscribed in the disk.
A third curiosity, not visible in the picture, and also not obvious, is that one quarter
of the points land in the acute region, inside the white equilateral
triangle. Each of the three disk segments representing obtuse triangles also contains one quarter of the points.
A fourth curiosity, perhaps the most important of all, is
the particular density of points (triangles) towards the perimeter of the disk. It is not difficult to imagine, in the spirit of many familiar optical illusions, that one is looking straight down towards the top of a hemisphere. The three white line segments are semicircles on the hemisphere, viewed {}``head on''.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=1.0]{lotsdots}
\par\end{centering}
\caption[]{Points represent triangle shapes, where $(a^{2},b^{2},c^{2})$ is drawn in the plane $x+y+z=1$. Right triangles fall on three white lines like $a^2+b^2=c^2=1/2$.} \label{fig:Points-represent-triangle}
\end{figure}
Indeed, \textit{the points are uniformly distributed on a hemisphere}.
This fact was known to David Kendall, and is a major underpinning
of the subject of {}``shape theory.'' We discovered it ourselves
through this picture. Each area is $1/4$ of the area of the hemisphere, as Archimedes knew.
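Archimedes' observation gives a quick numerical probe of this uniformity: on the radius-$\frac{1}{2}$ hemisphere, uniformity is equivalent to the height coordinate being uniform on $[0,\frac{1}{2}]$. Since a Gaussian vertex matrix $T$ yields a $2\times2$ matrix with i.i.d. Gaussian entries (the rows of $\Delta$ are orthonormal), and the height after normalization is $\sigma_{1}\sigma_{2}=|\det M|/\|M\|_F^2$, the check takes a few lines (a sketch; the function name is ours):

```python
import random

def hemisphere_heights(trials, seed=2):
    """Heights of Gaussian triangle shapes on the radius-1/2 hemisphere.
    For M with i.i.d. N(0,1) entries, after scaling so that
    sigma1^2 + sigma2^2 = 1, the height is
    sigma1 * sigma2 = |det M| / ||M||_F^2."""
    rng = random.Random(seed)
    heights = []
    for _ in range(trials):
        m11, m12, m21, m22 = (rng.gauss(0, 1) for _ in range(4))
        det = abs(m11 * m22 - m12 * m21)
        frob2 = m11 ** 2 + m12 ** 2 + m21 ** 2 + m22 ** 2
        heights.append(det / frob2)
    return heights
```

Uniformity on the hemisphere forces these heights to be uniform on $[0,\frac{1}{2}]$, so their mean should be close to $\frac{1}{4}$ and half of them should fall below $\frac{1}{4}$.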
Grade school geometry emphasizes {}``side-side-side'' as enough
to represent any triangle, so what does height on the hemisphere represent\,? The answer is simple\,: height represents the area. The Equator has height zero, for degenerate triangles. The North Pole, an equilateral triangle, has maximum height. Latitudes represent triangles of equal area.
There is more. Triangles with one angle specified form small circles
on the hemisphere going through two vertices of the white figure. Indeed
it is good advice to take any triangle property and consider what
this hemisphere representation has to say.
\subsection{Two by Two Matrices: Turning Geometry into Linear Algebra}
A triangle may be represented as a $2$ by $3$ matrix $T$ whose columns give the $x,y$ coordinates of the vertices.
One way to remove two degrees of freedom is to translate the first vertex, say, to the origin. A more symmetric way
to remove two degrees of freedom is to translate the centroid to the origin.
There is something to be said for treating all vertices and also all
edges symmetrically. The classical law of sines does this, as we will do, but not
the law of cosines. Heron's formula for area is also symmetric, but $\frac{1}{2}\hbox{ base }\times\hbox{ height }$ requires the choice of a base and hence is not symmetric.
Let $\Delta$ be the $2 \times 3$ matrix of a reference equilateral triangle centered at the origin.
This means that each of the three columns of $\Delta$ has the same euclidean length.
A $2\times2$ matrix $M$
transforms the triangle by taking the vertices of the equilateral triangle to the vertices of another triangle centered at the origin.
As illustrated in
Figure~\ref{fig:2by2} below, every
$2\times2$ matrix $M$
may be associated with a
triangle with zero centroid
through the equation $T=M\Delta$. If the matrix $M$ is random, then the associated triangle $T$ is random.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{twobytwo}
\caption[]{Equivalence between $2\times 2$ matrices $M$ and triangles.}
\label{fig:2by2}
\end{figure}
An origin centered triangle may alternatively be represented by its edge vectors which
we may place in a $2 \times 3$ matrix $E$ whose columns add to the zero column $(0,0)^T$.
We may again let $\Delta$ be the notation we use, this time for the edge vector matrix for an equilateral triangle.
Whether $\Delta$ is thought of as the vertices of an origin centered equilateral triangle or the edges, the important
property is that the columns of $\Delta$ are all of equal length. The vertices are equidistant from $(0,0)$, the edges have equal
lengths.
If $M$ is a random $2 \times 2$ matrix, then $E=M\Delta$ produces a random triangle whose columns are triangle edges.
The columns of $\Delta$ sum to $(0,0)^T$, hence the columns of $E$ sum to $(0,0)^T$, so $E$ encodes a proper closed triangle.
The edge lengths are the square roots of the diagonal elements of $E^{T}E$. The vertex view would have produced $T$, whose three column vectors originate at the centroid, giving less access to the edge lengths.
Section~2.2.1 takes a close look at the choice of the Helmert matrix $\Delta=
\left(\begin{array}{crc}
1/\sqrt{2} & -1/\sqrt{2} & 0\\
1/\sqrt{6} & 1/\sqrt{6} & -2/\sqrt{6}\end{array}\right)
$
as our reference.
Consider as an example the ``45,45,90'' zero-centered right triangle with vertices $(-2,-1)$, $(1,-1)$, $(1,2)$
and edge lengths $3\sqrt{2},3,3$.
We form the matrix $T=\left(\begin{array}{crc}
-2 & 1 & 1\\
-1& -1 & 2 \end{array}\right).
$
With the Helmert matrix as $\Delta$, the corresponding $M_{v}=T\Delta^T$ for the vertex view is $M_v=
-\left(\begin{array}{cc}
\sqrt{9/2} & \sqrt{3/2} \\
0 & \sqrt{6} \end{array}\right).
$
Any translation of the triangle produces the same $M_v$.
An edge view requires differences of columns of $T$.
We may transform between 0 centered vertices and edges with the equations
$$E = T
\left(
\begin{array}{rrr}
1 & -1 & 0 \\
0 & 1 & -1 \\
-1 & 0 & 1
\end{array}
\right) \ \ \ {\rm and }
\ \ \
T = \frac{1}{3}E
\left(
\begin{array}{rrr}
1 & 0& -1\\
-1 & 1 & 0 \\
0 & -1 & 1
\end{array}
\right)
. $$
These matrices are pseudoinverses of each other, since the $(1,1,1)$ direction is irrelevant.
The $3\sqrt{2},3,3$ triangle is then represented
as $E=\left(\begin{array}{rrr}
-3 & 3 & 0\\
-3 & 0 & 3 \end{array}\right).
$
The corresponding edge view matrix is $M_e=E\Delta^T=
- \left(\begin{array}{cc}
\sqrt{18} &0 \\
\sqrt{9/2} & \sqrt{27/2}
\end{array}\right).
$
It is possible to check that
$M_v = M_e
\left(\begin{array}{rr}
1/2 & \sqrt{3}/6 \\
-\sqrt{3}/6 & 1/2
\end{array}\right)
$
always holds.
This paper will take the edge view unless noted otherwise.
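The worked example above can be verified line by line. A plain-Python sketch (helper names are ours), checking $M_v=T\Delta^T$, the edge matrix $E$, $M_e=E\Delta^T$, and the fixed matrix relating the two views:

```python
import math

# Helmert reference triangle Delta (2 x 3).
D = [[1 / math.sqrt(2), -1 / math.sqrt(2), 0.0],
     [1 / math.sqrt(6), 1 / math.sqrt(6), -2 / math.sqrt(6)]]

def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

T = [[-2, 1, 1], [-1, -1, 2]]                        # vertices, centroid at 0
Mv = matmul(T, transpose(D))                          # vertex-view M
E = matmul(T, [[1, -1, 0], [0, 1, -1], [-1, 0, 1]])  # edge matrix
Me = matmul(E, transpose(D))                          # edge-view M
R = [[0.5, math.sqrt(3) / 6], [-math.sqrt(3) / 6, 0.5]]
MvCheck = matmul(Me, R)                               # should equal Mv
```

Note that $R$ is $\frac{1}{\sqrt{3}}$ times a rotation by $-\pi/6$, matching the $\sqrt{3}$ scale factor between edges and vertices of an equilateral triangle.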
\section{Triangular Shapes $=$ Points on the Hemisphere}
Now we come to the heart of the paper. Every triangular shape with ordered vertices is naturally identified with a point on the hemisphere. {}``Random triangles are uniformly distributed on that hemisphere.'' We know of three constructions that exhibit the identification\,:
\begin{itemize}
\item Complex Numbers
\item Linear Algebra\,: The Singular Value Decomposition
\item High School Geometry
\end{itemize}
\begin{flushleft}
The linear algebra approach, through the SVD, generalizes most readily
to higher dimensional shape theory. It also invites applications using Random Matrix Theory which has advanced
considerably since the early inception of shape theory. (See Sections 3.2 and 4.1 of this paper for connections to Random Matrix Theory including random condition numbers and uniformity tests.) Yet another benefit is the connection to the Hopf fibration.
\par\end{flushleft}
The geometric construction has two benefits not enjoyed by the others\,:
1. It can be understood with only knowledge of ordinary Euclidean
geometry
2. It reveals the triangular shape and the point on the hemisphere in
the same picture.
\subsection{Complex Numbers}
A generic triangle can be scaled, rotated, and reflected so as to have vertices with complex coordinates $0,1,\zeta$. There are six possibilities for $\zeta$ in the upper half plane, corresponding to the six permutations of the vertices. One can consider $\bar{\zeta}$ in the lower half plane, to pick up six more possibilities.
The usual correspondence with the hemisphere is that the stereographic
projection maps $\zeta$ from the upper half plane to the hemisphere
\cite{kendall89,kendall10}.
\subsection{Linear Algebra}
\subsubsection{The Helmert matrix and regular tetrahedra in $n$ dimensions}
The Helmert matrix $\Delta$ appears in statistics, group theory, and geodesy. It is a particular construction with orthonormal rows perpendicular to $(1,1,1)$\,:
\[
\Delta=
\left(\begin{array}{crc}
1/\sqrt{2} & -1/\sqrt{2} & 0\\
1/\sqrt{6} & 1/\sqrt{6} & -2/\sqrt{6}\end{array}\right).
\]
The columns contain the vertices of an equilateral triangle. They are also the edges of a similar equilateral triangle. The operator view is that $\Delta^T$ is
an isometry (distances and angles are preserved) from
$\mathbb{R}^{2}$ to the plane $x+y+z=0$. This means that $\Delta\Delta^{T}=I_{2}$ and $\Delta^{T}\Delta=I_{3}-J_{3}/3$, where $J_{3}=$\texttt{ones(3)} is the all-ones matrix.
A useful property related to the duality between vertices and edges
is that
\[
\Delta\left(\begin{array}{rrr}
1 & -1\\
& 1 & -1\\
-1 & & 1\end{array}\right)=
\sqrt{3}\left(\begin{array}{cr}
\cos\frac{\pi}{6} & -\sin\frac{\pi}{6}\\
\sin\frac{\pi}{6} & \cos\frac{\pi}{6}\end{array}\right)\Delta.
\]
The left hand side takes the vector differences of
the columns of $\Delta$ (the
vertices of
the equilateral triangle). These differences are the edges. We may place them as
vectors at a common origin. The right hand side is $\sqrt{3}$ times
a rotation matrix times $\Delta$.
The $\sqrt{3}$ represents the familiar fact that
the edge lengths of an equilateral triangle are $\sqrt{3}$
times the distances of the vertices from the centroid.
Also if you rotate an edge counterclockwise by $\pi/6$, you get the direction of a vertex.
The generalization of Helmert's construction to $\mathbb{R}^{n}$
is the $(n-1)\times n$ matrix $\Delta_n$. It has orthonormal rows, and
its zeros make it lower Hessenberg:
\begin{equation}
\Delta_{n}=\left(\begin{array}{cccc}
\sqrt{1\cdot2}\\
& \sqrt{2\cdot3}\\
& & \ddots\\
& & & \sqrt{(n-1)\cdot n}\end{array}\right)^{-1}\left(\begin{array}{crrcl}
1 & -1 & 0 & \ldots & 0\\
1 & 1 & -2 & \ldots & 0\\
\ldots & \ldots & \ldots & \ddots & \vdots\\
1 & 1 & 1 & \dots & -(n-1)\end{array}\right).
\label{eq:helmert}
\end{equation}
A transposed (and negated) form of that last matrix is available
in the statistics language R as \texttt{contr.helmert(n)}. The name indicates a matrix of {}``contrasts'' as used in statistics.
The square (and orthogonal) Helmert matrix adds a first row with entries $(1,\ldots,1)/\sqrt{n}$. It is obtained in MATLAB as an orthogonal test matrix with the command \texttt{gallery('orthog',n,4)}.
The rectangular $\Delta_{n}$ contains the vertices of a regular simplex from the {}``column view.'' When $n=4$ this is the regular tetrahedron with four vertices in $\mathbb{R}^{3}$. Our equilateral triangle $\Delta$ is $\Delta_{3}$. (The {}``row view'' consists of $n-1$ orthogonal rows each perpendicular to $(1,1,\ldots,1)$ in $\mathbb{R}^{n}$.) Again $\Delta_{n}\Delta_{n}^{T}=I_{n-1}$ and $\Delta_{n}^{T}\Delta_{n}=I_{n}-J_{n}/n$.
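The pattern in equation (\ref{eq:helmert}) is simple to generate for any $n$; a sketch (the function name is ours):

```python
import math

def helmert(n):
    """(n-1) x n Helmert matrix Delta_n:
    row k is (1, ..., 1, -k, 0, ..., 0) / sqrt(k*(k+1))."""
    rows = []
    for k in range(1, n):
        scale = math.sqrt(k * (k + 1))
        rows.append([1.0 / scale] * k + [-k / scale] + [0.0] * (n - k - 1))
    return rows
```

Here `helmert(3)` reproduces $\Delta$, and the identities $\Delta_{n}\Delta_{n}^{T}=I_{n-1}$ and $\Delta_{n}^{T}\Delta_{n}=I_{n}-J_{n}/n$ can be confirmed numerically for small $n$.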
\subsubsection{The SVD\,: How $2\times2$ matrices connect triangles to the hemisphere}\label{sub:The-SVD:-How}
Let $M=U\Sigma V^{T}$ be the Singular Value Decomposition of a $2\times2$ matrix $M$. As shown in Figure~\ref{fig:2by2}, with
the edge viewpoint, the columns of $E=M\Delta$ are ordered edges of a triangle. We take a few steps to make the shape and the SVD unique\,:
\begin{itemize}
\item Scaling\,: We may assume that the squared entries of $M$ have sum $1$. The diagonal matrix $\Sigma$ then has $1=\sigma_{1}^{2}+\sigma_{2}^{2}$. The triangle edges then have sum of squares $1$ since ${\rm tr}(E^T\! E)={\rm tr}(\Delta^T M^T M\Delta)
= {\rm tr}( M^T M\Delta\Delta^T) = {\rm tr}( M^T M)=1.$
\item
The orthogonal factor $U$ in the SVD is unimportant, since $M$ and $U^{-1}M$ correspond to
the same triangle just
rotated or reflected.
\item To make the SVD unique, assume that $\sigma_{1}\ge\sigma_{2}\ge0$ and that $V=\left(\begin{array}{rr}
\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{array}\right)$ with $0\le\theta<\pi$. There is a singularity in the SVD when $M=\frac{1}{\sqrt 2}I_{2}$. In this case $V$ is arbitrary.
\end{itemize}
We associate $V$ and $\Sigma$ with a point on the hemisphere of radius
$\frac{1}{2}$\,: The longitude is $2\theta$ and the height is $\sigma_{1}\sigma_{2}$. The latitude is thus asin($2\sigma_{1} \sigma_{2}$). From \foreignlanguage{english} {$1=\sigma_{1}^{2}+\sigma_{2}^{2}$} we have $0\le\sigma_{1}\sigma_{2}\le\frac{1}{2}$. The singularity of the North Pole (every angle is a longitude) is consistent with the singularity of the SVD at $\frac{1}{\sqrt 2}I_{2}$.
The height $\sigma_1\sigma_2$ is also $\left(\kappa+\kappa^{-1}\right)^{-1}$, where $\kappa=\sigma_{1}/\sigma_{2}\ge1$ is the condition number of $M$. The best conditioned matrix $(\kappa=1)$ is $\frac{1}{\sqrt{2}}I_{2}$ at the North Pole, and corresponds to the equilateral triangle. The ill-conditioned matrices with $\kappa=\infty$, and at height $0$, are on the equator. They correspond to degenerate triangles, with collinear vertices. We see our first hint of the link between {}``ill-conditioned'' triangles and {}``ill-conditioned'' matrices.
One can now go either way from triangles to the hemisphere through the
$2\times2$ matrix $M$\,:
\begin{itemize}
\item Hemisphere to matrix to triangles\,: Start with a point on the hemisphere. Create $M=\Sigma V^{T}$ from the latitude asin($2\sigma_{1} \sigma_{2}$) and longitude $2\theta$. The edges of the triangle are the columns of $M\Delta$.
\item Triangles to matrix to hemisphere\,: Start with a triangle centered at $0$ whose squared edges sum to $1$. Then $M=$ ($2\times3$ matrix of triangle edges)$\Delta^{T}$. The point on the hemisphere comes from the SVD of $M$ by ignoring $U$, taking the height from $\det M=\sigma_{1}\sigma_{2},$ and the longitude from $\theta$.
\end{itemize}
The matrix $\mbox{\ensuremath{M}}$ is related to the preshape in \cite{kendall89}.
In that view, any triangle can be moved into a
standard position through $\Delta^{T}$. Our recommended view as shown in Figure~\ref{fig:2by2} is that the preshape is an operator mapping the equilateral triangle into a particular triangle.
This viewpoint emphasizes the linear operator that transforms triangles.
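The triangles-to-hemisphere direction condenses into a few lines. A sketch (the function name is ours) using closed forms for the $2\times2$ SVD data: $\sigma_{1}\sigma_{2}$ comes from $\det M$ and $\|M\|_F$, and $\theta$ from the principal axis of $G=M^{T}M$ via $\tan 2\theta = 2g_{12}/(g_{11}-g_{22})$:

```python
import math

def shape_point(E):
    """Map a 2x3 edge matrix E (columns sum to zero) to (latitude,
    longitude) on the radius-1/2 shape hemisphere, via M = E * Delta^T.
    After normalizing so sigma1^2 + sigma2^2 = 1, the latitude is
    asin(2*sigma1*sigma2) and the longitude is 2*theta."""
    d = [[1 / math.sqrt(2), -1 / math.sqrt(2), 0.0],
         [1 / math.sqrt(6), 1 / math.sqrt(6), -2 / math.sqrt(6)]]
    M = [[sum(E[i][k] * d[j][k] for k in range(3)) for j in range(2)]
         for i in range(2)]
    frob2 = sum(M[i][j] ** 2 for i in range(2) for j in range(2))
    s1s2 = abs(M[0][0] * M[1][1] - M[0][1] * M[1][0]) / frob2
    latitude = math.asin(min(1.0, 2.0 * s1s2))
    # Principal-axis angle of G = M^T M gives theta (scale-invariant).
    g11 = M[0][0] ** 2 + M[1][0] ** 2
    g22 = M[0][1] ** 2 + M[1][1] ** 2
    g12 = M[0][0] * M[0][1] + M[1][0] * M[1][1]
    theta = 0.5 * math.atan2(2.0 * g12, g11 - g22)
    return latitude, (2.0 * theta) % (2.0 * math.pi)
```

For the $3\sqrt{2},3,3$ right triangle used earlier, this returns latitude and longitude both equal to $\pi/3$; the equilateral edge matrix $\Delta$ itself maps to the North Pole ($\lambda=\pi/2$).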
\subsection{Formulas at a glance}
As a summary, one may sequentially follow these steps to derive the
key formulas:
\begin{enumerate}
\item[] Reference Triangle:
edges from the three columns of \foreignlanguage{american}{ $\Delta=\left(\begin{array}{crc}
1/\sqrt{2} & -1/\sqrt{2} & 0\\
1/\sqrt{6} & 1/\sqrt{6} & -2/\sqrt{6}\end{array}\right).$}
\selectlanguage{american}%
Random Triangle: edges from the columns of $E=M\Delta.$ (Note $M=E \Delta^T$ since $\Delta \Delta^T=I$.)
\item[{{(1})}] \selectlanguage{english} SVD ($\sigma_1,\sigma_2,\theta$) of Matrix $M$ ($\|M\|=1$) :
$M=\Sigma V^{T}=\left(\begin{array}{rr}
\sigma_{1}\\
& \sigma_{2}\end{array}\right)\left(\begin{array}{rr}
\cos\theta & \sin\theta\\
-\sin\theta & \cos\theta\end{array}\right),$
$1 \ge \sigma_{1}\ge\sigma_{2}\ge 0,\ \sigma_{1}^{2}+\sigma_{2}^{2}=1,\ 0\le\theta<\pi,$\foreignlanguage{american}{\vspace{0in}} ($U$ not needed.)
\item[{{(2})}] \selectlanguage{english} Triangle edges $a,b,c$:
$a^{2}+b^{2}+c^{2}=1,\ \left(\begin{array}{c}
a^{2}\\
b^{2}\\
c^{2}\end{array}\right)=\mbox{diag}(\Delta^{T}M^{T}M\Delta), $ $a+b\ge c$, $b+c \ge a$, $c+a \ge b$. \foreignlanguage{american}{\vspace{0in}}
\item[{{(3})}] \selectlanguage{english} Hemisphere of radius $1/2$ (coordinates $\lambda$, $\phi$ denote
the point $\frac{1}{2}(\cos \lambda \cos \phi,\cos \lambda \sin \phi, \sin \lambda)$):
Latitude = $\lambda=\mbox{asin}(2\sigma_{1}\sigma_{2}),$ Longitude $=\phi=2\theta,$
Height = $\frac{1}{2}\sin(\lambda)$.
$0 \le \lambda \le \pi/2$, $ \ 0 \le \phi <2 \pi$.
\item[{(4)}] Disk of radius $1/2$ (projection of hemisphere to polar coordinates $r,\phi$):
$r=\frac{1}{2}\cos(\lambda)$. The angle $\phi=2\theta$ is the longitude of the point on the sphere. $0 \le r \le 1/2$, $0 \le \phi < 2 \pi$.
\end{enumerate}
The area formulas for these descriptions:
\[
K=\mbox{Area}=\frac{\sigma_{1}\sigma_{2}}{\sqrt{12}}=\frac{1}{4}\sqrt{1-2(a^4+b^4+c^4)}=\sqrt{\frac{1-4r^{2}}{48}}=\frac{\mbox{Height}}{\sqrt{12}}=\frac{\sin\lambda}{\sqrt{48}}=\frac{(\kappa+\kappa^{-1})^{-1}}{\sqrt{12}}.\]
Here are conversion formulas. The (2)$\rightarrow$(4) and (4)$\rightarrow$(2) blocks are developed in Sections \ref{sub_directinverse} and \ref{sub_direct}.
\vspace{0.2in}
\hspace{-0.4in}\begin{tabular}{|l@{}|c@{}|c@{}|c@{}|c@{\,}|}
\hline
From\textbackslash{}To & (1) SVD($M$) & (2) Triangle & (3) Hemisphere & (4) Disk\tabularnewline
\hline
\hline
$\begin{array}{@{\,}l@{\,}}
\mbox{(1) SVD($M$)} \\
\hspace{0.2in} \sigma_{1},\sigma_{2},\theta\end{array}$ & & $\begin{array}{@{\,}l@{\,}}
\rule{0pt}{0.15in} \ \ \ \ \ \ \ \ \mbox{diag}((M\Delta)^T(M\Delta)):\\[0.03in]
a^{2}=\frac{1}{3}(1-(\sigma_{1}^{2}-\sigma_{2}^{2})\cos2\theta_{+})\\[0.04in]
b^{2}=\frac{1}{3}(1-(\sigma_{1}^{2}-\sigma_{2}^{2})\cos2\theta_{-})\\[0.04in]
c^{2}=\frac{1}{3}(1-(\sigma_{1}^{2}-\sigma_{2}^{2})\cos2\theta)\\[0.05in] \end{array}$ & $\begin{array}{@{\,}l@{\,}}
\mbox{$\lambda=$asin(\ensuremath{2\sigma_{1}\sigma_{2})}}\\
\mbox{$\phi$=\ensuremath{2\theta}}\\[0.07in]
\mbox{Height=\ensuremath{K\sqrt{{12}}}}\\
\mbox{\ensuremath{=\sigma_{1}\sigma_{2}}=(\ensuremath{\kappa}+\ensuremath{\kappa^{-1})^{-1}}}\end{array}$ & $\begin{array}{@{\,}l@{\,}}
\rule{0pt}{0.15in} r\sin \phi=(M^T\! M)_{12}
\\
r\cos \phi=\frac{ (M^{\!T}\!\! M)_{11}-(M^{\!T}\!\! M)_{22} }{2}
\\ [0.05in]
r=\sqrt{\frac{1}{4}-\sigma_{1}^{2}\sigma_{2}^{2}}\\[.05in]
\ \ \ =\frac{1}{2}(\sigma_1^2-\sigma_2^2)\\[.03in]
\phi=2\theta\end{array}$\tabularnewline
\hline
$\begin{array}{@{\,}l@{\,}}
\mbox{(2) Triangle}\\
\hspace{0.2in} a^{2},b^{2},c^{2}\end{array}$ & $\begin{array}{@{\hspace{-.1in}}l@{\,}}
\mbox{ 1)Triangle\ensuremath{\rightarrow}\ disk}\\
\mbox{ 2)then use (4)}\downarrow\end{array}$ &
& $\! \! \frac{1}{\sqrt{6}} \left( \begin{array}{@{\,}l@{\,}}
\cos \lambda \sin \phi \\ \cos \lambda \cos \phi \end{array}\right)=
\Delta\! \left( \begin{array}{@{\,}c@{\,}} {\rule{0pt}{0.15in} a^2} \\ b^2 \\c^2 \end{array}\right)$
&
$r\left( \begin{array}{@{\,}c@{\,}} \sin \phi \\ \cos \phi \end{array}\right)
=
\sqrt \frac{3}{2} \Delta \! \left( \begin{array}{@{\,}c@{\,}} {\rule{0pt}{0.15in} a^2} \\ b^2 \\c^2 \end{array}\right)
$
\vspace{0.02in}
\tabularnewline
\hline
$\begin{array}{@{\,}l@{\,}}
\mbox{(3) Hemisphere}\\
\hspace{0.2in} \mbox{$\lambda,\phi$}\end{array}$ &
$\begin{array}{@{\,}l@{\,}}
\sigma_1=\cos(\lambda/2) \\
\sigma_2=\sin(\lambda/2)
\end{array}$
& $\begin{array}{@{\,}l@{\,}}
\mbox{ 1) }r=\cos (\lambda)/2\\
\mbox{ 2) then use }\downarrow\end{array}$
& \rule{0pt}{0.2in} & $\begin{array}{@{\,}l@{\,}}
r=\cos(\lambda)/2\\
\phi=\phi\end{array}$\tabularnewline
\hline
$\begin{array}{@{\,}l@{\,}}
\mbox{(4) Disk}\\
\hspace{0.2in} r,\phi\end{array}$ & $\begin{array}{@{\,}l@{\,}}
\rule{0pt}{0.14in} \sigma_{1}^{2}=\nicefrac{1}{2}+r\\
\sigma_{2}^{2}=\nicefrac{1}{2}-r\\
\theta=\phi/2\end{array}$ & $\begin{array}{@{\,}l@{\,}}
\rule{0pt}{0.14in} a^{2}=(1-2r\cos\phi_{+})/3\\
b^{2}=(1-2r\cos\phi_{-})/3\\
c^{2}=(1-2r\cos\phi)/3
\\
\end{array}$ &
$\begin{array}{l}
\lambda=\mbox{acos}(2r) \\ \phi=\phi \end{array} $
& \tabularnewline
\hline
\end{tabular}
$\theta_{\pm}=\theta\pm\pi/3,$ $\kappa=\sigma_{1}/\sigma_{2},$ $\phi_{\pm}=\phi\pm2\pi/3.$
\vspace{0.2in}
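The table can be spot-checked numerically. The following Python sketch (our own check, independent of the Julia notebook mentioned later) starts from an SVD triple $(\sigma_1,\sigma_2,\theta)$ in representation (1) and confirms that representations (2), (3), and (4) agree:

```python
import numpy as np

# Sketch: start from representation (1), an SVD triple (sigma1, sigma2, theta),
# and check representations (2), (3), (4) against one another.
s1, theta = 0.9, 0.7
s2 = np.sqrt(1 - s1**2)                       # enforce sigma1^2 + sigma2^2 = 1
M = np.diag([s1, s2]) @ np.array([[np.cos(theta), np.sin(theta)],
                                  [-np.sin(theta), np.cos(theta)]])   # M = Sigma V^T

Delta = np.array([[1/np.sqrt(2), -1/np.sqrt(2), 0],
                  [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)]])

# (2): squared side lengths from diag(Delta^T M^T M Delta)
a2, b2, c2 = np.diag(Delta.T @ M.T @ M @ Delta)
assert np.isclose(a2 + b2 + c2, 1)

# (3) and (4): hemisphere latitude/longitude and disk radius
lam, phi = np.arcsin(2*s1*s2), 2*theta
r = 0.5*(s1**2 - s2**2)
assert np.isclose(r, np.cos(lam)/2)           # (3) -> (4)

# (4) -> (2): the cosine formulas reproduce the squared sides
third = 2*np.pi/3
assert np.isclose(a2, (1 - 2*r*np.cos(phi + third))/3)
assert np.isclose(b2, (1 - 2*r*np.cos(phi - third))/3)
assert np.isclose(c2, (1 - 2*r*np.cos(phi))/3)
```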
Also useful is the direct mapping of $M$ to the hemisphere in Cartesian coordinates:
$$
M=
\left(
\begin{array}{cc}
M_{11} & M_{12} \\
M_{21} & M_{22}
\end{array}
\right)
\longrightarrow
\frac{1}{2}
\left(
\begin{array}{c}
(M_{11}^2+M_{21}^2)-(M_{12}^2+M_{22}^2) \\[.03in]
2(M_{11}M_{12}+M_{21}M_{22}) \\[.03in]
2|M_{11}M_{22}-M_{21}M_{12}|
\end{array}
\right)
=
\frac{1}{2}
\left(
\begin{array}{c}
(M^T\! M)_{11}-(M^T\! M)_{22}\\[.03in]
2(M^T\! M)_{12} \\[.03in]
2|\det{M}|
\end{array}
\right)
=
\frac{1}{2}
\left(
\begin{array}{c}
\cos \lambda \cos \phi \\[.03in]
\cos \lambda \sin \phi \\[.03in]
\sin \lambda
\end{array}
\right)
.
$$
This formula may be derived directly or through the Hopf Map described in Section 2.5.5.
A notebook in the programming language Julia is available that takes six representations
and performs all possible conversions. The notebook provides a correctness check by
performing all possible round trips between representations. (See http://www-math.mit.edu/\verb+~+edelman/Edelman/publications.htm.)
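The Cartesian formula above is also easy to test numerically. A minimal Python sketch (ours, not the Julia notebook) checks that the quadratic expressions land on the hemisphere of radius $1/2$ at height $\sigma_1\sigma_2$:

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.normal(size=(2, 2))
M /= np.linalg.norm(M)                        # normalize so sum of squares is 1

# direct map of M to the hemisphere (Cartesian coordinates)
point = 0.5*np.array([
    (M[0,0]**2 + M[1,0]**2) - (M[0,1]**2 + M[1,1]**2),
    2*(M[0,0]*M[0,1] + M[1,0]*M[1,1]),
    2*abs(np.linalg.det(M))])

# it lies on the hemisphere of radius 1/2 ...
assert np.isclose(np.linalg.norm(point), 0.5)
assert point[2] >= 0

# ... and its height is sigma1*sigma2 = |det M|
s = np.linalg.svd(M, compute_uv=False)
assert np.isclose(point[2], s[0]*s[1])
```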
\subsubsection{Disk $r,\phi$ to triangle $a^2,b^2,c^2$ directly}
\label{sub_direct}
An immediate consequence of
$$\left(\begin{array}{c}
a^{2}\\
b^{2}\\
c^{2}\end{array}\right)
=
\mbox{diag}((M\Delta)^T(M\Delta))
= \mbox{diag}(\Delta^{T}V\Sigma^2V^T\Delta)
$$
and
$$
\Sigma^2=
\left(\begin{array}{cc}
\sigma_1^2 & \\
& \sigma_2^2
\end{array} \right)
=\frac{1}{2} I+r \left(\begin{array}{cc}
1 &\\
& -1 \end{array}\right)
$$ is that
$$\left(\begin{array}{c}
a^{2}\\
b^{2}\\
c^{2}\end{array}\right)=
\frac{1}{3}
\left(\begin{array}{c}
1\\
1\\
1\end{array}\right)
+r \left( \rule{0pt}{0.3in} \Delta_i^T V\left(\begin{array}{cc}
1 &\\
& -1 \end{array}\right)V^T \Delta_{i} \right)_{i=1,2,3}
.$$
This contains the quadratic form
$V\left(\begin{array}{cc}
1 &\\
& -1 \end{array}\right)V^T$ evaluated at the three columns of $\Delta.$
One can readily derive
$$\left(\begin{array}{c}
a^{2}\\
b^{2}\\
c^{2}\end{array}\right)=
\frac{1}{3}
\left(\begin{array}{c}
1\\
1\\
1\end{array}\right)
-\frac{2}{3} r \left( \rule{0pt}{0.3in} \begin{array}{l}
\cos \phi_+ \\
\cos \phi_- \\
\cos \phi \end{array}\right)
\mbox{ with }
\left( \rule{0pt}{0.3in} \begin{array}{l}
\phi_+ \\
\phi_- \\
\phi \end{array}\right)
=
\left( \rule{0pt}{0.3in} \begin{array}{l}
\phi+{2\pi}/{3} \\
\phi- {2\pi}/{3} \\
\phi \end{array}\right) .
$$
We prefer the geometric realization that $\Delta$ contains
three columns oriented at angles $\pi/6,5\pi/6,9\pi/6$ with lengths $\sqrt{2/3}$.
The matrix $V\left(\begin{array}{cc}
1 &\\
& -1 \end{array}\right)V^T$ rotates a vector at angle $\alpha$ clockwise by $\theta$, reflects
across the x-axis, and then rotates counterclockwise by $\theta.$ The quadratic
form thus takes the dot product of a vector at angle $\alpha-\theta$ with its reflection,
i.e., two vectors separated by angle $2(\alpha-\theta)$, yielding $\cos (2\alpha-2\theta)=\cos(2\alpha-\phi)=-\cos(\phi-2\alpha+\pi)$.
Plugging in the three values $\pi/6,5\pi/6,$ and $9\pi/6$ for $\alpha$ gives the result with little algebra!
As $\phi$ runs from $0$ to $2\pi$, the points $(\sin \phi,\cos \phi)$ trace out a circle in the plane.
The transformation to the plane $x+y+z=0$ is
$$\sqrt{\frac{3}{2}}\Delta^T
\left( \begin{array}{l}
\sin \phi \\
\cos \phi \end{array}\right)
=
-
\left( \rule{0pt}{0.3in} \begin{array}{l}
\cos \phi_+ \\
\cos \phi_- \\
\cos \phi \end{array}\right).
$$
This yields an alternative formula to transform the disk to the squared sidelengths:
$$
\left(\begin{array}{c}
a^{2}\\
b^{2}\\
c^{2}\end{array}\right)=
\frac{1}{3}
\left(\begin{array}{c}
1\\
1\\
1\end{array}\right)
+
\sqrt{\frac{2}{3}}r\Delta^T
\left( \begin{array}{l}
\sin \phi \\
\cos \phi \end{array}\right) .
$$
\subsubsection{Triangle $a^2,b^2,c^2$ to disk $r,\phi$ directly: a ``barycentric'' interpretation}
\label{sub_directinverse}
Apply $\Delta$ to the equation just above:
\begin{equation}
\label{circleq}
\Delta
\left(\begin{array}{c}
a^{2}\\
b^{2}\\
c^{2}\end{array}\right)=
\sqrt{\frac{2}{3}}r
\left( \begin{array}{l}
\sin \phi \\
\cos \phi \end{array}\right) .
\end{equation}
Then $r$ and $\phi$ are the polar coordinates of
$$
\tilde{\Delta}\left(\begin{array}{c}
a^{2}\\
b^{2}\\
c^{2}\end{array}\right)= r\left(
\begin{array}{r}
\cos \phi \\
\sin \phi \end{array}
\right), \
{\mbox{ where }} \
\tilde{\Delta}=
\left(
\begin{array}{rrr}
1/2 & 1/2 & -1 \\
\sqrt{3}/2 & -\sqrt{3}/2 & 0 \end{array}
\right)
.
$$
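As a quick numerical illustration (our own check), take the 3-4-5 right triangle scaled so that $a^2+b^2+c^2=1$, and recover $r$ both from $\tilde{\Delta}$ and from the area:

```python
import numpy as np

a2, b2, c2 = 9/50, 16/50, 25/50               # 3-4-5 triangle, squared sides summing to 1
Dt = np.array([[1/2, 1/2, -1],
               [np.sqrt(3)/2, -np.sqrt(3)/2, 0]])
x, y = Dt @ np.array([a2, b2, c2])            # (r cos(phi), r sin(phi))
r = np.hypot(x, y)

# compare with r = sqrt(1/4 - 12 K^2), where K is Hero's area
K = 0.25*np.sqrt(1 - 2*(a2**2 + b2**2 + c2**2))
assert np.isclose(r, np.sqrt(0.25 - 12*K**2))
```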
{\bf Barycentric Interpretation:}
To obtain the point on the disk, take the equilateral triangle whose vertices are the columns of $\tilde{\Delta}$ (side length $\sqrt{3}$, vertices at distance $1$ from the origin) and find the point with barycentric coordinates $(a^2,b^2,c^2).$
{\bf The ``broken stick'' problem:} \cite{goodman08} asks how
frequently $a,b,c$ can be edge lengths of a triangle when their sum is the length of the stick.
How often do $a,b,c$ satisfy the triangle inequality? This has an easy answer, $1/4$
(again!), and a long history.
We could modify the
question to fix $a^{2}+b^{2}+c^{2}=1$ instead. The favorable $(a,b,c)$ are now
those inside the disk in Figure 7(a), measured as a fraction of the big triangle.
The solution to our ``renormalized'' problem is that the fraction $\pi/\sqrt{27}\approx 0.60$ of
all triples $(a,b,c)$ can be edge lengths of a triangle.
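A Monte Carlo check of the renormalized answer (our sketch; here ``random'' means $(a^2,b^2,c^2)$ uniform on the simplex, which matches the area ratio of disk to big triangle in the barycentric picture):

```python
import numpy as np

rng = np.random.default_rng(0)
# (a^2, b^2, c^2) uniform on the simplex x+y+z=1
sq = rng.dirichlet([1.0, 1.0, 1.0], size=200_000)
a, b, c = np.sqrt(sq).T

# fraction of triples satisfying all three triangle inequalities
frac = ((a + b > c) & (b + c > a) & (c + a > b)).mean()
assert abs(frac - np.pi/np.sqrt(27)) < 0.01   # pi/sqrt(27) is about 0.6046
```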
\subsubsection{Triangles with fixed area}
In the four representations the following ``sets of fixed area" are equivalent:
\begin{itemize}
\item Matrices $M$ with ($\|M\|_F^2=\sum M_{ij}^2=1$ and) determinant $\sqrt{12}K$.
\item Triangles with ($a^2+b^2+c^2=1$ and) constant area $K=\frac{1}{4}\sqrt{1-2(a^4+b^4+c^4)}$ (Hero's formula).
\item The latitude at height $\sqrt{12}K$ on the hemisphere.
\item A circle centered at the origin with radius $r=\frac{1}{2}\sqrt{1-48K^2}$ in the disk representation.
\end{itemize}
To sweep through all triangles with area $K$, take $r=\frac{1}{2}\sqrt{1-48K^2}$
and let $\phi=2\theta$ run from $0$ to $2\pi/3$. This avoids the ``rotated'' triangles that
cycle through $a,b,c$.
Here are special triangles with area $K$\, (including approximations for very small $K$):
\begin{itemize}
\item {\bf Right Triangles:}
\begin{centering}
\begin{tabular}{|c|c|c|c|} \hline
$K$ & $r$ & $\phi$ (or $\phi_\pm$) & squared sides \\[0.03in] \hline
$K\le 1/8$ & $r \ge 1/4$& $\pm$acos$(-\frac{1}{4r})$&
$ \frac{1}{2}\ $ and
$\ \frac{1}{4}\pm \frac{\sqrt{1-64K^2}}{4}=\frac{1}{4} \pm \frac{1}{4}\sqrt{\frac{16r^2-1}{3}} $\\ \hline
\end{tabular}
\end{centering}
For area $K\le\frac{1}{8}$ (corresponding to $r\ge\frac{1}{4}$) there are six congruent right triangles.
For $K$ small, the right triangles have squared side lengths $\frac{1}{2}$, $8K^{2}+128K^{4}+O(K^{6})$, and $\frac{1}{2}-8K^{2}-128K^{4}+O(K^{6})$. We can check that $\frac{1}{2}\sqrt{(8K^{2})(1/2)}\approx K$.
\item {\bf Isosceles Triangles} (third side smaller):
\begin{centering}
\begin{tabular}{|c|c|c|c|} \hline
$K$ & $r$ & $\phi$ (or $\phi_\pm$) & squared sides $\ \ \rule[-.07in]{0in}{0.2in} a^2+b^2+c^2=1$ \\ \hline
$K\le 1/\sqrt{48}$ & $r\le 1/2$ & 0 &
Two equal sides: $(1+r)/3$; \ third side: $(1-2r)/3$
\\ \hline
\end{tabular}
\end{centering}
For each $K$, three congruent isosceles triangles might be called the most acute of the acute triangles (furthest from the three white lines that represent right triangles). As $r\rightarrow\frac{1}{2},$ this isosceles
triangle approaches a right triangle. (Many physics computations use
this fact.) The squared side lengths for $K$ small are two of size $\frac{1}{2}-4K^{2}-48K^{4}+O(K^{6})$
and a tiny side $8K^{2}+96K^{4}+O(K^{6})$.
\item {\bf Isosceles Triangles} (third side larger):
\begin{centering}
\begin{tabular}{|c|c|c|c|} \hline
$K$ & $r$ & $\phi$ (or $\phi_\pm$) & squared sides $\ \ \rule[-.07in]{0in}{0.2in} a^2+b^2+c^2=1$ \\ \hline
$K\le 1/\sqrt{48}$ & $r\le 1/2$ & $\pi/3$ &
Two equal sides: $(1-r)/3$; \ third side: $(1+2r)/3$
\\ \hline
\end{tabular}
\end{centering}
For each $K$, three congruent isosceles triangles might be called the most obtuse of the triangles (although they may be acute). The squared side lengths for $K$ small are two of size $\frac{1}{6}+4K^{2}+48K^{4}+O(K^{6})$
and a third side $\frac{2}{3}-8K^{2}-96K^{4}+O(K^{6})$.
\item {\bf Singular Triangles:}
\begin{centering}
\begin{tabular}{|c|c|c|c|} \hline
$K$ & $r$ & $\phi$ (or $\phi_\pm$) & {\bf actual} sides \ \ $a,b,c$ \\ \hline
$0$ & $1/2$ & any &
$\sqrt{\frac{2}{3}}|\sin \frac{1}{2}\phi|, \sqrt{\frac{2}{3}}|\sin \frac{1}{2}\phi_+|, \sqrt{\frac{2}{3}}|\sin \frac{1}{2}\phi_-|$
\\ \hline
\end{tabular}
\end{centering}
The longer side is the sum of the two shorter sides.
\item{{\bf Nearly Singular Triangles:}}
For $K$ tiny, $r=\frac{1}{2}-12K^2-144K^4 +O(K^6).$
\end{itemize}
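The rows of these tables can be verified numerically. A small Python sketch (ours) checks the right and isosceles entries against Hero's formula:

```python
import numpy as np

def hero(a2, b2, c2):
    # Hero's formula specialized to squared sides with a2 + b2 + c2 = 1
    return 0.25*np.sqrt(1 - 2*(a2**2 + b2**2 + c2**2))

K = 0.1                                        # any K <= 1/8 works here
# right-triangle row: squared sides 1/2 and 1/4 +/- sqrt(1 - 64K^2)/4
d = np.sqrt(1 - 64*K**2)/4
assert np.isclose(hero(0.5, 0.25 + d, 0.25 - d), K)
assert np.isclose((0.25 + d) + (0.25 - d), 0.5)           # Pythagoras

# isosceles rows at radius r = sqrt(1 - 48K^2)/2
r = np.sqrt(1 - 48*K**2)/2
assert np.isclose(hero((1+r)/3, (1+r)/3, (1-2*r)/3), K)   # phi = 0
assert np.isclose(hero((1-r)/3, (1-r)/3, (1+2*r)/3), K)   # phi = pi/3
```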
We conclude with area/angle formulas that will be useful in Section 2.5.3.
They may have interest in their own right, as they do not seem to appear in the famous Baker
listings of area formulas from 1885 \cite{baker85}.
\begin{lem}
The following formulas relate side lengths $a,b,c$, area $K$, and angles $A,B,C$:
$$
\begin{array}{l}
\tan\, A=4K/(b^{2}+c^{2}-a^{2}) \\
\tan\, B=4K/(a^{2}+c^{2}-b^{2}) \\
\tan\, C=4K/(a^{2}+b^{2}-c^{2}).
\end{array}
$$
\end{lem}
We can derive $\tan A$ from the law of cosines, $\cos A=(b^{2}+c^{2}-a^{2})/(2bc)$,
and the well-known formula $\sin A=2K/(bc)$. We apply Lemma 1 with $a^2+b^2+c^2=1$ in the form
$$
\begin{array}{l}
\tan\, A=4K/(1-2a^{2}) \\
\tan\, B=4K/(1-2b^{2}) \\
\tan\, C=4K/(1-2c^{2}).
\end{array}
$$
If angle $A$ is held fixed, there is a linear relationship between $a^2$ and
$K$. Since $a^2$ is an affine function of position in the disk and $K$ is proportional to height,
triangles with a fixed angle $A$ lie on a plane, which cuts the hemisphere in a circular arc.
This will be further discussed in Section 2.5.3.
Three special arcs (the three white lines in Figure 3) represent all right triangles with
$A$ or $B$ or $C$ equal to $\pi/2$.
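Lemma 1 is easy to confirm numerically. A brief Python check (ours) on the 3-4-5 triangle:

```python
import numpy as np

a, b, c = 3.0, 4.0, 5.0
s = (a + b + c)/2
K = np.sqrt(s*(s - a)*(s - b)*(s - c))        # Hero's formula gives K = 6

# angle A opposite side a, from the law of cosines
A = np.arccos((b**2 + c**2 - a**2)/(2*b*c))
assert np.isclose(np.tan(A), 4*K/(b**2 + c**2 - a**2))    # tan A = 24/32 = 3/4

# the normalized form: scale so a^2 + b^2 + c^2 = 1 (angles are unchanged)
t2 = a**2 + b**2 + c**2
an2, Kn = a**2/t2, K/t2
assert np.isclose(np.tan(A), 4*Kn/(1 - 2*an2))
```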
\subsection{The Triangle Inequality and the Disk Boundary}
The razor-sharp triangle inequality shows up as the smooth disk in the plane $x+y+z=1$.
We saw this experimentally in Section 1.2. Careful readers may have thought about this disk
throughout Section 2.3; several points deserve attention:
\begin{itemize}
\item {\bf Area Viewpoint:}
The triangle inequality is equivalent to non-negative area.
Hero's formula \newline $K=\frac{1}{4}\sqrt{1-2(a^4+b^4+c^4)}$ implies
$a^4+b^4+c^4 \le 1/2$. In the coordinates $(x,y,z)=(a^2,b^2,c^2)$, this region is the intersection of the ball of radius $1/\sqrt{2}$ centered at the origin
with the plane $x+y+z=1$, whose distance from the origin (attained at $(1/3,1/3,1/3)$) is $1/\sqrt{3}$.
The radius of the bounding circle of the intersection is then $\sqrt{1/2-1/3}=\sqrt{1/6}$.
Equation \eqref{circleq} takes us to the disk with $r=1/2$ through the factor of $\sqrt{3/2}$.
For general $a,b,c$ (no restriction on $a^2+b^2+c^2$), Hero's formula is
$$16K^2=(a+b+c)(-a+b+c)(a-b+c)(a+b-c)=(a^2+b^2+c^2)^2-2(a^4+b^4+c^4).$$
Beyond the always-positive factor $a+b+c$, this formula has three factors whose positivity amounts to the three triangle inequalities.
The inequality $(x+y+z)^2-2(x^2+y^2+z^2)\ge0$ defines a cone through the origin which intersects
the plane $x+y+z=1$ in the aforementioned circle of radius $1/\sqrt{6}$.
\item {\bf Matrix Viewpoint:} The boundary of the triangle inequality is equivalent to
$|\det M|=\sigma_1\sigma_2=0.$
\item{\bf Hemisphere Viewpoint:} $\det(M)=0$ corresponds to the
equator of the hemisphere ($\lambda=0$).
\item{\bf Disk Viewpoint:} The conversion table yields $r=\cos(\lambda)/2 \le 1/2$, where $\lambda=0$ corresponds to the disk boundary at $r=1/2$.
\item{\bf Triangle Viewpoint:} The sides of the singular triangles (with $a^2+b^2+c^2=1$) are expressed in the table before Lemma 1.
\end{itemize}
\subsection{Geometric Construction}
Here we exhibit what is marvelous about mathematics\,: \textit{The triangle is not only represented as a point on the hemisphere, but it can be constructed geometrically inside that hemisphere.}
The full construction will be described in a moment. Informally, the point on the hemisphere
will be one vertex, and the base is inscribed in an equilateral triangle on the equatorial plane.
\subsubsection{Parallelians}
We must define the three parallelians of an equilateral triangle $E$,
meeting at an arbitrary point $P$.
\begin{defn}
\textbf{(Parallelians) }\textsl{Through any point $P$, draw lines parallel to the sides of $E$ (Figure~\ref{fig:Parallelians-through}). The parallelians are the segments of these three lines with endpoints on the sides (possibly extended) of $E$.}
\end{defn}
When $P$ is interior to $E$, the three line segments are interior
as well. They intersect at $P$. Otherwise their extensions intersect
at $P$.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.45]{newparallelian}
\par\end{centering}
\caption[]{Parallelians when $P$ is interior or exterior to $E$.}
\label{fig:Parallelians-through}
\end{figure}
\subsubsection{Barycentric coordinates revisited}
This section picks up where Section 2.3.2 left off, with a
barycentric view of the point in the disk.
We introduce big and little equilateral triangles whose vertices are the columns of
$$\Delta_{\mbox{big}}=
\left(
\begin{array}{rrr}
\sqrt{3}/2 & -\sqrt{3}/2 & 0 \\
1/2 & 1/2 & -1 \end{array}
\right)=\frac{\sqrt 6}{2}\Delta
$$
$$\Delta_{\mbox{little}}=
\Delta_{\mbox{big}}\left(
\begin{array}{ccc}
0 & 1/2 & 1/2 \\
1/2 & 0 & 1/2 \\
1/2 & 1/2 & 0 \end{array}
\right) =
\left(
\begin{array}{rrr}
-\sqrt{3}/4 & \sqrt{3}/4 & 0 \\
-1/4 & -1/4 & 1/2 \end{array}
\right)
=
-\frac{1}{2}
\Delta_{\mbox{big}}
$$
$$
{\mbox{\hspace{-.7in}or equivalently }}
\
\Delta_{\mbox{big}}= \Delta_{\mbox{little}}
\left(
\begin{array}{rrr}
-1 & 1 & 1 \\
1 & -1 & 1 \\
1 & 1 & -1 \end{array}
\right) .
$$
The columns of $\Delta_{\mbox{big}}$ all have norm $1$. The midpoints (columns of $\Delta_{\mbox{little}}$)
all have norm $1/2$.
We have seen in Section 2.4 that if the sides have $a^2+b^2+c^2=1$, then
$$
\Delta_{\mbox{big}} \left(
\begin{array}{r}
a^2 \\
b^2 \\
c^2 \end{array}
\right) =
\Delta_{\mbox{little}} \left(
\begin{array}{r}
-a^2+b^2+c^2 \\
a^2-b^2+c^2 \\
a^2+b^2-c^2 \end{array}
\right) =
\Delta_{\mbox{little}} \left(
\begin{array}{r}
1-2a^2 \\
1-2b^2 \\
1-2c^2 \end{array} \right)
$$
falls inside a disk of radius 1/2.
(Note: when comparing with Section 2.3.2, we have for convenience exchanged the x and y coordinates
relative to $\tilde{\Delta}$. This allows the little triangle to have a horizontal
base.)
In summary, the barycentric coordinates $(a^2,b^2,c^2)$ for the ``big'' equilateral triangle
correspond to the (possibly nonpositive) barycentric
coordinates $(1-2a^2,1-2b^2,1-2c^2)$ on the inverted ``little'' triangle. (Figure 7(a))
Figure 7 also includes the segments into which parallelians
are sliced. In an equilateral triangle, the barycentric coordinates are in the same ratio as the
distances to the sides in (d), which in turn are in the same ratio as the parallelian segments in (e). We note that the circle has radius $1/2$, the little triangle has edges $\sqrt{3}/2$ and the big triangle has edges $\sqrt{3}$. In Figure 7(e), the lengths labeled $\alpha,\beta,\gamma$ are exactly $\frac{\sqrt{3}}{2}(1-2a^2),\frac{\sqrt{3}}{2}(1-2b^2),\frac{\sqrt{3}}{2}(1-2c^2)$.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.6]{barycentrics}
\par\end{centering}
\caption[]{
(a) $P=(a^2,b^2,c^2)$ in the $\Delta_{\mbox {big}}$ barycentric system.
(b) $P=(\alpha,\beta,\gamma)$ in the $\Delta_{\mbox{little}}$ system.
Also in the same ratio $\alpha:\beta:\gamma$ are the areas (c), the perpendicular distances (d), and the parallelian segments (e).
Explicitly $\alpha:\beta:\gamma=(1-2a^2):(1-2b^2):(1-2c^2).$
}
\label{fig:Barycentric-Coordinates-new-in}
\end{figure}
\begin{center}
Endpoints for Parallelians through $P$ in Barycentric Coordinates
\end{center}
\begin{center}
\begin{tabular}{|c|c|c|} \hline
& $\Delta_{\mbox{big}}$ & $\Delta_{\mbox{little}}$ \\ \hline
$P$ & $(\rule{0pt}{0.2in}a^2,b^2,c^2)$ & $(1-2a^2,1-2b^2,1-2c^2)$ \\[0.1in] \hline
Parallelian 1 & $\begin{array}{l@{,\ }l@{,\ }l} (\rule{0pt}{0.2in} a^2&1/2&1/2-a^2)\\ (a^2&1/2-a^2&1/2) \end{array}$
& $\begin{array}{l@{,\ }c@{,\ }r} (\rule{0pt}{0.2in}1-2a^2&0&2a^2)\\ (1-2a^2&2a^2&0) \end{array}$
\\[0.2in] \hline
Parallelian 2 & $\begin{array}{l@{,\ }l@{,\ }l} (\rule{0pt}{0.2in} 1/2&b^2&1/2-b^2)\\ (1/2-b^2&b^2&1/2) \end{array}$
& $\begin{array}{l@{,\ }c@{,\ }r} (\rule{0pt}{0.2in}0&1-2b^2&2b^2)\\ (2b^2&1-2b^2&0) \end{array}$
\\[0.2in] \hline
Parallelian 3 & $\begin{array}{l@{,\ }l@{,\ }l} (\rule{0pt}{0.2in} 1/2&1/2-c^2&c^2)\\ (1/2-c^2&1/2&c^2) \end{array}$
& $\begin{array}{l@{,\ }c@{,\ }r} (\rule{0pt}{0.2in}0&2c^2&1-2c^2)\\ (2c^2&0&1-2c^2) \end{array}$
\\[0.2in] \hline
\end{tabular}
\end{center}
\subsubsection{Hemisphere to Triangle: Direct Construction}
Let $S$ be a point on the hemisphere corresponding to edge lengths $a,b,c.$
A straightforward but unusual construction yields
a triangle within that hemisphere whose sides are proportional to $a,b,c$.
In the construction below, we let $E$ denote the equilateral triangle inscribed
in the hemisphere's equator as in Figure 7(a). The vertices of $E$ in the xy plane
are the columns of $\Delta_{\mbox{little}}$.
{\bf Geometry Construction Theorem:} Let $S$ be any point on a hemisphere of radius $1/2$, and let $P$ be its projection onto the base. The line through $P$ parallel to the x-axis intersects
$E$ at $X$ and $Y$. The triangle $SXY$ in Figure \ref{fig:construct} has side lengths proportional to $a,b,c$.
{\bf Proof:} Let $\omega=\sqrt{3}$. We know from the last sentence in Section 2.5.2 that XP=$\omega(\frac{1}{2}-b^2)$ and YP=$\omega(\frac{1}{2}-a^2)$. Thus the parallelian length is XY=$\omega c^2$. The height SP is $\sqrt{12}K=2\omega K$ from the
table in Section 2.3.
Thus the tangent of angle SXP is $4K/(1-2b^2)$ and similarly the tangent of angle SYP is $4K/(1-2a^2)$.
These agree with the tangents in Lemma 1, so we have constructed the
triangle with sidelengths proportional to
$a,b,c.$
The lengths are SY=$\omega ac$, SX=$\omega bc$ and XY=$\omega c^2$.
This triangle inside the hemisphere is one of the three similar triangles that we now construct
using parallelians.
The collinear points $P,X,Y$ in the xy plane are the columns of
$$ [P \ X \ Y]=\Delta_{\mbox{big}} \left(\begin{array}{ccc}
a^2 & 1/2 & 1/2-c^2 \\
b^2 & 1/2-c^2 & 1/2 \\
c^2 & c^2 & c^2
\end{array}\right) . $$
This formula was used to compute $P,X,Y$ in the equatorial plane of Figure 9 from $a,b,c$.
We construct the hemisphere point $S$ by changing $P$'s
z-coordinate to $\sqrt{\frac{1}{4}-\|P\|^2}$.
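The construction can be checked in a few lines of Python (our own sketch): for a normalized 3-4-5 triangle, the distances $SY$, $SX$, $XY$ should be $\omega ac$, $\omega bc$, $\omega c^2$ with $\omega=\sqrt{3}$:

```python
import numpy as np

a, b, c = 3/np.sqrt(50), 4/np.sqrt(50), 5/np.sqrt(50)   # a^2 + b^2 + c^2 = 1
a2, b2, c2 = a*a, b*b, c*c
Dbig = np.array([[np.sqrt(3)/2, -np.sqrt(3)/2, 0],
                 [1/2, 1/2, -1]])

# columns are P, X, Y in the equatorial plane
PXY = Dbig @ np.array([[a2, 1/2, 1/2 - c2],
                       [b2, 1/2 - c2, 1/2],
                       [c2, c2, c2]])
P, X, Y = PXY.T
S = np.array([P[0], P[1], np.sqrt(1/4 - P @ P)])        # lift P to the hemisphere
X3, Y3 = np.append(X, 0), np.append(Y, 0)

w = np.sqrt(3)
assert np.isclose(np.linalg.norm(S - Y3), w*a*c)
assert np.isclose(np.linalg.norm(S - X3), w*b*c)
assert np.isclose(np.linalg.norm(X3 - Y3), w*c2)
```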
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.9]{construction}
\par\end{centering}
\caption[]{\begin{tabular}{l} \\
Top View (Left) and Front View (Right) \\
Triangle SXY has sides proportional to $a,b,c$.
\end{tabular}
}
\label{fig:construct}
\end{figure}
\subsubsection{The three similar triangles theorem}
Theorem 3 is interesting for several reasons. It seems unusual
that in traditional geometry, one would construct triangles with vertex on a hemisphere, and opposite edges inscribed in an equilateral triangle that itself is inscribed in the hemisphere's base.
It also feels unusual that the three triangles would share a common line segment SP which represents
{\it three different altitudes} in the shape.
\begin{thm}
Let S be the point on the hemisphere corresponding to $a,b,c$, and let P be its projection onto the base. Three triangles
share SP as an altitude, each with vertex S and opposite edge one of the parallelians
through P as in Figure 7(e). The three triangles thus created are similar, with edges in the proportion $a:b:c$.
\end{thm}
{\bf Proof:} The construction in 2.4.3, by symmetry, yields three such triangles, one per parallelian. The scaling factors $\omega a$, $\omega b$, and $\omega c$ multiply $(a,b,c)$.
The three similar triangles are illustrated in Figure 9.
\begin{figure}[H]
\begin{centering}
\includegraphics{theoremq.pdf}
\caption[]{
\begin{tabular}{l} \\
Illustration of the hidden geometry inside triangle shape space revealed in Theorem 3: \\
The three similar triangles theorem. \\
Above: Skeleton: The three triangles sharing the altitude SP are similar\,! \\
Below: 3d model: All triangles are similar;
congruent triangles are color coded.
\end{tabular}
}
\vspace{.2in}
\begin{center}
\includegraphics[scale=.35]{figall.pdf}
\includegraphics[scale=.35]{figtop.pdf}
\end{center}
\end{centering}
\end{figure}
\newpage
\subsubsection{Matrix to Hemisphere: The Hopf Map}
Hopf discovered in 1931 (see \cite{Lyons_2003}) that the hypersphere $S^3$ maps naturally to the sphere $S^2$.
We think of points on $S^3$ as the $2\times2$ matrices normalized by $\|M\|_F^2=\sum M_{ij}^2=1$.
The Hopf map comes directly from the Singular Value Decomposition $M=U\Sigma V^T$:
$$
\mbox{Hopf($M$)}:
M=U
\left(
\begin{array}{cc}
\cos(\lambda/2) & 0 \\
0 & \sin(\lambda/2)
\end{array}
\right)
\left(
\begin{array}{cr}
\cos(\phi/2) & -\sin(\phi/2) \\
\sin(\phi/2) & \cos(\phi/2)
\end{array}
\right)^T
\longrightarrow
\left(
\begin{array}{l}
\cos \lambda \cos \phi \\[.03in]
\cos \lambda \sin \phi \\[.03in]
\sin \lambda
\end{array}
\right) .
$$
The image lies in $S^2$.
Here the latitude is $\lambda \in [-\pi/2,\pi/2]$, with sign matching $\det(M)$.
The magnitude $|\lambda|$ is determined by the singular values of $M$, while the longitude $\phi$ comes from the
right singular vectors of $M$. The rotation matrix $U$ of left singular vectors gives the fiber of $S^3$ that is mapped
to a single point on $S^2$.
Remark: We have not seen the SVD in the usual definitions of the Hopf fibration. It may be the most immediate
way to get to the Hopf map.
An explicit formula is
$$
\mbox{Hopf($M$)}:
\left(
\begin{array}{cc}
M_{11} & M_{12} \\
M_{21} & M_{22}
\end{array}
\right)
\longrightarrow
\left(
\begin{array}{c}
(M_{11}^2+M_{21}^2)-(M_{12}^2+M_{22}^2) \\[.03in]
2(M_{11}M_{12}+M_{21}M_{22}) \\[.03in]
2(M_{11}M_{22}-M_{21}M_{12})
\end{array}
\right) .
$$
Our construction of triangle space, mapping
$M$ to the upper hemisphere, uses $\frac{1}{2}$Hopf($M$). The coordinate, $z=\frac{1}{2}\sin \lambda$, is
always chosen positive.
\subsubsection{Rotations, Quaternions, and Hopf}
The bad news is that
there is an ugly formula for the general 3 $\times$ 3 rotation matrix:
$$
Q_3 =
\left( \begin{array}{ccc}
\alpha^2 +\beta^2-\gamma^2-\delta^2 & 2(\alpha\delta+\beta\gamma) & 2(\beta\delta-\alpha\gamma) \\
-2(\alpha\delta-\beta\gamma) & \alpha^2 -\beta^2+\gamma^2-\delta^2 & 2(\alpha\beta+\gamma\delta) \\
2(\alpha\gamma+\beta\delta) & -2(\alpha\beta-\gamma\delta) & \alpha^2-\beta^2-\gamma^2+\delta^2
\end{array} \right) , $$
where $1=\alpha^2+\beta^2+\gamma^2+\delta^2$.
This is a rotation about the
axis $(\beta,\gamma,\delta)$
through the angle $2\arccos(\alpha)$.
The good news is that given any $Q_3$
we can obtain $\alpha,\beta,\gamma,\delta$
through an eigendecomposition. We can then associate a 4$\times$4 rotation matrix
$$Q_4=\left( \begin{array}{rrrr}
\alpha & -\beta & \delta & -\gamma \\
\beta & \alpha & \gamma & \delta \\
-\delta & -\gamma & \alpha & \beta \\
\gamma & -\delta & -\beta & \alpha
\end{array} \right) .
$$
The key relationship that ties $Q_3$ to $Q_4$ will be crucial for Theorem 6:
$$\mbox{Hopf}(Q_4M) = Q_3 \mbox{Hopf}(M).$$
$Q_4M$ denotes the operation of flattening $M$ to the column vector
$[M_{11},M_{21},M_{12},M_{22}]^T,$ applying $Q_4$, and reshaping back
into a $2\times2$ matrix.
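The relationship can also be verified numerically in a few lines. A Python sketch of our own (standing in for a symbolic check) builds $Q_3$ and $Q_4$ from a random unit quaternion and tests the identity on a random $M$:

```python
import numpy as np

rng = np.random.default_rng(1)

def hopf(M):
    # explicit Hopf map of Section 2.5.5
    return np.array([(M[0,0]**2 + M[1,0]**2) - (M[0,1]**2 + M[1,1]**2),
                     2*(M[0,0]*M[0,1] + M[1,0]*M[1,1]),
                     2*(M[0,0]*M[1,1] - M[1,0]*M[0,1])])

al, be, ga, de = rng.normal(size=4)
n = np.sqrt(al**2 + be**2 + ga**2 + de**2)    # normalize: alpha^2+...+delta^2 = 1
al, be, ga, de = al/n, be/n, ga/n, de/n

Q3 = np.array([[al**2+be**2-ga**2-de**2, 2*(al*de+be*ga),         2*(be*de-al*ga)],
               [-2*(al*de-be*ga),        al**2-be**2+ga**2-de**2, 2*(al*be+ga*de)],
               [2*(al*ga+be*de),         -2*(al*be-ga*de),        al**2-be**2-ga**2+de**2]])
Q4 = np.array([[al, -be,  de, -ga],
               [be,  al,  ga,  de],
               [-de, -ga,  al,  be],
               [ga, -de, -be,  al]])

M = rng.normal(size=(2, 2))
v = Q4 @ np.array([M[0,0], M[1,0], M[0,1], M[1,1]])     # flatten, rotate ...
Q4M = np.array([[v[0], v[2]], [v[1], v[3]]])            # ... reshape back

assert np.allclose(Q3 @ Q3.T, np.eye(3))                # Q3 is a rotation
assert np.allclose(Q4 @ Q4.T, np.eye(4))                # so is Q4
assert np.allclose(hopf(Q4M), Q3 @ hopf(M))             # the key relationship
```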
Readers who prefer not to think about quaternions can safely skip ahead to Section 2.6.
$\mbox{Hopf}(Q_4M) = Q_3 \mbox{Hopf}(M)$ could be checked directly, for example by Mathematica.
But quaternions are the better way to understand why the
relationship holds. It
is a manifestation of the quaternion identity
$$(qm)i(\bar{qm})=q(mi\bar{m})\bar{q}.$$
We encode the matrix $M$ with the unit quaternion $m=M_{11}-M_{21}i-M_{22}j+M_{12}k$, and
we will encode $Q_3$ and $Q_4$ with the unit quaternion $q=-\alpha+\beta i +\gamma j +\delta k.$
The Hopf map is visible in the relationship:
$$
\begin{array}{rcl}
mi\bar{m}&=& \left((M_{11}^2+M_{21}^2)-(M_{12}^2+M_{22}^2) \right)\ i \ + \
2(M_{11}M_{12}+M_{21}M_{22})\ j \ +\ 2(M_{11}M_{22}-M_{21}M_{12})\ k \\[.07in]
& = &[i \ j \ k ]\ \mbox{Hopf($M$)}. \end{array}$$
The matrix $Q_3$ may be found in the computation of $q(iX+jY+kZ)\bar{q}$ whose $(i,j,k)$ parts
are the components of $Q_3[X\ Y\ Z]^T$.
The matrix $Q_4$ may be found in the computation of the quaternion product $w=qm$ by writing
$W=
\left(
\begin{array}{rr}
w_1 & w_4 \\
-w_2 & w_3
\end{array}\right).$
This reverses how $m$ was formed from $M$. We then create $Q_4$ from $W=-Q_4M$ by flattening $M$ and $W$.
\subsection{The Hemisphere: Positions of special triangles}
We conclude the nonrandom section of this paper with a summary of the correspondence
between position on the hemisphere and special triangles:
\begin{itemize}
\item{Lines of Latitude:} Triangles of equal area.
\item{Lines of Longitude:} Triangles resulting from a fixed rotation of the reference triangle followed by scaling along the x and y axes.
\item Circular arc in a plane through an edge of the little equilateral triangle $E$: Triangles with constant angle. (Around the perimeter of the circle, most points correspond to the degenerate triangles with angles $(0,0,\pi)$. At the vertices of $E$, we obtain all the triangles $(\theta,\pi-\theta,0)$.)
\item{Vertical circular arcs as above:} Right triangles.
\item{North Pole:} Equilateral triangle.
\item{Longitudes at multiples of $\pi/3$:} Isosceles triangles.
\item{The Equator:} Singular triangles.
\item{Inner Spherical Triangle:} Acute triangles.
\item{The six fold symmetries:} The six permutations of unequal $a,b,c$.
\item{Where is the lower hemisphere?:} Replace $\det M$ with $-\det M$. Every ordered triangle is ``double
covered.'' A signed SVD with rotations $U$ and $V$ would work well.
\end{itemize}
\section{The Normal Distribution generates random shapes}
Section 2 concentrated on the relationship between triangles,
the hemisphere and disk, and $2\times2$ matrices through the SVD.
Nothing was random. Here in Section 3,
randomness comes into its own with the normal distribution as a natural structure.
\subsection{The Normal Distribution}
The normal distribution has a very special place in mathematics.
Several decades of research into Random Matrix Theory have shown that exact analytic formulas
are available for important distributions when the matrix entries arise from independent
standard Gaussians. Non-Gaussian entries rarely enjoy the same beautiful properties for finite matrices,
though recent ``universality" theorems show convergence for infinite random matrix theory.
We might anticipate, if only by analogy, that random triangles arising from independent
Gaussians also enjoy special analytic properties not shared by other distributions.
Section 3.2 reveals that random triangles generated
from a Gaussian random matrix are uniformly distributed on the hemisphere.
\vspace{.1in}
{\bf Four Gaussian degrees of freedom from six:}
Figure 2 illustrates 1000 random triangle shapes.
As we saw in Section 1.3, six independent standard Gaussians may be used
to describe the $2\times3$ array of vertex coordinates with six degrees of freedom.
Nonetheless, if we translate the centroid to the origin, or a vertex to the origin, or use the $2 \times 3$ edge matrix $E$,
or the matrix $M_e$ from the edge viewpoint or the matrix $M_v$ from the vertex viewpoint, only
four degrees of freedom remain.
Consider the $2 \times 3$ matrices $T$, the origin-centered vertex matrix, and $E$, the edge matrix.
Up to scaling, both of these matrices consist of elements that are standard normals conditioned on a zero column sum.
This may be argued or computed. The argument is that linear combinations of Gaussians remain Gaussian, and the symmetry of the situation requires
that no one vertex or edge can be special.
One can check readily that centering six standard normals
at the origin produces two independent rows each with the singular covariance matrix
$$\Delta^T \! \Delta=\frac{1}{3}
\left(
\begin{array}{rrr}
2 &-1 &-1 \\
-1& 2 &-1 \\
-1 &-1& 2
\end{array}
\right) .
$$
The matrix $\Delta^T \! \Delta$ is the projection matrix onto the plane $x+y+z=0$.
The edge matrix $E$ obtained by taking differences of columns can be readily verified to have
covariance matrix $3\Delta^T \! \Delta. $ Since shape discards scale, the factor of $3$
is irrelevant to this paper, and we often think of $T$ and $E$ as providing dual views of the
same construction.
If one forms the matrix $M_v=T\Delta^T$, the covariance matrix for each row of two elements is now $ \Delta (\Delta^T \! \Delta) \Delta^T = I_2.$
We then obtain the nice conclusion that $M_v$ consists of four independent standard normals. Similarly, $M_e$ before scaling
consists of four independent normals with variance $3$.
Once again, scaling is irrelevant to shape-theory results, which justifies our normalization of choosing $M$ to have sum of squares $1$.
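The covariance claim above can be checked in a few lines. Here is a minimal Python sketch (standard library only, exact rational arithmetic) verifying that the $3 \times 3$ centering matrix equals the stated projection $\Delta^T\!\Delta$ and is idempotent:

```python
# Centering a length-3 Gaussian row amounts to multiplying by C = I - J/3;
# the covariance of the centered coordinates is then C*I*C^T = C, which should
# equal (1/3)*[[2,-1,-1],[-1,2,-1],[-1,-1,2]], the projection onto x+y+z = 0.
from fractions import Fraction

n = 3
C = [[Fraction(int(i == j)) - Fraction(1, n) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

target = [[Fraction(2, 3), Fraction(-1, 3), Fraction(-1, 3)],
          [Fraction(-1, 3), Fraction(2, 3), Fraction(-1, 3)],
          [Fraction(-1, 3), Fraction(-1, 3), Fraction(2, 3)]]

assert C == target                       # matches the displayed matrix
assert matmul(C, C) == C                 # idempotent: an orthogonal projection
assert all(sum(row) == 0 for row in C)   # its range is the plane x+y+z = 0
```

The same computation with the edge matrix (differences of columns) yields covariance $3\Delta^T\!\Delta$, the factor of $3$ being the scale noted above.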
\subsection{Uniform distribution on the hemisphere}
\begin{prop}
These equivalent representations describe
the uniform measure on the hemisphere with radius $1/2$:
\end{prop}
\begin{tabular}{|l|l|} \hline
Longitude &
$\phi$ uniform on $[0,2\pi)$ is independent of:
\\ \hline \hline
Height & uniform on $[0,1/2]$
\\ \hline
Latitude &
density $\rule{0in}{0.15in}\cos(\lambda)d\lambda \mbox{ on } [0,\pi/2]$
\\ \hline
$r$ & density
$\rule{0in}{0.15in}
4r(1-4r^2)^{-1/2}dr \mbox{ on } [0,1/2] $
\\ \hline
$|\det M|$ & uniform on $\rule{0in}{0.15in}[0,1/2]$ \\ \hline
Triangle area & uniform on $\rule{0in}{0.15in} [0,1/\sqrt{48}]$ \\ \hline
\end{tabular}
\vspace{.15in}
{\bf Proof:} The latitude and longitude joint density $\cos(\lambda)d\lambda d\phi$ is the familiar volume element on the sphere,
where latitude $\lambda$ is measured from the equator rather than the zenith.
Archimedes of Syracuse (c.~287--212 BC) knew the height formula
through his hat box theorem: on a sphere, the surface area of a zone is proportional to its height.
The area statement comes from the formula of Hero (or Heron) of Alexandria (c.~10--70 AD), believed to have been known already to
Archimedes centuries earlier.
\begin{lem}
{\bf Exponential distributions} The sum of squares of two independent standard normals ($\chi_2^2$)
is exponentially distributed with $\mbox{density } \frac{1}{2}e^{-x/2}$. If $e_1,e_2$ are independent
random variables with identical exponential distribution then $e_1/(e_1+e_2)$ is uniformly
distributed on $[0,1]$.
\end{lem}
{\bf Proof:} These facts are well known. The generalization to $n$ exponentials
is a popular way to generate uniform samples $x_i=e_i/\sum e_i$ on the simplex $\sum_{i=1}^n x_i=1, x_i\ge 0$
(cf. \cite[Proposition 1]{Portnoy94}).
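Both facts in the lemma are easy to spot-check by simulation. A Monte Carlo sketch in Python (standard library only; the seed and tolerances are our arbitrary choices, not from the text):

```python
# Sketch: (1) the sum of squares of two independent standard normals has the
# exponential density e^{-x/2}/2, hence mean 2; (2) e1/(e1+e2) for iid
# exponentials is uniform on [0,1], hence mean 1/2 and second moment 1/3.
import random

random.seed(1)
N = 200_000

chi2 = [random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2 for _ in range(N)]
mean_chi2 = sum(chi2) / N                   # should be near 2

e1 = [random.expovariate(1.0) for _ in range(N)]
e2 = [random.expovariate(1.0) for _ in range(N)]
ratio = [a / (a + b) for a, b in zip(e1, e2)]
mean_ratio = sum(ratio) / N                 # uniform mean: 1/2
m2_ratio = sum(r * r for r in ratio) / N    # uniform second moment: 1/3

assert abs(mean_chi2 - 2.0) < 0.05
assert abs(mean_ratio - 0.5) < 0.01
assert abs(m2_ratio - 1 / 3) < 0.01
```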
\vspace{.15in}
We now turn to the beautiful result known to David Kendall and his collaborators:
\begin{thm}
Triangles generated from six independent normals (the $x,y$ coordinates of three vertices or edges) correspond
to points distributed uniformly on the hemisphere.
\end{thm}
The corresponding $M$ is a $2 \times 2$ matrix of independent standard normals.
{\bf Proof 1 (Exponentials):}
We first consider the height:
$$\det M/\|M\|_F^2 =\frac{ad-bc}{a^2+b^2+c^2+d^2}=
\frac{
\left( \left( \frac{a+d}{\sqrt{2}} \right)^2+
\left( \frac{b-c}{\sqrt{2}} \right)^2 \right) -
\left( \left( \frac{a-d}{\sqrt{2}} \right)^2+
\left( \frac{b+c}{\sqrt{2}} \right)^2 \right) }
{2\left( \left( \frac{a+d}{\sqrt{2}} \right)^2+
\left( \frac{b-c}{\sqrt{2}} \right)^2 +
\left( \frac{a-d}{\sqrt{2}} \right)^2+
\left( \frac{b+c}{\sqrt{2}} \right)^2 \right)
}
. $$
The numerator is the difference
between two exponentially distributed random variables
(each being the sum of squares of two independent standard normals).
Then by Lemma 5,
$$\frac{e_1-e_2}{2(e_1+e_2)}=
\frac{e_1}{e_1+e_2}-\frac{1}{2}$$ is uniform on $[-1/2,1/2]$.
The absolute value is then uniform on $[0,1/2]$ as desired.
If we now turn to the ``longitude,"
the right singular vectors provide the uniform
distribution owing to the right orthogonal invariance of $M$.
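A quick simulation of Proof 1 in Python (standard library only; seed and tolerances are arbitrary):

```python
# Sketch: det(M)/||M||_F^2 for a 2x2 matrix of iid standard normals should be
# uniform on [-1/2, 1/2]; we check the deterministic bound, the mean (0), and
# the second moment (1/12).
import random

random.seed(2)
N = 200_000
vals = []
for _ in range(N):
    a, b, c, d = (random.gauss(0, 1) for _ in range(4))
    vals.append((a * d - b * c) / (a * a + b * b + c * c + d * d))

assert all(-0.5 <= v <= 0.5 for v in vals)   # |det M| <= ||M||_F^2 / 2 always
mean_v = sum(vals) / N
m2_v = sum(v * v for v in vals) / N
assert abs(mean_v) < 0.01                    # uniform on [-1/2,1/2]: mean 0
assert abs(m2_v - 1 / 12) < 0.005            # ... second moment 1/12
```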
\vspace{.1in}
{\bf Proof 2 (Random Matrix Theory and condition numbers):}
The singular values $\sigma_1,\sigma_2$ and singular vectors $V$ for $M$=\verb+randn(2,2)+ are independent. $V$ is a rotation with angle uniformly distributed on $[0,\pi).$
The distribution of the condition number $\kappa$ is important. It may be found in
\cite[Eq. 2.1]{edelman88a} or \cite[Eq. 14]{edelman89a}.
Restated, the probability density of $\kappa$ for a random $2\times2$ matrix of iid normals is $-2\tfrac{d}{dx}(x+x^{-1})^{-1}$. Equivalently, the distribution of
$
\frac{\sigma_{1}\sigma_{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}=(\kappa+\kappa^{-1})^{-1}
$
is \textbf{uniform} on $[0,1/2]$.
\vspace{.1in}
{\bf Proof 3 (Hopf Map):}
We know that $M$ is uniformly distributed on the sphere $1=M_{11}^2+M_{21}^2+M_{12}^2+M_{22}^2.$
We seek to explain why Hopf($M$) is uniform on the sphere $1=x^2+y^2+z^2$.
The mathematician's favorite proof would have the latter inherited from
the former. We want uniformity of Hopf$(M)$ to be a shadow of uniformity of $M$, but the Hopf map is not
a linear projection.
For any 3$\times$3 rotation matrix $Q_3$, we constructed in Section 2.5.6 a
4$\times$4 rotation matrix $Q_4$, such that
$$\mbox{Hopf}(Q_4M) = Q_3 \mbox{Hopf}(M),$$
for any fixed matrix $M$.
If $M$ is random and uniformly distributed on
$1=M_{11}^2+M_{21}^2+M_{12}^2+M_{22}^2,$ then of course $Q_4M$
is too.
What does this say about the distribution of Hopf($M$)?
For any choice of $Q_3$, the distribution of $Q_3\mbox{Hopf}(M)$
is the distribution of Hopf($Q_4M$), which is the same as the distribution of Hopf($M$).
The distribution of $\mbox{Hopf}(M)$ is invariant
under any fixed 3$\times$3 rotation.
It must be the uniform distribution on $x^2+y^2+z^2=1$.
We encourage the reader to follow this -- it is truly a beautiful argument.
\subsection{What is the probability that a random triangle is acute or obtuse\,?}
Put Lewis Carroll's question in the context of the normal distribution. We then obtain the result
known to Portnoy and Kendall and collaborators:
\begin{thm}
A random triangle from the uniform distribution in shape space has squared side lengths uniformly distributed on $[0,2/3]$.
The probability
is $1/4$ that this triangle is acute.
\end{thm}
{\bf Proof:}
The side lengths of the triangle are the norms of the three columns of $M\Delta$. Taking the third column
for convenience, the squared side length is $\|\Delta_3\|^2 = 2/3$ times $(M_{21}^2+M_{22}^2)$.
By Lemma 5, this is the uniform distribution on $[0,2/3]$.
Edges that satisfy $a^{2}+b^{2}+c^{2}=1$ give a right triangle when $c^{2}=1/2$. The triangle is obtuse when $c^{2}>1/2$, which makes $c$ the longest side. The probability that a particular angle is obtuse is then $(2/3-1/2)/(2/3)=1/4$. The probability that any angle is obtuse is then $3/4$ (at most one can be obtuse!). Then $1/4$ is the probability that all are acute.\qed
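The $1/4$ answer is easy to spot-check by simulation; a Python sketch (standard library only; seed and tolerance are our choices):

```python
# Sketch: three iid Gaussian vertices in the plane; the triangle is acute iff
# the largest squared side is less than the sum of the other two.
import random

random.seed(3)
N = 200_000
acute = 0
for _ in range(N):
    pts = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(3)]
    sq = sorted((pts[i][0] - pts[j][0]) ** 2 + (pts[i][1] - pts[j][1]) ** 2
                for i, j in ((0, 1), (1, 2), (2, 0)))
    acute += sq[2] < sq[0] + sq[1]
p_acute = acute / N
assert abs(p_acute - 0.25) < 0.01   # probability of an acute triangle: 1/4
```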
\subsection{Triangles in $n$ dimensions}
There is an obvious generalization of Figure 5 to higher dimensions. Let $M$ be a random matrix of independent
standard normals with $n$ rows and $2$ columns. Then $M\Delta$ is a random triangle shape in $n$ dimensions.
We can think of this as random vertices centered at the origin, or random edges that close to a proper triangle.
There is a further generalization: the same argument as for the plane shows that a squared side length has the distribution $(2/3)\,\mathrm{Beta}(n/2,n/2)$. In $\mathbb{R}^{n}$ the probability of an obtuse triangle is
$3(1-I(\frac{3}{4},\frac{n}{2},\frac{n}{2}))$ where $I$ denotes the regularized incomplete beta function. This can be evaluated in MATLAB by
$$\mbox{\texttt{3{*}(betainc(3/4,n/2,n/2,'upper'))}}$$ or in Mathematica
by $$\mbox{\texttt{3{*}(1-BetaRegularized[3/4, n/2, n/2])}.}$$
This probability is also computed by Eisenberg and Sullivan
\cite{eisenberg96}. They note that for larger $n$, it is increasingly likely that a random triangle is acute. The probability of an obtuse triangle is $10\%$ in $\mathbb{R}^{12}$, and $1\%$ in $\mathbb{R}^{26}$. It is nearly $0.1\%$ in $\mathbb{R}^{40}$.
For arbitrary $n$ the distribution is no longer uniform on the hemisphere.
Connecting multivariate statistics and numerical
analysis, the same result is innocently hidden as an exercise
in Wishart matrix theory. It is an unlikely connection. This ``square root ellipticity statistic'' may be found in Exercise~$8.7(\mbox{b})$ of
\cite[p.379]{Muirhead82a}. It states that $P\left(\frac{2\sigma_{1}\sigma_{2}}{\sigma_{1}^{2}+\sigma_{2}^{2}}<x\right)=x^{n-1}$. As $n$ increases, this reduces the probability near the equator and adds weight near the poles. Acute triangles become more
probable.
\subsection{An experimental investigation that revealed the distribution's {}``shadow''}
Early in our own investigation, we plotted the normalized squares of $\mathtt{randn(2,2)}*\Delta$ as barycentric coordinates. We found very quickly that the probability of an acute triangle is $1/4$, and triangles naturally fill a hemisphere with a uniform distribution. Here is MATLAB code and a picture that tell so much in so little space that we could not resist sharing. A few interesting things to note\,: Line $10$ projects the three normalized squared edges to $\mathbb{R}^{2}$ using $\Delta\transp$. Line 9 compares $a^{2}+b^{2}$ to $c^{2}$ to decide acute or obtuse. (This is more efficient than computing angles.) Ten thousand trials gave $25.37\%$ acute triangles. Ten million trials gave $24.99\%$ acute triangles.
Readers may wish to recreate this road to discovery of the uniform
distribution on the hemisphere\,: Histogram the radii of the points, guess the measure, and then realize the uniformity. We guessed the uniformity by fitting the density $f(x)=4x/\sqrt{1-4x^{2}}=-\frac{d}{dx}\sqrt{1-4x^{2}}$ which is the
shadow of the uniform distribution on the hemisphere (differential form
of Archimedes' hat box theorem). Then came proofs.
\vspace{-.17in}
\begin{figure}[H]
\includegraphics[scale=1.0]{mc1}
\includegraphics[scale=.4]{fewpoints}
\caption[]{Random $a^2,b^2,c^2$ are computed on line $5$ and normalized on line $6$. The obtuse triangles are identified on line $9$. Line $10$ projects the plane $x+y+z=1$ onto $\mathbb{R}^2$. Line $11$ plots every obtuse triangle as a point in this plane. The acute triangles form the inner triangle as in Figure $3$ and the obtuse triangles complete a disk of radius $1/2$. The hemisphere is viewed from above.}
\end{figure}
\vspace{-.3in}
\begin{figure}[H]
\includegraphics[scale=1.0]{mc2}
\includegraphics[scale=0.27]{thehist}
\vspace{0.2in}
\caption[]{Line $12$ computes the sample probability of acute triangles. Lines $13$ through $18$ histogram all the radii against the guess (curved line) that the points are uniformly distributed on the hemisphere.}
\end{figure}
\subsection{Uniform shapes versus uniform angles}
Two uniform distributions, on the hemisphere and on angle space $A+B+C=180^{\circ}$, gave the same fraction $\frac{3}{4}$ of obtuse triangles. We are not aware of a satisfying theoretical link. Portnoy
\cite[Section 3]{Portnoy94} philosophizes about {}``the fact that the answer $3/4$ arises so often''. To emphasize the difference between these distributions, we report on a numerical experiment and a theoretical density computation to understand the angular distribution.
Our first figure might be called $100,\!000$ triangles in $100$ bins. The three angles divided by $\pi$ are barycentric coordinates in the
figure. With four bins the triangles would appear uniform. With $100$ bins we see that they are anything but. A uniform distribution would have 1000 triangles per bin.
\begin{figure}[H]
\begin{center}
$100,\!000$ triangles in $100$ bins
\end{center}
\begin{center}
\includegraphics[width=300pt]{trianglehist}
\end{center}
\caption[]{$100,\!000$ triangles in $100$ bins\,: Triangles selected uniformly in shape space are not uniform in angle space.}
\end{figure}
The underlying theoretical distribution involves
the Jacobian from angle space to the ``squared edges'' disk. Then
one must apply a second Jacobian to the hemisphere, and finally
invert. Since the second part is standard, we will sketch the first
part of the argument.
Suppose $\alpha+\beta+\gamma=1$. Considering
the Law of Sines, define
\[
s=\left(\begin{array}{c}
a^{2}\\
b^{2}\\
c^{2}\end{array}\right)=\frac{1}{\sigma}\left(\begin{array}{c}
\sin^2 \pi\alpha\\
\sin^2 \pi\beta\\
\sin^2 \pi\gamma \end{array}\right),\]
with $\sigma=\sin^2 \pi\alpha+\sin^2 \pi\beta+\sin^2 \pi\gamma$.
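As a quick numerical sanity check of this map, the law of sines gives $a^2 : b^2 : c^2 = \sin^2 \pi\alpha : \sin^2 \pi\beta : \sin^2 \pi\gamma$. A Python sketch on a hypothetical triangle (coordinates chosen only for illustration):

```python
# Sketch: for any triangle, normalized squared side lengths equal normalized
# squared sines of the opposite angles (law of sines: a = 2R sin A).
import math

V = [(0.0, 0.0), (4.0, 0.0), (1.0, 2.0)]   # hypothetical vertices

def side(i, j):
    return math.dist(V[i], V[j])

# side a is opposite vertex A, etc.; angles recovered by the law of cosines
a, b, c = side(1, 2), side(0, 2), side(0, 1)
A = math.acos((b * b + c * c - a * a) / (2 * b * c))
B = math.acos((a * a + c * c - b * b) / (2 * a * c))
C = math.pi - A - B

sig = math.sin(A) ** 2 + math.sin(B) ** 2 + math.sin(C) ** 2
sq = a * a + b * b + c * c
for s_len, ang in ((a, A), (b, B), (c, C)):
    assert abs(s_len ** 2 / sq - math.sin(ang) ** 2 / sig) < 1e-12
```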
With some calculus we can show that\[
ds=\pi J
\left(\begin{array}{c}
d\alpha\\
d\beta\\
d\gamma\end{array}\right),\]
with $J=
\left(\frac{\mbox{diag}(p)}{\sigma}-\frac{sp\transp }{\sigma^{2}}\right)$
and
$
p=\left(\begin{array}{c}
\sin2\pi\alpha\\
\sin2\pi\beta\\
\sin2\pi\gamma\end{array}\right) .
$
The Jacobian determinant that transforms the angle space to
squared edge space is proportional to $\det\left( \Delta J \Delta\transp \right)$.
The remaining step, omitted here, is the Jacobian to the hemisphere.
As the experiment begins with triangles uniform on the hemisphere, it is the inverse Jacobian
that we use in our plot.
Figure 12 is a Monte-Carlo experiment. Figure 13 shows
the theoretical Jacobian.
\begin{figure}[H]
\begin{center}
\includegraphics[width=250pt]{angledist}
\end{center}
\caption[]{Angle density reveals the nonuniformity of angles, when triangles are picked uniformly from shape space. The least likely angles are in black, cyan and yellow with $p<0.1$, $p<0.25,$ and $p<.4$. The most likely angles are in blue, magenta, and red with $p>0.6$, $p>0.75,$ and $p>0.90$. Green is $.4<p<.6$.}
\label{figure13}
\end{figure}
\section{Further Applications}
A corpus of pictures becomes a set of shapes, and we study a particular feature. A random choice then becomes a random matrix and the entirety of random matrix theory can be considered for shape applications.
We believe that the timing is right to realize the dream of shape
theory. In our favor are
\begin{itemize}
\item New theorems in random matrix theory that can immediately apply to shape theory
\item Modern computational power making shapes accessible (and Monte Carlo simulations)
\item The technology of multivariate statistical theory. We can compute hypergeometric functions of matrix argument and zonal polynomials \cite{koev06}. Muirhead \cite{Muirhead82a} was $30$ years ahead of his time.
\end{itemize}
There is much room for research in this area. In two dimensions, the exact condition number distribution provides a sphericity test of whether the shapes come from the standard normal distribution.
It is very possible that uniformity may not be a great measure for
real problems on shape space. Similar issues arise for random matrices
and random graphs. It is a mistake to think of a random object
as being {}``any old object.'' Usually random objects
have special properties of their own, much like a special matrix,
or graph, or shape.
Here are random shapes and convex hulls.
The general technique is to multiply a matrix of standard normals by the matrix $\Delta_n$ derived from the Helmert matrix.
It is a bit further afield (but interesting) to ask for the average number of edges of the convex hull. For literature in this direction see
\cite[Chapter 8]{schneider08}.
\begin{figure}[H]
\begin{center}
$
\begin{array}{l}
k=3:\\[.73in]
k=5:\\[.73in]
k=20:\\[.73in]
k=100: \\ [3.2in]
\end{array}
\ $
\includegraphics[scale=1.2]{tenrandomshapes}
\end{center}
\vspace{-3in}
\caption[]{Ten random shapes in $2$d taken from the uniform shape distribution with $k=3,5,20,100$ points.}
\end{figure}
\vspace{-0.17in}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=.74]{figure15}
\end{center}
\vspace{-0.3in}
\caption[]{ Random tetrahedra in $3$d ($m=3$ and $k=4$)}
\end{figure}
\vspace{-0.132in}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=.95]{figure18}
\end{center}
\vspace{-0.32in}
\caption[]{Random Gems: Convex hulls of 100 random points in $3$d ($m=3$ and $k=100$)}
\label{fig:gems}
\end{figure}
\subsection{Tests for Uniformity of Shape Space}
An object formed from $k$ points in $\mathbb{R}^{m}$ may be encoded in
an $m\times k$ matrix $X$. The shape is therefore encoded in an $m\times(k-1)$ \emph{preshape matrix} $Z=X\Delta_{k}\transp /\|X\Delta_{k}\transp \|_F,$ where $\Delta_{k}$ is the Helmert matrix in equation (\ref{eq:helmert}). The uniform distribution on shapes may be thought of as the distribution obtained when
\[
X=\mbox{\texttt{\textbf{randn(m,k-1)}}}*\Delta_{k},
\]
so that $X$ is the product of an $m\times(k-1)$ matrix
of iid standard Gaussians and $\Delta_{k}$. In the preshape, $Z$ is normalized by its own Frobenius norm $\sqrt{\sum Z_{ij}^2}$:
\[
Z=\mbox{\texttt{\textbf{randn(m,k-1)}}}/\|\cdot\|_{F}.
\]
Figure 15 plots random tetrahedra in this way.
In \cite{chikuse04}, Chikuse and Jupp propose a statistical test
on $Z$ for uniformity. From samples $Z_{1},\ldots,Z_{t}$ they
calculate
\[
S=\frac{(k-1)(m(k-1)+2)}{2}\,t\,\operatorname{trace}\left( \left\{ \frac{1}{t}\sum_{i=1}^{t} Z_i\transp Z_i-\frac{1}{k-1}I_{k-1}\right\} ^{2}\right).
\]
As $t\rightarrow\infty$ they approximate $S$ with $\chi_{(k-1)(k+2)/2}^{2}$. Corrections are proposed for finite $t$ as well.
Other tests are easy to construct. For example, much is known about the smallest singular value of the random matrix $Z$. When $Z$ is square, the density of $x = 1/\sigma_{\min}(Z)$ is, for $x \geq \sqrt{m}$, exactly
\[
\frac{2m\Gamma(\frac{m+1}{2})\Gamma(\frac{m^{2}}{2})}{\sqrt{\pi}\Gamma(\frac{m(m+1)}{2}-1)}x^{1-m^{2}}(x^{2}-m)^{\frac{m(m+1)}{2}-2}\,{}_{2}F_{1}\left(\frac{m-1}{2},\frac{m}{2}+1;\frac{m^{2}+m}{2}-1;-(x^{2}-m)\right).
\]
Non-square cases can also be handled by a combination of known techniques.
With the exact density, one can compute the smallest singular value of the samples and perform goodness-of-fit tests (such as Kolmogorov--Smirnov). Broadly speaking, one might choose to test for the orthogonal invariance of the singular vectors. Since $Z\cdot\chi_{m(k-1)}$ is normally distributed, tests based on Wishart matrices are natural.
\subsection{Northern Hemisphere Map}
Perhaps because we had the technology,
we mapped the northern hemisphere (the shape theory hemisphere) to angle space (Figure 17).
Barycentric coordinates correspond to the angles divided by $\pi$.
The resulting picture appears in the figure that follows; the middle
triangle consists of the {}``acute'' points.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.7]{northern}
\end{center}
\caption[]{Northern hemisphere as triangles in shape space, mapped to {}``angle'' space.}
\end{figure}
We would like to thank Wilfrid Kendall, Mike Todd, and Eric Kostlan for their insights.
The first author acknowledges NSF support under DMS 1035400 and DMS 1016125.
The second author acknowledges NSF support under EFRI 1023152.
\newpage{}
\bibliographystyle{plain}
% arXiv:1501.03053, "Random Triangle Theory with Geometry and Applications"
% https://arxiv.org/abs/2204.02201
\title{On the size distribution of Levenshtein balls with radius one}
\begin{abstract}
The fixed length Levenshtein (FLL) distance between two words $\mathbf{x,y} \in \mathbb{Z}_m^n$ is the smallest integer $t$ such that $\mathbf{x}$ can be transformed to $\mathbf{y}$ by $t$ insertions and $t$ deletions. The size of a ball in the FLL metric is a fundamental but challenging problem. Very recently, Bar-Lev, Etzion, and Yaakobi explicitly determined the minimum, maximum and average sizes of the FLL balls with radius one. In this paper, based on these results, we further prove that the size of the FLL balls with radius one is highly concentrated around its mean by Azuma's inequality.
\end{abstract}
\section{Introduction}
The \emph{Levenshtein distance} ({also known as {\em edit distance}}) between two words is the smallest number of deletions and insertions needed to transform one word to {the other}.
This is a metric used for codes correcting synchronization errors. The theory of coding with respect to the Levenshtein distance dates back to the 1960s~\cite{levenshtein1966bianry}, but it has seen much less progress in comparison.
{As commented by Mitzenmacher~\cite{MR2525669}}, \textit{``Channels with synchronization errors, including both insertions and deletions as well as more general timing errors, are simply not adequately understood by current theory. Given the near-complete knowledge we have for channels with erasures and errors \ldots our lack of understanding about channels with synchronization errors is truly
remarkable.''} Indeed, even the fundamental problem of counting the number of words formed {by} deleting and inserting symbols into a word remains elusive.
In 1966, Levenshtein~\cite{levenshtein1966bianry} gave the earliest bounds on the number of words formed by deleting a constant number of symbols from a word.
This bound was later improved by Calabi and Hartnett~\cite{calabi1969}; also by Hirschberg and Regnier~\cite{MR1856838}. On the other hand, the number of words formed {by} inserting $r$ symbols into $\x \in \mathbb{Z}_m^n$ does not depend on $\x$ itself, and was {given by $\sum_{i=0}^{r}\binom{n+r}{i}(m-1)^i$~\cite{levenshtein1966bianry}}.
Motivated by estimating the rate of synchronization error-correction codes, Sala and Dolecek~\cite{sala2013} studied the number of words formed {by} deleting and inserting a constant number of symbols into a given word.
{The \emph{fixed length Levenshtein (FLL) distance} (originally under the name ``\emph{ancestor distance}'') between two words $\x, \y \in \mathbb{Z}_m^n$ is defined} as half of the traditional Levenshtein distance, i.e., the
smallest $t$ such that $\x$ can be transformed to $\y$ by $t$ insertions \emph{and} $t$ deletions. {An explicit expression was given on the FLL ball size with radius one (see Lemma~\ref{lemma: sala}), and the size of the FLL balls with radius larger than one is bounded in~\cite{sala2013}.} Very recently, Bar-Lev, Etzion, and Yaakobi~\cite{levball2021} found the explicit expressions for the minimum, the maximum and the average sizes of {the} FLL balls with radius one, respectively.
A natural {follow-up question is how the size of the FLL balls with radius one is distributed.}
In this paper, we prove {that the size of the} FLL balls with radius one is highly concentrated around its mean by Azuma's inequality (see Theorem~\ref{thm: binary distribution} and Theorem~\ref{thm: m-ary distribution}).
The rest of this paper is organized as follows.
In Section~\ref{sec: preliminaries}, we provide {some notations, definitions, and auxiliary results}.
In Section~\ref{sec: distribution}, we analyze the size distribution of {the} FLL balls with radius one, and state the main {results} in Theorem~\ref{thm: binary distribution} and Theorem~\ref{thm: m-ary distribution}.
Finally, we conclude this paper with possible future directions in Section~\ref{sec: conclusion}.
\section{Preliminaries}\label{sec: preliminaries}
Let $\mathbb{Z}_m = \{0,1,\dots,m-1\}$ for $m \ge 2$.
We use $[n]$ to denote the set $\{1,\dots,n\}$.
In this work, vectors in $\mathbb{Z}_m^n$ are called $m$-ary sequences (words) with length $n$, and {are written} as strings for convenience.
{For a word $\x = x_1, \dots, x_n \in \mathbb{Z}_m^n$ and $1 \leq i \le j \le n$}, the subsequence $x_i, x_{i+1}, \dots, x_j$ is denoted by $\x_{[i,j]}$.
A \emph{run} of $\x$ is a maximal subsequence $\x_{[i,j]}$ such that $x_i = x_{i+1} = \dots = x_j$; that is, $x_{i-1} \ne x_i$ and $x_{j+1} \ne x_j$ whenever these neighboring symbols exist.
An \emph{alternating segment} of $\x$ is a \emph{maximal} subsequence $\x_{[i,j]}$ with the form $abab\dots ab$ or $abab\dots ba,$ where $a \ne b$ and $a,b \in \mathbb{Z}_m$.
{Note that by `maximal'} we mean $x_{i-1} \ne b$ and $x_{j+1} \ne x_{j-1}$.
The number of runs in $\x$ is denoted by $\rho(\x)$, and the number of alternating segments in $\x$ is denoted by $a(\x)$.
For each $\x \in \mathbb{Z}_2^n$, we have $\rho(\x) + a(\x) = n + 1$.
The lengths of the first and last alternating segments in a word $\mathbf{x}$ are of particular interest in this work and denoted by {$h(\x)$ and $t(\x)$}, respectively.
\begin{example}
Let $\x = 01100101$, {then} the runs of $\x$ are $0, 11, 00, 1, 0, 1$ and $\rho(\x) = 6$.
The alternating segments of $\x$ are $01, 10, 0101$ and $a(\x) = 3$, $h(\x) = 2$, $t(\x) = 4$.
\end{example}
\begin{lemma}\label{lemma: exp h and t}
Let $n>0$, $m>1$ be integers{, then we} have
\begin{equation}\label{equ: exp h}
\underset{\x \in \mathbb{Z}_m^n}{\mathbb{E}}\left[ h(\x) \right] = \underset{\x \in \mathbb{Z}_m^n}{\mathbb{E}}\left[ h(\x) \vert x_1 \right] = 2 - \frac{1}{m^{n-1}},
\end{equation}
\begin{equation}\label{equ: exp t}
\underset{\x \in \mathbb{Z}_m^n}{\mathbb{E}}\left[ t(\x) \right] = \underset{\x \in \mathbb{Z}_m^n}{\mathbb{E}}\left[ t(\x) \vert x_n \right] = 2 - \frac{1}{m^{n-1}}.
\end{equation}
\end{lemma}
\begin{proof}
We prove Eq.~\eqref{equ: exp h}, and Eq.~\eqref{equ: exp t} then follows by symmetry.
Let $\x \in \mathbb{Z}_m^n$ be an arbitrary sequence.
Define the indicator variable $I_i$ for $i \in [n]$ as follows.
\[
I_i = \begin{cases}
1, & \text{ if $x_i$ belongs to the first alternating segment}, \\
0, & \text{ otherwise.}
\end{cases}
\]
Then we have $h(\x) = \sum_{i=1}^{n}I_i$. Also note that $\Pr(I_1 = 1) = 1$, and $ \Pr(I_i = 1) = (m-1)/m^{i-1}$ for $i > 1$. Therefore,
\[
\begin{aligned}
\E{h(\x)} = \sum_{i=1}^n\E{I_i} & = 1 + (m-1)\left(\frac{1}{m} + \dots + \frac{1}{m^{n-1}}\right) \\
& = 1 + (m-1) \cdot \frac{1 - \frac{1}{m^{n-1}}}{m-1} \\
& = 2 - \frac{1}{m^{n-1}}.
\end{aligned}
\]
Intuitively, exposing the first symbol of a random $\x \in \mathbb{Z}_m^n$ {does not} provide any information about $h(\x)$ since $h(\x) \ge 1$ for all words.
Thus, we have $\E{h(\x)} = \E{h(\x) \middle \vert x_1}$.
{More precisely,} we define an equivalence relation ``$\approx$'' on $\mathbb{Z}_m^n$ as follows.
For $\x, \y \in \mathbb{Z}_m^n$, we say that $\x \approx \y$ if and only if $x_i \equiv y_i + k \mod{m}$ for each $i \in [n]$ and some integer $k$.
Clearly $h(\x) = h(\y)$ if $\x \approx \y$, and $\{a\} \times \mathbb{Z}_m^{n-1}$ contains {exactly} one element from each equivalence class.
It then follows that $\E{h(\x) \vert x_1 = a} = \E{h(\x) \vert x_1 = b}$ for all $a, b \in \mathbb{Z}_m$.
{Thus,} we have
\[
\E{h(\x)} = \sum_{a \in \mathbb{Z}_m} \frac{1}{m} \E{h(\x) \vert x_1 = a} = \E{h(\x) \vert x_1 = a},
\]
and the proof is then {completed}.
\end{proof}
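Since the state space is finite, Eq.~\eqref{equ: exp h} can also be confirmed by exhaustive enumeration for small parameters. A Python sketch (standard library only, exact rational arithmetic):

```python
# Sketch: compute the length of the first maximal alternating segment abab...
# of a word, average over all of Z_m^n, and compare with 2 - 1/m^(n-1).
from fractions import Fraction
from itertools import product

def first_alt_len(x):
    # extend while the prefix keeps the period-2 pattern a,b,a,b,...
    j = 1
    while j < len(x):
        ok = x[1] != x[0] if j == 1 else x[j] == x[j - 2]
        if not ok:
            break
        j += 1
    return j

for m, n in ((2, 6), (3, 4), (4, 3)):
    total = sum(first_alt_len(x) for x in product(range(m), repeat=n))
    assert Fraction(total, m ** n) == 2 - Fraction(1, m ** (n - 1))
```

For the example word $\x = 01100101$ the routine returns $h(\x) = 2$, matching Example 1.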
A sequence $\y \in \mathbb{Z}_m^{n-t}$ is called a \emph{$t$-subsequence} of $\x \in \mathbb{Z}_m^n$ if $\y$ is formed by deleting $t$ symbols from $\x$.
In other words, $\y = (x_{i_1}, x_{i_2}, \dots, x_{i_{n-t}})$, where {$1 \leq i_1 < i_2 < \dots <i_{n-t} \leq n$ and} $t \in [n-1]$.
Likewise, $\x$ is called a \emph{$t$-supersequence} of $\y$.
The set of all $t$-subsequences of $\x$ is called the deletion $t$-sphere centered at $\x$, and denoted by $D_t(\x)$.
The set of $t$-supersequences of $\x$ is called the insertion $t$-sphere centered at $\x$, and denoted by $I_t(\x)$.
The {fixed length} Levenshtein distance is formally defined as follows.
\begin{definition}[Fixed length Levenshtein distance]
The Fixed length Levenshtein (FLL) distance between two words $\x, \y \in \mathbb{Z}_m^n$ is the smallest $t$ such that $D_t(\x) \cap D_t(\y) \ne \emptyset$, and {is} denoted by $d_l(\x,\y)$.
\end{definition}
It is {easy} to see that $d_l(\x,\y) = t$ if and only if $t$ is the smallest integer such that $\x$ can be transformed to $\y$ by $t$ deletions and $t$ insertions.
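The definition is directly computable for short words. A brute-force Python sketch (exponential in $n$, for illustration only):

```python
# Sketch: d_l(x, y) is the smallest t such that the deletion t-spheres of x
# and y intersect; we enumerate all t-subsequences explicitly.
from itertools import combinations

def del_sphere(x, t):
    # all (n - t)-subsequences of x, as a set of tuples
    n = len(x)
    return {tuple(x[i] for i in range(n) if i not in S)
            for S in map(set, combinations(range(n), t))}

def fll_distance(x, y):
    assert len(x) == len(y)
    for t in range(len(x) + 1):
        if del_sphere(x, t) & del_sphere(y, t):
            return t

assert fll_distance((0, 1, 0, 1), (0, 1, 0, 1)) == 0
assert fll_distance((0, 1, 0, 1), (0, 1, 1, 0)) == 1   # one deletion + one insertion
```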
\begin{definition}[FLL ball]
{ For each word $\x \in \mathbb{Z}_m^n$, the \emph{FLL $t$-ball} centered} at $\x$ is defined by
\begin{equation*}
L_t(\x) \triangleq \{\y \in \mathbb{Z}_m^n \vert d_l(\x,\y) \le t \},
\end{equation*}
and $t$ is called the radius.
\end{definition}
{The following results on the size of the FLL ball were given in~\cite{sala2013,levball2021}, and will be useful later.}
\begin{lemma}~\cite[Theorem 1]{sala2013}\label{lemma: sala}
For all $\x \in \mathbb{Z}_m^n$,
\begin{equation}\label{equ: ball size}
|L_1(\x)| = \rho(\x) (mn - n - 1) + 2 - \frac{1}{2}\sum_{i=1}^{a(\x)} s_i^2 + \frac{3}{2}\sum_{i=1}^{a(\x)}s_i - a(\x),
\end{equation}
where $s_i$, {for} $1 \le i \le a(\x)$, is the length of the $i$-th alternating segment of $\x$.
\end{lemma}
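Lemma~\ref{lemma: sala} can be verified mechanically for small $m$ and $n$ by comparing Eq.~\eqref{equ: ball size} against a brute-force count of the ball. A Python sketch (standard library only; the segment-extraction routine follows the maximality convention of Section 2, under which consecutive segments of an $m$-ary word may share one symbol):

```python
from itertools import product

def d1(x):
    # deletion 1-sphere: all (n-1)-subsequences of x
    return {x[:i] + x[i + 1:] for i in range(len(x))}

def ball1_brute(x, m):
    # |L_1(x)| by definition: y is in the ball iff y = x or D_1(x), D_1(y) meet
    n, Dx = len(x), d1(x)
    return sum(1 for y in product(range(m), repeat=n)
               if y == x or (d1(y) & Dx))

def runs(x):
    return 1 + sum(x[i] != x[i - 1] for i in range(1, len(x)))

def alt_segment_lengths(x):
    lens, i, n = [], 0, len(x)
    while i < n:
        j = i + 1
        while j < n and x[j] != x[j - 1] and (j - i < 2 or x[j] == x[j - 2]):
            j += 1
        lens.append(j - i)
        # if the alternation broke with x[j] != x[j-1], the next segment
        # starts at x[j-1] (the segments overlap in one symbol for m > 2)
        i = j - 1 if (j < n and x[j] != x[j - 1]) else j
    return lens

def ball1_formula(x, m):
    n, s = len(x), alt_segment_lengths(x)
    return (runs(x) * (m * n - n - 1) + 2
            + (3 * sum(s) - sum(v * v for v in s)) // 2 - len(s))

for m, n in ((2, 4), (2, 5), (3, 3)):
    for x in product(range(m), repeat=n):
        assert ball1_formula(x, m) == ball1_brute(x, m)
```

For example, $|L_1(010)| = 7$ over $\mathbb{Z}_2^3$ and $|L_1(012)| = 17$ over $\mathbb{Z}_3^3$, by both routes.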
\begin{lemma}~\cite[Lemma 16]{levball2021} \label{lemma: partial expectation}
Let the notations be the same as above. For integers $m, n > 1$, we have
\begin{align*}
& \underset{\x \in \mathbb{Z}_{m}^{n}}{\mathbb{E}}\left[\sum_{i=1}^{a(\x)} s_{i}\right]=n+(n-2) \cdot \frac{(m-1)(m-2)}{m^{2}}, \\
& \underset{\x \in \mathbb{Z}_{m}^{n}}{\mathbb{E}}[a(\x)]=1+\frac{(n-2)(m-1)(m-2)}{m^{2}}+\frac{n-1}{m}, \\
& \underset{\x \in \mathbb{Z}_{m}^{n}}{\mathbb{E}}[\rho(\x)]=n-\frac{n-1}{m}, \\
& \underset{\x \in \mathbb{Z}_{m}^{n}}{\mathbb{E}}\left[\sum_{i=1}^{a(\x)} s_{i}^{2}\right]=\frac{n\left(4 m^{2}-3 m+2\right)}{m^{2}}+\frac{6 m-4}{m^{2}}-4-\frac{2}{m-1}\left(1-\frac{1}{m^{n-1}}\right).
\end{align*}
\end{lemma}
Lemma~\ref{lemma: sala} was proved by a careful argument based on the principle of inclusion and exclusion.
More precisely, $\rho(\x)(mn - n - 1) + 2$ is an overestimate of $|L_1(\x)|$, and Lemma~\ref{lemma: sala} was proved by subtracting the double counted sequences.
Note that Lemma~\ref{lemma: partial expectation} was stated originally without proof.
\begin{theorem}~\cite[Theorem 5]{levball2021}
For integers $m,n > 1$, we have
\begin{equation}\label{equ: expectation}
\underset{\x \in \mathbb{Z}_{m}^{n}}{\mathbb{E}}[|L_1(\x)|] = n^2 \left(m + \frac{1}{m} - 2\right) + 2 - \frac{n}{m} + \frac{m^{n-1} - 1}{m^{n-1}(m - 1)}.
\end{equation}
\end{theorem}
\begin{proof}
This follows from Eq.~\eqref{equ: ball size}, Lemma~\ref{lemma: partial expectation}, and linearity of expectation.
\end{proof}
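For small parameters the expectation can also be checked by exhaustive enumeration. A Python sketch with exact rational arithmetic (the closed form is written out inside the code):

```python
# Sketch: brute-force average of |L_1(x)| over all of Z_m^n, compared with the
# closed form n^2(m + 1/m - 2) + 2 - n/m + (m^(n-1) - 1)/(m^(n-1)(m - 1)).
from fractions import Fraction
from itertools import product

def d1(x):
    return {x[:i] + x[i + 1:] for i in range(len(x))}

def avg_ball1(m, n):
    words = list(product(range(m), repeat=n))
    total = 0
    for x in words:
        Dx = d1(x)
        total += sum(1 for y in words if y == x or (d1(y) & Dx))
    return Fraction(total, m ** n)

def closed_form(m, n):
    return (Fraction(n * n) * (Fraction(m) + Fraction(1, m) - 2) + 2
            - Fraction(n, m)
            + Fraction(m ** (n - 1) - 1, m ** (n - 1) * (m - 1)))

for m, n in ((2, 2), (2, 3), (2, 4), (3, 2)):
    assert avg_ball1(m, n) == closed_form(m, n)
```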
A {\em martingale} is a sequence of real random variables $Z_0, \dots, Z_n$ with finite expectation such that for each $0 \le i < n$,
\[
\E{Z_{i+1} \middle \vert Z_i, Z_{i-1}, \dots, Z_0} = Z_i.
\]
A classical construction, the \emph{Doob martingale} (see, for example,~\cite{MR3524748}), will be used in this paper. Let $X_1, \dots, X_n$ be the underlying random variables (not necessarily independent) and let $f$ be a function of $X_1, \dots, X_n$. The Doob martingale $Z_0, \dots, Z_n$ is defined by
\begin{equation*}
\begin{aligned}
Z_0 & = \E{f(X_1, \dots, X_n)}; \\
Z_i & = \E{f(X_1, \dots, X_n) \middle \vert X_1, \dots, X_i} \text{ for } i \in [n].
\end{aligned}
\end{equation*}
In other words, $Z_i$ is defined by the expected value of $f$ after exposing $X_1, \dots, X_i$.
The following classical {result~\cite{MR3524748}} plays a key role in {the proof of our main results.}
\begin{theorem}[Azuma's inequality]\label{thm: azuma}
Let $Z_0,Z_1,\dots,Z_n$ be a martingale such that for each $1 \le i \le n$,
\begin{equation*}
|Z_i - Z_{i-1}| \le c_i.
\end{equation*}
Then for every $\lambda > 0$, we have
\begin{equation*}
\Pr(Z_n - Z_0 \ge \lambda) \le \exp \left( \frac{-\lambda^2}{2(c_1^2 + \dots + c_n^2)}\right),
\end{equation*}
and
\begin{equation*}
\Pr(Z_n - Z_0 \le -\lambda) \le \exp \left( \frac{-\lambda^2}{2(c_1^2 + \dots + c_n^2)}\right).
\end{equation*}
\end{theorem}
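To make the inequality concrete, the following sketch (our own illustration, not from the paper) compares the Azuma bound with the exact tail of the simple $\pm 1$ random walk, which is a martingale with $c_i = 1$:

```python
from math import comb, exp

def walk_tail(n, lam):
    """Exact Pr(S_n >= lam) for S_n a sum of n independent +-1 steps."""
    # S_n = 2B - n with B ~ Binomial(n, 1/2), so S_n >= lam iff B >= (n+lam)/2.
    k_min = -(-(n + lam) // 2)  # ceiling division
    return sum(comb(n, k) for k in range(k_min, n + 1)) / 2 ** n

def azuma_bound(n, lam):
    """Azuma's bound with c_1 = ... = c_n = 1."""
    return exp(-lam ** 2 / (2 * n))

for lam in (4, 8, 12):
    assert walk_tail(20, lam) <= azuma_bound(20, lam)
```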
\section{The size distribution of the FLL balls with radius one}\label{sec: distribution}
In this section, we discuss the size distribution of the FLL balls with radius one. We start with the binary case and then deal with the $m$-ary case.
\subsection{The binary case} \label{sec: binary case}
Let $n, n'$ be positive integers. For each $\y \in \mathbb{Z}_2^{n'}$, define
\begin{equation}\label{equ: f_n}
f_n(\y) = \rho(\y)n - \frac{1}{2}\sum_{i = 1}^{a(\y)}s_i^2,
\end{equation}
where $s_i$ is the length of the $i$-th alternating segment of $\y$ for $1 \le i \le a(\y)$.
Note that $\sum_{i=1}^{a(\y)}s_i = n'$ for all $\y \in \mathbb{Z}_2^{n'}$, and $\rho(\y) + a(\y) = n'+1$.
Then by Eq.~\eqref{equ: ball size}, we have $|L_1(\x)| = f_n(\x) + \frac{n}{2} + 1$ for each $\x \in \mathbb{Z}_2^n$.
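This identity can be sanity-checked by brute force (our own sketch; all helper names are ours): enumerate the radius-one FLL ball of $\x$, that is, $\x$ together with every word reachable by one deletion followed by one insertion, and compare with the formula.

```python
from itertools import product

def runs(x):
    return 1 + sum(x[i] != x[i + 1] for i in range(len(x) - 1))

def seg_lengths(x):
    """Lengths s_1, ..., s_a of the maximal alternating segments of a binary word."""
    segs, start = [], 0
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:  # alternation breaks between positions i and i+1
            segs.append(i - start)
            start = i
    segs.append(len(x) - start)
    return segs

def L1_formula(x):
    n = len(x)
    f = runs(x) * n - sum(s * s for s in seg_lengths(x)) / 2  # f_n(x)
    return f + n / 2 + 1

def L1_brute(x):
    """Radius-one FLL ball: x itself plus one deletion followed by one insertion."""
    n, ball = len(x), {x}
    for i in range(n):
        y = x[:i] + x[i + 1:]
        for j in range(n):
            for b in (0, 1):
                ball.add(y[:j] + (b,) + y[j:])
    return len(ball)

for n in (4, 5, 6):
    assert all(L1_formula(x) == L1_brute(x) for x in product((0, 1), repeat=n))
```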
Therefore, it suffices to find the distribution of $f_n(\x)$ for $\x \in \mathbb{Z}_2^n$.
To this end, we need the following properties of $f_n(\y)$.
\begin{lemma}\label{lemma: f_n partition}
Let $n, n'$ be positive integers.
For each $i \in [n'-1]$ and $\y \in \mathbb{Z}_2^{n'}$, we have
\begin{equation*}
f_n(\y) =
\begin{cases}
f_n(\y_{[1,i]}) + f_n(\y_{[i+1,n']}) - n , & \text{if $y_i = y_{i+1}$},\\
f_n(\y_{[1,i]}) + f_n(\y_{[i+1,n']}) - t(\y_{[1,i]}) h(\y_{[i+1, n']}) , & \text{if $y_i \ne y_{i+1}$}.
\end{cases}
\end{equation*}
\end{lemma}
\begin{proof}
The key observation is that $f_n$ behaves almost additively when $\y$ is split into two parts.
The first case follows from the equation $\rho(\y) = \rho(\y_{[1,i]}) + \rho(\y_{[i+1,n']}) - 1$, since when $y_i = y_{i+1}$ the alternating segments of $\y$ are exactly those of $\y_{[1,i]}$ followed by those of $\y_{[i+1,n']}$.
When $y_{i} \ne y_{i+1}$, the difference of $f_n(\y)$ and $f_n(\y_{[1,i]}) + f_n(\y_{[i+1,n']})$ is given by
\[
\frac{1}{2} \left[ \left(t(\y_{[1,i]}) + h(\y_{[i+1, n']})\right)^2 - t(\y_{[1,i]})^2 - h(\y_{[i+1, n']})^2 \right] = t(\y_{[1,i]}) h(\y_{[i+1, n']}).
\]
The proof is then completed.
\end{proof}
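Since the lemma is a finite statement for each fixed $n'$, it can also be verified exhaustively; the following sketch (ours) does so for all binary words of length up to $7$, where $t(\cdot)$ and $h(\cdot)$ denote the lengths of the last and first alternating segments.

```python
from itertools import product

def runs(x):
    return 1 + sum(x[i] != x[i + 1] for i in range(len(x) - 1))

def seg_lengths(x):
    """Lengths of the maximal alternating segments of a binary word."""
    segs, start = [], 0
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:  # alternation breaks here
            segs.append(i - start)
            start = i
    segs.append(len(x) - start)
    return segs

def f(y, n):
    """f_n(y) = rho(y) * n - (1/2) * sum_i s_i^2."""
    return runs(y) * n - sum(s * s for s in seg_lengths(y)) / 2

n = 7
for n_prime in range(2, 8):
    for y in product((0, 1), repeat=n_prime):
        for i in range(1, n_prime):  # split between positions i and i+1
            u, v = y[:i], y[i:]
            if y[i - 1] == y[i]:
                assert f(y, n) == f(u, n) + f(v, n) - n
            else:
                t, h = seg_lengths(u)[-1], seg_lengths(v)[0]
                assert f(y, n) == f(u, n) + f(v, n) - t * h
```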
\begin{corollary}\label{corollary: f difference}
Let $n, n'$ be positive integers.
For all $\y \in \mathbb{Z}_2^{n'}$, we have
\begin{equation*}
f_n(\y) - f_n(\y_{[1,n'-1]}) =
\begin{cases}
-\frac{1}{2}, & \text{if } y_{n'-1} = y_{n'}, \\
n - \frac{1}{2} - t(\y_{[1, n'-1]}), & \text{if } y_{n'-1} \ne y_{n'}.
\end{cases}
\end{equation*}
\end{corollary}
\begin{proof}
Note that $f_n(y_{n'}) = n - \frac{1}{2}$ by the definition in Eq.~\eqref{equ: f_n}, and the result then follows from Lemma~\ref{lemma: f_n partition} by setting $i = n'-1$.
\end{proof}
Now we are ready to discuss the distribution of $|L_1(\x)|$ for uniformly distributed $\x \in \mathbb{Z}_2^n$. The case when $n \le 3$ is trivial, and we are more interested in the cases when $n$ is large.
\begin{theorem}\label{thm: binary distribution}
Let $n > 3$ be an integer and $x_1, \dots, x_n$ be independent random variables such that $\Pr(x_i = 0) = \Pr(x_i = 1) = \frac{1}{2}$ for $i \in [n]$. Then for the word $\x = x_1,\dots,x_n$, we have
\begin{equation}\label{equ: upper tail}
\Pr \left( {|L_1(\x)| - \underset{\x \in \mathbb{Z}_2^n}{\mathbb{E}} \left[ |L_1(\x)| \right] \ge c n \sqrt{n-1}} \right) \le e^{-2c^2},
\end{equation}
and
\begin{equation}\label{equ: lower tail}
\Pr \left( {|L_1(\x)| - \underset{\x \in \mathbb{Z}_2^n}{\mathbb{E}} \left[ |L_1(\x)| \right] \le - c n \sqrt{n-1}} \right) \le e^{-2c^2},
\end{equation}
where $\underset{\x \in \mathbb{Z}_2^n}{\mathbb{E}} \left[ |L_1(\x)| \right] = \frac{n^2}{2} - \frac{n}{2} - \frac{1}{2^n} + 3$, and $c$ is a positive constant.
\end{theorem}
\begin{proof}
We define the Doob martingale $Z_0 = \E{f_n(\x)}$, and $Z_i = \E{f_n(\x) \vert \x_{[1,i]}}$ by exposing one $x_i$ at a time for $i \in [n]$.
Clearly, $Z_0 = \frac{n^2}{2} - n - \frac{1}{2^n} + 2$, and $Z_n = f_n(\x)$.
Considering the equivalence relation defined in the proof of Lemma~\ref{lemma: exp h and t}, by symmetry we have $Z_1 = Z_0$.
For $1 \le i \le n-1$, we have
\begin{eqnarray*}
Z_i & = & \E{f_n(\x) \middle\vert \x_{[1,i]}} \\
& = & \E{f_n(\x) \middle\vert \x_{[1,i]}, x_{i} = x_{i+1}} \Pr(x_i = x_{i+1}) \\
& & \ + \E{f_n(\x) \middle\vert \x_{[1,i]}, x_{i} \ne x_{i+1}} \Pr(x_i \ne x_{i+1}) \\
& = & \frac{1}{2} \E{ f_n(\x_{[1,i]}) + f_n(\x_{[i + 1,n]}) -n \middle\vert \x_{[1,i]}, x_i = x_{i+1}} \\
& & \ + \frac{1}{2} \E{ f_n(\x_{[1,i]}) + f_n(\x_{[i + 1,n]}) -t(\x_{[1,i]})h(\x_{[i + 1,n]}) \middle\vert \x_{[1,i]}, x_i \ne x_{i+1}} \\
& = & f_n(\x_{[1,i]}) + \E{f_n(\x_{[i + 1,n]})} - \frac{n}{2} - \frac{1}{2}t(\x_{[1,i]})\E{h(\x_{[i + 1,n]}) \middle\vert x_i \ne x_{i+1}}.
\end{eqnarray*}
Note that by Lemma~\ref{lemma: partial expectation}, we have
\begin{eqnarray*}
\lefteqn{\E{f_n(\x_{[i + 1,n]})}} \\
& = & n \cdot \frac{n-i+1}{2} - \frac{1}{2}\left[3(n-i) - 4 + \frac{1}{2^{n-i-1}}\right] \\
& = & \frac{n^2}{2} - n + \frac{i}{2}(3-n) + 2 -\frac{1}{2^{n-i}},
\end{eqnarray*}
and by Lemma~\ref{lemma: exp h and t}, we have
\[
\E{h(\x_{[i + 1,n]}) \middle\vert x_i \ne x_{i+1}} = 2 - \frac{1}{2^{n-i-1}}.
\]
It then follows that
\[
Z_i = f_n(\x_{[1,i]}) + \frac{n^2}{2} - \frac{3n}{2} + \frac{i}{2}(3-n) + 2 - \frac{1}{2^{n-i}} - t(\x_{[1,i]})(1 - \frac{1}{2^{n-i}}).
\]
We summarize the expressions of $Z_i$ as follows.
\begin{equation} \label{equ: Zi}
Z_i =
\begin{cases}
\frac{n^2}{2} - n + 2 - \frac{1}{2^n}, & \text{if } i = 0,1, \\
f_n(\x_{[1,i]}) + \frac{i}{2}(3-n) - \frac{1}{2^{n-i}} + \frac{n^2}{2} -\frac{3}{2}n + 2 & \\
\ - t(\x_{[1,i]})(1 - \frac{1}{2^{n-i}}), & \text{if } 1 < i < n-1,\\
f_n(\x_{[1,n-1]}) - \frac{1}{2} + \frac{n}{2} - \frac{1}{2}t(\x_{[1,n-1]}), & \text{if } i = n-1, \\
f_n(\x), & \text{if } i = n,
\end{cases}
\end{equation}
where $Z_{n-1}$ is given by Corollary~\ref{corollary: f difference}.
Now we are able to bound the difference $Z_i - Z_{i-1}$ by Eq.~\eqref{equ: Zi} and Corollary~\ref{corollary: f difference}.
Recall that $Z_1 - Z_0 = 0$ by symmetry.
For $1 < i < n-1$, it follows that
\begin{equation*}
Z_i - Z_{i-1} =
\begin{cases}
-\frac{n}{2} + \frac{1}{2^{n-i+1}} + t(\x_{[1,i-1]})(1 - \frac{1}{2^{n-i+1}}), & \text{if } x_i = x_{i-1}, \\
\frac{n}{2} + 1 - t(\x_{[1,i]})(1 - \frac{1}{2^{n-i+1}}), & \text{if } x_i \ne x_{i-1}.
\end{cases}
\end{equation*}
If $x_i = x_{i-1}$, we have
$Z_i - Z_{i-1} < -\frac{n}{2} + \frac{1}{2} + i - 1 < \frac{n}{2} - \frac{3}{2}$.
On the other hand, we have $Z_i - Z_{i-1} \ge -\frac{n}{2} + 1$, and thus, $|Z_i - Z_{i-1}| \le \frac{n}{2}$.
Similarly, it can be verified that $|Z_i - Z_{i-1}| \le \frac{n}{2}$ if $x_i \ne x_{i-1}$.
For the remaining cases, by Eq.~\eqref{equ: Zi} and Corollary~\ref{corollary: f difference}, we have
\begin{equation*}
\begin{aligned}
&Z_{n-1} - Z_{n-2} =
\begin{cases}
-\frac{n}{2} - \frac{1}{4} + \frac{3}{4}t(\x_{[1,n-2]}), & \text{if } x_{n-1} = x_{n-2}, \\
\frac{n}{2} + \frac{1}{2} - \frac{3}{4}t(\x_{[1,n-1]}), & \text{if } x_{n-1} \ne x_{n-2},
\end{cases} \\
& Z_n - Z_{n-1} =
\begin{cases}
-\frac{n}{2} + \frac{1}{2}t(\x_{[1,n-1]}), & \text{if }x_{n-1} = x_n, \\
\frac{n}{2} - \frac{1}{2}t(\x_{[1,n-1]}), & \text{if } x_{n-1} \ne x_n.
\end{cases}
\end{aligned}
\end{equation*}
It is not difficult to see that $|Z_{n-1} - Z_{n-2} | \le \frac{n}{2}$ and $|Z_n - Z_{n-1}| \le \frac{n}{2}$.
Then by Theorem~\ref{thm: azuma}, we have
\begin{equation*}
\Pr(Z_n - Z_0 \ge \lambda) \le \exp \left( \frac{-\lambda^2}{2(0^2 + (n-1) (\frac{n}{2})^2)}\right) = \exp \left( \frac{-2\lambda^2}{n^2(n-1)} \right).
\end{equation*}
Take $\lambda = cn\sqrt{n-1}$, where $c$ is a positive constant.
We have
\begin{equation}\label{equ: z upper tail}
\Pr(Z_n - Z_0 \ge cn \sqrt{n-1}) \le e^{-2c^2}.
\end{equation}
In addition,
\begin{equation} \label{equ: z lower tail}
\Pr(Z_n - Z_0 \le - cn \sqrt{n-1}) \le e^{-2c^2}.
\end{equation}
Note that $|L_1(\x)| = Z_n + \frac{n}{2} + 1$, and $\underset{\x \in \mathbb{Z}_2^n}{\mathbb{E}} \left[ |L_1(\x)| \right] = Z_0 + \frac{n}{2} + 1$.
Then Eqs.~\eqref{equ: upper tail} and~\eqref{equ: lower tail} follow from Eqs.~\eqref{equ: z upper tail} and~\eqref{equ: z lower tail}, respectively. This completes the proof.
\end{proof}
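A small simulation (our own illustration, with our own helper names) matches the theorem: sample uniform words, compute $|L_1(\x)|$ via $f_n(\x) + \frac{n}{2} + 1$, and compare the empirical two-sided tail with the combined bound $2e^{-2c^2}$, using the expectation stated in the theorem.

```python
import random
from math import exp, sqrt

def runs(x):
    return 1 + sum(x[i] != x[i + 1] for i in range(len(x) - 1))

def seg_lengths(x):
    segs, start = [], 0
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            segs.append(i - start)
            start = i
    segs.append(len(x) - start)
    return segs

def L1_size(x):
    """|L_1(x)| = f_n(x) + n/2 + 1 for a binary word x."""
    n = len(x)
    return runs(x) * n - sum(s * s for s in seg_lengths(x)) / 2 + n / 2 + 1

random.seed(0)
n, trials, c = 64, 2000, 1.0
mean = n * n / 2 - n / 2 - 2.0 ** -n + 3          # the stated expectation
dev = c * n * sqrt(n - 1)
hits = sum(
    abs(L1_size(tuple(random.getrandbits(1) for _ in range(n))) - mean) >= dev
    for _ in range(trials))
assert hits / trials <= 2 * exp(-2 * c * c)       # both tails combined
```

In practice the empirical tail is far below the bound, in line with the simulations reported in the conclusion.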
\subsection{The $m$-ary case}
Let $n,n'$ be positive integers.
For each $\y \in \mathbb{Z}_m^{n'}$, define
\begin{equation*}
f_{m,n}(\y) = \rho(\y)(mn-n-1) - \frac{1}{2}\sum_{i=1}^{a(\y)}s_i^2 + \frac{3}{2}\sum_{i=1}^{a(\y)}s_i - a(\y),
\end{equation*}
where $s_i$ is the length of the $i$-th alternating segment of $\y$.
In parallel to Lemma~\ref{lemma: f_n partition}, we ``break'' $f_{m,n}(\y)$ into $f_{m,n}(\y_{[1,i]})$ and $f_{m,n}(\y_{[i+1,n']})$ as follows.
\begin{lemma}\label{lemma: f_mn partition}
Let $n > 1$ and $m, n' > 2$. For each $1< i < n'-1$ and $\y \in \mathbb{Z}_m^{n'}$, we have
\begin{itemize}
\item If $y_i = y_{i+1}$,
\begin{equation}\label{equ: f_mn partition 1}
f_{m,n}(\y) = f_{m,n}(\y_{[1,i]}) + f_{m,n}(\y_{[i+1,n']}) - mn + n + 1;
\end{equation}
\item If $y_i \ne y_{i+1}$,
\begin{equation}\label{equ: f_mn partition bound}
\begin{aligned}
f_{m,n}(\y) & \le f_{m,n}(\y_{[1,i]}) + f_{m,n}(\y_{[i+1,n']}), \\
f_{m,n}(\y) & \ge f_{m,n}(\y_{[1,i]}) + f_{m,n}(\y_{[i+1,n']}) + 1 - t(\y_{[1,i]})h(\y_{[i+1,n']}).
\end{aligned}
\end{equation}
\end{itemize}
In addition,
\begin{equation} \label{equ: f_mn partition 2}
f_{m,n}(\y) =
\begin{cases}
f_{m,n}(\y_{[1,n'-1]}), & \text{if } y_{n'-1} = y_{n'}, \\
f_{m,n}(\y_{[1,n'-1]}) + mn - n - 1 , & \text{if } y_{n'-1} \ne y_{n'}, y_{n'-2} \ne y_{n'}, \\
f_{m,n}(\y_{[1,n'-1]}) + mn - n - t(\y_{[1,n'-1]}), & \text{if } y_{n'-1} \ne y_{n'}, y_{n'-2} = y_{n'}.
\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
We first prove Eqs.~\eqref{equ: f_mn partition 1} and~\eqref{equ: f_mn partition bound}.
Define $\boldsymbol{u} = \y_{[1,i]}$, $\boldsymbol{v} = \y_{[i+1,n']}$, and lists of integers $\{s_i\}_{i=1}^{a(\y)}$, $\{\lambda_j\}_{j=1}^{a(\boldsymbol{u})}$, $\{\mu_k\}_{k=1}^{a(\boldsymbol{v})}$ such that $s_i$, $\lambda_j$, $\mu_k$ are the lengths of the $i$-th, $j$-th, and $k$-th alternating segments of $\y$, $\boldsymbol{u}$, $\boldsymbol{v}$, respectively.
If $y_i = y_{i+1}$, we have
\begin{itemize}
\item $\rho(\y) = \rho(\boldsymbol{u}) + \rho(\boldsymbol{v}) - 1$;
\item $a(\y) = a(\boldsymbol{u}) + a(\boldsymbol{v})$;
\item $\lambda_j = s_j$ for $1 \le j \le a(\boldsymbol{u})$, and $\mu_k = s_{a(\boldsymbol{u}) + k}$ for $1 \le k \le a(\boldsymbol{v})$.
\end{itemize}
Then, we have
\begin{eqnarray*}
f_{m,n}(\y) & = & \rho(\y)(mn - n - 1) - \frac{1}{2}\sum_{i = 1}^{a(\y)}s_i^2 + \frac{3}{2}\sum_{i=1}^{a(\y)}s_i - a(\y) \\
& = & \left( \rho(\boldsymbol{u}) + \rho(\boldsymbol{v}) - 1 \right)(mn - n - 1) - \frac{1}{2} \left[ \sum_{i=1}^{a(\boldsymbol{u})} \lambda_i^2 + \sum_{j=1}^{a(\boldsymbol{v})}\mu_j^2\right] + \frac{3}{2} \left[ \sum_{i=1}^{a(\boldsymbol{u})} \lambda_i + \sum_{j=1}^{a(\boldsymbol{v})}\mu_j \right] \\
& & \ - (a(\boldsymbol{u}) + a(\boldsymbol{v})) \\
& = & f_{m,n}(\boldsymbol{u}) + f_{m,n}(\boldsymbol{v}) - (mn - n - 1)\\
& = & f_{m,n}(\y_{[1,i]}) + f_{m,n}(\y_{[i+1,n']}) - mn + n + 1.
\end{eqnarray*}
For the case when $y_i \ne y_{i+1}$, by similar arguments, we have the results summarized in Table~\ref{tab: my_table}.
\begin{table}[h]
\centering
\caption{The case $y_i \ne y_{i+1}$}
\label{tab: my_table}
\begin{tabular}{|lll|l|}
\hline
\multicolumn{3}{|c|}{Conditions} & $f_{m,n}(\y) = $ \\ \hline
\multicolumn{1}{|l|}{\multirow{4}{*}{$y_{i-1} \ne y_{i}$}} & \multicolumn{1}{l|}{\multirow{2}{*}{$y_{i+1} \ne y_{i-1}$}} & $y_i \ne y_{i+2}$ & $f_{m,n}(\boldsymbol{u}) + f_{m,n}(\boldsymbol{v})$ \\ \cline{3-4}
\multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{} & $y_i = y_{i+2}$ & $f_{m,n}(\boldsymbol{u}) + f_{m,n}(\boldsymbol{v}) + 1 - h(\boldsymbol{v})$\\ \cline{2-4}
\multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{\multirow{2}{*}{$y_{i+1} = y_{i-1}$}} & $y_i \ne y_{i+2}$ & $f_{m,n}(\boldsymbol{u}) + f_{m,n}(\boldsymbol{v}) + 1 - t(\boldsymbol{u})$ \\ \cline{3-4}
\multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{} & $y_i = y_{i+2}$ & $f_{m,n}(\boldsymbol{u}) + f_{m,n}(\boldsymbol{v}) + 1 - t(\boldsymbol{u})h(\boldsymbol{v})$ \\ \hline
\multicolumn{1}{|l|}{\multirow{2}{*}{$y_{i-1} = y_{i}$}} & \multicolumn{2}{c|}{$y_{i} \ne y_{i+2}$} & $f_{m,n}(\boldsymbol{u}) + f_{m,n}(\boldsymbol{v})$ \\ \cline{2-4}
\multicolumn{1}{|l|}{} & \multicolumn{2}{c|}{$y_{i} = y_{i+2}$} & $f_{m,n}(\boldsymbol{u}) + f_{m,n}(\boldsymbol{v}) + 1 - h(\boldsymbol{v})$ \\ \hline
\end{tabular}
\\
\vspace{0.3em}
\footnotesize{$y_{i-1} = y_i$ and $y_{i} \ne y_{i+1}$ implies $y_{i+1} \ne y_{i-1}$.}
\end{table}
Note that both $t(\boldsymbol{u})$ and $h(\boldsymbol{v})$ are positive, and Eq.~\eqref{equ: f_mn partition bound} then follows.
The proof of Eq.~\eqref{equ: f_mn partition 2} is straightforward and is thus omitted.
\end{proof}
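Lemma~\ref{lemma: f_mn partition} can likewise be verified exhaustively for small parameters. The sketch below (our own code, reflecting our reading of the $m$-ary segment definition, under which a segment broken by three pairwise distinct consecutive symbols shares its boundary symbol with the next segment) checks all three parts of the lemma over $\mathbb{Z}_3^{n'}$:

```python
from itertools import product

def runs(y):
    return 1 + sum(y[i] != y[i + 1] for i in range(len(y) - 1))

def seg_lengths(y):
    """m-ary alternating segments: a run break starts a fresh segment, while a
    break by three pairwise distinct symbols re-uses the shared middle symbol."""
    segs, start = [], 0
    for i in range(1, len(y)):
        if y[i] == y[i - 1]:                       # run break
            segs.append(i - start)
            start = i
        elif i - start >= 2 and y[i] != y[i - 2]:  # pattern break
            segs.append(i - start)
            start = i - 1
    segs.append(len(y) - start)
    return segs

def f_mn(y, m, n):
    segs = seg_lengths(y)
    return (runs(y) * (m * n - n - 1) - sum(s * s for s in segs) / 2
            + 1.5 * sum(segs) - len(segs))

m, n = 3, 6
for n_prime in (4, 5, 6):
    for y in product(range(m), repeat=n_prime):
        for i in range(2, n_prime - 1):            # 1 < i < n' - 1
            u, v = y[:i], y[i:]
            lhs, rhs = f_mn(y, m, n), f_mn(u, m, n) + f_mn(v, m, n)
            if y[i - 1] == y[i]:
                assert lhs == rhs - (m * n - n - 1)
            else:
                t, h = seg_lengths(u)[-1], seg_lengths(v)[0]
                assert rhs + 1 - t * h <= lhs <= rhs
        p = f_mn(y[:-1], m, n)                     # the case i = n' - 1
        if y[-1] == y[-2]:
            assert f_mn(y, m, n) == p
        elif y[-1] != y[-3]:
            assert f_mn(y, m, n) == p + m * n - n - 1
        else:
            assert f_mn(y, m, n) == p + m * n - n - seg_lengths(y[:-1])[-1]
```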
\begin{theorem}\label{thm: m-ary distribution}
Let $m>2, n>3$ be integers, and $x_1, \dots, x_n$ be independent random variables such that $\Pr(x_i = j) = \frac{1}{m}$ for $i \in [n]$, $j \in \mathbb{Z}_m$. Then for the word $\x = x_1,\dots,x_n$, we have
\begin{equation}\label{equ: m-ary upper tail}
\Pr \left( {|L_1(\x)| - \underset{\x \in \mathbb{Z}_m^n}{\mathbb{E}} \left[ |L_1(\x)| \right] \ge c (m + \frac{1}{m}) n \sqrt{n-1}} \right) \le e^{-c^2/2},
\end{equation}
and
\begin{equation}\label{equ: m-ary lower tail}
\Pr \left( {|L_1(\x)| - \underset{\x \in \mathbb{Z}_m^n}{\mathbb{E}} \left[ |L_1(\x)| \right] \le - c (m + \frac{1}{m}) n \sqrt{n-1}} \right) \le e^{-c^2/2},
\end{equation}
where $\underset{\x \in \mathbb{Z}_m^n}{\mathbb{E}} \left[ |L_1(\x)| \right]$ is given in Eq.~\eqref{equ: expectation}, and $c$ is a positive constant.
\end{theorem}
\begin{proof}
As in Section~\ref{sec: binary case}, define the Doob martingale $Z_0 = \E{f_{m,n}(\x)} = n^2 (m + \frac{1}{m} - 2) - \frac{n}{m} + \frac{m^n - 1}{m^n(m - 1)}$, and $Z_i = \E{f_{m,n}(\x) \middle \vert \x_{[1,i]}}$ for $1 \le i \le n$.
By Lemma~\ref{lemma: f_mn partition},
\begin{equation}\label{equ: Z_n_1 bound}
\begin{aligned}
Z_{n-1} & \le f_{m,n}(\x_{[1,n-1]}) + \left(1 - \frac{1}{m} \right)(mn - n - 1), \\
Z_{n-1} & \ge f_{m,n}(\x_{[1,n-1]}) + \left(1 - \frac{1}{m} \right) \left(mn -n -t(\x_{[1,n-1]})\right).
\end{aligned}
\end{equation}
Similarly, for $1 < i < n-1$, by Eqs.~\eqref{equ: f_mn partition 1} and~\eqref{equ: f_mn partition bound}, we have
\begin{eqnarray}\label{equ: Z_i bound}
Z_i & \le & f_{m,n}(\x_{[1,i]}) + g_{m,n}(i) + \frac{1}{m}(-mn + n + 1) \nonumber\\
& = & f_{m,n}(\x_{[1,i]}) + g_{m,n}(i) - n + \frac{n}{m} + \frac{1}{m}, \nonumber\\
Z_i & \ge & f_{m,n}(\x_{[1,i]}) + g_{m,n}(i) + \frac{1}{m}(- mn + n + 1) + \frac{m-1}{m}\left(1 - t(\x_{[1,i]})\E{h(\x_{[i+1,n]}) \middle \vert x_{i} \ne x_{i+1}}\right) \nonumber\\
& = & f_{m,n}(\x_{[1,i]}) + g_{m,n}(i) + 1 -n + \frac{n}{m} - \frac{m-1}{m}t(\x_{[1,i]})\E{h(\x_{[i+1,n]}) \middle \vert x_{i} \ne x_{i+1}} \nonumber \\
& = & f_{m,n}(\x_{[1,i]}) + g_{m,n}(i) + 1 -n + \frac{n}{m} - \frac{m-1}{m}t(\x_{[1,i]})\left(2 - \frac{1}{m^{n-i-1}}\right),
\end{eqnarray}
where $g_{m,n}(i) \triangleq \E{f_{m,n}(\x_{[i+1,n]}) \middle \vert \x_{[1,i]}}$.
Then by Lemma~\ref{lemma: partial expectation}, we have
\begin{equation}\label{equ: g_i}
g_{m,n}(i)= n(n-i)(m + \frac{1}{m} -2) - \frac{n}{m} + \frac{1}{m-1} - \frac{1}{(m-1)m^{n-i}} + i.
\end{equation}
Now we claim that the values of $|Z_i - Z_{i-1}|$ can be bounded as follows.
The proof details are deferred to Appendix~\ref{sec-appendixa}.
\begin{equation} \label{equ: bounded difference}
|Z_i - Z_{i-1}| \le
\begin{cases}
0, & \text{ if } i = 1, \\
n(m + \frac{1}{m} - 2) + 3, & \text{ if } i = 2, \\
n(m + \frac{1}{m}), & \text{ if } 2 < i < n-1, \\
n(m - 1), & \text{ if } i = n - 1, \\
n(m + \frac{1}{m} - 2), & \text{ if } i = n.
\end{cases}
\end{equation}
Therefore, we have $|Z_i - Z_{i-1}| \le n(m+ \frac{1}{m})$ for $i \in [n]$, and the result then follows by Theorem~\ref{thm: azuma}.
\end{proof}
\section{Conclusion}\label{sec: conclusion}
In this paper, we analyze the distribution of $|L_1(\x)|$ for $\x \in \mathbb{Z}_m^n$ by Azuma's inequality.
In Appendix~\ref{sec-appendixb}, we simulate the distribution of $|L_1(\x)|$ by randomly sampling words from $\mathbb{Z}_m^n$.
The numerical results suggest that $|L_1(\x)|$ is more concentrated than expected.
Specifically, the gap between the simulation results and the bounds in Theorem~\ref{thm: binary distribution} and Theorem~\ref{thm: m-ary distribution} is still large, leaving the derivation of better bounds as an open problem.
Intuitively, the distribution of $|L_t(\x)|$ should be more and more concentrated as $t$ grows.
For example, $|L_n(\x)| = m^n$ for all $\x \in \mathbb{Z}_m^n$.
However, determining the distribution of $|L_t(\x)|$ for general $t$ is difficult and left open.
We also note that the knowledge of the size distribution of the FLL balls might be used to improve the lower bound on the sizes of deletion-correcting codes~\cite{sala2014}. In particular, let $G$ be a graph with vertex set $\mathbb{Z}_m^n$ such that there is an edge between $\x, \y \in \mathbb{Z}_m^n$ if and only if $d_l(\x,\y) \le t$.
Thus, each $t$-deletion-correcting code forms an independent set of $G$.
Let $\alpha(G)$ be the size of the largest independent set in $G$, and $\deg(\x)$ be the degree of $\x \in \mathbb{Z}_m^n$. By the Caro-Wei bound (see for example~\cite{MR3524748}), we have
\[
\alpha(G) \ge \sum_{\x \in \mathbb{Z}_m^n} \frac{1}{1 + \deg(\x)}.
\]
Therefore, a tighter lower bound could be derived if we know more about the distribution of $\deg(\x)$.
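As a toy illustration of this bound (our own sketch, not from the references), the Caro--Wei bound for $t = 1$ can be computed by brute force in a small parameter range, using $\deg(\x) = |L_1(\x)| - 1$ since the ball contains $\x$ itself:

```python
from itertools import product

def ball(x, m):
    """Radius-one FLL ball of x: x plus one deletion followed by one insertion."""
    n, out = len(x), {x}
    for i in range(n):
        y = x[:i] + x[i + 1:]
        for j in range(n):
            for b in range(m):
                out.add(y[:j] + (b,) + y[j:])
    return out

def caro_wei(m, n):
    """Caro-Wei lower bound for t = 1: sum over x of 1/(1 + deg(x)) = 1/|L_1(x)|."""
    return sum(1 / len(ball(x, m)) for x in product(range(m), repeat=n))

bound = caro_wei(2, 6)
assert 1 < bound < 2 ** 6   # a nontrivial lower bound on alpha(G)
```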
% arXiv:2204.02201 -- On the size distribution of Levenshtein balls with radius one
% Source: https://arxiv.org/abs/2007.09874
\title{A Note on Stabbing Convex Bodies with Points, Lines, and Flats}
$\newcommand{\eps}{\varepsilon}\newcommand{\tldO}{\widetilde{O}}$
\begin{abstract}
  Consider the problem of constructing weak $\eps$-nets where the stabbing elements are lines or $k$-flats instead of points. We study this problem in the simplest setting where it is still interesting -- namely, the uniform measure of volume over the hypercube $[0,1]^d$. Specifically, a $(k,\eps)$-net is a set of $k$-flats, such that any convex body in $[0,1]^d$ of volume larger than $\eps$ is stabbed by one of these $k$-flats. We show that for $k \geq 1$, one can construct $(k,\eps)$-nets of size $O(1/\eps^{1-k/d})$. We also prove that any such net must have size at least $\Omega(1/\eps^{1-k/d})$. As a concrete example, in three dimensions all $\eps$-heavy bodies in $[0,1]^3$ can be stabbed by $\Theta(1/\eps^{2/3})$ lines. Note that these bounds are \emph{sublinear} in $1/\eps$, and are thus somewhat surprising. The new construction also works for points, providing a weak $\eps$-net of size $O(\tfrac{1}{\eps}\log^{d-1} \tfrac{1}{\eps})$.
\end{abstract}
\section{Introduction}
\myparagraph{Range spaces and $\varepsilon$-nets.} %
A \emphi{range space} is a pair $\RangeSpace = (\Ground, \Ranges)$,
where $\Ground$ is the \emphi{ground set} (finite or infinite) and
$\Ranges$ is a (finite or infinite) family of subsets of $\Ground$.
The elements of $\Ranges$ are \emphi{ranges}.
Suppose that $\Ground$ is a finite set. For a parameter
$\varepsilon \in (0,1)$, a subset $\Mh{\mathsf{S}} \subseteq \Ground$ is an
\emphi{$\varepsilon$-net} for the range space $\RangeSpace$ if every
range $\range \in \Ranges$ with
\begin{math}
|\range \cap \Ground| \geq \varepsilon|\Ground|
\end{math}
has $\range \cap \Mh{\mathsf{S}} \neq \varnothing$. The $\varepsilon$-net theorem of
Haussler and Welzl \cite{hw-ensrq-87} implies the existence of
$\varepsilon$-nets of size $O(\delta \varepsilon^{-1} \log \varepsilon^{-1})$, where
$\delta$ is the \VC{} dimension of the range space $\RangeSpace$. The
use of $\varepsilon$-nets is widespread in computational geometry
\cite{m-ldg-02, h-gaa-11}.
\myparagraph{Weak $\varepsilon$-nets.} Consider the range space
$(\PS, \RangesC)$, where $\RangesC$ is the collection of all compact
convex bodies in $\Re^d$ and $\PS \subset \Re^d$ is a point set of
size $n$. This range space has infinite \VC{} dimension---the standard
$\varepsilon$-net constructions do not work for this range space. The notion
of \emphi{weak $\varepsilon$-nets} bypasses this issue by allowing the net
$\Mh{\mathsf{S}}$ to use points outside of $\PS$. Specifically, any convex body
$\Mh{\Xi}$ that contains at least $\varepsilon n$ points of $\PS$ must contain a
point of $\Mh{\mathsf{S}}$. The first construction of weak $\varepsilon$-nets is due to
\Barany{} {e{}t~a{}l.}\xspace \cite{bfl-nhp-90}. There has been quite a bit of work on
this problem, culminating in the somewhat simpler construction of
Matou{\v{s}}ek\xspace and Wagner \cite{mw-ncwen-04}, who constructed weak
$\varepsilon$-nets of size $O(\varepsilon^{-d} \log^{f(d)} \varepsilon^{-1})$, where
$f(d) = O(d^2 \log d)$. Recently, Rubin \cite{r-ibwenp-18} gave an
improved bound for points in the plane, showing existence of weak
$\varepsilon$-nets of size $O(\varepsilon^{-(3/2 + \alpha)})$ for arbitrarily small
$\alpha > 0$. For more detailed history of the problem, see the
introduction of Rubin \cite{r-ibwenp-18}. As for a lower bound, Bukh
{e{}t~a{}l.}\xspace \cite{bmn-lbwensc-09} gave constructions of point sets for which
any weak $\varepsilon$-net must have size
$\Omega(\varepsilon^{-1} \log^{d-1} \varepsilon^{-1})$. Closing this gap remains a
major open problem.
\myparagraph{\kenet{k}{\eps}{}s and uniform measure.}
A natural extension of weak $\varepsilon$-nets is to allow the net $\Mh{\mathsf{S}}$ to
contain other geometric objects. Given a collection of $n$ points
$\PS \subset \Re^d$ and a parameter $0 \leq k < d$, we define a (weak)
\kenet{k}{\eps} to be a collection of $k$-flats $\Mh{\mathsf{S}}$ such that if $\Mh{\Xi}$ is a
convex body containing at least $\varepsilon n$ points of $\PS$, then there
exists a $k$-flat in $\Mh{\mathsf{S}}$ intersecting $\Mh{\Xi}$. Note that
\knet{0}{}s are exactly weak $\varepsilon$-nets.
In general, one would expect that as $k$ increases, the size of the
\kenet{k}{\eps} shrinks. For example, a \knet{1} for a collection of points in
$\Re^3$ can be constructed by projecting the points down onto the
$xy$-plane and applying Rubin's construction in the plane to obtain a
weak $\varepsilon$-net $\Mh{\mathsf{S}}$ of size $O(\varepsilon^{-(3/2 + \alpha)})$
\cite{r-ibwenp-18}. Lifting $\Mh{\mathsf{S}}$ up back into three dimensions
results in a \knet{1} of the same size, which is smaller than the best
known weak $\varepsilon$-net size in $\Re^3$ \cite{mw-ncwen-04}. However, one
might expect that a \knet{1} of even smaller size is possible in
$\Re^3$, as this construction uses a set of parallel lines (i.e., one
would expect the lines in an optimal net to be arbitrarily oriented).
Here, we study an even simpler version of the problem, where the
ground set is the hypercube $\HCube = \HCX{d}$. In particular, for
$\varepsilon \in (0,1)$ and $0 \leq k < d$, we are interested in computing
the smallest set $\KS$ of $k$-flats, such that if $\Mh{\Xi}$ is a convex
body with $\volX{\Mh{\Xi} \cap \HCube} \geq \varepsilon$, then there is a
$k$-flat in $\KS$ which intersects $\Mh{\Xi}$. For the sake of exposition,
throughout the rest of the paper we refer to this set $\KS$ as a
\emphi{\kenet{k}{\eps}}. We note that $\HCX{d}$ can be replaced with any
arbitrary compact convex body in the definition (the size of the \kenet{k}{\eps}
increases by a factor depending on $d$, see \apndref{extensions}).
\subsection{Our results \& paper organization}
\myparagraph{Notation.} Throughout, the notation $O_d$, $\Omega_d$,
and $\Theta_d$ hides constants depending on the dimension $d$.
\medskip\noindent
First, we show that any \kenet{k}{\eps} must have size
$\Omega_d(1/\varepsilon^{1-k/d})$ (\lemref{lb-k}). Perhaps surprisingly,
we give a relatively simple construction of
\kenet{k}{\eps}{}s of size $O_d(1/\varepsilon^{1-k/d})$ for $k \geq 1$
(\thmref{ke-nets-opt}). For $k=0$, we obtain nets of size
$O_d((1/\varepsilon)\log^{d-1}(1/\varepsilon))$ (\thmref{k-flat-net-det}).
Importantly, both constructions are deterministic and explicit (see
the discussion below).
As far as the authors are aware, this particular problem we study has
not been addressed before. The only related result known is the
existence of explicit constructions of \knet{0}{}s
for axis-parallel boxes in $\Re^d$, and is briefly mentioned in
\cite{bmn-lbwensc-09}. In this case, one can construct \knet{0}{}s of
size $O_d(1/\varepsilon)$ using Van der Corput sets in two dimensions, and
Halton--Hammersley sets in higher dimensions. For completeness, we
describe these constructions in \apndref{0-net-boxes}.
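For intuition, Van der Corput and Halton points are generated by radical inversion; the following standard sketch (our own code, not specific to this paper) computes them:

```python
def radical_inverse(i, base):
    """Reflect the base-`base' digits of i about the radix point."""
    inv, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        inv += (i % base) / denom
        i //= base
    return inv

def halton(i, primes=(2, 3)):
    """i-th point of the Halton sequence; one Van der Corput coordinate per prime."""
    return tuple(radical_inverse(i, p) for p in primes)

# the Van der Corput sequence in base 2 starts 0, 1/2, 1/4, 3/4, 1/8, ...
assert [radical_inverse(i, 2) for i in range(5)] == [0.0, 0.5, 0.25, 0.75, 0.125]
```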
\myparagraph{Deterministic vs.~explicit constructions of $\varepsilon$-nets.}
For the regular concept of $\varepsilon$-nets, there are known deterministic
constructions. They work by repeatedly halving the input point set,
using deterministic discrepancy constructions, until the set is of the
desired size \cite{m-gd-99, c-dmr-01}. On the one hand, for our setting
(i.e., the measure is uniform volume on the unit hypercube) it is not
clear what the generated $\varepsilon$-net is without running this
construction algorithm outright. On the other hand, we develop a
construction of weak $\varepsilon$-nets---for uniform volume measure over the
hypercube for ellipsoids---which are much simpler and are explicit;
one can easily compute the $i$\th point in this net
using polylogarithmic space.
\section{Lower bound}
\begin{defn}
The affine hull of a point set
$\PS = \{ \pp_1,\ldots, \pp_n \} \subseteq \Re^d$ is the set
\begin{equation*}
\Set{\Bigl.\smash{\sum_{i} \alpha_i \pp_i}}{\forall i \quad \alpha_i
\in \Re \quad \text{ and }\quad \smash{\sum_{i}}
\alpha_i = 1}.
\end{equation*}
For $0 \leq k < d$, a \emphi{$k$-flat} is the affine hull of a set
of $k+1$ (affinely independent) points.
\end{defn}
\begin{defn}
For parameters $\varepsilon \in (0,1)$, and $k \in \{0,1,\ldots, d-1\}$,
a set $\KS$ of $k$-flats is a \emphi{\kenet{k}{\eps}}, if for any convex
body $\Mh{\Xi} \subseteq \Re^d$ with
$\volX{\Mh{\Xi} \cap \HCX{d}} \geq \varepsilon$, there exists a flat
$\flat \in \KS$ such that $\flat \cap \Mh{\Xi} \neq \emptyset$.
\end{defn}
\begin{lemma}
\lemlab{lb-k}%
%
For a parameter $\varepsilon \in (0,1)$, any \kenet{k}{\eps} must have size
$\Omega_d(1/\varepsilon^{1 - k/d})$.
\end{lemma}
\begin{proof}
Let $\KS$ be a \kenet{k}{\eps}. For each $k$-flat $\flat \in \KS$, let
$H(\flat,r)$ be the locus of points in $\HCX{d}$ within distance at
most $r$ from $\flat$ (for $k=1$ in three dimensions, this is the
intersection of $\HCX{d}$ and the cylinder with radius $r$ centered
at the line $\flat$). Note that a ball $\Ball$ with center $c$ and
radius $r$ intersects a $k$-flat $\flat$ if and only if
$c \in H(\flat, r)$.
Fix $r = (\varepsilon/\mu)^{1/d}$, where $\mu$ is a constant to be
determined shortly. We claim that by choosing $\mu$ appropriately,
if $\KS$ is a \kenet{k}{\eps}, then the collection of objects
$\Set{H(\flat, r)}{\flat \in \KS}$ covers $\HCX{d}$. Indeed,
suppose not. Then there exists a point $\pp \in \HCX{d}$ not
covered by any of the objects $H(\flat, r)$. This implies that a
ball $\Ball$ centered at $\pp$ with radius $r$ does not intersect
any $k$-flat of $\KS$, and its volume is
$c_d r^{d} = c_d\varepsilon/\mu$, where $c_d$ is a constant that depends
on $d$. Choose $\mu = c_d$ so that $\Ball$ has volume at least
$\varepsilon$, but does not intersect any $k$-flat of $\KS$. A contradiction
to the required net property.
Hence, by the choice of $r$, any \kenet{k}{\eps} must satisfy the condition
that $\Set{H(\flat, r)}{\flat \in \KS}$ covers $\HCX{d}$. For any
$k$-flat $\flat$, we have
$\beta = \volX{H(\flat, r)} = O_d(r^{d-k}) = O_d(\varepsilon^{1 - k/d})$.
Thus, to cover $\HCX{d}$, we have that
$\cardin{\KS} \geq 1/\beta = \Omega_d(1/\varepsilon^{1 - k/d})$.
\end{proof}
\section{Constructing \kenet{k}{\eps}{}s for $k \geq 1$}
\seclab{ke-nets-opt}
\begin{figure}
\noindent%
\includegraphics[page=1,width=0.23\linewidth]{figs/grid_grad}%
\hfill%
\includegraphics[page=2,width=0.23\linewidth]{figs/grid_grad}%
\hfill%
\includegraphics[page=3,width=0.23\linewidth]{figs/grid_grad}%
\hfill%
\includegraphics[page=4,width=0.23\linewidth]{figs/grid_grad}
\caption{The multi-level grid, and its associated lines.}
\figlab{grid}
\end{figure}
Here, we give a self-contained proof of a deterministic, explicit
construction of \kenet{k}{\eps}{}s of size $O_d(1/\varepsilon^{1 - k/d})$ for $k \geq 1$
which matches the lower bound of \lemref{lb-k} up to constant factors.
The construction will be done recursively on the dimension $d$.
\myparagraph{Base case: $k=d-1$.} Here a \knet{d-1} of size
$d/\varepsilon^{1/d} = O_d(1/\varepsilon^{1 - k/d})$ follows readily by overlaying a
$d$-dimensional grid of side length $\varepsilon^{1/d}$ and letting the net
consist of the hyperplanes forming the grid. As such, we assume
$k < d-1$.
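A minimal sketch of this base case (our own code, with $k = \lceil 1/\varepsilon^{1/d} \rceil$ cells per side, a hypothetical parameter choice): the net consists of the interior grid hyperplanes, and any axis-parallel box with a side of length at least $1/k$ — in particular any such box of volume at least $\varepsilon$ — crosses one of them.

```python
from math import ceil

def grid_net(eps, d):
    """Interior splitting hyperplanes x_axis = i/k, for k = ceil(1/eps^(1/d))."""
    k = ceil((1 / eps) ** (1.0 / d))
    return [(axis, i / k) for axis in range(d) for i in range(1, k)]

def stabbed(lo, hi, net):
    """Does the axis-parallel box [lo, hi] meet some hyperplane of the net?"""
    return any(lo[axis] <= off <= hi[axis] for axis, off in net)

eps, d = 0.01, 3
net = grid_net(eps, d)          # here k = 5, so 3 * 4 = 12 hyperplanes
assert len(net) == 12
# every box whose first side has length 1/k = 0.2 is stabbed:
assert all(stabbed((a, 0.0, 0.0), (a + 0.2, 0.01, 0.01), net)
           for a in (0.79 * j / 97 for j in range(98)))
```

A convex body avoiding all of these hyperplanes lies inside a single open grid cell, whose volume is at most $\varepsilon$, which is the argument used in the text.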
\subsection{Construction}
The construction is based on quadtrees. Starting with the
entire cube $\HCX{d}$, we construct $d$ orthogonal planes which
\emph{split} the cube into $2^d$ cubes of side length $1/2$. We refer
to such planes as \emphi{splitting planes}. This splitting
process is continued recursively inside each cell, for
$i = 0, \ldots, \tau$, where
\begin{equation}
\tau = \ceil{\frac{1}{d}\lg\frac{1}{\varepsilon}} + 3\ceil{\log (3d)} + 1
\eqlab{tau:value}
\end{equation}
(and $\lg = \log_2$), so that cubes at the $i$\th level of the construction have
side length $1/2^i$. The number of such cubes at the $i$\th level is
$2^{di}$. Naturally, these cubes together form a grid with side length
$1/2^i$. See \figref{grid} for an illustration of the construction in
two dimensions.
For each splitting hyperplane $h$ at level $i \geq 1$, which splits cells
of side length $1/2^{i-1}$ into cells of side length $1/2^i$, we recursively
construct a \kenet{k}{\varepsilon_i} on $h$ (which lies in $d-1$ dimensions),
where
\begin{equation}
\varepsilon_i = \frac{2^{i}\varepsilon}{4d}.
\end{equation}
We collect all $k$-flats on all splitting hyperplanes
at all levels into our \kenet{k}{\eps} $\KS$.
\subsection{Analysis}
\begin{figure}
\centerline{\includegraphics[scale=0.7]{figs/draw_ball}}
\caption{The slice volume, and its $1/9$\th power, for the unit
radius ball $\Mh{\Xi}$ in $10$ dimensions. This is an example of
the concavity implied by the Brunn-Minkowski inequality, which
in turn implies that the slice function is unimodal.}
\figlab{bm-unimodal}
\end{figure}
\begin{lemma}
The constructed \kenet{k}{\eps}{} has size $O_d(1/\varepsilon^{1 - k/d})$.
\end{lemma}
\begin{proof}
Let $T(\varepsilon,d)$ denote the minimum size of a \kenet{k}{\eps} for
$[0,1]^d$. The proof is by induction on $d$. When $d = k + 1$, we
have $T(\varepsilon, k+1) \leq (k+1)/\varepsilon^{1/(k+1)}$, by the base case
described above. So assume $d \geq k+2$ and
$T(\delta, d') \leq \beta(d')/\delta^{1 - k/d'}$ for all $d' < d$,
where $\beta(d')$ is a constant to be determined. By the
inductive hypothesis, the above construction produces a \kenet{k}{\eps} of
size
\begin{align*}
\cardin{\KS}
&%
\leq%
d \sum_{i=1}^\tau 2^{i-1}T(\varepsilon_i, d-1)
\leq%
d \sum_{i=1}^{\tau} \frac{2^{i-1} \beta(d-1)}{\varepsilon_{i}^{1 - k/(d-1)}}
\leq%
\frac{4d^2\beta(d-1)}{\varepsilon^{1 - k/d}} \sum_{i=1}^{\tau}
\frac{2^{i-1}}{2^{i - ik/(d-1)}}
\\&%
\leq%
\frac{2d^2\beta(d-1)}{\varepsilon^{1 - k/d}} \sum_{i=1}^{\tau}
2^{ ik/(d-1)}%
\leq%
\frac{4d^2\beta(d-1)}{\varepsilon^{1 - k/(d-1)}} \cdot 2^{\tau k/(d-1)}
\leq%
\frac{d^{O(1)}\beta(d-1)}{\varepsilon^{1 - k/d}}.
\end{align*}
The last inequality follows since
$\tau \leq \frac{1}{d}\lg\frac{1}{\varepsilon} + 3\ceil{\log(3d)} + 2$,
so that $2^{\tau k/(d-1)} = O(d^3) \cdot \varepsilon^{-k/(d(d-1))}$
(using $k/(d-1) \leq 1$), while
$\varepsilon^{-(1 - k/(d-1))} \cdot \varepsilon^{-k/(d(d-1))} =
\varepsilon^{-(1 - k/d)}$. In particular, we obtain a recurrence of the
form $\beta(d) = d^{O(1)}\beta(d-1)$, which solves to
$\beta(d) = d^{O(d)}$. As such, $\KS$ has size
$O_d(1/\varepsilon^{1 - k/d})$.
\end{proof}
\myparagraph{The Brunn-Minkowski inequality and unimodal functions.}
Let $\Mh{\Xi}$ be a convex body in $\Re^d$. For a parameter
$\alpha \in \Re$, let $f(\alpha)$ denote the $(d-1)$-dimensional
volume of $\Mh{\Xi}$ intersected with the hyperplane $x_1 = \alpha$. The
Brunn-Minkowski inequality \cite{m-ldg-02, h-gaa-11} implies that the
function $g(\alpha) = f(\alpha)^{1/(d-1)}$ is concave. In particular,
$g$ is \emphi{unimodal}. Namely, there exists a $\beta \in \Re$ such
that $g$ is non-decreasing on $(-\infty, \beta]$ and non-increasing on
$[\beta, \infty)$. As such, the function $f$ itself is unimodal. See
\figref{bm-unimodal}.
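This concavity is easy to verify numerically for the ball of \figref{bm-unimodal}. The sketch below samples the slice function of the unit ball in $d = 10$ and checks that $g = f^{1/(d-1)}$ has non-positive second differences on a grid; the grid resolution and tolerance are arbitrary choices of the sketch.

```python
import math

d = 10
# Volume of the unit (d-1)-ball, from the standard Gamma-function formula.
v = math.pi ** ((d - 1) / 2) / math.gamma((d - 1) / 2 + 1)

def slice_vol(a):
    """(d-1)-dimensional volume of the slice x_1 = a of the unit d-ball."""
    r2 = 1.0 - a * a
    return v * r2 ** ((d - 1) / 2) if r2 > 0 else 0.0

# Sample g = f^{1/(d-1)} on a grid and check discrete concavity:
# every second difference is non-positive (up to floating-point noise).
xs = [i / 1000.0 - 0.5 for i in range(1001)]
g = [slice_vol(a) ** (1.0 / (d - 1)) for a in xs]
assert all(g[i - 1] + g[i + 1] - 2 * g[i] <= 1e-12 for i in range(1, 1000))
```

Concavity of $g$ immediately gives unimodality of $g$, and hence of $f$, as claimed.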
\begin{lemma}
The set $\KS$ is a \kenet{k}{\eps}{}.
\end{lemma}
\begin{proof}
Let $\Mh{\Xi}$ be a convex body contained in $\HCX{d}$ with volume at
least $\varepsilon$. Assume, for the sake of contradiction, that $\Mh{\Xi}$
is not stabbed by any of the $k$-flats of $\KS$.
Let $h(\alpha)$ be the hyperplane orthogonal to the first axis
which intersects the first axis at $\alpha \in \Re$. Define the
function
\begin{equation*}
f(\alpha) = \volX{\Mh{\Xi} \cap h(\alpha)\bigr.}.
\end{equation*}
By the Brunn-Minkowski inequality, the function
$g(\alpha) = f(\alpha)^{1/(d-1)}$ is concave and unimodal. Define
the point $x^\star \in [0,1]$ so that
$x^\star = \arg\max_{\alpha} f(\alpha)$.
Let $V(\Delta) = f( x^\star + \Delta)$, and let
$v(\Delta) = (V(\Delta))^{1/(d-1)}$. The function $v$, being a
translation of $g$, is concave and unimodal. Let $\rv_i\geq0$ be
the maximum number such that $V(\rv_i) = \varepsilon_i$, for
$i=1,\ldots, \tau$. Observe that if $\rv_i \geq 1/2^i$, then there
is a hyperplane orthogonal to the first axis that carries a recursive
construction of a net on it, for $\varepsilon_i$. By induction, this would
imply that the net intersects $\Mh{\Xi}$. We thus assume from this
point on that
\begin{equation*}
\rv_i < \frac{1}{2^i},
\end{equation*}
for all $i$. Observe that
$\rv_1 \geq \rv_2 \geq \cdots \geq \rv_\tau$, as
$\varepsilon_1 < \varepsilon_2 < \cdots < \varepsilon_\tau$ (more specifically,
$\varepsilon_i = 2\varepsilon_{i-1}$ for all $i$).
\begin{figure}
\centering \includegraphics{figs/concave}
\caption{}
\figlab{concave}
\end{figure}
The concavity of $v(\cdot)$, see \figref{concave}, implies that
\begin{equation*}
\frac{v(\rv_{i+2}) - v(\rv_{i+1})}
{\rv_{i+2} - \rv_{i+1}}
\geq
\frac{v(\rv_{i+1}) - v(\rv_{i})}
{\rv_{i+1} - \rv_{i}}
\qquad\implies\qquad%
\frac{\rv_{i+1} - \rv_{i}}
{\rv_{i+2} - \rv_{i+1}}
\leq
\frac{v(\rv_{i+1}) - v(\rv_{i})}
{v(\rv_{i+2}) - v(\rv_{i+1})},
\end{equation*}
as $\rv_{i+1} - \rv_{i} < 0$ and
$v(\rv_{i+2}) - v(\rv_{i+1}) > 0$. Since
\begin{math}
V(\rv_{i+1}) = \varepsilon_{i+1} = 2\varepsilon_i = 2V(\rv_i),
\end{math}
we have that
\begin{math}
v(\rv_{i+1}) = 2^{1/(d-1)}v(\rv_i).
\end{math}
For $i < \tau$, let $\ell_{i} = \rv_{i} - \rv_{i+1}$. Plugging
this into the above, observe
\begin{equation*}
\frac{\ell_{i}}{\ell_{i+1}}
=%
\frac {\rv_{i} - \rv_{i+1}}
{\rv_{i+1} - \rv_{i+2}}
\leq
\frac{v(\rv_{i+1}) - v(\rv_{i})}
{v(\rv_{i+2}) - v(\rv_{i+1})}
=%
\frac{(2^{1/(d-1)}-1) v(\rv_i)}
{2^{1/(d-1)}(2^{1/(d-1)}-1 )v(\rv_{i})}
=%
\frac{1}{2^{1/(d-1)}}.
\end{equation*}
Since $\ell_{\tau-1} \leq \rv_{\tau-1} \leq 1/2^{\tau-1}$, we have
\begin{align*}
\rv_1
&=%
\rv_\tau + \sum_{i=1}^{\tau-1} \ell_i
\leq%
\rv_\tau +
\ell_{\tau-1}\pth{1 + \frac{1}{2^{1/(d-1)}} +
\frac{1}{2^{2/(d-1)}} + \cdots }
\\&%
\leq%
\rv_\tau + 2 d \ell_{\tau-1}
\leq%
(2d+1)
\rv_{\tau-1}
<%
\frac{2d+1}{2^{\tau-1}}
<
\frac{\varepsilon^{1/d}}{4d^2},
\end{align*}
by the value of $\tau$, see \Eqref{tau:value}.
Let $I_1$ be the maximal interval such that
$V(x) \geq \varepsilon_1$ for all $x \in I_1$. By the above, we have
that if the net does not intersect $\Mh{\Xi}$, then
$\lenX{I_1} \leq 2\rv_1 \leq 2{\varepsilon^{1/d}}/(4d^2)$.
We define $I_2, \ldots, I_d$ in a similar fashion on the other
axes, and the same argument implies that
$\lenX{I_j} \leq 2{\varepsilon^{1/d}}/(4d^2)$, for all $j$. Furthermore,
any hyperplane orthogonal to one of the axes that avoids the box
$B = I_1 \times I_2 \times \cdots \times I_d$ intersects
$\Mh{\Xi}$ in a set of volume at most $\varepsilon_1$. We conclude that the total
volume of $\Mh{\Xi}$ is at most
\begin{equation*}
\volX{\Mh{\Xi}}%
\leq%
\volX{B} +
\sum_{j=1}^d \int_{y \in [0,1] \setminus I_j}
\volX{\Mh{\Xi} \cap (x_j = y)\Bigr.} dy
\leq
\prod_{j=1}^d \lenX{I_j} + d \varepsilon_1 \ll \varepsilon,
\end{equation*}
which is a contradiction to $\volX{ \Mh{\Xi}} \geq \varepsilon$.
\end{proof}
\begin{theorem}
\thmlab{ke-nets-opt}%
%
Given $\varepsilon \in (0,1)$ and $k \in \{1,\ldots, d-1\}$, the above is
a deterministic and explicit construction of a \kenet{k}{\eps}{} for
$[0,1]^d$ of size $O_d(1/\varepsilon^{1 - k/d})$.
\end{theorem}
\section{Constructing \knet{0}{}s}
\subsection{Ellipsoids are enough}
We now give constructions for \knet{0}{}s. The following result shows
that it suffices to build such nets when the convex bodies are
restricted to be ellipsoids.
\begin{lemma}
\lemlab{reduce-to-pts}%
%
Suppose there exists an $\varepsilon$-net (i.e., \knet{0}) for the volume
measure over $\HCX{d}$ for ellipsoids of size $T(\varepsilon,\altDim)$,
for $\altDim=1,\ldots, d$. Then one can construct a \knet{0} for the
volume measure over $\HCX{d}$ of size $T(\varepsilon/d^d, d)$.
\end{lemma}
\begin{proof}
Consider any convex body $\Mh{\Xi}$, such that
$\volX{\Mh{\Xi} \cap [0,1]^d} \geq \varepsilon$. Let $\EC$ be the ellipsoid
of largest volume contained inside $\Mh{\Xi} \cap [0,1]^d$. By John's
ellipsoid theorem, we have that
$\EC \subseteq \Mh{\Xi} \subseteq d\EC$. In particular,
\begin{equation*}
\volX{\EC}%
=%
\volX{d\EC}/d^d%
\geq%
\frac{\volX{\Mh{\Xi}}}{d^d}%
\geq
\frac{\varepsilon}{d^d}.
\end{equation*}
As such, any \kenet{0}{\varepsilon/d^d} when the convex bodies are
restricted to be ellipsoids is a \knet{0} in the general setting.
\end{proof}
Hence, we focus on building $\varepsilon$-nets for ellipsoids. Note that
it is easy to obtain an $\varepsilon$-net of size $O_d(\varepsilon^{-1} \log \varepsilon^{-1})$
by random sampling \cite{hw-ensrq-87}. Here, we give a deterministic,
explicit construction of such a net.
\subsection{Stabbing ellipsoids with points}
\seclab{stab-e-pts}
\begin{figure}
\centerline{\includegraphics[scale=0.85]{figs/net_example}}
\caption{The net constructed.}
\figlab{c:net}
\end{figure}
\subsubsection{Net construction in 2D}
Let $\EC$ be an ellipse contained in the unit square $\HCX{2}$ with
$\areaX{\EC} \geq \varepsilon$. The following construction is inspired by a
construction of Pach and Tardos \cite{pt-tlbse-13}.
\myparagraph{Construction.}
Let $M = 3 + \ceil{\lg \varepsilon^{-1}}$. For $j=1, \ldots, M-1$, consider
the rectangle
\begin{equation*}
R_j = [0, 1/2^{M - j} ] \times [0,1/2^{j}].
\end{equation*}
Consider the natural tiling of $[0,1]^2$ by the rectangle $R_j$, and
let $\PS_j$ be the set of vertices of the resulting grid $\Mh{G}_j$ in
the interior of the unit square. Let $\net = \cup_j \PS_j$. See
\figref{c:net}.
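The construction is a few lines of code. The sketch below builds $\net$ as a set of grid points and probes it with a thin ellipse; the specific ellipse center and semi-axes are arbitrary test values of the sketch, not part of the construction.

```python
import math

def build_net(eps):
    """The 2D net: interior grid vertices of the tilings by R_j,
    for j = 1, ..., M-1, with M = 3 + ceil(lg(1/eps))."""
    M = 3 + math.ceil(math.log2(1 / eps))
    net = set()
    for j in range(1, M):
        nx, ny = 2 ** (M - j), 2 ** j   # R_j has width 1/nx, height 1/ny
        net.update((a / nx, b / ny)
                   for a in range(1, nx) for b in range(1, ny))
    return net

net = build_net(1 / 32)
# A thin ellipse of area pi * 0.4 * 0.05 ~ 0.063 >= eps = 1/32;
# the parameters below are arbitrary test values.
cx, cy, rx, ry = 0.31, 0.47, 0.4, 0.05
assert any(((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2 <= 1
           for (x, y) in net)
```

Since the grids share many vertices (stored in a set, so duplicates are discarded), the actual point count is somewhat below the $\sum_j (2^{M-j}-1)(2^j-1) = O(\varepsilon^{-1}\log\varepsilon^{-1})$ bound.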
\myparagraph{Correctness.}
We need the following easy observation, whose proof is included for
the sake of completeness.
\begin{claim}
\clmlab{center:e}
Let $c$ be the center of an ellipse $\EC$, and let $\Mh{\mathcalb{h}}$ be the
longest horizontal segment contained in $\EC$. The segment
$\Mh{\mathcalb{h}}$ passes through $c$.
\end{claim}
\begin{proof}
By the central symmetry of $\EC$, if $\Mh{\mathcalb{h}}$ does not pass through
$c$, then it has a symmetric reflection $\Mh{\mathcalb{h}}'$ through $c$,
which is a horizontal segment of the same length. Let $\Line$ be
the horizontal line through $c$, and observe that
$\cardin{\Line \cap \EC} \geq \cardin{\Mh{\mathcalb{h}}}$ by convexity. By the
smoothness of $\EC$, it follows that
$\cardin{\Line \cap \EC} > \cardin{\Mh{\mathcalb{h}}}$, which is a
contradiction.
\end{proof}
\begin{lemma}
\lemlab{eps:net:2}%
%
The set $\net$ constructed above is an $\varepsilon$-net for the volume
measure over $\HCX{2}$ for ellipses. Furthermore,
$\cardin{\net} = O(\varepsilon^{-1} \log \varepsilon^{-1})$.
\end{lemma}
\begin{proof}
Observe that for any $j$, we have
$\areaX{R_j} = 2^{-(M -j) - j} = 2^{-M} \geq \varepsilon/16$. As such,
$\cardin{\PS_j} = O(1/\varepsilon)$, and
$\cardin{\net} = O(M /\varepsilon) = O(\varepsilon^{-1} \log \varepsilon^{-1})$.
Let $\EC \subseteq [0,1]^2$ be any ellipse with
$\areaX{\EC} \geq \varepsilon$. Let $Y$ denote the projection of $\EC$
onto the $y$-axis. Observe that $\cardin{Y} \geq \varepsilon$. Let
$\Mh{\mathcalb{h}}$ be the longest horizontal segment contained in $\EC$
(which passes through the center of $\EC$ by \clmref{center:e}).
The two extreme points of $\EC$ in the $y$-direction, together with
the segment $\Mh{\mathcalb{h}}$, form a quadrilateral in $\EC$ of area
$\cardin{\Mh{\mathcalb{h}}}\cardin{Y}/2$, see \figref{pf-of-0-net-2d}. Let
$Y = [y_-, y_+]$, and for $\alpha \in Y$, let
$g(\alpha) = \cardin{ \set{y=\alpha} \cap \EC} $. We have that
\begin{equation*}
\cardin{\Mh{\mathcalb{h}}}\cardin{Y}/2%
\leq %
\areaX{\EC}%
=%
\int_{\alpha=y_-}^{y_+} g(\alpha)
\mathrm{d} \alpha
\leq
\cardin{\Mh{\mathcalb{h}}}\cardin{Y}.
\end{equation*}
Since $\areaX{\EC}\geq\varepsilon$, we conclude that
$ \cardin{\Mh{\mathcalb{h}}} \geq \varepsilon/\cardin{Y}$.
\begin{figure}
\phantom{}%
\hfill%
\includegraphics[page=2]{figs/ellipse_net} \hfill%
\includegraphics[page=1]{figs/ellipse_net} \hfill%
\phantom{}%
\caption{The setup for proof of correctness.}
\figlab{pf-of-0-net-2d}
\end{figure}
We set $y_{1/4} = (3/4)y_- + (1/4)y_+$ and
$y_{3/4} = (1/4)y_- + (3/4)y_+$. Consider the two horizontal
segments $\Mh{\mathcalb{h}}_{1/4} = \set{y=y_{1/4}} \cap \EC$ and
$\Mh{\mathcalb{h}}_{3/4} = \set{y=y_{3/4}} \cap \EC$. These two segments are
of the same length and are parallel. Furthermore,
$\gamma = \cardin{\Mh{\mathcalb{h}}_{1/4}} = \cardin{\Mh{\mathcalb{h}}_{3/4}} \geq
\cardin{\Mh{\mathcalb{h}}}/2$, see \figref{pf-of-0-net-2d}. Consider the
parallelogram $Z$ formed by the convex hull of $\Mh{\mathcalb{h}}_{1/4}$ and
$\Mh{\mathcalb{h}}_{3/4}$. Observe that for any
$\alpha \in [y_{1/4}, y_{3/4}]$, we have that
$\cardin{\set{y = \alpha} \cap Z} = \gamma$. As such,
$\areaX{Z} = \gamma \cdot \cardin{Y}/2 \geq \cardin{\Mh{\mathcalb{h}}}\cardin{Y}/4 \geq \varepsilon/4$. Let
$k$ be the minimum integer such that
$1/2^{k+1} \leq \cardin{Y}/2$. Since $|Y| \geq \varepsilon$, it follows
that $k < M-2$.
This implies that the grid $G_{k+1}$ has a horizontal line
$\Line_k$ that intersects $Z$. Furthermore, we have
\begin{equation*}
\cardin{\Line_k \cap \EC}%
\geq%
\cardin{\Line_k \cap Z} %
=%
\gamma
\geq%
\frac{\cardin{\Mh{\mathcalb{h}}}}{2}%
\geq%
\frac{\varepsilon}{2|Y|}%
> %
\varepsilon 2^{k-2}%
\geq%
\frac{8 \cdot 2^{k-2}}{2^M}
=%
\frac{1}{2^{M-(k+1)}} = \beta.
\end{equation*}
Here, the strict inequality holds since $|Y| < 2^{1-k}$, by the
minimality of $k$, and the second to last inequality holds since
$\varepsilon \geq 8/2^M$, as $M = 3 + \ceil{\lg \varepsilon^{-1}}$. Namely,
the spacing $\beta$ of the points of $\Mh{G}_{k+1}$ on the line
$\Line_k$ is shorter than the length of the interval
$\Line_k \cap \EC$. It follows that a point of
$\PS_{k+1} \subseteq \net$ lies in $\EC$, thus
establishing the claim.
\end{proof}
\subsubsection{The construction in higher dimensions}
We now extend the previous construction to higher dimensions. The
construction is recursive. Namely, we assume that for all $d' < d$, we
can construct an $\varepsilon$-net for the volume measure over $\HCX{d'}$ for
ellipsoids, of size $(\beta(d')/\varepsilon)\lg^{d'-1}(1/\varepsilon)$, where
$\beta(d')$ is a constant depending on the dimension $d'$ (to be
determined shortly). \lemref{eps:net:2} proves the claim when $d = 2$.
\myparagraph{Construction.}
Label the $d$ axes $x_1, \ldots, x_d$. Let
$\tau = \ceil{(1/d)\lg(1/\varepsilon)}$ and define
the function $\Delta(i) = 2^i \varepsilon^{1/d}$. We repeat the following
construction for each axis $x_\ell$, where $\ell = 1, \ldots, d$. For
each $i = 0, \ldots, \tau$, let $M_i = \ceil{\lg(1/\Delta(i))}$. For
each $i$, and for each $j = 0, \ldots, M_i$, form $2^j+1$ evenly
spaced hyperplanes which are orthogonal to the axis $x_\ell$ (thus
consecutive hyperplanes are separated by distance $2^{-j}$). For each
hyperplane $h$, we recursively construct a
\kenet{0}{\varepsilon/\Delta(i+2)} $\PS_{\ell,i,j}$ for $[0,1]^{d-1}$ on
$h \cap [0,1]^d$. Let
$\PS_\ell = \cup_{i=0}^{\tau} \cup_{j = 0}^{M_i} \PS_{\ell,i,j}$.
Finally, we claim the point set $\PS = \cup_{\ell=1}^d \PS_\ell$ is
the desired \knet{0}.
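The recursion can again be tallied mechanically. The sketch below counts the points generated by the $d$-dimensional construction, using the 2D grid net as the base case; treating a level whose sub-net parameter is at least $1$ as vacuous, and reading the logarithms as base $2$, are assumptions of the sketch.

```python
import math

def net0_size(eps, d):
    """Point count of the recursive 0-net for ellipsoids in [0,1]^d.

    Counting sketch: the 2D base case uses the grid construction
    (sum of interior grid vertices over all tilings), and a level
    whose sub-net parameter is >= 1 is treated as vacuous.
    """
    if eps >= 1:
        return 0
    if d == 2:
        M = 3 + math.ceil(math.log2(1 / eps))
        return sum((2 ** (M - j) - 1) * (2 ** j - 1) for j in range(1, M))
    tau = math.ceil(math.log2(1 / eps) / d)
    total = 0
    for i in range(tau + 1):
        delta_i = 2.0 ** i * eps ** (1.0 / d)          # Delta(i)
        delta_i2 = 2.0 ** (i + 2) * eps ** (1.0 / d)   # Delta(i+2)
        M_i = math.ceil(math.log2(1 / delta_i)) if delta_i < 1 else 0
        for j in range(M_i + 1):
            # d axes; 2^j + 1 hyperplanes orthogonal to each axis, each
            # carrying a recursive net for eps / Delta(i+2) in d-1 dims.
            total += d * (2 ** j + 1) * net0_size(eps / delta_i2, d - 1)
    return total
```

For $d = 3$, shrinking $\varepsilon$ by a factor of $16$ increases the count by a factor far smaller than $16 \lg^2$-type worst-case bookkeeping would suggest, in line with the $\varepsilon^{-1}\lg^{d-1}\varepsilon^{-1}$ bound.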
\begin{theorem}
\thmlab{0-net-e-d}%
%
For $\varepsilon \in (0, 2^{-2d}]$, there exists an $\varepsilon$-net (i.e.,
\knet{0}) for the volume measure over $\HCX{d}$ for ellipsoids, of
size $2^{O(d^2)}\varepsilon^{-1}\lg^{d-1} \varepsilon^{-1}$.
\end{theorem}
\begin{proof}
We first bound the size of the resulting net. Since
$\varepsilon \leq 2^{-2d}$, by a direct calculation,
\allowdisplaybreaks
\begin{align*}
\cardin{\PS}
&\leq%
\sum_{\ell=1}^d \cardin{\PS_\ell}
\leq d \sum_{i=0}^\tau \sum_{j=0}^{M_i} (2^j+1) \cdot
\beta(d-1)
\cdot \pth{\frac{\Delta(i+2)}{\varepsilon}
\lg^{d-2}\pth{\frac{\Delta(i+2)}{\varepsilon}}}
\\&%
\leq%
\frac{2 d \cdot \beta(d-1)}{\varepsilon} \sum_{i=0}^\tau 2^{M_i + 1}
\cdot 2^2 \Delta(i) \lg^{d-2}\pth{\frac{\Delta(i+2)}{\varepsilon}}%
\\&%
\leq%
\frac{2^5 d \cdot \beta(d-1)}{\varepsilon} \sum_{i=0}^\tau
\lg^{d-2}\pth{\frac{2^{i+2}}{\varepsilon^{1-1/d}}}
\\&%
\leq%
\frac{2^5 d \cdot \beta(d-1)}{\varepsilon} \sum_{i=0}^\tau \pth{(i+2) +
\lg\pth{\frac{1}{\varepsilon^{1-1/d}}}}^{d-2}.
\end{align*}
Since $i + 2 \leq \tau + 2 \leq \lg(1/\varepsilon)$ for
$\varepsilon \leq 2^{-2d}$, we have
\begin{align*}
\cardin{\PS}
&\leq%
\frac{2^5 d \cdot \beta(d-1)}{\varepsilon} \left[(\tau+1) \cdot 2^{d-2}
\lg^{d-2}\pth{\frac{1}{\varepsilon}} \right]
\leq%
\frac{2^5 d \cdot \beta(d-1)}{\varepsilon} \left[
\frac{4}{d}\lg{\frac{1}{\varepsilon}} \cdot 2^{d-2}
\lg^{d-2}{\frac{1}{\varepsilon}} \right].
\end{align*}
As such, $\cardin{\PS} \leq%
\frac{2^{d+5} \cdot \beta(d-1)}{\varepsilon}
\lg^{d-1}\pth{\frac{1}{\varepsilon}}$. In particular, we obtain the
recurrence $\beta(d) = 2^{d+5} \beta(d-1)$, which solves to
$\beta(d) = 2^{O(d^2)}$. Hence,
$\cardin{\PS} = 2^{O(d^2)}\varepsilon^{-1}\lg^{d-1}\varepsilon^{-1}$.
We now argue correctness. Let $\EC$ be an ellipsoid of volume at
least $\varepsilon$. Let $B$ be the smallest enclosing axis-aligned
box for $\EC$. Suppose that the longest edge of $B$ is along the
$\ell$\th axis. In particular, along this $\ell$\th axis $B$
has side length $s \geq \varepsilon^{1/d}$, for otherwise
$\volX{\EC} \leq \volX{B} \leq s^d < \varepsilon$. We claim that
$\EC$ intersects a point in the set $\PS_\ell$.
Let $L = [\ell_-, \ell_+]$ be the projection of $\EC$ onto the
$\ell$\th axis, with $s = \cardin{L}$. For $x \in L$, define
$H(x)$ to be the hyperplane orthogonal to the $\ell$\th axis which
intersects the $\ell$\th axis at $x$. Finally, let $K$ be the
hyperplane through the center of $\EC$ which is orthogonal to the
$\ell$\th axis and set $\ECB = \EC \cap K$. We claim that
$\volX{\ECB} \geq \varepsilon/s$. To prove the claim, suppose towards
contradiction that $\volX{\EC \cap K} < \varepsilon/s$. Then,
\begin{align*}
\volX{\EC} =
\int_{\ell_-}^{\ell_+} \volX{\EC \cap H(x)} \, \mathrm{d}x
< \frac{\varepsilon}{s} \int_{\ell_-}^{\ell_+} 1\,\mathrm{d}x
= \frac{\varepsilon}{s} \cardin{L} = \varepsilon,
\end{align*}
a contradiction.
Choose an integer $i \geq 0$ such that
$s \in [\Delta(i), \Delta(i+1))$. Let
$z_{1/4} = (3/4)\ell_- + (1/4)\ell_+$ and
$z_{3/4} = (1/4)\ell_- + (3/4)\ell_+$. Observe that for all
$x \in [z_{1/4}, z_{3/4}]$,
$\volX{\EC \cap H(x)} \geq \varepsilon/(2s) \geq \varepsilon/\Delta(i+2)$.
Next, let $j$ be the minimum integer such that
$1/2^{j+1} \leq s/2$. Note that such an integer exists, as we can
choose $j = \ceil{\lg(1/s)}$. Since $s \geq \Delta(i)$,
$j \leq \ceil{\lg(1/\Delta(i))} \leq M_i$. Thus, for our choices
of $i$ and $j$, we have found a hyperplane $h$ which intersects
$\EC$ with $\volX{\EC \cap h} \geq \varepsilon/\Delta(i+2)$. By our
recursive construction, there is a point of the net $\PS_{\ell,i,j}$
in $\EC \cap h$, and thus in $\EC$.
\end{proof}
\begin{theorem}
\thmlab{k-flat-net-det}%
%
There is a deterministic, explicit construction of \knet{0}{}s
for $[0,1]^d$ of size
\begin{equation*}
O_d\pth{\frac{1}{\varepsilon} \log^{d-1}\frac{1}{\varepsilon}}.
\end{equation*}
\end{theorem}
\begin{proof}
Follows by plugging in the bound for \thmref{0-net-e-d} into
\lemref{reduce-to-pts}.
\end{proof}
\section{Conclusion}
The main open problem left by our work is bounding the size of \kenet{k}{\eps}{}s in
the general case. That is, the input is a set $\PS$ of $n$ points in
$\Re^d$, and we would like to compute a minimum set of $k$-flats which
stab all convex bodies containing at least $\varepsilon n$ points of $\PS$.
As noted earlier, there is a \kenet{k}{\eps}{} of asymptotically
the same size as of a weak $\varepsilon$-net in $\Re^{d-k}$. This follows by
projecting the point set to a subspace of dimension $d-k$, constructing
a regular weak $\varepsilon$-net, and lifting the net back to the original space.
Can one do better than this somewhat na\"ive\xspace construction?
Note that it is easy to show a lower bound of size $\Omega(1/\varepsilon)$
for \knet{1}{}s in the general case. Take a point set that consists of
$\ceil{2/\varepsilon}$ equally sized clusters of tightly packed points, such
that no line passes through three clusters. Namely, our sublinear
results in $1/\varepsilon$ are special for the uniform measure on the
hypercube.
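The lower-bound instance is easy to generate explicitly. In the plane, placing the cluster centers on a parabola guarantees that no line passes through three of them, since a line meets a parabola in at most two points. The sketch below checks this for a small instance; scaling the integer centers into $[0,1]^2$ and replacing each center by a tight cluster of points is omitted.

```python
import math
from itertools import combinations

def cluster_centers(eps):
    """Centers for the Omega(1/eps) lower-bound instance: ceil(2/eps)
    cluster centers on the parabola y = x^2, with integer coordinates
    (to be rescaled into the unit square)."""
    m = math.ceil(2 / eps)
    return [(i, i * i) for i in range(m)]

def collinear(a, b, c):
    """Exact integer orientation test: True iff a, b, c lie on a line."""
    return (b[0] - a[0]) * (c[1] - a[1]) == (b[1] - a[1]) * (c[0] - a[0])

pts = cluster_centers(0.25)          # 8 cluster centers
assert all(not collinear(a, b, c)
           for a, b, c in combinations(pts, 3))
```

Since any line stabs at most two clusters, covering all $\varepsilon n$-heavy bodies (each cluster being one such body) requires $\Omega(1/\varepsilon)$ lines.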
\myparagraph{Acknowledgements.} %
We thank an anonymous reviewer for sketching an improved construction
of \kenet{k}{\eps}{}s for $k \geq 1$, which led to \thmref{ke-nets-opt}. Our
previous construction had an additional $\log$ term.
\BibTexMode{%
\SoCGVer{%
\bibliographystyle{plainurl}%
}%
\NotSoCGVer{%
\bibliographystyle{alpha}%
}%
}%
% arXiv:2007.09874 -- "A Note on Stabbing Convex Bodies with Points, Lines, and Flats", Computational Geometry (cs.CG).
% Source: https://arxiv.org/abs/1312.6344
\title{Relation Between Surface Codes and Hypermap-Homology Quantum Codes}
\begin{abstract}
Recently, a new class of quantum codes based on hypermaps was proposed. These codes are obtained from embeddings of hypergraphs, as opposed to surface codes, which are obtained from embeddings of graphs. It is natural to compare these two classes of codes and their relation to each other. In this context, two related questions are addressed in this paper: Can the parameters of hypermap-homology codes be superior to those of surface codes, and what precisely is the relation between these two classes of quantum codes? We show that a canonical hypermap code is identical to a surface code, while a noncanonical hypermap code can be transformed to a surface code by CNOT gates alone. Our approach is constructive; we construct the related surface code and the transformation involving CNOT gates.
\end{abstract}
\section{Introduction}
Surface codes, proposed by Kitaev \cite{kitaev03}, are an extremely appealing class of codes for fault tolerant quantum computation \cite{dennis02,raussen07}. They have been generalized in various directions \cite{bravyi98,bullock07,bombin07b,bombin07,bombin06,tillich09, zemor09}.
Recently, a new generalization of surface codes was proposed in \cite{martin13}. In this approach quantum codes are constructed based on the homology of hypergraphs rather than the homology of graphs, which is the case in surface codes.
Hypergraphs are a generalization of graphs; the edges of a hypergraph can be incident on two or more vertices. Just as the embedding of graphs on surfaces gives rise to codes which are topological in nature \cite{kitaev03},
embeddings of hypergraphs also give rise to topological quantum codes. These codes have been termed hypermap-homology
codes in \cite{martin13}; we also refer to them as hypermap codes.
Arguably, the most popular surface code is Kitaev's toric code defined on a
square lattice of size $n\times n$ with periodic boundary conditions \cite{kitaev03}. This code is a $[[2n^2,2,n]]$ quantum code. On the other hand, Ref.~\cite{martin13} proposed a hypermap homology code defined on the $n\times n $ square lattice. This is a $[[3n^2/2,2,n]]$ code and
it is more efficient than Kitaev's toric code with
respect to the number of physical qubits required to protect the
same number of logical qubits while maintaining the same level of error correcting capability.
This seemed to suggest that better quantum codes may be obtained by using hypergraphs. But there are other surface codes with better parameters than the $[[2n^2,2,n]]$ toric code. There exist surface codes whose parameters are $[[n^2+1,2,n]]$, see \cite{bombin07c,kovalev12}.
This raises the question of whether hypermap codes improve upon the parameters of the best surface codes.
A related question is the exact relation between hypermap codes and surface codes.
It was also not apparent which codes could be realized using hypermaps but not by embedding graphs on surfaces.
In this paper we address these questions. The construction of hypermap codes
requires us to make a choice as to whether we will represent certain boundary maps using the standard basis
or not. We call those codes with the standard basis canonical hypermap codes and those with a nonstandard basis
noncanonical hypermap codes. We show that for every canonical hypermap code there is a surface code which is
identical to the hypermap code. This implies that the $[[3n^2/2,2,n]]$ hypermap code, although it improves upon the $[[2n^2,2,n]]$ toric code, is identical to another surface code.
We also show that every noncanonical hypermap code can be transformed to a surface
code; we only need CNOT gates for this transformation. In some cases, a
noncanonical hypermap code can be identical to a surface code. A hypermap code that cannot be realized by a surface code must be a noncanonical code.
Our results imply that hypermap codes that improve upon surface codes or which cannot be realized as surface codes must be noncanonical hypermap codes.
Through our results, many questions related to hypermap codes can be posed as questions about
surface codes and we can study hypermap codes using surface codes. For instance, decoding of these codes can be studied in terms of related surface codes.
This paper is structured as follows. In Section~\ref{sec:bac} after briefly reviewing surface codes, we give a self-contained introduction to hypermap codes. Then in Section~\ref{sec:main} we present our main results which clarify the relation between canonical and noncanonical hypermap codes on one hand
and the relation between hypermap codes and surface codes on the other. We then conclude with a brief discussion on the significance and consequences of our results.
\section{Background} \label{sec:bac}
\subsection{Surface codes}
We assume that the reader is familiar with the basics of quantum codes and stabilizer formalism \cite{gottesman97,calderbank98}.
Let $\mathsf{G}$ be a graph with vertex set $\mathsf{V}(\mathsf{G})$ and edge set $\mathsf{E}(\mathsf{G})$; when there is no confusion
we drop the dependence on $\mathsf{G}$. Consider an embedding of the graph on a surface $\Sigma$
i.e. a drawing of the graph on the surface so that no two edges cross each other. Denote the
faces of the embedding as $\mathsf{F}(\mathsf{G})$. We restrict our attention only to those embeddings in which the
faces are homeomorphic to open discs. Such an embedding is also called a 2-cell embedding or a map. A
(stabilizer) quantum code is obtained from the embedding as follows. On each edge we place a qubit.
We associate to each vertex an operator, called the vertex
operator and denoted by $A_v$, and to each face an operator, denoted by $B_f$. The face operators are also
sometimes referred to as plaquette operators. These operators are defined
as follows:
\begin{eqnarray}
A_v &=& \prod_{u\in \delta(v)}X_u, \mbox{ where } \delta(v)=\{ u\mid (u,v) \in \mathsf{E}(\mathsf{G}) \}\label{eq:vertex-op}\\
B_f & =&\prod_{e\in \partial(f)} Z_e,\label{eq:face-op}
\end{eqnarray}
where $ \partial(f) =\{e\mid e \mbox{ is in the boundary of $f$} \}$; while $X$ and $Z$ are the Pauli matrices.
The surface code is defined as the joint +1-eigenspace stabilized by the operators $A_v$ and $B_f$.
In other words, it is the quantum code with the stabilizer
\begin{eqnarray}
S&=& \langle A_v, B_f \mid v\in \mathsf{V}(\mathsf{G}), f\in \mathsf{F}(\mathsf{G})\rangle.
\label{eq:surf-stabilizer}
\end{eqnarray}
The stabilizer matrix of the surface code can be represented by
$\left[\begin{array}{cc}\mathsf{I}_\mathsf{V}& 0 \\0 & \mathsf{I}_\mathsf{F} \end{array} \right]$, where
$\mathsf{I}_\mathsf{V}$ is the vertex-edge incidence matrix of $\mathsf{G}$ and $\mathsf{I}_\mathsf{F}$ is the face-edge incidence matrix of $\mathsf{G}$.
The surface code is an $[[|\mathsf{E}|, 2g]]$ quantum code, where $g$ is the genus of the surface on which $\mathsf{G}$ is embedded.
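As a concrete illustration, the sketch below builds the matrices $\mathsf{I}_\mathsf{V}$ and $\mathsf{I}_\mathsf{F}$ for Kitaev's $n \times n$ toric grid and checks that every vertex operator commutes with every plaquette operator, i.e., that each pair of rows shares an even number of edges. The edge-orientation and face-labeling conventions are assumptions of the sketch.

```python
def toric_incidence(n):
    """Vertex-edge and face-edge incidence matrices (mod 2) of the n x n
    toric grid. Conventions (assumed here): edge h(i,j) leaves vertex
    (i,j) in the +j direction, v(i,j) in the +i direction, with
    periodic boundary conditions; faces are indexed by their corner."""
    verts = [(i, j) for i in range(n) for j in range(n)]
    edges = [(t, i, j) for t in 'hv' for i in range(n) for j in range(n)]
    eidx = {e: c for c, e in enumerate(edges)}
    IV = [[0] * len(edges) for _ in verts]   # vertex-edge incidence
    IF = [[0] * len(edges) for _ in verts]   # face-edge incidence
    for c, (i, j) in enumerate(verts):
        # Star of vertex (i,j): the four incident edges.
        for e in [('h', i, j), ('h', i, (j - 1) % n),
                  ('v', i, j), ('v', (i - 1) % n, j)]:
            IV[c][eidx[e]] = 1
        # Plaquette with corner (i,j): its four boundary edges.
        for e in [('h', i, j), ('h', (i + 1) % n, j),
                  ('v', i, j), ('v', i, (j + 1) % n)]:
            IF[c][eidx[e]] = 1
    return IV, IF

IV, IF = toric_incidence(3)
# Every A_v commutes with every B_f: shared supports have even size.
assert all(sum(x * y for x, y in zip(rv, rf)) % 2 == 0
           for rv in IV for rf in IF)
```

The number of columns is $2n^2$, matching the qubit count of the $[[2n^2,2,n]]$ toric code discussed in the introduction.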
\subsection{Hypermap homological codes}
We now review the hypermap-homological codes proposed in \cite{martin13}. The reader can find more details on
these codes therein.
Let $\Gamma$ be a hypergraph with vertex set $\mathsf{V}(\Gamma)$ and hyperedge set $
\mathsf{E}(\Gamma)$. A hyperedge is any nonempty subset of the vertex set. If $\mathsf{E}(\Gamma)$ has only subsets of size two, then $\Gamma$ reduces to a standard graph. (We use the Greek alphabet for hypergraphs only.) Hypergraphs are often studied by means of a bipartite graph representation. This bipartite graph is formed as follows:
Form a bipartite graph $\mathsf{G}_\Gamma$ with vertex set $\mathsf{V}(\mathsf{G}_\Gamma) = \mathsf{V}(\Gamma) \cup \mathsf{E}(\Gamma)$.
Place an edge between $v \in \mathsf{V}(\Gamma) $ and $e\in \mathsf{E}(\Gamma)$ if and only if $e$ is incident on $v$.
We refer to the edges of the bipartite graph as darts or half-edges and denote the collection of
darts by $\mathsf{E}(\mathsf{G}_\Gamma)$. The darts will also be denoted as $\mathsf{W}(\Gamma)$.
For any dart $i$, we denote the unique vertex in $\mathsf{V}(\Gamma)$ on which $i$ is incident by $v_{\ni i}$ and the unique hyperedge on which $i$
is incident by $e_{\ni i}$.
Consider the embedding of $\mathsf{G}_\Gamma$ on a surface $
\Sigma$. Denote the faces of $\mathsf{G}_\Gamma$ by
$\mathsf{F}(\mathsf{G}_\Gamma)$. An embedding of $\mathsf{G}_\Gamma$ is called
a hypermap. The labeling of the darts in the hypermap is performed as follows: we place a label to the left of the dart as we move along the dart from a hyperedge to an adjacent vertex in $\mathsf{G}_\Gamma$. Alternatively, we place the label counterclockwise of the dart with respect to the hyperedge on which it is
incident. To distinguish between vertices and hyperedges of $\Gamma$,
elements in $\mathsf{E}(\Gamma)$ are shown as squares
whereas elements in $\mathsf{V}(\Gamma)$ are shown as circles, see Fig.~\ref{fig:dart-labeling}.
If the hypergraph is connected then the bipartite graph is also connected and it is possible to associate a pair of
permutations $\sigma, \tau \in S_n$ acting on the darts of the hypermap such that the group $\langle \sigma, \tau\rangle $ is transitive on the set of darts. Here $S_n$ is the symmetric group of permutations on $n$
elements.
Let us consider a simple example. Consider a hypergraph with 2 vertices and 2 hyperedges, where
$\mathsf{V} = \{v_1,v_2 \}$ and
$\mathsf{E} = \left\{(v_1,v_2,v_1,v_2 ), (v_1,v_2,v_1,v_2 )\right\} =\{e_1,e_2\}$.
(Vertices can be repeated in a hyperedge.) The associated bipartite graph, see Fig.~\ref{fig:dart-labeling}, has $4$ vertices and $8$ darts.
An embedding of this hypergraph, i.e. the embedding of its bipartite graph representation, on a torus has 4 faces, 4 vertices, and 8 darts.
\begin{center}
\begin{figure}[h]
\includegraphics{fig-hypergraph}
\caption{A hypergraph embedded on a torus; opposite ends are to be identified. The circles are the vertices of the hypergraph.
The square vertices are the hyperedges of the hypergraph. The labels are to the left as one moves from squares to the circles.}
\label{fig:dart-labeling}
\end{figure}
\end{center}
In Fig.~\ref{fig:dart-labeling} the permutations $\sigma $ and $\tau$ are defined as follows:
\begin{eqnarray*}
\sigma &= & (\begin{array}{cccc}1&8&3&6\end{array})(\begin{array}{cccc}2&5&4&7\end{array})\\
\tau& = &(\begin{array}{cccc}1&2&3&4\end{array})(\begin{array}{cccc}5&6&7&8\end{array})\\
\sigma\tau^{-1}&=&(\begin{array}{cc}1&7\end{array})(\begin{array}{cc}2&8\end{array})(\begin{array}{cc}3&5\end{array})(\begin{array}{cc}4&6\end{array}),
\end{eqnarray*}
where $\sigma\tau^{-1}(i) = \sigma(\tau^{-1}(i))$.
The number of orbits of $\sigma$ is the number of vertices of the original hypergraph, while the number of orbits of $\tau$ is the number of hyperedges of the hypergraph. The number of orbits of $\sigma\tau^{-1}$ is the number of faces in the embedding of the hypergraph.
A dart belongs to a face if and only if the label is in the interior of the face.
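As a sanity check, the orbit counts in this example can be verified mechanically from the cycles above; the following Python sketch (with the cycles of $\sigma$ and $\tau$ taken verbatim from the text) confirms 2 vertices, 2 hyperedges, and 4 faces.

```python
def perm_from_cycles(cycles, n):
    """Build a permutation of {1,...,n} from its cycle notation."""
    p = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a] = b
    return p

def num_orbits(p):
    """Number of cycles (orbits) of the permutation p."""
    seen, count = set(), 0
    for start in p:
        if start not in seen:
            count += 1
            while start not in seen:
                seen.add(start)
                start = p[start]
    return count

n = 8  # eight darts
sigma = perm_from_cycles([(1, 8, 3, 6), (2, 5, 4, 7)], n)
tau = perm_from_cycles([(1, 2, 3, 4), (5, 6, 7, 8)], n)
tau_inv = {v: k for k, v in tau.items()}
# sigma * tau^{-1}, acting as i -> sigma(tau^{-1}(i)).
sigma_tau_inv = {i: sigma[tau_inv[i]] for i in range(1, n + 1)}
```

Counting orbits gives $2$ for $\sigma$ (vertices), $2$ for $\tau$ (hyperedges), and $4$ for $\sigma\tau^{-1}$ (faces), matching the torus embedding of Fig.~\ref{fig:dart-labeling}.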
We use the following shorthand for simplicity:
\begin{eqnarray}
\mathsf{F} &=& \mathsf{F}(\mathsf{G}_\Gamma) =\{ f_1,f_2,\ldots,f_{|\mathsf{F}|}\}, \\
\mathsf{W} &=&\mathsf{E}(\mathsf{G}_\Gamma)=\{ w_1,w_2,\ldots,w_{|\mathsf{W}|}\}=\mathsf{W}(\Gamma), \\
\mathsf{E} &=&\mathsf{E}(\Gamma)=\{ e_1,e_2,\ldots,e_{|\mathsf{E}|}\},\\
\mathsf{V}&= &\mathsf{V}(\Gamma)=\{ v_1,v_2,\ldots,v_{|\mathsf{V}|}\}.
\end{eqnarray}
Denote the binary vector spaces formed by taking $ \mathsf{F}(\mathsf{G}_\Gamma)$, $\mathsf{E}(\mathsf{G}_\Gamma)$, $\mathsf{E}(\Gamma)$ and $\mathsf{V}(\Gamma)$ as bases by
$\mc{F}$, $\mc{W}$, $\mc{E}$, and $\mc{V}$ respectively. Topological codes are usually defined with respect to a series of boundary maps. We now proceed to define similar maps for the hypermaps with respect to the spaces just introduced. Define a ``boundary'' map for each face, dart, and hyperedge as follows:
\begin{eqnarray}
d_2(f) & = & \sum_{i\in f}w_i \label{eq:d2}\\
d_1(w_i) &= &v_{\ni i}+v_{\ni \tau^{-1}(i)},\label{eq:d1}\\
\iota(e) &=& \sum_{i\in \delta(e)} w_i\label{eq:imap}
\end{eqnarray}
Recall that $v_{\ni i}$ is the unique vertex on which the dart $i$ is incident and $\delta(e)$ is the set of darts incident on $e$.
We can interpret the map $d_1$ as giving the vertices of the hypergraph on which the half-edges $i$ and
$\tau^{-1}(i)$ are incident.
We also define a projection map $p: \mc{W}\rightarrow \mc{W}/\iota(\mc{E})$ which enforces the following relations for each edge $e$:
\begin{eqnarray}
\sum_{i\in \delta(e)} w_i =0,\label{eq:edge-dep}
\end{eqnarray}
where $\delta(e)$ is the set of darts incident on $e$. So
\begin{eqnarray}
\mc{W}/\iota(\mc{E}) & =&\left \langle w_i \bigg| w_i\in \mathsf{W}; \sum_{j\in \delta(e) } w_j =0 ; e\in \mathsf{E}(\Gamma)\right\rangle\\
p(w) &=& w +\iota(\mc{E})\label{eq:proj-W}
\end{eqnarray}
The hypermap-homology code is defined based on the following functions which lead to the chain complex in
Eq.~\eqref{eq:chain-complex}:
\begin{eqnarray}
&&\partial_2 =p\circ d_2 \label{eq:doe2}\\
&&\partial_1(w+\iota(\mc{E})) = d_1(w)\label{eq:doe1}\\
&&\begin{CD}\label{eq:chain-complex}
\mc{F} @>\partial_2>> \mc{W}/\iota(\mc{E}) @>\partial_1>> \mc{V}\\
\end{CD}.
\end{eqnarray}
In Fig.~\ref{fig:embedMaps} we summarize the various functions defined on the hypermap.
\begin{figure}[h]
\includegraphics{fig-hg-maps}
\caption{Maps relating to the embedding of the hypergraph.}\label{fig:embedMaps}
\end{figure}
In order to define the quantum code we need to identify a basis for $\mc{W}/\iota(\mc{E})$. As there are different choices of basis, the maps $\partial_i$ could have different matrix representations, leading to different $H_x$ and $H_z$.
One canonical choice is as follows. Choose a set of darts $\mathsf{S}=\{s_1,s_2,\ldots, s_{m} \}$, such that (i) $|\mathsf{S}|=|\mathsf{E}|$, and (ii) no two darts $s_i $ and $s_j$ in $\mathsf{S}$ are incident on the same hyperedge.
These darts are called special darts. A basis for $\mc{W}/\iota(\mc{E})$ is said to be special if
it is $\mathsf{W}\setminus \mathsf{S}$ and nonspecial otherwise. A quantum code of length
$n=|\mathsf{W}|-|\mathsf{E}|$ is formed from the hypermap as follows. The map $\partial_2$ can be represented by a $ (|\mathsf{W}|-|\mathsf{E}|)\times |\mathsf{F}|$ matrix $[\partial_2]=H_z^t$.
The map $\partial_1$ can be represented by a $ |\mathsf{V}|\times (|\mathsf{W}|-|\mathsf{E}|)$ matrix
$[\partial_1]=H_x$. The quantum code has the
stabilizer matrix
\begin{eqnarray}
S=\left[\begin{array}{cc}H_x& 0 \\0 & H_z \end{array} \right]=\left[\begin{array}{cc}[\partial_1]& 0 \\0 & [\partial_2]^t \end{array} \right]. \label{eq:stabilizer-matrix}
\end{eqnarray}
For $S$ to define a stabilizer code we require $H_x H_z^t=0$; this is
ensured by the condition $\partial_1 \circ \partial_2 = 0$, see
\cite[Proposition~4.12]{martin13} and the subsequent discussion for proof. A hypermap code is said to be canonical if the basis is special and noncanonical otherwise.
We now illustrate the computations of the various maps and how they are used to construct a
quantum code with reference to Fig.~\ref{fig:dart-labeling}.
First identify a set of special darts, one for each hyperedge. Let us choose the set of special darts to be
$\mathsf{S}=\{w_3,w_7 \}$; this set is not unique. In the present example, these are the darts below each hyperedge.
The relations from the hyperedges are
\begin{eqnarray}
w_1+w_2+w_3+w_4&=&0, \label{eq:equivReln-w3}\\
w_5+w_6+w_7+w_8&=&0. \label{eq:equivReln-w7}
\end{eqnarray}
They can be used to eliminate the special darts in the computation of the images of $\partial_i$.
\begin{eqnarray}
\partial_2(f_1) &=& p(d_2(f_1))= p(w_2+w_8)=w_2+w_8\\
\partial_2(f_2) &=& p(w_1+w_7) = w_1+w_5+w_6+w_8 \label{eq:doe2f2}\\
\partial_2(f_3) &= &p(w_3+w_5) = w_1+w_2+w_4+w_5\\
\partial_2(f_4) &=& p(w_4+w_6) = w_4+w_6
\end{eqnarray}
In Eq.~\eqref{eq:doe2f2} we replaced the special dart $w_7$ by a linear combination of nonspecial darts using Eq.~\eqref{eq:equivReln-w7}, i.e., $w_5+w_6+w_7+w_8=0$; similarly for the other faces. Note that $\partial_2(f_4) =\partial_2(f_1)+\partial_2(f_2)+\partial_2(f_3)$, so there are only $|\mathsf{F}|-1$ independent relations.
We have $\mc{W}/\iota(\mc{E}) = \langle {w}_1,{w}_2, \ldots, {w}_8|w_1+w_2+w_3+w_4=0,w_5+w_6+w_7+w_8=0\rangle$.
So we can choose $B=\mathsf{W}\setminus \mathsf{S} = \{{w}_1,{w}_2, {w}_4, {w}_5, {w}_6, {w}_8\}$ as a basis for $\mc{W}/\iota(\mc{E})$. This basis is a special basis. The matrix representation of $\partial_2$ with respect to $B$ is $H_z^t$.
We then compute $\partial_1(w)$ for all $w\in B$.
For instance, $\partial_1(w_1) =v_{\ni 1} +v_{\ni\tau^{-1}(1)} = v_1+v_2$. Linearly extending the action of $\partial_1$, we can perform similar computations for any $w\in \mc{W}/\iota(\mc{E})$. We obtain
$H_x=\left[\begin{array}{cccccc} 1&1&1 & 1& 1&1 \\ 1&1&1 & 1& 1&1 \end{array}\right]$.
Thus the hypermap-homology code defined by Fig.~\ref{fig:dart-labeling} has the following stabilizer
matrix. (We can remove the dependent rows of $H_x$ and $H_z$.)
\begin{eqnarray}
\left[\begin{array}{c|c}H_x& 0 \\\hline0 & H_z \end{array} \right] &=&
\left[\begin{array}{c|c}\begin{array}{cccccc} 1&1&1 & 1& 1&1 \\1&1&1 & 1& 1&1
\end{array} & 0 \\\hline0 & \begin{array}{cccccc}
0&1&0 & 0& 0&1 \\
1&0&0 & 1& 1&1 \\
1&1&1 & 1& 0&0 \\
0&0&1 & 0& 1&0
\end{array} \end{array} \right].\label{eq:hmap-code-stab}
\end{eqnarray}
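As a sanity check, the stabilizer matrix above can be verified mechanically. The following sketch (ours, assuming NumPy; the rank routine is not from any library) confirms that $H_xH_z^t=0$ over $\mathrm{GF}(2)$ and that the number of encoded qubits, $n-\mathrm{rank}(H_x)-\mathrm{rank}(H_z)$, equals $2$, matching the two topological degrees of freedom of the torus:

```python
import numpy as np

# A verification sketch (ours): the example stabilizer matrices over GF(2),
# with special basis B = (w1, w2, w4, w5, w6, w8).

Hx = np.array([[1, 1, 1, 1, 1, 1],
               [1, 1, 1, 1, 1, 1]])
Hz = np.array([[0, 1, 0, 0, 0, 1],     # partial_2(f1) = w2 + w8
               [1, 0, 0, 1, 1, 1],     # partial_2(f2)
               [1, 1, 1, 1, 0, 0],     # partial_2(f3)
               [0, 0, 1, 0, 1, 0]])    # partial_2(f4)

def gf2_rank(M):
    """Rank over GF(2) by Gaussian elimination."""
    M, rank = M.copy() % 2, 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] = (M[r] + M[rank]) % 2
        rank += 1
    return rank

assert not ((Hx @ Hz.T) % 2).any()     # X- and Z-generators commute
k = Hx.shape[1] - gf2_rank(Hx) - gf2_rank(Hz)
print(k)   # 2 encoded qubits, i.e. 2g for the torus (g = 1)
```

Note that $\mathrm{rank}(H_x)=1$ and $\mathrm{rank}(H_z)=3$ here, reflecting the dependent rows mentioned above.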
If we choose a nonspecial basis, say $B'=\{w_1, w_2'={w}_1 +{w}_2, {w}_4, {w}_5, {w}_6, {w}_8\}$, then the matrices
$H_x$ and $H_z$ will be different, and so will the associated quantum codes. In this case we compute
\begin{eqnarray}
\partial_2(f_1) &= & w_2+w_8 = w_1+w_2'+w_8\\
\partial_2(f_2) &=& w_1+w_5+w_6+w_8 \\
\partial_2(f_3) &= &w_1+w_2+w_4+w_5 = w_2'+w_4+w_5\\
\partial_2(f_4) &= &w_4+w_6
\end{eqnarray}
Similarly, we can compute $\partial_1(w_2') = \partial_1(w_1+w_2) = v_{\ni 1}+v_{\ni \tau^{-1}(1)}+
v_{\ni 2}+v_{\ni \tau^{-1}(2)}=0$.
Thus the matrices $H_x$ and $H_z$ are given as
\begin{eqnarray}
H_x = \left[ \begin{array}{cccccc}
1&0&1&1&1&1\\
1&0&1&1&1&1
\end{array}
\right]&;&
H_z = \left[ \begin{array}{cccccc}
1&1&0&0&0&1\\
1&0&0&1&1&1\\
0&1&1&1&0&0\\
0&0&1&0&1&0
\end{array}
\right].
\end{eqnarray}
Irrespective of the basis chosen for $\mc{W}/\iota(\mc{E})$, the total number of encoded qubits is a function of the genus of the
surface. In this sense we encode into the topological degrees of freedom afforded by the surface.
The distance on the other hand is not basis invariant. For instance, the hypermap code obtained from
Fig.~\ref{fig:dart-labeling} with special basis has distance two while the code with nonspecial basis has
distance one. The distance of hypermap codes with a special basis was analyzed in \cite{martin13}. The distances of hypermap codes with nonspecial bases are somewhat harder to compute, and they need not all coincide. For a special basis, the distance can be related to the cycles of the hypermap and its dual, and thus given a topological interpretation; see for example \cite[Corollary~4.23]{martin13}. For a nonspecial basis, such a topological interpretation of the distance has not yet been given.
The following result was proved in \cite{martin13} although it is not stated
as such therein.
\begin{theorem}[Hypermap-homology codes\cite{martin13}]\label{th:hmap-codes}
Let $\Gamma$ be a hypergraph with $|\mathsf{E}|$ hyperedges and $\mathsf{G}_\Gamma$ its bipartite graph representation with $|\mathsf{W}|$ darts. The hypermap obtained by embedding $\mathsf{G}_\Gamma$ on a surface of genus $g$ leads to a $[[|\mathsf{W}|-|\mathsf{E}|,2g]]$ quantum code.
\end{theorem}
In Theorem~\ref{th:hmap-codes}, neither the choice of special darts nor the basis for $\mc{W}/\iota(\mc{E})$
is explicit, but these choices must be made before constructing the quantum code. As was clear from the example, different choices of bases could lead to different codes with potentially different distances.
\section{Hypermap codes and surface codes}\label{sec:main}
In this section, we address some of the questions raised in \cite{martin13}. We prove some new results about
hypermap codes and then relate them to surface codes. We show that the hypermap codes with
special and nonspecial bases are related by transformations involving just CNOT gates.
Along the way we establish a correspondence between the qubits of the code and the hypermap \footnote{
Recall that qubits are placed on the edges of the graph in case of surface codes. If we try to associate a
qubit with each dart we find that there are more darts than there are qubits. An additional problem is that
$\mc{W}/\iota(\mc{E})$ has many bases in general. In such a situation, each column of $H_x$ and $H_z$ corresponds to a linear combination of the labels of the darts. So at first sight it appears that we cannot have a direct correspondence between the qubits and the darts. But fortunately this is not the case.
}; this was only implicit in \cite{martin13}. Finally we relate the hypermap codes to the surface codes showing an equivalence between canonical hypermap codes and surface codes.
\subsection{Relation between hypermap codes of different bases}
Consider the bipartite representation of the hypergraph or its embedding. Let $\mathsf{S}$ be the collection of
special darts. These special darts are $|\mathsf{E}|$ in number and no two of them are incident on the
same hyperedge. Choose a special basis for $\mc{W}/\iota(\mc{E})$.
A special basis for $\mc{W}/\iota(\mc{E})$ is of the form
\begin{eqnarray}
\{w_{i_1},w_{i_2},\ldots, w_{i_n} \} =\mathsf{W}\setminus \mathsf{S} \label{eq:spl-basis}
\end{eqnarray}
where $n=|\mathsf{W}|-|\mathsf{E}|$; $n$ is also the length of the code.
Then we can place $n$ qubits, one on each of the nonspecial darts labeled $i_j\in \{i_1,i_2,\ldots, i_n \}$.
The special darts carry no qubits. (Note that there are more darts than there are qubits. So the labels of the qubits need not be the same as labels of the darts on which the qubits are placed. It is possible to relabel the hypermap so that both qubit and dart labels coincide.)
Now define the stabilizer generators using the matrices $H_x$ and $H_z$. The matrix $H_z^t$ is
simply the face-dart incidence matrix of the hypermap modulo $\iota(\mc{E})$, i.e., modulo relations of the form given
in Eq.~\eqref{eq:edge-dep}; in other words, it is constrained to contain no special darts. We can view the matrix $H_x$ as
the vertex incidence matrix of the nonspecial darts of the hypermap, where the incidence vector of a dart $i$ is
found after extending the half-edge $i$ to a full edge by combining $i$ and $\tau^{-1}(i)$.
We would like to give a similar correspondence for the noncanonical codes and make precise the connection
between canonical and noncanonical hypermap codes. But first we need the
following lemma.
\begin{lemma}\label{lm:rank-1up-factors}
Let $T$ be an invertible $n\times n $ binary matrix. Then $T$ and $T^{-1}$ can be decomposed as
\begin{eqnarray}
T&=&R_{j_1}^{i_1} R_{j_2}^{i_2} \cdots R_{j_m}^{i_m},\label{eq:T-factors}\\
T^{-1}&=& R_{j_m}^{i_m}\cdots R_{j_2}^{i_2} R_{j_1}^{i_1},\label{eq:Tinv-factors}
\end{eqnarray}
where $R_{j}^{i} =I +e_{i} e_{j}^t$ and $m\leq n^2$.
\end{lemma}
\begin{proof}
Multiplying $T$ by $R_j^i$ from the right adds the $i$th column to the
$j$th column of $T$. Denote by $(T)_{i,j}$ the entry in the $i$th row and $j$th column of $T$.
Suppose that $(T)_{i,i}=1$; then we can eliminate the nonzero entries $(T)_{i,j}$ in the $i$th row, for $1\leq j\neq i \leq n$, by adding the $i$th column to the $j$th column. If $(T)_{i,i}\neq 1$, then we can find some column $j$ such that $(T)_{i,j}=1$; such a column must exist because $T$ is full rank. We can first add this column to the $i$th column before eliminating the remaining nonzero entries $(T)_{i,j}$.
If an entry $(T)_{i,j}$ is already zero, we do not need to multiply by $R_j^i$. Assume now that
we have made all entries in the $i$th row zero except the entry $(T)_{i,i}$. Since column operations preserve rank, $T$ remains full rank, so the
$(i+1)$st row cannot equal the $i$th row; hence there is a nonzero entry $(T)_{i+1,j}$ in some column $j\neq i$. Let us eliminate all the off-diagonal entries in the $(i+1)$st row, i.e., all the nonzero entries except $(T)_{i+1, i+1}$. In this process the $i$th row is not affected, because all of its off-diagonal entries are zero. Starting from the first row, we can thus reduce $T$ to the identity matrix by
multiplying by
matrices of the form $R_j^i$. The product of these matrices must
equal $T^{-1}$.
So we have
\begin{eqnarray*}
T^{-1}&=& \prod_{k=1}^m R_{j_k}^{i_k} \\
T &= & \prod_{k=m}^1 (R_{j_k}^{i_k})^{-1} = \prod_{k=m}^1R_{j_k}^{i_k},
\end{eqnarray*}
where the last equality follows from the fact that $R_{j}^{i}$ is its own inverse when $i\neq j$; $R_j^i R_j^i = I+e_ie_j^t+e_ie_j^t+e_ie_j^te_ie_j^t= I$.
Relabeling the $i_k$ and $j_k$, we obtain Eqs.~\eqref{eq:T-factors}~and~\eqref{eq:Tinv-factors}.
Each row $i$ requires no more than $n-1$ entries to be made zero; accounting for one additional multiplication when $(T)_{i,i}=0$, we need at most $n$ multiplications per row. Thus $m\leq n^2$.
\end{proof}
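The elimination in the proof is straightforward to implement. The sketch below (ours, assuming NumPy) factors a sample invertible binary matrix into $R_j^i$ factors and verifies that their product is $T^{-1}$:

```python
import numpy as np

# A sketch (ours) of the elimination in the lemma: factor an invertible
# binary matrix T into factors R_j^i = I + e_i e_j^T acting by column
# additions mod 2.  The returned pairs (i, j) mean "add column i to
# column j"; multiplying T on the right by these R's reduces it to I,
# so their product equals T^{-1}.

def factor_R(T):
    T = T.copy() % 2
    n = T.shape[0]
    ops = []
    for i in range(n):
        if T[i, i] == 0:
            # T stays invertible, so some later column has a 1 in row i
            j = next(j for j in range(i + 1, n) if T[i, j] == 1)
            T[:, i] = (T[:, i] + T[:, j]) % 2
            ops.append((j, i))
        for j in range(n):
            if j != i and T[i, j] == 1:
                T[:, j] = (T[:, j] + T[:, i]) % 2
                ops.append((i, j))
    assert np.array_equal(T, np.eye(n, dtype=int))
    return ops

def R(i, j, n):
    """R_j^i = I + e_i e_j^T over GF(2)."""
    M = np.eye(n, dtype=int)
    M[i, j] = 1
    return M

T = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
ops = factor_R(T)
Tinv = np.eye(3, dtype=int)
for i, j in ops:
    Tinv = (Tinv @ R(i, j, 3)) % 2
assert np.array_equal((T @ Tinv) % 2, np.eye(3, dtype=int))
assert len(ops) <= 3 ** 2          # the m <= n^2 bound of the lemma
```

Since each $R_j^i$ is its own inverse over $\mathrm{GF}(2)$, reversing the factor list recovers $T$ itself, as in Eq.~\eqref{eq:T-factors}.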
\begin{theorem} \label{th:hmap-code-equiv}
Suppose that an $[[n,k]]$ canonical hypermap code has basis $B=\mathsf{W}\setminus \mathsf{S}$ and a noncanonical code has basis $B'=TB$. The canonical code can be transformed to the noncanonical code by the application of $m \leq n^2$ CNOT gates where the $l$th CNOT gate is applied from
qubit $i_l$ to $j_l$. The number and location of CNOT gates is determined by the decomposition $T=R_{j_1}^{i_1} R_{j_2}^{i_2} \cdots R_{j_m}^{i_m}$, where $R_{j}^{i} =I +e_{i} e_{j}^t$.
\end{theorem}
\begin{proof}
The special basis $B$ for the canonical hypermap code can be given as in Eq.~\eqref{eq:spl-basis}. Then
we can write the nonspecial basis $B'$ for the noncanonical hypermap code as
\begin{eqnarray}
B'=\{ T w_{i_1}, T w_{i_2},\ldots, T w_{i_n} \} =T (\mathsf{W}\setminus \mathsf{S}), \label{eq:non-spl-basis}
\end{eqnarray}
where $T$ is an invertible (binary) matrix that transforms the special basis to the nonspecial basis.
Now the columns of $H_x$ and $H_z$ are indexed by the basis vectors of $\mc{W}/\iota(\mc{E})$.
The relations between the representations of $\partial_i$ for different bases are as follows:
\begin{eqnarray}
[\partial_2]_{B'}=T^{-1}[\partial_2]_{B} & \text{ and }& [\partial_1]_{B'} = [\partial_1]_{B} T. \label{eq:basis-change}
\end{eqnarray}
This ensures that $H_x$ and $H_z$ are orthogonal because $[\partial_1]_BTT^{-1}[\partial_2]_B =0$.
By Lemma~\ref{lm:rank-1up-factors}, we see that
\begin{eqnarray}
[\partial_1]_{B'} &=& [\partial_1]_B T = [\partial_1]_B R_{j_1}^{i_1} R_{j_2}^{i_2} \cdots R_{j_m}^{i_m}\label{eq:p1}\\
{[\partial_2]_{B'} }&=& [\partial_2]_B(T^{-1})^t = [\partial_2]_B (R_{j_m}^{i_m} \cdots R_{j_2}^{i_2} R_{j_1}^{i_1})^t \nonumber\\
&=&[\partial_2]_B (R_{j_1}^{i_1})^t (R_{j_2}^{i_2})^t \cdots (R_{j_m}^{i_m})^t\nonumber\\
&=&[\partial_2]_B R_{i_1}^{j_1} R_{i_2}^{j_2} \cdots R_{i_m}^{j_m}\label{eq:p2}.
\end{eqnarray}
Let us parse Eqs.~\eqref{eq:p1}~and~\eqref{eq:p2} closely. The first says that we add column $i_l$ to column
$j_l$ of $[\partial_1]_B$, while the second says that we add column $j_l$ to column $i_l$ of $[\partial_2]_B$. This is precisely the action of a CNOT gate acting on qubits $i_l$ and $j_l$ with $i_l$ as the control qubit; see for instance \cite[Lemma~2]{grassl03}. This proves that the transformation from the
canonical hypermap code to a noncanonical hypermap code can be achieved by the application of a sequence of CNOT gates
given by the decomposition of $T$.
\end{proof}
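For the running example of Fig.~\ref{fig:dart-labeling}, the basis change $w_2'=w_1+w_2$ corresponds to $T=R_2^1$, and Eq.~\eqref{eq:basis-change} can be checked directly. The sketch below (ours, assuming NumPy) applies the column operations to the special-basis matrices and confirms that commutation is preserved and that the first row of the new $H_z$ agrees with the computed boundary $\partial_2(f_1)=w_1+w_2'+w_8$:

```python
import numpy as np

# A sketch (ours) checking Eq. (basis-change) on the running example.
# The nonspecial basis w2' = w1 + w2 corresponds to T = R_2^1, which is
# an involution over GF(2), so T^{-1} = T.

Hx = np.array([[1, 1, 1, 1, 1, 1],    # special-basis H_x
               [1, 1, 1, 1, 1, 1]])
Hz = np.array([[0, 1, 0, 0, 0, 1],    # special-basis H_z
               [1, 0, 0, 1, 1, 1],
               [1, 1, 1, 1, 0, 0],
               [0, 0, 1, 0, 1, 0]])

T = np.eye(6, dtype=int)
T[0, 1] = 1                  # T = R_2^1 = I + e_1 e_2^T (1-indexed)

Hx_new = (Hx @ T) % 2        # [d1]_{B'} = [d1]_B T
Hz_new = (Hz @ T.T) % 2      # [d2]_{B'} = T^{-1}[d2]_B, i.e. Hz' = Hz T^t

assert not ((Hx_new @ Hz_new.T) % 2).any()   # commutation is preserved
print(Hx_new[0])   # [1 0 1 1 1 1]: the CNOT zeroes the column of w2'
print(Hz_new[0])   # [1 1 0 0 0 1], i.e. partial_2(f1) = w1 + w2' + w8
```

The zeroed column of $H_x$ for $w_2'$ reflects $\partial_1(w_2')=0$, which is why the nonspecial-basis code has distance one.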
We make a few observations regarding the relation between canonical and noncanonical codes.
First, the transformation assumes that both codes have been defined with respect to the same set of special darts; different sets of special darts could lead to different codes. With regard to the parameters of the canonical code, we can relate the dimension and distance to topological properties of the surface on which the hypergraph is embedded. For a noncanonical code, while the dimension is still related to the genus of the surface, the distance does not appear to have such a straightforward relation in general. More importantly,
the distance of the canonical code and the noncanonical code need not be the same. The transformation in Theorem~\ref{th:hmap-code-equiv} may not preserve distance.
It is also possible that the stabilizer generators of the noncanonical code are not local, because there is no restriction on the nonspecial basis. It is obvious that a noncanonical code can be converted to a canonical code by application of CNOT gates in the reverse order.
If $T$ is simply a permutation matrix, then the code remains canonical. Since $T$ is composed of transformations of the form $R_{i_2}^{i_1}$, it is instructive to study the effect of one such transformation on the canonical code. Assume that we are applying $R_{i_2}^{i_1}$, in other words we are applying a CNOT between qubits $i_1$ and $i_2$, with $i_1$ as the control qubit.
(We assume without loss of generality that the nonspecial darts have the
labels 1 to $n$ so that we have the same labels for qubits and nonspecial darts.) Suppose
the qubits $i_1$ and $i_2$ are such that they are on adjacent darts in the hypermap as shown in Fig.~\ref{fig:hmap-face}, then the effect of CNOT on the hypermap is shown in Fig.~\ref{fig:hmap-cnot-adj-edges}.
The boundary of the face to which $i_1$ and $i_2$ belong is modified so that it no longer contains $i_1$.
The resulting code is still canonical with respect to the modified hypermap.
\begin{center}
\begin{figure}[h]
\includegraphics{fig-hypergraph-face}
\caption{A typical face in the embedding of the hypergraph. It has an even number of darts and exactly $|f|/2$ darts have their labels inside the face. The darts $l_\alpha$ and $i_\alpha$ are related as $i_\alpha = \tau(l_\alpha)$.}
\label{fig:hmap-face}
\end{figure}
\end{center}
\begin{center}
\begin{figure}[h]
\includegraphics{fig-cnot-on-adj-edges}
\caption{The code with the basis $R_{i_2}^{i_1} B$ can be obtained by applying a CNOT gate on qubits $i_1$
and $i_2$ with $i_2$ as the target qubit. For the special case when these qubits are adjacent, the effect can be understood graphically. The resulting code is still canonical with respect to a slightly modified hypermap. A CNOT gate between non-adjacent qubits leads to a nontrivial modification of the hypermap in general.}
\label{fig:hmap-cnot-adj-edges}
\end{figure}
\end{center}
If the qubits are not adjacent, then $R_{i_2}^{i_1} B$ does not seem to effect such a simple modification to the hypermap. Understanding this transformation and relating it to the hypermap would be a very useful contribution in this context.
A combination of the transformations $R_j^i$ could, however, lead to a simple modification. For instance, swapping qubits $i_1$ and $i_2$ would lead
to a relabeling of the hypermap, and we would still end up with a canonical code. It would be interesting to find out which transformations $T$ can be described in terms of simple operations on the hypermap.
\subsection{Relation between hypermap codes and surface codes}
Our central result is that a canonical hypermap-homology code can be reduced to a surface code. The graph associated to this surface code can be derived from the hypergraph. Our approach is constructive
and Algorithm~\ref{proc:hyper2surf} gives this transformation. We only consider hypergraphs
whose bipartite representations are connected.
If we are dealing with a noncanonical hypermap code, then it can be associated to a canonical hypermap code;
by Theorem~\ref{th:hmap-code-equiv}, these two codes are related by a transformation involving CNOT gates alone.
\begin{theorem}\label{th:hmap-surf-equiv}
Every canonical hypermap-homological code is equivalent to a surface code. Given a hypergraph $\Gamma$ (embedded on a surface), Algorithm~\ref{proc:hyper2surf} outputs a standard graph $\overline{\mathsf{G}}_\Gamma$ (embedded on the same surface) such that the surface code associated to $\overline{\mathsf{G}}_\Gamma$
has the same stabilizer as the hypermap-homology code associated to $\Gamma$.
\end{theorem}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithm}[H]
\caption{{\ensuremath{\mbox{ Surface code from a canonical hypermap code}}}}\label{proc:hyper2surf}
\begin{algorithmic}[1]
\REQUIRE {A hypergraph $\Gamma$, and a set of special darts $\mathsf{S} \subset \mathsf{W}(\Gamma)$, such that
$|\mathsf{S}|=|\mathsf{E}(\Gamma)|$ and every dart in $\mathsf{S}$ is incident on a distinct hyperedge.}
\ENSURE {A graph $\overline{\mathsf{G}}_\Gamma$ such that the surface code of $\overline{\mathsf{G}}_\Gamma$ is the same as the canonical hypermap-homology code.}
\STATE Form $\mathsf{G}_\Gamma$, the bipartite representation of the hypergraph. Note that $\mathsf{E}(\mathsf{G}_\Gamma) =\mathsf{W}(\Gamma)$.
\STATE For each hyperedge $e$ of the hypergraph choose one special dart ${s_e\in \mathsf{S}}$ such that $s_e\in \delta(e)$.
\STATE Embed $\mathsf{G}_\Gamma$ on a suitable surface; let the genus of the surface be $g$.
\STATE In each face $f$ of the embedding, draw new edges connecting vertex nodes of the hypermap. In other words, for each dart $i\in f$ join $v_{\ni i}$ to $v_{\ni \tau^{-1}(i)}$. Denote this graph by $\mathsf{G}'_\Gamma$.
\STATE Each face of $\mathsf{G}_\Gamma$ now contains $|f|/2$ triangles; each triangle consists of two darts and one newly added edge. Exactly one label is present in each triangle. Label the new edge by that label.
\STATE Modify $\mathsf{G}'_\Gamma$ by deleting all the darts and the vertices associated to each hyperedge. Denote this graph by
$\mathsf{G}''_\Gamma$.
\STATE For each hyperedge $e$ there is a special dart ${s_e}$. Delete the edge in $\mathsf{G}''_\Gamma$ which has this label.
Denote the resulting graph as $\overline{\mathsf{G}}_\Gamma$.
\STATE The surface code defined by the embedding of $\overline{\mathsf{G}}_\Gamma$ gives the same quantum code as the hypermap. The stabilizer of the surface code is defined using Eqs.~\eqref{eq:vertex-op}--\eqref{eq:surf-stabilizer}.
\end{algorithmic}
\end{algorithm}
\begin{proof}
We show that Algorithm~\ref{proc:hyper2surf} leads to the same stabilizer code as the hypermap-homology code. More precisely, we show that the matrices $H_x$ and $H_z$ arising in the hypermap-homology construction are exactly the vertex-edge and face-edge incidence matrices of the graph $\overline{\mathsf{G}}_\Gamma$.
Consider first the stabilizer of the hypermap-homology code. Each face leads to a $Z$-only stabilizer generator; thus ${H}_z$ is an $|\mathsf{F}|\times (|\mathsf{W}|-|\mathsf{E}|)$ matrix.
Although there are only $|\mathsf{F}|-1$ independent generators (see \cite{martin13}), we include the generator from each face in $H_z$.
Let $f$ be a face of the hypermap. The labeling of the darts in the hypermap is such that only half the darts
that constitute the boundary of $f$ have their labels inside $f$. This is illustrated in Fig.~\ref{fig:hmap-face}.
We can choose the set of nonspecial darts as a basis for $\mc{W}/\iota(\mc{E})$. If $w_i$ is not special, then
$p(w_i)=w_i$; otherwise $p(w_i)= \sum_{j\in \delta (e_{\ni i}),\, j\neq i} w_j$.
The latter follows from the relation $\sum_{j\in \delta(e)} w_j=0 $ for every hyperedge $e$.
Then we obtain
\begin{eqnarray}
\partial_2(f) &=& p (d_2(f)) = p\left(\sum_{i\in f} w_i\right)\label{eq:f-bndry-hmap}\\
& =&\sum_{\stackrel{i\in f}{i\not\in \mathsf{S}} } p(w_i)+\sum_{\stackrel{i\in f}{i\in \mathsf{S}} } p(w_i)
=\sum_{\stackrel{i\in f}{i\not\in \mathsf{S}} }w_i+ \sum_{\stackrel{i\in f}{i\in \mathsf{S}} } \sum_{\stackrel{j\in \delta(e_{\ni i})}{j\neq i }} w_j\nonumber
\end{eqnarray}
The row associated to $\partial_2(f)$ in $H_z$ is simply the characteristic vector of $\partial_2(f)$.
Now let us consider the $Z$-type stabilizer generators of $\overline{\mathsf{G}}_\Gamma$. Let us begin with
the graph $\mathsf{G}'_\Gamma$. With respect to the hypermap, it has additional edges. The addition of edges between $v_{\ni i}$ and $v_{\ni \tau^{-1}(i)}$ transforms the face $f$ in Fig.~\ref{fig:hmap-face} as shown in Fig.~\ref{fig:hmap-face-newedges}. This creates $|f|/2$
new (triangular) faces within $f$, each of which is bounded by two darts of the hypermap and one new edge. Furthermore,
exactly one of the labels is contained in each of these new faces and every label is contained in some
triangle. Therefore, we can label each new edge by a unique label; furthermore, note that these new edges
are exactly $|\mathsf{W}|$ in number. The face $f$ is modified so that its boundary contains only the new edges
and the vertices of the hypergraph; we label this derived face $f'$.
In fact we have $\partial(f')=\sum_{i\in f'} w_i$, where the boundary is with respect to $\mathsf{G}'_\Gamma$. The boundary of $f'$ in $\mathsf{G}'_{\Gamma}$ is the same as the boundary of $f$ in the hypermap.
\begin{center}
\begin{figure}[h]
\includegraphics{fig-g1-face}
\caption{(Color online) Illustrating the addition of edges between $v_{\ni i}$ and $v_{\ni \tau^{-1}(i)}$. The additional edges are shown in color. We call them edges to distinguish them from the darts (half-edges) of the hypermap. Observe that $\partial(f')=\partial(f)$.}
\label{fig:hmap-face-newedges}
\end{figure}
\end{center}
Let us consider the transformation of the hyperedges due to the addition of the new edges. Because of the
edges added between the vertices $v_{\ni i}$ and $v_{\ni \tau^{-1}(i)}$, exactly one label is adjacent to any
new edge. So we can label the newly added edges by the label in the triangle, this is illustrated in
Fig.~\ref{fig:hmap-hedge-transformation-1}.
\begin{center}
\begin{figure}[h]
\includegraphics{fig-g1-hyperedge}
\caption{A hyperedge in the embedding of the hypergraph $\Gamma$. The addition of edges leads to a creation of triangular faces each containing exactly one label. }
\label{fig:hmap-hedge-transformation-1}
\end{figure}
\end{center}
The deletion of hyperedges and the darts incident on them leaves each face $f'$ unchanged, so $f'$ is also a
face of $\mathsf{G}''_\Gamma$, and we can denote without ambiguity the face derived from $f'$ as $f''$.
Furthermore, deletion of the hyperedges and the
darts transforms each hyperedge to a face in $\mathsf{G}''_{\Gamma}$, see
Fig.~\ref{fig:hmap-hedge-transformation-2}. Thus exactly
$|\mathsf{E}|$ new faces are added to $\mathsf{G}''_\Gamma$ with respect to the hypermap; and they can be indexed by the hyperedges.
\begin{center}
\begin{figure}[h]
\includegraphics{fig-g2-face}
\caption{The deletion of the darts and the hyperedge creates a new face with exactly one special edge. On the deletion of the special edge, say $i_j$, it gets merged with exactly one face $f$. The boundary of $f$ now includes the rest of the edges of $e$ i.e. excepting the special edge $i_j$.}
\label{fig:hmap-hedge-transformation-2}
\end{figure}
\end{center}
Consider any such face $e$. Only one of the edges in the boundary of $e$ has the same label as some special dart. We call such edges special edges.
Now when such a special edge in the boundary of $e$ is deleted, $e$ is merged with exactly one face $f''$ of
$\mathsf{G}''_\Gamma$, since an edge is shared by only two faces, see
Fig.~\ref{fig:hmap-hedge-transformation-2}. While $f''$ can be merged with many faces $e_i$ derived from the hyperedges, it is not
merged with any other face derived from the faces of the hypermap, because such derived faces do not share
any edges.
Thus $\overline{\mathsf{G}}_\Gamma$ has exactly as many faces as the hypermap and the face derived from the
merging of $f''$ and $e$ (and possibly other faces which share a special edge with $f''$) can be labeled uniquely as $\overline{f}$.
Let us look at the boundary $\overline{f}$ in $\overline{\mathsf{G}}_\Gamma$.
The boundary of $f''$ in $\mathsf{G}''_\Gamma$ is the same as the boundary of $f$ in $\mathsf{G}_\Gamma$ and is equal to $\sum_{i\in f} w_i$. When all the special edges are deleted, $f''$ may be merged with other faces $e\in \mathsf{F} (\mathsf{G}''_\Gamma)$ which share a special edge with $f''$. The
resulting face $\overline{f}$ has a boundary given by the sum of their boundaries, taken modulo~2. The boundary of $f''$ is
$\sum_{i\in f} w_i$, and the boundary of $e$ is $\sum_{i\in \delta(e)}w_i$. Each special edge $i_j$ that is removed causes the boundary to include the remaining edges bounding $e_{\ni i_j}$.
Thus the boundary of $\overline{f}$ in $\overline{\mathsf{G}}_\Gamma$ is
$\sum_{\stackrel{i\in f}{i\not\in S}} w_i + \sum_{\stackrel{i\in f}{i\in S}} \sum_{\stackrel{j\in \delta(e_{\ni i})}{j\neq i}}w_j$, which is precisely $\partial_2(f)$, the boundary of $f$ in the hypermap, as
computed in Eq.~\eqref{eq:f-bndry-hmap}.
Thus the stabilizer generator associated with a face in $\overline{\mathsf{G}}_\Gamma$ is the same as the
generator associated to its parent face in the hypermap. This shows that $H_z$ is the same as the
face-edge incidence matrix of $\overline{\mathsf{G}}_\Gamma$.
It remains now to show that the matrix $H_x$ is the same as the vertex-edge incidence matrix of
$\overline{\mathsf{G}}_\Gamma$. The matrix $H_x$ is determined by the map $\partial_1$ and
acts on the space $\mc{W}/\iota(\mc{E})$. As mentioned earlier, we can take the nonspecial darts as a
basis for $\mc{W}/\iota(\mc{E})$. Then the columns of $H_x$ are given by the characteristic vector of
$\partial_1(w_i)$, where $i\not\in S$. We have $\partial_1(w_i) = v_{\ni i }+v_{\ni \tau^{-1}(i)}$. But
the vertices $v_{\ni i}$ and $v_{\ni \tau^{-1}(i)}$ are connected by an edge in
$\overline{\mathsf{G}}_\Gamma$. This edge is also labeled $i$ in $\overline{\mathsf{G}}_\Gamma$.
Hence, we can obtain the $i$th column of
$H_x$ by considering the incidence vector of every edge in $\overline{\mathsf{G}}_\Gamma$. The columns
put together then give the vertex-edge incidence matrix of $\overline{\mathsf{G}}_\Gamma$.
Thus the surface code generated by the embedding of $\overline{\mathsf{G}}_\Gamma$ has the same stabilizer as
the hypermap-homology code on $\Gamma$. This proves that every canonical hypermap-homology code can be realized by an
equivalent surface code.
\end{proof}
Although Theorem~\ref{th:hmap-surf-equiv} does not mention it, the choice of special darts is made explicit in
Algorithm~\ref{proc:hyper2surf}. We give a simple example that illustrates the application of Theorem~\ref{th:hmap-surf-equiv}. Consider the hypermap given in Fig.~\ref{fig:dart-labeling}. Draw additional edges in each face connecting two adjacent circles
as one goes around the face. The hypermap in Fig.~\ref{fig:dart-labeling} is modified as shown in Fig.~\ref{fig:hmap-code-2-surf-code-1}.
\begin{center}
\begin{figure}[h]
\includegraphics[scale=0.9]{fig-ex-g1}
\caption{(Color online) The graph $\mathsf{G}'_{\Gamma}$ for the hypermap in Fig.~\ref{fig:dart-labeling}.
It is obtained by the addition of new edges to the hypermap. Any two adjacent darts and a newly added edge form a triangle which contains exactly one label; therefore the newly added edge can be uniquely identified by a label.}
\label{fig:hmap-code-2-surf-code-1}
\end{figure}
\end{center}
Next we modify the graph in Fig.~\ref{fig:hmap-code-2-surf-code-1} as follows. We remove all the original darts and edges of the hypergraph.
In addition we also remove the special darts. These transformations are shown in Figs.~\ref{fig:hmap-code-2-surf-code-2}~and~\ref{fig:cube-surf-code} respectively.
\begin{figure}[h]
\centering
\subfigure[]{
\includegraphics[scale=.9]{fig-ex-g2}
\label{fig:hmap-code-2-surf-code-2}
}
\subfigure[]{
\includegraphics[scale=.9]{fig-ex-g3}
\label{fig:cube-surf-code}
}
%
\caption[Transforming the hypermap-homology code to a surface code.]{
(Color online) (a) The graph $\mathsf{G}''_{\Gamma}$ obtained by the removal of the darts and hyperedge-vertices in $\mathsf{G}'_{\Gamma}$; the hyperedges are transformed to faces in $\mathsf{G}''_{\Gamma}$. (b) The graph
$\overline{\mathsf{G}}_\Gamma$ obtained by removing the special edges, i.e., $\{3,7\}$; each hyperedge-face $e_i$ is merged with exactly one face. $\overline{\mathsf{G}}_\Gamma$ has the same stabilizer as the hypermap-homology code of Fig.~\ref{fig:dart-labeling}.}
\label{fig:insertRank3}
\end{figure}
The hypermap-homology code obtained from Fig.~\ref{fig:dart-labeling} is identical to the surface code obtained from Fig.~\ref{fig:cube-surf-code}. Consider $\mathsf{I}_\mathsf{V}$ and $\mathsf{I}_\mathsf{F}$, the vertex-edge and face-edge incidence matrices of $\overline{\mathsf{G}}_\Gamma$ in Fig.~\ref{fig:cube-surf-code}:
\begin{eqnarray}
\mathsf{I}_\mathsf{V}=\left[\begin{array}{cccccc} 1 & 1 & 1 & 1 & 1&1\\
1 & 1 & 1 & 1 & 1&1 \end{array} \right] &\quad&
\mathsf{I}_{\mathsf{F}}=\left[\begin{array}{cccccc}
0&1&0 & 0& 0&1 \\
1&0&0 & 1& 1&1 \\
1&1&1 & 1& 0&0 \\
0&0&1 & 0& 1&0
\end{array} \right]\label{eq:surf-code-stab}
\end{eqnarray}
The stabilizer matrix of the surface code is $\left[\begin{array}{cc} \mathsf{I}_\mathsf{V}&0 \\0 & \mathsf{I}_\mathsf{F} \end{array}\right]$.
We can see that this is the same as the stabilizer matrix of the hypermap-homology code given in Eq.~\eqref{eq:hmap-code-stab}.
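Since the stabilizer matrix is block diagonal, the code is of CSS form, and the $X$-type generators (rows of $\mathsf{I}_\mathsf{V}$) commute with the $Z$-type generators (rows of $\mathsf{I}_\mathsf{F}$) precisely when $\mathsf{I}_\mathsf{V}\mathsf{I}_\mathsf{F}^{T}=0\pmod 2$. As a quick sanity check (our own illustrative sketch in Python, not part of the construction), this can be verified numerically for the matrices in Eq.~\eqref{eq:surf-code-stab}:

```python
import numpy as np

# Vertex-edge and face-edge incidence matrices of the graph in
# Fig. (cube-surf-code), copied from the displayed equation.
IV = np.array([[1, 1, 1, 1, 1, 1],
               [1, 1, 1, 1, 1, 1]])
IF = np.array([[0, 1, 0, 0, 0, 1],
               [1, 0, 0, 1, 1, 1],
               [1, 1, 1, 1, 0, 0],
               [0, 0, 1, 0, 1, 0]])

# CSS commutation criterion: X- and Z-type stabilizer generators
# commute if and only if IV @ IF.T vanishes mod 2.
commutator = (IV @ IF.T) % 2
assert not commutator.any()
```

The check reflects the fact that every vertex of $\overline{\mathsf{G}}_\Gamma$ shares an even number of edges with every face boundary.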
The substance of Theorem~\ref{th:hmap-surf-equiv} is that the procedure illustrated works in general and
reduces a canonical hypermap-homology code to a surface code.
The converse of Theorem~\ref{th:hmap-surf-equiv}, i.e., that every surface code is also a (canonical) hypermap-homology code, is straightforward.
\begin{corollary}\label{co:canonical-hmap-surf-equiv}
Every canonical hypermap-homology code is equivalent to a surface code and vice versa.
\end{corollary}
\begin{proof}
We only sketch the proof. Every graph $\mathsf{G}$ can also be
viewed as a hypergraph; denote it also by $\Gamma$. The bipartite graph representation of this hypergraph is obtained by
taking the original graph and splitting every edge in $\mathsf{G}$ into two edges and adding a new vertex
for each edge. Then we can proceed with the construction proposed in \cite{martin13} to obtain a
hypermap-homology code. At this point we have two codes: a hypermap code and a surface code, both derived from
the same graph $\mathsf{G}$. But it is by no means obvious that they are identical. We show that they are
the same code. Note that every hyperedge in the hypermap has only
two darts incident on it. One of these can be chosen as a special dart. Now if we apply
Algorithm~\ref{proc:hyper2surf}, and trace through the various transformations, we find that every hyperedge is transformed to a face with two edges in
$\mathsf{G}_\Gamma''$. This graph is identical to the graph obtained from $\mathsf{G}$ where every edge is
replaced by two edges. Hence, $\overline{\mathsf{G}}_\Gamma$, obtained after the deletion of all the special edges from $\mathsf{G}_\Gamma''$, is the same as the original graph $\mathsf{G}$. Thus every surface code is equivalent to a hypermap-homology code. This together with
Theorem~\ref{th:hmap-surf-equiv} implies the corollary.
\end{proof}
Combining Corollary~\ref{co:canonical-hmap-surf-equiv} and Theorem~\ref{th:hmap-code-equiv} we obtain
the following result:
\begin{corollary}\label{co:hmap-surf-equiv}
Any $[[n,k]]$ hypermap code is either identical to a surface code or it can be transformed to a surface code with the application of $m\leq n^2$ CNOT gates.
\end{corollary}
The CNOT gates are required only if the hypermap code is noncanonical.
The sequence in which the CNOT gates are applied in the transformation of Corollary~\ref{co:hmap-surf-equiv} is the reverse of the sequence in Theorem~\ref{th:hmap-code-equiv}. In other words, it is the sequence of CNOT gates required to transform the noncanonical hypermap code into a canonical code.
While a noncanonical code may be transformed to a surface code, it need not have the same
parameters as the resulting surface code. In particular, the distance can change. Therefore some noncanonical
codes may be realized only from embeddings of hypergraphs and not from embeddings of graphs.
\section{Conclusion and Discussion}\label{sec:disc}
Our results imply that canonical hypermap-homology codes cannot
improve upon the parameters of surface codes. Noncanonical hypermap codes may have better parameters than
surface codes. An interesting problem in this context is to construct noncanonical hypermap codes that have better parameters than surface codes. In such a construction, we also want to be able to preserve the locality of the stabilizer generators. Understanding the distance and decoding of the noncanonical codes
merits further study. Hypermap codes provide a different
perspective on topological codes which might yield new insights into their properties and potentially lead to better quantum codes and alternate decoding algorithms for topological codes.
Hypermap-homology codes introduce, in a very concrete fashion, the use of standard hypermap homology into the study of quantum codes. The use of hypermap homology in the construction of quantum codes is an important development, and its applications in the context of quantum codes are yet to be fully explored.
Other results, such as those in \cite{tillich09,kovalev12}, suggest that hypergraphs offer fertile ground
for the construction of new quantum codes. The use of hypergraphs has been fruitful in subsystem codes as well, and it would be interesting to study the homology of hypermaps in that context also \cite{suchara10}.
Homology of hypergraphs is considered from a different perspective in \cite{suchara10,pra13}. It seems that the notion of homology used therein does not entirely coincide with that used in \cite{martin13}. Relating the two would be an interesting problem.
% https://arxiv.org/abs/2103.11420
\title{An inverse-type problem for cycles in local Cayley distance graphs}
\begin{abstract}
Let $E$ be a proper symmetric subset of $S^{d-1}$, and let $C_{\mathbb{F}_q^d}(E)$ be the Cayley graph with vertex set $\mathbb{F}_q^d$, in which two vertices $x$ and $y$ are connected by an edge if $x-y\in E$. Let $k\ge 2$ be a positive integer. We show that for any $\alpha\in (0, 1)$, there exists $q(\alpha, k)$ large enough such that if $E\subset S^{d-1}\subset \mathbb{F}_q^d$ with $|E|\ge \alpha q^{d-1}$ and $q\ge q(\alpha, k)$, then for each vertex $v$, there are at least $c(\alpha, k)q^{\frac{(2k-1)d-4k}{2}}$ cycles of length $2k$ with distinct vertices in $C_{\mathbb{F}_q^d}(E)$ containing $v$. This result is an inverse version of a recent result due to Iosevich, Jardine, and McDonald (2021).
\end{abstract}
\section{Introduction}
Let $G$ be a finite abelian group and let $E\subset G$ be a symmetric set. The Cayley graph $C_G(E)$ is defined as the graph with vertex set $V=G$, in which there is an edge from $x$ to $y$ if $y-x\in E$. Let $\mathbb{F}_q$ be a finite field of order $q$, where $q$ is a prime power. In this paper, we consider $G$ being the whole vector space $\mathbb{F}_q^d$.
The graph $C_{\mathbb{F}_q^d}(E)$ is a regular graph of degree $|E|$ with $q^d$ vertices. It is well-known in the literature that the eigenvalues of $C_{\mathbb{F}_q^d}(E)$ are of the form $\lambda_m:=\sum_{x\in E}\chi(x\cdot m)=\widehat{E}(m), ~m\in \mathbb{F}_q^d,$
where $\chi$ is the principal additive character of $\mathbb{F}_q$. Define $\mu:=\max_{m\ne (0, 0,\ldots, 0)} |\lambda_m|$. This quantity is referred to as the second largest eigenvalue of $C_{\mathbb{F}_q^d}(E)$. We call a graph an $(n, d, \lambda)$-graph if it has $n$ vertices, the degree of each vertex is $d$, and the second largest eigenvalue is at most $\lambda$.
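As a toy illustration of this eigenvalue formula (our own sketch: we take $d=1$ and $q=7$ prime, so that $\chi(t)=e^{2\pi i t/q}$, and an arbitrary symmetric set $E$ of our choosing), the spectrum of the adjacency matrix of $C_{\mathbb{Z}_7}(E)$ matches the character sums $\lambda_m=\widehat{E}(m)$:

```python
import numpy as np

q = 7
E = {1, 6, 2, 5}                      # symmetric: E = -E (mod q)

# Adjacency matrix of the Cayley graph C_{Z_q}(E)
A = np.array([[1 if (y - x) % q in E else 0
               for y in range(q)] for x in range(q)])

# Predicted eigenvalues: lambda_m = sum_{x in E} chi(x * m)
chi = lambda t: np.exp(2j * np.pi * t / q)
lam = [sum(chi(x * m) for x in E).real for m in range(q)]  # real since E = -E

assert np.allclose(sorted(np.linalg.eigvalsh(A)), sorted(lam))
assert abs(max(lam) - len(E)) < 1e-9  # lambda_0 equals the degree |E|
```

The symmetry $E=-E$ makes each $\lambda_m$ real, and $\lambda_{(0,\ldots,0)}=|E|$ is the graph degree.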
When $E=S^{d-1}$, the unit sphere in $\mathbb{F}_q^d$, we recall from \cite[Lemma 5.1]{ir} that $\mu=(1+o(1))q^{\frac{d-1}{2}}$. Thus, the graph $C_{\mathbb{F}_q^d}(S^{d-1})$ is an $(n, d, \lambda)$-graph with $n=q^d$, $d=|S^{d-1}|$, and $\lambda=(1+o(1))q^{\frac{d-1}{2}}$. In an $(n, d, \lambda)$-graph, a result of Alon \cite[Theorem 4.10]{expander} states that any large subset of vertices contains the correct number of copies of any fixed sparse graph. More precisely, if $H$ is a fixed graph with $r$ edges, $s$ vertices, and maximal degree $\Delta$, then any subset of $m$ vertices with $m\gg \lambda\left(\frac{n}{d}\right)^\Delta$ contains about $m^s(d/n)^r$ copies of $H$. The condition $m\gg \lambda\left(\frac{n}{d}\right)^\Delta$ can be improved when $H$ is of some specific configuration, for instance, paths and stars \cite{bene}, cycles \cite{io2021}, or complete graphs \cite{11, 17, 34, vinh}. \footnote{We use the following notations: $X \ll Y$ means that there exists some absolute constant $C>0$ such that $X \leq CY$, $X\sim Y$ means that $X\ll Y\ll X$, and $X=o(Y)$ means that $\lim_{q\to \infty}X/Y=0$.}
In this paper, we focus on a more general setting, namely, \textit{local Cayley distance graphs}.
\begin{definition}
Let $E$ be a proper symmetric subset of the unit sphere $S^{d-1}$, the Cayley graph spanned by $E$ with the vertex set $\mathbb{F}_q^d$, denoted by $C_{\mathbb{F}_q^d}(E)$, is called the local Cayley distance graph spanned by $E$.
\end{definition}
The main purpose of this paper is to study the distribution of cycles in this local setting. Before stating our main theorem, we observe that the graph $C_{\mathbb{F}_q^d}(E)$ need not be a \textit{pseudo-random graph}, in the sense that the second eigenvalue $\mu$ can be arbitrarily close to the graph degree when $q$ is large enough. The precise statement is given in the following proposition.
\begin{proposition}\label{thm1.9}
For any $1\le m\ll q^{d-1}$ and any $\epsilon>0$ with $1/\epsilon\in \mathbb{Z}$, let $q=p^{\frac{1}{\epsilon}}$. There exists $E\subset S^{d-1}$ such that $|E|=m$ and $\mu \ge \frac{|E|}{2q^{\epsilon}}$. In addition, if $|E+E|\sim |E|$, then we have $\mu\sim \lambda_{(0, \ldots, 0)}= |E|$.
\end{proposition}
Hence, it is not possible to apply techniques of pseudo-random graphs to study properties of local graphs. We recall a result of Bondy and Simonovits \cite{bondy} from 1974 which says that any graph with $n$ vertices and at least $100tn^{1+1/t}$ edges contains a cycle of length $2k$ for any $k\in[t, tn^{1/t}]$. So, for any $t\ge 2$, if $|E|\ge 100tq^{\frac{d}{t}}$, then $C_{\mathbb{F}_q^d}(E)$ contains a cycle of length $2k$ for any $k\in [t, tq^{\frac{d}{t}}]\subset [t, \frac{|E|}{100}]$.
In the following theorem, by making a connection to a very recent result on the number of congruence copies of $2k$-\textit{spherical configurations} spanning $2k-2$ dimensions due to Lyall, Magyar, and Parshall in \cite{lyall}, we prove a stronger statement, namely, there are many cycles passing through a given vertex in $C_{\mathbb{F}_q^d}(E)$.
\begin{theorem}\label{thm-lowerbound}
Let $d, k\in \mathbb{N}$ with $d\ge 4k+2$, $\alpha\in (0, 1)$ and $q\ge q(\alpha, k)$. For a symmetric set $E\subset S^{d-1}\subset \mathbb{F}_q^d$ with $|E|\ge \alpha q^{d-1}$, the number of cycles of length $2k$ in $C_{\mathbb{F}_q^d}(E)$ with distinct vertices passing through each vertex of $C_{\mathbb{F}_q^d}(E)$ is at least $c(\alpha, k)q^{\frac{(2k-1)d-4k}{2}}$.
\end{theorem}
We note that for any $k\ge 2$, one can construct subsets $E$ of $S^{d-1}$ with $|E|\sim q^{\frac{d}{2k-1}}$ such that no cycle of length $2k$ in $C_{\mathbb{F}_q^d}(E)$ has distinct vertices (Remark \ref{rmm}). When $E=S^{d-1}$, bounding the number of cycles in $C_{\mathbb{F}_q^d}(E)$ is much easier, since $C_{\mathbb{F}_q^d}(S^{d-1})$ is a pseudo-random graph with second eigenvalue $\mu\sim \sqrt{|S^{d-1}|}$. More precisely, the following proposition gives an asymptotically exact count of cycles.
\begin{proposition}\label{pro14} Suppose $E=S^{d-1}$. Then the number of cycles of length $2k$ in $C_{\mathbb{F}_q^d}(S^{d-1})$ is $(1+o(1))|S^{d-1}|^{2k-1}q^{d-1}$. In addition, for any set $A\subset \mathbb{F}_q^d$ with $|A|\gg \min \{q^{\frac{d+1}{2}}, q^\frac{k}{k-1}\}$, the number of cycles of length $2k$ in $C_{\mathbb{F}_q^d}(E)$ with vertices in $A$ is at least $q^{-2k}|A|^{2k}$.
\end{proposition}
Proposition \ref{thm1.9} tells us that there exist sets $E$ such that the second eigenvalue of $C_{\mathbb{F}_q^d}(E)$ is close to the degree $|E|$. However, it is not known whether or not this is true for all sets $E$. Therefore, finding an explicit proper subset $E\subset S^{d-1}$ such that $\mu=o(|E|)$ would be a very interesting question, which would provide a better understanding of local structures in the Erd\H{o}s-Falconer distance problem studied in \cite{hart, ir}. We remark here that, with high probability, such subsets exist; see \cite[Corollary 1.4]{chen} and the references therein for more details.
\section{Preliminaries}
Let $\chi\colon \mathbb{F}_q\to \mathbb{S}^1$ be the canonical additive character. For example, if $q$ is a prime number, then $\chi(t)=e^{\frac{2\pi i t}{q}}$; if $q=p^n$, then we set $\chi(t)=e^{\frac{2\pi i \mathtt{Tr}(t)}{p}}$, where $\mathtt{Tr}\colon \mathbb{F}_q\to \mathbb{F}_p$ is the trace function defined by $\mathtt{Tr}(x):=x+x^p+\cdots+x^{p^{n-1}}$.
We recall the orthogonal property of $\chi$: for any $x\in \mathbb{F}_q^d$, $d\ge 1$,
\[\sum_{m\in \mathbb{F}_q^d}\chi(x\cdot m)=\begin{cases}0~~\mathtt{if}~x\ne (0, \ldots, 0) \\ q^d ~\mathtt{if}~x=(0, \ldots, 0)\end{cases},\]
where $x\cdot m=x_1m_1+\cdots+x_dm_d$.
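This orthogonality is the familiar geometric-series identity for roots of unity; for a prime $q$ and $d=1$ it can be checked numerically (an illustrative sketch of our own, with $q=11$ chosen arbitrarily):

```python
import cmath

q = 11
chi = lambda t: cmath.exp(2j * cmath.pi * t / q)

for x in range(q):
    s = sum(chi(x * m) for m in range(q))
    expected = q if x == 0 else 0     # sum is q at x = 0 and vanishes otherwise
    assert abs(s - expected) < 1e-9
```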
Throughout this paper, for any $x\in \mathbb{F}_q^d$, we define $||x||=x_1^2+\cdots+x_d^2$.
Given a set $E\subset \mathbb{F}_q^d$, we identify $E$ with its indicator function $1_E$. The Fourier transform of $E$ is defined by
\[\widehat{E}(m):=\sum_{x\in \mathbb{F}_q^d}E(x)\chi(-x\cdot m).\]
Let $E$ be a set in $\mathbb{F}_q^d$, and $k$ be a positive integer. The $k$--additive energy of $E$, denoted by $T_k(E)$, is defined by
\[T_k(E):=\#\left\lbrace (a_1, \ldots, a_k, b_1, \ldots, b_k)\in E^{2k}\colon a_1+\cdots+a_k=b_1+\cdots+b_k\right\rbrace.\]
We call such a tuple $(a_1, \ldots, a_k, b_1, \ldots, b_k)$ \textit{$k$-energy tuple}.
A $k$-energy tuple $(a_1, \ldots, a_k, b_1, \ldots, b_k)\in \left(\mathbb{F}_q^d\right)^{2k}$ is called \textit{good} if for any two sets of indices $I, J\subseteq \{1, \ldots, k\}$, not both empty and not both equal to $\{1, \ldots, k\}$, we have $\sum_{i\in I}a_i-\sum_{j\in J}b_j\ne 0$. We denote the number of good $k$-energy tuples with entries in $E$ by $T_k^{\mathtt{good}}(E)$.
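For a concrete feel for these definitions (a brute-force sketch of our own, with the small parameters $q=5$, $d=2$, $k=2$ and $E=S^1\subset\mathbb{F}_5^2$), the $2$-additive energy can be computed either by enumerating quadruples directly or, equivalently, by summing the squared multiplicities of pairwise sums:

```python
import itertools
from collections import Counter

q = 5
# E = the unit circle S^1 in F_5^2
E = [(x, y) for x in range(q) for y in range(q) if (x * x + y * y) % q == 1]

def add(u, v):
    return tuple((a + b) % q for a, b in zip(u, v))

# Direct enumeration of 2-energy tuples (a_1, a_2, b_1, b_2) with a_1+a_2 = b_1+b_2
T2_direct = sum(1 for a1, a2, b1, b2 in itertools.product(E, repeat=4)
                if add(a1, a2) == add(b1, b2))

# Equivalent count via squared multiplicities of pairwise sums
mult = Counter(add(a1, a2) for a1, a2 in itertools.product(E, repeat=2))
T2_conv = sum(m * m for m in mult.values())

assert T2_direct == T2_conv
print(len(E), T2_direct)   # |S^1| = 4 and T_2(S^1) = 36 for q = 5
```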
In the next lemma, we show that for every vertex $v\in \mathbb{F}_q^d$, the number of cycles of length $2k$ with distinct vertices going through $v$ is at least $T_k^{\mathtt{good}}(E)$.
\begin{lemma}\label{con}
For any $k\ge 2$ and any $v\in \mathbb{F}_q^d$, the number of cycles of length $2k$ in $C_{\mathbb{F}_q^d}(E)$ with distinct vertices going through $v$ is at least $T_k^{\mathtt{good}}(E)$.
\end{lemma}
\begin{proof}
For each good $k$-energy tuple $(a_1, \ldots, a_k, b_1, \ldots, b_k)\in E^{2k}$, we consider the following cycle of length $2k$ in $C_{\mathbb{F}_q^d}(E):$
\[v, v+a_1, v+a_1+a_2, \ldots, v+a_1+\cdots+a_k, v+\sum_{i=1}^ka_i-b_1, \cdots, v+\sum_{i=1}^ka_i-\sum_{i=1}^{k-1}b_i.\]
We observe that in this cycle each vertex appears exactly once, since the $k$-energy tuple is good. So, for each vertex $v$, there are at least $T_k^{\mathtt{good}}(E)$ cycles with distinct vertices passing through $v$.
\end{proof}
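The construction in this proof is easy to trace in code. The following brute-force sketch (our own illustration, with $q=7$, $d=2$, $k=2$, and $E=S^1\subset\mathbb{F}_7^2$; here $E$ is the full sphere rather than a proper subset, purely for simplicity) enumerates $2$-energy tuples, builds the walk above through $v=(0,0)$, and counts the walks whose $2k$ vertices are distinct:

```python
import itertools

q, k = 7, 2
E = [(x, y) for x in range(q) for y in range(q) if (x * x + y * y) % q == 1]

def add(u, v): return tuple((a + b) % q for a, b in zip(u, v))
def sub(u, v): return tuple((a - b) % q for a, b in zip(u, v))

v = (0, 0)
cycles = 0
for t in itertools.product(E, repeat=2 * k):
    a, b = t[:k], t[k:]
    sum_a, sum_b = (0, 0), (0, 0)
    for ai in a: sum_a = add(sum_a, ai)
    for bi in b: sum_b = add(sum_b, bi)
    if sum_a != sum_b:                 # keep only k-energy tuples
        continue
    walk = [v]                         # v, v+a_1, ..., v+a_1+...+a_k,
    for ai in a:                       # then subtract b_1, ..., b_{k-1}
        walk.append(add(walk[-1], ai))
    for bi in b[:-1]:
        walk.append(sub(walk[-1], bi))
    assert sub(walk[-1], v) == b[-1]   # the closing edge back to v uses b_k
    if len(set(walk)) == 2 * k:        # all vertices distinct: a genuine 2k-cycle
        cycles += 1

assert cycles > 0
```

Since $E$ is symmetric, each backward step $-b_i$ is also a step along an element of $E$, so every such walk is a closed walk in $C_{\mathbb{F}_q^d}(E)$.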
We also recall the well-known expander mixing lemma for regular graphs. We refer the reader to \cite{hanson, expander} for proofs.
\begin{lemma}\label{exp} Let $\mathcal{G}$ be a regular graph with $n$ vertices of degree $d$. Suppose that the second eigenvalue of $\mathcal{G}$ is at most $\mu$, then for any two vertex sets $U$ and $W$ in $\mathcal{G}$, the number of edges between $U$ and $W$, denoted by $e(U, W)$, satisfies
\[\left\vert e(U, W)-\frac{d|U||W|}{n}\right\vert\le \mu |U|^{1/2}|W|^{1/2}.\]
When $U$ and $W$ are multi-sets, we also have
\[\left\vert e(U, W)-\frac{d|U||W|}{n}\right\vert\le \mu \left(\sum_{u\in \overline{U}}m(u)^2\right)^{1/2}\cdot \left(\sum_{w\in \overline{W}}m(w)^2\right)^{1/2},\]
where $\overline{X}$ is the set of distinct elements in $X$, and $m(x)$ is the multiplicity of $x$.
\end{lemma}
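As an illustration (a numerical sketch of our own, not part of the proofs), the first inequality can be verified on a small Cayley graph $C_{\mathbb{Z}_q}(E)$, taking $\mu$ to be the largest absolute value of a nontrivial eigenvalue; the parameters $q=13$ and $E=\{1,12,5,8\}$ are arbitrary choices:

```python
import numpy as np

q = 13
E = {1, 12, 5, 8}                      # symmetric subset of Z_q
n, d = q, len(E)

A = np.array([[1 if (y - x) % q in E else 0
               for y in range(q)] for x in range(q)])

lam = np.linalg.eigvalsh(A)
mu = sorted(abs(lam))[-2]              # second largest absolute eigenvalue

# Check |e(U, W) - d|U||W|/n| <= mu * sqrt(|U||W|) on random vertex sets
rng = np.random.default_rng(1)
for _ in range(50):
    U = rng.choice(q, size=5, replace=False)
    W = rng.choice(q, size=4, replace=False)
    eUW = sum(A[u, w] for u in U for w in W)
    bound = mu * np.sqrt(len(U) * len(W))
    assert abs(eUW - d * len(U) * len(W) / n) <= bound + 1e-9
```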
\section{Proof of Theorem \ref{thm-lowerbound}}
Theorem \ref{thm-lowerbound} follows directly from Lemma \ref{con} and the following lower bound for $T_k^{\mathtt{good}}(E)$.
\begin{theorem}\label{kenergy}
Suppose that $E$ satisfies the assumptions of Theorem \ref{thm-lowerbound}. Then we have
\[T_k^{\mathtt{good}}(E)\ge c(\alpha, k)q^{\frac{(2k-1)d-4k}{2}}.\]
\end{theorem}
In the rest of this section, we focus on proving Theorem \ref{kenergy}.
For each $j\ne 0$, let $S_j^{d-1}(x)$ be the sphere centered at $x\in \mathbb{F}_q^d$ of radius $j$. For the sake of simplicity, we write $S_j^{d-1}$ for $S_j^{d-1}(0, \ldots, 0)$, and $S^{d-1}$ for $S_1^{d-1}(0, \ldots, 0)$.
\begin{definition}
Let $X\subset \mathbb{F}_q^d$ be a configuration. We say that $X$ is spherical if $X\subset S_1^{d-1}(x)$ for some $x\in \mathbb{F}_q^d$. If $\dim (\mathtt{Span}(X-X))=k$, then we say $X$ spans $k$ dimensions.
\end{definition}
The following result is our key ingredient in the proof of Theorem \ref{kenergy}.
\begin{theorem}[Lyall-Magyar-Parshall, \cite{lyall}]\label{lyalll}
Let $d, k\in \mathbb{N}$ with $d\ge 2k+6$, $\alpha\in (0, 1)$ and $q\ge q(\alpha, k)$. If $E\subset S^{d-1}$ with $|E|\ge \alpha q^{d-1}$, then $E$ contains at least $c(\alpha, k) q^{\frac{(k+1)d-(k+1)(k+2)}{2}}$ isometric copies of every non-degenerate $(k+2)$-point spherical configuration spanning $k$ dimensions.
\end{theorem}
This theorem says that for any $\alpha\in (0, 1)$ and any fixed non-degenerate $(k+2)$-point spherical configuration $X$ spanning $k$ dimensions, there exists $q_0=q_0(\alpha, k)$ which is large enough, such that for any $E\subset S^{d-1}\subset \mathbb{F}_q^d$ with $|E|\ge \alpha q^{d-1}$ and $q\ge q_0$, $E$ contains many isometric copies of $X$. More precisely, let
\[X=\{\mathbf{0}, v_1, \ldots, v_k, a_1v_1+\cdots+a_kv_k\},\]
where $\mathbf{0}=(0, \ldots, 0), ~v_1, \ldots, v_k\in \mathbb{F}_q^d$ are linearly independent vectors, and $a_1, \ldots, a_k\in \mathbb{F}_q$, be a non-degenerate spherical configuration of $k+2$ points in $\mathbb{F}_q^d$ that spans a $k$-dimensional vector space. By non-degenerate, we mean that $\{\mathbf{0}, v_1, \ldots, v_k\}$ forms a $k$-simplex with all non-zero side-lengths. Assume that $E\subset S^{d-1}$ satisfies the conditions of Theorem \ref{lyalll}; then $E$ contains at least $c(\alpha, k)q^{\frac{(k+1)d-(k+1)(k+2)}{2}}$ copies of $X$ of the form
\[X'=\{x_0, x_0+x_1, \ldots, x_0+x_k, x_0+a_1x_1+\cdots+a_kx_k\},\]
with $x_1, \ldots, x_k$ linearly independent such that $x_i\cdot x_j=v_i\cdot v_j$ for $1\le i\le j\le k$.
We recall that two configurations $X$ and $X'$ in $S^{d-1}$ are said to be in the same congruence class if there exists $g\in O(d, \mathbb{F}_q)$, the orthogonal group in $\mathbb{F}_q^d$, such that $g(X)=X'$.
Let $Q$ be the set of distinct congruence classes of spherical configurations $X$ of the form
\[X=\{x_0, x_0+x_1, x_0+x_2, \ldots, x_0+x_{2k-2}, x_0+\sum_{i=1}^{2k-2}(-1)^{i+1}(x_0+x_i)\},\]
satisfying
\begin{itemize}
\item $\{x_1, \ldots, x_{2k-2}\}$ are linearly independent.
\item $||x_i-x_j||\ne 0$, $||x_i||\ne 0$ for all $1\le i\ne j\le 2k-2$.
\item $X$ forms a good $k$-energy tuple.
\end{itemize}
We note that vectors in $X\in Q$ form a $k$-energy tuple since
\[x_0+(x_0+x_1)+(x_0+x_3)+\cdots+(x_0+x_{2k-3})=(x_0+x_2)+(x_0+x_4)+\cdots+(x_0+x_{2k-2})+u,\]
where $u=x_0+\sum_{i=1}^{2k-2}(-1)^{i+1}(x_0+x_i)$.
For each $X\in Q$, let $N(X)$ be the number of congruent copies of $X$ in $E$. Set $N(Q)=\sum_{X\in Q}N(X)$. The next lemma gives us a lower bound for $T_k^{\mathtt{good}}(E)$.
\begin{lemma}\label{step1}
Suppose that $E$ satisfies the assumptions of Theorem \ref{thm-lowerbound}. Then we have
\begin{equation}\label{eq10*}T_k^{\mathtt{good}}(E)\ge N(Q).\end{equation}
\end{lemma}
\begin{proof}
Let \[X=\{x_0, x_0+x_1, x_0+x_2, \ldots, x_0+x_{2k-2}, x_0+\sum_{i=1}^{2k-2}(-1)^{i+1}(x_0+x_i)\}\in Q,\]
and set $u=x_0+\sum_{i=1}^{2k-2}(-1)^{i+1}(x_0+x_i)$, then we have
\[x_0+(x_0+x_1)+(x_0+x_3)+\cdots+(x_0+x_{2k-3})=(x_0+x_2)+(x_0+x_4)+\cdots+(x_0+x_{2k-2})+u,\]
which provides a good $k$-energy tuple. Notice that $x_0+x_i\ne x_0+x_j$ for all pairs $(i, j)$ with $i\ne j$, and $u\ne x_0, x_0+x_i$ for all $i$. Since the additive energy is invariant under the action of orthogonal matrices, we have $N(X)$ good $k$-energy tuples in $E$. Summing over all $X$, we obtain $N(Q)$ good $k$-energy tuples in $E$.
\end{proof}
In view of Lemma \ref{step1}, in order to complete the proof of Theorem \ref{kenergy}, we have to find a lower bound for $N(Q)$, which will follow from a lower bound on $|Q|$ together with Theorem \ref{lyalll}. The following proposition plays an important role in this step.
\begin{proposition}\label{confi} For $d\ge \max\{ 2k-2, 4\}$ and $k\ge 2$, we have $|Q|\gg q^{2k^2-3k}$.
\end{proposition}
With Proposition \ref{confi} in hand, we derive the following corollary.
\begin{corollary}\label{qq}
Let $d, k\in \mathbb{N}$ with $d\ge 4k+2$, $\alpha\in (0, 1)$ and $q\ge q(\alpha, k)$. Let $E\subset S^{d-1}\subset \mathbb{F}_q^d$ with $|E|\ge \alpha q^{d-1}$. We have
\[N(Q)\ge c(\alpha, k)q^{\frac{(2k-1)d-4k}{2}}.\]
\end{corollary}
\begin{proof}
For each configuration in $Q$, we know from Theorem \ref{lyalll} that the number of its copies in $E$ is at least
\[c(\alpha, k)q^{\frac{(2k-1)d-(2k-1)(2k)}{2}}.\]
Taking the sum over the $\gg q^{2k^2-3k}$ congruence classes provided by Proposition \ref{confi}, the corollary follows.
\end{proof}
Combining Lemma \ref{step1} and Corollary \ref{qq}, Theorem \ref{kenergy} is proved.
\subsection{Proof of Proposition \ref{confi}}
We now turn our attention to Proposition \ref{confi}. The proof is quite involved; it combines the usual Cauchy-Schwarz argument with the claim that most $k$-energy tuples in $S^{d-1}$ are $2k$-point spherical configurations spanning $2k-2$ dimensions. We first start with some technical lemmas.
\begin{lemma}[Lemma 4.5, \cite{dothang}]\label{do}
For any $E\subseteq S^{d-1}$, and $k\ge 2$, we have
\[\left\vert T_k(E)-\frac{|E|^{2k-1}}{q} \right\vert \le q^{\frac{d-1}{2}} T_k(E)^{1/2}T_{k-1}(E)^{1/2},\]
where $T_1(E)=|E|$.
\end{lemma}
\begin{corollary}\label{co11}
For $k, d\ge 2$, we have
\[T_k(S^{d-1})=(1+o(1))\frac{|S^{d-1}|^{2k-1}}{q}.\]
\end{corollary}
\begin{proof}
We prove by induction on $k$.
For $k=2$, we apply Lemma \ref{do} to obtain
\[\left\vert T_2(S^{d-1})-\frac{|S^{d-1}|^3}{q}\right\vert \le q^{\frac{d-1}{2}}\cdot T_2^{1/2} |S^{d-1}|^{1/2}.\]
Using the fact that $|S^{d-1}|\sim q^{d-1}$ and setting $x=\sqrt{T_2(S^{d-1})}$, we have
\[x^2\ge c_1q^{3d-4}-c_2q^{d-1}x, ~\mathtt{and}~x^2\le c_1q^{3d-4}+c_2q^{d-1}x,\]
for some positive constants $c_1$ and $c_2$.
Solving these inequalities gives $x\gg q^{\frac{3d-4}{2}}$ and $x\ll q^{\frac{3d-4}{2}}$, respectively. Thus, the base case is proved.
Suppose that the claim holds for $k-1\ge 2$; we now show that it also holds for $k$. Indeed, setting $x=\sqrt{T_k(S^{d-1})}$ and applying Lemma \ref{do} together with the inductive hypothesis, we have
\[x^2-q^{\frac{d-1}{2}}|S^{d-1}|^{\frac{2k-3}{2}} x-\frac{|S^{d-1}|^{2k-1}}{q}\le 0, ~ x^2+q^{\frac{d-1}{2}}|S^{d-1}|^{\frac{2k-3}{2}} x-\frac{|S^{d-1}|^{2k-1}}{q}\ge 0.\]
Solving these inequalities gives
\[x=(1+o(1))\left(\frac{|S^{d-1}|^{2k-1}}{q} \right)^{1/2}.\]
This completes the proof of the corollary.
\end{proof}
\begin{lemma}\label{adidaphat} For $d>n\ge 2$, let $L$ be the number of tuples $(v_0, \ldots, v_n)\in (S^{d-1})^{n+1}$ such that $v_i-v_0\in \{a_1(v_1-v_0)+\cdots+a_{i-1}(v_{i-1}-v_0)+a_{i+1}(v_{i+1}-v_0)+\cdots+a_n(v_n-v_0)\colon a_1, \ldots, a_{i-1}, a_{i+1}, \ldots, a_n\ne 0\}$ for some $1\le i\le n$. We have $L\ll \frac{|S^{d-1}|^{n+1}}{q^{2}}$.
\end{lemma}
\begin{proof}
Without loss of generality, we count the number of such tuples with $i=n$.
Let $\chi$ be the principal additive character of $\mathbb{F}_q$. Using the orthogonality of $\chi$, one has
\begin{align*}
L&\le \frac{1}{q^d}\sum_{s\in \mathbb{F}_q^d}\sum_{v_0, \ldots, v_n\in S^{d-1}}\sum_{a_1, \ldots, a_{n-1}\in \mathbb{F}_q^*}\chi\left(s\cdot \bigg((v_n-v_0)-a_1(v_1-v_0)-\cdots-a_{n-1}(v_{n-1}-v_0)\bigg)\right)\\
&=\frac{|S^{d-1}|^{n+1}}{q^{d-n+1}}+\frac{1}{q^d}\sum_{s\ne \textbf{0}}\sum_{v_0, \ldots, v_n\in S^{d-1}}\sum_{a_1, \ldots, a_{n-1}\in \mathbb{F}_q^*}\chi\left(s\cdot \bigg((v_n-v_0)-a_1(v_1-v_0)-\cdots-a_{n-1}(v_{n-1}-v_0)\bigg)\right)\\
&=\frac{|S^{d-1}|^{n+1}}{q^{d-n+1}}+\frac{1}{q^d}\sum_{s\ne 0}\sum_{a_1, \ldots, a_{n-1}\in \mathbb{F}_q^*}\widehat{S^{d-1}}(a_1s)\cdots \widehat{S^{d-1}}(a_{n-1}s)\widehat{S^{d-1}}(s)\widehat{S^{d-1}}(s(1-a_1-\cdots-a_{n-1})),
\end{align*}
where $\widehat{S^{d-1}}(m)=\sum_{x\in \mathbb{F}_q^d}S^{d-1}(x)\chi(-x\cdot m)$. We now recall from \cite[Lemma 5.1]{ir} that $|\widehat{S^{d-1}}(m)|\ll q^{\frac{d-1}{2}}$ for $m\ne \mathbf{0}$ and $\widehat{S^{d-1}}(\mathbf{0})=|S^{d-1}|\sim q^{d-1}$.
We now partition the sum $\sum_{a_1, \ldots, a_{n-1}\in \mathbb{F}_q^*}$ into two sub-summands $\sum_{a_1+\cdots+a_{n-1}\ne 1}$ and $\sum_{a_1+\cdots+a_{n-1}=1}$.
Therefore,
\begin{align*}
\sum_{a_1+\cdots+a_{n-1}\ne 1}\widehat{S^{d-1}}(a_1s)\cdots \widehat{S^{d-1}}(a_{n-1}s)\widehat{S^{d-1}}(s)\widehat{S^{d-1}}(s(1-a_1-\cdots-a_{n-1}))\ll q^{\frac{(d-1)(n+1)}{2}}\cdot q^{n-1},
\end{align*}
and
\begin{align*}
\sum_{a_1+\cdots+a_{n-1}= 1}\widehat{S^{d-1}}(a_1s)\cdots \widehat{S^{d-1}}(a_{n-1}s)\widehat{S^{d-1}}(s)\widehat{S^{d-1}}(s(1-a_1-\cdots-a_{n-1}))\ll q^{\frac{(d-1)(n)}{2}}\cdot q^{d-1}\cdot q^{n-2}.
\end{align*}
These upper bounds are at most $\frac{|S^{d-1}|^{n+1}}{q^{d-n+1}}$ when $d>n$ and $n\ge 2$. In other words,
\[L\ll \frac{|S^{d-1}|^{n+1}}{q^2}.\]
\end{proof}
\begin{lemma}\label{xyz}
Suppose that $d>2k-2$ and $k\ge 2$. The number of tuples $\{x_0, x_0+x_1, \ldots, x_0+x_{2k-2}, x_0+\sum_{i=1}^{2k-2}(-1)^{i+1}(x_0+x_i)\}$ in $(S^{d-1})^{2k}$ such that
\[x_0+(x_0+x_1)+(x_0+x_3)+\cdots+(x_0+x_{2k-3})=(x_0+x_2)+(x_0+x_4)+\cdots+(x_0+x_{2k-2})+u,\]
where $u=x_0+\sum_{i=1}^{2k-2}(-1)^{i+1}(x_0+x_i)$, and $x_{i}\in \mathtt{Span}(x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_{2k-2})$ for some $1\le i\le 2k-2$, is $o(T_k(S^{d-1}))$.
\end{lemma}
\begin{proof}
Applying Lemma \ref{adidaphat} to the family of vectors $\{x_0, x_0+x_1, \ldots, x_0+x_{2k-2}\}$ or its sub-families, we know that there are $\ll \frac{|S^{d-1}|^{2k-1}}{q^2}$ such tuples whenever $d>2k-2$ and $k\ge 2$. We also know from Corollary \ref{co11} that $T_k(S^{d-1})=(1+o(1))\frac{|S^{d-1}|^{2k-1}}{q}$. Thus, the lemma follows from the fact that
\[\frac{|S^{d-1}|^{2k-1}}{q^2}=o\left(\frac{|S^{d-1}|^{2k-1}}{q}\right).\]
\end{proof}
We are ready to give a proof of Proposition \ref{confi}.
\begin{proof}[Proof of Proposition \ref{confi}]
For any $k$-energy tuple $(a_1, \ldots, a_k, b_1, \ldots, b_k)\in (S^{d-1})^{2k}$, i.e.
\begin{equation}\label{xk}a_1+\cdots +a_k=b_1+\cdots+b_k,\end{equation}
we set $a_i=a_1+x_i$ for $2\le i\le k$, and $ b_i=a_1+y_i$ for $1\le i\le k$.
We first show that most of all tuples $(a_1, \ldots, a_k, b_1, \ldots, b_k)$ satisfying (\ref{xk}) will have the following properties
\begin{itemize}
\item[a.] $\{x_2, \ldots, x_k, y_1, \ldots, y_k\}$ are linearly independent.
\item[b.] $||x_i-x_j||\ne 0, ||y_i-y_j||\ne 0$ for all pairs $i\ne j$, and $||x_i-y_j||\ne 0, ||x_i||\ne 0, ||y_j||\ne 0$ for all pairs $i, j$.
\item[c.] For any $I, J\subseteq \{1, \ldots, k\}$, we have $\sum_{i\in I}a_i-\sum_{j\in J}b_j\ne 0$.
\end{itemize}
Indeed, let $T_k^{\mathtt{dep}}(S^{d-1})$, $T_k^{\mathtt{0}}(S^{d-1})$, and $T_k^{\mathtt{bad}}(S^{d-1})$ denote the numbers of $k$-energy tuples not satisfying (a), (b), and (c), respectively. We will prove that $T_k^{\mathtt{dep}}(S^{d-1}), T_k^{\mathtt{0}}(S^{d-1}), T_k^{\mathtt{bad}}(S^{d-1})=o(T_k(S^{d-1}))$.
{\bf Bounding $T_k^{\mathtt{dep}}$:} By Lemma \ref{xyz}, we have $T_k^{\mathtt{dep}}=o(T_k(S^{d-1}))$.
{\bf Bounding $T_k^0$:} It follows from our setting that $||x_i-x_j||=||a_i-a_j||$ and $||x_i-y_j||=||a_i-b_j||$. Hence, it is sufficient to count tuples with $||a_i-a_j||=0$ for some $1\le i\ne j\le k$. The other cases can be treated in the same way.
Without loss of generality, we assume that $||a_1-a_2||=0$, which is equivalent to $||x_2||=0$.
Let $U$ be the multi-set defined by
\[U:=\{a_1+\cdots+a_k\colon a_i\in S^{d-1}, ||a_1-a_2||=0 \}.\]
Let $W$ be the multi-set defined by
\[W:=\{b_1+\cdots+b_{k-1}\colon b_i\in S^{d-1}\}.\]
Let $e(U, W)$ be the number of pairs $(u, w)\in U\times W$ such that $u-w\in S^{d-1}$. Applying Lemma \ref{exp} for the graph $C_{\mathbb{F}_q^d}(S^{d-1})$, we have
\[e(U, W)\le \frac{|U||W|}{q}+q^{\frac{d-1}{2}}\left(\sum_{u\in \overline{U}}m(u)^2\right)^{1/2}\cdot \left(\sum_{w\in \overline{W}}m(w)^2\right)^{1/2},\]
where $m(u), m(w)$ are the multiplicities of $u$ and $w$ in $U$ and $W$, respectively.
We know from \cite{hart} that for any two sets $X, Y\subseteq S^{d-1}$, the number of pairs $(x, y)\in X\times Y$ such that $||x-y||=0$ is at most $\frac{|X||Y|}{q}+q^{\frac{d}{2}}|X|^{1/2}|Y|^{1/2}$.
So with $X=Y=S^{d-1}$, we obtain $|U|\ll \frac{|S^{d-1}|^k}{q}$. It is clear that $|W|=|S^{d-1}|^{k-1}$.
On the other hand, it is not hard to see that
\[\sum_{u}m(u)^2\le T_k(S^{d-1}), ~\sum_{w}m(w)^2\le T_{k-1}(S^{d-1}).\]
Using Corollary \ref{co11}, one has
\[e(U, W)\le \frac{|S^{d-1}|^{2k-1}}{q^2}+q^{\frac{d-1}{2}}\cdot \frac{|S^{d-1}|^{\frac{2k-1}{2}}}{q^{1/2}}\cdot \frac{|S^{d-1}|^{\frac{2k-3}{2}}}{q^{1/2}}\ll \frac{|S^{d-1}|^{2k-1}}{q^2}.\]
On the other hand, $e(U, W)$ equals the number of tuples satisfying (\ref{xk}) with $||a_1-a_2||=0$.
In other words,
\[T_k^0\ll \frac{|S^{d-1}|^{2k-1}}{q^2}=o(T_k(S^{d-1})).\]
{\bf Bounding $T_k^{\mathtt{bad}}$:}
Let $I$ and $J$ be two subsets of $\{1, \ldots, k\}$. Assume that $|I|=|J|=m$. The case $|I|\ne |J|$ is treated in the same way. Without loss of generality, we assume that $I=J=\{1, \ldots, m\}$. We now count the number of $k$-energy tuples $(a_1, \ldots, a_k, b_1, \ldots, b_k)\in (S^{d-1})^{2k}$ such that $a_1+\cdots+a_m-b_1-\cdots-b_m=0$. This implies that $a_{m+1}+\cdots+a_k-b_{m+1}-\cdots-b_k=0$.
We now show that the number of tuples $(a_1, \ldots, a_m,b_1, \ldots, b_m)\in \left(S^{d-1}\right)^{2m}$ such that $a_1+\cdots+a_m-b_1-\cdots-b_m=0$ is $\ll \frac{|S^{d-1}|^{2m-1}}{q}$.
Indeed, using the same argument as in bounding $T_k^0$, let $U', W'$ be multi-sets defined by
\[U':=\{a_1+\cdots+a_m\colon a_i\in S^{d-1}\},~~W':=\{b_1+\cdots+b_{m-1}\colon b_i\in S^{d-1}\}.\]
The number of such tuples is bounded by $e(U', W')$ in the graph $C_{\mathbb{F}_q^d}(S^{d-1})$. As before, we also have
\[\sum_{u\in \overline{U'}}m(u)^2=T_m(S^{d-1}), ~\sum_{w\in \overline{W'}}m(w)^2=T_{m-1}(S^{d-1}).\]
Using Lemma \ref{exp} and Lemma \ref{do}, we have
\[e(U', W')\ll \frac{|S^{d-1}|^{2m-1}}{q}+q^{\frac{d-1}{2}}\cdot \frac{|S^{d-1}|^{2m-2}}{q}\ll \frac{|S^{d-1}|^{2m-1}}{q}.\]
Similarly, the number of tuples $(a_{m+1}, \ldots, a_k, b_{m+1}, \ldots, b_k)\in \left(S^{d-1}\right)^{2(k-m)}$ such that $a_{m+1}+\cdots+a_k-b_{m+1}-\cdots-b_k=0$ is $\ll \frac{|S^{d-1}|^{2(k-m)-1}}{q}$.
Hence, the number of $k$-energy tuples with $\sum_{i\in I}a_i-\sum_{j\in J}b_j=0$ is $\ll \frac{|S^{d-1}|^{2k-2}}{q^2}$.
Summing over all possibilities of sets $I$ and $J$, we obtain
\[T_k^{\mathtt{bad}}(S^{d-1})=o(T_k(S^{d-1})).\]
From the bounds on $T_k^{\mathtt{dep}}$, $T_k^0$, and $T_k^{\mathtt{bad}}(S^{d-1})$, we conclude that most $k$-energy tuples in $S^{d-1}$ satisfy $(a)$, $(b)$, and $(c)$. We denote the number of those tuples by $T_k^*(S^{d-1})$.
We recall that two non-trivial spherical configurations $X$ and $X'$ are in the same congruence class if there exists $g\in O(d, \mathbb{F}_q)$ such that $gX=X'$. For each configuration in $Q$, say, \[X=\{x_0, x_0+x_1, x_0+x_2, \ldots, x_0+x_{2k-2}, x_0+\sum_{i=1}^{2k-2}(-1)^{i+1}x_i\},\]
the $2k-1$ vertices $x_0, x_0+x_1, \ldots, x_{0}+x_{2k-2}$ form a non-degenerate $(2k-2)$-simplex. We know from \cite{bennet} that the stabilizer of a non-degenerate $(2k-2)$--simplex in $S^{d-1}$ is of cardinality at least $|O(d-2k+1)|$.
For any $X\in Q$, let $\mu(X)$ be the number of configurations which are congruent to $X$. We have $\sum_{X\in Q}\mu(X)=T_k^*(S^{d-1})$. By the Cauchy--Schwarz inequality, we have
\begin{equation}\label{eq134}\sum_{X\in Q}\mu(X)\le |Q|^{1/2}\cdot \left(\sum_{X}\mu(X)^2\right)^{1/2}.\end{equation}
On the other hand, $\sum_{X}s(X)\mu(X)^2$ is at most the number of pairs of configurations $(X, X')$ such that $X'=g(X)$ for some $g\in O(d, \mathbb{F}_q)$, where $s(X)$ denotes the cardinality of the stabilizer of $X$.
Hence, we can bound $\sum_{X}s(X)\mu(X)^2$ by $T_k^*(S^{d-1})\cdot |O(d, \mathbb{F}_q)|$. This implies that
\begin{equation}\label{eq234}\sum_{X}\mu(X)^2\le \frac{|O(d, \mathbb{F}_q)|\cdot T_k^*(S^{d-1})}{|O(d-2k+1)|}.\end{equation}
We recall from \cite{bennet} that $|O(n, \mathbb{F}_q)|\sim q^{\binom{n}{2}}$. From (\ref{eq134}) and (\ref{eq234}), we obtain $|Q|\gg q^{2k^2-3k}$. This completes the proof.
\end{proof}
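As a small illustration of the asymptotic $|O(n, \mathbb{F}_q)|\sim q^{\binom{n}{2}}$ used above (an informal check for tiny parameters, not part of the proof), one can count orthogonal matrices over $\mathbb{F}_q$ by brute force; the helper name `orthogonal_group_size` is ours.

```python
from itertools import product

def orthogonal_group_size(n, q):
    """Brute-force |O(n, F_q)|: matrices M over F_q with M^T M = I."""
    count = 0
    for flat in product(range(q), repeat=n * n):
        M = [flat[i * n:(i + 1) * n] for i in range(n)]
        # entry (i, j) of M^T M is sum_k M[k][i] * M[k][j]
        ok = all(
            sum(M[k][i] * M[k][j] for k in range(n)) % q == (1 if i == j else 0)
            for i in range(n) for j in range(n)
        )
        count += ok
    return count

# for n = 2 this gives 8 for both q = 3 and q = 5,
# of the same rough order as q^{n(n-1)/2} = q
for q in (3, 5):
    assert orthogonal_group_size(2, q) == 8
```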
\section{Proof of Proposition \ref{thm1.9}}
\begin{proof}[Proof of Proposition \ref{thm1.9}]
Suppose $q=p^r$ with $r= \frac{1}{\epsilon}$ (assume that $1/\epsilon$ is an integer).
Let $\mathcal{A}$ be an arithmetic progression in $\mathbb{F}_q$ of size $p^{r-1}$. Let $X$ be the hyperplane $x_d=0$. Define
\[H:=\{X+(0, \ldots, 0, a)\colon a\in \mathcal{A}\}.\]
Note that $H$ is a set of $|\mathcal{A}|$ translates of the hyperplane $X$.
We have $|H|=q^{d-1}\cdot q^{\frac{r-1}{r}}= q^{d-\epsilon}$. It is not hard to see that
\[|(H-H)\cap S^{d-1}|\ll q^{d-2}\cdot q^{\frac{r-1}{r}}\ll q^{d-1-\epsilon}=o(|S^{d-1}|).\]
For any $1\le m\ll |S^{d-1}|$, let $E\subset S^{d-1}\setminus (H- H)$ with $|E|=m$; then \begin{equation}\label{contradic}(H-H)\cap E=\emptyset.\end{equation}
If $\mu<\frac{|E|}{2q^{\epsilon}}$, then by Lemma \ref{exp} for the graph $C_{\mathbb{F}_q^d}(E)$, one has
\[e(H, H)\ge \frac{|H|^2|E|}{q^d}-\frac{|E||H|}{2q^{\epsilon}}>0,\]
whenever $|H|>\frac{q^{d-\epsilon}}{2}$, which contradicts (\ref{contradic}).
In other words, we have $\mu \ge \frac{|E|}{2q^{\epsilon}}$.
In the case $|E+E|=K|E|<q^d/2$, we start with an observation that
\[T_2(E)\ge \frac{|E|^4}{|E+E|},\]
which implies
\[T_2(E)\ge \frac{|E|^4}{K|E|}. \]
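The observation $T_2(E)\ge |E|^4/|E+E|$ is an instance of the Cauchy--Schwarz inequality applied to the representation function of $E+E$. The following Python snippet verifies it by brute force on an arbitrarily chosen small set $E\subset\mathbb{F}_7^2$ (our choice, purely for illustration).

```python
from itertools import product
from collections import Counter

q = 7
# an arbitrary small test set E in F_q^2 (three parallel lines)
E = [(x, y) for x, y in product(range(q), repeat=2) if (x + 2 * y) % q < 3]

def add(u, v):
    return tuple((a + b) % q for a, b in zip(u, v))

sums = [add(u, v) for u in E for v in E]
counts = Counter(sums)                    # representation function of E + E
T2 = sum(m * m for m in counts.values())  # additive energy T_2(E)
# Cauchy-Schwarz: (sum_x m(x))^2 <= |E+E| * sum_x m(x)^2,
# i.e. T_2(E) >= |E|^4 / |E+E|
assert T2 >= len(E) ** 4 / len(counts)
```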
Let $X$ be the multi-set in $\mathbb{F}_q^d$ defined by $X=E+E$. We can apply the Expander mixing lemma for the graph $C_{\mathbb{F}_q^d}(E)$ to get an upper bound for $T_2(E)$. Indeed, one has
\[T_2(E)=e(X, -E)\le \frac{|E|^4}{q^d}+\mu \cdot T_2(E)^{1/2}\cdot |E|^{1/2}.\]
This gives us
\[T_2(E)\ll \frac{|E|^4}{q^d}+\mu^2\cdot |E|.\]
Since $K|E|<q^d/2$, we have $\mu^2|E|\gg \frac{|E|^4}{K|E|}$. This gives $\mu\gg \frac{|E|}{K^{1/2}}$. Hence, when $K\sim 1$, we have $\mu\gg |E|$.
\end{proof}
\section{Proof of Proposition \ref{pro14}}
\begin{proof}[Proof of Proposition \ref{pro14}]
We have seen in the proof of Theorem \ref{thm-lowerbound} that the number of cycles of length $2k$ is equal to $q^d\cdot T_k^*(S^{d-1})$ and $T_k^*(S^{d-1})=(1+o(1))|S^{d-1}|^{2k-1}/q$. Hence, the number of cycles of length $2k$ in $C_{\mathbb{F}_q^d}(S^{d-1})$ is $(1+o(1))|S^{d-1}|^{2k-1}q^{d-1}$.
To prove the upper bound on the number of cycles of length $2k$ in a given set $A\subset \mathbb{F}_q^d$, we need to recall the following result from \cite{bene}.
\begin{theorem}[Bennett-Chapman-Covert-Hart-Iosevich-Pakianathan, \cite{bene}]\label{Mophat-Adidaphat} Let $A\subset \mathbb{F}_q^d$ with $d\ge 2$, and let $k\ge 1$ be an integer. Suppose that $\frac{2k}{\ln 2}q^{\frac{d+1}{2}}=o(|A|)$; then the number of paths of length $k$ with vertices in $A$ in $C_{\mathbb{F}_q^d}(S^{d-1})$ is $(1+o(1))\frac{|A|^{k+1}}{q^k}$.
\end{theorem}
Let $N$ be the number of cycles of length $2k$ with vertices in $A$. For any two vertices $x, y\in A$, let $P(x, y)$ be the number of paths of length $k$ between $x$ and $y$ with vertices in $A$. It follows from Theorem \ref{Mophat-Adidaphat} that
\[\sum_{x, y\in A}P(x, y)=(1+o(1))\frac{|A|^{k+1}}{q^k}.\]
It is clear that
\[N=\sum_{x, y\in A}\binom{P(x, y)}{2}.\]
Using the convexity of the function $\binom{x}{2}$, one has
\[N\gg |A|^2\cdot \binom{\frac{\sum_{x, y\in A}P(x, y)}{|A|^2}}{2}\gg \frac{|A|^{2k}}{q^{2k}},\]
provided that $|A|\gg q^{\frac{k}{k-1}}$. This completes the proof.
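The convexity step above is Jensen's inequality for the convex function $\binom{x}{2}=x(x-1)/2$. The following Python snippet illustrates it on randomly generated path counts (purely synthetic data of our own choosing, standing in for the $P(x,y)$).

```python
from random import randint, seed

seed(0)
P = [randint(0, 10) for _ in range(100)]   # synthetic stand-ins for P(x, y)
mean = sum(P) / len(P)

def choose2(x):
    # C(x, 2) = x(x-1)/2, extended to real x; convex (upward parabola)
    return x * (x - 1) / 2

lhs = sum(choose2(p) for p in P)   # plays the role of sum over (x,y) of C(P(x,y), 2)
rhs = len(P) * choose2(mean)       # plays the role of |A|^2 * C(average, 2)
assert lhs >= rhs                  # Jensen's inequality
```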
We remark here that in a recent paper, Iosevich, Jardine, and McDonald \cite{io2021} proved that when $|A|\gg q^{\frac{d+2}{2}}$, one has a tight upper bound on the number of cycles with vertices in $A$. More precisely,
\begin{theorem}[Iosevich-Jardine-McDonald, \cite{io2021}]\label{thm11}
Let $A$ be a set in $\mathbb{F}_q^d$ and suppose that $|A|\gg q^{\frac{d+2}{2}}$. Then for any positive integer $k\ge 3$, the number of cycles of length $k$ in $C_{\mathbb{F}_q^d}(S^{d-1})$ with vertices in $A$ is $(1+o(1))|A|^{k}q^{-k}$.
\end{theorem}
The authors of \cite{io2021} indicated that when $k$ is large, the exponent $\frac{d+2}{2}$ can be improved; namely, the condition
\[|A|\ge \begin{cases}q^{\frac{1}{2}\left(d+2-\frac{k-4}{k-2}+\delta \right)}, ~\mathtt{if}~k\ge 4~ \mathtt{even}\\ q^{\frac{1}{2}\left(d+2-\frac{k-3}{k-1}+\delta \right)}, ~\mathtt{if}~k\ge 3~ \mathtt{odd}\end{cases},\]
where $0<\delta\ll \frac{1}{k^2}$,
would be enough. However, these exponents are still bigger than $\frac{d+1}{2}$. We refer the reader to \cite{io2021} for more discussions.
\end{proof}
\begin{remark}\label{rmm}
We remark here that for any $k\ge 2$, there exists a set $E\subseteq S^{d-1}$ with $|E|\gg q^{\frac{d}{2k-1}}$ such that no cycle of length $2k$ in $C_{\mathbb{F}_q^d}(E)$ has all of its vertices distinct. Such a set can be constructed easily as follows. Let $H$ be a $2k$-uniform hypergraph with vertex set $S^{d-1}$ whose edges are the good $k$-energy tuples; then we know from the proof of Proposition \ref{confi} that the number of edges in $H$ is at most $|S^{d-1}|^{2k-1}/q$. Applying Spencer's lemma on the independence number of hypergraphs \cite{spencer}, we get an independent set $E$ of size $\gg q^{\frac{d}{2k-1}}$. This set satisfies the desired properties.
\end{remark}
\section*{Acknowledgments}
The author was supported by Swiss National Science Foundation grant P4P4P2-191067. I would like to thank Ilya Shkredov for useful discussions about the second eigenvalue of the local Cayley distance graphs.
\bibliographystyle{amsplain}
| {
"timestamp": "2021-03-23T01:20:03",
"yymm": "2103",
"arxiv_id": "2103.11420",
"language": "en",
"url": "https://arxiv.org/abs/2103.11420",
"abstract": "Let $E$ be a proper symmetric subset of $S^{d-1}$, and $C_{\\mathbb{F}_q^d}(E)$ be the Cayley graph with the vertex set $\\mathbb{F}_q^d$, and two vertices $x$ and $y$ are connected by an edge if $x-y\\in E$. Let $k\\ge 2$ be a positive integer. We show that for any $\\alpha\\in (0, 1)$, there exists $q(\\alpha, k)$ large enough such that if $E\\subset S^{d-1}\\subset \\mathbb{F}_q^d$ with $|E|\\ge \\alpha q^{d-1}$ and $q\\ge q(\\alpha, k)$, then for each vertex $v$, there are at least $c(\\alpha, k)q^{\\frac{(2k-1)d-4k}{2}}$ cycles of length $2k$ with distinct vertices in $C_{\\mathbb{F}_q^d}(E)$ containing $v$. This result is the inverse version of a recent result due to Iosevich, Jardine, and McDonald (2021).",
"subjects": "Combinatorics (math.CO)",
"title": "An inverse-type problem for cycles in local Cayley distance graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9830850867332735,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.709534982562225
} |
https://arxiv.org/abs/2102.10358 | Mean dimension of product spaces: a fundamental formula | Mean dimension is a topological invariant of dynamical systems, which originates with Mikhail Gromov in 1999 and which was studied with deep applications around 2000 by Elon Lindenstrauss and Benjamin Weiss within the framework of amenable group actions. Let a countable discrete amenable group $G$ act continuously on compact metrizable spaces $X$ and $Y$. Consider the product action of $G$ on the product space $X\times Y$. The product inequality for mean dimension is well known: $\mathrm{mdim}(X\times Y,G)\le\mathrm{mdim}(X,G)+\mathrm{mdim}(Y,G)$, while it was unknown for a long time if the product inequality could be an equality. In 2019, Masaki Tsukamoto constructed the first example of two different continuous actions of $G$ on compact metrizable spaces $X$ and $Y$, respectively, such that the product inequality becomes strict. However, there is still one longstanding problem which remains open in this direction, asking if there exists a continuous action of $G$ on some compact metrizable space $X$ such that $\mathrm{mdim}(X\times X,G)<2\cdot\mathrm{mdim}(X,G)$. We solve this problem. Somewhat surprisingly, we prove, in contrast to (topological) dimension theory, a rather satisfactory theorem: If an infinite (countable discrete) amenable group $G$ acts continuously on a compact metrizable space $X$, then we have $\mathrm{mdim}(X^n,G)=n\cdot\mathrm{mdim}(X,G)$, for any positive integer $n$. Our product formula for mean dimension, together with the example and inequality (stated previously), eventually allows mean dimension of product actions to be fully understood. | \section{Main result}
Mean dimension is a topological invariant of dynamical systems, which originates with Mikhail Gromov in 1999 and which was investigated with deep applications around 2000 by Elon Lindenstrauss and Benjamin Weiss within the framework of amenable group actions. The purpose of this paper is to establish a fundamental formula for \textit{mean dimension of product actions}. We shall state our main theorem very quickly in this section (Section 1). The definition of mean dimension and all the necessary terminologies can be found in Section 2. The proof of the main result is located in Section 3.
\medskip
Let us start with convention. Throughout this paper the symbol $\mathbb{N}$ will denote the set of positive integers. All acting groups are always assumed to be \textit{countable} and \textit{discrete}. If an amenable group $G$ acts continuously on a compact metrizable space $X$ then we denote its mean dimension by $\mdim(X,G)$, which takes values in $[0,+\infty]$.
\medskip
Let an amenable group $G$ act continuously on compact metrizable spaces $X_i$, respectively, where $i$ ranges over some subset $I$ of $\mathbb{N}$. Consider the product action of $G$ on the product space $\prod_{i\in I}X_i$. The product inequality for mean dimension (due to Lindenstrauss and Weiss \cite{LW}) is well known: $$\mdim(\prod_{i\in I}X_i,G)\le\sum_{i\in I}\mdim(X_i,G).$$ Nevertheless, it was unknown for a long time if the product inequality could always be an equality. In 2019, Masaki Tsukamoto \cite{Tsukamoto} successfully constructed the first example of two \textit{different} continuous actions of $G$ on compact metrizable spaces $X$ and $Y$, respectively, such that the product inequality becomes strict: $$\mdim(X\times Y,G)<\mdim(X,G)+\mdim(Y,G).$$ A serious reader may observe that in order to have a full understanding of mean dimension of product actions, there is still one longstanding issue that remains open, asking if it is possible for the two continuous actions $(X,G)$ and $(Y,G)$ (mentioned in the above example) to be \textit{essentially the same}. Formally, we study the problem in this direction as follows:
\begin{itemize}\item
For an (arbitrarily fixed) amenable group $G$, does there exist a continuous action of $G$ on some compact metrizable space $X$ such that $$\mdim(X\times X,G)<2\cdot\mdim(X,G)?$$
\end{itemize}
We solve this problem completely.
\medskip
First of all, let us make two observations on this issue here. On the one hand, we note that in dimension theory there is an example (to be precise, we refer to Lemma \ref{dimproduct}) of a compact metrizable space $K$ of (topological) dimension $\dim(K)$ so that the product space $K\times K$ satisfies $$\dim(K\times K)<2\cdot\dim(K).$$ On the other hand, we have made mention of Tsukamoto's example which is highly similar to such a classically known analogue that takes place in dimension theory.
Both of these notable phenomena naturally suggest a seemingly plausible impression: namely, that one could eventually find a compact metrizable space $X$ (with a continuous action of $G$ on $X$) satisfying that $\mdim(X\times X,G)$ is strictly less than $2\cdot\mdim(X,G)$, since the (former) example in dimension theory would stimulate us to strengthen the (latter) construction of Tsukamoto's example in mean dimension theory with the help of some sufficiently refined method (which seems hopeful but might be technically difficult).
However, this assertion turns out to be \textit{false}. Somewhat surprisingly, we prove a rather satisfactory theorem:
\medskip
\begin{theorem}[Main theorem]\label{main}
If an infinite amenable group $G$ acts continuously on a compact metrizable space $X$, then we have $$\mdim(X^n,G)=n\cdot\mdim(X,G)$$ for all positive integers $n$.
\end{theorem}
\medskip
\begin{remark}
Theorem \ref{main} also applies to $n\in\mathbb{N}\cup\{0\}\cup\{+\infty\}$ provided $0\cdot(+\infty)=(+\infty)\cdot0=0$. Indeed, this statement is obviously correct for $n=0$ if we set $X^0$ to be the one-point set. Moreover, with slightly more effort, assuming Theorem \ref{main} for every $n\in\mathbb{N}$, we are able to show that the statement is true for $n=+\infty$. In fact, there are two cases. If $\mdim(X,G)=0$, then $\mdim(X^\infty,G)=0$ follows directly from the product inequality for mean dimension. Now we suppose $\mdim(X,G)>0$. Since it is clear that $\mdim(X^\infty,G)\ge\mdim(X^n,G)=n\cdot\mdim(X,G)$ for every $n\in\mathbb{N}$ (by definition and by the statement of Theorem \ref{main} for all $n\in\mathbb{N}$), we have $\mdim(X^\infty,G)=+\infty$.
\end{remark}
\begin{remark}
If $G$ is a finite group (which is automatically amenable) then Theorem \ref{main} may be false. Notice that in this case we have by definition $\mdim(X,G)=\dim(X)/|G|$. The entire picture of the situation is as follows: If $X$ satisfies $\dim(X)=+\infty$, then $\mdim(X,G)=+\infty$. So does any of its self-products. Thus, the statement remains true for every $n\in\mathbb{N}\cup\{0,+\infty\}$ in this case. Now let us suppose that $X$ is finite dimensional. It follows from Lemma \ref{dimproduct} that for each $n\in\mathbb{N}$ \begin{align*}\mdim(X^n,G)=\begin{cases}n\cdot\mdim(X,G),&\text{if $\dim(X\times X)=2\dim(X)$}\\n\cdot\mdim(X,G)-(n-1)/|G|,&\text{otherwise}\end{cases}.\end{align*} Thus, in this case the statement fails if and only if $X$ does not satisfy $\dim(X\times X)=2\dim(X)$ and meanwhile $n$ does not belong to $\{0,1,+\infty\}$. In short, the \textit{exact range} to which the statement of Theorem \ref{main} does \textit{not} apply is where $G$ is a finite group, $X$ satisfies $\dim(X\times X)<2\dim(X)$, and $n\in\mathbb{N}\setminus\{1\}$.
\end{remark}
\medskip
In contrast to dimension theory, Theorem \ref{main} enables an unexpected behaviour in mean dimension theory to become clarified. Furthermore, our main theorem, together with Lindenstrauss--Weiss' inequality and Tsukamoto's example (stated previously), eventually allows mean dimension of product actions to be fully understood.
Our result is new even for $\mathbb{Z}$-actions. A novel point of the theorem is that the statement applies to the context of amenable group actions, whereas the proof goes through the framework of its sofic nature. The key ingredient of our idea is to produce \textit{different} sofic approximation sequences for the acting group, with respect to which, we consider the sofic mean dimension of a group action.
\medskip
\section{A brief review of mean dimension}
Both mean dimension and sofic groups originate with Misha Gromov around 1999. A systematic study of mean dimension in the context of amenable group actions was given around 2000 by Lindenstrauss and Weiss \cite{LW}. In 2013, Hanfeng Li \cite{Li} introduced the notion of sofic mean dimension which is a successful extension of the definition of mean dimension to the setting of sofic group actions, and further, Li built its connection with classical mean dimension. This section is devoted to all the precise notions and notations in relation to our result, and to collecting fundamental material on them.
\subsection{Sofic groups}
We denote by $|F|$ the cardinality of a set $F$. For every $d\in\mathbb{N}$ we write $[d]$ for the set $\{k\in\mathbb{N}:1\le k\le d\}$ and $\Sym(d)$ for the group of permutations of $[d]$. A group $G$ is \textbf{sofic} if there is a sequence $$\Sigma=\{\sigma_i:G\to\Sym(d_i)\}_{i\in\mathbb{N}}$$ together with a sequence $\{d_i\}_{i\in\mathbb{N}}\subset\mathbb{N}$ such that the following three conditions are satisfied:
\begin{align*}
&\bullet\quad\quad\lim_{i\to\infty}\frac{1}{d_i}|\{k\in[d_i]:\sigma_i(st)(k)=\sigma_i(s)\sigma_i(t)(k)\}|=1\quad\text{for all $s,t\in G$;}\\
&\bullet\quad\quad\lim_{i\to\infty}\frac{1}{d_i}|\{k\in[d_i]:\sigma_i(s)(k)\ne\sigma_i(t)(k)\}|=1\quad\text{for all distinct $s,t\in G$;}\\
&\bullet\quad\quad\lim_{i\to\infty}d_i=+\infty.
\end{align*}
Such a sequence $\Sigma$ is called a \textbf{sofic approximation sequence} for $G$.
\begin{remark}
Note that the third condition will be fulfilled automatically if $G$ is an infinite group.
\end{remark}
\begin{remark}
Sofic groups form a fairly extensive class, containing in particular all amenable groups and all residually finite groups. However, it is not yet known whether there exists a non-sofic group.
\end{remark}
\subsection{Product actions}
Let $G$ be a group. By the terminology ``$G$ \textbf{acts continuously on} a compact metrizable space $X$'' we understand a continuous mapping $$\Phi:G\times X\to X,\quad(g,x)\mapsto gx$$ satisfying $$\Phi(e,x)=x,\quad\Phi(gh,x)=\Phi(g,\Phi(h,x)),\quad\forall x\in X,\;\forall g,h\in G,$$ where $e$ is the identity element of the group $G$.
Let a group $G$ act continuously on compact metrizable spaces $X_n$, respectively, where $n$ ranges over some $R\in\{[r]:r\in\mathbb{N}\}\cup\{\mathbb{N}\}$. The \textbf{product action} of $G$ on the product space $\prod_{n\in R}X_n$ is defined as follows: $$g(x_n)_{n\in R}=(gx_n)_{n\in R},\quad\forall g\in G,\;\forall(x_n)_{n\in R}\in\prod_{n\in R}X_n.$$
\subsection{Dimension}
We denote by $\dim(K)$ the topological dimension (i.e. the Lebesgue covering dimension) of a compact metrizable space $K$. If the space $K$ is empty, then we set $\dim(K)=-\infty$. For a finite dimensional (nonempty) compact metrizable space $K$, since it was classically known that $$2\dim(K)-1\le\dim(K\times K)\le2\dim(K)$$ and since $\dim(K)$ must be a nonnegative integer, we have
\begin{itemize}\item either $\quad\dim(K\times K)=2\dim(K)$,\item or $\quad\dim(K\times K)=2\dim(K)-1$.\end{itemize}
For a friendly treatment of the following result in dimension theory we refer to \cite[Theorem 2.5]{Tsukamoto}.
\begin{lemma}\label{dimproduct}
Let $K$ be a finite dimensional compact metrizable space. Then for every $n\in\mathbb{N}$
$$\dim(K^n)=\begin{cases}n\dim(K),&\text{if $K$ satisfies $\dim(K\times K)=2\dim(K)$},\\n\dim(K)-n+1,&\text{otherwise}.\end{cases}$$
\end{lemma}
Let $X$ and $P$ be two compact metrizable spaces. Let $\rho$ be a compatible metric on $X$. For $\epsilon>0$ a continuous mapping $f:X\to P$ is called an \textbf{$\epsilon$-embedding} with respect to $\rho$ if $f(x)=f(x^\prime)$ implies $\rho(x,x^\prime)<\epsilon$, for all $x,x^\prime\in X$. Let $\Widim_\epsilon(X,\rho)$ be the minimum topological dimension $\dim(P)$ of a compact metrizable space $P$ which admits an $\epsilon$-embedding $f:X\to P$ with respect to $\rho$.
\begin{remark}
One may verify that the topological dimension of $X$ is recovered as $\dim(X)=\lim_{\epsilon\to0}\Widim_\epsilon(X,\rho)$.
\end{remark}
Let $K$ be a compact metrizable space with a compatible metric $\rho$. For every $n\in\mathbb{N}$ we define on the product space $K^n$ two compatible metrics $\rho_2$ and $\rho_\infty$ as follows: $$\rho_2\left((x_i)_{i\in[n]},(y_i)_{i\in[n]}\right)=\sqrt{\frac1n\sum_{i\in[n]}(\rho(x_i,y_i))^2},$$ $$\rho_\infty\left((x_i)_{i\in[n]},(y_i)_{i\in[n]}\right)=\max_{i\in[n]}\rho(x_i,y_i).$$ We do not include $n\in\mathbb{N}$ in the notations $\rho_2$ and $\rho_\infty$ because it does not cause any ambiguity.
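As a quick sanity check of these two metrics (an illustration of ours, with $K=[0,1]$ and $\rho(a,b)=|a-b|$), note that they always satisfy $\frac{1}{\sqrt{n}}\rho_\infty\le\rho_2\le\rho_\infty$, a comparison of the same flavor as the one used later in the paper.

```python
import random
random.seed(1)

n = 5
rho = lambda a, b: abs(a - b)   # base metric on K = [0, 1]

def rho_2(x, y):
    # normalized l^2 mean of coordinate distances
    return (sum(rho(a, b) ** 2 for a, b in zip(x, y)) / n) ** 0.5

def rho_inf(x, y):
    # sup metric over coordinates
    return max(rho(a, b) for a, b in zip(x, y))

for _ in range(1000):
    x = [random.random() for _ in range(n)]
    y = [random.random() for _ in range(n)]
    # rho_inf / sqrt(n) <= rho_2 <= rho_inf
    assert rho_inf(x, y) / n ** 0.5 <= rho_2(x, y) + 1e-12
    assert rho_2(x, y) <= rho_inf(x, y) + 1e-12
```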
\subsection{Mean dimension}
A group $G$ is \textbf{amenable} if there exists a sequence $\{F_n\}_{n\in\mathbb{N}}$ of nonempty finite subsets of $G$ such that for any $g\in G$
$$\lim_{n\to\infty}\frac{|F_n\triangle gF_n|}{|F_n|}=0.$$
Such a sequence $\{F_n\}_{n\in\mathbb{N}}$ is called a \textbf{F\o lner sequence} of the group $G$.
Let an amenable group $G$ act continuously on a compact metrizable space $X$. Take a F\o lner sequence $\{F_n\}_{n\in\mathbb{N}}$ of $G$ and a compatible metric $\rho$ on $X$. For a nonempty finite subset $F$ of $G$ we set $$\rho_F(x,x^\prime)=\rho_\infty\left((gx)_{g\in F},(gx^\prime)_{g\in F}\right),\quad\forall\,x,x^\prime\in X.$$ It is clear that $\rho_F$ is also a compatible metric on $X$. The \textbf{mean dimension} of $(X,G)$ is defined by $$\mdim(X,G)=\lim_{\epsilon\to0}\lim_{n\to\infty}\frac{\Widim_\epsilon(X,\rho_{F_n})}{|F_n|}.$$ It is well known that the limits in the above definition always exist. The value $\mdim(X,G)$ is independent of the choices of a F\o lner sequence $\{F_n\}_{n\in\mathbb{N}}$ of $G$ and a compatible metric $\rho$ on $X$.
\subsection{Sofic mean dimension}
Suppose that $\Sigma=\{\sigma_i:G\to\Sym(d_i)\}_{i\in\mathbb{N}}$ is a sofic approximation sequence for a sofic group $G$ which acts continuously on a compact metrizable space $X$ equipped with a compatible metric $\rho$. For a finite subset $F$ of $G$, $\delta>0$ and a map $\sigma:G\to\Sym(d)$ (where $d\in\mathbb{N}$) we define $$\Map(\rho,F,\delta,\sigma)=\{\phi:[d]\to X:\rho_2(\phi\circ\sigma(s),s\phi)\le\delta,\,\forall s\in F\}.$$ We consider the set $\Map(\rho,F,\delta,\sigma)$ as a compact subspace of the product space $X^d$. The \textbf{sofic mean dimension} of $(X,G)$ with respect to $\Sigma$ is defined by $$\mdim_\Sigma(X,G)=\sup_{\epsilon>0}\inf_{F\subset G\text{ finite, }\,\delta>0}\limsup_{i\to\infty}\frac{\Widim_\epsilon\left(\Map(\rho,F,\delta,\sigma_i),\rho_\infty\right)}{d_i}.$$ The definition of $\mdim_\Sigma(X,G)$ does not depend on the compatible metrics $\rho$ on $X$. Nevertheless, it is not clear yet if there is an example of a sofic approximation sequence $\Sigma^\prime$ different from $\Sigma$, which leads to a different value $\mdim_{\Sigma^\prime}(X,G)$. We shall make use of the following theorem \cite[Section 3]{Li}.
\begin{lemma}\label{lithm}
If an infinite amenable group $G$ acts continuously on a compact metrizable space $X$ and if $\Sigma$ is a sofic approximation sequence for $G$, then $\mdim_\Sigma(X,G)=\mdim(X,G)$.
\end{lemma}
\medskip
\section{Proof of the main theorem}
Let $G$ be an infinite amenable group which acts continuously on a compact metrizable space $X$. We fix a positive integer $n$ in this section. Recall that $(X^n,G)$ denotes the product action of $G$ on the product space $X^n$. We shall prove $$\mdim(X^n,G)=n\cdot\mdim(X,G).$$
\medskip
Since the group $G$ is amenable, it is sofic. Therefore we may take a sofic approximation sequence for $G$: $$\Sigma=\{\sigma_i:G\to\Sym(d_i)\}_{i\in\mathbb{N}},$$ where $\{d_i\}_{i\in\mathbb{N}}$ is a sequence of positive integers with $d_i\to+\infty$ as $i\to+\infty$. We generate a new sofic approximation sequence for $G$ (confirmed below) as follows: $$\Sigma^{(n)}=\{\sigma^{(n)}_i:G\to\Sym(nd_i)\}_{i\in\mathbb{N}},$$ where for every $i\in\mathbb{N}$ the map $$\sigma^{(n)}_i:G\to\Sym(nd_i)$$ is defined by: $$\sigma^{(n)}_i(g)\left((j-1)n+l\right)=(\sigma_i(g)(j)-1)n+l,\,\quad\;\forall g\in G,\;\forall j\in[d_i],\;\forall l\in[n].$$
\medskip
\begin{lemma}
$\Sigma^{(n)}$ is a sofic approximation sequence for $G$.
\end{lemma}
\begin{proof}
Clearly, for every $i\in\mathbb{N}$ and $g\in G$ the map $\sigma^{(n)}_i(g):[nd_i]\to[nd_i]$ is a permutation of $[nd_i]$. Besides, it is straightforward to verify that for any $s,t\in G$, $j\in[d_i]$ and $l\in[n]$ we have $$\sigma^{(n)}_i(st)\left((j-1)n+l\right)=\sigma^{(n)}_i(s)\sigma^{(n)}_i(t)\left((j-1)n+l\right)\iff\sigma_i(st)(j)=\sigma_i(s)\sigma_i(t)(j),$$$$\sigma^{(n)}_i(s)\left((j-1)n+l\right)=\sigma^{(n)}_i(t)\left((j-1)n+l\right)\iff\sigma_i(s)(j)=\sigma_i(t)(j).$$ Since $\Sigma$ is a sofic approximation sequence for $G$, the assertion follows.
\end{proof}
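The index bookkeeping in the definition of $\sigma^{(n)}_i$ can be checked mechanically. The following Python snippet is a 0-indexed translation of ours (position $jn+l$ with $0\le j<d$, $0\le l<n$ is sent to $\sigma(j)n+l$); it verifies that the lift of a permutation is again a permutation and that lifting respects composition, which is the content of the lemma.

```python
import random
random.seed(0)

def lift(sigma, n):
    # 0-indexed analogue of sigma^{(n)}: position j*n + l  ->  sigma[j]*n + l
    d = len(sigma)
    return [sigma[m // n] * n + m % n for m in range(n * d)]

def compose(a, b):
    # (a o b)(m) = a[b[m]]
    return [a[x] for x in b]

d, n = 6, 3
s = list(range(d)); random.shuffle(s)
t = list(range(d)); random.shuffle(t)

# the lift of a permutation of [d] is a permutation of [n*d]
assert sorted(lift(s, n)) == list(range(n * d))
# lifting respects composition: (s o t)^{(n)} = s^{(n)} o t^{(n)}
assert lift(compose(s, t), n) == compose(lift(s, n), lift(t, n))
```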
\medskip
Let us consider the sofic mean dimension of $(X,G)$ and $(X^n,G)$ with respect to the sofic approximation sequences $\Sigma^{(n)}$ and $\Sigma$, respectively. These two values share the following relation.
\medskip
\begin{lemma}[Key lemma]\label{submain}
$$\mdim_{\Sigma^{(n)}}(X,G)=\frac1n\cdot\mdim_\Sigma(X^n,G).$$
\end{lemma}
\begin{proof}
We fix a compatible metric $\rho$ on $X$ in the proof. Let $\rho^{(n)}$ be the compatible metric $\rho_\infty$ on $X^n$. Let us consider two compact metric spaces as follows: $(X,\rho)$ and $(X^n,\rho^{(n)})$. We take $\epsilon>0$, $\delta>0$, a finite subset $F$ of $G$, and a positive integer $i$, arbitrarily and fix them temporarily.
We note that both of the following two sets: $$\Map(\rho,F,\delta,\sigma^{(n)}_i)=\{\phi:[nd_i]\to X:\rho_2(\phi\circ\sigma^{(n)}_i(s),s\phi)\le\delta,\,\forall s\in F\}$$$$\Map(\rho^{(n)},F,\delta,\sigma_i)=\{\phi:[d_i]\to X^n:\rho^{(n)}_2(\phi\circ\sigma_i(s),s\phi)\le\delta,\,\forall s\in F\}$$ can be regarded as compact subspaces of the product space $X^{nd_i}=(X^n)^{d_i}$. More explicitly, the point here is that we identify $X^{nd_i}$ with $(X^n)^{d_i}$. We notice that the construction of the sofic approximation sequence $\Sigma^{(n)}$ for $G$ and the definition of the product action $(X^n,G)$ ensure that the terms $\phi\circ\sigma^{(n)}_i(s)$ and $\phi\circ\sigma_i(s)$ agree, i.e. $$\phi\circ\sigma^{(n)}_i(s)=\phi\circ\sigma_i(s),\quad\;\forall\phi\in X^{nd_i}=(X^n)^{d_i},\;\forall s\in F.$$ Further, we also remark that $\rho_\infty$ defined on $X^{nd_i}$ corresponds to $\rho^{(n)}_\infty$ defined on $(X^n)^{d_i}$, namely $$\rho_\infty(\psi,\psi^\prime)=\rho^{(n)}_\infty(\psi,\psi^\prime),\quad\,\forall\psi,\psi^\prime\in X^{nd_i}=(X^n)^{d_i},$$ while $\rho_2$ defined on $X^{nd_i}$ and $\rho^{(n)}_2$ defined on $(X^n)^{d_i}$ satisfy the inequality: $$\frac{1}{\sqrt{n}}\cdot\rho^{(n)}_2(\psi,\psi^\prime)\le\rho_2(\psi,\psi^\prime)\le\rho^{(n)}_2(\psi,\psi^\prime),\quad\,\forall\psi,\psi^\prime\in X^{nd_i}=(X^n)^{d_i}.$$ The above observation implies that $$\Map(\rho^{(n)},F,\delta,\sigma_i)\subset\Map(\rho,F,\delta,\sigma^{(n)}_i)\subset\Map(\rho^{(n)},F,\sqrt{n}\delta,\sigma_i).$$ It follows that
\begin{align*}&\Widim_\epsilon\left(\Map(\rho^{(n)},F,\delta,\sigma_i),\rho^{(n)}_\infty\right)\\\le&\Widim_\epsilon\left(\Map(\rho,F,\delta,\sigma^{(n)}_i),\rho_\infty\right)\\\le&\Widim_\epsilon\left(\Map(\rho^{(n)},F,\sqrt{n}\delta,\sigma_i),\rho^{(n)}_\infty\right).\end{align*}
Since $\epsilon>0$, $\delta>0$, a finite subset $F\subset G$ and $i\in\mathbb{N}$ (which we took at the beginning of the proof) are arbitrary, we deduce that $$\mdim_{\Sigma^{(n)}}(X,G)=\frac1n\cdot\mdim_\Sigma(X^n,G).$$ Thus, we end the proof.
\end{proof}
\begin{remark}
The equality established in Lemma \ref{submain} is generally true for all sofic group actions $(X,G)$ and all positive integers $n$. The acting group $G$ in this lemma is not required to be infinite.
\end{remark}
\medskip
We are now able to prove Theorem \ref{main}. The key lemma (Lemma \ref{submain}) indicates that $$\mdim_{\Sigma^{(n)}}(X,G)=\frac1n\cdot\mdim_\Sigma(X^n,G).$$ By Lemma \ref{lithm}, sofic mean dimension (with respect to any sofic approximation sequence) will coincide with (classical) mean dimension, as the acting group $G$ is infinite. Thus, we conclude with $$\mdim(X,G)=\frac1n\cdot\mdim(X^n,G).$$
\medskip
\begin{remark}
We briefly explain the difficulty of this problem. Let $G$ be an infinite amenable group which acts continuously on a compact metrizable space $X$. We fix a positive integer $n$ and a F\o lner sequence $\{F_k\}_{k\in\mathbb{N}}$ of $G$. We recall that $$\mdim(X^n,G)=\lim_{\epsilon\to0}\lim_{k\to\infty}\frac{\Widim_\epsilon(X^n,(\rho_\infty)_{F_k})}{|F_k|},$$$$n\cdot\mdim(X,G)=\lim_{\epsilon\to0}\lim_{k\to\infty}\frac{n\cdot\Widim_\epsilon(X,\rho_{F_k})}{|F_k|},$$ where $\rho$ and $\rho_\infty$ are compatible metrics on $X$ and $X^n$, respectively. To show $$\mdim(X^n,G)\ge n\cdot\mdim(X,G),$$ a main issue is how to estimate the term $n\cdot\Widim_\epsilon(X,\rho_{F_k})$ from above with terms such as some variants of $\Widim_\epsilon(X^n,(\rho_\infty)_{F_k})$. We overcome this obstacle. The strategy we adopted is to consider \textit{different} approximation sequences in the limits. For a systematic treatment we went through an approach of sofic mean dimension. To make it clearer, let us focus on the case of $\mathbb{Z}$-actions. More precisely, let $(X,d)$ be a compact metric space and $T:X\to X$ a homeomorphism. For convenience we change our notation here, which applies only to this remark. For every positive integer $k$ we write $d_k$ for the compatible metric on $X$ defined by $$d_k(x,x^\prime)=\max_{0\le i<k}d(T^ix,T^ix^\prime),\quad\forall\,x,x^\prime\in X.$$ For simplicity we assume $n=2$. The product action of $\mathbb{Z}$ on the product space $X\times X$ is denoted by $T\times T$.
We take a compatible metric $d\times d$ on $X\times X$ as follows: $$d\times d\,((x_1,x_2),(x^\prime_1,x^\prime_2))=\max\{d(x_1,x^\prime_1),d(x_2,x^\prime_2)\},\,\quad\forall\,(x_1,x_2),\,(x^\prime_1,x^\prime_2)\;\in X\times X.$$ To estimate $2\cdot\mdim(X,T)$ from above, we have to turn to an alternative expression (replacing $N$ by $2N$ in the midst of the equality below): $$2\cdot\mdim(X,T)=\lim_{\epsilon\to0}\lim_{N\to+\infty}\frac{2\cdot\Widim_\epsilon(X,d_N)}{N}=\lim_{\epsilon\to0}\lim_{N\to+\infty}\frac{\Widim_\epsilon(X,d_{2N})}{N}.$$ Therefore, in order to show $$2\cdot\mdim(X,T)\le\mdim(X\times X,T\times T)=\lim_{\epsilon\to0}\lim_{N\to+\infty}\frac{\Widim_\epsilon(X\times X,(d\times d)_N)}{N}$$ it suffices to prove $$\Widim_\epsilon(X,d_{2N})\le\Widim_\epsilon(X\times X,(d\times d)_N)$$ for $\epsilon>0$ and $N\in\mathbb{N}$. This will be deduced from the following statement: The continuous mapping $$X\to X\times X,\quad x\mapsto(x,T^Nx)$$ is distance-increasing (actually it is distance-preserving) with respect to $(X,d_{2N})$ and $(X\times X,(d\times d)_N)$, i.e. $$d_{2N}(x,x^\prime)=\max_{0\le i\le2N-1}d(T^ix,T^ix^\prime)=(d\times d)_N\left((x,T^Nx),(x^\prime,T^Nx^\prime)\right),\quad\forall\,x,x^\prime\in X.$$
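The distance-preserving claim above amounts to splitting a maximum over $2N$ iterates into two maxima over $N$ iterates. The following Python snippet checks this identity numerically for the doubling map on the circle $\mathbb{R}/\mathbb{Z}$ (our illustrative choice of $(X,T)$, not one from the paper).

```python
import random
random.seed(2)

T = lambda x: (2 * x) % 1.0                          # doubling map on R/Z
dist = lambda x, y: min(abs(x - y), 1 - abs(x - y))  # circle metric

def orbit(x, k):
    out = []
    for _ in range(k):
        out.append(x)
        x = T(x)
    return out

def d_k(x, y, k):
    # d_k(x, y) = max_{0 <= i < k} dist(T^i x, T^i y)
    return max(dist(a, b) for a, b in zip(orbit(x, k), orbit(y, k)))

N = 4
for _ in range(200):
    x, y = random.random(), random.random()
    xN, yN = orbit(x, N + 1)[N], orbit(y, N + 1)[N]   # T^N x, T^N y
    # d_{2N}(x, y) = (d x d)_N((x, T^N x), (y, T^N y))
    assert abs(d_k(x, y, 2 * N) - max(d_k(x, y, N), d_k(xN, yN, N))) < 1e-9
```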
\end{remark}
\bigskip
| {
"timestamp": "2022-11-22T02:09:19",
"yymm": "2102",
"arxiv_id": "2102.10358",
"language": "en",
"url": "https://arxiv.org/abs/2102.10358",
"abstract": "Mean dimension is a topological invariant of dynamical systems, which originates with Mikhail Gromov in 1999 and which was studied with deep applications around 2000 by Elon Lindenstrauss and Benjamin Weiss within the framework of amenable group actions. Let a countable discrete amenable group $G$ act continuously on compact metrizable spaces $X$ and $Y$. Consider the product action of $G$ on the product space $X\\times Y$. The product inequality for mean dimension is well known: $\\mathrm{mdim}(X\\times Y,G)\\le\\mathrm{mdim}(X,G)+\\mathrm{mdim}(Y,G)$, while it was unknown for a long time if the product inequality could be an equality. In 2019, Masaki Tsukamoto constructed the first example of two different continuous actions of $G$ on compact metrizable spaces $X$ and $Y$, respectively, such that the product inequality becomes strict. However, there is still one longstanding problem which remains open in this direction, asking if there exists a continuous action of $G$ on some compact metrizable space $X$ such that $\\mathrm{mdim}(X\\times X,G)<2\\cdot\\mathrm{mdim}(X,G)$. We solve this problem. Somewhat surprisingly, we prove, in contrast to (topological) dimension theory, a rather satisfactory theorem: If an infinite (countable discrete) amenable group $G$ acts continuously on a compact metrizable space $X$, then we have $\\mathrm{mdim}(X^n,G)=n\\cdot\\mathrm{mdim}(X,G)$, for any positive integer $n$. Our product formula for mean dimension, together with the example and inequality (stated previously), eventually allows mean dimension of product actions to be fully understood.",
"subjects": "Dynamical Systems (math.DS)",
"title": "Mean dimension of product spaces: a fundamental formula",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9830850867332735,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.709534982562225
} |
https://arxiv.org/abs/1807.01667 | Equicontinuity of minimal sets for amenable group actions on dendrites | In this note, we show that if $G$ is an amenable group acting on a dendrite $X$, then the restriction of $G$ to any minimal set $K$ is equicontinuous, and $K$ is either finite or homeomorphic to the Cantor set. | \section{Introduction}
It is well known that every \tw{continuous} action of
\tw{a topological group}~$G$ on \tw{a compact metric space}~$X$
must have a minimal set~$K$. A natural
question is \tw{to ask}
what can \tw{be said} about the topology of~$K$,
and the dynamics of the subsystem~$(K, G)$.
The answer
to this question \tw{certainly}
depends on the topology of~$X$ and
\tw{involves} the algebraic structure of~$G$.
\tw{We assume throughout that groups are
topological groups, and that the actions
are continuous.}
In the case of
\tw{an orientation-preserving} group action on
the circle~$\mathbb S^1$, the topology of minimal sets and the dynamics on them are well understood.
In fact, for any
action of
\tw{a topological} group~$G$ on~$\mathbb S^1$,
the minimal set~$K$ can only be a finite set,
a Cantor set, or the whole circle
(see, \tw{for example,}~\cite{Nav}).
\tw{The interaction between the topology of~$K$
and the algebraic structure of~$G$ arises as follows.}
\begin{itemize}
\item \tw{If}~$K$ is a Cantor set, then~$(K, G)$ is
\tw{semi-conjugate} to a minimal action \tw{of~$G$}
on~$\mathbb S^1$.
\item If~$K=\mathbb S^1$, then~$(K, G)$ is either
equicontinuous,
or~$(K,G)$ is~$\epsilon$-strongly proximal for
some~$\epsilon>0$, and~$G$ contains a
free non-commutative subgroup
(\tw{so, in particular},~$G$ cannot be
amenable; see \cite{Ma}).
\end{itemize}
The
classes of minimal group actions on the circle
\tw{up to topological conjugacy}
\tw{have been} classified by
Ghys using bounded Euler class (see~\cite{Ghy, Gh}).
Recently, there
\tw{has been} considerable progress
in \tw{the study of} group actions on dendrites.
Minimal group actions on dendrites
appear naturally in the theory of~$3$-dimensional
hyperbolic geometry (see,
for example,~\cite{Bo, Mi}).
Shi proved that every minimal group action
on \tw{a dendrite} is strongly proximal,
and the acting group cannot be amenable (see~\cite{Sh, SWZ}).
Based on the results obtained by
Marzougui and Naghmouchi in~\cite{MN},
Shi and Ye showed that
\tw{an} amenable
group action on
\tw{a dendrite} always
has a minimal set consisting of~$1$
or~$2$ points (see~\cite{SY}),
which is also implied by
the work of Malyutin and Duchesne--Monod
(see~\cite{Mal, DM}). For group actions on dendrites with no finite orbits, Glasner and Megrelishvili showed the
extreme proximality of minimal subsystems
and the strong proximality of the whole system; for amenable group actions on dendrites,
they showed that every infinite minimal subsystem is almost automorphic (see~\cite{GM}).
For~$\mathbb Z$ actions on dendrites,
Naghmouchi proved that every minimal set is either
finite or an adding machine (see~\cite{Nag}).
We \tw{prove} the following theorem in this
paper, which extends the corresponding
result for~$\mathbb Z$ actions in~\cite{Nag},
and
\tw{answers} a question proposed by
Glasner and Megrelishvili in~\cite{GM}.
\begin{thm}\label{theoremwas1-1}
Let~$G$ be an amenable group acting on a
dendrite~$X$,
\tw{and suppose that}~$K$ is a minimal set
\tw{for the action}.
Then~$(K, G)$ is equicontinuous,
and~$K$ is either finite or homeomorphic to the Cantor set.
\end{thm}
Recently, Shi and Ye have shown that every amenable group action on a uniquely arcwise connected continuum (without the assumption of local connectedness)
must have a minimal set consisting of~$1$ or~$2$
points (see~\cite{SY1}).
We end this
\tw{introduction} with the following general question:
\begin{quote}
{\it What results holding for group actions on dendrites can be extended to actions on uniquely arcwise connected continua?}
\end{quote}
In the following, we assume all the groups
\tw{appearing}
in this paper are countable.
\section{Preliminaries}
\subsection{Group actions}
Let~$X$ be a compact metric space,~${\rm Homeo}(X)$
\tw{its homeomorphism group},
\tw{and let}~$G$ be a group. A group
homomorphism~$\phi: G\rightarrow {\rm Homeo}(X)$ is called an
\emph{action} of $G$ on $X$; we
\tw{also write}~$(X, G)$ to denote
\tw{an} action of~$G$
on~$X$.
For brevity, we usually
\tw{write}~$gx$ or~$g(x)$ instead of~$\phi(g)(x)$.
The \emph{orbit}
of~$x\in X$ under the action of~$G$ is the
set
\[
Gx=\{gx\mid g\in
G\}.
\]
For a subset~$A\subseteq X$,
set~$GA=\bigcup_{x\in
A}Gx$;
\tw{a set}~$A$ is said to be~$G$-\emph{invariant}
if~$GA=A$;
\tw{finally, a point}~$x\in X$ is called a
\emph{fixed point} of
\tw{the action} if~$Gx=\{x\}$.
If~$A$ is a~$G$-invariant closed subset of~$X$
and~$\overline{Gx}=A$ for every~$x\in A$
\tw{(that is, the orbit of each point is dense)},
then~$A$ is called a \emph{minimal set for the action}.
\tw{In this setting every action has a minimal
set by Zorn's lemma.}
A Borel probability measure~$\mu$ on~$X$
is called~$G$-\emph{invariant} if~$\mu(g(A))=\mu(A)$
for every Borel set~$A\subset X$ and every~$g\in G$.
The following lemma follows directly from the~$G$-invariance of \tw{the support}~${\rm supp(\mu)}$
\tw{(which is automatic)}.
\begin{lem}\label{lemmawas2-1}
If~$(X, G)$ is minimal and~$\mu$ is a~$G$-invariant Borel probability
measure on~$X$, then~${\rm supp(\mu)}=X$.
\end{lem}
\begin{lem}\label{lemmawas2-2}
\tw{Suppose that a group}~$G$ acts on a
compact metric space~$X$,
\tw{ and that}~$K$ is a minimal set in~$X$
\tw{carrying a}~$G$-invariant Borel probability
measure~$\mu$.
If~$U$ and~$V$ are open sets in~$X$ such that~$V\supset U$
and~$g(V\cap K)\subset U\cap K$ for some~$g\in G$,
then~$K\cap (V\setminus {\overline U})=\emptyset$.
\end{lem}
\begin{proof}
Assume to the contrary that there is
some~$u\in K\cap (V\setminus {\overline U})$.
Then there is
\tw{an} open neighborhood~$W\ni u$
with~$W\subset V\setminus {\overline U}$.
By Lemma~\ref{lemmawas2-1},
\tw{we have}~$\mu(W\cap K)>0$.
\tw{This then implies that}~$\mu(V\cap K)=\mu (g(V\cap K))\leq \mu(U\cap K)<\mu(V\cap K)$, a contradiction.
\end{proof}
\subsection{Amenable groups}
Amenability was first introduced by
von Neumann. Recall that a
countable group $G$ is
\tw{said to be}
\emph{amenable} if there is a
sequence of finite sets~$F_i$ ($i=1, 2, 3,\ \dots$) such that
\[
\lim\limits_{i\to\infty}\frac{|gF_i\bigtriangleup F_i|}{|F_i|}=0
\]
for every~$g\in G$, where~$|F_i|$ is the
number of elements in~$F_i$.
The \tw{sequence}~$(F_i)$ is called a \tw{\emph{F{\o}lner sequence}}
\tw{and each~$F_i$ a F{\o}lner set}.
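For a concrete instance (our illustration, not from this note), take $G=\mathbb{Z}$ with $F_i=\{0,1,\dots,i-1\}$; then $|gF_i\bigtriangleup F_i|=2|g|$ once $i>|g|$, so the ratio tends to~$0$:

```python
# Folner condition for G = Z with F_i = {0,...,i-1}: |gF_i /\ F_i| / |F_i| -> 0.
# The choice of F_i is our illustrative assumption.

def folner_ratio(g, i):
    F = set(range(i))
    gF = {g + x for x in F}
    return len(gF ^ F) / len(F)   # ^ is the symmetric difference of sets

for g in (1, -3, 7):
    ratios = [folner_ratio(g, i) for i in (10, 100, 1000)]
    assert ratios == sorted(ratios, reverse=True)   # ratios shrink as i grows
    assert ratios[-1] <= 2 * abs(g) / 1000          # equals 2|g|/i for i > |g|
```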
It is well known
that solvable groups and finite groups are amenable
\tw{ and that }any group containing a free
\tw{non-commutative} subgroup is not amenable.
One may consult
\tw{the monograph of Paterson}~\cite{Pa}
for the proofs of the following lemmas.
\begin{lem}\label{lemmawas2-3}
Every subgroup of an amenable group is amenable.
\end{lem}
\begin{lem}\label{lemmawas2-4}
A group $G$ is amenable if and only if every action of $G$ on a compact metric
space~$X$ has a~$G$-invariant Borel probability measure on~$X$.
\end{lem}
\subsection{Dendrites}
A \emph{continuum} is a
\tw{non-empty} connected compact metric space. A
continuum is \tw{said to be} \emph{non-degenerate}
if it is not a single point. An
\emph{arc} is a continuum which is homeomorphic to the closed
interval~$[0, 1]$.
A continuum~$X$ is \emph{uniquely arcwise
connected} if for any two points~$x\not=y\in X$ there is a unique
arc~$[x, y]$ in~$X$ \tw{connecting}~$x$ and~$y$.
A \emph{dendrite}~$X$ is
a locally connected, uniquely arcwise connected, continuum.
If~$Y$ is a subcontinuum of
\tw{a dendrite}~$X$, then~$Y$ is
called a \emph{subdendrite} of~$X$.
For a dendrite~$X$ and a point~$c\in X$,
if~$X\setminus\{c\}$ is not connected,
then $c$ is called a \emph{cut point} of~$X$;
if~$X\setminus\{c\}$ has at least~$3$ components,
then~$c$ is called a \emph{branch point} of~$X$.
\tw{Lemmas~\ref{lemmawas2-5}
to~\ref{lemmawas2-8} are
taken} from~\cite{Na}.
\begin{lem}\label{lemmawas2-5}
Let~$X$ be a dendrite with metric~$d$.
Then, for every~$\epsilon>0$, there is a~$\delta>0$
such that~${\rm diam}([x, y])<\epsilon$ whenever~$d(x, y)<\delta$.
\end{lem}
\begin{lem}\label{lemmawas2-6}
Let~$X$ be a dendrite.
If~$A_i\ (i=1, 2, 3, \dots)$ is a
sequence of mutually disjoint sub-dendrites of~$X$,
then~${\rm diam}(A_i)\rightarrow 0$ as~$i\rightarrow\infty$.
\end{lem}
\begin{lem}\label{lemmawas2-7}
Let~$X$ be a dendrite. Then~$X$ has
at most countably many branch points.
If~$X$ is nondegenerate,
then the cut point set of~$X$ is uncountable.
\end{lem}
\begin{lem}\label{lemmawas2-8}
Let~$X$ be a dendrite and~$c\in X$. Then each component~$U$
of~$X\setminus\{c\}$ is open in~$X$,
and~$\overline U=U\cup \{c\}$.
\end{lem}
Now we give a proof of the following technical lemma.
\begin{lem}\label{lemmawas2-9}
Let~$X$ be a dendrite and let~$f:X\rightarrow X$ be a
homeomorphism. Suppose~$o$ is a fixed point of~$f$,
\tw{and let}~$c_1, c_2$ be cut points of~$X$
different from~$o$.
Suppose \tw{that}~$U$ is a component of~$X\setminus \{c_1\}$
not \tw{containing}~$o$,
\tw{that}~$V$ is a component
of~$X\setminus \{c_2\}$
not \tw{containing}~$o$,
\tw{and that}~$f(c_1)\in V$. Then~$f(U)\subset V$.
\end{lem}
\begin{proof}
Assume to the contrary that there is
some~$u\in U$ with~$f(u)\notin V$.
Since~$c_2$ is a cut point,~$f(c_1)\in
V$,~$o\notin V$, and~$f(o)=o$,
we have~$c_2\in [f(o), f(c_1)]$
and~$c_2\in [f(u), f(c_1)]$.
This implies
\tw{that}~$f^{-1}(c_2)\in [o, c_1]\cap [u, c_1]=\{c_1\}$
since~$o\notin U$.
Thus~$f(c_1)=c_2$, which contradicts
\tw{the assumption that}~$f(c_1)\in V$.
\end{proof}
If~$[a, b]$ is an arc in a dendrite~$X$,
denote by~$[a, b)$,~$(a,b]$,
and~$(a, b)$ the
sets~$[a,b]\setminus\{b\}$,~$[a,b]\setminus\{a\}$,
and~$[a,b]\setminus\{a, b\}$, respectively.
\subsection{Equicontinuity}
Let~$X$ be a compact metric space with
metric~$d$, and let~$G$ be a group
acting on~$X$. Two points~$x, y\in X$
are said to be \emph{regionally proximal}
if there are sequences~$(x_i)$,~$(y_i)$
in~$X$
and~$(g_i)$ in~$G$
such that~$x_i\rightarrow x$
\tw{and}~$y_i\rightarrow y$
as~$i\rightarrow\infty$,
and~$\lim g_ix_i=\lim g_iy_i=w$
for some~$w\in X$.
If~$x,y$ are regionally proximal
and~$x\not=y$, then $\{x, y\}$
\tw{is} said to be a \emph{non-trivial regionally proximal pair}.
The action~$(X, G)$ is \emph{equicontinuous}
if, for every~$\epsilon>0$, there is a~$\delta>0$ such
that~$d(gx, gy)<\epsilon$ \tw{for all~$g\in G$}
whenever~$d(x, y)<\delta$.
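As a toy numerical contrast between the two notions (our own illustration, not from the paper): an isometric circle rotation never expands distances, hence generates an equicontinuous $\mathbb{Z}$-action, while the homeomorphism $T(x)=x^2$ of $[0,1]$ satisfies $T^{-n}(x)=x^{2^{-n}}\to 1$ for every $x>0$, so any two points of $(0,1]$ form a non-trivial proximal (hence regionally proximal) pair.

```python
import math

# (1) Rotation x -> x + a (mod 1) on the circle: an isometry, hence equicontinuous.
# (2) T(x) = x^2 on [0,1]: backward iterates push all of (0,1] toward 1,
#     producing non-trivial proximal pairs.  Both examples are our choices.

def circle_d(x, y):                        # arc-length metric on R/Z
    t = abs(x - y) % 1.0
    return min(t, 1.0 - t)

a = math.sqrt(2) % 1.0                     # irrational rotation angle
x, y = 0.1, 0.1 + 1e-4
for n in range(200):                       # the rotation preserves distances
    assert abs(circle_d((x + n * a) % 1.0, (y + n * a) % 1.0)
               - circle_d(x, y)) < 1e-12

def T_inv(x, n):                           # T^{-n}(x) = x^(2^{-n})
    return x ** (2.0 ** (-n))

u, v = 0.2, 0.8                            # distinct points driven together
assert abs(T_inv(u, 40) - T_inv(v, 40)) < 1e-9
```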
The following lemma can be \tw{found} in~\cite{Au}.
\begin{lem}\label{lemmawas2-10}
Suppose~$(X,G)$ is a group action.
Then~$(X,G)$ is equicontinuous if and only if
it contains no non-trivial regionally proximal pair.
\end{lem}
\section{Proof of the main theorem}
In this section we are going to show our main result. Before doing this we state two simple lemmas.
\begin{lem}\label{lemmawas3-1}
Suppose a group~$G$ acts on the closed interval~$[0, 1]$.
If~$K\subset [0, 1]$ is minimal, then~$K$ contains at most~$2$ points.
\end{lem}
\begin{proof}
Let~$x=\inf{K}$ and~$y=\sup{K}$.
Then~$G$ preserves the set~$\{x, y\}$,
so~$K=\{x,y\}$ by the minimality of~$K$.
\end{proof}
\begin{lem}[\tw{See}~\cite{SY}]\label{lemmawas3-2}
Let~$G$ be an amenable group acting
on a dendrite~$X$. Then there is a~$G$-invariant
set consisting of~$1$ or~$2$ points.
\end{lem}
Now we are ready to prove the main result.
\begin{proof}[Proof of Theorem~\ref{theoremwas1-1}] We first show that~$(K,G)$ is equicontinuous.
Assume to the contrary that~$(K, G)$ is not equicontinuous.
Then \tw{by}
Lemma~\ref{lemmawas2-10},
there are~$u\not=v\in K$ such that~$u,v$
are regionally proximal; that is,
there are sequences~$(u_i), (v_i)$
in~$X$ and~$(g_i)$ in~$G$ with
\begin{equation}\label{equationwas3-1}
u_i\rightarrow u,\ v_i\rightarrow v,\ \lim g_iu_i=\lim g_iv_i=w
\end{equation}
\tw{as~$i\to\infty$}
for some~$w\in K$.
\tw{By} Lemma~\ref{lemmawas3-2},
there are~$o_1, o_2\in X$ such that~$\{o_1, o_2\}$ is
a~$G$-invariant set. Then~$[o_1, o_2]$ is~$G$-invariant
by the \tw{unique} arcwise
connectedness of~$X$. By assumption,~$K$ is infinite,
so~$K\cap [o_1, o_2]=\emptyset$ by Lemma~\ref{lemmawas3-1}.
Without loss of generality, we may
suppose \tw{that}~$o_1=o_2$
\tw{and denote this common point by}~$o$; otherwise, we need only collapse~$[o_1, o_2]$ to one point.
Then~$o$ is a fixed point \tw{for the action}.
\medskip
\noindent
{\bf Case 1.} $[u, o]\cap [v, o]=\{o\}$ (see Fig.1(1)).
By Lemma~\ref{lemmawas2-7},
we can \tw{choose} cut points~$c_1\in (u, o)$
and~$c_2\in (v, o)$. Let~$D_u$ be the component
of~$X\setminus \{c_1\}$ that contains~$u$,
and let~$D_v$ be the
component of~$X\setminus \{c_2\}$ that contains~$v$.
From minimality and Lemma~\ref{lemmawas2-8},
there is some~$g'\in G$ with~$g'w\in D_u$.
From~\eqref{equationwas3-1}
and Lemma~\ref{lemmawas2-5},
we have
\begin{equation}\label{equationwas3-2}
u_i\in D_u, v_i\in D_v \ {\mbox {and}}\ g'g_i[u_i, v_i]\subset D_u
\end{equation}
\tw{for large enough~$i$.}
Write~$g=g'g_i$.
Then~$o\in [u_i, v_i]$ and~$g(o)\in D_u$.
This is a contradiction, since~$o$ is fixed by~$G$.
\medskip
\noindent {\bf Case 2.} $[u, o]\cap [v, o]=[z, o]$
for some~$z\not=o$.
\medskip
\noindent {\bf Subcase 2.1.} $z=v$ (see Fig.1(2)).
Then~$u\not=z$ and~$z\in K$.
Take a cut point~$c_1\in (u, z)$
\tw{and let}~$D_u$ be the
component of~$X\setminus \{c_1\}$
which contains~$u$.
Then~$v\notin D_u$,
and there is some~$g\in G$
with~$gz\in D_u$ by the minimality of~$K$.
Take a cut point~$c_2\in (z, o)$ which is sufficiently
close to~$z$
\tw{to ensure} that~$g(c_2)\in D_u$.
Let~$D_z$ be the component
of~$X\setminus \{c_2\}$ which contains~$z$.
By Lemma~\ref{lemmawas2-4},
there is
a~$G$-invariant Borel probability measure on~$K$.
Applying Lemma~\ref{lemmawas2-9},
we get~$g(D_z)\subset D_u$,
\tw{which} contradicts Lemma~\ref{lemmawas2-2},
since~$z\in D_z\setminus {\overline D_u}$.
\medskip
\noindent {\bf Subcase 2.2.} $z=u$. \tw{In this case
we can deduce a contradiction
along the lines of the} argument in Subcase~2.1.
\medskip
\noindent {\bf Subcase 2.3.} $z\not=u$
and~$z\not=v$ (see Fig.1(3)).
Take a cut point~$c_1\in (u, z)$, and let~$D_u$
be the component of~$X\setminus \{c_1\}$
that contains~$u$.
Similar to the argument in Case~1,
there is some~$g\in G$
with~$g(z)\in D_u$.
Take a cut point~$c_2\in (z, o)$
which is sufficiently close to~$z$
\tw{to ensure} that~$g(c_2)\in D_u$.
Let~$D_z$
be the component of~$X\setminus \{c_2\}$
that contains~$z$. Then~$g(D_z)\subset D_u$ by
Lemma~\ref{lemmawas2-9}.
This contradicts Lemma~\ref{lemmawas2-2}
since~$v\in D_z\setminus {\overline D_u}$.
\medskip
Now we prove that if~$K$ is not finite,
then~$K$ is homeomorphic to the Cantor set.
\tw{If not, then} there is some
non-degenerate connected
component~$Y$ of~$K$.
Clearly, for any~$g, g'\in G$,
either~$g(Y)=g'(Y)$ or~$g(Y)\cap g'(Y)=\emptyset$.
This, together with Lemma~\ref{lemmawas2-6}
and the equicontinuity of~$(K, G)$,
implies that the subgroup~$H=\{g\in G: g(Y)=Y\}$
has finite index in~$G$.
\tw{It follows that}~$(Y, H)$ is minimal.
This contradicts Lemma~\ref{lemmawas3-2}
and Lemma~\ref{lemmawas2-3}, since~$Y$
is a non-degenerate dendrite.
\end{proof}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.8]{fig1.pdf}
\centerline{Fig. 1}
\end{figure}
\subsection*{Acknowledgements}
The authors would like to thank Eli Glasner for sending us the early version of his work with Megrelishvili.
The work is supported by NSFC (No. 11771318, 11790274, 11431012).
| {
"timestamp": "2020-06-29T02:08:21",
"yymm": "1807",
"arxiv_id": "1807.01667",
"language": "en",
"url": "https://arxiv.org/abs/1807.01667",
"abstract": "In this note, we show that if $G$ is an amenable group acting on a dendrite $X$, then the restriction of $G$ to any minimal set $K$ is equicontinuous, and $K$ is either finite or homeomorphic to the Cantor set.",
"subjects": "Dynamical Systems (math.DS)",
"title": "Equicontinuity of minimal sets for amenable group actions on dendrites",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9830850857421198,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7095349818468666
} |
https://arxiv.org/abs/1611.05070 | Scaling Laws for Maximum Coloring of Random Geometric Graphs | We examine maximum vertex coloring of random geometric graphs, in an arbitrary but fixed dimension, with a constant number of colors. Since this problem is neither scale-invariant nor smooth, the usual methodology to obtain limit laws cannot be applied. We therefore leverage different concepts based on subadditivity to establish convergence laws for the maximum number of vertices that can be colored. For the constants that appear in these results, we provide the exact value in dimension one, and upper and lower bounds in higher dimensions. | \section{Introduction}
\label{sec:intro}
We examine maximum coloring of random geometric graphs (RGGs),
in an arbitrary but fixed dimension~$d$, with a constant number
of colors.
The vertices of an RGG (whose spatial distribution will be defined below)
are embedded in a Euclidean space that is equipped with the $\ell_2$
distance or some $\ell_p$ distance in general, and two vertices are
connected if and only if they are within a given Euclidean distance~$r$.
More specifically, we address the questions:
{\it What is the maximum number of vertices in a sparse RGG that can
be properly colored with a constant number of colors?
In particular, what is the asymptotic behavior of that value,
as the total number of vertices in the graph tends to infinity?}
It is important to emphasize the distinction between our problem and that
of determining the chromatic number, which is the minimum required
number of colors to properly color all the vertices of a graph such that
no two adjacent vertices are assigned the same color.
Determining whether an RGG (or a unit-disk graph) is $k$-colorable,
i.e., whether its chromatic number is at most~$k$, is NP-hard even
for $k=3$, see~\cite{brent-1990-unitdiskgraphs}.
Our problem differs from determining the chromatic number,
since we are interested in the maximum number of vertices that can
be properly colored with a given number $k \in \mathbb{N}$ of colors,
and it also differs from $k$-colorability, which is a binary decision problem.
The chromatic number of RGGs has been studied in detail
(for different values of the expected degree),
see Theorem 1.1 in~\cite{mcdiarmid-2011-chromatic}.
The chromatic number in the {\em thermodynamic regime}, when the expected degree is constant, is `almost'
logarithmic in the number of vertices~$n$, i.e., $(1+o(1))\log n / \log\log n$,
which additionally inspires our problem where only a constant number
of colors is available.
The above-mentioned questions are not only of fundamental interest,
but also motivated by applications in wireless networks, where the various
users need to be assigned channels (transmission frequencies) in order
to be able to communicate, subject to certain interference constraints.
For example, in order to avoid excessive interference, the same channel
cannot be assigned to two users within a certain reuse distance~$r$.
The total number of required channels to cover all users then
corresponds to the chromatic number of the associated interference graph
where two users are neighbors when they are located within distance~$r$.
When the user locations are governed by a spatial Poisson process,
the interference graph is an RGG, and the chromatic number will grow
without bound as the total number of users grows large.
As a result, the required number of channels to cover all users will
grow without bound, implying that the capacity per channel,
and hence the so-called max-min throughput of the network, will vanish
in the limit, which is obviously undesirable.
The question thus arises how many users can be covered when
the number of available channels is finite.
It will then not be feasible to cover all users as the total number
of users grows large, but the users that do get covered are ensured
to receive a strictly positive throughput.
The results that we prove in the present paper imply that any target
for the fraction of users to be covered, arbitrarily close to one,
can be achieved in the limit with a sufficiently large but constant
number of channels.
Besides wireless networks, RGGs have also found applications in
various further areas, e.g.~cluster analysis, statistical physics,
modeling data in high-dimensional spaces, and hypothesis testing,
to mention just a few~\cite{penrose:book}.
For problems on many of these `real' networks, the sparse regime
with constant expected vertex degree is particularly relevant,
see~\cite{DBLP:journals/im/LeskovecLDM09}.
We now formally state the main problem and results.
For any subset of points $V \subseteq {\mathbb R}^d$ and $r \in {\mathbb R}_+$,
let $G_r(V)$ be the graph with vertex set~$V$ and edge set
$E = \{\{u, v\} \in V^2: ||u - v|| \leq r\}$, i.e., connecting all
pairs of points that are within a given Euclidean distance~$r$.
The main object of interest is the cardinality of a set obtained
by a maximum proper coloring with $k$ colors of a given graph $G_r(V)$.
Note that such a set obtained by a maximum proper coloring on finite $V$
may not be unique, but its cardinality is unique and defined as follows.
For any $k \in {\mathbb N}$, let $N_{k,r}(V)$ be the maximum number
of vertices that can be properly colored in $G_r(V)$ with $k$~colors.
For any $\lambda > 0$, let ${\mathcal X}_\lambda$ be a Poisson point process
of intensity~$\lambda$ in ${\mathbb R}^d$. For compactness, denote
\[
F_{k,\lambda}(t) = N_{k,1}([0, t]^d \cap {\mathcal X}_\lambda)
\]
for any $t \geq 0$.
Also, let ${\mathcal I}_n$ be a collection of $n$~points uniformly
and independently distributed in the unit cube $[0, 1]^d$.
For compactness, denote
\[
H_{k,r}(n) = N_{k,r}({\mathcal I}_n)
\]
for any $n \in {\mathbb N}$ and $r > 0$.
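On tiny instances these quantities can be computed exactly by exhaustive search. The following brute-force sketch is our own illustration (exponential time, feasible only for very small $n$): it computes $N_{k,r}({\mathcal I}_n)=H_{k,r}(n)$ for one sample of uniform points in $d=2$.

```python
import itertools, math, random

# Brute-force N_{k,r}(V): the maximum number of points of V inducing a properly
# k-colorable subgraph of G_r(V).  Exponential-time sketch for tiny instances only.

def edges(V, r):
    return [(i, j) for i in range(len(V)) for j in range(i + 1, len(V))
            if math.dist(V[i], V[j]) <= r]

def k_colorable(idx, E, k):
    idx = list(idx)
    pos = {v: t for t, v in enumerate(idx)}
    Es = [(pos[u], pos[v]) for u, v in E if u in pos and v in pos]
    return any(all(c[a] != c[b] for a, b in Es)
               for c in itertools.product(range(k), repeat=len(idx)))

def N_kr(V, k, r):
    E = edges(V, r)
    for m in range(len(V), -1, -1):       # largest colorable subset, top down
        if any(k_colorable(S, E, k)
               for S in itertools.combinations(range(len(V)), m)):
            return m
    return 0

random.seed(0)
V = [(random.random(), random.random()) for _ in range(8)]   # I_n in [0,1]^2
print(N_kr(V, k=2, r=0.4))   # maximum 2-colorable subset size for this sample
```

For $k=1$ this reduces to the maximum independent set of $G_r(V)$, which already indicates why no polynomial-time exact computation should be expected in general.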
\textbf{The main problem:} We are interested in the asymptotic behavior
of the expectation and moreover the distribution of $F_{k,\lambda}(t)$
as $t \to \infty$, as well as $H_{k,\nu}(n)$ as $n \to \infty$.
\textbf{The main results:}
We show that for any $d, k \in \mathbb{N}$ and $\lambda > 0$, the functional
$F_{k,\lambda}(t)$ converges in probability
\[
\frac{F_{k,\lambda}(t)}{\lambda t^d} \stackrel{\textrm{p}}{\to} a_{k,\lambda} \,,
\]
for some $a_{k,\lambda} \in (0,1]$, and in distribution, for any $\nu > 0$,
\[
\frac{H_{k,\sqrt[d]{\nu / n}}(n)}{n} \stackrel{\textrm{d}}{\to} a_{k,\nu} \,.
\]
One of our main methods involves the notion of {\it subadditivity}.
Concretely, we divide the cube $[0,t]^d$ into cubes of volume $s^d$,
for some $s<t$ which we specify later, and apply the subadditivity
argument in order to relate $F_{k,\lambda}(t)$ and $F_{k,\lambda}(s)$.
We show that the lower and upper limits as $t \to \infty$
of $F_{k,\lambda}(t)$ exist and are the same, and moreover we establish
the weak law of large numbers for $F_{k,\lambda}(t)$
and the strong law of large numbers for $H_{k,\nu}(n)$.
In Lemma~\ref{lemma:sigma.lb}, we prove that the variance
$\vari{F_{k,\lambda}(t)} = \Omega(t^d)$, i.e.,~the limiting variance
normalized by $t^d$ is bounded away from~$0$,
and in Lemma~\ref{lemma:sigma.ub.1} we present an upper bound
on $\vari{F_{k,\lambda}(t)} =O(t^d)$, which together imply
$\vari{F_{k,\lambda}(t)} = \Theta(t^d)$, see Lemma~\ref{lemma:sigma.lub}.
There are two branches of methods prevalent in discrete stochastic
geometry, subadditive and stabilization methods, usually used to obtain
the limiting behavior of some Euclidean functionals:
laws of large numbers, central limit theorems, etc.
For an excellent survey, the reader is referred to Yukich~\cite{yukich-2013-limit}.
At first glance, the results in this paper can be seen as a subproblem
and amenable to analysis by using techniques from~\cite{bhh-1959},
and even the ``more general subadditive methods'' developed by Steele
in Chapter~$3$ of~\cite{steele-1997-probability}.
In order to apply these techniques from~\cite{steele-1997-probability},
a function $L$ that maps a finite subset of points from ${\mathbb R}^d$
to ${\mathbb R}_+$ must satisfy the following four hypotheses:
(i) normalization $L(\emptyset) = 0$;
(ii) homogeneity $L\left(\alpha x_1, \alpha x_2,\dots,\alpha x_n\right) =
\alpha L\left(x_1,x_2,\dots,x_n\right)$ for every $\alpha>0$;
(iii) translation invariance
$\forall y \in \mathbb{R}^d$
$L\left(x_1+y,x_2+y,\dots,x_n+y\right) =
L\left(x_1,x_2,\dots,x_n\right)$;
(iv) geometric subadditivity, where for all $m, n \geq 1$
and $x_1,x_2,\dots,x_n \in [0,1]^d$ we have
\begin{equation}
L\left(x_1,x_2,\dots,x_n\right) \leq \sum_{i=1}^{m^d}
L\left(\{x_1,x_2,\dots,x_n\} \cap Q_i\right) + O(m^{d-1}) \,,
\end{equation}
where the unit cube $[0,1]^d$ is partitioned into $m^d$ cubes $Q_i$
with side $1/m$.
Additionally, (v) $L$ is monotone if, for all~$n$ and all points $x_1,\dots,x_{n+1}$,
$L\left(x_1,\dots,x_n\right) \leq L\left(x_1,\dots,x_n,x_{n+1}\right)$.
For example, Steele proves a so-called ``basic theorem'',
see Theorem 3.1.1 in~\cite{steele-1997-probability},
for general subadditive
Euclidean functionals that are monotone and satisfy the four
conditions (i)--(iv).
This theorem states that if $x_1, x_2, \dots, x_n$ are independent
random variables uniformly distributed on $[0,1]^d$ then with probability one
\begin{equation}
\label{eq:L.limit}
\lim_ {n \to \infty} n^{-(d-1)/d} L\left(x_1,x_2,\dots,x_n\right) = \beta_L(d) \,,
\end{equation}
where $\beta_L(d)$ is a positive constant, which depends both on the
functional $L$ and the dimension~$d$.
To illustrate applications of Theorem 3.1.1 from~\cite{steele-1997-probability},
we first mention one of the classical problems in combinatorial optimization, the famous
traveling salesman problem (TSP), where the goal is to find a minimum-length Hamiltonian tour
among a list of cities, given the distances between each pair of cities.
For more details on the TSP, the reader is referred to~\cite{applegate:tsp:book}.
One subcase of TSP is Euclidean TSP, where the objective is to determine the minimum-length Hamiltonian tour
of $n$~points $x_1,x_2,\dots,x_n$ distributed in $\mathbb{R}^d$, and analyze its limiting behavior as $n$ grows.
Euclidean TSP was originally studied and the limiting behavior (\ref{eq:L.limit}) was shown by Beardwood, Halton and Hammersley in~\cite{bhh-1959}.
Euclidean TSP is one representative example to apply Theorem 3.1.1 from~\cite{steele-1997-probability}.
It is clear that the length of the minimum tour satisfies (i)--(v).
In other words, the minimum-length Hamiltonian tour on $n$ points:
(i) is equal to $0$ if the number of points $n=0$ (normalization);
(ii) increases $\alpha$ times, for every $\alpha>0$, if the positions
of the points are rescaled $\alpha$ times (homogeneity);
(iii) is translation invariant (does not change under the translation
of the points);
(iv) is subadditive, see~\cite{steele-1997-probability,goemans-1991-prob};
and
(v) is monotone by applying the triangle inequality on three distances among
$x_{n+1}$ and its two adjacent points in the optimal tour on $x_1,x_2,\dots,x_{n+1}$.
Besides the Euclidean TSP, other subadditive Euclidean functionals, such as the length of the least Euclidean matching, the Steiner tree, and the rectilinear Steiner tree, satisfy the conditions of Theorem 3.1.1 from~\cite{steele-1997-probability},
and their asymptotic growth rates have been studied in~\cite{subadditive-1981-steele}.
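To get a feel for the $n^{(d-1)/d}$ scaling in~(\ref{eq:L.limit}), one can replace the optimal tour by a cheap heuristic. The sketch below is our own illustration, not a computation from the paper: for uniform random points in $[0,1]^2$ the nearest-neighbour tour is only an upper-bound proxy for the optimum, but its normalized length $L/\sqrt{n}$ is empirically roughly constant as $n$ grows.

```python
import math, random

# Nearest-neighbour heuristic tour through n uniform points in the unit square,
# used as a cheap stand-in for the Euclidean TSP; its normalized length L/sqrt(n)
# stabilizes empirically, illustrating the n^{(d-1)/d} growth for d = 2.

def nn_tour_length(pts):
    unvisited = list(range(1, len(pts)))
    cur, total = 0, 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(pts[cur], pts[j]))
        total += math.dist(pts[cur], pts[nxt])
        unvisited.remove(nxt)
        cur = nxt
    return total + math.dist(pts[cur], pts[0])   # close the tour

random.seed(1)
for n in (100, 400, 1600):
    pts = [(random.random(), random.random()) for _ in range(n)]
    print(n, nn_tour_length(pts) / math.sqrt(n))   # roughly constant across n
```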
However, the functionals of interest in this work $F_{k,\lambda}(t)$
and $H_{k,\nu}(n)$ satisfy normalization, translation invariance,
and subadditivity, but {\em not\/} the homogeneity property, i.e.,
$N_{k,r}(\alpha V) \neq \alpha N_{k,r}(V)$, where $\alpha > 0$ and $V \subset \mathbb{R}^d$.
Moreover, we later show that
$N_{k,r}(V)$ is {\em not continuous\/}, also called {\em not smooth}.
In more detail, for any $V$ and $V'$ at distance more than~$r$ from each other,
$N_{k,r}(V \cup V') - N_{k,r}(V) = N_{k,r}(V') = \Theta(|V'|)$,
which is not $O(|V'|^{(d-p)/d})$ for any $p>0$; hence one cannot apply the methods from~\cite{rhee-1993-matching}.
For example, TSP and MST (Minimum Spanning Tree) are smooth
functionals.
In~\cite{yukich-2013-limit}, Yukich presents laws of large numbers
for smooth, superadditive Euclidean functionals, see Theorem 8.1,
as well as the general `umbrella theorem' for subadditive, smooth
Euclidean functionals, see Theorem 8.3.
Additionally, there is a large body of work, which uses the
{\it stabilization\/} method, see Penrose~\cite{penrose:book},
Yukich~\cite{yukich-2013-limit}.
It is not hard to show that stabilization holds in the sub-critical
regime for $\lambda < \lambda_c$, when a.a.s~\footnote{a.a.s. stands for ``asymptotically almost surely''.}
no infinite cluster exists,
but it is not clear at all whether stabilization holds in the
super-critical regime $\lambda > \lambda_c$ that we are interested in.
Moreover, although stabilization is a very `effective' method (see Penrose~\cite{penrose:book}), it requires
assumptions on the functional under consideration
that seem rather strong and that we do not impose in our proofs.
\section{Main Properties and Preliminary Results}
\label{sec:def}
As introduced before, for any subset of points $V \subseteq {\mathbb R}^d$
and $r \in {\mathbb R}_+$, let $G_r(V)$ be the graph with vertex set~$V$
and edge set $E = \{\{u, v\} \in V^2: ||u - v|| \leq r\}$, i.e.,
connecting all pairs of points that are within a given distance~$r$.
For any $k \in {\mathbb N}$, let $N_{k,r}(V)$ be the maximum number of
vertices that can be properly colored in $G_r(V)$ with $k$~colors.
We state a few basic but useful properties.
\newline
(0)
$N_{k,r}(\alpha V) = N_{k, r / \alpha}(V)$, for all $\alpha>0$,
since the edge sets of $G_r(\alpha V)$ and $G_{r / \alpha}(V)$ coincide.
\newline
(i) Inhomogeneity ({\em no\/} scale-invariance):
There exists $\alpha>0$ such that $N_{k,r}(\alpha V) \neq \alpha N_{k, r}(V)$,
and in general the inequality holds for many values of $\alpha>0$.
\newline
(ii) Monotonicity:
If $U \subseteq V$, then $N_{k,r}(U) \leq N_{k,r}(V)$.
Also, $N_{k,r}(V)$ is decreasing in~$r$.
\newline
(iii) Subadditivity:
$N_{k,r}(V_1 \cup V_2) \leq N_{k,r}(V_1) + N_{k,r}(V_2)$ for any
$V_1$, $V_2$.
\newline
(iv) Near-superadditivity:
If $V_1, V_2 \subseteq V$ are at a distance more than~$r$, i.e.,
$||v_1 - v_2|| > r$ for all $v_1 \in V_1$, $v_2 \in V_2$,
then $N_{k,r}(V_1 \cup V_2) \geq N_{k,r}(V_1) + N_{k,r}(V_2)$.
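Although not part of the formal development, the definition of $N_{k,r}$ and properties (0), (iii) and (iv) can be checked by brute force on tiny point sets. The following sketch is ours and purely illustrative (the maximum $k$-colorable subgraph problem is NP-hard, so it enumerates exhaustively and only works for a handful of points):

```python
# Brute-force evaluation of N_{k,r}(V) on tiny point sets (illustration only).
from itertools import combinations, product
import math

def k_colorable(points, r, k):
    """Exhaustively test whether G_r(points) admits a proper k-coloring."""
    n = len(points)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(points[i], points[j]) <= r]
    return any(all(c[i] != c[j] for i, j in edges)
               for c in product(range(k), repeat=n)) or n == 0

def N(points, r, k):
    """N_{k,r}(V): maximum number of vertices properly colorable with k colors."""
    pts = list(points)
    for size in range(len(pts), 0, -1):
        if any(k_colorable(sub, r, k) for sub in combinations(pts, size)):
            return size
    return 0
```

For instance, three collinear points at mutual gaps $0.5$ and $0.7$ form a path under $r = 1$: with one color only the two endpoints can be kept, while two colors suffice for all three points; scaling the set and the radius by the same factor leaves the value unchanged, as in property (0).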
By virtue of the property $(0)$ and the properties of
the spatial Poisson process,
\[
N_{k,r}([0, t]^d \cap {\mathcal X}_\lambda) \stackrel{{\mathrm st}}{=}
N_{k,1}(([0, t]^d \cap {\mathcal X}_\lambda)/r) \stackrel{{\mathrm st}}{=}
N_{k,1}([0, t/r]^d \cap {\mathcal X}_{\lambda r^d}) = F_{k,\lambda r^d}(t/r),
\]
and hence we may focus the attention on the random variable
$F_{k,\lambda}(t)$ without loss of generality.
Also, it follows from the above-mentioned monotonicity property
and a simple coupling argument that $H_{k,\nu}(n)$ is
stochastically increasing in~$n$.
For any $\mu \in {\mathbb R}_+$, let $\textrm{Po}(\mu)$ be a Poisson random
variable with parameter~$\mu$ and $\pi(\mu, m) = \pr{\textrm{Po}(\mu) = m}$.
The property $(0)$ and the properties of the spatial
Poisson process furnish a useful relationship between the variables
$F_{k,\lambda}(t)$ and $H_{k,\nu}(n)$:
\[
F_{k, \lambda}(t)
\stackrel{{\mathrm st}}{=} N_{k,1/t}(([0, t]^d \cap {\mathcal X}_\lambda)/t)
\stackrel{{\mathrm st}}{=} N_{k,1/t}([0, 1]^d \cap {\mathcal X}_{\lambda t^d})
\stackrel{{\mathrm st}}{=} H_{k,1/t}(\textrm{Po}(\lambda t^d)),
\]
and in particular,
\begin{equation}
\expect{F_{k,\lambda}(t)} = \expect{H_{k,1/t}(\textrm{Po}(\lambda t^d))} =
\sum_{l = 0}^{\infty} \expect{H_{k,1/t}(l)} \pi(\lambda t^d, l).
\label{rela1}
\end{equation}
For any $s, t \in {\mathbb R}_+$, define
\[
M^u(s, t) = \left\lceil\frac{t}{s}\right\rceil^d \,,
\]
and
\[
M^l(s, t) = \left\lfloor\frac{t}{s + 1}\right\rfloor^d \,.
\]
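For concreteness, the box counts $M^u$ and $M^l$ and the normalization used in the bounds below can be sketched as follows (a small numerical check of ours; the unit gap in $M^l$ matches the connection radius $r = 1$ used for $F_{k,\lambda}$):

```python
import math

def M_u(s, t, d):
    """M^u(s,t): number of boxes of side s needed to cover [0, t]^d."""
    return math.ceil(t / s) ** d

def M_l(s, t, d):
    """M^l(s,t): number of boxes of side s, separated by unit gaps,
    that can be packed into [0, t]^d."""
    return math.floor(t / (s + 1)) ** d
```

For fixed $s$, the normalization $(s/t)^d M^u(s,t)$ tends to one as $t \to \infty$, which is the elementary fact used in the proof of Proposition 1.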
Invoking the stationarity (translation invariance) of the spatial
Poisson process, the subadditivity property implies
\begin{equation}
F_{k,\lambda}(t) \leq_{{\mathrm st}} \sum_{i = 1}^{M^u(s, t)} N_i(s)\,,
\label{ub1}
\end{equation}
while the near-superadditivity property implies
\begin{equation}
F_{k,\lambda}(t) \geq_{{\mathrm st}} \sum_{i = 1}^{M^l(s, t)} N_i(s)\,,
\label{lb1}
\end{equation}
where $N_1(s), N_2(s), \dots$ are i.i.d.\ copies of the random
variable $F_{k,\lambda}(s)$.
For compactness, denote
$\overline{F}_{k,\lambda}(t) = \expect{\widetilde{F}_{k,\lambda}(t)}$, with
\[
\widetilde{F}_{k,\lambda}(t) = \frac{F_{k,\lambda}(t)}{\lambda t^d},
\]
as the `coloring ratio', i.e., the ratio of the expected maximum number
of vertices that can be colored to the expected total number of vertices.
The stochastic upper and lower bounds~(\ref{ub1}) and~(\ref{lb1}) yield
\begin{equation}
\widetilde{F}_{k,\lambda}(t) \leq_{{\mathrm st}}
\left(\frac{s}{t}\right)^d \sum_{i = 1}^{M^u(s, t)} \widetilde{N}_i(s)\,,
\label{ub2}
\end{equation}
and
\begin{equation}
\widetilde{F}_{k,\lambda}(t) \geq_{{\mathrm st}}
\left(\frac{s}{t}\right)^d \sum_{i = 1}^{M^l(s, t)} \widetilde{N}_i(s)\,,
\label{lb2}
\end{equation}
where $\widetilde{N}_1(s), \widetilde{N}_2(s), \dots$ are i.i.d.\ copies
of the random variable $\widetilde{F}_{k,\lambda}(s)$.
\begin{proposition}
\label{prop:one}
For any $k \geq 1$, $\lambda > 0$, the limit
$\lim_{t \to \infty} \overline{F}_{k,\lambda}(t)$ exists, and equals
$a_{k,\lambda} := \inf_{t > 0} \overline{F}_{k,\lambda}(t)$.
\end{proposition}
\begin{proof}
Taking expectations in the stochastic upper bound~(\ref{ub2}), we obtain
\[
\overline{F}_{k,\lambda}(t) \leq
\left(\frac{s}{t}\right)^d M^u(s, t) \overline{F}_{k,\lambda}(s) \,.
\]
Note that for any fixed~$s$,
\[
\lim_{t \to \infty} \left(\frac{s}{t}\right)^d M^u(s, t) = 1 \,,
\]
i.e., for any $\delta > 0$, there exists $t(s, \delta)$ such that
\[
\left(\frac{s}{t}\right)^d M^u(s, t) \leq 1 + \delta
\]
for all $t \geq t(s, \delta)$.
This yields
\[
\limsup_{t \to \infty} \overline{F}_{k,\lambda}(t) \leq
(1 + \delta) \overline{F}_{k,\lambda}(s)
\]
for any fixed~$s$, hence
\[
\limsup_{t \to \infty} \overline{F}_{k,\lambda}(t) \leq (1 + \delta)
\inf_{s > 0} \overline{F}_{k,\lambda}(s) \,,
\]
and therefore
\[
\limsup_{t \to \infty} \overline{F}_{k,\lambda}(t) \leq
\inf_{s > 0} \overline{F}_{k,\lambda}(s) \,,
\]
since $\delta > 0$ is arbitrary.
Because obviously
\[
\liminf_{t \to \infty} \overline{F}_{k,\lambda}(t) \geq
\inf_{s > 0} \overline{F}_{k,\lambda}(s) \,,
\]
we deduce
\[
\lim_{t \to \infty} \overline{F}_{k,\lambda}(t) =
\inf_{s > 0} \overline{F}_{k,\lambda}(s) \,.
\]
\end{proof}
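The argument above is a continuous analogue of Fekete's lemma for subadditive sequences. A toy numerical check of the discrete statement (names and the example sequence are ours):

```python
import math

def fekete_ratios(a, n_max):
    """For a subadditive sequence a(m+n) <= a(m) + a(n), Fekete's lemma
    gives lim a(n)/n = inf a(n)/n; return the final ratio and the
    running infimum up to n_max."""
    ratios = [a(n) / n for n in range(1, n_max + 1)]
    return ratios[-1], min(ratios)

# toy subadditive sequence: a(n) = ceil(n / 3), so a(n)/n -> 1/3 = inf a(n)/n
a = lambda n: math.ceil(n / 3)
```

Here the limit of $a(n)/n$ and the infimum coincide at $1/3$, mirroring the conclusion $\lim_{t\to\infty}\overline{F}_{k,\lambda}(t) = \inf_{s>0}\overline{F}_{k,\lambda}(s)$.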
The next proposition provides two useful properties for the constant
$a_{k,\lambda}$ introduced in Proposition~\ref{prop:one}.
\begin{proposition}
For any $\epsilon > 0$,
\[
a_{k,\lambda (1 + \epsilon)} \leq a_{k,\lambda} + \epsilon \,,
\]
and thus
\[
a_{k,\lambda (1 - \epsilon)} \geq a_{k,\lambda} - \frac{\epsilon}{1 - \epsilon} \,.
\]
\end{proposition}
\begin{proof}
By virtue of the subadditivity property and the properties of the
spatial Poisson process, for any $\epsilon > 0$, $t > 0$,
\begin{eqnarray*}
F_{k,\lambda (1 + \epsilon)}(t)
&=&
N_{k,1}([0, t]^d \cap {\mathcal X}_{\lambda (1 + \epsilon)}) \\
&\stackrel{{\mathrm st}}{=}&
N_{k,1}(([0, t]^d \cap {\mathcal X}_\lambda) \cup
([0, t]^d \cap {\mathcal X}'_{\epsilon \lambda})) \\
&\leq_{{\mathrm st}}&
N_{k,1}([0, t]^d \cap {\mathcal X}_\lambda) +
N_{k,1}([0, t]^d \cap {\mathcal X}'_{\epsilon \lambda}) \\
&\leq&
N_{k,1}([0, t]^d \cap {\mathcal X}_\lambda) + |[0, t]^d \cap {\mathcal X}'_{\epsilon \lambda}| \,.
\end{eqnarray*}
Taking expectations and dividing by $\lambda t^d$ yields
\[
\overline{F}_{k,\lambda (1 + \epsilon)}(t) \leq
\overline{F}_{k,\lambda}(t) + \epsilon \,.
\]
Letting $t \to \infty$, we obtain the statement of the
proposition.
\end{proof}
For any $k \geq 1$, $\nu > 0$, define
$\widetilde{H}_{k, \nu}(n) = H_{k, \nu}(n) / n$.
\begin{proposition}
\label{prop:fkl.convergence}
For any $k \geq 1$, $\nu > 0$,
\[
\lim_{n \to \infty} \expect{\widetilde{H}_{k,\sqrt[d]{\nu / n}}(n)} =
a_{k,\nu} \,.
\]
\end{proposition}
\begin{proof}
The proof relies on the relationship~(\ref{rela1}) and the
monotonicity property of $H_{k,\nu}(n)$, in conjunction with
asymptotic lower and upper bounds.
We first consider the upper bound.
Using~(\ref{rela1}) and observing that $\expect{H_{k,1/t}(l)}$ is
increasing in~$l$, we obtain
\[
\expect{F_{k,\lambda}(t)} \geq
\expect{H_{k,1/t}(n)} \pr{\textrm{Po}(\lambda t^d) \geq n}\,,
\]
for any $n \in {\mathbb N}$.
Taking $t = \sqrt[d]{n / \nu}$ and $\lambda = \nu (1 + \epsilon)$,
we deduce
\[
\expect{F_{k, \nu (1 + \epsilon)}(\sqrt[d]{n / \nu})} \geq
\expect{H_{k,\sqrt[d]{\nu / n}}(n)} \pr{\textrm{Po}(n (1 + \epsilon)) \geq n}\,.
\]
Noting that $\pr{\textrm{Po}(n (1 + \epsilon)) \geq n} \to 1$
as $n \to \infty$, Proposition~\ref{prop:one} then implies
\[
\limsup_{n \to \infty} \frac{\expect{H_{k,\sqrt[d]{\nu / n}}(n)}}{n} \leq (1 + \epsilon) \limsup_{n \to \infty} \frac{\expect{F_{k, \nu (1 + \epsilon)}(\sqrt[d]{n / \nu})}}{\nu (1+\epsilon) n / \nu} = (1 + \epsilon) a_{k, \nu (1 + \epsilon)}.
\]
Since $\epsilon > 0$ is arbitrary, it then follows from Proposition~2 that
\[
\limsup_{n \to \infty} \frac{\expect{H_{k,\sqrt[d]{\nu / n}}(n)}}{n} \leq
a_{k, \nu} \,.
\]
We now turn to the lower bound.
Invoking~(\ref{rela1}) and noting that $\expect{H_{k,1/t}(l)}$ is
increasing in~$l$, we obtain
\[
\expect{F_{k,\lambda}(t)} \leq \expect{H_{k,1/t}(n)} +
\sum_{l = n}^{\infty} l \pr{\textrm{Po}(\lambda t^d) = l},
\]
for any $n \in {\mathbb N}$.
Choosing $t = \sqrt[d]{n / \nu}$ and $\lambda = \nu (1 - \epsilon)$,
we deduce
\[
\expect{F_{k,\nu (1 - \epsilon)}(\sqrt[d]{n / \nu})} \leq
\expect{H_{k,\sqrt[d]{\nu / n}}(n)} +
\sum_{l = n}^{\infty} l \pr{\textrm{Po}(n (1 - \epsilon)) = l}.
\]
Invoking that
$\sum_{l = n}^{\infty} l \pr{\textrm{Po}(n (1 - \epsilon)) = l} = {\mathrm o}(1)$
as $n \to \infty$, it then follows from Proposition~\ref{prop:one} that
\[
\liminf_{n \to \infty} \frac{\expect{H_{k,\sqrt[d]{\nu / n}}(n)}}{n} \geq (1 - \epsilon) \liminf_{n \to \infty} \frac{\expect{F_{k, \nu (1 - \epsilon)}(\sqrt[d]{n / \nu})}}{\nu (1-\epsilon) n / \nu} = (1 - \epsilon) a_{k, \nu (1 - \epsilon)}.
\]
Since $\epsilon > 0$ is arbitrary, it then follows from Proposition~2 that
\[
\liminf_{n \to \infty} \frac{\expect{H_{k,\sqrt[d]{\nu / n}}(n)}}{n} \geq
a_{k, \nu}.
\]
\end{proof}
\iffalse
===START
We now discuss the choice of~$\epsilon$.
The probability
\[
\pr{\textrm{Po}(n (1 + \epsilon)) \geq n} \geq
1 - \exp\left( - \frac{n}{2} \epsilon^2 (1+o(1))\right) \,
\]
which is $1-O(n^{-b})$ for any $b>0$
and $\epsilon = \omega\left(\sqrt{2b \log n /n}\right)$.
Moreover,
\[
\sum_{l = n}^{\infty} l \pr{\textrm{Po}(n (1 - \epsilon)) = l} =
n \pr{\textrm{Po}(n (1 - \epsilon)) \geq n-1} \leq
\exp\left( - \frac{n}{2} \epsilon^2 (1+o(1))\right),
\]
which is $O(n^{-b})$, again, for any $b>0$
and $\epsilon = \omega\left(\sqrt{2b \log n /n}\right)$.
The proof of Proposition~\ref{prop:fkl.convergence} still holds for
$\epsilon = \omega\left(\sqrt{2b \log n /n}\right)$ where $b>0$.
===END
\fi
For conciseness, denote $\mu(t) = \expect{F_{k, \lambda}(t)}$
and $\sigma^2(t) = \vari{F_{k, \lambda}(t)}$.
\begin{lemma}
\label{lemma:sigma.lb}
For all $\lambda, d, k$,
\[
\inf_{t > 3} \frac{\sigma^2(t)}{t^d} > 0 \,.
\]
\end{lemma}
\begin{proof}
Let $C^d(t) = \left(\lfloor\frac{t}{3}\rfloor\right)^d$ and let
${\mathcal V} = \{v_1, v_2, \dots, v_{C^d(t)}\}$ be a set of points in $[0, t]^d$
such that no two points are within distance~$3$ and no point is within
distance $3 / 2$ of the boundary.
For each point~$v_i$, consider two spheres with radii $1 / 2$ and $3 / 2$
centered at~$v_i$, along with the shell of unit width consisting of the
region covered by the larger sphere but not by the smaller one.
Let ${\mathcal W} \subseteq {\mathcal V}$ be the set of points for which the associated
shells do not contain any points of the Poisson point process ${\mathcal X}_\lambda$.
Note that
\[
\pr{{\mathcal W}} = q^{|{\mathcal W}|} (1 - q)^{C^d(t) - |{\mathcal W}|}\,,
\]
with
\[
q = {\mathrm e}^{- \lambda (V^d(3 / 2) - V^d(1 / 2))} > 0\,,
\]
and
\[
V^d(r) = \frac{\pi^{d / 2} r^d}{\Gamma(d / 2 + 1)}
\]
denoting the volume of a sphere of radius~$r$ in $d$~dimensions.
Given a maximum proper coloring with $k$ colors of $[0, t]^d \cap {\mathcal X}_\lambda$,
let $D_i \in \{0, 1, \dots, k\}$ be the number of colored points covered
by the smaller sphere around~$v_i$
and $D_0 = F_{k, \lambda}(t) - \sum_{i \in {\mathcal W}} D_i$.
We may write
\[
\sigma^2(t) = \expect{(F_{k, \lambda}(t) - \mu(t))^2} =
\sum_{{\mathcal W} \subseteq \{1, \dots, C^d(t)\}}
\expect{(F_{k, \lambda}(t) - \mu(t))^2 | {\mathcal W}} \pr{{\mathcal W}}.
\]
Now observe that
\[
D_i \stackrel{{\mathrm st}}{=} \min\{k, \textrm{Po}(\lambda V^d(1 / 2))\},
\]
for all $i \in {\mathcal W}$, and that $D_0$ and $D_i$, $i \in {\mathcal W}$,
are all mutually independent conditioned on ${\mathcal W}$.
Denoting $\mu_k = \expect{\min\{k, \textrm{Po}(\lambda V^d(1 / 2))\}}$,
and $\sigma_k^2 = \vari{\min\{k, \textrm{Po}(\lambda V^d(1 / 2))\}}$, we deduce
\begin{eqnarray*}
\expect{\left(F_{k, \lambda}(t) - \mu(t) \right)^2 | {\mathcal W}}
&=&
\expect{\left(D_0 + \sum_{i \in {\mathcal W}} D_i - \mu(t)\right)^2 | {\mathcal W}} \\
&=&
\expect{\left(D_0 - (\mu(t) - |{\mathcal W}| \mu_k) +
\sum_{i \in {\mathcal W}} (D_i - \mu_k)\right)^2 | {\mathcal W}} \\
&=&
\expect{\left(D_0 - \left(\mu(t) - |{\mathcal W}| \mu_k \right) \right)^2 | {\mathcal W}} +
\sum_{i \in {\mathcal W}} \expect{\left(D_i - \mu_k \right)^2 | {\mathcal W}} \\
&\geq&
|{\mathcal W}| \sigma_k^2 \,.
\end{eqnarray*}
Combining the above properties yields
\[
\sigma^2(t) \geq
\sum_{{\mathcal W} \subseteq \{1, \dots, C^d(t)\}} |{\mathcal W}| \sigma_k^2
q^{|{\mathcal W}|} (1 - q)^{C^d(t) - |{\mathcal W}|} =
\sigma_k^2 \sum_{m = 0}^{C^d(t)}
\left(\begin{array}{c} C^d(t) \\ m \end{array}\right) m q^m (1 - q)^{C^d(t) - m} =
q C^d(t) \sigma_k^2 \,.
\]
Since $q > 0$ and $\sigma_k^2 > 0$ do not depend on~$t$, and
$C^d(t) \geq (t / 6)^d$ for all $t > 3$, the statement of the lemma follows
with
\begin{equation}
\inf_{t > 3} \frac{\sigma^2(t)}{t^d} \geq q \sigma_k^2 / 6^d > 0\,.
\end{equation}
\end{proof}
\begin{lemma}
\label{lemma:sigma.H.ub}
For any $n \in {\mathbb N}$,
\[
\var{H_{k, \sqrt[d]{\nu / n}}(n)} \leq n/2 \,.
\]
\end{lemma}
\begin{proof}
Recall that ${\mathcal I}_n$ is a collection of $n$~points uniformly
and independently distributed in the unit cube $[0, 1]^d$.
Let ${\mathcal I}_n^i$ be a collection of $n$~points obtained
from ${\mathcal I}_n$ by replacing (only) the $i$-th point $x_i$
by $x'_{i} \in [0,1]^d$.
By definition, $H_{k,r}(n)=N_{k,r}({\mathcal I}_n)$.
Moreover, $N_{k,r}({\mathcal I}_n)$ is one-Lipschitz:
\begin{equation}
\label{eq:N_one_Lipschitz}
\left| N_{k,r}({\mathcal I}_n) - N_{k,r}({\mathcal I}^{i}_n) \right| \leq 1 \,,
\end{equation}
since moving a single point can change the maximum number of properly
colorable vertices by at most one.
Now the one-Lipschitz property of $N_{k,r}({\mathcal I}_n)$ and the
Efron-Stein bound~\cite{efron-1981-jackknife} yield the assertion
of the lemma
\begin{eqnarray*}
\var{H_{k, \sqrt[d]{\nu / n}}(n)} &=& \var{N_{k, \sqrt[d]{\nu / n}}({\mathcal I}_n)} \\
&\leq& \frac{1}{2} \expect{ \sum_{i=1}^n \left(N_{k, \sqrt[d]{\nu / n}} \left({\mathcal I}_n\right) - N_{k, \sqrt[d]{\nu / n}}\left({\mathcal I}^{i}_n\right) \right)^2} \\
&\leq& n/2 \,.
\end{eqnarray*}
\end{proof}
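The Efron--Stein inequality used above can be sanity-checked numerically. The following generic Monte-Carlo sketch (ours) uses the maximum of i.i.d.\ uniforms as a stand-in one-Lipschitz functional; for any such functional each squared difference is at most one, which is exactly the mechanism yielding the $n/2$ bound in the lemma:

```python
import random

def efron_stein_check(f, n, samples=2000, seed=1):
    """Monte-Carlo estimates of Var f(X_1,...,X_n) and of the Efron-Stein
    upper bound (1/2) E sum_i (f(X) - f(X^(i)))^2 for i.i.d. U[0,1] inputs,
    where X^(i) resamples only the i-th coordinate."""
    rng = random.Random(seed)
    vals, bounds = [], []
    for _ in range(samples):
        x = [rng.random() for _ in range(n)]
        fx = f(x)
        vals.append(fx)
        acc = 0.0
        for i in range(n):
            y = list(x)
            y[i] = rng.random()  # resample coordinate i
            acc += (fx - f(y)) ** 2
        bounds.append(0.5 * acc)
    mean = sum(vals) / samples
    var = sum((v - mean) ** 2 for v in vals) / samples
    return var, sum(bounds) / samples
```

For $f = \max$ and $n = 5$ the true variance is $5/252 \approx 0.0198$, comfortably below the Efron--Stein bound, which itself never exceeds $n/2$ for a one-Lipschitz functional.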
\begin{lemma}
\label{lemma:sigma.ub.1}
For any $\lambda, d, k$ and for every $\varepsilon>0$ and sufficiently large $t$,
\[
\sigma^2(t) \leq (1+\varepsilon) \lambda t^d /2 \,.
\]
\end{lemma}
\begin{proof}
Conditioning on the number of points in ${\mathcal X}_{\lambda} \cap [0,t]^d$,
which is a Poisson random variable and concentrates around its mean
$\lambda t^d$, see~\cite{penrose:book}, the proof follows by using the
bound on the variance of $H_{k, \sqrt[d]{\nu / n}}(n)$ provided
by Lemma~\ref{lemma:sigma.H.ub}.
\end{proof}
\begin{lemma}
\label{lemma:sigma.lub}
For all $\lambda>0$ and $d, k \in \mathbb{N}$, asymptotically as $t$ tends to $\infty$, we have
\begin{equation}
\nonumber
\sigma^2(t) = \Theta(t^d)\,.
\end{equation}
\end{lemma}
\begin{proof}
The proof follows from Lemmas~\ref{lemma:sigma.lb} and~\ref{lemma:sigma.ub.1}.
\end{proof}
\section{Convergence of functionals $\widetilde{F}_{k, \lambda}(t)$
and $\widetilde{H}_{k,\sqrt[d]{\nu / n}}(n)$}
\label{sec:lln}
The next two propositions establish convergence in probability,
i.e., a weak law-of-large-numbers result for the two functionals
$\widetilde{F}_{k, \lambda}(t)$ and $\widetilde{H}_{k,\sqrt[d]{\nu / n}}(n)$,
and a strong law-of-large-numbers result (almost-sure convergence)
of $\widetilde{H}_{k,\sqrt[d]{\nu / n}}(n)$.
\begin{proposition}
\label{prop:H.conv.in.prob}
For any $k \geq 1$, $\nu > 0$, the random variable
$\widetilde{H}_{k,\sqrt[d]{\nu / n}}(n)$ almost surely converges
to $a_{k,\nu}$ as $n \to \infty$.
\end{proposition}
The first part of the proof below establishes the convergence
in probability, while the second part concludes with the almost-sure
convergence.
\begin{proof}
We first show the convergence in probability.
We have that $N_{k, r}(V)$ is one-Lipschitz, see~(\ref{eq:N_one_Lipschitz}).
Hence a `simple concentration bound', see p.~79 (Section 10.1)
of~\cite{molloy-reed-book-2002}, implies that for any $\varepsilon \geq 0$
\[
\pr{\left|H_{k, \sqrt[d]{\nu / n}}(n) -
\expect{H_{k, \sqrt[d]{\nu / n}}(n)}\right| \geq \varepsilon} \leq 2 {\mathrm e}^{- \varepsilon^2 / (2n)} \,.
\]
Moreover, for any $\delta > 0$, Proposition~\ref{prop:fkl.convergence}
implies that there exists an $n_\delta$ such that
\[
\left|\expect{\widetilde{H}_{k, \sqrt[d]{\nu / n}}(n)} - a_{k, \nu} \right| \leq
\frac{1}{2} \delta
\]
for all $n \geq n_\delta$.
Thus, for all $n \geq n_\delta$,
\begin{eqnarray}
\label{eq:exp.bound.conv.prob}
\nonumber
\pr{\left|\widetilde{H}_{k,\sqrt[d]{\nu / n}}(n) - a_{k,\nu}\right| > \delta}
&\leq&
\pr{\left|\widetilde{H}_{k,\sqrt[d]{\nu / n}}(n) -
\expect{\widetilde{H}_{k, \sqrt[d]{\nu / n}}(n)}\right| > \frac{1}{2} \delta} \\
\nonumber
&=&
\pr{\left|H_{k,\sqrt[d]{\nu / n}}(n) -
\expect{H_{k, \sqrt[d]{\nu / n}}(n)}\right| > \frac{n}{2} \delta} \\
&\leq&
2 {\mathrm e}^{- n \delta^2 / 8} \,.
\end{eqnarray}
For any $\delta > 0$, the latter term tends to~$0$ as $n$ tends to $\infty$,
which implies the convergence in probability of $\widetilde{H}_{k,\sqrt[d]{\nu / n}}(n)$.
We now proceed to establish the almost-sure convergence.
From~(\ref{eq:exp.bound.conv.prob}), for any $\delta > 0$ and the corresponding finite $n_\delta$, it follows that
\begin{equation}
\sum_{n=1}^\infty
\pr{\left|\widetilde{H}_{k,\sqrt[d]{\nu / n}}(n) - a_{k,\nu}\right| > \delta}
\leq
n_\delta + 2 \sum_{n=n_\delta}^\infty {\mathrm e}^{- n \delta^2 / 8} < \infty \,,
\end{equation}
which implies the almost-sure convergence of $\widetilde{H}_{k,\sqrt[d]{\nu / n}}(n)$.
\end{proof}
Given Proposition~\ref{prop:H.conv.in.prob}, we prove the convergence
in probability of $\widetilde{F}_{k,\lambda}(t)$.
\begin{proposition}
\label{prop:lln1}
For any $k \geq 1$, $\lambda > 0$, the random variable
$\widetilde{F}_{k,\lambda}(t)$ converges to $a_{k, \lambda}$
in probability as $t \to \infty$.
\end{proposition}
\begin{proof}
The proof relies on lower and upper bounds, which asymptotically
coincide.
For both bounds, we will use the fact that
\[
\vari{\widetilde{F}_{k, \lambda}(s)} \leq \frac{1}{\lambda s^d}
\]
because of Lemma~\ref{lemma:sigma.ub.1}, and hence
\begin{equation}
\pr{\left|\frac{1}{M} \sum_{i = 1}^{M} \widetilde{N}_i(s) -
\expect{\widetilde{F}_{k, \lambda}(s)}\right| \geq \varepsilon} \leq
\frac{1}{M \lambda s^d \varepsilon^2}
\label{chebyshev1}
\end{equation}
for any $M \geq 1$ and $\varepsilon > 0$ by virtue of Chebyshev's inequality.
We first consider the lower bound.
Proposition~\ref{prop:one} implies that there exists an $s = s_\delta$ such that
\[
\overline{F}_{k, \lambda}(s) \geq a_{k, \lambda} - \frac{1}{4} \delta.
\]
There also exists a $t(s, \delta)$ so that
\[
\left(\frac{t}{s}\right)^d \frac{1}{M^l(s, t)} \leq
1 + \frac{1}{4} \delta
\]
for all $t \geq t(s, \delta)$.
Noting that $\overline{F}_{k, \lambda}(s) \leq 1$
and $M^l(s, t) \leq \left(\frac{t}{s}\right)^d$, we then have
\begin{eqnarray*}
\left(\frac{t}{s}\right)^d \frac{1}{M^l(s, t)} (a_{k, \lambda} - \delta) &\leq&
\left(\frac{t}{s}\right)^d \frac{1}{M^l(s, t)}
\left(\overline{F}_{k, \lambda}(s) - \frac{3}{4} \delta\right) \\
&\leq&
\left(1 + \frac{1}{4} \delta\right) \left( \overline{F}_{k, \lambda}(s) -
\frac{3}{4} \delta \right) \\
&\leq&
\overline{F}_{k, \lambda}(s) - \frac{1}{2} \delta \\
&=&
\expect{\widetilde{F}_{k, \lambda}(s)} - \frac{1}{2} \delta
\end{eqnarray*}
for all $t \geq t(s, \delta)$.
Using the stochastic lower bound~(\ref{lb2}), the above inequality
and~(\ref{chebyshev1}), we derive, for all $t \geq t(s, \delta)$,
\begin{eqnarray*}
\pr{\widetilde{F}_{k, \lambda}(t) \leq a_{k, \lambda} - \delta}
&\leq&
\pr{\left(\frac{s}{t}\right)^d \sum_{i = 1}^{M^l(s, t)} \widetilde{N}_i(s) \leq
a_{k, \lambda} - \delta} \\
&=&
\pr{\frac{1}{M^l(s, t)} \sum_{i = 1}^{M^l(s, t)} \widetilde{N}_i(s) \leq
\left(\frac{t}{s}\right)^d \frac{1}{M^l(s, t)} (a_{k, \lambda} - \delta)} \\
&\leq&
\pr{\frac{1}{M^l(s, t)} \sum_{i = 1}^{M^l(s, t)} \widetilde{N}_i(s) \leq
\expect{\widetilde{F}_{k, \lambda}(s)} - \frac{1}{2} \delta} \\
&\leq&
\frac{4}{\lambda s^d M^l(s, t) \delta^2}.
\end{eqnarray*}
The latter term tends to~0 as $t$ grows large for any $\delta > 0$.
We now turn to the upper bound.
Proposition~\ref{prop:one} implies that there exists an $s = s_\delta$ such that
\[
\overline{F}_{k, \lambda}(s) \leq a_{k, \lambda} + \frac{1}{4} \delta.
\]
There also exists a $t(s, \delta)$ so that
\[
\left(\frac{t}{s}\right)^d \frac{1}{M^u(s, t)} \geq
1 - \frac{1}{8} \delta
\]
for all $t \geq t(s, \delta)$.
Noting that $\overline{F}_{k, \lambda}(s) \leq 1$ and assuming
$\delta \leq 1$, we then have
\begin{eqnarray*}
\left(\frac{t}{s}\right)^d \frac{1}{M^u(s, t)} (a_{k, \lambda} + \delta)
&\geq&
\left(\frac{t}{s}\right)^d \frac{1}{M^u(s, t)}
\left(\overline{F}_{k, \lambda}(s) + \frac{3}{4} \delta\right) \\
&\geq&
\left(1 - \frac{1}{8} \delta\right)
\left(\overline{F}_{k, \lambda}(s) + \frac{3}{4} \delta\right) \\
&\geq&
\overline{F}_{k, \lambda}(s) + \frac{5}{8} \delta - \frac{3}{32} \delta^2 \\
&\geq&
\overline{F}_{k, \lambda}(s) + \frac{1}{2} \delta \;=\;
\expect{\widetilde{F}_{k, \lambda}(s)} + \frac{1}{2} \delta
\end{eqnarray*}
for all $t \geq t(s, \delta)$.
Using the stochastic upper bound~(\ref{ub2}), the above inequality
and~(\ref{chebyshev1}), we derive for all $t \geq t(s, \delta)$,
\begin{eqnarray*}
\pr{\widetilde{F}_{k, \lambda}(t) \geq a_{k, \lambda} + \delta}
&\leq&
\pr{\left(\frac{s}{t}\right)^d \sum_{i = 1}^{M^u(s, t)} \widetilde{N}_i(s) \geq
a_{k, \lambda} + \delta} \\
&=&
\pr{\frac{1}{M^u(s, t)} \sum_{i = 1}^{M^u(s, t)} \widetilde{N}_i(s) \geq
\left(\frac{t}{s}\right)^d \frac{1}{M^u(s, t)} (a_{k, \lambda} + \delta)} \\
&\leq&
\pr{\frac{1}{M^u(s, t)} \sum_{i = 1}^{M^u(s, t)} \widetilde{N}_i(s) \geq
\expect{\widetilde{F}_{k, \lambda}(s)} + \frac{1}{2} \delta} \\
&\leq&
\frac{4}{\lambda s^d M^u(s, t) \delta^2}.
\end{eqnarray*}
The latter term tends to~0 as $t$ grows large for any $\delta > 0$.
\end{proof}
\section{Evaluation of $a_{k,\lambda}$}
In Propositions~\ref{prop:H.conv.in.prob} and~\ref{prop:lln1} we showed
that the fraction of nodes that can be properly colored converges
in probability to a constant $a_{k,\lambda}$ as the size of the area
and the total number of nodes grow large.
It appears difficult to obtain an explicit expression for
$a_{k,\lambda}$ in general.
Below we provide the exact value for the one-dimensional case $d = 1$
and present lower and upper bounds for $d > 1$.
\subsection{One-dimensional case (line)}
In case $d = 1$, a maximum proper coloring of points in the
interval $[0, t]$ can be obtained in a greedy manner by sequential
inspection of these points from left to right.
Specifically, a point at position $x \in [0, t]$ is selected
if fewer than $k$ points in $[x - 1, x]$ have been included,
and then assigned any color that is not already used for any points
in $[x - 1, x]$, and skipped otherwise.
If we now interpret the points in $[0, t]$ as arrival times of
customers, then it can be verified that the maximum proper coloring
obtained in the above fashion corresponds exactly to the arrival
times of those customers that are admitted in a so-called Erlang
loss system with service times~$1$ and capacity~$k$,
starting from an empty state at time~$0$.
In particular, the value of $F_{k,\lambda}(t)$ equals the number
of admitted customers $A(k, \lambda, t)$ in the latter Erlang loss system,
where the arrivals are governed by a Poisson process.
It is well known~\cite{kelly-1979-reversibility} that $A(k, \lambda, t) / (\lambda t)$
converges
in probability to $1 - \mbox{Erl}(\lambda, k)$
as $t \to \infty$, where
\[
\mbox{Erl}(\lambda, k) =
\frac{\frac{\lambda^k}{k!}}{\sum_{l = 0}^{k} \frac{\lambda^l}{l!}} =
\frac{\pr{\textrm{Po}(\lambda) = k}}{\pr{\textrm{Po}(\lambda) \leq k}}
\]
denotes the so-called Erlang loss probability.
Thus
\[
a_{k,\lambda} = 1 - \frac{\frac{\lambda^k}{k!}}{\sum_{l = 0}^{k} \frac{\lambda^l}{l!}} =
\frac{\sum_{m = 0}^{k - 1} \frac{\lambda^m}{m!}}{\sum_{l = 0}^{k} \frac{\lambda^l}{l!}} \,.
\]
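The correspondence with the Erlang loss system can be checked numerically: the greedy left-to-right selection is exactly the admission rule of a loss system with unit (deterministic) service times and capacity $k$, and by the insensitivity of the Erlang loss formula the deterministic service times do not change the blocking probability. The simulation sketch below is ours:

```python
import math
import random
from bisect import bisect_left

def erlang_b(lam, k):
    """Erlang loss probability Erl(lambda, k)."""
    terms = [lam ** l / math.factorial(l) for l in range(k + 1)]
    return terms[k] / sum(terms)

def greedy_count(points, k):
    """Left-to-right greedy selection on the line: keep a point x iff
    fewer than k already-selected points lie in [x - 1, x]."""
    selected = []  # stays sorted: points are scanned in increasing order
    for x in sorted(points):
        if len(selected) - bisect_left(selected, x - 1) < k:
            selected.append(x)
    return len(selected)

def coloring_ratio(lam, k, t, seed=0):
    """Simulated coloring ratio F_{k,lam}(t) / (lam * t) on [0, t],
    approximating the Poisson count by its mean."""
    rng = random.Random(seed)
    n = int(lam * t)  # fluctuations of the true count are O(sqrt(n))
    points = [rng.uniform(0.0, t) for _ in range(n)]
    return greedy_count(points, k) / (lam * t)
```

For $\lambda = 2$, $k = 2$ one has $\mbox{Erl}(2, 2) = 2/5$, so the simulated ratio should concentrate near $a_{2,2} = 0.6$ for large $t$.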
\iffalse
If we assume $k(\rho) = \kappa \rho$, with $\kappa \in (0, \infty)$, then
\[
\lim_{\rho \to \infty} Erl(\rho, k) = \max\{1 - \kappa, 0\}.
\]
This suggests that in a regime $d(n) = f(n)$, $k(n) = \kappa f(n)$,
where $f(n)$ is some slowly increasing function of~$n$, so that the graph
grows dense and the number of colors grows large, we may expect to see
\[
\lim_{n \to \infty} \expect{L(n, d(n), k(n))} = \max\{1 - \kappa, 0\}.
\]
\fi
\subsection{Higher dimensions}
For $d > 1$ it seems difficult to derive an explicit expression for
$a_{k,\lambda}$.
In order to establish bounds, let $g_{\max}$ be the maximum density
that any set of points $V \subseteq {\mathbb R}_+^d$ can have such that
no two points are within distance~$1$.
Thus there is then a sphere of unit radius around every point in~$V$
that does not contain any other point in~$V$.
Hence a trivial upper bound is $g_{\max} \leq 1/V_d(1)$, where $V_d(1)$
is the volume of a sphere of unit radius in $d$~dimensions.
Now observe that any two points that are assigned the same color
cannot be within distance~$1$, i.e., the density of the points that
are assigned the same color can be at most $g_{\max}$.
This implies that $a_{k,\lambda} \leq \min\{1, k g_{\max} / \lambda\}$.
While admittedly crude, we expect this upper bound to be asymptotically
tight for large values of~$\lambda$, in the sense that
$\lambda a_{k,\lambda} \to k g_{\max}$ as $\lambda \to \infty$.
For a given~$s$, let $m(s)$ be the minimum number of sets
$B_1, \dots, B_{m(s)}$ needed such that (i) any point in ${\mathbb R}^d$
is within distance~$s$ from one of the points
in $B_1 \cup \dots \cup B_{m(s)}$
and (ii) no two points within each of the sets $B_m$ are within
distance $1 + 2 s$, $m = 1, \dots, m(s)$.
The points in $B_1 \cup \dots \cup B_{m(s)}$ may be collectively
interpreted as `cell anchor points' or `base stations'.
Now suppose we partition the $k$~colors into $m(s)$ disjoint groups
$C_1, \dots, C_{m(s)}$ of size at least $\lfloor k / m(s) \rfloor$,
and then allocate all the colors in the group $C_m$ to all the
points in the set $B_m$, $m = 1, \dots, m(s)$.
In order to color a point, we will assign it one of the colors
allocated to the nearest anchor point.
Since any point in ${\mathbb R}^d$ is within distance~$s$ from one of the
anchor points, the points that need to be supported by a given anchor
point are all within radius~$s$, and hence the total number is bounded
from above by a Poisson random variable with parameter $\lambda V_d(s)$.
This implies that $a_{k,\lambda} \geq
\expect{\min\{\textrm{Po}(\lambda V_d(s)), \lfloor k / m(s) \rfloor\}} / \lambda V_d(s)$.
While rather rough, this lower bound demonstrates that
$a_{k,\lambda} \to 1$ as $k \to \infty$.
Thus, any target coloring ratio, however close to one, can be asymptotically
achieved with a sufficiently large but constant number of colors.
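The quality of the lower bound can be evaluated numerically. The sketch below (ours) computes $\expect{\min\{\textrm{Po}(\mu), c\}}$, where $\mu = \lambda V_d(s)$ and $c = \lfloor k / m(s) \rfloor$ are supplied by the user, and illustrates that the normalized bound tends to one as $c$ grows:

```python
import math

def expected_min_po(mu, c):
    """E[min(Po(mu), c)] = sum_{l < c} l * P(Po(mu) = l) + c * P(Po(mu) >= c)."""
    pmf = [math.exp(-mu) * mu ** l / math.factorial(l) for l in range(c)]
    return sum(l * p for l, p in enumerate(pmf)) + c * (1.0 - sum(pmf))

def lower_bound_ratio(mu, c):
    """Normalized lower bound E[min(Po(mu), c)] / mu on the coloring ratio."""
    return expected_min_po(mu, c) / mu
```

For $c = 1$ the expression reduces to $\pr{\textrm{Po}(\mu) \geq 1} = 1 - {\mathrm e}^{-\mu}$, while for $c$ much larger than $\mu$ it approaches $\mu$, so the ratio approaches one.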
\section{Conclusion}
We have examined maximum vertex coloring of random geometric graphs,
in an arbitrary but fixed dimension.
This is a problem in discrete stochastic geometry which is neither
scale-invariant nor smooth, and hence the traditional methodological
framework to obtain limit laws cannot be applied.
We have therefore leveraged different concepts based on subadditivity
to establish convergence laws for the maximum number of vertices
that can be colored with a constant number of colors.
For the constants involved in these results, we derived the exact
value in dimension one, and upper and lower bounds in higher dimensions.
The approach that we have developed for this specific non-linear
Euclidean problem offers great potential to be extended to a broader
class of problems, which will be the subject of further research.
Some specific questions that we aim to address in future work are:
(1) What can one say when the `coverage area' is not a disk,
but some arbitrarily shaped area?
(2) Can one prove generalizations when the disk radius~$r$ is not
a constant, but a generally distributed random variable?
% https://arxiv.org/abs/1603.01183
\title{Solving systems of polynomial inequalities with algebraic geometry methods}
\begin{abstract}
The goal of this paper is to provide computational tools able to find a solution of a system of polynomial inequalities. The set of inequalities is reformulated as a system of polynomial equations. Three different methods, two of which are taken from the literature, are proposed to compute solutions of such a system. An example of how such procedures can be used to solve the static output feedback stabilization problem for a linear parametrically-varying system is reported.
\end{abstract}
\section{Introduction\label{sec:intro}}
In several control problems, it is needed to guarantee
the existence of real solutions, and, possibly, to compute one of them,
for a system of polynomial equalities or inequalities \cite{abdallah1995applications,henrion2005positive,chesi2010lmi}.
For instance, a solution of a set of polynomial equalities and inequalities has to be found to
solve the static output feedback stabilization problem \cite{astolfi2004static}, to compute
the equilibrium points of a nonlinear system \cite{vidyasagar2002nonlinear},
to establish if a polynomial can be written as sum of squares \cite{ParilloPhd},
to study the stability of linear systems, with structured uncertainty~\cite{chesi2007robust}.
In this paper, three algorithms, which use the tools of algebraic geometry, are used to compute
solutions of a system of polynomial equations.
Algebraic geometry tools have been already used for control problems (see, for instance,
\cite{diop1991elimination,menini2014algebraic,menini2014CDC,possieri2014polynomial}).
The first algorithm is based on the computation of a quotient--ring basis
\cite{cox1992ideals,cox1998using}
and of the eigenvalues of some matrices characterizing such a basis.
The second algorithm is based on the Rational Univariate Representation \cite{rouillier1999solving}
of a given ideal. The third algorithm is based on the computation of a Groebner basis \cite{cox1992ideals}
of an `extended' ideal. The first two algorithms are taken from the literature, whereas the last one
is new, to the best of the authors' knowledge.
Although these techniques are able to solve only systems of equalities, the procedure given
in \cite{anderson1977output} makes it possible to reformulate
a set of inequalities as a set of equalities; hence, the three mentioned algorithms can also be
used to find a solution of a set of inequalities.
Moreover, thanks to the recent advances in Computer Algebra Systems able to
carry out complex algebraic geometry computations (as, e.g., Macaulay2 \cite{M2}),
by using the algorithm based on the computation of a Groebner basis of an `extended' ideal,
which is the main new result,
a solution to a set
of polynomial inequalities can be obtained also when some coefficients of the polynomials are
unknown parameters. Hence, when the values of such parameters can be assumed to be known in real time,
as for Linear Parametrically--Varying (briefly, LPV) systems,
the new method proposed here allows one to compute off--line most of the solution
in parametric form, leaving only a small portion of the computations to be executed in real time,
for the actual values of the parameters. In Section~\ref{sec:LPVSOF}, this method is applied
to compute a parameter-dependent Static Output Feedback (briefly, SOF),
which makes an LPV system asymptotically stable.
\section{Notation and preliminaries\label{sec:basicNotions}}
In this section, some notions of algebraic geometry are recalled,
following the exposition in \cite{cox1992ideals,cox1998using}.
Let $x=[\begin{array}{ccc}
x_1 & \cdots & x_n
\end{array}]^\top$. A \emph{monomial} in $x$ is a product of the form $x_1^{\alpha_1}\cdots
x_n^{\alpha_n}$, where $\alpha_i$, for $i=1,\dots,n$, are non--negative integers
(i.e., $\alpha\in\mathbb{Z}^n_{\geq 0}$);
a \emph{polynomial} in $x$ is a finite $\mathbb{R}$--linear combination of monomials in $x$.
Let
$\mathbb{R}[x]$
denote the ring of all polynomials in $x$ with real coefficients.
Given a set of polynomials $\{p_1,\dots,p_s\}\subset\mathbb{R}[x]$, the \emph{affine variety} defined by
$p_1,\dots,p_s$ (see, e.g., \cite{danilov1998algebraic,liu2002algebraic}) is
\begin{equation*}
\bm{V}(p_1,\dots,p_s)=\{x\in\mathbb{R}^n:\;p_i(x)=0,\,i=1,\dots,s\},
\end{equation*}
whereas, the \emph{semi--algebraic set} defined by $p_1,\dots,p_s$ is
\begin{equation*}
\bm{W}(p_1,\dots,p_s)=\{x\in\mathbb{R}^n:\;p_i(x)\geq 0,\,i=1,\dots,s\}.
\end{equation*}
The \emph{ideal} $\langle p_1,\dots,p_s \rangle$ in $\mathbb{R}[x]$ is the
set of all the polynomials $q\in\mathbb{R}[x]$, which can be expressed as a finite linear
combination of $p_1,\dots,p_s$, with polynomial coefficients $h_1,\dots,h_s\in\mathbb{R}[x]$,
i.e., $q(x)=\sum_{i=1}^{s}h_i(x)p_i(x)$. Affine varieties and ideals are notions linked by the concept
of affine variety of an ideal. Let $\mathcal{I}$ be an ideal in $\mathbb{R}[x]$; the
\emph{affine variety of the ideal $\mathcal{I}$}, denoted by $\bm{V}(\mathcal{I})$, is the set
\begin{equation*}
\bm{V}(\mathcal{I})=\{x\in\mathbb{R}^n:\;q(x)=0,\,\forall q\in\mathcal{I} \}.
\end{equation*}
A notion needed to analyze polynomials is the \emph{monomial
ordering} on $\mathbb{R}[{x}]$, denoted by
$>$, which is a total ordering on the set of monomials ${x^{\alpha}},\,{\alpha}\in\mathbb{Z}_{\geq0}^{n}$,
satisfying the following properties:
\begin{inparaenum}[1)]
\item if ${\alpha}>{\beta}$ and ${\gamma}\in\mathbb{Z}_{\geq0}^{n}$,
then ${\alpha}+{\gamma}>{\beta}+{\gamma}$;
\item every nonempty
subset of monomials has a smallest element under $>$.
\end{inparaenum}
The \emph{lex ordering} is a monomial ordering, denoted by $>_{l}$
and defined as follows: given
${\alpha},\,{\beta}\in\mathbb{Z}_{\geq0}^n$, one has that ${\alpha}>_{l}{\beta}$ if, in
the vector difference ${\alpha}-{\beta}\in\mathbb{Z}^n$, the first
nonzero entry is positive.
The \emph{leading term of a polynomial $f({x})$},
denoted by $\mathrm{LT}(f({x}))$,
is, for a fixed monomial ordering, the largest monomial appearing in $f({x})$.
For a fixed monomial ordering, a finite subset $\mathcal{G}=\{g_{1},\dots,g_{l}\}$
of an ideal $\mathcal{I}$ is said to be a \emph{Groebner basis}
of $\mathcal{I}$ if $\langle\mathrm{LT}(g_{1}),\dots,\mathrm{LT}(g_{l})\rangle=\langle\mathrm{LT}(\mathcal{I})\rangle$,
where the \textsl{leading term of the ideal $\mathcal{I}$} is $\mathrm{LT}(\mathcal{I}):=
\{ c{x^{\alpha}}:\exists\, f\in\mathcal{I},\text{ with }\mathrm{LT}(f({x}))=c{x^{\alpha}}\}$.
The remainder of
the division of a polynomial function $f$ by the elements of a Groebner
basis $\mathcal{G}$ of $\mathcal{I}$,
denoted by $\overline{f}^{\mathcal{G}}$,
is unique and is a finite $\mathbb{R}$--linear combination of monomials
$x^\alpha\notin\langle \mathrm{LT}(\mathcal{I}) \rangle$
(for more details see, e.g., \cite{cox1992ideals,buchberger2006bruno}).
Moreover, it can be easily checked that, given $f,\,g\in\mathbb{R}[x]$, one has that
$\overline{f}^{\mathcal{G}}+\overline{g}^{\mathcal{G}}=\overline{f+g}^{\mathcal{G}}$
and that $\overline{\overline{f}^{\mathcal{G}}\cdot \overline{g}^{\mathcal{G}}}^\mathcal{G}
=\overline{f \cdot g}^{\mathcal{G}}$.
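The two compatibility properties of the remainder can be checked with a computer algebra system. The following is a minimal SymPy sketch (the ideal and the polynomials $f$, $g$ are illustrative choices of this example, not taken from the paper):

```python
import sympy as sp

x, y = sp.symbols('x y')
# a toy ideal and a lex Groebner basis of it (illustrative choice)
G = sp.groebner([x**2 + y**2 - 5, x*y - 2], x, y, order='lex').exprs

def rem(f):
    # remainder of the division of f by the elements of the Groebner basis
    _, r = sp.reduced(f, list(G), x, y, order='lex')
    return sp.expand(r)

f, g = x**3 + y, x*y + 1
# compatibility of the remainder with sum and product
assert sp.expand(rem(f) + rem(g) - rem(f + g)) == 0
assert sp.expand(rem(rem(f) * rem(g)) - rem(f * g)) == 0
```

As expected, the remainder of any element of the ideal itself is the zero polynomial.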
Let $\mathcal{I}$ be a given ideal in $\mathbb{R}[x_1,\dots,x_n]$. The \emph{$j$-th elimination ideal of} $\mathcal{I}$
is $\mathcal{I}^j:=\mathcal{I}\cap\mathbb{R}[x_{j+1},\dots,x_n]$. Let $\mathcal{G}$ be
a Groebner basis of $\mathcal{I}$, with respect to the lex ordering, with $x_1>_lx_2>_l\dots>_lx_n$. Then,
by the Elimination Theorem, for every $0\leq j \leq n$,
the set $\mathcal{G}_j=\mathcal{G}\cap\mathbb{R}[x_{j+1},\dots,x_n]$ is a Groebner basis of the $j$-th elimination ideal
$\mathcal{I}^j$.
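The Elimination Theorem can be illustrated computationally. In the following SymPy sketch (the two-variable ideal is an illustrative choice of this note), the elements of a lex Groebner basis that are free of $x$ form a Groebner basis of the first elimination ideal:

```python
import sympy as sp

x, y = sp.symbols('x y')
# lex ordering with x > y: the basis elements free of x generate I ∩ R[y]
G = sp.groebner([x**2 + y**2 - 5, x*y - 2], x, y, order='lex').exprs
G1 = [g for g in G if x not in g.free_symbols]  # basis of the 1st elimination ideal
```

Here `G1` consists of the single polynomial $y^4-5y^2+4$, whose roots $\pm 1$, $\pm 2$ are the $y$-coordinates of the points of the variety.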
\section{Algebraic geometry algorithms for solving systems of polynomial equations\label{sec:solPol}}
In this section, three methods to solve a system of polynomial equations,
having a finite number of solutions, are presented.
The first algorithm is based on the computation of a quotient--ring basis
and of the eigenvalues of some matrices characterizing this basis \cite{cox1992ideals,cox1998using}.
The second one is based on the computation of a Rational
Univariate Representation of the solutions
of the system of polynomial equations \cite{rouillier1999solving}.
The third one
is based on the computation of a Groebner basis of an `extended' ideal.
The first two methods are taken from the literature, whereas the last one is new,
to the best of the authors' knowledge.
Such methods are used in Section~\ref{sec:ineq} to find a solution to a system of polynomial
inequalities.
\subsection{Solution of a system of polynomial equations by using finite--dimensional quotient rings\label{sec:eigenvalues}}
In this section, the basic notions of quotient rings are recalled and an algorithm,
taken from \cite{cox1998using} and \cite{sturmfels2002solving}, to
solve systems of polynomial equations is given.
Let $\mathcal{I}\subset\mathbb{R}[x]$ be an ideal, and let $f,\,g\in\mathbb{R}[x]$. The polynomials
$f$ and $g$ are \emph{congruent modulo $\mathcal{I}$}, denoted by $f=g\,\mathrm{mod}\,\mathcal{I}$,
if $f-g\in\mathcal{I}$. The equivalence class of $f$ modulo $\mathcal{I}$, denoted by $[f]_{\mathcal{I}}$,
is defined as $[f]_{\mathcal{I}}=\{g\in\mathbb{R}[x]:\;g=f\,\mathrm{mod}\,\mathcal{I}\}$.
The \emph{quotient} of $\mathbb{R}[x]$ modulo $\mathcal{I}$, denoted by $\mathbb{R}[x]/ \mathcal{I}$,
is the set of all the equivalence classes modulo $\mathcal{I}$,
\begin{equation*}
\mathbb{R}[x]/\mathcal{I}=\{[f]_{\mathcal{I}},\,f\in\mathbb{R}[x]\}.
\end{equation*}
Let $\mathcal{G}$ be a Groebner basis of the ideal $\mathcal{I}$, according to any monomial ordering.
By the definition of the class $[f]_{\mathcal{I}}$, one has that $\overline{f}^\mathcal{G}\in[f]_{\mathcal{I}}$.
Hence, the remainder $\overline{f}^\mathcal{G}$ can be used as a standard representative of the class
$[f]_{\mathcal{I}}$ (in the rest of this paper, the remainder $\overline{f}^\mathcal{G}$
is identified with its class $[f]_{\mathcal{I}}$).
Therefore, since the operations of sum and product by a constant on $\mathbb{R}[x]/\mathcal{I}$
have a one--to--one correspondence with the same operations on the remainders, the
elements in $\mathbb{R}[x]/\mathcal{I}$ can be added and multiplied by a constant.
Thus, the quotient ring $\mathbb{R}[x]/\mathcal{I}$ has the structure of a vector space over $\mathbb{R}$
(it is called an \emph{algebra}).
Since all the remainders $\overline{f}^\mathcal{G}$ are
$\mathbb{R}$--linear combinations of monomials, none of which is in the ideal
$\langle \mathrm{LT}(\mathcal{I}) \rangle$, it is possible to form a monomial basis $\mathcal{B}$ of
the quotient ring $\mathbb{R}[x]/\mathcal{I}$ as
\[
\mathcal{B}=\{x^\alpha:\;x^\alpha\notin\langle\mathrm{LT}(\mathcal{I})\rangle \}.
\]
The following theorem gives conditions on $\mathcal{I}$, for
the algebra $\mathfrak{A}=\mathbb{R}[x]/\mathcal{I}$ to be
finite--dimensional.
\begin{thm}
\cite{cox1998using}
Let $\mathcal{I}$ be an ideal in $\mathbb{R}[x]$ and let $\mathcal{G}$ be a Groebner basis of
$\mathcal{I}$, according to any monomial ordering. The following conditions are equivalent:
\begin{enumerate}
\item The algebra $\mathfrak{A}=\mathbb{R}[x]/\mathcal{I}$
is finite--dimensional over $\mathbb{R}$.
\item The affine variety $\bm{V}(\mathcal{I})$ is a finite set.
\item For each $i\in\{1,\dots,n\}$, there exists an $m_i\geq 0$ such that
$x_i^{m_i}=\mathrm{LT}(g)$, for some $g\in\mathcal{G}$.
\end{enumerate}
\label{thm:finiteness}
\end{thm}
If an ideal $\mathcal{I}$ in $\mathbb{R}[x]$ is such that one of the conditions of Theorem~\ref{thm:finiteness}
holds, then $\mathcal{I}$ is called \emph{zero dimensional}. An immediate consequence of Theorem~\ref{thm:finiteness} is that,
if the ideal $\mathcal{I}$ is zero dimensional, then any basis $\mathcal{B}$ of $\mathbb{R}[x]/\mathcal{I}$
has a finite number of elements, which can be chosen to be all monomials.
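Condition 3) of Theorem~\ref{thm:finiteness} also yields a simple computational test for zero dimensionality. A minimal SymPy sketch (the helper name and the example ideals are illustrative assumptions of this example):

```python
import sympy as sp

def is_zero_dimensional(polys, gens):
    """Condition 3) of the finiteness theorem: every variable must appear
    as a pure power among the leading monomials of a Groebner basis."""
    G = sp.groebner(polys, *gens, order='lex').exprs
    lms = [sp.LM(g, *gens, order='lex') for g in G]
    pure = lambda m, v: m == v or (m.is_Pow and m.base == v)
    return all(any(pure(m, v) for m in lms) for v in gens)

x, y = sp.symbols('x y')
assert is_zero_dimensional([x**2 + y**2 - 5, x*y - 2], (x, y))  # finite variety
assert not is_zero_dimensional([x*y - 2], (x, y))               # a curve
```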
With such a choice (assumed in the following), given a polynomial $f\in\mathbb{R}[x]$, one can use multiplication
to define a linear map $m_f^{\mathfrak{A}}$ between $\mathfrak{A}=\mathbb{R}[x]/\mathcal{I}$ and itself.
More precisely, $m_f^{\mathfrak{A}}:\mathfrak{A}\rightarrow\mathfrak{A}$
is defined as:
\begin{equation*}
m_f^{\mathfrak{A}}([g]_{\mathcal{I}})=[f]_{\mathcal{I}}\cdot [g]_{\mathcal{I}}=[f\cdot g]_{\mathcal{I}}\in\mathfrak{A},\quad\forall[g]_{\mathcal{I}}
\in\mathfrak{A}.
\end{equation*}
Since the algebra $\mathfrak{A}$ is finite--dimensional over $\mathbb{R}$, the map $m_f^{\mathfrak{A}}$ can be
represented by its associated matrix $M_f^{\mathfrak{A}}$, with respect to the chosen finite--dimensional basis
$\mathcal{B}$ of $\mathfrak{A}$.
The following two propositions characterize the elements of the affine variety $\bm{V}(\mathcal{I})$ of a zero
dimensional ideal $\mathcal{I}$.
\begin{prop}
\cite{sturmfels2002solving}
Let the ideal $\mathcal{I}$ in $\mathbb{R}[x]$ be zero dimensional.
Let $\mathcal{B}=\{b_1,\dots,b_\ell\}$ be the basis of the finite--dimensional algebra $\mathfrak{A}=\mathbb{R}[x]/\mathcal{I}$.
Let $M_{b_i}^\mathfrak{A}$ be the matrix representing the linear map $m_{b_i}^\mathfrak{A}$, with
respect to the basis $\mathcal{B}$, for $i=1,\dots,\ell$.
Define the real symmetric matrix
$T$ as:
\begin{equation*}
[T]_{j,k}=\mathrm{Tr}(M_{b_j}^\mathfrak{A}M_{b_k}^\mathfrak{A}),
\end{equation*}
where $[T]_{j,k}$ denotes the $(j,k)$th entry of $T$ and $\mathrm{Tr}(\cdot)$ is
the trace operator. The number of elements (counted with their multiplicity) of $\bm{V}(\mathcal{I})\subset\mathbb{R}^n$
equals the signature of $T$, i.e. the number of positive eigenvalues of $T$ minus the number of
negative eigenvalues of $T$.
\label{prop:numberPoints}
\end{prop}
\begin{prop}
\cite{cox1998using}
Let the assumptions of Proposition~\ref{prop:numberPoints} hold.
Let $M_{x_i}^\mathfrak{A}$ be the matrix representing the linear map $m_{x_i}^\mathfrak{A}$, with
respect to the basis $\mathcal{B}$, for $i=1,\dots,n$.
The real eigenvalues of
the matrix $M_{x_i}^\mathfrak{A}$ are the $x_i$--coordinates of the points of
$\bm{V}(\mathcal{I})\subset\mathbb{R}^n$, for $i=1,\dots,n$.
\label{prop:coordinates}
\end{prop}
By Proposition~\ref{prop:numberPoints} and Proposition~\ref{prop:coordinates},
Algorithm~\ref{alg:solutionMatrix} is able to solve a system of polynomial equations
having a finite number of solutions.
\begin{algorithm}[htb]
\begin{algorithmic}[1]
\REQUIRE A zero dimensional ideal $\mathcal{I}$ in $\mathbb{R}[x]$.
\ENSURE The points in $\bm{V}(\mathcal{I})\subset\mathbb{R}^n$.
\STATE Define the algebra $\mathfrak{A}=\mathbb{R}[x]/\mathcal{I}$.
\STATE Compute a basis $\mathcal{B}=\{b_1,\dots,b_\ell \}$ of the algebra $\mathfrak{A}$.
\FOR {$i=1$ \TO $n$}
\STATE Compute the matrix $M_{x_i}^\mathfrak{A}$, representing the map $m_{x_i}^\mathfrak{A}$.
\STATE Compute the set $\mathcal{E}_i$ of the real eigenvalues of $M_{x_i}^\mathfrak{A}$\label{step:Ei}.
\ENDFOR
\STATE Compute the matrix $T$, such that $[T]_{j,k}=\mathrm{Tr}(M_{b_j}^\mathfrak{A}M_{b_k}^\mathfrak{A})$.
\STATE Let $d$ be equal to the signature of the matrix $T$.
\STATE Let $\mathcal{S}$ be the set of the $n$--tuples $s_j=[\begin{array}{ccc}
s_{j,1} & \cdots & s_{j,n}\end{array}]^\top$, with $s_{j,i}\in\mathcal{E}_i$, for $i=1,\dots,n$ and $j=1,\dots,d$,
which are such that $f(s_j)=0$, for all $f\in\mathcal{I}$ and $j=1,\dots,d$ \label{step:sol}.
\RETURN $\mathcal{S}=\bm{V}(\mathcal{I})$.
\end{algorithmic}
\caption{Solution of polynomial equations through computation of eigenvalues.\label{alg:solutionMatrix}}
\end{algorithm}
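Steps 2)--5) of Algorithm~\ref{alg:solutionMatrix} can be sketched as follows in SymPy/NumPy (the ideal and the exponent bound `bound` are illustrative assumptions of this example, not part of the algorithm):

```python
import itertools
import numpy as np
import sympy as sp

x, y = sp.symbols('x y')
gens = (x, y)
polys = [x**2 + y**2 - 5, x*y - 2]  # illustrative zero dimensional ideal
G = sp.groebner(polys, *gens, order='grevlex').exprs
lms = [sp.Poly(sp.LM(g, *gens, order='grevlex'), *gens).monoms()[0] for g in G]

# monomial basis B of R[x]/I: monomials not divisible by any leading monomial
bound = 6  # assumed large enough for this toy ideal
B = [x**i * y**j for i, j in itertools.product(range(bound), repeat=2)
     if not any(i >= a and j >= b for a, b in lms)]
Bexp = [sp.Poly(b, *gens).monoms()[0] for b in B]

def nf(f):  # normal form (remainder) modulo the Groebner basis
    return sp.reduced(f, list(G), *gens, order='grevlex')[1]

def mult_matrix(f):
    # matrix of the multiplication map m_f with respect to the basis B
    M = sp.zeros(len(B), len(B))
    for k, b in enumerate(B):
        d = sp.Poly(nf(f * b), *gens).as_dict()
        for j, e in enumerate(Bexp):
            M[j, k] = d.get(e, 0)
    return M

# real eigenvalues of M_x give the x-coordinates of the points of V(I)
Mx = np.array(mult_matrix(x).tolist(), dtype=float)
xs = sorted(round(ev.real) for ev in np.linalg.eigvals(Mx) if abs(ev.imag) < 1e-9)
```

For this ideal the basis has four monomials and `xs` evaluates to `[-2, -1, 1, 2]`, the $x$-coordinates of the four points of the variety.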
\begin{rem}
Let $m$ be the number of elements of the basis $\mathcal{B}$. The dimension of the matrices
$M_{x_i}^\mathfrak{A}$ is $m\times m$, for all $i\in\{1,\dots,n\}$. Therefore, the sets
$\mathcal{E}_i$, defined at Step~\ref{step:Ei}, for $i=1,\dots,n$, contain at most $m$ elements each. Thus, Step~\ref{step:sol}
of Algorithm~\ref{alg:solutionMatrix} can be carried out by evaluating the generators of $\mathcal{I}$
in at most $m^n$ iterations.
\end{rem}
\subsection{The Rational Univariate Representation\label{sec:RUR}}
In this section, the Rational Univariate Representation (briefly, RUR), which can be used to
solve a system of polynomial equations, is recalled, following the exposition in \cite{rouillier1999solving}.
Let $\mathbb{C}$ be the field of complex numbers (which is the algebraic closure of $\mathbb{R}$).
Let $\mathcal{I}$ be a zero dimensional ideal in $\mathbb{R}[x]$.
Let $\bm{V}_{\mathbb{C}}(\mathcal{I}) \subset \mathbb{C}^n$ be defined as
\begin{equation*}
\bm{V}_{\mathbb{C}}(\mathcal{I})=\{x\in\mathbb{C}^n:\;q(x)=0,\,\forall q\in\mathcal{I} \}.
\end{equation*}
A polynomial $q\in\mathbb{R}[x]$ is called
\emph{separating with respect to $\bm{V}_{\mathbb{C}}(\mathcal{I})$} if $q(\alpha)\neq q(\beta)$,
for all $\alpha,\,\beta \in\bm{V}_{\mathbb{C}}(\mathcal{I})$ such that $\alpha\neq \beta$.
Let $\mathcal{I}$ be a zero dimensional ideal in $\mathbb{R}[x]$, let $h$ be a polynomial in $\mathbb{R}[t]$
and let $\phi:\bm{V}_{\mathbb{C}}(\mathcal{I})\rightarrow \bm{V}_{\mathbb{C}} ( \langle h \rangle ) $ be an
isomorphism represented by a polynomial $q\in\mathbb{R}[t]$, i.e.,
$\phi(\alpha)=q(\alpha)$, for any $\alpha\in\bm{V}_{\mathbb{C}}(\mathcal{I})$.
The pair $(
\phi ,\, h)$ is a \emph{Univariate Representation of $\bm{V}_{\mathbb{C}}(\mathcal{I})$} if,
for any $\alpha\in\bm{V}_{\mathbb{C}}(\mathcal{I})$, one has that $\mu(\alpha)=\mu(q(\alpha))$,
where the symbol $\mu(\cdot)$ denotes the multiplicity of its argument.
On the other hand, letting $\mathcal{I}$ be a zero dimensional ideal in $\mathbb{R}[x]$, letting $f$ be any
polynomial in $\mathbb{R}[x]$, letting $\mathcal{B}$ be a monomial basis of the algebra $\mathfrak{A}=\mathbb{R}[x]
/\mathcal{I}$, letting $M_f^{\mathfrak{A}}$ be the matrix representing the linear map $m_f^{\mathfrak{A}}$
over the basis $\mathcal{B}$ and letting $\chi_f$ be the characteristic polynomial of the matrix $M_f^\mathfrak{A}$,
define, for any $\nu\in\mathbb{R}[x]$, the polynomial
\begin{equation*}
g_f(\nu,t)=\sum\limits_{\alpha\in\bm{V}_{\mathbb{C}}(\mathcal{I})} \mu(\alpha)\nu(\alpha)
\prod\limits_{y\in \bm{V}_{\mathbb{C}}(\langle \chi_f \rangle),\,y\neq f(\alpha)} (t-y).
\end{equation*}
The \emph{$f$--representation of $\mathcal{I}$} is the polynomial $(n+2)$--tuple
\begin{equation*}
\{\chi_f(t),\,g_f(1,t),\,g_f(x_1,t),\cdots,g_f(x_n,t) \},
\end{equation*}
where $\chi_f\in\mathbb{R}[t]$, and, if $f$ separates $\bm{V}_{\mathbb{C}}(\mathcal{I})$, the $f$--representation of $\mathcal{I}$
is called the \emph{Rational Univariate Representation}
\emph{of $\mathcal{I}$ associated to $f$}.
If one is able to compute the RUR of $\mathcal{I}$ associated to $f$, then, the set $\bm{V}_{\mathbb{C}}
(\mathcal{I})$ can be obtained by computing the complex solutions to
\begin{equation}
\chi_f(t)=0.
\label{eq:complexUnivariate}
\end{equation}
Letting $\mathcal{T}$ be the set of all the complex solutions to \eqref{eq:complexUnivariate} in $t$, one has that,
by Theorem 3.1 of \cite{rouillier1999solving},
\begin{equation*}
\bm{V}_{\mathbb{C}}(\mathcal{I})=\{[\begin{array}{ccc}
\frac{g_f(x_1,t)}{g_f(1,t)} & \cdots & \frac{g_f(x_n,t)}{g_f(1,t)}
\end{array}]^\top,\,\forall t\in\mathcal{T} \}.
\end{equation*}
Thus, by construction, the affine variety $\bm{V}(\mathcal{I})\subset\mathbb{R}^n$
can be computed as:
$
\bm{V}(\mathcal{I})=\bm{V}_{\mathbb{C}}(\mathcal{I})\cap \mathbb{R}^n
$.
However, in \cite{rouillier1999solving}, an alternative method to compute
$\bm{V}(\mathcal{I})$ is given. Let $\mathcal{T}_{\mathbb{R}}$ be the set
of the real solutions to \eqref{eq:complexUnivariate}, i.e. $\mathcal{T}_{\mathbb{R}}
= \mathcal{T}\cap\mathbb{R}$. One has that
\begin{equation*}
\bm{V}(\mathcal{I})=\{[\begin{array}{ccc}
\frac{g_f(x_1,t)}{g_f(1,t)} & \cdots & \frac{g_f(x_n,t)}{g_f(1,t)}
\end{array}]^\top,\,\forall t\in\mathcal{T}_{\mathbb{R}} \}.
\end{equation*}
A test to compute the number of
real roots, with their multiplicities, of a univariate polynomial on
$\mathbb{R}$ is given in \cite{yang1999recent}.
Alternatively, Sturm's Test \cite{gonzalez1998sturm} can be used.
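For instance, Sturm-sequence-based real-root isolation is available in SymPy; in the following sketch the polynomial is an illustrative eliminant chosen for this example:

```python
import sympy as sp

t = sp.symbols('t')
eta = sp.Poly(t**4 - 5*t**2 + 4, t)  # illustrative eliminant, roots ±1, ±2
seq = sp.sturm(eta)                   # the Sturm sequence of eta
roots = sp.real_roots(eta)            # isolates and returns the real roots
```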
In \cite{rouillier1999solving}, an algorithm is given to compute a RUR of a
given zero dimensional ideal. Such an algorithm is not reported here for space
reasons. An implementation of this algorithm in the CAS
Maple is available through the command \textsf{RationalUnivariateRepresentation}
\cite{GRBpackage,RURpackage}.
Thus, the set of real solutions of a system of polynomial
equations can be computed by using Algorithm~\ref{alg:solutionRUR}.
\begin{algorithm}[htb]
\begin{algorithmic}[1]
\REQUIRE A zero dimensional ideal $\mathcal{I}$ in $\mathbb{R}[x]$.
\ENSURE The points in $\bm{V}(\mathcal{I})\subset\mathbb{R}^n$.
\STATE Compute the RUR $\{\chi_f,\,g_f(1,t),\cdots,g_f(x_n,t)\}$ of $\mathcal{I}$.
\STATE Find the number $d$ of real roots of $\chi_f$.
\STATE Compute the set $\mathcal{T}_{\mathbb{R}}$ composed of the $d$ real roots of $\chi_f$.
\STATE Define $\mathcal{S}=\{[\begin{array}{ccc}
\frac{g_f(x_1,t)}{g_f(1,t)} & \cdots & \frac{g_f(x_n,t)}{g_f(1,t)}
\end{array}]^\top,\,\forall t\in\mathcal{T}_{\mathbb{R}} \}$.
\RETURN $\mathcal{S}=\bm{V}(\mathcal{I})$.
\end{algorithmic}
\caption{Solution of polynomial equations through RUR.\label{alg:solutionRUR}}
\end{algorithm}
\begin{rem}
Note that, before using Algorithm~\ref{alg:solutionRUR}, one has to verify that
the ideal $\mathcal{I}$ is zero dimensional. In \cite{cox1998using} an efficient
method to check this property of the ideal $\mathcal{I}$ is given.
\end{rem}
\subsection{The real Polynomial Univariate Representation\label{sec:GRNsol}}
In this section, an alternative method, with respect to the ones presented in Section~\ref{sec:eigenvalues}
and in Section~\ref{sec:RUR}, is given. Such a method is based on the
definition of an `extended' ideal $\mathcal{I}_t=\mathcal{I}\cup \langle h\rangle \subset\mathbb{R}[x,t]$,
where $h$ is a polynomial in $x$ and $t$, and on the computation of the Groebner basis of $\mathcal{I}_t$,
according to the lex ordering.
For a given ideal $\mathcal{J}\subset\mathbb{R}[x,t]$, let
\begin{equation*}
\bm{V}^t(\mathcal{J})=\{(x,\,t)\in\mathbb{R}^n\times\mathbb{R}:\;
g(x,t)=0,\,\forall g\in\mathcal{J} \},
\end{equation*}
and let the symbol $\bm{V}^t_{\mathbb{C}}(\mathcal{J})$ denote the set
\begin{equation*}
\bm{V}^t_{\mathbb{C}}(\mathcal{J})=\{(x,\,t)\in\mathbb{C}^n\times\mathbb{C}:\;
g(x,t)=0,\,\forall g\in\mathcal{J} \}.
\end{equation*}
The \emph{projection map} is defined as the map $\pi_t:\mathbb{C}^n\times \mathbb{C}\rightarrow \mathbb{C}$,
which sends each pair $(\bar{x},\,\bar{t})\in\bm{V}^t_{\mathbb{C}}(\mathcal{J})$ to $\bar{t}$.
The following theorem characterizes the projection map
and can be proved by a slight modification of the proof of the Closure Theorem
given in \cite{cox1992ideals}.
\begin{thm}
\label{thm:closure}
If the ideal $\mathcal{I}_t$ in $\mathbb{R}[x,t]$ is zero dimensional,
one has that, letting $\mathcal{I}_t^n=\mathcal{I}_t\cap\mathbb{R}[t]$,
\begin{equation*}
\pi_t(\bm{V}^t_{\mathbb{C}}(\mathcal{I}_t)) = \{t\in\mathbb{C}:\;g(t)=0,\,\forall g \in \mathcal{I}_t^n\}.
\end{equation*}
\end{thm}
Consider the following assumption.
\begin{ass}
Let $\mathcal{I}\subset\mathbb{R}[x]$ be zero dimensional and let $s\in\mathbb{R}[x]$ be a polynomial
separating with respect to $\bm{V}_{\mathbb{C}}(\mathcal{I})\subset\mathbb{C}^n$.
Let $h$ be the following polynomial in $\mathbb{R}[x,t]$:
\begin{equation*}
h(x,t) = t-s(x),
\end{equation*}
where $t$ is an auxiliary variable.
\label{ass:basic}
\end{ass}
The following two lemmas characterize the ideal $\mathcal{I}_t=\mathcal{I}\cup\langle h \rangle$
and the elimination ideal $\mathcal{I}_t^n:=\mathcal{I}_t\cap\mathbb{R}[t]$.
\begin{lem}
Let Assumption~\ref{ass:basic} hold.
Let $\mathcal{C}=\{c_1,\dots,c_\ell\}$ be any basis of the ideal $\mathcal{I}$. The ideal
$\mathcal{I}_t=\langle c_1,\dots,c_\ell,h \rangle\subset\mathbb{R}[x,t]$
is zero dimensional.
\label{lem:zeroDim}
\end{lem}
\begin{proof}
Consider the ideal $\mathcal{I}_t$. By
\cite{cox1992ideals}, one has that
$\mathcal{I}_t=\mathcal{I}\cup\langle h \rangle$. Hence, since
$\bm{V}^t_{\mathbb{C}}(\mathcal{I}_t)=\bm{V}^t_{\mathbb{C}}(\mathcal{I})\cap
\bm{V}^t_{\mathbb{C}}( \langle h \rangle )$ \cite{cox1992ideals} and since $s$ is separating
with respect to $\bm{V}_{\mathbb{C}}(\mathcal{I})$, one has that
$\bm{V}^t_{\mathbb{C}}(\mathcal{I}_t)$ is a finite set.
Thus, by Theorem~\ref{thm:finiteness}, $\mathcal{I}_t$ is zero dimensional.
\end{proof}
\begin{lem}
\label{lem:polynomialGRB}
Let Assumption~\ref{ass:basic} hold. Let $\mathcal{G}_t$ be a Groebner basis of
the ideal $\mathcal{I}_t=\mathcal{I}\cup\langle h \rangle$, with respect to any lex ordering,
with $x_i>_lt$, for $i=1,\dots,n$.
There exists a polynomial $\eta\in\mathcal{G}_t\cap\mathbb{R}[t]$ different from the zero polynomial,
and the roots in $t$ of the polynomial $\eta$ are in
\begin{equation*}
\mathcal{T}:=\{t\in\mathbb{C}:\;t=s(x),\,x\in\bm{V}_{\mathbb{C}}(\mathcal{I}) \}.
\end{equation*}
\end{lem}
\begin{proof}
By Lemma~\ref{lem:zeroDim}, the ideal $\mathcal{I}_t$ is zero dimensional.
Hence, by Theorem~\ref{thm:finiteness}, one has that there exists a polynomial $\eta$
different from the zero polynomial, such that $\eta\in\mathcal{G}_t\cap\mathbb{R}[t]$,
and, by the Elimination Theorem \cite{cox1992ideals}, one has that $\eta$ is a Groebner
basis of $\mathcal{I}_t^n:=\mathcal{I}_t\cap\mathbb{R}[t]$.
Thus, by considering that $\pi_t(\bm{V}^t_{\mathbb{C}}(\mathcal{I}_t))=
\pi_t(\bm{V}^t_{\mathbb{C}}(\mathcal{I})\cap\bm{V}^t_{\mathbb{C}}( \langle h \rangle ))=
\mathcal{T}$, by Theorem~\ref{thm:closure}, one has that
$\mathcal{T} = \{t\in\mathbb{C}:\;g(t)=0,\,\forall g \in\langle \eta \rangle \}$.
\end{proof}
Let the lex ordering, with $x_1>_lx_2>_l\cdots>_lx_n>_lt$, be fixed.
A \emph{real Polynomial Univariate Representation} (briefly, \emph{PUR})
\emph{of the ideal $\mathcal{I}$} is
an $(n+1)$--tuple
\begin{equation*}
\{\eta(t),\,g_n(x_n,t),g_{n-1}(x_{n-1},t),\dots,g_2(x_2,t),\,g_1(x_1,t) \},
\end{equation*}
which is such that $\eta\in\mathbb{R}[t]$, $g_i\in\mathbb{R}[x_i,t]$,
$\mathrm{LT}(g_i)=x_i$, for $i=1,\dots,n$, and, letting $\mathcal{T}_\mathbb{R}$
be the set of all the real solutions to $\eta(t)=0$ in $t$,
\begin{equation*}
\bm{V}(\mathcal{I})=\{
x\in\mathbb{R}^n:\;g_i(x_i,t)=0,\,i=1,\dots,n,\,\text{for some }t\in\mathcal{T}_{\mathbb{R}}
\}.
\end{equation*}
The following theorem gives
a constructive method to compute the real PUR of a given ideal $\mathcal{I}$.
\begin{thm}
Let Assumption~\ref{ass:basic} hold.
Let the lex ordering, with $x_1>_lx_2>_l\cdots>_lx_n>_lt$, be fixed.
The Groebner basis $\mathcal{G}_t$
of the ideal $\mathcal{I}_t=\mathcal{I}\cup\langle h \rangle$ is a real PUR
of $\mathcal{I}$.
\label{thm:PUR}
\end{thm}
\begin{proof}
By Lemma~\ref{lem:polynomialGRB}, one has that there exists an $\eta
\in\mathcal{G}_t\cap\mathbb{R}[t]$ different from the zero polynomial, whose roots
are in the set $\mathcal{T}=\{t\in\mathbb{C}:\;t=s(x),\,x\in\bm{V}_{\mathbb{C}}(\mathcal{I}) \}$.
Let $\mathcal{T}_{\mathbb{R}}=\mathcal{T}\cap\mathbb{R}$ be the set of the real roots of $\eta$.
Let $\pi_{x_i}^{\mathbb{R}}$ denote the \emph{projection map} $\pi_{x_i}^{\mathbb{R}}:\mathbb{R}^n\times \mathbb{R}\rightarrow
\mathbb{R}$, which maps each pair $(\bar{x},\,\bar{t})$ to $\bar{x}_i$, for $i=1,\dots,n$.
By the Lagrange Interpolation Formula \cite{sauer1995multivariate}, there exists a polynomial
$\varrho_i(t)\in\mathbb{R}[t]$, which is such that $\pi_{x_i}^\mathbb{R}((\bar{x},\bar{t}))=
\varrho_i(\bar{t})$, for all the couples $(\bar{x},\bar{t})\in\bm{V}^t(\mathcal{I}_t)$, for $i=1,\dots,n$.
Hence, the ideal $\mathcal{I}_t$
contains the polynomials $w_i=x_i-\varrho_i(t)$, for $i=1,\dots,n$, and, by the definition of a Groebner basis,
one has that a Groebner basis of $\mathcal{I}_t$ is $\{\eta,w_n,w_{n-1},\dots,w_1 \}$ and,
by construction, this is a real PUR of the ideal $\mathcal{I}$.
\end{proof}
By Theorem~\ref{thm:PUR}, Algorithm~\ref{alg:solutionPUR} is able to compute the real
solutions to a system of equations.
\begin{algorithm}[htb]
\begin{algorithmic}[1]
\REQUIRE A zero dimensional ideal $\mathcal{I}$ in $\mathbb{R}[x]$.
\ENSURE The points in $\bm{V}(\mathcal{I})\subset\mathbb{R}^n$.
\STATE Define a random polynomial $s(x)\in\mathbb{R}[x]$ and the
polynomial $h(x,t)=t-s(x)$\label{step:randPol}.
\STATE Letting $\{c_1,\dots,c_\ell\}$ be a set of generators of the ideal $\mathcal{I}$,
define the ideal $\mathcal{I}_t=\langle c_1,\dots,c_\ell,h \rangle$.
\STATE Compute a Groebner basis $\mathcal{G}_t$ of the ideal $\mathcal{I}_t$,
according to lex ordering, with $x_1>_lx_2>_l\cdots>_lx_n>_lt$.
\STATE Verify that $\mathcal{G}_t$ is a real PUR, otherwise return to step~\ref{step:randPol}.
\STATE Let $\mathcal{G}_t=\{\eta(t),\,g_n(x_n,t),\dots,\,g_1(x_1,t) \}$, where
$g_i=x_i-\varrho_i(t)$, with $\varrho_i\in\mathbb{R}[t]$, for $i=1,\dots,n$.
\STATE Find the number $d$ of real solutions of $\eta(t)=0$.
\STATE \label{step:solpol}Compute the set $\mathcal{T}_{\mathbb{R}}$ of the $d$ real solutions of $\eta(t)=0$.
\STATE Define $\mathcal{S}=\{[\begin{array}{ccc}
\varrho_1(t) & \cdots & \varrho_n(t)
\end{array}]^\top,\;\forall t \in\mathcal{T}_{\mathbb{R}} \}$.
\RETURN $\mathcal{S}=\bm{V}(\mathcal{I})$.
\end{algorithmic}
\caption{Solution of polynomial equations through PUR.\label{alg:solutionPUR}}
\end{algorithm}
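Algorithm~\ref{alg:solutionPUR} can be sketched as follows in SymPy (the helper `pur_solve` and the test system are illustrative assumptions of this example; the random linear combination plays the role of the separating polynomial $s$ of Assumption~\ref{ass:basic}):

```python
import random
import sympy as sp

def pur_solve(polys, gens, tries=5):
    """Sketch of the PUR-based solver: add t - s(x) for a random linear s(x),
    compute a lex Groebner basis with t last, and back-substitute the real
    roots of the univariate eliminant eta(t)."""
    t = sp.Symbol('t')
    for _ in range(tries):
        s = sum(random.randint(1, 10) * g for g in gens)  # random candidate s
        G = sp.groebner(list(polys) + [t - s], *gens, t, order='lex').exprs
        eta = next(g for g in G if g.free_symbols <= {t})  # eliminant in R[t]
        rho = {}
        for g in G:
            vs = g.free_symbols & set(gens)
            if len(vs) == 1:
                (xi,) = vs
                if sp.degree(g, xi) == 1:          # shape x_i - rho_i(t)
                    rho[xi] = sp.solve(g, xi)[0]
        if len(rho) != len(gens):
            continue  # s was (probably) not separating: retry
        return [tuple(float(rho[v].subs(t, r)) for v in gens)
                for r in sp.real_roots(eta)]
    raise RuntimeError("no separating polynomial found")

# illustrative usage on the toy system x1^2 + x2^2 = 5, x1*x2 = 2
x1, x2 = sp.symbols('x1 x2')
sols = pur_solve([x1**2 + x2**2 - 5, x1*x2 - 2], (x1, x2))
```

For this system, the four real solutions $(\pm 1,\pm 2)$ and $(\pm 2,\pm 1)$ are recovered.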
\begin{rem}
It can be easily proved that a randomly chosen polynomial $s(x)$ is separating with
probability one (i.e., the monomial coefficients that make the polynomial
$s$ not separating form a set of measure zero). Hence, with probability one,
Algorithm~\ref{alg:solutionPUR} requires a finite number of iterations.
\end{rem}
\begin{rem}
By \cite{alonso1996zeros}, one has that the numerical computation of the roots of the polynomial $\eta$, obtained by using
Algorithm~\ref{alg:solutionPUR}, is, generally, numerically more complex than the computation of the roots of
the polynomial $\chi_f$, obtained by using Algorithm~\ref{alg:solutionRUR}.
However, since the representation of $\bm{V}(\mathcal{I})$ obtained by using Algorithm~\ref{alg:solutionPUR}
is polynomial,
it can be preferred to the rational one obtained by using Algorithm~\ref{alg:solutionRUR}.
\end{rem}
\begin{rem}
\label{rem:param}
Note that the computations required by Algorithm~\ref{alg:solutionMatrix} and Algorithm~\ref{alg:solutionPUR}
can be carried out also when some (or, possibly, all the) coefficients of the polynomials $p_1,\dots,p_s$ are functions of
some parameters.
However, even if the computation of the matrices $M_{x_i}^\mathfrak{A}$ is generally faster than
the computation of the Groebner basis $\mathcal{G}_t$, the computations to be carried out
at Step~\ref{step:sol} of Algorithm~\ref{alg:solutionMatrix} can be very expensive; hence, in many
cases of practical interest, Algorithm~\ref{alg:solutionPUR} may be preferred, because the
computations needed at Step~\ref{step:solpol} of Algorithm~\ref{alg:solutionPUR}
can generally be carried out faster.
\end{rem}
\section{Solution of systems of polynomial inequalities \label{sec:ineq}}
In this section, a procedure to compute, if any, a solution of a system of polynomial
inequalities is given. This method, based on penalty variables and reported, e.g., in \cite{anderson1977output},
reduces the problem to the solution of a set
of polynomial equations, which can be computed with the algorithms
of Section~\ref{sec:solPol}.
Consider the following problem.
\begin{problem}
Let $x=[\begin{array}{ccc}
x_1 & \cdots & x_n
\end{array}]^\top$.
Let the set of polynomials $\mathcal{P}=\{p_1,\dots,p_s\}\subset\mathbb{R}[x]$ be given.
\begin{enumerate}[(a)]
\item Find, if any, a point $\bar{x}\in\bm{W}(p_1,\dots,p_s)$.\label{prob:a}
\item Let $\{p_1,\dots,p_\ell\}\subseteq\mathcal{P}$. Find, if any, a point
$\bar{x}\in\bm{W}(p_1,\dots,p_s)$, $\bar{x}\notin\bm{V}(p_1,\dots,p_\ell)$.\label{prob:b}
\end{enumerate}
\label{prob:dis}
\end{problem}
Note that a solution $\bar{x}$ to Problem~\ref{prob:dis}~\eqref{prob:a} is a solution of
the system of polynomial inequalities $p_i(x)\geq 0$, for $i=1,\dots,s$, whereas, a
solution $\bar{x}$ to Problem~\ref{prob:dis}~\eqref{prob:b} is a solution of the system of
polynomial inequalities $p_i(x)>0$, for $i=1,\dots,\ell$, and $p_i(x)\geq 0$,
for $i=\ell+1,\dots,s$.
Let $v=[\begin{array}{ccc}
v_1 & \cdots & v_s
\end{array}]^\top\in\mathbb{R}^s$ and let
$w=[\begin{array}{ccc}
w_1 & \cdots & w_s
\end{array}]^\top\in\mathbb{R}^s$ be auxiliary variables.
By \cite{anderson1977output}, Problem~\ref{prob:dis}~\eqref{prob:a} can be solved
with the following procedure:
\begin{enumerate}
\item Let $\alpha_i,\,\beta_i$, for $i=1,\dots,n$, and $\gamma_k,\,\delta_k$, for $k=1,\dots,s$,
be fixed random real numbers.
\item Define the polynomial function $J\in\mathbb{R}[x,w]$:
\begin{equation}
J=\sum_{i=1}^{n} \alpha_i(x_i-\beta_i)^2+\sum_{k=1}^s\gamma_k(w_k-\delta_k)^2.
\label{eq:J}
\end{equation}
\item Define the polynomial function $H\in\mathbb{R}[x,v,w]$\label{step:Hdef}
\begin{equation*}
H=J+\sum_{k=1}^{s}v_k(p_k-w_k^2).
\end{equation*}
\item Solve the following
polynomial system of equations \label{step:solve}
\begin{subequations}
\begin{eqnarray}
\frac{\partial H(x,v,w)}{\partial x} & = & 0,\\
\frac{\partial H(x,v,w)}{\partial v} & = & 0,\\
\frac{\partial H(x,v,w)}{\partial w} & = & 0,
\end{eqnarray}
\label{eq:Astat}%
\end{subequations}
in $[\begin{array}{ccc}
x^\top & v^\top & w^\top
\end{array}]^\top\in\mathbb{R}^{n+2s}$.
\item Let $\pi_x:\mathbb{R}^{n+2s}\rightarrow\mathbb{R}^n$ be the map which maps each
vector $[\begin{array}{ccc}
x^\top & v^\top & w^\top
\end{array}]^\top\in\mathbb{R}^{n+2s}$ to $x\in\mathbb{R}^n$ and let $\mathcal{S}$ be
the set of the solutions to \eqref{eq:Astat}. A solution to
Problem~\ref{prob:dis}~\eqref{prob:a} is given by
\begin{equation*}
\{x\in\mathbb{R}^n:\;x=\pi_x(\zeta),\,\forall\zeta\in\mathcal{S} \}.
\end{equation*}
\end{enumerate}
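The procedure above can be sketched in SymPy on a one-variable toy instance (the inequality $1-x^2\geq 0$ and the fixed values of $\alpha_1$, $\beta_1$, $\gamma_1$, $\delta_1$ are illustrative assumptions of this example; for simplicity, the stationary system is solved here with SymPy's generic solver rather than with the algorithms of Section~\ref{sec:solPol}):

```python
import sympy as sp

x, v, w = sp.symbols('x v w')
p = 1 - x**2                                    # toy inequality p(x) >= 0
# J with fixed (illustrative) constants alpha=1, beta=1/2, gamma=1, delta=1/3
J = (x - sp.Rational(1, 2))**2 + (w - sp.Rational(1, 3))**2
H = J + v * (p - w**2)                          # the function H of step 3)
stationary = [sp.diff(H, z) for z in (x, v, w)] # the system (4) of step 4)
sols = sp.solve(stationary, [x, v, w], dict=True)
# the x-component of every real stationary point solves the inequality
xs = [s[x] for s in sols if s[x].is_real]
assert xs and all(1 - xi**2 >= 0 for xi in xs)
```

For these constants, the stationary points have $x=\pm 3/\sqrt{13}$, both of which indeed satisfy $1-x^2\geq 0$.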
On the other hand, again by \cite{anderson1977output}, a solution to Problem~\ref{prob:dis}~\eqref{prob:b}
can be obtained by using the same procedure used to solve Problem~\ref{prob:dis}~\eqref{prob:a},
by changing only step~\ref{step:Hdef}):
\begin{enumerate}
\setcounter{enumi}{2}
\item Define the polynomial function $H\in\mathbb{R}[x,v,w]$
\begin{equation*}
H=J+\sum_{k=1}^{\ell}v_k(w_k^2p_k-1)+\sum_{k=\ell+1}^{s}v_k(p_k-w_k^2).
\end{equation*}
\end{enumerate}
\begin{rem}
\label{rem:nosolutions}
By \cite{anderson1977output}, one has that there exists a solution to \eqref{eq:Astat}
if and only if there exists a solution to Problem~\ref{prob:dis}.
\end{rem}
\begin{rem}
Note that the solution to Problem~\ref{prob:dis}~\eqref{prob:a} (respectively, Problem~\ref{prob:dis}~\eqref{prob:b})
obtained by using the procedure given in \cite{anderson1977output} corresponds to computing the
stationary points of the function $J$
in \eqref{eq:J}, subject to the constraints $p_i(x)=w_i^2$, for $i=1,\dots,s$
(respectively, $w_i^2p_i(x)=1$, for $i=1,\dots,\ell$, and $p_j(x)= w_j^2$, for $j=\ell+1,\dots,s$),
where $w_i^2\geq 0$, for $i=1,\dots,s$. Therefore, since a point
$[\begin{array}{ccc}
\bar{x}^\top & \bar{v}^\top & \bar{w}^\top
\end{array}]^\top \in\mathcal{S}$ is such that $p_i(\bar{x})=\bar{w}_i^2\geq 0$
(respectively, $\bar{w}_i^2p_i(\bar{x})=1$, for $i=1,\dots,\ell$, and $p_j(\bar{x})=\bar{w}_j^2$, for $j=\ell+1,\dots,s$),
one has that $\pi_x([\begin{array}{ccc}
\bar{x}^\top & \bar{v}^\top & \bar{w}^\top
\end{array}])=\bar{x}$ is a solution to Problem~\ref{prob:dis}~\eqref{prob:a} (respectively, Problem~\ref{prob:dis}~\eqref{prob:b}).
\end{rem}
By \cite{anderson1977output} one has that the ideal $\langle \frac{\partial H(x,v,w)}{\partial x},
\frac{\partial H(x,v,w)}{\partial v},
\frac{\partial H(x,v,w)}{\partial w} \rangle$ is, for almost any choice of
$\alpha_i,\,\beta_i$, $\gamma_k,\,\delta_k$, zero-dimensional.
Hence, step \ref{step:solve}) of this procedure can actually be carried out by using one of the three procedures
given in Algorithm~\ref{alg:solutionMatrix}, Algorithm~\ref{alg:solutionRUR} or Algorithm~\ref{alg:solutionPUR}
in Section~\ref{sec:solPol}.
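The essence of step~\ref{step:solve}) — not the paper's algorithms themselves, only the underlying idea — can be sketched on a toy zero-dimensional system: a lex Groebner basis is triangular, so the solutions can be read off by back-substitution.

```python
import sympy as sp

# Toy zero-dimensional system standing in for the stationary equations.
x, y = sp.symbols('x y')
G = sp.groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')

# With lex order x > y, the last basis element is univariate in y.
g_y = G.exprs[-1]
roots = sp.solve(g_y, y)
# back-substitution through the triangular basis: here x = y
sols = [(r, r) for r in roots]
```

Every computed pair satisfies the original system, which is what the solvers of Section~\ref{sec:solPol} certify in general.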
\begin{rem}
Note that, as pointed out in Remark~\ref{rem:param}, Algorithm~\ref{alg:solutionMatrix} and
Algorithm~\ref{alg:solutionPUR} can also be used when the coefficients of the polynomials depend on
some parameters. Hence, Problem~\ref{prob:dis} can also be solved with parametric coefficients
of the polynomials $p_1,\dots,p_s$.
Algorithm~\ref{alg:solutionPUR} may be preferred to Algorithm~\ref{alg:solutionMatrix},
because the computations needed to carry out Step~\ref{step:solpol} of the former may
be faster than those needed to carry out Step~\ref{step:sol} of the latter.
\end{rem}
\begin{example}
\label{example:inters}
Let $n=2$, $x=[\begin{array}{cc}
x_1 & x_2
\end{array}]^\top$, $s=2$,
$p_1(x)= -(16-x_1^2)x_2^2+(-16+x_1^2+8x_2)^2$,
$p_2(x)=5x_1^2-x_1^4-4x_2^2+x_2^4$.
Let $v=[\begin{array}{cc}
v_1 & v_2
\end{array}]^\top$, $w=[\begin{array}{cc}
w_1 & w_2
\end{array}]^\top$.
The goal is to compute a point $\bar{x}\in\bm{W}(p_1,p_2)\subset\mathbb{R}^2$.
By using the procedure given in Section~\ref{sec:ineq} one has to solve the following
set of polynomial equations:
\begin{equation}
q_j(x,v,w)=0,\quad \text{for }j=1,\dots,6,\label{eq:systex1}%
\end{equation}
where $q_1 = v_1 (10 x_1-4 x_1^3)+2 v_2 x_1 ((w_2^2-x_2)^2+2 (-8
w_2^2+x_1^2+8 (x_2-2)))+2 \alpha_1 (x_1-\beta_1)$,
$q_2=4 v_1 ((w_1^2-x_2)^2-2) (x_2-w_1^2)-2 v_2
((x_1^2+48) w_2^2-48 x_2-x_1^2 (x_2+8)+128)+2 \alpha_2 (x_2-\beta_2)$,
$q_3 = -x_1^4+5 x_1^2+(w_1^2-x_2)^4-4 (w_1^2-x_2)^2$,
$q_4= (-8 w_2^2+x_1^2+8 (x_2-2))^2+(x_1^2-16)
(w_2^2-x_2)^2$,
$q_5 = 8 v_1 w_1 (2-(w_1^2-x_2)^2) (x_2-w_1^2)+2 \gamma_1
(w_1-\delta_1)$,
$q_6 = 4 v_2 w_2 ((x_1^2+48) w_2^2-48 x_2-x_1^2
(x_2+8)+128)+2 \gamma_2 (w_2-\delta_2) $ and
$\alpha_1,\,\alpha_2,\,\beta_1,\,\beta_2,\,\gamma_1,\,\gamma_2,\,
\delta_1,\,\delta_2$ are random values.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{Plotter.pdf}
\caption{Solutions to the problem of Example~\ref{example:inters}, for 100 different values of the random constants.\label{fig:exampletoy}}
\end{figure}
The three algorithms given in Section~\ref{sec:solPol} have been used to
solve such a problem, with the same random choices of the coefficients $\alpha_1,\,\alpha_2,\,\beta_1,\,
\beta_2,\,\gamma_1,\,\gamma_2,\,\delta_1\text{ and }\delta_2$.
Figure~\ref{fig:exampletoy} shows the results obtained by using these algorithms, for 100
different choices of the random parameters $\alpha_1,\,\alpha_2,\,\beta_1,\,
\beta_2,\,\gamma_1,\,\gamma_2,\,\delta_1\text{ and }\delta_2$.
As the figure shows, these procedures are able to
solve such a set of inequalities.
\end{example}
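Assuming $\bm{W}(p_1,p_2)$ denotes the semialgebraic set $\{x\in\mathbb{R}^2:\;p_1(x)\geq 0,\,p_2(x)\geq 0\}$, a coarse grid scan already confirms that the feasible set of Example~\ref{example:inters} is nonempty, consistently with Figure~\ref{fig:exampletoy}:

```python
import numpy as np

# p1, p2 transcribed from Example (inters); the interpretation of
# W(p1, p2) as {x : p1(x) >= 0, p2(x) >= 0} is an assumption here.
def p1(x1, x2):
    return -(16 - x1**2) * x2**2 + (-16 + x1**2 + 8*x2)**2

def p2(x1, x2):
    return 5*x1**2 - x1**4 - 4*x2**2 + x2**4

xs = np.linspace(-3.0, 3.0, 61)
feasible = [(a, b) for a in xs for b in xs
            if p1(a, b) >= 0 and p2(a, b) >= 0]
# e.g. the point (1, 1) is feasible, since p1(1,1) = 34 and p2(1,1) = 1
```

A grid scan of course only exhibits candidate points; the algebraic procedures of the paper find exact solutions of \eqref{eq:systex1}.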
\section{Application to an LPV SOF stabilization problem\label{sec:LPVSOF}}
In this section, the techniques given in this paper are used to solve
the Static Output Feedback problem \cite{syrmos1997static}
for a Linear Parametrically--Varying
system \cite{mohammadpour2012control}.
Consider the following missile model \cite{nichols1993gain,scherer1997parametrically}
\begin{subequations}
\begin{eqnarray}
\dot{\alpha} & = & \kappa_\alpha M((a_n\alpha^2+b_n\alpha+c_n) \alpha + d_n\delta)+q,\\
\dot{q} & = & \kappa_q M((a_m\alpha^2+b_m\alpha+c_m) \alpha + d_m\delta),
\end{eqnarray}
\label{eq:LPV}%
\end{subequations}
where $\alpha$ is the angle of attack, $q$ is the pitch rate, $M$ is the Mach
number of the missile, $\kappa_\alpha$, $\kappa_q$, $a_n$, $b_n$, $c_n$, $d_n$, $a_m$, $b_m$, $c_m$, $d_m$
are known aerodynamic coefficients and $\delta$ is the tail fin deflection, which is
considered as the control input. It is assumed that the only available measurement is the
angle $\alpha$.
System~\eqref{eq:LPV} can be rewritten as the following LPV system:
\begin{subequations}
\begin{eqnarray}
\dot{\alpha} & = & \theta_1 \alpha +q +\kappa_\alpha M d_n\delta\\
\dot{q} & = & \theta_2 \alpha +\kappa_q M d_m\delta,
\end{eqnarray}
\label{eq:LPVr}%
\end{subequations}
where $\theta_1 = \kappa_\alpha M(a_n\alpha^2+b_n\alpha+c_n)$ and
$\theta_2 = \kappa_q M(a_m\alpha^2+b_m\alpha+c_m)$.
Hence, letting $\theta = [\begin{array}{cc}
\theta_1 & \theta_2
\end{array}]^\top$,
\begin{equation*}
A(\theta) = \left[\begin{array}{cc}
\theta_1 & 1 \\
\theta_2 & 0
\end{array}\right],\;
B = \left[\begin{array}{c}
\kappa_\alpha M d_n\\
\kappa_q M d_m
\end{array}\right],\;
C = [\begin{array}{cc}
1 & 0
\end{array}],
\end{equation*}
$x=[\begin{array}{cc}
\alpha & q
\end{array}]^\top$,
system~\eqref{eq:LPVr} can be written as
\begin{subequations}
\label{eq:linearized}%
\begin{eqnarray}
\dot{x} & = & A(\theta)x + B u,\\
y & = & Cx.
\end{eqnarray}
\end{subequations}
\begin{problem}
\label{prob:SOFLPV}
Let system~\eqref{eq:linearized} be given and let $\delta = K(\theta)\alpha$, where $K(\theta)$
is a scalar gain, dependent on the vector $\theta$. Find a $K(\theta)$ and
$\mathcal{L}\subset\mathbb{R}^2$,
such that the closed loop system
\begin{equation}
\dot{x} = (A(\theta)+B K(\theta) C) x
\label{eq:closedloop}
\end{equation}
is exponentially stable, with attraction domain containing $\mathcal{L}$.
\end{problem}
\begin{rem}
Consider that, by construction, the parameters $\theta_1$ and $\theta_2$ depend
only on the state $\alpha$. Hence, if Problem~\ref{prob:SOFLPV} can be solved,
system~\eqref{eq:linearized} can be stabilized by measuring $\alpha$.
\end{rem}
Consider the closed loop dynamic matrix
\begin{equation*}
\tilde{A}(\theta) = A(\theta)+B K(\theta) C.
\end{equation*}
Let $p_{\tilde{A}}$ be the characteristic polynomial of the matrix $\tilde{A}(\theta)$,
\begin{equation*}
p_{\tilde{A}}(s,\theta)=s^2+p_1(\theta)s+p_2(\theta),
\end{equation*}
where $p_1(\theta)=-K M \kappa _{\alpha } d_n-\theta_1$ and $p_2(\theta)=-K M d_m \kappa _q-\theta_2$.
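The expressions of $p_1(\theta)$ and $p_2(\theta)$ can be double-checked symbolically from $\tilde{A}(\theta)=A(\theta)+BK C$; the sketch below only verifies this identity and makes no other assumption:

```python
import sympy as sp

t1, t2, K, M, ka, kq, dn, dm, s = sp.symbols(
    'theta1 theta2 K M kappa_alpha kappa_q d_n d_m s')

A = sp.Matrix([[t1, 1], [t2, 0]])
B = sp.Matrix([ka*M*dn, kq*M*dm])
C = sp.Matrix([[1, 0]])
Atilde = A + B*K*C  # closed-loop dynamic matrix

cp = Atilde.charpoly(s).as_expr()
p1 = -K*M*ka*dn - t1
p2 = -K*M*dm*kq - t2
# the characteristic polynomial is s^2 + p1*s + p2 as stated
assert sp.expand(cp - (s**2 + p1*s + p2)) == 0
```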
Let $\Theta$ be a subset of $\mathbb{R}^2$.
One has that, for each fixed ${\theta}\in\Theta$, the eigenvalues of
$\tilde{A}({\theta})$ are complex conjugate if
\begin{subequations}
\begin{equation}
p_1^2({\theta})-4p_2({\theta})<0,
\end{equation}
and the real parts of such eigenvalues are lower than $-\lambda$ if
\begin{equation}
p_1({\theta})>2\lambda.\label{eq:eigs}
\end{equation}
\label{eq:conditions}%
\end{subequations}
where $\lambda$ is a fixed real value greater than zero.
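A quick numeric sanity check of \eqref{eq:conditions}, on arbitrary illustrative values of $p_1$, $p_2$ and $\lambda$: for $s^2+p_1 s+p_2$ with negative discriminant the roots are complex conjugate with real part $-p_1/2$, so $p_1>2\lambda$ forces the real parts below $-\lambda$.

```python
import numpy as np

# Illustrative values only: p1^2 - 4*p2 < 0 and p1 > 2*lambda.
p1, p2, lam = 3.0, 4.0, 1.0
assert p1**2 - 4*p2 < 0 and p1 > 2*lam

roots = np.roots([1.0, p1, p2])   # roots of s^2 + p1*s + p2
assert np.all(np.iscomplex(roots))      # complex-conjugate pair
assert np.all(roots.real < -lam)        # real parts below -lambda
```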
Since \eqref{eq:conditions} are polynomial inequalities, parametrized in the unknown
vector $\theta$, they can be solved by using the procedure given in Section~\ref{sec:ineq}
to transform such a set of inequalities into a set of equalities, and Algorithm~\ref{alg:solutionPUR}
to compute a solution $K$, depending on $\theta$, of the set of equalities.
By using such a method one obtains the following two polynomials:
\begin{equation*}
\eta(\theta,t), \quad \varrho_K(\theta,t).
\end{equation*}
For each fixed value $\bar{\theta}$, one can obtain the corresponding gain $K(\bar{\theta})$,
which is such that \eqref{eq:conditions} holds, by computing a solution $\bar{t}$ of the equation
$\eta(\bar{\theta},t)=0$ and
setting $K = \varrho_K(\bar{\theta},\bar{t})$. Note that, by Remark~\ref{rem:nosolutions},
if one is not able to find a solution to $\eta(\bar{\theta},t)=0$, then there exists no solution to
\eqref{eq:conditions} for such a value of $\bar{\theta}$ and for such
a $\lambda$.
In the rest of this section, Assumption~\ref{ass:KKK} is made.
\begin{ass}
\label{ass:KKK}
Let $K(\theta)=\varrho_K(\theta,\bar{t}_\theta)$, where $\bar{t}_\theta$ is such that
\begin{equation*}
\eta(\theta,\bar{t}_\theta)=0.
\end{equation*}
\end{ass}
Consider now system~\eqref{eq:LPV}, with $\delta = K(\theta)\alpha$.
One has that
\begin{subequations}
\begin{eqnarray}
\dot{\theta}_1 & = & \kappa_\alpha M(2a_n\alpha+b_n)\dot{\alpha}\\
\dot{\theta}_2 & = & \kappa_q M(2a_m\alpha+b_m)\dot{\alpha}
\end{eqnarray}
\label{eq:dottheta}
\end{subequations}
where $\dot{\alpha}=\kappa_\alpha M((a_n\alpha^2+b_n\alpha+c_n) \alpha + d_nK(\alpha)\alpha)+q$. Hence, by the definition of the parameters $\theta_1$
and $\theta_2$ and of their time derivatives, one has that there exist polynomial
functions $\phi_1,\,\phi_2,\,\psi_1$ and $\psi_2\in\mathbb{R}[x]$,
which are such that
\begin{equation*}
\begin{array}{rclcrcl}
\theta_1 & = & \phi_1(x), & \quad &
\theta_2 & = & \phi_2(x),\\
\dot{\theta}_1 & = & \psi_1(x), & &
\dot{\theta}_2 & = & \psi_2(x).
\end{array}
\end{equation*}
\begin{ass}
Let system~\eqref{eq:closedloop} be given,
let $\mathcal{D}$ be a subset of the state space of system~\eqref{eq:closedloop},
with $0\in\mathcal{D}$, and let
\begin{subequations}
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\mathcal{E} &\!\!\!\! = \!\!\!\! & \{\theta\in\mathbb{R}^2:\exists x \in \mathcal{D}:\theta_1=\phi_1(x),\,
\theta_2=\phi_2(x)\},\label{eq:eee}\\
\!\!\!\!\!\!\!\!\!\!\mathcal{F} &\!\!\!\! = \!\!\!\!& \{\omega\in\mathbb{R}^2:\exists x \in \mathcal{D}:\omega_1=\psi_1(x),\,
\omega_2=\psi_2(x)\}.\label{eq:fff}
\end{eqnarray}
\end{subequations}
\begin{enumerate}
\item
\label{point:lip} The matrix $\tilde{A}(\theta)$ is Lipschitz in $\theta$ in $\mathcal{E}$, i.e. there exists $L_A>0$ such that
\begin{equation*}
\Vert \tilde{A}(\theta)-\tilde{A}(\theta') \Vert < L_A \Vert \theta - \theta' \Vert,\quad\text{for all }\theta,\,\theta'\in\mathcal{E}.
\end{equation*}
\item \label{point:norm}There exist constants $m\geq 1$ and $\lambda>0$ such that, for
each fixed $\bar{\theta}\in\mathcal{E}$, any solution of \eqref{eq:closedloop}
is such that
\begin{equation*}
\Vert x(t) \Vert \leq m e^{-\lambda t} \Vert x(0) \Vert,\quad \forall t\geq 0.
\end{equation*}
\item \label{point:dotth}Let $L_A$ be the Lipschitz constant of item~\ref{point:lip}) and let
$m$ and ${\lambda}$ be the constants of item~\ref{point:norm}). One has that
\begin{equation*}
\Vert \dot{\theta} \Vert < \frac{{\lambda}^2}{4 L_A m \log(m)},\quad \text{for all }\dot{\theta}\in\mathcal{F}.
\end{equation*}
\end{enumerate}
\label{ass:exp}
\end{ass}
The following three propositions show that for system~\eqref{eq:closedloop},
under Assumption~\ref{ass:KKK}, there exists a domain $\mathcal{D}$, with $0\in\mathcal{D}$,
such that items \ref{point:lip}) -- \ref{point:dotth}) of
Assumption~\ref{ass:exp} hold, for some $L_A>0$, $m>0$ and $\lambda>0$.
\begin{prop}
Let Assumption~\ref{ass:KKK} hold. One has that there exists a domain
$\mathcal{D}_1\subset\mathbb{R}^n$,
with $0\in\mathcal{D}_1$,
such that the closed loop
dynamic matrix $\tilde{A}(\bar{\theta})$ is Lipschitz in $\bar{\theta}$ in $\mathcal{E}_1$,
with $\mathcal{E}_1$ defined as in \eqref{eq:eee}, with $\mathcal{D}$ replaced by $\mathcal{D}_1$.
\label{prop:lipschi}
\end{prop}
\begin{proof}
The proof of this proposition follows directly by the definition of the matrix
${A}(\theta)$ and of the matrix $K(\theta)$. As a matter of fact, the matrix $A(\theta)$
is trivially Lipschitz, whereas, the matrix $K(\theta)$ is obtained by computing the roots
of a polynomial whose coefficients are polynomially dependent on the parameters
vector $\theta$ and using the polynomial $\varrho_K$.
On the other hand $K(\theta)$ is
differentiable for all the values of $\theta$ for which
the following equalities in $t$
\begin{equation}
\eta(t,\theta) = 0,\quad \quad
\frac{\partial \eta(t,\theta)}{\partial t} = 0,
\label{eq:discriminant}%
\end{equation}
have no real solution.
As detailed in Remark~\ref{rem:critic} below, one has that
the system of equalities \eqref{eq:discriminant} has no solution
for $\theta=[\begin{array}{cc}
\phi_1(x) & \phi_2(x)
\end{array}]^\top$, $\forall x \in\mathbb{R}^2$.
Therefore, there exists a bounded domain $\mathcal{D}_1$, with $0\in\mathcal{D}_1$, such that \eqref{eq:discriminant} does
not have any real solution,
for all $\theta\in\mathcal{E}_1$. Hence,
also the function $K(\theta)$ is Lipschitz in any such bounded domain $\mathcal{D}_1$, with $0\in\mathcal{D}_1$,
for some $L_A$. Note that choosing a larger $\mathcal{D}_1$ will,
in general, render $L_A$ larger.
\end{proof}
\begin{rem}
\label{rem:critic}
Note that the set $\mathcal{N}=\{x\in\mathbb{R}^2:\;\eqref{eq:discriminant}\text{ holds, with }$\linebreak$\theta=[\begin{array}{cc}
\phi_1(x) & \phi_2(x)
\end{array}]^\top\}$
can be computed by defining the ideal \linebreak$\mathcal{J}=\langle \eta(t,\theta),\frac{\partial \eta(t,\theta)}{\partial t},\theta_1-\phi_1(x),
\theta_2-\phi_2(x) \rangle$
and by computing any Groebner basis $\mathcal{G}$,
with respect to the lex order, with $t>_l \theta_1 >_l \theta_2>_l q>_l \alpha$, of such an ideal. Hence, letting $g_1,\dots,g_l$ be the
polynomials in $\mathbb{R}[x]$ such that $\{g_1,\dots,g_l\}=\mathcal{G}\cap\mathbb{R}[x]$,
all the points in the set $\mathcal{N}$ can be obtained as the solution to the following equalities
\begin{equation*}
g_1(x) = 0,\quad
\cdots\quad
g_l(x) = 0,
\end{equation*}
which can be solved with the algorithms of Section~\ref{sec:solPol}.
By applying Algorithm~\ref{alg:solutionPUR}, it has been verified that $\mathcal{N}$ is empty.
\end{rem}
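The elimination step used in Remark~\ref{rem:critic} — computing $\mathcal{G}\cap\mathbb{R}[x]$ from a lex Groebner basis — can be sketched on a toy ideal (the polynomials below are illustrative, not those of the missile model):

```python
import sympy as sp

# Toy elimination: with lex order t > x > y, the basis elements free of t
# generate the elimination (projection) ideal of <t^2 - x, t - y>.
t, x, y = sp.symbols('t x y')
G = sp.groebner([t**2 - x, t - y], t, x, y, order='lex')
elim = [g for g in G.exprs if t not in g.free_symbols]
# elim generates <x - y**2>, the image of the projection that forgets t
```

The same pattern, with the ideal $\mathcal{J}$ of the remark and the order $t>_l\theta_1>_l\theta_2>_l q>_l\alpha$, yields the generators $g_1,\dots,g_l$ of $\mathcal{G}\cap\mathbb{R}[x]$.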
The following proposition can be easily proved by considering that, by using the techniques
given in \cite{yang1999recent}, there exists a domain
$\mathcal{D}_2\subset\mathbb{R}^2$, with $0\in\mathcal{D}_2$,
such that $\eta(\theta,t)=0$ has a solution in $t$
for $\theta=[\begin{array}{cc}
\phi_1(x) & \phi_2(x)
\end{array}]^\top$, $\forall x\in\mathcal{D}_2$ and that,
by Assumption~\ref{ass:KKK}, one has that \eqref{eq:conditions} holds for all $x\in\mathcal{D}_2$.
\begin{prop}
\label{prop:boundeig}
Let Assumption~\ref{ass:KKK} hold. There exists a domain
$\mathcal{D}_2\subset\mathbb{R}^n$,
with $0\in\mathcal{D}_2$,
such that, letting $\mathcal{E}_2$
be defined as in \eqref{eq:eee}, with $\mathcal{D}$ replaced by $\mathcal{D}_2$, there exist constants $m\geq 1$ and
${\lambda}>0$, such that,
any solution of \eqref{eq:closedloop} is such that
\begin{equation}
\Vert x(t) \Vert \leq m e^{-{\lambda} t} \Vert x(0) \Vert,\quad \text{for each fixed }\bar{\theta}\in\mathcal{E}_2.
\label{eq:exponent}
\end{equation}
\end{prop}
\begin{prop}
\label{prop:boundedtheta}
Let Assumption~\ref{ass:KKK} hold.
Let $L_A$ be the Lipschitz constant chosen in the proof of Proposition~\ref{prop:lipschi}
and let $m$ and ${\lambda}$ be the constants chosen in the proof of Proposition~\ref{prop:boundeig}.
There exists $\mathcal{D}_3\subset\mathbb{R}^2$,
with $0\in\mathcal{D}_3$,
such that
\begin{equation}
\Vert \dot{\theta} \Vert < \frac{{\lambda}^2}{4 L_A m \log(m)},\quad \text{for all }\dot{\theta}\in\mathcal{F},
\label{eq:boundedot}
\end{equation}
where $\mathcal{F}$ is defined as in \eqref{eq:fff},
with $\mathcal{D}$ replaced by $\mathcal{D}_3$.
\end{prop}
\begin{proof}
By \eqref{eq:dottheta}, $\dot{\theta}$ is a linear function of $\dot{\alpha}$, and $\frac{\dot{\theta}_i}{\dot{\alpha}}$
is affine in $\alpha$. Hence, \eqref{eq:boundedot} holds in some domain $\mathcal{D}_3$, with $0\in\mathcal{D}_3$.
\end{proof}
By Proposition~\ref{prop:lipschi}, Proposition~\ref{prop:boundeig} and Proposition~\ref{prop:boundedtheta},
one has that system~\eqref{eq:closedloop} is such that Assumption~\ref{ass:exp} holds, with
$\mathcal{D}=\mathcal{D}_1\cap\mathcal{D}_2\cap\mathcal{D}_3$, where $\mathcal{D}_1$, $\mathcal{D}_2$ and
$\mathcal{D}_3$ are the domains chosen in the proofs of Proposition~\ref{prop:lipschi}, Proposition~\ref{prop:boundeig} and Proposition~\ref{prop:boundedtheta}, respectively.
The following two theorems and lemma show that the gain $K(\theta)$
and a domain $\mathcal{L}\subset\mathcal{D}$
are a solution to Problem~\ref{prob:SOFLPV}.
\begin{thm}
\cite{mohammadpour2012control}
Let system~\eqref{eq:closedloop} be given.
Let $\Theta$ be the set of all the admissible parameters $\theta$.
Let $\tilde{A}(\cdot)$ be Lipschitz continuous,
with a Lipschitz constant $L_A$, for each $\theta\in\Theta$.
Assume that, for any fixed $\bar{\theta}\in\Theta$,
any solution to the LTI system
$
\dot{x}(t)=\tilde{A}(\bar{\theta})x(t)
$
is such that there exist constants $m\geq 1$ and $\lambda>0$ such that $
\Vert x(t) \Vert \leq m e^{-\lambda t} \Vert x(0) \Vert$.
If $\Vert \dot{\theta}(t) \Vert < \frac{{\lambda}^2}{4 L_A m \log(m)},\, \forall t \geq 0$,
then $0$ is exponentially stable, with respect to system~\eqref{eq:closedloop}.
\label{thm:stability}
\end{thm}
\begin{lem}
\label{lem:subdomain}
Let Assumption~\ref{ass:KKK} and Assumption~\ref{ass:exp} hold.
There exists $\mathcal{L}\subset\mathcal{D}$, such that,
if $x(0)\in\mathcal{L}$, then
$x(t)\in\mathcal{D}$, $\forall t \geq 0$.
\end{lem}
\begin{proof}
By Assumption~\ref{ass:KKK}, one has that \eqref{eq:conditions} holds.
Hence, by \cite{desoer1969slowly}, there exists a positive definite matrix $P(\theta)$, such that
\begin{subequations}
\begin{eqnarray}
-I & = & \tilde{A}(\theta)^\top P(\theta)+P(\theta)\tilde{A}(\theta),\label{eq:classlyap}\\
P(\theta) & = & \textstyle\int_{0}^{\infty} e^{\tau \tilde{A}(\theta)^\top}e^{\tau \tilde{A}(\theta)}d\tau,
\end{eqnarray}
\end{subequations}
where $I$ is the 2--dimensional identity matrix, for all $\theta\in\mathcal{E}$. By
the proof of Proposition~\ref{prop:lipschi}, one has that the matrix $\tilde{A}(\theta)$
is differentiable. Hence, by \cite{wilcox1967exponential},
one has that $\frac{\partial e^{\tau \tilde{A}(\theta)}}{\partial \theta_i}=\int_0^\tau e^{(\tau-u) \tilde{A}(\theta)}
\frac{\partial \tilde{A}(\theta)}{\partial \theta_i}e^{u \tilde{A}(\theta)}du$, for $i=1,2$.
Therefore, by item \ref{point:lip}) and item \ref{point:norm}) of Assumption~\ref{ass:exp},
one has that there exist constants $C_i>0$ and $\lambda_i>0$ such that
$\Vert \frac{\partial P(\theta)}{\partial \theta_i} \Vert \leq \int_{0}^{\infty} C_i\tau e^{-2\lambda_i\tau}d\tau<\infty$,
$i=1,2$.
Hence, there exists a constant $L_P$, such that $\Vert P(\theta)-P(\theta') \Vert < L_P \Vert \theta
-\theta' \Vert$, for each $\theta$, $\theta'\in\mathcal{E}$.
Hence, due to \eqref{eq:classlyap}, the function
$V=x^\top P(\theta)x$ is such that
$\dot{V}\leq x^\top(L_P \Vert \dot{\theta}\Vert -1)x$, for $\dot{\theta}=[\begin{array}{cc}
\psi_1(x) & \psi_2(x)
\end{array}]^\top$, $\forall x \in \mathcal{D}$.
Hence, by the continuity of the functions $\psi_1(x)$ and $\psi_2(x)$ and
since $\psi_1(0)=\psi_2(0)=0$,
there exists a domain $\mathcal{H} = \{ x\in\mathcal{D}:\,\Vert [\begin{array}{cc}
\psi_1(x) & \psi_2(x)
\end{array}]^\top \Vert < L_P^{-1} \}$
such that $0\in\mathcal{H}$. Thus, let $c>0$ be the largest constant
such that $\mathcal{W}(\theta)=\{x\in\mathbb{R}^2:\;x^\top P(\theta)x<c \}$
is a subset of $\mathcal{H}$, for all $\theta\in\mathcal{E}$.
Define the set $\mathcal{L}=\cap_{\theta\in\mathcal{E}}\mathcal{W}(\theta)$.
Since $V$ is a Lyapunov function, with respect to system~\eqref{eq:closedloop},
one has that, if $x(0)\in\mathcal{L}$, then $x(t)\in\mathcal{H}\subset\mathcal{D}$,
for all times $t\geq 0$.
\end{proof}
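The Lyapunov construction used in the proof can be checked numerically on an illustrative Hurwitz matrix (the matrix below is arbitrary, not $\tilde{A}(\theta)$): for Hurwitz $A$, $P=\int_0^\infty e^{\tau A^\top}e^{\tau A}\,d\tau$ is the positive definite solution of $A^\top P+PA=-I$.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative Hurwitz matrix (eigenvalues -1 and -2).
A = np.array([[-1.0, 1.0],
              [ 0.0, -2.0]])

# solve_continuous_lyapunov(a, q) solves a@X + X@a.T = q,
# so a = A.T, q = -I gives A.T@P + P@A = -I.
P = solve_continuous_lyapunov(A.T, -np.eye(2))

res = A.T @ P + P @ A + np.eye(2)          # Lyapunov residual
assert np.allclose(res, 0.0, atol=1e-10)
assert np.all(np.linalg.eigvalsh(P) > 0)   # P is positive definite
# for this A the exact solution is [[1/2, 1/6], [1/6, 1/3]]
assert np.allclose(P, [[0.5, 1/6], [1/6, 1/3]])
```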
\begin{thm}
\label{thm:conv}
Let the assumptions of Lemma~\ref{lem:subdomain} hold.
Then $0$ is exponentially stable for system~\eqref{eq:closedloop},
with $\mathcal{L}$ being a conservative estimate of the attraction domain.
\end{thm}
\begin{proof}
By Lemma~\ref{lem:subdomain}, if $x(0)\in\mathcal{L}$, then $x(t)\in\mathcal{D}$, for all times $t\geq 0$.
Since all the assumptions of Theorem~\ref{thm:stability} hold whenever $x(t)\in\mathcal{D}$ for all times $t\geq 0$,
$0$ is exponentially stable, with respect to system~\eqref{eq:closedloop}.
\end{proof}
By Theorem~\ref{thm:conv},
$K(\theta)$ and $\mathcal{L}$ are a solution to Problem~\ref{prob:SOFLPV}.
A gain $K$, dependent on the parameters $\theta$
has been computed by solving \eqref{eq:conditions}, with $\lambda=15$,
by using the procedure given in Section~\ref{sec:ineq} and
Algorithm~\ref{alg:solutionPUR} to compute the polynomials $\eta(\theta,t)$
and $\varrho_K(\theta,t)$.
One has that the domain $\mathcal{D}=\{x=[\begin{array}{cc}
\alpha & q
\end{array}]^\top\in\mathbb{R}^2:\;
\vert \alpha \vert \leq 100,\,\vert q \vert \leq 100 \}$ is such that all the conditions of
Assumption~\ref{ass:exp} hold, with $L_A = 50$, ${\lambda}=14.5$ and $m=1.0026$.
A simulation has been carried out, with $K(\theta)=\varrho_K(\theta,\bar{t})$,
where $\bar{t}$ is a solution to $\eta(\theta,t)=0$, from
initial conditions starting inside the domain $\mathcal{Q}=\{x\in\mathcal{D}:\;\alpha\geq 0, \,q\geq -5,\,
q \leq 100 \} \cup \{x\in\mathcal{D}:\;\alpha \leq 0,\,q\geq-100,\,q\leq 5 \}$.
Figure~\ref{fig:clostraj} shows the trajectories of the system~\eqref{eq:closedloop} with $x(0)\in\mathcal{Q}$.
\begin{figure}[htb]
\centering
\includegraphics[width=0.43\textwidth]{lpvsys.pdf}
\caption{Trajectories of system~\eqref{eq:closedloop}, with $x(0)\in\mathcal{Q}$.\label{fig:clostraj}}
\end{figure}
\section{Conclusions}
Three algorithmic procedures for solving systems of polynomial inequalities have been described. The first step,
common to all three, is the classical reduction of \cite{anderson1977output} to a system of equalities.
To solve the resulting system, two available methods have been adapted and a new one has been derived.
The latter has been used to solve the Static Output Feedback stabilization problem for an LPV system.
\bibliographystyle{ieeetr}
\section{Introduction} \label{sec:intro}
A (reduced) curve $C$ in the complex projective plane $\PP^2$ is called { \it free}, or a free divisor,
if the rank two vector bundle $T\langle C\rangle=Der(-logC)$ of logarithmic vector fields along $C$ splits as a direct sum of two line bundles on $\PP^2$. Note that $C$ is a free divisor exactly when the surface given by the cone over $C$ is free at its vertex, the origin of $\C^3$, in the sense of K. Saito who introduced this fundamental notion in \cite{KS}. See also \cite{BEG}.
A curve $C$ in $\PP^2$ is called { \it cuspidal } if all of its singular points $p$ are generalized cusps, i.e. the analytic germs $(C,p)$ are irreducible. If $d$ is the degree of $C$ and $m$ is the maximal multiplicity of the singular points of $C$, then $C$ is called a curve of type $(d,m)$. Let $\kappa$ be the total number of cusps. Only one rational cuspidal curve with
four cusps is known, the quintic with cuspidal configuration [($2_3$); ($2$); ($2$); ($2$)], in other words singularity types $3A_2+A_6$.
It is conjectured that $\kappa \leq 3$ for rational cuspidal curves of degree $d \geq 6$, see \cite{Pion} for more details.
Several classification results for rational cuspidal curves have been obtained and some of them are recalled in the third section. They lead to series of rational cuspidal curves, which have sometimes additional properties, e.g. some of these curves are projectively rigid.
In this note we investigate a stronger property for these series of curves, namely we search among them for the free divisors. This is motivated by the fact that the number of known examples of { \it irreducible}
free divisors seems to be very limited: the Cayley sextic as described in \cite{Sim} and the family of rational cuspidal curves with one cusp
\begin{equation} \label{STfam}
C_d: f_d=y^{d-1}z+x^d+ax^2y^{d-2}+bxy^{d-1}+cy^{d}=0, \ \ \ a \ne 0
\end{equation}
of type $(d,d-1)$ for $d \geq 5$ described in \cite{ST}. Recently, R. Nanduri has constructed a new family of irreducible free divisors of degree $d \geq 5$, containing the family \eqref{STfam} and consisting of curves having a unique singular point $p$, of multiplicity $m=d-1$, with possibly several branches at $p$, see \cite{N}, especially Remark 2.4. The family \eqref{STfam} as well as the family constructed by Nanduri consist only of rational curves in an obvious way (i.e. the variable $z$ occurs only with exponent one in the corresponding polynomials). For the Cayley sextic the rationality is noted in Remark 3.3 in \cite{Sim}, see also \cite{Se}.
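For the family \eqref{STfam}, the stated singularity structure is easy to verify symbolically. The sketch below takes $d=5$ with the parameter choice $a=1$, $b=c=0$ (an illustrative choice; the family only requires $a\neq 0$) and checks that $(0:0:1)$ is a singular point of multiplicity $m=d-1$:

```python
import sympy as sp

# C_5 from (STfam) with the hypothetical parameters a = 1, b = c = 0.
x, y, z = sp.symbols('x y z')
d, a, b, c = 5, 1, 0, 0
f = y**(d-1)*z + x**d + a*x**2*y**(d-2) + b*x*y**(d-1) + c*y**d

fx, fy, fz = [sp.diff(f, v) for v in (x, y, z)]
# Here f_z = y^4 forces y = 0, and then f_x = 5x^4 forces x = 0,
# so (0 : 0 : 1) is the only singular point.
assert all(g.subs({x: 0, y: 0, z: 1}) == 0 for g in (f, fx, fy, fz))

# Multiplicity at (0:0:1): lowest total degree of f(x, y, 1) in (x, y).
f_aff = sp.Poly(f.subs(z, 1), x, y)
mult = min(sum(mono) for mono in f_aff.monoms())
assert mult == d - 1
```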
These examples and those given in the present note suggest the following.
\begin{conj}
\label{conj}
An irreducible plane curve of degree $d\geq 2$ which is a free divisor is a rational curve.
\end{conj}
For free divisors involving several irreducible components, it is easy to construct a free divisor $C$ with at least one irreducible component which is irrational using the recent construction by J. Vall\`es in \cite{JV}, e.g.
\begin{equation} \label{pen}
C:f=xyz(x^3+y^3+z^3)[(x^3+y^3+z^3)^3-27x^3y^3z^3]=0.
\end{equation}
In the second section we find new properties of the free divisors in $\PP^2$: a numerical characterization of freeness in Theorem \ref{thmFREE} and Conjecture \ref{conj10}, and the fact that for a free curve $C:f=0$
its degree $d$ and the total Tjurina number $\tau(C)$ determine the Hilbert function of the graded Milnor algebra $M(f)$, see Theorem \ref{thmHP}. We also give information on the possibile values of the total Tjurina number $\tau(C)$ in terms of the degree $d$, see Theorem \ref{thmWH}. This results also explains why a rational cuspidal curve which is free of degree $d\geq 6$ must have some non weighted homogeneous singularity, i.e. the Jacobian ideal $J_f$ is not of linear type.
In the third section we collect, for the reader's convenience and to fix the notations, some classification results of rational cuspidal plane curves due to various authors.
This classification is used in the fourth section to construct new families of irreducible free divisors in $\PP^2$, namely an infinite series in Theorem \ref{thm2ii} of rational curves having two cusps, and the beginning curves in three potentially infinite series in Example \ref{exprop2i}, Example \ref{exprop3}, Example \ref{exprop4} and Conjecture \ref{conj40}. All these curves have type $(d,m)$ with $m\leq d-2$, hence they are distinct from the curves in \eqref{STfam} or in Nanduri's family in \cite{N}.
We also list all the free rational cuspidal curves of degree 6 in Example \ref{exd=6}.
The computations of various invariants given in this paper were made using two computer algebra systems, namely CoCoA \cite{Co} and Singular \cite{Sing}, and play a key role especially in the final section.
The corresponding codes are available on request, some of them being available in \cite{St}.
The first author thanks Aldo Conca for some useful discussions.
\section{Free divisors and Milnor algebras} \label{sec2}
Let $f$ be a homogeneous polynomial of degree $d$ in the polynomial ring $S=\C[x,y,z]$ and denote by $f_x,f_y,f_z$ the corresponding partial derivatives.
Let $C$ be the plane curve in $\PP^2$ defined by $f=0$ and assume that $C$ is reduced. We denote by $J_f$ the Jacobian ideal of $f$, i.e. the homogeneous ideal of $S$ spanned by $f_x,f_y,f_z$ and denote by $M(f)=S/J_f$ the corresponding graded ring, called the Jacobian (or Milnor) algebra of $f$. Let $I_f$ denote the saturation of the ideal $J_f$ with respect to the maximal ideal $(x,y,z)$ in $S$.
Consider the graded $S-$submodule $AR(f) \subset S^{3}$ of {\it all relations} involving the derivatives of $f$, namely
$$\rho=(a,b,c) \in AR(f)_m$$
if and only if $af_x+bf_y+cf_z=0$ and $a,b,c$ are in $S_m$. We set $ar(f)_k=\dim AR(f)_k$ and $m(f)_k=\dim M(f)_k$ for any integer $k$. Then $C$ is a free divisor if $AR(f)$ is a free graded $S$-module of rank two.
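The graded pieces $m(f)_k=\dim M(f)_k$ can be computed by brute-force linear algebra, as in the sketch below for the smooth Fermat cubic $f=x^3+y^3+z^3$, where $J_f=(x^2,y^2,z^2)$ and the Hilbert function is $1,3,3,1$ in degrees $0,\dots,3$ (the paper's actual computations were done in CoCoA and Singular):

```python
import sympy as sp

# Hilbert function of M(f) = S/J_f for the smooth Fermat cubic.
x, y, z = sp.symbols('x y z')
f = x**3 + y**3 + z**3
derivs = [sp.diff(f, v) for v in (x, y, z)]  # degree d-1 = 2 generators

def monomials(k):
    """All monomials of degree k in x, y, z."""
    return [x**i * y**j * z**(k - i - j)
            for i in range(k + 1) for j in range(k - i + 1)]

def m(k):
    """dim M(f)_k = dim S_k - dim (J_f)_k, via a rank computation."""
    gens = ([sp.expand(mon * g) for g in derivs for mon in monomials(k - 2)]
            if k >= 2 else [])
    basis = monomials(k)
    rows = [[sp.Poly(g, x, y, z).coeff_monomial(mon) for mon in basis]
            for g in gens]
    rank = sp.Matrix(rows).rank() if rows else 0
    return len(basis) - rank

# socle degree T = 3(d-2) = 3 for d = 3
assert [m(k) for k in range(4)] == [1, 3, 3, 1]
```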
Recall the following basic fact, see \cite{ST}, \cite{Se}, \cite{DS14}.
\begin{prop}
\label{prop5}
Let $C$ be the plane curve in $\PP^2$ defined by $f=0$ and assume that $C$ is reduced and $d=\deg (f)$. Then the following hold.
\noindent (i) The curve $C$ is a free divisor if and only if $I_f=J_f$.
\noindent (ii) The curve $C$ is projectively rigid if and only if the degree $d$ homogeneous components $I_{f,d}$ and $J_{f,d}$ coincide.
In particular, any free divisor is projectively rigid.
\end{prop}
We recall also some definitions, see \cite{DStEdin}.
\begin{definition}
\label{def}
For a plane curve $C:f=0$ of degree $d$ with isolated singularities we introduce three integers, as follows.
\noindent (i) the {\it coincidence threshold}
$$ct(f)=\max \{q:\dim M(f)_k=\dim M(f_s)_k \text{ for all } k \leq q\},$$
with $f_s$ a homogeneous polynomial in $S$ of degree $d$ such that $C_s:f_s=0$ is a smooth curve in $\PP^2$.
\noindent (ii) the {\it stability threshold}
$st(f)=\min \{q~~:~~\dim M(f)_k=\tau(C) \text{ for all } k \geq q\},$
where $\tau(C)$ is the total Tjurina number of $C$, that is
$\tau(C)=\sum_{i=1}^{p} \tau(C,a_i),$ where $a_1,\dots,a_p$ are the singular points of $C$.
\noindent (iii) the {\it minimal degree of a syzygy} $mdr(f)=\min \{q~~:~~ H^2(K^*(f))_{q+2}\ne 0\}$,
where $K^*(f)$ is the Koszul complex of $f_x,f_y,f_z$ with the natural grading.
\end{definition}
Note that one has for $j<d-1$ the following equality
\begin{equation}
\label{ar=er}
AR(f)_j=H^2(K^*(f))_{j+2}.
\end{equation}
We set $er(f)_j=\dim H^2(K^*(f))_{j+2}$ for any $j$, the number of essential relations among the partial derivatives of $f$.
It is known that one has
\begin{equation}
\label{REL}
ct(f)=mdr(f)+d-2.
\end{equation}
It is interesting that the freeness of the plane curve $C$ can be characterized in terms of these invariants. Let $T=3(d-2)$ denote the degree of the socle of the ring $M(f_s)$.
\begin{thm}
\label{thmFREE} For a plane curve $C:f=0$ of degree $d\geq 4$, the following are equivalent.
\noindent (i) $C$ is free;
\noindent (ii) the
equality
$$m(f)_{2d-5-j}+ar(f)_j=\tau(C)$$
holds for any integer $j$ with $-1 \leq j \leq d-2$ and $ar(f)_{d-2} \ne 0$.
In particular
$$st(f)=2d-4-mdr(f)=T-ct(f).$$
\noindent (iii) one has $m(f)_{[\frac{T}{2}]}+m(f)_{T-[\frac{T}{2}]}-m(f_s)_{[\frac{T}{2}]}=\tau(C),$ where $
[\frac{T}{2}]$ denotes the integral part of $\frac{T}{2}$.
\end{thm}
\proof
The fact that the two last formulas in $(ii)$ are equivalent follows from \eqref{REL}. Moreover,
if $C$ is free, then $I_f=J_f$, and Proposition 2 in \cite{DBull} implies that
$$\dim M(f) _k= \tau (C)$$
for all $k\geq q$ if and only if $q=T-ct(f)$. The definition of $st(f)$ implies that $st(f)=T-ct(f)$.
Note also that
$$m(f)_k \geq m(f)_{k+1}\ge \tau(C)$$
for $k\geq 2d-5$ by Corollary 8 in \cite{CD}. This implies the formula for $st(f)$ given in (ii).
Now we pass to the proof of the theorem, by showing that (i) is equivalent to (ii).
Consider the rank two vector bundle $T\langle C\rangle=Der(-logC)$ of logarithmic vector fields along $C$.
By definition, $C$ is free if this bundle splits as a direct sum of two line bundles on $\PP^2$, and this happens exactly when
\begin{equation}
\label{eq1}
H^1(\PP^2, T\langle C\rangle(k))=I_{f,k+d}/J_{f,k+d}=0
\end{equation}
for any $k$, see Remark 4.7 in \cite{DS14}.
Using the results in the third section of \cite{DS14}, we know that
$$\chi(T\langle C\rangle(k))=3{k+3 \choose 2}-{d+k+2 \choose 2} +\tau(C),$$
and $h^0(T\langle C\rangle(k))=ar(f)_{k+1}$, $h^2(T\langle C\rangle(k))=ar(f)_{d-5-k}$.
It follows that we have
\begin{equation}
\label{eq2}
h^1(T\langle C\rangle(k))=ar(f)_{k+1}+ar(f)_{d-5-k}-\tau(C)+{d+k+2 \choose 2}-3{k+3 \choose 2}.
\end{equation}
Recall that one has
$$I_{f,j}/J_{f,j}=I_{f,T-j}/J_{f,T-j}$$
for any $j$ by \cite{DS1}, \cite{Se}. Corollary 4.3 in \cite{DPop} shows that in order to check \eqref{eq1}, it is enough to consider only the case
$$-3 \leq k\leq T/2-d= \frac{d}{2}-3\leq d-4.$$
If we make the substitution $k=d-4-i$, then $0 \leq i \leq d-1$ and the above formula becomes
\begin{equation}
\label{eq3}
h^1(T\langle C\rangle(k))=ar(f)_{d-3-i}+ar(f)_{i-1}-\tau(C)+{d+i \choose 2}-3{i+1 \choose 2}.
\end{equation}
Note that both indices $d-3-i$ and $i-1$ are in the interval $[-2,d-2]$.
On the other hand, for $ j \leq d-2$, Theorem 1 in \cite{DBull} and \eqref{ar=er} imply that
\begin{equation}
\label{eq3.5}
ar(f)_j=m(f)_{d-1+j}-m(f_s)_{d-1+j}.
\end{equation}
It follows that
\begin{equation}
\label{eq4}
ar(f)_{d-3-i}=m(f)_{2d-4-i}-m(f_s)_{2d-4-i}=m(f)_{2d-4-i}-{d+i \choose 2}+3{i+1 \choose 2}.
\end{equation}
This gives
$$h^1(T\langle C\rangle(k))=m(f)_{2d-4-i}+ar(f)_{i-1}-\tau(C).$$
Assume now that $ar(f)_{d-2} = 0$, which clearly implies $st(f) \leq d-3$.
Note that for any plane curve $C:f=0$ of degree $d$ one has $ct(f) \leq st(f)$ if $d$ is even and $ct(f)-1 \leq st(f)$ if $d$ is odd and $st(f)= [T/2].$
It follows that either $ct(f) \leq d-3$, which is a contradiction, or $ct(f)=st(f)+1=d-2$ and
$d-3=[T/2]$, which is again a contradiction.
This clearly completes the proof of the fact that (i) is equivalent to (ii).
To show that (ii) is equivalent to (iii), use
Corollary 4.3 in \cite{DPop} which shows that it is enough to consider only the case $k={[\frac{T}{2}]}-d$.
Then the equality in (ii) transformed using the formula \eqref{eq3.5} yields the formula in (iii).
\endproof
\begin{rk}
\label{rkproof}
To prove in a quicker (but perhaps more mysterious) way the fact that (i) is equivalent to (ii), one may alternatively use Corollary 4 in \cite{DS1}. This result also implies that for a free divisor $C:f=0$ one has
$$m(f)_{2d-5-j}+er(f)_j=\tau(C)$$
for any integer $j$.
\end{rk}
In fact, for a free divisor $C:f=0$, the dimensions $m(f)_j$ are completely determined by two invariants, namely the degree $d$ and the total Tjurina number $\tau(C)$. More precisely, one has the following result.
\begin{thm}
\label{thmHP} Let $C:f=0$ be a free divisor of degree $d$ and total Tjurina number $\tau(C)$, which is not a pencil of lines. Let $d_1$ and $d_2$ with $d_1 \leq d_2$ be the degrees of two homogeneous generators of the free graded $S$-module $AR(f)$. Then the following holds.
\noindent (i) The degrees $d_1$ and $d_2$ are the roots of the equation
$$t^2-(d-1)t+(d-1)^2-\tau(C)=0.$$
In particular $d=d_1+d_2+1$ and $\tau(C)=(d-1)^2-d_1d_2$ and hence the pairs $(d,\tau(C))$ and $(d_1,d_2)$ determine each other.
\noindent (ii) $mdr(f)=d_1$, $ct(f)=d+d_1-2$ and $st(f)=d+d_2-3$.
\noindent (iii) $ct(f)\leq d+j\leq st(f)$ if and only if $d_1-2 \leq j\leq d_2-3$, and for such $j$'s one has
$$m(f)_{d+j}=m(f_s)_{d+j}+ {j-d_1+3 \choose 2}.$$
In particular, one has
$$\tau(C)=m(f_s)_{d+d_2-3}+ {d_2-d_1 \choose 2}.$$
\noindent (iv) Let $U=\PP^2 \setminus C$. Then the Euler number $E(U)$ of $U$ is given by
$$E(U)=\tau(C)-\mu(C) +(d_1-1)(d_2-1),$$
where $\mu(C)$ is the total Milnor number of $C$. In particular, if $C$ is irreducible one has $E(U) \geq 1$ and $d_1>1$.
\end{thm}
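The algebra in (i) and (ii) can be cross-checked numerically. The following stdlib Python sketch (our own illustration, not part of the text) verifies, for a range of exponents, that setting $d=d_1+d_2+1$ and $\tau=(d-1)^2-d_1d_2$ makes $d_1,d_2$ the roots of the displayed quadratic, and that the discriminant $4\tau-3(d-1)^2$ is the perfect square $(d_2-d_1)^2$, a fact used again in Theorem \ref{thmWH} below:

```python
from math import isqrt

def invariants(d1, d2):
    """Given the exponents d1 <= d2 of a free curve, return (d, tau)
    as in Theorem thmHP (i): d = d1 + d2 + 1, tau = (d-1)^2 - d1*d2."""
    d = d1 + d2 + 1
    tau = (d - 1) ** 2 - d1 * d2
    return d, tau

for d1 in range(1, 8):
    for d2 in range(d1, 10):
        d, tau = invariants(d1, d2)
        # d1 and d2 are the roots of t^2 - (d-1) t + (d-1)^2 - tau
        for t in (d1, d2):
            assert t * t - (d - 1) * t + (d - 1) ** 2 - tau == 0
        # the discriminant 4 tau - 3 (d-1)^2 equals (d2 - d1)^2
        disc = 4 * tau - 3 * (d - 1) ** 2
        assert disc == (d2 - d1) ** 2 and isqrt(disc) == d2 - d1
```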
Note that for a line arrangement $C$ in $\PP^2$ one has $\tau(C)=\mu(C)$, $b_1(U)=d-1=d_1+d_2$ and $b_2(U)=E(U)+(d-1)-1=d_1d_2$; hence the claims (i) and (iv) above are a very special case of Terao's results in \cite{Te}. See also \cite{Yo} for a very good survey on free line arrangements.
\proof The definition of the degrees $d_1$ and $d_2$ is equivalent to the equality
$$T\langle C\rangle(-1)=\OO(-d_1)\oplus \OO(-d_2).$$
Then the first claim follows from Lemma 4.4 in \cite{DS14} and the second claim follows
from \eqref{REL}, Theorem \ref{thmFREE} and the obvious fact $mdr(f)=d_1$. Note that $d_1>0$ as $C$ is not a pencil of lines. Hence $d_2-2\leq d-4$.
Then equation \eqref{eq3.5}
implies $m(f)_{d+j}=m(f_s)_{d+j}+ar(f)_{j+1}$ for $j \leq d-3$. Since $ j \leq d_2-2$, one has
$$ar(f)_{j+1}= {j-d_1+3 \choose 2},$$
which completes the proof of (iii). To prove (iv), we use the formula
\begin{equation}
\label{EC}
E(C)=2-(d-1)(d-2) +\mu(C),
\end{equation}
see for instance \cite{D1}. Then we compute
$$E(U)=E(\PP^2) -E(C)=3-(2-(d-1)(d-2) +\mu(C))$$
and this yields the claimed formula using (i). The final claim is clear, since $C$ irreducible is equivalent to $b_1(U)=0$.
\endproof
\begin{rk}
\label{rkRes} The equations $d=d_1+d_2+1$ and $$(d-1)^2-d_1d_2=m(f_s)_{d+d_2-3}+ {d_2-d_1 \choose 2}$$ obtained from Theorem \ref{thmHP} do not impose restrictions on the integers $1 \leq d_1 \leq d_2$, as they reduce to an identity involving $d_1$ and $d_2$.
\end{rk}
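This identity can be tested numerically. Assuming that $m(f_s)_j$ is the Hilbert function of the Milnor algebra of a smooth degree $d$ curve, with Hilbert series $((1-t^{d-1})/(1-t))^3$ (consistent with the values $m(f_s)_{2d-4-i}={d+i \choose 2}-3{i+1 \choose 2}$ used in \eqref{eq4}), the following stdlib Python sketch (our own check) confirms the identity for a range of $(d_1,d_2)$:

```python
from math import comb

def milnor_smooth(d):
    """Hilbert function of the Milnor algebra M(f_s) of a smooth plane
    curve of degree d: coefficients of ((1 - t^(d-1)) / (1 - t))^3,
    i.e. of (1 + t + ... + t^(d-2))^3."""
    base = [1] * (d - 1)            # 1 + t + ... + t^(d-2)
    coeffs = [1]
    for _ in range(3):              # convolve three times with base
        new = [0] * (len(coeffs) + len(base) - 1)
        for i, a in enumerate(coeffs):
            for j, b in enumerate(base):
                new[i + j] += a * b
        coeffs = new
    return coeffs

# Identity of Remark rkRes: with d = d1 + d2 + 1,
# (d-1)^2 - d1*d2 == m(f_s)_{d+d2-3} + binom(d2-d1, 2) for all 1 <= d1 <= d2.
for d1 in range(1, 7):
    for d2 in range(d1, 9):
        d = d1 + d2 + 1
        lhs = (d - 1) ** 2 - d1 * d2
        assert lhs == milnor_smooth(d)[d + d2 - 3] + comb(d2 - d1, 2)
```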
The study of a large number of examples suggests the following conjecture.
\begin{conj}
\label{conj10}
A plane curve $C:f=0$ is free if and only if
$$ct(f)+st(f)=T.$$
\end{conj}
The following result gives some restrictions on the total Tjurina number of an irreducible free curve.
\begin{thm}
\label{thmWH} Let $C:f=0$ be an irreducible free divisor of degree $d>1$. Then $d \geq 5$ and the following hold.
\noindent (i) If $d=5$, then $C$ is rational and all singularities of $C$ are cusps and weighted homogeneous. Moreover in this case $d_1=d_2=2$ and $\tau(C)=12$.
\noindent (ii) If $d\geq 6$, then $C$ is either irrational, or $C$ has at least one singularity which is either not a cusp or not weighted homogeneous. Moreover in this case one has
$$\frac{3}{4}(d-1)^2 \leq \tau(C) \leq d^2-4d+7,$$
$d_1\geq 2$ and the integer $\Delta=4\tau(C)-3(d-1)^2$ is a perfect square.
\end{thm}
\proof
One has
$E(C)=b_0(C)-b_1(C)+b_2(C)=2-b_1(C) \leq 2.$ It follows from \eqref{EC} that
\begin{equation}
\label{mu}
\mu(C) \leq (d-1)(d-2)
\end{equation}
and the equality holds if and only if $C$ is rational and cuspidal as only then $b_1(C)=0$.
On the other hand, one clearly has $\tau(C) \leq \mu(C)$ with equality if and only if all the singularities of $C$ are weighted homogeneous. Lemma 4.4 in \cite{DS14} implies that
\begin{equation}
\label{eq10}
4\tau(C)-3(d-1)^2=u^2
\end{equation}
for $u=d_2-d_1$. In particular, we get by putting everything together
\begin{equation}
\label{eq11}
\frac{3}{4}(d-1) \leq \frac{\tau(C)}{d-1} \leq d-2.
\end{equation}
This clearly implies $d \geq 5$. If $d=5$, then both extreme terms equal $3$, hence we must have equalities everywhere. This proves the claim (i) by using Theorem \ref{thmHP}.
Assume now $d>5$. Then $d_1 \geq 2$, as shown in Theorem \ref{thmHP} (iv), and the obvious fact that $\tau(C)=(d-1)^2-d_1d_2$ is maximal (resp. minimal) when the difference $d_2-d_1$ is maximal, i.e. when $d_1=2$ (resp. minimal, i.e. when $d_1=[(d-1)/2]$), yields the claimed inequalities.
\endproof
\begin{cor}
\label{corRC}
If $C$ is a free irreducible curve, then $C$ is rational cuspidal if and only if
$$(d_1-1)(d_2-1)=\mu(C)-\tau(C)+1.$$
In particular, a free rational cuspidal curve cannot have only weighted homogeneous singularities unless $d_1=d_2=2$, and hence $d=5$.
\end{cor}
\proof The proof follows from the fact that an irreducible curve $C$ is rational cuspidal if and only if $C$ is homeomorphic to $\PP^1$, and this happens exactly when $E(C)=E(\PP^1)$ which is equivalent to $E(U)=1$.
\endproof
\begin{rk}
\label{rkWH} (i) As shown in Proposition 1.6 in \cite{ST} and mentioned in Remark 4.7 in \cite{DS14},
the curve $C:f=0$ has only weighted homogeneous singularities if and only if the Jacobian ideal $J_f$ is of linear type. In particular, a rational cuspidal free divisor of degree $d\geq 6$ cannot have a Jacobian ideal $J_f$ of linear type by Theorem \ref{thmWH}. This is the case with the family \eqref{STfam}.
(ii) Note that the perfect square $\Delta$ can be zero for $d$ odd, for instance for the degree $13$ curve $C_1$ described in Proposition \ref{prop4} and Example \ref{exprop4}, which has $\tau(C_1)=108$, and for the curves described in Theorem \ref{thm2ii}, which have $d=2k+1$ and $\tau(C)=3k^2$.
In fact, with the notation from Theorem \ref{thmHP}, we have $\Delta=(d_1-d_2)^2$.
For the family \eqref{STfam} one has $\tau(C_d) =d^2-4d+7$ (and this happens for any irreducible free curve with $d_1=2$ as noticed above), hence both inequalities given
in Theorem \ref{thmWH} (ii) for $\tau(C)$ are sharp.
(iii) If $C$ is not irreducible, then one has the following inequalities in analogy to those in Theorem \ref{thmWH}.
Note that $E(C) \leq 2$ is equivalent to $E(U)=E(\PP^2)-E(C) \geq 1$.
Hence in this case we also get $d_1 \geq 2$, and the inequalities in
Theorem \ref{thmWH} (ii) work unchanged.
If $E(U)=0$, then it is known that all the irreducible components of $C$ are rational curves, see \cite{GP}, \cite{WV}. In this case we get from $E(C)=3$ the inequality
$$ \tau (C) \leq \mu(C) = (d-1)(d-2)+1.$$
The equality $ \tau (C) =\mu(C)=(d-1)(d-2)+1$ implies, via Theorem \ref{thmHP} (iv), that $d_1=1$.
Hence the curve $C$ admits a 1-dimensional symmetry group, and such curves have been studied in \cite{duW}. In particular, there are free divisors among them, see Proposition 1.3 (2a) in \cite{duW}. Moreover, the corresponding symmetry group is semi-simple if $d \geq 6$ by \cite{duW}; the corresponding degree one syzygy can then be diagonalized, and hence the corresponding curves are among those studied in the third section of \cite{BC}. The subcase $\tau (C) \leq (d-1)(d-2)$ leads again to $d_1 \geq 2$, and the inequalities in
Theorem \ref{thmWH} (ii) work unchanged.
Finally, if $E(U)<0$, then any irreducible component $C_i$ is a rational cuspidal curve and there is a unique point $p$ such that $C_i \cap C_j =\{p\}$ for any $i \ne j$, i.e. the curve $C$ is very special. We do not know which free divisors occur in this way, except the union of lines passing through one point.
In conclusion, with the exception of the special cases listed here, one has
$ \tau (C) \leq d^2-4d+7$ even for a reducible free divisor $C$.
\end{rk}
\begin{ex}\label{exd=7,8}
Let $C$ be a free divisor of degree $d=6$ which is rational and cuspidal. It follows from Theorem \ref{thmWH} that $C$ has at least one singularity which is not weighted homogeneous, and moreover
$\tau(C)=19.$
Let $C$ be a free divisor of degree $d=7$ which is rational and cuspidal. It follows from Theorem \ref{thmWH} that
$\tau(C) \in \{27,28\}$. However, all the examples of such curves coming from \cite{FZ}, \cite{SaTo}, \cite{ST} have $\tau(C) =28.$ On the other hand, there are free line arrangements of degree 7 having $\tau=27$, for instance the line arrangement whose defining polynomial is
$$f=(x+z)(x-z)(y+z)(y-z)(x+y)(x-y)z.$$
Similarly, let $C$ be a free divisor of degree $d=8$ which is rational and cuspidal. It follows from Theorem \ref{thmWH} that
$\tau(C) \in \{37,39\},$ but all the examples of such curves coming from \cite{FZ}, \cite{SaTo}, \cite{ST} have $\tau(C) =39.$ But there are free line arrangements of degree 8 having $\tau=37$, for instance that defined by the polynomial
$$f=(x+z)(x-z)(y+z)(y-z)(x+y)(x-y)yz.$$
However, for $d=9$ it follows from Theorem \ref{thmWH} that
$\tau(C) \in \{48,49, 52\},$ and the divisor $C_9$ in Remark \ref{rkeq2i} has $\tau(C_9)=52$, while the curve $C_9$ described in Theorem \ref{thm2ii} has $\tau(C_9)=48$. Moreover, the free line arrangement given by
$$f=(x+z)(x-z)(y+z)(y-z)(x+y)(x-y)xyz=0$$
has $\tau(C)=49$.
\end{ex}
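The sets of admissible $\tau$ values above can be recovered mechanically from Theorem \ref{thmHP} (i) together with the constraint $d_1\geq 2$ of Theorem \ref{thmWH}. The following stdlib Python sketch (our own illustration, not part of the text) enumerates all splittings $d-1=d_1+d_2$ and reproduces exactly the sets listed in this example:

```python
def possible_tau(d):
    """All values tau = (d-1)^2 - d1*d2 allowed by Theorem thmHP (i)
    for a free curve of degree d, using d1 + d2 = d - 1 together with
    the constraint 2 <= d1 <= d2 coming from Theorem thmWH."""
    out = set()
    for d1 in range(2, (d - 1) // 2 + 1):
        d2 = d - 1 - d1
        out.add((d - 1) ** 2 - d1 * d2)
    return out

assert possible_tau(6) == {19}
assert possible_tau(7) == {27, 28}
assert possible_tau(8) == {37, 39}
assert possible_tau(9) == {48, 49, 52}
```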
\section{Some classification results for rational cuspidal curves}
In this section we recall some of the classification results for rational cuspidal curves in $\PP^2$.
First we consider the case of rational cuspidal curves of type $(d,d-2)$, following the work by Flenner-Zaidenberg \cite{FZ} and Sakai-Tono \cite{SaTo}. Note that a rational cuspidal curve of type $(d,d-2)$ may have as its ``largest'' cusp a cusp with the unique Puiseux characteristic pair $(d-1,d-2)$, i.e. topologically equivalent to the cusp $u^{d-1}+v^{d-2}=0$, see Examples \ref{exprop2i} and \ref{exprop3} below. Such a cusp is called in the sequel a cusp of type $(d-1,d-2)$. However, this largest cusp can also be a cusp with the unique Puiseux characteristic pair $(d,d-2)$ for $d$ odd, i.e. topologically equivalent to the cusp $u^{d}+v^{d-2}=0$, called a cusp of type $(d,d-2)$, see Theorem \ref{thm2ii} for an infinite series of examples.
In increasing order of the number of cusps, we have the following results.
The case of one cusp is quite clear-cut, and it is described in \cite{SaTo}.
\begin{prop}
\label{prop1}
Let $C$ be a rational cuspidal curve of type $(d,d-2)$ having a unique cusp. Then $d$ is even and up to projective equivalence the equation of $C$ can be written as
$$f=(y^kz+\sum_{i=1,k+1}a_ix^i y^{k+1-i})^2 - xy^{2k+1}=0,$$
where $a_{k+1}\ne 0$ and $d=2k+2 \geq 4$.
\end{prop}
The case of two cusps is more involved, and is described in \cite{SaTo}.
\begin{prop}
\label{prop2}
Let $C_d:f_d=0$ be a rational cuspidal curve of type $(d,d-2)$ having two cusps, say $q_1$ of multiplicity $d-2$ and $q_2$ of multiplicity at most $d-2$. Then one of the following three cases arises.
\noindent (i) The germ $(C_d,q_2)$ is a singularity of type $A_{2d-4}$, hence of multiplicity $2$ and Milnor (or Tjurina) number $2d-4$. Then for each $d \geq 4$, $C_d$ is unique up to projective equivalence.
\noindent (ii) The germ $(C_d,q_2)$ is a singularity of type $A_{d-1}$, with $d$ odd. Up to projective equivalence the equation of $C_d$ can be written as
$$C:f=(y^{k-1}z+\sum_{i=2,k}a_ix^i y^{k-i})^2y - x^{2k+1}=0,$$
where $d=2k+1 \geq 5$.
\noindent (iii) The germ $(C_d,q_2)$ is a singularity of type $A_{2j}$, with $d$ even and $1 \leq j \leq (d-2)/2$. Up to projective equivalence the equation of $C_d$ can be written as
$$C_d:f_d=(y^{k+j}z+\sum_{i=2,k+j+1}a_ix^i y^{k+j+1-i})^2 - x^{2j+1}y^{2k+1}=0,$$
where $a_{k+j+1} \ne 0$, $d=2k+2j+2 \geq 6$, $k \geq 0$, $j \geq 1$.
\end{prop}
\begin{rk}
\label{rkeq2i}
To obtain the equations $f_d=0$ for the curves $C_d$ in Proposition \ref{prop2} $(i)$, one can proceed as follows. Start with the equation for the cuspidal cubic $C_3: f_3(x,y,z)=yz^2-x^2z+x^3$. For $d \geq 4$, to get the polynomial $f_d$ from the previous polynomial $f_{d-1}$,
supposed already computed, we proceed as follows.
First we perform the substitution in $f_{d-1}$ given by
$$x \mapsto x^2, \quad y \mapsto xy, \quad z \mapsto yz+a_{d-1}x^2,$$
with $a_{d-1}$ the coefficient of $x^{d-1}$ in $f_{d-1}$, then we divide by $x^{d-3}y$ and denote the resulting polynomial by $f_d$. In other words
$$f_d(x,y,z)=f_{d-1}(x^2,xy, yz+a_{d-1}x^2)x^{-d+3}y^{-1}.$$
We get in this way
$$C_4: f_4(x,y,z)=(yz+x^2)^2-x^3z=0,$$
$$C_5: f_5(x,y,z)=y(yz+x^2)^2+2x^3(yz+x^2)-x^4z=0,$$
$$C_6:(yz + 2x^2)^2y^2 + 2(yz + 2x^2)yx^3 + (2yz + 5x^2)x^4 - x^5z = 0.$$
These equations occur already in \cite{SaTo}.
The next few curves in this family are the curves
$$C_7:f_7=14x^7 + 14x^6y + 20x^5y^2 + 25x^4y^3 - x^6z + 2x^5yz + 2x^4y^2z + 4x^3y^3z + 10x^2y^4z $$
$$+ y^5z^2=0,$$
$$C_8: f_8= 42x^8 + 48x^7y + 81x^6y^2 + 140x^5y^3 + 196x^4y^4 - x^7z + 2x^6yz + 2x^5y^2z $$
$$+ 4x^4y^3z + 10x^3y^4z +
28x^2y^5z + y^6z^2=0,$$
$$C_9: f_9=132x^9 + 165x^8y + 308x^7y^2 + 616x^6y^3 + 1176x^5y^4 + 1764x^4y^5 - x^8z + 2x^7yz $$
$$+ 2x^6y^2z + 4x^5y^3z +
10x^4y^4z + 28x^3y^5z + 84x^2y^6z + y^7z^2=0,$$
and finally
$$C_{10} :f_{10}=429x^{10} + 572x^9y + 1144x^8y^2 + 2496x^7y^3 + 5460x^6y^4 + 11088x^5y^5 + 17424x^4y^6 $$
$$- x^9z + 2x^8yz +
2x^7y^2z + 4x^6y^3z + 10x^5y^4z + 28x^4y^5z + 84x^3y^6z + 264x^2y^7z + y^8z^2=0.$$
The curve $C_d:f_d=0$ for $5 \leq d \leq 15$ has $d_1=2$, the corresponding relation being
$$A_df_{d,x}+B_df_{d,y}+C_df_{d,z}=0,$$
with the coefficients $A_d= (d-2)x^2+4(d-3)xy$, $B_d=2(d-1)xy-4(2d-3)y^2$, $C_d=2d(2d-7)a_{d-1}x^2-(d-1)(d-2)xz+2(d-2)(2d-3)yz$, and the coefficient $a_{d-1}$ as introduced above. It follows from Theorem \ref{thmHP} that $d_2=d-3$ and $\tau(C_d)=d^2-4d+7$.
\end{rk}
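The recursion $f_d(x,y,z)=f_{d-1}(x^2,xy, yz+a_{d-1}x^2)x^{-d+3}y^{-1}$ is easy to implement with exact integer arithmetic. The following stdlib Python sketch (our own verification, representing polynomials as dictionaries of exponent triples) reproduces the equations of $C_4,\dots,C_7$ displayed above:

```python
# Polynomials in (x, y, z) as dicts {(i, j, k): coefficient}.

def pmul(p, q):
    r = {}
    for m, a in p.items():
        for n, b in q.items():
            key = (m[0] + n[0], m[1] + n[1], m[2] + n[2])
            r[key] = r.get(key, 0) + a * b
    return {m: c for m, c in r.items() if c}

def padd(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, 0) + c
    return {m: c for m, c in r.items() if c}

def ppow(p, n):
    r = {(0, 0, 0): 1}
    for _ in range(n):
        r = pmul(r, p)
    return r

def subst(p, sx, sy, sz):
    # monomial-by-monomial substitution x -> sx, y -> sy, z -> sz
    r = {}
    for (i, j, k), c in p.items():
        t = pmul(pmul(ppow(sx, i), ppow(sy, j)), ppow(sz, k))
        r = padd(r, {m: c * cc for m, cc in t.items()})
    return r

def next_curve(f, d):
    # f_d(x,y,z) = f_{d-1}(x^2, x*y, y*z + a_{d-1} x^2) / (x^{d-3} y),
    # with a_{d-1} the coefficient of x^{d-1} in f_{d-1}
    a = f.get((d - 1, 0, 0), 0)
    g = subst(f, {(2, 0, 0): 1}, {(1, 1, 0): 1}, {(0, 1, 1): 1, (2, 0, 0): a})
    return {(i - (d - 3), j - 1, k): c for (i, j, k), c in g.items()}

curves = {3: {(0, 1, 2): 1, (2, 0, 1): -1, (3, 0, 0): 1}}  # y z^2 - x^2 z + x^3
for d in range(4, 8):
    curves[d] = next_curve(curves[d - 1], d)

# f_4 = (y z + x^2)^2 - x^3 z, as stated above
assert curves[4] == {(0, 2, 2): 1, (2, 1, 1): 2, (4, 0, 0): 1, (3, 0, 1): -1}
# spot-check two coefficients of f_7 against the displayed equation
assert curves[7][(7, 0, 0)] == 14 and curves[7][(2, 4, 1)] == 10
```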
Now we turn to the case of three cusps, treated in \cite{FZ}. One has the following result.
\begin{prop}
\label{prop3}
Let $C$ be a rational cuspidal curve of type $(d,d-2)$ having three cusps. Then there exists a unique pair of integers $a,b$, $a\geq b \geq 1$ with $a+b=d-2$ such that up to projective equivalence the equation of $C$ can be written in affine coordinates $(x,y)$ as
$$f(x,y)= \frac{x^{2a+1}y^{2b+1}-((x-y)^{d-2}-xyg(x,y))^2}{(x-y)^{d-2}},$$
where $d \geq 4$, $g(x,y)=y^{d-3}h(x/y)$ and
$$h(t)= \sum_{k=0,d-3}\frac{a_k}{k!}(t-1)^k,$$
with $a_0=1$, $a_1=a-\frac{1}{2}$ and $a_k=a_1(a_1-1) \cdots (a_1-k+1)$ for $k>1$.
\end{prop}
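One can check in small cases that the right-hand side above is indeed a polynomial of degree $d$. The stdlib Python sketch below (our own verification, not taken from \cite{FZ}) works in the variables $u=x-y$ and $y$, where $g=\sum_k {a-1/2 \choose k} u^k y^{d-3-k}$, and performs the exact division of the numerator by $u^{d-2}$:

```python
from fractions import Fraction

def pmul(p, q):  # polynomials in (u, y) as {(i, j): Fraction}
    r = {}
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            r[(i + k, j + l)] = r.get((i + k, j + l), 0) + a * b
    return {m: c for m, c in r.items() if c}

def padd(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, 0) + c
    return {m: c for m, c in r.items() if c}

def ppow(p, n):
    r = {(0, 0): Fraction(1)}
    for _ in range(n):
        r = pmul(r, p)
    return r

def three_cusp_curve(a, b):
    d = a + b + 2
    a1 = Fraction(2 * a - 1, 2)                 # a_1 = a - 1/2
    # g = sum_k (a_k / k!) u^k y^(d-3-k), with a_k/k! = binom(a1, k)
    g, c = {}, Fraction(1)
    for k in range(d - 2):
        g[(k, d - 3 - k)] = c
        c = c * (a1 - k) / (k + 1)
    xy = {(1, 1): Fraction(1), (0, 2): Fraction(1)}    # x*y = (u+y)*y
    x = {(1, 0): Fraction(1), (0, 1): Fraction(1)}     # x = u + y
    y = {(0, 1): Fraction(1)}
    inner = padd({(d - 2, 0): Fraction(1)},
                 {m: -c for m, c in pmul(xy, g).items()})  # (x-y)^(d-2) - x y g
    num = padd(pmul(ppow(x, 2 * a + 1), ppow(y, 2 * b + 1)),
               {m: -c for m, c in ppow(inner, 2).items()})
    # the numerator must be divisible by u^(d-2) = (x-y)^(d-2)
    assert all(i >= d - 2 for (i, j) in num)
    f = {(i - (d - 2), j): c for (i, j), c in num.items()}
    return d, f

for (a, b) in [(1, 1), (2, 1)]:
    d, f = three_cusp_curve(a, b)
    assert max(i + j for (i, j) in f) == d     # f has total degree d
```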
\begin{rk}
\label{rkrigid}
The above results imply that the cuspidal rational curves in Proposition \ref{prop2} $(i)$ and Proposition \ref{prop3} are projectively rigid, a fact explicitly discussed in \cite{FZ}.
\end{rk}
Another classification problem is to find all the Puiseux pairs $(a,b)$ such that there is a rational unicuspidal (i.e. $\kappa=1$) curve $C$ in $\PP^2$, whose cusp has a unique Puiseux pair $(a,b)$. The complete result is described in Theorem 1.1 in \cite{FLMN}, but here we just mention the cases $(b)$ and $(d)$ of this Theorem. See also \cite{BL}, section 2.3 and \cite{Ka}, Corollary 11.4.
\begin{prop}
\label{prop4}
(i) The curve $C_d:f_d=(zy-x^2)^k-xy^{2k-1}=0$ is a rational unicuspidal plane curve of degree $d=2k$ for $k\geq 2$, with a unique Puiseux pair $(k,4k-1)$.
(ii) Let $a_i$ be the Fibonacci numbers with $a_0=0$, $a_1=1$, $a_{j+2}=a_{j+1}+a_{j}$. Then there is a rational unicuspidal plane curve $C_k$ of degree $a_{2k+5}$ for $k\geq 0$, with a unique Puiseux pair $(a_{2k+3},a_{2k+7})$ and whose defining affine equation is obtained as follows.
Set $P_{-1}=y-x^2$, $Q_{-1}=y$, $P_0=(y-x^2)^2-2xy^2(y-x^2)+y^5$, $Q_0=y-x^2$,
$G=xy-x^3-y^3$. Then define for any $k >0$ recursively
$Q_k=P_{k-1}$ and $P_k=(G^{a_{2k+3}}+Q_k^3)/Q_{k-1}$. Then $P_k=0$ is the defining affine equation for the curve $C_k$.
\end{prop}
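The first step of this recursion is easy to carry out with exact polynomial arithmetic. The following stdlib Python sketch (our own check) computes $P_1=(G^{a_5}+P_0^3)/Q_0$ by synthetic division with respect to $y$, and confirms that $\deg P_0=a_5=5$ and $\deg P_1=a_7=13$, in accordance with the degrees stated in (ii):

```python
# Bivariate polynomials in (x, y) as dicts {(i, j): int}.

def pmul(p, q):
    r = {}
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            r[(i + k, j + l)] = r.get((i + k, j + l), 0) + a * b
    return {m: c for m, c in r.items() if c}

def padd(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, 0) + c
    return {m: c for m, c in r.items() if c}

def ppow(p, n):
    r = {(0, 0): 1}
    for _ in range(n):
        r = pmul(r, p)
    return r

def div_y_minus_x2(p):
    """Exact division by Q_0 = y - x^2 (synthetic division in y)."""
    by_y = {}
    for (i, j), c in p.items():
        by_y.setdefault(j, {})[i] = c
    quot, carry = {}, {}
    for j in range(max(by_y), 0, -1):
        cj = dict(by_y.get(j, {}))
        for i, c in carry.items():
            cj[i] = cj.get(i, 0) + c
        for i, c in cj.items():
            if c:
                quot[(i, j - 1)] = c
        carry = {i + 2: c for i, c in cj.items() if c}
    rem = dict(by_y.get(0, {}))
    for i, c in carry.items():
        rem[i] = rem.get(i, 0) + c
    assert all(c == 0 for c in rem.values())   # division is exact
    return quot

fib = [0, 1]
while len(fib) < 10:
    fib.append(fib[-1] + fib[-2])              # Fibonacci numbers a_j

Q0 = {(0, 1): 1, (2, 0): -1}                   # y - x^2
# P_0 = (y - x^2)^2 - 2 x y^2 (y - x^2) + y^5
P0 = padd(padd(ppow(Q0, 2),
               {m: -2 * c for m, c in pmul({(1, 2): 1}, Q0).items()}),
          {(0, 5): 1})
G = {(1, 1): 1, (3, 0): -1, (0, 3): -1}        # x y - x^3 - y^3

P1 = div_y_minus_x2(padd(ppow(G, fib[5]), ppow(P0, 3)))
assert max(i + j for (i, j) in P0) == fib[5] == 5    # deg P_0 = a_5
assert max(i + j for (i, j) in P1) == fib[7] == 13   # deg P_1 = a_7
```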
\section{New examples of irreducible free divisors} \label{sec3}
Motivated by Proposition \ref{prop5} and Remark \ref{rkrigid}, as well as by the discussion in the Introduction, we now search for free divisors among the cuspidal rational curves listed in the previous section.
We start with the rigid curves, and recall that a rational quartic is never a free divisor, see \cite{ST}.
\begin{ex}\label{exprop2i}
The curves $C_d$, with $5 \leq d \leq 15$ from Proposition \ref{prop2} $(i)$, whose equations are given in
Remark \ref{rkeq2i} are free divisors. Indeed, the condition $I_{f,d}=J_{f,d}$ is in fact equivalent to the condition $\dim I_{f,d}=\dim J_{f,d}$, as we have by definition $J_{f,d}\subset I_{f,d}$.
The equality of dimensions is easily checked using Singular, by computing the Hilbert functions associated to the graded rings $S/I_f$ and $M(f)$.
Each of these curves $C_d$ has two cusps, one of type $(d-1,d-2)$ and the other of type $(2,2d-3)$, i.e. a singularity $A_{2d-2}$. Moreover $d_1=2$ for all of them as shown in Remark \ref{rkeq2i}, and hence
$\tau(C_d)=d^2-4d+7$ and $\mu(C_d)= (d-2)(d-3)+2d-2=d^2-3d+4$ which is compatible with the formula in Corollary \ref{corRC}.
\end{ex}
\begin{ex}\label{exd=6}
The cuspidal rational curves of degree 6 have been classified by Fenske, \cite{F},
who obtained 11 main cases.
The 1-cuspidal curves in this classification fall into 3 classes: $C_1$ (with subcases (a), (b), (c), (d)), $C_2$ (with subcases (a), (b)), $C_3$ (with subcases (a), (b), (c)).
Among these families, only the type $C_3$ (a) is free. It follows that the curve obtained for $d=6$ in the family \eqref{STfam} belongs to this class.
The 2-cuspidal curves fall into 6 classes: $C_4$ (with subcases (a), (b)), $C_5$, $C_6$, $C_7$, $C_8$ and $C_9$. Among these families, the types $C_4$ (b), $C_5$, $C_6$ and $C_9$ are free.
The two classes of 3-cuspidal curves are special cases of the curves discussed in Proposition \ref{prop3} and Example \ref{exprop3}.
All the free divisors above satisfy $ct(f)=st(f)=6$ and $\tau(C)=19$. To decide which divisors are free we use Singular computations, and for families involving parameters we use Lemma 1.1 in \cite{ST}.
\end{ex}
\begin{ex}\label{exprop3}
The curves $C_d$ from Proposition \ref{prop3} for $5 \leq d \leq 10$ are free divisors.
This is proved by following the same approach as in the previous example. Moreover, in this case we have $d_1=2$ and hence again $\tau(C_d)=d^2-4d+7$. The description of singularities in
Proposition \ref{prop3} implies that $\mu(C)= d^2-3d+2$ for any $(a,b)$.
\end{ex}
\begin{ex}\label{exprop4}
(i) The curves $C_d$ from Proposition \ref{prop4} (i) for $6 \leq d=2k \leq 20$ are free divisors with $d_1=2$ and hence $\tau(C_d)=d^2-4d+7$.
(ii) The curves $C_k$ from Proposition \ref{prop4} (ii) for $0 \leq k \leq 3$ are free divisors and have the following invariants.
\noindent (0) The curve $C_0$ has degree $d=5$, $\tau(C)=12$, $\mu(C)=12$ and $d_1=2$.
\noindent (1) The curve $C_1$ has degree $d=13$, $\tau(C)=108$, $\mu(C)=132$ and $d_1=6$.
\noindent (2) The curve $C_2$ has degree $d=34$, $\tau(C)=823$, $\mu(C)= 1056$ and $d_1=14$.
\noindent (3) The curve $C_3$ has degree $d=89$, $\tau(C)=5889$, $\mu(C)=7656$ and $d_1=35$.
This is proved by following the same approach as in the previous example.
\end{ex}
It is natural to suggest the following conjecture.
\begin{conj}
\label{conj40}
Any of the rational cuspidal curves described in Proposition \ref{prop2} $(i)$, Proposition \ref{prop3}, and Proposition \ref{prop4} is a free divisor if its degree $d$ is at least $5$.
\end{conj}
Next we present an infinite series of irreducible free divisors obtained from the classification of
rational cuspidal curves. This family of curves is a special case of the family described in Proposition \ref{prop2} $(ii)$.
\begin{thm}
\label{thm2ii}
The rational cuspidal curve
$$C_{2k+1}: f_{2k+1}=(y^{k-1}z+x^k)^2y-x^{2k+1}=0$$
of type $(2k+1,2k-1)$ has two cusps, of types $(2k+1,2k-1)$ and $(2k+1,2)$ respectively, and is a free divisor for any $k \geq 2$. The corresponding Jacobian ideal $J_{f_{2k+1}}$ is of linear type if and only if $k=2$.
Moreover $\tau( C_{2k+1})=3k^2$, $\mu( C_{2k+1})=2k(2k-1)$ and $d_1=d_2=k$.
\end{thm}
\proof To prove that we have free divisors for any $k$ we proceed as follows.
We look at the syzygies among the partial derivatives $f_x,f_y,f_z$ and find that we have two such syzygies in degree $k$, namely:
$$(r_1 ): \ \ \ a_xf_x+a_yf_y+a_zf_z=0,$$
where
$a_x=2x^k+2y^{k-1}z$,
$a_y=(4k+2) x^k -4k x^{k-1}y-(8k^2-2) y^{k-1}z$,\\
$ a_z=4k(k-1) x^{k-1}z + (8k^3-4k^2-2k+1) y^{k-2}z^2$ and
$$(r_2 ): \ \ \ b_xf_x+b_yf_y+b_zf_z=0,$$
where
$b_x=0$,
$b_y=-2y^k$ and
$b_z=x^k+(2k-1)y^{k-1}z$. It is clear that $(r_1)$ and $(r_2)$ are linearly independent as $b_x=0$ and $a_x \ne 0$. Then we apply Lemma 1.1 and Proposition 1.8 in \cite{ST}, exactly as in the proof of Proposition 2.2 in \cite{ST}.
The Milnor number $\mu( C_{2k+1})$ is computed using Corollary \ref{corRC} and this implies that the largest cusp of $C_{2k+1}$ has type $(2k+1,2k-1)$. Indeed, the other cusp is described in
Proposition \ref{prop2} $(ii)$ and we know that it has type $(2k+1,2)$, i.e. it is an $A_{2k}$-singularity with Milnor number $2k$.
\endproof
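Both syzygies $(r_1)$ and $(r_2)$ can be verified by a direct symbolic computation. The following stdlib Python sketch (our own check, independent of the proof) expands $a_xf_x+a_yf_y+a_zf_z$ and $b_xf_x+b_yf_y+b_zf_z$ for $2\leq k\leq 5$ and confirms that both vanish identically:

```python
# Polynomials in (x, y, z) as dicts {(i, j, k): int}.

def pmul(p, q):
    r = {}
    for m, a in p.items():
        for n, b in q.items():
            key = (m[0] + n[0], m[1] + n[1], m[2] + n[2])
            r[key] = r.get(key, 0) + a * b
    return {m: c for m, c in r.items() if c}

def padd(*ps):
    r = {}
    for p in ps:
        for m, c in p.items():
            r[m] = r.get(m, 0) + c
    return {m: c for m, c in r.items() if c}

def ppow(p, n):
    r = {(0, 0, 0): 1}
    for _ in range(n):
        r = pmul(r, p)
    return r

def pdiff(p, v):  # partial derivative with respect to variable index v
    r = {}
    for m, c in p.items():
        if m[v]:
            n = list(m)
            n[v] -= 1
            r[tuple(n)] = r.get(tuple(n), 0) + c * m[v]
    return r

def mono(i, j, k, c=1):
    return {(i, j, k): c}

for k in range(2, 6):
    g = padd(mono(0, k - 1, 1), mono(k, 0, 0))            # y^(k-1) z + x^k
    f = padd(pmul(ppow(g, 2), mono(0, 1, 0)),             # (y^(k-1) z + x^k)^2 y
             mono(2 * k + 1, 0, 0, -1))                   # - x^(2k+1)
    fx, fy, fz = (pdiff(f, v) for v in range(3))
    # first syzygy (r1), with the coefficients a_x, a_y, a_z stated above
    ax = padd(mono(k, 0, 0, 2), mono(0, k - 1, 1, 2))
    ay = padd(mono(k, 0, 0, 4 * k + 2), mono(k - 1, 1, 0, -4 * k),
              mono(0, k - 1, 1, -(8 * k * k - 2)))
    az = padd(mono(k - 1, 0, 1, 4 * k * (k - 1)),
              mono(0, k - 2, 2, 8 * k**3 - 4 * k * k - 2 * k + 1))
    assert padd(pmul(ax, fx), pmul(ay, fy), pmul(az, fz)) == {}
    # second syzygy (r2), with b_x = 0
    by = mono(0, k, 0, -2)
    bz = padd(mono(k, 0, 0), mono(0, k - 1, 1, 2 * k - 1))
    assert padd(pmul(by, fy), pmul(bz, fz)) == {}
```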
| {
"timestamp": "2015-05-04T02:04:42",
"yymm": "1504",
"arxiv_id": "1504.01242",
"language": "en",
"url": "https://arxiv.org/abs/1504.01242",
"abstract": "A characterization of freeness for plane curves in terms of the Hilbert function of the associated Milnor algebra is given as well as many new examples of rational cuspidal curves which are free. Some stronger properties are stated as conjectures.",
"subjects": "Algebraic Geometry (math.AG); Commutative Algebra (math.AC)",
"title": "Free divisors and rational cuspidal plane curves",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.983085084750966,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.709534981131508
} |
https://arxiv.org/abs/1905.12306 | Effective viscosity of a polydispersed suspension | We compute the first order correction of the effective viscosity for a suspension containing solid particles with arbitrary shapes. We rewrite the computation as an homogenization problem for the Stokes equations in a perforated domain. Then, we extend the method of reflections to approximate the solution to the Stokes problem with a fixed number of particles. By obtaining sharp estimates, we are able to prove that this method converges for small volume fraction of the solid phase whatever the number of particles. This allows to address the limit when the number of particles diverges while their radius tends to 0. We obtain a system of PDEs similar to the Stokes system with a supplementary term in the viscosity proportional to the volume fraction of the solid phase in the mixture. | \section{Introduction}
When a viscous fluid transports solid particles, the particles modify in return the properties of the fluid. For instance, the rheological properties of the fluid are altered.
In his seminal paper \cite{Einstein}, Einstein addresses the computation of the effective viscosity of the mixture, having in mind that it could help to recover the size of the transported particles. He then obtains the formula:
\begin{equation} \label{eq_einstein}
\mu_{eff} = \mu \left(1 + \dfrac{5}{2}\phi + o(\phi)\right).
\end{equation}
Here $\mu$ stands for the (bulk) viscosity of the incompressible fluid alone,
$\mu_{eff}$ denotes the viscosity of the mixture and $\phi$ stands for the
volume fraction of the solid suspension of spheres. Einstein's formula has been the subject of numerous studies: analysis of Einstein's ``formal'' computations \cite{Almog-Brenner,Keller-Rubenfeld,Ammari,Haines-Mazzucato}, computation of second order expansions \cite{Hinch,Batchelor-Green,GVH}. We refer the reader also to
\cite{JeffreyAcrivos} for a comprehensive picture of the possible phenomena influencing the effective viscosity of a suspension. Most of these studies consider homogeneous suspensions. However, as mentioned in \cite{JeffreyAcrivos}, a formula for the effective viscosity depending only on the volume fraction is hopeless for describing general suspensions; in particular, the factor $5/2$ in the above formula is {\em a priori} valid only for a suspension of spheres. In this paper, we provide a method for the computation of an effective viscosity allowing a distribution of shapes for the particles in the suspension.
\medskip
A second motivation of the paper is to obtain a ``local'' formula for the effective viscosity similar to \cite{Almog-Brenner,Niet-Schub}. To be more precise, we now rephrase the computation of an effective viscosity, as depicted in \cite{Batchelor}, as a homogenization problem. We consider an incompressible newtonian fluid occupying the whole space $\mathbb R^3$ and transporting a cloud made of $N$ particles. We neglect the particle and
fluid inertia, so that computing an effective viscosity amounts to understanding
the behavior of the system when it is submitted to a strain flow $x \mapsto Ax$
(where $A$ is a symmetric trace-free matrix). This reduces to the following
{\em stationary} problem. We denote by $(u,p)$ the fluid velocity-field/pressure.
The domain of the $l$-th solid particle is the smooth bounded open set $B_l \subset \mathbb R^3$ and its center of mass is $x_l.$ The motion of $B_l$ is associated to a pair of translational/rotational velocities $(V_l,\omega_l)$. Introducing $\mu$ the viscosity of the fluid, the unknowns $(u,p,(V_l,\omega_l)_{l=1,\ldots,N})$ are computed by solving the problem
\begin{align}
\label{eq_Stokes}
& \left\{
\begin{aligned}
- {\rm div}\, \Sigma_{\mu}(u,p) &= 0 \\
{\rm div}\, u &= 0
\end{aligned}
\right.
\qquad \qquad
\text{ in $\mathbb R^3 \setminus \bigcup_{l=1}^N \overline{B}_l,$}
\\
\label{bc_Stokes}
& \left\{
\begin{aligned}
u(x) &= V_l + \omega_l \times (x-x_l) && \text{ on $\partial B_l,$ for $l=1,\ldots,N$} \\
u(x) &= Ax && \text{ at infinity,}
\end{aligned}
\right.
\\
\label{eq_Newton}
& \left\{
\begin{aligned}
\int_{\partial B_l} \Sigma_{\mu}(u,p) n {\rm d}s &= 0 \\
\int_{\partial B_l} (x-x_l) \times \Sigma_{\mu}(u,p) n {\rm d}s &= 0 \\
\end{aligned}
\right.
\qquad \qquad \text{for }\, l = 1,\ldots,N.
\end{align}
In this system, we introduced the fluid stress-tensor $\Sigma_{\mu}(u,p).$
Under the assumption that the fluid is newtonian, it reads:
\[
\Sigma_{\mu}(u,p) = 2\mu D(u) - p \mathbb I_{3} = \mu (\nabla u + \nabla^{\top} u) - p \mathbb I_3.
\]
The zero source terms on the right-hand side of the first equation in \eqref{eq_Stokes} and both equations of \eqref{eq_Newton} are reminiscent of the inertialess assumption. The second equation of \eqref{bc_Stokes} must be understood as
\[
\lim_{x \to \infty} |u(x) - Ax| = 0.
\]
In the last equations \eqref{eq_Newton} the symbol $n$ stands for the normal to
$\partial B_l.$ By convention, we assume that it points inwards the solid $B_l$
and outwards the fluid domain that we denote $\mathcal F_N$ in what follows:
\[
\mathcal F_N = \mathbb R^3 \setminus \bigcup_{l=1}^N \overline{B}_l.
\]
Under the assumption that the $B_l$ do not overlap, existence/uniqueness of a solution to \eqref{eq_Stokes}-\eqref{bc_Stokes}-\eqref{eq_Newton} falls into the scope of the classical theory for the Stokes equations (see \cite[Section V]{Galdi}). We give a little more details in the next section. We only mention here that the pressure is unique up to a constant. But, this has no impact on our computations and we consider the pressure as being uniquely defined below (this problem could be fixed by assuming that one mean of the pressure has a fixed value). Our aim is to tackle the asymptotics of this solution when the $B_l$ are small and many. To make this statement quantitative, we introduce further assumptions regarding the $B_l$.
Namely, we assume that there exists a diameter $a>0,$ centers $x_l \in \mathbb R^3$ and shapes $\mathcal B_l$ (meaning smooth bounded connected open sets of $\mathbb R^3$) such that
\begin{equation} \tag{H1} \label{H1}
\mathcal B_l \subset B(0,1), \qquad \int_{\mathcal B_l} x{\rm d}x = 0, \qquad
B_l = x_l + a \mathcal B_l, \qquad \forall \, l=1,\ldots,N.
\end{equation}
Then, we prescribe that the solid domains remain in a compact set $K$ and that there is no-overlap between the particles:
\begin{equation} \tag{H2} \label{H2}
B_l \subset K \quad \forall \, l =1,\ldots,N \,, \qquad d := \min_{l\neq \lambda} |x_l-x_\lambda| > 4 a.
\end{equation}
With these conventions, we note that the total volume of the solid phase
is at most $4\pi Na^3/3$ so that, globally, in the volume $K,$ the volume fraction of the solid phase is controlled by $4\pi Na^3/(3|K|).$ However, the separation assumption \eqref{H2} implies that we also have a uniform local control of the solid phase volume fraction by $a^3/d^3.$ We also use
constantly below that, with \eqref{H1}-\eqref{H2}, we obtain $N \leq C|K|/ d^{3}.$
\medskip
In order to derive an effective viscosity for the mixture, the classical point of view proposed in \cite{Einstein,Batchelor} is to compute the rate of work of the viscous stress tensor on the boundary $\partial K$ of the domain $K$ containing the solid particles:
\[
W_{eff} := \int_{\partial K} \Sigma_{\mu}(u,p) n \cdot Ax {\rm d}s
\]
and to compare the excess with respect to the value $W_0 = 2 \mu A:A |K|$
that would result if there were no particles. In brief, the analysis of Einstein -- in the case
the $B_l$ are spheres of radius $a$ filling a bounded domain -- relies on splitting the solution $(u,p)$ into $u = u_0+u_1,$ $p=p_0+p_1.$ Here $(u_0,p_0)$ is the pure strain applied on the boundaries at infinity:
\[
u_0(x) = Ax \qquad p_0(x) = 0,
\]
(this is a solution to the Stokes equations on $\mathbb R^3$ since $A$ is trace-free), while the term $(u_1,p_1)$ compensates the part of the boundary data on the $B_l$ that cannot be
matched by a suitable pair $(V_l,\omega_l)$ in \eqref{bc_Stokes}. Namely, one may write:
\[
u(x)-u_0(x) = -Ax_l - A(x-x_l) + V_l + \omega_l \times (x-x_l) \qquad \text{ on $\partial B_l$}.
\]
Since $A$ is symmetric, the linear term $-A(x-x_l)$ in this boundary condition cannot be compensated by a rigid rotation. Under the assumption that the particles are well-separated, one obtains the approximation:
\[
u_1(x) = \sum_{l=1}^{N} U^a[A](x-x_l), \qquad p_1(x) = \sum_{l=1}^N P^a[A](x-x_l),
\]
where $(U^a[A],P^a[A])$ is the solution to the Stokes problem \eqref{eq_Stokes} outside $B(0,a)$ with boundary
condition $U^a(x) = -Ax$ on $\partial B(0,a)$ (and vanishing boundary conditions at infinity). With this formula at hand, one obtains that
\[
W_{eff} = W_0 + \sum_{l=1}^N \int_{\partial K} \Sigma_{\mu}(U^a[A](\cdot -x_l),P^a[A](\cdot-x_l))n \cdot Ax{\rm d}s.
\]
Via conservation arguments related to the divergence form of the Stokes equations, the boundary
integrals involved in $W_{eff}$ can be transformed into $N$ integrals over the boundaries of the $B(x_l,a).$
It is then possible to insert the explicit form of the solution $(U^a,P^a).$
Summing the contributions of all the particles finally leads to the first-order expansion:
\[
W_{eff} = 2 \mu A:A \left( |K| + \dfrac{1}{2} \dfrac{20\pi Na^3}{3} \right),
\]
leading to formula \eqref{eq_einstein}.
We refer the reader to \cite[p.246]{Batchelor} for more details on this computation.
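In terms of the volume fraction $\phi = 4\pi Na^3/(3|K|),$ this expansion indeed recovers Einstein's coefficient $5/2$:
\[
W_{eff} = 2\mu A:A\left(|K| + \frac{10\pi N a^3}{3}\right)
= 2\mu A:A\,|K|\left(1 + \frac{5}{2}\,\frac{4\pi N a^3}{3|K|}\right)
= 2\mu A:A\,|K|\left(1 + \frac{5}{2}\,\phi\right).
\]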
\medskip
Herein, we show that solutions to \eqref{eq_Stokes}-\eqref{bc_Stokes}-\eqref{eq_Newton} are close to solutions to the continuous analogue:
\begin{align} \label{eq_continu}
& \left\{
\begin{aligned}
- {\rm div} (2\mu [(1+ \mathbb M_{eff})(D(u))] - p \mathbb{I}_3) &=& 0 \\
{\rm div} u &=& 0
\end{aligned}
\right. && \text{ on $\mathbb R^3$}, \\ \label{bc_continu}
& \quad u(x) = A x && \text{ at infinity}.
\end{align}
Here the symbol $(1+\mathbb M_{eff})(D(u))$ stands for $D(u) + \mathbb {M}_{eff}(D(u)),$ where $\mathbb{M}_{eff}$ is a linear mapping taking $D(u)$ to the $3\times 3$ matrix $\mathbb{M}_{eff}(D(u))$. This mapping measures the collective reaction of the particles to the strain induced by $D(u).$
We emphasize that we allow this mapping to depend on the space variable $x.$
To be more precise, we explain now the computation of $\mathbb M_{eff}.$
For arbitrary $l \in \{1,\ldots,N\},$ let us denote by $(U[A,B_l],P[A,B_l])$ the unique solution
to
\begin{align} \label{eq_Stokeslocal}
& \left\{
\begin{aligned}
- {\rm div \Sigma(u,p)}&=& 0 && \text{ in $\mathbb R^3 \setminus \overline{B_l}$},\\
{\rm div} u &=& 0 &&\text{ in $\mathbb R^3 \setminus \overline{B_l}$},
\end{aligned}
\right.
&&
\left\{
\begin{aligned}
& u(x) = -Ax + V + \omega \times x && \text{ on $\partial B_l$}, \\
& u(x) = 0 && \text{ at infinity },
\end{aligned}
\right.
\\ \label{eq_Newtonlocal}
& \int_{\partial B_l} \Sigma(u,p)n{\rm d}\sigma = 0 &&
\int_{\partial B_l} (x-x_l) \times \Sigma(u,p)n{\rm d}\sigma = 0.
\end{align}
In this system, we have set $\mu=1$ and we drop the subscript $1$ on $\Sigma$ for legibility.
We note that $(V,\omega)$ are also unknowns in this problem. However, they are the Lagrange multipliers of the constraint \eqref{eq_Newtonlocal}, so that we may retain only $(U[A,B_l],P[A,B_l])$ as the solution. We associate to this solution:
\[
\mathbb M[A,B_l] := \mathbb P_{3,\sigma} \left[ \int_{\partial B_l} - \Sigma(U[A,B_l],P[A,B_l])n \otimes (x-x_l) + 2 U[A,B_l] \otimes n \,{\rm d}s\right],
\]
where $ \mathbb P_{3,\sigma}$ stands for the orthogonal projection (w.r.t. matrix contraction) on the space of symmetric trace-free $3\times 3$ matrices
${\rm Sym}_{3,\sigma}(\mathbb R)$. As shown in Section \ref{sec_cellpbm} below the matrix $\mathbb M[A,B_l]$ encodes the far-field decay of the solution $U[A,B_l]$ in the sense that:
\begin{equation} \label{eq_farfield}
U_i[A,B_l](x) = \mathbb M[A, B_l] : \nabla \mathcal U^i (x) + {\rm l.o.t.} \quad \text{ for } i=1,2,3
\end{equation}
at infinity (where the $\mathcal U^i$ are vector-fields built from the Green function of the Stokes problem).
Due to the linearity of the Stokes equations, for fixed $B_l,$
the mapping $A \mapsto \mathbb M[A,B_l]$ is linear and thus given by a mapping $\mathbb M[B_l]: {\rm Sym}_{3,\sigma}(\mathbb R) \to {\rm Sym}_{3,\sigma}(\mathbb R)$ (such a mapping can be identified with a $5\times 5$ matrix, since ${\rm Sym}_{3,\sigma}(\mathbb R)$ has dimension $5$). We set then:
\[
\mathbb M_N(x) = \dfrac{3}{4\pi a^3}\sum_{l=1}^N\mathbb M[{B}_l] \mathbf{1}_{B(x_l,a)} (x)= \dfrac{3}{4\pi}\sum_{l=1}^N\mathbb M[\mathcal{B}_l] \mathbf{1}_{B(x_l,a)}(x)
\quad \forall \, x \in \mathbb R^3.
\]
We shall obtain below -- under assumptions \eqref{H1}-\eqref{H2} -- that $|\mathbb M[\mathcal B_l] |\leq C$ with $C$ independent of the shape $\mathcal B_l.$
The function $\mathbb M_N$ is then supported in $K$ with $\|\mathbb M_N\|_{L^{1}(\mathbb R^3)} \lesssim a^3/d^3,$ so that it is bounded independently of $N.$ One can then think of $\mathbb M_{eff}$ as a possible weak limit of $\mathbb M_N$ as $N$ tends to $\infty$.
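The $L^{1}$ bound is obtained by combining $|\mathbb M[\mathcal B_l]|\leq C$ with the packing bound $N\leq C|K|/d^{3}$:
\[
\|\mathbb M_N\|_{L^{1}(\mathbb R^{3})}\leq \frac{3}{4\pi}\sum_{l=1}^{N}\big|\mathbb M[\mathcal B_l]\big|\,\big|B(x_l,a)\big|\leq C N a^{3}\leq C|K|\,\frac{a^{3}}{d^{3}}.
\]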
\medskip
For instance, in the case where the $B_l$ are spheres of radius $a$ (so that
each $\mathcal B_l$ is the sphere of radius $1$), comparing the
expansion \eqref{eq_farfield} with the explicit solution to the Stokes problem
(see \cite[p. 39]{Guazzelli}) we obtain that $\mathbb M[A,\mathcal B_l] = 20\pi A/3,$
so that
\[
\mathbb M_N \sim 5 \sum_{l=1}^N \mathbf{1}_{B(x_l,a)}.
\]
In this case, the convergence of $\mathbb M_N$ reduces to the convergence of the distribution of centers $(x_l)_{l=1,\dots,N}.$ If the empirical measures
associated to the distribution of centers converge to some $f \in L^1(\mathbb R^3),$ we obtain, with $\phi= 4\pi Na^3/(3|K|)$ the volume fraction of particles:
\begin{equation} \label{eq_casesphere}
\mathbb M_N \rightharpoonup 5 \phi f \text{ in $L^1(\mathbb R^3)-w.$}
\end{equation}
\medskip
We give herein a quantitative result with explicit stability bounds for the distance between solutions to the perforated problem \eqref{eq_Stokes}-\eqref{bc_Stokes}-\eqref{eq_Newton} and to the continuous problem
\eqref{eq_continu}-\eqref{bc_continu}. We restrict ourselves below to functions $\mathbb M_{eff}$ in the classes
\[
\mathcal M(\varepsilon) := \left\{ \mathbb M \in L^{\infty}(\mathbb R^3;{\rm Mat}_{5}(\mathbb R)), \text{ s.t. } Supp(\mathbb M) \subset K \text{ and } \|\mathbb M\|_{L^{\infty}(K)} \leq \varepsilon \right\}.
\]
Here $\varepsilon >0$ is a given parameter related to the volume fraction $a^3/d^3.$ We identify the space of linear mappings ${\rm Sym}_{3,\sigma}(\mathbb R) \to {\rm Sym}_{3,\sigma}(\mathbb R)$ with ${\rm Mat}_{5}(\mathbb R).$ With the notations introduced before, a precise statement of our main result is the following theorem.
\begin{theorem} \label{thm_main}
Let \eqref{H1}-\eqref{H2} be in force and denote by $(u_N,p_N)$ the unique solution to
\eqref{eq_Stokes}-\eqref{bc_Stokes}-\eqref{eq_Newton}. Let $\varepsilon_0 >0,$ $\mathbb M_{eff} \in \mathcal M(\varepsilon_0)$ and denote by $(u_c,p_c)$ the unique solution to \eqref{eq_continu}-\eqref{bc_continu}.
Under the assumption that $\varepsilon_0$ is sufficiently small and that $a^3/d^3 < \varepsilon_0,$ for arbitrary $p\in [1,3/2)$ there exists a constant $C_p(K,\varepsilon_0)$ depending only on $p,\varepsilon_0,K$ for which:
\begin{multline} \label{eq_mainresult}
\|u_N - u_c\|_{L^p_{loc}(\mathbb R^3)}\\
\leq C_p(K,\varepsilon_0) |A|\left[ \|\mathbb M_N -\mathbb M_{eff}\|_{\dot{H}^{-1}(\RR^3)} + \left( \dfrac{a^3}{d^3}\right)^{1+\theta} + \|\mathbb{M}_{eff}\|^2_{L^{\infty}(\mathbb R^3)} \right]
\end{multline}
where $\theta = \frac 1p - \frac 23.$
\end{theorem}
Several comments are in order. First, in \eqref{eq_mainresult},
the $\dot{H}^{-1}(\mathbb R^3)$ norm on the right-hand side must be understood componentwise. Second, in the particular case of spheres, we can
compute $\mathbb M_{eff}$ via \eqref{eq_casesphere}, so that we obtain a fully rigorous justification of the system:
\begin{align} \label{eq_continu_sphere}
& \left\{
\begin{aligned}
- {\rm div} \left[2\mu\left(1+ 5\phi f \right) D(u) - p \mathbb{I}_3\right] &=& 0 \\
{\rm div} u &=& 0
\end{aligned}
\right. && \text{ on $\mathbb R^3$}, \\ \label{bc_continu_sphere}
& \quad u(x) = A x && \text{ at infinity},
\end{align}
that has been obtained previously in \cite{Niet-Schub,Almog-Brenner}.
Finally, the restriction on the exponent $p$ is reminiscent of the singularity
of solutions to \eqref{eq_Stokeslocal}, which behave like the gradient of the
Green function for the Stokes problem on $\mathbb R^3,$ {\em i.e.} like $1/|x|^{2}$. In dimension 3, such a singularity belongs to $L^p_{loc}$ only for $p<3/2$. In particular, this restriction can be removed when measuring
the distance between $u_N$ and $u_c$ outside the particle domain $K$
(see \cite{Niet-Schub} in the case of spheres).
\medskip
As in the original proof of Einstein, Theorem \ref{thm_main} relies on two main properties. First,
each particle in the cloud behaves as if it were alone in the strain flow $x \mapsto Ax.$ Second, there is an
underlying additivity principle which implies that the action of the cloud of particles on the fluid
is the sum of the individual actions of the different particles. In the next two sections, we justify the first of these two properties by extending Einstein's computations to general suspensions. Broadly, a first guess $(u,p)$ for a solution to \eqref{eq_Stokes}-\eqref{bc_Stokes}-\eqref{eq_Newton} could be
\[
u_{app}^{(0)}(x) = Ax \qquad p_{app}^{(0)}(x) = 0.
\]
This yields a solution to \eqref{eq_Stokes} and \eqref{eq_Newton} which does not fulfill the boundary
conditions \eqref{bc_Stokes} on $\partial B_l.$ So, we apply the linearity of the Stokes problem and introduce a first corrector:
\[
\left\{
\begin{aligned}
u_1 (x) &= & \sum_{l=1}^{N} U[A,B_l](x-x_l) \\
p_1(x) & =& \sum_{l=1}^{N} \mu P[A,B_l](x-x_l)
\end{aligned}
\right.
\qquad \text{ where }
\left\{
\begin{aligned}
U[A,B_l](x) &=& a U[A,\mathcal B_l]\left(\dfrac{x}{a}\right) \\
P[A,B_l](x) &=& P[A,\mathcal B_l] \left(\dfrac{x}{a}\right)
\end{aligned}
\right.
\]
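One may check directly that this rescaling is consistent with the Stokes system (a sketch; derivatives are taken in $x$):
\[
-\Delta\Big[aU[A,\mathcal B_l]\Big(\frac{x}{a}\Big)\Big]+\nabla\Big[P[A,\mathcal B_l]\Big(\frac{x}{a}\Big)\Big]
=\frac1a\big(-\Delta U[A,\mathcal B_l]+\nabla P[A,\mathcal B_l]\big)\Big(\frac{x}{a}\Big)=0,
\]
while ${\rm div}\,[aU[A,\mathcal B_l](x/a)]=({\rm div}\, U[A,\mathcal B_l])(x/a)=0$ and the boundary value $aU[A,\mathcal B_l](x/a)=-Ax+aV+\omega\times x$ shows that the strain datum $-Ax$ is scale-invariant: only the rigid part of the boundary velocity is rescaled.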
Again, the candidate $u_{app} ^{(1)}= u^{(0)}_{app}+u_1$ is a solution to \eqref{eq_Stokes} and \eqref{eq_Newton}
but does not match the boundary conditions \eqref{bc_Stokes}. So, we proceed by compensating again the
non-rigid part of the velocity-field $u_{app}^{(1)}$ on the boundaries $\partial B_l.$ This starts a process known as the ``method of reflections''. It has been studied in other contexts in \cite{Velazquez,Salomon,OJ} and extended to the problem of the effective viscosity of a suspension of spheres in \cite{Niet-Schub}. Herein, we modify the method slightly by correcting only the first-order term in the expansion of the boundary values of
$u_{app}^{(k)}$ on $\partial B_l:$
\[
u_{app}^{(k)}(x) = V^{(k)}_l + \tilde{A}^{(k)}_{l} \cdot (x - x_l)+ O(|x-x_l|^2),
\qquad V^{(k)}_l = u_{app}^{(k)}(x_l), \; \tilde{A}^{(k)}_l = \nabla u_{app}^{(k)}(x_l).
\]
This enables us to rely on the semi-explicit solutions $(U[A,\mathcal B_l],P[A,\mathcal B_l])$ to \eqref{eq_Stokeslocal}
and to relate the final computations with the associated $\mathbb M[A,B_l].$ However, this does not
remove the key difficulty of the process. Indeed, the method of reflections leads to the iterative
formula:
\[
A^{(k+1)}_l= \sum_{\lambda \neq l} D(U)[A^{(k)}_{\lambda},B_{\lambda}](x_{l}- x_{\lambda}),
\]
with a kernel $D(U)[A,B_{\lambda}]$ which decays generically like $x \mapsto a^3A/|x|^3.$ A priori,
the above iterative formula then entails the bound:
\[
\max_{l} |A^{(k+1)}_{l}| \lesssim \left( \dfrac{a^3}{d^3}\right) |\ln(N)| \max_{l} |A^{(k)}_{l}|
\]
which yields that $a^3/d^3$ must be small compared to $1/|\ln(N)|$ for the method to converge (see assumption (2.3) in \cite{Niet-Schub}). We remove this difficulty herein
by showing that there exists a Calder\'on-Zygmund operator underlying the above recursive formula. This enables us to remove the limitation on $a^3/d^3$ with respect to the number $N$ of particles.
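Heuristically, the logarithmic loss in the naive estimate comes from summing the $1/|x|^3$ decay over shells of particles: assuming for instance that the centers occupy a lattice of spacing $d$ in $K$, the number of centers at distance $\simeq jd$ from $x_l$ is $O(j^2),$ so that
\[
\sum_{\lambda\neq l}\frac{a^3}{|x_l-x_\lambda|^3}\;\lesssim\;\frac{a^3}{d^3}\sum_{j=1}^{O(N^{1/3})}\frac{j^2}{j^3}\;\simeq\;\frac{a^3}{d^3}\,\ln N.
\]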
These computations are explained in the two next sections.
Section \ref{sec_cellpbm} is devoted to the analysis of the problem \eqref{eq_Stokeslocal}-\eqref{eq_Newtonlocal}. Section \ref{sec_refmet} builds on this analysis to study the convergence of the method of reflections and to compute
error estimates between the sequence of approximated solutions $u_{app}^{(n)}$ and the exact solution $u_N$ to \eqref{eq_Stokes}-\eqref{bc_Stokes}-\eqref{eq_Newton}.
\medskip
The last two sections are devoted to the proof of the additivity principle and to the completion of the proof of Theorem \ref{thm_main}. Once the method of reflections is proved to converge, we have an expansion of the solution
to \eqref{eq_Stokes}-\eqref{bc_Stokes}-\eqref{eq_Newton} in terms of the parameter $a^3/d^3.$ We prove that there exists an equivalent expansion of the
solution to \eqref{eq_continu}-\eqref{bc_continu} w.r.t. $\mathbb M_{eff},$ so that there is a correspondence between the first terms in the expansions of both solutions.
We emphasize that, as is classical with the weak formulation of the Stokes problem, one obtains estimates on the difference of the velocity-fields
$u_N-u_c.$ Regularity properties of the Stokes problem then entail
similar estimates for the pressures.
\medskip
Throughout the paper, we use the following conventions. In the space of $3\times 3$ matrices ${\rm Mat}_{3}(\mathbb R),$
we denote by ${\rm Sym}_3(\mathbb R)$ the set of symmetric matrices
and ${\rm Sym}_{3,\sigma}(\mathbb R)$ its subspace containing only the trace-free ones. We denote $\mathbb P_{3,\sigma}$ the orthogonal projection from
${\rm Mat}_3(\mathbb R)$ onto ${\rm Sym}_{3,\sigma}(\mathbb R)$ with respect to the matrix contraction.
Concerning function spaces, we use classical notations for Lebesgue and Sobolev spaces. We also introduce the Beppo-Levi space $\dot{H}^1$
and its divergence-free variant:
\begin{eqnarray*}
\dot{H}^1_\sigma(\mathcal O):=\{u\in\dot{H}^1(\mathcal O) \text{ such that } {\rm div}\, u=0 \text{ on } \mathcal O\}.
\end{eqnarray*}
In the whole paper, we denote by $\mathbf{U}:=(U^{ij})_{1\leq i,j\leq 3}$ and $\mathbf{q}:=(q_1,q_2,q_3)$ the fundamental solution to the Stokes equations in the whole space $\RR^3$, which can be written
\begin{eqnarray*}
U^{ij}:=-\frac{1}{8\pi}\left[\frac{\delta_{ij}}{|x-y|}+\frac{(x_i-y_i)(x_j-y_j)}{|x-y|^3}
\right],
\qquad
q_j=\frac{1}{4\pi}\frac{x_j-y_j}{|x-y|^3}.
\end{eqnarray*}
for $i,j=1,2,3.$ We collect $(U^{i1},U^{i2},U^{i3})$ in the vector $\mathcal U^{i}.$
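For later use, we note the decay of these kernels at infinity (taking $y=0$), which follows from the explicit formulas by homogeneity:
\[
|\mathcal U^{i}(x)|\leq \frac{C}{|x|},\qquad |\nabla \mathcal U^{i}(x)|\leq \frac{C}{|x|^{2}},\qquad
|D^{\alpha}\mathcal U^{i}(x)|\leq \frac{C_{\alpha}}{|x|^{1+|\alpha|}}\quad \forall\, \alpha\in\mathbb N^{3}.
\]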
We also introduce the Bogovskii operator $\mathfrak{B}_{\mathcal{B}}[f]$
defined for arbitrary mean-free $f\in L^2(\mathcal{B})$. It is well known that $\mathfrak{B}_{\mathcal{B}}$ is continuous with values in $H_0^1(\mathcal{B})$ and characterized by ${\rm div}\, \mathfrak{B}_{\mathcal{B}}[f]=f$ in $\mathcal{B}$. In particular, we denote $\mathfrak{B}_{\lambda_1,\lambda_2}[f]:=\mathfrak{B}_{B(0,\lambda_2)\setminus \overline{B(0,\lambda_1)}}[f]$ for any $0<\lambda_1<\lambda_2$.
\medskip
\section{Analysis of the Stokes problem} \label{sec_cellpbm}
In the whole section, we suppose that $\mathcal{B}\subset B(0,1)\subset\RR^3$ has a smooth boundary $\partial\mathcal{B}.$ Given a trace-free $A \in {\rm Sym}_{3,\sigma}(\mathbb R),$ let us consider the following problem:
\begin{align} \label{eq_Stokessec}
& \left\{
\begin{aligned}
- {\rm div \Sigma(u,p)}&=& 0 && \text{ in $\mathbb R^3 \setminus \overline{\mathcal B}$},\\
{\rm div} u &=& 0 &&\text{ in $\mathbb R^3 \setminus \overline{\mathcal B}$},
\end{aligned}
\right.
&&
\left\{
\begin{aligned}
&u(x) = -Ax + V + \omega \times x && \text{ on $\partial \mathcal B$}, \\
&u(x) = 0 && \text{ at infinity },
\end{aligned}
\right.
\\ \label{eq_Newtsec}
& \int_{\partial \mathcal B} \Sigma(u,p)n{\rm d}\sigma = 0 &&
\int_{\partial \mathcal B} x \times \Sigma(u,p)n{\rm d}\sigma = 0.
\end{align}
It is classical that, given either a $3\times3$ matrix $A$ (taking $V=\omega=0$) or a pair $(V,\omega) \in \mathbb R^3 \times \mathbb R^3$ (taking $A=0$), there exists a unique solution $(u[A],p[A])$,
respectively $(u[V,\omega],p[V,\omega])$, to \eqref{eq_Stokessec} in $\dot{H}^1(\mathbb R^3) \times L^2(\mathbb R^3)$. The mapping
\[
(V,\omega) \mapsto ( \int_{\partial \mathcal B} \Sigma(u[V,\omega],p[V,\omega])n{\rm d}\sigma ,
\int_{\partial \mathcal B} x \times \Sigma(u[V,\omega],p[V,\omega])n{\rm d}\sigma)
\]
is then linear and symmetric positive definite. In particular,
there exists a unique solution $(V_A,\omega_A)$ to the problem:
\begin{align*}
\int_{\partial \mathcal B} \Sigma(u[V_A,\omega_A],p[V_A,\omega_A])n{\rm d}\sigma & =-\int_{\partial \mathcal B} \Sigma(u[A],p[A])n{\rm d}\sigma \\
\int_{\partial \mathcal B} x \times \Sigma(u[V_A,\omega_A],p[V_A,\omega_A])n{\rm d}\sigma
&=- \int_{\partial \mathcal B} x \times \Sigma(u[A],p[A])n{\rm d}\sigma.
\end{align*}
The candidate $U[A,\mathcal B]= u[A] + u[V_A,\omega_A], \; P[A,\mathcal B] = p[A] +p[V_A,\omega_A]$ is then a solution to \eqref{eq_Stokessec}-\eqref{eq_Newtsec}. By taking differences and integrating by parts, we obtain uniqueness of the velocity-field, which enables us to recover that the pressure is also unique up to a constant.
Since $U[A,\mathcal B]$ matches a velocity-field of the form $-Ax + V + \omega\times x$ on $\partial \mathcal B,$ it is classical to extend $U[A,\mathcal B]$
by the field corresponding to this boundary value on $\mathcal B$ yielding
a vector-field $U[A,\mathcal B] \in \dot{H}^1_{\sigma}(\mathbb R^3).$
Straightforward integration by parts arguments show that
this extended $U[A,\mathcal B]$ realizes:
\[
\min \Bigl \{ \int_{\mathbb R^3} |D(u)|^2, \quad u \in \dot{H}^1_{\sigma}(\mathbb R^3), \quad D(u) = -A \text{ on $\mathcal B$}\Bigr\}.
\]
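For completeness, here is a sketch of the minimality argument: an admissible competitor $u$ writes $u = U[A,\mathcal B] + w$ with $w \in \dot H^1_\sigma(\mathbb R^3)$ satisfying $D(w)=0$ on $\mathcal B,$ so that $w$ coincides with a rigid velocity-field $V_w + \omega_w\times x$ on $\mathcal B.$ Expanding the energy,
\[
\int_{\mathbb R^3}|D(u)|^2=\int_{\mathbb R^3}|D(U[A,\mathcal B])|^2+2\int_{\mathbb R^3}D(U[A,\mathcal B]):D(w)+\int_{\mathbb R^3}|D(w)|^2,
\]
and the cross term reduces, after integration by parts on $\mathbb R^3\setminus\overline{\mathcal B}$ (inserting the pressure thanks to ${\rm div}\, w=0$ and using ${\rm div}\,\Sigma(U[A,\mathcal B],P[A,\mathcal B])=0$ there), to a boundary term $\int_{\partial\mathcal B}\Sigma(U[A,\mathcal B],P[A,\mathcal B])n\cdot(V_w+\omega_w\times y)\,{\rm d}\sigma(y),$ which vanishes by the force- and torque-free conditions \eqref{eq_Newtsec}. Minimality follows.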
In particular, we note that the set over which the minimum is computed on the right-hand side increases when $\mathcal B$ decreases. Since we assume
$\mathcal B \subset B(0,1)$ in this section, we can bound $\|D(U[A,\mathcal B])\|_{L^{2}(\mathbb R^3)}$ by
the minimum corresponding to $\mathcal B = B(0,1).$ This yields that
\begin{equation} \label{eq_unifbound}
\|D(U[A,\mathcal B])\|_{L^2(\mathbb R^3)}
\leq C|A|
\end{equation}
(and thus $\|U[A,\mathcal B]\|_{\dot{H}^1(\mathbb R^3)} \leq C|A|$ also) with a constant
$C$ uniform in $\mathcal B \subset B(0,1).$
One may proceed similarly to show that, under the assumption \eqref{H1}-\eqref{H2}, the problem \eqref{eq_Stokes}-\eqref{bc_Stokes}-\eqref{eq_Newton} admits a unique solution $(u_N,p_N)$ such that
\begin{equation} \label{eq_integrabilite}
v_N : x \mapsto u_N(x)- Ax \in \dot{H}^1 (\mathcal F_N), \qquad x \mapsto p_N(x) \in L^2(\mathcal F_N)
\end{equation}
Furthermore, the velocity-field of this solution can be extended to the whole of $\mathbb R^3$ to yield a vector-field $v_N(x) = u_N(x) - Ax$ that realizes:
\[
\min \Bigl \{ \int_{\mathbb R^3} |D(u)|^2, \quad u \in \dot{H}^1_{\sigma}(\mathbb R^3), \quad D(u) = -A \text{ on $B_l \quad \forall \, l=1,\ldots,N$}\Bigr\}.
\]
\]
In particular, under assumptions \eqref{H1}-\eqref{H2}, we can construct a competitor $\tilde{v}_N,$ supported in $\bigcup_{l} B(x_l,2a),$ that matches $x \mapsto -A(x-x_l)$ on each $B(x_l,a)$ (and thus on $B_l \subset B(x_l,a)$) by truncating and lifting the divergence terms.
Straightforward computations then show that:
\[
\int_{\mathbb R^3} |\nabla v_N|^2 = 2 \int_{\mathbb R^3} |D(v_N)|^2
\leq 2 \int_{\mathbb R^3} |D(\tilde{v}_N)|^2 \leq C \dfrac{a^3}{d^3} |A|^2,
\]
so that there exists a uniform constant $C$ for which:
\begin{equation} \label{eq_unifuN}
\|\nabla v_N\|_{L^2(\mathbb R^3)} \leq C |A| \left( \dfrac{a^3}{d^3}\right)^{\frac 12}.
\end{equation}
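A sketch of the underlying estimate: by \eqref{H2} the balls $B(x_l,2a)$ are pairwise disjoint, each truncated lifting has energy of order $a^3|A|^2,$ and $N\leq C|K|/d^3,$ so that
\[
\int_{\mathbb R^3}|D(\tilde v_N)|^2=\sum_{l=1}^N\int_{B(x_l,2a)}|D(\tilde v_N)|^2\leq C N a^3|A|^2\leq C|K|\,\frac{a^3}{d^3}\,|A|^2.
\]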
\medskip
Before going to the main result of this section, we prepare the proof with a
control on moments of the trace of
\[
\Sigma[A,\mathcal{B}] = 2 D(U)[A,\mathcal{B}] - P[A,\mathcal{B}] \mathbb{I}_3
\]
on $\partial \mathcal B.$ This is the content of the following preliminary lemma:
\begin{lemma}\label{resistence}
There exists an absolute constant $C$ such that:
\begin{eqnarray*}
\Bigl| \mathbb P_{3,\sigma}\left[ \int_{\partial\mathcal{B}}\Sigma[A,\mathcal{B}]n\otimes yd\sigma(y)\right] \Bigr|\leq C|A|.
\end{eqnarray*}
\end{lemma}
We recall that $\mathbb P_{3,\sigma}$ stands for the orthogonal projection from ${\rm Mat}_{3}(\mathbb R)$ onto ${\rm Sym}_{3,\sigma}(\mathbb R).$
With this lemma, we obtain that the linear mappings
\[
A \mapsto \mathbb P_{3,\sigma}\left[ \int_{\partial\mathcal{B}}\Sigma[A,\mathcal{B}]n\otimes yd\sigma(y)\right]
\]
are uniformly bounded with respect to $\mathcal B \subset B(0,1).$
\begin{proof}
Because of the linearity of the Stokes equations and of the stress tensor, the mapping from ${\rm Sym}_{3,\sigma}(\mathbb R)$ to ${\rm Sym}_{3,\sigma}(\mathbb R)$:
\begin{eqnarray*}
\mathcal{L}:~~A\mapsto \mathbb P_{3,\sigma}\left[ \int_{\partial\mathcal{B}}\Sigma[A,\mathcal{B}]n\otimes yd\sigma(y)\right]
\end{eqnarray*}
is also linear. So, let $(\mathcal{E}_i)_{i=1,\dots,5}$ be an orthonormal basis of ${\rm Sym}_{3,\sigma}(\mathbb R)$, and $(V_i:=U[\mathcal{E}_i,\mathcal{B}])_{i=1,\ldots,5}$ the corresponding velocity-field solutions to the Stokes problem \eqref{eq_Stokessec}-\eqref{eq_Newtsec}. Then, the mapping $\mathcal{L}$ is represented in this basis by the matrix $\mathbb{L}$:
\begin{eqnarray*}
\mathbb{L}:=\Big(\int_{\partial\mathcal{B}}\Sigma[\mathcal{E}_i,\mathcal{B}]n\otimes yd\sigma(y):\mathcal{E}_j\Big)_{1\leq i,j\leq 5}.
\end{eqnarray*}
Our proof reduces to obtaining that $|\mathbb{L}_{i,j}| \leq C$
for arbitrary $i,j$ in $\{1,\ldots,5\}.$ So, let us fix $i,j.$ By integration by parts, we have that
\[
\mathbb{L}_{i,j} = \int_{\partial \mathcal B} \Sigma[\mathcal E_i,\mathcal B]n \cdot (\mathcal E_j y){\rm d}\sigma(y)
=
2\int_{\RR^3\setminus\overline{\mathcal{B}}}D(V_i):D(V_j),
\]
so that:
\[
|\mathbb L_{i,j}| \leq 2 \|D(V_i)\|_{L^2(\RR^3\setminus\overline{\mathcal{B}})}\|D(V_j)\|_{L^2(\RR^3\setminus\overline{\mathcal{B}})} \leq C|\mathcal E_i||\mathcal E_j|,
\]
where we applied \eqref{eq_unifbound} to obtain the last inequality.
This concludes the proof.
\end{proof}
\medskip
We continue the analysis of \eqref{eq_Stokessec}-\eqref{eq_Newtsec}
by providing pointwise estimates on $U[A,\mathcal B]$. The content of the following theorem is reminiscent of \cite[Section V.3]{Galdi}:
\begin{proposition}\label{asym-stokes}
Let $(U[A,\mathcal{B}],P[A,\mathcal B])$ be the unique solution to \eqref{eq_Stokessec}-\eqref{eq_Newtsec}. There exists a vector field $\mathcal{H}[A,\mathcal{B}]$ depending on $A$ and $\mathcal{B}$, such that for any $|x|>4$,
\begin{eqnarray*}
U[A,\mathcal{B}](x)=\mathcal{K}[A,\mathcal{B}](x)+\mathcal{H}[A,\mathcal{B}](x),
\end{eqnarray*}
where $\mathcal{K}[A,\mathcal{B}]_i(x)=\mathbb{M}[A,\mathcal{B}]:\nabla\mathcal{U}^{i}(x)$ for $i=1,2,3$ with:
\begin{equation} \label{eq_defM}
\mathbb M[A,\mathcal B] = \mathbb{P}_{3,\sigma}\left\{ \int_{\partial\mathcal{B}}\left[ -(\Sigma[A,\mathcal{B}]n)\otimes y + 2U[A,\mathcal B] \otimes n \right]d\sigma(y)\right\}.
\end{equation}
Moreover, there exists a constant
$C$ independent of $A$ for which:
\begin{eqnarray*}
|\mathbb M[A,\mathcal B]| \leq C|A|\,, \qquad
|\nabla^{\beta}\mathcal{H}[A,\mathcal{B}](x)|\leq C|A|\frac{1}{|x|^{3+|\beta|}}
\quad \forall \, \beta \in \mathbb N^3.
\end{eqnarray*}
\end{proposition}
Before giving a proof of this proposition, we note that, for
large $|x|$, we have:
\[
|\nabla \mathcal{U}^i(x)| \sim \dfrac{C}{|x|^2}.
\]
Consequently, the splitting that we obtain in the above proposition
corresponds to the extraction of the leading-order term ($\mathcal K[A,\mathcal B](x)$) at infinity. A second crucial remark induced by this proposition is that the amplitudes of both terms (the leading term $\mathcal K$ and the remainder $\mathcal H$) are bounded uniformly with respect to the shape $\mathcal B.$
\begin{proof}
Let $\chi \in C^{\infty}(\mathbb R^3)$ be such that $\chi\equiv1$ in $\RR^3\setminus B(0,2)$ and $\chi\equiv0$ in $B(0,1)$. We recall that $\mathfrak{B}_{\lambda_1,\lambda_2}$ stands for the Bogovskii operator lifting the divergence on the annulus $B(0,\lambda_2)\setminus \overline{B(0,\lambda_1)}$.
\medskip
By standard ellipticity arguments, $U[A,\mathcal B]$ and $P[A,\mathcal B]$ are $C^{\infty}$ on $\mathbb R^3 \setminus \overline{\mathcal B}.$ Let us define
\begin{align*}
\bar{U}[A,\mathcal{B}](x) & :=U[A,\mathcal{B}](x)\chi(x)-\mathfrak{B}_{1,2}[U[A,\mathcal{B}] \cdot \nabla \chi](x) \\
\bar{P}[A,\mathcal{B}](x) & :=P[A,\mathcal{B}](x)\chi(x).
\end{align*}
Up to a mollifying argument that we skip for conciseness, we may assume that $\bar{U}[A,\mathcal B]
\in C^{\infty}(\mathbb R^3).$ The pair $(\bar{U}[A,\mathcal B],\bar{P}[A,\mathcal{B}])$ then satisfies the Stokes equations on $\RR^3$ with source term $f_\chi[A,\mathcal{B}] = -{\rm div} \, \bar{\Sigma}[A,\mathcal{B}] \in C^{\infty}_c(B(0,2)\setminus \overline{B(0,1)})$ where:
\[
\bar{\Sigma}[A,\mathcal{B}]:=-\bar{P}[A,\mathcal{B}]\mathbb{I}_3+2D(\bar{U}[A,\mathcal{B}]).
\]
Since $\bar{U}[A,\mathcal B] \in \dot{H}^1(\mathbb R^3)$ and we have uniqueness of $\dot{H}^1$-solutions to the Stokes equations on $\mathbb R^3,$ we may use the Green function $\mathbf{U}$ to compute $\bar{U}[A,\mathcal B]$. This entails that, for each $i=1,2,3,$
we have:
\begin{eqnarray*}
\bar{U}[A,\mathcal{B}]_i=\int_{\RR^3}f_\chi[A,\mathcal{B}](y)\cdot\mathcal{U}^i(x-y)dy.
\end{eqnarray*}
In particular, for $|x|>2$ and $i=1,2,3$ (where $\bar{U}[A,\mathcal{B}]$ coincides with $U[A,\mathcal{B}]$), a Taylor expansion yields:
\begin{eqnarray*}
U[A,\mathcal{B}]_i(x)&=&\int_{\RR^3}f_\chi[A,\mathcal{B}](y)dy\cdot\mathcal{U}^i(x)-\int_{\RR^3}f_{\chi}[A,\mathcal{B}](y)\otimes ydy:\nabla\mathcal{U}^i(x)\\
&&+\sum_{|\alpha|=2}\int_{\RR^3}f_\chi[A,\mathcal{B}](y)\cdot\int_0^1(1-t)y^{\alpha}D^{\alpha}\mathcal{U}^{i}(x-ty)dtdy.
\\
&=& T_0 + \mathcal K[A,\mathcal B](x)+ \mathcal H[A,\mathcal B](x).
\end{eqnarray*}
Concerning $T_0$, we notice that
\begin{eqnarray*}
\int_{\RR^3}f_{\chi}[A,\mathcal{B}](y)dy&=& \int_{B(0,3)\setminus B(0,1)}
{\rm div} \bar{\Sigma}[A,\mathcal{B}] \\
&=&\int_{\partial B(0,3)}\bar{\Sigma}[A,\mathcal{B}](y)nd\sigma(y)=\int_{\partial B(0,3)}\Sigma[A,\mathcal{B}](y)nd\sigma(y)\\
&=&\int_{\partial\mathcal{B}}\Sigma[A,\mathcal{B}](y)nd\sigma(y) =0.
\end{eqnarray*}
Hence $T_0=0$. To analyse $\mathcal K[A,\mathcal B](x)$, we denote:
\begin{eqnarray*}
{\mathcal{M}}[A,\mathcal{B}]:=\frac{1}{2}\Bigl(\int_{\RR^3}f_{\chi}[A,\mathcal{B}](y)\otimes ydy+(\int_{\RR^3}f_{\chi}[A,\mathcal{B}](y)\otimes ydy)^T\Bigr)
\end{eqnarray*}
and
\begin{eqnarray*}
\mathcal{A}[A,\mathcal{B}]:=\frac{1}{2}\Bigl(\int_{\RR^3}f_{\chi}[A,\mathcal{B}](y)\otimes ydy-(\int_{\RR^3}f_{\chi}[A,\mathcal{B}](y)\otimes ydy)^T\Bigr).
\end{eqnarray*}
First, for arbitrary skew-symmetric matrix $E$, there holds:
\begin{eqnarray*}
\int_{\RR^3}f_{\chi}[A,\mathcal{B}](y)\cdot(Ey)dy&=&\int_{B(0,3)\setminus B(0,1)}f_{\chi}[A,\mathcal{B}](y)\cdot(Ey)dy\\
&=&\int_{\partial B(0,3)}\big[\bar{\Sigma}[A,\mathcal{B}]n\big]\cdot Eyd\sigma(y)-\int_{B(0,3)\setminus B(0,1)}\bar{\Sigma}[A,\mathcal{B}]:\nabla (Ey)dy,
\end{eqnarray*}
On the right-hand side, we have, since $\bar{\Sigma}[A,\mathcal{B}]$
is symmetric and $E$ is skew-symmetric:
\begin{eqnarray*}
\int_{B(0,3)\setminus B(0,1)}\bar{\Sigma}[A,\mathcal{B}]:\nabla (Ey)dy=\int_{B(0,3)\setminus B(0,1)}\bar{\Sigma}[A,\mathcal{B}]: E\, dy=0.
\end{eqnarray*}
We also notice that
\begin{eqnarray*}
\int_{\partial B(0,3)}\big[\bar{\Sigma}[A,\mathcal{B}]n\big]\cdot Eyd\sigma(y)&=&\int_{\partial B(0,3)}\big[\Sigma[A,\mathcal{B}]n\big]\cdot Eyd\sigma(y)\\
&=&\int_{B(0,3)\setminus B(0,1)}\Sigma[A,\mathcal{B}]:\nabla (Ey)dy - \int_{\partial\mathcal{B}}\big[\Sigma[A,\mathcal{B}]n\big]\cdot Eyd\sigma(y)\\
&=& - \int_{\partial\mathcal{B}}\big[\Sigma[A,\mathcal{B}]n\big]\cdot Eyd\sigma(y) = 0.
\end{eqnarray*}
To obtain the last equality, we use that since $E$ is skew-symmetric, there is a vector $e$ such that $Ex=e\times x$ and:
\begin{eqnarray*}
\int_{\partial\mathcal{B}}\big[\Sigma[A,\mathcal{B}]n\big]\cdot Eyd\sigma(y)=\int_{\partial \mathcal{B}}\Sigma[A,\mathcal{B}] n\times yd\sigma(y)\cdot e=0.
\end{eqnarray*}
Therefore we obtain that $\mathcal{A}[A,\mathcal{B}]=0$. Consequently, we
have
\[
\mathcal K[A,\mathcal B]_i(x)= {\mathcal M}[A,\mathcal B] : \nabla \mathcal U^i(x).
\]
Since
${\mathcal M}[A,\mathcal B]$ is symmetric, we deduce that:
\[
\mathcal K[A,\mathcal B]_i(x)= {\mathcal M}[A,\mathcal B] : D(\mathcal U^i)(x) = \mathbb P_{3,\sigma} [{\mathcal M}[A,\mathcal B] ] : D(\mathcal U^i)(x)
= \mathbb P_{3,\sigma}[{\mathcal M}[A,\mathcal B]] : \nabla \mathcal U^i(x).
\]
So, we set $\mathbb M[A,\mathcal B] =\mathbb P_{3,\sigma}[{\mathcal M}[A,\mathcal B]]$ and we turn to showing \eqref{eq_defM} and $|\mathbb{M}[A,\mathcal{B}]|\leq C|A|$. To this end, we notice that $\mathbb{M}[A,\mathcal B]$ is completely determined by its action on matrices $S \in {\rm Sym}_{3,\sigma}(\mathbb R).$
So, let us fix $S \in {\rm Sym}_{3,\sigma}(\mathbb R).$ We have
\begin{eqnarray*}
\int_{\RR^3}f_{\chi}[A,\mathcal{B}](y)\otimes y dy:S=\int_{\RR^3}f_{\chi}[A,\mathcal{B}](y)\cdot(Sy)dy=\int_{B(0,3)\setminus \mathcal{B}}f_{\chi}[A,\mathcal{B}](y)\cdot(Sy)dy.
\end{eqnarray*}
Applying that $\mathrm{div}\bar{\Sigma}[A,\mathcal{B}]=f_\chi[A,\mathcal{B}]$ again, we obtain
\begin{eqnarray*}
\int_{B(0,3)\setminus \mathcal{B}}f_{\chi}[A,\mathcal{B}](y)\cdot(Sy)dy&=&\int_{\partial B(0,3)}(\bar{\Sigma}[A,\mathcal{B}]n)\cdot(Sy)d\sigma(y)-\int_{B(0,3)\setminus \mathcal{B}}\bar{\Sigma}[A,\mathcal{B}]:Sdy\\
&=&\int_{\partial B(0,3)}(\bar{\Sigma}[A,\mathcal{B}]n)\cdot(Sy)d\sigma(y)-2\int_{B(0,3)\setminus \mathcal{B}}D(\bar{U}[A,\mathcal{B}]):Sdy\\
&&+\int_{B(0,3)\setminus \mathcal{B}}\bar{P}[A,\mathcal{B}]\mathbb{I}_3:Sdy
\end{eqnarray*}
Since $S$ is trace-free, the last pressure term vanishes.
We rewrite the first term on the right-hand side:
\begin{eqnarray*}
\int_{\partial B(0,3)}(\bar{\Sigma}[A,\mathcal{B}]n)\cdot(Sy)d\sigma(y)&=& \int_{\partial B(0,3)}(\Sigma[A,\mathcal{B}]n)\cdot(Sy)d\sigma(y)\\
&=& - \int_{\partial\mathcal{B}}(\Sigma[A,\mathcal{B}]n)\cdot(Sy)d\sigma(y)+2\int_{B(0,3)\setminus\mathcal{B}}D(U[A,\mathcal{B}]):Sdy
\end{eqnarray*}
This entails that:
\begin{eqnarray*}
\int_{B(0,3)\setminus \mathcal{B}}f_{\chi}[A,\mathcal{B}](y)\cdot(Sy)dy&=& - \int_{\partial\mathcal{B}}(\Sigma[A,\mathcal{B}]n)\cdot(Sy)d\sigma(y)\\
& &+2 \int_{B(0,3)\setminus\mathcal{B}}D(U[A,\mathcal{B}]-\bar{U}[A,\mathcal{B}]):Sdy.
\end{eqnarray*}
We recall that the pressure terms vanish since $S$ is trace-free.
Concerning the first integral on the right-hand side, we notice again that:
\begin{eqnarray*}
\int_{\partial\mathcal{B}}(\Sigma[A,\mathcal{B}]n)\cdot(Sy)d\sigma(y)= \mathbb P_{3,\sigma}\left[ \int_{\partial\mathcal{B}}(\Sigma[A,\mathcal{B}]n)\otimes yd\sigma(y)\right]:S.
\end{eqnarray*}
We are then in position to apply Lemma \ref{resistence} which yields that
\begin{equation} \label{eq_controlbord}
\Big|\int_{\partial\mathcal{B}}(\Sigma[A,\mathcal{B}]n)\cdot(Sy)d\sigma(y)\Big|\leq C|A||S|,
\end{equation}
for an absolute constant $C.$
We proceed with the second integral. We notice that $U[A,\mathcal{B}](x)=\bar{U}[A,\mathcal{B}](x)$ for any $|x|>2$ and $\bar{U}[A,\mathcal{B}](x)=0$ on $\partial\mathcal{B}$. Hence
\begin{eqnarray*}
\int_{B(0,3)\setminus\mathcal{B}}D(U[A,\mathcal{B}]-\bar{U}[A,\mathcal{B}]):Sdy&=&\int_{B(0,3)\setminus\mathcal{B}}D\big[U[A,\mathcal{B}]-\bar{U}[A,\mathcal{B}]\big]dy:S\\
&=& \int_{\partial B(0,3) \cup \partial \mathcal B} (U[A,\mathcal{B}]-\bar{U}[A,\mathcal{B}]) \otimes n d\sigma : S \\
&=& \int_{\partial \mathcal B} U[A,\mathcal{B}] \otimes n d\sigma : S\\
&=& A:S\ |\mathcal{B}|,
\end{eqnarray*}
where we applied that $U[A,\mathcal B](y) = Ay + V + \omega \times y $
to obtain the last identity. Gathering the previous computations
we obtain that, for arbitrary $S \in {\rm Sym}_{3,\sigma}(\mathbb R)$,
there holds:
\begin{align*}
\mathbb M[A,\mathcal B] : S & = \int_{\mathbb R^3} f_{\chi}[A,\mathcal B](y) \cdot S y dy \\
& = - \mathbb{P}_{3,\sigma} \left[\int_{\partial \mathcal B} \Sigma[A,\mathcal B] n \otimes y d\sigma(y) \right] : S +2 \int_{\partial \mathcal B} U[A,\mathcal B](y) \otimes nd\sigma(y) : S.
\end{align*}
This concludes the proof of \eqref{eq_defM}. Recalling that $\mathcal{B} \subset B(0,1)$, and applying the explicit computation of the last integral
in this latter identity, we also obtain $|\mathbb{M}[A,\mathcal{B}]|\leq C|A|.$
\medskip
To finish the proof, we handle the last term $\mathcal H[A,\mathcal B](x)$ for $|x|>4$. We prove the required estimate for $\beta =(0,0,0),$ the extension to arbitrary $\beta$ being obvious.
Given $|\alpha|=2$, and $|x|>2$, we can find $\phi_x\in C^{\infty}(\RR^3)$
with $\mathrm{Supp}(\phi_x)\subset \overline{B(0,3)}\setminus B(0,1)$
such that:
\begin{eqnarray*}
\int_0^1(1-t)y^{\alpha}D^{\alpha}\mathcal{U}^{i}(x+ty)dt =:\phi_x(y)
\quad
\forall \, y \in \mathrm{Supp}(\chi).
\end{eqnarray*}
Asymptotic properties of $\mathcal U^i$ then entail that
$\|\phi_x\|_{W^{1,\infty}(\mathbb R^3)} \leq C/|x|^3.$
Therefore, we have, thanks to the uniform bound \eqref{eq_unifbound}
and the embedding $\dot{H}^1(\mathbb R^3) \subset L^2_{loc}(\mathbb R^3):$
\begin{multline*}
\Big|\int_{\RR^3}f_\chi[A,\mathcal{B}](y)\cdot\int_0^1(1-t)y^{\alpha}D^{\alpha}\mathcal{U}^{i}(x+ty)dtdy\Big| \\
\begin{array}{l}
\leq C\|f_\chi[A,\mathcal{B}]\|_{\dot{H}^{-1}(B(0,3)\setminus \overline{B(0,1)})}\|\phi_x\|_{{H}^1_0(B(0,3) \setminus \overline{B(0,1)})} \\[8pt]
\leq C\|\bar{U}[A,\mathcal{B}]\|_{\dot{H}^1(B(0,3)\setminus \overline{B(0,1)})}\frac{1}{|x|^3}\leq C\|U[A,\mathcal{B}]\|_{\dot{H}^1(\RR^3)}\frac{1}{|x|^3}\leq C \frac{|A|}{|x|^3}.
\end{array}
\end{multline*}
This ends the proof of the proposition.
\end{proof}
We end this section by examining how the decomposition of Proposition \ref{asym-stokes} interacts with the scaling properties of the Stokes problem \eqref{eq_Stokessec}-\eqref{eq_Newtsec}. Indeed, for any $a < 1$,
standard scaling arguments imply that:
\[
U[A,a \mathcal{B}](x)=aU[A,\mathcal{B}](x/a),
\qquad
P[A,a\mathcal{B}](x)=P[A,\mathcal{B}](x/a).
\]
Consequently for arbitrary $w \in \mathbb S^2,$ we have:
\[
\lim_{t \to \infty} t^2 U[A,a \mathcal{B}](tw) =a^3 \lim_{t\to \infty} t^2 U[A,\mathcal{B}](tw)
\]
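The factor $a^3$ follows from the change of variables $s=t/a$ in the scaling relation above:
\[
t^2\, U[A,a\mathcal{B}](tw) = t^2\, a\, U[A,\mathcal{B}]\Big(\frac{t}{a}w\Big)
= a^3 \Big(\frac{t}{a}\Big)^2 U[A,\mathcal{B}]\Big(\frac{t}{a}w\Big),
\]
and letting $t\to\infty$ (equivalently $s = t/a \to \infty$) on the right-hand side recovers the limit multiplied by $a^3$.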
This entails that $\mathbb M[A,a\mathcal B]= a^3\mathbb M[A,\mathcal B]$ and
\begin{equation} \label{eq_scalelead}
\mathcal{K}[A,a\mathcal{B}](x)=a^3\mathcal{K}[A,\mathcal{B}](x).\end{equation}
We can then compare remainder terms. This yields:
\begin{equation} \label{eq_scaleremainder}
|\nabla^{\beta} \mathcal{H}[A,a \mathcal{B}](x)| \leq \dfrac{C|A|a^4}{|x|^{3+|\beta|}}
\quad
\forall \, |x| > 4a.
\end{equation}
\section{Approximation of the solution to the $N$-particle problem}
\label{sec_refmet}
In this section, we fix $N$ large and $A \in {\rm Sym}_{3,\sigma}(\mathbb R).$
We provide an approximation {\em via} the method of reflections for the solution $(u_N,p_N)$ to
\begin{align}
\label{eq_Stokes_ref}
& \left\{
\begin{aligned}
- {\rm div}\, \Sigma_{\mu}(u,p) &= 0 \\
{\rm div}\, u &= 0
\end{aligned}
\right.
\qquad \qquad
\text{ in $\mathbb R^3 \setminus \bigcup_{l=1}^N \overline{B}_l,$}
\\
\label{bc_Stokes_ref}
& \left\{
\begin{aligned}
& \; u(x) = V_l + \omega_l \times (x-x_l) && \text{ on $\partial B_l,$ for $l=1,\ldots,N$} \\
& \; u(x) = Ax && \text{ at infinity,}
\end{aligned}
\right.
\\
\label{eq_Newton_ref}
& \left\{
\begin{aligned}
\int_{\partial B_l} \Sigma_{\mu}(u,p) n {\rm d}s &= 0 \\
\int_{\partial B_l} (x-x_l) \times \Sigma_{\mu}(u,p) n {\rm d}s &= 0 \\
\end{aligned}
\right.
\qquad \qquad \text{for }\, l = 1,\ldots,N.
\end{align}
We recall that the method of reflections consists in matching the boundary conditions on each particle by solving a Stokes system around each particle, gluing together the local solutions into one approximation and iterating the process, since by gluing the local solutions we alter the boundary values of the approximation. More precisely, we first define
\[
u_{app}^{(0)}(x):=Ax, \qquad A^{(0)}_l:=A \text{ for $l=1,\dots,N$.}
\]
Given $n \geq 0$ and assuming that a vector-field $u_{app}^{(n)}$ and
matrices $(A_{l}^{(n)})_{l=1,\ldots,N}$ are constructed we set:
\begin{align} \label{eq_iter_matrix}
A^{(n+1)}_l &:= \sum_{l\neq\lambda}D(\mathcal{K})[A^{(n)}_\lambda,{B}_\lambda](x_l-x_\lambda) && \forall \, l=1,\ldots,N \\
v^{(n+1)}(x) & := \sum_{l=1}^N U[A^{(n)}_{l},{B}_l](x-x_l) &&
\text{ on $\mathcal F_N$} \label{eq_iter_corrector}\\
u_{app}^{(n+1)}(x)&:= u_{app}^{(n)}(x)+v^{(n+1)}(x) &&
\text{ on $\mathcal F_N$}. \label{eq_iter_uapp}
\end{align}
Correspondingly, we compute a sequence of approximate pressures:
\[
p_{app}^{(0)} = 0, \qquad
p_{app}^{(n+1)} = p_{app}^{(n)} + \mu \sum_{l=1}^{N} P[A^{(n)}_{l},{B}_l] \,,
\qquad
\forall \, n \in \mathbb N.
\]
The factor $\mu$ is introduced here since $(U,P)$ solves a Stokes system without viscosity.
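The decay mechanism driving this iteration can be illustrated numerically on a simplified scalar caricature, where the dipole interaction $D(\mathcal K)$ is replaced by its $|x|^{-3}$ decay and the particles sit on a hypothetical cubic grid of spacing $d$ with radius $a \ll d$ (all names and values below are illustrative placeholders, not the actual tensorial iteration):

```python
import numpy as np

# hypothetical configuration: centers on a cubic grid of spacing d, radius a << d
d, a, N_side = 1.0, 0.05, 4
pts = np.array([(i * d, j * d, k * d) for i in range(N_side)
                for j in range(N_side) for k in range(N_side)], dtype=float)
N = len(pts)

def reflect(A):
    """Scalar caricature of A^(n+1)_l = sum_{lambda != l} D(K)[A^(n)_lambda](x_l - x_lambda),
    with the kernel replaced by its a^3/|x|^3 decay."""
    out = np.zeros(N)
    for l in range(N):
        r = np.linalg.norm(pts - pts[l], axis=1)
        r[l] = np.inf  # exclude the self-interaction lambda = l
        out[l] = np.sum(a**3 / r**3 * A)
    return out

A_n = np.ones(N)  # scalar stand-in for A^(0)_l = A
norms = []
for n in range(4):
    A_n = reflect(A_n)
    norms.append(np.abs(A_n).sum())

# each reflection shrinks sum_l |A^(n)_l| by a factor of order (a/d)^3
ratios = [norms[i + 1] / norms[i] for i in range(len(norms) - 1)]
```

With these sample values each ratio is of order $(a/d)^3$ up to a lattice constant, consistent with the contraction established in Proposition \ref{Al} (here measured in $\ell^1$).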
\medskip
The motivation for these definitions is the following remark. For each $n \geq 0$,
the flow $v^{(n+1)}$ cancels the first-order symmetric term of the leading part of the boundary value of $u_{app}^{(n)}$ on each $\partial B_l$. For instance, for $n=0$, we notice that on $\partial B_l$ it holds:
\begin{eqnarray*}
&&u^{(0)}(x)=Ax=Ax_l+A(x-x_l),\\
&&v^{(1)}(x)=-A(x-x_l) + \sum_{l\neq\lambda} U[A,{B}_\lambda](x-x_\lambda),
\end{eqnarray*}
which implies that:
\begin{eqnarray*}
u^{(1)}=Ax_l + \sum_{l\neq\lambda} U[A,{B}_\lambda](x-x_\lambda).
\end{eqnarray*}
Since $\varepsilon_0$ is very small, meaning that $a\ll d$, by Proposition \ref{asym-stokes} and \eqref{eq_scalelead}, we have that for each $\lambda\neq l$ and any $x\in\partial B_l$:
\begin{eqnarray*}
U[A,{B}_\lambda](x-x_\lambda)&=&a^3\mathcal{K}[A,\mathcal{B}_\lambda](x-x_\lambda)+\mathcal{H}[A,{B}_\lambda](x-x_\lambda),
\end{eqnarray*}
where $\mathcal{H}[A,{B}_\lambda](x-x_\lambda) \ll \mathcal{K}[A,B_\lambda](x-x_\lambda)$ since $|x-x_\lambda| \gg a$ on $\partial B_l.$
On the other hand, by Taylor expansion, for any $x\in\partial B_l$ and any $\lambda\neq l$,
\begin{eqnarray*}
a^3\mathcal{K}[A,\mathcal{B}_\lambda](x-x_\lambda)=\mathrm{constant}+\mathrm{rotation}+a^3D(\mathcal{K})[A,\mathcal{B}_\lambda](x_l-x_\lambda)(x-x_l)+O(|x-x_l|^2).
\end{eqnarray*}
Hence in the reflection method, we aim at canceling the symmetric gradient $a^3D\mathcal{K}[A,\mathcal{B}_\lambda](x_l-x_\lambda)(x-x_l)$.
By a direct iteration, we obtain that, for any $x\in\partial B_l$ and $n\geq 2,$
there holds:
\begin{align} \label{eq_splitboundary}
u^{(n)}_{app}(x) =&\mathrm{constant}+\mathrm{rotation} \\
&+a^3 \sum_{l\neq \lambda} \mathcal{K}[A^{(n-1)}_\lambda,\mathcal{B}_\lambda](x-x_\lambda) \notag \\
& + a^3 \sum_{j=0}^{n-2}
\sum_{l\neq\lambda}\Big( \mathcal{K}[A^{(j)}_\lambda,\mathcal{B}_\lambda](x-x_\lambda) - \mathcal{K}[A^{(j)}_\lambda,\mathcal{B}_\lambda](x_l-x_\lambda)
\notag \\
& \qquad - [\nabla \mathcal{K}[A^{(j)}_\lambda,\mathcal{B}_\lambda](x_l-x_\lambda)](x-x_l)\Big) \notag \\
& +\sum_{j=0}^{n-1}\sum_{l\neq\lambda}\mathcal{H}[A^{(j)}_\lambda,{B}_\lambda](x-x_\lambda)\notag
\end{align}
The purpose of this section is twofold. First, we show that the method of reflections converges. We then quantify how close the family of approximations
$(u_{app}^{(n)})_{n\in \mathbb N}$ is to the velocity-field $u_N$
solution to \eqref{eq_Stokes_ref}-\eqref{bc_Stokes_ref}-\eqref{eq_Newton_ref}.
\medskip
We start with the convergence of the method. Since the correctors are determined by
the family of matrices $(A_{l}^{(n)})_{l=1,\ldots,N}$, this amounts
to proving that this family of matrices defines a convergent series (in $n$, for arbitrary $l\in \{1,\ldots,N\}$). This is the content of the following proposition,
which relies mostly on item (1) of Lemma \ref{modified goal} in Appendix \ref{app_ref}:
\begin{proposition}\label{Al}
There exists $\varepsilon_0 >0$ sufficiently small such that, for $\varepsilon < \varepsilon_0$ and $1<q<\infty$, there exists a constant $C(q,\varepsilon_0)$ depending on $q$ and $\varepsilon_0$, but independent of $N$, such that
\begin{eqnarray*}
\Big(\sum_{l=1}^N |A^{(n+1)}_{l}|^q\Big)^{1/q}\leq C(q,\varepsilon_0)\left(\frac{a}{d}\right)^{3-3/q}\Big(\sum_{l=1}^N |A^{(n)}_{l}|^q\Big)^{1/q}
\quad \forall \, n \in \mathbb N.
\end{eqnarray*}
\end{proposition}
\begin{proof}
Let $n \geq 0.$ We first notice that, by Proposition \ref{asym-stokes}, there exist symmetric matrices $\mathbb M^{(n)}_l := \mathbb M[A^{(n)}_{l},\mathcal B_l]$ such that
\[
\mathcal K[A^{(n)}_{l},B_l]_i = a^3 \mathbb M^{(n)}_l : \nabla \mathcal U^i =
a^3 \sum_{j,k=1}^3 [\mathbb M^{(n)}_l ]_{kj} \partial_{k} U^{ij}, \quad
i=1,2,3.
\]
We then remark that, for $i,j\in\{1,2,3\}$, $U^{ij}$ is homogeneous of degree $-1$ in $\RR^3\setminus\{0\}$. Moreover $U^{ij}$ satisfies
\begin{eqnarray*}
\Delta U^{ij}=\partial_i q_j~~\mathrm{in}~~\RR^3\setminus\{0\},
\end{eqnarray*}
where for each $j\in\{1,2,3\}$, $q_j$ is harmonic in $\RR^3\setminus\{0\}$. We can then apply Lemma \ref{modified goal} to the computation of the components of $A^{(n+1)}_{l}$ by choosing $V:=U^{ij}$ and $Q:=\partial_i q_j$ for each $i,j\in\{1,2,3\}.$ This yields that, for $\varepsilon_0$ sufficiently small
and arbitrary $q \in (1,\infty)$:
\[
\Big(\sum_{l=1}^N |A^{(n+1)}_{l}|^q\Big)^{1/q}\leq C(q,\varepsilon_0)\left(\frac{a}{d}\right)^{3-3/q} \left( \sum_{l=1}^{N}|\mathbb M_l^{(n)}|^q \right)^{1/q}.
\]
However, by Proposition \ref{asym-stokes}, there exists an absolute constant $C$
(independent of $n,l$ and all other parameters) such that $|\mathbb M_l^{(n)}| \leq C|A^{(n)}_l|.$
This completes the proof of the proposition.
\end{proof}
We proceed with the analysis of the quality of the sequence of approximations
$(u^{(n)}_{app})_{n\in\mathbb N}$.
\begin{proposition} \label{prop_perfore}
Let $\varepsilon_0$ be sufficiently small. There exists a constant $C_{app}(\varepsilon_0)$ such that, for $n \geq 3$ and $\varepsilon < \varepsilon_0,$ there holds
\begin{eqnarray*}
\|u_N-u^{(n)}_{app}\|_{\dot{H}^{1}(\mathbb R^3)}\leq C_{app}(\varepsilon_0)|A|\big(\frac{a}{d}\big)^{11/2} .
\end{eqnarray*}
\end{proposition}
\begin{proof}
By subtracting the equations satisfied by
$(u_N,p_N)$ and $(u^{(n)}_{app},p^{(n)}_{app}),$ we obtain that $\delta_u = u_N - u^{(n)}_{app},$ $\delta_p = p_N - p^{(n)}_{app}$
satisfy:
\begin{align}
\label{eq_Stokes_delta u}
& \left\{
\begin{aligned}
- {\rm div}\, \Sigma_{\mu}(\delta_u,\delta_p) &= 0 \\
{\rm div}\, \delta_u &= 0
\end{aligned}
\right.
\qquad \qquad
\text{ in $\mathbb R^3 \setminus \bigcup_{l=1}^N \overline{B}_l,$}
\end{align}
and
\begin{align}
\label{eq_Newton_deltaU}
\left\{
\begin{aligned}
\int_{\partial B_l} \Sigma_{\mu}(\delta_u,\delta_p) n {\rm d}s &= 0 \\
\int_{\partial B_l} (x-x_l) \times \Sigma_{\mu}(\delta_u,\delta_p) n {\rm d}s &= 0 \\
\end{aligned}
\right.
\qquad \qquad \text{for }\, l = 1,\ldots,N.
\end{align}
As for boundary conditions, we note that by definition of $u_{app}^{(n)}$, we have that
\begin{align*}
\delta_u(x)& =u_N(x)-Ax-\sum_{j=1}^{n} v^{(j)}(x), \qquad \text{ in $\mathcal F_N$}
\end{align*}
Thanks to \eqref{eq_integrabilite} and extending the velocities $u_{app}^{(n)}$
and $u_N$ inside the particle domains with their boundary values,
we have that $\delta_u \in \dot{H}^1(\mathbb R^3).$ On the boundaries,
reorganizing the terms involved in $v^{(j)},$ see also \eqref{eq_splitboundary},
we have that there exist vectors $(W_{l},\varpi_l)_{l=1,\ldots,N}$ for which:
\begin{equation} \label{bc_delta_u}
\delta_u(x)=W_l+\varpi_l\times(x-x_l)+u_{l,n}^*, \quad \text{on } B_l, \quad \forall \, l=1,\ldots,N,
\end{equation}
where we have $u_{l,n}^*=S^{(n)}_l+R^{(n)}_l$ with:
\begin{eqnarray*}
S^{(n)}_l(x):=\sum_{l\neq\lambda}a^3\mathcal{K}[A^{(n-1)}_\lambda,\mathcal{B}_\lambda](x-x_\lambda),
\end{eqnarray*}
and
\begin{eqnarray*}
R^{(n)}_l(x)&:=&\sum_{j=0}^{n-2}\sum_{l\neq\lambda}a^3\mathcal{K}[A^{(j)}_\lambda,\mathcal{B}_\lambda](x-x_\lambda)-\sum_{j=0}^{n-2}\sum_{l\neq\lambda}a^3\mathcal{K}[A^{(j)}_\lambda,\mathcal{B}_\lambda](x_l-x_\lambda)\\
&&-\sum_{j=0}^{n-2}\sum_{l\neq\lambda}a^3[\nabla \mathcal{K}[A^{(j)}_\lambda,\mathcal{B}_\lambda](x_l-x_\lambda)](x-x_l)+\sum_{j=0}^{n-1}\sum_{l\neq\lambda}\mathcal{H}[A^{(j)}_\lambda,{B}_\lambda](x-x_\lambda).
\end{eqnarray*}
We notice that for each $l=1,\dots,N$, the formula defining $u^*_{l,n}$ can be extended to $B(x_l,2a)$. We also mention that again
classical integration by parts arguments yield that $\delta_u$ realizes
\begin{equation}\label{min}
\min\Big\{ \int_{\mathcal{F}_N}|D(v)|^2, \; v\in\dot{H}^{1}(\mathbb R^3),\quad {\rm div}\ v=0, \; D(v - u_{l,n}^*) = 0 \text{ on $B_l$},~\forall\, l\Big\}.
\end{equation}
The proof then reduces to constructing divergence-free vector-fields $w_{l,n} \in C^{\infty}_c(B(x_l,2a))$ that match $u^*_{l,n}$
(up to a rigid vector-field) on $B_l$ for each $l=1,\dots,N$.
Indeed, since $\delta_u$ is divergence-free, the minimizing principle
\eqref{min} then yields:
\[
\int_{\mathbb R^3} |\nabla \delta_u|^2 = 2 \int_{\mathbb R^3} |D(\delta_u)|^2
\leq C \sum_{l=1}^N \int_{B(x_l,2a)} |\nabla w_{l,n}|^2.
\]
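The argument just used -- the minimizer's energy is dominated by that of any admissible competitor -- has a transparent finite-dimensional analogue: the minimal-norm solution of a linear constraint is dominated by every other solution. A small illustrative sketch (the matrices and data below are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(3)
C = rng.standard_normal((3, 8))   # constraint operator (stand-in for the boundary conditions)
b = rng.standard_normal(3)

# minimal-norm solution of C x = b: analogue of the minimizer delta_u in (min)
x_star = np.linalg.pinv(C) @ b

# any other admissible field: add an element of ker(C), as w_{l,n} does up to rigid fields
kernel_proj = np.eye(8) - np.linalg.pinv(C) @ C
w = x_star + kernel_proj @ rng.standard_normal(8)

constraint_ok = np.allclose(C @ w, b)                            # same constraint satisfied
minimality_ok = np.linalg.norm(x_star) <= np.linalg.norm(w) + 1e-12   # minimizer is dominated
```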
So, we define:
\begin{eqnarray*}
w_{l,n}(x):=\chi\big(\tfrac{x-x_l}{a}\big)(u^*_{l,n}(x)-\bar{u}^*_{l,n})-\mathfrak{B}_{B(x_l,2a)\setminus\overline{ B(x_l,a)}}\big[(u^*_{l,n}-\bar{u}^*_{l,n})\cdot \nabla \big(\chi(\tfrac{\cdot-x_l}{a})\big)\big](x).
\end{eqnarray*}
Here, $\chi\in C_0^\infty(\RR^3)$ is such that $\chi\equiv1$ on $B(0,3/2)$ and $\chi\equiv0$ in $\mathbb R^3 \setminus \overline{B(0,2)}$, and $\bar{u}^*_{l,n}$ is the mean-value of $u^*_{l,n}$ over $B(x_l,2a)$. Clearly, our candidate matches the condition
\[
w_{l,n}(x)=\mathrm{constant}+u^*_{l,n}, \quad \text {on $B_l.$}
\]
For the next computations, we introduce also $\bar{S}^{(n)}_l$ and $\bar{R}^{(n)}_l$ the mean-values of $S^{(n)}_l$ and $R^{(n)}_l$ over $B(x_l,2a)$ respectively so that $\bar{u}_{l,n}^* = \bar{S}^{(n)}_l + \bar{R}^{(n)}_l$.
\medskip
By the scaling properties of the Bogovskii operator, we obtain that
\begin{eqnarray*}
\sum_{l=1}^N\int_{\RR^3}|\nabla w_{l,n}|^2& \leq & C\sum_{l=1}^N\|\nabla(\chi(\frac{\cdot-x_l}{a})(u^*_{l,n}-\bar{u}^*_{l,n}))\|^2_{L^2(B(x_l,2a))}\\
&\lesssim&\sum_{j=1}^4 H^{(n)}_{l,j},
\end{eqnarray*}
where
\begin{eqnarray*}
H^{(n)}_{l,1}:=\sum_{l=1}^N\|\nabla S^{(n)}_l\|^2_{L^2(B(x_l,2a))},~H^{(n)}_{l,2}:=\sum_{l=1}^N\|\nabla R^{(n)}_l\|^2_{L^2(B(x_l,2a))},
\end{eqnarray*}
and
\begin{eqnarray*}
H^{(n)}_{l,3}:=\dfrac{1}{a^2}\sum_{l=1}^N\| S^{(n)}_l-\bar{S}^{(n)}_l\|^2_{L^2(B(x_l,2a))},~~H^{(n)}_{l,4}:=\dfrac{1}{a^2}\sum_{l=1}^N\| R^{(n)}_l-\bar{R}^{(n)}_l\|^2_{L^2(B(x_l,2a))}.
\end{eqnarray*}
Here, it is standard that the Poincar\'e-Wirtinger inequality entails that $H^{(n)}_{l,3}\leq CH^{(n)}_{l,1}$ and $H^{(n)}_{l,4}\leq CH^{(n)}_{l,2}$. Hence, we only need to bound $H^{(n)}_{l,1}$ and $H^{(n)}_{l,2}$.
\medskip
We deal with $H^{(n)}_{l,1}$ first. According to the definition of $S^{(n)}_l$
and $\mathcal K[A_{\lambda}^{(n-1)},\mathcal B_{\lambda}]$ (see Proposition \ref{asym-stokes}), for each $l=1,\dots,N$, we have
\begin{eqnarray*}
S^{(n)}_l&=&\sum_{l\neq\lambda}a^3\mathcal{K}[A^{(n-1)}_\lambda,\mathcal{B}_\lambda](x-x_\lambda)\\
&=&a^3\sum_{l\neq\lambda}(\mathbb{M}[A^{(n-1)}_\lambda,\mathcal{B}_\lambda]:\nabla \mathcal{U}^1(x),\mathbb{M}[A^{(n-1)}_\lambda,\mathcal{B}_\lambda]:\nabla \mathcal{U}^2(x),\mathbb{M}[A^{(n-1)}_\lambda,\mathcal{B}_\lambda]:\nabla \mathcal{U}^3(x)).
\end{eqnarray*}
As in the proof of {Proposition \ref{Al}}, for each $i,j\in\{1,2,3\}$, $U^{ij}$ is homogeneous of degree $-1$ in $\RR^3\setminus\{0\}$ and satisfies
\begin{eqnarray*}
\Delta U^{ij}=\partial_i q_j~~\mathrm{in}~~\RR^3\setminus\{0\},
\end{eqnarray*}
where for each $j\in\{1,2,3\}$, $q_j$ is harmonic in $\RR^3\setminus\{0\}$. By the definition of $A^{(n+1)}_{l}$, applying Lemma \ref{modified goal} with $V:=U^{ij}$ and $Q:=\partial_i q_j$ for each $i,j\in\{1,2,3\}$, together with Proposition \ref{Al}, we have
\[
H^{(n)}_{l,1}\leq [C(2,\varepsilon_0)]^{n-1} \big(\frac{a}{d}\big)^{3n}(a^3N)|A|^2.
\]
Restricting the size of $\varepsilon_0$ further if necessary, we obtain that:
\begin{equation} \label{eq_Hl1}
H^{(n)}_{l,1}\leq C(\varepsilon_0) \big(\frac{a}{d}\big)^{8}(a^3N)|A|^2.
\end{equation}
We now turn to $H^{(n)}_{l,2}$. By the definition of $R^{(n)}_l$, we have that for any $x\in B(x_l,2a)$,
\begin{eqnarray*}
|\nabla R^{(n)}_l(x)|\leq Ca^4\sum_{j=0}^{n-1}\sum_{l\neq\lambda}|A^{(j)}_\lambda|\frac{1}{|x_l-x_\lambda|^4}.
\end{eqnarray*}
We notice here -- since the minimum distance between two $x_l$'s is larger than $d$, which is much larger than $a$ (for small $\varepsilon_0$) -- that, for each $l=1,\dots,N$ and for any $x\in B(x_l,2a)$, there holds:
\begin{eqnarray*}
\sum_{l\neq\lambda}|A^{(j)}_{\lambda}|\frac{1}{|x_l-x_\lambda|^4}&\leq& Cd^{-3}\sum_{l\neq \lambda}\int_{B(x_\lambda,d/2)}\dfrac{|A^{(j)}_\lambda|}{|x_l-y|^4}dy\\
& \leq& Cd^{-3}\int_{\RR^3}|\Phi^{(j)}(y)|\frac{\mathbf{1}_{|x_l-y|>d/2}}{|x_l-y|^4}dy\\
&\leq&C(\varepsilon_0)d^{-3}\int_{\RR^3}|\Phi^{(j)}(y)|\frac{\mathbf{1}_{|x-y|>d/2}}{|x-y|^4}dy,
\end{eqnarray*}
where
\begin{eqnarray*}
\Phi^{(j)}(x):=\sum_{l=1}^NA^{(j)}_l\mathbf{1}_{B(x_l,d/2)}(x).
\end{eqnarray*}
Therefore we obtain, with a direct Young inequality for convolution:
\begin{eqnarray*}
H^{(n)}_{l,2}\leq C\frac{a^8}{d^6}\|\sum_{j=0}^{n-1}|\Phi^{(j)}|*\frac{\mathbf{1}_{|y|>d/2}}{|y|^4}\|^2_{L^2(\RR^3)}\leq C\frac{a^8}{d^8}\Big(\sum_{j=0}^{n-1}\|\Phi^{(j)}\|_{L^2(\mathbb R^3)}\Big)^2,
\end{eqnarray*}
which combined with Proposition \ref{Al}, yields that:
\begin{equation} \label{eq_Hl2}
H^{(n)}_{l,2}\leq \Big(\frac{a}{d}\Big)^{8}\sum_{j=0}^{n-1}\Big( C(2,\varepsilon_0) \left(\frac{a}{d} \right)^{3/2}\Big)^{2j}Na^3|A|^2\leq C(\varepsilon_0)\Big(\frac{a}{d}\Big)^{8} Na^3 |A|^2,
\end{equation}
where we have chosen $\varepsilon_0$ sufficiently small so that the series
$\sum_{j\geq 0}\big(C(2,\varepsilon_0)(a/d)^{3/2}\big)^{2j}$ converges.
Combining \eqref{eq_Hl1}-\eqref{eq_Hl2}, we obtain the expected result.
\end{proof}
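The key analytic tool in the bound for $H^{(n)}_{l,2}$ is Young's inequality for convolutions, $\|f \ast g\|_{L^2} \leq \|f\|_{L^2}\|g\|_{L^1}$. A discrete sanity check on a sample $f$, with a kernel mimicking $\mathbf{1}_{|y|>d/2}/|y|^4$ (all values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
h = 0.01                                   # grid spacing (measure of one cell)
f = rng.standard_normal(500)               # stand-in for Phi^(j)
g = 1.0 / (1.0 + np.arange(500.0))**4      # integrable kernel, mimics 1_{|y|>d/2}/|y|^4

conv = np.convolve(f, g) * h               # discrete convolution with measure weight h
lhs = np.sqrt(np.sum(conv**2) * h)                           # ||f*g||_{L^2}
rhs = np.sqrt(np.sum(f**2) * h) * (np.sum(np.abs(g)) * h)    # ||f||_{L^2} ||g||_{L^1}
```

With the measure weights chosen consistently, the discrete inequality `lhs <= rhs` holds exactly, mirroring the continuous estimate used above.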
\section{Approximation of the target system} \label{sec_continu}
In this section, we fix $A \in {\rm Sym}_{3,\sigma}(\mathbb R)$ and $\mathbb M_{eff} \in \mathcal M(\varepsilon_0)$ (for some small $\varepsilon_0$) and we
analyse the properties of the asymptotic problem
\begin{align} \label{eq_continu_sec}
& \left\{
\begin{aligned}
- {\rm div} (2\mu(1+ \mathbb M_{eff})(D(u)) - p \mathbb{I}_3) &= 0 \\
{\rm div}\, u &= 0
\end{aligned}
\right. && \text{ on $\mathbb R^3$}, \\
& \quad u(x) = A x && \text{ at infinity}. \label{bc_continu_sec}
\end{align}
We note that $\mu$ is again a simple factor in this equation, so we only treat the case $\mu = 1$ below.
This system is associated with the weak formulation:
\begin{center}
\begin{minipage}{.8\textwidth}
Find $v \in \dot{H}^1_{\sigma}(\mathbb R^3)$ such that:
\[
2 \mu \int_{\mathbb R^3} [( 1 + \mathbb{M}_{eff} )(D(v) + A)] : \nabla w = 0\,,
\quad \forall \, w \in \dot{H}^1_{\sigma}(\mathbb R^3).
\]
\end{minipage}
\end{center}
Since $\mathbb{M}_{eff}$ has compact support, for $\varepsilon_0$ sufficiently small we have that $\|\mathbb{M}_{eff}\|_{L^{\infty}(\mathbb R^3)} \leq 1/2$, so that the
construction of a weak solution falls within the scope of the Lax--Milgram theorem. Hence, under the assumption that $\varepsilon_0$ is sufficiently small, we have existence and uniqueness of a $u_c$ satisfying
\begin{itemize}
\item $v_c(x) = (u_c(x) -Ax) \in \dot{H}^1_{\sigma}(\mathbb R^3),$
\item there exists $p_c$ for which \eqref{eq_continu_sec} holds true (in $\mathcal D'(\mathbb R^3)$),
\end{itemize}
and consequently, the pressure $p_c$ exists and is unique up to a constant.
We focus now -- as in the previous section for the problem in a perforated domain -- on a possible expansion of the solution $u_c$ in terms of ``powers of $\mathbb{M}_{eff}$''. Namely, for small $\varepsilon_0$ the matrix
$\mathbb{M}_{eff}$ can be seen as a perturbation of the identity so that
one may look for a solution to \eqref{eq_continu_sec}-\eqref{bc_continu_sec}
by iterating the mapping $v \mapsto \tilde{v} := \mathcal Lv$ solving the system:
\begin{align*}
&\left\{ \begin{aligned}
- {\rm div} ( D(\tilde{v}) - p \mathbb{I}_3) &= {\rm div} \, \mathbb{M}_{eff}(D(v)) + {\rm div} \, \mathbb{M}_{eff}(A) \\
{\rm div} \tilde{v} &= 0
\end{aligned}
\right.
&&
\text{ in $\mathbb R^3,$}\\
& \quad \tilde{v}(x) = 0 && \text{ at infinity},
\end{align*}
starting from $v^{(0)}(x) = 0.$ Again, by introducing a weak formulation and Lax--Milgram arguments, it is standard that there exists a unique velocity-field $\tilde{v}_c$ satisfying
\begin{itemize}
\item $\tilde{v}_c \in \dot{H}^1_{\sigma}(\mathbb R^3)$
\item there exists a pressure $\tilde{p}_c$ such that
\begin{equation*}
\left\{ \begin{aligned}
- {\rm div} ( D(\tilde{v}_c) - \tilde{p}_c \mathbb{I}_3) &= {\rm div} \, \mathbb{M}_{eff}(A) \\
{\rm div}\, \tilde{v}_c &= 0
\end{aligned}
\right.
\qquad
\text{ in $\mathbb R^3,$}
\end{equation*}
\end{itemize}
The main result of this section is the following proposition which
compares the velocity-field $v_c(x) = u_c(x) - Ax $ with $\tilde{v}_c.$
\begin{proposition}\label{uc-uc}
Under the assumption that $\varepsilon_0>0$ is sufficiently small, there exists a constant $C(K,\varepsilon_0)$ such that
\begin{eqnarray}\label{ucuc}
\|\nabla v_c-\nabla\tilde{v}_c\|_{L^2(\RR^3)}\leq C(K,\varepsilon_0)\|\mathbb{M}_{eff}\|^2_{L^{\infty}(\RR^3)} |A|.
\end{eqnarray}
\end{proposition}
\begin{proof}
This proof is a straightforward application of fixed-point arguments.
First, let us prove that the mapping $\mathcal L$ is a contraction on $\dot{H}^1_{\sigma}(\mathbb R^3).$ Indeed, for arbitrary $(v_1,v_2) \in \dot{H}^1_{\sigma}(\mathbb R^3),$ given the weak formulation for the Stokes problem, we have that $w = \mathcal L v_1 - \mathcal L v_2$ satisfies:
\[
\int_{\mathbb R^3} D(w) : D(\varphi) = \int_{\mathbb R^3} [\mathbb{M}_{eff} (D(v_1 -v_2))] : D(\varphi)\,, \qquad \forall \, \varphi \in \dot{H}^1_{\sigma}(\mathbb R^3).
\]
Setting $w = \varphi$ and recalling that $w$ is divergence-free we obtain that
\[
\int_{\mathbb R^3} |\nabla w|^2 = 2 \int_{\mathbb R^3} |D(w)|^2
\leq 8 \|\mathbb{M}_{eff}\|^2_{L^{\infty}} \int_{\mathbb R^3} |D(v_1-v_2)|^2
\leq 4\|\mathbb{M}_{eff}\|^2_{L^{\infty}} \int_{\mathbb R^3} |\nabla v_1-\nabla v_2|^2.
\]
Consequently, fix $\varepsilon_0 < 1/8.$ Then $\|\mathcal L\| < 1/\sqrt{2}$ and the mapping $\mathcal L$ is a contraction that admits a unique fixed point. This yields a solution to \eqref{eq_continu_sec}-\eqref{bc_continu_sec}. Furthermore, this solution is obtained by iterating the mapping $\mathcal L$ from $v^{(0)} = 0.$
So the sequence $v^{(n)} = \mathcal L^{\circ n} v^{(0)}$ converges to $v_c$
(in $\dot{H}^{1}(\mathbb R^3)$) while, by definition, $\tilde{v}_c = v^{(1)}.$
Similar energy estimates yield that
\begin{equation} \label{eq_boundvtilde}
\|\nabla \tilde{v}_c\|_{L^2(\mathbb R^3)}
\leq |K|^{1/2} \|\mathbb{M}_{eff}\|_{L^{\infty}(\mathbb R^3)}|A|.
\end{equation}
Standard arguments with contractions then yield that
\begin{align*}
\|\nabla \tilde{v}_c - \nabla v_c \|_{L^2(\mathbb R^3)}
& = \|v^{(1)} - \lim v^{(n)}\|_{\dot{H}^1(\mathbb R^3)} \\
& \leq \dfrac{\|\mathcal L \|}{1 - \| \mathcal L\|} \|v^{(1)}\|_{\dot{H}^1(\mathbb R^3)} \leq 4|K|^{1/2} \|\mathbb{M}_{eff}\|^2_{L^{\infty}(\mathbb R^3)} |A|.
\end{align*}
This concludes the proof.
\end{proof}
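The structure of this proof is purely a fixed-point statement and can be mirrored in finite dimensions: replace the Stokes operator by the identity and $\mathbb{M}_{eff}$ by a small matrix $M$, so that $\mathcal L$ becomes the affine contraction $v \mapsto b - Mv$ with $b = -MA$. The sketch below (with arbitrary sample data) checks that the iteration converges to the exact solution of $(I+M)u = b$ and that the first iterate -- the analogue of $\tilde{v}_c$ -- lies within $O(\|M\|^2)|A|$ of the limit:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = 0.05 * rng.standard_normal((n, n))  # plays the role of M_eff, with small norm
A = rng.standard_normal(n)              # plays the role of the data A
b = -M @ A                              # source term, analogue of div(M_eff(A))

u_exact = np.linalg.solve(np.eye(n) + M, b)

# iterate the affine contraction L: v -> b - M v, starting from v^(0) = 0
v = np.zeros(n)
iterates = []
for _ in range(60):
    v = b - M @ v
    iterates.append(v.copy())

v_tilde = iterates[0]                   # first iterate v^(1) = b, analogue of v~_c

op_norm = np.linalg.norm(M, 2)
err = np.linalg.norm(u_exact - v_tilde)
# exact algebra: u_exact - v_tilde = -M (I+M)^{-1} b, so err <= ||M||^2 |A| / (1 - ||M||)
bound_ok = err <= op_norm**2 * np.linalg.norm(A) / (1.0 - op_norm)
```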
\section{Proof of main result}
\label{sec_mainresult}
We end the paper with a proof of Theorem \ref{thm_main}. In the whole section, we assume that we are given a perforated domain such that \eqref{H1}-\eqref{H2} hold true. We are also given $\mathbb{M}_{eff} \in \mathcal M(\varepsilon_0)$ with simultaneously $(a/d)^3 < \varepsilon_0$
(see \eqref{H1}-\eqref{H2} for the definitions of $a$ and $d$).
Restrictions on $\varepsilon_0$ are introduced throughout the section.
Finally, we fix a matrix $A \in {\rm Sym}_{3,\sigma}(\mathbb R).$
\medskip
We recall that Theorem \ref{thm_main} is a stability estimate between the solutions to two problems. The first one is the Stokes problem in a perforated domain \eqref{eq_Stokes}-\eqref{bc_Stokes}-\eqref{eq_Newton}
that we studied in Section \ref{sec_refmet}. We restrict at first $\varepsilon_0$ so that Proposition \ref{prop_perfore} holds true. We have then a sequence of approximations $(u_{app}^{(n)})_{n\in \mathbb N}$ to the velocity-field $u_N(x) = Ax + v_N(x)$ solution to \eqref{eq_Stokes}-\eqref{bc_Stokes}-\eqref{eq_Newton}. The second problem is the continuous analogue
\eqref{eq_continu}-\eqref{bc_continu} that we studied in Section \ref{sec_continu}. We assume also that $\varepsilon_0$ is sufficiently small so that Proposition \ref{uc-uc} holds true. We have then an approximation
$\tilde{u}_{c}(x) = Ax + \tilde{v}_c(x)$ to the velocity-field $u_c$ solution to
\eqref{eq_continu}-\eqref{bc_continu}.
The purpose of Theorem \ref{thm_main} is to compute a bound from above for
$u_N - u_c.$ To this end, we fix $n=3$ and $u_{app} = u^{(3)}_{app}$ (with the notations of Proposition \ref{prop_perfore}) and write
\begin{equation} \label{eq_errorsplitting}
u_N - u_c = (u_N - u_{app}) + (u_{app} - \tilde{u}_c) + (\tilde{u}_{c} - u_c )
=: R_{perf} + R_{main} + R_{cont}.
\end{equation}
The two error terms $R_{perf}$ and $R_{cont}$ have been estimated previously
in Proposition \ref{prop_perfore} and Proposition \ref{uc-uc} respectively. So, we proceed in the next subsection with estimating $R_{main}$
and shall combine the various partial results in the last subsection to complete the proof of our theorem.
\subsection{Computing $R_{main}$}
We recall that $u_{app} = u^{(3)}_{app}$ is constructed via the method of reflections:
\begin{eqnarray*}
u_{app}(x)=Ax+\sum_{j=1}^3\Big(\sum_{l=1}^N U[A^{(j-1)}_l,B_l](x-x_l)\Big).
\end{eqnarray*}
By the definition of $u_{app}$ and Proposition \ref{asym-stokes}, we have the following decomposition, for any $x\in\mathcal{F}_N$
\begin{eqnarray}
u_{app}(x) - \tilde{u}_c(x)=R_1(x)+R_2(x),
\end{eqnarray}
where
\begin{eqnarray*}
R_1(x):=\sum_{l=1}^N\mathcal{K}[A,B_l](x-x_l) - \tilde{v}_c(x)
\end{eqnarray*}
and
\begin{eqnarray*}
R_2(x):=\sum_{l=1}^N\mathcal{H}[A,{B}_l](x-x_l)+\sum_{j=2,3}\sum_{l=1}^N U[A^{(j-1)}_l,B_l](x-x_l).
\end{eqnarray*}
We start with computing $R_1:$
\begin{proposition}\label{Homo-vc}
Let $K_0 \Subset \mathbb R^3$ and $p \in [1,3/2[.$ There exists a constant $C(K_0)$ for which:
\begin{equation} \label{eq_boundR1}
\|R_1\|_{L^{p}(K_0 \setminus \bigcup B(x_l,4a))}\leq C(K_0)|A|\Big[\|\mathbb M_N -\mathbb M_{eff}\|_{\dot{H}^{-1}(\RR^3)} + \left( \dfrac{a^3}{d^3}\right)^{1+\theta} \Big],
\end{equation}
where $\theta = \frac1p - \frac 23.$
\end{proposition}
\begin{proof}
By Proposition \ref{asym-stokes}, we know that each component of $\mathcal{K}$ can be written as:
\begin{eqnarray*}
\mathcal{K}[A,B_l]_i(x-x_l)&=&\mathbb{M}[A,B_l]:\nabla\mathcal{U}^i(x-x_l)
\end{eqnarray*}
In this identity, we recall that $\mathbb M[A,B_l] = a^3 \mathbb M[A,\mathcal B_l],$
given by \eqref{eq_defM}, and that $\mathcal{U}^i=(U^{i1},U^{i2},U^{i3})$ with
\begin{eqnarray*}
U^{ij}:=-\frac{1}{8\pi}\big[\frac{\delta_{ij}}{|x-y|}+\frac{(x_i-y_i)(x_j-y_j)}{|x-y|^3}\big]
\end{eqnarray*}
corresponding to the fundamental solution of Stokes equation in $\RR^3$.
According to the fact that for any $x\in\RR^3\setminus\{0\}$
\[
\Delta\mathcal{U}^i(x)=\nabla q_i
\]
with $q_i(z)=\frac{1}{4\pi}\frac{z_i}{|z|^3}$ and applying Lemma \ref{mean-value}, we obtain that for any $x \in K_0 \setminus \bigcup B(x_{\lambda},a)$, $i=1,2,3$, and $l=1,\ldots,N:$
\begin{multline*}
\mathcal{K}[A,{B}_l]_i(x-x_l)= \frac{3}{4\pi} \mathbb{M}[A,\mathcal{B}_l]: \int_{B(x_l,a)}\nabla \mathcal{U}^i(x-y)dy \\
+ \dfrac{a^3}{3} \mathbb{M}[A,\mathcal{B}_l]: \int_0^a\left(\frac{r^4}{a^3}-r\right)\int_{B(x_l,r)}\nabla^2 q_i(x-y)dydr
\end{multline*}
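As a numerical sanity check on these formulas, one can verify by finite differences that $\mathcal U$ is homogeneous of degree $-1$ and that $\Delta U^{ij}=\partial_i q_j$ away from the origin (here both tensors are taken with the positive sign convention, the overall sign being immaterial for these two properties):

```python
import numpy as np

def stokeslet(x):
    """Oseen tensor G_ij = (1/8pi)(delta_ij/|x| + x_i x_j/|x|^3), positive sign convention."""
    r = np.linalg.norm(x)
    return (np.eye(3) / r + np.outer(x, x) / r**3) / (8 * np.pi)

def grad_pressure(x):
    """d_i q_j for q_j(z) = z_j/(4 pi |z|^3): (1/4pi)(delta_ij/|x|^3 - 3 x_i x_j/|x|^5)."""
    r = np.linalg.norm(x)
    return (np.eye(3) / r**3 - 3 * np.outer(x, x) / r**5) / (4 * np.pi)

x0 = np.array([1.0, 0.5, -0.3])
h = 1e-4

# finite-difference Laplacian of every component G_ij at x0
lap = np.zeros((3, 3))
for k in range(3):
    e = np.zeros(3)
    e[k] = h
    lap += (stokeslet(x0 + e) - 2 * stokeslet(x0) + stokeslet(x0 - e)) / h**2

homogeneous = np.allclose(stokeslet(2 * x0), stokeslet(x0) / 2)   # degree -1 homogeneity
stokes_identity = np.allclose(lap, grad_pressure(x0), atol=1e-5)  # Delta G_ij = d_i q_j
```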
We proceed by remarking that:
\begin{eqnarray}
\sum_{l=1}^N \dfrac{3}{4\pi}\mathbb{M}[A,\mathcal{B}_l]: \int_{B(x_l,a)}\nabla \mathcal{U}^i(x-y)dy=\int_{\RR^3} \mathbb M_N(A)(y) :\nabla\mathcal{U}^i(x-y)dy
\end{eqnarray}
Furthermore, since $q_i$ is harmonic on $\mathbb R^3 \setminus \{0\},$
for $x \in K_0 \setminus \bigcup B(x_{\lambda},a)$ and $l\in \{1,\ldots,N\}$ there holds:
\begin{eqnarray}
\begin{split}
\int_0^a\left(\frac{r^4}{a^3}-r\right)\int_{B(x_l,r)}\nabla^2 q_i(x-y)dydr&=\int_0^a\left(\frac{r^4}{a^3}-r\right)\int_{B(x_l,a)}\nabla^2 q_i(x-y)dydr\\
&= - \frac{9}{40\pi a}\int_{B(x_l,a)}\nabla^2 q_i(x-y)dy.
\end{split}
\end{eqnarray}
On the other hand, by uniqueness of the solution to the Stokes problem
defining $\tilde{v}_c$ (in $\dot{H}^1(\mathbb R^3)$), we know that
$\tilde{v}_c$ is computed with the Green function for the Stokes problem.
This yields, componentwise:
\[
\tilde{v}_{c,i}(x) = \int_{\mathbb R^3} \mathbb{M}_{eff}(A)(y) : \nabla \mathcal U^{i}(x-y){\rm d}y, \quad \forall \, x \in \mathbb R^3.
\]
We note that this quantity is well-defined since $\mathbb M_{eff} \in L^{\infty}(\mathbb R^3)$ has compact support and $\nabla \mathcal U^i$
is homogeneous of degree $-2$ so that it is $L^p_{loc}(\mathbb R^3)$
for arbitrary $p < 3/2.$
Finally, the $i$-th component of $R_1$ can be rewritten as
\begin{multline*}
R_{1,i}(x)= \int_{\RR^3} \left[ \mathbb M_N - \mathbb M_{eff} \right](A)(y): \nabla\mathcal{U}^i(x-y)dy \\
- \frac{3a^2}{40\pi}\int_{\mathbb R^3} \mathbb M_N(A)(y) : \mathbf{1}_{|x-y| > 3a}\nabla^2 q_i(x-y)dy,
\end{multline*}
since $x \notin \bigcup B(x_l,4a).$
Concerning the first term on the right-hand side of this equality,
let $h$ denote any component of $\left[\mathbb M_N - \mathbb M_{eff} \right].$
By assumption, we have then $h\in \dot{H}^{-1}(\RR^3)$, so that it can be written $h=\partial_1 \varphi_1+\partial_2 \varphi_2$, where $\varphi_1$ and $\varphi_2$ are in $L^2(\RR^3)$ and
\[
\max_{i=1,2}\|\varphi_i\|_{L^2(\RR^3)}\leq \|h\|_{\dot{H}^{-1}(\RR^3)} \leq
\|\mathbb M_N - \mathbb M_{eff} \|_{\dot{H}^{-1}(\RR^3)}.
\]
Therefore, we have on $\mathbb R^3:$
\[
\int_{\RR^3} h (y) \partial_{l}{U}^{ik}(x-y)dy = \int_{\RR^3} \varphi_1(y) \partial_{1l}{U}^{ik}(x-y)dy + \int_{\RR^3} \varphi_2(y) \partial_{2l}{U}^{ik}(x-y)dy,
\]
where, by the Calder\'on--Zygmund inequality, the right-hand side of this identity is well defined and satisfies:
\begin{equation} \label{eq_R11}
\|\int_{\RR^3} h \partial_{l}{U}^{ik}(x-y)dy \|_{L^2(\RR^3)}
\leq
C\|\mathbb M_N - \mathbb M_{eff} \|_{\dot{H}^{-1}(\RR^3)}.
\end{equation}
As for the second term in $R_{1,i}$, we can apply a classical Young inequality for convolutions to obtain:
\begin{align*}
\|\int_{\RR^3} \mathbb{M}_N(A)(y)\nabla^2 q_i(x-y)dy\|_{L^{p}(\mathbb R^3)}
& \leq C\|\mathbb{M}_N(A)\|_{L^{p}(\RR^3)}\|\dfrac{\mathbf{1}_{|z|>3a}}{|z|^4}\|_{L^1(\mathbb R^3)} \\
& \leq C|A|\Big(\frac{a}{d}\Big)^{\frac 3p} \dfrac{1}{a}.
\end{align*}
Finally, applying the embedding $L^{2}(K_0) \subset L^{p}(K_0),$ we have:
\[
\|R_1\|_{L^p(K_0 \setminus \bigcup B(x_l,4a))} \leq C|A| \left[ \|\mathbb M_N - \mathbb M_{eff} \|_{\dot{H}^{-1}(\RR^3)} + a \Big(\frac{a}{d}\Big)^{\frac 3p} \right].
\]
We conclude by applying again that $Nd^3 \leq |K|$ so that
\[
a \left( \dfrac{a}{d}\right)^{\frac 3p} \leq (N a^3)^{\frac 13} \left( \dfrac{a}{d}\right)^{\frac 3p}
\leq \left[\left( \dfrac{a}{d}\right)^3\right]^{\frac 1p+\frac 13}.
\]
\end{proof}
We proceed by computing error estimates for $R_2.$
This is the content of the following proposition:
\begin{proposition}
Let $K_0 \Subset \mathbb R^3$ and $p \in [1,3/2[.$ There exists a constant $C(K,K_0)$ for which:
\begin{equation} \label{eq_boundR2}
\|R_2\|_{L^p(K_0 \setminus \bigcup B(x_l,4a))} \leq C(K,K_0)|A| \left( \dfrac{a^3}{d^3} \right)^{1+\theta} ,
\end{equation}
where $\theta = \frac 1p - \frac 23.$
\end{proposition}
\begin{proof}
We recall that
\begin{eqnarray*}
R_2(x)&=&\sum_{l=1}^N\mathcal{H}[A,{B}_l](x-x_l)+\sum_{j=2,3}\sum_{l=1}^N U[A^{(j-1)}_l,{B}_l](x-x_l)\\
&=&\sum_{j=2,3}\sum_{l=1}^N\mathcal{K}[A^{(j-1)}_l,{B}_l](x-x_l)+\sum_{j=1}^3\sum_{l=1}^N\mathcal{H}[A^{(j-1)}_l,{B}_l](x-x_l).
\end{eqnarray*}
Since $B_l = a \mathcal B_l,$ by Proposition \ref{asym-stokes}, we know that for any $j=1,2,3$ and $l=1,\dots,N$, when $|x-x_l| > 4a:$
\begin{eqnarray*}
|\mathcal{K}[A^{(j-1)}_l,{B}_l](x-x_l)|\leq C \frac{a^3|A^{(j-1)}_l|}{|x-x_l|^2},
\qquad
|\mathcal{H}[A^{(j-1)}_l,{B}_l](x-x_l)|\leq C \frac{ a^4|A^{(j-1)}_l|}{|x-x_l|^3}.
\end{eqnarray*}
Therefore, on $K_0 \setminus \bigcup B(x_l,4a)$, we have:
\begin{eqnarray*}
|R_2(x)|\leq C \Big[a^3\sum_{l=1}^N\frac{|A^{(1)}_l|+|A^{(2)}_l|}{|x-x_l|^2}+a^4 \sum_{l=1}^N\frac{|A|}{|x-x_l|^3}\Big].
\end{eqnarray*}
Consequently, denoting ${K_1} = K_0 \cup K,$ we obtain
\begin{align*}
\|R_2\|_{L^p(K_0)}& \leq C a^3 \sum_{i=1}^2\sum_{l=1}^{N} |A_l^{(i)}|\left( \int_{a}^{\mathrm{diam}(K_1)} \dfrac{dx}{|x|^{2p}} \right)^{\frac 1p}+
Na^4 |A| \left( \int_{a}^{\infty} \dfrac{dx}{|x|^{3p}} \right)^{\frac 1p} \\
& \leq C(K_1) \left( a^{1 + \frac 3p}\sum_{i=1}^{2} \sum_{l=1}^{N} |A_l^{(i)}| +
Na^{1+\frac 3p} |A| \right) \\
& \leq C(K,K_1) \left(\dfrac{a^3}{d^3}\right)^{1+\theta}|A|
\end{align*}
where we applied Proposition \ref{prop_perfore} to pass from the second line to the last, together with the remark that $Nd^{3} \leq |K|$.
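For the second term, the exponent bookkeeping in the last step is explicit (a sketch; the sums $\sum_l|A_l^{(i)}|$ are controlled through Proposition \ref{prop_perfore}, whose statement we do not repeat here):

```latex
N a^{1+\frac 3p} \leq \frac{|K|}{d^3}\, a^{1+\frac 3p}
= |K|\, d^{\frac 3p - 2}\left(\frac{a}{d}\right)^{1+\frac 3p},
\qquad
\left(\frac{a^3}{d^3}\right)^{1+\theta}
= \left(\frac{a}{d}\right)^{3\left(\frac 1p + \frac 13\right)}
= \left(\frac{a}{d}\right)^{1+\frac 3p},
```

and since $p<3/2$ gives $\frac 3p-2>0$ while $d\leq |K|^{1/3}$ (from $Nd^3\leq|K|$ and $N\geq 1$), the factor $d^{3/p-2}$ is bounded by a constant depending only on $K$.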
\end{proof}
\subsection{End of the proof}
Let $K_0 \Subset \mathbb R^3$ contain $K$ (for simplicity) and let
$p\in[1,3/2[$. By \eqref{eq_errorsplitting} we have
\[
\|u_N - u_c\|_{L^p(K_0)} \leq \|R_{perf}\|_{L^p(K_0)} + \|R_{main}\|_{L^p(K_0)}
+ \|R_{cont}\|_{L^p(K_0)}.
\]
Since $p\leq 6$ and by the embedding $\dot{H}^1(\mathbb R^3) \subset L^{p}_{loc}(\mathbb R^3)$
and Proposition \ref{prop_perfore}, we have the bounds
\begin{align*}
\|R_{perf}\|_{L^p(K_0)}& \leq C(K_0) \|R_{perf}\|_{L^6(\mathbb R^3)} \leq C(K_0)
\|u_N - u_{app}\|_{\dot{H}^1(\mathbb R^3)}\\
& \leq C(\varepsilon_0,K_0)|A| \left( \dfrac{a^3}{d^3} \right)^{\frac{11}{6}}.
\end{align*}
With a similar chain of inequalities, we obtain by applying Proposition \ref{uc-uc}
\begin{align*}
\|R_{cont}\|_{L^p(K_0)} \leq C(K,K_0,\varepsilon_0)|A| \|\mathbb{M}_{eff}\|^2_{L^{\infty}(\mathbb R^3)}.
\end{align*}
Finally, concerning $R_{main}$, we recall that we have simultaneously
$R_{main} = u_{app} - \tilde{u}_c = v_{app} - \tilde{v}_c$ (where we denote $v_{app}(x) = u_{app}(x) - Ax$) and $R_{main} = R_1 +R_2,$
with the notations of the previous subsection. This entails that:
\[
\|R_{main}\|_{L^p(K_0)} \leq
\|u_{app} - \tilde{u}_c\|_{L^p( \bigcup B(x_l,4a))}
+ \|R_1\|_{L^p(K_0\setminus \bigcup B(x_l,4a))} + \|R_{2}\|_{L^p(K_0\setminus \bigcup B(x_l,4a))}.
\]
The last two terms of the right-hand side are controlled respectively by
\eqref{eq_boundR1} and \eqref{eq_boundR2}:
\begin{multline} \label{eq_boundglobal}
\|R_1\|_{L^p(K_0\setminus \bigcup B(x_l,4a))} + \|R_{2}\|_{L^p(K_0\setminus \bigcup B(x_l,4a))}
\\ \leq C(K,K_0)|A| \left[ \|\mathbb M_N -\mathbb M_{eff}\|_{\dot{H}^{-1}(\RR^3)} + \left( \dfrac{a^3}{d^3}\right)^{1+\theta} \right]
\end{multline}
where $\theta = \frac 1p-\frac 23.$
As for the first term, we first bound
\[
\|u_{app} - \tilde{u}_c\|_{L^p(\cup B(x_l,4a))} \leq |\bigcup B(x_l,4a)|^{\frac 1p - \frac 16}( \|v_{app}\|_{L^6(\mathbb R^3)} + \|\tilde{v}_c\|_{L^6(\mathbb R^3)}).
\]
Here, it is straightforward from \eqref{eq_boundvtilde} that:
\[
\|\tilde{v}_c\|_{L^6(\mathbb R^3)} \leq C(K)|A| \|\mathbb M_{eff}\|_{L^{\infty}(\mathbb R^3)}.
\]
As for $u_{app}$, we have, by Proposition \ref{prop_perfore} and
the uniform estimate \eqref{eq_unifuN}, that:
\begin{align*}
\|v_{app}\|_{L^6(\mathbb R^3)}& \leq C \|\nabla v_{app}\|_{L^2(\mathbb R^3)} \leq C \left( \|\nabla v_{app} - \nabla v_N\|_{L^2(\mathbb R^3)} + \|\nabla v_N\|_{L^2(\mathbb R^3)}\right)\\
& \leq C|A| \left(\dfrac{a^3}{d^3} \right)^{\frac 12}.
\end{align*}
Via a straightforward bound on the volume of $\bigcup B(x_l,4a)$ we conclude that:
\begin{equation} \label{eq_boundlocal}
\|u_{app} - \tilde{u}_c\|_{L^p(\bigcup B(x_l,4a))} \leq
C(K)\left( \dfrac{a^3}{d^3}\right)^{\frac 1p - \frac 16}\left(\left(\dfrac{a^3}{d^3}\right)^{1/2} + \|\mathbb{M}_{eff}\|_{L^{\infty}(\mathbb R^3)} \right) |A|.
\end{equation}
Combining \eqref{eq_boundglobal} and \eqref{eq_boundlocal} yields
\begin{multline}
\|R_{main}\|_{L^p(K_0)} \\
\leq C(K,K_0,\varepsilon_0)|A| \left[ \|\mathbb M_N -\mathbb M_{eff}\|_{\dot{H}^{-1}(\RR^3)} + \left( \dfrac{a^3}{d^3}\right)^{1+\theta} + \|\mathbb{M}_{eff}\|^2_{L^{\infty}(\mathbb R^3)} \right],
\end{multline}
since, as $p<3/2,$ we have $2/p-1/3 > 1 + \theta = 1/p+1/3.$
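Indeed, the comparison of the two exponents reduces to

```latex
\frac 2p - \frac 13 - (1+\theta)
= \frac 2p - \frac 13 - \frac 1p - \frac 13
= \frac 1p - \frac 23
= \theta > 0
\qquad \text{for } p<\frac 32.
```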
\medskip
Finally, we have proven:
\begin{multline*}
\|u_N - u_c\|_{L^p(K_0)}\\
\leq C(K,K_0,\varepsilon_0) |A|\left[ \|\mathbb M_N[A] -\mathbb M_{eff}[A]\|_{\dot{H}^{-1}(\RR^3)} + \left( \dfrac{a^3}{d^3}\right)^{1+\theta} + \|\mathbb{M}_{eff}\|^2_{L^{\infty}(\mathbb R^3)} \right].
\end{multline*}
This concludes the proof.
\bigskip
\paragraph{\bf Acknowledgement.}
The authors would like to thank David G\'erard-Varet for many fruitful discussions. The authors are partially supported by ANR Project IFSMACS
ANR-15-CE40-0010. The first author is also supported by ANR Project
SingFlow ANR-18-CE40-0027 and Labex Numev Convention grants ANR-10-LABX-20.
\medskip
\noindent [\emph{Effective viscosity of a polydispersed suspension}, arXiv:1905.12306, math.AP]
\bigskip
\begin{center}
{\large\bf Tight Lower Bounds on the Sizes of Symmetric Extensions of Permutahedra and Similar Results}\\[2mm]
(arXiv:0912.3446)
\end{center}
\paragraph{\bf Abstract.}
It is well known that the permutahedron~$\Pi_n$ has~$2^n-2$ facets. The Birkhoff polytope provides a symmetric extended formulation of~$\Pi_n$ of size~$\Theta(n^2)$. Recently, Goemans described a non-symmetric extended formulation of~$\Pi_n$ of size~$\Theta(n\log(n))$. In this paper, we prove that~$\Omega(n^2)$ is a lower bound for the size of symmetric extended formulations of~$\Pi_n$.
\section{Introduction}
Extended formulations of polyhedra have gained importance in the recent past, because this concept allows one to represent a polyhedron by a higher-dimensional one with a simpler description.
To illustrate the power of extended formulations we take a look at the permutahedron~$\Pi_n\subseteq \mathbb{R}^{n}$, which is the convex hull of all points obtained from~$(1,2,\cdots, n)\in \mathbb{R}^n$ by coordinate permutations. The minimal description of~$\Pi_n$ in the space~$\mathbb{R}^n$ looks as follows~\cite{CCZ09}:
\begin{align*}
\sum_{v\in\ints{n}}x_v&=\frac{n(n+1)}{2}&&\\
\sum_{v\in S}x_v&\ge\frac{|S|(|S|+1)}{2}&&\text{ for all }\varnothing\neq S\subset\ints{n}
\end{align*}
Thus the permutahedron~$\Pi_n$ has~$n!$ vertices and~$2^n-2$ facets. At the same time it is easy to derive an extended formulation of size~$\Theta(n^2)$ from the Birkhoff polytope~\cite{CCZ09}:
\begin{equation}\label{eq:sym_ext_perm}
\begin{aligned}
\sum_{i\in\ints{n}}iz_{i,v}&=x_v&&\text{ for all }v\in\ints{n}\\
\sum_{v\in\ints{n}}z_{i,v}&=1&&\text{ for all }i\in\ints{n}\\
\sum_{i\in\ints{n}}z_{i,v}&=1&&\text{ for all }v\in\ints{n}\\
z_{i,v}\ge 0&&&\text{ for all }i,v\in\ints{n}
\end{aligned}
\end{equation}
The projection of the polyhedron described by~\eqref{eq:sym_ext_perm} to the~$x$-variables gives the permutahedron~$\Pi_n$. Clearly, every coordinate permutation of~$\mathbb{R}^n$ maps~$\Pi_n$ to itself. The extended formulation~\eqref{eq:sym_ext_perm} respects this symmetry in the sense that every such permutation of the~$x$-variables can be extended by some permutation of the~$z$-variables such that these two permutations leave~\eqref{eq:sym_ext_perm} invariant (up to reordering of the constraints).
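As a quick computational sanity check (not part of the original paper; the helper names are illustrative), one can verify for small $n$ that the projection $x_v=\sum_i i\,z_{i,v}$ in \eqref{eq:sym_ext_perm} maps the vertices of the Birkhoff polytope -- the permutation matrices -- onto exactly the $n!$ vertices of $\Pi_n$:

```python
from itertools import permutations

def perm_matrix(sigma, n):
    # 0/1 matrix with z[i][v] = 1 iff sigma(i) = v: a vertex of the Birkhoff polytope
    return [[1 if sigma[i] == v else 0 for v in range(n)] for i in range(n)]

def project(z, n):
    # projection of the extended formulation: x_v = sum_i i * z_{i,v}, i running from 1 to n
    return tuple(sum((i + 1) * z[i][v] for i in range(n)) for v in range(n))

n = 4
images = {project(perm_matrix(sigma, n), n) for sigma in permutations(range(n))}
vertices = set(permutations(range(1, n + 1)))
assert images == vertices  # exactly the n! vertices of the permutahedron
```

Each permutation matrix projects to the point $(\sigma^{-1}(1),\dots,\sigma^{-1}(n))$, so the $n!$ vertices are hit bijectively.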
Also there exists a non-symmetric extended formulation of the permutahedron of size~$\Theta(n\log(n))$~\cite{Goe09}. This is the best one can achieve~\cite{Goe09}, due to the fact that every face of~$\Pi_n$ (including the~$n!$ vertices) is a projection of some face of the extension. Since the number of faces of a polyhedron is bounded by~$2$ raised to the number of its facets, every extension of the permutahedron has at least~$\log_2(n!)=\Theta(n\log(n))$ facets.
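The bound $\log_2(n!)=\Theta(n\log(n))$ can be verified directly:

```latex
\log_2(n!) \;\leq\; n\log_2 n,
\qquad
\log_2(n!) \;=\; \sum_{i=2}^{n}\log_2 i
\;\geq\; \sum_{i=\lceil n/2\rceil}^{n}\log_2\frac n2
\;\geq\; \frac n2\,\log_2\frac n2 .
```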
As we show in this paper, the size of the extended formulation~\eqref{eq:sym_ext_perm} is asymptotically optimal for symmetric formulations of the permutahedron. Thus there exists a gap in size between symmetric and non-symmetric extended formulations of~$\Pi_n$. This situation appears in some other cases as well, e.g. the cardinality constrained matching polytopes and the cardinality constrained cycle polytopes~\cite{KPT09}. Even though the gaps observed in those cases are more substantial, the permutahedron is interesting because of the possibility to determine tight asymptotic lower bounds~$\Omega(n^2)$ and~$\Omega(n\log(n))$ on the sizes of symmetric and non-symmetric extended formulations.
The paper is organised as follows. Section~\ref{sec:definitions} contains definitions of extensions, the crucial notion of a section, and some auxiliary results. In Section~\ref{sec:symextension} we exploit some known techniques~\cite{KPT09},\cite{Yan91} and some new approaches to prove a lower bound on the number of variables and facets in symmetric extensions of the permutahedron.
\paragraph{Acknowledgements.} I thank Volker Kaibel for valuable comments, which led to simplification of Lemma~\ref{thm:structure} and Lemma ~\ref{thm:set}, and for his very useful recommendations on wording.
\section{Extensions, Sections and Symmetry}
\label{sec:definitions}
Here we list some known definitions and results, which will be used later. For a broader discussion of symmetry in extended formulations of polyhedra we refer the reader to~\cite{KPT09}.
A polytope~$Q\subseteq\mathbb{R}^d$ together with a linear map~$p:\mathbb{R}^d\rightarrow\mathbb{R}^m$ is called an \emph{extension} of a polytope~$P\subseteq\mathbb{R}^m$ if the equality~$p(Q)=P$ holds. Moreover, if~$Q$ is the intersection of an affine subspace of~$\mathbb{R}^d$ and the nonnegative orthant~$\mathbb{R}_+^d$ then~$Q$ is called a \emph{subspace extension}. A (finite) system of linear equations and inequalities whose solutions are the points in an extension~$Q$ of~$P$ is an \emph{extended formulation} for~$P$.
Throughout the proof we mostly deal with \emph{sections}~$s:X\rightarrow Q$, which are maps that assign to every vertex~$x\in X$ of~$P$ some point~$s(x)\in Q\cap p^{-1}(x)$. Such a section induces a bijection between~$X$ and its image~$s(X)\subseteq Q$, whose inverse is given by~$p$.
For an arbitrary group~$G\subseteq \symGr{m}$ acting on the set~$X$ of vertices of~$P$,
an extension as above is \emph{symmetric} (with respect to the action of~$G$ on~$X$), if for every~$\pi\in G$ there is a permutation~$\varkappa_{\pi}\in\symGr{d}$
with~$\varkappa_{\pi}.Q = Q$
and
\begin{equation}\label{eq:pGmap}
p(\varkappa_{\pi}.y) = \pi.p(y) \quad\text{for all~$y\in p^{-1}(X)$.}
\end{equation}
We define an extended formulation~$A^=y=b^=$,~$A^{\le}y\le b^{\le}$ describing the polyhedron
\begin{equation*}
Q=\setDef{y\in\mathbb{R}^d}{A^=y=b^=, A^{\le}y\le b^{\le}}
\end{equation*}
extending~$P\subseteq\mathbb{R}^m$ as above to be \emph{symmetric} (with respect to the action of~$G$ on the set~$X$ of vertices of~$P$), if for every~$\pi\in G$ there is a permutation~$\varkappa_{\pi}\in\symGr{d}$ satisfying~\eqref{eq:pGmap} and there are two permutations~$\varrho^=_{\pi}$ and~$\varrho^{\le}_{\pi}$ of the rows of~$(A^=,b^=)$ and~$(A^{\le}, b^{\le})$, respectively, such that the corresponding simultaneous permutations of the columns and the rows of the matrices~$(A^=,b^=)$ and~$(A^{\le}, b^{\le})$ leaves them unchanged. Clearly, in this situation the permutations~$\varkappa_{\pi}$ satisfy~$\varkappa_{\pi}.Q= Q$, which implies the following.
\begin{lemma}\label{lem:symExtFormToSymExt}
Every symmetric extended formulation describes a symmetric extension.
\end{lemma}
We call an extension \emph{weakly symmetric} (with respect to the action of~$G$ on~$X$) if there is a section~$s:X\rightarrow Q$ such that for every~$\pi\in G$ there is a permutation~$\varkappa_{\pi}\in\symGr{d}$ with~$s(\pi.x)=\varkappa_{\pi}.s(x)$ for all~$x\in X$. It can be shown that every symmetric extension is weakly symmetric.
Dealing with weakly symmetric extension we can define an action of~$G$ on the set~$\mathcal{S}=\{s_1,\dots,s_d\}$ of the component functions of the section~$s:X\rightarrow Q$ with~$\pi.s_j=s_{\varkappa_{\pi^{-1}}^{-1}(j)}\in\mathcal{S}$
for each~$j\in\ints{d}$. In order to see that this definition indeed yields a group action, observe that, for each~$j\in\ints{d}$, we have
\begin{equation}\label{eq:actionCompSect}
(\pi.s_j)(x)=s_{\varkappa_{\pi^{-1}}^{-1}(j)}(x)=(\varkappa_{\pi^{-1}}.s(x))_j=s_j(\pi^{-1}.x)\quad\text{for all }x\in X\,,
\end{equation}
from which one deduces~$\id_m.s_j=s_j$ for the identity element~$\id_m$ in~$G$ as well as~$(\pi\pi').s_j=\pi.(\pi'.s_j)$ for all~$\pi,\pi'\in G$.
The \emph{isotropy} group of~$s_j\in\mathcal{S}$ under this action is
\begin{equation*}
\isoGr{s_j}=\setDef{\pi\in G}{\pi.s_j=s_j}\,.
\end{equation*}
From~\eqref{eq:actionCompSect} one sees that,~$\pi$ is in~$\isoGr{s_j}$ if and only if we have~$s_j(x)=s_j(\pi^{-1}.x)$ for all~$x\in X$ or, equivalently, ~$s_j(\pi.x)=s_j(x)$ for all~$x\in X$.
In general, it will be impossible to identify the isotropy groups~$\isoGr{s_j}$ without more knowledge on the section~$s$. However, for each isotropy group~$\isoGr{s_j}$, one can at least bound its index~$(G:\isoGr{s_j})$ by the number of variables:
\begin{equation*}
(G:\isoGr{s_j})\le d
\end{equation*}
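This bound is an instance of the orbit--stabilizer theorem: the index equals the size of the orbit of~$s_j$, which is a subset of~$\mathcal{S}$,

```latex
(G:\isoGr{s_j}) = \left|\setDef{\pi.s_j}{\pi\in G}\right| \leq |\mathcal{S}| \leq d.
```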
The next lemma can be used to transform an arbitrary symmetric extension into a symmetric subspace extension.
\begin{lemma}\label{thm:transform_eq}
If there is a symmetric extension in~$\mathbb{R}^{\tilde{d}}$ with~$f$ facets for a polytope~$P$, then there is also a symmetric subspace extension in~$\mathbb{R}^d$ with~$d\le 2\tilde{d}+f$ for~$P$.
\end{lemma}
Using this lemma and a lower bound on the number of variables in symmetric subspace extensions of the given polytope~$P$, one gets a lower bound on the total number of facets and variables in any symmetric extension of~$P$.
\section{Bound on Symmetric Subspace Extension of the Permutahedron}
\label{sec:symextension}
Now we would like to establish a lower bound on the
number of variables in symmetric subspace extensions of the permutahedron.
\begin{theorem}\label{thm:lowerbound}
For every~$n\ge6$ there exists no weakly symmetric subspace extension of the permutahedron~$\Pi_{n}$ with less than~$\frac{n(n-1)}{2}$ variables (with respect to the group~$G=\symGr{n}$ acting via permuting the corresponding elements).
\end{theorem}
For the proof, we assume that~$Q\subseteq\mathbb{R}^d$ with~$d<\frac{n(n-1)}{2}$ is a weakly symmetric subspace extension of~$\Pi_{n}$. Weak symmetry is meant with respect to the action of~$G=\symGr{n}$ on the set~$X$ of vertices of~$\Pi_{n}$, and we assume~$s:X\rightarrow Q$ to be a section as required in the definition of weak symmetry.
The operator~$\Lambda$ maps any permutation~$\zeta$ to the vector~$(\zeta^{-1}(1), \zeta^{-1}(2), \dots, \zeta^{-1}(n))$. Thus, we have
\begin{equation*}
X=\setDef{\Lambda(\zeta)}{\zeta\in\symGr{n}}
\end{equation*}
where~$\symGr{n}$ is the set of all permutations on the set~$\ints{n}$, and
\begin{equation*}
(\pi.\Lambda(\zeta))_v=\Lambda(\zeta)_{\pi^{-1}(v)}
\end{equation*}
holds for all~$\pi\in \symGr{n}$,~$\zeta\in\symGr{n}$.
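As a small computational aside (not from the original text; 0-based indexing, illustrative names), the definitions above can be checked to satisfy~$\pi.\Lambda(\zeta)=\Lambda(\pi\zeta)$, i.e. the action of~$\symGr{n}$ on the vertex set~$X$ corresponds to left multiplication of permutations:

```python
from itertools import permutations

def inverse(p):
    # inverse of a permutation given as a tuple (0-based)
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def Lambda(zeta):
    # the vector (zeta^{-1}(0), ..., zeta^{-1}(n-1))
    return inverse(zeta)

def act(pi, x):
    # coordinate permutation of a vector: (pi.x)_v = x_{pi^{-1}(v)}
    piinv = inverse(pi)
    return tuple(x[piinv[v]] for v in range(len(x)))

def compose(pi, zeta):
    # (pi * zeta)(i) = pi(zeta(i))
    return tuple(pi[zeta[i]] for i in range(len(zeta)))

n = 4
for pi in permutations(range(n)):
    for zeta in permutations(range(n)):
        assert act(pi, Lambda(zeta)) == Lambda(compose(pi, zeta))
```

In particular the action is transitive on~$X$, since left multiplication is transitive on~$\symGr{n}$.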
In order to identify suitable subgroups of the isotropy groups~$\isoGr{s_j}$, we use the following result on subgroups of the symmetric group~$\symGr{n}$~\cite{Yan91}. Here~$\altGr{n}$ denotes the alternating group, i.e. the group consisting of all even permutations on the set~$\ints{n}$.
\begin{lemma}\label{lem:Yannakakis-claim}
For each subgroup~$U$ of~$\symGr{n}$ with~$(\symGr{n}:U)\le\binom{n}{k}$ for ~$k<\frac{n}{4}$, there is some ~$W\subseteq\ints{n}$ with~$|W|\le k$ such that
\begin{equation*}
\setDef{\pi\in
\altGr{n}}{\pi(v)=v\text{ for all }v\in W}\subseteq U
\end{equation*}
holds.
\end{lemma}
As we assumed~$d<\binom{n}{2}$, Lemma~\ref{lem:Yannakakis-claim} implies that for all~$j\in\ints{d}$
\begin{equation*}
\setDef{\pi\in\altGr{n}}{\pi(v)=v\text{ for all }v\in V_j}\subseteq\isoGr{s_j}
\end{equation*}
for some set~$V_j\subset \ints{n}$ with~$|V_j|\le 2$. We can prove that~$V_j$ can be chosen to contain not more than one element, which we denote by~$v_j$.
\begin{lemma}
For each~$j\in\ints{d}$ there is some~$v_j\in \ints{n}$ such that
\begin{equation*}
\setDef{\pi\in\altGr{n}}{\pi(v_j)=v_j}\subseteq\isoGr{s_j}
\end{equation*}
This element~$v_j$ is uniquely determined unless~$\altGr{n} \subseteq \isoGr{s_j}$.
\end{lemma}
\begin{proof}
The statement of the lemma is automatically true if the set~$V_j$ is empty or contains just one element. So let us assume that the set~$V_{j}$ consists of two elements~$\{v,w\}$. If the group~$\isoGr{s_j}$ had the two invariant blocks~$V_{j}$ and~$\ints{n}\setminus V_{j}$, then the inequality
\begin{equation*}
d<\frac{n(n-1)}{2}\le(\symGr{n}:\isoGr{s_j})
\end{equation*}
would hold, contradicting the bound~$(\symGr{n}:\isoGr{s_j})\le d$. Thus we can find some permutation~$\tau\in\isoGr{s_j}$ such that, w.l.o.g.,~$\tau(v) \not \in \{ v,w \}$.
Later it will be convenient to have~$\tau(w)=w$ and~$\tau\in\altGr{n}$. If~$\tau(w)\not =w$ or~$\tau \not \in \altGr{n}$, we consider the new permutation~$\tau'=\tau^{-1}\beta\tau \in \altGr{n}$, where~$\beta\in \altGr{n}$,~$\beta(v)=v$,~$\beta(w)=w$,~$\beta\tau(w)=\tau(w)$ and~$\beta\tau(v)\neq\tau(v)$. For every~$n\ge6$ such a permutation~$\beta$ can be found since~$\tau(v)\not \in \{v,w,\tau(w)\}$. The construction of~$\tau'$ guarantees that~$\tau'(w)=w$,~$\tau'(v)\neq v$ and~$\tau'\in\isoGr{s_j}$. Hence we can assume that~$\tau(w)=w$ and~$\tau \in \altGr{n}$.
To prove that the element~$v_j$ described in the lemma exists, we show that~$w$ is one such element, i.e. that
\begin{equation}\label{eq:iso_element}
\setDef{\pi\in
\altGr{n}}{\pi(w)=w} \subseteq \isoGr{s_j}
\end{equation}
Any permutation~$\pi\in\altGr{n}$ with~$\pi(w)=w$ and the property~$\pi(v)\neq v$ can be represented as~$(\pi(\tau\alpha)^{-1})\tau\alpha$ for any~$\alpha\in\symGr{n}$. We choose a permutation~$\alpha\in\altGr{n}$ such that~$\alpha(v)=v$,~$\alpha(w)=w$ and~$\alpha\pi^{-1}(v)=\tau^{-1}(v)$. The existence of such an~$\alpha$ can be trivially proved for~$n\ge6$. Thus the permutation~$\pi$ belongs to~$\isoGr{s_j}$ because all three permutations~$\tau$,~$\alpha$ and~$\pi(\tau\alpha)^{-1}$ belong to~$\isoGr{s_j}$ (note that~$\pi(\tau\alpha)^{-1}$ and~$\alpha$ are even permutations and fix the elements~$v$ and~$w$).
Any permutation~$\pi\in\altGr{n}$ with~$\pi(w)=w$ belongs to~$\isoGr{s_j}$ anyway whenever~$\pi(v)=v$. Therefore we can conclude that (\ref{eq:iso_element}) holds.
Having some other element~$u\in \ints{n}$,~$u\ne w$ such that
\begin{equation}\label{eq:iso_addit}
\setDef{\pi\in
\altGr{n}}{\pi(u)=u} \subseteq \isoGr{s_j}
\end{equation}
we can prove that~$\altGr{n}\subseteq \isoGr{s_j}$, since every permutation~$\pi\in \altGr{n}$ is a composition of not more than four permutations described by (\ref{eq:iso_element}) and (\ref{eq:iso_addit}).
\end{proof}
The next theorem has an important role in the proof because it describes the action of~$\altGr{n}$ on the components~$s_j$.
\begin{theorem}\label{thm:partition}
There exists a partition of the set~$\ints{d}$ into sets~$\mathcal{A}_i$ and~$\mathcal{B}_j$, such that each set~$\mathcal{B}_j$ consists of just one element~$b_j$ and each set~$\mathcal{A}_i$ consists of~$n$ elements~$a^i_1$,~$a^i_2$,\ldots,~$a^i_n$ with
\begin{equation}\label{eq:partition}
s_{a^i_t}(\pi.x)=s_{a^i_{\pi^{-1}(t)}}(x) \qquad
s_{b_j}(\pi.x)=s_{b_j}(x)
\end{equation}
for any vertex ~$x\in X$ and all~$\pi\in \altGr{n}$.
\end{theorem}
Let us consider, for~$v\in\ints{n-2}$, permutations~$\rho_v$ consisting of just one cycle~$(v,v+1,v+2)$. We would like to show the existence of a partition~$\mathcal{A}_i$,~$\{b_j\}$ which satisfies all cardinality assumptions of Theorem~\ref{thm:partition} and satisfies equation~\eqref{eq:partition} for all~$x\in X$ and all permutations~$\rho_v$. Such a partition satisfies Theorem~\ref{thm:partition} because every permutation~$\pi\in \altGr{n}$ is a product of permutations~$\rho_v$.
Throughout the whole proof, the action of~$\symGr{d}$ is restricted to the action on the vectors~$s(x)$ for~$x\in X$. This means that two permutations~$\varkappa'$ and~$\varkappa$ from~$\symGr{d}$ are equivalent for us if~$s_{{\varkappa'}^{-1}(j)}(x)=s_{\varkappa^{-1}(j)}(x)$ for all~$x$ from~$X$ and for all~$j$ from~$\ints{d}$. For example, we can take the identity permutation~$\id_d$ instead of~$\varkappa$ if~$s_{\varkappa^{-1}(j)}(x)=s_j(x)$ for all~$x\in X$ and all~$j\in\ints{d}$.
\begin{lemma}\label{thm:cycles}
For each~$\pi=(w_1,w_2,w_3)\in\altGr{n}$ there exists a permutation in~$\symGr{d}$ equivalent to~$\varkappa_\pi$ such that all cycles of this permutation are of the form~$({j_1}, {j_2}, {j_3})$ with~$v_{j_t}=w_t$ and~$\altGr{n}\not\subseteq \isoGr{s_{j_t}}$ for all~$t\in\ints{3}$.
\end{lemma}
\begin{proof}
The permutation~$\varkappa_\pi^3$ is equivalent to the identity permutation~$\id_d$ since the permutation~$\pi^3$ is the identity permutation~$\id_n$.
Thus any cycle~$C$ of the permutation~$\varkappa_\pi$ permutes indices of identical component functions of~$s$ if the cycle length~$|C|$ is not divisible by three. Hence, we can assume that every cycle~$C$ of~$\varkappa_\pi$ has length~$|C|\equiv 0 \pmod 3$.
The same argument allows us to transform each cycle~$C=({j_1},{j_2},\cdots {j_{3l}})$ of the permutation~$\varkappa_\pi$ into the following cycles~$({j_1},{j_2},{j_3})$,\ldots,~$({j_{3l-2}},{j_{3l-1}},{j_{3l}})$, offering an equivalent permutation to~$\varkappa_\pi$. Thus we may assume that~$\varkappa_\pi$ contains cycles of length three only.
Let us consider one of the cycles~$({j_1},{j_2},{j_3})$ of the permutation~$\varkappa_\pi$. We investigate two possible cases.
If the element~$v_{j_1}$ does not belong to~$\{ w_1, w_2, w_3\}$ or~$\altGr{n}\subseteq\isoGr{s_{j_1}}$ then we have~$\pi\in\isoGr{s_{j_1}}$ and thus~$\pi,\pi^2\in\isoGr{s_{j_1}}$, which yields
\[
\begin{array}{rrrrrr}
s_{j_1}(x)=&s_{j_1}(\pi.x)=&s_{{\varkappa_\pi}^{-1}(j_1)}(x)=&s_{j_3}(x)\\
s_{j_1}(x)=&s_{j_1}(\pi^2.x)=&s_{{\varkappa_\pi}^{-2}(j_1)}(x)=&s_{j_2}(x)
\end{array}
\]
This shows that the component functions~$s_{j_1}$,~$s_{j_2}$,~$s_{j_3}$ are identical. Thus the cycle~$({j_1},{j_2},{j_3})$ can be deleted.
Hence, we may w.l.o.g. assume~$v_{j_1}=w_1$. For each~$\tau'\in \altGr{n}$ with~$\tau'(w_3)=w_3$ and~$\tau:=\pi\tau'\pi^{-1}\in\altGr{n}$ we have~$\tau(w_1)=\pi\tau'\pi^{-1}(w_1)=\pi\tau'(w_3)=\pi(w_3)=w_1$. Since~$\tau\in\altGr{n}$ and~$\tau(w_1)=w_1$ we have~$\tau\in\isoGr{s_{j_1}}$, and thus (see the fifth equation in the following chain) for all~$x\in X$
\begin{multline*}
s_{j_3}(\pi^{-1}\tau\pi.x)=s_{{\varkappa_\pi}^{-1}(j_1)}(\pi^{-1}\tau\pi.x)=s_{j_1}(\pi\pi^{-1}\tau\pi.x)\\
=s_{j_1}(\tau\pi.x)=s_{j_1}(\tau.(\pi.x))=s_{j_1}(\pi.x)=s_{{\varkappa_\pi}^{-1}(j_1)}(x)=s_{j_3}(x)
\end{multline*}
Hence~$\tau'\in\isoGr{s_{j_3}}$ holds. Thus we have~$v_{j_3}=w_3$, unless~$\altGr{n}\subseteq \isoGr{s_{j_3}}$, which as in the treatment of the case~$\altGr{n}\subseteq \isoGr{s_{j_1}}$ would allow us to remove the cycle~$({j_1}, {j_2}, {j_3})$. Similarly, one can establish~$v_{j_2}=w_2$.
\end{proof}
Now we want to prove the next useful lemma, which will help us to construct the desired partition.
\begin{lemma}\label{thm:structure}
Let two permutations~$\pi=(w_1, w_2, w_3)$ and~$\sigma=(w_2, w_3, w_4)$ be given with~$w_1\ne w_4$ and suppose that the corresponding permutations~$\varkappa_\pi$ and~$\varkappa_\sigma$ satisfy the conditions from Lemma~\ref{thm:cycles}. If the permutation~$\varkappa_\pi$ contains a cycle~$({j_1},{j_2},{j_3})$ with~$v_{j_t}=w_t$ for all~$t\in\ints{3}$ then one of the following is true:
\begin{enumerate}[a)] \item \label{item:cycle} The permutation~$\varkappa_\sigma$ contains a cycle~$({j_2},{j_3},{j_4})$ with~$v_{j_4}=w_4$.
\item \label{item:two_cycles} The permutation~$\varkappa_\sigma$ contains two cycles~$({j_2},{j'_3},{j'_4})$ and~$({j''_2},{j_3},{j''_4})$ with~$v_{j''_2}=w_2$,~$v_{j'_3}=w_3$ and~$v_{j'_4}=v_{j''_4}=w_4$.
\end{enumerate}
\end{lemma}
\begin{proof}
Assume that the permutation~$\varkappa_\sigma$ does not contain any cycle involving the index~${j_2}$. Every permutation~$\mu\in\altGr{n}$ can be represented as a product~$\tau'\sigma\tau$, where~$\tau'$,~$\tau$ are even permutations with~$\tau'(w_2)=\tau(w_2)=w_2$. Thus for any permutation~$\mu\in\altGr{n}$ we have
\begin{equation*}
s_{j_2}(\mu.x)=s_{j_2}(\tau'\sigma\tau.x)=s_{j_2}(\sigma\tau.x)=s_{{\varkappa_\sigma}^{-1}(j_2)}(\tau.x)=s_{j_2}(\tau.x)=s_{j_2}(x)
\end{equation*}
This contradicts conditions on~$\varkappa_\pi$ from Lemma~\ref{thm:cycles}. We proceed in a similar way when no cycle in~$\varkappa_\sigma$ involves~${j_3}$.
If there are two different cycles~$({j_2},{j'_3},{j'_4})$ and~$({j''_2},{j_3},{j''_4})$ in the permutation~$\varkappa_\sigma$, we would like to prove that the component functions mentioned in the lemma are identical. For this, let us consider the permutation~$\pi\sigma$, which can be written as the product of the two disjoint transpositions~$(w_1, w_2)(w_3, w_4)$. From this we conclude that~$(\pi\sigma)^2$ is the identity permutation~$\id_n$, which implies that~$(\varkappa_\pi \varkappa_\sigma)^2$ is equivalent to~$\id_d$.
For all~$x\in X$ we have
\begin{equation*}
s_{j_3}(x)=s_{j_3}((\pi\sigma)^2.x)=s_{{\varkappa_\pi}^{-1}(j_3)}(\sigma\pi\sigma.x)=s_{j_2}(\sigma\pi\sigma.x)=s_{{\varkappa_\sigma}^{-1}(j_2)}(\pi\sigma.x)
\end{equation*}
and we can continue this chain of equations using that~$v_{j'_4}=w_4$ is not equal to any of the elements~$w_1$,~$w_2$ and~$w_3$ in the following way:
\begin{equation*}
s_{{\varkappa_\sigma}^{-1}(j_2)}(\pi\sigma.x)=s_{j'_4}(\pi\sigma.x)=s_{j'_4}(\sigma.x)=s_{{\varkappa_\sigma}^{-1}(j'_4)}(x)=s_{j'_3}(x)
\end{equation*}
Thus we have proved that~$s_{j_3}$ and~$s_{j'_3}$ are identical component functions. Considering the expression~$s_{j_2}((\pi\sigma)^2.x)$, we get that~$s_{j_2}$ and~$s_{j''_2}$ are identical as well.
\end{proof}
\begin{lemma}\label{thm:set}
For every cycle~$({j_1}, {j_2}, {j_3})$ in the permutation~$\varkappa_{\rho_1}$ we can find a set $\mathcal{S}_{({j_1}, {j_2}, {j_3})}\linebreak[0]=\{j_1, j_2,\cdots, j_n\}$ such that, for every~$v\in\ints{n-2}$, there is a permutation equivalent to~$\varkappa_{\rho_v}$ which contains the cycle~$({j_v}, {j_{v+1}}, {j_{v+2}})$ and has the properties required in Lemma~\ref{thm:cycles}.
\end{lemma}
\begin{proof}
Let us construct the set~$\mathcal{S}_{({j_1}, {j_2}, {j_3})}$ in several steps. We start with~$\mathcal{S}_{({j_1}, {j_2}, {j_3})}=\{j_1, j_2, j_3\}$, which satisfies the condition of the lemma for~$v=1$.
By Lemma~\ref{thm:structure} with~$\pi=\rho_1$ and~$\sigma=\rho_2$ we have to consider two possible cases concerning the cycle~$({j_1}, {j_{2}}, {j_{3}})$. In case~\eqref{item:cycle} we can extend the set~$\mathcal{S}_{({j_1}, {j_2}, {j_3})}$ to~$\{j_1, j_2, j_3, j_4\}$, such that~$\mathcal{S}_{({j_1}, {j_2}, {j_3})}$ satisfies the conditions of the lemma for~$v=1,2$. In case~\eqref{item:two_cycles} we can update~$\varkappa_{\rho_2}$ by changing the cycles~$({j_2},{j'_3},{j'_4})$,~$({j''_2},{j_3},{j''_4})$ to~$({j_2},{j_3},{j'_4})$,~$({j''_2},{j'_3},{j''_4})$, which produces a permutation equivalent to~$\varkappa_{\rho_2}$. Then we can choose~$\mathcal{S}_{({j_1}, {j_2}, {j_3})}$ to be equal to~$\{ j_1, j_2, j_3, j'_4\}$.
In this manner we go through all~$v\in\ints{n-2}$ setting~$\pi=\rho_{v-1}$ and~$\sigma=\rho_v$ to extend the set~$\mathcal{S}_{({j_1}, {j_2}, {j_3})}$ and, if necessary, to update the permutation~$\varkappa_{\rho_v}$.
\end{proof}
Applying Lemma~\ref{thm:set} to all cycles of~$\varkappa_{\rho_1}$ we get some disjoint sets~$\mathcal{S}_{({j_1}, {j_2}, {j_3})}$ indexed by the cycles of~$\varkappa_{\rho_1}$. Moreover, there is no cycle in~$\varkappa_{\rho_2},\dots, \varkappa_{\rho_{n-2}}$ that avoids all indices from the constructed sets~$\mathcal{S}_{({j_1}, {j_2}, {j_3})}$. This is due to Lemma~\ref{thm:structure} applied to the pairs~$\pi=\rho_v$,~$\sigma=\rho_{v-1}$ for~$v$ ranging from~$2$ to~$n-2$ (in this order).
Now we can choose the sets~$\mathcal{A}_i$ to be the sets~$\mathcal{S}_{({j_1}, {j_2}, {j_3})}$ and the singletons~$\{b_j\}$ accordingly. Lemma~\ref{thm:set} guarantees equation~\eqref{eq:partition}.
Now we have some understanding of how permutations~$\altGr{n}$ act on the component functions of~$s$. Having this knowledge we can create an affine combination of points in the subspace extension~$Q$ of~$\Pi_n$, which has non-negative components, but does not project to the permutahedron~$\Pi_n$.
Let us introduce the following subgroup of~$\altGr{n}$, defined by one element~$w$ of the set~$\ints{n-1}$:
\begin{equation*}
H^*_w=\setDef{\pi\in\altGr{n}}{\pi(\ints{w})=\ints{w}}
\end{equation*}
Thus~$H^*_w$ is the set of all even permutations of~$\ints{n}$, which map~$\ints{w}$ to itself.
Let us consider the function~$s^*:X\times\ints{n-1}\to\mathbb{R}$ defined via
\begin{equation*}
s^*(x, w)=\frac{\sum_{\pi\in H^*_w} s(\pi.x)}{|H^*_w|}
\end{equation*}
The value of~$s^*$ is a convex combination of points from~$Q$ and thus lies in~$Q$ for any~$x\in X$.
Let us consider a partition~$\mathcal{A}_i$,~$\{b_j\}$ as in Theorem~\ref{thm:partition}. First, we take a look at the sets~$\mathcal{B}_j$ and any vertex~$x$ from~$X$:
\begin{equation*}
s^*_{b_j}(x, w)=\frac{\sum_{\pi\in H^*_w} s_{b_j}(\pi.x)}{|H^*_w|}=\frac{\sum_{\pi\in H^*_w} s_{b_j}(x)}{|H^*_w|}=s_{b_j}(x)
\end{equation*}
Hence the value of~$s^*_{b_j}$ depends just on~$x$ and is equal to the value of~$s_{b_j}(x)$ for any vertex~$x\in X$.
For the components corresponding to~$a^i_t$ from the set~$\mathcal{A}_i$, Theorem~\ref{thm:partition} gives us
\begin{equation}\label{eq:coordinate}
s^*_{a^i_t}(x, w)=\frac{\sum_{\pi\in H^*_w} s_{a^i_t}(\pi.x)}{|H^*_w|}=\frac{\sum_{\pi\in H^*_w} s_{a^i_{\pi^{-1}(t)}}(x)}{|H^*_w|}
\end{equation}
When~$t$ belongs to the set~$\ints{w}$, we can calculate the component~$s^*_{a^i_t}$ as
\begin{equation*}
s^*_{a^i_t}(x, w)=\frac{\sum_{\pi\in H^*_w} s_{a^i_{\pi^{-1}(t)}}(x)}{|H^*_w|}=\sum_{v\le w}\frac{1}{|H^*_w|} \sum_{\substack{ \pi^{-1}(t)=v\\ \pi\in H^*_w}}s_{a^i_v}(x)
\end{equation*}
Moreover, we can estimate the number of permutations in the second sum by
\begin{equation*}
|\setDef{ \pi\in H^*_w}{\pi^{-1}(t)=v}|=\frac{(w-1)!(n-w)!}{2}
\end{equation*}
and thus conclude that
\begin{equation*}
s^*_{a^i_t}(x, w)=\sum_{v\le w}\frac{1}{|H^*_w|} \sum_{\substack{ \pi^{-1}(t)=v\\ \pi\in H^*_w}}s_{a^i_v}(x)=\sum_{v\le w}\frac{s_{a^i_v}(x)}{w}
\end{equation*}
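Both this count and the order~$|H^*_w|=\frac{w!\,(n-w)!}{2}$ used implicitly here can be checked by brute force for small parameters (0-based indexing; this snippet is illustrative and not part of the original text):

```python
from itertools import permutations
from math import factorial

def is_even(p):
    # parity of a permutation via its inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return inv % 2 == 0

n, w = 5, 3
block = set(range(w))          # the set [w] in 0-based indexing
H = [p for p in permutations(range(n))
     if is_even(p) and {p[i] for i in block} == block]
assert len(H) == factorial(w) * factorial(n - w) // 2   # |H*_w| = w!(n-w)!/2

t, v = 1, 2                    # both elements of the block [w]
count = sum(1 for p in H if p[v] == t)   # p^{-1}(t) = v  <=>  p(v) = t
assert count == factorial(w - 1) * factorial(n - w) // 2
```

Dividing the count by~$|H^*_w|$ recovers the weight~$1/w$ appearing in the next display.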
When~$t$ does not belong to~$\ints{w}$, we can derive a similar equality
\begin{equation*}
s^*_{a^i_t}(x, w)= \sum_{v>w}\frac{s_{a^i_v}(x)}{n-w}
\end{equation*}
Now we would like to outline the idea of the contradiction. Let us assume that we have found some element~$w$ of the set~$\ints{n-1}$ such that the statements
\begin{equation}\label{eq:end}
\text{if} \quad s_{a^i_w}(\Lambda(id_{n}))>0\quad \text{then} \quad \sum_{v>w} s_{a^i_v}(\Lambda(id_{n}))>0
\end{equation}
and
\begin{equation}\label{eq:start}
\text{if} \quad s_{a^i_{w+1}}(\Lambda(id_{n}))>0 \quad\text{then}\quad \sum_{v\le w} s_{a^i_v}(\Lambda(id_{n}))>0
\end{equation}
hold for all sets~$\mathcal{A}_i$ (we will establish the existence of such a~$w$ in Lemma~\ref{thm:element} below).
Let~$\zeta$ be some even permutation such that the conditions
\begin{equation*}
\zeta(\ints{w-1})\subset\ints{w} \quad \text{and}\quad \zeta(w+1)\in\ints{w}
\end{equation*}
are fulfilled. Since the permutation~$\zeta$ is even, by (\ref{eq:partition}) we get
\begin{equation*}
s_{b_j}(\Lambda(\zeta))=s_{b_j}(\Lambda(id_{n}))\quad\text{and}\quad s_{a^i_t}(\Lambda(\zeta))=s_{a^i_{\zeta^{-1}(t)}}(\Lambda(id_{n}))
\end{equation*}
The point~$y=(1+\epsilon)s^*(\Lambda(id_{n}), w)-\epsilon \, s^*(\Lambda(\zeta), w)$ is an affine combination of points from~$Q$. Each component~${b_j}$ of~$y$ equals the non-negative value~$s_{b_j}(\Lambda(id_{n}))$. The component~${a^i_t}$ equals
\begin{equation*}
\frac{1}{w}(\,(1+\epsilon) \sum_{v\le w}s_{a^i_v}(\Lambda(id_{n})) -\epsilon \sum_{v\le w-1}s_{a^i_v}(\Lambda(id_{n}))-\epsilon s_{a^i_{w+1}}(\Lambda(id_{n}))\,)
\end{equation*}
in the case~$t\le w$ and to the value
\begin{equation*}
\frac{1}{n-w}(\,(1+\epsilon) \sum_{v>w}s_{a^i_v}(\Lambda(id_{n})) -\epsilon \sum_{v> w+1}s_{a^i_v}(\Lambda(id_{n}))-\epsilon s_{a^i_{w}}(\Lambda(id_{n}))\,)
\end{equation*}
in the case~$t >w$.
After simplification these values become
\begin{equation*}
\frac{1}{w}(\, \sum_{v\le w}s_{a^i_v}(\Lambda(id_{n})) +\epsilon s_{a^i_{w}}(\Lambda(id_{n}))-\epsilon s_{a^i_{w+1}}(\Lambda(id_{n}))\,)
\end{equation*}
and
\begin{equation*}
\frac{1}{n-w}(\, \sum_{v> w}s_{a^i_v}(\Lambda(id_{n})) +\epsilon s_{a^i_{w+1}}(\Lambda(id_{n}))-\epsilon s_{a^i_{w}}(\Lambda(id_{n}))\,)
\end{equation*}
From (\ref{eq:end}) and (\ref{eq:start}) it follows that there exists~$\epsilon>0$ such that for all~$\mathcal{A}_i$ all components~$a^i_t$ of the point~$y$ are non-negative. But the projection of~$y$ to the original variables then yields a point which violates the inequality
\begin{equation*}
\sum_{ v \in\ints{w}} x_v \ge \frac{w(w+1)}{2}
\end{equation*}
This is true since the projection of~$s^*(\Lambda(id_{n}), w)$ belongs to the corresponding face and the projection of~$s^*(\Lambda(\zeta), w)$ does not. Thus for any~$\epsilon> 0$ the point~$(1+\epsilon)s^*(\Lambda(id_{n}), w)-\epsilon \, s^*(\Lambda(\zeta),w)$ cannot belong to the permutahedron~$\Pi_n$.
The next lemma therefore completes the proof via the construction above.
\begin{lemma}\label{thm:element}
There exists an element~$w$ of the set~$\ints{n-1}$ which satisfies the conditions (\ref{eq:end}) and (\ref{eq:start}) for all sets~$\mathcal{A}_i$.
\end{lemma}
\begin{proof}
Since each set~$\mathcal{A}_i$ consists of~$n$ components and $d<\frac{n(n-1)}{2}$, the number of sets~$\mathcal{A}_i$ is less than~$\frac{n-1}{2}$.
For each set~$\mathcal{A}_i$ there is at most one element~$u$ of~$\ints{n-1}$ which violates statement (\ref{eq:end}) for the index~$i$ (it can only be the maximal element of~$\ints{n-1}$ for which~$s_{a^i_u}(\Lambda(id_{n}))>0$). Analogously, for each set~$\mathcal{A}_i$ there is at most one element~$u$ of~$\ints{n-1}$ which violates statement (\ref{eq:start}).
Hence fewer than~$n-1$ elements of~$\ints{n-1}$ are violated at all, and for at least one element~$w\in\ints{n-1}$ both (\ref{eq:end}) and (\ref{eq:start}) are satisfied for all sets~$\mathcal{A}_i$.
\end{proof}
Combining Lemma~\ref{thm:transform_eq} and Theorem~\ref{thm:lowerbound} we get the following theorem, which gives us a lower bound on the number of variables and facets in symmetric extensions of the permutahedron.
\begin{theorem}\label{thm:lowerbound_fin}
For every~$n\ge6$ there exists no symmetric extended formulation of the permutahedron~$\Pi_{n}$ with fewer than~$\frac{n(n-1)}{4}$ variables and constraints (with respect to the group~$G=\symGr{n}$ acting via permuting the corresponding elements).
\end{theorem}
\bibliographystyle{plain}
% End of arXiv:0912.3446, ``Tight Lower Bounds on the Sizes of Symmetric Extensions of Permutahedra and Similar Results''
% arXiv:1608.07762
\title{A class of Ramsey-extremal hypergraphs}
\begin{abstract}
In 1991, McKay and Radziszowski proved that, however each 3-subset of a 13-set is assigned one of two colours, there is some 4-subset whose four 3-subsets have the same colour. More than 25 years later, this remains the only non-trivial classical Ramsey number known for hypergraphs. In this article, we find all the extremal colourings of the 3-subsets of a 12-set and list some of their properties. Using the catalogue, we provide an answer to a question of Dudek, La~Fleur, Mubayi and R\"odl about the size-Ramsey numbers of hypergraphs.
\end{abstract}
\section{Introduction}\label{intro}
A colouring of all the $s$-subsets of an $n$-set with two
colours is called \textit{$R(j,k;s)$-good} if there is no $j$-subset
(of the $n$-set) containing only $s$-subsets of the first
colour, and no $k$-subset containing only $s$-subsets of the
second colour. (Note that it is the $s$-subsets receiving colours,
not the elements of the $n$-set.)
The \textit{Ramsey number} $R(j,k;s)$ is
defined to be the least $n$ for which there is no
$R(j,k;s)$-good colouring.
Although there are several known values of $R(j,k;2)$~\cite{SRN},
which is usually written as just $R(j,k)$, the only known
non-trivial value of $R(j,k;s)$ for $s\ge 3$ is $R(4,4;3)=13$.
As a lower bound, a suitable colouring of the 3-subsets of
a 12-set was presented by Isbell in 1969~\cite{Isbell}, and this was
proved best possible by the present author and Radziszowski
in 1991~\cite{MR}.
During that project we found more than 200,000 $R(4,4;3)$-good
colourings for 12 points, but did not have the resources to
compute them all. With the aid of an improved algorithm
and the much greater computing resources available today, we
can now show that the number of $R(4,4;3)$-good colourings for
12 points is precisely 434,714.
We hope that this compilation of data will assist further
investigations.
\vskip 0pt plus 50pt\penalty-300\vskip 0pt plus -50pt
\section{Method}\label{method}
We prefer to use slightly different terminology for this
description. Suppose we have an $R(4,4;3)$-good colouring
of the 3-subsets of an $n$-set $V$.
We will call the 4-subsets of $V$ \textit{quadruples}.
If we choose just the 3-subsets of $V$ having the first colour,
we obtain a 3-uniform hypergraph on $V$ with the property
that every quadruple contains 1, 2 or 3 edges (the other
possibilities 0 and 4 being forbidden). We will call this
an $R(4,4;3)$-good hypergraph.
Note that we could have chosen the other colour instead and
would have obtained the complementary hypergraph.
We can obviously recover the colouring from the hypergraph, so
we lose nothing by continuing with hypergraph terminology.
Denote by $\mathcal{R}(n)$ the set of $R(4,4;3)$-good hypergraphs
with $n$ points. If we wish to emphasize the point set $V$,
we may write $\mathcal{R}(V)$ instead. More generally, $\mathcal{R}(n,e)$ is the
set of $R(4,4;3)$-good hypergraphs with $n$ points and $e$ edges,
and notations like $\mathcal{R}(V,{\le}110)$ have their obvious meanings.
Our aim is to find $\mathcal{R}(12)$. By the remark just made, it
will suffice to find $\mathcal{R}(12,{\le}110)$, since
$110=\frac12\binom{12}{3}$ and the rest are complements.
Given $G\in\mathcal{R}(V)$ and $v\in V$, define $G_v$ to be the hypergraph
with point set $V{-}v$ and all the edges of $G$ that lie in~$V{-}v$.
Clearly $G_v\in\mathcal{R}(V{-}v)$.
Since the points of $G\in\mathcal{R}(n,e)$ lie on average in $3e/n$
edges, we find that for $G\in \mathcal{R}(12,{\le}110)$ there is some
$v$ such that $G_v\in \mathcal{R}(11,{\le}82)$.
Continuing such logic we find a construction path
\begin{equation}\label{path}
\mathcal{R}(9,{\le} 41) \to \mathcal{R}(10,{\le} 59) \to \mathcal{R}(11,{\le} 82)
\to \mathcal{R}(12,{\le} 110).
\end{equation}
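The arithmetic behind~\eqref{path} is easily replayed. The following sketch (ours, not part of the original computation) recomputes the edge bounds from the averaging argument, using at each size the worst case~$e$ (the bound $e-\lceil 3e/n\rceil$ is nondecreasing in~$e$):

```python
from math import comb, ceil

# replay the counting behind the construction path: some point of G in R(n, e)
# lies in at least ceil(3e/n) edges, and deleting a point of maximum degree
# leaves at most e - ceil(3e/n) edges
bound = comb(12, 3) // 2      # 110; the remaining hypergraphs are complements
chain = [(12, bound)]
for n in (12, 11, 10):
    bound -= ceil(3 * bound / n)
    chain.append((n - 1, bound))

assert chain == [(12, 110), (11, 82), (10, 59), (9, 41)]
```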
Each step in~\eqref{path} involves adding one point and some
edges that include the new point. Moreover, we can assume that
the new point is in at least as many edges as any of the old points
(after the new edges are added).
The programs developed for~\cite{MR} are fast enough to find
$\mathcal{R}(9,{\le} 41)$ in a few hours. There are exactly
3,030,480,232 such hypergraphs and these form our starting point.
It would be convenient to perform each of the three steps of~\eqref{path}
separately, but it would be quite expensive. The number of hypergraphs in
$\mathcal{R}(10)$ and $\mathcal{R}(11)$ is greater than $10^{11}$ and
even the task of extending one hypergraph by one point requires
solution of a large set of integer inequalities. We need a better way.
If $S$ is a set and $B\subseteq T\subseteq S$, then the
\textit{interval} $[B,T]$ is
$\{ X\subseteq S \mathrel{|} B\subseteq X\subseteq T\}$. The use of
intervals for solving sets of inequalities efficiently
was introduced in~\cite{R45}.
Define $V_9=\{0,1,\ldots,8\}$ and $V_{10}=V_9\cup\{a\}$.
Consider extending $G_9\in \mathcal{R}(V_9)$ to all possible
$G_{10}\in \mathcal{R}(V_{10})$ by adding the point~$a$ and some edges
that include~$a$. The possible edges all have the form
$\{i,j,a\}$ where $i,j\in V_9$; number these $e_0,e_1,\ldots,e_{35}$
in some order. Each solution for $G_{10}$ corresponds to a subset
of $\{e_0,e_1,\ldots,e_{35}\}$.
Now consider the constraints required for $G_{10}$ to be $R(4,4;3)$-good.
The quadruples within $V_9$ are fine already, since we are not adding
any further edges inside $V_9$. So consider a quadruple
$\{i,j,k,a\}$, where $i,j,k\in V_9$. If $\{i,j,k\}$ is an edge of $G_9$,
we need that at least one of the edges $\{i,j,a\}, \{i,k,a\},\{j,k,a\}$
is not selected, while if $\{i,j,k\}$ is not an edge of $G_9$, at least
one of those three edges must be selected.
Now we can describe how intervals are used to process many cases
simultaneously. Consider one interval
$[B,T]\subseteq \{e_0,e_1,\ldots,e_{35}\}$
and one quadruple $\{i,j,k,a\}$. Define $X=\{e_r,e_s,e_t\}$,
where $e_r=\{i,j,a\}$, $e_s=\{i,k,a\}$ and $e_t=\{j,k,a\}$.
Now we apply the following \textit{collapsing rules}:
\def\vrule width0pt height2.4ex{\vrule width0pt height2.4ex}
\begin{figure}[ht]
\centering
\begin{tabular}{c|c|c|l}
$\{i,j,k\}\in G\,$? & $B\cap X$ & $T\cap X$ & replace $[B,T]$ by \\
\hline \vrule width0pt height2.4ex
NO & $\ne\emptyset$ & any & $[B,T]$ \\
& $\emptyset$ & $\emptyset$ & nothing \\
& $\emptyset$ & $\{i\}$ & $[B{+}i,T]$ \\
& $\emptyset$ & $\{i,j\}$ & $[B{+}i,T], [B{+}j,T{-}i]$ \\
& $\emptyset$ & $\{i,j,k\}$ & $[B{+}i,T], [B{+}j,T{-}i], [B{+}k,T{-}i{-}j]$ \\
\hline\hline
\vrule width0pt height2.4ex$\{i,j,k\}\in G\,$? & $\bar T\cap X$ & $\bar B\cap X$ & replace $[B,T]$ by \\
\hline \vrule width0pt height2.4ex
YES & $\ne\emptyset$ & any & $[B,T]$ \\
& $\emptyset$ & $\emptyset$ & nothing \\
& $\emptyset$ & $\{i\}$ & $[B,T{-}i]$ \\
& $\emptyset$ & $\{i,j\}$ & $[B,T{-}i], [B{+}i,T{-}j]$ \\
& $\emptyset$ & $\{i,j,k\}$ & $[B,T{-}i], [B{+}i,T{-}j], [B{+}i{+}j,T{-}k]$ \\
\end{tabular}
\caption{Collapsing rules for an interval $[B,T]$ based on quadruple $\{i,j,k,a\}$.
\label{fig}}
\end{figure}
By considering each case, we find that the effect of the collapsing
rules is to replace $[B,T]$ by a set of disjoint intervals whose
union is the set of all sets in $[B,T]$ that satisfy the quadruple
$\{i,j,k,a\}$.
For best practical performance, subsets of $\{e_0,e_1,\ldots,e_{35}\}$
can be represented by the bits in a single machine word, then
the collapsing rules can be implemented in a few machine
instructions each.
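As an illustration (our sketch, not the actual implementation), the first block of collapsing rules — the case $\{i,j,k\}\notin G$, where at least one edge of $X=\{e_r,e_s,e_t\}$ must be selected — can be realized with bitmasks as follows:

```python
# intervals [B, T] = {X' : B <= X' <= T} as bitmasks over a small edge universe;
# this realizes only the NO rows of the collapsing table: {i,j,k} is not an
# edge of G, so at least one edge of X must be selected
def collapse_at_least_one(B, T, X):
    if B & X:                      # some edge of X is already forced in
        return [(B, T)]
    out, avail = [], T & X         # edges of X that are still selectable
    while avail:
        low = avail & -avail       # pick the lowest remaining candidate
        out.append((B | low, T))   # branch where this edge is selected
        T &= ~low                  # remaining branches: it is not selected
        avail &= ~low
    return out                     # empty list: the interval dies

# brute-force check against explicit enumeration on a 6-bit universe
def members(B, T):
    return {S for S in range(64) if B & ~S == 0 and S & ~T == 0}

B, T, X = 0b000001, 0b111101, 0b101100
got = set()
for b, t in collapse_at_least_one(B, T, X):
    got |= members(b, t)
assert got == {S for S in members(B, T) if S & X}
```

The YES rows (at least one edge of $X$ \emph{not} selected) can be obtained by the same procedure applied to the complemented masks $\bar T$, $\bar B$.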
Starting with the interval $[\emptyset,\{e_0,e_1,\ldots,e_{35}\}]$
we apply the collapsing rules for each quadruple $\{i,j,k,a\}$.
The result is a set of disjoint intervals (typically a few hundred)
whose union gives exactly the set of all extensions of $G_9$ to $\mathcal{R}(10)$.
The efficiency depends a lot on the order in which quadruples are
processed; we found a good order by experiment.
Now consider further extension to $R(4,4;3)$-good hypergraphs
on $V_{11}=\{0,\ldots,8,a,b\}$.
The edges we need to add in total to $G_9$ either have the form
$\{i,j,a\}$ (already added in making $G_{10}$),
$\{i,j,b\}$, or $\{i,a,b\}$, where in each case $i,j\in V_9$.
Here we can make an observation that is key to the whole computation:
\textit{The sets of edges $\{i,j,b\}$ which satisfy quadruples of the
form $\{i,j,k,b\}$ are the same as the sets of edges $\{i,j,a\}$
which satisfy quadruples of the form $\{i,j,k,a\}$, except that
$a$ is replaced by~$b$.}
Given this observation, we make the possibilities for $G_{11}$
as follows, given $G_9$, a set $\mathcal{I}$ of intervals describing the
extensions of $G_9$ to $\mathcal{R}(10)$, and a particular extension~$G_{10}$.
The possible new edges are numbered $e_0,\ldots,e_{44}$, where
$e_0,\ldots,e_{35}$ are edges of the form $\{i,j,b\}$ numbered in
the same order as we numbered the edges $\{i,j,a\}$ in the previous
step, and $e_{36},\ldots,e_{44}$ are the edges of the form
$\{i,a,b\}$ in any order. To find all solutions, instead of
starting with the single interval
$[\emptyset,\{e_0,e_1,\ldots,e_{44}\}]$ as in the previous step,
we start with the set of intervals
$[B,T\cup\{e_{36},\ldots,e_{44}\}]$ for $[B,T]\in\mathcal{I}$. Then we can skip the collapsing
rules for the quadruples $\{i,j,k,b\}$ with $i,j,k\in V_9$, which the initial
intervals already satisfy by the observation above. This
results in a massive speedup.
To complete the process by extending from 11 to~12 points, we
use the same idea to begin with intervals obtained during the
extension to~11 points. This phase is very fast as most intervals
are destroyed very quickly and only a comparatively small number
of solutions are found.
\medskip
It would be possible to apply the general method of~\cite{orderly}
to perform exhaustive isomorph reduction at each step in the
computation, but the large number of intermediate hypergraphs
makes that unwise. Instead, we applied a weaker filter.
For a hypergraph with points $V$ and point $v\in V$, define
$d_v$ to be the number of edges that include~$v$.
Also define $f_v = \sum_e d_vd_wd_x$, where the sum is over
all edges $e=\{v,w,x\}$ that include~$v$.
Suppose we make $G\in \mathcal{R}(V)$ by extending a smaller hypergraph,
and that $v\in V$ is the last point added.
The construction path~\eqref{path} assumed that $d_v\ge d_w$
for all $w\in V$, so that is the first filter applied.
If that doesn't eliminate $G$, we also require that $f_v$ be
maximum out of all $w\in V$ with maximum~$d_w$. These rules
eliminate most isomorphs and are fast to apply. When we finally
have a collection of $R(4,4;3)$-good hypergraphs on 12 points,
we perform complete isomorphism reduction using \texttt{nauty}~\cite{nauty}.
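As a toy illustration of the two filter invariants (the example hypergraph is ours):

```python
from math import prod

# toy 3-uniform hypergraph on four points
edges = [frozenset({0, 1, 2}), frozenset({0, 1, 3}), frozenset({0, 2, 3})]
points = range(4)

# d_v: number of edges containing v
d = {v: sum(1 for e in edges if v in e) for v in points}
# f_v: sum of the degree products d_v*d_w*d_x over edges {v,w,x} containing v
f = {v: sum(prod(d[u] for u in e) for e in edges if v in e) for v in points}

assert d == {0: 3, 1: 2, 2: 2, 3: 2}
# every edge here has degree product 3*2*2 = 12
assert f == {0: 36, 1: 24, 2: 24, 3: 24}
```

In this toy example the filter would keep the hypergraph only if the last point added were point~0, since it uniquely maximizes $d_v$ (and hence $f_v$).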
\begin{table}[p]
\centering
\begin{tabular}{c|c|c||c|c|c}
~~$n$~~ & ~~$e$~~ & count & ~~$n$~~ & ~~$e$~~ & count \\
\hline
3 & 0 & 1 & 9 & 33 & 2 \\
\multicolumn{2}{c|}{total} & 2 & & 34 & 204 \\
\cline{1-3}
4 & 1 & 1 & & 35 & 22616 \\
& 2 & 1 & & 36 & 774043 \\
\multicolumn{2}{c|}{total} & 3 & & 37 & 10877731 \\
\cline{1-3}
5 & 3 & 1 & & 38 & 79336073 \\
& 4 & 3 & & 39 & 341024774 \\
& 5 & 4 & & 40 & 928650036 \\
\multicolumn{2}{c|}{total} & 12 & & 41 & 1669794753 \\
\cline{1-3}
6 & 6 & 1 & & 42 & 2025923846 \\
& 7 & 5 & \multicolumn{2}{c|}{total} & ~~8086884310~~ \\
\cline{4-6}
& 8 & 22 & 10 & 50 & 13 \\
& 9 & 50 & & 51 & 1810 \\
& 10 & 70 & & 52 & 121356 \\
\multicolumn{2}{c|}{total} & 226 & \multicolumn{2}{c|}{$\cdots$}\\
\cline{1-3}
7 & 12 & 1 & \multicolumn{2}{c|}{total} & $\approx 6.2{\times}10^{11}$ \\
\cline{4-6}
& 13 & 26 & 11 & 73 & 36 \\
& 14 & 338 & & 74 & 4725 \\
& 15 & 1793 & & 75 & 246299 \\
& 16 & 5055 & \multicolumn{2}{c|}{$\cdots$} \\
& 17 & 8317 & \multicolumn{2}{c|}{total} & $\approx 2.1{\times}10^{11}$ \\
\cline{4-6}
\multicolumn{2}{c|}{total} & 31060 & 12 & 104 & 4 \\
\cline{1-3}
8 & 21 & 1 & & 105 & 123 \\
& 22 & 278 & & 106 & 1465 \\
& 23 & 9763 & & 107 & 10235 \\
& 24 & 107241 & & 108 & 41939 \\
& 25 & 573596 & & 109 & 98235 \\
& 26 & 1764747 & & 110 & 130712 \\
& 27 & 3380337 & \multicolumn{2}{c|}{total} & 434714 \\
& 28 & 4182459 \\
\multicolumn{2}{c|}{total} & ~~15854385~~
\end{tabular}
\caption{The numbers of $R(4,4;3)$-good hypergraphs with $n$ points and
$e$ edges. The totals include complements. \label{counts}}
\end{table}
\vskip 0pt plus 50pt\penalty-300\vskip 0pt plus -50pt
\section{Results}\label{results}
There are about $8.4\times 10^{11}$ $R(4,4;3)$-good hypergraphs
altogether, including 434,714 with 12 points.
Table~\ref{counts} details the numbers of $R(4,4;3)$-good hypergraphs
for each number of points and edges.
For 10 and 11 points we only did incomplete isomorph reduction, as
explained above; hence the totals for those sizes are estimates.
The \textit{automorphism group} $\operatorname{Aut}(G)$ of a hypergraph $G\in\mathcal{R}(V)$
is the set of permutations of $V$ which preserve the edge set. As detailed
in Table~\ref{groups}, most hypergraphs in $\mathcal{R}(12)$ have a trivial group
and none have a transitive group. The unique hypergraph with
$\abs{\operatorname{Aut}(G)}=60$, which has two orbits of size~6, is presented in
Figure~\ref{example} using letters for elements of~$V$.
This hypergraph is one of the 1306 in $\mathcal{R}(12)$ that are
self-complementary and is isomorphic to the one found by
Isbell~\cite{Isbell}.
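The construction in Figure~\ref{example} can be replayed mechanically. The sketch below (the index convention $\mathtt{a}\ldots\mathtt{f}\to0\ldots5$, $\mathtt{A}\ldots\mathtt{F}\to6\ldots11$ is ours) regenerates the hypergraph and checks the stated group order, orbit sizes, and the goodness condition that every quadruple contains 1, 2 or 3 edges:

```python
from itertools import combinations

# points a..f -> 0..5 and A..F -> 6..11
def perm_from_swaps(swaps):
    p = list(range(12))
    for i, j in swaps:
        p[i], p[j] = p[j], p[i]
    return tuple(p)

gens = [perm_from_swaps([(2, 3), (4, 5), (8, 9), (10, 11)]),  # (cd)(ef)(CD)(EF)
        perm_from_swaps([(1, 2), (3, 4), (7, 8), (9, 10)]),   # (bc)(de)(BC)(DE)
        perm_from_swaps([(0, 1), (4, 5), (6, 7), (10, 11)])]  # (ab)(ef)(AB)(EF)

# close the generators under composition to obtain Gamma
group, frontier = set(gens), set(gens)
while frontier:
    new = {tuple(p[g[i]] for i in range(12))
           for p in frontier for g in gens} - group
    group |= new
    frontier = new
assert len(group) == 60            # Gamma is isomorphic to A_5

# orbits of the starting edges abe, ABE, abC, aAB, cAB
starters = [(0, 1, 4), (6, 7, 10), (0, 1, 8), (0, 6, 7), (2, 6, 7)]
orbits = [{frozenset(p[i] for i in e) for p in group} for e in starters]
edges = set().union(*orbits)
assert sorted(map(len, orbits)) == [10, 10, 30, 30, 30]
assert len(edges) == 110

# R(4,4;3)-goodness: every quadruple must contain 1, 2 or 3 edges
for q in combinations(range(12), 4):
    k = sum(frozenset(t) in edges for t in combinations(q, 3))
    assert 1 <= k <= 3
```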
\begin{table}[th]
\centering
\begin{tabular}{c|c|c}
$\abs{\operatorname{Aut}(G)}$ & orbits & count \\
\hline
1 & 12 & 432300 \\
2 & 6 & 18 \\
& 7 & 112 \\
& 8 & 1669 \\
3 & 4 & 529 \\
4 & 6 & 32 \\
6 & 2 & 20 \\
& 4 & 17 \\
10 & 4 & 1 \\
12 & 2 & 15 \\
60 & 2 & 1 \\
\end{tabular}
\caption{Counts of $\mathcal{R}(12)$ by automorphism group.\label{groups}}
\end{table}
None of the hypergraphs in $\mathcal{R}(12)$ extend to a hypergraph in
$\mathcal{R}(13)$, consistent with the finding of~\cite{MR} that
$\mathcal{R}(13)=\emptyset$. This raises the question of how close we can
get to a hypergraph in $\mathcal{R}(13)$; specifically, how many edges
of the complete hypergraph $K_{13}^{(3)}$ can we colour without
obtaining a monochromatic induced $K_4^{(3)}$?
The generation method described in the previous section
can be easily adapted to ignore particular quadruples. If we ignore
the constraints normally attributed to the quadruples $\{i,j,k,a\}$ which
contain a specified $\{i,j,a\}$, then we are colouring the edges of
the complete hypergraph except for one uncoloured edge.
Using this method we found that $K_{13}^{(3)}$ minus one edge
cannot be coloured with two colours
without creating a monochromatic $K_4^{(3)}$.
On the other hand, if we omit two edges of $K_{13}^{(3)}$, a colouring
without a monochromatic $K_4^{(3)}$ may be possible. In
Figure~\ref{minus2} we give examples where the two omitted
edges overlap in one or two points. We did not find any examples
with the omitted two edges being disjoint, but our search in that case
was not exhaustive. We can report these partial results: there is no
good colouring of $K_{13}^{(3)}$ minus the edges $\{1,2,3\}$ and
$\{4,5,6\}$ such that a good colouring of $K_{12}^{(3)}$ can be
obtained either by
deleting vertex 1 and colouring edge $\{4,5,6\}$, or by deleting
vertex 7 and colouring both edges $\{1,2,3\}$ and $\{4,5,6\}$.
We propose the remaining cases of two disjoint edges as a
challenge for the reader.
If $H$ is a 3-uniform hypergraph, the \textit{size-Ramsey number}
$\hat R^{(3)}(H)$ is the least number $m$ such that for some
3-uniform hypergraph $G$ with $m$ edges, every colouring
of the edges of $G$ with two colours includes a monochromatic copy
of~$H$. If $H=K_4^{(3)}$, then the value $R(4,4;3)=13$ implies that
$\hat R^{(3)}(H)\le\binom{13}{3}=286$ since we can take
$G=K_{13}^{(3)}$.
Dudek, La~Fleur, Mubayi and R\"odl~\cite[Question\,2.2]{Dudek}
ask whether this bound is sharp.
Since $K_{13}^{(3)}$ minus one edge cannot be coloured without creating
a monochromatic $K_4^{(3)}$ we have $\hat R^{(3)}(H)\le 285$, which
answers Dudek et~al.'s question in the negative.
The extremal $R(4,4;3)$-good hypergraphs are available
online~\cite{online}.
Finally, we thank Staszek Radziszowski for many useful
comments.
\begin{figure}[ht]
\centering
\medskip
\parbox{0.74\hsize}{\small
Let
$\varGamma=\langle(\texttt{cd})(\texttt{ef})(\texttt{CD})(\texttt{EF}),
\allowbreak
(\texttt{bc})(\texttt{de})(\texttt{BC})(\texttt{DE}),
\allowbreak
(\texttt{ab})(\texttt{ef})(\texttt{AB})(\texttt{EF})\rangle$
be a permutation group acting on the points \texttt{abcdefABCDEF}.
It is isomorphic to the alternating group $A_5$ and
acts 2-transitively on
each of its orbits $\{\mathtt a,\ldots,\mathtt f\}$
and $\{\mathtt A,\ldots,\mathtt F\}$.
Now construct a hypergraph by applying $\varGamma$ to each
of the starting edges $\{\texttt{abe}, \texttt{ABE}, \texttt{abC},
\texttt{aAB}, \texttt{cAB}\}$.
These provide 10, 10, 30, 30 and 30 edges, respectively.
The hypergraph induced by each orbit is the same
2-(6,3,2) design. The relabelling
\texttt{(aD)(bC)(cB)(dA)(eF)(fE)} takes the hypergraph
onto its complement.}
\caption{The unique hypergraph in $\mathcal{R}(12)$ with
automorphism group of order 60.\label{example}}
\end{figure}
\begin{figure}[ht]
\centering
\medskip
\parbox{0.43\hsize}{\small\baselineskip=12pt\tt
acd bcd abe ace bce cde adf cdf \\
def adg aeg beg ceg deg afg bfg \\
efg ach bch adh bdh aeh beh deh \\
afh efh bgh dgh fgh bdi cdi bei \\
cei afi bfi dfi efi bgi cgi dgi \\
ahi chi dhi ehi abj acj bcj cdj \\
aej dej bfj cfj agj bgj cgj dgj \\
bhj ehj fhj aij gij abk bck bdk \\
cek afk bfk cfk efk cgk fgk dhk \\
ghk aik bik eik hik bjk djk gjk \\
hjk ijk abl acl bcl adl bel afl \\
bfl cfl bgl dgl chl fhl ghl eil \\
fil gil hil ajl djl ejl fjl ijl \\
akl ckl dkl ekl hkl abm adm bdm \\
aem dem bfm cfm dfm cgm egm ahm \\
bhm chm ghm aim cim fim ejm fjm \\
hjm ijm akm dkm ekm fkm gkm jkm \\
blm clm dlm elm glm ilm \\
{\rm Omitted edges:} abc ade}
\parbox{0.43\hsize}{\small\baselineskip=12pt\tt
bcd cde acf bcf aef def adg bdg \\
cdg aeg beg deg bfg efg abh ach \\
bch adh bdh beh cfh egh fgh aci \\
aei bei cei dei afi dfi agi cgi \\
fgi bhi dhi fhi adj bdj aej cej \\
bfj cfj dfj agj bgj ahj bhj chj \\
ehj ghj bij dij eij abk ack bck \\
cdk aek bek cek cfk dfk efk bgk \\
cgk fgk ahk dhk fhk aik bik gik \\
hik ejk fjk gjk hjk ijk abl acl \\
bcl adl bdl afl bfl dfl efl cgl \\
dgl fgl chl dhl ehl ghl bil cil \\
eil fil hil ajl cjl ejl fjl gjl \\
bkl dkl gkl abm adm bdm cdm bem \\
cem afm bfm cfm dfm efm agm bgm \\
ahm ehm fhm ghm cim fim gim cjm \\
djm gjm ijm dkm ekm hkm jkm blm \\
elm ilm jlm klm \\
{\rm Omitted edges:} abc abd}
\caption{Two $R(4,4;3)$-good colourings of the complete hypergraph
$K_{13}^{(3)}$ minus two edges. Edges not mentioned have
the second colour.\label{minus2}}
\end{figure}
\vskip 0pt plus 50pt\penalty-300\vskip 0pt plus -50pt
\renewcommand{\baselinestretch}{1}\normalsize
% End of arXiv:1608.07762, ``A class of Ramsey-extremal hypergraphs''
% arXiv:2003.04776
\title{Parallel Robust Computation of Generalized Eigenvectors of Matrix Pencils}
\begin{abstract}
In this paper we consider the problem of computing generalized eigenvectors of a matrix pencil in real Schur form. In exact arithmetic, this problem can be solved using substitution. In practice, substitution is vulnerable to floating-point overflow. The robust solvers \texttt{xTGEVC} in LAPACK prevent overflow by dynamically scaling the eigenvectors. These subroutines are sequential scalar codes which compute the eigenvectors one by one. In this paper we discuss how to derive robust blocked algorithms. The new StarNEig library contains a robust task-parallel solver Zazamoukh which runs on top of StarPU. Our numerical experiments show that Zazamoukh achieves a super-linear speedup compared with \texttt{DTGEVC} for sufficiently large matrices.
\end{abstract}
\section{Introduction} \label{sec:introduction}
Let $A \in \R^{m \times m}$ and let $B \in \R^{m \times m}$. The matrix pencil $(A,B)$ consists of all matrices of the form $A - \lambda B$ where $\lambda \in \mathbb{C}$. The set of (generalized) eigenvalues of the matrix pencil $(A,B)$ is given by
\begin{equation}
\lambda(A,B) = \{ \lambda \in \C \: : \det(A-\lambda B) = 0 \}.
\end{equation}
We say that $z \in \C^m$ is a (generalized) eigenvector of the matrix pencil $(A,B)$ if and only if
$z \not = 0$ and
\begin{equation}
Az = \lambda Bz.
\end{equation}
The eigenvalues of $(A,B)$ can be computed by first reducing $(A,B)$ to real Schur form $(S,T)$. Specifically, there exist orthogonal matrices $U$ and $V$ such that $S = U^TAV$ is quasi-upper triangular and $T = U^T B V$ is upper triangular. It is clear that
\begin{equation}
\lambda(A,B) = \lambda(S,T).
\end{equation}
Moreover, if $z$ is a generalized eigenvector of $(S,T)$ corresponding to the eigenvalue $\lambda$, then $Uz$ is a generalized eigenvector of $(A,B)$ corresponding to the eigenvalue $\lambda$.
In this paper, we consider the parallel computation of eigenvectors of a matrix pencil in real Schur form.
In exact arithmetic, this problem can be solved using substitution. However, substitution is very vulnerable to floating-point overflow.
In LAPACK \cite{laug} there exists a family \texttt{xTGEVC} of subroutines which compute the generalized eigenvectors of a matrix pencil in Schur form.
They prevent overflow by dynamically scaling the eigenvectors.
These subroutines are scalar codes which compute the eigenvectors one by one.
In this paper we discuss the construction of algorithms which are not only robust, but blocked and parallel.
Our paper is organized as follows. In Section \ref{sec:real} we consider the problem of computing the eigenvectors of a matrix pencil in real Schur form using real arithmetic. This problem is equivalent to solving a homogeneous matrix equation of the form
\begin{equation} \label{equ:central-matrix-equation}
SV\!D - TV\!B = 0
\end{equation}
where $D$ is diagonal and $B$ is block diagonal with diagonal blocks which are either 1-by-1 or 2-by-2.
In Section \ref{sec:block} we present a blocked algorithm for solving this matrix equation.
In Section \ref{sec:robust} we discuss how to prevent overflow in this algorithm.
The concept of an augmented matrix is central to this discussion.
A robust task-parallel solver {\tt Zazamoukh} has been developed and integrated into the new StarNEig library for solving non-symmetric eigenvalue problems \cite{starneig1,starneig2}.
The performance of {\tt Zazamoukh} is compared to LAPACK in Section \ref{sec:experiments}.
We outline several directions for future work in Section \ref{sec:conclusion}.
\section{Real arithmetic} \label{sec:real}
Let $A \in \R^{m \times m}$ and $B \in \R^{m \times m}$ be given. The set of generalized eigenvalues of the matrix pencil $(A,B)$ can be computed by first reducing $(A,B)$ to generalized real Schur form. Specifically, there exist orthogonal matrices $U$ and $V$ such that
\begin{equation*}
S = U^T A V =
\begin{bmatrix} S_{11} & S_{12} & \dots & S_{1p} \\ & S_{22} & \dots & S_{2p} \\ & & \ddots & \vdots \\ & & & S_{pp} \end{bmatrix}, \quad T = U^T B V = \begin{bmatrix} T_{11} & T_{12} & \dots & T_{1p} \\ & T_{22} & \dots & T_{2p} \\ & & \ddots & \vdots \\ & & & T_{pp} \end{bmatrix}
\end{equation*}
are upper block triangular and $\dim(S_{jj}) = \dim(T_{jj}) \in \{1,2\}$.
It is clear that
\begin{equation*}
\lambda(S,T) = \cup_{j=1}^p \lambda(S_{jj}, T_{jj}).
\end{equation*}
In order to simplify the discussion, we will make the following assumptions.
\begin{enumerate}
\item If $\dim(S_{jj}) = 1$, then $(S_{jj}, T_{jj})$ has a single real eigenvalue.
\item If $\dim(S_{jj}) = 2$, then $(S_{jj}, T_{jj})$ has two complex conjugate eigenvalues.
\item All eigenvalues are distinct.
\end{enumerate}
We follow the standard convention and represent an eigenvalue $\lambda$ using an ordered pair $(\alpha, \beta)$ where $\alpha \in \mathbb{C}$ and $\beta \ge 0$. If $\beta > 0$, then $\lambda = \alpha/\beta$ is a finite eigenvalue. The case $\alpha \in \mathbb{R} - \{0\}$ and $\beta = 0$ corresponds to an infinite eigenvalue. The case $\alpha = \beta = 0$ corresponds to an indefinite problem.
We now consider the problem of computing generalized eigenvectors of $(S,T)$. Our goal is to obtain an equivalent problem which can be solved using a blocked algorithm.
\subsection{Computing a single eigenvector}
In this subsection, we note that the problem of computing a single generalized eigenvector of $(S,T)$ is equivalent to solving a tall homogeneous matrix equation involving real matrices.
Let $\lambda \in \lambda(S_{jj},T_{jj})$ and let $\lambda = \frac{\alpha_j}{\beta_j}$ where $\beta_j > 0$ and
\begin{equation*}
\alpha_j = a_j + i b_j \in \mathbb{C}.
\end{equation*}
Let $m_j = \dim(S_{jj})$ and let $D_{jj} \in \mathbb{R}^{m_j \times m_j}$ and $B_{jj} \in \mathbb{R}^{m_j \times m_j}$ be given by
\begin{equation} \label{equ:DB1}
D_{jj} = \beta_j, \quad B_{jj} = a_j
\end{equation}
when $\dim(S_{jj}) = 1$ (or equivalently $b_j = 0$) and
\begin{equation} \label{equ:DB2}
D_{jj} =
\begin{bmatrix}
\beta_j & 0 \\ 0 & \beta_j
\end{bmatrix},
\quad
B_{jj} =
\begin{bmatrix}
a_j & b_j \\
-b_j & a_j
\end{bmatrix}
\end{equation}
when $\dim(S_{jj}) = 2$ (or equivalently $b_j \not = 0$). With this notation, the problem of computing an eigenvector is equivalent to solving the homogeneous linear equation
\begin{equation} \label{equ:tall-matrix-equation}
SV\!D_{jj} - TV\!B_{jj} = 0
\end{equation}
with respect to $V \in \mathbb{R}^{m \times m_j}$. This is an immediate consequence of the following lemma.
\begin{lemma} Let $\lambda \in \lambda(S_{jj}, T_{jj})$ and let $\lambda = (a_j + ib_j)/\beta$ where $\beta > 0$. Then the following statements are true.
\begin{enumerate}
\item If $\dim(S_{jj}) = 1$, then $x \in \R^m$ is a real eigenvector corresponding to the real eigenvalue $\lambda \in \mathbb{R}$ if and only if $V = \begin{bmatrix} x \end{bmatrix}$ has rank 1 and solves equation \eqref{equ:tall-matrix-equation}.
\item If $\dim(S_{jj}) = 2$, then $z = x + iy \in \C^m$ is a complex eigenvector corresponding to the complex eigenvalue $\lambda \in \mathbb{C}$ if and only if $V = \begin{bmatrix} x & y \end{bmatrix}$ has rank 2 and solves equation \eqref{equ:tall-matrix-equation}.
\end{enumerate}
\end{lemma}
\subsection{Computing all eigenvectors}
In this subsection, we note that the problem of computing all generalized eigenvectors of $(S,T)$ is equivalent to solving a homogeneous matrix equation involving real matrices.
Specifically, let $D \in \R^{m \times m}$ and $B \in \R^{m \times m}$ be given by
\begin{equation}
D = \text{diag}\{D_{11}, D_{22}, \dotsc, D_{pp}\}, \quad B = \text{diag}\{B_{11}, B_{22}, \dotsc, B_{pp}\}.
\end{equation}
where $D_{jj}$ and $B_{jj}$ are given by equations \eqref{equ:DB1} and \eqref{equ:DB2}.
Then $V = \begin{bmatrix} V_1 & V_2 & \dots & V_p \end{bmatrix}$ solves the homogeneous matrix equation
\begin{equation} \label{equ:large-matrix-equation}
SV\!D - TV\!B = 0
\end{equation}
if and only if $V_j \in \mathbb{R}^{m \times n_j}$ solves equation \eqref{equ:tall-matrix-equation}.
\section{A blocked algorithm} \label{sec:block}
It is straightforward to derive a blocked algorithm for solving the homogeneous matrix equation \eqref{equ:large-matrix-equation}. Specifically, \emph{redefine}
\begin{equation*}
S = \begin{bmatrix} S_{ij} \end{bmatrix}, \quad i, j \in \{1,2,\dots,M\}
\end{equation*}
to denote \textit{any} partitioning of $S$ into an $M$ by $M$ block matrix which does not split any of the 2-by-2 blocks along the diagonal of $S$. Apply the same partitioning to $T$, $D$, $B$, and $V$. Then $S$, $T$, and $V$ are block triangular. It is straightforward to see that equation \eqref{equ:large-matrix-equation} can be solved using Algorithm \ref{alg:blocked-computation-all-eigenvectors}.
\begin{algorithm}[htbp]
\caption{Blocked computation of all generalized eigenvectors} \label{alg:blocked-computation-all-eigenvectors}
\For{$j \gets 1,2,\dotsc,M$}{
Compute generalized eigenvectors $Y_{jj}$ for $(S_{jj},T_{jj})$\;
\For{$i \gets j-1,\dotsc,1$}{
\For{$k \gets 1,2,\dotsc, i$}{
$Y_{kj} = Y_{kj} - (S_{k,i+1} Y_{i+1,j} D_{jj} - T_{k,i+1} Y_{i+1,j} B_{jj})$
}
Solve
\begin{equation}
S_{ii} Z D_{jj} - T_{ii} Z B_{jj} = Y_{ij}
\end{equation}
with respect to $Z$ and set $Y_{ij} \gets Z$\;
}
}
\end{algorithm}
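For intuition, here is a minimal Python sketch of the back-substitution performed by Algorithm \ref{alg:blocked-computation-all-eigenvectors}, specialized to an unblocked partitioning and real eigenvalues only (all diagonal blocks $1$-by-$1$, so $D$ and $B$ are diagonal). The pencil is an arbitrary illustrative example, not the StarNEig implementation, and no overflow protection is attempted here:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 6
S = np.triu(rng.standard_normal((m, m)))
np.fill_diagonal(S, np.arange(1.0, m + 1))   # well-separated real eigenvalues 1..m
T = np.eye(m) + np.triu(rng.standard_normal((m, m)), 1) * 0.1

a, beta = np.diag(S).copy(), np.diag(T).copy()   # lambda_j = a_j / beta_j

V = np.zeros((m, m))
for j in range(m):                     # one (real) eigenvector per column
    A = beta[j] * S - a[j] * T        # upper triangular and singular: A[j, j] == 0
    V[j, j] = 1.0
    for i in range(j - 1, -1, -1):    # back-substitute above the diagonal
        V[i, j] = -(A[i, i + 1:j + 1] @ V[i + 1:j + 1, j]) / A[i, i]

D, B = np.diag(beta), np.diag(a)
assert np.linalg.norm(S @ V @ D - T @ V @ B) < 1e-10
```

Column $j$ of the residual equals $(\beta_j S - a_j T)v_j$, which the back-substitution annihilates row by row; the divisions and updates in this loop are exactly the operations that the robust variant of Section \ref{sec:robust} must protect against overflow.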
We see that Algorithm \ref{alg:blocked-computation-all-eigenvectors} can be implemented using three distinct kernels:
\begin{enumerate}
\item \label{kernel:eig} Compute the eigenvectors corresponding to a small matrix pencil $(S,T)$ in real Schur form.
\item \label{kernel:update} Execute linear updates of the form
\begin{equation} \label{equ:update}
Y \gets Y - (S X D - T X B)
\end{equation}
where $S$ and $T$ are small dense matrices, $D$ is diagonal and $B$ is block diagonal with blocks of size $1$ or $2$.
\item \label{kernel:solve} Solution of small matrix equations of the form
\begin{equation} \label{equ:solve}
S Z D - T Z B = Y
\end{equation}
where $(S,T)$ is a matrix pencil in real Schur form and $D$ is diagonal and $B$ is block diagonal with diagonal blocks of dimension $1$ or $2$.
\end{enumerate}
Once these kernels have been implemented, it is straightforward to parallelize Algorithm \ref{alg:blocked-computation-all-eigenvectors} using a task-based runtime system such as StarPU \cite{StarPU}.
\section{Deriving a robust blocked algorithm} \label{sec:robust}
The subroutines {\tt xTGEVC} prevent overflow using principles originally derived by Edward Anderson and implemented in the subroutines {\tt xLATRS} \cite{lawn36}. These subroutines apply to triangular linear systems
\begin{equation} \label{equ:triangular-system}
Tx=b
\end{equation}
with a single right-hand side $b$. Mikkelsen and Karlsson \cite{ppam2017} formalized the work of Anderson and derived a robust blocked algorithm for solving \eqref{equ:triangular-system}. Mikkelsen, Schwarz and Karlsson \cite{ccpe2018} derived a robust blocked algorithm for solving triangular linear systems $$TX=B$$ with multiple right-hand sides. Their StarPU implementation ({\tt Kiya}) is significantly faster than {\tt DLATRS} when numerical scaling is necessary and not significantly slower than {\tt DTRSM} when numerical scaling is unnecessary.
A robust variant of Algorithm \ref{alg:blocked-computation-all-eigenvectors} can be derived using
the principles applied by Mikkelsen, Schwarz and Karlsson \cite{ccpe2018}. We can use the two real functions {\tt ProtectUpdate} and {\tt ProtectDivision} introduced by Mikkelsen and Karlsson \cite{ppam2017}. They are used to prevent scalar divisions and linear updates from exceeding the overflow threshold $\Omega > 0$. They have the following key properties.
\begin{enumerate}
\item If $t \not = 0$ and $|b| \leq \Omega$, and
\begin{equation}
\xi = {\tt ProtectDivision}(|b|,|t|)
\end{equation}
then $\xi \in (0,1]$ and $|\xi b| \leq |t| \Omega$. It follows that the scaled division
\begin{equation}
y \gets \frac{(\xi b)}{t}
\end{equation}
cannot exceed $\Omega$.
\item If $Z = Y - TX$ is defined, with
\begin{equation}
\|T\|_\infty \leq \Omega, \quad \|X\|_\infty \leq \Omega, \quad \|Y\|_\infty \leq \Omega,
\end{equation}
and
\begin{equation}
\xi = {\tt ProtectUpdate}(\|Y\|_\infty, \|T\|_\infty, \|X\|_\infty)
\end{equation}
then $\xi \in (0,1]$ and $\xi (\|Y\|_\infty + \|T\|_\infty \|X\|_\infty) \leq \Omega$. It follows that
\begin{equation}
Z \gets (\xi Y) - T (\xi X) = (\xi Y) - (\xi T) X
\end{equation}
can be computed without any intermediate or final result exceeding $\Omega$.
\end{enumerate}
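A minimal Python sketch of functions with exactly these two guarantees is given below. It is an illustration only: it ignores the power-of-two rounding of scaling factors and other refinements of the actual routines in \cite{ppam2017}, and {\tt OMEGA} is a stand-in threshold:

```python
import numpy as np

OMEGA = np.finfo(np.float64).max / 4        # stand-in overflow threshold (assumption)

def protect_division(nb, nt):
    # Given nb = |b| and nt = |t| != 0, return xi in (0,1] with |xi*b| <= |t|*OMEGA,
    # so that the scaled division (xi*b)/t cannot exceed OMEGA.
    return 1.0 if nb <= nt * OMEGA else (nt * OMEGA) / nb

def protect_update(ny, nt, nx):
    # Given norms ny, nt, nx (all <= OMEGA), return xi in (0,1] with
    # xi*(ny + nt*nx) <= OMEGA, so Z = (xi*Y) - T(xi*X) cannot exceed OMEGA.
    s = ny + nt * nx
    return 1.0 if s <= OMEGA else OMEGA / s
```

For example, `protect_division(1e300, 1e-300)` returns a tiny $\xi$ so that the scaled quotient stays finite, while benign inputs leave $\xi = 1$ and the computation unchanged.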
Now consider the kernels needed for implementing Algorithm \ref{alg:blocked-computation-all-eigenvectors}. They can all be implemented using loops, scalar divisions, and linear updates, so it is plausible that robust variants can be implemented using {\tt ProtectDivision} and {\tt ProtectUpdate}. However, resolving all the relevant details requires many pages. Here we explain how the updates given by equation \eqref{equ:update} can be executed without exceeding the overflow threshold $\Omega$. We will use the concept of an \emph{augmented} matrix introduced by Mikkelsen, Schwarz and Karlsson \cite{ccpe2018}.
\begin{definition} Let $X \in \R^{m \times n}$ be partitioned into $k$ block columns
\begin{equation}
X = \begin{bmatrix} X_1 & X_2 & \dotsc & X_k \end{bmatrix}, \quad X_j \in \mathbb{R}^{m \times n_j}, \quad \sum_{j=1}^k n_j = n,
\end{equation}
and let $\alpha \in \mathbb{R}^k$ have $\alpha_j \in (0,1]$. The augmented matrix $\langle \alpha, X \rangle$ represents the real matrix $Y$ given by
\begin{equation}
Y = \begin{bmatrix} Y_1 & Y_2 & \dotsc & Y_k \end{bmatrix}, \quad Y_j = \alpha_j^{-1} X_j.
\end{equation}
\end{definition}
This is a trivial extension of the original definition, which considered only the case $k=n$. The scaling factors $\alpha_j$ extend the normal representational range of our floating-point numbers.
\begin{algorithm} \caption{Right updates with block diagonal matrix} \label{alg:right}
\KwData{An augmented matrix $\av{\alpha}{X}$ where
\begin{equation*}
X = \begin{bmatrix} X_1 & X_2 & \dotsc & X_k \end{bmatrix}, \quad \|X_j\|_\infty \leq \Omega, \quad \alpha \in \mathbb{R}^k
\end{equation*}
and matrices $B_j$ such that $\|B_j\|_\infty \leq \Omega$ and $Y_j = X_j B_j$ is defined.}
\KwResult{An augmented matrix $\av{\beta}{Y}$ such that
\begin{equation}
\beta_j^{-1} Y_j = (\alpha_j^{-1} X_j) B_j, \quad \|Y_j \|_\infty \leq \Omega,
\end{equation}
and $Y$ can be computed without exceeding $\Omega$.
}
\For{$j=1,2\dots,k$}{
$\gamma_j = \texttt{ProtectUpdate}(0, \|X_j\|_\infty, \|B_j\|_\infty)$\;
$Y_j = (\gamma_j X_j) B_j$\;
$\beta_j = \alpha_j \gamma_j$
}
\end{algorithm}
Now $Z_1 = XD$ can be computed without overflow using Algorithm \ref{alg:right}. In our case $D$ is diagonal, so we are merely scaling the columns of $X$. Then $Z_2 = Y - SZ_1$ can be computed without overflow using Algorithm \ref{alg:left}. It follows that the update $Y \gets (Y - S(XD)) + T(XB)$ can be computed without overflow using two applications of Algorithm \ref{alg:right} and two applications of Algorithm \ref{alg:left}. Now suppose that $Y$ is $m$ by $n$. Then Algorithm \ref{alg:right} does $O(mn)$ flops on $O(mn)$ data. However, Algorithm \ref{alg:left} then does $O(mn^2)$ flops on the \emph{same} data. Therefore, the overall arithmetic intensity is $O(n)$.
\begin{algorithm} \caption{Left update with dense matrix} \label{alg:left}
\KwData{A matrix $T$ and augmented matrices $\av{\alpha}{X}$ and $\av{\beta}{Y}$ where
\begin{equation}
X = \begin{bmatrix} X_1 & X_2 & \dotsc & X_k \end{bmatrix}, \quad Y = \begin{bmatrix} Y_1 & Y_2 & \dotsc & Y_k \end{bmatrix},
\end{equation}
such that $Z_j = Y_j - TX_j$ is defined and
\begin{equation}
\|X_j\|_\infty \leq \Omega, \quad \|Y_j\|_\infty \leq \Omega.
\end{equation}
}
\KwResult{An augmented matrix $\av{\zeta}{Z}$ such that
\begin{equation}
\zeta_j^{-1} Z_j = \beta_j^{-1} Y_j - T (\alpha_j^{-1} X_j), \quad \|Z_j\|_\infty \leq \Omega,
\end{equation}
and $Z$ can be computed without exceeding $\Omega$.
}
\For{$j=1,\dotsc,k$}{
$\gamma_j = \min \{\alpha_j, \beta_j\}$\;
$\delta_j = \texttt{ProtectUpdate}((\gamma_j/\beta_j)\|Y_j\|_\infty, \|T\|_\infty, (\gamma_j/\alpha_j)\|X_j\|_\infty)$\;
$X_j \gets \delta_j (\gamma_j/\alpha_j) X_j$\;
$Y_j \gets \delta_j (\gamma_j/\beta_j) Y_j$\;
$\zeta_j = \delta_j \gamma_j$\;
}
$Z \gets Y - TX$\;
\Return $\av{\zeta}{Z}$
\end{algorithm}
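As an illustration, the scaling logic of Algorithm \ref{alg:left} for a single block column can be sketched in Python as follows. This is a simplified model using the {\tt ProtectUpdate} formula sketched earlier, and it makes no claim to match {\tt Zazamoukh}'s internals:

```python
import numpy as np

OMEGA = np.finfo(np.float64).max / 4        # stand-in overflow threshold (assumption)

def protect_update(ny, nt, nx):
    s = ny + nt * nx
    return 1.0 if s <= OMEGA else OMEGA / s

def left_update(T, alpha, X, beta, Y):
    """One block column of the left update: returns (zeta, Z) such that
    zeta^{-1} Z = beta^{-1} Y - T (alpha^{-1} X), computed without overflow."""
    gamma = min(alpha, beta)
    delta = protect_update((gamma / beta) * np.linalg.norm(Y, np.inf),
                           np.linalg.norm(T, np.inf),
                           (gamma / alpha) * np.linalg.norm(X, np.inf))
    Xs = delta * (gamma / alpha) * X          # consistently rescaled operands
    Ys = delta * (gamma / beta) * Y
    return delta * gamma, Ys - T @ Xs

# demo: alpha = 0.5, beta = 1.0, no overflow risk => zeta = 0.5
rng = np.random.default_rng(0)
T0 = rng.standard_normal((4, 4))
X0 = rng.standard_normal((4, 3))
Y0 = rng.standard_normal((4, 3))
zeta, Z = left_update(T0, 0.5, X0, 1.0, Y0)
```

In the benign case the common scaling $\gamma$ and the protective factor $\delta$ simply bring both operands to a consistent representation; only when the update would overflow does $\delta < 1$ shrink the result, with the shrinkage recorded in $\zeta$.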
In order to execute all linear updates needed for a robust variant of Algorithm \ref{alg:blocked-computation-all-eigenvectors} we require certain norms. Specifically, we need the infinity norms of all super-diagonal blocks of $S$ and $T$. Moreover, we require the infinity norms of certain submatrices of $Y$. These submatrices consist of either a single column (segment of a real eigenvector) or two adjacent columns (segment of a complex eigenvector). The infinity norm must be recomputed whenever a submatrix has been initialized or updated. {\tt ProtectUpdate} requires that its input arguments are bounded by $\Omega$, and failure is possible if they are not. It is therefore necessary to scale the matrices $S$ and $T$ to ensure that all blocks have infinity norm bounded by $\Omega$.
\section{Zazamoukh - a task-parallel robust solver} \label{sec:Zazamoukh}
The new StarNEig library runs on top of StarPU and can be used to solve dense non-symmetric eigenvalue problems. A robust variant of Algorithm \ref{alg:blocked-computation-all-eigenvectors} has been implemented in StarNEig. This implementation ({\tt Zazamoukh}) uses augmented matrices and scaling factors which are signed 32 bit integers. \texttt{Zazamoukh} can compute eigenvectors corresponding to a subset $E \subseteq \lambda(S,T)$ which is closed under complex conjugation.
\texttt{Zazamoukh} is currently limited to shared memory, but an extension to distributed memory is planned.
\subsection{Memory layout}
Given block sizes $mb$ and $nb$, {\tt Zazamoukh} partitions $S$, $T$ and the matrix of eigenvectors $Y$ conformally by rows and columns. In the absence of any 2-by-2 blocks on the diagonals of $S$ and $T$, the tiles of $S$ and $T$ are $mb$ by $mb$ and the tiles of $Y$ are $mb$ by $nb$; the only exceptions are found along the right and lower boundaries of the matrices. This default configuration is adjusted minimally to avoid splitting any 2-by-2 diagonal block of $S$ or separating the real part and the imaginary part of a complex eigenvector into different block columns.
\subsection{Tasks}
{\tt Zazamoukh} relies on four types of tasks
\begin{enumerate}
\item Pre-processing tasks which compute all quantities needed for robustness. This includes the infinity norms of all super-diagonal tiles of $S$ and $T$ as well as all norms needed for the robust solution of equations of the type \eqref{equ:solve}. If necessary, the matrices $S$ and $T$ are scaled minimally.
\item Solve tasks which use {\tt DTGEVC} to compute the lower \emph{tips} of eigenvectors and a robust solver based on {\tt DLALN2} to solve equations of the type \eqref{equ:solve}.
\item Update tasks which execute updates of the type \eqref{equ:update} robustly.
\item Post-processing tasks which enforce a consistent scaling on all eigenvectors.
\end{enumerate}
\subsection{Task insertion order and priorities}
\texttt{Zazamoukh} is closely related to \texttt{Kiya} which solves triangular linear systems with multiple right-hand sides. Apart from the pre-processing and post-processing tasks, the main task graph is the disjoint union of $p$ task graphs, one for each block column of the matrix of eigenvectors. {\tt Zazamoukh} uses the same task insertion order and priorities as {\tt Kiya} to process each of the $p$ sub-graphs.
\section{Numerical experiments} \label{sec:experiments}
In this section we give the results of a set of experiments involving tiny ($m\leq 10\,000$) and small ($m\leq 40\,000$) matrices. Each experiment consisted of computing all eigenvectors of the matrix pencil. The run time was measured for LAPACK ({\tt DTGEVC}) and {\tt Zazamoukh}. Results for somewhat larger matrices $(m\leq 80\,000)$ can be found in NLAFET Deliverable 2.7 \cite{D27}.
The experiments were executed on an Intel Xeon E5-2690v4 (“Broadwell”) node with 28 cores arranged in two NUMA islands with 14 cores each. The theoretical peak performance in double-precision arithmetic
is 41.6 GFLOPS/s for one core and 1164.8 GFLOPS/s for a full node.
We used the StarNEig test-program {\tt starneig-test} to generate reproducible experiments. The default parameters produce matrix pencils where approximately 1 percent of all eigenvalues are zeros, 1 percent of all eigenvalues are infinities and there are no indefinite eigenvalues. {\tt Zazamoukh} used the default tile size $mb$ which is 1.6 percent of the matrix dimension for matrix pencils with dimension $m \ge 1000$.
All experiments were executed with exclusive access to a complete node (28 cores). LAPACK was run in sequential mode, while {\tt Zazamoukh} used 28 StarPU workers and 1 master thread. A summary of our results is given in Table \ref{fig:results}. The speedup of {\tt Zazamoukh} over LAPACK is initially very modest, as there are not enough tasks to keep 28 workers busy, but it picks up rapidly, and {\tt Zazamoukh} achieves a superlinear speedup over {\tt DTGEVC} when $m \ge 10\,000$. This reflects the fact that {\tt Zazamoukh} uses a blocked algorithm, whereas {\tt DTGEVC} computes the eigenvectors one by one.
\begin{table}
\centering
\begin{tabular}{R{1.5cm}C{0.3cm}R{0.9cm}R{0.9cm}R{0.9cm}C{0.3cm}R{1.5cm}R{1.5cm}C{0.3cm}R{1.5cm}}
dimension && \multicolumn{3}{c}{eigenvalue analysis} && \multicolumn{2}{c}{runtime (ms)} && SpeedUp \\
\cmidrule{1-1} \cmidrule{3-5} \cmidrule{7-8} \cmidrule{10-10}
m && zeros & inf. & indef. && LAPACK & StarNEig && \\
1000 && 11 & 13 & 0 && 295 & 175 && 1.6857 \\
2000 && 25 & 16 & 0 && 1598 & 409 && 3.9071 \\
3000 && 24 & 30 & 0 && 6182 & 929 && 6.6545 \\
4000 && 42 & 49 & 0 && 15476 & 1796 && 8.6169 \\
5000 && 54 & 37 & 0 && 30730 & 2113 && 14.5433 \\
6000 && 61 & 64 & 0 && 53700 & 2637 && 20.3641 \\
7000 && 67 & 64 & 0 && 84330 & 3541 && 23.8153 \\
8000 && 56 & 69 & 0 && 122527 & 4769 && 25.6924 \\
9000 && 91 & 91 & 0 && 171800 & 6189 && 27.7589 \\
10000 && 108 & 94 & 0 && 242466 & 7821 && 31.0019 \\
20000 && 175 & 197 & 0 && 2034664 & 49823 && 40.8378 \\
30000 && 306 & 306 & 0 && 7183746 & 162747 && 44.1406 \\
40000 && 366 & 382 & 0 && 17713267 & 380856 && 46.5091 \\
\bottomrule
\end{tabular} \caption{Comparison of sequential {\tt DTGEVC} to task-parallel {\tt Zazamoukh} using 28 cores. The runtimes are given in milliseconds (ms). The last column gives the speedup of {\tt Zazamoukh} over LAPACK; values above 28 correspond to superlinear speedup. All eigenvectors were computed with a relative residual less than $2u$, where $u$ denotes the double-precision unit roundoff.} \label{fig:results}
\end{table}
\section{Conclusion} \label{sec:conclusion}
Previous work by Mikkelsen, Schwarz and Karlsson has shown that triangular linear systems can be solved in parallel without overflow using augmented matrices. In this paper we have shown that the eigenvectors of a matrix pencil can be computed in parallel without overflow using augmented matrices. Certainly, robust algorithms are slower than non-robust algorithms when numerical scaling is not needed, but robust algorithms will always return a result which can be evaluated in the context of the user's application. To the best of our knowledge, StarNEig is the only library which contains a parallel robust solver for computing the generalized eigenvectors of a dense nonsymmetric matrix pencil. The StarNEig solver ({\tt Zazamoukh}) runs on top of StarPU and uses augmented matrices and scaling factors which are integer powers of $2$ to prevent overflow. It achieves superlinear speedup compared with {\tt DTGEVC} from LAPACK.
In the immediate future we expect to pursue the following work:
\begin{enumerate}
\item Extend {\tt Zazamoukh} to also compute left eigenvectors. Here the layout of the loops is different and we must use the 1-norm instead of the infinity norm when executing the overflow protection logic.
\item Extend {\tt Zazamoukh} to distributed memory machines.
\item Extend {\tt Zazamoukh}'s solver to use recursive blocking to reduce the run-time further. The solve tasks all lie on the critical path of the task graph.
\item Extend {\tt Zazamoukh} to complex data-types. This case is simpler than real arithmetic because there are no $2$-by-$2$ blocks on the main diagonal of $S$.
\item Revisit the complex division routine {\tt xLADIV} \cite{baudin2012}, which is the foundation for the {\tt DLALN2} routine used by {\tt Zazamoukh}'s solve tasks. In particular, the failure modes of {\tt xLADIV} have not been characterized \cite{Baudin}.
\end{enumerate}
\subsection*{Acknowledgements}
This work is part of a project (NLAFET) that has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 671633. This work was supported by the Swedish strategic research programme eSSENCE. We thank the High Performance Computing Center North (HPC2N) at Ume{\aa} University for providing computational resources and valuable support during test and performance runs.
% arXiv:2003.04776 (2020-03-11): "Parallel Robust Computation of Generalized Eigenvectors of Matrix Pencils"
% Subjects: Mathematical Software (cs.MS); Distributed, Parallel, and Cluster Computing (cs.DC); Numerical Analysis (math.NA)
% https://arxiv.org/abs/2003.04776
% arXiv:1705.01646: "Recursive Integral Method with Cayley Transformation"
\begin{abstract}
Recently, a non-classical eigenvalue solver, called RIM, was proposed to compute (all) eigenvalues in a region on the complex plane. Without solving any eigenvalue problem, it tests if a region contains eigenvalues using an approximate spectral projection. Regions that contain eigenvalues are subdivided and tested recursively until eigenvalues are isolated with a specified precision. This makes RIM an eigensolver distinct from all existing methods. Furthermore, it requires no a priori spectral information. In this paper, we propose an improved version of {\bf RIM} for non-Hermitian eigenvalue problems. Using Cayley transformation and Arnoldi's method, the computation cost is reduced significantly. Effectiveness and efficiency of the new method are demonstrated by numerical examples and compared with {\tt eigs} in Matlab.
\end{abstract}
\section{Introduction}
We consider the non-Hermitian eigenvalue problem
\begin{equation} \label{AxLambdaBx}
A x= \lambda B x,
\end{equation}
where $A$ and $B$ are $n\times n$ large sparse matrices. Here $B$ can be singular. Such eigenvalue problems arise in many scientific and engineering
applications \cite{GolubVorst2000CAM, Saad2011, SunZhou2016} as well as in emerging areas such as data analysis in social networks \cite{BoccalettiEtal2006PR}.
The problem of interest in this paper is to find (all) eigenvalues in a given region $S$ on the complex plane $\mathbb C$ without any spectral information,
i.e., the number and distribution of eigenvalues in $S$ are not known.
In a recent work \cite{Huang2016JCP}, we developed an eigenvalue solver {\bf RIM} (recursive integral method).
\textbf{RIM}, which is essentially different from all the existing eigensolvers, is based on spectral projection and domain decomposition. Briefly speaking,
given a region $S \subset \mathbb C$ whose boundary $\Gamma:=\partial S$ is a simple closed curve,
{\bf RIM} computes an indicator $\delta_S$ using spectral projection $P$ defined by a Cauchy contour integral on $\Gamma$.
The indicator is used to decide if $S$ contains eigenvalue(s). When the answer is positive, $S$ is divided into sub-regions
and indicators for these sub-regions are computed. The procedure continues until the size of the region is smaller than
the specified precision $\epsilon$ (e.g., $\epsilon = 10^{-6}$).
The centers of the final regions are taken as approximations of the eigenvalues.
To be specific, for $z \in \mathbb C$, the resolvent of the matrix pencil $(A,B)$ is defined as
\begin{equation}
R_z(A,B):=(A-zB)^{-1}.
\end{equation}
Let $\Gamma$ be a simple closed curve lying in the resolvent set of $(A,B)$ on $\mathbb C$. Spectral projection for \eqref{AxLambdaBx} is given by
\begin{equation}
P(A,B)=\dfrac{1}{2\pi i}\int_{\Gamma}(A-zB)^{-1}dz.
\end{equation}
Given a random vector ${\boldsymbol f}$, it is well-known that $P$ projects ${\boldsymbol f}$
onto the generalized eigenspace associated with the eigenvalues enclosed by $\Gamma$. Clearly, $P{\boldsymbol f}$ is zero if there are no eigenvalues
inside $S$, and generically nonzero otherwise.
{\bf RIM} differs from classical eigensolvers \cite{GolubVorst2000CAM} and recently developed integral-based methods \cite{SakuraiSugiura2003CAM,Polizzi2009PRB}.
It simply computes an indicator of a region using the approximation to $P{\boldsymbol f}$,
\begin{equation}\label{XLXf}
P{\boldsymbol f} \approx \dfrac{1}{2 \pi i} \sum_{j=1}^W \omega_j {\boldsymbol x}_j,
\end{equation}
where $\omega_j$'s are quadrature weights and ${\boldsymbol x}_j$'s are the solutions of the linear systems
\begin{equation}\label{linearsys}
(A- z_jB){\boldsymbol x}_j = {\boldsymbol f}, \quad j = 1, \ldots, W.
\end{equation}
Recall that if there is no eigenvalue inside $\Gamma$, then
$P{\boldsymbol f} = {\bf 0}$ for all
${\boldsymbol f} \in \mathbb C^n$.
In practice, one needs a threshold to distinguish
between $|P{\boldsymbol f}| \ne 0$ and
$|P{\boldsymbol f}|= 0$. The indicator needs to be robust enough to treat the following problems:
\begin{itemize}
\item[P1)] The randomly selected ${\boldsymbol f}$ may have only a
small component in the range of $P$ (which we write as
${\mathcal R}(P)$),
in which case $|P{\boldsymbol f}|$ may be small even
when there are eigenvalues in $S$;
\item[P2)] Since a quadrature rule is
used to approximate $|P{\boldsymbol f}|$,
the indicator will not be zero (and may not
even be very small) when there is no eigenvalue in $S$.
\end{itemize}
The strategy of {\bf RIM} in \cite{Huang2016JCP} selects a small threshold $\delta_0=0.1$
based on substantial experimentation, i.e., $S$ contains no eigenvalue if $\delta_S < \delta_0$.
This choice of threshold
for discarding a region systematically leans towards further
investigation of regions that may potentially contain eigenvalues.
Of course, such a strategy incurs additional computational cost.
To understand the first problem, consider an orthonormal basis
$\{{\boldsymbol \phi}_j, j=1,\ldots, M\}$
for ${\mathcal R}(P)$, which coincides with the
eigenspace associated with all the eigenvalues within $S$.
For a random ${\boldsymbol f}$,
\begin{equation}\label{fRP}
|P{\boldsymbol f}|=\left|{\boldsymbol f}|_{{\mathcal R}(P)}\right | =\left| \sum_{j=1}^M a_j {\boldsymbol \phi}_j \right |=\left(\sum_{j=1}^M |a_j|^2 \right)^{1/2},
\end{equation}
where $a_j = ({\boldsymbol f}, {\boldsymbol \phi}_j)$. Since ${\boldsymbol f}$ is random, $|P{\boldsymbol f}|$ could be very small.
The solution in \cite{Huang2016JCP} is to normalize
$P{\boldsymbol f}$ and project again.
The indicator $\delta_S$ is set to be
\begin{equation}
\label{sigmaP2P}
\delta_S := \left | P \left( \frac{P {\boldsymbol f}}{|P {\boldsymbol f}|}\right)\right|.
\end{equation}
Analytically, $P^2{\boldsymbol f} = P{\boldsymbol f}$. But numerical approximations to $P^2{\boldsymbol f}$ and $P{\boldsymbol f}$ may
differ significantly.
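For concreteness, the indicator \eqref{sigmaP2P} can be prototyped in a few lines of Python. The example below is an illustration with an arbitrary test matrix, not code from \cite{Huang2016JCP}: it approximates $P{\boldsymbol f}$ by the trapezoidal rule on a circle and shows that the indicator is $O(1)$ for a region containing an eigenvalue and tiny for an empty region:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = np.diag(np.arange(1.0, n + 1)) + 0.01 * rng.standard_normal((n, n))
B = np.eye(n)

def project(center, radius, f, W=32):
    # (1/2*pi*i) * contour integral of (A - zB)^{-1} f over a circle,
    # trapezoidal rule with z_j = center + radius*exp(i t_j), dz = i*radius*exp(i t_j) dt
    Pf = np.zeros(len(f), dtype=complex)
    for t in 2 * np.pi * np.arange(W) / W:
        z = center + radius * np.exp(1j * t)
        Pf += np.linalg.solve(A - z * B, f) * radius * np.exp(1j * t) / W
    return Pf

f = rng.standard_normal(n).astype(complex)
Pf = project(1.0, 0.4, f)                           # circle around the eigenvalue near 1
delta_S = np.linalg.norm(project(1.0, 0.4, Pf / np.linalg.norm(Pf)))
delta_empty = np.linalg.norm(project(0.0, 0.4, f))  # no eigenvalue inside this circle
assert delta_S > 0.5 and delta_empty < 1e-6
```

The second projection of the normalized vector returns a vector of norm close to $1$ when the region contains an eigenvalue, which is exactly the behavior the threshold $\delta_0$ exploits.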
\vskip 0.1cm
The following is the basic algorithm for {\bf RIM}:
\begin{itemize}
\item[] {\bf RIM}$(A, B, S, \epsilon, \delta_0, {\boldsymbol f})$
\item[]{\bf Input:} matrices $A, B$, region $S$, precision $\epsilon$, threshold $\delta_0$, random vector ${\boldsymbol f}$.
\item[]{\bf Output:} generalized eigenvalue(s) $\lambda$ inside $S$
\vskip 0.1cm
\item[1.] Compute ${\delta_S}$ using \eqref{sigmaP2P}.
\item[2.] Decide if $S$ contains eigenvalue(s).
\begin{itemize}
\item If $\delta_S < \delta_0$, then exit.
\item Otherwise, compute the size $h(S)$ of $S$.
\begin{itemize}
\item[-] If $h(S) > \epsilon $,
\begin{itemize}
\item[] partition $S$ into subregions $S_j, j=1, \ldots N$.
\item[] for $j=1: N$
\item[] $\qquad${\bf RIM}$(A, B, S_j, \epsilon, \delta_0, {\boldsymbol f})$.
\item[] end
\end{itemize}
\item[-] If $h(S) \le \epsilon$,
\begin{itemize}
\item[] set $\lambda$ to be the center of $S$.
\item[] output $\lambda$ and exit.
\end{itemize}
\end{itemize}
\end{itemize}
\end{itemize}
To compute $\delta_S$, one needs to solve many linear systems
\begin{equation}\label{fAzjByj}
(A-z_j B) {\boldsymbol x}_j = {\boldsymbol f}
\end{equation}
parameterized by $z_j$.
In \cite{Huang2016JCP}, the Matlab linear solver `\textbackslash' is used to solve \eqref{fAzjByj}. This is certainly not efficient.
In this paper, we propose a new version of {\bf RIM}, called {\bf RIM-C}, to improve the efficiency.
The contributions include: 1) Cayley transformation and Arnoldi's method to speedup linear solves for the parameterized system
\eqref{fAzjByj}; and 2) a new indicator to improve the robustness and efficiency.
The rest of the paper is arranged as follows. In Section 2, we present how to incorporate Cayley transformation and the Arnoldi's method
into {\bf RIM}. In Section 3, we introduce a new indicator to decide if a region contains eigenvalues. Section 4 contains the new algorithm
and some implementation details. Numerical examples are presented in Section 5. We end the paper with some conclusions and future
work in Section 6.
\section{Cayley Transformation and Arnoldi's Method}
\subsection{Cayley transformation}
The computation cost of {\bf RIM} mainly comes from solving the linear systems \eqref{fAzjByj} to compute the spectral projection $P{\boldsymbol f}$.
In particular, when the method zooms in around an eigenvalue, it needs to solve linear systems for many close $z_j$'s.
This is done one by one in the first version of {\bf RIM} \cite{Huang2016JCP}.
It is clear that the computation cost will be greatly reduced if one can take advantage of the fact that the parameterized linear systems share the same structure.
Without loss of generality, we consider a family of linear systems
\begin{equation} \label{eq:4}
(A-zB) {\boldsymbol x} ={\boldsymbol f},
\end{equation}
where $z$ is a complex number.
When $B$ is nonsingular, multiplication of $B^{-1}$ on both sides of \eqref{eq:4} leads to
\begin{equation} \label{eq:Precond}
(B^{-1}A-z I) {\boldsymbol x}=B^{-1} {\boldsymbol f}.
\end{equation}
Given a matrix $M$, a vector ${\boldsymbol b}$, and a positive integer $m$, the Krylov subspace is defined as
\begin{align}
K_{m}(M; {\boldsymbol b}):=\text{span} \{{\boldsymbol b}, M {\boldsymbol b}, \ldots, M^{m-1}{\boldsymbol b} \}.
\end{align}
The shift-invariant property of Krylov subspaces says that
\begin{align} \label{eq:5}
K_{m}(a M+ b I; {\boldsymbol b})=K_{m}(M; {\boldsymbol b}),
\end{align}
where $a$ and $b$ are two scalars.
Thus the Krylov subspace of $B^{-1}A-z I$ is the same as $B^{-1}A$, which is independent of $z$.
The above derivation fails when $B$ is singular.
Fortunately, this can be fixed by Cayley transformation \cite{Meerbergen2003SIAM}.
Assume that $\sigma$ is not a generalized eigenvalue and $\sigma \neq z$.
Multiplying both sides of \eqref{eq:4} with
\begin{equation}\label{newprecond}
(A- \sigma B)^{-1},
\end{equation}
one obtains that
\begin{eqnarray*}
\label{ABsigmaz}(A- \sigma B)^{-1}{\boldsymbol f}&=&(A- \sigma B)^{-1}(A-z B){\boldsymbol x} \\
\nonumber &=&(A- \sigma B)^{-1}(A-\sigma B+(\sigma -z)B) {\boldsymbol x} \label{eq:Cayley} \\
\nonumber &=&(I+(\sigma - z)(A- \sigma B)^{-1}B) {\boldsymbol x}.\\
\end{eqnarray*}
Let $M=(A- \sigma B)^{-1}B$ and
${\boldsymbol b}=(A- \sigma B)^{-1}{\boldsymbol f}$. Then
\eqref{eq:4} becomes
\begin{align} \label{eq:6}
(I+(\sigma -z)M) {\boldsymbol x} = {\boldsymbol b}.
\end{align}
From \eqref{eq:5}, the Krylov subspace $K_{m}(I+(\sigma -z)M; {\boldsymbol b})$ is the same as $K_{m}(M; {\boldsymbol b})$.
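A short numerical illustration of this equivalence, using an arbitrary pencil whose $B$ is deliberately singular (so that the preconditioner $B^{-1}$ of \eqref{eq:Precond} is unavailable) and a generic shift $\sigma$ assumed not to be an eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
B[:, 0] = 0.0                       # make B singular on purpose
f = rng.standard_normal(n)

sigma, z = 0.7, 0.3 + 0.2j          # generic sigma, assumed not an eigenvalue
AsB = A - sigma * B
M = np.linalg.solve(AsB, B)         # Cayley-transformed operator (A - sigma*B)^{-1} B
b = np.linalg.solve(AsB, f)

# Solving (I + (sigma - z) M) x = b is equivalent to solving (A - z B) x = f
x = np.linalg.solve(np.eye(n) + (sigma - z) * M, b.astype(complex))
assert np.allclose(A @ x - z * (B @ x), f)
```

Since $M$ and ${\boldsymbol b}$ do not depend on $z$, a single Krylov basis for $K_m(M;{\boldsymbol b})$ serves every quadrature point $z_j$.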
\subsection{Analysis of the pre-conditioners}
Now we look at the connection between two pre-conditioners $B^{-1}$ and $(A- \sigma B)^{-1}$.
Assume that $B$ is non-singular. Let $\lambda$ be an eigenvalue of $B^{-1}A$. Then $\theta=\dfrac{\lambda - z}{\lambda -\sigma}$
is an eigenvalue of
\[
(A- \sigma B)^{-1}(A-z B).
\]
The spectrum of $B^{-1}A$ might spread over the complex plane such that
Krylov subspace based iterative methods may not converge. However, after Cayley transformation, when $\lambda$ becomes large,
$\theta$ will cluster around $1$ (see Fig.~\ref{SpectrumCT} for matrices $A$ and $B$ of {\bf Example 1} in Section 5).
A similar result holds when $B$ is singular. Note that when $\lambda$ approaches $\sigma$, $\theta$ becomes very large in magnitude.
When $\lambda$ approaches $z$, $\theta$ goes to zero.
When $\lambda$ is away from both $\sigma$ and $z$, $\theta$ is $O(1)$. The key point is that the spectrum of \eqref{eq:Cayley}
has a cluster of eigenvalues around $1$ and only a few isolated eigenvalues, which favors fast convergence of Krylov subspace methods.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\resizebox{0.45\textwidth}{!}{\includegraphics{2.eps}}&
\resizebox{0.45\textwidth}{!}{\includegraphics{1.eps}}
\end{tabular}
\end{center}
\caption{Matrices $A$ and $B$ are from Example 1 in Section 5. Left: Spectrum of original problem. Right: Spectrum after Cayley transformation.}
\label{SpectrumCT}
\end{figure}
\subsection{Arnoldi's Method for Linear Systems}
The computation cost can be significantly reduced by exploiting \eqref{eq:6}. Consider the orthogonal projection method for
\[
M {\boldsymbol x}={\boldsymbol b}.
\]
Let the initial guess be ${\boldsymbol x}_0={\boldsymbol 0}$. One seeks an approximate solution ${\boldsymbol x}_m$ in
$K_m(M; {\boldsymbol b})$ of dimension $m$ by imposing the Galerkin condition \cite{Saad2003}
\begin{equation} \label{eq:Krylov}
({\boldsymbol b}-M {\boldsymbol x}_m) \perp K_m(M; {\boldsymbol b}).
\end{equation}
The basic Arnoldi process (Algorithm 6.1 of \cite{Saad2011}) is as follows.
\begin{itemize}
\item[1.] Choose a vector ${\boldsymbol v}_1$ of norm $1$
\item[2.] for $j=1,2, \ldots, m$
\begin{itemize}
\item $h_{ij} = (M{\boldsymbol v}_j, {\boldsymbol v}_i), \quad i=1,2, \ldots,j$,
\item ${\boldsymbol w}_j = M {\boldsymbol v}_j - \sum_{i=1}^j h_{ij} {\boldsymbol v}_i$,
\item $h_{j+1,j}=\|{\boldsymbol w}_j\|_2$; if $h_{j+1, j}=0$ stop,
\item ${\boldsymbol v}_{j+1} = {\boldsymbol w}_j/h_{j+1, j}$.
\end{itemize}
\end{itemize}
Let $V_m$ be the $n \times m$ orthogonal matrix with column vectors ${\boldsymbol v}_1, \ldots, {\boldsymbol v}_m$ and $H_m$ be
the $m \times m$ Hessenberg matrix whose nonzero entries $h_{i,j}$ are defined as above.
From Proposition 6.6 of \cite{Saad2011}, one has that
\begin{equation} \label{eq:arnoldi}
M V_m=V_m H_m + {\boldsymbol v}_{m+1} {h}_{m+1,m}{\boldsymbol e}^{T}_m
\end{equation}
such that
\[
\mathrm{span}\{{\boldsymbol v}_1, \ldots, {\boldsymbol v}_m\}=K_{m}(M; {\boldsymbol b}).
\]
Let ${\boldsymbol x}_m=V_m {\boldsymbol y}$.
The Galerkin condition \eqref{eq:Krylov} becomes
\begin{equation}
V^{T}_m {\boldsymbol b}-V^{T}_m M V_m {\boldsymbol y} ={\boldsymbol 0}.
\end{equation}
Since $V_m^T MV_m = H_m$ (see Proposition 6.5 of \cite{Saad2003}), the following holds:
\[
H_m {\boldsymbol y}=V^{T}_m {\boldsymbol b}.
\]
From the construction of $V_m$, ${\boldsymbol v}_1=\dfrac{{\boldsymbol b}}{\|{\boldsymbol b}\|_2}$. Let $\beta=\|{\boldsymbol b}\|_2$. Then
\begin{equation}
{\boldsymbol y}=\beta H_m^{-1}{\boldsymbol e}_1.
\end{equation}
Consequently, the residual of the approximated solution ${\boldsymbol x}_m$ can be written as
\begin{equation} \label{eq:err1}
\|{\boldsymbol b}-M {\boldsymbol x}_m\|_2 = {h}_{m+1,m}|{\boldsymbol e}_m^{T} {\boldsymbol y}|.
\end{equation}
Due to the shift invariant property, one has that
\begin{equation} \label{eq:arno}
\{I+(\sigma -z)M \} V_m =V_m( I+(\sigma -z)H_m)
+(\sigma - z){\boldsymbol v}_{m+1} {h}_{m+1,m}{\boldsymbol e}^{T}_m.
\end{equation}
By imposing a Galerkin condition similar to \eqref{eq:Krylov}, we have that
\begin{equation}
V^{T}_m {\boldsymbol b}-V^{T}_m \{I+(\sigma -z)M \} V_m {\boldsymbol y} =0,
\end{equation}
which implies
\begin{equation} \label{eq:many}
\{I+(\sigma -z)H_m \} {\boldsymbol y} =\beta {\boldsymbol e}_1.
\end{equation}
From \eqref{eq:err1}, one has that
\begin{equation} \label{eq:error}
\|{\boldsymbol b}-\{I+(\sigma - z)M \} {\boldsymbol x}_m\|_2 =|\sigma -z|\,{h}_{m+1,m}|{\boldsymbol e}_m^{T} {\boldsymbol y}|.
\end{equation}
The matrix $M$ is $n \times n$, while $H_m$ is an $m \times m$ upper Hessenberg matrix with $m \ll n$.
Once $H_m$ and $V_m$ are constructed by Arnoldi's process,
they can be used to solve \eqref{eq:many} for different $z$'s with residual given by \eqref{eq:error}.
The residual can be monitored at little extra cost.
Next we explain how the Arnoldi process is incorporated into {\bf RIM}.
To solve \eqref{fAzjByj} for quadrature points $z_j$'s, one chooses a proper shift $\sigma$. Following \eqref{eq:6}, one has that
\begin{align}
(I+(\sigma -z_j)M) {\boldsymbol x}_j={\boldsymbol b},
\end{align}
where $M=(A-\sigma B)^{-1} B$ and ${\boldsymbol b}=(A- \sigma B )^{-1} {\boldsymbol f}$.
From \eqref{eq:arno} and \eqref{eq:many},
\begin{eqnarray} \label{eq:y}
{\boldsymbol y}_j&=&\beta (I+ (\sigma-z_j) H_m)^{-1}{\boldsymbol e}_1, \\
\nonumber {\boldsymbol x}_j &\approx& V_m {\boldsymbol y}_j, \\
\label{eq:reduced}
P{\boldsymbol f} &\approx& \dfrac{1}{2 \pi i} \sum_j w_j V_m {\boldsymbol y}_j.
\end{eqnarray}
Hence the Krylov subspace for $M=(A-\sigma B)^{-1} B$ can be used to solve many linear systems associated with $z_j$'s close to $\sigma$.
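This reuse is easy to check numerically. In the sketch below, a made-up diagonal pencil with $B=I$ stands in for the actual problem; one Arnoldi factorization of $M=(A-\sigma B)^{-1}B$ serves several quadrature points $z$, and the computed residual matches $|\sigma-z|\,h_{m+1,m}|{\boldsymbol e}_m^T{\boldsymbol y}|$:

```python
import numpy as np

def arnoldi(M, b, m):
    # Basic Arnoldi process (complex arithmetic); no breakdown handling.
    n = b.shape[0]
    V = np.zeros((n, m + 1), dtype=complex)
    H = np.zeros((m + 1, m), dtype=complex)
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = M @ V[:, j]
        for i in range(j + 1):
            H[i, j] = np.vdot(V[:, i], w)
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H, beta

n, m = 300, 30
A = np.diag(np.arange(1.0, n + 1))        # made-up pencil with B = I
B = np.eye(n)
f = np.random.default_rng(0).standard_normal(n)

sigma = 4.5 + 0.5j                         # made-up shift
M = np.linalg.solve(A - sigma * B, B)      # M = (A - sigma B)^{-1} B
b = np.linalg.solve(A - sigma * B, f)      # b = (A - sigma B)^{-1} f

V, H, beta = arnoldi(M, b, m)
Vm, Hm, h_next = V[:, :m], H[:m, :], abs(H[m, m - 1])

pairs = []
for z in [sigma + 0.1, sigma + 0.2j, sigma - 0.15]:  # quadrature points near sigma
    y = np.linalg.solve(np.eye(m) + (sigma - z) * Hm, beta * np.eye(m)[0])
    x = Vm @ y                                        # approximation of x_j
    true_res = np.linalg.norm(b - x - (sigma - z) * (M @ x))
    est_res = abs(sigma - z) * h_next * abs(y[-1])    # residual identity
    pairs.append((true_res, est_res))
    print(true_res, est_res)                          # the two columns agree
```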
\section{An Efficient Indicator}
Another critical issue for {\bf RIM} is how to define the indicator $\delta_S$. As seen above,
the indicator in \cite{Huang2016JCP} defined by \eqref{sigmaP2P} projects a random vector twice.
One needs to solve linear systems with different right hand sides, i.e.,
${\boldsymbol f}$ and $P{\boldsymbol f}/|P{\boldsymbol f}|$.
Consequently, two Krylov subspaces, rather than one, are constructed for a single shift $\sigma$.
In this section, we propose a new indicator that avoids the construction of two Krylov subspaces.
The indicator still needs to resolve the two problems (P1 and P2) in Section 1.
The idea is to approximate $|P {\boldsymbol f}|$ with different numbers of trapezoidal quadrature points,
taking advantage of the Cayley transformation and the Arnoldi method discussed in the previous section.
Let $P{\boldsymbol f}|_{n}$ be the approximation of $ P {\boldsymbol f} $ with $n$ quadrature points.
It is well known that the trapezoidal quadrature
of a periodic function converges exponentially \cite[Section 4.6.5]{davis1984methods}, i.e.,
\begin{align*}
\left|P{\boldsymbol f}- P{\boldsymbol f}|_n\right| = O(e^{-C n}),
\end{align*}
where $C$ is a constant depending on ${\boldsymbol f}$. The spectral projection satisfies
\[ P {\boldsymbol f}|_{n}\begin{cases}
\neq {\boldsymbol 0} & \text{if there are eigenvalues inside } S, \\
\approx {\boldsymbol 0} & \text{if there is no eigenvalue inside } S.
\end{cases} \]
For a large enough $n_0$, one has that
\[ \dfrac{ \left | P {\boldsymbol f}|_{2n_0}\right|}{ \left | P {\boldsymbol f}|_{n_0} \right|}=\begin{cases}
\dfrac{|P {\boldsymbol f}| + O(e^{-2Cn_0})}{|P {\boldsymbol f}| + O(e^{-Cn_0})}\approx 1 & \text{if there are eigenvalues inside } S, \\
\dfrac{ O(e^{-2Cn_0})}{ O(e^{-Cn_0})}=O(e^{-Cn_0}) & \text{if there is no eigenvalue inside } S.
\end{cases} \]
The new indicator is set to be
\begin{equation}\label{ISPf}
\delta_S = \left|P {\boldsymbol f}|_{2n_0}\right| \big/ \left|P {\boldsymbol f}|_{n_0}\right|.
\end{equation}
A threshold value $\delta_0$ is also needed to decide whether there exist eigenvalues in $S$.
If $\delta_S > \delta_0:=0.2$, $S$ is said to be admissible, i.e., it contains eigenvalue(s). The value $0.2$ was chosen based on numerical experimentation.
Due to \eqref{eq:y} - \eqref{eq:reduced}, the new indicator is inexpensive to evaluate.
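The behavior of the indicator can be reproduced on a toy problem. The sketch below uses a circular contour (the search regions in the paper are squares, but the quadrature idea is identical) and a made-up diagonal pencil with known eigenvalues:

```python
import numpy as np

def proj_norm(A, B, f, c, r, n):
    """| P f |_n : norm of the spectral projection onto the eigenvalues
    inside the circle |z - c| = r, n-point trapezoidal rule on the contour."""
    th = 2 * np.pi * np.arange(n) / n
    Pf = sum(r * np.exp(1j * t) *
             np.linalg.solve((c + r * np.exp(1j * t)) * B - A, B @ f)
             for t in th) / n
    return np.linalg.norm(Pf)

lam = np.arange(1.0, 11.0)              # known eigenvalues 1, 2, ..., 10
A, B = np.diag(lam), np.eye(10)
f = np.ones(10) / np.sqrt(10)           # a fixed normalized test vector

n0 = 4
deltas = []
for c, r in [(2.5, 0.8), (5.5, 0.3)]:   # first circle contains {2, 3}; second none
    d = proj_norm(A, B, f, c, r, 2 * n0) / proj_norm(A, B, f, c, r, n0)
    deltas.append(d)
    print(c, d)   # near 1 when eigenvalues lie inside, below the threshold 0.2 otherwise
```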
\section{The New Algorithm}
Now we are ready to describe the algorithm in detail.
It starts with several shifts $\sigma$ distributed uniformly in $S$. The associated Krylov subspaces $K_m(M; {\boldsymbol b})$ are constructed and stored.
For a quadrature point $z$, the algorithm first attempts to solve the linear system \eqref{eq:4} using the Krylov subspace with shift $\sigma$ closest to $z$.
If the residual is larger than the given precision $\epsilon$, a Krylov subspace with a new shift $\sigma$ is constructed, stored and used to solve the
linear system.
In brief, the algorithm constructs several Krylov subspaces with different $\sigma$'s.
These subspaces are then used to solve the linear system for all quadrature points $z_j$'s.
From \eqref{eq:y} and \eqref{eq:reduced}, instead of solving a family of linear systems of size $n$,
the algorithm solves linear systems of reduced size $m$ for most $z_j$'s.
This is the key idea to speed up {\bf RIM}.
We denote this improved version of {\bf RIM} by {\bf RIM-C} ({\bf RIM} with Cayley transformation).
Given a search region $S$ and a normalized random vector ${\boldsymbol f}$, we compute the indicator $\delta_S$ using \eqref{ISPf}.
Without loss of generality, $S$ is assumed to be a square.
We set $n_0=4$ in \eqref{ISPf}.
If $\delta_S > 0.2$, $S$ is divided uniformly into $4$ regions. The indicators
of these regions are computed. This process continues until the size of the region is smaller than $d_0$.
\begin{enumerate}
\item[] {\bf Algorithm RIM-C:}
\item[] \textbf{RIM-C}$(A, B, S, {\boldsymbol f}, d_0, \epsilon, \delta_0, m, n_0)$
\item[] \textbf{Input:}
\begin{itemize}
\item $A, B$: $n \times n$ matrices
\item $S$: search region in $\mathbb C$
\item ${\boldsymbol f}$: a random vector
\item $d_0$: precision
\item $\epsilon$: residual threshold
\item $\delta_0$: indicator threshold
\item $m$: size of Krylov subspace
\item $n_0$: number of quadrature points
\end{itemize}
\item[] \textbf{Output:}
\begin{itemize}
\item generalized eigenvalues inside $S$
\end{itemize}
\end{enumerate}
\begin{enumerate}
\item Choose several $\sigma$'s uniformly in $S$ and construct Krylov subspaces
\item Compute $\delta_S$ using \eqref{ISPf}.
\begin{itemize}
\item[] Let $z$ be a quadrature point.
\item Check if the linear system can be solved using the existing Krylov subspaces with residual less than $\epsilon$.
\item Otherwise, choose a new $\sigma$, construct a new Krylov subspace to solve the linear system.
\end{itemize}
\item Decide if each $S$ contains eigenvalue(s).
\begin{itemize}
\item If $\delta_S = \left|P {\boldsymbol f}|_{2n_0}\right| \big/ \left|P {\boldsymbol f}|_{n_0}\right| < \delta_0$, exit.
\item Compute the size of $S$, $h(S)$.
\item[-] If $h(S)> d_0 $, uniformly partition $S$ into subregions $S_j$, $j=1, \ldots, 4$
\begin{itemize}
\item[] for $j=1$ to $4$
\item[] \qquad call \textbf{RIM-C}$(A, B, S_j, {\boldsymbol f}, d_0, \epsilon, \delta_0, m, n_0)$
\item[] end
\end{itemize}
\item[-] Otherwise, output the eigenvalue $\lambda$ and exit.
\end{itemize}
\end{enumerate}
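A compact, self-contained sketch of this recursion is given below; a circumscribed circular contour and dense solves on a made-up $4\times4$ diagonal pencil stand in for the square contour and the Krylov machinery of the full algorithm:

```python
import numpy as np

def proj_norm(A, B, f, c, r, n):
    # |P f|_n on the circle |z - c| = r with an n-point trapezoidal rule.
    th = 2 * np.pi * np.arange(n) / n
    Pf = sum(r * np.exp(1j * t) *
             np.linalg.solve((c + r * np.exp(1j * t)) * B - A, B @ f)
             for t in th) / n
    return np.linalg.norm(Pf)

def rim(A, B, f, box, d0=1e-3, delta0=0.2, n0=4, out=None):
    """Toy RIM recursion: subdivide boxes whose indicator exceeds delta0;
    record centers of admissible boxes smaller than d0."""
    if out is None:
        out = []
    x0, x1, y0, y1 = box
    c = complex((x0 + x1) / 2, (y0 + y1) / 2)
    r = np.hypot(x1 - x0, y1 - y0) / 2        # circle circumscribing the box
    delta = proj_norm(A, B, f, c, r, 2 * n0) / proj_norm(A, B, f, c, r, n0)
    if delta < delta0:
        return out                             # no eigenvalue: discard box
    if x1 - x0 < d0:
        out.append(c)                          # admissible and small: record center
        return out
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    for sub in [(x0, xm, y0, ym), (xm, x1, y0, ym),
                (x0, xm, ym, y1), (xm, x1, ym, y1)]:
        rim(A, B, f, sub, d0, delta0, n0, out)
    return out

A = np.diag([1.1, 2.3, 6.7, 9.2])             # made-up eigenvalues
B = np.eye(4)
f = np.array([0.3, 0.5, 0.7, 0.4]); f /= np.linalg.norm(f)

found = rim(A, B, f, (1.5, 3.5, -1.0, 1.0))   # region holds only lambda = 2.3
print(found[0])                                # centers cluster around 2.3
print(rim(A, B, f, (4.0, 5.0, -0.5, 0.5)))    # no eigenvalue inside
```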
\section{Numerical Examples}
In this section, {\bf RIM-C} (implemented in Matlab) is employed to compute all the eigenvalues in a given region.
To the authors' knowledge, no existing eigensolver performs exactly this task. We compare {\bf RIM-C} with `eigs' in Matlab
(IRAM: the Implicitly Restarted Arnoldi Method \cite{arpack}). Although the comparison is not entirely fair to either method, it gives some
idea of the performance of {\bf RIM-C}.
The matrices for {\bf Examples 1-5} come from finite element discretizations of the transmission eigenvalue problem \cite{JiSunTurner2012ACMTOM,Sun2011SIAMNA}
using different mesh sizes $h$. Therefore, the spectra of these problems are similar.
For Matlab function `eigs(A,B,K,SIGMA)', `K' and `SIGMA' denote the number of eigenvalues to compute and the {\it shift}, respectively.
For {\bf RIM-C}, the size of Krylov space is set to be $m=50$, $d_0 = 10^{-9}$, $\epsilon = 10^{-10}$, $\delta_0=0.2$, and $n_0=4$.
All the examples are computed on a MacBook Pro with 16 GB memory and a 3 GHz Intel Core i7.
{\bf Example 1:}
The matrices $A$ and $B$ are $1018 \times 1018$ (mesh size $h\approx 0.1$). The search region $S=[1, 11] \times [-1, 1]$.
For `eigs', the `shift' is set to be $5.5$. For this problem, it is known that there exist $5$ eigenvalues in $S$. Therefore, `K' is set to be $5$.
Note that {\bf RIM-C} does not need this information.
The results are shown in Table~\ref{1018}.
Both {\bf RIM-C} and `eigs' compute $5$ eigenvalues and they are consistent. `eigs' uses less time than {\bf RIM-C}.
\begin{table}[h!]
\caption{Eigenvalues computed and CPU time by {\bf RIM-C} and `eigs' for {\bf Example 1}.}
\label{1018}
\centering
\begin{tabular}{l|r|r}
\hline
& {\bf RIM-C} & `eigs' \\\hline
Eigenvalues & \textbf{3.9945390188}48445 & \textbf{3.9945390188}56096 \\
& \textbf{6.93971914380}0903 & \textbf{6.93971914380}4773 \\
& \textbf{6.9350539858}73570 & \textbf{6.9350539858}44678 \\
& \textbf{10.6546658534}90588 & \textbf{10.6546658534}41946\\
& \textbf{10.6587060246}50019 & \textbf{10.6587060246}09756\\
\hline
CPU time& 0.284922s &0.247310s \\
\hline
\end{tabular}
\end{table}
{\bf Example 2:}
Matrices $A$ and $B$ are $4066 \times 4066$ (mesh size $h\approx 0.05$). Let $S=[20, 30] \times [-6, 6]$. For `eigs', `{\it shift}' is set to be $25$.
Again, it is known in advance that there are $3$ eigenvalues in $S$. Hence `K' is set to be $3$.
The results are shown in Table~\ref{4066}. Both methods compute the same eigenvalues, and `eigs' is faster.
\begin{table}[h!]
\caption{Eigenvalues computed and CPU time by {\bf RIM-C} and `eigs' for {\bf Example 2}.}
\label{4066}
\centering
\begin{tabular}{l|r|r}
\hline
& {\bf RIM-C} & `eigs' \\\hline
Eigenvalues & \textbf{23.803023938}395199 $\qquad \qquad$ & \textbf{23.803023938}403236$\qquad \qquad$ \\
& $\pm$ \textbf{5.6823043148}76092i & $\qquad \pm$ \textbf{5.6823043148}40053i\\
& \textbf{24.73702749700}6540 & \textbf{24.73702749700}3453\\
& \textbf{24.7509596350}36583 & \textbf{24.7509596350}22376\\
& \textbf{25.2781451874}65789 & \textbf{25.2781451874}57707\\
& \textbf{25.2845015150}28143 & \textbf{25.2845015150}36474\\
\hline
CPU time& 0.558687s &0.333513s \\
\hline
\end{tabular}
\end{table}
{\bf Example 3:}
Matrices $A$ and $B$ are $16258 \times 16258$ matrices (mesh size $h\approx 0.025$).
Let $S=[0, 20] \times [-6, 6]$. There are $10$ eigenvalues in $S$.
It is well-known that the performance of `eigs' is highly dependent on `shift'.
In Table~\ref{16258}, we show the time used by {\bf RIM-C} and `eigs' with different shifts `{\it shift} =5, 10, 15'.
Notice that when the shift is not {\it good}, `eigs' uses much more time. In practice, {\it good} shifts are not known in advance.
\begin{table}[h!]
\caption{CPU time used by {\bf RIM-C} and `eigs' with different shifts for {\bf Example 3}.}
\label{16258}
\centering
\begin{tabular}{c|c|c|c|c}
\hline
& {\bf RIM-C} & `eigs' shift=5& `eigs' shift=10 & `eigs' shift=15 \\ \hline
CPU time& 2.571800s &0.590186s &\textbf{7.183679}s&0.392902s\\
\hline
\end{tabular}
\end{table}
{\bf Example 4:} We consider a larger problem: $A$ and $B$ are $260098 \times 260098$.
Let $S=[0, 20] \times [-6, 6]$ (mesh size $h\approx 0.00625$). There are $17$ eigenvalues in $S$.
The results are in Table~\ref{260098}. This example again shows that, for larger problems without any spectral information,
the performance of {\bf RIM-C} is quite stable and consistent, whereas the performance of `eigs' varies considerably with different `{\it shifts}'.
\begin{table}[h!]
\caption{CPU time used by {\bf RIM-C} and `eigs' with different {\it shifts} for {\bf Example 4}.}
\label{260098}
\centering
\begin{tabular}{c|c|c|c}
\hline
& RIM-C & `eigs' shift=5 &`eigs' shift=10 \\ \hline
CPU time& 104.228413s &\textbf{1696.703477}s&{\bf 272.506573}s\\
\hline
\end{tabular}
\end{table}
{\bf Example 5:} This example demonstrates the effectiveness and robustness of the new indicator.
The same matrices as in {\bf Example 3} ($16258 \times 16258$) are used. Consider three regions $S_1, S_2$ and $S_3$.
$S_1=[18.4,18.8] \times [-0.2,0.2]$ has one eigenvalue inside.
$S_2=[14.6,14.8] \times [-0.1,0.1]$ has two eigenvalues inside.
$S_3=[19.7,19.9] \times [-0.1,0.1]$ contains no eigenvalue.
Table~\ref{NewIndictorsV} shows the indicators of these three regions computed using \eqref{ISPf}.
The indicator clearly distinguishes regions that contain eigenvalues from regions that do not.
\begin{table}[h!]
\caption{Indicators: $S_1$ and $S_2$ contain at least one eigenvalue, $S_3$ contains no eigenvalue.}
\label{NewIndictorsV}
\centering
\begin{tabular}{c|r|r|r}
\hline
number of quadrature points & $\left|P{\boldsymbol f}|_n\right|$ for $S_1$ & $\left|P{\boldsymbol f}|_n\right|$ for $S_2$ & $\left|P{\boldsymbol f}|_n\right|$ for $S_3$\\
\hline
4 & 0.021036161440 & 0.000256531878& 0.001173702609\\
8 & 0.020981705584 & 0.000258504259 & 0.000044238403\\
\hline
$\delta_S$ & 0.997411 &0.992370 & 0.037691\\
\hline
\end{tabular}
\end{table}
Table~\ref{randv} shows the means, minima, maxima, and standard deviations of indicators of these three regions computed using $100$ random vectors.
The indicators are consistent for different random vectors.
\begin{table}[h!]
\caption{Means, minima, maxima, and standard deviations of indicators using $100$ random vectors.}
\label{randv}
\begin{center}
\begin{tabular}{lrrrr}
\hline
$S$ & mean & min.&max.&std. dev.\\
\hline
$S_1$&0.99848393687 &0.66250246918&1.43123449889&0.08740952445\\
$S_2$ &0.99926772105 &0.92600650392&1.14648387384&0.01788832121\\
$S_3$ &0.03763601782 &0.03734608324&0.03775912970&0.00010228556\\
\hline
\end{tabular}
\end{center}
\end{table}
{\bf Example 6:} The last example shows the potential of {\bf RIM-C} to treat large matrices.
The sparse matrices are $15{,}728{,}640 \times 15{,}728{,}640$, arising from a finite element discretization of localized quantum states in
random media \cite{Arnold2016}. {\bf RIM-C} computed $136$ real eigenvalues in $(2, 3)$, shown in
Fig.~\ref{Arnold}.
\begin{figure}
\begin{center}
{ \scalebox{0.5} {\includegraphics{Arnold.eps}}}
\caption{Distribution of eigenvalues in $(2,3)$ for {\bf Example 6}.}
\label{Arnold}
\end{center}
\end{figure}
\section{Conclusions and Future Work}
The purpose of this paper is to compute (all) the eigenvalues of a large sparse non-Hermitian problem
in a given region. We propose a new eigensolver, {\bf RIM-C}, an improved version of the recursive
integral method based on spectral projection. {\bf RIM-C} uses the Cayley transformation and the Arnoldi method
to reduce the computational cost.
To the authors' knowledge, {\bf RIM-C} is the only eigensolver for this particular purpose. As we mentioned,
the comparison of {\bf RIM-C} and `eigs' is unfair to both methods. However, the numerical results do show that {\bf RIM-C}
is effective and has the potential to treat large scale problems.
Currently, the algorithm is implemented in Matlab. A parallel version using C++ is under development.
For the time being, {\bf RIM-C} only computes eigenvalues, which is sufficient for some applications. However, adding
a component that returns the associated eigenvectors is necessary for other applications. It would also be useful to provide the
multiplicity of an eigenvalue. These are directions of future work toward making {\bf RIM-C} a robust and efficient eigensolver.
% https://arxiv.org/abs/1912.06999
\title{Fixed-Time Extremum Seeking}
\begin{abstract}
We introduce a new class of extremum seeking controllers able to achieve fixed time convergence to the solution of optimization problems defined by static and dynamical systems. Unlike existing approaches in the literature, the convergence time of the proposed algorithms does not depend on the initial conditions and it can be prescribed a priori by tuning the parameters of the controller. Specifically, our first contribution is a novel gradient-based extremum seeking algorithm for cost functions that satisfy the Polyak-Lojasiewicz (PL) inequality with some coefficient $\kappa > 0$, and for which the extremum seeking controller guarantees a fixed upper bound on the convergence time that is independent of the initial conditions but dependent on the coefficient $\kappa$. Second, in order to remove the dependence on $\kappa$, we introduce a novel Newton-based extremum seeking algorithm that guarantees a fully assignable fixed upper bound on the convergence time, thus paralleling existing asymptotic results in Newton-based extremum seeking where the rate of convergence is fully assignable. Finally, we study the problem of optimizing dynamical systems, where the cost function corresponds to the steady-state input-to-output map of a stable but unknown dynamical system. In this case, after a time scale transformation is performed, the proposed extremum seeking controllers achieve the same fixed upper bound on the convergence time as in the static case. Our results exploit recent gradient flow structures proposed by Garg and Panagou in [3], and are established by using averaging theory and singular perturbation theory for dynamical systems that are not necessarily Lipschitz continuous. We confirm the validity of our results via numerical simulations that illustrate the key advantages of the extremum seeking controllers presented in this paper.
\end{abstract}
\section{Introduction}
\label{Sec:introduction}
\PARstart{I}{n} several applications it is of interest to recursively minimize a particular cost function whose mathematical form is unknown and which is only accessible via measurements. For these types of problems, extremum seeking control (ESC) has been shown to be a powerful technique with provable stability, convergence, and robustness guarantees \cite{KrsticBookESC,DerivativesESC,Moase,Durr:Lie}. The main underlying idea behind extremum seeking control is to induce multiple time scales in the dynamics of the closed-loop system in order to emulate the behavior of a target nominal optimization algorithm chosen a priori to solve a particular type of optimization problem. Typical target nominal algorithms include gradient descent for convex optimization \cite{tan06Auto,GrushkovskayaLie}, Newton-like methods \cite{PowerES}, Riemannian gradient flows for constrained optimization \cite{Poveda:15}, gradient descent with momentum \cite{HeavyBollES}, accelerated gradient descent \cite{zero_order_poveda_Lina}, hybrid gradient descent on manifolds \cite{Strizic:17_CDC}, gradient and Newton flows with delays \cite{ThiagoKrstic}, gradient descent with time-varying costs \cite{Grushkovskaya2017}, distributed gradient systems for multi-agent systems \cite{Suttner2017}, and switched gradient flows modeled as hybrid systems \cite{Poveda:16}. Other ES approaches based on parameter estimation in adaptive control have also been considered in \cite{Guay:03,Guay:15,AttaGuay18,PhasorGuay}. Modifications to improve accuracy in ES were also presented in \cite{improvedES}.
The previous approaches have been successfully used in several engineering applications such as wind turbine control and power converter optimization \cite{PowerES}, resource allocation problems \cite{Poveda:15}, optimization of robotic systems \cite{Fish_ESC}, price seeking in dynamic markets \cite{Frihauf:12}, dynamic tolling for transportation systems \cite{PovedaCDC17_a}, and traffic light control \cite{Kutadinata:14_Traffic}, to name just a few. However, even though significant progress has been made in the analysis and design of ESC in recent years, obtaining a good transient performance characterized by fast rates of convergence remains a persistent challenge. This issue has recently motivated the development of fast ESCs based on Newton-like flows \cite{PowerES}, gradient algorithms with time-invariant momentum \cite{HeavyBollES}, discontinuous gradient descent with finite-time stability properties \cite[Sec. 6.1]{Poveda:16}, and hybrid gradient flows with momentum resetting \cite{zero_order_poveda_Lina}. An early use of non-smooth feedback in extremum seeking can be found in control laws (48) and (55) in \cite{NonC2Krstic}, as well as finite-time (initial condition-dependent) convergence in Figures 2, 4, and 6 of \cite{NonC2Krstic}. Nevertheless, while these recent approaches can significantly improve the transient performance of the system, the convergence properties of all these ESCs are still of (practical) \emph{asymptotic} nature in the sense that for each small neighborhood $\mathcal{N}_{\varepsilon}$ that contains the set of optimizers, and for each compact set of initial conditions $K_0$, the controller can be tuned to guarantee convergence to $\mathcal{N}_{\varepsilon}$ in finite time, but, in general, as $\mathcal{N}_{\varepsilon}$ shrinks and $K_0$ grows, the convergence time grows unbounded.
On the other hand, recently there have been significant efforts in designing and analyzing control, estimation, and optimization algorithms with non-asymptotic convergence properties. These algorithms guarantee convergence to the desired target in a \emph{fixed time} that is finite and independent of the initial conditions. Algorithms with fixed-time stability properties have been recently studied in \cite{JaimeFixed_Time,finite_timeEngelTAC,Fixed_timeTAC,OldFinite_Time,PralyFinite_Time,fixed_time} and \cite{romeronips}. As shown in \cite{Fixed_timeTAC} and \cite{PralyFinite_Time}, this type of convergence property can be established via Lyapunov functions for a class of continuous-time dynamical systems, which has opened the door to novel optimization algorithms characterized by continuous vector fields with non-asymptotic convergence properties; e.g., \cite{fixed_time} and \cite{Garg_Inequalities}. However, to the best of our knowledge, all existing optimization algorithms with fixed-time stability properties are model-based and require access to the first or second derivatives of the cost functions.
\vspace{-0.2cm}
\subsection{Contributions}
In this paper, we present a new class of extremum seeking dynamics able to achieve fixed-time (semi-global practical) convergence in static and dynamical systems. This type of convergence guarantees that a fixed upper bound on the convergence time can be prescribed a priori independently of the initial conditions. More precisely, the contributions of this paper are the following:
\begin{enumerate}
\item We present a novel fixed-time gradient-based extremum seeking (FTGES) algorithm for cost functions described by smooth static maps that attain their minima and satisfy the Polyak-Lojasiewicz (PL) inequality with some coefficient $\kappa>0$. For any function satisfying these properties, we establish the existence of tunable parameters that guarantee (semi-global practical) fixed-time convergence to a neighborhood of the optimizers, with an upper bound on the convergence time that depends on the coefficient $\kappa$. This establishes a clear advantage in comparison to existing accelerated and non-accelerated gradient-based extremum seeking controllers, for which the convergence time grows unbounded with unbounded initial conditions of the optimizing state.
\item In order to remove the dependence of the convergence time on the parameters of the cost function, we introduce a new fixed-time Newton-based extremum seeking algorithm (FTNES), for which the upper bound in the convergence time is completely assignable a priori. Given that Newton-based ESCs carry out the estimation of the inverse of the Hessian matrix of the cost function by using a Riccati differential equation that has multiple equilibria, our convergence results are local by nature. However, unlike existing results in the literature, the convergence time to an arbitrarily small neighborhood of the optimizer can be upper bounded by a positive number that is independent of the initial conditions of the optimizing state and the Hessian of the cost function, and which can be prescribed a priori by the designer. This exhibits a clear advantage in comparison to traditional Newton-based extremum seeking schemes whose convergence times depend on the initial conditions.
\item We extend our previous results to dynamic settings where the cost function corresponds to the steady-state input-to-output mapping of a stable dynamical system. In this case, we show that equivalent fixed-time convergence results can be obtained after a time scale transformation is performed. In turn, this implies that in the original time scale the fixed upper bound is scaled by the inverse of a gain of the extremum seeking controller that needs to be selected sufficiently small in order to guarantee stability of the closed-loop system.
\item Finally, we show that, under a certain choice of parameters, the proposed algorithms reduce to the standard gradient-based and Newton-based extremum seeking dynamics, and we recover the existing asymptotic results found in the literature of ESC. In this way, the dynamics introduced in this paper can be seen as generalizations of the standard gradient-based and Newton-based ESC.
\end{enumerate}
In order to analyze the extremum seeking dynamics considered in this paper, we make use of averaging theorems developed for non-smooth and set-valued systems \cite{Wang:12_Automatica,averaging_singularHDS,zero_order_poveda_Lina}. This allows us to link the $\mathcal{K}\mathcal{L}$ bound of the average system with the $\mathcal{K}\mathcal{L}$ bound that characterizes the properties of the extremum seeking dynamics, thus establishing a semi-global practical fixed-time convergence property. We illustrate the performance of our ESCs via simulations in different scenarios of single-variable and multi-variable optimization problems, comparing the trajectories generated by the algorithm with the trajectories generated by the standard vanilla gradient-based and Newton-based ESCs of \cite{DerivativesESC} and \cite{Newton}, respectively. To the knowledge of the authors, our results correspond to the first averaging-based extremum seeking algorithms with fixed-time convergence properties for static maps and dynamical systems.
\subsection{Additional contributions with respect to the conference submissions \cite{ACCpovedaKrstic} and \cite{PovedaKrsticIFACWC20}}
In contrast to the preliminary results presented in the conference submissions \cite{ACCpovedaKrstic} and \cite{PovedaKrsticIFACWC20}, in this journal paper we present the complete stability proofs of the algorithms by exploiting continuity of the vector fields. We also present a detailed analysis of the existence of complete solutions, as well as novel closed-loop architectures, stability results, and numerical examples for fixed-time gradient-based and Newton-based extremum seeking controllers applied to \emph{dynamical systems}. We further establish connections with previous results in the literature, that illustrate the advantages and generality of the proposed algorithms.
\subsection{Organization}
The rest of this paper is organized as follows: Section \ref{Sec2} introduces the notation and some preliminaries on dynamical systems. Sections \ref{Sec_Gradient} and \ref{Sec_Newton} present the fixed-time gradient-based and Newton-based extremum seeking algorithms, respectively, as well as their main convergence properties and stability analysis. Section \ref{Sec_Dynamic} presents the results of the extremum seeking algorithms applied to dynamical systems, and finally Section \ref{Sec_Conclusions} ends with the outlook and some conclusions. Our theoretical results are illustrated throughout the paper by means of numerical examples.
\section{Preliminaries}
\label{Sec2}
\subsection{Notation}
We denote by $\mathbb{R}$ ($\mathbb{R}_{>0}$) the set of real numbers (resp. positive real numbers), and by $\mathbb{Z}$ ($\mathbb{Z}_{>0}$) the set of integers (resp. positive integers). Given a compact set $\mathcal{A}\subset\mathbb{R}^n$ and a vector $z\in\mathbb{R}^n$, we use $|z|_{\mathcal{A}}:=\min_{s\in\mathcal{A}}\|z-s\|_2$ to denote the minimum distance of $z$ to $\mathcal{A}$. We use $\mathbb{S}^1:=\{z\in\mathbb{R}^2:z^2_1+z_2^2=1\}$ to denote the unit circle in $\mathbb{R}^2$, and $r\mathbb{B}$ to denote a closed ball in the Euclidean space, of radius $r>0$, and centered at the origin. A function $\alpha:\mathbb{R}_{\geq0}\to\mathbb{R}_{\geq0}$ is of class $\mathcal{K}_{\infty}$ if it is zero at zero, continuous, strictly increasing, and unbounded. A function $\beta:\mathbb{R}_{\geq0}\times\mathbb{R}_{\geq0}\to\mathbb{R}_{\geq0}$ is of class $\mathcal{K}\mathcal{L}$ if it is nondecreasing in its first argument, nonincreasing in its second argument, $\lim_{r\to0^+}\beta(r,s)=0$ for each $s\in\mathbb{R}_{\geq0}$, and $\lim_{s\to\infty}\beta(r,s)=0$ for each $r\in\mathbb{R}_{\geq0}$.
We define the matrix $\mathcal{D}\in\mathbb{R}^{n\times{2n}}$ as the binary matrix that maps a vector $z=[z_1,z_2,z_3,\ldots,z_{2n}]^\top\in\mathbb{R}^{2n}$ to a vector $\tilde{z}$ having only the odd components of $z$, i.e., $\tilde{z}=\mathcal{D}z=[z_1,z_3,z_5,\ldots,z_{2n-1}]^\top$.
\subsection{Dynamical Systems and Stability Notions}
In this paper, we consider constrained dynamical systems with state $x\in\mathbb{R}^n$ and dynamics of the form
\begin{equation}\label{ODE}
x\in C,~~~~\dot{x}=F(x),
\end{equation}
where $F:\mathbb{R}^n\to\mathbb{R}^n$ is a continuous function and $C\subset\mathbb{R}^n$ is a closed set. A solution to system \eqref{ODE} is a continuously differentiable function $x:\text{dom}(x)\to\mathbb{R}^n$ that satisfies: a) $x(0)\in C$; b) $x(t)\in C$ for all $t\in\text{dom}(x)$; and c) $\dot{x}(t)=F(x(t))$ for all $t\in\text{dom}(x)$. A solution is said to be complete if $\text{dom}(x)=[0,\infty)$. Given a compact set $\mathcal{A}\subset C$, system \eqref{ODE} is said to render $\mathcal{A}$ uniformly globally asymptotically stable (UGAS) if there exists a class $\mathcal{K}\mathcal{L}$ function $\beta$ such that every solution of \eqref{ODE} satisfies
\begin{equation}\label{KL_bound}
|x(t)|_{\mathcal{A}}\leq \beta(|x(0)|_{\mathcal{A}},t),~~~\forall~t\in\text{dom}(x).
\end{equation}
In this paper we will also consider $\varepsilon$-perturbed or parameterized dynamical systems of the form
\begin{equation}\label{perturbed_ode}
x\in C_{\varepsilon},~~~\dot{x}=f_{\varepsilon}(x),
\end{equation}
whose stability properties can be established only in a semi-global practical way. In particular, a compact set $\mathcal{A}\subset C$ is said to be semi-globally practically asymptotically stable (SGPAS) as $\varepsilon\to0^+$ if there exists a class $\mathcal{K}\mathcal{L}$ function $\beta$ such that for each pair $\delta>\nu>0$ there exists $\varepsilon^*>0$ such that for all $\varepsilon\in(0,\varepsilon^*)$ every solution of \eqref{perturbed_ode} with $|x(0)|_{\mathcal{A}}\leq \delta$ satisfies
\begin{equation}
|x(t)|_{\mathcal{A}}\leq \beta(|x(0)|_{\mathcal{A}},t)+\nu,~~~\forall~t\in\text{dom}(x).
\end{equation}
The notion of SGPAS can be extended to systems that depend on multiple parameters $\varepsilon=[\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_{\ell}]^\top$. In this case, and with some abuse of notation, we say that the system \eqref{perturbed_ode} renders the set $\mathcal{A}$ SGPAS as $(\varepsilon_{\ell},\ldots,\varepsilon_2,\varepsilon_{1})\to0^+$, where the parameters are tuned in order starting from $\varepsilon_1$, i.e., the parameters $(\varepsilon_{\ell},\ldots,\varepsilon_2,\varepsilon_{1})$ may not be selected independently. This type of stability notion is standard in extremum seeking control, see \cite{DerivativesESC,Poveda:16}.
\subsection{Dynamic Oscillators}
Our model-free optimization algorithms make use of several sinusoidal signals that facilitate the extraction of gradient- and Hessian-related information of the cost function. In order to model these excitation signals, we consider $n$ uncoupled linear dynamic oscillators evolving on the $n$-torus $\mathbb{S}^n=\mathbb{S}^1\times\mathbb{S}^1\times\ldots\times\mathbb{S}^1\subset\mathbb{R}^{2n}$ with overall state $\mu\in \mathbb{R}^{2n}$ and dynamics
\begin{equation}\label{oscillator}
\mu\in \mathbb{S}^n,~~~\dot{\mu}=-\frac{2\pi}{\varepsilon_1}\mathcal{R}_{\kappa}\mu,
\end{equation}
where $\varepsilon_1>0$. The matrix $\mathcal{R}_{\kappa}\in\mathbb{R}^{2n\times 2n}$ is block diagonal and parametrized by a vector of gains $\kappa=[\kappa_1,\kappa_2,\ldots,\kappa_n]^\top$. The $i^{th}$ diagonal block of $\mathcal{R}_{\kappa}$ is defined as
\begin{equation*}
\mathcal{R}_i:=\left[\begin{array}{cc}
0 & -\kappa_i\\
\kappa_i & 0
\end{array}\right],
\end{equation*}
where $\kappa_i>0$. Since the $n$ oscillators are uncoupled, the odd entries $\mu_i$ of the solutions $\mu$ generated by \eqref{oscillator} with $\mu(0)\in \mathbb{S}^n$ are given by
\begin{equation}\label{solutions_oscillator}
\mu_i(t)=\mu_{i}(0)\cos\left(\frac{2\pi}{\varepsilon_1} \kappa_it\right)+\mu_{i+1}(0)\sin\left(\frac{2\pi}{\varepsilon_1} \kappa_it\right),
\end{equation}
with $\mu_{i}(0)^2+\mu_{i+1}(0)^2=1$, for all $i\in\{1,3,5,\ldots,2n-1\}$. Indeed, by the structure of the oscillators, the set $\mathbb{S}^n$ is forward invariant, i.e., if $\mu(0)\in \mathbb{S}^n$ then $\mu(t)\in\mathbb{S}^n$ for all $t\geq0$. Moreover, since no solution of \eqref{oscillator} is defined outside the $n$-torus, the set $\mathbb{S}^n$ is trivially globally attractive. Therefore, system \eqref{oscillator} renders the set $\mathbb{S}^n$ UGAS. This property will facilitate the stability analysis of our algorithms via averaging theory for non-smooth systems \cite{Wang:12_Automatica}.
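These properties can be checked numerically. The sketch below (illustrative values: $\varepsilon_1=1$, $\kappa=(1,2)$) integrates \eqref{oscillator} with a Runge-Kutta scheme and verifies that each oscillator pair stays on the unit circle and returns to its initial condition after one common period:

```python
import numpy as np

def R_kappa(kappa):
    """Block-diagonal generator; the i-th 2x2 block is [[0, -k_i], [k_i, 0]]."""
    n = len(kappa)
    R = np.zeros((2 * n, 2 * n))
    for i, k in enumerate(kappa):
        R[2 * i, 2 * i + 1] = -k
        R[2 * i + 1, 2 * i] = k
    return R

eps1, kappa = 1.0, [1.0, 2.0]
A = -(2.0 * np.pi / eps1) * R_kappa(kappa)

def rk4_step(mu, h):
    # classical fourth-order Runge-Kutta step for mu' = A mu
    k1 = A @ mu
    k2 = A @ (mu + 0.5 * h * k1)
    k3 = A @ (mu + 0.5 * h * k2)
    k4 = A @ (mu + h * k3)
    return mu + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

mu = np.array([1.0, 0.0, 0.0, 1.0])   # each pair starts on the unit circle
h = 1e-4
for _ in range(10000):                 # integrate up to t = 1, one common period
    mu = rk4_step(mu, h)
```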
\section{Gradient-Based Fixed-Time Extremum Seeking for Static Maps}
\label{Sec_Gradient}
We start by considering the fixed-time extremum seeking problem for static maps using a gradient descent-based architecture. In particular, we consider the following unconstrained optimization problem
\begin{equation}\label{main_problem}
\min_{z\in\mathbb{R}^n}~~\phi(z),
\end{equation}
where $\phi:\mathbb{R}^n\to\mathbb{R}$ is an unknown function that satisfies $\inf_{z\in\mathbb{R}^n}\phi(z)>-\infty$. Our goal is to design a robust feedback-based optimization algorithm that steers the state $z$ to a neighborhood of the set of solutions of \eqref{main_problem} in a fixed time, by using only measurements of $\phi$. Since algorithms of this form have no access to the first or second derivatives of $\phi$, they are usually referred to as \emph{zero-order methods} or extremum seeking controllers.
\subsection{Qualitative Properties of the Cost Functions}
In order to solve problem \eqref{main_problem}, we start by considering cost functions that satisfy the following assumption:
\begin{assumption}\label{assumption_1}
The function $\phi:\mathbb{R}^n\to\mathbb{R}$ is twice continuously differentiable, radially unbounded, has a unique minimizer $z^*\in\mathbb{R}^n$, and there exists a constant $\kappa>0$ such that the function $\phi$ satisfies the Polyak-Lojasiewicz (PL) inequality:
\begin{equation}\label{condition2}
\phi(z)-\phi^*\leq \frac{1}{2\kappa}|\nabla \phi(z)|^2,
\end{equation}
for all $z\in\mathbb{R}^n$, where $\phi^*:=\phi(z^*)$. \QEDB
\end{assumption}
The PL inequality \eqref{condition2} is satisfied by any strongly convex function, i.e., any function $\phi$ that satisfies
\begin{equation*}
\phi(z_1)\geq \phi(z_2)+\nabla \phi(z_2)^\top(z_1-z_2)+\frac{\kappa}{2}|z_1-z_2|^2,
\end{equation*}
for all $z_1,z_2$. Twice continuously differentiable strongly convex functions also satisfy $\nabla^2 \phi(z)\geq \kappa I$, for all $z\in\mathbb{R}^n$, see \cite[Exercise 7.26]{Beck2014}. Since under Assumption \ref{assumption_1} the function $\phi$ is radially unbounded, all the nonempty level sets of $\phi$ are compact.
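For instance (a worked example given for illustration), the strongly convex quadratic $\phi(z)=\frac{\kappa}{2}|z-z^*|^2$ satisfies \eqref{condition2} with equality:
\begin{equation*}
\phi(z)-\phi^*=\frac{\kappa}{2}|z-z^*|^2~~~\text{and}~~~\nabla\phi(z)=\kappa(z-z^*)~~\Longrightarrow~~\frac{1}{2\kappa}|\nabla \phi(z)|^2=\frac{\kappa}{2}|z-z^*|^2=\phi(z)-\phi^*.
\end{equation*}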
\subsection{Gradient-Based Fixed-Time Dynamics}
In order to solve problem \eqref{main_problem} using only measurements of $\phi$, we consider a Fixed-Time Gradient Extremum Seeking (FTGES) algorithm with state $(u,\xi,\mu)\in\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^{2n}$, evolving on the set
\begin{equation}\label{flowset1}
(u,\xi,\mu)\in C:=\mathbb{R}^n\times \eta\mathbb{B} \times \mathbb{S}^n,
\end{equation}
with dynamics
\begin{align}\label{ES_dynamics1}
\left(\begin{array}{c}
\dot{u}\\\\
\dot{\xi}\\\\
\dot{\mu}
\end{array}\right)=-\left(\begin{array}{c}
k\left(\dfrac{\xi}{|\xi|^{\alpha_1}}+\dfrac{\xi}{|\xi|^{\alpha_2}}\right)\\
\dfrac{1}{\varepsilon_2}\Big(\xi-F_G(\phi,\mu)\Big)\\
\dfrac{2\pi}{\varepsilon_1}\mathcal{R}_{\kappa}\mu,
\end{array}\right),
\end{align}
where the right-hand side of $\dot{u}$ is defined as zero whenever $\xi=0$. The mapping $F_G$ is given by
\begin{equation}\label{mappingG}
F_{G}(\phi,\mu):=\phi(z)M(\mu),
\end{equation}
and where the argument of $\phi(\cdot)$, and the signal $M(\mu)$, are defined as
\begin{equation}\label{input}
z:=u+a\mathcal{D}\mu,~~~~~M(\mu):=\frac{2}{a}\mathcal{D}\mu,
\end{equation}
with $a\in\mathbb{R}_{>0}$ being a tunable parameter. The constants $\alpha_1$ and $\alpha_2$ are defined as
\begin{equation}\label{alphaconstants}
\alpha_1:=\frac{q_1-2}{q_1-1},~~~~\alpha_2:=\frac{q_2-2}{q_2-1},
\end{equation}
where $(q_1,q_2)\in\mathbb{R}_{>0}^2$ are tunable parameters that are said to be \emph{admissible} if they satisfy
\begin{equation*}
q_1\in(2,\infty)~~~\text{and}~~~~q_2\in(1,2).
\end{equation*}
In particular, admissible parameters $(q_1,q_2)$ guarantee that $\alpha_1$ is positive and $\alpha_2$ is negative. Figure \ref{figure11} shows a scheme illustrating the FTGES dynamics.
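As a quick numerical sanity check (with illustrative parameter values), the sign conditions on $\alpha_1$ and $\alpha_2$ can be verified for a sample admissible pair:

```python
def alphas(q1, q2):
    """alpha_i = (q_i - 2) / (q_i - 1), cf. the definitions above."""
    return (q1 - 2) / (q1 - 1), (q2 - 2) / (q2 - 1)

# q1 = 3 in (2, inf) and q2 = 1.5 in (1, 2) form an admissible pair
a1, a2 = alphas(3.0, 1.5)
# a1 = 0.5 > 0 and a2 = -1.0 < 0, so both 1 - a1 and 1 - a2 are positive
```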
The proposed algorithm defines an extremum seeking controller with excitation signals $\mu$ generated by a linear oscillator, and a constrained low-pass filter with state $\xi$ evolving on a pre-defined compact set $\eta\mathbb{B}$. In practice, this compact set can be taken arbitrarily large by choosing a large $\eta$, and it is used only for the purpose of analysis in order to apply averaging results for nonsmooth singularly perturbed systems. In \eqref{ES_dynamics1}, the optimizing state $u$ is governed by learning dynamics that are designed to approximate a gradient flow with fixed-time convergence properties \cite{fixed_time}. However, said approximation requires suitable tuning of the parameters $(\varepsilon_1,\varepsilon_2, k, q_1,q_2, \kappa,a)$ that characterize the closed-loop system. The following assumption provides some general tuning guidelines.
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{SchemeGradient3-eps-converted-to.pdf}
\caption{\small Scheme of the Fixed-Time Gradient-based Extremum Seeking (FTGES) algorithm for a static map $\phi$.}
\label{figure11}
\end{figure}
\begin{assumption}\label{assumptionFTGES}
The parameters of the FTGES dynamics satisfy the following conditions:
\begin{enumerate}
\item The parameters $(q_1,q_2)$ are admissible.
\item The gain $k$ is positive, and $0<a\ll 1$.
\item For each $i\in\{1,2,\ldots,n\}$ the parameter $\kappa_i$ is a positive rational number, and $\kappa_i\neq \kappa_j$ for all $i\neq j$.
\item The parameters $(\varepsilon_1,\varepsilon_2)$ satisfy $0<\varepsilon_1\ll \varepsilon_2\ll 1$.
\end{enumerate}
\QEDB
\end{assumption}
Under Assumption \ref{assumptionFTGES}, the vector field describing the FTGES dynamics is continuous. In particular, since $1-\alpha_1>0$ and $1-\alpha_2>0$ we have that
\begin{align*}
\left|\lim_{\xi\to0}\frac{\xi}{|\xi|^{\alpha_1}}\right|&=\lim_{\xi\to0}\left|\frac{\xi}{|\xi|^{\alpha_1}}\right|
\leq \lim_{\xi\to0} |\xi|^{1-\alpha_1}=0.
\end{align*}
Similarly,
\begin{align*}
\left|\lim_{\xi\to0}\frac{\xi}{|\xi|^{\alpha_2}}\right|&=\lim_{\xi\to0}\left|\frac{\xi}{|\xi|^{\alpha_2}}\right|\leq \lim_{\xi\to0} |\xi|^{1-\alpha_2}=0.
\end{align*}
Therefore, the dynamics of the state $u$ are continuous at $\xi=0$. Continuity at points satisfying $\xi\neq0$ follows trivially by the structure of the right-hand side of \eqref{ES_dynamics1}. Existence of solutions follows then directly by \cite[Lemma 5.26]{HDS}.
The behavior of the FTGES can be roughly explained as follows: The dynamic oscillator generates sinusoidal signals of the form \eqref{solutions_oscillator} with frequency proportional to $1/\varepsilon_1$. Thus, for values of $\varepsilon_1$ sufficiently small, the dynamics of $u$ and $\xi$ can be analyzed via averaging theory. By the construction of $F_G$ in \eqref{mappingG}, the average dynamics of $\xi$ will receive as input a perturbed estimate of the gradient of $\phi$, which is then fed to the dynamics of $u$ in order to carry out the optimization process. Provided $\varepsilon_2$ is sufficiently small, the qualitative behavior of the dynamics of $u$ can then be approximated by a simplified reduced system obtained via singular perturbation theory. Thus, under an appropriate tuning of the parameters $(\varepsilon_1,\varepsilon_2)$, the FTGES dynamics exhibit behavior on three time scales, with the oscillator dynamics operating at the fastest time scale, and the optimization dynamics of $u$ operating at the slowest time scale. However, in contrast to existing smooth extremum seeking controllers, the FTGES dynamics cannot be analyzed using averaging theory and singular perturbation theory for Lipschitz continuous systems. Instead, we will use generalized averaging results developed for set-valued systems and hybrid dynamical systems with possibly non-unique solutions \cite{Wang:12_Automatica,zero_order_poveda_Lina}, which for ODEs only require continuity of the vector field.
\subsection{Main Result for the FTGES}
In order to state the main convergence result for the FTGES dynamics \eqref{ES_dynamics1}, we define for each admissible pair of parameters $(q_1,q_2)$ and each constant $\kappa$ satisfying the PL inequality \eqref{condition2}, the positive constants
\begin{equation}\label{gamma1}
\gamma_1=2^{\frac{8-3\alpha_1}{4}}\kappa^{\frac{2-\alpha_1}{2}},~~~\gamma_2=2^{\frac{8-3\alpha_2}{4}}\kappa^{\frac{2-\alpha_2}{2}},
\end{equation}
where $\alpha_1$ and $\alpha_2$ are defined in \eqref{alphaconstants}. Using \eqref{gamma1}, for each gain $k>0$, we define the value
\begin{equation}\label{fixed_time1}
T_G^*:=\frac{4}{k}\left(\frac{1}{\gamma_1\alpha_1}-\frac{1}{\gamma_2\alpha_2}\right).
\end{equation}
Since $(q_1,q_2)$ are admissible, the term inside the parenthesis is positive. Thus, for any desired $T_G^*>0$ equation \eqref{fixed_time1} can be satisfied by selecting the gain $k$ as
\begin{equation}\label{design_k}
k=\frac{4}{T_G^*}\left(\frac{1}{\gamma_1\alpha_1}-\frac{1}{\gamma_2\alpha_2}\right).
\end{equation}
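The design of $k$ via \eqref{design_k} can be sketched numerically (illustrative values; $\kappa$ is the PL constant of Assumption \ref{assumption_1}):

```python
def gamma(kappa, alpha):
    """gamma from eq. (gamma1): 2^{(8 - 3a)/4} * kappa^{(2 - a)/2}."""
    return 2 ** ((8 - 3 * alpha) / 4) * kappa ** ((2 - alpha) / 2)

kappa = 1.0
a1, a2 = 0.5, -1.0                 # from the admissible pair q1 = 3, q2 = 1.5
g1, g2 = gamma(kappa, a1), gamma(kappa, a2)

T_target = 2.0
# gain selection, eq. (design_k); the parenthesis is positive since a1 > 0 > a2
k = (4.0 / T_target) * (1.0 / (g1 * a1) - 1.0 / (g2 * a2))

# plugging k back into eq. (fixed_time1) recovers the prescribed time
T_check = (4.0 / k) * (1.0 / (g1 * a1) - 1.0 / (g2 * a2))
```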
We are now ready to state the first main result of this paper.
\vspace{0.1cm}
\begin{thm}\label{theorem1}
Suppose that $\phi$ satisfies Assumption \ref{assumption_1} and that Assumption \ref{assumptionFTGES} holds. Then, for any $T_G^*>0$ there exist admissible parameters $(q_1,q_2)$ and a gain $k>0$ such that: for each pair $\delta>\nu>0$ there exist $\eta>0$ and $\varepsilon_2^*>0$ such that for each $\varepsilon_2\in(0,\varepsilon_2^*)$ there exists $a^*>0$ such that for each $a\in(0,a^*)$ there exists $\varepsilon_1^*>0$ such that for each $\varepsilon_1\in(0,\varepsilon_1^*)$ the FTGES dynamics \eqref{flowset1}-\eqref{ES_dynamics1} with $(u(0),\xi(0),\mu(0))\in \left(\{z^*\}+\delta\mathbb{B}\right)\times \eta\mathbb{B}\times\mathbb{S}^n$ generate solutions with unbounded time domain, and each of these solutions satisfies $|z(t)-z^*|\leq \nu$, for all $t\geq T_G^*$. \QEDB
\end{thm}
\textsl{Proof:} In order to analyze the FTGES dynamics, we will use averaging tools for dynamical systems that are not necessarily Lipschitz continuous, e.g., \cite{Wang:12_Automatica,zero_order_poveda_Lina}. A key feature of these tools is that the $\mathcal{K}\mathcal{L}$ bound that characterizes the rate of convergence of the ``slow state'' in a singularly perturbed system is completely characterized by the $\mathcal{K}\mathcal{L}$ bound of its \emph{average system}. Based on this, we will divide the proof into two steps. First, by studying the stability properties of the compact set $\mathcal{A}\times \mathbb{S}^n$, where $\mathcal{A}:=\{z^*\}\times \eta\mathbb{B}$, we will establish an SGPAS result for the FTGES dynamics, which preserves the $\mathcal{K}\mathcal{L}$ bound of the average dynamics in the slowest time scale. After this, we will exploit the SGPAS result and the linearity of the low-pass filter dynamics to show that the restriction of $\xi$ to the compact set $\eta\mathbb{B}$ is inconsequential for compact sets of initial conditions $L\mathbb{B}$ whenever $\eta\gg L$ is sufficiently large.
\vspace{0.1cm}
\textsl{Step 1:} We start with the following lemma, which follows directly by computation:
\begin{lemma}\label{properties_oscillator}
Under item 3) of Assumption \ref{assumptionFTGES} there exists a $T>0$ such that every solution $\mu$ of the oscillator \eqref{oscillator} with $\varepsilon_1=1$ satisfies
\begin{subequations}\label{orthogonality}
\begin{align}
&\frac{1}{\ell T}\int_{0}^{\ell T}\tilde{\mu}(t)\tilde{\mu}(t)^\top dt=\frac{1}{2}I_n,\\
&\frac{1}{\ell T}\int_{0}^{\ell T}\tilde{\mu}(t)dt=0.
\end{align}
\end{subequations}
for all $\ell\in\mathbb{Z}_{>0}$, where $\tilde{\mu}=\mathcal{D}\mu$. \QEDB
\end{lemma}
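Numerically, the averaging identities \eqref{orthogonality} can be checked along a particular solution (a sketch with illustrative gains $\kappa=(1,2,3)$, $\varepsilon_1=1$, initial condition $\mu_i(0)=1$, $\mu_{i+1}(0)=0$, and common period $T=1$):

```python
import numpy as np

kappa = [1.0, 2.0, 3.0]        # distinct positive rationals; common period T = 1
T, N = 1.0, 200000
t = np.arange(N) * (T / N)     # uniform grid over one full period

# odd components of mu: mu_i(t) = cos(2*pi*kappa_i*t) for this initial condition
mu_tilde = np.array([np.cos(2 * np.pi * k * t) for k in kappa])

# rectangle-rule approximations of (1/T) * int_0^T (.) dt over one period
outer_avg = (mu_tilde @ mu_tilde.T) / N    # should equal (1/2) * I_n
mean_avg = mu_tilde.mean(axis=1)           # should equal the zero vector
```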
By using the properties \eqref{orthogonality}, we can average the dynamics of $u$ and $\xi$ along the solutions $\mu$ of the oscillator. To find the average dynamics of system \eqref{ES_dynamics1}, and for $a>0$ sufficiently small, consider a Taylor expansion of $\phi(u+a\tilde{\mu})$ around $u$:
\begin{equation*}
\phi(u+a\tilde{\mu})=\phi(u)+a\tilde{\mu}^\top \nabla\phi(u)+O(a^2),
\end{equation*}
where we used $\tilde{\mu}:=\mathcal{D}\mu$ to shorten notation. Substituting in \eqref{mappingG} and using the properties \eqref{orthogonality}, we obtain the following average dynamics with state $(u^a,\xi^a)$:
\begin{align}\label{ES_dynamics_average}
\left(\begin{array}{c}
\dot{u}^a\\\\
\dot{\xi}^a
\end{array}\right)=-\left(\begin{array}{c}
k\left(\dfrac{\xi^a}{|\xi^a|^{\alpha_1}}+\dfrac{\xi^a}{|\xi^a|^{\alpha_2}}\right)\\
\dfrac{1}{\varepsilon_2}\Big(\xi^a-\tilde{F}_G(\nabla \phi)\Big)
\end{array}\right)
\end{align}
which are defined in the set
\begin{equation*}
(u^a,\xi^a)\in \mathbb{R}^n\times \eta\mathbb{B}
\end{equation*}
and where the function $\tilde{F}_G(\nabla \phi)$ is given by
\begin{equation*}
\tilde{F}_{G}(\nabla \phi):=\nabla \phi(u^a)+O(a).
\end{equation*}
System \eqref{ES_dynamics_average} is an $O(a)$-perturbed version of a \emph{nominal average} system with dynamics
\begin{align}\label{ES_dynamics_average_nominal}
\left(\begin{array}{c}
\dot{u}^a\\\\
\dot{\xi}^a
\end{array}\right)=-\left(\begin{array}{c}
k\left(\dfrac{\xi^a}{|\xi^a|^{\alpha_1}}+\dfrac{\xi^a}{|\xi^a|^{\alpha_2}}\right)\\
\dfrac{1}{\varepsilon_2}\big(\xi^a-\nabla \phi(u^a)\big)
\end{array}\right).
\end{align}
Indeed, since system \eqref{ES_dynamics_average_nominal} is continuous, by \cite[Thm. 7.21]{HDS} suitable stability properties will be preserved in a semi-global practical way provided $a$ is sufficiently small. Therefore, we proceed to analyze the stability and convergence properties of the nominal system \eqref{ES_dynamics_average_nominal}.
For $\varepsilon_2$ sufficiently small, system \eqref{ES_dynamics_average_nominal} is in singular perturbation form, with the dynamics of $\xi^a$ acting as fast dynamics. To find the boundary layer dynamics of this system, we introduce the new time scale $\tau=t/\varepsilon_2$, which leads to
\begin{align*}
\left(\begin{array}{c}
\dfrac{\partial u^a}{\partial \tau}\vspace{0.1cm}\\
\dfrac{\partial \xi^a}{\partial \tau}
\end{array}\right)=-\left(\begin{array}{c}
\varepsilon_2k\left(\dfrac{\xi^a}{|\xi^a|^{\alpha_1}}+\dfrac{\xi^a}{|\xi^a|^{\alpha_2}}\right)\\
\xi^a-\nabla \phi(u^a)
\end{array}\right).
\end{align*}
The boundary layer dynamics are obtained by setting $\varepsilon_2=0$, that is $\frac{\partial u_{bl}^a}{\partial \tau}=0$ and
\begin{align}\label{ES_dynamics_average_bl}
\xi_{bl}\in\eta\mathbb{B},~~~\dfrac{\partial \xi_{bl}^a}{\partial \tau}=-\Big(\xi_{bl}^a-\nabla \phi(u_{bl}^a)\Big).
\end{align}
For each fixed $u_{bl}$, i.e., $\dot{u}_{bl}=0$, the solutions of this system satisfy
\begin{equation*}
\big|\xi_{bl}^{a}(t)-\nabla\phi(u_{bl}^a)\big|\leq m_1 e^{-m_2 t}\big|\xi_{bl}^{a}(0)-\nabla\phi(u_{bl}^a)\big|,
\end{equation*}
for all $t\in\text{dom}(\xi_{bl},u_{bl})$, for some $m_1,m_2>0$. Thus, the singularly perturbed system \eqref{ES_dynamics_average_nominal} has a well-defined reduced system \cite[Ex. 1]{Wang:12_Automatica}. This reduced system is obtained by substituting $\xi^a$ by its steady-state value $\nabla \phi(u^a)$, which leads to the following unconstrained dynamics with state $u_r$:
\begin{align}\label{ES_dynamics_average_nominal_reduced}
u_r\in\mathbb{R}^n,~~~\dot{u}_r=-k\left(\dfrac{\nabla \phi(u_r)}{|\nabla \phi(u_r)|^{\alpha_1}}+\dfrac{\nabla \phi(u_r)}{|\nabla \phi(u_r)|^{\alpha_2}}\right).
\end{align}
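The fixed-time behavior of this reduced flow can be illustrated on a scalar quadratic cost (an illustrative forward-Euler sketch; the right-hand side is set to zero at the minimizer, as in \eqref{ES_dynamics1}):

```python
kappa, z_star = 1.0, 0.3           # phi(u) = (kappa / 2) * (u - z_star)^2
k, a1, a2 = 2.0, 0.5, -1.0         # gain and alpha_1, alpha_2 for q1 = 3, q2 = 1.5

def rhs(u):
    g = kappa * (u - z_star)       # gradient of the quadratic cost
    if g == 0.0:
        return 0.0                 # right-hand side defined as zero at the minimizer
    return -k * (g / abs(g) ** a1 + g / abs(g) ** a2)

h, n_steps = 1e-4, 50000           # integrate up to t = 5
u = 5.0
for _ in range(n_steps):
    u += h * rhs(u)                # forward Euler
```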
In order to analyze system \eqref{ES_dynamics_average_nominal_reduced}, we can follow the ideas of \cite{fixed_time} by considering the Lyapunov function
\begin{equation*}
V_G(u_r)=\frac{1}{2}(\phi(u_r)-\phi^*)^2,
\end{equation*}
which, under Assumption \ref{assumption_1}, satisfies $V_G(u_r)>0$ for all $u_r\neq z^*$, and $V_G(u_r)=0$ if and only if $u_r=z^*$. Moreover, all the level sets of $V_G(u_r)$ are bounded. The derivative of $V_G(u_r)$ along the solutions of \eqref{ES_dynamics_average_nominal_reduced} is given by
\begin{equation*}
\dot{V}_G(u_r)=-k(\phi(u_r)-\phi^*)\bigg(|\nabla \phi(u_r)|^{\tilde{\alpha}_1}+|\nabla \phi(u_r)|^{\tilde{\alpha}_2}\bigg),
\end{equation*}
where $\tilde{\alpha}_1=2-\alpha_1>0$ and $\tilde{\alpha}_2=2-\alpha_2>0$. Using the PL inequality \eqref{condition2} we have that $-|\nabla \phi(u_r)|\leq -\sqrt{2\kappa}\big(\phi(u_r)-\phi^*\big)^{\frac{1}{2}}$. Thus, the derivative $\dot{V}_G$ can be upper bounded as
\begin{align*}
\dot{V}_G(u_r)&\leq -k\Big((2\kappa)^{\frac{\tilde{\alpha}_1}{2}}(\phi(u_r)-\phi^*)^{\frac{2+\tilde{\alpha}_1}{2}}\\
&~~~~~~~~~~~~~~~+(2\kappa)^{\frac{\tilde{\alpha}_2}{2}}(\phi(u_r)-\phi^*)^{\frac{2+\tilde{\alpha}_2}{2}}\Big),\\
&=-k\Big( (2\kappa)^{\frac{\tilde{\alpha}_1}{2}} (2V_G(u_r))^{\frac{2+\tilde{\alpha}_1}{4}}+(2\kappa)^{\frac{\tilde{\alpha}_2}{2}} (2V_G(u_r))^{\frac{2+\tilde{\alpha}_2}{4}} \Big),\\
&=-k \Big(c_1V_G(u_r)^{p_1}+c_2V_G(u_r)^{p_2} \Big)<0,
\end{align*}
for all $u_r\neq z^*$, where
\begin{align*}
c_1:=2^{\frac{2+3\tilde{\alpha}_1}{4}}\kappa^{\frac{\tilde{\alpha}_1}{2}}>0, \\
c_2:=2^{\frac{2+3\tilde{\alpha}_2}{4}}\kappa^{\frac{\tilde{\alpha}_2}{2}}>0,
\end{align*}
and the exponents
\begin{align*}
p_1:&=\frac{2+\tilde{\alpha}_1}{4} \in(0,1), \\
p_2:&=\frac{2+\tilde{\alpha}_2}{4}>1.
\end{align*}
This establishes UGAS of $z^*$ for the reduced dynamics. Moreover, by \cite[Lemma 1]{Fixed_timeTAC}, the Lyapunov function evaluated along the solutions of \eqref{ES_dynamics_average_nominal_reduced} satisfies $V_G(u_r(t))=0$ for all $t\geq T_G^*$, where $T_G^*$ is given by \eqref{fixed_time1}. By the definition of $V_G(u_r)$ and Assumption \ref{assumption_1}, this implies that $u_r(t)=z^*$ for all $t\geq T_G^*$.
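For completeness, the settling-time estimate behind this step follows by direct integration of the differential inequality above (a standard computation). With $p_1=\frac{2+\tilde{\alpha}_1}{4}\in(0,1)$ and $p_2=\frac{2+\tilde{\alpha}_2}{4}>1$, and independently of $V_G(u_r(0))$, we have
\begin{equation*}
T\leq \int_{0}^{1}\frac{dV}{kc_1V^{p_1}}+\int_{1}^{\infty}\frac{dV}{kc_2V^{p_2}}=\frac{1}{kc_1(1-p_1)}+\frac{1}{kc_2(p_2-1)}=\frac{4}{k}\left(\frac{1}{c_1\alpha_1}-\frac{1}{c_2\alpha_2}\right),
\end{equation*}
since $1-p_1=\frac{\alpha_1}{4}$ and $p_2-1=-\frac{\alpha_2}{4}$; this coincides with \eqref{fixed_time1} because $c_1$ and $c_2$ equal the constants $\gamma_1$ and $\gamma_2$ defined in \eqref{gamma1}.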
The UGAS property implies the existence of a $\mathcal{K}\mathcal{L}$ bound $\beta_u$ such that all solutions of \eqref{ES_dynamics_average_nominal_reduced} satisfy
\begin{equation}\label{KLbound}
|u_r(t)-z^*|\leq \beta_u(|u_r(0)-z^*|,t),~~~~~\forall~~t\geq0.
\end{equation}
As noted in \cite[Thm. 3.40]{HDS}, the function $\beta_u$ can be constructed as $\beta_u(r,s)=\max\{0,\beta_0(r,s)\}$, where:
\begin{align}\label{KLexplicit}
\beta_{0}(r,s):=&\sup\Big\{|u_r(t)-z^*|:~u_r~\text{is a solution of \eqref{ES_dynamics_average_nominal_reduced}}\notag\\
&~~~~~~\text{with}~u_r(0)=u_0,~|u_0-z^*|\leq r,~t\geq s\Big\}.
\end{align}
Since the solutions of \eqref{ES_dynamics_average_nominal_reduced} exist and are defined for all $t\geq0$, the supremum is well defined and the function is nondecreasing in $r$ and nonincreasing in $s$. Moreover, it satisfies $\lim_{r\to0^+}\beta_u(r,s)=0$ for each $s\in\mathbb{R}_{\geq0}$ since $\beta_u(r,s)\leq \alpha(r)$, where $\alpha\in\mathcal{K}_{\infty}$ is generated by the uniform global stability property. Finally, $\lim_{s\to\infty}\beta_u(r,s)=0$ for each $r$ due to the fixed-time convergence property, where the limit can be taken independent of $r$. Therefore, $\beta_u(r,s)$ is a valid class $\mathcal{K}\mathcal{L}$ function that bounds every solution of \eqref{ES_dynamics_average_nominal_reduced}, and by construction it satisfies
\begin{equation}\label{finite_bound}
\beta_u(|u_r(0)-z^*|,t)=0
\end{equation}
for all $t\geq T_G^*$ and all $u_r(0)\in\mathbb{R}^n$.
Having established UGAS and fixed time convergence for the reduced dynamics \eqref{ES_dynamics_average_nominal_reduced}, we can now use \cite[Thm. 2]{Wang:12_Automatica} to establish
that system \eqref{ES_dynamics_average_nominal} renders the compact set $\mathcal{A}:=\{z^*\}\times \eta\mathbb{B}$ SGPAS as $\varepsilon_2\to 0^+$ with $\mathcal{K}\mathcal{L}$ bound $\beta_u$. Moreover, by the definition of solutions we have that $|\xi^a(t)|_{\eta\mathbb{B}}=0$ for all $t\in\text{dom}(u^a,\xi^a)$, which implies that $|\zeta^a(t)|_{\mathcal{A}}=|u^a(t)-z^*|$, where $\zeta^a=(u^{a\top},\xi^{a\top})^\top$. Thus, for each $\delta>\nu>0$ there exists $\varepsilon_2^*>0$ such that for all $\varepsilon_2\in(0,\varepsilon^*_2)$ every solution of system \eqref{ES_dynamics_average_nominal} with $\zeta^a(0)\in (\{z^*\}+\delta\mathbb{B})\times \eta\mathbb{B}$ satisfies the following bound:
\begin{equation}\label{first_KLbound}
|\zeta^a(t)|_{\mathcal{A}}\leq \beta_u(|\zeta^a(0)|_{\mathcal{A}},t)+\nu,
\end{equation}
for all $t\in\text{dom}(\zeta^a)$. Having established the bound \eqref{first_KLbound}, we can now exploit the structural robustness properties of system \eqref{ES_dynamics_average_nominal}, which follow by the continuity of the right hand side of the dynamics, in order to establish via \cite[Thm. 7.21]{HDS} and \cite[Prop. 6]{zero_order_poveda_Lina}, that the $O(a)$-perturbed system \eqref{ES_dynamics_average} renders the same compact set $\mathcal{A}$ SGPAS as $(a,\varepsilon_2)\to 0^+$ with $\mathcal{K}\mathcal{L}$ bound $\beta_u$, i.e., for each pair $\delta>\nu>0$ there exists $\varepsilon_2^*>0$ such that for all $\varepsilon_2\in(0,\varepsilon_2^*)$ there exists $a^*>0$ such that for all $a\in(0,a^*)$ every solution of the perturbed system \eqref{ES_dynamics_average} with $u^a(0)\in \{z^*\}+\delta\mathbb{B}$ satisfies a bound of the form \eqref{first_KLbound}.
Finally, since the fast oscillator dynamics of \eqref{ES_dynamics1} render UGAS the set $\mathbb{S}^n$ and generate a well-defined average system corresponding to \eqref{ES_dynamics_average}, by averaging results for perturbed non-Lipschitz systems \cite[Thm. 9]{zero_order_poveda_Lina} we obtain that the FTGES dynamics render the set $\mathcal{A}\times\mathbb{S}^n$ SGPAS as $(\varepsilon_1,a,\varepsilon_2)\to 0^+$ with $\mathcal{K}\mathcal{L}$ bound $\beta_u$. In particular, this establishes that for each $k>0$, each tuple of admissible parameters $(q_1,q_2)$, and each pair $\delta>\nu>0$ there exists $\varepsilon_2^*>0$ such that for each $\varepsilon_2\in(0,\varepsilon_2^*)$ there exists $a^*\in(0,\nu/2)$ such that for each $a\in(0,a^*)$ there exists $\varepsilon_1^*>0$ such that for each $\varepsilon_1\in(0,\varepsilon_1^*)$ each solution of the FTGES dynamics \eqref{flowset1}-\eqref{ES_dynamics1} satisfies the bound
\begin{equation}\label{KLLL}
|\zeta(t)|_{\mathcal{A}}\leq \beta_u(|\zeta(0)|_{\mathcal{A}},t)+\frac{\nu}{2},
\end{equation}
for all $t\in\text{dom}(\zeta,\mu)$, where $\zeta:=(u^\top,\xi^\top)^\top$. Given that by definition of solutions we have that $|\xi|_{\eta\mathbb{B}}=0$ and therefore $|\zeta|_{\mathcal{A}}=|u-z^*|$, it then follows that each solution of the FTGES dynamics \eqref{flowset1}-\eqref{ES_dynamics1} satisfies the bound
\begin{equation}\label{KLL}
|u(t)-z^*|\leq \beta_u(|u(0)-z^*|,t)+\frac{\nu}{2},
\end{equation}
for all $t\in\text{dom}(\zeta,\mu)$. Since the $\mathcal{K}\mathcal{L}$ bound $\beta_u$ satisfies the property of equation \eqref{finite_bound}, we obtain that $|u(t)-z^*|\leq\nu/2$ for all $t\in\text{dom}(u,\xi,\mu)$ such that $t\geq T_G^*$, namely, solutions with a time domain larger than $T^*_{G}$ achieve fixed-time convergence to a $\nu/2$-neighborhood of $z^*$. Using \eqref{input}, the triangle inequality, and the fact that $a\in(0,\nu/2)$, we obtain that $|z(t)-z^*|\leq\nu$ for all $t\in\text{dom}(u,\xi,\mu)$ such that $t\geq T_G^*$.
\vspace{0.1cm}
\textsl{Step 2:} We now exploit the $\mathcal{K}\mathcal{L}$ bound $\beta_u$ of system \eqref{ES_dynamics_average_nominal_reduced} in order to establish the existence of complete solutions for the FTGES dynamics from arbitrarily large compact sets $\left(\{z^*\}+\delta\mathbb{B}\right)\times L\mathbb{B}\times\mathbb{S}^n$ of initial conditions. Without loss of generality we assume that $\nu<1$. Due to the bound \eqref{KLL}, the fact that for any $\eta>0$ the set $\eta\mathbb{B}$ is compact, and the forward invariance of the compact set $\mathbb{S}^n$, by the results of Step 1 the FTGES dynamics have no finite escape times from compact sets of initial conditions $\{z^*\}+\delta\mathbb{B}$ for suitable choices of parameters $(\varepsilon_2,a,\varepsilon_1)$. Thus, any solution of \eqref{flowset1}-\eqref{ES_dynamics1} with $\sup\text{dom}(u,\xi,\mu)<\infty$ must stop due to $\xi$ leaving the set $\eta\mathbb{B}$. In order to establish the existence of solutions that do not stop, we note that for any uniformly bounded input $s$ the solutions of the linear dynamics $\dot{\xi}^a=\frac{1}{\varepsilon_2}\left(-\xi^a+s(t)\right)$ satisfy the bound
\begin{equation}\label{linear_bound}
|\xi^a(t)|\leq \exp\left(-\frac{1}{\varepsilon_2}t\right)|\xi^a(0)|+\sup_{t\geq0} |s(t)|,
\end{equation}
for all $t\geq0$ and all $\varepsilon_2>0$, see \cite[pp. 174]{khalil}. Fix the admissible parameters $(k,q_1,q_2)$, which define the fixed time $T_G^*$. Fix the constants $\delta>\nu$ with $\nu\in(0,1)$, and define the set
\vspace{-0.4cm}
\begin{small}
\begin{equation*}
\tilde{K}:=\left\{u\in\mathbb{R}^n:|u-z^*|\leq \beta_u\left(\max_{u_0\in \{z^*\}+\delta\mathbb{B}}|u_0-z^*|,0\right)+1\right\}.
\end{equation*}
\end{small}\noindent
By construction, this set is compact since without loss of generality $\beta_u$ can be taken to be continuous \cite[pp. 69]{HDS}. Thus, there exists $M>0$ such that $\tilde{K}\subset M\mathbb{B}$. Since $\phi$ is $C^2$, we have that $|u|\leq M$ implies $|\nabla \phi(u)|\leq M'$ for some $M'>0$. Let $L>0$ and consider the compact set $L\mathbb{B}\subset\mathbb{R}^n$. Let $M''>L$ and define the constant $\Gamma:= M''+M'$. Let $\eta:=2\Gamma$ and let Step 1 generate the values of $(\varepsilon^*_1,a^*,\varepsilon_2^*)$ that induce the bound \eqref{KLL}. Then, by the results of Step 1, for all $\varepsilon_2\in(0,\varepsilon_2^*)$ every solution of the singularly-perturbed system \eqref{ES_dynamics_average_nominal} with $u^a(0)\in \{z^*\}+\delta\mathbb{B}$ and $\xi^a(0)\in L\mathbb{B}$ generates trajectories $u^a$ that satisfy a bound of the form $|u^a(t)-z^*|\leq \beta_u(|u^a(0)-z^*|,0)+\nu/2$. Moreover, by linearity of the dynamics of the low-pass filter, the trajectories $\xi^a$ satisfy a bound of the form \eqref{linear_bound}, which, by the construction of $\Gamma\mathbb{B}$ and the choice of $\eta$, implies that $\xi^a(t)\in \Gamma\mathbb{B}\subset \text{int}(\eta\mathbb{B})$ for all $t\geq0$. Thus, said solutions satisfy $\text{dom}(u^a,\xi^a)=[0,\infty)$. Since $\nabla \phi$ is locally Lipschitz, $\nabla \phi(z^*)=0$, and since $|u^a(t)-z^*|\leq \nu$ for all $t\geq T_G^*$, it follows that $|\nabla\phi(u^a(t))|\leq L_{z^*}\nu$ for all $t\geq T_G^*$, where $L_{z^*}>0$ is a Lipschitz constant of $\nabla \phi$ on a neighborhood of $z^*$. Thus, by using the bound \eqref{linear_bound} with $\sup_{t\geq0} |s(t)|=L_{z^*}\nu$ it follows that trajectories $\xi^a$ of \eqref{ES_dynamics_average_nominal} with $\xi^a(0)\in L\mathbb{B}$ converge to an $L_{z^*}\nu$-neighborhood of zero.
Finally, we use $\epsilon$-closeness of solutions on compact time domains (\cite[Def. 5.23]{HDS}) between perturbed and unperturbed ODEs with a continuous right-hand side in order to establish the existence of complete solutions for the original FTGES dynamics: By \cite[Prop. 6.34]{HDS}, there exists $a^{**}>0$ such that for all $a\in(0,\min\{a^*,a^{**}\})$ and each solution of the $O(a)$-perturbed average dynamics \eqref{ES_dynamics_average} there exists a solution of the nominal average dynamics \eqref{ES_dynamics_average_nominal} that is $\epsilon$-close on compact time domains. Thus, since $\text{int}(\eta\mathbb{B})$ is an open set there exists $\epsilon>0$ such that for any $\bar{T}>T_G^*$ there exists $a^{***}>0$ sufficiently small such that for all $a\in(0,a^{***})$ the trajectories $\xi^a$ of system \eqref{ES_dynamics_average} with $(u^a(0),\xi^a(0))\in \left(\{z^*\}+\delta\mathbb{B}\right)\times L\mathbb{B}$ also satisfy $\xi^a(t)\in (\Gamma+\epsilon/2)\mathbb{B}\subset \text{int}(\eta\mathbb{B})$ for all $t\in[0,\bar{T}]$. By using again closeness of solutions between systems \eqref{ES_dynamics1} and \eqref{ES_dynamics_average}, we obtain the existence of $\varepsilon_1^{**}>0$ such that for all $\varepsilon_1\in(0,\min\{\varepsilon_1^*,\varepsilon_1^{**}\})$ for each trajectory of the original FTGES dynamics with $(u(0),\xi(0),\mu(0))\in\left(\{z^*\}+\delta\mathbb{B}\right)\times L\mathbb{B}\times\mathbb{S}^n$ there exists a solution of the average dynamics \eqref{ES_dynamics_average} that is $\epsilon/2$-close on compact time domains. Thus, the trajectories $(u,\xi)$ of the FTGES dynamics \eqref{ES_dynamics1} with $(u(0),\xi(0))\in \left(\{z^*\}+\delta\mathbb{B}\right)\times L\mathbb{B} $ also satisfy $\xi(t)\in (\Gamma+\epsilon)\mathbb{B}\subset \text{int}(\eta\mathbb{B})$ for all $t\in[0,\bar{T}]$, where $\bar{T}\geq T^*_G$. 
Convergence of $\xi$ to an $L_{z^*}\nu$-neighborhood of the origin follows now directly by the bounds \eqref{linear_bound} and \eqref{KLL}, and the $\epsilon$-closeness of solutions on compact time domains between solutions of the FTGES dynamics and system \eqref{ES_dynamics_average_nominal}. In turn, this implies that $\xi(t)\in \text{int}(\eta\mathbb{B})$ for all $t\geq 0$, which establishes that every solution of the FTGES dynamics from the compact set of initial conditions $(\{z^*\}+\delta\mathbb{B})\times L\mathbb{B}\times\mathbb{S}^n\subset \mathbb{R}^n\times \eta\mathbb{B}\times\mathbb{S}^n$ satisfies $\text{dom}(u,\xi,\mu)=[0,\infty)$. \null\hfill\null $\blacksquare$
\subsection{Discussion and Numerical Examples}
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{Figure1Final2-eps-converted-to.pdf}
\caption{Evolution of $u$ for the cost function $\phi_{\kappa}$ of \eqref{cost_fixed} with three different parameters $\kappa\in\{0.25,1,2\}$ and with fixed parameters in the algorithm. Each trajectory corresponds to a different cost $\phi_{\kappa}$, and it converges to the set $[1-\nu,~1+\nu]$, with $\nu=0.003$, before the theoretical upper bound on the convergence time.}
\label{figure1}
\end{figure}
From the proof of Theorem \ref{theorem1} we can observe that the fixed-time convergence property, established in Step 1, is achieved by selecting admissible parameters $q_1$ and $q_2$ that also guarantee continuity of the vector field\footnote{The conference submission \cite{ACCpovedaKrstic} analyzed the FTGES dynamics using tools for discontinuous systems, e.g., Krasovskii regularizations. Since the Krasovskii regularization of a time-invariant continuous vector field is just the same vector field, said approach is still valid for the analysis. However, in this paper we are able to present a much simpler proof of the stability analysis of the FTGES by exploiting the continuity of the vector field \eqref{ES_dynamics1}.}, and by an appropriate ordered tuning of the parameters $(\varepsilon_2,a,\varepsilon_1)$. Moreover, since only positivity is required for the gain $k$, any fixed time $T_G^*>0$ can be assigned a priori by using equation \eqref{design_k}. Note, however, that $T_G^*$ depends on the parameter $\kappa$ that characterizes the cost function in Assumption \ref{assumption_1}, which is assumed to be unknown. Therefore, to induce a particular fixed time $T^*_G$, a conservative estimate of $\kappa$ should be used to tune the parameters of the FTGES.
\begin{remark}
The stability analysis of the FTGES relies on the convergence properties of the gradient flow \eqref{ES_dynamics_average_nominal_reduced}, which we use to qualitatively
approximate the asymptotic behavior of the state $u$ in the FTGES. This is achieved by designing the FTGES dynamics \eqref{ES_dynamics1} to be in standard form for the application of singular perturbation theory \cite{Wang:12_Automatica,zero_order_poveda_Lina}, and by restricting a priori the state $\xi$ of the filter to the compact set $\eta\mathbb{B}$. Since, as shown in Step 2, the constant $\eta>0$ can be taken arbitrarily large to encompass any complete solution of practical interest, the restriction of $\xi$ to $\eta\mathbb{B}$ does not impose any practical constraint on the algorithm, and it is used only for the purpose of analysis. \QEDB
\end{remark}
\begin{remark}
Unlike the model-based fixed-time gradient dynamics of \cite{fixed_time} and \cite{Garg_Inequalities}, the FTGES dynamics are \emph{model-free} and only need measurements of the cost function $\phi$. Because of this, their stability properties are highly dependent on an appropriate \emph{ordered} tuning of the parameters $(\varepsilon_2,a,\varepsilon_1)$. Since standard averaging theory requires Lipschitz continuity of the vector field, the analysis of the FTGES dynamics cannot be carried out using standard averaging tools for smooth extremum seeking controllers such as in \cite{KrsticBookESC} or \cite{durrEbenbauer}. Instead, in this paper we used generalized averaging tools for non-smooth and hybrid systems \cite{Wang:12_Automatica,zero_order_poveda_Lina}. \QEDB
\end{remark}
In order to illustrate the performance of the FTGES dynamics, we consider first the scalar cost function
\begin{equation}\label{cost_fixed}
\phi_{\kappa}(z)=\frac{\kappa}{2}(z-z^*)^2,~~~\kappa>0,
\end{equation}
which satisfies $\nabla^2\phi_{\kappa}(z)=\kappa$. For the purpose of simulation, we initially consider the case where $q_1=3$, $q_2=1.5$, $a=0.1$, $\varepsilon_1=0.02$, $\varepsilon_2=0.1$. We consider constants $\kappa\in\{0.25,1,2\}$, which generate the theoretical upper bounds $T_{0.25}=8.25$, $T_{1}=2.06$, and $T_2=1.03$. Figure \ref{figure1} shows the behavior obtained under the FTGES dynamics. For each value of $\kappa$, the trajectories converge to a small $\nu$-neighborhood of the optimal point in a finite time bounded by $T_G^*$. The controllers were tuned to induce this behavior with $\nu=0.003$.
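The dependence of these bounds on $\kappa$ can be verified directly: for the scalar cost \eqref{cost_fixed} we have
\begin{equation*}
|\nabla\phi_{\kappa}(z)|^2=\kappa^2(z-z^*)^2=2\kappa\left(\phi_{\kappa}(z)-\phi_{\kappa}(z^*)\right),
\end{equation*}
so $\phi_{\kappa}$ satisfies a PL-type inequality with coefficient proportional to $\kappa$. This is consistent with the reported bounds, which scale inversely with $\kappa$: indeed, $\kappa\, T_{\kappa}\approx 2.06$ for all three values of $\kappa$.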
On the other hand, we also consider the case where the convergence-time bound is fixed at $T_G^*=1$. In this case, for all cost functions $\phi_{\kappa}$ the FTGES dynamics are tuned to guarantee that the convergence time is upper bounded by $1$. In order to achieve this property, the parameters were selected as $a=0.1$, $\varepsilon_1=0.001$, $\varepsilon_2=0.05$, $q_1=3$, $q_2=1.5$, and the gain $k$ was obtained via equation \eqref{design_k}. Figure \ref{figure2} shows the trajectories of $u$ for $\kappa\in\{0.25,1,2\}$. As expected, all the trajectories converge to the set $[1-\nu,~1+\nu]$ in a finite time bounded by $T_G^*=1$, where $\nu=0.005$. All the auxiliary states $\xi$ were initialized at zero.
\begin{figure}[t!]
\centering
\includegraphics[width=0.49\textwidth]{Figure2Final-eps-converted-to.pdf}
\caption{Evolution of $u$ for the cost function $\phi_{\kappa}$ of \eqref{cost_fixed} with three different values of $\kappa\in\{0.25,1,2\}$ and with fixed prescribed time $T_G^*=1.1$. Red line corresponds to $\kappa=0.25$, blue line corresponds to $\kappa=1$, and green line corresponds to $\kappa=2$. In this case, the parameters of the ES dynamics were tuned to guarantee convergence to the set $[1-\nu,~1+\nu]$, with $\nu=0.003$, before $T_G^*$ seconds. The insets show the overshoot and the convergence to the neighborhood of the optimizer.}
\label{figure2}
\end{figure}
We finish this section by considering the multi-variable case, where the cost function has the form $\phi(z)=\frac{1}{2}z^\top Qz+b^\top z$. We let $T_G^*=1$, and we tuned the parameters of the FTGES to generate trajectories that converge to a neighborhood of the optimal set in a finite time bounded by $T_G^*$. The matrix $Q\in\mathbb{R}^{2\times2}$ is selected as a diagonal positive definite matrix, and the vector $b$ is selected such that the unique minimizer of $\phi$ is $z^*=[1,2]^\top$. Figure \ref{figure3} shows the evolution of the solutions generated by the FTGES dynamics. The parameters were selected as $q_1=3$, $q_2=1.5$, $k=2.1$, $a=0.1$, $\varepsilon_1=0.001$, $\varepsilon_2=0.05$. It can be observed that the trajectories converge to a small neighborhood of the optimal point $z^*$ before $T_G^*$. In order to make a clear comparison with the standard ES dynamics, we have also plotted the solutions of the classic gradient-based ES dynamics considered in \cite{Krstic2000,TanAndNesic2006Local}.
\section{Newton-Based Fixed-Time Extremum Seeking}
\label{Sec_Newton}
While the FTGES dynamics \eqref{ES_dynamics1} are able to solve problem \eqref{main_problem} in a fixed time $T_G^*$, as shown in equations \eqref{gamma1} and \eqref{fixed_time1} the value of $T_G^*$ depends on the unknown coefficient $\kappa$ that characterizes the cost function in Assumption \ref{assumption_1}. This dependence was further illustrated in Figure \ref{figure1}, where the FTGES dynamics with identical parameters were applied to three cost functions with different coefficients $\kappa$. Since the standing assumption in extremum seeking is that $\phi$ is unknown, absence of knowledge of the coefficient $\kappa$ presents a challenge for the tuning of the FTGES. Motivated by this limitation, in this section we present a Fixed-Time Newton-based Extremum Seeking algorithm that removes the dependence on the coefficient $\kappa$
in the upper bound of the convergence time, thus facilitating the tuning of the control parameters.
\subsection{Qualitative Properties of the Cost Function}
For the fixed-time Newton-based extremum seeking (FTNES) dynamics we consider the following assumption on the cost function $\phi$.
\begin{assumption}\label{assumption_2}
The function $\phi:\mathbb{R}^n\to\mathbb{R}$ is twice continuously differentiable, the Hessian matrix $\nabla^2 \phi(z)$ is positive definite for all $z\in\mathbb{R}^n$, and the norm $|\nabla \phi|$ has bounded level sets. \QEDB
\end{assumption}
\begin{figure}[t!]
\centering
\includegraphics[width=0.485\textwidth]{Figure1Journal-eps-converted-to.pdf}
\caption{Trajectories of $u$ generated by the FTGES dynamics applied to a multi-variable optimization problem. The dotted lines show the trajectories generated by the classic gradient-descent-based ES.}
\label{figure3}
\end{figure}
Under Assumption \ref{assumption_2}, the condition $\nabla^2 \phi(z)>0$ for all $z$ implies that $\phi$ is strictly convex; together with the bounded level sets of $|\nabla\phi|$ and the standing assumption $\inf_{z\in\mathbb{R}^n}\phi(z)>-\infty$, this guarantees that $\phi$ attains its minimum at a unique point $z^*$, and $\nabla \phi(z)=0$ if and only if $z=z^*$. Since there are strictly convex functions that do not satisfy the PL inequality \eqref{condition2}, the FTNES dynamics can be applied to cost functions $\phi$ that do not satisfy Assumption \ref{assumption_1}.
\subsection{Newton-based Fixed-Time Dynamics}
In order to solve problem \eqref{main_problem} for functions satisfying Assumption \ref{assumption_2}, we now consider an extremum seeking controller with state $(u,\xi_1,\xi_2,\mu)\in\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^{n\times n}\times\mathbb{R}^{2n}$, evolving on the set
\begin{equation}\label{flowset1N}
(u,\xi_1,\xi_2,\mu)\in C:=\mathbb{R}^n\times \eta\mathbb{B}\times \eta\mathbb{B} \times \mathbb{S}^n,
\end{equation}
with dynamics
\begin{align}\label{ES_dynamics1N}
\left(\begin{array}{c}
\dot{u}\\\\
\dot{\xi}_1\\\\
\dot{\xi}_2\\\\
\dot{\mu}
\end{array}\right)=-\left(\begin{array}{c}
k\xi_1\left(\dfrac{\xi_2}{|\xi_2|^{\alpha_1}}+\dfrac{\xi_2}{|\xi_2|^{\alpha_2}}\right)\\
\dfrac{1}{\varepsilon_2}\Big(\xi_1F_H(\phi,\mu)\xi_1-\xi_1\Big)\\
\dfrac{1}{\varepsilon_2}\Big(\xi_2-F_G(\phi,\mu)\Big)\\
\dfrac{2\pi}{\varepsilon_1}\mathcal{R}_{\kappa}\mu
\end{array}\right),
\end{align}
where the right-hand side of $\dot{u}$ is defined as zero whenever $\xi_2=0$. The parameters $(\alpha_1,\alpha_2)$ are again given by \eqref{alphaconstants}, the mapping $F_G$ is given by \eqref{mappingG}, the input $z$ and the function $M$ are given by \eqref{input}, and the dynamic oscillator is the same as in \eqref{ES_dynamics1}.
The FTNES has an extra state $\xi_1$ with dynamics depending on the mapping $F_H$, defined as
\begin{equation*}
F_H(\phi,\mu):=\phi(z)N(\mu),
\end{equation*}
where
\begin{equation}\label{excitation_signals}
N(\mu):=\left[\begin{array}{cccc}
N_{11} & N_{12} & \ldots & N_{1n}\\
N_{21} & N_{22} & \ldots & N_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
N_{n1} & N_{n2} & \ldots & N_{nn}
\end{array}\right],
\end{equation}
and where the entries of $N$ are symmetric, $N_{ij}=N_{ji}$, and given by
\begin{align*}
N_{ii}&=\frac{16}{a^2}\left(\tilde{\mu}_i^2-\frac{1}{2}\right),~~~~~\forall~i\in\{1,\ldots,n\},\\
N_{ij}&=\frac{4}{a^2}\tilde{\mu}_i\tilde{\mu}_j,~~~~~~~~~~~~~~\forall~i\neq j,
\end{align*}
where $\tilde{\mu}=\mathcal{D}\mu$.
The structure of the FTNES is similar to the multi-variable Newton-based extremum seeking controller considered in \cite{Newton}, where, on average, the state $\xi_2$ serves as an estimate of the gradient, and the state $\xi_1$ approximates the inverse of the Hessian $\nabla^2\phi(z)$. However, the FTNES dynamics have two main differences: a) for admissible parameters $(q_1,q_2)$, the dynamics of $u$ are continuous but not Lipschitz continuous, and they aim to approximate a Newton-based flow with fixed-time convergence properties \cite{fixed_time,Garg_Inequalities} instead of the standard Newton-based flow $\dot{x}=-\nabla^2\phi(x)^{-1}\nabla\phi(x)$ considered in \cite{Newton}; b) the excitation signals $N(\cdot)$ and $M(\cdot)$ are both generated by a time-invariant linear oscillator, which facilitates the analysis of the algorithm via averaging theory for non-smooth time-invariant dynamical systems \cite{Wang:12_Automatica}. A scheme illustrating the FTNES dynamics is shown in Figure \ref{figureNewton}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{SchemeNewton1-eps-converted-to.pdf}
\caption{Scheme of the Fixed-Time Newton-based Extremum Seeking (FTNES) algorithm for a static map $\phi$.}
\label{figureNewton}
\end{figure}
\begin{remark}
In equation \eqref{ES_dynamics1N} the state $\xi_1$ corresponds to an $n\times n$ matrix. Therefore, the dynamics of $\xi_1$ must be understood as a matrix differential equation. This notation is used to simplify our presentation, and it is consistent with the standard Newton-based extremum seeking controllers of \cite{Newton}. There is no loss of generality in analyzing these dynamics in vectorized form, as in \cite{NewtonESC}. \QEDB
\end{remark}
Since admissible parameters $(q_1,q_2)$ guarantee that $1-\alpha_1>0$ and $1-\alpha_2>0$, the dynamics of $u$ in \eqref{ES_dynamics1N} also satisfy, for each $\alpha\in\{\alpha_1,\alpha_2\}$,
\begin{align*}
\lim_{\xi_2\to0}\left|\frac{\xi_1\xi_2}{|\xi_2|^{\alpha}}\right|\leq \lim_{\xi_2\to0} \left| \xi_1\right| \frac{|\xi_2|}{|\xi_2|^{\alpha}}= \left| \xi_1\right|\lim_{\xi_2\to0} |\xi_2|^{1-\alpha}=0.
\end{align*}
Therefore, admissible parameters guarantee continuity of the FTNES dynamics. Moreover, since the algorithm has the same tunable parameters $(q_1,q_2,k,\varepsilon_1,\varepsilon_2,a)$ as the FTGES algorithm \eqref{ES_dynamics1}, a multi-time-scale behavior can be induced in the closed-loop system under an appropriate tuning of these parameters.
\subsection{Main Result}
In order to state the main convergence result for the FTNES dynamics, for each pair of admissible parameters $(q_1,q_2)$ we now define the positive constants
\begin{align*}
\tilde{\gamma}_1:=2^{\frac{\alpha_1}{2}},~~~~\tilde{\gamma}_2:=2^{\frac{\alpha_2}{2}},
\end{align*}
and the upper bound
\begin{equation}\label{upper_bound}
T_N^*:=\frac{1}{k}\left(\frac{\tilde{\gamma}_1}{\alpha_1}-\frac{\tilde{\gamma}_2}{\alpha_2}\right),
\end{equation}
where $(\alpha_1,\alpha_2)$ are defined as in \eqref{alphaconstants} and $k>0$. For admissible parameters $(q_1,q_2)$, the term inside the parentheses of \eqref{upper_bound} is positive. Moreover, $T_N^*$ does not depend on any parameter of the cost function $\phi$. Thus, for each desired $T_N^*>0$ one can always satisfy equation \eqref{upper_bound} by choosing admissible parameters $(q_1,q_2,k)$ with
\begin{equation}\label{choiceC}
k=\frac{1}{T_N^*}\left(\frac{\tilde{\gamma}_1}{\alpha_1}-\frac{\tilde{\gamma}_2}{\alpha_2}\right).
\end{equation}
The following theorem states the main convergence result for the FTNES dynamics:
\begin{thm}\label{main_theorem2}
Consider the FTNES dynamics and suppose that Assumptions \ref{assumptionFTGES} and \ref{assumption_2} hold. Then, for admissible parameters $(q_1,q_2)$, $k>0$, and each $\nu>0$, there exist $\eta>0$ and $\varepsilon_2^*>0$ such that for each $\varepsilon_2\in(0,\varepsilon_2^*)$ there exists $a^*>0$ such that for each $a\in(0,a^*)$ there exists $\varepsilon_1^*>0$ such that for each $\varepsilon_1\in(0,\varepsilon_1^*)$ there exists a neighborhood $\mathcal{N}$ of $p^*:=(z^*, \nabla^2 \phi(z^*)^{-1},\nabla\phi(z^*))$ such that every solution with $(u(0),\xi_1(0),\xi_2(0),\mu(0))\in \mathcal{N}\times\mathbb{S}^n$ is defined for all time $t\geq0$ and satisfies $|z(t)-z^*|\leq \nu$ for all $t\geq T_N^*$. \QEDB
\end{thm}
\textsl{Proof:} The proof of Theorem \ref{main_theorem2} is similar to the proof of Theorem \ref{theorem1}. We start with the following auxiliary lemma, which follows by direct computation.
\begin{lemma}\label{integrals2}
Under item 3) of Assumption \ref{assumptionFTGES} there exists a $T>0$ such that for all solutions $\mu$ of the linear oscillator \eqref{oscillator} with $\varepsilon_1=1$ the following holds:
\begin{align*}
&\frac{1}{\ell T}\int_{0}^{\ell T}\tilde{\mu}_i(s)ds=0,~~\forall~~i\in\{1,2,\ldots,n\},\\
&\frac{1}{\ell T}\int_{0}^{\ell T}\tilde{\mu}_i(s)^2ds=\frac{1}{2},~~\forall~~i\in\{1,2,\ldots,n\},\\
&\frac{1}{\ell T}\int_{0}^{\ell T}\tilde{\mu}_{i}(s)\tilde{\mu}_j(s)ds=0,~~\forall~~i\neq j,\\
&\frac{1}{\ell T}\int_{0}^{\ell T}\tilde{\mu}_{i}(s)^2N_{ii}(s)ds=\frac{1}{8},~~\forall~~i\in\{1,2,\ldots,n\},\\
&\frac{1}{\ell T}\int_{0}^{\ell T}\tilde{\mu}_{i}(s)^2N_{jj}(s)ds=0,~~\forall~~i\neq j,\\
&\frac{1}{\ell T}\int_{0}^{\ell T}\tilde{\mu}_{i}(s)^2N_{ij}(s)ds=0,~~\forall~~i\neq j,\\
&\frac{1}{\ell T}\int_{0}^{\ell T}\tilde{\mu}_{i}(s)N_{ij}(s)ds=0,~~~\forall~~i\neq j,\\
&\frac{1}{\ell T}\int_{0}^{\ell T}\tilde{\mu}_{i}(s)N_{ii}(s)ds=0,~~\forall~~i\in\{1,2,\ldots,n\},\\
&\frac{1}{\ell T}\int_{0}^{\ell T}\tilde{\mu}_{i}(s)N_{jj}(s)ds=0,~~\forall~~i\neq j,\\
&\frac{1}{\ell T}\int_{0}^{\ell T}\tilde{\mu}_{i}(s)\tilde{\mu}_{j}(s)N_{ij}(s)ds=\frac{1}{4},~~\forall~~i\neq j,\\
&\frac{1}{\ell T}\int_{0}^{\ell T}\tilde{\mu}_{i}(s)\tilde{\mu}_{j}(s)N_{ii}(s)ds=0,~~\forall~~i\neq j,
\end{align*}
for all $\ell\in\mathbb{Z}_{\geq 1}$, where $\tilde{\mu}=\mathcal{D}\mu$.
\QEDB
\end{lemma}
Lemma \ref{integrals2} is instrumental for the application of averaging theory to obtain $O(a)$-accurate estimates of $\nabla\phi$ and $\nabla^2 \phi$. In particular, by again taking a Taylor expansion of $\phi(u+a\tilde{\mu})$ around $u$ for small values of $a$, and retaining the second-order terms, we obtain:
\begin{equation*}
\phi(u+a\tilde{\mu})=\phi(u)+a\tilde{\mu}^\top \nabla \phi(u)+\frac{a^2}{2}\tilde{\mu}^\top\nabla^2 \phi(u)\tilde{\mu}+O(a^3).
\end{equation*}
Using this expansion, and the definitions of the mappings $M$, $N$, $F_G$, and $F_H$, as well as the integrals of Lemma \ref{integrals2}, we obtain the following average functions, where the average is taken with respect to the solutions of the oscillator, i.e., keeping $(u,\xi_1,\xi_2)$ constant:
\begin{equation*}
\frac{1}{\ell T}\int_{0}^{\ell T}F_{G}(\phi(s),\mu(s))ds=\nabla\phi(u)+O(a),
\end{equation*}
and
\begin{equation*}
\frac{1}{\ell T}\int_{0}^{\ell T}F_{H}(\phi(s),\mu(s))ds=\nabla^2\phi(u)+O(a).
\end{equation*}
Therefore, the average dynamics of \eqref{ES_dynamics1N} are given by
\begin{align}\label{NES_dynamics1}
\left(\begin{array}{c}
\dot{u}^a\\\\
\dot{\xi}^a_1\\\\
\dot{\xi}^a_2\\
\end{array}\right)=-\left(\begin{array}{c}
k\xi^a_1\left(\dfrac{\xi^a_2}{|\xi^a_2|^{\alpha_1}}+\dfrac{\xi^a_2}{|\xi^a_2|^{\alpha_2}}\right)\\
\dfrac{1}{\varepsilon_2}\Big(\xi^a_1\nabla^2\phi(u^a)\xi^a_1-\xi^a_1+O\big(a(\xi^a_1)^2\big)\Big)\\
\dfrac{1}{\varepsilon_2}\Big(\xi^a_2-\nabla \phi(u^a)+O(a)\Big)
\end{array}\right).
\end{align}
System \eqref{NES_dynamics1} is a perturbed version of the nominal average system
\begin{align}\label{NES_dynamics2}
\left(\begin{array}{c}
\dot{u}^a\\\\
\dot{\xi}^a_1\\\\
\dot{\xi}^a_2\\
\end{array}\right)=-\left(\begin{array}{c}
k\xi^a_1\left(\dfrac{\xi^a_2}{|\xi^a_2|^{\alpha_1}}+\dfrac{\xi^a_2}{|\xi^a_2|^{\alpha_2}}\right)\\
\dfrac{1}{\varepsilon_2}\Big(\xi^a_1\nabla^2\phi(u^a)\xi^a_1-\xi^a_1\Big)\\
\dfrac{1}{\varepsilon_2}\Big(\xi^a_2-\nabla \phi(u^a)\Big)
\end{array}\right).
\end{align}
Thus, we analyze again the stability properties of system \eqref{NES_dynamics1} by studying the properties of system \eqref{NES_dynamics2}, and using standard robustness results for perturbed ODEs with a continuous right-hand side, e.g., \cite[Thm. 7.21]{HDS}.
When $\varepsilon_2$ is sufficiently small, system \eqref{NES_dynamics2} is a singularly perturbed system. The boundary layer dynamics are obtained in the $\tau=t/\varepsilon_2$ time scale by setting $\varepsilon_2=0$, which leads to $\frac{\partial u^a}{\partial \tau}=0$ and
\begin{subequations}\label{ES_dynamics3}
\begin{align}
\frac{\partial \xi^a_1}{\partial \tau}&=\xi^a_1-\xi^a_1\nabla^2\phi(u^a)\xi^a_1,\label{hessian_dynamics3}\\
\frac{\partial \xi^a_2}{\partial \tau}&=-\xi^a_2 +\nabla \phi(u^a).\label{gradient_dynamics3}
\end{align}
\end{subequations}
For fixed values of $u^a$, the stability properties of system \eqref{ES_dynamics3} can be analyzed as follows: let $r_u:=\nabla \phi(u^a)$ and $H_u:=\nabla^2\phi(u^a)$, and consider the errors $\tilde{\xi}^a_1=\xi^a_1-H_u^{-1}$ and $\tilde{\xi}^a_2=\xi^a_2-r_u$. The error boundary layer dynamics are then given by the following decoupled equations:
\begin{subequations}
\begin{align}
\dot{\tilde{\xi}}^a_1&=-\tilde{\xi}^a_1H_u\left(\tilde{\xi}^a_1+H_u^{-1}\right)\label{filter1}\\
\dot{\tilde{\xi}}^a_2&=-\tilde{\xi}^a_2.\label{filter2}
\end{align}
\end{subequations}
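Equation \eqref{filter1} follows by substituting $\xi^a_1=\tilde{\xi}^a_1+H_u^{-1}$ into \eqref{hessian_dynamics3}:
\begin{align*}
\dot{\tilde{\xi}}^a_1&=\xi^a_1-\xi^a_1H_u\xi^a_1\\
&=\tilde{\xi}^a_1+H_u^{-1}-\left(\tilde{\xi}^a_1+H_u^{-1}\right)H_u\left(\tilde{\xi}^a_1+H_u^{-1}\right)\\
&=\tilde{\xi}^a_1+H_u^{-1}-\tilde{\xi}^a_1H_u\tilde{\xi}^a_1-2\tilde{\xi}^a_1-H_u^{-1}\\
&=-\tilde{\xi}^a_1H_u\left(\tilde{\xi}^a_1+H_u^{-1}\right),
\end{align*}
where the first equality uses the fact that $u^a$, and hence $H_u^{-1}$, is constant in the boundary layer, so that $\dot{\tilde{\xi}}^a_1=\dot{\xi}^a_1$.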
The origin of system \eqref{filter2} is globally exponentially stable, and the origin of system \eqref{filter1} is locally exponentially stable since its linearization around the origin has Jacobian $-I$; see \cite[p. 1761]{Newton}. Therefore, for each fixed $u^a$, the boundary layer dynamics render the quasi-steady state mapping
\begin{equation*}
\xi^*=\left(\nabla^2 \phi(u^a)^{-1},\nabla \phi(u^a)\right),
\end{equation*}
locally exponentially stable, uniformly in $u^a$. Using this stability property, we can now obtain the reduced dynamics associated with system \eqref{NES_dynamics2} by substituting $\xi^*$ into the dynamics of $u^a$:
\begin{align}\label{slow_dynamics}
\dot{u}_r=-k\nabla^2\phi(u_r)^{-1}\left(\frac{\nabla\phi(u_r)}{|\nabla\phi(u_r)|^{\alpha_1}}+\frac{\nabla\phi(u_r)}{|\nabla\phi(u_r)|^{\alpha_2}}\right).
\end{align}
Following the ideas of \cite{fixed_time}, we can analyze the stability properties of system \eqref{slow_dynamics} by considering the Lyapunov function
\begin{equation*}
V_H(u_r)=\frac{1}{2}|\nabla \phi(u_r)|^2,
\end{equation*}
which, under Assumption \ref{assumption_2}, is radially unbounded and positive definite with respect to the point $\{z^*\}$. The derivative of $V_H$ along the solutions of \eqref{slow_dynamics} satisfies
\begin{equation}\label{Lyapunov}
\dot{V}_H(u_r)=-k\rho_1V_H(u_r)^{\chi_1}-k\rho_2V_H(u_r)^{\chi_2}<0,
\end{equation}
for all $u_r\neq z^*$, where
\begin{align*}
\rho_1=2^{\chi_1}>0,~~~\rho_2=2^{\chi_2}>0
\end{align*}
and
\begin{align*}
\chi_1=\frac{2-\alpha_1}{2}\in (0.5,1),~~~\chi_2=\frac{2-\alpha_2}{2}>1.
\end{align*}
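The constants $(\rho_i,\chi_i)$ and the value of $T_N^*$ can be verified by direct computation. Writing $g:=\nabla\phi(u_r)$, along the solutions of \eqref{slow_dynamics} we have
\begin{equation*}
\dot{V}_H(u_r)=g^\top\nabla^2\phi(u_r)\,\dot{u}_r=-k\left(|g|^{2-\alpha_1}+|g|^{2-\alpha_2}\right)=-k\rho_1V_H^{\chi_1}-k\rho_2V_H^{\chi_2},
\end{equation*}
where the last step uses $|g|=(2V_H)^{1/2}$. Moreover, the standard settling-time estimate for differential inequalities of this form (cf. \cite[Lemma 1]{Fixed_timeTAC}) gives
\begin{equation*}
T\leq\frac{1}{k\rho_1(1-\chi_1)}+\frac{1}{k\rho_2(\chi_2-1)}=\frac{\tilde{\gamma}_1}{k\alpha_1}-\frac{\tilde{\gamma}_2}{k\alpha_2}=T_N^*,
\end{equation*}
since $1-\chi_1=\alpha_1/2$, $\chi_2-1=-\alpha_2/2$, and $2^{-\chi_i}=\tilde{\gamma}_i/2$.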
Therefore, system \eqref{slow_dynamics} renders the point $z^*$ UGAS, and by \cite[Lemma 1]{Fixed_timeTAC}, the Lyapunov function evaluated along the solutions of \eqref{slow_dynamics} satisfies $V_H(u_r(t))=0$ for all $t\geq T_N^*$, where $T_N^*$ is given by \eqref{upper_bound}. Thus, the convergence of $u_r$ to $z^*$ occurs in a finite time bounded by $T_N^*$. UGAS of $\{z^*\}$ implies the existence of a class $\mathcal{K}\mathcal{L}$ function $\beta'_u$, constructed as in \eqref{KLexplicit}, such that all solutions of \eqref{slow_dynamics} satisfy the bound
\begin{equation*}
|u_r(t)-z^*|\leq \beta'_u(|u_r(0)-z^*|,t),
\end{equation*}
for all $t\geq0$, where $\beta'_u(r,t)=0$ for all $t\geq T_N^*$ and all $r>0$. From here, the stability analysis follows exactly the same steps as the proof of Theorem \ref{theorem1}, using \cite[Thm. 2]{Wang:12_Automatica} and \cite[Thm. 7.21]{HDS} successively to link the stability properties of the original dynamics \eqref{ES_dynamics1N} and its average perturbed dynamics \eqref{NES_dynamics1}. Local existence of complete solutions from the neighborhood $\mathcal{N}$ follows directly from the local stability properties of \eqref{NES_dynamics2} and the closeness of solutions between \eqref{NES_dynamics2} and \eqref{ES_dynamics1N}. \null\hfill\null $\blacksquare$
\subsection{Discussion and Numerical Examples}
\label{sec_NumNewton}
The convergence result of Theorem \ref{main_theorem2} is local with respect to the initial conditions, and practical with respect to the parameters $(\varepsilon_1,\varepsilon_2,a)$ and the neighborhood $\{z^*\}+\nu\mathbb{B}$. However, unlike existing results in the literature, an upper bound on the convergence time of the FTNES dynamics can be prescribed a priori by selecting admissible parameters $(q_1,q_2)$ and a gain $k$ that satisfies equation \eqref{choiceC}, and by initializing the states $(\xi_1,\xi_2)$ in a pre-defined compact set $\eta\mathbb{B}$. This represents a clear advantage over existing Newton-based extremum seeking algorithms.
\begin{remark}
The local convergence result of Theorem \ref{main_theorem2} is due to the existence of multiple equilibria in the dynamics
\begin{equation}\label{estimator_Hessian2}
\dot{\xi}_1=\Big(\xi_1-\xi_1\nabla^2\phi(z) \xi_1\Big),
\end{equation}
which emerge in the nominal average system \eqref{NES_dynamics2}. Similar local results emerge in Newton-based ESCs with asymptotic convergence properties; see for instance \cite{Newton}. While it is possible to design Newton-based ESCs with semi-global practical asymptotic stability results by computing the vector $\nabla^2\phi(x)^{-1}\nabla\phi(x)$ using the singular perturbation approach presented in \cite[Sec. 3]{NewtonSemiglobal}, see for instance \cite{NewtonESC}, said approach cannot be used in this case since it would generate learning dynamics for the state $u$ with discontinuous vector fields that are not locally bounded when $\nabla\phi=0$. \QEDB
\end{remark}
\begin{figure}[t!]
\begin{centering}
\includegraphics[width=0.48\textwidth]{x2-eps-converted-to.pdf}
\caption{Evolution in time of the state $u$. Blue line corresponds to $u_1$, and red line corresponds to $u_2$. The dotted lines correspond to the trajectories generated by the traditional Newton-based ESC of \cite{Newton}. The solid lines correspond to the trajectories generated by the FTNES. The inset shows the trajectories of the state $\xi_1$ and the cost function $\phi(z)$.}
\label{Fig11}
\end{centering}
\end{figure}
To illustrate the performance of the FTNES, and to highlight the differences with respect to the standard Newton-based extremum seeking controller of \cite{Newton}, consider the quadratic function
\begin{equation*}
\phi(z)=\frac{1}{2} z^\top H z+b^\top z+c,
\end{equation*}
which satisfies
\begin{align*}
\nabla \phi(z)&=Hz+b,\\
\nabla^2\phi(z)&=H.
\end{align*}
The parameters of the cost function are selected as
\begin{equation*}
H=\left[\begin{array}{cc}
4 & 1 \\ 1 & 2
\end{array}\right],~~~b=\left[\begin{array}{c}
-4\\
-6
\end{array}\right],~~c=11.
\end{equation*}
The inverse of the Hessian matrix is given by
\begin{equation*}
H^{-1}=\left[\begin{array}{cc}
0.2857 & -0.1429 \\ -0.1429 & 0.5714
\end{array}\right],
\end{equation*}
and the function $\phi(z)$ has a global minimizer at the point
\begin{equation*}
z^*=-H^{-1}b=\left[\frac{2}{7},~\frac{20}{7}\right]^\top.
\end{equation*}
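As a purely illustrative numerical check (not part of the FTNES algorithm itself), the reported values of $H^{-1}$ and $z^*$ can be reproduced in a few lines of code:

```python
# Illustrative check of the quadratic example: verify the reported
# inverse Hessian H^{-1} and minimizer z* = -H^{-1} b (2x2 case).
H = [[4.0, 1.0], [1.0, 2.0]]
b = [-4.0, -6.0]

det = H[0][0] * H[1][1] - H[0][1] * H[1][0]      # det(H) = 7
H_inv = [[ H[1][1] / det, -H[0][1] / det],
         [-H[1][0] / det,  H[0][0] / det]]       # adjugate / determinant

# Unique minimizer of phi(z) = (1/2) z'Hz + b'z + c
z_star = [-(H_inv[0][0] * b[0] + H_inv[0][1] * b[1]),
          -(H_inv[1][0] * b[0] + H_inv[1][1] * b[1])]

# The gradient H z* + b vanishes at the minimizer
grad = [H[0][0] * z_star[0] + H[0][1] * z_star[1] + b[0],
        H[1][0] * z_star[0] + H[1][1] * z_star[1] + b[1]]
```

This reproduces the values above: $H^{-1}\approx\left[\begin{smallmatrix}0.2857 & -0.1429\\ -0.1429 & 0.5714\end{smallmatrix}\right]$ and $z^*=[2/7,~20/7]^\top$.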
In order to find $z^*$ in fixed time, we implement the FTNES dynamics with parameters $a=0.1$, $\varepsilon_1=0.1$, and $\varepsilon_2=10$. The remaining constants were selected as $\eta=50$, $k=0.025$, $q_1=3$, and $q_2=1.5$, which generates an upper bound of approximately $T_N^*=123.4$. We have also simulated the Newton-based ESC of \cite{Newton}, which has learning dynamics of the form $\dot{x}=-\xi_1\xi_2$. To obtain a smooth approximation of $H^{-1}$, we have also implemented an additional low-pass filter that receives as inputs $\xi_1$ and $\xi_2$, and which generates filtered outputs $\xi^f_1$ and $\xi^f_{2}$ that serve as inputs to the learning dynamics. As shown in \cite{Newton}, the incorporation of these filters does not affect the stability analysis of the algorithm. Figure \ref{Fig11} shows the trajectories generated by the FTNES dynamics as well as the trajectories of the standard Newton-based ESC. It can be observed that the FTNES dynamics exhibit a much better transient performance in terms of fewer oscillations and a faster convergence time to a neighborhood of $z^*$. The insets show the evolution of the components of the state $\xi_1$, which correspond to the entries of $H^{-1}$, as well as the evolution in time of the cost function $\phi(z)$. As can be observed, the trajectories of the entries of $\xi_1$ converge to a neighborhood of the true values of the entries of $H^{-1}$.
To further illustrate the fixed-time convergence property of the FTNES dynamics, we have also simulated the case where the upper bound on the convergence time is fixed at $T_N^*=100$, which can be obtained in the FTNES dynamics by choosing $k=0.03085$, $q_1=3$, and $q_2=1.5$. Figure \ref{Fig6} shows the evolution in time of 50 different trajectories $u(t)$ initialized randomly in the set $[-10,10]\times[-10,10]$. The inset shows the evolution in time of the cost function $\phi(z)$ along the trajectories of $z$. As can be observed, the FTNES dynamics guarantee convergence to a small neighborhood of $z^*$ before the time $T_N^*$.
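The two gain choices above can be cross-checked numerically. The sketch below is illustrative only: it simulates the reduced flow \eqref{slow_dynamics} for a scalar quadratic cost rather than the full FTNES loop, and it assumes that the exponents take the form $\alpha_i=(q_i-2)/(q_i-1)$, which is consistent with the reported values ($T_N^*\approx123.4$ for $k=0.025$, and $k=0.03085$ for $T_N^*=100$); the exact definition in \eqref{alphaconstants} should be consulted.

```python
# Hypothetical sketch: cross-check the fixed-time bound T_N^* and the
# kappa-independence of the reduced Newton flow (slow_dynamics).
# ASSUMPTION (not stated in this section): alpha_i = (q_i - 2)/(q_i - 1).
q1, q2 = 3.0, 1.5
a1 = (q1 - 2.0) / (q1 - 1.0)        # alpha_1 = 0.5  (in (0, 1))
a2 = (q2 - 2.0) / (q2 - 1.0)        # alpha_2 = -1.0 (negative)
g1, g2 = 2.0 ** (a1 / 2.0), 2.0 ** (a2 / 2.0)
c = g1 / a1 - g2 / a2               # parenthesized term in T_N^*

T_for_k = c / 0.025                 # approx 123.4 (value reported above)
k_for_T = c / 100.0                 # approx 0.03085 (gain reported above)

def residual(kappa, k, u0=5.0, z_star=1.0, dt=1e-4):
    """Euler-integrate u' = -(k/kappa)(g/|g|^a1 + g/|g|^a2), where
    g = kappa*(u - z*), up to T = c/k; return |u(T) - z*|."""
    u, t, T = u0, 0.0, c / k
    while t < T:
        g = kappa * (u - z_star)
        if g != 0.0:
            u -= dt * (k / kappa) * (g / abs(g) ** a1 + g / abs(g) ** a2)
        t += dt
    return abs(u - z_star)
```

For any $\kappa>0$, the residual at $t=T_N^*$ is numerically negligible, reflecting the $\kappa$-independence of the Newton-based bound.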
\begin{figure}[t!]
\begin{centering}
\includegraphics[width=0.48\textwidth]{Random5.jpg}
\caption{Evolution in time of several solutions $u(t)$ initialized randomly in the set $[-10,10]\times[-10,10]$. The solid black line indicates the upper bound $T_N^*$ on the convergence time.}
\label{Fig6}
\end{centering}
\end{figure}
It is important to note that in order to obtain the convergence result of Theorem \ref{main_theorem2}, the parameters $(a,\varepsilon_1,\varepsilon_2)$ need to be appropriately tuned. In particular, these parameters depend on the constants $(q_1,q_2,k)$, which, in turn, fix the bound $T_N^*$. Thus, smaller values of $T_N^*$ may require smaller values of $(a,\varepsilon_1,\varepsilon_2)$. Since $\varepsilon_1$ determines the frequency of the oscillator, implementing the FTNES algorithm in discrete time for small values of $T_N^*$ may require a sufficiently small step size to avoid aliasing issues. This indicates a potential tradeoff between achieving small convergence-time bounds and the computational complexity of the discretization of the algorithm. Also, larger sets $\eta\mathbb{B}$ may require smaller values of $(a,\varepsilon_1,\varepsilon_2)$.
\section{$\varepsilon_0$-Fixed-Time Extremum Seeking for Dynamic Systems}
\label{Sec_Dynamic}
We now consider the more general extremum seeking problem, where the cost function $\phi$ corresponds to the steady state input-to-output mapping of a dynamical system. In particular, we consider the following dynamical system
\begin{subequations}\label{plant_dynamics}
\begin{align}
\dot{x}&=f(x,z)\label{plant_dynamics1}\\
y&=h(x,z)\label{output_plant},
\end{align}
\end{subequations}
where $x\in\mathbb{R}^p$ is the state of the system, $z\in\mathbb{R}^n$ is the input, $f$ is a continuous function characterizing the dynamics of the plant, and $h$ is an output function. We assume that the plant \eqref{plant_dynamics1} has already been stabilized such that the state $x$ evolves in a compact set $\Xi\subset\mathbb{R}^p$ for any input $z$ of interest. While this may look like a strong condition, it is a reasonable assumption given that we will consider plants \eqref{plant_dynamics} that have a quasi-steady state continuous manifold $z\mapsto \ell(z)$, and we will tune our controller to operate from particular compact sets $K_z$, which will generate uniformly bounded trajectories $z$ that will keep $x$ uniformly bounded.
In order to have a well-defined extremum seeking problem we also make the following stability assumption.
\begin{assumption}\label{assumption_plant}
There exists a continuous quasi-steady state manifold $\ell:\mathbb{R}^n\to\mathbb{R}^p$ such that for each compact set $K_z\subset\mathbb{R}^n$ the dynamics of the plant, with frozen input, given by
\begin{equation*}
(x,z)\in \Xi\times K_z,~\left\{\begin{array}{l}
\dot{x}=f(x,z)\\
\dot{z}=0
\end{array}\right.
\end{equation*}
render the compact set $M_{K_z}:=\{(x,z)\in\Xi\times K_z: x=\ell(z)\}$ UGAS.\QEDB
\end{assumption}
The stability conditions of Assumption \ref{assumption_plant} are standard in extremum seeking problems, and they can be further relaxed to allow for set-valued quasi-steady state manifolds $\ell:\mathbb{R}^n\rightrightarrows\mathbb{R}^p$, see \cite{Poveda:16,black_box_CDC}. However, for simplicity we assume that $\ell$ is a continuous function, which allows us to avoid introducing extra definitions for set-valued maps.
Using the quasi-steady state manifold $\ell$ and the output \eqref{output_plant}, we define the quasi steady-state input-to-output mapping of the dynamical system \eqref{plant_dynamics} as
\begin{equation}\label{ssiom}
\phi(z):=h(\ell(z)).
\end{equation}
As before, we will assume that $\phi$ attains its minimum at some point $z^*\in\mathbb{R}^n$, i.e., $\inf_{z\in\mathbb{R}^n}\phi(z)>-\infty$.
\subsection{Closed-Loop Dynamics}
In order to optimize the steady-state input-to-output mapping of system \eqref{plant_dynamics} using the FTGES, we consider the following closed-loop system
\begin{align}\label{ES_dynamic}
\left(\begin{array}{c}
\dot{u}\vspace{0.1cm}\\
\dot{\xi}\vspace{0.1cm}\\
\dot{\mu}\vspace{0.1cm}\\
\dot{x}\vspace{0.1cm}
\end{array}\right)=\left(\begin{array}{c}
-k_1\left(\dfrac{\xi}{|\xi|^{\alpha_1}}+\dfrac{\xi}{|\xi|^{\alpha_2}}\right)\\
-k_2\big(\xi-F_G(y,\mu)\big)\\
-k_3\mathcal{R}_{\kappa}\mu\\
f(x,u+a\mathcal{D}\mu)
\end{array}\right),
\end{align}
evolving in the set
\begin{equation}\label{flowset11}
(u,\xi,\mu,x)\in C:=\mathbb{R}^n\times \eta\mathbb{B} \times \mathbb{S}^n\times \Xi,
\end{equation}
where $k_1:=\varepsilon_0 k$, $k_2:=\frac{\varepsilon_0}{\varepsilon_2}$, $k_3:=\frac{2\pi\varepsilon_0}{\varepsilon_1}$, and where $\varepsilon_0$ is a new tunable parameter that satisfies $0<\varepsilon_0\ll 1$.
\begin{figure}[t!]
\begin{centering}
\includegraphics[width=0.48\textwidth]{SchemeNewton2-eps-converted-to.pdf}
\caption{\small{Closed-loop system generated by the interconnection of the FTNES dynamics and the plant \eqref{plant_dynamics}.}}
\label{Fig66}
\end{centering}
\end{figure}
In contrast to the static case considered in Section \ref{Sec_Gradient}, the FTGES dynamics in \eqref{ES_dynamic} receive as input direct measurements of the output of the plant \eqref{plant_dynamics}. Since the rate of convergence of the state $x$ is unknown, it is not possible to prescribe a priori a convergence time for the complete closed-loop system operating in the original time scale $t$. Therefore, we study the convergence properties of system \eqref{ES_dynamic} in a new time scale $\tau:=t\varepsilon_0$. It then follows that $dt =d\tau/\varepsilon_0$, and the dynamics \eqref{ES_dynamic} in the $\tau$-time scale become
\begin{align}\label{ES_dynamicss}
\left(\begin{array}{c}
\dfrac{du}{d\tau}\vspace{0.1cm}\\
\dfrac{d\xi}{d\tau}\vspace{0.1cm}\\
\dfrac{d\mu}{d\tau}\vspace{0.1cm}\\
\dfrac{dx}{d\tau}
\end{array}\right)=\left(\begin{array}{c}
-k\left(\dfrac{\xi}{|\xi|^{\alpha_1}}+\dfrac{\xi}{|\xi|^{\alpha_2}}\right)\\
-\dfrac{1}{\varepsilon_2}\Big(\xi-F_G(y,\mu)\Big)\\
-\dfrac{2\pi}{\varepsilon_1}\mathcal{R}_{\kappa}\mu\\
\dfrac{1}{\varepsilon_0}f(x,u+a\mathcal{D}\mu)
\end{array}\right).
\end{align}
System \eqref{ES_dynamicss} is now in standard form for the application of singular perturbation theory. The boundary layer dynamics are obtained by setting $\varepsilon_0=0$ in \eqref{ES_dynamic}, which leads to $\dot{u}=0$, $\dot{\xi}=0$, $\dot{\mu}=0$, and the dynamics
\begin{equation}\label{bl_dynamic}
x\in\Xi,~~~\dot{x}=f(x,u+a\mathcal{D}\mu),
\end{equation}
which, by Assumption \ref{assumption_plant}, renders UGAS the quasi-steady state manifold $x=\ell(u+a\mathcal{D}\mu)$. Thus, the reduced system associated to the singularly perturbed system \eqref{ES_dynamicss} is given by
\begin{align}\label{ES_dynamicsss}
\left(\begin{array}{c}
\dfrac{du}{d\tau}\vspace{0.1cm}\\
\dfrac{d\xi}{d\tau}\vspace{0.1cm}\\
\dfrac{d\mu}{d\tau}\vspace{0.1cm}
\end{array}\right)=\left(\begin{array}{c}
-k\left(\dfrac{\xi}{|\xi|^{\alpha_1}}+\dfrac{\xi}{|\xi|^{\alpha_2}}\right)\\
-\dfrac{1}{\varepsilon_2}\Big(\xi-F_G(h(\ell(u+a\mathcal{D}\mu)),\mu)\Big)\\
-\dfrac{2\pi}{\varepsilon_1}\mathcal{R}_{\kappa}\mu
\end{array}\right).
\end{align}
Using the definition of $\phi$ in \eqref{ssiom} we can see that this system coincides with system \eqref{ES_dynamics1}. From the proof of Theorem \ref{theorem1} we know that this system renders SGPAS as $(\varepsilon_2,a,\varepsilon_1)\to0^+$ the set $\mathcal{A}=\{z^*\}\times\eta\mathbb{B}\times\mathbb{S}^n$ with $\mathcal{K}\mathcal{L}$ bound $\beta_u$, where the parameters $(\varepsilon_2,a,\varepsilon_1)$ must be tuned in that order. Since the boundary layer dynamics \eqref{bl_dynamic} are constrained to the set $\Xi$, the complete system \eqref{ES_dynamicss} renders SGPAS as $(\varepsilon_2,a,\varepsilon_1)\to0^+$ the set $\mathcal{A}=\{z^*\}\times\eta\mathbb{B}\times\mathbb{S}^n\times\Xi$ with $\mathcal{K}\mathcal{L}$ bound $\beta_u$, i.e., from compact sets of initial conditions and under suitably ordered tuning of the parameters of the controller, every solution of the closed-loop system generates trajectories $u$ that satisfy
\begin{equation*}
|u(\tau)-z^*|\leq \beta_u(|u(0)-z^*|,\tau)+\nu,
\end{equation*}
for all $\tau$ in the domain of the solutions, where $\nu$ can be made arbitrarily small, and where $\beta_u(|u(0)-z^*|,\tau)=0$ for all $\tau\geq T_G^*$, with $T_G^*$ given by \eqref{fixed_time1}. Therefore, in the $\tau$-time scale, the closed-loop system achieves fixed-time extremum seeking.
\begin{figure*}[t!]
\begin{centering}
\includegraphics[width=0.75\textwidth]{DynamicFinal.jpg}
\caption{\small Time history of 70 different trajectories of the states $u_1$ and $u_2$ generated by the FTGES from $70$ different initial conditions, and applied to the dynamic plant \eqref{dynamic_plant_example}. The trajectories were randomly initialized in the interval $[-10,10]$. The inset shows the trajectories of the states $x_1$ and $x_2$ of the plant.}
\label{Fig666}
\end{centering}
\end{figure*}
We summarize with the following theorem.
\begin{thm}\label{theorem3}
Consider the closed-loop system \eqref{ES_dynamicss}, and suppose that Assumptions \ref{assumptionFTGES} and \ref{assumption_plant} hold, and that the steady state input-to-output map \eqref{ssiom} satisfies Assumption \ref{assumption_1}. Then for any pair of admissible parameters $(q_1,q_2)$, $k>0$, $\delta>\nu>0$, there exists $\eta>0$ and $\varepsilon_2^*>0$ such that for all $\varepsilon_2\in(0,\varepsilon_2^*)$ there exists $a^*>0$ such that for all $a\in(0,a^*)$ there exists $\varepsilon_1^*>0$ such that for all $\varepsilon_1\in(0,\varepsilon_1^*)$ there exists $\varepsilon_0^*>0$ such that for all $\varepsilon_0\in(0,\varepsilon_0^*)$ system \eqref{ES_dynamicss} with initial conditions $(u(0),\xi(0),\mu(0),x(0))\in (\{z^*\}+\delta\mathbb{B})\times \eta\mathbb{B}\times \mathbb{S}^n\times \Xi$ generates complete solutions and each trajectory satisfies $|z(\tau)-z^*|\leq \nu$, for all $\tau\in\text{dom}(u,\xi,\mu,x)$, such that $\tau\geq T_G^*$. \QEDB
\end{thm}
It is important to note that the fixed time convergence result of Theorem \ref{theorem3} holds in the $\tau$-time scale. Since $\tau=t\varepsilon_0$, the convergence result in the original time scale would translate to $t\geq T_G^*/\varepsilon_0$. Thus, smaller values of $\varepsilon_0$ generate larger values for the upper bound of the convergence time. This observation illustrates the key difference between fixed-time extremum seeking in dynamic plants versus static maps.
We finish this section by presenting the convergence result for the FTNES algorithm applied to the plant dynamics \eqref{plant_dynamics}. In this case, the closed-loop system is shown in Figure \ref{Fig66}, where the gains $k_i$ for $i\in\{1,2,3\}$ are defined as in the FTGES of \eqref{ES_dynamic}. Since the proof is almost identical to the proof of Theorem \ref{theorem3} by using the result of Theorem \ref{main_theorem2}, we present this result as a corollary.
\begin{cor}\label{corollaryNewton}
Consider the closed-loop system shown in Figure \ref{Fig66} in the $\tau$-time scale, and suppose that Assumptions \ref{assumptionFTGES} and \ref{assumption_plant} hold and that the steady state input-to-output mapping \eqref{ssiom} satisfies Assumption \ref{assumption_2}. Then, for admissible parameters $(q_1,q_2)$, $k>0$, and each $\nu>0$ there exists $\eta>0$ and $\varepsilon_2^*>0$ such that for each $\varepsilon_2\in(0,\varepsilon_2^*)$ there exists $a^*>0$ such that for each $a\in(0,a^*)$ there exists $\varepsilon_1^*>0$ such that for each $\varepsilon_1\in(0,\varepsilon_1^*)$ there exists $\varepsilon_0^*$ such that for each $\varepsilon_0\in(0,\varepsilon_0^*)$ there exists a neighborhood $\mathcal{N}$ of $p^*:=(z^*, \nabla^2 \phi(z^*)^{-1},\nabla\phi(z^*),\ell(z^*))$ such that every solution with $(u(0),\xi(0),x(0),\mu)\in \mathcal{N}\times\mathbb{S}^n$ is defined for all time $\tau\geq0$, and satisfies $|z(\tau)-z^*|\leq \nu$, for all $\tau\geq T_N^*$. \QEDB
\end{cor}
As in the gradient-based case, the convergence result of Corollary \ref{corollaryNewton} would imply that in the $t$-time scale the input satisfies $|z(t)-z^*|\leq \nu$ for all $t\geq T_N^*/\varepsilon_0$. Thus, as $\varepsilon_0\to0^+$ the convergence time grows unbounded since the controller is ``slowed down'' to guarantee enough time scale separation with respect to the plant dynamics \eqref{plant_dynamics}. Nevertheless, for the FTNES dynamics the value of $T^*_{N}$ does not depend on the cost function $\phi$.
\subsection{Numerical Example}
To illustrate the application of the fixed-time extremum seeking controller in dynamic plants, we consider the following dynamical system
\begin{equation}\label{dynamic_plant_example}
\begin{split}
\dot{x}_1&=-20x_1+5x_2+5z_1\\
\dot{x}_2&=-20x_2+5z_2,
\end{split}
\end{equation}
with output function given by
\begin{equation*}
y=10x_1^2+10x_2^2+\frac{x_1}{2}+\frac{x_2}{5}.
\end{equation*}
The quasi-steady state manifold is given by $\ell(z)=\left(\frac{z_2}{16}+\frac{z_1}{4}, \frac{z_2}{4}\right)$, and the steady state input-to-output mapping is
\begin{equation*}
\phi(z)=h(\ell(z))=10\left(\frac{z_1}{4}+\frac{z_2}{16}\right)^2+\frac{5z_2^2}{8}+\frac{13z_2}{160}+\frac{z_1}{8},
\end{equation*}
which is strongly convex and has a minimum at $z^*=(-0.09,-0.04)$. We implement the FTGES using only measurements $y$ of the plant output, and parameters $\varepsilon_0=0.1$, $\varepsilon_1=0.0015$, $\varepsilon_2=0.05$, $q_1=3$, $q_2=1.5$, and $k=0.2$. We simulated 70 different trajectories of the closed-loop system, each trajectory with an initial condition $u(0)$ selected randomly from a compact set $\Omega_{10}$ using a uniform distribution. Figure \ref{Fig666} shows the time history of the resulting 70 trajectories, as well as the theoretical upper bound on the convergence time in the $\tau$-time scale normalized by $\varepsilon_0$. The insets show the evolution of the states $x_1$ and $x_2$, which were initialized at the origin, as well as the residual error after the convergence time. As can be observed, all the trajectories of $u$ converge to a small neighborhood of $z^*$ at a time that is upper bounded by the theoretical bound $T_G^*$ normalized by $\varepsilon_0$. Since the plant is stable, the states $x_1$ and $x_2$ eventually also converge to a neighborhood of the quasi-steady state value $\ell(z^*)$.
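As a sanity check on this example, the following sketch (plain Python, ours, not the simulation code used for Figure \ref{Fig666}) computes $\ell(z)$ from the plant's equilibrium equations, composes it with the output map $h$, and runs a plain gradient descent on $\phi$; it recovers the minimizer $z^*=(-0.09,-0.04)$.

```python
def l(z1, z2):
    # plant equilibrium for frozen input (z1, z2):
    # 0 = -20*x1 + 5*x2 + 5*z1,  0 = -20*x2 + 5*z2
    x2 = z2 / 4.0
    x1 = (x2 + z1) / 4.0
    return x1, x2

def phi(z1, z2):
    x1, x2 = l(z1, z2)
    return 10*x1**2 + 10*x2**2 + x1/2 + x2/5   # y = h(x) at equilibrium

# plain gradient descent on phi, gradient via central finite differences
z1, z2, step, eps = 5.0, -5.0, 0.05, 1e-6
for _ in range(20000):
    g1 = (phi(z1 + eps, z2) - phi(z1 - eps, z2)) / (2 * eps)
    g2 = (phi(z1, z2 + eps) - phi(z1, z2 - eps)) / (2 * eps)
    z1, z2 = z1 - step * g1, z2 - step * g2

print(round(z1, 3), round(z2, 3))   # close to (-0.09, -0.04)
```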
\subsection{Connections with Existing Results in the Literature}
\label{subsec_connections}
We now discuss some connections between the ES dynamics presented in this paper and the existing gradient and Newton-based ES algorithms considered in \cite{DerivativesESC} and \cite{Newton}. In particular, we show that when $\alpha_1=\alpha_2=0$, the FTGES and FTNES dynamics recover the existing schemes in the literature which have only asymptotic (semi-global practical) convergence properties.
\subsubsection{Gradient-Based Scheme} The proposed FTGES \eqref{ES_dynamics1} can be seen as a generalization of existing gradient-based ES algorithms. In particular, when $\alpha_1=\alpha_2=0$, the FTGES dynamics applied to the dynamic plant \eqref{plant_dynamics} become
\begin{align}\label{ES_dynamicA}
\left(\begin{array}{c}
\dot{u}\vspace{0.1cm}\\
\dot{\xi}\vspace{0.1cm}\\
\dot{\mu}\vspace{0.1cm}\\
\dot{x}\vspace{0.1cm}
\end{array}\right)=\left(\begin{array}{c}
-2k_1\xi\vspace{0.1cm}\\
-k_2\big(\xi-F_G(y,\mu)\big)\vspace{0.1cm}\\
-k_3\mathcal{R}_{\kappa}\mu\vspace{0.1cm}\\
f(x,u+a\mathcal{D}\mu)
\end{array}\right).
\end{align}
In this case, by setting $k=0.5$, the reduced average dynamics of \eqref{ES_dynamicA} are given by
\begin{equation}\label{simplified_gradient_flow}
\dot{u}_r=-\nabla \phi(u_r),
\end{equation}
which is just the classic gradient flow. Under condition \eqref{condition2}, the function $\phi(u_r)$ is invex \cite{PLInequality}, which implies that $\mathcal{A}:=\{u^{*}_r\in\mathbb{R}^n: \nabla\phi(u^{*}_r)=0\}=\{u^{*}_r\in\mathbb{R}^n: u^{*}_r=\text{arg}\min_{u_r\in\mathbb{R}^n} \phi(u_r)\}$, i.e., every critical point is a global minimizer \cite{Invexity1}. Thus, the Lyapunov function $V=\phi(u_r)-\phi^*$ satisfies $\dot{V}=-|\nabla \phi(u_r)|^2$, which is zero only at points that minimize the function $\phi$. Since the cost function is radially unbounded and $\phi$ attains its minimum value, the set $\mathcal{A}$ is compact and UGAS under the dynamics \eqref{simplified_gradient_flow}. If one further assumes that $\nabla \phi$ is globally Lipschitz, the set $\mathcal{A}$ is indeed uniformly globally exponentially stable (UGES). Thus, in this case the $\mathcal{KL}$ bound that characterizes the convergence of $u$ in the original dynamics will be of the form $\beta(s_1,s_2)=c_1s_1\exp(-c_2s_2)$, for some $c_1,c_2>0$, see also \cite{zero_order_poveda_Lina}. Therefore, the choice $\alpha_1=\alpha_2=0$ recovers the existing results for gradient-based ES of \cite{tan06Auto} and \cite{DerivativesESC}. Finally, we note that when $\alpha_1=\alpha_2=1$, and $u_r$ is a scalar, we recover the finite-time gradient-based ES algorithm presented in \cite[Sec. 6.1]{Poveda:16}, which is discontinuous at the set of minimizers, and which has a convergence time that is dependent on the initial conditions of the optimizing state $u$.
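The qualitative difference between the two flows can be seen on a scalar example. The sketch below (our illustration, not from the paper) Euler-integrates both the classic flow ($\alpha_1=\alpha_2=0$) and a fixed-time-type flow on the quadratic $\phi(u)=u^2/2$, for which $\nabla\phi(u)=u$; the exponents $0.5$ and $-0.5$ are illustrative choices satisfying $\alpha_1\in(0,1)$ and $\alpha_2<0$, not the paper's exact $(q_1,q_2)$ parameterization.

```python
def settle_time(alpha1, alpha2, u0=10.0, k=1.0, dt=1e-4, T=10.0):
    """Euler-integrate du/dt = -k*(g/|g|**alpha1 + g/|g|**alpha2), g = grad phi(u) = u."""
    u, t = u0, 0.0
    while t < T and abs(u) > 1e-6:
        mag = abs(u)
        u -= dt * k * (u / mag**alpha1 + u / mag**alpha2)
        t += dt
    return t   # time to enter the 1e-6 ball (capped at T)

t_classic = settle_time(0.0, 0.0)     # reduces to du/dt = -2*k*u: asymptotic only
t_fixed   = settle_time(0.5, -0.5)    # illustrative fixed-time-type exponents
print(t_classic, t_fixed)             # the nonsmooth flow settles much earlier
```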
\subsubsection{Newton-based Schemes} For the Newton-based case, setting $\alpha_1=\alpha_2=0$ in the FTNES dynamics applied to the plant \eqref{plant_dynamics} results in the closed-loop system:
\begin{align}\label{ES_dynamics1NB}
\left(\begin{array}{c}
\dot{u}\vspace{0.1cm}\\
\dot{\xi}_1\vspace{0.1cm}\\
\dot{\xi}_2\vspace{0.1cm}\\
\dot{\mu}\vspace{0.1cm}\\
\dot{x}\vspace{0.1cm}
\end{array}\right)=\left(\begin{array}{c}
-2k_1\xi_1\xi_2\vspace{0.1cm}\\
-k_2\Big(\xi_1F_H(y,\mu)\xi_1-\xi_1\Big)\vspace{0.1cm}\\
-k_2\Big(\xi_2-F_G(y,\mu)\Big)\vspace{0.1cm}\\
-k_3\mathcal{R}_{\kappa}\mu\vspace{0.1cm}\\
f(x,u+a\mathcal{D}\mu)
\end{array}\right).
\end{align}
By choosing $k=0.5\gamma$, the reduced average dynamics of \eqref{ES_dynamics1NB} become
\begin{equation*}
\dot{u}_r=- \gamma\nabla^2\phi(u_r)^{-1} \nabla \phi(u_r),
\end{equation*}
which is a classic Newton flow. Under Assumption \ref{assumption_2}, these dynamics render the unique minimizer $z^*$ locally exponentially stable with a rate of convergence proportional to the tunable gain $\gamma$, which is consistent with the results from \cite{Newton}.
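The advantage over the gradient flow is easiest to see on an ill-conditioned quadratic, where the Newton flow $\dot{u}_r=-\gamma\nabla^2\phi(u_r)^{-1}\nabla\phi(u_r)=-\gamma u_r$ contracts every direction at the assignable rate $\gamma$, while the gradient flow is limited by the smallest curvature. A minimal Euler sketch of ours (diagonal Hessian assumed for simplicity):

```python
import math

# ill-conditioned quadratic phi(u) = 0.5*(lam1*u1**2 + lam2*u2**2)
lam1, lam2 = 100.0, 0.01
gamma, dt, steps = 1.0, 1e-3, 5000

ug = [1.0, 1.0]   # gradient flow:  du/dt = -grad phi(u)
un = [1.0, 1.0]   # Newton flow:    du/dt = -gamma * (hess phi)^{-1} grad phi(u) = -gamma*u
for _ in range(steps):
    ug = [ug[0] - dt * lam1 * ug[0], ug[1] - dt * lam2 * ug[1]]
    un = [un[0] - dt * gamma * un[0], un[1] - dt * gamma * un[1]]

n_grad = math.hypot(ug[0], ug[1])
n_newt = math.hypot(un[0], un[1])
print(n_grad, n_newt)   # the Newton flow is far closer to the origin at t = 5
```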
\section{Conclusions and Outlook}
\label{Sec_Conclusions}
In this paper we presented a novel class of extremum seeking controllers that achieve fixed-time convergence with a convergence time that is independent of the optimizing state. We considered both model-free gradient-based algorithms and model-free Newton-based algorithms for different classes of cost functions, and we showed that for the gradient-based algorithm the upper bound on the convergence time of the extremum seeking controller depends on the parameters of the cost. On the other hand, for the Newton-based dynamics the upper bound is independent of the cost function and can be prescribed a priori. Both results were established by using generalized averaging theory for non-smooth systems. We also extended these extremum seeking controllers to dynamical systems that generate well-defined steady state input-to-output mappings, and for which the fixed-time convergence property holds after a time scale transformation is performed. Our results were further validated by means of different single-variable and multivariable numerical examples.
The results of this paper open the door to novel opportunities for the development of other types of fixed-time extremum seeking controllers. In particular, the results of this paper can be extended and generalized for the solution of constrained extremum seeking problems, Nash seeking problems in games, tracking of time-varying optimizers, and hybrid extremum seeking controllers with fixed-time convergence properties.
\vspace{0.5cm}
\section*{Acknowledgments}
The first author would like to thank Andy Teel for fruitful discussions on the semi-global practical fixed-time convergence properties for nonsmooth singularly perturbed systems in standard form.
\bibliographystyle{ieeetr}
| {
"timestamp": "2019-12-17T02:12:23",
"yymm": "1912",
"arxiv_id": "1912.06999",
"language": "en",
"url": "https://arxiv.org/abs/1912.06999",
"abstract": "We introduce a new class of extremum seeking controllers able to achieve fixed time convergence to the solution of optimization problems defined by static and dynamical systems. Unlike existing approaches in the literature, the convergence time of the proposed algorithms does not depend on the initial conditions and it can be prescribed a priori by tuning the parameters of the controller. Specifically, our first contribution is a novel gradient-based extremum seeking algorithm for cost functions that satisfy the Polyak-Lojasiewicz (PL) inequality with some coefficient \\kappa > 0, and for which the extremum seeking controller guarantees a fixed upper bound on the convergence time that is independent of the initial conditions but dependent on the coefficient \\kappa. Second, in order to remove the dependence on \\kappa, we introduce a novel Newton-based extremum seeking algorithm that guarantees a fully assignable fixed upper bound on the convergence time, thus paralleling existing asymptotic results in Newton-based extremum seeking where the rate of convergence is fully assignable. Finally, we study the problem of optimizing dynamical systems, where the cost function corresponds to the steady-state input-to-output map of a stable but unknown dynamical system. In this case, after a time scale transformation is performed, the proposed extremum seeking controllers achieve the same fixed upper bound on the convergence time as in the static case. Our results exploit recent gradient flow structures proposed by Garg and Panagou in [3], and are established by using averaging theory and singular perturbation theory for dynamical systems that are not necessarily Lipschitz continuous. We confirm the validity of our results via numerical simulations that illustrate the key advantages of the extremum seeking controllers presented in this paper.",
"subjects": "Optimization and Control (math.OC)",
"title": "Fixed-Time Extremum Seeking"
} |
https://arxiv.org/abs/1611.06303 | Hilbert's Proof of His Irreducibility Theorem | Hilbert's Irreducibility Theorem is a cornerstone that joins areas of analysis and number theory. Both the genesis and genius of its proof involved combining real analysis and combinatorics. We try to expose the motivations that led Hilbert to this synthesis. Hilbert's famous Cube Lemma supplied fuel for the proof but without the analytical foundation and framework it would have been heating empty air. The lemma is said to presage Ramsey Theory but we note differences in motivation.
\section{Introduction.}
\label{sec:intro}
In 1892, David Hilbert
published what is
known today as \emph{Hilbert's irreducibility theorem}.
We give his statement, using \emph{integral polynomial} to mean a polynomial in any number of variables whose coefficients are integers.
\begin{thm}
\label{irreducibility}
If $F(x,y,\dots,w;t,r,\dots,q)$ is an irreducible polynomial with
integral coefficients
in the variables $x,y,\dots,w$ and the parameters $t,r,\dots,q$, then
it is always possible, and indeed in infinitely many ways, to
substitute integers for the parameters $t,r,\dots,q$ such that the
polynomial $F(x,y,\dots,w;t,r,\dots,q)$ becomes an irreducible
polynomial in the variables $x,y,\dots,w$ alone.
\end{thm}
The statement of Theorem \ref{irreducibility} is a direct translation from \cite{Hilbert1892}. It falls short of modern precision.
For example, the irreducibility of $F$ concerns the polynomial in the whole set of variables---parameters included---but the statement is technically false if there are no variables but only parameters. Nor is it clear whether one needs a lot of variables and parameters or whether proving the theorem for one or two of each suffices; this is clarified below.
One of our purposes is to lead readers to appreciate modern rigor and clarity compared to 19th century standards.
To Hilbert, this theorem was not an end in itself but rather a tool to
use for some remarkable applications. A simple one is that if a
polynomial $f(x)$ over $\Zed$ has values that are perfect squares for
all sufficiently large $x$, then $f(x)$ must be the square of some
other polynomial over $\Zed$. One of his most striking results was the following.
\begin{cor}
For every integer $n\geq 1$ there exist infinitely many polynomials $p$ in
$\Zed[x]$ of degree $n$ such that $p$ has the symmetric group $S_n$ as its Galois
group.
\end{cor}
He began his paper \cite{Hilbert1892} with a statement and proof of
the two-variable case, which is the fundamental step in the proof of
the general theorem. Again in Hilbert's own words, the statement is:
\begin{thm}
\label{irreducibility-1}
If $f(x,t)$ is an irreducible polynomial in the two variables $x$ and
$t$ with integral coefficients
\begin{equation}\label{integral-T}
f(x,t) = T x^n + T_1 x^{n-1} +\cdots+ T_n,
\end{equation}
where $T,T_1,\dots,T_n$ are integral polynomials in $t$, it is always
possible, indeed in infinitely many ways, to substitute an integer for
$t$ in $f(x,t)$ such that the polynomial $f(x,t)$ becomes an
irreducible polynomial of the single variable~$x$.
\end{thm}
\noindent
This statement also has several issues. In just eight words of the last sentence, ``$t$'' is first a variable symbol, then part of the name ``$f(x,t)$'' for $f$, and last a constant substituted into $f(x,t)$. It is hard to avoid the juggling of different meanings that Hilbert expected of his readers, but we will use ``$t_0$'' and ``$t_1$'' to distinguish constants. Often $t_0$ is a threshold and $t_1 \geq t_0$. Second, the simpler statement does not clarify what happens when there is no dependence on the variable $x$. The set $\Zed[x]$ of polynomials $f$ in one variable $x$ having coefficients in $\Zed$ includes the constant polynomials $f(x) = 0$, $f(x) = 1$, $f(x) = -1$, and so on. Likewise, $f(x,t) = t^2 + t + 2$ counts as a member of $\Zed[x,t]$. It satisfies the hypotheses because it is irreducible and because the choices $T = T_1 = \dots = T_{n-1} = 0$ and $T_n = t^2 + t + 2$ count as ``integral polynomials in $t$.'' But the conclusion is false because for every integer $t_1$, the \emph{constant} $t_1^2 + t_1 + 2$ is an even number, so the constant \emph{polynomial} $f'(x) = f(x,t_1) = t_1^2 + t_1 + 2$ is reducible as a member of $\Zed[x]$.\footnote{
If this objection seems trivializing, consider instead the polynomial $g(t) = t^2 + 1$, which is likewise irreducible. Whether $g(t)$ takes on infinitely many prime values is one of four problems presented by Edmund Landau to the 1912 International Congress of Mathematicians---all still unsolved---and it must have been outside the boundary of what Hilbert's words were meant to embrace in 1892. Thus Hilbert's theorem borders on matters of lasting depth.
}
We will see at the start of Section~\ref{sec:pseries} how the proof needs $f(x,t_1)$ to have at least one (complex) root for all but finitely many $t_1$. For this it suffices
that the two-variable polynomial $f(x,t)$ is not independent of $x$, so that $x$ has degree at least $1$. The crispest modern way we know to say this is ``$f \in \Zed[x,t] \setminus \Zed[t]$,'' where $\setminus$ means difference of sets. This yields the following statement.
\begin{thm}
\label{irreducibility-2}
Let $f(x,y) \in \Zed[x,y] \setminus \Zed[y]$. For an infinite number of $t_1 \in \Zed$, $f(x,t_1)$, as an element of $\Zed[x]$, is irreducible.
\end{thm}
\noindent
In fact, Hilbert proved the \emph{contrapositive}, which can be
formulated as follows.
\begin{thm}
\label{irreducibility-contra}
Let $f(x,y)\in \Zed[x,y] \setminus \Zed[y]$. If there exists $t_0$ such that for every
integer
$t_1 \ge t_0$, $f(x,t_1)$ is reducible in $\Zed[x]$, then $f(x,y)$ is reducible in $\Zed[x,y]$.
\end{thm}
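The content of these statements is easy to probe computationally in a small case of our choosing. For the irreducible polynomial $f(x,t)=x^2-t$, a specialization $f(x,t_1)=x^2-t_1$ is reducible in $\Zed[x]$ exactly when its discriminant $4t_1$ is a perfect square, so the reducible specializations form a thin set, consistent with Theorem~\ref{irreducibility-2}. A short sketch (ours, using the standard monic-quadratic criterion):

```python
import math

def reducible_quadratic(b, c):
    """x^2 + b*x + c splits over Z iff the discriminant b^2 - 4c is a
    perfect square (the roots (-b +/- sqrt(d))/2 are then integers)."""
    d = b * b - 4 * c
    if d < 0:
        return False
    r = math.isqrt(d)
    return r * r == d and (r - b) % 2 == 0

# f(x, t) = x^2 - t is irreducible in Z[x, t]; its specializations x^2 - t1
# are reducible only on the thin set of perfect squares:
bad = [t1 for t1 in range(1, 30) if reducible_quadratic(0, -t1)]
print(bad)   # only the perfect squares up to 29
```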
Hilbert proved Theorem~\ref{irreducibility-contra} by formulating what is today called
\emph{Hilbert's cube lemma}. It can be viewed not only as an enhanced
form of Dirichlet's pigeonhole principle but also as the first
statement of a Ramsey-type theorem.
In Section~\ref{sec:ramsey} we discuss Ramsey theory to illustrate why
Hilbert's cube lemma is regarded as
belonging to that field.
In Section~\ref{sec:cube-lemma} we state and give a simple modern proof of Hilbert's cube lemma and describe optimizations
(we discuss Hilbert's original proof in Section~\ref{sec:conc}).
It is easy to appraise the Hilbert cube lemma as a gem in an isolated setting and
forget the quest that led to it, which was \emph{to find a
polynomial factor, $\varphi(x,t) \in \Zed[x,t]$, of the polynomial
$f(x,t)$.}
In Sections~\ref{sec:monic} through~\ref{sec:int}
we provide a motivated account of Hilbert's beautiful proof of (ir)reducibility by
putting ourselves in his shoes and following the trail of ideas that
we find in his 1892 paper. We have tried to make it as self-contained and elementary as possible.
After Hilbert, many mathematicians offered other proofs of the
irreducibility theorem. Many of these
proofs use so-called ``density" arguments, a standard technique in
today's Diophantine approximation theory, but a far cry from the
natural idea of Hilbert to find a factor of a reducible polynomial. We
will say more about modern proofs in Section~\ref{sec:later}.
Hilbert remains one of the greatest mathematicians of all time. His
original proof still contains insights and arguments that are well
worth study even today. We offer the reader a detailed exposition of
this proof in hope of saving it from the oblivion of history.
\section{Ramsey Theory.}
\label{sec:ramsey}
Theorems in Ramsey theory
almost always follow this informally-stated pattern:
\begin{center}
\fbox{\parbox{6.5cm}{\it For any coloring of a large enough object
there is a nice monochromatic sub-object.}
}
\end{center}
We give three examples of such theorems along with some of the history.
See \cite{GRS,RamseyInts,ramseypromel} for more on these theorems and also Alexander Soifer's book~\cite{coloring} for more of the history.
In 1916, Issai Schur~\cite{Schur}
proved the following statement.
A \emph{$c$-coloring} of a set $S$ is formally a map from $S$ to $\{1,2,\dots,c\}$.
\begin{lemma}
\label{schur}
For all $c$ there exists $S=S(c)$ such that for all $c$-colorings of $\{1,\ldots,S\}$
there exists a monochromatic triple $x,y,z$ such that $x+y=z$.
\end{lemma}
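For $c=2$ the least threshold in the lemma's sense is $S(2)=5$ (here $x=y$ is allowed): the coloring of $\{1,2,3,4\}$ with classes $\{1,4\}$ and $\{2,3\}$ has no monochromatic triple, while every $2$-coloring of $\{1,\dots,5\}$ has one. An exhaustive search (our sketch) confirms this.

```python
from itertools import product

def has_mono_schur_triple(coloring):
    """coloring[i] is the color of i + 1; x = y is allowed."""
    N = len(coloring)
    col = {i + 1: coloring[i] for i in range(N)}
    return any(col[x] == col[y] == col[x + y]
               for x in range(1, N + 1)
               for y in range(x, N + 1)
               if x + y <= N)

def schur_ok(N, c=2):
    """True iff every c-coloring of {1,...,N} has a monochromatic x + y = z."""
    return all(has_mono_schur_triple(p) for p in product(range(c), repeat=N))

print(schur_ok(4), schur_ok(5))   # the threshold S(2) in the lemma's sense is 5
```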
Schur viewed his lemma as a means to an end and so
did not launch what is now called Ramsey theory.
He used it to prove the following theorem in number theory.
\begin{thm}
\label{schurfermat}
Let $n\ge 1$.
There exists $q$ such that, for all primes $p\ge q$,
there exist $x,y,z\in \{1,\ldots,p-1\}$ such that
$x^n+y^n\equiv z^n \pmod p$.
\end{thm}
\begin{proof}
Given $n$ let $q=S(n)$.
Let $p$ be a prime such that $p\ge S(n)$.
Then $\Zed_p^*$, which denotes the numbers
$\{1,\ldots,p-1\}$ together with the operation of
multiplication modulo $p$, forms a group.
All arithmetic henceforth is in $\Zed_p^*$.
Let $H = \{ x^n \mid x\in \Zed_p^* \}$. Clearly $H$ is a subgroup of $\Zed_p^*$.
It is known that $|H|=\frac{p-1}{\gcd(n,p-1)}$ so the number of cosets is
$c=\frac{p-1}{|H|} = \gcd(n,p-1) \le n$.
We denote the cosets by
$d_1H,\ldots,d_cH$.
Consider the following $c$-coloring of $\{1,\ldots,p-1\}$:
color $x$ by $i$ such that $x\in d_iH$.
Since $c\le n$ and $p-1\ge S(n)$, by Schur's lemma,
there exists a monochromatic $x_1,y_1,z_1$ such that $x_1+y_1=z_1$.
Since they are all in the same coset, there exists $d$ such that
$x_1,y_1,z_1 \in dH$. Hence
$x_1=dx^n$,
$y_1=dy^n$,
$z_1=dz^n$.
Since $dx^n + dy^n = dz^n$ we get $x^n+y^n=z^n$.
\end{proof}
Theorem~\ref{schurfermat} refuted the idea of proving
Fermat's last theorem by showing that for all $n\ge 3$ there are arbitrarily large $p$ such that $x^n+y^n\equiv z^n$
has no solution modulo $p$.
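For small cases the conclusion of Theorem~\ref{schurfermat} is easy to witness by brute force. The sketch below (ours) finds a solution of $x^3+y^3\equiv z^3 \pmod p$ for a few primes chosen for illustration.

```python
def fermat_mod_solution(n, p):
    """Brute-force search for x, y, z in {1,...,p-1} with x**n + y**n == z**n (mod p)."""
    pw = {x: pow(x, n, p) for x in range(1, p)}
    nth_powers = set(pw.values())
    for x in range(1, p):
        for y in range(x, p):
            target = (pw[x] + pw[y]) % p
            if target in nth_powers:
                z = next(w for w in range(1, p) if pw[w] == target)
                return x, y, z
    return None

for p in (101, 211, 307):   # sample primes; solutions exist for each
    print(p, fermat_mod_solution(3, p))
```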
In 1927, Bartel van der Waerden~\cite{VDW}
proved the following theorem which now bears his name.
\begin{thm}
\label{vdw}
For all $k,c$ there is a number $W=W(k,c)$
such that, for all $c$-colorings of $\{1,\ldots,W\}$,
there exists a monochromatic arithmetic sequence of length $k$.
\end{thm}
The title of \cite{VDW} credits Pierre Baudet with having conjectured this, but
Soifer~\cite{coloring} gives evidence that Schur had also done so.
Although van der Waerden, unlike Schur, proved his theorem for its own sake,
he too did not pursue this line of research and
so did not launch what is now called Ramsey theory.
Frank Ramsey~\cite{ramsey30}
proved the following theorem that now bears his name.
As others often do, we state only the case for graphs, not hypergraphs.
A \emph{graph} consists of a set $V$ of \emph{vertices} and a set
$E \subseteq \binom{V}{2}$ (the set of unordered pairs of elements of $V$).
The graph is \emph{complete} if $E$ includes all such pairs and is then denoted by $K_n$, where $n = |V|$.
We consider $c$-colorings of the \emph{edges}, namely mappings $f$ from $E$ to $\{1,\dots,c\}$.
A \emph{monochromatic $K_m$} means a subset $V' \subseteq V$ of size $m$ and a color $c' \leq c$ such that for all distinct $u,v \in V'$, $\{u,v\}$ is an edge and $f(\{u,v\}) = c'$.
\begin{thm}
\label{ramsey}
For all $c,m$ there exists a number $R$
such that for all $c$-colorings of the edges of $K_R$
there exists a monochromatic $K_m$.
\end{thm}
The least such $R$ satisfying the conditions of Theorem \ref{ramsey} is denoted by $R_c(m)$. The folkloric example of this theorem is
that in any group of six people, \emph{at least three know each other or at least three are complete strangers}. If the six are the vertices of a $K_6$ and each edge is
colored green or blue (friends or strangers), then the theorem says there is at least one monochromatic triangle. In fact, there are at least \emph{two} such triangles, whereas $K_5$ has none when a green five-pointed star is inscribed in a blue pentagon, so that $R_2(3) = 6$.
Ramsey applied his lemma to problems in mathematical logic.
He viewed it as a means to an end and
so did not launch what is now called Ramsey theory.
In 1892,
before all of the results above, Hilbert~\cite{Hilbert1892} proved
the lemma featured in the next section.
Like the three statements above (Lemma 6, Theorems 8 and 9), it applies to
any $c$-coloring and yields a monochromatic nice substructure.
Hilbert viewed his lemma as a means to an end and
so did not launch what is now called Ramsey theory. He used it to prove the
Hilbert irreducibility theorem, which is
our main topic.
Who did launch Ramsey theory?
Speaking about his joint paper in 1935 with George Szekeres \cite{ES}, Paul Erd{\H{o}}s said that it was Szekeres
who rediscovered the statement and proof of Ramsey's theorem. They used it as a means to the following end.
\begin{thm}
For all $n\ge 3$ there exists $m > n$ such that any $m$ points in the plane
in general position contain $n$ points that form a convex polygon.
\end{thm}
\noindent
But they also
attracted a clique of mostly Hungarians who developed the ideas, conjectures, and results that grew into Ramsey theory as we know it.
\section{The Cube Lemma.}
\label{sec:cube-lemma}
Hilbert's first paragraph in \cite{Hilbert1892} crisply framed the \emph{problem} of
irreducibility under substitutions represented by the statement of
Theorem~\ref{irreducibility}. Then he continued right away: ``Our
developments rest on the following lemma.'' We reproduce his words but
change his $a,\mu$ to $c,\beta$ and compress his displayed formulas
using variables $b_1,\dots,b_m$ that take only the values $0$ or $1$:
\begin{center}
\fbox{\parbox{11cm}{Given an infinite integer sequence $a_1,a_2,a_3,\dots$ in which
generally each $a_s$ denotes one of the $c$-many positive integers
$1,2,\dots,c$, let $m$ be any positive integer. Then there are
always $m$-many positive integers
$\mu^{(1)},\mu^{(2)},\dots,\mu^{(m)}$ such that the $2^m$ elements
\[
a_{\beta + \sum_{i=1}^m b_i \mu^{(i)}}
\]
for infinitely many integers $\beta$ are collectively the same
number $G$, where $G$ is one of the numbers $1,2,\dots,c$.}}
\end{center}
\noindent
Call those elements collectively the \emph{$m$-cube}, which we can
denote by $C(\beta;\mu_1,\dots,\mu_m)$. The sequence
$a_1,a_2,a_3,\dots$ can be called a \emph{coloring} of $\Nat^+$ using
$c$~colors. Thus the conclusion is that every coloring gives rise to
\emph{increments} $\mu^{(1)},\mu^{(2)},\dots,\mu^{(m)}$ that yield a
monochromatic $m$-cube for infinitely many starting points $\beta$.
This is implied by the following finitistic statement, which we regard
as \textbf{Hilbert's cube lemma} in the modern sense.
\begin{lemma}
\label{le:cube}
For all $m,c$ there is a number $H$ such that, for all $c$-colorings
of $\Nat^+$ and all intervals of length $H$ in $\Nat^+$, there is a
monochromatic $m$-cube within the interval.
\end{lemma}
\begin{proof}
We fix $c$. We will let $H_m$ be a value of $H$ that satisfies the
lemma for cube dimension $m$, and we prove by induction on $m$ that $H_m$ always exists.
For the base case $m = 1$, we can
take $H_1 = c + 1$. This just says that for any $c$-coloring of an
interval of length $c + 1$ there will be two elements that are the
same color. Taking $\beta$ to be the smaller one and $\beta + \mu_1$
the larger one, $C(\beta;\mu_1)$ is a monochromatic 1-cube.
For the induction step, assume that $h = H_{m-1}$ exists. We show that, for
any $c$-coloring of an interval of length
$H_m = h \!\cdot\! (1 + c^h)$, there is a monochromatic $m$-cube. Let
$COL$ be a $c$-coloring of an interval of length $H_m$. Partition the
interval into $1 + c^h$ blocks of size $h$. By the pigeonhole
principle, some two of those blocks have the same sequence of $h$
colors. By the induction hypothesis, the former has a monochromatic
$(m-1)$-cube $C(\beta;\mu_1,\dots,\mu_{m-1})$, and since the color
sequence of the latter is the same, it has
$C(\beta';\mu_1,\dots,\mu_{m-1})$ with the same color and increments
but $\beta' > \beta$. Take $\mu_m = \beta' - \beta$. Then
$C(\beta;\mu_1,\dots,\mu_m)$ is the required monochromatic $m$-cube.
This proves the lemma statement with $H = H_m$.
\end{proof}
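The guarantee in the induction can be exercised computationally. Here is a brute-force sketch (our own, not Hilbert's recursive construction; repeated increments are allowed, and find_mono_cube is a hypothetical helper name) confirming that for $c = 2$ every sampled coloring of an interval of length $H_2 = 3(1 + 2^3) = 27$ contains a monochromatic 2-cube.

```python
import random
from itertools import product

def find_mono_cube(col, m):
    """Brute-force search for a monochromatic m-cube in the colored
    interval col[0..N-1] (position i+1 has color col[i]).  Returns
    (beta, (mu_1, ..., mu_m)) or None if no cube exists."""
    N = len(col)
    for beta in range(1, N + 1):
        for mus in product(range(1, N), repeat=m):
            if beta + sum(mus) > N:
                continue
            # the 2^m points beta + sum of a sub-multiset of the mu's
            pts = [beta + sum(b * mu for b, mu in zip(bits, mus))
                   for bits in product((0, 1), repeat=m)]
            if len({col[p - 1] for p in pts}) == 1:
                return beta, mus
    return None

# The proof gives H_1 = c + 1 and H_m = h(1 + c^h) with h = H_{m-1};
# for c = 2 this yields H_2 = 3 * (1 + 2**3) = 27, so every 2-coloring
# of an interval of that length contains a monochromatic 2-cube.
random.seed(1)
for _ in range(50):
    col = [random.randrange(2) for _ in range(27)]
    assert find_mono_cube(col, 2) is not None
print("monochromatic 2-cubes found in all samples")
```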
By analogy with Ramsey numbers, one can denote the least such $H$ by $H(m,c)$
and call it a ``Hilbert Cube Number.''
The above proof embodies a recursive upper bound
$H(m,c) \leq H(m-1,c)(1 + c^{H(m-1,c)})$, with basis $H(1,c) = c + 1$ for
all $c$. This is far from best possible. When
$2 \leq m \leq c$, one can improve the upper bound to
$H(m,c) \leq h(1 + c(m-1)^{h})$, where $h = H(m-1,c)$, by a different
counting argument. One can further tweak this by using $\binom{m-1}{h}$ in
place of $(m-1)^h$. These formulas are not bounded by any fixed tower
of exponents in $c$ and~$m$.
As observed by Brown et~al.~\cite{BCEG}, Hilbert's original proof
yields bounds with $(c + 1)$ rather than $(m - 1)$ in the base and the
Fibonacci number $F_{2m}$ in the exponent, that is,
$H(m,c) \leq (c + 1)^{F_{2m}}$, where $F_0 = 0$, $F_1 = 1$, $F_2 = 1$,
$F_3 = 2$, and so on. These bounds grow doubly exponentially.
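For concreteness, the two bounds can be tabulated side by side. The helper names below are ours; for $c = 2$ the bounds happen to agree through $m = 2$ and then diverge sharply, with the Fibonacci-exponent bound far smaller from $m = 3$ on.

```python
def H_bound(m, c):
    """Recursive upper bound from the inductive proof:
    H(1,c) = c + 1 and H(m,c) <= h * (1 + c**h) with h = H(m-1,c)."""
    h = c + 1
    for _ in range(m - 1):
        h = h * (1 + c ** h)
    return h

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def hilbert_bound(m, c):
    """Bound that Brown et al. extract from Hilbert's original proof:
    H(m,c) <= (c + 1) ** F_{2m}."""
    return (c + 1) ** fib(2 * m)

for m in (1, 2, 3):
    print(m, H_bound(m, 2), hilbert_bound(m, 2))
```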
Szemer\'edi \cite{Sz} (see also~\cite{GRS}) improved both the bounds
and the nature of the result. Here is his more modern form of the lemma:
\begin{lemma}
Let $H,c > 0$ and let $A$ be a subset of
$[1,\dots,H]$ such that $|A| \geq H/c$. Then for some constant $C$ depending
only on $c$, $A$ contains an
$m$-cube where $m \geq \log\log(H) - C$.
\end{lemma}
\section{Monic polynomials.}
\label{sec:monic}
%
Hilbert begins by reducing his general problem to the case of
\emph{monic polynomials in one variable $x$} with \emph{rational}
coefficients. That is, he shows that the following statement suffices to prove
Theorem~\ref{irreducibility-contra}.
\begin{thm}\label{thm:goal}
Let $g(y,t) \in \Zed[y,t] \setminus \Zed[t]$. If there exists $t_0$ such that for all integers $t_1 \geq t_0$, $g(y,t_1)$ is monic and reducible in $\Zed[y]$, then $g(y,t)$ is reducible in $\Que[y,t]$.
\end{thm}
\noindent
Proving Theorem~\ref{thm:goal} will occupy the upcoming sections, but here we show the following.
\begin{prop}
\label{prop:Que2Zed}
Theorem~\ref{thm:goal} implies Theorem~\ref{irreducibility-contra}.
\end{prop}
The following transformation is used not only to prove the implication but also to motivate the machinery for proving Theorem~\ref{thm:goal}.
Recall from \eqref{integral-T} the integral polynomials $T,T_1,\dots,T_n$ in $t$ such that
$f(x,t) = Tx^n + T_1 x^{n-1} + \cdots +T_{n-1} x + T_n.$
Note that $T$ and the $T_j$'s become integer constants for any fixed value of $t$. Define
\begin{equation}\label{gyt}
g(y,t) = y^n + S_1 y^{n-1} + \cdots + S_{n-1} y + S_n,
\end{equation}
where for each $j$, $1 \leq j \leq n$, $S_j = T_j T^{j-1}$. Then
\[
g(y,t) = T^{n-1} f(\frac{y}{T},t).
\]
Thus $g(y,t)$ is defined by rational transformation of the argument $x$ in $f(x,t)$
but still comes out integral. To work with this, we preface four observations,
of which the first is a famous result of Gauss called his \emph{polynomial lemma}.
\begin{lemma}
\label{lem:factors}
\begin{enumerate}
\item
If a monic polynomial in $\Zed[y]$ factors in $\Que[y]$, then it factors in $\Zed[y]$.
\item
A polynomial $\psi(y)$ divides a polynomial $f \in \Zed[x,y]$ if and only if, upon writing
\[
f(x,y) = a_0(y)x^{n} + a_1(y)x^{n-1} +\cdots+ a_{n-1}(y)x + a_n(y),
\]
we have that $\psi(y)$ is a factor of each $a_j(y)$.
\item
If $\psi(y)$ is irreducible and divides $f\cdot g$, where $g \in \Zed[x,y]$ is written as
\[
g(x,y) = b_0(y)x^{n} + b_1(y)x^{n-1} +\cdots+ b_{n-1}(y)x + b_n(y),
\]
then either $\psi(y)$ is a factor of all $a_i(y)$ or it is a factor of all $b_i(y)$.
\item
If $f(x,y)$ can be factored into the product of two polynomials in $x$
whose coefficients are rational functions of $y$ with integral
coefficients, i.e., belong to $\Que(y)[x]$, then it can be factored into the
product of two polynomials in $\Zed[x,y]$.
\end{enumerate}
\end{lemma}
\begin{proof}
Part (a) can be understood as saying the product of two monic polynomials, each with at least one
rational noninteger coefficient, must have at least one rational noninteger coefficient. The
intuition for (b) and (c) is that since $x$ occurs nowhere else, it cannot help
$\psi(y)$ divide $f$ or $f\cdot g$ any other way than stated; proofs may be found in
B\^ocher \cite[pp.~203--204]{Boc}.
To prove
(d), we write the given factorization in the form
\[
f(x,y)
= \frac{f_1(x,y)}{\varphi_1(y)}\cdot \frac{f_2(x,y)}{\varphi_2(y)},
\]
where $f_1(x,y)$, $f_2(x,y)$, $\varphi_1(y)$ and $\varphi_2(y)$ are
integral polynomials such that $f_1$ is not divisible by any factor of
$\varphi_1(y)$ and $f_2$ is not divisible by any factor of
$\varphi_2(y)$. By part~(c), since $f_1 \cdot f_2$ is divisible by
$\varphi_1 \cdot \varphi_2$, $f_1$ has the complete polynomial
$\varphi_2$ as a factor and $f_2$ has the complete polynomial
$\varphi_1$ as a factor. By~(b) we can cancel $\varphi_2$ from the
coefficients of $f_1$ and we can cancel $\varphi_1$ from the
coefficients of~$f_2$. This gives us our factorization into two
polynomials in $\Zed[x,y]$.
\end{proof}
\begin{proof}
[Proof of Proposition~\ref{prop:Que2Zed}]
Let $t_1 \geq t_0$ in the hypothesis of Theorem~\ref{thm:goal}.
Recall that $g(y,t) = T^{n-1}f(\frac{y}{T},t)$ for any $t$.
Since $f(x,t_1)$ factors in $\Zed[x]$, $g(y,t_1)$ factors in $\Que[y]$. But since $g(y,t_1)$ is an integral polynomial
and is monic in the one variable $y$, it factors in $\Zed[y]$ by Gauss's polynomial lemma.
Thus we have satisfied the hypothesis of Theorem~\ref{thm:goal}---and with the same $t_0$ as in
Theorem~\ref{irreducibility-contra}. Assuming its conclusion gives us
\[
g(y,t) = \Psi(y,t) \Psi'(y,t),
\]
where $\Psi(y,t)$ and $\Psi'(y,t)$ belong to $\Que[y,t]$. Substituting back $y = xT$ yields the following equation for our
original polynomial:
\[
f(x,t) = \frac{\Phi(x,t) \Phi'(x,t)}{AT^{n-1}},
\]
where $\Phi(x,t)$ and $\Phi'(x,t)$ both belong to $\Zed[x,t]$,
$A \in \Zed$, and $T \in \Zed[t]$. Now part~(d) of our lemma completes
the proof.
\end{proof}
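The transformation can be sanity-checked on a small example. The quadratic $f$ below is our own hypothetical choice; we verify numerically that $T^{n-1} f(y/T, t)$ expands to a monic polynomial whose coefficients are again integral polynomials in $t$, as the identity displayed above asserts.

```python
from fractions import Fraction

# Hypothetical sample with n = 2:  f(x,t) = T x^2 + T_1 x + T_2,
# where T = t, T_1 = t + 1, T_2 = 2 (our own illustrative choice).
def f(x, t):
    return t * x * x + (t + 1) * x + 2

def g(y, t):
    # the monic transform g(y,t) = T^{n-1} f(y/T, t); here T^{n-1} = t
    return t * f(Fraction(y, t), t)

# g(y,t) expands to y^2 + (t+1) y + 2t: monic, with coefficients
# that are integral polynomials in t.
for t in range(1, 6):
    for y in range(-3, 4):
        val = g(y, t)
        assert val == y * y + (t + 1) * y + 2 * t
        assert val.denominator == 1   # integer values at integer points
```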
Gauss used his lemma, which appeared on page 42 of his \emph{Disquisitiones} \cite{Gauss}, to give the first proof of the irreducibility of the cyclotomic polynomial of prime degree over the rationals. As we've seen, Hilbert used it to reduce the irreducibility theorem to the case of monic polynomials with rational coefficients.
But to go further and prove Theorem~\ref{thm:goal}, a new tool is needed.
\section{Puiseux series.}
\label{sec:pseries}
Let $n$ be the maximum degree of $g(y,t_1)$ as a polynomial in $y$ as $t_1$ varies. It is possible that $g(y,t_1)$ has degree less than $n$ for some values of $t_1$ owing to cancellations, but such values are isolated and we will effectively be able to ignore them.
The fundamental theorem of algebra shows that the equation $g(y,t_1) = 0$ (for other values of $t_1$) has $n$ complex roots.
Thus, thinking informally, we can postulate $n$ functions of $t$, say $y_1(t),\dots,y_n(t)$,
that satisfy the equations. Hilbert brought this into reality by using a refined form of the
\emph{implicit function theorem}, which we refer to as \emph{Puiseux's
theorem} (see discussion of origins below). It says that the $n$ root
functions $y_1(t),\dots,y_n(t)$ can be expressed in a concrete way by
means of fractional power series in decreasing powers of the variable.
These are the so-called \emph{Puiseux series at infinity}, defined as follows.
\begin{definition}
\label{de:pu}
A Puiseux series at infinity is an expression of the
form
\[
u(x^{1/k}) + \sum_{i=1}^\infty \frac{\mathrm{B}_i}{x^{i/k}},
\]
where $u$ is a polynomial with possibly complex coefficients, $k$ is a positive integer, and
$\mathrm{B}_1,\mathrm{B}_2,\ldots \in \Cee$.
\end{definition}
\noindent
We adopt the next theorem statement from \cite[pp.~80--81]{Tz}
with slight alterations in notation and formatting. Its power series
are called \emph{Puiseux expansions at infinity}.
\begin{thm}[Puiseux's Theorem]
Given $g(y,t)$ as above, there are $n$
distinct power series
\begin{equation}
\label{Puiseux1}
\begin{split}
y_1(t) &= \mathrm{A}_{11}\tau^h + \mathrm{A}_{12}\tau^{h-1}
+\cdots+ \mathrm{A}_{1,h+1} + \frac{\mathrm{B}_{11}}{\tau}
+ \frac{\mathrm{B}_{12}}{\tau^2} + \frac{\mathrm{B}_{13}}{\tau^3}
+ \cdots
\\
y_2(t) &= \mathrm{A}_{21}\tau^h + \mathrm{A}_{22}\tau^{h-1}
+\cdots+ \mathrm{A}_{2,h+1} + \frac{\mathrm{B}_{21}}{\tau}
+ \frac{\mathrm{B}_{22}}{\tau^2} + \frac{\mathrm{B}_{23}}{\tau^3}
+ \cdots
\\
& \vdots
\\
y_n(t) &= \mathrm{A}_{n1}\tau^h + \mathrm{A}_{n2}\tau^{h-1}
+\cdots+ \mathrm{A}_{n,h+1} + \frac{\mathrm{B}_{n1}}{\tau}
+ \frac{\mathrm{B}_{n2}}{\tau^2} + \frac{\mathrm{B}_{n3}}{\tau^3}
+ \cdots
\end{split}
\end{equation}
which are all convergent for
$\tau$
greater than some constant, where the following hold:
\begin{enumerate}
\item
For a certain positive integer $k$, $\tau = t^{1/k}$, where the
positive real value of the root is meant;
\item
The given number $h$ is the highest positive exponent of $\tau$ that
occurs.
\item
All coefficients $\mathrm{A}_{i,j}$ and $\mathrm{B}_{i,j}$ are uniquely determined complex numbers;
\item
Any formal power series $y(t)$ satisfying the formal identity
$g\{y(t),t\}\equiv 0$ and having properties analogous to those of the
series \eqref{Puiseux1} necessarily coincides with one of
the $n$ series above.
\item
The following formal identity holds:
\[
g(y,t) \equiv \prod_{i=1}^{n} \{y - y_i(t)\}.
\]
\qed
\end{enumerate}
\hideqed
\end{thm}
Hilbert \cite{Hilbert1892} ascribed the idea of employing Puiseux series to Runge, in a work that had appeared
three years earlier, and cited it in a footnote that we reproduce here.%
\footnote{The original work of Puiseux can be found in
\emph{Liouville's} Journal, vols.~15, 16 (1850, 1851). These
expansions have already been used by \emph{C. Runge} to derive
necessary conditions that an equation between two unknowns have
infinitely many integral solutions. See this Journal, vol.~100,
p.~425.
[Footnote by Hilbert]}
In order for the idea to get off the ground, however, we need to have at least one series, i.e., $n \geq 1$. This is where the condition $g(y,t) \in \Zed[y,t] \setminus \Zed[t]$ in Theorem~\ref{thm:goal}, reflecting emendations in Section~\ref{sec:intro}, is used.
Hilbert then notes---and this is the reason to reduce the problem to monic
polynomials---the relation between the elementary symmetric functions
of the roots $y_1,y_2,\dots,y_n$ and the coefficient polynomials
$S_1,S_2,\ldots,S_n$, namely:
\begin{equation}
\label{SymFun}
\begin{split}
S_1 &= -(y_1 + y_2 +\cdots+ y_n)
\\
S_2 &= (-1)^2(y_1y_2 + y_1y_3 +\cdots+ y_{n-1}y_n)
\\
&\vdots
\\
S_n &= (-1)^n(y_1y_2y_3\cdots y_{n-1}y_n).\\
\end{split}
\end{equation}
The insight is that by Puiseux's theorem, when we plug the expansions
\eqref{Puiseux1} into the symmetric functions \eqref{SymFun},
\emph{the resulting fractional power series for the coefficients all
collapse down to
integral polynomials in $t$. And those must be the polynomials $S_k$ in~$t$ from \eqref{gyt}.}
For later reference, it will be convenient to state
the former observation as a theorem, calling the part of
the expansion with positive exponents the \emph{polynomial part}.
\begin{thm}
\label{thm:nub}
For any $g(y,t) \in \Cee[y,t]$ the elementary symmetric functions of
$n$ Puiseux expansions \eqref{Puiseux1} parameterizing the roots for any $t$ collapse down to polynomials
in $\Zed[t]$ if (and only if):
\begin{enumerate}
\item
The coefficients of
the \emph{negative} powers of $\tau$ in the
resulting fractional power series for the coefficients are
\emph{all equal to zero}.
\item
\label{polypart}
The numerical coefficients of the \emph{``polynomial part"} of the
resulting fractional power series for the coefficients
\emph{are all integers}.
\item
The numerical coefficients of the \emph{positive fractional powers}
of $\tau$ in the resulting fractional power series for the
coefficients are \emph{all equal to zero}.
\qed
\end{enumerate}
\hideqed
\end{thm}
\noindent These three conditions will, with appropriate changes, characterize
the coefficients of any polynomial factor in $\Zed[y,t]$ of $g(y,t)$.
Lemma~\ref{lem:factors} above shows that it suffices to write ``rational
numbers'' instead of ``integers'' in condition~(b).
\section{The formal factors.}
\label{sec:factors}
Any nontrivial \emph{formal} polynomial factor of $g(y,t)$ is a polynomial of the
form
\begin{equation}
\label{formal}
\pi_A(y,t) := \prod_{y_j\in A} (y - y_j),
\end{equation}
where $A$ is a nonempty proper subset of the roots $\{y_1,y_2,\ldots,y_n\}$. As
Hilbert points out, there are $\binom{n}{2}$ quadratic factors,
$\binom{n}{3}$ cubic factors, $\binom{n}{4}$ quartic factors,
$\binom{n}{5}$ quintic factors,
and finally
$n$
factors of degree $n-1$. Additionally, we count the
$n$ linear factors but exclude the two trivial ones, giving a grand total of
\[
\binom{n}{2} + \binom{n}{3} +\cdots+ \binom{n}{n-1} + n = 2^n - 2
\]
possible factors.
So, if $g(y,t)$ is reducible, \emph{some
$\pi_A(y,t)$ must be an integral polynomial factor.}
Sometimes we prefer to think of $A$ as a single item rather than a set, so we assign it a unique index $a$ where $a = 1,2,\dots, 2^n - 2$. Then
$\pi_a(y,t)$ means the same as $\pi_A(y,t)$.
These items will become the ``colors'' in the cube lemma.
Let's look at a simple example of a reducible integral polynomial:
\[
g(y,t) := y^3 - t^3.
\]
Then the roots of $g(y,t) = 0$ are $y_1 = t$, $y_2 = \omega t$,
$y_3 = \omega^2 t$ where $\omega^3 = 1$, $\omega \neq 1$.%
\footnote{Moreover, $y_1 = t$, $y_2 = \omega t$, $y_3 = \omega^2 t$
where $\omega^3 = 1$, $\omega \neq 1$ are also the Puiseux expansions
of the roots.}
When $n = 3$ there are $2^3-2 = 6$ formal factors. Thus the sets $A$
are
\[
\{y_1\}, \quad \{y_2\}, \quad \{y_3\}, \quad
\{y_1,y_2\}, \quad \{y_1,y_3\}, \quad \{y_2,y_3\},
\]
and we (arbitrarily) assign the indices $a = 1,2,3,4,5,6$ to them,
respectively. Then, by \eqref{formal}, these formal factors are:
\begin{align*}
\pi_{\{y_1\}} &\equiv \pi_1(y,t) = y - y_1 = y - t,
\\
\pi_{\{y_2\}} &\equiv \pi_2(y,t) = y - y_2 = y - \omega t,
\\
\pi_{\{y_3\}} &\equiv \pi_3(y,t) = y - y_3 = y - \omega^2 t,
\\
\pi_{\{y_1,y_2\}} &\equiv \pi_4(y,t) = (y - y_1)(y - y_2)
= y^2 + \omega ty + \omega^2 t^2,
\\
\pi_{\{y_1,y_3\}} &\equiv \pi_5(y,t) = (y - y_1)(y - y_3)
= y^2 + \omega^2 ty + \omega t^2,
\\
\pi_{\{y_2,y_3\}}
&\equiv \pi_6(y,t) = (y - y_2)(y - y_3) = y^2 + ty + t^2.
\end{align*}
We observe that $\pi_1(y,t)$ and $\pi_6(y,t)$ are \emph{integral}
polynomial factors, whereas the other four are not.
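This dichotomy can be checked with exact arithmetic in $\Zed[\omega]$, reducing by $\omega^2 = -1 - \omega$. The class name Eis and the helpers below are our own sketch; since each root is a unit of $\Zed[\omega]$ times $t$, a factor $\pi_A$ is integral exactly when the elementary symmetric functions of those units are ordinary integers.

```python
from itertools import combinations

class Eis:
    """Exact arithmetic in Z[w], w a primitive cube root of unity,
    reducing by w^2 = -1 - w; an element a + b*w is stored as (a, b)."""
    def __init__(self, a, b=0):
        self.a, self.b = a, b
    def __add__(self, o):
        return Eis(self.a + o.a, self.b + o.b)
    def __mul__(self, o):
        # (a + bw)(c + dw) = ac + (ad + bc)w + bd*w^2, and w^2 = -1 - w
        a, b, c, d = self.a, self.b, o.a, o.b
        return Eis(a * c - b * d, a * d + b * c - b * d)
    def is_int(self):
        return self.b == 0

w = Eis(0, 1)
units = [Eis(1), w, w * w]      # the roots are y_j = units[j] * t

def elem_sym(us):
    """Elementary symmetric functions e_1, ..., e_r of the units; the
    coefficient of y^{r-k} in pi_A is (-1)^k e_k * t^k, so the factor
    is integral iff every e_k lies in Z."""
    syms = []
    for r in range(1, len(us) + 1):
        total = Eis(0)
        for combo in combinations(us, r):
            p = Eis(1)
            for u in combo:
                p = p * u
            total = total + p
        syms.append(total)
    return syms

integral = [A for r in (1, 2) for A in combinations(range(3), r)
            if all(s.is_int() for s in elem_sym([units[i] for i in A]))]
print(integral)   # [(0,), (1, 2)], i.e., exactly pi_1 and pi_6
```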
\section{Using the pigeonhole principle.}
\label{sec:php}
Our problem now is to discover \emph{at least one} formal factor $\pi_{\alpha}$ that is an
\emph{integral} polynomial. (We note that its complementary factor is also an integral polynomial.) We begin our search by applying our
hypothesis.
Take $t_0$ from the hypothesis of Theorem~\ref{thm:goal}
and $\tau_0$ to be the
positive $k$th root of~$t_0$. Usually, $\tau_0$ will be
irrational. By hypothesis, if we substitute $\tau_0$ into all of the
coefficient series of the formal factors $\pi_a(y,t)$, \emph{at least
one of them will be an integral polynomial in $y$}.
Now, along with Hilbert, we observe that if we substitute $2\tau_0$ into all
of the coefficient series of the formal factors $\pi_a(y,t)$, then
\emph{at least one of them will be an integral polynomial in $y$}, by
the assumption that $g(y,2^k t_0)$ is reducible.
Again, if we substitute $3\tau_0$ into all of the coefficient series
of the formal factors $\pi_a(y,t)$ \emph{at least one of them will be
an integral polynomial in~$y$}, by the assumption that $g(y,3^kt_0)$
is reducible. The same is true of $4\tau_0$, $5\tau_0$, and indeed, of
$\sigma\tau_0$ for $\sigma = 1,2,3,\dots$.
Therefore, we obtain \emph{an infinite sequence of integral polynomial
factors $\pi_a(y,\sigma^k t_0)$ in $y$}. Each of them has a unique
index $a \in \{1,2,\dots, 2^n - 2\}$. Let these indices be
$a_1,a_2,a_3,\dots,a_s,\dots$. Then, by the pigeonhole principle,
\emph{at least one index $a_s$ occurs infinitely often}. In our
example above, we can take $a_s = 1$ or $a_s = 6$ and our sequence of
indices contains $1$ or $6$ or both infinitely often. The point
is the following.
\begin{center}
\fbox{\parbox{8cm}{The corresponding formal polynomial $\pi_{a_{s}}(y,t)$ is a natural
candidate for our integral polynomial factor.
}}
\end{center}
\noindent
To prove that it \emph{is} our integral polynomial factor, we must
verify that its Puiseux
expansions satisfy
the three conditions of
Theorem~\ref{thm:nub}. The rest of Hilbert's paper (and ours) is
about pinning down \emph{a formal factor $\pi_{a_{s}}(y,t)$ that satisfies these three conditions.}
\section{Framing the cube lemma.}
\label{sec:framing}
Let's consider the first condition of Theorem~\ref{thm:nub}: \emph{The
coefficients of all the \emph{negative} powers of $\tau_0$ in the
resulting fractional power series for the coefficients of
$\pi_{a_s}(y,\sigma^k t_0)$ are all equal to zero}.
Without loss of generality we may suppose $a_s$ indexes $\{y_1,\dots,y_{\nu}\}$ where $\nu < n$ is the degree of $\pi_{a_s}$.
The coefficient power series for $\pi_{a_s}(y,\sigma^k t_0)$ then form the following system:
\begin{align*}
y_1 + y_2 +\cdots+ y_\nu
&= \mathrm{C}_{11}(\sigma\tau_0)^h
+ \mathrm{C}_{12}(\sigma\tau_0)^{h-1} +\cdots+ \mathrm{C}_{1,h+1}
\\
&\hspace*{5.8em}
+ \frac{\mathrm{D}_{11}}{\sigma\tau_0}
+ \frac{\mathrm{D}_{12}}{(\sigma\tau_0)^2}
+ \frac{\mathrm{D}_{13}}{(\sigma\tau_0)^3} +\cdots
\\[-2\jot]
& \vdots
\\
y_1y_2\cdots y_\nu
&= \mathrm{C}_{\nu1}(\sigma\tau_0)^{h\nu}
+ \mathrm{C}_{\nu2}(\sigma\tau_0)^{h\nu-1}
+\cdots+ \mathrm{C}_{\nu,h\nu+1}
\\
&\hspace*{6.2em}
+ \frac{\mathrm{D}_{\nu1}}{\sigma\tau_0}
+ \frac{\mathrm{D}_{\nu2}}{(\sigma\tau_0)^2}
+ \frac{\mathrm{D}_{\nu3}}{(\sigma\tau_0)^3} +\cdots .
\end{align*}
The coefficients $\mathrm{C_{ij}},\mathrm{D_{ij}}$ are all completely
determined as
rational or irrational, real or complex
numbers. Some of
them must be zero because the positive exponents of $\tau_0$ in
general will be smaller than $(n - 1)h$.
The variable quantity here is the integer $\sigma$. This suggests that
\emph{we rewrite the above fractional series as series in~$\sigma$}
and then obtain
\begin{align*}
y_1 + y_2 +\cdots+ y_\nu
&= C_{11}\sigma^h + C_{12}\sigma^{h-1} +\cdots+ C_{1,h+1}
\\
&\hspace*{4.1em}
+ \frac{D_{11}}{\sigma} + \frac{D_{12}}{\sigma^2}
+ \frac{D_{13}}{\sigma^3} +\cdots
\\[-2\jot]
& \vdots
\\
y_1y_2 \cdots y_\nu
&= C_{\nu1}\sigma^{h\nu} + C_{\nu2}\sigma^{h\nu -1}
+\cdots+ C_{\nu,h\nu+1}
\\
&\hspace*{4.6em}
+ \frac{D_{\nu1}}{\sigma}
+ \frac{D_{\nu2}}{\sigma^2} + \frac{D_{\nu3}}{\sigma^3} +\cdots
\end{align*}
where the new coefficients $C_{ij},D_{ij}$ are again determinate numerical
quantities. Suppose that the first occurrence of our infinitely
repeated index is at $s = \sigma = \mu$. Then every later occurrence
of that index produces the same $\nu$ power series in $\sigma$, but
with a \emph{larger} value of $\sigma$. Thus, since there are
infinitely many such occurrences, \emph{larger and larger values of
$\sigma$ are substituted into the power series.}
If we look at the series of \emph{negative} powers for any particular
coefficient, we see that for sufficiently large $\sigma$ it becomes
arbitrarily small in absolute value. Yet, the total power series takes
integral values for all of these values of $\sigma$. That suggests
that the total contribution of the negative powers, for large
$\sigma$, \emph{is an integer of arbitrarily small absolute value,
i.e., zero}.
Thus we might try to argue by contradiction as follows: Assume that
there \emph{are} nonzero coefficients of negative powers and deduce an
absurd conclusion. The possible hitch is that this inference ignores
the ``polynomial part'' of the coefficient series---which could
exactly compensate for a tiny nonzero contribution of the negative
powers.
To show that this is \emph{not} the case we would like to somehow
``eliminate'' the polynomial part of the coefficient series without
losing the property of being an integer for infinitely many values
of~$\sigma$. This suggests \emph{forming suitable linear combinations
of the coefficient series that successively subtract off the leading
terms of the polynomial parts, finally leaving only integer-valued
series of negative powers of $\sigma$}.
To see how this would work, let's look at a typical coefficient
series. Let us choose any of the $\nu$ power series in the system
under consideration, say the power series
\[
\mathcal{P}(\sigma)
= C_{11}\sigma^{m-1} + C_{12}\sigma^{m-2} +\cdots+ C_{1m}
+ \frac{D_{11}}{\sigma} + \frac{D_{12}}{\sigma^2}
+ \frac{D_{13}}{\sigma^3} + \cdots ,
\]
where we have written $m - 1$ for the highest power of~$\sigma$.
Now comes a new insight. \emph{This is the insight that is key to the
whole proof}. Its simplicity belies the brilliance it took to think of
it. Professional mathematicians are aware of this phenomenon: the
deepest ideas, in the end, are based on a simple observation. The observation can be counterfactual and act as a catalyst. Here is
Hilbert's:
\begin{center}
\fbox{\parbox{10cm}{Let us suppose that the series $\mathcal{P}(\sigma)$ takes on
integral values, not only for infinitely many values of $\sigma$,
\emph{but also for all the infinitely many values of
$\sigma + \mu^{(1)}$, where $\mu^{(1)}$ is a fixed increment
independent of~$\sigma$}.}}
\end{center}
\noindent
To set it in motion, we form the linear combination
\[
\mathcal{P}^{(1)}(\sigma)
:= \mathcal{P}(\sigma) - \mathcal{P}(\sigma + \mu^{(1)})
\]
and put the polynomial part equal to
\[
\varphi_{m-1}(\sigma)
:= C_{11}\sigma^{m-1} + C_{12}\sigma^{m-2} +\cdots+ C_{1m}
\]
for brevity. We now start the argument-by-contradiction.
Suppose now that the other coefficients $D_{11},D_{12},D_{13},\dots$
of the power series $\mathcal{P}(\sigma)$ \emph{are not all zero} and
let $D_{1v}/\sigma^v$ be the first term whose coefficient $D_{1v}$
does \emph{not} vanish. Then
\[
\mathcal{P}^{(1)}(\sigma) = \varphi_{m-1}(\sigma)
- \varphi_{m-1}(\sigma + \mu^{(1)})
+ D_{1v} \biggl[ \frac{1}{\sigma^v}
- \frac{1}{(\sigma + \mu^{(1)})^v} \biggr] + \cdots .
\]
Here the first difference on the right-hand side \emph{is a polynomial
of degree $m - 2$ in~$\sigma$}; we put
\[
\varphi_{m-2}(\sigma)
= \varphi_{m-1}(\sigma) - \varphi_{m-1}(\sigma + \mu^{(1)}).
\]
We expand the remaining terms on the right-hand side in decreasing
powers of $\sigma$; then we obtain%
\footnote{The details, with simplified notation ($\mu = \mu^{(1)}$), are:
\begin{align*}
D_{1v} \biggl[ \frac{1}{\sigma^v}
- \frac{1}{(\sigma + \mu)^v} \biggr]
&= \frac{D_{1v}}{\sigma^v} \biggl[
1 - \frac{1}{(1 + \frac{\mu}{\sigma})^{v}} \biggr]
\\
&= \frac{D_{1v}}{\sigma^v}
\biggl[ 1 - \biggl\{ 1 + \binom{-v}{1} \frac{\mu}{\sigma}
+ \binom{-v}{2} \biggl( \frac{\mu}{\sigma} \biggr)^2
+ \cdots \biggr\} \biggr]
\\
&= \frac{D_{1v}}{\sigma^v}
\biggl[ \frac{\mu v}{\sigma} - \frac{v(v + 1)}{2}
\biggl( \frac{\mu}{\sigma} \biggr)^2 + \cdots \biggr]
\\
&= \frac{\mu v D_{1v}}{\sigma^{v+1}} \biggl[
1 - \frac{v + 1}{2} \biggl( \frac{\mu}{\sigma} \biggr)
+ \cdots \biggr],
\end{align*}
and this last expression on the right-hand side is equivalent to that
in the main body of the paper.}
\[
\mathcal{P}^{(1)}(\sigma)
= \varphi_{m-2}(\sigma) + \mu^{(1)}v\frac{D_{1v}}{\sigma^{v+1}}
+\cdots \,.
\]
\emph{We have reduced the maximum degree of the polynomial part by one
unit}. Moreover, $\mathcal{P}^{(1)}(\sigma)$ \emph{takes on integral
values for infinitely many~$\sigma$}.
We have \emph{not} proved that such an increment $\mu^{(1)}$ exists,
but if we could, then we could make a first step in reaching our goal
of producing a series of negative powers of $\sigma$ that
evaluates to
an integer for infinitely many values of~$\sigma$.
To carry out a similar program to \emph{reduce the maximum degree of
the polynomial part to $m - 3$, then to $m - 4$, and so on until
finally to zero}, we would have to prove the existence of $m$
fixed increments $\mu^{(k)}$, $k = 1,2,\ldots,m$, \emph{whose values
are all independent of $\sigma$ and such that if we substitute any of
the integers}
\[
\boxed{\begin{smallmatrix}
\mu & & & &
\\
\mu + \mu^{(1)} & & & &
\\
\mu + \mu^{(2)} & \mu + \mu^{(1)} + \mu^{(2)} & & &
\\
\mu + \mu^{(3)} & \mu + \mu^{(1)} + \mu^{(3)}
& \mu + \mu^{(2)} + \mu^{(3)}
& \mu + \mu^{(1)} + \mu^{(2)} + \mu^{(3)} &
\\
\vdots & \vdots & \vdots & \vdots \hspace*{2.5em} \cdots
& \ddots \hfill
\\
\mu + \mu^{(m)} & \mu + \mu^{(1)} + \mu^{(m)}
& \mu + \mu^{(2)} + \mu^{(m)} & \cdots \qquad \cdots
& \mu + \mu^{(1)} +\cdots+ \mu^{(m)}
\end{smallmatrix}}
\]
\emph{for $\sigma$ in $\mathcal{P}(\sigma)$, the values will all be
integers}.
Note that the proof of the existence of the increment $\mu^{(1)}$
amounts to proving that $a_{\mu}$ and $a_{\mu + \mu^{(1)}}$ are both
\emph{the same number in the set of indices~$a_s$}. A similar property
holds for the set of all the above sums of fixed increments
$\mu^{(k)}$ (for $k = 1,2,\dots,m$) with $\mu$, namely that they all
are the subscripts of \emph{the same number in the set of
indices~$a_s$}. In our running example they are all equal to~$1$ or
they are all equal to~$6$.
The proof of the existence of these increments \emph{is the content of the cube lemma}.
To serve the context of Hilbert's proof, we restate it using his formulas much as he visualized them in his paper.
\begin{thm}
Let $a_1,a_2,a_3,\ldots$ be an infinite sequence in which the general
term, $a_s$, is one of the $a$ positive numbers $1,2,\ldots,a$.
Moreover, let $m$ be any positive integer. Then we can always find $m$
positive integers $\mu^{(1)},\mu^{(2)},\ldots,\mu^{(m)}$ such that for
infinitely many integers $\mu$ the $2^m$ elements
\[
\setlength{\arraycolsep}{3pt}
\boxed{\begin{matrix}
a_ \mu & & & &
\\
a_{\mu + \mu^{(1)}} & & & &
\\
a_{\mu + \mu^{(2)}} & a_{\mu + \mu^{(1)} + \mu^{(2)}} & & &
\\
a_{\mu + \mu^{(3)}} & a_{\mu + \mu^{(1)} + \mu^{(3)}}
& a_{\mu + \mu^{(2)} + \mu^{(3)}}
& a_{\mu + \mu^{(1)} + \mu^{(2)} + \mu^{(3)}} &
\\
\vdots & \vdots & \vdots & \enspace\vdots \hspace*{2.4em} \cdots
& \ddots \hfill
\\
a_{\mu + \mu^{(m)}} & a_{\mu + \mu^{(1)} + \mu^{(m)}}
& a_{\mu + \mu^{(2)} + \mu^{(m)}} & \cdots \qquad \cdots
& a_{\mu + \mu^{(1)} + \mu^{(2)} +\cdots+ \mu^{(m)}}
\end{matrix}}
\]
are all equal to the same number $G$, where $G$ is one of the numbers
$1,2,\dots,a$.
\end{thm}
Thus we see that the statement \emph{arises naturally from the
necessity of proving that the coefficients of the negative powers of
$\sigma$ must be all equal to zero}. It is the strengthened form of
the pigeonhole principle we mentioned earlier. It is stronger because
it imposes a \emph{structure} on the distribution of infinitely many
common values $a_s$ whereas the pigeonhole principle only implies
their \emph{existence}.
\section{The Coefficients of the Negative Powers of $\sigma$ are Zero.}
\label{sec:neg}
Employing the idea above, we form the following $m$ linear
combinations:
\begin{align*}
\mathcal{P}^{(1)}(\sigma)
&= \mathcal{P}(\sigma) - \mathcal{P}(\sigma + \mu^{(1)}),
\\
\mathcal{P}^{(2)}(\sigma)
&= \mathcal{P}^{(1)}(\sigma) - \mathcal{P}^{(1)}(\sigma + \mu^{(2)}),
\\[-\jot]
\qquad &\qquad \vdots
\\
\mathcal{P}^{(m)}(\sigma)
&= \mathcal{P}^{(m-1)}(\sigma)
- \mathcal{P}^{(m-1)}(\sigma + \mu^{(m)}).
\end{align*}
It follows from what we proved earlier that each of these $m$ power
series also assumes integer values for infinitely many integral
arguments $\sigma = \mu$.
As we indicated, assuming the cube lemma, we obtain
\[
\mathcal{P}^{(2)}(\sigma) = \varphi_{m-3}(\sigma)
+ \mu^{(1)}\mu^{(2)}v(v + 1) \frac{D_{1v}}{\sigma^{v+2}} + \cdots \,,
\]
where $\varphi_{m-3}(\sigma)$ is a polynomial in $\sigma$ of degree
$m - 3$. \emph{After $m$ steps we finally arrive at the formula}
\[
\mathcal{P}^{(m)}(\sigma)
= \mu^{(1)} \mu^{(2)} \cdots \mu^{(m)} v(v + 1) \cdots
(v + m - 1) \frac{D_{1v}}{\sigma^{v+m}} + \cdots \,.
\]
Since this power series begins with negative powers of $\sigma$, we
can find a positive number $\Gamma$ such that for all values of
$\sigma$ that exceed $\Gamma$ the absolute value of the power series
\emph{will be smaller than one}. On the other hand, the power series
$\mathcal{P}^{(m)}(\sigma)$ is itself \emph{equal to an integer for
infinitely many arguments} $\sigma$ and since an integer whose
absolute value is less than one is necessarily equal to zero, it
follows that \emph{there are infinitely many integers $\sigma$ for
which the power series vanishes}.
But, our last formula shows us that
\[
\lim_{\sigma\to\infty}
\left[ \sigma^{v+m}\mathcal{P}^{(m)}(\sigma) \right]
= \mu^{(1)} \mu^{(2)} \cdots \mu^{(m)} v(v + 1) \cdots
(v + m - 1) D_{1v},
\]
where the expression on the right-hand side represents a quantity
\emph{different from zero}. This last result stands in
\emph{contradiction} with the conclusion proved above, and therefore
\emph{it is impossible that a nonzero coefficient $D_{1v}$ occurs
among the coefficients $D_{11},D_{12},D_{13},\dots$}. It follows in
the same way that \emph{also the coefficients
$D_{2i},D_{3i},D_{4i},\dots,D_{\nu i}$ must all be equal to zero}.
This completes the proof of the first condition of Theorem~\ref{thm:nub} about the Puiseux
expansions of the coefficients of $\pi_{a_s}(y,t)$.
\qed
\medskip
This step was the heart of Hilbert's proof and his paper's most
brilliant insight. The other parts are clever too, but in our opinion
this step best displays his penetrating originality.
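The telescoping differences above can be illustrated numerically. In the following sketch the series, the steps $\mu^{(1)} = 2$, $\mu^{(2)} = 3$, and the coefficient $D = 7$ are our own sample values (with $m = 2$, $v = 1$), not data from Hilbert's paper; the point is only that the $m$-fold difference annihilates the polynomial part of degree $m-1$ and that $\sigma^{v+m}\mathcal{P}^{(m)}(\sigma)$ tends to $\mu^{(1)}\mu^{(2)}\, v(v+1)\, D$.

```python
from fractions import Fraction

# Sample series (our own illustration): P(s) = 3s + 5 + 7/s, i.e. a
# polynomial part of degree m - 1 = 1 and leading tail coefficient D = 7.
def P(s):
    return 3 * s + 5 + Fraction(7, s)

# First and second differences, with steps mu1 = 2 and mu2 = 3.
def P1(s, mu1=2):
    return P(s) - P(s + mu1)

def P2(s, mu2=3):
    return P1(s) - P1(s + mu2)

# The polynomial part is gone after two differences, and the scaled tail
# sigma^(v+m) * P^(m)(sigma) approaches mu1 * mu2 * v * (v+1) * D = 84.
sigma = 10**6
limit_value = float(sigma**3 * P2(sigma))
print(limit_value)  # close to 84 = 2 * 3 * 1 * 2 * 7
```

Exact rational arithmetic avoids floating-point noise until the final conversion.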
\section{The Coefficients of the Polynomial Part are Rational Numbers.}
\label{sec:rat}
The next condition of Theorem~\ref{thm:nub} to be verified is:
\emph{the numerical coefficients in the polynomial part of the Puiseux
expansions of the coefficients of $\pi_{a_s}(y,t)$ are rational
numbers}. Our expansion has collapsed to the polynomial part, i.e.
\begin{equation}
\label{poly}
\mathcal{P}(\sigma)
= C_{11}\sigma^{m-1} + C_{12}\sigma^{m-2} +\cdots+ C_{1m},
\end{equation}
where the right-hand side assumes integer values for infinitely many
values of $\sigma$. If we set the right-hand side equal to these
integers for $m$ values of $\sigma$, we obtain $m$ linear equations
in the $m$ unknowns $C_{11},C_{12},\dots,C_{1m}$, which have a
\emph{rational solution} by Cramer's rule.
By Proposition~\ref{prop:Que2Zed}, getting ``rational'' suffices to prove the condition.
\qed
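The linear-algebra step can be carried out exactly over the rationals; the following sketch (our own, with a sample polynomial) solves the Vandermonde system by Gaussian elimination with `Fraction` arithmetic, so the computed coefficients are rational whenever the data are integers.

```python
from fractions import Fraction

# Sketch of the argument of this section: a polynomial of degree m - 1 that
# takes integer values at m integers has coefficients solving an integer
# (Vandermonde) linear system, hence rational coefficients by Cramer's rule.
def rational_coefficients(points, values):
    """Solve sum_j C_j * s^(m-1-j) = value for each (s, value) pair."""
    m = len(points)
    rows = [[Fraction(s) ** (m - 1 - j) for j in range(m)] + [Fraction(v)]
            for s, v in zip(points, values)]
    # Gauss--Jordan elimination over the rationals.
    for col in range(m):
        pivot = next(r for r in range(col, m) if rows[r][col] != 0)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        rows[col] = [x / rows[col][col] for x in rows[col]]
        for r in range(m):
            if r != col and rows[r][col] != 0:
                factor = rows[r][col]
                rows[r] = [x - factor * y for x, y in zip(rows[r], rows[col])]
    return [rows[j][m] for j in range(m)]

# Our sample: P(s) = s^2/2 + s/2 takes the integer values 1, 3, 6 at s = 1, 2, 3.
coeffs = rational_coefficients([1, 2, 3], [1, 3, 6])
print(coeffs)  # [Fraction(1, 2), Fraction(1, 2), Fraction(0, 1)]
```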
\section{Only Integral Powers of \lowercase{$t$}\,.}
\label{sec:int}
The final condition of Theorem~\ref{thm:nub} to be verified is:
\emph{the only nonzero terms in the polynomial part of the Puiseux
expansions of the coefficients of $\pi_{a_s}(y,t)$ are those with
integral powers of~$t$}.
Take $\tau_0$ to be a \emph{prime} number larger than $t_0$ in the statement of
Theorem~\ref{thm:goal} and recall that $\sigma\tau_0 = \tau.$
We now select $2^n - 2$ distinct prime numbers $p_{\ell}$ all greater than $\tau_0$.
For each of $\tau_0$ and the $p_{\ell}$, at least one among the $2^n - 2$
formal factors has coefficients in the above polynomial form
\eqref{poly}. However, since we have $2^n - 1$ primes and
only $2^n - 2$ formal factors, \emph{there must
exist at least one formal factor admitting a double representation by
these polynomials} \eqref{poly}. That is to say, there are distinct primes $p,p' > t_0$ such that
\begin{align*}
y_1 + y_2 +\cdots+ y_\nu
&= A_{11} p^{-(m-1)/k} \tau^{m-1}
+ A_{12}p^{-(m-2)/k} \tau^{m-2} +\cdots+ A_{1m}
\\
& \vdots
\\
y_1 y_2 \cdots y_{\nu}
&= A_{\nu1} p^{-(m-1)/k} \tau^{m-1}
+ A_{\nu2} p^{-(m-2)/k} \tau^{m-2} +\cdots+ A_{\nu m},
\\
\intertext{and simultaneously,}
y_1 + y_2 +\cdots+ y_\nu
&= A'_{11} (p')^{-(m-1)/k} \tau^{m-1}
+ A'_{12} (p')^{-(m-2)/k} \tau^{m-2} +\cdots+ A'_{1m}
\\
& \vdots
\\
y_1 y_2 \cdots y_\nu
&= A'_{\nu1} (p')^{-(m-1)/k} \tau^{m-1}
+ A'_{\nu2} (p')^{-(m-2)/k} \tau^{m-2} +\cdots+ A'_{\nu m} \,.
\end{align*}
\noindent
Since, by Puiseux's theorem, the coefficients of the powers of $\tau$ are unique, if we equate coefficients of equal powers of $\tau$ on the right-hand sides we obtain:
\[
\begin{aligned}
A_{11}p^{-(m-1)/k}
&= A'_{11} (p')^{-(m-1)/k}, & \cdots \quad A_{\nu 1} p^{-(m-1)/k} &= A'_{\nu 1} (p')^{-(m-1)/k}
\\
A_{12}p^{-(m-2)/k}
&= A'_{12} (p')^{-(m-2)/k}, & \cdots \quad A_{\nu 2} p^{-(m-2)/k} &= A'_{\nu 2} (p')^{-(m-2)/k}
\\
&\vdots & &\vdots
\\
A_{1m} &= A'_{1m}, & \quad A_{\nu m} &= A'_{\nu m} \, .
\end{aligned}
\]
\noindent
The final numerical point is that for any rational number $r$ that is not an integer, and distinct primes $p$ and $p'$, the ratio $p^r /p'^r$ is irrational. Suppose the ratio were equal to the rational number $c/d$ in lowest terms, and let us also write $r = u/v$ in lowest terms. Raising both sides to the $v$-th power and cross-multiplying then gives
\[
d^{v} p^u = c^{v} p'^u.
\]
For this to happen, $p$ must divide $c$. Let $a$ be the largest integer such that $p^a$ divides $c$. Comparing the power of $p$ on the two sides, we must have $u = av$ in order to cancel out all factors of $p$. This means that $u/v$ must be an integer, contradicting the condition on $r$.
We can apply this with $r = (m-1)/k$, and likewise with the other exponents of $p$ and $p'$ above, because we established in Section~\ref{sec:rat} that the coefficients $A_{ij}$ of the polynomial part are all rational. We conclude that \emph{the only coefficients that can be different from zero are those for which the corresponding exponent of
$\tau$ is an integer divisible by~$k$}.
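The irrationality claim can be corroborated by a brute-force search; the function below (our own) looks for small fractions $c/d$ with $(c/d)^v = p^u/(p')^u$ and, as the valuation argument predicts, finds none when $r$ is not an integer. A finite search is of course only a sanity check, not a proof.

```python
from fractions import Fraction
from math import gcd

# If p^r / p'^r were the rational c/d, then (c/d)^v = p^u / p'^u.  We search
# small numerators and denominators for such a fraction.
def has_rational_ratio(p, q, u, v, bound=100):
    assert gcd(u, v) == 1 and v > 1      # r = u/v in lowest terms, not an integer
    target = Fraction(p**u, q**u)        # the v-th power that c/d would need
    for c in range(1, bound):
        for d in range(1, bound):
            if Fraction(c, d) ** v == target:
                return True
    return False

# r = 1/2 with p = 2, p' = 3: no c/d with (c/d)^2 = 2/3 exists.
print(has_rational_ratio(2, 3, 1, 2))  # False
```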
It follows that the power
series of our system \emph{are polynomials in $\tau^k$ with rational
coefficients}, and if we put $\tau^k = t$, we obtain
\begin{align*}
y_1 + y_2 +\cdots+ y_\nu &= F_1(t)
\\[-\jot]
\vdots\qquad &
\\
y_1 y_2 \cdots y_\nu &= F_\nu(t),
\end{align*}
where $F_1(t),\dots,F_{\nu}(t)$ are \emph{polynomials in $t$
with rational coefficients}.
This completes the proof of the third condition of Theorem~\ref{thm:nub} about the Puiseux
expansions of the coefficients of some formal factor $\pi_{a_s}(y,t)$.
\qed
\bigskip
We note that the final formal factor $\pi_{a_s}(y,t)$ is not necessarily the same one that we started with.
All we needed was that it fulfills the three
conditions of Theorem~\ref{thm:nub}, and therefore the proof of Theorem~\ref{thm:goal} is complete.
\qed
\section{Later Proofs of the Irreducibility Theorem.}
\label{sec:later}
After Hilbert, many mathematicians offered other proofs of the
irreducibility theorem.
Most of the modern proofs of the (two-variable) irreducibility
theorem are based on that of Karl D\"orge~\cite{Dorge}, which sharpened an
idea of Thoralf Skolem~\cite{skolem}.
D\"orge proved it without using the
cube lemma and obtained a stronger result. To begin contrasting his and Hilbert's results, recall Hilbert's
statement that if $f\in\Zed[x,y_1,\ldots,y_s]$ is irreducible,
then for infinitely many
$t_1,\ldots,t_s \in \Zed$, $f(x,t_1,\ldots,t_s)$ is irreducible as a member of $\Zed[x]$.
Now let $|f|$ be the maximum of $8$ and the absolute values of the coefficients of $f$.
(The reason for insisting $|f| \ge 8$ is technical.)
A simplified statement of D\"orge's theorem is the following.
\begin{thm}\label{thm:Dorge}
There is a function $c(d,s)$ such that the following holds.
Let $f\in\Zed[x,y_1,\ldots,y_s]$ be irreducible of degree $d$.
Let $N>|f|^{c(d,s)}$.
Then the number of $(t_1,\ldots,t_s)\in \{-N,\ldots,N\}^s$ such that
$f(x,t_1,\ldots,t_s)$ is not irreducible is at most $|f|^{c(d,s)}N^{s-(1/2)}\log N$.
\end{thm}
\noindent
Note that the number of such $(t_1,\ldots,t_s)$ has density 0.
D\"orge actually presented a generalization of this theorem where he replaces
$\Zed$ with the integers of a finite extension of a number field.
D\"orge also showed (in fact, this was his primary interest) that if $f$, viewed
as an element of $\Zed[y_1,\ldots,y_s][x]$, has
Galois group $G$,
then the number of $(t_1,\ldots,t_s)\in \{-N,\ldots,N\}^s$ such that
$f(x,t_1,\ldots,t_s)$ does not have Galois group $G$ is at most
$|f|^{c(d,s)}N^{s-(1/2)}\log N$.
And again, he actually presented a generalization of this theorem
that replaces
$\Zed$ with the integers of a finite extension of a number field.
Lang~\cite{diogeom,diogeomfund} and
Prasolov~\cite{polynomials} have expositions of D\"orge's proof.
Franz~\cite{Franz} also gave a proof that does not use the cube lemma,
and this is expounded further by Schinzel~\cite{polynomialssel}. There
is another proof by Fried~\cite{fried}.
Serre~\cite{serre} recasts these results in geometric terms and
presents results about which groups can be Galois groups.
\section{Conclusion: Hilbert's World.}
\label{sec:conc}
We have shown the significance of the cube lemma in the context of
Hilbert's original paper. We return to the questions of how Hilbert might have
expanded it in the direction of Ramsey theory and why he didn't. That Hilbert was
the world's master in the relationship between number theory and logic until G\"odel emerged,
while Ramsey was motivated by a problem in logic, gives more reason to ask.
We close with a speculative answer: The world in which
Hilbert was immersed was and remains at least one level of exponentiation higher than the ground floor of Ramsey theory.
Recall from Section~\ref{sec:cube-lemma} that
we defined the ``Hilbert Cube Numbers'' $H(m,c)$ to be the least $H$ such that
every $c$-coloring of $\{1,\dots,H\}$ has a monochromatic $m$-cube.
The best known upper and lower bounds appear still to be those of
Gunderson and R\"odl~\cite{GR}:
\[
c^{(1 - \epsilon_c)(2^m - 1)/m} \leq H(m,c) \leq (2c)^{2^{m-1}},
\]
where $\epsilon_c \to 0$ as $c \to \infty$.\footnote{%
The same upper bound was
recently ascribed to \cite{GRS2} by Conlon, Fox, and Sudakov
\cite{CFS} but see also S\'andor~\cite{sandor} with different
asymptotics.
}
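For the smallest nontrivial case the cube numbers can be computed by brute force. The sketch below is our own; it uses the convention that the translates $x_i$ are positive but need not be distinct (so a monochromatic $2$-cube may have fewer than four distinct elements), and other conventions may shift the exact value. It also checks the computed value against the upper bound $(2c)^{2^{m-1}} = 16$ for $m = c = 2$.

```python
from itertools import product

# Does a given 2-coloring of {1..n} contain a monochromatic 2-cube
# {a, a + x1, a + x2, a + x1 + x2} with 1 <= x1 <= x2?
def has_mono_2cube(coloring, n):
    for a in range(1, n + 1):
        for x1 in range(1, n - a + 1):
            for x2 in range(x1, n - a - x1 + 1):
                cube = {a, a + x1, a + x2, a + x1 + x2}
                if len({coloring[e - 1] for e in cube}) == 1:
                    return True
    return False

# H(2, c): the least n such that EVERY c-coloring of {1..n} has a
# monochromatic 2-cube.  Exhaustive over all c^n colorings.
def hilbert_number(c=2, limit=16):
    for n in range(1, limit + 1):
        if all(has_mono_2cube(col, n) for col in product(range(c), repeat=n)):
            return n

h22 = hilbert_number()
print(h22, "<=", (2 * 2) ** (2 ** (2 - 1)))  # value vs. the bound (2c)^(2^(m-1))
```

Under our convention a cube with $x_1 = x_2$ degenerates to a three-term arithmetic progression, so $H(2,2)$ is at most the van der Waerden number $W(3,2) = 9$.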
This establishes doubly-exponential growth of $H(m,c)$ in $m$ for any fixed $c$, in
contrast to the singly-exponential growth of the Ramsey numbers, in particular
\[
R_2(m) \leq \binom{2m-2}{m-1} \leq \frac{4^m}{\sqrt{m}}.
\]
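The second inequality in this display is pure arithmetic and can be verified exactly by squaring, which turns it into $\binom{2m-2}{m-1}^2 \, m \leq 16^m$; a quick integer-only check:

```python
from math import comb

# Exact check of C(2m-2, m-1) <= 4^m / sqrt(m): squaring avoids floating
# point, since the inequality becomes C(2m-2, m-1)^2 * m <= 16^m.
for m in range(1, 41):
    assert comb(2 * m - 2, m - 1) ** 2 * m <= 16 ** m
print("binomial bound verified for m = 1..40")
```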
Moreover, while Erd\H{o}s and Tur\'an~\cite{et} proved that $H(2,c)$
is asymptotic to $c^2$, much less is known about $H(m,c)$ for fixed $m \geq 3$
(see remarks in \cite{BCEG}, which still appear to be in force), and in
\cite{CFS} it is noted that $H(m,2)$ depends on unknown properties of van
der Waerden numbers. This all puts $H(m,c)$ on a higher and harder plane
than analogous cases of $R_c(m)$ in the Ramsey world. So what world was Hilbert in?
The years 1890--1893 saw the publication of Hilbert's great
foundational works in commutative algebra, including his basis theorem
and \textit{Nullstellensatz} \cite{Hilbert1890,Hilbert1893}. A common
thread is the notion of \emph{regularity}: given
a finitely-specified system of elements that may have arbitrarily
large values of some parameter $t$ (such as the degrees of certain polynomials
over a ring), there is some integer $t_0$ such that, for all
$t \geq t_0$, the system conforms to a simple description. Hilbert
first proved his basis theorem nonconstructively. Later was it shown
that the growth of the relevant $t_0$ (in terms of the degrees $d$ of
basis elements or the $n$-variable equations in the
\textit{Nullstellensatz}) is doubly-exponential, of order up to
$d^{2^n}$. We could try to equate the growth of $H(m,c)$ with
that of Ramsey numbers by regarding the cube as a graph of size $M = 2^m$ and saying
$H(m,c)$ is ``singly exponential in $M$.'' But $m$, not $M$, is still the
natural parameter, just as with $n$, not $2^n$, in the \textit{Nullstellensatz}.
Hence our answer is simply that Hilbert was occupied with
more rarefied levels of algebra and analysis where nonconstructive methods were often more salient
than double-exponential effective ones. Irreducibility of polynomials plays into irreducible
varieties and primary decompositions of polynomial ideals, which
Hilbert's student Emanuel Lasker (the world chess champion) and
colleague
Emmy Noether built upon over the next two decades, producing great
work that became increasingly algorithmic.
In the meantime, Hilbert swooped down to the
ground-level task of formalizing Euclid's geometry in the later 1890s,
which presaged his work on formal logic.
The divide in purpose and growth rate should not deter us from
appreciating the cube numbers and seeking other uses for them. That is
why we have devoted this paper to expounding their original use and
context. We have highlighted how the cube lemma completed an insight
about estimates by infinite series. We hope that our exposition will
foster a greater appreciation of combinatorial underpinnings of more
``analytical'' areas of mathematics.
\\
\\
\textbf{Acknowledgements}.
We thank Joseph P. Varilly for formatting help and the referees for helpful comments that greatly improved the exposition and clarity.
\title{Spectrum of twists of Cayley and Cayley sum graphs}
\begin{abstract}
Let $G$ be a finite group with $|G|\geq 4$ and $S$ be a subset of $G$. Given an automorphism $\sigma$ of $G$, the twisted Cayley graph $C(G, S)^\sigma$ (resp. the twisted Cayley sum graph $C_\Sigma(G, S)^\sigma$) is defined as the graph having $G$ as its set of vertices and the adjacent vertices of a vertex $g\in G$ are of the form $\sigma(gs)$ (resp. $\sigma(g^{-1} s)$) for some $s\in S$. If the twisted Cayley graph $C(G, S)^\sigma$ is undirected and connected, then we prove that the nontrivial spectrum of its normalised adjacency operator is bounded away from $-1$ and this bound depends only on its degree, the order of $\sigma$ and the vertex Cheeger constant of $C(G, S)^\sigma$. Moreover, if the twisted Cayley sum graph $C_\Sigma(G, S)^\sigma$ is undirected and connected, then we prove that the nontrivial spectrum of its normalised adjacency operator is bounded away from $-1$ and this bound depends only on its degree and the vertex Cheeger constant of $C_\Sigma(G, S)^\sigma$. We also study these twisted graphs with respect to anti-automorphisms, and obtain similar results. Further, we prove an analogous result for the Schreier graphs satisfying certain conditions.
\end{abstract}
\section{Introduction}
\subsection{Motivation}
The study of the spectrum of graphs is an important theme in the theory of expanders. It was remarked by Breuillard--Green--Guralnick--Tao that the eigenvalues of the normalised Laplacian operator of non-bipartite, finite Cayley graphs are bounded away from $2$ (see \cite[Appendix E]{BGGTExpansionSimpleLie}). Recently, the first author established an explicit upper bound.
Given a subset $S$ of a finite group $G$ with $|S| = d\geq 1$, the associated Cayley graph $C(G, S)$ has $G$ as its set of vertices and for $x, y\in G$, there is an edge from $x$ to $y$ if $y = x s$ for some $s\in S$. This graph is undirected if and only if $S$ is symmetric. In \cite[Theorem 1.4]{BiswasCheegerCayley} (cf. \cite[Theorem 2.11]{CheegerCayleySum}), it is established that if the Cayley graph $C(G, S)$ is undirected and connected, then the nontrivial spectrum of its normalised adjacency operator lies in the interval
$$\left( -1 + \frac{h^4}{2^9d^8}
,
1 - \frac{h^2}{2d^2}
\right]$$
where $h$ denotes the vertex Cheeger constant of $C(G, S)$.
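This interval can be sanity-checked numerically on a small example; the following sketch is our own, not taken from the cited papers. It uses the $9$-cycle $C(\mathbb{Z}/9\mathbb{Z}, \{1, -1\})$, which is connected and, being an odd cycle, non-bipartite, and computes the vertex Cheeger constant by brute force.

```python
import numpy as np
from itertools import combinations

# The 9-cycle as a Cayley graph C(Z/9Z, {1, -1}).
n, S = 9, (1, -1)
d = len(S)
A = np.zeros((n, n))
for x in range(n):
    for s in S:
        A[x][(x + s) % n] += 1
T = A / d  # normalised adjacency operator

def vertex_cheeger():
    # Brute force over all nonempty subsets V1 with |V1| <= n/2.
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for V1 in combinations(range(n), k):
            nbhd = {(x + s) % n for x in V1 for s in S}
            best = min(best, len(nbhd - set(V1)) / k)
    return best

h = vertex_cheeger()                  # equals 1/2 for the 9-cycle
eigs = sorted(np.linalg.eigvalsh(T))  # eigs[-1] = 1 is the trivial eigenvalue
assert eigs[0] > -1 + h**4 / (2**9 * d**8)
assert eigs[-2] <= 1 - h**2 / (2 * d**2)
print("spectral interval verified for the 9-cycle")
```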
It turns out that a similar result holds for the Cayley sum graphs, which are classical combinatorial objects, e.g., see \cite{HandBookCombi}. The Cayley sum graph $C_\Sigma(G, S)$ has $G$ as its set of vertices, and for $x, y\in G$, there is an edge from $x$ to $y$ if $y = x^{-1} s$ for some $s\in S$. This graph is undirected if and only if $S$ is closed under conjugation (see \cite[Lemma 2.6]{CheegerCayleySum}). In \cite[Theorem 1.3]{CheegerCayleySum}, it is established that if the Cayley sum graph $C_\Sigma(G, S)$ is undirected and connected, then the nontrivial spectrum of its normalised adjacency operator lies in the interval
$$\left( -1 + \frac{h_\Sigma^4}{2^9d^8}
,
1 - \frac{h_\Sigma^2}{2d^2}
\right]$$
where $h_\Sigma$ denotes the vertex Cheeger constant of $C_\Sigma(G, S)$.
\subsection{Results obtained}
Given a group automorphism $\sigma$ of $G$, one can consider variants of the Cayley graph and the Cayley sum graph, viz., the twisted Cayley graph $C(G, S)^\sigma$ and the twisted Cayley sum graph $C_\Sigma (G, S)^\sigma$. The twisted Cayley graph $C(G, S)^\sigma$ has $G$ as its set of vertices, and for $x, y\in G$, there is an edge from $x$ to $y$ if $y = \sigma(xs)$ for some $s\in S$. The twisted Cayley sum graph $C_\Sigma (G, S)^\sigma$ has $G$ as its set of vertices, and for $x, y\in G$, there is an edge from $x$ to $y$ if $y = \sigma(x^{-1} s)$ for some $s\in S$.
Note that the twisted Cayley graphs, and the twisted Cayley sum graphs provide examples of graphs which are neither Cayley graphs, nor Cayley sum graphs (if we focus on the twists by automorphisms of order two only). In fact, there are twisted Cayley graphs which are isomorphic to no Cayley graphs, no Cayley sum graphs, no twisted Cayley sum graphs, and there are twisted Cayley sum graphs which are isomorphic to no Cayley graphs, no Cayley sum graphs, no twisted Cayley graphs, as the following examples illustrate.
Let $p$ be an odd prime. Let $D_{2p}$ denote the dihedral group of order $2p$. Let $s$ denote an element of $D_{2p}$ of order two. Let $\sigma$ be the involution of $D_{2p}$ which fixes $s$ and sends any element of order $p$ to its inverse. Then the twisted Cayley graph $C(D_{2p}, \{s\})^\sigma$ is isomorphic to no Cayley graph, no Cayley sum graph, no twisted Cayley sum graph on any group of order $2p$.
Let $\tau$ denote the involution of $\ensuremath{\mathbb{Z}}/2p\ensuremath{\mathbb{Z}}$ of order two. Then, for any symmetric subset $S$ of $\ensuremath{\mathbb{Z}}/2p\ensuremath{\mathbb{Z}}$ containing the identity element and having size $\leq p-1$, the twisted Cayley sum graph $C_\Sigma(\ensuremath{\mathbb{Z}}/2p\ensuremath{\mathbb{Z}}, S)^\tau$ is isomorphic to no Cayley graph, no Cayley sum graph, no twisted Cayley graph on any group of order $2p$.
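The first example can be made concrete in a few lines; the encoding below (pairs $(i, e)$ standing for $r^i f^e$, with $p = 5$) is our own. It checks that $\sigma$ is an automorphism of order two and that the twisted Cayley graph $C(D_{2p}, \{s\})^\sigma$ is undirected; since $|S| = 1$, the graph is a perfect matching pairing $r^i$ with $r^{-i}f$.

```python
# D_{2p} with p = 5, elements (i, e) standing for r^i f^e.
p = 5
G = [(i, e) for i in range(p) for e in range(2)]

def mul(a, b):
    # r^i f^e * r^j f^e' = r^(i + (-1)^e j) f^(e + e')
    (i, e), (j, e2) = a, b
    return ((i + (-1) ** e * j) % p, (e + e2) % 2)

def sigma(a):
    # sigma fixes the reflection part and inverts the rotation part.
    return ((-a[0]) % p, a[1])

s = (0, 1)  # the reflection f, an element of order two

# sigma is an automorphism of order two ...
assert all(sigma(mul(a, b)) == mul(sigma(a), sigma(b)) for a in G for b in G)
assert all(sigma(sigma(a)) == a for a in G)

# ... and the neighbour map g -> sigma(g s) is an involution, so the
# twisted Cayley graph C(D_2p, {s})^sigma is undirected.
nbr = {g: sigma(mul(g, s)) for g in G}
assert all(nbr[nbr[g]] == g for g in G)
print("C(D_10, {f})^sigma is undirected and 1-regular")
```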
In this article, one of our aims is to show that the spectra of the twisted Cayley graph $C(G, S)^\sigma$ and the twisted Cayley sum graph $C_\Sigma(G,S)^\sigma$ are bounded away from $-1$.
\begin{theorem}\label{Thm:Bdd}
Let $S$ be a subset of a finite group $G$ with $|S|= d$. Suppose $\sigma$ is an automorphism of $G$.
\begin{enumerate}
\item
Suppose $\sigma^{2}$ is the trivial automorphism of $G$.
If the twisted Cayley graph $C(G, S)^\sigma$ is connected and undirected and $|G| \geq 4$, then the nontrivial spectrum of its normalised adjacency operator lies in the interval
$$\left( -1 + \frac{h_\sigma^4}{2^{12}d^8}
,
1 - \frac{h_\sigma^2}{2d^2}
\right]$$
where $h_\sigma$ denotes the vertex Cheeger constant of $C(G, S)^\sigma$.
\item
Suppose $\sigma^{2k}$ is the trivial automorphism of $G$, where $k\geq 1$ is an odd integer.
If the twisted Cayley graph $C(G, S)^\sigma$ is connected and undirected and $|G| \geq 4$, then the nontrivial spectrum of its normalised adjacency operator lies in the interval
$$\left(
\left(
-1 + \frac {1}{2^{12} d^{8k}}
\left(
\frac 12
\left(
1 -
\left(
1 - \frac{h_\sigma^2}{2d^2}
\right)^k
\right)
\right)
^4
\right)^{1/k}
,
1 - \frac{h_\sigma^2}{2d^2}
\right]$$
where $h_\sigma$ denotes the vertex Cheeger constant of $C(G, S)^\sigma$.
\item If the twisted Cayley sum graph $C_\Sigma (G, S)^\sigma$ is connected and undirected and $|G| \geq 4$, then the nontrivial spectrum of its normalised adjacency operator lies in the interval
$$\left( -1 + \frac{h_{\Sigma, \sigma}^4}{2^{12}d^8}
,
1 - \frac{h_{\Sigma, \sigma}^2}{2d^2}
\right]$$
where $h_{\Sigma, \sigma}$ denotes the vertex Cheeger constant of $C_\Sigma (G, S)^\sigma$.
\end{enumerate}
\end{theorem}
More generally, we study the spectrum of undirected, connected, non-bipartite graphs carrying a suitable action of a group and establish Theorem \ref{thmPrincipal}. Using Theorem \ref{thmPrincipal}, we prove Theorem \ref{Thm:Bdd}. The proof of Theorem \ref{thmPrincipal} is motivated by the strategy used in \cite[Appendix E]{BGGTExpansionSimpleLie}.
In Section \ref{Sec:TwistsAnti}, we consider twisted Cayley graphs and twisted Cayley sum graphs with respect to an anti-automorphism of the underlying group. We establish that if these graphs are undirected and connected, then the nontrivial spectrum of their normalised adjacency operators are bounded away from $-1$ (see Theorem \ref{Thm:BddAnti}).
We consider Schreier graphs in Section \ref{Sec:Schreier}. We prove, under certain hypotheses, that the nontrivial eigenvalues of the normalised adjacency operator of a given Schreier graph are bounded away from $-1$ (see Theorem \ref{Thm:Schreier}).
\subsection{Acknowledgements}
We wish to thank Emmanuel Breuillard for a number of helpful discussions during the opening colloquium of the M\"unster Mathematics Cluster,
and the MFO for their hospitality, where a part of this work was initiated. The first author is supported by the ISF Grant no. 662/15 at the Technion. The second author would like to acknowledge the Initiation Grant from the Indian Institute of Science Education and Research Bhopal and the INSPIRE Faculty Award from the Department of Science and Technology, Government of India.
\section{Preliminaries}
In the following, all the graphs considered are undirected. However, these graphs may contain multiple edges and even multiple loops at certain vertices. Given a finite $d$-regular multi-graph $\mathbb{G} = (V,E)$ having $V$ as its set of vertices and $E$ as its multiset of edges, we have the normalised adjacency operator $T$ of size $|V|\times |V|$. The normalised Laplacian operator of $\mathbb{G}$ is defined by
$$L:= I_{|V|} - T,$$
where $I_{|V|}$ denotes the identity matrix of size $|V|\times |V|$. Let $n$ denote the number of elements of $V$. Denote the eigenvalues of $T$ and the eigenvalues of $L$ by $\lbrace t_{i} : i= 1, \cdots, n\rbrace $ and $\lbrace \lambda_{i} : i= 1, \cdots, n\rbrace $ respectively such that $\lambda_{i} = 1 - t_{i}$ and
$$
0 = \lambda_{1} \leqslant \lambda_{2} \leqslant \cdots \leqslant \lambda_{n-1}\leqslant \lambda_{n} \leqslant 2.
$$
Let $\mathbb{G} = (V,E)$ be a multi-graph. For a subset $V_{1}\subseteq V$, its neighbourhood in $\mathbb G$ is denoted by $N(V_{1})$ and is defined as
$$N(V_{1}) := \lbrace v\in V : (v, v_{1})\in E \text{ for some } v_{1}\in V_{1}\rbrace.$$
The boundary of $V_{1}$ is defined as $ \partial(V_{1}) := N(V_{1})\backslash V_{1}$.
\begin{definition}[Vertex Cheeger constant]
The vertex Cheeger constant $h(\mathbb{G})$ of a multi-graph $\mathbb{G} = (V,E)$ is defined as
$$h(\mathbb{G}) := \inf \left\lbrace \frac{|\partial(V_{1})|}{|V_{1}|} : \emptyset \neq V_{1}\subseteq V, |V_{1}|\leqslant \frac{|V|}{2} \right\rbrace.$$
\end{definition}
We recall the notion of expander graphs as stated in \cite{AlonEigenvalueExpand}.
\begin{definition}[$(n,d,\varepsilon)$-expander]
\label{vexp}
Let $\varepsilon>0$. An $(n,d,\varepsilon)$-expander is a graph $(V,E)$ on $n$ vertices, having maximal degree $d$, such that for every set $\emptyset \neq V_{1}\subseteq V$ satisfying $|V_{1}|\leqslant \frac{|V|}{2}$, the inequality $|\partial(V_{1})|\geqslant \varepsilon|V_{1}|$ holds (equivalently, $h((V, E))\geqslant \varepsilon).$
\end{definition}
The degree of a vertex of a multi-graph is the number of half-edges adjacent to it; with our convention, a loop at a vertex contributes one to its degree. A multi-graph is called $r$-regular if each vertex has degree $r$. Apart from the notion of vertex expansion as in Definition \ref{vexp}, there is the notion of edge expansion.
\begin{definition}[Edge expansion]
Let $\mathbb{G} = (V,E)$ be a $d$-regular multi-graph with vertex set $V$ and edge multiset $E$. For any nonempty subset $V_1$ of $V$, the edge boundary $E(V_{1},V\backslash V_{1})$ of $V_{1}$ is defined as the multiset
$$E(V_{1},V\backslash V_{1}) := \lbrace (v_{1},v_2)\in E: v_{1}\in V_{1}, v_2\in V\backslash V_{1} \rbrace ,$$
and the edge expansion ratio $\phi(V_{1})$ of $V_1$ is defined as
$$\phi(V_{1}) := \frac{|E(V_{1},V\backslash V_{1})|}{d|V_{1}|}.$$
\end{definition}
\begin{definition}[Edge Cheeger constant]
The edge Cheeger constant $\mathfrak{h}(\mathbb{G})$ of a multi-graph $\mathbb G = (V, E)$ is defined by
$$\mathfrak{h}(\mathbb{G}):= \inf_{\emptyset \neq V_{1}\subseteq V, |V_{1}|\leqslant |V|/2} \phi(V_{1}).$$
\end{definition}
The two Cheeger constants are related by the following lemma.
\begin{lemma}
\label{Lemma:VertexEdgeCons}
For a $d$-regular multi-graph $\mathbb{G} = (V,E)$, the inequalities
$$ \frac{h(\mathbb{G})}{d} \leqslant \mathfrak{h}(\mathbb{G}) \leqslant h(\mathbb{G})$$
hold.
\end{lemma}
\begin{proof}
For $\emptyset \neq V_{1}\subseteq V$, consider the map $$\psi:E(V_{1},V\backslash V_{1}) \rightarrow \partial(V_{1}) \text{ given by }(v_{1},v_2)\mapsto v_{2}.$$
This map is surjective, and hence the first inequality follows. For the second inequality, note that the fibre of any point of $\partial(V_1)$ under the map $\psi$ contains at most $d$ elements.
\end{proof}
The following proposition relates the edge Cheeger constant of a multi-graph with the second smallest eigenvalue of its Laplacian operator. It is the graph-theoretic version of the corresponding inequalities for the Laplace--Beltrami operator on compact Riemannian manifolds. It was first established by Cheeger \cite{CheegerLowerBddSmallest} (the lower bound) and by Buser \cite{BuserNoteIsoperimetric} (the upper bound). The discrete version stated below (Proposition \ref{Prop:chin}) was proved by Alon and Milman \cite{AlonMilmanIsoperiIneqSupConcen}.
\begin{proposition}[Discrete Cheeger--Buser inequality]
\label{Prop:chin}
Let $\mathbb{G} = (V,E)$ be a finite $d$-regular multi-graph.
Let $\mathfrak{h}(\mathbb{G})$ denote its edge Cheeger constant and $\lambda_{2}$ denote the second smallest eigenvalue of its normalised Laplacian operator. Then
$$ \frac{\mathfrak{h}(\mathbb{G})^{2}}{2} \leqslant \lambda_{2} \leqslant 2\mathfrak{h}(\mathbb{G}).$$
\end{proposition}
\begin{proof}
See \cite[Propositions 4.2.4, 4.2.5]{LubotzkyDiscreteGroups} or
\cite{ChungLaplacianOfGraph}.
\end{proof}
\section{Generalities}
Let $(V, E)$ be a finite graph (which may contain multiple edges, and even multiple loops at certain vertices) of degree $d\geq 1$.
The neighbourhood of a subset $V'$ of $V$ in $(V, E)$ is denoted by $\ensuremath{\mathcal{N}}(V')$. Assume that there exist permutations $\theta_1, \cdots, \theta_d: V\to V$ such that the vertices $v, \theta_i(v)$ are adjacent in $(V, E)$ for any $v\in V$ and $1\leq i \leq d$, and $\ensuremath{\mathcal{N}}(v)$ is equal to $\cup_{i=1}^d \{\theta_i(v)\}$ for any $v\in V$. For a subset $V'$ of $V$, denote the subset $\theta_i(V')$ of $\ensuremath{\mathcal{N}}(V')$ by $\ensuremath{\mathcal{N}}^i(V')$.
\begin{proposition}
\label{Prop:AExists}
Assume that the graph $(V, E)$ is undirected. Suppose $|V|\geq 4$, the graph $(V, E)$ is an $\varepsilon$-vertex expander for some $\varepsilon>0$ and its normalised adjacency operator has an eigenvalue in the interval $(-1, -1+\zeta]$ for some $\zeta$ satisfying $0<\zeta \leq \frac{\varepsilon^2}{4d^4}$. Then for some subset $A$ of $V$, the inequalities
$$\left(\frac 1{2 + \beta + \frac{d\beta}{\varepsilon}}\right) |V| \leq |A| \leq \frac 12 |V|$$
hold with $\beta = d^2 \sqrt{2\zeta (2-\zeta)}$. Let $\tau$ be an automorphism of the set $V$ such that
$$
\ensuremath{\mathcal{N}} (\ensuremath{\mathcal{N}}(\tau(A))) \subseteq \tau(\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{N}}(A))).
$$
If
$$\beta < \frac{\varepsilon^2}{2d(2d+1)},$$
then exactly one of the inequalities
$$|A \cap \tau(A)| \leq \frac{d \beta}{\varepsilon^2} (\varepsilon + d + 2)|A|,
\quad
|A \cap \tau(A)| \geq \left( 1 - \frac {d \beta}{ \varepsilon^2} ( \varepsilon + d + 2) \right) |A|$$
holds.
\end{proposition}
\begin{proof}
Since the graph $(V, E)$ is an $\varepsilon$-vertex expander with $\varepsilon>0$, it follows that for some vertex $v\in V$ and for some $1\leq i \leq d$, $\theta_i(v) \neq v$. Using $|V| \geq 4$, one obtains
\begin{align*}
\varepsilon |\{v, \theta_i(v)\}|
& \leq |(\ensuremath{\mathcal{N}}(v) \cup \ensuremath{\mathcal{N}}(\theta_i(v)) ) \setminus \{v, \theta_i(v)\}| \\
& \leq |\ensuremath{\mathcal{N}}(v) \setminus \{\theta_i(v)\}| + |\ensuremath{\mathcal{N}}(\theta_i(v))\setminus \{v\}|\\
& \leq 2(d-1),
\end{align*}
which implies
\begin{equation}
\label{Eqn:EpsilonBoundSch}
\varepsilon \leq d-1,
\end{equation}
and hence $\zeta < 1$. Let $T$ denote the normalised adjacency operator of the graph $(V, E)$. Since $T$ has an eigenvalue in $(-1, -1+\zeta]$ and $\zeta <1$, it follows that $T^2$ has an eigenvalue $\nu$ in $[(1-\zeta)^2, 1)$.
Consider the undirected multi-graph $\ensuremath{\mathcal{M}}$ (which may contain multiple edges, and multiple loops) with $V$ as its set of vertices and its edges are obtained by drawing an edge from $v$ to each element of $\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{N}}(\{v\}))$ (considered as a multiset). In other words, $\ensuremath{\mathcal{M}}$ is the undirected multi-graph having $V$ as its set of vertices and $T^2$ as its normalised adjacency operator. Thus the second largest eigenvalue of the normalised adjacency operator of $\ensuremath{\mathcal{M}}$ is $\geq \nu\geq (1-\zeta)^2 = 1 - \zeta(2-\zeta)$. Hence the second smallest eigenvalue of the normalised Laplacian operator of $\ensuremath{\mathcal{M}}$ is $\leq \zeta(2-\zeta)$. By the discrete Cheeger--Buser inequality (Proposition \ref{Prop:chin}), it follows that the edge Cheeger constant of $\ensuremath{\mathcal{M}}$ satisfies
$$\frac 12 \ensuremath{\mathfrak{h}}(\ensuremath{\mathcal{M}})^2 \leq \zeta(2-\zeta),$$
which yields
$$\ensuremath{\mathfrak{h}}(\ensuremath{\mathcal{M}})\leq \sqrt{2\zeta(2-\zeta)}.$$
Since $(V, E)$ has degree $d$, from Lemma \ref{Lemma:VertexEdgeCons}, it follows that the vertex Cheeger constant of $\ensuremath{\mathcal{M}}$ satisfies
$$h(\ensuremath{\mathcal{M}}) \leq
d^2 \ensuremath{\mathfrak{h}}(\ensuremath{\mathcal{M}}) \leq d^2\sqrt{2\zeta(2-\zeta)}.$$
This implies that for some nonempty subset $A$ of $V$ with $|A| \leq \frac 12 |V|$,
$$
|\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{N}}(A))\setminus A|
\leq |A| d^2 \sqrt{2\zeta(2-\zeta)}
= |A| \beta
$$
holds. Note that $|A \cup \ensuremath{\mathcal{N}}(A)| \geq \frac{|V|}{2}$. Otherwise, we obtain
\begin{align*}
\varepsilon|A|
& \leq \varepsilon |A \cup \ensuremath{\mathcal{N}}(A)| \\
& \leq |\ensuremath{\mathcal{N}}(A \cup \ensuremath{\mathcal{N}}(A)) \setminus (A \cup \ensuremath{\mathcal{N}}(A))|\\
& \leq |\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{N}}(A))\setminus A| \\
& \leq |A| d^2 \sqrt{2\zeta(2-\zeta)}\\
& = |A| \beta.
\end{align*}
This implies
$\varepsilon\leq d^2 \sqrt{2\zeta(2-\zeta)} < d^2 \sqrt{4\zeta}$, which contradicts the assumption that $\zeta \leq \frac{\varepsilon^2}{4d^4}$. It follows that
\begin{align*}
\varepsilon |V \setminus (A \cup \ensuremath{\mathcal{N}}(A))|
& \leq \varepsilon|(A \cup \ensuremath{\mathcal{N}}(A))^c|\\
& \leq |\ensuremath{\mathcal{N}}((A \cup \ensuremath{\mathcal{N}}(A))^c) \setminus (A \cup \ensuremath{\mathcal{N}}(A))^c| \\
& \leq |(A \cup \ensuremath{\mathcal{N}}(A)) \cap \ensuremath{\mathcal{N}}((A \cup \ensuremath{\mathcal{N}}(A))^c)| \\
& \leq \sum_{i = 1}^d
|(A \cup \ensuremath{\mathcal{N}}(A)) \cap \theta_i((A \cup \ensuremath{\mathcal{N}}(A))^c)| \\
& \leq \sum_{i = 1}^d |\ensuremath{\mathcal{N}}(A \cup \ensuremath{\mathcal{N}}(A)) \cap (A \cup \ensuremath{\mathcal{N}}(A))^c|\\
& \leq d |\ensuremath{\mathcal{N}}(A \cup \ensuremath{\mathcal{N}}(A)) \cap (A \cup \ensuremath{\mathcal{N}}(A))^c|\\
& \leq d |\ensuremath{\mathcal{N}}(A \cup \ensuremath{\mathcal{N}}(A)) \setminus (A \cup \ensuremath{\mathcal{N}}(A))|\\
& \leq d|\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{N}}(A))\setminus A| \\
& \leq d|A| d^2 \sqrt{2\zeta(2-\zeta)}\\
& = d |A| \beta,
\end{align*}
which implies
\begin{align*}
|V|
& \leq \frac{d\beta}{\varepsilon}|A| + |A \cup \ensuremath{\mathcal{N}}(A)|\\
& \leq \frac{d\beta}{\varepsilon}|A| + |A| + |\ensuremath{\mathcal{N}}(A)| \\
& \leq \frac{d\beta}{\varepsilon}|A| + |A| + |\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{N}}(A))| \\
& \leq \frac{d\beta}{\varepsilon}|A| + 2|A| + |\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{N}}(A))\setminus A|\\
& \leq \frac{d\beta}{\varepsilon}|A| + 2|A| + \beta|A|,
\end{align*}
and hence
$$\frac{|V|}{2 + \beta + \frac{d\beta}{\varepsilon}} \leq |A| \leq \frac{|V|}{2}.$$
Note that the inequalities
$$
\frac {2d\beta} { \varepsilon^2} ( \varepsilon + d + 2)
\leq \frac {2d\beta} { \varepsilon^2}( d -1 + d + 2)
= \frac {2d\beta} { \varepsilon^2} ( 2d + 1)
<1
$$
imply that
$$\frac{d \beta}{\varepsilon^2} (\varepsilon + d + 2)
<
1 - \frac {d \beta}{ \varepsilon^2} ( \varepsilon + d + 2).$$
Hence it suffices to show that one of the inequalities
$$|A \cap (\tau(A))| \leq \frac{d \beta}{\varepsilon^2} (\varepsilon + d + 2)|A|,
\quad
|A \cap (\tau(A))| \geq \left( 1 - \frac {d \beta}{ \varepsilon^2} ( \varepsilon + d + 2) \right) |A|$$
holds.
Let $B = A\Delta(\tau(A))^c$. Let $\ensuremath{\mathrm{id}}$ denote the identity map from $V$ to $V$. Note that
\begin{align*}
|\ensuremath{\mathcal{N}}^i(B) \Delta B|
& \leq |\ensuremath{\mathcal{N}}^i(A) \Delta \ensuremath{\mathcal{N}}^i((\tau(A))^c) \Delta A \Delta(\tau(A))^c| \\
& \leq |\ensuremath{\mathcal{N}}^i(A) \Delta \ensuremath{\mathcal{N}}^i((\tau(A))^c) \Delta A^c \Delta \tau(A)| \\
& \leq |\ensuremath{\mathcal{N}}^i(A) \Delta A^c \Delta \ensuremath{\mathcal{N}}^i((\tau(A))^c) \Delta \tau(A)| \\
& \leq |\ensuremath{\mathcal{N}}^i(A) \Delta A^c| + |\ensuremath{\mathcal{N}}^i((\tau(A))^c) \Delta \tau(A)|\\
& = |\ensuremath{\mathcal{N}}^i(A) \Delta A^c| + |\ensuremath{\mathcal{N}}^i(\tau(A)) \Delta \tau(A^c)|\\
& = \sum_{\phi\in \{\ensuremath{\mathrm{id}}, \tau\}} |\ensuremath{\mathcal{N}}^i(\phi(A)) \Delta \phi(A^c)|\\
& = \sum_{\phi\in \{\ensuremath{\mathrm{id}}, \tau\}} \left( |\ensuremath{\mathcal{N}}^i(\phi(A))| + |\phi(A^c)| - 2|\ensuremath{\mathcal{N}}^i(\phi(A)) \cap \phi(A^c)|\right)\\
& = \sum_{\phi\in \{\ensuremath{\mathrm{id}}, \tau\}} \left( |V| - 2|\ensuremath{\mathcal{N}}^i(\phi(A)) \cap \phi(A^c)|\right)\\
& = \sum_{\phi\in \{\ensuremath{\mathrm{id}}, \tau\}} \left( |V| - 2|\ensuremath{\mathcal{N}}^i(\phi(A))| + 2|\ensuremath{\mathcal{N}}^i(\phi(A))| - 2|\ensuremath{\mathcal{N}}^i(\phi(A)) \cap \phi(A^c)|\right)\\
& = \sum_{\phi\in \{\ensuremath{\mathrm{id}}, \tau\}} \left( |V| - 2|A| + 2|\ensuremath{\mathcal{N}}^i(\phi(A)) \cap \phi(A)|\right)\\
& = 2(|V| - 2|A|) + 2\sum_{\phi\in \{\ensuremath{\mathrm{id}}, \tau\}} |\ensuremath{\mathcal{N}}^i(\phi(A)) \cap \phi(A)| \\
& \leq 2(|V| - 2|A|) +
\frac 2\varepsilon\sum_{\phi\in \{\ensuremath{\mathrm{id}}, \tau\}} \varepsilon|\ensuremath{\mathcal{N}}(\phi(A)) \cap \phi(A)| \\
& \leq 2(|V| - 2|A|) +
\frac 2\varepsilon\sum_{\phi\in \{\ensuremath{\mathrm{id}}, \tau\}} |\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{N}}(\phi(A)) \cap \phi(A)) \setminus (\ensuremath{\mathcal{N}}(\phi(A)) \cap \phi(A))| \\
& \leq 2(|V| - 2|A|) +
\frac 2\varepsilon\sum_{\phi\in \{\ensuremath{\mathrm{id}}, \tau\}} |\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{N}}(\phi(A))) \setminus \phi(A)| \\
& \leq 2(|V| - 2|A|) +
\frac 2\varepsilon\sum_{\phi\in \{\ensuremath{\mathrm{id}}, \tau\}} |\phi(\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{N}}(A))) \setminus \phi(A)| \\
& = 2(|V| - 2|A|) +
\frac 2\varepsilon\sum_{\phi\in \{\ensuremath{\mathrm{id}}, \tau\}} |\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{N}}(A)) \setminus A| \\
& \leq 2\left(\beta + \frac{d\beta}\varepsilon \right)|A| +\frac 2\varepsilon 2|A|\beta\\
& = \frac{2\beta}\varepsilon (\varepsilon + d + 2)|A| .
\end{align*}
It follows that
$$
|\ensuremath{\mathcal{N}}(B) \Delta B|
\leq
\frac{2d\beta}\varepsilon (\varepsilon + d + 2)|A| .
$$
Note that
\begin{align*}
|\ensuremath{\mathcal{N}}^i(B^c) \Delta B^c|
& \leq |\ensuremath{\mathcal{N}}^i(A) \Delta \ensuremath{\mathcal{N}}^i(\tau(A)) \Delta A \Delta\tau(A)| \\
& \leq |\ensuremath{\mathcal{N}}^i(A) \Delta \ensuremath{\mathcal{N}}^i(\tau(A)) \Delta A^c \Delta (\tau(A))^c| \\
& \leq |\ensuremath{\mathcal{N}}^i(A) \Delta A^c \Delta \ensuremath{\mathcal{N}}^i(\tau(A)) \Delta (\tau(A))^c| \\
& \leq |\ensuremath{\mathcal{N}}^i(A) \Delta A^c| + |\ensuremath{\mathcal{N}}^i(\tau(A)) \Delta (\tau(A))^c|\\
& \leq |\ensuremath{\mathcal{N}}^i(A) \Delta A^c| + |\ensuremath{\mathcal{N}}^i((\tau(A))^c) \Delta \tau(A)|\\
& \leq \frac{2\beta}\varepsilon (\varepsilon + d + 2)|A| .
\end{align*}
It follows that
$$
|\ensuremath{\mathcal{N}}(B^c) \Delta B^c|
\leq
\frac{2d\beta}\varepsilon (\varepsilon + d + 2)|A| .
$$
We consider two cases: $|B|\leq \frac{|V|}2$ and $|B| > \frac{|V|}2$. When $|B|\leq \frac{|V|}2$ holds, we obtain
$$
\varepsilon |B| \leq |\ensuremath{\mathcal{N}}(B) \setminus B| \leq |\ensuremath{\mathcal{N}}(B) \Delta B| \leq \frac{2d \beta}\varepsilon (\varepsilon + d + 2)|A|,
$$
which yields
$$|B| \leq \frac {2d\beta}{ \varepsilon^2} ( \varepsilon + d + 2) |A|.
$$
Since
\begin{align*}
|V| - |B|
& = |B^c| \\
& = |A \Delta (\tau (A))| \\
& = |A| - |A\cap (\tau(A))| + |\tau(A)| - |A\cap (\tau(A))| \\
& = 2|A| - 2|A\cap (\tau(A))|
\end{align*}
holds, we obtain
$$
2|A \cap (\tau(A)) |
\leq |V| - 2|A| + 2|A \cap (\tau(A))|
= |B|
\leq \frac{2d \beta}{\varepsilon^2} (\varepsilon + d + 2)|A|.
$$
When $|B| > \frac{|V|}2$ holds, we obtain
$$
\varepsilon |B^c| \leq |\ensuremath{\mathcal{N}}(B^c)\setminus B^c| \leq |\ensuremath{\mathcal{N}}(B^c)\Delta B^c|
\leq \frac{2d\beta}{\varepsilon}(\varepsilon + d + 2) |A|,
$$
which yields
$$|B^c| \leq \frac{2d\beta}{\varepsilon^2}(\varepsilon + d + 2) |A|.
$$
Since
$$
|B^c|
= |A \Delta (\tau(A))|
= |A| - |A\cap (\tau(A))| + |(\tau(A)) | - |A\cap (\tau(A))|
= 2|A| - 2|A\cap (\tau(A))| $$
holds, we obtain
$$
|A \cap (\tau(A)) |
\geq |A| - \frac {d\beta}{ \varepsilon^2} ( \varepsilon + d + 2)|A|
= \left( 1- \frac {d\beta}{ \varepsilon^2} ( \varepsilon + d + 2)\right)|A|.
$$
This completes the proof.
\end{proof}
\begin{theorem}\label{thmPrincipalk1}
Suppose $V$ carries a left action of a group $\ensuremath{\mathcal{G}}$ such that the following conditions hold.
\begin{enumerate}
\item No index two subgroup of $\ensuremath{\mathcal{G}}$ acts transitively on $V$.
\item The action of $\ensuremath{\mathcal{G}}$ on the set $V$ is ``transitive of order $t$'' in the sense that for each $(u, v)\in V\times V$, the equation $gu = v$ has exactly $t$ distinct solutions for $g\in \ensuremath{\mathcal{G}}$. In other words, the action of $\ensuremath{\mathcal{G}}$ on $V$ is transitive and the stabilizers of the elements of $V$ all have the same size (equal to $t$).
\item For each $\theta_i, 1\leq i\leq d$ and $v\in V$, there is an automorphism or an anti-automorphism $\psi_{i,v}$ of the group $\ensuremath{\mathcal{G}}$ such that one of
$$\theta_i(g\cdot v) = \psi_{i,v}(g) \cdot \theta_i(v)$$
and
$$\theta_i(g\cdot v) = \psi_{i, v}(g^{-1}) \cdot \theta_i(v)$$
holds for any $g\in \ensuremath{\mathcal{G}}$.
\item For any $\tau\in \ensuremath{\mathcal{G}}$ and for any subset $A$ of $V$,
$
\ensuremath{\mathcal{N}} (\ensuremath{\mathcal{N}}(\tau(A))) \subseteq \tau(\ensuremath{\mathcal{N}}(\ensuremath{\mathcal{N}}(A)))
$
holds.
\end{enumerate}
Assume that the graph $(V, E)$ is undirected and non-bipartite. Assume further that $|V|\geq 4$ and that the graph $(V, E)$ is an $\varepsilon$-vertex expander for some $\varepsilon>0$. Then the nontrivial eigenvalues of the normalised adjacency operator of this graph are greater than $-1 + \ell_{\varepsilon, d}$ with
$$\ell_{\varepsilon, d}
=\frac{\varepsilon^4}{2^{12} d^8}.$$
\end{theorem}
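To indicate the scale of the spectral gap provided by Theorem \ref{thmPrincipalk1}, the following sample evaluation may be kept in mind (the parameter values $d = 4$ and $\varepsilon = \frac 12$ are illustrative only and are not tied to any particular graph):

```latex
% Illustrative evaluation with d = 4 and \varepsilon = 1/2:
\[
\ell_{\varepsilon, d}
= \frac{\varepsilon^4}{2^{12} d^8}
= \frac{(1/2)^4}{2^{12} \cdot 4^8}
= \frac{2^{-4}}{2^{12} \cdot 2^{16}}
= 2^{-32},
\]
% so every nontrivial eigenvalue of the normalised adjacency
% operator exceeds -1 + 2^{-32}.
```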
\begin{proof}
Suppose, on the contrary, that a nontrivial eigenvalue of the normalised adjacency operator of the graph $(V, E)$ lies in the interval $\left[-1, -1 + \ell_{\varepsilon, d}\right]$. Since $(V, E)$ is non-bipartite, it follows that $-1$ is not an eigenvalue of its normalised adjacency operator. Hence a nontrivial eigenvalue of the normalised adjacency operator of the graph $(V, E)$ lies in the interval $(-1, -1+ \ell_{\varepsilon, d}]$. Set
\begin{align*}
\gamma & = d^2 \sqrt{2 \ell_{\varepsilon, d}(2- \ell_{\varepsilon, d})},\\
r & = 1- \frac {d \gamma}{ \varepsilon^2} ( \varepsilon + d + 2).
\end{align*}
Since $\ell_{\varepsilon, d} = \frac{\varepsilon^4}{2^{12} d^8}$, we have
$$\gamma = d^2 \sqrt{2 \ell_{\varepsilon, d}(2- \ell_{\varepsilon, d})} < d^2 \sqrt{4 \ell_{\varepsilon, d}} \leq \frac{\varepsilon^{2}}{2^5d^2}.$$
Moreover,
$$
1 - r = \frac {d \gamma}{ \varepsilon^2} ( \varepsilon + d + 2)
\leq \frac {d \gamma}{ \varepsilon^2} (2d+1)
< \frac 3{2^3\sqrt 2} < \frac 13.$$
Consequently,
\begin{equation}
\label{Eqn:BoundssigmaCayleySum}
\ell_{\varepsilon, d}\leq \frac{\varepsilon^2}{4d^4},
\gamma < \frac{\varepsilon^2}{2d(2d+1)} \text{ and } r> \frac 23.
\end{equation}
Define the subset $H$ of $\ensuremath{\mathcal{G}}$ by
$$H :=
\{
\tau\in \ensuremath{\mathcal{G}} \,:\,\, |A \cap (\tau(A))| \geq r|A|
\}.$$
Note that $H$ contains the identity element of $\ensuremath{\mathcal{G}}$. Let $\tau_1, \tau_2\in H$. By the triangle inequality,
\begin{align*}
|A \setminus (\tau_1(\tau_2(A)))|
& \leq |A \setminus (\tau_1(A)) | + |(\tau_1(A)) \setminus (\tau_1(\tau_2(A)))| \\
& = |A \setminus (\tau_1(A)) | + |A \setminus (\tau_2(A)) |\\
& = |A | - |A \cap (\tau_1(A)) | + |A | - |A \cap (\tau_2(A)) |\\
& \leq 2|A| - 2r |A| .
\end{align*}
Consequently,
\begin{align*}
|A \cap (\tau_1(\tau_2(A)))|
& = |A | - |A \setminus (\tau_1(\tau_2(A)))| \\
& \geq |A | - 2|A| + 2r |A| \\
& = (2r - 1) |A|.
\end{align*}
If $|A \cap (\tau_1(\tau_2(A)))| \leq (1-r) |A|$, then we obtain
$$(1-r) |A|
\geq |A \cap (\tau_1(\tau_2(A)))|
\geq (2r - 1) |A|,$$
which implies $r\leq \frac 23$. Since $r>\frac 23$, it follows that the inequality
$|A \cap (\tau_1(\tau_2(A)))| \leq (1-r) |A|$
does not hold. Hence, by Proposition \ref{Prop:AExists}(1), $H$ contains $\tau_1\tau_2$. Thus $H$ is closed under products and, since $\ensuremath{\mathcal{G}}$ is finite, $H$ is a subgroup of $\ensuremath{\mathcal{G}}$.
For any $g\in \ensuremath{\mathcal{G}}$, the map
$$A\cap g^{-1} A \to A\times A, \quad
x\mapsto (x, gx)$$
is well-defined, and induces a map
$$\varphi: \coprod _{g\in \ensuremath{\mathcal{G}}} A\cap g^{-1} A \to A\times A.$$
The map $\varphi$ is surjective, and each of its fibres contains exactly $t$ elements; hence
$$
|A|^2
= |\mathrm{im}(\varphi)|
= \frac 1t \sum_{x\in \mathrm{im}(\varphi)} |\varphi^{-1} (x)|
= \frac 1t \sum_{g\in \ensuremath{\mathcal{G}}} |A\cap g^{-1} A|.$$
If $\ensuremath{\mathcal{G}} = H$, we obtain
$$|A| \cdot \frac{|V|}{2}
\geq |A|^2 = \frac 1t \sum_{g\in \ensuremath{\mathcal{G}}} |A\cap g^{-1} A|
\geq \frac 1t |\ensuremath{\mathcal{G}}| \cdot r|A|,$$
which gives
$$|\ensuremath{\mathcal{G}}| \leq \frac{t}{2r}|V|.$$
Since $r > \frac 23$, this contradicts the fact that $|\ensuremath{\mathcal{G}}| = t|V|$. Hence $H$ is a proper subgroup of $\ensuremath{\mathcal{G}}$.
The following estimate
$$
t |A|^2
= \sum_{g\in \ensuremath{\mathcal{G}}} |A\cap g^{-1} A|
\leq |H| |A| + \frac{d\gamma}{\varepsilon^2}(\varepsilon + d+ 2)|A||\ensuremath{\mathcal{G}}\setminus H| $$
implies
$$t |A|
\leq |H| + \frac{d\gamma}{\varepsilon^2}(\varepsilon + d+ 2)(|\ensuremath{\mathcal{G}}| - |H|).$$
Using Proposition \ref{Prop:AExists}(1), we obtain
$$
\left(\frac{1}{2+ \gamma + \frac{d\gamma}{\varepsilon}}\right) |\ensuremath{\mathcal{G}}|
= \left(\frac{t}{2+ \gamma + \frac{d\gamma}{\varepsilon}}\right) |V|
\leq \frac{d\gamma}{\varepsilon^2}(\varepsilon + d+ 2)|\ensuremath{\mathcal{G}}|
+ \left(1 - \frac{d\gamma}{\varepsilon^2}(\varepsilon + d+ 2)\right) |H|.$$
We claim that $H$ is a subgroup of $\ensuremath{\mathcal{G}}$ of index two. To prove this claim, it suffices to show that
\begin{equation}
\label{Eqn:13rdInePri}
\frac 13 \left(1 - \frac{d\gamma}{\varepsilon^2}(\varepsilon + d+ 2)\right)
<
\left(\frac{1}{2+ \gamma + \frac{d\gamma}{\varepsilon}}\right) - \frac{d\gamma}{\varepsilon^2}(\varepsilon + d+ 2),
\end{equation}
i.e.,
$$\left(2 + \gamma + \frac{d\gamma}{\varepsilon}\right) \left( 1 + \frac{2d\gamma}{\varepsilon^2}(\varepsilon + d+ 2) \right) < 3,$$
which is equivalent to
\begin{equation}
\label{Eqn:13rdInequality}
\left(\gamma + \frac{d\gamma}{\varepsilon}\right) + \frac{2d\gamma}{\varepsilon^{2}}(\varepsilon + d+ 2)\left(2 + \gamma + \frac{d\gamma}{\varepsilon}\right) < 1.
\end{equation}
Since
$$\gamma < \frac{\varepsilon^{2}}{2^5 d^2} \leq
\frac{\varepsilon^{2}}{2^3\sqrt{2}d^2}
$$
and $d \geq 2$ (by Equation \eqref{Eqn:EpsilonBoundSch}), we obtain
\begin{align*}
& \gamma + \frac{d\gamma}{\varepsilon} + \frac{2d\gamma}{\varepsilon^{2}}(\varepsilon + d+ 2)\left(2 + \gamma + \frac{d\gamma}{\varepsilon}\right)\\
& = \frac{\gamma}{\varepsilon}(\varepsilon + d) + \frac{2d\gamma}{\varepsilon^{2}}(\varepsilon + d+ 2)\left(2 + \frac{\gamma}{\varepsilon}(\varepsilon+ d)\right) \\
& = \frac{\gamma}{\varepsilon}\left(\varepsilon +d+ \frac{4d(\varepsilon +d+ 2)}{\varepsilon} \right) + \frac{2d\gamma^2}{\varepsilon^3}(\varepsilon+d)(\varepsilon + d + 2)\\
& = \frac{\gamma}{\varepsilon^2}\left(\varepsilon(\varepsilon +d)+ 4d(\varepsilon +d+ 2)\right) + \frac{2d\varepsilon \gamma^2}{\varepsilon^4}(\varepsilon+d)(\varepsilon + d + 2)\\
& \leq \frac{\gamma}{\varepsilon^2}\left((d-1) (2d-1) + 4d(2d+1) \right) + \frac{2d\gamma^2}{\varepsilon^4} (d-1)(2d-1)(2d+1)\\
& \leq \frac{1}{8\sqrt 2 d^2}(10d^2 + d +1) + \frac{1}{64d^3}(d-1)(2d-1)(2d+1)\\
& = \frac{(d-1)(4d^2-1) + 4\sqrt 2 d (10d^2 + d + 1)}{64d^3}\\
& \leq \frac{(d-1)(4d^2-1) + 41\sqrt 2 d^3 + 8\sqrt 2 d}{64d^3}\\
& \leq \frac{(41\sqrt 2 + 4) d^3 + 8\sqrt 2 d - 4d^2 - d + 1}{64d^3}\\
& \leq \frac{(41\sqrt 2 + 4) d^3 + 4d(2 \sqrt 2 - d) + (1-d)}{64d^3}\\
& < \frac{41\sqrt 2 + 4}{64} + \frac{\sqrt 2 - 1} {32} \\
& < 1.
\end{align*}
So the claim that $H$ is a subgroup of $\ensuremath{\mathcal{G}}$ of index two follows. Since no index two subgroup of $\ensuremath{\mathcal{G}}$ acts transitively on $V$, the action of $H$ on $V$ has at least two distinct orbits. Since the action of $\ensuremath{\mathcal{G}}$ on $V$ is transitive, the action of $H$ on $V$ has exactly two orbits. Let $\ensuremath{\mathcal{O}}_1, \ensuremath{\mathcal{O}}_2$ denote the orbits of the action of $H$ on $V$. Since $H$ is normal in $\ensuremath{\mathcal{G}}$ (being of index two), the group $\ensuremath{\mathcal{G}}$ permutes these orbits transitively, so $|\ensuremath{\mathcal{O}}_1| = |\ensuremath{\mathcal{O}}_2| = \frac{|V|}2 = \frac{|H|}{t}$; consequently, the stabilizer in $\ensuremath{\mathcal{G}}$ of each element of $V$ is contained in $H$.
Note that for any $g\in H$ and for any subset $B$ of $V$ contained in $\ensuremath{\mathcal{O}}_1$ or $\ensuremath{\mathcal{O}}_2$, the map
$$B\cap g^{-1} B \to B\times B, \quad
x\mapsto (x, gx)$$
is well-defined, and induces a map
$$\varphi: \coprod _{g\in H} B\cap g^{-1} B \to B\times B.$$
The map $\varphi$ is surjective (as $B$ is contained in a single $H$-orbit), and each of its fibres contains exactly $t$ elements (as the stabilizer of each element of $V$ is contained in $H$); hence
$$
|B|^2
= |\mathrm{im}(\varphi)|
= \frac 1t \sum_{x\in \mathrm{im}(\varphi)} |\varphi^{-1} (x)|
= \frac 1t \sum_{g\in H} |B\cap g^{-1} B|.$$
For any $g\in H$, we have
\begin{align*}
A\cap g^{-1} A
& =
((A\cap \ensuremath{\mathcal{O}}_1) \cup (A\cap \ensuremath{\mathcal{O}}_2))
\cap
(g^{-1} ((A\cap \ensuremath{\mathcal{O}}_1) \cup (A\cap \ensuremath{\mathcal{O}}_2)))\\
& =
\left((A\cap \ensuremath{\mathcal{O}}_1) \cap
g^{-1} (A\cap \ensuremath{\mathcal{O}}_1)\right)
\cup
\left((A\cap \ensuremath{\mathcal{O}}_2) \cap
g^{-1} (A\cap \ensuremath{\mathcal{O}}_2)\right),
\end{align*}
which yields
$$
|A \cap g^{-1} A| = \sum_{B = A\cap \ensuremath{\mathcal{O}}_1, A\cap \ensuremath{\mathcal{O}}_2}|B\cap g^{-1} B| .$$
So
\begin{align*}
|A\cap \ensuremath{\mathcal{O}}_1|^2 + |A\cap \ensuremath{\mathcal{O}}_2|^2
& = \sum_{B = A\cap \ensuremath{\mathcal{O}}_1, A\cap \ensuremath{\mathcal{O}}_2} |B|^2\\
& = \frac 1t \sum_{B = A\cap \ensuremath{\mathcal{O}}_1, A\cap \ensuremath{\mathcal{O}}_2}\sum_{g\in H} |B\cap g^{-1} B| \\
& = \frac 1t \sum_{g\in H} \sum_{B = A\cap \ensuremath{\mathcal{O}}_1, A\cap \ensuremath{\mathcal{O}}_2}|B\cap g^{-1} B| \\
& = \frac 1t \sum_{g\in H} |A \cap g^{-1} A| .
\end{align*}
It follows that
\begin{align*}
|A|^2 - 2|A\cap \ensuremath{\mathcal{O}}_1||A\cap \ensuremath{\mathcal{O}}_2|
& = (|A\cap \ensuremath{\mathcal{O}}_1| + |A\cap \ensuremath{\mathcal{O}}_2|)^2 - 2|A\cap \ensuremath{\mathcal{O}}_1||A\cap \ensuremath{\mathcal{O}}_2| \\
& = |A\cap \ensuremath{\mathcal{O}}_1|^2 + |A\cap \ensuremath{\mathcal{O}}_2|^2 \\
& = \frac 1t \sum_{g\in H} |A\cap gA| \\
& = \frac 1t \sum_{g\in \ensuremath{\mathcal{G}}} |A\cap gA| - \frac 1t \sum_{g\in H^c} |A\cap gA| \\
& = |A|^2 - \frac 1t \sum_{g\in H^c} |A\cap gA| \\
& \geq |A|^2 - \sum_{g\in H^c} \frac{d \gamma}{t\varepsilon^2} (\varepsilon + d + 2)|A| \\
& = |A|^2 - \frac{d \gamma}{t\varepsilon^2} (\varepsilon + d + 2)|A||H|.
\end{align*}
This implies that
$$
|A\cap \ensuremath{\mathcal{O}}_1||A\cap \ensuremath{\mathcal{O}}_2|
\leq \frac{d \gamma}{4t\varepsilon^2} (\varepsilon + d + 2)|A||\ensuremath{\mathcal{G}}|
\leq \frac{d \gamma}{4t\varepsilon^2} (\varepsilon + d + 2) \frac{t|V|^2}{2}\leq \frac{d \gamma}{8\varepsilon^2} (\varepsilon + d + 2)|V|^2.
$$
Hence, for the orbit $\ensuremath{\mathcal{O}}$ of some element of $V$ under the action of $H$, we have
$$
|A\cap \ensuremath{\mathcal{O}}^c| \leq
\sqrt{\frac{d\gamma}{8\varepsilon^2} (\varepsilon + d + 2) }|V|.$$
For any $1\leq i \leq d$,
\begin{align*}
|\theta_i(\ensuremath{\mathcal{O}})\cap \ensuremath{\mathcal{O}} |
& = |\theta_i(\ensuremath{\mathcal{O}}\cap A) \cap \ensuremath{\mathcal{O}}| + |\theta_i(\ensuremath{\mathcal{O}}\setminus A) \cap \ensuremath{\mathcal{O}}|\\
& \leq |\theta_i(A) \cap \ensuremath{\mathcal{O}}| + |\ensuremath{\mathcal{O}}\setminus A|\\
& = |\theta_i(A) \cap (\ensuremath{\mathcal{O}}\cap A) | + |\theta_i(\ensuremath{\mathcal{O}}\cap A) \cap (\ensuremath{\mathcal{O}}\setminus A) | + |\ensuremath{\mathcal{O}}\setminus A|\\
& \leq |\theta_i(A) \cap A| + 2|\ensuremath{\mathcal{O}}\setminus A|\\
& = |\theta_i(A) \cap A| + 2|\ensuremath{\mathcal{O}}| - 2|\ensuremath{\mathcal{O}}\cap A|\\
& = |\theta_i(A) \cap A| + |V| - 2|A| + 2|A\cap \ensuremath{\mathcal{O}}^c|\\
& \leq \frac \gamma\varepsilon |A| + \left(2 + \gamma + \frac{d\gamma}{\varepsilon}\right)|A| - 2|A| + 2\sqrt{\frac{d\gamma}{8\varepsilon^2} (\varepsilon + d + 2) }|V| \\
& = \frac \gamma\varepsilon |A| + \left(\gamma + \frac{d\gamma}{\varepsilon}\right)|A| + 2\sqrt{\frac{d\gamma}{8\varepsilon^2} (\varepsilon + d + 2) }|V| \\
& \leq \left(\frac \gamma\varepsilon (d+1 + \varepsilon) + \sqrt{\frac{2d\gamma}{\varepsilon^2} (\varepsilon + d + 2) }\right) \frac{|V|}{2} \\
& \leq \left(\frac {2d\gamma}\varepsilon + \sqrt{\frac{2d\gamma}{\varepsilon^2} (2d + 1) }\right) \frac{|V|}{2} \\
& \leq \left(\frac {2d\gamma}\varepsilon + \sqrt{\frac{6d^2\gamma}{\varepsilon^2}} \right) \frac{|V|}{2}\\
& < \left(\frac {1}{2^4} + \sqrt{\frac{6}{2^5}} \right) \frac{|V|}{2}\\
& < \frac{|V|}{4} \\
& = \frac{|\ensuremath{\mathcal{G}}|}{4t} \\
& = \frac{|H|}{2t}.
\end{align*}
Note that for any two subgroups $H_1, H_2$ of $\ensuremath{\mathcal{G}}$ of index two, the inequalities
$$|H_1\cap H_2| \geq \frac{|H_1|}{2}, \qquad |H_1^c\cap H_2^c| \geq \frac{|H_1|}{2}$$
hold. Indeed, for any $x\in H_1 \cap H_2^c$, the `left multiplication by $x$' map induces an injection $H_1\cap H_2^c\to H_1 \cap H_2$. This implies that
$$ |H_1\cap H_2|
\geq |H_1 \cap H_2^c|,$$
which yields
$$ |H_1\cap H_2|
\geq \frac 12 ( |H_1\cap H_2^c| + |H_1\cap H_2| ) = \frac{|H_1|}{2}.$$
It follows that $|H_1\cap H_2^c| \leq \frac{|H_1|}2$, which implies $|H_1^c\cap H_2^c| \geq \frac{|H_1|}2$.
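A minimal example illustrating these inequalities (not needed for the argument): in the Klein four-group, both bounds are attained with equality.

```latex
\[
H_1 = \{(0,0), (1,0)\},\qquad
H_2 = \{(0,0), (0,1)\}
\qquad \text{in } \mathbb{Z}/2 \times \mathbb{Z}/2,
\]
\[
H_1\cap H_2 = \{(0,0)\},\qquad
H_1^c\cap H_2^c = \{(1,1)\},
\]
% both intersections have cardinality 1 = |H_1|/2.
```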
Note that no two elements of $\ensuremath{\mathcal{O}}$ are adjacent in $(V, E)$. Otherwise, there would exist $u, v\in \ensuremath{\mathcal{O}}$ with $u = \theta_i(v)$ for some $i$, and hence
\begin{align*}
|\theta_i(\ensuremath{\mathcal{O}})\cap \ensuremath{\mathcal{O}}|
& = |\theta_i(Hv) \cap Hu | \\
& = | \psi_{i,v} (H) \theta_i(v) \cap H u| \\
& = |\psi_{i,v} (H) u \cap H u|\\
& \geq |(\psi_{i,v} (H) \cap H) u|\\
& \geq \frac{|\psi_{i,v} (H) \cap H|}{t} \\
& \geq \frac{|H|}{2t}.
\end{align*}
Moreover, no two elements of $\ensuremath{\mathcal{O}}^c$ are adjacent in $(V, E)$. Otherwise, there would exist $u, v\in \ensuremath{\mathcal{O}}^c$ with $u = \theta_i(v)$ for some $i$, and hence
\begin{align*}
|\theta_i(\ensuremath{\mathcal{O}})\cap \ensuremath{\mathcal{O}}|
& = |\theta_i(H^c v) \cap H^c u | \\
& = |\psi_{i,v} (H^c) \theta_i(v) \cap H^c u|\\
& = |\psi_{i,v} (H^c) u \cap H^c u| \\
& \geq |(\psi_{i,v} (H^c) \cap H^c) u| \\
& \geq \frac{|\psi_{i,v} (H^c) \cap H^c|}{t} \\
& \geq \frac{|H|}{2t}.
\end{align*}
So $(V, E)$ is bipartite, contradicting the hypothesis. Hence, the nontrivial eigenvalues of the normalised adjacency operator of this graph are greater than $-1 + \ell_{\varepsilon, d}$ with
$$\ell_{\varepsilon, d}
=\frac{\varepsilon^4}{2^{12} d^8}.$$
\end{proof}
In the following, $(V, E)^k$ denotes the graph having $V$ as its set of vertices and the $k$-th power of the adjacency operator of $(V,E)$ as its adjacency operator.
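The relation between the spectra of $(V, E)$ and $(V, E)^k$ is the standard one: if $T$ denotes the normalised adjacency operator of $(V, E)$, then that of $(V, E)^k$ is $T^k$, so each eigenpair transforms as follows.

```latex
\[
Tf = \mu f
\quad\Longrightarrow\quad
T^k f = \mu^k f .
\]
```

For odd $k$, the map $\mu\mapsto \mu^k$ is strictly increasing on $[-1, 1]$, which is what permits transferring eigenvalue bounds between $(V, E)$ and $(V, E)^k$.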
\begin{theorem}
\label{thmPrincipal}
Suppose $V$ carries a left action of a group $\ensuremath{\mathcal{G}}$, the graph $(V, E)$ is undirected, $|V|\geq 4$, and $k\geq 1$ is an odd integer such that the following conditions hold.
\begin{enumerate}
\item No index two subgroup of $\ensuremath{\mathcal{G}}$ acts transitively on $V$.
\item The action of $\ensuremath{\mathcal{G}}$ on the set $V$ is ``transitive of order $t$'' in the sense that for each $(u, v)\in V\times V$, the equation $gu = v$ has exactly $t$ distinct solutions for $g\in \ensuremath{\mathcal{G}}$. In other words, the action of $\ensuremath{\mathcal{G}}$ on $V$ is transitive and the stabilizers of the elements of $V$ all have the same size (equal to $t$).
\item
$(V, E)^k$ is an $\varepsilon_k$-vertex expander with $\varepsilon_k >0$.
\item
For $1\leq i_1, \cdots, i_k \leq d$ and $v\in V$, there is an automorphism or an anti-automorphism $\psi_{(i_1, \cdots, i_k), v}$ of the group $\ensuremath{\mathcal{G}}$ such that one of
$$
(\theta_{i_1} \circ \cdots \circ \theta_{i_k} )(g\cdot v)
=
\psi_{(i_1, \cdots, i_k), v}(g) \cdot
(\theta_{i_1} \circ \cdots \circ \theta_{i_k} )(v)
$$
and
$$
(\theta_{i_1} \circ \cdots \circ \theta_{i_k} )(g\cdot v)
=
\psi_{(i_1, \cdots, i_k), v}(g^{-1}) \cdot
(\theta_{i_1} \circ \cdots \circ \theta_{i_k} )(v)
$$
holds for any $g\in \ensuremath{\mathcal{G}}$.
\item For any $\tau\in \ensuremath{\mathcal{G}}$ and for any subset $A$ of $V$,
$
\ensuremath{\mathcal{N}}^k (\ensuremath{\mathcal{N}}^k(\tau(A))) \subseteq \tau(\ensuremath{\mathcal{N}}^k(\ensuremath{\mathcal{N}}^k(A)))
$
holds.
\end{enumerate}
Then the nontrivial eigenvalues of the normalised adjacency operator of $(V, E)$ are greater than
$$
\left(
-1 + \frac {1}{2^{12} d^{8k}}
\varepsilon_k^4
\right)^{1/k}.
$$
If $(V, E)$ is an $\varepsilon$-vertex expander with $\varepsilon >0$, then the vertex Cheeger constant of $(V, E)^k$ is at least
$$
\frac 12
\left(
1 -
\left(
1 - \frac{\varepsilon^2}{2d^2}
\right)^k
\right)
.$$
\end{theorem}
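As a consistency check, specialising Theorem \ref{thmPrincipal} to $k = 1$ recovers the eigenvalue bound of Theorem \ref{thmPrincipalk1}:

```latex
\[
\left(
-1 + \frac{1}{2^{12} d^{8}} \varepsilon_1^4
\right)^{1/1}
= -1 + \frac{\varepsilon_1^4}{2^{12} d^{8}}
= -1 + \ell_{\varepsilon_1, d}.
\]
```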
\begin{proof}
By Theorem \ref{thmPrincipalk1}, the nontrivial eigenvalues of the normalised adjacency operator of $(V, E)^k$ are greater
than $-1 + \frac {\varepsilon_k^4}{2^{12} d^{8k}}$. Since $k$ is odd, the nontrivial eigenvalues of the normalised adjacency operator of $(V, E)$ are greater than
$$
\left(
-1 + \frac {1}{2^{12} d^{8k}}
\varepsilon_k^4
\right)^{1/k}
.$$
Since $(V, E)$ is an $\varepsilon$-vertex expander, the largest nontrivial eigenvalue of its normalised adjacency operator is at most $1 - \frac{\varepsilon^2}{2d^2}$. Since $k$ is odd, the largest nontrivial eigenvalue of the normalised adjacency operator of $(V, E)^k$ is at most
$$
\left(
1 - \frac{\varepsilon^2}{2d^2}
\right)^k
.$$
By Lemma \ref{Lemma:VertexEdgeCons} and the discrete Cheeger--Buser inequality (Proposition \ref{Prop:chin}), the vertex Cheeger constant of $(V, E)^k$ is at least
$$
\frac 12
\left(
1 -
\left(
1 - \frac{\varepsilon^2}{2d^2}
\right)^k
\right)
.$$
\end{proof}
\section{Spectral expansion of the Cayley and Cayley sum graphs twisted by automorphisms}
\label{Sec:Twists}
Let $G$ be a finite group, $S$ be a subset of $G$ and $\sigma$ be a group automorphism of $G$.
Consider the twist $C(G, S)^\sigma$ of the Cayley graph $C(G, S)$ by the automorphism $\sigma$. The graph $C(G, S)^\sigma$ has $G$ as its set of vertices, and there is an edge from $x$ to $y$ whenever $y = \sigma(xs)$ for some $s\in S$. Roughly speaking, the twisted Cayley graph $C(G, S)^\sigma$ has the same set of vertices as that of the Cayley graph $C(G, S)$, and given a vertex $x$ in $C(G, S)^\sigma$, its adjacent vertices are precisely the images under $\sigma$ of the vertices adjacent to $x$ in $C(G, S)$.
Consider the twist $C_\Sigma(G, S)^\sigma$ of the Cayley sum graph $C_\Sigma(G, S)$ by the automorphism $\sigma$. The graph $C_\Sigma(G, S)^\sigma$ has $G$ as its set of vertices, and there is an edge from $x$ to $y$ whenever $y = \sigma(x^{-1} s)$ for some $s\in S$. Roughly speaking, the twisted Cayley sum graph $C_\Sigma(G, S)^\sigma$ has the same set of vertices as that of the Cayley sum graph $C_\Sigma(G, S)$, and given a vertex $x$ in $C_\Sigma(G, S)^\sigma$, its adjacent vertices are precisely the images under $\sigma$ of the vertices adjacent to $x$ in $C_\Sigma(G, S)$.
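For orientation: when $\sigma$ is the identity, both constructions reduce to the untwisted graphs $C(G, S)$ and $C_\Sigma(G, S)$. The following small example (illustrative only, with hypothetically chosen $G$, $S$ and $\sigma$) shows a genuine twist:

```latex
% G = Z/5Z, S = {1, 4}, sigma(x) = 2x
% (an automorphism, as 2 is a unit modulo 5).
\[
G = \mathbb{Z}/5\mathbb{Z},\qquad S = \{1, 4\},\qquad \sigma(x) = 2x.
\]
% In C(G, S)^\sigma, the neighbours of a vertex x are
%   \sigma(x + 1) = 2x + 2   and   \sigma(x + 4) = 2x + 3.
```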
\subsection{The twisted Cayley graph}
For a subset $A$ of $G$, its neighbourhood in $C(G, S)^\sigma$ is denoted by $\ensuremath{\mathscr{N}}(A)$. Given $g_1, \cdots, g_r \in G$,
$\prod_{i = 1}^r g_i$
denotes the product
$g_1 \cdots g_r$.
\begin{proof}[Proof of Theorem \ref{Thm:Bdd}(1), (2)]
Let $\ensuremath{\mathcal{G}}$ denote the group $G$ and $m$ be a positive integer. Consider the action of $\ensuremath{\mathcal{G}}$ on the set of vertices of $(C(G, S)^\sigma)^m$ by left multiplication, i.e.,
$$g\cdot v = gv$$
for any $g\in \ensuremath{\mathcal{G}}, v\in G$.
For an element $(s_1, \cdots, s_m)\in S^m$, let $\theta:G\to G$ denote the bijection defined by
$$\theta(v)
=
\sigma^m(v)
\prod_{i=1}^m
\sigma^{m+1-i} (s_i)$$
for $v\in G$.
For each $(s_1, \cdots, s_m)\in S^m$ and $v\in G$, note that
\begin{align*}
\theta(g\cdot v)
& =
\sigma^m(g) \cdot \theta(v) \\
& =
\psi_{(s_1, \cdots, s_m), v}(g) \cdot \theta(v)
\end{align*}
for any $g\in \ensuremath{\mathcal{G}}$, where $\psi_{(s_1, \cdots, s_m), v}$ denotes the automorphism
$$g\mapsto
\sigma^m(g)
$$
of the group $\ensuremath{\mathcal{G}}$. Moreover, for any subset $A$ of $G$,
$\ensuremath{\mathscr{N}}^m(g\cdot A) = \sigma^m(g) \cdot \ensuremath{\mathscr{N}}^m(A)$.
Since $C(G, S)^\sigma$ is connected, its vertex Cheeger constant $h_\sigma$ is positive. Thus $C(G, S)^\sigma$ is an $h_\sigma$-vertex expander with $h_\sigma>0$.
If $\sigma^2$ is the trivial automorphism of $G$, then from Theorem \ref{thmPrincipal}, the nontrivial eigenvalues of the normalised adjacency operator of $C(G, S)^\sigma$ are greater than
$$
-1 + \frac {h_\sigma^4}{2^{12} d^{8}}
.
$$
If $\sigma^{2k}$ is the trivial automorphism of $G$ for some odd integer $k\geq 1$, then from Theorem \ref{thmPrincipal}, the nontrivial eigenvalues of the normalised adjacency operator of $C(G, S)^\sigma$ are greater than
$$
\left(
-1 + \frac {1}{2^{12} d^{8k}}
\left(
\frac 12
\left(
1 -
\left(
1 - \frac{h_\sigma^2}{2d^2}
\right)^k
\right)
\right)
^4
\right)^{1/k}.
$$
By the discrete Cheeger--Buser inequality (Proposition \ref{Prop:chin}), the result follows.
\end{proof}
\subsection{The twisted Cayley sum graph}
For a subset $A$ of $G$, its neighbourhood in $C_\Sigma(G, S)^\sigma$ is denoted by $\ensuremath{\mathscr{N}}_\Sigma(A)$. Given $g_1, \cdots, g_r \in G$, $\prod_{i = 1}^r g_i$ denotes the product $g_1 \cdots g_r$.
\begin{lemma}
\label{Lemma:UndirectedsigmaCayleySum}
The twisted Cayley sum graph $C_\Sigma (G, S)^\sigma$ is undirected if and only if $S$ contains $\sigma^2(g)\sigma(s)g^{-1}$ for any $s\in S, g\in G$.
\end{lemma}
\begin{proof}
If $h$ is adjacent to $g$, then $h = \sigma(g^{-1} s)$ for some $s\in S$. Note that
\begin{align*}
g
& = s \sigma^{-1}(h^{-1}) \\
& = \sigma(h^{-1}) \sigma(h) s\sigma^{-1}(h^{-1}),
\end{align*}
which implies that $g$ is adjacent to $h$ if and only if $\sigma(h) s\sigma^{-1}(h^{-1}) \in \sigma^{-1}(S)$, i.e.,
$\sigma^2(h) \sigma(s) h^{-1} \in S$. Hence $g$ is adjacent to each of its adjacent vertices if and only if $S$ contains
$$\sigma^2(\sigma(g^{-1} s)) \sigma(s) (\sigma(g^{-1} s))^{-1}
$$
for any $s\in S$. So $C_\Sigma (G, S)^\sigma$ is undirected if and only if $S$ contains $
\sigma^2(\sigma(g^{-1} s)) \sigma(s) (\sigma(g^{-1} s))^{-1}
$ for any $s\in S, g\in G$. Note that for $s\in S$ and $x\in G$ and $g = (\sigma^{-1}(x)s^{-1})^{-1}$,
$$
\sigma^2(\sigma(g^{-1} s)) \sigma(s) (\sigma(g^{-1} s))^{-1}
=
\sigma^2(x) \sigma(s) x^{-1}.
$$
So $C_\Sigma (G, S)^\sigma$ is undirected if and only if $S$ contains $\sigma^2(g)\sigma(s)g^{-1}$ for any $s\in S, g\in G$. Hence the Lemma follows.
\end{proof}
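Two special cases of Lemma \ref{Lemma:UndirectedsigmaCayleySum} are worth recording as a sanity check. Taking $\sigma$ to be the identity, the condition states that $S$ is closed under conjugation, which is the usual criterion for the Cayley sum graph $C_\Sigma(G, S)$ to be undirected; taking $g$ to be the identity element shows that undirectedness forces $\sigma(S) \subseteq S$, hence $\sigma(S) = S$ since $S$ is finite.

```latex
\[
\sigma = \ensuremath{\mathrm{id}}: \quad gsg^{-1}\in S
\ \text{ for all } s\in S,\, g\in G,
\qquad\qquad
g = e: \quad \sigma(s)\in S \ \text{ for all } s\in S.
\]
```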
\begin{proof}[Proof of Theorem \ref{Thm:Bdd}(3)]
Let $\ensuremath{\mathcal{G}}$ denote the group $G$ and $m$ be a positive integer. Consider the action of $\ensuremath{\mathcal{G}}$ on the set of vertices of $(C_\Sigma(G, S)^\sigma)^m$ by right multiplication via the inverse, i.e.,
$$g\cdot v = vg^{-1}$$
for any $g\in \ensuremath{\mathcal{G}}, v\in G$.
For an element $(s_1, \cdots, s_m)\in S^m$, let $\theta:G\to G$ denote the bijection defined by
$$\theta(v)
=
\left(
\prod_{i = 1}^{\lfloor m/2\rfloor} \sigma^{2i} (s_{m+1 -2i}^{-1})
\right)
\sigma^m(v^{(-1)^m})
\left(
\prod_{i = 1}^{\lceil m/2\rceil} \sigma^{2\lceil m/2\rceil +1 - 2i} (s_{m+1 -(2\lceil m/2\rceil +1 - 2i)})
\right)
$$
for $v\in G$.
For each $(s_1, \cdots, s_m)\in S^m$ and $v\in G$, set
$$
\alpha =
\left(
\prod_{i = 1}^{\lfloor m/2\rfloor} \sigma^{2i} (s_{m+1 -2i}^{-1})
\right),
\beta =
\left(
\prod_{i = 1}^{\lceil m/2\rceil} \sigma^{2\lceil m/2\rceil +1 - 2i} (s_{m+1 -(2\lceil m/2\rceil +1 - 2i)})
\right),
$$
and note that
\begin{align*}
\theta(g\cdot v)
& =
\alpha \sigma^m(gv^{-1}) \beta \\
& =
\alpha \sigma^m(v^{-1}) \beta (\sigma^m(v^{-1}) \beta)^{-1} \sigma^m(gv^{-1}) \beta \\
& =
\alpha \sigma^m(v^{-1}) \beta
\left(
(\sigma^m(v^{-1}) \beta)^{-1} \sigma^m(g^{-1}) \sigma^m(v^{-1}) \beta
\right)^{-1} \\
& =
\theta(v)
\left(
(\sigma^m(v^{-1}) \beta)^{-1} \sigma^m(g^{-1}) \sigma^m(v^{-1}) \beta
\right)^{-1} \\
& =
\left(
(\sigma^m(v^{-1}) \beta)^{-1} \sigma^m(g^{-1}) \sigma^m(v^{-1}) \beta
\right)\cdot \theta(v) \\
& =
\psi_{(s_1, \cdots, s_m), v}(g^{-1}) \cdot \theta (v)
\end{align*}
for any $g\in \ensuremath{\mathcal{G}}$, where $\psi_{(s_1, \cdots, s_m), v}$ denotes the automorphism
$$g\mapsto
(\sigma^m(v^{-1}) \beta)^{-1} \sigma^m(g) \sigma^m(v^{-1}) \beta
$$
of the group $\ensuremath{\mathcal{G}}$.
For any subset $A$ of $G$, note that
$$\ensuremath{\mathscr{N}}_\Sigma^{m}
(A)
=
\sigma^2(S^{-1}) \sigma^4(S^{-1}) \cdots \sigma^{2\lfloor m/2\rfloor}(S^{-1})
\sigma^{m}(A^{(-1)^m})
\sigma^{2\lceil m/2\rceil -1}(S) \cdots \sigma^3(S) \sigma(S).
$$
For any $g\in \ensuremath{\mathcal{G}}$, one obtains
\begin{align*}
& \sigma^{2m}(g)
\sigma^{2m-1}(S) \cdots \sigma^3(S) \sigma(S)
g^{-1}
\\
& =
\left(
\sigma^{2m}(g)
\sigma^{2m-1}(S)
\sigma^{2m-2}(g^{-1})
\right)
\left(
\sigma^{2m-2}(g)
\sigma^{2m-3}(S)
\sigma^{2m-4}(g^{-1})
\right)
\cdots \\
& \qquad \qquad
\left(
\sigma^6(g)
\sigma^5(S)
\sigma^4(g^{-1})
\right)
\left(
\sigma^4(g)
\sigma^3(S)
\sigma^2(g^{-1})
\right)
\left(
\sigma^2(g)
\sigma(S)
g^{-1}
\right)
\\
& =
\sigma^{2m-2}(S)
\cdots
\sigma^4(S)
\sigma^2(S)
S \\
& =
\sigma^{2m-1}(S)
\cdots
\sigma^5(S)
\sigma^3(S)
\sigma(S).
\end{align*}
Here the second equality holds since $\sigma^2(g)\sigma(S)g^{-1} = S$ for every $g\in G$ (by Lemma \ref{Lemma:UndirectedsigmaCayleySum} and the finiteness of $S$), and the last equality holds since $\sigma(S) = S$ (take $g$ to be the identity element). So, for any subset $A$ of $G$,
\begin{align*}
\ensuremath{\mathscr{N}}_\Sigma^m(\ensuremath{\mathscr{N}}_\Sigma^m(g\cdot A))
& =
\sigma^2(S^{-1}) \sigma^4(S^{-1}) \cdots \sigma^{2m}(S^{-1})
\sigma^{2m}(g\cdot A)
\sigma^{2m-1}(S) \cdots \sigma^3(S) \sigma(S) \\
& =
\sigma^2(S^{-1}) \sigma^4(S^{-1}) \cdots \sigma^{2m}(S^{-1})
\sigma^{2m}(A) \sigma^{2m}(g^{-1})
\sigma^{2m-1}(S) \cdots \sigma^3(S) \sigma(S) \\
& =
\sigma^2(S^{-1}) \sigma^4(S^{-1}) \cdots \sigma^{2m}(S^{-1})
\sigma^{2m}(A) \sigma^{2m}(g^{-1})
\sigma^{2m-1}(S) \cdots \sigma^3(S) \sigma(S) gg^{-1}\\
& =
\sigma^2(S^{-1}) \sigma^4(S^{-1}) \cdots \sigma^{2m}(S^{-1})
\sigma^{2m}(A)
\sigma^{2m-1}(S) \cdots \sigma^3(S) \sigma(S) g^{-1}\\
& =
\ensuremath{\mathscr{N}}_\Sigma^m(\ensuremath{\mathscr{N}}_\Sigma^m(A)) g^{-1} \\
& =
g \cdot \ensuremath{\mathscr{N}}_\Sigma^m(\ensuremath{\mathscr{N}}_\Sigma^m(A))
\end{align*}
holds for any $g\in \ensuremath{\mathcal{G}}$.
Since $C_\Sigma (G, S)^\sigma$ is connected, its vertex Cheeger constant $h_{\Sigma, \sigma}$ is positive. Thus $C_\Sigma (G, S)^\sigma$ is an $h_{\Sigma, \sigma}$-vertex expander with $h_{\Sigma, \sigma} >0$.
By Theorem \ref{thmPrincipal}, the nontrivial eigenvalues of the normalised adjacency operator of $C_\Sigma(G, S)^\sigma$ are greater than
$$
-1 + \frac {h_{\Sigma, \sigma}^4}{2^{12} d^{8}}
.
$$
By the discrete Cheeger--Buser inequality (Proposition \ref{Prop:chin}), the result follows.
\end{proof}
\section{Spectral expansion of the Cayley and Cayley sum graphs twisted by anti-automorphisms}
\label{Sec:TwistsAnti}
Let $G$ be a finite group, $S$ a subset of $G$, and $\sigma$ a group anti-automorphism of $G$.
Consider the twist $C(G, S)^\sigma$ of the Cayley graph $C(G, S)$ by the anti-automorphism $\sigma$. The graph $C(G, S)^\sigma$ has $G$ as its set of vertices, and there is an edge from $x$ to $y$ whenever $y = \sigma(xs)$ for some $s\in S$. Roughly speaking, the twisted Cayley graph $C(G, S)^\sigma$ has the same vertex set as the Cayley graph $C(G, S)$, and the vertices adjacent to a vertex $x$ in $C(G, S)^\sigma$ are precisely the images under $\sigma$ of the vertices adjacent to $x$ in $C(G, S)$.
Consider the twist $C_\Sigma(G, S)^\sigma$ of the Cayley sum graph $C_\Sigma(G, S)$ by the anti-automorphism $\sigma$. The graph $C_\Sigma(G, S)^\sigma$ has $G$ as its set of vertices, and there is an edge from $x$ to $y$ whenever $y = \sigma(x^{-1} s)$ for some $s\in S$. Roughly speaking, the twisted Cayley sum graph $C_\Sigma(G, S)^\sigma$ has the same vertex set as the Cayley sum graph $C_\Sigma(G, S)$, and the vertices adjacent to a vertex $x$ in $C_\Sigma(G, S)^\sigma$ are precisely the images under $\sigma$ of the vertices adjacent to $x$ in $C_\Sigma(G, S)$.
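The two twisted adjacency rules can be made concrete on a small group. The following sketch (not part of the paper's argument) uses the illustrative choices $G = S_3$, a two-element set $S$, and $\sigma(g) = g^{-1}$, which is an anti-automorphism of any group:

```python
from itertools import permutations

# Illustrative sketch: the twisted Cayley graph C(G,S)^sigma and the twisted
# Cayley sum graph C_Sigma(G,S)^sigma for G = S_3 with the anti-automorphism
# sigma(g) = g^{-1}.  The group and the set S are assumptions for this demo.

def compose(a, b):
    # (a*b)(i) = a(b(i)); permutations are stored as tuples
    return tuple(a[b[i]] for i in range(len(b)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

G = list(permutations(range(3)))   # the symmetric group S_3
sigma = inverse                    # g -> g^{-1} is an anti-automorphism
S = [(1, 0, 2), (0, 2, 1)]         # two transpositions

# x -> sigma(x s): directed edges of the twisted Cayley graph
cayley_edges = {(x, sigma(compose(x, s))) for x in G for s in S}
# x -> sigma(x^{-1} s): directed edges of the twisted Cayley sum graph
sum_edges = {(x, sigma(compose(inverse(x), s))) for x in G for s in S}
```

In both graphs the out-neighbours of a vertex $x$ are exactly the $\sigma$-images of its out-neighbours in the corresponding untwisted graph, matching the description above.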
\begin{theorem}\label{Thm:BddAnti}
Let $S$ be a subset of a finite group $G$ with $|S|= d$. Suppose $\sigma$ is an anti-automorphism of $G$.
\begin{enumerate}
\item If the twisted Cayley graph $C (G, S)^\sigma$ is connected and undirected and $|G| \geq 4$, then the nontrivial spectrum of its normalised adjacency operator lies in the interval
$$\left( -1 + \frac{h_\sigma^4}{2^{12}d^8}
,
1 - \frac{h_\sigma^2}{2d^2}
\right]$$
where $h_\sigma$ denotes the vertex Cheeger constant of $C (G, S)^\sigma$.
\item
Suppose $\sigma^{2}$ is the trivial automorphism of $G$.
If the twisted Cayley sum graph $C_\Sigma(G, S)^\sigma$ is connected and undirected and $|G| \geq 4$, then the nontrivial spectrum of its normalised adjacency operator lies in the interval
$$\left( -1 + \frac{h_{\Sigma, \sigma}^4}{2^{12}d^8}
,
1 - \frac{h_{\Sigma, \sigma}^2}{2d^2}
\right]$$
where $h_{\Sigma, \sigma}$ denotes the vertex Cheeger constant of $C_\Sigma(G, S)^\sigma$.
\item
Suppose $\sigma^{2k}$ is the trivial automorphism of $G$, where $k\geq 1$ is an odd integer.
If the twisted Cayley sum graph $C_\Sigma(G, S)^\sigma$ is connected and undirected and $|G| \geq 4$, then the nontrivial spectrum of its normalised adjacency operator lies in the interval
$$\left(
\left(
-1 + \frac {1}{2^{12} d^{8k}}
\left(
\frac 12
\left(
1 -
\left(
1 - \frac{h_{\Sigma, \sigma}^2}{2d^2}
\right)^k
\right)
\right)
^4
\right)^{1/k}
,
1 - \frac{h_{\Sigma, \sigma}^2}{2d^2}
\right]$$
where $h_{\Sigma, \sigma}$ denotes the vertex Cheeger constant of $C_\Sigma(G, S)^\sigma$.
\end{enumerate}
\end{theorem}
\subsection{Cayley graph twisted by anti-automorphisms}
For a subset $A$ of $G$, its neighbourhood in $C(G, S)^\sigma$ is denoted by $\ensuremath{\mathscr{N}}(A)$. Given $g_1, \cdots, g_r \in G$,
$\prod_{i = 1}^r g_i$
denotes the product
$g_1 \cdots g_r$.
\begin{lemma}
\label{Lemma:UndirectedsigmaCayleySum}
The twisted Cayley graph $C (G, S)^\sigma$ is undirected if and only if $S$ contains $\sigma^2(g) \sigma(s^{-1}) g^{-1}$ for any $s\in S, g\in G$.
\end{lemma}
\begin{proof}
If $h$ is adjacent to $g$, then $h = \sigma(g s)$ for some $s\in S$. Note that
\begin{align*}
g
& = \sigma^{-1}(h)s^{-1} \\
& = \sigma^{-1}(h)s^{-1} \sigma(h^{-1}) \sigma(h) \\
& = \sigma(h^{-1} \sigma^{-1}(s^{-1}) \sigma^{-2}(h))\sigma(h) \\
& = \sigma(h (h^{-1} \sigma^{-1}(s^{-1}) \sigma^{-2}(h))),
\end{align*}
which implies that $g$ is adjacent to $h$ if and only if $h^{-1} \sigma^{-1}(s^{-1}) \sigma^{-2}(h) \in S$. Hence $g$ is adjacent to each of its adjacent vertices if and only if $S$ contains
$$
(\sigma(s) \sigma(g))^{-1} \sigma^{-1}(s^{-1}) \sigma^{-2}(\sigma(s) \sigma(g))
$$
for any $s\in S$. So $C (G, S)^\sigma$ is undirected if and only if $S$ contains
$$(\sigma(s) \sigma(g))^{-1} \sigma^{-1}(s^{-1}) \sigma^{-2}(\sigma(s) \sigma(g))
=
\sigma(g)^{-1} \sigma(s^{-1}) \sigma^{-1}(s^{-1}) \sigma^{-1}(s) \sigma^{-1}(g)
=
\sigma(g)^{-1} \sigma(s^{-1}) \sigma^{-1}(g)
$$
for any $s\in S, g\in G$.
So $C (G, S)^\sigma$ is undirected if and only if $S$ contains $\sigma^2(g)\sigma(s^{-1})g^{-1}$ for any $s\in S, g\in G$. Hence the Lemma follows.
\end{proof}
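For $\sigma(g) = g^{-1}$ we have $\sigma^2 = \mathrm{id}$ and $\sigma(s^{-1}) = s$, so the criterion of the lemma reduces to $gsg^{-1}\in S$ for all $s\in S$, $g\in G$, i.e.\ closure of $S$ under conjugation. A brute-force check on $S_3$ (an illustrative example, not from the paper) confirms that the criterion matches undirectedness exactly:

```python
from itertools import permutations

# Brute-force check of the lemma on G = S_3 with sigma(g) = g^{-1}:
# C(G,S)^sigma is undirected iff sigma^2(g) sigma(s^{-1}) g^{-1} lies in S
# for all s in S and g in G.  The group and the sets are demo assumptions.

def compose(a, b):
    return tuple(a[b[i]] for i in range(len(b)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

G = list(permutations(range(3)))
sigma = inverse

def twisted_cayley(S):
    return {(x, sigma(compose(x, s))) for x in G for s in S}

def is_undirected(E):
    return all((y, x) in E for (x, y) in E)

def lemma_criterion(S):
    # sigma^2(g) * sigma(s^{-1}) * g^{-1} in S for all s in S, g in G
    Sset = set(S)
    return all(
        compose(compose(sigma(sigma(g)), sigma(inverse(s))), inverse(g)) in Sset
        for g in G for s in S
    )

S_closed = [(1, 0, 2), (0, 2, 1), (2, 1, 0)]  # all transpositions: conjugation-closed
S_open = [(1, 0, 2), (0, 2, 1)]               # not conjugation-closed
```

With $S$ the full set of transpositions both sides hold; dropping one transposition breaks both simultaneously.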
\begin{proof}[Proof of Theorem \ref{Thm:BddAnti}(1)]
Let $\ensuremath{\mathcal{G}}$ denote the group $G$. Consider the action of $\ensuremath{\mathcal{G}}$ on the set of vertices of $C(G, S)^\sigma$ by left multiplication, i.e.,
$$g\cdot v = gv$$
for any $g\in \ensuremath{\mathcal{G}}, v\in G$.
For an element $s\in S$, let $\theta:G\to G$ denote the bijection defined by
$$\theta(v)
=
\sigma(vs)$$
for $v\in G$.
For each $s\in S$ and $v\in G$, note that
\begin{align*}
\theta(g\cdot v)
& =
\sigma(gvs) \\
& =
\sigma(gvs) (\sigma(vs))^{-1} \sigma(vs) \\
& =
(\sigma(vs) \sigma(g) (\sigma(vs))^{-1}) \sigma(vs) \\
& =
\psi_{s, v}(g) \cdot \theta(v)
\end{align*}
for any $g\in \ensuremath{\mathcal{G}}$, where $\psi_{s, v}$ denotes the anti-automorphism
$$g\mapsto
\sigma(vs) \sigma(g) (\sigma(vs))^{-1}
$$
of the group $\ensuremath{\mathcal{G}}$.
For any subset $A$ of $G$, note that
\begin{align*}
\ensuremath{\mathscr{N}}^{2}(A)
& = \sigma(S) \sigma^2(A) \sigma^2(S),
\end{align*}
which yields
\begin{align*}
\ensuremath{\mathscr{N}}^{2}(g\cdot A)
& =
\sigma(S) \sigma^2(g \cdot A) \sigma^2(S)\\
& =
\sigma(S) \sigma^2(g)\sigma^2(A) \sigma^2(S)\\
& =
g g^{-1} \sigma(S) \sigma^2(g)\sigma^2(A) \sigma^2(S)\\
& =
g (\sigma^2(g^{-1})\sigma(S^{-1})g)^{-1} \sigma^2(A) \sigma^2(S)\\
& =
g S^{-1} \sigma^2(A) \sigma^2(S)\\
& =
g \sigma(S) \sigma^2(A) \sigma^2(S)\\
& =
g \cdot \ensuremath{\mathscr{N}}^2(A) .
\end{align*}
Since $C(G, S)^\sigma$ is connected, its vertex Cheeger constant $h_\sigma$ is positive. Thus $C(G, S)^\sigma$ is an $h_\sigma$-expander with $h_\sigma>0$.
By Theorem \ref{thmPrincipal}, the nontrivial eigenvalues of the normalised adjacency operator of $C(G, S)^\sigma$ are greater than
$$
-1 + \frac {h_\sigma^4}{2^{12} d^{8}}
.
$$
By the discrete Cheeger--Buser inequality (Proposition \ref{Prop:chin}), the result follows.
\end{proof}
\subsection{Cayley sum graph twisted by anti-automorphisms}
For a subset $A$ of $G$, its neighbourhood in $C_\Sigma(G, S)^\sigma$ is denoted by $\ensuremath{\mathscr{N}}_\Sigma(A)$. Given $g_1, \cdots, g_r \in G$,
$\prod_{i = 1}^r g_i$
denotes the product
$g_1 \cdots g_r$.
\begin{proof}[Proof of Theorem \ref{Thm:BddAnti}(2), (3)]
Let $\ensuremath{\mathcal{G}}$ denote the group $G$. Consider the action of $\ensuremath{\mathcal{G}}$ on the set of vertices of $C_\Sigma(G, S)^\sigma$ by right multiplication via the inverse, i.e.,
$$g\cdot v = vg^{-1}$$
for any $g\in \ensuremath{\mathcal{G}}, v\in G$.
For an element $(s_1, \cdots, s_m)\in S^m$, let $\theta:G\to G$ denote the bijection defined by
$$\theta(v)
=
\prod_{i=1}^m
\sigma^i (s^{(-1)^{i-1}}_{m+1-i})
\sigma^m(v^{(-1)^m}) $$
for $v\in G$.
For each $(s_1, \cdots, s_m)\in S^m$ and $v\in G$, note that
\begin{align*}
\theta(g\cdot v)
& =
\prod_{i=1}^m
\sigma^i (s^{(-1)^{i-1}}_{m+1-i})
\sigma^m((g\cdot v) ^{(-1)^m})\\
& =
\prod_{i=1}^m
\sigma^i (s^{(-1)^{i-1}}_{m+1-i})
\sigma^m(v^{(-1)^m})
(\sigma^m(v^{(-1)^m}))^{-1}
\sigma^m((g\cdot v) ^{(-1)^m})\\
& =
\theta(v)
(\sigma^m(v^{(-1)^m}))^{-1}
\sigma^m((g\cdot v) ^{(-1)^m})\\
& =
\theta(v)
\sigma^m(g^{(-1)^{m-1}}) \\
& =
\sigma^m(g^{(-1)^m})
\cdot \theta(v) \\
& =
\psi_{(s_1, \cdots, s_m)}(g) \cdot \theta(v)
\end{align*}
for any $g\in \ensuremath{\mathcal{G}}$, where $\psi_{(s_1, \cdots, s_m)}$ denotes the automorphism
$$g\mapsto
\sigma^m(g^{(-1)^m})
$$
of the group $\ensuremath{\mathcal{G}}$.
For any integer $m\geq 1$ and any subset $A$ of $G$, note that
\begin{align*}
\ensuremath{\mathscr{N}}_{\Sigma}^{2m}(A)
& = \sigma(S) \sigma^2(S^{-1}) \sigma^3(S) \sigma^4(S^{-1}) \cdots \sigma^{2m-1} (S) \sigma^{2m} (S^{-1})\sigma^{2m}(A) ,
\end{align*}
which yields
\begin{align*}
\ensuremath{\mathscr{N}}_{\Sigma}^{2m}(g\cdot A)
& = \sigma(S) \sigma^2(S^{-1}) \sigma^3(S) \sigma^4(S^{-1}) \cdots \sigma^{2m-1} (S) \sigma^{2m} (S^{-1})\sigma^{2m}(g\cdot A) \\
& = \sigma(S) \sigma^2(S^{-1}) \sigma^3(S) \sigma^4(S^{-1}) \cdots \sigma^{2m-1} (S) \sigma^{2m} (S^{-1})\sigma^{2m}(Ag^{-1}) \\
& = \sigma^{2m}(g) \cdot (\sigma(S) \sigma^2(S^{-1}) \sigma^3(S) \sigma^4(S^{-1}) \cdots \sigma^{2m-1} (S) \sigma^{2m} (S^{-1})\sigma^{2m}(A)) \\
& = \sigma^{2m}(g) \cdot \ensuremath{\mathscr{N}}_{\Sigma}^{2m}(A) .
\end{align*}
Since $C_\Sigma(G, S)^\sigma$ is connected, its vertex Cheeger constant $h_{\Sigma, \sigma}$ is positive. Thus $C_\Sigma(G, S)^\sigma$ is an $h_{\Sigma, \sigma}$-expander with $h_{\Sigma, \sigma}>0$.
If $\sigma^2$ is the trivial automorphism of $G$, then from Theorem \ref{thmPrincipal}, the nontrivial eigenvalues of the normalised adjacency operator of $C_\Sigma(G, S)^\sigma$ are greater than
$$
-1 + \frac {h_{\Sigma, \sigma}^4}{2^{12} d^{8}}
.
$$
If $\sigma^{2k}$ is the trivial automorphism of $G$ for some odd integer $k\geq 1$, then from Theorem \ref{thmPrincipal}, the nontrivial eigenvalues of the normalised adjacency operator of $C_\Sigma(G, S)^\sigma$ are greater than
$$
\left(
-1 + \frac {1}{2^{12} d^{8k}}
\left(
\frac 12
\left(
1 -
\left(
1 - \frac{h_{\Sigma, \sigma}^2}{2d^2}
\right)^k
\right)
\right)
^4
\right)^{1/k}.
$$
By the discrete Cheeger--Buser inequality (Proposition \ref{Prop:chin}), the result follows.
\end{proof}
\section{Spectral expansion of Schreier graphs}
\label{Sec:Schreier}
Given a subgroup $H$ of a finite group $G$, and a symmetric subset $S$ of $G$ with $|S| = d$, the Schreier graph $\mathrm{Sch} (G, H, S)$ has the set $H\backslash G$ of right cosets of $H$ in $G$ as its set of vertices, and there is an edge from $Hg$ to $Hg'$ ($g, g'\in G$) if $Hg' = Hgs$ for some $s\in S$.
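The coset construction can be instantiated on a small example; the subgroup and the symmetric set below are illustrative assumptions, not taken from the paper:

```python
from itertools import permutations

# Illustrative sketch: the Schreier graph Sch(G, H, S) for G = S_3, a
# subgroup H of order 2, and a symmetric set S of involutions.

def compose(a, b):
    # (a*b)(i) = a(b(i)); permutations are stored as tuples
    return tuple(a[b[i]] for i in range(len(b)))

G = list(permutations(range(3)))
H = [(0, 1, 2), (1, 0, 2)]        # subgroup generated by one transposition
S = [(2, 1, 0), (0, 2, 1)]        # symmetric: each element is its own inverse

def right_coset(g):
    return frozenset(compose(h, g) for h in H)

vertices = {right_coset(g) for g in G}
# Edge Hg -> Hgs; this is well defined: if Hg' = Hg then H(g's) = H(gs)
edges = {(right_coset(g), right_coset(compose(g, s))) for g in G for s in S}
```

Since $S$ is symmetric, every edge $Hg \to Hgs$ is matched by the reverse edge $Hgs \to Hgss^{-1} = Hg$, so the graph is undirected.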
\begin{theorem}
\label{Thm:Schreier}
Suppose no index two subgroup of $G$ acts transitively on the vertex set of $\mathrm{Sch} (G, H, S)$. Assume that the index of $H$ in $G$ is at least $4$, and that $S\cdot S$ contains all of its conjugates by elements of $G$. If the Schreier graph $\mathrm{Sch} (G, H, S)$ is connected, then the nontrivial spectrum of its normalised adjacency operator lies in the interval
$$\left( -1 + \frac{h_{\mathrm{Sch}}^4}{2^{12}d^8}
,
1 - \frac{h_{\mathrm{Sch}}^2}{2d^2}
\right]$$
where $h_{\mathrm{Sch}}$ denotes the vertex Cheeger constant of $\mathrm{Sch} (G, H, S)$.
\end{theorem}
\begin{proof}
Let $\ensuremath{\mathcal{G}}$ denote the group $G$. Consider the action of $\ensuremath{\mathcal{G}}$ on the set of vertices $H\backslash G$ of $\mathrm{Sch} (G, H, S)$ by right multiplication via the inverse, i.e.,
$$\tau \cdot Hg
= Hg \tau^{-1}
$$
for any $\tau \in \ensuremath{\mathcal{G}}$ and any right coset $Hg$ of $H$ in $G$.
Note that this action is transitive, and the stabiliser of each vertex has order $|H|$.
For an element $s\in S$, let $\theta:
H\backslash G
\to
H\backslash G$
denote the bijection defined by
$$\theta(Hg)
= Hgs.$$
For any $Hg\in H\backslash G$, note that
\begin{align*}
\theta(\tau \cdot Hg)
& = Hg\tau^{-1} s\\
& = Hgs (s^{-1} \tau s)^{-1} \\
& = (s^{-1} \tau s) \cdot Hgs\\
& = \psi_s(\tau) \cdot Hgs
\end{align*}
for any $\tau \in \ensuremath{\mathcal{G}}$, where $\psi_s$ denotes the automorphism
$$\tau \mapsto s^{-1} \tau s$$
of the group $\ensuremath{\mathcal{G}}$.
For any element $Hg\in H\backslash G$, note that
\begin{align*}
\ensuremath{\mathscr{N}}_S(\ensuremath{\mathscr{N}}_S(\tau\cdot Hg))
& = \{Hg\tau^{-1} x\,|\, x\in S\cdot S\} \\
& = \{Hg\tau^{-1} x\,|\, x\in \tau S\cdot S \tau^{-1} \} \\
& = \{Hgx\tau^{-1} \,|\, x\in S\cdot S\} \\
& = \{\tau \cdot Hgx \,|\, x\in S\cdot S\} \\
& = \tau \cdot \ensuremath{\mathscr{N}}_S(\ensuremath{\mathscr{N}}_S(Hg))
\end{align*}
holds for any $\tau \in \ensuremath{\mathcal{G}}$.
Since the Schreier graph $\mathrm{Sch} (G, H, S)$ is connected, its vertex Cheeger constant $h_{\mathrm{Sch}}$ is positive. Thus $\mathrm{Sch} (G, H, S)$ is an $h_{\mathrm{Sch}}$-vertex expander with $h_{\mathrm{Sch}}>0$, and by Theorem \ref{thmPrincipalk1}, the nontrivial eigenvalues of its normalised adjacency operator are greater than
$$ - 1 + \frac{h_{\mathrm{Sch}}^4} {2^{12} d^8}.$$
By the discrete Cheeger--Buser inequality (Proposition \ref{Prop:chin}), the result follows.
\end{proof}
| {
"timestamp": "2020-08-12T02:00:06",
"yymm": "2008",
"arxiv_id": "2008.04307",
"language": "en",
"url": "https://arxiv.org/abs/2008.04307",
"abstract": "Let $G$ be a finite group with $|G|\\geq 4$ and $S$ be a subset of $G$. Given an automorphism $\\sigma$ of $G$, the twisted Cayley graph $C(G, S)^\\sigma$ (resp. the twisted Cayley sum graph $C_\\Sigma(G, S)^\\sigma$) is defined as the graph having $G$ as its set of vertices and the adjacent vertices of a vertex $g\\in G$ are of the form $\\sigma(gs)$ (resp. $\\sigma(g^{-1} s)$) for some $s\\in S$. If the twisted Cayley graph $C(G, S)^\\sigma$ is undirected and connected, then we prove that the nontrivial spectrum of its normalised adjacency operator is bounded away from $-1$ and this bound depends only on its degree, the order of $\\sigma$ and the vertex Cheeger constant of $C(G, S)^\\sigma$. Moreover, if the twisted Cayley sum graph $C_\\Sigma(G, S)^\\sigma$ is undirected and connected, then we prove that the nontrivial spectrum of its normalised adjacency operator is bounded away from $-1$ and this bound depends only on its degree and the vertex Cheeger constant of $C_\\Sigma(G, S)^\\sigma$. We also study these twisted graphs with respect to anti-automorphisms, and obtain similar results. Further, we prove an analogous result for the Schreier graphs satisfying certain conditions.",
"subjects": "Combinatorics (math.CO)",
"title": "Spectrum of twists of Cayley and Cayley sum graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9830850857421198,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7095349759629067
} |
https://arxiv.org/abs/1802.04586
\theoremstyle{plain}
\newtheorem{theorem}{Theorem}
\newtheoremstyle{note}% name
{4pt}% space above
{4pt}% space below
{\sl}% body font
{}% indent amount
{\itshape}% theorem head font
{.}% punctuation after theorem head
{.5em}% space after theorem head
{}% theorem head spec
\theoremstyle{note}
\newtheorem{claim}{Claim}
\begin{document}
\title[Hamiltonicity in randomly perturbed hypergraphs]{Hamiltonicity in randomly perturbed hypergraphs}
\thanks{
JH was partially supported by FAPESP (Proc. 2014/18641-5) and Simons Foundation \#630884.
YZ is partially supported by NSF grant DMS 1700622.}
\author{Jie Han}
\author{Yi Zhao}
\address{Department of Mathematics, University of Rhode Island, 5 Lippitt Road, Kingston, RI, 02881}
\email{jie\_han@uri.edu}
\address
{Department of Mathematics and Statistics, Georgia State University, Atlanta, GA, 30303}
\email{yzhao6@gsu.edu}
\keywords{Hamiltonian cycle, random hypergraph, perturbed hypergraph}
\subjclass[2010]{}
\begin{abstract}
For integers $k\ge 3$ and $1\le \ell\le k-1$, we prove that for any $\alpha>0$, there exist $\epsilon>0$ and $C>0$ such that for sufficiently large $n\in (k-\ell)\mathbb{N}$, the union of a $k$-uniform hypergraph with minimum vertex degree $\alpha n^{k-1}$ and a binomial random $k$-uniform hypergraph $\mathbb{G}^{(k)}(n,p)$ with $p\ge n^{-(k-\ell)-\epsilon}$ for $\ell\ge 2$ and $p\ge C n^{-(k-1)}$ for $\ell=1$ on the same vertex set contains a Hamiltonian $\ell$-cycle with high probability.
Our result is best possible up to the values of $\epsilon$ and $C$ and answers a question of Krivelevich, Kwan and Sudakov.
\end{abstract}
\maketitle
\section{Introduction} \label{sec:intro}
\subsection{Hamiltonian cycles and random graphs}
The study of Hamiltonicity (the existence of a spanning cycle) has been a central and fruitful area in graph theory.
In particular, a celebrated result of Karp~\cite{Karp} states that the decision problem for Hamiltonicity in graphs is NP-complete.
So it is desirable to study sufficient conditions that guarantee Hamiltonicity.
Among a large variety of such results, probably the most well-known is a theorem of Dirac from 1952~\cite{Di52}: every $n$-vertex graph ($n\ge 3$) with minimum degree at least $n/2$ is Hamiltonian.
Another well-studied object in graph theory is the random graph $\mathbb{G}(n,p)$, which contains $n$ vertices and each pair of vertices forms an edge with probability $p$ independently from other pairs.
P\'osa~\cite{Posa} and Korshunov~\cite{Korshunov} independently determined the threshold for Hamiltonicity in $\mathbb{G}(n,p)$, which is around $\log n/n$.
This implies that almost all dense graphs are Hamiltonian.
Furthermore, Bohman, Frieze and Martin~\cite{BFM} showed that for every $\alpha>0$ there is $c=c(\alpha)$ such that
every $n$-vertex graph $G$ with minimum degree $\alpha n$ becomes Hamiltonian~\emph{a.a.s.}~after adding $c n$ random edges (we say that an event happens \emph{asymptotically almost surely}, or \emph{a.a.s.}, if it happens with probability $1-o(1)$).
This result is tight up to the value of $c$ by considering a complete bipartite graph $K_{\alpha n, (1-\alpha)n}$.
A comparison can be drawn to the notion of smoothed analysis of algorithms introduced by Spielman and Teng~\cite{SpTe}, which involves studying the performance of algorithms on randomly perturbed inputs.
\subsection{Uniform hypergraphs}
It is natural to study the Hamiltonicity of uniform hypergraphs.
Given $k\ge 2$, a $k$-uniform hypergraph (in short, a \emph{$k$-graph}) $H=(V,E)$ consists of a vertex set $V$ and an edge set $E\subseteq \binom{V}{k}$, where every edge is a $k$-element subset of $V$.
Given a $k$-graph $H$ with a set $S$ of $d$ vertices (where $1 \le d \le k-1$) we define $N_{H} (S)$ to be the collection of $(k-d)$-sets $T$ such that $S\cup T\in E(H)$, and let $\deg_H(S):=|N_H(S)|$.
The \emph{minimum $d$-degree $\delta _{d} (H)$} of $H$ is the minimum of $\deg_{H} (S)$ over all $d$-vertex sets $S$ in $H$.
In the last two decades, there has been a growing interest of extending Dirac's theorem to hypergraphs.
Despite other notions of cycles in hypergraphs (e.g., Berge cycles), the following definition of cycles has become more popular recently (see surveys~\cite{RR,zsurvey}).
For integers $1\le \ell \le k-1$ and $m\ge 3$, a $k$-graph $F$ with $m(k-\ell)$ vertices and $m$ edges is called an \emph{$\ell$-cycle} if its vertices can be ordered cyclically such that each of its edges consists of $k$ consecutive vertices and every two consecutive edges (in the natural order of the edges) share exactly $\ell$ vertices.
A $k$-graph is called \emph{$\ell$-Hamiltonian} if it contains an $\ell$-cycle as a spanning subgraph.
Extending Dirac's theorem, the minimum $d$-degree conditions that force $\ell$-Hamiltonicity (for $1\le d, \ell\le k-1$) have been intensively studied~\cite{BMSSS1, BMSSS2, BHS, CzMo, GPW, HS, HZ2, HZ1, KKMO, KMO, KO, RoRu14, RoRuSz06, RRS08, RRS11}.
For example, the minimum $1$-degree threshold for $2$-Hamiltonicity in 3-graphs was determined asymptotically~\cite{RRRSS}.
Let $\mathbb{G}^{(k)}(n,p)$ denote the binomial random $k$-graph on $n$ vertices, where each $k$-set forms an edge independently with probability $p$.
The thresholds for $\ell$-Hamiltonicity have been studied by Dudek and Frieze~\cite{DuFr1, DuFr2}, who proved that the asymptotic threshold is $1/n^{k-\ell}$ for $\ell\ge 2$ and $\log n/n^{k-1}$ for $\ell=1$ (they also gave a sharp threshold for $k\ge 4$ and $\ell=k-1$).
It is also natural to consider $\ell$-Hamiltonicity in randomly perturbed $k$-graphs.
In fact, Krivelevich, Kwan and Sudakov~\cite{KKS} extended the result of Bohman--Frieze--Martin~\cite{BFM} to hypergraphs.
\begin{thm}\label{thm:KKS}\cite{KKS}
Let $k\in \mathbb{N}$, and let $H$ be a $k$-graph on $n\in (k-1)\mathbb{N}$ vertices with $\delta_{k-1}(H)\geq \alpha n$. There exists a function $c_k =c_k (\alpha)$ such that for $p=c_k/ n^{k-1}$, $H\cup \mathbb G^{(k)}(n,p)$ a.a.s.~is $1$-Hamiltonian.
\end{thm}
Theorem~\ref{thm:KKS} is tight up to the value of $c_k$ (see the paragraph after Theorem~\ref{main}).
Similar results for the powers of Hamiltonian $(k-1)$-cycles were obtained by Bennett, Dudek and Frieze~\cite{BeDuFr16}, and recently by Bedenknecht, Han, Kohayakawa and Mota~\cite{BHKM}.
In addition, B\"ottcher, Montgomery, Parczyk and Person~\cite{BMPP} proved embedding results for bounded degree subgraphs in randomly perturbed graphs.
Other results in randomly perturbed graphs can be found in~\cite{BTW, KKS2, BHKMPP}.
Krivelevich, Kwan and Sudakov~\cite{KKS} asked whether Theorem~\ref{thm:KKS} can be extended to $\ell$-Hamiltonicity under minimum $d$-degree conditions for $1\le d, \ell \le k-1$. McDowell and Mycroft~\cite{McMy} found such results for $d\ge \max\{\ell, k-\ell\}$ and reiterated the question for arbitrary $d$ and $\ell$.
In this paper we solve this problem completely.
Since the minimum $1$-degree condition is the weakest among $d$-degree conditions for all $d\ge 1$, we only state and prove our result with respect to the minimum $1$-degree.
\begin{thm}\label{main}
For integers $k\ge 3$, $1\le \ell \le k-1$ and $\alpha>0$ there exist $\epsilon>0$ and an integer $C>0$ such that the following holds for sufficiently large $n\in (k-\ell)\mathbb{N}$.
Suppose $H$ is a $k$-graph on $n$ vertices with $\delta_{1}(H)\ge \alpha {n}^{k-1}$
and
\begin{equation}\label{eq:p}
p=p(n)\ge \begin{cases}
n^{-(k-\ell)-\epsilon} &\text{ if } \ell\ge 2, \\
C n^{-(k-1)} &\text{ if } \ell=1.
\end{cases}
\end{equation}
Then $H\cup \mathbb{G}^{(k)}(n,p)$ a.a.s.~is $\ell$-Hamiltonian.
\end{thm}
Theorem~\ref{main} is sharp up to the constants $\epsilon$ and $C$.
Indeed, given $k$ and $\ell$, let $\alpha>0$ be sufficiently small and $n\in (k-\ell)\mathbb{N}$ be sufficiently large.
Consider a partition $A\cup B$ of a vertex set $V$ such that $|A|=\alpha n$ and $|B|=(1-\alpha)n$.
Let $H_0$ be the $k$-graph with all $k$-tuples that intersect both $A$ and $B$ as edges.
It is easy to see that $\delta_1(H_0) \ge \alpha n \binom{n - \alpha n-1}{k-2}$.
Suppose $H_0\cup \mathbb{G}^{(k)}(n,p)$ \emph{a.a.s.}~contains a Hamiltonian $\ell$-cycle $C$.
Since $|A|=\alpha n$, $C$ contains at least $1/\alpha-1$ consecutive vertices in $B$.
Let $a = \lfloor (1/\alpha - 1 - \ell) / (k - \ell) \rfloor$.
Since $B$ is an independent set in $H_0$, this implies that $\mathbb{G}^{(k)}(n,p)$ \emph{a.a.s.}~contains an \emph{$\ell$-path} on $a$ edges (a $k$-graph with vertices $v_1, v_2, \dots, v_{a(k-\ell)+\ell}$ and edges $\{v_{i(k-\ell)+1}, \dots, v_{i(k-\ell)+k}\}$ for $i=0,\dots, a-1$). When $p < (1/2)^{1/a} n^{-(k-\ell)-\ell/a}$, we have $n^{\ell+a(k-\ell)} p^a <1/2$.
By Markov's inequality, with probability at least $1/2$, $\mathbb{G}^{(k)}(n,p)$ contains no $\ell$-path on $a$ edges.
When $\ell=1$, if $H_0\cup \mathbb{G}^{(k)}(n,p)$ is \emph{a.a.s.}~$\ell$-Hamiltonian, then $\mathbb{G}^{(k)}(n,p)$ \emph{a.a.s.}~contains $n/(k-1) - 2|A| > n/k$ edges (because a $1$-Hamiltonian cycle contains $n/(k-1)$ edges and each vertex is contained in at most $2$ of them). When $p < n^{-(k-1)}/(2k)$, we have $n^{k} p <n/(2k)$.
By Markov's inequality, with probability at least $1/2$, $\mathbb{G}^{(k)}(n,p)$ contains fewer than $n/k$ edges.
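The two first-moment computations above are elementary to verify numerically; the parameter values in the following snippet ($k$, $\ell$, $a$, $n$, and the factor $0.99$ placing $p$ just below each threshold) are illustrative assumptions of this check, not values from the paper:

```python
# Numeric sanity check of the two Markov-type computations above.
k, ell, a, n = 3, 2, 5, 10 ** 6

# If p < (1/2)^{1/a} n^{-(k-l)-l/a}, the expected number of labelled
# l-paths with a edges, at most n^{l + a(k-l)} p^a, is below 1/2.
p = 0.99 * 0.5 ** (1.0 / a) * n ** (-(k - ell) - ell / a)
expected_paths = n ** (ell + a * (k - ell)) * p ** a

# If p < n^{-(k-1)}/(2k), the expected number of edges n^k p is below n/(2k).
p1 = 0.99 * n ** (-(k - 1)) / (2 * k)
expected_edges = n ** k * p1
```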
\subsection{Proof ideas}
The proof of Theorem~\ref{main} follows the \textit{absorbing method} introduced by R\"odl, Ruci\'nski, and Szemer\'edi in~\cite{RoRuSz06}. Let us define \emph{absorbers} for our problem. Given an $\ell$-path $P$, we call the first and last $\ell$ vertices two \emph{$\ell$-ends} of $P$.
Let $H$ be a $k$-graph and $S$ be a set of $k-\ell$ vertices in $V(H)$.
We call an $\ell$-path $P$ an \emph{$S$-absorber} if $V(P)\cap S=\emptyset$ and $V(P)\cup S$ spans an $\ell$-path with the same \emph{$\ell$-ends} as $P$.
Below is a typical procedure for finding a Hamilton $\ell$-cycle in $H$ by the absorbing method.
\begin{enumerate}
\item We show that every $(k-\ell)$-subset of $V(H)$ has many absorbers (of the same fixed length). This enables us to obtain a path $P_{abs}$ of linear length such that every $(k-\ell)$-set has many absorbers on $P_{abs}$.
\item We cover most vertices of $V\setminus V(P_{abs})$ by short paths and then connect them together with $P_{abs}$ into a cycle $C$.
\item The vertices not covered by $C$ are arbitrarily partitioned into $(k-\ell)$-sets and absorbed by $P_{abs}$ greedily.
\end{enumerate}
The proof thus has three main components:
\begin{itemize}
\item an \emph{absorbing lemma}, which provides a family ${\mathcal A}$ of vertex-disjoint short paths such that every $(k-\ell)$-set has many absorbers in ${\mathcal A}$;
\item a \emph{path cover lemma}, which allows us to cover most vertices of $V(H)$ by vertex-disjoint paths; and
\item a \emph{connecting lemma}, which allows us to connect ${\mathcal A}$ into a single path $P_{abs}$ and connect the paths from the path cover lemma together.
\end{itemize}
Let $\mathbb{G}^{(k)}(n,p)\cup H$ be the underlying $k$-graph on the same vertex set $V$. Using Janson's inequality, one can derive the path cover lemma by using the edges of $\mathbb{G}^{(k)}(n,p)$.
If we have $\delta_{k-\ell}(H)\ge \alpha \binom{n}{\ell}$, then every $(k-\ell)$-set of $V$ has many neighbors and it is not difficult to prove the absorbing lemma. If we have $\delta_{\ell}(H)\ge \alpha \binom{n}{k- \ell}$, then every $\ell$-set of $V$ has many neighbors and it is easy to prove the connecting lemma. However, our Theorem~\ref{main} only assumes $\delta_{1}(H)\ge \alpha {n}^{k-1}$. In order to prove Theorem~\ref{main}, we ``shave'' $H$ by removing all the edges of $H$ that contain an $\ell$-set of low degree.
This results in a $k$-graph $H'$ in which every $\ell$-subset of $V$ either has a high degree or a zero degree. Our connecting lemma only connects two $\ell$-sets with high degree.
To overcome the difficulty in absorbing, an earlier version of this paper used the hypergraph regularity method. Following the suggestion of a referee, we now give a simpler absorbing lemma without the regularity method. Note that the shaving process creates a small number of vertices that cannot be absorbed and we will cover these vertices by the path cover lemma.
The rest of the paper is organized as follows.
We state and prove our lemmas in Sections 2 and 3 and prove Theorem~\ref{main} in Section 4.
\medskip
\noindent\textbf{Notation.}
Given positive integers $n\ge b$, let $[n]:= \{1, 2, \dots, n\}$ and $(n)_b:=n(n-1)\cdots (n-b+1) = n!/(n-b)!$.
Given a $k$-graph $H$, we use $v_H$ and $e_H$ to denote the order and size of $H$, respectively. For two (hyper)graphs $G$ and $H$, let $G\cap H$ (or $G\cup H$) denote the (hyper)graph with vertex set $V(G)\cap V(H)$ (or $V(G)\cup V(H)$) and edge set $E(G)\cap E(H)$ (or $E(G)\cup E(H)$). Given a set $X$, $\binom Xk$ denotes the family of all $k$-subsets of $X$. A $k$-graph $(V, E)$ is \emph{complete} if $E= \binom Vk$.
Given $1\le \ell \le k$, the \emph{$\ell$-shadow of a $k$-graph $H$}, denoted by $\partial_{\ell}H$, is the collection of all $\ell$-subsets $S\subset V(H)$ that are contained in some edges of $H$.
In this paper, unless stated otherwise, we assume that the vertex sets of paths and related hypergraphs are \emph{ordered}.
When $A$ and $B$ are ordered sets, let $AB$ denote their concatenation.
Given positive integers $k, \ell, a$ such that $\ell<k$, let $P_a$ denote a \emph{$k$-uniform $\ell$-path of length $a$}, that is, a $k$-graph on vertices $v_1, v_2, \dots, v_{a(k-\ell)+\ell}$ with edges $\{v_{i(k-\ell)+1}, \dots, v_{i(k-\ell)+k}\}$ for $i=0,\dots, a-1$.
In general, given a $k$-graph $F$ on $\{x_1,\dots, x_s\}$ and a $k$-graph $H$, we say that \emph{an ordered subset $(v_1,\dots, v_s)$ of $V(H)$ spans a {(labeled) copy} of $F$} if $\{v_{i_1},\dots, v_{i_k}\}\in E(H)$ whenever $\{x_{i_1},\dots, x_{i_k}\}\in E(F)$.
Given integers $a\ge 1$ and $x\ge 0$, let $P_{a,x}$ denote a $k$-graph on $a(k-\ell)+\ell+2x$ vertices with an order such that the first and last $x$ vertices are isolated and the middle $a(k-\ell)+\ell$ vertices span a copy of $P_a$.
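The edge set of $P_a$ can be generated mechanically; the following sketch simply instantiates the definition above (the parameter values in the examples are illustrative):

```python
# Sketch instantiating the definition of the k-uniform l-path P_a above:
# vertices 1, ..., a(k-l)+l and edges {v_{i(k-l)+1}, ..., v_{i(k-l)+k}}
# for i = 0, ..., a-1.

def ell_path_edges(k, ell, a):
    return [tuple(range(i * (k - ell) + 1, i * (k - ell) + k + 1))
            for i in range(a)]
```

For instance, `ell_path_edges(3, 2, 4)` yields the tight path on $6$ vertices, and consecutive edges of `ell_path_edges(k, 1, a)` share exactly one vertex, as required of a loose path.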
Throughout the rest of the paper, we write $\alpha \ll \beta \ll \gamma$ to mean that we can choose the positive constants
$\alpha, \beta, \gamma$ from right to left. More
precisely, there are increasing functions $f$ and $g$ such that, given
$\gamma$, whenever $\beta \leq f(\gamma)$ and $\alpha \leq g(\beta)$, the subsequent statement holds.
Hierarchies of other lengths are defined similarly.
Throughout the paper we omit floor and ceiling functions when they are not crucial.
\section{Subgraphs in random hypergraphs}\label{sec:random}
In this section we introduce some results related to binomial random $k$-graphs (similar ones can be found in~\cite{BHKM}).
Our main tools are Janson's inequality (see, e.g.,~\cite[Theorem 2.14]{JLR}) and Chebyshev's inequality.
We first recall Janson's inequality.
Let $\Gamma$ be a finite set and let $\Gamma_p$ be a random subset of $\Gamma$ such that each element of $\Gamma$ is included independently with probability $p$.
Let ${\mathcal S}$ be a family of non-empty subsets of $\Gamma$ and for each $S\in {\mathcal S}$, let $I_S$ be the indicator random variable for the event $S\subseteq \Gamma_p$.
Thus each $I_S$ is a Bernoulli random variable $\mathop{\text{\rm Be}}\nolimits(p^{|S|})$.
Let $X:=\sum_{S\in {\mathcal S}} I_S$ and $\lambda = \mathbb E(X)$.
Let $\Delta_{X} := \sum_{S\cap T\neq\emptyset}\mathbb{E}(I_S I_T)$, where the sum is over not necessarily distinct ordered pairs $S, T\in {\mathcal S}$.
Then Janson's inequality says that for any $0\le t\le \lambda$,
\begin{equation}
\mathbb{P}(X\leq \lambda -t)\leq \exp \left ( -\dfrac{t^2}{2\Delta_{X}}\right ). \label{eq:2}
\end{equation}
Next note that $\mathop{\text{\rm Var}}\nolimits(X)=\mathbb E(X^2) - \mathbb E(X)^2 \le \Delta_X$.
Then by Chebyshev's inequality,
\begin{equation}
\mathbb{P}(X\ge 2\lambda) \le \frac{\mathop{\text{\rm Var}}\nolimits(X)}{\lambda^2} \le \frac{\Delta_X}{\lambda^2}. \label{eq:1}
\end{equation}
Consider the random $k$-graph $\mathbb{G}^{(k)}(n, p)$ on an $n$-vertex set $V$.
Note that we can view $\mathbb{G}^{(k)}(n, p)$ as $\Gamma_p$ with $\Gamma = \binom{V}k$.
Let $\Phi_F = \Phi_F(n, p)= \min \{n^{v_H} p^{e_H}: H\subseteq F, e_H>0\}$.
The following simple proposition is useful.
\begin{prop}\label{prop:1}
Let $F$ be a $k$-graph with $s$ vertices and $f$ edges and let $G:=\mathbb{G}^{(k)}(n, p)$ on $V$.
Given a family ${\mathcal A}$ of ordered $s$-subsets of $V$,
let $X_{\mathcal A}=\sum_{A\in {\mathcal A}} I_A$,
where $I_A$ is the Bernoulli random variable for the event that $A$ {spans a labeled copy of $F$ in} $G$.
Then $\Delta_{X_{\mathcal A}} \leq 2^s s! \, n^{2s}p^{2f}/\Phi_F$.
\end{prop}
\begin{proof}
Fix $1\le i\le s$. There are $\binom{s}{i} (s)_i$ ways in which two labeled $s$-sets can share exactly $i$ vertices. Fixing two such $s$-sets, there are $(n)_{2s - i}$ ways of mapping their $2s-i$ vertices into $V$.
Let $f_i$ denote the maximum number of edges of an $i$-vertex subgraph of $F$. Since two copies of $F$ counted in $\Delta_{X_{\mathcal A}}$ share at least one edge, and hence at least $k$ vertices, only indices $i\ge k$ contribute; for such $i$ we have $f_i\ge 1$ and therefore $n^i p^{f_i}\ge \Phi_F$. We obtain
\[
\Delta_{X_{\mathcal A}} \le \sum_{i=k}^s \binom{s}{i} (s)_i (n)_{2s - i} p^{2f - f_i} \le \sum_{i=k}^s \binom si s! \, \frac{n^{2s} p^{2f}}{n^i p^{f_i}}
\le 2^{s} s! \, n^{2s}p^{2f}/\Phi_F. \qedhere
\]
\end{proof}
The next two lemmas gather all the properties of $\mathbb{G}^{(k)}(n,p)$ that we will use.
\begin{lemma}\label{lm:gnp}
Let $k, \ell, a, x\in \mathbb Z$ such that $k\ge 3, 1\le \ell\le k-1$, $a\geq 1$, and $0\le x\le k$. Write $b = b(x) =2x+\ell+(k-\ell)a$.
Suppose $0<\epsilon\le \ell/(3a)$ and $1/n \ll 1/C \ll \gamma, 1/a, 1/k$.
Let $G=\mathbb{G}^{(k)}(n, p)$ be a random $k$-graph with vertex set $V$, where $p$ satisfies~\eqref{eq:p}. Then the following properties hold.
\begin{enumerate}
\item Let $L$ be a family of $\ell$-sets in $V(G)$ and in addition assume $a\ge \ell/(k-\ell)$.
Then for every $R, V^*\subseteq V(G)$ such that $|V^*|\ge \gamma n$ and $|L\cap \binom{R}{\ell}|\ge \gamma n^{\ell}$, with probability at least $1-\exp(-3 n)$, $G$ contains a copy of $P_{a}$ whose $\ell$-ends are in $L\cap \binom{R}{\ell}$ and whose other vertices are from $V^*$.
Moreover, this property holds for all choices of $R$ and $V^*$ simultaneously with probability $1-o(1)$. \label{item-I}
\item With probability at least $1-o(1)$, at most $2 p^{a} n^{b}$ ordered $b$-subsets of $V(G)$ span copies of $P_{a,x}$. \label{item-III}
\item With probability at least $1-o(1)$, $G$ contains at most $4 b^2 n^{2b-1} p^{2a}$ pairs of overlapping (i.e., not vertex-disjoint) copies of $P_{a,x}$. \label{item-IV}
\end{enumerate}
\end{lemma}
\begin{proof}
Note that if $H$ is a subgraph of $P_{a,x}$, then $v_H\ge \ell+(k-\ell)e_H$. Thus,
\begin{equation}
\label{eq:phi1}
\Phi_{P_{a,x}} = \min_{1\le e_H\le a} n^{v_H} p^{e_H} \ge \min_{1\le e_H\le a} n^{\ell+(k-\ell)e_H} p^{e_H} = n^{\ell} \min_{1\le e_H\le a} (n^{k-\ell} p)^{e_H} \ge
\begin{cases}
n^{\ell - a\epsilon} & \text{if } \ell\ge 2,\\
C n & \text{if }\ell=1,
\end{cases}
\end{equation}
where we used~\eqref{eq:p} in the last inequality.
Since $\eps\le \ell/(3a)$, $\Phi_{P_{a,x}}\ge C n$ holds for all $\ell$.
Given a family ${\mathcal A}$ of ordered $b$-sets of vertices in $V$, let ${\mathcal S}$ consist of the edge sets of the labeled copies of $P_{a,x}$ spanned on $A$ in the complete $k$-graph on $V$ for all $A\in {\mathcal A}$. Let $X_{\mathcal A}=\sum_{S\in {\mathcal S}} I_S$, where $I_S$ is the indicator variable for the event $S\subseteq E(G)$ (thus $X_{\mathcal A}$ counts the number of $A\in {\mathcal A}$ that span a copy of $P_{a,x}$ in $G$).
Since $\Phi_{P_{a,x}}\ge C n$, Proposition~\ref{prop:1} implies that
\begin{equation}\label{eq:phi}
\Delta_{X_{\mathcal A}} \leq 2^{b} b! \, n^{2b}p^{2a}/\Phi_{P_{a,x}}\le ( 2^b b! /C) n^{2b-1}p^{2a} \le (\gamma^{2b}/24) n^{2b-1}p^{2a}
\end{equation}
because $1/C\ll \gamma, 1/a, 1/k$.
For~\eqref{item-I}, fix such a choice for $R$ and $V^*$ and let $x=0$ and $b= \ell+(k-\ell)a$.
Let $\mathcal A$ be the family of all ordered $(\ell+(k-\ell)a)$-sets in $V(G)$ whose first and last $\ell$ vertices are in $L\cap \binom{R}{\ell}$ and all other vertices are from $V^*$.
Then $|{\mathcal A}| \ge (\gamma n^\ell)^2 (\gamma n)^{(k-\ell)a-\ell}/2 \ge (\gamma n)^{b}/2$.
Recall that $X_{\mathcal A}$ counts the number of $A\in {\mathcal A}$ that span a copy of $P_a$ in $G$.
Then $(\gamma n)^{b} p^a/2\le \mathbb{E}(X_{\mathcal A})\le n^{b} p^a$.
By~\eqref{eq:2} and \eqref{eq:phi}, we have
\[
\mathbb{P} (X_{\mathcal A}=0) \le
\exp \left( -\frac{\mathbb{E}(X_{\mathcal A})^2}{2\Delta_{X_{{\mathcal A}}}} \right)
\le \exp \left( -\frac{(\gamma n)^{2b} p^{2a}/4}{(\gamma^{2b}/12) n^{2b-1}p^{2a} } \right)
= \exp( - 3 n).
\]
The second part of~\eqref{item-I} follows from the union bound because there are at most $2^n$ choices for each of $R$ and $V^*$ and $2^n \cdot 2^n \cdot \exp(- 3 n) \le \exp(-n)$.
For \eqref{item-III},
let $X_2$ be the random variable that counts the number of labeled copies of $P_{a,x}$ in $G$.
Then $\mathbb E(X_2)= {(n)}_{b}p^{a}$. By~\eqref{eq:1} and \eqref{eq:phi}, we have
\[
\mathbb{P}(X_2\ge 2p^an^b)\leq \mathbb{P}(X_2\ge 2\mathbb{E}(X_2))\le \frac{\Delta_{X_2 }}{\mathbb{E}(X_2)^2} \le \frac{ (\gamma^{2b}/24) n^{2b-1}p^{2a} }{((n)_{b} \, p^{a})^2} = o(1).
\]
For \eqref{item-IV},
let ${\mathcal Q}$ consist of edge sets of all overlapping pairs of $P_{a,x}$ in the complete $k$-graph on $V$.
Let $Y=\sum_{Q\in {\mathcal Q}}I_Q$, where $I_Q$ is the indicator variable for the event $Q\subseteq E(G)$.
We first estimate $\mathbb{E}(Y)$.
For $X_2$ defined above, we have $\Delta_{X_2} = \mathbb{E}(\sum_{Q}I_Q)$, where the sum is over all $Q\in {\mathcal Q}$ whose two copies of $P_{a,x}$ share at least one edge.
As shown in the proof of Proposition~\ref{prop:1}, for $1\le i\le b$, there are $(n)_{2b - i} \binom b i (b)_i$
members of ${\mathcal Q}$ whose two copies of $P_{a,x}$ share exactly $i$ vertices.
Hence $\mathbb{E}(Y) \ge (n)_{2b - 1} b^2 \, p^{2a} \ge {n}^{2b - 1} p^{2a} b^2/2$. Since $\sum_{2\le i\le b} (n)_{2b-i} (b)_i^2 \le n^{2b-1}/2$, using \eqref{eq:phi}, we derive that
\[
\mathbb{E}(Y)\le (n)_{2b - 1} b^2 \, p^{2a} + (n^{2b-1}/2)\, p^{2a} + \Delta_{X_{2}} \le 2b^2 {n}^{{2b - 1}} p^{2a}.
\]
We next compute $\mathop{\text{\rm Var}}\nolimits(Y)$.
For each $Q\in {\mathcal Q}$, let $S_Q$ denote the $k$-graph induced by $Q$ (thus $S_Q$ is the union of two overlapping copies of $P_{a,x}$).
Fix two $Q, R\in {\mathcal Q}$ such that $Q\cap R \ne \emptyset$.
We write $S_Q=T_1\cup T_2$ and $S_R=T_3\cup T_4$, where $T_i$'s are copies of $P_{a,x}$ such that $E(T_1)\cap E(T_3)\neq\emptyset$.
Define $H_1:=T_1\cap T_2$, $H_2:=(T_1\cup T_2)\cap T_3$ and $H_3:=(T_1\cup T_2\cup T_3)\cap T_4$.
Since $V(T_1)\cap V(T_2)\ne \emptyset$, $V(T_3)\cap V(T_4)\ne \emptyset$, and $E(T_1)\cap E(T_3)\ne \emptyset$,
it follows that $v_{H_i}\ge 1$ for $i=1,2,3$.
We claim that $n^{v_{H_i}}p^{e_{H_i}}\ge n$ for $i=1,2,3$.
Indeed, since each $H_i$ is a subgraph of $P_{a,x}$, if $e_{H_i}\ge 1$, then by~\eqref{eq:phi}, $n^{v_{H_i}}p^{e_{H_i}}\ge \Phi_{P_{a,x}}\ge C n$; otherwise $e_{H_i}=0$ and then we have $n^{v_{H_i}}p^{e_{H_i}} = n^{v_{H_i}} \ge n^1=n$.
Consequently,
\begin{equation}\label{eq:estnp}
n^{v_{H_1}}p^{e_{H_1}}\cdot n^{v_{H_2}}p^{e_{H_2}}\cdot n^{v_{H_3}}p^{e_{H_3}} \ge n^{3}.
\end{equation}
Let $D=D(b,k)$ be the number of choices for $H_1, H_2, H_3$. Fix some $H_1, H_2, H_3$. Let
$
\Delta_{H_1, H_2, H_3}= \sum_{Q, R} \mathbb{E}(I_Q I_R),
$
where the sum is over all $Q, R\in {\mathcal Q}$ with $Q\cap R\neq\emptyset$ that generate the given $H_1, H_2, H_3$.
It is easy to see that the sum contains at most
\[
\binom{b}{v_{H_1}} (b)_{v_{H_1}} \binom{b}{v_{H_2}} (2b-v_{H_1})_{v_{H_2}} \binom{b}{v_{H_3}} (3b- v_{H_1} - v_{H_2})_{v_{H_3}} (n)_{4b-v_{H_1} - v_{H_2} -v_{H_3}} \le 2^{3b} (3b)! n^{4b-(v_{H_1} + v_{H_2} + v_{H_3})}
\]
terms.
Together with~\eqref{eq:estnp}, we obtain that
\[
\Delta_{H_1, H_2, H_3}=
\sum_{Q, R} \mathbb{E}(I_Q I_R) \le 2^{3b} (3b)! n^{4b-(v_{H_1} + v_{H_2} + v_{H_3})} p^{4a - (e_{H_1} + e_{H_2} + e_{H_3})} \le 2^{3b} (3b)! n^{4b - 3} p^{4a}.
\]
Consequently,
\[
\Delta_Y = \sum_{H_1, H_2, H_3} \Delta_{H_1, H_2, H_3} \le D 2^{3b} (3b)! n^{4b-3}p^{4a}.
\]
By~\eqref{eq:1}, we derive that
\[
\mathbb{P}\big(Y\ge 4 b^2 n^{2b-1}p^{2a}\big) \le \frac{\Delta_Y}{\mathbb{E}(Y)^2} \le \frac{D 2^{3b} (3b)! n^{4b-3}p^{4a}}{({n}^{{2b-1}} p^{2a} b^2/2)^2} = o(1).
\]
This confirms \eqref{item-IV}.
\end{proof}
In Lemma~\ref{lm:gnp} we assume that $p$ satisfies \eqref{eq:p} and obtain that $\Phi_{P_{a,x}}\ge C n$. This is necessary for Part~\eqref{item-I}, in which we use the union bound on $2^n$ events. When there are only
polynomially many events, it suffices to have $\Phi_{P_{a,x}}\ge n^c$ for some $0<c<1$, which occurs when $p\ge n^{-(k-\ell)-\eps}$ (for all $\ell\ge 1$) and $\epsilon< \ell/a$. We use this weaker condition on $p$ in the following lemma because we only have this condition in the proof of Lemma~\ref{lm:almost}.
\begin{lemma}\label{lm:gnp2}
Let $k, \ell, a, x\in \mathbb Z$ such that $k\ge 3, 1\le \ell\le k-1$, $a\geq 1$, and $0\le x\le k$. Write $b = b(x) =2x+\ell+(k-\ell)a$.
Suppose $0< \epsilon\le \ell/(2a)$ and $1/n \ll \gamma, 1/a, 1/k$.
Let $V$ be an $n$-vertex set, and let $\mathcal{F}_1, \dots, \mathcal{F}_t$ be $t\le n^{2k}$ families of $\gamma n^{b}$ ordered $b$-sets on $V$.
Let $G=\mathbb{G}^{(k)}(n,p)$ with $p\ge n^{-(k-\ell)-\eps}$. Then with probability at least $1-\exp(-n^{1/3})$, for all $i\in [t]$, at least $(\gamma/2) p^{a} n^{b}$ members of $\mathcal{F}_i$ span copies of $P_{a,x}$.
\end{lemma}
\begin{proof}
By~\eqref{eq:phi1} and $\eps\le \ell/(2a)$, we have $\Phi_{P_{a,x}} \ge n^{\ell-a\eps} \ge \sqrt n$.
Fix $i\in [t]$ and let $X_{\mathcal{F}_i}$ be the random variable that counts the number of the members of $\mathcal{F}_i$ that span copies of $P_{a,x}$.
By Proposition~\ref{prop:1} and $\Phi_{P_{a,x}}\ge \sqrt n$, we have $\Delta_{X_{{\mathcal F}_i}} \leq 2^{b} b! \, n^{2b}p^{2a}/\sqrt n$; note also that $\mathbb{E}(X_{{\mathcal F}_i})=\gamma n^{b}p^a$.
By~\eqref{eq:2}, we have
\[
\mathbb{P}\big(X_{{\mathcal F}_i}\leq (\gamma/2)n^{b}p^{a}\big) \le \exp \left( -\frac{ (\mathbb{E}(X_{{\mathcal F}_i})/2)^2 }{2\Delta_{X_{{\mathcal F}_i}}} \right)\le \exp \left( -\frac{(\gamma/2)^2n^{2b}p^{2a}}{2^{b+1} b! \, n^{2b}p^{2a}/\sqrt n} \right) \le \exp(- 2n^{1/3}).
\]
Since $n^{2k} \exp(- 2n^{1/3})\le \exp(- n^{1/3})$, the result follows from the union bound.
\end{proof}
\section{Lemmas}
In this section we prove all the lemmas that are needed for the proof of Theorem~\ref{main}.
Since we only assume $\delta_1(H)\ge \alpha n^{k-1}$, the $k$-graph $H$ may (unless $\ell=1$) contain $\ell$-sets $S$ whose degree is too low for them to be used for connections. To overcome this, we simply delete all edges that contain such a set $S$.
The following lemma reflects this ``shaving'' process.
\begin{lemma}\label{lem:shave}
Let $0<\eta \le \alpha, 1/k$.
Let $H$ be an $n$-vertex $k$-graph with $\delta_1(H)\ge \alpha n^{k-1}$.
Then there exists a spanning subgraph $H'$ of $H$, satisfying the following properties.
\begin{enumerate}
\item $e(H')\ge {\alpha} n^{k}/(2k)$.
\item $\deg_{H'}(v)\ge 2\alpha n^{k-1}/3$ for all but at most $3k\eta^2 n/\alpha$ vertices of $H$.
\item For every $\ell$-set $S$ of $V(H)$, either $\deg_{H'}(S)=0$ or $\deg_{H'}(S) \ge \eta^2 n^{k-\ell}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Starting from $H$, we iteratively do the following. If the current $k$-graph contains an $\ell$-set $S$ whose degree is less than $\eta^2 n^{k-\ell}$, then we delete all the edges containing $S$.
Clearly the iteration lasts at most $\binom{n}{\ell}$ steps.
Let $H'$ be the resulting $k$-graph; then (3) holds by construction.
Since we deleted at most $\eta^2 n^{k-\ell}$ edges in each step, we have $e(H) - e(H')\le \binom{n}{\ell}\eta^2 n^{k-\ell}\le {\alpha} n^{k}/(2k)$.
Together with $e(H)\ge (n/k) \alpha n^{k-1}$, (1) follows.
For (2), let $V_0$ be the set of vertices $v$ in $H'$ such that $\deg_{H'}(v)\le 2\alpha n^{k-1}/3$. Since $\delta_1(H)\ge \alpha n^{k-1}$, we have
\[
|V_0| \cdot \frac13\alpha n^{k-1} \le k(e(H) - e(H')) \le k\binom{n}{\ell}\eta^2 n^{k-\ell} \le k\eta^2 n^k.
\]
Thus $|V_0|\le 3k\eta^2 n/\alpha$ and (2) holds.
\end{proof}
\medskip
We recall the following form of Chernoff's inequality (see, e.g.,~\cite{JLR}).
For $x >0$ and a binomial random variable $X=\mathrm{Bin}(n, \zeta)$, it holds that
\begin{align}
\mathbb P(X\ge n\zeta + x)< e^{-x^2/(2 (n\zeta + x/3))} \quad \text{and} \quad \mathbb{P}(X\le n\zeta - x)< e^{-x^2/(2 n\zeta)}. \label{eq:cher2}
\end{align}
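As a quick numerical illustration, one can compare exact binomial tails against Bernstein-type bounds of the form $e^{-x^2/(2(n\zeta + x/3))}$ and $e^{-x^2/(2n\zeta)}$; the sketch below computes the tails exactly via \texttt{math.comb} (the parameters $n=200$, $\zeta=0.3$ and the deviations $x$ are illustrative choices, not values from the paper).

```python
from math import comb, exp

# Exact binomial tail probabilities vs. Bernstein/Chernoff-type bounds.
# n, zeta, and the deviations x below are illustrative values only.
def pmf(n, k, zeta):
    return comb(n, k) * zeta ** k * (1 - zeta) ** (n - k)

def upper_tail(n, zeta, x):   # P(X >= n*zeta + x)
    return sum(pmf(n, k, zeta) for k in range(n + 1) if k >= n * zeta + x)

def lower_tail(n, zeta, x):   # P(X <= n*zeta - x)
    return sum(pmf(n, k, zeta) for k in range(n + 1) if k <= n * zeta - x)

n, zeta = 200, 0.3
mu = n * zeta
for x in (5.0, 10.0, 20.0):
    assert upper_tail(n, zeta, x) < exp(-x ** 2 / (2 * (mu + x / 3)))
    assert lower_tail(n, zeta, x) < exp(-x ** 2 / (2 * mu))
```

The exact tails sit well below the bounds at these parameters, as expected from the Gaussian approximation.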
The following lemma helps us to build connectors and absorbers.
\begin{lemma}\label{prop:prob}
Let $k, \ell, a, x, b,\epsilon$ be as in Lemma~\ref{lm:gnp}.
Suppose $1/n \ll1/C \ll \beta, 1/b$.
Let $V$ be an $n$-vertex set, and let $\mathcal{F}_1, \dots, \mathcal{F}_t$ be $t\le n^{2k}$ families of $24\beta n^{b}$ ordered $b$-sets on $V$.
Suppose $G=\mathbb{G}^{(k)}(n, p)$ on $V$ and $p$ satisfies~\eqref{eq:p}.
Then a.a.s.~there exists a family $\mathcal{F}\subseteq \bigcup_{i\in [t]} \mathcal{F}_i$ of at most $\beta n$ disjoint ordered $b$-sets such that $|\mathcal{F}_i\cap \mathcal{F}|\ge \beta^2 n/b^2$ for each $i\in [t]$, and each member of $\mathcal{F}$ spans a labeled copy of $P_{a,x}$ in $G$.
\end{lemma}
\begin{proof}
In $G=\mathbb{G}^{(k)}(n, p)$, let $\mathcal{T}$ be the set of all ordered $b$-sets on $V$ that span copies of $P_{a,x}$. By Lemma~\ref{lm:gnp}~\eqref{item-III} and~\eqref{item-IV}, Lemma~\ref{lm:gnp2}, and the union bound, \emph{a.a.s.}~the following properties hold simultaneously.
\begin{itemize}
\item $|\mathcal{F}_i\cap \mathcal T|\ge 12\beta p^{a} n^{b}$ for all $i\in [t]$;
\item $|\mathcal T|\le 2 p^{a} n^{b} $;
\item there are at most $4 b^2 p^{2a}n^{2b-1}$ pairs of overlapping members of $\mathcal T$.
\end{itemize}
Next we select a random set $\mathcal{F}'\subset \mathcal T$ by including each member of $\mathcal T$ independently with probability $q:=\beta/(2 b^2 n^{b-1}p^{a})$.
Because of \eqref{eq:cher2} (for (i) and (ii) below) and Markov's inequality (for (iii)), there exists such a family $\mathcal F'$ satisfying the following properties:
\begin{itemize}
\item[(i)] $|{\mathcal F}_i\cap \mathcal{F}'|\ge 12\beta (q/2) p^{a} n^{b} = 3 \beta^2 n/b^2$ for all $i\in [t]$;
\item[(ii)] $|\mathcal F'|\le 2q |\mathcal T|\le \beta n$;
\item[(iii)] there are at most $ 8b^2 q^2 n^{2b-1} p^{2a}= 2\beta^2 n/b^2$ pairs of overlapping members of $\mathcal F'$.
\end{itemize}
By deleting one ordered $b$-set from each overlapping pair and all ordered $b$-sets not in $\bigcup_{i\in [t]} \mathcal{F}_i$, we obtain a collection $\mathcal F$ of disjoint ordered $b$-sets such that $|\mathcal F|\le \beta n$, and for every $i\in [t]$,
$|\mathcal{F}_i\cap \mathcal{F}|\ge 3\beta^2 n/b^2 - 2\beta^2 n/b^2= \beta^2 n/b^2$.
Moreover, since $\mathcal{F}\subseteq \mathcal T$, each member of $\mathcal{F}$ spans a labeled copy of $P_{a,x}$ in $G$.
\end{proof}
\medskip
We now prove a connecting lemma that provides connectors for any two $\ell$-sets with large degree.
Throughout the rest of the paper, let
\[
t_1:=\lceil \ell /(k-\ell) \rceil, \quad t_2:=t_1(k-\ell)-\ell, \quad \text{and} \quad t_3: = 3t_1(k-\ell)-\ell.
\]
Given a $k$-graph $H$, we say that an ordered $t_3$-set $C$ \textit{connects} two ordered $\ell$-sets $A$ and $B$ if $C\cap A=C\cap B=\emptyset$ and the concatenation $ACB$ spans an $\ell$-path.
Note that in this case, $C$ spans a copy of $P_{t_1, t_2}$ in $H$.
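For concreteness, evaluating these parameters in two sample cases (the choices of $k$ and $\ell$ are ours, for illustration only):

```latex
\[
\begin{aligned}
k=3,\ \ell=2:&\quad t_1=\lceil 2/1\rceil=2,\quad t_2=2\cdot 1-2=0,\quad t_3=3\cdot 2\cdot 1-2=4;\\
k=4,\ \ell=1:&\quad t_1=\lceil 1/3\rceil=1,\quad t_2=1\cdot 3-1=2,\quad t_3=3\cdot 1\cdot 3-1=8.
\end{aligned}
\]
```

In both cases one checks $t_3=2t_1(k-\ell)+t_2$, the number of vertices of a connector.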
\begin{lemma}\label{lm:conn}
Suppose $1\le \ell<k$ and $1/n \ll1/C \ll \beta \ll \eta\ll 1/k$ and $0<\epsilon \le \ell/(3t_1)$.
Let $H'$ and $G$ be two $n$-vertex $k$-graphs on the same vertex set $V$ such that for every $\ell$-set $S\subseteq V$ either $\deg_{H'}(S) = 0$ or $\deg_{H'}(S)\ge \eta {n}^{k-\ell}$, and such that $G:=\mathbb{G}^{(k)}(n, p)$ with $p$ satisfying~\eqref{eq:p}.
Then for any set $W\subseteq V$ of size at most $\eta n/3$, a.a.s.~ $H'\cup G$ contains a set $\mathcal{C}$ of disjoint $t_3$-sets such that $V({\mathcal C})\cap W=\emptyset$, $|{\mathcal C}|\le \beta n$, and for every two disjoint ordered $\ell$-sets $S, S'$ in $V$ with $\deg_{H'}(S), \deg_{H'}(S')\ge \eta n^{k-\ell}$, there are at least $3\beta^3 n$ members of $\mathcal C$ that connect them.
\end{lemma}
\begin{proof}
Fix two disjoint ordered $\ell$-sets $S:=(v_1, \dots, v_{\ell})$ and $S':=(w_{\ell}, \dots, w_1)$ such that $\deg_{H'}(S), \deg_{H'}(S')\ge \eta n^{k-\ell}$.
We first claim that we can greedily extend $S$ to an $\ell$-path $v_1, \dots, v_{\ell+t_1(k-\ell)}$ of length $t_1$ in $H'$ such that the new vertices are disjoint from $S'\cup W$ and there are at least $(\eta/2)^{t_1} {n}^{t_1(k-\ell)}$ choices for them.
Indeed, we iteratively extend the path from the current $\ell$-end $T$ by adding $k-\ell$ new vertices.
By the degree assumption, we know that $\deg_{H'}(T)\ge \eta {n}^{k-\ell}$ (in the first step $T=S$).
Since the number of $(k-\ell)$-sets that intersect the existing vertices or $W$ is at most $\eta n^{k-\ell}/3 + O(n^{k-\ell-1})$, there are at least $\eta {n}^{k-\ell}/2$ choices for the new $k-\ell$ vertices.
Similarly, we can greedily extend $(w_1, \dots, w_{\ell})$ to an $\ell$-path $w_1, \dots, w_{\ell+t_1(k-\ell)}$ of length $t_1$ in $H'$ such that the new vertices are disjoint from $\{v_1, \dots, v_{\ell+t_1(k-\ell)}\}\cup W$ and there are at least $(\eta/2)^{t_1} {n}^{t_1(k-\ell)}$ choices for them.
At last, if $t_2>0$, then we pick $t_2$ arbitrary vertices $\{u_1,\dots, u_{t_2}\}$ that are disjoint from the existing vertices and $W$, and there are at least $n^{t_2}/2$ choices for them.
Note that $t_3=2t_1(k-\ell)+t_2$.
So there are at least $(\eta /2)^{2t_1} n^{t_3}/2 \ge 24\beta n^{t_3}$ choices for the ordered $t_3$-sets
\[
(v_{\ell+1}, \dots, v_{\ell+t_1(k-\ell)}, u_1,\dots, u_{t_2}, w_{\ell+t_1(k-\ell)}, \dots, w_{\ell+1}).
\]
Let $\mathcal C_{S, S'}$ be a collection of exactly $24\beta n^{t_3}$ such ordered $t_3$-sets.
By this definition, if some $C\in \mathcal C_{S, S'}$ spans a labeled copy of $P_{t_1, t_2}$,
then $C$ connects $S$ and $S'$.
We apply Lemma~\ref{prop:prob} to $\mathcal C_{S, S'}$ for all pairs of $S, S'$ such that $\deg_{H'}(S), \deg_{H'}(S')\ge \eta n^{k-\ell}$ and $G=\mathbb{G}^{(k)}(n, p)$, and conclude that \emph{a.a.s.}~there exists a family $\mathcal C$ of disjoint $t_3$-sets such that $|\mathcal C|\le \beta n$ and, for every pair of disjoint ordered $\ell$-sets $S, S'$ with $\deg_{H'}(S), \deg_{H'}(S')\ge \eta n^{k-\ell}$, at least
$\beta^2 n/t_3^2 \ge 3\beta^3 n$ members of $\mathcal C$ connect them.
In particular, $V({\mathcal C})\cap W=\emptyset$ by our construction.
\end{proof}
\medskip
Given a $k$-graph $H$, let $W=\{w_1,\dots, w_{k-\ell}\}\subseteq V(H)$. A $W$-absorber is defined as follows.
Let
\[
t_4 := \lceil (3k-\ell-2)/(k-\ell) \rceil \quad \text{and} \quad t_5:= t_4 (k-\ell) \quad (\text{thus } 3k-\ell-2\le t_5\le 4k).
\]
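The parenthetical bounds on $t_5$ are immediate from the ceiling: writing $m:=3k-\ell-2$,

```latex
\[
t_5=\Big\lceil \frac{m}{k-\ell}\Big\rceil (k-\ell)\ \ge\ m,
\qquad
t_5\ <\ \Big(\frac{m}{k-\ell}+1\Big)(k-\ell)\ =\ m+(k-\ell)\ =\ 4k-2\ell-2\ \le\ 4k.
\]
```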
Suppose $X_i, Y_i, Z_i$, $i\in [k-\ell]$, and $T$ are pairwise disjoint ordered sets from $V(H)\setminus W$ satisfying the following properties:
\begin{enumerate}[label=($\roman*$)]
\item $|X_i|=k-1$, $|Y_i|=t_5-k-i+1$, and $|Z_i|=i-1$ for every $i\in [k-\ell]$ and $|T|=\ell$; \label{item:ii}
\item $Q:=X_1 Z_2 Y_1 \ X_2 Z_3 Y_2 \cdots X_{k-\ell-1} Z_{k-\ell} Y_{k-\ell-1} \ X_{k-\ell} Z_1 Y_{k-\ell} T$ spans a copy of $P_{t_5-1}$; \label{item:c}
\item $Q':=X_1 w_1 Z_1 Y_1 \ X_2 w_2 Z_2 Y_2 \cdots X_{k-\ell} w_{k-\ell} Z_{k-\ell} Y_{k-\ell} T$ spans a copy of $P_{t_5}$. \label{item:iv}
\end{enumerate}
By definition, $Q$ is a $W$-absorber.
Note that $|Y_i|\ge k-1$ for $i\in [k-\ell]$ by the definition of $t_5$. Let $B_i$ be the ordered set $X_i w_i Z_i Y_i$. Since $|X_i|, |Y_i|\ge k-1$, all the edges of $Q'$ that intersect $\{w_i\}\cup Z_i$ are completely in $B_i$. Furthermore, when counting from the left end, all $X_i$ and $Y_i$ are placed at the same location in $Q$ as in $Q'$, except that $Y_{k-\ell}$ is shifted $k-\ell$ vertices to the right in $Q'$ (thus $Z_2, \dots, Z_{k-\ell}$ are simply place-holders). Consequently, if $H[B_i]\supseteq Q'[B_i]$ for $i\in [k-\ell]$ and $Q$ is a path, then $Q'$ is a path.
The following is our absorbing lemma.
\begin{lemma}\label{lem:new}
Let $1\le \ell <k$ be integers and suppose $0<\epsilon\le \ell/(3 t_5)$ and $1/n \ll \beta \ll\eta, \alpha, 1/k,1/t_5$.
Let $V$ be a set of $n$ vertices and let $V', U$ be two (not necessarily disjoint) subsets of $V$ such that $|U|\le \eta n/3$.
Let $H$ be a $k$-graph on $V$ such that $\deg_H(v)\ge \alpha n^{k-1}$ for all $v\in V'$, and for all $\ell$-sets $S\subseteq V$, either $\deg_{H}(S)=0$ or $\deg_{H}(S) \ge \eta n^{k-\ell}$.
Suppose $G:=\mathbb{G}^{(k)}(n, p)$ has vertex set $V$ and satisfies \eqref{eq:p}.
Then $H\cup G$~\emph{a.a.s.}~contains a family ${\mathcal A}$ of at most $\beta n$ vertex-disjoint copies of $P_{t_5-1}$ with ends in $\partial_\ell H$ such that $V({\mathcal A}) \subseteq V\setminus U$, and every $(k-\ell)$-set $W\subseteq V'$ has at least $\beta^3 n$ $W$-absorbers in ${\mathcal A}$.
\end{lemma}
\begin{proof}
For each $W=\{w_1,\dots, w_{k-\ell}\} \subseteq V'$, we will find $W$-absorbers from $V\setminus U$ satisfying \ref{item:ii}--\ref{item:iv}.
We achieve this in two steps.
In the first step, for each $i\in [k-\ell]$, we will find a path $Q_i$ of length $t_4$ with $V(Q_i)= \{v_1, \dots, v_{t_5+\ell} \} \subseteq V\setminus U$ such that $v_k = w_i$, and there are at least $\frac{\alpha}2 (\frac{\eta}2)^{t_4 -1} n^{t_5+\ell-1}$ choices for $V(Q_i)$. Indeed, we first choose an unordered set $\{v_1, \dots, v_{k-1}\} \in N_H(w_i)$. Since $\deg_H(w_i)\ge \alpha n^{k-1}$ and at most $|U| + \ell+ t_5(k-\ell) \le \eta n/2$ vertices are either in $U$ or used in this step, there are at least $\frac{\alpha}2 n^{k-1}$ choices for $\{v_1, \dots, v_{k-1}\}$. Next, let $S=\{v_{k- \ell+1}, \dots, v_k\}$. Since $\deg_{H}(S)>0$, we have $\deg_{H}(S) \ge \eta n^{k-\ell}$. Hence we can choose an unordered set $\{v_{k+1}, \dots, v_{2k-\ell}\} \in N_H(S)$ while avoiding $U$ and the vertices already used in this step.
There are at least $\frac{\eta}2 n^{k-\ell}$ choices.
We repeat this to obtain the desired path $Q_i$ and there are at least $\frac{\alpha}2 (\frac{\eta}2)^{t_4 -1} n^{t_5+\ell-1}$ choices for $V(Q_i)$ as an ordered set.
Let $B_i$ be the ordered set $\{v_1, \dots, v_{t_5} \}$. It follows that there are at least $\frac{\alpha}2 (\frac{\eta}2)^{t_4 -1} n^{t_5-1}$ choices for $B_i$. Now let $A= B_1 \cdots B_{k-\ell-1} V(Q_{k-\ell})$. We have at least $((\frac{\alpha}2) (\frac{\eta}2)^{t_4-1})^{k-\ell} n^{\ell+(t_5-1)(k-\ell)} \ge 24 \beta n^{\ell+(t_5-1)(k-\ell)}$ choices for~$A$.
Now we proceed to the second step. For each $ i\in [k-\ell]$, recall that $V(Q_i)= \{v_1, \dots, v_{t_5+\ell} \}$. Define (ordered) sets
\[
X_i= \{v_1, \dots, v_{k-1} \},\quad Z_i=\{v_{k+1}, \dots, v_{k+i-1} \}, \quad \text{and} \quad Y_i = \{v_{k+i}, \dots, v_{t_5} \}.
\]
In addition, let $T=\{v_{t_5+1}, \dots, v_{t_5+\ell} \}$ from $Q_{k-\ell}$.
It is clear that $X_i, Y_i, Z_i$ and $T$ satisfy~\ref{item:ii}.
Recall that $B_i = X_i w_i Z_i Y_i$. For $Q'$ defined in \ref{item:iv}, our first step already provides the edges of $Q'[B_i]$ for $i\le k-\ell-1$ and the edges of $Q'[B_{k-\ell}\cup T]$.
Following the discussion right after \ref{item:iv}, we achieve both \ref{item:c} and \ref{item:iv} if $Q$ is a path. To this end, we use the edges of $G$.
Let $\mathcal F_W$ be the family of $24 \beta n^{\ell + (t_5-1) (k-\ell)}$ copies of $A$, each re-ordered as in $Q$.
We apply Lemma~\ref{prop:prob} to $G$ with $x=0$, families $\mathcal F_{W}$ for all ordered $(k-\ell)$-sets $W\subseteq V'$, and conclude that \emph{a.a.s.}~there exists a collection ${\mathcal A}$ of at most $\beta n$ vertex-disjoint copies of $P_{t_5-1}$ such that for every $(k-\ell)$-set $W\subseteq V'$, at least $\beta^3 n$ members of ${\mathcal A}$ are from $\mathcal F_{W}$, and thus are
$W$-absorbers.
At last, because of the first step, both $\ell$-ends of these paths are in $\partial_{\ell} H$.
\end{proof}
In the proof of Theorem~\ref{main} we need a lemma to cover most of the vertices with constantly many paths.
This is done in the following lemma.
In the proofs of the following lemma and Theorem~\ref{main}, we use the trick of \emph{multi-round exposure}, namely, in each of the steps later, we expose one or several independent copies of the binomial random hypergraph, each of them with edge probability a constant fraction of the original edge probability.
\begin{lemma}\label{lm:almost}
Let $1\le \ell < k$, and suppose $1/n \ll 1/C \ll \zeta\ll \alpha\ll 1/k$ and $0<\epsilon \le \zeta^3\ell/6$.
Suppose $V$ is a set of $n$ vertices and $V_0\subseteq V$ with $|V_0|\le \alpha n$, and furthermore, when $\ell=1$, suppose that $V_0=\emptyset$.
Let $G:=\mathbb{G}^{(k)}(n, p)$ on $V$, where $p$ satisfies~\eqref{eq:p}.
Let $L$ be an $\ell$-graph on $V\setminus V_0$ with $|E(L)|\ge \alpha n^\ell$.
Then \emph{a.a.s.}~$G$ contains a set $\mathcal P$ of at most $2\zeta^3 n$ vertex-disjoint $\ell$-paths such that their ends are in $L$, $V_0\subseteq V(\mathcal P)$ and $|V\setminus V(\mathcal P)|\le 2\zeta n$.
\end{lemma}
\begin{proof}
Since $|L|\ge \alpha n^\ell$,
by averaging, there exists a set $R\subseteq V\setminus V_0$ of size $\zeta n$ such that $ | L \cap \binom{R}{\ell} | \ge \alpha |R|^\ell /2$.
We find our path cover in two phases.
In the first phase, we use relatively long paths with ends from $R$ to cover most of the vertices of $V$. In the second phase, we greedily cover the remaining vertices of $V_0\setminus R$ with short paths.
We therefore expose $G$ in two rounds such that $G=G_1\cup G_2$, where each $G_i$ is $\mathbb{G}^{(k)}(n,p')$ with $(1-p')^{2} = 1-p$. Thus $p' >p/2 > n^{-(k-\ell)-2\epsilon}$ when $\ell\ge 2$.
We start with Phase 1. Let $s$ be the smallest integer such that $s\ge 1/\zeta^3$ and $s\equiv \ell \mod{(k-\ell)}$, and let $s_1=(s-\ell)/(k-\ell)$. Since $\epsilon \le \zeta^3 \ell/6$, we have $2\eps \le \ell / (3s_1)$.
By Lemma~\ref{lm:gnp}~\eqref{item-I}, \emph{a.a.s.}~for all $V^*\subseteq V\setminus R, R'\subseteq R$ satisfying $|V^*|\ge \zeta^3 n$, $|R'|\ge |R|/2$ and $|L\cap \binom{R'}{\ell}| \ge (\alpha/3) |R|^\ell$,
$G_1=\mathbb{G}^{(k)}(n,p')$ contains a copy of $P_{s_1}$ whose $\ell$-ends are in $L\cap \binom{R'}{\ell}$ and other vertices are from $V^*$.
Owing to this property, we repeatedly construct copies of $P_{s_1}$ by letting $V^*$ be the set of uncovered vertices of $V\setminus R$ and letting $R'$ be the set of uncovered vertices of $R$, as long as $|V^*|\ge \zeta^3 n$.
This is possible because we construct at most $\zeta^3 n$
vertex-disjoint copies of $P_{s_1}$, which consume at most $2\ell \zeta^3 n$ vertices from $R$.
During the process, at least $|R| - 2\ell \zeta^3 n\ge |R|/2$ vertices of $R$ are available and by our assumption, they span at least $\alpha |R|^\ell /2 - 2\ell \zeta^3 n\cdot |R|^{\ell-1} \ge \alpha |R|^\ell /3$ edges of $L$.
Let $\mathcal{P}_{1}$ denote the set of the paths obtained in this phase.
Note that when $\ell=1$, since $V_0=\emptyset$ and $|V\setminus V(\mathcal P_1)|\le |R|+\zeta^3 n\le 2\zeta n$, we are done by letting $\mathcal P=\mathcal P_1$.
Now we proceed to Phase 2 and assume that $\ell\ge 2$.
Let $V''$ be the set of uncovered vertices in $V\setminus R$ and $R'=R\setminus V(\mathcal{P}_{1})$.
Note that $|V''|\le \zeta^3 n$ and $|R'|\ge |R| - 2\ell \zeta^3 n \ge |R|/2$, and $|L\cap \binom{R'}{\ell}| \ge (\alpha/3) |R|^\ell$.
Using the edges of $G_2=\mathbb{G}^{(k)}(n,p')$, we will greedily put vertices $v\in V''$ into vertex-disjoint $\ell$-paths $w_1 \cdots w_{k-1} v w_{k} \cdots w_{t_6(k-\ell)+\ell-1}$ of length $t_6:=\lfloor ({k-1})/({k-\ell}) \rfloor+1$ such that all the vertices other than $v$ are from $R'$ and both $\ell$-ends are in $L$.
Note that $v$ is in every edge of the path but in neither of the $\ell$-ends.
For any $v\in V''$, let $G_v$ be the edges of $G_2$ that contain $v$ and have their other $k-1$ vertices from $R'$.
For distinct vertices $u, v\in V''$, the potential edges of $G_u$ are distinct from those of $G_v$, so $G_u$ and $G_v$ are independent.
Suppose that, after covering some vertices of $V''$ by $\ell$-paths, we consider the next vertex $v\in V''$; we now expose $G_v$.
Let $R''$ be the set of unused vertices in $R'$.
We have $|R''|\ge |R'| - |V''| (t_6(k-\ell)+\ell)\ge |R'| - 2k\zeta^3 n \ge |R|/3$ and $|L\cap \binom{R''}{\ell}| \ge |L\cap \binom{R'}{\ell}| - 2k\zeta^3 n |R'|^{\ell-1} \ge (\alpha/4) |R|^\ell$.
We choose two disjoint $\ell$-sets from $L\cap \binom{R''}{\ell}$ and $t_6(k-\ell) - \ell-1$ vertices from $R''$ forming an ordered $(t_6(k-\ell)+\ell - 1)$-set $(w_1, \dots, w_{t_6(k-\ell)+\ell-1})$ -- there are
\[
\frac{\alpha |R|^\ell}{4}\cdot \left(\frac{\alpha |R|^\ell}{4} - \ell |R''|^{\ell-1} \right) \cdot
\left(\frac{\zeta n}{4}\right)^{t_6(k-\ell)-\ell -1} \ge \alpha^3 (\zeta n)^{t_6(k-\ell)+\ell - 1}
\]
such sets.
We observe that $w_1 \dots w_{k-1} v w_{k} \dots w_{t_6(k-\ell)+\ell-1}$ spanning a copy of $P_{t_6}$ in $G_v$ is equivalent to $w_1 \dots w_{t_6(k-\ell)+\ell-1}$ spanning a $(k-1)$-uniform $(\ell-1)$-path in $N_{G_v}(v)$.
Since $p'\ge n^{-((k-1)-(\ell-1)) - 2\epsilon}$ and $2\epsilon \le (\ell-1)/(3 t_6)$, we can apply Lemma~\ref{lm:gnp2} to $\alpha^3 (\zeta n)^{t_6(k-\ell)+\ell - 1}$ ordered $(t_6(k-\ell)+\ell - 1)$-sets, and conclude that $G_v$ contains a desired $\ell$-path with probability at least $1-\exp(-n^{1/3})$.
By the union bound, with probability at least $1-|V''|\exp(-n^{1/3}) =1-o(1)$, we can put all the vertices of $V''$ into vertex-disjoint $\ell$-paths of length $t_6$ by using the vertices of $R$ such that all the $\ell$-ends are in $L$.
This finishes Phase 2.
Let $\mathcal{P}_{2}$ denote the family of the $\ell$-paths found in this phase.
Let $\mathcal{P}:=\mathcal{P}_1\cup \mathcal{P}_{2}$ and note that $|\mathcal P|\le 2\zeta^3 n$.
By construction, all the $\ell$-ends of the paths in $\mathcal P$ are in $L$.
Since $V\setminus V(\mathcal P)\subseteq R$, we have $V_0\subseteq V(\mathcal P)$ and $|V\setminus V(\mathcal P) |\le |R|\le 2\zeta n$.
\end{proof}
\section{Proof of Theorem~\ref{main}}
In this section we prove Theorem~\ref{main}. We essentially follow the procedure mentioned in Section~1.3 but need additional work.
We first apply Lemma~\ref{lem:shave} and obtain a spanning subgraph $H'$ of $H$. Let $V_*$ be the set of vertices of $H'$ with high degree.
Following the procedure outlined in Section~1.3, we obtain an absorbing path $P_{abs}$, a set ${\mathcal C}_1$ of connectors and a set ${\mathcal P}$ of paths that cover almost all the vertices.
A natural attempt is to use the connectors in ${\mathcal C}_1$ to connect the paths in ${\mathcal P}$ and $P_{abs}$ to obtain an almost spanning cycle and then absorb the remaining vertices of ${\mathcal C}_1$ by $P_{abs}$.
On the other hand, when applying Lemma~\ref{lem:new} to $H'$, we can only absorb vertices in $V_*$. Therefore we need to have $V({\mathcal C}_1)\subseteq V_*$.
However, we cannot strengthen Lemma~\ref{lm:conn} by asking $V({\mathcal C}_1)\subseteq V_*$ because for a given $\ell$-set in $V_*$, it is possible that all its neighbors intersect $V\setminus V_*$ (recall that $\deg_{H'}(S)\ge \eta^2 n^{k-\ell}$ and $|V\setminus V_*|\le \eta n$). Therefore, this naive attempt fails.
To fix it, we ``shave'' $H'$ again, namely, applying Lemma~\ref{lem:shave} to $H'[V_*]$, and obtain a spanning $k$-graph $H_*$ on $V_*$. We thus apply Lemma~\ref{lm:conn} to $H_*$ and obtain ${\mathcal C}_1$ such that $V({\mathcal C}_1)\subseteq V_*$ and ${\mathcal C}_1$ can connect any two $\ell$-sets in $L:=\partial_{\ell} H_*$.
In order to obtain $P_{abs}$, we apply Lemma~\ref{lem:new} to $H'$ obtaining a family ${\mathcal A}$ of absorbers and apply Lemma~\ref{lm:conn} to $H'$ obtaining another set ${\mathcal C}_2$ of connectors. After connecting ${\mathcal A}$ into $P_{abs}$, unused members of ${\mathcal C}_2$ will be discarded (and the vertices in these members will be covered in a later step).
Below are the details of our proof.
Let $1/n\ll 1/C \ll \zeta \ll \beta \ll \eta \ll\alpha, 1/k$ and $0<\epsilon \le \zeta^3\ell/12$.
Write $V=V(H)$.
Let $\bigcup_{i\in [4]} G_i = \mathbb{G}^{(k)}(n,p)$ such that each $G_i$ is $\mathbb{G}^{(k)}(n,p')$ and $(1-p')^{4} = 1-p$.
In particular, $p' >p/4 > n^{-(k-\ell)-2\epsilon}$ if $\ell\ge 2$, and $p' >p/4 \ge (C/4) n^{-(k-1)}$ if $\ell=1$.
When we apply Lemmas~\ref{lm:conn}, \ref{lem:new} and~\ref{lm:almost}, we apply them with $p'$ in place of $p$ and $2\epsilon$ in place of $\epsilon$.
\medskip
\noindent\textit{Step 1. Shave $H$ twice.}
We define two subgraphs $H'$ and $H_*$ of $H$ as follows.
If $\ell=1$, let $H'=H_* =H$, $V_*= V$, and $L=V$.
If $\ell\ge 2$, then we apply Lemma~\ref{lem:shave} to $H$ and obtain a subgraph $H'$ with the following properties:
\begin{itemize}
\item there exists $V_0\subseteq V$ such that $|V_0| \le 3k\eta^2 n/\alpha\le \eta n$ and $\deg_{H'}(v)\ge 2\alpha n^{k-1}/3$ for all $v\in V\setminus V_0$;
\item for every $\ell$-set $S\subseteq V$, either $\deg_{H'}(S)=0$ or $\deg_{H'}(S) \ge \eta^2 n^{k-\ell}$.
\end{itemize}
Let $V_*=V\setminus V_0$ and $n_*:=|V_*| \ge (1-\eta )n$.
We have $\delta_1(H'[V_*]) \ge 2\alpha n^{k-1}/3 - |V_0| n^{k-2} \ge \alpha n_*^{k-1}/2$.
Apply Lemma~\ref{lem:shave} again to $H'[V_*]$ and obtain a subgraph $H_*$ on $V_*$ such that
\begin{itemize}
\item $e(H_*)\ge \alpha n_*^{k}/(4k) $,
\item for every $\ell$-set $S\subseteq V_*$, either $\deg_{H_*}(S)=0$ or $\deg_{H_*}(S) \ge \eta^2 n_*^{k-\ell}$.
\end{itemize}
Let $L=\partial_\ell H_*$. We have
\begin{align}\label{eq:L}
|L|\ge \frac{\binom k{\ell} e(H_*)}{\binom {n_*-\ell}{k-\ell} }\ge \frac{ \binom k{\ell} \frac{\alpha}{4k} n_*^k}{n_*^{k-\ell}} \ge \frac{\alpha}4 n_*^{\ell}.
\end{align}
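The first inequality in~\eqref{eq:L} is a double count of the pairs $(S,e)$ with $S\subseteq e$: each edge contributes $\binom{k}{\ell}$ such pairs, and each $\ell$-set lies in at most $\binom{n-\ell}{k-\ell}$ edges. A quick numerical check of this counting step (small parameters chosen by us):

```python
# Double-counting check (illustration): for any k-graph H on n vertices,
# |shadow_ell(H)| >= C(k, ell) * e(H) / C(n - ell, k - ell).
import itertools
import random
from math import comb

def shadow_size(edges, ell):
    """Number of distinct ell-subsets contained in some edge."""
    return len({S for e in edges for S in itertools.combinations(sorted(e), ell)})

random.seed(0)
n, k, ell = 8, 3, 2
all_edges = list(itertools.combinations(range(n), k))
for _ in range(100):
    edges = random.sample(all_edges, random.randint(1, len(all_edges)))
    # each edge yields C(k, ell) pairs (S, e); each S lies in at most
    # C(n - ell, k - ell) edges, hence the lower bound on the shadow:
    assert shadow_size(edges, ell) >= comb(k, ell) * len(edges) / comb(n - ell, k - ell)
```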
\medskip
\noindent\textit{Step 2. Build connectors ${\mathcal C}_1$ and ${\mathcal C}_2$.}
We obtain ${\mathcal C}_1$ and ${\mathcal C}_2$ by applying Lemma~\ref{lm:conn} twice.
First, we apply Lemma~\ref{lm:conn} to $H_*\cup G_{1}[V_*]$ with $W=\emptyset$, $\eta^2$ (in place of $\eta$) and $\zeta$ (in place of $\beta$), and conclude that $H_*\cup G_{1}[V_*]$ \emph{a.a.s.}~contains a set ${\mathcal C}_1$ of disjoint $t_3$-sets such that $V({\mathcal C}_1)\subseteq V_*$, $|{\mathcal C}_1|\leq \zeta n$ and for any two disjoint ordered $\ell$-sets $S, S'$ in $L$, there are at least $3\zeta^{3} n$ members of ${\mathcal C}_1$ connecting them.
Second, we apply Lemma~\ref{lm:conn} to $H'\cup G_{2}$ with $W=V({\mathcal C}_1)$, $\eta^2$ (in place of $\eta$) and $\beta$, and conclude that $H'\cup G_{2}$ \emph{a.a.s.}~contains a set ${\mathcal C}_2$ of disjoint $t_3$-sets such that $V({\mathcal C}_2)\subseteq V\setminus V({\mathcal C}_1)$, $|{\mathcal C}_2|\leq \beta n$, and for any two disjoint ordered $\ell$-sets $S, S'$ in $\partial_\ell H'$, there are at least $3\beta^3 n$ members of ${\mathcal C}_2$ connecting them.
\medskip
\noindent\textit{Step 3. Build an absorbing path.}
Note that $|V({\mathcal C}_1\cup {\mathcal C}_2)| \le 2\beta n \cdot t_3< 6k\beta n$ (as $t_3< 3k$).
We apply Lemma~\ref{lem:new} to $H'\cup G_3$ with $V'= V_*$, $U= V({\mathcal C}_1\cup {\mathcal C}_2)$,
$2\alpha/3$ (in place of $\alpha$), $\eta^2$ (in place of $\eta$) and $\beta^3$ (in place of $\beta$).
Then \emph{a.a.s.}~there exists a collection ${\mathcal A}$ of at most $\beta^3 n$ vertex-disjoint copies of $P_{t_5 -1}$ such that for every $(k-\ell)$-set $S\subseteq V_*$, there are at least $\beta^9 n$ $S$-absorbers in ${\mathcal A}$. Note that each member of ${\mathcal A}$ contains $(t_5 -1)(k- \ell) + \ell \le 4k^2$ vertices.
Moreover, all the members of ${\mathcal A}$ have their $\ell$-ends in $ \partial_\ell H'$ and $V({\mathcal A})\subseteq V\setminus V({\mathcal C}_1\cup {\mathcal C}_2)$.
Next, we pick two disjoint $\ell$-sets $E_1, E_2\in L$, which are also disjoint from $V({\mathcal A})\cup V({\mathcal C}_1\cup {\mathcal C}_2)$.
This is possible because $|V({\mathcal A})\cup V({\mathcal C}_1\cup {\mathcal C}_2)|\le 4 k^2 \beta^3 n + 6k\beta n\le 7k\beta n$ and $|L|\ge \alpha n_*^\ell/4$.
Finally, we use the members of ${\mathcal C}_2$ to connect the members of ${\mathcal A}$ and $E_1, E_2$ to an $\ell$-path $P_{abs}$ with ends $E_1$ and $E_2$, which is possible because all the absorbers have ends in $\partial_\ell H'$ and $|{\mathcal A}|\le \beta^3 n$.
\medskip
\noindent\textit{Step 4. Cover most of the remaining vertices.}
Let $V'=V\setminus (V(P_{abs})\cup V({\mathcal C}_1))$.
Note that $|V(P_{abs})\cup V({\mathcal C}_1)|\le 7k\beta n + 2\ell \le 8k\beta n$.
Let $L':=L[V_*\setminus (V(P_{abs})\cup V({\mathcal C}_1))]$.
By~\eqref{eq:L}, we have
\[
|L'| \ge \alpha n_*^\ell/4 - |V(P_{abs})\cup V({\mathcal C}_1)|\cdot n_*^{\ell-1} \ge \alpha n_*^\ell/5 \ge \alpha |V'|^\ell/6.
\]
So we can apply Lemma~\ref{lm:almost} with $V'$ (in place of $V$), $V_0$, $L'$ (in place of $L$), $\alpha/6$ (in place of $\alpha$), $G=G_4$, and \emph{a.a.s.}~obtain a collection ${\mathcal P}$ of at most $2\zeta^3 n$ vertex-disjoint paths with ends in $L$, which leaves a set $W$ of at most $2\zeta n$ vertices in $V'\setminus V_0\subseteq V_*$ uncovered.
Next, we connect $P_{abs}$ and the paths in $\mathcal{P}$ by the connectors in ${\mathcal C}_1$ and denote the resulting $\ell$-cycle by $Q$.
This is possible because the ends of these paths are in $L$, and $1 + |\mathcal{P}|\le 1 + 2\zeta^3 n \le 3\zeta^3 n$.
\medskip
\noindent\textit{Step 5. Finish the Hamiltonian $\ell$-cycle.}
Let $X=V\setminus V(Q)$. The construction of $Q$ implies that $|X|\in (k-\ell)\mathbb N$, $X \subseteq W\cup V({\mathcal C}_1)\subseteq V_*$ and $|X|\le 2\zeta n+t_3 \zeta n \le 2 t_3 \zeta n$ (because $t_3\ge 2\ell\ge 2$).
We arbitrarily partition $X$ into disjoint sets of size $k-\ell$.
By the definition of ${\mathcal A}$, every $(k-\ell)$-set $S\subseteq X$ has at least $\beta^9 n$ $S$-absorbers in $\mathcal A$.
Since each member of ${\mathcal A}$ is a subpath of $Q$ and $2 t_3 \zeta \le \beta^9$, we can absorb all these $(k-\ell)$-sets greedily and obtain the desired Hamiltonian $\ell$-cycle.
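The greedy step above can be abstracted as follows (a toy model with our own names, not the paper's construction): there are at most $2t_3\zeta n/(k-\ell) \le \beta^9 n$ leftover $(k-\ell)$-sets and each has at least $\beta^9 n$ absorbers, so a greedy pass that assigns each set an unused absorber never runs out.

```python
# Toy model of the greedy absorption in Step 5: with at most m items and
# at least m candidate absorbers per item, greedy assignment succeeds.
def greedy_assign(candidates):
    """candidates: list of sets of absorber ids; requires that every set
    has size >= len(candidates). Returns one distinct pick per item."""
    used, assignment = set(), []
    for cand in candidates:
        pick = next(a for a in sorted(cand) if a not in used)  # always exists
        used.add(pick)
        assignment.append(pick)
    return assignment

m = 5
cands = [set(range(i, i + m)) for i in range(m)]  # each item has m candidates
assert len(set(greedy_assign(cands))) == m        # all picks are distinct
```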
\smallskip
Each of Steps 2, 3 and 4 can be done with probability $1-o(1)$ (while Steps 1 and 5 are deterministic). Hence, by the union bound, \emph{a.a.s.}~we complete all the steps and obtain a Hamiltonian $\ell$-cycle of~$H$.
\section*{Acknowledgment}
We would like to thank Wiebke Bedenknecht, Yoshiharu Kohayakawa and Guilherme Mota for discussions at an early stage of this project.
We are also grateful to two anonymous referees for many helpful comments.
In particular, we are indebted to a referee who showed us how to obtain an absorbing lemma without using the regularity method. This and other comments helped to simplify our proof and greatly improved the presentation of the paper.
\end{document}
| {
"timestamp": "2019-11-19T02:01:43",
"yymm": "1802",
"arxiv_id": "1802.04586",
"language": "en",
"url": "https://arxiv.org/abs/1802.04586",
"abstract": "For integers $k\\ge 3$ and $1\\le \\ell\\le k-1$, we prove that for any $\\alpha>0$, there exist $\\epsilon>0$ and $C>0$ such that for sufficiently large $n\\in (k-\\ell)\\mathbb{N}$, the union of a $k$-uniform hypergraph with minimum vertex degree $\\alpha n^{k-1}$ and a binomial random $k$-uniform hypergraph $\\mathbb{G}^{(k)}(n,p)$ with $p\\ge n^{-(k-\\ell)-\\epsilon}$ for $\\ell\\ge 2$ and $p\\ge C n^{-(k-1)}$ for $\\ell=1$ on the same vertex set contains a Hamiltonian $\\ell$-cycle with high probability. Our result is best possible up to the values of $\\epsilon$ and $C$ and answers a question of Krivelevich, Kwan and Sudakov.",
"subjects": "Combinatorics (math.CO)",
"title": "Hamiltonicity in randomly perturbed hypergraphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9830850857421197,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7095349759629066
} |
% https://arxiv.org/abs/1207.3933
% Bounds for approximate discrete tomography solutions
%
% Abstract: In earlier papers we have developed an algebraic theory of
% discrete tomography. In those papers the structure of the functions
% $f: A \to \{0,1\}$ and $f: A \to \mathbb{Z}$ having given line sums in
% certain directions has been analyzed. Here $A$ was a block in
% $\mathbb{Z}^n$ with sides parallel to the axes. In the present paper we
% assume that there is noise in the measurements and (only) that $A$ is an
% arbitrary or convex finite set in $\mathbb{Z}^n$. We derive
% generalizations of earlier results. Furthermore, we apply a method of
% Beck and Fiala to obtain results of the following type: if the line sums
% in $k$ directions of a function $h: A \to [0,1]$ are known, then there
% exists a function $f: A \to \{0,1\}$ such that its line sums differ by at
% most $k$ from the corresponding line sums of $h$.
\section{Introduction}
Let $n$ be a positive integer and let $A$ be a finite subset of $\mathbb{Z}^n$. If $f: A \to \mathbb{R}$, then the line sum of $f$ along the line $l=\underline{c}+t\underline{d}$ (with $\underline{c},\underline{d}\in \mathbb{Z}^n$, $\underline{d}\neq \underline{0}$ fixed and $t\in \mathbb{R}$ variable) is defined as $\sum_{\underline{a} \in A \cap l} f(\underline{a})$. We call $\underline{d}$ a direction.
Let $S=\{\underline{d_1}, \dots, \underline{d_k}\}$ be a set of directions. By the line sums of $f$ along $S$ we mean all the line sums of $f$ along a line in a direction from $S$ passing through at least one point of $A$. Theorem 1 of \cite{ht4} states that if $A$ is a block with sides parallel to the axes, then any function $f: A \to \mathbb{R}$ with zero line sums along $S$ can be uniquely written as a linear combination
of so-called switching components of $S$ contained in $A$. In Section 3 we prove that for this result it suffices that $A$ is convex, but that the convexity requirement cannot be dropped.
By a discrete tomography problem we mean asking for a function $f:A \to Z$ which satisfies prescribed line sums along $S$, where $Z$ may be $\{0,1\}, \mathbb{R}, \mathbb{Z}$ or some finite real set.
The authors and others have developed an algebraic theory of the structure of the solutions of a discrete tomography problem, see \cite{ht1}, \cite{ht2}, \cite{ht3}, \cite{ht4}, \cite{h}, \cite{st}, \cite{sb}, \cite{dht}, \cite{bfht}.
It appears that the real solutions of a discrete
tomography problem form a linear manifold if there is at least one real solution, and that the integer solutions form a grid in this linear manifold, provided that at least one integer solution exists.
If the line sums are measured with some noise, then it is not certain that some function satisfies the measured line sums along $S$. A natural question is then what the best approximative solution is.
We shall show
that there is some linear manifold which can be considered as the set of `best real approximations' in the sense of least squares. An obvious choice is then to choose the shortest best approximation, that is the orthogonal projection of the origin to that linear manifold. In Section 4 we present an algorithm to construct this shortest best approximation and illustrate it by an example. In Section 5 we present an explicit system of linear equations which determines the shortest best solution in case $A$ is convex. As an application we generalize a result from \cite{dht} by giving an explicit expression for the shortest best solution in case $A$ is a rectangle with sides parallel to the axes and only row and column sums are given.
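As a generic illustration of the ``shortest best approximation'' idea (minimum-norm least squares), here is a toy computation; the matrix, the noisy data, and the use of the pseudoinverse are our own setup, not the algorithm of Section 4. For a $2\times 2$ image with row and column sum measurements, the pseudoinverse solution is the shortest point of the manifold of best approximations, and adding a switching component (a kernel vector) keeps the line sums but increases the norm.

```python
# Toy example (our own setup): minimum-norm least-squares solution of
# M f = b, where M encodes row and column sums of a 2x2 image and b is
# an inconsistent (noisy) measurement vector.
import numpy as np

M = np.array([[1., 1., 0., 0.],   # row sums
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],   # column sums
              [0., 1., 0., 1.]])
b = np.array([1.2, 0.9, 1.0, 1.3])          # noisy, inconsistent measurements
f = np.linalg.pinv(M) @ b                    # shortest best approximation
g = f + np.array([1., -1., -1., 1.])         # f plus a switching component
assert np.allclose(M @ g, M @ f)             # same line sums
assert np.linalg.norm(g) >= np.linalg.norm(f)  # f is the shortest such solution
```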
In the 1980s, Beck and Fiala \cite{bf} proved a `balancing' theorem. In Section 6 we show that it implies the following: if the line sums in $k$ directions of a function $h: A \to [0,1]$ are known, then there exists a function $f: A \to \{0,1\}$ whose line sums differ by at most $k$ from the corresponding line sums of $h$.
We extend this result in Section 7 to the case that we are not searching for a binary image, but for an image $f$ with a finite number of given real values. To do so we generalize the result of Beck and Fiala.
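The statement for $k=2$ (row and column sums) can be illustrated by brute force on a small grid; this is not the Beck--Fiala argument itself, and the grid size and random data are our own choices. In fact, for $k=2$ the classical matrix rounding theorem already gives discrepancy below $1$, so a binary image within $k=2$ always exists.

```python
# Brute-force illustration for k = 2 directions (rows and columns) on a
# 3x3 grid: for random h: A -> [0,1] there is a binary f whose row and
# column sums each differ from those of h by less than k = 2.
import itertools
import random

random.seed(1)
n = 3
for _ in range(20):
    h = [[random.random() for _ in range(n)] for _ in range(n)]
    found = False
    for bits in itertools.product((0, 1), repeat=n * n):
        f = [bits[i * n:(i + 1) * n] for i in range(n)]
        rows_ok = all(abs(sum(f[i]) - sum(h[i])) < 2 for i in range(n))
        cols_ok = all(abs(sum(f[i][j] for i in range(n))
                          - sum(h[i][j] for i in range(n))) < 2 for j in range(n))
        if rows_ok and cols_ok:
            found = True
            break
    assert found  # a binary image within discrepancy k = 2 always exists here
```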
\section{Notation}
We use the following notation throughout the paper. Let $n$ be a positive integer. For brevity, for $x_1, \dots, x_n \in \mathbb{R}$ and $u_1, \dots, u_n \in \mathbb{Z}$, we write $\underline{x} = (x_1, \dots,x_n)$, $\vec{x} = (x_1, \dots, x_n)^T$ and
$\underline{x}^{\underline{u}} = \prod_{j=1}^n x^{u_j}_j$.
Let $\underline{d} \in \mathbb{Z}^n$, $\underline{d} \not= \underline{0}$, be such that $\gcd(d_1, \dots, d_n)=1$ and $d_j>0$ for the smallest $j$ with $d_j \not= 0$. We call $\underline{d}$ a direction.
By lines with direction $\underline{d}$ we mean lines of the form $\underline{c}+t\underline{d}$ (with $\underline{c}\in \mathbb{Z}^n$ fixed, $t\in\mathbb{R}$ variable).
Let $A$ be a finite subset of $\mathbb{Z}^n$. Write $A = \{ \underline{a_1}, \dots, \underline{a_s}\}$ where $\underline{a_1}, \dots, \underline{a_s}$ are arranged in lexicographic increasing order.
We call $A$ convex if every $ \underline{a} \in \mathbb{Z}^n$ which belongs to the closed convex hull of $A$ belongs to $A$ itself. By the minimal corner of a set $B \subseteq \mathbb{Z}^n$ we mean the lexicographically smallest element $\phi(B)$ of $ B$.
If $f: A \to \mathbb{R}$, then the line sum of $f$ along the line
$l = \underline{c} + t \underline{d}$ is defined as $\sum_{\underline{a} \in A \cap l} f(\underline{a})$.
For any
$f:\ A\to{\mathbb R}$, write $\vec{f}:=
\left(f(\underline{a}_1),\dots,f(\underline{a}_s)\right)^T$.
We often identify $f$ and $\vec{f}$. The length of $\vec{f}$ (or $f$) is defined as $|f| = |\vec{f}| = \sqrt{\sum_{\underline{a} \in A} (f( \underline{a}))^2}$.
Let $k$ be a positive integer and $S=\{\underline{d_1}, \dots, \underline{d_k}\}$ be a fixed set of directions. By the line sums along $S$ we mean all the line sums along lines in a direction from $S$ which pass through at least one point of $A$. For $\underline{d}=(d_1,\dots,d_n)\in S$ put
$$
f_{\underline{d}}( \underline{x}) = (\underline{x}^{\underline{d}} -1) \prod_{d_j < 0} x_j^{-d_j},
$$
$F(\underline{x}) = \prod_{i=1}^k f_{\underline{d_i}} (\underline{x})$ and, for $ \underline{u} \in \mathbb{Z}^n$, set $F_{\underline{u}}(\underline{x}) = \underline{x}^{\underline{u}} F(\underline{x})$.
Obviously, the polynomial $F_{\underline{u}}$ has integer coefficients. We call the functions $F_{\underline{u}}$ the switching polynomials of $S$. Define the functions $m_{\underline{u}} : \mathbb{Z}^n \to \mathbb{Z}$ by
$$ m_{\underline{u}} ( \underline{v}) = {\rm coeff} (\underline{x}^{\underline{v}})~{\rm in} ~ F_{\underline{u}} (\underline{x}) ~ {\rm for} ~ \underline{v} \in \mathbb{Z}^n.$$
We define $D_{\underline{u}}$ as the set of $\underline{v} \in \mathbb{Z}^n$ for which $m_{\underline{u}} (\underline{v}) \not= 0$ and call it a switching component.
Let $\phi(\underline{u})$ denote the minimal corner of $D_{\underline{u}}$.
It follows from the above definitions that
\begin{equation} \label{phi}
m_{\underline{u}}(\phi(\underline{u})) = \pm 1.
\end{equation}
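For concreteness, here is a tiny computation of a switching polynomial; the choice $S=\{(1,0),(0,1)\}$ in $\mathbb{Z}^2$ and the sparse-dictionary representation are our own. Then $F(\underline{x}) = (x_1-1)(x_2-1)$, so $D_{\underline{0}} = \{(0,0),(1,0),(0,1),(1,1)\}$ with weights $m_{\underline{0}} = +1,-1,-1,+1$, and $m_{\underline{0}}(\phi(\underline{0})) = m_{\underline{0}}((0,0)) = 1$, in accordance with (\ref{phi}).

```python
# Switching polynomial for S = {(1,0), (0,1)} in Z^2 (our example).
# Polynomials are sparse dicts {exponent_tuple: coefficient}.
def poly_mul(P, Q):
    """Multiply sparse multivariate polynomials, dropping zero terms."""
    R = {}
    for u, a in P.items():
        for v, b in Q.items():
            w = tuple(x + y for x, y in zip(u, v))
            R[w] = R.get(w, 0) + a * b
    return {e: c for e, c in R.items() if c != 0}

f1 = {(1, 0): 1, (0, 0): -1}   # f_(1,0)(x) = x1 - 1
f2 = {(0, 1): 1, (0, 0): -1}   # f_(0,1)(x) = x2 - 1
F = poly_mul(f1, f2)           # F(x) = x1*x2 - x1 - x2 + 1
assert F == {(1, 1): 1, (1, 0): -1, (0, 1): -1, (0, 0): 1}
assert F[min(F)] == 1          # weight at the minimal corner is +-1
```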
\section{The structure of functions with zero line sums}
We prove that Theorem 1 of \cite{ht4} remains true under the weaker condition that $A$ is convex.
\begin{theorem}
\label{base}
Let $A$ be a finite convex subset of $\mathbb{Z}^n$, and $S$ a given set of directions. Then any function $f: A \to \mathbb{R}$ with zero line sums along $S$ can be uniquely written in the form
$$
f = \sum_{D_{\underline{u}} \subseteq A} c_{\underline{u}} m_{\underline{u}}
$$
with coefficients $c_{\underline{u}} \in \mathbb{R}$. Moreover, every such function $f$ has zero line sums along $S$.
\end{theorem}
\noindent If there is no $\underline{u}$ for which $D_{\underline{u}} \subseteq A$, then the only function $f$ with zero line sums along $S$ is the trivial function $f=0$.
Otherwise, the functions with zero line sums along $S$ form a proper linear subspace of the linear space of all functions $f: A \to \mathbb{R}$.
\begin{proof}
The statement has been proved in case $A$ is a hyperblock with sides parallel to the axes in Theorem 1 of \cite{ht4}.
Let $A^*$ be a hyperblock with sides parallel to the axes such that $A \subseteq A^*$. Set $f(\underline{x}) = 0$ for $\underline{x} \in A^* \setminus A$. Then we know that
\begin{equation}
\label{U}
f = \sum_{D_{\underline{u}} \subseteq A^*} c_{\underline{u}} m_{\underline{u}}
\end{equation}
with coefficients $c_{\underline{u}} \in \mathbb{R}$.
It remains to prove that $c_{\underline{u}} = 0$ if $ D_{\underline{u}}$ is not contained in $A$.
If $ D_{\underline{u}}$ is not contained in $A$, then there exists $\psi(\underline{u})\in D_{\underline{u}}$ such that $\psi(\underline{u})\notin A$. Since $A$ is convex, there is a linear manifold $L$ which
extends a hyperface of the convex hull of $A$
such that $\psi(\underline{u})$ and $A$ are on different sides of $L$. Let $H_L$ be the open halfspace generated by $L$ which contains $\psi(\underline{u})$. Note that $H_L$ does not contain any element of $A$. Consider the set $U_L$ of all $\underline{u}$ such that $D_{\underline{u}} \subseteq A^*$ and $D_{\underline{u}}$ contains an element $\psi(\underline{u})\in H_L$. Without loss of generality we assume that $\psi(\underline{u})$ has maximal Euclidean distance $d(\psi(\underline{u}),L)$ to $L$
among the elements of $D_{\underline{u}}\cap H_L$ and, if there are several such elements with maximal distance to $L$, that $\psi(\underline{u})$ is the lexicographically smallest among them.
Since the sets $D_{\underline{u}}$ for variable $\underline{u}$ are translates of each other, the vectors $\psi(\underline{u}) -\underline{u}$ are the same for all $\underline{u} \in U_L$. Now we arrange the elements of $U_L$ according to the non-increasing distances $d(\psi(\underline{u}),L)$ of $\psi(\underline{u})$ to $L$. Thereafter we order the elements of $U_L$ for which the distances $d(\psi(\underline{u}),L)$ are equal according to non-decreasing lexicographic order of $\underline{u}$. Consider the first element $\underline{u} \in U_L$ according to this ordering. By the above construction there is no other set $D_{\underline{u}}$ for $\underline{u} \in U_L$ which contains $\psi(\underline{u})$.
Since $\psi(\underline{u}) \notin A$ we infer $f(\psi(\underline{u})) = 0$, hence $c_{\underline{u}} =0$. We proceed with the next element $\underline{u} \in U_L$ in the ordering and conclude by similar reasoning that $c_{\underline{u}} = 0$ for this $\underline{u}$ as well. Continuing until all elements of $U_L$ have been treated, we conclude that $c_{\underline{u}}= 0$ for all $\underline{u} \in U_L$. Since $D_{\underline{u}}$ was an arbitrary set not contained in $A$, the first statement follows. The uniqueness and the second statement of the theorem follow immediately from Theorem 1 of \cite{ht4}.
\end{proof}
The following result is a consequence of Theorem \ref{base}.
\begin{cor}
In the notation of Theorem \ref{base}, for any $h: A\to \mathbb{R}$ and for any prescribed values from $\mathbb{R}$ at the minimal corners of the switching components contained in $A$ there exists a unique $f: A\to\mathbb{R}$ having the same line sums along $S$ as $h$ has and having the prescribed values at the minimal corners. Moreover, if $h: A \to \mathbb{Z}$, then $f: A \to \mathbb{Z}$.
\end{cor}
\begin{proof}
According to Theorem \ref{base} there are unique coefficients $c_{\underline{u}} $ such that
$$
f = h + \sum_{D_{\underline{u}} \subseteq A} c_{\underline{u}} m_{\underline{u}}
$$
has the same line sums along $S$ as $h$. By (\ref{phi}) we obtain, following the ordering argument from the previous proof, that each coefficient $c_{\underline{u}}$ is completely determined by the value of $m_{\underline{u}}$ at $\phi(\underline{u})$ and, moreover, that $c_{\underline{u}} \in \mathbb{Z}$ if $h: A \to \mathbb{Z}$.
\end{proof}
\noindent {\bf Remark 3.1.} The following example shows that in Theorem 3.1 we cannot drop the convexity requirement. \\
Let $A = \{ (0,0), (0,1), (1,0), (1,2), (2,1), (2,2) \} $ and $S = \{ (1,0), (0,1) \}$.
Then for every $ \underline {u}$ we have $D_{\underline{u}} - \underline{u} = \{ (0,0), (0,1), (1,0), (1,1) \}.$ Therefore $A$ does not contain any switching component.
However, there is a nontrivial function $f: A \to \mathbb{Z}$ with all line sums along $S$ equal to 0: \\
$f(0,0)=1, f(0,1) =-1, f(1,0) = -1, f(1,2) = 1, f(2,1) = 1, \\f(2,2) = -1.$
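The zero line sums in this example are quickly verified by machine. The following sketch (illustrative Python, not part of the paper) groups the points of $A$ by row and by column, which are exactly the lines along $S = \{(1,0),(0,1)\}$.

```python
# Verify the example of Remark 3.1: f has zero line sums along S = {(1,0),(0,1)}.
f = {(0, 0): 1, (0, 1): -1, (1, 0): -1, (1, 2): 1, (2, 1): 1, (2, 2): -1}

# Lines in direction (1,0) are rows (constant y); in direction (0,1), columns (constant x).
row_sums = {}
col_sums = {}
for (x, y), v in f.items():
    row_sums[y] = row_sums.get(y, 0) + v
    col_sums[x] = col_sums.get(x, 0) + v

assert all(s == 0 for s in row_sums.values())
assert all(s == 0 for s in col_sums.values())
```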
\section{The best approximating function for general domains}
The next theorem can be used to construct the function $f_0: A \to \mathbb{R}$ such that $f_0$ fits optimally the measured line sums along $S$ in the sense of least squares and, moreover, has minimal Euclidean length among such functions.
Let $A\subseteq{\mathbb Z}^n$ be a finite, nonempty set, and write $\underline{a}_1,\dots,\underline{a}_s$ for its elements. For the rest of this section, fix the indexing of the elements.
Let $B$ be a $t$ by $s$ matrix of real numbers.
The range of the matrix $B$ is denoted by
$$
R(B):=\{B\cdot\vec{x}\ :\ \vec{x}\in{\mathbb R}^s\}.
$$
Hence $R(B)$ is a subspace of ${\mathbb R}^t$, generated by the column vectors $\vec{b}_1,\dots,\vec{b}_s$ of $B$. We have $0\leq {\text{dim}}(R(B))\leq t$.
Write $B_1$ for a matrix formed by a maximal linearly independent set of column vectors of $B$. Then $B_1 = B \cdot C_1$ where $C_1$ is a matrix of type $s\times ({\rm{rank}}(B))$ which has rank$(B)$ entries 1 in distinct columns and all other entries equal to 0. Observe that $B_1^T \cdot B_1$ is invertible.
\begin{lemma}
\label{lemlstsq}
Let $A, B$ and $B_1$ be as above. Let $\vec{b}\in{\mathbb R}^t$ be arbitrary. Put
\begin{equation}
\label{b^*}
\vec{b^*} = B_1 \cdot (B_1^T \cdot B_1)^{-1} \cdot B_1^T \cdot \vec{b}.
\end{equation}
Then $\vec{b^*}$ is the vector from $R(B)$ which is closest to $\vec{b}$.
\end{lemma}
\begin{proof}
Obviously, the vector $\vec{b}^*$ in $R(B)$ closest to $\vec{b}$ is uniquely determined by the following properties:
\begin{itemize}
\item $\vec{b}^*\in R(B)$,
\item $\vec{b}-\vec{b}^*$ is orthogonal to $R(B)$.
\end{itemize}
Since $B_1 = B \cdot C_1$ the first property follows immediately from \eqref{b^*}.
The second property is equivalent to the statement that $\vec{b}-\vec{b}^*$ is orthogonal to all the column vectors of $B$, or equivalently, to all column vectors of $B_1$. In other words, it is equivalent to
$$
B_1^T\cdot(\vec{b}-\vec{b}^*)=\vec{0}.
$$
It follows from \eqref{b^*} that
$$
B_1^T\cdot \vec{b}^* = B_1^T\cdot B_1 \cdot (B_1^T \cdot B_1)^{-1} \cdot B_1^T \cdot \vec{b} = B_1^T \cdot \vec{b}.
$$
Hence both properties are satisfied.
\end{proof}
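Formula \eqref{b^*} is easy to check numerically. The following sketch (illustrative Python, not part of the paper; the matrix is randomly generated) builds a rank-deficient $B$, takes $B_1$ as a maximal independent set of columns, and verifies that the residual $\vec{b}-\vec{b^*}$ is orthogonal to $R(B)$.

```python
import numpy as np

rng = np.random.default_rng(0)
# A 6x4 matrix of rank 3: the last column is a combination of the first two.
B = rng.standard_normal((6, 3))
B = np.hstack([B, (B[:, :1] + 2.0 * B[:, 1:2])])

# B1: a maximal independent set of columns (here the first three).
B1 = B[:, :3]
b = rng.standard_normal(6)

# Formula (b^*): orthogonal projection of b onto the column space R(B).
b_star = B1 @ np.linalg.inv(B1.T @ B1) @ B1.T @ b

# The residual is orthogonal to every column of B.
assert np.allclose(B.T @ (b - b_star), 0.0)
```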
We use the above notation and define
$$
\vec{l}_f = B\cdot \vec{f}.
$$
Let $B_2$ be a matrix formed by a maximal linearly independent set of row vectors of $B$. Then $B_2 = C_2 \cdot B$ where $C_2$ is a matrix of type $({\rm{rank}}(B))\times t$ which has rank$(B)$ entries 1 in distinct rows and all other entries equal to 0. Observe that $B_2 \cdot B_2^T$ is invertible.
\begin{theorem}
\label{thmlstsq}
Let $A, B, B_2, C_2, \vec{b}, \vec{b^*}, f$ ($ = \vec{f}$) be as above. Put
\begin{equation} \label{f_0}
\vec{f_0} = B_2^T \cdot (B_2 \cdot B_2^T)^{-1} \cdot C_2 \cdot \vec{b^*}.
\end{equation}
Then the corresponding $f_0: A \to \mathbb{R}$ has the following properties:
\begin{itemize}
\item[(i)] for any $f:\ A\to{\mathbb R}$ we have $|\vec{l}_f-\vec{b}|\geq |\vec{l}_{f_0}-\vec{b}|$,
\item[(ii)] if $f:\ A\to{\mathbb R}$, $f\neq f_0$ such that $|\vec{l}_f-\vec{b}|=|\vec{l}_{f_0}-\vec{b}|$, then $|\vec{f}|>|\vec{f}_0|$.
\end{itemize}
\end{theorem}
\begin{proof}
Observe that (i) and (ii) are equivalent to the following two properties:
\begin{itemize}
\item $B\cdot \vec{f}_0=\vec{b}^*$,
\item $\vec{f_0}$ is orthogonal to ker$(B)$, the nullspace of $B$.
\end{itemize}
The first property is clearly equivalent to
\begin{equation}
\label{eqforf01}
B_2 \cdot \vec{f}_0= C_2 \cdot \vec{b^*}.
\end{equation}
It follows immediately from \eqref{f_0} that this property is satisfied.
Since
$$
{\rm{ker}}(B)=\{\vec{x}\in{\mathbb R}^s\ :\ B\cdot\vec{x}=\vec{0}\},
$$
the orthogonal complement of ker$(B)$ is the subspace of ${\mathbb R}^s$ generated by the row vectors of $B$. Hence, by the definition of $B_2$, the second property above is equivalent to
\begin{equation}
\label{eqforf02}
\vec{f_0}=B_2^T\cdot\vec{y}\ \ \ {\rm{for\ some}}\ \vec{y}\in{\mathbb R}^r,
\end{equation}
where $r$ is the rank of $B_2$.
This is obviously true because of \eqref{f_0}.
Thus both properties are satisfied.
\end{proof}
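Formula \eqref{f_0} can likewise be checked numerically. The sketch below (illustrative Python with randomly generated data) verifies the two properties used in the proof: $\vec{f_0}$ realizes the line sums $\vec{b^*}$, and $\vec{f_0}$ is orthogonal to ${\rm ker}(B)$.

```python
import numpy as np

rng = np.random.default_rng(1)
# A 4x6 matrix B of rank 3 acting on functions f (vectors in R^6).
B2 = rng.standard_normal((3, 6))       # three independent rows
B = np.vstack([B2, B2[0] - B2[2]])     # fourth row is dependent
C2 = np.eye(3, 4)                      # selects the first three rows: B2 = C2 @ B

b = rng.standard_normal(4)
# b*: projection of b onto R(B), computed here via least squares.
x = np.linalg.lstsq(B, b, rcond=None)[0]
b_star = B @ x

# Formula (f_0).
f0 = B2.T @ np.linalg.inv(B2 @ B2.T) @ C2 @ b_star

assert np.allclose(B @ f0, b_star)     # f0 realizes the line sums b*
# f0 lies in the row space of B, hence is orthogonal to ker(B).
ns = np.linalg.svd(B)[2][3:]           # right singular vectors spanning ker(B)
assert np.allclose(ns @ f0, 0.0)
```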
\noindent {\bf Remark 4.1.} An alternative version of Theorem \ref{thmlstsq} can be obtained by using the Moore-Penrose pseudo inverse, cf. the proof of Theorem 1 in \cite{bfht}.
\vskip.3cm
\noindent {\bf Remark 4.2.} We apply Lemma \ref{lemlstsq} and Theorem \ref{thmlstsq} in the context of Discrete Tomography as follows. Let $A$ be a finite subset of $\mathbb{Z}^n$ and $S$ a set of directions. Let $l_1, \dots, l_t$ be the measured line sums along $S$. Note that because of noise they need not be consistent. Then $B$ is the $t$ by $s$ matrix whose entry $B_{ji}$ equals $1$ if the line corresponding to $l_j$ passes through $\underline{a}_i$ and $0$ otherwise. The vector $\vec{b}^*$ constructed in Lemma \ref{lemlstsq} represents the corresponding line sums along $S$ which are consistent and provide the optimal choice in the sense that $\sum_{j=1}^t (l_j - b_j^*)^2$ is minimal among the consistent line sums $b^*_j$ along $S$. Furthermore, the vector $\vec{f_0}$ constructed in Theorem \ref{thmlstsq} is the shortest best approximation in the sense that it is the shortest vector realizing the line sums given by $\vec{b^*}$. The corresponding function $f_0: A \to \mathbb{R}$ may be considered as the optimal choice for the measured line sums $l_1, \dots, l_t$.
\vskip.1cm
We illustrate the method by an example.
\vskip.3cm
\noindent
{\bf Example.} We use the notation of Lemma \ref{lemlstsq} and Theorem \ref{thmlstsq}. Consider the following subset of ${\mathbb R}^2$:
$$
A:=\{(1,0), (3,0), (0,1), (4,1), (0,2), (4,2), (1,3), (2,3), (3,3)\}.
$$
As the set of directions, take
$$
S:=\{(1,0), (0,1), (1,-1), (1,1)\}.
$$
The ordering of the points in $A$ and directions in $S$ are arbitrary, but fixed.
As a (measured) line sum vector, take
$$
\vec{b}^T:=\left(1, \frac{23}{10}, \frac75, 1, 1, 1, \frac32, 1, \frac65, 1, 1, 1, \frac9{10}, \frac{13}{10}, \frac12, 1, \frac65, \frac35, \frac12, \frac{17}{10}, \frac7{10}\right).
$$
The entries of $\vec{b}$ belong to the lines
$$ y=t\ \ (t=0,1,2,3),\ \ \ x=t\ \ (t=0,1,2,3,4) $$
$$ y=x+t\ \ (t=-3,-2,-1,0,1,2),\ \ \ y=-x+t\ \ (t=1,2,3,4,5,6) $$
which we keep in this order.
Then the matrix $B$ of line sums is given by
$$
{\footnotesize
\left(
{\begin{array}{rrrrrrrrr}
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\
0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1
\end{array}}
\right)
}
.
$$
As one can easily check, rank$(B)=9$. So we can take the matrix $C_1$ as the $9\times 9$ unit matrix. Thus $B_1=B$. Then, by \eqref{b^*}, the vector $\vec{b^*}^T$ is given by
$$
{\footnotesize
\left(\frac{891}{800}, \frac{2457}{1600}, \frac{1019}{800}, \frac{4361}{3200}, \frac{167}{128}, \frac{103}{128}, \frac{111}{128}, \frac{103}{128}, \frac{963}{640}, \frac{4239}{3200}, \frac{859}{1600}, \frac{1211}{1600}, \frac{1433}{3200}, \frac{287}{200}, \frac{2511}{3200}, \frac{4239}{3200}, \frac{1179}{1600}, \frac{571}{1600}, \frac{153}{3200}, \frac{367}{200}, \frac{3151}{3200}\right).}
$$
As one can readily check, the indices of a maximal set of independent rows of $B$ are given by
$$
\{1,2,3,4,5,6,7,10,11\}.
$$
That is, we may take
$$
{\footnotesize
C_2:=
\left(
{\begin{array}{rrrrrrrrrrrrrrrrrrrrr}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}}
\right)
}
$$
whence
$$
{\footnotesize
B_2=
\left(
{\begin{array}{rrrrrrrrr}
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\
0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0
\end{array}}
\right)
.}
$$
Finally, by \eqref{f_0} we obtain
$$
{\footnotesize
\vec{f_0}^T=
\left(\frac{1211}{1600}, \frac{571}{1600}, \frac{1817}{3200},
\frac{3097}{3200}, \frac{1179}{1600}, \frac{859}{1600}, \frac{153}{3200}, \frac{111}{128}, \frac {1433}{3200}
\right)
.}
$$
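This example can be reproduced numerically. The following sketch (illustrative Python, not part of the paper) builds the incidence matrix $B$ from $A$ and $S$ in the line ordering fixed above and computes $\vec{f_0}$ by least squares; since ${\rm rank}(B) = 9$ equals $|A|$, the least-squares solution is unique, so `np.linalg.lstsq` recovers exactly the vector $\vec{f_0}$ of \eqref{f_0}.

```python
import numpy as np

A = [(1, 0), (3, 0), (0, 1), (4, 1), (0, 2), (4, 2), (1, 3), (2, 3), (3, 3)]

# Lines in the order used above: y=t (t=0..3), x=t (t=0..4),
# y=x+t (t=-3..2) for direction (1,1), y=-x+t (t=1..6) for direction (1,-1).
keys = [('h', t) for t in range(4)] + [('v', t) for t in range(5)] \
     + [('d', t) for t in range(-3, 3)] + [('a', t) for t in range(1, 7)]

def on_line(point, kind, t):
    x, y = point
    return {'h': y, 'v': x, 'd': y - x, 'a': y + x}[kind] == t

B = np.array([[1.0 if on_line(p, k, t) else 0.0 for p in A] for (k, t) in keys])
b = np.array([1, 23/10, 7/5, 1, 1, 1, 3/2, 1, 6/5, 1, 1,
              1, 9/10, 13/10, 1/2, 1, 6/5, 3/5, 1/2, 17/10, 7/10])

assert np.linalg.matrix_rank(B) == 9
f0 = np.linalg.lstsq(B, b, rcond=None)[0]
b_star = B @ f0

# Spot-check against the values computed above.
assert np.allclose(f0[:3], [1211/1600, 571/1600, 1817/3200])
assert np.allclose(b_star[:3], [891/800, 2457/1600, 1019/800])
```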
\section{The best approximating function for convex domains}
The following theorem provides explicitly a system of linear equations which determines the best approximating function constructed in the previous section. We illustrate in the corollary the advantage of this explicit expression. The real number $l(Y_{\tau})$ in the following theorem can be considered as the measured line sum of $f$ along the line corresponding to $Y_{\tau}$.
\begin{theorem} \label{central}
Let $A\subseteq \mathbb{R}^n$ be convex. Let $S$ be a finite set of directions and $Y_1, \dots, Y_t$ the subsets of $A$ which determine the lines along $S$. Suppose for $\tau=1, \dots, t$ a real number $l(Y_{\tau})$ is given. Let $U_A \subset A$ be the set of minimal corners of the switching components contained in $A$. Define $f_0: A \to \mathbb{R}$ by the system of linear equations
\begin{equation}
\label{orth}
\sum_{\underline{v} \in D_{\underline{u}}} f_0(\underline{v}) m_{\underline{u}}(\underline{v}) = 0 ~~{\it for~all}~ \underline{u}~{\it with}~ \phi(\underline{u}) \in U_A,
\end{equation}
\begin{equation}
\label{linear}
\sum_{\tau: \underline{u} \in Y_{\tau}} \sum_{\underline{v} \in Y_{\tau}} f_0(\underline{v}) = \sum_{\tau: \underline{u} \in Y_{\tau}} l(Y_{\tau}) ~{\it for~all}~ \underline{u}~{\it with}~ \phi(\underline{u}) \in A \setminus U_A.
\end{equation}
Then $f_0$ is a function such that
\begin{equation}
\label{linman}
\sum_{\tau=1}^t \left( \sum_{\underline{v} \in Y_{\tau}} f_0(\underline{v}) - l(Y_{\tau}) \right)^2
\end{equation}
is minimal and among such functions $f_0$ is the one for which the value of $|\vec{f_0}|$ is minimal.
\end{theorem}
\begin{proof}
By Theorem \ref{thmlstsq} the function $f_0: A \to \mathbb{R}$ satisfying (\ref{linman}) for which $|\vec{f_0}|^2 = \sum_{\underline{v} \in A} (f_0(\underline{v}))^2$ is minimal is uniquely determined. We proceed with this function $f_0$ and consider it as a function for which each value $f_0(\underline{u})$ for $ \underline{u} \in A$ is a variable.
It follows by differentiation of \eqref{linman} with respect to each $f_0(\underline{u})$ that
$$
\sum_{\tau: \underline{u} \in Y_{\tau}} \sum_{\underline{v} \in Y_{\tau}} f_0(\underline{v}) = \sum_{\tau: \underline{u} \in Y_{\tau}}l(Y_{\tau})
$$
for all $\underline{u} \in A$. Hence $f_0$ satisfies \eqref{linear}.
We know that $\vec{f_0}$ is orthogonal to the linear subspace $L$ of functions having zero line sums along $S$. According to Theorem \ref{base} the functions $m_{\underline{u}}$ have zero line sums along $S$. Therefore they are in $L$ for all $\underline{u} \in \mathbb{Z}^n$. Since the inner product of $\vec{f_0}$ and any vector from $L$ is $0$, $f_0$ satisfies \eqref{orth} too.
The numbers of linear equations in \eqref{orth} and
\eqref{linear} together equal the cardinality of $A$. Thus it suffices to show that they are linearly independent over $\mathbb{R}$ in order to prove that $f_0$ is completely determined by them. Because of the orthogonality of $\vec{f_0}$ and $L$, it is enough to prove that the equations in \eqref{orth} are linearly independent as well as those in \eqref{linear}.
Since by Theorem \ref{base} the functions $m_{\underline{u}}$ are linearly independent, the equations \eqref{orth} are linearly independent as well.
Furthermore, in Theorem \ref{base} it is shown that $f_0$ is uniquely determined by its values at $U_A$. This shows that the equations in \eqref{linear} are linearly independent. We conclude that the linear equations in \eqref{orth} and \eqref{linear} are linearly independent indeed.
\end{proof}
In the particular case that $A \subset \mathbb{Z}^2$ is a rectangular block, and we only have row and column sums, we give an explicit form of $f_0$.
The result shows that the formula from \cite{dht} is also valid if there is noise in the measurements. We simplify our notation.
\begin{cor}
\label{dht}
Let $A =\{(i,j)\in{\mathbb Z}^2:0\leq i<q,0\leq j<p\}, S=\{(1,0),(0,1)\}$.
Let $c_i$ $(i=0,\dots,q-1)$ and $r_j$ $(j=0,\dots,p-1)$ denote the measured column sums and row sums, respectively. Further, write $s_r=\sum\limits_{j=0}^{p-1} r_j, s_c=\sum\limits_{i=0}^{q-1} c_i$ and $T =
\frac {ps_r + qs_c}{q+p}$.
Then for any $(i,j)\in A$ we have
$$
f_0(i,j)= \frac{c_i}{p} + \frac{r_j}{q} - \frac{T}{qp}.
$$
\end{cor}
\noindent Observe that if $s_r = s_c$, then $T = s_r = s_c $.
\begin{proof}
Since
$$(\frac{r_j}{q} + \frac{c_i}{p} - \frac {T}{qp}) - (\frac{r_j}{q} + \frac{c_{i+1}}{p} - \frac {T}{qp}) - (\frac{r_{j+1}}{q} + \frac{c_i}{p} - \frac {T}{qp}) + (\frac{r_{j+1}}{q} + \frac{c_{i+1}}{p} - \frac {T}{qp}) = 0$$
for all $i$ and $j$, the equations \eqref{orth} are satisfied.
Furthermore
$$ \left(\frac{r_0}{q} + \frac{c_i}{p} - \frac {T}{qp}\right) + \dots + \left(\frac{r_{p-1}}{q} + \frac{c_i}{p} - \frac {T}{qp}\right) + \left(\frac{r_j}{q} + \frac{c_0}{p} - \frac {T}{qp}\right) + \dots + \left(\frac{r_j}{q} + \frac{c_{q-1}}{p} - \frac {T}{qp}\right)$$
$$ = \frac{s_r}{q} + c_i - \frac{T}{q} + r_j + \frac{s_c}{p} - \frac{T}{p} = c_i + r_j,$$
which shows that the equations \eqref{linear} are also satisfied.
\end{proof}
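The closed form of Corollary \ref{dht} is easy to test numerically. The sketch below (illustrative Python) applies it to randomly generated, inconsistent measured sums and checks the normal equations \eqref{linear}: the column sum plus the row sum of $f_0$ through each cell equals $c_i + r_j$.

```python
import numpy as np

q, p = 4, 3                    # q columns (0 <= i < q), p rows (0 <= j < p)
rng = np.random.default_rng(2)
c = rng.random(q)              # measured column sums (noisy, possibly inconsistent)
r = rng.random(p)              # measured row sums
T = (p * r.sum() + q * c.sum()) / (q + p)

# f0[i, j] for column i, row j, per the corollary.
f0 = c[:, None] / p + r[None, :] / q - T / (q * p)

col = f0.sum(axis=1)           # column sums of f0
row = f0.sum(axis=0)           # row sums of f0
# Normal equations (linear): col[i] + row[j] == c[i] + r[j] for every cell (i, j).
assert np.allclose(col[:, None] + row[None, :], c[:, None] + r[None, :])
```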
\section{Approximate solutions in the binary case}
Let $A$ be a finite subset of $ \mathbb{Z}^n$.
We assume that a function $h: A \to \mathbb{R}$ is given and provide information on the `nearest' function $f: A \to \mathbb{Z}$ having approximately the same line sums along $S$ as $h$.
If $n=2$ and only row and column sums are given, we have the following result.
\begin{theorem} \label{bara}
If $h: A \to \mathbb{R}$ is given, then there exists a function $f: A \to \mathbb{Z}$ such that every two corresponding values of $f$ and $h$, every two corresponding row and column sums of $f$ and $h$, and the sums of all function values of $f$ and $h$ differ by less than $1$.
\end{theorem}
We apply the following result of Baranyai.
\begin{lemma}[\cite{bar}, Lemma 3]\label{bar}
Let $[h_{ij}]$ be an $l$ by $m$ matrix of real elements. Then there exists an $l$ by $m$ integer matrix $[f_{ij}]$ such that
$$|h_{ij} - f_{ij}| <1 {\rm ~~for~all~}i,j,$$
$$ | \sum_i h_{ij} - \sum_i f_{ij} | <1 {\rm~~for~all~}j,$$
$$ | \sum_j h_{ij} - \sum_j f_{ij} | < 1 {\rm~~for~all~}i,$$
$$ | \sum_i \sum_j h_{ij} - \sum_i \sum_j f_{ij} | <1.$$
\end{lemma}
\begin{proof}[Proof of Theorem \ref{bara}]
Choose an $l$ by $m$ block $A^*$ which covers $A$. For $(i,j)\in A^*\setminus A$ put $h(i,j)=0$. This does not change the line sums. Applying Lemma \ref{bar}, we get $f(i,j)=h(i,j)=0$ for $(i,j) \in A^*\setminus A$ and the theorem follows.
\end{proof}
\noindent The following example shows that the bound 1 is best possible. Let $ 0<\varepsilon<1$, $l> 1/ \varepsilon$, $m=1$, $h(i,1) = \varepsilon$ for $i=1, \dots, l$. Then $f(i,1) =1$ for some $i$, since otherwise the column sums of $h$ and $f$ would differ by $l\varepsilon>1$. But then the $i$-th row sums of $h$ and $f$ have a difference $1 - \varepsilon$.
\vskip.1cm
The crucial feature of the following general result is that the upper bound is independent of the size of $A$.
\begin{theorem}
\label{BF}
Let $A$ be a finite set in $\mathbb{Z}^n$. Let $h: A \to \mathbb{R}$ and let $k$ directions $S$ be given. Then
there exists a function $f: A \to \mathbb{Z}$ such that each difference between corresponding elements of $h$ and $f$ is less than $1$
and each difference between corresponding line sums of $h$ and $f$ along $S$ is at most $k-1$.
\end{theorem}
We introduce the following notation in order to apply a result of Beck and Fiala. Let $X= \{x_1,x_2, \dots\}$ be a finite set and $\mathcal{F}$ a family of subsets of $X$. Associate to every $x_i$ a real number $\alpha_i$. Let $k$ be the degree of $\mathcal{F}$, that is the maximal number of elements of $\mathcal{F}$ to which some element of $X$ belongs. Let $r(k)$ be the least value for which one can find integers $a_i, ~i=1,2, \dots$ so that $|a_i - \alpha_i| < 1$ and
$$ | \sum_{x_i \in E} a_i - \sum_{x_i \in E} \alpha_i| \leq r(k) $$ for all $E \in \mathcal{F} $.
The following result is due to Beck and Fiala (see \cite{bf}). We shall prove a generalization of it in the next section.
\begin{lemma}
\label{bf}
In the above notation, we have
$$ r(k) \leq k-1~~{\rm for}~~k \geq 2.$$
\end{lemma}
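On small instances the conclusion of Lemma \ref{bf} can be confirmed exhaustively. The sketch below (illustrative Python only; the Beck--Fiala argument itself is constructive and far more efficient) searches all floor/ceiling roundings for a set system of degree $k=2$.

```python
import math
from itertools import product

def best_discrepancy(alpha, sets):
    """Minimum over all roundings a_i in {floor(alpha_i), ceil(alpha_i)} of the
    largest deviation |sum_{i in E} a_i - sum_{i in E} alpha_i| over the sets E."""
    best = math.inf
    for a in product(*[(math.floor(x), math.ceil(x)) for x in alpha]):
        worst = max(abs(sum(a[i] - alpha[i] for i in E)) for E in sets)
        best = min(best, worst)
    return best

# A 2x3 grid: each cell lies in one row set and one column set, so k = 2.
cells = [(i, j) for i in range(2) for j in range(3)]
rows = [{n for n, (i, j) in enumerate(cells) if i == i0} for i0 in range(2)]
cols = [{n for n, (i, j) in enumerate(cells) if j == j0} for j0 in range(3)]
alpha = [0.5, 0.3, 0.8, 0.2, 0.9, 0.4]

assert best_discrepancy(alpha, rows + cols) <= 1   # the bound k - 1 with k = 2
```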
\noindent Beck and Fiala conjecture that $r(k) \leq k/2$ is true even for small values of $k$. Bednarchak and Helm \cite{bh} and Helm \cite{he} improved the Beck-Fiala bound to $r(k) \leq k-3/2$ for $k \geq 3$ and $r(k) \leq k-2$ for $k$ sufficiently large, respectively.
\begin{proof}[Proof of Theorem \ref{BF}]
Let $Y_1, \dots, Y_t$ denote the subsets of $A$ which determine the line sums along $S$. Let $\mathcal{F}=\{Y_1,Y_2, \dots, Y_t\} $.
By Lemma \ref{bf} there exist integers $f(a)$ for all $ a \in A$ with $f(a) \in \{\lfloor h(a) \rfloor, \lceil h(a) \rceil \}$ such that
$$\Big| \sum_{a \in Y_j} \big( f(a) - h(a) \big) \Big| \leq k-1$$
for $j= 1, \dots, t$.
\end{proof}
\noindent{\bf Remark 6.1}. Obviously, many variations of Theorem \ref{BF} are possible. E.g. adding the requirement that the sum of all values $f(a)$ differs little from the sum of all values $h(a)$ leads to an upper bound $k$. The requirement that the difference between the sums of the values of $f$ and $h$ along any linear manifold parallel to the axes should be small leads to an upper bound $2^k-2$.
\vskip.1cm
\noindent {\bf Remark 6.2.} By a probabilistic method a better dependence on $k$ can be obtained at the cost of some dependence on $A$.
A recent improvement by Banaszczyk \cite{ba} of a result of Beck implies that in Theorem \ref{BF} the upper bound $k-1$ can be replaced by $C\sqrt{k \log (\min (m,n))}$,
where $C$ is some constant.
\section{Approximate solutions for grey values}
\begin{theorem}
\label{GV}
Let $Z = \{z_1, \dots, z_m \}$ be a set of $m$ real numbers with $z_1< \dots < z_m$. Put $z = \max_i (z_{i+1} - z_i)$.
Let $h: A \to [z_1,z_m]$ and let $k$ directions $S$ be given.
Then there exists a function $f: A \to Z$ such that the difference between the values of $f$ and $h$ at any element of $A$ is at most $z$
and each difference between corresponding line sums of $f$ and $h$ along $S$ is at most $(k-1)z$.
\end{theorem}
For the proof we derive the following extension of the lemma of Beck and Fiala.
\begin{lemma}
\label{bfe}
Let $Z = \{z_1, \dots, z_m \}$ be a set of $m$ real numbers with $z_1< \dots< z_m$. Put $z = \max_i (z_{i+1} - z_i)$.
Let $X= \{x_1,x_2, \dots , x_s\}$ be a finite set and associate to every $x_i$ a real number $\alpha_i \in [z_1,z_m]$.
Then given any family $\mathcal{F}$ of subsets of $X$ having maximum degree $k \geq 2$, there exist $a_i \in Z$ such that
$a_i=z_j$ whenever $\alpha_i = z_j$ and, for every $i$, there is no element of $Z$ strictly between $\alpha_i$ and $a_i$,
and
$$ \left| \sum_{x_i \in E} a_i - \sum_{x_i \in E} \alpha_i \right| \leq (k-1)z $$
for all $E \in \mathcal{F} $.
\end{lemma}
\begin{proof}
We shall define a sequence $\alpha^0, \alpha^1, \dots, \alpha^p$ of $s$-dimensional vectors $\alpha^j = (\alpha_1^j, \dots, \alpha_s^j)$ and a sequence $Y_j$ of subsets of $X$ with the following properties: \\
(i) $ \alpha_i^0 = \alpha_i$ for $i = 1, \dots, s.$ \\
(ii) There is no element of $Z$ in between $\alpha_i$ and $\alpha_i^j$ for $ i = 1, \dots, s; j = 0,1, \dots, p.$ \\
(iii) For every $j$, the set $X \setminus Y_j$ consists of the points $x_i$ for which $\alpha_i^j \in Z$. \\
(iv) $Y_0 \supset Y_1 \supset \dots \supset Y_p $ and $|Y_j| = p-j$ for $0 \leq j \leq p$. \\
(v) $\alpha_i^j = \alpha_i^h$ for $j=h, \dots, p$ whenever $\alpha_i^h \in Z$. \\
(vi) If $|E \cap Y_j| > k$, then $\sum_{x_i \in E} \alpha_i^j = \sum_{x_i \in E} \alpha_i^{j+1}$ for all $E \in \mathcal{F}$.\\
(vii) For $j=0,1, \dots, p$ and all $E \in \mathcal{F}$ we have
$$ \left|\sum_{x_i \in E} \alpha_i^{j} - \sum_{x_i \in E} \alpha_i \right| \leq (k-1)z.$$
\noindent According to (iii) and (iv) the final vector $\alpha^p$ has all coordinates in $Z$.
We construct the sequence $(\alpha^j)$ by induction.
Suppose $\alpha^j$ is defined satisfying the above conditions for $j$. Let $$G_j = \{E \in \mathcal{F} : |E \cap Y_j| \geq k \}.$$
We distinguish between three cases. At every step there is some $i$ such that $x_i \in Y_j, \alpha_i^{j+1} \in Z$ and we set $Y_{j+1} = Y_j \setminus \{x_i\} $. \\
Case (a) $G_j = \emptyset.$ \\
Case (b) $ 0 < |G_j| < |Y_j|$. \\
Case (c) $|G_j| \geq |Y_j|.$
Case (a). If $G_j$ is empty, then choose $\alpha_i^{j+1}$ as the element from $Z$ which is nearest to $\alpha_i$ for all $i$ with $x_i \in Y_j$. It follows that
$$
\left|\sum_{x_i \in E} \alpha_i - \sum_{x_i \in E} \alpha_i^{j+1}\right| \leq (k-1)z ~~{\rm for~ all}~~ E \in \mathcal{F},
$$
and the above conditions are satisfied for $j+1$. \\(It follows that $\alpha_i^{j+1} = \dots = \alpha_i^p = a_i$ for all $i$.)
In Case (b) associate a real variable $\beta_i$ to every $i = 1, \dots , s$ and consider the system of equations
$$ \sum_{x_i \in E \cap Y_j} \beta_i = 0 ~~{\rm for}~~ E \in G_j,$$
$$\beta_i = 0 ~~{\rm for}~~x_i \notin Y_j.$$
A nontrivial solution $\{ \beta_i \}_{i=1}^s$ exists, because in case (b) there are more variables than equations.
Let $t_0$ be the smallest nonnegative value of $t$ for which $\alpha_i^j +t \beta_i \in Z$ for some $i$ with $x_i \in Y_j.$
Put $\alpha_i^{j+1} = \alpha_i^j + t_0 \beta_i$ for $i=1, \dots, s$.
It is easy to check that $$\sum_{x_i \in E} \alpha_i^{j} = \sum_{x_i \in E} \alpha_i^{j+1} ~~ {\rm for ~all}~~ E \in G_j.$$
\noindent Hence the above conditions are satisfied for $j+1$.
Case (c). Since each $x_i$ has degree at most $k$ in $G_j$, we may conclude that $|G_j| = |Y_j|$, each $x_i$ has degree exactly $k$ in $G_j$ and
$|E \cap Y_j| = k$ for every $E \in G_j$.
Let $\alpha_i^{j+1}$ be the element from $Z$ nearest to $\alpha_i$ for every $x_i \in Y_j$.
Then $| \alpha_i^{j+1} - \alpha_i| \leq z/2$ for $x_i \in Y_j$. Since each $E$ contains at most $k$ elements of $Y_j$ and $kz/2 \leq (k-1)z$ for $k \geq 2$, we obtain
$$ |\sum_{x_i \in E} \alpha_i^{j+1} - \sum_{x_i \in E} \alpha_i| \leq (k-1)z$$
for all $E \in \mathcal{F}$. Hence the above conditions are satisfied for $j+1$. \\(It follows that $\alpha_i^{j+1} = \dots = \alpha_i^p = a_i$ for all $i$.)
Write $a_i =\alpha_i^p$ for $i = 1, \dots, s$. It is easy to check that in each case the relations (iii), (iv) and (vii) hold. This completes the proof.
\end{proof}
\begin{proof} [Proof of Theorem \ref{GV}]
Let $Y_1, \dots, Y_t$ denote the subsets of $A$ which determine the line sums along $S$.
By Lemma \ref{bfe} there exists a function $f: A \to Z$ such that
$$\Big|\sum_{a \in A \cap Y_j} \big( f(a) - h(a) \big)\Big| \leq (k-1)z$$ for
$j= 1, \dots, t$.
\end{proof}
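On a small instance the conclusion of Lemma \ref{bfe} can be checked exhaustively. The sketch below (illustrative Python) takes the grey levels $Z = \{0, 0.4, 1\}$, so $z = 0.6$, allows each $\alpha_i$ to move only to an adjacent element of $Z$, and confirms the bound $(k-1)z$ for a set system of degree $k=2$.

```python
import bisect
import math
from itertools import product

Z = [0.0, 0.4, 1.0]
z = max(b - a for a, b in zip(Z, Z[1:]))   # z = 0.6

def neighbours(x):
    """Elements of Z adjacent to x, i.e. with no element of Z strictly between."""
    if x in Z:
        return (x,)
    j = bisect.bisect(Z, x)
    return (Z[j - 1], Z[j])

def best_discrepancy(alpha, sets):
    best = math.inf
    for a in product(*[neighbours(x) for x in alpha]):
        worst = max(abs(sum(a[i] - alpha[i] for i in E)) for E in sets)
        best = min(best, worst)
    return best

# A 2x3 grid: each cell lies in one row set and one column set, so k = 2.
cells = [(i, j) for i in range(2) for j in range(3)]
rows = [{n for n, (i, j) in enumerate(cells) if i == i0} for i0 in range(2)]
cols = [{n for n, (i, j) in enumerate(cells) if j == j0} for j0 in range(3)]
alpha = [0.1, 0.7, 0.5, 0.9, 0.2, 0.35]

assert best_discrepancy(alpha, rows + cols) <= (2 - 1) * z
```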
\noindent {\bf Remark 7.1.} A small adjustment must be made if the entries are not all in $[z_1,z_m].$ E.g. values of $h$ smaller than $z_1$ are first replaced by $z_1$, values larger than $z_m$ by $z_m$.
\vskip.1cm
\noindent {\bf Remark 7.2.} If we want to have relatively short vectors $\underline{f},\underline{g}$, then we may apply Lemma \ref{bfe} to the function $f_0$ from Theorem \ref{thmlstsq}.
% arXiv:1207.3933 (math.CO, July 2012): ``Bounds for approximate discrete tomography solutions''
% arXiv:2211.16874: ``A survey on generalizations of Forelli's theorem and related pluripotential methods''
% Abstract: We present a survey on recent developments of generalizations of Forelli's analyticity theorem and related pluripotential methods.
\section{Introduction}
When the higher-dimensional complex analysis began to emerge in the early 1900s, a natural concern was to establish criteria for the complex analyticity of functions of several complex variables. Then the study of analyticity theorems started with the following celebrated
\begin{theorem}[Hartogs \cite{Hartogs1906}]\label{Hartogs-original}
A complex-valued function $f:\Omega\to \mathbb{C}$ on an open set $\Omega\subset \mathbb{C}^n$ is holomorphic if $f$ is separately holomorphic, i.e., it satisfies the Cauchy-Riemann equations in each complex variable separately.
\end{theorem}
Note that the converse of the theorem is trivial. Earlier, Osgood \cite{Osgood} proved Theorem \ref{Hartogs-original} under the additional assumption that the given function is continuous. The ingenious idea of Hartogs was to use subharmonic functions (Lemma \ref{Hartogslemma}) to show that such an assumption is redundant. Since then, the theory of plurisubharmonic functions, which we now call \textit{pluripotential theory}, has been developed by various researchers. For applications of the modern pluripotential methods to generalizations of Theorem \ref{Hartogs-original}, see \cite{JP11} and references therein.
The following theorem of Forelli should be second only to Hartogs' analyticity theorem among various complex analyticity theorems.
\begin{theorem}[Forelli \cite{Forelli77}] \label{Forelli-original}
If a function $f\colon B^n \to \mathbb{C}$ defined on the open unit ball $B^n\subset \mathbb{C}^n$ satisfies the following
two conditions
\begin{enumerate}
\setlength\itemsep{0.3em}
\item $f\in C^{\infty}(0)$, meaning that for any positive integer $k$ there
exists an open neighborhood $V_k$ of the origin $0$ such that $f \in C^k
(V_k)$, and
%
\item the correspondence $f_v: z\in B^1\to f(z v)$ is holomorphic on $B^1$ for each
$v \in \mathbb{C}^n$ with $\|v\|=1$,
\end{enumerate}
then $f$ is holomorphic on $B^n$.
\end{theorem}
At first glance, one may think that Forelli's theorem can be obtained as an easy corollary of Hartogs' theorem via the birational blow-up of $B^n$ at the origin. Although the idea can be realized as shown in \cite{Kim13}, the proof requires additional nontrivial analysis.
Naturally, there have been many attempts to generalize Theorem \ref{Forelli-original}. But it turned out that Condition (1) cannot be weakened to finite differentiability, as numerous counterexamples were found; see \cite{JKS16}. Recently, Condition (2) has been generalized successfully in various directions, starting with the work of Chirka \cite{Chirka06}. The purpose of this article is to present the recent developments of generalizations of Forelli's analyticity theorem and related pluripotential methods. The contents of the article are based upon the author's talk at the POSTECH Conference on Complex Analytic Geometry held in Pohang, South Korea during the period 18--22 July 2022.
\subsection*{Acknowledgement}
The author is supported by the National Research Foundation of Korea (NRF-2018R1C1B3005963, NRF-2021R1A4A1032418).
\section{The original proof of Forelli}
The original version of Forelli's theorem is concerned with the pluriharmonicity of functions harmonic along linear complex discs passing through the origin. But with minor adjustments, Forelli's proof in \cite{Forelli77} also yields Theorem \ref{Forelli-original} as noted in \cite{Stoll80}. We shall recapitulate the proof in this section.
\medskip
\textit{Sketch of the proof of Theorem \ref{Forelli-original}}. Let $f$ be the given function. Note that, by Condition (2), one can define
\[
f_m(z):=\frac{1}{2\pi}\int_{0}^{2\pi}f(ze^{i\theta})\,e^{-im\theta}d\theta
\]
for each nonnegative integer $m\geq 0$ and $z\in B^n$. Let $S_f:=\sum_{m=0}^{\infty}f_m$ be the formal sum of the sequence $\{f_m\}$. Fix $z\in B^n$ and choose $t\in \mathbb{C}$ with $|t|\cdot \|z\|<1$. As the correspondence $t \mapsto f(tz)$ is holomorphic by the assumption, we have
\[
f(tz)=\sum_{m=0}^{\infty}c_m(z)\cdot t^m
\]
for some sequence $\{c_m(z)\}\subset \mathbb{C}$. Then it follows from the definition of $f_m$ that $f_m(tz)=t^mc_m(z)$. Letting $t \to 1$, we obtain $f_m(z)=c_m(z)$. So $f\equiv S_f$ on $B^n$ and
\begin{equation}\label{homogeneous}
f_m(tz)=t^mf_m(z)~\text{for each}~t\in B^1,\,z\in B^n.
\end{equation}
Since $f\in C^{\infty}(0)$, we may differentiate the terms in (\ref{homogeneous}) $m$ times with respect to the complex variable $t$. Then one easily verifies that each $f_m$ is a holomorphic homogeneous polynomial of degree $m$. Hence the formal Taylor series $S_f$ is of $\textit{holomorphic type}$. Note also that $f$ is continuous on a neighborhood of the origin. So there are numbers $2r\in (0,1),\, M>0$ such that $|f(z)|\leq M$ if $z\in B^n(0;2r) := \{z \in \mathbb{C}^n \colon \|z\|<2r \}$. Then for each $z\in B^n(0;r)$, we have
\begin{align*}
|f_m(z)|&=\frac{1}{2^m}\cdot |f_m(2z)|\leq \frac{1}{2\pi}\frac{1}{2^m}\cdot \int_{0}^{2\pi}|f(2ze^{i\theta})|\,d\theta\leq \frac{M}{2^m}.
\end{align*}
Therefore, $f\equiv S_f$ is holomorphic on $B^n(0;r)$ by the Weierstrass $M$-test. To prove that $f$ is holomorphic on $B^n$, we recall the following celebrated lemma of Hartogs.
\begin{lemma}[Hartogs \cite{Hartogs1906}]\label{Hartogslemma}
Let $\{u_m\}$ be a sequence of subharmonic functions on an open set $\Omega\subset \mathbb{C}^n$ and $C\in \mathbb{R}$ a constant such that
\begin{enumerate}
\setlength\itemsep{0.1em}
\item $\{u_m\}$ is locally uniformly bounded from above on $\Omega$, and
\item $\limsup\limits_{m\to \infty}u_m(z)\leq C$ for any $z\in \Omega$.
\end{enumerate}
If $K$ is a compact subset of $\Omega$ and $\epsilon$ is a positive number, then there exists a positive integer $N=N(K,\epsilon)$ such that $u_m(z)\leq C+\epsilon$ whenever $m\geq N$ and $z\in K$.
\end{lemma}
\noindent
Let $\{u_m\}$ be a sequence of subharmonic functions on $\mathbb{C}^n$ defined as $u_m(z):=|f_m(z)|^{\frac{1}{m}}$. Then applying Lemma \ref{Hartogslemma} to the sequence $\{u_m\}$, one can conclude that $S_f$ converges uniformly on each compact subset of $B^n$; see p.362 of \cite{Forelli77}. \hfill $\Box$
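The averaging operator defining $f_m$ and the homogeneity identity (\ref{homogeneous}) are easy to check numerically. The following sketch is our own illustration (not part of Forelli's argument; the sample function, test points, and tolerances are our choices): it approximates $f_m$ by the trapezoid rule, which is spectrally accurate for periodic analytic integrands, and verifies $f_m(tz)=t^mf_m(z)$ for an entire function on $\mathbb{C}^2$.

```python
import numpy as np

def slice_coeff(f, z, m, N=256):
    # f_m(z) = (1/2 pi) * integral_0^{2 pi} f(z e^{i theta}) e^{-i m theta} d theta,
    # approximated by the trapezoid rule on N equispaced nodes
    theta = 2 * np.pi * np.arange(N) / N
    vals = np.array([f(z * np.exp(1j * th)) for th in theta])
    return np.mean(vals * np.exp(-1j * m * theta))

f = lambda z: np.exp(z[0]) * np.cos(z[1])      # entire, hence holomorphic on B^2
z = np.array([0.3 + 0.1j, -0.2 + 0.05j])
t = 0.7 * np.exp(0.4j)                          # |t| < 1

for m in range(5):
    lhs = slice_coeff(f, t * z, m)
    rhs = t**m * slice_coeff(f, z, m)
    assert abs(lhs - rhs) < 1e-10               # f_m(t z) = t^m f_m(z)
```

Here $f_m(z)$ recovers the degree-$m$ homogeneous part of the Taylor expansion of $f$ evaluated at $z$, which is exactly why the homogeneity identity holds.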
\section{Forelli's theorem at the level of formal power series}\label{Section3}
Recall that the proof of Theorem \ref{Forelli-original} was carried out in the following two steps:
\smallskip
\begin{narrower}
\textbf{Step 1.} The formal Taylor series $S_f$ of $f$ is of holomorphic type.

\textbf{Step 2.} The formal series $S_f$ converges uniformly on some $B^n(0;r),~r<1$.
\end{narrower}
\noindent
Then by Lemma \ref{Hartogslemma}, $f\equiv S_f$ is holomorphic on $B^n$. In this section, we shall introduce several studies regarding \textbf{Step 2}.
\subsection{Solutions of Bochner's problem by Zorn, Ree, Lelong, and Cho-Kim}
A line of research concerning the convergence of formal power series of holomorphic type originates from the following question of Bochner:
\begin{question}[Bochner]
Let $S=\sum a_{ij}z_1^iz_2^j$ be a formal power series with complex coefficients such that every substitution of convergent power series with
complex coefficients $z_1=\sum b_it^i$, $z_2=\sum c_it^i$ produces a
convergent power series in $t$. Is $S$ convergent on some neighborhood
of $0\in \mathbb{C}^2?$
\end{question}
The question was answered affirmatively by Zorn in \cite{Zorn47}. He proved that $S\in \mathbb{C}[[z_1,z_2]]$ is uniformly convergent on an open neighborhood of the origin if the map $t\in \mathbb{C}\to S(at,bt)$ has a nonvanishing radius of convergence $R=R(a,b)>0$ for each $(a,b)\in \mathbb{C}^2$. Zorn also remarked in \cite{Zorn47} that it would be interesting to know whether his result can be generalized to the real case. In \cite{Ree49}, Ree clarified the meaning of the `real case' and obtained the same conclusion when the set of linear discs in Zorn's theorem is replaced by the set $\{t\in \mathbb{C} \to (at,bt):(a,b)\in \mathbb{R}^2\}$. Motivated by the works of Zorn and Ree, Lelong \cite{Lelong51} introduced the following
\begin{definition}[\cite{Lelong51}]\label{definitionLelong}
\normalfont
A set $E\subset \mathbb{C}^2$ is called \textit{normal} if any formal power series $S\in \mathbb{C}[[z_1,z_2]]$ enjoying the property that $S_{a,b}(t):=S(at,bt)\in \mathbb{C}[[t]]$ has a positive radius of convergence for every $(a,b)\in E$ becomes holomorphic on some open neighborhood of the origin in $\mathbb{C}^2$.
\end{definition}
In the terminology of Lelong, the theorems of Zorn and Ree say that $\mathbb{C}^2$ and $\mathbb{R}^2$ are normal sets, respectively. In \cite{Lelong51}, Lelong characterized normal sets in $\mathbb{C}^2$ using potential theory on the complex plane. This was generalized to a higher-dimensional principle by Cho-Kim in \cite{CK21}. To state the analyticity theorem of Cho-Kim, recall that a set $F\subset \mathbb{C}^n$ is $\textit{pluripolar}$ if there is a nonconstant plurisubharmonic function $u$ on $\mathbb{C}^n$ such that $u\equiv -\infty$ on $F$. For each set $F\subset \mathbb{C}^n$, the $\textit{direction set}$ of $F$ is defined as
\[
F':=\bigg\{\bigg(\frac{z_2}{z_1},\dots, \frac{z_n}{z_1}\bigg)\in \mathbb{C}^{n-1}:(z_1,\dots,z_n)\in F,~z_1\neq 0\bigg\}.
\]
Definition \ref{definitionLelong} extends naturally to higher dimensions, so we can now state the following
\begin{theorem}[Cho-Kim \cite{CK21}]\label{CK21}
A set $F\subset \mathbb{C}^n$ is normal if $F'\subset \mathbb{C}^{n-1}$ is not pluripolar.
\end{theorem}
Here, we give a sketch of the proof of the theorem when $n=2$. Let
\[
S(z_1,z_2)= \sum a_{i,j}{z_1}^{i}{z_2}^{j}
\]
be a formal power series for which $S_{a_1,a_2}(t):=S(a_1t,a_2t)$ has a positive radius of
convergence $R_{(a_1,a_2)}>0$ for every $(a_1,a_2)\in F$. Then we are to show that $S$ is holomorphic on some open neighborhood
of $0$ in $\mathbb{C}^2$. Note that, for any $b\in F'$, $S_{1,b}$ converges absolutely and uniformly on the closed disc $|t|\leq \frac{1}{2}R_{(1,b)}$, so it can be rearranged as follows:
\[
S_{1,b}(t)=S(t,bt)=\sum_{m=0}^{\infty}\bigg(\sum_{j=0}^{m}a_{m-j,j}b^j\bigg)t^m = \sum_{m=0}^{\infty}P_m(b)\cdot t^m,
\]
where
\[
P_m(z):=\sum_{j=0}^{m}a_{m-j,j}z^j\in \mathbb{C}[z],~\textup{deg}\,P_m\leq m.
\]
Then the root test implies the following pointwise estimate:
\begin{equation}\label{limsup}
\limsup\limits_{m\to \infty}\frac{1}{m}\,\textup{log}\,|P_m(b)|<\infty \quad \text{for each}~b\in F'.
\end{equation}
Note that each function $\frac{1}{m}\,\textup{log}\,|P_m(z)|$ is subharmonic on $\mathbb{C}.$ The crux of the proof turns out to be the following version of Lemma \ref{Hartogslemma}; see Proposition 4.1 in \cite{CK21}.
\begin{theorem}[Lelong \cite{Lelong51}, Cho-Kim \cite{CK21}]\label{estimate}
Let $\{P_m\}\subset \mathbb{C}[z_1,\ldots,z_n]$ be a sequence of
polynomials with $\textup{deg}\,P_m\leq m$ for each positive integer $m$. If $F\subset \mathbb{C}^n$ is nonpluripolar and
\[
\limsup\limits_{m\to \infty}\frac{1}{m}\, \textup{log}\,|P_m(z)|<\infty \quad \text{for each}~ z\in F,
\]
then for each compact subset $K$ of $\mathbb{C}^n$, there exists a constant $M=M(K)>0$ such that $\frac{1}{m}\,\textup{log}\,|P_m(z)|<\textup{log}\,M$ for any $z\in K$, $m\geq 1$.
\end{theorem}
\noindent
So, combining (\ref{limsup}) with Theorem \ref{estimate}, we obtain $|P_m(z)|\leq M^m$ for each $m\geq 1$ and $z\in \bar{B}^n$. Then the conclusion follows from the Cauchy estimate and the Weierstrass $M$-test.
We remark that the proof of Theorem \ref{estimate} had to await Theorem 7.1 in \cite{BedfordandTaylor82}. If $n=2,$ then the theorem of Bedford-Taylor reduces to the result of H. Cartan \cite{Cartan42} used in \cite{Lelong51}; see the statement $[a_3]$ on p.14 of \cite{Lelong51}.
\subsection{Projective capacity and related extremal plurisubharmonic function}
Recall that, if a formal series $S\in \mathbb{C}[[z]]$ converges at $z_0\in \mathbb{C}-\{0\}$, then $S$ converges uniformly on $B^1(0;r)$ for any $0<r<|z_0|$ by the classical theorem of Abel. However, this does not hold for formal series of several complex variables; Leja showed that there exists a formal power series $S\in \mathbb{C}[[z_1,z_2]]$ that converges pointwise on a given countable set $E\subset \mathbb{C}^2$ but does not converge uniformly on any open neighborhood of the origin. Then it would be natural to address the following
\begin{problem*}[Leja]
Characterize a subset $E$ of $\mathbb{C}^n$ for which the following statement holds: any formal series $S\in \mathbb{C}[[z_1,\dots,z_n]]$ that converges at each point of $E$ must converge uniformly on an open neighborhood of the origin.
\end{problem*}
We refer the reader to \cite{Siciak90} for a detailed historical account of the problem of Leja. In \cite{Siciak90}, Siciak gave a solution to the problem using the notion of projective capacity and a related plurisubharmonic extremal function. As noted by Levenberg and Molzon \cite{LevenMol88}, the solution also yields a complete characterization of $F_{\sigma}$ normal sets. To recapitulate the solution of Siciak, we first recall the following
\begin{definition}[\cite{Siciak82}]\label{defofext}
\normalfont
The set of $\textit{homogeneous plurisubharmonic functions}$ on $\mathbb{C}^n$ is defined as
\[
H:=\{u\in \textup{PSH}(\mathbb{C}^n): u\geq 0~\text{on}~\mathbb{C}^n, \,u(tz)=|t|u(z)\; \forall z\in \mathbb{C}^n,~ t\in \mathbb{C}\}.
\]
For each bounded subset $E$ of $\mathbb{C}^n$, define
\[
\Psi_E(z):=\textup{sup}\,\{u(z):u\in H,\; u\leq 1~\text{on}~E\},\;~\forall z\in \mathbb{C}^n.
\]
If $E$ is unbounded, then we set
\[
\Psi_{E}(z):=\textup{inf}\,\{\Psi_{F}(z): F\subset E~\text{is bounded} \},\;~\forall z\in \mathbb{C}^n.
\]
Let $S^{2n-1} := \{z \in \mathbb{C}^{n}=\mathbb{R}^{2n} \colon \|z\|=1\}$ denote the unit sphere. The $\textit{projective capacity}$ of a set $E\subset \mathbb{C}^n$ is defined as
\[
\rho(E):=\textup{inf}\,\{\|u\|_{E}:u\in H,\, \|u\|_{S^{2n-1}}=1\},~\text{where}~\|u\|_{E}:=\sup_{z\in E}|u(z)|.
\]
\end{definition}
Recall that a set $E\subset \mathbb{C}^n$ is $\textit{circular}$ if $(e^{i\theta}z_1,\dots,e^{i\theta}z_n)\in E$ for any $\theta \in \mathbb{R}$ and $(z_1,\dots,z_n)\in E$. From the potential theoretic point of view, the importance of the family $H$ comes from the fact that a circular set $E\subset \mathbb{C}^n$ is pluripolar if, and only if, there is a nonzero function $u\in H$ such that $u\equiv 0$ on $E$. If $u\in H$ is nonzero and $u\equiv 0$ on $E$, then $\rho(E)=0$ by the definition and $\Psi^{\ast}_{E}\equiv +\infty$ as $m\cdot u\leq \Psi_{E}$ on $\mathbb{C}^n$ for each $m>0$. Here, the asterisk denotes the upper-semicontinuous regularization of the function. In the construction of $u\in H$ vanishing on a given circular pluripolar set, Lemma \ref{Hartogslemma} again plays an important role; see Proposition 2.20 in \cite{Siciak82}. If a circular set $E$ is nonpluripolar, then it is known that $\rho(E)\neq 0$ and $\Psi^{\ast}_{E}\in H$.
For each nonempty set $F\subset S^{2n-1}$, define a circular set $S_0(F):=\{zv:z\in B^1,\,v\in F\}$. This set will be referred to as a $\textit{suspension of linear discs}$ at the origin. For simplicity, we will identify a suspension with its underlying set. It can also be checked that $S_0(F)$ is nonpluripolar if, and only if, $F'$ is nonpluripolar. We can now state the analyticity theorem of Siciak in the following form:
\begin{theorem}[Siciak \cite{Siciak90}]\label{Siciak original}
Let $F\subset S^{2n-1}$ be a set such that $F'\subset \mathbb{C}^{n-1}$ is nonpluripolar. If $S\in $ $\mathbb{C}[[z_1,\dots,z_n]]$ is a formal series for which the map $z\in B^1\to S(zv)$ is holomorphic for any $v\in F,$ then $S$ is holomorphic on a domain of holomorphy
\[
\Omega:=\big\{z\in \mathbb{C}^n:\Psi^{\ast}_{S_0(F)}(z)<1\big\}\supset B^n(0;\rho(S_0(F)))
\]
containing the origin. Conversely, if $F\subset S^{2n-1}$ is a circular $F_{\sigma}$ set such that $F'$ is pluripolar, then there exists a formal series $S\in \mathbb{C}[[z_1,\dots,z_n]]$ such that the correspondence $z\in B^1 \to S(zv)$ is holomorphic for each $v\in F$ but $S$ does not converge uniformly on any open neighborhood of the origin.
\end{theorem}
To prove that the given $S$ is holomorphic on $\Omega$, we first write the series as a formal sum $S=\sum_{m=0}^{\infty}q_m$ of homogeneous polynomials $\{q_m\}$ with $\textup{deg}\,q_m=m$. The crux of the proof is the following $\textit{Bernstein-Walsh type inequality}$ which follows from Definition \ref{defofext}:
\begin{equation}\label{BWinequality}
|q_m(z)|\leq \|q_m\|_{E}\cdot \{\Psi^{\ast}_{E}(z)\}^m~\text{for each}~z\in \mathbb{C}^n.
\end{equation}
It should be noted that the inequality becomes trivial if $E\subset \mathbb{C}^n$ is a pluripolar circular set by the preceding arguments. Then using the Cauchy estimate of $S$ along each complex line in the suspension, one can obtain the following estimate from (\ref{BWinequality}):
\[
\limsup_{m\to \infty}|q_m(z)|^{\frac{1}{m}}\leq \Psi^{\ast}_{S_0(F)}(z)~\text{for any}~z\in \mathbb{C}^n.
\]
Therefore, $S$ is holomorphic on $\Omega$ by the root test. We refer the reader to the proof of Theorem 3.1 in \cite{Siciak90} for the details. We also remark that the region of convergence of $S$ can be estimated using the pluricomplex Green function with pole at infinity as in \cite{Sadullaev22}.
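For the simplest circular set $E=S^{2n-1}$ one checks easily that $\Psi^{\ast}_{E}(z)=\|z\|$, so (\ref{BWinequality}) reduces to the homogeneity identity $|q_m(z)|=\|z\|^m\,|q_m(z/\|z\|)|\leq \|q_m\|_{E}\,\|z\|^m$. The following numerical sanity check is our own illustration (the polynomial and sample size are arbitrary choices); the supremum over $E$ is estimated over the very directions being tested, so the inequality holds exactly up to rounding.

```python
import numpy as np

rng = np.random.default_rng(0)
q = lambda z: 3 * z[0]**2 - 2 * z[0] * z[1] + z[1]**2   # homogeneous of degree m = 2
m = 2

zs = rng.normal(size=(2, 500)) + 1j * rng.normal(size=(2, 500))
norms = np.linalg.norm(zs, axis=0)
sup_E = np.max(np.abs(q(zs / norms)))   # sup of |q| over these directions in S^3

# Bernstein-Walsh with E = S^3 and Psi*_E(z) = ||z||:
assert np.all(np.abs(q(zs)) <= sup_E * norms**m * (1 + 1e-12))
```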
\section{Generalizations of Forelli's analyticity theorem}
Recent generalizations of Forelli's analyticity theorem started with the work of Chirka \cite{Chirka06}. He proved that the radial complex discs in Theorem \ref{Forelli-original} can be replaced with a singular foliation of $B^2$ by holomorphic curves transversal at the origin to obtain the same conclusion. This was generalized to a higher-dimensional principle by Joo-Kim-Schmalz in \cite{JKS13} and finally by Cho-Kim \cite{CK21} to the case where the foliation is parametrized by an open subset of $S^{2n-1}$.
In another direction, Kim-Poletsky-Schmalz observed in \cite{KPS09} that each radial complex disc passing through the origin in $\mathbb{C}^n$ is an integral curve of the complex Euler vector field $E=\sum_{k=1}^{n}z_k\frac{\partial}{\partial z_k}$. Then they proved that the set of integral curves of a \textit{diagonalizable vector field} of the form
\[
X=\sum_{k=1}^{n}\lambda_kz_k\frac{\partial}{\partial z_k},~\{\lambda_1,\dots,\lambda_n\}\subset \mathbb{C}
\]
can replace the set of complex lines in Theorem \ref{Forelli-original} if, and only if, the field is \textit{aligned}, i.e., $\lambda_j/\lambda_k>0$ for each $j,k\in \{1,\dots,n\}$. This was generalized to the case of nondiagonalizable vector fields contracting at the origin in \cite{JKS16}. We say that a holomorphic vector field $X$ defined on an open neighborhood of the origin in $\mathbb{C}^n$ is $\textit{contracting at the origin}$ if the flow-diffeomorphism $\Phi_t$ of $\textup{Re}\,X$ for some $t<0$ satisfies: (1) $\Phi_t(0)=0$, and (2) every eigenvalue of the matrix $d\Phi_t|_{0}$ has absolute value less than 1. By the Poincar\'{e}-Dulac theorem, there exists a local holomorphic coordinate system near the origin such that $X$ takes the following form:
\begin{equation}\label{contractingVF}
X=\sum_{k=1}^{n}(\lambda_kz_k+g_k(z))\frac{\partial}{\partial z_k},
\end{equation}
where $g_k\in \mathbb{C}[z_1,\dots,z_n]$ and $\lambda_k\in \mathbb{C}$ for each $k$. $X$ is said to be \textit{aligned} if $\lambda_j/\lambda_k>0$ for each $j,k\in \{1,\dots,n\}$. By replacing the complex time $t$ with $\lambda_1\cdot t$ in the complex flow map $\Phi^X(z,t)$, we will assume that $\lambda_i>0$ for each $i$ whenever $X$ is aligned.
Note that the two theories complement each other in a sense; the partial foliation considered by Cho-Kim need not be generated by a vector field, whereas the integral curves of a holomorphic vector field need not intersect mutually transversally at the origin. In this section, we shall briefly examine the ideas of the proofs of the aforementioned generalizations of Theorem \ref{Forelli-original}.
\subsection{Foliation viewpoint}
To formulate a family of holomorphic curves intersecting mutually transversally at a point $p$ of a domain $\Omega\subset \mathbb{C}^n$, Cho-Kim introduced the notion of \textit{$C^1$ pencil of holomorphic discs} at $p\in\Omega$ in \cite{CK21}. The primary example of such a pencil is the \textit{standard pencil} $S_0(U)$ of linear discs at the origin in $\mathbb{C}^n$ parametrized by an open subset $U$ in $S^{2n-1}$. Then a general pencil is defined as a $C^1$ diffeomorphic image of a standard pencil into $\Omega$. Here, we require that the diffeomorphism (1) sends the origin to the point $p$, and (2) deforms each linear disc holomorphically to a Riemann surface in $\Omega$.
Let $\Omega\subset \mathbb{C}^n$ be a domain admitting a $C^1$ pencil of holomorphic discs at $p\in \Omega$. Then the theorem of Cho-Kim \cite{CK21} says that, if a function $f:\Omega\to \mathbb{C}$ is (1) smooth at $p$, and (2) holomorphic along each Riemann surface of the given pencil, then $f$ is holomorphic on the union of an open ball centered at $p$ and the underlying (open) set of the pencil. If $U=S^{2n-1}$, then the theorem reduces to the result of Joo-Kim-Schmalz in \cite{JKS13} and further to Theorem \ref{Forelli-original} if the pencil is standard.
The proof of the theorem of Cho-Kim was established in two steps as follows:
\medskip
\begin{narrower}
\textbf{Step 1.} There is a subpencil on whose underlying set $f$ is holomorphic.

\textbf{Step 2.} There is a standard subpencil of the pencil found in Step 1.
\end{narrower}
\medskip
\noindent
First, the analysis of Joo-Kim-Schmalz (\cite{JKS13}, p.1173--1174) implies that $f$ satisfies the Cauchy-Riemann equations at each point of a fixed Riemann surface of the pencil near the origin. This, together with an application of the Baire category theorem, settles Step 1. If one assumes that Step 2 fails to hold, then a contradiction can be derived from the transversality of each pair of curves of the given pencil. Hence we can find a standard subpencil along which the function $f$ is holomorphic. Then we recall the original steps of Forelli and show first that the formal Taylor series $S_f$ is of holomorphic type. Now the conclusion follows from Theorem \ref{CK21} and the generalization of Lemma \ref{Hartogslemma} in \cite{Chirka06}.
\subsection{Vector field viewpoint}
Let $X$ be a diagonalizable vector field of aligned type on $\mathbb{C}^n$. Then Kim-Poletsky-Schmalz \cite{KPS09} proved that, if a function $f:B^n\to \mathbb{C}$ satisfies: (1) $f\in C^{\infty}(0)$, and (2) $f$ is holomorphic along each integral curve of $X$, then $f$ is holomorphic on $B^n$. For the proof, the authors followed the original steps of Forelli. Note first that the complex flow map of $X$ is given by
\begin{equation}\label{flow}
\Phi^X(z,t)=(z_1e^{-\lambda_1 t},\dots,z_ne^{-\lambda_n t}).
\end{equation}
If $S_f=\sum C_{IJ}z^I\bar{z}^{J}$ is the formal Taylor series of the function $f$, then Condition (2) implies that the map
\begin{equation}\label{asymptoticexp}
f(\Phi^X(z,t))=\sum \{C_{IJ}z^I\bar{z}^{J}\cdot (\text{exponential term in variable}\;t)\}
\end{equation}
is holomorphic in $t$ for each $z\in B^n$. Here, we used the multi-index notations:
\begin{align*}
k=(k_1,\ldots,k_n),~|k|=k_1+\cdots+k_n,~k!=k_1!\cdots k_n!,\,\text{and}\,z^{k}=z_1^{k_1} \cdots z_n^{k_n},
\end{align*}
where $z=(z_1,\dots,z_n)\in \mathbb{C}^n$. Then (\ref{asymptoticexp}) implies that $C_{IJ}=0$ whenever $J\neq 0$; so $S_f$ is of holomorphic type. To prove that $S_f$ is holomorphic on a neighborhood of the origin, the authors of \cite{KPS09} develop a version of the Cauchy estimate along each integral curve of $X$. So they obtain a uniform estimate
\[
\Big|\sum_{\text{finite}} C_{I0}z^I\Big|<A,~\text{for each}~z\in B^n
\]
for some $A>0$. Then by applying the Cauchy estimate to the finite polynomial above, one can conclude from the Weierstrass $M$-test that $f\equiv S_f$ is holomorphic on some $B^n(0;r)$. Finally, use the implicit function theorem to `straighten' the complex flow of $X$ locally, and extend $f$ along the flow (Lemma \ref{Hartogslemma}).
If the given field $X$ is not diagonalizable, then the formal expansion of the map $f(\Phi^X(z,\cdot))$ becomes considerably more complicated than (\ref{asymptoticexp}), and it is not clear whether a version of the Cauchy estimate along the leaves of $X$ can be formulated. So the authors of \cite{JKS16} take a different approach: they first use sophisticated mathematical induction on the index $J$ in $S_f=\sum C_{IJ}z^I\bar{z}^J$ to show that $S_f$ is of holomorphic type. Then they use a Phragm\'en--Lindel\"of type argument and the methods in \cite{JKS13} to show that $f$ satisfies the Cauchy-Riemann equations along each complex integral curve of $X$. Recall that the set of complex integral curves of $X$ forms a foliation of a punctured open neighborhood $V-\{0\}$ of the origin. Since $f\in C^1$ on some open neighborhood of the origin, we have $\bar{\partial}f\equiv 0$ on $V-\{0\}$ by shrinking $V$ if necessary. Therefore, $f$ becomes holomorphic on $V$ by Theorem \ref{Hartogs-original}, as any isolated point in $\mathbb{C}^n$ $(n\geq 2)$ is removable. Then by Lemma \ref{Hartogslemma}, the function extends holomorphically to the union of $V$ and the maximal integral curves of $X$ intersecting $V$.
\section{Localization of Kim-Poletsky-Schmalz theorem}
Note that the underlying set of each family of curves considered in the previous section is open. But in Section \ref{Section3}, we only required the underlying set of a suspension of linear discs to be $\textit{nonpluripolar}$ to guarantee $\textbf{Step 2}$. A general nonpluripolar set need not have a nonempty interior; indeed, there are nonpluripolar sets of Lebesgue measure zero (e.g. $\mathbb{R}$ in $\mathbb{C}$). So in light of the original steps of Forelli, we obtain a new generalization of Theorem \ref{Forelli-original} as soon as we find a nonpluripolar linear suspension with empty interior on which $\textbf{Step 1}$ is available.
Motivated by this observation, I proved in \cite{Cho22} that $\textbf{Step 1}$ is valid on a given linear suspension if, and only if, it has a specific leaf $L_v:=\{zv:z\in B^1\}$ $(v\in \bar{F})$ called an $\textit{algebraically}$ $\textit{nonsparse leaf}$. I also introduced the concept of a $\textit{regular leaf}$ and showed that a suspension is nonpluripolar if, and only if, it has a regular leaf. So the Forelli-type theorem holds on a linear suspension having both kinds of leaves, and such a suspension is called a $\textit{Forelli suspension}$. From this localization of Theorem \ref{Forelli-original}, I was also able to construct a nowhere dense Forelli suspension; see Example 4.3 in \cite{Cho22}.
Recently, I found that the aforementioned notions of leaves can be generalized to the case of suspensions generated by diagonalizable vector fields of aligned type. Let $X$ be one such field on $\mathbb{C}^n$ with eigenvalues $\lambda=(\lambda_1,\dots,\lambda_n)$ and $\Phi^X$ the complex flow map of $X$ in (\ref{flow}). By a $\textit{suspension of integral curves of}\;X$, we mean a pair of the form $(S^X_0(F),\Phi^{X})$. Here,
\[
S^X_0(F):=\{\Phi^{X}(z,t):z\in F,~t\in \mathbb{H}\}
\]
for some nonempty subset $F$ of $S^{2n-1}$ and $\mathbb{H}$ is the right open half-plane in $\mathbb{C}.$ We will identify a suspension with its underlying set. Suppose that the given suspension possesses an algebraically nonsparse leaf
\[
L_{z_0}:=\{\Phi^X(z_0,t):t\in \mathbb{H}\},~z_0\in \bar{F}.
\]
Then it turns out that any function $f:B^n\to \mathbb{C}$ satisfying the two conditions (1) $f\in C^{\infty}(0)$ and (2) $t\in \mathbb{H}\to f(\Phi^X(z,t))$ is holomorphic for each $z\in F$, has a formal Taylor series $S_f$ of holomorphic type. So by Condition (2) and the Cauchy estimate in \cite{KPS09}, $S_f$ can be written as a formal sum of holomorphic $\textit{quasi-homogeneous}$ polynomials of type $\lambda$ (see Definition 1.2 in \cite{Cho22-2}) that converge pointwise on the suspension. If the given suspension has a regular leaf, then the suspension is nonpluripolar, so we can use pluripotential methods to establish the uniform convergence of the polynomials. Among the several methods available, I chose to follow Siciak's ideas \cite{Siciak82} and develop a new capacity theory which seems more suitable for the analysis of quasi-homogeneous polynomials; see p.4 of \cite{Cho22-2}.
For this purpose, the set of $\textit{quasi-homogeneous}$ plurisubharmonic functions on $\mathbb{C}^n$ associated with $X$ was defined in \cite{Cho22-2} as
\[
H_{\lambda}:=\{u\in \textup{PSH}(\mathbb{C}^n): u\geq 0~\text{on}~\mathbb{C}^n, \,u(\Phi^{X}(z,t))=e^{-\textup{Re}\,t}\cdot u(z)\; \forall z\in \mathbb{C}^n, t\in \mathbb{C}\}.
\]
Then the $\lambda\textit{-projective capacity}$ $\rho_{\lambda}$ and the related extremal function $\Psi_{E,\lambda}$ can be defined as in Definition \ref{defofext}. If $X$ is the complex Euler vector field, i.e., $\lambda=(1,\dots,1)$, then the functions reduce to those in Definition \ref{defofext}. Using the methods in \cite{Siciak82} and \cite{KPS09}, one can develop a theory of the new functions and obtain the following localization of the theorem of Kim-Poletsky-Schmalz \cite{KPS09}.
\begin{theorem}[Cho \cite{Cho22-2}]\label{main theorem}
If a suspension $S^X_0(F)$ has a nonsparse leaf and a regular leaf, then it is a Forelli suspension; that is, any function $f:B^n\to \mathbb{C}$ satisfying the following two conditions
\begin{enumerate}
\setlength\itemsep{0.1em}
\item $f\in C^{\infty}(0)$, and
\item $t\in \mathbb{H}\to f\circ \Phi^{X}(z,t)$ is holomorphic for each $z\in F$
\end{enumerate}
is holomorphic on a domain of holomorphy
\[
\Omega:=\{z\in \mathbb{C}^n:\Psi^{\ast}_{S^X_0(F),\lambda}(z)<1\}\supset B^n(0;\{\rho_{\lambda}(S^{X}_0(F))\}^{\textup{max}(\lambda)})
\]
containing the origin. Furthermore, there exists an open neighborhood $U=U(F,X)\subset S^{2n-1}$ of a generator $v_0\in \bar{F}$ of the regular leaf such that $f|_{\Omega}$ extends to a holomorphic function on a domain of holomorphy
\[
\hat{\Omega}:=\{z\in \mathbb{C}^n:\Psi_{\Omega \cup S^X_0(U),\lambda}(z)<1\}\supset \Omega \cup S^X_0(U).
\]
\end{theorem}
The generalization of Lemma \ref{Hartogslemma} by Shiffman \cite{Shiffman89} plays a crucial role in the extension of $f$ along $S^X_0(U)$; see Section 6 in \cite{Cho22-2}. At this point, a few remarks regarding Theorem \ref{main theorem} are in order. As each point of a nonempty open subset $U$ of $S^{2n-1}$ generates a leaf that is both nonsparse and regular, $S^X_0(U)$ is always a Forelli suspension. But a Forelli suspension need not be generated by an open subset of $S^{2n-1}$ in general; one can construct a set $F\subset S^{2n-1}$ such that $S^X_0(F)$ is a nowhere dense Forelli suspension for any $X$. See Example 7.3 in \cite{Cho22-2}. Note also that we required the given suspension to possess both of the two specific leaves in Theorem \ref{main theorem}. Several examples of suspensions in Section 7 of \cite{Cho22-2} indicate that both are indeed necessary to obtain the desired conclusion.
Finally, we remark that it would be interesting to know whether this type of analysis can be carried out when the given suspension is generated by a nondiagonalizable vector field of aligned type or a family of Riemann surfaces pairwise transversal at the origin.
| {
"timestamp": "2022-12-01T02:12:08",
"yymm": "2211",
"arxiv_id": "2211.16874",
"language": "en",
"url": "https://arxiv.org/abs/2211.16874",
"abstract": "We present a survey on recent developments of generalizations of Forelli's analyticity theorem and related pluripotential methods.",
"subjects": "Complex Variables (math.CV)",
"title": "A survey on generalizations of Forelli's theorem and related pluripotential methods"
} |
https://arxiv.org/abs/0811.2365 | Sturm and Sylvester algorithms revisited via tridiagonal determinantal representations | First, we show that Sturm algorithm and Sylvester algorithm, which compute the number of real roots of a given univariate polynomial, lead to two dual tridiagonal determinantal representations of the polynomial. Next, we show that the number of real roots of a polynomial given by a tridiagonal determinantal representation is greater than the signature of this representation. | \section*{Introduction}
There are several methods to count the number of real roots of a univariate polynomial $p(x)\in{\Bbb R}[x]$ of degree $n$ (for details we refer to \cite{BPR}). Among them, the Sturm algorithm says that the number of real roots of $p(x)$ is equal to the number of sign permanences minus the number of sign variations appearing in the leading coefficients of the signed remainders sequence of $p(x)$ and $p'(x)$. \par
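As a concrete illustration of this count (our own sketch, not taken from the paper; it uses numpy's legacy polynomial helpers and assumes the generic case where the degrees drop by exactly one at each step), the following computes the leading coefficients of the classical signed remainders sequence of $p$ and $p'$ and forms permanences minus variations for $p(x)=x^3-x$, which has three real roots:

```python
import numpy as np

def sturm_leading_coeffs(p):
    # signed remainders sequence of p and p' (numpy convention: highest
    # degree first); each next term is minus the polynomial remainder
    seq = [np.array(p, float), np.polyder(np.array(p, float))]
    while seq[-1].size > 1:
        _, r = np.polydiv(seq[-2], seq[-1])
        r = np.trim_zeros(-r, 'f')      # signed remainder
        if r.size == 0:
            break                       # non-generic case: common factor
        seq.append(r)
    return [s[0] for s in seq]

lc = np.sign(sturm_leading_coeffs([1, 0, -1, 0]))          # p(x) = x^3 - x
P = sum(lc[i] == lc[i + 1] for i in range(len(lc) - 1))    # permanences
V = sum(lc[i] != lc[i + 1] for i in range(len(lc) - 1))    # variations
assert P - V == 3     # x^3 - x has exactly 3 real roots
```

Here the sequence is $x^3-x,\;3x^2-1,\;\tfrac{2}{3}x,\;1$ with leading coefficients $1,3,\tfrac{2}{3},1$, giving $P-V=3-0=3$.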
Another method is the Sylvester algorithm, which says that the number of real roots of $p(x)$ is equal to the signature of the symmetric matrix whose $(i,j)$-th entry is the $(i+j)$-th Newton sum of the roots of the polynomial $p(x)$.\par
One purpose of the paper is to point out, at least in the generic situation, that these two classical algorithms can be viewed as dual.\air
In section 1, we introduce signed remainders sequences of two given monic polynomials $p(x)$ and $q(x)$ of respective degrees $n$ and $n-1$. With suitable sign conventions, we give a presentation of this sequence through a tridiagonal matrix ${\mbox{\rm Td}}(p,q)$. Next, we give a decomposition of this tridiagonal matrix as ${\mbox{\rm Td}}(p,q)=LC_p^{T}L^{-1}$ where $L$ is lower triangular and $C_p^T$ is the transpose of the companion matrix associated with $p(x)$.
\par
In section 2, we introduce the duality between the Sturm and Sylvester algorithm, first when the polynomial $p(x)$ has only single and real roots, and then in Theorem \ref{duality} we generalize it to the generic case.\par
More precisely, on one hand we have $$\left\{\begin{array}{lcl}p(x)&=&{\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}_n-{\mbox{\rm Td}}(p,q))\\ q(x)&=&{\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}_{n-1}-{\mbox{\rm Td}}(p,q)_{n-1})\end{array}\right.$$
with the conventions that ${\mbox{\rm\bf Id}}_n$ (or ${\mbox{\rm\bf Id}}$ in short) denotes the identity matrix of ${\Bbb R}^{n\times n}$ and $A_k\in{\Bbb R}^{k\times k}$ (respectively $\overline{A}_k\in{\Bbb R}^{k\times k}$) denotes the {\it $k$-th principal submatrix} (respectively the {\it $k$-th antiprincipal submatrix}) of $A$ which corresponds to extracting the first $k$ (respectively the last $k$) rows and columns in the matrix $A\in{\Bbb R}^{n\times n}$.
On the other hand, we consider a natural Hankel (hence symmetric) matrix $H(q/p)\in{\Bbb R}^{n\times n}$ associated to $p(x)$ and $q(x)$. Generically it admits an LU decomposition of the form $H(q/p)=KJK^T$ where $J$ is a {\it signature matrix} (a diagonal matrix with coefficients $\pm 1$ on the diagonal) and $K$ is lower triangular. Then, we introduce the tridiagonal matrix $\overline{{\mbox{\rm Td}}}=K^{-1}C_p^TK$, which is such that $p(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}_n-\overline{{\mbox{\rm Td}}})$.\par
If we consider that the matrices ${\mbox{\rm Td}}(p,q)$ and $\overline{{\mbox{\rm Td}}}$ represent linear mappings in some basis, then the duality Theorem \ref{duality} means that one matrix can be deduced from the other simply by reversing the ordering of the basis.
\air
We shall mention that, in the case when all the roots of $p(x)$ are real, the existence of a tridiagonal symmetric matrix ${\mbox{\rm Td}}$ given by the signed remainders sequence of $p(x)$ and $q(x)$ together with the identity $p(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}_n-{\mbox{\rm Td}})$ corresponds to the Routh-Lanczos algorithm, which answers a structured Jacobi inverse problem: namely, the problem of finding a real symmetric tridiagonal matrix $A$ with a given characteristic polynomial $p(x)$ and such that the characteristic polynomial of its principal submatrix $A_{n-1}$, of size $n-1$, is proportional to $p'(x)$. We refer to \cite{EP} for a survey on the subject.
\air
In section 3, we focus on the relation of the question of real roots counting and the question of determinantal representation. We say that $p(x)={\;\mbox{\rm det}}(J-xA)$ is a determinantal representation of the polynomial $p(x)$ if $J\in{\Bbb R}^{n\times n}$ is a signature matrix and $A\in{\Bbb R}^{n\times n}$ is a symmetric matrix.\par
Note that we may transform the identity $p(x)={\;\mbox{\rm det}}(J-xA)$ into $p^*(x)={\;\mbox{\rm det}}(xJ-A)$, where $p^*(x)$ is the reciprocal polynomial of $p(x)$. If we write $$p^*(x)={\;\mbox{\rm det}}(J)\times{\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}-AJ),$$ this shows a connection with the results of section 2 when the matrix $AJ$ is tridiagonal. More precisely, we establish that such a determinantal representation always exists: we may even find a family of representations for a given polynomial $p(x)$. We also show that, given such a determinantal representation of a polynomial $p(x)$, its number of real roots is at least equal to the signature of the signature matrix $J$. \par
Finally, in section 4 we end with some worked examples.
\section{Tridiagonal representation of signed remainders sequences}
\subsection{Definitions}\label{definitions}
Let $\alpha=(\alpha_1,\ldots,\alpha_n)$, $\beta=(\beta_1,\ldots,\beta_{n-1})$ and $\gamma=(\gamma_1,\ldots,\gamma_{n-1})$ be three sequences of real numbers. We define the tridiagonal matrix ${\mbox{\rm Td}}(\alpha,\beta,\gamma)$ to be:
$${\mbox{\rm Td}}(\alpha,\beta,\gamma)=\left(\begin{array}{ccccc}
\alpha_n&\gamma_{n-1}&0&\ldots&0\\
\beta_{n-1}&\alpha_{n-1}&\gamma_{n-2}&\ddots&\vdots\\
0&\beta_{n-2}&\ddots&\ddots&0\\
\vdots&\ddots&\ddots&\ddots&\gamma_1\\
0&\ldots&0&\beta_1&\alpha_1\\
\end{array}\right)
$$
Let $p(x)$ and $q(x)$ be two monic polynomials of respective degrees $n$ and $n-1$. We set ${\mbox{\rm SRemS}}(p,q)=(p_k(x))_k$ to be the {\it signed remainders sequence} of $p(x)$ and $q(x)$ defined in the following way :
\air
\begin{equation}\label{definition_prs}\left\{\begin{array}{lll}
p_0(x)&=&p(x)\\
p_1(x)&=&q(x)\\
p_k(x)&=&q_{k+1}(x)p_{k+1}(x)-\epsilon_{k+1}\beta^2_{k+1}p_{k+2}(x)
\end{array}\right.
\end{equation}
where
\begin{equation}\label{convention_prs}\left\{\begin{array}{l}
p_k(x),q_{k+1}(x)\in{\Bbb R}[x],\\
\epsilon_{k+1}\in\{-1,+1\},\\
\beta_{k+1}\; {\rm is\; a\; positive\; real\; number},\\
p_{k+2}(x)\; {\rm is\; monic\; and\;} {\mbox{\rm deg}\,} p_{k+2}<{\mbox{\rm deg}\,} p_{k+1}.
\end{array}\right.
\end{equation}
This is a finite sequence, which stops at the step just before the zero polynomial is reached as remainder.
With these conventions, the signed remainders sequence ${\mbox{\rm SRemS}}(p,q)$ that we obtain is also called the {\it Sturm-Habicht} sequence of $p(x)$ and $q(x)$.\air
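In practice, the recurrence (\ref{definition_prs}) can be run in exact rational arithmetic. The following sketch (an illustration, not part of the construction above) represents a polynomial by its list of coefficients in increasing degree and extracts the data $(\alpha_k,\epsilon_k,\beta_k^2)$, checked on the Sturm pair $p(x)=x^3-3x$, $q(x)=p'(x)/3=x^2-1$:

```python
from fractions import Fraction

def polydiv(a, b):
    # Euclidean division of coefficient lists (lowest degree first): a = quo*b + rem
    a = [Fraction(c) for c in a]
    quo = [Fraction(0)] * (len(a) - len(b) + 1)
    for i in range(len(a) - len(b), -1, -1):
        c = a[i + len(b) - 1] / b[-1]
        quo[i] = c
        for j, bj in enumerate(b):
            a[i + j] -= c * bj
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return quo, a

def srems(p, q):
    # p_k = (x - alpha_{k+1}) p_{k+1} - eps_{k+1} beta_{k+1}^2 p_{k+2}, all p_k monic
    seq, alphas, eps, beta_sq = [p, q], [], [], []
    while True:
        quo, rem = polydiv(seq[-2], seq[-1])
        alphas.append(-quo[0])          # the quotient is monic of degree one: x - alpha
        if rem == [0]:                  # stop just before the zero remainder
            return seq, alphas, eps, beta_sq
        c = -rem[-1]                    # eps * beta^2 is the leading coefficient of -rem
        eps.append(1 if c > 0 else -1)
        beta_sq.append(abs(c))
        seq.append([ri / -c for ri in rem])   # monic p_{k+2}

# p = x^3 - 3x, q = x^2 - 1: the sequence is (x^3 - 3x, x^2 - 1, x, 1)
seq, alphas, eps, beta_sq = srems([0, -3, 0, 1], [-1, 0, 1])
assert seq == [[0, -3, 0, 1], [-1, 0, 1], [0, 1], [1]]
assert alphas == [0, 0, 0] and eps == [1, 1] and beta_sq == [2, 1]
```

On this example $\alpha=(0,0,0)$, $\epsilon=(1,1)$ and $\beta^2=(2,1)$, so ${\mbox{\rm SRemS}}(p,q)$ has no degree breakdown.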
Let us assume that there is no degree breakdown in ${\mbox{\rm SRemS}}(p,q)$. Namely :
\begin{equation}\label{NoBreak}
(\forall k\in\{0,\ldots,n\})\; ({\mbox{\rm deg}\,} p_k=n-k)
\end{equation}
Then, $q_{k+1}(x)$ is a degree one polynomial which we write $q_{k+1}(x)=(x-\alpha_{k+1})$ with $\alpha_{k+1}\in{\Bbb R}$. Another consequence is that $\gcd(p,q)=1$.\air
Let $\gamma_{k+1}=\epsilon_{k+1}\beta_{k+1}$ and consider the following tridiagonal matrix :
$${\mbox{\rm Td}}(p,q)={\mbox{\rm Td}}(\alpha,\beta,\gamma)$$
We may read off from this matrix all the information about the signed remainders sequence ${\mbox{\rm SRemS}}(p,q)$. \air
For a given tridiagonal matrix ${\mbox{\rm Td}}={\mbox{\rm Td}}(\alpha,\beta,\gamma)\in{\Bbb R}^{n\times n}$, we define the {\it first principal lower diagonal} (respectively the {\it first principal upper diagonal}) of ${\mbox{\rm Td}}$ to be the sequence $\beta=(\beta_1,\ldots,\beta_{n-1})$ (respectively $\gamma=(\gamma_1,\ldots,\gamma_{n-1})$). We will say that these first principal diagonals are {\it non-singular} if all the coefficients $\beta_i$ (respectively $\gamma_i$) are different from zero.\par Note that the no degree breakdown assumption (\ref{NoBreak}) implies that the principal diagonals of ${\mbox{\rm Td}}(p,q)$ are non-singular. \par
\begin{prop}\label{prs_matrice_tridiag}
\begin{enumerate}
\item[(i)] To any tridiagonal matrix ${\mbox{\rm Td}}={\mbox{\rm Td}}(\alpha,\beta,\gamma)$ with non-singular principal diagonals, we may canonically associate a (unique) pair of monic polynomials $p(x)$ and $q(x)$ of respective degrees $n$ and $n-1$ such that the sequence ${\mbox{\rm SRemS}}(p,q)$ has no degree breakdown and the characteristic polynomial of ${\mbox{\rm Td}}_k$ is equal to $p_{n-k}(x)$: $${\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}_k-{\mbox{\rm Td}}_k)=p_{n-k}(x).$$
\item[(ii)] To any pair of monic polynomials $p(x)$ and $q(x)$ of respective degrees $n$ and $n-1$ such that ${\mbox{\rm SRemS}}(p,q)$ has no degree breakdown, we may associate a unique tridiagonal matrix with non-singular principal diagonals ${\mbox{\rm Td}}(p,q)={\mbox{\rm Td}}(\alpha,\beta,\gamma)$ satisfying, for all $k$, $\beta_k>0$ and $\gamma_k=\epsilon_k\beta_k$ where $\epsilon_k=\pm 1$.\par
\item[(iii)]
When we have $(i)$ and $(ii)$, the matrix ${\mbox{\rm Td}}(p,q)\times P$ is tridiagonal and symmetric, where we have set $$P=\left(\begin{array}{rccc}
\epsilon_{n-1}\times\ldots\times\epsilon_1&&&\\
\ddots&&&\\
&\epsilon_2\times\epsilon_1&&\\
&&\epsilon_1&\\
&&&1
\end{array}\right).$$
\item[(iv)]
When we have $(i)$ and $(ii)$, the sequence of signs in the leading coefficients of the signed remainders sequence ${\mbox{\rm SRemS}}(p,q)$ is :
$$(1,1,\epsilon_1,\epsilon_2,\epsilon_1\times\epsilon_3,\epsilon_2\times\epsilon_4,\epsilon_1\times\epsilon_3\times\epsilon_5,\ldots,\epsilon_{n-1{\;\mbox{\rm mod}}\; 2}\times\ldots\times\epsilon_{n-3}\times\epsilon_{n-1})$$
\end{enumerate}
\end{prop}
\begin{proof}
Concerning $(i)$, the polynomials $p(x)$ and $q(x)$ are taken to be $p(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}_n-{\mbox{\rm Td}})$ and $q(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}_{n-1}-{\mbox{\rm Td}}_{n-1})$. Then, we set for all $k$, $$\delta_{n-k}(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}_k-{\mbox{\rm Td}}_k)$$ (where ${\mbox{\rm Td}}_k$ is the $k$-th principal submatrix of ${\mbox{\rm Td}}$) and we develop the determinant $$\delta_0(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}_n-{\mbox{\rm Td}})$$ with respect to the last row. We get
$$\delta_0(x)=(x-\alpha_1)\delta_1(x)-(\beta_1\gamma_1)\delta_2(x)$$
Repeating the process, we obtain the same recurrence relation as the one defining the sequence $(p_k(x))_k$ in (\ref{definition_prs}). Since $\delta_{0}(x)=p_{0}(x)$ and $\delta_{1}(x)=p_{1}(x)$, we get the wanted identity.\air
Point $(ii)$ follows straightforwardly from the beginning of the section, whereas points $(iii)$ and $(iv)$ follow from elementary computations. \end{proof}
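The determinant expansion used in the proof of $(i)$ can be checked mechanically. The following illustrative sketch implements the three-term recurrence for $\delta_{n-k}(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}_k-{\mbox{\rm Td}}_k)$ and recovers the signed remainders of the example $p=x^3-3x$, $q=x^2-1$, whose data (computed by hand) are $\alpha=(0,0,0)$, $\epsilon=(1,1)$, $\beta^2=(2,1)$:

```python
def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polysub(a, b):
    out = list(a) + [0] * (len(b) - len(a))
    for i, bi in enumerate(b):
        out[i] -= bi
    return out

def principal_charpolys(alphas, eps, beta_sq):
    # d[k] = det(x Id_k - Td_k), expanded along the last row of Td_k:
    # d[k] = (x - alpha_{n-k+1}) d[k-1] - eps_{n-k+1} beta_{n-k+1}^2 d[k-2]
    n = len(alphas)
    d = [[1], [-alphas[n - 1], 1]]
    for k in range(2, n + 1):
        c = eps[n - k] * beta_sq[n - k]
        d.append(polysub(polymul([-alphas[n - k], 1], d[k - 1]),
                         [c * x for x in d[k - 2]]))
    return d

# recovers p_3, p_2, p_1, p_0 = 1, x, x^2 - 1, x^3 - 3x in this order
assert principal_charpolys([0, 0, 0], [1, 1], [2, 1]) == \
    [[1], [0, 1], [-1, 0, 1], [0, -3, 0, 1]]
```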
We may note that to the tridiagonal matrix ${\mbox{\rm Td}}(p,q)$, we may also associate another natural polynomial remainder sequence: $\overline{{\mbox{\rm SRemS}}(p,q)}={\mbox{\rm SRemS}}(p,\bar{q})$ where $$p(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}_n-{\mbox{\rm Td}})$$ and $$\bar{q}(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}_{n-1}-\overline{{\mbox{\rm Td}}}_{n-1}),$$ with the convention that $\overline{{\mbox{\rm Td}}}_k$ is the $k$-th antiprincipal submatrix of ${\mbox{\rm Td}}$.
The signed remainders sequence $\overline{{\mbox{\rm SRemS}}(p,q)}$ will be considered as the {\it dual} signed remainders sequence of ${\mbox{\rm SRemS}}(p,q)$.
This only means that we may read off a tridiagonal matrix starting from the top left rather than from the bottom right!
\air
For cosmetic reasons, we will write $\overline{{\mbox{\rm Td}}(p,q)}$ in place of ${\mbox{\rm Td}}(p,\bar{q})$. We obviously have:
\begin{equation}\label{dual}\overline{{\mbox{\rm Td}}(p,q)}={\mbox{\rm\bf Ad}}\times{\mbox{\rm Td}}(p,q)\times{\mbox{\rm\bf Ad}}\end{equation}
where ${\mbox{\rm\bf Ad}}_n\in{\Bbb R}^{n\times n}$ (${\mbox{\rm\bf Ad}}$ in short) stands for the anti-identity matrix of size $n$:
$${\mbox{\rm\bf Ad}}_n=\left(\begin{array}{cccc}
0&\ldots&0&1\\
\vdots&\adots&\adots&0\\
0&\adots&\adots&\vdots\\
1&0&\ldots&0\\
\end{array}\right)$$
\subsection{Companion matrix}\label{Companion}
We denote by $A^T$ the transpose of the matrix $A\in{\Bbb R}^{n\times n}$, and we define the {\it companion matrix} of the polynomial $p(x)=x^n+a_{n-1}x^{n-1}+\ldots+a_0$ to be
$$C_p=
\left(\begin{array}{ccccc}
0&\ldots&\ldots&0&-a_0\\
1&\ddots&&\vdots&-a_1\\
0&\ddots&\ddots&\vdots&\vdots\\
\vdots&\ddots&\ddots&0&-a_{n-2}\\
0&\ldots&0&1&-a_{n-1}
\end{array}\right)
$$
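The companion matrix is the matrix of multiplication by $x$ modulo $p(x)$ in the canonical basis, a fact used repeatedly below. A minimal illustrative sketch:

```python
def companion(a):
    # companion matrix of p = x^n + a[n-1]x^{n-1} + ... + a[0]
    n = len(a)
    C = [[0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1              # sub-diagonal of ones
    for i in range(n):
        C[i][n - 1] = -a[i]          # last column: -a_0, ..., -a_{n-1}
    return C

def apply(C, v):
    # image of the coordinate vector v, i.e. multiplication by x modulo p
    return [sum(C[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

# p = x^3 - 3x: multiplying x^2 by x gives x^3 = 3x modulo p
assert apply(companion([0, -3, 0]), [0, 0, 1]) == [0, 3, 0]
```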
We recall a well-known identity (see for instance \cite{EP}):
\begin{prop}\label{prs_tridiag} Let $p(x)$ and $q(x)$ be two monic polynomials of respective degrees $n$ and $n-1$ such that ${\mbox{\rm SRemS}}(p,q)$ has no degree breakdown. \par
Then, there is a lower triangular matrix $L$ such that \begin{equation}{\mbox{\rm Td}}(p,q)=LC_p^TL^{-1}\end{equation}
\end{prop}
\begin{proof}
With the notation of Subsection \ref{definitions}, let ${\cal P}(x)=\left(\gamma_1\ldots\gamma_{n-1}p_n(x),\ldots,\gamma_1p_2(x),p_1(x)\right)$.
A direct computation gives
$${\cal P}(x)\left({\mbox{\rm Td}}(p,q)\right)^T=x{\cal P}(x)+\left(0,\ldots,0,-p(x)\right)$$
Let $U$ be the upper triangular matrix whose columns are the coefficients of the polynomials of ${\cal P}(x)$ in the canonical basis ${\cal C}(x)=(1,x,\ldots,x^{n-1})$.
In other words :
$${\cal C}(x)U={\cal P}(x)$$
Besides, we have $${\cal C}(x)C_p=x{\cal C}(x)+(0,\ldots,0,-p(x))$$
Thus
$$\begin{array}{rclcc}
{\cal C}(x)C_pU&=&x{\cal C}(x)U+(0,\ldots,0,-p(x))&{\rm since}\; p_1(x) \;{\rm is\; monic}\\
&=&{\cal P}(x)\left({\mbox{\rm Td}}(p,q)\right)^T&\\
&=&{\cal C}(x)U\left({\mbox{\rm Td}}(p,q)\right)^T&\\
\end{array}$$
We deduce the identity
$$V(x_1,\ldots,x_n)C_pU=V(x_1,\ldots,x_n)U\left({\mbox{\rm Td}}(p,q)\right)^T$$
for any Vandermonde matrix $V(x_1,\ldots,x_n)$ whose rows are $(1,x_i,\ldots,x_i^{n-1})$ for $i=1\ldots n$. If we choose $x_1,\ldots,x_n$ to be pairwise distinct real numbers, then $V(x_1,\ldots,x_n)$ is invertible and we get:
$${\mbox{\rm Td}}(p,q)=LC_p^TL^{-1}$$
where $L$ is the lower triangular matrix defined by $L= U^T$.
\end{proof}
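The identity ${\mbox{\rm Td}}(p,q)=LC_p^TL^{-1}$, equivalently ${\mbox{\rm Td}}(p,q)\,L=L\,C_p^T$, can be verified numerically. In the sketch below (an illustration only), all matrices were derived by hand from the signed remainders sequence of $p=x^3-3x$ and $q=x^2-1$, for which $\gamma_1=\beta_1=\sqrt 2$ and $\gamma_2=\beta_2=1$:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

s2 = math.sqrt(2.0)
Td  = [[0, 1, 0], [1, 0, s2], [0, s2, 0]]   # Td(p, q) for p = x^3 - 3x, q = x^2 - 1
CpT = [[0, 1, 0], [0, 0, 1], [0, 3, 0]]     # transpose of the companion matrix of p
# rows of L = U^T: coefficients of gamma_1*gamma_2*p_3, gamma_1*p_2, p_1
L   = [[s2, 0, 0], [0, s2, 0], [-1, 0, 1]]
lhs, rhs = matmul(Td, L), matmul(L, CpT)    # Td = L Cp^T L^{-1}  <=>  Td L = L Cp^T
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(3) for j in range(3))
```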
The following result says that such a decomposition exists for any tridiagonal matrix with non-singular principal diagonals, and that it is essentially unique:
\begin{prop}\label{commutant}
Any tridiagonal matrix ${\mbox{\rm Td}}$ with non-singular principal diagonals can be written ${\mbox{\rm Td}}=LC_p^TL^{-1}$ where $p(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}-{\mbox{\rm Td}})$ and $L$ is a lower triangular matrix. Moreover the matrix $L$ is unique up to a multiplication by a real number.
\end{prop}
\begin{proof}
The existence is given by Proposition \ref{prs_matrice_tridiag} and Proposition \ref{prs_tridiag}.\par
We now come to the uniqueness.
Assume that $L_1C_p^TL_1^{-1}=L_2C_p^TL_2^{-1}$ where $L_1$ and $L_2$ are lower triangular. Then, $L=L_2^{-1}L_1$ is a lower triangular matrix which commutes with $C_p^T$.
\par
If $L=(t_{i,j})_{1\leq i,j\leq n}$, then
$$LC_p^T=\left(\begin{array}{ccccc}
0&t_{1,1}&0&\ldots&0\\
\vdots&t_{2,1}&t_{2,2}&\ddots&\vdots\\
\vdots&\vdots&&\ddots&0\\
0&t_{n-1,1}&\ldots&\ldots&t_{n-1,n-1}\\
?&\ldots&\ldots&\ldots&?
\end{array}\right)$$
and
$$C_p^TL=\left(\begin{array}{ccccc}
t_{2,1}&t_{2,2}&0&\ldots&0\\
t_{3,1}&t_{3,2}&t_{3,3}&\ddots&\vdots\\
\vdots&&\ddots&\ddots&0\\
t_{n,1}&\ldots&\ldots&t_{n,n-1}&t_{n,n}\\
?&\ldots&\ldots&\ldots&?
\end{array}\right)$$
\air
Thus $t_{1,1}=t_{2,2}=\ldots=t_{n,n}$ and $t_{2,1}=t_{3,2}=\ldots=t_{n,n-1}=0$ and $t_{3,1}=t_{4,2}=\ldots=t_{n,n-2}=0$, and so on until $t_{n,1}=0$. We deduce that $L=\lambda {\mbox{\rm\bf Id}}$ and we are done.
\end{proof}
\subsection{Sturm algorithm}
As a particularly important case of signed remainders sequences, we shall mention the Sturm sequence which is ${\mbox{\rm SRemS}}(p,q)$ where $q$ is taken to be the derivative of the polynomial $p(x)$ up to normalization, i.e. $q=p'/{\mbox{\rm deg}\,}(p)$. \par
For a given finite sequence $\nu=(\nu_1,\ldots,\nu_k)$ of elements in $\{-1,+1\}$, we recall the {\it Permanence minus Variations} number :
$${\mbox{\rm PmV}}(\nu_1,\ldots,\nu_k)=\sum_{i=1}^{k-1}\nu_i\nu_{i+1}.$$
Here the sequence $\nu$ will be the sequence of signs of the leading coefficients in ${\mbox{\rm SRemS}}(p,q)$. Then, the Sturm Theorem \cite[Theorem 2.50]{BPR} says that the number ${\mbox{\rm PmV}}(\nu)$ is exactly the number of real roots of $p(x)$. \par
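For instance (an illustrative computation; the two Sturm sequences below were obtained by hand):

```python
def pmv(signs):
    # Permanence minus Variations of a sequence of signs
    return sum(signs[i] * signs[i + 1] for i in range(len(signs) - 1))

# Sturm sequence of p = x^3 - 3x and q = p'/3 = x^2 - 1:
#   (x^3 - 3x, x^2 - 1, x, 1), leading signs (1, 1, 1, 1)
assert pmv([1, 1, 1, 1]) == 3      # three real roots: -sqrt(3), 0, sqrt(3)

# Sturm sequence of p = x^2 + 1: (x^2 + 1, x, -1), leading signs (1, 1, -1)
assert pmv([1, 1, -1]) == 0        # no real roots
```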
If we assume that the polynomial $p(x)$ has $n$ distinct real roots, then the Sturm sequence has no degree breakdown and for all $k$ we have $\nu_k=1$. Hence we get a {\it symmetric} tridiagonal matrix ${\mbox{\rm Td}}(p,q)$ which has the decomposition
${\mbox{\rm Td}}(p,q)=LC_p^TL^{-1}$ where $L$ is the lower triangular matrix defined as in subsection \ref{Companion}. In particular, the last row of $L$ gives the list of coefficients of the polynomial $q(x)$ in the canonical basis.
\section{Duality between Sturm and Sylvester algorithms}
\subsection{Sylvester algorithm}\label{Sylvester}
Let us introduce the symmetric matrix ${\rm Newt}_p(n)=(n_{i,j})_{0\leq i,j\leq n-1}$ defined by $$n_{i,j}={\mbox{\rm Trace}}\;(C_p^{i+j})=N_{i+j},$$ which is nothing but the $(i+j)$-th Newton sum of the polynomial $p(x)$. To be more explicit, if $\alpha_1,\ldots,\alpha_n$ denote the complex roots of the polynomial $p(x)$, then the $k$-th Newton sum is the real number $N_k=\alpha_1^k+\ldots+\alpha_n^k$.
\air
Recall that the {\it signature} ${\mbox{\rm sgn}}(A)$ of a real symmetric matrix $A\in{\Bbb R}^{n\times n}$ is defined to be the number $n_+-n_-$, where $n_+$ is the number of positive eigenvalues of $A$ (counted with multiplicity) and $n_-$ the number of negative eigenvalues of $A$ (counted with multiplicity).
The Sylvester Theorem (later generalized by Hermite; see \cite[Theorem 4.57]{BPR}) says that the matrix ${\rm Newt}_p(n)$ is invertible if and only if $p(x)$ has only simple roots, and also that ${\mbox{\rm sgn}}({\rm Newt}_p(n))$ is exactly the number of distinct real roots of $p(x)$. \par
In particular, if the polynomial $p(x)$ has $n$ distinct real roots, then the matrix ${\rm Newt}_p(n)$ is positive definite.
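As an illustration, the Newton sums can be computed from the coefficients alone via the Newton identities, and the signature of ${\rm Newt}_p(n)$ read off from its leading principal minors (Jacobi's rule, valid when all these minors are nonzero), without ever computing the roots. A sketch in exact arithmetic:

```python
from fractions import Fraction

def newton_sums(a, m):
    # Newton identities for p = x^n + a[n-1]x^{n-1} + ... + a[0]; no roots needed
    n = len(a)
    N = [Fraction(n)]
    for k in range(1, m):
        s = -sum((Fraction(a[n - j]) * N[k - j] for j in range(1, min(k, n) + 1)),
                 Fraction(0))
        if k <= n:
            s += (n - k) * a[n - k]
        N.append(s)
    return N

def det(M):
    # exact determinant by Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if M[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

N = newton_sums([0, -3, 0], 5)                    # p = x^3 - 3x
assert N == [3, 0, 6, 0, 18]
newt = [[N[i + j] for j in range(3)] for i in range(3)]
minors = [det([row[:k] for row in newt[:k]]) for k in range(1, 4)]
assert minors == [3, 18, 108]       # all positive: signature 3 = 3 distinct real roots
```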
Thus, by the Cholesky decomposition algorithm, we can find a lower triangular matrix $K$ such that ${\rm Newt}_p(n)=K K^T$. Let us show how to exploit this decomposition.\air
First, we write $$p(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}-C_p^T)$$
Then, we introduce a useful identity (which will be discussed in more detail in the next section):
$${\rm Newt}_p(n)C_p=C_p^T{\rm Newt}_p(n).$$
So, we get:
$$p(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}-K^{-1}C_p^TK)$$
Note that the matrix $K^{-1}C_p^TK$ is tridiagonal. Our purpose in the following is to establish a connection with the identity $$p(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}-LC_p^TL^{-1})$$
obtained in Proposition \ref{commutant}.\par
More generally, we will point out a connection between tridiagonal representations associated to signed remainders sequences on the one hand, and tridiagonal representations derived from decompositions of some Hankel matrices on the other hand.
\subsection{Hankel matrices and the intertwining relation}
Roughly speaking, the idea of the previous section is to start with the canonical companion identity $$p(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}-C_p^T)$$ and then to use a symmetric invertible matrix $H$ satisfying the so-called {\it intertwining relation}
\begin{equation}\label{Intertwinning} HC_p=C_p^TH\end{equation}
Since $H$ is assumed to be symmetric and invertible, Equation (\ref{Intertwinning}) simply says that the matrix $HC_p$ is symmetric. It is a classical and elementary result that a matrix $H$ satisfying Equation (\ref{Intertwinning}) is necessarily a Hankel matrix.
\begin{defn}
We say that the matrix $H=(h_{i,j})_{0\leq i,j\leq n-1}\in{\Bbb R}^{n\times n}$ is a Hankel matrix if $h_{i,j}=h_{i',j'}$ whenever $i+j=i'+j'$. It then makes sense to introduce the real numbers $a_{i+j}=h_{i,j}$, which allows us to write in short $H=(a_{i+j})_{0\leq i,j\leq n-1}$.
\air
Let $s=(s_k)$ be a sequence of real numbers. We denote by $H_n(s)$ or by $H(s_0,\ldots,s_{2n-2})$ the following Hankel matrix of ${\Bbb R}^{n\times n}$ :
$$H_n(s)=(s_{i+j})_{0\leq i,j\leq n-1}=\left(\begin{array}{cccc}
s_0&s_1&\ldots&s_{n-1}\\
s_1&&\adots&s_{n}\\
\vdots&\adots&\adots&\vdots\\
s_{n-1}&s_{n}&\ldots&s_{2n-2}\\
\end{array}\right)$$
\air
\end{defn}
We get from \cite[Theorem 9.17]{BPR} :
\begin{prop}\label{Hankel} Let $p(x)=x^n+a_{n-1}x^{n-1}+\ldots+a_0$ and $s=(s_k)$ be a sequence of real numbers. The following assertions are equivalent
\begin{enumerate}
\item[(i)]
$(\forall k\geq n)\; (s_k=-a_{n-1}s_{k-1}-\ldots-a_0s_{k-n})$
\item[(ii)]
There is a polynomial $q(x)$ of degree ${\mbox{\rm deg}\,} q<{\mbox{\rm deg}\,} p$ such that $$\frac{q(x)}{p(x)}=\sum_{j=0}^\infty\frac{s_j}{x^{j+1}}$$
\item[(iii)]
There is an integer $r\leq n$ such that ${\;\mbox{\rm det}}(H_r(s))\not =0$, and for all $k>r$, ${\;\mbox{\rm det}}(H_k(s))=0$.
\end{enumerate}
Whenever these conditions are fulfilled, we denote by $H_n(q/p)$ the Hankel matrix $H_n(s)$.
\end{prop}
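The coefficients $s_j$ of the expansion in $(ii)$ can be computed from the coefficients of $p(x)$ and $q(x)$ by comparing powers of $x$. An illustrative sketch (for $p=x^3-3x$ and $q=x^2-1$, where $s_j=N_j/3$):

```python
from fractions import Fraction

def series_coeffs(q, p, m):
    # q(x)/p(x) = sum_{j>=0} s_j / x^{j+1}; coefficient lists, lowest degree first
    n = len(p) - 1
    a = [Fraction(c) for c in p]                          # p monic: a[n] == 1
    b = [Fraction(c) for c in q] + [Fraction(0)] * (n - len(q))
    s = []
    for k in range(m):
        t = b[n - 1 - k] if k < n else Fraction(0)
        for j in range(max(0, k - n), k):
            t -= a[n - k + j] * s[j]
        s.append(t)
    return s

s = series_coeffs([-1, 0, 1], [0, -3, 0, 1], 5)
assert s == [1, 0, 2, 0, 6]
# the tail satisfies the linear recurrence of assertion (i):
assert s[3] == -(0 * s[2] + (-3) * s[1] + 0 * s[0])
assert s[4] == -(0 * s[3] + (-3) * s[2] + 0 * s[1])
```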
Coming back to the intertwining relation (\ref{Intertwinning}): it is immediate that a Hankel matrix $H$ is a solution if and only if the (finite) sequence $(s_0,\ldots,s_{2n-2})$ satisfies the linear recurrence relation of Proposition \ref{Hankel}$(i)$ for $k=n,\ldots,2n-2$.\par
For further details and developments about the intertwining relation, we refer to \cite{HV}.\air
The vector subspace of Hankel matrices in ${\Bbb R}^{n\times n}$ satisfying relation (\ref{Intertwinning}) has dimension $n$, and contains a remarkable element: the Hankel matrix ${\rm Newt}_p(n)$ that was considered in Subsection \ref{Sylvester} on the Sylvester algorithm.
Indeed, it is a well-known and elementary fact that the $N_k$'s are real numbers which satisfy the Newton identities:
$$(\forall k\geq n)\;\left(N_k+a_{n-1}N_{k-1}+\ldots+a_0N_{k-n}=0\right)$$
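On the running example $p=x^3-3x$, the intertwining relation for ${\rm Newt}_p(3)$ can be checked directly (an illustration computed by hand):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

H   = [[3, 0, 6], [0, 6, 0], [6, 0, 18]]     # Newt_p(3) for p = x^3 - 3x
Cp  = [[0, 0, 0], [1, 0, 3], [0, 1, 0]]      # companion matrix of p
CpT = [list(r) for r in zip(*Cp)]
assert matmul(H, Cp) == matmul(CpT, H)       # H Cp = Cp^T H, i.e. H Cp is symmetric
```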
\air
\subsection{Barnett formula}
First, recall that if $p(x)=x^n+a_{n-1}x^{n-1}+\ldots+a_0$ and $q(x)$ is a (not necessarily monic) polynomial in ${\Bbb R}[x]$ of degree $n-1$, the Bezoutian of $p(x)$ and $q(x)$ is defined as the two-variable polynomial:
$${\mbox{\rm Bez}} (p,q)=\frac{q(y)p(x)-q(x)p(y)}{x-y}\in{\Bbb R}[x,y]$$
Let ${\cal B}(z)$ be any basis of the $n$-dimensional vector space ${\Bbb R}[z]/p(z)$ over ${\Bbb R}$. We denote by ${\mbox{\rm Bez}}_{\cal B}(p,q)$ the symmetric matrix of the coefficients of ${\mbox{\rm Bez}}(p,q)$
with respect to the basis ${\cal B}(x)$ and ${\cal B}(y)$.\par
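The Bezoutian can be expanded effectively by synthetic division of the numerator by $x-y$. The following illustrative sketch computes ${\mbox{\rm Bez}}_{\cal C}(p,q)$ for $p=x^3-3x$ and $q=x^2-1$ and checks that the resulting matrix is symmetric:

```python
def polyadd(a, b):
    out = [0] * max(len(a), len(b))
    for i, ai in enumerate(a):
        out[i] += ai
    for i, bi in enumerate(b):
        out[i] += bi
    return out

def bezout_matrix(p, q):
    # Bez(p,q) = (q(y)p(x) - q(x)p(y)) / (x - y) in the canonical bases
    n = len(p) - 1
    qq = list(q) + [0] * (len(p) - len(q))
    # coefficient of x^i in the numerator, as a polynomial in y:
    Nx = [polyadd([p[i] * c for c in q], [-qq[i] * c for c in p])
          for i in range(n + 1)]
    # synthetic division by (x - y): c_{i-1} = N_i + y * c_i
    rows, carry = [], [0]
    for i in range(n, 0, -1):
        carry = polyadd(Nx[i], [0] + carry)   # [0] + carry: multiplication by y
        rows.append(carry)
    rows.reverse()
    return [[r[j] if j < len(r) else 0 for j in range(n)] for r in rows]

B = bezout_matrix([0, -3, 0, 1], [-1, 0, 1])     # p = x^3 - 3x, q = x^2 - 1
assert B == [[3, 0, -1], [0, 2, 0], [-1, 0, 1]]
assert all(B[i][j] == B[j][i] for i in range(3) for j in range(3))   # symmetric
```

Note that the leading principal minors of $B$ are $3$, $6$, $4$, all positive, in agreement with the fact that $p$ has three real roots.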
Among all the bases of ${\Bbb R}[z]/p(z)$, the following will be of interest in the sequel: the canonical basis ${\cal C}=(1,z,\ldots,z^{n-1})$ and the (degree-decreasing) Horner basis ${\cal H}(z)=(h_0,\ldots,h_{n-1})$ associated to the polynomial $p(z)$, which is defined by:
$$\left\{\begin{array}{lcl}
h_{0}(z)&=&z^{n-1}+a_{n-1}z^{n-2}+\ldots+a_{1}\\
&\vdots&\\
h_{i}(z)&=&z^{n-1-i}+a_{n-1}z^{n-2-i}+\ldots+a_{i+1}=zh_{i+1}(z)+a_{i+1}\\
&\vdots&\\
h_{n-2}(z)&=&z+a_{n-1}\\
h_{n-1}(z)&=&1
\end{array}\right.$$
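The recursive definition translates directly into a short computation (an illustration):

```python
def horner_basis(a):
    # p = x^n + a[n-1]x^{n-1} + ... + a[0]; h_{n-1} = 1 and h_i = z*h_{i+1} + a_{i+1}
    H = [[1]]
    for i in range(len(a) - 2, -1, -1):
        H.insert(0, [a[i + 1]] + H[0])   # shift = multiply by z, then add a_{i+1}
    return H                             # H[i] = coefficients of h_i, lowest degree first

# p = x^3 - 3x: the Horner basis is (z^2 - 3, z, 1)
assert horner_basis([0, -3, 0]) == [[-3, 0, 1], [0, 1], [1]]
```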
\air
We recall from \cite[Proposition 9.20]{BPR} :
\begin{prop}\label{BPR9.20} Let $p(x)$ and $q(x)$ be two polynomials such that ${\mbox{\rm deg}\,} q<{\mbox{\rm deg}\,} p=n$. Let $s$ be the sequence of real numbers defined by $$\frac{q(x)}{p(x)}=\sum_{j=0}^\infty\frac{s_j}{x^{j+1}}$$
Then, ${\mbox{\rm Bez}}_{\cal H}(p,q)=H_n(s)=H_n(q/p)$.
\end{prop}
\air
\air We come to a central proposition which is a consequence of the Barnett formula \cite{Ba}.
\begin{prop}\label{Barnett} Let $p(x)$ and $q(x)$ be two polynomials such that ${\mbox{\rm deg}\,} q<{\mbox{\rm deg}\,} p=n$ and let $P_{\cal C H}$ be the change of basis matrix from the canonical basis ${\cal C}$ to the Horner basis ${\cal H}$. We have
$$q(C_p)=P_{{\cal CH}}^T\times H_n(q/p)$$
\end{prop}
\begin{proof}
The Barnett formula has been established in \cite{Ba} using direct matrix computations. For the convenience of the reader, we give here another proof (which may be found at various places in the literature).
The obvious identity
$$q(y)(p(x)-p(y))=q(y)p(x)-p(y)q(x)+p(y)(q(x)-q(y))$$
implies, by definition of the Bezoutian ${\mbox{\rm Bez}}(p,q)$, that:
$$q(y)\frac{p(x)-p(y)}{x-y}\equiv {\mbox{\rm Bez}}(p,q){\;\mbox{\rm mod}}\; p(y)$$
Noticing that $\frac{p(x)-p(y)}{x-y}=\sum_{j=0}^{n-1}h_j(y)x^j$, we get
$$q(y)\sum_{j=0}^{n-1}h_j(y)x^j\equiv {\cal C}(y) {\mbox{\rm Bez}}_{\cal C}(p,q){\cal C}(x)^T{\;\mbox{\rm mod}}\; p(y)$$
In other words, if we denote by $M$ the matrix whose columns are the coefficients of $q(y)h_j(y){\;\mbox{\rm mod}}\; p(y)$ in the basis ${\cal C}(y)$, we get the identity
$${\cal C}(y) M {\cal C}(x)^T \equiv {\cal C}(y) {\mbox{\rm Bez}}_{\cal C}(p,q){\cal C}(x)^T{\;\mbox{\rm mod}}\; p(y)$$
Since $C_p$ is the matrix of the multiplication by $y{\;\mbox{\rm mod}}\; p(y)$ with respect to the canonical basis ${\cal C}(y)$, we have also the identity
$$M=q(C_p)P_{\cal C H}$$
where the change of basis matrix $P_{\cal C H}$ is in fact the following Hankel matrix $$P_{\cal C H}=H(a_0,a_1,\ldots,a_{n-1},1,0,\ldots,0)\in{\Bbb R}^{n\times n}$$
with the usual notation $p(x)=x^n+a_{n-1}x^{n-1}+\ldots+a_0$.
Hence, we get the Barnett Formula
$${\mbox{\rm Bez}}_{\cal C}(p,q)=q(C_p)P_{\cal C H}$$
Finally, by Proposition \ref{BPR9.20}, we derive the wanted relation :
$${\mbox{\rm Bez}}_{\cal C}(p,q)=P_{\cal C H}^T{\mbox{\rm Bez}}_{\cal H}(p,q)P_{\cal C H}=P_{\cal C H}^TH_n(q/p)P_{\cal C H}$$
This concludes the proof.
\end{proof}
To end the section, we show how the Sturm and Sylvester algorithms can be considered as dual to each other, in the case where all the roots of $p(x)$ are real and simple, say $x_1<\ldots<x_n$. Then, $q(x)=p'(x)/n$ also has $n-1$ simple real roots $y_1<\ldots<y_{n-1}$, which interlace those of $p(x)$. Namely,
$$x_1<y_1<x_2<y_2<\ldots<y_{n-1}<x_n$$
We may repeat the argument to see that this interlacing property of real roots remains for any two consecutive polynomials $p_k(x)$ and $p_{k+1}(x)$ of the sequence ${\mbox{\rm SRemS}}(p,q)$.
In particular, ${\mbox{\rm SRemS}}(p,q)$ does not have any degree breakdown, all the $\epsilon_k$ are equal to $+1$, and $H_n(q/p)$ is positive definite. \air
By Proposition \ref{Barnett}, we have $$q(C_p^T)=H_n(q/p)P_{\cal CH}$$
Since $H_n(q/p)$ is positive definite, the Cholesky algorithm gives a decomposition $$H_n(q/p)= K K^T$$ where $K\in{\Bbb R}^{n\times n}$ is lower triangular. So that we can write
$$p(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}-K^{-1}C_p^TK)$$
We shall remark at this point that the matrix $K^{-1}C_p^TK$ is tridiagonal and symmetric.
\air
We get
$q(C_p^T)=K{\mbox{\rm\bf Ad}} L$, where $L={\mbox{\rm\bf Ad}} K^TP_{\cal CH}$.
Then, we observe that $L$ is a lower triangular matrix (since $P_{\cal CH}{\mbox{\rm\bf Ad}}$ is upper triangular) and that $K{\mbox{\rm\bf Ad}} L$ commutes with $C_p^T$.
Thus, we have the identity :
$$LC_p^TL^{-1}={\mbox{\rm\bf Ad}}(K^{-1}C_p^TK){\mbox{\rm\bf Ad}}$$
We denote by ${\mbox{\rm Td}}$ this tridiagonal matrix. Let $(p_k(x))$ be the signed remainders sequence associated to ${\mbox{\rm Td}}$ as given in Proposition \ref{prs_matrice_tridiag} (i). The first row of $K{\mbox{\rm\bf Ad}} L$ is proportional to the last row of the matrix $L$, whose entries are the coefficients of a polynomial proportional to $p_{1}(x)$.
It remains to observe that the first row of $K{\mbox{\rm\bf Ad}} L=q(C_p^T)$ gives exactly the coefficients of the polynomial $q(x)$ in the canonical basis.
Then, $p_1(x)=q(x)$. \air
In summary, we have shown that, if $p(x)$ has $n$ simple real roots and $q(x)=p'(x)/n$, then $H_n(q/p)$ is positive definite with Cholesky decomposition $H_n(q/p)=KK^T$, and if we denote by $\tilde{q}(x)$ the monic polynomial whose coefficients are proportional to the last row of $K^{-1}$, then ${\mbox{\rm Td}}(p,\tilde{q})=\overline{{\mbox{\rm Td}}(p,q)}$. This settles the announced duality.
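This duality can be checked numerically on the running example (an illustration; the Cholesky factor of $H_3(q/p)$ below was computed by hand):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

s2 = math.sqrt(2.0)
# p = x^3 - 3x, q = p'/3: H_3(q/p) = [[1,0,2],[0,2,0],[2,0,6]] = K K^T (Cholesky)
K    = [[1, 0, 0], [0, s2, 0], [2, 0, s2]]
Kinv = [[1, 0, 0], [0, 1 / s2, 0], [-s2, 0, 1 / s2]]
CpT  = [[0, 1, 0], [0, 0, 1], [0, 3, 0]]
Ad   = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
# conjugating the tridiagonal symmetric K^{-1} Cp^T K by Ad recovers Td(p, q)
Td = matmul(Ad, matmul(matmul(Kinv, matmul(CpT, K)), Ad))
expected = [[0, 1, 0], [1, 0, s2], [0, s2, 0]]   # Td(p, q) from the Sturm sequence
assert all(abs(Td[i][j] - expected[i][j]) < 1e-12 for i in range(3) for j in range(3))
```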
\subsection{Generic case}
We now turn to the generic situation. Let $p(x)$ and $q(x)$ be monic polynomials of respective degrees $n$ and $n-1$ such that ${\mbox{\rm SRemS}}(p,q)$ does not have any degree breakdown. This condition is equivalent to saying that none of the principal minors of the Hankel matrix $H_n(q/p)$ vanishes. We refer to \cite{BPR} for this point. One way to see this is to make explicit the connection with the subresultants of $p(x)$ and $q(x)$.\par A little more precisely, the $j$-th signed subresultant coefficient of $p(x)$ and $q(x)$ is denoted by ${\mbox{\rm sRes}}_j(p,q)$ for $j=0\ldots n-1$. If ${\mbox{\rm sRes}}_j(p,q)\not = 0$ for all $j$, we say that the sequence of subresultants is {\it non-defective}.
Then, by \cite[Corollary 8.33]{BPR} and Proposition \ref{prs_matrice_tridiag} (iv), we deduce that the non-defective condition is equivalent to the fact that ${\mbox{\rm SRemS}}(p,q)$ has no degree breakdown. Moreover, from \cite[Lemma 9.26]{BPR} we know that $$(\forall j\in\{1\ldots n\})\;\left({\mbox{\rm sRes}}_{n-j}(p,q)={\;\mbox{\rm det}}(H_j(q/p))\right).$$
In conclusion, our no degree breakdown assumption means also that all the principal minors of the Hankel matrix $H_n(q/p)$ do not vanish.
\par At this point, we may add another equivalent condition, which will be essential in what follows. Indeed, the condition that none of the principal minors of the Hankel matrix $H_n(q/p)$ vanishes is also equivalent to saying that the matrix $H_n(q/p)$ admits an invertible $LU$ decomposition. Namely, there exist a lower triangular matrix $L$ with entries $1$ on the diagonal and an invertible upper triangular matrix $U$ such that $H_n(q/p)=LU$. Moreover, this decomposition is unique, and since $H_n(q/p)$ is symmetric we may write it as $H_n(q/p)=LDL^T$ where $D$ is diagonal. In fact, for our purpose, we will prefer the unique decomposition $H_n(q/p)=KJK^T$ where $K$ is lower triangular and $J$ is a signature matrix.
Generalizing the previous section, we get :
\begin{thm}\label{duality}
Let $p(x)$ and $q(x)$ be two monic polynomials of respective degrees $n$ and $n-1$ such that ${\mbox{\rm SRemS}}(p,q)$ does not have any degree breakdown. Consider the symmetric LU-decomposition of the Hankel matrix $H_n(q/p)=KJK^T$, where $J$ is a signature matrix and $K$ a lower triangular matrix, and denote by $\tilde{q}(x)$ the monic polynomial whose coefficients in the canonical basis are proportional to the last row of $K^{-1}$. Then,
$${\mbox{\rm Td}}(p,\tilde{q})=\overline{{\mbox{\rm Td}}(p,q)}.$$
\end{thm}
\begin{proof}
We start with the companion identity :
$$p(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}-C_p^T)$$
Next, because of Proposition \ref{Hankel}$(i)$, we notice that the matrix $H_n(q/p)$ satisfies the intertwining relation:
$$H_n(q/p)C_p=C_p^TH_n(q/p)$$
Then, we write the symmetric $LU$-decomposition of $H_n(q/p)$ :
$$H_n(q/p)=KJK^T$$
This gives the identity
$$p(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}-K^{-1}C_p^TK).$$
\air
We have, by Proposition \ref{Barnett} $$q(C_p^T)=H_n(q/p)P_{\cal CH}=K{\mbox{\rm\bf Ad}} L$$
where $$L={\mbox{\rm\bf Ad}} J K^TP_{\cal CH}$$
We observe first that $L$ is a lower triangular matrix (since $P_{\cal CH}{\mbox{\rm\bf Ad}}$ is upper triangular), and second that $K{\mbox{\rm\bf Ad}} L$ commutes with $C_p^T$.
Thus, we have the identity :
$$LC_p^TL^{-1}={\mbox{\rm\bf Ad}}(K^{-1}C_p^TK){\mbox{\rm\bf Ad}}$$
Proposition \ref{commutant} gives
$$\overline{{\mbox{\rm Td}}(p,\tilde{q})}={\mbox{\rm\bf Ad}}(K^{-1}C_p^TK){\mbox{\rm\bf Ad}}$$
Moreover, the first row of $K{\mbox{\rm\bf Ad}} L$ is proportional to the last row of the matrix $L$. It remains to observe that the first row of $K{\mbox{\rm\bf Ad}} L=q(C_p^T)$ gives exactly the coefficients of the polynomial $q(x)$ in the canonical basis. Thus, by Proposition \ref{commutant} we get
$$LC_p^TL^{-1}={\mbox{\rm Td}}(p,q)$$
This concludes the proof.
\end{proof}
\begin{rem}
Note that $K^{-1}C_p^TKJ$ is symmetric, and hence so is the matrix $LC_p^TL^{-1}\bar{J}$, where $\bar{J}={\mbox{\rm\bf Ad}} J{\mbox{\rm\bf Ad}}$.
\end{rem}
\section{Tridiagonal determinantal representations}
\subsection{Notations}
We say that a univariate polynomial $p(x)\in{\Bbb R}[x]$ of degree $n$ such that $p(0)\not =0$ has a {\it determinantal representation} if
$${\rm (DR)}\quad p(x)=\alpha{\;\mbox{\rm det}}(J-Ax)$$
where $\alpha\in{\Bbb R}^*$, $J$ is a signature matrix in ${\Bbb R}^{n\times n}$, and $A$ is a symmetric matrix in ${\Bbb R}^{n\times n}$ (we obviously have $\alpha={\;\mbox{\rm det}}(J)p(0)$). \par
Likewise, we say that $p(x)$ has a {\it weak determinantal representation} if
$${\rm (WDR)}\quad p(x)=\alpha{\;\mbox{\rm det}}(S-Ax)$$
where $\alpha\in{\Bbb R}^*$, $S$ is symmetric invertible and $A$ is symmetric.\air
Of course, the existence of a (DR) is obvious for univariate polynomials, but we will focus on the problem of {\it effectivity}. Namely, we want an algorithm (say, of polynomial complexity with respect to the coefficients and the degree of $p(x)$) which produces the representation. Typically, we want to avoid the use of the roots of $p(x)$.
One result in that direction can be found in \cite{Qz2} (which is inspired by \cite{Fi}). It uses arrow matrices as a ``model'', whereas in the present article we make use of tridiagonal matrices.\air
When all the roots of $p(x)$ are real, a determinantal representation can be produced effectively even under the additional condition $J={\mbox{\rm\bf Id}}$. This has been discussed in several places, although not exactly in the determinantal representation formulation: in place of looking for a DR, we may consider the equivalent problem of finding a symmetric matrix whose characteristic polynomial is given. Indeed, if the size of the matrix $A$ is equal to the degree $n$ of the polynomial, the condition $$p(x)={\;\mbox{\rm det}}({\mbox{\rm\bf Id}}-xA)$$ is equivalent to $$p^*(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}-A)$$ where $p^*(x)$ is the reciprocal polynomial of $p(x)$.
In \cite{Fi}, arrow matrices are used to solve this last problem. On the other hand, the Routh-Lanczos algorithm (which can be viewed as Proposition \ref{prs_matrice_tridiag}) also gives an answer, using the tridiagonal model. Note that the problem may also be reformulated as a structured Jacobi inverse problem (see \cite{EP} for a survey). \air
In the following, we generalize the tridiagonal model to any polynomial $p(x)$, possibly having non-real roots. In doing so, general signature matrices $J$ appear, whose entries depend on the number of real roots of $p(x)$.\par
\subsection{Over a general field}
Many of the identities of Section 2 remain valid over a general field $k$.
For instance, if $p(x)$ and $q(x)$ are monic polynomials of respective degrees $n$ and $n-1$, we may still associate to them the Hankel matrix $H(q/p)=(s_{i+j})_{0\leq i,j\leq n-1}\in k^{n\times n}$ defined by the identity
$$\frac{q(x)}{p(x)}=\sum_{j=0}^\infty\frac{s_j}{x^{j+1}}$$
Then, we have the following :
\begin{thm}\label{general_field}
Let $p(x)\in k[x]$, $q(x)\in k[x]$ be two monic polynomials of respective degrees $n$ and $n-1$, and set $H=H(q/p)$. Then, the matrix $C_p^TH$ is symmetric and we have the WDR :
$${\;\mbox{\rm det}}(H)\times p(x)={\;\mbox{\rm det}}(xH-C_p^TH)$$
Moreover, if $H$ admits the LU-decomposition $H=KDK^T$, where $K\in k^{n\times n}$ is lower triangular with entries $1$ on the diagonal and $D\in k^{n\times n}$ is a diagonal matrix, then we have:
$${\;\mbox{\rm det}}(D)\,p(x)={\;\mbox{\rm det}}(xD-{\mbox{\rm Td}})$$
where ${\mbox{\rm Td}}=K^{-1}C_p^TKD$ is a tridiagonal symmetric matrix.
\end{thm}
\begin{proof}
We exactly follow the proof of Theorem \ref{duality}.
\end{proof}
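As a sanity check over the rationals, take $p(x)=x^2+1$ (no real roots) and $q(x)=x$; then $H(q/p)={\rm diag}(1,-1)$, so $K={\mbox{\rm\bf Id}}$ and $D={\rm diag}(1,-1)$. The illustrative sketch below verifies that ${\mbox{\rm Td}}=C_p^TD$ is tridiagonal symmetric and that ${\;\mbox{\rm det}}(xD-{\mbox{\rm Td}})={\;\mbox{\rm det}}(D)\,p(x)$, keeping track of the ${\;\mbox{\rm det}}(D)$ normalization:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# p = x^2 + 1, q = x: q/p = 1/x - 1/x^3 + ..., so H_2(q/p) = diag(1, -1)
D   = [[1, 0], [0, -1]]
CpT = [[0, 1], [-1, 0]]                  # transpose of the companion matrix of p
Td  = matmul(CpT, D)                     # K^{-1} Cp^T K D with K = Id
assert Td == [[0, -1], [-1, 0]]          # tridiagonal and symmetric
# det(x D - Td) = det([[x, 1], [1, -x]]) = -x^2 - 1 = det(D) * p(x)
for x in (0, 1, 2, -3):
    dval = (x * D[0][0] - Td[0][0]) * (x * D[1][1] - Td[1][1]) \
         - (x * D[0][1] - Td[0][1]) * (x * D[1][0] - Td[1][0])
    assert dval == -(x * x + 1)
```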
Note that the condition for $H$ to be invertible is equivalent to the fact that the polynomials $p(x)$ and $q(x)$ are coprime, since we have
$${\mbox{\rm rk}}({\mbox{\rm Bez}}(q,p))={\mbox{\rm deg}\,}(p)-{\mbox{\rm deg}\,}(\gcd(p,q)).$$
To see this, we may refer to the first assertion of \cite[Theorem 9.4]{BPR} whose proof is valid over any field.\air
The WDR of Theorem \ref{general_field} has the advantage that the considered matrices have entries in the ring generated by the coefficients of the polynomial $p(x)$.
This point is not satisfied by the methods proposed in \cite{Qz2} or by the Routh-Lanczos algorithm.\par
In fact, the use of Hankel matrices satisfying the intertwining relation seems to be more convenient, since we are able to ``stop the algorithm at an earlier stage'', namely before having to compute a square root of the matrix $H$ (or of the matrix $D$).\par
Of course, when we want to derive a DR, we have to add some conditions on the field $k$: for instance, we shall work over an ordered field in which positive elements admit square roots.\par
To end the section, we may summarize that, for a given polynomial $p(x)$, we have an obvious but non-effective (i.e. using factorization) DR with entries in the splitting field of $p(x)$ over $k$, to be compared with the effective WDR given by Theorem \ref{general_field}, whose entries lie in the field generated by the coefficients of $p(x)$.
\subsection{Symmetric tridiagonal representation and real roots counting}
If $p(x)$ and $r(x)$ are two real polynomials, we recall the number known as the Tarski query:
$${\mbox{\rm TaQ}}(r,p)=\#\{x\in{\Bbb R}\mid p(x)=0\wedge r(x)>0\}-\#\{x\in{\Bbb R}\mid p(x)=0\wedge r(x)<0\}.$$
We also recall the definition of the Permanences minus variations number of a given sequence of signs $\nu=(\nu_1,\ldots,\nu_k)$:
$${\mbox{\rm PmV}}(\nu)=\sum_{i=1}^{k-1}\nu_i\nu_{i+1}.$$
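As a concrete illustration (the polynomials and sign sequences below are our own illustrative choices, not taken from the paper), both quantities can be computed directly:

```python
import numpy as np

def taq(r, p):
    """TaQ(r,p) = #{p(x)=0, r(x)>0} - #{p(x)=0, r(x)<0}, over the real roots."""
    roots = np.roots(p)
    real = roots[np.abs(roots.imag) < 1e-9].real
    vals = np.polyval(r, real)
    return int(np.sum(vals > 0) - np.sum(vals < 0))

def pmv(nu):
    """Permanences minus variations of a sign sequence nu in {-1,+1}."""
    return sum(nu[i] * nu[i + 1] for i in range(len(nu) - 1))

p = [1, 0, -5, 0, 4]        # p(x) = x^4 - 5x^2 + 4, real roots -2, -1, 1, 2
print(taq([1, 3], p))       # r(x) = x + 3 is positive at every root: prints 4
print(taq([1, 0], p))       # r(x) = x: two roots positive, two negative: prints 0
print(pmv([1, 1, -1, -1, 1]))   # 1 - 1 + 1 - 1 = 0
```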
We summarize, from \cite[Theorem 4.32, Proposition 9.25, Corollary 9.8]{BPR}, some useful properties of these numbers:
\begin{prop}\label{real_counting}
Let $p(x)$ and $q(x)$ be two monic polynomials of respective degrees $n$ and $n-1$, and such that the sequence ${\mbox{\rm SRemS}}(p,q)$ has no degree breakdown. Let $r(x)$ be another polynomial such that $q(x)$ is the remainder of $p'(x)r(x)$ modulo $p(x)$. Then,
$${\mbox{\rm PmV}}(\nu)={\mbox{\rm sgn}}({\mbox{\rm Bez}}(p,q))={\mbox{\rm sgn}}(H_n(q/p))={\mbox{\rm TaQ}}(r,p)$$
where $\nu$ is the sequence of signs of the leading coefficients in the signed remainders sequence ${\mbox{\rm SRemS}}(p,q)$.
\end{prop}
We now come to our main result on real root counting:
\begin{thm}\label{realroots_tridiag}
Let ${\mbox{\rm Td}}\in{\Bbb R}^{n\times n}$ be a tridiagonal symmetric matrix with non-singular first principal diagonals. Let also $p(x)\in{\Bbb R}[x]$ be a real polynomial with no multiple root and such that $$p(x)={\;\mbox{\rm det}}(J){\;\mbox{\rm det}}(xJ-{\mbox{\rm Td}}),$$ where $J$ is a signature matrix whose last diagonal entry is $+1$.\par
Then, the number of real roots of $p(x)$ is at least ${\mbox{\rm sgn}}(J)$.
\end{thm}
\begin{proof}
We have $$p(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}_n-{\mbox{\rm Td}}\times J)$$ and we set
$$q(x)={\;\mbox{\rm det}}\left(x{\mbox{\rm\bf Id}}_{n-1}-\left({\mbox{\rm Td}}\times J\right)_{n-1}\right).$$
The matrix ${\mbox{\rm Td}}\times J$ is still tridiagonal with non-singular first principal diagonals. Then, we consider the sequence ${\mbox{\rm SRemS}}(p,q)$ and denote by $\nu$ the associated sequence of signs of leading coefficients.\par
Since $\gcd(p,p')=1$, we set $r(x)$ to be the unique polynomial of degree $<n$ such that $$r\equiv \frac{q}{p'}{\;\mbox{\rm mod}}\; p.$$ Then, $$p'r\equiv q{\;\mbox{\rm mod}}\; p$$ and from Proposition \ref{real_counting}, we get:
$${\mbox{\rm PmV}}(\nu)={\mbox{\rm TaQ}}(r,p)\leq \#\{x\in{\Bbb R}\mid p(x)=0\}$$
\air
Let us introduce some notation at this point. First, write ${\mbox{\rm Td}}={\mbox{\rm Td}}(\alpha,\beta,\gamma)$; next, denote by $\epsilon(a)\in\{-1,+1\}$ the sign of the nonzero real number $a$; finally, let
$$J=\left(\begin{array}{cccc}
\theta_{n-1}&&&\\
&\ddots&&\\
&&\theta_{1}&\\
&&&1
\end{array}\right).$$
Then, we can write
$$p(x)={\;\mbox{\rm det}}\left(x{\mbox{\rm\bf Id}}_n-P({\mbox{\rm Td}}\times J)P^{-1}\right)$$ where
$$P=\left(\begin{array}{ccccc}
(\theta_{n-1}\ldots\theta_{1})\times(\epsilon(\gamma_{n-1})\ldots\epsilon(\gamma_{1}))&&&&\\
&&\ddots&&\\
&&&\theta_{1}\times\epsilon({\gamma_1})&\\
&&&&1
\end{array}\right).$$
We note in fact that $P({\mbox{\rm Td}}\times J)P^{-1}={\mbox{\rm Td}}(p,q)$. Indeed, all the coefficients on the first lower principal diagonal are positive. Moreover, all the coefficients on the first upper principal diagonal are given by the sequence
$$(\theta_{n-1}\times\theta_{n-2},\ldots,\theta_{2}\times\theta_1,\theta_1).$$
We deduce from Proposition \ref{prs_matrice_tridiag} (iv) that the sequence of signs of leading coefficients in the signed remainders sequence ${\mbox{\rm SRemS}}(p,q)$ is the following:
$$\nu=(\theta_{n-1}\times\ldots\times\theta_{1},\ldots,\theta_2\times\theta_1,\theta_{1},1,1).$$
Thus $${\mbox{\rm PmV}}(\nu)=1+\sum_{k=1}^{n-1}\theta_k={\mbox{\rm sgn}}(J)$$ and we are done.
\end{proof}
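A quick numerical illustration of the theorem (the tridiagonal matrix below is an arbitrary illustrative choice satisfying the hypotheses):

```python
import numpy as np

# For a symmetric tridiagonal Td with non-zero off-diagonal entries and a
# signature matrix J whose last diagonal entry is +1, the polynomial
# p(x) = det(J) det(xJ - Td) = det(x Id - Td J)
# has at least sgn(J) real roots.
Td = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.5, 2.0],
               [0.0, 2.0, -1.0]])

def count_real_roots(Td, J, tol=1e-9):
    eig = np.linalg.eigvals(Td @ J)      # roots of det(x Id - Td J)
    return int(np.sum(np.abs(eig.imag) < tol))

for diag in [(1.0, 1.0, 1.0), (-1.0, 1.0, 1.0), (1.0, -1.0, 1.0)]:
    J = np.diag(diag)
    sgnJ = int(np.trace(J))              # sgn(J) = sum of the diagonal signs
    assert count_real_roots(Td, J) >= sgnJ
    print(diag, count_real_roots(Td, J), ">=", sgnJ)
```

For $J={\mbox{\rm\bf Id}}$ the matrix ${\mbox{\rm Td}}\,J$ is symmetric, so all $n$ roots are real and the bound $\geq n$ is attained.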
Another way, maybe less constructive, to prove the result is to use the duality of Theorem \ref{duality}. Indeed, replacing as in the previous proof the matrix ${\mbox{\rm Td}}\times J$ with $P({\mbox{\rm Td}}\times J)P^{-1}$, we write the identity
$${\mbox{\rm Td}}\times J=LC^T_pL^{-1}$$
Then, by duality, we have
$$LC_p^TL^{-1}={\mbox{\rm\bf Ad}} K^{-1}C_p^TKJ'{\mbox{\rm\bf Ad}}$$
where we have set the LU-decomposition $$H_n(q/p)=KJ'K^T.$$
Let us introduce $\bar{J'}={\mbox{\rm\bf Ad}} J'{\mbox{\rm\bf Ad}}$; we get:
$$\left(LC_p^TL^{-1}J\right)\times (J\bar{J'})={\mbox{\rm\bf Ad}} K^{-1}C_pKJ' {\mbox{\rm\bf Ad}}$$
We remark that the matrices $LC_p^TL^{-1}J$ and $K^{-1}C_pKJ'$ are both tridiagonal and symmetric with non-singular principal diagonals, so that we necessarily have $$J\bar{J'}=\pm{\mbox{\rm\bf Id}}.$$ Notice that by assumption the last coefficient of $J$ is $+1$ and that the first coefficient of $J'$ is always $+1$ (since it is the leading coefficient of $\frac{q(x)}{p(x)}$). Thus
$$J\bar{J'}={\mbox{\rm\bf Id}}.$$
Then, we may conclude by Proposition \ref{real_counting}.\par Alternatively, this computation yields another proof of the equality
$${\mbox{\rm PmV}}(\nu)={\mbox{\rm sgn}}({\mbox{\rm Bez}}(p,q))$$
which appears in the sequence of identities
$${\mbox{\rm sgn}}({\mbox{\rm Bez}}(p,q))={\mbox{\rm sgn}}(H_n(q/p))={\mbox{\rm sgn}}(J')={\mbox{\rm sgn}}(J)={\mbox{\rm PmV}}(\nu)={\mbox{\rm TaQ}}(r,p).$$
\begin{rem}
It is possible to extend Theorem \ref{realroots_tridiag} to the case where the first principal diagonals of ${\mbox{\rm Td}}={\mbox{\rm Td}}(\alpha,\beta,\beta)$ are singular. Namely, for each $k$ such that $\beta_k=0$, we have to assume that the corresponding $k$-th diagonal entry of $J$ is equal to $+1$. Then, we get that the number of real roots of $p(x)$, {\it counted with multiplicity}, is at least ${\mbox{\rm sgn}}(J)$.\par
To see this, it suffices to note that the polynomial defined by $p(x)={\;\mbox{\rm det}}(J){\;\mbox{\rm det}}(xJ-{\mbox{\rm Td}})$ factors as $$p(x)={\;\mbox{\rm det}}(J_1){\;\mbox{\rm det}}(xJ_1-{\mbox{\rm Td}}_{k})\times{\;\mbox{\rm det}}(J_2){\;\mbox{\rm det}}(xJ_2-\overline{{\mbox{\rm Td}}}_{n-k}).$$
Moreover, the matrices ${\mbox{\rm Td}}_{k}$ and $\overline{{\mbox{\rm Td}}}_{n-k}$ remain tridiagonal symmetric and $J_1$, $J_2$ remain signature matrices. If we denote by $\bigoplus$ the usual direct sum of matrices, we have $J=J_1\bigoplus J_2$ and ${\mbox{\rm Td}}={\mbox{\rm Td}}_{k}\bigoplus\overline{{\mbox{\rm Td}}}_{n-k}$.\par Thus, we may proceed by induction on the degree of $p(x)$.
\end{rem}
Before stating the converse property of Theorem \ref{realroots_tridiag}, we establish a genericity lemma.
\begin{lem}\label{genericity}
Let $p(x)$ be a monic polynomial of degree $n$ with only simple roots and $q(x)=x^{n-1}+b_{1}x^{n-2}+\ldots+b_{n-1}$.
Then, the set of all $(n-1)$-tuples $(b_1,\ldots, b_{n-1})\in{\Bbb R}^{n-1}$ such that there is an integer $k\in\{1,\ldots, n\}$ satisfying ${\;\mbox{\rm det}}(H_k(q/p))=0$, is a proper subvariety of ${\Bbb R}^{n-1}$.
\end{lem}
\begin{proof}
We only have to show that for all $k$, ${\;\mbox{\rm det}}(H_k(q/p))$, viewed as a polynomial in the variables $b_1,\ldots, b_{n-1}$, is not the zero polynomial.\par
Let $H_n(q/p)=(s_{i+j})_{0\leq i,j\leq n-1}$ where $$\frac{q(x)}{p(x)}=\sum_{j=0}^\infty\frac{s_j}{x^{j+1}}$$ and denote by $\alpha_1,\ldots,\alpha_n$ the set of all (possibly complex) roots of $p(x)$. Then,
$$s_j=\sum_{i=1}^n\frac{q(\alpha_i)}{p'(\alpha_i)}\alpha_i^j$$
Let us introduce the real numbers defined as $$u_j=\sum_{i=1}^n\frac{\alpha_i^j}{p'(\alpha_i)}$$
We obviously have $u_j=0$ whenever $j\leq n-2$ and also $u_{n-1}=1$ (expand $\frac{1}{p(x)}=\sum_{j\geq 0}\frac{u_j}{x^{j+1}}$ and look at $\lim_{x\to +\infty}\frac{x^{j+1}}{p(x)}$). We thus deduce:\air
$\left\{\begin{array}{l} s_0=1\\ s_1=b_1+u_{n}\\
{\rm and\; more\; generally}\\
(\forall j\in\{1,\ldots,2n-2\})\; (s_j=b_j+b_{j-1}u_{n}+\ldots+b_1u_{n+j-2}+u_{n+j-1})
\end{array}\right.$\air
Then, it becomes clear that ${\;\mbox{\rm det}}(H_{k+1}(q/p))\not\equiv 0$ for any $k$ such that $k\leq \lfloor\frac{n-1}{2}\rfloor=r$, since $s_{2k}\in{\Bbb R}[b_1,\ldots,b_{2k}]$ has degree $1$ in the variable $b_{2k}$, and so does ${\;\mbox{\rm det}}(H_{k+1}(q/p))$.
\par
Next, for $r<k\leq n$, we expand the determinant ${\;\mbox{\rm det}}(H_k(q/p))$ successively along the first columns, and we remark that its degree in the variable $b_{n-1}$ is equal to $2k-n$ (with leading coefficient equal to $-1$).
This concludes the proof.
\end{proof}
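The lemma can be spot-checked numerically. The sketch below (illustrative; it uses the natural candidate $q=p'/n$, which happens to have no degree breakdown for this particular $p$) verifies that all leading principal minors of $H_n(q/p)$ are non-zero:

```python
import numpy as np

# Spot-check of genericity: for p with simple real roots and q = p'/n, all
# leading principal minors det(H_k(q/p)) are non-zero, i.e. SRemS(p, q) has
# no degree breakdown for this choice of q.
p = np.poly([1.0, 2.0, 3.0, -1.0])       # p(x) with simple real roots, n = 4
alpha = np.roots(p)
dp = np.polyder(p)
n = 4
q = dp / n                               # monic of degree n - 1

# s_j = sum_i q(alpha_i)/p'(alpha_i) * alpha_i^j  (partial-fraction residues)
s = [np.sum(np.polyval(q, alpha) / np.polyval(dp, alpha) * alpha**j).real
     for j in range(2 * n - 1)]
H = np.array([[s[i + j] for j in range(n)] for i in range(n)])

minors = [np.linalg.det(H[:k, :k]) for k in range(1, n + 1)]
print([abs(m) > 1e-9 for m in minors])   # all True: no degree breakdown
```

Here ${\;\mbox{\rm det}}(H_n(q/p))$ equals the product of the squared root differences divided by $n^n$, which is non-zero precisely because the roots are simple.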
In other words, the Lemma says that the condition
$$(\forall k\in\{1,\ldots, n\})\;({\;\mbox{\rm det}}(H_k(q/p))\not=0)$$
is {\it generic} with respect to the space of coefficients of the polynomial $q(x)$. Because of the relations between coefficients and roots, the condition is also generic with respect to the (possibly complex) roots of the polynomial $q(x)$.\air
Here is our converse statement about real roots counting :
\begin{thm}
Let $p(x)$ be a monic polynomial of degree $n$ which has exactly $s$ real roots counted with multiplicity. We can effectively find a generic family of symmetric tridiagonal matrices ${\mbox{\rm Td}}$ and signature matrices $J$ with ${\mbox{\rm sgn}}(J)=s$, such that $$p(x)={\;\mbox{\rm det}}(J)\times{\;\mbox{\rm det}}(xJ-{\mbox{\rm Td}}).$$
\end{thm}
\begin{proof}
If $p(x)$ has multiple roots, then we may factor out $\gcd(p,p')$ and use the multiplicative property of the determinant to argue by induction on the degree. From now on, we assume that $p(x)$ has only simple roots.\air
We take for $q(x)$ any monic polynomial of degree $n-1$ which has exactly $s-1$ real roots interlacing those of $p(x)$. Namely, if we denote by $x_1<\ldots<x_s$ all the real roots of $p(x)$ and by $y_1<\ldots <y_{s-1}$ all the real roots of $q(x)$, we ask that $x_1<y_1<x_2<y_2<\ldots<y_{s-1}<x_s$.
\air
Let $r(x)$ be the unique polynomial of degree $<n$ such that $r(x)\equiv\frac{q(x)}{p'(x)}{\;\mbox{\rm mod}}\; p(x)$ (since $p'(x)$ is invertible modulo $p(x)$).\par
From $p'r\equiv q{\;\mbox{\rm mod}}\; p$, and since $q(x_i)$ and $p'(x_i)$ have the same sign at each real root $x_i$ of $p(x)$ (by the interlacing property), we get $${\mbox{\rm TaQ}}(r,p)=s=\#\{x\in{\Bbb R}\mid p(x)=0\}$$
\air
At this point, we need $q(x)$ to satisfy another hypothesis: the sequence ${\mbox{\rm SRemS}}(p,q)$ shall not have any degree breakdown, or equivalently $H_n(q/p)$ shall admit an LU-decomposition $H_n(q/p)=KJK^T$. According to Lemma \ref{genericity}, this hypothesis is generically satisfied, although it may not always be satisfied for the natural candidate $q(x)=p'(x)/n$.\par
Then, we get from Theorem \ref{duality}
$${\;\mbox{\rm det}}(J)\times p(x)={\;\mbox{\rm det}}(xJ-K^{-1}C_p^TKJ)$$
where ${\mbox{\rm Td}}=K^{-1}C_p^TKJ$ is tridiagonal symmetric and $J$ is a signature matrix.
\air
By the proof of Theorem \ref{realroots_tridiag}, we get moreover that $${\mbox{\rm sgn}}(J)={\mbox{\rm TaQ}}(r,p)={\mbox{\rm sgn}}(H_n(q/p)).$$
This concludes the proof since ${\mbox{\rm TaQ}}(r,p)=s$.
\end{proof}
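The construction in this proof can be illustrated on a small concrete example (the polynomials below are our own illustrative choices):

```python
import numpy as np

# p(x) = x(x-2)(x^2+1) has s = 2 real roots, and the monic cubic
# q(x) = (x-1)(x^2+2) has its single real root interlacing them (0 < 1 < 2)
# and is coprime with p.  By Proposition (real_counting), the signature of
# H_4(q/p) must then equal s = 2.
p = np.array([1.0, -2.0, 1.0, -2.0, 0.0])   # x^4 - 2x^3 + x^2 - 2x
q = np.array([1.0, -1.0, 2.0, -2.0])        # x^3 - x^2 + 2x - 2
alpha = np.roots(p)
dp = np.polyder(p)
n = 4

# s_j = sum_i q(alpha_i)/p'(alpha_i) * alpha_i^j  (partial-fraction residues)
s = [np.sum(np.polyval(q, alpha) / np.polyval(dp, alpha) * alpha**j).real
     for j in range(2 * n - 1)]
H = np.array([[s[i + j] for j in range(n)] for i in range(n)])

signature = int(np.sum(np.sign(np.linalg.eigvalsh(H))))
print("signature of H_4(q/p):", signature)   # -> 2, the number of real roots of p
```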
\begin{rem}
\begin{enumerate}
\item[(i)] The choice of such a polynomial $q(x)$ with the interlacing roots property requires counting and localizing the real roots of $p(x)$. This can be done via Sturm sequences, for instance.
\item[(ii)]
Although the polynomial $q(x)=p'(x)/n$ does not necessarily have the interlacing property in general, it does when all the roots of $p(x)$ are real and simple.
Moreover, in this case, the interlacing roots condition is equivalent to the no degree breakdown condition. Indeed, ${\mbox{\rm TaQ}}(p'q {\;\mbox{\rm mod}}\; p,p)=n$ if and only if $p'(x)$ and $q(x)$ have same signs at each root of $p(x)$.
\end{enumerate}
\end{rem}
\section{Some worked examples}
In order to get lighter formulas in our examples, we choose to get rid of denominators. That is why we replace signature matrices by general non-singular diagonal matrices. If one wants to deduce formulas with signature matrices, it suffices to normalize.
\begin{enumerate}
\item[1)]
Let $p(x)=x^3+s x+t$ with $s\not =0$, and $q(x)=p'(x)=3x^2+s$. Let us introduce the discriminant of $p(x)$ as $\Delta=-4s^3-27t^2$. Consider the decomposition of the Hankel matrix
$$H(q/p)=\left(\begin{array}{ccc}
3&0&-2s\\
0&-2s&-3t\\
-2s&-3t&2s^2
\end{array}\right)
=KJK^T$$
where $$K=
\left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
-\frac{2s}{3}&\frac{3t}{2s}&1\\
\end{array}\right)$$ and
$$J=\left(\begin{array}{ccc}
3&0&0\\
0&-2s&0\\
0&0&\frac{-\Delta}{6s}\\
\end{array}\right)$$
We recover the well-known fact that $p(x)$ has three distinct real roots if and only if $s<0$ and $\Delta>0$, which obviously reduces to the single condition $\Delta>0$.\par
Then, we have the determinantal representation
$$\Delta \times p(x)={\;\mbox{\rm det}}(xJ-{\mbox{\rm Td}})$$
where
$${\mbox{\rm Td}}=\left(\begin{array}{ccc}
0&-2s&0\\
-2s&-3t&\frac{-\Delta}{6s}\\
0&\frac{-\Delta}{6s}&\frac{t\Delta}{4s^2}\\
\end{array}\right)$$
\item[2)] Consider the polynomial $p(x)=x^5-5x^3+4x$, which in fact factorizes through $p(x)=x(x-1)(x+1)(x-2)(x+2)$. Let $q(x)=p'(x)/5$. We have
$$5\,H(q/p)=\left(
\begin{array}{ccccc}
5&0&10&0&34\\
0&10&0&34&0\\
10&0&34&0&130\\
0&34&0&130&0\\
34&0&130&0&514\\
\end{array}\right)$$
$${\mbox{\rm Td}}=\left(\begin{array}{ccccc}
0&\sqrt{2}&0&0&0\\
\sqrt{2}&0&\sqrt{\frac{7}{5}}&0&0\\
0&\sqrt{\frac{7}{5}}&0&\sqrt{\frac{36}{35}}&0\\
0&0&\sqrt{\frac{36}{35}}&0&\sqrt{\frac{4}{7}}\\
0&0&0&\sqrt{\frac{4}{7}}&0\\
\end{array}\right)$$
$$p(x)={\;\mbox{\rm det}}(x{\mbox{\rm\bf Id}}-{\mbox{\rm Td}}).$$
In order to get some parametrized identities, let us introduce the following family of polynomials $$q_a(x)=(x-a)\left(x+\frac{3}{2}\right)\left(x+\frac{1}{2}\right)\left(x-\frac{1}{2}\right).$$
We write the LU-decomposition
$$H(q_a/p)=\left(\begin{array}{ccccc}
1&\frac{3}{2}-a&\frac{-3a}{2}+\frac{19}{4}&\frac{57}{8}-\frac{19a}{4}&\frac{-57a}{8}+\frac{79}{4}\\
&&&&\\
\frac{3}{2}-a&\frac{-3a}{2}+\frac{19}{4}&\frac{57}{8}-\frac{19a}{4}&\frac{-57a}{8}+\frac{79}{4}&\frac{237}{8}-\frac{79a}{4}\\
&&&&\\
\frac{-3a}{2}+\frac{19}{4}&\frac{57}{8}-\frac{19a}{4}&\frac{-57a}{8}+\frac{79}{4}&\frac{237}{8}-\frac{79a}{4}&\frac{-237a}{8}+\frac{319}{4}\\
&&&&\\
\frac{57}{8}-\frac{19a}{4}&\frac{-57a}{8}+\frac{79}{4}&\frac{237}{8}-\frac{79a}{4}&\frac{-237a}{8}+\frac{319}{4}&\frac{957}{8}-\frac{319a}{4}\\
&&&&\\
\frac{-57a}{8}+\frac{79}{4}&\frac{237}{8}-\frac{79a}{4}&\frac{-237a}{8}+\frac{319}{4}&\frac{957}{8}-\frac{319a}{4}&\frac{-957a}{8}+\frac{1279}{4}\\
\end{array}\right)=K_aJ_aK_a^T$$
where the associated ``signature'' matrix $J_a$ is equal to
{\footnotesize $$\left(\begin{array}{ccccc}
1&&&&\\
&-\frac{1}{2}(a+1)(2a-5)&&&\\
&&\left(\frac{15}{16}\right)\frac{(2a-1)(4a^2-a-15)}{(a+1)(2a-5)}&&\\
&&&\left(\frac{45}{128}\right)\frac{48a^4-16a^3-216a^2+58a+105}{(2a-1)(4a^2-a-15)}&\\
&&&&\left(\frac{315}{8}\right)\frac{(a+2)(a+1)a(a-1)(a-2)}{48a^4-16a^3-216a^2+58a+105}\\
\end{array}\right)$$}
The condition for $H(q_a/p)$ to be positive definite is equivalent to having only positive coefficients on the diagonal of $J_a$.\par
First, it yields $J_a(2,2)>0$, which means that $a\in\left]-1,\frac{5}{2}\right[$. Then, we add the condition $J_a(3,3)>0$, which means that $a\in\left]\frac{1}{2},2.06\ldots\right[$. Then, we add the condition $J_a(4,4)>0$, which means that $a\in\left]0.9\ldots,2.00\ldots\right[$. And finally, we add the condition $J_a(5,5)>0$, which means that $a\in\left]1,2\right[$ and gives exactly the interlacing property for the polynomial $q_a(x)$.
\air
For instance, with $a=\frac{3}{2}$ we get $p(x)={\;\mbox{\rm det}}\left(x{\mbox{\rm\bf Id}}-{\mbox{\rm Td}}_{\frac{3}{2}}\right)$ where :
$${\mbox{\rm Td}}_{\frac{3}{2}}=\left(\begin{array}{ccccc}
0&\sqrt{\frac{5}{2}}&0&0&0\\
\sqrt{\frac{5}{2}}&0&\sqrt{\frac{9}{8}}&0&0\\
0&\sqrt{\frac{9}{8}}&0&\sqrt{\frac{35}{40}}&0\\
0&0&\sqrt{\frac{35}{40}}&0&\sqrt{\frac{1}{2}}\\
0&0&0&\sqrt{\frac{1}{2}}&0\\
\end{array}\right)$$
\end{enumerate}
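Both worked examples can be verified numerically; the following sketch is an independent check (not part of the paper), using the $J$ and ${\mbox{\rm Td}}$ displayed above:

```python
import numpy as np

# (1) For p(x) = x^3 + s x + t, verify Delta * p(x) = det(xJ - Td) with the
# J and Td of the first worked example, at a sample parameter point.
s, t = -1.0, 0.3
Delta = -4 * s**3 - 27 * t**2
J = np.diag([3.0, -2 * s, -Delta / (6 * s)])
Td = np.array([[0.0, -2 * s, 0.0],
               [-2 * s, -3 * t, -Delta / (6 * s)],
               [0.0, -Delta / (6 * s), t * Delta / (4 * s**2)]])
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert np.isclose(Delta * (x**3 + s * x + t), np.linalg.det(x * J - Td))

# (2) For p(x) = x^5 - 5x^3 + 4x, the eigenvalues of Td_{3/2} must be the
# roots 0, +-1, +-2.
off = [np.sqrt(5 / 2), np.sqrt(9 / 8), np.sqrt(35 / 40), np.sqrt(1 / 2)]
Td32 = np.diag(off, 1) + np.diag(off, -1)
eig = np.sort(np.linalg.eigvalsh(Td32))
assert np.allclose(eig, [-2.0, -1.0, 0.0, 1.0, 2.0])
print("both worked examples verified")
```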
\air\noindent{\small{\bf Acknowledgments.} }\par I wish to thank Marie-Fran\c{c}oise Roy for helpful discussions on the subject.
% End of arXiv:0811.2365, "Sturm and Sylvester algorithms revisited via tridiagonal determinantal representations" (2008).
% arXiv:2206.07760
\title{Multiscale methods for signal selection in single-cell data}
\begin{abstract}
Analysis of single-cell transcriptomics often relies on clustering cells and then performing differential gene expression (DGE) to identify genes that vary between these clusters. These discrete analyses successfully determine cell types and markers; however, continuous variation within and between cell types may not be detected. We propose three topologically motivated mathematical methods for unsupervised feature selection that consider discrete and continuous transcriptional patterns on an equal footing across multiple scales simultaneously. Eigenscores ($\text{eig}_i$) rank signals or genes based on their correspondence to low-frequency intrinsic patterning in the data using the spectral decomposition of the graph Laplacian. The multiscale Laplacian score (MLS) is an unsupervised method for locating relevant scales in data and selecting the genes that are coherently expressed at these respective scales. The persistent Rayleigh quotient (PRQ) takes data equipped with a filtration, allowing the separation of genes with different roles in a bifurcation process (e.g., pseudo-time). We demonstrate the utility of these techniques by applying them to published single-cell transcriptomics data sets. The methods validate previously identified genes and detect additional biologically meaningful genes with coherent expression patterns. By studying the interaction between gene signals and the geometry of the underlying space, the three methods give multidimensional rankings of the genes and visualisation of relationships between them.
\end{abstract}
\section{Introduction}
Cells, the building blocks of life, are often classified into discrete cell types (e.g. liver, neuron, immune, or blood cells).
In modern experiments, cell type identification commonly relies on partitioning single cell RNA sequencing (scRNA-seq) data.
Differential gene expression (DGE) algorithms use statistical tests to determine genes that significantly differ between predefined groups of cells. However, cellular biology is more nuanced: there are multiple scales of cell classification (e.g. Treg cells are T cells which are a type of immune cell), continuous transitions into cell types (e.g. embryonic development starts from stem cells that differentiate into broad cell types that further specialise), or natural variations within cell types. The rich repertoire of gene expression patterns and cellular subphenotypes offer an opportunity to study continuity of gene expression. \par
Mathematically, single-cell data are given as raw counts of RNA transcripts that represent the
expression of more than 20,000 genes in the human genome. A cell-by-gene matrix of counts is then pre-processed to reduce noise, variance due to technical effects, and the number of genes, to form a smaller normalised gene expression matrix $\widehat{Y} \in \mathbb{R}^{m \times n}$,
where $m \sim 10^{3}$ genes and $n \sim 10^{3}-10^{6}$ cells \cite{SeuratV4,wolf2018scanpy}.
Due to the high-dimensional nature of these data, along with sparsity and noise, many standard data science methods are not directly applicable.
\par
The field of topological data analysis (TDA) studies the shape and connectivity of data at multiple scales of resolution. TDA methods require a metric and approximate the shape of the data by building covers or sequences of higher order networks (i.e., filtrations) on the data.
TDA methods have successfully analysed and visualised single-cell data (e.g. UMAP, which relies on fuzzy simplicial sets; or Mapper, which visualises data using covers and filters) \cite{mcinnes2018umap, becht2019dimensionality,jeitziner2017two,rizviSinglecellTopologicalRNAseq2017,kuchrooTopologicalAnalysisSinglecell2021,vandaeleStableTopologicalSignatures2021}.
In this paper, instead of studying the shape of data, we focus on the related task of quantifying how well signals on a given data set align with the topology of the data.
\par
The multiscale nature of topological data analysis and filtrations leads us to combine these ideas with graph signal processing \cite{ortega2018graph} and spectral graph theory \cite{chung1997spectral} to study continuous variation of gene features across cells.
The analysis in this paper starts with a pre-processed single-cell data matrix $\widehat{Y}$, as computed in the standard software Seurat \cite{SeuratV4}, and then uses UMAP to construct an undirected weighted $k$-nearest neighbour cell similarity graph $G$. The nodes, which represent cells, are connected by edges, weighted by the similarity of gene expression.
\par
Spectral graph theory, graph signal processing and the emerging field of topological signal processing \cite{robinson2014topological,schaub2021signal,ortega2018graph} offer a setting to study continuous patterns of gene expression across cells. The expression of a particular gene, or other features more generally (e.g. epigenetic factors), can be considered as a real-valued function $g: V \to \mathbb{R}$, or signal, on the nodes of this graph.
To determine whether a gene signal is coherent with the graph structure, which encodes the average similarity of all genes, Govek et al. \cite{govekClusteringindependentAnalysisGenomic2019} applied and extended the Laplacian score (LS). Previously, He et al. \cite{He2005} proposed the Laplacian score for unsupervised feature selection, drawing inspiration from Laplacian eigenmaps for dimensionality reduction. By studying the spectral properties of the graph Laplacian, each
gene is given a score according to its consistency with the local geometric structure of the graph \cite{govekClusteringindependentAnalysisGenomic2019}. The Laplacian score is small if the gene signal roughly correlates with the graph structure (i.e. is locally approximately constant but has global variation) and is large if the expression of a gene varies wildly on local neighbourhoods.
This feature selection approach ranks the best features (e.g., genes) from the input data to form a compact and informative data representation.
A score for each gene can be calculated, providing an overall ranking of features or gene signals \cite{He2005,govekClusteringindependentAnalysisGenomic2019}.
\par
In this work, we analyse gene signals on a cell similarity graph, while taking into account multiple scales of the single-cell data. We propose three computationally tractable methods for finding gene expression patterns that drive continuous variation in the dataset.
Similar to Govek et al. \cite{govekClusteringindependentAnalysisGenomic2019}, the proposed methods do not require predefined clustering or cell assignment. However, instead of one ranking, we propose multiple rankings of gene expression patterns at different scales of the data. Briefly, eigenscores restrict the signal to the eigenspaces corresponding to the smallest eigenvalues. We score each gene by its alignment with each of the eigenvectors of smallest eigenvalue and then visualise the signals in gene space.
Our proposed multiscale Laplacian score (MLS) pipeline uses the theory of continuous-time random walks and Markov stability \cite{Delvenne2013,Schaub2012} to rank genes according to their consistency with features that range from local to global geometric structures.
The persistent Rayleigh quotient (PRQ) takes in a filtration on the data (e.g. time) to study bifurcation patterns in gene expression data. The PRQ is based on the Kron reduced (persistent) Laplacian \cite{Dorfler2013,wangPersistentSpectralGraph2019a, memoliPersistentLaplaciansProperties2021}, which considers subgraphs inside a larger graph. It then applies the Rayleigh quotient associated with this operator, resulting in the identification of genes that drive bifurcation processes. To probe the discrete cell type paradigm, we apply the methods to synthetic and experimental data sets, which select subsets of genes that span known cell types and provide possible pathway transitions between them.
The article is organised as follows.
In Section \ref{section:MM_SGT}, we present mathematical preliminaries. We then introduce the three proposed scores (Sections \ref{section:MM_eig}-- \ref{section:MM_PRQ}) and data sets (Section~\ref{sec:data}). In Section \ref{section:Results}, we present and discuss computational results, highlighting the potential of each method for application on single-cell data sets, and then conclude in the final section.
\section{Materials and Methods}\label{section:MM}
\subsection{Preliminaries}\label{section:MM_SGT}
Let $ G = (V, E)$ be an undirected graph where $V=\{1,\ldots,n\}$ are nodes (representing the set of $n$ cells) and $E \subseteq V \times V$ are edges that are weighted by gene correlation or similarity. The weight between cells $u$ and $v$ is recorded in the entry $A_{uv}$ of the weighted adjacency matrix $A$. Let $d_v$ denote the degree of node $v$, and let $D$ denote the diagonal degree matrix with entries $D_{vv}=d_v$.
\begin{definition}
The \emph{combinatorial Laplacian} \(L\), the \emph{symmetrically normalised Laplacian} \(\mathcal{L}\) and the \emph{random walk Laplacian} $L^\mathrm{rw}$ of the graph $G$ are:
\begin{align}
L &= D - A,\label{eq:laplacian} \\
\mathcal{L} &= D^{-1/2}L D^{-1/2} = I - D^{-1/2}AD^{-1/2},\label{eq:normalised_laplacian}\\
L^\mathrm{rw}&=D^{-1}L=I-D^{-1}A.\label{eq:rw_laplacian}
\end{align}
\end{definition}
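For concreteness, the three Laplacians can be assembled for a small weighted path graph (an illustrative sketch in Python/NumPy, not from the paper):

```python
import numpy as np

# The three Laplacians of a weighted path on four nodes.
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 2.0, 0.0],
              [0.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
d = A.sum(axis=1)                        # degrees d_v
D = np.diag(d)

L = D - A                                            # combinatorial Laplacian
Lsym = np.diag(d**-0.5) @ L @ np.diag(d**-0.5)       # symmetrically normalised
Lrw = np.diag(1 / d) @ L                             # random-walk Laplacian

assert np.allclose(L @ np.ones(4), 0)    # constant vectors lie in the kernel of L
assert np.allclose(Lrw @ np.ones(4), 0)
assert np.allclose(Lsym @ np.sqrt(d), 0) # D^{1/2} 1 lies in the kernel of Lsym
assert np.min(np.linalg.eigvalsh(Lsym)) > -1e-12     # positive semi-definite
print("Laplacian identities hold")
```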
\begin{definition}
The \textit{Rayleigh quotient} for a non-zero graph signal $g: V \to \mathbb{R}$ on the nodes of \(G\) is
\begin{equation}
\label{eqn:rayleigh-quotient}
R_L(g)
= \frac{\langle g, L g \rangle}{\langle g, g \rangle}
= \frac{
\sum_{u\sim v}{A_{uv}
(g(u) - g(v))^2}
}{
\sum_{u}{g(u)^2}
},
\end{equation}
where $u\sim v$ indicates that $u$ and $v$ are adjacent nodes in \(G\) and the inner product is defined as $\langle g , h \rangle= \sum_{v\in V} g(v) h(v).$
\end{definition}
If \(g\) is constant, then \(R_L(g)\) is zero.
Substituting the normalised Laplacian into Equation~\ref{eqn:rayleigh-quotient},
we have the following equation:
\begin{equation}
\label{eqn:normalised-rayleigh-quotient}
R_\mathcal{L}(g) = \frac{\langle g, \mathcal{L} g \rangle}{\langle g, g \rangle}
= \frac{\sum_{u \sim v}A_{uv} \left(\frac{1}{\sqrt{d_u}} g(u) - \frac{1}{\sqrt{d_v}} g(v)\right)^2}{\sum_u g(u)^2}
.
\end{equation}
When normalising signals by $D^{1/2}$ \cite{chung1997spectral} so that
\(R_\mathcal{L}(D^{1/2}\mathbf{1})=0\), where $\mathbf{1}\in \mathbb{R}^{n}$ is the vector of ones, we get
\begin{equation}
\label{eqn:rayleigh-quotient-premultiplied}
R_\mathcal{L}(D^{1/2}g) = \frac{\langle D^{1/2}g, \mathcal{L} D^{1/2}g\rangle}{\langle D^{1/2}g, D^{1/2}g\rangle}
= \frac{\langle g, L g\rangle}{\langle g, D g\rangle}
= \frac{
\sum_{u \sim v}{A_{uv}
(g(u) - g(v))^2}
}{
\sum_{u}{g(u)^2 d_u}
}.
\end{equation}
The graph mean \(\mu_G(g)\) of a signal \(g\) is defined as
\begin{equation}
\mu_G (g) = \frac{1}{\sum_{u\in V}d_u}\sum_{v\in V}{g(v)d_v}\label{eq:graph_mean}
\end{equation}
and the graph variance of $g$ is
\(\mathrm{Var}_G(g)=\sum_{v\in V}d_v\left(g(v)-\mu_G(g)\right)^2\) \cite{He2005}.
\begin{definition}
If we re-centre the graph signal $g$ by setting $\tilde{g}(v)=g(v)-\mu_G(g)$, then the \emph{Laplacian score} of $g$ (in the sense of \cite{govekClusteringindependentAnalysisGenomic2019})
is defined as
\begin{equation}
\label{eqn:LS}
LS(g)=R_\mathcal{L}\left(D^{1/2}\tilde{g}\right)=\frac{\sum_{u \sim v} A_{uv}\left(g(u)-g(v)\right)^2}{\mathrm{Var}_G(g)}.
\end{equation}
\end{definition}
The Rayleigh quotient and Laplacian score measure consistency
of the graph signal with the underlying graph structure.
Small scores correspond to signals which exhibit variation consistent with the local graph structures; larger scores correspond to signals inconsistent with the local graph structures.
While the Rayleigh quotient is zero for constant signals (i.e. a perfect score), the Laplacian score is undefined for constant signals and is high for near-constant signals~\cite{chung1997spectral, He2005}.
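A minimal numerical illustration of this behaviour (the path graph and signals below are illustrative choices, not from the paper):

```python
import numpy as np

# Rayleigh quotient and Laplacian score on a small weighted path graph.
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 2.0, 0.0],
              [0.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
d = A.sum(axis=1)
L = np.diag(d) - A

def rayleigh(g):
    """R_L(g) = <g, Lg>/<g, g>; zero exactly for constant signals."""
    return g @ L @ g / (g @ g)

def laplacian_score(g):
    """LS(g) = sum_{u~v} A_uv (g(u)-g(v))^2 / Var_G(g)."""
    mu = (d * g).sum() / d.sum()         # graph mean mu_G(g)
    gt = g - mu                          # re-centred signal
    return gt @ L @ gt / (d * gt**2).sum()

smooth = np.array([0.0, 0.1, 0.9, 1.0])  # varies slowly along the path
noisy = np.array([1.0, -1.0, 1.0, -1.0]) # flips sign on every edge
print(laplacian_score(smooth), laplacian_score(noisy))  # smooth scores lower
```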
\subsection{Eigenscores}\label{section:MM_eig}
The Rayleigh quotient and Laplacian score order graph signals by coherence with the underlying graph.
We remark that this ordering only considers consistency at a single scale.
In order to obtain a finer-grained, multiscale understanding of graph signals, we consider their alignment with different coherent structures on multiple scales in the graph. To explain how we can do this, we first recall the spectrum of the Laplacian.
As both the Laplacian \(L\) and the normalised Laplacian \(\mathcal{L}\)
are symmetric and positive semi-definite,
their eigenvalues are real and non-negative \cite{chung1997spectral}.
For the normalised Laplacian \(\mathcal{L}\),
write the orthonormal eigenbasis as
\(\{e_0, \ldots, e_{n-1}\}\)
with corresponding eigenvalues
\(0=\lambda_0 \leq \lambda_1 \leq \cdots \leq \lambda_{n-1}\).
Given a graph signal \(g\),
\(D^{1/2}g = \sum_{i=0}^{n-1} g_i e_i\)
where \(g_i = \langle D^{1/2}g, e_i \rangle\).
Writing Equation \ref{eqn:rayleigh-quotient-premultiplied} in this eigenbasis gives:
\begin{align}
R_\mathcal{L}(D^{1/2}g) &=
\frac{\langle\sum_i g_i e_i, \sum_j \lambda_j g_je_j \rangle}
{\langle \sum_i g_i e_i, \sum_j g_j e_j \rangle}\notag \\
&= \frac{\sum_i \lambda_i g_i^2}{\sum_i g_i^2}\notag\\
&= \sum_i \lambda_i \left(\frac{g_i}{\|D^{1/2}g\|}\right)^2.\label{eq:eigenbasis}
\end{align}
Given the expression in the eigenbasis in Equation~\ref{eq:eigenbasis}, we now consider the individual contributions to the Rayleigh quotient separately, proposing the following definition.
\begin{definition}[Eigenscore]
Given a graph signal \(g: V \to \mathbb{R}\),
we define the \emph{ \(i\)th eigenscore \(\eig_i\)} by
\begin{equation}
\eig_i(g) = \frac{\langle D^{1/2}g, e_i \rangle}{\| D^{1/2}g \|}.
\end{equation}
It follows that \(R_\mathcal{L}(D^{1/2}g) = \sum_i \lambda_i \eig_i(g)^2\).
\end{definition}
We can view the \(i\)th eigenscore of a graph signal
as the contribution from the \(i\)th eigenvector direction to its Rayleigh quotient.
It can also be viewed as the cosine of the angle between \(D^{1/2}g\)
and the \(i\)th eigenvector.
Thus, a large positive value for \(\eig_i(g)\) indicates strong alignment
of the graph signal with the \(i\)th eigenvector of \(\mathcal{L}\)
and a large negative value indicates strong anti-alignment (i.e. alignment with minus the eigenvector).
The ordering of the eigenvalues by magnitude
explains the multiscale nature of the eigenscore.
Expressing a graph signal in terms of Laplacian eigenvector contributions can be viewed as expanding in a frequency basis. Here, ordering the eigenvectors according to increasing eigenvalue corresponds to considering waves of increasing frequency. Expressing a signal in this basis can be viewed as the graph analogue of a Fourier transform.
In general, computing the full eigendecomposition
is expensive; however, efficient algorithms exist for computing
a few extremal eigenpairs (here, those with the smallest eigenvalues) of a symmetric sparse matrix~\cite{calvetti1994implicitly}.
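The eigenscores and the identity \(R_\mathcal{L}(D^{1/2}g) = \sum_i \lambda_i \eig_i(g)^2\) can be checked on a small example (illustrative sketch; for large sparse graphs one would instead compute only a few eigenpairs, e.g. with \texttt{scipy.sparse.linalg.eigsh}):

```python
import numpy as np

# Eigenscores on a small weighted path graph: decompose D^{1/2}g in the
# orthonormal eigenbasis of the normalised Laplacian, then check that the
# squared eigenscores reassemble the Rayleigh quotient.
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 2.0, 0.0],
              [0.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
d = A.sum(axis=1)
Lsym = np.eye(4) - np.diag(d**-0.5) @ A @ np.diag(d**-0.5)
lam, E = np.linalg.eigh(Lsym)            # columns of E: orthonormal eigenbasis

def eigenscores(g):
    h = np.sqrt(d) * g                   # D^{1/2} g
    return E.T @ h / np.linalg.norm(h)   # eig_i(g) = <D^{1/2}g, e_i>/||D^{1/2}g||

g = np.array([0.0, 0.1, 0.9, 1.0])
scores = eigenscores(g)
assert np.isclose((scores**2).sum(), 1.0)        # cosines of a unit vector
h = np.sqrt(d) * g
assert np.isclose((lam * scores**2).sum(),       # sum_i lambda_i eig_i(g)^2
                  h @ Lsym @ h / (h @ h))        # = R_Lsym(D^{1/2} g)
print(np.round(scores, 4))
```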
\subsubsection{The 0th Eigenscore}
Take the 0th eigenvector in our eigenbasis to be $D^{1/2}\mathbf{1}$, normalised to unit length. Then
$$
\eig_0(g) = \frac{\langle D^{1/2}g, D^{1/2} \mathbf{1}\rangle}{\| D^{1/2}g\| \|D^{1/2} \mathbf{1}\|}
=
\| D^{1/2} \mathbf{1} \|
\, \, \mu_G\left(\frac{g}{\| D^{1/2}g\|} \right),
$$
where $\mu_G$ is the graph mean, as defined in Equation \ref{eq:graph_mean}.
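A minimal numerical check of this identity, assuming the graph mean $\mu_G$ is the degree-weighted average of the signal (our reading of Equation \ref{eq:graph_mean}; function names are illustrative):

```python
import numpy as np

def eig0(A, g):
    """0th eigenscore: cosine between D^{1/2} g and D^{1/2} 1."""
    d = A.sum(axis=1)
    h, e0 = np.sqrt(d) * g, np.sqrt(d)
    return (h @ e0) / (np.linalg.norm(h) * np.linalg.norm(e0))

def graph_mean(A, g):
    """Degree-weighted mean of g (our reading of Equation eq:graph_mean)."""
    d = A.sum(axis=1)
    return d @ g / d.sum()
```

Under this reading, $\eig_0(g) = \| D^{1/2}\mathbf{1} \| \, \mu_G(g / \| D^{1/2}g \|)$ holds exactly.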
\subsubsection{Eigenscores to visualise graph signals}
Projecting gene signals onto the space of low-frequency eigenscores
allows us to visualise gene space and identify meaningful signals (see Figure~\ref{fig:eig_explain}). Noisy signals are mapped close to zero, and interesting signals lie on the periphery in such an eigenspace plot.
Constructing such an embedding using Laplacian eigenvectors is reminiscent of Laplacian eigenmaps \cite{belkin2003laplacian}. However, in \cite{belkin2003laplacian} Laplacian eigenvectors are used to construct an embedding of the nodes of the graph whereas we embed signals on the graph.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{Fig_eigenscores2.png}
\caption{We demonstrate eigenscores on a graph constructed by taking 100 random points each from four touching balls in 30 dimensions and connecting them via a 15-nearest neighbour graph. (A) Laplacian eigenvectors $e_1$ and $e_2$ distinguish the left and right two clusters and the top and bottom two clusters respectively. (B) Different graph signals align or anti-align differently with the two eigenvectors, resulting in a plot of eigenscore $(\eig_1$, $\eig_2)$-space that differentiates the various signals. A random signal plots near the origin.}
\label{fig:eig_explain}
\end{figure}
\subsection{Multiscale Laplacian Score}\label{section:MM_MLS}
The Laplacian score (Equation~\ref{eqn:LS})
considers the change in signal along single edges in the graph.
We propose the multiscale Laplacian score (MLS),
which relies on random walks
to measure the consistency of a signal across local graph neighbourhoods of continuously increasing size. This unsupervised approach provides a multiscale ranking of signal coherence with the graph.
We can determine a finite number of scales at which the random walk admits a Markov-stable partition \cite{delvenne2010stability,lambiotte2014random}, and we pair this pipeline with the MLS.
\subsubsection{Random Walks on Graphs}
Random walks on graphs are stochastic processes that can model a range of phenomena, including diffusion on graphs \cite{masuda2017random}.
For any graph $G$ with adjacency matrix $A$, the evolution of a continuous-time Markov process is governed by the Kolmogorov differential equation:
\begin{equation}
\Dot{\mathbf{p}}=-\mathbf{p}L^\mathrm{rw},\label{eq:de}
\end{equation}
where $\mathbf{p}$ is a time-dependent node vector and $\mathbf{p}_v (t)$ gives the probability of a random walker being on node $v$ at time $t$.
In this Markov process a random walker jumps to adjacent nodes (with probability proportional to the respective edge weight) after a period of time drawn from an $\mathrm{Exp}(1)$ random variable.
The stationary distribution $\pi\in \mathbb{R}^{n}$ is the unique left eigenvector of $L^\mathrm{rw}$ with eigenvalue 0 whose entries sum to 1. The solution to Equation \ref{eq:de} is $\mathbf{p}(t)=\mathbf{p}(0)\exp(-tL^\mathrm{rw})$ and $\pi=\lim_{t\to\infty}\mathbf{p}(t)$.
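A short sketch of this process in Python (assuming $L^\mathrm{rw}=I-D^{-1}A$, as in Equation \ref{eq:rw_laplacian}; the function name is illustrative):

```python
import numpy as np
from scipy.linalg import expm

def transition_and_stationary(A, t):
    """P(t) = exp(-t L_rw) and the stationary distribution pi."""
    d = A.sum(axis=1)
    L_rw = np.eye(len(d)) - A / d[:, None]   # L_rw = I - D^{-1} A
    return expm(-t * L_rw), d / d.sum()
```

The rows of $P(t)$ sum to one, $\pi P(t)=\pi$ for all $t$, and $P(t)\to\mathbf{1}\pi^\top$ as $t\to\infty$ on a connected graph.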
\subsubsection{Community detection}
Community detection in networks is concerned with finding groups of nodes that are more tightly connected to each other than to the rest of the network. Some of the best known community detection algorithms, such as modularity optimisation \cite{porter2009communities}, exploit combinatorial properties of the graph. The communities found by optimising modularity are {\it dense}; i.e., there are many more edges between nodes in the group than with the rest of the network.
Community detection has extended from the notion of dense connections defining a community to also include connectivity via random walks. Markov stability, a dynamical approach for community detection, relies on random walks to detect stable graph partitions $V=C_1\sqcup C_2\sqcup\dots\sqcup C_k$ at multiple resolutions \cite{delvenne2010stability,lambiotte2014random}.
We call each $C_i$ a community and assume that it is non-empty. Moreover, we denote the community to which node $v$ belongs as $c_v$ and assume that all subgraphs induced by $C_i$ are connected. Two partitions are
considered identical if one can be obtained from the other by permuting the labels $1,...,k$.
\begin{definition}
Let $\{C_i\}_{i=1,...,k}$ be a partition of the graph $G$ into communities.
If $M=D^{-1}A$ is the random walk transition matrix with stationary distribution $\pi$, then the
\emph{continuous Markov stability} of the partition at
time \(t\) is
$$r_\mathrm{cont}(\{C_i\},t)=\sum_{u,v\in V}\pi_u\left(P(t)_{uv}-\pi_v\right)\delta(c_u,c_v),$$
where $\delta$ is the Kronecker delta and $P(t):=\exp(-tL^\mathrm{rw})$ is the continuous time transition matrix \cite{Delvenne2013}.
\end{definition}
The Markov stability of a graph partition at time $t$ is the probability of a random walker remaining in its initial community after walking for time $t$, minus the probability that two independent random walkers are in the same community at time $t$. All walkers are assumed to start in the stationary distribution.
The Markov stability of a partition $\{C_i\}$ at time $t$ takes values in the range $(-1/2, 1]$. High values indicate that a random walker tends to get trapped in one of the groups, which is what we expect in the presence of communities.
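The definition translates directly into a few lines of code; the sketch below (our own, with an illustrative function name) evaluates $r_\mathrm{cont}$ for a partition given as a label vector:

```python
import numpy as np
from scipy.linalg import expm

def markov_stability(A, labels, t):
    """Continuous Markov stability r_cont({C_i}, t) of a node partition."""
    d = A.sum(axis=1)
    pi = d / d.sum()
    P = expm(-t * (np.eye(len(d)) - A / d[:, None]))  # P(t) = exp(-t L_rw)
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]         # delta(c_u, c_v)
    return float(np.sum(pi[:, None] * (P - pi[None, :]) * same))
```

Note that the trivial one-community partition always scores zero, since $\sum_v (P(t)_{uv}-\pi_v)=0$ for every node $u$.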
For each value of $t$, coherent community structures on a graph can be found by maximising Markov stability
using the Louvain method \cite{Blondel2008}, a successful algorithm for finding community structures at different scales in applications \cite{bacik2016flow, Beguerisse2013, liu2020graph}.
The choice of values of $t$ to use for finding community structures via maximisation of Markov stability depends on the graph $G$. For example, a complete graph will only have one sensible community structure (containing a single community) which will be detected at a relatively large $t$, while many real world networks exhibit community structures at a variety of scales $t$.
The partitions at different $t$ obtained from this optimisation are assessed using the mean pairwise \emph{variation of information (VI)}, which
tests the consistency and robustness of partitions \cite{meilua2007comparing}.
At resolutions for which there is an obvious community structure, the VI is relatively small and takes a local minimum (viewed as a function of $t$). This behaviour is explained in \cite{Schaub2012} and illustrated in Figure \ref{fig:vi_eg}.
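For reference, the VI between two partitions given as label vectors can be computed as $H(X\mid Y)+H(Y\mid X)$; a small sketch (our own naming):

```python
import numpy as np

def variation_of_information(x, y):
    """VI between two partitions given as label vectors on the same nodes:
    VI(X, Y) = H(X | Y) + H(Y | X)."""
    x, y = np.asarray(x), np.asarray(y)
    vi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            p_ab = np.mean((x == a) & (y == b))  # joint probability
            if p_ab > 0:
                p_a, p_b = np.mean(x == a), np.mean(y == b)
                vi -= p_ab * (np.log(p_ab / p_a) + np.log(p_ab / p_b))
    return vi
```

The VI is a metric on partitions: it is symmetric and vanishes exactly when the two partitions coincide.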
\begin{figure}
\centering
\includegraphics[width=\textwidth]{vi_eg_fig.png}
\caption{The graph on the left displays community structures at four different scales, exemplified by the groups A, B, C and D.
When computing the mean pairwise variation of information (right) as a function of scale (Markov time), we find local minima corresponding to resolutions A (256 communities), B (64 communities), C (16 communities) and D (4 communities). Figure inspired by \cite{StabilityEg}.}
\label{fig:vi_eg}
\end{figure}
\subsubsection{Signal scores at multiple resolutions}
We can reinterpret the Laplacian score (Equation~\ref{eqn:LS}) in
terms of the random walk Laplacian (Equation \ref{eq:rw_laplacian}):
\begin{equation}
LS(g)=\frac{\left\langle D^{1/2}\tilde{g}, D^{1/2}L^\mathrm{rw} \tilde{g}\right\rangle}{\left\langle D^{1/2}\tilde{g}, D^{1/2}\tilde{g}\right\rangle} = \frac{\sum_{u, v\in V} d_u(D^{-1}A)_{uv}\left(g(u)-g(v)\right)^2}{2\cdot\mathrm{Var}_G(g)}. \label{eq:LS-rw}
\end{equation}
Thus the Laplacian score of a signal $g$
is the expected squared difference in the signal $g$ that is observed when a random walker at stationary distribution takes exactly one step following transition matrix $D^{-1}A$, divided by twice the graph variance.
By extending the Laplacian score from a single random step to a random walk for time $t$,
we arrive at our definition for the multiscale Laplacian score:
\begin{definition}
Let $G=(V,E)$ be a graph with adjacency matrix $A$, $g:V\to\mathbb{R}$ be a signal on $G$ and $t\in\mathbb{R}_{\geq 0}$. The \emph{multiscale Laplacian score} of $g$ at resolution $t$ is defined as
$$
MLS(g, t)=\frac{\left\langle D^{1/2}\tilde{g}, D^{1/2}(I-P(t)) \tilde{g}\right\rangle}{\left\langle D^{1/2}\tilde{g}, D^{1/2}\tilde{g}\right\rangle}=\frac{\sum_{u, v\in V}d_uP(t)_{uv}\left(g(u)-g(v)\right)^2}{2\cdot\mathrm{Var}_G(g)},
$$
where we use the identity $d_uP(t)_{uv}=d_vP(t)_{vu}$ for all $t\in\mathbb{R}_{\geq0}$ and all $u,v\in V$.
\end{definition}
If the expected squared change in the signal $g$ observed by a continuous-time random walker
after time \(t\) is small, then $\mathrm{MLS}(g,t)$ is small.
In such a case, we say that the signal $g$ is consistent with the graph structure of $G$ at resolution $t$. The MLS extends the Laplacian score (Equation~\ref{eqn:LS}) \cite{govekClusteringindependentAnalysisGenomic2019} by performing a consistency analysis at multiple resolutions. Analysing multiple resolutions, ranging from local to global structures, is useful for studying graphs $G$ paired with signals that are consistent at multiple resolutions.
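A direct implementation of the MLS from the definition, assuming $\tilde g$ is the signal centred by its degree-weighted mean (our reading of the normalisation used in the Laplacian score; the function name is illustrative):

```python
import numpy as np
from scipy.linalg import expm

def mls(A, g, t):
    """Multiscale Laplacian score MLS(g, t) of a signal g at resolution t."""
    g = np.asarray(g, float)
    d = A.sum(axis=1)
    gt = g - d @ g / d.sum()                            # centred signal g~
    P = expm(-t * (np.eye(len(d)) - A / d[:, None]))    # P(t) = exp(-t L_rw)
    # <D^{1/2} g~, D^{1/2}(I - P(t)) g~> / <D^{1/2} g~, D^{1/2} g~>
    return (gt @ (d * (gt - P @ gt))) / (gt @ (d * gt))
```

By construction $\mathrm{MLS}(g,0)=0$, and $\mathrm{MLS}(g,t)\to 1$ as $t\to\infty$ for any non-constant signal on a connected graph, so the score interpolates between purely local and fully global consistency.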
\subsubsection{MLS analysis pipeline}
In the MLS analysis pipeline, we partition a given graph $G$ into communities at 100 Markov times using the Louvain algorithm \cite{Blondel2008}.
We then select a small set of Markov times at which the VI attains local minima. Next we calculate the MLS at each of these resolutions and for each signal on the graph.
We can then compare the MLS at different Markov times to identify gene signals particularly consistent with a given topological structure at a given resolution. For example, a small MLS at an earlier Markov time (compared to the mean behaviour of all signals) is more consistent with structures at that resolution (see Figure \ref{fig:mls_eg}).
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{mls_eg_fig.png}
\caption{We construct a graph with three communities, all of different sizes. (A) the VI (on y-axis, VI is 0 except for a brief spike around $t=3.35$) identifies resolutions $t_1$, at which all three communities are identified, and $t_2,$ at which two communities are identified
(note that due to the simplicity of the graph,
there are intervals of local minima instead of points; we pick $t_1$ before the spike and $t_2$ after).
In (B), we calculate the MLS at $t_1$ and $t_2$ (given by black circles) of three signals that are equal to 1 on one of the $t_1$-communities (constant part of the signal is highlighted by arrows) and uniformly random elsewhere, and one completely random signal. The signal that is constant on the largest cluster (bottom left) is identified as highly consistent at both times. The random signal (top right) is identified as inconsistent at both times. Conversely, the signal constant on the smallest community (top left) has a high MLS at $t_2$ relative to the MLS at $t_1$, separating it from the signal constant on the community of intermediate size (centre).
}
\label{fig:mls_eg}
\end{figure}
\subsection{Persistent Rayleigh Quotient}\label{section:MM_PRQ}
Given a graph $G$ and signals $g:V \rightarrow \mathbb{R}$, we may have additional information associated to each node of \(G\)
that we would like to use to further inform our analysis.
In single-cell data, for example, this could be the developmental
time of each observed cell, which associates a real
value to each node of the graph \(G\).
\begin{definition}[Filtered graph]
A filtration of a graph \(G\)
is an integer-valued function {\(f : V \to \mathbb{Z}\)}
on the nodes of \(G\).
For \(i \in \mathbb{Z}\)
the sub-level set \(\alpha(i)\)
of \(f\) at \(i\) is the set
\[
\alpha(i) = \{v \in V: f(v) \leq i\}
,\]
the nodes of \(G\) with filtration value not greater than \(i\).
The induced subgraph \(G[\alpha(i)]\)
is the subgraph of \(G\) with nodes \(\alpha(i)\)
and every edge in \(G\) that has both
endpoints in \(\alpha(i)\).
Then the filtration \(f\) gives a sequence
of induced subgraphs of \(G\)
\[
G[\alpha(i_0)] \leq G[\alpha(i_1)] \leq \cdots \leq G[\alpha(i_n)] \leq G
\]
for each increasing sequence \((i_k)_{k=0}^n\) of integers.
\end{definition}
Topological data analysis studies the evolution of topological invariants across
filtered graphs \(G[\alpha(i)]\).
The most common tool is persistent homology \cite{ghrist2008barcodes}
which computes how invariants, such as connected components
in \(G[\alpha(i)]\), persist in the larger graph \(G[\alpha(j)]\).
Persistent homology is limited to studying the structure of the filtered graph itself. To analyse the signals on the sequence of subgraphs, we first recall the persistent Laplacian and then introduce the persistent Rayleigh quotient.
\subsubsection{Persistent Laplacian}
Given a subset \(\alpha \subseteq V\), one
can reduce the Laplacian of \(G\)
to a Laplacian on the nodes \(\alpha\)
by a method known as Kron reduction~\cite{Dorfler2013}.
Briefly, Kron reduction removes the nodes in \(V\setminus{\alpha}\)
and adds weighted edges that preserve the geometric
structure between the nodes \(\alpha\) in \(G\).
For example, in network circuit theory \cite{Dorfler2013},
Kron reduction creates a simpler representation
of a circuit whilst
preserving resistances.
Memoli, Wang and collaborators extended this method to higher-order
graphs (i.e. simplicial complexes)
\cite{memoliPersistentLaplaciansProperties2021, wangPersistentSpectralGraph2019a}
and showed that the Kron reduced Laplacian is the 0-degree persistent Laplacian.
There is a direct relationship
between persistent homology
and the Kron reduction/persistent Laplacian:
the nullity of the reduced Laplacian is
exactly the persistent Betti number
of \(G[\alpha] \subseteq G\)~\cite{memoliPersistentLaplaciansProperties2021}.
For graphs, this persistent Betti number is
the number of connected components
of \(G[\alpha]\) that remain
disconnected in \(G\).
For subsets \(\alpha, \beta \subseteq V\)
let \(L[\alpha, \beta]\)
be the submatrix of \(L\)
with rows indexed by \(\alpha\) and columns indexed by \(\beta\).
Under an appropriate reordering
of the node labels, the Laplacian \(L\) has block form
\[
L =
\begin{bmatrix}
L[\alpha, \alpha] & L[\alpha, \alpha^c] \\
L[\alpha^c, \alpha] & L[\alpha^c, \alpha^c]
\end{bmatrix},
\]
where \(\alpha^c = V \setminus \alpha\)
is the complement of \(\alpha\) in \(V\).
\begin{definition}[Kron reduction \cite{Dorfler2013}/Persistent Laplacian \cite{memoliPersistentLaplaciansProperties2021}]
\label{def:kron_reduction}
The Kron reduction
(or 0-degree persistent Laplacian)
of \(L\) with respect to \(\alpha\)
is the matrix
\[
L_{\alpha}
= L[\alpha, \alpha]
- L[\alpha, \alpha^c] L[\alpha^c, \alpha^c]^{-1} L[\alpha^c, \alpha],
\]
which is also known as the Schur complement
\(L/L[\alpha^c, \alpha^c]\).
We analogously define \(\mathcal{L}_\alpha\)
for the normalised Laplacian \(\mathcal{L}\).
\end{definition}
The Kron reduction \(L_\alpha\) of $L$
arises from performing Gaussian elimination on \(L\)
to remove blocks \(L[\alpha, \alpha^c]\) and \(L[\alpha^c, \alpha]\):
\[
\begin{bmatrix}
L[\alpha, \alpha] & L[\alpha, \alpha^c] \\
L[\alpha^c, \alpha] & L[\alpha^c, \alpha^c]
\end{bmatrix}
\rightsquigarrow
\begin{bmatrix}
L[\alpha, \alpha]
- L[\alpha, \alpha^c] L[\alpha^c, \alpha^c]^{-1} L[\alpha^c, \alpha]
& 0 \\
0 & L[\alpha^c, \alpha^c]
\end{bmatrix}
.
\]
\begin{lemma}[Lemma 2.6 in \cite{Dorfler2013}]
In Definition \ref{def:kron_reduction} the following hold:
\begin{enumerate}
\item \(L_\alpha\) is well-defined as \(L[\alpha^c, \alpha^c]\) is invertible.
\item \(L_\alpha\) is symmetric.
\item \(L_\alpha\mathbf{1} = \mathbf{0}\), where \(\mathbf{1}\) is the column vector of ones.
\end{enumerate}
\end{lemma}
Hence, \(L_\alpha\) is a Laplacian matrix in the sense
that there exists a weighted graph with nodes \(\alpha\)
and Laplacian equal to \(L_\alpha\).
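The Schur-complement formula gives a very short implementation; the sketch below (our own naming) assumes, as in the lemma, that \(L[\alpha^c, \alpha^c]\) is invertible:

```python
import numpy as np

def kron_reduce(L, alpha):
    """Kron reduction (0-degree persistent Laplacian) of L onto nodes alpha."""
    comp = np.setdiff1d(np.arange(L.shape[0]), alpha)   # alpha^c
    # Schur complement L / L[alpha^c, alpha^c]
    return (L[np.ix_(alpha, alpha)]
            - L[np.ix_(alpha, comp)]
            @ np.linalg.solve(L[np.ix_(comp, comp)], L[np.ix_(comp, alpha)]))
```

The result is again symmetric with zero row sums, i.e. the Laplacian of a weighted graph on the nodes \(\alpha\).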
Suppose we have a filtration \(f\) on the nodes of the graph \(G\).
Then
for \(i, j \in \mathbb{Z}\) with \(i \leq j\) define
\[
L_i^j =\left(L^{\alpha(j)}\right)_{\alpha(i)}
\]
the \((i,j)\)-persistent Laplacian,
where \(L^{\alpha(j)}\)
is the Laplacian of the graph \(G[\alpha(j)]\).
Again, $\mathcal{L}_i^j$ is defined analogously.
\begin{definition} [Persistent Rayleigh quotient]
For a graph \(G\) with filtration \(f\)
and \(i, j \in \mathbb{Z}\) with \(i \leq j\),
the
\emph{persistent Rayleigh quotient}
of a signal \(g: G \to \mathbb{R}\) is
\[
\PRQ(i, j)(g) = R_{L_i^j}(g)
= \frac{\langle g , L_i^jg \rangle}{\langle g , g \rangle},
\]
which is the Rayleigh quotient (as in Equation~\ref{eqn:rayleigh-quotient}) using the \((i, j)\)-persistent Laplacian.
We further define the normalised persistent Rayleigh quotient to be
\[
\widehat{\PRQ}(i, j)(g) = R_{\mathcal{L}_i^j}((D_i^j)^{1/2}g)
= \frac{\langle g , L_i^jg \rangle}{\langle g , D_i^j g \rangle},
\]
which is the Rayleigh quotient (as in Equation~\ref{eqn:rayleigh-quotient-premultiplied}) using the normalised version of the \((i,j)\)-persistent Laplacian
on the normalisation of the signal.
Here, \(D_i^j\) is the degree matrix of
the graph corresponding to \(L_i^j\).
When applying \(L_i^j\) and \(D_i^j\) to \(g\)
we implicitly restrict \(g\) to the nodes \(\alpha(i)\).
\end{definition}
\subsubsection{Application to Cell Bifurcation}
We demonstrate the persistent Rayleigh quotient on a toy bifurcation model \(G\) where
\(V = \{a, b, c\}\)
and \(E = \{(a, c), (b, c)\}\)
(Fig. \ref{fig:pl_explain}).
We consider the graph signals
\begin{align*}
& g_1 : a, b, c \mapsto 1, 1, 1 , \\
& g_2 : a, b, c \mapsto 1, 0, 1 , \\
& g_3 : a, b, c \mapsto 1, 1, 0 , \\
& g_4 : a, b, c \mapsto 0, 1, 0 .
\end{align*}
Suppose that this graph represents a biological system: node \(c\) represents a parent cell type at developmental time \(t_0\)
and nodes \(a\) and \(b\) are daughter cell types at developmental time \(t_1\).
If we filter the graph \(G\) by time,
then we only ever have one connected component.
Thus we filter in `reverse time' by setting the filtration to be
\(t_\text{max} - t\).
Explicitly, we define a filtration \(f\) by \(f(a) = f(b) = 0\) and \(f(c) = t_1 - t_0\).
Now \(G[\alpha(0)]\) has two connected components which merge into a single component in
\(G = G[\alpha(t_1 - t_0)]\).
When we perform the Kron reduction of \(L=
L^{t_1 - t_0}\) with respect to \(\alpha(0)\) to get the
Laplacian \(L_{0}^{t_1 - t_0}\), the graph associated to this Laplacian has just \(a\) and \(b\) as
nodes and a single \(1/2\)-weight edge between them (Fig. \ref{fig:pl_explain}B).
This graph still has one
connected component
but the connection is weaker.
In the language of persistent homology, this corresponds to
two \(H_0\)-bars: one is born at filtration value \(0\) and dies before value \(t_1 - t_0\), the other is born at value \(0\) and persists infinitely.
Comparing the usual normalised Rayleigh quotient, corresponding to \(\widehat{\PRQ}(t_1 - t_0, t_1 - t_0)\), to the normalised persistent Rayleigh quotient
\(\widehat{\PRQ}(0, t_1 - t_0)\) separates the binary graph functions \(g_i\) on \(G\) (Figure \ref{fig:pl_explain}C).
In the context of single-cell differentiation data, graph signals correspond to genes.
Gene \(g_2\) lies
above the diagonal in Figure \ref{fig:pl_explain}C and is highly expressed
in the parent cell type and only one of the daughter cell types.
Such behaviour indicates that a gene is involved in determining the cell fate.
Similarly, gene \(g_3\), which lies below the diagonal, is expressed in both daughter cell types
but not the parent cell type,
corresponding to genes only expressed after
the possible switching has occurred.
Genes corresponding to \(g_1\) are constantly expressed over time,
representing possible `housekeeping' genes and, thus, have
zero (or close to zero) persistent Rayleigh quotients.
Finally, gene \(g_4\) is only expressed in a single daughter cell type
and lies along the diagonal.
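This toy example can be checked numerically; the sketch below (the node ordering and the reduced degree matrix $D_0^{t_1-t_0}=\mathrm{diag}(1/2,1/2)$ are our reading of the construction) reproduces the positions of $g_1,\dots,g_4$ relative to the diagonal in Figure \ref{fig:pl_explain}C:

```python
import numpy as np

# Toy bifurcation: nodes ordered (a, b, c), edges (a, c) and (b, c).
L = np.array([[ 1.,  0., -1.],
              [ 0.,  1., -1.],
              [-1., -1.,  2.]])
D = np.diag([1., 1., 2.])

# Kron reduction onto alpha(0) = {a, b}: eliminate the parent node c.
L_red = L[:2, :2] - L[:2, 2:] @ np.linalg.solve(L[2:, 2:], L[2:, :2])
D_red = np.diag(np.diag(L_red))   # reduced degrees (row sums of L_red are zero)

def nprq_pair(g):
    """(full normalised RQ, normalised persistent RQ) of a signal g."""
    g = np.asarray(g, float)
    x = g @ L @ g / (g @ D @ g)
    ga = g[:2]                    # restrict g to alpha(0) = {a, b}
    y = ga @ L_red @ ga / (ga @ D_red @ ga)
    return x, y
```

Under these conventions, $g_1$ sits at the origin, $g_2$ above the diagonal, $g_3$ below it, and $g_4$ on it, matching the discussion above.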
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{examplefig_final.png}
\caption{
The persistent Rayleigh quotient for cell differentiation.
(A) The model for the bifurcating differentiation process.
(B) The effects on the graph and graph Laplacian
after applying the Kron reduction process to the daughter cells.
(C) The normalised Rayleigh quotients of (x-axis) full Laplacian \(L_{t_1 - t_0}^{t_1 - t_0}\)
and (y-axis) persistent Laplacian \(L_0^{t_1 - t_0}\)
for binary functions on the graph representing high and low gene expression
of a particular gene.
The persistent Rayleigh quotient separates these gene expressions
based on relevance to the bifurcation.
}
\label{fig:pl_explain}
\end{figure}
\subsection{Data Sets}\label{sec:data}
We apply the proposed methods on three different experimental scRNA-seq data sets: 2,700 human peripheral blood mononuclear cells (PBMC) \cite{10XPBMC}, 24,911 human T cells infiltrating lung tumors and adjacent normal tissue \cite{lambrechts2018phenotype} (previously analysed with the Laplacian score \cite{govekClusteringindependentAnalysisGenomic2019}), and 447 mouse foetal liver cells at different stages of development \cite{yangSinglecellTranscriptomicAnalysis2017}.
Cells in the mouse foetal liver data set were sampled on embryonic days 10, 11, 12, 13, 14, 15, and 17.
\subsubsection{Preprocessing of PBMC and T cell data sets}
We normalised the PBMC and T cell scRNA data sets using the variance stabilizing transform (VST) \cite{hafemeister2019normalization} (also commonly referred to as SC Transform).
The VST returns the 3,000 genes with the highest dispersion in each data set, which is then reduced to its 30 principal components with the highest variance,
following the recommendation given in the manual of the \texttt{R}-library Seurat \cite{SeuratV4}.
We then construct a $k$-nearest neighbour ($k$-nn) graph on cells for both data sets, using $k=15$, and weight the edges of these graphs according to the weights given by the dimension reduction algorithm UMAP \cite{mcinnes2018umap}.
We use cosine-dissimilarity for the PBMC data (following the Seurat tutorials) and Pearson correlation-distance for the T cell data (following \cite{govekClusteringindependentAnalysisGenomic2019}). We sample 3,000 cells at random between the PCA and $k$-nn graph steps in the T cell data set (following \cite{govekClusteringindependentAnalysisGenomic2019}).
\subsubsection{Preprocessing of mouse foetal liver cell data set}
As the mouse data set is substantially smaller than the other data sets
we used a simpler preprocessing strategy.
The public data from \cite{yangSinglecellTranscriptomicAnalysis2017}
was provided in transcripts-per-million (TPM)
and we further applied a \(\log_{e}(x + 1)\) transform.
The 10,000 most highly varying genes were retained and a UMAP-weighted $k$-nn graph was built using \(k=3\) with the Euclidean metric.
As in the original paper, we plot the resulting graph on the first two principal components (Fig. \ref{fig:mouse-data set}).
\subsubsection{Previous results on PBMC data}
In the UMAP plot generated from the variance stabilised PBMC data (see Figure \ref{fig:pbmc_clusters}), five large components and two smaller ones are visible.
By colouring the UMAP plot by marker gene expression, the VST vignette in \cite{seuratVST} identifies that the large components roughly correspond to CD4 T cells, CD8 T cells, NK cells, B cells and monocytes, while the small components contain platelets and dendritic cells.
The clustering algorithm applied to the data splits these components up further. By performing a differential gene expression (DGE) analysis, the VST vignette suggests that the CD4 and CD8 T cells can each be split into three subpopulations. The B cells and NK cells are each split into two subpopulations, closely linked to high expression of known marker genes. The top ten differentially expressed genes in the twelve resulting clusters are given in Table \ref{Table:pbmc_dge}.
\subsubsection{Previous results on T cell data}
Lambrechts et al. \cite{lambrechts2018phenotype} identify several subclusters of T cells based on clustering in t-SNE (9 clusters) and marker genes (6 subgroups, see Figure \ref{fig:tcell_clusters}). There is only one visible connected component in the t-SNE plot. Govek et al. \cite{govekClusteringindependentAnalysisGenomic2019} applied the combinatorial Laplacian score
to identify a variety of genes, including \textit{HAVCR2}, \textit{RSAD2} and \textit{GZMK}, that are consistent with the topological structure of the data set but do not correspond to the clusters previously defined in \cite{lambrechts2018phenotype}.
Govek et al. also demonstrated on the T cell data that the discriminating power of the combinatorial Laplacian score (measured by the area under the receiver-operating characteristic curve) is comparable to that of conventional DGE methods and far superior to that of variance \cite{govekClusteringindependentAnalysisGenomic2019}.
\subsubsection{Previous results on mouse foetal liver cell data set}
Yang et al. \cite{yangSinglecellTranscriptomicAnalysis2017}
selected 1,761 heterogeneously expressed genes
that correlate with the first two principal components
of the data which were then clustered.
In a separate study, Mu et al. \cite{muEmbryonicLiverDevelopmental2020}
ranked genes that were differentially expressed in
hepatocyte development.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Tcell_clusters.png}
\caption{Lambrechts et al. \cite{lambrechts2018phenotype} classified T cells into six sub-cell types based on marker genes.}
\label{fig:tcell_clusters}
\end{figure}
\subsection{Code Availability}
The code for computing eigenscores, the multiscale Laplacian score,
and the persistent Rayleigh quotient for signals on a graph is available here:
\href{https://github.com/osumray/multiscale-signal-selection-single-cell.git}{https://github.com/osumray/multiscale-signal-selection-single-cell.git}.
\section{Results}\label{section:Results}
In this section we apply the three multiscale methods outlined above
to three single-cell datasets.
Eigenscores rank genes at different frequencies: high eigenscores identify dominant genes that align with the underlying cell similarity graph as the frequency increases.
The multiscale Laplacian score (MLS) identifies coherent genes as the distance traversed by a random walker on the cell similarity graph is scaled. Finally, the persistent Rayleigh quotient identifies genes involved in bifurcation processes when additional temporal metadata are available.
\subsection{Eigenscores}\label{section:Results_eig}
Eigenscores test whether features such as genes, viewed as functions on nodes representing cells of a graph, align or anti-align with the Laplacian eigenvectors on the graph.
They can be used to rank genes similarly to DGE, but on different scales in the data, according to their coherence with the topology of the cell similarity graph.
They can also be applied to explore gene expression by visualising genes in a gene space.
By scoring genes for alignment with individual eigenvectors, as well as selecting relevant genes, we can often give meaning to the biological processes in which these genes are involved.
Eigenscores are meaningful in data sets with clear community structures, e.g. the PBMC data set. Eigenscores are also meaningful in data sets that cannot be decomposed into distinct clusters, but rather have a continuous structure, e.g. the T cell data set. In either case the eigenscores do not rely on predefined clusters; they scan through the data in an unsupervised way.
\subsubsection{The geometry of PBMC genes via eigenscores}
We compute eigenscores of the top genes in the PBMC data set \cite{10XPBMC} (see Table S\ref{Table:pbmc_eig}). While many selected genes overlap with differential gene expression (DGE), validating eigenscores, this method identifies 26 additional genes (see Table S\ref{Table:pbmc_dge}).
To interpret the genes, we compare to 12 finer cell subtypes or 7
previously determined broader cell subtypes \cite{seuratPBMC} (see Figure~\ref{fig:eigenvector_PBMC}A, Figure~\ref{fig:pbmc_clusters}). Eigenscores select genes that represent broader cell types (e.g. \textit{MALAT1} for all lymphocytes or \textit{FTL} and \textit{FTH1} for all monocytes). Because these genes are important in multiple cell clusters, differential gene expression (DGE) does not identify them as significant for any single cluster.
Eigenscores highlight \textit{FCGR3A} (Table S\ref{Table:pbmc_eig}, $e_4$ and $e_5$), a marker gene to distinguish CD56dim from CD56bright NK cells \cite{seuratPBMC};
however, it is not ranked in the top 10 genes per cluster by DGE analysis (Table \ref{Table:pbmc_dge}). Moreover, \textit{PPBP}, a highly differentially expressed marker gene on platelets, is ranked highly by eigenscores but not by DGE. For completeness, we include a qualitative and quantitative comparison between the eigenscore and DGE rankings (Figure \ref{fig:DE}), and present the set complement of the top scores.
We can further explore the relationships between genes by projecting the gene space of eigenscores via UMAP (Figure~\ref{fig:eigenvector_PBMC}B). The visualisation of the 16-dimensional low-frequency eigenscores (eigenscores 1-16, corresponding to $0<\lambda_i<0.1$) highlights the gene signals that are most coherent with the cell similarity graph structure.
Genes plotting in the centre (blue) have low eigenscores,
where the gene expression is incoherent with the graph topology (see for example gene \textit{BAG4}).
The flares in the gene space with high eigenscores correspond to groups of genes strongly expressed on clusters corresponding to broad cell types or the cell cycle. We explore and interpret the continuous signal of gene space in Figure~\ref{fig:eigenvector_PBMC}B with flares interpreted clockwise: (I) platelets, (II) continuous transition of different subtypes of monocytes, (III) B cells and dendritic cells, (IV) NK cells and CD4 T cells (V) broad cell types (e.g. \textit{MALAT1} expressed on all lymphocytes), (VI) cell cycle genes differentially expressed on a previously unidentified cluster. We highlight that two flares of the gene space are missed by DGE (see Figure \ref{fig:DE}), specifically broader cell types identified in Figure~\ref{fig:eigenvector_PBMC}BV. Figure \ref{fig:DE}C gives a quantitative comparison of the amount of overlap in the eigenscore ranking versus the DGE ranking, showing that there is consistent overlap while we also find additional genes.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Fig_PBMC.png}
\caption{Geometry of cell space and gene space. (A) Cell types in PBMC data \cite{10XPBMC}.
(B) UMAP of genes set in eigenscore space for eigenvectors 1-16. Genes (dots) are colour-coded for the logarithm of the norm of the vector in 16-dimensional eigenscore space.
Genes with similar expression patterns in the PBMC single-cell data \cite{10XPBMC} plot close together in eigenscore space, and expression patterns vary continuously as we move through this space. The outward branches I-VI correspond to genes that
are expressed highly on specific groups of cells.
}
\label{fig:eigenvector_PBMC}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Fig_de.png}
\caption{Eigenscores compared to differential gene expression (DGE) on PBMC data set \cite{10XPBMC}. (A) Comparative study of DGE ranking using Seurat clustering (log of rank computed from adjusted p-value on x-axis) versus ranking by norm in eigenscore space (log of eigenscore rank of 16 lowest frequencies on y-axis). Example genes in top 100 for one ranking but not the other shown on the sides.
(B) Top 100 genes ranked by adjusted p-value in DGE marked on the eigenscore UMAP plot of genes from Figure \ref{fig:eigenvector_PBMC}. Two regions in the UMAP not found in the top of DGE are branch V from Figure \ref{fig:eigenvector_PBMC}B (T cell and lymphocyte genes that are expressed in larger groups of cells); branch VI (genes expressed in RRM2+ cluster that is not found by DGE).
(C) Quantitative comparison of gene ranks given by adjusted p-value in DGE versus norm in 16-dimensional eigenscore space.
}
\label{fig:DE}
\end{figure}
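As a concrete illustration of the ranking used above, the following minimal sketch (our own toy construction, not the paper's implementation; the $k$-nn graph construction, centring, and normalisation choices are assumptions) computes eigenscores as absolute projections of centred gene vectors onto the lowest-frequency eigenvectors of the cell-graph Laplacian, and ranks genes by their eigenscore norm:

```python
import numpy as np

def eigenscores(adjacency, genes, n_vec=3):
    """Absolute projections of centred gene vectors onto the n_vec
    lowest-frequency non-trivial eigenvectors of the cell-graph Laplacian."""
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    _, eigvecs = np.linalg.eigh(laplacian)        # eigenvalues in ascending order
    low = eigvecs[:, 1:1 + n_vec]                 # skip the constant eigenvector
    centred = genes - genes.mean(axis=1, keepdims=True)
    return np.abs(centred @ low)                  # shape (n_genes, n_vec)

# toy 'trajectory' of 30 cells: a path graph, whose low-frequency Laplacian
# eigenvectors are smooth cosine-like modes along the path
n = 30
A = np.zeros((n, n))
idx = np.arange(n - 1)
A[idx, idx + 1] = A[idx + 1, idx] = 1

cells = np.arange(n)
smooth_gene = np.cos(np.pi * (cells + 0.5) / n)   # varies slowly along the path
noisy_gene = (-1.0) ** cells                      # flips sign cell to cell
S = eigenscores(A, np.vstack([smooth_gene, noisy_gene]))
norms = np.linalg.norm(S, axis=1)                 # eigenscore-norm ranking
```

On this toy graph the slowly varying gene dominates the low-frequency eigenscores while the cell-to-cell oscillating gene is suppressed, mirroring how eigenscores favour genes aligned with intrinsic low-frequency patterning.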
\subsubsection{Eigenscores for analysing data with continuous structure: T cells}
We next compute eigenscores of the T cell data \cite{lambrechts2018phenotype} (corresponding to $0<\lambda<0.1$).
As before, we can visualise the gene space in Figure \ref{fig:eigenvector_TCell}A, where genes lie near one another if they have similar expression patterns. For example, a large number of mitochondrial genes, such as \textit{MT-CO3}, that are expressed in almost all cells group together as a distinct gene cluster.
The continuous nature of the T cell data set is reflected in the many genes with intermediate to high eigenscores that pick out coherent regions of the data set on multiple scales. While we can identify groups of cells with coherent gene expression behaviour, such as the clusters formed by EEF1A1+ cells, HBB+ cells, ANXA1+ cells and HSPA1A+ cells, we also find genes with unique expression patterns unlike any other gene signal (e.g. \textit{GNLY}). To explore the geometry of gene space further, we analyse
the top 20 genes ranked by norm in the 19-dimensional eigenscore space (eigenvectors 1-19). We find that \textit{AREG} (9th in the eigenscore ranking) does not appear in the DGE ranking. As shown in the UMAP cell subfigure, \textit{AREG} is expressed \emph{in between} cell-type clusters, connecting to cells previously identified as natural killer T cells (Figure \ref{fig:tcell_clusters}).
In this way, eigenscores provide insight into both the continuous and discrete nature of T cell behaviour.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{Fig_TCell.png}
\caption{(A) UMAP of genes from the T cell data set \cite{lambrechts2018phenotype} in eigenscore space for eigenvectors 1-19, colour-coded by the logarithm of the norm of the vector in 19-dimensional eigenscore space. Genes with similar expression group together and reveal substructure in the data set. Some genes have unique expression patterns not matched by other genes.
Boxed genes represent a group of genes with similar expression whereas unboxed genes represent isolated gene behaviour.
(B) Top 20 genes ranked by norm in the 19-dimensional eigenscore space (eigenvectors 1-19).
}
\label{fig:eigenvector_TCell}
\end{figure}
\subsection{Multiscale Laplacian score}\label{section:Results_MLS}
The multiscale Laplacian score (MLS), similar to the 0-dimensional combinatorial Laplacian score \cite{govekClusteringindependentAnalysisGenomic2019} and gene connectivity score \cite{rizviSinglecellTopologicalRNAseq2017}, extends DGE to settings in which a stable partition of cells into groups is not feasible; therefore, no assignment of cells into groups is required. The MLS ranks genes by their consistency with the topological structure of the data set and performs such topological consistency analyses at multiple resolutions.
The resolutions are determined by finding scales in the data that provide stable community structures.
We reiterate that the MLS calculation does not use the obtained communities themselves; rather, it uses the resolutions that provide stable community structures, identified via local minima in the variation of information (VI).
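A schematic scale-dependent Laplacian-type score can be sketched as follows. This is a heat-kernel toy model of our own, not the paper's MLS construction (which is based on Markov stability and differs in detail): the score of a centred gene signal $g$ at time $t$ is $g^\top(I - e^{-tL})g / g^\top g$, which is small when $g$ is consistent with the graph structure at that scale.

```python
import numpy as np

def multiscale_score(adjacency, gene, times):
    """Heat-kernel smoothness score of a centred gene signal at Markov time t:
    g^T (I - exp(-t L)) g / g^T g. Low values mean the signal is consistent
    with the graph structure at that scale."""
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    lam, V = np.linalg.eigh(laplacian)
    g = gene - gene.mean()
    coeffs = V.T @ g                               # spectral coefficients
    return np.array([coeffs @ ((1.0 - np.exp(-t * lam)) * coeffs) / (g @ g)
                     for t in times])

# two communities of 10 cells each, joined by a single bridge edge
n = 20
A = np.zeros((n, n))
A[:10, :10] = 1
A[10:, 10:] = 1
np.fill_diagonal(A, 0)
A[9, 10] = A[10, 9] = 1

times = [0.05, 0.5, 5.0]
community_gene = np.r_[np.ones(10), -np.ones(10)]  # constant on each community
fast_gene = (-1.0) ** np.arange(n)                 # oscillates cell to cell
s_comm = multiscale_score(A, community_gene, times)
s_fast = multiscale_score(A, fast_gene, times)
```

The community-aligned gene scores as more consistent than the oscillating gene at every scale, which is the qualitative behaviour the MLS exploits.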
\subsubsection{Multiscale Laplacian score of PBMC data}
In Figure \ref{fig:mls_pbmc} we apply MLS to the PBMC data set \cite{10XPBMC}.
This data set permits a stable clustering into five larger groups of cells. However, these clusters contain non-stable substructures (see Figure \ref{fig:mls_pbmc} (A)). The substructures largely align with the clusters found in the Seurat VST vignette \cite{seuratVST} (Figure \ref{fig:pbmc_clusters}).
We find that the genes \textit{GZMK} and \textit{CD8B} exhibit a higher consistency with the structures at the first resolution ($t_1$) than at later resolutions (Figure \ref{fig:mls_pbmc} (B)). The gene \textit{GZMK} is highly expressed on the intersection of two communities at $t_1$ which correspond to naive and memory CD8 T cells, but is not highly expressed on the union of these two communities (the two communities merge at resolution $t_2$). \textit{GZMK} seems to drive a transition between these two clusters. Similarly, \textit{CD8B} is highly expressed on the left-most community contained in the CD4 T cell cluster of the UMAP plots (Fig. \ref{fig:pbmc_clusters}), which is merged into a larger community at $t_2$. The genes \textit{GZMB} and \textit{XCL2} are examples of features with low MLS at $t_2$ but relatively high MLS at $t_3$. The former is highly expressed on a community corresponding to effector CD8 T cells (cluster 6 in Fig. \ref{fig:pbmc_clusters}), the latter on the intersection of the effector and the naive/memory T cell communities (clusters 6, 5 and 7 in Fig. \ref{fig:pbmc_clusters}). At resolution $t_3$, clusters 5 and 7 are merged with cluster 6.
The communities at resolution $t_3$ correspond to the seven components describing the different cell types with the NK and CD8 T cells merged.
Examples of genes with low MLS at $t_3$, relative to the standard LS as in \cite{govekClusteringindependentAnalysisGenomic2019} and MLS at other resolutions, include \textit{AIF1} and \textit{CTSS}. Both genes are highly and consistently expressed on the community consisting of CD14+ and FCGR3A+ monocytes (Fig. \ref{fig:pbmc_clusters}) \cite{seuratPBMC, seuratVST}. Resolution $t_3$ is the first resolution at which CD14+ and FCGR3A+ monocytes form a single community.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{MLS_PBMC2.png}
\caption{Multiscale Laplacian scores of PBMC data set \cite{10XPBMC}. (A) The graph of variation of information of community structures returned by 100 iterations of the Louvain algorithm at each Markov time. Local minima indicate stable community structures and, hence, scales of interest. The community structures at three such minima are shown by colourings of UMAP plots. (B) Left: three scatter plots comparing the multiscale Laplacian scores of genes (grey dots) at successive times to one another (upper two) and of $t_3$ to the combinatorial Laplacian score (in all plots, axes are truncated). We highlight 6 genes of interest (annotated). Middle and Right: UMAP plots visualising the gene expression of six genes selected based on their MLS.}
\label{fig:mls_pbmc}
\end{figure}
\subsubsection{Multiscale Laplacian score of T cell data}
We apply the MLS to a human T cell data set from Lambrechts et al.~\cite{lambrechts2018phenotype} (Figure \ref{fig:mls_tcell}).
As remarked by \cite{govekClusteringindependentAnalysisGenomic2019}, this data set does not allow for any partitioning into stable clusters (see Figure \ref{fig:mls_tcell}A). Next we determine three resolutions of interest based on the VI. Genes with a relatively low MLS at the finest resolution, $t_1$, include \textit{IGKC} and \textit{IFI27}. Both are highly expressed on a small group of cells (in the centre of the left-hand side and the top right of the UMAP plot, respectively; compare Figure \ref{fig:mls_tcell} (B)). Similarly, at resolution $t_2$, \textit{AREG} and \textit{GZMB} show a high consistency with the topological structure, both in relation to other genes and to other time scales. In particular, \textit{AREG} is expressed highly on a group of cells which connects a cluster of natural killer T cells (bottom centre in UMAP plots; classification of cell types based on markers in Fig. \ref{fig:tcell_clusters}) with most of the remaining cells. Similarly, \textit{GZMB} is highly expressed on the intersection of exhausted and proliferating T cells (see Fig. \ref{fig:tcell_clusters}), two clusters which are visible in the community structure at $t_2$ but merge at $t_3$. Finally, at Markov time $t_3$, \textit{FGFBP2} and \textit{NKG6} are examples of genes with relatively low MLS that are highly and consistently expressed on the cluster of natural killer T cells.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{MLS_TCell.png}
\caption{Multiscale Laplacian score of human T cell data set \cite{lambrechts2018phenotype}. (A) The graph of variation of information of community structures.
Again, local minima indicate scales of interest. Community structures at three scales are picked out.
(B) Left: three scatter plots comparing the multiscale Laplacian scores of genes (grey dots) at successive times to one another (left and middle plot) and of $t_3$ to the combinatorial Laplacian score (in all plots, axes are truncated). We highlight 6 genes of interest (black dots; annotated). Middle and Right: UMAP plots visualising the gene expression of six genes selected based on their MLS.}
\label{fig:mls_tcell}
\end{figure}
\subsection{Persistent Rayleigh quotient}\label{section:Results_PRQ}
Cell bifurcation methods, such as trajectory inference algorithms, seek to assign a pseudotime to each cell by fitting a tree onto the data set \cite{saelensComparisonSinglecellTrajectory2019, vandaeleStableTopologicalSignatures2021}, or fit a statistical model to each gene and then test against a null model \cite{vandenbergeTrajectorybasedDifferentialExpression2020,
trapnellDynamicsRegulatorsCell2014,
qiuReversedGraphEmbedding2017,
lonnbergSinglecellRNAseqComputational2017,
jiTSCANPseudotimeReconstruction2016}.
The Rayleigh quotient and Laplacian score have proven useful for selecting genes \cite{govekClusteringindependentAnalysisGenomic2019} but are agnostic to any prior knowledge or metadata about the cells. Here, the key idea is to build a filtration using this additional information (e.g. time) and then study signals on subgraphs to select genes.
Using the persistent Rayleigh quotient (PRQ) we can separate
genes that have different roles in bifurcation processes, such as differentiation.
We apply the PRQ to a bifurcation describing differentiation of mouse hepatic cells by Yang et al. \cite{yangSinglecellTranscriptomicAnalysis2017} (see Figure \ref{fig:pl_result_mouse_hepatic}).
Hepatoblasts, hepatocytes, and cholangiocytes were sampled from mouse embryos at 7 time points, from embryonic day 10 to day 17.
Hepatoblasts are a parental cell type whose daughter cells differentiate into hepatocyte and cholangiocyte cell types.
As the topologically interesting direction is in `reverse time', we assign a filtration to the graph
by assigning a node from day \(t\) the filtration value \(17-t\).
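To illustrate the mechanics of evaluating a Rayleigh quotient along such a filtration (a simplified stand-in of our own; the full PRQ is indexed by birth-death pairs and uses persistent-Laplacian machinery not reproduced here), consider a toy bifurcation where two daughter branches enter the filtration before the parent node that joins them:

```python
import numpy as np

def rayleigh_quotient(adjacency, signal):
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    return (signal @ laplacian @ signal) / (signal @ signal)

def restricted_quotient(adjacency, filtration, signal, level):
    keep = filtration <= level                    # nodes entered so far
    sub = adjacency[np.ix_(keep, keep)]
    return rayleigh_quotient(sub, signal[keep])

# toy bifurcation in 'reverse time': two daughter branches (nodes 0-3 and
# 4-7, filtration value 0) are joined later by a parent node (8, value 1)
A = np.zeros((9, 9))
for u, v in [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 7), (3, 8), (7, 8)]:
    A[u, v] = A[v, u] = 1
filt = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1])

daughters_only = np.array([1., 1, 1, 1, 1, 1, 1, 1, 0])  # absent in parent
early = restricted_quotient(A, filt, daughters_only, level=0)
full = restricted_quotient(A, filt, daughters_only, level=1)
```

A gene expressed only on the daughter branches has zero quotient early in the filtration but a positive quotient once the parent node joins the branches, analogous to the genes expressed in both daughter cell types but not in hepatoblasts.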
We compare the normalised persistent Rayleigh quotient \(\widehat{\PRQ}(2, 7)\) with \(\widehat{\PRQ}(7,7)\), the latter corresponding to the normalised (non-persistent) Rayleigh quotient of the full graph
(Fig. \ref{fig:pl_result_mouse_hepatic}C).
We have highlighted genes that were found to be differentially expressed during hepatoblast differentiation in \cite{muEmbryonicLiverDevelopmental2020} (Fig. \ref{fig:pl_result_mouse_hepatic}A,B,D,E).
As expected, genes such as \textit{Tubb5}, \textit{Mdk}, and \textit{Igfbp1},
which are expressed in hepatoblasts and only one of hepatocytes or cholangiocytes,
have a higher value for the persistent Rayleigh quotient than the full Rayleigh quotient
(Fig. \ref{fig:pl_result_mouse_hepatic}A,B).
Genes \textit{Aldob} and \textit{Mt2} are expressed in cholangiocytes and hepatocytes, but not hepatoblasts.
Hence,
their persistent Rayleigh quotient has a lower value than the full Rayleigh quotient (Fig. \ref{fig:pl_result_mouse_hepatic}D).
Finally, the persistent Rayleigh quotient and full Rayleigh quotient for \textit{Fabp1}
and \textit{Ahsg} are almost the same.
This corresponds to the fact that
\textit{Fabp1} and \textit{Ahsg} are highly expressed in only one of the daughter cell types
(Fig. \ref{fig:pl_result_mouse_hepatic}E).
The full Rayleigh quotient \(\widehat{\PRQ}(7, 7)\) can sort genes based on how coherently they are expressed
with respect to the underlying graph, but it does not distinguish between different expression patterns relevant to development. In contrast, the persistent Rayleigh quotient can differentiate genes whose expression pattern is relevant to bifurcation.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{pca_and_prq2_200dpi.png}
\caption{
The persistent Rayleigh quotient separates genes by their role in a cell differentiation process. The PRQ is parameterised by birth ($i$) and death ($j$), each pair $(i, j)$ assigning a non-negative number to every gene. We plot these values for each gene for $(i=7, j=7)$ on the x-axis and $(i=2, j=7)$ on the y-axis on subfigure (C). Selected for display (A,B,D,E) are top differentially expressed genes from \cite{muEmbryonicLiverDevelopmental2020}
on the data from \cite{yangSinglecellTranscriptomicAnalysis2017} (see Fig.~\ref{fig:mouse-data set}). Genes \textit{Tubb5}, \textit{Mdk}, and \textit{Igfbp1} are expressed in parent and one daughter cell lineage, hepatoblast to (A) cholangiocyte or (B) hepatocyte and lie above the diagonal. Genes \textit{Aldob} and \textit{Mt2} are expressed in both daughter cell types but not in the parent cell type (D), and lie below the diagonal. Genes \textit{Ahsg} and \textit{Fabp1} are only expressed in one daughter cell type (E) and lie on the diagonal (compare with Fig. \ref{fig:pl_explain}).
}
\label{fig:pl_result_mouse_hepatic}
\end{figure}
\section{Conclusion}
Inspired by the multiscale nature of topological data analysis, we proposed three multiscale methods relying on spectral graph theory and signal processing, which complement standard differential gene expression. We showcased the versatility of eigenscores and multiscale Laplacian scores (MLS) on different data sets. These methods select genes in an unsupervised and continuous manner without requiring a clustering of cells. The persistent Rayleigh quotient (PRQ) was applied to a cell differentiation data set, where it validated a known cellular bifurcation and separated genes based on their role in the differentiation process. The proposed methods provide multiple complementary rankings of genes. Future directions include the systematic comparison of multiple rankings (e.g. using Hodge theory) against methods that output a single one-dimensional gene ranking. Our code is available, and a future goal is to create a topological genomics signalling package to increase accessibility and adoption.
While we focused on the geometry of gene space with a specific \(k\)-nn graph constructed using scRNA-seq, the proposed methods are flexible for other graphs, such as Mapper graphs \cite{rizviSinglecellTopologicalRNAseq2017}, but the resulting analysis would change if the underlying cell graph changes. The choice of resolution(s) for the MLS is not limited to Markov stability times (e.g., graph wavelets \cite{tremblay2014graph}).
Future directions include extending these signal selection approaches to other complex single-cell network structures \cite{jeitziner2017two} or other higher order networks \cite{schaub2021signal,bick2021higher}, with a view towards data integration \cite{kuchroo2021multimodal}.
\section{Acknowledgements}
The authors thank Mariano Beguerisse, Carla Groenewegen, Joe Kaplinsky, and Vidit Nanda for helpful discussions. We thank Thomas Carroll, Renaud Lambiotte and Michael Schaub for reading an earlier version of this manuscript.
HAH gratefully acknowledges funding from EPSRC EP/R018472/1, EP/R005125/1 and EP/T001968/1, the Royal Society RGF$\backslash$EA$\backslash$201074 and UF150238. RH, HAH and HMB acknowledge funding from the Emerson Collective. This research was funded in part by EPSRC EP/R018472/1.
The work is partially funded by the Ludwig Institute for Cancer Research Ltd.
For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
\newpage
\title{Sliding down over a horizontally moving semi-sphere}
\begin{abstract}
We study the dynamics of an object sliding down a semi-sphere of radius $R$, in the setup where the semi-sphere is free to move over a flat surface. For simplicity, we assume that all surfaces are frictionless. We analyse the last contact angle $\theta^\star$, i.e. the angle at which the object and the semi-sphere detach from each other, for all possible combinations of the masses $m_A$ and $m_B$ and of the initial velocity of the sliding object $A$. We find that the last contact angle depends only on the ratio between the masses and is independent of the acceleration of gravity and of the semi-sphere's radius. The largest possible value of $\theta^\star$ is $48.19^{\circ}$, which coincides with the case of a fixed semi-sphere. Conversely, the minimum value of $\theta^\star$ is $0^\circ$; it occurs when the sliding object is extremely heavy compared to the semi-sphere, so that detachment happens as soon as the sliding body touches the semi-sphere. In addition, we find that if the initial kinetic energy of the sliding object $A$ is half of its potential energy with respect to the floor, the object detaches at the top of the semi-sphere.
\end{abstract}
\section{Introduction}
In courses on Newtonian mechanics for engineering and physics students at the undergraduate level, the concepts behind Newton's laws are key to understanding the motion of objects under the action of forces. Indeed, in courses focused on engineering, there is a preference for solving problems using only Newtonian mechanics rather than other possible approaches such as Lagrangian or Hamiltonian mechanics.
No matter the approach, it is troublesome for students to understand the interplay between objects in contact, due to the presence of reaction forces, in cases where the contact surfaces are not flat. At the undergraduate level, the study of vectorial mechanics is crucial for dealing with challenging problems, such as those involving many bodies or the simultaneous use of different vector bases (Cartesian, cylindrical, and spherical).\\
The problem of an object sliding down a circular path is an academic example used to teach such concepts~\cite{Beer, hibbeler2017, riley1996}. Similarly, the problem of an object descending a flat inclined surface is another common example used to teach relative motion in terms of relative position, velocity, and acceleration. However, there are variations on this class of problems that allow the inclined plane to be affected by the reaction force between the object and the plane (see problem 15-98 of \cite{riley1996}). In the literature, similar problems have been addressed with different approaches, such as the use of Lagrangian mechanics~\cite{balart:2019}, the inclusion of friction~\cite{prior:2007,gonzalez:2017,delpino:2018}, or experimental demonstration~\cite{vermillion:1988}. \\
In this manuscript, we consider the situation of a moving semi-sphere, initially at rest, whose movement is due to the reaction force from an object sliding down on top of it. \\
The manuscript is organized as follows: in section~\ref{sec:fixed}, we present the solution for the standard case, where the semi-sphere remains still, and use the result as a benchmark for the moving setup. In section~\ref{sec:mov}, we present the solution and analysis for the moving semi-sphere. Finally, section~\ref{sec:concl} presents the conclusions.
\section{System with fixed semi-sphere. \label{sec:fixed}}
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{figure-fixed.pdf}
\caption{(Left) Physical situation on the object $A$ sliding down over a fixed semi-sphere $B$. The semi-sphere $B$ does not present friction. Forces acting on $A$ are shown in blue. (Right) Vector basis description.}
\label{fig:fixed}
\end{figure}
At a first stage, we consider the situation where the semi-sphere $B$ remains still (see figure~\ref{fig:fixed}). This is a well-known problem taught in university-level courses on Newtonian mechanics. The equations of motion for the object $A$ are constructed using Newton's second law and correspond to:
\begin{eqnarray}
\sum \vec{F}_A = \vec{N}_A + \vec{W}_A = m_A \vec{a}_{A} \, ,
\end{eqnarray}
where $\vec{N}_A$ is the reaction force between objects and $\vec{W}_A = -m_A g \,\hat{k}$ is the weight with $g$ the gravity's acceleration. While the object $A$ is in contact with the semi-sphere $B$, it moves along the surface following a circular path. The acceleration $\vec{a}_A$ is then obtained by
\begin{equation}
\vec{a}_A = \vec{\alpha} \times \vec{r} + \vec{\omega} \times \left( \vec{\omega} \times \vec{r} \right) \, ,
\end{equation}
where $\vec{\alpha}$ and $\vec{\omega}$ are the vectors of angular acceleration and angular velocity, respectively.
For simplicity, the movement of $A$ is 2-dimensional and occurs along the plane defined by the unit vector $\,\hat{\imath}$ and $\,\hat{k}$. Therefore, we will use a vectorial cylindrical basis defined by:
\begin{eqnarray}
\,\hat{r} &=& \cos\theta \,\hat{k} + \sin\theta \,\hat{\imath} \, ,\\
\,\hat{\theta} &=& -\sin\theta \,\hat{k} + \cos\theta \,\hat{\imath} \, ,
\end{eqnarray}
where $\theta$ is the angle between $\,\hat{k}$ and $\,\hat{r}$. Notice that the 3-dimensional vector basis maintains the right-hand rule convention, such that $\,\hat{k} \times \,\hat{\imath} = \,\hat{r} \times \,\hat{\theta} = \,\hat{\jmath}$. In addition, this basis is different from the standard cylindrical basis, so the reader should be cautious about this fact.
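The right-hand-rule convention of this non-standard basis is easy to verify numerically; the following small sketch checks both cross products at an arbitrary angle:

```python
import numpy as np

theta = 0.7  # arbitrary angle in radians
i_hat = np.array([1.0, 0.0, 0.0])   # components written in the (i, j, k) order
j_hat = np.array([0.0, 1.0, 0.0])
k_hat = np.array([0.0, 0.0, 1.0])
r_hat = np.cos(theta) * k_hat + np.sin(theta) * i_hat
theta_hat = -np.sin(theta) * k_hat + np.cos(theta) * i_hat
cross_ki = np.cross(k_hat, i_hat)      # k x i
cross_rt = np.cross(r_hat, theta_hat)  # r x theta; both should equal j
```

Both cross products give $\,\hat{\jmath}$ for any $\theta$, confirming that the basis is right-handed.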
The acceleration $\vec{a}_A$ in the cylindrical basis corresponds to
\begin{equation}
\vec{a}_A = \alpha R \,\hat{\theta} - \omega^2 R \,\hat{r} \, ,
\end{equation}
when $\vec{\alpha} = \alpha \,\hat{\jmath}$ and $\vec{\omega} = \omega \,\hat{\jmath}$.
The equations of motion, in terms of the $\,\hat{r}$ and $\,\hat{\theta}$ components, are:
\begin{eqnarray}
N_A - m_A g \cos\theta & = & - m_A \omega^2 R \, , \\
m_A g \sin\theta & = & m_A \alpha R \, .
\end{eqnarray}
For the setup in figure \ref{fig:fixed}, the angular velocity and acceleration are related to the angle $\theta$ by:
\begin{eqnarray}
\omega &=& \dot{\theta}\, ,\\
\alpha &=& \frac{d \dot{\theta}}{d \theta} \dot{\theta}\, .
\end{eqnarray}
Using the latter expressions, the equations of motion are reduced to a couple of differential equations:
\begin{eqnarray}
\frac{g}{R} \cos\theta - \frac{N_A}{m_A R} &=& \dot{\theta}^2 \, , \\
\frac{g}{R} \sin\theta &=& \frac{d \dot{\theta}}{d \theta} \dot{\theta} \, .
\end{eqnarray}
These equations are further simplified via the substitutions $f(\theta) = \dot{\theta}^2$, $f'(\theta) = df/d\theta = 2 \ddot{\theta}$, and $\kappa = g/R$, obtaining:
\begin{eqnarray}
\label{eq:de1} \kappa \cos\theta - \frac{N_A}{m_A R}& =& f(\theta) \, , \\
\label{eq:de2} 2 \kappa \sin\theta & =& f'(\theta) \, .
\end{eqnarray}
Since the function $f(\theta)$ corresponds to the square of the angular velocity, the kinetic energy of $A$ corresponds to
\begin{eqnarray}
T = \frac{1}{2} m_A R^2 f(\theta) \, ,
\end{eqnarray}
providing that the total mechanical energy is:
\begin{eqnarray}
E = \frac{1}{2} m_A R^2 f(\theta) + m_A g R \cos\theta \, .
\end{eqnarray}
Depending on the syllabus of a vectorial mechanics course, the concept of mechanical energy may not have been introduced. However, besides the resolution using the equations of motion, this problem can also be solved using conservation of energy.
\subsection{Solving the equations of motion}
These equations are readily solved by integrating equation~\ref{eq:de2} over the angle $\theta$ and then substituting into equation~\ref{eq:de1}. Considering the initial conditions:
\begin{eqnarray}
\theta(t=0) &=& \theta_0 = 0\, , \\
\dot{\theta}^2(t=0) &=& f(\theta_0) = 2 \kappa \epsilon \, ,
\end{eqnarray}
then the reaction force and the angular velocity squared are:
\begin{eqnarray}
N_A &=& m_A g \left( 3 \cos\theta - 2(1 + \epsilon) \right) \, , \\
f(\theta) &=& \dot\theta^2 = 2 \kappa \left( 1 + \epsilon - \cos\theta \right) \, .
\end{eqnarray}
We introduce the parameter $\epsilon$ to simplify the solutions; it is related to the initial kinetic energy:
\begin{eqnarray}
\label{eq:energyeps}
\epsilon = \frac{T_0}{m_A g R} \, ,
\end{eqnarray}
and it can be interpreted as the ratio between the initial kinetic energy and the initial potential energy.
It is important to remark that the initial position, $\theta(t=0) = 0$, is an unstable equilibrium point. In order to break the symmetrical evolution of sliding down to any side of the semi-sphere, it is important to indicate an initial direction of movement by means of the value of $\epsilon$.
The equations of motion are valid only in the regime where $N_A \ge 0$ and describe the object $A$ moving over the semi-sphere. The case $N_A = 0$ sets the last contact angle $\theta^\star$, which in this case corresponds to:
\begin{equation}
\label{eq:anglefixed}
\cos\theta^\star = \frac{2}{3} \left( 1+\epsilon\right) \, .
\end{equation}
This general solution allows us to set a maximum value of $\epsilon$ for which the object $A$ detaches at the beginning of the movement ($\cos\theta^\star = 1$):
\begin{equation}
\epsilon_{\rm max} = \frac{1}{2} \, ,
\end{equation}
which sets the maximum angular velocity to be:
\begin{equation}
\dot{\theta}(t=0) = \sqrt{\kappa} \, .
\end{equation}
This sets the maximum initial kinetic energy to be:
\begin{eqnarray}
T_0^{\rm max} = \frac{1}{2} m_A g R \, ,
\end{eqnarray}
which corresponds to half of the initial potential energy.\\
In the limit $\epsilon \rightarrow 0$, i.e. when the movement starts from rest, the last contact angle becomes independent of $R$ and $g$ and takes the value:
\begin{equation}
\cos\theta^\star = \frac{2}{3} \, \rightarrow \, \theta^\star \simeq 48.19^\circ \, .
\end{equation}
This value is a well-known result in the frictionless scenario, and the problem appears as an exercise in textbooks~\cite{Beer, riley1996, hibbeler2017}.
A more complete case, including friction, can be found in \cite{prior:2007}. This result provides a benchmark against which to compare the following case of a moving semi-sphere.
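The closed-form results above can be cross-checked numerically. The sketch below (forward Euler in $\theta$; $\kappa$ set to 1 since $\theta^\star$ is independent of it) integrates $f'(\theta)=2\kappa\sin\theta$ and locates the angle where $N_A$ vanishes:

```python
import math

def detach_angle(eps, kappa=1.0, steps=200000):
    """Forward-Euler integration of f'(theta) = 2*kappa*sin(theta) starting
    from f(0) = 2*kappa*eps; returns the first angle (in degrees) where N_A,
    proportional to kappa*cos(theta) - f(theta), is no longer positive."""
    f = 2.0 * kappa * eps
    theta = 0.0
    dtheta = (math.pi / 2) / steps
    while kappa * math.cos(theta) - f > 0.0:
        f += 2.0 * kappa * math.sin(theta) * dtheta
        theta += dtheta
    return math.degrees(theta)

theta_rest = detach_angle(eps=0.0)   # object starts at rest
theta_fast = detach_angle(eps=0.5)   # maximum initial kinetic energy
```

For $\epsilon = 0$ the integration reproduces $\theta^\star \simeq 48.19^\circ$, and for $\epsilon = 1/2$ the object detaches immediately at the top.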
\section{System with a moving semi-sphere \label{sec:mov}}
In this part, we allow the semi-sphere to move freely over a frictionless surface. Therefore, the aim is to calculate the value of the last contact angle $\theta^\star$ under these new conditions. Figure~\ref{fig:mov} shows the physical setup.
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{figure-moving.pdf}
\caption{(Left) Physical situation on the object $A$ sliding down over a moving semi-sphere $B$. The semi-sphere $B$ does not present friction with both: object $A$, and the flat surface. Forces acting on $A$ are shown in blue. Forces acting on $B$ are shown in dark pink. Accelerations are displayed in orange. (Right) Unit vector basis description.}
\label{fig:mov}
\end{figure}
\subsection{Equations of motions}
Similarly to the previously presented case, the object $A$ is affected by its weight and the reaction force with the semi-sphere. Therefore, the equation of motion for $A$ is:
\begin{equation}
\sum \vec{F}_A = \vec{N}_A + \vec{W}_A = m_A \vec{a}_A \, .
\end{equation}
On the other hand, the semi-sphere $B$ is now free to move and its equation of motion is given by:
\begin{equation}
\sum \vec{F}_B = \vec{N}_B - \vec{N}_A + \vec{W}_B = m_B \vec{a}_B \, ,
\end{equation}
where $\vec{N}_B$ is the reaction force between the floor and the semi-sphere and $\vec{W}_B = -m_B g \,\hat{k}$ is its weight. \\
The movement of the semi-sphere $B$ is only horizontal; therefore, its acceleration corresponds to $\vec{a}_B = a_{Bx} \,\hat{\imath}$ and its velocity to $\vec{v}_B = v_{Bx} \,\hat{\imath}$. It is also important to remark that $\vec{a}_B = \vec{a}_O$ because the semi-sphere $B$ is also non-rotating.\\
On the other hand, the acceleration of the object $A$ with respect to the floor i.e. the rest frame is computed through the acceleration of $B$ and the relative acceleration between objects:
\begin{eqnarray}
\vec{a}_A &=& \vec{a}_{B} + \vec{a}_{A/B} \, ,\\
\vec{a}_A &=& \vec{a}_{B} + \vec{\alpha} \times \vec{r} + \vec{\omega} \times \left( \vec{\omega} \times \vec{r} \right) \, .
\end{eqnarray}
While the object $A$ is in contact with the semi-sphere $B$, the relative movement of $A$ with respect to $B$ is circular. Therefore, we can write the acceleration of $A$ in terms of the cylindrical components:
\begin{eqnarray}
\vec{a}_A &=& a_{Bx} \,\hat{\imath} + \alpha R \,\hat{\theta} - \omega^2 R \,\hat{r} \, .
\end{eqnarray}
Up to this point, the acceleration of $A$ in the cylindrical basis is:
\begin{equation}
\vec{a}_A = \left(a_{Bx} \sin\theta - \omega^2 R\right) \,\hat{r} + \left(a_{Bx} \cos\theta + \alpha R \right) \,\hat{\theta} \, ,
\end{equation}
which allows us to obtain the full set of equations of motion: two equations for the object $A$:
\begin{eqnarray}
N_A - m_A g \cos\theta & =& m_A \left( a_{Bx} \sin\theta - \omega^2 R \right) \, , \\
m_A g \sin\theta &=& m_A \left(a_{Bx} \cos\theta + \alpha R \right) \, ,
\end{eqnarray}
and two more for the semi-sphere $B$:
\begin{eqnarray}
-N_A \sin\theta &=& m_B a_{Bx} \, , \\
N_B - N_A \cos\theta - m_B g &=& 0 \, .
\end{eqnarray}
After inspection, the last two equations lead to:
\begin{eqnarray}
a_{Bx} &=& - \frac{N_A \sin\theta}{m_B} \, ,\\
N_B &=& m_B g + N_A \cos\theta \, ,
\end{eqnarray}
and these can be used to reduce the full set of equations of motion to only two equations:
\begin{eqnarray}
\label{eq:combeom1}
\frac{g}{R} \cos\theta - \frac{N_A}{m_A R} \left( 1 + \frac{m_A \sin^2\theta}{m_B}\right) &=& f(\theta) \, ,\\
\label{eq:combeom2}
\frac{2g}{R} \sin\theta + \frac{2 N_A}{m_B R} \sin\theta \cos\theta &=& f'(\theta) \, ,
\end{eqnarray}
where $f(\theta) = \dot{\theta}^2$ and $f'(\theta) = 2 \ddot{\theta}$. Notice, we use the same relations for the angular acceleration and velocity with respect to the angle $\theta$: $\alpha = \ddot{\theta}$, and $\omega = \dot{\theta}$.\\
The function $f(\theta)$ is related to the kinetic energy of $A$ only if the semi-sphere $B$ is at rest. In the general case, the kinetic energy of $A$ depends on the velocity of the semi-sphere and the relative velocity between the objects:
\begin{eqnarray}
T_A = \frac{1}{2} m_A v_A^2 = \frac{1}{2} m_A \left(\vec{v}_B + \vec{v}_{A / B} \right)^2 = \frac{1}{2} m_A \left(\vec{v}_B + \vec{\omega} \times \vec{r}\right)^2 \, ,
\end{eqnarray}
which, in terms of $f(\theta)$, corresponds to:
\begin{eqnarray}
T_A = \frac{1}{2} m_A \left( v_B^2 + 2 v_B \cos\theta R \sqrt{f(\theta)} + R^2 f(\theta) \right) \, ,
\end{eqnarray}
and with it, we can obtain the total mechanical energy of the system:
\begin{eqnarray}
E = \frac{m_B}{2} v_B^2 + \frac{m_A}{2}\left( v_B^2 + 2 v_B \cos\theta R \sqrt{f(\theta)} + R^2 f(\theta) \right) + m_A g R \cos\theta \, .
\end{eqnarray}
\subsection{Solving the equations of motion}
The system of equations cannot be solved as in the fixed semi-sphere case because the reaction force $N_A$ is present in both equations, and it depends on the angle. Nevertheless, the reaction force $N_A$ can be isolated from equation \ref{eq:combeom1} and written in terms of the dynamical variables:
\begin{equation}
N_A(\theta) = m_A R \, \frac{ \kappa \cos\theta - f(\theta )}{1+ \beta \sin^2\theta} \, ,
\end{equation}
where $\beta = m_A/m_B$ is the ratio between the masses and $\kappa = g/R$ is the ratio between the acceleration of gravity and the radius of the semi-sphere. Here, the reaction force depends on the square of the angular velocity encoded in $f(\theta)$.
After substituting $N_A(\theta)$ into equation \ref{eq:combeom2}, we get a differential equation for $f(\theta)$:
\begin{eqnarray}
\frac{1+ \beta \sin^2\theta}{2 \sin\theta} \, f'(\theta) + \beta \cos\theta \, f(\theta) - \kappa (1 +\beta) = 0 \, .
\end{eqnarray}
This differential equation is analytically solvable and has the solution:
\begin{eqnarray}
\label{eq:solde}
f(\theta) = \frac{2 \kappa (1 + \beta) (1 - \cos\theta) + 2 \kappa \epsilon}{1+ \beta \sin^2\theta} \, ,
\end{eqnarray}
once the initial conditions $\theta(t=0) = 0$ and $f(0) = 2 \kappa \epsilon$ are imposed. Here, the parameter $\epsilon$ is related to the kinetic energy of the object $A$ in the same way as in the previous section (equation \ref{eq:energyeps}). This is because the semi-sphere $B$ is initially at rest.\\
Let us remark that this solution is valid as long as $N_A(\theta) \ge 0$, which holds for angles $0\le\theta\le\theta^\star\le \pi/2$, where $\theta^\star$ is the last contact angle.
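The claimed solution can be checked symbolically. The sketch below (assuming SymPy; the symbol $C$ is our generic integration constant) verifies that the ansatz solves the differential equation for any constant $C$, which the initial condition $f(0) = 2\kappa\epsilon$ then fixes:

```python
import sympy as sp

theta = sp.symbols('theta', positive=True)
beta, kappa, C = sp.symbols('beta kappa C', positive=True)

# Candidate solution with a generic integration constant C
# (fixed to C = 2*kappa*epsilon by the initial condition f(0)):
f = (2*kappa*(1 + beta)*(1 - sp.cos(theta)) + C) / (1 + beta*sp.sin(theta)**2)

# Residual of the ODE: (1 + beta sin^2)/(2 sin) f' + beta cos f - kappa(1 + beta)
ode = ((1 + beta*sp.sin(theta)**2)/(2*sp.sin(theta))*sp.diff(f, theta)
       + beta*sp.cos(theta)*f - kappa*(1 + beta))

print(sp.simplify(ode))  # 0  -> the ansatz solves the ODE for any C
print(f.subs(theta, 0))  # C  -> the initial condition fixes C = 2*kappa*epsilon
```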
\subsection{Finding the last contact angle}
The angle $\theta^\star$ can be obtained by solving the following equation:
\begin{equation}
\label{eq:NA}N_A(\theta^\star) = m_A R \, \frac{ \kappa \cos\theta^\star - f(\theta^\star )}{1+ \beta \sin^2\theta^\star} = 0 \, ,
\end{equation}
where $f(\theta^\star)$ is the solution of the differential equation evaluated at $\theta^\star$ (equation \ref{eq:solde}). This leads to the following equation to solve for $\theta^\star$:
\begin{eqnarray}
\label{eq:precubic}
\kappa \cos\theta^\star = f(\theta^\star) = \frac{2 \kappa (1 + \beta) (1 - \cos\theta^\star) + 2 \kappa \epsilon}{1+ \beta \sin^2\theta^\star}\, .
\end{eqnarray}
This equation can be written as a depressed cubic equation for $\xi = \cos\theta^\star$ as follows:
\begin{equation}
\label{eq:dep}
H(\xi) = \sin^2{\left(\frac{\pi}{2}\tau\right)} \,\xi^3 - 3 \xi + 2 + 2\epsilon \cos^2{\left(\frac{\pi}{2}\tau\right)} = 0 \, ,
\end{equation}
where the function $H(\xi)$ is just a reparametrization of equation \ref{eq:precubic} in terms of dimensionless parameters.\\
We introduce the parameter $\tau$, ranging over $0\le \tau\le 1$, which simplifies the expressions better than the mass ratio $m_A/m_B$ does:
\begin{equation}
\beta = \frac{m_A}{m_B} = \tan^2\left(\frac{\pi}{2} \tau \right) \, .
\end{equation}
At the extreme values of $\tau$, we obtain that $\tau \rightarrow 0$ corresponds to $\beta \rightarrow 0$, the case of a heavy semi-sphere, which is equivalent to the fixed semi-sphere case. Similarly, $\tau \rightarrow 1$ corresponds to $\beta \rightarrow \infty$, the scenario where the object $A$ is extremely heavy with respect to the semi-sphere. In figure~\ref{fig:hxi}, we present the function $H(\xi)$ for various values of $\tau$. \\
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{fig_h_xi.pdf}
\caption{\label{fig:hxi} $H(\xi)$ versus $\xi$ for combinations of the parameters: $\tau = 0, 1/2, 1$ and $\epsilon = 0$ (solid lines), $1/3$ (dashed lines). The green dashed line corresponds to $\epsilon=1/2$ and $\tau=0$. The intersection of each curve $H(\xi)$ with the horizontal black line ($H(\xi)=0$) gives the solution $\xi = \cos\theta^\star$.}
\end{figure}
The analysis of equation~\ref{eq:dep} reveals the maximum value that $\epsilon$ can take. To find it, we evaluate the function $H(\xi)$ at $\xi=1$ ($\theta^\star = 0$):
\begin{equation}
H(1) = (2 \epsilon_{\rm max} - 1) \cos^2{\left(\frac{\pi}{2} \tau\right)} = 0\, ,
\end{equation}
obtaining that for the range $0\leq \tau < 1$, the maximum value is:
\begin{equation}
\epsilon_{\rm max} = \frac{1}{2}\, ,
\end{equation}
which is the same limit obtained in the fixed semi-sphere case.
This means that the initial kinetic energy of $A$ can be at most half of the initial potential energy in order for the object $A$ to slide down in contact with the semi-sphere $B$, regardless of the mass ratio $\beta$. \\
As stated before, the limit of a fixed semi-sphere is reached when $\tau \rightarrow 0$, corresponding to the mass limit $m_A \rightarrow 0$ or $m_B \rightarrow \infty$. In this limit, equation \ref{eq:dep} reduces to:
\begin{equation}
- 3 \xi + 2 + 2 \epsilon = 0 \, ,
\end{equation}
with solution:
\begin{eqnarray}
\xi = \cos\theta^\star = \frac{2}{3} (1+\epsilon) \, ,
\end{eqnarray}
corresponding exactly to equation \ref{eq:anglefixed}.
This case agrees with the solution for the fixed semi-sphere described in section~\ref{sec:fixed}, and it indicates that the limit $\beta \rightarrow 0$ is equivalent to restricting the semi-sphere $B$ to stay fixed.\\
The other limit, $\tau \rightarrow 1$, gives the equation:
\begin{equation}
\xi^3 - 3 \xi + 2 = (\xi + 2) (\xi - 1)^2 = 0\, ,
\end{equation}
with 3 real solutions: $\xi_1 = 1$, $\xi_2 = 1$, and $\xi_3 = -2$. The solutions with physical meaning are $\xi_{1,2}$; both correspond to an angle $\theta^\star = 0$. This means that when $m_A \rightarrow \infty$ the last contact between $A$ and $B$ occurs at the top of the semi-sphere, at the very beginning of the movement. In this case, the semi-sphere moves away fast enough to immediately lose contact with the object $A$.\\
For the intermediate cases, $0<\tau<1$ and $\epsilon=0$, the solutions of equation~\ref{eq:dep} are:
\begin{eqnarray}
\xi_1 &=& \frac{2}{1 + 2 \cos{\left( \frac{\pi}{3} \tau\right) } }\, , \\
\xi_2 &=& \frac{\sqrt{3} \cos{\left(\frac{\pi}{6} \tau \right)} - \sin{\left(\frac{\pi}{6} \tau \right)}}{\sin{\left(\frac{\pi}{2} \tau \right)}}\, , \\
\xi_3 &=& - \frac{\sqrt{3} \cos{\left(\frac{\pi}{6} \tau \right)} + \sin{\left(\frac{\pi}{6} \tau \right)}}{\sin{\left(\frac{\pi}{2} \tau \right)}} \, ,
\end{eqnarray}
which are obtained by analytically solving the depressed cubic equation. However, in order to obtain real values, the solutions of equation~\ref{eq:dep} need to be rephased by $e^{i \pi/3}$ when the equation is solved via Vieta's substitution~\cite{354013610X}. Of the 3 roots, only $\xi_1$ has a physical meaning, $\xi_1 = \cos\theta^\star$, while the other 2 roots give values outside the physical range $0 \le \xi \le 1$. \\
Solving $H(\xi) = 0$ analytically might be hard for students. A much easier alternative is to use a numerical code, for example written in Python (see appendix \ref{sec:python}).\\
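As a hedged illustration (not the appendix code; function and variable names are ours), a plain bisection on $H(\xi)$ over $[0,1]$ suffices, since $H(0) > 0$ while $H(1) = (2\epsilon - 1)\cos^2(\pi\tau/2) \le 0$ for $\epsilon \le 1/2$:

```python
import math

def theta_star(tau, eps):
    """Last contact angle (degrees): root of H(xi) = 0 with xi = cos(theta*)."""
    s2 = math.sin(math.pi*tau/2)**2
    c2 = math.cos(math.pi*tau/2)**2
    H = lambda xi: s2*xi**3 - 3*xi + 2 + 2*eps*c2
    lo, hi = 0.0, 1.0      # H(lo) > 0 and H(hi) <= 0 for eps <= 1/2, tau < 1
    for _ in range(100):   # plain bisection is plenty accurate here
        mid = 0.5*(lo + hi)
        if H(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.degrees(math.acos(0.5*(lo + hi)))

print(round(theta_star(0.0, 0.0), 2))   # 48.19  (fixed semi-sphere limit)
print(round(theta_star(0.0, 1/3), 2))   # 27.27  (matches acos(8/9))

# For eps = 0 the root should match the closed form xi_1 = 2/(1 + 2 cos(pi tau/3)):
xi1 = 2/(1 + 2*math.cos(math.pi*0.5/3))
print(abs(theta_star(0.5, 0.0) - math.degrees(math.acos(xi1))) < 1e-9)  # True
```

The first value reproduces the fixed semi-sphere angle $48.19^\circ$, and the closed-form root $\xi_1$ is recovered numerically.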
The dependence of the last contact angle $\theta^\star$ on $\tau$ and $\epsilon$ is shown in figure~\ref{fig:thetastar}. We observe that the solution of the cubic equation includes the extreme limits, such as the fixed semi-sphere ($\tau=0$) for different values of $\epsilon$.\\
In addition, the value of $\theta^\star$ indicates that the largest possible value for any configuration of masses corresponds to the fixed semi-sphere case, and the lowest occurs when $m_A \rightarrow \infty$ or $m_B \rightarrow 0$, an extreme situation where the semi-sphere $B$ and the object $A$ detach immediately at the beginning of the movement.
Moreover, the dependence of $\theta^\star$ on the value of $\epsilon$ leads to an interesting consequence: the object $A$ can only remain in contact with the semi-sphere if its initial kinetic energy is, at most, half of the initial potential energy.
This can be appreciated in figure~\ref{fig:thetastar} where the blue line gives the value $\theta^\star = 0$ for all possible values of $\tau$.
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{fig_thetastar_g.pdf}
\caption{\label{fig:thetastar} Last contact angle $\theta^\star$ versus $\tau$. The red line shows the functional dependence on $\tau$ for $\epsilon=0$, the brown line corresponds to the case $\epsilon=1/3$, and the blue line to $\epsilon=1/2$. The gray dashed line indicates $\theta^\star=48.19^\circ$, which corresponds to the solution of the fixed semi-sphere case. }
\end{figure}
\section{Conclusions \label{sec:concl}}
We presented the analytical solution for the last contact angle in the problem of an object sliding down a semi-sphere of radius $R$, where the semi-sphere rests on a frictionless surface. The approach used to solve the problem relies on Newtonian mechanics and concepts of vectorial mechanics, which are topics familiar to undergraduate students of engineering and physics. The key effect to consider is the reaction force between the object and the semi-sphere. This force causes the semi-sphere to move horizontally while the object $A$ descends, keeping contact with the semi-sphere. If the velocity of the object $A$ is large enough that the reaction force between the object and the surface vanishes, the two objects detach from each other. The angle at which that occurs is the last contact angle, and it was calculated analytically.\\
We found that the case of a fixed semi-sphere (equivalent to $m_B \gg m_A$) gives the maximum possible last contact angle among all physical configurations of masses. In addition, we found that this angle depends only on the ratio of the masses $m_A$ and $m_B$, and it is independent of the value of the acceleration of gravity and of the radius of the semi-sphere.\\
In addition, we include the effect produced by the initial velocity that the sliding object might have. This effect is parameterized by the quantity $\epsilon$, the ratio between the initial kinetic energy and the initial potential energy. We also found that if the initial kinetic energy is larger than half of the initial potential energy, then the sliding object detaches at the very beginning of the movement.\\
The problem discussed in this manuscript presents a general physical scenario that might be worth using as an example for enthusiastic students in courses on Newtonian mechanics at the undergraduate or graduate level. An experimental verification might also be interesting.
\ack
We thank Nicolas Rojas, Fernando Guzmán, Julio Yañez, and Eduardo Peinado for useful discussions and comments.
We also thank the comments and suggestions from the anonymous referees.
RL is supported by Universidad Católica del Norte through the Publication Incentive program No. CPIP20180343 and CPIP20200063.
\section{Introduction}
The celebrated KAM theory, due to Kolmogorov and Arnold \cite{MR0068687,R-9,R-10,R-11} and Moser \cite{R-12,R-13}, mainly concerns the preservation of invariant tori of a Hamiltonian function $ H(y) $ under small perturbations (i.e., $H(y) \to H\left( {x,y,\varepsilon } \right) $, with $ n \in \mathbb{N}^+ $ degrees of freedom and $ \varepsilon>0 $ sufficiently small), and indeed has a history of more than sixty years. So far, KAM theory has been well developed and widely applied to a variety of dynamical systems. For some other fundamental developments, see Kuksin \cite{MR0911772}, Eliasson \cite{MR1001032}, P\"oschel \cite{MR1022821}, Wayne \cite{MR1040892}, Bourgain \cite{MR1316975,MR1345016} and so on.
As is known to all, for frequency $ \omega = {H_y}\left( {y} \right) $ of the unperturbed Hamiltonian system, one often requires it to satisfy the following classical Diophantine condition (or be of Diophantine class $ \tau $)
\begin{equation}\label{dio}
| {\langle {{\tilde k},\omega } \rangle } | \geq \alpha_ *{ | {{\tilde k }} |}^{-\tau} ,\;\;\forall 0 \ne \tilde k \in {\mathbb{Z}^n}
\end{equation}
with respect to Diophantine index $ \tau \geq n-1 $ and some $ \alpha_ *>0 $, where $ |\tilde k|: = \sum\nolimits_{j = 1}^n {|{{\tilde k}_j}|} $. Otherwise, under a Liouville frequency that is not Diophantine, the torus may break no matter how small the perturbation is, and some chaotic behavior can take place simultaneously. The strong (non-universal) Diophantine frequencies of class $ \tau=n-1 $ are shown to be a continuum but only form a set of zero Lebesgue measure; therefore the KAM preservation based on them is usually said to be \textit{non-universal}. In contrast, KAM persistence under Diophantine nonresonance of class $ \tau>n-1 $ is \textit{universal} because such frequencies are of full Lebesgue measure in $ \mathbb{R}^n $, and we will focus on this case throughout this paper.
Considering Diophantine nonresonance, the invariant tori are shown in classical KAM theorems to be preserved under analytic settings. It should be pointed out that, if a constant Diophantine frequency is prescribed in advance, one could obtain KAM persistence with the frequency unchanged by proposing certain nondegeneracy or transversality conditions, which preserves more dynamics from the perturbed system; see Salamon \cite{salamon}, Du et al \cite{R-18} and Tong et al \cite{TD} for instance, respectively, and this analytic torus is interestingly shown to be never isolated by Eliasson et al in \cite{MR3357183}. Therefore, beyond analyticity, it is still interesting to touch the minimum initial regularity of the Hamiltonian to which KAM could be applied. On this aspect, the first breakthrough was made by Moser in \cite{MR0147741}, where he studied twist maps (which could correspond to an iso-energetic KAM theorem for $ n=2 $) admitting finite smoothness of sufficiently large integral order. Much effort has been devoted to this technical problem in terms of H\"older continuity, including constructing counterexamples and reducing the differentiability hypotheses. For some classic fundamental work, see Moser \cite{J-3}, Jacobowitz \cite{R-2}, Zehnder \cite{R-7,R-8}, Mather \cite{R-55}, Herman \cite{M1,M2}, P\"oschel \cite{Po1,Po2}, etc. It is worth mentioning that, very recently, P\"oschel announced in his preprint \cite{Po3} a KAM theorem on the $ n $-dimensional torus $\mathbb{T}^n$ (without action variables) based on a non-universal frequency of Diophantine class $ \tau=n-1 $ in \eqref{dio}. Specially, he pointed out that the derivatives of order $ n $ need not be continuous, but rather $ L^2 $ in a certain strong sense, by introducing a new norm over $\mathbb{T}^n$.
Back to our concern on general $ n $-freedom Hamiltonian systems having action-angle variables, it is always conjectured that the minimum regularity requirement for the Hamiltonian function $ H(x,y) $ is at least $ C^{2n} $ due to the $C^{2n-\epsilon}$ (with any $\epsilon$ close to $0^+$) counterexample constructed by Cheng and Wang \cite{MR3061774}, which allows for all frequencies in $\mathbb{R}^n$; see also Wang \cite{MR4385768}. However, via Diophantine nonresonance, a more precise counterexample does not seem to have appeared. As to reducing the initial regularity, along with the idea of Moser \cite{J-3}, the best known H\"older case $ C^{\ell} $ with $ \ell >2\tau+2 $ has been established by Salamon in \cite{salamon}; the prescribed universal frequency under consideration is of Diophantine class $ \tau>n-1 $ in \eqref{dio}, and the remaining regularity of the frequency-preserving KAM torus as well as the conjugation is also shown to be of H\"older's type. More precisely, the KAM torus is at least of class $ C^1 $, and the conjugation from the dynamics on it to the linear flow is at least of class $ C^{\tau+1} $. Besides, the $C^{\ell}$ ($\ell>2\tau+2>2n$) differentiability hypothesis is sharp due to the counterexamples of Herman \cite{M1,M2} and Cheng and Wang \cite{MR3061774}.
In the aspect of H\"older's type, see Khesin et al \cite{MR3269186}, Bounemoura \cite{Bounemoura} and Koudjinan \cite{Koudjinan} for some other new developments. To strictly weaker than H\"older continuity, a KAM theorem via non-universal Diophantine nonresonance of class $ \tau=n-1 $ in \eqref{dio} was proved by Albrecht in \cite{Chaotic}, claiming that $ C^{2n} $ plus certain modulus of continuity $ \varpi $ satisfying the classical Dini condition
\begin{equation}\label{cdini}
\int_0^1 {\frac{{\varpi \left( x \right)}}{x}{\rm d}x} < + \infty
\end{equation}
is enough for the non-universal KAM persistence. To the best of our knowledge, there is no other work on KAM via only modulus of continuity except for \cite{Chaotic}.
Concerning universal KAM persistence for finitely differentiable Hamiltonians with $ n $ degrees of freedom, the best result so far, obtained by Salamon \cite{salamon}, still requires $ C^{2n} $ plus certain H\"older continuity depending on the Diophantine index. It is therefore natural to consider the following fundamental questions successively, in order to further approach the criticality of this long-standing finitely differentiable KAM problem since Moser:
\begin{itemize}
\item \textit{Can non-H\"{o}lder regularity allow for KAM persistence with universal frequency-preserving?}
\item \textit{What kind of smoothness shall the invariant KAM torus admit, if it is indeed persistent? How about the conjugation?}
\item \textit{Does there exist a Dini type integrability condition similar to \eqref{cdini} that reveals the explicit relation between nonresonance and regularity?}
\end{itemize}
Usually there are at least two ways of reaching finitely smooth KAM: the abstract implicit function method beginning with Nash \cite{MR75639} and Zehnder \cite{R-7,R-8}, as well as the analytic smoothing approach due to Moser \cite{J-3}. We also refer to Hamilton \cite{MR0656198} and H\"ormander \cite{MR0802486} for some other versions, and we shall use the latter to touch the case with only continuity, because it provides an iterative scheme which can derive the accurate remaining regularity of the KAM torus and the conjugation. As a consequence, towards continuity, there are at least four difficulties to overcome. Firstly, note that the Jackson approximation theorem for classical H\"{o}lder continuity is no longer valid at present; hence it must be developed so as to approximate the perturbed Hamiltonian function $ H\left( {x,y,\varepsilon } \right) $ in the sense of modulus of continuity, as a crucial step. It needs to be emphasized that the analysis uses different approaches in comparison with the H\"older one, by proposing the semi separability of the modulus of continuity. Secondly, a basic issue is how to establish a corresponding regularity iteration theorem to study the remaining regularity of the invariant torus and the conjugation without H\"older's type. Thirdly, we have to set up a new KAM iterative scheme and prove its uniform convergence in the $C^1$ topology via these tools. Fourthly, it is somewhat difficult to extract an integrability condition on universal nonresonance and initial regularity from the KAM iteration, keeping the frequency-preserving KAM persistence. Indeed, to achieve Main Theorem \ref{theorem1}, having sharp differentiability hypotheses and being \textit{frequency-preserving}, we apply Theorem \ref{Theorem1} to construct a series of analytic approximations to the Hamiltonian $ H\left( {x,y,\varepsilon } \right) $ with modulus of continuity, and prove the persistence and regularity of the invariant torus via a modified KAM iteration as well as a generalized Dini type condition.
As some new efforts, our Main Theorem \ref{theorem1} applies to a wide range in a continuous sense (it even allows the non-integrable part to be non-H\"older), and reveals the sharp integral relation between regularity and universal Diophantine nonresonance \textit{for the first time}. Apart from the above, it is well known that small divisors must lead to a certain loss of smoothness (such as reducing the radius of analyticity), and only the H\"older class has been investigated so far, see Salamon \cite{salamon}. If only continuity of the highest order derivatives is assumed, obviously one should not expect the remaining KAM regularity to still be of H\"older's type. \textit{On this aspect, our Main Theorem \ref{theorem1} gives the first approach to the regularity of the invariant KAM torus and the conjugation being non-H\"older, explicitly shown by certain asymptotic analysis.} Besides, as shown by Theorem \ref{lognew}, the KAM torus and the conjugation may interestingly admit a completely different class of regularity, due to the effect of the Diophantine index. Particularly, as a direct application, our Main Theorem \ref{theorem1} could deal with the case of a general modulus of continuity for $ H\left( {x,y,\varepsilon }\right) $, such as the Logarithmic H\"{o}lder continuity case, that is, for all $ 0 < \left| {x - \xi } \right| + \left| {y - \eta } \right| \leq 1/2 $,
\begin{displaymath}
\left| {{\partial ^\alpha }H\left( {x,y,\varepsilon } \right) - {\partial ^\alpha }H\left( {\xi ,\eta ,\varepsilon } \right)} \right| \leq \frac{c}{{{{\left( { - \ln \left( {\left| {x - \xi } \right| + \left| {y - \eta } \right|} \right)} \right)}^\lambda }}}
\end{displaymath}
with respect to all $ \alpha \in {\mathbb{N}^{2n}}$ with $ \left| \alpha \right| = {2n } $, where $ n \geq 2 $, $\lambda>1$, $n-1<\tau \in \mathbb{N}^+$, $c, \varepsilon>0 $ are sufficiently small, $ \left( {x,y} \right) \in {\mathbb{T}^n} \times G $ with $ {\mathbb{T}^n}: = {\mathbb{R}^n}/ \mathbb{Z}^n $, and $ G \subset {\mathbb{R}^n} $ is a connected closed set with interior points. One should notice that this Hamiltonian system with non-H\"older continuity cannot be studied by any KAM theorems available so far; see Section \ref{section6} for more details.
This paper is organized as follows. In Section \ref{section2}, by introducing some basic notions and properties for modulus of continuity, we establish a Jackson type approximation theorem in a continuous sense, and the proof will be postponed to Section \ref{JACK}. Then we state our main result in this paper. Namely, considering that the highest order derivatives of the Hamiltonian function $ H $ with respect to the action-angle variables $(x,y)$ are only continuous, we present a KAM theorem (Main Theorem \ref{theorem1}) with sharp differentiability hypotheses under certain assumptions, involving a generalized Dini type integrability condition (H1). The applications of this result are given in Section \ref{section6}, including H\"{o}lder, H\"{o}lder plus Logarithmic H\"{o}lder and a more complicated circumstance, aiming to show the importance and universality of Main Theorem \ref{theorem1}. In particular, two explicit Hamiltonian systems are constructed, which cannot be studied by any KAM theorems for finite smoothness via classical H\"{o}lder continuity, namely having a \textit{non-H\"older corner} and even being of \textit{nowhere H\"older's type}, while our Main Theorem \ref{theorem1} still works. Section \ref{section4} provides the proof of Main Theorem \ref{theorem1} and is mainly divided into two parts: the first part deals with the modified KAM steps via only modulus of continuity, while the second part is devoted to giving an iteration theorem (Theorem \ref{t1}) on regularity, which is essential in analyzing the KAM remaining smoothness for the invariant torus as well as the conjugation. Finally, Section \ref{ProofOther} presents the proofs of Theorems \ref{Holder}, \ref{lognew} and \ref{GLHnew} in Section \ref{section6}, respectively.
\section{Statement of results}
\label{section2}
\subsection{Preliminaries for modulus of continuity}
We first give some notions, including the modulus of continuity along with the norm based on it, the semi separability which will be used in Theorem \ref{Theorem1}, as well as the weak homogeneity which will appear in Theorem \ref{theorem1}.
Denote by $ |\cdot| $ the sup-norm in $ \mathbb{R}^d $, where the dimension $ d \in \mathbb{N}^+ $ may vary throughout this paper. We stipulate that, in a limit process, $ f_1(x)=\mathcal{O}^{\#}\left(f_2(x)\right) $ means that there are absolute positive constants $ \ell_1 $ and $ \ell_2 $ such that $ {\ell _1}{f_2}\left( x \right) \leq {f_1}\left( x \right) \leq {\ell _2}{f_2}\left( x \right) $, that $ f_1(x)=\mathcal{O}\left(f_2(x)\right) $ means that there exists an absolute positive constant $ \ell_3 $ such that $ |f_1(x)| \leq \ell_3 f_2(x) $, and finally that $ f_1(x)\sim f_2(x) $ indicates that $ f_1(x) $ and $ f_2(x) $ are equivalent.
\begin{definition}[Modulus of continuity]\label{d1}
A nondecreasing continuous function $ \varpi (t)>0 $ on the interval $ \left( {0,\delta } \right] $ with respect to some $ \delta >0 $ is said to be a modulus of continuity, if $ \mathop {\lim }\limits_{x \to {0^ + }} \varpi \left( x \right) = 0 $ and $ \mathop {\overline {\lim } }\limits_{x \to {0^ + }} x/{\varpi }\left( x \right) < + \infty $. Next, we define the following semi norm and norm for a continuous function $ f $ on $ {\mathbb{R}^n} $:
\begin{equation}\notag
{\left[ f \right]_\varpi }: = \mathop {\sup }\limits_{x,y \in {\mathbb{R}^n},\;0 < \left| {x - y} \right| \leq \delta } \frac{{\left| {f\left( x \right) - f\left( y \right)} \right|}}{{\varpi \left( {\left| {x - y} \right|} \right)}},\;\;{\left| f \right|_{{C^0}}}: = \mathop {\sup }\limits_{x \in {\mathbb{R}^n}} \left| {f\left( x \right)} \right|.
\end{equation}
We call $ f $ $ C_{k,\varpi} $ continuous if $ f $ has partial derivatives $ {{\partial ^\alpha }f} $ for $ \alpha=(\alpha_1, \ldots ,\alpha_n)\in \mathbb{N}^n, \left| \alpha \right|:=\sum\nolimits_{i = 1}^n {\left| {{\alpha _i}} \right|} \leq k \in \mathbb{N} $ and satisfies
\begin{equation}\label{k-w}
{\left\| f \right\|_\varpi }: = \sum\limits_{\left| \alpha \right| \leq k} {\left( {{{\left| {{\partial ^\alpha }f} \right|}_{{C^0}}} + {{\left[ {{\partial ^\alpha }f} \right]}_\varpi }} \right)} < + \infty .
\end{equation}
Denote by $ {C_{k,\varpi }}\left( {{\mathbb{R}^n}} \right) $ the space composed of all functions $ f $ satisfying \eqref{k-w}.
\end{definition}
\begin{remark}
For $ f:{\mathbb{R}^n} \to \Omega \subset {\mathbb{R}^d} $ with a modulus of continuity $ \varpi $, we modify the above designation to $ {C_{k,\varpi }}\left( {{\mathbb{R}^n},\Omega } \right) $.
\end{remark}
It can be seen that the well-known Lipschitz continuity and H\"{o}lder continuity are special cases in the above definition. In particular, for $ 0<\ell \notin \mathbb{N}^+ $, we denote by $ f \in {C^\ell }\left( {{\mathbb{R}^n}} \right) $ the function space in which the higher derivatives in $ \mathbb{R}^n $ are $ \{\ell\} $-H\"{o}lder continuous, that is, the modulus of continuity is of the form
\[\varpi_{\mathrm{H}}^{\{\ell\}}(x)\sim x^{\ell},\;\; x \to 0^+,\]
where $ \{\ell\} \in (0,1)$ denotes the fractional part of $ \ell $. As a generalization of classical H\"{o}lder continuity, we define the \textit{Logarithmic H\"{o}lder continuity} with index $ \lambda > 0 $, where
\[\varpi_{\mathrm{LH}}^{\lambda} \left( x \right) \sim 1/{\left( { - \ln x} \right)^\lambda },\;\;x \to 0^+,\]
and we will omit the range $ 0 < x \ll 1 $ without causing ambiguity for this kind of function with singularities away from $ 0^+ $, because one only needs to focus on the asymptotic behavior of a given modulus of continuity near $ 0^+ $. Moreover, one could further consider \textit{the generalized Logarithmic H\"older's type} with indices $ \varrho\in \mathbb{N}^+ $ and $ \lambda>0 $ as follows:
\begin{equation}\label{mafan}
\varpi_{\mathrm{GLH}}^{\varrho,\lambda} \left( x \right) \sim \frac{1}{{(\ln (1/x))(\ln \ln (1/x)) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho (1/x))}^\lambda }}},\;\;x \to 0^+,
\end{equation}
and we have $ \varpi_{\mathrm{GLH}}^{1,\lambda} \left( x \right) \sim \varpi_{\mathrm{LH}}^{\lambda} \left( x \right) $.
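The role of the Dini condition \eqref{cdini} for these moduli can be illustrated numerically. Below is a small sketch (plain Python; the function names are ours) approximating $ \int_a^{1/2} \varpi(x)/x \,{\rm d}x $ via the substitution $ u=-\ln x $: for H\"older's type and for Logarithmic H\"older's type with $\lambda = 2$ the integral saturates as $ a \to 0^+ $, while for $\lambda = 1$ it grows like $ \ln\ln(1/a) $, so the Dini condition fails:

```python
import math

# Dini integral I(a) = ∫_a^{1/2} varpi(x)/x dx; substituting u = -ln(x) turns it
# into ∫_{ln 2}^{-ln a} varpi(e^{-u}) du, approximated here by a midpoint rule.
def dini(varpi, a, n=100000):
    u0, u1 = math.log(2), -math.log(a)
    h = (u1 - u0) / n
    return h * sum(varpi(math.exp(-(u0 + (i + 0.5)*h))) for i in range(n))

holder = lambda x: math.sqrt(x)             # varpi_H^{1/2}
lh1    = lambda x: 1/(-math.log(x))         # varpi_LH^1  (Dini fails)
lh2    = lambda x: 1/(-math.log(x))**2      # varpi_LH^2  (Dini holds)

for a in (1e-10, 1e-100):
    print([round(dini(w, a), 3) for w in (holder, lh1, lh2)])
# [1.414, 3.503, 1.399]   <- a = 1e-10
# [1.414, 5.806, 1.438]   <- a = 1e-100: only the middle entry keeps growing
```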
\begin{remark}\label{rema666}
It is well known that a mapping defined on a bounded connected closed set in a finite dimensional space must have a modulus of continuity, see \cite{Herman3,MR1036903}. For example, a function $ f(x) $ defined on $ [0,1] \subset {\mathbb{R}^1} $ automatically admits the modulus of continuity
\[{\omega _{f}}\left( x \right): = \mathop {\sup }\limits_{y,z \in \left[ {0,1} \right],\;0 < \left| {y - z} \right| \leq x} \left| {f\left( y \right) - f\left( z \right)} \right|.\]
\end{remark}
In view of Remark \ref{rema666}, we therefore only focus on moduli of continuity throughout this paper. Next we introduce a comparison relation describing the relative strength of moduli of continuity.
\begin{definition}\label{d5}
Let $ {\varpi _1} $ and $ {\varpi _2} $ be moduli of continuity on the interval $ \left( {0,\delta } \right] $. We say that $ {\varpi _1} $ is weaker (strictly weaker) than $ {\varpi _2} $ if $ \mathop {\overline\lim }\limits_{x \to {0^ + }} {\varpi _2}\left( x \right)/{\varpi _1}\left( x \right) <+\infty $ ($ =0 $).
\end{definition}
\begin{remark}\label{strict}
Obviously any modulus of continuity is weaker than Lipschitz's type, and the Logarithmic H\"{o}lder's type $ \varpi_{\mathrm{LH}}^{\lambda} \left( x \right) \sim 1/{\left( { - \ln x} \right)^\lambda } $ with any $ \lambda > 0 $ is strictly weaker than \textit{arbitrary} H\"{o}lder's type $ \varpi_{\mathrm{H}}^{\alpha}\left( x \right) \sim {x^\alpha } $ with any $ 0 < \alpha < 1 $. The generalized Logarithmic H\"older's type $ \varpi_{\mathrm{GLH}}^{\varrho,\lambda} \left( x \right) $ in \eqref{mafan} with $ \varrho\in \mathbb{N}^+ $ and $ \lambda>0 $ is weaker than both of them.
\end{remark}
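For instance, the strict comparison in Remark \ref{strict} follows from the elementary limit
\[\mathop {\lim }\limits_{x \to {0^ + }} \frac{{\varpi_{\mathrm{H}}^{\alpha}\left( x \right)}}{{\varpi_{\mathrm{LH}}^{\lambda}\left( x \right)}} = \mathop {\lim }\limits_{x \to {0^ + }} {x^\alpha }{\left( { - \ln x} \right)^\lambda } = 0\]
for any $ 0<\alpha<1 $ and $ \lambda>0 $, since powers dominate logarithms near $ 0^+ $.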
As a consequence, one directly obtains the following corollary from Remark \ref{strict}. This shows that it is indeed necessary to extend H\"older continuous KAM theory to the merely continuous setting.
\begin{corollary}\label{V37-Re2.3}
Let $ \Omega \subset \mathbb{R}^n $ be a bounded connected closed set with interior points. Then for every $ k \in \mathbb{N} $, we have $ {C_{k,\varpi_{\mathrm{H}}^{\alpha} }}\left( \Omega \right) \subseteq {C_{k,\varpi_{\mathrm{LH}}^{\lambda} }} \left( \Omega \right) \subset {C_{k,\varpi_{\mathrm{GLH}}^{\varrho,\lambda} }}\left( \Omega \right) $ with arbitrary given $ 0<\alpha<1, \lambda>0 $ and $ \varrho\in \mathbb{N}^+ $.
\end{corollary}
Finally, in order to emphasize the rigor of the KAM analysis throughout this paper, we have to introduce two crucial concepts for moduli of continuity, namely \textit{semi separability} and \textit{weak homogeneity}.
\begin{definition}[Semi separability]\label{d2}
A modulus of continuity $ \varpi $ is said to be semi separable, if for $ x \geq 1 $, there holds
\begin{equation}\label{Ox}
\psi \left( x \right): = \mathop {\sup }\limits_{0 < r < \delta /x} \frac{{\varpi \left( {rx} \right)}}{{\varpi \left( r \right)}} = \mathcal{O}\left( x \right),\;\;x \to + \infty .
\end{equation}
\end{definition}
\begin{remark}\label{Remarksemi}
Semi separability directly leads to $ \varpi \left( {rx} \right) \leq \varpi \left( r \right)\psi \left( x \right) $ for $ 0 < rx \leq \delta $, which will be used in the proof of the Jackson type Theorem \ref{Theorem1} via only modulus of continuity.
\end{remark}
\begin{definition}[Weak homogeneity] \label{weak}
A modulus of continuity $ \varpi $ is said to admit weak homogeneity, if for fixed $ 0<a<1 $, there holds
\begin{equation}\label{erfenzhiyi}
\mathop {\overline {\lim } }\limits_{x \to {0^ + }} \frac{{\varpi \left( x \right)}}{{\varpi \left( {ax} \right)}} < + \infty .
\end{equation}
\end{definition}
\begin{remark}
The weak homogeneity plays a controlling role in KAM iteration, see details from the proof of Main Theorem \ref{theorem1}.
\end{remark}
It should be emphasized that semi separability and weak homogeneity are universal hypotheses. The H\"older and Lipschitz types automatically admit them. Many moduli of continuity weaker than the H\"older one are semi separable and also admit weak homogeneity, e.g., for the Logarithmic H\"{o}lder's type $ \varpi_{\mathrm{LH}}^{\lambda} \left( x \right) \sim 1/{\left( { - \ln x} \right)^\lambda } $ with any $ \lambda > 0 $, one verifies that $ \psi \left( x \right) \sim {\left( {\ln x} \right)^\lambda } = \mathcal{O}\left( x \right) $ as $ x \to +\infty $ in \eqref{Ox}, and $ \mathop {\overline {\lim } }\limits_{x \to {0^ + }} {\varpi_{\mathrm{LH}}^{\lambda}}\left( x \right)/{\varpi_{\mathrm{LH}}^{\lambda}}\left( {ax} \right) = 1 < + \infty $ with all $ 0<a<1 $ in \eqref{erfenzhiyi}. See examples in Lemmas \ref{Oxlemma} and \ref{ruotux}; in particular, \textit{it is pointed out that a convex modulus of continuity naturally possesses these two properties.}
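As a further simple illustration, for the H\"older type $ \varpi_{\mathrm{H}}^{\alpha}\left( x \right) \sim x^{\alpha} $ with $ 0<\alpha<1 $, one directly computes
\[\psi \left( x \right) = \mathop {\sup }\limits_{0 < r < \delta /x} \frac{{{{\left( {rx} \right)}^\alpha }}}{{{r^\alpha }}} = {x^\alpha } = \mathcal{O}\left( x \right),\;\;x \to + \infty ,\]
and, for fixed $ 0<a<1 $,
\[\mathop {\overline {\lim } }\limits_{x \to {0^ + }} \frac{{\varpi_{\mathrm{H}}^{\alpha}\left( x \right)}}{{\varpi_{\mathrm{H}}^{\alpha}\left( {ax} \right)}} = {a^{ - \alpha }} < + \infty ,\]
which verifies both semi separability and weak homogeneity in this case.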
\subsection{Jackson type approximation theorem beyond H\"older class}
This section provides a Jackson type approximation theorem beyond H\"older's type and some related corollaries based on Definitions \ref{d1} and \ref{d2}. Their proofs will be postponed to Sections \ref{JACK}, \ref{proofcoro1} and \ref{proofcoco2}, respectively. We emphasize that the H\"older conclusion is relatively easy to obtain, because the H\"older continuity $ \varpi_{\mathrm{H}}^{\alpha}(x) $ with $ 0<\alpha<1 $ is naturally homogeneous, i.e., $ \varpi_{\mathrm{H}}^{\alpha}(bx)=b^{\alpha} \varpi_{\mathrm{H}}^{\alpha}(x) $ holds for any $ b>0 $, and this is indeed crucial in the analysis. As to a general modulus of continuity, one has to assume semi separability instead (a weaker property), and the analysis becomes more complicated, namely the integrals have to be estimated separately by different approaches, see details in Section \ref{JACK}.
\begin{theorem}\label{Theorem1}
There is a family of convolution operators
\begin{equation}\notag
{S_r}f\left( x \right) = {r^{ - n}}\int_{{\mathbb{R}^n}} {K\left( {{r^{ - 1}}\left( {x - y} \right)} \right)f\left( y \right){\rm d}y} ,\;\;0 < r \leq 1
\end{equation}
from $ {C^0}\left( {{\mathbb{R}^n}} \right) $ into the space of entire functions on $ {\mathbb{C}^n} $ with the following property. For every $ k \in \mathbb{N} $, there exists a constant $ c\left( {n,k} \right)>0 $ such that, for every $ f \in {C_{k,\varpi }}\left( {{\mathbb{R}^n}} \right) $ with a semi separable modulus of continuity $ \varpi $, every multi-index $ \alpha \in {\mathbb{N}^n} $ with $ \left| \alpha \right| \leq k $, and every $ x \in {\mathbb{C}^n} $ with $ \left| {\operatorname{Im} x} \right| \leq r $, we have
\begin{equation}\label{3.2}
\left| {{\partial ^\alpha }{S_r}f\left( x \right) - {P_{{\partial ^\alpha }f,k - \left| \alpha \right|}}\left( {\operatorname{Re} x;\mathrm{i}\operatorname{Im} x} \right)} \right| \leq c\left( {n,k} \right){\left\| f \right\|_\varpi }{r^{k - \left| \alpha \right|}}\varpi(r),
\end{equation}
where the Taylor polynomial $ P $ is defined as follows
\[{P_{f,k}}\left( {x;y} \right) := \sum\limits_{\left| \beta \right| \leq k} {\frac{1}{{\beta !}}{\partial ^\beta }f\left( x \right){y^\beta }}. \]
Moreover, $ {{S_{r}}f} $ is real analytic whenever $ f $ is real valued.
\end{theorem}
As a direct consequence of Theorem \ref{Theorem1}, we give the following Corollaries \ref{coro1} and \ref{coro2}. These results have been widely used in H\"older's case, see for instance, \cite{Koudjinan,salamon,MR2071231}.
\begin{corollary}\label{coro1}
The approximation function $ {{S_r}f\left( x \right)} $ in Theorem \ref{Theorem1} satisfies
\begin{equation}\notag
\left| {{\partial ^\alpha }\left( {{S_r}f\left( x \right) - f\left( x \right)} \right)} \right| \leq c_*{\left\| f \right\|_\varpi }{r^{k - \left| \alpha \right|}}\varpi(r)
\end{equation}
and
\begin{equation}\notag
\left| {{\partial ^\alpha }{S_r}f\left( x \right)} \right| \leq {c^ * }{\left\| f \right\|_\varpi }
\end{equation}
for $ x \in \mathbb{C}^n $ with $ \left| {\operatorname{Im} x} \right| \leq r $, $ |\alpha| \leq k $, where $ c_* = c_*\left( {n,k} \right) >0$ and $ {c^ * } = {c^ * }\left( {n,k,\varpi } \right) >0$ are some universal constants.
\end{corollary}
\begin{corollary}\label{coro2}
If the function $ f\left( x \right) $ in Theorem \ref{Theorem1} is also of period $ 1 $ in each of the variables $ {x_1}, \ldots ,{x_n} $ and has zero integral over $ {\mathbb{T}^n} $, then the approximation function $ {S_r}f\left( x \right) $ also satisfies these properties.
\end{corollary}
\subsection{Universal frequency-preserving KAM via modulus of continuity and remaining regularity}
We are now in a position to state the universal frequency-preserving KAM theorem via only modulus of continuity in this paper. Before this, let us first fix our parameter settings.
Let $ n \geq 2$ (degrees of freedom), $\tau > n - 1$ (Diophantine index), $ 2\tau + 2 \leq k \in {\mathbb{N}^ + } $ (differentiability order) and a sufficiently large number $ M>0 $ be given, throughout all KAM theorems in this paper. Consider a Hamiltonian function in action-angle variables $ H(x,y):{\mathbb{T}^n} \times G \to \mathbb{R} $, where $ {\mathbb{T}^n}: = {\mathbb{R}^n}/ \mathbb{Z}^n $ and $ G \subset {\mathbb{R}^n} $ is a connected closed set with interior points. It follows from Remark \ref{rema666} that $ H(x,y) $ automatically admits a modulus of continuity $ \varpi $. In view of the comments below Definition \ref{weak}, we assume without loss of generality that $ \varpi $ admits semi separability and weak homogeneity. Besides, we make the following assumptions:
\begin{itemize}
\item[(H1)] Dini type integrability condition: Assume that $ H(x,y)\in {C_{k,\varpi }}\left( {{\mathbb{T}^n} \times G} \right) $ with the above modulus of continuity $ \varpi $. In other words, $ H(x,y) $ has at least derivatives of order $ k $, and the highest order derivatives admit the regularity of $ \varpi $. Moreover, $ \varpi $ satisfies the Dini type integrability condition
\begin{equation}\label{Dini}
\int_0^1 {\frac{{\varpi \left( x \right)}}{{{x^{2\tau + 3 - k}}}}{\rm d}x} < + \infty .
\end{equation}
\item[(H2)] Boundedness and nondegeneracy:
\begin{equation}\notag
{\left\| H \right\|_\varpi } \leq M,\;\;\left| {{{\left( {\int_{{\mathbb{T}^n}} {{H_{yy}}\left( {\xi ,0} \right){\rm d}\xi } } \right)}^{ - 1}}} \right| \leq M.
\end{equation}
\item[(H3)] Diophantine condition: For some $ \alpha_ * > 0 $, the prescribed frequency $ \omega \in {\mathbb{R}^n} $ satisfies
\begin{equation}\notag
| {\langle {{\tilde k},\omega } \rangle } | \geq \alpha_ *{ | {{\tilde k }} |}^{-\tau} ,\;\;\forall 0 \ne \tilde k \in {\mathbb{Z}^n},\;\; |\tilde k|: = \sum\limits_{j = 1}^n {|{{\tilde k}_j}|} .
\end{equation}
\item[(H4)] KAM smallness: There holds
\begin{align}\label{T1-2}
&\sum\limits_{\left| \alpha \right| \leq k} {\left| {{\partial ^\alpha }\Big( {H\left( {x,0} \right) - \int_{{\mathbb{T}^n}} {H\left( {\xi ,0} \right){\rm d}\xi } } \Big)} \right|{\varepsilon ^{\left| \alpha \right|}}} \notag \\
+ &\sum\limits_{\left| \alpha \right| \leq k - 1} {\left| {{\partial ^\alpha }\left( {{H_y}\left( {x,0} \right) - \omega } \right)} \right|{\varepsilon ^{\left| \alpha \right| + \tau + 1}}} \leq M{\varepsilon ^k}\varpi \left( \varepsilon \right)
\end{align}
for every $ x \in \mathbb{R}^n $ and some constant $ 0 < \varepsilon \leq {\varepsilon ^ * } $.
\item[(H5)] Criticality: For $ \varphi_i(x):=x^{k-(3-i)\tau-1}\varpi(x) $ with $ i=1,2 $, there exist critical $ k_i^*\in \mathbb{N}^+ $ such that
\[\int_0^1 {\frac{{{\varphi _i}\left( x \right)}}{{{x^{k_i^ *+1 }}}}{\rm d}x} < + \infty ,\;\;\int_0^1 {\frac{{{\varphi _i}\left( x \right)}}{{{x^{k_i^ * + 2}}}}{\rm d}x} = + \infty .\]
\end{itemize}
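For orientation (a routine computation rather than a new assumption), consider the H\"older case $ \varpi(x) \sim x^{\{\ell\}} $ with $ k=[\ell] $ and $ \ell>2\tau+2 $. Then the Dini type condition \eqref{Dini} reads
\[\int_0^1 {\frac{{{x^{\{ \ell \} }}}}{{{x^{2\tau + 3 - [\ell ]}}}}{\rm d}x} = \int_0^1 {{x^{\ell - 2\tau - 3}}{\rm d}x} < + \infty ,\]
since $ \ell - 2\tau - 3 > -1 $. Similarly, $ \varphi_i(x) \sim x^{\ell - (3-i)\tau - 1} $ in (H5), and one checks directly that $ k_i^* = [\ell - (3-i)\tau - 1] $ whenever $ \ell - (3-i)\tau \notin \mathbb{N}^+ $.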
Let us make some comments on our assumptions.
\begin{itemize}
\item[(C1)] Our Dini type integrability condition (H1) reveals the \textit{deep relationship} between the \textit{irrationality} for frequency $ \omega $, \textit{order} and \textit{continuity} of the highest order derivatives for the Hamiltonian $ H(x,y) $, and appears in universal KAM for the first time. It also admits sharpness, because $ H(x,y) $ could be $ C^{[2\tau+2]} $ differentiable, close to the counterexample constructed in \cite{MR3061774}.
\item[(C2)] Thanks to the Banach algebra property of H\"older spaces, for the H\"older type (H4) only needs the term with $ \left| \alpha \right| = 0 $, and the higher order derivatives need not satisfy the condition. However, for a general modulus of continuity it seems not easy to establish the corresponding Banach algebra property, and we therefore include the higher order derivatives in (H4).
Sometimes they can be removed correspondingly.
\item[(C3)] The existence of $ k_i^* $ in (H5) is directly guaranteed by (H1), actually this assumption is proposed to investigate the higher regularity of the persistent KAM torus as well as the conjugation, that is, the regularity to $ C^{k_i^*} $ plus certain kinds of modulus of continuity. In general, given an explicit modulus of continuity $ \varpi $, such $ k_i^* $ in (H5) are automatically determined by using asymptotic analysis, see details from Section \ref{section6}.
\end{itemize}
Finally, under the sharp differentiability $ C^{[2\tau+2]} $, we state the following Main Theorem \ref{theorem1} involving \textit{frequency-preserving KAM persistence} as well as \textit{remaining regularity for the invariant torus and the conjugation.} \textit{To the best of our knowledge, this is the first approach on these two aspects, towards KAM via only continuity.}
\begin{theorem}[Main Theorem]\label{theorem1}
(Part I) Assume (H1)-(H4). Then there is a solution
\[x = u\left( \xi \right),\;\;y = v\left( \xi \right)\]
of the following equation with the operator $ D: = \sum\limits_{\nu = 1}^n {{\omega _\nu }\frac{\partial }{{\partial {\xi _\nu }}}} = \omega \cdot {\partial _\xi } $
\begin{equation}\notag
Du = {H_y}\left( {u,v} \right),\;\;Dv = - {H_x}\left( {u,v} \right),
\end{equation}
such that $ u\left( \xi \right) - \xi $ and $ v\left( \xi \right) $ are of period $ 1 $ in all variables, where $ u $ and $ v $ are at least $ C^1 $. \vspace{3mm}
\\
(Part II) In addition, assume (H5). Then there exist moduli of continuity $ {\varpi _i} $ ($ i=1,2 $) such that $ u \in {C_{k_1^ * ,{\varpi _1}}}\left( {{\mathbb{R}^n},{\mathbb{R}^n}} \right) $ and $ v \circ {u^{ - 1}} \in {C_{k_2^ * ,{\varpi _2}}}\left( {{\mathbb{R}^n},G} \right) $. Particularly, $ {\varpi _i} $ can be explicitly determined as follows
\begin{equation}\label{varpii}
{\varpi _ i }\left( \gamma \right) \sim \gamma \int_{L_i\left( \gamma \right)}^\varepsilon {\frac{{\varphi_i \left( t \right)}}{{{t^{{k_i^*} + 2}}}}{\rm d}t} = {\mathcal{O}^\# }\left( {\int_0^{L_i\left( \gamma \right)} {\frac{{\varphi_i \left( t \right)}}{{{t^{{k_i^*} + 1}}}}{\rm d}t} } \right),\;\;\gamma \to {0^ + },
\end{equation}
where $ L_i(\gamma) \to 0^+ $ are some functions such that the second relation in \eqref{varpii} holds for $ i=1,2 $.
\end{theorem}
\begin{remark}\label{xiangxi}
We call such a solution $x=u(\xi),y=v(\xi)$ a KAM solution. Besides, the regularity of $ v \circ {u^{ - 1}} $ and of $ u $ corresponds to the smoothness of the invariant torus and of the conjugation from the dynamics on it to the linear flow, respectively.
\end{remark}
\begin{remark}\label{RemarkM2}
\begin{itemize}
\item[(a)] As in \cite{salamon}, the unperturbed system under consideration might be non-integrable (e.g., $ H = \left\langle {\omega ,y} \right\rangle + \left\langle {A\left( x \right)y,y} \right\rangle + \cdots $), which differs from the requirement of an integrable part in \cite{Po2,Bounemoura}, and the KAM persistence is frequency-preserving.
\item[(b)] A Hamiltonian system having non-integrable unperturbed part is generally not reduced to a nearly integrable one, see Arnold et al \cite{MR2269239}.
\item[(c)] Moreover, our non-integrable part here may be merely $ C_{k,\varpi} $ finitely differentiable with $ k \geq 2\tau+2 $ (we will see later that the critical case $ k=[2\tau+2] $ can indeed be achieved in Theorems \ref{Holder}, \ref{lognew} and \ref{GLHnew}), while for the integrable part, \cite{Chaotic} and \cite{Po2} require real-analyticity, and \cite{Bounemoura} requires the H\"older continuity $ C^{\varsigma } $ with $ \varsigma>2\tau+4 $.
\end{itemize}
\end{remark}
\begin{remark}
Actually Theorem \ref{theorem1} provides a method for determining $ \varpi_i $ with $ i=1,2 $, see \eqref{varpii}. For a prescribed modulus of continuity of the Hamiltonian, such as the H\"older and Logarithmic H\"older types, we have to use asymptotic analysis to derive the remaining continuity explicitly in Section \ref{section6}, which is somewhat involved.
\end{remark}
As mentioned before, the H\"older's type $ H \in C^{\ell}(\mathbb{T}^n,G) $ with $ \ell>2\tau+2 $ (where $ \tau>n-1$ is the Diophantine index) is always regarded as the critical case in the sense of H\"older due to \cite{salamon,Bounemoura,MR3061774}. Now let us consider the non-universal Diophantine nonresonance, i.e., $ \tau=n-1 $; such frequencies can only form a set of zero Lebesgue measure. Then one observes that $ k=2\tau+2=2n $ is the critical degenerate case in our settings, and our Dini type integrability condition \eqref{Dini} in (H1) becomes the classical Dini condition \eqref{cdini}, which also allows regularity weaker than arbitrary H\"older's type. This is exactly the same regularity required in \cite{Chaotic}, which concentrates on non-universal KAM persistence without frequency-preserving. However, the non-universal Diophantine frequencies with index $ \tau=n-1 $ are not enough to represent almost all frequencies in $ \mathbb{R}^n $.
Back to our concern on universal KAM persistence, we may have to require the generalized Dini condition in (H1) instead of \eqref{cdini}. Obviously, (H1) automatically holds if the highest differentiable order $ k $ of $ H(x,y) $ satisfies $ k\geq 2\tau+3 $ or even larger, because the modulus of continuity $ \varpi $ does not have a singularity at $ 0 $, and therefore neither does the integrand. But our Main Theorem \ref{theorem1} still makes sense, because the remaining regularity of the persistent torus as well as the conjugation will be correspondingly higher, as one would expect.
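Concretely, when $ k = 2\tau+2 \in \mathbb{N}^+ $, the exponent in \eqref{Dini} becomes $ 2\tau+3-k=1 $, so \eqref{Dini} reduces to
\[\int_0^1 {\frac{{\varpi \left( x \right)}}{x}{\rm d}x} < + \infty ,\]
which is exactly the classical Dini condition mentioned above.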
\section{Applications}
\label{section6}
In this section, we establish detailed regularity of the KAM torus as well as the conjugation, such as H\"{o}lder and Logarithmic H\"{o}lder ones, etc. Denote by $ \{a\} $ and $ [a] $ the fractional part and the integer part of $ a\geq0 $, respectively. It should be emphasized that the Dini type integrability condition \eqref{Dini} in (H1) is easy to verify, that is, the frequency-preserving KAM persistence is relatively easy to obtain. However, some complicated techniques of asymptotic analysis are needed to investigate the specific regularity of the KAM torus as well as the conjugation, mainly reflected in the selection of the functions $ L_i (\gamma)$ ($ i=1,2 $) in \eqref{varpii}. In particular, we will see explicitly how much regularity is lost due to small divisors, see, for instance, Theorems \ref{Holder}, \ref{lognew}, \ref{GLHnew} and the examples shown in Section \ref{explicitexa}. In what follows, the moduli of continuity under consideration are always convex near $ 0^+ $ and therefore automatically admit semi separability as well as weak homogeneity, as mentioned before.
\subsection{KAM via different continuity}\label{subun}
Recall that we require $\tau>n-1$ and $n \geq 2$ in (H3), that is, the prescribed Diophantine frequency is universal. Under such settings, the known minimum regularity requirement for the Hamiltonian $ H(x,y) $ is H\"older's type $ C^{\ell} $ with $ \ell>2\tau +2 $, see Salamon's KAM in \cite{salamon} as well as Theorem \ref{Holder} below. Interestingly, if one considers weaker moduli of continuity, such as $ C^{2\tau+2} $ plus Logarithmic H\"older's type, the above regularity $C^\ell$ can be weakened, as can be seen from our new Theorems \ref{lognew} and \ref{GLHnew}. It should be noted that weakening the initial regularity \textit{does not} mean that KAM results with higher initial regularity are fully included, because we also have to consider the remaining KAM regularity, namely the smoothness of the KAM torus and the conjugation. Intuitively, the weaker the initial regularity, the weaker the remaining regularity. One should not expect the KAM torus and the conjugation to be of H\"older's type when the Hamiltonian $ H(x,y) $ admits only continuous high order derivatives.
\subsubsection{H\"{o}lder continuous case}\label{subsubsub}
Let us discuss the classical H\"{o}lder case first, see also the result given in \cite{salamon}. As we will see, the remaining regularity is still of H\"{o}lder's type.
\begin{theorem}\label{Holder}
Let $ H(x,y) \in C^{\ell} (\mathbb{T}^n,G) $ with $ \ell>2\tau +2 $, where $ \ell \notin \mathbb{N}^+ $, $ \ell-\tau \notin \mathbb{N}^+$ and $ \ell-2\tau \notin \mathbb{N}^+ $. That is, $ H(x,y) $ is of $ C_{k,\varpi} $ with $ k=[\ell] $ and $ \varpi(x)\sim \varpi_{\mathrm{H}}^{\ell}(x)\sim x^{\{\ell\}} $. Assume (H2), (H3) and (H4). Then there is a solution $ x = u\left( \xi \right),y = v\left( \xi \right) $ of the following equation with the operator $ D: = \sum\limits_{\nu = 1}^n {{\omega _\nu }\frac{\partial }{{\partial {\xi _\nu }}}}= \omega \cdot {\partial _\xi } $
\begin{equation}\notag
Du = {H_y}\left( {u,v} \right),\;\;Dv = - {H_x}\left( {u,v} \right)
\end{equation}
such that $ u\left( \xi \right) - \xi $ and $ v\left( \xi \right) $ are of period $ 1 $ in all variables. In addition, $ u \in {C^{\ell-2\tau-1}}\left( {{\mathbb{R}^n},{\mathbb{R}^n}} \right) $ and $ v \circ {u^{ - 1}} \in {C^{\ell-\tau-1}}\left( {{\mathbb{R}^n},G} \right) $.
\end{theorem}
\subsubsection{H\"{o}lder plus Logarithmic H\"{o}lder continuous case}\label{SUBLH}
To explicitly exhibit moduli of continuity strictly weaker than H\"older's type, we establish the following Theorem \ref{lognew}. One will see later that Theorem \ref{lognew} employs rather involved asymptotic analysis, and interestingly, \textit{the remaining regularities, basically characterized by $ \varpi_1 $ and $ \varpi_2 $, admit different forms.}
\begin{theorem}\label{lognew}
Let $ \tau>n-1 $ be given and let $ H(x,y)\in C_{[2\tau+2], \varpi} $, where $ \varpi \left( x \right) \sim {x^{\{2\tau+2\}}}/{\left( { - \ln x} \right)^\lambda } $ with $ \lambda > 1 $. That is, $ H(x,y) $ is of $ C^{k} $ plus the above $ \varpi $ with $k= [2\tau+2] $. Assume (H2), (H3) and (H4). Then there is a solution $ x = u\left( \xi \right),y = v\left( \xi \right) $ of the following equation with the operator $ D: = \sum\limits_{\nu = 1}^n {{\omega _\nu }\frac{\partial }{{\partial {\xi _\nu }}}} = \omega \cdot {\partial _\xi } $
\begin{equation}\notag
Du = {H_y}\left( {u,v} \right),\;\;Dv = - {H_x}\left( {u,v} \right)
\end{equation}
such that $ u\left( \xi \right) - \xi $ and $ v\left( \xi \right) $ are of period $ 1 $ in all variables. In addition, letting
\[{\varpi _1}\left( x \right) \sim \frac{1}{{{{\left( { - \ln x} \right)}^{\lambda - 1}}}} \sim \varpi _{\mathrm{LH}}^{\lambda - 1}\left( x \right),\]
and
\[ {\varpi _2}\left( x \right) \sim \left\{ \begin{aligned}
&{\frac{1}{{{{\left( { - \ln x} \right)}^{\lambda - 1}}}} \sim \varpi _{\mathrm{LH}}^{\lambda - 1}\left( x \right)},&n-1<\tau \in {\mathbb{N}^ + } \hfill, \\
&{\frac{{{x^{\left\{ \tau \right\}}}}}{{{{\left( { - \ln x} \right)}^\lambda }}} \sim {x^{\left\{ \tau \right\}}}\varpi _{\mathrm{LH}}^\lambda \left( x \right)},&n-1<\tau \notin {\mathbb{N}^ + } \hfill, \\
\end{aligned} \right.\]
one has that $ u \in {C_{1 ,{\varpi _1}}}\left( {{\mathbb{R}^n},{\mathbb{R}^n}} \right) $ and $ v \circ {u^{ - 1}} \in {C_{[\tau+1] ,{\varpi _2}}}\left( {{\mathbb{R}^n},G} \right) $.
\end{theorem}
\subsubsection{H\"{o}lder plus generalized Logarithmic H\"{o}lder continuous case}
Actually, in view of the Dini type integrability condition (H1), one notices that the regularity for the given Hamiltonian function could be further weakened, such as the generalized Logarithmic H\"{o}lder continuity in \eqref{mafan}. But the parameter $\lambda>0 $ needs to be appropriately chosen, otherwise it may contradict \eqref{Dini}. We will only present the following Theorem \ref{GLHnew} for the case where Diophantine index $ \tau $ is an integer for the sake of simplicity. One could deal with $ \tau \notin \mathbb{N}^+ $ similar to that in Theorem \ref{lognew}.
\begin{theorem}\label{GLHnew}
Let $ n-1<\tau\in \mathbb{N}^+ $, and $ H(x,y)\in C_{2\tau+2, {\varpi_{\mathrm{GLH}}^{\varrho,\lambda}}} $ with $\varrho\in \mathbb{N}^+, \lambda > 1 $. That is, all $ (2\tau+2) $-order derivatives of $ H(x,y) $ admit generalized Logarithmic H\"{o}lder continuity:
\begin{equation}\label{mafan2}
\varpi_{\mathrm{GLH}}^{\varrho,\lambda} \left( x \right) \sim \frac{1}{{(\ln (1/x))(\ln \ln (1/x)) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho (1/x))}^\lambda }}},\;\;x \to 0^+.
\end{equation}
Assume (H2), (H3) and (H4). Then there is a solution $ x = u\left( \xi \right),y = v\left( \xi \right) $ of the following equation with the operator $ D: = \sum\limits_{\nu = 1}^n {{\omega _\nu }\frac{\partial }{{\partial {\xi _\nu }}}} = \omega \cdot {\partial _\xi } $
\begin{equation}\notag
Du = {H_y}\left( {u,v} \right),\;\;Dv = - {H_x}\left( {u,v} \right)
\end{equation}
such that $ u\left( \xi \right) - \xi $ and $ v\left( \xi \right) $ are of period $ 1 $ in all variables. Besides, the remaining regularity in Theorem \ref{theorem1} is $ u \in {C_{1 ,{\varpi _1}}}\left( {{\mathbb{R}^n},{\mathbb{R}^n}} \right) $ and $ v \circ {u^{ - 1}} \in {C_{\tau+1,{\varpi _2}}}\left( {{\mathbb{R}^n},G} \right) $ with modulus of continuity
\begin{equation}\label{rem}
{\varpi _1}\left( x \right) \sim {\varpi _2}\left( x \right) \sim \frac{1}{{{{(\underbrace {\ln \cdots \ln }_\varrho (1/x))}^{\lambda - 1}}}}.
\end{equation}
\end{theorem}
\begin{remark}
Particularly \eqref{mafan2} reduces to the Logarithmic H\"older's type $ \varpi_{\mathrm{LH}}^{\lambda}(x) \sim 1/{(-\ln x)^\lambda} $ with $ \lambda>1 $ as long as $ \varrho=1 $. As can be seen, the remaining regularity in \eqref{rem} is much weaker than the initial regularity in \eqref{mafan2}, and it is indeed very weak if $ \lambda>1 $ is sufficiently close to $ 1 $ (but cannot degenerate to $ 1 $ by (H1)), or if $ \varrho \in \mathbb{N}^+ $ is sufficiently large, because the explicit modulus of continuity in \eqref{rem} tends to $ 0 $ quite slowly as $ x \to 0^+ $.
\end{remark}
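It may be worth recording why $ \lambda>1 $ is the right threshold here: at the critical order $ k=2\tau+2 $, the Dini type condition \eqref{Dini} for \eqref{mafan2} becomes the Bertrand type integral
\[\int_0^{{\delta _0}} {\frac{{{\rm d}x}}{{x(\ln (1/x))(\ln \ln (1/x)) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho (1/x))}^\lambda }}}} \]
with $ \delta_0>0 $ small, which converges if and only if $ \lambda > 1 $, as one sees from the substitution $ u = \underbrace {\ln \cdots \ln }_\varrho (1/x) $.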
\subsection{Two explicit Hamiltonian systems of Logarithmic H\"{o}lder's type}\label{explicitexa}
To illustrate the wider applicability of our theorems, we shall present two explicit examples strictly beyond H\"older's type, involving \textit{non-H\"older corner's type} and even \textit{nowhere H\"older's type}. Since the H\"older plus Logarithmic H\"older regularity for $ H(x,y) $ in Theorem \ref{lognew} reduces to the simpler Logarithmic H\"older's type for $ 2n<2\tau+2 \in \mathbb{N}^+ $ (because $ \{2 \tau+2 \}=0 $), we consider the following \textit{critical settings} (with sharp differentiability) throughout this section.
Recall Theorem \ref{lognew}. Let $ n = 2,\tau = 2, k = 6=[2\tau+2], {\alpha _ * } > 0,\lambda > 1$ and $M > 0 $ be given. Assume that $ \left( {x,y} \right) \in {\mathbb{T}^2} \times G $ with $ G := \{ {y \in {\mathbb{R}^2}:\left| y \right| \leq 1} \} $, and the universal Diophantine frequency $ \omega = {\left( {{\omega _1},{\omega _2}} \right)^{\top}} \in \mathbb{R}^2 $ satisfies
\begin{equation}\notag
| {\langle {{\tilde k},\omega } \rangle } | \geq \alpha_ *{ | {{\tilde k }} |}^{-2} ,\;\;\forall 0 \ne \tilde k \in {\mathbb{Z}^2},\;\;|\tilde k|: = |\tilde k_1|+|\tilde k_2|,
\end{equation}
i.e., such frequencies form a set of full Lebesgue measure.
\subsubsection{Non-H\"older corner's type}\label{SUBSUB1}
Now we shall construct a finitely smooth perturbation of \textit{non-H\"older corner's type}, whose regularity is $ C^6 $ plus Logarithmic H\"older's type $ \varpi_{\mathrm{LH}}^{\lambda}(r) \sim 1/(-\ln r)^{\lambda} $ with index $ \lambda>1 $. In other words, the highest order derivative admits exact Logarithmic H\"older regularity at a non-H\"older corner, but it may have good regularity away from this corner, such as being sufficiently smooth. Namely, define
\begin{equation}\notag
\mathcal{P}(r): = \left\{ \begin{aligned}
&{{\int_0^r { \cdots \int_0^{{s_2}} {\frac{1}{{{{(1 - \ln \left| {{s_1}} \right|)}^\lambda }}}{\rm d}{s_1} \cdots {\rm d}{s_6}} }}},&{0 < \left| r \right| \leq 1} \hfill, \\
&{0},&{r=0} \hfill. \\
\end{aligned} \right.
\end{equation}
Obviously $ \mathcal{P}(r)\in C_{6,\varpi_{\mathrm{LH}}^{\lambda}} ([-1,1])$.
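Indeed, by a direct computation, for $ 0<|r|\leq 1 $ one has
\[\partial _r^6\mathcal{P}\left( r \right) = \frac{1}{{{{(1 - \ln \left| r \right|)}^\lambda }}} \to 0 = \partial _r^6\mathcal{P}\left( 0 \right),\;\;r \to 0,\]
hence
\[\left| {\partial _r^6\mathcal{P}\left( r \right) - \partial _r^6\mathcal{P}\left( 0 \right)} \right| = \frac{1}{{{{(1 - \ln \left| r \right|)}^\lambda }}} \sim \varpi _{\mathrm{LH}}^\lambda \left( {\left| r \right|} \right),\;\;r \to {0^ + },\]
which gives the claimed $ C_{6,\varpi_{\mathrm{LH}}^{\lambda}} $ regularity at the corner $ r=0 $.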
Let us consider the perturbed Hamiltonian function below with some constant $ 0 < {\varepsilon } < {\varepsilon ^ * } $ sufficiently small ($ {\varepsilon ^ * } $ depends on the constants given before):
\begin{equation}\label{HH}
\mathcal{H}(x,y,\varepsilon ) = {\omega _1}{y_1} + {\omega _2}{y_2} + \frac{1}{M}(y_1^2 + y_2^2) + \varepsilon \left( {\sin (2\pi {x_1}) + \sin (2\pi {x_2}) + \mathcal{P}({y_1}) + \mathcal{P}\left( {{y_2}} \right)} \right).
\end{equation}
At this point, we have
\begin{align*}
\left| {{{\left( {\int_{{\mathbb{T}^2}} {{\mathcal{H}_{yy}}\left( {\xi ,0} \right){\rm d}\xi } } \right)}^{ - 1}}} \right| &= \left| {{{\left( {\int_{{\mathbb{T}^2}} {\left( {\begin{array}{*{20}{c}}
{2{M^{ - 1}}}&0 \\
0&{2{M^{ - 1}}}
\end{array}} \right){\rm d}\xi } } \right)}^{ - 1}}} \right| \notag \\
&= \left| {\left( {\begin{array}{*{20}{c}}
{{2^{ - 1}}M}&0 \\
0&{{2^{ - 1}}M}
\end{array}} \right)} \right| \leq M < + \infty .
\end{align*}
In addition, one can verify that $ \mathcal{H}(x,y,\varepsilon) \in {C_{6,{\varpi_{\mathrm{LH}}^{\lambda}}}}( {{\mathbb{T}^2} \times G} ) $ with $ \varpi_{\mathrm{LH}}^{\lambda}(r) \sim 1/(-\ln r)^{\lambda} $.
However, for $ \tilde \alpha = {\left( {0,0,6,0} \right)^{\top}} $ with $ \left| {\tilde \alpha } \right| = 6 = k $, we have
\[\left| {{\partial ^{\tilde \alpha }}\mathcal{H}\left( {{{\left( {0,0} \right)}^{\top}},{{\left( {{y_1},0} \right)}^{\top}},\varepsilon } \right) - {\partial ^{\tilde \alpha }}\mathcal{H}\left( {{{\left( {0,0} \right)}^{\top}},{{\left( {0,0} \right)}^{\top}},\varepsilon } \right)} \right| = \frac{\varepsilon }{{{{(1 - \ln \left| {{y_1}} \right|)}^\lambda }}} \geq \varepsilon {c_{\lambda ,\ell }}{\left| {{y_1}} \right|^\ell }\]
for any $ 0<\ell\leq1 $, where $ c_{\lambda ,\ell } >0$ is a constant that only depends on $ \lambda $ and $ \ell $. This implies that $ \mathcal{H}(x,y,\varepsilon) \notin {C_{6,{\varpi _{\mathrm{H}}^\ell}}}( {{\mathbb{T}^2} \times G} ) $ with $ {\varpi _{\mathrm{H}}^\ell}(r) \sim {r^\ell } $, i.e., $ \mathcal{H}(x,y,\varepsilon) \notin C^{6+\ell}( {{\mathbb{T}^2} \times G} ) $ with any $ 0<\ell \leq 1 $, because $ \varpi_{\mathrm{LH}}^{\lambda} $ is strictly weaker than $ \varpi _{\mathrm{H}}^\ell $, see also Remark \ref{strict} and Corollary \ref{V37-Re2.3}.
In other words, the highest order derivatives (of order $ k=6 $) of $ \mathcal{H}(x,y,\varepsilon) $ in \eqref{HH} are rigorously Logarithmic H\"{o}lder continuous with index $ \lambda>1 $, but not of any H\"{o}lder type. Therefore, the finitely smooth KAM theorems via classical H\"{o}lder continuity cannot be applied. However, all the assumptions of Theorem \ref{lognew} can be verified, hence the invariant torus persists, and the frequency $ \omega = {\left( {{\omega _1},{\omega _2}} \right)^{\top}} $ of the unperturbed system remains unchanged. Moreover, the remaining regularity of the mappings $ u $ and $ v \circ u^{-1} $ in Theorem \ref{lognew} can be determined as $ u \in {C_{1 ,{\varpi _\mathrm{LH}^{\lambda-1}}}}\left( {{\mathbb{R}^n},{\mathbb{R}^n}} \right) $ and $ v \circ {u^{ - 1}} \in {C_{3 ,{\varpi _\mathrm{LH}^{\lambda-1}}}}\left( {{\mathbb{R}^n},G} \right) $, where $ \varpi _\mathrm{LH}^{\lambda-1}(r)\sim 1/(-\ln r)^{\lambda-1} $. More precisely, $ u $ is at least $ C^1 $, while $ v \circ u^{-1}$ is at least $ C^3 $, and their higher regularity is still not of any H\"older type, but of Logarithmic H\"older type with index $ \lambda-1 $, lower than the original index $ \lambda>1 $, because the small divisors cause a loss of regularity. In KAM terminology, the regularity of the invariant torus with frequency-preserving is $ C^3 $ plus $ (\lambda-1) $-Logarithmic H\"older, while the conjugation from the dynamics on it to the linear flow is $ C^1 $ plus $ (\lambda-1) $-Logarithmic H\"older, see Remark \ref{xiangxi}.
\subsubsection{Nowhere H\"older's type}
Although an explicit irregular case has been discussed in Section \ref{SUBSUB1}, it is not universal enough. Notice that the non-integrable part and the perturbation of the Hamiltonian should be at least of class $ C^6 $ under our results (see Theorem \ref{theorem1} and Remark \ref{RemarkM2}). However, if perturbations are selected at random from the function space $ C^6 $, then the derivatives of order $ 6 $ are likely to be \textit{nowhere differentiable} (note that the example given in Section \ref{SUBSUB1} is $ C^\infty $ except at the corner point), or even \textit{nowhere H\"older} from the point of view of Baire category. More precisely, perturbations may be convergent trigonometric series composed of \textit{high frequency oscillations}. This is also one of our motivations for establishing KAM theorems in the sense of modulus of continuity. We shall focus on this case below, aiming to show the strength of our results.
Let $ 0<q<\frac{1}{3} $ and $ \lambda>1 $ be given. Define a sequence $ \{\mathcal{Q}_n\}_{n \in \mathbb{N}^+} $ satisfying $ \left\{ {{\mathcal{Q}_{n + 1}}\mathcal{Q}_n^{-1},{\mathcal{Q}_n}} \right\}_{n \in \mathbb{N}^+} \subseteq 10{\mathbb{N}^ + } $ and $ \mathcal{Q}_n=\mathcal{O}^{\#}\left(\exp(\Theta^n)\right) $ with $ \Theta=\left({\frac{1}{q}}\right)^{\frac{1}{\lambda}}>1 $. Then the $ 1 $-periodic function
\begin{equation}\label{mathscrP}
\mathscr{P}\left( x \right): = \sum\limits_{n = 1}^\infty {{q^n}\sin \left( {\pi {\mathcal{Q}_n}x} \right)} ,\;\;x \in \mathbb{R}
\end{equation}
is well defined. We shall construct a Hamiltonian system whose perturbation admits $ \mathscr{P} $ as its highest order derivative. To this end, consider the following Hamiltonian with a \textit{high frequency oscillation perturbation}:
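Before proceeding, the sequence $ \{\mathcal{Q}_n\}_{n\in\mathbb{N}^+} $ and the partial sums of $ \mathscr{P} $ are easy to realize numerically. The following Python sketch uses illustrative hypothetical choices $ q=0.2 $ and $ \lambda=2 $ (any admissible values would do), and the helper names \texttt{build\_Q} and \texttt{P} are ours; it only verifies elementary properties of a truncation of the series (divisibility of the ratios, the geometric bound $ \sum_n q^n = q/(1-q) $, and $ 1 $-periodicity).

```python
import math

# Hypothetical illustrative parameters: 0 < q < 1/3 and lambda > 1.
q, lam = 0.2, 2.0
theta = (1.0 / q) ** (1.0 / lam)   # Theta = (1/q)^(1/lambda) > 1

def build_Q(N):
    """Q_n in 10*N+ with Q_{n+1}/Q_n in 10*N+ and Q_n comparable to exp(Theta^n)."""
    Q = [10 * max(1, round(math.exp(theta) / 10))]
    for n in range(2, N + 1):
        ratio = 10 * max(1, round(math.exp(theta ** n - theta ** (n - 1)) / 10))
        Q.append(Q[-1] * ratio)
    return Q

def P(x, N=5):
    """Partial sum of the 1-periodic series P(x) = sum_n q^n sin(pi Q_n x)."""
    Q = build_Q(N)
    return sum(q ** n * math.sin(math.pi * Q[n - 1] * x) for n in range(1, N + 1))
```

Since every $ \mathcal{Q}_n $ is even, each partial sum is exactly $ 1 $-periodic, and uniform convergence of the series is immediate from the geometric domination, matching the continuity argument in the proof of the proposition below.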
\begin{align}
\mathscr{H}(x,y,\varepsilon ) &= {\omega _1}{y_1} + {\omega _2}{y_2} + \left\langle {\mathscr{A}\left( {{x_1},{x_2}} \right){{\left( {{y_1},{y_2}} \right)}^\top},{{\left( {{y_1},{y_2}} \right)}^\top}} \right\rangle \notag \\
&\;\;\;\;+ \varepsilon \left( \sum\limits_{n = 1}^\infty {\frac{{{q^n}}}{{\mathcal{Q}_n^6}}\sin \left( {\pi {\mathcal{Q}_n}{x_1}} \right)} + \sum\limits_{n = 1}^\infty {\frac{{{q^n}}}{{\mathcal{Q}_n^6}}\sin \left( {\pi {\mathcal{Q}_n}{y_2}} \right)} \right)\notag \\
\label{HHH} :&=\mathscr{N}_\mathscr{H}+\varepsilon \mathscr{P}_\mathscr{H},
\end{align}
where $ 0 < {\varepsilon } < {\varepsilon ^ * } $ is sufficiently small, and $ {\varepsilon ^ * } $ depends on the constants given before. Here the matrix $ \mathscr{A}(x_1,x_2) $ may only be of class $ C_{6,\varpi_{\mathrm{LH}}^\lambda} $ (recall Remark \ref{RemarkM2}), and we assume that (H2) holds. In this case, the unperturbed part $ \mathscr{N}_{\mathscr{H}} $ of $ \mathscr{H} $ is indeed $ C_{6,\varpi_{\mathrm{LH}}^\lambda} $ finitely smooth and non-integrable. Let us point out that the non-integrable part could have even weaker regularity, i.e.,
\begin{align*}
\tilde{\mathscr{H}}(x,y,\varepsilon ) &= {\omega _1}{y_1} + {\omega _2}{y_2} + \left\langle \left(\mathscr{A} \left(x_1,x_2\right)+{\tilde{\varepsilon}}{\tilde{\mathscr{A}}} \left(x_1,x_2\right)\right){{{\left( {{y_1},{y_2}} \right)}^\top},{{\left( {{y_1},{y_2}} \right)}^\top}} \right\rangle\notag \\
&\;\;\;\;+ \varepsilon \left( \sum\limits_{n = 1}^\infty {\frac{{{q^n}}}{{\mathcal{Q}_n^6}}\sin \left( {\pi {\mathcal{Q}_n}{x_1}} \right)} + \sum\limits_{n = 1}^\infty {\frac{{{q^n}}}{{\mathcal{Q}_n^6}}\sin \left( {\pi {\mathcal{Q}_n}{y_2}} \right)} \right) \notag \\
&\;\;\;\;-{\tilde{\varepsilon}}\left\langle {\bar{\mathscr{A}}\left( {{x_1},{x_2}} \right){{\left( {{y_1},{y_2}} \right)}^\top},{{\left( {{y_1},{y_2}} \right)}^\top}} \right\rangle ,
\end{align*}
where $ {\tilde{\varepsilon}}\tilde{\mathscr{A}} $ and $ {\tilde{\varepsilon}}\bar{\mathscr{A}} $ are small perturbations of $ \mathscr{A} $ admitting weaker regularity than $ C_{6,\varpi_{\mathrm{LH}}^\lambda} $, e.g., $ C^5 $, while the difference $ \tilde{\mathscr{A}}-\bar{\mathscr{A}} $ \textit{must} be at least of class $ C_{6,\varpi_{\mathrm{LH}}^\lambda} $. That this case can be analyzed at all is due to our technique for dealing with finitely smooth KAM in this paper; obviously, however, the new perturbation of $ \tilde{\mathscr{H}} $ is quite special.
Let us return to the Hamiltonian \eqref{HHH}. The universal frequency-preserving KAM persistence and the same remaining regularity as in Section \ref{SUBSUB1} can now be obtained once one proves that the perturbation is also $ C_{6,\varpi_{\mathrm{LH}}^\lambda} $. However, this is not simple. The following proposition verifies this claim, but simultaneously shows that the perturbation is indeed nowhere H\"older continuous (and therefore the H\"older type KAM tools cannot be used at all, at any point). As will be seen, the technique used here is somewhat similar to that in the Abstract Iterative Theorem \ref{t1}.
\begin{proposition}
The perturbation part $ \varepsilon \mathscr{P}_\mathscr{H} $ of the Hamiltonian $ \mathscr{H} $ in \eqref{HHH} is at least $ C^6 $, and its derivatives of order $ 6 $ are $ \lambda $-Logarithmic H\"older continuous, but nowhere H\"older continuous.
\end{proposition}
\begin{proof}
By direct derivation, it suffices to verify that $ \mathscr{P}\left( x \right) $ defined in \eqref{mathscrP} is $ \lambda $-Logarithmic H\"older continuous, but is nowhere H\"older continuous.
Obviously, $ \mathscr{P}\left( x \right) $ is continuous on $ \mathbb{R} $ because
\[\sum\limits_{n = 1}^\infty {\left| {{q^n}\sin \left( {\pi {\mathcal{Q}_n}x} \right)} \right|} \leq \sum\limits_{n = 1}^\infty {{q^n}} < + \infty .\]
Then it immediately follows from Remark \ref{rema666} and the $ 1 $-periodicity of $ \mathscr{P} $ that $ \mathscr{P} $ automatically admits a modulus of continuity $ \varpi_{\mathscr{P}} $. We first prove the $ \lambda $-Logarithmic H\"older continuity, i.e., $ \varpi_{\mathscr{P}} $ is weaker than $ \varpi_{\rm LH}^\lambda \sim 1/{\left( { - \ln x} \right)^\lambda }$ for $ \lambda>1 $. For every fixed $ 0<h<1 $, define (it is indeed well defined because $ \mathcal{Q}_n \to +\infty $ as $ n \to +\infty $)
\begin{equation}\label{huaNdingyi}
\mathscr{N}_h: = \max \left\{ {N \in {\mathbb{N}^ + }:h {\mathcal{Q}_{N}} \leq 1} \right\}.
\end{equation}
Now one derives
\begin{equation}\label{jgjgz}
{q^{{\mathscr{N}_h}}} \leq C\left( {q,\lambda } \right)\varpi _{\rm LH}^\lambda \left( h \right),\;\; \forall 0<h \ll 1
\end{equation}
from the assumptions on $ \{\mathcal{Q}_n\}_{n \in \mathbb{N}^+} $, where $ C\left( {q,\lambda } \right)>0 $ is a generic constant that only depends on $ q$ and $ \lambda $. Therefore, for $ 0<h<1 $ sufficiently small, we obtain that
\begin{align}
\left| {\mathscr{P}\left( {x + h} \right) - \mathscr{P}\left( x \right)} \right| &= \left| {\sum\limits_{n = 1}^\infty {{q^n}\left( {\sin \left( {\pi {\mathcal{Q}_n}x + \pi {\mathcal{Q}_n}h} \right) - \sin \left( {\pi {\mathcal{Q}_n}x} \right)} \right)} } \right|\notag \\
& \leq \sum\limits_{n = 1}^{\mathscr{N}_h} { + \sum\limits_{n = \mathscr{N}_h + 1}^\infty {{q^n}\left| {\sin \left( {\pi {\mathcal{Q}_n}x + \pi {\mathcal{Q}_n}h} \right) - \sin \left( {\pi {\mathcal{Q}_n}x} \right)} \right|} } \notag\\
\label{GZ1} & \leq h\pi \sum\limits_{n = 1}^{\mathscr{N}_h} {{\mathcal{Q}_n}{q^n}} + 2\sum\limits_{n = \mathscr{N}_h + 1}^\infty {{q^n}} \\
& = h\pi {\mathcal{Q}_{{\mathscr{N}_h}}}{q^{{\mathscr{N}_h}}}\sum\limits_{n = 1}^{{\mathscr{N}_h}} {\frac{{{\mathcal{Q}_n}}}{{{\mathcal{Q}_{{\mathscr{N}_h}}}}} \cdot \frac{{{q^n}}}{{{q^{{\mathscr{N}_h}}}}}} + \sum\limits_{n = {\mathscr{N}_h} + 1}^\infty {{q^n}}\notag \\
\label{GZ2}& \leq h\pi {\mathcal{Q}_{{\mathscr{N}_h}}}{q^{{\mathscr{N}_h}}}\sum\limits_{n = 1}^{{\mathscr{N}_h}} {{{\left( {\frac{q}{{10}}} \right)}^{{\mathscr{N}_h} - n}}} + \sum\limits_{n = {\mathscr{N}_h} + 1}^\infty {{q^n}} \\
& \leq \frac{{10\pi }}{{10 - q}} \cdot h{\mathcal{Q}_{{\mathscr{N}_h}}} \cdot {q^{{\mathscr{N}_h}}} + \frac{q}{{1 - q}}{q^{{\mathscr{N}_h}}}\notag\\
\label{GZ3}& \leq C\left( {q,\lambda } \right){q^{{\mathscr{N}_h}}} \\
\label{GZ4}& \leq C\left( {q,\lambda } \right)\varpi _{\rm LH}^\lambda \left( h \right),\;\;\forall x \in \mathbb{R},
\end{align}
where \eqref{GZ1} uses the Mean Value Theorem and $ \left| {\sin x} \right|\leq1 $, \eqref{GZ2} is due to $ \left\{ {{\mathcal{Q}_{n + 1}}}\mathcal{Q}_n^{-1} \right\}_{n \in \mathbb{N}^+} \subseteq 10{\mathbb{N}^ + } $ together with the superexponential growth of $ \{\mathcal{Q}_n\}_{n \in \mathbb{N}^+} $, \eqref{GZ3} employs the definition of $ \mathscr{N}_h $ in \eqref{huaNdingyi}, and finally \eqref{GZ4} follows from the estimate \eqref{jgjgz}. This proves the claim that $ \mathscr{P} (x)$ is $ \lambda $-Logarithmic H\"older continuous with index $ \lambda>1 $.
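The low/high-frequency splitting used above can also be checked numerically on truncations of $ \mathscr{P} $. A minimal sketch, again with the hypothetical choices $ q=0.2 $, $ \lambda=2 $ and a sample sequence $ \{\mathcal{Q}_n\} $; the helper \texttt{split\_bound} (our name) computes the right-hand side of the splitting with $ \mathscr{N}_h=\max\{n: h\mathcal{Q}_n\le 1\} $:

```python
import math

# Hypothetical illustrative parameters: q = 0.2 in (0, 1/3), lambda = 2 > 1.
q, lam = 0.2, 2.0
theta = (1.0 / q) ** (1.0 / lam)

# Sample sequence Q_n with ratios in 10*N+ and superexponential growth.
Q = [10]
for n in range(2, 6):
    Q.append(Q[-1] * 10 * max(1, round(math.exp(theta ** n - theta ** (n - 1)) / 10)))

def P(x):
    """Truncation of P(x) = sum_n q^n sin(pi Q_n x)."""
    return sum(q ** (n + 1) * math.sin(math.pi * Qn * x) for n, Qn in enumerate(Q))

def split_bound(h):
    """Low/high-frequency splitting bound with N_h = max{n : h * Q_n <= 1}:
       pi * h * sum_{n <= N_h} Q_n q^n  +  2 * sum_{n > N_h} q^n."""
    Nh = sum(1 for Qn in Q if h * Qn <= 1.0)
    low = math.pi * h * sum(Qn * q ** (n + 1) for n, Qn in enumerate(Q[:Nh]))
    high = 2 * sum(q ** (n + 1) for n in range(Nh, len(Q)))
    return low + high

# The increment never exceeds the splitting bound (term-by-term estimate).
for h in (1e-2, 1e-4, 1e-6):
    for x0 in (0.1, 0.37, 0.77):
        assert abs(P(x0 + h) - P(x0)) <= split_bound(h) + 1e-9
```

The bound holds term by term (Mean Value Theorem for $ n\leq \mathscr{N}_h $, $ |\sin|\leq 1 $ for the tail), so the assertions are guaranteed in exact arithmetic.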
Next we show that $ \mathscr{P}(x) $ is nowhere H\"older continuous on $ \mathbb{R} $. Let $ \alpha>0 $ be arbitrarily given, and consider $ \varpi_{\rm H}^{\alpha}\sim x^\alpha $ (an arbitrary H\"older modulus of continuity). Fix $ x>0 $, and choose $ m=m_{x,\alpha} \in \mathbb{N}^+ $ sufficiently large such that $ {\mathcal{Q}_m}x \geq 1 $ and
\begin{equation}\label{fanjieGZ}
\varpi _{\rm H}^\alpha \left( {\mathcal{Q}_m^{ - 1}} \right) \leq \frac{{1 - 3q}}{{2\left( {1 - q} \right)}}\frac{{{q^m}}}{m}.
\end{equation}
Note that this is achievable because $ \mathcal{Q}_m $ is of superexponential order.
Now write $ {\mathcal{Q}_m}x = {\mathcal{N}_m} + {\mathcal{R}_m} $ with $ {\mathcal{N}_m} \in {\mathbb{N}^ + }$ and $0 < \left| {{\mathcal{R}_m}} \right| \leq {2^{ - 1}} $. Taking
\begin{equation}\label{upsilondy}
\upsilon_{x,m} = \mathcal{Q}_m^{ - 1}\left( { - \operatorname{sgn} \left( {{\mathcal{R}_m}} \right){2^{ - 1}} - {\mathcal{R}_m}} \right) \to 0
\end{equation}
as $ m \to \infty $ yields that $ {2^{ - 1}} \leq \left| {\upsilon_{x,m}} \right|{\mathcal{Q}_m} \leq 1 $. Then we get
\begin{align}
\frac{{\mathscr{P}\left( {x + \upsilon_{x,m}} \right) - \mathscr{P}\left( x \right)}}{{{\varpi_{\rm H}^{\alpha}}\left( {\left| {\upsilon_{x,m}} \right|} \right)}} = &\frac{{{q^m}}}{{{\varpi_{\rm H}^{\alpha}}\left( {\left| {\upsilon_{x,m}} \right|} \right)}}\left( {\sin \left( {\pi {\mathcal{Q}_m}\left( {x + \upsilon_{x,m}} \right)} \right) - \sin \left( {\pi {\mathcal{Q}_m}x} \right)} \right) \notag\\
&+ \sum\limits_{n = 1}^{m - 1} {\frac{{{q^n}}}{{{\varpi_{\rm H}^{\alpha}}\left( {\left| {\upsilon_{x,m}} \right|} \right)}}\left( {\sin \left( {\pi {\mathcal{Q}_n}\left( {x + \upsilon_{x,m}} \right)} \right) - \sin \left( {\pi {\mathcal{Q}_n}x} \right)} \right)} \notag\\
& + \sum\limits_{n = m + 1}^\infty {\frac{{{q^n}}}{{{\varpi_{\rm H}^{\alpha}}\left( {\left| {\upsilon_{x,m}} \right|} \right)}}\left( {\sin \left( {\pi {\mathcal{Q}_n}\left( {x + \upsilon_{x,m}} \right)} \right) - \sin \left( {\pi {\mathcal{Q}_n}x} \right)} \right)} \notag \\
\label{sanduan} : = &{\mathscr{S}_1} + {\mathscr{S}_2} + {\mathscr{S}_3}.
\end{align}
Firstly, direct calculation as well as \eqref{upsilondy} gives
\begin{align}
\left| {{\mathscr{S}_1}} \right| &= \frac{{{q^m}}}{{{\varpi_{\rm H}^{\alpha}}\left( {\left| {\upsilon_{x,m}} \right|} \right)}}\left| {\sin \left( {\pi {\mathcal{Q}_m}\left( {x + \upsilon_{x,m}} \right)} \right) - \sin \left( {\pi {\mathcal{Q}_m}x} \right)} \right|\notag\\
& = \frac{{{q^m}}}{{{\varpi_{\rm H}^{\alpha}}\left( {\left| {\upsilon_{x,m}} \right|} \right)}}\left| {{{\left( { - 1} \right)}^{{\mathcal{N}_m} + 1}}\left( {\operatorname{sgn} \left( {{\mathcal{R}_m}} \right) + \sin \left( {\pi {\mathcal{R}_m}} \right)} \right)} \right|\notag\\
& = \frac{{{q^m}}}{{{\varpi_{\rm H}^{\alpha}}\left( {\left| {\upsilon_{x,m}} \right|} \right)}}\left| {\operatorname{sgn} \left( {{\mathcal{R}_m}} \right) + \sin \left( {\pi {\mathcal{R}_m}} \right)} \right|\notag\\
\label{sanduan1} & \geq \frac{{{q^m}}}{{{\varpi_{\rm H}^{\alpha}}\left( {\mathcal{Q}_m^{ - 1}} \right)}}.
\end{align}
Secondly, by applying the Mean Value Theorem we obtain that
\begin{align}
\left| {{\mathscr{S}_2}} \right| &\leq \sum\limits_{n = 1}^{m - 1} {\frac{{{q^n}}}{{{\varpi_{\rm H}^{\alpha}}\left( {\left| {\upsilon_{x,m}} \right|} \right)}}\left| {\sin \left( {\pi {\mathcal{Q}_n}\left( {x + \upsilon_{x,m}} \right)} \right) - \sin \left( {\pi {\mathcal{Q}_n}x} \right)} \right|} \notag\\
&\leq \sum\limits_{n = 1}^{m - 1} {\frac{{{q^n}}}{{{\varpi_{\rm H}^{\alpha}}\left( {\left| {\upsilon_{x,m}} \right|} \right)}} \cdot \pi {\mathcal{Q}_n}\left| {\upsilon_{x,m}} \right|} \notag\\
& \leq \left( {\pi \sum\limits_{n = 1}^\infty {{q^n}} } \right)\frac{{{\mathcal{Q}_{m - 1}}\mathcal{Q}_m^{ - 1}}}{{{\varpi_{\rm H}^{\alpha}}\left( {\mathcal{Q}_m^{ - 1}} \right)}}\notag\\
&= \frac{{{\mathcal{Q}_{m - 1}}\mathcal{Q}_m^{ - 1}}}{{{\varpi_{\rm H}^{\alpha}}\left( {\mathcal{Q}_m^{ - 1}} \right)}}\cdot \frac{{\pi q}}{{1 - q}}\notag\\
\label{sanduan2}&\leq \frac{q^m}{2{{\varpi_{\rm H}^{\alpha}}\left( {\mathcal{Q}_m^{ - 1}} \right)}},
\end{align}
where we use the following fact, which holds for all sufficiently large $ m $ thanks to $ \left\{ {{\mathcal{Q}_{n + 1}}}\mathcal{Q}_n^{-1} \right\}_{n \in \mathbb{N}^+} \subseteq 10{\mathbb{N}^ + } $ and the superexponential growth of $ \{\mathcal{Q}_n\}_{n \in \mathbb{N}^+} $ (so that $ \mathcal{Q}_{m-1}\mathcal{Q}_m^{-1} $ decays faster than any geometric sequence):
\begin{equation}\label{3pi}
{\mathcal{Q}_{m - 1}}\mathcal{Q}_m^{ - 1}\left( {\frac{{\pi q}}{{1 - q}}} \right) \leq \frac{1}{2}{q^m}.
\end{equation}
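Numerically, the domination used in \eqref{sanduan2} is very comfortable for large $ m $, since $ \mathcal{Q}_{m-1}\mathcal{Q}_m^{-1} $ decays superexponentially. A sketch with the same hypothetical parameters $ q=0.2 $, $ \lambda=2 $ and a sample sequence (for which the inequality holds from $ m=3 $ on):

```python
import math

# Same hypothetical parameters: q = 0.2, lambda = 2.
q, lam = 0.2, 2.0
theta = (1.0 / q) ** (1.0 / lam)

Q = [10]
for n in range(2, 7):
    Q.append(Q[-1] * 10 * max(1, round(math.exp(theta ** n - theta ** (n - 1)) / 10)))

# Consecutive ratios Q_m / Q_{m-1} grow superexponentially ...
ratios = [Q[i + 1] / Q[i] for i in range(len(Q) - 1)]
assert all(r2 > r1 for r1, r2 in zip(ratios, ratios[1:]))

# ... so Q_{m-1}/Q_m * pi*q/(1-q) is eventually dominated by q^m / 2
# (from m = 3 on for this sample sequence; m is a 1-based index).
for m in range(3, 7):
    assert (Q[m - 2] / Q[m - 1]) * math.pi * q / (1 - q) <= 0.5 * q ** m
```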
Thirdly, note that $ {\mathcal{Q}_n}\mathcal{Q}_m^{ - 1} \in 10\mathbb{N}^+ $ for all $ n \geq m+1 $. Then it follows that
\begin{align*}
\sin \left( {\pi {\mathcal{Q}_n}\left( {x + \upsilon_{x,m}} \right)} \right) &= \sin \left( {\pi {\mathcal{Q}_n}\mathcal{Q}_m^{ - 1} \cdot {\mathcal{Q}_m}\left( {x + \upsilon_{x,m}} \right)} \right)\\
& = \sin \left( {\pi {\mathcal{Q}_n}\mathcal{Q}_m^{ - 1} \cdot \left( {{\mathcal{N}_m} - {2^{ - 1}}\operatorname{sgn} \left( {{\mathcal{R}_m}} \right)} \right)} \right)\\
& = 0,\;\;\forall n \geq m + 1,
\end{align*}
which leads to
\begin{align}
\left| {{\mathscr{S}_3}} \right| &\leq \sum\limits_{n = m + 1}^\infty {\frac{{{q^n}}}{{{\varpi_{\rm H}^{\alpha}}\left( {\left| {\upsilon_{x,m}} \right|} \right)}}\left| {\sin \left( {\pi {\mathcal{Q}_n}\left( {x + \upsilon_{x,m}} \right)} \right) - \sin \left( {\pi {\mathcal{Q}_n}x} \right)} \right|} \notag\\
& = \sum\limits_{n = m + 1}^\infty {\frac{{{q^n}}}{{{\varpi_{\rm H}^{\alpha}}\left( {\left| {\upsilon_{x,m}} \right|} \right)}}\left| {\sin \left( {\pi {\mathcal{Q}_n}x} \right)} \right|} \notag\\
& \leq \frac{1}{{{\varpi_{\rm H}^{\alpha}}\left( {\left| {\upsilon_{x,m}} \right|} \right)}}\sum\limits_{n = m + 1}^\infty {{q^n}} \notag\\
\label{sanduan3} &\leq \frac{1}{{{\varpi_{\rm H}^{\alpha}}\left( {\mathcal{Q}_m^{ - 1}} \right)}} \cdot \frac{{{q^{m + 1}}}}{{1 - q}} .
\end{align}
Finally, substituting \eqref{sanduan1}, \eqref{sanduan2}, \eqref{sanduan3} into \eqref{sanduan} and using \eqref{fanjieGZ} yields that
\begin{align*}
\frac{{\left| {\mathscr{P}\left( {x + \upsilon_{x,m} } \right) - \mathscr{P}\left( x \right)} \right|}}{{{\varpi_{\rm H}^{\alpha}}\left( {\left| {\upsilon_{x,m}} \right|} \right)}} &\geq \left| {{\mathscr{S}_1}} \right| - \left| {{\mathscr{S}_2}} \right| - \left| {{\mathscr{S}_3}} \right|\\
&\geq \frac{{{q^m}}}{{{\varpi_{\rm H}^{\alpha}}\left( {\mathcal{Q}_m^{ - 1}} \right)}} - \frac{{{q^m}}}{2{{\varpi_{\rm H}^{\alpha}}\left( {\mathcal{Q}_m^{ - 1}} \right)}} - \frac{1}{{{\varpi_{\rm H}^{\alpha}}\left( {\mathcal{Q}_m^{ - 1}} \right)}} \cdot \frac{{{q^{m + 1}}}}{{1 - q}} \\
&= \frac{1}{{{\varpi_{\rm H}^{\alpha}}\left( {\mathcal{Q}_m^{ - 1}} \right)}}\left( {\frac{{1 - 3q}}{{2\left( {1 - q} \right)}}} \right)q^m\\
&\geq m .
\end{align*}
In other words, for every $ x\in \mathbb{R} $ (note that $ \mathscr{P} $ is $ 1 $-periodic), one could construct a sequence $ \{\upsilon_{x,m}\}_{m \in \mathbb{N}^+} $ such that
\[\mathop {\underline {\lim } }\limits_{m \to \infty } \frac{{\left| {\mathscr{P}\left( {x + \upsilon_{x,m} } \right) - \mathscr{P}\left( x \right)} \right|}}{{{\varpi_{\rm H}^{\alpha}}\left( {\left| {\upsilon_{x,m}} \right|} \right)}} = + \infty. \]
This implies that $ \mathscr{P}(x)$ is nowhere H\"older continuous, because the H\"older index $ \alpha>0 $ can be arbitrarily chosen. This proves the proposition.
\end{proof}
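The blow-up of the H\"older difference quotients along $ \upsilon_{x,m} $ in the above proof can be observed directly on truncations of $ \mathscr{P} $. A numerical sketch with the hypothetical choices $ q=0.2 $, $ \lambda=2 $, the sample exponent $ \alpha=1/2 $ and the sample point $ x=1/\pi $ (the names \texttt{quotient} and \texttt{vals} are ours):

```python
import math

# Hypothetical parameters: q = 0.2, lambda = 2; sample Holder exponent alpha = 1/2.
q, lam, alpha = 0.2, 2.0, 0.5
theta = (1.0 / q) ** (1.0 / lam)

Q = [10]
for n in range(2, 6):
    Q.append(Q[-1] * 10 * max(1, round(math.exp(theta ** n - theta ** (n - 1)) / 10)))

def P(x):
    """Truncation of P(x) = sum_n q^n sin(pi Q_n x)."""
    return sum(q ** (n + 1) * math.sin(math.pi * Qn * x) for n, Qn in enumerate(Q))

def quotient(x, m):
    """Holder difference quotient |P(x+v) - P(x)| / |v|^alpha along v = v_{x,m}."""
    Qm = Q[m - 1]
    R = Qm * x - round(Qm * x)             # Q_m x = N_m + R_m with |R_m| <= 1/2
    v = (-math.copysign(0.5, R) - R) / Qm  # then Q_m (x + v) is a half-integer
    return abs(P(x + v) - P(x)) / abs(v) ** alpha

x = 1.0 / math.pi
vals = [quotient(x, m) for m in (2, 3, 4)]  # grows without bound as m increases
```

As in the proof, the $ m $-th term contributes at least $ q^m/\varpi_{\rm H}^\alpha(\mathcal{Q}_m^{-1}) = q^m\mathcal{Q}_m^\alpha $, which dominates all other contributions, so the quotients increase rapidly with $ m $.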
\section{Proof of Main Theorem \ref{theorem1}}\label{section4}
Now let us prove Theorem \ref{theorem1} in two subsections, namely frequency-preserving KAM persistence (Section \ref{KAM}) and remaining regularity (Section \ref{furtherregularity}) for the KAM torus as well as the conjugation. For the former, the overall process is similar to that in \cite{salamon}, but the key points in weakening the H\"older regularity to a mere modulus of continuity are the Jackson type approximation theorem (Theorem \ref{Theorem1}) and the uniform convergence of the transformation mappings, that is, the convergence of the upper bound series (see \eqref{dao} and \eqref{dao2}). As we will see later, the Dini type integrability condition (H1) is crucial for this boundedness. As to the latter, we have to establish a more general regularity iterative theorem (Theorem \ref{t1}), which is not simple since the resulting regularity might be somewhat complicated due to the asymptotic analysis involved.
\subsection{Frequency-preserving KAM persistence}\label{KAM}
The proof of the frequency-preserving KAM persistence is organized as follows. Firstly, we construct a sequence of analytic approximation functions $ H^\nu(x,y) $ of $ H(x,y)$ by using Theorem \ref{Theorem1} together with (H1) and (H2). Secondly, we construct a sequence of frequency-preserving analytic symplectic transformations $ \psi^\nu $ by induction. According to (H2), (H3) and (H4), the first step of the induction is established by applying Theorem \ref{appendix} in Section \ref{Appsalamon} (or Theorem 1 in \cite{salamon}). Then, combining weak homogeneity with certain specific estimates, we complete the induction and obtain the uniform convergence of the composite transformations. Finally, in light of (H5), the regularity of the KAM torus as well as the conjugation is guaranteed by Theorem \ref{t1}. We emphasize that, thanks to our strategy, the unperturbed part of the Hamiltonian $ H(x,y) $ may be non-integrable and only finitely smooth.\vspace{3mm}
\\
{\textbf{Step1:}} In view of Theorem \ref{Theorem1} (we have assumed that the modulus of continuity $ \varpi $ admits semi separability, and thus Theorem \ref{Theorem1} can be applied directly here), one can approximate $ H(x, y) $ by a sequence of real analytic functions $ H^\nu(x, y) $, $ \nu \geq 0 $, in the strips
\[\left| {\operatorname{Im} x} \right| \leq {r_\nu },\;\;\left| {\operatorname{Im} y} \right| \leq {r_\nu },\;\;{r_\nu }: = {2^{ - \nu }}\varepsilon \]
around $ \operatorname{Re} x \in {\mathbb{T}^n},\left| {\operatorname{Re} y} \right| \leq \rho , $ such that
\begin{equation}\label{T1-3}
\begin{aligned}
\left| {{H^\nu }\left( z \right) - \sum\limits_{\left| \alpha \right| \leq k} {{\partial ^\alpha }H\left( {\operatorname{Re} z} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} z} \right)}^\alpha }}}{{\alpha !}}} } \right| \leq{}& {c_1}{\left\| H \right\|_\varpi }r_\nu ^k\varpi \left( {{r_\nu }} \right),\\
\left| {H_y^\nu \left( z \right) - \sum\limits_{\left| \alpha \right| \leq k-1} {{\partial ^\alpha }{H_y}\left( {\operatorname{Re} z} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} z} \right)}^\alpha }}}{{\alpha !}}} } \right| \leq{}& {c_1}{\left\| H \right\|_\varpi }r_\nu ^{k - 1}\varpi \left( {{r_\nu }} \right), \\
\left| {H_{yy}^\nu \left( z \right) - \sum\limits_{\left| \alpha \right| \leq k-2} {{\partial ^\alpha }{H_{yy}}\left( {\operatorname{Re} z} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} z} \right)}^\alpha }}}{{\alpha !}}} } \right| \leq{}& {c_1}{\left\| H \right\|_\varpi }r_\nu ^{k - 2}\varpi \left( {{r_\nu }} \right)
\end{aligned}
\end{equation}
for $ \left| {\operatorname{Im} x} \right| \leq {r_\nu },\;\left| {\operatorname{Im} y} \right| \leq {r_\nu } $, and $ c_1=c(n,k) $ is the constant provided in \eqref{3.2}.
Fix $ \theta = 1/\sqrt 2 $. In what follows, we will construct a sequence of real analytic symplectic transformations $ z = {\phi ^\nu }\left( \zeta \right) $, where $ z=(x,y) $ and $ \zeta=(\xi,\eta) $, of the form
\begin{equation}\label{bhxs}
x = {u^\nu }\left( \xi \right),\;\;y = {v^\nu }\left( \xi \right) + {\left( {u_\xi ^\nu \left( \xi \right)^{\top}} \right)^{ - 1}}\eta
\end{equation}
by induction, such that $ {u^\nu }\left( \xi \right) - \xi $ and $ {v^\nu }\left( \xi \right) $ are of period $ 1 $ in all variables, and $ {\phi ^\nu } $ maps the strip $ \left| {\operatorname{Im} \xi } \right| ,\left| \eta \right| \leq \theta {r_{\nu + 1}} $ into $ \left| {\operatorname{Im} x} \right| ,\left| y \right| \leq {r_\nu },\left| {\operatorname{Re} y} \right| \leq \rho $, and the transformed Hamiltonian function $ {K^\nu }: = {H^\nu } \circ {\phi ^\nu } $ satisfies
\begin{equation}\label{qiudao}
K_\xi ^\nu \left( {\xi ,0} \right) = 0,\;\;K_\eta ^\nu \left( {\xi ,0} \right) = \omega ,
\end{equation}
i.e., the prescribed universal Diophantine frequency is preserved. Namely, by verifying certain conditions, we obtain $ z=\psi^\nu(\zeta) $ of the form \eqref{bhxs} from Theorem \ref{appendix} by induction, mapping $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right| \leq {r_{\nu + 1}} $ into $ \left| {\operatorname{Im} x} \right|,\left| y \right| \leq \theta {r_\nu } $, such that $ \psi^\nu \left( {\xi ,0} \right) - \left( {\xi ,0} \right) $ is of period $ 1 $ and \eqref{qiudao} holds. Here we denote $ \phi^\nu:=\phi^{\nu-1} \circ \psi^\nu$ with $ {\phi ^{ - 1}}: = \mathrm{id} $ (where $ \mathrm{id} $ denotes the $ 2n $-dimensional identity mapping, and therefore $ {\phi ^0} = {\psi ^0} $). Furthermore, Theorem \ref{appendix} will lead to
\begin{align}
\left| {{\psi ^\nu }\left( \zeta \right) - \zeta } \right| &\leq c\left( {1 - \theta } \right)r_\nu ^{k - 2\tau - 1}\varpi \left( {{r_\nu }} \right),\label{2.82}\\
\left| {\psi _\zeta ^\nu\left( \zeta \right) - \mathbb{I}} \right| &\leq cr_\nu ^{k - 2\tau - 2}\varpi \left( {{r_\nu }} \right),\label{2.83}\\
\left| {K_{\eta \eta }^\nu\left( \zeta \right) - {Q^\nu}\left( \zeta \right)} \right| &\leq \frac{c}{2M} r_\nu ^{k - 2\tau - 2}\varpi \left( {{r_\nu }} \right),\label{2.84}\\
\left| {U_x^\nu \left( x \right)} \right| &\leq cr_\nu ^{k - \tau - 1}\varpi \left( {{r_\nu }} \right),\label{2.85}
\end{align}
on $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right|,\left| {\operatorname{Im} x} \right| \leq r_{\nu+1} $, where $ {S^\nu }\left( {x,\eta } \right) = {U^\nu }\left( x \right) + \left\langle {{V^\nu }\left( x \right),\eta } \right\rangle $ is the generating function for $ {\psi ^\nu } $, $ Q^\nu:=K_{\eta \eta}^{\nu-1} $, $ \mathbb{I} $ denotes the identity matrix, and
\begin{equation}\label{Q0}
{Q^0}\left( z \right): = \sum\limits_{\left| \alpha \right| \leq k - 2} {{\partial ^\alpha }{H_{yy}}\left( {\operatorname{Re} z} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} z} \right)}^\alpha }}}{{\alpha !}}} .
\end{equation}
\\
{\textbf{Step2:}} Here we show that $ \psi^0=\phi^0 $ exists, and it admits the properties mentioned in Step 1. Denote
\begin{equation}\notag
h(x) := H\left( {x,0} \right) - \int_{{\mathbb{T}^n}} {H\left( {\xi ,0} \right){\rm d}\xi } ,\;\;x \in {\mathbb{R}^n}.
\end{equation}
Then by the first term in \eqref{T1-2}, we have
\begin{equation}\label{pianh}
\sum\limits_{\left| \alpha \right| \leq k} {\left| {{\partial ^\alpha }h} \right|{\varepsilon ^{\left| \alpha \right|}}} < M{\varepsilon ^k}\varpi \left( \varepsilon \right).
\end{equation}
Note that
\begin{align*}
{H^0}\left( {x,0} \right) - \int_{{\mathbb{T}^n}} {{H^0}\left( {\xi ,0} \right){\rm d}\xi } ={}& {H^0}\left( {x,0} \right) - \sum\limits_{\left| \alpha \right| \leq k} {\partial _x^\alpha H\left( {\operatorname{Re} x,0} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} x} \right)}^\alpha }}}{{\alpha !}}} \notag \\
{}&+ \int_{{\mathbb{T}^n}} {\left( {H\left( {\xi ,0} \right) - {H^0}\left( {\xi ,0} \right)} \right){\rm d}\xi } \notag \\
{}&+ \sum\limits_{\left| \alpha \right| \leq k} {{\partial ^\alpha }h\left( {\operatorname{Re} x} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} x} \right)}^\alpha }}}{{\alpha !}}} .
\end{align*}
Hence, for $ \left| {\operatorname{Im} x} \right| \leq \theta {r_0} = \theta \varepsilon $, by using Theorem \ref{Theorem1}, Corollary \ref{coro1} and \eqref{pianh} we arrive at
\begin{align*}
\left| {{H^0}\left( {x,0} \right) - \int_{{\mathbb{T}^n}} {{H^0}\left( {\xi ,0} \right){\rm d}\xi } } \right| &\leq 2{c_1}{\left\| H \right\|_\varpi }{\varepsilon ^k}\varpi \left( \varepsilon \right) + M{\varepsilon ^k}\varpi \left( \varepsilon \right)\notag \\
&\leq c{\varepsilon ^k}\varpi \left( \varepsilon \right) \notag \\
&\leq c{\varepsilon ^{k - 2\tau - 2}}\varpi \left( \varepsilon \right) \cdot {\left( {\theta \varepsilon } \right)^{2\tau + 2}}.
\end{align*}
Now consider the vector valued function $ f\left( x \right): = {H_y}\left( {x,0} \right) - \omega $ for $ x \in {\mathbb{R}^n} $. In view of the second term in \eqref{T1-2}, we have
\begin{equation}\label{pianf}
\sum\limits_{\left| \alpha \right| \leq k - 1} {\left| {{\partial ^\alpha }f} \right|{\varepsilon ^{\left| \alpha \right|}}} \leq M{\varepsilon ^{k - \tau - 1}}\varpi \left( \varepsilon \right).
\end{equation}
Note that
\begin{align*}
H_y^0\left( {x,0} \right) - \omega ={}& H_y^0\left( {x,0} \right) - \sum\limits_{\left| \alpha \right| \leq k - 1} {\partial _x^\alpha {H_y}\left( {\operatorname{Re} x,0} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} x} \right)}^\alpha }}}{{\alpha !}}} \notag \\
{}&+ \sum\limits_{\left| \alpha \right| \leq k - 1} {{\partial ^\alpha }f\left( {\operatorname{Re} x} \right)\frac{{{{\left( {\mathrm{i}\operatorname{Im} x} \right)}^\alpha }}}{{\alpha !}}}.
\end{align*}
Therefore, for $ \left| {\operatorname{Im} x} \right| \leq \theta \varepsilon $, by using \eqref{T1-3} and \eqref{pianf} we obtain that
\begin{align*}
\left| {H_y^0\left( {x,0} \right) - \omega } \right| &\leq {c_1}{\left\| H \right\|_\varpi }{\varepsilon ^{k - 1}}\varpi \left( \varepsilon \right) + M{\varepsilon ^{k - \tau - 1}}\varpi \left( \varepsilon \right)\notag \\
&\leq c{\varepsilon ^{k - \tau - 1}}\varpi \left( \varepsilon \right) \notag \\
&\leq c{\varepsilon ^{k - 2\tau - 2}}\varpi \left( \varepsilon \right) \cdot {\left( {\theta \varepsilon } \right)^{\tau + 1}}.
\end{align*}
Recall \eqref{Q0}. Then it follows from \eqref{T1-3} that
\begin{align*}
\left| {H_{yy}^0\left( z \right) - {Q^0}\left( z \right)} \right| &\leq {c_1}{\left\| H \right\|_\varpi }{\varepsilon ^{k - 2}}\varpi \left( \varepsilon \right) \notag \\
&\leq \frac{c}{{4M}}{\varepsilon ^{k - 2}}\varpi \left( \varepsilon \right) \\
&\leq \frac{c}{{4M}}{\varepsilon ^{k - 2\tau - 2}}\varpi \left( \varepsilon \right),\;\;\left| {\operatorname{Im} x} \right|,\left| y \right| \leq \theta \varepsilon,
\end{align*}
and
\begin{equation}\notag
\left| {{Q^0}\left( z \right)} \right| \leq \sum\limits_{\left| \alpha \right| \leq k - 2} {{{\left\| H \right\|}_\varpi }\frac{{{\varepsilon ^{\left| \alpha \right|}}}}{{\alpha !}}} \leq {\left\| H \right\|_\varpi }\sum\limits_{\alpha \in {\mathbb{N}^{2n}}} {\frac{{{\varepsilon ^{\left| \alpha \right|}}}}{{\alpha !}}} = {\left\| H \right\|_\varpi }{e^{2n\varepsilon }} \leq 2M,\;\; \left| {\operatorname{Im} z} \right| \leq \varepsilon .
\end{equation}
Now, by taking $ r^{*} = \theta \varepsilon$ and $\delta^{*} = {\varepsilon ^{k - 2\tau - 2}}\varpi \left( \varepsilon \right) $ and applying Theorem \ref{appendix}, there exists a real analytic symplectic transformation $ z = {\phi ^0}\left( \zeta \right) $ of the form \eqref{bhxs} (with $ \nu=0 $) mapping the strip $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right| \leq {r_1}=r_0/2 $ into $ \left| {\operatorname{Im} x} \right|,\left| y \right| \leq \theta{r_0}=r_0/\sqrt{2} $,
such that $ {u^0}\left( \xi \right) - \xi $ and $ {v^0}\left( \xi \right) $ are of period $ 1 $ in all variables and the Hamiltonian function $ {K^0}: = {H^0} \circ {\phi ^0} $ satisfies \eqref{qiudao} (with $ \nu=0 $). Moreover, \eqref{2.82}-\eqref{2.84} (with $ \nu=0 $) hold.
Also assume that
\begin{equation}\notag
\left| {K_{\eta \eta }^{\nu - 1}\left( \zeta \right)} \right| \leq {M_{\nu - 1}},\;\;\left| {{{\left( {\int_{{\mathbb{T}^n}} {K_{\eta \eta }^{\nu - 1}\left( {\xi ,0} \right){\rm d}\xi } } \right)}^{ - 1}}} \right| \leq {M_{\nu - 1}},\;\;{M_\nu } \leq M
\end{equation}
for $ \left| {\operatorname{Im} x} \right| ,\left| y \right| \leq {r_\nu } $. Finally, define
\[\tilde H\left( {x,y} \right): = {H^\nu } \circ {\phi ^{\nu - 1}}\left( {x,y} \right)\]
for $ \left| {\operatorname{Im} x} \right| ,\left| y \right| \leq {r_\nu } $. One can verify that $ {\tilde H} $ is well defined.
Next we assume that the transformation $ z = {\phi ^{\nu - 1}}\left( \zeta \right) $ of the form \eqref{bhxs} has been constructed, mapping $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right| \leq \theta {r_\nu } $ into $ \left| {\operatorname{Im} x} \right|,\left| {\operatorname{Im} y} \right| \leq {r_{\nu - 1}},\left| {\operatorname{Re} y} \right| \leq \rho $, such that $ {u^{\nu - 1}}\left( \xi \right) - \xi ,{v^{\nu - 1}}\left( \xi \right) $ are of period $ 1 $ in all variables, and $ K_\xi ^{\nu - 1}\left( {\xi ,0} \right) = 0,K_\eta ^{\nu - 1}\left( {\xi ,0} \right) = \omega $. In addition, we assume that \eqref{2.82}-\eqref{2.85} hold for $ 0, \ldots ,\nu - 1 $. In Step 3 below, we verify that all of the above still hold for $ \nu $, which completes the induction.\vspace{3mm}
\\
{\textbf{Step3:}} We now prove the existence of the transformation $ {\phi ^\nu } $ at each step, via the specific estimates below and Theorem \ref{appendix}.
Let $ \left| {\operatorname{Im} x} \right| \leq \theta {r_\nu } $. Then $ \phi^{\nu-1}(x,0) $ lies in the region where the estimates in \eqref{T1-3} hold for both $ H^\nu $ and $ H^{\nu-1} $. Note that $ x \mapsto H^{\nu-1}(\phi^{\nu-1}(x,0)) $ is constant by \eqref{qiudao}. Then by \eqref{T1-3}, we arrive at the following for $ \left| {\operatorname{Im} x} \right| \leq \theta {r_\nu } $
\begin{align*}
\left| {\tilde H\left( {x,0} \right) - \int_{{\mathbb{T}^n}} {\tilde H\left( {\xi ,0} \right){\rm d}\xi } } \right| &\leq 2\mathop {\sup }\limits_{\left| {\operatorname{Im} \xi } \right| \leq \theta {r_\nu }} \left| {{H^\nu }\left( {{\phi ^{\nu - 1}}\left( {\xi ,0} \right)} \right) - {H^{\nu - 1}}\left( {{\phi ^{\nu - 1}}\left( {\xi ,0} \right)} \right)} \right|\notag \\
&\leq 2{c_1}{\left\| H \right\|_\varpi }r_\nu ^k\varpi \left( {{r_\nu }} \right) + 2{c_1}{\left\| H \right\|_\varpi }r_{\nu-1} ^{k}\varpi \left( {{r_{\nu-1} }} \right)\notag \\
&\leq cr_\nu ^{k - 2\tau - 2}\varpi \left( {{r_\nu }} \right) \cdot r_\nu ^{2\tau + 2},
\end{align*}
where the weak homogeneity of $ \varpi $ with respect to $ a=1/2 $ (see Definition \ref{weak}) has been used in the last inequality, because $ \varpi(r_{\nu-1})=\varpi(2r_{\nu})\leq c \varpi(r_{\nu}) $ (thus $ c $ is independent of $ \nu $). For convenience, we may not mention this explicitly in what follows.
Taking $ \eta=0 $ in \eqref{2.83} we have
\begin{align}
\left| {u_\xi ^{\nu - 1}\left( \xi \right) - \mathbb{I}} \right| &\leq \sum\limits_{\mu = 0}^{\nu - 1} {\left| {u_\xi ^\mu \left( \xi \right) - u_\xi ^{\mu - 1}\left( \xi \right)} \right|} \notag \\
&\leq c\sum\limits_{\mu = 0}^{\nu - 1} {r_\mu ^{k - 2\tau - 2}\varpi \left( {{r_\mu }} \right)} \notag \\
&\leq c\sum\limits_{\mu = 0}^\infty {{{\left( {\frac{\varepsilon }{{{2^\mu }}}} \right)}^{k - 2\tau - 2}}\varpi \left( {\frac{\varepsilon }{{{2^\mu }}}} \right)} \notag \\
&\leq c\sum\limits_{\mu = 0}^\infty {\left( {\frac{\varepsilon }{{{2^{\mu - 1}}}} - \frac{\varepsilon }{{{2^\mu }}}} \right){{\left( {\frac{\varepsilon }{{{2^\mu }}}} \right)}^{k - 2\tau - 3}}\varpi \left( {\frac{\varepsilon }{{{2^\mu }}}} \right)} \notag \\
& \leq c\sum\limits_{\mu = 0}^\infty {\int_{\varepsilon /{2^\mu }}^{\varepsilon /{2^{\mu - 1}}} {\frac{{\varpi \left( x \right)}}{{{x^{2\tau + 3 - k}}}}{\rm d}x} } \notag \\
&\leq c\int_0^{2\varepsilon } {\frac{{\varpi \left( x \right)}}{{{x^{2\tau + 3 - k}}}}{\rm d}x} \notag \\
\label{dao}&\leq 1 - \theta
\end{align}
for $ \left| {\operatorname{Im} \xi } \right| \leq \theta {r_\nu } $, where the Dini-type condition \eqref{Dini} in (H1) together with Cauchy's theorem is used, since $ \varepsilon>0 $ is sufficiently small. This leads to
\begin{equation}\label{nidao}
\left| {u_\xi ^{\nu - 1}{{\left( \xi \right)}^{ - 1}}} \right| \leq {\theta ^{ - 1}},\;\;\left| {\operatorname{Im} \xi } \right| \leq \theta {r_\nu }.
\end{equation}
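The dyadic-sum-to-integral comparison used in \eqref{dao} can also be checked numerically. The following sketch is only an illustration: the modulus $ \varpi(x)=\sqrt{x} $ and the exponent $ k-2\tau-2=0 $ are sample assumptions, not values prescribed by the theorem.

```python
import math

# Illustrative numeric check (not part of the proof) of the dyadic-sum
# vs. Dini-integral comparison used in (dao):
#   sum_mu r_mu^(k - 2tau - 2) * w(r_mu) <= c * int_0^{2 eps} w(x) / x^(2tau+3-k) dx
# for r_mu = eps / 2^mu.  Sample assumptions: w(x) = sqrt(x) and
# exponent e := k - 2*tau - 2 = 0.

def w(x):           # sample modulus of continuity (an assumption)
    return math.sqrt(x)

eps = 1e-3
e = 0.0             # stands for k - 2*tau - 2

# left-hand dyadic sum, truncated once the terms are negligible
S = sum((eps / 2**mu) ** e * w(eps / 2**mu) for mu in range(200))

# right-hand Dini integral int_0^{2 eps} w(x) * x^(e - 1) dx; here it is
# computable in closed form: int_0^{2 eps} x^(-1/2) dx = 2 * sqrt(2 * eps)
I = 2.0 * math.sqrt(2.0 * eps)

ratio = S / I
print(ratio)        # bounded by a modest universal constant
```

For these sample parameters the ratio is about $ 1.21 $, consistent with the universal constant $ c $ appearing in \eqref{dao}.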
Finally, by \eqref{nidao} and \eqref{T1-3} we obtain that
\begin{align*}
\left| {{{\tilde H}_y}\left( {x,0} \right) - \omega } \right| &= \left| {u_\xi ^{\nu - 1}{{\left( x \right)}^{ - 1}}\left( {H_y^\nu \left( {{\phi ^{\nu - 1}}\left( {x,0} \right)} \right) - H_y^{\nu - 1}\left( {{\phi ^{\nu - 1}}\left( {x,0} \right)} \right)} \right)} \right|\notag \\
& \leq {\theta ^{ - 1}}\left| {H_y^\nu \left( {{\phi ^{\nu - 1}}\left( {x,0} \right)} \right) - H_y^{\nu - 1}\left( {{\phi ^{\nu - 1}}\left( {x,0} \right)} \right)} \right|\notag \\
& \leq {\theta ^{ - 1}}\left( {{c_1}{{\left\| H \right\|}_\varpi }r_\nu ^{k - 1}\varpi \left( {{r_\nu }} \right) + {c_1}{{\left\| H \right\|}_\varpi }r_{\nu - 1}^{k - 1}\varpi \left( {{r_{\nu - 1}}} \right)} \right)\notag \\
&\leq cr_\nu ^{k - 1}\varpi \left( {{r_\nu }} \right)\notag \\
&\leq cr_\nu ^{k - \tau - 2}\varpi \left( {{r_\nu }} \right) \cdot r_\nu ^{\tau + 1},
\end{align*}
and
\begin{align*}
\left| {{{\tilde H}_{yy}}\left( z \right) - {Q^\nu }\left( z \right)} \right| &= \left| {u_\xi ^{\nu - 1}{{\left( x \right)}^{ - 1}}\left( {H_{yy}^\nu \left( {{\phi ^{\nu - 1}}\left( z \right)} \right) - H_{yy}^{\nu - 1}\left( {{\phi ^{\nu - 1}}\left( z \right)} \right)} \right){{\left( {u_\xi ^{\nu - 1}{{\left( x \right)}^{ - 1}}} \right)}^{\top}}} \right|\notag \\
&\leq {\theta ^{ - 2}}\left| {H_{yy}^\nu \left( {{\phi ^{\nu - 1}}\left( z \right)} \right) - H_{yy}^{\nu - 1}\left( {{\phi ^{\nu - 1}}\left( z \right)} \right)} \right|\notag \\
&\leq {\theta ^{ - 2}}\left( {{c_1}{{\left\| H \right\|}_\varpi }r_\nu ^{k - 2}\varpi \left( {{r_\nu }} \right) + {c_1}{{\left\| H \right\|}_\varpi }r_{\nu - 1}^{k - 2}\varpi \left( {{r_{\nu - 1}}} \right)} \right)\notag \\
&\leq cr_\nu ^{k - 2\tau - 2}\varpi \left( {{r_\nu }} \right)/2M
\end{align*}
for $ \left| {\operatorname{Im} x} \right|,\left| y \right| \leq \theta {r_\nu } $. Setting $ r^*:= r_\nu $ and $ \delta^*:=c r_\nu ^{k - 2\tau - 2}\varpi \left( {{r_\nu }} \right) $ in Theorem \ref{appendix}, we obtain the analytic symplectic transformation $ {\phi ^\nu } $ of each step, mapping the strip $ \left| {\operatorname{Im} \xi } \right|\leq \theta {r_\nu },\left| \eta \right| \leq \theta {r_\nu } $ into $ \left| {\operatorname{Im} x} \right|\leq {r_\nu },\left| y \right| \leq {r_\nu } $, such that $ {u^\nu }\left( \xi \right) - \xi $ and $ {v^\nu }\left( \xi \right) $ are of period $ 1 $ in all variables, and the transformed Hamiltonian function $ {K^\nu } = {H^\nu } \circ {\phi ^\nu } $ satisfies
\[K_\xi ^\nu \left( {\xi ,0} \right) = 0,\;\;K_\eta ^\nu \left( {\xi ,0} \right) = \omega .\]
Moreover, \eqref{2.82}-\eqref{2.85} are valid for $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right|,\left| {\operatorname{Im} x} \right|\leq \theta {r_\nu }$.\vspace{3mm}
\\
{\textbf{Step 4:}} By \eqref{2.83} for $ 0, \ldots ,\nu - 1 $ and the arguments in \eqref{dao}, we have
\begin{align}
\left| {\phi _\zeta ^{\nu - 1}\left( \zeta \right)} \right| &\leq 1 + \sum\limits_{\mu = 0}^{\nu - 1} {\left| {\phi _\zeta ^\mu \left( \zeta \right) - \phi _\zeta ^{\mu - 1}\left( \zeta \right)} \right|} \notag \\
&\leq 1 + \sum\limits_{\mu = 0}^{\nu - 1} {\left( {\left| {\phi _\zeta ^\mu \left( \zeta \right) - \mathbb{I}} \right| + \left| {\phi _\zeta ^{\mu - 1}\left( \zeta \right) - \mathbb{I}} \right|} \right)} \notag \\
&\leq 1 + c\sum\limits_{\mu = 0}^\infty {{{\left( {\frac{\varepsilon }{{{2^\mu }}}} \right)}^{k - 2\tau - 2}}\varpi \left( {\frac{\varepsilon }{{{2^\mu }}}} \right)} \notag \\
&\leq 1 + c\int_0^{2\varepsilon } {\frac{{\varpi \left( x \right)}}{x^{2\tau +3-k}}{\rm d}x} \notag \\
\label{dao2} &\leq 2
\end{align}
for $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right| \leq \theta {r_\nu } $ as long as $ \varepsilon>0 $ is sufficiently small, which leads to
\begin{align*}
\left| {{\phi ^\nu }\left( \zeta \right) - {\phi ^{\nu - 1}}\left( \zeta \right)} \right| &= \left| {{\phi ^{\nu - 1}}\left( {{\psi ^\nu }\left( \zeta \right)} \right) - {\phi ^{\nu - 1}}\left( \zeta \right)} \right| \notag \\
&\leq 2\left| {{\psi ^\nu }\left( \zeta \right) - \zeta } \right| \notag \\
&\leq c\left( {1 - \theta } \right)r_\nu ^{k - 2\tau - 1}\varpi \left( {{r_\nu }} \right)
\end{align*}
for $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right| \leq {r_{\nu + 1}} $. Then by Cauchy's estimate, we obtain that
\begin{equation}\notag
\left| {\phi _\zeta ^\nu \left( \zeta \right) - \phi _\zeta ^{\nu - 1}\left( \zeta \right)} \right| \leq cr_\nu ^{k - 2\tau - 2}\varpi \left( {{r_\nu }} \right),\;\;\left| {\operatorname{Im} \xi } \right|,\left| \eta \right| \leq {r_{\nu + 1}}.
\end{equation}
It can be proved in the same way that $ | {\phi _\zeta ^\nu \left( \zeta \right)} | \leq 2 $ for $ \left| {\operatorname{Im} \xi } \right|,\left| \eta \right| \leq \theta {r_{\nu + 1}} $, which implies
\begin{equation}\notag
\left| {\operatorname{Im} z} \right| \leq 2\left| {\operatorname{Im} \zeta } \right| \leq 2\sqrt {{{\left| {\operatorname{Im} \xi } \right|}^2} + {{\left| {\operatorname{Im} \eta } \right|}^2}} \leq 2\sqrt {{\theta ^2}r_{\nu + 1}^2 + {\theta ^2}r_{\nu + 1}^2} = 2{r_{\nu + 1}} = {r_\nu }.
\end{equation}
Besides, we have $ \left| {\operatorname{Re} y} \right| \leq \rho $.
Note that
\begin{equation}\notag
{v^\nu } \circ {\left( {{u^\nu }} \right)^{ - 1}}\left( x \right) - {v^{\nu - 1}} \circ {\left( {{u^{\nu - 1}}} \right)^{ - 1}}\left( x \right) = {\left( {u_\xi ^{\nu - 1}{{\left( \xi \right)}^{ - 1}}} \right)^{\top}}U_x^\nu \left( \xi \right),\;\;x: = {u^{\nu - 1}}\left( \xi \right).
\end{equation}
Recall \eqref{dao}. By the contraction mapping principle, $ \left| {\operatorname{Im} \xi } \right| \leq {r_{\nu + 1}} $ whenever $ \left| {\operatorname{Im} x} \right| \leq \theta {r_{\nu + 1}} $ for $ x $ defined above. Then from \eqref{2.85} and \eqref{nidao} one can verify that
\begin{equation}\label{4.97}
\left| {{{\big( {u_\xi ^{\nu - 1}{{\left( \xi \right)}^{ - 1}}} \big)}^{\top}}U_x^\nu \left( \xi \right)} \right| \leq cr_\nu ^{k - \tau - 1}\varpi \left( {{r_\nu }} \right).
\end{equation}
{\textbf{Step 5:}} Finally, we are in a position to prove the uniform convergence of $ u^\nu $ and $ v^\nu $, and the regularity of their limit functions. In view of \eqref{4.97}, we obtain the following analytic iterative scheme
\begin{equation}\label{4.98}
\left| {{u^\nu }\left( \xi \right) - {u^{\nu - 1}}\left( \xi \right)} \right| \leq cr_\nu ^{k - 2\tau - 1}\varpi \left( {{r_\nu }} \right), \;\; \left| {\operatorname{Im} \xi } \right| \leq {r_{\nu + 1}},
\end{equation}
and
\begin{equation}\label{4.99}
\left| {{v^\nu } \circ {{\left( {{u^\nu }} \right)}^{ - 1}}\left( x \right) - {v^{\nu - 1}} \circ {{\left( {{u^{\nu - 1}}} \right)}^{ - 1}}\left( x \right)} \right| \leq c r_\nu ^{k - \tau - 1}\varpi \left( {{r_\nu }} \right),\;\;\left| {\operatorname{Im} x} \right| \leq \theta {r_{\nu + 1}}.
\end{equation}
In particular, \eqref{4.98} and \eqref{4.99} also hold for $ \nu=0 $ since $ {u^{0-1}} = \mathrm{id} $ and $ {v^{0-1}} = 0 $. Clearly the uniform limits $ u $ and $ v\circ u^{-1} $ of $ u^\nu $ and $ v^\nu\circ (u^\nu)^{-1} $ are at least $ C^1 $ (in fact, this is implied by the higher regularity studied later in Section \ref{furtherregularity}; alternatively, the lower regularity could be derived by techniques simpler than those in Theorem \ref{t1}, which we omit here). In addition, by \eqref{qiudao} the persistent invariant torus possesses the same universal Diophantine (class $ \tau>n-1 $) frequency $ \omega $ as the unperturbed torus.
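The uniform convergence asserted here follows from the summability of the increments in \eqref{4.98}; the following sketch illustrates the Cauchy criterion numerically (the values $ \varepsilon=0.1 $, $ \varpi(x)=\sqrt{x} $ and the exponent standing for $ k-2\tau-1=1 $ are sample assumptions, not values from the scheme).

```python
import math

# Illustrative check that the scheme (4.98) forces uniform convergence:
# the increments |u^nu - u^{nu-1}| are dominated by a summable sequence,
# so the partial sums form a Cauchy sequence in the sup norm.
# Sample assumptions: eps = 0.1, w(x) = sqrt(x), exponent e = k - 2*tau - 1 = 1.
eps, e = 0.1, 1.0

def delta(nu):                        # model bound on |u^nu - u^{nu-1}|
    r = eps / 2 ** nu                 # r_nu = eps / 2^nu
    return r ** e * math.sqrt(r)      # r_nu^e * w(r_nu)

def tail(N):                          # truncated tail sum starting at index N
    return sum(delta(nu) for nu in range(N, N + 500))

tails = [tail(N) for N in (0, 5, 10, 20)]
print(tails)                          # tails decrease to 0: Cauchy criterion
```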
\subsection{Iteration theorem on regularity beyond H\"older's type}\label{furtherregularity}
To obtain the precise regularity of $ u $ and $ v\circ u^{-1} $ from the analytic iterative scheme \eqref{4.98} and \eqref{4.99}, we shall follow the idea of Moser and Salamon to establish an \textit{abstract iterative theorem} on regularity beyond H\"older's type, which exhibits the modulus of continuity explicitly in integral form. It should be emphasized that, in passing from H\"older to non-H\"older regularity, the estimates here are considerably more delicate.
\begin{theorem}[Abstract iterative theorem]\label{t1}
Let $ n\in \mathbb{N}^+, \varepsilon>0 $ and $ \{r_\nu\}_{\nu \in \mathbb{N}}=\{\varepsilon2^{-\nu}\}_{\nu \in \mathbb{N}} $ be given, and denote by $ f:{\mathbb{R}^n} \to \mathbb{R} $ the limit of a sequence of real analytic functions $ {f_\nu }\left( x \right) $ in the strips $ \left| {\operatorname{Im} x} \right| \leq {r_{\nu} } $ such that
\begin{equation}\label{huoche}
{f_0} = 0,\;\;\left| {{f_\nu }\left( x \right) - {f_{\nu - 1}}\left( x \right)} \right| \leq \varphi \left( {{r_\nu }} \right),\;\;\nu \geq 1,
\end{equation}
where $ \varphi $ is a nondecreasing continuous function satisfying $ \varphi \left( 0 \right) = 0 $. Assume that there is a critical $ k_* \in \mathbb{N} $ such that
\begin{equation}\label{330}
\int_0^1 {\frac{{\varphi \left( x \right)}}{{{x^{{k_*} + 1}}}}{\rm d}x} < + \infty ,\;\;\int_0^1 {\frac{{\varphi \left( x \right)}}{{{x^{{k_*} + 2}}}}{\rm d}x} = + \infty .
\end{equation}
Then there exists a modulus of continuity $ \varpi_* $ such that $ f \in {C_{k_*,\varpi_* }}\left( {{\mathbb{R}^n}} \right) $; in other words, the regularity of $ f $ is at least $ C^{k_*} $ plus $ \varpi_* $. In particular, $ \varpi_* $ can be determined as
\begin{equation}\label{LLL}
{\varpi _ * }\left( \gamma \right) \sim \gamma \int_{L\left( \gamma \right)}^\varepsilon {\frac{{\varphi \left( t \right)}}{{{t^{{k_*} + 2}}}}{\rm d}t} = {\mathcal{O}^\# }\left( {\int_0^{L\left( \gamma \right)} {\frac{{\varphi \left( t \right)}}{{{t^{{k_*} + 1}}}}{\rm d}t} } \right) ,\;\;\gamma \to {0^ + },
\end{equation}
where $ L(\gamma) \to 0^+ $ is some function such that the second relation in \eqref{LLL} holds.
\end{theorem}
\begin{proof}
Define $ {g_\nu }(x): = {f_\nu }\left( x \right) - {f_{\nu - 1}}\left( x \right) $ for all $ \nu \in \mathbb{N}^+ $, and let $ \widetilde N(\gamma) : [0,1] \to \mathbb{N}^+ $ be an integer-valued function to be determined (by the arguments below, $ \widetilde N(\gamma) $ can be extended to $ \mathbb{R}^+ $, so we may assume it is continuous). Then for the given critical $ k_*\in \mathbb{N} $ and $ x,y\in \mathbb{R}^n $, we obtain the following for all multi-indices $ \alpha = \left( {{\alpha _1}, \ldots ,{\alpha _n}} \right) \in {\mathbb{N}^n} $ with $ \left| \alpha \right| = k_* $:
\begin{align}
\sum\limits_{\nu = 1}^{\widetilde N\left( {\left| {x - y} \right|} \right)-1} {\left| {{\partial ^\alpha }{g_\nu }\left( x \right) - {\partial ^\alpha }{g_\nu }\left( y \right)} \right|}&\leq \left| {x - y} \right|\sum\limits_{\nu = 1}^{\widetilde N\left( {\left| {x - y} \right|} \right)-1} {{{\left| {{\partial ^\alpha }{g_{\nu x}}} \right|}_{{C^0}(\mathbb{R}^n)}}}\notag \\
&\leq \left| {x - y} \right|\sum\limits_{\nu = 1}^{\widetilde N\left( {\left| {x - y} \right|} \right)-1} {\frac{1}{{r_\nu ^{k_* + 1}}}\varphi \left( {{r_\nu }} \right)} \notag \\
& = 2\left| {x - y} \right|\sum\limits_{\nu = 1}^{\widetilde N\left( {\left| {x - y} \right|} \right)-1} {\left( {\frac{\varepsilon }{{{2^\nu }}} - \frac{\varepsilon }{{{2^{\nu + 1}}}}} \right){{\left( {\frac{{{2^\nu }}}{\varepsilon }} \right)}^{{k_ * } + 2}}\varphi \left( {\frac{\varepsilon }{{{2^\nu }}}} \right)}\notag \\
\label{zhengze1}&\leq c\left| {x - y} \right|\int_{\varepsilon {2^{ - \widetilde N\left( {\left| {x - y} \right|} \right) }}}^\varepsilon {\frac{{\varphi \left( t \right)}}{{{t^{{k_*} + 2}}}}{\rm d}t} ,
\end{align}
where Cauchy's estimate and \eqref{huoche} are used in the second inequality, arguments similar to \eqref{dao} are employed in \eqref{zhengze1}, and $ c>0 $ is a universal constant. Similarly, we get
\begin{align}
\sum\limits_{\nu = \widetilde N\left( {\left| {x - y} \right|} \right) }^\infty {\left| {{\partial ^\alpha }{g_\nu }\left( x \right) - {\partial ^\alpha }{g_\nu }\left( y \right)} \right|} &\leq \sum\limits_{\nu = \widetilde N\left( {\left| {x - y} \right|} \right) }^\infty {2{{\left| {{\partial ^\alpha }{g_\nu }} \right|}_{{C^0}(\mathbb{R}^n)}}}\notag \\
&\leq 2\sum\limits_{\nu = \widetilde N\left( {\left| {x - y} \right|} \right)}^\infty {\frac{1}{{r_\nu ^{k_*}}}\varphi \left( {{r_\nu }} \right)}\notag \\
&=2\sum\limits_{\nu = \widetilde N\left( {\left| {x - y} \right|} \right)}^\infty {\left( {\frac{\varepsilon }{{{2^\nu }}} - \frac{\varepsilon }{{{2^{\nu + 1}}}}} \right){{\left( {\frac{{{2^\nu }}}{\varepsilon }} \right)}^{{k_ * } + 1}}\varphi \left( {\frac{\varepsilon }{{{2^\nu }}}} \right)}\notag \\
\label{zhengze2}& \leq c\int_0^{\varepsilon {2^{ - \widetilde N\left( {\left| {x - y} \right|} \right) }}} {\frac{{\varphi \left(t \right)}}{{{t^{{k_*} + 1}}}}{\rm d}t} .
\end{align}
Now let us choose $ \widetilde{N}(\gamma) \to +\infty $ as $ \gamma \to 0^+ $ such that
\begin{equation}\label{varpi*}
\gamma \int_{L\left( \gamma \right)}^\varepsilon {\frac{{\varphi \left( t \right)}}{{{t^{{k_*} + 2}}}}{\rm d}t} = {\mathcal{O}^\# }\left( {\int_0^{L\left( \gamma \right)} {\frac{{\varphi \left( t \right)}}{{{t^{{k_*} + 1}}}}{\rm d}t} } \right): = {\varpi _ * }\left( \gamma \right),\;\;\gamma \to {0^ + },
\end{equation}
where $ \varepsilon {2^{ - \widetilde N\left( \gamma \right) - 1}}: = L\left( \gamma \right)\to 0^+ $. This is achievable due to assumption \eqref{330}, Cauchy's theorem and the intermediate value theorem. Note that the choice of $ L(\gamma) $ (i.e., of $ \widetilde{N} $) and $ \varpi_* $ is not unique (only up to constants), and $ \varpi_* $ can be continuously extended to a given interval (e.g., $ [0,1] $), but this does not affect the qualitative result. Combining \eqref{zhengze1}, \eqref{zhengze2} and \eqref{varpi*} we finally arrive at $ f \in {C_{k_*,\varpi_* }}\left( {{\mathbb{R}^n}} \right) $ because
\begin{align*}
\left| {{\partial ^\alpha }f\left( x \right) - {\partial ^\alpha }f\left( y \right)} \right| &\leq \left( {\sum\limits_{\nu = 1}^{\widetilde N\left( {\left| {x - y} \right|} \right)} + \sum\limits_{\nu = \widetilde N\left( {\left| {x - y} \right|} \right) + 1}^\infty } \right)\left| {{\partial ^\alpha }{g_\nu }\left( x \right) - {\partial ^\alpha }{g_\nu }\left( y \right)} \right| \\
& \leq c\left( {\left| {x - y} \right|\int_{\varepsilon {2^{ - \widetilde N\left( {\left| {x - y} \right|} \right) - 1}}}^\varepsilon {\frac{{\varphi \left( t \right)}}{{{t^{{k_*} + 2}}}}{\rm d}t} + \int_0^{\varepsilon {2^{ - \widetilde N\left( {\left| {x - y} \right|} \right) - 1}}} {\frac{{\varphi \left( t \right)}}{{{t^{{k_*} + 1}}}}{\rm d}t} } \right) \\
&\leq c\varpi_* \left( {\left| {x - y} \right|} \right).
\end{align*}
\end{proof}
Theorem \ref{t1} can be extended to the case $ f:{\mathbb{R}^n} \to \mathbb{R}^m $ with $ n,m \in \mathbb{N}^+ $ since the analysis is exactly the same, and the strip $ \left| {\operatorname{Im} x} \right| \leq {r_\nu } $ can also be replaced by $ \left| {\operatorname{Im} x} \right| \leq {r_{\nu+1} } $ (or $ \leq \theta r_{\nu+1} $). Theorem \ref{t1} can also be used to estimate the regularity of solutions of finitely smooth homological equations, from which KAM uniqueness theorems might be derived in some cases; see Section 4 in \cite{salamon} for instance. To keep the paper concise, we omit this here.
Finally, recalling \eqref{4.98} and \eqref{4.99}, one applies Theorem \ref{t1} to $ \{u^{\nu}-\mathrm{id}\}_\nu $ (because Theorem \ref{t1} requires that the initial term vanishes) and $ \{v^{\nu}\circ (u^\nu)^{-1}\}_\nu $ to analyze directly the regularity of the KAM torus and the conjugation according to (H5): there exist moduli of continuity $ {\varpi _i} $ ($ i=1,2 $) such that $ u \in {C_{k_1^ * ,{\varpi _1}}}\left( {{\mathbb{R}^n},{\mathbb{R}^n}} \right) $ and $ v \circ {u^{ - 1}} \in {C_{k_2^ * ,{\varpi _2}}}\left( {{\mathbb{R}^n},G} \right) $. This completes the proof of Theorem \ref{theorem1}.
\section{Proof of other results}\label{ProofOther}
\subsection{Proof of Theorem \ref{Holder}}\label{8}
Note that $ \ell \notin {\mathbb{N}^ + } $ implies $ \{\ell\}\in (0,1) $. Then $ k=[\ell] $ and $ \varpi(x)\sim \varpi_{\mathrm{H}}^{\ell}(x)\sim x^{\{\ell\}} $, i.e., a modulus of continuity of H\"older type. Consequently, (H1) can be verified directly because $ \ell > 2\tau + 2 $:
\begin{align*}
\int_0^1 {\frac{{\varpi \left( x \right)}}{{{x^{2\tau + 3 - k}}}}{\rm d}x} & = \int_0^1 {\frac{{{x^{\left\{ \ell \right\}}}}}{{{x^{2\tau + 3 - \left[ \ell \right]}}}}{\rm d}x} \notag \\
&= \int_0^1 {\frac{1}{{{x^{1 - \left( {\ell - 2\tau - 2} \right)}}}}{\rm d}x} < + \infty .
\end{align*}
Here and below, let $ i $ be $ 1 $ or $ 2 $ for simplicity. Recall that $ {\varphi _i}\left( x \right) = {x^{k - \left( {3 - i} \right)\tau - 1}}\varpi \left( x \right) = {x^{\left[ \ell \right] - \left( {3 - i} \right)\tau - 1}} \times {x^{\left\{ \ell \right\}}} = {x^{\ell - \left( {3 - i} \right)\tau - 1}} $, and note that
\begin{align}
\label{fff}\int_0^1 {\frac{{{\varphi _i}\left( x \right)}}{{{x^{k_i^ * + 1}}}}{\rm d}x} &= \int_0^1 {\frac{1}{{{x^{k_i^ * - \left( {\ell - \left( {3 - i} \right)\tau - 2} \right)}}}}{\rm d}x}<+\infty,\\
\label{ffff}\int_0^1 {\frac{{{\varphi _i}\left( x \right)}}{{{x^{k_i^ * + 2}}}}{\rm d}x} &= \int_0^1 {\frac{1}{{{x^{k_i^ * - \left( {\ell - \left( {3 - i} \right)\tau - 2} \right) + 1}}}}{\rm d}x}=+\infty .
\end{align}
Then the critical $ k_i^* $ in (H5) could be uniquely chosen as $ k_i^ * : = \left[ {\ell - \left( {3 - i} \right)\tau - 1} \right] \in \mathbb{N}^+$ since $ \ell - \left( {3 - i} \right)\tau - 1 \notin \mathbb{N}^+ $. Further, letting $ {L_i}\left( \gamma \right) = \gamma \to {0^ + } $ yields
\begin{align*}
\int_0^{{L_i}\left( \gamma \right)} {\frac{{{\varphi _i}\left( t \right)}}{{{t^{k_i^ * + 1}}}}{\rm d}t} &= {\mathcal{O}^\# }\left( {\int_0^\gamma {\frac{1}{{{t^{1 - \left\{ {\ell - \left( {3 - i} \right)\tau - 2} \right\}}}}}{\rm d}t} } \right) \notag \\
&= {\mathcal{O}^\# }\left( {{\gamma ^{\left\{ {\ell - \left( {3 - i} \right)\tau - 2} \right\}}}} \right)
\end{align*}
and
\begin{align*}
\gamma \int_{{L_i}\left( \gamma \right)}^\varepsilon {\frac{{{\varphi _i}\left( t \right)}}{{{t^{k_i^ * + 2}}}}{\rm d}t} &= {\mathcal{O}^\# }\left( {\gamma \int_\gamma ^\varepsilon {\frac{1}{{{t^{2 - \left\{ {\ell - \left( {3 - i} \right)\tau - 2} \right\}}}}}{\rm d}t} } \right) \notag \\
&= {\mathcal{O}^\# }\left( {{\gamma ^{\left\{ {\ell - \left( {3 - i} \right)\tau - 2} \right\}}}} \right).
\end{align*}
This leads to a modulus of continuity of H\"older type
\[{\varpi _i}\left( \gamma \right) \sim {\left( {{L_i}\left( \gamma \right)} \right)^{\left\{ {\ell - \left( {3 - i} \right)\tau - 2} \right\}}} \sim {\gamma ^{\left\{ {\ell - \left( {3 - i} \right)\tau - 2} \right\}}}\sim \varpi_{\mathrm{H}}^{\left\{ {\ell - \left( {3 - i} \right)\tau - 2} \right\}}(\gamma)\]
due to \eqref{varpii} in Theorem \ref{Theorem1}.
Finally by observing the H\"older indices $ k_i^ * + \left\{ {\ell - \left( {3 - i} \right)\tau - 2} \right\} = \ell - \left( {3 - i} \right)\tau - 1 $ for $ i=1,2 $, we conclude that $ u \in {C^{\ell-2\tau-1}}\left( {{\mathbb{R}^n},{\mathbb{R}^n}} \right) $ and $ v \circ {u^{ - 1}} \in {C^{\ell-\tau-1}}\left( {{\mathbb{R}^n},G} \right) $. This proves Theorem \ref{Holder}.
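The index arithmetic above can be double-checked mechanically. In the sketch below, the sample values $ \ell=7.3 $, $ \tau=2.1 $ are assumptions satisfying $ \ell>2\tau+2 $ with $ \ell-(3-i)\tau-1\notin\mathbb{N}^+ $; it recomputes $ k_i^* $, verifies the exponent conditions behind \eqref{fff} and \eqref{ffff}, and confirms that $ k_i^*+\{\ell-(3-i)\tau-2\}=\ell-(3-i)\tau-1 $.

```python
import math

# Sanity check of the Hoelder index arithmetic; ell = 7.3, tau = 2.1 are
# sample assumptions with ell > 2*tau + 2 and ell - (3-i)*tau - 1 non-integer.
ell, tau = 7.3, 2.1
assert ell > 2 * tau + 2

results = []
for i in (1, 2):
    s = ell - (3 - i) * tau - 1   # target total index ell - (3-i)*tau - 1
    k_star = math.floor(s)        # critical k_i^* = [ell - (3-i)*tau - 1]
    frac = s - k_star             # fractional part {ell - (3-i)*tau - 2}
    conv_exp = k_star - (s - 1)   # exponent of 1/x in (fff); < 1 means convergence
    div_exp = conv_exp + 1        # exponent of 1/x in (ffff); >= 1 means divergence
    results.append((i, k_star, frac, conv_exp < 1, div_exp >= 1))

print(results)  # k_1^* = 2 and k_2^* = 4 for these sample values
```

For these sample values the total indices are $ \ell-2\tau-1=2.1 $ and $ \ell-\tau-1=4.2 $, matching the exponents in the conclusion above.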
\subsection{Proof of Theorem \ref{lognew}}
Firstly, note that $ k=[2\tau+2] $ and $ \varpi \left( x \right) \sim {x^{\{2\tau+2\}}}/{\left( { - \ln x} \right)^\lambda } $ with $ \lambda > 1 $; then (H1) holds since
\begin{align*}
\int_0^1 {\frac{{\varpi \left( x \right)}}{{{x^{2\tau + 3 - k}}}}{\rm d}x} &= {\mathcal{O}^\# }\left( {\int_0^{1/2} {\frac{{{x^{\left\{ {2\tau + 2} \right\}}}}}{{{x^{2\tau + 3 - \left[ {2\tau + 2} \right]}}{{\left( { - \ln x} \right)}^\lambda }}}{\rm d}x} } \right)\\
& = {\mathcal{O}^\# }\left( {\int_0^{1/2} {\frac{1}{{x{{\left( { - \ln x} \right)}^\lambda }}}{\rm d}x} } \right) < + \infty .
\end{align*}
Thus the frequency-preserving KAM persistence is obtained. The rest of this subsection is devoted to the remaining regularity.
Secondly, in view of $ \varphi_i(x) $ in (H5), we have
\[\int_0^1 {\frac{{{\varphi _i}\left( x \right)}}{{{x^{k_i^ * + 1}}}}{\rm d}x} = {\mathcal{O}^\# }\left( {\int_0^{1/2} {\frac{1}{{{x^{k_i^ * - \left( {i - 1} \right)\tau }}{{\left( { - \ln x} \right)}^\lambda }}}{\rm d}x} } \right)\;\;i=1,2.\]
This leads to the critical values $ k_1^*=1 $ and $ k_2^*=[\tau+1] $ in (H5), where one uses the following fact: for given $ \lambda>1 $,
\[\int_0^{1/2} {\frac{1}{{{x^\iota }{{\left( { - \ln x} \right)}^\lambda }}}{\rm d}x} < + \infty ,\;\;\int_0^{1/2} {\frac{1}{{{x^{\iota + 1}}{{\left( { - \ln x} \right)}^\lambda }}}{\rm d}x}=+\infty \]
if and only if $ \iota \in (0,1] $.
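The borderline case $ \iota=1 $ of this fact can be illustrated numerically. In the sketch below, the value $ \lambda=2 $ is a sample assumption; the substitution $ u=\ln x $ turns the integral into one with the closed form $ (\ln 2)^{1-\lambda}/(\lambda-1) $, which a truncated midpoint rule approximates.

```python
import math

# Illustrative numeric check (lambda = 2 is a sample assumption) of the fact
# that int_0^{1/2} dx / (x * (-ln x)^lambda) is finite for lambda > 1.
# Substituting u = ln x gives the u-integrand (-u)^(-lambda), and the exact
# value is (ln 2)^(1 - lambda) / (lambda - 1).
lam = 2.0

a, b = -4000.0, math.log(0.5)   # u ranges over (-inf, ln(1/2)); truncate at -4000
N = 400000
h = (b - a) / N

total = 0.0
for j in range(N):
    u = a + (j + 0.5) * h        # midpoint rule in the variable u = ln x
    total += (-u) ** (-lam) * h  # u-integrand: (-u)^(-lambda)

closed_form = math.log(2.0) ** (1.0 - lam) / (lam - 1.0)
print(total, closed_form)        # truncated numeric value approaches the closed form
```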
Next, we shall investigate the remaining KAM regularity through a somewhat involved asymptotic analysis; that is, we exhibit the moduli of continuity $ \varpi_1 $ and $ \varpi_2 $ explicitly. We have to discuss this problem in two cases, because $ (i-1)\tau=0 $ when $ i= 1$ (corresponding to the conjugation), i.e., independent of the Diophantine property, while $ (i-1)\tau=\tau $ when $ i=2 $ (corresponding to the KAM torus). As will be seen, these lead to different asymptotic analyses.\vspace{3mm}
\\
{\textbf{Case 1:}} Here we provide the analysis for $ \varpi_1 $ with all $ \tau>n-1 $, as well as for $ \varpi_2 $ with $ n-1<\tau \in \mathbb{N}^+ $. In view of $ \varphi_i(x) $ in (H5), applying Lemma \ref{duochongduishu} with $ \varrho=1 $ gives
\begin{align}
\gamma \int_{{L_i}\left( \gamma \right)}^\varepsilon {\frac{{{\varphi _i}\left( t \right)}}{{{t^{k_i^ * + 2}}}}{\rm d}t} &= {\mathcal{O}^\# }\Bigg( {\gamma \int_{{L_i}\left( \gamma \right)}^\varepsilon {\frac{1}{{{t^2}(\ln (1/t))^\lambda }}{\rm d}t} } \Bigg)\notag \\
& = {\mathcal{O}^\# }\Bigg( {\gamma \int_{1/\varepsilon }^{1/{L_i}\left( \gamma \right)} {\frac{1}{{(\ln z)^\lambda }}{\rm d}z} } \Bigg)\notag \\
\label{Cguji1}& = {\mathcal{O}^\# }\Bigg( {\frac{\gamma }{{{L_i}\left( \gamma \right)(\ln (1/{L_i}\left( \gamma \right)))^\lambda}}} \Bigg),
\end{align}
and by direct calculation one arrives at
\begin{align}
\int_0^{{L_i}\left( \gamma \right)} {\frac{{{\varphi _i}\left( t \right)}}{{{t^{k_i^ * + 1}}}}{\rm d}t} & = {\mathcal{O}^\# }\Bigg( {\int_{{L_i}\left( \gamma \right)}^\varepsilon {\frac{1}{{t(\ln (1/t))^\lambda }}{\rm d}t} } \Bigg)\notag \\
\label{Cguji2}&= {\mathcal{O}^\# }\Bigg( {\frac{1}{{{{( {\ln } (1/{L_i}\left( \gamma \right)))}^{\lambda - 1}}}}} \Bigg).
\end{align}
Finally, choosing
\begin{equation}\label{CLit4}
{L_i}\left( \gamma \right) \sim \frac{{\gamma }}{{\ln (1/\gamma) }} \to {0^ + },\;\;\gamma \to {0^ + }
\end{equation}
leads to the second relation in \eqref{varpii} for $ i=1,2 $, and substituting $ L_i(\gamma) $ into \eqref{Cguji1} or \eqref{Cguji2} yields
\[{\varpi _1}\left( \gamma \right) \sim \frac{1}{{{{\left( { - \ln \gamma} \right)}^{\lambda - 1}}}} \sim \varpi _{\mathrm{LH}}^{\lambda - 1}\left( \gamma \right), \;\; \tau >n-1\]
and
\[{\varpi _2}\left( \gamma \right) \sim \frac{1}{{{{\left( { - \ln \gamma} \right)}^{\lambda - 1}}}} \sim \varpi _{\mathrm{LH}}^{\lambda - 1}\left( \gamma \right), \;\; n-1<\tau \in \mathbb{N}^+\]
in Theorem \ref{theorem1}, see \eqref{varpii}. \vspace{3mm}
\\
{\textbf{Case 2:}} However, the asymptotic analysis for $ \varpi_2 $ becomes rather different when $ n-1<\tau \notin \mathbb{N}^+ $. Note that $ \{\tau\} \in (0,1) $ and $ \left[ {\tau + 1} \right] - \tau = \left[ \tau \right] + 1 - \tau = 1 - \left\{ \tau \right\} $ in this case. Hence, by applying \eqref{erheyi1} in Lemma \ref{erheyi} we get
\begin{align}
\int_0^{{L_2}\left( \gamma \right)} {\frac{{{\varphi _2}\left( t \right)}}{{{t^{k_2^ * + 1}}}}{\rm d}t} &= {\mathcal{O}^\# }\left( {\int_0^{{L_2}\left( \gamma \right)} {\frac{1}{{{t^{\left[ {\tau + 1} \right] - \tau }}{{\left( { - \ln t} \right)}^\lambda }}}{\rm d}t} } \right) \notag \\
&= {\mathcal{O}^\# }\left( {\int_0^{{L_2}\left( \gamma \right)} {\frac{1}{{{t^{1 - \left\{ \tau \right\}}}{{\left( { - \ln t} \right)}^\lambda }}}{\rm d}t} } \right)\notag \\
&= {\mathcal{O}^\# }\left( {\int_{1/{L_2}\left( \gamma \right)}^{ + \infty } {\frac{1}{{{z^{1 + \left\{ \tau \right\}}}{{\left( {\ln z} \right)}^\lambda }}}{\rm d}z} } \right) \notag \\
\label{coro42}& = {\mathcal{O}^\# }\left( {\frac{{{{\left( {{L_2}\left( \gamma \right)} \right)}^{\left\{ \tau \right\}}}}}{{{{\left( {\ln \left( {1/{L_2}\left( \gamma \right)} \right)} \right)}^\lambda }}}} \right),
\end{align}
and similarly according to \eqref{erheyi2} in Lemma \ref{erheyi} we have
\begin{align}
\gamma \int_{{L_2}\left( \gamma \right)}^\varepsilon {\frac{{{\varphi _2}\left( t \right)}}{{{t^{k_2^ * + 2}}}}{\rm d}t} &= {\mathcal{O}^\# }\left( {\gamma \int_{1/\varepsilon }^{1/{L_2}\left( \gamma \right)} {\frac{1}{{{z^{\left\{ \tau \right\}}}{{\left( {\ln z} \right)}^\lambda }}}{\rm d}z} } \right)\notag \\
\label{coro43}& = {\mathcal{O}^\# }\left( {\frac{{\gamma {{\left( {{L_2}\left( \gamma \right)} \right)}^{\left\{ \tau \right\} - 1}}}}{{{{\left( {\ln \left( {1/{L_2}\left( \gamma \right)} \right)} \right)}^\lambda }}}} \right).
\end{align}
Now let us choose $ L_2(\gamma) \sim \gamma \to 0^+ $, which differs from the choice in Case 1 for $ n-1<\tau \in \mathbb{N}^+ $. One verifies that the second relation in \eqref{varpii} holds for $ i=2 $, and substituting $ L_2(\gamma) $ into \eqref{coro42} or \eqref{coro43} yields the following modulus of continuity
\[{\varpi _2}\left( \gamma \right) \sim \frac{{{\gamma ^{\left\{ \tau \right\}}}}}{{{{\left( { - \ln \gamma } \right)}^\lambda }}} \sim {\gamma ^{\left\{ \tau \right\}}}\varpi _{\mathrm{LH}}^\lambda \left( \gamma \right),\;\; n-1<\tau \notin \mathbb{N}^+\]
due to \eqref{varpii} in Theorem \ref{theorem1}.
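The balancing role of the cutoff functions $ L_i(\gamma) $ can be illustrated numerically. In the sketch below, the value $ \lambda=2 $ is a sample assumption; with the Case 1 choice \eqref{CLit4}, both \eqref{Cguji1} and \eqref{Cguji2} indeed behave like $ (-\ln\gamma)^{1-\lambda} $ for small $ \gamma $.

```python
import math

# Illustrative check (lambda = 2 is a sample assumption) that the choice
# L(gamma) = gamma / ln(1/gamma) balances the two sides of (varpii):
# both (Cguji1) and (Cguji2) then behave like (ln(1/gamma))^(1 - lambda).
lam = 2.0

def L(g):       # Case 1 choice (CLit4)
    return g / math.log(1.0 / g)

def side1(g):   # gamma / (L * (ln(1/L))^lambda), cf. (Cguji1)
    return g / (L(g) * math.log(1.0 / L(g)) ** lam)

def side2(g):   # (ln(1/L))^(1 - lambda), cf. (Cguji2)
    return math.log(1.0 / L(g)) ** (1.0 - lam)

def target(g):  # the claimed modulus (ln(1/gamma))^(1 - lambda)
    return math.log(1.0 / g) ** (1.0 - lam)

g = 1e-300
r1 = side1(g) / target(g)
r2 = side2(g) / target(g)
print(r1, r2)   # both ratios are close to 1 for small gamma
```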
This finishes the proof of Theorem \ref{lognew}.
\subsection{Proof of Theorem \ref{GLHnew}}
It is sufficient to determine $ k_i^* $ in (H5) and to choose functions $ L_i(\gamma)\to 0^+ $ (as $ \gamma \to 0^+ $), obtaining the modulus of continuity $ \varpi_i $ in \eqref{varpii} for $ i=1,2 $. Obviously $ k_1^*=1 $ and $ k_2^*=\tau+1 $ due to $ \tau \in \mathbb{N}^+ $ and
\[\int_0^1 {\frac{{\varpi _{\mathrm{GLH}}^{\varrho ,\lambda }\left( x \right)}}{x}{\rm d}x} < + \infty ,\;\;\int_0^1 {\frac{{\varpi _{\mathrm{GLH}}^{\varrho ,\lambda }\left( x \right)}}{{{x^2}}}{\rm d}x} = + \infty .\]
In view of $ \varphi_i(x) $ in (H5), applying Lemma \ref{duochongduishu} gives
\begin{align}
\gamma \int_{{L_i}\left( \gamma \right)}^\varepsilon {\frac{{{\varphi _i}\left( t \right)}}{{{t^{k_i^ * + 2}}}}{\rm d}t} &= {\mathcal{O}^\# }\Bigg( {\gamma \int_{{L_i}\left( \gamma \right)}^\varepsilon {\frac{1}{{{t^2}(\ln (1/t)) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho (1/t))}^\lambda }}}{\rm d}t} } \Bigg)\notag \\
\label{guji1}& = {\mathcal{O}^\# }\Bigg( {\frac{\gamma }{{{L_i}\left( \gamma \right)(\ln (1/{L_i}\left( \gamma \right))) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho (1/{L_i}\left( \gamma \right)))}^\lambda }}}} \Bigg),
\end{align}
and by direct calculation one derives
\begin{align}
\int_0^{{L_i}\left( \gamma \right)} {\frac{{{\varphi _i}\left( t \right)}}{{{t^{k_i^ * + 1}}}}{\rm d}t} & = {\mathcal{O}^\# }\Bigg( {\int_{{L_i}\left( \gamma \right)}^\varepsilon {\frac{1}{{t(\ln (1/t)) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho (1/t))}^\lambda }}}{\rm d}t} } \Bigg)\notag \\
\label{guji2}&= {\mathcal{O}^\# }\Bigg( {\frac{1}{{{{(\underbrace {\ln \cdots \ln }_\varrho (1/{L_i}\left( \gamma \right)))}^{\lambda - 1}}}}} \Bigg).
\end{align}
Finally, choosing
\begin{equation}\label{Lit4}
{L_i}\left( \gamma \right) \sim \frac{{\gamma }}{{(\ln (1/\gamma)) \cdots (\underbrace {\ln \cdots \ln }_{\varrho}(1/\gamma )})} \to {0^ + },\;\;\gamma \to {0^ + }
\end{equation}
leads to the second relation in \eqref{varpii} for $ i=1,2 $, and substituting $ L_i(\gamma) $ into \eqref{guji1} or \eqref{guji2} yields that
\begin{equation}\label{rho1}
{\varpi _1}\left( \gamma \right) \sim {\varpi _2}\left( \gamma \right) \sim \frac{1}{{{{(\underbrace {\ln \cdots \ln }_\varrho (1/\gamma))}^{\lambda - 1}}}}
\end{equation}
in Theorem \ref{theorem1}, see \eqref{varpii}, which proves Theorem \ref{GLHnew}.
\section{Appendix}
\subsection{Semi separability and weak homogeneity for modulus of continuity}
\begin{lemma}\label{Oxlemma}
Let a modulus of continuity $ \varpi $ be given. If $ \varpi $ is piecewise continuously differentiable and $ \varpi'\geq 0 $ is nonincreasing, then $ \varpi $ admits semi separability in the sense of Definition \ref{d2}. As a consequence, if $ \varpi $ is concave near $ 0^+ $ (so that $ \varpi' $ is nonincreasing), then it is semi separable.
\end{lemma}
\begin{proof}
Assume that $ \varpi $ is continuously differentiable without loss of generality. Then we obtain semi separability due to
\begin{align*}
\mathop {\sup }\limits_{0 < r < \delta /x} \frac{{\varpi \left( {rx} \right)}}{{\varpi \left( r \right)}} &= \mathop {\sup }\limits_{0 < r < \delta /x} \frac{{\varpi \left( {rx} \right) - \varpi \left( 0+ \right)}}{{\varpi \left( r \right)}} \leq \mathop {\sup }\limits_{0 < r < \delta /x} \frac{1}{{\varpi \left( r \right)}}\sum\limits_{j = 0}^{\left[ x \right]} {\int_{jr}^{\left( {j + 1} \right)r} {\varpi '\left( t \right){\rm d}t} } \\
& \leq \mathop {\sup }\limits_{0 < r < \delta /x} \frac{1}{{\varpi \left( r \right)}}\sum\limits_{j = 0}^{\left[ x \right]} {\int_0^r {\varpi '\left( t \right){\rm d}t} } = \left( {\left[ x \right] + 1} \right) = \mathcal{O}\left( x \right),\;\;x \to + \infty .
\end{align*}
\end{proof}
\begin{lemma}\label{ruotux}
Let a modulus of continuity $ \varpi $ be given. If $ \varpi $ is concave near $ 0^+ $, then it admits weak homogeneity in the sense of Definition \ref{weak}.
\end{lemma}
\begin{proof}
For $ x>0 $ sufficiently small and $ 0<a<1 $, concavity of $ \varpi $ (the difference quotient from $ 0 $ is nonincreasing) yields
\[\varpi \left( x \right) = x \cdot \frac{{\varpi \left( x \right) - \varpi \left( 0+ \right)}}{x-0} \leq x \cdot \frac{{\varpi \left( {ax} \right) - \varpi \left( 0+ \right)}}{{ax}-0} = a^{-1}\varpi \left( {ax} \right),\]
which leads to weak homogeneity
\[ \mathop {\overline {\lim } }\limits_{x \to {0^ + }} \frac{{\varpi \left( x \right)}}{{\varpi \left( {ax} \right)}} \leq a^{-1} < + \infty .\]
\end{proof}
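The weak-homogeneity bound $ \varpi(x)\le a^{-1}\varpi(ax) $ can also be illustrated numerically; in the sketch below, the moduli $ \sqrt{x} $ and $ 1/(-\ln x) $ and the value $ a=1/4 $ are sample assumptions.

```python
import math

# Numeric illustration (not a proof) of weak homogeneity: for the sample
# moduli below one expects sup_x varpi(x) / varpi(a*x) <= 1/a for small x.
a = 0.25
xs = [10.0 ** (-j) for j in range(1, 12)]   # sample points tending to 0+

def sqrt_mod(x):      # Hoelder-type sample modulus (assumption)
    return math.sqrt(x)

def log_mod(x):       # log-Hoelder-type sample modulus (assumption)
    return 1.0 / (-math.log(x))

worst = {}
for name, w in (("sqrt", sqrt_mod), ("log", log_mod)):
    worst[name] = max(w(x) / w(a * x) for x in xs)
print(worst)          # both suprema stay below 1/a = 4
```

Both suprema stay below $ 1/a=4 $, as the lemma predicts.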
\subsection{Proof of Theorem \ref{Theorem1}} \label{JACK}
\begin{proof}
For completeness we give a very detailed proof. The strategy is as follows: we first construct an approximation operator via the Fourier transform of a compactly supported function and collect certain properties of this operator (these preparations are classical); finally, we estimate the approximation error in terms of the modulus of continuity.
Let
\[K\left( x \right) = \frac{1}{{{{\left( {2\pi } \right)}^n}}}\int_{{\mathbb{R}^n}} {\widehat K\left( \xi \right){e^{\mathrm{i}\left\langle {x,\xi } \right\rangle }}{\rm d}\xi } ,\;\;x \in {\mathbb{C}^n}\]
be an entire function, whose Fourier transform
\[\widehat K\left( \xi \right) = \int_{{\mathbb{R}^n}} {K\left( x \right){e^{ - \mathrm{i}\left\langle {x,\xi } \right\rangle }}{\rm d}x} ,\;\;\xi \in {\mathbb{R}^n}\]
is a smooth function with compact support, contained in the ball $ \left| \xi \right| \leq 1 $, that satisfies $ \widehat K\left( \xi \right) = \widehat K\left( { - \xi } \right) $ and
\begin{equation}\label{3.3}
{\partial ^\alpha }\widehat K\left( 0 \right) = \left\{ \begin{gathered}
1,\;\;\alpha = 0, \hfill \\
0,\;\;\alpha \ne 0. \hfill \\
\end{gathered} \right.
\end{equation}
Next, we assert that
\begin{equation}\label{5}
\left| {{\partial ^\beta }\mathcal{F}\big( {\widehat K\left( \xi \right)} \big)\left( z \right)} \right| \leq \frac{{c\left( {\beta ,p} \right)}}{{{{\left( {1 + \left| {\operatorname{Re} z} \right|} \right)}^p}}}{e^{\left| {\operatorname{Im} z} \right|}},\;\;\max \left\{ {1,\left|\beta\right|} \right\} \leq p \in \mathbb{R}.
\end{equation}
Note that $ \widehat K \in C_0^\infty \left( {{\mathbb{R}^n}} \right) $ and $ \operatorname{supp} \widehat K \subseteq B\left( {0,1} \right) $. Integrating by parts, $ z^\gamma \partial^\beta \mathcal{F}\big( \widehat K \big)(z) $ coincides, up to a unimodular constant, with $ \mathcal{F}\big( \partial^\gamma ( \xi^\beta \widehat K ) \big)(z) $, and thus
\begin{equation}\label{FK}
\left| {{{\left( {1 + \left| z \right|} \right)}^k}{\partial ^\beta }\mathcal{F}\big( {\widehat K} \big)\left( z \right)} \right| \leq c\left( {n,k} \right)\sum\limits_{\left| \gamma \right| \leq k} {\left| {{z^\gamma }{\partial ^\beta }\mathcal{F}\big( {\widehat K} \big)\left( z \right)} \right|} = c\left( {n,k} \right)\sum\limits_{\left| \gamma \right| \leq k} {\left| {\mathcal{F}\big( {{\partial ^\gamma }\big( {{\xi ^\beta }\widehat K} \big)} \big)\left( z \right)} \right|},
\end{equation}
where $ \mathcal{F} $ denotes the Fourier transform. Since every $ {\partial ^\gamma }\big( {{\xi ^\beta }\widehat K} \big) $ again belongs to $ C_0^\infty ( {\overline {B \left( {0,1} \right)}} ) $, the computation below applies to each summand verbatim, so we only need to prove that
\[\left| {\mathcal{F}\big( {\widehat K\left( \xi \right)} \big)\left( z \right)} \right| \leq {c_k}{e^{\left| {\operatorname{Im} z} \right|}}.\]
Obviously
\[\left| {\mathcal{F}\big( {\widehat K\left( \xi \right)} \big)\left( z \right)} \right| \leq \frac{1}{{{{\left( {2\pi } \right)}^n}}}\int_{{\mathbb{R}^n}} {\big| {\widehat K\left( \xi \right)} \big|{e^{ - \left\langle {\operatorname{Im} z,\xi } \right\rangle }}{\rm d}\xi } \leq \frac{c}{{{{\left( {2\pi } \right)}^n}}}\int_{B\left( {0,1} \right)} {{e^{\left| {\left\langle {\operatorname{Im} z,\xi } \right\rangle } \right|}}{\rm d}\xi } \leq c{e^{\left| {\operatorname{Im} z} \right|}},\]
where $ c>0 $ is independent of $ z $. Assertion \eqref{5} is then proved by recalling \eqref{FK}.
The estimate \eqref{5} is a Paley--Wiener type bound; see also Chapter III in \cite{Stein}. As we shall see, it justifies the integrations by parts below as well as the complex translations performed via Cauchy's integral formula.
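The exponential growth factor $ e^{\left| \operatorname{Im} z \right|} $ in \eqref{5} can be illustrated numerically on the imaginary axis. The sketch below uses a generic smooth bump $ \widehat g $ supported in $ [-1,1] $ as a stand-in for $ \widehat K $ (an assumption; it is not the kernel of the paper), and checks that $ \big|\mathcal F(\widehat g)(\mathrm{i}y)\big| \leq e^{|y|}\int \widehat g $:

```python
import numpy as np

# A smooth bump supported in [-1, 1]; a stand-in for K-hat (illustrative only).
xi = np.linspace(-0.999999, 0.999999, 20001)
ghat = np.exp(-1.0 / (1.0 - xi**2))
dxi = xi[1] - xi[0]
mass = np.sum(ghat) * dxi                        # ∫ ghat dξ > 0

# On the imaginary axis z = iy one has e^{i<z,ξ>} = e^{-yξ}, bounded by e^{|y|}
for yv in [0.5, 1.0, 3.0, 6.0]:
    val = np.sum(ghat * np.exp(-yv * xi)) * dxi  # F(ghat)(iy), up to (2π)^{-n}
    assert val <= np.exp(abs(yv)) * mass * (1.0 + 1e-9)
```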
Next we assert that $ K $ is entire on $ {\mathbb{C}^n} $ and real-valued on $ {\mathbb{R}^n} $ (thanks to $ \widehat K\left( \xi \right) = \widehat K\left( { - \xi } \right) $), with the following property
\begin{equation}\label{3.6}
\int_{{\mathbb{R}^n}} {{{\left( {u + \mathrm{i}v} \right)}^\alpha }{\partial ^\beta }K\left( {u + \mathrm{i}v} \right){\rm d}u} = \left\{ \begin{aligned}
&{\left( { - 1} \right)^{\left| \alpha \right|}}\alpha !,&\alpha = \beta , \hfill \\
&0,&\alpha \ne \beta , \hfill \\
\end{aligned} \right.
\end{equation}
for $ u,v \in {\mathbb{R}^n} $ and multi-indices $ \alpha ,\beta \in {\mathbb{N}^n} $. In order to prove assertion \eqref{3.6}, we first establish its real counterpart, for $ x\in \mathbb{R}^n $:
\begin{equation}\label{3.7}
\int_{{\mathbb{R}^n}} {{x^\alpha }{\partial ^\beta }K\left( x \right){\rm d}x} = \left\{ \begin{aligned}
&{\left( { - 1} \right)^{\left| \alpha \right|}}\alpha !,&\alpha = \beta \hfill, \\
&0,&\alpha \ne \beta \hfill. \\
\end{aligned} \right.
\end{equation}
{\bf Case 1:} If $ \alpha = \beta $, then integrating by parts $ \alpha_j $ times in each variable $ x_j $ successively gives
\begin{align*}
\int_{{\mathbb{R}^n}} {{x^\alpha }{\partial ^\beta }K\left( x \right){\rm d}x} &= \int_{{\mathbb{R}^n}} {\Big( {\prod\limits_{j = 1}^n {x_j^{{\alpha _j}}} } \Big)\Big( {\prod\limits_{j = 1}^n {\partial _{{x_j}}^{{\alpha _j}}} } \Big)K\left( x \right){\rm d}x} \\
&= {\left( { - 1} \right)^{{\alpha _1}}}{\alpha _1}!\int_{{\mathbb{R}^n}} {\Big( {\prod\limits_{j = 2}^n {x_j^{{\alpha _j}}} } \Big)\Big( {\prod\limits_{j = 2}^n {\partial _{{x_j}}^{{\alpha _j}}} } \Big)K\left( x \right){\rm d}x} \\
&= \cdots = {\left( { - 1} \right)^{{\alpha _1} + \cdots + {\alpha _n}}}{\alpha _1}! \cdots {\alpha _n}!\int_{{\mathbb{R}^n}} {K\left( x \right){\rm d}x} \\
& = {\left( { - 1} \right)^{\left| \alpha \right|}}\alpha !\widehat K\left( 0 \right) = {\left( { - 1} \right)^{\left| \alpha \right|}}\alpha !.
\end{align*}
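The repeated one-dimensional integration by parts used above rests on the identity $ \int_{\mathbb{R}} x^m g^{(m)}(x)\,{\rm d}x = (-1)^m m! \int_{\mathbb{R}} g(x)\,{\rm d}x $ for smooth decaying $ g $. A numerical sketch, assuming the Gaussian $ g(x)=e^{-x^2} $ as a stand-in test function (its $ m $-th derivative is $ (-1)^m H_m(x)e^{-x^2} $ with the physicists' Hermite polynomial $ H_m $):

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
g = np.exp(-x**2)                        # stand-in for K: smooth and decaying
total = np.sum(g) * dx                   # ∫ g = sqrt(pi)

for m in range(4):
    coeffs = [0.0] * m + [1.0]           # selects the Hermite polynomial H_m
    gm = (-1.0)**m * hermval(x, coeffs) * g          # m-th derivative of g
    lhs = np.sum(x**m * gm) * dx                     # ∫ x^m g^(m) dx
    rhs = (-1.0)**m * math.factorial(m) * total      # (-1)^m m! ∫ g dx
    assert abs(lhs - rhs) < 1e-8
```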
{\bf Case 2:} If $ {\alpha _j} \leq {\beta _j} - 1 $ for some $ j $, let $ j=1 $ without loss of generality. Integrating by parts $ \alpha_1 $ times in $ x_1 $, we get
\begin{align*}
\int_{{\mathbb{R}^n}} {{x^\alpha }{\partial ^\beta }K\left( x \right){\rm d}x} &= \int_{{\mathbb{R}^n}} {\Big( {\prod\limits_{j = 1}^n {x_j^{{\alpha _j}}} } \Big)\Big( {\prod\limits_{j = 1}^n {\partial _{{x_j}}^{{\beta _j}}} } \Big)K\left( x \right){\rm d}x} \\
&= {\left( { - 1} \right)^{{\alpha _1}}}{\alpha _1}!\int_{{\mathbb{R}^n}} {\Big( {\prod\limits_{j = 2}^n {x_j^{{\alpha _j}}} } \Big)\Big( {\prod\limits_{j = 2}^n {\partial _{{x_j}}^{{\beta _j}}} } \Big)\partial _{{x_1}}^{{\beta _1} - {\alpha _1}}K\left( x \right){\rm d}x}=0,
\end{align*}
since $ {\beta _1} - {\alpha _1} \geq 1 $ and the inner integral of $ \partial _{{x_1}}^{{\beta _1} - {\alpha _1}}K $ over $ x_1 \in \mathbb{R} $ vanishes by the decay in \eqref{5}.
{\bf Case 3:} In the remaining case $ {\alpha _j} \geq {\beta _j} $ for every $ j $, and $ {\alpha _j} \geq {\beta _j} + 1 $ for some $ j $ (otherwise $ \alpha = \beta $); let $ j=1 $ without loss of generality. We first record a consequence of \eqref{3.3}. Since
\[{\partial ^\alpha }\widehat K\left( 0 \right) = {\left( { - \mathrm{i}} \right)^{\left| \alpha \right|}}\int_{{\mathbb{R}^n}} {{x^\alpha }K\left( x \right){\rm d}x} = 0,\;\;\alpha \ne 0,\]
it follows that
\begin{displaymath}
\int_{{\mathbb{R}^n}} {{x^\alpha }K\left( x \right){\rm d}x} = 0,\;\;\alpha \ne 0.
\end{displaymath}
Hence, integrating by parts $ \beta_j $ times in each variable $ x_j $, we arrive at
\[\int_{{\mathbb{R}^n}} {{x^\alpha }{\partial ^\beta }K\left( x \right){\rm d}x} = {\left( { - 1} \right)^{\left| \beta \right|}}\Big( {\prod\limits_{j = 1}^n {\frac{{{\alpha _j}!}}{{\left( {{\alpha _j} - {\beta _j}} \right)!}}} } \Big)\int_{{\mathbb{R}^n}} {{x^{\alpha - \beta }}K\left( x \right){\rm d}x} = 0,\]
because $ \alpha - \beta \ne 0 $.
This proves \eqref{3.7}. Next, we will consider a complex translation of \eqref{3.7} and prove that
\begin{equation}\notag
\int_{{\mathbb{R}^n}} {{{\left( {u + \mathrm{i}v} \right)}^\alpha }{\partial ^\beta }K\left( {u + \mathrm{i}v} \right){\rm d}u} = \left\{ \begin{aligned}
&{\left( { - 1} \right)^{\left| \alpha \right|}}\alpha !,&\alpha = \beta, \hfill \\
&0,&\alpha \ne \beta . \hfill \\
\end{aligned} \right.
\end{equation}
Indeed, the decay estimate \eqref{5} allows one to shift the contour of integration from $ \mathbb{R}^n $ to $ \mathbb{R}^n + \mathrm{i}v $ via Cauchy's integral formula, so \eqref{3.7} carries over verbatim.
Finally, we prove that $ S_r $ reproduces polynomials:
\begin{equation}\label{3.10}
{S_r}{p} = {p}\;\;\text{for every polynomial }{p}:{\mathbb{R}^n} \to \mathbb{R}.
\end{equation}
By linearity it suffices to consider monomials
\[{p} = {x^\alpha } = \prod\limits_{j = 1}^n {x_j^{{\alpha _j}}} \ .\]
A direct calculation gives
\begin{align*}
{S_r}{p} &= {r^{ - n}}\int_{{\mathbb{R}^n}} {K\left( {\frac{{x - y}}{r}} \right)\prod\limits_{j = 1}^n {y_j^{{\alpha _j}}} {\rm d}y} = \int_{{\mathbb{R}^n}} {K\left( z \right)\prod\limits_{j = 1}^n {{{\left( {r{z_j} + {x_j}} \right)}^{{\alpha _j}}}} {\rm d}z} \\
& = \Big( {\prod\limits_{j = 1}^n {x_j^{{\alpha _j}}} } \Big)\int_{{\mathbb{R}^n}} {K\left( z \right){\rm d}z} + \sum\limits_\gamma {{\varphi _\gamma }\left( {r,x} \right)\int_{{\mathbb{R}^n}} {{z^\gamma }K\left( z \right){\rm d}z} } = \prod\limits_{j = 1}^n {x_j^{{\alpha _j}}} = {p},
\end{align*}
where $ {{\varphi _\gamma }\left( {r,x} \right)} $ are coefficients independent of $ z $.
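The mechanism behind \eqref{3.10} — unit mass plus vanishing moments of nonzero order — can be sketched numerically. Below, a Gaussian stand-in kernel is assumed (it has unit mass and vanishing first moment only, unlike the paper's $ K $, whose moments of every nonzero order vanish), which already reproduces affine polynomials exactly:

```python
import numpy as np

# Gaussian stand-in kernel: unit mass, zero first moment (illustrative only).
z = np.linspace(-10.0, 10.0, 400001)
dz = z[1] - z[0]
K = np.exp(-z**2) / np.sqrt(np.pi)

p = lambda t: 2.0 * t + 3.0              # affine test polynomial
r, x = 0.37, 1.5
# (S_r p)(x) = ∫ K(z) p(x + r z) dz after the substitution y = x + r z
Srp = np.sum(K * p(x + r * z)) * dz
assert abs(Srp - p(x)) < 1e-6            # the x-term survives, the r z-term cancels
```

Reproducing a polynomial of degree $ d $ requires the kernel moments of orders $ 1,\dots,d $ to vanish, which is exactly what \eqref{3.3} encodes for $ K $.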
For complex arguments, the same complex translation as above yields
\begin{equation}\notag
{p}\left( {u;\mathrm{i}v} \right) = {S_r}{p}\left( {u + \mathrm{i}v} \right) = \int_{{\mathbb{R}^n}} {K\left( {\mathrm{i}{r^{ - 1}}v - \eta } \right){p}\left( {u;r\eta } \right){\rm d}\eta }.
\end{equation}
The above preparations are classical, see also \cite{salamon,MR2071231}. Next we prove the Jackson type approximation theorem in terms of the modulus of continuity alone. We will make use of \eqref{3.10} in the case of the Taylor polynomial
\[{p_k}\left( {x;y} \right): = {P_{f,k}}\left( {x;y} \right) = \sum\limits_{\left| \alpha \right| \leq k} {\frac{1}{{\alpha !}}{\partial ^\alpha }f\left( x \right){y^\alpha }} \]
of $ f $ with $ k\in \mathbb{N} $. Note that
\[\left| {f\left( {x + y} \right) - {p_k}\left( {x;y} \right)} \right| = \Bigg| {\int_0^1 {k{{\left( {1 - t} \right)}^{k - 1}}\sum\limits_{\left| \alpha \right| = k} {\frac{1}{{\alpha !}}\left( {{\partial ^\alpha }f\left( {x + ty} \right) - {\partial ^\alpha }f\left( x \right)} \right){y^\alpha }{\rm d}t} } } \Bigg|\]
for every $ x,y \in {\mathbb{R}^n} $.
Define the following domains to partition $ \mathbb{R}^n $:
\[{\Omega _1}: = \left\{ {\eta \in {\mathbb{R}^n}:\left| \eta \right| < \delta{r^{ - 1}}} \right\},\;\;{\Omega _2}: = \left\{ {\eta \in {\mathbb{R}^n}:\left| \eta \right| \geq \delta{r^{ - 1}}} \right\}.\]
Different estimates are needed on these two domains; we record them first. If $ 0 < \left| y \right| < \delta $, we obtain that
\begin{align}
\left| {f\left( {x + y} \right) - {p_k}\left( {x;y} \right)} \right| \leq{}& \int_0^1 {k{{\left( {1 - t} \right)}^{k - 1}}\sum\limits_{\left| \alpha \right| = k} {\frac{1}{{\alpha !}} \cdot {{\left[ {{\partial ^\alpha }f} \right]}_\varpi }\varpi \left( {t\left| y \right|} \right) \cdot \left| {{y^\alpha }} \right|{\rm d}t} } \notag \\
\leq{}& c\left( {n,k} \right){\left\| f \right\|_\varpi }\int_0^1 {\varpi \left( {t\left| y \right|} \right){\rm d}t} \cdot \left| {{y^\alpha }} \right|\notag \\
\leq{}& c\left( {n,k} \right){\left\| f \right\|_\varpi }\varpi \left( {\left| y \right|} \right)\left| {{y^\alpha }} \right|\notag \\
\label{e1}\leq{}& c\left( {n,k} \right){\left\| f \right\|_\varpi }\varpi \left( {\left| y \right|} \right){\left| y \right|^k}.
\end{align}
If $ \left| y \right| \geq \delta $, one easily arrives at
\begin{align}
\left| {f\left( {x + y} \right) - {p_k}\left( {x;y} \right)} \right| \leq{}& \int_0^1 {k{{\left( {1 - t} \right)}^{k - 1}}\sum\limits_{\left| \alpha \right| = k} {\frac{1}{{\alpha !}} \cdot 2{{\left| {{\partial ^\alpha }f} \right|}_{{C^0}}} \cdot \left| {{y^\alpha }} \right|{\rm d}t} } \notag \\
\leq{}& c\left( {n,k} \right){\left\| f \right\|_\varpi }\left| {{y^\alpha }} \right|\notag \\
\label{e2} \leq{}& c\left( {n,k} \right){\left\| f \right\|_\varpi }{\left| y \right|^k}.
\end{align}
The H\"{o}lder inequality has been used in \eqref{e1} and \eqref{e2} with $ k \geq 1,{\alpha _i} \geq 1, \mu_i=k/\alpha_i\geq 1 $ without loss of generality:
\[\left| {{y^\alpha }} \right| = \prod\limits_{i = 1}^n {{{\left| {{y_i}} \right|}^{{\alpha _i}}}} \leq \sum\limits_{i = 1}^n {\frac{1}{{{\mu _i}}}{{\left| {{y_i}} \right|}^{{\alpha _i}{\mu _i}}}} \leq \sum\limits_{i = 1}^n {{{\left| {{y_i}} \right|}^k}} \leq \sum\limits_{i = 1}^n {{{\left| y \right|}^k}} = n{\left| y \right|^k}.\]
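The bound $ \left| y^\alpha \right| \leq n\left| y \right|^k $ can be checked numerically. The sketch below assumes the hypothetical choice $ n=3 $, $ k=4 $, $ \alpha=(2,1,1) $ and verifies both the weighted AM--GM step and the final bound on random samples:

```python
import math
import random

random.seed(0)
n, k = 3, 4
alpha = (2, 1, 1)                        # |alpha| = k, all entries >= 1
for _ in range(1000):
    y = [random.uniform(-2.0, 2.0) for _ in range(n)]
    mono = math.prod(abs(y[i]) ** alpha[i] for i in range(n))
    # weighted AM-GM with weights alpha_i / k (the weights sum to 1):
    amgm = sum((alpha[i] / k) * abs(y[i]) ** k for i in range(n))
    norm = math.sqrt(sum(t * t for t in y))
    assert mono <= amgm + 1e-12          # |y^alpha| <= sum (1/mu_i)|y_i|^k
    assert amgm <= n * norm ** k + 1e-9  # ... <= n |y|^k
```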
Now let $ x=u+\mathrm{i}v $ with $ u,v \in {\mathbb{R}^n} $ and $ \left| v \right| \leq r $. Fix $ p = n + k + 2 $ in \eqref{5}, and let $ c = c\left( {n,k} \right) > 0 $ denote a constant depending only on $ n $ and $ k $ that may change from line to line. Then it follows that
\begin{align*}
\left| {{S_r}f\left( {u + \mathrm{i}v} \right) - {p_k}\left( {u;\mathrm{i}v} \right)} \right| \leq{}& \int_{{\mathbb{R}^n}} {\left| K\left( {\mathrm{i}{r^{ - 1}}v - \eta } \right) \right|\left| {f\left( {u + r\eta } \right) - {p_k}\left( {u;r\eta } \right)} \right|{\rm d}\eta } \\
\leq{}& c\int_{{\mathbb{R}^n}} {\frac{{{e^{{r^{ - 1}}\left| v \right|}}}}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}}\left| {f\left( {u + r\eta } \right) - {p_k}\left( {u;r\eta } \right)} \right|{\rm d}\eta } \\
\leq{}& c\int_{{\mathbb{R}^n}} {\frac{1}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}}\left| {f\left( {u + r\eta } \right) - {p_k}\left( {u;r\eta } \right)} \right|{\rm d}\eta } \\
={}& c\left( \int_{{\Omega _1}} + \int_{{\Omega _2}} \right) {\frac{1}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}}\left| {f\left( {u + r\eta } \right) - {p_k}\left( {u;r\eta } \right)} \right|{\rm d}\eta } \\
= :{}& c\left( {{I_1} + {I_2}} \right),
\end{align*}
where in the second line we applied \eqref{5} with $ \operatorname{Re} z = -\eta $ and $ \operatorname{Im} z = r^{-1}v $, and in the third line we used $ \left| v \right| \leq r $, so that $ e^{r^{-1}\left| v \right|} \leq e $ is absorbed into $ c $.
As will be seen, $ I_1 $ is the main term while $ I_2 $ is a remainder.
Recall Remark \ref{Remarksemi} and \eqref{Ox}. Then the following holds due to \eqref{e1}:
\begin{align}\label{I1}
{I_1} ={}& \int_{{\Omega _1}} {\frac{1}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}}\left| {f\left( {u + r\eta } \right) - {p_k}\left( {u;r\eta } \right)} \right|{\rm d}\eta }\notag \\
\leq{}& \int_{\left| \eta \right| < \delta{r^{ - 1}}} {\frac{1}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}} \cdot c{{\left\| f \right\|}_\varpi }\varpi \left( {\left| {r\eta } \right|} \right){{\left| {r\eta } \right|}^k}{\rm d}\eta } \notag \\
\leq{}& \int_{\left| \eta \right| < \delta{r^{ - 1}}} {\frac{1}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}} \cdot c{{\left\| f \right\|}_\varpi }\varpi \left( { {r } } \right) \psi(|\eta|) {{\left| {r\eta } \right|}^k}{\rm d}\eta } \notag \\
\leq{}& c{\left\| f \right\|_\varpi }{r^k}{\varpi(r)}\int_0^{\delta{r^{ - 1}}} {\frac{{{w^{k + n }}}}{{{{\left( {1 + w} \right)}^p}}}{\rm d}w} \notag \\
\leq{}& c{\left\| f \right\|_\varpi }{r^k}{\varpi(r)}\int_0^{+\infty} {\frac{{{w^{k + n }}}}{{{{\left( {1 + w} \right)}^p}}}{\rm d}w} \notag \\
\leq{}& c{\left\| f \right\|_\varpi }{r^k}{\varpi(r)}.
\end{align}
In view of \eqref{e2}, we have
\begin{align}\label{I2}
{I_2} ={}& \int_{{\Omega _2}} {\frac{1}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}}\left| {f\left( {u + r\eta } \right) - {p_k}\left( {u;r\eta } \right)} \right|{\rm d}\eta } \notag \\
\leq{}& \int_{\left| \eta \right| \geq \delta{r^{ - 1}}} {\frac{1}{{{{\left( {1 + \left| \eta \right|} \right)}^p}}} \cdot c{{\left\| f \right\|}_\varpi }{{\left| {r\eta } \right|}^k}{\rm d}\eta } \notag \\
\leq{}& c{\left\| f \right\|_\varpi }{r^k}\int_{\delta{r^{ - 1}}}^{ + \infty } {\frac{{{w^{k + n - 1}}}}{{{{\left( {1 + w} \right)}^p}}}{\rm d}w} \notag \\
\leq{}& c{\left\| f \right\|_\varpi }{r^k}\int_{\delta{r^{ - 1}}}^{ + \infty } {\frac{1}{{{w^{p - k - n + 1}}}}{\rm d}w} \notag \\
\leq{}& c{\left\| f \right\|_\varpi }{r^{k+2}}.
\end{align}
By \eqref{I1} and \eqref{I2}, together with $ r^{k+2} = r^k \cdot r^2 \leq c\,{r^k}\varpi \left( r \right) $ for small $ r $ (since $ \mathop {\overline {\lim } }\limits_{r \to {0^ + }} r/{\varpi }\left( r \right) < + \infty $ in Definition \ref{d1}), we finally arrive at
\begin{equation}\notag
\left| {{S_r}f\left( {u + \mathrm{i}v} \right) - {p_k}\left( {u;\mathrm{i}v} \right)} \right| \leq c{\left\| f \right\|_\varpi }{r^k} {\varpi(r)}.
\end{equation}
This proves Theorem \ref{Theorem1} in the case $ |\alpha| = 0 $. For $ |\alpha| \ne 0 $, the result follows from the fact that $ {S_r} $ commutes with $ {\partial ^\alpha } $. This finishes the proof of Theorem \ref{Theorem1}.
\end{proof}
\subsection{Proof of Corollary \ref{coro1}}\label{proofcoro1}
\begin{proof}
Only the analysis of case $ \left| \alpha \right| = 0 $ is given. In view of Theorem \ref{Theorem1} and \eqref{e1}, we obtain that
\begin{align}
\left| {{S_r}f\left( x \right) - f\left( x \right)} \right| \leq{}& \left| {{S_r}f\left( x \right) - {P_{f,k}}\left( {\operatorname{Re} x;\mathrm{i}\operatorname{Im} x} \right)} \right| + \left| {{P_{f,k}}\left( {\operatorname{Re} x;\mathrm{i}\operatorname{Im} x} \right) - f\left( x \right)} \right|\notag \\
\label{Srf-f} \leq{}& c_*{\left\| f \right\|_\varpi }{r^k}\varpi(r) ,
\end{align}
where the constant $ c_*>0 $ depends on $ n $ and $ k $.
Further, by \eqref{Srf-f} we have
\begin{align*}
\left| {{S_r}f\left( x \right)} \right| \leq{}& \left| {{S_r}f\left( x \right) - f\left( x \right)} \right| + \left| {f\left( x \right)} \right|\notag \\
\leq{}& c_*{\left\| f \right\|_\varpi }{r^k}\varpi(r) + {\left\| f \right\|_\varpi } \leq {c^ * }{\left\| f \right\|_\varpi },
\end{align*}
where $ c^*>0 $ is a constant depending on $ n,k $ and $ \varpi $. This completes the proof.
\end{proof}
\subsection{Proof of Corollary \ref{coro2}}\label{proofcoco2}
\begin{proof}
Since $ f $ is of period $ 1 $ in each variable, it is easy to verify that
\begin{align*}
{S_r}f\left( {x + 1} \right) ={}& \frac{1}{{{r^n}}}\int_{{\mathbb{R}^n}} {K\left( {\frac{{x - \left( {y - 1} \right)}}{r}} \right)f\left( y \right){\rm d}y} \notag \\
={}& \frac{1}{{{r^n}}}\int_{{\mathbb{R}^n}} {K\left( {\frac{{x - u}}{r}} \right)f\left( {u + 1} \right){\rm d}u} \notag \\
={}& \frac{1}{{{r^n}}}\int_{{\mathbb{R}^n}} {K\left( {\frac{{x - u}}{r}} \right)f\left( u \right){\rm d}u} = {S_r}f\left( x \right).
\end{align*}
According to Fubini's theorem and the substitution $ y = x + m $ (recall that $ K $ is even), we obtain
\begin{align*}
\int_{{\mathbb{T}^n}} {{S_r}f\left( x \right){\rm d}x} ={}& \frac{1}{{{r^n}}}\int_{{\mathbb{T}^n}} {\int_{{\mathbb{R}^n}} {K\left( {\frac{{x - y}}{r}} \right)f\left( y \right){\rm d}y} \,{\rm d}x} \notag \\
={}& \frac{1}{{{r^n}}}\int_{{\mathbb{R}^n}} {K\left( {\frac{m}{r}} \right)\left( {\int_{{\mathbb{T}^n}} {f\left( {x + m} \right){\rm d}x} } \right) {\rm d}m} = 0,
\end{align*}
since $ \int_{{\mathbb{T}^n}} {f\left( {x + m} \right){\rm d}x} = \int_{{\mathbb{T}^n}} {f\left( x \right){\rm d}x} $ by periodicity, which vanishes under the assumption of Corollary \ref{coro2}.
This completes the proof.
\end{proof}
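The translation argument above can be illustrated with a one-dimensional discretization. The Gaussian kernel below is a stand-in for $ K $ (an assumption, chosen only because it is even, decaying and of unit mass):

```python
import numpy as np

# Discretized S_r f(x) = r^{-1} ∫ K((x - y)/r) f(y) dy with a stand-in kernel.
y = np.linspace(-30.0, 30.0, 600001)
dy = y[1] - y[0]
f = lambda t: np.sin(2.0 * np.pi * t)    # 1-periodic, zero mean
r = 0.2

def S_r(x):
    kernel = np.exp(-((x - y) / r) ** 2) / (r * np.sqrt(np.pi))
    return np.sum(kernel * f(y)) * dy

# S_r inherits the period of f: S_r f(x + 1) = S_r f(x)
assert abs(S_r(1.3) - S_r(0.3)) < 1e-8
```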
\subsection{Asymptotic analysis}
Here we provide some useful asymptotic results, all of which can be proved by L'H\^{o}pital's rule or by integration by parts; the proofs are therefore omitted.
\begin{lemma}\label{duochongduishu}
Let $ \varrho \in \mathbb{N}^+ $, $ \lambda>1 $ and some $ M>0 $ sufficiently large be fixed. Then as $ X\to +\infty $,
\[\int_M^X {\frac{1}{{(\ln z) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho z)}^\lambda }}}{\rm d}z} = {\mathcal{O}^\# }\Bigg( {\frac{X}{{(\ln X) \cdots {{(\underbrace {\ln \cdots \ln }_\varrho X)}^\lambda }}}} \Bigg).\]
\end{lemma}
\begin{lemma}\label{erheyi}
Let $ 0 <\sigma<1 $, $ \lambda>1 $ and some $ M>0 $ sufficiently large be fixed. Then as $ X\to +\infty $,
\begin{equation}\label{erheyi1}
\int_M^X {\frac{1}{{{z^\sigma }{{\left( {\ln z} \right)}^\lambda }}}{\rm d}z} = {\mathcal{O}^\# }\left( {\frac{{{X^{1 - \sigma }}}}{{{{\left( {\ln X} \right)}^\lambda }}}} \right),
\end{equation}
and
\begin{equation}\label{erheyi2}
\int_X^{ + \infty } {\frac{1}{{{z^{1 + \sigma }}{{\left( {\ln z} \right)}^\lambda }}}{\rm d}z} = {\mathcal{O}^\# }\left( {\frac{1}{{{X^\sigma }{{\left( {\ln X} \right)}^\lambda }}}} \right).
\end{equation}
\end{lemma}
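Reading $ \mathcal{O}^\# $ as a two-sided bound, \eqref{erheyi1} can be probed numerically. The sketch below assumes the hypothetical parameters $ \sigma=1/2 $, $ \lambda=2 $, $ M=10 $, evaluates the integral via the substitution $ z=e^u $, and checks that its ratio to $ X^{1-\sigma}/(\ln X)^\lambda $ stays within fixed positive bounds:

```python
import numpy as np

sigma, lam, M, X = 0.5, 2.0, 10.0, 1e8   # illustrative parameter choices
# substitute z = e^u, dz = e^u du:
u = np.linspace(np.log(M), np.log(X), 200001)
integrand = np.exp(u * (1.0 - sigma)) / u**lam
I = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(u))  # trapezoid rule

predicted = X ** (1.0 - sigma) / np.log(X) ** lam
ratio = I / predicted
assert 1.0 < ratio < 4.0                 # two-sided comparability at this X
```

The ratio approaches $ (1-\sigma)^{-1} = 2 $ modulo lower-order corrections in $ 1/\ln X $, consistent with a single integration by parts.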
\subsection{KAM theorem for quantitative estimates}\label{Appsalamon}
Here we state a quantitative KAM theorem, which is used in the proof of Theorem \ref{theorem1} in this paper. See Theorem 1 in Salamon's paper \cite{salamon} for details.
\begin{theorem}\label{appendix}
Let $ n \geq 2, \tau > n - 1, 0 < \theta < 1$, and $ M \geq 1 $ be given. Then there are positive constants $ \delta_* $ and $ c $ such that $ c\delta_*\leq1/2 $ and the following holds for every $ 0 < r^* \leq 1 $ and every $ \omega\in\mathbb{R}^n $ that satisfies \eqref{dio} (for $ \tau>n-1 $).
Suppose that $ H(x, y) $ is a real analytic Hamiltonian function defined in the strip
$ \left| {\operatorname{Im} x} \right| \leq {r^ * },\left| y \right| \leq {r^ * } $, which is of period $ 1 $ in the variables $ {x_1}, \ldots ,{x_n} $ and satisfies
\begin{align*}
\left| {H\left( {x,0} \right) - \int_{{\mathbb{T}^n}} {H\left( {\xi ,0} \right)d\xi } } \right| &\leq {\delta ^ * }{r^ * }^{2\tau + 2},\notag\\
\left| {{H_y}\left( {x,0} \right) - \omega } \right| &\leq {\delta ^ * }{r^ * }^{\tau + 1},\notag\\
\left| {{H_{yy}}\left( {x,y} \right) - Q\left( {x,y} \right)} \right| &\leq \frac{{c{\delta ^ * }}}{{2M}},\notag
\end{align*}
for $ \left| {\operatorname{Im} x} \right| \leq r^*,\left| y \right| \leq r^* $, where $ 0 < {\delta ^ * } \leq {\delta _ * } $, and $ Q\left( {x,y} \right) \in {\mathbb{C}^{n \times n}} $ is a symmetric
(not necessarily analytic) matrix valued function in the strip $ \left| {\operatorname{Im} x} \right| \leq r^*,\left| y \right| \leq r^* $
and satisfies in this domain
\[\left| {Q\left( z \right)} \right| \leq M,\;\;\left| {{{\left( {\int_{{\mathbb{T}^n}} {Q\left( {x,0} \right){\rm d}x} } \right)}^{ - 1}}} \right| \leq M.\]
Then there exists a real analytic symplectic transformation $ z = \phi \left( \zeta \right) $ of the
form
\[z = \left( {x,y} \right),\;\;\zeta = \left( {\xi ,\eta } \right),\;\;x = u\left( \xi \right),\;\;y = v\left( \xi \right) + \big( u_\xi \left( \xi \right)^{\top} \big)^{ - 1}\eta \]
mapping the strip $ \left| {\operatorname{Im} \xi } \right| \leq \theta r^*,\left| \eta \right| \leq \theta r^* $ into $ \left| {\operatorname{Im} x} \right| \leq r^*,\left| y \right| \leq r^* $, such that $ u\left( \xi \right) - \xi $ and $ v\left( \xi \right) $ are of period $ 1 $ in all variables and the Hamiltonian function $ K: = H \circ \phi $ satisfies
\[{K_\xi }\left( {\xi ,0} \right) = 0,\;\;{K_\eta }\left( {\xi ,0} \right) = \omega .\]
Moreover, $ \phi $ and $ K $ satisfy the estimates
\begin{align*}
&\left| {\phi \left( \zeta \right) - \zeta } \right| \leq c{\delta ^ * }\left( {1 - \theta } \right){r^ * },\;\;\left| {{\phi _\zeta }\left( \zeta \right) - \mathbb{I}} \right| \leq c{\delta ^ * },\\
&\left| {{K_{\eta \eta }}\left( \zeta \right) - Q\left( \zeta \right)} \right| \leq \frac{{c{\delta ^ * }}}{M},\\
&\left| {v \circ {u^{ - 1}}\left( x \right)} \right| \leq c{\delta ^ * }{r^ * }^{\tau + 1},
\end{align*}
for $ \left| {\operatorname{Im} \xi } \right| \leq \theta r^*,\left| \eta \right| \leq \theta r^* $, and $ \left| {\operatorname{Im} x} \right| \leq \theta r^* $.
\end{theorem}
\section*{Acknowledgements}
This work was supported in part by National Basic Research Program of China (Grant No. 2013CB834100), National Natural Science Foundation of China (Grant No. 12071175, Grant No. 11171132, Grant No. 11571065), Project of Science and Technology Development of Jilin Province (Grant No. 2017C028-1, Grant No. 20190201302JC), and Natural Science Foundation of Jilin Province (Grant No. 20200201253JC).
\bigskip\hrule\bigskip
\noindent{\bf Quotient graphs for power graphs} (arXiv:1502.02966)
\medskip

\noindent In a previous paper of the first author a procedure was developed for counting the components of a graph through the knowledge of the components of its quotient graphs. We apply here that procedure to the proper power graph $\mathcal{P}_0(G)$ of a finite group $G$, finding a formula for the number $c(\mathcal{P}_0(G))$ of its components which is particularly illuminative when $G\leq S_n$ is a fusion controlled permutation group. We make use of the proper quotient power graph $\widetilde{\mathcal{P}}_0(G)$, the proper order graph $\mathcal{O}_0(G)$ and the proper type graph $\mathcal{T}_0(G)$. We show that all those graphs are quotient of $\mathcal{P}_0(G)$ and demonstrate a strong link between them dealing with $G=S_n$. We find simultaneously $c(\mathcal{P}_0(S_n))$ as well as the number of components of $\widetilde{\mathcal{P}}_0(S_n)$, $\mathcal{O}_0(S_n)$ and $\mathcal{T}_0(S_n)$.
\section{\bf Introduction and main results}
\vskip 0.4 true cm
Kelarev and Quinn~\cite{kq} defined the directed power graph $\overrightarrow{P(S)}$ of a semigroup $S$ as the directed graph in which the set of vertices is $S$ and, for $x, y\in S$, there is an arc $(x,y)$ if $y=x^m$, for some $m\in\mathbb{N}$. The \emph{power graph} $P(S)$ of a semigroup $S$, was defined by Chakrabarty, Ghosh and Sen~\cite{cgs} as the corresponding underlying undirected graph. They proved that for a finite group $G$, the power graph $P(G)$ is complete if and only if $G$ is a cyclic group of prime power order. In~\cite{c, cg} Cameron and Ghosh obtained interesting results about power graphs of finite groups, studying how the group of the graph automorphisms of $P(G)$ affects the structure of the group $G$. Mirzargar, Ashrafi and Nadjafi~\cite{man} considered some further graph theoretical properties of the power graph $P(G)$, such as the clique number, the independence number and the chromatic number and their relation to the group theoretical properties of $G$. Even though young, the theory of power graphs seems to be a very promising research area and the majority of its beautiful results, dating before 2013, are collected in a survey \cite{sur}.
In this paper we deal with the connectivity level of $P(G)$. Since it is obvious that $P(G)$ is connected of diameter at most $2$, the focus is on $2$-connectivity. Recall that a graph $X=(V_X,E_X)$ is $2$-connected if, for every $x\in V_X$, the $x$-deleted subgraph of $X$ is connected.
Thus $P(G)$ is $2$-connected if and only if $P_0 (G)$, the $1$-deleted subgraph of $P (G)$, is connected. $P_0 (G)$ is called the {\it proper power graph} of $G$ and our main aim is to find a formula for the number $c_0(G)$ of its components. We denote its vertex set $G\setminus\{1\}$ by $G_0.$ Recently Curtin, Pourgholi and Yousefi-Azari \cite{pya2} considered the properties of the diameter of $P_0 (G)$ and characterized the groups $G$ for which $P_0 (G)$ is Eulerian. Moghaddamfar, Rahbariyan and Shi \cite{MRS}, found many relations between the group theoretical properties of $G$ and the graph theoretical properties of $P_0 (G)$.
Here we fruitfully apply the theory developed in \cite{bub} to get control on number and nature of the components of $P_0 (G)$ through those of some of its quotient graphs.
We will be using notation and definitions given there about graphs and homomorphisms. In particular, every graph is finite, undirected, simple and reflexive, so that there is a loop on each vertex. The assumption about loops, which is not common for power graphs, is clearly very mild in treating connectivity and does not affect any result about components.
Up to now, the $2$-connectivity of $P(G)$ has been studied for nilpotent groups and for some types of simple groups in ~\cite{pya}; for groups admitting a partition and for the symmetric and alternating groups, with a particular interest on the diameter of $P_0 (G)$, in ~\cite{DF}. In those papers the results are obtained
through ingenious ad hoc arguments, without developing a general method. In any case, the reasoning often involves element orders and, when $G\leq S_n,$ the cycle decomposition of permutations. We observed that what makes those ideas work is the existence of some quotient graphs for $P_0 (G)$. For $\psi\in S_n$, let $T_{\psi}$ denote the type of $\psi$, that is, the partition of $n$ given by the lengths of the orbits of $\psi$. Then
there exists a quotient graph $\mathcal{O}_0(G)$ of $P_0 (G)$ having as vertex set $\{o(g): g\in G_0\}$ and, when $G$ is a permutation group, there exists a quotient graph $P_0(\mathcal{T}(G))$ of $P_0 (G)$ having vertex set $\mathcal{T}(G_0)$ given by the types $T_{\psi}$ of the permutations $\psi\in G_0$ (Sections \ref{order-sec} and \ref{permutation}).
Recall that a homomorphism $\varphi$ from the graph $X$ to the graph $Y$ is called complete if it maps both the vertices and edges of $X$ onto those of $Y$; tame if vertices with the same image are connected; locally surjective if it maps the neighborhood of each vertex of $X$ onto the neighborhood in $Y$ of its image; orbit if the sets of vertices in $V_X$ sharing the same image coincide with the orbits of a group of automorphisms of $X$.
The starting point of our approach is to consider the \emph{quotient power graph} $\widetilde{P}_0(G)$, obtained from $P_0 (G)$ by the identification of the vertices generating the same cyclic subgroup (Section \ref{onda}). The projection of $P_0 (G)$ on $\widetilde{P}_0(G)$ is tame and thus the number $\widetilde {c}_0(G)$ of components of $\widetilde{P}_0(G)$ is equal to $c_0(G)$. Moreover, both $\mathcal{O}_0(G)$ and $P_0(\mathcal{T}(G))$ may be seen also as quotients of $\widetilde{P}_0(G)$, with a main difference between them.
The projection $\widetilde{o}$ on $\mathcal{O}_0(G)$ is not, in general, locally surjective (Example \ref{no-fusion}) while, for any $G\leq S_n$, the projection $\tilde{t}$ on $P_0(\mathcal{T}(G))$ is complete and locally surjective (Propositions \ref{tau} and \ref{2con}).
As a consequence, while finding $c_0(G)$ through $\mathcal{O}_0(G)$ can be hard, it is manageable through $P_0(\mathcal{T}(G))$.
Now call
$G\leq S_n$ \emph{fusion controlled} if, for every $\psi\in G$ and $x\in S_n$ such that $\psi^x\in G$, there exists $y\in N_{S_n}(G)$ such that $\psi^x=\psi^y.$ Obviously $S_n$ and $A_n$ are both fusion controlled, but they are not the only examples. For instance, if $n=mr$, with $m,r\geq 2$, then the base group $G$ of the wreath product $S_m\wr S_r=N_{S_n}(G)$ is fusion controlled.
If $G$ is fusion controlled, then $\tilde{t}$ is a complete orbit homomorphism (Proposition \ref{conj}) and hence \cite[Theorem B] {bub} applies to $X=\widetilde{P}_0(G)$ and $Y=P_0(\mathcal{T}(G))$, giving an algorithmic method to get $c_0(G).$
In order to state our main results we need some further notation.
Denote by $\widetilde{\mathcal{C}}_0(G)$ the set of components of $\widetilde{P}_0(G)$; by $\mathcal{C}_0(\mathcal{T}(G))$ the set of components of $P_0(\mathcal{T}(G))$ and by $c_0(\mathcal{T}(G))$ their number; by $c_0(\mathcal{O}(G))$ the number of components of $\mathcal{O}_0(G)$.
For $T\in \mathcal{T}(G_0)$, denote by $\mu_T(G)$ the number of permutations of type $T$ in $G$; by $o(T)$ the order of any permutation having type $T;$ by $C(T)$ the component of $P_0(\mathcal{T}(G))$ containing $T$; by $\widetilde{\mathcal{C}}_0(G)_{T}$ the set of components of $\widetilde{P}_0(G)$ in which there exists at least one vertex of type $T.$ Finally, for $C\in \widetilde{\mathcal{C}}_0(G)_{T}$, let $k_{C}(T)$ be the number of vertices in $C$ having type $T,$
and let $\phi$ denote Euler's totient function.
\begin{thmA} Let $G\leq S_n$ be fusion controlled. For $1\leq i\leq c_0(\mathcal{T}(G))$, let $T_i\in \mathcal{T}(G_0)$ be such that $\mathcal{C}_0(\mathcal{T}(G))=\{C(T_i): i\in\{1,\dots,c_0(\mathcal{T}(G))\}\}$, and pick $C_i\in \widetilde{\mathcal{C}}_0(G)_{T_i}.$
Then
\begin{equation}\label{formula-fusion22}\displaystyle{c_0(G)=\sum_{i=1}^{c_0(\mathcal{T}(G))}\frac{\mu_{T_i} (G)}{\phi(o(T_i))k_{C_i}(T_i)}}.\end{equation}
\end{thmA}
The connectivity properties of the graphs $\widetilde{P}_0(G)$, $P_0(\mathcal{T}(G))$ and $\mathcal{O}_0(G)$ are strictly linked when $G\leq S_n$, especially when $G$ is fusion controlled.
In the last section of the paper we consider $G=S_n$, finding in one go $c_0(S_n)$ as well as $c_0(\mathcal{T}(S_n))$ and $c_0(\mathcal{O}(S_n))$. In particular we find, with a different approach, the values of $c_0(S_n)$ in \cite[Theorem 4.2]{DF}. Throughout the paper we denote by $P$ the set of prime numbers and put $P+1=\{x\in \mathbb{N}: x=p+1, \ \hbox{for some}\ p\in P\}. $
\begin{thmB} The values of $c_0(S_n)= \widetilde {c}_0(S_n), c_0(\mathcal{T}(S_n))$ and $c_0(\mathcal{O}(S_n))$ are as follows.
\begin{itemize}
\item[(i)] For $2\leq n\leq 7,$ they are given in Table \ref{eqtable1}
below.
\\
\begin{table}[h]
\caption{$c_0(S_n), c_0(\mathcal{T}(S_n))$ and $c_0(\mathcal{O}(S_n))$ for $2\leq n \leq 7$.}\label{eqtable1}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $2 $&$ 3$ & $4$ & $5$ & $6$ &$ 7$\\
\hline
$c_0(S_n)$ &$ 1 $&$ 4$ & $13$ &$ 31$ & $83$ & $128$\\
\hline
$c_0(\mathcal{T}(S_n))$ & $1$ & $2$ &$ 3$ &$ 3$ &$ 4$&$ 3$\\
\hline
$c_0(\mathcal{O}(S_n))$ & $1$ &$ 2$ &$ 2$ & $2 $&$ 2$&$ 2$\\
\hline
\end{tabular}
\end{center}
\end{table}
\item[(ii)] For $n\geq 8$, they are given by Table \ref{eqtable2} below, according to whether $n$ is a prime, one greater than a prime, or neither.
\end{itemize}
\begin{table}[ht]
\caption{$c_0(S_n)$, $c_0(\mathcal{T}(S_n))$ and $c_0(\mathcal{O}(S_n))$ for $ n \geq 8$}\label{eqtable2}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$n$ & $n\in P$ & $n\in P+1$ & $n\notin P\cup (P+1)$ \\
\hline
$c_0(S_n)$ & $(n-2)!+1$ & $n(n-3)!+1$ & $1$ \\
\hline
$c_0(\mathcal{T}(S_n))=c_0(\mathcal{O}(S_n))$ & $2$ & $2$ & $1$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\end{thmB}
\begin{corC} Let $n\in\mathbb{N}$, with $n\geq 2.$ The following facts are equivalent:
\begin{itemize}
\item[(i)] $P(S_n)$ is $2$-connected;
\item[(ii)] $P_0(S_n)$ is connected;
\item[(iii)] $\widetilde{P}_0(S_n)$ is connected;
\item[(iv)] $P_0(\mathcal{T}(S_n))$ is connected;
\item[(v)] $\mathcal{O}_0(S_n)$ is connected;
\item[(vi)] $n=2$ or $n\in \mathbb{N}\setminus [P\cup(P+1)].$
\end{itemize}
\end{corC}
Observe that the proper power type graph of $S_n$ is of purely arithmetic interest, because $\mathcal{T}(S_n)$ is the set of the partitions of $n.$
Going beyond mere counting, we describe the components of the graphs in $\mathcal {G}_0=\{P_0(S_n),\widetilde{P}_0(S_n), P_0(\mathcal{T}(S_n)), \mathcal{O}_0(S_n)\}. $
To start with, note that in the connected case $n\in \mathbb{N}\setminus [P\cup(P+1)]$, no $X_0\in \mathcal {G}_0$ is a complete graph, because $X_0$ admits as quotient $\mathcal{O}_0(S_n)$, which is certainly incomplete, having at least two primes among its vertices. For a vertex $\psi\in S_n\setminus\{id\}$ of $P_0 (S_n)$, let $[\psi]$ denote the corresponding vertex of $\widetilde{P}_0(S_n)$.
\begin{thmD} Let $n\geq 8$, with $n=p$ or $n=p+1$ for some prime $p$, and let $\widetilde{\Delta}_n$ be the component of $\widetilde{P}_0(S_n)$ containing $[(12)]$.
\begin{itemize}
\item[(i)] $\widetilde{P}_0(S_n)$ consists of the main component $\widetilde{\Delta}_n$, which contains $[\psi]$ for every permutation $\psi\in S_n\setminus\{id\}$ other than the $p$-cycles, together with $(n-2)!$ or $n(n-3)!$ further components, respectively, each reduced to an isolated vertex.
\item[(ii)] $P_0(S_n)$ consists of the main component induced by $\{\psi\in S_n\setminus\{id\}: [\psi]\in V_{\widetilde{\Delta}_n}\}$, which contains every permutation other than the $p$-cycles, together with $(n-2)!$ or $n(n-3)!$ further components, respectively, each consisting of $p-1$ $p$-cycles forming a complete graph.
\item[(iii)] $P_0(\mathcal{T}(S_n))$ consists of the main component $\widetilde{t}(\widetilde{\Delta}_n)$, which contains $T_{\psi}$ for every permutation $\psi\in S_n\setminus\{id\}$ other than the $p$-cycles, together with the component containing the type $[p]$ or $[1,p]$, respectively, which is an isolated vertex.
\item[(iv)] $\mathcal{O}_0(S_n)$ consists of the main component $\widetilde{o}(\widetilde{\Delta}_n)$, which contains the order of every permutation in $ S_n\setminus\{id\}$ other than $p$, together with the component reduced to the isolated vertex $p.$
\end{itemize}
In all the above cases, the main component is never complete.
\end{thmD}
Complete information about the components of the graphs in $\mathcal{G}_0$, for $3\leq n\leq 7$, can be found within the proof of Theorem B\,(i), taking into account Lemma \ref{component-quotient} for $P_0(S_n)$.
In particular, looking at the details, one easily checks that all the components of $X_0 \in\mathcal{G}_0$ apart from one are isolated vertices (complete graphs when $X_0=P_0(S_n)$) if and only if $n\geq 8$ or $n=2.$
In a forthcoming paper \cite{BIS2} we will treat the alternating group $A_n$, computing $c_0(A_n)$, $c_0(\mathcal{T}(A_n))$ and $c_0(\mathcal{O}(A_n)).$ We will also correct some mistakes about $c_0(A_n)$ found in \cite{DF}. We believe that our algorithmic method \cite[Theorem B]{bub} may help, more generally, to obtain $c_0(G)$ when $G$ is a simple or almost simple group. This, in particular, could give an answer to the interesting problem of classifying all the simple groups with $2$-connected power graph, posed in \cite[Question 2.10]{pya}. Concerning that problem, in \cite{BIS2} we show that there exist infinitely many alternating groups with $2$-connected power graph and that $A_{16}$ is the one of smallest degree.
\vskip 0.6 true cm
\section{\bf Graphs}\label{graphs}
\vskip 0.4 true cm
For a finite set $A$ and $k\in \mathbb{N}$, let $\binom{A}{k}$ be the set of subsets of $A$ of size $k.$ In this paper, as in \cite{bub}, a graph
$X=(V_X,E_X)$ is a pair of finite sets such that $V_X\neq\varnothing$ is the set of vertices, and $E_X$
is the set of edges, which is the union of the set of loops $L_X=\binom{V_X}{1}$
and a set of proper edges $E^*_X\subseteq \binom{V_X}{2}$. We usually specify the edges of a graph $X$ by giving only $E^*_X$.
Paper \cite{bub} is the main reference for the present paper. For general information about graphs see \cite[Section 2]{bub}. Recall that, for a graph $X$, $\mathcal{C}(X)$ denotes the set of components of $X$ and $c(X)$ their number. If $x\in V_X$, the component of $X$ containing $x$ is denoted by $C_X(x)$ or, more simply, when there is no risk of misunderstanding, by $C(x)$.
For $s\in\mathbb{N}\cup\{0\}$, a subgraph $\gamma$ of $X$ such that $V_{\gamma}=\{x_i: 0\leq i\leq s\}$ with distinct $x_i\in V_X$ and $E^*_{\gamma}=\{\{x_i,x_{i+1}\} : 0\leq i\leq s-1\}$, is a path of length $s$ between $x_0$ and $x_s$, and will be simply denoted by the ordered list $x_0,\dots, x_s$ of its vertices.
For the formal definitions of surjective, complete, tame, locally surjective, pseudo-covering, orbit and component equitable homomorphism and the notation for the corresponding sets of homomorphisms see \cite[Section 4.2, Definitions 5.3, 5.7, 4.4, 6.4]{bub}.
By \cite[Propositions 5.9 and 6.9]{bub}, we have that
\begin{equation}\label{general-inc}
\mathrm{O}(X,Y)\cap \mathrm{Com}(X,Y)\subseteq \mathrm{LSur}(X,Y)\cap \mathrm{CE}(X,Y)\cap \mathrm{Com}(X,Y).
\end{equation}
The content of \cite{bub} is all we need to conduct our arguments, apart from a couple of definitions and related results.
\begin{defn}\label{q-2hom} {\rm Let $X$ and $Y$ be graphs, and $\varphi\in\mathrm{Hom}(X,Y).$ Then $\varphi$ is called a $2$-{\it homomorphism} if, for every $e\in E^*_X$, $\varphi(e)\in E^*_Y$. We denote the set of the $2$-homomorphisms from $X$ to $Y$ by $2 \mathrm{Hom}(X,Y)$.}
\end{defn}
From that definition we immediately deduce the following lemma.
\begin{lem}\label{2hom-iso} Let $\varphi\in 2\mathrm{Hom}(X,Y)$ and $x\in V_X$. If $\varphi(x)$ is isolated in $Y,$ then $x$ is isolated in $X$.
\end{lem}
\begin{defn}\label{deleted} {\rm
Let $X$ be a graph.
If $x_0\in V_X$, then the {\it $x_0$-deleted subgraph} $X-x_0$ is defined as the subgraph of $X$ with vertex set $V_X\setminus\{x_0\}$ and edge set given by the edges in $E_X$ not incident to $x_0$.
$X$ is called {\it $2$-connected} if, for every $x\in V_X$, $X-x$ is connected.}
\end{defn}
To deal with vertex deleted subgraphs and quotient graphs, we will use several times the following lemma.
\begin{lem}\label{cutgraph-hom} Let $\varphi\in\mathrm{Hom}(X,Y)$.
\begin{itemize}
\item[(i)] If $x_0\in V_X$ is such that $\varphi^{-1}(\varphi(x_0))=\{x_0\}$, then $\varphi$ induces naturally $\varphi_{x_0}\in\mathrm{Hom}( X-x_0,Y-\varphi(x_0)).$
If $\varphi$ is surjective (complete, pseudo-covering), then also $\varphi_{x_0}$ is surjective (complete, pseudo-covering).
\item[(ii)] Let $\sim$ be an equivalence relation on $V_X$ such that, for each $x_1,x_2\in V_X$, $x_1\sim x_2$ implies $\varphi(x_1)=\varphi(x_2).$ Then the map $\tilde{\varphi}:[V_X]\rightarrow V_Y$, defined by $\tilde{\varphi}([x])=\varphi(x)$ for all $[x]\in[V_X]$, is a homomorphism from $X/\hspace{-1mm}\sim$ to $Y$ such that $\tilde{\varphi}\circ \pi=\varphi.$
If $\varphi$ is surjective (complete, pseudo-covering), then also $\tilde{\varphi}$ is surjective (complete, pseudo-covering).
\end{itemize}
\end{lem}
\begin{proof} (i) Since $\varphi(V_X\setminus\{x_0\})\subseteq V_Y\setminus\{\varphi(x_0)\}$, we can consider the map $\varphi_{x_0}: V_X\setminus\{x_0\}\rightarrow V_Y\setminus\{\varphi(x_0)\}$, defined by $\varphi_{x_0}(x)=\varphi(x)$ for all $x\in V_X\setminus\{x_0\}.$
We show that $\varphi_{x_0}$ defines a homomorphism. Pick $e\in E_{X-x_0}$, so that $e=\{x_1,x_2\}\in E_X$ for suitable $x_1,x_2\in V_X$ with $x_1,x_2\neq x_0$.
By $\varphi^{-1}(\varphi(x_0))=\{x_0\}$, we get $\varphi(x_1), \varphi(x_2)\neq \varphi(x_0)$ and thus, since $\varphi$ is a homomorphism, we get $\{\varphi(x_1), \varphi(x_2)\}\in E_{Y-\varphi(x_0)}.$
If $\varphi$ is surjective, clearly $\varphi_{x_0}$ is surjective.
Assume now that $\varphi$ is complete and show that $\varphi_{x_0}$ is complete. Let $\{\varphi_{x_0}(x_1), \varphi_{x_0}(x_2)\}\in E_{Y-\varphi(x_0)}.$ Then $\{\varphi(x_1), \varphi(x_2)\}\in E_Y,$ with $\varphi(x_1), \varphi(x_2)\neq \varphi(x_0)$.
Since $\varphi$ is complete, there exist $\overline{x}_1, \overline{x}_2\in V_X$ such that $\varphi(\overline{x}_1)=\varphi(x_1),\ \varphi(\overline{x}_2)=\varphi(x_2)$ and $\{\overline{x}_1, \overline{x}_2\}\in E_X.$ From $\varphi(x_1), \varphi(x_2)\neq \varphi(x_0)$ we deduce that $\overline{x}_1, \overline{x}_2\neq x_0.$ Thus $\overline{x}_1, \overline{x}_2\in V_{X-x_0}$ and $\{\overline{x}_1, \overline{x}_2\}\in E_{X-x_0}.$ An obvious adaptation of this argument works also in the pseudo-covering case.
(ii) The fact that $\tilde{\varphi}$ is a homomorphism such that $\tilde{\varphi}\circ \pi=\varphi$ is the content of \cite[Theorem 1.6.10]{kna}. Assume that $\varphi$ is surjective. Then, by $\tilde{\varphi}\circ \pi=\varphi$, $\tilde{\varphi}$ is surjective too.
Assume now that $\varphi$ is complete and show that $\tilde{\varphi}$ is complete. As observed above, $\tilde{\varphi}$ is surjective.
Let $e=\{\tilde{\varphi}([x_1]), \tilde{\varphi}([x_2])\}=\{\varphi(x_1), \varphi(x_2)\}\in E_Y$.
Since $\varphi$ is complete, there exist $\overline{x}_1, \overline{x}_2\in V_X$ such that $\varphi(\overline{x}_1)=\varphi(x_1), \varphi(\overline{x}_2)=\varphi(x_2)$ and $\{\overline{x}_1, \overline{x}_2\}\in E_X.$ Then also $e'=\{[\overline{x}_1], [\overline{x}_2]\}\in [E_X]$ and $e=\tilde{\varphi}(e').$ An obvious adaptation of this argument works also in the pseudo-covering case.
\end{proof}
When no ambiguity arises, we denote the map $\varphi_{x_0}$ of the above lemma again by $\varphi.$ In the following, whatever the deleted vertex $x_0\in V_X$ is, we write $X_0=((V_X)_0,(E_X)_0)$ for the $x_0$-deleted subgraph. Moreover we write $\mathcal{C}_0(X)$ for the set of the components of $X_0$ as well as $c_0(X)$ for their number. This helps to standardize the notation throughout the paper. In order to make our exposition more readable we occasionally introduce some slight variation to that notational rule.
The terminology not explicitly introduced is standard and can be found in \cite{dl}.
\vskip 1 true cm
\section{\bf Power graphs}\label{onda}
\vskip 0.4 true cm
Throughout the next sections let $G$ be a finite group with unit element $1$ and put $G_0=G\setminus\{1\}$. For $x\in G,$ denote by $o(x)$ the order of $x.$
\vskip 0.4 true cm
\begin{defn}\label{pow-gr}{\rm The
{\it power graph} of $G$ is the graph $P (G)=(G,E)$ where, for $x,y\in G$, $\{x,y\}\in E$ if there exists $m\in\mathbb{N}$ such that $x=y^m$ or $y=x^m$.
The {\it proper power graph} $P_0 (G)=(G_0,E_0)$ is defined as the $1$-deleted subgraph of $P(G).$}
\end{defn}
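As a concrete sanity check of these definitions, the following Python sketch (our own illustration; the function names are ours) builds $P_0(G)$ for a cyclic group of order $n$, written additively, so that ``powers'' of an element are its multiples modulo $n$, and counts connected components.

```python
from itertools import combinations

def proper_power_graph_cyclic(n):
    """Proper power graph P_0(Z_n): vertices are the nonzero residues
    1..n-1, with an edge {x, y} when one of x, y is a 'power' of the
    other, i.e. a multiple of it modulo n (additive notation)."""
    vertices = list(range(1, n))
    edges = set()
    for x, y in combinations(vertices, 2):
        multiples_x = {(x * m) % n for m in range(1, n + 1)}
        multiples_y = {(y * m) % n for m in range(1, n + 1)}
        if y in multiples_x or x in multiples_y:
            edges.add(frozenset({x, y}))
    return vertices, edges

def count_components(vertices, edges):
    """Number of connected components, via union-find with path halving."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for e in edges:
        a, b = tuple(e)
        parent[find(a)] = find(b)
    return len({find(v) for v in vertices})
```

For instance, $P_0(\mathbb{Z}_6)$ is connected (every element is a multiple of a generator), while for a prime $p$ the graph $P_0(\mathbb{Z}_p)$ is complete on $p-1$ vertices.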
To deal with the graphs $P(G)$ and $P_0 (G)$ and reduce their complexity, we start by considering the corresponding quotient graphs, in which the elements of $G$ generating the same cyclic subgroup are identified to a single vertex.
\begin{defn}\label{qpow-gr}{\rm Define for $x,y\in G$, $x\sim y$ if $\langle x\rangle=\langle y\rangle$. Then $\sim$ is an equivalence relation on $G$ and the equivalence class of $x\in G$, $[x]=\{x^m : 1\leq m\leq o(x),\,\gcd (m,o(x))=1\}$ has size $\phi(o(x))$.
The quotient graph $P(G)/\hspace{-1mm}\sim=([G],[E])$ is called the\emph{ quotient power graph} of $G$ and denoted by $\widetilde{P}(G)$.}
\end{defn}
By definition of quotient graph, the vertex set of $P(G)/\hspace{-1mm}\sim$ is $[G]=\{[x] : x\in G\}$ and $\{[x],[y]\}\in [E]$ is an edge in $\widetilde{P}(G)$ if there exist $\tilde{x}\in [x]$ and $\tilde{y}\in[y]$ such that $\{\tilde{x},\tilde{y}\}\in E$, that is, $\tilde{x},\tilde{y}$ are one the power of the other.
\begin{lem}\label{lato} For every $x,y\in G$, $\{[x], [y]\}\in [E]$
if and only if $\{x,y\}\in E$.
\end{lem}
\begin{proof} Let $x,y\in G$ such that $\{[x], [y]\}$ is an edge in $\widetilde{P}(G)$. Then there exist $\tilde{x}\in [x]$ and $\tilde{y}\in[y]$ such that one of them is a power of the other. To fix ideas, let $\tilde{x}=(\tilde{y})^m$, for some $m\in\mathbb{N}.$ Since $x\in \langle \tilde{x}\rangle$ and $\tilde{y}\in \langle y\rangle$, there exist $a,b\in\mathbb{N}$ such that $x=(\tilde{x})^a$ and $\tilde{y}=y^b$. It follows that $x=y^{abm}$ and thus $\{x,y\}\in E$. The converse is trivial.
\end{proof}
The above lemma may be thought of as saying that the projection of $P(G)$ onto its quotient $\widetilde{P}(G)$ is a strong homomorphism in the sense of \cite[Definition 1.5]{kna-paper}.
\begin{defn}\label{qpropow-gr}{\rm The $[1]$-deleted subgraph of $\widetilde{P}(G)$ is called
the \emph{proper quotient power graph} of $G$ and denoted by $\widetilde{P}_0(G)=([G]_0,[E]_0)$. }
\end{defn}
Since $[x]=[1]$ if and only if $x=1,$ Lemma \ref{cutgraph-hom} and \cite[Lemma 4.3]{bub} apply to the projection of $P(G)$ onto $\widetilde{P}(G)$ giving that $\widetilde{P}_0(G)$ is equal to the quotient graph $P_0(G)/\hspace{-1mm}\sim$. For short, we denote the set of components of $P_0(G)$ by $\mathcal{C}_0 (G)$ and their number by $c_0(G)$. Similarly we denote the set of components of $\widetilde{P}_0(G)$ by $\widetilde{\mathcal{C}}_0 (G)$ and their number by $\widetilde{c}_0(G)=c(\widetilde{P}_0(G)).$ Lemma \ref{lato} immediately extends to $\widetilde{P}_0(G)$.
\begin{lem}\label{lato2} For every $x,y\in G_0,$ $\{[x], [y]\}\in [E]_0$ if and only if $\{x,y\}\in E_0$.
\end{lem}
\begin{lem}\label{rmk:1} The graph $\widetilde{P}_0(G)$ is a tame and pseudo-covered
quotient of $P_0 (G)$. In particular $c_0(G)=\widetilde{c}_0(G).$
\end{lem}
\begin{proof} Let $x,y\in G_0$ such that $x\sim y$. Then $y=x^m$ for some $m\in \mathbb{N}$ and thus $\{x,y\}\in E.$ This shows that the quotient graph $\widetilde{P}_0(G)$ is tame (\cite[Definition 3.1]{bub}). Then, \cite[Proposition 3.2]{bub} applies giving $c_0(G)=\widetilde{c}_0(G).$ The fact that the quotient is pseudo-covered (\cite[Definition 5.14]{bub}) is an immediate consequence of Lemma \ref{lato2}.
\end{proof}
\begin{lem}\label{component-quotient} Let $\pi$ be the projection of $ P_0(G)$ on $\widetilde{P}_0(G) $. Then, the map from $\mathcal{C}_0 (G)$ to $\widetilde{\mathcal{C}}_0 (G) $ which associates to every $C\in \mathcal{C}_0 (G)$ the component $\pi(C) $ is a bijection. Given $\widetilde{C}\in \widetilde{\mathcal{C}}_0 (G) $, the set of vertices of the unique $C\in \mathcal{C}_0 (G)$ such that $\pi(C)=\widetilde{C}$ is given by $\pi^{-1}(V_{\widetilde{C}})$.
\end{lem}
\begin{proof} By Lemma \ref{rmk:1}, $\pi$ is tame and pseudo-covering. Thus we apply \cite[Corollary 5.13]{bub} to $\pi$.
\end{proof}
\vskip 0.6 true cm
\section{\bf Order graphs}\label{order-sec}
\vskip 0.4 true cm
Let $O(G)=\{ o(g): g\in G \}$. The map $o:G\rightarrow O(G)$, associating to every $x\in G$ its order $o(x)$, is called the {\it order map} on $G$. We say that $m\in \mathbb{N}$ is a proper divisor of $n\in \mathbb{N}$ if $m\mid n$ and $m\notin\{1,n\}$.
\begin{defn}\label{O}{\rm The {\it order graph} of $G$ is the graph $\mathcal{O}(G)$ with vertex set $O(G)$ and edge set $E_{\mathcal{ O}(G)}$ where, for every $m,n\in O(G)$, $\{m,n\}\in E_{\mathcal{ O}(G)}$ if $m\mid n$ or $n\mid m$.
The {\it proper order graph} $\mathcal {O}_0(G)$ is defined as the $1$-deleted subgraph of $\mathcal{O}(G).$ Its vertex set is $O_0(G)=O(G)\setminus \{1\}.$}
\end{defn}
Note that $\{m,n\}\in E^*_{\mathcal{ O}_0(G)}$ if and only if one of $m,n$ is a proper divisor of the other.
Clearly $\mathcal{O}(G)$ is always connected and it is $2$-connected if and only if $\mathcal {O}_0(G)$ is connected.
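As an illustration (a computational sketch of ours, not part of the formal development), one can build $\mathcal{O}_0(G)$ directly from the set of element orders. For $G=S_4$ the orders are $\{1,2,3,4\}$, and the sketch recovers the two components $\{2,4\}$ and $\{3\}$, in accordance with the value $c_0(\mathcal{O}(S_4))=2$ of Table \ref{eqtable1}.

```python
from itertools import combinations

def proper_order_graph(orders):
    """Proper order graph O_0: vertices are the element orders != 1,
    with an edge {m, n} when one of m, n properly divides the other."""
    vertices = sorted(o for o in orders if o != 1)
    edges = {frozenset({m, n}) for m, n in combinations(vertices, 2)
             if n % m == 0 or m % n == 0}
    return vertices, edges

def components(vertices, edges):
    """Connected components, via iterative depth-first search."""
    adj = {v: set() for v in vertices}
    for e in edges:
        a, b = tuple(e)
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps
```

The same computation with the orders $\{1,2,3,4,5,6\}$ of $S_6$ gives the two components $\{2,3,4,6\}$ and $\{5\}$, again matching Table \ref{eqtable1}.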
\begin{prop}\label{order} Let $G$ be a group. The order map defines a complete homomorphism
$o :P(G)\rightarrow \mathcal{O}(G)$ which induces a complete homomorphism $o:P_0(G)\rightarrow \mathcal{O}_0(G)$, and a complete $2$-homomorphism
$\widetilde{o}:\widetilde{P}_0(G)\rightarrow \mathcal{O}_0(G).$
If $G$ is cyclic, $\widetilde{o}$ is an isomorphism.
\end{prop}
\begin{proof} For every $m\in \mathbb{N}$, $o(x^m)$ is a divisor of $o(x),$ so $o$ is a surjective homomorphism. We show that $o$ is complete. Let $e=\{o(x),o(y)\}\in E_{\mathcal{ O}(G)},$ for some $x,y\in G$. Then, without loss of generality, we may assume that $o(y)\mid o(x).$ Since in $\langle x\rangle$ there exist elements of each order dividing $o(x),$ there exists $\overline{y}\in\langle x\rangle$ with $o(\overline{y})=o(y)$. Let $m\in\mathbb{N}$ be such that $\overline{y}=x^m.$ Then $\{x,\overline{y}\}\in E$ and $o(\{x,\overline{y}\})=e.$ Now, since $o(x)=1$ if and only if $x=1$, Lemma \ref{cutgraph-hom} applies giving the desired result both for the vertex deleted graph and for the quotient graph. We are left to check that $\widetilde{o}$ is a $2$-homomorphism. Let $\{[x],[y]\}\in [E]^*_0$ and show that $\widetilde{o}([x])\neq\widetilde{o}([y])$. Assume the contrary, that is, $o(x)=o(y).$ By Lemma \ref{lato2}, we have $\{x,y\}\in E_0$ so that $x$ and $y$ are one the power of the other. It follows that $\langle x\rangle=\langle y\rangle$, against $[x]\neq [y].$
Finally let $G$ be cyclic. To prove that $\widetilde{o}$ is an isomorphism, it is enough to show that $\widetilde{o}$ is injective. Assume that for some $[x],[y] \in [G]_0$ we have $\widetilde{o}([x])=\widetilde{o}([y])$, that is, $o(x)=o(y)=m.$ Since in a cyclic group there exists exactly one subgroup for each $m\mid |G|,$ we deduce that $\langle x\rangle= \langle y\rangle$ and so $[x]=[y].$
\end{proof}
An application of \cite[Lemma 4.3]{bub} gives, in particular, the following result.
\begin{cor}\label{quotient-order} For each finite group $G$, the graph $\mathcal {O}_0(G)$ is a quotient of the graph $\widetilde{P}_0(G).$
\end{cor}
We exhibit now an example showing that, in general, $\widetilde{o}$ is not pseudo-covering.
\begin{ex} \label{no-fusion}{\rm Let $G$ be the $2$-Sylow subgroup of $S_4$ given by
\[G=\{id, (1\ 3), (2\ 4), (1\ 3)(2\ 4), (1\ 2)(3\ 4), (1\ 4)(2\ 3), (1\ 2\ 3\ 4), (1\ 4\ 3\ 2)\}.\]
Then $\mathcal{O}_0(G)$ is reduced to a path of length $1$ between the only two vertices $2$ and $4,$ while $\widetilde{P}_0(G)$ has $6$ vertices and $5$ components, because the vertices $[(1\ 3)],$ $ [(2\ 4)]$, $ [(1\ 2)(3\ 4)],$ $[(1\ 4)(2\ 3)]$ are isolated while $[(1\ 2\ 3\ 4)]$ and $[(1\ 3)(2\ 4)]$ are the end vertices of a path of length $1$. Moreover, $\widetilde{o}$ takes the component whose only vertex is $[(1\ 3)]$ onto the subgraph $(\{2\},\{\{2\}\})$, which is not a component of $\mathcal{O}_0(G)$. By \cite[Theorem A\,(i)]{bub}, this guarantees that $\widetilde{o}$ is not pseudo-covering. }
\end{ex}
The above example indicates that the reduction of complexity obtained by passing from the proper power graph to the proper order graph is usually too strong. For instance, if $G$ is the group in the previous example and $C_4$ is the cyclic group of order $4$, we have $\mathcal{O}_0(G)\cong\mathcal{O}_0(C_4).$ In particular we cannot hope, in general, to count the components of $\widetilde{P}_0(G)$ relying on those of $\mathcal{O}_0(G).$ Nevertheless, taking into account the graph $\mathcal{O}_0(G)$, we get useful information on the isolated vertices of $\widetilde{P}_0(G)$.
\begin{lem}\label{isolated-order} Let $x\in G_0$.
\begin{itemize}
\item[(i)] If $o(x)\in O_0(G)$ is isolated in $\mathcal{O}_0(G)$, then $[x]$ is isolated in $\widetilde{P}_0(G).$
\item[(ii)] If $[x]$ is isolated in $\widetilde{P}_0(G),$ then $o(x)=p$ for some prime $p$, and the component of $P_0(G)$ containing $x$ is a complete graph on $p-1$ vertices.
\item[(iii)] $c_0(\mathcal{O}(G))\leq |\{p\in P: p\mid |G|\}|.$
\end{itemize}
\end{lem}
\begin{proof} (i) Apply Lemma \ref{2hom-iso} to the $2$-homomorphism $\widetilde{o}$ of Proposition \ref{order}.
(ii) Let $[x]$ be isolated in $\widetilde{P}_0(G)$. We first show that $o(x)=p$, for some prime $p$.
Assume, by contradiction, that $o(x)$ is composite. Then there exists $k\in\mathbb{N}$ such that $1\neq \langle x^k\rangle\neq \langle x\rangle$ and so $\{[x],[x^k]\}\in [E]^*_0$, contradicting the fact that $[x]$ is isolated. Next, let $C=C_{P_0(G)}(x)$. By \cite[Theorem A\,(i)]{bub} applied to the pseudo-covering projection $\pi: P_0(G)\rightarrow \widetilde{P}_0(G)$, we have that $\pi(C)=C_{\widetilde{P}_0(G)}([x])$ and thus $\pi(C)$ is reduced to the vertex $[x]$.
So, if $x'\in V_C$ we have that $[x']=[x]$ that is $\langle x'\rangle=\langle x \rangle$. It follows that $V_C$ is the set of generators of the cyclic group $\langle x\rangle$ of order $p$. Thus $|V_C|=p-1$ and $C$ is a complete graph on $p-1$ vertices.
(iii) Let $m\in O_0(G)$. If $p$ is a prime dividing $m$, then $p\in O_0(G)$ and $p\mid |G|.$ Moreover,
$\{m,p\}\in E_{\mathcal{O}_0(G)}$ so that $m\in C_{\mathcal{O}_0(G)}(p).$
\end{proof}
\vskip 0.6 true cm
\section{\bf Quotient graphs of power graphs associated to permutation groups}\label{permutation}
\vskip 0.4 true cm
Let $G\leq S_n$ be a permutation group of degree $n$ acting naturally on $N=\{1,\dots,n\}$.
We want to determine $c_0(G)$, starting from $X=\widetilde{P}_0(G)$ and looking
for suitable quotients. We need to find a graph $Y$ and a homomorphism $\varphi\in \mathrm{PC}(X,Y)=\mathrm{LSur}(X,Y)\cap \mathrm{Com}(X,Y)$ to which we can apply Formula $(1.1)$ in \cite[Theorem A]{bub} or, better, a homomorphism $\varphi\in \mathrm{O}(X,Y)\cap \mathrm{Com}(X,Y)$ to which we can apply Procedure \cite[6.10]{bub}.
To start with, we need to associate to every permutation an arithmetic object.
\subsection{\bf Partitions}\label{partition}
\vskip 0.4 true cm
Let $n,r\in \mathbb{N},$ with $n\geq r.$ An
$r$-{\em partition} of $n$ is an unordered $r$-tuple
$[x_1,\dots,x_r]$, with $x_i\in\mathbb{N}$ for every $i\in
\{1,\dots,r\},$ such that $n=\sum_{i=1}^{r}x_i.$
The $x_i$ are called the {\em terms} of the partition.
We denote by $\mathcal{T}_r(n)$ the set of the
$r$-partitions of $n$ and we call each element in $\mathcal{T}(n)=\bigcup_{r=1}^n\mathcal{T}_r(n)$ a {\em partition} of $n.$
Given $T\in \mathcal{T}(n)$, let $m_1<\dots < m_k$ be its $k\in \mathbb{N}$ distinct terms; if $m_j$ appears $t_j\geq 1$ times in $T$ we use the notation $T=[m_1^{t_1},..., m_k^{t_k}]$. Moreover, we say that $[m_1^{t_1},..., m_k^{t_k}]$ is
the {\it normal form} of $T$ and that $t_j$ is the {\it multiplicity} of the term $m_j$. We will allow, on some occasions, the multiplicity $t_j=0$, simply to indicate that a certain natural number $m_j$ does not appear as a term in $T.$ We usually omit multiplicities equal to $1$. The partition $[1^n]$ is called the {\it trivial partition}. We put $\mathcal{T}_0(n)=\mathcal{T}(n)\setminus \{[1^n]\}$.
We also define $\mathop{\mathrm{lcm}}(T)=\mathop{\mathrm{lcm}}\{m_i\}_{i=1}^k$ and $\gcd (T)=\gcd\{m_i\}_{i=1}^k$. The {\it order} of $T$ is defined by $\mathop{\mathrm{lcm}}(T)$ and written $o(T).$
\subsection{\bf Types of permutations}\label{types}
\vskip 0.4 true cm
Let $\psi\in S_n.$ The {\sl type} of $\psi$ is the partition of $n$ given by the unordered list
$T_{\psi}=[x_1,...,x_r]$ of the sizes $x_i$ of the $r$ orbits of $\psi$ on $N.$ Note that the fixed points of $\psi$ correspond to the terms $x_i=1,$ while the lengths of the disjoint cycles in which $\psi$ uniquely splits are given by the terms $x_i\geq 2.$ For instance $(1\ 2\ 3)\in S_3$ has type $[3]$, while $(1\ 2\ 3)\in S_4$ has type $[1,3].$ For $\psi \in S_n$, we denote by $M_{\psi}=\{i\in N :\psi(i)\neq i\}$ the support of $\psi$. Thus $|M_{\psi}|$ is the sum of the terms different from $1$ in $T_{\psi}.$
The permutations of type $[1^{n-k},k]$ are the $k$-cycles; the $2$-cycles are also called transpositions. Note that for every $\psi\in S_n$ and $s\in\mathbb{N},$ $T_{\psi}=T_{\psi^s}$ if and only if $\langle \psi\rangle =\langle \psi^s\rangle. $ Note also that $o(T_{\psi})=o(\psi).$
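The type of a permutation is easily computed from its orbit sizes. The following Python sketch (our own illustrative code; the representation of a permutation as a mapping of its moved points is an assumption of ours) returns the type as a sorted list of terms.

```python
def perm_type(perm, n):
    """Type T_psi of a permutation of {1,...,n}: the sorted list of the
    sizes of its orbits on {1,...,n}. `perm` is a dict mapping each moved
    point to its image; points absent from `perm` are fixed."""
    image = {i: perm.get(i, i) for i in range(1, n + 1)}
    seen, sizes = set(), []
    for start in range(1, n + 1):
        if start in seen:
            continue
        size, point = 0, start
        # Walk the orbit of `start` under the permutation.
        while point not in seen:
            seen.add(point)
            size += 1
            point = image[point]
        sizes.append(size)
    return sorted(sizes)
```

For example, it returns $[3]$ for $(1\ 2\ 3)\in S_3$ and $[1,3]$ for $(1\ 2\ 3)\in S_4$, as in the example above.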
Recall that $\psi, \varphi\in S_n$ are conjugate in $S_n$ if and only if $T_{\psi}=T_{\varphi}$. The map $t:S_n\rightarrow \mathcal{T}(n)$, defined by $t(\psi)=T_{\psi}$ for all $\psi\in S_n$, is surjective. In other words, each partition of $n$ may be viewed as the type of some permutation in $S_n$. We call $t$ the {\it type map}. If $X\subseteq S_n$, then $t(X)$ is the set of types {\it admissible} for $X$ in the sense of \cite[Section 4.1]{bub}, and denoted by $\mathcal{T}(X)$. Let $\mu_T(G)$ be the number of permutations of type $T=[m_1^{t_1},..., m_k^{t_k}]$ in $G\leq S_n$. It is well known that
\begin{equation}\label{count}\mu_T(S_n)=\dfrac{n!}{m_1^{t_1}\cdots m_k^{t_k}t_1!\cdots t_k!}.
\end{equation}
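Formula \eqref{count} can be checked computationally. In the sketch below (illustrative code of ours) a partition is given as a list of terms with repetitions, e.g. $[1,1,2]$ for $[1^2,2]$.

```python
from math import factorial
from collections import Counter

def mu(T, n):
    """Number of permutations of S_n of type T, a partition of n, via
    mu_T(S_n) = n! / (m_1^{t_1} ... m_k^{t_k} t_1! ... t_k!)."""
    assert sum(T) == n, "T must be a partition of n"
    count = factorial(n)
    for m, t in Counter(T).items():  # distinct term m with multiplicity t
        count //= m ** t * factorial(t)
    return count
```

Summing $\mu_T(S_4)$ over the five partitions of $4$ returns $4!=24$, and $\mu_{[1,3]}(S_4)=8$ counts the $3$-cycles of $S_4$.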
\subsection{\bf Powers of partitions}\label{powers-partition}
\vskip 0.4 true cm
Given $T=[m_1^{t_1},..., m_k^{t_k}]\in\mathcal{T}(n)$, the {\it power} of $T$ of exponent $a\in \mathbb{N}$, is defined as the partition
\begin{equation}\label{power} T^a=\left[\left(\frac{m_1}{\gcd(a,m_1)}\right)^{t_1\gcd(a,m_1)},..., \left(\frac{m_k}{\gcd(a,m_k)}\right)^{t_k\gcd(a,m_k)}\right].
\end{equation}
Note that $T^a$ is not necessarily in normal form. Moreover, for each $\psi\in S_n$ and each $a\in\mathbb{N},$ we have $T_{\psi^a}=(T_{\psi})^a.$ As a consequence,
the power notation for partitions is consistent with a typical property of the powers: if $a, b\in \mathbb{N},$ then $(T^a)^b=T^{ab}=T^{ba}=(T^b)^a.$ We say that $T^a$ is a {\it proper power} of $T$ if $[1^n]\neq T^a\neq T.$
Throughout the section, we will assume the notation in \eqref{power} without further reference.
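The power operation \eqref{power} is straightforward to implement. In the sketch below (our own illustration) a partition is again a list of terms with repetitions, so that each occurrence of a term $m$ contributes $\gcd(a,m)$ copies of $m/\gcd(a,m)$ to $T^a$.

```python
from math import gcd

def partition_power(T, a):
    """Power T^a of a partition T (a list of terms with repetitions):
    each occurrence of a term m is replaced by gcd(a, m) copies of
    m // gcd(a, m), as in the defining formula."""
    result = []
    for m in T:
        d = gcd(a, m)
        result.extend([m // d] * d)
    return sorted(result)
```

The behaviour agrees with the lemma below: for example $[6]^2=[3^2]$, $[1,3]^2=[1,3]$ since $\gcd(2,o([1,3]))=1$, $[6]^6=[1^6]$ since $o([6])\mid 6$, and $([6]^2)^3=[6]^6$.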
\begin{lem}\label{trivial-power} Let $T\in \mathcal{T}(n)$ and $a\in \mathbb{N}$.
\begin{itemize}
\item[(i)] $T^a=T$ if and only if $\gcd(a, o(T))=1.$
\item[(ii)] $T^a=[1^n]$ if and only if $o(T)\mid a.$
\item[(iii)] $T^a$ is a proper power of $T$ if and only if $\gcd(a, o(T))$ is a proper divisor of $o(T).$
\item[(iv)] If $T^a$ is a proper power of $T$, then $o(T^a)$ is a proper divisor of $o(T)$.
\end{itemize}
\end{lem}
\begin{proof}
(i) Let $T^a=T$. Then the number of terms in $T^a$ and in $T$ is the same, that is, $\sum_{i=1}^kt_i\gcd(a,m_i)=\sum_{i=1}^kt_i$, which implies $\gcd(a,m_i)=1$ for all $i\in\{1,\dots,k\}$ and thus $\gcd(a, \mathop{\mathrm{lcm}}\{m_i\}_{i=1}^k)=\gcd(a, o(T))=1.$ Conversely, if $\gcd(a, o(T))=1,$ then $\gcd(a, \mathop{\mathrm{lcm}}\{m_i\}_{i=1}^k)=1$ and so $\frac{m_i}{\gcd(a,m_i)}=m_i,$ for all $i\in\{1,\dots,k\}$ so that $T^a=T$.
(ii) $T^a=[1^n]$ is equivalent to $\frac{m_i}{\gcd(a,m_i)}=1$, and thus to $m_i\mid a$ for all $i\in\{1,\dots,k\}$, that is to $o(T)=\mathop{\mathrm{lcm}}\{m_i\}_{i=1}^k\mid a.$
(iii) It follows by the definition of proper power and by (i) and (ii).
(iv) Assume that $T^a$ is a proper power of $T$. Then, by (iii), $\gcd(a, o(T))\neq 1,o(T).$
Let $\psi\in S_n$ be such that $T=T_{\psi}$. Then $T^a=T_{\psi^a}$ and $o(T^a)=o(\psi^a)\notin \{1,o(\psi)\}$; since $o(\psi^a)$ divides $o(\psi)=o(T)$, it follows that $o(T^a)$ is a proper divisor of $o(T)$.
\end{proof}
\begin{lem}\label{prospli}
\noindent\begin{itemize}
\item[(i)] Let $T\in\mathcal{T}(n)$, $a\in\mathbb{N}$ and $T'=T^a$ be a proper power of $T.$ Then the following facts hold:
\begin{itemize}
\item[(a)] $T'$ admits at least one term with multiplicity at least $2$;
\item[(b)] if there exists a term $x$ of $T'$ appearing with multiplicity $1,$ then $a$ is coprime with $x$ and $x$ is a term of $T.$
\end{itemize}
\item[(ii)] Let $h\in\mathbb{N}$, with $1\leq h<n/2.$ Then there exists no type in $\mathcal{T}(n)$ having $[h,n-h]$ or $[n]$ as a proper power.
\end{itemize}
\end{lem}
\begin{proof} (i)(a) Since $T'=T^a$ is a proper power of $T,$ by Lemma \ref{trivial-power},
there exists $j\in \{1,\dots,k\}$ with $\gcd(a,m_j)\geq 2.$ Thus the term $\frac{m_j}{\gcd(a,m_j)}$ in $T'$ has multiplicity at least $t_j\gcd(a,m_j)\geq 2.$
(b) If a term $x$ in $T'$ appears with multiplicity $1$, then there exists $j\in \{1,\dots,k\}$ with $t_j\gcd(a,m_j)=1$ and $x=\frac{m_j}{\gcd(a,m_j)}$.
It follows that $t_j=1$ and $\gcd(a,m_j)=1$. Moreover $x=m_j$ is a term of $T$.
(ii) Since $1\leq h<n/2$, we have $h\neq n-h$, so every term of $[h,n-h]$, as well as the unique term of $[n]$, has multiplicity $1$; this contradicts (i)(a).
\end{proof}
\subsection{\bf The power type graphs of a permutation group}\label{power type graph}
\vskip 0.4 true cm
\begin{defn}\label{type-graph}{\rm
Let $G\leq S_n$. We define the {\it power type graph} of $G$ as the graph $P(\mathcal{T}(G))$ having as vertex set $\mathcal{T}(G)$ and edge set $E_{\mathcal{T}(G)}$ where $\{T, T'\}\in E_{\mathcal{T}(G)}$ if $T, T'\in \mathcal{T}(G)$ are one the power of the other.
We also define the {\it proper power type graph} of $G$ as the $[1^n]$-deleted subgraph of $P(\mathcal{T}(G))$ and denote it by $P_0(\mathcal{T}(G)).$ Its vertex set is then $\mathcal{T}(G_0)=\mathcal{T}(G)\setminus \{[1^n]\}$ and its edge set is denoted by $E_{\mathcal{T}(G_0)}.$ For short, we write $\mathcal{C}_0(\mathcal{T}(G))$ instead of $\mathcal{C}(P_0(\mathcal{T}(G)))$ and $c_0(\mathcal{T}(G))$ instead of $c(P_0(\mathcal{T}(G))).$}
\end{defn}
Note that
$\{T, T'\}\in E^*_{\mathcal{T}(G_0)}$ if and only if $T, T'\in \mathcal{T}(G_0)$ are one the proper power of the other. Clearly $P(\mathcal{T}(G))$ is always connected and it is $2$-connected if and only if $P_0(\mathcal{T}(G))$ is connected. Since from now on we intend to focus on proper graphs, we will tacitly assume $n\geq 2.$
\begin{prop}\label{tau} Let $G\leq S_n$. Then the following facts hold:
\begin{itemize}
\item[(i)] the type map on $G$ induces a complete homomorphism
$t: P_0(G)\rightarrow P_0(\mathcal{T}(G))$ and a complete $2$-homomorphism
$\widetilde{t}: \widetilde{P}_0(G)\rightarrow P_0(\mathcal{T}(G));$
\item[(ii)] $P_0(\mathcal{T}(G))$ is a quotient of $\widetilde{P}_0(G)$. In particular, if $\widetilde{P}_0(G)$ is connected, then $P_0(\mathcal{T}(G))$ is connected.
\end{itemize}
\end{prop}
\begin{proof} (i) We begin showing that the map $t:G\rightarrow \mathcal{T}(G)$ defines a complete homomorphism $t:P(G)\rightarrow P(\mathcal{T}(G)).$ Let $\{\psi, \varphi\}\in E$. Thus $\psi, \varphi\in G$ are one the power of the other. Let, say, $\psi=\varphi^a$ for some $a\in \mathbb{N}.$ Then
$t(\psi)=T_{\psi}=T_{\varphi^a}=(T_{\varphi})^a=t(\varphi)^a$ and thus $\{t(\psi), t(\varphi)\}\in E_{\mathcal{T}(G)}.$ This shows that $t$ is a homomorphism. We show that $t$ is complete, that is, $t(P(G))=P(\mathcal{T}(G))$. First of all, we have $t(G)=\mathcal{T}(G)$ by definition. Let $e=\{t(\psi), t(\varphi)\}\in E_{\mathcal{T}(G)}$ so that $t(\psi), t(\varphi)$ are one the power of the other. Let, say, $t(\psi)= t(\varphi)^a$ for some $a\in \mathbb{N}.$ Thus we have $T_{\psi}=T_{\varphi^a}$. Now, define $\overline{\psi}=\varphi^a$ and note that $t(\overline{\psi})=t(\psi)$. Hence $\overline{e}=\{\overline{\psi}, \varphi\}\in E$ and $t(\overline{e})=e.$ Since the only permutation having type $[1^n]$ is $id$, by Lemma \ref{cutgraph-hom}\,(i), we also have a homomorphism $t: P_0(G)\rightarrow P_0(\mathcal{T}(G)).$ Then, applying
Lemma \ref{cutgraph-hom}\,(ii), that homomorphism $t$ induces the complete homomorphism $\widetilde{t}:\widetilde{P}(G)\rightarrow P(\mathcal{T}(G))$ defined by $\widetilde{t}([\psi])=T_\psi$ for all $[\psi]\in [G],$ and a corresponding complete homomorphism $\widetilde{t}: \widetilde{P}_0(G)\rightarrow P_0(\mathcal{T}(G)).$ It remains to show that $\widetilde{t}$ is a $2$-homomorphism. Let $\{[\psi], [\varphi]\}\in [E]^*_{0}$ and assume that $\widetilde{t}([\psi])=\widetilde{t}([\varphi])$. Then $[\psi]\neq [\varphi]$ and, by Lemma \ref{lato2}, $\psi$ and $\varphi$ are one the power of the other. Moreover, we have $T_{\psi}=T_{\varphi}$. Hence $\langle \psi\rangle=\langle \varphi\rangle$, against $[\psi]\neq [\varphi]$.
(ii) This follows from (i), \cite[Lemma 4.3]{bub} and \cite[Proposition 3.2]{bub}.
\end{proof}
Let $G\leq S_n$. The homomorphism $\widetilde{t}$, defined in the above proposition, transfers to $[G]$ all the concepts introduced for $G$ in terms of type. In particular, we define
the type $T_{[\psi]}$ of $[\psi] \in [G]$ as $T_{\psi}.$ Moreover, we say that $[\psi]$ is a $k$-cycle (a transposition) if $\psi$ is a $k$-cycle (a transposition).
For $X\subseteq S_n$ consider $[X]=\{[x]\in[S_n] : x\in X\}\subseteq [S_n]$. Then, according to \cite[Section 4.1]{bub}, $\widetilde{t}([X])=t(X)=\mathcal{T}(X)$
is the set of {\it types admissible} for $[X]$.
Since every subset of $[S_n]$ is of the form $[X]$ for a suitable $X\subseteq S_n,$ this defines the concept of admissibility for all subsets of $[S_n]$. If $\hat{X}$ is a subgraph of $\widetilde{P}_0(G)$, the set of types admissible for $\hat{X}$, denoted by $\mathcal{T}(\hat{X})$, is the set of types admissible for $V_{\hat{X}}.$ In particular, for $C\in \widetilde{\mathcal{C}}_0(G),$ we have $\mathcal{T}(C)=\{T\in \mathcal{T}(G_0):\hbox{there exists}\ [\psi]\in V_C\ \hbox{with}\ T_{\psi}=T\}.$
It is useful to isolate a fact contained in Proposition \ref{tau}.
\begin{cor}\label{edge} Let $G\leq S_n$. If $\varphi,\psi\in G_0$ are such that $\{[\varphi],[\psi]\}\in[E]^*_0$, then one of the types $T_{\varphi}, T_{\psi}\in\mathcal{T}(G_0)$ is a proper power of the other. In particular $T_{\varphi}\neq T_{\psi}$.
\end{cor}
Observe that the converse of the above corollary does not hold. Consider, for instance, $\varphi=(1\ 2\ 3\ 4),\ \psi=(1\ 2)(3\ 4)\in S_4.$ We have that $T_{\psi}=[2^2]$ is the square of $T_{\varphi}=[4]$, but there is no edge between $[\varphi]$ and $[\psi]$ in $\widetilde{P}_0(S_4),$ because no power of $(1\ 2\ 3\ 4)$ is equal to $(1\ 2)(3\ 4).$
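For readers who like to verify such small counterexamples mechanically, here is a minimal sketch (the encoding and helper names are ours, with permutations written as tuples acting on $\{0,\dots,3\}$) checking both facts at once: the square of $(1\ 2\ 3\ 4)$ has type $[2^2]$, yet $(1\ 2)(3\ 4)$ is not among the powers of $(1\ 2\ 3\ 4)$.

```python
def compose(p, q):
    # (p o q)(i) = p[q[i]], permutations encoded as tuples on {0,...,n-1}
    return tuple(p[i] for i in q)

def powers(p):
    # all positive powers of p, ending with the identity
    ident = tuple(range(len(p)))
    out, q = [], p
    while True:
        out.append(q)
        if q == ident:
            return out
        q = compose(p, q)

c4 = (1, 2, 3, 0)   # the 4-cycle (1 2 3 4), written on {0, 1, 2, 3}
dt = (1, 0, 3, 2)   # the double transposition (1 2)(3 4)

assert compose(c4, c4) == (2, 3, 0, 1)   # c4^2 = (1 3)(2 4): type [2^2], a different permutation
assert dt not in powers(c4)              # (1 2)(3 4) is not a power of (1 2 3 4)
```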
\subsection{\bf {\bf \em{\bf The order graph of a permutation group }}}\label{rod-perm}
\vskip 0.4 true cm
In this section we show that, for any permutation group, the graph $\mathcal{O}_0(G)$ is a quotient of $P_0(\mathcal{T}(G))$.
Define the map
\begin{equation}\label{omegaT}o_{\mathcal{T}}:\mathcal{T}_0(G)\rightarrow O_0(G), \qquad o_{\mathcal{T}}(T)=o(T)\ \hbox{for all}\ T\in \mathcal{T}_0(G)\end{equation}
and recall the map $\widetilde{o}$ defined in Proposition \ref{order}.
\begin{prop}\label{order-quo} For every $G\leq S_n$, the map $o_{\mathcal{T}}$ defines a complete $2$-homomorphism $o_{\mathcal{T}}:P_0(\mathcal{T}(G))\rightarrow \mathcal{O}_0(G)$ such that $o_{\mathcal{T}}\circ \widetilde{t}=\widetilde{o}.$
In particular $\mathcal{O}_0(G)$ is a quotient of $P_0(\mathcal{T}(G))$ and $c_0(\mathcal{O}(G))\leq c_0(\mathcal{T}(G)).$
\end{prop}
\begin{proof} The map $o_{\mathcal{T}}$ is well defined because, if $T\in \mathcal{T}_0(G)$, there exists $\psi\in G_0$ such that $T=T_{\psi}$, and so $o(T)=o(T_{\psi})=o(\psi)\in O_0(G).$ The same argument shows that $o_{\mathcal{T}}\circ \widetilde{t}=\widetilde{o}.$ By Proposition \ref{order}, the map $\widetilde{o}$ is a complete homomorphism from $\widetilde{P}_0(G)$ to $\mathcal{O}_0(G)$. In particular $\widetilde{o}$ is surjective, and so $o_{\mathcal{T}}$ is surjective as well.
We show that $o_{\mathcal{T}}$ is a $2$-homomorphism.
Let $\{T,T'\}$ be an edge in $P_0(\mathcal{T}(G))$. Then one of $T,T'$ is a power of the other; say $T'=T^a$ for some $a\in \mathbb{N}.$ Then $o(T')$ is a divisor of $o(T),$ so that $\{o(T),o(T')\}\in E_{\mathcal{O}_0(G)}$.
Moreover, if $T\neq T'$ we have $[1^n]\neq T^a\neq T$ and thus Lemma \ref{trivial-power}\,(iv) implies that $o(T')$ is a proper divisor of $o(T).$
We finally show that $o_{\mathcal{T}}$ is complete. Let $e=\{m,m'\}\in E_{\mathcal{O}_0(G)}$. Since $\widetilde{o}$ is complete, there exist $[\varphi],[\varphi']\in [G]_0$ such that $\widetilde{o}([\varphi])=m, \widetilde{o}([\varphi'])=m'$ and $\{[\varphi],[\varphi']\}\in [E]_0.$ By Proposition \ref{tau}\,(i), the map $\widetilde{t}$ is a homomorphism and so $\{\widetilde{t}([\varphi]),\widetilde{t}([\varphi'])\}=\{T_{\varphi},T_{\varphi'}\}$ is an edge in $P_0(\mathcal{T}(G)).$ Now it is enough to observe that $o_{\mathcal{T}}(T_{\varphi})=o(\varphi)=m$ and $o_{\mathcal{T}}(T_{\varphi'})=o(\varphi')=m'.$
To close, we apply \cite[Lemma 4.3 and Proposition 3.2]{bub}.
\end{proof}
\begin{cor}\label{iso-rod-type} Let $G\leq S_n$. If $m\in O_0(G)$ is isolated in $\mathcal{O}_0(G),$ then each type of order $m$ is isolated in $P_0(\mathcal{T}(G)).$
\end{cor}
\begin{proof} By Proposition \ref{order-quo}, $o_{\mathcal{T}}$ is a $2$-homomorphism. Thus we can apply Lemma \ref{2hom-iso} to $o_{\mathcal{T}}$.
\end{proof}
\vskip 0.6 true cm
\section{\bf {\bf \em{\bf Quotient graphs of power graphs associated to fusion controlled permutation groups}}}\label{fusion case}
\vskip 0.4 true cm
\begin{defn}\label{fusion}{\rm Let $G\leq S_n$.
\begin{itemize} \item[(a)] $G$ is called {\it fusion controlled} if $N_{S_n}(G)$ controls the fusion in $G,$ with respect to $S_n$, that is, if for every $ \psi\in G$ and $x\in S_n$ such that $\psi^x\in G$, there exists $y\in N_{S_n}(G)$ such that $\psi^x=\psi^y;$
\item[(b)] For each $x\in N_{S_n}(G)$, define the map
\begin{equation}\label{F}
F_{x}: [G]_0\rightarrow[G]_0,\quad F_{x}([\psi])=[\psi^x]\ \hbox{for all} \ [\psi]\in [G]_0.
\end{equation}
\end{itemize}
}
\end{defn}
Note that $F_{x}$ is well defined, that is, for every $[\psi]\in [G]_0,$ $F_{x}([\psi])$ does not depend on the representative of $[\psi]$, and $F_{x}([\psi])\in [G]_0$. Those facts are immediate considering that conjugation is an automorphism of the group $S_n$ and that $x\in N_{S_n}(G).$
Recalling now the homomorphism $\widetilde{t}$ defined in Proposition \ref{tau} and the definition of $\mathfrak{G}$-consistency given in \cite[Definition 4.4\,(b)]{bub},
we get the following result.
\begin{prop}\label{conj} Let $G\leq S_n$.
\begin{itemize}
\item[(i)] Then, for every $x\in N_{S_n}(G)$, the map $F_x$ is a graph automorphism of $\widetilde{P}_0(G)$ which preserves the type.
\item[(ii)] If $G$ is fusion controlled, then $\mathfrak{G}=\{F_x :x\in N_{S_n}(G)\}$ is a subgroup of $\mathrm {Aut}(\widetilde{P}_0(G)) $ and $\widetilde{t}$ is a $\mathfrak{G}$-consistent $2$-homomorphism from $\widetilde{P}_0(G)$ to $P_0(\mathcal{T}(G))$. In particular $\widetilde{t}$ is a complete orbit $2$-homomorphism.
\end{itemize}
\end{prop}
\begin{proof}(i) Since, for every $x\in N_{S_n}(G)$, we have $F_x\circ F_{x^{-1}}=F_{x^{-1}}\circ F_x=id_{[G]_0}$ we deduce that $F_x$ is a bijection.
We show that $F_x$ is a graph homomorphism. Let $\{[\varphi],[\psi]\}\in [E]_0.$ Then, by Lemma \ref{lato2}, one of $\varphi$ and $\psi$ is a power of the other. Let, say, $\varphi=\psi^m$ for some $m\in \mathbb{N}$.
Thus, since conjugation is an automorphism of $S_n$, we have $\varphi^x=(\psi^m)^x=(\psi^x)^m$
and so $\{[\varphi^x],[\psi^x]\}\in [E]_0.$ Next we see that $F_x$ is complete. Let $e=\{[\varphi^x],[\psi^x]\}\in [E]_0$. Then, by Lemma \ref{lato2}, one of $\varphi^x$ and $\psi^x$ is a power of the other. Let, say, $\varphi^x=(\psi^x)^m$ for some $m\in \mathbb{N}$. Now, considering $\overline{e}=\{[\psi],[\psi^m]\}\in [E]_0$, we have $F_x(\overline{e})=e.$
Finally $T_{F_x([\psi])}=T_{[\psi]}$ because $\psi^x$ is a conjugate of $\psi$ and thus has its same type. In other words, for every $x\in N_{S_n}(G)$, we have \begin{equation}\label{a}\widetilde{t}\circ F_x=\widetilde{t}.\end{equation}
(ii) The map $N_{S_n}(G)\rightarrow {\mathrm {Aut}}(\widetilde{P}_0(G))$, associating $F_x$ to $x\in N_{S_n}(G),$ is a group homomorphism and so its image $\mathfrak{G}$ is a subgroup of ${\mathrm {Aut}}(\widetilde{P}_0(G))$. We show that $\widetilde{t}$ is $\mathfrak{G}$-consistent by checking that conditions (a) and (b) of \cite[Lemma 4.5]{bub} are satisfied. Condition (a) is just \eqref{a}. To get condition (b), pick $[\varphi],[\psi]\in[G]_0$ with $T_{\varphi}=T_{\psi}$. Then $\varphi$ and $\psi$ are elements of $G$ conjugate in $S_n$. Thus, as $G$ is fusion controlled, they are also conjugate in $N_{S_n}(G)$. So there exists $x\in N_{S_n}(G)$ such that $\varphi=\psi^x$, which gives $[\varphi]=[\psi^x]=F_x([\psi]).$
\end{proof}
We say that $C, \hat{C}\in \widetilde{\mathcal{C}}_0(G)$ are {\it conjugate} if there exists $x\in N_{S_n}(G)$ such that $ \hat{C}=F_x(C).$
\begin{prop}\label{2con} Let $G\leq S_n$ be fusion controlled.
\begin{itemize}
\item[(i)] Then $P_0(\mathcal{T}(G))$ is an orbit quotient of $\widetilde{P}_0(G)$.
\item[(ii)] $T\in \mathcal{T}(G_0)$ is isolated in $P_0(\mathcal{T}(G))$ if and only if each $[\psi]\in [G]_0$ of type $T$ is isolated in $\widetilde{P}_0(G)$.
\item[(iii)] If $T\in \mathcal{T}(G_0)$, then the components of $\widetilde{P}_0(G)$ admissible for $T$ are conjugate.
\end{itemize}
\end{prop}
\begin{proof} Keeping in mind the definition of orbit quotient given in \cite[Definition 5.14]{bub}, (i) is a rephrasing of Proposition \ref{conj}\,(ii). To show (ii), recall that, by \cite[Proposition 5.9\,(ii)]{bub}, every orbit homomorphism is locally surjective and then
apply \cite[Corollary 5.12]{bub} and Lemma \ref{2hom-iso} to the orbit $2$-homomorphism $\widetilde{t}.$ (iii) is an application of \cite[Proposition 6.9\,(i)] {bub}.
\end{proof}
\begin{lem}\label{tc} Let $G$ be fusion controlled, $C\in\widetilde{\mathcal{C}}_0(G)$ and $T\in \mathcal{T}(C)$. Then $\mathcal{T}(C)=V_{C(T)}.$
\end{lem}
\begin{proof} Apply \cite [Theorem A\,(i)]{bub} to $\widetilde{t}$ recalling that, by \eqref{general-inc}, every orbit homomorphism is locally surjective.
\end{proof}
Proposition \ref{conj} guarantees that when $G$ is fusion controlled, we have
\begin{equation}\label{ttilde-prop}
\widetilde{t}\in \mathrm{O}\big (\widetilde{P}_0(G),P_0(\mathcal{T}(G))\big )\cap\mathrm{Com}\big(\widetilde{P}_0(G),P_0(\mathcal{T}(G))\big).
\end{equation}
Thus the machinery for counting the components of $\widetilde{P}_0(G)$ by those in $P_0(\mathcal{T}(G))$ can start, provided that we control the numbers $k_{\widetilde{P}_0(G)}(T)$ and $k_{C}(T)$, for $T\in\mathcal{T}(G_0)$ and $C\in\widetilde{\mathcal{C}}_0(G)$. We see first that $k_{\widetilde{P}_0(G)}(T)$ is easily determined, for any permutation group $G$, through $\mu_{T} (G).$
\begin{lem}\label{lem:2}
Let $G\leq S_n$. If $T\in \mathcal{T}(G_0),$ then $k_{\widetilde{P}_0(G)}(T)=\dfrac{\mu_{T} (G)}{\phi(o(T))}$.
\end{lem}
\begin{proof}
If $T=[m_1^{t_1},..., m_k^{t_k}]\in \mathcal{T}(G_0)$, then the set $G_T=\{\sigma\in G: \sigma\ \hbox{is of type }\ T\}$ is nonempty and each element in $G_T$ has the same order given by $\mathrm{lcm}\{m_i\}_{i=1}^k=o(T).$ Consider the equivalence relation $\sim$ which defines the quotient graph $\widetilde{P}_0(G)$.
Since generators of the same cyclic subgroup of $G$ share the same type, it follows that $G_T$ is a union of $\sim$-classes, each of size $\phi(o(T))$. On the other hand, $[\sigma]\in [G]$ has type $T$ if and only if $\sigma\in G_T.$ This means that $k_{\widetilde{P}_0(G)}(T)$ is the number of $\sim$-classes contained in $G_T,$ and so $k_{\widetilde{P}_0(G)}(T)=\dfrac{\mu_{T} (G)}{\phi(o(T))}$.
\end{proof}
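The count in Lemma \ref{lem:2} is elementary to automate. The following sketch (the helper names are ours; $\mu_T(S_n)$ is computed via the classical cycle-type count $n!/\prod_i m_i^{t_i}\,t_i!$, which we assume agrees with \eqref{count}) reproduces some values that appear later in the computations for $S_4$, $S_5$ and $S_7$.

```python
from collections import Counter
from math import factorial, gcd, lcm, prod

def phi(m):
    # Euler's totient function
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

def mu(T, n):
    # number of permutations of S_n with cycle type T (fixed points listed in T)
    assert sum(T) == n
    return factorial(n) // prod(m**t * factorial(t) for m, t in Counter(T).items())

def k_vertices(T, n):
    # Lemma: the number of vertices of the quotient power graph of type T
    # equals mu_T(S_n) / phi(o(T)), with o(T) the lcm of the cycle lengths
    return mu(T, n) // phi(lcm(*T))

assert k_vertices([4], 4) == 3        # mu = 6, phi(4) = 2
assert k_vertices([2, 3], 5) == 10    # mu = 20, phi(6) = 2
assert k_vertices([7], 7) == 120      # (7-2)! classes of 7-cycles
```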
Let $T\in\mathcal{T}_0(G)$. According to \cite[Definition 6.1]{bub}, we consider the set $\widetilde{\mathcal{C}}_0(G)_T$ of the components of $\widetilde{P}_0(G)$ admissible for $T$ and denote by $\widetilde {c}_0(G)_{T}$ its cardinality.
Thus $\widetilde {c}_0(G)_{T}$ counts the number of components of $\widetilde{P}_0(G)$ in which there exists at least one vertex $[\psi]$ of type $T.$
\begin{lem}\label{fusion2} Let $G\leq S_n$ be fusion controlled and $T\in\mathcal{T}_0(G)$.
\begin{itemize}
\item[(i)] Then $\widetilde {c}_0(G)_{T}=\frac{k_{\widetilde{P}_0(G)}(T)}{k_C(T)}=\frac{\mu_{T} (G)}{\phi(o(T))k_C(T)}$ for all $C\in \widetilde{\mathcal{C}}_0(G)_T.$
\item[(ii)] If $T$ is isolated in $P_0(\mathcal{T}(G))$, then $\widetilde {c}_0(G)_{T}=k_{\widetilde{P}_0(G)}(T)=\frac{\mu_{T} (G)}{\phi(o(T))}.$
\end{itemize}
\end{lem}
\begin{proof}(i) By \eqref{ttilde-prop} and \eqref{general-inc}, we can apply \cite[Proposition 6.8]{bub} to $\widetilde{t}$ and then use
Lemma \ref{lem:2} to make the computation explicit.
(ii) is a consequence of (i) and of Proposition \ref{2con}\,(ii).
\end{proof}
\begin{proof} [Proof of Theorem A] Applying \cite[Theorem A]{bub} to $\widetilde{t}$, we get
\[c_0(G)=\widetilde {c}_0(G)=\sum_{i=1}^{c_0(\mathcal{T}(G))}\widetilde {c}_0(G)_{T_i}\]
and, by Lemma \ref{fusion2}\,(i), we obtain
\[\sum_{i=1}^{c_0(\mathcal{T}(G))}\widetilde {c}_0(G)_{T_i}=\sum_{i=1}^{c_0(\mathcal{T}(G))}\frac{\mu_{T_i} (G)}{\phi(o(T_i))k_{C_i}(T_i)}.\]
\end{proof}
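As a first illustration of the formula in Theorem A, we anticipate the computation for $n=4$ carried out in the proof of Theorem B\,(i) below: there one finds the three types $T_1=[4]$, $T_2=[1,3]$, $T_3=[1^2,2]$, each with $k_{C_i}(T_i)=1$, so that
\[
\widetilde{c}_0(S_4)=\frac{\mu_{[4]}(S_4)}{\phi(4)}+\frac{\mu_{[1,3]}(S_4)}{\phi(3)}+\frac{\mu_{[1^2,2]}(S_4)}{\phi(2)}=\frac{6}{2}+\frac{8}{2}+\frac{6}{1}=13.
\]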
We now observe some interesting restrictions on the types that can appear in the same component of $\widetilde{P}_0(G)$.
\begin{cor}\label{two-types} Let $G\leq S_n$ be fusion controlled and $C\in \widetilde{\mathcal{C}}_0(G)$.
\begin{itemize}
\item[(i)] If $T,T'\in \mathcal{T}(C),$ then $\frac{k_{\widetilde{P}_0(G)}(T)}{k_C(T)}=\frac{k_{\widetilde{P}_0(G)}(T')}{k_C(T')}.$
\item[(ii)] $C\cong \widetilde{t}(C)$ if and only if
there exists $T\in \mathcal{T}(C)$ such that $k_{C}(T)=1$ and, for every $T'\in \mathcal{T}(C)$, $k_{\widetilde{P}_0(G)}(T)=k_{\widetilde{P}_0(G)}(T')$.
\item[(iii)] If $C$ contains all the vertices of $[G]_0$ of a certain type $T$, then $C$ contains also all the vertices of $[G]_0$ of type $T'$ for all $T'\in \mathcal{T}(C).$
\item[(iv)] If there exists $T\in \mathcal{T}(C)$ such that $k_{C}(T)=k_{\widetilde{P}_0(G)}(T)>1,$ then $C\not\cong \widetilde{t}(C).$
\end{itemize}
\end{cor}
\begin{proof} An immediate application of \cite[Proposition 7.2]{bub}.
\end{proof}
Finally we state a useful criterion for the connectedness of $\widetilde{P}_0(G)$.
\begin{cor}\label{2conreverse} Let $G\leq S_n$ be fusion controlled and let $P_0(\mathcal{T}(G))$ be connected. Then $\widetilde{P}_0(G)$ is a union of conjugate components. Moreover, if there exist $T\in \mathcal{T}(G_0)$ and $C\in \widetilde{\mathcal{C}}_0(G)$ containing all the vertices of $[G]_0$ of type $T,$ then $\widetilde{P}_0(G)$ is connected.
\end{cor}
\begin{proof}
Let $C\in \widetilde{\mathcal{C}}_0(G)$ and consider $\widetilde{t}(C).$ Then $\widetilde{t}(V_C)=\mathcal{T}(C).$
By \cite[Theorem A\,(i)]{bub}, $\widetilde{t}(C)$ is a component of $P_0(\mathcal{T}(G))$ and since that graph is connected, we get $\widetilde{t}(V_C)=\mathcal{T}(G_0)$ and so $\widetilde{t}^{-1}(\widetilde{t}(V_C))=[G]_0.$
By \cite[Theorem A\,(iii)]{bub}, this implies that $\widetilde{P}_0(G)$ is the union of the components in $\mathcal{C}(\widetilde{P}_0(G))_{\widetilde{t}(C)}$ which, by \cite[Lemma 6.3\,(i)]{bub}, are equal to the components admissible for any $T \in \mathcal{T}(G_0)$. So Proposition \ref{2con}\,(iii) applies, giving $\widetilde{P}_0(G)$ as a union of conjugate components. The last part follows from \cite[Corollary 5.15]{bub}.
\end{proof}
\vskip 0.6 true cm
\section{\bf {\bf \em{\bf The number of components in $\widetilde{P}_0(S_n),\ P_0(\mathcal{T}(S_n)),\ \mathcal{ O}_0(S_n)$}}}
\vskip 0.4 true cm
In this section we clarify how Formula \eqref{formula-fusion22} can be made concrete thanks to the Procedure in \cite[6.10]{bub}, applying it to $S_n$, which is a particular fusion controlled permutation group. Exploiting the strong link among the graphs $P_0(S_n),\ \widetilde{P}_0(S_n),\ P_0(\mathcal{T}(S_n)),\ \mathcal{ O}_0(S_n)$, we determine simultaneously $c_0(S_n)=\widetilde {c}_0(S_n)$, $c_0(\mathcal{T}(S_n))$ and $c_0(\mathcal{ O}(S_n))$. Along the way, we give a description of the components of those four graphs.
Recall that $\mathcal{T}_0(n)= \mathcal{T}(S_n)\setminus\{[1^n]\}$ and that, for $T\in \mathcal{T}_0(n)$, the numbers $\mu_T(S_n)$ are computed by \eqref{count}. Recall also that, if $T\in \mathcal{T}_0(n)$ and $C$ is a component of $\widetilde{P}_0(S_n)$, $k_{C}(T)$ counts the number of vertices in $C$ having type $T.$
By Lemma \ref{tc} and Theorem A, the procedure to get $\widetilde {c}_0(S_n)$ translates into the following.
\begin{proc} {\bf Procedure to compute $\widetilde {c}_0(S_n)$ }\label{procedureS_n}
{\rm
\begin{itemize}\item[ (I)] {\it Selection of $T_i$ and $C_i$}
\end{itemize}
\begin{itemize}
\item[ {\it Start}] : Pick arbitrary $T_1\in \mathcal{T}_0(n)$ and choose any $C_1\in\widetilde{\mathcal{C}}_0(S_n)_{T_1}$.
\item[ {\it Basic step}]: Given $T_1,\dots, T_i\in \mathcal{T}_0(n) $ and $C_1,\dots,C_i\in\widetilde{\mathcal{C}}_0(S_n)$ such that $C_j\in \widetilde{\mathcal{C}}_0(S_n)_{T_j}$ ($1\leq j\leq i$), choose any $T_{i+1}\in \mathcal{T}_0(n) \setminus \bigcup_{j=1}^i \mathcal{T}(C_{j})$ and any $C_{i+1}\in \widetilde{\mathcal{C}}_0(S_n)_{T_{i+1}}.$
\item[ {\it Stop}]: The procedure stops in $c_0(\mathcal{T}(S_n))$ steps.
\end{itemize}
\medskip
\begin{itemize}\item[ (II)] {\it The value of $\widetilde {c}_0(S_n)$ }
\end{itemize}
Compute the integers $$\widetilde {c}_0(S_n)_{T_j}=\frac{\mu_{T_j} (S_n)}{\phi(o(T_j))k_{C_j}(T_j)}\quad (1\leq j\leq c_0(\mathcal{T}(S_n)))$$ and sum them up to get $\widetilde {c}_0(S_n)$.}
\end{proc}
The complete freedom in the choice of the
$C_j\in \widetilde{\mathcal{C}}_0(S_n)_{T_j}$ allows us
to compute each $ \widetilde{c}_0(S_n)_{T_j}={\frac{\mu_{T_j} (S_n)}{\phi(o(T_j)) k_{C_j}(T_j)}}$,
selecting $C_j$ as the component containing $[\psi]$, for $[\psi]$ chosen
at will among the $\psi\in S_n\setminus\{id\}$ with $T_{\psi}=T_j.$ We will
apply this fact with no further mention. We also emphasize that this computation is made easy by \eqref{count}.
Remarkably, the number
$c_0(\mathcal{T}(S_n))$ counts the steps of the procedure.
\vskip 0.4 true cm
\subsection{\bf Preliminary lemmas and small degrees }
\vskip 0.4 true cm
We start by summarising what we know about isolated vertices from Proposition \ref{2con}\,(ii), Lemma \ref{isolated-order} and Corollary \ref{iso-rod-type}.
\begin{lem} \label{isolatedSn}\noindent \begin{itemize}\item[(i)] The type $T\in \mathcal{T}_0(n)$ is isolated in $P_0(\mathcal{T}(S_n))$ if and only if each $[\psi]\in [S_n]_0$ of type $T$ is isolated in $\widetilde{P}_0(S_n)$.
\item[(ii)] If $m\in O_0(S_n)$ is isolated in $\mathcal{O}_0(S_n),$ then each vertex of order $m$ is isolated in $\widetilde{P}_0(S_n)$ and each type of order $m$ is isolated in $P_0(\mathcal{T}(S_n)).$
\item[(iii)] If, for some $\psi\in S_n$, $[\psi]$ is isolated in $\widetilde{P}_0(S_n)$, then $o(\psi)$ is a prime $p$ and the component of $P_0(S_n)$ containing $\psi$ is a complete graph on $p-1$ vertices.
\end{itemize}
\end{lem}
As a consequence, we are able to analyze the degrees equal to a prime or to a prime plus $1$.
\begin{lem}\label{lem:3}
Let $n\in\{p, p+1\}$ for some $p\in P$. Then the following facts hold:
\begin{itemize}
\item[(i)] $p$ is isolated in $\mathcal{O}_0(S_n).$ The type $[1^{n-p},p]$ is isolated in $P_0(\mathcal{T}(S_n));$
\item[(ii)] each vertex of $[S_n]_0$ of order $p$ is isolated in $\widetilde{P}_0(S_n)$;
\item[(iii)] the number of components of $\widetilde{P}_0(S_n)$ containing the elements of order $p$ in $[S_n]_0$ is given by $\widetilde {c}_0(S_p)_{[p]}=(p-2)!$ if $n=p$, and by $\widetilde {c}_0(S_{p+1})_{[1,p]}=(p+1)(p-2)!$ if $n=p+1$.
\end{itemize}
\end{lem}
\begin{proof} (i)-(ii) Since $n\in\{p, p+1\}$, we have $p\leq n$, so that $S_n$ admits elements of order $p$. Since $2p>n$, there exists no element of order $kp$ with $k\geq 2$, and hence $p$ is isolated in $\mathcal{O}_0(S_n).$ Thus Lemma \ref{isolatedSn} applies.
(iii) The counting follows from Lemma \ref{fusion2}\,(ii) and formula \eqref{count}, after having observed that the only type of order $p$ in $P_0(\mathcal{T}(S_p))$ is $[p]$ and that the only type of order $p$ in $P_0(\mathcal{T}(S_{p+1}))$ is $[1,p]$.
\end{proof}
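For instance, for $p=5$, formula \eqref{count} and Lemma \ref{fusion2}\,(ii) make the two counts in (iii) explicit:
\[
\widetilde{c}_0(S_5)_{[5]}=\frac{\mu_{[5]}(S_5)}{\phi(5)}=\frac{4!}{4}=3!=6,
\qquad
\widetilde{c}_0(S_6)_{[1,5]}=\frac{\mu_{[1,5]}(S_6)}{\phi(5)}=\frac{6!/5}{4}=36=(5+1)(5-2)!.
\]
These two values are used in the computations for $S_5$ and $S_6$ below.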
\begin{lem}\label{lem:4}
For $n\geq 6$, the transpositions of $\widetilde{P}_0(S_n)$ lie in the same component $\widetilde{\Delta}_n$ of $\widetilde{P}_0(S_n)$.
Moreover $\mathcal{T}(\widetilde{\Delta}_n)\supseteq \{[1^{n-2},2], [1^{n-5},2,3], [1^{n-3},3]\}.$
\end{lem}
\begin{proof}
Let $[\varphi_1]$ and $[\varphi_2]$ be two distinct transpositions in $S_n$. Then their supports $M_{\varphi_1}$ and $M_{\varphi_2}$ are distinct. If
$|M_{\varphi_1}\cap M_{\varphi_2}|=1$, then there exist distinct $a,b,c\in N$ such that $\varphi_1=(a~b)$ and $\varphi_2=(a~c)$. Moreover, as $n\geq 6$, there exist distinct $d,e,f\in N\setminus \{a,b,c\}$ and we have the path
$$[(a~b)], [(a~b)(d~e~f)], [(d~e~f)], [(a~c)(d~e~f)], [(a~c)]$$
between $[\varphi_1]$ and $[\varphi_2]$.
If $|M_{\varphi_1}\cap M_{\varphi_2}|=0$, then there exist distinct $a,b,c, d\in N$ such that $\varphi_1=(a~b)$ and $\varphi_2=(c~d).$ Let $\varphi_3=(a~c)$. By the previous case, there exists a path between $[\varphi_1]$ and
$[\varphi_3]$ and a path between $[\varphi_2]$ and $[\varphi_3]$. Therefore there is also a path between $[\varphi_1]$ and $[\varphi_2]$. This shows that all the transpositions of $\widetilde{P}_0(S_n)$ lie in the same component $\widetilde{\Delta}_n.$ Next, collecting the types met in the paths, we get $\mathcal{T}(\widetilde{\Delta}_n)\supseteq \{[1^{n-2},2], [1^{n-5},2,3], [1^{n-3},3]\}.$
\end{proof}
We note now an interesting immediate fact.
\begin{lem}\label{complete} Let $X, Y$ be graphs and $\varphi\in\mathrm{Hom}(X,Y)$. If $\hat{X}$ is a complete subgraph of $X$, then $\varphi(\hat{X})$ is a complete subgraph of $Y.$
\end{lem}
\begin{cor}\label{comp-structure} Let $n\geq 6$ and let $\Delta_n$ be the unique component of $P_0(S_n)$ such that $\pi (\Delta_n)=\widetilde{\Delta}_n.$ Then none of the components $\Delta_n,\ \widetilde{\Delta}_n,\ \widetilde{t}(\widetilde{\Delta}_n)$ of the graphs $P_0(S_n)$, $\widetilde{P}_0(S_n)$, $P_0(\mathcal{T}(S_n))$
respectively, nor the connected subgraph $\widetilde{o}(\widetilde{\Delta}_n)$ of $\mathcal{O}_0(S_n)$, is a complete graph.
\end{cor}
\begin{proof} First note that the existence of a unique component $\Delta_n$ of $P_0(S_n)$ such that $\pi(\Delta_n)=\widetilde{\Delta}_n$ is guaranteed by \cite[Corollary 5.13]{bub}, because $\pi$ is pseudo-covering and tame due to Lemma \ref{rmk:1}. Moreover, by Proposition \ref{conj}, $\widetilde{t}$ is a complete orbit homomorphism and thus locally surjective. Hence \cite[Theorem A\,(i)]{bub} guarantees that $\widetilde{t}(\widetilde{\Delta}_n)$ is a component of $P_0(\mathcal{T}(S_n))$ with $V_{\widetilde{t}(\widetilde{\Delta}_n)}=\mathcal{T}(\widetilde{\Delta}_n)$. On the other hand, by Proposition \ref{order-quo},
we have the complete graph homomorphism $o_{\mathcal{T}}:P_0(\mathcal{T}(S_n))\rightarrow \mathcal{O}_0(S_n)$ such that $o_{\mathcal{T}}\circ \widetilde{t}=\widetilde{o}$. In particular $\widetilde{o}(\widetilde{\Delta}_n)=o_{\mathcal{T}}(\widetilde{t}(\widetilde{\Delta}_n)),$ so that
we can interpret the sequence of graphs
$$\Delta_n,\ \widetilde{\Delta}_n,\ \widetilde{t}(\widetilde{\Delta}_n),\ \widetilde{o}(\widetilde{\Delta}_n)$$
as
\begin{equation}\label{new}\Delta_n,\ \pi(\Delta_n),\ ( \widetilde{t}\circ \pi)(\Delta_n),\ (o_{\mathcal{T}}\circ \widetilde{t}\circ \pi)(\Delta_n).
\end{equation}
It is immediate to check that $\widetilde{o}(\widetilde{\Delta}_n)$ is not a complete graph because, by Lemma \ref{lem:4}, it admits as vertices the integers $2$ and $3$, and no edge exists between them in $\mathcal{O}_0(S_n)$. Then, to deduce that no graph in the sequence \eqref{new} is complete, we start from the bottom and apply Lemma \ref{complete} three times.
\end{proof}
Note that, in general, $\widetilde{o}(\widetilde{\Delta}_n)$ is not a component of $\mathcal{O}_0(S_n)$ because $\widetilde{o}$ is not pseudo-covering. For instance, $\widetilde{o}(\widetilde{\Delta}_6)$ is not a component of $\mathcal{O}_0(S_6)$ because $4\notin V_{\widetilde{o}(\widetilde{\Delta}_6)}$ while $4$ belongs to the component of $\mathcal{O}_0(S_6)$ containing $\widetilde{o}(\widetilde{\Delta}_6)$.
Nevertheless, an argument in the proof of Theorem B\,(ii) shows that $\widetilde{o}(\widetilde{\Delta}_n)$ is indeed a component, at least for $n\geq8$.
\begin{proof} [Proof of Theorem B\,(i)] Let $G=S_n$, for $2\leq n \leq 7$, acting on $N=\{1,\dots,n\}$. We compute $\widetilde{c}_0(S_n)$ separately for each degree.
Since $[S_2]_0=\{ [(1~2)]\}$, we immediately have $\widetilde{c}_0(S_2)=c_0(\mathcal{T}(S_2))=c_0(\mathcal{O}(S_2))=1.$ Since $[S_3]_0=\{ [(1~2)], [(1~3)], [(2~3)],[(1~2~3)]\}$, we have $\mathcal{T}_0(3)=\{[1,2], [3]\}$ and $O_0(S_3)=\{2, 3\}$. Thus, by Lemma \ref{lem:3}, we get $\widetilde{c}_0(S_3)=4$ and $c_0(\mathcal{T}(S_3))=c_0(\mathcal{O}(S_3))=2.$
Let $n=4$. We start by considering the type $T_1=[4]$ and the cycle $\psi=(1~2~3~4)\in S_4$. By Lemmas \ref{edge} and \ref{prospli}, the only vertex adjacent to $[\psi]$ is $[\varphi]=[(1~3)(2~4)]$, and no other vertex can be adjacent to $[\psi]$ or $[\varphi]$. Thus the component $C_1$ of $\widetilde{P}_0(S_4)$ having $[\psi]$ as a vertex is a path of length one, $k_{C_1}(T_1)=1$ and $\widetilde{c}_0(S_4)_{T_1}=\frac{\mu_{[4]}(S_4)}{\phi(4)}=3$. Note that $\mathcal{T}(C_1)=\{[4], [2^2]\}.$
By Lemma \ref{lem:3}, a vertex of type $T_2=[1, 3]$ is isolated and thus $\widetilde {c}_0(S_4)_{T_2}=\frac{\mu_{[1,3]}(S_4)}{\phi(3)}=4.$
Consider now the type $T_3=[1^2, 2]$: it is not a proper power and has no proper power. So, by Lemma \ref{edge}, a component admissible for $T_3$ is again reduced to a single vertex. Thus $\widetilde {c}_0(S_4)_{T_3}=\mu_{[1^2, 2]}(S_4)=6$. Since all the possible types in $S_4$ have been considered, Procedure \ref{procedureS_n} ends, giving $c_0(\mathcal{T}(S_4))=3$ and $\widetilde{c}_0(S_4)=3+4+6=13$. Since $O_0(S_4)=\{2,3,4\},$ we instead have $c_0(\mathcal{O}(S_4))=2.$
Let $n=5$. By Lemma~\ref{lem:3}\,(ii)-(iii), the vertices of type $T_1=[5]$ in $\widetilde{P}_0(S_5)$ lie in $3!=6$ components, each reduced to an isolated vertex. Let $C_1$ be one of those components.
Consider, for the type $T_2=[1,4]$, the cycle $\psi=(1~2~3~4)\in S_5$. By Lemmas \ref{edge} and \ref{prospli}, the component $C_2$ containing $[\psi]$ admits as vertices just $[(1~2~3~4)]$ and $[(1~3)(2~4)].$ Thus $k_{C_2}(T_2)=1$ and $\widetilde {c}_0(S_5)_{T_2}=\frac{\mu_{[1,4]}(S_5)}{\phi(4)}=15$. Moreover, $\mathcal{T}(C_1)\cup\mathcal{T}(C_2)=\{[5], [1,4], [1,2^2]\}.$ We next consider $T_3=[2,3]$ and $\psi=(1~2)(3~4~5)\in S_5$. By Lemma \ref{prospli}, $T_3$ is not a power and so there exists no $[\varphi]\in [S_5]_0$ such that $\varphi^s=\psi$. On the other hand, to get a power of $T_3=T_{\psi}$ different from $[1^5]$, we must consider $T_{\psi^a}$ where $\gcd(a, o(\psi))\neq 1$, that is, $\psi^a$ for $a\in \{2,3,4\}.$ So the component $C_3$ containing $[\psi]$ contains the path $[(1~2)],[(1~2)(3~4~5)],[(3~4~5)].$ We show that $C_3$ is indeed that path.
If there exists a proper edge $\{[(3~4~5)], [\varphi]\}$, then, by Lemma \ref{edge}, $T_{\varphi}\notin \mathcal{T}(C_1)\cup\mathcal{T}(C_2)\cup\{[1^2,3]\}$ and thus $T_{\varphi}\in \{[1^3, 2], [2,3]\}.$ But $[1^3, 2]$ and $[1^2,3]$ are not one the power of the other and thus, by Lemma \ref{edge}, we must have $T_{\varphi}=[2,3]$, say $\varphi=(a~b)(c~d~e)$ where $\{a, b, c, d, e\} =N$. So $\left[(a~b)(c~d~e)\right]^2=(c~e~d)$ is a generator of $\langle (3~4~5)\rangle$. It follows that $(a~b)=(1~2)$ and $(c~e~d)\in \{(3~4~5), (3~5~4)\}$. Thus $\varphi\in \{\varphi_1=(1~2)(3~4~5), \varphi_2=(1~2)(3~5~4)\}.$ But it is immediately checked that
$[\varphi_1]=[\varphi_2]=[\psi].$
Similarly one can check that the only proper edge $\{[(1~2)], [\varphi]\}$ is given by the choice $\varphi=\psi.$
So we have $\widetilde {c}_0(S_5)_{T_3}=\frac{\mu_{[2,3]}(S_5)}{\phi(6)}=10$ and, since all the possible types in $S_5$ have been considered, we get $c_0(\mathcal{T}(S_5))=3$ and $\widetilde{c}_0(S_5)=31$. On the other hand there are only two components for $\mathcal{O}_0(S_5)$: one reduced to the vertex $5$ and the other one having as set of vertices $\{2,3,4,6\}.$
Let $n=6$. In $\widetilde{P}_0(S_6)$, by Lemma~\ref{lem:3}\,(iii), the elements of type $T_1=[1, 5]$ lie in $36$ components, each reduced to an isolated vertex. Let $C_1$ be one of them. Consider the type $T_2=[2,4]$ and $\psi=(1~2)(3~4~5~6)$. The component $C_2$ containing $[\psi]$ is the path
$$[(1~2)(3~4~5~6)],\ [(3~5)(4~6)],\ [(3~4~5~6)].$$
This is easily checked, by Lemma \ref{edge}, taking into account that the only proper power of $[2,4]$ is $[1^2,2^2]$ and that, by Lemma \ref{prospli}, no type admits $[2,4]$ as proper power. Moreover, $[1^2,2^2]$ admits no proper power and is the proper power only of $[2,4]$ and $[1^2,4].$
It follows that $k_{C_2}(T_2)=1$ and $\widetilde {c}_0(S_6)_{T_2}=\frac{\mu_{[2,4]}(S_6)}{\phi(4)}=45$. Since $$\mathcal{T}(C_1)\cup\mathcal{T}(C_2)=\{[1,5], [2,4], [1^2,2^2], [1^2,4]\},$$ we consider $T_3=[1^4,2]$ and $\psi=(1~2)\in S_6$. By Lemma~\ref{lem:4}, all the vertices of type $T_3$ are in $C_3=\widetilde{\Delta}_6$ and, using Corollary \ref{two-types}\,(iii), we see that $C_3$ contains also all the vertices of type $[1,2,3]$ and $[1^3,3]$.
But it is easily checked that no further type exists having as a power one of the types $[1^4,2], [1,2,3], [1^3,3]$, so that $\mathcal{T}(C_3)=\{[1^4,2], [1,2,3], [1^3,3]\}$. We claim that all the elements of type $T_4=[2^3]$ are in the same component $C_4,$ so that $\widetilde {c}_0(S_6)_{T_4}=1.$
Let $[\varphi_1]$ and $[\varphi_2]$ be distinct elements in $[S_6]_0$ of type $[2^3]$. Note that, since $[\varphi_1], [\varphi_2]$ are distinct they share at most one transposition. Let $\varphi_1=(a~b)(c~d)(e~f)$, with $\{a, b, c, d, e, f\}=\{1, 2, 3, 4, 5, 6\}$. Since the $2$-cycles in which $\varphi_1$ splits commute and also the entries in each cycle commute, we can restrict our analysis to $\varphi_2=(a~b)(c~e)(d~f)$, if $\varphi_1, \varphi_2$ have one cycle in common, and to $\varphi_2=(a~c)(b~e)(d~f)$, if $\varphi_1, \varphi_2$ have no cycle in common. In the first case we have the following path of length $8$ between $[\varphi_1]$ and $[\varphi_2]$:
$$[\varphi_1], [(a~e~d~b~f~c)], [(a~d~f)(e~b~c)], [(e~a~b~d~c~f)], [(e~d)(a~c)(b~f)],$$
$$ [(d~a~b~e~c~f)], [(d~b~c)(a~e~f)], [(a~d~e~b~f~c)], [\varphi_2].$$
In the second case we have the following path of length $4$ between $[\varphi_1]$ and $[\varphi_2]$:
$$[\varphi_1], [(a~f~d~b~e~c)], [(a~d~e)(f~b~c)], [(f~a~b~d~c~e)], [\varphi_2].$$
Collecting the types met in those paths, we see that $\mathcal{T}(C_4)\supseteq\{[2^3], [6], [3^2]\}$ and since all the other possible types in $S_6$ have been considered we get that $\mathcal{T}(C_4)=\{[2^3], [6], [3^2]\}$. Thus
our procedure ends giving $c_0(\mathcal{T}(S_6))=4$ and $\widetilde{c}_0(S_6)=83$. Moreover $c_0(\mathcal{O}(S_6))=2$ with the two components of $\mathcal{O}_0(S_6)$ having as vertex sets $\{5\}$ and $\{2,3,4,6\}.$
Let $n=7.$ In $\widetilde{P}_0(S_7)$, by Lemma~\ref{lem:3}, the elements of type $T_1=[7]$ are in $\widetilde {c}_0(S_7)_{T_1}=120$ components which are isolated vertices. Let $C_1$ be one of them.
By Lemma~\ref{lem:4} and Corollary \ref{two-types}\,(iii), all the vertices of type $T_2=[1^5,2]$ and those of types $[1^2,2,3], [1^4,3]$
are in the same component $C_2=\widetilde{\Delta}_7.$ In particular $\widetilde {c}_0(S_7)_{T_2}=1.$ We show that also the types $[1^3, 4]$, $[1^3, 2^2]$, $[2^2, 3]$, $[1, 2, 4]$, $[2, 5]$ and $[1^2, 5]$ are admissible for $C_2$. From the path
$$ [(1~2~3~4)],\ [(1~3)(2~4)],\ [(1~3)(2~4)(5~6~7)],\ [(5~6~7)]$$
we deduce that $[1^3, 4]$, $[1^3, 2^2]$ and $[2^2, 3]$ are admissible for $C_2$, because $[(5~6~7)]$ is a vertex of $C_2$. Then it is enough to consider the path
$ [(1~2)(3~4~5~6)], [(3~5)(4~6)]$ for getting the type $[1, 2, 4]$
and the path
$[(1~2~3~4~5)], [(1~2~3~4~5)(6~7)], [(6~7)]$ for getting the types $[2, 5]$ and $[1^2, 5]$.
We now turn our attention to the type $T_3=[1, 6]$ and to $\psi_0=(1~2~3~4~5~6)\in S_7.$ Let $C_3$ be the component of $\widetilde{P}_0(S_7)$ containing $[\psi_0].$ By Lemma \ref{prospli}\,(ii), $T_3$ is not a power and its only powers are the types $[1, 2^3]$ and $[1, 3^2]$, which in turn admit no powers and are powers only of $T_3$.
It follows that $\mathcal{T}(C_3)=\{[1, 2^3], [1, 3^2],[1, 6] \}.$ In particular, if $[\psi]\in V_{C_3}$, then $\psi$ admits a unique fixed point. Moreover, by what was shown for $S_6$, all the vertices in $[S_7]$ of type $T_3$ fixing $7$ are contained in $C_3$. We show that, indeed, each $[\psi]\in V_{C_3}$ is such that $\psi(7)=7.$
By contradiction, assume $\psi(7)\neq 7$ for some $[\psi]\in V_{C_3}$. Then, since there is a path between $[\psi]$ and $[\psi_0]$, there exist $[\varphi], [\psi']\in V_{C_3}$ with $\varphi(7)=7$ and $\psi'(j)=j$ for some $j\neq 7$, and an edge $\{[\varphi], [\psi']\}\in [E]^*_{0}$. But either $\varphi$ is a power of $\psi'$, and so $\varphi$ admits $j$ as a fixed point, or $\psi'$ is a power of $\varphi$, and so $\psi'$ admits $7$ as a fixed point. In either case, we reach a contradiction. Thus we have $k_{C_3}(T_3)=\frac{\mu_{[6]}(S_6)}{\phi(6)}$ and so $\widetilde {c}_0(S_7)_{T_3}=\frac{\mu_{[1,6]}(S_7)}{\mu_{[6]}(S_6)}=7$. Since there are no other types left in $S_7$, we conclude that $c_0(\mathcal{T}(S_7))=3$ and $\widetilde{c}_0(S_7)=128$. Moreover, $c_0(\mathcal{O}_0(S_7))=2$, with the two components of $\mathcal{O}_0(S_7)$ having vertex sets $\{7\}$ and $\{2,3,4,5,6,10\}.$
\end{proof}
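The numerical bookkeeping for $n=7$ above can be double-checked directly; here, as in the proof, $\mu_T$ denotes the number of permutations of a given cycle type (a quick verification sketch, not part of the paper):

```python
# Verify the counts used for S_7: 120 isolated classes of type [7],
# 7 components of type [1,6], and \tilde{c}_0(S_7) = 120 + 1 + 7 = 128.
from math import factorial, gcd

def phi(n):                                # Euler's totient, naive
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

mu_7_in_S7 = factorial(6)                  # number of 7-cycles in S_7: 6! = 720
mu_6_in_S6 = factorial(5)                  # number of 6-cycles in S_6: 5! = 120
mu_16_in_S7 = 7 * mu_6_in_S6               # fix the fixed point, then a 6-cycle: 840

assert mu_7_in_S7 // phi(7) == 120         # isolated components of type [7]
assert mu_16_in_S7 // mu_6_in_S6 == 7      # components of type [1,6]
assert 120 + 1 + 7 == 128                  # \tilde{c}_0(S_7)
```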
We are ready to show that, for $n \geq 8,$ the main role is played by the component $\widetilde{\Delta}_n$ defined in Lemma \ref{lem:4}.
\begin{prop}\label{lem:8} For $n \geq 8$, all the vertices of $[S_n]_0$, apart from those of order a prime $p\geq n-1$, are contained in $\widetilde{\Delta}_n.$
\end{prop}
\begin{proof} Let $n \geq 8$. We start by showing that each
$[\psi] \in [S_n]_0$ of even order is a vertex of $\widetilde{\Delta}_n.$ Let
$o(\psi)=2k$, for $k$ a positive integer. Since $o(\psi^k)=2$, $\psi^k$ is the product of $s\geq 1$ transpositions. If $s=1$ we have $\psi^k=(a~b)$ for suitable $a,b\in N$ and, by Lemma \ref{lem:4}, the path $[\psi], [(a~b)]$ has its final vertex in $\widetilde{\Delta}_n$. If $s=2,$ then
$\psi^k=(a~b)(c~d)$, for suitable $a, b, c, d \in
N$. Since $n\geq 8$, there exist distinct
$e, f, g \in N\setminus \{a, b, c, d\}$, and we have the path
$$[\psi],[\psi^k], [(a~b)(c~d)(e~f~g)], [(e~f~g)], [(a~b)(e~f~g)], [(a~b)]$$
with an end vertex belonging to $\widetilde{\Delta}_n$.
Finally, if $s\geq 3$, we have $\psi^k=(a~b)(c~d)(e~f)\sigma$, for suitable
$a, b, c, d, e, f \in N$ and $\sigma\in S_n$ with $\sigma^2=id$. Let $\varphi=(a~c~e~b~d~f)\sigma$, so that $\varphi^3=\psi^k$. Since $n\geq 8$, there exist distinct $g, h \in N\setminus\{a, b, c, d, e, f\}$, and we have the path
$$[\psi], [\psi^k], [\varphi], [(a~e~d)(c~b~f)], [(a~e~d)(c~b~f)(g~h)], [(g~h)]$$
with an end vertex belonging to $\widetilde{\Delta}_n$.
Next let $o(\psi)=p$, where $p$ is an odd prime such that $p\leq n-2$. If $|M_{\psi}|\leq n-2,$ pick $a, b\in \{1, 2, \dots, n\}\setminus M_{\psi}$ and consider the path $[\psi], [\psi(a~b)], [(a~b)].$ If $|M_{\psi}|\geq n-1$, observe that, due to $p\leq n-2$,
$\psi$ is the product of $s\geq 2$ cycles of length $p$, say $\psi=(a_1~a_2~\dots~a_p)(b_1~b_2~\dots~b_p)\sigma$, where $\sigma=id$ or $\sigma$ is the product of $s-2$ cycles of length $p$. Let $\varphi=(a_1~b_1~a_2~b_2~\dots~a_p~b_p){\sigma}^{\frac{p+1}{2}}$. Since $o(\varphi)$ is even, by what was shown above, we get $[\varphi] \in V_{\widetilde{\Delta}_n}$. Moreover, we have $\varphi^{2}=\psi$ and thus $[\psi]\in V_{\widetilde{\Delta}_n}$.
Finally let $o(\psi)=tpq$, where $p, q\geq 3$ are prime numbers and $t$ is
an odd positive integer. Then $o(\psi^t)=pq$ and, in the decomposition of $\psi^t$ into disjoint cycles, there exists either a cycle of length $pq$ or two cycles of lengths $p$ and $q$. In the first case $pq\leq n$ gives $p\leq n/q\leq n/3\leq n-2$. In the second case $p+q\leq n$ gives $p\leq n-q\leq n-3.$
Thus, as $o(\psi^{tq})=p\leq n-2$, by the previous case we obtain $[\psi^{tq}]\in V_{\widetilde{\Delta}_n}$ and so $[\psi]\in V_{\widetilde{\Delta}_n}$.
\end{proof}
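Two permutation identities used in the proof of Proposition~\ref{lem:8} can be verified concretely. The sketch below (our own, with $\sigma=id$ and $0$-based points) checks that $\varphi=(a~c~e~b~d~f)$ satisfies $\varphi^3=(a~b)(c~d)(e~f)$, and that for $p=3$ the permutation $\varphi=(a_1~b_1~a_2~b_2~a_3~b_3)$ satisfies $\varphi^2=(a_1~a_2~a_3)(b_1~b_2~b_3)$:

```python
# Verify the cube of the 6-cycle (a c e b d f) and the square of the
# interleaved 2p-cycle for p = 3, with sigma = id.

def from_cycles(cycles, n):
    p = list(range(n))
    for c in cycles:
        for i in range(len(c)):
            p[c[i]] = c[(i + 1) % len(c)]
    return tuple(p)

def compose(a, b):                         # (a o b)(x) = a(b(x))
    return tuple(a[b[x]] for x in range(len(a)))

# (i) phi = (a c e b d f), phi^3 = (a b)(c d)(e f)
a, b, c, d, e, f = range(6)
phi = from_cycles([(a, c, e, b, d, f)], 6)
assert compose(phi, compose(phi, phi)) == from_cycles([(a, b), (c, d), (e, f)], 6)

# (ii) p = 3: phi = (a1 b1 a2 b2 a3 b3), phi^2 = (a1 a2 a3)(b1 b2 b3)
a1, a2, a3, b1, b2, b3 = range(6)
phi = from_cycles([(a1, b1, a2, b2, a3, b3)], 6)
assert compose(phi, phi) == from_cycles([(a1, a2, a3), (b1, b2, b3)], 6)
```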
\begin{proof} [Proof of Theorem B (ii) and Theorem D] Let $n\geq 8$ be fixed and recall that
$c_0(S_n)=\widetilde {c}_0(S_n)$. First of all, observe that $P\cap O_0(S_n)=\{p\in P: p\leq n\}.$ Therefore we can reformulate Lemma \ref{lem:8} by saying that all the vertices in $[S_n]_0$ of order not belonging to the set $B(n)=P\cap \{n, n-1\}$ are in $\widetilde{\Delta}_n.$ In particular, $V_{\widetilde {o}(\widetilde{\Delta}_n)}\supseteq O_0(S_n)\setminus B(n).$
Let $\Sigma_n$ be the unique component of $\mathcal{O}_0(S_n)$
such that $V_{\Sigma_n}\supseteq V_{\widetilde {o}(\widetilde{\Delta}_n)}$.
If $n\notin P\cup (P+1)$, then $B(n)=\varnothing$ and thus all the vertices of $[S_n]_0$ are in $\widetilde{\Delta}_n$. In this case $\widetilde{P}_0(S_n)=\widetilde{\Delta}_n$ is connected and, by \cite[Proposition 3.2]{bub}, both its quotients $\widetilde{P}_0(\mathcal{T}(S_n))$ and $\mathcal{O}_0(S_n)$ are connected.
Note that the completeness of $\tilde{t}$ and $\widetilde {o}$ implies $\tilde{t}( \widetilde{\Delta}_n )=P_0(\mathcal{T}(S_n))$ and $\widetilde {o}(\widetilde{\Delta}_n)=\mathcal{O}_0(S_n)=\Sigma_n.$
Next let $n\in P\cup (P+1)$, say $n=p$ or $n=p+1$ for some $p\in P,$ necessarily odd. Then $B(n)=\{p\}$.
We need to understand only the components of $\widetilde {P}_0(S_n)$ and $P_0(\mathcal{T}(S_n))$
containing vertices of order $p$, and decide whether or not $p\in V_{\Sigma_n}$.
By Lemma~\ref{lem:3}, $p$ is isolated in $\mathcal{O}_0(S_n)$ and the unique type $T$ of order $p$ is isolated in $\widetilde{P}_0(\mathcal{T}(S_n)).$ Moreover,
each component of $\widetilde{P}_0(S_n)$ admissible for $T$ is an isolated vertex, and their number is known. The values for $c_0(S_n)$ displayed in Table \ref{eqtable2} and the fact that $c_0(\mathcal{T}(S_n))=2$ immediately follow. Now note that $\Sigma_n$ cannot reduce to the isolated vertex $p$, because $\Sigma_n$ contains at least the vertex $2\in O_0(S_n)\setminus \{p\}$.
Thus $c_0(\mathcal{O}_0(S_n))=2$ and the two components of $\mathcal{O}_0(S_n)$ have as vertex sets $\{p\}$ and $V_{\Sigma_n}= O_0(S_n)\setminus \{p\}=V_{\widetilde {o}(\widetilde{\Delta}_n)}.$ Since we have shown that $\widetilde{\Delta}_n$ is the only possible component of $\widetilde{P}_0(S_n)$ not reduced to an isolated vertex and $\widetilde {o}$ is complete, applying \cite[Proposition 5.2]{bub} we get $\Sigma_n=\widetilde {o}(\widetilde{\Delta}_n).$
So far, for every $n\geq 8$, we have shown that $\widetilde{\Delta}_n$ is the only possible component of $\widetilde{P}_0(S_n)$ not reduced to an isolated vertex; $\tilde{t}(\widetilde{\Delta}_n)$ is the only possible component of $P_0(\mathcal{T}(S_n))$ not reduced to an isolated vertex; $\widetilde {o}(\widetilde{\Delta}_n)$ is the only possible component of $\mathcal{O}_0(S_n)$ not reduced to an isolated vertex.
We now define the main component of $P_0(S_n)$, $\widetilde{P}_0(S_n)$, $P_0(\mathcal{T}(S_n))$ and $\mathcal{O}_0(S_n)$ to be, respectively, the component $\Delta_n$ with $\pi(\Delta_n)=\widetilde{\Delta}_n$ defined in Corollary \ref{comp-structure}, the component $\widetilde{\Delta}_n$, the component $\widetilde{t}(\widetilde{\Delta}_n)$ and
the component $\widetilde {o}(\widetilde{\Delta}_n)$.
By Corollary \ref{comp-structure}, no main component is complete.
Finally, we show that every component $C$ of $P_0(S_n)$ with $C\neq \Delta_n$ is a complete graph on $p-1$ vertices. Let $\psi\in V_C$. Then $[\psi]\notin V_{\widetilde{\Delta}_n}$, and thus $[\psi]$ is isolated in $\widetilde{P}_0(S_n)$. Hence, to conclude, we invoke Lemma \ref{isolatedSn}\,(iii).
\end{proof}
\begin{cor}\label{isoclass} The following facts are equivalent:
\begin{itemize}
\item[(i)] every component $C$ of $\widetilde{P}_0(S_n)$ is isomorphic to the component of $P_0(\mathcal{T}(S_n))$ induced by $\mathcal{T}(C)$;
\item[(ii)] $2\leq n\leq 5.$
\end{itemize}
\end{cor}
\begin{proof} (ii)$\Rightarrow $(i) If $2\leq n\leq 5$, the case-by-case proof of Theorem B\,(i) directly shows the required isomorphism.
(i)$\Rightarrow $(ii) For $n\geq 6,$ we show that the component $\widetilde{\Delta}_n$ is not isomorphic to the component of $P_0(\mathcal{T}(S_n))$ induced by $\mathcal{T}(\widetilde{\Delta}_n)$. Namely, $\widetilde{\Delta}_n$ contains all the vertices of type $T=[1^{n-2},2]$, and since $k_{\widetilde{P}_0(S_n)}(T)=\frac{n(n-1)}{2}>1,$ Corollary \ref{two-types}\,(iv) applies.
\end{proof}
\begin{proof} [Proof of Corollary C] This follows by a check of Tables \ref{eqtable1} and \ref{eqtable2} in Theorem B.
\end{proof}
\begin{cor} \label{final} Apart from the trivial case $n=2$, the minimum $n\in\mathbb{N}$ such that $P(S_n)$ is $2$-connected is $n=9$. There exist infinitely many $n\in\mathbb{N}$ such that $P(S_n)$ is $2$-connected.
\end{cor}
\begin{proof} Let $n=k^2$ for some $k\geq 3.$ Then $n\geq 8$, $n\notin P$, and $n-1=k^2-1=(k-1)(k+1)$ is not a prime. Thus, by Theorem B, $P(S_n)$ is $2$-connected. In particular, $c_0(S_9)=1$. Moreover, by Tables \ref{eqtable1} and \ref{eqtable2}, we have $c_0(S_n)>1$ for all $3\leq n\leq 8.$
\end{proof}
{\bf Acknowledgements} The authors wish to thank Gena Hahn and Silvio Dolfi for some suggestions on a preliminary version of the paper.
The first author is partially supported by GNSAGA of INdAM.
| {
"yymm": "1502",
"arxiv_id": "1502.02966",
"language": "en",
"url": "https://arxiv.org/abs/1502.02966",
"abstract": "In a previous paper of the first author a procedure was developed for counting the components of a graph through the knowledge of the components of its quotient graphs. We apply here that procedure to the proper power graph $\\mathcal{P}_0(G)$ of a finite group $G$, finding a formula for the number $c(\\mathcal{P}_0(G))$ of its components which is particularly illuminative when $G\\leq S_n$ is a fusion controlled permutation group. We make use of the proper quotient power graph $\\widetilde{\\mathcal{P}}_0(G)$, the proper order graph $\\mathcal{O}_0(G)$ and the proper type graph $\\mathcal{T}_0(G)$. We show that all those graphs are quotient of $\\mathcal{P}_0(G)$ and demonstrate a strong link between them dealing with $G=S_n$. We find simultaneously $c(\\mathcal{P}_0(S_n))$ as well as the number of components of $\\widetilde{\\mathcal{P}}_0(S_n)$, $\\mathcal{O}_0(S_n)$ and $\\mathcal{T}_0(S_n)$.",
"subjects": "Combinatorics (math.CO)",
"title": "Quotient graphs for power graphs"
} |
https://arxiv.org/abs/1710.06886 | Bounds for totally separable translative packings in the plane | A packing of translates of a convex domain in the Euclidean plane is said to be totally separable if any two packing elements can be separated by a line disjoint from the interior of every packing element. This notion was introduced by G. Fejes Tóth and L. Fejes Tóth (1973) and has attracted significant attention. In this paper we prove an analogue of Oler's inequality for totally separable translative packings of convex domains and then we derive from it some new results. This includes finding the largest density of totally separable translative packings of an arbitrary convex domain and finding the smallest area convex hull of totally separable packings (resp., totally separable soft packings) generated by given number of translates of a convex domain (resp., soft convex domain). Finally, we determine the largest covering ratio (that is, the largest fraction of the plane covered by the soft disks) of an arbitrary totally separable soft disk packing with given soft parameter. |

\section{Introduction}\label{intro}
Our paper intends to bridge totally separable packings in discrete geometry and Oler's inequality in the geometry of numbers. The concept of totally separable packings was introduced by G. Fejes T\'oth and L. Fejes T\'oth in \cite{FeFe} as follows. We say that a set of domains is totally separable if any two of them can be separated by a straight line avoiding all of the domains. The main question investigated in \cite{FeFe} is to find the densest totally separable arrangement of congruent replicas of a given domain. The paper \cite{FeFe} generated a good deal of interest in the density problem of totally separable arrangements and led to further important publications such as \cite{Andras} and \cite{Ke}. Coming from this direction, our goal was to find the densest totally separable arrangement of translates of a given domain and then to extend that approach to the analogous question for finite totally separable arrangements. It turned out that an efficient method to achieve all that is based on a new version of Oler's classical inequality (\cite{Oler}). So, next we introduce some basic terminology and then state Oler's inequality in the form which is most suitable for this paper.
Let $K$ be a {\it convex domain}, i.e., a compact convex set with non-empty interior in the Euclidean plane $\mathbb{E}^2$. A family $\mathcal{F}$ of $n$ translates of $K$ in $\mathbb{E}^2$ is called a {\it packing} if no two members of $\mathcal{F}$ have an interior point in common.
If $K$ is an $o$-symmetric convex domain in $\mathbb{E}^2$, where $o$ stands for the origin of $\mathbb{E}^2$, then let $| \cdot |_{K}$ denote the {\it norm generated by $K$}, i.e., let $|x|_{K} = \min\{ \lambda : x \in \lambda K\}$ for any $x\in \mathbb{E}^2$. The distance between the points $p$ and $q$ measured in the norm $| \cdot |_{K}$ is denoted by $|p-q|_K$. For the sake of simplicity, the Euclidean distance between the points $p$ and $q$ of $\mathbb{E}^2$ is denoted by $|p-q|$.
If $P = \bigcup_{i=1}^n [x_{i-1},x_i]$ is a polygonal curve in $\mathbb{E}^2$, and $K$ is an $o$-symmetric plane convex domain, then the {\it Minkowski length} of $P$ is defined as $M_K(P) = \sum_{i=1}^n |x_i-x_{i-1}|_K$. Based on this, and using approximation by closed polygons, one can define the Minkowski length $M_K(G)$ of any rectifiable curve $G \subseteq \mathbb{E}^2$ in the norm $| \cdot |_{K}$. If $K$ is not $o$-symmetric, then by $M_K(G)$ we mean the length of $G$ in the \emph{relative norm} of $K$, i.e., in the norm defined by $\frac{1}{2}(K-K)$ \cite{Oler}.
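To make the definition concrete, the following sketch (our own illustration, not from \cite{Oler}) computes the Minkowski length of a polygonal curve for $K=[-1,1]^2$, for which $|\cdot|_K$ is the maximum norm; for a general $o$-symmetric polygon $K$ one would instead minimise $\lambda$ subject to $x\in\lambda K$:

```python
# Minkowski length of a polygonal curve in the norm generated by an
# o-symmetric K, illustrated for K = [-1,1]^2 (where |x|_K = max-norm).

def norm_square(x):                        # |x|_K for K = [-1,1]^2
    return max(abs(x[0]), abs(x[1]))

def minkowski_length(points, norm):        # sum of |x_i - x_{i-1}|_K over edges
    return sum(norm((q[0] - p[0], q[1] - p[1]))
               for p, q in zip(points, points[1:]))

# the closed unit square traversed once: each side has max-norm length 1
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
assert minkowski_length(square, norm_square) == 4
```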
Finally, if $K$ is an $o$-symmetric convex domain in $\mathbb{E}^2$, then let $\diamond(K)$ denote a minimal area circumscribed hexagon of $K$.
Now, we are ready to state Oler's inequality (\cite{Oler}) in the following form. Let $K$ be an $o$-symmetric convex domain in $\mathbb{E}^2$. Let $$\mathcal{F} = \{ x_i + K : i=1,2,\ldots, n\}$$ be a packing of $n$ translates of $K$ in $\mathbb{E}^2$, and set $X = \{ x_1, x_2, \ldots, x_n\}$. Furthermore, let $\Pi$ be a simple closed polygonal curve with the following properties: \begin{enumerate}
\item the vertices of $\Pi$ are points of $X$
and
\item $X \subseteq \Pi^*$ with $\Pi^* = \Pi \cup \inter \Pi$, where $\inter \Pi$ refers to the interior of $\Pi$.
\end{enumerate}
Then
\begin{equation}\label{eq:Oler-original}
\frac{\area (\Pi^*)}{\area \left(\diamond (K)\right)} + \frac{M_K(\Pi)}{4} + 1 \geq n,
\end{equation}
where $\area(\cdot)$ denotes the area of the corresponding set. The formula (\ref{eq:Oler-original}) was conjectured by H. J. Zassenhaus and has a number of interesting aspects discussed in \cite{Z} (see also \cite{BetkeHenkWills} and \cite{BoRu}).
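As a quick numerical illustration of equality in (\ref{eq:Oler-original}) (our own example, not from \cite{Oler}): for the unit disk $K$ one has $\area(\diamond(K))=2\sqrt{3}$, and a parallelogram patch of the triangular lattice with spacing $2$ attains the bound:

```python
# For the unit disk K, a parallelogram patch of the triangular lattice with
# spacing 2 (an a x b grid of centres, hull spanned by 2(a-1)*(1,0) and
# 2(b-1)*(1/2, sqrt(3)/2)) gives equality in Oler's inequality.
from math import sqrt, isclose

HEX = 2 * sqrt(3)          # area of the minimal circumscribed hexagon of the unit disk

for a in range(2, 6):
    for b in range(2, 6):
        n = a * b
        hull_area = HEX * (a - 1) * (b - 1)      # |A x B| for the lattice patch
        perimeter = 4 * (a - 1) + 4 * (b - 1)    # Euclidean = Minkowski length here
        F = hull_area / HEX + perimeter / 4 + 1  # left-hand side of the bound
        assert isclose(F, n)                     # equality, n = a*b translates
```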
The rest of the paper is organized as follows. First, we prove an analogue of Oler's inequality for totally separable translative packings of convex domains (Section~\ref{1}) and then we derive from it some new results. This includes finding the largest density of totally separable translative packings of an arbitrary convex domain (Section~\ref{2}) and finding the smallest area convex hull of totally separable packings (resp., totally separable soft packings) generated by given number of translates of a convex domain (resp., soft convex domain) (Sections~\ref{3} and~\ref{4}). Finally, we determine the largest covering ratio (that is, the largest fraction of the plane covered by the soft disks) of an arbitrary totally separable soft disk packing with given soft parameter (Section~\ref{5}).
\section{An analogue of Oler's inequality for totally separable translative packings}\label{1}
We need the following definitions.
\begin{Definition}\label{defn:totallyseparable}
Let $K \subseteq \mathbb{E}^2$ be a convex domain. A packing $\mathcal{F}$ of translates of $K$ in $\mathbb{E}^2$ is called \emph{totally separable} if any two members of $\mathcal{F}$ can be separated by a line which is disjoint from the interiors of all members of $\mathcal{F}$.
\end{Definition}
\begin{Definition}\label{defn:permissiblepolygon}
A closed polygonal curve $P = \bigcup_{i=1}^m [x_{i-1},x_i]$, where $x_0 = x_m$, is called \emph{permissible} if there is a sequence
of simple closed polygonal curves $P^n = \bigcup_{i=1}^m [x^n_{i-1},x^n_i]$, where $x^n_0 = x^n_m$, satisfying $x^n_i \to x_i$ for every value of $i$. The interior $\inter P$ is defined as $\lim_{n \to \infty} \inter P^n$.
\end{Definition}
\begin{Remark}\label{rem:interioriswelldefined}
By the properties of limits, if $P = \bigcup_{i=1}^m [x_{i-1},x_i]$ is permissible and $P^n$ and $Q^n$ are sequences of simple closed polygonal curves with $\lim_{n \to \infty} P^n = \lim_{n \to \infty} Q^n = P$, then $\lim_{n \to \infty} \inter P^n = \lim_{n \to \infty} \inter Q^n$, i.e., the interior of a permissible curve is well defined.
\end{Remark}
\begin{Definition}
Let $K$ be an $o$-symmetric convex domain in $\mathbb{E}^2$. Then let $\square(K)$ denote a minimal area circumscribed parallelogram of $K$.
\end{Definition}
The main result of this section is the following totally separable analogue of Oler's inequality.
\begin{Theorem}\label{thm:Oler}
Let $K$ be an $o$-symmetric convex domain in $\mathbb{E}^2$. Let $$\mathcal{F} = \{ x_i + K : i=1,2,\ldots, n\}$$ be a totally separable packing of $n$ translates of $K$ in $\mathbb{E}^2$, and set $X = \{ x_1, x_2, \ldots, x_n\}$. Furthermore, let $\Pi$ be a permissible closed polygonal curve with the following properties: \begin{enumerate}
\item the vertices of $\Pi$ are points of $X$
and
\item $X \subseteq \Pi^*$ with $\Pi^* = \Pi \cup \inter \Pi$.
\end{enumerate}
Then
\begin{equation}\label{eq:Oler}
\frac{\area (\Pi^*)}{\area \left(\square (K)\right)} + \frac{M_K(\Pi)}{4} + 1 \geq n.
\end{equation}
\end{Theorem}
\begin{Remark}\label{equality}
We note that equality in (\ref{eq:Oler}) of Theorem~\ref{thm:Oler} is attained in a variety of ways as indicated in Fig.~\ref{fig:Oler}, which consists of blocks of zig-zags and simple closed polygons having sides parallel to the two sides of a chosen $\square (K)$.
\end{Remark}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{Oler.pdf}
\caption{A totally separable packing of translates of $K$ (with $K$ being a circular disk for the sake of simplicity), which satisfies the conditions in Theorem~\ref{thm:Oler} and for which there is equality in (\ref{eq:Oler}) of Theorem~\ref{thm:Oler}.}
\label{fig:Oler}
\end{center}
\end{figure}
\begin{Remark}\label{rem:parallelogram}
It is well-known that the width of any convex body $K$ in any direction is equal to the width of its central symmetrization $\frac{1}{2}(K-K)$ in this direction. This readily implies that $\square(K)$ does not change under central symmetrization.
\end{Remark}
\begin{Remark}\label{no-symmetry}
Let $\mathcal{F} = \{ x_i + K : i=1,2,\ldots, n\}$ be a family of $n$ translates of $K$ in $\mathbb{E}^2$, where $K$ is an $o$-symmetric convex domain of $\mathbb{E}^2$, and let $K^*$ be a convex domain satisfying $K = \frac{1}{2} (K^*-K^*)$ with $o \in \inter K^*$, and let $\mathcal{F}^* = \{ x_i + K^* : i=1,2,\ldots, n\}$. Then $\mathcal{F}$ is a packing if and only if $\mathcal{F}^*$ is a packing, and $\mathcal{F}$ is a totally separable packing if and only if $\mathcal{F^*}$ is a totally separable packing. (For details see for example, \cite{BeKhOl}.)
Thus, Theorem~\ref{thm:Oler} holds for any (not necessarily $o$-symmetric) plane convex domain $K^*$ (with $o \in \inter K^*$) as well.
\end{Remark}
\begin{Remark}
In fact, the proof of Theorem~\ref{thm:Oler} presented in this section works for more general point sets $X$ as well. Namely, one may assume only
that $\Pi^*$ can be cut into $n$ pieces by $(n-1)$ successive cuts by segments (cutting only one piece at a time) such that
\begin{itemize}
\item each segment starts (resp., ends) at some point of $\Pi$ or a preceding segment;
\item the relative interior of each segment is contained in the piece it cuts;
\item when cutting a piece by a segment, no point of $X$ lying in the given piece lies at distance less than one from this segment, measured in the norm generated by $K$;
\item at the end, each piece contains exactly one point of $X$.
\end{itemize}
\end{Remark}
\begin{proof}
For any permissible closed polygonal curve $\Pi$ in Theorem~\ref{thm:Oler}, we set
\[
F(\Pi) = \frac{\area (\Pi^*)}{\area \left(\square (K)\right)} + \frac{M_K(\Pi)}{4} + 1.
\]
We prove the assertion by induction on $n$. Clearly, if $n=1$, then $F(\Pi) = 0 + 0 + 1=1$, and Theorem~\ref{thm:Oler} holds.
Assume that for any $n' < n$, Theorem~\ref{thm:Oler} holds for any totally separable translative packing of $K$ with $n'$ elements and for any permissible polygonal curve associated to it.
We prove that it holds for $n$ element packings as well.
Let $L$ be a line intersecting $\Pi$ and separating the elements of $\mathcal{F}$.
We present the proof only for the case in which $L$ intersects $\Pi$ at exactly two points, as the proof in the other cases is similar.
Let these intersection points be $p$ and $q$. Then $p$ and $q$ are points in the relative interior of some edges $p \in [p_1,p_2]$ and $q \in [q_1,q_2]$ of $\Pi$ whose vertices are not contained in $L + \inter K$. For simplicity, we imagine $L$ as a horizontal line, $p$ to the left of $q$, and $p_1$ and $q_1$ to be above $L$. Let $L_1$ and $L_2$ be the upper, respectively lower, line bounding $L + K$. For $i=1,2$, let $p'_i$ and $q'_i$ be the intersection points of $L_i$ with $[p,p_i]$ and $[q,q_i]$, respectively.
Without loss of generality, we assume that the parallelogram $P_L$ of minimal area among the parallelograms circumscribed about $K$ having a pair of sides parallel to $L$ is a square of edge length $2$. Thus, we have $\area (P_L) = 4$.
Observe that the lines $L_1$ and $L_2$ decompose $\Pi$ into four components: one above $L_1$, one below $L_2$, and the last two being the segments $[p'_1,p'_2]$ and $[q'_1,q'_2]$. We define $\Pi'_1$ as the union of the component above $L_1$ and the segment $[p'_1,q'_1]$, and we define $\Pi'_2$ similarly. Clearly, these polygonal curves are permissible. Finally, for $i=1,2$, we let $\Pi'^*_i = \Pi'_i \cup \inter \Pi'_i$ (cf. Figure~\ref{fig:Thm1_1}).
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.5\textwidth]{Thm1_1.pdf}
\caption{Notations in the proof of Theorem~\ref{thm:Oler}}
\label{fig:Thm1_1}
\end{center}
\end{figure}
Then we have
\begin{equation}\label{eq:area}
\area (\Pi^*) = \area(\Pi'^*_1)+\area(\Pi'^*_2)+ \area \left( \conv \{ p'_1,p'_2, q'_2, q'_1\} \right) =
\end{equation}
\[
\area(\Pi'^*_1)+\area(\Pi'^*_2) + 2 |p-q| = \area(\Pi'^*_1)+\area(\Pi'^*_2) + 2 |p-q|_K.
\]
Furthermore, since the normed distance of $L_1$ and $L_2$ is two, we have
\begin{equation}\label{eq:perimeter}
M_K(\Pi) = M_K(\Pi'_1)+M_K(\Pi'_2)-|p'_1-q'_1|_K - |p'_2-q'_2|_K + |p'_1-p'_2|_K + |q'_1-q'_2|_K \geq
\end{equation}
\[
\geq M_K(\Pi'_1)+M_K(\Pi'_2) -2|p-q|_K +4.
\]
Now we define a polygonal curve in $\Pi'^*_1$. Consider the points of $X$ in the region $R_1$ bounded by $[p_1,p'_1]$, $[p'_1,q'_1]$, $[q'_1,q_1]$ and $[q_1,p_1]$. Note that this region is a (not necessarily convex) quadrangle. If $R_1$ is not convex, we assume, without loss of generality, that $p_1 \in \conv \{p'_1,q_1,q'_1\}$. Consider the ray starting at $p_1$ through $p$ and begin to rotate it counterclockwise until it hits a first point of $X$ in $R_1$. Then rotate this half line about that point clockwise until it hits the next point of $X$. Continuing
this process, we end up with a simple curve $C_1$ in $R_1$, starting at $p_1$ and ending at $q_1$, which divides $R_1$ into two connected components, one of which contains all points of $X$ in $R_1$. We remark that if $R_1$ is convex, then $C_1$ is a convex curve.
Let $\Pi_1$ denote the closed polygonal curve $\left( \Pi'_1 \setminus ([p_1,p'_1] \cup [p'_1,q'_1] \cup [q'_1,q_1] ) \right) \cup C_1$. It is easy to see that $\Pi_1$ is a permissible polygonal curve whose vertices are points of $X$ above $L$, and whose interior contains every other point of $X$ above $L$. Let $\Pi^*_1 = \Pi_1 \cup \inter \Pi_1$.
Clearly, $\area (\Pi^*_1) \leq \area (\Pi'^*_1)$. We show that $M_K(\Pi_1) \leq M_K(\Pi'_1)$.
\emph{Case 1:} $R_1$ is convex.\\
Note that in this case $C_1 \cup [p_1,q_1]$ is a convex region contained in the convex region $R_1$, and thus, $M_K(C_1)+|p_1-q_1|_K \leq M_K (\bd R_1)$, which readily implies our claim.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.5\textwidth]{Thm1_2.pdf}
\caption{The points $p_1,p'_1, q_1, q'_1$ are not in convex position as in Case 2}
\label{fig:Thm1_2}
\end{center}
\end{figure}
\emph{Case 2:} $R_1$ is not convex.\\
Then, according to our assumption, the line through $p'_1$ and $p_1$ intersects $[q_1,q'_1]$ (cf. Figure~\ref{fig:Thm1_2}). This line intersects $C_1$ at exactly one point $z$, and there is a line $L_z$ through $z$ which supports $C_1$ at $z$. Let $L_z$ intersect $[q_1,q'_1]$ at $x$, and $[p'_1,q'_1]$ at $y$.
The point $z$ decomposes $C_1$ into two convex polygonal curves $C_x$ and $C_y$ such that $p_1 \in C_y$ and $q_1 \in C_x$.
Then we have
\[
M_K(C_1) = M_K(C_y) + M_K(C_x) \leq |p_1-p'_1|_K + | p'_1-y|_K + |y-z|_K + |z-x|_K +|x-q_1|_K \leq
\]
\[
\leq |p_1-p'_1|_K + |p'_1 - q'_1|_K + |q'_1-q_1|_K.
\]
This implies that $M_K(\Pi_1) \leq M_K(\Pi'_1)$.
To construct a permissible polygon $\Pi_2$ in $\Pi'^*_2$ with the same properties we may apply an analogous process.
Thus, we have obtained two permissible polygons $\Pi_1$ and $\Pi_2$ associated to totally separable translative packings of $K$, with strictly less elements than $n$, say $k$ and $n-k$. Now, by (\ref{eq:area}), (\ref{eq:perimeter}), $\area \left(\square (K)\right) \leq 4$, $\area (\Pi^*_i) \leq \area (\Pi'^*_i)$, $M_K(\Pi_i) \leq M_K(\Pi'_i)$ (for $i=1,2$), and the induction hypothesis, we have
\[
F(\Pi) \geq \frac{\area(\Pi^*_1)+\area(\Pi^*_2) + 2 |p-q|_K}{\area \left(\square (K)\right)} + \frac{M_K(\Pi_1)+M_K(\Pi_2) -2|p-q|_K +4}{4} + 1 \geq
\]
\[
\geq F(\Pi_1) + F(\Pi_2) \geq k+n-k = n.
\]
\end{proof}
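The equality cases mentioned in Remark~\ref{equality} can be probed numerically. For $K=[-1,1]^2$ (so $\area(\square(K))=4$ and $|\cdot|_K$ is the maximum norm) and centres on a rectangular grid of spacing $2$, the bound (\ref{eq:Oler}) holds with equality (our own sketch, not from the paper):

```python
# For K = [-1,1]^2 and centres on an a x b grid of spacing 2 (a totally
# separable packing: grid lines separate the squares), the left-hand side of
# the separable Oler bound equals n exactly.

def F(hull_area, mink_perimeter):          # LHS with area(square(K)) = 4
    return hull_area / 4 + mink_perimeter / 4 + 1

for a in range(2, 6):
    for b in range(2, 6):
        n = a * b                          # number of translates
        area = 4 * (a - 1) * (b - 1)       # area of the rectangular hull Pi*
        per = 4 * (a - 1) + 4 * (b - 1)    # max-norm perimeter of Pi
        assert F(area, per) == n           # equality in the bound
```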
\section{On the densest totally separable translative packings}\label{2}
Theorem~\ref{thm:Oler} and Remark~\ref{no-symmetry} imply the following statement, which was proved (using a method different from ours) in \cite{FeFe} for $o$-symmetric convex domains, and, for convex domains in general, with an estimate weaker than (\ref{Fejes-Toth}), namely with $\square(K)$ standing for a minimal area circumscribed quadrangle of $K$.
\begin{Theorem}\label{cor:FeFe}
If $\delta_{sep}(K)$ denotes the largest (upper) density of totally separable translative packings of the convex domain $K$ in $\mathbb{E}^2$, then
\begin{equation}\label{Fejes-Toth}
\delta_{sep}(K) = \frac{\area(K)}{\area(\square(K))}.
\end{equation}
\end{Theorem}
\begin{proof}
Clearly, $\delta_{sep}(K) \geq \frac{\area(K)}{\area(\square(K))}$. To show the opposite inequality, without loss of generality we may assume that $o \in \inter K$.
Set $C = \min \{ \mu>0: K-K \subseteq \mu K\}$, and $C' = \frac{M_K(\bd K)}{4}$.
Consider any totally separable packing $\mathcal{F}$ of translates of $K$ in $\mathbb{E}^2$. For any $t > 0$, let $\mathcal{F}_t$ denote the subfamily of $\mathcal{F}$ consisting of the elements that intersect $tK$, and let $X_t$ denote the set of the translation vectors of the elements of $\mathcal{F}_t$ and $n_t$ the cardinality of $\mathcal{F}_t$.
Note that if $y \in (x+K) \cap tK$, then $x+K \subseteq y+(K-K)$ and therefore $x+K \subseteq (t+C)K$, implying that $\bigcup \mathcal{F}_t \subseteq (t+C)K$.
On the other hand, by Theorem~\ref{thm:Oler} and Remark~\ref{no-symmetry}, it follows that
\[
n_t \leq \frac{\area (\conv (X_t))}{\area(\square (K))}+\frac{M_K(\bd \conv(X_t))}{4}+1 \leq
\]
\[
\frac{\area ((t+C)K)}{\area(\square (K))}+\frac{M_K(\bd ((t+C)K))}{4}+1 = (t+C)^2 \frac{\area (K)}{\area(\square (K))}+(t+C)C'+1.
\]
This yields that
\[
\frac{\area((\bigcup \mathcal{F}) \cap tK)}{\area(tK)} \leq \frac{\area(\bigcup\mathcal{F}_t)}{\area(tK)}=\frac{n_t\area (K)}{\area(tK)}=\frac{n_t}{t^2} \leq \frac{(t+C)^2\area(K)}{t^2\area(\square(K))} + \frac{tC'+CC'+1}{t^2},
\]
from which the claim follows by letting $t\to+\infty$.
\end{proof}
Theorem~\ref{cor:Fary} is a totally separable analogue of the well-known theorem (which is a combination of the results published in \cite{Fary}, \cite{FTL50}, \cite{FTL83}, and \cite{R}), stating that the maximal density of translative packings of a convex domain in $\mathbb{E}^2$ is minimal if and only if the domain is a triangle.
\begin{Theorem}\label{cor:Fary}
For any convex domain $K$ in $\mathbb{E}^2$, we have
\begin{equation}\label{eq:density}
\frac{1}{2} \leq \delta_{sep}(K) \leq 1,
\end{equation}
with equality on the left if and only if $K$ is a triangle, and on the right if and only if $K$ is a parallelogram.
\end{Theorem}
\begin{proof}
The right-hand side inequality in (\ref{eq:density}) is an immediate consequence of Theorem~\ref{cor:FeFe}. We prove only the left-hand side inequality.
Let $P$ be a minimum area parallelogram circumscribed about $K$. Without loss of generality, we may assume that $P$ is the square $[0,1]^2$ in a suitable Cartesian coordinate system.
Let the sides of $P$ be $S_1, S_2, S_3, S_4$ in counterclockwise order such that the endpoints of $S_1$ are $(0,0)$ and $(1,0)$. Since $P$ has minimum area, each side of $P$ intersects $K$.
We show that $S_2 \cap K$ and $S_4 \cap K$ contain points with equal $y$-coordinates.
Suppose for contradiction that it is not so. Then $(1,0)+(S_4 \cap K)$ and $S_2 \cap K$ are disjoint, implying that there is some point $p_2 \in S_2$ separating these two sets. Set $p_4 = (-1,0)+p_2$. Then we may rotate the line of $S_2$ around $p_2$, and the line of $S_4$ around $p_4$ slightly, with the same angle, to obtain a parallelogram containing $K$, with area equal to $\area(P)$ and having two sides disjoint from $K$, which contradicts our assumption that $P$ has minimum area. Thus, there are some points $p_4 = (0,t) \in S_4 \cap K$ and $p_2=(1,t)\in S_2 \cap K$ for some $t \in [0,1]$.
We obtain similarly the existence of points $p_1 = (s,0) \in S_1 \cap K$ and $p_3=(s,1) \in S_3 \cap K$.
Hence, $\area(K) \geq \area (\conv \{ p_1,p_2,p_3,p_4\}) = \frac{1}{2} \area(P)$, which yields the left-hand side inequality in (\ref{eq:density}).
Now we examine the equality case. Note that, using the notations of the previous paragraph, $\frac{1}{2} = \delta_{sep}(K)$ implies that
$K = \conv \{ p_1,p_2,p_3,p_4\}$. Consider the case that $s,t \in (0,1)$. Let $P'$ be the parallelogram obtained by rotating the line of $S_2$ around $p_2$ and the line of $S_4$ around $p_4$, with the same small angle. Then $P'$ is a parallelogram circumscribed about $K$, having area equal to $\area(P)$.
Let the sides of $P'$ be $S'_1,S'_2,S'_3, S'_4$ such that for $i=1,2,3,4$, $p_i \in S'_i$. Observe that $S'_1 \cap K = \{ p_1\}$, $S'_3 \cap K = \{p_3\}$, and $[p_1,p_3]$ is not parallel to $S'_2$ and $S'_4$. Thus, applying the argument in the previous paragraph, it follows that $P'$ is not a minimum area circumscribed parallelogram, a contradiction.
Thus, $s$ or $t$ is equal to $0$ or $1$, which implies that $K$ is a triangle.
\end{proof}
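The equality case of Theorem~\ref{cor:Fary} can be illustrated numerically (our own sketch, not the paper's proof): for the right triangle $T$ with vertices $(0,0)$, $(1,0)$, $(0,1)$, a grid search over pairs of side directions confirms that the minimal area circumscribed parallelogram has area $2\area(T)=1$, so $\delta_{sep}(T)=\frac{1}{2}$:

```python
# The circumscribed parallelogram with side directions t1, t2 has area
# w(n1) * w(n2) / |sin(t2 - t1)|, where n_i is the normal to direction t_i
# and w is the width of T in that normal direction.
from math import sin, cos, pi, isclose

V = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # the right triangle T

def width(nx, ny):                         # width of T in direction (nx, ny)
    vals = [nx * x + ny * y for x, y in V]
    return max(vals) - min(vals)

best = float('inf')
N = 360
for i in range(N):
    for j in range(i + 1, N):
        t1, t2 = pi * i / N, pi * j / N
        s = abs(sin(t2 - t1))
        if s < 1e-9:                       # near-parallel sides: skip
            continue
        a = width(-sin(t1), cos(t1)) * width(-sin(t2), cos(t2)) / s
        best = min(best, a)

assert isclose(best, 1.0, abs_tol=0.01)    # area(square(T)) = 2 * area(T) = 1
```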
\section{On the smallest area convex hull of totally separable translative finite packings}\label{3}
\begin{Theorem}\label{thm:areaformula}
Let $\mathcal{F} = \{ c_i + K : i=1,2,\ldots, n\}$ be a totally separable packing of $n$ translates of the convex domain $K$ in $\mathbb{E}^2$.
Let $C = \conv \{ c_1,c_2,\ldots, c_n \}$.
\begin{enumerate}
\item[(\ref{thm:areaformula}.1)]
Then we have
\[
\area \left(\conv \left( \bigcup_{i=1}^n (c_i+K) \right)\right) = \area (C+K)\geq \frac{2}{3} (n-1)\area \left(\square(K)\right) + \area (K)+\frac{1}{3} \area(C).
\]
\item[(\ref{thm:areaformula}.2)]
If $K$ or $C$ is centrally symmetric, then
\[
\area (C+K)\geq (n-1) \area \left(\square(K)\right) + \area (K).
\]
\end{enumerate}
\end{Theorem}
\begin{Remark}
We note that equality is attained in $(\ref{thm:areaformula}.1)$ of Theorem~\ref{thm:areaformula} for the following totally separable translative packings of a triangle (cf. Figure~\ref{fig:area_triangle}).
Let $K$ be a triangle, with the origin $o$ at a vertex, and $u$ and $v$ being the position vectors of the other two vertices, and let $T = m K$, where $m > 1$ is an integer. Let $\mathcal{F}$ be the family consisting of the elements of the lattice packing $\{ iu+jv +K : i,j \in \mathbb{Z} \}$ contained in $T$.
Then $\mathcal{F}$ is a totally separable packing of $n=\frac{m(m+1)}{2}$ translates of $K$ with $\conv \left(\bigcup \mathcal{F} \right) = T=C+K$, where $C=(m-1)K$.
Thus,
\[
\area(T) = m^2 \area (K)=\left[\frac{2}{3}m(m+1)-\frac{1}{3}+\frac{1}{3}(m-1)^2\right]\area (K)=\frac{4}{3}(n-1)\area (K)+\area (K)+\frac{1}{3}\area (C)=\frac{2}{3}(n-1)\area (\square(K))+\area(K)+\frac{1}{3}\area(C).
\]
\end{Remark}
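The arithmetic in the remark above can be checked mechanically. The following script (our verification aid, not part of the argument) confirms the chain of equalities in units of $\area(K)$, using $n = \frac{m(m+1)}{2}$ and $\area(C) = (m-1)^2\area(K)$, with exact rational arithmetic:

```python
from fractions import Fraction

# Check: for n = m(m+1)/2 translates of a triangle K packed in T = mK,
# area(T) = m^2 area(K) should equal
# (4/3)(n-1) area(K) + area(K) + (1/3) area(C), with area(C) = (m-1)^2 area(K).
for m in range(2, 50):
    n = m * (m + 1) // 2
    lhs = Fraction(m * m)
    rhs = Fraction(4, 3) * (n - 1) + 1 + Fraction(1, 3) * (m - 1) ** 2
    assert lhs == rhs, (m, lhs, rhs)
```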
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.2\textwidth]{area_triangle.pdf}
\caption{An example for equality in (\ref{thm:areaformula}.1)}
\label{fig:area_triangle}
\end{center}
\end{figure}
\begin{Remark}
In (\ref{thm:areaformula}.2) of Theorem~\ref{thm:areaformula}, equality can be attained in a variety of ways, shown in Figure~\ref{fig:convexhull}, in both cases: when $C$ is centrally symmetric (and $K$ is not centrally symmetric, such as a triangle), and when $K$ is centrally symmetric (such as a circular disk) without any assumption on the symmetry of $C$.
\end{Remark}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.6\textwidth]{convexhull.pdf}
\caption{Totally separable translative packings of a triangle and a unit disk for which equality is attained in (\ref{thm:areaformula}.2) in Theorem~\ref{thm:areaformula}}
\label{fig:convexhull}
\end{center}
\end{figure}
\begin{proof}
We start with proving the following inequalities.
\begin{Lemma}\label{lem:Radon}
Let $K$ be a convex domain in $\mathbb{E}^2$ and let $Q$ be a convex polygon. Furthermore, let $A(Q,K)$ denote the mixed area of $Q$ and $K$.
\begin{enumerate}
\item[(\ref{lem:Radon}.1)] Then we have
\[
\frac{12 A(Q,K)}{\area\left(\square (K)\right)} \geq M_K(\bd Q).
\]
Here, equality holds, for instance, if $Q=K$ is a triangle.
\item[(\ref{lem:Radon}.2)] If $K$ or $Q$ is centrally symmetric, then
\[
\frac{8 A(Q,K)}{\area\left(\square (K)\right)} \geq M_K(\bd Q).
\]
Furthermore, if $K$ is centrally symmetric, then equality holds for every convex polygon $Q$ if and only if $\bd K$ is a Radon curve.
\end{enumerate}
\end{Lemma}
\begin{proof}
Without loss of generality, we may assume that $\area(\square(K))=1$.
Let $k$ denote the number of sides of $Q$, and for $i=1,2,\ldots, k$, let $l_i$ and $x_i$ denote the (Euclidean) length and the outer unit normal vector of the $i$th side of $Q$. Note that then for every value of $i$, $w_K(x_i) = h_K(x_i)+h_K(-x_i)$, where $w_K(x_i)$ is the width of $K$ in the direction of $x_i$ and $h_K(x)=\sup \{x\cdot k\ :\ k\in K\}$ is the support function of $K$ evaluated at $x\in\mathbb{E}^2$, with ``$\cdot$'' standing for the standard inner product of $\mathbb{E}^2$.
Furthermore, observe that
\begin{equation}\label{eq:mixedarea}
A(Q,K)= \frac{1}{2} \sum_{i=1}^k l_i h_K(x_i).
\end{equation}
First, we prove (\ref{lem:Radon}.2) for the case that $K$ is centrally symmetric. Since a translation of $K$ or $Q$ does not change their mixed area, we may assume that $K$ is $o$-symmetric, which implies that $h_K(x_i)= \frac{1}{2} w_K(x_i)$ for every $i$. Let $r_i$ be the Euclidean length of the radius of $K$ in the direction of the $i$th side of $Q$. Then the normed
length of this side is $\frac{l_i}{r_i}$. On the other hand, $2r_iw_K(x_i)$ is the area of a parallelogram circumscribed about $K$ having minimum area under the condition that it has a side parallel to the $i$th side of $Q$; hence, for every value of $i$ we have $r_i w_K(x_i) \geq \frac{1}{2}$. Combining these observations and (\ref{eq:mixedarea}), it follows that
\[
A(Q,K) = \frac{1}{4} \sum_{i=1}^k l_i w_K(x_i) = \frac{1}{4} \sum_{i=1}^k \frac{l_i}{r_i} r_i w_K(x_i) \geq \frac{1}{8} \sum_{i=1}^k \frac{l_i}{r_i} = \frac{1}{8} M_K(\bd Q).
\]
Here, equality holds for every convex polygon $Q$ if and only if for any $v \in \mathbb{S}^1=\{x\in\mathbb{E}^2\ :\ |x|=1\}$, there is a minimum area parallelogram circumscribed about $K$, which has a side parallel to $v$. In other words,
for any $v \in \mathbb{S}^1$, we have that $l_K(v) w_K(v^{\perp})$ is independent of $v$, where $l_K(v)$ is the length of a longest chord of $K$ in the direction of $v$, and $w_K(v^{\perp})$ is the width of $K$ in the direction perpendicular to $v$.
The observation that this property is equivalent to the fact that $\bd K$ is a Radon curve can be found, for example, in the proof of Theorem 2 of \cite{GHorvathLangi}.
Now consider the case that $Q$ is $o$-symmetric, but $K$ is not necessarily centrally symmetric. Note that in this case $k$ is even, and for every $i$ we have
$l_{i+k/2} = l_i$, and $x_{i+k/2}=-x_i$. Thus, by (\ref{eq:mixedarea})
\[
A(Q,K) = \frac{1}{2}\sum_{i=1}^{k/2}l_i(h_K(x_i)+h_K(-x_i)) = \frac{1}{2} \sum_{i=1}^{k/2} l_i w_K(x_i) = \frac{1}{4} \sum_{i=1}^k l_i w_K(x_i).
\]
From this equality, the statement follows by a similar argument using the relative norm of $K$ whenever $K$ is not centrally symmetric.
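One way to spell out this step (our sketch, mirroring the $o$-symmetric case): let $\bar{K}=\frac{1}{2}(K-K)$, and let $\bar{r}_i$ be the radius of $\bar{K}$ in the direction of the $i$th side of $Q$, so that the $i$th side has relative length $\frac{l_i}{\bar{r}_i}$. Since $K$ and $\bar{K}$ have the same width function, they have the same minimum area of a circumscribed parallelogram, whence $2\bar{r}_i w_K(x_i) \geq \area\left(\square(K)\right) = 1$, and therefore
\[
A(Q,K) = \frac{1}{4}\sum_{i=1}^k \frac{l_i}{\bar{r}_i}\, \bar{r}_i w_K(x_i) \geq \frac{1}{8}\sum_{i=1}^k \frac{l_i}{\bar{r}_i} = \frac{1}{8} M_K(\bd Q).
\]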
Finally we prove (\ref{lem:Radon}.1) about the general case.
Let $\bar{K}= \frac{1}{2}(K-K)$. Without loss of generality, we may assume that the origin $o$ is the center of a maximum area triangle inscribed in $K$. Then, clearly, $-\frac{1}{2}K \subseteq K$, from which a simple algebraic transformation yields that $\frac{2}{3}\bar{K} \subseteq K$.
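The algebraic transformation can be written out explicitly: since $-\frac{1}{2}K \subseteq K$ and $K$ is convex,
\[
\frac{2}{3}\bar{K} = \frac{1}{3}K + \frac{1}{3}(-K) = \frac{1}{3}K + \frac{2}{3}\left(-\frac{1}{2}K\right) \subseteq \frac{1}{3}K + \frac{2}{3}K = K.
\]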
This implies that for any unit vector $x$ we have
\begin{equation}\label{eq:asymmRadon}
h_K(x) \geq \frac{2}{3} h_{\bar{K}}(x).
\end{equation}
Then, by (\ref{eq:mixedarea}), we have
\[
A(Q,K) \geq \frac{1}{3} \sum_{i=1}^k l_i h_{\bar{K}}(x_i) = \frac{2}{3} A(Q,\bar{K}).
\]
Thus, our inequality readily follows from (\ref{lem:Radon}.2). The fact that here equality holds if $Q=K$ is a triangle can be shown by an elementary computation.
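This computation can be sketched as follows (our verification): if $Q=K$ is a triangle, then $A(K,K)=\area(K)$ and $\area\left(\square(K)\right) = 2\area(K)$, so the left-hand side of $(\ref{lem:Radon}.1)$ equals $6$. On the other hand, each side of $K$ is a longest chord of $K$ in its own direction, hence has relative length $2$, so that
\[
M_K(\bd K) = 3\cdot 2 = 6 = \frac{12\, A(K,K)}{\area\left(\square(K)\right)}.
\]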
\end{proof}
First, we prove (\ref{thm:areaformula}.2).
Note that $\bd C$ satisfies the conditions in Theorem~\ref{thm:Oler}, and thus (using Remark~\ref{no-symmetry} if $K$ is not centrally symmetric), we have
\[
\frac{\area (C)}{\area \left(\square (K)\right)} + \frac{M_K(\bd C)}{4} + 1 \geq n.
\]
Thus, $(\ref{lem:Radon}.2)$ of Lemma~\ref{lem:Radon} yields that
\[
\frac{\area (C)}{\area \left(\square (K)\right)} + \frac{2 A(C,K)}{\area \left(\square (K)\right)} + 1 \geq n.
\]
From this, it follows that
\[
\area \left(\conv \left( \bigcup_{i=1}^n (c_i+K) \right)\right)= \area (C+K)=\area (C) + 2 A(C,K) + \area (K)\geq
\]
\[
(n-1) \area \left(\square (K)\right) + \area (K).
\]
Now we prove (\ref{thm:areaformula}.1).
In this case, Theorem~\ref{thm:Oler} applied to $\bd C$ in the same way as above followed by $(\ref{lem:Radon}.1)$ of Lemma~\ref{lem:Radon} implies that
\[
(n-1) \area(\square(K)) \leq \area(C) + 3A(C,K) = \frac{3}{2} \area(C+K) - \frac{1}{2} \area(C) - \frac{3}{2} \area(K).
\]
This inequality yields
\[
\area(C+K) \geq \frac{2(n-1)}{3} \area(\square(K)) + \area(K) + \frac{1}{3} \area(C),
\]
finishing the proof of (\ref{thm:areaformula}.1).
\end{proof}
\section{On the smallest area convex hull of totally separable translative finite soft packings}\label{4}
The following notion has been defined for Euclidean balls in $\mathbb{E}^d$ in \cite{BezdekLangi}.
\begin{Definition}
Let $K$ be an $o$-symmetric convex domain in $\mathbb{E}^2$. Let $\lambda \geq 0$, and let $K^{\lambda}$ denote the soft domain $(1+\lambda) K$ with the soft parameter $\lambda$, hard core $K$, and soft annulus $(1+\lambda)K \setminus K$ in $\mathbb{E}^2$.
\end{Definition}
\begin{Remark}
Clearly, $K^{\lambda}$ and $K^{\lambda} \setminus K$ are symmetric about $o$ in $\mathbb{E}^2$.
\end{Remark}
\begin{Definition}
Let $\{ c_1, c_2, \ldots, c_n \} \subseteq \mathbb{E}^2$. We say that $\{ c_1 + K^{\lambda}, c_2 + K^{\lambda}, \ldots, c_n + K^{\lambda} \}$
is a \emph{totally separable soft packing} of $n$ translates of the soft domain $K^{\lambda}$ in $\mathbb{E}^2$, if
$\{ c_1 + K, c_2+K, \ldots, c_n + K\}$ is a totally separable packing in the usual sense (see Definition \ref{defn:totallyseparable}). Let $\P^{sep}_{K,n,\lambda}$ be the family of all totally separable soft packings of $n$ translates of the soft domain $K^{\lambda}$ for given $K$, $n > 1$, $\lambda \geq 0$.
\end{Definition}
The following statement is an extension of (\ref{thm:areaformula}.2) of Theorem~\ref{thm:areaformula}, and it is also a totally separable version of Theorem 2.1 in \cite{BetkeHenkWills}.
\begin{Theorem}\label{thm:Betke}
Let $K$ be an $o$-symmetric convex domain in $\mathbb{E}^2$, and let $n > 1$ and $\lambda \geq 0$ be given.
If $\{ c_1 + K^{\lambda}, c_2 + K^{\lambda}, \ldots, c_n + K^{\lambda} \} \in \P^{sep}_{K,n,\lambda}$, then
\[
\area \left( \conv \left( \bigcup_{i=1}^n (c_i + K^{\lambda}) \right) \right) \geq
\]
\[
(n-1)\area \left(\square(K)\right)+2\lambda A(\conv \{c_1,c_2,\ldots,c_n \}, K)+(1+\lambda)^2 \area (K) \geq
\]
\[
\left( n-1 + \frac{\lambda}{4} M_K\left(\bd(\conv \{ c_1,c_2,\ldots, c_n\})\right) \right) \area \left(\square(K)\right) + (1+\lambda)^2 \area(K).
\]
\end{Theorem}
\begin{Remark}
We note that equality in Theorem~\ref{thm:Betke} is attained, for example, for ``sausages'' in the form $\{ c_i + K : i=1,2,\ldots, n\}$, where $|c_2-c_1|_K=\dots =|c_n-c_{n-1}|_K=2$, and $c_2-c_1,\dots , c_n-c_{n-1}$ are parallel to a chosen side of $\square (K)$.
\end{Remark}
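For a concrete instance of the equality case (our computation, with the hypothetical choice $K=[-1,1]^2$, so that $\square(K)=K$, and $c_i = (2(i-1),0)$ for $i=1,\ldots,n$): here $\conv\{c_1,\ldots,c_n\}$ is a segment $S$ of Euclidean length $2(n-1)$ with $A(S,K)=2(n-1)$, and
\[
\area\left(S + K^{\lambda}\right) = \left(2(n-1)+2(1+\lambda)\right)\cdot 2(1+\lambda) = (n-1)\area\left(\square(K)\right) + 2\lambda A(S,K) + (1+\lambda)^2\area(K),
\]
since both sides equal $4(n-1)(1+\lambda)+4(1+\lambda)^2$.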
\begin{proof}
We first prove the first inequality. By the definition of mixed area (cf. e.g. \cite{BetkeHenkWills}), we have
\[
\area\left(\conv \left( \bigcup_{i=1}^n (c_i + K)\right)\right) =\area(\conv \{c_1,c_2,\ldots,c_n \}+K)=
\]
\[
\area(\conv \{c_1,c_2,\ldots,c_n \})+2A(\conv \{c_1,c_2,\ldots,c_n \}, K) + \area(K).
\]
Thus, (\ref{thm:areaformula}.2) of Theorem~\ref{thm:areaformula} implies that
\begin{equation}\label{eq:convexhullarea}
\area(\conv \{c_1,c_2,\ldots,c_n \}) \geq (n-1)\area(\square (K))-2A(\conv \{c_1,c_2,\ldots,c_n \}, K).
\end{equation}
Again using the definition of mixed area and also (\ref{eq:convexhullarea}) we get that
\[
\area\left(\conv \left( \bigcup_{i=1}^n (c_i + K^{\lambda})\right)\right) =\area(\conv \{c_1,c_2,\ldots,c_n \}+K^{\lambda})=
\]
\[
\area(\conv \{c_1,c_2,\ldots,c_n \}) +2(1+\lambda) A(\conv \{c_1,c_2,\ldots,c_n \}, K) + (1+\lambda)^2 \area(K)\geq
\]
\[
(n-1)\area(\square (K))+2\lambda A(\conv \{c_1,c_2,\ldots,c_n \}, K)+(1+\lambda)^2 \area (K),
\]
finishing the proof of the first inequality. Finally, (\ref{lem:Radon}.2) of Lemma~\ref{lem:Radon} implies the second inequality of Theorem~\ref{thm:Betke} in a straightforward way.
\end{proof}
\bigskip
\section{On the covering ratio of totally separable soft disk packings}\label{5}
Let $B$ denote the circular disk of radius $1$ (in short, the unit disk) centered at $o$ in $\mathbb{E}^2$.
\begin{Definition}
Let $\mathcal{F} = \{ c_i + B : c_i\in\mathbb{E}^2 \text{ for } i \in \mathbb{N}\}$ be a totally separable packing (resp., lattice packing) of unit disks in $\mathbb{E}^2$. Then $\mathcal{F}^{\lambda} = \{ c_i + (1+\lambda)B : i \in \mathbb{N}\}$
is called a \emph{totally separable soft packing} (resp., \emph{totally separable soft lattice packing}) of the \emph{soft disks} $c_i + (1+\lambda)B$ each being congruent to the soft disk $B^{\lambda}=(1+\lambda)B$ with \emph{soft parameter} $\lambda > 0$.
In this case the \emph{(upper) covering ratio} of the soft packing $\mathcal{F}^{\lambda}$ is defined as
\[
\rho(\mathcal{F}^\lambda) = \limsup_{r \to \infty} \frac{\area \left( r B \cap \bigcup_{i\in\mathbb{N} } (c_i+B^{\lambda})\right)}{\area (r B)}.
\]
We denote by $\rho_{\lambda, B}^{sep}$ (respectively, $\rho_{\lambda, B}^{sep, lattice}$) the supremum of the (upper) covering ratios over the family of totally separable soft packings (respectively, of totally separable soft lattice packings) of soft disks congruent to $B^{\lambda}$ with soft parameter $\lambda > 0$.
\end{Definition}
We note that in \cite{BezdekLangi} the covering ratio just introduced was called soft density. We prefer the term covering ratio in order to emphasize that it is the fraction of the plane covered by the soft elements of the given soft packing.
To state our main result in this section, for $\frac{\pi}{6} \leq \alpha \leq \frac{\pi}{4}$, we denote by $T_{\alpha}$ an isosceles triangle whose half angle at its apex is $\alpha$, and whose two heights starting at the endpoints of its base are equal to two (cf. Figure~\ref{fig:Talpha}). Observe that if the vertices of this triangle are $p_1,p_2,p_3$, then the triple $\{ p_1+B, p_2 + B, p_3+B\}$ is totally separable, and the vectors $p_2-p_1$, $p_3-p_1$ generate a totally separable lattice packing of translates of $B$.
We introduce the notation
\[
\rho(\lambda,T_{\alpha}) = \frac{\area \left( T_{\alpha} \cap \bigcup_{i=1}^3 (p_i+ B^{\lambda} ) \right)}{\area(T_{\alpha})}.
\]
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.4\textwidth]{Talpha.pdf}
\caption{The isosceles triangle $T_{\alpha}$ for some value of $\alpha$}
\label{fig:Talpha}
\end{center}
\end{figure}
\begin{Theorem}\label{thm:soft_packing}
For every $\lambda > 0$, we have $\rho_{\lambda,B}^{sep}=\rho_{\lambda,B}^{sep,lattice} = \rho(\lambda,T_{\alpha})$ for some $\frac{\pi}{6} \leq \alpha \leq \frac{\pi}{4}$.
\end{Theorem}
The proof is based on a refinement of a tessellation defined by Moln\'ar in \cite{Molnar}.
Let $C=\{c_i : i\in\mathbb{N}\}\subseteq \mathbb{E}^2$ be a \emph{saturated} point set in $\mathbb{E}^2$, that is, assume that there are some values $0 < a \leq b$ such that the distance of any two points in $C$ is at least $a$, and for any $x \in \mathbb{E}^2$, $|x-c_i| < b$ for some $c_i \in C$. For any $c_i \in C$, let the \emph{Voronoi cell} $D_i$ of $c_i$ be the set of points in $\mathbb{E}^2$ not farther from $c_i$ than from any other element of $C$. If for any cells $D_i$ and $D_j$ meeting at an edge, this edge is replaced by the segment $[c_i,c_j]$, we obtain the \emph{Delaunay} tessellation of $\mathbb{E}^2$.
In this tessellation, the circumcenter of any cell is a vertex of some Voronoi cells, and the circumcircle of any Delaunay cell contains no point of $C$ in its interior. (Moreover, the circumcircle of any Delaunay cell contains no point of $C$ different from the vertices of the cell.)
Let the circumcenter of a cell $P$ of the Delaunay decomposition be $v$. If the line through an edge $S = [c_i,c_j]$ of $P$ separates $P$ and $v$, we say that $S$ is a \emph{separating side} of $P$. Then the polygonal curve $[c_i,v] \cup [v,c_j]$ is called the \emph{bridge} of $P$. Clearly, every cell $P$ has at most one bridge.
Let us replace the separating side of each cell (if it exists), by the bridge of the cell. Then, by Lemma 1 of \cite{Molnar}, we obtain another cell decomposition of $\mathbb{E}^2$, which we call \emph{Moln\'ar} tessellation or in short, $M$-tessellation.
\begin{proof}
We prove the theorem for a larger family of packings, which we call \emph{weakly separable} packings of unit disks: we require only that any three unit disks of $\mathcal{F}$ whose centers have pairwise distances at most $2\sqrt{2}$ form a totally separable triple.
Observe that if $\mathcal{F} = \{ c_i + B : c_i \in C\}$ is a weakly separable packing of unit disks, and there is some point $p \in \mathbb{E}^2$ such that $|p-c_i| > 2\sqrt{2}$ for all $c_i\in C$, then after adding the disk $p + B$ to the packing it remains weakly separable.
Thus, we may assume that $C$ is saturated, and the circumradius of any Voronoi cell of $C$ is at most $2\sqrt{2}$.
In the proof we let $\mathcal{F}^{\lambda} = \{ c_i + B^{\lambda} : c_i \in C\}$, and for any region $Q$ in the plane, $\rho(\mathcal{F}^{\lambda} | Q) = \frac{\area(Q \cap \bigcup \mathcal{F}^{\lambda})}{\area(Q)}$.
Let $P$ be an arbitrary Delaunay cell of $C$. Assume that the circumradius of $P$ is less than $\sqrt{2}$. Then, since the sides of $P$ have length at least $2$, $P$ is an acute triangle, which thus contains its circumcenter. Hence, if $P$ has a separating side $S$, then $S$ separates the Delaunay cell $P'$ meeting $P$ in $S$ from the circumcenter $v'$ of $P'$. On the other hand, since the circumcircle of $P'$ does not contain vertices of $P$ in its interior, it follows that the circumradius of $P'$ is not greater than that of $P$. This yields that the circumradius of $P'$ is less than $\sqrt{2}$, so $P'$ is an acute triangle containing $v'$, contradicting the fact that a side of $P'$ separates $v'$ from $P'$. In other words, if the circumradius of $P$ is less than $\sqrt{2}$, then $P$ is a triangle which remains a cell of the $M$-tessellation as well.
We show that any other $M$-cell can be decomposed into cells of the form
\[
Q = \cl \left( \conv \{ v,c_i,c_j\} \setminus \conv \{ v',c_i,c_j\} \right),
\]
where $c_i, c_j \in C$, $|v-c_i| = |v-c_j| \geq \sqrt{2}$, and $|v'-c_i| = |v'-c_j|$.
Let $P$ be a Delaunay cell of $C$ with circumradius at least $\sqrt{2}$. Assume, first, that $P$ contains its circumcenter $v$. Thus, if $[c_i,c_j]$ is a separating side, then it separates the cell $P'$ which meets $P$ in $[c_i,c_j]$, from its circumcenter $v'$.
Then $[c_i,c_j]$ is replaced by the bridge $[c_i,v'] \cup [v',c_j]$. Note that $\sqrt{2}\leq |v'-c_i| = |v'-c_j| < |v-c_i|=|v-c_j| $. If $[c_i,c_j]$ is not a separating side, then one can choose $v'$ to be the midpoint of $[c_i,c_j]$. Thus, dissecting the $M$-cell obtained from $P$ by the segments connecting $v$ to the vertices of $P$ results in regions with the desired property.
If $P$ does not contain its circumcenter, we may apply a similar construction.
We call this tessellation the \emph{refined} $M$-tessellation, or $M'$-tessellation. Then, if $P$ is an $M'$-cell, then $P$ is either
\begin{itemize}
\item[(i)] an acute triangle with circumradius less than $\sqrt{2}$, in this case we say that $P$ is type 1, or
\item[(ii)] it is of the form $P=\cl \left( \conv \{ v,c_i,c_j\} \setminus \conv \{ v',c_i,c_j\} \right)$, where $c_i,c_j \in C$, $v$ is the circumcenter of a Delaunay cell with $c_i$ and $c_j$ as vertices and with circumradius at least $\sqrt{2}$, and $|v'-c_i| = |v'-c_j|$. In this case we say that $P$ is type 2.
\end{itemize}
\begin{Lemma}\label{lem:Molnarisgood}
Let $P$ be an $M'$-cell defined by $C$. Then, for any point $p \in \inter P$, if $c_i \in C$ is closest to $p$, then $c_i$ is a vertex of $P$.
\end{Lemma}
\begin{proof}
Consider the case that $P$ is type 1. Let $c_i \in C$ be closest to $p$.
Let $P'$ be a \emph{Delaunay cell} with $c_i$ as a vertex such that $P'$ intersects $[p,c_i]$. If $[p,c_i]$ does not intersect $\inter P'$, then it is contained in a sideline of $P'$. On the other hand, since $c_i$ is closest to $p$ in $C$, and no side of $P'$ crosses the edges of $P$, this is impossible. Thus, $[p,c_i]$ intersects $\inter P'$, which means that the line $L$ through two other vertices $c_j$, $c_k$ of $P'$ separates $p$ from $P'$. Let $x$ and $y$ denote the intersection points of $L$ with the circle, centered at $p$, of radius $|c_i-p|$. Since $|c_j-p|, |c_k-p| \geq |c_i-p|$, we have $[x,y] \subseteq [c_j,c_k]$. Note that since $L$ separates $c_i$ and $p$, it follows that $\frac{\pi}{2} < \angle(x,c_i,y) \leq \angle(c_j,c_i,c_k)$; that is, $P'$ has an obtuse angle at $c_i$. Hence, if $v'$ denotes the circumcenter of $P'$, then $L$ separates $v'$ and $P'$, or in other words, $[c_j,v'] \cup [c_k,v']$ is a bridge. Since no bridge can cross the sides of $P$,
to finish the proof it suffices to show that $p \in \conv \{ c_j,c_k,v'\}$. Let $L_j$ (respectively, $L_k$) be the line bisecting the segment $[c_i,c_j]$ (respectively, $[c_i,c_k]$). Note that $L_j$ and $L_k$ intersect at $v'$. Let $V$ be the closed convex angular region, with apex $v'$ and $\bd V \subseteq L_j \cup L_k$ such that $c_i \in V$. Note that since $p$ is not farther from $c_i$ than from $c_j$ or $c_k$, we have $p \in V$, which yields that $p \in \conv \{ c_j,c_k,v'\}$ (cf. Figure~\ref{fig:closestpoint}).
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.35\textwidth]{closestpoint.pdf}
\caption{An illustration for the proof of Lemma~\ref{lem:Molnarisgood}}
\label{fig:closestpoint}
\end{center}
\end{figure}
If $P$ is type 2, we may apply a similar argument.
\end{proof}
By Lemma~\ref{lem:Molnarisgood} we have that for any cell $P$ in the $M'$-decomposition, if $\mathcal{F}^{\lambda}(P)$ denotes the family of translates of $B^{\lambda}=(1+\lambda)B$ centered at the vertices of $P$ in $C$, then $\rho(\mathcal{F}^{\lambda}(P) | P) = \rho(\mathcal{F}^{\lambda} | P)$.
In other words, to compute the covering ratio of the soft packing in the cell $P$ it suffices to consider the soft disks centered at the vertices of $P$ in $C$.
To finish the proof, we show that for any $M'$-cell $P$, we have $\rho(\mathcal{F}^{\lambda}(P) | P) \leq \rho(\lambda,T_{\alpha})$ for some $\frac{\pi}{6}\leq \alpha \leq \frac{\pi}{4}$.
\emph{Case 1}, $P$ is type 2. To prove the assertion in this case, we need the next lemma.
\begin{Lemma}\label{lem:isosceles}
Let $T=\conv\{p_1,p_2,o\}$ be an isosceles triangle with its apex at $o$, and $2a=|p_1-p_2|$, $b=|p_1-o|$. Assume that $b \geq 1+\lambda$, and let
\[
\rho(a,b)= \frac{\area\left( T \cap \bigcup_{i=1}^2(p_i + B^{\lambda})\right)}{\area(T)}.
\]
Then $\rho(a,b)$ is a strictly decreasing function of both $a$ and $b$.
\end{Lemma}
\begin{proof}[Proof of Lemma~\ref{lem:isosceles}]
Set $\bar{\lambda}=1+\lambda$. Let $a > \bar{\lambda}$. The fact that in this case $\rho(a,b)$ is a (not necessarily strictly) decreasing function of $a$ and $b$ is proved in the Lemma of \cite{Andras}. To prove that this function is strictly decreasing, we may apply a straightforward modification of its proof.
Hence, from now on, we assume that $a \leq \bar{\lambda}$.
Then we have
\begin{equation}\label{eq:density2}
\rho(a,b)= \frac{\bar{\lambda}^2 \arccos \frac{a}{b} -\bar{\lambda}^2 \arccos \frac{a}{\bar{\lambda}} + a \sqrt{\bar{\lambda}^2-a^2}}{a \sqrt{b^2-a^2}} .
\end{equation}
This implies that
\[
\rho'_b(a,b)= \frac{b}{b^2-a^2}\left( \frac{\bar{\lambda}^2}{b^2} - \rho(a,b) \right).
\]
Here, using the integral formula for the area of a region given in polar coordinates, it is easy to see that $\rho(a,b)> \frac{\bar{\lambda}}{b} > \frac{\bar{\lambda}^2}{b^2}$, which yields that $\rho'_b(a,b)< 0$.
On the other hand, by an elementary computation, we obtain
\[
\rho'_a(a,b)= \frac{ab^2\sqrt{\bar{\lambda}^2-a^2}-a\bar{\lambda}^2 \sqrt{b^2-a^2}-\bar{\lambda}^2(b^2-2a^2) \left( \arccos \frac{a}{b} -\arccos \frac{a}{\bar{\lambda}} \right)}{a^2 \left( b^2-a^2\right)^{3/2}} .
\]
Let us use the substitutions $b = \frac{a}{\cos \mu}$ and $\bar{\lambda} = \frac{a}{\cos \nu}$. Then we have $0 \leq \nu < \mu < \frac{\pi}{2}$, and
\[
\rho'_a(\mu,\nu) = \frac{\sin 2\nu - \sin 2\mu + 2 \cos 2\mu (\mu-\nu)}{2a \tan^2 \mu \cos^2 \mu \cos^2 \nu} .
\]
Clearly, the denominator of this fraction is positive. On the other hand, it is easy to check that its numerator is a strictly increasing function of $\nu$ on the interval $[0,\mu]$, and its value is zero at $\nu = \mu$; hence the numerator is negative whenever $\nu < \mu$. Thus, we have $\rho'_a(a,b) < 0$, that is, $\rho(a,b)$ is strictly decreasing in $a$ as well.
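As an independent numerical sanity check (not part of the proof; $\bar{\lambda} = 1$ is an arbitrary test value), the claimed monotonicity of $\rho(a,b)$ can be verified directly from (\ref{eq:density2}) by finite differences:

```python
import math

lam_bar = 1.0  # bar-lambda = 1 + lambda; arbitrary test value

def rho(a, b):
    # Covering ratio from (eq:density2); valid for 0 < a <= lam_bar <= b.
    num = (lam_bar ** 2 * math.acos(a / b)
           - lam_bar ** 2 * math.acos(a / lam_bar)
           + a * math.sqrt(lam_bar ** 2 - a ** 2))
    return num / (a * math.sqrt(b ** 2 - a ** 2))

h = 1e-6
for a in (0.3, 0.5, 0.7, 0.9):
    for b in (1.2, 1.5, 2.0, 3.0):
        assert rho(a + h, b) < rho(a, b)  # decreasing in a
        assert rho(a, b + h) < rho(a, b)  # decreasing in b
```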
\end{proof}
Since $P$ is type 2, $P=\cl \left( \conv \{ v,c_i,c_j\} \setminus \conv \{ v',c_i,c_j\} \right)$, for some $c_i,c_j \in C$, where $|v-c_i| = |v-c_j| \geq \sqrt{2}$,
and $|v'-c_i| = |v'-c_j|$.
Let $T = \conv \{ v,c_i,c_j\}$ and $T' = \conv \{ v',c_i,c_j\}$. Then, by Lemma~\ref{lem:isosceles}, we have $\rho(\mathcal{F}^{\lambda} | T) \leq \rho(\mathcal{F}^{\lambda} | T')$, yielding that $\rho(\mathcal{F}^{\lambda} | P) \leq \rho(\mathcal{F}^{\lambda} | T)$. Furthermore, since the legs of $T$ are at least $\sqrt{2}$, and its base is at least $2$, by Lemma~\ref{lem:isosceles} we have $\rho(\mathcal{F}^{\lambda} | T) \leq \rho(\mathcal{F}^{\lambda} | T_0) = \rho\left( \lambda, T_\frac{\pi}{4}\right)$, where $T_0$ is the isosceles right triangle whose hypotenuse is of length $2$.
\emph{Case 2}, $P$ is type 1. In this case the sides of $P$ are of length less than $2\sqrt{2}$, and thus, the unit disks centered at the vertices of $P$ are totally separable. This fact is equivalent to the condition that two heights of $P$ are at least two. Hence, the assertion follows immediately from Lemma~\ref{lem:acute}.
\begin{Lemma}\label{lem:acute}
Let $T=\conv\{p_1,p_2,p_3\}$ be an acute triangle with two heights at least two. Let
\[
\rho(\lambda,T) = \frac{\area\left( T \cap \bigcup_{i=1}^3 (p_i + B^{\lambda}) \right)}{\area(T)}.
\]
Then $\rho(\lambda,T) \leq \rho(\lambda,T_{\alpha})$, for some $\frac{\pi}{6} \leq \alpha \leq \frac{\pi}{4}$.
\end{Lemma}
\begin{proof}[Proof of Lemma~\ref{lem:acute}]
First, note that by our assumption, all sidelengths of $T$ are greater than two. Let $R$ be the circumradius of $T$.
If $R \leq 1+\lambda$, then the assertion follows by a simple geometric observation.
Thus, we prove the statement under the condition that $R > 1+\lambda$. This condition implies that the circumcenter of $T$ is not covered by the three soft disks, and also that at most two of the three soft disks intersect.
If there is a soft disk that does not intersect the other two soft disks (i.e. two sides of $T$ are longer than $2 + 2 \lambda$), we may move it towards the opposite side of $T$, and, thus, increase the covering ratio. Hence, if $\rho(\lambda,T)$ is maximal, then it has at most one side longer than $2+2\lambda$.
Let $p_1$ be the vertex of $T$ such that the altitude starting at $p_1$ is a shortest altitude. Then, by the area formula for triangles, if $T$ has a side longer than $2+2\lambda$, then it is $[p_2,p_3]$, and hence, $p_1 + B^{\lambda}$ intersects the other two soft disks.
Let $T' = \conv\{p'_1,p_2,p_3\}$ be the isosceles triangle with circumradius $R$, where $p'_1$ is the midpoint of the arc $p_2p_3$ of the circumcircle of $T$ containing $p_1$. In the remaining part of the proof we show that $\rho(\lambda,T') \geq \rho(\lambda,T)$.
Observe that $T'$ also has two (equal) heights that are at least two, and also that its base is not shorter than its legs.
Thus, this inequality implies the assertion of the lemma, since, if these two heights of $T'$ are greater than two, then we can replace $T'$ by a smaller similar copy of itself, which clearly increases its covering ratio.
Let $o$ be the circumcenter of $T$, and for $i=1,2,3$, let $t_i = \area (\conv \{ o,p_j,p_k\})$, and
\[
t^{tr}_i = \area \left( \left( \left( p_j + (1+\lambda) B\right) \cup \left( p_k + (1+\lambda) B\right) \right) \cap \conv \{ o,p_j,p_k\} \right),
\]
where $\{i,j,k\} = \{ 1,2,3 \}$.
We define $t'_i$ and $t'^{tr}_i$ similarly for the triangle $T'$.
Note that by Lemma~\ref{lem:isosceles}, we have $\frac{t^{tr}_1}{t_1} = \frac{t'^{tr}_1}{t'_1} \leq \min\{ \frac{t^{tr}_2}{t_2} , \frac{t^{tr}_3}{t_3}\}$.
Since, clearly, $t_2+t_3 \leq 2 t'_2 = 2t'_3$, it is sufficient to prove that $\frac{t^{tr}_2+t^{tr}_3}{t_2+t_3} \leq \frac{t'^{tr}_2}{t'_2}$.
Set $\mu = p_1p_2p_3\angle$, $\nu = p_1p_3p_2 \angle$, and $\tau = \frac{\mu+\nu}{2} = p'_1p_2p_3\angle =
p'_1p_3p_2\angle$, and let $f(x)= \frac{\pi}{2}-x-\arccos(R \sin x)+R \sin (x) \sqrt{1-R^2 \sin^2 x}$ and $g(x) = \frac{1}{2} R^2 \sin (2x)$.
Then the desired inequality can be written in the form
\[
\frac{f(\mu)+f(2\tau-\mu)}{g(\mu)+g(2\tau-\mu)} \leq \frac{f(\tau)}{g(\tau)}
\]
for some $0 < \mu \leq \tau \leq 2\tau-\mu < \arcsin \frac{1}{R} < \frac{\pi}{2}$.
Let us define
\[
F(\mu,\tau) = 2f(\tau) \left( g(\mu) + g(2\tau-\mu)\right) - 2 g(\tau) \left( f(\mu)+f(2\tau-\mu) \right).
\]
Note that $F(\tau,\tau) = 0$ for every value of $\tau$. We show that $F(\mu,\tau)$ is a strictly decreasing function of $\mu$ for every value of $\tau$, which readily implies the assertion.
By an elementary computation, we obtain
\[
F'_{\mu}(\mu,\tau) = 2R^2 \sin2\tau \left( 2 f(\tau) \sin (2\tau-2\mu) - R \cos \mu \sqrt{1-R^2 \sin^2 \mu} + R \cos(2\tau-\mu) \sqrt{1-R^2 \sin^2(2\tau-\mu)}\right).
\]
Using the inequalities $2R^2 \sin 2\tau > 0$, $\sin(2\tau-2\mu) > 0$, $f(\tau) < g(\tau)$, and some trigonometric identities, we obtain that
\[
F'_{\mu}(\mu,\tau) < 2R^2 \sin2\tau \left( h(\mu) - h(2\tau-\mu) \right),
\]
where $h(x)=R^2 \cos^2 x - R \cos x \sqrt{1-R^2 \sin^2 x}$. Observe that
\[
h'(x) = \frac{R \sin x \left( R \cos x - \sqrt{1-R^2 \sin^2 x} \right)^2}{\sqrt{1-R^2 \sin^2 x}} > 0
\]
if $0 < x < \arcsin \frac{1}{R}$. This implies that $h(\mu) < h(2\tau-\mu)$, from which the inequality $F'_{\mu}(\mu,\tau) < 0$ readily follows.
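As a quick numerical check of the monotonicity of $h$ (not part of the proof; $R = 1.5$ is an arbitrary admissible value, since $R > 1+\lambda > 1$):

```python
import math

R = 1.5  # arbitrary test value with R > 1

def h(x):
    u = math.sqrt(1 - R ** 2 * math.sin(x) ** 2)
    return R ** 2 * math.cos(x) ** 2 - R * math.cos(x) * u

# h should be strictly increasing on (0, arcsin(1/R)).
xs = [k * math.asin(1 / R) / 200 for k in range(1, 200)]
assert all(h(x) < h(y) for x, y in zip(xs, xs[1:]))
```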
\end{proof}
This completes the proof of Theorem~\ref{thm:soft_packing}. \end{proof}
\begin{figure}[ht]
\centering
\begin{minipage}{.45\textwidth}
\begin{center}
\includegraphics[width=0.9\textwidth]{soft_density_suruseg.pdf}
\end{center}
\end{minipage}%
\begin{minipage}{0.45\textwidth}
\begin{center}
\includegraphics[width=0.9\textwidth]{soft_density_szog.pdf}
\end{center}
\end{minipage}
\caption{Maximal covering ratios of soft circle packings (on the left), and half-angles of isosceles triangles with maximal covering ratios (on the right)}
\label{fig:density}
\end{figure}
\begin{Corollary}
An elementary computation yields that the smallest value $\Lambda$ of $\lambda$ with the property that $\rho(\lambda,T_{\alpha}) = 1$ for some value of $\alpha$ is
$\Lambda = \frac{3\sqrt{3}}{4} - 1 \approx 0.299$. Thus, by Theorem~\ref{thm:soft_packing}, $\rho_{\lambda,B}^{sep}= 1$ if, and only if, $\lambda \geq \frac{3\sqrt{3}}{4} - 1$. Furthermore, $\rho(\Lambda,T_{\alpha}) = 1$ is satisfied if, and only if, $\alpha = \arccos \sqrt{\frac{2}{3}} \approx 35.26439^{\circ}$. It is worth rephrasing this result in terms of the closeness of packings introduced by L. Fejes T\'oth (\cite{FTL78}) as follows. If $\cal P$ is a packing of unit disks in $\mathbb{E}^2$, then let $r_{sep}({\cal P})$ be the supremum of $r>0$ for which there exists a circular disk of radius $r$ having no point in common with the elements of $\cal P$, and call $r_{sep}({\cal P})$ the {\it closeness} of $\cal P$. Thus, the above claim can be restated by saying that $ r_{sep}({\cal P})\geq \frac{3\sqrt{3}}{4} - 1$ holds for any totally separable unit disk packing ${\cal P}$ of $\mathbb{E}^2$, with equality for a unique totally separable lattice packing.
\end{Corollary}
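The value of $\Lambda$ can be double-checked numerically (our sketch, not from the paper). The three soft disks of radius $1+\lambda$ centered at the vertices of $T_{\alpha}$ cover $T_{\alpha}$ exactly when $1+\lambda$ is at least its circumradius, and from the defining conditions of $T_{\alpha}$ one can compute the leg length $\frac{2}{\sin 2\alpha}$ and, via the law of sines, the circumradius $R(\alpha) = \frac{1}{2\sin\alpha\cos^2\alpha}$; both formulas are our derivation, used here as assumptions.

```python
import math

def R(alpha):
    # Circumradius of T_alpha, assuming leg = 2/sin(2*alpha) and the law of sines.
    return 1.0 / (2.0 * math.sin(alpha) * math.cos(alpha) ** 2)

# Grid search for the minimum of R over [pi/6, pi/4].
N = 10 ** 5
alphas = [math.pi / 6 + k * (math.pi / 4 - math.pi / 6) / N for k in range(N + 1)]
best = min(alphas, key=R)

Lambda = 3 * math.sqrt(3) / 4 - 1  # claimed value, approximately 0.299
assert abs(R(best) - (1 + Lambda)) < 1e-9
assert abs(best - math.acos(math.sqrt(2.0 / 3.0))) < 1e-4
```

Minimizing $R$ indeed recovers $\Lambda \approx 0.299$ at $\alpha \approx 35.264^{\circ}$.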
\begin{Remark}\label{rem:max_density}
By Theorem~\ref{thm:soft_packing}, we have that $\rho_{\lambda,B}^{sep} = \rho(\lambda,T_{\alpha})$ for some $\frac{\pi}{6} \leq \alpha \leq \frac{\pi}{4}$.
Unfortunately, in general the value(s) of $\alpha$ where $\rho(\lambda,T_{\alpha})$ is maximal for a fixed value of $\lambda$ can be computed only numerically. The graph on the left in Figure~\ref{fig:density} shows the maximal values of $\rho(\lambda,T_{\alpha})$ as a function of $\lambda$, and the one on the right shows the values of $\alpha$ belonging to these covering ratios. Let $t_0$ denote the unique solution of the equation $\sin 2t = \frac{\pi}{2}-2t$, and set $\lambda_1 = \frac{1}{\cos t_0}-1 \approx 0.093$ and $\lambda_2 = \frac{2 \cos t_0}{\sqrt{4\cos^2 t_0-1}} - 1 \approx 0.194$. If $0 < \lambda \leq \lambda_2$, then there is a unique optimal value of $\alpha$, and in the corresponding triangle $T_{\alpha}$ the two soft disks centered at the vertices of the base do not overlap. Within this case, if $0 < \lambda \leq \lambda_1$, then the covering ratio is maximal for $\alpha = \frac{\pi}{4}$. If $\lambda_1 \leq \lambda \leq \lambda_2$, then the optimal value of $\alpha$ is $\alpha = \frac{1}{2} \arcsin \left( (1+\lambda) \cos t_0 \right)$, and the maximal covering ratio is the linear function $ (1+\lambda) \left( \frac{\pi}{4 \cos t_0} - \frac{t_0}{\cos t_0} + \sin t_0 \right)$. If $\lambda > \lambda_2$, then the two soft disks centered at the endpoints of the base overlap, and we could express the maximal covering ratio only numerically.
\end{Remark}
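The constants $t_0$, $\lambda_1$ and $\lambda_2$ above can be reproduced numerically; the following sketch (illustrative only) solves $\sin 2t = \frac{\pi}{2}-2t$ by bisection on $(0,\frac{\pi}{4})$:

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection; assumes f(lo) and f(hi) have opposite signs."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# unique solution of sin(2t) = pi/2 - 2t on (0, pi/4)
t0 = bisect(lambda t: math.sin(2 * t) - (math.pi / 2 - 2 * t), 0.0, math.pi / 4)
lam1 = 1 / math.cos(t0) - 1
lam2 = 2 * math.cos(t0) / math.sqrt(4 * math.cos(t0) ** 2 - 1) - 1
print(round(lam1, 3), round(lam2, 3))  # 0.093 0.194
```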
\bigskip
\section{Appendix}
We cannot resist raising the following questions, motivated by the theorems proved in this paper.
\begin{Problem}
Characterize the case of equality in (\ref{eq:Oler}) of Theorem~\ref{thm:Oler}.
\end{Problem}
\begin{Definition}
For any $o$-symmetric convex domain $K$ in $\mathbb{E}^2$, $n>1$, and $\lambda\ge 0$ let
\[
a_{sep}(K,n,\lambda) = \min \left\{\area \left( \bigcup_{i=1}^n (c_i+ K^{\lambda})\right) : \{c_1 + K^{\lambda}, \ldots, c_n + K^{\lambda} \} \in \P^{sep}_{K,n,\lambda} \right\}.
\]
\end{Definition}
\begin{Problem}\label{soft-Betke}
Compute $a_{sep}(K,n,\lambda)$ for given $o$-symmetric convex domain $K$, $n>1$, and $\lambda\ge 0$.
\end{Problem}
If $K$ is an $o$-symmetric convex domain in $\mathbb{E}^2$, $n>1$, and $\{c_1,c_2,\dots , c_n\}\subseteq \mathbb{E}^2$, then it is easy to see that
\begin{equation}\label{large}
\area\left( \bigcup_{i=1}^n (c_i + K^{\lambda})\right) = (1+\lambda)^2 \area(K)+2\lambda A(\conv \{c_1,c_2,\ldots,c_n \}, K)+o(1+\lambda)
\end{equation}
for $(1+\lambda)\to+\infty$.
Based on (\ref{large}) the following is immediate from $(\ref{lem:Radon}.2)$ of Lemma~\ref{lem:Radon}.
\begin{Remark}
\[
a_{sep}(K,n,\lambda) \geq
\]
\[
(1+\lambda)^2 \area(K) + \lambda \frac{\area(\square(K))}{4} \min_{ \{c_i + K : i=1,2,\ldots,n \} \in \P^{sep}_{K,n,0}} \left\{ M_{K} (\bd (\conv \{ c_1,\ldots, c_n\}))\right\}
+ o(1+\lambda)
\]
for $(1+\lambda)\to+\infty$.
\end{Remark}
Thus, the problem of lower bounding $a_{sep}(K,n,\lambda)$ for large $\lambda$ leads us to
\begin{Problem}
For a given $o$-symmetric convex domain $K$ in $\mathbb{E}^2$ and given $n > 1$ compute
\[
\min_{ \{c_i + K : i=1,2,\ldots,n \} \in \P^{sep}_{K,n,0}} \left\{ M_{K} (\bd (\conv \{ c_1,\ldots, c_n\}))\right\}=
\]
\[
\min_{ \{c_i + K : i=1,2,\ldots,n \} \in \P^{sep}_{K,n,0}} \left\{ M_{K} \left(\bd \left( \conv \left( \bigcup_{i=1}^n (c_i+K) \right) \right)\right)\right\}-M_K(\bd K).
\]
\end{Problem}
Recall that the {\it maximum separable contact number} $c_{sep}(K, n, 2)$ is the largest contact number of totally separable packings of $n$ translates of a given ($o$-symmetric) convex domain $K$ in $\mathbb{E}^2$ for given $n>1$, where the contact number of a packing is simply the number of touching pairs among the packing elements.
\begin{Remark}
Let $n>1$ be given and let $K$ be an $o$-symmetric convex domain in $\mathbb{E}^2$. Then it is easy to see that there exists $\lambda(K, n)>0$ with the following property: for any $\lambda$ with $0\leq\lambda\leq \lambda(K, n)$ and any $\{ c_1 + K^{\lambda}, c_2 + K^{\lambda}, \ldots, c_n + K^{\lambda} \} \in \P^{sep}_{K,n,\lambda}$ the number of pairs
$\{ c_i + K^{\lambda}, c_j + K^{\lambda}\}$ with $(c_i + K^{\lambda})\cap (c_j + K^{\lambda})\neq\emptyset$ is at most $c_{sep}(K, n, 2)$. Furthermore, if $K$ is smooth, then there exists $\lambda^*(K, n)>0$ with $\lambda^*(K, n)\leq \lambda(K, n)$ such that no three of the sets $\{ c_1 + K^{\lambda}, c_2 + K^{\lambda}, \ldots, c_n + K^{\lambda} \}$
intersect. As a result it is not hard to see that
\[
n\area(K^\lambda)- c_{sep}(K, n, 2)A_{\rm max}(K, \lambda)\leq a_{sep}(K,n,\lambda) \leq n\area (K^\lambda)- c_{sep}(K, n, 2)A_{\rm min}(K, \lambda),
\]
where $A_{\rm min}(K, \lambda)=\min\{\area \left((c_i + K^{\lambda})\cap (c_j + K^{\lambda})\right)\ |\ |c_i-c_j|_K=2\}$
\newline
(resp., $A_{\rm max}(K, \lambda)=$ $\max\{\area \left((c_i + K^{\lambda})\cap (c_j + K^{\lambda})\right)\ |\ |c_i-c_j|_K=2\}$).
\end{Remark}
Thus, the problem of bounding $a_{sep}(K,n,\lambda)$ for small $\lambda$ leads us to
\begin{Problem}\label{separable-contact-numbers}
Let $K$ be a (smooth) convex domain in $\mathbb{E}^2$ and $n>1$. Then compute
\[
c_{sep}(K,n,2).
\]
\end{Problem}
In connection with Problem~\ref{separable-contact-numbers} the following result was proved in \cite{BeKhOl}.
\begin{Theorem}
\item{(A)} $c_{sep}(K,n,2) = \left\lfloor 2n - 2\sqrt{n}\right\rfloor$, for any smooth strictly convex domain $K$ in $\mathbb{E}^2$.
\item{(B)} Let $R$ be a smooth Radon domain and let $n=\ell(\ell+\epsilon)+k\ge 4$ be the unique decomposition of a positive integer $n$ such that $k$, $\ell$ and $\epsilon$ are integers satisfying $\epsilon\in \{0,1\}$ and $0\le k< \ell+\epsilon$. Suppose that $\cal{P}$ is a totally separable packing of $n$ translates of $R$ with $c_{sep}(R,n,2) =\lfloor{2n-2\sqrt{n}}\rfloor$ contacts. If $k\ne 1$, then $\cal{P}$ is a finite lattice packing lying on an Auerbach lattice of $R$, while if $k=1$, then all but at most one translate in $\cal{P}$ form a lattice packing on an Auerbach lattice of $R$.
\end{Theorem}
\begin{Definition}
Let $\mathcal{F} = \{ c_i + K : c_i\in\mathbb{E}^2 \text{ for } i \in \mathbb{N}\}$ be a totally separable packing (resp., lattice packing) of translates of the $o$-symmetric convex domain $K$ in $\mathbb{E}^2$. Then $\mathcal{F}^{\lambda} = \{ c_i + (1+\lambda)K : i \in \mathbb{N}\}$
is called a \emph{totally separable soft packing} (resp., \emph{totally separable soft lattice packing}) of translates of the \emph{soft convex domain} $K^{\lambda}=(1+\lambda)K$ with \emph{soft parameter} $\lambda > 0$.
In this case the \emph{(upper) covering ratio} of the soft packing $\mathcal{F}^{\lambda}$ is defined as
\[
\rho(\mathcal{F}^\lambda) = \limsup_{r \to \infty} \frac{\area \left( r K \cap \bigcup_{i\in\mathbb{N} } (c_i+K^{\lambda})\right)}{\area (r K)}.
\]
We denote by $\rho_{\lambda, K}^{sep}$ (respectively, $\rho_{\lambda, K}^{sep, lattice}$) the supremum of the (upper) covering ratios over the family of totally separable soft packings (respectively, of totally separable soft lattice packings) of translates of the soft convex domain $K^{\lambda}$ with soft parameter $\lambda > 0$.
\end{Definition}
\begin{Problem}
Let $K$ be an $o$-symmetric convex domain in $\mathbb{E}^2$. Then prove or disprove that $\rho_{\lambda,K}^{sep}=\rho_{\lambda,K}^{sep,lattice} $
holds for every $\lambda>0$.
\end{Problem}
\begin{Definition}
If $\cal P$ is a totally separable packing of translates of an $o$-symmetric convex domain $K$ in $\mathbb{E}^2$, then let $r_{sep}({\cal P})$ be the supremum of $r>0$ for which there exists a translate of $rK$ having no point in common with the elements of $\cal P$. Then call $r_{sep}({\cal P})$ the {\it closeness} of $\cal P$ and set
\[
\underline{r}_{sep}(K)=\inf\{r_{sep}({\cal P})\ :\ {\cal P}\ \text{is a totally separable packing of translates of}\ K\ \text{in}\ \mathbb{E}^2\}.
\]
\end{Definition}
\begin{Problem}
Prove or disprove that for any $o$-symmetric convex domain $K$ of $\mathbb{E}^2$ we have $\underline{r}_{sep}(K)=r_{sep}({\cal P})$ for some totally separable lattice packing $\cal P$ of $K$.
\end{Problem}
| {
"arxiv_id": "1710.06886",
"url": "https://arxiv.org/abs/1710.06886",
"title": "Bounds for totally separable translative packings in the plane",
"subjects": "Metric Geometry (math.MG)",
"abstract": "A packing of translates of a convex domain in the Euclidean plane is said to be totally separable if any two packing elements can be separated by a line disjoint from the interior of every packing element. This notion was introduced by G. Fejes Tóth and L. Fejes Tóth (1973) and has attracted significant attention. In this paper we prove an analogue of Oler's inequality for totally separable translative packings of convex domains and then we derive from it some new results. This includes finding the largest density of totally separable translative packings of an arbitrary convex domain and finding the smallest area convex hull of totally separable packings (resp., totally separable soft packings) generated by given number of translates of a convex domain (resp., soft convex domain). Finally, we determine the largest covering ratio (that is, the largest fraction of the plane covered by the soft disks) of an arbitrary totally separable soft disk packing with given soft parameter."
} |
https://arxiv.org/abs/2107.01881 | Robust Online Convex Optimization in the Presence of Outliers | We consider online convex optimization when a number k of data points are outliers that may be corrupted. We model this by introducing the notion of robust regret, which measures the regret only on rounds that are not outliers. The aim for the learner is to achieve small robust regret, without knowing where the outliers are. If the outliers are chosen adversarially, we show that a simple filtering strategy on extreme gradients incurs O(k) additive overhead compared to the usual regret bounds, and that this is unimprovable, which means that k needs to be sublinear in the number of rounds. We further ask which additional assumptions would allow for a linear number of outliers. It turns out that the usual benign cases of independently, identically distributed (i.i.d.) observations or strongly convex losses are not sufficient. However, combining i.i.d. observations with the assumption that outliers are those observations that are in an extreme quantile of the distribution, does lead to sublinear robust regret, even though the expected number of outliers is linear. | \section{Introduction}
Methods for online convex optimization (OCO) are designed to work even in
the presence of adversarially generated data
\citep{Hazan2016,ShalevShwartz,CesaBianchiLugosi2006}, but this is
only possible because strong boundedness assumptions are imposed on
the losses that limit the influence of individual data points. On the
other hand, the most practically successful methods do not enforce an
a priori specified bound on the losses, but instead adapt to the norms
of the observed gradients or to the observed loss range. For example,
the regret bound for AdaGrad adapts to the ranges of the gradient
components per dimension \citep{JMLR:v12:duchi11a}, the regret bound for
online ridge regression scales with the largest observed loss
\citep{Vovk01}, the regret bound for AdaHedge in the prediction with
experts setting scales with the observed loss range of the experts
\Citep{DeRooijEtAl}, the regret bound for online gradient descent on
strongly convex losses scales with the maximum gradient norm squared
\citep{HazanAgarwalKale2007}, etc. In all such cases a small number of
outliers with large gradients among an otherwise benign dataset can
significantly worsen performance. This is also clear directly from the
algorithms themselves, where we see that large gradients have the
effect of significantly decreasing the effective step size for all
subsequent data points, leading to slower learning. Extreme outliers
may occur naturally, for instance because of heavy-tailed
distributions or sensor glitches, but if each loss is based on the
input of a user, then we may also be concerned that a small number of
adversarial users may try to poison the data stream
\citep{DBLP:conf/iclr/KurakinGB17a}.
We formally capture the robustness of OCO methods by modifying the
standard setting to measure performance only on the rounds that are
not outliers. The goal of the learner is to perform as well as if the
outliers were not present, up to some overhead that is incurred for
filtering out the outliers. As in standard OCO, learning proceeds in
$T$ rounds, and at the start of each round $t$ the learner needs to
issue a prediction $\w_t$ from a bounded convex domain. The
environment then reveals a convex loss function $f_t$ with
(sub)gradient $\grad_t := \nabla f_t(\w_t)$ at $\w_t$, and performance
is measured by the cumulative difference between the learner's losses
and the losses of the best fixed parameters $\u$. Unlike in the
standard OCO setting, however, we only sum up losses over the subset
of inlier rounds $\mathcal{S} \subseteq \{1,2,\ldots,T\}$ that are not
outliers, leading to the following notion of \emph{robust regret}:
\begin{equation}\label{eqn:robustregret}
R_T(\u,\mathcal{S}) := \sum_{t \in \mathcal{S}} \big(f_t(\w_t) - f_t(\u)\big).
\end{equation}
The challenge for the learner is to guarantee small robust regret
without knowing $\mathcal{S}$. Importantly, we aim for robust regret bounds
that scale with the loss range or gradient norms of the rounds
in~$\mathcal{S}$, but not with the size of the outliers, so even extreme
outliers should not be able to confuse the learner.
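In code, the robust regret of \eqref{eqn:robustregret} is simply a loss difference summed over the inlier rounds; a minimal sketch (with hypothetical loss sequences) is:

```python
def robust_regret(learner_losses, comparator_losses, inliers):
    """Robust regret: loss difference summed over the inlier rounds only;
    outlier rounds (however extreme) do not enter the sum."""
    return sum(learner_losses[t] - comparator_losses[t] for t in inliers)

# round 2 is an extreme outlier and is excluded from the evaluation
print(robust_regret([1.0, 2.0, 1e6, 3.0], [0.0, 1.0, 0.0, 1.0], [0, 1, 3]))  # 4.0
```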
In Section~\ref{sec:okbound}, we first consider the fully adversarial
case where the only thing the learner knows is that there are at most
$k$ outliers, so
$T - |\mathcal{S}| \leq k$,
and both the inliers and the outliers are generated adversarially,
without any bound on the range of the outliers, and with the range of
the inliers also unknown a priori. We introduce a simple filtering
approach that filters out some of the largest gradients, and passes on
the remaining rounds to a standard online learning algorithm ALG. When
the losses are linear, this approach is able to guarantee that
\begin{equation}\label{eqn:introOKbound}
R_T(\u,\mathcal{S}) = R_T^\text{ALG}(\u) + O\big(G(\mathcal{S}) k\big)
\qquad
\text{for all $\mathcal{S}$ such that $T - |\mathcal{S}| \leq k$ simultaneously,}
\end{equation}
where $G(\mathcal{S})$ is the norm of the largest gradient among the rounds
in $\mathcal{S}$ and $R_T^\text{ALG}(\u)$ is the regret of ALG on a subset of
rounds under the guarantee that their gradient norms are at most $2
G(\mathcal{S})$. The extension to general convex losses then follows from a
standard reduction to the linear loss case. We follow up by showing
that \eqref{eqn:introOKbound} is unimprovable, not just for
adversarial losses, but even if the losses are independent and
identically distributed (i.i.d.) according to a fixed probability
distribution or if the losses are strongly convex. This establishes, in
considerable generality, that the dependence on the number of outliers must be
linear in~$k$. Nevertheless, in Section~\ref{sec:quantileMethod} we identify
sufficient conditions to get around the linear dependence: if the gradients
are i.i.d., and we take $\mathcal{S} = \mathcal{S}_p$ to be the rounds in which
$\|\grad_t\|_*$ is at most the $p$-quantile $G_p$ of the common distribution of
their norms, then there exists a method based on approximating $G_p$ by its
empirical counterpart on the available data that guarantees that the expected
robust regret is at most
\begin{equation}\label{eqn:introquantilebound}
\ex \sbr*{R_T(\u,\mathcal{S}_p)} = O\del*{G_p \del*{\sqrt{p T} + \sqrt{p(1-p)T \ln T} +
\ln^2 T}}.
\end{equation}
Since $O\del*{G_p \sqrt{p T}}$ would be expected if $\mathcal{S}_p$ were known
in advance, we see that the overhead grows sublinearly in $T$ and is
even asymptotically negligible for outlier proportion $1-p = o(1/\ln(T))$.
More generally, we extend
this result such that the gradients do not need to be i.i.d.\
themselves, but it is sufficient if there exist i.i.d.\ random
variables $\X_t$ and a constant $L$ such that $\|\grad_t\|_* \leq L
\|\X_t\|_*$. We then define the quantile with respect to the distribution of
the $\X_t$. This covers nonlinear losses of the form $f_t(\w) = h_t(\w^\top
\X_t)$ for convex functions $h_t$ that are $L$-Lipschitz, like the logistic
loss $f_t(\w) = \ln(1+\exp(-Y_t \w^\top \X_t))$ and the hinge loss $f_t(\w) =
\max\{0, 1-Y_t\w^\top \X_t\}$ for $Y_t \in \{-1,+1\}$, both with $L=1$.
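The empirical-quantile idea can be illustrated as follows. This is a simplified sketch of our own (not the exact method analyzed in Section~\ref{sec:quantileMethod}): pass round $t$ iff its gradient norm is at most the empirical $p$-quantile of the norms observed in earlier rounds.

```python
import math

def quantile_pass(norms, p):
    """Illustrative sketch: decide per round whether its gradient norm is
    at most the empirical p-quantile of previously observed norms."""
    decisions, seen = [], []
    for g in norms:
        if not seen:
            decisions.append(True)  # nothing observed yet: pass
        else:
            srt = sorted(seen)
            idx = min(len(srt) - 1, max(0, math.ceil(p * len(srt)) - 1))
            decisions.append(g <= srt[idx])
        seen.append(g)
    return decisions

print(quantile_pass([1.0, 1.0, 1.0, 1.0, 100.0], 0.8))
# [True, True, True, True, False]
```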
\paragraph{Related Work}
The definition of robust regret may remind the reader of the adaptive
regret \citep{SheshradiHazan,pmlr-v37-daniely15}, which measures regret on a contiguous
interval of rounds~$\cI$ that is unknown to the learner. Since adaptive
regret can be controlled by casting it into the framework of specialist
(sleeping) experts \citep{fssw-se-97,ChernovVovk2009}, it is natural to
ask whether the same is possible for the robust regret. To apply the
specialist experts framework, we would assign a separate learner
(specialist) to each possible subset of rounds $\mathcal{S}$ that would then be
active only on $\mathcal{S}$, and such a pool of $m$ learners would be
aggregated using a meta-algorithm. Computational issues aside, this
approach runs into two problems: the first is that all existing
meta-algorithms assume the losses to be bounded within a known range,
and therefore cannot be applied since we do not assume that even the
range of the inliers is known. Second, even if the range issue could be
resolved, the specialist regret would incur an $\Omega(\sqrt{T \log m}) =
\Omega(\sqrt{k T \log(T/k)})$ overhead, already if we only consider all
$m = \binom{T}{T-k} \geq (T/k)^k$ possible subsets with exactly $k$
outliers. We see that $k$ now multiplies $T$, which is much worse than
the optimal additive dependence on $k$ in~\eqref{eqn:introOKbound}.
Reducing the dependence on the largest gradient norm has previously been considered in
the context of adaptive online and stochastic convex optimization \citep{JMLR:v12:duchi11a, pmlr-v97-ward19a}. However, these methods still depend on the average of all (squared) gradient norms, and therefore require these norms to be finite. In contrast to these adaptive methods, our method can handle a small number of adversarial samples, with large or even infinite norm, while our robust regret analysis still guarantees a sub-linear bound.
In the context of stochastic optimization, \citet{juditsky2019algorithms} propose a robust version of mirror descent based on truncating the gradients returned by a stochastic oracle. Their main goal is to establish a sub-Gaussian confidence bound on the optimization error under weak assumptions about the tails of the noise distribution. Contrary to our setup, they control the smoothness of the objective and the variance of the noise, so that already a vanilla (non-robust) version of SGD would achieve a vanishing optimization error in expectation (but not with a sub-Gaussian confidence). \citet{pmlr-v97-diakonikolas19a} propose a robust meta-algorithm for stochastic optimization that repeatedly trains a standard algorithm as a base learner and filters out the outliers. This approach is conceptually similar to our filtering method, but it is designed to work in a batch setting, with the data (sample functions) given in advance. \citet{DBLP:journals/corr/abs-1802-06485} provide a robust batch algorithm for stochastic optimization by applying the ideas from robust mean estimation to robustify stochastic gradient estimates in a (batch) gradient descent algorithm.
In the online learning and bandit literature, interesting results were
obtained for dealing with adversarial corruptions of data that are
otherwise generated i.i.d., to still benefit from the stochastic setting
\citep{DBLP:conf/stoc/LykourisML18,pmlr-v99-gupta19a,DBLP:conf/nips/AmirAKML20}.
\citet{wang2018data} and \citet{pmlr-v120-zhang20b} further consider data poisoning attacks on an online learner, but the focus is on the optimization of the adversary, while the learner remains fixed.
In all these works, contrary to ours, the corrupted data is still
assumed to lie in the
same range as the non-corrupted data. A notable exception is the very
recent work of \citet{DBLP:journals/corr/abs-2010-04157}, which proposes
online algorithms for contextual bandits and linear regression in a
framework in which the linear model is realizable (well-specified) up to
small noise, and a fixed, randomly selected, fraction of examples is
arbitrarily corrupted (as in the Huber $\epsilon$-contamination model
\citep{Huber}), but still remains bounded. In contrast, we avoid strong
distributional assumptions such as model realizability, and do not make
any probabilistic assumptions about the corruption mechanism or impose
any constraints on the magnitude of the outliers.
Starting with pioneering works of Tukey and Huber \citep{Tukey,Huber} there has been a tremendous amount of past work in the area of robust statistics, which concerns the basic tasks of classical statistics in the presence of outliers and heavy-tailed distributions \citep{HuberBook}. A more recent line of research building on the work of \citet{Catoni2012,Minsker,LugosiMendelson2019,LugosiMendelson2019survey} concerns estimation with sub-Gaussian-style confidence for heavy-tailed distributions.
Finally, our setup is different from, but conceptually related to, a
line of research on machine learning and statistical problems in the
presence of adversarial data corruptions \citep{Charikar_etal}. This has
been studied, for instance, in the context of parameter estimation
\citep{Lai_etal,10.5555/3310435.3310606,10.5555/3174304.3175475,LugosiMendelson2019adversarial,pmlr-v108-prasad20a},
robust PCA \citep{robustPCA}, regression
\citep{pmlr-v75-klivans18a,Diakonikolas_etal,pmlr-v108-liu20b},
classification
\citep{JMLR:v10:klivans09a,10.1109/TPAMI.2015.2456899,10.1145/3006384}
and many other cases. See the in-depth survey by \citet{DBLP:journals/corr/abs-1911-05911} for an overview of recent advances in this direction.
\paragraph{Outline}
We start by summarizing our setting and notation in the next section.
Then, in Section~\ref{sec:okbound}, we prove the upper bound
\eqref{eqn:introOKbound} for adversarial losses, and show matching lower
bounds both for i.i.d.\ losses and for strongly convex losses. As a
further example, we show how robust regret can be used to bound the
excess risk in the Huber $\epsilon$-contaminated setting via
online-to-batch-conversion.
In
Section~\ref{sec:quantileMethod} we turn to the quantile case and establish
\eqref{eqn:introquantilebound}.
Finally, Section~\ref{sec:conclusion} concludes with a discussion of possible
directions for future work. Some proofs are deferred to the
appendix.
\section{Setting and Notation} \label{sec:setting}
Formally, we consider the following online learning protocol. In each
round $t = 1,2,\ldots$ the learner first predicts $\w_t \in \mathcal{W}$,
where the domain $\mathcal{W}$ is a non-empty, compact and convex subset of
$\mathbb{R}^d$. The adversary then reveals a convex loss function $f_t: \mathcal{W}
\rightarrow \mathbb{R}$, and the learner suffers loss $f_t(\w_t)$. We assume
throughout that there always exists a gradient or, more generally, a
subgradient $\grad_t := \nabla f_t(\w_t)$ at the learner's prediction,
which is implied by convexity of $f_t$ whenever $\w_t$ lies in the
interior of $\mathcal{W}$ and also on the boundary if there exists a finite
convex extension of all $f_t$ to a larger domain that contains $\mathcal{W}$
in its interior. The performance of the learner with respect to any
fixed parameters $\u \in \mathcal{W}$ is measured by the \emph{robust
regret} $R_T(\u,\mathcal{S})$ over the rounds $\mathcal{S} \subseteq [T] :=
\{1,2,\ldots,T\}$ that are not outliers, as defined in
\eqref{eqn:robustregret}. The definition of subgradients implies that
$f_t(\w_t) - f_t(\u) \leq (\w_t - \u)^\top \grad_t$, which implies that
$R_T(\u,\mathcal{S})$ is bounded from above by the \emph{linearized robust
regret}
\[
\linReg_T(\u,\mathcal{S}) := \sum_{t \in \mathcal{S}} (\w_t - \u)^\top \grad_t.
\]
We will state our main results for an arbitrary norm $\Norm{\cdot}$ on
$\mathcal{W}$ and measure gradient lengths in terms of the dual norm
$\Norm{\grad_t}_* = \sup_{\w \in \mathbb R^d : \Norm{\w} \le 1} \w^\top \grad_t$. Let $D = \max_{\u,\w \in \mathcal{W}} \Norm{\w - \u}$
denote the diameter of the domain. For the analysis of the robust
regret, we need a Lipschitz bound for the gradients that are in the set
$\mathcal{S}$, which we denote by
\begin{equation*}
G(\mathcal{S}) := \max_{t \in \mathcal{S}} \Norm{\grad_t}_*.
\end{equation*}
\section{Robustness to Adversarial Outliers}
\label{sec:okbound}
In this section we derive matching upper and lower bounds of the form in
\eqref{eqn:introOKbound}.
\subsection{Upper Bounds}
Let ALG be any Lipschitz-adaptive algorithm, which we will use as our
base online learning algorithm. Our general approach is to add a
filtering meta-algorithm FILTER that examines (the norm of) incoming
gradients and decides whether to filter them or pass them on to ALG for
learning. If $\mathcal{S}$ were known in advance, then FILTER could filter out
all outliers and pass on only the rounds in $\mathcal{S}$, but since $\mathcal{S}$ is
not known, FILTER needs to learn which rounds to pass on. Although most
online learning algorithms base their updates only on gradients, we note
that we do allow ALG to use the full loss function $f_t$ to update its
state when FILTER passes on round $t$ to ALG. When a round $t$ is
filtered, we assume that ALG behaves as if that round had not happened,
so we will have $\w_{t+1} = \w_t$. Our FILTER for this section is displayed in Algorithm~\ref{alg:filterok}.
\begin{algorithm2e}[htb]
\caption{Top-$k$ Filter: Filtering for Adversarial Setting}
\KwIn{Maximum number of outliers $k$}
\textbf{Initialize:} Let $\gradlist_0 = \{0,0,\ldots,0\}$ be an ordered
list of length $k+1$.\\
\For{$t = 1,2,\ldots$}{
\CommentSty{Maintain invariant that $\gradlist_t$ contains $k+1$ largest
gradients}\;
\eIf{$\|\grad_t\|_* > \min \gradlist_{t-1}$}{
Obtain $\gradlist_t$ from $\gradlist_{t-1}$ by removing the smallest
item in $\gradlist_{t-1}$ and inserting $\|\grad_t\|_*$\;
}{
Set $\gradlist_t$ equal to $\gradlist_{t-1}$\;
}
\CommentSty{Filter with factor $2$ slack}\;
\eIf{$\|\grad_t\|_* > 2 \min \gradlist_t$}{
Filter round $t$\;
}{
Pass round $t$ on to ALG\;
}
}
\label{alg:filterok}
\end{algorithm2e}
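A compact implementation of Algorithm~\ref{alg:filterok} keeps the $k+1$ largest norms in a min-heap; the following sketch (the class and method names are our own, not from any library) mirrors the pseudocode above:

```python
import heapq

class TopKFilter:
    """Top-k Filter: maintain the k+1 largest gradient norms seen so far,
    and pass a round on to ALG iff its norm is at most twice the smallest
    of them (the factor-2 slack of the pseudocode)."""

    def __init__(self, k):
        self.largest = [0.0] * (k + 1)  # min-heap of the k+1 largest norms

    def passes(self, grad_norm):
        # maintain invariant: `largest` holds the k+1 largest norms so far
        if grad_norm > self.largest[0]:
            heapq.heapreplace(self.largest, grad_norm)
        # filter with factor 2 slack
        return grad_norm <= 2 * self.largest[0]

f = TopKFilter(k=2)
print([f.passes(g) for g in [1, 1, 1, 100, 1, 1]])
# [False, False, True, False, True, True]
```

Note that, exactly as in the pseudocode, the first few rounds are filtered while the list still contains its initial zeros; these rounds are accounted for by the $O(G(\mathcal{S})k)$ overhead in the analysis.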
\begin{theorem}\label{thm:okbound}
Suppose ALG is any Lipschitz-adaptive algorithm that guarantees
linearized regret bounded by $B_T(G)$ on the rounds that it is passed
by FILTER, if the gradients in those rounds have length at most $G$,
and let FILTER be Algorithm~\ref{alg:filterok} with parameter $k$.
Then the linearized robust regret of ALG+FILTER is bounded by
\begin{equation}\label{eqn:okbound}
\linReg_T(\u,\mathcal{S})
\leq B_T\big(2 G(\mathcal{S})\big) + 4D(\u,\mathcal{S}) G(\mathcal{S}) (k+1)
\qquad
\text{for any $\mathcal{S}: T - |\mathcal{S}| \leq k$ and $\u \in \mathcal{W}$,}
\end{equation}
where $D(\u,\mathcal{S}) = \max_{t : \|\grad_t\|_* \leq 2 G(\mathcal{S})} \|\w_t - \u\|$.
\end{theorem}
There are two main ideas to the proof. First, since the list
$\gradlist_t$ in Algorithm~\ref{alg:filterok} contains $k+1$ elements
and there are at most $k$ outliers, at least one of the elements of
$\gradlist_t$ must be one of the inliers from~$\mathcal{S}$. It follows that the
smallest element of $\gradlist_t$ is a lower bound on $G(\mathcal{S})$. The
second idea is that, instead of filtering on this lower bound directly,
we filter with factor $2$ slack. Since every filtered gradient is also
added to $\gradlist_t$, this factor $2$ ensures that the minimum of
$\gradlist_t$ must at least double for every $k+1$ rounds that are
filtered. The resulting exponential growth of the filtered rounds means
that the contribution to the robust regret of all filtered rounds is
dominated by the last $k+1$ rounds, and therefore does not grow
with~$T$.
\begin{proof}
Let $\filtered \subset [T]$ denote the rounds filtered out by
Algorithm~\ref{alg:filterok}, and let $\passed = [T] \setminus
\filtered$ denote the rounds that are passed on to ALG. Then the
linearized robust regret splits as follows:
\begin{equation*}
\linReg_T(\u,\mathcal{S})
= \sum_{t \in \mathcal{S} \cap \passed} (\w_t - \u)^\top \grad_t
+ \sum_{t \in \mathcal{S} \cap \filtered} (\w_t - \u)^\top
\grad_t.
\end{equation*}
We will show that Algorithm~\ref{alg:filterok} guarantees that the
gradients on the passed rounds are bounded as follows:
\begin{equation}\label{eqn:passedgradientsbound}
\|\grad_t\|_* \leq 2 G(\mathcal{S})
\qquad
\text{for all $t \in \passed$,}
\end{equation}
which implies that
\begin{align*}
\sum_{t \in \mathcal{S} \cap \passed} (\w_t - \u)^\top \grad_t
&= \sum_{t \in \passed} (\w_t - \u)^\top \grad_t
- \sum_{t \in \passed \setminus \mathcal{S}} (\w_t - \u)^\top \grad_t\\
&\leq B_T(2 G(\mathcal{S})) + 2 D(\u,\mathcal{S}) G(\mathcal{S}) |\passed \setminus \mathcal{S}|
\leq B_T(2 G(\mathcal{S})) + 2 D(\u,\mathcal{S}) G(\mathcal{S}) k,
\end{align*}
where the first inequality uses the assumption on ALG and H\"older's
inequality, and the second inequality uses that $|\passed \setminus
\mathcal{S}| \leq |[T] \setminus \mathcal{S}| \leq k$.
We proceed to prove \eqref{eqn:passedgradientsbound}. During the first
$k$ rounds, $\min \gradlist_t = 0$, so
\eqref{eqn:passedgradientsbound} is trivially satisfied. In all later
rounds, $\gradlist_t \subseteq \{\|\grad_s\|_* : s \leq t\}
\subseteq \{\|\grad_s\|_* : s \leq T\}$. Consequently, $\gradlist_t$
must contain at least one element $\|\grad_s\|_*$ with $s \in \mathcal{S}$,
because $T - |\mathcal{S}| \leq k$ and $|\gradlist_t| = k+1 > k$.
It follows that $\min \gradlist_t \leq G(\mathcal{S})$, so all passed
gradients satisfy \eqref{eqn:passedgradientsbound}.
Let $G_\text{min} = \min \{\|\grad_t\|_* \mid t \in \filtered\} > 0$ be
the length of the shortest filtered gradient. To complete the proof,
we will show that
\[
\sum_{t \in \mathcal{S} \cap \filtered} (\w_t - \u)^\top \grad_t
\leq D(\u,\mathcal{S}) \!\sum_{t \in \mathcal{S} \cap \filtered} \!\!\|\grad_t\|_*
\leq D(\u,\mathcal{S}) \hspace{-2em}\sum_{\substack{t \in \filtered \\ G_\text{min} \leq \|\grad_t\|_* \leq G(\mathcal{S})}} \hspace{-2em} \|\grad_t\|_*
\leq 2D(\u,\mathcal{S}) G(\mathcal{S}) (k+1).
\]
The first of these inequalities follows from H\"older's inequality, and the second
from the definition of $G(\mathcal{S})$. To establish the last inequality, we
proceed by induction: since $\gradlist_t$ contains the $k+1$ largest observed
gradient norms, we observe that there can be at most $k+1$ filtered rounds in
which $G(\mathcal{S})/2^{i+1} < \|\grad_t\|_* \leq G(\mathcal{S})/2^i$, because after $k+1$
such rounds we will have $\min \gradlist_t > G(\mathcal{S})/2^{i+1}$ forever. It
follows that we have the following induction step:
\[
\hspace{-2em}\sum_{\substack{t \in \filtered \\ G_\text{min} \leq
\|\grad_t\|_* \leq G(\mathcal{S})/2^i}} \hspace{-2em} \|\grad_t\|_*
\leq (k+1) G(\mathcal{S})/2^i \quad +
\hspace{-2em}\sum_{\substack{t \in \filtered \\ G_\text{min} \leq
\|\grad_t\|_* \leq G(\mathcal{S})/2^{i+1}}} \hspace{-2em} \|\grad_t\|_*
\qquad \text{for $i=0,1,2,\ldots$}
\]
Unrolling the induction, we therefore obtain
\[
\hspace{-2em}\sum_{\substack{t \in \filtered \\ G_\text{min} \leq \|\grad_t\|_* \leq G(\mathcal{S})}} \hspace{-2em} \|\grad_t\|_*
\leq (k+1) G(\mathcal{S}) \!\!\sum_{i=0}^{\ceil{\log_2
\frac{G(\mathcal{S})}{G_{\text{min}}}}} \!\!2^{-i}
\leq (k+1) G(\mathcal{S}) \sum_{i=0}^\infty 2^{-i}
= 2 (k+1) G(\mathcal{S}),
\]
which is what remained to be shown.
\end{proof}
As for the run-time, one may maintain the $k+1$ largest gradient norms encountered in a priority queue. The time used by Algorithm~\ref{alg:filterok} on top of ALG is $O(\ln k) \le O(\ln T)$ per round. This may be pessimistic in practice, as FILTER only performs work if the current gradient is among the $k+1$ largest seen so far.
\subsubsection{Examples}\label{sec:examples}
To make the result from Theorem~\ref{thm:okbound} more concrete, let us
instantiate ALG as online gradient descent (OGD), which starts from any
$\w_1 \in \mathcal{W}$ and updates according to
\[
\w_{t+1} = \Pi_\mathcal{W}(\w_t - \eta_t \grad_t),
\]
where $\Pi_\mathcal{W}(\w)$ denotes Euclidean projection of $\w$ onto
$\mathcal{W}$, and $\eta_t > 0$ is a hyperparameter called the step size.
Tuning the step size for general convex losses, we find that we can
tolerate at most $k = O(\sqrt{T})$ outliers without suffering in the
rate:
\begin{corollary}[General Convex Losses]\label{cor:generalconvex}
Let $\Norm{\cdot}$ be the $\ell_2$-norm, let ALG be OGD with step size
$\eta_t = D/\sqrt{2 \sum_{s=1}^t \|\grad_s\|_2^2}$ and let FILTER be
Algorithm~\ref{alg:filterok} with parameter $k$. Then the robust regret is
bounded by
\begin{equation}\label{eqn:generalconvexlosses}
\begin{split}
R_T(\u,\mathcal{S})
&\leq 2 D \sqrt{\sum_{t \in \mathcal{S}} \|\grad_t\|_2^2}
+ 2D G(\mathcal{S}) \Big(2k + \sqrt{k} + 2\Big)\\
&\leq 2 D G(\mathcal{S}) \Big(\sqrt{T} + 2k + \sqrt{k} + 2\Big)
\qquad
\text{for any $\mathcal{S}: T - |\mathcal{S}| \leq k$ and $\u \in \mathcal{W}$.}
\end{split}
\end{equation}
\end{corollary}
The proof of the corollary is in Appendix~\ref{app:exampleproofs}.
The step size of OGD may also be tuned for \emph{$\sigma$-strongly
convex losses}, which are guaranteed to be curved in all directions, and
satisfy the requirement that
\[
f_t(\u) \geq f_t(\w) + (\u - \w)^\top \nabla f_t(\w) +
\frac{\sigma}{2} \|\u - \w\|_2^2
\qquad \text{for all $\u,\w \in \mathcal{W}$.}
\]
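As a quick numerical sanity check (not from the paper), the following snippet verifies that the squared loss $f(\w) = \frac{\sigma}{2}\|\w - a\|_2^2$ satisfies the inequality above, in fact with equality, since its Hessian is exactly $\sigma I$.

```python
import numpy as np

# Numerical check that f(w) = (sigma/2) * ||w - a||^2 satisfies the
# sigma-strong-convexity inequality -- with equality, as its Hessian
# is sigma * I. The choice of f, sigma and a is illustrative.
rng = np.random.default_rng(0)
sigma = 1.5
a = rng.normal(size=3)
f = lambda w: 0.5 * sigma * np.sum((w - a) ** 2)
grad_f = lambda w: sigma * (w - a)
for _ in range(100):
    u, w = rng.normal(size=3), rng.normal(size=3)
    lhs = f(u)
    rhs = f(w) + (u - w) @ grad_f(w) + 0.5 * sigma * np.sum((u - w) ** 2)
    assert abs(lhs - rhs) < 1e-9
```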
In this case, we obtain the following guarantee on the robust regret, which is proved in Appendix~\ref{app:exampleproofs}:
\begin{corollary}[Strongly Convex Losses]
\label{cor:stronglyconvex}
Suppose the loss functions $f_t$ are $\sigma$-strongly convex. Let
$\Norm{\cdot}$ be the $\ell_2$-norm, let ALG be OGD with step size $\eta_t =
\frac{1}{\sigma t}$ and let FILTER be Algorithm~\ref{alg:filterok} with
parameter $k$. Then the robust regret is bounded by
\begin{equation}
R_T(\u,\mathcal{S})
\leq \frac{2 G(\mathcal{S})^2}{\sigma} \big(\ln T + 1\big)
+ \frac{5\widetilde{G}(\u,\mathcal{S})^2}{2\sigma}(k+1)
\qquad
\text{for any $\mathcal{S}: T - |\mathcal{S}| \leq k$ and $\u \in \mathcal{W}$,}
\end{equation}
where $\widetilde{G}(\u,\mathcal{S}) = 2 G(\mathcal{S}) + \max_{t : \|\grad_t\|_2 \leq 2G(\mathcal{S})} \|\nabla
f_t(\u)\|_2$.
\end{corollary}
The standard regret bound of OGD for strongly convex losses is of order
$\frac{G^2}{\sigma} \log T$ \citep{HazanAgarwalKale2007}, so in this
case we can tolerate $k = O(\log T)$ outliers without suffering in the
rate, under the additional assumption that $\widetilde{G}(\u,\mathcal{S}) =
O(G(\mathcal{S}))$. This seems like a reasonable assumption if we think of the
condition $\|\grad_t\|_2 \leq 2 G(\mathcal{S})$ as expressing that round $t$ is
not too extreme.
\paragraph{Huber $\epsilon$-Contamination}
As a final example, we consider the Huber $\epsilon$-contamination
setting \citep{Huber}. In this case losses are of the form $f_t(\w) =
f(\w,\xi_t)$, where the random variables $\xi_t$ are sampled i.i.d.\
from a mixture distribution $P_\epsilon$ defined by
\[
\xi \sim \begin{cases}
P & \text{if $M = 0$}\\
Q & \text{if $M = 1$}
\end{cases}
\qquad \text{where} \qquad M \sim \Bernoullidist(\epsilon)
\]
for some $\epsilon \in [0,1)$. The interpretation is that $P$ is the
actual distribution of interest, which is contaminated by outliers drawn
from $Q$. The hidden variable $M$ is not observed by the learner, so it
is not known which observations are outliers. Let $\mathcal{S}^* \subseteq [T]$
denote the set of inlier rounds in which $M_t = 0$. Then the robust
regret $R_T(\u,\mathcal{S}^*)$ may be viewed as the ordinary regret on a modified
loss function $\tilde f(\w,M,\xi)$ that is equal to $f(\w,\xi)$ on
samples from $P$ but zero on samples from~$Q$, i.e.\ $\tilde f(\w,M,\xi)
= \mathbf{1}\{M = 0\}f(\w,\xi)$ and
\begin{equation}\label{eqn:robustasregularregret}
R_T(\u,\mathcal{S}^*) = \sum_{t=1}^T \del*{\tilde f(\w_t,M_t,\xi_t) - \tilde
f(\u,M_t,\xi_t)}.
\end{equation}
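The sampling model can be sketched as follows; `sample_P` and `sample_Q` are hypothetical stand-ins for the inlier and contamination distributions, which the setting leaves abstract.

```python
import numpy as np

def sample_huber(T, eps, sample_P, sample_Q, rng):
    """Draw T observations from the Huber mixture P_eps (a sketch).

    Returns the samples and the set S* of inlier rounds (those with
    M_t = 0). sample_P and sample_Q are hypothetical stand-ins for the
    inlier and contamination distributions.
    """
    xs, inliers = [], set()
    for t in range(T):
        if rng.random() < eps:  # M_t = 1: draw from the contamination Q
            xs.append(sample_Q(rng))
        else:                   # M_t = 0: draw from the true distribution P
            xs.append(sample_P(rng))
            inliers.add(t)
    return xs, inliers
```

With contamination level $\epsilon$, the inlier fraction concentrates around $1-\epsilon$, while the learner never observes which rounds are contaminated.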
Let the risk with respect to the inlier distribution $P$ be defined as
\[
\risk_P(\w) = \ex_{\xi \sim P} \sbr*{f(\w,\xi)}.
\]
Then, applying online-to-batch conversion
\citep{CesaBianchiConconiGentile2004} to the modified loss $\tilde f$,
we obtain the following result, which bounds the excess risk under $P$
by the robust regret when the observations are drawn from the
contaminated mixture $P_\epsilon$, without requiring any assumptions
about the outliers coming from~$Q$:
\begin{lemma}[Huber $\epsilon$-Contamination]\label{lem:huber}
Suppose the losses $f_t$ are i.i.d.\ according to the mixture
distribution $P_\epsilon$, and let $\u_P \in \argmin_{\w \in \mathcal{W}}
\risk_P(\w)$ be the optimal parameters for the distribution of the
inliers. Let $\bar \w_T = \frac{1}{T} \sum_{t=1}^T \w_t$, where
$\w_1,\ldots,\w_T$ are the predictions of the learner. Then
\begin{equation}\label{eqn:huberExp}
\ex_{P_\epsilon} \sbr*{\risk_P(\bar \w_T) - \risk_P(\u_P)}
\leq \frac{\ex_{P_\epsilon} \sbr*{R_T(\u_P,\mathcal{S}^*)}}{(1-\epsilon)T}.
\end{equation}
Moreover, if $|f(\w,\xi) - f(\u_P,\xi)| \leq B$ almost surely when $\xi \sim
P$ is an inlier, then for any $0 < \delta \leq 1$
\begin{equation}\label{eqn:huberProb}
\risk_P(\bar \w_T) - \risk_P(\u_P)
\leq \frac{R_T(\u_P,\mathcal{S}^*)}{(1-\epsilon)T}
+ \frac{2B}{1-\epsilon} \sqrt{\frac{2}{T} \ln \frac{1}{\delta}}
\end{equation}
with $P_\epsilon$-probability at least $1-\delta$.
\end{lemma}
(Details of the proof are given in Appendix~\ref{app:exampleproofs}.) We
see that, if we can control the robust regret with respect to the
unknown set $\mathcal{S}^*$ of inlier rounds, then we can also control the excess
risk with respect to the inlier distribution $P$. For example,
instantiating the learner as in Corollary~\ref{cor:generalconvex}
leads to the following specialization of Lemma~\ref{lem:huber}.
\begin{corollary}\label{cor:hubercor}
In the setting of Lemma~\ref{lem:huber}, suppose that $\|\nabla
f(\w,\xi)\| \leq G$ for all $\w \in \mathcal{W}$ almost surely when $\xi
\sim P$ is an inlier, and that $\epsilon \leq 1/2$. Let the
learner be instantiated as in Corollary~\ref{cor:generalconvex} with
$k = \ceil{\epsilon T + \sqrt{2 T \epsilon (1-\epsilon) \ln(2/\delta)}
+ \tfrac{1}{3}(1-\epsilon) \ln(2/\delta)}$ for any $0 <
\delta \leq 1$. Then
\begin{equation}\label{eqn:huberapplied}
\risk_P(\bar \w_T) - \risk_P(\u_P)
\leq
12 D G \epsilon
+ \frac{2DG \big(5 \sqrt{2\ln(2/\delta)}+2\big)}{\sqrt{T}}
+ \frac{2 D G \big(\ln(2/\delta) + 10\big)}{T}
\end{equation}
with $P_\epsilon$-probability at least $1-\delta$.
\end{corollary}
Here $\nabla f(\w,\xi)$ should be read as the gradient of $f(\w,\xi)$
with respect to~$\w$. The constant dependence on $DG\epsilon$, which does
not go to zero with increasing $T$, is unavoidable because $P$ is
non-identifiable based on samples from $P_\epsilon$. For instance,
consider the linear loss $f(w,\xi) = \xi w$ with $\mathcal{W} = [-D/2,+D/2]$
and $P_\epsilon$ such that $\xi = -G$ and $\xi = +G$ both with
probability $\epsilon$, and $\xi = 0$ with probability $1-2\epsilon$.
Then we cannot distinguish the case that $P = P_\epsilon(\cdot \mid \xi
\leq 0)$ and $Q$ is a point-mass on $+G$ from the case that
$P=P_\epsilon(\cdot \mid \xi \geq 0)$ with $Q$ a point-mass on $-G$. No
matter what the output of the learner is, its excess risk under $P$ will
always be at least $DG\epsilon$ in one of these two cases.
The proof of Corollary~\ref{cor:hubercor} is postponed to
Appendix~\ref{app:exampleproofs}. It is a straightforward combination of
Lemma~\ref{lem:huber} and Corollary~\ref{cor:generalconvex}, with the
only point of attention being the tuning of the number of outliers~$k$.
In expectation, the number of outliers is $\epsilon T$, but we choose
$k$ slightly larger so that the probability that the number of outliers
exceeds $k$ is negligible.
\subsection{Lower Bounds}
We now show that the bounds obtained in the previous part of this section cannot
be improved in general. First note that one can always choose $\mathcal{S} =
[T]$ (no outliers) and apply a standard lower bound for online learning
algorithms which guarantees expected regret $\Omega(\sqrt{T})$ for general
losses and $\Omega(\ln T)$ for strongly-convex losses. This matches the first term in the
bound of Theorem \ref{thm:okbound}. Therefore, we will only
show a bound $\Omega(k)$, which, combined with the standard one, leads to a
$\Omega(\max\{\sqrt{T},k\}) = \Omega(\sqrt{T} + k)$ lower bound on the regret
for general convex losses and $\Omega(\ln T + k)$ for strongly convex losses.
Consider a learning task over domain $\mathcal{W} = [-W,W]$ for some $W > 0$.
To prove a lower bound for general convex losses, we choose the loss sequence
to be $f_t(w) = G\xi_t w$, where $\xi_t \in \{-1,+1\}$ are i.i.d.\ Rademacher
random variables with $\Pr(\xi_t = -1) = \Pr(\xi_t = +1) = \frac 12$, while $G > 0$
controls the size of the gradients/losses.
\begin{theorem}[Lower Bound with I.I.D.\ Losses]
\label{thm:lower_bound_1}
For any $k$ and any online learning algorithm run on the sequence defined above, there
exist adversarial choices of $\mathcal{S}$ with $T-|\mathcal{S}| \le k$ and $u \in \mathcal{W}$ such that
\[
\ex_{f_1,\ldots,f_T} \sbr*{ R_T(u, \mathcal{S}) } \geq \frac{D G(\mathcal{S}) k}{4},
\]
where $f_1,\ldots,f_T$ are i.i.d.\ as described above.
\end{theorem}
\begin{proof}
Let $S_1 = \{t \in [k] \colon \xi_t = 1\}$ and $S_{-1} = \{t \in [k] \colon \xi_t = -1\}$. The
adversary will choose $u = -W \zeta$ and $\mathcal{S} = S_{\zeta} \cup \{k+1,\ldots,T\}$,
where $\zeta \in \{-1,1\}$ is
a Rademacher random variable independent of $\xi_1,\ldots,\xi_T$. The expected regret
jointly over $\xi_1,\ldots,\xi_T,\zeta$ is then given by
\begin{align*}
\ex \sbr*{ R_T(u, \mathcal{S}) } &=
\ex \sbr*{ G \sum_{t=1}^T \mathbf 1_{t \in \mathcal{S}} w_t \xi_t - G \sum_{t=1}^T
\mathbf 1_{t \in \mathcal{S}} u \xi_t} \\
&=G \sum_{t=1}^k \underbrace{\ex \sbr*{\mathbf 1_{\zeta = \xi_t} w_t \xi_t }}_{=0}
+ G \sum_{t=k+1}^T \underbrace{\ex \sbr*{w_t \xi_t}}_{=0}
+ GW \sum_{t=1}^k \underbrace{\ex \sbr*{\mathbf 1_{\zeta = \xi_t} \zeta \xi_t }}_{=1/2}
+ GW \sum_{t=k+1}^T \underbrace{\ex \sbr*{\zeta \xi_t}}_{=0} \\
&= GW \frac{k}{2},
\end{align*}
where we used the independence of $\xi_t$ and $\zeta$ in the second and the fourth sum,
while
\[
\ex \sbr*{\mathbf 1_{\zeta = \xi_t} w_t \xi_t }
= \ex \sbr*{\; \ex \sbr*{\mathbf 1_{\zeta = \xi_t} w_t \xi_t \, |\, \xi_t}\;}
= \ex \sbr*{w_t \xi_t/ 2} = 0, \quad \text{and} \;
\ex \sbr*{\mathbf 1_{\zeta = \xi_t} \zeta \xi_t } = \ex \sbr*{\mathbf 1_{\zeta = \xi_t}} = \frac{1}{2}.
\]
As the bound holds for the random choice of $\zeta$ it also holds for the worst-case choice
of $\zeta$. The theorem now follows from $D = \max_{u,w \in \mathcal{W}} |w - u| = 2 W$ and $G(\mathcal{S}) = \max_{t \in \mathcal{S}} |g_t| = \max_{t \in \mathcal{S}} G |\xi_t| = G$.
\end{proof}
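The expectation computation in the proof can be sanity-checked by Monte Carlo simulation. For simplicity we fix the learner to play $w_t = 0$, in which case the first two sums vanish identically and the calculation predicts $\ex[R_T(u,\mathcal{S})] = GWk/2$ exactly; the simulation below is an illustration, not part of the paper.

```python
import numpy as np

# Monte-Carlo check of the expected-regret computation in the proof,
# for the fixed learner w_t = 0 (then E[R_T(u, S)] = G * W * k / 2).
rng = np.random.default_rng(0)
G, W, T, k, reps = 1.0, 1.0, 20, 6, 20000
total = 0.0
for _ in range(reps):
    xi = rng.choice([-1, 1], size=T)              # Rademacher losses
    zeta = rng.choice([-1, 1])                    # adversary's coin
    u = -W * zeta                                 # comparator
    S = [t for t in range(k) if xi[t] == zeta] + list(range(k, T))
    # losses f_t(w) = G * xi_t * w; with w_t = 0 the learner term vanishes
    regret = sum(-G * xi[t] * u for t in S)
    total += regret
est = total / reps
assert abs(est - G * W * k / 2) < 0.2
```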
A similar bounding technique leads to a lower bound for $\sigma$-strongly convex losses,
except that the distribution of the losses differs between the first $k$ rounds
and the later rounds. This still implies a lower bound for adversarially
generated data, but not for i.i.d.\ losses. In this case, we will choose the domain
$\mathcal{W} = [-W,W]$, the
loss sequence based on the $\sigma$-strongly convex squared loss, $f_t(w) = \frac{\sigma}{2} (w - W \xi_t)^2$, for $t \le
k$, and $f_t(w) = \frac{\sigma}{2} (w-W\zeta)^2$ for $t > k$, where $\xi_1,\ldots,\xi_k$ and
$\zeta$ are again i.i.d.\ Rademacher variables.
\begin{theorem}[Lower Bound for Strongly Convex Losses]
For any $k$ and any online learning algorithm, there exist adversarial choices of $\mathcal{S}$ with $T-|\mathcal{S}| \le k$ and $u \in \mathcal{W}$ such that
\[
\ex_{f_1,\ldots,f_T} \sbr*{ R_T(u, \mathcal{S}) } \geq \frac{G^2(\mathcal{S}) k}{16 \sigma},
\]
where $f_1,\ldots,f_T$ are the
$\sigma$-strongly convex losses described above.
\end{theorem}
\begin{proof}
Using the same notation as in the proof of Theorem~\ref{thm:lower_bound_1}, the adversary will choose $u = W \zeta$
and $\mathcal{S} = S_{\zeta} \cup \{k+1,\ldots,T\}$.
The expected regret
jointly over $\xi_1,\ldots,\xi_k,\zeta$ is given by
\begin{align*}
\ex \sbr*{ R_T(u, \mathcal{S}) } &=
\frac{\sigma}{2} \sum_{t=1}^k \underbrace{\ex \sbr*{\mathbf 1_{\zeta = \xi_t} (w_t - W\xi_t)^2}}_{\ge W^2/2}
+ \frac{\sigma}{2} \sum_{t=k+1}^T \underbrace{\ex \sbr*{(w_t - W\zeta)^2}}_{\ge 0} \\
&\quad - \frac{\sigma}{2} \sum_{t=1}^k \underbrace{\ex \sbr*{\mathbf 1_{\zeta = \xi_t} (W\zeta - W\xi_t)^2}}_{=0}
\ge \frac{\sigma W^2 k}{4},
\end{align*}
where to bound the first sum we used
\begin{align*}
\ex \sbr*{\mathbf 1_{\zeta = \xi_t} (w_t - W\xi_t)^2 }
&= \ex \sbr*{\; \ex \sbr*{\mathbf 1_{\zeta = \xi_t} (w_t - W\xi_t)^2 \, |\, \xi_t}\;}
= \ex \sbr*{(w_t - W\xi_t)^2 / 2} \\
&= \ex \sbr*{w_t^2/2 - W\xi_t w_t + W^2/2} = \ex \sbr*{w_t^2}/2 + W^2/2 \ge W^2/2.
\end{align*}
To finish the proof note that
$|\nabla f_t(w_t)| = \sigma|w_t - W\xi_t| \le 2 \sigma W$ for $t \le k$ (and
similarly with $\zeta$ in place of $\xi_t$ for $t > k$), so that
$G(\mathcal{S}) \le 2 \sigma W$.
\end{proof}
\section{Robustness for Quantiles}
\label{sec:quantileMethod}
In this section we consider robust online linear optimization in the stochastic i.i.d.\ setting. That is, we consider i.i.d.\ gradients $\grad_t \sim \pr$ that are in particular independent of the learner's prediction $\w_t$. Let $G_p \df q_p(\norm{\grad}_*)$ be the $p$-quantile of the gradient in dual norm $\norm{\cdot}_*$. To keep things simple, we will assume that $\pr$ does not have an atom at $G_p$, so that $\pr\set{\norm{\grad}_* \le G_p} = p$ exactly. We call a gradient $\grad_t$ an \markdef{outlier} if $\norm{\grad_t}_* > G_p$.
Fix a domain $\mathcal{W}$ of diameter $D$ in the norm $\norm{\cdot}$. We are interested in algorithms that know $\mathcal{W}$ and $p$ but not $G_p$ and play $\w_t \in \mathcal{W}$; we aim to bound their expected robust regret on the (random!) set of inliers $\mathcal{S} = \set{t \in [T] : \norm{\grad_t}_* \le G_p}$. That is, we aim to control
\begin{equation}\label{eq:exp.rob.reg}
\bar R_T
~\df~
\ex \sbr*{
\max_{\u \in \mathcal{W}}
R_T(\u, \mathcal{S})
}
~=~
\ex \sbr*{
\max_{\u \in \mathcal{W}}
\sum_{t \in [T] : \norm{\grad_t}_* \le G_p} \tuple{\w_t - \u, \grad_t}
}
.
\end{equation}
Note that a bound on the expected robust regret implies a robust pseudo-regret bound, where the data-dependent maximum is replaced by the fixed minimiser of the expected loss on inliers, i.e.\ $\u^* \in \arg\min_{\u \in \mathcal{W}} \u^\top \ex\sbrc{\grad_t\,}{\,\norm{\grad_t}_* \le G_p}$.
Our FILTER algorithm for the stochastic setting is shown as Algorithm~\ref{alg:filtermetaAlgorithmQuantile}. The main idea is that it only passes rounds to the base ALG for which it is virtually certain that they are inliers. To this end our FILTER computes a lower confidence bound $\LCB_t$ on the quantile $G_p$. Smaller gradients are included, while larger ones are discarded. The crux of the robust regret bound proof is then dealing with the inlier gradients that end up being dropped. We will find it instructive to state our algorithms and confidence bounds with a free confidence parameter $\delta$. Tuning our approach will then lead us to set $\delta=T^{-2}$.
\begin{algorithm2e}[htb]
\KwIn{Quantile level $p \in (0,1)$, confidence $\delta$, online learner ALG}
\For{$t = 1,2,\ldots$}{
Have ALG produce $\w_t$. Receive gradient $\grad_t$\;
Let $\hat q_{t-1}$ be the empirical quantile function of past gradients $\grad_1, \ldots, \grad_{t-1}$. \;
Compute $\LCB_{t-1} = \hat q_{t-1}(p - u_{t-1})$ at threshold $u_{t-1} = \sqrt{t^{-1} 2 p(1-p) \ln \frac{1}{\delta}}
+ \frac{1}{3} t^{-1} \ln \frac{1}{\delta}$ \;
\eIf{$\norm{\grad_t}_* \le \LCB_{t-1}$}{
Pass round $t$ on to ALG\;
}{
Ignore round $t$\;
}
}
\caption{Filtering meta algorithm for Robust Quantile Regret}
\label{alg:filtermetaAlgorithmQuantile}
\end{algorithm2e}
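The filtering loop can be sketched as follows. The empirical quantile $\hat q_{t-1}$ is instantiated with NumPy's default interpolating quantile (one concrete choice among several), and round $1$, where no past gradients exist, is conservatively discarded; both are implementation assumptions on top of the pseudocode above.

```python
import numpy as np

def quantile_filter(norms, p, delta):
    """Replays Algorithm filtermetaAlgorithmQuantile on a sequence of
    gradient dual norms (illustrative sketch). Returns the 1-indexed
    rounds passed to ALG.

    LCB_{t-1} is the empirical (p - u_{t-1})-quantile of past norms, with
    u_{t-1} = sqrt(2 p (1-p) ln(1/delta) / t) + ln(1/delta) / (3 t).
    """
    passed, past = [], []
    for t, g in enumerate(norms, start=1):
        if past:
            u = np.sqrt(2 * p * (1 - p) * np.log(1 / delta) / t) \
                + np.log(1 / delta) / (3 * t)
            level = p - u
            # Until level > 0 the confidence band is vacuous: discard.
            lcb = np.quantile(past, level) if level > 0 else -np.inf
            if g <= lcb:
                passed.append(t)
        past.append(g)
    return passed
```

On a stream of bounded norms with a few planted extreme values, the filter lets many inlier rounds through while never passing an outlier.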
We now show that the expected robust regret is small.
\begin{theorem}
\label{Thm:quantilebound}
Let ALG have an individual-sequence regret bound $B_T(G)$ for $T$ rounds with gradients of dual norm at most $G$, where $B_T(G)$ is concave in $T$. Let $D$ be the diameter of the domain. Then the FILTER Meta-Algorithm~\ref{alg:filtermetaAlgorithmQuantile} with $\delta=T^{-2}$ has expected robust regret bounded by
\[
\bar R_T
~\le~
B_{p T}(G_p)
+
D G_p \del*{
4 \sqrt{2 p (1-p) T \ln T}
+ \frac{13}{3} (\ln T)^2
+ 3
}
.
\]
\end{theorem}
If ALG does its job, the first term is the minimax optimal regret for the
case in which the outlier rounds are known in advance. The other terms
quantify the cost of being robust. When $p$ is not extreme, this cost is
of order $G_p D \sqrt{T \ln T}$, making it the dominant term overall (it
exceeds the minimax regret by a mild logarithmic factor). When $p$ tends to $1$ or $0$, the robustness overhead gracefully reduces to the $(\ln T)^2$ regime.
\medskip
The proof can be found in Appendix~\ref{appx:proof.quantiles}. The main ideas are as follows. As we have no control over outlier gradients (they may be astronomical), we must assume that ALG gets confused without recourse if FILTER ever passes it any outlier. Note that FILTER is not evaluated on outlier rounds, so it does not suffer from this gradient's \emph{magnitude}. But its effect is that, for all we know, ALG is rendered forever useless, upon which FILTER may incur the maximum possible regret of $G_p D T$. Our approach will be to choose our threshold for inclusion conservatively, and to apply concentration in all rounds simultaneously, to ensure this bad event is rare (this is the source of the $\ln T$ factor). A second concentration allows us to deal with the discarded inliers.
\medskip
\noindent
We conclude the section with a selection of remarks.
\noindent
\textbf{Examples} The examples of Section~\ref{sec:examples} also apply here. Depending on the setting, and hence the appropriate base algorithm ALG, the dominant regret term can be either the $D G_p \sqrt{p (1-p) T \ln T}$ term, or the $B_{p T}(G_p)$ term. The former case applies for OGD, while the latter case happens in the $K$-experts setting with many experts and few rounds, i.e.\ $K \gg T$. There adding robustness comes essentially for free.
\noindent
\textbf{Anytime Robust Regret} As stated, the algorithm needs to know the horizon $T$ up front to set the confidence parameter $\delta$ in the deviation width $u_t$. We can use a standard doubling trick on $T$ to get an anytime algorithm.
\noindent
\textbf{Anytime concentration} One may wonder how much the analysis can be improved by replacing our union bound over time steps with a time-uniform Bernstein concentration inequality, as e.g.\ developed by \cite{howard2019sequential}. Sadly, the best we can hope for is to be able to use $\delta = \frac{1}{T}$, which would lead to a constant factor $\sqrt{2}$ improvement on the dominant term. We cannot tolerate a higher overall failure probability, because we must still control the regret in the failure event, which may be of order $T$.
\noindent
\textbf{High Probability Version} Going into the proof, we see that a high probability robust regret bound is also possible. We would need to change the analysis of $P^{(2)}$, as we currently analyse it in expectation. Observing that it is a sum of $T$ conditionally independent increments, we may use martingale concentration to find that, with probability at least $1-T^{-1}$, this sum is at most its mean (which features in the expected regret bound) plus a deviation of order $\sqrt{T \ln T}$. We obtain a high-probability analogue of Theorem~\ref{Thm:quantilebound} with slightly inflated constant.
\noindent
\textbf{Large-Feature-Vectors-as-Outliers} We may also deal with
non-i.i.d.\ gradients using exactly the same techniques developed above,
as follows. We assume that $f_t(\w) = h_t(\w^\top \X_t)$, where
$\X_t \in \mathbb R^d$ is a feature vector available at the beginning of
round $t$, and $h_t$ is a scalar Lipschitz convex loss function, revealed at the end of round $t$. This setting includes e.g.\ linear classification with hinge or logistic loss. Upon assuming that feature vectors $\X_1, \X_2, \ldots$ are drawn i.i.d.\ from $\pr$ (while the $h_t$ are arbitrary, possibly adversarially chosen), we can take the $p$-quantile $X_p \df q_p(\norm{\X}_*)$ of the dual norm of the feature vectors. We may then measure the robust expected regret \eqref{eq:exp.rob.reg} on the inlier rounds $\mathcal{S} = \set*{t \in [T] : \norm{\X_t}_* \le X_p}$, and obtain the analogue of Theorem~\ref{Thm:quantilebound}, where the only subtlety is using the gradient bound on $h_t$ to transfer from inlier $\X_t$ to small loss.
\begin{proposition}\label{prop:scalar.lipschitz.convex}
Consider a joint distribution on sequences of feature vectors and scalar Lipschitz convex functions $(\X_1, h_1), (\X_2, h_2), \ldots$ such that the feature vectors $\X_1, \X_2, \ldots$ are i.i.d.\ with distribution $\pr$ on $\mathbb R^d$.\footnote{We do not constrain the distribution of $h_1,h_2,\ldots$, so we can model adversarial loss functions that are correlated with the feature vectors.} Let $X_p = q_p(\norm{\X}_*)$ be the $p$-quantile of the feature dual norm.
Let ALG be an algorithm for online-convex optimisation over a domain
of diameter $D$ and loss functions $f_t(\w) = h_t(\w^\top \X_t)$ that
guarantees individual-sequence regret bounded by $B_T(X)$ in any
$T$-round interaction with $\norm{\X_t}_* \le X$, without having to
know $X$ up front. Consider FILTER
Meta-Algorithm~\ref{alg:filtermetaAlgorithmQuantile} with $\grad_t$
replaced by $\X_t$. Then the expected robust regret on inlier rounds
$\mathcal{S} = \set{t \in [T] : \norm{\X_t}_* \le X_p}$ is bounded by
\[
\bar R_T
=
\ex \sbr*{
\max_{\u \in \mathcal{W}}
\sum_{t \in \mathcal{S}} \del*{
f_t(\w_t) - f_t(\u)
}
}
\le
B_{p T}(X_p)
+
D X_p \del*{
4 \sqrt{2 p (1-p) T \ln T}
+ \frac{13}{3} (\ln T)^2
+ 3
}
.
\]
\end{proposition}
\begin{proof}
The proof follows that of Theorem~\ref{Thm:quantilebound}, with one extra (standard) step. Namely, to bound the loss on inlier rounds (for the dropped rounds term $P^{(2)}$ in the proof, and the concentration failure term $P^{(3)}$ in the proof), we use convexity, H\"older and bounded derivative to obtain
\[
f_t(\w_t) - f_t(\u^*)
~\le~
h_t(\w_t^\top \X_t) - h_t({\u^*}^\top \X_t )
~\le~
h_t'(\w_t^\top \X_t) (\w_t - \u^*)^\top \X_t
~\le~
D X_p
.
\]
\end{proof}
\paragraph{Online-to-Batch Example}
We now discuss an example where the standard theory for stochastic gradient descent does not apply, but the iterate average of online gradient descent with quantile-based filtering still gives risk convergence guarantees. To keep things simple, we work in the one-dimensional setting with $\mathcal{W}=[-1,+1]$. To stay within the assumptions of Proposition~\ref{prop:scalar.lipschitz.convex}, we take $f_t$ to be the logistic loss $f_t(w) = h_t(w \X_t)$ with $h_t(z) = \ln(1+e^{- y_t z})$ for $y_t \in \set{-1,+1}$. To make things interesting, we take $\X_t \in \mathbb R$ to have a distribution with heavy tails, with $\pr(\abs{\X_t} > x)$ of order $x^{-(1+\gamma)}$ for large enough $x$, for some $\gamma \in (0,1)$. Taking $\gamma > 0$ ensures that the expected loss $\ex[f_t(w)]$ is finite (as $f_t(w) \approx (-\X_t y_t w)_+$ for large $\X_t$), and hence has a bona fide minimiser (which can be in the interior or on the boundary, depending on the details of the distribution). Taking $\gamma < 1$ ensures that the tails are so heavy that $\ex[f_t'(w)^2] = \infty$ (as $f_t'(w) \approx (-y_t \X_t)_+$ for large $\X_t$), and hence standard theory for SGD does not apply. Instead we will use Lemma~\ref{lem:huber} and Proposition~\ref{prop:scalar.lipschitz.convex} to argue that the filtered iterate average $\bar w_T$ approximates the minimiser of the risk $u^*$ in the sense that
\begin{equation}\label{eq:ogdworksb}
\ex\nolimits_{\pr} \sbr*{\risk_{\pr}(\bar w_T) - \risk_{\pr}(u^*)}
~\to~ 0
\quad
\text{as}
\quad
T \to \infty
.
\end{equation}
To bound the risks above, we will decompose $\pr = p P + (1-p) Q$ where $p$ is a quantile level chosen below, $P = \pr \delc[\big]{\cdot}{\norm{\X_t} \le X_p}$ and $Q = \pr \delc[\big]{\cdot}{\norm{\X_t} > X_p}$. We will bound the $\pr$-risks in terms of $P$-risks, then we will use Lemma~\ref{lem:huber} to bound the $P$-risk difference in terms of the robust regret, and we will use Proposition~\ref{prop:scalar.lipschitz.convex} to bound that regret. We will settle on picking $p = 1-\frac{1}{\sqrt{T}}$. This has the effect that the $p$-quantile is $X_p \propto T^{\frac{1}{2(1+\gamma)}}$ (by inverting the tail probability). On the one hand, for any $w \in \mathcal{W}$, the bias, i.e.\ the difference in risk on $P$ (inliers only) and on $\pr$ (full distribution), is at most of order
\begin{align*}
\abs{
\risk_{\pr}(w)
-
p \risk_{P}(w)
}
&~\le~
\ex\nolimits_{\pr} \sbr*{ f_t(w) \mathbf 1_{\abs{\X_t} > X_p}}
~\approx~
\ex\nolimits_{\pr} \sbr*{ (- \X_t y_t w) \mathbf 1_{\abs{\X_t} > X_p}}
\\
&~\le~
\ex\nolimits_{\pr} \sbr*{ \abs{\X_t} \mathbf 1_{\abs{\X_t} > X_p}}
~=~
\int_{X_p}^\infty \pr(\abs{\X_t} > x) \dif x
~\propto~
X_p^{-\gamma}
\propto T^{-\frac{\gamma}{2(1+\gamma)}}
.
\end{align*}
On the other hand, the regret bound for $T$-round online gradient descent with gradient norms bounded by $X$ is $B_T(X) = O(D X \sqrt{T})$. Hence for our choice of $p$, the first term in the bound from Proposition~\ref{prop:scalar.lipschitz.convex} is dominant and of order $X_p \sqrt{T}$. Dividing by $T$ to plug in to Lemma~\ref{lem:huber} results in $\frac{X_p\sqrt{T}}{T} \propto T^{-\frac{\gamma}{2(1+\gamma)}}$. Both contributions (bias and regret) are of the same order and converge to zero, indicating that quantile-filtered online gradient descent achieves \eqref{eq:ogdworksb}.
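The exponent bookkeeping in this example is easy to verify numerically: inverting the tail $\pr(\abs{\X_t} > x) = x^{-(1+\gamma)}$ at level $1-p = 1/\sqrt{T}$ gives $X_p = T^{1/(2(1+\gamma))}$, and both the bias and the averaged regret then scale as $T^{-\gamma/(2(1+\gamma))}$. The snippet below is an illustrative check of this arithmetic, taking the tail bound to hold with equality.

```python
import numpy as np

# Check the exponents in the example: with tail P(|X| > x) = x^{-(1+gamma)}
# and p = 1 - 1/sqrt(T), inverting the tail gives X_p = T^{1/(2(1+gamma))},
# and both the bias X_p^{-gamma} and the averaged regret X_p * sqrt(T) / T
# scale as T^{-gamma/(2(1+gamma))}. The values of gamma and T are arbitrary.
gamma, T = 0.5, 10.0 ** 8
one_minus_p = 1.0 / np.sqrt(T)
# tail inversion: x^{-(1+gamma)} = 1 - p  =>  x = (1-p)^{-1/(1+gamma)}
X_p = one_minus_p ** (-1.0 / (1.0 + gamma))
assert np.isclose(X_p, T ** (1.0 / (2.0 * (1.0 + gamma))))
bias = X_p ** (-gamma)
avg_regret = X_p * np.sqrt(T) / T
assert np.isclose(bias, T ** (-gamma / (2.0 * (1.0 + gamma))))
assert np.isclose(avg_regret, T ** (-gamma / (2.0 * (1.0 + gamma))))
```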
\section{Conclusion and Future Work}
\label{sec:conclusion}
We have shown that the robust regret can be controlled for adversarial data
when there are at most $k$ outliers. A general question that we leave open is
whether it is possible to get a bound for adversarial losses that does not
depend on the number of outliers $k$, but on some other natural property of the
losses. For instance, we may try to incorporate prior knowledge about the size
of the gradients by specifying a prior $\pi$ on gradient norms and bounding the
robust regret in terms of the prior probability $\pi(G(\mathcal{S}))$ of the size of
the inlier gradients. A possible way to approach this might be to introduce
specialist experts for different thresholds $G$ and then aggregate these. This
runs into severe difficulties, however, because we only find out whether a
specialist should be active or not in round~$t$ \emph{after} making our
prediction $\w_t$ and observing $\grad_t$. Moreover, specialists would have
different loss ranges and the robust regret can only depend on the loss range
$G(\mathcal{S})$ of the correct specialist.
We also provided a sublinear bound on the robust regret for i.i.d.\ gradients
when the outliers are defined as rounds in which the gradients exceed their
$p$-quantile, or when they can be bounded in terms of an i.i.d.\ variable
$\X_t$. Alternatively, outliers might be defined as gradients with norms
exceeding their empirical $p$-quantile at the end of $T$ rounds. For i.i.d.\
gradients, the empirical $p$-quantile after $T$ rounds is close to the actual
$p$-quantile with high probability, so this case can be handled by running the
method from Section~\ref{sec:quantileMethod} for a slightly inflated $p$.
However, the empirical quantile formulation continues to make sense even when
gradients are not i.i.d., so it would be interesting to know whether a linear
number of outliers can be tolerated in any such non-i.i.d.\ cases.
\paragraph{Acknowledgments}
Van Erven and Sachs were supported by the Netherlands Organization
for Scientific Research (NWO) under grant number VI.Vidi.192.095. Kot{\l}owski was supported by the Polish National Science Centre under grant No.\ 2016/22/E/ST6/00299.
\bibliographystyle{abbrvnat}
% https://arxiv.org/abs/1303.2599 --- Step by step categorification of the Jones polynomial in Kauffman's version
\subsection*{Abstract}
Given any diagram of a link, we define on the cube of Kauffman's states a ``$2$-complex'' whose homology is an invariant of the associated framed links, and such that the graded Euler characteristic reproduces the unnormalized Kauffman bracket. This includes a categorification of brackets skein relation. Then we incorporate the orientation information and get a further complex on the same cube that gives rise to a new invariant homology for oriented links, so that the graded Euler characteristic reproduces the unnormalized Jones polynomial in Kauffman's version. Finally we clarify the relations between this homology and the original Khovanov homology of oriented links, extending the well known relation between the associated two versions of the Jones polynomial.
\subsection*{Keywords}
Kauffman bracket, Jones polynomial, frame, Khovanov homology, framed links, computable, invariant, oriented links
\section{Introduction}
Kauffman's derivation \cite{Kauffman} of the Jones polynomial of an oriented link starts with the bracket state sum over any diagram of the link (non oriented and equipped with the black-board framing), interpreted as an unnormalized invariant of the framed link; it then modifies the bracket by forgetting the framing and incorporating the orientation information carried by the writhe, eventually obtaining the unnormalized Jones polynomial in Kauffman's version.
The Khovanov homology of an oriented link \cite{Khovanov} is obtained from a complex of graded $\Z$-modules supported by the cube of Kauffman states of any non oriented diagram of the link. The orientation information is incorporated applying some shift. The graded Euler characteristic of this homology reproduces the unnormalized Jones polynomial in Khovanov's version. There is a well known easy relation ($q=-A^{-2}$) between these two versions of the Jones polynomial of an oriented link.
The aim of this paper is to adapt Khovanov's constructions and ideas (mostly following Bar-Natan's algebraic exposition \cite{Bar-Natan}), in order to provide a step by step categorification of Kauffman's derivation. For any diagram of a link (non oriented and equipped with the black-board framing), with the same cube of Kauffman's states as support, we define a new ``$2$-complex'' whose homology is an invariant of the associated framed link. The graded Euler characteristic reproduces the unnormalized Kauffman bracket. This includes a categorification of the skein relations that determine the bracket. Then, by forgetting the framing and incorporating the orientation information, we produce a further complex whose homology is an invariant of the oriented link, so that the graded Euler characteristic reproduces the unnormalized Jones polynomial in Kauffman's version.
Although this homology of oriented links is formally new, the two constructions are so close to each other that one expects a strict relation with the original Khovanov homology, extending the relation between the two versions of the Jones polynomial. These relations are completely clarified at the end of the paper. We note that the existing efficient algorithms for computing Khovanov homology should be easily adapted to compute the homologies defined in this paper (in particular the framed-link one). We conclude the paper by showing some examples and proving that our homology of framed links is strictly stronger than the Kauffman bracket: there exist framed links (in particular, we exhibit framed knots) with the same Kauffman bracket but different homology.
Khovanov's categorification is based on Khovanov's version of the (unnormalized) Jones polynomial, in particular on its expression as a sum over the Kauffman states. Khovanov's version lends itself to a categorification because, by the second equation of the skein relations, each summand of the Jones polynomial carries a power of the polynomial $q+q^{-1}$, and it is easy to find a graded abelian group whose graded dimension is $q+q^{-1}$. Kauffman's version is less convenient, since the polynomial $-A^2 - A^{-2}$ appears in place of $q+q^{-1}$, and no graded abelian group can have graded dimension $-A^2-A^{-2}$. One of the problems we faced in our construction was precisely how to put the minus sign inside the powers of $A^2+A^{-2}$. As we said, at the end of this paper we show the connection between the classical Khovanov homology and our invariant of oriented links. This result says that Khovanov homology generalizes our construction, which is not surprising, since our idea for obtaining a correct proof of the invariance was to merge the even components and the odd ones of the complex.
\section{Preliminary algebraic definitions}
\begin{defn}
Given $i\in\{0,1\}$ we denote with $\underline{i}$ the element of $\{0,1\}$ different from $i$.
\end{defn}
\begin{defn}
Let $\mathcal{C}$ be a preadditive category. $2\!-\!\mathcal{C}_*(\mathcal{C})$ is the category defined as follows:
\begin{itemize}
\item{\textit{objects}: pairs of objects of $\mathcal{C}$, $(X_0,X_1)$, together with two arrows of $\mathcal{C}$, $\partial_0 : X_0 \rightarrow X_1$ and $\partial_1 : X_1 \rightarrow X_0$, such that $\partial_1 \circ \partial_0 =0$ and $\partial_0 \circ \partial_1= 0$. These maps are called \emph{differentials} or \emph{boundary homomorphisms}, of degree $0$ and $1$ respectively. We write $X=(X_0 ,X_1 , \partial_0, \partial_1)$. These objects are said to be $2$-\emph{complexes} in $\mathcal{C}$.}
\item{\textit{morphisms}: pairs of morphisms of $\mathcal{C}$, $(f_0: X_0 \rightarrow Y_0 , f_1: X_1 \rightarrow Y_1)$, which commute with the differentials: $f_{\underline{i}} \circ \partial_i = \partial_i \circ f_i$ for each $i \in\{0,1 \}$.}
\item{\textit{identities}: pairs of identities of $\mathcal{C}$.}
\item{\textit{composition}: componentwise.}
\end{itemize}
\end{defn}
We are interested in $2\!-\!\mathcal{C}_*(\mathcal{G}\textit{r}\mathcal{A}\textit{b})$ and $2\!-\!\mathcal{C}_*(2\!-\!\mathcal{C}_*(\mathcal{G}\textit{r}\mathcal{A}\textit{b}))$, where $\mathcal{G}\textit{r}\mathcal{A}\textit{b}$ is the category of graded abelian groups and morphisms of graded abelian groups of degree $0$.
\begin{defn}
A $2$-\emph{subcomplex} of a $2$-complex of graded abelian groups $A=(A_0, A_1, \partial_0, \partial_1)$ is a $2$-complex $A'=(A'_0 , A'_1, \partial'_0, \partial'_1)$ such that for each $i\in\{0,1\}$ $A'_i$ is a graded subgroup of $A_i$ and $\partial'_i$ is obtained by restriction of $\partial_i$.
\end{defn}
\begin{defn}
Let $A'=(A'_0 , A'_1, \partial'_0, \partial'_1)$ be a $2$-subcomplex of a $2$-complex of graded abelian groups $A=(A_0, A_1, \partial_0, \partial_1)$. The \emph{quotient} of $A$ modulo $A'$ is the $2$-complex $A/A' :=(A_0/A'_0, A_1/A'_1, \bar \partial_0, \bar \partial_1)$, where the differentials are those induced on the quotient groups.
\end{defn}
In the same way we can define subobjects and quotients of $2$-complexes in an arbitrary abelian category, in particular in $2\!-\!\mathcal{C}_*(2\!-\!\mathcal{C}_*(\mathcal{G}\textit{r}\mathcal{A}\textit{b}))$.
\begin{defn}
Given two $2$-complexes in an additive category $\mathcal{C}$, $A=(A_0, A_1, \partial^A_0, \partial^A_1)$ and $B=(B_0, B_1, \partial^B_0, \partial^B_1)$, we define the \emph{direct sum} of $A$ and $B$ as the $2$-complex $A\oplus B :=(A_0\oplus B_0, A_1\oplus B_1, \partial^A_0\oplus \partial^B_0 , \partial^A_1\oplus \partial^B_1)$.
\end{defn}
With these notions the category of $2$-complexes in an additive category becomes additive. If moreover $\mathcal{C}$ is an abelian category, we can define the kernels and the images of morphisms in $2\!-\!\mathcal{C}_*(\mathcal{C})$, obtaining an abelian category.
\begin{defn}
The \emph{homology} of a $2$-complex of graded abelian groups $A=(A_0, A_1, \partial_0, \partial_1)$ is the pair of graded abelian groups $(H_0(A), H_1(A))$ defined by:
$$
H_i(A) := \frac{\Ker \partial_i}{ \Im \partial_{\underline{i}} }
$$
Given a morphism of $2$-complexes $f=(f_0,f_1): A \rightarrow B$, with $A=(A_0, A_1, \partial^A_0, \partial^A_1)$ and $B=(B_0, B_1, \partial^B_0, \partial^B_1)$, for each $i\in\{0,1\}$ it induces a map in homology $f_{*,i} : H_i(A) \rightarrow H_i(B)$, obtained by restricting to the kernels and passing to the quotients.\\
Given $i\in\{0,1\}$ we define the following functor, called $i$-th \emph{homology functor}:
$$
H_i: 2\!-\!\mathcal{C}_*(\mathcal{G}\textit{r}\mathcal{A}\textit{b} ) \longrightarrow \mathcal{G}\textit{r}\mathcal{A}\textit{b}
$$
$$
\xymatrix{
A \ar[dd]_{f} \ar@{|->}[rr] & & H_i(A) \ar[dd]^{f_{*,i}} \\
\ar@{|->}[rr] & & \\
B \ar@{|->}[rr] & & H_i(B)
}
$$
Likewise we can define the homology functors on the category of $2$-complexes in an arbitrary abelian category, in particular on $2\!-\!\mathcal{C}_*(2\!-\!\mathcal{C}_*(\mathcal{G}\textit{r}\mathcal{A}\textit{b}))$.
\end{defn}
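As a small illustration (ours, not from the original text), consider the $2$-complex with $X_0 = X_1 = \Z$ (graded, concentrated in degree $0$), $\partial_0$ the multiplication by $2$ and $\partial_1 = 0$; both composition conditions hold trivially, and the homology is computed directly from the definition:

```latex
X = (\Z,\ \Z,\ \cdot 2,\ 0), \qquad
H_0(X) = \frac{\Ker (\cdot 2)}{\Im\, 0} = 0, \qquad
H_1(X) = \frac{\Ker 0}{\Im (\cdot 2)} = \Z / 2\Z .
```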
\begin{defn}
Given a $2$-complex $X=(X_0, X_1, \partial_0, \partial_1)$ we define the \emph{reflexed} of $X$: $X^\spadesuit :=(X_1, X_0, \partial_1, \partial_0)$. The \emph{reflexed} of a map of $2$-complexes $f: X\rightarrow Y$ is the morphism $f^\spadesuit :=(f_1,f_0) : X^\spadesuit \rightarrow Y^\spadesuit$.
\end{defn}
\begin{defn}
Given a $2$-complex of $2$-complexes of graded abelian groups $A=(A_0, A_1, \partial_0, \partial_1)$, with $\partial_i = (\partial_i^0, \partial_i^1)$ and $A_i=(A_{i,0}, A_{i,1}, \partial_{i,0}, \partial_{i,1})$ for each $i\in\{0,1\}$, the \emph{flatten} of $A$ is the $2$-complex of graded abelian groups $\textit{Fl} (A) := ( A_{0,0} \oplus A_{1,1}, A_{0,1}\oplus A_{1,0}, (\partial_{0,0} \oplus \partial_{1,1}) + (\partial_0^0\oplus \partial_1^1) , (\partial_{0,1}\oplus \partial_{1,0}) + (\partial_0^1\oplus \partial_1^0)) $. Hence, at the level of spaces, $\textit{Fl}(A) = A_0 \oplus A^\spadesuit_1$.\\
Given a morphism of $2$-complexes of $2$-complexes of graded abelian groups $f=(f_0,f_1): A \rightarrow B$, with $A=(A_0, A_1, \partial^A_0, \partial^A_1)$, $B=(B_0, B_1, \partial^B_0, \partial^B_1)$, $A_i=(A_{i,0}, A_{i,1}, \partial^A_{i,0}, \partial^A_{i,1})$, $B_i=(B_{i,0}, B_{i,1}, \partial^B_{i,0}, \partial^B_{i,1})$ and $f_i = (f_{i,0} ,f_{i,1})$ for each $i\in\{0,1\}$, we define the morphism of $2$-complexes $\textit{Fl}(f) := (f_0 \oplus f^\spadesuit_1) : \textit{Fl}(A) \rightarrow \textit{Fl} (B)$.\\
We define the \emph{flattening functor}:
$$
\textit{Fl}: 2\!-\!\mathcal{C}_*(2\!-\!\mathcal{C}_* ( \mathcal{G}\textit{r}\mathcal{A}\textit{b} ) ) \longrightarrow 2\!-\!\mathcal{C}_* ( \mathcal{G}\textit{r}\mathcal{A}\textit{b} )
$$
$$
\xymatrix{
A \ar[dd]_{f} \ar@{|->}[rr] & & \textit{Fl}(A) \ar[dd]^{\textit{Fl}(f)} \\
\ar@{|->}[rr] & & \\
B \ar@{|->}[rr] & & \textit{Fl}(B)
}
$$
\end{defn}
We have that for each $i\in\{0,1\}$
$$
\textit{Fl}\circ H_i = H_i \circ \textit{Fl} : 2\!-\!\mathcal{C}_*(2\!-\!\mathcal{C}_* ( \mathcal{G}\textit{r}\mathcal{A}\textit{b} ) ) \rightarrow \mathcal{G}\textit{r}\mathcal{A}\textit{b}
$$
\begin{defn}
Let $A=(A_0, A_1, \partial_0, \partial_1)$ be a $2$-complex of graded abelian groups. The \emph{graded Euler characteristic} of $A$ is the Laurent polynomial
$$
\chi_A (A) := q\!\dim A_0 - q\!\dim A_1
$$
\end{defn}
\begin{defn}
Let $A=(A_0, A_1, \partial^A_0, \partial^A_1)$ and $B=(B_0, B_1, \partial^B_0, \partial^B_1)$ be two $2$-complexes of graded abelian groups. The \emph{tensor product} of $A$ and $B$ is the $2$-complex $A\otimes B :=((A_0\otimes B_0) \oplus (A_1\otimes B_1), (A_0\otimes B_1) \oplus (A_1\otimes B_0), [\partial^A_0 \otimes \textit{id}_{B_0} + \textit{id}_{A_0} \otimes \partial^B_0, \partial^A_1 \otimes \textit{id}_{B_1} - \textit{id}_{A_1} \otimes \partial^B_1] , [\partial^A_0 \otimes \textit{id}_{B_1} + \textit{id}_{A_0} \otimes \partial^B_1, \partial^A_1 \otimes \textit{id}_{B_0} - \textit{id}_{A_1} \otimes \partial^B_0] )$.
\end{defn}
It holds that the graded Euler characteristic of the direct sum of two $2$-complexes is the sum of the characteristics, that is
$$
\chi_A (A \oplus B) = \chi_A (A) + \chi_A(B)
$$
The characteristic of the tensor product is the product of the characteristics, that is
$$
\chi_A (A \otimes B) = \chi_A (A) \cdot\chi_A(B)
$$
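For completeness, the multiplicativity can be checked directly from the definitions (our own one-line verification), using the fact that $q\!\dim$ is additive on direct sums and multiplicative on tensor products:

```latex
\chi_A (A \otimes B)
= \bigl( q\!\dim A_0 \, q\!\dim B_0 + q\!\dim A_1 \, q\!\dim B_1 \bigr)
- \bigl( q\!\dim A_0 \, q\!\dim B_1 + q\!\dim A_1 \, q\!\dim B_0 \bigr)
= \chi_A (A) \cdot \chi_A (B)
```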
\begin{teo}
Let
$$
0 \longrightarrow A \stackrel{f}{\longrightarrow} B \stackrel{g}{\longrightarrow} C \longrightarrow 0
$$
be a short exact sequence in $2\!-\!\mathcal{C}_*( \mathcal{C} )$, with $\mathcal{C}$ an abelian category. Then we have an exact sequence in homology
$$
\xymatrix{
H_0(A) \ar[r]^{f_{*,0}} & H_0(B) \ar[r]^{g_{*,0}} & H_0(C) \ar[d]^{\Delta_0} \\
H_1(C) \ar[u]^{\Delta_1} & H_1(B) \ar[l]^{g_{*,1}} & H_1(A) \ar[l]^{f_{*,1}}
}
$$
\begin{proof}
As in the classic case.
\end{proof}
\end{teo}
Thus we have a version of the lemma used by Bar-Natan in \cite{Bar-Natan} to prove the invariance of Khovanov homology of oriented links:
\begin{lem}\label{lemquoziente}
Let $A$ be a $2$-complex in an abelian category and $A'$ a subcomplex. Then
\begin{itemize}
\item{if $A'$ has homology $0$, then $H_*(A/ A') \cong H_*(A)$;}
\item{if $A/A'$ has homology $0$, then $H_*(A) \cong H_*(A')$.}
\end{itemize}
\begin{proof}
It follows from the long exact sequence induced by the short exact sequence
$$
\begin{matrix}
0 & \rightarrow & A' & \stackrel{j}{\hookrightarrow } & A & \stackrel{\pi}{\rightarrow} & \frac{A}{A'} & \rightarrow & 0
\end{matrix}
$$
\end{proof}
\end{lem}
We remark that this lemma will be applied not in the category $2\!-\!\mathcal{C}_*(\mathcal{G}\textit{r}\mathcal{A}\textit{b})$, but in $2\!-\!\mathcal{C}_* ( 2\!-\!\mathcal{C}_* ( \mathcal{G}\textit{r}\mathcal{A}\textit{b} ) )$.
\section{Construction of the invariant}
\begin{defn}
We define the graded abelian group $W$ as the group whose homogeneous components are all $0$ except in degrees $2$ and $-2$, where it has the free abelian groups $W_+$ and $W_-$, generated by $w_+$ and $w_-$ respectively.
$$
W: \begin{matrix}
\ldots & 0 & W_- & 0 & 0 & 0 & W_+ & 0 & \ldots \\
\ldots & -3 & -2 & -1 & 0 & 1 & 2 & 3 & \ldots
\end{matrix}
$$
\end{defn}
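Note that $q\!\dim W = A^{-2} + A^{2}$. As observed in the introduction, no graded abelian group has graded dimension $-A^{2}-A^{-2}$; the minus sign is instead produced homologically, by placing $W$ in the odd part of a $2$-complex (a one-line check of ours):

```latex
\chi_A \bigl( (0,\, W,\, 0,\, 0) \bigr) = q\!\dim 0 - q\!\dim W = -A^{-2} - A^{2}
```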
\begin{defn}
Given a link diagram $D$ and a Kauffman state $s$ of $D$, we define
$$
W_s(D) := \left( \bigotimes_{\pic{0.2}{0.1}{banp.eps} \text{ in } D_s} W \right) \{a(s) -b(s)\}
$$
where $D_s$ is the splitting of $D$ by the state $s$, $a(s)$ is the number of crossings of $D$ marked with $A$ (or $0$) by $s$, and $b(s)$ is the number of crossings marked with $B$ (or $1$).
\end{defn}
\begin{defn}
Let $D$ be a link diagram. For each $i\in\{0,1\}$ we define the following graded abelian group
$$
\llangle D \rrangle_i := \bigoplus_{s\ : \ b(s) + |s_A| \in 2\Z + i} W_s(D)
$$
where $s_A$ is the Kauffman state of $D$ made only of $A$'s (or only $0$'s), and $|s|$, for a state $s$ of $D$, is the number of components of the splitting of $D$ by $s$.
\end{defn}
\begin{defn}
We define the following maps of graded abelian groups of degree $2$, in analogy with the classical $m$ and $\Delta$, replacing $v_+$ with $w_-$ and $v_-$ with $w_+$:
$$
\begin{matrix}
\begin{array}{rcl}
W\otimes W & \stackrel{\bar m}{\longrightarrow} & W \\
w_- \otimes w_- & \longmapsto & w_- \\
w_+ \otimes w_- & \longmapsto & w_+ \\
w_- \otimes w_+ & \longmapsto & w_+ \\
w_+ \otimes w_+ & \longmapsto & 0
\end{array}
&
\begin{array}{rcl}
W & \stackrel{\bar \Delta}{\longrightarrow} & W \otimes W \\
w_- & \longmapsto & w_+\otimes w_- + w_-\otimes w_+ \\
w_+ & \longmapsto & w_+\otimes w_+
\end{array}
\end{matrix}
$$
\end{defn}
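Since $\bar m$ and $\bar \Delta$ are obtained from the classical $m$ and $\Delta$ by the substitution $v_+ \leftrightarrow w_-$, $v_- \leftrightarrow w_+$, they satisfy the same Frobenius-type compatibility $\bar \Delta \circ \bar m = (\bar m \otimes \textit{id}) \circ (\textit{id} \otimes \bar \Delta)$, one of the identities underlying the commutativity of the cube of states. A small Python sketch (ours; the encoding of basis vectors as dictionaries is purely illustrative) verifies this on the basis:

```python
# Sketch (ours): bar m and bar Delta on the basis {w_+, w_-}, with vectors
# in tensor powers of W encoded as dicts {tuple of '+'/'-' signs: coefficient}.

def add(u, v):
    """Sum of two vectors, dropping zero coefficients."""
    w = dict(u)
    for k, c in v.items():
        w[k] = w.get(k, 0) + c
    return {k: c for k, c in w.items() if c != 0}

def bar_m(a, b):
    """bar m : W (x) W -> W  (w_- plays the role of the classical v_+)."""
    if a == b == '-':
        return {('-',): 1}
    if {a, b} == {'+', '-'}:
        return {('+',): 1}
    return {}                          # w_+ (x) w_+ -> 0

def bar_Delta(a):
    """bar Delta : W -> W (x) W."""
    if a == '-':
        return {('+', '-'): 1, ('-', '+'): 1}
    return {('+', '+'): 1}             # w_+ -> w_+ (x) w_+

def lhs(a, b):
    """(bar Delta o bar m)(w_a (x) w_b)."""
    out = {}
    for (x,), c in bar_m(a, b).items():
        out = add(out, {k: c * d for k, d in bar_Delta(x).items()})
    return out

def rhs(a, b):
    """((bar m (x) id) o (id (x) bar Delta))(w_a (x) w_b)."""
    out = {}
    for (y, z), c in bar_Delta(b).items():
        for (x,), d in bar_m(a, y).items():
            out = add(out, {(x, z): c * d})
    return out

# Frobenius compatibility holds on the whole basis of W (x) W:
assert all(lhs(a, b) == rhs(a, b) for a in '+-' for b in '+-')
```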
\begin{defn}
Given an edge $\xi:s_0 \rightarrow s_1$ of a diagram $D$, we define the homomorphism of graded abelian groups of degree $0$
$$
\partial_\xi : W_{s_0}(D) \longrightarrow W_{s_1}(D)
$$
in analogy with the classical $d_\xi : V_{s_0}(D) \rightarrow V_{s_1}(D)$, using $\bar m$ and $\bar \Delta$ instead of $m$ and $\Delta$:
$$
\partial_\xi := \left\{ \begin{array}{cl}
\{-2\} ( \textit{id}_W \otimes \ldots \otimes \textit{id}_W \otimes \bar m \otimes \textit{id}_W \otimes \ldots \otimes \textit{id}_W ) \{ a(s_0) - b(s_0) \} & \text{if} \ |s_1| = |s_0| - 1 \\
\{-2\} ( \textit{id}_W \otimes \ldots \otimes \textit{id}_W \otimes \bar \Delta \otimes \textit{id}_W \otimes \ldots \otimes \textit{id}_W ) \{ a(s_0) - b(s_0) \} & \text{if} \ |s_1| = |s_0| + 1
\end{array}\right.
$$
\end{defn}
The maps defined above have degree $0$. In fact, $\bar m$ and $\bar \Delta $ have degree $2$, while $a(s_1) = a(s_0) - 1 $ and $b(s_1)=b(s_0) +1$, hence $a(s_1) - b(s_1) = a(s_0) - b(s_0) - 2$. By the definition of $W_s(D)$, we obtain that the degree of $\partial_\xi$ is $0$.
\begin{defn}
Let $D$ be a link diagram, together with an order of its crossings. Mimicking what we have done in the classic case to define the differentials $d^j : \mathcal{C}^j(D) \rightarrow \mathcal{C}^{j+1}(D)$ with $j\in\Z$, for each $i\in\{ 0, 1\}$ we define the map
$$
\partial_i := \sum_{\xi \ :\ |\xi| + |s_A| \in 2\Z + i} (-1)^\xi \partial_\xi : \llangle D \rrangle_i \longrightarrow \llangle D \rrangle_{\underline{i}}
$$
\end{defn}
This definition is well posed. In fact, given an edge $\xi: s_0 \rightarrow s_1$, $\partial_\xi$ has degree $0$ and $b(s_1)=b(s_0)+1$, hence $b(s_1)+|s_A| \in 2\Z \Leftrightarrow b(s_0)+|s_A| \in 2\Z +1$; thus $\partial_i$ indeed maps $\llangle D \rrangle_i$ to $\llangle D \rrangle_{\underline{i}}$.
\begin{lem}
Let $D$ be a link diagram with an order of the crossings and let $i \in\{0,1\}$. Then
$$
\partial_{\underline{i}} \circ \partial_i = 0
$$
\begin{proof}
It follows from the commutativity of the induced cube, such as in the classic case.
\end{proof}
\end{lem}
So we can define the following $2$-complexes of graded abelian groups:
\begin{defn}
Let $D$ be a link diagram with an enumeration of the crossings. We define
$$
\llangle D \rrangle := ( \llangle D \rrangle_0, \llangle D \rrangle_1 , \partial_0 , \partial_1)
$$
\end{defn}
\begin{teo}\label{carEul}
Let $D$ be a link diagram. Then, using the variable $A$
$$
\chi_A( \llangle D \rrangle ) = (-A^{-2} - A^2) \langle D \rangle
$$
Thus the graded Euler characteristic of $\llangle D \rrangle $ is equal to the unnormalized Kauffman bracket of $D$.
\begin{proof}
Recall that $s_A$ is the state of $D$ with all $A$'s.
\beq
\chi_A(\llangle D \rrangle ) & = & q\!\dim \llangle D \rrangle_0 - q\!\dim \llangle D \rrangle_1 \\
& = & \sum_{s\ : \ b(s) + |s_A| \in 2\Z} q\!\dim W_s(D) - \sum_{s\ : \ b(s) + |s_A| \in 2\Z + 1 } q\!\dim W_s(D) \\
& = & \sum_{s \text{ of }D } (-1)^{b(s)+|s_A|} q\!\dim W_s(D) \\
& = & \sum_{s \text{ of }D } (-1)^{b(s)+|s_A|} A^{a(s)- b(s)} (q\!\dim W )^{ |s|} \\
& = & \sum_{s \text{ of }D } (-1)^{b(s)+|s_A|} A^{a(s)-b(s)} (A^{-2} + A^2)^{|s|}
\eeq
Given an edge $\xi:s_0 \rightarrow s_1$, we have that $b(s_1)=b(s_0)+1$ and $|s_1| = |s_0| \pm 1$. Each state $s$ can be connected to $s_A$ by a sequence of edges of length $b(s)$:
$$
s_A \stackrel{\xi_1}{\longrightarrow} \ldots \stackrel{\xi_{b(s)}}{\longrightarrow} s
$$
Hence the class modulo $2$ of $b(s)$ is the same as $|s|- |s_A|$, therefore $(-1)^{b(s)+|s_A|} = (-1)^{|s|}$. Hence
\beq
\chi_A(\llangle D \rrangle ) & = & \sum_{s \text{ of }D } (-1)^{|s|} A^{a(s)-b(s)} (A^{-2} + A^2)^{|s|} \\
& = & \sum_{s \text{ of }D } A^{a(s)-b(s)} (-A^{-2} - A^2)^{|s|} \\
& = & (-A^{-2}-A^2) \langle D \rangle
\eeq
\end{proof}
\end{teo}
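Theorem \ref{carEul} can be checked numerically. The following Python sketch (our own sanity check, not part of the construction) evaluates the state sum $\sum_{s} (-1)^{|s|} A^{a(s)-b(s)} (A^{-2}+A^{2})^{|s|}$ as a Laurent polynomial in $A$; the per-state loop counts for the standard $3$-crossing trefoil diagram are the usual ones and are assumed as input data.

```python
# Sketch (ours): Laurent polynomials in A encoded as {exponent: coefficient}.

def mul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}

def power(p, n):
    r = {0: 1}
    for _ in range(n):
        r = mul(r, p)
    return r

def chi(states):
    """states: one triple (a(s), b(s), |s|) per Kauffman state."""
    d = {-2: 1, 2: 1}                       # qdim W = A^-2 + A^2
    total = {}
    for a, b, loops in states:
        term = mul({a - b: (-1) ** loops}, power(d, loops))
        for e, c in term.items():
            total[e] = total.get(e, 0) + c
    return {e: c for e, c in total.items() if c != 0}

# Zero-crossing unknot diagram: a single state with one loop.
assert chi([(0, 0, 1)]) == {-2: -1, 2: -1}   # -A^-2 - A^2

# Standard 3-crossing trefoil diagram; the loop counts per state
# (all-A: 2, one B: 1, two B's: 2, all B's: 3) are the usual ones.
trefoil = [(3, 0, 2)] + [(2, 1, 1)] * 3 + [(1, 2, 2)] * 3 + [(0, 3, 3)]
assert chi(trefoil) == {7: 1, 3: 1, -1: 1, -9: -1}
```

The trefoil value factors as $(-A^{-2}-A^{2})(-A^{5}-A^{-3}+A^{-7})$, i.e. the ordinary bracket of this trefoil diagram multiplied by $-A^{-2}-A^{2}$, in agreement with the theorem.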
\section{Proof of the invariance}
We denote by $\llangle \ \rrangle $ the map, defined on the set of link diagrams with an order of the crossings and with values in the class of $2$-complexes of graded abelian groups, that associates to a diagram $D$ the $2$-complex $\llangle D \rrangle$. We denote by $H_*: 2\!-\!\mathcal{C}_*( \mathcal{G}\textit{r}\mathcal{A}\textit{b} ) \rightarrow (\mathcal{G}\textit{r}\mathcal{A}\textit{b})^2$ the homology functor combining the functors $H_0$ and $H_1$.
\begin{teo}
$H_* \circ \llangle \ \rrangle $ is an invariant of framed links, up to isomorphism.
\end{teo}
The isomorphism for two different orders of the crossings can be obtained as for the Khovanov homology of oriented links (see \cite{Lee}). The invariance under replacing a curl by another of the same type is trivial. Following the proofs of invariance under the Reidemeister moves of the second and third type for Khovanov homology in \cite{Bar-Natan}, we obtain proofs valid also for this version. In this section we adapt Bar-Natan's proof for the moves of the second type to our setting; performing the same adaptations for the moves of the third type, we conclude.\\
We denote by $FL(\mathbb{S}^3)$ the set of framed links in the $3$-sphere up to equivalence. Thanks to the theorem, we can give the following definition:
\begin{defn}
$$
\mathcal{H}^F_* : FL(\mathbb{S}^3) \longrightarrow \{ \text{Pairs of graded abelian groups} \}/_{\cong}
$$
$$
\mathcal{H}^F_*(L) := [H_*( \llangle D \rrangle ) ]/_{\cong}
$$
where $D$ is a diagram of the framed link $L$ with a fixed enumeration of the crossings. Given $L\in FL(\mathbb{S}^3)$, $i\in\{0,1\}$ and $j\in\Z$, we denote by $\mathcal{H}^F_i(L)$ the $i$-th component of $\mathcal{H}^F_*(L)$, and by $\mathcal{H}^F_{i,j}(L)$ the $j$-th homogeneous component of $\mathcal{H}^F_i(L)$. Given a diagram $D$ with an order of the crossings, we write $\mathcal{H}^F_*(D)$ instead of $H_*(\llangle D \rrangle )$, and analogously for the components. If the diagram has no fixed order of the crossings, we use the same notation for the object up to isomorphism.
\end{defn}
\begin{teo}
Let $L$ be a framed link. Then the graded Euler characteristic of $\mathcal{H}^F_*(L)$ is equal to the Kauffman bracket of $L$.
$$
\chi_A(\mathcal{H}^F_*(L)) = \langle L \rangle
$$
\begin{proof}
It follows from Theorem \ref{carEul} and from the invariance of the graded Euler characteristic under passing to homology.
\end{proof}
\end{teo}
As usual, we implicitly fix an order of the crossings on the first diagram presented, and endow the other diagrams with the induced orders. Given a diagram with a local configuration of the type $\pic{0.9}{0.2}{reid2-1p.eps}$, we have the following direct sum decomposition of the spaces:
$$
\left\llangle \pic{1.2}{0.3}{reid2-1p.eps} \right\rrangle = \left\llangle \pic{1.2}{0.3}{reid2c.eps} \right\rrangle\{2\} \oplus \left\llangle \pic{1.2}{0.3}{Bcanalep.eps} \right\rrangle \oplus \left\llangle \pic{1.2}{0.3}{reid2d.eps} \right\rrangle \oplus \left\llangle \pic{1.2}{0.3}{reid2e.eps} \right\rrangle \{-2\}
$$
We explain the decomposition. Let $s$ be a Kauffman state of one of the diagrams on the right-hand side of the equation. We denote by $s_A$ the state of that diagram with all $A$'s (or $0$'s), and by $s'_A$ the state of \pic{0.9}{0.2}{reid2-1p.eps} with all $A$'s. The state $s$ corresponds to a state $s'$ of \pic{0.9}{0.2}{reid2-1p.eps}.
\begin{itemize}
\item{If $s$ is a state of \pic{0.9}{0.2}{reid2c.eps}, then $a(s)=a(s')-2$ and $b(s)=b(s')$, so $a(s)-b(s)=a(s')-b(s')-2$ and the polynomial degree must be shifted by $2$. Moreover $|s_A| = |s'_A|$, hence $b(s) + |s_A| \in 2\Z \Leftrightarrow b(s') + |s'_A| \in 2\Z$, so it is correct not to reflex.}
\item{If $s$ is a state of \pic{0.9}{0.2}{Bcanalep.eps}, then $a(s)=a(s')-1$ and $b(s)=b(s')-1$, so $a(s)-b(s)=a(s')-b(s')$ and no shift of the polynomial degree is needed. Moreover $|s_A| = |s'_A| \pm 1$, depending on whether the strands visible in \pic{0.9}{0.2}{reid2-1p.eps} lie in the same circle of the splitting or not; in either case $b(s) + |s_A| \in 2\Z \Leftrightarrow b(s') + |s'_A| \in 2\Z$, so it is correct not to reflex.}
\item{If $s$ is a state of \pic{0.9}{0.2}{reid2d.eps}, then $a(s)=a(s')-1$ and $b(s)=b(s')-1$, so $a(s)-b(s)=a(s')-b(s')$ and no shift of the polynomial degree is needed. Moreover $|s_A| = |s'_A| + 1$, hence $b(s) + |s_A| \in 2\Z \Leftrightarrow b(s') + |s'_A| \in 2\Z$, so it is correct not to reflex.}
\item{If $s$ is a state of \pic{0.9}{0.2}{reid2e.eps}, then $a(s)=a(s')$ and $b(s)=b(s')-2$, so $a(s)-b(s)=a(s')-b(s')+2$ and the polynomial degree must be shifted by $-2$. Moreover $|s_A| = |s'_A|$, hence $b(s) + |s_A| \in 2\Z \Leftrightarrow b(s') + |s'_A| \in 2\Z$, so it is correct not to reflex.}
\end{itemize}
We define the $2$-complex of $2$-complexes $C$ such that $\textit{Fl}(C) = \left\llangle \pic{0.9}{0.2}{reid2-1p.eps} \right\rrangle$:
$$
\begin{matrix}
C:= \Bigl( \left\llangle \pic{1.2}{0.3}{Bcanalep.eps} \right\rrangle \oplus \left\llangle \pic{1.2}{0.3}{reid2d.eps} \right\rrangle , \ \left\llangle \pic{1.2}{0.3}{reid2c.eps} \right\rrangle^\spadesuit\{2\} \oplus \left\llangle \pic{1.2}{0.3}{reid2e.eps} \right\rrangle^\spadesuit\{-2\} , \\
[( 0, \partial_{B*} ), (0, \bar m )], \ [(\partial_{*A}, \partial_{A*} ) , 0 ] \Bigr)
\end{matrix}
$$
where the maps composing the differentials are built by summing the maps induced by the edges of \pic{0.9}{0.2}{reid2-1p.eps} that change the crossings in the figure, each multiplied by the sign of its edge. These maps are pieces of the differentials of $\left\llangle \pic{0.9}{0.2}{reid2-1p.eps} \right\rrangle$; hence, for $i\in\{0,1\}$, their $i$-th components go from the component of degree $i$ to the one of degree $\underline{i}$, so reflexing the domain or the codomain makes them respect the degree.\\
We define the $2$-subcomplex of $C$
$$
C':= \left(\left\llangle \pic{1.2}{0.3}{reid2d.eps} \right\rrangle_- , \left\llangle \pic{1.2}{0.3}{reid2e.eps} \right\rrangle^\spadesuit \{-2\}, \bar m , 0 \right)
$$
where $\left\llangle \pic{0.9}{0.2}{reid2d.eps} \right\rrangle_-$ is the $2$-subcomplex of $\left\llangle \pic{0.9}{0.2}{reid2d.eps} \right\rrangle$ obtained by considering only the component $W_-$ of the tensor factor corresponding to the circle in the figure. The differential of degree $0$ is given by $\bar m$ restricted to the $2$-subcomplex. It is easily seen to be an isomorphism, and therefore $C'$ has homology $0$. Hence the homology of $C$ is isomorphic to the homology of
$$
\frac{C}{C'} = \left( \left\llangle \pic{1.2}{0.3}{Bcanalep.eps} \right\rrangle \oplus \left\llangle \pic{1.2}{0.3}{reid2d.eps} \right\rrangle_+ , \left\llangle \pic{1.2}{0.3}{reid2c.eps} \right\rrangle^\spadesuit\{2\} , 0 , [(\partial_{*A}, \partial_{A*} ) , 0 ] \right)
$$
The map $\partial_{A*}: \left\llangle \pic{0.9}{0.2}{reid2c.eps} \right\rrangle \{2\} \rightarrow \left\llangle \pic{0.9}{0.2}{reid2d.eps} \right\rrangle $ is an isomorphism. We can define the map $\tau:= \partial_{*A} \circ \partial_{A*}^{-1} : \left\llangle \pic{0.9}{0.2}{reid2d.eps} \right\rrangle \rightarrow \left\llangle \pic{1.2}{0.3}{Bcanalep.eps} \right\rrangle $ and the $2$-subcomplex of $C/C'$,
$$
C''' := \left( \left\{ (\tau(b), b) \in \left\llangle \pic{1.2}{0.3}{Bcanalep.eps} \right\rrangle \oplus \left\llangle \pic{1.2}{0.3}{reid2d.eps} \right\rrangle \right\}, \left\llangle \pic{1.2}{0.3}{reid2c.eps} \right\rrangle^\spadesuit \{2\} , 0 , (\partial_{*A}, \partial_{A*} ) \right)
$$
As in the case of oriented links, $C'''$ has null homology, and we conclude by showing that the quotient $(C/C')/C'''$ is isomorphic to $\left(\left\llangle \pic{0.9}{0.2}{Bcanalep.eps} \right\rrangle , 0 , 0, 0 \right)$, then using the flatten functor and the lemma. The isomorphism can be found explicitly as in the classical case.
\section{Categorification of the skein relations}
We note that $\left\llangle \pic{0.5}{0.2}{banp.eps} \right\rrangle = ( 0, W , 0 , 0 ) $, hence $\mathcal{H}^F_*\left(\pic{0.5}{0.2}{banp.eps} \right) = (0, W)$; therefore we have the last equation of the skein relations of the unnormalized Kauffman bracket:
$$
\chi_A\left( \mathcal{H}^F_*\left( \pic{0.8}{0.3}{banp.eps} \right) \right) = -A^{-2} - A^2
$$
\begin{teo}
Let $D$ and $D'$ be two link diagrams. Then
$$
\llangle D \sqcup D' \rrangle = \llangle D \rrangle \otimes \llangle D' \rrangle
$$
\begin{proof}
As in the classical case, before the orientation information is added.
\end{proof}
\end{teo}
So we have the second equation of the skein relations; indeed, given a diagram $D$:
\beq
\chi_A\left( \mathcal{H}^F_* \left( D \sqcup \pic{0.8}{0.3}{banp.eps} \right) \right) & = & \chi_A \left( \left\llangle D \sqcup \pic{0.8}{0.3}{banp.eps} \right\rrangle \right) \\
& = & \chi_A \left( \llangle D \rrangle \otimes \left\llangle \pic{0.8}{0.3}{banp.eps} \right\rrangle \right) \\
& = & \chi_A ( \llangle D \rrangle ) \chi_A\left( \left\llangle \pic{0.8}{0.3}{banp.eps} \right\rrangle \right) \\
& = & \chi_A ( \llangle D \rrangle ) (-A^{-2} - A^2)
\eeq
Let $D$ be a link diagram. We consider one of its crossings $\pic{1.2}{0.3}{incrociop.eps}$. We have the following decomposition in direct sum at the level of spaces:
$$
\left\llangle \pic{1.2}{0.3}{incrociop.eps} \right\rrangle = \left\llangle \pic{1.2}{0.3}{Acanalep.eps} \right\rrangle \{1\} \oplus \left\llangle \pic{1.2}{0.3}{Bcanalep.eps} \right\rrangle \{-1\}
$$
Hence
\beq
\chi_A \left( \mathcal{H}^F_*\left( \pic{1.2}{0.3}{incrociop.eps} \right) \right) & = & \chi_A \left( \left\llangle \pic{1.2}{0.3}{incrociop.eps} \right\rrangle \right) \\
& = & \chi_A \left( \left\llangle \pic{1.2}{0.3}{Acanalep.eps} \right\rrangle\{1\} \right) + \chi_A \left( \left\llangle \pic{1.2}{0.3}{Bcanalep.eps} \right\rrangle \{-1\} \right) \\
& = & A\chi_A \left( \mathcal{H}^F_*\left( \pic{1.2}{0.3}{Acanalep.eps} \right) \right) + A^{-1}\chi_A \left( \mathcal{H}^F_*\left( \pic{1.2}{0.3}{Bcanalep.eps} \right) \right)
\eeq
Therefore we have the first equation of the skein relations.
We note that this last equation holds only at the level of spaces; if we also take the differentials into account, we obtain a short exact sequence in $2\!-\!\mathcal{C}_*(\mathcal{G}\textit{r}\mathcal{A}\textit{b})$, where the first map is the inclusion of a direct summand and the second is the projection onto the other:
$$
0 \longrightarrow \left\llangle \pic{1.2}{0.3}{Bcanalep.eps} \right\rrangle\{-1\} \longrightarrow \left\llangle \pic{1.2}{0.3}{incrociop.eps} \right\rrangle \longrightarrow \left\llangle \pic{1.2}{0.3}{Acanalep.eps} \right\rrangle\{1\} \longrightarrow 0
$$
So in the end the categorification of the skein relations in homology is:
\begin{enumerate}
\item{
This diagram is exact for each $j\in\Z$
$$
\xymatrix{
\mathcal{H}^F_{0,j+1}\left(\pic{1.2}{0.3}{Bcanalep.eps}\right) \ar[r] & \mathcal{H}^F_{0,j}\left(\pic{1.2}{0.3}{incrociop.eps}\right) \ar[r] & \mathcal{H}^F_{0,j-1}\left(\pic{1.2}{0.3}{Acanalep.eps}\right) \ar[d] \\
\mathcal{H}^F_{1,j-1}\left(\pic{1.2}{0.3}{Acanalep.eps}\right) \ar[u] & \mathcal{H}^F_{1,j}\left(\pic{1.2}{0.3}{incrociop.eps}\right) \ar[l] & \mathcal{H}^F_{1,j+1}\left(\pic{1.2}{0.3}{Bcanalep.eps}\right) \ar[l]
}
$$
}
\item{
$$
\mathcal{H}^F_0\left(D \sqcup \pic{1.2}{0.3}{banp.eps}\right) = \mathcal{H}^F_0( D ) \otimes W^\spadesuit
$$
}
\item{
$$
\mathcal{H}^F_*\left(\pic{1.2}{0.3}{banp.eps}\right) = W^\spadesuit
$$
}
\end{enumerate}
The second relation follows from the previous observations and the K\"unneth formula.
\subsection{Positive curls}
Now we examine what happens when we follow the proof of the invariance of Khovanov homology under the Reidemeister moves of the first type, as presented in \cite{Bar-Natan}, and adapt it.
Given a diagram with a positive curl \pic{0.8}{0.2}{ricciolopos.eps}, we have the following direct sum decomposition of the spaces:
$$
\left\llangle \pic{1.0}{0.3}{ricciolopos.eps} \right\rrangle = \left\llangle \pic{1.0}{0.3}{riccioloa.eps} \right\rrangle \{ 1\} \oplus \left\llangle \pic{1.0}{0.3}{ricciolob.eps} \right\rrangle \{-1\}
$$
Using only the skein relations we obtain the following result, as in the proposition on the Kauffman bracket used to introduce Kauffman's version of the Jones polynomial:
$$
\chi_A \left( \mathcal{H}^F_*\left( \pic{1.0}{0.3}{ricciolopos.eps} \right) \right) = -A^3 \chi_A \left( \mathcal{H}^F_*\left( \pic{1.0}{0.3}{riga.eps} \right) \right)
$$
Following the proof of the invariance of Khovanov homology for the moves of the first type we define a $2$-complex $C$ such that $\textit{Fl}(C)= \left\llangle \pic{0.8}{0.2}{ricciolopos.eps} \right\rrangle$
$$
C := \left( \left\llangle \pic{1.0}{0.3}{riccioloa.eps} \right\rrangle \{1\} , \left\llangle \pic{1.0}{0.3}{ricciolob.eps} \right\rrangle^\spadesuit \{-1\} , \bar m , 0 \right)
$$
Now we define the $2$-subcomplex
$$
C' := \left( \left\llangle \pic{1.0}{0.3}{riccioloa.eps} \right\rrangle_- \{1\} , \left\llangle \pic{1.0}{0.3}{ricciolob.eps} \right\rrangle^\spadesuit \{-1\} , \bar m , 0 \right)
$$
As before $\bar m$ is an isomorphism and hence $C'$ has null homology. $C$ has the same homology as the quotient
$$
\frac{C}{C'} = \left( \left\llangle \pic{1.0}{0.3}{riccioloa.eps} \right\rrangle_+ \{1\}, 0 , 0 , 0 \right)
$$
Therefore using the flatten functor we obtain that
\beq
\mathcal{H}^F_*\left( \pic{1.0}{0.3}{ricciolopos.eps} \right) & \cong & \textit{Fl}( H_*(C/C') ) \\
& \cong & H_* \left( \left\llangle \pic{1.0}{0.3}{riccioloa.eps} \right\rrangle \{1\} \right) \\
& \cong & H_* \left( \left( \left\llangle \pic{1.0}{0.3}{riga.eps} \right\rrangle \otimes W_+^\spadesuit \right) \{1\} \right)
\eeq
where we have identified a graded abelian group $G$ with the $2$-complex $(G,0,0,0)$, so that $G^\spadesuit = (0, G , 0, 0)$.
\beq
\mathcal{H}^F_*\left( \pic{1.0}{0.3}{ricciolopos.eps} \right) & \cong & H_* \left( \left( \left\llangle \pic{1.0}{0.3}{riga.eps} \right\rrangle \otimes \Z^\spadesuit \{2\} \right) \{1\} \right) \\
& \cong & H_* \left( \left\llangle \pic{1.0}{0.3}{riga.eps} \right\rrangle^\spadesuit \{3\} \right) \\
& = & \left( \mathcal{H}^F_* \left( \pic{1.0}{0.3}{riga.eps} \right) \right)^\spadesuit \{3\}
\eeq
Therefore
\beq
\chi_A \left( \mathcal{H}^F_* \left( \pic{1.0}{0.3}{ricciolopos.eps} \right) \right) & = & \chi_A \left( \mathcal{H}^F_* \left( \left( \pic{1.0}{0.3}{riga.eps} \right) \right)^\spadesuit \{3\} \right) \\
& = & -A^3 \chi_A \left( \mathcal{H}^F_* \left( \pic{1.0}{0.3}{riga.eps} \right) \right)
\eeq
This is consistent with the computation above.
\section{Returning to the orientations}
As in Kauffman's approach we can consider the oriented link diagrams and add to our construction the information about orientations.
\begin{defn}
Let $A=(A_0,A_1, \partial_0 , \partial_1)$ be a $2$-complex and $n$ a natural number. We denote by $A^{\spadesuit (n)}$ the $2$-complex obtained by reflexing $A$ $n$ times, namely $A^{\spadesuit (n)} = A$ if $n$ is even and $A^{\spadesuit (n)} = A^\spadesuit$ if $n$ is odd.
\end{defn}
\begin{defn}
Given an oriented link diagram $D$ we define the $2$-complex of graded abelian groups
$$
\ddot{C}( D ) := \llangle D \rrangle^{\spadesuit (w(D))} \{-3w(D) \}
$$
where $w(D)$ is the writhe number of $D$.
\end{defn}
\begin{teo}
$H_* \circ \ddot{C} $ is an invariant of oriented links, up to isomorphism.
\begin{proof}
The operations of shifting and reflexing commute with passing to homology. Hence, for each link diagram $D$, $H_* ( \ddot{C} (D)) = ( H_* ( \llangle D \rrangle ) )^{\spadesuit(w(D))} \{-3w(D)\} = ( \mathcal{H}^F_*(D))^{\spadesuit(w(D))} \{-3w(D) \}$. Therefore, from the invariance of the writhe number and of $\mathcal{H}^F_*$ under the Reidemeister moves of the second and third type and under the choice of the order of the crossings, we get the invariance of $H_* \circ \ddot{C}$ under the same modifications. It remains to prove the invariance under the moves of the first type. We consider a diagram with a positive curl \pic{0.8}{0.2}{ricciolopos.eps}. By what was shown in the previous section, we have
\beq
H_*\left( \ddot{C} \left( \pic{1.0}{0.3}{ricciolopos.eps} \right) \right) & = & \left( \mathcal{H}^F_*\left( \pic{1.0}{0.3}{ricciolopos.eps} \right) \right)^{\spadesuit\left(w \left( \pic{0.5}{0.1}{ricciolopos.eps} \right) \right)} \left\{ -3w\left( \pic{1.0}{0.3}{ricciolopos.eps} \right) \right\} \\
& \cong & \left( \left( \mathcal{H}^F_* \left( \pic{1.0}{0.3}{riga.eps} \right) \right)^\spadesuit \{3\} \right)^{\spadesuit\left(w \left( \pic{0.5}{0.1}{ricciolopos.eps} \right) \right)} \left\{ -3w\left( \pic{1.0}{0.3}{ricciolopos.eps} \right) \right\} \\
& \cong & \left( \left( \mathcal{H}^F_* \left( \pic{1.0}{0.3}{riga.eps} \right) \right)^\spadesuit \{3\} \right)^{\spadesuit\left(w \left( \pic{0.5}{0.1}{riga.eps} \right) +1\right)} \left\{ -3w\left( \pic{1.0}{0.3}{riga.eps} \right) - 3\right\} \\
& \cong & \left( \mathcal{H}^F_* \left( \pic{1.0}{0.3}{riga.eps} \right) \right)^{\spadesuit\left(w \left( \pic{0.5}{0.1}{riga.eps} \right) \right)} \left\{ -3w\left( \pic{1.0}{0.3}{riga.eps} \right) \right\} \\
& = & H_*\left( \ddot{C}\left( \pic{1.0}{0.3}{riga.eps} \right) \right)
\eeq
As usual, the proof of invariance under first-type moves at negative curls follows from the invariance at positive curls together with the invariance under second-type moves.
\end{proof}
\end{teo}
\begin{defn}
We denote the composition $H_* \circ \ddot{C}$ by $\ddot{\mathcal{H}}_*$, and likewise the induced map defined on the set of oriented links in $\mathbb{S}^3$.
\end{defn}
\begin{teo}
Let $L$ be an oriented link. Then the graded Euler characteristic of $\ddot{\mathcal{H}}_*(L)$ equals the unnormalized Kauffman version of the Jones polynomial:
$$
\chi_A ( \ddot{\mathcal{H}}_*(L) ) = \hat f_L
$$
\begin{proof}
It suffices to prove it at the level of $2$-complexes. Let $D$ be a diagram of $L$.
\beq
\chi_A( \ddot{C}(D) ) & = & \chi_A( (\llangle D \rrangle )^{\spadesuit(w(D) ) } \{ -3w(D)\} ) \\
& = & (-1)^{w(D)}A^{-3w(D)} \chi_A ( \llangle D \rrangle ) \\
& = & (-A^3)^{-w(D)} (-A^2-A^{-2}) \langle D \rangle \\
& = & \hat f_L
\eeq
\end{proof}
\end{teo}
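As a sanity check, for the $0$-crossing diagram $U$ of the unknot we have $w(U)=0$ and $\langle U \rangle = 1$, so the chain of equalities in the proof gives
$$
\chi_A( \ddot{C}(U) ) = (-A^3)^{0} (-A^2-A^{-2}) \cdot 1 = -A^2-A^{-2} = \hat f_{\bigcirc}.
$$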
\begin{prop}
Let $D$ be an oriented diagram, $i \in\{0,1\}$ and $j \in 2\Z +1$. Then
$$
\ddot{C}_{i,j}(D) = 0, \ \ddot{\mathcal{H}}_{i,j}(D)= 0
$$
\begin{proof}
The second statement follows from the first. We have $\ddot{C}_i(D) = ( \bigoplus_{s\ : \ b(s) + |s_A| \in 2\Z + i} W_s(D) )^{\spadesuit(w(D))} \{-3w(D)\} $. Let $s$ be a state of $D$; then $W_s(D) \cong W^{\otimes |s|} \{a(s) -b(s)\}$, where $|s|$ is the number of components of $D_s$, the splitting of $D$ by $s$. By the definition of $W$ and of the tensor product of graded abelian groups, the only non-null homogeneous components of $W^{\otimes |s|}$ are those whose index is a sum of copies of $2$ and $-2$. Hence each odd homogeneous component of $W_s(D)$ is $0$.
\beq
\ddot{C}_{i,j}(D) & = & ( \bigoplus_{s\ : \ b(s) + |s_A| \in 2\Z + i + w(D) } W_s(D) )_{j+3w(D)} \\
& = & \bigoplus_{s\ : \ b(s) + |s_A| \in 2\Z + i + w(D) } \left( W^{\otimes |s|} \right)_{j+3w(D) - a(s) + b(s)}
\eeq
Now $j+3w(D) - a(s) + b(s) = j + 3n_+(D) - 3n_-(D) - n(D) + 2 b(s) = j + 2 n_+(D) - 4 n_-(D) + 2 b(s)$, where $n(D)$ is the number of crossings of $D$ and $n_+(D)$ and $n_-(D)$ are the numbers of positive and negative crossings of $D$. Since $j$ is odd, this number is odd. Hence the component is null.
\end{proof}
\end{prop}
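The parity computation at the end of the proof can also be phrased more compactly: since $a(s)+b(s)=n(D)$ and $w(D) \equiv n(D) \pmod 2$, we have
$$
j+3w(D) - a(s) + b(s) \equiv j + n(D) + n(D) \equiv j \pmod 2,
$$
so an odd $j$ always selects an odd, hence null, homogeneous component.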
Now we investigate the connections between this new invariant of oriented links, $\ddot{\mathcal{H}}_*$, and the classical Khovanov homology, $\mathcal{H}^*$. The first part of the following theorem is suggested by the well-known relation between Khovanov's version of the Jones polynomial and Kauffman's: $q= -A^{-2}$.
\begin{teo}\label{teo1}
Let $L$ be an oriented link, $D$ a diagram of $L$ and $N$ the number of components of $L$. Then for each $i\in\{0,1\}$ and $j\in 2\Z$
$$
\begin{array}{cl}
\ddot{\mathcal{H}}_{i,j}(L) = \bigoplus_{k \in 2\Z +i} \mathcal{H}^{k,- \frac{j}{2} } (L) & \text{if } j \in 4\Z \\
\ddot{\mathcal{H}}_{i,j}(L) = \bigoplus_{k \in 2\Z +i+1} \mathcal{H}^{k,- \frac{j}{2} } (L) & \text{if } j \not\in 4\Z \\
\end{array}
$$
Furthermore for each $i\in\{0,1\}$ and $j\in 2\Z$
$$
\ddot{\mathcal{H}}_{i,j} (L) = \bigoplus_{k\in 2\Z +i + n_+(D) + |s_A|} \mathcal{H}^{k, - \frac{j}{2}} (L)
$$
and
\begin{itemize}
\item{if $N\in 2\Z +1$, then $n_+(D) + |s_A| \in 2\Z +1$ and $\ddot{\mathcal{H}}_{i,j} (L) = 0$ for each $i\in\{0,1\}$ and $j\in4\Z$;}
\item{if $N\in 2\Z $, then $n_+(D) + |s_A| \in 2\Z $ and $\ddot{\mathcal{H}}_{i,j} (L) = 0$ for each $i\in\{0,1\}$ and $j\in 4\Z + 2$.}
\end{itemize}
\begin{proof}
We recall that the classical Khovanov homology is constructed starting from the graded abelian group $V$, the free abelian group generated by the elements $v_+$ and $v_-$, where $v_+$ has degree $1$ and $v_-$ has degree $-1$. Let $\alpha : W \rightarrow V$ be the map of abelian groups defined by $\alpha(w_+)= v_-$, $\alpha(w_-) = v_+$. For each $n\in \mathbb{N}$ we have the map $\alpha^{\otimes n} : W^{\otimes n} \rightarrow V^{\otimes n}$. We note that for each $j\in 2\Z$ we have $\alpha((W)_j) = (V)_{-\frac j 2 }$ and $\alpha^{\otimes n}( (W^{\otimes n})_j ) = ( V^{\otimes n} )_{-\frac j 2 }$. From this we can define a map for each state $s$ of $D$ by applying the appropriate shifts: $\alpha_s: W_s(D) \rightarrow V_s(D)$, where $V_s(D) = \left( \bigotimes_{\pic{0.2}{0.1}{banp.eps} \text{ in } D_s} V \right) \{b(s)\}$. For any $j \in 2\Z + n(D)$ we have $\alpha_s( (W_s(D))_j ) = ( V_s(D) )_{b(s) - \frac{j - a(s) + b(s)}{2} }$, and we note that $b(s) - \frac{j - a(s) + b(s)}{2} = - \frac{j-n(D)}{2}$. For each edge $\xi: s \rightarrow s'$ of $D$ the following square in the category of (ungraded) abelian groups is commutative, and the vertical arrows are isomorphisms:
$$
\xymatrix{
W_s(D) \ar[d]_{\alpha_s} \ar[r]^{\partial_\xi} & W_{s'}(D) \ar[d]_{\alpha_{s'}} \\
V_s(D) \ar[r]^{d_\xi} & V_{s'}(D)
}
$$
Here $d_\xi$ is the map of graded abelian groups induced by the edge $\xi$ in the classical Khovanov homology. For each $k\in\Z$ we define the isomorphism of abelian groups $\alpha_k : \bigoplus_{s \ : \ b(s) = k} W_s(D) \rightarrow \llbracket D \rrbracket^k $, where $\llbracket D \rrbracket^k$ is the component in homological degree $k$ of the Khovanov complex of $D$ before the shifts: $\llbracket D \rrbracket^k = \bigoplus_{s \ : \ b(s) = k} V_s(D)$. For any $j\in2\Z+n(D)$ and $k\in\Z$ we have $\alpha_k( (\bigoplus_{s \ : \ b(s) = k} W_s(D) )_j ) = \llbracket D \rrbracket^{k, -\frac{j-n(D)}{2} } $. For any $k\in\Z$ we have the map $\hat d ^k = \sum_{\xi \ :\ |\xi| = k } (-1)^\xi d_\xi : \llbracket D \rrbracket^k \rightarrow \llbracket D \rrbracket^{k+1}$, and we define $\partial^k := \sum_{\xi \ :\ |\xi| = k} (-1)^\xi \partial_\xi : \bigoplus_{s \ : \ b(s) = k} W_s(D) \rightarrow \bigoplus_{s' \ : \ b(s') = k + 1} W_{s'}(D)$. Hence for each integer $k$ we have the following commutative square in the category of abelian groups, with vertical arrows that are isomorphisms:
$$
\xymatrix{
\bigoplus_{s \ : \ b(s) = k } W_s(D) \ar[d]_{\alpha_k} \ar[r]^{\partial^k} & \bigoplus_{s' \ : \ b(s') = k + 1} W_{s'}(D) \ar[d]_{\alpha_{k+1}} \\
\llbracket D \rrbracket^k \ar[r]^{\hat d^k} & \llbracket D \rrbracket^{k+1}
}
$$
We note that for each $i\in\{0,1\}$ we have $\partial_i = \bigoplus_{k\in 2\Z+i + |s_A|} \partial^k$. Let $i\in\{0,1\}$ and $j\in2\Z$. Then
\beq
\ddot{\mathcal{H}}_{i,j}(L) & = & \mathcal{H}^F_{[i+w(D)], j +3w(D)} (D) \\
& = & \left( \frac{\Ker \partial_{[i+w(D)]}}{ \Im \partial_{[i+w(D) +1]} } \right)_{j+3w(D)} \\
& \cong & \bigoplus_{k \in 2\Z + i + w(D) + |s_A|} \left( \frac{\Ker \partial^k}{\Im \partial^{k-1}} \right)_{j+3w(D)} \\
& \cong & \bigoplus_{k \in 2\Z + i + w(D) + |s_A|} \left( \frac{\Ker \hat d^k}{\Im \hat d^{k-1}} \right)_{- \frac{j+3w(D) - n(D)}{2} } \\
& = & \bigoplus_{k \in 2\Z + i + w(D) + |s_A|} \left( \frac{\Ker \hat d^k}{\Im \hat d^{k-1}} \right)_{- \frac j 2 - n_+(D) + 2n_-(D) } \\
& = & \bigoplus_{k \in 2\Z + i + n_+(D) + |s_A|} \left( \frac{\Ker d^k}{\Im d^{k-1}} \right)_{- \frac j 2} \\
& = & \bigoplus_{k \in 2\Z + i + n_+(D) + |s_A|} \mathcal{H}^{k, -\frac j 2} (L)
\eeq
Therefore
\beq
\chi_A (\ddot{\mathcal{H}}_*(L)) & = & \sum_{i\in\{0,1\},j\in \Z} (-1)^i A^j \rk \ddot{\mathcal{H}}_{i,j} (L) \\
& = & \sum_{k\in\Z,j\in 2\Z} (-1)^{k + n_+(D) + |s_A|} A^j \rk \mathcal{H}^{k ,-\frac j 2 } (L) \\
& = & \sum_{k\in\Z,j\in\Z} (-1)^{n_+(D) + |s_A|} (-1)^k A^{-2j} \rk \mathcal{H}^{k,j}(L) \\
& = & (-1)^{n_+(D) +|s_A|} \chi_A ( \mathcal{H}^*(L) )(A^{-2})
\eeq
Hence
$$
\hat f_L = (-1)^{n_+(D) + |s_A|} \hat J_L (A^{-2})
$$
where $\hat f$ is Kauffman's version of the Jones polynomial and $\hat J$ is Khovanov's.\\
Let $i\in\{0,1\}$ and suppose that $N$, the number of components of $L$, satisfies $N\in 2\Z +i$. Then for each $k \in \Z$ and $j\in 2\Z +i +1 $ we have $\mathcal{H}^{k,j}(L) = 0$. Hence
\begin{itemize}
\item{if $N$ is odd, then $\hat J_L$ has only odd exponents and $\hat f_L$ has no exponents that are multiples of $4$;}
\item{if $N$ is even, then $\hat J_L$ has only even exponents and $\hat f_L$ has only exponents that are multiples of $4$.}
\end{itemize}
Hence
\beq
\hat f_L & = & \hat J_L (-A^{-2}) \\
& = & \left\{\begin{array}{cl}
- \hat J_L (A^{-2}) & \text{if } N \in 2\Z +1 \\
\hat J_L (A^{-2}) & \text{if } N \in 2\Z
\end{array}\right. \\
& = & \left\{\begin{array}{cl}
- \chi_A( \mathcal{H}^*(L) ) (A^{-2}) & \text{if } N \in 2\Z +1 \\
\chi_A ( \mathcal{H}^*(L ) ) (A^{-2}) & \text{if } N \in 2\Z
\end{array}\right. \\
\eeq
It follows that $n_+ (D) + |s_A| $ is congruent modulo $2$ to $N$.\\
Now suppose that $N$ and $n_+(D) + |s_A|$ are odd. For each $i\in\{0,1\}$ and $j\in 2\Z$ we have $\ddot{\mathcal{H}}_{i,j}(L) = \bigoplus_{k \in2\Z + i +1} \mathcal{H}^{k,-\frac j 2} (L)$. If $j\in 4\Z$, then $\ddot{\mathcal{H}}_{i,j}(L) = \bigoplus_{k \in2\Z + i +1} \mathcal{H}^{k,-\frac j 2} (L) = 0 = \bigoplus_{k \in2\Z + i } \mathcal{H}^{k,-\frac j 2} (L)$, because $\mathcal{H}^{k,-\frac j 2} (L) = 0$ for each $k$.
If $N$ and $n_+(D) + |s_A|$ are even, then for each $i\in\{0,1\}$ and $j\in 2\Z$ we have $\ddot{\mathcal{H}}_{i,j}(L) = \bigoplus_{k \in2\Z + i } \mathcal{H}^{k,-\frac j 2} (L)$. If $j\not\in 4\Z$, then $\ddot{\mathcal{H}}_{i,j}(L) = \bigoplus_{k \in2\Z + i } \mathcal{H}^{k,-\frac j 2} (L) = 0 = \bigoplus_{k \in2\Z + i +1} \mathcal{H}^{k,-\frac j 2} (L)$, because $\mathcal{H}^{k,-\frac j 2} (L) = 0$ for each $k$.
\end{proof}
\end{teo}
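As an illustration of the theorem, consider the unknot $\bigcirc$, with $N=1$. Its classical Khovanov homology is $\mathcal{H}^{0,\pm 1}(\bigcirc) = \Z$ and $0$ otherwise, so the formula for odd $N$ gives
$$
\ddot{\mathcal{H}}_{1,\mp 2}(\bigcirc) = \Z, \qquad \ddot{\mathcal{H}}_{i,j}(\bigcirc) = 0 \text{ otherwise},
$$
which is supported in degrees $j \in 4\Z+2$, as predicted, and has graded Euler characteristic $-A^2-A^{-2} = \hat f_{\bigcirc}$.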
\section{Examples}
$$
\begin{array}{cccccccccccc}
\mathcal{H}^*\left( \pic{1.1}{0.3}{hopfor.eps} \right) &&&&&&&&&& \\
\mathcal{H}^2 & \ldots & 0 & 0 & 0 & 0 & 0 & \Z & 0 & \Z & 0 &\ldots \\
\mathcal{H}^1 & \ldots & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \ldots \\
\mathcal{H}^0 & \ldots & 0 & \Z & 0 & \Z & 0 & 0 & 0 & 0 & 0 & \ldots \\
& \ldots & -1 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & \ldots
\end{array}
$$
$$
\begin{array}{cccccccccccc}
\mathcal{H}^F_*\left( \pic{1.1}{0.3}{hopfnonor.eps} \right) &&&&&&&&&& \\
\mathcal{H}^F_1 & \ldots & 0 & 0 & 0 & 0 & 0 & 0 & \ldots \\
\mathcal{H}^F_0 & \ldots & 0 & \Z & \Z & \Z & \Z & 0 & \ldots \\
& \ldots & -16 & -12 & -8 & -4 & 0 & 4 & \ldots
\end{array}
$$
$$
\begin{array}{ccccccccccccccc}
\mathcal{H}^*\left( \pic{0.7}{0.2}{trifp.eps} \right) \\
\mathcal{H}^3 & & \ldots & 0 & 0 & 0 & 0 & 0 & 0 & 0 & (\Z_2)^3 & 0 & \Z & 0 & \ldots \\
\mathcal{H}^2 & & \ldots & 0 & 0 & 0 & 0 & 0 & \Z & 0 & 0 & 0 & 0 & 0 & \ldots \\
\mathcal{H}^1 & & \ldots & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \ldots \\
\mathcal{H}^0 & & \ldots & 0 & \Z & 0 & \Z & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \ldots \\
\\
& & \ldots & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & \ldots
\end{array}
$$
$$
\begin{array}{ccccccccccc}
\mathcal{H}^F_*\left( \pic{0.7}{0.2}{trifp.eps} \right) \\
\mathcal{H}^F_1 & & \ldots & 0 & \Z & (\Z_2)^3 & 0 & 0 & 0 & 0 & \ldots \\
\mathcal{H}^F_0 & & \ldots & 0 & 0 & 0 & \Z & \Z & \Z & 0 & \ldots \\
& & \ldots & -13 & -9 & -5 & -1 & 3 & 7 & 11 & \ldots
\end{array}
$$
\subsection{Homology is stronger than the Kauffman bracket}
Let $\textit{Kh}(L) \in \Z[t,t^{-1},q,q^{-1}]$ be the graded Poincar\'e polynomial of the classical Khovanov homology of the oriented link $L$:
$$
\textit{Kh}(L) := \sum_{i,j\in\Z} t^i q^j \rk \mathcal{H}^{i,j}(L)
$$
Let $\textit{FKh}(L) \in \Z[t,t^{-1},A,A^{-1}]$ be the Poincar\'e polynomial of the homology of the framed link $L$:
$$
\textit{FKh}(L) := \sum_{i \in \{0,1\},j\in\Z} t^i A^j \rk \mathcal{H}^F_{i,j}(L)
$$
By Theorem \ref{teo1} we know how to obtain $\sum_{i\in\{0,1\}, j\in\Z} t^i A^j \rk \ddot{\mathcal{H}}_{i,j}(L)$ from $\textit{Kh}(L)$ by summing some coefficients. If $L$ is represented by the diagram $D$, we also know how to obtain $\mathcal{H}^F_*(D)$ from $\ddot{\mathcal{H}}_*(D)$ using the writhe number, and hence obtain $\textit{FKh}(D)$ from $\textit{Kh}(D)$. In particular, if $w(D)=0$, then $\mathcal{H}^F_*(D) = \ddot{\mathcal{H}}_*(D)$.
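Explicitly, the conversion given by the theorem is a monomial substitution: writing $\epsilon \in \{0,1\}$ for the class of $n_+(D)+|s_A|$ modulo $2$ (which, as shown above, coincides with the class of $N$), each monomial $t^k q^m$ of $\textit{Kh}(L)$ contributes
$$
t^k q^m \ \longmapsto \ t^{[k+\epsilon]} A^{-2m},
$$
where $[k+\epsilon] \in \{0,1\}$ denotes reduction modulo $2$; summing these contributions yields $\sum_{i\in\{0,1\}, j\in\Z} t^i A^j \rk \ddot{\mathcal{H}}_{i,j}(L)$.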
Let $D'_1$ be the classical diagram of the knot $5_1$ (Figure \ref{figure:5-1}, left) and let $D_1$ be the diagram of the knot $5_1$ obtained from $D'_1$ by adding $5$ positive curls (Figure \ref{figure:5-1}, right). Let $D'_2$ be the classical diagram of the knot $10_{132}$ (Figure \ref{figure:10-132}, left) and let $D_2$ be the diagram of $10_{132}$ obtained by adding $4$ positive curls (Figure \ref{figure:10-132}, right).
\begin{figure}[htbp]
\begin{center}
\subfigure[$D'_1$]{
$$
\includegraphics[scale=0.5]{5-1.eps}
$$
}\qquad
\subfigure[$D_1$]{
$$
\includegraphics[scale=0.5]{5-1f.eps}
$$
}
\end{center}
\caption{$5_1$ with two different framings}
\label{figure:5-1}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\subfigure[$D'_2$]{
$$
\includegraphics[scale=0.5]{10-132.eps}
$$
}\qquad
\subfigure[$D_2$]{
$$
\includegraphics[scale=0.5]{10-132f.eps}
$$
}
\end{center}
\caption{$10_{132}$ with two different framings}
\label{figure:10-132}
\end{figure}
The writhe numbers are $w(D'_1)=-5$ and $w(D_1)=0$, and in the same way $w(D'_2)=-4$ and $w(D_2)=0$.
For each diagram $D$, Kauffman's version of the Jones polynomial of $D$ is $f_D= (-A^3)^{-w(D)} \langle D \rangle$, therefore
\beq
\langle D_1 \rangle & = & f_{D_1} \\
& = & f_{D'_1} \\
& = & -A^{28} + A^{24} - A^{20} + A^{16} + A^8 \\
& = & f_{D'_2} \\
& = & f_{D_2} \\
& = & \langle D_2 \rangle
\eeq
\beq
\textit{Kh}(5_1) & = & q^{-5}+q^{-3}+t^{-5}q^{-15}+t^{-4}q^{-11}+t^{-3}q^{-11}+t^{-2}q^{-7} \\
\textit{Kh}(10_{132}) & = & q^{-3}+q^{-1}+t^{-7}q^{-15}+t^{-6}q^{-11}+t^{-5}q^{-11}+t^{-4}q^{-9} \\
& & +t^{-4}q^{-7}+t^{-3}q^{-9}+t^{-3}q^{-5}+2t^{-2}q^{-5}+t^{-1}q^{-1}
\eeq
\beq
\textit{FKh}(5_1) & = & A^{28} + A^{30} + t( A^6 + A^{10} + A^{14}) + A^{22} \\
\textit{FKh}(10_{132}) & = & A^2 + A^{10} + A^{18} + A^{22} + A^{30} \\
& & + t( A^2 + A^6 + 2A^{10} + A^{14} + A^{18} + A^{22} )
\eeq
Hence we have found two framed links with the same Kauffman bracket and different homology.
\section*{Acknowledgements}
The author would like to express his gratitude to Riccardo Benedetti for many useful discussions. He would also like to thank his friends for helping him with the details of this paper.